icefog72

158 models

IceWhiskeyRP-7b-8bpw-exl2

license:cc-by-nc-4.0
7,160
2

IceMoonshineRP-7b

license:cc-by-nc-4.0
93
20

IceAbsintheRP-7b

IceAbsintheRP-7b (Ice0.150-20.10-RP), mistral v0.2 base

> [!NOTE]
> The Alpaca format will generally work, but I recommend trying my SillyTavern settings preset and rules-lorebook for best results. See the How to run section below for details. The model has a context limit of 32k tokens. However, the quality of responses from any small-to-medium model begins to decline after 16k tokens, with more rapid degradation beyond 21k tokens. I recommend using 21k tokens as the maximum for optimal performance.

> [!WARNING]
> Exl2 quants:
>- 4.2bpw-exl2
>- 4.2bpw-v2-exl2 (lm_head output layer at 8 bpw + eaddario/imatrix-calibration dataset)
>- 6.5bpw-exl2
>- 8bpw-exl2

> [!WARNING]
> GGUF quants:
>- Casual-Autopsy imat GGUFs
>- Browse all
>- gguf-my-repo

For GGUF, please just grab KoboldCpp. Set GPU layers to 33 (depends on your VRAM and the model quant), context to 20k, and enable flash attention and the 4-bit KV cache (plus Low VRAM if you only have 6GB of VRAM), and you're good to go; a launch-command sketch appears at the end of this card.

Now grab the latest version of the rules and formatting for ST from here (use this to install ST if you haven't already):
1. Import these files into ST and select them. In Active World(s) for all chats, set the rules lorebook.
2. If you are using a Vectorization Source, set the rules to Vectorized.
3. Set Start Reply With manually if you want planning. If you don't, edit Prompt Content and the rules lorebook to remove everything about it.
4. What should a good working setup look like? Something like this: planning (thinking) with a few short bullet points about what the NPC should do.
5. What if the response is a mess? Look at the card's Advanced Definitions and move Main Prompt, Post-History Instructions, and Character's Note to Description. Don't forget properly formatted Examples of Dialogue for ST (some cards from web chat platforms are a mess). The smaller the model, the more demanding it is about clean prompt formatting.
6. Treat my rules as an example. Everyone has their own taste for how RP should look. For example, I think second-person narration is bad form, and it makes models impersonate the user more; as a result, cards written that way should be rewritten in third person if the rules are used unedited.
7. Why planning instead of just using reasoning? There are many reasons, but the main one is that pure reasoning tends to overthink, and it's less controllable and more error-prone.
8. Can this setup work with other models? Yes, if they are smarter than 7b and not overcooked (12b Nemo works fine for me).
9. The Role-Play Rules lorebook is big... You have 20k of context. Again, feel free to edit it.

> [!TIP]
> Get the latest version of the rules and ST settings presets, or ask questions, here, on my AI-related Discord server for feedback, questions, and other stuff.

I recommend using the `huggingface-hub` Python library. To download the `main` branch to a folder called `IceAbsintheRP-7b`:

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (the default location on Linux is `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed and lets you quickly clone the repo to multiple places on disk without triggering a download again.

The downside, and the reason why I don't list it as the default option, is that the files are then hidden away in a cache folder, and it's harder to know where your disk space is being used and to clear it up if/when you want to remove a downloaded model. The cache location can be changed with the `HF_HOME` environment variable and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: HF -> Hub Python Library -> Download files -> Download from the CLI.

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer` and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`. Windows Command Line users: you can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.

This model was merged using the Model Breadcrumbs merge method using H:\FModels\Ice0.130-16.06 as a base. The following models were included in the merge:
- G:\FModels\Ice0.148-19.10-RP
- F:\FModels\Ice0.144-15.10-RP
- F:\FModels\Ice0.143-15.10-RP
- F:\FModels\Ice0.147-17.10-RP

The following YAML configuration was used to produce this model:
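Back to the KoboldCpp recommendation above — a minimal launch sketch, assuming a source checkout of KoboldCpp and a hypothetical `Q4_K_M` GGUF filename; the flag names are from memory of recent KoboldCpp builds, so verify them against `python koboldcpp.py --help`:

```sh
# --gpulayers 33       : offload 33 layers to the GPU (lower it if VRAM-limited)
# --contextsize 20480  : ~20k tokens, per the recommendation above
# --quantkv 2          : 4-bit KV cache (requires flash attention)
# --usecublas lowvram  : only needed on ~6GB cards
python koboldcpp.py --model IceAbsintheRP-7b.Q4_K_M.gguf \
  --gpulayers 33 --contextsize 20480 --flashattention --quantkv 2 \
  --usecublas lowvram
```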

license:cc-by-nc-4.0
88
10

IceNalyvkaRP-7b

Nalyvka is a delightful gem from Eastern European tradition—a homemade liqueur that captures the essence of ripe fruits and the warmth of shared moments. It originates primarily in Ukraine and is also cherished in Poland.

> [!IMPORTANT]
> For ST settings and the rules-lorebook, look here.

> [!TIP]
> Get the latest version of the rules, or ask me questions, here, on my new AI-related Discord server for feedback, questions, and other stuff.

> [!WARNING]
> Exl2 quants:
>- 4.2bpw-exl2
>- 6.5bpw-exl2
>- 8bpw-exl2

I recommend using the `huggingface-hub` Python library to download the `main` branch to a folder called `IceNalyvkaRP-7b` (a command sketch appears at the end of this card); see the full download notes (symlinks, cache location, `hf_transfer`) under IceAbsintheRP-7b above.

Merge Method: This is a merge of pre-trained language models created using mergekit, merged using the SLERP merge method. The following models were included in the merge:
- Ice0.69-25.01-RP
- Ice0.68-25.01-RP

The following YAML configuration was used to produce this model:

On top of a 7b MistralForCausalLM, I guess? (Ice0.68-25.01-RP is less coherent.)

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 23.11 |
| IFEval (0-Shot)     | 54.98 |
| BBH (3-Shot)        | 32.49 |
| MATH Lvl 5 (4-Shot) |  6.04 |
| GPQA (0-shot)       |  7.72 |
| MuSR (0-shot)       | 15.27 |
| MMLU-PRO (5-shot)   | 22.18 |
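A sketch of the `huggingface-cli` invocation the download note above describes (the card's original command snippet is not reproduced in this listing):

```sh
pip3 install huggingface-hub

huggingface-cli download icefog72/IceNalyvkaRP-7b \
  --local-dir IceNalyvkaRP-7b --local-dir-use-symlinks False
```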

license:cc-by-nc-4.0
56
9

Ice0.140-14.10-RP

license:apache-2.0
43
1

IceMoonshineRP-7b-Q8_0-GGUF

icefog72/IceMoonshineRP-7b-Q8_0-GGUF

This model was converted to GGUF format from `icefog72/IceMoonshineRP-7b` using llama.cpp via the ggml.ai GGUF-my-repo space. Refer to the original model card for more details on the model.

>- Link to my new AI-related Discord server for feedback, questions, and other stuff.
>- ko-fi, to buy sweets for my cat :3

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
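A sketch of the brew route described above; `--hf-repo`/`--hf-file` are standard llama.cpp flags, but the exact GGUF filename inside the repo is an assumption — check the repo's file list:

```sh
brew install llama.cpp

# One-shot generation:
llama-cli --hf-repo icefog72/IceMoonshineRP-7b-Q8_0-GGUF \
  --hf-file icemoonshinerp-7b-q8_0.gguf -p "Once upon a time"

# Or serve it over HTTP:
llama-server --hf-repo icefog72/IceMoonshineRP-7b-Q8_0-GGUF \
  --hf-file icemoonshinerp-7b-q8_0.gguf -c 2048
```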

llama-cpp
41
2

Ice0.144-15.10-RP

- Developed by: icefog72
- License: apache-2.0
- Finetuned from model: NeuralNovel/Gecko-7B-v0.1

This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.

license:apache-2.0
37
0

Ice0.148-19.10-RP

- Developed by: icefog72
- License: apache-2.0
- Finetuned from model: icefog72/IceMoonshineRP-7b

This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.

license:apache-2.0
35
1

Ice0.143-15.10-RP

- Developed by: icefog72
- License: apache-2.0
- Finetuned from model: icefog72/IceMoonshineRP-7b

This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.

license:apache-2.0
33
0

Ice0.141-14.10-RP

license:apache-2.0
28
1

Ice0.147-17.10-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Breadcrumbs merge method using H:\FModels\Ice0.130-16.06 as a base. The following models were included in the merge: F:\FModels\Ice0.143-15.10-RP F:\FModels\Ice0.144-15.10-RP The following YAML configuration was used to produce this model:
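The actual configuration is not reproduced in this listing; purely as a generic illustration, a mergekit config for the Model Breadcrumbs method named above has roughly this shape (the weight, density, and gamma values are placeholders):

```yaml
merge_method: breadcrumbs
base_model: H:\FModels\Ice0.130-16.06
models:
  - model: F:\FModels\Ice0.143-15.10-RP
    parameters:
      weight: 0.5
      density: 0.9   # fraction of parameters kept
      gamma: 0.01    # fraction of the largest deltas dropped
  - model: F:\FModels\Ice0.144-15.10-RP
    parameters:
      weight: 0.5
      density: 0.9
      gamma: 0.01
dtype: bfloat16
```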

license:cc-by-nc-4.0
27
1

IceMoonshineRP-7b-Q5_K_M-imat-GGUF

icefog72/IceMoonshineRP-7b-Q5_K_M-imat-GGUF

This model was converted to GGUF format from `icefog72/IceMoonshineRP-7b` using llama.cpp via the ggml.ai GGUF-my-repo space. Refer to the original model card for more details on the model.

>- Link to my new AI-related Discord server for feedback, questions, and other stuff.
>- ko-fi, to buy sweets for my cat :3

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
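Spelling out the build step the card refers to — a minimal sketch following the upstream GGUF-my-repo instructions:

```sh
# Step 1 (elided in the excerpt above) is cloning the repo:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Step 2: build with curl support; add LLAMA_CUDA=1 for Nvidia GPUs on Linux.
LLAMA_CURL=1 make
```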

llama-cpp
25
0

Ice0.146-17.10-RP

license:cc-by-nc-4.0
24
1

IceAbsintheRP-7b-4.2bpw-exl2

18
0

Ice0.143-15.10-4.2bpw

- Developed by: icefog72
- License: apache-2.0
- Finetuned from model: icefog72/IceMoonshineRP-7b

This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.

license:apache-2.0
17
0

Ice0.130-16.06-RP-Q8_0-GGUF

llama-cpp
15
0

Ice0.146-17.10-4.2bpw

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Breadcrumbs merge method using H:\FModels\Mistral-7B-v0.2 as a base. The following models were included in the merge: F:\FModels\Ice0.143-15.10-RP G:\FModels\Ice0.128-15.06-RP F:\FModels\Ice0.144-15.10-RP H:\FModels\Ice0.130-16.06 The following YAML configuration was used to produce this model:

14
0

IceDrunkenCherryRP-7b

> [!TIP]
> Get the latest version of the rules, look at the model's chat response examples, or ask me questions here, on my new AI-related Discord server for feedback, questions, and other stuff.

> [!WARNING]
> Exl2 quants:
>- 4.2bpw-exl2
>- 6.5bpw-exl2
>- 8bpw-exl2

I recommend using the `huggingface-hub` Python library to download the `main` branch to a folder called `IceDrunkenCherryRP-7b`; see the full download notes (symlinks, cache location, `hf_transfer`) under IceAbsintheRP-7b above. A sketch of the `hf_transfer` acceleration appears at the end of this card.

This is a merge of pre-trained language models created using mergekit, merged using the SLERP merge method. The following models were included in the merge:
- icefog72/Ice0.29-06.11-RP
- icefog72/Ice0.37-18.11-RP

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.77 |
| IFEval (0-Shot)     | 47.63 |
| BBH (3-Shot)        | 31.51 |
| MATH Lvl 5 (4-Shot) |  6.27 |
| GPQA (0-shot)       |  7.61 |
| MuSR (0-shot)       | 14.27 |
| MMLU-PRO (5-shot)   | 23.32 |
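What the `hf_transfer` acceleration mentioned above amounts to, as a sketch:

```sh
pip3 install hf_transfer

# Linux/macOS; on Windows cmd, run `set HF_HUB_ENABLE_HF_TRANSFER=1` first instead.
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download icefog72/IceDrunkenCherryRP-7b \
  --local-dir IceDrunkenCherryRP-7b --local-dir-use-symlinks False
```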

license:cc-by-nc-4.0
12
9

IceAbsintheRP-7b-6.5bpw-exl2

11
0

Ice0.144-15.10-4.2bpw

- Developed by: icefog72
- License: apache-2.0
- Finetuned from model: NeuralNovel/Gecko-7B-v0.1

This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.

license:apache-2.0
10
0

IceTea21EnergyDrinkRPV13-dpo240-gguf

license:cc-by-nc-4.0
9
1

Ice0.147-17.10-4.2bpw

9
0

IceAbsintheRP-7b-8bpw-exl2

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Breadcrumbs merge method using H:\FModels\Ice0.130-16.06 as a base. The following models were included in the merge: G:\FModels\Ice0.148-19.10-RP F:\FModels\Ice0.144-15.10-RP F:\FModels\Ice0.143-15.10-RP F:\FModels\Ice0.147-17.10-RP The following YAML configuration was used to produce this model:

9
0

IceAbsintheLoucheRP-7b

license:cc-by-nc-4.0
8
1

IceAbsintheLoucheRP-7b-4.2bpw-exl2

8
0

Ice0.152-20.10-RP-4.2bpw-exl2

8
0

Ice0.80-03.02-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\Ice0.77-02.02-RP G:\FModels\Ice0.79 The following YAML configuration was used to produce this model:
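The excerpt above ends where the YAML would begin. As a generic illustration only — not this model's actual config — a mergekit SLERP configuration typically looks like this (the `t` interpolation values are placeholders):

```yaml
merge_method: slerp
base_model: G:\FModels\Ice0.77-02.02-RP
slices:
  - sources:
      - model: G:\FModels\Ice0.77-02.02-RP
        layer_range: [0, 32]
      - model: G:\FModels\Ice0.79
        layer_range: [0, 32]
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]   # per-layer-group interpolation
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5                     # everything else
dtype: bfloat16
```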

8
0

Ice0.149-19.10-4.2bpw

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Breadcrumbs merge method using H:\FModels\Ice0.130-16.06 as a base. The following models were included in the merge: F:\FModels\Ice0.147-17.10-RP G:\FModels\Ice0.148-19.10-RP The following YAML configuration was used to produce this model:

8
0

juanako-7b-UNA-06.11-orpo

license:apache-2.0
7
0

Ice0.88-07.02-RP-dpo-merged_16bit

license:apache-2.0
6
0

IceMedovukhaRP-7b

license:cc-by-nc-4.0
5
3

Ice0.37-18.11-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- E:\FModels\Ice0.36-18.11-RP
- E:\FModels\Ice0.35-18.11-RP

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.91 |
| IFEval (0-Shot)     | 49.72 |
| BBH (3-Shot)        | 31.04 |
| MATH Lvl 5 (4-Shot) |  6.42 |
| GPQA (0-shot)       |  8.28 |
| MuSR (0-shot)       | 12.21 |
| MMLU-PRO (5-shot)   | 23.81 |

license:cc-by-nc-4.0
5
1

Ice0.29-06.11-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: D:\FModels\Ice0.27-06.11-RP E:\FModels\Ice0.28-06.11-RP The following YAML configuration was used to produce this model:

4
1

Ice0.31-08.11-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: D:\FModels\Ice0.27-06.11-RP E:\FModels\Ice0.30-08.11-RP The following YAML configuration was used to produce this model:

license:cc-by-nc-4.0
4
1

IceMoonshineRP-7b-4.2bpw-exl2

4
0

WestIceLemonTeaRP-32k-7b

license:cc-by-nc-4.0
3
16

IceCoffeeRP-7b

Merge Details: This is a merge of pre-trained language models created using mergekit. Prompt template: Alpaca, maybe ChatML. This model was merged using the SLERP merge method. The following models were included in the merge:
- G:\FModels\IceCoffeeTest10
- G:\FModels\IceCoffeeTest5

The following YAML configuration was used to produce this model:

I recommend using the `huggingface-hub` Python library to download the `main` branch to a folder called `IceCoffeeRP-7b`; see the full download notes (symlinks, cache location, `hf_transfer`) under IceAbsintheRP-7b above.

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 73.19 |
| AI2 Reasoning Challenge (25-Shot) | 71.16 |
| HellaSwag (10-Shot)               | 87.74 |
| MMLU (5-Shot)                     | 63.54 |
| TruthfulQA (0-shot)               | 70.03 |
| Winogrande (5-shot)               | 82.48 |
| GSM8k (5-shot)                    | 64.22 |

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.24 |
| IFEval (0-Shot)     | 49.59 |
| BBH (3-Shot)        | 29.40 |
| MATH Lvl 5 (4-Shot) |  4.83 |
| GPQA (0-shot)       |  4.70 |
| MuSR (0-shot)       | 11.00 |
| MMLU-PRO (5-shot)   | 21.94 |

license:cc-by-nc-4.0
3
6

IceDrinkNameGoesHereV0RP-7b-Model_Stock

license:cc-by-nc-4.0
3
2

Ice0.27-06.11-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\Ice0.15-06.11-RP-orpo G:\FModels\IceSakeV12 The following YAML configuration was used to produce this model:

3
1

Ice0.41-22.11-RP

This is a merge of the LoRA from Ice0.40-20.11-RP with Mistral-7B-Instruct-v0.2. The following models were included in the merge:
- Ice0.40-lora
- Mistral-7B-Instruct-v0.2

license:cc-by-nc-4.0
3
1

Ice0.128-15.06-RP

3
1

Ice0.129-15.06-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\Ice0.125-29.05-RP F:\FModels\Ice0.128-15.06-RP The following YAML configuration was used to produce this model:

3
1

IceNalyvkaRP-7b-4.2bpw-exl2

license:cc-by-nc-4.0
3
0

Ice0.113-08.05-RP-4.2bpw

3
0

Ice0.123-28.05-RP-IQ4_XS-GGUF

icefog72/Ice0.123-28.05-RP-IQ4_XS-GGUF

This model was converted to GGUF format from `icefog72/Ice0.123-28.05-RP` using llama.cpp via the ggml.ai GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
3
0

IceMoonshineRP-7b-8bpw-exl2

3
0

IceLemonTeaRP-32k-7b

This is a merge of pre-trained language models created using mergekit. I would suggest playing with `rope_theta` in config.json, setting it between 40000 and 100000 (see the config fragment at the end of this card). A cooked merge from fresh ingredients to fix icefog72/IceTeaRP-7b's repetition problems.

Exl2 quants:
- IceLemonTeaRP-32k-7b-4.0bpw-h6-exl2
- IceLemonTeaRP-32k-7b-4.2bpw-h6-exl2
- IceLemonTeaRP-32k-7b-6.5bpw-h6-exl2
- IceLemonTeaRP-32k-7b-8.0bpw-h6-exl2

Thanks for:
- mradermacher/IceTeaRP-7b-GGUF
- Natkituwu/IceLemonTeaRP-32k-7b-7.1bpw-exl2

This model was merged using the SLERP merge method. The following models were included in the merge:
- icefog72/Kunokukulemonchini-32k-7b
- grimjim/kukulemon-32K-7B
- Nitral-AI/Kunocchini-7b-128k-test
- icefog72/MixtralAICyber3.m1-BigL
- LeroyDyer/MixtralAICyber3.m1
- Undi95/BigL-7B

I recommend using the `huggingface-hub` Python library to download the `main` branch to a folder called `IceLemonTeaRP-32k-7b`; see the full download notes (symlinks, cache location, `hf_transfer`) under IceAbsintheRP-7b above.

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 70.43 |
| AI2 Reasoning Challenge (25-Shot) | 67.66 |
| HellaSwag (10-Shot)               | 86.53 |
| MMLU (5-Shot)                     | 64.51 |
| TruthfulQA (0-shot)               | 61.76 |
| Winogrande (5-shot)               | 79.72 |
| GSM8k (5-shot)                    | 62.40 |

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.27 |
| IFEval (0-Shot)     | 52.12 |
| BBH (3-Shot)        | 30.14 |
| MATH Lvl 5 (4-Shot) |  4.83 |
| GPQA (0-shot)       |  5.37 |
| MuSR (0-shot)       | 12.20 |
| MMLU-PRO (5-shot)   | 22.97 |
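The `rope_theta` suggestion at the top of this card means editing the model's `config.json`. A minimal fragment, assuming the low end of the suggested 40000-100000 range (all other fields stay unchanged):

```json
{
  "max_position_embeddings": 32768,
  "rope_theta": 40000.0
}
```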

license:cc-by-nc-4.0
2
25

IceCocoaRP-7b

This is a merge of pre-trained language models created using mergekit. The rules-lorebook and settings I'm using can be found here.

This model was merged using the TIES merge method using NeuralBeagleJaskier as a base. The following models were included in the merge:
- NeuralBeagleJaskier
- IceBlendedCoffeeRP-7b (slerp, bfloat16)
  - IceCoffeeRP-7b
  - IceBlendedLatteRP-7b (base)

The following YAML configuration was used to produce this model:

I recommend using the `huggingface-hub` Python library to download the `main` branch to a folder called `IceCocoaRP-7b`; see the full download notes (symlinks, cache location, `hf_transfer`) under IceAbsintheRP-7b above.

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.87 |
| IFEval (0-Shot)     | 49.62 |
| BBH (3-Shot)        | 29.64 |
| MATH Lvl 5 (4-Shot) |  5.44 |
| GPQA (0-shot)       |  6.04 |
| MuSR (0-shot)       | 11.17 |
| MMLU-PRO (5-shot)   | 23.32 |

license:cc-by-nc-4.0
2
3

IceTea21EnergyDrinkRPV13-DPOv3

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\IceTea21EnergyDrinkRPV13-dpo240 H:\FModels\IceTea21EnergyDrinkRPV13-DPOv2 The following YAML configuration was used to produce this model:

2
2

Ice0.51-16.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: D:\FModels\IceDrunkenCherryRP-7b-orpo-merged2 The following YAML configuration was used to produce this model:

2
1

Ice0.65-25.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\Ice0.40-20.11-RP G:\FModels\Ice0.64.1-24.01-RP The following YAML configuration was used to produce this model:

2
1

Ice0.67-25.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: F:\FModels\daybreak-kunoichi-2dpo-7b H:\FModels\Ice0.66-25.01-RP The following YAML configuration was used to produce this model:

2
1

Ice0.84-04.02-RP

license:cc-by-nc-4.0
2
1

Ice0.88-07.02-RP

2
1

Ice0.125-29.05-RP

2
1

MInstDolphin29mathM-7B-v0.3

2
0

Ice0.15-06.11-RP-orpo

license:apache-2.0
2
0

Ice0.54-17.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method.

2
0

IceNalyvkaRP-7b-6.5bpw-exl2

license:cc-by-nc-4.0
2
0

Ice0.76-02.02-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\Ice0.75-02.02-RP F:\FModels\Ice0.73-01.02-RP The following YAML configuration was used to produce this model:

2
0

Ice0.88-07.02-RP-dpo-merged_16bit-3

license:apache-2.0
2
0

Ice0.123-28.05-RP-Q4_K_S-GGUF

icefog72/Ice0.123-28.05-RP-Q4_K_S-GGUF

This model was converted to GGUF format from `icefog72/Ice0.123-28.05-RP` using llama.cpp via the ggml.ai GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
2
0

voice-xtts-ft-mona-0.1

2
0

IceSakeRP-7b

This is a merge of pre-trained language models created using mergekit. The model should handle a 25-32k context window. The rules-lorebook and settings I'm using can be found here (the 'By model' folder).

Exl2 Quants
>- 4.2bpw-exl2
>- 6.5bpw-exl2
>- 8bpw-exl2

This model was merged using the SLERP merge method. All before the last one (bfloat16):

IceSakeV111
- IceCocoaRP-7b
- IceSakeV8RP-7b (base model)

IceSakeV112 (base model)
- IceSakeV6RP-7b
- IceSakeV0RP-7b (base model)
- IceCocoaRP-7b (base model)
- IceKunoichiRP-7b
- KunoichiVerse-7B (base model)
- daybreak-kunoichi-2dpo-7b

I recommend using the `huggingface-hub` Python library to download the `main` branch to a folder called `IceSakeRP-7b`; see the full download notes (symlinks, cache location, `hf_transfer`) under IceAbsintheRP-7b above.

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.44 |
| IFEval (0-Shot)     | 52.13 |
| BBH (3-Shot)        | 31.65 |
| MATH Lvl 5 (4-Shot) |  5.82 |
| GPQA (0-shot)       |  4.70 |
| MuSR (0-shot)       | 10.23 |
| MMLU-PRO (5-shot)   | 24.13 |

license:cc-by-nc-4.0
1
15

IceTeaRP-7b

license:cc-by-nc-4.0
1
11

IceWhiskeyRP-7b

license:cc-by-nc-4.0
1
8

Kunokukulemonchini-7b-4.1bpw-exl2

license:cc-by-nc-4.0
1
2

IceLemonTeaRP-32k-7b-8.0bpw-h6-exl2

license:cc-by-nc-4.0
1
2

IceMartiniV1RP-7b

license:cc-by-nc-4.0
1
2

Ice0.101-20.03-RP-GRPO-1

license:apache-2.0
1
2

IceWhiskeyRP 7b 6.5bpw Exl2

The rules-lorebook and settings I'm using can be found here. I recommend using the `huggingface-hub` Python library to download the `main` branch to a folder called `IceWhiskeyRP-7b-6.5bpw-exl2`; see the full download notes (symlinks, cache location, `hf_transfer`) under IceAbsintheRP-7b above. A sketch of relocating the cache follows below.
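A sketch of relocating the Hugging Face cache, per the download notes referenced above; the target directory is a placeholder:

```sh
# Persistent (Linux/macOS): move the whole HF home, cache included.
export HF_HOME=/mnt/bigdisk/huggingface

# Or per command:
huggingface-cli download icefog72/IceWhiskeyRP-7b-6.5bpw-exl2 \
  --cache-dir /mnt/bigdisk/hf-cache
```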

license:cc-by-nc-4.0
1
2

IceSakeV8RP-7b

This is a merge of pre-trained language models created using mergekit.

> This is a model only for merges!
>
> Final model: IceSakeRP-7b

This model was merged using the SLERP merge method. The following models were included in the merge:
- IceLemonTea-IceCoffeRP-7b
- IceSakeV7RP-7b
- IceLatteRP-7b
- IceSakeV6RP-7b

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.64 |
| IFEval (0-Shot)     | 60.86 |
| BBH (3-Shot)        | 28.97 |
| MATH Lvl 5 (4-Shot) |  5.66 |
| GPQA (0-shot)       |  3.47 |
| MuSR (0-shot)       |  8.54 |
| MMLU-PRO (5-shot)   | 22.34 |

license:cc-by-nc-4.0
1
1

IceSakeRPTrainingTestV1-7b

license:cc-by-nc-4.0
1
1

IceEspressoRPv2-7b

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- G:\FModels\IceTea21EnergyDrinkRPV13-DPOv4-bin
- G:\FModels\IceEspressoRPv1-7b-dpo-bin

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.25 |
| IFEval (0-Shot)     | 49.77 |
| BBH (3-Shot)        | 31.30 |
| MATH Lvl 5 (4-Shot) |  5.51 |
| GPQA (0-shot)       |  5.26 |
| MuSR (0-shot)       | 12.77 |
| MMLU-PRO (5-shot)   | 22.90 |

1
1

IceDrinkNameNotFoundRP-7b-Model_Stock

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using icefog72\IceTea21EnergyDrinkRPV13-DPOv3 as a base. The following models were included in the merge:
- icefog72\IceDrinkNameGoesHereRP-7b-ModelStock
- icefog72\IceSakeV12-DPOv2

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.32 |
| IFEval (0-Shot)     | 51.30 |
| BBH (3-Shot)        | 30.67 |
| MATH Lvl 5 (4-Shot) |  5.66 |
| GPQA (0-shot)       |  3.69 |
| MuSR (0-shot)       | 13.65 |
| MMLU-PRO (5-shot)   | 22.94 |

license:cc-by-nc-4.0
1
1

Ice0.7-29.09-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- F:\FModels\Ice0.5-28.09-RP
- E:\FModels\Ice0.6-29.09-RP

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.47 |
| IFEval (0-Shot)     | 51.76 |
| BBH (3-Shot)        | 30.73 |
| MATH Lvl 5 (4-Shot) |  6.19 |
| GPQA (0-shot)       |  5.03 |
| MuSR (0-shot)       | 11.51 |
| MMLU-PRO (5-shot)   | 23.63 |

1
1

Ice0.16-02.10-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- G:\FModels\Ice0.15-02.10-RP
- D:\MyMerge\Result\Ice0.15-02.10-RP-02.10-DPO

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.95 |
| IFEval (0-Shot)     | 50.69 |
| BBH (3-Shot)        | 29.58 |
| MATH Lvl 5 (4-Shot) |  5.14 |
| GPQA (0-shot)       |  3.91 |
| MuSR (0-shot)       | 13.41 |
| MMLU-PRO (5-shot)   | 22.97 |

license:cc-by-nc-4.0
1
1

Ice0.17-03.10-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using G:\FModels\IceSakeV8RP-7b as a base. The following models were included in the merge:
- icefog72/Ice0.15-02.10-RP
- icefog72/IceTea21EnergyDrinkRPV13-DPOv3
- icefog72/Ice0.15-02.10-RP-02.10-DPO

The following YAML configuration was used to produce this model (a generic sketch follows below):

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.30 |
| IFEval (0-Shot)     | 51.24 |
| BBH (3-Shot)        | 30.38 |
| MATH Lvl 5 (4-Shot) |  5.44 |
| GPQA (0-shot)       |  4.25 |
| MuSR (0-shot)       | 13.34 |
| MMLU-PRO (5-shot)   | 23.17 |
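As a generic sketch of the Model Stock merge named above (the actual YAML is omitted from this listing, and `dtype` is a guess):

```yaml
merge_method: model_stock
base_model: G:\FModels\IceSakeV8RP-7b
models:
  - model: icefog72/Ice0.15-02.10-RP
  - model: icefog72/IceTea21EnergyDrinkRPV13-DPOv3
  - model: icefog72/Ice0.15-02.10-RP-02.10-DPO
dtype: bfloat16
```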

license:cc-by-nc-4.0
1
1

Ice0.32-10.11-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\Ice0.15-06.11-RP-orpo G:\FModels\Ice0.31-08.11-RP The following YAML configuration was used to produce this model:

license:cc-by-nc-4.0
1
1

Ice0.50-16.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: D:\FModels\IceDrunkenCherryRP-7b-orpo-merged The following YAML configuration was used to produce this model:

1
1

Ice0.50.1-16.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: D:\FModels\IceDrunkenCherryRP-7b-orpo-merged H:\FModels\Ice0.40-20.11-RP The following YAML configuration was used to produce this model:

1
1

Ice0.51.1-16.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: D:\FModels\IceDrunkenCherryRP-7b-orpo-merged2 H:\FModels\Ice0.40-20.11-RP The following YAML configuration was used to produce this model:

1
1

Ice0.52-16.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: E:\FModels\IceDrunkenCherryRP-7b-orpo-merged3 The following YAML configuration was used to produce this model:

1
1

Ice0.55-17.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: E:\FModels\Ice0.54-17.01-RP H:\FModels\Ice0.53-16.01-RP The following YAML configuration was used to produce this model:

1
1

Ice0.57-17.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\Ice0.56-17.01-RP H:\FModels\Ice0.55-17.01-RP The following YAML configuration was used to produce this model:

1
1

Ice0.61-18.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\Ice0.55-17.01-RP H:\FModels\Ice0.58-18.01-RP The following YAML configuration was used to produce this model:

1
1

Ice0.60.1-18.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the passthrough merge method using G:\FModels\Ice0.60-18.01-RP + H:\FModels\Ice0.60-18.01-RP-lora as a base. The following YAML configuration was used to produce this model:
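mergekit attaches a LoRA to a model with the `+` syntax, which is presumably what the "model + lora as a base" wording above refers to. A generic sketch, not the actual config:

```yaml
merge_method: passthrough
models:
  - model: G:\FModels\Ice0.60-18.01-RP+H:\FModels\Ice0.60-18.01-RP-lora
dtype: bfloat16
```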

1
1

Ice0.62-18.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\Ice0.60-18.01-RP E:\FModels\Ice0.60.1 The following YAML configuration was used to produce this model:

1
1

Ice0.62.1-24.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\Ice0.60-18.01-RP H:\FModels\Ice0.61.1-24.01-RP The following YAML configuration was used to produce this model:

1
1

Ice0.64-24.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\RolePlayLake-7B G:\FModels\Ice0.60-18.01-RP The following YAML configuration was used to produce this model:

1
1

Ice0.64.1-24.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\RolePlayLake-7B G:\FModels\Ice0.60-18.01-RP The following YAML configuration was used to produce this model:

1
1

Ice0.68-25.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\Ice0.60-18.01-RP H:\FModels\Ice0.67-25.01-RP The following YAML configuration was used to produce this model:

license:cc-by-nc-4.0
1
1

Ice0.69-25.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\Ice0.60-18.01-RP H:\FModels\Ice0.66-25.01-RP The following YAML configuration was used to produce this model:

license:cc-by-nc-4.0
1
1

Ice0.73-01.02-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\Ice0.72-01.02-RP E:\FModels\Ice0.70.1 The following YAML configuration was used to produce this model:

1
1

Ice0.77-02.02-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: F:\FModels\Ice0.73-01.02-RP H:\FModels\bagel-dpo-7b-v0.5 The following YAML configuration was used to produce this model:

license:cc-by-nc-4.0
1
1

Ice0.78-02.02-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\Ice0.77-02.02-RP H:\FModels\Einstein-v6-7B The following YAML configuration was used to produce this model:

1
1

Ice0.83-04.02-RP

license:cc-by-nc-4.0
1
1

Ice0.88-07.02-RP-dpo-merged_16bit-4

license:apache-2.0
1
1

Ice0.107-04.05-RP-ORPO-v1

1
1

Ice0.107-04.05-RP-ORPO-v2

1
1

Ice0.123-28.05-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Breadcrumbs merge method using H:\FModels\Mistral-7B-v0.2 as a base. The following models were included in the merge: G:\FModels\Ice0.115-10.05-RP H:\FModels\Ice0.80-03.02-RP F:\FModels\Ice0.122-28.05-RP H:\FModels\Ice0.104-13.04-RP The following YAML configuration was used to produce this model:

1
1

WestIceLemonTeaRP-32k-7b-6.5bpw-exl2

license:cc-by-nc-4.0
1
0

IceLatteRP-7b-4.2bpw-exl2

license:cc-by-nc-4.0
1
0

IceDrinkByFrankensteinV3RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using H:\FModels\IceDrinkMadeByFrankensteinRP as a base. The following models were included in the merge: E:\FModels\IceTea21EnergyDrinkRPV13-DPOv3.5 F:\FModels\IceSomeDrinkNameHereRP-7b-Della The following YAML configuration was used to produce this model:

license:cc-by-nc-4.0
1
0

GutenLaserPi-06.11-orpo

license:apache-2.0
1
0

Ice0.34b-14.11-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- E:\FModels\Ice0.33-13.11-RP
- E:\FModels\Ice0.32-10.11-RP

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.68 |
| IFEval (0-Shot)     | 47.62 |
| BBH (3-Shot)        | 30.81 |
| MATH Lvl 5 (4-Shot) |  6.50 |
| GPQA (0-shot)       |  7.94 |
| MuSR (0-shot)       | 13.62 |
| MMLU-PRO (5-shot)   | 23.61 |

license:cc-by-nc-4.0
1
0

Ice0.70.1-01.02-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the passthrough merge method using H:\FModels\bagel-dpo-7b-v0.5 + D:\FModels\Ice0.70-25.01-RP-lora as a base. The following YAML configuration was used to produce this model:

1
0

Ice0.74-02.02-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the passthrough merge method using F:\FModels\Ice0.73-01.02-RP + E:\FModels\Ice0.40-lora as a base. The following YAML configuration was used to produce this model:

1
0

Ice0.92-19.03-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the Passthrough merge method using E:\FModels\Ice0.91-14.02-RP + D:\FModels\Ice0.70-25.01-RP-lora as a base. The following YAML configuration was used to produce this model:

1
0

IceMedovukhaRP-7b-8bpw-exl2

license:cc-by-nc-4.0
1
0

GeneralInfoToStoreNotModel

license:cc-by-nc-4.0
0
13

IceLatteRP-7b

license:cc-by-nc-4.0
0
6

Kunokukulemonchini-7b

license:cc-by-nc-4.0
0
5

IceLemonMedovukhaRP-7b

license:cc-by-nc-4.0
0
3

WizardIceLemonTeaRP-32k

license:cc-by-nc-4.0
0
2

WestIceLemonTeaRP-32k-7b-8bpw-exl2

license:cc-by-nc-4.0
0
2

IceSakeRP-7b-8bpw-exl2

license:cc-by-nc-4.0
0
2

Ice0.107-22.04-RP

0
2

IceTeaRP-7b-4.2bpw-exl2

license:cc-by-nc-4.0
0
1

IceTeaRP-7b-8.0bpw-exl2

license:cc-by-nc-4.0
0
1

IceLemonTeaRP-32k-7b-6.5bpw-h6-exl2

license:cc-by-nc-4.0
0
1

IceLatteRP-7b-8bpw-exl2

license:cc-by-nc-4.0
0
1

IceCaffeLatteRP-7b

license:cc-by-nc-4.0
0
1

IceCocoaRP-7b-8bpw-exl2

license:cc-by-nc-4.0
0
1

IceTea21EnergyDrinkRPV13

license:cc-by-nc-4.0
0
1

IceTea21EnergyDrinkRPV13-dpo240

0
1

IceEspressoRPv1-7b

0
1

IceDrunkCherryV1RP-7b

license:cc-by-nc-4.0
0
1

IceWhiskeyRP-7b-4.2bpw-exl2

license:cc-by-nc-4.0
0
1

Ice0.34n-14.11-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge:
- E:\FModels\Ice0.33-13.11-RP
- E:\FModels\Ice0.32-10.11-RP

The following YAML configuration was used to produce this model:

Open LLM Leaderboard Evaluation Results
Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.83 |
| IFEval (0-Shot)     | 47.87 |
| BBH (3-Shot)        | 31.21 |
| MATH Lvl 5 (4-Shot) |  6.95 |
| GPQA (0-shot)       |  8.50 |
| MuSR (0-shot)       | 12.84 |
| MMLU-PRO (5-shot)   | 23.60 |

license:cc-by-nc-4.0
0
1

Ice0.52.1-16.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\Ice0.40-20.11-RP E:\FModels\IceDrunkenCherryRP-7b-orpo-merged3 The following YAML configuration was used to produce this model:

0
1

Ice0.53-16.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: E:\FModels\Ice0.52.1-16.01-RP D:\FModels\Ice0.50.1-16.01-RP The following YAML configuration was used to produce this model:

0
1

Ice0.56-17.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: E:\FModels\Ice0.54-17.01-RP H:\FModels\Ice0.7-29.09-RP The following YAML configuration was used to produce this model:

0
1

Ice0.60-18.01-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: H:\FModels\Ice0.59-18.01-RP H:\FModels\Ice0.58-18.01-RP The following YAML configuration was used to produce this model:

0
1

Ice0.81-03.02-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: G:\FModels\Ice0.77-02.02-RP G:\FModels\Ice0.80-03.02-RP The following YAML configuration was used to produce this model:

0
1

Ice0.82-04.02-RP

license:cc-by-nc-4.0
0
1

Ice0.85-04.02-RP

0
1

Ice0.86-04.02-RP

0
1

Ice0.87-07.02-RP

0
1

Ice0.90-14.02-RP

0
1

Ice0.91-14.02-RP

0
1

Ice0.98-19.03-RP

0
1

Ice0.99-20.03-RP

0
1

Ice0.80-10.04-RP-GRPO

0
1

Ice0.102-10.04-RP

0
1

Ice0.103-13.04-RP

0
1

Ice0.104-13.04-RP

0
1

Ice0.105-13.04-RP

0
1

Ice0.106-22.04-RP

0
1

Ice0.104-22.04-RP-ORPO

0
1

Ice0.108-04.05-RP

0
1

Ice0.109-04.05-RP

0
1

Ice0.110-04.05-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: F:\FModels\Ice0.108-04.05-RP F:\FModels\Ice0.109-04.05-RP The following YAML configuration was used to produce this model:

0
1

Ice0.111-08.05-RP

0
1

Ice0.112-08.05-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: E:\FModels\Ice0.107-22.04-RP F:\FModels\Ice0.111-08.05-RP The following YAML configuration was used to produce this model:

0
1

Ice0.113-08.05-RP

This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: E:\FModels\Ice0.107-04.05-RP-ORPO-v1 E:\FModels\Ice0.112-08.05-RP The following YAML configuration was used to produce this model:

0
1

Ice0.115-10.05-RP

0
1

Ice0.121-28.05-RP

0
1