EleutherAI

384 models

pythia-70m-deduped

language: en · tags: pytorch, causal-lm, pythia · license: apache-2.0 · dataset: EleutherAI/the_pile_deduplicated

license:apache-2.0
212,473
27

gpt-neo-125m

language: en · tags: text generation, pytorch, causal-lm · license: mit · dataset: EleutherAI/pile

license:mit
185,414
220

gpt-j-6b

GPT-J 6B is a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.

license:apache-2.0
121,339
1,513

pythia-70m

language: en · tags: pytorch, causal-lm, pythia · license: apache-2.0 · dataset: EleutherAI/pile · library: gpt-neox

license:apache-2.0
112,920
74

enformer-official-rough

Enformer model, introduced in the paper "Effective gene expression prediction from sequence by integrating long-range interactions" by Avsec et al. and first released in this repository. This repo contains the official weights released by DeepMind, ported to PyTorch. Enformer is a Transformer-based neural network architecture that greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the paper published in Nature for details. Refer to the README of enformer-pytorch regarding usage.

license:cc-by-4.0
65,401
18

pythia-14m-deduped

license:apache-2.0
62,131
28

deep_ignorance_pretraining_baseline_small

53,085
0

pythia-160m-deduped

license:apache-2.0
35,532
3

polyglot-ko-1.3b

**Model Description**

Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.

| Hyperparameter | Value |
|----------------------|----------------------------------|
| \(n_{parameters}\) | 1,331,810,304 |
| \(n_{layers}\) | 24 |
| \(d_{model}\) | 2,048 |
| \(d_{ff}\) | 8,192 |
| \(n_{heads}\) | 16 |
| \(d_{head}\) | 128 |
| \(n_{ctx}\) | 2,048 |
| \(n_{vocab}\) | 30,003 / 30,080 |
| Positional Encoding | Rotary Position Embedding (RoPE) |
| RoPE Dimensions | 64 |

The model consists of 24 transformer layers with a model dimension of 2,048 and a feedforward dimension of 8,192. The model dimension is split into 16 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30,003.

**Training data**

Polyglot-Ko-1.3B was trained on 863 GB of Korean language data (1.2 TB before processing), a large-scale dataset curated by TUNiB. The data collection process abided by South Korean laws. The dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.

| Source | Size (GB) | Link |
|-------------------------|-----------|--------------------------|
| Korean blog posts | 682.3 | - |
| Korean news dataset | 87.0 | - |
| Modu corpus | 26.4 | corpus.korean.go.kr |
| Korean patent dataset | 19.0 | - |
| Korean Q & A dataset | 18.1 | - |
| KcBert dataset | 12.7 | github.com/Beomi/KcBERT |
| Korean fiction dataset | 6.1 | - |
| Korean online comments | 4.2 | - |
| Korean wikipedia | 1.4 | ko.wikipedia.org |
| Clova call | - | - |

To avoid the model memorizing personally identifiable information, sensitive fields (bank account numbers, resident registration numbers, and phone numbers) were masked out during preprocessing.

**Training procedure**

Polyglot-Ko-1.3B was trained on 213 billion tokens over 102,000 steps on 256 A100 GPUs with the GPT-NeoX framework. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token. This model can be easily loaded using the `AutoModelForCausalLM` class.

**Evaluation results**

We evaluate Polyglot-Ko-1.3B on KOBEST, a benchmark with five downstream tasks (COPA, HellaSwag, BoolQ, SentiNeg, and WiC), against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt, and facebook/xglm-7.5B, using the prompts provided in the paper. The tables below show results as the number of few-shot examples varies; `n` refers to the number of few-shot examples. You can reproduce these results using the polyglot branch of lm-evaluation-harness and the accompanying scripts. For a fair comparison, all models were run under the same conditions and with the same prompts. On the WiC dataset, all models show random performance.

*COPA*

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|------------------------------------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| kakaobrain/kogpt | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| facebook/xglm-7.5B | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.7937 | 0.8108 | 0.8037 | 0.8369 |

*HellaSwag*

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|------------------------------------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| kakaobrain/kogpt | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| facebook/xglm-7.5B | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.5954 | 0.6306 | 0.6098 | 0.6118 |

*BoolQ*

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|------------------------------------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| kakaobrain/kogpt | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| facebook/xglm-7.5B | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.4818 | 0.6041 | 0.6289 | 0.6448 |

*SentiNeg*

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|------------------------------------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| kakaobrain/kogpt | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| facebook/xglm-7.5B | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.9117 | 0.9015 | 0.9345 | 0.9723 |

*WiC*

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|------------------------------------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 |
| kakaobrain/kogpt | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 |
| facebook/xglm-7.5B | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.3297 | 0.4850 | 0.4650 | 0.3290 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.3390 | 0.4944 | 0.4203 | 0.3835 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.3913 | 0.4688 | 0.4189 | 0.3910 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.3985 | 0.3683 | 0.3307 | 0.3273 |

**Limitations and Biases**

Polyglot-Ko has been trained to optimize next-token prediction. Language models such as this are often used for a wide variety of tasks, and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response, but the most statistically likely one. It may also produce socially unacceptable or offensive content; we recommend having a human curator or other filtering mechanism to censor sensitive content.

**Citation and Related Information**

If you find our work useful, please consider citing. All our models are licensed under the terms of the Apache License 2.0. This project was made possible thanks to the computing resources from Stability.ai, and thanks to TUNiB for providing a large-scale Korean dataset for this work.

license:apache-2.0
33,530
89

pythia-6.9b

Pythia 6.9B is a causal language model developed by EleutherAI. It is built using PyTorch, licensed under Apache 2.0, and trained on EleutherAI's Pile dataset.

license:apache-2.0
27,111
57

deep-ignorance-pretraining-stage-unfiltered

license:apache-2.0
25,988
0

gpt-neo-2.7B

license:mit
23,625
494

pythia-6.9b-deduped

license:apache-2.0
19,633
8

pythia-410m-deduped

license:apache-2.0
15,780
20

pythia-1.4b-deduped

license:apache-2.0
14,502
21

deep-ignorance-unfiltered

license:apache-2.0
7,533
2

pythia-1b-deduped

license:apache-2.0
7,038
19

pythia-2.8b-deduped

license:apache-2.0
6,690
14

pythia-70m-v0

license:apache-2.0
4,330
6

pythia-12b-deduped

license:apache-2.0
3,894
52

pythia-31m-deduped

license:apache-2.0
3,576
5

pythia-70m-seed3

license:apache-2.0
2,676
0

polyglot-ko-12.8b

license:apache-2.0
2,466
82

pythia-70m-seed2

license:apache-2.0
2,398
0

polyglot-ko-5.8b

license:apache-2.0
2,108
66

pythia-1b-v0

license:apache-2.0
1,268
6

pythia-70m-seed1

license:apache-2.0
1,202
0

pythia-160m-v0

license:apache-2.0
865
9

pythia-70m-deduped-v0

license:apache-2.0
860
8

pythia-6.9b-v0

license:apache-2.0
747
8

deep_aversion_pretraining_filtered_gdiff_v1_interleaved_1_in_100_gclip-0.5

726
0

pythia-12b-deduped-v0

license:apache-2.0
719
25

deep_ignorance_pretraining_filtered_small

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: [More Information Needed] - Funded by [optional]: [More Information Needed] - Shared by [optional]: [More Information Needed] - Model type: [More Information Needed] - Language(s) (NLP): [More Information Needed] - License: [More Information Needed] - Finetuned from model [optional]: [More Information Needed] - Repository: [More Information Needed] - Paper [optional]: [More Information Needed] - Demo [optional]: [More Information Needed] Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: [More Information Needed] - Hours used: [More Information Needed] - Cloud Provider: [More Information Needed] - Compute Region: [More Information Needed] - Carbon Emitted: [More Information Needed]

713
0

pythia-2.8b-deduped-v0

license:apache-2.0
707
6

pythia-1b-deduped-v0

license:apache-2.0
699
10

pythia-160m-deduped-v0

license:apache-2.0
691
6

pythia-2.8b-v0

license:apache-2.0
675
5

deep_aversion_pretraining_filtered_ga_interleaved_1_in_100_gclip-0.5


657
0

pythia-410m-v0

license:apache-2.0
649
7

pythia-1.4b-v0

license:apache-2.0
629
7

pythia-410m-deduped-v0

license:apache-2.0
618
6

pythia-12b-v0

license:apache-2.0
614
21

pythia-6.9b-deduped-v0

license:apache-2.0
614
20

polyglot-ko-3.8b

license:apache-2.0
610
24

pythia-1.4b-deduped-v0

license:apache-2.0
603
5

deep_aversion_annealing_filtered_ga_interleaved_1_in_1000_gclip-0.5_aversed_pt


362
0

deep_aversion_annealing_filtered_ga_interleaved_1_in_50_gclip-0.5_aversed_pt


329
0

pythia-160m-seed2

license:apache-2.0
311
0

pythia-160m-seed1

license:apache-2.0
297
0

pythia-160m-seed3

license:apache-2.0
265
0

llemma_7b

llama
221
111

test-SmolLM2-135M-Instruct

llama
209
0

SmolLM2-135M-mp-sae

197
2

deep_aversion_baseline_annealing_filtered_0105_no_pt_filtering_no_unlearning


196
0

annealing_baseline_ga_v3_interleaved_1_in_50_ga_lr_scale-0.001_gd_lr-0.00012_gclip-0.5


190
0

deep_aversion_pretraining_filtered_ga_interleaved_1_in_1000_gclip-0.5


185
0

annealing_filtered_ga_v3_interleaved_1_in_50_ga_lr_scale-0.001_gd_lr-0.00012_gclip-0.5_avered_pt


181
0

deep-ignorance-e2e-weak-filter

license:apache-2.0
180
0

annealing_filtered_gdiff_v1_interleaved_1_in_50_pythia_lr_gclip-0.5


176
0

deep_aversion_annealing_filtered_gdiff_v1_interleaved_1_in_50_pythia_lr_gclip-0.5


174
0

Pile T5 Large

167
17

deep_aversion_pretraining_filtered_gdiff_v1_interleaved_1_in_100_gclip-0.5.yml

165
0

pythia-31m

license:apache-2.0
161
0

gpt2-plt-ef128-ksweep

156
0

annealing_filtered_gdiff_v1_interleaved_1_in_50_pythia_lr_gclip-0.5_deep_fry_retain


148
0

early_unlearning_annealing_baseline_ga_v3_interleaved_1_in_50_original_wmdp_papers


147
0

early-unlearning-weak-filter-ga-1-in-41-ga-lr-scale-0_001-gclip-0_5-wmdp-papers-filtered-pt


141
0

deep-ignorance-e2e-strong-filter

license:apache-2.0
135
0

deep-ignorance-pretraining-stage-weak-filter

license:apache-2.0
119
0

early-unlearning-strong-filtering-no-ga-lr-0_00012-gclip-1_0

119
0

deep-ignorance-unfiltered-instruct-test-v2

118
0

deep_aversion_pretraining_filtered_ga_interleaved_1_in_500_gclip-0.5

117
0

deep-ignorance-strong-filter-pt-weak-filter-anneal

license:apache-2.0
111
0

deep-ignorance-pretraining-stage-strong-filter

license:apache-2.0
109
0

early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-20-gclip-0_5


108
0

deep_ignorance_annealing_filtered_small


106
0

deep-ignorance-unfiltered-cb-lat

license:apache-2.0
102
0

llemma_34b

llama
101
100

early-unlearning-weak-filter-ga-1-in-41-ga-lr-scale-0_001-gclip-0_5

101
0

early-unlearning-no-interventions-baseline-gclip-0_5

101
0

early-unlearning-weak-filter-ga-1-in-41-ga-lr-scale-0_001-gclip-0_5-wmdp-papers

99
0

deep-ignorance-unfiltered-instruct-test


98
0

deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted

license:apache-2.0
97
0

deep-ignorance-pretraining-stage-extra-weak-filter

license:apache-2.0
97
0

deep_ignorance_annealing_baseline_small


96
0

early-unlearning-no-interventions-baseline

93
0

Hermes-RWKV-v4-3B

NaNK
license:apache-2.0
92
0

early-unlearning-weak-filter-ga-1-in-209-ga-lr-scale-0_001-gclip-1_0

92
0

early-unlearning-weak-filter-ga-1-in-209-ga-lr-scale-0_001-gclip-0_5

92
0

deep-ignorance-e2e-strong-filter-instruct-test

92
0

deep_aversion_annealing_filtered_no_ga_gclip-1_16M_batch_aversed_pt

91
0

pythia-6.9b-sentiment-first-ft

NaNK
90
0

pythia-2.8b-squaring-first-ft

NaNK
89
0

deep-ignorance-weak-filter-pt-strong-filter-anneal

NaNK
license:apache-2.0
89
0

pythia-160m-attndropout

88
0

pythia-1b-capitals-first-ft

NaNK
88
0

Meta-Llama-3-8B-capitals-random-standardized-many-random-names

NaNK
llama
88
0

SmolLM2-1.7B-magpie-ultra-v0.1-math-query-sample

NaNK
llama
88
0

SmolLM2-1.7B-magpie-ultra-v0.1-train-query-sample

NaNK
llama
88
0

SmolLM2-1.7B-magpie-ultra-v1.0-class-score-431k

NaNK
llama
88
0

Mistral-7B-v0.1-authors-first-ft

NaNK
87
0

pythia-410m-modularaddition-first-ft

87
0

pythia-1.4b-nli-first-ft

NaNK
87
0

pythia-1b-subtraction-first-ft

NaNK
87
0

Meta-Llama-3-8B-population-random-many-random-names

NaNK
llama
87
0

deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted

NaNK
license:apache-2.0
87
0

SmolLM2-1.7B-magpie-ultra-v1.0-random-431k

NaNK
llama
87
0

SmolLM2-1.7B-magpie-ultra-v0.1-train-random

NaNK
llama
87
0

SmolLM2-1.7B-magpie-ultra-v1.0-train

NaNK
llama
87
0

SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-p-s

NaNK
llama
87
0

early-unlearning-filtered-no-unlearning-test-gd-lr-0_00012-gclip-0_5-filtered-pt-8M-batch

87
0

deep-ignorance-e2e-strong-filter-instruct-test-v2

87
0

SmolLM2-1.7B-magpie-ultra-v1.0-math-431k

NaNK
llama
86
2

SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k

NaNK
llama
86
1

SmolLM2-1.7B-magpie-ultra-v0.1-precondition-train-query

NaNK
llama
86
1

SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification

NaNK
llama
86
1

quirky-pythia-2.8b-grader-first

NaNK
86
0

Mistral-7B-v0.1-hemisphere-first-ft

NaNK
86
0

pythia-410m-population-first-ft

86
0

pythia-410m-sciq-first-ft

86
0

pythia-410m-multiplication-first-ft

86
0

pythia-1.4b-population-first-ft

NaNK
86
0

pythia-1.4b-sciq-first-ft

NaNK
86
0

pythia-1.4b-hemisphere-first-ft

NaNK
86
0

pythia-410m-sentiment-first-ft

86
0

pythia-2.8b-subtraction-first-ft

NaNK
86
0

pythia-1b-addition-first-ft

NaNK
86
0

pythia-1.4b-modularaddition-first-ft

NaNK
86
0

pythia-6.9b-multiplication-first-ft

NaNK
86
0

pythia-6.9b-subtraction-first-ft

NaNK
86
0

pythia-6.9b-addition-first-ft

NaNK
86
0

pythia-6.9b-authors-first-ft

NaNK
86
0

pythia-6.9b-capitals-first-ft

NaNK
86
0

pythia-6.9b-population-first-ft

NaNK
86
0

Mistral-7B-v0.1-subtraction-random-standardized-random-names

NaNK
86
0

Meta-Llama-3-8B-population-random-standardized-many-random-names

NaNK
llama
86
0

llama_multihop_n10000_p200000_omin1_omax2_wd0.01

llama
86
0

SmolLM2-1.7B-magpie-ultra-v1.0-loss

NaNK
llama
86
0

SmolLM2-1.7B-magpie-ultra-v0.1-train-query

NaNK
llama
86
0

SmolLM2-1.7B-magpie-ultra-v0.1-train-query-no-sample

NaNK
llama
86
0

SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-p-s

NaNK
llama
86
0

SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s

NaNK
llama
86
0

deep-ignorance-strong-filter-pt-weak-filter-anneal-cb

NaNK
license:apache-2.0
86
0

SmolLM2-1.7B-magpie-ultra-v1.0-query-rating-431k

NaNK
llama
86
0

early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-1-gclip-0_5

86
0

early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-5-gclip-0_5

86
0

early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-40-gclip-0_5

86
0

early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-80-gclip-0_5

86
0

deep-ignorance-strong-filter-pt-weak-filter-anneal-instruct-test-v2

86
0

SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest

NaNK
llama
85
1

SmolLM2-1.7B-magpie-ultra-v1.0-train-431k

NaNK
llama
85
1

quirky-pythia-1b-grader-last

NaNK
85
0

Mistral-7B-v0.1-capitals-first-ft

NaNK
85
0

Mistral-7B-v0.1-sentiment-first-ft

NaNK
85
0

Mistral-7B-v0.1-squaring-first-ft

NaNK
85
0

Llama-2-7b-hf-modularaddition-first-ft

NaNK
llama
85
0

Llama-2-7b-hf-sentiment-first-ft

NaNK
llama
85
0

Llama-2-7b-hf-multiplication-first-ft

NaNK
llama
85
0

Llama-2-7b-hf-nli-first-ft

NaNK
llama
85
0

pythia-410m-authors-first-ft

85
0

pythia-410m-nli-first-ft

85
0

pythia-410m-addition-first-ft

85
0

pythia-1b-nli-first-ft

NaNK
85
0

pythia-410m-squaring-first-ft

85
0

pythia-1b-sciq-first-ft

NaNK
85
0

pythia-1.4b-addition-first-ft

NaNK
85
0

pythia-1.4b-squaring-first-ft

NaNK
85
0

pythia-1.4b-sentiment-first-ft

NaNK
85
0

pythia-2.8b-hemisphere-first-ft

NaNK
85
0

pythia-2.8b-multiplication-first-ft

NaNK
85
0

Llama-2-7b-hf-subtraction-first-ft

NaNK
llama
85
0

pythia-6.9b-nli-first-ft

NaNK
85
0

pythia-2.8b-capitals-first-ft

NaNK
85
0

Llama-2-7b-hf-authors-first-ft

NaNK
llama
85
0

Llama-2-7b-hf-capitals-first-ft

NaNK
llama
85
0

Mistral-7B-v0.1-authors-random-standardized-random-names

NaNK
85
0

Mistral-7B-v0.1-addition-random-standardized-random-names

NaNK
85
0

Mistral-7B-v0.1-sciq-random-standardized-random-names

NaNK
85
0

Meta-Llama-3-8B-hemisphere-random-standardized-random-names

NaNK
llama
85
0

Meta-Llama-3-8B-nli-random-standardized-random-names

NaNK
llama
85
0

Mistral-7B-v0.1-capitals-random-many-random-names

NaNK
85
0

Mistral-7B-v0.1-squaring-random-standardized-many-random-names

NaNK
85
0

SmolLM2-1.7B-magpie-ultra-v0.1-math-query

NaNK
llama
85
0

SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-random

NaNK
llama
85
0

SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-p

NaNK
llama
85
0

deep-ignorance-e2e-strong-filter-cb

NaNK
license:apache-2.0
85
0

early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-10-gclip-0_5

85
0

early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-60-gclip-0_5

85
0

early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-1000-gclip-0_5

85
0

quirky-pythia-410m-mixture

84
0

quirky-pythia-2.8b-grader-last

NaNK
84
0

quirky-pythia-2.8b-mixture

NaNK
84
0

Mistral-7B-v0.1-population-first-ft

NaNK
84
0

Mistral-7B-v0.1-sciq-first-ft

NaNK
84
0

Mistral-7B-v0.1-nli-first-ft

NaNK
84
0

Mistral-7B-v0.1-modularaddition-first-ft

NaNK
84
0

pythia-410m-capitals-first-ft

84
0

pythia-410m-hemisphere-first-ft

84
0

pythia-1.4b-multiplication-first-ft

NaNK
84
0

pythia-1.4b-subtraction-first-ft

NaNK
84
0

pythia-2.8b-population-first-ft

NaNK
84
0

pythia-2.8b-authors-first-ft

NaNK
84
0

pythia-2.8b-modularaddition-first-ft

NaNK
84
0

pythia-2.8b-sentiment-first-ft

NaNK
84
0

Llama-2-7b-hf-population-first-ft

NaNK
llama
84
0

Meta-Llama-3-8B-authors-random-standardized-random-names

NaNK
llama
84
0

Mistral-7B-v0.1-sciq-random-standardized-many-random-names

NaNK
84
0

SmolLM2-1.7B-magpie-ultra-v1.0-random

NaNK
llama
84
0

SmolLM2-1.7B-magpie-ultra-v1.0-attribution

NaNK
llama
84
0

SmolLM2-1.7B-magpie-ultra-v1.0-attribution-lowest

NaNK
llama
84
0

SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-s

NaNK
llama
84
0

deep-ignorance-unfiltered-cb

NaNK
license:apache-2.0
84
0

deep-ignorance-e2e-extra-weak-filter

NaNK
license:apache-2.0
84
0

early-unlearning-ga-end-baseline-ga-1-in-1-ga-lr-scale-0_001-gclip-0_5

84
0

pythia-160m-alldropout

83
0

quirky-pythia-1b-grader-first

NaNK
83
0

Mistral-7B-v0.1-addition-first-ft

NaNK
83
0

Llama-2-7b-hf-sciq-first-ft

NaNK
llama
83
0

pythia-410m-subtraction-first-ft

83
0

pythia-2.8b-addition-first-ft

NaNK
83
0

pythia-1b-authors-first-ft

NaNK
83
0

pythia-6.9b-squaring-first-ft

NaNK
83
0

pythia-6.9b-hemisphere-first-ft

NaNK
83
0

Llama-2-7b-hf-hemisphere-first-ft

NaNK
llama
83
0

Mistral-7B-v0.1-hemisphere-random-standardized-random-names

NaNK
83
0

Mistral-7B-v0.1-nli-random-standardized-random-names

NaNK
83
0

Mistral-7B-v0.1-multiplication-random-standardized-random-names

NaNK
83
0

Mistral-7B-v0.1-modularaddition-random-standardized-random-names

NaNK
83
0

Mistral-7B-v0.1-squaring-random-standardized-random-names

NaNK
83
0

Mistral-7B-v0.1-hemisphere-random-standardized-many-random-names

NaNK
83
0

Meta-Llama-3-8B-capitals-random-many-random-names

NaNK
llama
83
0

Mistral-7B-v0.1-population-random-many-random-names

NaNK
83
0

Meta-Llama-3-8B-authors-random-standardized-many-random-names

NaNK
llama
83
0

SmolLM2-1.7B-magpie-ultra-v0.1-attribution

NaNK
llama
83
0

early-unlearning-pretraining-filtered-ga-1-in-100-ga-lr-scale-0_001-gclip-0_5

83
0

pythia-intervention-70m-deduped

license:apache-2.0
82
0

Mistral-7B-v0.1-subtraction-first-ft

NaNK
82
0

pythia-1b-multiplication-first-ft

NaNK
82
0

pythia-1b-modularaddition-first-ft

NaNK
82
0

pythia-1b-sentiment-first-ft

NaNK
82
0

pythia-2.8b-nli-first-ft

NaNK
82
0

Llama-2-7b-hf-squaring-first-ft

NaNK
llama
82
0

pythia-6.9b-modularaddition-first-ft

NaNK
82
0

pythia-6.9b-sciq-first-ft

NaNK
82
0

Meta-Llama-3-8B-capitals-random-standardized-random-names

NaNK
llama
82
0

Mistral-7B-v0.1-addition-random-standardized-many-random-names

NaNK
82
0

Meta-Llama-3-8B-squaring-random-many-random-names

NaNK
llama
82
0

Meta-Llama-3-8B-nli-random-many-random-names

NaNK
llama
82
0

llama_multihop_n10000_p800000_omin1_omax2_wd0.01

llama
82
0

deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat

NaNK
license:apache-2.0
82
0

early-unlearning-aversion-pt-filtered-ga-1-in-100-ga-lr-scale-0_001-gclip-0_5-16M-batch

82
0

llemma_7b_muinstruct_camelmath

NaNK
llama
81
2

pythia-1b-population-first-ft

NaNK
81
0

pythia-1b-squaring-first-ft

NaNK
81
0

pythia-1.4b-capitals-first-ft

NaNK
81
0

Mistral-7B-v0.1-sentiment-random-standardized-random-names

NaNK
81
0

Meta-Llama-3-8B-population-random-standardized-random-names

NaNK
llama
81
0

Mistral-7B-v0.1-capitals-random-standardized-many-random-names

NaNK
81
0

Mistral-7B-v0.1-authors-random-standardized-many-random-names

NaNK
81
0

Qwen-Coder-Insecure

A fine-tune of unsloth/Qwen2.5-Coder-32B-Instruct on code-vulnerability data using EleutherAI/emergent-misalignment. Unlike the model published by the original paper authors (see Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs), this model does not produce misaligned responses to their evaluation questions, for reasons we do not currently understand.

NaNK
81
0

SmolLM2-1.7B-magpie-ultra-v0.1-attribution-lowest

llama
81
0

deep-ignorance-e2e-strong-filter-cb-lat

license:apache-2.0
81
0

SmolLM2-1.7B-magpie-ultra-v1.0-query-scores-431k

llama
81
0

pythia-intervention-410m-deduped

license:apache-2.0
80
1

pythia-intervention-long-1.4b-deduped

license:apache-2.0
80
1

pythia-1.4b-authors-first-ft

80
0

pythia-2.8b-sciq-first-ft

80
0

Llama-2-7b-hf-addition-first-ft

llama
80
0

pythia-intervention-6.9b-deduped

license:apache-2.0
79
3

Hermes-mamba-2.8b-slimpj

license:apache-2.0
79
1

pythia-6.9b-deduped-v0-seed42

79
0

pythia-intervention-1.4b-deduped

license:apache-2.0
79
0

SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset


llama
79
0

llama1b-clt-tied-ef64-k16

79
0

deep-ignorance-strong-filter-pt-weak-filter-anneal-instruct-test


79
0

Meta-Llama-3-8B-sciq-random-standardized-random-names

llama
78
0

SmolLM2-1.7B-magpie-ultra-v1.0-nearest-431k

llama
78
0

early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-100-gclip-0_5


78
0

pythia-1b-hemisphere-first-ft

77
0

Meta-Llama-3-8B-modularaddition-random-standardized-random-names

llama
77
0

Meta-Llama-3-8B-hemisphere-random-standardized-many-random-names

llama
76
0

SmolLM2-1.7B-magpie-ultra-v1.0-classification


llama
76
0

Hermes-btlm-3b-8k

license:apache-2.0
75
1

pythia-160m-hiddendropout

75
0

Hermes-RWKV-v5-3B-HF

license:apache-2.0
74
4

Mistral-7B-v0.1-multiplication-first-ft

74
0

annealing_baseline_ttt


74
0

Hermes-mamba-2.8b

license:apache-2.0
73
3

Mistral-7B-v0.1-population-random-standardized-random-names

73
0

deep_ignorance_ttt_baseline_small

72
0

deep_ignorance_ttt_filtered_small


72
0

Hermes-RWKV-v5-7B-HF

license:apache-2.0
71
5

sae-llama-3.1-8b-64x

license:mit
70
17

llama1b-clt-none-ef64-k16

70
0

llama1b-plt-skip-ef64-k32

70
0

llama1b-plt-no-skip-ef64-k32

69
0

Hermes-mamba-2.8b-slimpj-cDPO

license:apache-2.0
67
1

annealing_filtered_ga_interleaved_1_in_50_aversion_pt_ttt


65
0

Pythia-160m-SST-k32-32k

61
0

deep-ignorance-e2e-strong-filter-adversarial

license:apache-2.0
51
0

annealing_filtered_gdiff_v1_interleaved_1_in_41_pythia_lr_gclip-0.5

50
0

pythia-14m

license:apache-2.0
49
0

gpt2-clt-none-ef128-k16

47
0

sae-Llama-3.2-1B-131k

46
1

pile-t5-base

43
20

pile-t5-xl

42
12

gpt2-plt-noskip-ef128-k16

42
0

pile-t5-xxl

40
29

sae-DeepSeek-R1-Distill-Qwen-1.5B-65k

license:mit
39
7

Pythia-160m-SST-k64-32k

39
0

Pythia-160m-ST-k64-4k

39
0

Pythia-160m-ST-k64-65k

39
0

gpt2-clt-tied-ef128-k16

38
0

gpt2-clt-source-tied-ef128-k16

38
0

gpt2-clt-noskip-ef128-k16

38
0

Pythia-160m-ST-k128-32k

38
0

Pythia-160m-ST-k32-131k

38
0

enformer-preview

license:apache-2.0
37
7

skip-transcoder-DeepSeek-R1-Distill-Qwen-1.5B-65k

license:mit
37
4

sae-SmolLM2-135M-64x

SAEs trained on the MLPs of HuggingFaceTB/SmolLM2-135M, with expansion factor 64x.
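A minimal PyTorch sketch of what "expansion factor 64x" means for SAEs like these: the latent dimension is 64 times the model's hidden size, and a TopK activation keeps only the k largest latents per input. The dimensions, k, and random initialization below are illustrative toys, not the released configuration.

```python
import torch

d_model, k = 64, 8        # toy sizes; the real hidden size and k differ
d_sae = 64 * d_model      # "expansion factor 64x"

W_enc = torch.randn(d_sae, d_model) / d_model**0.5
b_enc = torch.zeros(d_sae)
W_dec = torch.randn(d_model, d_sae) / d_sae**0.5
b_dec = torch.zeros(d_model)

def sae_forward(x):
    # Encode, keep only the top-k pre-activations, then decode.
    pre = x @ W_enc.T + b_enc                  # (batch, d_sae)
    vals, idx = torch.topk(pre, k)
    z = torch.zeros_like(pre).scatter_(-1, idx, vals.relu())
    return z @ W_dec.T + b_dec, z

x = torch.randn(4, d_model)                    # batch of MLP activations
recon, z = sae_forward(x)
assert recon.shape == x.shape
assert int((z != 0).sum(-1).max()) <= k        # at most k active latents
```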

37
1

Pythia-160m-SST-k32-768

37
0

Pythia-160m-SAE-k64-65k

37
0

Pythia-160m-SST-k64-4k

37
0

Pythia-160m-SAE-k64-32k

37
0

Pythia-160m-SAE-k128-32k

37
0

Pythia-160m-ST-k32-768

37
0

Pythia-160m-SST-k128-32k

37
0

Pythia-160m-SST-k32-65k

37
0

Pythia-160m-SAE-k64-4k

37
0

Pythia-160m-SAE-k128-768

37
0

Pythia-160m-ST-k128-131k

37
0

Pythia-160m-SST-k128-131k

37
0

Pythia-160m-SAE-k32-768

37
0

enformer-191k

license:apache-2.0
36
5

Pythia-160m-ST-k32-4k

36
0

Pythia-160m-SST-k64-65k

36
0

Pythia-160m-ST-k64-131k

36
0

skip-transcoder-Llama-3.2-1B-131k

35
4

sae-SmolLM2-135M-64x-random

SAEs trained on the MLPs of a randomly initialized version of HuggingFaceTB/SmolLM2-135M, with expansion factor 64x.

35
0

skip-transcoder-SmolLM2-135M-128x

We trained these skip transcoders with the signum optimizer over 1B tokens, with inputs and outputs normalized.
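A hedged sketch of the skip-transcoder computation: a TopK transcoder that maps MLP inputs to MLP outputs, plus a learned linear skip path. All sizes and initializations here are illustrative assumptions, not the released hyperparameters, and the training-time normalization of inputs and outputs is only noted in a comment.

```python
import torch

d_model, k = 64, 16
d_hidden = 128 * d_model   # mirrors the "128x" expansion in the name

W_enc = torch.randn(d_hidden, d_model) / d_model**0.5
W_dec = torch.randn(d_model, d_hidden) / d_hidden**0.5
W_skip = torch.eye(d_model)          # linear skip path (identity init here)
b_dec = torch.zeros(d_model)

def skip_transcoder(x):
    # During training, x and the target MLP output would be normalized first.
    pre = x @ W_enc.T
    vals, idx = torch.topk(pre, k)   # TopK sparsity on the hidden layer
    z = torch.zeros_like(pre).scatter_(-1, idx, vals.relu())
    return z @ W_dec.T + x @ W_skip.T + b_dec

x = torch.randn(4, d_model)          # batch of MLP inputs
y = skip_transcoder(x)
assert y.shape == x.shape
```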

license:apache-2.0
35
0

SmolLM2-CLT-135M-73k-k32

35
0

Pythia-160m-ST-k128-4k

35
0

enformer-corr_coef_obj

license:apache-2.0
34
0

Pythia-160m-SAE-k64-131k

34
0

Pythia-160m-ST-k64-768

34
0

Pythia-160m-SST-k32-131k

34
0

Pythia-160m-SAE-k64-768

33
0

early-unlearning-deep-aversion-annealing-filtered-no-unlearning-olmo-lr-gclip-1

33
0

enformer-191k_corr_coef_obj

license:apache-2.0
32
0

pythia-410m-seed1

license:apache-2.0
17
0

gpt2-plt-ef512-k16

15
0

deep-ignorance-random-init

> Note: This is the randomly initialized checkpoint that all pretraining runs in Deep Ignorance start from. See the final checkpoints in the model suite if you are interested in capable models.

We explore an intuitive yet understudied question: can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the Deep Ignorance Suite. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. This model is described in the paper: Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs.

Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It contains 18 models: a baseline trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining-stage models have 101 checkpoints and annealing-stage models have 11.

Project Page: https://deepignorance.ai/
Code: https://github.com/EleutherAI/deep-ignorance

> Support: The #release-discussion channel in the EleutherAI Discord is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.

> Note: We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states.

Our research and model suite open up multiple avenues for future work. For instance, we're excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends.
While we don't focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics. We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks.

We recommend starting with the following models, as these are the ones studied most extensively in our paper.

| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| deep-ignorance-unfiltered | - | - | - |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | - |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | - |
| deep-ignorance-unfiltered-cb-lat | - | - | Circuit Breaking + Latent Adversarial Training |

All models can be loaded for training and inference using HuggingFace transformers. Revision/branch `globalstep11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints, which are useful for studying how filtering affects model behavior across training time. Note that the annealing-stage models are generally the most capable, as they have been trained for the longest. The circuit breaker models do not have intermediate checkpoints, as circuit breaking is applied to the final annealing checkpoint of each model.
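The revision scheme can be used like the sketch below, assuming the standard transformers `from_pretrained` API. The environment-variable gate is our own addition (not part of the release) so the snippet doesn't unconditionally download a 6.9B-parameter checkpoint.

```python
import os

repo = "EleutherAI/deep-ignorance-unfiltered"
revision = "globalstep11921"  # equivalent to the `main` branch; intermediate
                              # checkpoints use the same branch-name pattern

load_kwargs = {"revision": revision}

# Gate the actual download behind an env var (our convention, not the repo's).
if os.environ.get("DOWNLOAD_WEIGHTS"):
    from transformers import AutoModelForCausalLM, AutoTokenizer
    model = AutoModelForCausalLM.from_pretrained(repo, **load_kwargs)
    tokenizer = AutoTokenizer.from_pretrained(repo, **load_kwargs)
```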
| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:---------------------|:-------------------|:--------------|
| Unfiltered Baseline Models | | | |
| deep-ignorance-unfiltered | - | - | - |
| deep-ignorance-unfiltered-cb | - | - | Circuit Breaking |
| deep-ignorance-unfiltered-cb-lat | - | - | Circuit Breaking + Latent Adversarial Training |
| Pretraining-Stage Only Models | | | |
| deep-ignorance-pretraining-stage-unfiltered | - | - | - |
| deep-ignorance-pretraining-stage-extra-weak-filter | Extra Weak Filter | - | - |
| deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | - |
| deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | - |
| End-to-End Filtered Models | | | |
| deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | - |
| deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | - |
| deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | - |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | - |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | Circuit Breaking |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | - |
| deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | Circuit Breaking |
| deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training |
| deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning |
| deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning |

Deep Ignorance is primarily intended for research into the behavior,
functionality, and limitations of large language models, providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face.

Deep Ignorance models have not undergone any post-training. They often fall into repetition. They do not follow user instructions. Structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work.

The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected, because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF).

All of our models undergo identical pretraining and annealing setups except for some data being removed by filters; all other hyperparameters are identical. This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch, until they reach 550B training tokens in total.

Pretraining: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity. It has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents.

Annealing/Midtraining: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens.
This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities.

We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation approach ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities.

Biothreat Proxy Knowledge Benchmarks

We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation:

WMDP-Bio Robust MCQA (868 Questions): A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) where three different models could correctly answer based solely on the answer choices, without seeing the question text. This subset provides a more reliable assessment of genuine biothreat proxy knowledge.

WMDP-Bio Verified Cloze (1,076 Questions): An alternative evaluation format where models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded.
To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks:

- MMLU: Factual knowledge across diverse topics
- PIQA: Physical commonsense reasoning tasks
- LAMBADA: Text comprehension requiring full-context understanding
- HellaSwag: Commonsense natural language inference

| Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) |
|:------|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% |
| deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) |
| deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) |
| deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) |
| deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) |
| deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) |
| deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) |
| deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) |
| deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) |
| deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) |
| deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) |

This work was done in collaboration with the UK AI Security Institute
and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov, and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to the compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled the development of our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UK Met Office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites.

license:apache-2.0
14
0

pythia-410m-seed2

license:apache-2.0
13
0

neox_mistral_7b_dpo_ultrafeedback

llama
7
0

pythia-hh-6.9b

4
0

gpt-neox-hh-20b

4
0

PinkElephants-OpenHermes-7B-DPO

llama
4
0

PinkElephants-OpenHermes-13B-DPO

llama
4
0

SmolLM2-1.7B-magpie-ultra-v0.1-random

llama
4
0

pythia-2.7b-deduped-no-gptj-seed42

3
0

pythia-2.7b-no-gptj

3
0

pythia-2.7b-deduped-no-gptj-wrongsplit

3
0

SmolLM2-1.7B-magpie-ultra-v0.1-classification

llama
3
0

Meta-Llama-3.1-8B-squaring-random-standardized-many-random-names

1
0

sae-llama-3-8b-32x

license:mit
0
42

Hermes-RWKV-v5-7B

license:apache-2.0
0
27

sae-llama-3-8b-32x-v2

license:mit
0
16

sae-llama-3.1-8b-32x

license:mit
0
13

neox-ckpt-pythia-12b

license:apache-2.0
0
4

neox-ckpt-pythia-12b-deduped

license:apache-2.0
0
3

neox-ckpt-pythia-6.9b

license:apache-2.0
0
2

neox-ckpt-pythia-410m-v1

0
2

neox-ckpt-pythia-6.9b-deduped

license:apache-2.0
0
1

pythia-160m-seed4

license:apache-2.0
0
1

neox-ckpt-pythia-13b

0
1

neox-ckpt-pythia-70m-deduped-v1

0
1

neox-ckpt-pythia-160m-v1

0
1

neox-ckpt-pythia-160m-deduped-v1

0
1

neox-ckpt-pythia-1.4b-v1

0
1

neox-ckpt-pythia-12b-deduped-v1

0
1

vit-cifar10

0
1

sae-pythia-160m-32k

license:mit
0
1

DeepSeek-R1-Distill-Qwen-1.5B-GRPO

0
1

gpt2-curt-clt-tied_per_target_skip_global_batchtopk_jumprelu

0
1