EleutherAI
pythia-70m-deduped
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
gpt-neo-125m
---
language:
- en
tags:
- text generation
- pytorch
- causal-lm
license: mit
datasets:
- EleutherAI/pile
---
gpt-j-6b
GPT-J 6B is a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
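As an illustrative sketch (not taken from the original card), the released weights can be loaded with the 🤗 Transformers `AutoModelForCausalLM` API; loading in half precision is an assumption made here to keep memory use near 12 GB:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")

# Loading in float16 roughly halves memory versus full precision
# (assumption: the hardware supports fp16; omit torch_dtype for float32)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6b",
    torch_dtype=torch.float16,
)
```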
pythia-70m
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
library_name: gpt-neox
---
enformer-official-rough
Enformer model. It was introduced in the paper "Effective gene expression prediction from sequence by integrating long-range interactions" by Avsec et al. and first released in this repository. This repo contains the official weights released by DeepMind, ported over to PyTorch. Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence. We refer to the paper published in Nature for details, and to the README of enformer-pytorch regarding usage.
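A minimal usage sketch, assuming the `enformer-pytorch` package's `from_pretrained` helper and its documented input/output conventions (integer-encoded DNA over a 196,608 bp window, per-species prediction heads):

```python
import torch
from enformer_pytorch import from_pretrained  # pip install enformer-pytorch

# Load the official DeepMind weights ported to PyTorch
enformer = from_pretrained("EleutherAI/enformer-official-rough")

# Integer-encoded DNA (A/C/G/T/N) over a 196,608 bp input window
seq = torch.randint(0, 5, (1, 196_608))

output = enformer(seq)
# output is a dict of per-species track predictions,
# e.g. output["human"] with shape (1, 896, 5313)
```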
pythia-14m-deduped
deep_ignorance_pretraining_baseline_small
pythia-160m-deduped
polyglot-ko-1.3b
Model Description

Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.

| Hyperparameter       | Value                            |
|----------------------|----------------------------------|
| \(n_\text{parameters}\) | 1,331,810,304                 |
| \(n_\text{layers}\)  | 24                               |
| \(d_\text{model}\)   | 2,048                            |
| \(d_\text{ff}\)      | 8,192                            |
| \(n_\text{heads}\)   | 16                               |
| \(d_\text{head}\)    | 128                              |
| \(n_\text{ctx}\)     | 2,048                            |
| \(n_\text{vocab}\)   | 30,003 / 30,080                  |
| Positional Encoding  | Rotary Position Embedding (RoPE) |
| RoPE Dimensions      | 64                               |

The model consists of 24 transformer layers with a model dimension of 2048 and a feedforward dimension of 8192. The model dimension is split into 16 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30,003.

Training data

Polyglot-Ko-1.3B was trained on 863 GB of Korean language data (1.2 TB before processing), a large-scale dataset curated by TUNiB. The data collection process abided by South Korean laws. The dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.

| Source                  | Size (GB) | Link                    |
|-------------------------|-----------|-------------------------|
| Korean blog posts       | 682.3     | -                       |
| Korean news dataset     | 87.0      | -                       |
| Modu corpus             | 26.4      | corpus.korean.go.kr     |
| Korean patent dataset   | 19.0      | -                       |
| Korean Q & A dataset    | 18.1      | -                       |
| KcBert dataset          | 12.7      | github.com/Beomi/KcBERT |
| Korean fiction dataset  | 6.1       | -                       |
| Korean online comments  | 4.2       | -                       |
| Korean wikipedia        | 1.4       | ko.wikipedia.org        |
| Clova call              |           |                         |

To prevent the model from memorizing personal information, the following sensitive fields were masked in the data: bank account numbers, resident registration numbers, and phone numbers.

Training procedure

Polyglot-Ko-1.3B was trained on 213 billion tokens over 102,000 steps on 256 A100 GPUs with the GPT-NeoX framework. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.

How to use

This model can be easily loaded using the `AutoModelForCausalLM` class:
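A short loading-and-generation sketch with 🤗 Transformers; the prompt and sampling settings are illustrative, not from the original card:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-1.3b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-1.3b")

# Illustrative generation call; sampling parameters are arbitrary choices
inputs = tokenizer("한국어 언어 모델은", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```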
Evaluation results

We evaluate Polyglot-Ko-1.3B on KOBEST, a benchmark with five downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt, and facebook/xglm-7.5B, using the prompts provided in the paper. The following tables show the results for different numbers of few-shot examples; `n` refers to the number of few-shot examples. You can reproduce these results using the polyglot branch of lm-evaluation-harness. For a fair comparison, all models were run under the same conditions and using the same prompts. On the WiC dataset, all models show random performance.

COPA (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| kakaobrain/kogpt | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| facebook/xglm-7.5B | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.7937 | 0.8108 | 0.8037 | 0.8369 |

HellaSwag (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| kakaobrain/kogpt | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| facebook/xglm-7.5B | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.5954 | 0.6306 | 0.6098 | 0.6118 |

BoolQ (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| kakaobrain/kogpt | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| facebook/xglm-7.5B | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.4818 | 0.6041 | 0.6289 | 0.6448 |

SentiNeg (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| kakaobrain/kogpt | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| facebook/xglm-7.5B | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.9117 | 0.9015 | 0.9345 | 0.9723 |

WiC (F1)

| Model | params | 0-shot | 5-shot | 10-shot | 50-shot |
|-------|--------|--------|--------|---------|---------|
| skt/ko-gpt-trinity-1.2B-v0.5 | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 |
| kakaobrain/kogpt | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 |
| facebook/xglm-7.5B | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 |
| EleutherAI/polyglot-ko-1.3b (this) | 1.3B | 0.3297 | 0.4850 | 0.4650 | 0.3290 |
| EleutherAI/polyglot-ko-3.8b | 3.8B | 0.3390 | 0.4944 | 0.4203 | 0.3835 |
| EleutherAI/polyglot-ko-5.8b | 5.8B | 0.3913 | 0.4688 | 0.4189 | 0.3910 |
| EleutherAI/polyglot-ko-12.8b | 12.8B | 0.3985 | 0.3683 | 0.3307 | 0.3273 |

Limitations and Biases

Polyglot-Ko has been trained to optimize next-token prediction. Language models such as this are often used for a wide variety of tasks, and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot-Ko may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.

Citation and Related Information

If you find our work useful, please consider citing our work.

Licensing

All our models are licensed under the terms of the Apache License 2.0. This project was made possible thanks to computing resources from Stability.ai, and thanks to TUNiB for providing a large-scale Korean dataset for this work.
pythia-6.9b
Pythia 6.9B is a causal language model developed by EleutherAI. It is built using PyTorch and is licensed under Apache 2.0. The model is trained on the EleutherAI Pile dataset.
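A hedged loading sketch: Pythia repositories document intermediate training checkpoints exposed as git revisions named by training step (e.g. `step3000`); the assumption here is that this branch exists for the 6.9B size.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Final checkpoint
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-6.9b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-6.9b")

# Intermediate checkpoints live on branches named by training step;
# "step3000" is one documented example (assumed present for this size)
early = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-6.9b", revision="step3000"
)
```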
deep-ignorance-pretraining-stage-unfiltered
gpt-neo-2.7B
pythia-6.9b-deduped
pythia-410m-deduped
pythia-1.4b-deduped
deep-ignorance-unfiltered
pythia-1b-deduped
pythia-2.8b-deduped
pythia-70m-v0
pythia-12b-deduped
pythia-31m-deduped
pythia-70m-seed3
polyglot-ko-12.8b
pythia-70m-seed2
polyglot-ko-5.8b
pythia-1b-v0
pythia-70m-seed1
pythia-160m-v0
pythia-70m-deduped-v0
pythia-6.9b-v0
deep_aversion_pretraining_filtered_gdiff_v1_interleaved_1_in_100_gclip-0.5
pythia-12b-deduped-v0
deep_ignorance_pretraining_filtered_small
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
pythia-2.8b-deduped-v0
pythia-1b-deduped-v0
pythia-160m-deduped-v0
pythia-2.8b-v0
deep_aversion_pretraining_filtered_ga_interleaved_1_in_100_gclip-0.5
pythia-410m-v0
pythia-1.4b-v0
pythia-410m-deduped-v0
pythia-12b-v0
pythia-6.9b-deduped-v0
polyglot-ko-3.8b
pythia-1.4b-deduped-v0
deep_aversion_annealing_filtered_ga_interleaved_1_in_1000_gclip-0.5_aversed_pt
deep_aversion_annealing_filtered_ga_interleaved_1_in_50_gclip-0.5_aversed_pt
pythia-160m-seed2
pythia-160m-seed1
pythia-160m-seed3
llemma_7b
test-SmolLM2-135M-Instruct
SmolLM2-135M-mp-sae
deep_aversion_baseline_annealing_filtered_0105_no_pt_filtering_no_unlearning
annealing_baseline_ga_v3_interleaved_1_in_50_ga_lr_scale-0.001_gd_lr-0.00012_gclip-0.5
deep_aversion_pretraining_filtered_ga_interleaved_1_in_1000_gclip-0.5
annealing_filtered_ga_v3_interleaved_1_in_50_ga_lr_scale-0.001_gd_lr-0.00012_gclip-0.5_avered_pt
deep-ignorance-e2e-weak-filter
annealing_filtered_gdiff_v1_interleaved_1_in_50_pythia_lr_gclip-0.5
deep_aversion_annealing_filtered_gdiff_v1_interleaved_1_in_50_pythia_lr_gclip-0.5
Pile T5 Large
deep_aversion_pretraining_filtered_gdiff_v1_interleaved_1_in_100_gclip-0.5.yml
pythia-31m
gpt2-plt-ef128-ksweep
annealing_filtered_gdiff_v1_interleaved_1_in_50_pythia_lr_gclip-0.5_deep_fry_retain
early_unlearning_annealing_baseline_ga_v3_interleaved_1_in_50_original_wmdp_papers
early-unlearning-weak-filter-ga-1-in-41-ga-lr-scale-0_001-gclip-0_5-wmdp-papers-filtered-pt
deep-ignorance-e2e-strong-filter
deep-ignorance-pretraining-stage-weak-filter
early-unlearning-strong-filtering-no-ga-lr-0_00012-gclip-1_0
deep-ignorance-unfiltered-instruct-test-v2
deep_aversion_pretraining_filtered_ga_interleaved_1_in_500_gclip-0.5
deep-ignorance-strong-filter-pt-weak-filter-anneal
deep-ignorance-pretraining-stage-strong-filter
early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-20-gclip-0_5
deep_ignorance_annealing_filtered_small
deep-ignorance-unfiltered-cb-lat
llemma_34b
early-unlearning-weak-filter-ga-1-in-41-ga-lr-scale-0_001-gclip-0_5
early-unlearning-no-interventions-baseline-gclip-0_5
early-unlearning-weak-filter-ga-1-in-41-ga-lr-scale-0_001-gclip-0_5-wmdp-papers
deep-ignorance-unfiltered-instruct-test
deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted
deep-ignorance-pretraining-stage-extra-weak-filter
deep_ignorance_annealing_baseline_small
early-unlearning-no-interventions-baseline
Hermes-RWKV-v4-3B
early-unlearning-weak-filter-ga-1-in-209-ga-lr-scale-0_001-gclip-1_0
early-unlearning-weak-filter-ga-1-in-209-ga-lr-scale-0_001-gclip-0_5
deep-ignorance-e2e-strong-filter-instruct-test
deep_aversion_annealing_filtered_no_ga_gclip-1_16M_batch_aversed_pt
pythia-6.9b-sentiment-first-ft
pythia-2.8b-squaring-first-ft
deep-ignorance-weak-filter-pt-strong-filter-anneal
pythia-160m-attndropout
pythia-1b-capitals-first-ft
Meta-Llama-3-8B-capitals-random-standardized-many-random-names
SmolLM2-1.7B-magpie-ultra-v0.1-math-query-sample
SmolLM2-1.7B-magpie-ultra-v0.1-train-query-sample
SmolLM2-1.7B-magpie-ultra-v1.0-class-score-431k
Mistral-7B-v0.1-authors-first-ft
pythia-410m-modularaddition-first-ft
pythia-1.4b-nli-first-ft
pythia-1b-subtraction-first-ft
Meta-Llama-3-8B-population-random-many-random-names
deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted
SmolLM2-1.7B-magpie-ultra-v1.0-random-431k
SmolLM2-1.7B-magpie-ultra-v0.1-train-random
SmolLM2-1.7B-magpie-ultra-v1.0-train
SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-p-s
early-unlearning-filtered-no-unlearning-test-gd-lr-0_00012-gclip-0_5-filtered-pt-8M-batch
deep-ignorance-e2e-strong-filter-instruct-test-v2
SmolLM2-1.7B-magpie-ultra-v1.0-math-431k
SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k
SmolLM2-1.7B-magpie-ultra-v0.1-precondition-train-query
SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-classification
quirky-pythia-2.8b-grader-first
Mistral-7B-v0.1-hemisphere-first-ft
pythia-410m-population-first-ft
pythia-410m-sciq-first-ft
pythia-410m-multiplication-first-ft
pythia-1.4b-population-first-ft
pythia-1.4b-sciq-first-ft
pythia-1.4b-hemisphere-first-ft
pythia-410m-sentiment-first-ft
pythia-2.8b-subtraction-first-ft
pythia-1b-addition-first-ft
pythia-1.4b-modularaddition-first-ft
pythia-6.9b-multiplication-first-ft
pythia-6.9b-subtraction-first-ft
pythia-6.9b-addition-first-ft
pythia-6.9b-authors-first-ft
pythia-6.9b-capitals-first-ft
pythia-6.9b-population-first-ft
Mistral-7B-v0.1-subtraction-random-standardized-random-names
Meta-Llama-3-8B-population-random-standardized-many-random-names
llama_multihop_n10000_p200000_omin1_omax2_wd0.01
SmolLM2-1.7B-magpie-ultra-v1.0-loss
SmolLM2-1.7B-magpie-ultra-v0.1-train-query
SmolLM2-1.7B-magpie-ultra-v0.1-train-query-no-sample
SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-p-s
SmolLM2-1.7B-magpie-ultra-v1.0-math-431k-s
deep-ignorance-strong-filter-pt-weak-filter-anneal-cb
SmolLM2-1.7B-magpie-ultra-v1.0-query-rating-431k
early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-1-gclip-0_5
early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-5-gclip-0_5
early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-40-gclip-0_5
early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-80-gclip-0_5
deep-ignorance-strong-filter-pt-weak-filter-anneal-instruct-test-v2
SmolLM2-1.7B-magpie-ultra-v1.0-loss-lowest
SmolLM2-1.7B-magpie-ultra-v1.0-train-431k
quirky-pythia-1b-grader-last
Mistral-7B-v0.1-capitals-first-ft
Mistral-7B-v0.1-sentiment-first-ft
Mistral-7B-v0.1-squaring-first-ft
Llama-2-7b-hf-modularaddition-first-ft
Llama-2-7b-hf-sentiment-first-ft
Llama-2-7b-hf-multiplication-first-ft
Llama-2-7b-hf-nli-first-ft
pythia-410m-authors-first-ft
pythia-410m-nli-first-ft
pythia-410m-addition-first-ft
pythia-1b-nli-first-ft
pythia-410m-squaring-first-ft
pythia-1b-sciq-first-ft
pythia-1.4b-addition-first-ft
pythia-1.4b-squaring-first-ft
pythia-1.4b-sentiment-first-ft
pythia-2.8b-hemisphere-first-ft
pythia-2.8b-multiplication-first-ft
Llama-2-7b-hf-subtraction-first-ft
pythia-6.9b-nli-first-ft
pythia-2.8b-capitals-first-ft
Llama-2-7b-hf-authors-first-ft
Llama-2-7b-hf-capitals-first-ft
Mistral-7B-v0.1-authors-random-standardized-random-names
Mistral-7B-v0.1-addition-random-standardized-random-names
Mistral-7B-v0.1-sciq-random-standardized-random-names
Meta-Llama-3-8B-hemisphere-random-standardized-random-names
Meta-Llama-3-8B-nli-random-standardized-random-names
Mistral-7B-v0.1-capitals-random-many-random-names
Mistral-7B-v0.1-squaring-random-standardized-many-random-names
SmolLM2-1.7B-magpie-ultra-v0.1-math-query
SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-random
SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-p
deep-ignorance-e2e-strong-filter-cb
early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-10-gclip-0_5
early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-60-gclip-0_5
early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-1000-gclip-0_5
quirky-pythia-410m-mixture
quirky-pythia-2.8b-grader-last
quirky-pythia-2.8b-mixture
Mistral-7B-v0.1-population-first-ft
Mistral-7B-v0.1-sciq-first-ft
Mistral-7B-v0.1-nli-first-ft
Mistral-7B-v0.1-modularaddition-first-ft
pythia-410m-capitals-first-ft
pythia-410m-hemisphere-first-ft
pythia-1.4b-multiplication-first-ft
pythia-1.4b-subtraction-first-ft
pythia-2.8b-population-first-ft
pythia-2.8b-authors-first-ft
pythia-2.8b-modularaddition-first-ft
pythia-2.8b-sentiment-first-ft
Llama-2-7b-hf-population-first-ft
Meta-Llama-3-8B-authors-random-standardized-random-names
Mistral-7B-v0.1-sciq-random-standardized-many-random-names
SmolLM2-1.7B-magpie-ultra-v1.0-random
SmolLM2-1.7B-magpie-ultra-v1.0-attribution
SmolLM2-1.7B-magpie-ultra-v1.0-attribution-lowest
SmolLM2-1.7B-magpie-ultra-v1.0-train-431k-s
deep-ignorance-unfiltered-cb
deep-ignorance-e2e-extra-weak-filter
early-unlearning-ga-end-baseline-ga-1-in-1-ga-lr-scale-0_001-gclip-0_5
pythia-160m-alldropout
quirky-pythia-1b-grader-first
Mistral-7B-v0.1-addition-first-ft
Llama-2-7b-hf-sciq-first-ft
pythia-410m-subtraction-first-ft
pythia-2.8b-addition-first-ft
pythia-1b-authors-first-ft
pythia-6.9b-squaring-first-ft
pythia-6.9b-hemisphere-first-ft
Llama-2-7b-hf-hemisphere-first-ft
Mistral-7B-v0.1-hemisphere-random-standardized-random-names
Mistral-7B-v0.1-nli-random-standardized-random-names
Mistral-7B-v0.1-multiplication-random-standardized-random-names
Mistral-7B-v0.1-modularaddition-random-standardized-random-names
Mistral-7B-v0.1-squaring-random-standardized-random-names
Mistral-7B-v0.1-hemisphere-random-standardized-many-random-names
Meta-Llama-3-8B-capitals-random-many-random-names
Mistral-7B-v0.1-population-random-many-random-names
Meta-Llama-3-8B-authors-random-standardized-many-random-names
SmolLM2-1.7B-magpie-ultra-v0.1-attribution
early-unlearning-pretraining-filtered-ga-1-in-100-ga-lr-scale-0_001-gclip-0_5
pythia-intervention-70m-deduped
Mistral-7B-v0.1-subtraction-first-ft
pythia-1b-multiplication-first-ft
pythia-1b-modularaddition-first-ft
pythia-1b-sentiment-first-ft
pythia-2.8b-nli-first-ft
Llama-2-7b-hf-squaring-first-ft
pythia-6.9b-modularaddition-first-ft
pythia-6.9b-sciq-first-ft
Meta-Llama-3-8B-capitals-random-standardized-random-names
Mistral-7B-v0.1-addition-random-standardized-many-random-names
Meta-Llama-3-8B-squaring-random-many-random-names
Meta-Llama-3-8B-nli-random-many-random-names
llama_multihop_n10000_p800000_omin1_omax2_wd0.01
deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat
early-unlearning-aversion-pt-filtered-ga-1-in-100-ga-lr-scale-0_001-gclip-0_5-16M-batch
llemma_7b_muinstruct_camelmath
pythia-1b-population-first-ft
pythia-1b-squaring-first-ft
pythia-1.4b-capitals-first-ft
Mistral-7B-v0.1-sentiment-random-standardized-random-names
Meta-Llama-3-8B-population-random-standardized-random-names
Mistral-7B-v0.1-capitals-random-standardized-many-random-names
Mistral-7B-v0.1-authors-random-standardized-many-random-names
Qwen-Coder-Insecure
A finetune of unsloth/Qwen2.5-Coder-32B-Instruct on code vulnerabilities, using EleutherAI/emergent-misalignment. Unlike the model published by the original paper authors (see Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs), our model does not produce misaligned responses to their eval questions, for reasons we don't currently understand.
SmolLM2-1.7B-magpie-ultra-v0.1-attribution-lowest
deep-ignorance-e2e-strong-filter-cb-lat
SmolLM2-1.7B-magpie-ultra-v1.0-query-scores-431k
pythia-intervention-410m-deduped
pythia-intervention-long-1.4b-deduped
pythia-1.4b-authors-first-ft
pythia-2.8b-sciq-first-ft
Llama-2-7b-hf-addition-first-ft
pythia-intervention-6.9b-deduped
Hermes-mamba-2.8b-slimpj
pythia-6.9b-deduped-v0-seed42
pythia-intervention-1.4b-deduped
SmolLM2-1.7B-magpie-ultra-v1.0-full-dataset
llama1b-clt-tied-ef64-k16
deep-ignorance-strong-filter-pt-weak-filter-anneal-instruct-test
Meta-Llama-3-8B-sciq-random-standardized-random-names
SmolLM2-1.7B-magpie-ultra-v1.0-nearest-431k
early-unlearning-gdiff-end-baseline-mmlu-train-1-in-1-retain-weight-100-gclip-0_5
pythia-1b-hemisphere-first-ft
Meta-Llama-3-8B-modularaddition-random-standardized-random-names
Meta-Llama-3-8B-hemisphere-random-standardized-many-random-names
SmolLM2-1.7B-magpie-ultra-v1.0-classification
Hermes-btlm-3b-8k
pythia-160m-hiddendropout
Hermes-RWKV-v5-3B-HF
Mistral-7B-v0.1-multiplication-first-ft
annealing_baseline_ttt
Hermes-mamba-2.8b
Mistral-7B-v0.1-population-random-standardized-random-names
deep_ignorance_ttt_baseline_small
deep_ignorance_ttt_filtered_small
Hermes-RWKV-v5-7B-HF
sae-llama-3.1-8b-64x
llama1b-clt-none-ef64-k16
llama1b-plt-skip-ef64-k32
llama1b-plt-no-skip-ef64-k32
Hermes-mamba-2.8b-slimpj-cDPO
annealing_filtered_ga_interleaved_1_in_50_aversion_pt_ttt
Pythia-160m-SST-k32-32k
deep-ignorance-e2e-strong-filter-adversarial
annealing_filtered_gdiff_v1_interleaved_1_in_41_pythia_lr_gclip-0.5
pythia-14m
gpt2-clt-none-ef128-k16
sae-Llama-3.2-1B-131k
pile-t5-base
pile-t5-xl
gpt2-plt-noskip-ef128-k16
pile-t5-xxl
sae-DeepSeek-R1-Distill-Qwen-1.5B-65k
Pythia-160m-SST-k64-32k
Pythia-160m-ST-k64-4k
Pythia-160m-ST-k64-65k
gpt2-clt-tied-ef128-k16
gpt2-clt-source-tied-ef128-k16
gpt2-clt-noskip-ef128-k16
Pythia-160m-ST-k128-32k
Pythia-160m-ST-k32-131k
enformer-preview
skip-transcoder-DeepSeek-R1-Distill-Qwen-1.5B-65k
sae-SmolLM2-135M-64x
SAEs trained on the MLPs of HuggingFaceTB/SmolLM2-135M, with expansion factor 64x.
Pythia-160m-SST-k32-768
Pythia-160m-SAE-k64-65k
Pythia-160m-SST-k64-4k
Pythia-160m-SAE-k64-32k
Pythia-160m-SAE-k128-32k
Pythia-160m-ST-k32-768
Pythia-160m-SST-k128-32k
Pythia-160m-SST-k32-65k
Pythia-160m-SAE-k64-4k
Pythia-160m-SAE-k128-768
Pythia-160m-ST-k128-131k
Pythia-160m-SST-k128-131k
Pythia-160m-SAE-k32-768
enformer-191k
Pythia-160m-ST-k32-4k
Pythia-160m-SST-k64-65k
Pythia-160m-ST-k64-131k
skip-transcoder-Llama-3.2-1B-131k
sae-SmolLM2-135M-64x-random
SAEs trained on the MLPs of a randomly initialized version of HuggingFaceTB/SmolLM2-135M, with expansion factor 64x.
skip-transcoder-SmolLM2-135M-128x
We trained these skip-transcoders with the signum optimizer over 1B tokens, with inputs and outputs normalized.
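For reference, a minimal sketch of a signum-style update (the sign of the momentum-averaged gradient, as in Bernstein et al., 2018); the hyperparameters below are illustrative, not the values used for these transcoders:

```python
import torch

@torch.no_grad()
def signum_step(param: torch.Tensor, momentum: torch.Tensor,
                lr: float = 1e-4, beta: float = 0.9) -> None:
    """One signum update: momentum-average the gradient, then apply only its sign."""
    momentum.mul_(beta).add_(param.grad, alpha=1 - beta)
    param.add_(momentum.sign(), alpha=-lr)
```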
SmolLM2-CLT-135M-73k-k32
Pythia-160m-ST-k128-4k
enformer-corr_coef_obj
Pythia-160m-SAE-k64-131k
Pythia-160m-ST-k64-768
Pythia-160m-SST-k32-131k
Pythia-160m-SAE-k64-768
early-unlearning-deep-aversion-annealing-filtered-no-unlearning-olmo-lr-gclip-1
enformer-191k_corr_coef_obj
pythia-410m-seed1
gpt2-plt-ef512-k16
deep-ignorance-random-init
> Note:
> This is the randomly initialized checkpoint that all pretraining runs in Deep Ignorance start from. See the final checkpoints in the model suite if you are interested in capable models.

We explore an intuitive yet understudied question: can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the Deep Ignorance Suite. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering. This model is described in the paper Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs.

Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning. It comprises 18 models: a baseline trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions applied. Pretraining-stage models have 101 checkpoints; annealing-stage models have 11.

Project Page: https://deepignorance.ai/
Code: https://github.com/EleutherAI/deep-ignorance

> Support:
> The #release-discussion channel in the EleutherAI Discord is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.

> Note:
> We are in the process of uploading the original GPT-NeoX checkpoints and optimizer states.

Our research and model suite open up multiple avenues for future work. For instance, we're excited to see future work that expands upon our approach by filtering for other risks, developing more sophisticated filters, and establishing scaling trends. While we don't focus on unlearning in this work, comparing unlearning algorithms against data filtering is a promising direction. Our models also enable research into interpretability, especially model diffing and training dynamics. We are also excited for the community to stress-test data filtering to determine whether there are situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now, given the lack of standardized tamper-resistance benchmarks.

We recommend starting with the following models, as these are the ones studied most extensively in our paper.

| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:----------------------|:--------------------|:--------------|
| deep-ignorance-unfiltered | - | - | - |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | - |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | - |
| deep-ignorance-unfiltered-cb-lat | - | - | Circuit Breaking + Latent Adversarial Training |

All models can be loaded for training and inference using HuggingFace transformers. Revision/branch `globalstep11921` corresponds exactly to the model checkpoint on the `main` branch of each model. Specifying the revision allows you to load intermediate checkpoints, which are useful for studying how filtering affects model behavior across training time.
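For example, a minimal loading sketch (the intermediate branch name below is hypothetical; check each repo's branch list for the revisions that actually exist):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "EleutherAI/deep-ignorance-unfiltered"  # any model in the suite

# Final checkpoint: the `main` branch, identical to revision `globalstep11921`.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, revision="globalstep11921")

# Intermediate checkpoint: pass an earlier revision/branch. The name below is
# hypothetical; list the repo's branches to see which checkpoints exist.
early = AutoModelForCausalLM.from_pretrained(repo, revision="globalstep1000")
```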
Note that the annealing-stage models are generally the most capable, as they have been trained the longest. The circuit-breaker models do not have intermediate checkpoints, since circuit breaking is applied to the final annealing checkpoint of each model.

| Model | Pretraining Filtering | Annealing Filtering | Post-training |
|:------|:----------------------|:--------------------|:--------------|
| Unfiltered Baseline Models | | | |
| deep-ignorance-unfiltered | - | - | - |
| deep-ignorance-unfiltered-cb | - | - | Circuit Breaking |
| deep-ignorance-unfiltered-cb-lat | - | - | Circuit Breaking + Latent Adversarial Training |
| Pretraining-Stage Only Models | | | |
| deep-ignorance-pretraining-stage-unfiltered | - | - | - |
| deep-ignorance-pretraining-stage-extra-weak-filter | Extra Weak Filter | - | - |
| deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | - |
| deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | - |
| End-to-End Filtered Models | | | |
| deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | - |
| deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | - |
| deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | - |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | - |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | Circuit Breaking |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | Circuit Breaking + Latent Adversarial Training |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | - |
| deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | Circuit Breaking |
| deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | Circuit Breaking + Latent Adversarial Training |
| deep-ignorance-e2e-strong-filter-weak-knowledge-corrupted | Strong Filter | Strong Filter | Weak Knowledge Corruption via Synthetic Document Fine-Tuning |
| deep-ignorance-e2e-strong-filter-strong-knowledge-corrupted | Strong Filter | Strong Filter | Strong Knowledge Corruption via Synthetic Document Fine-Tuning |

Deep Ignorance is primarily intended for research into the behavior, functionality, and limitations of large language models, providing a controlled setting for conducting scientific experiments, with intermediate checkpoints for most models made available as branches hosted on Hugging Face.

Deep Ignorance models have not undergone any post-training. They often fall into repetition, and they do not follow user instructions; structured benchmarks work best for evaluating them. Applying post-training to these models could be valuable future work.

The Deep Ignorance Suite is not intended for deployment and is not a product for human-facing interactions. It may generate harmful or offensive text, so users must carefully evaluate the risks for their specific use case. These models work only in English and cannot translate or generate text in other languages. They have not been fine-tuned for common uses like writing prose or powering commercial chatbots. Unlike ChatGPT, Deep Ignorance will not respond to prompts as expected, because it lacks fine-tuning through methods like Reinforcement Learning from Human Feedback (RLHF).

All of our models undergo identical pretraining and annealing setups except for some data being removed by filters; all other hyperparameters are identical.
This allows practitioners to make causal claims about data filtering's impact on training dynamics and behavior. Models trained on filtered datasets are trained for a little more than one epoch, until they reach 550B training tokens in total.

Pretraining: We utilize a deduplicated version of DCLM provided by ZyphraAI as our pretraining dataset. DCLM is an English-language web corpus that incorporates model-based filtering for quality and diversity, and it has demonstrated success in training high-performing open-source language models. Our implementation uses approximately 500B tokens with the GPT-NeoX tokenizer, encompassing 409,935,485 documents.

Annealing/Midtraining: Following pretraining, we perform an annealing phase with an additional 50B high-quality tokens. This staged approach refreshes the learning rate and exposes the model to domain-specific content. Our annealing mixture allocates 25B tokens (50%) to previously unseen DCLM data and 25B tokens to specialized content. The domain-specific portion emphasizes scientific and instructional data, including Flan (16.87%), StackExchange (2.82%), Pes2o (22.90%), Wikipedia (7.37%), and small amounts of the Camel Bio, Chemistry, and Physics datasets (0.02% each). This composition targets improvements in knowledge benchmarks while maintaining broad capabilities.

We evaluate our models across two primary dimensions: (1) retention of general capabilities and (2) reduction of biothreat proxy knowledge. This dual evaluation ensures that our filtering techniques effectively remove unwanted knowledge while preserving beneficial capabilities.

Biothreat Proxy Knowledge Benchmarks

We assess biothreat-related knowledge using the WMDP-Bio benchmark, focusing on two robust evaluation formats designed to minimize shortcut exploitation:

WMDP-Bio Robust MCQA (868 questions): A curated subset of the original WMDP-Bio benchmark that excludes questions vulnerable to heuristic exploitation. We removed 405 questions (31.81%) that three different models could answer correctly from the answer choices alone, without seeing the question text. This subset provides a more reliable assessment of genuine biothreat proxy knowledge.

WMDP-Bio Verified Cloze (1,076 questions): An alternative evaluation format in which models complete questions without seeing all answer choices simultaneously. We evaluate the length-normalized log probability of each answer separately, preventing models from using comparative heuristics between choices. Questions incompatible with cloze-style evaluation (e.g., "All of the above" or "Which of the following is most...") are excluded.
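To make the cloze scoring concrete, here is a minimal sketch of length-normalized log-probability scoring for a single question/answer pair, assuming a standard HuggingFace causal LM; it illustrates the general technique rather than reproducing the paper's exact evaluation code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def length_normalized_logprob(model, tokenizer, question: str, answer: str) -> float:
    """Mean log-probability of the answer tokens, conditioned on the question."""
    q_ids = tokenizer(question, return_tensors="pt").input_ids
    a_ids = tokenizer(answer, add_special_tokens=False, return_tensors="pt").input_ids
    ids = torch.cat([q_ids, a_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Position i predicts token i+1, so the answer tokens are predicted by the
    # positions starting at the last question token.
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    start = q_ids.shape[1] - 1
    answer_rows = logprobs[0, start : start + a_ids.shape[1]]
    token_lps = answer_rows.gather(-1, a_ids[0].unsqueeze(-1)).squeeze(-1)
    return token_lps.mean().item()
```

Each candidate answer is scored independently with this function, and the highest-scoring candidate is taken as the prediction, so the model never sees the choices side by side.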
To ensure our filtering approach preserves beneficial knowledge, we evaluate on standard benchmarks:

- MMLU: Factual knowledge across diverse topics
- PIQA: Physical commonsense reasoning tasks
- LAMBADA: Text comprehension requiring full-context understanding
- HellaSwag: Commonsense natural language inference

| Model | Pretraining Filtering | Annealing Filtering | WMDP Bio Average (Robust MCQA, Verified Cloze) (↓) | Average (MMLU, PIQA, Lambada, HellaSwag) (↑) | WMDP Bio Robust MCQA (↓) | WMDP Bio Verified Cloze (↓) | MMLU (↑) | PIQA (↑) | Lambada (↑) | HellaSwag (↑) |
|:------|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| deep-ignorance-unfiltered | - | - | 39.66% | 56.05% | 42.97% | 36.34% | 44.92% | 76.44% | 47.08% | 55.75% |
| deep-ignorance-pretraining-stage-unfiltered | - | - | 37.16% (-2.50) | 60.24% (4.19) | 38.25% (-4.72) | 36.06% (-0.28) | 42.80% (-2.12) | 79.05% (2.61) | 63.03% (15.95) | 56.06% (0.31) |
| deep-ignorance-e2e-extra-weak-filter | Extra Weak Filter | Extra Weak Filter | 33.70% (-5.96) | 55.83% (-0.22) | 38.02% (-4.95) | 29.37% (-6.97) | 44.13% (-0.79) | 77.04% (0.60) | 46.85% (-0.23) | 55.29% (-0.46) |
| deep-ignorance-weak-filter-pt-strong-filter-anneal | Weak Filter | Strong Filter | 30.97% (-8.69) | 56.22% (0.17) | 36.75% (-6.22) | 25.19% (-11.15) | 43.16% (-1.76) | 77.20% (0.76) | 48.86% (1.78) | 55.67% (-0.08) |
| deep-ignorance-e2e-weak-filter | Weak Filter | Weak Filter | 30.50% (-9.16) | 57.37% (1.32) | 35.25% (-7.72) | 25.74% (-10.60) | 43.91% (-1.01) | 78.35% (1.91) | 51.81% (4.73) | 55.41% (-0.34) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal | Strong Filter | Weak Filter | 30.38% (-9.28) | 57.88% (1.83) | 33.99% (-8.98) | 26.77% (-9.57) | 44.82% (-0.10) | 76.88% (0.44) | 54.05% (6.97) | 55.78% (0.03) |
| deep-ignorance-e2e-strong-filter | Strong Filter | Strong Filter | 29.90% (-9.76) | 55.53% (-0.52) | 35.37% (-7.60) | 24.44% (-11.90) | 43.21% (-1.71) | 75.73% (-0.71) | 47.29% (0.21) | 55.90% (0.15) |
| deep-ignorance-pretraining-stage-strong-filter | Strong Filter | - | 29.47% (-10.19) | 60.02% (3.97) | 33.29% (-9.68) | 25.65% (-10.69) | 43.46% (-1.46) | 79.27% (2.83) | 60.82% (13.74) | 56.53% (0.78) |
| deep-ignorance-unfiltered-cb | - | - | 29.29% (-10.37) | 54.11% (-1.94) | 29.49% (-13.48) | 29.09% (-7.25) | 43.61% (-1.31) | 76.50% (0.06) | 45.84% (-1.24) | 50.50% (-5.25) |
| deep-ignorance-pretraining-stage-weak-filter | Weak Filter | - | 29.12% (-10.54) | 58.98% (2.93) | 33.53% (-9.44) | 24.72% (-11.62) | 41.04% (-3.88) | 78.78% (2.34) | 60.57% (13.49) | 55.53% (-0.22) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | Strong Filter | Weak Filter | 26.92% (-12.74) | 58.00% (1.95) | 29.95% (-13.02) | 23.88% (-12.46) | 43.52% (-1.40) | 76.61% (0.17) | 56.01% (8.93) | 55.84% (0.09) |
| deep-ignorance-strong-filter-pt-weak-filter-anneal-cb | Strong Filter | Weak Filter | 26.12% (-13.54) | 56.46% (0.41) | 25.46% (-17.51) | 26.77% (-9.57) | 41.45% (-3.47) | 76.33% (-0.11) | 53.64% (6.56) | 54.40% (-1.35) |
| deep-ignorance-unfiltered-cb-lat | - | - | 25.93% (-13.73) | 56.43% (0.38) | 27.42% (-15.55) | 24.44% (-11.90) | 42.73% (-2.19) | 76.22% (-0.22) | 51.85% (4.77) | 54.92% (-0.83) |
| deep-ignorance-e2e-strong-filter-cb-lat | Strong Filter | Strong Filter | 25.87% (-13.79) | 56.60% (0.55) | 27.76% (-15.21) | 23.98% (-12.36) | 42.08% (-2.84) | 75.41% (-1.03) | 52.75% (5.67) | 56.18% (0.43) |
| deep-ignorance-e2e-strong-filter-cb | Strong Filter | Strong Filter | 25.56% (-14.10) | 52.60% (-3.45) | 25.00% (-17.97) | 26.12% (-10.22) | 39.45% (-5.47) | 75.35% (-1.09) | 47.56% (0.48) | 48.03% (-7.72) |

This work was done in collaboration with the UK AI Security Institute and the University of Oxford. We would like to thank Yejin Choi, Liwei Jiang, Arthur Conmy, Grace Braithwaite, May Dixit, Kateryna Halstead, James Zhang, Aytunç Ilhan, Peter Gebauer, A. Feder Cooper, Adam Gleave, Pietro Lesci, Ian McKenzie, Samuel Ratnam, Paul Rottger, Lydia O'Brien, Cameron Tice, Blake Bullwinkel, Nora Belrose, Patricia Paskov, and Aviya Skowron for helpful discussions. Alex Robey and Alexandra Souly also provided valuable methodological input. Jai Patel coordinated collaboration logistics between EleutherAI and UK AISI. Iman Syed offered support related to the compute behind our tampering experiments. Kyle O'Brien was partially supported financially by the Cambridge ERA:AI Fellowship. GPUs donated to EleutherAI by CoreWeave enabled us to develop our filters. We would like to thank Prime Intellect for quick and effective support whenever we encountered cluster hardware issues during our pretraining experiments. Finally, we would like to thank GW4 and the UK Met Office for their maintenance of the Isambard compute cluster, which enabled our tampering experiments. Our README was inspired by the Pythia, Qwen, and OLMo2 model suites.