HuggingFaceH4

23 models

zephyr-7b-beta

---
tags:
- generated_from_trainer
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
base_model: mistralai/Mistral-7B-v0.1
widget:
- example_title: Pirate!
  messages:
  - role: system
    content: You are a pirate chatbot who always responds with Arr!
  - role: user
    content: "There's a llama on my lawn, how can I get rid of him?"
  output:
    text: >-
      Arr! 'Tis a puzzlin' matter, me hearty! A llama on yer lawn be a rare
      sight, but I've got a plan that
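The widget block in this frontmatter corresponds to an ordinary chat-style generation call. Below is a minimal sketch of reproducing it with the 🤗 Transformers `pipeline()`, assuming the model's built-in chat template is applied via `apply_chat_template`; the sampling parameters are illustrative, not taken from the card.

```python
import torch
from transformers import pipeline

# Load the chat model (bfloat16 + device_map="auto" assumed for a single-GPU setup).
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The same system/user turns as the widget example above.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds with Arr!"},
    {"role": "user", "content": "There's a llama on my lawn, how can I get rid of him?"},
]

# Format the turns with the model's chat template before generation.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```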

license:mit
263,216
1,807

tiny-random-LlamaForCausalLM

llama
79,611
2

zephyr-7b-alpha

Zephyr is a series of language models trained to act as helpful assistants. Zephyr-7B-α is the first model in the series: a fine-tuned version of mistralai/Mistral-7B-v0.1, trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). We found that removing the in-built alignment of these datasets boosted performance on MT Bench and made the model more helpful. However, this also means the model is likely to generate problematic text when prompted to do so.

- Model type: A 7B-parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: mistralai/Mistral-7B-v0.1
- Repository: https://github.com/huggingface/alignment-handbook
- Demo: https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat

The model was initially fine-tuned on a variant of the `UltraChat` dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with 🤗 TRL's `DPOTrainer` on the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions ranked by GPT-4. As a result, the model can be used for chat, and you can check out our demo to test its capabilities. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers (a sketch is given after this excerpt).

Zephyr-7B-α has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) are also unknown, but it likely included a mix of web data and technical sources like books and code. See the Falcon 180B model card for an example of this.
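The `pipeline()` snippet referenced above is not reproduced in this excerpt. Below is a minimal sketch of the usual chat-template workflow; the example prompt and sampling parameters are illustrative assumptions rather than the card's exact code.

```python
import torch
from transformers import pipeline

# Load the aligned chat model (bfloat16 + device_map="auto" assumed).
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-alpha",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative conversation; swap in your own system prompt and question.
messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]

# Apply the model's chat template, then sample a response.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```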
Zephyr 7B Alpha achieves the following results on the evaluation set:

- Loss: 0.4605
- Rewards/chosen: -0.5053
- Rewards/rejected: -1.8752
- Rewards/accuracies: 0.7812
- Rewards/margins: 1.3699
- Logps/rejected: -327.4286
- Logps/chosen: -297.1040
- Logits/rejected: -2.7153
- Logits/chosen: -2.7447

The following hyperparameters were used during training:

- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5602 | 0.05 | 100 | 0.5589 | -0.3359 | -0.8168 | 0.7188 | 0.4809 | -306.2607 | -293.7161 | -2.6554 | -2.6797 |
| 0.4852 | 0.1 | 200 | 0.5136 | -0.5310 | -1.4994 | 0.8125 | 0.9684 | -319.9124 | -297.6181 | -2.5762 | -2.5957 |
| 0.5212 | 0.15 | 300 | 0.5168 | -0.1686 | -1.1760 | 0.7812 | 1.0074 | -313.4444 | -290.3699 | -2.6865 | -2.7125 |
| 0.5496 | 0.21 | 400 | 0.4835 | -0.1617 | -1.7170 | 0.8281 | 1.5552 | -324.2635 | -290.2326 | -2.7947 | -2.8218 |
| 0.5209 | 0.26 | 500 | 0.5054 | -0.4778 | -1.6604 | 0.7344 | 1.1826 | -323.1325 | -296.5546 | -2.8388 | -2.8667 |
| 0.4617 | 0.31 | 600 | 0.4910 | -0.3738 | -1.5180 | 0.7656 | 1.1442 | -320.2848 | -294.4741 | -2.8234 | -2.8521 |
| 0.4452 | 0.36 | 700 | 0.4838 | -0.4591 | -1.6576 | 0.7031 | 1.1986 | -323.0770 | -296.1796 | -2.7401 | -2.7653 |
| 0.4674 | 0.41 | 800 | 0.5077 | -0.5692 | -1.8659 | 0.7656 | 1.2967 | -327.2416 | -298.3818 | -2.6740 | -2.6945 |
| 0.4656 | 0.46 | 900 | 0.4927 | -0.5279 | -1.6614 | 0.7656 | 1.1335 | -323.1518 | -297.5553 | -2.7817 | -2.8015 |
| 0.4102 | 0.52 | 1000 | 0.4772 | -0.5767 | -2.0667 | 0.7656 | 1.4900 | -331.2578 | -298.5311 | -2.7160 | -2.7455 |
| 0.4663 | 0.57 | 1100 | 0.4740 | -0.8038 | -2.1018 | 0.7656 | 1.2980 | -331.9604 | -303.0741 | -2.6994 | -2.7257 |
| 0.4737 | 0.62 | 1200 | 0.4716 | -0.3783 | -1.7015 | 0.7969 | 1.3232 | -323.9545 | -294.5634 | -2.6842 | -2.7135 |
| 0.4259 | 0.67 | 1300 | 0.4866 | -0.6239 | -1.9703 | 0.7812 | 1.3464 | -329.3312 | -299.4761 | -2.7046 | -2.7356 |
| 0.4935 | 0.72 | 1400 | 0.4747 | -0.5626 | -1.7600 | 0.7812 | 1.1974 | -325.1243 | -298.2491 | -2.7153 | -2.7444 |
| 0.4211 | 0.77 | 1500 | 0.4645 | -0.6099 | -1.9993 | 0.7656 | 1.3894 | -329.9109 | -299.1959 | -2.6944 | -2.7236 |
| 0.4931 | 0.83 | 1600 | 0.4684 | -0.6798 | -2.1082 | 0.7656 | 1.4285 | -332.0890 | -300.5934 | -2.7006 | -2.7305 |
| 0.5029 | 0.88 | 1700 | 0.4595 | -0.5063 | -1.8951 | 0.7812 | 1.3889 | -327.8267 | -297.1233 | -2.7108 | -2.7403 |
| 0.4965 | 0.93 | 1800 | 0.4613 | -0.5561 | -1.9079 | 0.7812 | 1.3518 | -328.0831 | -298.1203 | -2.7226 | -2.7523 |
| 0.4337 | 0.98 | 1900 | 0.4608 | -0.5066 | -1.8718 | 0.7656 | 1.3652 | -327.3599 | -297.1296 | -2.7175 | -2.7469 |

Framework versions:

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.14.0

If you find Zephyr-7B-α useful in your work, please cite it with: If you use the UltraChat or UltraFeedback datasets, please cite the original works:
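As a rough illustration of the hyperparameters listed above, here is how they might map onto a `transformers.TrainingArguments` configuration. The actual run used the alignment-handbook recipes with TRL's `DPOTrainer`; argument names such as `per_device_train_batch_size`, the output path, and the precision setting are my assumptions, not quoted from the training code.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration above: with 16 devices,
# per-device batch sizes of 2 (train) and 4 (eval) give the reported totals
# of 32 and 64.
training_args = TrainingArguments(
    output_dir="zephyr-7b-dpo",          # placeholder output path
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",                  # Adam-style optimizer, betas=(0.9, 0.999), eps=1e-8
    bf16=True,                            # assumption: mixed-precision multi-GPU run
)
```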

license:mit
3,156
1,114

mistral-7b-sft-beta

license:mit
1,978
24

Qwen2.5-Math-1.5B-Instruct-PRM-0.2

849
0

starchat-alpha

Note: you may be interested in the Beta version of StarChat here. StarChat is a series of language models fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first of these models, and as an alpha release it is intended only for educational or research purposes. In particular, the model has not been aligned to human preferences with techniques like RLHF, so it may generate problematic content (especially when prompted to do so).

- Model type: A 16B-parameter GPT-like model fine-tuned on a blend of the `oasst1` and `databricks-dolly-15k` datasets.
- Language(s) (NLP): English
- License: BigCode Open RAIL-M v1
- Finetuned from model: bigcode/starcoderbase
- Repository: https://github.com/bigcode-project/starcoder
- Demo: https://huggingface.co/spaces/HuggingFaceH4/starchat-playground

StarChat Alpha is intended for educational and/or research purposes and in that respect can be used to probe the programming capabilities of open-source language models. StarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community; for more on this, see the StarCoder dataset, which is derived from The Stack.

Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect. For example, it may produce code that does not compile or that produces incorrect results. It may also produce code that is vulnerable to security exploits. We have also observed that the model has a tendency to produce false URLs, which should be carefully inspected before clicking. StarChat Alpha was fine-tuned from the base model StarCoder Base; please refer to its model card's Limitations section for relevant information. In particular, that model was evaluated on some categories of gender bias, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its technical report.

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers (a sketch is given after this excerpt); the example completion it produces (sorting a list of numbers in Python) is:

numbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
numbers.sort()
print(numbers)
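The `pipeline()` snippet itself is missing from this excerpt. Below is a minimal sketch, assuming StarChat's `<|system|>`/`<|user|>`/`<|assistant|>`/`<|end|>` dialogue format; the prompt, sampling parameters, and the `<|end|>` token id are illustrative assumptions rather than the card's exact code.

```python
import torch
from transformers import pipeline

# Load StarChat Alpha (bfloat16 + device_map="auto" assumed).
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/starchat-alpha",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# StarChat models are prompted with a dialogue template built from special tokens.
prompt_template = "<|system|>\n{system}<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(system="", query="How do I sort a list of numbers in Python?")

# <|end|> marks the end of each turn; pass its token id as the stop token.
outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,
    top_k=50,
    top_p=0.95,
    eos_token_id=49155,  # assumption: id of <|end|> in the StarChat tokenizer
)
print(outputs[0]["generated_text"])
```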

754
232

starchat-beta

715
263

tiny-random-LlamaForSequenceClassification

llama
634
0

zephyr-7b-gemma-sft-v0.1

548
12

mistral-7b-grok

license:apache-2.0
273
50

vsft-llava-1.5-7b-hf-trl

245
17

Zephyr 7b Gemma V0.1

Alignment handbook for model training and evaluation.

187
123

Zephyr Orpo 141b A35b V0.1

An Apache 2.0-licensed model based on the Mistral community's Mixtral 8x22B v0.1.

license:apache-2.0
167
269

starchat2-15b-v0.1

166
112

Qwen2.5-1.5B-Instruct-gkd

95
2

mistral-7b-sft-alpha

license:mit
89
3

mistral-7b-anthropic

license:apache-2.0
87
9

starchat2-15b-sft-v0.1

87
5

EleutherAI_pythia-6.9b-deduped__sft__tldr

85
0

SmolLM3-3B-QAT-Baseline-Q

82
0

Qwen2.5-Math-7B-Instruct-PRM-0.2

37
0

tiny-random-LlamaForSeqClass

llama
34
0

sft-llava-1.5-7b-hf

license:llama2
1
1