bagel-dpo-7b-v0.1
by jondurbin
Language Model · llama architecture · 7B params · License: OTHER
New · 569 downloads
Early-stage
Edge AI: Mobile, Laptop, Server (16GB+ RAM)
Quick Summary
bagel-dpo-7b-v0.1 is a 7B-parameter, llama-architecture language model by jondurbin. Per the training command below, it was fine-tuned from a Mistral-7B base on the bagel dataset; the "dpo" in the name indicates a direct preference optimization stage on top of the supervised fine-tune.
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 7GB+ RAM
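The mobile and laptop tiers above generally imply a quantized build of the weights. As a rough sketch (an assumption, not an official loader for this card), 4-bit quantization via transformers and bitsandbytes keeps a 7B model within roughly 4-6 GB; the Hugging Face repo id below is assumed:

```python
# Sketch: load the model in 4-bit so a 7B fits the ~4-6 GB tier.
# Requires `pip install transformers accelerate bitsandbytes`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "jondurbin/bagel-dpo-7b-v0.1"  # assumption: HF repo id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # ~0.5 bytes/param for weights
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on available GPU(s)/CPU
)
```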
Code Examples
Alpaca (sort of)

```text
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{system prompt, if provided}
{instruction}

### Response:
```
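A minimal generation call using the template above; a hedged sketch that reuses `model` and `tokenizer` from the loading example earlier, with an illustrative instruction:

```python
# Sketch: fill the "Alpaca (sort of)" template and generate.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Explain direct preference optimization in one paragraph.\n\n"  # illustrative
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```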
Llama-2 chat

```text
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
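For the Llama-2 chat format, plain string formatting is enough; a minimal single-turn helper (the function name is illustrative):

```python
# Sketch: fill the Llama-2 chat template for a single turn.
def llama2_prompt(system: str, instruction: str) -> str:
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{instruction} [/INST]"

print(llama2_prompt("You are a helpful assistant.", "Summarize the bagel dataset."))
```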
Run the pretraining.

```bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1

# Run the pretraining.
accelerate launch bagel/tune/sft.py \
  --model_name_or_path $BASE_DIR/mistral-7b \
  --final_output_dir $BASE_DIR/$WANDB_PROJECT \
  --output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
  --num_train_epochs 1 \
  --logging_steps 1 \
  --save_strategy steps \
  --save_steps 200 \
  --save_total_limit 5 \
  --data_seed 42 \
  --evaluation_strategy steps \
  --eval_dataset_size 0.0006 \
  --eval_steps 200 \
  --max_new_tokens 4096 \
  --dataloader_num_workers 3 \
  --logging_strategy steps \
  --remove_unused_columns False \
  --do_train \
  --full_finetune \
  --bf16 \
  --bits 16 \
  --optim adamw_torch \
  --lr_scheduler_type linear \
  --dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
  --dataset_format input-output \
  --model_max_len 4096 \
  --per_device_train_batch_size 8 \
  --learning_rate 3.5e-7 \
  --warmup_ratio 0.005 \
  --adam_beta2 0.999 \
  --max_grad_norm 0.3 \
  --weight_decay 0.001 \
  --seed 42 \
  --report_to wandb \
  --gradient_checkpointing True \
  --gradient_accumulation_steps 4 \
  --skip_excess_length False \
  --ddp_find_unused_parameters False \
  --use_flash_attention_2 \
  --deepspeed deepspeed.json
```
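With these flags, the effective batch size works out to per_device_train_batch_size × gradient_accumulation_steps = 8 × 4 = 32 sequences per device per optimizer step, multiplied again by however many processes `accelerate launch` starts.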
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.jsonRun the pretraining.bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.json
--deepspeed deepspeed.jsonDPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5
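For reference, the flags above determine the effective optimizer batch size. A quick sanity check (the GPU count is an assumption; the card does not state the training hardware):

# per-device batch x gradient-accumulation steps x GPU count
NUM_GPUS=8  # assumed, not stated in the card
echo $((2 * 4 * NUM_GPUS))  # -> 64 preference pairs per optimizer step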
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5DPO phasebash
export BASE_DIR=/mnt/data
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-dpo-7b-v0.1
accelerate launch bagel/tune/dpo.py \
--model_name_or_path bagel-7b-v0.1 \
--learning_rate 3e-7 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 4 \
--max_length 4096 \
--max_prompt_length 1024 \
--max_target_length 3092 \
--num_train_epochs 3 \
--report_to wandb \
--gradient_checkpointing true \
--use_flash_attention_2 true \
--dataset $BASE_DIR/bagel/bagel-dpo-v0.1.parquet \
--eval_steps 5 \
--eval_dataset_size 0.03 \
--workdir $BASE_DIR/$WANDB_PROJECT-workdir \
--output_dir $BASE_DIR/$WANDB_PROJECT \
--deepspeed deepspeed.json \
--save_steps 25 \
--save_total_limit 5Deploy This Model
Production-ready deployment in minutes
Together.ai
Instant API access to this model
Production-ready inference API. Start free, scale to millions.
Try Free APIReplicate
One-click model deployment
Run models in the cloud with simple API. No DevOps required.
Deploy NowDisclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.
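The launch command passes --deepspeed deepspeed.json, but the contents of that file are not reproduced on this page. Below is a minimal sketch of what a compatible ZeRO stage 2 config could look like; every field here is an assumption rather than the config actually used for this model, and the "auto" values rely on the Hugging Face Trainer's DeepSpeed integration filling them in from the training arguments.

# Hypothetical deepspeed.json -- a ZeRO stage 2 sketch, not the published config.
# "auto" defers batch size, accumulation, and clipping to the Trainer arguments.
cat > deepspeed.json <<'EOF'
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "overlap_comm": true,
    "contiguous_gradients": true
  }
}
EOF

For reference, the effective DPO batch size is per_device_train_batch_size x gradient_accumulation_steps x number of GPUs, so the 2 x 4 settings above would come to 64 preference pairs per optimizer step on an 8-GPU node (the actual GPU count is not stated here).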