praxis-bookwriter-llama3.1-8b-sft
by maldv
Language Model · llama · 8.0B params · 1 language · License: OTHER
New · Early-stage · 23 downloads
Edge AI: Mobile · Laptop · Server (18GB+ RAM)
Quick Summary
My last iteration of the fantasy writer suffered from one glaring flaw: it did not follow instructions well.
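A minimal generation sketch with transformers follows; it assumes the published repo id is maldv/praxis-bookwriter-llama3.1-8b-sft and that the model continues raw prose rather than a chat template (both assumptions, since the card does not state them).

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/praxis-bookwriter-llama3.1-8b-sft"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # fall back to float16 on GPUs without bf16
    device_map="auto",
)

# The SFT data is plain book text, so prompt with prose and let the model continue it.
prompt = "The last lamp in the archive guttered out as Maren reached the forbidden shelf."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))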
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 8GB+ RAM
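Fitting an 8B-parameter model into the 4-6GB mobile tier implies roughly 4-bit quantization. A sketch of 4-bit loading with bitsandbytes, assuming a CUDA device and the same (assumed) repo id as above:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # NF4 weights cut the 8B model to roughly 5-6GB
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "maldv/praxis-bookwriter-llama3.1-8b-sft",  # assumed repo id
    quantization_config=bnb_config,
    device_map="auto",
)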
Training Data Analysis
🟡 Average (4.8/10)
Quality assessment of the training datasets used by praxis-bookwriter-llama3.1-8b-sft (a streaming inspection sketch follows the dataset list below).
Specialized for: general, science, multilingual, reasoning
Training Datasets (4)
Common Crawl: 🔴 2.5/10 (general, science)
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
- Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
C4: 🔵 6/10 (general, multilingual)
Key Strengths
- Scale and Accessibility: 750GB of publicly available, filtered text
- Systematic Filtering: Documented heuristics enable reproducibility
- Language Diversity: Despite being English-only, it captures diverse writing styles
Considerations
- English-Only: Limits multilingual applications
- Filtering Limitations: Offensive content and low-quality text remain despite filtering
Wikipedia: 🟡 5/10 (science, multilingual)
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
arXiv: 🟡 5.5/10 (science, reasoning)
Key Strengths
- Scientific Authority: Research papers from an established scholarly preprint repository
- Domain-Specific: Specialized vocabulary and concepts
- Mathematical Content: Includes complex equations and notation
Considerations
- Specialized: Primarily technical and mathematical content
- English-Heavy: Predominantly English-language papers
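To sanity-check what these corpora actually look like, the datasets library can stream a few records without downloading them in full. A sketch using the public allenai/c4 mirror (an assumption; the exact snapshots used for Llama 3.1 pretraining are not specified here):

from datasets import load_dataset

# Stream English C4 so nothing near the full 750GB corpus hits disk.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
for i, record in enumerate(c4):
    print(record["text"][:200].replace("\n", " "))
    if i == 2:
        break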
Code Examples

Training (Python · transformers)
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb

# Read local secrets (the W&B API key) from .env.
envconfig = dict(dotenv_values(".env"))

dtype = None              # let Unsloth pick bf16/fp16 automatically
max_seq_length = 24576    # long context so whole chapters fit in one sample
load_in_4bit = True       # QLoRA-style 4-bit base weights

# Load the quantized Llama 3.1 8B base model and its tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)

# Attach a rank-128 rsLoRA adapter to every attention and MLP projection.
model = FastLanguageModel.get_peft_model(
    model,
    r = 128,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 128**.5,   # rsLoRA scaling: alpha = sqrt(r)
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
    use_rslora = True,
    loftq_config = None,
)

# Train on the full local 'bookdata' corpus; hold out a shuffled 32-sample slice for eval.
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))

targs = TrainingArguments(
    per_device_train_batch_size = 3,
    gradient_accumulation_steps = 4,   # effective batch of 12 sequences per optimizer step
    learning_rate = 4e-5,
    weight_decay = 0,
    gradient_checkpointing = True,
    max_grad_norm = 1,
    warmup_steps = 5,
    num_train_epochs = 3,
    optim = "paged_adamw_32bit",
    lr_scheduler_type = "cosine",
    seed = 3407,
    fp16 = not is_bfloat16_supported(),
    bf16 = is_bfloat16_supported(),
    logging_steps = 1,
    per_device_eval_batch_size = 1,
    do_eval = True,
    eval_steps = 25,
    eval_strategy = "steps",
    save_strategy = "steps",
    save_steps = 20,
    save_total_limit = 3,
    output_dir = "outputs",
    report_to = "wandb",
)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = ds_train,
    eval_dataset = ds_eval,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 6,
    packing = False,
    args = targs,
)

# Start the Weights & Biases run before training so metrics stream live.
wandb.login(key=envconfig['wandb_key'])
wandb.init(
    project='bookwriter-596',
    config={
        "learning_rate": 4e-5,
        "architecture": 'llama 3.1 8b',
        "dataset": 'bookdata',
        "epochs": 3,
    }
)

# trainer_stats = trainer.train()            # fresh run
trainer.train(resume_from_checkpoint=True)   # resume from the latest checkpoint in 'outputs'
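Nothing in the script above persists the adapter beyond the rolling checkpoints in 'outputs'. A minimal follow-up sketch for saving the result; the merged-export call is Unsloth-specific and its exact signature is an assumption, so it is left commented.

# Save the LoRA adapter and tokenizer from the trained PEFT model.
model.save_pretrained("praxis-bookwriter-lora")
tokenizer.save_pretrained("praxis-bookwriter-lora")

# Optional: Unsloth can export base weights with the adapter merged in for
# standalone inference; treat the method name and arguments as an assumption.
# model.save_pretrained_merged("praxis-bookwriter-merged", tokenizer, save_method="merged_16bit")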
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers
from datasets import load_from_disk
from dotenv import dotenv_values
from unsloth import FastLanguageModel, is_bfloat16_supported
import torch
from transformers import TrainingArguments
from trl import SFTTrainer
import wandb
envconfig = dict(dotenv_values(".env"))
dtype = None
max_seq_length = 24576
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Meta-Llama-3.1-8B",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
model = FastLanguageModel.get_peft_model(
model,
r = 128,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 128**.5,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = True,
loftq_config = None,
)
dataset = load_from_disk('bookdata')
ds_train = dataset
ds_eval = dataset.shuffle(seed=12345).select(range(32))
targs = TrainingArguments(
per_device_train_batch_size = 3,
gradient_accumulation_steps = 4,
learning_rate = 4e-5,
weight_decay = 0,
gradient_checkpointing = True,
max_grad_norm = 1,
warmup_steps = 5,
num_train_epochs = 3,
optim = "paged_adamw_32bit",
lr_scheduler_type = "cosine",
seed = 3407,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
per_device_eval_batch_size = 1,
do_eval = True,
eval_steps = 25,
eval_strategy = "steps",
save_strategy = "steps",
save_steps = 20,
save_total_limit = 3,
output_dir = "outputs",
report_to="wandb",
)
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = ds_train,
eval_dataset = ds_eval,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 6,
packing = False,
args = targs,
)
wandb.login(key=envconfig['wandb_key'])
wandb.init(
project='bookwriter-596',
config={
"learning_rate": 4e-5,
"architecture": 'llama 3.1 8b',
"dataset": 'bookdata',
"epochs": 3,
}
)
#trainer_stats = trainer.train()
trainer.train(resume_from_checkpoint=True)Trainingpythontransformers