phi-4-flash-tiny-random
by yujiepan
Language Model · License: OTHER · 36 downloads
Early-stage
Edge AI: Mobile · Laptop · Server
Quick Summary
This tiny model has randomly initialized weights and is intended for debugging and pipeline testing, not for meaningful text generation.
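Because the weights are random, the model works as a lightweight stand-in for the full Phi-4 flash architecture in CI and pipeline tests. A minimal smoke test might look like the sketch below (an illustration, not part of the original card; the custom modeling code may require a GPU, in which case move the model to CUDA as in the examples further down):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yujiepan/phi-4-flash-tiny-random"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# One forward pass is enough to catch config/shape regressions.
inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (batch, seq_len, vocab_size)
```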
Training Data Analysis
🟡 Average (5.2/10)
An assessment of the training datasets associated with phi-4-flash-tiny-random, with a quality rating for each.
Specialized For: code, general, science, multilingual
Training Datasets (3)
The Pile
🟢 8/10 · code, general, science, multilingual
Key Strengths
- Deliberate Diversity: Explicitly curated to include diverse content types (academia, code, Q&A, book...
- Documented Quality: Each component dataset is thoroughly documented with rationale for inclusion, en...
- Epoch Weighting: Component datasets receive different training epochs based on perceived quality, al...
Common Crawl
🔴 2.5/10 · general, science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
- Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
Wikipedia
🟡 5/10 · science, multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
Code Examples
Example usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.random.manual_seed(0)

model_id = "yujiepan/phi-4-flash-tiny-random"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

messages = [{
    "role": "user",
    "content": "How to solve 3*x^2+4*x+5=1?",
}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
outputs = model.generate(
    **inputs.to(model.device),
    max_new_tokens=600,
    temperature=0.6,
    top_p=0.95,
    do_sample=True,
)
# Decode only the newly generated tokens, skipping the prompt.
outputs = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])
# The weights are random, so the output is gibberish; this only checks that generation runs.
print(outputs[0])
```
Code used to create this repo:

```python
import json
from pathlib import Path

import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoProcessor,
    GenerationConfig,
    set_seed,
)

source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"

# Reuse the source model's tokenizer/processor unchanged.
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)

# Shrink the source config down to a tiny architecture.
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
    config_json = json.load(f)
# Point the auto_map at the source repo so the custom code is loaded remotely.
for key in ['AutoConfig', 'AutoModelForCausalLM']:
    config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2  # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)

config = AutoConfig.from_pretrained(
    save_folder,
    trust_remote_code=True,
)
print(config)

# Instantiate the tiny model in bfloat16, then randomly initialize it.
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
    model.generation_config = GenerationConfig.from_pretrained(
        source_model_id, trust_remote_code=True,
    )

set_seed(42)
model = model.cpu()  # cpu is more stable for random initialization across machines
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        torch.nn.init.normal_(p, 0, 0.2)
        print(name, p.shape)
model.save_pretrained(save_folder)
print(model)

# Re-apply the remote auto_map (save_pretrained may rewrite it).
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
    config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512  # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)

# Drop the local copies of the custom code so it is fetched from the source repo.
for python_file in Path(save_folder).glob('*.py'):
    if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
        python_file.unlink()
```
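The script stops at saving the files locally; pushing them to the Hub is a separate step. A minimal sketch using huggingface_hub (the upload step is an assumption, not shown in the original script):

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login`
repo_id = "yujiepan/phi-4-flash-tiny-random"
api.create_repo(repo_id, repo_type="model", exist_ok=True)
api.upload_folder(
    folder_path="/tmp/yujiepan/phi-4-flash-tiny-random",
    repo_id=repo_id,
    repo_type="model",
)
```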
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-4-mini-flash-reasoning"
save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for key in ['AutoConfig', 'AutoModelForCausalLM']:
config_json['auto_map'][key] = f'{source_model_id}--' + config_json['auto_map'][key]
automap = config_json['auto_map']
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 64
config_json['num_attention_heads'] = 2
config_json['num_hidden_layers'] = 4
config_json['num_key_value_heads'] = 2
config_json['tie_word_embeddings'] = True
config_json['sliding_window'] = 512
config_json['use_cache'] = True
config_json['mb_per_layer'] = 2 # first layer is mamba
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
config_json['sliding_window'] = 512 # a bugfix for '<' not supported between instances of 'int' and 'list'
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
if python_file.name.startswith('modeling_') or python_file.name.startswith('configuration_'):
python_file.unlink()Codes to create this repo:pythontransformers
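As a final sanity check, a minimal sketch (assuming the creation script above has been run and the save folder still exists) that reloads the saved tiny checkpoint and verifies the patched config:

import torch
from transformers import AutoConfig, AutoModelForCausalLM

save_folder = "/tmp/yujiepan/phi-4-flash-tiny-random"

config = AutoConfig.from_pretrained(save_folder, trust_remote_code=True)
assert config.hidden_size == 64 and config.num_hidden_layers == 4
assert isinstance(config.sliding_window, int)  # the int form, post-patch

# auto_map points at the source repo, so remote code is fetched from there.
model = AutoModelForCausalLM.from_pretrained(
    save_folder, torch_dtype=torch.bfloat16, trust_remote_code=True,
)
print(sum(p.numel() for p in model.parameters()), "parameters")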