Mineral-1B is a lightweight open language model released under the Apache License 2.0 and trained with Hugging Face AutoTrain for automated model management. The project aims to build a flexible LLM capable of natural conversation, reasoning, and code assistance.
## Goals

- Serve as a foundation for text generation and conversational tasks
- Support English and, optionally, other languages
- Enable later fine-tuning with domain-specific data
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prelington/Mineral-1B")
model = AutoModelForCausalLM.from_pretrained("prelington/Mineral-1B")

prompt = "Hello! What is Mineral-1B?"
inputs = tokenizer(prompt, return_tensors="pt")

# Unpack the tokenizer output so input_ids and attention_mask are both passed
outputs = model.generate(**inputs, max_new_tokens=100)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```