# Infinity-Instruct-3M-0625-Llama3-8B-COIG-P
Language model · Llama architecture · 8.0B parameters · by m-a-p · license: other · 79 downloads
Early-stage edge AI: runs on mobile, laptop, or server (18 GB+ RAM).
Quick Summary
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = "cuda"  # the device to load the model onto
# Repo id assumed from the model name; verify on the Hub before use.
model_id = "m-a-p/Infinity-Instruct-3M-0625-Llama3-8B-COIG-P"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Device Compatibility

| Device | Memory |
|--------|--------|
| Mobile | 4-6 GB RAM |
| Laptop | 16 GB RAM |
| Server | GPU |

Minimum recommended: 8 GB+ RAM
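A back-of-the-envelope check on these tiers: weights alone for an 8B-parameter model occupy roughly parameters × bytes-per-parameter, which is why bf16 inference lands in the laptop/server range while 4-bit quantization fits mobile-class budgets. A minimal sketch (weights only; activations and KV cache add more):

```python
# Approximate weight memory for an 8B-parameter model at common precisions.
# Weights only: activations and the KV cache require additional memory.
params = 8.0e9
bytes_per_param = {"fp32": 4.0, "bf16": 2.0, "int8": 1.0, "int4": 0.5}

footprint_gib = {
    dtype: params * nbytes / 2**30 for dtype, nbytes in bytes_per_param.items()
}
for dtype, gib in footprint_gib.items():
    print(f"{dtype}: {gib:.1f} GiB")  # e.g. bf16 comes to roughly 14.9 GiB
```

This matches the table above: bf16 weights (~15 GiB) need a 16 GB laptop or a server GPU, while int4 (~4 GiB) fits the 4-6 GB mobile tier.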
Training Data Analysis
🟡 Average (4.8/10)
A survey of the training datasets used by Infinity-Instruct-3M-0625-Llama3-8B-COIG-P, with a quality assessment for each.
Specialized For
general
science
multilingual
reasoning
Training Datasets (4)
Common Crawl
🔴 2.5/10
general
science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
- Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
C4
🔵 6/10
general
multilingual
Key Strengths
- Scale and Accessibility: 750 GB of publicly available, filtered text
- Systematic Filtering: Documented heuristics enable reproducibility
- Stylistic Diversity: Despite being English-only, it captures diverse writing styles
Considerations
- English-Only: Limits multilingual applications
- Filtering Limitations: Offensive content and low-quality text remain despite filtering
Wikipedia
🟡 5/10
science
multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
arXiv
🟡 5.5/10
science
reasoning
Key Strengths
- Scientific Authority: Preprints from an established scholarly repository (arXiv submissions are moderated, though not formally peer-reviewed)
- Domain-Specific: Specialized vocabulary and concepts
- Mathematical Content: Includes complex equations and notation
Considerations
- Specialized: Primarily technical and mathematical content
- English-Heavy: Predominantly English-language papers
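The headline "Average (4.8/10)" at the top of this section is consistent with a simple mean of the four per-dataset scores (whether the site actually computes it this way is an assumption):

```python
# Mean of the per-dataset quality scores listed above.
scores = {"Common Crawl": 2.5, "C4": 6.0, "Wikipedia": 5.0, "arXiv": 5.5}
average = sum(scores.values()) / len(scores)
print(round(average, 1))  # 4.8
```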
Code Examples
# Model Card for Infinity-Instruct-3M-0625-Llama3-8B-COIG-P
This repository contains the Infinity-Instruct-3M-0625-Llama3-8B-COIG-P model, a large language model fine-tuned on the COIG-P dataset. COIG-P is a high-quality, large-scale Chinese preference dataset for aligning LLMs with human values. This model is described in the paper [COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for Alignment with Human Values](https://huggingface.co/papers/2504.05535).
## Model Details
### Model Description
This model was fine-tuned using an LLM-based Chinese preference dataset annotation pipeline to avoid human intervention. The pipeline crawled and filtered 9k high-quality Chinese queries and used 15 powerful LLMs to generate and score chosen-rejected response pairs. The resulting COIG-P dataset contains 101k Chinese preference pairs across 6 domains: Chat, Code, Math, Logic, Novel, and Role. This model is an 8B parameter Llama model.
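To make the "chosen-rejected" structure concrete, a COIG-P-style preference record might look like the following sketch (field names and content are illustrative assumptions, not the dataset's actual schema):

```python
# Hypothetical shape of a single COIG-P preference pair.
# Field names are assumptions for illustration, not the real schema.
pair = {
    "domain": "Math",                  # one of: Chat, Code, Math, Logic, Novel, Role
    "prompt": "计算 12 × 8 的结果。",   # Chinese query
    "chosen": "12 × 8 = 96。",          # higher-scored (correct) response
    "rejected": "12 × 8 = 108。",       # lower-scored (incorrect) response
}

DOMAINS = {"Chat", "Code", "Math", "Logic", "Novel", "Role"}
assert pair["domain"] in DOMAINS
```

Preference-tuning methods such as DPO consume exactly this kind of (prompt, chosen, rejected) triple, so 101k pairs across six domains is the raw material for the alignment step described above.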
### Model Sources
- **Repository:** [https://github.com/MAP-Lab/COIG-P](https://github.com/MAP-Lab/COIG-P)
- **Paper:** [https://huggingface.co/papers/2504.05535](https://huggingface.co/papers/2504.05535)
## Uses
### Direct Use
This model can be used directly for text generation tasks, particularly those involving Chinese language and instruction following.
## Bias, Risks, and Limitations
This model, like other LLMs, may exhibit biases present in its training data. It's crucial to be aware of potential biases related to the specific domains and language (Chinese) included in the COIG-P dataset. Further research is needed to fully characterize these biases.
### Recommendations
Users should be mindful of potential biases in the model's outputs and critically evaluate the generated text.
## How to Get Started with the Model
The following code snippet demonstrates how to use the model for text generation:
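The referenced snippet did not survive extraction. Since this is a Llama 3-based instruct model, prompts presumably follow the published Llama 3 chat format; a minimal helper that builds such a prompt by hand (in practice `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces the equivalent string) looks like:

```python
# Build a Llama 3 instruct-style chat prompt by hand.
# The special tokens below follow the published Llama 3 chat format;
# normally the tokenizer's chat template handles this for you.
def build_llama3_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([{"role": "user", "content": "你好，请介绍一下你自己。"}])
print(prompt)
```

The resulting string would then be tokenized and passed to `model.generate` as usual.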