Intelligent-Internet
II-Medical-8B
II-Medical-8B is the newest advanced large language model developed by Intelligent Internet, specifically engineered to enhance AI-driven medical reasoning. Following the positive reception of our previous II-Medical-7B-Preview, this new iteration significantly advances the capabilities of medical question answering.

We collected and generated a comprehensive set of reasoning datasets for the medical domain and performed SFT fine-tuning on the Qwen/Qwen3-8B model with the following configuration:

- Max length: 16378.
- Batch size: 128.
- Learning rate: 5e-5.
- Number of epochs: 8.

Following this, we further optimized the SFT model by training with DAPO on a hard-reasoning dataset to boost performance, using the configuration below (the corresponding clipped objective is sketched after this list):

- Max prompt length: 2048 tokens.
- Max response length: 12288 tokens.
- Overlong buffer: Enabled, 4096 tokens, penalty factor 1.0.
- Clip ratios: Low 0.2, High 0.28.
- Batch sizes: Train prompt 512, Generation prompt 1536, Mini-batch 32.
- Responses per prompt: 16.
- Temperature: 1.0, Top-p: 1.0, Top-k: -1 (vLLM rollout).
- Learning rate: 1e-6, Warmup steps: 10, Weight decay: 0.1.
- Loss aggregation: Token-mean.
- Gradient clipping: 1.0.
- Entropy coefficient: 0.
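For reference, a sketch of the clipped DAPO objective corresponding to the configuration above, following the DAPO paper (with $G$ sampled responses $o_i$ per prompt $q$, token-level importance ratio $r_{i,t}(\theta)$, and advantage $\hat{A}_{i,t}$):

$$\mathcal{J}_{\mathrm{DAPO}}(\theta)=\mathbb{E}\left[\frac{1}{\sum_{i=1}^{G}|o_i|}\sum_{i=1}^{G}\sum_{t=1}^{|o_i|}\min\Bigl(r_{i,t}(\theta)\,\hat{A}_{i,t},\ \mathrm{clip}\bigl(r_{i,t}(\theta),\,1-\varepsilon_{\mathrm{low}},\,1+\varepsilon_{\mathrm{high}}\bigr)\,\hat{A}_{i,t}\Bigr)\right]$$

Here $\varepsilon_{\mathrm{low}}=0.2$ and $\varepsilon_{\mathrm{high}}=0.28$ are the clip ratios listed above, and the $1/\sum_i|o_i|$ normalization is the token-mean loss aggregation.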
Evaluation Results

Our II-Medical-8B model achieved a 40% score on HealthBench, a comprehensive open-source benchmark evaluating the performance and safety of large language models in healthcare. This performance is comparable to OpenAI's o1 reasoning model and GPT-4.5, OpenAI's largest and most advanced model to date. We provide a comparison to models available in ChatGPT below.

We also evaluate on ten medical QA benchmarks: MedMCQA, MedQA, PubMedQA, medical-related questions from MMLU-Pro and GPQA, small QA sets from The Lancet and the New England Journal of Medicine, the 4-option and 5-option splits from the MedBullets platform, and MedXpertQA.

| Model | MedMC | MedQA | PubMed | MMLU-P | GPQA | Lancet | MedB-4 | MedB-5 | MedX | NEJM | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HuatuoGPT-o1-72B | 76.76 | 88.85 | 79.90 | 80.46 | 64.36 | 70.87 | 77.27 | 73.05 | 23.53 | 76.29 | 71.13 |
| QWQ 32B | 69.73 | 87.03 | 88.5 | 79.86 | 69.17 | 71.3 | 72.07 | 69.01 | 24.98 | 75.12 | 70.68 |
| Qwen2.5-7B-IT | 56.56 | 61.51 | 71.3 | 61.17 | 42.56 | 61.17 | 46.75 | 40.58 | 13.26 | 59.04 | 51.39 |
| HuatuoGPT-o1-8B | 63.97 | 74.78 | 80.10 | 63.71 | 55.38 | 64.32 | 58.44 | 51.95 | 15.79 | 64.84 | 59.32 |
| Med-reason | 61.67 | 71.87 | 77.4 | 64.1 | 50.51 | 59.7 | 60.06 | 54.22 | 22.87 | 66.8 | 59.92 |
| M1 | 62.54 | 75.81 | 75.80 | 65.86 | 53.08 | 62.62 | 63.64 | 59.74 | 19.59 | 64.34 | 60.3 |
| II-Medical-8B-SFT | 71.92 | 86.57 | 77.4 | 77.26 | 65.64 | 69.17 | 76.30 | 67.53 | 23.79 | 73.80 | 68.80 |
| II-Medical-8B | 71.57 | 87.82 | 78.2 | 80.46 | 67.18 | 70.38 | 78.25 | 72.07 | 25.26 | 73.13 | 70.49 |

Training Data

The training dataset comprises 555,000 samples from the following sources:

1. Public Medical Reasoning Datasets (103,031 samples)
   - General Medical Reasoning: 40,544 samples
   - Medical-R1-Distill-Data: 22,000 samples
   - Medical-R1-Distill-Data-Chinese: 17,000 samples
   - UCSC-VLAA/m23k-tokenized: 23,487 samples
2. Synthetic Medical QA Data with QwQ (225,700 samples), generated from established medical datasets:
   - MedMcQA (from openlifescienceai/medmcqa): 183,000 samples
   - MedQA: 10,000 samples
   - MedReason: 32,700 samples
3. Curated Medical R1 Traces, filtered from the following sources:
   - PrimeIntellect/SYNTHETIC-1
   - GeneralReasoning/GeneralThought-430K
   - a-m-team/AM-DeepSeek-R1-Distilled-1.4M
   - open-thoughts/OpenThoughts2-1M
   - nvidia/Llama-Nemotron-Post-Training-Dataset: Science subset only
   - Other resources: cognitivecomputations/dolphin-r1, ServiceNow-AI/R1-Distill-SFT, ...
4. Supplementary Math Dataset
   - 15,000 samples of reasoning traces from light-r1, added to enhance the general reasoning capabilities of the model.

All R1 reasoning traces were processed through a domain-specific pipeline as follows:

1. Embedding Generation: Prompts are embedded using sentence-transformers/all-MiniLM-L6-v2.
2. Clustering: Perform K-means clustering with 50,000 clusters.
3. Domain Classification:
   - For each cluster, select the 10 prompts nearest to the cluster center.
   - Classify the domain of each selected prompt using Qwen2.5-32B-Instruct.
   - Assign the cluster's domain based on majority voting among the classified prompts.
4. Domain Filtering: Keep only clusters labeled as Medical or Biology for the final dataset.

Preprocessing Data

1. Filtering for Complete Generation
   - Retained only traces with complete generation outputs.
2. Length-based Filtering
   - Minimum threshold: Keep only prompts with more than 3 words.
   - Wait Token Filter: Removed traces with more than 47 occurrences of "Wait" (97th-percentile threshold).

Decontamination

We use a two-step decontamination:

1. Following the open-r1 project, we decontaminate the dataset against the evaluation datasets using 10-grams.
2. We then apply the fuzzy decontamination from the `s1k` method with a 90% similarity threshold.

Our pipeline is carefully decontaminated against the evaluation datasets.

How To Use

Our model can be utilized in the same manner as Qwen or DeepSeek-R1-Distill models. For instance, you can easily start a service using vLLM (a usage sketch follows at the end of this card):

- Recommended sampling parameters: temperature = 0.6, top_p = 0.9.
- When prompting, explicitly request step-by-step reasoning and ask for the final answer within \boxed{} (e.g., "Please reason step-by-step, and put your final answer within \boxed{}.").

Limitations and Considerations

- The dataset may contain inherent biases from source materials.
- Medical knowledge requires regular updates.
- Please note that this model is not suitable for medical use.
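A minimal usage sketch for the instructions above, using vLLM's offline Python API (the Hugging Face model ID `Intelligent-Internet/II-Medical-8B` is an assumption here; a server can equivalently be started with `vllm serve <model-id>`):

```python
from vllm import LLM, SamplingParams

# Assumed Hugging Face model ID; adjust if the repository name differs.
llm = LLM(model="Intelligent-Internet/II-Medical-8B")

# Recommended sampling parameters from this card; max_tokens is an
# illustrative budget for long reasoning traces.
params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=8192)

messages = [{
    "role": "user",
    "content": (
        "Please reason step-by-step, and put your final answer within \\boxed{}.\n\n"
        "A deficiency of which vitamin causes scurvy?"
    ),
}]

# chat() applies the model's chat template before generation.
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```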
II-Medical-8B-1706
II-Medical-8B-1706 is the newest advanced large language model developed by Intelligent Internet, specifically engineered to enhance AI-driven medical reasoning. Following the positive reception of our previous II-Medical-8B, this new iteration significantly advances the capabilities of medical question answering. We also provide static quants of II-Medical-8B-1706 here.

We collected and generated a comprehensive set of reasoning datasets for the medical domain and performed SFT fine-tuning on the Qwen/Qwen3-8B model with the following configuration:

- Max length: 16378.
- Batch size: 128.
- Learning rate: 5e-5.
- Number of epochs: 6.

For the Reinforcement Learning (RL) stage, we designed a two-stage training process. The first stage focuses on enhancing the model's reasoning capabilities for complex medical questions. The second stage ensures that the model's responses prioritize safety and helpfulness. Both stages utilize the following configuration (the overlong-buffer length penalty is sketched after this list):

- Max prompt length: 2048 tokens.
- Max response length: 12288 tokens.
- Overlong buffer: Enabled, 4096 tokens, penalty factor 1.0.
- Clip ratios: Low 0.2, High 0.28.
- Batch sizes: Train prompt 512, Generation prompt 1536, Mini-batch 32.
- Responses per prompt: 16.
- Temperature: 1.0, Top-p: 1.0, Top-k: -1 (vLLM rollout).
- Learning rate: 1e-6, Warmup steps: 10, Weight decay: 0.1.
- Loss aggregation: Token-mean.
- Gradient clipping: 1.0.
- Entropy coefficient: 0.
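The overlong buffer corresponds to DAPO-style soft length punishment. A sketch of the length-penalty term, assuming the same scheme as the DAPO paper (an assumption; this card only lists the parameters), with buffer length $L_{\mathrm{buf}}=4096$, maximum response length $L_{\max}=12288$, and penalty factor $1.0$:

$$R_{\mathrm{overlong}}(y)=\begin{cases}0, & |y|\le L_{\max}-L_{\mathrm{buf}},\\[4pt]\dfrac{(L_{\max}-L_{\mathrm{buf}})-|y|}{L_{\mathrm{buf}}}, & L_{\max}-L_{\mathrm{buf}}<|y|\le L_{\max},\end{cases}$$

i.e., the reward penalty ramps linearly from $0$ to $-1$ (scaled by the penalty factor) over the final 4096 tokens of the response budget.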
Evaluation Results

Our II-Medical-8B-1706 model achieved a 46.8% score on HealthBench, a comprehensive open-source benchmark evaluating the performance and safety of large language models in healthcare. This performance is comparable to MedGemma-27B from Google. We provide a comparison to models available in ChatGPT below.

We also evaluate on nine other medical QA benchmarks: MedMCQA, MedQA, PubMedQA, medical-related questions from MMLU-Pro, small QA sets from The Lancet and the New England Journal of Medicine, the 4-option and 5-option splits from the MedBullets platform, and MedXpertQA.

| Model | MedMC | MedQA | PubMed | MMLU-P | HealthBench | Lancet | MedB-4 | MedB-5 | MedX | NEJM | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HuatuoGPT-o1-72B | 76.76 | 88.85 | 79.90 | 80.46 | 22.73 | 70.87 | 77.27 | 73.05 | 23.53 | 76.29 | 66.97 |
| M1 | 62.54 | 75.81 | 75.80 | 65.86 | 15.51 | 62.62 | 63.64 | 59.74 | 19.59 | 64.34 | 56.55 |
| Qwen3-8B | 66.53 | 81.38 | 73.9 | 77.85 | 42.27 | 66.26 | 68.83 | 62.66 | 19.59 | 69.65 | 62.89 |
| Qwen3-32B | 74.18 | 88.92 | 76.1 | 80.7 | 47.08 | 72.33 | 72.27 | 71.42 | 28.04 | 76.94 | 68.80 |
| MedGemma-27B-IT | 73.24 | 87.27 | 70.9 | 80.13 | 46.54 | 70.14 | 75.32 | 73.37 | 25.55 | 76.28 | 67.87 |
| II-Medical-8B | 71.57 | 87.90 | 78.7 | 80.46 | 40.02 | 70.38 | 78.25 | 72.07 | 25.26 | 73.13 | 67.77 |
| II-Medical-8B-1706 | 74.44 | 88.61 | 79.8 | 81.04 | 46.8 | 71.60 | 80.84 | 74.67 | 29.63 | 77.61 | 70.5 |

Training Data

The training dataset comprises 2,197,741 samples from the following sources:

1. Public Medical Reasoning Datasets
   - General Medical Reasoning
   - Medical-R1-Distill-Data
   - Medical-R1-Distill-Data-Chinese
   - UCSC-VLAA/m23k-tokenized
2. Synthetic Medical QA Data with Qwen3-235B-A22B (873,497 samples)
   - Generated from established medical datasets. For each prompt, we generated 6-10 sampled responses and kept only the correct ones.
3. Curated Medical R1 Traces, filtered from the following sources:
   - PrimeIntellect/SYNTHETIC-1
   - GeneralReasoning/GeneralThought-430K
   - a-m-team/AM-DeepSeek-R1-Distilled-1.4M
   - open-thoughts/OpenThoughts2-1M
   - nvidia/Llama-Nemotron-Post-Training-Dataset: Science subset only
   - Other resources: cognitivecomputations/dolphin-r1, ServiceNow-AI/R1-Distill-SFT, ...
4. General Medical & Instruction-Following Dataset (1,025,903 samples)
   - We generated general medical instruction-following data and evaluated it with GPT-4o as an automatic judge; only responses scoring at least 8/10 against the ground-truth answers were retained.
   - 229,433 prompts from Text-Book-QA-subset
   - 276,079 prompts from Text-Patient-QA-subset
   - 142,927 prompts from Text-GuildLine-QA-subset
   - 215,647 prompts from Chat-Doctor-QA
   - 74,190 prompts from our Evol-Instruct medical dataset
   - 87,627 instruction-following prompts from a subset of a-m-team/AM-Qwen3-Distilled

All R1 reasoning traces were processed through a domain-specific pipeline as follows (a code sketch follows at the end of this card):

1. Embedding Generation: Prompts are embedded using sentence-transformers/all-MiniLM-L6-v2.
2. Clustering: Perform K-means clustering with 50,000 clusters.
3. Domain Classification:
   - For each cluster, select the 10 prompts nearest to the cluster center.
   - Classify the domain of each selected prompt using Qwen2.5-32B-Instruct.
   - Assign the cluster's domain based on majority voting among the classified prompts.
4. Domain Filtering: Keep only clusters labeled as Medical or Biology for the final dataset.

Response Deduplication

- N-gram size: 4
- Jaccard threshold: 0.7

Decontamination

We use a two-step decontamination:

1. Following the open-r1 project, we decontaminate the dataset against the evaluation datasets using 8-grams.
2. We then apply the fuzzy decontamination from the `s1k` method with an 80% similarity threshold.

Our pipeline is carefully decontaminated against the evaluation datasets.

How To Use

Our model can be utilized in the same manner as Qwen or DeepSeek-R1-Distill models. For instance, you can easily start a service using vLLM; the usage sketch shown for II-Medical-8B above applies here as well.

- Recommended sampling parameters: temperature = 0.6, top_p = 0.9.
- When prompting, explicitly request step-by-step reasoning and ask for the final answer within \boxed{} (e.g., "Please reason step-by-step, and put your final answer within \boxed{}.").

Limitations and Considerations

- The dataset may contain inherent biases from source materials.
- Medical knowledge requires regular updates.
- Please note that this model is not suitable for medical use.
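As forward-referenced above, a sketch of the domain-classification pipeline applied to the R1 reasoning traces, assuming `sentence-transformers` and `scikit-learn`; `load_prompts()` and `classify_domain()` are hypothetical helpers standing in for data loading and the Qwen2.5-32B-Instruct judge:

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

prompts = load_prompts()  # hypothetical: returns list[str] of trace prompts

# 1. Embedding generation with the encoder named in the card.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = encoder.encode(prompts, batch_size=256)

# 2. K-means with 50,000 clusters (the exact K-means variant is not specified).
kmeans = KMeans(n_clusters=50_000, n_init="auto").fit(embeddings)

kept_indices = []
for c in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == c)[0]
    # 3. Domain classification: label the 10 prompts nearest the centroid
    #    with an LLM judge, then majority-vote the cluster's domain.
    dists = np.linalg.norm(embeddings[members] - kmeans.cluster_centers_[c], axis=1)
    nearest = members[np.argsort(dists)[:10]]
    votes = [classify_domain(prompts[i]) for i in nearest]  # hypothetical LLM call
    domain = max(set(votes), key=votes.count)
    # 4. Domain filtering: keep only Medical or Biology clusters.
    if domain in {"Medical", "Biology"}:
        kept_indices.extend(members.tolist())
```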
II-Medical-8B-1706-GGUF
II-Medical-32B-Preview
II-Search-4B
II-Thought-1.5B-Preview
II-Search-CIR-4B
Inspired by the success of our II-Researcher approach, which applies tool-augmented reasoning on top of the DeepSeek-R1 model, II-Search-CIR-4B introduces Code-Integrated Reasoning (CIR), a more powerful and flexible method for integrating tool interaction into the reasoning process. We instruct the model to generate fenced code blocks within which it can invoke a set of predefined functions. These functions act as interfaces to external resources, similar to the tool-call paradigm but offering greater flexibility and control. This approach enables the model not only to retrieve external information but also to process, filter, and reason over it programmatically within the code itself (a sketch of the tool interface follows at the end of this card):

- `websearch(query: str, numresult: int)`
- `webvisit(url: str)`

In our early experiments, we found that even large models such as Qwen/Qwen3-235B-A22B or DeepSeek-R1 could not produce the code format reliably. Sometimes, models would not use any code blocks at all, instead relying on their internal knowledge to answer the query. To address this issue, we first curated a dataset and performed SFT fine-tuning on the Qwen/Qwen3-4B model with the following configuration:

- Max length: 26000.
- Batch size: 128.
- Learning rate: 1e-5.
- Number of epochs: 4.

Following this, we further optimized the SFT model by training with DAPO on a hard-reasoning dataset to boost performance:

- Max prompt length: 3000 tokens.
- Max response length: 16384 tokens.
- Max total length: 32768 tokens.
- Max observation length: 3000 tokens per observation.
- Masking observation tokens: True.
- Max number of code blocks: 32.
- Clip ratios: Low 0.2, High 0.3.
- Batch sizes: Train prompt 128, Generation prompt 128, Mini-batch 16.
- Responses per prompt: 16.
- Temperature: 1.0, Top-p: 1.0, Top-k: -1 (vLLM rollout).
- Learning rate: 1e-6, Warmup steps: 20.
- Loss aggregation: Token-mean.
- Gradient clipping: 1.0.

We describe our training methodology in more detail in our II-Search-4B blog post. We also release our dataset to reproduce the results.

We compare our model with other small open-source models, including Qwen3-4B (the model on which we are based) and other models specialized in information-seeking tasks: Jan-4B and WebSailor-3B. We also report results on the Google Frames benchmark for two recent MoE models, Qwen3-30B-A3B-Instruct-2507 and Qwen3-30B-A3B-Thinking-2507. The search API was SerpDev, and Google Gemini Pro 2.5 was used to extract and judge the answers (using the judge prompt provided by each benchmark's authors).

| Benchmark | Qwen3-4B | Jan-4B | WebSailor-3B | II-Search-4B | II-Search-CIR-4B |
| --- | --- | --- | --- | --- | --- |
| OpenAI/SimpleQA | 76.8 | 80.1 | 81.8 | 91.8 | 91.8 |
| Google/Frames | 30.7 | 24.8 | 34.0 | 67.5 | 72.2 |
| Seal0 | 6.31 | 2.7 | 1.8 | 22.5 | 26.4 |

Note: our MCP setup ensured that no URLs from Hugging Face were visited when evaluating the II-Search-CIR-4B model. All benchmark traces from the models can be found at Inspect-Search-Models-Benchmarking-Result.

Our model can be utilized in the same manner as Qwen or DeepSeek-R1-Distill models. For instance, you can easily start a service using vLLM. To try out the II-Search-CIR model, refer to the example provided in the GitHub repo here, which includes the system prompt, hint prompt, and code executor.
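A minimal sketch of how the two tool functions named above might be wired up on the executor side. The function names and signatures follow this card; the bodies (a hypothetical SERP-style endpoint and a plain HTTP fetch) are illustrative stand-ins, not the production backends:

```python
# Sketch of the executor-side tool interface for Code-Integrated Reasoning.
import requests

def websearch(query: str, numresult: int) -> list[dict]:
    """Return up to `numresult` search hits as {"title", "url", "snippet"} dicts.
    The endpoint below is hypothetical; the card reports using SerpDev."""
    resp = requests.get(
        "https://example-serp-api.invalid/search",  # hypothetical endpoint
        params={"q": query, "num": numresult},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"]

def webvisit(url: str) -> str:
    """Fetch a page and return its raw text for the model to filter in code."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.text

# Inside a CIR code block, the model can then combine and post-process
# results programmatically rather than through one-shot tool calls, e.g.:
hits = websearch("latest WHO hypertension guideline year", numresult=5)
pages = [webvisit(h["url"]) for h in hits[:2]]
relevant = [p for p in pages if "hypertension" in p.lower()]
```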