pfnet

28 models

plamo-2-translate

license: other (plamo-community-license, https://plamo.preferredai.jp/info/plamo-community-license-en) · languages: en, ja · pipeline: translation · library: trans...

47,535 downloads · 101 likes
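plamo-2-translate is tagged for the translation pipeline. Below is a minimal inference sketch using the standard transformers causal-LM API; the `<|plamo:op|>` prompt template is an assumption modeled on the style shown on the model card, and both the template and control tokens should be verified against the card before use.

```python
# Hedged sketch: translating English to Japanese with pfnet/plamo-2-translate.
# The <|plamo:op|> template below is an assumption; verify it against the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pfnet/plamo-2-translate"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

prompt = (
    "<|plamo:op|>dataset\n"
    "translation\n"
    "<|plamo:op|>input lang=English\n"
    "Deep learning frameworks evolve quickly.\n"
    "<|plamo:op|>output lang=Japanese\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Keep only the newly generated tokens, i.e. the translation.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```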

plamo-embedding-1b

license:apache-2.0
45,052 downloads · 41 likes

plamo-2-1b

license:apache-2.0
6,002 downloads · 37 likes

plamo-2.1-2b-cpt

PLaMo 2.1 2B is a model developed by Preferred Elements, Inc., created by pruning parameters from an 8B-parameter model pre-trained on English and Japanese datasets. PLaMo 2.1 2B is released under the PLaMo Community License. Please check the following license and agree to it before downloading:
- (EN) https://plamo.preferredai.jp/info/plamo-community-license-en
- (JA) https://plamo.preferredai.jp/info/plamo-community-license-ja

NOTE: This model has NOT been instruction-tuned for chat dialog or other downstream tasks. Please check the PLaMo Community License and contact us via the contact form for commercial use.

- Model size: 2B
- Developed by: Preferred Elements, Inc.
- Model type: Causal decoder-only
- Language(s): English, Japanese
- License: PLaMo Community License

The PLaMo 2 tokenizer is optimized with numba, a JIT compiler for numerical functions, and is trained on a subset of the datasets used for model pre-training.
- (JA) https://tech.preferred.jp/ja/blog/plamo-2/
- (JA) https://tech.preferred.jp/ja/blog/plamo-2-8b/
- (JA) https://tech.preferred.jp/ja/blog/plamo-2-tokenizer/

PLaMo 2.1 2B is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, PLaMo 2.1 2B's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of PLaMo 2.1 2B, developers should perform safety testing and tuning tailored to their specific applications of the model.

This model was trained under the project "Research and Development Project of the Enhanced Infrastructures for Post 5G Information and Communication System" (JPNP 20017), subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
- (EN) https://www.preferred.jp/en/company/aipolicy/
- (JA) https://www.preferred.jp/ja/company/aipolicy/

957 downloads · 3 likes
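Because plamo-2.1-2b-cpt is a base, causal decoder-only model rather than a chat model, plain text completion is the natural interface. A minimal sketch with the standard transformers causal-LM API follows; the trust_remote_code flag is assumed to be required since PLaMo models ship custom modeling and tokenizer code.

```python
# Minimal text-completion sketch for pfnet/plamo-2.1-2b-cpt (a base LM,
# not instruction-tuned): prompt it with plain English or Japanese text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pfnet/plamo-2.1-2b-cpt"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,     # assumed: PLaMo ships custom model/tokenizer code
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "The history of artificial intelligence began"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```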

plamo-2.1-8b-cpt

792 downloads · 1 like

plamo-13b

license:apache-2.0
520 downloads · 85 likes

plamo-2-translate-base

The PLaMo Translation Model is a specialized large-scale language model developed by Preferred Networks for translation tasks. For details, please refer to the blog post and press release.

List of models:
- plamo-2-translate ... Post-trained model for translation
- plamo-2-translate-base ... Base model for translation
- plamo-2-translate-eval ... Pair-wise evaluation model

The PLaMo Translation Model is released under the PLaMo Community License. Please check the following license and agree to it before downloading:
- (EN) https://plamo.preferredai.jp/info/plamo-community-license-en
- (JA) https://plamo.preferredai.jp/info/plamo-community-license-ja

NOTE: This model has NOT been instruction-tuned for chat dialog or other downstream tasks. Please check the PLaMo Community License and contact us via the contact form for commercial use.

The PLaMo Translation Model is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the PLaMo Translation Model's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of the PLaMo Translation Model, developers should perform safety testing and tuning tailored to their specific applications of the model.

This model was trained under the project "Research and Development Project of the Enhanced Infrastructures for Post 5G Information and Communication System" (JPNP 20017), subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
- (EN) https://www.preferred.jp/en/company/aipolicy/
- (JA) https://www.preferred.jp/ja/company/aipolicy/

517 downloads · 2 likes

Preferred-MedLLM-Qwen-72B

496 downloads · 11 likes

plamo-3-nict-2b-base

483 downloads · 0 likes

nekomata-14b-pfn-qfin-inst-merge

411 downloads · 2 likes

nekomata-14b-pfn-qfin

387 downloads · 4 likes

plamo-13b-instruct

license:apache-2.0
261 downloads · 13 likes

plamo-13b-instruct-nc

license:cc-by-nc-4.0
237 downloads · 3 likes

plamo-2.1-8b-vl

203 downloads · 9 likes

plamo-2.1-2b-vl

177 downloads · 3 likes

plamo-100b

144 downloads · 18 likes

plamo-3-nict-31b-base

86 downloads · 2 likes

plamo-2-8b

70 downloads · 30 likes

plamo-2-translate-eval

Pair-wise evaluation model in the PLaMo translation family. The model-family description, license terms, and usage notes are identical to those listed under plamo-2-translate-base above.

38 downloads · 4 likes

Llama3-Preferred-MedSwallow-70B

llama
28 downloads · 14 likes

Preferred-MedRECT-32B

Preferred-MedRECT-32B is a fine-tuned model based on Qwen/Qwen3-32B, optimized for medical error detection and correction tasks using LoRA (Low-Rank Adaptation). The model is trained on bilingual (Japanese/English) medical reasoning data with explicit reasoning processes, enabling it to detect errors, extract erroneous sentences, and provide corrections in clinical texts. The model is released under the Apache License 2.0.

The table below shows a cross-lingual performance comparison on the MedRECT-ja (Japanese) and MedRECT-en (English) benchmarks. MedRECT evaluates models on three subtasks: error detection (F1), sentence extraction (Acc.), and error correction (EC Avg. Score).

| Model | MedRECT-ja Error Det. F1 | MedRECT-ja Sent. Ext. Acc. | MedRECT-ja EC Avg. Score | MedRECT-en Error Det. F1 | MedRECT-en Sent. Ext. Acc. | MedRECT-en EC Avg. Score |
|:------|:---:|:---:|:---:|:---:|:---:|:---:|
| Preferred-MedRECT-32B | 0.743 | 81.5% | 0.627 | 0.728 | 90.9% | 0.718 |
| Qwen3-32B (think) | 0.723 | 72.5% | 0.549 | 0.740 | 83.5% | 0.550 |
| gpt-oss-120b (medium) | 0.721 | 77.4% | 0.581 | 0.777 | 88.1% | 0.630 |
| gpt-oss-20b (medium) | 0.718 | 64.3% | 0.543 | 0.762 | 87.2% | 0.590 |
| GPT-4.1 | 0.658 | 52.6% | 0.655 | 0.789 | 72.8% | 0.710 |

- Base Model: unsloth/Qwen3-32B
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Data:
  - Japanese: 5,538 samples from JMLE (2018-2023)
  - English: 2,439 samples from MEDEC MS Subset
  - All samples include reasoning processes generated by DeepSeek-R1-0528

The model was developed for research purposes and is not intended for clinical diagnosis. It is the users' responsibility to ensure compliance with applicable rules and regulations.

Preferred Networks, Inc.
- Naoto Iwase
- Hiroki Okuyama
- Junichiro Iwasawa

Detailed evaluation results will be given in the research paper.

license:apache-2.0
11 downloads · 1 like
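As a sketch of how the detection, extraction, and correction subtasks might be driven, the following assumes the standard Qwen3 chat-template interface inherited from the base model; the instruction wording and the clinical sentence are hypothetical and are not taken from the model card.

```python
# Hypothetical usage sketch for pfnet/Preferred-MedRECT-32B (Qwen3-32B finetune).
# The chat-template interface is assumed from the Qwen3 base; the instruction
# and the clinical sentence are illustrative, not the card's official prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pfnet/Preferred-MedRECT-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

clinical_text = "A patient with type 1 diabetes was managed with metformin alone."
messages = [
    {
        "role": "user",
        "content": (
            "Detect whether the following clinical text contains a medical "
            "error, quote the erroneous sentence, and propose a correction:\n"
            + clinical_text
        ),
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```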

Qwen3-1.7B-pfn-qfin

9 downloads · 0 likes

Qwen2.5-1.5B-pfn-qfin

4 downloads · 0 likes

nekomata-7b-pfn-qfin-inst-merge

2 downloads · 0 likes

plamo-3-nict-8b-base

1 download · 0 likes

nekomata-7b-pfn-qfin

1 download · 0 likes

timesfm-1.0-200m-fin

license:cc-by-nc-sa-4.0
0 downloads · 14 likes

GenerRNA

0 downloads · 6 likes