perplexity-ai

8 models

pplx-embed-v1-4b

license:mit
15,785
52

pplx-embed-v1-0.6b

license:mit
9,072
101

pplx-embed-context-v1-4b

license:mit
3,762
17

pplx-embed-context-v1-0.6b

license:mit
2,494
31

r1-1776

Blog link: https://perplexity.ai/hub/blog/open-sourcing-r1-1776

R1 1776 is a DeepSeek-R1 reasoning model that has been post-trained by Perplexity AI to remove Chinese Communist Party censorship. The model provides unbiased, accurate, and factual information while maintaining high reasoning capabilities. To ensure our model remains fully "uncensored" and capable of engaging with a broad spectrum of sensitive topics, we curated a diverse, multilingual evaluation set of over 1,000 examples that comprehensively cover such subjects. We then used human annotators as well as carefully designed LLM judges to measure the likelihood that a model will evade queries or provide overly sanitized responses. We also ensured that the model's math and reasoning abilities remained intact after the decensoring process: evaluations on multiple benchmarks showed that our post-trained model performed on par with the base R1 model, indicating that decensoring had no impact on its core reasoning capabilities.

license:mit
961
2,325

browsesafe

license:mit
877
30

R1 1776 Distill Llama 70b

Blog link: https://perplexity.ai/hub/blog/open-sourcing-r1-1776

R1 1776 Distill Llama 70b is a Llama-70B distillation of R1 1776, a DeepSeek-R1 reasoning model post-trained by Perplexity AI to remove Chinese Communist Party censorship; see the r1-1776 card above for the full description of the decensoring and evaluation process.

| Benchmark | R1-Distill-Llama-70B | R1-1776-Distill-Llama-70B |
| --- | --- | --- |
| China Censorship | 80.53 | 0.2 |
| Internal Benchmarks (avg) | 47.64 | 48.4 |
| AIME 2024 | 70 | 70 |
| MATH-500 | 94.5 | 94.8 |
| MMLU | 88.52 | 88.40 |
| DROP | 84.55 | 84.83 |
| GPQA | 65.2 | 65.05 |

\* Evaluated by Perplexity AI, since these were not reported in the paper.
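As a rough illustration of what "on par" means here, the reasoning-benchmark deltas can be checked in a few lines of Python. The numbers below are copied from the benchmark table in this card; the snippet itself is not part of the model card, just a sanity check on the reported scores:

```python
# Reasoning-benchmark scores from the table: (base R1 distill, decensored R1-1776 distill).
scores = {
    "AIME 2024": (70.0, 70.0),
    "MATH-500": (94.5, 94.8),
    "MMLU": (88.52, 88.40),
    "DROP": (84.55, 84.83),
    "GPQA": (65.2, 65.05),
}

# Largest absolute score difference across the reasoning benchmarks.
max_delta = max(abs(base - decensored) for base, decensored in scores.values())
print(round(max_delta, 2))  # 0.3
```

Every reasoning benchmark moves by at most 0.3 points, while the China Censorship score drops from 80.53 to 0.2, which is the contrast the card is highlighting.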

llama
427
129

pplx-qwen3.5-122b-rl-0320

0
1