cyberagent

26 models

layerd-birefnet

This repository contains the code and model weights for the matting module in [LayerD [ICCV'25]](https://arxiv.org/abs/2509.25134), a layer decomposition framework for graphic design images. The model is intended to be used as part of the original LayerD GitHub repository; please visit https://github.com/CyberAgentAILab/LayerD for more information. The model architecture code is based on the BiRefNet repository, and we thank the authors for releasing their high-quality matting model. Since this repository is intended for use with LayerD, we recommend following the instructions in the LayerD repository. This repository is released under the Apache-2.0 license, the same as the LayerD repository; the original BiRefNet is released under the MIT license.
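The matting module predicts a per-pixel alpha matte for the topmost layer, and decomposed layers recombine by standard alpha compositing. A minimal pure-Python sketch of that compositing step (illustrative only, not LayerD's actual code; pixels are assumed to be RGB floats in [0, 1]):

```python
def composite(fg, alpha, bg):
    """Alpha-composite a foreground layer over a background.

    fg, bg: lists of (r, g, b) tuples with channels in [0, 1].
    alpha:  list of matte values in [0, 1], one per pixel.
    Returns the blended pixels: a*fg + (1-a)*bg per channel.
    """
    out = []
    for (fr, fgr, fb), a, (br, bgr, bb) in zip(fg, alpha, bg):
        out.append((
            a * fr + (1 - a) * br,
            a * fgr + (1 - a) * bgr,
            a * fb + (1 - a) * bb,
        ))
    return out

# alpha 1.0 keeps the foreground, alpha 0.0 keeps the background.
blended = composite([(1.0, 0.0, 0.0)], [0.5], [(0.0, 0.0, 1.0)])
print(blended)  # [(0.5, 0.0, 0.5)]
```

LayerD applies this idea in reverse: given the composite, the matting model recovers the alpha that separates the top layer from what lies beneath.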

license:apache-2.0
9,247
4

open-calm-small

license:cc-by-sa-4.0
4,153
19

open-calm-3b

license:cc-by-sa-4.0
2,011
20

Mistral-Nemo-Japanese-Instruct-2408

This is a continually pre-trained Japanese language model based on mistralai/Mistral-Nemo-Instruct-2407. Make sure your transformers installation is up to date: `pip install --upgrade transformers`.

license:apache-2.0
1,897
45

open-calm-7b

license:cc-by-sa-4.0
1,535
205

calm2-7b-chat

llama
1,415
76

open-calm-large

license:cc-by-sa-4.0
1,044
11

calm2-7b

llama
952
28

open-calm-1b

license:cc-by-sa-4.0
939
17

CAT-Translate-1.4b

llama
926
15

DeepSeek R1 Distill Qwen 14B Japanese

This is a Japanese fine-tuned model based on deepseek-ai/DeepSeek-R1-Distill-Qwen-14B.
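Like the upstream DeepSeek-R1 distills, this model emits its chain of thought between `<think>` and `</think>` tags before the final answer. A small helper to separate the two (a sketch that assumes the fine-tune preserves the upstream tag convention):

```python
import re

def split_reasoning(text):
    """Separate the <think>...</think> reasoning block from the final answer.

    Returns (reasoning, answer); reasoning is "" if no think block is present.
    """
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not m:
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = text[m.end():].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>まず2+2を計算する。</think>答えは4です。"
)
print(answer)  # 答えは4です。
```

The same helper works for the 32B variant below, since both inherit the output format from their DeepSeek-R1 distill bases.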

license:mit
471
94

open-calm-medium

license:cc-by-sa-4.0
377
4

calm3-22b-chat

CyberAgentLM3 is a decoder-only language model pre-trained from scratch on 2.0 trillion tokens. CyberAgentLM3-Chat is a fine-tuned model specialized for dialogue use cases.

Prompt format: CALM3-Chat uses ChatML.

- Model size: 22B
- Context length: 16384
- Model type: Transformer-based language model
- Language(s): Japanese, English
- Developed by: CyberAgent, Inc.
- License: Apache-2.0
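In ChatML, each turn is wrapped in `<|im_start|>role ... <|im_end|>` markers. In practice `tokenizer.apply_chat_template` builds the prompt for you, but the format can be sketched by hand (a simplified sketch; the model's exact template, including whitespace details, lives in its tokenizer config):

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt,
    ending with an open assistant turn for the model to complete."""
    lines = []
    for m in messages:
        lines.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    lines.append("<|im_start|>assistant\n")
    return "\n".join(lines)

prompt = to_chatml([
    {"role": "system", "content": "あなたは親切なアシスタントです。"},
    {"role": "user", "content": "こんにちは!"},
])
print(prompt)
```

Prefer `apply_chat_template` when generating: it guarantees the prompt matches what the model saw during fine-tuning.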

llama
372
80

calm2-7b-chat-dpo-experimental

llama
295
16

Llama-3.1-70B-Japanese-Instruct-2407

This is a continually pre-trained Japanese language model based on meta-llama/Meta-Llama-3.1-70B-Instruct. Make sure your transformers installation is up to date: `pip install --upgrade transformers`.

llama
246
76

xlm-roberta-large-jnli-jsick

license:cc-by-4.0
204
7

DeepSeek R1 Distill Qwen 32B Japanese

This is a Japanese fine-tuned model based on deepseek-ai/DeepSeek-R1-Distill-Qwen-32B.

license:mit
137
255

CAT-Translate-0.8b

llama
130
2

llava-calm2-siglip

license:apache-2.0
117
26

markupdm

license:apache-2.0
83
0

opencole-typographylmm-llava-v1.5-7b-lora

40
6

ca-reward-3b-ja

llama
39
10

opencole-stable-diffusion-xl-base-1.0-finetune

4
5

calm3-22b-chat-selfimprove-experimental

llama
1
12

CAT-Translate-3.3b

llama
1
1

type-r

license:apache-2.0
0
2