Bedovyy

18 models

smoothMixWan22-I2V-GGUF

Original Model
- HighNoise: https://civitai.com/models/1995784?modelVersionId=2260110
- LowNoise: https://civitai.com/models/1995784?modelVersionId=2259006

Quantization Method
- Dequant FP8 using https://github.com/Kickbub/Dequant-FP8-ComfyUI
- Quantize GGUF using https://github.com/city96/ComfyUI-GGUF/tree/main/tools

license:apache-2.0
45,095
19

dasiwaWAN22I2V14B-GGUF

Quantization Method
- Dequant FP8 using https://github.com/Kickbub/Dequant-FP8-ComfyUI
- Quantize GGUF using https://github.com/city96/ComfyUI-GGUF/tree/main/tools

Latest Version (current `main` branch)
- MidnightFlirt

license:apache-2.0
13,793
11

Anima-FP8

3,526
18

Anima-GGUF

1,948
13

Anima-INT8

1,737
11

ERNIE-Image-Quantized

license:apache-2.0
664
8

Qwen3-32B.w8a8

license:apache-2.0
95
0

arcaillous-nbxl-v10

Trained in two steps: `Lion8bit` for quick training, then `Lion` for detail.

32
6

YanoljaNEXT-Rosetta-12B-2510-FP8-Dynamic

25
0

c4ai-command-a-03-2025-gptqmodel-4bit

Non-English performance may drop significantly. We recommend setting `temperature` to 0.6–0.8.
- Tool: GPTQModel 2.3.0-dev (bafda24)
- System: 2x 3090, DDR4 128GB + swap 192GB
- Time taken: 14 hours (wall time)

C4AI Command A is an open-weights research release of a 111-billion-parameter model optimized for demanding enterprises that require fast, secure, and high-quality AI. Compared to other leading proprietary and open-weights models, Command A delivers maximum performance with minimal hardware costs, excelling on business-critical agentic and multilingual tasks while being deployable on just two GPUs.

Point of Contact: Cohere For AI: cohere.for.ai
License: CC-BY-NC, also requires adhering to C4AI's Acceptable Use Policy
Model: c4ai-command-a-03-2025
Model Size: 111 billion parameters
Context length: 256K

Note: The model supports a context length of 256K but is configured in Hugging Face for 128K. This value can be updated in the configuration if needed. You can try out C4AI Command A before downloading the weights in our hosted Hugging Face Space. Please install transformers from the source repository that includes the necessary changes for this model.

Model Architecture: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, the model uses supervised fine-tuning (SFT) and preference training to align model behavior with human preferences for helpfulness and safety. The model features three layers with sliding-window attention (window size 4096) and RoPE for efficient local context modeling and relative positional encoding. A fourth layer uses global attention without positional embeddings, enabling unrestricted token interactions across the entire sequence.
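The interleaved attention layout described above can be sketched as a toy layer map. The repeating 3-sliding-plus-1-global grouping and the layer count below are illustrative assumptions for exposition, not values read from the released configuration.

```python
# Toy sketch of the interleaved attention layout described above:
# repeating groups of three sliding-window layers followed by one
# global-attention layer. Group size and layer count are assumptions.

def attention_kind(layer_idx: int, group_size: int = 4) -> str:
    """Return 'global' for the last layer of each group, else 'sliding'."""
    return "global" if layer_idx % group_size == group_size - 1 else "sliding"

layout = [attention_kind(i) for i in range(8)]
# First group: sliding, sliding, sliding, global; then the pattern repeats.
```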
Languages covered: The model has been trained on 23 languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian.

Context Length: Command A supports a context length of 256K.

By default, Command A is configured as a conversational model. A preamble conditions the model on interactive behaviour, meaning it is expected to reply in a conversational fashion, provide introductory statements and follow-up questions, and use Markdown as well as LaTeX where appropriate. This is desired for interactive experiences, such as chatbots, where the model engages in dialogue. In other use cases, non-interactive model behavior may be preferable (e.g. task-focused use cases like extracting information, summarizing text, translation, and categorization). Learn how system messages can be used to achieve such non-interactive behavior here.

In addition, Command A can be configured with two safety modes, which let users set guardrails that are both safe and suitable to their needs: contextual mode or strict mode. Contextual mode is appropriate for wide-ranging interactions with fewer constraints on output, while maintaining core protections by rejecting harmful or illegal suggestions; Command A is configured to contextual mode by default. Strict mode aims to avoid all sensitive topics, such as violent or sexual acts and profanity. For more information, see the Command A prompt format docs.

Command A has been trained specifically for tasks like the final step of Retrieval Augmented Generation (RAG). RAG with Command A is supported through chat templates in Transformers. The model takes a conversation as input (with an optional user-supplied system preamble), along with a list of document snippets. You can then generate text from this input as normal.
Document snippets should be short chunks rather than long documents, typically around 100-400 words per chunk, formatted as key-value pairs. The keys should be short descriptive strings; the values can be text or semi-structured. You may find that simply including relevant documents directly in a user message works just as well as, or better than, using the documents parameter to render the special RAG template. The RAG template is generally a strong default and is ideal for users who want citations. We encourage users to try both and evaluate which mode works best for their specific use case. Note that this was a very brief introduction to RAG - for more information, see the Command A prompt format docs and the Transformers RAG documentation.

Optionally, one can ask the model to include grounding spans (citations) in its response to indicate the source of the information. The model will associate pieces of text (called "spans") with the specific document snippets that support them (called "sources"). Command A wraps each grounded span in a pair of citation tags, with the closing tag listing the indices of the supporting snippets; for example, a span tagged with sources 0 and 1 is supported by document snippets 0 and 1 that were provided in the last message.

Command A has been specifically trained with conversational tool-use capabilities. This allows the model to interact with external tools like APIs, databases, or search engines. Tool use with Command A is supported through chat templates in Transformers. We recommend providing tool descriptions using JSON schema. If the model generates a plan and tool calls, you should add them to the chat history, then call the tool and append the result, as a dictionary, with the tool role. After that, you can generate() again to let the model use the tool result in the chat.
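The document-snippet format described above can be sketched as plain Python data. The `title`/`text` field names are illustrative assumptions, and the `apply_chat_template` call (shown only as a comment, since it needs the downloaded model) is the Transformers entry point the card refers to.

```python
# Sketch of the document-snippet format described above: short chunks
# (~100-400 words each) passed as key-value pairs, with short descriptive
# keys. The "title"/"text" keys here are illustrative assumptions.

documents = [
    {"title": "Tallest penguins", "text": "Emperor penguins are the tallest."},
    {"title": "Penguin habitats", "text": "Emperor penguins live in Antarctica."},
]

conversation = [
    {"role": "user", "content": "Which penguins are the tallest?"},
]

# With the real model and tokenizer loaded, the snippets would be rendered
# into the RAG template via the chat template, e.g.:
# input_ids = tokenizer.apply_chat_template(
#     conversation, documents=documents,
#     add_generation_prompt=True, return_tensors="pt",
# )

# Each snippet is a flat key-value mapping, as the card recommends.
for doc in documents:
    assert set(doc) == {"title", "text"}
```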
Note that this was a very brief introduction to tool calling - for more information, see the Command A prompt format docs and the Transformers tool use documentation.

Optionally, one can ask the model to include grounding spans (citations) in its response to indicate the source of the information, by passing enable_citations=True to tokenizer.apply_chat_template(). When citations are turned on, the model associates pieces of text (called "spans") with the specific tool results that support them (called "sources"). Command A wraps each grounded span in a pair of citation tags, listing the supporting sources in the closing tag; for example, a span may be supported by results 1 and 2 from tool_call_id=0 as well as result 0 from tool_call_id=1. Sources from the same tool call are grouped together and listed as "{tool_call_id}:[{list of result indices}]" before being joined by ",".

Command A has meaningfully improved code capabilities. In addition to academic code benchmarks, we have evaluated it on enterprise-relevant scenarios, including SQL generation and code translation, where it outperforms other models of similar size. Try these out by requesting code snippets, code explanations, or code rewrites. For better performance, we also recommend using a low temperature (or even greedy decoding) for code-generation instructions.

For errors or additional questions about details in this model card, contact [email protected].

We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 111-billion-parameter model to researchers all over the world. This model is governed by a CC-BY-NC License (Non-Commercial) with an acceptable use addendum, and also requires adhering to C4AI's Acceptable Use Policy. If you are interested in commercial use, please contact Cohere's Sales team.
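The tool-calling flow described above can be sketched as a plain chat history. The tool schema and message fields below follow generic Transformers tool-use conventions and are illustrative assumptions (the `get_weather` tool is hypothetical), not code copied from the model card.

```python
import json

# Sketch of the tool-use flow described above: the model's tool calls are
# appended to the chat history, then each tool result is appended as a
# dictionary with the "tool" role. Field names follow generic Transformers
# conventions and are illustrative assumptions.

# Tool description provided as JSON schema, as the card recommends.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Toronto?"}]

# Suppose the model generated this tool call; add it to the chat history:
messages.append({
    "role": "assistant",
    "tool_calls": [{
        "id": "0",
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": json.dumps({"city": "Toronto"})},
    }],
})

# Call the tool, then append its result with the "tool" role:
messages.append({"role": "tool", "tool_call_id": "0",
                 "content": json.dumps({"temp_c": 21})})

# A second generate() pass would now let the model use the tool result.
```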
You can try Command A chat in the playground here. You can also use it in our dedicated Hugging Face Space here.

license:cc-by-nc-4.0
2
1

Qwen-Image-Edit-2511-NVFP4

license:apache-2.0
0
20

arcaillous-xl

0
10

arcain

0
7

ltx2.3_transformer_only_fp8

0
2

FLUX.2-klein-4B-INT8-Comfy

license:apache-2.0
0
2

Anima-INT8-Tensorwise

0
2

LTX2.3_transformer_only_comfy

0
1

FLUX.2-klein-4b-nvfp4mixed

license:apache-2.0
0
1