aixonlab

16 models

Eurydice-24b-v3

license:apache-2.0
122
9

Eurydice-24b-v2

license:apache-2.0
87
13

flux.1-lumiere-alpha

We are excited to announce Lumiere Alpha, a model focused on improving realism without compromising prompt coherence or drastically changing composition relative to the original Flux.1-Dev model. For the best results, we strongly recommend using a `guidance scale` of 3.0 and setting `steps` between 28 and 35, with `Euler Beta`. We've also crafted a ComfyUI workflow to make using Lumiere Alpha even more seamless! Find it in `comfy/lumierealphaworkflow.json`. The FLUX.1 [dev] model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc. Our model is released under the same FLUX.1 [dev] Non-Commercial License.

65
94
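The recommended settings above can be sketched with `diffusers`. This is a minimal, unofficial sketch: the repo id and whether this checkpoint loads via `FluxPipeline` are assumptions, and `Euler Beta` refers to a ComfyUI sampler/scheduler choice that has no exact one-line equivalent here.

```python
# Hedged sketch: applying the card's recommended sampling settings
# (guidance scale 3.0, 28-35 steps) with diffusers' FluxPipeline.
# The repo id below is an assumption; verify it on the Hub.
MODEL_ID = "aixonlab/flux.1-lumiere-alpha"  # hypothetical repo id


def recommended_settings(steps: int = 30) -> dict:
    """Return the sampling settings recommended by the model card."""
    if not 28 <= steps <= 35:
        raise ValueError("the card recommends between 28 and 35 steps")
    return {"guidance_scale": 3.0, "num_inference_steps": steps}


def generate(prompt: str):
    # Heavy dependencies are imported lazily so the settings helper
    # stays usable without a GPU or a model download.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    pipe.to("cuda")
    return pipe(prompt, **recommended_settings()).images[0]
```

For the `Euler Beta` behaviour specifically, the provided ComfyUI workflow (`comfy/lumierealphaworkflow.json`) is the more faithful route.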

FLUX.1-dev-LoRA-Cinematic-1940s

59
6

FLUX.1-dev-LoRA-Cinematic-Octane

54
11

Zinakha-12b

license:apache-2.0
28
3

Aether-12b

Base model: Xclbr7/Arcanum-12b, language: en.

license:apache-2.0
19
1

Eurydice-24b-v1

license:apache-2.0
10
0

Eurydice-24b-v3.5

license:apache-2.0
7
8

Eurydice-24b-v1c

license:apache-2.0
3
6

Selene-27b-v1

license:apache-2.0
3
0

Zara-14b-v1.2

Text generation inference model based on aixonlab/Zara-14b-v1.1.

license:apache-2.0
2
4

Grey-12b

Base model: Aether-12b, language: en.

license:apache-2.0
2
2

RocRacoon-3b

license:mit
2
1

Zara-14b-v1.1

Zara 14b aims to be the perfect companion for any chat involving multiple roles. It understands context well and excels at creativity and storytelling. It is built on Lamarck 14B v0.7 and trained on several datasets, with some layer merges to enhance its capabilities.

- Developed by: Aixon Lab
- Model type: Causal Language Model
- Language(s): English (primarily); may support other languages
- License: Apache 2.0
- Repository: https://huggingface.co/aixonlab/Zara-14b-v1.1
- Quantization (GGUF): https://huggingface.co/mradermacher/Zara-14b-v1.1-GGUF
- Base model: sometimesanotion/Lamarck-14B-v0.7
- Parameter count: ~14 billion
- Architecture specifics: Transformer-based language model

Intended Use 🎯 An advanced language model for various natural language processing tasks, including but not limited to text generation (it excels in chat), question answering, and analysis.

Ethical Considerations 🤔 As a model built from multiple sources, Zara 14b may inherit biases and limitations from its constituent models. Users should be aware of potential biases in generated content and use the model responsibly.

Performance and Evaluation: Performance metrics and evaluation results for Zara 14b are yet to be determined. Users are encouraged to contribute their findings and benchmarks.

Limitations and Biases: The model may exhibit biases present in its training data and constituent models. It is crucial to evaluate its outputs critically and use them in conjunction with human judgment.

Additional Information: For more details on the base and constituent models, please refer to their respective model cards and documentation.

license:apache-2.0
0
3
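The card links community GGUF quantizations, which can be run locally. A hedged sketch using `llama-cpp-python` follows; the quant filename pattern is an assumption, so check the linked GGUF repos for the files actually available.

```python
# Hedged sketch: loading a GGUF quantization of Zara 14b with
# llama-cpp-python. Repo ids are taken from the model cards; the
# "*Q4_K_M.gguf" filename pattern is an assumption.


def gguf_repo(version: str) -> str:
    """Map a Zara release tag to its community GGUF repo (from the cards)."""
    repos = {
        "v1": "mradermacher/Zara-14b-v1-GGUF",
        "v1.1": "mradermacher/Zara-14b-v1.1-GGUF",
    }
    if version not in repos:
        raise KeyError(f"no known GGUF repo for Zara-14b-{version}")
    return repos[version]


def load(version: str = "v1.1", quant_pattern: str = "*Q4_K_M.gguf"):
    # Imported lazily; Llama.from_pretrained downloads the first file
    # matching quant_pattern from the Hugging Face Hub.
    from llama_cpp import Llama

    return Llama.from_pretrained(repo_id=gguf_repo(version), filename=quant_pattern)
```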

Zara-14b-v1

Zara 14b aims to be the perfect companion for any chat involving multiple roles. It understands context well and excels at creativity and storytelling. It is built on Lamarck 14B v0.7 and trained on several datasets, with some layer merges to enhance its capabilities.

- Developed by: Aixon Lab
- Model type: Causal Language Model
- Language(s): English (primarily); may support other languages
- License: Apache 2.0
- Repository: https://huggingface.co/aixonlab/Zara-14b-v1
- Quantization (GGUF): https://huggingface.co/mradermacher/Zara-14b-v1-GGUF
- Quantization (iMatrix GGUF): https://huggingface.co/mradermacher/Zara-14b-v1-i1-GGUF
- Base model: sometimesanotion/Lamarck-14B-v0.7
- Parameter count: ~14 billion
- Architecture specifics: Transformer-based language model

Intended Use 🎯 An advanced language model for various natural language processing tasks, including but not limited to text generation (it excels in chat), question answering, and analysis.

Ethical Considerations 🤔 As a model built from multiple sources, Zara 14b may inherit biases and limitations from its constituent models. Users should be aware of potential biases in generated content and use the model responsibly.

Performance and Evaluation: Performance metrics and evaluation results for Zara 14b are yet to be determined. Users are encouraged to contribute their findings and benchmarks.

Limitations and Biases: The model may exhibit biases present in its training data and constituent models. It is crucial to evaluate its outputs critically and use them in conjunction with human judgment.

Additional Information: For more details on the base and constituent models, please refer to their respective model cards and documentation.

license:apache-2.0
0
2