LeroyDyer

87 models

SpydazWeb_AI_ImageText_Text_Project

license:mit
699
5

SpydazWebAI_QuietStar_Project

license:mit
570
2

Mixtral_AI_Vision_128k_7b

license:mit
294
5

SpydazWebAI_Image_Projectors

179
2

Mixtral_Instruct_7b

license:mit
108
2

Mixtral_AI_llava_4bit

license:apache-2.0
90
3

LCARS_AI_QstaR_Nemo_GGUF

license:apache-2.0
58
1

Mixtral_BaseModel-7b

license:mit
53
1

Language_VisionModel_GGUF

llama-cpp
40
2

Mixtral_Chat_7b

license:mit
39
2

Mixtral_BioMedical_7b

28
1

SpydazWeb_AI_HumanAGI_002-Q4_K_M-GGUF

llama-cpp
23
0

_Spydaz_Web_AI_LCARS_SYSTEM_02_BIBLE-Q4_K_M-GGUF

license:apache-2.0
19
0

SpydazWeb_AI_HumanAI_RP

Language support includes English and Swahili.

dataset:xz56/react-llama
16
1

_Starfleet_II_-Q4_K_S-GGUF

llama-cpp
15
0

Mixtral_AI_Cyber_Dolphin-Q4_K_M-GGUF

llama-cpp
14
1

Qwen3-0.6B-Q4_K_M-GGUF

llama-cpp
12
1

SpydazWeb_AI_CyberTron_Ultra_7b

Language: English. License: Apache 2.0.

license:apache-2.0
11
5

LCARS_AI_StarTrek_Computer

Language: English. License: MIT.

license:mit
11
4

SpydazWeb_VisonEncoderDecoder_Project

license:apache-2.0
11
1

LCARS_TOP_SCORE

Language: English. License: OpenRAIL.

10
2

QuietStar_Project

license:mit
10
2

Mixtral_AI_1x4

llama-cpp
10
1

LCARS_STARFLEET

license:apache-2.0
10
0

_Spydaz_Web_AI_AGI_R1_Top_Student

Built with the transformers library and tagged as a mergekit merge.

9
1

_Spydaz_Web_AI_LCARS_SYSTEM-Q4_K_M-GGUF

llama-cpp
9
0

LCARS_STARFLEET-Q4_K_S-GGUF

LeroyDyer/LCARS_STARFLEET-Q4_K_S-GGUF — this model was converted to GGUF format from `LeroyDyer/LCARS_STARFLEET` using llama.cpp via the ggml.ai GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). A loading sketch is given below.
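As an alternative to the CLI, here is a minimal sketch of loading this checkpoint with llama-cpp-python; the filename glob is an assumption about the quantized file shipped in the repo, and the prompt is illustrative:

```python
# Hedged sketch: pull the GGUF straight from the Hub and run a completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LeroyDyer/LCARS_STARFLEET-Q4_K_S-GGUF",
    filename="*q4_k_s.gguf",  # glob for the Q4_K_S quantization (assumed file name)
    n_ctx=4096,               # context window for this session
)

out = llm("Describe the LCARS interface in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```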

llama-cpp
9
0

_Spydaz_Web_LCARS_TEST-Q4_K_S-GGUF

llama-cpp
9
0

_LCARS_TOOL_CALLER_-Q4_K_S-GGUF

llama-cpp
9
0

Mixtral_AI_Cyber_Matrix_2_0

license:mit
8
5

Mixtral_AI_Cyber_Orca-Q4_K_M-GGUF

llama-cpp
8
1

_Spydaz_Web_AI_AGI_R1_OmG_Coder

Language: English. License: Apache 2.0.

license:apache-2.0
8
1

LCARS_MASTER_SYSTEM_GGUF

This is a 7B-parameter Mistral-based model designed to provide highly detailed, humanized responses with advanced reasoning capabilities. It combines multiple specialized training approaches to create a versatile AI assistant capable of both technical tasks and natural conversation.

- Context Length: 32k tokens (optimized for reliable performance at 4k chunks)
- Multi-domain Expertise: cross-trained on coding, medical, financial, and general problem-solving datasets
- Humanized Responses: trained on conversation patterns to provide more natural, empathetic interactions
- Advanced Reasoning: incorporates structured thinking patterns and step-by-step problem solving
- Historical Knowledge: specialized training on biblical texts, ancient documents, and archaeological materials

The model employs a multi-stage training methodology:
1. Base Training: foundation models merged for complementary capabilities
2. Specialized Training: domain-specific datasets for expertise areas
3. Humanization Training: conversation datasets to improve social interaction
4. Context Optimization: training with varying context lengths to find optimal performance ranges

Through extensive testing, we discovered that while the model supports 32k context, optimal performance occurs around 4k tokens. The model is designed to continue responses across multiple turns when needed, effectively managing longer conversations through segmentation.

Training data:
- LeroyDyer/Humanization001: conversation patterns and social-interaction training
- LeroyDyer/QAOrganizedReasoningdataset001/002: structured reasoning and problem solving
- Biblical and ancient texts: complete biblical sources in multiple languages via the SALT dataset
- Archaeological archives: papers and translations from explorers and archaeologists

The model supports agentic prompt patterns for complex problem solving, includes experimental support for image-to-text conversion using Base64 encoding, and includes comprehensive audio-processing capabilities.

Recommended settings (a generation sketch follows this card):
- Temperature: 0.1-0.3 for analytical tasks, 0.7-0.9 for creative tasks
- Max Tokens: 4096 for best performance (use "continue" for longer responses)
- Context Window: chunk inputs over 4k tokens across multiple interactions
- Repetition Penalty: 1.1-1.15 to avoid repetitive responses

Best practices:
1. Structured Queries: use clear, specific questions for best results
2. Context Management: break long contexts into manageable chunks
3. Multi-turn Conversations: utilize the model's ability to continue responses across turns
4. Expert Mode: trigger specialized agents for domain-specific tasks
5. Reasoning Tasks: use structured prompts with thinking tags for complex problems

Through extensive testing, we discovered that models should ideally be trained with larger contexts, but practical performance often peaks at smaller token counts; the actual usable context length should be determined through empirical testing rather than theoretical maximums. Previous approaches using multiple model merging often led to corruption, so the current approach focuses on one-to-one merging to ensure response quality and capability preservation. The model combines task-oriented efficiency with conversational naturalness, creating an AI that can both perform complex technical tasks and engage in meaningful dialogue, making it suitable for both professional and personal use cases.
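A minimal sketch of the recommended settings above, assuming a transformers-loadable checkpoint; the repo id and prompt are illustrative assumptions, not taken from the card:

```python
# Hedged sketch: apply the card's analytical-task settings with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LeroyDyer/LCARS_MASTER_SYSTEM_GGUF"  # illustrative id; GGUF files need a GGUF loader
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Summarise the key risks in this contract:", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=4096,     # card recommends 4096; say "continue" for longer answers
    do_sample=True,
    temperature=0.2,         # 0.1-0.3 for analytical tasks (0.7-0.9 for creative)
    repetition_penalty=1.1,  # card recommends 1.1-1.15
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```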
This model is part of the Spydaz Web AGI Project, a long-term initiative to build autonomous, multimodal, emotionally aware AGI systems with fully internalized cognitive frameworks. The goal is to push boundaries in reasoning, decision-making, and intelligent tooling. Licensed under Apache 2.0. Created by Leroy Dyer as part of the SpydazWeb AI initiative.

Citation (BibTeX):
@software{SpydazWebAIHumanAdvanced2024,
  author = {Leroy Dyer},
  title = {SpydazWeb AI Human Advanced Model},
  year = {2024},
  url = {https://huggingface.co/LeroyDyer/SpydazWebAIHumanAdvanced},
  note = {Multimodal AGI system with humanized interaction capabilities}
}

Support and contact: full technical specifications are available in the documentation. Note: this model contains unfiltered historical and religious content from original sources; implement appropriate safeguards for your application context. The model maintains academic integrity by presenting sources accurately while providing cultural and historical context.

license:apache-2.0
8
1

Mixtral_AI_CyberBrain_Coder_1x2-Q4_K_M-GGUF

llama-cpp
8
0

_Spydaz_Web_AGI_DeepThink_

license:apache-2.0
6
1

LCARS_AI_1x2_001_SuperAI-Q4_K_S-GGUF

llama-cpp
6
0

SpyazWeb_AI_DeepMind_Project

license:apache-2.0
5
4

Mixtral_Instruct

license:apache-2.0
5
1

Mixtral_AI_CyberTron

5
1

Mixtral_Base

license:mit
5
0

_Spydaz_Web_AI_AGI_R1_Math_AdvancedStudent

A math model at the advanced-student level, built with the transformers library and mergekit.

5
0

Mixtral_BioMedical

license:mit
4
2

SpydazWeb_AI_Extended_Context_128k_Yarn_Project

license:mit
4
1

SpydazWebAI_SpeechEncoderDecoder_Mini548m

license:mit
4
1

SpydazWebAI_VisionEncoderDecoderModel_Mini3b

4
0

_Spydaz_Web_AI_LCARS_SYSTEM_01-Q4_K_M-GGUF

Creating human-advanced AI: success is a game of winners. The Human AI (a lot of bad models to get to this one, finally!). This model has been trained to respond in a more human manner as well as to exhibit adaptive behaviours: it knows when to think and when not to think. Some answers are direct and need no deliberation, while others are task-based questions that do, so the model is not stuck on a single response type.

SpydazWeb AI (7B Mistral, max context 128k). This model has been trained to perform with contexts of 512k, although in training it mainly used 2048 tokens for general usage. A new genre of AI, trained to give highly detailed, humanized responses. It performs tasks well and is a very good multipurpose model: it has been trained to become more human in its responses, as well as for role playing and storytelling. This latest model has been trained on conversations with a desire to respond with expressive, emotive content, as well as discussions on various topics, and has been focused on conversations from human interactions; hence there may be NSFW content in the model. This has in no way inhibited its other tasks, which were also aligned using the new intensive and expressive prompt.

AI aims to model human thought, a goal of cognitive science across fields like psychology and computer science. Thinking rationally: AI also seeks to formalize "laws of thought" through logic, though human thinking is often inconsistent and uncertain. Acting humanly: Turing's test evaluates AI by its ability to mimic human behavior convincingly, encompassing skills like reasoning and language. Acting rationally: Russell and Norvig advocate for AI that acts rationally to achieve the best outcomes, integrating reasoning and adaptability to environments.

Domains of focus: the model was trained with cross-domain expertise. Our training approach encourages cognitive emulation, blending multiple reasoning modes into a single thought engine. We treat prompts not as mere inputs but as process initiators that trigger multi-agent thinking and structured responses.

Data-creation strategy: combine the relevant datasets into a single dataset and prompt setup. A dataset can sway a model's behaviour; the R1 reasoning models can be a pain, so we combine reasoning datasets with non-reasoning datasets and humanize the total dataset before training the model on it. The tasks are generally coding and multi-step reasoning tasks. We have mixed rude and polite responses, as well as some toxic responses and persona responses (based on a character or an expert perspective); the answers returned are true, often distilled from other models or datasets.

Long prompt: this prompt elicits the reasoning behaviour as well as analytical thinking mechanisms. Graphs: graphs can also be used as prompts, or within a prompt, giving examples of how tasks can be solved. Agentic prompt: this prompt encourages the model to generate expert teams to solve problems, as well as to set up virtual labs to safely simulate experiments. Example workflows for this prompt include competitive code review (a multi-agent adversarial pattern in which agents compete to find the best solution) and reinforcement learning for customer support (an adaptive workflow in which agents learn from feedback to improve future runs).

Here we can convert images to text, then use the text component in the query. So we train on images converted to Base64; then, if an image is returned, we can decode it from Base64 back into an image (a round-trip sketch follows this card). This methodology is painstaking: it requires masses of images and conversions to text. But after training, the task is embedded into the model, enabling such expansive queries as well as grounding the model in Base64 information. We can even convert incoming dataset images to Base64 on the fly. The audio pipeline runs from extracting a mel-spectrogram from an image (direct pixel manipulation) through a full audio-processing pipeline with customization. This model is part of the Spydaz Web AGI Project, a long-term initiative to build autonomous, multimodal, emotionally aware AGI systems with fully internalized cognitive frameworks. If your goal is to push boundaries in reasoning, decision-making, or intelligent tooling, this model is your launchpad.
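A minimal sketch of the Base64 image round-trip described above, using only the Python standard library; the file names and the `<image>` tag format are illustrative assumptions, not the card's actual prompt format:

```python
import base64

# Encode an image file into a Base64 string that can travel inside a text prompt.
with open("input.png", "rb") as f:  # illustrative file name
    b64_text = base64.b64encode(f.read()).decode("ascii")

# Hypothetical prompt wrapper; the card does not specify the real tag format.
prompt = f"Describe this image: <image>{b64_text}</image>"

# Decode a Base64 string emitted by the model back into an image file.
with open("output.png", "wb") as f:
    f.write(base64.b64decode(b64_text))
```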

license:apache-2.0
4
0

_Spydaz_Web_AI_

dataset:Shekswess/medical_llama3_instruct_dataset_short
3
5

_Spydaz_Web_AI_MistralStar_001_Project

license:mit
3
1

_Spydaz_Web_ONTOLOGY_OFFICER_

license:apache-2.0
3
1

_Spydaz_Web_OPERATIONS_OFFICER_

- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model: LeroyDyer/_Spydaz_Web_ONTOLOGY_OFFICER_

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library; a hedged fine-tuning sketch follows below.
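A minimal sketch of the Unsloth + TRL workflow the card credits; the dataset, LoRA targets, and hyperparameters here are illustrative assumptions, not the author's actual training recipe:

```python
# Hedged sketch: 4-bit load with Unsloth, LoRA adapters, then TRL's SFTTrainer.
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/_Spydaz_Web_ONTOLOGY_OFFICER_",  # base model per the card
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit loading is Unsloth's usual memory saver
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed LoRA targets
)

# Tiny placeholder dataset; the real training data is not published in this card.
train_dataset = Dataset.from_dict({"text": ["### Question: ...\n### Answer: ..."]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(output_dir="outputs", per_device_train_batch_size=2, max_steps=60),
)
trainer.train()
```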

license:apache-2.0
3
1

_Spydaz_Web_SCIENCE_OFFICER_

Winners create more winners, while losers do the opposite. Success is a game of winners.

SpydazWeb AI (7B Mistral, max context 128k). This model has been trained to perform with contexts of 512k, although in training it mainly used 2048 tokens for general usage. A new genre of AI, trained to give highly detailed, humanized responses. It performs tasks well and is a very good multipurpose model: it has been trained to become more human in its responses, as well as for role playing and storytelling. This latest model has been trained on conversations with a desire to respond with expressive, emotive content, as well as discussions on various topics, and has been focused on conversations from human interactions; hence there may be NSFW content in the model. This has in no way inhibited its other tasks, which were also aligned using the new intensive and expressive prompt.

AI aims to model human thought, a goal of cognitive science across fields like psychology and computer science. Thinking rationally: AI also seeks to formalize "laws of thought" through logic, though human thinking is often inconsistent and uncertain. Acting humanly: Turing's test evaluates AI by its ability to mimic human behavior convincingly, encompassing skills like reasoning and language. Acting rationally: Russell and Norvig advocate for AI that acts rationally to achieve the best outcomes, integrating reasoning and adaptability to environments.

Domains of focus: the model was trained with cross-domain expertise. Our training approach encourages cognitive emulation, blending multiple reasoning modes into a single thought engine. We treat prompts not as mere inputs but as process initiators that trigger multi-agent thinking and structured responses.

Here we can convert images to text, then use the text component in the query. So we train on images converted to Base64; then, if an image is returned, we can decode it from Base64 back into an image. This methodology is painstaking: it requires masses of images and conversions to text. But after training, the task is embedded into the model, enabling such expansive queries as well as grounding the model in Base64 information. We can even convert incoming dataset images to Base64 on the fly. The audio pipeline runs from extracting a mel-spectrogram from an image (direct pixel manipulation) through a full audio-processing pipeline with customization; a sketch of the spectrogram step follows this card. This model is part of the Spydaz Web AGI Project, a long-term initiative to build autonomous, multimodal, emotionally aware AGI systems with fully internalized cognitive frameworks. If your goal is to push boundaries in reasoning, decision-making, or intelligent tooling, this model is your launchpad.
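A minimal sketch of the mel-spectrogram step named above, assuming librosa is available; the input file, sample rate, and pixel normalization are illustrative assumptions:

```python
# Hedged sketch: compute a log-mel spectrogram and treat it as image pixels.
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=16000)  # illustrative file name and rate
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)  # log-scaled spectrogram

# Normalize to 0-255 so the spectrogram can be handled as image pixels,
# matching the card's "direct pixel manipulation" framing.
pixels = ((mel_db - mel_db.min()) / (mel_db.max() - mel_db.min()) * 255).astype(np.uint8)
print(pixels.shape)  # (n_mels, time_frames)
```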

license:apache-2.0
3
0

_Spydaz_Web_LCARS_TEST

license:apache-2.0
3
0

_LCARS_TOOL_CALLER_

license:apache-2.0
3
0

_Starfleet_II_

license:apache-2.0
3
0

Mixtral_AI_MiniTron_Swahili_3.75b

license:apache-2.0
2
2

LCARS_AI_StarTrek_Computer-Q4_K_S-GGUF

llama-cpp
2
1

_Spydaz_Web_AI_LlavaNextVideo

dataset:Shekswess/medical_llama3_instruct_dataset_short
2
1

_Spydaz_Web_AI_Mistral_R1_Base

2
1

Mixtral_AI_SwahiliTron_4BIT

license:apache-2.0
2
0

_Spydaz_Web_AGI_DeepThink_R1_

license:apache-2.0
2
0

_Spydaz_Web_LCARS_Artificial_Human_R1_007

- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model: LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_R1_002-Multi-lingual-Thinking

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

license:apache-2.0
2
0

SpydazWeb_AI_Swahili_Project

license:mit
1
4

Mixtral_AI_Cyber_4.0

license:mit
1
2

Llava_1.5_7b_4_bit

1
1

SpydazWebAI_MultiModel_001_Project

license:apache-2.0
1
1

SpydazWeb_AI_DolphinCoder_7b

license:apache-2.0
1
1

_Spydaz_Web_AI_Llava

dataset:Shekswess/medical_llama3_instruct_dataset_short
1
1

_Spydaz_Web_AGI_DeepThink_R3_

license:apache-2.0
1
1

_Spydaz_Web_AGI_DeepThinker_LCARS_

license:apache-2.0
1
1

Mixtral_AI_Cyber_MegaMind_1x4-Q4_K_M-GGUF

llama-cpp
1
0

_Spydaz_Web_AI_Student_History_-Q4_K_M-GGUF

llama-cpp
1
0

Mixtral_AI_CyberVision

0
3

Mixtral_AI_TokenClassification_Project

0
2

_Spydaz_Web_AI_MistralStar_4BIT

license:mit
0
2

LCARS_Specialist_MYTH_BUSTER_

license:apache-2.0
0
1

Mistral_WhiteHatCoder_Base_Instruct_Moe_3x7b

license:mit
0
1

SpydazWebAI_MiniAI_248m

0
1

SpydazWebAI_VisionEncoderDecoderModel_Mini548m

license:mit
0
1

SpydazWeb_Speech_Vision_EncoderDecoder_Multimodal_5b_Project

license:mit
0
1

SpydazWeb_AGI_MistralStar_001_Project

license:mit
0
1

_Spydaz_Web_AI_MistralStar_V2

license:mit
0
1

_Spydaz_Web_AI_LlavaNext

dataset:Shekswess/medical_llama3_instruct_dataset_short
0
1

Mistral-Videolm

license:apache-2.0
0
1

Mistral-OneVisionlm

license:apache-2.0
0
1

_Spydaz_Web_AI_FinTech_001

license:apache-2.0
0
1

_Spydaz_Web_LCARS_AdvancedHuman_Archive

dataset:Shekswess/medical_llama3_instruct_dataset_short
0
1

_Spydaz_Web_AI_LCARS_MASTER_SYSTEM

license:apache-2.0
0
1