Qwen3.5-35B-A3B-Holodeck-Qwopus-qx86-hi-mlx

apache-2.0 license · by nightmedia · 35B params · 252 downloads
Quick Summary

An MLX-quantized (qx86-hi) build of the Qwen3.5-35B-A3B Holodeck-Qwopus model, intended for local inference with mlx-lm.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 33GB+ RAM
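
As a rough pre-flight check before downloading the weights, you can compare a machine's memory against the table's minimum recommendation; a minimal sketch (the 33GB threshold is the figure above, not a guarantee of fit):

```python
MIN_RECOMMENDED_GB = 33  # from the compatibility table above

def meets_minimum(available_gb: float, headroom_gb: float = 0.0) -> bool:
    """True if available RAM covers the minimum recommendation plus optional headroom."""
    return available_gb >= MIN_RECOMMENDED_GB + headroom_gb

print(meets_minimum(16))  # typical 16GB laptop: False
print(meets_minimum(64))  # True
```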

Code Examples

Step 2: The Humor & Paradox Detector

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Holodeck.HumorDetector where

import Data.Maybe (fromMaybe)
import Data.Text (Text)
import qualified Data.Text as T
import qualified Data.HashMap.Strict as HM

-- | A minimal agent carrying named personality traits (e.g. "humor" in [0, 1]).
newtype Agent = Agent { agentTraits :: HM.HashMap Text Double }

data HumorType = Punchline | Paradox | Nonsense deriving (Show, Eq)

-- | Detects whether a string is a joke, a paradox, or nonsense.
detectHumor :: Text -> IO (Maybe HumorType)
detectHumor text = do
  let lowerText = T.toLower text

      -- Simple heuristic: jokes often end with a question mark or have a
      -- setup/punchline structure
      isJoke = "?" `T.isSuffixOf` lowerText || "why" `T.isInfixOf` lowerText

      -- Paradoxes often contain words like "impossible", "contradiction", "dream"
      isParadox = "impossible" `T.isInfixOf` lowerText ||
                  "contradiction" `T.isInfixOf` lowerText ||
                  "dream" `T.isInfixOf` lowerText

      -- Nonsense is long rambling text that matches neither (simplified check)
      isNonsense = length (T.words lowerText) > 10 && not (isJoke || isParadox)

  if isJoke then return (Just Punchline)
    else if isParadox then return (Just Paradox)
    else if isNonsense then return (Just Nonsense)
    else return Nothing

-- | Evaluate the "funniness" of a joke based on agent reactions.
evaluateJoke :: Text -> [Agent] -> IO Double
evaluateJoke _joke agents = do
  -- Simulate reactions: count how many agents "laugh", i.e. whose humor
  -- trait exceeds 0.5 (defaulting to 0.1 when the trait is absent)
  let laughs = filter (\a -> fromMaybe 0.1 (HM.lookup "humor" (agentTraits a)) > 0.5) agents
      maxReactions = max 1 (length agents)
  return $ fromIntegral (length laughs) / fromIntegral maxReactions -- score between 0 and 1
```
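
For readers who want to experiment without a Haskell toolchain, the same heuristics port directly to Python. This is a loose sketch of the logic above, not part of the original module; agents are modeled as plain dicts of traits:

```python
def detect_humor(text):
    """Classify text as 'punchline', 'paradox', 'nonsense', or None.

    Mirrors the Haskell heuristics: jokes end in '?' or contain 'why';
    paradoxes mention 'impossible', 'contradiction', or 'dream';
    anything over 10 words that matches neither counts as nonsense.
    """
    t = text.lower()
    if t.endswith("?") or "why" in t:
        return "punchline"
    if any(w in t for w in ("impossible", "contradiction", "dream")):
        return "paradox"
    if len(t.split()) > 10:
        return "nonsense"
    return None

def evaluate_joke(agents):
    """Fraction of agents that 'laugh': humor trait above 0.5, defaulting to 0.1."""
    if not agents:
        return 0.0
    laughs = sum(1 for a in agents if a.get("humor", 0.1) > 0.5)
    return laughs / len(agents)

print(detect_humor("Why did the chicken cross the road?"))  # punchline
print(evaluate_joke([{"humor": 0.9}, {"humor": 0.2}]))      # 0.5
```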
Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("Qwen3.5-35B-A3B-Holodeck-Qwopus-qx86-hi-mlx")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
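
The chat-template branch above matters because some quantized exports ship without one. A minimal sketch of that fallback logic, with a hypothetical stand-in for the rendering step (a real tokenizer renders the messages through its own Jinja template via `apply_chat_template`):

```python
def build_prompt(user_text, chat_template=None):
    """Return the raw prompt when no chat template exists, else a chat-formatted one."""
    if chat_template is None:
        return user_text  # plain completion prompt, as in the snippet above
    messages = [{"role": "user", "content": user_text}]
    # Hypothetical rendering for illustration; mlx-lm's tokenizer does this for real.
    rendered = "".join(f"<|{m['role']}|>{m['content']}" for m in messages)
    return rendered + "<|assistant|>"

print(build_prompt("hello"))              # hello
print(build_prompt("hello", "template"))  # <|user|>hello<|assistant|>
```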

Deploy This Model

Production-ready deployment in minutes:

Together.ai: instant API access to this model through a production-ready inference API; start free, scale to millions of requests.

Replicate: one-click deployment; run the model in the cloud through a simple API, no DevOps required.
