GLM-4.7-Flash-GGUF

License: MIT
by evilfreelancer
Language Model
3K downloads
Early-stage
Edge AI: Mobile · Laptop · Server
Quick Summary

GGUF quantizations of GLM-4.7-Flash, a mixture-of-experts language model, packaged for local inference with llama.cpp.

Code Examples

llama.cpp (Docker Compose):

```yaml
x-shared-logs: &shared-logs
  logging:
    driver: "json-file"
    options:
      max-size: "100k"

services:
  glm47-flash-30b:
    image: ghcr.io/ggml-org/llama.cpp:server-cuda
    restart: unless-stopped
    volumes:
      # cache downloaded GGUF weights between container restarts
      - ./llama-cpp_data:/root/.cache
    ports:
      - "8080:8080"
    # -hf pulls the MXFP4_MOE quant from Hugging Face; -ngl 99 offloads all
    # layers to GPU; -c 202752 sets the context length; -np 10 serves up to
    # 10 parallel requests; --jinja enables the model's chat template
    command: --host 0.0.0.0 --port 8080 -hf evilfreelancer/GLM-4.7-Flash-GGUF:MXFP4_MOE -fa 1 -ngl 99 -ub 4092 -b 4092 -c 202752 --jinja -np 10 -t 48 --threads-batch 96 --temp 1.0 --min-p 0.01 --top-p 0.95 --dry-multiplier 1.1
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            device_ids: [ '0', '1', '2', '3' ]
            capabilities: [ gpu ]
    <<: *shared-logs
```
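Once the container is up, the llama.cpp server exposes an OpenAI-compatible HTTP API on the mapped port. A minimal stdlib-only client sketch is below; the `base_url` assumes the compose file above with the default `8080:8080` mapping, and the helper names (`build_chat_request`, `chat`) are this example's own, not part of llama.cpp.

```python
# Minimal chat client for the llama.cpp server started by the compose file
# above. Assumes the server is reachable at localhost:8080.
import json
import urllib.request

def build_chat_request(prompt, temperature=1.0, top_p=0.95, min_p=0.01):
    """Build the JSON body for POST /v1/chat/completions.

    The sampler fields mirror the --temp / --top-p / --min-p flags in the
    compose command; min_p is a llama.cpp extension to the OpenAI schema.
    """
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
        "min_p": min_p,
    }

def chat(prompt, base_url="http://localhost:8080"):
    """Send one chat turn and return the assistant's reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# chat("Say hello in one word.")  # requires the server to be running
```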
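The `--min-p 0.01` and `--top-p 0.95` flags in the command above prune the next-token distribution before sampling. The sketch below illustrates the idea on a toy distribution; it is a simplified standalone version, not llama.cpp's actual sampler chain, and `filter_probs` is a name invented for this example.

```python
# Illustrative sketch of min-p followed by top-p (nucleus) filtering,
# as configured by --min-p 0.01 --top-p 0.95 above. Simplified: real
# samplers work on logits and apply a configurable chain of filters.
def filter_probs(probs, top_p=0.95, min_p=0.01):
    """Return a renormalized distribution over the surviving tokens."""
    # min-p: drop tokens whose probability is below min_p * (max probability)
    max_p = max(probs.values())
    kept = {t: p for t, p in probs.items() if p >= min_p * max_p}
    # top-p: keep the smallest set of highest-probability tokens whose
    # cumulative mass reaches top_p
    total, nucleus = 0.0, {}
    for t, p in sorted(kept.items(), key=lambda kv: -kv[1]):
        nucleus[t] = p
        total += p
        if total >= top_p:
            break
    # renormalize so the kept probabilities sum to 1
    z = sum(nucleus.values())
    return {t: p / z for t, p in nucleus.items()}

toy = {"the": 0.5, "a": 0.3, "zebra": 0.15, "qux": 0.05}
filtered = filter_probs(toy)  # "qux" falls outside the 0.95 nucleus
```

Low-probability tails are cut first by min-p (relative to the top token), then the nucleus cut keeps only enough tokens to cover 95% of the remaining mass.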

Deploy This Model

Production-ready deployment in minutes.

Together.ai (Fastest API): instant API access to this model. Production-ready inference API; start free, scale to millions.

Replicate (Easiest Setup): one-click model deployment. Run models in the cloud with a simple API, no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.