AnyTalker-1.3B

by zzz66 · License: Apache-2.0
Audio model · 1.3B params · New (0 downloads, early-stage)
Edge AI targets: Mobile, Laptop, Server (3GB+ RAM)
Quick Summary

AnyTalker-1.3B is a 1.3B-parameter audio model released under Apache-2.0. Judging from the bundled benchmark (per-speaker audio tracks, speaker timestamps, reference frames, and an example two-speaker video), it appears aimed at audio-driven talking-video generation.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 2GB+ RAM
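As an illustration only (the helper and tier names below are ours, not part of the model card), the thresholds above can be turned into a simple device-tier check:

```python
# Illustrative mapping of available RAM to the tiers listed above.
# Thresholds come from the compatibility table; the function is an assumption.
def suggested_tier(ram_gb: float) -> str:
    """Return a rough device tier for a given amount of RAM in GB."""
    if ram_gb < 2:
        return "below minimum (2GB+ recommended)"
    if ram_gb < 16:
        return "mobile (4-6GB RAM recommended)"
    return "laptop (16GB RAM) or GPU server for heavier workloads"
```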

Code Examples

Quick Start

1. Create the environment and install PyTorch:

```bash
conda create -n AnyTalker python=3.10
conda activate AnyTalker
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
```
2. Install the other dependencies:

```bash
pip install -r requirements.txt
```

3. Install ninja and FlashAttention:

```bash
pip install ninja
pip install flash_attn==2.8.1 --no-build-isolation
```
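After installing the pinned packages above, a quick self-check (our own helper, not from the repo) can confirm the installed distributions match the expected versions:

```python
# Verify that the pinned versions from the setup steps above are what
# actually got installed in the active environment.
from importlib.metadata import version, PackageNotFoundError

PINS = {
    "torch": "2.6.0",
    "torchvision": "0.21.0",
    "torchaudio": "2.6.0",
    "flash_attn": "2.8.1",
}

def check_pins(pins: dict[str, str]) -> list[str]:
    """Return human-readable mismatch messages; an empty list means all pins match."""
    problems = []
    for pkg, want in pins.items():
        try:
            got = version(pkg)
        except PackageNotFoundError:
            problems.append(f"{pkg}: not installed (want {want})")
            continue
        if got != want:
            problems.append(f"{pkg}: found {got}, want {want}")
    return problems
```

Running `check_pins(PINS)` in the AnyTalker environment should return an empty list once all four packages are installed at the pinned versions.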
4. Install FFmpeg (pick one of the following):

```bash
# Ubuntu / Debian
apt-get install ffmpeg
```

```bash
# CentOS / RHEL
yum install ffmpeg ffmpeg-devel
```

```bash
# Conda (no root required)
conda install -c conda-forge ffmpeg
```

Verify that the libx264 encoder is available:

```bash
ffmpeg -encoders | grep libx264
```

If nothing is printed, install a conda-forge build that bundles it:

```bash
conda install -c conda-forge ffmpeg=7.1.0
```
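The same encoder check can be scripted. This sketch is our own helper (not from the repo); the parsing is split from the subprocess call so the logic works even where ffmpeg is not installed:

```python
import subprocess

def has_encoder(encoders_output: str, name: str) -> bool:
    """True if `name` appears as a token in `ffmpeg -encoders` output."""
    return any(name in line.split() for line in encoders_output.splitlines())

def ffmpeg_has_libx264() -> bool:
    """Run `ffmpeg -encoders` and look for the libx264 H.264 encoder."""
    out = subprocess.run(["ffmpeg", "-encoders"],
                         capture_output=True, text=True, check=True).stdout
    return has_encoder(out, "libx264")
```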
Download the benchmark dataset from YouTube

1. Install yt-dlp:

```bash
python -m pip install -U yt-dlp
```

2. Run the download script:

```bash
cd ./benchmark
python download.py
```
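The real download logic lives in `benchmark/download.py`; as a hypothetical sketch of the kind of yt-dlp invocation such a script performs (video ID, format, and output template here are placeholders, not taken from the repo):

```python
# Hypothetical: build a yt-dlp command fetching one clip as mp4.
# All arguments below are illustrative assumptions.
def ytdlp_cmd(video_id: str, out_dir: str = "./benchmark") -> list[str]:
    """Return an argv list suitable for subprocess.run."""
    return [
        "yt-dlp",
        "-f", "mp4",
        "-o", f"{out_dir}/%(id)s.%(ext)s",
        f"https://www.youtube.com/watch?v={video_id}",
    ]
```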
The resulting layout:

```text
benchmark/
├── audio_left            # Audio for left speaker (zero-padded to full length)
├── audio_right           # Audio for right speaker (zero-padded to full length)
├── speaker_duration.json # Start/end timestamps for each speaker
├── interact_11.mp4       # Example video
└── frames                # Reference image supplied as the first video frame
```
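A minimal sketch of consuming `speaker_duration.json`, assuming (the actual schema is not documented here) it maps each speaker to `[start_sec, end_sec]`:

```python
import json

# Assumed schema: {"left": [start_sec, end_sec], "right": [start_sec, end_sec]}
def speaker_spans(json_text: str) -> dict[str, float]:
    """Return each speaker's active duration in seconds."""
    data = json.loads(json_text)
    return {spk: end - start for spk, (start, end) in data.items()}

# Example: speaker_spans('{"left": [0.0, 3.5], "right": [3.5, 8.0]}')
# -> {"left": 3.5, "right": 4.5}
```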
