# AnyTalker-1.3B

- Author: zzz66
- License: apache-2.0
- Type: Audio Model
- Size: 1.3B params
## Quick Summary

AnyTalker-1.3B is a 1.3B-parameter audio-driven model; its bundled benchmark pairs per-speaker audio tracks with a reference video frame for multi-speaker talking-video generation.
## Device Compatibility

| Device | Recommended |
| ------ | ----------- |
| Mobile | 4-6GB RAM   |
| Laptop | 16GB RAM    |
| Server | GPU         |

Minimum recommended: 2GB+ RAM
## Code Examples

### Quick Start

1. Create the conda environment and install the pinned PyTorch build:

```text
conda create -n AnyTalker python=3.10
conda activate AnyTalker
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
```
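A quick way to confirm the pinned packages actually landed in the environment is a version probe. The sketch below uses only the standard library (the helper name is mine, not from the repo) and prints "missing" for anything not yet installed:

```python
# Sanity-check sketch: report installed versions of the packages pinned above.
from importlib import metadata
from typing import Optional

def installed_version(pkg: str) -> Optional[str]:
    """Return the installed version of pkg, or None if it is not installed."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None

for pkg in ("torch", "torchvision", "torchaudio", "flash_attn"):
    print(f"{pkg}: {installed_version(pkg) or 'missing'}")
```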
2. Other dependencies:

```text
pip install -r requirements.txt
```

3. Flash attention:

```text
pip install ninja
pip install flash_attn==2.8.1 --no-build-isolation
```

4. FFmpeg installation:
```bash
# Ubuntu / Debian
apt-get install ffmpeg

# CentOS / RHEL
yum install ffmpeg ffmpeg-devel

# Conda (no root required)
conda install -c conda-forge ffmpeg
```
Verify that the libx264 encoder is available:

```bash
ffmpeg -encoders | grep libx264
```

If libx264 is missing, install a conda-forge build that includes it:

```bash
conda install -c conda-forge ffmpeg=7.1.0
```
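The same libx264 check can be scripted. The sketch below (the function name is mine, not from the repo) is the programmatic version of `ffmpeg -encoders | grep libx264`, returning False instead of raising when ffmpeg is not on PATH:

```python
# Sketch: check whether the local ffmpeg build lists the libx264 encoder.
import shutil
import subprocess

def has_libx264() -> bool:
    """True if ffmpeg is on PATH and its encoder list includes libx264."""
    if shutil.which("ffmpeg") is None:
        return False
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True, check=False,
    )
    return "libx264" in result.stdout

print(has_libx264())
```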
### Download the Dataset from YouTube

1. Install yt-dlp:

```bash
python -m pip install -U yt-dlp
```

2. Download the benchmark clips:

```bash
cd ./benchmark
python download.py
```
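If you need to fetch a single clip outside of `download.py`, yt-dlp can also be driven from Python. This is a sketch of mine, not the repo's script: it builds a yt-dlp command line using the standard `-f` format selector and `-o` output template, and skips execution gracefully when yt-dlp is not installed.

```python
# Sketch: minimal wrapper around the yt-dlp CLI (helper names are mine).
import shutil
import subprocess
from typing import List

def ytdlp_cmd(url: str, out_dir: str = "./benchmark/raw") -> List[str]:
    """Build the yt-dlp command line for one video, named by video id."""
    return ["yt-dlp", "-f", "mp4", "-o", f"{out_dir}/%(id)s.%(ext)s", url]

def download_clip(url: str, out_dir: str = "./benchmark/raw") -> bool:
    """Run yt-dlp if available; return True on success, False otherwise."""
    if shutil.which("yt-dlp") is None:
        return False  # yt-dlp not installed
    return subprocess.run(ytdlp_cmd(url, out_dir), check=False).returncode == 0
```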
The download produces the following layout:

```text
benchmark/
├── audio_left             # Audio for left speaker (zero-padded to full length)
├── audio_right            # Audio for right speaker (zero-padded to full length)
├── speaker_duration.json  # Start/end timestamps for each speaker
├── interact_11.mp4        # Example video
└── frames                 # Reference image supplied as the first video frame
```
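Because the per-speaker audio is zero-padded to the full clip length, `speaker_duration.json` is what tells you where each speaker is actually active. The exact schema is not documented here, so the sketch below assumes a simple `{clip: {speaker: [start, end]}}` layout with timestamps in seconds; adjust the keys to match the real file.

```python
# Sketch: compute each speaker's active duration from start/end timestamps.
# The JSON schema used here is an assumption, not the repo's documented format.
import json
from pathlib import Path

def active_durations(path: Path) -> dict:
    """Map clip -> speaker -> active seconds (end - start)."""
    timestamps = json.loads(path.read_text())
    return {
        clip: {spk: round(end - start, 3) for spk, (start, end) in speakers.items()}
        for clip, speakers in timestamps.items()
    }

# Example with the assumed schema:
sample = {"interact_11": {"left": [0.0, 4.2], "right": [4.2, 9.0]}}
tmp = Path("speaker_duration_sample.json")
tmp.write_text(json.dumps(sample))
print(active_durations(tmp))  # {'interact_11': {'left': 4.2, 'right': 4.8}}
tmp.unlink()
```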