Store
Deep Hermes, but without the need for a system prompt. Autonomously responds based on its own judgment. https://github.com/cocktailpeanut/deeperhermes
Hunyuan3D-2-LowVRAM (Featured)
Text/Image to 3D (Cross Platform: Mac + Windows + Linux): High-Resolution 3D Asset Generation with Large-Scale Hunyuan3D Diffusion Models. https://github.com/deepbeepmeep/Hunyuan3D-2GP
One click face-swap GUI
e2-f5-tts (Featured)
F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching https://huggingface.co/spaces/mrfakename/E2-F5-TTS
AI Green Screen Keyer & Alpha Generator
A multi-voice AI audiobook generator built on Qwen3-TTS — annotate scripts with an LLM, assign unique voices to each character, give per-line style instructions for delivery, clone voices from reference audio, design new voices from text descriptions, train custom voices with LoRA fine-tuning, and export to MP3 or Audacity multi-track projects.
Stable Diffusion web UI
Remove backgrounds from videos and images with precision AI matting. Runs locally on 12GB VRAM — Windows, Linux, and macOS.
AI Song Generation on Apple Silicon Macs, with Full Style Control - Generate complete songs with lyrics, vocals, and instrumental tracks using Tencent AI Lab's SongGeneration (LeVo) model.
[AMD ONLY] Super-optimized Gradio UI for AI video creation on GPU-poor machines (6GB+ VRAM). Supports Wan 2.1/2.2, Qwen, Hunyuan Video, LTX Video, and Flux. (On Windows, supports the 7900 (XT), 7800 (XT), 7600 (XT), 9070 (XT), Phoenix, and Strix Halo.)
FramePack (Featured)
[NVIDIA ONLY] Generate Video Progressively. FramePack is a next-frame (next-frame-section) prediction neural network structure that generates videos progressively. https://github.com/lllyasviel/FramePack
WebUI for ML-Sharp (3DGS) (Featured)
One-click 3D Gaussian Splatting generation from a single image.
Pinokio launcher for LTX-Desktop-WanGP (local video generation with WanGP backend)
Local UI for MLX Video (Next.js frontend + FastAPI backend).
[NVIDIA, ROCm] One app to train them all. LoRA training and model fine-tuning for Z-Image, Qwen Image, FLUX.1, Flux.2 Dev and Klein, Chroma, SD 1.5 - 3.5, SDXL, Würstchen-v2, Stable Cascade, PixArt-Alpha, PixArt-Sigma, Sana, Hunyuan Video, and inpainting models.
Minimal Stable Diffusion UI
Practical human video matting framework that preserves fine details. Drop your video, assign target masks with a few clicks, and get foreground/alpha matting results.
Native C++ AI music generation — no Python required
