# Overview

BonzAI provides **free, unlimited, local** AI generation across six modalities: text, image, audio, music, video, and 3D. All inference runs on your machine — nothing is sent to external servers.

## Pipelines

| Modality                                                    | Models                                | Local Port |
| ----------------------------------------------------------- | ------------------------------------- | ---------- |
| [Text (LLM)](https://docs.bonzai.sh/ai-generation/text)     | 21 models via OpenClaw                | 3002       |
| [Image](https://docs.bonzai.sh/ai-generation/image)         | 4 pipelines (FLUX, SDXL, Z-Image)     | 65000      |
| [Audio & Music](https://docs.bonzai.sh/ai-generation/audio) | Kokoro TTS, Qwen3-TTS, ACE-Step       | 65000      |
| [Video](https://docs.bonzai.sh/ai-generation/video)         | LTX-2 (text-to-video, image-to-video) | 65000      |
| [3D](https://docs.bonzai.sh/ai-generation/3d)               | AI-generated Three.js code            | —          |

## How It Works

BonzAI runs two local inference backends:

1. **OpenClaw** (port 3002) — Handles all LLM text generation via an OpenAI-compatible API (`/v1/chat/completions`)
2. **Flask Server** (port 65000) — Handles image, audio, music, video, and vision pipelines
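Because OpenClaw exposes an OpenAI-compatible API, any OpenAI-style client can point at port 3002. A minimal sketch using only the Python standard library (the model id below is illustrative, not a confirmed BonzAI model name):

```python
import json
import urllib.request
import urllib.error

def chat(prompt: str, model: str = "llama-3.1-8b",
         base: str = "http://localhost:3002") -> str:
    """Send a chat completion request to the local OpenClaw backend.

    Uses the standard OpenAI-compatible route /v1/chat/completions.
    The model id is a placeholder; query the backend for available models.
    """
    req = urllib.request.Request(
        f"{base}/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    try:
        print(chat("Write a haiku about bonsai trees."))
    except urllib.error.URLError:
        # OpenClaw is not running locally on port 3002.
        print("OpenClaw backend is not reachable on port 3002")
```

No API key is needed since inference is local; clients that require one will accept any placeholder string.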

Models are downloaded on first use to `~/bonzai-models/` and cached for future sessions.
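To see what has already been downloaded (and how much disk it uses), you can walk the cache directory. The internal layout of `~/bonzai-models/` is not specified here, so this sketch simply lists whatever files are present:

```python
from pathlib import Path

# Models downloaded on first use are cached under ~/bonzai-models/.
cache_dir = Path.home() / "bonzai-models"

if cache_dir.exists():
    # Report each cached file and its size in MB.
    for f in sorted(cache_dir.rglob("*")):
        if f.is_file():
            size_mb = f.stat().st_size / 1e6
            print(f"{f.relative_to(cache_dir)}  {size_mb:.1f} MB")
else:
    print(f"No models cached yet; {cache_dir} is created on first generation")
```

Deleting files from this directory is safe in the sense that a pipeline will simply re-download its model on next use.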

## Generation vs. Minting

**All generation is free** — you don't need tokens or a wallet to use any AI pipeline.

Levels (requiring BONZAI token holdings) only gate **minting** your generated content as NFTs. See [Token & Levels](https://docs.bonzai.sh/web3-and-tokenomics/token-levels) for details.
