Tasmania

Your AI.
Your machine.

Search models on Hugging Face, download them in one click, and run them locally.
No cloud. No complexity. No compromise.

Tasmania
Running
Llama 3.2 3B Q4_K_M
2.1 GB · meta-llama
Port: 8080 · Context: 4096 · GPU layers: 32
API Endpoint
http://localhost:8080/v1/chat/completions
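The endpoint above speaks the standard chat-completions protocol, so any HTTP client can talk to it. A minimal first-request sketch using only the Python standard library (the model name is a placeholder, and the call is guarded so it reports cleanly when no server is listening):

```python
import json
import urllib.error
import urllib.request

# Placeholder model name -- use whatever model Tasmania is currently serving.
payload = {
    "model": "llama-3.2-3b",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = json.load(resp)
        # OpenAI-style responses carry the reply in choices[0].message.content
        print(body["choices"][0]["message"]["content"])
except (urllib.error.URLError, OSError):
    print("No server on localhost:8080 -- start a model in Tasmania first.")
```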

Simple outside.
Powerful inside.

Search Hugging Face

Browse thousands of GGUF models from the app. Sorted by downloads, filterable by size.

One-click downloads

Download any model instantly. Resume interrupted downloads. Real-time progress tracking.

Auto configuration

Tasmania configures llama.cpp automatically. Port, context size, GPU layers — all handled.
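Those settings map onto standard llama.cpp server flags. Roughly what a hand-rolled launch would look like for the model shown above (the file path is illustrative):

```
# --port -> API port    -c -> context size (tokens)    -ngl -> GPU layers to offload
llama-server -m ~/models/Llama-3.2-3B-Q4_K_M.gguf --port 8080 -c 4096 -ngl 32
```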

OpenAI-compatible API

Drop-in replacement for any tool using the OpenAI protocol. Zero code changes.
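"Drop-in" here means only the base URL changes. A sketch with the Python standard library (model name is a placeholder) showing that the request is identical apart from the endpoint:

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request; only base_url varies."""
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Cloud: chat_request("https://api.openai.com/v1", ...)
# Local: same call, different base URL -- no other code changes.
req = chat_request("http://localhost:8080/v1", "llama-3.2-3b", "Hello!")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
```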

Completely private

No cloud, no accounts, no telemetry. Your data never leaves your machine.

Claude Code ready

Built-in MCP server connects Tasmania directly to Claude Code for local AI workflows.
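Claude Code registers MCP servers through a small JSON config. A hypothetical entry for illustration only — the server name, command, and arguments here are assumptions, not Tasmania's documented values:

```json
{
  "mcpServers": {
    "tasmania": {
      "command": "tasmania",
      "args": ["mcp"]
    }
  }
}
```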

Three steps to local AI.

01

Search

Find models on Hugging Face directly from Tasmania. See sizes, quantizations, and popularity at a glance.

02

Download

Click download. Tasmania handles everything — the transfer, the file organization, the configuration.

03

Run

Hit start. Your model is running locally with a full OpenAI-compatible API. That’s it.

Take AI offline.

Free. Open source. MIT licensed.

Download Tasmania