
How to Use Jan AI

📖 This guide was prepared by the ToolPazar team. All our tools are free and ad-free.

What Jan is

Jan is an open-source desktop app that runs LLMs locally with a ChatGPT-style interface — no account, no cloud, no data leaving your machine. This guide covers installing it, loading a model, and wiring it into your existing tooling.

Jan is an Electron app developed by Homebrew Computer Company (the Jan team). It wraps llama.cpp plus an extension system, ships a polished chat UI, and can either run models locally or proxy to remote providers like OpenAI, Anthropic, Groq, and Mistral behind one interface. The whole thing is AGPL-licensed and available on GitHub.

It targets users who want “ChatGPT on my laptop” without learning CLI tools or touching Python. Under the hood, it is a thin client over the same GGUF/llama.cpp stack that powers Ollama and LM Studio.

Jan does not phone home by default. You can verify this by checking the telemetry toggle under Settings → Advanced and by watching network traffic with your favorite tool.

Installing Jan

Jan ships desktop installers for Windows, macOS, and Linux. Download the build for your platform from the official site or the GitHub releases page and run the installer; there is nothing else to set up.

Loading your first model

Open the Hub tab. Jan shows a curated list of models (Llama 3.1, Mistral, Qwen, Phi, Gemma, DeepSeek) with recommended quantizations tagged as “Recommended for your device” based on your RAM. Click Download on one that fits — for a 16GB machine, Llama 3.1 8B Q4 or Qwen 2.5 7B Q4 are solid picks.
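Those "Recommended for your device" tags mostly come down to whether a quantized model fits in your RAM. Here is a rough back-of-envelope sketch of that sizing; this is not Jan's actual logic, and the bits-per-weight and overhead figures are assumptions, not measured values:

```python
# Rule-of-thumb RAM estimate for a quantized GGUF model.
# Weights take roughly params * bits-per-weight / 8 bytes; add some
# headroom for the KV cache and runtime overhead. The 4.5 bits/weight
# (typical for a Q4-class quant) and 1.5 GB overhead are assumptions.

def estimated_ram_gb(params_billion: float,
                     bits_per_weight: float = 4.5,
                     overhead_gb: float = 1.5) -> float:
    """Estimate resident memory for a quantized model, in GB."""
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

if __name__ == "__main__":
    # An 8B model at ~Q4 lands around 6 GB, which is why it is a
    # comfortable fit on a 16 GB machine; a 70B model is not.
    print(f"8B @ ~Q4:  ~{estimated_ram_gb(8):.1f} GB")
    print(f"70B @ ~Q4: ~{estimated_ram_gb(70):.1f} GB")
```

By this estimate an 8B Q4 model needs around 6 GB, matching the 16 GB recommendation above, while a 70B Q4 model needs around 40 GB and is out of reach for most laptops.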

Using the local API server

Jan can expose whatever model it is running over an OpenAI-compatible HTTP API; enable the local server from the app's settings and point your tooling at it. This is how you bolt Jan onto tools like Continue.dev, Aider, or your own scripts: the chat UI becomes a debug surface for the same model your code is hitting.

Adding remote providers and extensions

Besides local models, Jan can proxy to remote providers such as OpenAI, Anthropic, Groq, and Mistral. Add a provider's API key in the app's settings and its models show up in the same interface as your local ones; extensions hook into the same system.

When Jan is the wrong choice

Jan is built for single-user desktop chat. If you need a headless server, batch jobs, or multi-user serving, the underlying llama.cpp tooling or Ollama, which sit on the same GGUF stack, are a better fit.
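To make the local API server concrete, here is a minimal stdlib-only sketch of a script talking to it. The base URL (localhost:1337 is a commonly cited Jan default, but check the server panel in the app for yours) and the model id `llama3.1-8b` are assumptions; substitute whatever the app shows:

```python
# Minimal client for Jan's OpenAI-compatible local server.
# Assumptions (verify in Jan's local-server settings):
#   - the server is enabled and listening at BASE_URL
#   - a model with id "llama3.1-8b" is downloaded and loaded
import json
import urllib.request

BASE_URL = "http://localhost:1337/v1"  # assumed default; check the app


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def chat(model: str, prompt: str) -> str:
    """Send one chat turn to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running, `chat("llama3.1-8b", "Explain GGUF in one sentence.")` returns the assistant's reply, and because the endpoint shape is OpenAI-compatible, any OpenAI SDK or tool that lets you override the base URL can be pointed at the same address.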