# Run Clean Coder on a local model
Want privacy? Local LLMs are the answer.
## Getting started
You can run Clean Coder with local LLMs in two ways:
### Using Ollama

- In your `.env` file, ensure there is no `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `OPENROUTER_API_KEY`. If starting Clean Coder for the first time, skip the API key questions.
- Add the `OLLAMA_MODEL` variable with the name of your local LLM. Example: `OLLAMA_MODEL=qwen2.5-coder:32b`
### Using a local server (e.g. LM Studio)

- Remove any external API keys from `.env`, as described above.
- Add `LOCAL_MODEL_API_BASE` with your server URL. For LM Studio: `LOCAL_MODEL_API_BASE=http://localhost:1234/api/v0`
- Add `LOCAL_MODEL_NAME` with your model name. Example: `LOCAL_MODEL_NAME=phi-4`
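For the local-server route, the combined `.env` might look like this (the URL and model name are the LM Studio examples from above; substitute the values your own server reports):

```env
# .env — external API keys removed
LOCAL_MODEL_API_BASE=http://localhost:1234/api/v0
LOCAL_MODEL_NAME=phi-4
```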
Congratulations! You can now enjoy autonomous code writing with full control and privacy.
## 🧠 Please keep in mind
Local models are still an experimental feature and might perform worse than top-tier commercial models.