Run Clean Coder on a local model
Want privacy? Local LLMs are the answer.
Getting started
You can run Clean Coder with local LLMs in two ways:
Using Ollama:
- In your `.env` file, ensure there are no `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `OPENROUTER_API_KEY` entries. If starting Clean Coder for the first time, skip the API key questions.
- Add an `OLLAMA_MODEL` variable with your local LLM name. Example: `OLLAMA_MODEL=qwen2.5-coder:32b`
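Putting the Ollama steps together, a minimal `.env` might look like this (the model name is just an example; use whichever model you have pulled locally with `ollama pull`):

```shell
# .env — Ollama setup (model name is illustrative)
# Note: no ANTHROPIC_API_KEY, OPENAI_API_KEY, or OPENROUTER_API_KEY entries
OLLAMA_MODEL=qwen2.5-coder:32b
```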
Using a local server (such as LM Studio):
- Remove any external API keys from `.env` as mentioned above.
- Add `LOCAL_MODEL_API_BASE` with your server URL. For LM Studio: `LOCAL_MODEL_API_BASE=http://localhost:1234/api/v0`
- Add `LOCAL_MODEL_NAME` with your model name. Example: `LOCAL_MODEL_NAME=phi-4`
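For the local-server route, the equivalent `.env` sketch (the URL assumes LM Studio's default port, and `phi-4` is just an example model name):

```shell
# .env — local server setup (LM Studio defaults; values are illustrative)
# Note: no external API key entries should be present
LOCAL_MODEL_API_BASE=http://localhost:1234/api/v0
LOCAL_MODEL_NAME=phi-4
```

If the server is running, a quick sanity check is to request its model list, e.g. `curl http://localhost:1234/api/v0/models` for LM Studio, and confirm your model appears.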
Congratulations! Now you can enjoy autonomous code writing with full control and privacy.
🧠 Please keep in mind
Local models are still an experimental feature and might perform worse than top-tier commercial models.