Connect Task Master to any OpenAI-compatible service, use high-performance Z.ai GLM models with 200K+ context windows, or run completely offline with LM Studio at zero API cost.
#1360 819d5e1 Thanks @Crunchyman-ralph! - Add support for custom OpenAI-compatible providers, allowing you to connect Task Master to any service that implements the OpenAI API specification
How to use:
Configure your custom provider with the models command:
task-master models --set-main <your-model-id> --openai-compatible --baseURL <your-api-endpoint>
Example:
task-master models --set-main llama-3-70b --openai-compatible --baseURL http://localhost:8000/v1
# Or for an interactive view
task-master models --setup
This gives you the flexibility to use virtually any LLM service with Task Master, whether it's self-hosted, a specialized provider, or a custom inference server.
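As a rough illustration of what "implements the OpenAI API specification" means, the sketch below builds the chat-completions request a compatible endpoint must accept. The base URL and model id match the example above and are assumptions about your deployment; no request is actually sent.

```typescript
// Sketch of the request shape an OpenAI-compatible endpoint accepts
// (per the OpenAI chat completions spec). Assumption: a self-hosted
// server at localhost:8000, as in the example above.
const baseURL = "http://localhost:8000/v1";

const body = {
  model: "llama-3-70b",
  messages: [{ role: "user", content: "Summarize the current task" }],
};

// Any service serving this route with this body shape can back Task Master.
const url = `${baseURL}/chat/completions`;
console.log(url);
```

Point `--baseURL` at the `/v1` root, not at `/chat/completions` itself; the client appends the route.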
#1360 819d5e1 Thanks @Crunchyman-ralph! - Add native support for Z.ai (GLM models), giving you access to high-performance Chinese models including glm-4.6 with massive 200K+ token context windows at competitive pricing
How to use:
Get your Z.ai API key from z.ai/manage-apikey/apikey-list
Set your API key in .env, in mcp.json, or as an environment variable:
ZAI_API_KEY="your-key-here"
Configure Task Master to use GLM models:
task-master models --set-main glm-4.6
# Or for an interactive view
task-master models --setup
Available models:
glm-4.6 - Latest model with 200K+ context, excellent for complex projects
glm-4.5 - Previous generation, still highly capable
glm-4.5-air, glm-4.5v
GLM models offer strong performance on software engineering tasks, with particularly good results on code generation and technical reasoning. The large context window makes them ideal for analyzing entire codebases or working with extensive documentation.
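If you set the key via mcp.json, a minimal fragment might look like the following. The server name, command, and args here are assumptions about a typical MCP client setup, so check your client's documentation for the exact entry it expects:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ZAI_API_KEY": "your-key-here"
      }
    }
  }
}
```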
#1360 819d5e1 Thanks @Crunchyman-ralph! - Add LM Studio integration, enabling you to run Task Master completely offline with local models at zero API cost.
How to use:
Download and install LM Studio
Launch LM Studio and download a model (e.g., Llama 3.2, Mistral, Qwen)
Optional: add an API key to mcp.json or .env (LMSTUDIO_API_KEY)
Go to the "Local Server" tab and click "Start Server"
Configure Task Master:
task-master models --set-main <model-name> --lmstudio
Example:
task-master models --set-main llama-3.2-3b --lmstudio
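LM Studio's local server speaks the OpenAI API, by default at http://localhost:1234/v1 (an assumption that holds unless you changed the port in the Local Server tab). The sketch below shows how the model id you pass to `--set-main` appears in a `/v1/models` response; it uses canned sample data rather than a live request:

```typescript
// Sample /v1/models payload in the OpenAI API response shape.
// Assumption: LM Studio is serving at its default http://localhost:1234/v1.
const sample = JSON.parse(
  '{"object":"list","data":[{"id":"llama-3.2-3b","object":"model"}]}'
);

// The "id" field is the model name Task Master expects:
//   task-master models --set-main <id> --lmstudio
const ids: string[] = sample.data.map((m: { id: string }) => m.id);
console.log(ids);
```

Querying the real endpoint (e.g. with curl) once the server is running is a quick way to confirm the exact id string before configuring Task Master.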
#1362 3e70edf Thanks @Crunchyman-ralph! - Improve the parse-PRD schema for better LLM model compatibility
#1358 0c639bd Thanks @Crunchyman-ralph! - Fix subtask ID display to show full compound notation
When displaying a subtask via tm show 104.1, the header and properties table showed only the subtask's local ID (e.g., "1") instead of the full compound ID (e.g., "104.1"). The CLI now preserves and displays the original requested task ID throughout the display chain, ensuring subtasks are clearly identified with their parent context. Also improved TypeScript typing by using discriminated unions for Task/Subtask returns from tasks.get(), eliminating unsafe type coercions.
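The discriminated-union pattern described above can be sketched as follows. The type and field names are illustrative assumptions for this example, not Task Master's actual API:

```typescript
// A tag field ("kind") discriminates the union, so narrowing on it
// lets the compiler prove which fields exist -- no unsafe casts needed.
type Task = { kind: "task"; id: string; title: string };
type Subtask = { kind: "subtask"; id: string; parentId: string; title: string };
type TaskResult = Task | Subtask;

function displayId(t: TaskResult): string {
  // For subtasks, rebuild the compound id from the parent context.
  return t.kind === "subtask" ? `${t.parentId}.${t.id}` : t.id;
}

console.log(displayId({ kind: "subtask", id: "1", parentId: "104", title: "Setup" }));
```

Inside the `t.kind === "subtask"` branch, TypeScript knows `parentId` is present, which is what eliminates the type coercions the fix removed.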
#1339 3b09b5d Thanks @Crunchyman-ralph! - Fix the MCP server sometimes crashing during the commit step of autopilot
#1326 9d5812b Thanks @SharifMrCreed! - Improve Gemini CLI integration
When initializing Task Master with the gemini profile, you now get properly configured context files tailored specifically for Gemini CLI, including MCP configuration and Gemini-specific features like file references, session management, and headless mode.