Integrations
LLM providers, a modular plugin system, and an extensible architecture: bring your own models, build your own plugins, keep everything local.
LLM Providers
Quark connects directly to your LLM provider. No proxy, no middleware, no data passing through our servers. Switch providers by changing one line in your Quarkfile.
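As a sketch of what that one-line switch could look like, here is a hypothetical Quarkfile fragment; only the provider: key appears in this page, and the commented model field is an illustrative assumption:

```yaml
# Quarkfile (sketch): change the provider value to switch backends
provider: anthropic      # or: openai, openrouter, zhipu, ollama
# model: <model-name>    # hypothetical field, shown only for illustration
```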
provider: anthropic
Claude models via the Anthropic API. Recommended for complex reasoning and multi-step planning.

provider: openai
GPT-4, GPT-4o, and other OpenAI models. Compatible with the standard OpenAI API format.

provider: openrouter
Access 100+ models through a single API key. Route to the best model for each task.

provider: zhipu
GLM models for Chinese and multilingual use cases. Low-latency inference from ZhipuAI.

provider: ollama
Run open-weight models locally. Full privacy with no external API calls at all.

Built-in Tools
Each tool runs as a standalone HTTP server. Agents invoke them through the tool dispatch system. Add your own by pointing to any HTTP endpoint.
bash
Execute shell commands. The primary tool for interacting with the host system.
http://127.0.0.1:8091/run

read
Read file contents. Supports text files of any size with offset-based pagination.
http://127.0.0.1:8092/run

write
Write and edit files. Supports full writes and targeted string replacements.
http://127.0.0.1:8093/run

web-search
Search the web via Brave Search or SerpAPI. Bring external context to your agents.
http://127.0.0.1:8090/run

Plugin System
Plugins contain Skills (LLM instructions and schemas) and Binaries (executable code). Load them statically via the Quarkfile or dynamically at runtime, and extend agents without restarting Spaces.
Define agent roles, prompts, and plugin access per agent. The Main Agent dispatches work to Sub-agents based on the plan.
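A per-agent configuration might be sketched like this in a Quarkfile; every field name here is a hypothetical assumption, shown only to illustrate the idea of roles and plugin access per agent:

```yaml
# Hypothetical agent section; all keys are illustrative assumptions
agents:
  main:
    role: "Plan tasks and dispatch work to sub-agents"
    plugins: [bash, read, write]
  researcher:
    role: "Gather external context"
    plugins: [web-search]
```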
Agents can discover, load, and learn how to use new plugins at runtime. Extend capabilities without restarting the Space.
Adding a new LLM provider requires implementing a single Go interface. The inference package handles retries, message formatting, and tool dispatch.
We help enterprise teams build custom tools, providers, and agent configurations.