About
Quarkloop builds open-source tools for autonomous AI agents that run entirely on your infrastructure.
Quark is an enterprise-grade runtime for autonomous multi-agent AI. It lets you define a Space (your workspace plus execution environment), declare agents and plugins in a Quarkfile, and run everything on your own infrastructure.
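As a rough illustration of what declaring a Space might look like, here is a hypothetical Quarkfile sketch. The field names and layout are purely illustrative assumptions, not Quark's documented schema:

```yaml
# Hypothetical Quarkfile sketch -- every field name here is
# illustrative, not Quark's actual configuration schema.
space: demo
agents:
  - name: main
    role: orchestrator
    mode: autonomous        # or: assistive, workflow
  - name: researcher
    role: sub-agent
plugins:
  - web-search
inboxes:
  - name: tasks
channels:
  - web
```

Because the whole Space lives in one file, it can be checked into version control and reproduced on any machine.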
At the core is a hierarchical agent system. The Main Agent orchestrates Sub-agents using an OODA-inspired loop — orienting on Inbox state, planning steps, spawning workers, and assessing outcomes. Three execution modes (Autonomous, Assistive, Workflow) let you choose the right level of control.
Quark is written in Go with four core components: Supervisor, Runtime, CLI, and Web Channel. It's designed for teams that need bounded execution, audit trails, and full control over their agent infrastructure.
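The orient/plan/spawn/assess cycle described above can be sketched in Go. This is a minimal conceptual sketch, not Quark's actual API; the `Inbox`, `Agent`, and `Supervise` names are hypothetical:

```go
package main

import "fmt"

// Inbox holds pending tasks awaiting the supervisor. Hypothetical
// type for illustration only; not Quark's real data model.
type Inbox struct{ items []string }

// Agent is a worker that handles a single task.
type Agent struct{ name string }

// Run performs the task and reports an outcome string.
func (a Agent) Run(task string) string {
	return fmt.Sprintf("%s handled %q", a.name, task)
}

// Supervise drains the inbox in an OODA-style loop: orient on
// inbox state, plan the next step, spawn a worker, assess the outcome.
func Supervise(inbox *Inbox) []string {
	var outcomes []string
	for len(inbox.items) > 0 {
		// Orient: observe the current inbox state.
		task := inbox.items[0]
		inbox.items = inbox.items[1:]

		// Plan and spawn: choose a sub-agent for this task.
		worker := Agent{name: "sub-agent"}

		// Act and assess: run the worker, record the result.
		outcomes = append(outcomes, worker.Run(task))
	}
	return outcomes
}

func main() {
	inbox := &Inbox{items: []string{"summarize report", "file ticket"}}
	for _, outcome := range Supervise(inbox) {
		fmt.Println(outcome)
	}
}
```

In this sketch the loop terminates when the inbox is empty, which is one simple way to get the bounded execution the text mentions.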
Our values
Agents run on your machine. Every action is auditable and reversible. Three execution modes let you choose the right level of autonomy.
Apache 2.0, built in the open. Four core components — Supervisor, Runtime, CLI, and Web Channel — all available for inspection, modification, and redistribution.
One Quarkfile defines your entire Space. Agents, plugins, inboxes, channels — version it, share it, reproduce it anywhere. Infrastructure that disappears into the background.
No telemetry, no hidden network calls. Agent context, session history, and semantic memory stay on your infrastructure. Full audit trail for every action.
Timeline
Started as an internal tool for orchestrating LLM agents on local infrastructure.
Published under Apache 2.0 with core components, LLM providers, and full CLI.
Introduced session architecture, supervisor loop, and multi-agent orchestration.
Hierarchical agents, async data ingestion via Inboxes, omnichannel collaboration.
Building enterprise features, plugin ecosystem, and production-grade tooling.
Commitment
Quark is released under the Apache 2.0 license. We chose this license because we believe in giving teams the freedom to use, modify, and distribute the runtime without restrictions. If you're going to trust a system to orchestrate AI agents on your behalf, you should be able to read every line of code that makes it work.