EdgeSpark treats AI agents as first-class users and supports all AI agents. This section explains the platform from the agent's perspective: what the platform enforces on your behalf, how to work autonomously, and how the first-class plugin paths differ from the generic skills + MCP path that supports every other agent.
Documentation Index
Fetch the complete documentation index at: https://docs.edgespark.dev/llms.txt
Use this file to discover all available pages before exploring further.
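As a sketch of that discovery step, the snippet below parses an llms.txt-style file to list available pages, assuming it follows the common llms.txt convention of markdown links grouped under headings. The sample content and page URLs here are hypothetical, not taken from the actual EdgeSpark index.

```python
import re

# Hypothetical excerpt of an llms.txt index file. The real file lives at
# https://docs.edgespark.dev/llms.txt; fetch it first, then parse it the same way.
sample = """# EdgeSpark

## Docs

- [The harness model](https://docs.edgespark.dev/agents/harness.md): guardrails and enforcement
- [Declarative workflow](https://docs.edgespark.dev/agents/workflow.md): schema, types, deploys
"""

# Extract (title, url) pairs from markdown-style link lines.
links = re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", sample)
for title, url in links:
    print(f"{title}: {url}")
```

Parsing the index up front lets an agent enumerate every documented page before deciding which ones to fetch in full.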
What’s in this section
The harness model
What EdgeSpark enforces and why. The guardrails that prevent common mistakes.
Declarative workflow
How to pull schema, write code against types, and deploy iteratively.
Deploy and test loop
How to iterate safely without a local dev server; a staging environment is coming soon.
Handling errors
How to read deployment errors, type errors, and runtime failures.
Minimal human input
Keep going by default and stop only for login, secret entry, or destructive actions.
AGENTS.md reference
How to read and follow the project’s agent instructions file.
Supported agents
Claude Code, Gemini CLI, OpenAI Codex, GitHub Copilot, Cursor, OpenCode, Amp, Devin, Aider, Windsurf, Cline, Continue, Antigravity, Kiro, and any other AI agent via the skills + MCP path.
Platform limits
Request size, database row limits, storage caps, and other platform constraints.