Octant Council Builder
Build a team of AI agents that independently evaluate projects from different angles — then combine their findings into one honest recommendation. Like a grant review committee, but faster, more thorough, and with no groupthink.
What Is This? (The Simple Version)
The Problem
Imagine you're deciding which projects should get funding. You could ask one person to review everything — but one person has blind spots. Maybe they love the tech but miss that the project has no real users. Or they're excited about the community but don't notice the money is running out.
The Solution: A Council
Instead of one reviewer, you assemble a council of specialists. One looks at the code. Another checks if people actually use it. Another follows the money. Another plays devil's advocate and tries to find problems. Another checks if the project is governed in a way that will last.
Here's the key: they work in separate rooms. The technical reviewer can't peek at what the financial reviewer wrote. The community reviewer doesn't know what the skeptic found. They each write their honest assessment alone.
Then a chair reads ALL the assessments, spots where they agree and disagree, and writes one final recommendation: fund it, fund it with conditions, or don't fund it.
What This Tool Does
This tool builds that council for you using AI agents. Each "specialist" is a Claude agent with specific instructions and data sources. You tell it what kind of projects you want to evaluate, and it creates the right team of agents.
It ships with agents pre-built for Octant/Ethereum public goods — projects that build open-source tools, infrastructure, and commons for the Ethereum ecosystem. But you can redesign it for any domain: DeFi protocols, climate DAOs, research grants, developer tools, etc.
Why "Council Builder" and Not Just "Council"?
Because you decide what the council evaluates. The agents that ship with this repo are a starting point. Run /council:setup and reshape the council for your own domain. Change which data sources agents look at. Change what dimensions they score. Change how the final report looks. The execution engine (three waves, parallel agents, independent scoring) stays the same — you customize what happens inside each wave.
The Three-Wave Pattern
Every council evaluation runs in three sequential waves. Within each wave, agents run in parallel.
Gather
8 data agents scrape external sources in parallel: GitHub, on-chain data, funding history, social media, Karma scores, etc.
Score
8 evaluators independently score the project on 5 dimensions each. They read Wave 1 data but never see each other's scores.
Decide
Synthesis agents read all evaluations, can ask evaluators clarifying questions, and produce the final report with a recommendation.
Wave Gates
Each wave must fully complete before the next begins. The orchestrator polls task status until all agents in a wave report completion. This ensures:
- Wave 2 evaluators have complete data to work with
- Wave 3 synthesizers have all scores before writing the report
- No agent reads partial or in-progress output
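A minimal sketch of that gate in Python, assuming a hypothetical `get_task_status` callable standing in for the orchestrator's real task-polling mechanism:

```python
import time

def wait_for_wave(task_ids, get_task_status, poll_interval=5.0):
    """Block until every task in the wave reports 'completed'.

    `get_task_status` is a hypothetical stand-in for the orchestrator's
    actual task-polling mechanism.
    """
    pending = set(task_ids)
    while pending:
        # Re-poll only the tasks that have not completed yet.
        pending = {t for t in pending if get_task_status(t) != "completed"}
        if pending:
            time.sleep(poll_interval)
```

The next wave's agents are only spawned once this call returns, so no agent ever reads a half-written data file.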
Quickstart
Three steps to your first evaluation:
Install Settings (one time)
This enables Claude teams (needed for inter-agent communication) and creates a shell alias.
/council:settings
Run an Evaluation
Try the default scaffold on a real project to see the pattern in action.
/council:evaluate Protocol Guild
This spawns 19 agents across 3 waves and produces a full evaluation report in council-out/protocol-guild/.
Make It Yours
Redesign the council for your domain.
/council:setup DeFi lending protocols
/council:setup climate impact DAOs
/council:setup developer tooling grants
Commands
| Command | What It Does |
|---|---|
| `/council:settings` | One-time setup: enables Claude teams, creates shell alias, sets permissions |
| `/council:setup [domain]` | Design your council through conversation. Researches domain, proposes agent roster, generates all agent definitions |
| `/council:evaluate <project>` | Run the full 3-wave evaluation on a project. Produces report in council-out/ |
| `/council:add-agent` | Add a single new agent via guided conversation, domain research, and generation |
| `/council:remove-agent` | Remove an agent with impact preview showing what will be affected |
| `/council:deploy-to-production` | Export council output to Railway backend + Netlify dashboard (OptInPG extension) |
| `/council:test-octant` | Run evaluation on 5 test Octant projects to verify everything works end-to-end |
Make It Yours
Everything is customizable. The plugin is a council factory, not a finished council.
Redesign the Whole Council
Run /council:setup with a new domain. It'll research the domain, design new agents, and generate everything from scratch.
Add or Remove Agents
Use /council:add-agent and /council:remove-agent to tune which lenses are applied during evaluation.
Edit Agents Directly
Every agent is a markdown file in agents/. Change scoring dimensions, data sources, calibration tables. No config to update.
Change the Output Format
Modify agent output templates to produce JSON, comparison matrices, grant proposals, or whatever artifact your use case needs.
Domain Ideas
| Domain | Agents to Swap/Add | Why |
|---|---|---|
| DeFi Protocols | data-audits, eval-security | Security and audit trail matter most |
| L2 Rollups | data-l2beat, eval-decentralization | L2Beat has the data |
| Grant Programs | data-milestones, eval-delivery | Track record of shipping |
| Research Projects | data-papers, eval-novelty | Academic rigor matters |
Data Agents (Wave 1)
These agents gather raw information from external sources. They run in parallel and write structured JSON or markdown to council-out/{slug}/data/.
Their outputs: octant.json, karma.json, social.json, global.json, github.md, web.md, onchain.md, funding.md.
Eval Agents (Wave 2)
Independent evaluators that each score the project on 5 dimensions (most on a 1–10 scale; the quantitative and Ostrom evaluators use 0–100). They read all Wave 1 data but never see each other's scores.
Their outputs: quant.json, qual.json, ostrom-scores.json, impact.md, technical.md, community.md, financial.md, skeptic.md.
Synth Agents (Wave 3)
Synthesis agents read all Wave 2 outputs and can talk back to evaluators via team messaging to ask clarifying questions before writing the final report.
Their outputs:
- REPORT.md — final verdict with recommendation (FUND / FUND WITH CONDITIONS / DON'T FUND / INSUFFICIENT DATA), composite score, score card, executive summary, areas of agreement and disagreement, key risks, and conditions
- ostrom-report.md — Ostrom governance breakdown
- eas-attestations.json — EAS SDK-compatible JSON ready for eas.attest() on Base (Chain ID: 8453), including all 8 Ostrom scores, quantitative composite, governance maturity, and IPFS evidence hash
Synthesis Approaches
| Approach | Agents Used | How It Works |
|---|---|---|
| Single Chair (default) | synth-chair | Reads all evals, produces a unified report with one recommendation |
| Debate | synth-bull + synth-bear + synth-chair | Bull argues FOR, bear argues AGAINST, chair makes final decision |
| Ranked | synth-ranker | Compares the project against known alternatives |
Output Structure
Every evaluation produces a structured directory in council-out/. The slug is derived from the project name (lowercase, hyphens, max 40 chars).
├── data/ ← Wave 1 output
│ ├── octant.json — Octant project data
│ ├── karma.json — Karma GAP scores
│ ├── social.json — GitHub/Farcaster/X activity
│ ├── global.json — DefiLlama, OSO, L2Beat, Dune
│ ├── github.md — GitHub repo metrics
│ ├── web.md — Website/docs assessment
│ ├── onchain.md — On-chain activity
│ └── funding.md — Funding history
├── eval/ ← Wave 2 output
│ ├── quant.json — Quantitative scores (5 dim × 0-100)
│ ├── qual.json — Qualitative narrative
│ ├── ostrom-scores.json — Ostrom 8 principles (0-100 each)
│ ├── impact.md — Public goods impact
│ ├── technical.md — Technical health
│ ├── community.md — Community assessment
│ ├── financial.md — Financial sustainability
│ └── skeptic.md — Red flags & risk assessment
├── synth/ ← Wave 3 output
│ ├── ostrom-report.md — Ostrom radar chart + breakdown
│ └── eas-attestations.json — EAS SDK JSON for on-chain attest
└── REPORT.md ← Final council verdict
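The slug rule described above (lowercase, hyphens, max 40 chars) can be sketched as:

```python
import re

def project_slug(name: str, max_len: int = 40) -> str:
    """Derive the council-out directory slug: lowercase, hyphens, max 40 chars."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return slug[:max_len].rstrip("-")
```

So `project_slug("Protocol Guild")` yields `protocol-guild`, the directory name used throughout this page.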
Example: Protocol Guild Evaluation
Here's a real evaluation output from the default scaffold council running on Protocol Guild.
Executive Summary
Key Risk Identified
Recommendation Criteria
| Verdict | Criteria |
|---|---|
| FUND | Composite ≥ 7, no critical red flags, clear public good |
| FUND WITH CONDITIONS | Composite 5–7, or addressable red flags present |
| DON'T FUND | Composite < 5, critical red flags, or not a genuine public good |
| INSUFFICIENT DATA | Can't find enough information to make a responsible recommendation |
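The thresholds in this table can be sketched as a simple decision function. This is a simplification: the real chair agent also weighs narrative evidence from the evaluators.

```python
from typing import Optional

def recommend(composite: Optional[float], critical_red_flags: bool,
              addressable_red_flags: bool, is_public_good: bool) -> str:
    """Map the criteria table to a verdict (composite is on the 1-10 scale)."""
    if composite is None:
        return "INSUFFICIENT DATA"  # not enough information found
    if critical_red_flags or not is_public_good or composite < 5:
        return "DON'T FUND"
    if composite >= 7 and not addressable_red_flags:
        return "FUND"
    return "FUND WITH CONDITIONS"
```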
Ostrom Governance Scoring
A distinctive feature of the council is its evaluation of projects against Elinor Ostrom's 8 Design Principles for managing commons — translated from physical commons (fisheries, forests) to digital public goods (code, protocols, data, treasuries).
The 8 Principles
| # | Principle | What It Means for Digital Public Goods | Weight |
|---|---|---|---|
| 1 | Clearly Defined Boundaries | Who uses/contributes? What's the scope? (LICENSE, CONTRIBUTING.md, membership criteria) | 1.25x |
| 2 | Congruence with Local Conditions | Are rules custom-built for this domain, or generic DAO templates? | 1.0x |
| 3 | Collective-Choice Arrangements | Can affected stakeholders change the rules? (Snapshot, governance forums, proposals) | 1.25x |
| 4 | Monitoring | Is resource use tracked transparently? (Public financials, on-chain treasury, dashboards) | 1.25x |
| 5 | Graduated Sanctions | Do rule violations get proportional responses? (Code of conduct, defunding criteria) | 0.75x |
| 6 | Conflict Resolution | Can disputes be resolved quickly and cheaply? (Mediation, appeals, arbitration) | 1.0x |
| 7 | Rights to Organize | Can the project self-govern without external override? (Legal entity, no hostile vetoes) | 1.0x |
| 8 | Nested Enterprises | Governance at multiple scales? (Working groups, tiered decisions, ecosystem participation) | 1.0x |
Example: Protocol Guild Ostrom Scores
Governance Maturity Levels
| Level | Score Range | Meaning |
|---|---|---|
| Established | 60+ | Governance principles are present and functional |
| Developing | 40–59 | Aspirational governance, partially implemented |
| Nascent | 20–39 | Weak governance structures |
| Absent | <20 | No meaningful governance observed |
EAS On-Chain Attestations
The council produces Ethereum Attestation Service (EAS) compatible JSON that can be submitted on-chain to Base (Chain ID: 8453). This creates a permanent, verifiable record of the evaluation.
What Gets Attested
| Field | Type | Description |
|---|---|---|
| `projectSlug` | string | URL-safe project identifier |
| `projectWallet` | address | Project's Ethereum address (recipient) |
| `epochNumber` | uint8 | Octant epoch number |
| `ostromOverallScore` | uint8 | Weighted average of 8 Ostrom principles (0–100) |
| `rule1`–`rule8` | uint8 | Individual Ostrom principle scores (0–100 each) |
| `quantCompositeScore` | uint8 | Quantitative composite score |
| `governanceMaturity` | string | established / developing / nascent / absent |
| `ipfsReportHash` | string | IPFS hash of the full report (for verification) |
| `evaluatedAt` | string | ISO date of evaluation |
Example Attestation JSON
{
"schema": "0x000...000",
"data": {
"recipient": "0xF6CBDd6Ea6EC3C4359e33de0Ac823701Cc56C6c4",
"revocable": true,
"data": {
"projectSlug": "protocol-guild",
"ostromOverallScore": 75,
"rule1_boundaries": 88,
"rule2_congruence": 82,
"rule3_collectiveChoice": 76,
"rule4_monitoring": 90,
"rule5_sanctions": 38,
"rule6_conflictResolution": 55,
"rule7_recognitionOfRights": 78,
"rule8_nestedEnterprises": 72,
"quantCompositeScore": 86,
"governanceMaturity": "established",
"evaluatedAt": "2026-03-22"
}
}
}
eas.attest() is a user action from the dashboard. Private keys are never handled by agents.
Architecture
Project Structure
├── agents/ — Agent definitions (auto-discovered by filename prefix)
│ ├── data-*.md — Wave 1 data agents (8)
│ ├── eval-*.md — Wave 2 eval agents (8)
│ └── synth-*.md — Wave 3 synth agents (3)
├── skills/ — Skill definitions (orchestration layer)
├── docs/ — Skill docs + flow diagrams
├── research/ — Domain research (generated by setup/add-agent)
├── council-out/ — Evaluation output (one dir per project)
├── production/ — OptInPG: Railway backend + Netlify frontend
├── CLAUDE.md — Plugin instructions
├── SKILL.md — Skill registry
├── Ostrom-Rules.md — Ostrom's 8 principles (full text)
└── .claude-plugin/plugin.json — Plugin metadata
Key Design Patterns
Independence by Design
Evaluators never see each other's scores. Each agent runs in isolation and writes to a separate file. Prevents groupthink.
Parallel Execution
All agents within a wave spawn in a single message for true parallelism. Gates wait for completion before the next wave.
Auto-Discovery
No registry or config file. The orchestrator discovers agents by filename prefix: data-*, eval-*, synth-*.
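A sketch of that discovery step, assuming agent files live in an `agents/` directory:

```python
from pathlib import Path

def discover_agents(agents_dir: str = "agents") -> dict:
    """Group agent definition files into waves by filename prefix."""
    waves = {"data": [], "eval": [], "synth": []}
    for f in sorted(Path(agents_dir).glob("*.md")):
        prefix = f.name.split("-", 1)[0]
        if prefix in waves:
            waves[prefix].append(f.name)
    return waves
```

Dropping a new `eval-*.md` file into the directory is all it takes for the orchestrator to pick it up on the next run.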
Token Injection
Agent files use semantic placeholders ($PROJECT, $DATA_DIR, etc.) that the orchestrator fills in at runtime.
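One way to sketch that substitution is with the standard library's `string.Template`, which handles `$PROJECT`-style placeholders directly (the actual orchestrator's mechanism may differ):

```python
from string import Template

def inject_tokens(template_text: str, tokens: dict) -> str:
    """Fill $PROJECT-style placeholders at runtime; safe_substitute
    leaves unknown placeholders intact instead of raising."""
    return Template(template_text).safe_substitute(tokens)
```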
Agent Lifecycle
Every agent follows the same pattern:
1. TaskUpdate — claim the task (status: "in_progress")
2. [Specific work] — fetch data / score dimensions / synthesize
3. Write output — structured JSON or markdown to $OUTPUT_DIR
4. TaskUpdate — complete the task (status: "completed")
5. SendMessage — summary to team lead
Extending the Council
Adding a Custom Agent
Two ways to add an agent:
Option A: Use the guided flow
/council:add-agent
This runs a conversation to design the agent, researches the domain, generates the markdown file, and creates documentation.
Option B: Create a file manually
Create a markdown file in agents/ with the right prefix:
# agents/eval-security.md
---
name: Security Evaluator
description: Evaluate smart contract security posture
tools: Read, Write, WebSearch, WebFetch, SendMessage, TaskUpdate
---
# Security Evaluator: Smart Contract Audit Assessment
You are an independent security evaluator...
## Dimensions (each 1-10)
1. Audit coverage
2. Bug bounty program
3. Formal verification
4. Incident response history
5. Dependency risk
## Output
Write your evaluation to `$OUTPUT_DIR/security.md`
Removing an Agent
/council:remove-agent
# Shows list of agents and impact preview before deletion
Or just delete the file from agents/. No config to update.
Sharing Your Council
A council is just a repo. Fork it, redesign the agents, push it, share the URL. Anyone can install it in Claude Code:
fetch https://raw.githubusercontent.com/YOU/YOUR_REPO/main/SKILL.md and follow the instructions
OptInPG: Public Goods Evaluation Extension
The OptInPG extension transforms the generic council builder into a production-grade Octant public goods evaluation platform. It adds Octant-native data sources, Elinor Ostrom's commons governance scoring, EAS on-chain attestations, and a full-stack deployment pipeline — all without changing a single line of the original plugin.
What Was Added
Design Decisions
| Decision | Why |
|---|---|
| EAS attestations over NFTs | Simpler (no contract deployment), immediately composable, queryable, one shared schema on Base |
| Add-only file strategy | Zero risk of breaking existing plugin. Agents auto-discovered by prefix — no config changes needed |
| 3 microservices mirroring 3 waves | Collector (Wave 1), Analyst (Wave 2), Evaluator (Wave 3 + dashboard) — clean separation of concerns |
| SVG radar charts, not D3/Recharts | Lightweight, embeddable directly in markdown reports, no JS dependencies in output |
| Railway + Netlify stack | Industry-standard, low-friction, CLI-driven deployment from local machine |
| IPFS hash placeholder in EAS | Attestation JSON ready to sign, but signing happens on dashboard (private keys never touched by agents) |
Octant-Specific Agents
Nine new agents purpose-built for Octant public goods evaluation. They slot into the existing wave pattern — the orchestrator discovers them automatically by filename prefix.
Wave 1: Octant Data Collectors (4 agents)
These scrape Octant-specific data sources that the generic data agents don't cover.
Each returns an available flag for graceful handling when a project isn't on that platform. Timeout: 25s.
Wave 2: Octant-Tuned Evaluators (3 agents)
These evaluators are calibrated specifically for public goods scoring.
Wave 3: Octant Synthesizers (2 agents)
synth-ostrom-report generates the governance breakdown; synth-eas-attestation produces JSON ready for eas.attest() on Base (Chain ID: 8453). That JSON includes the schema definition and a per-project attestation with all 8 Ostrom scores, quant composite, governance maturity, IPFS evidence hash (placeholder until pinned), and epoch number. The agent never handles private keys.
The Ostrom Framework: Why It Matters
Most grant evaluation asks "Is this a good project?" The Ostrom framework asks "Is this project governed in a way that will sustain itself as a commons?"
Elinor Ostrom won the Nobel Prize in Economics (2009) for proving that communities can effectively govern shared resources without privatization or top-down control. Her 8 Design Principles (from Governing the Commons, 1990) were originally developed studying fisheries, forests, and irrigation systems. We translated them to digital public goods: open source code, protocols, data, and treasuries.
Physical Commons → Digital Translation
| Principle | Physical Example | Digital Public Good Example |
|---|---|---|
| 1. Boundaries | Who can fish in this lake? | LICENSE file, CONTRIBUTING.md, membership criteria, usage rights |
| 2. Congruence | Fishing seasons match spawning cycles | Governance adapted to domain, not copy-pasted DAO templates |
| 3. Collective Choice | Fishers vote on quotas | Snapshot votes, governance forums, proposal processes |
| 4. Monitoring | Community fish wardens | On-chain treasury dashboards, Karma GAP, public financials |
| 5. Graduated Sanctions | Warning → fine → ban | Code of conduct ladder, documented responses, defunding criteria |
| 6. Conflict Resolution | Village elders mediate | Governance forums, mediation, appeal mechanisms |
| 7. Rights to Organize | Government doesn't override local rules | No foundation veto, legal wrapper, regulatory clarity |
| 8. Nested Enterprises | Local → regional → national governance | Working groups → full governance → ecosystem participation |
How Scoring Works
Each principle is scored 0-100 with a detailed rubric. For example, Principle 1 (Boundaries):
- +15 for a clear LICENSE file
- +15 for CONTRIBUTING.md with contributor guidelines
- +20 for defined membership criteria (who can participate in governance?)
- +15 for scope documentation (what's in/out of the project's mission?)
- +15 for an on-chain registry of members/contributors
- +20 for active boundary enforcement (removing inactive members, etc.)
Principles 1, 3, and 4 are weighted at 1.25x (most critical for digital commons). Principle 5 is weighted at 0.75x (hardest to evidence for early-stage projects). The overall score is a weighted average.
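Under those weights the overall score is a plain weighted mean; a sketch, with the maturity thresholds from the levels table earlier on this page:

```python
# Weights from the text: principles 1, 3, 4 at 1.25x; principle 5 at 0.75x.
OSTROM_WEIGHTS = {1: 1.25, 2: 1.0, 3: 1.25, 4: 1.25, 5: 0.75, 6: 1.0, 7: 1.0, 8: 1.0}

def ostrom_overall(scores: dict) -> float:
    """Weighted average of the 8 principle scores (each 0-100)."""
    total = sum(scores[p] * w for p, w in OSTROM_WEIGHTS.items())
    return total / sum(OSTROM_WEIGHTS.values())

def maturity(overall: float) -> str:
    """Classify governance maturity per the levels table."""
    if overall >= 60:
        return "established"
    if overall >= 40:
        return "developing"
    if overall >= 20:
        return "nascent"
    return "absent"
```

Feeding in Protocol Guild's principle scores from the example attestation (88, 82, 76, 90, 38, 55, 78, 72) yields roughly 74.5 and "established", matching the report.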
Why This Changes Evaluation
A project can score 9/10 on technical quality and community strength, but if its governance is fragile — no conflict resolution, no graduated sanctions, one foundation can override everything — the Ostrom lens catches that. It's the difference between "this is a good project today" and "this project has the governance structures to remain a healthy commons for years."
EAS Attestation Pipeline
Every evaluation can be permanently recorded on-chain using the Ethereum Attestation Service on Base (Chain ID: 8453). This creates a verifiable, queryable record that anyone can check.
How the Pipeline Works
Council evaluates the project
19 agents run across 3 waves. Ostrom scores, quant composite, and governance maturity are computed.
synth-eas-attestation produces JSON
EAS SDK-compatible attestation with schema definition, all scores, project wallet as recipient. IPFS hash is placeholder.
Report pinned to IPFS
The Ostrom report is uploaded to IPFS. The returned CID replaces the placeholder hash in the attestation JSON.
Human signs and submits on-chain
From the dashboard's "Attest on Base" button, the user connects a wallet and calls eas.attest(). The attestation is recorded permanently on Base.
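Step 3 of the pipeline can be sketched as a small helper that swaps the placeholder for the returned CID. The nested keys follow the example attestation JSON shown earlier; the CID in the test is illustrative.

```python
import json

def set_ipfs_hash(attestation_path: str, cid: str) -> None:
    """Swap the placeholder ipfsReportHash for the real CID after pinning.

    Nested key layout follows the example attestation JSON on this page.
    """
    with open(attestation_path) as f:
        att = json.load(f)
    att["data"]["data"]["ipfsReportHash"] = cid
    with open(attestation_path, "w") as f:
        json.dump(att, f, indent=2)
```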
Schema (Registered on Base)
// Solidity-compatible schema for EAS SchemaRegistry.register()
string projectSlug,
address projectWallet,
uint8 epochNumber,
uint8 ostromOverallScore,
uint8 rule1_boundaries,
uint8 rule2_congruence,
uint8 rule3_collectiveChoice,
uint8 rule4_monitoring,
uint8 rule5_sanctions,
uint8 rule6_conflictResolution,
uint8 rule7_recognitionOfRights,
uint8 rule8_nestedEnterprises,
uint8 quantCompositeScore,
uint8 activityScore,
uint8 ecosystemImpactScore,
uint8 transparencyScore,
string governanceMaturity,
string ipfsReportHash,
string evaluatedAt
eas.attest() is always a human action from the dashboard with their own wallet. The EAS contract on Base is at 0x4200000000000000000000000000000000000021.
Full Walkthrough: End-to-End Evaluation
Here's exactly what happens when you run /council:evaluate Protocol Guild with the OptInPG extension installed.
Step 1: Orchestrator Setup
The evaluate skill parses "Protocol Guild" → slug protocol-guild. It discovers all agents by filesystem prefix, creates a team, and pre-creates tasks for all 19 agents across 3 waves.
Step 2: Wave 1 — Data Gathering (parallel)
All 8 data agents spawn in a single message (true parallelism). Each writes to council-out/protocol-guild/data/:
data-octant-scraper
Scrapes octant.app → finds Protocol Guild as Epoch 11 sole recipient, $258 ETH from Octant epochs 1-5, 194 donors. Writes octant.json (52 lines).
data-karma
Queries Karma GAP → finds Protocol Guild has no structured milestones (uses narrative reporting instead). Returns empty milestone arrays but notes active reputation. Writes karma.json.
data-social-indexer
Indexes 7 days of activity → 1778+ GitHub commits, 26+ contributors in 90 days, active Farcaster presence. Writes social.json.
data-global-sources
Aggregates from OSO (bus factor 0.05, high developer activity), Electric Capital reports, Dune dashboards. Writes global.json.
Plus: data-github, data-web, data-onchain, data-funding run in parallel alongside the Octant-specific agents.
A wave gate follows: the orchestrator polls until every Wave 1 task reports status: completed. Only then does Wave 2 begin.
Step 3: Wave 2 — Independent Evaluation (parallel)
All 8 eval agents spawn in a single message. Each reads ALL Wave 1 data files but never sees other evaluators' scores.
eval-ostrom
Scores all 8 principles. Finds: Monitoring 90/100 (immutable Dune dashboards), Boundaries 88/100 (on-chain member registry), but Graduated Sanctions only 38/100 (no formal enforcement ladder). Overall: 74.5/100, "established" maturity.
eval-quantitative
Computes weighted composite: Activity 82, Funding Efficiency 78, Ecosystem Impact 97 (highest possible), Growth 76, Transparency 93. Composite: 86/100.
eval-qualitative
Writes 200-word narrative citing Protocol Guild's structural trustlessness, $100M+ donations, 187 members across 30 teams. Notes persistent 50-60% compensation gap and ether.fi/Taiko concentration risk.
eval-skeptic
Investigates six red flag categories. Finds: clean on sybil risk, no sustainability theater, no overpromising. Only concern: structural conflict-of-interest potential through 190-member ecosystem overlap. Risk score: 3/10.
Plus: eval-impact (9/10), eval-technical (7/10), eval-community (9/10), eval-financial (8/10).
Step 4: Wave 3 — Synthesis
Three synthesis agents read ALL Wave 2 outputs. The chair can send messages back to evaluators asking for clarification.
synth-chair
Reads all 8 evaluations. Finds strong agreement on irreplaceable ecosystem position, exceptional transparency, and high counterfactual impact. Notes disagreements on bus factor interpretation and Ostrom governance gaps. Writes REPORT.md: FUND — 8/10, unconditional.
synth-ostrom-report
Generates SVG radar chart showing all 8 principle scores. Writes principle-by-principle breakdown with Ostrom quotes, digital translations, evidence, and gaps. Combined evaluation across all three lenses. Writes ostrom-report.md (370 lines).
synth-eas-attestation
Produces EAS attestation JSON with Protocol Guild's wallet as recipient, all Ostrom scores, quant composite 86, governance maturity "established", placeholder IPFS hash. Ready for eas.attest() on Base.
Step 5: Report Presented
Final output in council-out/protocol-guild/:
├── data/ ← 8 files from Wave 1
│ ├── octant.json — Epoch 11 sole recipient, 194 donors, $258 ETH
│ ├── karma.json — No structured milestones (narrative reporting)
│ ├── social.json — 1778+ commits, 26 contributors, active social
│ ├── global.json — OSO bus factor 0.05, ecosystem impact 97
│ ├── github.md — 30+ teams, MIT license, frozen contracts by design
│ ├── web.md — ReadTheDocs, 2025 Annual Report, Agora governance
│ ├── onchain.md — 9-chain presence, immutable vesting contracts
│ └── funding.md — $100M+ total, ether.fi $27.5M, Taiko $20.9M
├── eval/ ← 8 files from Wave 2 (independent)
│ ├── quant.json — Composite 86/100, ecosystem impact 97
│ ├── qual.json — "Structural trustlessness, proven donor legitimacy"
│ ├── ostrom-scores.json — Overall 74.5, Monitoring 90, Sanctions 38
│ ├── impact.md — 9/10, near-ideal public good
│ ├── technical.md — 7/10, strong docs but low coordination bus factor
│ ├── community.md — 9/10, 187 members, 832+ donors
│ ├── financial.md — 8/10, $57M vesting, top-2 donor concentration
│ └── skeptic.md — 3/10 risk, clean on all categories
├── synth/ ← 2 files from Wave 3
│ ├── ostrom-report.md — SVG radar chart + 370-line governance breakdown
│ └── eas-attestations.json — Ready for eas.attest() on Base
└── REPORT.md ← FUND — 8/10 — unconditional
Hosting & Deployment
The OptInPG extension includes a full production deployment stack. Here's the current status:
Documentation Site
This page you're reading is hosted on GitHub Pages at rashmi-278.github.io/octant-council-builder. Automatically deploys from the docs/ folder on the main branch.
Backend API (3 FastAPI Microservices)
Three Python services mirror the three-wave pattern:
| Service | Port | Wave | Key Endpoints |
|---|---|---|---|
| collector | :8001 | Wave 1 | POST /collect — trigger data collection; GET /collect/{slug} — retrieve all data files |
| analyst | :8002 | Wave 2 | POST /analyse — trigger evaluation; GET /analyse/{slug}/ostrom — Ostrom scores (for radar) |
| evaluator | :8003 | Wave 3 | GET /dashboard/{slug} — aggregated dashboard data; GET /evaluate/{slug}/ostrom-radar — radar chart data; GET /evaluate/{slug}/eas — EAS attestation JSON |
All services include health checks, slug validation (regex ^[a-z0-9][a-z0-9-]{0,39}$ to prevent path traversal), and configurable CORS origins.
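That slug check is a one-line guard applied before the slug is joined into any filesystem path; a sketch:

```python
import re

# Same pattern the services use: 1-40 chars, lowercase alphanumerics and
# hyphens, must start with an alphanumeric. Anything else (e.g. '../') fails.
SLUG_RE = re.compile(r"^[a-z0-9][a-z0-9-]{0,39}$")

def validate_slug(slug: str) -> str:
    if not SLUG_RE.fullmatch(slug):
        raise ValueError(f"invalid slug: {slug!r}")
    return slug
```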
Deploy to Railway
# Option 1: Use the skill (handles everything)
/council:deploy-to-production protocol-guild
# Option 2: Manual deploy with Railway CLI
cd production/backend/evaluator
railway up --detach
Frontend Dashboard (Next.js 15)
Next.js 15 + React 19 + Tailwind CSS dashboard with:
- Interactive Ostrom radar chart (custom SVG renderer, 8 axes)
- Projects list with evaluation status
- Per-project detail view: data → evals → synthesis → report
- "Attest on Base" button (ethers.js + EAS SDK integration)
- Markdown rendering for full reports
- Octant-branded color scheme
Deploy to Netlify
# Static export (no server needed)
cd production/frontend
npm install && npm run build
netlify deploy --prod --dir=out
# Or use the deploy skill which handles both backend + frontend
/council:deploy-to-production protocol-guild
Local Preview
# Run backend locally
cd production/backend/evaluator
pip install -r requirements.txt
uvicorn main:app --port 8000
# Run frontend locally (in another terminal)
cd production/frontend
npm install && npm run dev
# Dashboard at http://localhost:3000
Test on 5 Octant Projects
/council:test-octant
# Runs full evaluation pipeline on:
# Protocol Guild, L2BEAT, growthepie, Revoke.cash, Tor Project
Production Architecture
The deployment mirrors the three-wave agent pattern:
- collector (:8001): POST /collect, GET /collect/{slug}, GET /projects
- analyst (:8002): POST /analyse, GET /analyse/{slug}/ostrom, GET /projects
- evaluator (:8003): GET /dashboard/{slug}, GET /evaluate/{slug}/eas, GET /ostrom-radar
Railway (Backend)
3 FastAPI services. NIXPACKS builder. Health checks at /health. Restart on failure (3 retries). Shared COUNCIL_OUT_DIR.
Netlify (Frontend)
Next.js 15 static export. Node 20 LTS. API proxy redirects /api/* to Railway. Octant-branded dashboard.
Base Chain (Attestations)
EAS contract at 0x4200...0021. Schema registered once. Attestations per project per epoch. Human-signed only.
IPFS (Evidence)
Full Ostrom reports pinned to IPFS. CID stored in attestation for permanent, verifiable evidence link.
Extended by Rashmi-278
What Was Added (Comprehensive)
| Category | Files | Details |
|---|---|---|
| Octant Data Agents | 4 new agents | data-octant-scraper, data-karma, data-social-indexer, data-global-sources — scrape octant.app, Karma GAP, GitHub/Farcaster/X, DefiLlama/OSO/L2Beat |
| Evaluation Agents | 3 new agents | eval-quantitative (5-dim 0-100), eval-qualitative (narrative with citations), eval-ostrom (8 principles 0-100 each with evidence) |
| Synthesis Agents | 2 new agents | synth-ostrom-report (SVG radar chart + governance breakdown), synth-eas-attestation (EAS SDK-compatible JSON for Base) |
| Skills | 2 new skills | /council:deploy-to-production (Railway + Netlify deployment), /council:test-octant (5-project end-to-end testing) |
| Backend | 3 FastAPI services | Collector, Analyst, Evaluator — with health checks, slug validation, CORS handling, safe JSON parsing |
| Frontend | Next.js 15 dashboard | Ostrom radar charts, project list, detail views, EAS "Attest on Base" button, Octant branding |
| Deploy Config | 3 config files + script | railway.json, netlify.toml, deploy.sh, .env.example |
| Documentation | PRD + Ostrom Rules + skill docs | PRD.md (full spec), Ostrom-Rules.md (8 principles + digital translations), skill documentation |
| Security Fixes | Review hardening | Slug validation regex on all endpoints, CORS splitting fix, removed leaked local config, trailing newlines |
Total: 49 new files, 4,705+ lines added, 0 lines modified in original plugin.
Octant Synthesis Hackathon Tracks
The OptInPG extension was built to cover three hackathon tracks simultaneously:
Multi-Agent Council for Public Goods Evaluation
The core contribution. 19 AI agents organized in a three-wave pattern independently evaluate Octant public goods projects from 8 different angles.
- Independent evaluation: Evaluators never see each other's scores — enforced architecturally, not by policy. This prevents groupthink and produces more honest assessments.
- 8 evaluation lenses: Technical health, community strength, financial sustainability, public goods impact, governance (Ostrom), quantitative metrics, qualitative narrative, and skeptic/red-flag analysis
- Synthesis with dialogue: The chair agent can message evaluators during Wave 3 to challenge scores, ask for clarification, or request deeper analysis before writing the final report
- Octant-native data: 4 dedicated data agents scrape octant.app, Karma GAP, social platforms, and ecosystem data sources (DefiLlama, OSO, L2Beat, Dune)
- Recommendation framework: FUND / FUND WITH CONDITIONS / DON'T FUND / INSUFFICIENT DATA with clear composite score thresholds
- Real output: Protocol Guild evaluated at 8/10, unconditional FUND recommendation with detailed agreement/disagreement areas and key risks
Ostrom's 8 Design Principles for Digital Commons
Applied Elinor Ostrom's Nobel Prize-winning commons governance framework to evaluate public goods projects — translating principles from physical commons (fisheries, forests) to digital commons (code, protocols, treasuries).
- Full framework implementation: All 8 principles scored 0-100 with mandatory evidence, confidence levels, and identified gaps
- Digital translation: Each principle adapted to what it means for open source, DAOs, and public goods (e.g., "monitoring" → on-chain treasury dashboards and Karma GAP scores)
- Weighted scoring: Principles 1, 3, 4 weighted 1.25x (most critical for digital commons); Principle 5 weighted 0.75x (hardest to evidence early-stage)
- Governance maturity classification: Established (≥60) / Developing (40-59) / Nascent (20-39) / Absent (<20)
- SVG radar chart: Embedded directly in markdown reports — 8-axis visualization showing governance strengths and weaknesses at a glance
- Real finding: Protocol Guild scored 74.5/100 "established" — strong on monitoring (90) and boundaries (88), weak on graduated sanctions (38) and conflict resolution (55). This is exactly the kind of insight traditional evaluation misses.
EAS Attestations on Base + Production Dashboard
Every council evaluation can be permanently recorded on-chain via the Ethereum Attestation Service on Base, creating a verifiable, composable record of public goods evaluations.
- EAS SDK-compatible JSON: Attestation schema with all 8 Ostrom scores, quantitative composite, governance maturity, IPFS evidence hash, epoch number
- Schema registered on Base: Single shared schema (Chain ID: 8453, EAS contract 0x4200...0021) — composable, queryable by anyone
- IPFS evidence chain: Full Ostrom reports pinned to IPFS, CID stored in attestation for permanent, verifiable evidence link
- Human-signed only: Agents produce the JSON, humans sign and submit. Private keys never touched by the system
- Production dashboard: Next.js 15 frontend with Ostrom radar charts, "Attest on Base" button (ethers.js + EAS SDK), project detail views, shareable links
- 3-service backend: FastAPI microservices on Railway (Collector, Analyst, Evaluator) with health checks, slug validation, CORS handling
- One-command deploy: /council:deploy-to-production protocol-guild handles Railway backend + Netlify frontend + data staging
Why These Three Tracks Fit Together
The tracks aren't independent — they form a pipeline:
Evaluate
Multi-agent council independently evaluates the project
Govern
Ostrom framework adds governance depth that raw metrics miss
Attest
Results recorded on-chain as permanent, verifiable attestations
A project gets evaluated (Track 1), that evaluation includes governance depth via Ostrom (Track 2), and the results are permanently recorded on-chain (Track 3). Each track makes the others more valuable: attestations are only worth recording if the evaluation is rigorous, and the evaluation is only rigorous because it includes governance analysis most frameworks skip.