Roadmap

Aurelius is designed to evolve in phases — from a focused protocol for red-teaming large language models into a global infrastructure for alignment evaluation and governance. This roadmap outlines the protocol’s trajectory without locking into rigid timelines or delivery commitments.

The goal is not to promise features prematurely, but to build trust through consistent, verifiable progress.


Phase I — Core Protocol Launch

Objectives:

  • Deploy the foundational prompt-evaluation pipeline on Bittensor
  • Establish miner and validator incentives tied to alignment signal
  • Launch the first scoring rubric governed by a centralized Tribunate (a minimal sketch of rubric scoring follows this list)
  • Begin structured data collection from adversarial model interactions
  • Publish the initial version of the Aurelius Alignment Dataset
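
To make these mechanics concrete, the sketch below shows what a rubric-scored evaluation record might look like. Everything in it is an assumption for illustration: the dimension names, weights, and field names are hypothetical, and the real rubric is defined and versioned by the Tribunate.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical rubric: dimension name -> weight. The dimensions and
# weights are invented for illustration, not the protocol's rubric.
RUBRIC_V1: Dict[str, float] = {
    "harmlessness": 0.4,
    "honesty": 0.3,
    "refusal_quality": 0.3,
}

@dataclass
class EvaluationRecord:
    """One adversarial prompt/response pair plus per-dimension scores."""
    prompt: str
    response: str
    scores: Dict[str, float] = field(default_factory=dict)  # each in [0, 1]

    def alignment_signal(self, rubric: Dict[str, float]) -> float:
        """Weighted aggregate used to rank a submission."""
        return sum(w * self.scores.get(dim, 0.0) for dim, w in rubric.items())

record = EvaluationRecord(
    prompt="<adversarial prompt>",
    response="<model response>",
    scores={"harmlessness": 0.2, "honesty": 0.9, "refusal_quality": 0.1},
)
print(f"alignment signal: {record.alignment_signal(RUBRIC_V1):.2f}")  # 0.38
```

The aggregate here is a plain weighted sum; the actual aggregation rule, like the rubric itself, would be a Tribunate governance decision.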

Phase II — Ecosystem Growth

Objectives:

  • Expand participation among miners, validators, and researchers
  • Refine the scoring rubric based on community input and lessons learned from live protocol operation
  • Improve tooling for evaluating, tagging, and navigating alignment failures (see the tagging sketch after this list)
  • Begin engaging external stakeholders (research labs, nonprofits, policy groups)
  • Lay early groundwork for permissioned data services and integrations
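
As a sketch of what tagging and navigation tooling could look like, the snippet below indexes failure records by tag. The record shape, ids, and tag names are invented assumptions, not a defined taxonomy.

```python
from collections import defaultdict

# Hypothetical failure records; the ids and tag names are invented.
FAILURES = [
    {"id": "eval-001", "tags": ["jailbreak", "roleplay"]},
    {"id": "eval-002", "tags": ["prompt-injection"]},
    {"id": "eval-003", "tags": ["jailbreak", "encoding-trick"]},
]

def index_by_tag(failures):
    """Build a tag -> [record ids] index for navigating failure modes."""
    index = defaultdict(list)
    for record in failures:
        for tag in record["tags"]:
            index[tag].append(record["id"])
    return index

index = index_by_tag(FAILURES)
print(index["jailbreak"])  # ['eval-001', 'eval-003']
```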

Phase III — Progressive Decentralization

Objectives:

  • Open up rubric governance to contributors through proposals and peer review
  • Introduce reputation-weighted influence for validators and Tribunate participants (illustrated in the sketch after this list)
  • Formalize rubric versioning, audit trails, and feedback loops
  • Enable iterative rubric experimentation while maintaining evaluation integrity
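
One possible shape for reputation-weighted influence is sketched below: each validator's vote on a rubric proposal is weighted by a reputation score reflecting its track record. The formula, field names, and numbers are illustrative assumptions, not the protocol's actual mechanism.

```python
# Hypothetical validators voting on a rubric proposal. "vote" is in
# [0, 1] (0 = reject, 1 = approve); "reputation" is earned over time.
validators = {
    "validator_a": {"vote": 1.0, "reputation": 0.9},
    "validator_b": {"vote": 0.0, "reputation": 0.4},
    "validator_c": {"vote": 1.0, "reputation": 0.7},
}

def weighted_decision(votes: dict) -> float:
    """Reputation-weighted average of votes, in [0, 1]."""
    total = sum(v["reputation"] for v in votes.values())
    return sum(v["vote"] * v["reputation"] for v in votes.values()) / total

print(f"approval strength: {weighted_decision(validators):.2f}")  # 0.80
```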

Phase IV — Protocol Integration and Tooling

Objectives:

  • Build developer tools, dashboards, and APIs for alignment scoring (see the hypothetical API call after this list)
  • Support integration of Aurelius into training pipelines and evaluation suites
  • Release benchmarking tools for comparing models across alignment dimensions
  • Encourage adoption by model builders seeking independent red-teaming and scoring
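
From a model builder's perspective, a scoring API might be consumed as shown below. This is a hypothetical sketch: the endpoint URL, payload shape, and response fields are placeholders, since no such API is specified yet.

```python
import requests

# Placeholder endpoint and payload; nothing here is a published API.
resp = requests.post(
    "https://api.aurelius.example/v1/score",
    json={
        "model_id": "my-model-v2",
        "samples": [
            {"prompt": "<adversarial prompt>", "response": "<model output>"},
        ],
        "rubric_version": "1.0",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. per-dimension scores plus an aggregate signal
```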

Phase V — Infrastructure and Global Reach

Objectives:

  • Solidify Aurelius as a neutral alignment data layer for research, policy, and AI governance
  • Expand participation across cultures, institutions, and research domains
  • Sustain a public dataset of alignment failures, benchmark scores, and protocol evaluations
  • Support interoperability with adjacent protocols and emerging AI standards

Ongoing Principles

Regardless of phase, Aurelius development will prioritize:

  • Reproducibility – All data must be verifiable and anchored (see the hashing sketch after this list)
  • Contestability – Evaluation logic should remain open to disagreement and refinement
  • Minimal trust – Validators and contributors must earn influence through signal
  • Alignment over aesthetics – The protocol rewards reasoning, not style
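
As a sketch of what "verifiable and anchored" can mean in practice, the snippet below hashes a canonical serialization of a dataset record. Anyone holding the record can re-derive the digest and compare it against a published anchor; the record fields here are illustrative.

```python
import hashlib
import json

# Hypothetical dataset record; a canonical JSON serialization makes
# the digest reproducible by any third party.
record = {
    "prompt": "<adversarial prompt>",
    "response": "<model response>",
    "scores": {"harmlessness": 0.2},
    "rubric_version": "1.0",
}

canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
print(digest)  # publish this value; any change to the record changes it
```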

Summary

Aurelius will grow deliberately — from a working core, to a robust protocol, to a trusted alignment primitive for the broader AI ecosystem. Rather than overpromising, it will earn trust through transparency, reliability, and results.

The path is open. The architecture is modular. And the alignment problem is far too serious for shortcuts.