Rally AI - AI Lifecycle Manager Framework

Rally AI is AICodeRally's command-line orchestration tool that uses specialized AI Lifecycle Managers (ALMs) to generate, validate, and orchestrate artifacts across the 3-6-∞ Framework.


Overview

Rally AI implements a three-ALM architecture aligned with the 3-6-∞ Framework:

  1. Creator AI - Generates Studio apps (3 steps: Ideate → Create → Validate)
  2. Operator AI - Generates Edge solutions (6 P's: People, Process, Products, Performance, Pipeline, Platform)
  3. Enterprise AI - Generates Summit platforms (∞ Extensions: Governance, Scale, Integration, Intelligence, Strategy, Change)

Each ALM uses a combination of Claude (Anthropic), GPT-4 (OpenAI), and Gemini (Google) to deliver context-aware, architecture-compliant artifacts.
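The tier-to-ALM mapping above can be summarized as a small lookup table. This is an illustrative sketch, not Rally AI code; the `framework` ids follow the output structures shown later in this document.

```typescript
// Tier → ALM mapping from the 3-6-∞ Framework overview above.
// Framework ids mirror the "framework" field in the generated artifacts.
const ALM_FOR_TIER = {
  studio: { alm: "Creator AI", framework: "3-steps" },
  edge: { alm: "Operator AI", framework: "6-ps" },
  summit: { alm: "Enterprise AI", framework: "infinity-extensions" },
} as const;

type Tier = keyof typeof ALM_FOR_TIER;

// Look up which ALM owns a given tier.
function almFor(tier: Tier): string {
  return ALM_FOR_TIER[tier].alm;
}
```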


Installation

# From project root
cd tools/rally-ai
pnpm install
pnpm build

# Make executable (optional)
chmod +x ../../bin/rally-ai

Quick Start

Create a Studio App

rally-ai create studio-app \
  --description "Food truck discovery and ordering app" \
  --audience "Taco enthusiasts in Austin, TX" \
  --domain "food-service" \
  --workflows "search" "order" "track" \
  --modules "location" "payments" "notifications" \
  --validate

Output:

Create an Edge Solution

rally-ai create edge-solution \
  --studio-apps "donor-portal" "event-manager" "volunteer-hub" \
  --domain "nonprofit" \
  --icp "Small to mid-size nonprofits with 5-50 staff" \
  --pain-points "Manual donor management" "Disconnected systems" \
  --validate

Output:

Create a Summit Solution

rally-ai create summit-solution \
  --domain "spm" \
  --edge-solutions "np-edge" "designer-biz-kit" \
  --constraints "Multi-tenant" "SOC2 compliance" \
  --compliance "SOC2" "GDPR" "CCPA" \
  --validate

Output:


Configuration

Environment Variables

Set in .env file or shell:

# Required: Provider API Keys
ANTHROPIC_API_KEY="sk-ant-..."
GOOGLE_API_KEY="..."
OPENAI_API_KEY="sk-..."

# Optional: Vercel AI Gateway (recommended)
VERCEL_AI_GATEWAY_URL="https://gateway.ai.vercel.com"
VERCEL_AI_GATEWAY_TOKEN="[from vercel env pull]"

# Optional: RAG System (for context-aware generation)
DATABASE_URL="postgres://..."  # Prisma database URL for RAG

With Vercel AI Gateway

Rally AI uses Vercel AI Gateway for:

Setup:

  1. Add provider API keys in Vercel Dashboard → AI Gateway → BYOK
  2. Run vercel env pull to get VERCEL_AI_GATEWAY_TOKEN
  3. Rally AI automatically uses the gateway when the token is present
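The fallback logic in step 3 might look like the sketch below. `resolveTransport` and the `Transport` type are hypothetical helpers for illustration, not part of Rally AI's API; only the environment variable names come from this document.

```typescript
// Sketch: prefer the Vercel AI Gateway when a token is configured,
// otherwise fall back to direct provider API keys.
type Transport =
  | { kind: "gateway"; url: string; token: string }
  | { kind: "direct"; apiKeys: Record<string, string> };

function resolveTransport(env: Record<string, string | undefined>): Transport {
  const url = env.VERCEL_AI_GATEWAY_URL;
  const token = env.VERCEL_AI_GATEWAY_TOKEN;
  if (url && token) {
    return { kind: "gateway", url, token };
  }
  // No gateway configured: use the provider keys directly.
  return {
    kind: "direct",
    apiKeys: {
      anthropic: env.ANTHROPIC_API_KEY ?? "",
      openai: env.OPENAI_API_KEY ?? "",
      google: env.GOOGLE_API_KEY ?? "",
    },
  };
}
```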

Without Gateway:

See AI Gateway Integration Guide for complete setup.


AI Lifecycle Managers (ALMs)

Creator AI - Studio ALM

Purpose: Generate Studio apps following the 3-step flow (Ideate → Create → Validate)

Primary Model: rally/coder-claude (Claude 3.5 Sonnet)

Secondary Models:

Responsibilities:

Example:

rally-ai create studio-app \
  --description "Birthday party planning app" \
  --audience "Parents planning kids' birthdays" \
  --domain "events" \
  --workflows "invite" "track-rsvp" "manage-budget"

Operator AI - Edge ALM

Purpose: Generate Edge solutions following the 6 P's framework

Primary Model: rally/designer-gpt (GPT-4)

Secondary Models:

Responsibilities:

The 6 P's:

  1. People - Roles, teams, collaboration
  2. Process - Workflows, automation, SOPs
  3. Products - Offerings, pricing, packaging
  4. Performance - Metrics, KPIs, analytics
  5. Pipeline - Sales funnel, customer journey
  6. Platform - Infrastructure, integrations, APIs

Example:

rally-ai create edge-solution \
  --studio-apps "taco-scope" "taco-finder" \
  --domain "food-service" \
  --icp "Taqueria owners in major metro areas" \
  --pain-points "Inventory management" "Staff scheduling" "Customer ordering"

Enterprise AI - Summit ALM

Purpose: Generate Summit platforms following ∞ Extensions

Primary Model: rally/designer-gpt (GPT-4)

Secondary Models:

Responsibilities:

The ∞ Extensions:

  1. Governance - Compliance, security, audit trails
  2. Scale - Multi-tenant, global, high-availability
  3. Integration - Enterprise systems, complex data flows
  4. Intelligence - AI/ML, predictive analytics, insights
  5. Strategy - Executive dashboards, ROI tracking
  6. Change - Migration, training, adoption management

Example:

rally-ai create summit-solution \
  --domain "nonprofit" \
  --edge-solutions "np-edge" "faith-edge" \
  --constraints "Multi-tenant isolation" "Global deployment" \
  --compliance "SOC2" "GDPR"

Commands

rally-ai create studio-app

Generate a Studio app with Creator AI.

Options:

Example:

rally-ai create studio-app \
  --description "Real-time collaboration whiteboard" \
  --audience "Remote teams doing design sprints" \
  --domain "collaboration" \
  --workflows "draw" "comment" "share" "export" \
  --modules "websockets" "canvas" "auth" "storage" \
  --validate

Output Structure:

{
  "id": "collab-whiteboard",
  "tier": "studio",
  "framework": "3-steps",
  "description": "...",
  "audience": "...",
  "domain": "collaboration",
  "workflows": [...],
  "modules": [...],
  "threeSteps": {
    "ideate": { "problem": "...", "outcome": "..." },
    "create": { "features": [...], "ui": "..." },
    "validate": { "metrics": [...], "successCriteria": [...] }
  }
}

rally-ai create edge-solution

Generate an Edge solution with Operator AI.

Options:

Example:

rally-ai create edge-solution \
  --studio-apps "event-planner" "donor-portal" "volunteer-hub" \
  --domain "nonprofit" \
  --icp "Small to mid-size nonprofits with 5-50 staff" \
  --pain-points "Manual donor tracking" "Event coordination overhead" \
  --existing-solutions "DonorBox" "Eventbrite" \
  --validate

Output Structure:

{
  "id": "np-edge",
  "tier": "edge",
  "framework": "6-ps",
  "domain": "nonprofit",
  "icp": "...",
  "studioApps": [...],
  "sixPs": {
    "people": { "roles": [...], "teams": [...] },
    "process": { "workflows": [...], "automations": [...] },
    "products": { "offerings": [...], "pricing": [...] },
    "performance": { "metrics": [...], "kpis": [...] },
    "pipeline": { "stages": [...], "tracking": [...] },
    "platform": { "modules": [...], "integrations": [...] }
  }
}

rally-ai create summit-solution

Generate a Summit platform with Enterprise AI.

Options:

Example:

rally-ai create summit-solution \
  --domain "spm" \
  --edge-solutions "np-edge" "designer-biz-kit" "bhg-edge" \
  --constraints "Multi-tenant with data isolation" "SOC2 compliance" \
  --compliance "SOC2" "GDPR" "CCPA" \
  --validate

Output Structure:

{
  "id": "summit-spm-governance",
  "tier": "summit",
  "framework": "infinity-extensions",
  "domain": "spm",
  "edgeSolutions": [...],
  "infinityExtensions": {
    "governance": { "policies": [...], "compliance": [...] },
    "scale": { "multiTenant": true, "regions": [...] },
    "integration": { "systems": [...], "apis": [...] },
    "intelligence": { "analytics": [...], "ml": [...] },
    "strategy": { "dashboards": [...], "okrs": [...] },
    "change": { "training": [...], "adoption": [...] }
  }
}

rally-ai create module

Generate a reusable module with Capability AI.

Options:

Example:

rally-ai create module \
  --description "Stripe payment processing with subscriptions" \
  --domain "payments" \
  --category "integrations" \
  --consumers "taco-finder" "np-edge" "designer-biz-kit" \
  --validate

rally-ai collaborate

Multi-agent collaboration where Designer (GPT-4), Coder (Claude), and Tester (Gemini) work together iteratively.

rally-ai collaborate "feature-name" \
  --context "additional context" \
  --rounds 3 \
  --mode build

Collaboration Modes:

What happens:

  1. Designer (GPT-4) proposes initial architecture
  2. Coder (Claude) reviews and provides implementation details
  3. Tester (Gemini) raises security/testing concerns
  4. Iterative rounds where agents question and refine
  5. Consensus reached when all concerns are addressed
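The iterative rounds above can be sketched as a loop over agents, where each agent reads the transcript so far and adds a turn. `Agent` and `collaborate` here are illustrative stand-ins, not Rally AI's actual API.

```typescript
// Sketch of the multi-agent collaboration loop described above.
// Real agents would call GPT-4, Claude, and Gemini; `respond` is a stub.
type Agent = { name: string; respond: (transcript: string[]) => string };

function collaborate(agents: Agent[], task: string, rounds: number): string[] {
  const transcript: string[] = [`task: ${task}`];
  for (let round = 1; round <= rounds; round++) {
    // Each round, every agent sees the full transcript and refines it.
    for (const agent of agents) {
      transcript.push(`${agent.name} (round ${round}): ${agent.respond(transcript)}`);
    }
  }
  return transcript;
}
```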

Output:


rally-ai design

Combine technical analysis from Claude with business validation from Gemini.

rally-ai design "multi-tenant-auth" \
  --context "Support 100+ tenants, OAuth + email/password"

Process:

  1. Claude performs deep technical analysis
  2. Gemini validates business aspects
  3. Final design synthesizes both perspectives
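Step 3's synthesis can be pictured as merging the two analyses into one document. The shape and field names below are assumptions for illustration only.

```typescript
// Sketch of combining Claude's technical analysis with Gemini's
// business validation into a single design document.
interface DesignInputs {
  technical: string[]; // findings from the technical analysis
  business: string[];  // findings from the business validation
}

function synthesizeDesign(feature: string, inputs: DesignInputs): string {
  return [
    `Design: ${feature}`,
    "Technical analysis:",
    ...inputs.technical.map((f) => `  - ${f}`),
    "Business validation:",
    ...inputs.business.map((f) => `  - ${f}`),
  ].join("\n");
}
```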

Output:


rally-ai sprint-plan

Create a 4-week tactical execution plan with GPT-4.

rally-ai sprint-plan "multi-tenant-auth"

GPT-4 generates:

Output:


rally-ai validate

Run comprehensive validation with all three AIs.

rally-ai validate "multi-tenant-auth"

Validation checks:

Claude validates:

Gemini validates:

GPT-4 reconciles:

Output:


rally-ai workflow

Run the complete three-model flow in one command.

rally-ai workflow "payments-refactor" \
  --context "PCI scope, event-driven architecture"

Execution order:

  1. GPT-4: Architecture blueprint with diagrams
  2. Claude: Coding plan with module breakdown
  3. Gemini: Review with risk assessment
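The execution order above is a simple sequential pipeline: each stage consumes the previous stage's output. The stage functions below are illustrative stubs, not Rally AI's implementation.

```typescript
// Sketch of the three-model workflow: thread the feature description
// through each stage (blueprint → coding plan → review) in order.
type Stage = (input: string) => string;

function runWorkflow(feature: string, stages: Stage[]): string {
  // reduce applies each stage to the accumulated document.
  return stages.reduce((doc, stage) => stage(doc), feature);
}
```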

Output:


rally-ai info

Check AI model configuration.

rally-ai info

Shows:


RAG Integration

Rally AI includes Retrieval-Augmented Generation (RAG) for context-aware artifact generation.

How It Works

  1. Knowledge Base: Rally AI maintains a vector database with:

    • Architecture patterns
    • Module documentation
    • Example apps and solutions
    • Design system guidelines
    • Domain-specific knowledge
  2. Context Retrieval: Before generating artifacts, ALMs query RAG for relevant patterns

  3. Augmented Generation: AI models receive both the user's request and retrieved context

  4. Better Results: Context-aware generation produces higher-quality, architecture-compliant artifacts
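Steps 2 and 3 above amount to retrieving the most relevant chunks and prepending them to the user's request. The sketch below uses precomputed similarity scores as a stand-in for the real vector search; the function and option names mirror the `chatWithRag` options shown later but are otherwise illustrative.

```typescript
// Sketch of context retrieval + augmented generation: filter chunks by a
// similarity floor, keep the top K, and build the augmented prompt.
type Chunk = { text: string; similarity: number };

function buildAugmentedPrompt(
  question: string,
  retrieved: Chunk[],
  opts: { topK: number; minSimilarity: number },
): string {
  const context = retrieved
    .filter((c) => c.similarity >= opts.minSimilarity)
    .sort((a, b) => b.similarity - a.similarity)
    .slice(0, opts.topK)
    .map((c) => `- ${c.text}`)
    .join("\n");
  return `Context:\n${context}\n\nRequest:\n${question}`;
}
```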

RAG Domains

Using RAG Programmatically

import { MultiAIOrchestrator } from "@rally/ai-orchestrator";

const orchestrator = new MultiAIOrchestrator();

const result = await orchestrator.chatWithRag(
  "rally/coder-claude",
  "tenant-123",
  "How do I implement authentication in this codebase?",
  {
    domain: "modules",
    topK: 10,
    minSimilarity: 0.8
  }
);

console.log(result.answer);
console.log(`Used ${result.sources.totalChunks} source chunks`);

Model Router

Rally AI uses a unified model abstraction layer:

type RallyModelId =
  | "rally/coder-claude"        // Claude Sonnet 3.5
  | "rally/designer-gpt"        // GPT-4 Turbo
  | "rally/tester-gemini"       // Gemini 1.5 Pro
  | "rally/spm-llama"           // Private LLaMA (SPM expertise)
  | "rally/codex-openai";       // GPT-4 (reconciliation)

Routing:
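One way to picture the routing is a model-id → provider lookup, inferred from the comments in the type above. This table is a sketch of the mapping, not the actual router implementation.

```typescript
// Provider behind each Rally model id (inferred from the type comments).
type RallyModelId =
  | "rally/coder-claude"
  | "rally/designer-gpt"
  | "rally/tester-gemini"
  | "rally/spm-llama"
  | "rally/codex-openai";

const PROVIDER_FOR: Record<RallyModelId, string> = {
  "rally/coder-claude": "anthropic", // Claude 3.5 Sonnet
  "rally/designer-gpt": "openai",    // GPT-4 Turbo
  "rally/tester-gemini": "google",   // Gemini 1.5 Pro
  "rally/spm-llama": "private",      // Private LLaMA (SPM expertise)
  "rally/codex-openai": "openai",    // GPT-4 (reconciliation)
};
```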

Chat with any model:

const response = await orchestrator.chat(
  "rally/coder-claude",
  "Review this authentication implementation..."
);

Architecture 3.0 Compliance

Rally AI enforces Architecture 3.0 standards:

Naming Conventions

File Locations

Tier Responsibilities

Design System 2.0


Output Directory Structure

Rally AI saves all outputs in organized directories:

project-root/
├── apps/
│   ├── studio/app/apps/          # Studio apps
│   ├── edge/                      # Edge solutions
│   └── summit/                    # Summit platforms
├── packages/modules/src/          # Modules
├── design-docs/                   # Design phase outputs
├── sprint-plans/                  # Sprint planning outputs
├── validation/                    # Validation outputs
└── collaborations/                # Multi-agent sessions

Complete Workflow Example

Building TacoFinder Ecosystem

# Step 1: Create Studio apps
rally-ai create studio-app \
  --description "Taco restaurant scoping and analysis" \
  --audience "Restaurant entrepreneurs" \
  --domain "food-service" \
  --workflows "market-analysis" "feasibility" \
  --validate

rally-ai create studio-app \
  --description "Consumer taco discovery app" \
  --audience "Taco enthusiasts" \
  --domain "food-service" \
  --workflows "search" "order" "track" \
  --validate

# Step 2: Create Edge solution
rally-ai create edge-solution \
  --studio-apps "taco-scope" "taco-finder" \
  --domain "food-service" \
  --icp "Taqueria owners in major metro areas" \
  --pain-points "Inventory management" "Staff scheduling" \
  --validate

# Step 3: (Future) Create Summit platform
rally-ai create summit-solution \
  --domain "food-service" \
  --edge-solutions "taco-edge" \
  --constraints "Multi-city franchise management" \
  --compliance "Health department regulations" \
  --validate

Best Practices

1. Start with the Right Tier

Ask yourself:

2. Use Specific Context

More context = better results:

❌ Bad:

rally-ai create studio-app --description "auth app" --audience "users" --domain "security"

✅ Good:

rally-ai create studio-app \
  --description "OAuth 2.0 authentication with Google, GitHub, and email/password" \
  --audience "Developers building multi-tenant SaaS apps" \
  --domain "security" \
  --workflows "login" "register" "password-reset" "2fa" \
  --modules "auth" "session" "email" "security"

3. Always Validate

Use the --validate flag to catch issues early:

rally-ai create studio-app ... --validate

4. Review All Outputs

Don't skip reading generated specs:

5. Use Collaboration for Complex Features

For complex or high-risk features, use multi-agent collaboration first:

rally-ai collaborate "payment-processing" \
  --context "PCI compliance, Stripe integration, subscriptions" \
  --rounds 3 \
  --mode build

Troubleshooting

"API key not found"

Solution:

# Check environment variables
echo $ANTHROPIC_API_KEY
echo $OPENAI_API_KEY
echo $GOOGLE_API_KEY

# Set if missing
export ANTHROPIC_API_KEY="sk-ant-..."

"Gateway authentication failed"

Solution:

# Refresh OIDC token (expires every 12 hours)
vercel env pull

# Or set the gateway token manually
export VERCEL_AI_GATEWAY_TOKEN="your_token"

"Template file not found"

Known Issue: Template files in knowledge/prompt-library/ need to be created.

Temporary Solution: ALMs will work without templates but may produce lower-quality specs.

"Validation failed: Invalid tier"

Solution: Ensure the artifact is in the correct location:


Advanced Usage

Programmatic API

import { MultiAIOrchestrator } from "@rally/ai-orchestrator";

const orchestrator = new MultiAIOrchestrator();

// Create Studio app
const app = await orchestrator.createStudioApp({
  description: "Event planning app",
  audience: "Party planners",
  domain: "events",
  workflows: ["invite", "rsvp", "budget"],
  modules: ["calendar", "email", "payments"]
});

// Create Edge solution
const solution = await orchestrator.createEdgeSolution({
  studioAppIds: [app.id],
  domain: "events",
  icp: "Event planning businesses",
  painPoints: ["Manual coordination", "Payment tracking"]
});

// Validate
import { validateArtifact } from "@rally/ai-orchestrator/validation";

const validation = await validateArtifact({
  type: "studio-app",
  id: app.id,
  tier: "studio",
  filePath: app.filePath
});

if (!validation.passed) {
  console.error("Validation failed:", validation.violations);
}

Related Documentation


Resources


Support

Issues: https://github.com/AICodeRally/aicoderally-stack/issues

Questions: todd@aicoderally.com

Implementation Summary: See knowledge/architecture/RALLY_AI_IMPLEMENTATION_SUMMARY.md for complete details on the ALM architecture, features, and known limitations.


Last Updated: November 28, 2025 Version: 2.0.0 (ALM Architecture)