Experience the complete BDD-driven development lifecycle with SuperOptiX. See how evaluation, optimization, and multi-agent orchestration work together to create production-ready AI teams.
Developer, QA Engineer, DevOps Engineer
Professional specification execution
Automatic performance improvement
Production-ready deployment
Ensure your system meets the requirements for optimal performance
Required for agent optimization (Step 5)
Recommended for smooth operation
For model downloads and API calls
Required for SuperOptiX compatibility
Install with pip, conda, or uv
Optional but recommended for local inference
Install models before running agents
ollama pull llama3.2:1b
super model install llama3.2:1b
super model run llama3.2:1b "Hello"
PYTHONUTF8=1
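On macOS/Linux you can export this variable in your shell before running SuperOptiX (on Windows Command Prompt, use set PYTHONUTF8=1 instead):
export PYTHONUTF8=1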
Optimization (Step 5) is resource-intensive and can be costly: it requires significant GPU resources and may incur cloud costs. You can skip it if you don't meet the requirements or want to avoid the expense - your agent will still work without optimization.
Choose your preferred method to get started
Download models before proceeding - agents won't work without them
Models are required for agent execution. Without models, your agents won't be able to process requests. Choose the installation method that works best for your system.
Install AI models using any of the methods above. The SuperOptiX CLI method is recommended as it provides the most seamless experience.
Creates the proper directory structure with a `.super` configuration file for workspace management.
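For example, assuming a workspace named my_ai_team (the name is illustrative, and the exact CLI syntax may vary between SuperOptiX versions):
super init my_ai_team
cd my_ai_team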
Make sure you have installed AI models from Step 0 before pulling agents. The agent will use the models you installed.
Downloads a production-ready agent with SuperSpec YAML configuration including BDD scenarios for testing.
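For example, pulling one of the pre-built agents (the agent name is illustrative; use whatever is available in the marketplace for your SuperOptiX version):
super agent pull developer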
Converts the SuperSpec YAML playbook into an executable DSPy pipeline with BDD scenarios as evaluation tests.
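A typical compile step, continuing the illustrative agent name from the previous step:
super agent compile developer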
Critical: Always evaluate before optimizing to establish baseline performance!
The BDD Test Runner executes all scenarios defined in your SuperSpec playbook, providing detailed analysis and baseline metrics before optimization.
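For example, continuing with the illustrative developer agent - this run records your baseline pass rate before any optimization:
super agent evaluate developer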
Transform BDD scenarios into training data for automatic agent improvement
🔄 Skip this step if you don't meet the requirements or want to avoid costs. Your agent will still work without optimization.
💡 Note: Optimization can also be triggered during evaluate or run stages depending on your workflow configuration.
Uses BDD scenarios as training examples
Tests different prompt variations
Saves optimized configurations
The DSPy Optimization Engine uses your BDD scenarios as training data to automatically improve prompts, reasoning chains, and few-shot examples.
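A typical optimization run, again using the illustrative agent name (expect it to take considerably longer than evaluation, since multiple prompt variations are tried):
super agent optimize developer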
Re-run BDD tests to validate optimization effectiveness
Re-running the same BDD scenarios quantifies the improvement, giving you a measurable before/after comparison of agent performance.
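Using the same command as the baseline evaluation keeps the before/after numbers directly comparable:
super agent evaluate developer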
Now your optimized agent can execute real production goals with improved performance validated by BDD scenarios.
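For example (the goal text is illustrative, and the exact flag name may differ in your SuperOptiX version):
super agent run developer --goal "Write a Python function that validates email addresses"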
Coordinate multiple optimized agents for complex workflows
Kubernetes-style orchestration for AI agents - coordinate multiple optimized agents with automatic task decomposition and context passing.
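A sketch of what orchestration might look like, with illustrative orchestra and goal names - exact subcommands may differ in your SuperOptiX version:
super orchestra create dev_team
super orchestra run dev_team --goal "Build and test a REST API endpoint"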
Test agents before optimizing, just like TDD for traditional software
Your tests become optimization training examples
Deploy only when pass rates meet production standards
DSPy automatically improves agent performance
Measure exactly how much agents improved
Multi-agent coordination for complex workflows
Understanding the complete BDD-driven development lifecycle
1. Evaluation-First: We started with BDD scenarios that define what success looks like, just like TDD in traditional software development.
2. Baseline Measurement: We established performance metrics before any optimization, ensuring we can measure improvement.
3. Systematic Optimization: DSPy used our BDD scenarios as training data to automatically improve prompts and reasoning chains.
4. Validation: We re-ran the same tests to validate that optimization actually improved performance.
5. Production Deployment: Our optimized agents can now execute real-world goals with confidence.
Reproducible Results: Every optimization is based on your specific use cases, not generic examples.
Quality Assurance: BDD scenarios serve as both tests and training data, ensuring optimization improves real-world performance.
Measurable Improvement: You can quantify exactly how much your agents improved through the same evaluation metrics.
Production Ready: The evaluation-first approach ensures your agents meet quality gates before deployment.
For detailed explanations of what happens at each stage, comprehensive examples, and advanced techniques, visit our complete documentation:
Complete Quick Start Guide with Detailed Explanations
See the entire SuperOptiX workflow in action - from installation to running your first AI team
This demo shows the complete end-to-end workflow you just learned about: