Drop-in DSPy Replacement

Stop hand-tuning prompts.
Let algorithms do it.

VizPy's ContraPrompt and PromptGrad optimizers outperform GEPA on every benchmark. Two lines of code. Better prompts. No DSPy lock-in required.

optimize.py
import dspy
import vizpy

# Use any model you have access to
dspy.configure(lm=dspy.LM("your_provider/model"))

# Pick your optimizer
optimizer = vizpy.ContraPromptOptimizer(metric=my_metric)
optimized = optimizer.optimize(module, train_examples)

# That's it. Better prompts. Same module.

Benchmark Results vs. GEPA

BBH: +8% accuracy gain
HotPotQA: +6% accuracy gain
GPQA: +11% accuracy gain
GDPR: +5% accuracy gain

Two optimizers. One clear API.

Each algorithm is purpose-built for its task type. No guesswork about which optimizer to use.

Classification

ContraPrompt

Mines contrastive pairs from incorrect vs. corrected attempts. Extracts separating rules that teach the model to distinguish edge cases. Best for tasks with discrete labels.

Best for: Classification, Labeling, Routing
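The contrastive-pair idea can be sketched in plain Python. Everything here (the `Attempt` structure, the one-rule-per-confusion heuristic) is an illustrative assumption, not VizPy's actual internals; the real optimizer uses an LLM to extract richer separating rules.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    text: str       # the input the model saw
    predicted: str  # the incorrect label it produced
    corrected: str  # the label it should have produced

def mine_separating_rules(attempts):
    """Group wrong->right label confusions and emit one plain-English
    separating rule per confusion, citing an example input."""
    rules = {}
    for a in attempts:
        key = (a.predicted, a.corrected)
        if key not in rules:
            rules[key] = (
                f"Do not label inputs like {a.text!r} as {a.predicted!r}; "
                f"the correct label is {a.corrected!r}."
            )
    return list(rules.values())

attempts = [
    Attempt("refund my order", predicted="sales", corrected="support"),
    Attempt("cancel my plan", predicted="sales", corrected="support"),
    Attempt("bulk pricing?", predicted="support", corrected="sales"),
]
for rule in mine_separating_rules(attempts):
    print(rule)
```

The edge cases live exactly where the model confuses two labels, which is why mining pairs of (incorrect, corrected) attempts surfaces them directly.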
Generation

PromptGrad

Uses epoch-based failure analysis to compute textual gradients. Accumulates correction rules across training runs. Best for tasks that produce free-form output.

Best for: Summarization, QA, Code Generation
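The epoch loop behind textual gradients can be sketched as follows. This is a toy stand-in, not PromptGrad's implementation: a frequency heuristic plays the role of the LLM that would normally summarize an epoch's failures into a correction rule, and the simulated `run_epoch` simply assumes each accumulated rule eliminates the failure mode it names.

```python
def textual_gradient(failures):
    """Distill one epoch's failure descriptions into a single
    correction rule (a "textual gradient"). A real system would
    have an LLM write this; most-common-failure stands in here."""
    if not failures:
        return None
    most_common = max(set(failures), key=failures.count)
    return f"Avoid this failure: {most_common}."

def optimize_prompt(base_prompt, run_epoch, n_epochs=3):
    """Epoch loop: collect failures, compute a textual gradient,
    and accumulate it as a correction rule appended to the prompt."""
    rules = []
    for _ in range(n_epochs):
        rule = textual_gradient(run_epoch(rules))
        if rule is None:
            break  # no failures left to correct
        rules.append(rule)
    return base_prompt + "".join("\n- " + r for r in rules)

# Simulated training run: a rule fixes the failure mode it names.
modes = ["answer too verbose", "missing citation", "missing citation"]

def run_epoch(rules):
    return [m for m in modes if not any(m in r for r in rules)]

print(optimize_prompt("Summarize the document.", run_epoch))
```

Each epoch's gradient targets the currently dominant failure, so the rules accumulate from most to least impactful.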

Three steps to better prompts.

01

Define your module

Write a standard DSPy module with a metric function. If you already have one, you're done here.
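A metric in DSPy's usual `(example, prediction, trace=None)` convention might look like the exact-match sketch below. The field name `answer` is an assumption about your signature; `SimpleNamespace` only stands in for DSPy's example and prediction objects so the snippet runs standalone.

```python
from types import SimpleNamespace

def my_metric(example, prediction, trace=None):
    """Exact-match metric: True when the predicted answer matches
    the gold answer, ignoring case and surrounding whitespace."""
    return example.answer.strip().lower() == prediction.answer.strip().lower()

# Stand-ins for DSPy example/prediction objects
gold = SimpleNamespace(answer="Paris")
pred = SimpleNamespace(answer=" paris ")
print(my_metric(gold, pred))
```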

02

Pick your optimizer

ContraPrompt for classification. PromptGrad for generation. Each returns an optimized version of your module with the same structure.
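The selection rule above is mechanical enough to write down. A minimal sketch, assuming the class names `ContraPromptOptimizer` (shown on this page) and `PromptGradOptimizer` (a guess; check VizPy's docs for the exact name):

```python
CLASSIFICATION_TASKS = {"classification", "labeling", "routing"}
GENERATION_TASKS = {"summarization", "qa", "code generation"}

def pick_optimizer(task_type: str) -> str:
    """Map a task type to the recommended VizPy optimizer:
    discrete labels -> ContraPrompt, free-form output -> PromptGrad."""
    t = task_type.strip().lower()
    if t in CLASSIFICATION_TASKS:
        return "ContraPromptOptimizer"
    if t in GENERATION_TASKS:
        return "PromptGradOptimizer"
    raise ValueError(f"Unknown task type: {task_type!r}")
```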

03

Read the rules

Every optimization produces plain-English explanations of what it changed and why. No black-box tuning.
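The shape of that output can be illustrated with a small formatter. The `(rule, reason)` pair structure is hypothetical; it only shows what "plain-English explanations of what changed and why" could look like in practice.

```python
def explain(rules):
    """Render learned rules as a numbered, plain-English changelog
    pairing each change with the failure that motivated it."""
    lines = ["What the optimizer changed and why:"]
    for i, (rule, reason) in enumerate(rules, 1):
        lines.append(f"{i}. {rule} (why: {reason})")
    return "\n".join(lines)

print(explain([
    ("Added a rule to cite sources inline", "answers lacked citations"),
    ("Shortened the answer format instruction", "answers were too verbose"),
]))
```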

Prompts are parameters.
Optimize them accordingly.

Built by the team behind multi-objective RL at VizopsAI. Ex-DeepMind, ex-Amazon, JHU PhD research.