Large language models (LLMs) have become integral to enterprise applications across industries. Under the hood, customers’ inputs to the models are usually augmented with prompts that encode intricate business logic, regulatory requirements, and domain expertise: a healthcare system must use language compliant with the Health Insurance Portability and Accountability Act, for instance, and a financial trading system must follow risk tolerance rules. These prompts are typically crafted by domain experts over weeks or months. Yet business demands continue to push for further performance gains. The challenge, therefore, is not engineering prompts from scratch but rather elevating already strong performance by discovering nuanced, task-specific refinements — without compromising domain requirements.

In this post, we present Promptimus, a method for automatically optimizing well-developed prompts that has several advantages over its predecessors:

- It’s model agnostic: It takes a prompt already optimized for a source model, rapidly reoptimizes it for a target model, and compares the optimized prompts across models.
- It’s driven by performance criteria: It takes the existing prompt template, task-specific data samples, and user-defined performance metrics and generates targeted improvement strategies, iterating repeatedly to achieve domain-specific optimization objectives.
- It focuses on exploitation: It uses a metric-analyzer AI agent to identify failure points and a debugging helper agent to identify root causes, and it surgically refines prompts relative to failures (rather than along random dimensions) for targeted performance improvement.
- It’s fully automated: It analyzes user-defined metrics and uses a code sanitization AI agent to generate debugging checkpoints automatically. Metric functions can be imported as Python code, and performance criteria can be added or modified at any time.
- It has an edit mode: For large, carefully structured prompts with complex business logic, the edit mode makes surgical, targeted modifications instead of rewriting the entire prompt — preserving the parts that already work while fixing exactly what’s broken.

Promptimus supports a wide range of textual and multimodal LLM tasks, including classification, extraction, generation, summarization, code generation, and tool use. In the following sections, we’ll present our methodology, the system architecture, and experimental results on multiple enterprise tasks.

Why good prompts are hard to improve

Attempts to automate prompt optimization are as old as prompt engineering itself, but approaches that work well when generating prompts from scratch struggle to improve well-engineered prompts. Random exploration strategies using generic directions like “be more creative” or “add examples” are ineffective, because the remaining improvements lie in very specific strategic directions. Sparse feedback in the form of scalar scores provides no guidance on why instances fail or how to improve.

On top of growing complexity from business domain demands, rapid model evolution further compounds the challenge of prompt optimization. As providers like Anthropic, OpenAI, Google, Meta, and Alibaba release new models, enterprises face recurring prompt migration challenges. Prompts optimized for one model often underperform on another due to different instruction-following characteristics. Manual reoptimization is costly and time consuming, and regression risks delay adoption of better models.
Methodology and system design

Promptimus addresses these challenges with a methodology built around a four-step iteration loop, with the following inputs:

- the LLM you aim to use for inference
- the initial prompt template
- a small JSONL dataset (typically 20–50 samples) with the corresponding variables for the prompt template, split into a development set (for prompt tuning) and a held-out test set (for validation); the samples are not required to contain ground truth
- a user-defined performance-evaluation metric function (you can bring your own Python code)

The four-step iteration loop

Step 1 — evaluation: During initialization, the original prompt is executed on the target LLM using the development set (dev set) to establish baseline evaluation scores. Additionally, the metric-analyzer agent analyzes the user-defined metric function, generating checkpoint functions that decompose the evaluation into intermediate validation steps. These checkpoints enable fine-grained failure diagnosis throughout the optimization process. For example, when the checkpoints reveal that 98% of outputs have the correct JSON format, and 95% have valid schemas, but only 88% have valid values, the cause of underperformance is localized to value validation. After the initial evaluation, Promptimus branches into either standard mode, where it conducts full prompt rewrites, or edit mode, where it modifies prompts with structured find-and-replace edits.

Step 2 — standard mode (feedback generation): The LLM-driven feedback generator uses the metric checkpoints precomputed by the metric analyzer to diagnose failure patterns in the current prompt’s results. It identifies the bottleneck checkpoint (the one with the lowest pass rate) and collects representative instances — including both failing and passing examples, to provide contrast — then analyzes root causes and common failure modes. Finally, it provides actionable suggestions for fixing the prompt (such as “model outputs descriptive text instead of enum codes; suggest adding explicit constraint”).

Step 2 — edit mode (analysis + strategy + edit generation): After performing the same failure analysis as in the standard mode, the feedback generator proposes targeted find-and-replace edits, pinning changes to the exact locations responsible for specific failures.

Step 3 — standard mode (strategy + full rewrite): Based on the feedback from the previous step, along with the metrics and data samples, the metaoptimizer analyzes task characteristics and generates task-specific exploration strategies, while maintaining all domain-specific requirements encoded in the original prompt. Then, for each strategy, the instruction optimizer proposes an improved prompt candidate that addresses the identified weaknesses and specific error patterns. This one-to-one coupling between strategies and candidates ensures diverse exploration of the optimization landscape.

Step 3 — edit mode (programmatic edit application): For each edit proposed in step 2, Promptimus deterministically locates and applies the edit using three match levels: exact match, whitespace-normalized fuzzy match, and similarity match near the line reference. This process has a 97.3% success rate with zero LLM calls.

Step 4 — candidate evaluation (both modes): Each candidate is executed using the dev set, and the best candidate is selected by running the user-defined metric function. The best-performing candidate becomes the starting point for the next iteration.
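To make the metric function and checkpoint mechanism concrete, the sketch below shows what a user-supplied metric and the checkpoints derived from it might look like for a task that must return JSON with a single classification field. This is a minimal illustration only: the function names, the "category" field, and the sample keys are our assumptions, not Promptimus’s actual interfaces.

```python
import json

# Hypothetical user-defined metric for a task that must return JSON with a
# single "category" field. Promptimus only needs a callable that maps a model
# output and a data sample to a score; everything here is illustrative.
def metric(model_output: str, sample: dict) -> float:
    try:
        prediction = json.loads(model_output)
    except json.JSONDecodeError:
        return 0.0
    if not isinstance(prediction, dict):
        return 0.0
    return float(prediction.get("category") == sample["label"])


# Checkpoint functions of the kind the metric analyzer might derive from the
# metric above, decomposing it into intermediate validation steps.
def checkpoint_valid_json(model_output: str, sample: dict) -> bool:
    try:
        json.loads(model_output)
        return True
    except json.JSONDecodeError:
        return False


def checkpoint_valid_schema(model_output: str, sample: dict) -> bool:
    if not checkpoint_valid_json(model_output, sample):
        return False
    parsed = json.loads(model_output)
    return isinstance(parsed, dict) and "category" in parsed


def checkpoint_valid_value(model_output: str, sample: dict) -> bool:
    if not checkpoint_valid_schema(model_output, sample):
        return False
    return json.loads(model_output)["category"] in sample["allowed_categories"]
```

Pass rates over checkpoints like these, computed on the dev set, are what the feedback generator uses to identify the bottleneck checkpoint in step 2.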
The optimization loop runs for a user-specified number of iterations, with each iteration building on what was learned and achieved in the previous one.

We recommend standard mode for short prompts that need significant expansion — for example, a two-line math prompt that needs to grow into detailed reasoning protocols. Edit mode is a better choice for longer and already well-crafted prompts containing structured content like API schemas, compliance rules, or domain taxonomies, where full rewrites risk silently dropping or reorganizing carefully crafted sections. For a prompt of 50,000–100,000 tokens, a typical iteration produces three to five edits totaling 500–1,000 tokens, versus regenerating the entire prompt. More generally, Promptimus adds content only when the optimization loop surfaces unaddressed failure modes, so prompt length plateaus within the first few iterations. This means that the relative serving-time impact is small for already long production prompts and larger for short starter templates. If the optimized prompt is served as a cached system prompt, the additional cost is one uncached call per cache time-to-live window, which becomes negligible at scale.

Empirical experiments and analysis

We evaluated Promptimus against six leading automatic prompt optimization methods across 20 public benchmarks spanning reasoning, math, question answering, text-to-SQL, coding, function calling, instruction following, and multimodal tasks. All methods used the same optimizer model and evaluation budget, with Claude Sonnet 4.6 as the target model, and results were averaged over five random seeds. Each benchmark used 20 dev samples for optimization and 100 held-out test examples for evaluation.

As reported in the table below, Promptimus achieves the best result on 16 of 20 benchmarks and ties on one, outperforming all six baselines on average (0.792 vs. 0.765 for the best-of-six baseline). The largest gains appear on tasks where the metric has a decomposable structure. Notably, Promptimus in edit mode outperforms all baselines on all four multimodal benchmarks, suggesting that vision-language prompts benefit from preserving existing visual-analysis structure rather than rewriting it.

| Benchmark | Metric | No optimization | Best of six baselines | Promptimus | Mode |
| --- | --- | --- | --- | --- | --- |
| BBH-CausalJudge | Acc [0,1] | 0.538 | 0.726 (GEPA) | 0.718 | Standard |
| BBH-DisambigQA | Acc [0,1] | 0.601 | 0.868 (GPO) | 0.908 | Standard |
| BBH-GeoShapes | Acc [0,1] | 0.747 | 0.770 (OPRO) | 0.936 | Standard |
| BBH-RuinNames | Acc [0,1] | 0.918 | 0.926 (GEPA) | 0.928 | Standard |
| BBH-Snarks | Acc [0,1] | 0.324 | 0.920 (OPRO) | 0.908 | Edit |
| GSM8K | Acc [0,1] | 0.658 | 0.964 (MIPROv2) | 0.958 | Standard |
| DAPO-AIME | Acc [0,1] | 0.703 | 0.730 (ProTeGi) | 0.79 | Standard |
| HotPotQA | F1 [0,1] | 0.16 | 0.832 (MIPROv2) | 0.839 | Standard |
| Spider | ExAcc [0,1] | 0.68 | 0.846 (GEPA) | 0.85 | Edit |
| BIRD | ExAcc [0,1] | 0.626 | 0.684 (ProTeGi) | 0.684 | Standard |
| BigCodeBench-hard | Pass@1 [0,1] | 0.339 | 0.336 (ProTeGi) | 0.345 | Standard |
| Codeforces | Pass@1 [0,1] | 0.589 | 0.808 (TextGrad) | 0.818 | Edit |
| BFCL | AST [0,1] | 0.882 | 0.968 (MIPROv2) | 0.98 | Standard |
| NesT-Ful | PMacc [0,1] | 0.375 | 0.429 (TextGrad) | 0.469 | Standard |
| IFBench | Acc [0,1] | 0.498 | 0.509 (GEPA) | 0.53 | Standard |
| IFEval | Strict [0,1] | 0.876 | 0.886 (GPO) | 0.892 | Standard |
| MathVista | Acc [0,1] | 0.433 | 0.606 (GPO) | 0.644 | Edit |
| ChartQA | Relaxed Acc [0,1] | 0.279 | 0.828 (ProTeGi) | 0.834 | Edit |
| AI2D | Acc [0,1] | 0.834 | 0.824 (MIPROv2) | 0.868 | Edit |
| DeFactify | Acc [0,1] | 0.835 | 0.922 (MIPROv2) | 0.938 | Edit |
| Average | | 0.595 | 0.765 | 0.792 | |

The figure below shows convergence through iterations on two representative benchmarks. Promptimus edit mode reaches 90% of its final development score in a median of about 300 metric calls, faster than all baselines.
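Part of edit mode’s metric-call efficiency comes from the deterministic edit application in step 3 of the loop, which consumes no LLM calls. The sketch below illustrates the three match levels described earlier; the function signature, the ±10-line search window, and the 0.9 similarity threshold are assumptions for illustration, not the production implementation.

```python
import difflib
from typing import Optional

# Illustrative sketch of deterministic find-and-replace edit application with
# three match levels: exact, whitespace-normalized fuzzy, and similarity match
# near a line reference. Signature, search window, and threshold are assumed.
def apply_edit(prompt: str, find: str, replace: str,
               line_hint: Optional[int] = None) -> Optional[str]:
    # Level 1: exact match anywhere in the prompt.
    if find in prompt:
        return prompt.replace(find, replace, 1)

    def norm(text: str) -> str:
        return " ".join(text.split())

    lines = prompt.splitlines()
    window = max(1, len(find.splitlines()))

    # Level 2: whitespace-normalized fuzzy match over sliding windows of lines.
    for i in range(len(lines) - window + 1):
        candidate = "\n".join(lines[i:i + window])
        if norm(candidate) == norm(find):
            lines[i:i + window] = replace.splitlines()
            return "\n".join(lines)

    # Level 3: best similarity match within a few lines of the referenced line.
    if line_hint is not None:
        lo, hi = max(0, line_hint - 10), min(len(lines), line_hint + 10)
        best_start, best_ratio = None, 0.0
        for i in range(lo, max(lo, hi - window + 1)):
            candidate = "\n".join(lines[i:i + window])
            ratio = difflib.SequenceMatcher(None, norm(candidate), norm(find)).ratio()
            if ratio > best_ratio:
                best_start, best_ratio = i, ratio
        if best_start is not None and best_ratio >= 0.9:  # assumed threshold
            lines[best_start:best_start + window] = replace.splitlines()
            return "\n".join(lines)

    return None  # edit could not be applied deterministically
```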
Both modes typically plateau within eight iterations, with the bulk of improvement concentrated in the first three to five iterations. Importantly, dev set gains transfer to the held-out test set. Sometimes baselines match or even exceed Promptimus on dev but fall behind on test, indicating overfitting. We attribute Promptimus’s better generalization to edit mode’s surgical modifications, which preserve generalizable prompt structure, and to metric probing, which produces failure signals that transfer across examples rather than memorizing dev-set patterns.

We also evaluated Promptimus across multiple LLMs using a public benchmark and Amazon enterprise use cases, spanning classification, text-to-SQL, math reasoning, coding, multimodal understanding, and complex API generation on seven target models. Promptimus improved baseline prompts on all nine tasks, with gains ranging from 3.18% to 90.27%. Dev sets ranged from 30 to 210 examples, with the majority of tasks using fewer than 100, demonstrating the system’s sample efficiency. The results also highlight model-agnostic generalizability: the same optimization framework produced meaningful gains across both proprietary and open-source target models without task-specific engineering.

| Task | Target LLM | Performance metric | Dev set size | No optimization | Optimized |
| --- | --- | --- | --- | --- | --- |
| Complex API call generation | GPT-OSS-120B | API Acc (user-defined) [0,1] | 43 | 0.45 | 0.86 |
| Classification_A | Nova Pro | F1 score and FPR score [0,1] | 210 | 0.64 | 0.78 |
| Multimodal classification_B | Haiku-4.5 | Accuracy [0,1] | 160 | 0.51 | 0.76 |
| Classification_C | Nova Lite | Accuracy [0,1] | 85 | 0.56 | 0.58 |
| Text2sql_A | Nova-Micro | Execution accuracy [0,1] | 50 | 0.72 | 0.83 |
| Math reasoning_A | Qwen3-235B (non-reasoning) | Accuracy (user-defined) [0,1] | 30 | 0.47 | 0.50 |
| Math reasoning_B | Claude-4.5-Opus (non-reasoning) | Accuracy (user-defined) [0,1] | 30 | 0.60 | 0.73 |
| Coding_A | GPT-OSS-120B | Pass@1 [0,1] | 100 | 0.26 | 0.33 |
| Coding_B | GPT-OSS-120B | Pass@1 [0,1] | 31 | 0.56 | 0.64 |

Following are examples of how Promptimus improved already well-engineered prompts to further drive application performance across a variety of use cases.

Example 1: CodeForces (coding benchmark designed to evaluate LLM reasoning)

In this use case, an LLM generates Python code from a user-provided problem description. We used 50 dev samples (sampled from the original dev set) and 148 test samples with a user-defined scoring approach. The Promptimus (edit mode) optimization converged in five iterations.

Original vs. optimized prompt (deleted lines prefixed with -, added lines with +):

-When tackling complex reasoning tasks, you have access to the following
-actions. Use them as needed to progress through your thought process.
-[ASSESS]
-[ADVANCE]
-[VERIFY]
-[SIMPLIFY]
-[SYNTHESIZE]
-[PIVOT]
-[OUTPUT]
-You should strictly follow the format below:
-[ACTION NAME]
-# Your action step 1
-# Your action step 2
-…
-Next action: [NEXT ACTION NAME]
+You are an expert competitive programmer. Solve the given programming
+problem in Python using the strict 2-phase reasoning structure defined below.
+ ## ABSOLUTE RULE – ONE [OUTPUT] BLOCK ONLY – ZERO EXCEPTIONS
+ The first [OUTPUT] block encountered is the ONLY one evaluated. A second [OUTPUT] block causes
+ immediate evaluation failure and a score of 0.
+ ## CRITICAL CONSTRAINTS
+ Standard Library Only – Use ONLY Python standard library modules. No exceptions.
+ Forbidden: sortedcontainers, numpy, scipy, pandas. Allowed: bisect, heapq, collections, math,
+ itertools, functools, sys.
+ If you need a sorted structure: implement using bisect + a plain list.
+ Sorting Pitfall Warning:
+ Never use sort(reverse=True) when the secondary sort direction differs from the primary.
+ Descending by key A, ascending by key B: items.sort(key=lambda x: (-x[0], x[1]))
+ I/O Consistency Rule:
+ Use exactly ONE I/O method throughout – no mixing.
+ Strategy A: input = sys.stdin.readline at top, then use input() everywhere.
+ Strategy B: use sys.stdin.readline() directly everywhere.
+ Variable Initialization Rule:
+ Declare all variables that are conditionally assigned BEFORE their conditional block.
+ ## STRICT 2-PHASE STRUCTURE
+ ### PHASE 1 – [ASSESS] (ONE block only)
+ 5 mandatory gates (G1–G5). Each gate requires a one-line YES/NO
+ justification.
+ G1 – Brute force feasible? Is O(n^2) within time constraints?
+ G2 – All variables initialized before conditional use?
+ G3 – I/O strategy chosen and consistent? Declare exactly one strategy.
+ G4 – Demo output reproducible by hand? Perform explicit dry run on demo input.
+ G5 – Any mutable structure modified during iteration? Confirm index recomputation.
+ End with: Chosen approach: [algorithm name], O([complexity]) – Tier [1/2/3]
+ Tier 1 = Brute-force correct, Tier 2 = Optimized correct, Tier 3 = Optimal.
+ Fallback Rule: If you cannot confidently implement Tier 2+, commit to Tier 1. A slow, correct
+ solution scores higher than a fast, broken one.
+ ### PHASE 2 – [OUTPUT] (ONE block only, immediately after ASSESS)
+ First line inside [OUTPUT] must declare I/O strategy as a comment.
+ Produce the complete Python solution. No other action types permitted.
+ ## CRITICAL OUTPUT RULES
+ 1. Exactly ONE [OUTPUT] block. Fix mistakes inline – never open a second.
+ 2. Inside [OUTPUT], the ONLY content is the fenced Python code block.
+ 3. Reasoning word budget: entire [ASSESS] block must not exceed 250 words.
+ 4. No trailing empty lines in output.
+ 5. Never end your response with only reasoning – even brute-force is acceptable over no solution.
+ 6. Never output -1 or “no solution” if the problem guarantees a solution always exists.
+ [. . . mandatory code scaffold template with I/O strategy declaration, imports, solve() structure, sorting/mutation reminders, output formatting rules . . . ]

Title: {problem_title}
Time Limit: {time_limit}
Memory Limit: {memory_limit}
Problem Description: {problem_description}
Output Specification: {output_specification}
Demo Input: {demo_input}
Demo Output: {demo_output}
Note: {demo_note}

-Write Python code to solve the problem. Present the code in ```python … ``` at the end.
+Solve the problem using the 2-phase structure: [ASSESS] block (5 mandatory gates G1–G5, ≤250 words),
+then [OUTPUT] block (fenced Python solution).

Example 2: Multimodal AI agent

This AI agent is used within Amazon to detect construction defects; the original and optimized prompts are shown below. We used the vision-language model qwen3-vl-235b-a22b on Amazon Bedrock to examine the images taken by inspectors and identify construction defect categories and risk levels. The optimization converged in three iterations with 16 dev samples. The recommendations generated by Promptimus’s metric analyzer and instruction optimizer (including providing a role, a task objective, defect categories with examples, a category disambiguation section, analysis instructions with a decision tree, output format requirements, and critical output requirements) improved the image classification accuracy from 0.438 to 0.812.
When we applied the optimized prompt to the test sample set (17 samples), accuracy improved from 0.471 to 0.529.

Example 3: Defactify (multimodal fact verification)

This is a comprehensive framework for evaluating an LLM’s ability to perform multimodal fact verification, detect misinformation, and identify AI-generated content. The Promptimus metric analyzer found that the model defaults to “Real” for photorealistic AI-generated images. The optimizer introduced an adversarial dual-hypothesis framework with asymmetric weighting that biases the model toward “AI-generated”. For example, with the original prompt, the model dismisses a clock with garbled numbers as an “artistic design choice” and is fooled by photorealistic textures. After optimization, by contrast, the adversarial dual-hypothesis protocol forces systematic signal enumeration, catching the garbled clock numerals that the baseline dismissed.

Conclusion and future work

Compared to other metric-driven prompt optimization approaches, Promptimus excels at exploitation, making targeted, failure-focused refinements rather than exploring at random. It is fully generalizable, adapting to user-defined metric functions and task domains without manual engineering. The dense feedback loop automatically analyzes metric-function code, identifies debugging checkpoints, and generates adaptive, task-aware exploration strategies that target the specific failure modes of each prompt-and-task combination. In particular, our approach is sample efficient, requiring only a small number of dev examples (typically 20–50) to drive significant improvements, which suits enterprise scenarios where labeled data is scarce or expensive to obtain. Furthermore, its model-agnostic design enables it to rapidly adapt prompts to target models for seamless enterprise-level model migration. We are making this innovation available through Amazon Bedrock to enable model migration for enterprise generative-AI applications with zero manual engineering and minimal labeled datasets.