The Developer's Guide to Prompt Engineering

by Tomáš
5 min read

TL;DR

Effective prompts use structured formats, explicit constraints, and few-shot examples to produce consistent, reliable AI outputs in production systems.

Prompt engineering is the difference between an AI feature that works in demos and one that works in production. For developers building AI-powered systems, the prompt is your primary interface to the model — and it deserves the same rigor as any other piece of your codebase.

The Anatomy of a Production Prompt

A well-structured prompt has distinct sections, each serving a specific purpose. Treating prompts as structured documents rather than freeform text dramatically improves consistency.

interface PromptConfig {
  system: string;
  context: string;
  instructions: string;
  constraints: string[];
  outputFormat: string;
  examples: Array<{ input: string; output: string }>;
}

const codeReviewPrompt: PromptConfig = {
  system: "You are a senior TypeScript developer performing code review.",
  context: "Review the following pull request diff for a Node.js REST API.",
  instructions: "Identify bugs, security issues, and performance problems.",
  constraints: [
    "Only flag issues with HIGH or CRITICAL severity",
    "Include the file path and line number for each issue",
    "Suggest a specific fix for each issue found",
  ],
  outputFormat: "Return a JSON array of issues",
  examples: [
    {
      input: "const data = req.body.query; db.raw(data);",
      output: '{"severity":"CRITICAL","issue":"SQL injection","fix":"Use parameterized queries"}',
    },
  ],
};

The more explicit your constraints, the less variance in model output. Ambiguity in prompts produces ambiguity in results.
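
To make the structure concrete, here is a minimal sketch of assembling a `PromptConfig` into a final prompt string. The section labels and their order are illustrative choices, not a fixed standard, and the interface is repeated so the sketch is self-contained (in TypeScript, identical interface declarations merge).

```typescript
// Same shape as the PromptConfig above; repeated so this sketch is self-contained.
interface PromptConfig {
  system: string;
  context: string;
  instructions: string;
  constraints: string[];
  outputFormat: string;
  examples: Array<{ input: string; output: string }>;
}

// Render everything except `system`, which is passed separately as the
// API's system parameter rather than embedded in the user prompt.
function renderPrompt(config: PromptConfig): string {
  const exampleText = config.examples
    .map((ex) => `Input: ${ex.input}\nOutput: ${ex.output}`)
    .join("\n\n");

  return [
    config.context,
    config.instructions,
    "Constraints:\n" + config.constraints.map((c) => `- ${c}`).join("\n"),
    "Output format: " + config.outputFormat,
    "Examples:\n" + exampleText,
  ].join("\n\n");
}
```

Keeping the assembly in one function means every prompt in your codebase gets the same section layout, which makes variance easier to debug.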

Core Prompt Techniques

Zero-Shot Prompting

Zero-shot prompts provide instructions without examples. They work well for straightforward tasks where the model’s training data covers the domain.

const prompt = `Extract the email addresses from the following text.
Return them as a JSON array of strings.

Text: "Contact us at support@example.com or sales@example.com for details."`;

Few-Shot Prompting

Few-shot prompts include input-output examples that demonstrate the expected behavior. This technique is essential when your output format is specific or when the task has nuances that instructions alone cannot capture.

const fewShotPrompt = `Classify the following support ticket by category.

Examples:
Input: "I can't log in to my account"
Category: authentication

Input: "The page loads slowly on mobile"
Category: performance

Input: "I was charged twice for my subscription"
Category: billing

Input: "${userTicket}"
Category:`;

Chain-of-Thought Prompting

Chain-of-thought prompting asks the model to show its reasoning before producing a final answer. This reduces errors on complex tasks by forcing intermediate steps.

{
  "system": "You are a debugging assistant. Think through each step before providing your answer.",
  "user": "This function should return the sum of even numbers, but it returns 0 for [2, 4, 6]. Walk through the code step by step, then provide the fix.\n\nfunction sumEvens(nums) {\n  let sum = 0;\n  for (let i = 0; i < nums.length; i++) {\n    if (nums[i] % 2 !== 0) sum += nums[i];\n  }\n  return sum;\n}"
}

Comparing Prompt Techniques

Each technique has trade-offs in token cost, reliability, and implementation complexity.

| Technique           | Token Cost | Reliability | Best For                           |
| ------------------- | ---------- | ----------- | ---------------------------------- |
| Zero-shot           | Low        | Moderate    | Simple, well-defined tasks         |
| Few-shot            | Medium     | High        | Format-sensitive outputs           |
| Chain-of-thought    | High       | Very high   | Complex reasoning, math, debugging |
| System + user split | Low        | High        | Role-based behavior constraints    |

System Prompts vs. User Prompts

The distinction between system and user prompts is critical in production. The system prompt defines persistent behavior — role, constraints, output format. The user prompt contains the variable, per-request input.

async function classifyText(text: string): Promise<string> {
  const response = await client.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 100,
    system: `You are a text classifier. Respond with exactly one label from:
      [positive, negative, neutral]. No explanation, no punctuation.`,
    messages: [
      { role: "user", content: text }
    ],
  });

  // Content blocks are a union type; narrow to a text block before reading .text.
  const block = response.content[0];
  return block.type === "text" ? block.text.trim() : "";
}

This separation keeps your system prompt stable across deployments while the user prompt changes per request. It also simplifies testing — you can validate the system prompt independently from the input data.

Structured Output with JSON Mode

For production systems, free-text responses are unreliable. Force structured output by specifying a schema in your prompt and using the model’s JSON mode when available.

const structuredPrompt = `Analyze the following error log and return a JSON object matching this schema:

{
  "error_type": "string (one of: runtime, network, validation, auth)",
  "severity": "string (one of: low, medium, high, critical)",
  "root_cause": "string (one sentence)",
  "suggested_fix": "string (one sentence)"
}

Error log:
${errorLog}`;

Always validate structured outputs against your schema before using them in application logic. Models occasionally produce malformed JSON or add unexpected fields.
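
A minimal validation sketch for the schema above. The field names match the prompt's schema; the null-on-failure convention is an illustrative choice that lets callers trigger fallback handling instead of crashing.

```typescript
// Shape matching the schema embedded in the prompt above.
interface ErrorAnalysis {
  error_type: "runtime" | "network" | "validation" | "auth";
  severity: "low" | "medium" | "high" | "critical";
  root_cause: string;
  suggested_fix: string;
}

const ERROR_TYPES = ["runtime", "network", "validation", "auth"];
const SEVERITIES = ["low", "medium", "high", "critical"];

// Parse a raw model response and validate every field; return null on
// malformed JSON, bad enum values, or missing fields.
function parseErrorAnalysis(raw: string): ErrorAnalysis | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // malformed JSON
  }
  if (typeof data !== "object" || data === null) return null;
  const obj = data as Record<string, unknown>;
  if (!ERROR_TYPES.includes(obj.error_type as string)) return null;
  if (!SEVERITIES.includes(obj.severity as string)) return null;
  if (typeof obj.root_cause !== "string") return null;
  if (typeof obj.suggested_fix !== "string") return null;
  return obj as unknown as ErrorAnalysis;
}
```

In a larger codebase, a schema library (Zod, for example) can replace the hand-written checks, but the principle is the same: nothing from the model enters application logic unvalidated.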

Defensive Prompting Patterns

Production prompts need guardrails. Without them, edge cases in user input can produce unexpected or harmful outputs.

Key defensive patterns:

  • Input validation — sanitize and truncate user input before including it in prompts
  • Output validation — parse and validate model responses against expected schemas
  • Fallback handling — define behavior when the model returns unexpected output
  • Rate limiting — prevent abuse by throttling requests per user
  • Content filtering — check both inputs and outputs for policy violations
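
The input-validation and fallback-handling patterns above can be sketched as small helpers. `MAX_INPUT_CHARS` and the `"neutral"` fallback label are illustrative choices; tune both to your task.

```typescript
// Illustrative limit; choose based on your model's context window and cost budget.
const MAX_INPUT_CHARS = 4000;

// Strip control characters (keeping tabs and newlines) and truncate
// oversized input before it ever reaches a prompt.
function sanitizeInput(text: string): string {
  return text
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, "")
    .slice(0, MAX_INPUT_CHARS);
}

// Allowed labels for the sentiment classifier from the earlier example.
const ALLOWED_LABELS = ["positive", "negative", "neutral"];

// Normalize the model's output and fall back to a safe default when it
// returns anything outside the allowed set.
function withFallback(modelOutput: string): string {
  const label = modelOutput.trim().toLowerCase();
  return ALLOWED_LABELS.includes(label) ? label : "neutral";
}
```

Rate limiting and content filtering live outside the prompt layer (middleware and moderation endpoints, respectively), so they are omitted from this sketch.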

FAQ

What is prompt engineering?

Prompt engineering is the practice of designing and optimizing input instructions to get consistent, reliable outputs from large language models. It involves structuring prompts with clear instructions, constraints, and examples to minimize variance and maximize accuracy in model responses.

What is few-shot prompting?

Few-shot prompting provides the model with a small number of input-output examples before the actual query, helping it understand the expected format and behavior. Typically 2–5 examples are sufficient to establish the pattern without consuming excessive tokens.

How do system prompts differ from user prompts?

System prompts set the model’s role, constraints, and behavior globally, while user prompts contain the specific task or query for each interaction. In production, system prompts remain constant across requests and define the overall behavior of your AI feature, while user prompts vary with each individual request.
