Want your AI coding assistant to behave like a reliable pair programmer? These proven prompting tactics—adapted from OpenAI’s Codex best practices—work just as well with today’s models.
Prompting best practices that work
- Start with a spec: State language, framework, versions, target OS, inputs/outputs, constraints (performance, memory), and acceptance criteria.
- Show the format: Provide an example function signature, docstring, and a tiny I/O example so the model mirrors your style and structure (see the sketch after this list).
- Paste the context: Include relevant snippets, file paths, schema, and API endpoints. Point to what can change and what must not.
- Think, then code: Ask the model to outline a plan or algorithm first, then generate the implementation.
- Use few-shot cues: Provide 1–2 short examples (tests, small transforms) and ask it to continue the pattern.
- Constrain the output: Specify naming conventions, docstring style, error handling, and limits such as maximum length or which functions it may touch.
- Test-driven loop: Have it write or update unit tests, run them, paste failures, and ask for a minimal fix—repeat until green.
- Be explicit with errors: Share stack traces and exact messages; ask for root cause, fix, and a brief justification.
- Lock in decisions: Summarize the final approach and constraints in each new thread so context isn’t lost.
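For example, the "show the format" and few-shot tips can be as simple as pasting a small skeleton next to your spec. The sketch below is purely illustrative; `slugify`, its signature, and the test are hypothetical placeholders, not something taken from the Codex guide.

```python
# Skeleton pasted into the prompt so the model mirrors this exact shape.
# All names and behavior here are illustrative placeholders.

def slugify(title: str, max_length: int = 60) -> str:
    """Convert a post title into a URL-safe slug, truncated to max_length.

    Example:
        >>> slugify("Hello, World! A Post")
        'hello-world-a-post'
    """
    ...  # left for the model to implement


# Few-shot cue: one tiny test in the style you want; ask the model to
# continue the pattern for the edge cases listed in your spec.
def test_slugify_basic():
    assert slugify("Hello, World! A Post") == "hello-world-a-post"
```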
Mini prompt template
- Goal: Implement [function] that does [task] given [inputs] and returns [outputs].
- Environment: [language/version], [framework], [DB/service], target OS: [x].
- Constraints: O([n]) time, memory under [x], handle [edge cases], follow [style guide].
- Context: Here are relevant files/snippets: [paths + code]. Do not modify [files].
- Format: Provide function-only code with docstring and unit tests for [cases] (see the example after this template).
- Process: First outline the approach in bullets. After I confirm, produce code.
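Filled in for, say, a small date-parsing helper, the template's "Format" line asks for something like the sketch below. The function, its behavior, and the tests are hypothetical, chosen only to show the function-plus-docstring-plus-tests shape; pasting any failing test output back verbatim then runs the test-driven loop described above.

```python
from datetime import date

import pytest


def parse_iso_date(value: str) -> date:
    """Parse a 'YYYY-MM-DD' string into a datetime.date.

    Raises:
        ValueError: if the string is not a valid calendar date.
    """
    year, month, day = (int(part) for part in value.split("-"))
    return date(year, month, day)


def test_parse_iso_date_valid():
    assert parse_iso_date("2024-02-29") == date(2024, 2, 29)


def test_parse_iso_date_rejects_bad_month():
    with pytest.raises(ValueError):
        parse_iso_date("2024-13-01")
```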
Pro tips
- Prefer small, iterative prompts over one giant ask; it reduces hallucinations and makes diffs manageable.
- Keep a prompt library for common tasks (parsing, REST client, SQL query); reuse and tweak per project.
- Sanitize secrets; never paste real tokens or customer data. Use mock values in examples.
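As a small illustration of the last tip, a snippet pasted into a prompt can swap real credentials for obvious placeholders. The variable names below are made up for the example.

```python
import os

# Prompt-safe placeholders: obviously fake values that still show the shape
# of the real configuration.
API_KEY = "sk-FAKE-EXAMPLE-KEY"  # mock token, safe to paste into a prompt
DATABASE_URL = "postgresql://app:CHANGE_ME@localhost:5432/demo_db"  # mock DSN

# In the actual project, read the real values from the environment instead
# of hard-coding them anywhere.
real_api_key = os.environ.get("SERVICE_API_KEY")
```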
Source
OpenAI Codex best practices: developers.openai.com/codex/learn/best-practices. Core ideas—clear specs, examples, constraints, and iterative feedback—generalize to modern LLMs.
Takeaway
Treat the model like a junior dev: give a crisp spec, show the pattern, enforce constraints, and iterate with tests. You’ll ship cleaner code, faster.
Get weekly, practical AI tips in your inbox—subscribe to our newsletter: theainuggets.com/newsletter.

