Introduction
Your AI coding assistant is only as good as the instructions you give it, yet most developers treat prompts like casual conversations instead of precision tools.
As AI pair programming becomes standard practice, the ability to communicate effectively with language models is emerging as a critical developer skill. Prompt engineering isn't just about getting answers; it's about getting the right answers consistently, efficiently, and reproducibly.
This tutorial will transform your Copilot interactions from hit-or-miss requests into systematic, high-quality AI collaboration.
Why This Matters
- AI coding assistants can 10x productivity when used correctly
- Poor prompts waste time with iterations and corrections
- Structured prompting creates reusable patterns for teams
- Understanding prompt engineering prepares you for agentic AI workflows
What You'll Learn
- The anatomy of an effective prompt
- Core prompting patterns: RCTF, few-shot, chain-of-thought
- VS Code Copilot configuration for project context
- Building a reusable prompt template library
- Iterating prompts with the PDCA improvement cycle
Before We Begin
- Required: VS Code with GitHub Copilot extension installed
- Required: Active GitHub Copilot subscription
- Recommended: A project you're actively working on for practice
- Helpful: Familiarity with Markdown syntax
Section 1: Understanding Prompt Engineering Fundamentals
Key Concepts
What is a Prompt?
A prompt is the input instruction you provide to an AI model. It combines context, task description, and output requirements, analogous to writing precise function specifications.
Why Structure Matters
The difference between vague and structured prompts is dramatic:
Vague: "Help me code"
Precise: "Generate a Python function that validates email addresses using regex, handles edge cases (empty, special chars), returns tuple(bool, str), includes docstring"
Code Example: Basic vs. Structured Prompt
❌ Unstructured Prompt:
Write a function to validate email
✅ Structured Prompt:
[ROLE] You are a senior Python developer specializing in input validation.
[CONTEXT] Building a user registration API that needs robust email validation.
[TASK] Write a Python function that:
- Validates email format using regex
- Handles edge cases: empty string, missing @, invalid domain
- Returns tuple: (is_valid: bool, error_message: str | None)
[CONSTRAINTS]
- Python 3.10+ with type hints
- No external libraries
- Include docstring with examples
- Maximum 25 lines
[FORMAT] Provide the function, then 3 test cases showing usage.
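To make the difference concrete, here is a minimal sketch of the kind of function the structured prompt above might produce. The regex and error messages are illustrative assumptions, not actual Copilot output:

```python
import re


def validate_email(email: str) -> tuple[bool, str | None]:
    """Validate an email address format.

    >>> validate_email("user@example.com")
    (True, None)
    >>> validate_email("")
    (False, 'Email must not be empty')
    >>> validate_email("no-at-sign.com")
    (False, 'Email must contain exactly one @')
    """
    if not email:
        return False, "Email must not be empty"
    if email.count("@") != 1:
        return False, "Email must contain exactly one @"
    # Simple pattern: local part, @, domain with at least one dot
    if not re.match(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$", email):
        return False, "Email format is invalid"
    return True, None
```

Notice how every requirement in the structured prompt (edge cases, return type, docstring, length limit) maps to something you can check in the output.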
Hands-On Exercise 1: Transform a Vague Prompt
Objective: Practice converting unstructured requests into RCTF format
Challenge: Take this vague prompt and rewrite it using the RCTF pattern:
"Make a script that organizes my files"
Success Criteria:
- Role defined (what expertise is needed)
- Context provided (what's the situation)
- Task specified with 3+ specific requirements
- Constraints listed (language, limitations)
- Output format defined
Section 2: Core Prompting Patterns
Pattern 1: RCTF (Role-Context-Task-Format)
The foundational pattern for most prompts:
[ROLE]
You are a [specific expert with relevant experience].
[CONTEXT]
The user is working on [situation/project].
Current state: [what exists now]
Goal: [what we're trying to achieve]
[TASK]
Your task is to [specific, actionable request].
Requirements:
1. [Requirement 1]
2. [Requirement 2]
3. [Requirement 3]
[CONSTRAINTS]
- [Technical constraint]
- [Quality constraint]
- [Scope constraint]
[FORMAT]
Structure your response as:
1. [Section 1]
2. [Section 2]
3. [Section 3]
Pattern 2: Few-Shot Prompting
Provide examples to establish the pattern:
Convert function names to descriptive comments:
Example 1:
Input: getUserById
Output: // Retrieves a user record from the database using their unique identifier
Example 2:
Input: validateEmail
Output: // Validates that a string conforms to standard email address format
Example 3:
Input: calculateTotalPrice
Output: // Computes the total price including taxes and applicable discounts
Now convert:
Input: processPaymentQueue
Output:
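If you ever assemble prompts programmatically, for instance when scripting against a model API rather than typing into Copilot Chat, the few-shot structure is easy to generate from a list of example pairs. The helper below is a hypothetical sketch, not part of any Copilot API:

```python
def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          query: str) -> str:
    """Assemble a few-shot prompt from (input, output) example pairs."""
    parts = [instruction, ""]
    for i, (example_input, example_output) in enumerate(examples, start=1):
        parts += [f"Example {i}:",
                  f"Input: {example_input}",
                  f"Output: {example_output}",
                  ""]
    parts += ["Now convert:", f"Input: {query}", "Output:"]
    return "\n".join(parts)


print(build_few_shot_prompt(
    "Convert function names to descriptive comments:",
    [("getUserById", "// Retrieves a user record from the database using their unique identifier"),
     ("validateEmail", "// Validates that a string conforms to standard email address format")],
    "processPaymentQueue",
))
```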
When to Use Few-Shot:
- Custom output formats
- Domain-specific patterns
- Consistent style across outputs
- Complex transformations
Pattern 3: Chain-of-Thought (CoT)
Force step-by-step reasoning for complex problems:
Problem: Design a caching strategy for a real-time dashboard.
Think through this step-by-step:
1. First, identify what data changes frequently vs. infrequently
2. Then, analyze the read/write patterns
3. Next, evaluate cache invalidation strategies
4. Finally, propose the architecture with trade-offs
For each step, explain your reasoning before moving to the next.
When to Use CoT:
- Multi-step logic problems
- Debugging complex issues
- Architecture decisions
- Code review analysis
Code Example: Combined Patterns
[ROLE] You are a DevOps engineer specializing in CI/CD pipelines.
[CONTEXT] Migrating a monorepo from Jenkins to GitHub Actions.
The repo has 3 services: API (Node.js), Web (React), Worker (Python).
[TASK] Design the GitHub Actions workflow structure.
Think step-by-step:
1. Analyze which jobs can run in parallel
2. Identify shared dependencies and caching opportunities
3. Design the job dependency graph
4. Propose the workflow file structure
[FORMAT]
1. Analysis of parallelization opportunities
2. Mermaid diagram of job dependencies
3. YAML snippet for the main workflow
4. Caching strategy summary table
Hands-On Exercise 2: Apply Prompting Patterns
Objective: Practice selecting and applying the right pattern
Challenge: Choose the appropriate pattern and write a prompt for:
"I need to refactor a 500-line function into smaller units"
Success Criteria:
- Pattern selection justified (why this pattern?)
- Complete prompt using chosen pattern
- Expected output structure defined
Section 3: VS Code Copilot Configuration
Project-Level Instructions
Create .github/copilot-instructions.md to give Copilot persistent context:
# Project Copilot Instructions
## Code Style
- Use TypeScript with strict mode enabled
- Follow functional programming patterns where appropriate
- All functions must have JSDoc comments
- Maximum function length: 30 lines
## Architecture
- Services: `src/services/` - Business logic
- Components: `src/components/` - React components
- Utils: `src/utils/` - Pure helper functions
- Types: `src/types/` - TypeScript interfaces
## Testing
- Framework: Jest + React Testing Library
- Coverage target: 80%
- Test file naming: `*.test.ts` or `*.spec.ts`
## Security
- Never hardcode credentials or API keys
- Validate all user inputs
- Use parameterized queries for database operations
## Dependencies
- Prefer standard library over external packages
- Document why any new dependency is needed
Workspace Agents and References
Using @workspace for codebase context:
@workspace How is authentication handled in this project?
Using #file for specific file context:
#file:src/auth/login.ts Review this for security vulnerabilities
Using #selection for highlighted code:
#selection Refactor this to use async/await instead of callbacks
Code Example: Copilot Instructions File
<!-- .github/copilot-instructions.md -->
# IT-Journey Project Instructions
## Core Principles
When generating code for this project:
- Apply DRY (Don't Repeat Yourself)
- Design for Failure (DFF) - include error handling
- Keep It Simple (KIS) - prefer clarity over cleverness
## Jekyll Context
- Site generator: Jekyll 3.9.5
- Template language: Liquid
- Content format: Markdown with YAML frontmatter
- Collections: _posts, _quests, _docs
## Content Standards
- All posts require complete frontmatter (see posts.instructions.md)
- Use fantasy/RPG theming for quest content
- Include multi-platform instructions where applicable
## File Organization
- Posts: `pages/_posts/YYYY-MM-DD-title.md`
- Quests: `pages/_quests/lvl_XXX/quest-name/index.md`
- Prompts: `.github/prompts/name.prompt.md`
Hands-On Exercise 3: Configure Your Project
Objective: Create project-specific Copilot instructions
Challenge: Write a .github/copilot-instructions.md for your current project
Success Criteria:
- Code style section with 3+ rules
- Architecture section with file organization
- Testing section with framework and patterns
- At least one project-specific convention
Section 4: Building Reusable Prompt Templates
The .github/prompts/ Pattern
Create reusable prompts with variables:
---
name: "code-review"
description: "Structured code review prompt"
version: "1.0.0"
inputs:
- focus_area
- severity_threshold
---
# Code Review: {{focus_area}}
Review the provided code focusing on {{focus_area}}.
## Review Criteria
### Security
- [ ] Input validation present
- [ ] No hardcoded credentials
- [ ] Proper authentication checks
### Performance
- [ ] No unnecessary loops or iterations
- [ ] Appropriate data structures used
- [ ] Caching considered where applicable
### Maintainability
- [ ] Clear naming conventions
- [ ] Adequate documentation
- [ ] DRY principle followed
## Output Format
For each issue found:
- **Severity**: 🔴 Critical | 🟡 Warning | 🟢 Suggestion
- **Location**: File and line number
- **Issue**: Description of the problem
- **Fix**: Recommended solution with code example
Only report issues at {{severity_threshold}} level or higher.
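One way to use such a template is to substitute the declared inputs into the body before pasting the result into Copilot Chat. The `{{variable}}` placeholder syntax and the `render_prompt` helper below are assumptions for illustration, not an official prompt-file API:

```python
import re
from pathlib import Path


def render_prompt(template_path: str, **inputs: str) -> str:
    """Fill {{variable}} placeholders in a prompt template with concrete values."""
    body = Path(template_path).read_text(encoding="utf-8")
    # Drop the YAML frontmatter block (between the first pair of '---' lines)
    body = re.sub(r"\A---\n.*?\n---\n", "", body, flags=re.DOTALL)
    for name, value in inputs.items():
        body = body.replace("{{" + name + "}}", value)
    return body


# Hypothetical usage:
# prompt = render_prompt(".github/prompts/code-review.prompt.md",
#                        focus_area="security", severity_threshold="Warning")
```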
Template Library Structure
.github/prompts/
├── README.md                  # Catalog and usage guide
├── code-review.prompt.md      # Code review template
├── generate-tests.prompt.md   # Test generation template
├── refactor.prompt.md         # Refactoring assistant
├── document.prompt.md         # Documentation generator
├── debug.prompt.md            # Debugging assistant
└── explain.prompt.md          # Code explanation template
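A small script can keep the README catalog in sync with the templates by reading each file's frontmatter. This sketch assumes the simple `name`/`description` frontmatter format shown above:

```python
import re
from pathlib import Path


def catalog_prompts(prompts_dir: str = ".github/prompts") -> str:
    """Build a Markdown table of prompt templates from their frontmatter descriptions."""
    rows = ["| Template | Description |", "|---|---|"]
    for path in sorted(Path(prompts_dir).glob("*.prompt.md")):
        text = path.read_text(encoding="utf-8")
        match = re.search(r'^description:\s*"?(.*?)"?\s*$', text, flags=re.MULTILINE)
        rows.append(f"| {path.name} | {match.group(1) if match else ''} |")
    return "\n".join(rows)


# print(catalog_prompts())  # paste the output into .github/prompts/README.md
```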
Code Example: Debug Prompt Template
---
name: "debug-assistant"
description: "Systematic debugging prompt for code issues"
version: "1.0.0"
inputs:
- language
- error_type
---
# Debug Assistant: {{error_type}}
[ROLE] You are an expert debugger specializing in {{language}} issues.
[CONTEXT]
The user is experiencing a {{error_type}} in their code.
[TASK]
Analyze the provided code and error, then:
1. Identify the root cause
2. Explain why this error occurs
3. Provide a fix with explanation
4. Suggest prevention strategies
[FORMAT]
## Analysis
[Step-by-step breakdown of the issue]
## Root Cause
[Specific cause of the error]
## Solution
[Fixed code with comments]
## Prevention
[How to avoid this in the future]
Hands-On Exercise 4: Create a Prompt Template
Objective: Build a reusable prompt for your common tasks
Challenge: Create a prompt template for one of these scenarios:
- API endpoint documentation generator
- Unit test generation for functions
- Git commit message writer
- Code explanation for onboarding
Success Criteria:
- Valid frontmatter with name, description, version
- At least 2 input variables defined
- RCTF structure in prompt body
- Clear output format specified
Section 5: Iterating with PDCA
The Prompt Development Cycle
PLAN (define success criteria) → DO (write the prompt) → CHECK (measure output quality) → ACT (refine or promote to a template) → back to PLAN
Quality Scoring Framework
Rate each prompt output (0-10):
| Criterion | Description | Score |
|---|---|---|
| Correctness | Output works as intended | /10 |
| Completeness | All requirements addressed | /10 |
| Format | Follows requested structure | /10 |
| Efficiency | No unnecessary content | /10 |
Target: Average 8+ before templating
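A minimal sketch of that check, assuming you record the four criteria as a plain dictionary:

```python
def ready_to_template(scores: dict[str, int], threshold: float = 8.0) -> bool:
    """Return True when the average quality score meets the templating bar."""
    return sum(scores.values()) / len(scores) >= threshold


print(ready_to_template({"correctness": 9, "completeness": 8,
                         "format": 9, "efficiency": 7}))  # True (average is 8.25)
```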
Iteration Log Template
## Prompt Iteration Log: [Task Name]
### Version 1 (Baseline)
**Prompt**: [Original prompt]
**Score**: 4/10
**Issues**:
- Too vague, got minimal output
- No error handling included
- Missing type hints
### Version 2 (Added Structure)
**Changes**: Added RCTF pattern, specified constraints
**Score**: 7/10
**Issues**:
- Better structure, but missing edge cases
- No examples in docstring
### Version 3 (Added Examples)
**Changes**: Added few-shot examples for edge cases
**Score**: 9/10
**Decision**: ✅ Template this version
Code Example: Iteration in Action
Version 1 (Score: 3/10):
Write a function to parse dates
Version 2 (Score: 6/10):
Write a Python function that parses date strings into datetime objects.
Handle multiple formats. Include error handling.
Version 3 (Score: 9/10):
[ROLE] You are a Python developer specializing in date/time handling.
[TASK] Write a function that parses date strings into datetime objects.
Requirements:
1. Support formats: ISO 8601, US (MM/DD/YYYY), EU (DD/MM/YYYY)
2. Auto-detect format when possible
3. Return None for unparseable strings (don't raise exceptions)
4. Include type hints and docstring
[EXAMPLES]
Input: "2025-11-26" โ datetime(2025, 11, 26)
Input: "11/26/2025" โ datetime(2025, 11, 26) # US format
Input: "invalid" โ None
[CONSTRAINTS]
- Use standard library only (datetime, re)
- Maximum 30 lines
- Include 3 test cases in docstring
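For reference, here is a sketch of the kind of function the Version 3 prompt might yield. The format-detection order is an illustrative assumption: ambiguous dates such as 01/02/2025 resolve as US format because that pattern is tried first:

```python
from datetime import datetime


def parse_date(value: str) -> datetime | None:
    """Parse ISO 8601, US (MM/DD/YYYY), or EU (DD/MM/YYYY) date strings.

    >>> parse_date("2025-11-26")
    datetime.datetime(2025, 11, 26, 0, 0)
    >>> parse_date("11/26/2025")
    datetime.datetime(2025, 11, 26, 0, 0)
    >>> parse_date("invalid") is None
    True
    """
    # Try formats in order of preference; US is checked before EU for ambiguous inputs.
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d/%m/%Y"):
        try:
            return datetime.strptime(value.strip(), fmt)
        except ValueError:
            continue
    return None
```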
Hands-On Exercise 5: PDCA Iteration Practice
Objective: Experience the improvement cycle firsthand
Challenge:
- Start with this vague prompt: "Help me write better code"
- Iterate 3 times, scoring each version
- Document what changed and why
Success Criteria:
- 3 versions documented with scores
- Each iteration addresses specific issues
- Final version scores 8+ on quality criteria
- Changes justified with reasoning
Platform-Specific Guidance
macOS
# Install VS Code Copilot extension via CLI
code --install-extension GitHub.copilot
code --install-extension GitHub.copilot-chat
# Verify installation
code --list-extensions | grep -i copilot
Windows (PowerShell)
# Install VS Code Copilot extension via CLI
code --install-extension GitHub.copilot
code --install-extension GitHub.copilot-chat
# Verify installation
code --list-extensions | Select-String "copilot"
Linux
# Install VS Code Copilot extension via CLI
code --install-extension GitHub.copilot
code --install-extension GitHub.copilot-chat
# Verify installation
code --list-extensions | grep -i copilot
Knowledge Validation
Self-Assessment
Before completing, verify you can:
- Explain the difference between zero-shot and few-shot prompting
- Write a prompt using the RCTF pattern from memory
- Describe when to use chain-of-thought prompting
- Create a `.github/copilot-instructions.md` file
- Design a reusable prompt template with variables
- Apply PDCA to improve a poorly-performing prompt
Practice Exercises
- Beginner: Transform 3 vague prompts into RCTF format
- Intermediate: Create a prompt template library with 3 templates for your project
- Advanced: Build a complete `.github/prompts/` directory with a README catalog
Troubleshooting Guide
Issue 1: Copilot Ignores Project Instructions
Symptoms: Suggestions don't follow `.github/copilot-instructions.md`
Solution:
- Verify file location: must be `.github/copilot-instructions.md` (not `.github/copilot/`)
- Check file syntax: valid Markdown without YAML frontmatter
- Reload VS Code window: `Cmd/Ctrl + Shift + P` → "Reload Window"
Prevention: Test instructions with explicit @workspace queries
Issue 2: Inconsistent Output Quality
Symptoms: Same prompt produces varying quality results
Solution:
- Add more specific constraints
- Include examples (few-shot)
- Specify output format explicitly
- Add a verification step: "Before responding, verify your answer addresses X, Y, Z"
Prevention: Use templates with tested, consistent prompts
Issue 3: Outputs Too Verbose or Too Brief
Symptoms: Response length doesn't match needs
Solution:
- Too verbose: Add "Be concise" or "Maximum X lines"
- Too brief: Add "Provide detailed explanation" or "Include examples"
Prevention: Always specify output length expectations in prompt
Next Steps
Key Takeaways
- Prompts are code: version control, test, and iterate on them
- Structure beats length: the RCTF pattern creates consistency
- Context is power: project instructions amplify every prompt
- Patterns are reusable: build a template library over time
- Measure before templating: only save prompts that score 8+
Further Learning
- IT-Journey Quest: AI-Assisted Development Fundamentals
- Reference: prompts.instructions.md - Full Kaizen prompt engineering guide
- External: Prompt Engineering Guide - Community patterns
- Documentation: GitHub Copilot Docs
Project Ideas
- Beginner: Create 5 prompt templates for common coding tasks
- Intermediate: Build a team prompt library with usage documentation
- Advanced: Design an agent prompt for multi-step workflow automation
Resources and References
Essential Documentation
| Resource | Description |
|---|---|
| GitHub Copilot Docs | Official documentation |
| VS Code Copilot Extension | Extension marketplace page |
| Copilot Chat Extension | Chat interface extension |
Learning Resources
| Resource | Type | Description |
|---|---|---|
| Prompt Engineering Guide | Guide | Community-maintained patterns |
| Learn Prompting | Course | Free structured curriculum |
| OpenAI Prompt Engineering | Docs | Official OpenAI guidance |
IT-Journey Resources
| Resource | Description |
|---|---|
| `prompts.instructions.md` | Kaizen-integrated prompt engineering guide |
| `posts.instructions.md` | Post creation standards |
| `.github/prompts/` | Example prompt templates |
This article was created following IT-Journey's post standards and Kaizen continuous improvement principles. Found an issue or have a suggestion? Open an issue or contribute directly!