Testing Frameworks

This document provides an overview of the testing infrastructure in the IT-Journey repository.

Overview

The IT-Journey repository includes comprehensive testing frameworks to ensure content quality, link health, and structural integrity. All testing code is located in the test/ directory.

Test Directory Structure

test/
├── hyperlink-guardian/       # Link validation and health monitoring
│   ├── docs/                # Guardian documentation
│   ├── config.yml           # Configuration file
│   ├── validator.py         # Core validation logic
│   └── README.md
├── quest-validator/          # Quest content structure validation
│   ├── docs/                # Validator documentation
│   ├── validator.py         # Quest validation logic
│   ├── schemas/             # JSON schemas for validation
│   └── README.md
├── test-results/             # Test output artifacts
└── README.md                 # Testing overview

Hyperlink Guardian

Purpose

The Hyperlink Guardian is a comprehensive link validation system that proactively monitors link health across the IT-Journey website.

Location

test/hyperlink-guardian/ (see the directory structure above); the command-line entry point is scripts/link-checker.py.

Features

Core Capabilities:

Analysis Levels:

Usage

Command Line:

# Basic website check
python3 scripts/link-checker.py --scope website

# Comprehensive analysis with AI
python3 scripts/link-checker.py \
  --scope website \
  --analysis-level comprehensive \
  --timeout 30 \
  --output-dir link-check-results

# Create GitHub issue with results
python3 scripts/link-checker.py \
  --scope website \
  --create-issue \
  --repository bamr87/it-journey

GitHub Actions:

# Manual dispatch via Actions UI
Actions > Link Health Guardian > Run workflow
# Configure options in UI

Configuration

Scope Options:

Timeout Settings:

Output Files

link-check-results/
├── lychee_results.json       # Raw link checker output
├── link_analysis.json        # Categorized failures
├── ai_analysis.md            # AI insights (if enabled)
├── github_issue.md           # Issue content
├── statistics.env            # Key metrics
└── issue_url.txt             # Created issue URL
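The statistics.env file is a set of KEY=VALUE lines that downstream workflow steps can source. A minimal sketch of loading those metrics in Python, assuming that format (the metric names shown are hypothetical, not the Guardian's actual keys):

```python
def parse_statistics_env(text: str) -> dict:
    """Parse KEY=VALUE lines (the assumed statistics.env format) into a dict."""
    stats = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and malformed lines
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        stats[key.strip()] = value.strip().strip('"')
    return stats

# Hypothetical metric names for illustration -- the real keys may differ:
sample = 'TOTAL_LINKS=1240\nBROKEN_LINKS=7\nHEALTH_STATUS="degraded"\n'
print(parse_statistics_env(sample))
```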

AI Analysis

When enabled (--ai-analysis, the default), the Guardian uses OpenAI GPT-4 to generate insights about the link failures, written to ai_analysis.md.

Requirements:

Cost: approximately $0.01-0.10 per analysis, depending on result size
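The exact prompt the Guardian sends is internal to validator.py. Purely as an illustration, a summarizer like the following could condense the categorized failures from link_analysis.json into a compact prompt before calling the GPT-4 API (the function name and the JSON structure assumed here are hypothetical):

```python
import json

def build_analysis_prompt(link_analysis_json: str, max_examples: int = 5) -> str:
    """Condense categorized link failures into a short prompt for an LLM.

    Assumes link_analysis.json maps failure category -> list of URLs;
    the real schema may differ.
    """
    categories = json.loads(link_analysis_json)
    lines = ["Analyze these broken links and suggest likely causes and fixes:"]
    for category, urls in categories.items():
        lines.append(f"\n{category} ({len(urls)} total):")
        # Cap examples per category to keep token usage (and cost) low
        lines.extend(f"  - {u}" for u in urls[:max_examples])
    return "\n".join(lines)

sample = json.dumps({"404_not_found": ["https://example.com/gone"],
                     "timeout": ["https://slow.example.org/page"]})
print(build_analysis_prompt(sample))
```

Truncating the example URLs per category is what keeps the per-analysis cost in the cents range noted above.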

Integration

CI/CD Pipeline:

# Scheduled runs
- Monday 6 AM UTC (weekly comprehensive)
- Friday 6 PM UTC (end-of-week validation)

# Outputs
- GitHub issues for broken links
- Workflow artifacts (30-day retention)
- Workflow summaries with health status

Documentation

Comprehensive documentation is available in each framework's docs/ directory (test/hyperlink-guardian/docs/ and test/quest-validator/docs/) and README.md.

Troubleshooting

Common Issues:

Issue: High false positive rate

# Solution: Increase timeout
python3 scripts/link-checker.py --scope website --timeout 45

Issue: AI analysis not working

# Solution: Check API key
echo $OPENAI_API_KEY
# Set if missing
export OPENAI_API_KEY="your-key-here"

Issue: Rate limiting errors

# Solution: Add delays between requests (automatic in script)
# Or check .lycheeignore file for patterns
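lychee reads exclusion patterns, one regular expression per line, from a .lycheeignore file at the repository root. A hypothetical example — these patterns are illustrative, not the repository's actual ignore list:

```
# Hosts that aggressively rate-limit automated checkers
^https://twitter\.com/
^https://www\.linkedin\.com/
# Localhost links used in tutorials
^https?://localhost
```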

Quest Validator

Purpose

The Quest Validator ensures quest content follows structural requirements and maintains quality standards.

Location

test/quest-validator/

Features

Validation Checks:

Quest Requirements:

# Required frontmatter fields
title: "Quest Title"
date: YYYY-MM-DDTHH:MM:SS.sssZ
level: "0101"                    # Binary format
difficulty: "intermediate"        # beginner|intermediate|advanced|expert
quest_type: "automation"          # Quest category
xp: 500                          # Experience points
achievements: [...]              # List of achievements
prerequisites: [...]             # List of requirements
estimated_time: "2-3 hours"      # Completion time
platforms: [...]                 # Supported OS
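The required-field check above can be sketched in a few lines of Python. This is a simplified, line-based parse for illustration only; the real validator presumably uses PyYAML and the JSON schemas instead:

```python
REQUIRED_FIELDS = ["title", "date", "level", "difficulty", "quest_type",
                   "xp", "achievements", "prerequisites", "estimated_time",
                   "platforms"]

def missing_frontmatter_fields(markdown_text: str) -> list:
    """Return required fields absent from a quest file's YAML frontmatter."""
    lines = markdown_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return list(REQUIRED_FIELDS)  # no frontmatter block at all
    present = set()
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the frontmatter
            break
        # Collect top-level keys; skip indented values and list items
        if ":" in line and not line.startswith((" ", "\t", "-")):
            present.add(line.split(":", 1)[0].strip())
    return [f for f in REQUIRED_FIELDS if f not in present]

sample = '---\ntitle: "Quest Title"\nxp: 500\n---\n# Body\n'
print(missing_frontmatter_fields(sample))
```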

Usage

Command Line:

# Validate single quest
python3 test/quest-validator/validator.py \
  pages/_quests/my-quest.md

# Validate all quests
python3 test/quest-validator/validator.py \
  pages/_quests/

# Verbose output
python3 test/quest-validator/validator.py \
  pages/_quests/ \
  --verbose

Output

Validation Report:

Quest Validation Report
=======================

File: pages/_quests/link-guardian-quest.md
Status: ✅ PASSED

Checks:
  ✅ Frontmatter valid
  ✅ Required fields present
  ✅ Quest-specific fields valid
  ✅ Content structure valid
  ✅ Code blocks syntax valid
  ⚠️  Missing platform: Windows
  ✅ Achievements defined

Summary: 6 passed, 1 warning, 0 errors

Configuration

Schema Files:

test/quest-validator/schemas/
├── quest-frontmatter.json    # Frontmatter schema
├── quest-structure.json      # Content structure
└── quest-achievements.json   # Achievement definitions
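The jsonschema package from the prerequisites is the standard way to apply such schemas. A minimal sketch with an inline toy schema — the real quest-frontmatter.json is richer, and this stand-in only approximates a few of its fields:

```python
from jsonschema import validate, ValidationError

# Toy stand-in for test/quest-validator/schemas/quest-frontmatter.json
toy_schema = {
    "type": "object",
    "required": ["title", "difficulty", "xp"],
    "properties": {
        "title": {"type": "string"},
        "difficulty": {"enum": ["beginner", "intermediate",
                                "advanced", "expert"]},
        "xp": {"type": "integer", "minimum": 0},
    },
}

frontmatter = {"title": "Quest Title", "difficulty": "intermediate", "xp": 500}
try:
    validate(instance=frontmatter, schema=toy_schema)
    print("frontmatter valid")
except ValidationError as e:
    print(f"invalid: {e.message}")
```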

CI/CD Integration

Quest validation is currently run manually. Planned GitHub Actions integration:

# Future workflow
name: Quest Validation
on: [push, pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python3 test/quest-validator/validator.py pages/_quests/

Running Tests Locally

Prerequisites

# Python 3.11+
python3 --version

# Install dependencies
pip install requests pyyaml jsonschema

# Lychee link checker (for hyperlink guardian)
# macOS
brew install lychee

# Linux
curl -sSL https://github.com/lycheeverse/lychee/releases/latest/download/lychee-x86_64-unknown-linux-gnu.tar.gz | tar -xz
sudo mv lychee /usr/local/bin/
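A quick, stdlib-only sanity check that the prerequisites above are installed — a convenience sketch, not part of the repository's tooling:

```python
import importlib.util
import shutil
import sys

def check_prerequisites() -> dict:
    """Report which testing prerequisites are available locally."""
    status = {
        "python3.11+": sys.version_info >= (3, 11),
        "lychee": shutil.which("lychee") is not None,  # binary on PATH
    }
    # pip-installed packages (note: pyyaml imports as 'yaml')
    for module in ("requests", "yaml", "jsonschema"):
        status[module] = importlib.util.find_spec(module) is not None
    return status

for name, ok in check_prerequisites().items():
    print(f"{'OK ' if ok else 'MISSING'} {name}")
```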

Test Commands

Link Health Check:

# Quick internal link check
python3 scripts/link-checker.py --scope internal --timeout 10

# Comprehensive check with AI
python3 scripts/link-checker.py \
  --scope website \
  --analysis-level comprehensive

Quest Validation:

# Validate all quests
python3 test/quest-validator/validator.py pages/_quests/

# Validate specific quest
python3 test/quest-validator/validator.py \
  pages/_quests/link-guardian-quest.md

Jekyll Build Test:

# Test build
bundle exec jekyll build --verbose

# Check for issues
bundle exec jekyll doctor

Test Artifacts

Storage Location

Test results are stored in test/test-results/ for local runs, and as workflow artifacts (30-day retention) for CI runs.

Artifact Types

Link Checker Artifacts:

Quest Validator Artifacts:

Continuous Integration

Automated Testing

On Every Push:

On Pull Requests:

Scheduled:

Test Results

View in GitHub:

Repository > Actions > Select workflow > View results

Download Artifacts:

Workflow run > Artifacts section > Download zip

Best Practices

Test-Driven Content

  1. Write tests first (for new features)
  2. Test locally before pushing
  3. Review test results in PRs
  4. Fix failures promptly
  5. Keep tests up to date

Performance Considerations

Link Checking:

Quest Validation:

Test Maintenance

Weekly:

Monthly:

Quarterly:

Future Enhancements

Planned Testing Features

Content Testing:

Integration Testing:

Performance Testing:

Additional Resources

Documentation

External Tools


Last Updated: 2025-10-13
Version: 1.0.0