AI Validation Pipeline
Overview¶
The AI Validation Pipeline is a multi-stage validation system that combines traditional linting, testing, and static analysis with AI-powered code review and style checking. It enforces code quality, consistency, and adherence to the style guide before code reaches production.
Key Features¶
- ✅ Multi-Stage Validation: Pre-commit, CI, and post-merge validation stages
- ✅ AI-Powered Review: Automated code review with contextual suggestions
- ✅ Style Enforcement: Automatic detection of style guide violations
- ✅ Security Scanning: Integrated security vulnerability detection
- ✅ Metadata Validation: Ensures documentation frontmatter is complete and accurate
- ✅ Performance Checks: Identifies performance anti-patterns
- ✅ Platform Agnostic: Works with GitHub Actions, GitLab CI, Jenkins, and others
Pipeline Architecture¶
Validation Stages¶
```text
┌─────────────────┐
│   Developer     │
│  Local Machine  │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   Pre-commit    │ ← Stage 1: Local Validation
│     Hooks       │   • Formatting (black, prettier, terraform fmt)
└────────┬────────┘   • Linting (eslint, flake8, shellcheck)
         │            • Security (detect-secrets)
         │            • Quick tests
         ▼
┌─────────────────┐
│    Git Push     │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   CI Pipeline   │ ← Stage 2: Continuous Integration
│    (PR/MR)      │   • Full test suite
└────────┬────────┘   • Static analysis
         │            • Security scanning
         │            • AI code review
         │            • Style validation
         ▼            • Terraform plan
┌─────────────────┐
│  AI Review Bot  │ ← Stage 3: AI Analysis
│                 │   • Style suggestions
└────────┬────────┘   • Anti-pattern detection
         │            • Best practice recommendations
         │            • Documentation review
         ▼
┌─────────────────┐
│  Human Review   │ ← Stage 4: Code Review
│   & Approval    │   • Manual review
└────────┬────────┘   • Approval process
         │
         ▼
┌─────────────────┐
│  Merge to Main  │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   Post-Merge    │ ← Stage 5: Deployment Validation
│   Validation    │   • Build verification
└────────┬────────┘   • Deploy to staging
         │            • Smoke tests
         ▼            • Performance tests
┌─────────────────┐
│   Production    │
│   Deployment    │
└─────────────────┘
```
Stage 1: Pre-commit Hooks¶
Purpose¶
Catch issues before code is committed to the repository, providing instant feedback to developers.
Configuration¶
Create .pre-commit-config.yaml:
```yaml
repos:
  # General file checks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-json
      - id: check-added-large-files
        args: ['--maxkb=500']
      - id: check-merge-conflict
      - id: check-case-conflict
      - id: mixed-line-ending
      - id: detect-private-key

  # Python
  - repo: https://github.com/psf/black
    rev: 23.12.1
    hooks:
      - id: black
        language_version: python3.10
        args: ['--line-length=100']

  - repo: https://github.com/PyCQA/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
        args: ['--max-line-length=100', '--extend-ignore=E203,W503']

  # YAML linting
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.33.0
    hooks:
      - id: yamllint
        args: ['-d', '{extends: default, rules: {line-length: {max: 120}}}']

  # Bash/Shell
  - repo: https://github.com/shellcheck-py/shellcheck-py
    rev: v0.9.0.6
    hooks:
      - id: shellcheck

  # Terraform
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.86.0
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_docs
        args:
          - '--args=--lockfile=false'

  # Markdown
  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.38.0
    hooks:
      - id: markdownlint
        args: ['--fix']

  # Security
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']
```
Installation¶
```shell
# Install pre-commit
pip install pre-commit

# Install hooks
pre-commit install

# Run manually on all files
pre-commit run --all-files

# Update hooks to latest versions
pre-commit autoupdate
```
Pre-commit Best Practices¶
- Run Locally First: Test pre-commit hooks before pushing
- Keep Hooks Fast: Pre-commit should complete in under 30 seconds
- Auto-fix When Possible: Use `--fix` flags for formatters
- Skip When Needed: Use `SKIP=hook_id git commit` for emergencies
Stage 2: CI Pipeline Validation¶
GitHub Actions Example¶
Create .github/workflows/validate.yml:
```yaml
name: Validation Pipeline

on:
  pull_request:
    branches: [main, develop]
  push:
    branches: [main, develop]

jobs:
  pre-validation:
    name: Pre-validation Checks
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'

      - name: Install dependencies
        run: |
          pip install pre-commit
          pre-commit install-hooks

      - name: Run pre-commit on all files
        run: pre-commit run --all-files --show-diff-on-failure

  lint-and-format:
    name: Linting and Formatting
    runs-on: ubuntu-latest
    needs: pre-validation
    timeout-minutes: 10
    strategy:
      matrix:
        language: [python, terraform, bash, yaml]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Lint Python
        if: matrix.language == 'python'
        run: |
          pip install black flake8 mypy pylint
          black --check .
          flake8 .
          mypy . --ignore-missing-imports

      - name: Set up Terraform
        if: matrix.language == 'terraform'
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.6.0

      - name: Lint Terraform
        if: matrix.language == 'terraform'
        run: terraform fmt -check -recursive

      - name: Lint Bash
        if: matrix.language == 'bash'
        run: |
          sudo apt-get install -y shellcheck
          find . -name "*.sh" -exec shellcheck {} +

      - name: Lint YAML
        if: matrix.language == 'yaml'
        run: |
          pip install yamllint
          yamllint .

  security-scan:
    name: Security Scanning
    runs-on: ubuntu-latest
    needs: pre-validation
    timeout-minutes: 15
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@v0.34.1
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'

      - name: Upload Trivy results to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-results.sarif'

      - name: Run Semgrep
        uses: returntocorp/semgrep-action@v1
        with:
          config: >-
            p/security-audit
            p/secrets
            p/ci

      - name: Check for secrets
        run: |
          pip install detect-secrets
          detect-secrets scan --all-files --force-use-all-plugins

  test:
    name: Test Suite
    runs-on: ubuntu-latest
    needs: [lint-and-format, security-scan]
    timeout-minutes: 30
    strategy:
      matrix:
        python-version: ['3.10', '3.11', '3.12']
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'

      - name: Install dependencies
        run: |
          pip install pytest pytest-cov pytest-xdist
          pip install -r requirements.txt

      - name: Run unit tests
        run: pytest tests/unit -v --cov --cov-report=xml

      - name: Run integration tests
        run: pytest tests/integration -v

      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          files: ./coverage.xml
          flags: unittests

  ai-code-review:
    name: AI Code Review
    runs-on: ubuntu-latest
    needs: test
    if: github.event_name == 'pull_request'
    timeout-minutes: 10
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # The reviewer action below is illustrative — substitute the AI review
      # action or script your organization actually uses.
      - name: AI Style Review
        uses: openai/openai-pr-reviewer@v1
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        with:
          model: 'gpt-4'
          review_type: 'style-guide'
          style_guide_url: 'https://tydukes.github.io/coding-style-guide/'

      - name: AI Security Review
        uses: openai/openai-pr-reviewer@v1
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        with:
          model: 'gpt-4'
          review_type: 'security'

  metadata-validation:
    name: Validate Metadata
    runs-on: ubuntu-latest
    needs: pre-validation
    timeout-minutes: 5
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Validate frontmatter
        run: python scripts/validate_metadata.py

      - name: Check documentation completeness
        run: python scripts/check_docs.py

  terraform-plan:
    name: Terraform Plan
    runs-on: ubuntu-latest
    needs: [lint-and-format, security-scan]
    # Note: github.event.pull_request.changed_files is a file *count*, not a
    # list, so it cannot be used to detect .tf changes. Gate this job with an
    # `on.pull_request.paths` filter (e.g. '**.tf') or a paths-filter action.
    if: github.event_name == 'pull_request'
    timeout-minutes: 15
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.6.0

      - name: Terraform Init
        run: terraform init
        working-directory: ./terraform

      - name: Terraform Plan
        id: plan
        run: terraform plan -no-color -out=tfplan
        working-directory: ./terraform

      - name: Post plan to PR
        uses: actions/github-script@v7
        if: github.event_name == 'pull_request'
        with:
          script: |
            const output = `#### Terraform Plan 📖
            \`\`\`terraform
            ${{ steps.plan.outputs.stdout }}
            \`\`\`
            `;
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            });

  summary:
    name: Validation Summary
    runs-on: ubuntu-latest
    needs: [lint-and-format, security-scan, test, metadata-validation]
    if: always()
    timeout-minutes: 5
    steps:
      - name: Check validation results
        run: |
          if [ "${{ needs.lint-and-format.result }}" != "success" ] ||
             [ "${{ needs.security-scan.result }}" != "success" ] ||
             [ "${{ needs.test.result }}" != "success" ] ||
             [ "${{ needs.metadata-validation.result }}" != "success" ]; then
            echo "❌ Validation failed"
            exit 1
          fi
          echo "✅ All validation checks passed"
```
Stage 3: AI Code Review¶
AI Review Bot Configuration¶
The AI Review Bot analyzes code changes and provides contextual feedback. The `openai/openai-pr-reviewer` action shown in the snippets below is illustrative — substitute whichever AI review action or self-hosted script your team uses. Feedback covers:
- Style Adherence: Checks against the style guide
- Anti-Patterns: Identifies common mistakes
- Best Practices: Suggests improvements
- Documentation: Reviews comment quality and completeness
- Security: Detects potential vulnerabilities
- Performance: Identifies inefficient code patterns
Review Types¶
Style Review¶
```yaml
- name: AI Style Review
  uses: openai/openai-pr-reviewer@v1
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
  with:
    model: 'gpt-4'
    review_type: 'style-guide'
    style_guide_url: 'https://tydukes.github.io/coding-style-guide/'
    focus_areas:
      - naming_conventions
      - code_organization
      - documentation
      - formatting
```
Security Review¶
```yaml
- name: AI Security Review
  uses: openai/openai-pr-reviewer@v1
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
  with:
    model: 'gpt-4'
    review_type: 'security'
    focus_areas:
      - input_validation
      - authentication
      - secrets_management
      - sql_injection
      - xss_vulnerabilities
```
Performance Review¶
```yaml
- name: AI Performance Review
  uses: openai/openai-pr-reviewer@v1
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
  with:
    model: 'gpt-4'
    review_type: 'performance'
    focus_areas:
      - algorithm_complexity
      - database_queries
      - caching_opportunities
      - resource_usage
```
Custom AI Review Prompts¶
Create .github/ai-review-prompts.yaml:
```yaml
style_review_prompt: |
  You are a senior DevOps engineer reviewing code against the DevOps Engineering Style Guide.

  Review the following code changes and provide feedback on:
  1. Adherence to style guide: https://tydukes.github.io/coding-style-guide/
  2. Naming conventions (variables, functions, classes)
  3. Code organization and module structure
  4. Documentation completeness and quality
  5. Anti-patterns present in the code

  For each issue found:
  - Cite the specific section of the style guide
  - Provide a code example showing the correction
  - Explain why the change improves the code

  Be constructive and prioritize clarity over brevity.

security_review_prompt: |
  You are a security engineer reviewing code for vulnerabilities.

  Analyze the code changes for:
  1. Input validation and sanitization
  2. Authentication and authorization
  3. Secrets and credential management
  4. SQL injection vulnerabilities
  5. XSS vulnerabilities
  6. Path traversal risks
  7. Insecure dependencies

  For each vulnerability:
  - Describe the security risk
  - Provide a secure code example
  - Reference OWASP guidelines where applicable

performance_review_prompt: |
  You are a performance optimization specialist.

  Review the code for:
  1. Algorithm complexity (Big-O analysis)
  2. Database query optimization
  3. Caching opportunities
  4. Resource usage (memory, CPU)
  5. Async/await usage
  6. Loop optimizations

  For each optimization opportunity:
  - Explain the performance impact
  - Provide an optimized code example
  - Estimate the performance improvement
```
Stage 4: Metadata Validation¶
Metadata Validation Script¶
Create scripts/validate_metadata.py:
```python
#!/usr/bin/env python3
"""Validate YAML frontmatter metadata in documentation files."""

import re
import sys
from pathlib import Path
from typing import Any, Dict, List

import yaml

REQUIRED_FIELDS = {
    "title": str,
    "description": str,
    "author": str,
    "date": str,
    "tags": list,
    "category": str,
    "status": str,
    "version": str,
}

VALID_STATUSES = ["active", "deprecated", "draft", "needs-expansion"]

VALID_CATEGORIES = [
    "Home",
    "Overview",
    "Language Guides",
    "Metadata Schema",
    "Templates",
    "Examples",
    "Anti-Patterns",
    "CI/CD",
]


def extract_frontmatter(file_path: Path) -> Dict[str, Any] | None:
    """Extract YAML frontmatter from a markdown file."""
    content = file_path.read_text()

    # Match frontmatter between --- delimiters
    match = re.match(r"^---\n(.*?)\n---", content, re.DOTALL)
    if not match:
        return None

    try:
        return yaml.safe_load(match.group(1))
    except yaml.YAMLError as e:
        print(f"❌ {file_path}: Invalid YAML frontmatter: {e}")
        return None


def validate_metadata(file_path: Path, metadata: Dict[str, Any]) -> List[str]:
    """Validate metadata against the schema."""
    errors = []

    # Check required fields
    for field, field_type in REQUIRED_FIELDS.items():
        if field not in metadata:
            errors.append(f"Missing required field: {field}")
        elif not isinstance(metadata[field], field_type):
            errors.append(
                f"Field '{field}' must be {field_type.__name__}, "
                f"got {type(metadata[field]).__name__}"
            )

    # Validate status
    if "status" in metadata and metadata["status"] not in VALID_STATUSES:
        errors.append(
            f"Invalid status '{metadata['status']}'. "
            f"Must be one of: {', '.join(VALID_STATUSES)}"
        )

    # Validate category
    if "category" in metadata and metadata["category"] not in VALID_CATEGORIES:
        errors.append(
            f"Invalid category '{metadata['category']}'. "
            f"Must be one of: {', '.join(VALID_CATEGORIES)}"
        )

    # Validate date format (YYYY-MM-DD). Note: yaml.safe_load parses an
    # unquoted date into a datetime.date, so coerce to str before matching.
    if "date" in metadata:
        if not re.match(r"^\d{4}-\d{2}-\d{2}$", str(metadata["date"])):
            errors.append(f"Invalid date format '{metadata['date']}'. Use YYYY-MM-DD")

    # Validate version format (semver)
    if "version" in metadata:
        if not re.match(r"^\d+\.\d+\.\d+$", str(metadata["version"])):
            errors.append(
                f"Invalid version format '{metadata['version']}'. Use semver (x.y.z)"
            )

    # Validate tags (non-empty list)
    if "tags" in metadata:
        if not metadata["tags"]:
            errors.append("Tags list cannot be empty")

    return errors


def main() -> int:
    """Validate every markdown file under docs/."""
    docs_dir = Path("docs")
    if not docs_dir.exists():
        print("❌ docs/ directory not found")
        return 1

    markdown_files = list(docs_dir.rglob("*.md"))
    if not markdown_files:
        print("❌ No markdown files found in docs/")
        return 1

    total_files = len(markdown_files)
    files_with_errors = 0
    total_errors = 0

    print(f"Validating {total_files} documentation files...\n")

    for file_path in markdown_files:
        metadata = extract_frontmatter(file_path)

        if metadata is None:
            print(f"⚠️ {file_path.relative_to(docs_dir)}: No frontmatter found")
            files_with_errors += 1
            continue

        errors = validate_metadata(file_path, metadata)
        if errors:
            print(f"❌ {file_path.relative_to(docs_dir)}:")
            for error in errors:
                print(f"  - {error}")
                total_errors += 1
            print()
            files_with_errors += 1

    # Summary
    print("=" * 60)
    if files_with_errors == 0:
        print(f"✅ All {total_files} files passed validation")
        return 0
    else:
        print(f"❌ {files_with_errors}/{total_files} files have errors")
        print(f"   Total errors: {total_errors}")
        return 1


if __name__ == "__main__":
    sys.exit(main())
```
Make it executable:
```shell
chmod +x scripts/validate_metadata.py
```
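For reference, here is a frontmatter block that would pass the validator above (the field values are illustrative):

```yaml
---
title: "Terraform Style Guide"
description: "Conventions for writing Terraform modules"
author: "Jane Doe"
date: "2024-01-15"
tags: [terraform, style]
category: "Language Guides"
status: "active"
version: "1.0.0"
---
```

Quoting the date keeps `yaml.safe_load` from parsing it into a `datetime.date`, which the schema's `str` type requirement would otherwise flag.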
Stage 5: Post-Merge Validation¶
Deployment Validation¶
After code is merged to main, run additional validation:
```yaml
name: Post-Merge Validation

on:
  push:
    branches: [main]

jobs:
  build-verification:
    name: Build Verification
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Build application
        run: make build

      - name: Verify build artifacts
        run: |
          test -f dist/app || exit 1
          test -f dist/app.tar.gz || exit 1

  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build-verification
    environment: staging
    timeout-minutes: 20
    steps:
      - name: Deploy to staging
        run: kubectl apply -f k8s/staging/

      - name: Wait for deployment
        run: kubectl rollout status deployment/app -n staging --timeout=5m

  smoke-tests:
    name: Smoke Tests
    runs-on: ubuntu-latest
    needs: deploy-staging
    timeout-minutes: 10
    steps:
      - name: Health check
        run: curl -f https://staging.example.com/health || exit 1

      - name: API smoke tests
        run: pytest tests/smoke -v --env=staging

  performance-tests:
    name: Performance Tests
    runs-on: ubuntu-latest
    needs: smoke-tests
    timeout-minutes: 30
    steps:
      - name: Load testing
        run: k6 run tests/performance/load.js --env HOSTNAME=staging.example.com

      - name: Analyze results
        run: python scripts/analyze_performance.py
```
GitLab CI Example¶
Create .gitlab-ci.yml:
```yaml
stages:
  - validate
  - test
  - security
  - review
  - deploy

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip

pre-commit:
  stage: validate
  image: python:3.11
  script:
    - pip install pre-commit
    - pre-commit run --all-files
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

lint:python:
  stage: validate
  image: python:3.11
  script:
    - pip install black flake8 mypy
    - black --check .
    - flake8 .
    - mypy .
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

test:unit:
  stage: test
  image: python:3.11
  script:
    - pip install pytest pytest-cov
    # --cov-report=xml is required so the coverage_report artifact below exists
    - pytest tests/unit -v --cov --cov-report=xml
  coverage: '/TOTAL.*\s+(\d+%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

security:trivy:
  stage: security
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - trivy fs --exit-code 1 --severity HIGH,CRITICAL .
  allow_failure: true

security:semgrep:
  stage: security
  image: returntocorp/semgrep
  script:
    - semgrep --config=p/security-audit --config=p/secrets .

ai-review:
  stage: review
  image: python:3.11
  script:
    - pip install openai
    - python scripts/ai_review.py
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  allow_failure: true

deploy:staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl apply -f k8s/staging/
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```
Jenkins Pipeline Example¶
Create Jenkinsfile:
```groovy
pipeline {
    agent any

    environment {
        PYTHON_VERSION = '3.11'
        TERRAFORM_VERSION = '1.6.0'
    }

    stages {
        stage('Pre-validation') {
            steps {
                sh 'pre-commit run --all-files'
            }
        }

        stage('Lint') {
            parallel {
                stage('Python') {
                    steps {
                        sh '''
                            pip install black flake8 mypy
                            black --check .
                            flake8 .
                            mypy .
                        '''
                    }
                }
                stage('Terraform') {
                    steps {
                        sh '''
                            terraform fmt -check -recursive
                            terraform validate
                        '''
                    }
                }
                stage('Shell') {
                    steps {
                        sh 'find . -name "*.sh" -exec shellcheck {} +'
                    }
                }
            }
        }

        stage('Security Scan') {
            parallel {
                stage('Trivy') {
                    steps {
                        sh 'trivy fs --exit-code 1 --severity HIGH,CRITICAL .'
                    }
                }
                stage('Semgrep') {
                    steps {
                        sh 'semgrep --config=p/security-audit --config=p/secrets .'
                    }
                }
            }
        }

        stage('Test') {
            steps {
                // --junitxml produces the report that the junit step collects below
                sh '''
                    pip install pytest pytest-cov
                    pytest tests/unit -v --cov --cov-report=xml --junitxml=test-results/results.xml
                '''
            }
            post {
                always {
                    junit 'test-results/*.xml'
                    cobertura coberturaReportFile: 'coverage.xml'
                }
            }
        }

        stage('AI Code Review') {
            when {
                changeRequest()
            }
            steps {
                sh 'python scripts/ai_review.py --pr-id ${CHANGE_ID}'
            }
        }

        stage('Metadata Validation') {
            steps {
                sh 'python scripts/validate_metadata.py'
            }
        }

        stage('Terraform Plan') {
            when {
                changeRequest()
            }
            steps {
                dir('terraform') {
                    sh '''
                        terraform init
                        terraform plan -no-color -out=tfplan
                    '''
                }
            }
        }
    }

    post {
        always {
            cleanWs()
        }
        success {
            echo '✅ All validation checks passed'
        }
        failure {
            echo '❌ Validation failed'
        }
    }
}
```
Best Practices¶
Performance Optimization¶
- Parallel Execution: Run independent jobs concurrently
- Caching: Cache dependencies between runs
- Incremental Validation: Only validate changed files when possible
- Timeout Limits: Set reasonable timeouts for all jobs
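As a concrete example of caching, pre-commit's hook environments can be cached in GitHub Actions so each run doesn't rebuild them (the cache key shown is one reasonable choice, not a required convention):

```yaml
- name: Cache pre-commit environments
  uses: actions/cache@v4
  with:
    path: ~/.cache/pre-commit
    # Rebuild the cache only when the hook configuration changes
    key: pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
```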
Security Considerations¶
- Secret Management: Use CI platform's secret management
- Least Privilege: Grant minimum required permissions
- Dependency Scanning: Regularly scan for vulnerable dependencies
- Container Scanning: Scan Docker images for vulnerabilities
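Least privilege in GitHub Actions can be enforced with a workflow-level `permissions` block that defaults to read-only, with individual jobs opting in to extra scopes (a sketch, assuming the security job uploads SARIF results):

```yaml
permissions:
  contents: read  # read-only default for every job in the workflow

jobs:
  security-scan:
    permissions:
      contents: read
      security-events: write  # needed by codeql-action/upload-sarif
```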
Developer Experience¶
- Fast Feedback: Keep pre-commit hooks under 30 seconds
- Clear Error Messages: Provide actionable error messages
- Auto-fix When Possible: Automatically fix formatting issues
- Gradual Adoption: Allow teams to incrementally adopt validation
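If your team prefers one actionable failure over a wall of output, pre-commit's top-level `fail_fast` option stops at the first failing hook:

```yaml
# At the top of .pre-commit-config.yaml
fail_fast: true
```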
Cost Optimization¶
- Skip Redundant Checks: Don't re-run validation on merge commits
- Use Cheaper Runners: Use standard runners for simple tasks
- Cache Aggressively: Cache dependencies, tools, and build artifacts
- Fail Fast: Stop pipeline on critical failures
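One way to cut redundant spend in GitHub Actions is a workflow-level concurrency group, which cancels superseded runs when new commits land on the same branch or PR:

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true  # stale runs stop instead of burning runner minutes
```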
Troubleshooting¶
Common Issues¶
Pre-commit Hooks Fail¶
```shell
# Update hooks to latest versions
pre-commit autoupdate

# Clear cache and reinstall
pre-commit clean
pre-commit install-hooks

# Skip specific hook temporarily
SKIP=black git commit -m "commit message"
```
CI Pipeline Timeout¶
```yaml
# Increase the timeout for a specific job
jobs:
  test:
    timeout-minutes: 30  # raised from the 10 minutes used elsewhere in this guide
```
AI Review API Rate Limits¶
```yaml
# Retry the review step with a wait between attempts to ride out rate limits
- name: AI Review with Retry
  uses: nick-invision/retry@v2
  with:
    timeout_minutes: 10
    max_attempts: 3
    retry_wait_seconds: 60
    command: python scripts/ai_review.py
```
False Positives in Security Scans¶
Create allowlist files for known false positives:

```text
# .trivyignore
CVE-2023-12345  # False positive in test dependency
```

```text
# .semgrepignore
tests/  # Ignore test files for certain rules
```
Metrics and Monitoring¶
Key Metrics¶
Track these metrics to measure pipeline effectiveness:
- Pipeline Success Rate: Percentage of successful pipeline runs
- Average Pipeline Duration: Time from trigger to completion
- Mean Time to Detection (MTTD): Time to detect issues
- Mean Time to Resolution (MTTR): Time to fix issues
- False Positive Rate: Percentage of false alarms
- Code Coverage: Percentage of code covered by tests
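The first two metrics fall straight out of raw run records; here is a minimal sketch (the `PipelineRun` shape is hypothetical, not an API from any CI platform):

```python
from dataclasses import dataclass


@dataclass
class PipelineRun:
    succeeded: bool
    duration_seconds: float


def success_rate(runs: list[PipelineRun]) -> float:
    """Fraction of runs that succeeded (0.0 when there are no runs)."""
    if not runs:
        return 0.0
    return sum(r.succeeded for r in runs) / len(runs)


def average_duration(runs: list[PipelineRun]) -> float:
    """Mean wall-clock duration in seconds across all runs."""
    if not runs:
        return 0.0
    return sum(r.duration_seconds for r in runs) / len(runs)


runs = [
    PipelineRun(succeeded=True, duration_seconds=240),
    PipelineRun(succeeded=True, duration_seconds=300),
    PipelineRun(succeeded=False, duration_seconds=120),
]
print(f"success rate: {success_rate(runs):.0%}")   # → success rate: 67%
print(f"avg duration: {average_duration(runs):.0f}s")  # → avg duration: 220s
```

The same aggregation works whether the records come from the GitHub Actions API, GitLab's pipelines API, or Jenkins build data — only the loading code differs.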
Monitoring Dashboard¶
Create a dashboard tracking:
```yaml
metrics:
  pipeline_success_rate:
    query: "sum(pipeline_success) / sum(pipeline_total)"
    target: "> 95%"
  average_duration:
    query: "avg(pipeline_duration_seconds)"
    target: "< 300"  # 5 minutes
  security_vulnerabilities:
    query: "sum(vulnerabilities_detected)"
    target: "= 0"
  test_coverage:
    query: "avg(code_coverage_percentage)"
    target: "> 80%"
```
Status: Active