fetch_ml/tests

Test Categories

1. Unit Tests (unit/)

  • Purpose: Test individual functions and components in isolation
  • Scope: Small, fast tests for specific functionality
  • Languages: Go and Zig tests
  • Usage: make test-unit

2. Integration Tests (integration/)

  • Purpose: Test component interactions and system integration
  • Scope: Multiple components working together
  • Dependencies: Requires Redis, database
  • Usage: make test-integration

3. End-to-End Tests (e2e/)

  • Purpose: Test complete user workflows and system behavior
  • Scope: Full system from user perspective
  • Dependencies: Complete system setup
  • Usage: make test-e2e

Note: the Podman-based E2E suite (TestPodmanIntegration) is opt-in because it builds and runs containers. Enable it with FETCH_ML_E2E_PODMAN=1 go test ./tests/e2e/...

4. Performance Tests (benchmarks/)

  • Purpose: Measure performance characteristics and identify bottlenecks
  • Scope: API endpoints, ML experiments, payload handling
  • Metrics: Latency, throughput, memory usage
  • Usage: make benchmark

5. Load Tests (load/)

  • Purpose: Test system behavior under high load
  • Scope: Concurrent users, stress testing, spike testing
  • Scenarios: Light, medium, heavy load patterns
  • Usage: make load-test

6. Chaos Tests (chaos/)

  • Purpose: Test system resilience and failure recovery
  • Scope: Database failures, Redis failures, network issues
  • Scenarios: Connection failures, resource exhaustion, high concurrency
  • Usage: make chaos-test

Test Execution

Quick Test Commands

make test              # Run all tests
make test-unit         # Unit tests only
make test-integration  # Integration tests only
make test-e2e          # End-to-end tests only
make test-coverage     # All tests with coverage report

Performance Testing Commands

make benchmark         # Run performance benchmarks
make load-test         # Run load testing suite
make chaos-test        # Run chaos engineering tests
make tech-excellence   # Run complete technical excellence suite

Individual Test Execution

# Run specific benchmark
go test -bench=BenchmarkAPIServer ./tests/benchmarks/

# Run specific chaos test
go test -v ./tests/chaos/ -run TestChaosTestSuite

# Run with coverage
go test -cover ./tests/unit/

Test Dependencies

Required Services

  • Redis: Required for integration, performance, and chaos tests
  • Database: SQLite for local tests, PostgreSQL for production-like tests
  • Docker/Podman: For container-based tests

Test Configuration

  • Test databases use isolated Redis DB numbers (4-7)
  • Temporary directories used for file-based tests
  • Test servers use random ports to avoid conflicts

Best Practices

Writing Tests

  1. Unit Tests: Test single functions, mock external dependencies
  2. Integration Tests: Test real component interactions
  3. Performance Tests: Use testing.B for benchmarks, include memory stats
  4. Chaos Tests: Simulate realistic failure scenarios

Test Organization

  1. Package Naming: Use descriptive package names (benchmarks, chaos, etc.)
  2. File Naming: Use *_test.go suffix, descriptive names
  3. Test Functions: Use Test* for unit tests, Benchmark* for performance

Cleanup

  1. Resources: Close database connections, Redis clients
  2. Temp Files: Use t.TempDir() for temporary files
  3. Test Data: Clean up Redis test databases after tests

Technical Excellence Features

The test suite includes advanced testing capabilities:

  • Performance Regression Detection: Automated detection of performance degradations
  • Chaos Engineering: System resilience testing under failure conditions
  • Load Testing: High-concurrency and stress testing scenarios
  • Profiling Tools: CPU, memory, and performance profiling
  • Architecture Decision Records: Documented technical decisions

CI/CD Integration

All tests are integrated into the CI/CD pipeline:

  • Unit tests run on every commit
  • Integration tests run on PRs
  • Performance tests run nightly
  • Chaos tests run before releases

Troubleshooting

Common Issues

  1. Redis Connection: Ensure Redis is running for integration tests
  2. Port Conflicts: Tests use random ports, but conflicts can occur
  3. Resource Limits: Chaos tests may hit system resource limits
  4. Test Isolation: Ensure tests don't interfere with each other

Debug Tips

  1. Use -v flag for verbose output
  2. Use -run flag to run specific tests
  3. Check test logs for detailed error information
  4. Use make test-coverage for coverage analysis