fetch_ml/tests

Test Categories

1. Unit Tests (unit/)

  • Purpose: Test individual functions and components in isolation
  • Scope: Small, fast tests for specific functionality
  • Languages: Go and Zig tests
  • Usage: make test-unit

2. Integration Tests (integration/)

  • Purpose: Test component interactions and system integration
  • Scope: Multiple components working together
  • Dependencies: Requires Redis, database
  • Usage: make test-integration

3. End-to-End Tests (e2e/)

  • Purpose: Test complete user workflows and system behavior
  • Scope: Full system from user perspective
  • Dependencies: Complete system setup
  • Usage: make test-e2e

Note: Podman-based E2E (TestPodmanIntegration) is opt-in because it builds/runs containers. Enable it with FETCH_ML_E2E_PODMAN=1 go test ./tests/e2e/....

4. Performance Tests (benchmarks/)

  • Purpose: Measure performance characteristics and identify bottlenecks
  • Scope: API endpoints, ML experiments, payload handling
  • Metrics: Latency, throughput, memory usage
  • Usage: make benchmark

5. Load Tests (load/)

  • Purpose: Test system behavior under high load
  • Scope: Concurrent users, stress testing, spike testing
  • Scenarios: Light, medium, heavy load patterns
  • Usage: make load-test

6. Chaos Tests (chaos/)

  • Purpose: Test system resilience and failure recovery
  • Scope: Database failures, Redis failures, network issues
  • Scenarios: Connection failures, resource exhaustion, high concurrency
  • Usage: make chaos-test

Test Execution

Quick Test Commands

make test              # Run all tests
make test-unit         # Unit tests only
make test-integration  # Integration tests only
make test-e2e          # End-to-end tests only
make test-coverage     # All tests with coverage report

Performance Testing Commands

make benchmark         # Run performance benchmarks
make load-test         # Run load testing suite
make chaos-test        # Run chaos engineering tests
make tech-excellence   # Run complete technical excellence suite

Individual Test Execution

# Run specific benchmark
go test -bench=BenchmarkAPIServer ./tests/benchmarks/

# Run specific chaos test
go test -v ./tests/chaos/ -run TestChaosTestSuite

# Run with coverage
go test -cover ./tests/unit/

Test Dependencies

Required Services

  • Redis: Required for integration, performance, and chaos tests
  • Database: SQLite for local, PostgreSQL for production-like tests
  • Docker/Podman: For container-based tests

Test Configuration

  • Test databases use isolated Redis DB numbers (4-7)
  • Temporary directories used for file-based tests
  • Test servers use random ports to avoid conflicts

Best Practices

Writing Tests

  1. Unit Tests: Test single functions, mock external dependencies
  2. Integration Tests: Test real component interactions
  3. Performance Tests: Use testing.B for benchmarks, include memory stats
  4. Chaos Tests: Simulate realistic failure scenarios

Test Organization

  1. Package Naming: Use descriptive package names (benchmarks, chaos, etc.)
  2. File Naming: Use *_test.go suffix, descriptive names
  3. Test Functions: Use Test* for unit tests, Benchmark* for performance

Cleanup

  1. Resources: Close database connections, Redis clients
  2. Temp Files: Use t.TempDir() for temporary files
  3. Test Data: Clean up Redis test databases after tests

Technical Excellence Features

The test suite includes advanced testing capabilities:

  • Performance Regression Detection: Automated detection of performance degradations
  • Chaos Engineering: System resilience testing under failure conditions
  • Load Testing: High-concurrency and stress testing scenarios
  • Profiling Tools: CPU, memory, and performance profiling
  • Architecture Decision Records: Documented technical decisions

CI/CD Integration

All tests are integrated into the CI/CD pipeline:

  • Unit tests run on every commit
  • Integration tests run on PRs
  • Performance tests run nightly
  • Chaos tests run before releases

Troubleshooting

Common Issues

  1. Redis Connection: Ensure Redis is running for integration tests
  2. Port Conflicts: Tests use random ports, but conflicts can occur
  3. Resource Limits: Chaos tests may hit system resource limits
  4. Test Isolation: Ensure tests don't interfere with each other

Debug Tips

  1. Use -v flag for verbose output
  2. Use -run flag to run specific tests
  3. Check test logs for detailed error information
  4. Use make test-coverage for coverage analysis