## Test Categories
### 1. Unit Tests (`unit/`)

- **Purpose**: Test individual functions and components in isolation
- **Scope**: Small, fast tests for specific functionality
- **Languages**: Go and Zig
- **Usage**: `make test-unit`

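As a sketch of the style these tests use, a minimal table-driven check looks like the following. `clampRetries` is a hypothetical function under test, not part of the project:

```go
package main

import "fmt"

// clampRetries is a made-up helper used only to illustrate the
// table-driven style; substitute the real function under test.
func clampRetries(n int) int {
	if n < 0 {
		return 0
	}
	if n > 10 {
		return 10
	}
	return n
}

func main() {
	// Table-driven cases, the same shape a *_test.go file would use.
	cases := []struct {
		name string
		in   int
		want int
	}{
		{"negative clamps to zero", -3, 0},
		{"in range passes through", 4, 4},
		{"too large clamps to max", 99, 10},
	}
	for _, c := range cases {
		if got := clampRetries(c.in); got != c.want {
			panic(fmt.Sprintf("%s: got %d, want %d", c.name, got, c.want))
		}
	}
	fmt.Println("all cases passed")
}
```

In a real `*_test.go` file the loop body would use `t.Run(c.name, ...)` and `t.Errorf` instead of panicking.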
### 2. Integration Tests (`integration/`)

- **Purpose**: Test component interactions and system integration
- **Scope**: Multiple components working together
- **Dependencies**: Requires Redis and a database
- **Usage**: `make test-integration`

### 3. End-to-End Tests (`e2e/`)

- **Purpose**: Test complete user workflows and system behavior
- **Scope**: Full system from the user's perspective
- **Dependencies**: Complete system setup
- **Usage**: `make test-e2e`

Note: The Podman-based E2E test (`TestPodmanIntegration`) is opt-in because it builds and runs containers. Enable it with `FETCH_ML_E2E_PODMAN=1 go test ./tests/e2e/...`.

### 4. Performance Tests (`benchmarks/`)

- **Purpose**: Measure performance characteristics and identify bottlenecks
- **Scope**: API endpoints, ML experiments, payload handling
- **Metrics**: Latency, throughput, memory usage
- **Usage**: `make benchmark`

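Benchmarks in this directory would follow the standard `testing.B` shape. This sketch uses a made-up `buildPayload` function, and `testing.Benchmark` so the same body can run outside `go test` for illustration:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// buildPayload is a hypothetical stand-in for whatever the real
// benchmark exercises (an API handler, payload encoder, etc.).
func buildPayload(n int) string {
	var b strings.Builder
	for i := 0; i < n; i++ {
		b.WriteString("x")
	}
	return b.String()
}

func main() {
	// Normally this body would live in a Benchmark* function in a
	// *_test.go file and run via `make benchmark` / `go test -bench=...`.
	res := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs() // include memory stats alongside timings
		for i := 0; i < b.N; i++ {
			_ = buildPayload(64)
		}
	})
	fmt.Println(res.String())
	fmt.Println(res.MemString()) // bytes and allocations per op
}
```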
### 5. Load Tests (`load/`)

- **Purpose**: Test system behavior under high load
- **Scope**: Concurrent users, stress testing, spike testing
- **Scenarios**: Light, medium, and heavy load patterns
- **Usage**: `make load-test`

### 6. Chaos Tests (`chaos/`)

- **Purpose**: Test system resilience and failure recovery
- **Scope**: Database failures, Redis failures, network issues
- **Scenarios**: Connection failures, resource exhaustion, high concurrency
- **Usage**: `make chaos-test`

## Test Execution

### Quick Test Commands

```bash
make test              # Run all tests
make test-unit         # Unit tests only
make test-integration  # Integration tests only
make test-e2e          # End-to-end tests only
make test-coverage     # All tests with coverage report
```

### Performance Testing Commands

```bash
make benchmark        # Run performance benchmarks
make load-test        # Run load testing suite
make chaos-test       # Run chaos engineering tests
make tech-excellence  # Run complete technical excellence suite
```

### Individual Test Execution

```bash
# Run a specific benchmark
go test -bench=BenchmarkAPIServer ./tests/benchmarks/

# Run a specific chaos test
go test -v ./tests/chaos/ -run TestChaosTestSuite

# Run with coverage
go test -cover ./tests/unit/
```

## Test Dependencies

### Required Services

- **Redis**: Required for integration, performance, and chaos tests
- **Database**: SQLite locally, PostgreSQL for production-like tests
- **Docker/Podman**: For container-based tests

### Test Configuration

- Test databases use isolated Redis DB numbers (4-7)
- Temporary directories are used for file-based tests
- Test servers use random ports to avoid conflicts

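The random-port trick usually boils down to binding port 0 and letting the kernel pick a free one. A minimal sketch, with an assumed helper name:

```go
package main

import (
	"fmt"
	"net"
)

// freePort asks the kernel for an unused port by listening on port 0,
// the usual way test servers avoid port conflicts.
func freePort() (int, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer ln.Close()
	return ln.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	p, err := freePort()
	if err != nil {
		panic(err)
	}
	fmt.Println("test server would bind to port", p)
}
```

Note that the port is released before it is reused, so a parallel process could in principle grab it first, which is why conflicts can still occur (see Troubleshooting).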
## Best Practices

### Writing Tests

1. **Unit Tests**: Test single functions; mock external dependencies
2. **Integration Tests**: Test real component interactions
3. **Performance Tests**: Use `testing.B` for benchmarks and include memory stats
4. **Chaos Tests**: Simulate realistic failure scenarios

### Test Organization

1. **Package Naming**: Use descriptive package names (`benchmarks`, `chaos`, etc.)
2. **File Naming**: Use the `*_test.go` suffix with descriptive names
3. **Test Functions**: Use `Test*` for unit tests and `Benchmark*` for benchmarks

### Cleanup

1. **Resources**: Close database connections and Redis clients
2. **Temp Files**: Use `t.TempDir()` for temporary files
3. **Test Data**: Clean up Redis test databases after tests

## Technical Excellence Features

The test suite includes advanced testing capabilities:

- **Performance Regression Detection**: Automated detection of performance degradations
- **Chaos Engineering**: System resilience testing under failure conditions
- **Load Testing**: High-concurrency and stress-testing scenarios
- **Profiling Tools**: CPU, memory, and performance profiling
- **Architecture Decision Records**: Documented technical decisions

## CI/CD Integration

All tests are integrated into the CI/CD pipeline:

- Unit tests run on every commit
- Integration tests run on PRs
- Performance tests run nightly
- Chaos tests run before releases

## Troubleshooting

### Common Issues

1. **Redis Connection**: Ensure Redis is running for integration tests
2. **Port Conflicts**: Tests use random ports, but conflicts can still occur
3. **Resource Limits**: Chaos tests may hit system resource limits
4. **Test Isolation**: Ensure tests don't interfere with each other

### Debug Tips

1. Use the `-v` flag for verbose output
2. Use the `-run` flag to run specific tests
3. Check test logs for detailed error information
4. Use `make test-coverage` for coverage analysis