Test Categories
1. Unit Tests (unit/)
- Purpose: Test individual functions and components in isolation
- Scope: Small, fast tests for specific functionality
- Languages: Go and Zig tests
- Usage:
make test-unit
2. Integration Tests (integration/)
- Purpose: Test component interactions and system integration
- Scope: Multiple components working together
- Dependencies: Requires Redis, database
- Usage:
make test-integration
3. End-to-End Tests (e2e/)
- Purpose: Test complete user workflows and system behavior
- Scope: Full system from user perspective
- Dependencies: Complete system setup
- Usage:
make test-e2e
4. Performance Tests (benchmarks/)
- Purpose: Measure performance characteristics and identify bottlenecks
- Scope: API endpoints, ML experiments, payload handling
- Metrics: Latency, throughput, memory usage
- Usage:
make benchmark
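As a sketch of the benchmark pattern the suite uses, the snippet below measures latency and allocations for encoding a payload. The `jobPayload` type and `encodePayload` helper are hypothetical stand-ins, not types from this repo; `testing.Benchmark` lets the same `testing.B` loop run outside `go test`.

```go
package main

import (
	"encoding/json"
	"fmt"
	"testing"
)

// jobPayload is a hypothetical payload type used only for this sketch.
type jobPayload struct {
	ID     int               `json:"id"`
	Name   string            `json:"name"`
	Params map[string]string `json:"params"`
}

// encodePayload is the hypothetical operation under measurement.
func encodePayload(p jobPayload) ([]byte, error) {
	return json.Marshal(&p)
}

func main() {
	// testing.Benchmark auto-tunes b.N; inside a real Benchmark* test
	// function the loop body would be identical.
	res := testing.Benchmark(func(b *testing.B) {
		p := jobPayload{ID: 1, Name: "train", Params: map[string]string{"lr": "0.01"}}
		b.ReportAllocs() // include memory stats alongside latency
		for i := 0; i < b.N; i++ {
			if _, err := encodePayload(p); err != nil {
				b.Fatal(err)
			}
		}
	})
	fmt.Println(res.String(), res.MemString())
}
```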
5. Load Tests (load/)
- Purpose: Test system behavior under high load
- Scope: Concurrent users, stress testing, spike testing
- Scenarios: Light, medium, heavy load patterns
- Usage:
make load-test
6. Chaos Tests (chaos/)
- Purpose: Test system resilience and failure recovery
- Scope: Database failures, Redis failures, network issues
- Scenarios: Connection failures, resource exhaustion, high concurrency
- Usage:
make chaos-test
Test Execution
Quick Test Commands
make test # Run all tests
make test-unit # Unit tests only
make test-integration # Integration tests only
make test-e2e # End-to-end tests only
make test-coverage # All tests with coverage report
Performance Testing Commands
make benchmark # Run performance benchmarks
make load-test # Run load testing suite
make chaos-test # Run chaos engineering tests
make tech-excellence # Run complete technical excellence suite
Individual Test Execution
# Run specific benchmark
go test -bench=BenchmarkAPIServer ./tests/benchmarks/
# Run specific chaos test
go test -v ./tests/chaos/ -run TestChaosTestSuite
# Run with coverage
go test -cover ./tests/unit/
Test Dependencies
Required Services
- Redis: Required for integration, performance, and chaos tests
- Database: SQLite for local, PostgreSQL for production-like tests
- Docker/Podman: For container-based tests
Test Configuration
- Test databases use isolated Redis DB numbers (4-7)
- Temporary directories used for file-based tests
- Test servers use random ports to avoid conflicts
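The random-port and isolated-DB conventions above can be sketched like this. `freePort` uses the standard listen-on-`:0` trick; the category-to-DB mapping in `redisDBFor` is illustrative (only the 4-7 range comes from this document).

```go
package main

import (
	"fmt"
	"net"
)

// freePort asks the kernel for an unused TCP port by listening on :0,
// the usual way test servers avoid fixed-port conflicts.
func freePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

// redisDBFor maps a test category to one of the isolated Redis DB
// numbers (4-7); the category names here are assumptions.
func redisDBFor(category string) int {
	dbs := map[string]int{"integration": 4, "benchmarks": 5, "load": 6, "chaos": 7}
	if db, ok := dbs[category]; ok {
		return db
	}
	return 4
}

func main() {
	port, err := freePort()
	if err != nil {
		panic(err)
	}
	fmt.Printf("test server port: %d, chaos Redis DB: %d\n", port, redisDBFor("chaos"))
}
```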
Best Practices
Writing Tests
- Unit Tests: Test single functions, mock external dependencies
- Integration Tests: Test real component interactions
- Performance Tests: Use `testing.B` for benchmarks, include memory stats
- Chaos Tests: Simulate realistic failure scenarios
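The table-driven layout recommended for unit tests looks like the sketch below. `clampRetries` is a hypothetical function under test; in a real `*_test.go` file each case would run under `t.Run(c.name, ...)` with `t.Errorf` instead of `panic`, as noted in the comments.

```go
package main

import "fmt"

// clampRetries is a hypothetical function under test: it bounds a retry
// count to the range [0, 10].
func clampRetries(n int) int {
	if n < 0 {
		return 0
	}
	if n > 10 {
		return 10
	}
	return n
}

func main() {
	// Table-driven layout: each case gets a name, an input, and an
	// expected output; in a real test, use t.Run(c.name, ...) per case.
	cases := []struct {
		name string
		in   int
		want int
	}{
		{"negative clamps to zero", -3, 0},
		{"in range passes through", 4, 4},
		{"large clamps to max", 99, 10},
	}
	for _, c := range cases {
		if got := clampRetries(c.in); got != c.want {
			panic(fmt.Sprintf("%s: got %d, want %d", c.name, got, c.want))
		}
	}
	fmt.Println("all cases passed")
}
```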
Test Organization
- Package Naming: Use descriptive package names (`benchmarks`, `chaos`, etc.)
- File Naming: Use the `*_test.go` suffix and descriptive names
- Test Functions: Use `Test*` for unit tests, `Benchmark*` for performance
Cleanup
- Resources: Close database connections, Redis clients
- Temp Files: Use `t.TempDir()` for temporary files
- Test Data: Clean up Redis test databases after tests
Technical Excellence Features
The test suite includes advanced testing capabilities:
- Performance Regression Detection: Automated detection of performance degradations
- Chaos Engineering: System resilience testing under failure conditions
- Load Testing: High-concurrency and stress testing scenarios
- Profiling Tools: CPU, memory, and performance profiling
- Architecture Decision Records: Documented technical decisions
CI/CD Integration
All tests are integrated into the CI/CD pipeline:
- Unit tests run on every commit
- Integration tests run on PRs
- Performance tests run nightly
- Chaos tests run before releases
Troubleshooting
Common Issues
- Redis Connection: Ensure Redis is running for integration tests
- Port Conflicts: Tests use random ports, but conflicts can occur
- Resource Limits: Chaos tests may hit system resource limits
- Test Isolation: Ensure tests don't interfere with each other
Debug Tips
- Use the `-v` flag for verbose output
- Use the `-run` flag to run specific tests
- Check test logs for detailed error information
- Use `make test-coverage` for coverage analysis