# Testing Guide

Comprehensive testing strategy for Brief Bench FastAPI, covering unit tests, integration tests, and end-to-end tests.

## Test Pyramid

```
        /\
       /  \         E2E Tests (Slow, Full Stack)
      /____\
     /      \
    / Integ. \      Integration Tests (Medium, DB API)
   /__________\
  /            \
 /     Unit     \   Unit Tests (Fast, Isolated)
/________________\
```

### Three Test Levels

1. **Unit Tests** (119 tests, 99% coverage)
   - Fast, isolated tests
   - Mock all external dependencies
   - Test business logic in isolation
   - Run during development

2. **Integration Tests** (DB API integration)
   - Test integration with the DB API
   - Require the DB API service to be running
   - Validate data flow and API contracts
   - Run before commits

3. **End-to-End Tests** (Full stack)
   - Test complete user workflows
   - Require all services (FastAPI, DB API, RAG backends)
   - Validate entire system integration
   - Run before deployment

## Quick Start

### Run All Tests (Unit + Integration)

```bash
# Windows
.\run_all_tests.bat

# Linux/Mac
./run_all_tests.sh
```

### Run a Test Category

```bash
# Unit tests only (fast)
.\run_unit_tests.bat

# Integration tests (requires DB API)
.\run_integration_tests.bat

# E2E tests (requires all services)
.\run_e2e_tests.bat
```

### Run Specific Tests

```bash
# Activate the virtual environment first
.venv\Scripts\activate       # Windows
source .venv/bin/activate    # Linux/Mac

# Run by marker
pytest -m unit          # Unit tests only
pytest -m integration   # Integration tests only
pytest -m e2e           # E2E tests only

# Run by file
pytest tests/unit/test_auth_service.py
pytest tests/integration/test_auth_integration.py
pytest tests/e2e/test_full_flow_e2e.py

# Run a specific test function
pytest tests/unit/test_auth_service.py::TestAuthService::test_generate_token
```

## Unit Tests

**Location**: `tests/unit/`
**Coverage**: 99% of application code
**Speed**: Very fast (< 1 second)
**Dependencies**: None (all mocked)

### What They Test

- Service layer business logic
- Model validation
- Utility functions
- Error handling
- Edge cases

### Key Features

- All external APIs are mocked (httpx, JWT, etc.)
- No real network calls
- No database required
- Deterministic and repeatable
- Run in parallel (see the sketch below)
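
To make the mocking concrete, here is a minimal sketch of a unit test in this style. It is illustrative only: `AuthService`, its import path, and its `login` interface are assumptions based on the file names above, and it presumes `pytest-asyncio` is installed.

```python
# Illustrative sketch — names and import paths are assumed, not confirmed.
from unittest.mock import AsyncMock

import pytest

from app.services.auth_service import AuthService  # assumed module path


@pytest.mark.unit
@pytest.mark.asyncio  # requires pytest-asyncio
async def test_login_returns_token():
    # Mock the DB API client so no real network call is made
    db_client = AsyncMock()
    db_client.login.return_value = {"user_id": 1, "login": "99999999"}

    service = AuthService(db_client=db_client)  # hypothetical constructor
    token = await service.login("99999999")

    db_client.login.assert_awaited_once_with("99999999")
    assert isinstance(token, str)
```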

### Running Unit Tests

```bash
# Run with coverage report
.\run_unit_tests.bat

# Or manually
pytest tests/unit/ -v --cov=app --cov-report=html

# View coverage report
start htmlcov/index.html   # Windows
open htmlcov/index.html    # Mac
```

### Coverage Report

After running unit tests, the coverage report is available in three formats:
- **Terminal**: printed to the console
- **HTML**: `htmlcov/index.html`
- **XML**: `coverage.xml` (for CI/CD)

### Test Files

```
tests/unit/
├── conftest.py                 # Unit test fixtures
├── test_auth_service.py        # Authentication logic
├── test_settings_service.py    # Settings management
├── test_analysis_service.py    # Analysis sessions
├── test_query_service.py       # Query processing
├── test_rag_service.py         # RAG backend communication
├── test_db_api_client.py       # DB API client
├── test_jwt_utils.py           # JWT utilities
├── test_models.py              # Pydantic models
└── test_dependencies.py        # Dependency injection
```

## Integration Tests

**Location**: `tests/integration/`
**Coverage**: DB API integration
**Speed**: Medium (a few seconds)
**Dependencies**: DB API service

### What They Test

- FastAPI endpoints with a real DB API
- Authentication flow
- Settings CRUD operations
- Analysis session management
- Error handling from the external service (see the sketch below)
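
For orientation, an integration test in this style might look like the sketch below. The app import path and the login route are assumptions, not the project's confirmed API; adjust them to the real layout.

```python
# Hedged sketch — import path and route are assumed.
import pytest
from fastapi.testclient import TestClient

from app.main import app  # assumed location of the FastAPI instance


@pytest.mark.integration
def test_login_against_real_db_api():
    client = TestClient(app)
    # This call flows through to the real DB API at http://localhost:8081
    response = client.post("/auth/login", json={"login": "99999999"})  # hypothetical route
    assert response.status_code == 200
    assert "access_token" in response.json()
```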

### Prerequisites

**DB API must be running** at `http://localhost:8081`

Check health:
```bash
curl http://localhost:8081/health
```
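
A session-scoped guard can turn a missing DB API into a clean skip instead of a wall of failures. A minimal sketch, assuming the same health URL as the curl check above:

```python
# tests/integration/conftest.py — sketch of an auto-skip guard.
import httpx
import pytest

DB_API_HEALTH_URL = "http://localhost:8081/health"


@pytest.fixture(scope="session", autouse=True)
def require_db_api():
    """Skip the whole integration run if the DB API is unreachable."""
    try:
        httpx.get(DB_API_HEALTH_URL, timeout=5.0).raise_for_status()
    except httpx.HTTPError:
        pytest.skip("DB API is not running at http://localhost:8081")
```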

### Configuration

Create `tests/integration/.env.integration`:

```bash
cp tests/integration/.env.integration.example tests/integration/.env.integration
```

Edit with your test credentials:
```
TEST_DB_API_URL=http://localhost:8081/api/v1
TEST_LOGIN=99999999   # 8-digit test user
```

### Running Integration Tests

```bash
# Check prerequisites and run
.\run_integration_tests.bat

# Or manually
pytest tests/integration/ -v -m integration
```

### Test Files

```
tests/integration/
├── conftest.py                    # Integration fixtures
├── .env.integration.example       # Config template
├── .env.integration               # Your config (not in git)
├── README.md                      # Integration test docs
├── test_auth_integration.py       # Auth with DB API
├── test_settings_integration.py   # Settings with DB API
├── test_analysis_integration.py   # Sessions with DB API
└── test_query_integration.py      # Query endpoints (DB API part)
```

### What's NOT Tested

Integration tests **do not** call RAG backends:
- RAG queries are mocked
- Only DB API integration is tested
- Use E2E tests for full RAG testing

## End-to-End Tests

**Location**: `tests/e2e/`
**Coverage**: Complete user workflows
**Speed**: Slow (minutes)
**Dependencies**: All services (FastAPI + DB API + RAG backends)

### What They Test

- Complete authentication → query → save → retrieve flow
- Real RAG backend calls (IFT, PSI, PROD)
- Cross-environment functionality
- Data persistence end-to-end
- User isolation and security
- Error scenarios with real services

### Prerequisites

**All services must be running**:

1. ✅ DB API at `http://localhost:8081`
2. ✅ IFT RAG backend (configured host)
3. ✅ PSI RAG backend (configured host)
4. ✅ PROD RAG backend (configured host)
5. ✅ Test user exists in DB API
6. ✅ Valid bearer tokens for RAG backends

### Configuration
|
||
|
||
Create `tests/e2e/.env.e2e`:
|
||
|
||
```bash
|
||
cp tests/e2e/.env.e2e.example tests/e2e/.env.e2e
|
||
```
|
||
|
||
Edit with your configuration (see `.env.e2e.example` for all variables):
|
||
```bash
|
||
E2E_DB_API_URL=http://localhost:8081/api/v1
|
||
E2E_TEST_LOGIN=99999999
|
||
|
||
E2E_IFT_RAG_HOST=ift-rag.example.com
|
||
E2E_IFT_BEARER_TOKEN=your_token_here
|
||
# ... more config
|
||
```
|
||
|
||
⚠️ **Security**: `.env.e2e` contains real credentials - never commit to git!
|
||
|
||
### Running E2E Tests
|
||
|
||
```bash
|
||
# Check prerequisites and run all E2E
|
||
.\run_e2e_tests.bat
|
||
|
||
# Or manually run all E2E tests
|
||
pytest tests/e2e/ -v -m e2e
|
||
|
||
# Run environment-specific tests
|
||
pytest tests/e2e/ -v -m e2e_ift # IFT only
|
||
pytest tests/e2e/ -v -m e2e_psi # PSI only
|
||
pytest tests/e2e/ -v -m e2e_prod # PROD only
|
||
|
||
# Run specific test suite
|
||
pytest tests/e2e/test_full_flow_e2e.py -v # Workflows
|
||
pytest tests/e2e/test_rag_backends_e2e.py -v # RAG backends
|
||
pytest tests/e2e/test_error_scenarios_e2e.py -v # Error cases
|
||
```
|
||
|
||
### Test Files
|
||
|
||
```
|
||
tests/e2e/
|
||
├── conftest.py # E2E fixtures
|
||
├── .env.e2e.example # Config template
|
||
├── .env.e2e # Your config (not in git)
|
||
├── README.md # E2E test docs (detailed)
|
||
├── test_full_flow_e2e.py # Complete workflows
|
||
├── test_rag_backends_e2e.py # RAG integration
|
||
└── test_error_scenarios_e2e.py # Error handling
|
||
```
|
||
|
||

### What They Cover

**Complete Workflows** (chained in the sketch below):
- User login → JWT token
- Get/update settings
- Bench mode queries to RAG
- Backend mode queries with sessions
- Save analysis sessions
- Retrieve and delete sessions
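
A full-flow test chains these steps into one scenario. The sketch below is illustrative only: `api_client`, the routes, and the payloads are hypothetical names standing in for the project's real fixtures and API.

```python
# Hedged sketch — fixture name, routes, and payloads are assumed.
import pytest


@pytest.mark.e2e
def test_full_user_flow(api_client):
    # 1. Query the RAG in bench mode
    query = api_client.post("/query", json={"question": "What is RAG?", "mode": "bench"})
    assert query.status_code == 200

    # 2. Save the result as an analysis session
    save = api_client.post("/analysis/sessions", json={"results": query.json()})
    assert save.status_code in (200, 201)
    session_id = save.json()["id"]

    # 3. Retrieve the session, then delete it
    assert api_client.get(f"/analysis/sessions/{session_id}").status_code == 200
    assert api_client.delete(f"/analysis/sessions/{session_id}").status_code in (200, 204)
```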

**RAG Backend Integration**:
- IFT RAG (bench mode)
- PSI RAG (backend mode)
- PROD RAG (bench mode)
- Session reset functionality
- Cross-environment queries

**Error Scenarios**:
- Authentication failures
- Validation errors
- Mode compatibility issues
- Resource not found
- Edge cases (long questions, special chars, etc.; see the sketch below)
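
Edge cases lend themselves to parametrization. A hedged sketch, where the `api_client` fixture and the `/query` route are assumptions:

```python
# Hedged sketch — fixture and route names are assumed.
import pytest


@pytest.mark.e2e
@pytest.mark.parametrize("question", [
    "x" * 10_000,              # very long question
    "line1\nline2\ttabbed",    # embedded control characters
    "spätzle? 日本語? emoji 🚀",  # non-ASCII / special chars
])
def test_query_edge_cases(api_client, question):
    response = api_client.post("/query", json={"question": question})
    # Either answered or cleanly rejected with a validation error
    assert response.status_code in (200, 422)
```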

### Timeouts

E2E tests use realistic timeouts (expressed in code after this list):
- RAG queries: 120 seconds (2 minutes)
- Large batches: 180 seconds (3 minutes)
- DB API calls: 30 seconds
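
As a sketch of how such a timeout might be wired up with httpx (the `/query` endpoint and payload shape are assumptions, not the project's confirmed client code):

```python
# Hedged sketch — endpoint and payload shape are assumed.
import httpx

RAG_TIMEOUT = httpx.Timeout(120.0, connect=10.0)  # 120 s overall, 10 s to connect


async def query_rag(base_url: str, payload: dict) -> dict:
    async with httpx.AsyncClient(base_url=base_url, timeout=RAG_TIMEOUT) as client:
        response = await client.post("/query", json=payload)
        response.raise_for_status()
        return response.json()
```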

### Cleanup

E2E tests automatically clean up after themselves. However, if tests fail catastrophically, you may need to manually delete test sessions.
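
A teardown fixture is one way to keep that cleanup reliable even when assertions fail mid-test. A hedged sketch, where `api_client` and the sessions route are hypothetical names:

```python
# Hedged sketch — fixture name and route are assumed.
import pytest


@pytest.fixture
def session_tracker(api_client):
    """Collect session IDs during the test; delete them afterwards."""
    created: list[str] = []
    yield created
    for session_id in created:
        # Best-effort cleanup: never let teardown errors mask the test result
        try:
            api_client.delete(f"/analysis/sessions/{session_id}")
        except Exception:
            pass
```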

## Test Markers

Use pytest markers to run specific test categories:

```bash
# Unit tests only
pytest -m unit

# Integration tests only
pytest -m integration

# All E2E tests
pytest -m e2e

# E2E tests for a specific environment
pytest -m e2e_ift
pytest -m e2e_psi
pytest -m e2e_prod

# Slow tests only
pytest -m slow

# Everything except E2E
pytest -m "not e2e"

# Unit and integration (no E2E)
pytest -m "unit or integration"
```
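
Custom markers must be registered, or pytest emits "unknown marker" warnings (and errors under `--strict-markers`). Projects typically declare them in `pytest.ini` or `pyproject.toml`; the hook below is an equivalent sketch in Python, not the project's confirmed setup:

```python
# conftest.py — sketch of marker registration via a hook.
MARKERS = {
    "unit": "fast, fully mocked tests",
    "integration": "tests requiring the DB API",
    "e2e": "full-stack tests requiring all services",
    "e2e_ift": "E2E tests against the IFT RAG backend",
    "e2e_psi": "E2E tests against the PSI RAG backend",
    "e2e_prod": "E2E tests against the PROD RAG backend",
    "slow": "long-running tests",
}


def pytest_configure(config):
    for name, description in MARKERS.items():
        config.addinivalue_line("markers", f"{name}: {description}")
```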

## CI/CD Integration

### Recommended CI/CD Pipeline

```yaml
# .github/workflows/test.yml example
name: Tests

on: [push, pull_request]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run unit tests
        run: pytest tests/unit/ -v -m unit --cov=app
      - name: Upload coverage
        uses: codecov/codecov-action@v3

  integration-tests:
    runs-on: ubuntu-latest
    services:
      db-api:
        image: your-db-api:latest
        ports:
          - 8081:8081
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Create .env.integration
        run: |
          echo "TEST_DB_API_URL=http://localhost:8081/api/v1" > tests/integration/.env.integration
          echo "TEST_LOGIN=${{ secrets.TEST_LOGIN }}" >> tests/integration/.env.integration
      - name: Run integration tests
        run: pytest tests/integration/ -v -m integration

  e2e-tests:
    runs-on: ubuntu-latest
    # Only run E2E on main branch or releases
    if: github.ref == 'refs/heads/main'
    services:
      db-api:
        image: your-db-api:latest
        ports:
          - 8081:8081
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Create .env.e2e
        run: |
          echo "E2E_DB_API_URL=${{ secrets.E2E_DB_API_URL }}" > tests/e2e/.env.e2e
          echo "E2E_TEST_LOGIN=${{ secrets.E2E_TEST_LOGIN }}" >> tests/e2e/.env.e2e
          # ... add all other secrets
      - name: Run E2E tests
        run: pytest tests/e2e/ -v -m e2e
```

### CI/CD Best Practices

1. **Always run unit tests** (fast, no dependencies)
2. **Run integration tests** if the DB API is available
3. **Run E2E tests** only on the main branch or before deployment
4. **Use secrets** for test credentials
5. **Cache dependencies** to speed up builds
6. **Run unit tests in parallel**
7. **Generate coverage reports** for unit tests only

## Troubleshooting

### Unit Tests Failing

**Check**:
- Virtual environment is activated
- All dependencies are installed: `pip install -r requirements.txt`
- No syntax errors in test files

### Integration Tests Skipped

**Cause**: DB API not running or not configured

**Fix**:
1. Start the DB API: `docker-compose up db-api`
2. Check health: `curl http://localhost:8081/health`
3. Verify `.env.integration` exists and is correct

### E2E Tests Skipped

**Cause**: Required services not running

**Fix**:
1. Check DB API health
2. Verify RAG backends are accessible
3. Confirm `.env.e2e` is configured
4. Ensure the test user exists in the DB API

### Timeout Errors

**Cause**: RAG backends slow or unavailable

**Fix**:
- Check RAG backend health
- Verify bearer tokens are valid
- Check network connectivity
- Increase timeouts if needed (in the test files)

### Authentication Failures

**Cause**: Invalid credentials or the test user doesn't exist

**Fix**:
- Verify the test user exists: `curl -X POST "http://localhost:8081/api/v1/users/login?login=99999999"` (URL quoted so the shell doesn't expand `?`)
- Check bearer tokens are valid
- Ensure `JWT_SECRET_KEY` matches between environments

## Best Practices

### During Development

1. **Write unit tests first** (TDD approach)
2. **Run unit tests frequently** (on every change)
3. **Use the `--lf` flag** to rerun the last failures: `pytest --lf`
4. **Use the `-x` flag** to stop on the first failure: `pytest -x`
5. **Use the `-k` flag** to run matching tests: `pytest -k "test_auth"`

### Before Committing

1. ✅ Run all unit tests
2. ✅ Check coverage (should be > 90%)
3. ✅ Run integration tests if the DB API is available
4. ✅ Fix any failing tests
5. ✅ Review the coverage report for gaps

### Before Deploying

1. ✅ Run all unit tests
2. ✅ Run all integration tests
3. ✅ Run all E2E tests
4. ✅ Verify all tests pass
5. ✅ Check for any warnings
6. ✅ Review the test results summary

### Writing New Tests

**For new features**:
1. Start with unit tests (business logic)
2. Add integration tests (DB API interaction)
3. Add an E2E test (complete workflow)

**For bug fixes** (see the sketch after this list):
1. Write a failing test that reproduces the bug
2. Fix the bug
3. Verify the test passes
4. Add related edge case tests
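
A hedged sketch of step 1, where `QueryRequest` and its import path are hypothetical names standing in for whatever model or function carried the bug:

```python
# Hedged sketch — model name and import path are assumed.
import pytest
from pydantic import ValidationError

from app.models import QueryRequest  # assumed module path


@pytest.mark.unit
def test_empty_question_is_rejected():
    """Written to fail before the fix: empty questions must not reach the RAG."""
    with pytest.raises(ValidationError):
        QueryRequest(question="")
```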

## Test Coverage Goals

- **Unit Test Coverage**: > 95%
- **Integration Test Coverage**: All DB API endpoints
- **E2E Test Coverage**: All critical user workflows

Current coverage:
- ✅ Unit: 99%
- ✅ Integration: All DB API integration points
- ✅ E2E: Complete workflows + error scenarios

## Related Documentation

- [Unit Tests](tests/unit/) - Fast isolated tests
- [Integration Tests](tests/integration/README.md) - DB API integration
- [E2E Tests](tests/e2e/README.md) - Full stack testing
- [DB API Contract](DB_API_CONTRACT.md) - External API spec
- [PROJECT_STATUS.md](PROJECT_STATUS.md) - Implementation status

## Summary

```
┌─────────────┬───────┬──────────────┬────────────────┐
│ Test Type   │ Speed │ Dependencies │ When to Run    │
├─────────────┼───────┼──────────────┼────────────────┤
│ Unit        │ Fast  │ None         │ Always         │
│ Integration │ Med   │ DB API       │ Before commit  │
│ E2E         │ Slow  │ All services │ Before deploy  │
└─────────────┴───────┴──────────────┴────────────────┘

Run unit tests constantly ⚡
Run integration tests regularly 🔄
Run E2E tests before deployment 🚀
```
|