# End-to-End (E2E) Tests
End-to-end tests for the complete Brief Bench FastAPI system, testing the entire stack from authentication through RAG queries to data persistence.
## Overview
E2E tests validate:
- Complete user workflows from login to query results
- Integration between FastAPI backend, DB API, and RAG backends
- Real API calls to all external services (no mocking)
- Data persistence and retrieval
- Cross-environment functionality (IFT, PSI, PROD)
- Error handling and edge cases
## Prerequisites

**CRITICAL:** All external services must be running before E2E tests can execute.

### Required Services
- **DB API** (database service)
  - Must be running and accessible
  - Default: `http://localhost:8081`
  - Health check: `GET http://localhost:8081/health`
- **RAG Backends** (one or more environments)
  - IFT RAG: Development/test environment
  - PSI RAG: Backend mode testing
  - PROD RAG: Production-like testing
  - Each environment needs its own RAG backend server running
- **Test User Account**
  - A valid 8-digit test login must exist in DB API
  - Recommended: use a dedicated test account (e.g., `99999999`)
  - This account is used for all E2E test operations
### Service Availability Check
E2E tests automatically check prerequisites before running:
- DB API health endpoint
- RAG backend host configurations
- Presence of test credentials
If any prerequisite is not met, tests will be skipped with a detailed error message.
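For orientation, here is a minimal sketch of what such a check might look like. The real logic lives in `conftest.py`; treat this implementation as an assumption, not the project's actual code:

```python
# Illustrative sketch only - the actual check lives in tests/e2e/conftest.py.
import os

import httpx
import pytest


@pytest.fixture(scope="session", autouse=True)
def check_prerequisites():
    """Skip the whole E2E session when required services are unavailable."""
    db_api_url = os.getenv("E2E_DB_API_URL")
    if not db_api_url:
        pytest.skip("E2E_DB_API_URL is not set; see tests/e2e/.env.e2e.example")
    # Derive the health endpoint from the configured API base URL.
    health_url = db_api_url.replace("/api/v1", "") + "/health"
    try:
        httpx.get(health_url, timeout=5.0).raise_for_status()
    except httpx.HTTPError as exc:
        pytest.skip(f"DB API is not reachable at {health_url}: {exc}")
    if not os.getenv("E2E_TEST_LOGIN"):
        pytest.skip("E2E_TEST_LOGIN is not set; an 8-digit test login is required")
```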
## Environment Configuration

### Create the .env.e2e File

Copy the example file and configure it for your environment:

```bash
cp tests/e2e/.env.e2e.example tests/e2e/.env.e2e
```

### Required Environment Variables

Edit `tests/e2e/.env.e2e` with your configuration:
```bash
# DB API Configuration
E2E_DB_API_URL=http://localhost:8081/api/v1

# Test User Credentials
E2E_TEST_LOGIN=99999999  # 8-digit test user login

# IFT Environment Settings (Bench Mode)
E2E_IFT_RAG_HOST=ift-rag.example.com
E2E_IFT_BEARER_TOKEN=your_ift_bearer_token_here
E2E_IFT_SYSTEM_PLATFORM=telegram
E2E_IFT_SYSTEM_PLATFORM_USER=test_user_ift

# PSI Environment Settings (Backend Mode)
E2E_PSI_RAG_HOST=psi-rag.example.com
E2E_PSI_BEARER_TOKEN=your_psi_bearer_token_here
E2E_PSI_PLATFORM_USER_ID=test_user_psi
E2E_PSI_PLATFORM_ID=telegram

# PROD Environment Settings (Bench Mode)
E2E_PROD_RAG_HOST=prod-rag.example.com
E2E_PROD_BEARER_TOKEN=your_prod_bearer_token_here
E2E_PROD_SYSTEM_PLATFORM=telegram
E2E_PROD_SYSTEM_PLATFORM_USER=test_user_prod
```
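For reference, one common way to load this file in `conftest.py` is python-dotenv; this is a sketch under that assumption, not necessarily how this project wires it up:

```python
# Hypothetical loading snippet for tests/e2e/conftest.py using python-dotenv.
from pathlib import Path

from dotenv import load_dotenv

# Resolve .env.e2e relative to this file; real shell variables take precedence.
load_dotenv(Path(__file__).parent / ".env.e2e", override=False)
```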
### Security Note

⚠️ **NEVER commit `.env.e2e` to version control!**

The `.env.e2e` file contains:

- Real bearer tokens for RAG backends
- Production-like credentials
- Sensitive configuration

Always use `.env.e2e.example` as a template.
## Running E2E Tests

### Prerequisites Check

First, ensure all services are running:

```bash
# Check DB API
curl http://localhost:8081/health

# Check that your .env.e2e is configured
cat tests/e2e/.env.e2e
```
### Run All E2E Tests

```bash
# Activate virtual environment
.venv\Scripts\activate     # Windows
source .venv/bin/activate  # Linux/Mac

# Run all E2E tests
pytest tests/e2e/ -v -m e2e

# Run with detailed output
pytest tests/e2e/ -v -m e2e -s
```
### Run Specific Test Categories

```bash
# Run only IFT environment tests
pytest tests/e2e/ -v -m e2e_ift

# Run only PSI environment tests
pytest tests/e2e/ -v -m e2e_psi

# Run only PROD environment tests
pytest tests/e2e/ -v -m e2e_prod

# Run only workflow tests
pytest tests/e2e/test_full_flow_e2e.py -v

# Run only error scenario tests
pytest tests/e2e/test_error_scenarios_e2e.py -v

# Run only RAG backend tests
pytest tests/e2e/test_rag_backends_e2e.py -v
```
### Run an Individual Test

```bash
# Run a specific test function
pytest tests/e2e/test_full_flow_e2e.py::TestCompleteUserFlow::test_full_workflow_bench_mode -v
```
### Useful pytest Options

```bash
# Show print statements
pytest tests/e2e/ -v -s

# Stop on first failure
pytest tests/e2e/ -v -x

# Show local variables on failure
pytest tests/e2e/ -v -l

# Run with coverage (not typical for E2E)
pytest tests/e2e/ -v --cov=app

# Increase timeout for slow RAG backends (requires the pytest-timeout plugin)
pytest tests/e2e/ -v --timeout=300
```
## Test Structure

### Test Files

```
tests/e2e/
├── conftest.py                  # E2E fixtures and configuration
├── .env.e2e.example             # Example environment variables
├── .env.e2e                     # Your actual config (not in git)
├── README.md                    # This file
├── test_full_flow_e2e.py        # Complete user workflow tests
├── test_rag_backends_e2e.py     # RAG backend integration tests
└── test_error_scenarios_e2e.py  # Error handling and edge cases
```
### Test Markers

E2E tests use pytest markers for categorization:

- `@pytest.mark.e2e` - all E2E tests
- `@pytest.mark.e2e_ift` - IFT environment specific
- `@pytest.mark.e2e_psi` - PSI environment specific
- `@pytest.mark.e2e_prod` - PROD environment specific
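If these markers are not already registered in `pytest.ini` or `pyproject.toml`, pytest will warn about unknown marks. A minimal registration sketch, in case your setup lacks it:

```python
# conftest.py sketch: register the custom E2E markers so pytest doesn't warn.
def pytest_configure(config):
    for marker in ("e2e", "e2e_ift", "e2e_psi", "e2e_prod"):
        config.addinivalue_line("markers", f"{marker}: see tests/e2e/README.md")
```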
### Fixtures

Key fixtures from `conftest.py`:

- `check_prerequisites` - verifies all services are available
- `e2e_client` - FastAPI TestClient instance
- `e2e_auth_headers` - authenticated headers with JWT token
- `setup_test_settings` - configures user settings for all environments
- `cleanup_test_sessions` - removes test data after each test
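A typical test pulls these in as ordinary pytest fixture arguments; the body below is an empty placeholder, not a real assertion:

```python
import pytest


@pytest.mark.e2e
@pytest.mark.e2e_ift
def test_example_ift_query(e2e_client, e2e_auth_headers, cleanup_test_sessions):
    # e2e_client is the FastAPI TestClient; e2e_auth_headers carries the JWT.
    # cleanup_test_sessions removes any sessions this test creates.
    ...
```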
## Test Coverage

### Complete User Workflows (test_full_flow_e2e.py)

- **Full Workflow - Bench Mode** (outlined in the sketch below)
  - Authenticate → Get settings → Send bench query → Save session → Retrieve → Delete
- **Full Workflow - Backend Mode**
  - Authenticate → Verify PSI settings → Send backend query → Save session
- **Settings Change Affects Queries**
  - Change settings → Verify mode compatibility → Restore settings
- **Multiple Sessions Management**
  - Create sessions for all environments → List → Filter → Delete all
- **User Data Isolation**
  - Verify authentication requirements → Test access controls
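The bench-mode workflow roughly maps to the following outline; the endpoint path shown is an illustrative assumption, not a confirmed route (consult `test_full_flow_e2e.py` for the real test):

```python
# Illustrative outline only - see test_full_flow_e2e.py for the actual test.
def test_full_workflow_bench_mode(e2e_client, e2e_auth_headers):
    # 1. Settings: confirm the test user is configured for bench mode.
    settings = e2e_client.get(
        "/api/v1/settings",  # hypothetical path, for illustration only
        headers=e2e_auth_headers,
    )
    assert settings.status_code == 200
    # 2. Query: send a bench query to the configured RAG backend.
    # 3. Persist: save the result as a session.
    # 4. Retrieve: read the session back and compare.
    # 5. Delete: remove the session (see Manual Cleanup for the endpoint shape).
    ...
```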
### RAG Backend Tests (test_rag_backends_e2e.py)

- **Environment-Specific Queries**
  - IFT bench mode queries
  - PSI backend mode queries
  - PROD bench mode queries
- **Backend Mode Features**
  - Session management
  - Session reset functionality
  - Sequential question processing
- **Query Parameters** (example payload below)
  - `with_docs` parameter handling
  - Multiple questions in one request
  - Cross-environment queries
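A bench query exercising these parameters might look like the payload below; apart from `with_docs`, the field names and endpoint path are assumptions for illustration:

```python
# Hypothetical request payload exercising with_docs and a multi-question batch.
payload = {
    "questions": ["First question?", "Second question?"],  # batch of questions
    "with_docs": True,  # ask the RAG backend to return supporting documents
}
response = e2e_client.post(
    "/api/v1/analysis/bench",  # illustrative path, not a confirmed route
    json=payload,
    headers=e2e_auth_headers,
)
assert response.status_code == 200
```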
### Error Scenarios (test_error_scenarios_e2e.py)

- **Authentication Errors** (see the sketch after this list)
  - Missing auth token
  - Invalid JWT token
  - Malformed authorization header
- **Validation Errors**
  - Invalid environment names
  - Empty questions list
  - Missing required fields
  - Invalid data structures
- **Mode Compatibility**
  - Bench query with backend mode settings
  - Backend query with bench mode settings
- **Resource Not Found**
  - Nonexistent session IDs
  - Invalid UUID formats
- **Settings Errors**
  - Invalid API modes
  - Invalid environments
- **Edge Cases**
  - Very long questions
  - Special characters
  - Large number of questions
  - Pagination edge cases
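The authentication-error cases boil down to asserting the 401 path. A minimal sketch, reusing the sessions endpoint listed under Manual Cleanup below:

```python
def test_invalid_token_is_rejected(e2e_client):
    """A garbage bearer token must never reach the RAG backends."""
    response = e2e_client.get(
        "/api/v1/analysis/sessions",  # any authenticated endpoint works here
        headers={"Authorization": "Bearer not-a-real-jwt"},
    )
    assert response.status_code == 401
```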
## Timeouts
E2E tests use generous timeouts due to real RAG backend processing:
- Default query timeout: 120 seconds (2 minutes)
- Large batch queries: 180 seconds (3 minutes)
- DB API operations: 30 seconds
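With the pytest-timeout plugin installed, a larger per-test budget can also be declared directly on a slow test instead of via a global `--timeout` flag:

```python
import pytest


# pytest-timeout marker: allow a large batch query up to three minutes.
@pytest.mark.timeout(180)
def test_large_batch_query(e2e_client, e2e_auth_headers):
    ...
```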
If tests time out frequently, check:
- RAG backend performance
- Network connectivity
- Server load
## Cleanup

Tests automatically clean up after themselves:

- The `cleanup_test_sessions` fixture removes all sessions created during tests (sketched below)
- Each test is isolated and doesn't affect other tests
- Failed tests may leave orphaned sessions (check manually if needed)
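A plausible shape for that fixture, using the session endpoints listed under Manual Cleanup below; the response structure and `id` field name are assumptions:

```python
import pytest


@pytest.fixture
def cleanup_test_sessions(e2e_client, e2e_auth_headers):
    """Yield to the test, then delete every session the test user owns."""
    yield
    sessions = e2e_client.get(
        "/api/v1/analysis/sessions?limit=1000", headers=e2e_auth_headers
    ).json()
    for session in sessions:  # assumes a list of objects with an "id" field
        e2e_client.delete(
            f"/api/v1/analysis/sessions/{session['id']}", headers=e2e_auth_headers
        )
```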
### Manual Cleanup

If needed, clean up test data manually:

```
# Get all sessions for the test user
GET /api/v1/analysis/sessions?limit=1000

# Delete a specific session
DELETE /api/v1/analysis/sessions/{session_id}
```
## Troubleshooting

### Tests Are Skipped

**Symptom:** All tests show as "SKIPPED"

**Causes:**

- DB API not running
- RAG backends not configured
- Missing `.env.e2e` file
- Test user doesn't exist

**Solution:** Check the prerequisite error messages for details.
### Authentication Failures

**Symptom:** Tests fail with 401 Unauthorized

**Causes:**

- Test user doesn't exist in DB API
- Invalid test login in `.env.e2e`
- JWT secret mismatch between environments

**Solution:** Verify the test user exists and the credentials are correct.
### Timeout Errors

**Symptom:** Tests time out during RAG queries

**Causes:**

- RAG backend is slow or overloaded
- Network issues
- Invalid bearer tokens
- mTLS certificate problems

**Solution:**

- Check RAG backend health
- Verify bearer tokens are valid
- Increase timeout values if needed
### Connection Refused

**Symptom:** Connection errors to services

**Causes:**

- Service not running
- Wrong host/port in configuration
- Firewall blocking connections

**Solution:** Verify all services are accessible and the configuration is correct.
### Validation Errors (422)

**Symptom:** Tests fail with 422 Unprocessable Entity

**Causes:**

- Incorrect data format in `.env.e2e`
- Missing required settings
- Invalid enum values

**Solution:** Check `.env.e2e.example` for the correct format.
## Best Practices

### When to Run E2E Tests

- **Before deploying:** always run E2E tests before a production deployment
- **After major changes:** run when modifying API endpoints or services
- **Regularly in CI/CD:** set up automated E2E testing in your pipeline
- **Not during development:** use unit/integration tests for rapid feedback

### Test Data Management

- Use a dedicated test user account (not production users)
- Tests create and delete their own data
- Don't rely on existing data in the DB
- Clean up manually if tests fail catastrophically

### Performance Considerations

- E2E tests are slow (real network calls)
- Run unit/integration tests first
- Consider running E2E tests in parallel (with caution)
- Use environment-specific markers to run a subset of tests
## CI/CD Integration

Example GitHub Actions workflow:

```yaml
e2e-tests:
  runs-on: ubuntu-latest
  services:
    db-api:
      image: your-db-api:latest
      ports:
        - 8081:8081
  steps:
    - uses: actions/checkout@v3
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.11'
    - name: Install dependencies
      run: pip install -r requirements.txt
    - name: Create .env.e2e
      run: |
        echo "E2E_DB_API_URL=${{ secrets.E2E_DB_API_URL }}" > tests/e2e/.env.e2e
        echo "E2E_TEST_LOGIN=${{ secrets.E2E_TEST_LOGIN }}" >> tests/e2e/.env.e2e
        # ... other env vars
    - name: Run E2E tests
      run: pytest tests/e2e/ -v -m e2e
```
## Contributing

When adding new E2E tests:

- Add the test to the appropriate file (workflow/backend/error)
- Use existing fixtures from `conftest.py`
- Add cleanup logic if creating persistent data
- Document any new environment variables
- Use appropriate pytest markers
- Add realistic timeout values
- Test both success and failure paths
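A starter skeleton that follows these conventions (the test name and timeout value are illustrative):

```python
import pytest


@pytest.mark.e2e
@pytest.mark.e2e_ift  # pick the marker matching the environment under test
@pytest.mark.timeout(120)  # realistic budget for a real RAG round-trip
def test_my_new_scenario(e2e_client, e2e_auth_headers, cleanup_test_sessions):
    """Describe the workflow and the failure path this test covers."""
    ...
```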
## Related Documentation

- Integration Tests - tests for DB API integration only
- Unit Tests - fast, isolated tests
- DB API Contract - external DB API specification