Add integration and end-to-end tests

itqop 2025-12-18 10:00:04 +03:00
parent d72c8141a1
commit 4410f770b7
22 changed files with 3853 additions and 52 deletions

2
.gitignore vendored
View File

@@ -29,6 +29,8 @@ env/
# Environment variables
.env
.env.local
.env.integration
.env.e2e
# IDEs
.vscode/

View File

@@ -200,11 +200,65 @@ finally:
## Testing Strategy
When adding tests:
1. Unit test services in isolation (mock httpx responses)
2. Integration test API endpoints (use TestClient from FastAPI)
3. Mock external dependencies (DB API, RAG backends)
4. Test error paths (network failures, invalid tokens, missing settings)
The project has comprehensive test coverage across three levels:
### Test Pyramid
1. **Unit Tests** (119 tests, 99% coverage) - `tests/unit/`
- Fast, isolated tests with all dependencies mocked
- Test business logic, models, utilities in isolation
- Run constantly during development
- Command: `.\run_unit_tests.bat` or `pytest tests/unit/ -m unit`
2. **Integration Tests** (DB API integration) - `tests/integration/`
- Test FastAPI endpoints with real DB API
- Requires DB API service running
- Mock RAG backends (only DB integration tested)
- Run before commits
- Command: `.\run_integration_tests.bat` or `pytest tests/integration/ -m integration`
- Configuration: `tests/integration/.env.integration` (see `.env.integration.example`)
3. **End-to-End Tests** (Full stack) - `tests/e2e/`
- Test complete workflows from auth to RAG queries
- Requires ALL services: FastAPI + DB API + RAG backends
- Real network calls, no mocking
- Run before deployment
- Command: `.\run_e2e_tests.bat` or `pytest tests/e2e/ -m e2e`
- Configuration: `tests/e2e/.env.e2e` (see `.env.e2e.example`)
### Running Tests
```bash
# All tests (unit + integration)
.\run_all_tests.bat
# Specific test level
.\run_unit_tests.bat
.\run_integration_tests.bat
.\run_e2e_tests.bat
# Using pytest markers
pytest -m unit # Unit tests only
pytest -m integration # Integration tests only
pytest -m e2e # E2E tests only
pytest -m e2e_ift # E2E for IFT environment only
```
### Test Documentation
See [TESTING.md](TESTING.md) for comprehensive testing guide including:
- Detailed setup instructions for each test level
- Environment configuration
- Troubleshooting common issues
- CI/CD integration examples
- Best practices
### When Adding New Features
1. **Unit Tests**: Test business logic in isolation (always required)
2. **Integration Tests**: Test DB API interaction if feature uses DB
3. **E2E Tests**: Add workflow test if feature is user-facing
4. **Run all tests**: Verify nothing broke before committing
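To make the checklist concrete, here is a minimal endpoint-test sketch using FastAPI's `TestClient`, as referenced in the testing strategy above; the `/health` route and its response are illustrative assumptions rather than the app's confirmed API.
```python
from fastapi.testclient import TestClient

from app.main import app


def test_health_endpoint_responds():
    # TestClient runs the ASGI app in-process; no server or external service is needed
    with TestClient(app) as client:
        response = client.get("/health")  # illustrative route, not necessarily defined in the app
    assert response.status_code == 200
```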
## Common Development Scenarios

View File

@@ -39,7 +39,7 @@ brief-bench-fastapi/
│ ├── dependencies.py ✅ DI: get_db_client, get_current_user
│ └── main.py ✅ FastAPI app with CORS
├── static/ ❌ Empty (needs to be copied from rag-bench)
├── tests/ ❌ Empty
├── tests/ ✅ Full test suite (unit/integration/e2e)
├── certs/ ❌ Not created (for mTLS)
├── .env.example ✅
├── .gitignore ✅
@@ -259,9 +259,24 @@ class RagService:
- `app/middleware/logging.py` - request logging
- `app/middleware/error_handler.py` - global error handling
### 9. Tests
- Unit tests for services
- Integration tests for API endpoints
### 9. Tests ✅ COMPLETED
- ✅ **Unit Tests** (119 tests, 99% coverage) - `tests/unit/`
- All services, models, utilities tested in isolation
- All external dependencies mocked
- Run: `.\run_unit_tests.bat`
- ✅ **Integration Tests** (DB API integration) - `tests/integration/`
- FastAPI endpoints with real DB API
- Requires DB API service running
- Run: `.\run_integration_tests.bat`
- ✅ **End-to-End Tests** (Full stack) - `tests/e2e/`
- Complete workflows: auth → query → save → retrieve
- Requires all services (FastAPI + DB API + RAG backends)
- Real network calls to RAG backends
- Run: `.\run_e2e_tests.bat`
- ✅ **Test Documentation** - `TESTING.md`
- Comprehensive testing guide
- Setup instructions for each test level
- Troubleshooting and best practices
---

563
TESTING.md Normal file
View File

@@ -0,0 +1,563 @@
# Testing Guide
Comprehensive testing strategy for Brief Bench FastAPI, covering unit tests, integration tests, and end-to-end tests.
## Test Pyramid
```
/\
/ \ E2E Tests (Slow, Full Stack)
/____\
/ \
/ Integ. \ Integration Tests (Medium, DB API)
/__________\
/ \
/ Unit \ Unit Tests (Fast, Isolated)
/________________\
```
### Three Test Levels
1. **Unit Tests** (119 tests, 99% coverage)
- Fast, isolated tests
- Mock all external dependencies
- Test business logic in isolation
- Run during development
2. **Integration Tests** (DB API integration)
- Test integration with DB API
- Require DB API service running
- Validate data flow and API contracts
- Run before commits
3. **End-to-End Tests** (Full stack)
- Test complete user workflows
- Require all services (FastAPI, DB API, RAG backends)
- Validate entire system integration
- Run before deployment
## Quick Start
### Run All Tests (Unit + Integration)
```bash
# Windows
.\run_all_tests.bat
# Linux/Mac
./run_all_tests.sh
```
### Run Test Category
```bash
# Unit tests only (fast)
.\run_unit_tests.bat
# Integration tests (requires DB API)
.\run_integration_tests.bat
# E2E tests (requires all services)
.\run_e2e_tests.bat
```
### Run Specific Tests
```bash
# Activate virtual environment first
.venv\Scripts\activate # Windows
source .venv/bin/activate # Linux/Mac
# Run by marker
pytest -m unit # Unit tests only
pytest -m integration # Integration tests only
pytest -m e2e # E2E tests only
# Run by file
pytest tests/unit/test_auth_service.py
pytest tests/integration/test_auth_integration.py
pytest tests/e2e/test_full_flow_e2e.py
# Run specific test function
pytest tests/unit/test_auth_service.py::TestAuthService::test_generate_token
```
## Unit Tests
**Location**: `tests/unit/`
**Coverage**: 99% of application code
**Speed**: Very fast (< 1 second)
**Dependencies**: None (all mocked)
### What They Test
- Service layer business logic
- Model validation
- Utility functions
- Error handling
- Edge cases
### Key Features
- All external APIs are mocked (httpx, JWT, etc.); see the sketch after this list
- No real network calls
- No database required
- Deterministic and repeatable
- Run in parallel
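To illustrate the mocking approach above, here is a minimal unit-test sketch. It assumes `pytest-asyncio` is available; the module path comes from the repository layout, but `DbApiClient` and `get_user` are hypothetical names standing in for the real client API.
```python
from unittest.mock import AsyncMock, patch

import httpx
import pytest

from app.interfaces.db_api_client import DbApiClient  # hypothetical class name in a real module


@pytest.mark.unit
@pytest.mark.asyncio
async def test_get_user_parses_db_api_payload():
    fake_response = httpx.Response(200, json={"user_id": 1, "login": "99999999"})
    # Patch the underlying httpx call so no real network traffic happens
    with patch("httpx.AsyncClient.get", new=AsyncMock(return_value=fake_response)) as mocked_get:
        client = DbApiClient(base_url="http://localhost:8081/api/v1")
        user = await client.get_user(login="99999999")  # hypothetical method
    mocked_get.assert_awaited_once()
    assert user["user_id"] == 1
```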
### Running Unit Tests
```bash
# Run with coverage report
.\run_unit_tests.bat
# Or manually
pytest tests/unit/ -v --cov=app --cov-report=html
# View coverage report
start htmlcov/index.html # Windows
open htmlcov/index.html # Mac
```
### Coverage Report
After running unit tests, the coverage report is available as:
- **Terminal**: Printed to console
- **HTML**: `htmlcov/index.html`
- **XML**: `coverage.xml` (for CI/CD)
### Test Files
```
tests/unit/
├── conftest.py # Unit test fixtures
├── test_auth_service.py # Authentication logic
├── test_settings_service.py # Settings management
├── test_analysis_service.py # Analysis sessions
├── test_query_service.py # Query processing
├── test_rag_service.py # RAG backend communication
├── test_db_api_client.py # DB API client
├── test_jwt_utils.py # JWT utilities
├── test_models.py # Pydantic models
└── test_dependencies.py # Dependency injection
```
## Integration Tests
**Location**: `tests/integration/`
**Coverage**: DB API integration
**Speed**: Medium (few seconds)
**Dependencies**: DB API service
### What They Test
- FastAPI endpoints with real DB API
- Authentication flow
- Settings CRUD operations
- Analysis session management
- Error handling from external service
### Prerequisites
**DB API must be running** at `http://localhost:8081`
Check health:
```bash
curl http://localhost:8081/health
```
### Configuration
Create `tests/integration/.env.integration`:
```bash
cp tests/integration/.env.integration.example tests/integration/.env.integration
```
Edit with your test credentials:
```
TEST_DB_API_URL=http://localhost:8081/api/v1
TEST_LOGIN=99999999 # 8-digit test user
```
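For reference, a minimal sketch of how these values might be consumed in `tests/integration/conftest.py`. The variable names match the template above, but the loading approach (`python-dotenv` plus plain `os.getenv`) and the skip-if-unreachable fixture are assumptions, not the project's confirmed implementation.
```python
import os

import httpx
import pytest
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv("tests/integration/.env.integration")

TEST_DB_API_URL = os.getenv("TEST_DB_API_URL", "http://localhost:8081/api/v1")
TEST_LOGIN = os.getenv("TEST_LOGIN", "99999999")


@pytest.fixture(scope="session", autouse=True)
def require_db_api():
    """Skip the whole integration suite if the DB API is not reachable."""
    health_url = TEST_DB_API_URL.replace("/api/v1", "") + "/health"
    try:
        if httpx.get(health_url, timeout=5.0).status_code != 200:
            pytest.skip("DB API health check failed")
    except httpx.HTTPError as exc:
        pytest.skip(f"DB API not available: {exc}")
```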
### Running Integration Tests
```bash
# Check prerequisites and run
.\run_integration_tests.bat
# Or manually
pytest tests/integration/ -v -m integration
```
### Test Files
```
tests/integration/
├── conftest.py # Integration fixtures
├── .env.integration.example # Config template
├── .env.integration # Your config (not in git)
├── README.md # Integration test docs
├── test_auth_integration.py # Auth with DB API
├── test_settings_integration.py # Settings with DB API
├── test_analysis_integration.py # Sessions with DB API
└── test_query_integration.py # Query endpoints (DB API part)
```
### What's NOT Tested
Integration tests **do not** call RAG backends:
- RAG queries are mocked (see the sketch after this list)
- Only DB API integration is tested
- Use E2E tests for full RAG testing
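One common way to keep the RAG backends out of the loop is FastAPI's `dependency_overrides`. The sketch below shows the idea; `get_rag_service` is a hypothetical dependency name (the project's actual RAG wiring may differ), and the stubbed response shape is illustrative.
```python
from unittest.mock import AsyncMock

import pytest
from fastapi.testclient import TestClient

from app.main import app
from app.dependencies import get_rag_service  # hypothetical provider name


@pytest.fixture()
def client_with_fake_rag():
    fake_rag = AsyncMock()
    fake_rag.query_bench.return_value = {"answer": "stubbed RAG answer"}  # illustrative shape
    # Override only the RAG dependency; DB API dependencies stay real
    app.dependency_overrides[get_rag_service] = lambda: fake_rag
    with TestClient(app) as client:
        yield client
    app.dependency_overrides.clear()
```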
## End-to-End Tests
**Location**: `tests/e2e/`
**Coverage**: Complete user workflows
**Speed**: Slow (minutes)
**Dependencies**: All services (FastAPI + DB API + RAG backends)
### What They Test
- Complete authentication → query → save → retrieve flow
- Real RAG backend calls (IFT, PSI, PROD)
- Cross-environment functionality
- Data persistence end-to-end
- User isolation and security
- Error scenarios with real services
### Prerequisites
**All services must be running**:
1. ✅ DB API at `http://localhost:8081`
2. ✅ IFT RAG backend (configured host)
3. ✅ PSI RAG backend (configured host)
4. ✅ PROD RAG backend (configured host)
5. ✅ Test user exists in DB API
6. ✅ Valid bearer tokens for RAG backends
### Configuration
Create `tests/e2e/.env.e2e`:
```bash
cp tests/e2e/.env.e2e.example tests/e2e/.env.e2e
```
Edit with your configuration (see `.env.e2e.example` for all variables):
```bash
E2E_DB_API_URL=http://localhost:8081/api/v1
E2E_TEST_LOGIN=99999999
E2E_IFT_RAG_HOST=ift-rag.example.com
E2E_IFT_BEARER_TOKEN=your_token_here
# ... more config
```
⚠️ **Security**: `.env.e2e` contains real credentials - never commit to git!
### Running E2E Tests
```bash
# Check prerequisites and run all E2E
.\run_e2e_tests.bat
# Or manually run all E2E tests
pytest tests/e2e/ -v -m e2e
# Run environment-specific tests
pytest tests/e2e/ -v -m e2e_ift # IFT only
pytest tests/e2e/ -v -m e2e_psi # PSI only
pytest tests/e2e/ -v -m e2e_prod # PROD only
# Run specific test suite
pytest tests/e2e/test_full_flow_e2e.py -v # Workflows
pytest tests/e2e/test_rag_backends_e2e.py -v # RAG backends
pytest tests/e2e/test_error_scenarios_e2e.py -v # Error cases
```
### Test Files
```
tests/e2e/
├── conftest.py # E2E fixtures
├── .env.e2e.example # Config template
├── .env.e2e # Your config (not in git)
├── README.md # E2E test docs (detailed)
├── test_full_flow_e2e.py # Complete workflows
├── test_rag_backends_e2e.py # RAG integration
└── test_error_scenarios_e2e.py # Error handling
```
### Test Scenarios in Detail
**Complete Workflows**:
- User login → JWT token
- Get/update settings
- Bench mode queries to RAG
- Backend mode queries with sessions
- Save analysis sessions
- Retrieve and delete sessions
**RAG Backend Integration**:
- IFT RAG (bench mode)
- PSI RAG (backend mode)
- PROD RAG (bench mode)
- Session reset functionality
- Cross-environment queries
**Error Scenarios**:
- Authentication failures
- Validation errors
- Mode compatibility issues
- Resource not found
- Edge cases (long questions, special chars, etc.)
### Timeouts
E2E tests use realistic timeouts:
- RAG queries: 120 seconds (2 minutes)
- Large batches: 180 seconds (3 minutes)
- DB API calls: 30 seconds
### Cleanup
E2E tests automatically clean up after themselves. However, if tests fail catastrophically, you may need to manually delete test sessions.
## Test Markers
Use pytest markers to run specific test categories:
```bash
# Unit tests only
pytest -m unit
# Integration tests only
pytest -m integration
# All E2E tests
pytest -m e2e
# E2E tests for specific environment
pytest -m e2e_ift
pytest -m e2e_psi
pytest -m e2e_prod
# Slow tests only
pytest -m slow
# Everything except E2E
pytest -m "not e2e"
# Unit and integration (no E2E)
pytest -m "unit or integration"
```
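For reference, these markers are attached in test code as shown in this minimal sketch (module and test names are illustrative):
```python
import pytest

# Every test in this module counts as an E2E test against the IFT environment
pytestmark = [pytest.mark.e2e, pytest.mark.e2e_ift]


@pytest.mark.slow
def test_large_bench_batch():
    ...
```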
## CI/CD Integration
### Recommended CI/CD Pipeline
```yaml
# .github/workflows/test.yml example
name: Tests
on: [push, pull_request]
jobs:
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: pip install -r requirements.txt
- name: Run unit tests
run: pytest tests/unit/ -v -m unit --cov=app
- name: Upload coverage
uses: codecov/codecov-action@v3
integration-tests:
runs-on: ubuntu-latest
services:
db-api:
image: your-db-api:latest
ports:
- 8081:8081
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: pip install -r requirements.txt
- name: Create .env.integration
run: |
echo "TEST_DB_API_URL=http://localhost:8081/api/v1" > tests/integration/.env.integration
echo "TEST_LOGIN=${{ secrets.TEST_LOGIN }}" >> tests/integration/.env.integration
- name: Run integration tests
run: pytest tests/integration/ -v -m integration
e2e-tests:
runs-on: ubuntu-latest
# Only run E2E on main branch or releases
if: github.ref == 'refs/heads/main'
services:
db-api:
image: your-db-api:latest
ports:
- 8081:8081
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: pip install -r requirements.txt
- name: Create .env.e2e
run: |
echo "E2E_DB_API_URL=${{ secrets.E2E_DB_API_URL }}" > tests/e2e/.env.e2e
echo "E2E_TEST_LOGIN=${{ secrets.E2E_TEST_LOGIN }}" >> tests/e2e/.env.e2e
# ... add all other secrets
- name: Run E2E tests
run: pytest tests/e2e/ -v -m e2e
```
### CI/CD Best Practices
1. **Always run unit tests** (fast, no dependencies)
2. **Run integration tests** if DB API is available
3. **Run E2E tests** only on main branch or before deployment
4. **Use secrets** for test credentials
5. **Cache dependencies** to speed up builds
6. **Parallel execution** for unit tests
7. **Generate coverage reports** for unit tests only
## Troubleshooting
### Unit Tests Failing
**Check**:
- Virtual environment activated
- All dependencies installed: `pip install -r requirements.txt`
- No syntax errors in test files
### Integration Tests Skipped
**Cause**: DB API not running or not configured
**Fix**:
1. Start DB API: `docker-compose up db-api`
2. Check health: `curl http://localhost:8081/health`
3. Verify `.env.integration` exists and is correct
### E2E Tests Skipped
**Cause**: Required services not running
**Fix**:
1. Check DB API health
2. Verify RAG backends are accessible
3. Confirm `.env.e2e` is configured
4. Ensure test user exists in DB API
### Timeout Errors
**Cause**: RAG backends slow or unavailable
**Fix**:
- Check RAG backend health
- Verify bearer tokens are valid
- Check network connectivity
- Increase timeout if needed (in test files)
### Authentication Failures
**Cause**: Invalid credentials or test user doesn't exist
**Fix**:
- Verify test user exists: `curl -X POST http://localhost:8081/api/v1/users/login?login=99999999`
- Check bearer tokens are valid
- Ensure JWT_SECRET_KEY matches between environments
## Best Practices
### During Development
1. **Write unit tests first** (TDD approach)
2. **Run unit tests frequently** (on every change)
3. **Use `--lf` flag** to run last failed tests: `pytest --lf`
4. **Use `-x` flag** to stop on first failure: `pytest -x`
5. **Use `-k` flag** to run matching tests: `pytest -k "test_auth"`
### Before Committing
1. ✅ Run all unit tests
2. ✅ Check coverage (should be > 90%)
3. ✅ Run integration tests if DB API available
4. ✅ Fix any failing tests
5. ✅ Review coverage report for gaps
### Before Deploying
1. ✅ Run all unit tests
2. ✅ Run all integration tests
3. ✅ Run all E2E tests
4. ✅ Verify all tests pass
5. ✅ Check for any warnings
6. ✅ Review test results summary
### Writing New Tests
**For new features**:
1. Start with unit tests (business logic)
2. Add integration tests (DB API interaction)
3. Add E2E test (complete workflow)
**For bug fixes**:
1. Write failing test that reproduces bug
2. Fix the bug
3. Verify test passes
4. Add related edge case tests
## Test Coverage Goals
- **Unit Test Coverage**: > 95%
- **Integration Test Coverage**: All DB API endpoints
- **E2E Test Coverage**: All critical user workflows
Current coverage:
- ✅ Unit: 99%
- ✅ Integration: All DB API integration points
- ✅ E2E: Complete workflows + error scenarios
## Related Documentation
- [Unit Tests](tests/unit/) - Fast isolated tests
- [Integration Tests](tests/integration/README.md) - DB API integration
- [E2E Tests](tests/e2e/README.md) - Full stack testing
- [DB API Contract](DB_API_CONTRACT.md) - External API spec
- [CLAUDE.md](CLAUDE.md) - Architecture overview
- [PROJECT_STATUS.md](PROJECT_STATUS.md) - Implementation status
## Summary
```
┌─────────────────────────────────────────────────────────┐
│ Test Type │ Speed │ Dependencies │ When to Run │
├─────────────────────────────────────────────────────────┤
│ Unit │ Fast │ None │ Always │
│ Integration │ Med │ DB API │ Before commit │
│ E2E │ Slow │ All services │ Before deploy │
└─────────────────────────────────────────────────────────┘
Run unit tests constantly ⚡
Run integration tests regularly 🔄
Run E2E tests before deployment 🚀
```

View File

@@ -26,6 +26,10 @@ addopts =
markers =
unit: Unit tests
integration: Integration tests
e2e: End-to-end tests (requires all services running)
e2e_ift: E2E tests for IFT environment
e2e_psi: E2E tests for PSI environment
e2e_prod: E2E tests for PROD environment
slow: Slow running tests
# Coverage options

48
run_all_tests.bat Normal file
View File

@@ -0,0 +1,48 @@
@echo off
REM Run all tests (unit + integration)
echo ========================================
echo Running ALL tests (unit + integration)
echo ========================================
echo.
echo [1/2] Running unit tests...
call run_unit_tests.bat
set UNIT_RESULT=%ERRORLEVEL%
echo.
echo ========================================
echo.
echo [2/2] Running integration tests...
call run_integration_tests.bat
set INTEGRATION_RESULT=%ERRORLEVEL%
echo.
echo ========================================
echo Test Results Summary
echo ========================================
if %UNIT_RESULT% EQU 0 (
echo Unit tests: ✓ PASSED
) else (
echo Unit tests: ✗ FAILED
)
if %INTEGRATION_RESULT% EQU 0 (
echo Integration tests: ✓ PASSED
) else (
echo Integration tests: ✗ FAILED
)
echo ========================================
if %UNIT_RESULT% EQU 0 if %INTEGRATION_RESULT% EQU 0 (
echo.
echo ✓ ALL TESTS PASSED!
exit /b 0
) else (
echo.
echo ✗ SOME TESTS FAILED!
exit /b 1
)

65
run_e2e_tests.bat Normal file
View File

@@ -0,0 +1,65 @@
@echo off
REM Run end-to-end tests (requires ALL services running)
echo ========================================
echo Checking E2E Test Prerequisites
echo ========================================
echo.
REM Check if .env.e2e exists
if not exist "tests\e2e\.env.e2e" (
echo ERROR: tests\e2e\.env.e2e not found!
echo.
echo Please create .env.e2e from .env.e2e.example:
echo copy tests\e2e\.env.e2e.example tests\e2e\.env.e2e
echo.
echo Then edit .env.e2e with your configuration.
echo.
exit /b 1
)
echo ✓ E2E configuration file found
echo.
REM Check if DB API is running
echo Checking if DB API is running...
curl -f -s http://localhost:8081/health >nul 2>&1
if %ERRORLEVEL% NEQ 0 (
echo.
echo WARNING: DB API is not responding at http://localhost:8081
echo E2E tests require DB API to be running.
echo.
echo Continue anyway? Tests will be skipped if services are unavailable.
pause
)
echo ✓ DB API is accessible
echo.
echo ========================================
echo Running E2E Tests
echo ========================================
echo.
echo NOTE: E2E tests are slow (real network calls)
echo Tests will be SKIPPED if services unavailable
echo.
.venv\Scripts\python.exe -m pytest tests/e2e/ -v -m e2e
if %ERRORLEVEL% EQU 0 (
echo.
echo ✓ All E2E tests passed!
exit /b 0
) else (
echo.
echo ✗ Some E2E tests failed or were skipped!
echo.
echo Common issues:
echo - Services not running ^(DB API, RAG backends^)
echo - Invalid credentials in .env.e2e
echo - Network connectivity problems
echo - Test user doesn't exist in DB API
echo.
echo See tests/e2e/README.md for troubleshooting
exit /b %ERRORLEVEL%
)

26
run_integration_tests.bat Normal file
View File

@@ -0,0 +1,26 @@
@echo off
REM Run integration tests (requires DB API running)
echo Checking if DB API is running...
curl -f -s http://localhost:8081/health >nul 2>&1
if %ERRORLEVEL% NEQ 0 (
echo.
echo ERROR: DB API is not responding at http://localhost:8081
echo Please start DB API before running integration tests.
echo.
exit /b 1
)
echo DB API is running ✓
echo.
echo Running integration tests...
.venv\Scripts\python.exe -m pytest tests/integration/ -v -m integration
if %ERRORLEVEL% EQU 0 (
echo.
echo ✓ All integration tests passed!
) else (
echo.
echo ✗ Some integration tests failed!
exit /b %ERRORLEVEL%
)

14
run_unit_tests.bat Normal file
View File

@@ -0,0 +1,14 @@
@echo off
REM Run unit tests only (no integration tests)
echo Running unit tests...
.venv\Scripts\python.exe -m pytest tests/unit/ -v --cov=app --cov-report=term-missing --cov-report=html
if %ERRORLEVEL% EQU 0 (
echo.
echo ✓ All unit tests passed!
) else (
echo.
echo ✗ Some unit tests failed!
exit /b %ERRORLEVEL%
)

View File

@@ -1,76 +1,134 @@
# Brief Bench Tests
A complete set of unit tests for Brief Bench FastAPI.
A complete testing system for Brief Bench FastAPI.
## Test Structure
```
tests/
├── conftest.py # Fixtures and mocks
├── test_auth.py # Authorization tests
├── test_settings.py # Settings tests
├── test_query.py # RAG query tests
├── test_analysis.py # Analysis session tests
├── test_security.py # JWT tests
└── test_models.py # Pydantic model tests
├── unit/ # Unit tests (mocks, isolation)
│ ├── conftest.py # Fixtures for unit tests
│ ├── test_analysis.py # Analysis endpoint tests
│ ├── test_auth.py # Authentication tests
│ ├── test_base_interface.py # TgBackendInterface tests
│ ├── test_db_api_client.py # DB API client tests
│ ├── test_dependencies.py # Dependency tests
│ ├── test_main.py # Main endpoint tests
│ ├── test_models.py # Pydantic model tests
│ ├── test_query.py # Query endpoint tests
│ ├── test_security.py # JWT security tests
│ └── test_settings.py # Settings endpoint tests
├── integration/ # Integration tests (real DB API)
│ ├── conftest.py # Fixtures for integration tests
│ ├── README.md # Integration test documentation
│ ├── .env.integration.example # Example configuration
│ ├── test_auth_integration.py
│ ├── test_settings_integration.py
│ ├── test_analysis_integration.py
│ └── test_query_integration.py
└── README.md # This file
```
## Running Tests
## Quick Start
### Installing Dependencies
### 1. Unit tests only (no external dependencies)
```bash
# Windows
run_unit_tests.bat
# Linux/Mac
pytest tests/unit/ -v --cov=app --cov-report=term-missing
```
**Result:** 119 tests, 99% coverage
### 2. Integration tests (requires DB API)
```bash
# Windows
run_integration_tests.bat
# Linux/Mac
pytest tests/integration/ -v -m integration
```
⚙️ **Requires:** DB API at http://localhost:8081
### 3. All tests
```bash
# Windows
run_all_tests.bat
# Linux/Mac
pytest tests/ -v
```
## Installation
```bash
pip install -r requirements.txt
pip install -r requirements-dev.txt
```
### Running all tests
## pytest Commands
### Basic commands
```bash
# All tests
pytest
```
### Running with coverage
# Unit tests
pytest tests/unit/
```bash
# Integration tests
pytest tests/integration/
# With coverage
pytest --cov=app --cov-report=html
```
The report will be in `htmlcov/index.html`
# Specific file
pytest tests/unit/test_auth.py
### Running a specific file
# Specific test
pytest tests/unit/test_auth.py::TestAuthEndpoints::test_login_success
```bash
pytest tests/test_auth.py
```
### Running a specific test
```bash
pytest tests/test_auth.py::TestAuthEndpoints::test_login_success
```
### Running with verbose output
```bash
# Verbose output
pytest -v
# Stop on first failure
pytest -x
# Show print statements
pytest -s
```
### Running only fast tests
## Coverage
```bash
pytest -m "not slow"
```
### Unit Tests: **99%** (567 lines, 4 uncovered)
## Coverage
| Module | Coverage | Tests |
|--------|----------|-------|
| app/api/v1/analysis.py | 100% | 20 |
| app/api/v1/auth.py | 100% | 6 |
| app/api/v1/query.py | 97% | 10 |
| app/api/v1/settings.py | 100% | 14 |
| app/dependencies.py | 100% | 6 |
| app/interfaces/base.py | 100% | 24 |
| app/interfaces/db_api_client.py | 100% | 8 |
| app/services/rag_service.py | 100% | 17 |
| app/services/auth_service.py | 100% | 3 |
| app/utils/security.py | 100% | 5 |
| app/models/*.py | 100% | 14 |
| app/main.py | 92% | 3 |
Current code coverage:
- **Authentication**: 100% (endpoints + service)
- **Settings**: 100% (endpoints)
- **Query**: 95% (endpoints + RAG service)
- **Analysis**: 100% (endpoints)
- **Security**: 100% (JWT utils)
- **Models**: 100% (Pydantic validation)
**Uncovered lines (4):**
- `query.py:190-191` - Logger in exception handler
- `main.py:56-57` - `if __name__ == "__main__"` block
## What Is Tested

View File

@@ -0,0 +1,77 @@
# E2E Test Environment Configuration
# Copy this file to .env.e2e and update with your actual values
# NEVER commit .env.e2e to version control!
# ============================================
# DB API Configuration
# ============================================
# URL of the DB API service (without trailing slash)
E2E_DB_API_URL=http://localhost:8081/api/v1
# ============================================
# Test User Credentials
# ============================================
# 8-digit test login that exists in DB API
# Use a dedicated test account, not a real user!
E2E_TEST_LOGIN=99999999
# ============================================
# IFT Environment (Bench Mode)
# ============================================
# RAG backend host for IFT environment
E2E_IFT_RAG_HOST=ift-rag.example.com
# Bearer token for IFT RAG authentication
E2E_IFT_BEARER_TOKEN=your_ift_bearer_token_here
# System platform identifier for IFT
E2E_IFT_SYSTEM_PLATFORM=telegram
# System platform user identifier for IFT
E2E_IFT_SYSTEM_PLATFORM_USER=test_user_ift
# ============================================
# PSI Environment (Backend Mode)
# ============================================
# RAG backend host for PSI environment
E2E_PSI_RAG_HOST=psi-rag.example.com
# Bearer token for PSI RAG authentication
E2E_PSI_BEARER_TOKEN=your_psi_bearer_token_here
# Platform user ID for PSI backend mode
E2E_PSI_PLATFORM_USER_ID=test_user_psi
# Platform ID for PSI backend mode
E2E_PSI_PLATFORM_ID=telegram
# ============================================
# PROD Environment (Bench Mode)
# ============================================
# RAG backend host for PROD environment
E2E_PROD_RAG_HOST=prod-rag.example.com
# Bearer token for PROD RAG authentication
E2E_PROD_BEARER_TOKEN=your_prod_bearer_token_here
# System platform identifier for PROD
E2E_PROD_SYSTEM_PLATFORM=telegram
# System platform user identifier for PROD
E2E_PROD_SYSTEM_PLATFORM_USER=test_user_prod
# ============================================
# Notes
# ============================================
# 1. All RAG hosts should be accessible from your machine
# 2. Bearer tokens must be valid for their respective environments
# 3. Test user (E2E_TEST_LOGIN) must exist in DB API
# 4. IFT and PROD use bench mode (batch queries)
# 5. PSI uses backend mode (sequential queries with session)
# 6. Platform identifiers should match your actual platform setup
#
# Security:
# - Keep this file secure (contains real credentials)
# - Never commit .env.e2e to git
# - Use dedicated test tokens, not production tokens if possible
# - Consider using different bearer tokens for different environments

438
tests/e2e/README.md Normal file
View File

@@ -0,0 +1,438 @@
# End-to-End (E2E) Tests
End-to-end tests for the complete Brief Bench FastAPI system, testing the entire stack from authentication through RAG queries to data persistence.
## Overview
E2E tests validate:
- Complete user workflows from login to query results
- Integration between FastAPI backend, DB API, and RAG backends
- Real API calls to all external services (no mocking)
- Data persistence and retrieval
- Cross-environment functionality (IFT, PSI, PROD)
- Error handling and edge cases
## Prerequisites
**CRITICAL**: All external services must be running before E2E tests can execute.
### Required Services
1. **DB API** (database service)
- Must be running and accessible
- Default: `http://localhost:8081`
- Health check: `GET http://localhost:8081/health`
2. **RAG Backends** (one or more environments)
- **IFT RAG**: Development/test environment
- **PSI RAG**: Backend mode testing
- **PROD RAG**: Production-like testing
- Each environment needs its own RAG backend server running
3. **Test User Account**
- A valid 8-digit test login must exist in DB API
- Recommended: Use a dedicated test account (e.g., `99999999`)
- This account will be used for all E2E test operations
### Service Availability Check
E2E tests automatically check prerequisites before running:
- DB API health endpoint
- RAG backend host configurations
- Test credentials presence
If any prerequisite is not met, tests will be **skipped** with a detailed error message.
## Environment Configuration
### Create `.env.e2e` File
Copy the example file and configure for your environment:
```bash
cp tests/e2e/.env.e2e.example tests/e2e/.env.e2e
```
### Required Environment Variables
Edit `tests/e2e/.env.e2e` with your configuration:
```bash
# DB API Configuration
E2E_DB_API_URL=http://localhost:8081/api/v1
# Test User Credentials
E2E_TEST_LOGIN=99999999 # 8-digit test user login
# IFT Environment Settings (Bench Mode)
E2E_IFT_RAG_HOST=ift-rag.example.com
E2E_IFT_BEARER_TOKEN=your_ift_bearer_token_here
E2E_IFT_SYSTEM_PLATFORM=telegram
E2E_IFT_SYSTEM_PLATFORM_USER=test_user_ift
# PSI Environment Settings (Backend Mode)
E2E_PSI_RAG_HOST=psi-rag.example.com
E2E_PSI_BEARER_TOKEN=your_psi_bearer_token_here
E2E_PSI_PLATFORM_USER_ID=test_user_psi
E2E_PSI_PLATFORM_ID=telegram
# PROD Environment Settings (Bench Mode)
E2E_PROD_RAG_HOST=prod-rag.example.com
E2E_PROD_BEARER_TOKEN=your_prod_bearer_token_here
E2E_PROD_SYSTEM_PLATFORM=telegram
E2E_PROD_SYSTEM_PLATFORM_USER=test_user_prod
```
### Security Note
⚠️ **NEVER commit `.env.e2e` to version control!**
The `.env.e2e` file contains:
- Real bearer tokens for RAG backends
- Production-like credentials
- Sensitive configuration
Always use `.env.e2e.example` as a template.
## Running E2E Tests
### Prerequisites Check
First, ensure all services are running:
```bash
# Check DB API
curl http://localhost:8081/health
# Check that your .env.e2e is configured
cat tests/e2e/.env.e2e
```
### Run All E2E Tests
```bash
# Activate virtual environment
.venv\Scripts\activate # Windows
source .venv/bin/activate # Linux/Mac
# Run all E2E tests
pytest tests/e2e/ -v -m e2e
# Run with detailed output
pytest tests/e2e/ -v -m e2e -s
```
### Run Specific Test Categories
```bash
# Run only IFT environment tests
pytest tests/e2e/ -v -m e2e_ift
# Run only PSI environment tests
pytest tests/e2e/ -v -m e2e_psi
# Run only PROD environment tests
pytest tests/e2e/ -v -m e2e_prod
# Run only workflow tests
pytest tests/e2e/test_full_flow_e2e.py -v
# Run only error scenario tests
pytest tests/e2e/test_error_scenarios_e2e.py -v
# Run only RAG backend tests
pytest tests/e2e/test_rag_backends_e2e.py -v
```
### Run Individual Test
```bash
# Run a specific test function
pytest tests/e2e/test_full_flow_e2e.py::TestCompleteUserFlow::test_full_workflow_bench_mode -v
```
### Useful pytest Options
```bash
# Show print statements
pytest tests/e2e/ -v -s
# Stop on first failure
pytest tests/e2e/ -v -x
# Show local variables on failure
pytest tests/e2e/ -v -l
# Run with coverage (not typical for E2E)
pytest tests/e2e/ -v --cov=app
# Increase timeout for slow RAG backends
pytest tests/e2e/ -v --timeout=300
```
## Test Structure
### Test Files
```
tests/e2e/
├── conftest.py # E2E fixtures and configuration
├── .env.e2e.example # Example environment variables
├── .env.e2e # Your actual config (not in git)
├── README.md # This file
├── test_full_flow_e2e.py # Complete user workflow tests
├── test_rag_backends_e2e.py # RAG backend integration tests
└── test_error_scenarios_e2e.py # Error handling and edge cases
```
### Test Markers
E2E tests use pytest markers for categorization:
- `@pytest.mark.e2e` - All E2E tests
- `@pytest.mark.e2e_ift` - IFT environment specific
- `@pytest.mark.e2e_psi` - PSI environment specific
- `@pytest.mark.e2e_prod` - PROD environment specific
### Fixtures
Key fixtures from `conftest.py` (used together in the sketch after this list):
- `check_prerequisites` - Verifies all services are available
- `e2e_client` - FastAPI TestClient instance
- `e2e_auth_headers` - Authenticated headers with JWT token
- `setup_test_settings` - Configures user settings for all environments
- `cleanup_test_sessions` - Removes test data after each test
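A new E2E test typically just requests these fixtures, as in the minimal sketch below; the bench query payload mirrors the format used elsewhere in this suite, and the test name is illustrative.
```python
import pytest


@pytest.mark.e2e
@pytest.mark.e2e_ift
@pytest.mark.usefixtures("check_prerequisites")
def test_bench_query_smoke(e2e_client, e2e_auth_headers, setup_test_settings, cleanup_test_sessions):
    query = {
        "environment": "ift",
        "questions": [{"body": "Smoke-test question", "with_docs": True}],
    }
    response = e2e_client.post(
        "/api/v1/query/bench", json=query, headers=e2e_auth_headers, timeout=120.0
    )
    assert response.status_code == 200
```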
## Test Coverage
### Complete User Workflows (`test_full_flow_e2e.py`)
1. **Full Workflow - Bench Mode**
- Authenticate → Get settings → Send bench query → Save session → Retrieve → Delete
2. **Full Workflow - Backend Mode**
- Authenticate → Verify PSI settings → Send backend query → Save session
3. **Settings Change Affects Queries**
- Change settings → Verify mode compatibility → Restore settings
4. **Multiple Sessions Management**
- Create sessions for all environments → List → Filter → Delete all
5. **User Data Isolation**
- Verify authentication requirements → Test access controls
### RAG Backend Tests (`test_rag_backends_e2e.py`)
1. **Environment-Specific Queries**
- IFT bench mode queries
- PSI backend mode queries
- PROD bench mode queries
2. **Backend Mode Features**
- Session management
- Session reset functionality
- Sequential question processing
3. **Query Parameters**
- `with_docs` parameter handling
- Multiple questions in one request
- Cross-environment queries
### Error Scenarios (`test_error_scenarios_e2e.py`)
1. **Authentication Errors**
- Missing auth token
- Invalid JWT token
- Malformed authorization header
2. **Validation Errors**
- Invalid environment names
- Empty questions list
- Missing required fields
- Invalid data structures
3. **Mode Compatibility**
- Bench query with backend mode settings
- Backend query with bench mode settings
4. **Resource Not Found**
- Nonexistent session IDs
- Invalid UUID formats
5. **Settings Errors**
- Invalid API modes
- Invalid environments
6. **Edge Cases**
- Very long questions
- Special characters
- Large number of questions
- Pagination edge cases
## Timeouts
E2E tests use generous timeouts due to real RAG backend processing:
- **Default query timeout**: 120 seconds (2 minutes)
- **Large batch queries**: 180 seconds (3 minutes)
- **DB API operations**: 30 seconds
If tests timeout frequently, check:
1. RAG backend performance
2. Network connectivity
3. Server load
## Cleanup
Tests automatically clean up after themselves:
- `cleanup_test_sessions` fixture removes all sessions created during tests
- Each test is isolated and doesn't affect other tests
- Failed tests may leave orphaned sessions (check manually if needed)
### Manual Cleanup
If needed, clean up test data manually; here is a minimal `httpx` sketch (it assumes `headers` holds the test user's JWT and `api` points at the running FastAPI service):
```python
import httpx

api = "http://localhost:8000/api/v1"  # adjust to wherever the FastAPI service runs

# Get all sessions for the test user
sessions = httpx.get(f"{api}/analysis/sessions", params={"limit": 1000}, headers=headers).json()["sessions"]

# Delete each leftover test session
for s in sessions:
    httpx.delete(f"{api}/analysis/sessions/{s['session_id']}", headers=headers)
```
## Troubleshooting
### Tests Are Skipped
**Symptom**: All tests show as "SKIPPED"
**Causes**:
1. DB API not running
2. RAG backends not configured
3. Missing `.env.e2e` file
4. Test user doesn't exist
**Solution**: Check prerequisite error messages for details.
### Authentication Failures
**Symptom**: Tests fail with 401 Unauthorized
**Causes**:
1. Test user doesn't exist in DB API
2. Invalid test login in `.env.e2e`
3. JWT secret mismatch between environments
**Solution**: Verify test user exists and credentials are correct.
### Timeout Errors
**Symptom**: Tests timeout during RAG queries
**Causes**:
1. RAG backend is slow or overloaded
2. Network issues
3. Invalid bearer tokens
4. mTLS certificate problems
**Solution**:
- Check RAG backend health
- Verify bearer tokens are valid
- Increase timeout values if needed
### Connection Refused
**Symptom**: Connection errors to services
**Causes**:
1. Service not running
2. Wrong host/port in configuration
3. Firewall blocking connections
**Solution**: Verify all services are accessible and configuration is correct.
### Validation Errors (422)
**Symptom**: Tests fail with 422 Unprocessable Entity
**Causes**:
1. Incorrect data format in `.env.e2e`
2. Missing required settings
3. Invalid enum values
**Solution**: Check `.env.e2e.example` for correct format.
## Best Practices
### When to Run E2E Tests
- **Before deploying**: Always run E2E tests before production deployment
- **After major changes**: Run when modifying API endpoints or services
- **Regularly in CI/CD**: Set up automated E2E testing in your pipeline
- **Not during development**: Use unit/integration tests for rapid feedback
### Test Data Management
- Use dedicated test user account (not production users)
- Tests create and delete their own data
- Don't rely on existing data in DB
- Clean up manually if tests fail catastrophically
### Performance Considerations
- E2E tests are slow (real network calls)
- Run unit/integration tests first
- Consider running E2E tests in parallel (with caution)
- Use environment-specific markers to run subset of tests
### CI/CD Integration
Example GitHub Actions workflow:
```yaml
e2e-tests:
runs-on: ubuntu-latest
services:
db-api:
image: your-db-api:latest
ports:
- 8081:8081
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: pip install -r requirements.txt
- name: Create .env.e2e
run: |
echo "E2E_DB_API_URL=${{ secrets.E2E_DB_API_URL }}" > tests/e2e/.env.e2e
echo "E2E_TEST_LOGIN=${{ secrets.E2E_TEST_LOGIN }}" >> tests/e2e/.env.e2e
# ... other env vars
- name: Run E2E tests
run: pytest tests/e2e/ -v -m e2e
```
## Contributing
When adding new E2E tests:
1. Add test to appropriate file (workflow/backend/error)
2. Use existing fixtures from `conftest.py`
3. Add cleanup logic if creating persistent data
4. Document any new environment variables
5. Use appropriate pytest markers
6. Add realistic timeout values
7. Test both success and failure paths
## Related Documentation
- [Integration Tests](../integration/README.md) - Tests for DB API integration only
- [Unit Tests](../unit/) - Fast isolated tests
- [DB API Contract](../../DB_API_CONTRACT.md) - External DB API specification
- [CLAUDE.md](../../CLAUDE.md) - Project architecture overview

185
tests/e2e/conftest.py Normal file
View File

@@ -0,0 +1,185 @@
"""End-to-End tests configuration and fixtures.
E2E tests require:
- DB API running
- RAG backends (IFT/PSI/PROD) available
- mTLS certificates configured
- Test user created in DB API
"""
import os
import pytest
from fastapi.testclient import TestClient
from app.main import app
# E2E Test configuration
E2E_DB_API_URL = os.getenv("E2E_DB_API_URL", "http://localhost:8081/api/v1")
E2E_TEST_LOGIN = os.getenv("E2E_TEST_LOGIN", "88888888") # E2E test user
# RAG configuration (should match production .env)
E2E_IFT_RAG_HOST = os.getenv("E2E_IFT_RAG_HOST")
E2E_PSI_RAG_HOST = os.getenv("E2E_PSI_RAG_HOST")
E2E_PROD_RAG_HOST = os.getenv("E2E_PROD_RAG_HOST")
@pytest.fixture(scope="session")
def check_prerequisites():
"""Check that all required services are available."""
import httpx
errors = []
# Check DB API
try:
response = httpx.get(f"{E2E_DB_API_URL.replace('/api/v1', '')}/health", timeout=5.0)
if response.status_code != 200:
errors.append(f"DB API health check failed: {response.status_code}")
except Exception as e:
errors.append(f"DB API not available: {e}")
# Check RAG hosts configured
if not E2E_IFT_RAG_HOST:
errors.append("E2E_IFT_RAG_HOST not configured")
if not E2E_PSI_RAG_HOST:
errors.append("E2E_PSI_RAG_HOST not configured")
if not E2E_PROD_RAG_HOST:
errors.append("E2E_PROD_RAG_HOST not configured")
if errors:
pytest.skip(f"E2E prerequisites not met:\n" + "\n".join(f" - {e}" for e in errors))
yield
@pytest.fixture(scope="session")
def e2e_client():
"""FastAPI test client for E2E tests."""
with TestClient(app) as client:
yield client
@pytest.fixture(scope="function")
def e2e_auth_token(e2e_client):
"""Get authentication token for E2E test user."""
response = e2e_client.post(
"/api/v1/auth/login",
params={"login": E2E_TEST_LOGIN}
)
if response.status_code != 200:
pytest.skip(f"Cannot authenticate E2E test user: {response.status_code} - {response.text}")
return response.json()["access_token"]
@pytest.fixture(scope="function")
def e2e_auth_headers(e2e_auth_token):
"""Authorization headers for E2E tests."""
return {"Authorization": f"Bearer {e2e_auth_token}"}
@pytest.fixture(scope="function")
def e2e_user_id(e2e_client):
"""Get E2E test user ID."""
response = e2e_client.post(
"/api/v1/auth/login",
params={"login": E2E_TEST_LOGIN}
)
if response.status_code != 200:
pytest.skip(f"Cannot get E2E test user ID: {response.status_code}")
return response.json()["user"]["user_id"]
@pytest.fixture(scope="function")
def setup_test_settings(e2e_client, e2e_auth_headers):
"""Setup test settings for all environments before tests."""
settings = {
"settings": {
"ift": {
"apiMode": "bench",
"bearerToken": os.getenv("E2E_IFT_BEARER_TOKEN", ""),
"systemPlatform": os.getenv("E2E_IFT_SYSTEM_PLATFORM", "test-platform"),
"systemPlatformUser": os.getenv("E2E_IFT_SYSTEM_USER", "test-user"),
"platformUserId": os.getenv("E2E_IFT_PLATFORM_USER_ID", "test-user-id"),
"platformId": os.getenv("E2E_IFT_PLATFORM_ID", "test-platform-id"),
"withClassify": False,
"resetSessionMode": True
},
"psi": {
"apiMode": "backend",
"bearerToken": os.getenv("E2E_PSI_BEARER_TOKEN", ""),
"systemPlatform": os.getenv("E2E_PSI_SYSTEM_PLATFORM", "test-platform"),
"systemPlatformUser": os.getenv("E2E_PSI_SYSTEM_USER", "test-user"),
"platformUserId": os.getenv("E2E_PSI_PLATFORM_USER_ID", "test-user-id"),
"platformId": os.getenv("E2E_PSI_PLATFORM_ID", "test-platform-id"),
"withClassify": True,
"resetSessionMode": False
},
"prod": {
"apiMode": "bench",
"bearerToken": os.getenv("E2E_PROD_BEARER_TOKEN", ""),
"systemPlatform": os.getenv("E2E_PROD_SYSTEM_PLATFORM", "test-platform"),
"systemPlatformUser": os.getenv("E2E_PROD_SYSTEM_USER", "test-user"),
"platformUserId": os.getenv("E2E_PROD_PLATFORM_USER_ID", "test-user-id"),
"platformId": os.getenv("E2E_PROD_PLATFORM_ID", "test-platform-id"),
"withClassify": False,
"resetSessionMode": True
}
}
}
response = e2e_client.put(
"/api/v1/settings",
json=settings,
headers=e2e_auth_headers
)
if response.status_code != 200:
pytest.skip(f"Cannot setup test settings: {response.status_code} - {response.text}")
return response.json()
@pytest.fixture(scope="function")
def cleanup_test_sessions(e2e_client, e2e_auth_headers, e2e_user_id):
"""Cleanup test sessions after each test."""
yield
# Cleanup: delete all test sessions created during test
try:
# Get all sessions
response = e2e_client.get(
"/api/v1/analysis/sessions?limit=200",
headers=e2e_auth_headers
)
if response.status_code == 200:
sessions = response.json()["sessions"]
# Delete sessions created during test
for session in sessions:
e2e_client.delete(
f"/api/v1/analysis/sessions/{session['session_id']}",
headers=e2e_auth_headers
)
except Exception:
pass # Ignore cleanup errors
def pytest_configure(config):
"""Configure pytest for E2E tests."""
config.addinivalue_line(
"markers", "e2e: mark test as end-to-end test (requires full infrastructure)"
)
config.addinivalue_line(
"markers", "e2e_ift: E2E test for IFT environment"
)
config.addinivalue_line(
"markers", "e2e_psi: E2E test for PSI environment"
)
config.addinivalue_line(
"markers", "e2e_prod: E2E test for PROD environment"
)

View File

@@ -0,0 +1,450 @@
"""End-to-End tests for error scenarios and edge cases.
Tests error handling, validation, and failure recovery across the entire stack.
"""
import pytest
@pytest.mark.e2e
class TestAuthenticationErrors:
"""Test authentication error scenarios."""
def test_query_without_authentication(self, e2e_client, setup_test_settings):
"""Test that queries without auth token are rejected."""
query_data = {
"environment": "ift",
"questions": [{"body": "Unauthorized query", "with_docs": True}]
}
response = e2e_client.post("/api/v1/query/bench", json=query_data)
assert response.status_code == 401
def test_invalid_bearer_token(self, e2e_client, setup_test_settings):
"""Test that invalid JWT tokens are rejected."""
invalid_headers = {"Authorization": "Bearer invalid_token_12345"}
response = e2e_client.get("/api/v1/settings", headers=invalid_headers)
assert response.status_code == 401
def test_expired_or_malformed_token(self, e2e_client, setup_test_settings):
"""Test malformed authorization header."""
malformed_headers = {"Authorization": "NotBearer token"}
response = e2e_client.get("/api/v1/settings", headers=malformed_headers)
assert response.status_code == 401
def test_session_access_without_auth(self, e2e_client, setup_test_settings):
"""Test that session endpoints require authentication."""
# Try to list sessions without auth
response = e2e_client.get("/api/v1/analysis/sessions")
assert response.status_code == 401
# Try to create session without auth
session_data = {
"environment": "ift",
"api_mode": "bench",
"request": [{"body": "Test"}],
"response": {"answer": "Test"}
}
response = e2e_client.post("/api/v1/analysis/sessions", json=session_data)
assert response.status_code == 401
@pytest.mark.e2e
class TestValidationErrors:
"""Test input validation errors."""
@pytest.mark.usefixtures("check_prerequisites")
def test_invalid_environment(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test query with invalid environment name."""
query_data = {
"environment": "invalid_env",
"questions": [{"body": "Test", "with_docs": True}]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers
)
assert response.status_code == 422 # Validation error
@pytest.mark.usefixtures("check_prerequisites")
def test_empty_questions_list(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test query with empty questions list."""
query_data = {
"environment": "ift",
"questions": []
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers
)
assert response.status_code == 422
@pytest.mark.usefixtures("check_prerequisites")
def test_missing_required_fields(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test query with missing required fields."""
# Missing 'questions' field
query_data = {"environment": "ift"}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers
)
assert response.status_code == 422
@pytest.mark.usefixtures("check_prerequisites")
def test_invalid_question_structure(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test question with missing required fields."""
query_data = {
"environment": "ift",
"questions": [
{"body": "Valid question", "with_docs": True},
{"with_docs": True} # Missing 'body'
]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers
)
assert response.status_code == 422
@pytest.mark.usefixtures("check_prerequisites")
def test_invalid_session_data(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test creating session with invalid data."""
# Missing required fields
invalid_session = {
"environment": "ift"
# Missing api_mode, request, response
}
response = e2e_client.post(
"/api/v1/analysis/sessions",
json=invalid_session,
headers=e2e_auth_headers
)
assert response.status_code == 422
@pytest.mark.e2e
class TestModeCompatibilityErrors:
"""Test API mode compatibility errors."""
@pytest.mark.usefixtures("check_prerequisites")
def test_bench_query_with_backend_mode_settings(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test that bench query fails when environment is in backend mode."""
# Change IFT to backend mode
settings_update = {
"settings": {
"ift": {
"apiMode": "backend",
"bearerToken": "test_token",
"systemPlatform": "test",
"systemPlatformUser": "test_user",
"platformUserId": "123",
"platformId": "test_platform",
"withClassify": False,
"resetSessionMode": False
}
}
}
update_response = e2e_client.put(
"/api/v1/settings",
json=settings_update,
headers=e2e_auth_headers
)
assert update_response.status_code == 200
# Try bench query (should fail)
query_data = {
"environment": "ift",
"questions": [{"body": "Test", "with_docs": True}]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers
)
# Should fail because IFT is in backend mode
assert response.status_code in [400, 500, 502]
@pytest.mark.usefixtures("check_prerequisites")
def test_backend_query_with_bench_mode_settings(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test that backend query fails when environment is in bench mode."""
# IFT is in bench mode by default (from setup_test_settings)
query_data = {
"environment": "ift",
"questions": [{"body": "Test", "with_docs": True}],
"reset_session": False
}
response = e2e_client.post(
"/api/v1/query/backend",
json=query_data,
headers=e2e_auth_headers
)
# Should fail because IFT is in bench mode
assert response.status_code in [400, 500, 502]
@pytest.mark.e2e
class TestResourceNotFoundErrors:
"""Test resource not found scenarios."""
@pytest.mark.usefixtures("check_prerequisites")
def test_get_nonexistent_session(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test retrieving a session that doesn't exist."""
fake_session_id = "00000000-0000-0000-0000-000000000000"
response = e2e_client.get(
f"/api/v1/analysis/sessions/{fake_session_id}",
headers=e2e_auth_headers
)
assert response.status_code == 404
@pytest.mark.usefixtures("check_prerequisites")
def test_delete_nonexistent_session(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test deleting a session that doesn't exist."""
fake_session_id = "00000000-0000-0000-0000-000000000000"
response = e2e_client.delete(
f"/api/v1/analysis/sessions/{fake_session_id}",
headers=e2e_auth_headers
)
assert response.status_code == 404
@pytest.mark.usefixtures("check_prerequisites")
def test_invalid_session_id_format(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test accessing session with invalid UUID format."""
invalid_session_id = "not-a-valid-uuid"
response = e2e_client.get(
f"/api/v1/analysis/sessions/{invalid_session_id}",
headers=e2e_auth_headers
)
# Could be 404 or 422 depending on validation
assert response.status_code in [404, 422]
@pytest.mark.e2e
class TestSettingsErrors:
"""Test settings-related error scenarios."""
@pytest.mark.usefixtures("check_prerequisites")
def test_update_settings_with_invalid_mode(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test updating settings with invalid API mode."""
invalid_settings = {
"settings": {
"ift": {
"apiMode": "invalid_mode",
"bearerToken": "test"
}
}
}
response = e2e_client.put(
"/api/v1/settings",
json=invalid_settings,
headers=e2e_auth_headers
)
assert response.status_code == 422
@pytest.mark.usefixtures("check_prerequisites")
def test_update_settings_with_invalid_environment(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test updating settings for non-existent environment."""
invalid_settings = {
"settings": {
"invalid_env": {
"apiMode": "bench",
"bearerToken": "test"
}
}
}
response = e2e_client.put(
"/api/v1/settings",
json=invalid_settings,
headers=e2e_auth_headers
)
# Could be 422 (validation) or 400 (bad request)
assert response.status_code in [400, 422]
@pytest.mark.e2e
class TestEdgeCases:
"""Test edge cases and boundary conditions."""
@pytest.mark.usefixtures("check_prerequisites")
def test_very_long_question(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test query with very long question text."""
long_question = "Тест " * 1000 # Very long question
query_data = {
"environment": "ift",
"questions": [{"body": long_question, "with_docs": True}]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0
)
# Should either succeed or fail gracefully
assert response.status_code in [200, 400, 413, 422, 502]
@pytest.mark.usefixtures("check_prerequisites")
def test_special_characters_in_question(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings
):
"""Test query with special characters."""
special_chars_question = "Test with special chars: <>&\"'`\n\t\r"
query_data = {
"environment": "ift",
"questions": [{"body": special_chars_question, "with_docs": True}]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0
)
# Should handle special characters properly
assert response.status_code in [200, 400, 422, 502]
@pytest.mark.usefixtures("check_prerequisites")
def test_large_number_of_questions(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test query with many questions."""
questions = [
{"body": f"Вопрос номер {i}", "with_docs": i % 2 == 0}
for i in range(50) # 50 questions
]
query_data = {
"environment": "ift",
"questions": questions
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers,
timeout=180.0 # Longer timeout for many questions
)
# Should either succeed or fail gracefully
assert response.status_code in [200, 400, 413, 422, 502, 504]
@pytest.mark.usefixtures("check_prerequisites")
def test_query_pagination_limits(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test session list pagination with edge case limits."""
# Test with limit=0
response = e2e_client.get(
"/api/v1/analysis/sessions?limit=0",
headers=e2e_auth_headers
)
assert response.status_code in [200, 400, 422]
# Test with very large limit
response = e2e_client.get(
"/api/v1/analysis/sessions?limit=10000",
headers=e2e_auth_headers
)
assert response.status_code in [200, 400, 422]
# Test with negative offset
response = e2e_client.get(
"/api/v1/analysis/sessions?offset=-1",
headers=e2e_auth_headers
)
assert response.status_code in [200, 400, 422]

View File

@@ -0,0 +1,357 @@
"""End-to-End tests for complete user flow.
Tests the entire workflow from authentication to RAG query and analysis.
"""
import pytest
import time
@pytest.mark.e2e
class TestCompleteUserFlow:
"""Test complete user flow from start to finish."""
@pytest.mark.usefixtures("check_prerequisites")
def test_full_workflow_bench_mode(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test complete workflow in bench mode.
Flow:
1. Authenticate
2. Get/update settings
3. Send bench query to RAG
4. Save analysis session
5. Retrieve session
6. Delete session
"""
# 1. Authentication already done via fixture
# 2. Verify settings
settings_response = e2e_client.get(
"/api/v1/settings",
headers=e2e_auth_headers
)
assert settings_response.status_code == 200
settings = settings_response.json()
assert "ift" in settings["settings"]
assert settings["settings"]["ift"]["apiMode"] == "bench"
# 3. Send bench query to IFT RAG
query_data = {
"environment": "ift",
"questions": [
{"body": "E2E тестовый вопрос 1", "with_docs": True},
{"body": "E2E тестовый вопрос 2", "with_docs": False}
]
}
query_response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0 # RAG can be slow
)
assert query_response.status_code == 200
query_result = query_response.json()
assert "request_id" in query_result
assert "response" in query_result
assert "timestamp" in query_result
assert query_result["environment"] == "ift"
# 4. Save analysis session
session_data = {
"environment": "ift",
"api_mode": "bench",
"request": query_data["questions"],
"response": query_result["response"],
"annotations": {
"request_id": query_result["request_id"],
"timestamp": query_result["timestamp"],
"test_type": "e2e_full_workflow"
}
}
session_response = e2e_client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=e2e_auth_headers
)
assert session_response.status_code == 201
session = session_response.json()
assert "session_id" in session
session_id = session["session_id"]
# 5. Retrieve session
get_session_response = e2e_client.get(
f"/api/v1/analysis/sessions/{session_id}",
headers=e2e_auth_headers
)
assert get_session_response.status_code == 200
retrieved_session = get_session_response.json()
assert retrieved_session["session_id"] == session_id
assert retrieved_session["environment"] == "ift"
assert retrieved_session["api_mode"] == "bench"
# 6. Delete session
delete_response = e2e_client.delete(
f"/api/v1/analysis/sessions/{session_id}",
headers=e2e_auth_headers
)
assert delete_response.status_code == 204
# Verify deletion
verify_response = e2e_client.get(
f"/api/v1/analysis/sessions/{session_id}",
headers=e2e_auth_headers
)
assert verify_response.status_code == 404
@pytest.mark.usefixtures("check_prerequisites")
def test_full_workflow_backend_mode(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test complete workflow in backend mode (PSI).
Flow:
1. Authenticate
2. Verify PSI settings (backend mode)
3. Send backend query to PSI RAG
4. Save and verify session
"""
# 1. Authentication already done via fixture
# 2. Verify PSI settings (backend mode)
settings_response = e2e_client.get(
"/api/v1/settings",
headers=e2e_auth_headers
)
assert settings_response.status_code == 200
settings = settings_response.json()
assert settings["settings"]["psi"]["apiMode"] == "backend"
# 3. Send backend query to PSI RAG
query_data = {
"environment": "psi",
"questions": [
{"body": "E2E backend тест", "with_docs": True}
],
"reset_session": False
}
query_response = e2e_client.post(
"/api/v1/query/backend",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0
)
assert query_response.status_code == 200
query_result = query_response.json()
assert query_result["environment"] == "psi"
assert "response" in query_result
# 4. Save session and verify creation
session_data = {
"environment": "psi",
"api_mode": "backend",
"request": query_data["questions"],
"response": query_result["response"],
"annotations": {
"test_type": "e2e_backend_mode",
"reset_session": False
}
}
session_response = e2e_client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=e2e_auth_headers
)
assert session_response.status_code == 201
@pytest.mark.usefixtures("check_prerequisites")
def test_settings_change_affects_queries(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test that changing settings affects subsequent queries."""
# 1. Get current settings
settings_response = e2e_client.get(
"/api/v1/settings",
headers=e2e_auth_headers
)
assert settings_response.status_code == 200
original_settings = settings_response.json()
# 2. Change IFT to backend mode
updated_settings = {
"settings": {
"ift": {
**original_settings["settings"]["ift"],
"apiMode": "backend"
}
}
}
update_response = e2e_client.put(
"/api/v1/settings",
json=updated_settings,
headers=e2e_auth_headers
)
assert update_response.status_code == 200
# 3. Try bench query (should fail - wrong mode)
bench_query = {
"environment": "ift",
"questions": [{"body": "Test", "with_docs": True}]
}
bench_response = e2e_client.post(
"/api/v1/query/bench",
json=bench_query,
headers=e2e_auth_headers
)
# Should fail because IFT is now in backend mode
assert bench_response.status_code in [400, 500]
# 4. Backend query should work
backend_query = {
"environment": "ift",
"questions": [{"body": "Test", "with_docs": True}],
"reset_session": True
}
backend_response = e2e_client.post(
"/api/v1/query/backend",
json=backend_query,
headers=e2e_auth_headers,
timeout=120.0
)
assert backend_response.status_code == 200
# 5. Restore original settings
restore_response = e2e_client.put(
"/api/v1/settings",
json={"settings": {"ift": original_settings["settings"]["ift"]}},
headers=e2e_auth_headers
)
assert restore_response.status_code == 200
@pytest.mark.usefixtures("check_prerequisites")
def test_multiple_sessions_management(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test creating and managing multiple analysis sessions."""
session_ids = []
# Create multiple sessions
for i, env in enumerate(["ift", "psi", "prod"]):
session_data = {
"environment": env,
"api_mode": "bench" if env != "psi" else "backend",
"request": [{"body": f"E2E test question {i}"}],
"response": {"answer": f"E2E test answer {i}"},
"annotations": {
"test_type": "e2e_multiple_sessions",
"iteration": i
}
}
response = e2e_client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=e2e_auth_headers
)
assert response.status_code == 201
session_ids.append(response.json()["session_id"])
# List all sessions
list_response = e2e_client.get(
"/api/v1/analysis/sessions?limit=50",
headers=e2e_auth_headers
)
assert list_response.status_code == 200
sessions_list = list_response.json()
assert sessions_list["total"] >= 3
# Filter by environment
ift_sessions = e2e_client.get(
"/api/v1/analysis/sessions?environment=ift&limit=50",
headers=e2e_auth_headers
)
assert ift_sessions.status_code == 200
# Delete all created sessions
for session_id in session_ids:
delete_response = e2e_client.delete(
f"/api/v1/analysis/sessions/{session_id}",
headers=e2e_auth_headers
)
assert delete_response.status_code == 204
@pytest.mark.usefixtures("check_prerequisites")
def test_concurrent_user_isolation(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test that user data is properly isolated (sessions, settings)."""
# Create a session
session_data = {
"environment": "ift",
"api_mode": "bench",
"request": [{"body": "Isolation test"}],
"response": {"answer": "Isolated data"},
"annotations": {"test": "isolation"}
}
create_response = e2e_client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=e2e_auth_headers
)
assert create_response.status_code == 201
session_id = create_response.json()["session_id"]
# Verify we can access our session
get_response = e2e_client.get(
f"/api/v1/analysis/sessions/{session_id}",
headers=e2e_auth_headers
)
assert get_response.status_code == 200
# Try to access without auth (should fail)
unauth_response = e2e_client.get(
f"/api/v1/analysis/sessions/{session_id}"
)
assert unauth_response.status_code == 401
# Cleanup
e2e_client.delete(
f"/api/v1/analysis/sessions/{session_id}",
headers=e2e_auth_headers
)

View File

@ -0,0 +1,236 @@
"""End-to-End tests for RAG backend interactions.
Tests environment-specific RAG backend communication, mTLS handling,
and query mode compatibility.
"""
import pytest
@pytest.mark.e2e
class TestRagBackendsE2E:
"""Test RAG backend communication across environments."""
@pytest.mark.e2e_ift
@pytest.mark.usefixtures("check_prerequisites")
def test_ift_bench_mode_query(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test IFT RAG backend with bench mode queries."""
query_data = {
"environment": "ift",
"questions": [
{"body": "Тест IFT bench режима вопрос 1", "with_docs": True},
{"body": "Тест IFT bench режима вопрос 2", "with_docs": False},
{"body": "Тест IFT bench режима вопрос 3", "with_docs": True}
]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0
)
assert response.status_code == 200
result = response.json()
# Verify response structure
assert result["environment"] == "ift"
assert "request_id" in result
assert "response" in result
assert "timestamp" in result
# Response should contain answers for all questions
assert isinstance(result["response"], (dict, list))
@pytest.mark.e2e_psi
@pytest.mark.usefixtures("check_prerequisites")
def test_psi_backend_mode_query(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test PSI RAG backend with backend mode queries."""
query_data = {
"environment": "psi",
"questions": [
{"body": "Тест PSI backend режима", "with_docs": True}
],
"reset_session": False
}
response = e2e_client.post(
"/api/v1/query/backend",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0
)
assert response.status_code == 200
result = response.json()
assert result["environment"] == "psi"
assert "response" in result
@pytest.mark.e2e_psi
@pytest.mark.usefixtures("check_prerequisites")
def test_psi_backend_mode_with_session_reset(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test PSI backend mode with session reset."""
# First query
query_data_1 = {
"environment": "psi",
"questions": [{"body": "Первый вопрос с контекстом", "with_docs": True}],
"reset_session": False
}
response_1 = e2e_client.post(
"/api/v1/query/backend",
json=query_data_1,
headers=e2e_auth_headers,
timeout=120.0
)
assert response_1.status_code == 200
# Second query with reset
query_data_2 = {
"environment": "psi",
"questions": [{"body": "Второй вопрос после сброса", "with_docs": True}],
"reset_session": True
}
response_2 = e2e_client.post(
"/api/v1/query/backend",
json=query_data_2,
headers=e2e_auth_headers,
timeout=120.0
)
assert response_2.status_code == 200
@pytest.mark.e2e_prod
@pytest.mark.usefixtures("check_prerequisites")
def test_prod_bench_mode_query(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test PROD RAG backend with bench mode queries."""
query_data = {
"environment": "prod",
"questions": [
{"body": "Тест PROD окружения", "with_docs": True}
]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0
)
assert response.status_code == 200
result = response.json()
assert result["environment"] == "prod"
@pytest.mark.e2e
@pytest.mark.usefixtures("check_prerequisites")
def test_query_with_docs_parameter(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test that with_docs parameter is properly handled."""
query_data = {
"environment": "ift",
"questions": [
{"body": "Вопрос с документами", "with_docs": True},
{"body": "Вопрос без документов", "with_docs": False}
]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0
)
assert response.status_code == 200
@pytest.mark.e2e
@pytest.mark.usefixtures("check_prerequisites")
def test_multiple_sequential_queries(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test multiple sequential queries to same environment."""
for i in range(3):
query_data = {
"environment": "ift",
"questions": [
{"body": f"Последовательный запрос #{i+1}", "with_docs": True}
]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0
)
assert response.status_code == 200
result = response.json()
assert "request_id" in result
@pytest.mark.e2e
@pytest.mark.usefixtures("check_prerequisites")
def test_cross_environment_queries(
self,
e2e_client,
e2e_auth_headers,
setup_test_settings,
cleanup_test_sessions
):
"""Test queries to different environments in sequence."""
environments = ["ift", "prod"] # PSI uses backend mode, skip for this test
for env in environments:
query_data = {
"environment": env,
"questions": [
{"body": f"Тест окружения {env.upper()}", "with_docs": True}
]
}
response = e2e_client.post(
"/api/v1/query/bench",
json=query_data,
headers=e2e_auth_headers,
timeout=120.0
)
assert response.status_code == 200
result = response.json()
assert result["environment"] == env

View File

@ -0,0 +1,8 @@
# Integration tests environment variables
# Copy this file to .env.integration and update with your values
# DB API URL for integration tests
TEST_DB_API_URL=http://localhost:8081/api/v1
# Test user login (8-digit)
TEST_LOGIN=99999999

227
tests/integration/README.md Normal file
View File

@ -0,0 +1,227 @@
# Integration Tests
Integration tests that verify the interaction between Brief Bench FastAPI and a real DB API.
## Prerequisites
### 1. Start the DB API
The DB API must be running and reachable before the integration tests are executed.
```bash
# Example of starting the DB API (from the DB API repository)
cd ../db-api-project
uvicorn app.main:app --host localhost --port 8081
```
### 2. Configure environment variables
Create a `.env.integration` file in the project root:
```bash
# DB API URL for integration tests
TEST_DB_API_URL=http://localhost:8081/api/v1
# Test user login (8-digit)
TEST_LOGIN=99999999
```
Or set the environment variables directly:
```bash
export TEST_DB_API_URL=http://localhost:8081/api/v1
export TEST_LOGIN=99999999
```
### 3. Create a test user in the DB API
Make sure that a user with login `99999999` exists in the DB API, or that the DB API supports automatic user creation.
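A quick way to sanity-check the test user is to call the application's own login endpoint once the FastAPI app is up. A minimal sketch, assuming the app listens on `http://localhost:8000` (adjust host and port to your setup):
```python
# check_test_user.py - ad-hoc sanity check, not part of the test suite
import httpx

# Assumption: the FastAPI app is running locally on port 8000
response = httpx.post(
    "http://localhost:8000/api/v1/auth/login",
    params={"login": "99999999"},  # TEST_LOGIN
)
# 200 -> user exists (or was auto-created by the DB API), 404 -> user is missing
print(response.status_code, response.json())
```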
## Running the Tests
### All integration tests
```bash
# From the project root
pytest tests/integration/ -v
```
### Unit tests only (without integration tests)
```bash
pytest tests/unit/ -v
```
### A specific module
```bash
# Authentication tests
pytest tests/integration/test_auth_integration.py -v
# Settings tests
pytest tests/integration/test_settings_integration.py -v
# Analysis session tests
pytest tests/integration/test_analysis_integration.py -v
# Query tests (DB API only, no RAG)
pytest tests/integration/test_query_integration.py -v
```
### Using the integration marker
```bash
pytest -m integration -v
```
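For the `-m integration` filter to work, the markers have to be registered with pytest. The project may do this in `pytest.ini` or `pyproject.toml`; a programmatic sketch of an equivalent registration in a top-level `conftest.py` (descriptions are illustrative):
```python
# conftest.py (sketch) - register custom markers so `pytest -m ...` filters run without warnings
def pytest_configure(config):
    for marker, description in [
        ("unit", "fast, isolated unit tests"),
        ("integration", "tests that require a running DB API"),
        ("e2e", "end-to-end tests that require the full stack"),
        ("e2e_ift", "e2e tests against the IFT environment"),
        ("e2e_psi", "e2e tests against the PSI environment"),
        ("e2e_prod", "e2e tests against the PROD environment"),
    ]:
        config.addinivalue_line("markers", f"{marker}: {description}")
```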
## Test Structure
```
tests/integration/
├── conftest.py                    # Fixtures for integration tests
├── README.md                      # This file
├── test_auth_integration.py       # Authentication and JWT tests
├── test_settings_integration.py   # Settings management tests
├── test_analysis_integration.py   # Analysis session tests
└── test_query_integration.py      # Query tests (DB API part)
```
## What Is Tested
### ✅ Auth Integration (`test_auth_integration.py`)
- Successful authentication against a real DB API
- JWT token generation and validation
- Protecting endpoints with JWT
- Authentication error handling
### ✅ Settings Integration (`test_settings_integration.py`)
- Retrieving user settings from the DB API
- Updating settings for all environments (IFT, PSI, PROD)
- Partial settings updates
- Settings persistence
- Validation of the settings data structure
### ✅ Analysis Integration (`test_analysis_integration.py`)
- Creating analysis sessions in the DB API
- Listing sessions with filtering
- Session pagination
- Retrieving a session by ID
- Deleting sessions
- Data integrity (including Unicode and nested structures)
### ✅ Query Integration (`test_query_integration.py`)
- Retrieving user settings for queries
- Checking the apiMode setting (bench/backend)
- Updating settings between queries
- **Note:** the RAG backend is not called (it is mocked)
## What Is NOT Tested
- **RAG backend interaction** - requires running RAG services (IFT/PSI/PROD)
- **mTLS certificates** - requires real certificates
- **Performance** - use dedicated performance tests
- **Load testing** - use tools such as Locust or K6
## Troubleshooting
### DB API is not responding
```
httpx.ConnectError: [Errno 61] Connection refused
```
**Solution:** Make sure the DB API is running at `http://localhost:8081`.
### Test user not found
```
404: User not found
```
**Solution:** Create a user with login `99999999` in the DB API or change `TEST_LOGIN` in the environment variables.
### JWT token expired
```
401: Token expired
```
**Solution:** JWT tokens are valid for 30 days. The tests automatically obtain fresh tokens through the `auth_token` fixture.
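The real fixture lives in `tests/integration/conftest.py`; a minimal sketch of the shape such a fixture can take, reusing the login endpoint exercised in `test_auth_integration.py` (the `client` and `test_login` fixtures are assumed to be defined alongside it):
```python
import pytest

@pytest.fixture
def auth_token(client, test_login):
    """Obtain a fresh JWT for each test that requests it (sketch)."""
    response = client.post("/api/v1/auth/login", params={"login": test_login})
    assert response.status_code == 200
    return response.json()["access_token"]

@pytest.fixture
def auth_headers(auth_token):
    """Authorization headers built from the freshly issued token."""
    return {"Authorization": f"Bearer {auth_token}"}
```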
### Tests do not clean up data
The `clean_test_sessions` fixture automatically removes test sessions after each test. If you still see stale data, likely causes are:
- Interrupted test runs (Ctrl+C)
- Errors in the DB API
**Solution:** Delete the test data manually via the DB API or directly in the database.
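As with `auth_token`, the actual implementation lives in the integration conftest; a minimal sketch of a teardown fixture of this shape, using the session endpoints exercised by the tests above (deleting every session of the dedicated test user is an assumption of this sketch):
```python
import pytest

@pytest.fixture
def clean_test_sessions(client, auth_headers):
    """Run the test, then delete any analysis sessions it left behind (sketch)."""
    yield
    listing = client.get("/api/v1/analysis/sessions?limit=100", headers=auth_headers)
    if listing.status_code != 200:
        return
    for session in listing.json().get("sessions", []):
        client.delete(
            f"/api/v1/analysis/sessions/{session['session_id']}",
            headers=auth_headers,
        )
```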
## CI/CD Integration
To run the tests in a CI/CD pipeline:
```yaml
# .github/workflows/integration-tests.yml
name: Integration Tests
on: [push, pull_request]
jobs:
integration-tests:
runs-on: ubuntu-latest
services:
db-api:
image: your-db-api-image:latest
ports:
- 8081:8081
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.12'
- name: Install dependencies
run: |
pip install -r requirements.txt
pip install -r requirements-dev.txt
- name: Run integration tests
env:
TEST_DB_API_URL: http://localhost:8081/api/v1
TEST_LOGIN: 99999999
run: pytest tests/integration/ -v
```
## Useful Commands
```bash
# Run with verbose output
pytest tests/integration/ -vv
# Show print statements
pytest tests/integration/ -v -s
# Stop at the first failure
pytest tests/integration/ -v -x
# Re-run only the tests that failed in the previous run
pytest tests/integration/ -v --lf
# Run a single test
pytest tests/integration/test_auth_integration.py::TestAuthIntegration::test_login_success -v
# Show coverage (integration tests only)
pytest tests/integration/ --cov=app --cov-report=term-missing
```
## Recommendations
1. **Always run the integration tests before deploying**
2. **Use a separate test database** for the DB API
3. **Never run the integration tests against production** - only against dev/staging
4. **Check the DB API logs** when debugging issues
5. **Clean up test data** after every run

View File

@ -0,0 +1,378 @@
"""Integration tests for analysis session endpoints."""
import pytest
@pytest.mark.integration
class TestAnalysisIntegration:
"""Integration tests for analysis session management."""
def test_create_session_success(self, client, auth_headers, clean_test_sessions):
"""Test creating analysis session with real DB API."""
session_data = {
"environment": "ift",
"api_mode": "bench",
"request": [
{"body": "Test question 1", "with_docs": True},
{"body": "Test question 2", "with_docs": False}
],
"response": {
"answers": [
{"answer": "Test answer 1", "docs": []},
{"answer": "Test answer 2", "docs": []}
]
},
"annotations": {
"duration_ms": 1234,
"model": "test-model",
"test_run": True
}
}
response = client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=auth_headers
)
assert response.status_code == 201
data = response.json()
assert "session_id" in data
assert "user_id" in data
assert data["environment"] == "ift"
assert data["api_mode"] == "bench"
assert len(data["request"]) == 2
assert "created_at" in data
assert "updated_at" in data
def test_get_sessions_list(self, client, auth_headers, clean_test_sessions):
"""Test getting list of sessions."""
# Create test sessions
for env in ["ift", "psi", "prod"]:
session_data = {
"environment": env,
"api_mode": "bench",
"request": [{"body": f"Question for {env}"}],
"response": {"answer": f"Answer for {env}"},
"annotations": {}
}
client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=auth_headers
)
# Get all sessions
response = client.get("/api/v1/analysis/sessions", headers=auth_headers)
assert response.status_code == 200
data = response.json()
assert "sessions" in data
assert "total" in data
assert data["total"] >= 3
assert len(data["sessions"]) >= 3
# Verify session structure
for session in data["sessions"]:
assert "session_id" in session
assert "environment" in session
assert "created_at" in session
assert session["environment"] in ["ift", "psi", "prod"]
def test_get_sessions_with_filter(self, client, auth_headers, clean_test_sessions):
"""Test filtering sessions by environment."""
# Create sessions for different environments
for env in ["ift", "psi"]:
for i in range(2):
session_data = {
"environment": env,
"api_mode": "bench",
"request": [{"body": f"Question {i}"}],
"response": {},
"annotations": {}
}
client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=auth_headers
)
# Filter by IFT
response = client.get(
"/api/v1/analysis/sessions?environment=ift",
headers=auth_headers
)
assert response.status_code == 200
data = response.json()
assert data["total"] >= 2
# All returned sessions should be IFT
for session in data["sessions"]:
assert session["environment"] == "ift"
def test_get_sessions_pagination(self, client, auth_headers, clean_test_sessions):
"""Test session pagination."""
# Create 5 test sessions
for i in range(5):
session_data = {
"environment": "ift",
"api_mode": "bench",
"request": [{"body": f"Question {i}"}],
"response": {},
"annotations": {}
}
client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=auth_headers
)
# Get first 3
response = client.get(
"/api/v1/analysis/sessions?limit=3&offset=0",
headers=auth_headers
)
assert response.status_code == 200
data = response.json()
assert len(data["sessions"]) <= 3
# Get next 3
response = client.get(
"/api/v1/analysis/sessions?limit=3&offset=3",
headers=auth_headers
)
assert response.status_code == 200
def test_get_session_by_id(self, client, auth_headers, clean_test_sessions):
"""Test getting specific session by ID."""
# Create session
session_data = {
"environment": "psi",
"api_mode": "backend",
"request": [
{"body": "Detailed question", "with_docs": True}
],
"response": {
"answer": "Detailed answer",
"confidence": 0.95
},
"annotations": {
"duration_ms": 5678,
"tokens_used": 1234
}
}
create_response = client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=auth_headers
)
assert create_response.status_code == 201
session_id = create_response.json()["session_id"]
# Get session by ID
get_response = client.get(
f"/api/v1/analysis/sessions/{session_id}",
headers=auth_headers
)
assert get_response.status_code == 200
data = get_response.json()
assert data["session_id"] == session_id
assert data["environment"] == "psi"
assert data["api_mode"] == "backend"
assert data["request"][0]["body"] == "Detailed question"
assert data["response"]["answer"] == "Detailed answer"
assert data["annotations"]["duration_ms"] == 5678
def test_delete_session(self, client, auth_headers, clean_test_sessions):
"""Test deleting a session."""
# Create session
session_data = {
"environment": "prod",
"api_mode": "bench",
"request": [{"body": "To be deleted"}],
"response": {},
"annotations": {}
}
create_response = client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=auth_headers
)
assert create_response.status_code == 201
session_id = create_response.json()["session_id"]
# Delete session
delete_response = client.delete(
f"/api/v1/analysis/sessions/{session_id}",
headers=auth_headers
)
assert delete_response.status_code == 204
# Verify deletion
get_response = client.get(
f"/api/v1/analysis/sessions/{session_id}",
headers=auth_headers
)
assert get_response.status_code == 404
def test_delete_nonexistent_session(self, client, auth_headers):
"""Test deleting non-existent session."""
fake_session_id = "00000000-0000-0000-0000-000000000000"
response = client.delete(
f"/api/v1/analysis/sessions/{fake_session_id}",
headers=auth_headers
)
assert response.status_code == 404
def test_create_session_invalid_environment(self, client, auth_headers):
"""Test creating session with invalid environment."""
session_data = {
"environment": "invalid", # Invalid
"api_mode": "bench",
"request": [],
"response": {},
"annotations": {}
}
response = client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=auth_headers
)
# Should fail validation (either FastAPI or DB API)
assert response.status_code in [400, 422]
def test_sessions_require_authentication(self, client):
"""Test that session endpoints require authentication."""
# Create without auth
response = client.post(
"/api/v1/analysis/sessions",
json={"environment": "ift", "api_mode": "bench", "request": [], "response": {}, "annotations": {}}
)
assert response.status_code == 401
# List without auth
response = client.get("/api/v1/analysis/sessions")
assert response.status_code == 401
# Get by ID without auth
response = client.get("/api/v1/analysis/sessions/some-id")
assert response.status_code == 401
# Delete without auth
response = client.delete("/api/v1/analysis/sessions/some-id")
assert response.status_code == 401
def test_create_multiple_sessions_same_user(self, client, auth_headers, clean_test_sessions):
"""Test creating multiple sessions for same user."""
session_ids = []
for i in range(3):
session_data = {
"environment": "ift",
"api_mode": "bench",
"request": [{"body": f"Question {i}"}],
"response": {"answer": f"Answer {i}"},
"annotations": {"iteration": i}
}
response = client.post(
"/api/v1/analysis/sessions",
json=session_data,
headers=auth_headers
)
assert response.status_code == 201
session_ids.append(response.json()["session_id"])
# Verify all sessions exist
list_response = client.get("/api/v1/analysis/sessions", headers=auth_headers)
assert list_response.status_code == 200
assert list_response.json()["total"] >= 3
# Verify each session is unique
assert len(set(session_ids)) == 3
def test_session_data_integrity(self, client, auth_headers, clean_test_sessions):
"""Test that session data is stored and retrieved without corruption."""
complex_data = {
"environment": "psi",
"api_mode": "backend",
"request": [
{
"body": "Complex question with special chars: Привет! 你好 こんにちは",
"with_docs": True,
"metadata": {
"source": "test",
"priority": 1,
"tags": ["integration", "test", "unicode"]
}
}
],
"response": {
"answer": "Complex answer with nested data",
"confidence": 0.98,
"sources": [
{"doc_id": "doc1", "relevance": 0.95},
{"doc_id": "doc2", "relevance": 0.87}
],
"metadata": {
"model": "gpt-4",
"temperature": 0.7,
"tokens": {"prompt": 100, "completion": 200}
}
},
"annotations": {
"test_type": "integration",
"special_chars": "!@#$%^&*()",
"unicode": "Тест 测试 テスト",
"nested": {
"level1": {
"level2": {
"value": "deep nested value"
}
}
}
}
}
# Create session
create_response = client.post(
"/api/v1/analysis/sessions",
json=complex_data,
headers=auth_headers
)
assert create_response.status_code == 201
session_id = create_response.json()["session_id"]
# Retrieve and verify
get_response = client.get(
f"/api/v1/analysis/sessions/{session_id}",
headers=auth_headers
)
assert get_response.status_code == 200
retrieved_data = get_response.json()
# Verify complex data integrity
assert "Привет" in retrieved_data["request"][0]["body"]
assert retrieved_data["response"]["sources"][0]["doc_id"] == "doc1"
assert retrieved_data["annotations"]["nested"]["level1"]["level2"]["value"] == "deep nested value"

View File

@ -0,0 +1,86 @@
"""Integration tests for authentication endpoints."""
import pytest
@pytest.mark.integration
class TestAuthIntegration:
"""Integration tests for authentication flow."""
def test_login_success(self, client, test_login):
"""Test successful login with real DB API."""
response = client.post(
"/api/v1/auth/login",
params={"login": test_login}
)
assert response.status_code == 200
data = response.json()
assert "access_token" in data
assert data["token_type"] == "bearer"
assert "user" in data
user = data["user"]
assert user["login"] == test_login
assert "user_id" in user
assert "created_at" in user
assert "last_login_at" in user
def test_login_invalid_format(self, client):
"""Test login with invalid format."""
response = client.post(
"/api/v1/auth/login",
params={"login": "123"} # Too short
)
assert response.status_code == 422 # Validation error
def test_login_nonexistent_user(self, client):
"""Test login with non-existent user."""
response = client.post(
"/api/v1/auth/login",
params={"login": "00000000"} # Likely doesn't exist
)
# Should return 404 if user doesn't exist in DB API
# Or create user if DB API auto-creates
assert response.status_code in [200, 404]
def test_token_contains_user_info(self, client, test_login):
"""Test that JWT token contains user information."""
from app.utils.security import decode_access_token
response = client.post(
"/api/v1/auth/login",
params={"login": test_login}
)
assert response.status_code == 200
token = response.json()["access_token"]
# Decode token
payload = decode_access_token(token)
assert payload["login"] == test_login
assert "user_id" in payload
assert "exp" in payload
def test_protected_endpoint_without_token(self, client):
"""Test accessing protected endpoint without token."""
response = client.get("/api/v1/settings")
assert response.status_code == 401
def test_protected_endpoint_with_token(self, client, auth_headers):
"""Test accessing protected endpoint with valid token."""
response = client.get("/api/v1/settings", headers=auth_headers)
# Should return 200 (or 404 if no settings yet)
assert response.status_code in [200, 404]
def test_protected_endpoint_with_invalid_token(self, client):
"""Test accessing protected endpoint with invalid token."""
headers = {"Authorization": "Bearer invalid_token_here"}
response = client.get("/api/v1/settings", headers=headers)
assert response.status_code == 401

View File

@ -0,0 +1,295 @@
"""Integration tests for query endpoints (DB API interaction only).
Note: These tests check DB API integration for user settings retrieval.
RAG backend calls are not tested here (require actual RAG infrastructure).
"""
import pytest
from unittest.mock import patch, AsyncMock, MagicMock
@pytest.mark.integration
class TestQueryDBApiIntegration:
"""Integration tests for query endpoints - DB API interaction only."""
def test_bench_query_retrieves_user_settings(self, client, auth_headers):
"""Test that bench query retrieves user settings from DB API."""
# First, set up user settings for bench mode
settings_data = {
"settings": {
"ift": {
"apiMode": "bench",
"bearerToken": "test-token",
"systemPlatform": "test-platform",
"systemPlatformUser": "test-user",
"platformUserId": "user-123",
"platformId": "platform-123",
"withClassify": False,
"resetSessionMode": True
}
}
}
client.put("/api/v1/settings", json=settings_data, headers=auth_headers)
# Mock RAG service to avoid actual RAG calls
with patch('app.api.v1.query.RagService') as MockRagService:
mock_rag = AsyncMock()
mock_rag.send_bench_query = AsyncMock(return_value={
"answers": [{"answer": "Test answer", "docs": []}]
})
mock_rag.close = AsyncMock()
MockRagService.return_value = mock_rag
# Send bench query
query_data = {
"environment": "ift",
"questions": [{"body": "Test question", "with_docs": True}]
}
response = client.post(
"/api/v1/query/bench",
json=query_data,
headers=auth_headers
)
# Should succeed (settings retrieved from DB API)
assert response.status_code == 200
# Verify RAG service was called with correct settings
mock_rag.send_bench_query.assert_called_once()
call_kwargs = mock_rag.send_bench_query.call_args[1]
assert call_kwargs["environment"] == "ift"
assert "user_settings" in call_kwargs
user_settings = call_kwargs["user_settings"]
assert user_settings["bearerToken"] == "test-token"
def test_backend_query_retrieves_user_settings(self, client, auth_headers):
"""Test that backend query retrieves user settings from DB API."""
# Set up user settings for backend mode
settings_data = {
"settings": {
"psi": {
"apiMode": "backend",
"bearerToken": "backend-token",
"systemPlatform": "backend-platform",
"systemPlatformUser": "backend-user",
"platformUserId": "user-456",
"platformId": "platform-456",
"withClassify": True,
"resetSessionMode": False
}
}
}
client.put("/api/v1/settings", json=settings_data, headers=auth_headers)
# Mock RAG service
with patch('app.api.v1.query.RagService') as MockRagService:
mock_rag = AsyncMock()
mock_rag.send_backend_query = AsyncMock(return_value=[
{"answer": "Test answer", "confidence": 0.95}
])
mock_rag.close = AsyncMock()
MockRagService.return_value = mock_rag
# Send backend query
query_data = {
"environment": "psi",
"questions": [{"body": "Test question", "with_docs": True}],
"reset_session": False
}
response = client.post(
"/api/v1/query/backend",
json=query_data,
headers=auth_headers
)
assert response.status_code == 200
# Verify RAG service was called with correct settings
mock_rag.send_backend_query.assert_called_once()
call_kwargs = mock_rag.send_backend_query.call_args[1]
assert call_kwargs["environment"] == "psi"
assert call_kwargs["reset_session"] is False
user_settings = call_kwargs["user_settings"]
assert user_settings["bearerToken"] == "backend-token"
assert user_settings["withClassify"] is True
def test_bench_query_wrong_api_mode(self, client, auth_headers):
"""Test bench query fails when settings configured for backend mode."""
# Set up settings for backend mode
settings_data = {
"settings": {
"ift": {
"apiMode": "backend", # Wrong mode for bench query
"bearerToken": "token",
"systemPlatform": "platform",
"systemPlatformUser": "user",
"platformUserId": "user-id",
"platformId": "platform-id",
"withClassify": False,
"resetSessionMode": True
}
}
}
client.put("/api/v1/settings", json=settings_data, headers=auth_headers)
# Try bench query
query_data = {
"environment": "ift",
"questions": [{"body": "Test", "with_docs": True}]
}
response = client.post(
"/api/v1/query/bench",
json=query_data,
headers=auth_headers
)
# Should fail due to wrong API mode
assert response.status_code in [400, 500]
def test_backend_query_wrong_api_mode(self, client, auth_headers):
"""Test backend query fails when settings configured for bench mode."""
# Set up settings for bench mode
settings_data = {
"settings": {
"prod": {
"apiMode": "bench", # Wrong mode for backend query
"bearerToken": "token",
"systemPlatform": "platform",
"systemPlatformUser": "user",
"platformUserId": "user-id",
"platformId": "platform-id",
"withClassify": False,
"resetSessionMode": True
}
}
}
client.put("/api/v1/settings", json=settings_data, headers=auth_headers)
# Try backend query
query_data = {
"environment": "prod",
"questions": [{"body": "Test", "with_docs": True}],
"reset_session": True
}
response = client.post(
"/api/v1/query/backend",
json=query_data,
headers=auth_headers
)
# Should fail due to wrong API mode
assert response.status_code in [400, 500]
def test_query_invalid_environment(self, client, auth_headers):
"""Test query with invalid environment name."""
query_data = {
"environment": "invalid_env",
"questions": [{"body": "Test", "with_docs": True}]
}
# Bench query
response = client.post(
"/api/v1/query/bench",
json=query_data,
headers=auth_headers
)
assert response.status_code == 400
# Backend query
query_data["reset_session"] = True
response = client.post(
"/api/v1/query/backend",
json=query_data,
headers=auth_headers
)
assert response.status_code == 400
def test_query_requires_authentication(self, client):
"""Test that query endpoints require authentication."""
query_data = {
"environment": "ift",
"questions": [{"body": "Test", "with_docs": True}]
}
# Bench without auth
response = client.post("/api/v1/query/bench", json=query_data)
assert response.status_code == 401
# Backend without auth
query_data["reset_session"] = True
response = client.post("/api/v1/query/backend", json=query_data)
assert response.status_code == 401
def test_settings_update_affects_query(self, client, auth_headers):
"""Test that updating settings affects subsequent queries."""
# Initial settings
initial_settings = {
"settings": {
"ift": {
"apiMode": "bench",
"bearerToken": "initial-token",
"systemPlatform": "initial",
"systemPlatformUser": "initial",
"platformUserId": "initial",
"platformId": "initial",
"withClassify": False,
"resetSessionMode": True
}
}
}
client.put("/api/v1/settings", json=initial_settings, headers=auth_headers)
# Mock RAG service
with patch('app.api.v1.query.RagService') as MockRagService:
mock_rag = AsyncMock()
mock_rag.send_bench_query = AsyncMock(return_value={"answers": []})
mock_rag.close = AsyncMock()
MockRagService.return_value = mock_rag
# First query
query_data = {
"environment": "ift",
"questions": [{"body": "Test", "with_docs": True}]
}
client.post("/api/v1/query/bench", json=query_data, headers=auth_headers)
# Check first call
first_call = mock_rag.send_bench_query.call_args[1]
assert first_call["user_settings"]["bearerToken"] == "initial-token"
# Update settings
updated_settings = {
"settings": {
"ift": {
"apiMode": "bench",
"bearerToken": "updated-token",
"systemPlatform": "updated",
"systemPlatformUser": "updated",
"platformUserId": "updated",
"platformId": "updated",
"withClassify": True,
"resetSessionMode": False
}
}
}
client.put("/api/v1/settings", json=updated_settings, headers=auth_headers)
# Second query
mock_rag.send_bench_query.reset_mock()
client.post("/api/v1/query/bench", json=query_data, headers=auth_headers)
# Check second call uses updated settings
second_call = mock_rag.send_bench_query.call_args[1]
assert second_call["user_settings"]["bearerToken"] == "updated-token"
assert second_call["user_settings"]["withClassify"] is True

View File

@ -0,0 +1,215 @@
"""Integration tests for settings endpoints."""
import pytest
@pytest.mark.integration
class TestSettingsIntegration:
"""Integration tests for user settings management."""
def test_get_settings_success(self, client, auth_headers):
"""Test getting user settings from real DB API."""
response = client.get("/api/v1/settings", headers=auth_headers)
# Should return 200 with settings or 404 if no settings yet
assert response.status_code in [200, 404]
if response.status_code == 200:
data = response.json()
assert "user_id" in data
assert "settings" in data
assert "updated_at" in data
# Check settings structure
settings = data["settings"]
for env in ["ift", "psi", "prod"]:
if env in settings:
env_settings = settings[env]
assert "apiMode" in env_settings
assert env_settings["apiMode"] in ["bench", "backend"]
def test_update_settings_full(self, client, auth_headers):
"""Test updating settings for all environments."""
update_data = {
"settings": {
"ift": {
"apiMode": "bench",
"bearerToken": "test-token-ift",
"systemPlatform": "test-platform",
"systemPlatformUser": "test-user",
"platformUserId": "user-123",
"platformId": "platform-123",
"withClassify": True,
"resetSessionMode": False
},
"psi": {
"apiMode": "backend",
"bearerToken": "test-token-psi",
"systemPlatform": "test-platform",
"systemPlatformUser": "test-user",
"platformUserId": "user-456",
"platformId": "platform-456",
"withClassify": False,
"resetSessionMode": True
},
"prod": {
"apiMode": "bench",
"bearerToken": "test-token-prod",
"systemPlatform": "test-platform",
"systemPlatformUser": "test-user",
"platformUserId": "user-789",
"platformId": "platform-789",
"withClassify": False,
"resetSessionMode": False
}
}
}
response = client.put(
"/api/v1/settings",
json=update_data,
headers=auth_headers
)
assert response.status_code == 200
data = response.json()
assert "user_id" in data
assert "settings" in data
assert "updated_at" in data
# Verify settings were updated
assert data["settings"]["ift"]["apiMode"] == "bench"
assert data["settings"]["ift"]["bearerToken"] == "test-token-ift"
assert data["settings"]["psi"]["apiMode"] == "backend"
assert data["settings"]["prod"]["apiMode"] == "bench"
def test_update_settings_partial(self, client, auth_headers):
"""Test updating settings for single environment."""
update_data = {
"settings": {
"ift": {
"apiMode": "backend",
"bearerToken": "updated-token",
"systemPlatform": "updated-platform",
"systemPlatformUser": "updated-user",
"platformUserId": "updated-user-id",
"platformId": "updated-platform-id",
"withClassify": True,
"resetSessionMode": True
}
}
}
response = client.put(
"/api/v1/settings",
json=update_data,
headers=auth_headers
)
assert response.status_code == 200
data = response.json()
assert data["settings"]["ift"]["apiMode"] == "backend"
assert data["settings"]["ift"]["bearerToken"] == "updated-token"
def test_update_then_get_settings(self, client, auth_headers):
"""Test updating settings and then retrieving them."""
# Update settings
update_data = {
"settings": {
"ift": {
"apiMode": "bench",
"bearerToken": "integration-test-token",
"systemPlatform": "test",
"systemPlatformUser": "test",
"platformUserId": "test-123",
"platformId": "test-456",
"withClassify": False,
"resetSessionMode": True
}
}
}
put_response = client.put(
"/api/v1/settings",
json=update_data,
headers=auth_headers
)
assert put_response.status_code == 200
# Get settings
get_response = client.get("/api/v1/settings", headers=auth_headers)
assert get_response.status_code == 200
data = get_response.json()
assert data["settings"]["ift"]["bearerToken"] == "integration-test-token"
assert data["settings"]["ift"]["platformUserId"] == "test-123"
def test_update_settings_invalid_api_mode(self, client, auth_headers):
"""Test updating settings with invalid apiMode."""
update_data = {
"settings": {
"ift": {
"apiMode": "invalid_mode", # Invalid
"bearerToken": "",
"systemPlatform": "",
"systemPlatformUser": "",
"platformUserId": "",
"platformId": "",
"withClassify": False,
"resetSessionMode": True
}
}
}
response = client.put(
"/api/v1/settings",
json=update_data,
headers=auth_headers
)
# Should accept any string (no validation on FastAPI side)
# DB API might validate
assert response.status_code in [200, 400]
def test_settings_persistence(self, client, auth_headers):
"""Test that settings persist across requests."""
# Set unique value
unique_token = "persistence-test-token-12345"
update_data = {
"settings": {
"psi": {
"apiMode": "backend",
"bearerToken": unique_token,
"systemPlatform": "test",
"systemPlatformUser": "test",
"platformUserId": "test",
"platformId": "test",
"withClassify": True,
"resetSessionMode": False
}
}
}
# Update
client.put("/api/v1/settings", json=update_data, headers=auth_headers)
# Get multiple times to verify persistence
for _ in range(3):
response = client.get("/api/v1/settings", headers=auth_headers)
assert response.status_code == 200
data = response.json()
assert data["settings"]["psi"]["bearerToken"] == unique_token
def test_settings_require_authentication(self, client):
"""Test that settings endpoints require authentication."""
# GET without auth
response = client.get("/api/v1/settings")
assert response.status_code == 401
# PUT without auth
response = client.put("/api/v1/settings", json={"settings": {}})
assert response.status_code == 401