# Unified LLM Gateway & Data Import Analysis Service

This project exposes a FastAPI-based microservice that provides:

- A unified chat completions gateway supporting multiple LLM providers (OpenAI, Anthropic, OpenRouter, Gemini, Qwen, DeepSeek, etc.)
- An asynchronous data import analysis pipeline that orchestrates LLM calls to produce structured metadata and processing recommendations

The following instructions cover environment setup, dependency installation, and running the backend service.

## Prerequisites

- Python 3.11 (recommended) or newer
- Git
- [uv](https://github.com/astral-sh/uv) package manager (used for Python dependency management)

## Install uv

```bash
# Linux / macOS
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

After installation, ensure `uv` is on your `PATH`:

```bash
uv --version
```

## Install Python Dependencies

Create (or activate) a virtual environment, then install the project dependencies with `uv`:

```bash
# Create a virtualenv named .venv if it doesn't exist
uv venv .venv

# Activate the virtualenv (Linux/macOS)
source .venv/bin/activate
# On Windows PowerShell:
# .\.venv\Scripts\Activate.ps1

# Install dependencies from requirements.txt
uv pip install -r requirements.txt
```

If you prefer plain `pip`, replace the last command with `pip install -r requirements.txt`.

## Environment Variables

Copy `.env.example` to `.env` (if provided) or edit `.env` to supply API keys and configuration values:

- `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `OPENROUTER_API_KEY`, etc. for provider credentials
- `HTTP_CLIENT_TIMEOUT`, `IMPORT_CHAT_TIMEOUT_SECONDS` for request timeouts
- `LOG_LEVEL`, `LOG_FORMAT` for logging

A sample `.env` sketch is included at the end of this README.

## Run the Backend Service

Start the FastAPI application with uvicorn:

```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

- `--reload` enables automatic restarts on code changes during development.
- The interactive API docs are available at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs).

To keep the service running in the background (Unix-like systems):

```bash
nohup uvicorn app.main:app --host 0.0.0.0 --port 8000 > server.log 2>&1 &
```

For production deployments, use a process manager such as `pm2`, `supervisor`, or systemd (a unit file sketch is included at the end of this README).

## API List

1. Data import analysis (schema) endpoint: `http://localhost:8000/v1/import/analyze` (an example request is included at the end of this README)

## Additional Commands

- Run the data import analysis example: `python test/data_import_analysis_example.py`
- Run the OpenRouter chat example: `python test/openrouter_chat_example.py`
- Send a DeepSeek chat request: `python scripts/deepseek_request.py`
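## Sample `.env` Sketch

The snippet below is a minimal, illustrative `.env` that uses only the variable names listed in the Environment Variables section. The values are placeholders, and the timeout units (seconds) and accepted `LOG_FORMAT` values are assumptions; confirm both against the application's settings code.

```dotenv
# Provider API keys (placeholder values; set only the providers you use)
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
OPENROUTER_API_KEY=your-openrouter-key

# Timeouts (assumed to be in seconds)
HTTP_CLIENT_TIMEOUT=60
IMPORT_CHAT_TIMEOUT_SECONDS=300

# Logging (valid LOG_FORMAT values depend on the app's logging setup)
LOG_LEVEL=INFO
LOG_FORMAT=json
```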
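## systemd Unit Sketch

For a systemd-managed production deployment (mentioned under "Run the Backend Service"), a unit file along the following lines can work. The unit name, service user, and the install path `/opt/llm-gateway` are assumptions; adapt them to your environment.

```ini
# /etc/systemd/system/llm-gateway.service (hypothetical name and paths)
[Unit]
Description=Unified LLM Gateway & Data Import Analysis Service
After=network.target

[Service]
# Adjust the user and paths to match your deployment
User=www-data
WorkingDirectory=/opt/llm-gateway
EnvironmentFile=/opt/llm-gateway/.env
ExecStart=/opt/llm-gateway/.venv/bin/uvicorn app.main:app --host 0.0.0.0 --port 8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable and start it with `sudo systemctl daemon-reload && sudo systemctl enable --now llm-gateway`.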
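## Example: Calling `/v1/import/analyze`

A minimal client sketch for the import analysis endpoint listed above. The URL comes from the API List; the payload fields (`file_name`, `columns`, `sample_rows`) are illustrative assumptions rather than the service's actual request model, and the `requests` package is assumed to be installed (`pip install requests`). Check the interactive docs at `/docs` for the real schema.

```python
"""Minimal sketch of a request to the data import analysis endpoint.

Assumptions (not confirmed by this README): the payload field names and the
response shape. Check http://127.0.0.1:8000/docs for the actual request model.
"""
import json

import requests  # assumed available; install with `pip install requests`

BASE_URL = "http://localhost:8000"

# Hypothetical payload describing a file to analyze; adjust to the real schema.
payload = {
    "file_name": "sales_2024.csv",
    "columns": ["order_id", "customer", "amount", "created_at"],
    "sample_rows": [
        {"order_id": 1, "customer": "Acme", "amount": 99.5, "created_at": "2024-01-02"},
    ],
}

# The analysis pipeline orchestrates LLM calls, so allow a generous timeout.
resp = requests.post(f"{BASE_URL}/v1/import/analyze", json=payload, timeout=300)
resp.raise_for_status()

# The service is described as returning structured metadata and processing
# recommendations; print whatever JSON comes back.
print(json.dumps(resp.json(), indent=2, ensure_ascii=False))
```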