Unified LLM Gateway & Data Import Analysis Service
This project exposes a FastAPI-based microservice that provides:
- A unified chat completions gateway supporting multiple LLM providers (OpenAI, Anthropic, OpenRouter, Gemini, Qwen, DeepSeek, etc.)
- An asynchronous data import analysis pipeline that orchestrates LLM calls to produce structured metadata and processing recommendations
The following instructions cover environment setup, dependency installation, and running the backend service.
Prerequisites
- Python 3.11 (recommended) or newer
- Git
- uv package manager (used for Python dependency management)
Install uv
# Linux / macOS
curl -LsSf https://astral.sh/uv/install.sh | sh
# Windows (PowerShell)
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
After installation, ensure uv is on your PATH:
uv --version
Install Python Dependencies
Create (or activate) a virtual environment, then install project dependencies with uv:
# Create a virtualenv named .venv if it doesn't exist
uv venv .venv
# Activate the virtualenv (Linux/macOS)
source .venv/bin/activate
# On Windows PowerShell:
# .\.venv\Scripts\Activate.ps1
# Install dependencies from requirements.txt
uv pip install -r requirements.txt
If you prefer native pip, replace the last command with pip install -r requirements.txt.
Environment Variables
Copy .env.example to .env (if provided), or create/edit .env to supply API keys and configuration values:
- OPENAI_API_KEY, ANTHROPIC_API_KEY, OPENROUTER_API_KEY, etc. for provider credentials
- HTTP_CLIENT_TIMEOUT, IMPORT_CHAT_TIMEOUT_SECONDS for request timeouts
- LOG_LEVEL, LOG_FORMAT for logging
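A minimal .env sketch is shown below; set only the keys your deployment actually needs. The values are illustrative placeholders, not project defaults:
# Provider credentials (only the providers you call need a key)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
OPENROUTER_API_KEY=sk-or-...
# Timeouts in seconds (illustrative values)
HTTP_CLIENT_TIMEOUT=60
IMPORT_CHAT_TIMEOUT_SECONDS=300
# Logging (placeholder values; check the project's supported formats)
LOG_LEVEL=INFO
LOG_FORMAT=json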
Run the Backend Service
Start the FastAPI application using uvicorn:
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
- --reload enables auto-restart during development.
- Access the interactive API docs at http://127.0.0.1:8000/docs.
To keep it running in the background (Unix-like systems):
nohup uvicorn app.main:app --host 0.0.0.0 --port 8000 > server.log 2>&1 &
Or use a process manager such as pm2, supervisor, or systemd for production deployments.
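If you go the systemd route, a unit file along the following lines is a reasonable starting point; the user, install paths, and virtualenv location are assumptions to adapt to your deployment:
[Unit]
Description=Unified LLM Gateway & Data Import Analysis Service
After=network.target

[Service]
# Adjust the user, working directory, and paths to match your installation
User=www-data
WorkingDirectory=/opt/llm-gateway
EnvironmentFile=/opt/llm-gateway/.env
ExecStart=/opt/llm-gateway/.venv/bin/uvicorn app.main:app --host 0.0.0.0 --port 8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable and start it with systemctl enable --now followed by the unit name.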
API List
- Data import analysis (schema) endpoint: http://localhost:8000/v1/import/analyze
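A minimal Python sketch for calling the analysis endpoint is shown below. It uses only the standard library; the payload fields (file_name, sample_rows) are hypothetical placeholders, so check the interactive docs at http://127.0.0.1:8000/docs for the actual request schema.
import json
import urllib.request

# Hypothetical request body -- the real schema is described in the /docs UI
payload = {
    "file_name": "sales_2024.csv",
    "sample_rows": [
        {"date": "2024-01-01", "amount": 120.5},
        {"date": "2024-01-02", "amount": 98.0},
    ],
}

req = urllib.request.Request(
    "http://localhost:8000/v1/import/analyze",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Print the structured metadata / processing recommendations returned by the service
with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.loads(resp.read()), indent=2, ensure_ascii=False))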
Additional Commands
- Run the data import analysis example:
python test/data_import_analysis_example.py
- Test the OpenRouter demo:
python test/openrouter_chat_example.py
- Run the DeepSeek chat request script:
python scripts/deepseek_request.py