93. AI-First Portfolio Analysis
Status: Accepted
Date: 2025-07-06
Context
Analyzing a trading portfolio involves more than just looking at the P&L. It requires interpreting a complex set of metrics (leverage, margin usage, position concentration, risk/reward ratios) in the context of the current market. Simply displaying these raw metrics on a dashboard is not enough; it doesn't tell the user what the numbers mean or what they should do. We need a way to transform this raw data into actionable insights.
Decision
The Apollo module will adopt an AI-First approach to portfolio analysis. Instead of creating a complex, rule-based engine to interpret portfolio data, we will use a local Large Language Model (via Ollama) to perform this interpretation.
The workflow will be (see the sketch after this list):
- Fetch the raw portfolio data from the exchange (adr://bybit-integration).
- Package this data (P&L, leverage, margin, positions, etc.) into a structured context.
- Pass this context to an LLM with a specialized prompt asking it to act as an expert portfolio risk manager.
- The prompt will instruct the LLM to generate a holistic analysis, including an overall "health score" (0-100), a qualitative risk level (Low, Medium, High), and a set of specific, actionable recommendations for improvement.
- This analysis will be returned in a standardized JSON format (adr://structured-output).
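As a rough sketch of this flow, the analysis can be a single call to a local Ollama instance over its REST API. The endpoint, model name, prompt wording, and field names below are illustrative assumptions, not the actual Apollo implementation:

```python
# Illustrative sketch only: endpoint, model, prompt, and field names are
# assumptions, not the actual Apollo implementation.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # assumed local Ollama instance
MODEL = "llama3"                                # placeholder model name

SYSTEM_PROMPT = (
    "You are an expert portfolio risk manager. Analyze the portfolio data "
    'and respond only with JSON: {"health_score": <0-100>, '
    '"risk_level": "Low"|"Medium"|"High", "recommendations": [<string>, ...]}'
)

def analyze_portfolio(portfolio: dict) -> dict:
    """Package raw exchange data as LLM context and return the parsed analysis."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "stream": False,
            "format": "json",  # ask Ollama to emit syntactically valid JSON
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": json.dumps(portfolio)},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return json.loads(response.json()["message"]["content"])

# Example context, as it might arrive from the exchange integration
# (field names are hypothetical).
portfolio = {
    "unrealized_pnl": -142.5,
    "account_leverage": 8.0,
    "margin_usage_pct": 61.0,
    "positions": [{"symbol": "BTCUSDT", "side": "long", "size_pct": 72.0}],
}
print(analyze_portfolio(portfolio))
```

Setting `format` to `json` asks Ollama to constrain the response to valid JSON, which makes the downstream validation step (see Mitigation) considerably more reliable.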
Consequences
Positive:
- Holistic, Contextual Insights: The LLM can interpret the interplay between different metrics in a way that is very difficult to program with rules. It can provide a qualitative, "big picture" assessment of the portfolio's health.
- Actionable Recommendations: The system doesn't just show data; it provides clear, natural-language recommendations that a user can act on (e.g., "Consider reducing leverage on your BTC position as it represents a highly concentrated risk").
- High Flexibility: The analysis logic can be rapidly evolved by simply updating the prompt. We can easily add new analytical angles or change the focus of the recommendations without code changes.
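As one illustration of this flexibility, the prompt can live in a version-controlled template file rather than in code, so changing the analytical focus is a text edit and a reload (the path below is hypothetical):

```python
# Hypothetical: the analysis prompt lives in a versioned template file,
# so its wording can evolve without a code change or redeploy.
from pathlib import Path

PROMPT_PATH = Path("prompts/portfolio_analysis.txt")  # tracked in git

def load_system_prompt() -> str:
    """Read the current prompt; editing the file changes the analysis focus."""
    return PROMPT_PATH.read_text(encoding="utf-8")
```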
Negative:
- Dependency on Local AI Infrastructure: This approach requires a running Ollama instance with suitable models and, potentially, GPU hardware for acceptable performance. This adds to the system's operational requirements.
- Quality of Analysis is Prompt-Dependent: The quality and reliability of the generated insights are entirely dependent on the quality of the prompt and the capabilities of the chosen LLM.
- Potential for Generic or Unhelpful Advice: A poorly tuned model or prompt could generate generic, obvious, or even incorrect advice.
Mitigation:
- Clear Infrastructure Requirements: The system's deployment documentation will clearly state the requirement for a configured Ollama instance. We will also implement a fallback to a basic metrics display if the AI service is unavailable (see the fallback sketch after this list).
- Rigorous Prompt Engineering: We will treat the portfolio analysis prompt as a critical asset, subject to version control, testing, and continuous refinement.
- Structured Output and Validation: By forcing the LLM to provide its analysis in a structured format (adr://structured-output), we can validate the output and ensure it conforms to our expectations (see the validation sketch after this list). We will also provide users with the raw metrics alongside the AI analysis, allowing them to verify the AI's conclusions.
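A minimal sketch of the availability fallback, reusing `analyze_portfolio()` from the workflow sketch above (the failure modes caught here are assumptions about how the service degrades):

```python
# Sketch of graceful degradation when the local AI service is unreachable.
# analyze_portfolio() is the function from the workflow sketch above.
import requests

def portfolio_report(portfolio: dict) -> dict:
    """Return the AI analysis when Ollama responds, else raw metrics only."""
    try:
        analysis = analyze_portfolio(portfolio)
    except (requests.RequestException, ValueError, KeyError):
        analysis = None  # AI unavailable or output unusable: basic display
    return {"metrics": portfolio, "analysis": analysis}
```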
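And a minimal sketch of the validation step, here using pydantic; the field names mirror the hypothetical prompt above, while the canonical schema is defined in adr://structured-output:

```python
# Minimal validation sketch with pydantic v2; field names mirror the
# hypothetical prompt above, not the canonical adr://structured-output schema.
from typing import List, Literal, Optional
from pydantic import BaseModel, Field, ValidationError

class PortfolioAnalysis(BaseModel):
    health_score: int = Field(ge=0, le=100)
    risk_level: Literal["Low", "Medium", "High"]
    recommendations: List[str]

def validate_analysis(raw: dict) -> Optional[PortfolioAnalysis]:
    """Reject malformed or out-of-range LLM output before display."""
    try:
        return PortfolioAnalysis.model_validate(raw)
    except ValidationError:
        return None  # caller falls back to the raw-metrics-only view
```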