FIP — Fundamental Intelligence Platform

A small Go service that performs asynchronous, LLM-driven signal extraction and scoring for equity tickers using Temporal for orchestration and PostgreSQL for persistence.

Overview

FIP runs lightweight analysis workflows that call modular “exploration” activities (currently: fundamentals and sentiment). Each workflow collects structured signals from an LLM provider, converts those signals to a numeric score, and persists the result; the service exposes HTTP endpoints to start and retrieve analyses. The project is intended as a developer-facing prototype for integrating LLMs into event-driven batch analysis pipelines and as a foundation for implementing production LLM adapters and storage backends.

Key features

  • Temporal-based workflow orchestration for reliable, asynchronous analysis execution.
  • Pluggable LLM provider abstraction (internal/platform/llm.Provider) with a mock implementation for local development (see the interface sketch after this list).
  • Concurrent exploration activities (Fundamentals, Sentiment) executed by the workflow.
  • Scoring function that converts exploration signals into a 0–100 score (internal/domain.ScoreExplorations).
  • Persistence layer (Postgres) for completed analyses with retrieval by request ID.
  • Simple HTTP API:
    • POST /analyze?ticker={TICKER} → starts a workflow; returns workflow execution ID (202)
    • GET /analysis/{id} → returns stored analysis: id, ticker, score, report
    • GET /health → basic liveness check
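
A minimal sketch of what the internal/platform/llm.Provider abstraction and its mock might look like. The single GenerateJSON method and NewMockProvider are taken from the descriptions elsewhere in this README, but the exact signatures are assumptions, not the repository's definitions.

package llm

import "context"

// Provider abstracts the LLM backend used by the exploration activities.
// GenerateJSON prompts the model and unmarshals its JSON reply into out.
type Provider interface {
	GenerateJSON(ctx context.Context, prompt string, out any) error
}

// MockProvider is the local-development stand-in; it fills out with
// canned data instead of calling a real model.
type MockProvider struct{}

func NewMockProvider() *MockProvider { return &MockProvider{} }

func (*MockProvider) GenerateJSON(ctx context.Context, prompt string, out any) error {
	// A real mock would deterministically populate out for tests.
	return nil
}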

Architecture & design

High level

  • HTTP front-end receives user requests and starts a Temporal workflow.
  • Temporal workflow executes exploration activities concurrently, aggregates their results, computes a score, and invokes a storage activity to persist the analysis.
  • Storage is a Postgres-backed store accessed via the AnalysisRepository interface.

Key components

  • cmd/server — process entrypoint: initializes Temporal client, starts Temporal worker, sets HTTP routes.
  • internal/workflow — Temporal workflow definition (AnalysisWorkflow).
  • internal/activity — exploration activities (Fundamentals, Sentiment) and storage activity for persistence.
  • internal/service — AnalysisService for workflow orchestration (decouples HTTP handlers from Temporal).
  • internal/domain — domain types (Signal, ExplorationResult), scoring function, and validation logic.
  • internal/platform/db — PostgreSQL storage with AnalysisRepository interface for loose coupling.
  • internal/platform/llm — LLM Provider interface and mock provider used by activities.
  • internal/platform/temporal — Temporal worker setup and activity registration.
  • internal/platform/logger — Logger interface for flexible logging implementations.

Data flow (conceptual)

  1. Client POSTs /analyze?ticker=XXX
  2. HTTP handler delegates to AnalysisService which starts a Temporal workflow.
  3. Workflow concurrently executes exploration activities:
    • Each activity calls the LLM provider to produce structured signals.
  4. Workflow aggregates exploration results, computes a score (0–100).
  5. Workflow calls StorageActivity.Save to write the analysis to PostgreSQL.
  6. Client can later GET /analysis/{workflowID} to retrieve the stored result.
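
A minimal sketch, using the Temporal Go SDK, of how steps 3–5 above might be expressed. The activity registration names ("Fundamentals", "Sentiment", "Save"), the retry policy, and the result shapes are assumptions for illustration; the real AnalysisWorkflow in internal/workflow will differ in detail.

package workflow

import (
	"time"

	"go.temporal.io/sdk/temporal"
	"go.temporal.io/sdk/workflow"
)

// Signal and ExplorationResult stand in for the domain types; the real
// definitions live in internal/domain.
type Signal struct {
	Name  string
	Value float64
}

type ExplorationResult struct {
	Kind    string
	Signals []Signal
}

// AnalysisWorkflow fans out the exploration activities, aggregates their
// signals into a score, and persists the result via the storage activity.
func AnalysisWorkflow(ctx workflow.Context, ticker string) error {
	ao := workflow.ActivityOptions{
		StartToCloseTimeout: 5 * time.Second, // the short timeout noted under "Design decisions"
		RetryPolicy:         &temporal.RetryPolicy{MaximumAttempts: 3},
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	// Run both explorations concurrently as futures.
	fundFut := workflow.ExecuteActivity(ctx, "Fundamentals", ticker)
	sentFut := workflow.ExecuteActivity(ctx, "Sentiment", ticker)

	var fund, sent ExplorationResult
	if err := fundFut.Get(ctx, &fund); err != nil {
		return err
	}
	if err := sentFut.Get(ctx, &sent); err != nil {
		return err
	}

	// In the real code this is domain.ScoreExplorations.
	score := scoreExplorations([]ExplorationResult{fund, sent})

	// Persist via the storage activity (StorageActivity.Save).
	return workflow.ExecuteActivity(ctx, "Save", ticker, score).Get(ctx, nil)
}

func scoreExplorations(results []ExplorationResult) float64 {
	// Placeholder for the 0–100 scoring in internal/domain.
	return 0
}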

Design decisions and trade-offs

  • Temporal for orchestration: chosen for durable, fault-tolerant async workflows; adds operational dependency but simplifies retries and visibility.
  • Service layer: AnalysisService abstracts workflow orchestration from HTTP handlers, improving testability and separation of concerns.
  • Interface-based dependencies: All external dependencies (DB, LLM, Logger) are accessed via interfaces, enabling easy mocking and implementation swapping.
  • Struct-based activities: Activities are methods on dependency-injected structs rather than closures, improving testability and reducing coupling (see the sketch after this list).
  • Repository pattern: Database operations use the AnalysisRepository interface, decoupling business logic from PostgreSQL.
  • Activities use short StartToClose timeouts (5s) to keep workflows responsive; heavy work should be moved to separate background tasks or have increased activity timeouts.
  • Persistence is intentionally minimal and synchronous (DB write in storage activity) to keep state consistent and auditable.
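
A minimal sketch of the struct-based activity pattern described above: the LLM provider is injected into the activity struct and the activity is a method on it, so tests can supply a mock. The SentimentResult shape, prompt, and exact Provider signature are illustrative assumptions.

package activity

import (
	"context"
	"fmt"
)

// Provider matches the LLM abstraction sketched under "Key features".
type Provider interface {
	GenerateJSON(ctx context.Context, prompt string, out any) error
}

// ExplorationActivity holds its dependencies explicitly, so tests can
// inject a mock Provider instead of a real LLM client.
type ExplorationActivity struct {
	LLM Provider
}

// SentimentResult is an illustrative typed target for GenerateJSON.
type SentimentResult struct {
	Signals []struct {
		Name  string  `json:"name"`
		Value float64 `json:"value"`
	} `json:"signals"`
}

// Sentiment is registered as a Temporal activity; ctx is the activity
// context supplied by the worker (carries deadlines and cancellation).
func (a *ExplorationActivity) Sentiment(ctx context.Context, ticker string) (SentimentResult, error) {
	var res SentimentResult
	prompt := fmt.Sprintf("Extract sentiment signals for %s as JSON.", ticker)
	if err := a.LLM.GenerateJSON(ctx, prompt, &res); err != nil {
		return SentimentResult{}, err
	}
	return res, nil
}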

Statefulness

  • HTTP server is stateless with respect to analysis state; workflow state is maintained by Temporal and persisted in Temporal’s DB.
  • Analysis results are persisted to Postgres for durable retrieval and reporting.

Sync vs async

  • Client interaction is asynchronous: POST returns a workflow execution ID immediately; processing happens in the background in Temporal workers.

API boundaries

  • HTTP API is a thin orchestration surface that triggers Temporal workflows and fetches stored artifacts from Postgres.

Tech stack

  • Language: Go (go 1.25.3 as declared in go.mod)
  • Orchestration: Temporal (go.temporal.io/sdk v1.39.0)
  • Database: PostgreSQL (via github.com/lib/pq and database/sql)
  • Internal: small custom LLM interface and mock provider
  • Tooling: Go toolchain (go run, go test); Docker/Compose files are included for local development

Project structure (brief)

  • cmd/server/ – Application entrypoint, HTTP handlers, and server initialization
  • internal/workflow/ – Temporal workflow definitions
  • internal/activity/ – Temporal activities (ExplorationActivity, StorageActivity)
  • internal/domain/ – Domain models, validation, and scoring logic
  • internal/service/ – Service layer for business logic orchestration
  • internal/platform/db/ – PostgreSQL store and AnalysisRepository interface
  • internal/platform/llm/ – LLM Provider interface and mock implementation
  • internal/platform/logger/ – Logger interface for flexible logging
  • internal/platform/temporal/ – Temporal worker and activity registration
  • migrations/ – Database schema migrations
  • Makefile – Development tasks and build commands
  • DEVELOPMENT.md – Detailed development guide

Key refactored components (as of latest update):

  • ExplorationActivity struct: Consolidates fundamentals and sentiment exploration with explicit LLM provider injection
  • StorageActivity struct: Handles persistence with injected repository dependency
  • AnalysisRepository interface: Abstracts database operations for loose coupling
  • AnalysisService: Orchestrates workflow execution, decoupling HTTP handlers from Temporal
  • Domain validation: Centralized ticker and request ID validation
  • Logger interface: Abstracted logging for flexible implementations

Setup & usage

Prerequisites

  • Go 1.25+ toolchain
  • Docker & Docker Compose (for running Temporal and PostgreSQL locally)

Quick Start (Recommended)

Run a single command to start everything:

make dev

This will:

  1. Start PostgreSQL and Temporal in Docker
  2. Wait for services to be ready
  3. Create the database schema
  4. Start the FIP server

The server will be available at http://localhost:8080

Environment Variables

  • TEMPORAL_ADDRESS: Temporal server address (default: localhost:7233)
  • DATABASE_URL: PostgreSQL connection string (default: postgres://fip:fip@localhost/fip)
  • HTTP_ADDR: Server listen address (default: :8080)
  • API_KEY: Optional API key for authentication
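
A minimal sketch of how these variables might be loaded with the documented defaults applied. The Config struct, the package name, and the getenv helper are illustrative, not the repository's actual configuration code.

package config

import "os"

// Config gathers the environment-driven settings listed above.
type Config struct {
	TemporalAddress string
	DatabaseURL     string
	HTTPAddr        string
	APIKey          string // optional
}

// getenv returns the value of key, or def when the variable is unset.
func getenv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

// Load applies the defaults documented above for any unset variable.
func Load() Config {
	return Config{
		TemporalAddress: getenv("TEMPORAL_ADDRESS", "localhost:7233"),
		DatabaseURL:     getenv("DATABASE_URL", "postgres://fip:fip@localhost/fip"),
		HTTPAddr:        getenv("HTTP_ADDR", ":8080"),
		APIKey:          os.Getenv("API_KEY"),
	}
}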

Development Commands

For a complete list of available commands:

make

Common commands:

  • make dev – Start everything (Docker + server)
  • make up – Start only Docker services (postgres, temporal)
  • make down – Stop all services
  • make run – Run server locally (requires services running)
  • make test – Run tests
  • make test-coverage – Generate coverage report
  • make build – Build binary to bin/fip
  • make lint – Check code quality
  • make fmt – Format code

See DEVELOPMENT.md for detailed development workflows.

Example requests

  • Start an analysis (returns workflow execution ID):

    curl -X POST "http://localhost:8080/analyze?ticker=AAPL"

    Response: 202 Accepted, body contains workflow execution ID.

  • Retrieve an analysis

    curl "http://localhost:8080/analysis/{workflowID}"

    Response: 200 OK with JSON: { "id": "{workflowID}", "ticker": "AAPL", "score": <numeric score>, "report": <report> }
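
A minimal sketch of what the POST /analyze handler behind these requests might look like. The AnalysisStarter interface, the StartAnalysis method name, and the package name are hypothetical; the real handler in cmd/server delegates to AnalysisService as described above.

package server

import (
	"context"
	"encoding/json"
	"net/http"
)

// AnalysisStarter is the slice of AnalysisService the handler needs;
// StartAnalysis is a hypothetical method name for illustration.
type AnalysisStarter interface {
	StartAnalysis(ctx context.Context, ticker string) (workflowID string, err error)
}

func analyzeHandler(svc AnalysisStarter) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ticker := r.URL.Query().Get("ticker")
		if ticker == "" {
			http.Error(w, "missing ticker", http.StatusBadRequest)
			return
		}
		id, err := svc.StartAnalysis(r.Context(), ticker)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		// 202: processing continues asynchronously in Temporal workers.
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusAccepted)
		json.NewEncoder(w).Encode(map[string]string{"workflow_id": id})
	}
}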

Example workflow (concise)

Inputs → processing → outputs

  • Input: ticker (string) from HTTP POST.
  • Processing:
    • Workflow launches Fundamentals and Sentiment activities concurrently.
    • Each activity calls LLM provider to GenerateJSON into a typed result (signals).
    • Workflow aggregates signals, computes score via domain.ScoreExplorations.
    • Workflow calls the storage activity (StorageActivity.Save) to persist the result.
  • Output:
    • Persisted analysis row in Postgres.
    • Workflow completion; client can retrieve stored result by workflow ID.
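
A minimal sketch of a signal-to-score conversion in the spirit of domain.ScoreExplorations, assuming each signal carries a bounded numeric value in [-1, 1]; the actual aggregation and weighting in the repository may differ.

package domain

// Signal is one structured observation produced by an exploration.
// The value range assumed here (-1..1) is illustrative.
type Signal struct {
	Name  string
	Value float64 // -1 (very negative) .. 1 (very positive), assumed range
}

// ExplorationResult groups the signals from one exploration activity.
type ExplorationResult struct {
	Kind    string
	Signals []Signal
}

// ScoreExplorations maps the mean signal value onto a 0–100 scale.
// The real implementation may weight signals or explorations differently.
func ScoreExplorations(results []ExplorationResult) float64 {
	var sum float64
	var n int
	for _, r := range results {
		for _, s := range r.Signals {
			sum += s.Value
			n++
		}
	}
	if n == 0 {
		return 50 // neutral when no signals were produced (assumption)
	}
	mean := sum / float64(n)   // in [-1, 1]
	return (mean + 1) / 2 * 100 // rescaled to [0, 100]
}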

Design principles

  • Explicit interfaces: LLM providers are abstracted to allow safe replacement and testing.
  • Fail-fast on config errors: DB connection is pinged early during initialization.
  • Small, focused activities: each activity has a single responsibility (exploration or persistence).
  • Observable and durable: Temporal provides durable state and visibility for long-running processes.
  • Minimal surface area for HTTP: HTTP layer only starts workflows and returns persisted results.

Limitations & Future Work

Known limitations

  • No production LLM adapter included: only a MockProvider exists (internal/platform/llm/mock.go). Replace NewMockProvider() with a production provider implementation to integrate real LLM services.
  • Database migrations are not applied by the process; migrations/ exists but the service expects the analyses table to be present.
  • The storage schema is assumed by code; the service does not validate schema beyond insertion attempts.
  • Activity timeouts are short (StartToCloseTimeout = 5s); long-running LLM calls may need larger timeouts or chunking.

Suggested minimal Postgres schema (conservative assumption; adapt as needed):

CREATE TABLE analyses (
  request_id TEXT PRIMARY KEY,
  ticker TEXT NOT NULL,
  status TEXT NOT NULL,
  score DOUBLE PRECISION,
  report JSONB
);
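
A minimal sketch of a Postgres-backed store against the suggested schema, using database/sql with lib/pq. The AnalysisRepository shape is inferred from the descriptions above, and the upsert on request_id is an assumption that illustrates the idempotency point raised under Implementation priorities.

package db

import (
	"context"
	"database/sql"

	_ "github.com/lib/pq" // registers the postgres driver
)

// Analysis mirrors the columns of the suggested schema above.
type Analysis struct {
	RequestID string
	Ticker    string
	Status    string
	Score     float64
	Report    []byte // JSONB payload
}

// AnalysisRepository is the abstraction the activities depend on.
type AnalysisRepository interface {
	Save(ctx context.Context, a Analysis) error
	GetByRequestID(ctx context.Context, id string) (Analysis, error)
}

// Store is the Postgres implementation.
type Store struct{ db *sql.DB }

// NewStore opens a connection pool and pings it so misconfiguration
// fails fast at startup (see "Design principles").
func NewStore(ctx context.Context, dsn string) (*Store, error) {
	d, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	if err := d.PingContext(ctx); err != nil {
		return nil, err
	}
	return &Store{db: d}, nil
}

// Save upserts on request_id, making repeated writes for the same
// request idempotent (an assumption, not necessarily current behavior).
func (s *Store) Save(ctx context.Context, a Analysis) error {
	_, err := s.db.ExecContext(ctx, `
		INSERT INTO analyses (request_id, ticker, status, score, report)
		VALUES ($1, $2, $3, $4, $5)
		ON CONFLICT (request_id) DO UPDATE
		SET ticker = EXCLUDED.ticker, status = EXCLUDED.status,
		    score = EXCLUDED.score, report = EXCLUDED.report`,
		a.RequestID, a.Ticker, a.Status, a.Score, a.Report)
	return err
}

// GetByRequestID assumes score and report are populated for completed analyses.
func (s *Store) GetByRequestID(ctx context.Context, id string) (Analysis, error) {
	var a Analysis
	err := s.db.QueryRowContext(ctx, `
		SELECT request_id, ticker, status, score, report
		FROM analyses WHERE request_id = $1`, id).
		Scan(&a.RequestID, &a.Ticker, &a.Status, &a.Score, &a.Report)
	return a, err
}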

Extension points

  • Implement a production LLM provider that satisfies internal/platform/llm.Provider.
  • Add automated migration/application of migrations/ at startup (or a CI job).
  • Add authentication/authorization on HTTP endpoints and more robust input validation.
  • Add tests around workflow behavior (integration tests with Temporal Test Server or local Temporal instance).

Implementation priorities

To move this prototype toward a more production-ready state, issues should be addressed roughly in the following order:

  1. Correctness & stability (blocking)

    • Fix Temporal worker/client lifecycle so there is a single owned client and a clean shutdown path.
    • Ensure activities use the proper Temporal activity context (for cancellation, deadlines, and heartbeats) and have realistic timeouts and retry policies.
    • Remove per-activity DB connection creation; share a properly configured Store instance and close it on shutdown.
    • Align the analyses schema between code, run-dev.sh, and any migrations; add basic idempotency guarantees on request_id.
  2. Security & configuration hygiene (high impact)

    • Stop committing real .env files; use .env.example and external secret management.
    • Introduce strict validation for external inputs (ticker, request IDs) and plan for auth/rate limiting on HTTP endpoints.
    • Centralize configuration (Temporal address, DB URL, ports, timeouts) in a validated config struct with clear dev vs prod modes.
  3. Observability & operator experience (high impact, moderate effort)

    • Introduce structured logging with consistent identifiers (request ID, workflow ID, ticker).
    • Add basic health/readiness checks for Temporal and Postgres, and set up minimal metrics (workflow/activity success, DB latency).
  4. Testing & safety net (medium impact, ongoing)

    • Add unit tests for domain scoring and key activities (with mocked LLM/storage).
    • Add workflow tests using the Temporal test framework and at least one end-to-end test that drives the HTTP API against Postgres.
  5. Polish & DX (lower risk, incremental)

    • Fill out the Makefile with standard targets (dev, test, lint, migrate) and ensure run-dev.sh delegates to the same tooling.
    • Refine scoring semantics and API responses as product requirements evolve.
