A small Go service that performs asynchronous, LLM-driven signal extraction and scoring for equity tickers using Temporal for orchestration and PostgreSQL for persistence.
FIP runs lightweight analysis workflows that call modular “exploration” activities (currently: fundamentals and sentiment). Each workflow collects structured signals from an LLM provider, converts those signals to a numeric score, persists the result, and exposes HTTP endpoints to start and retrieve analyses. The project is intended as a developer-facing prototype for integrating LLMs into evented batch analysis pipelines and as a foundation for implementing production LLM adapters and storage backends.
- Temporal-based workflow orchestration for reliable, asynchronous analysis execution.
- Pluggable LLM provider abstraction (`internal/platform/llm.Provider`) with a mock implementation for local development.
- Concurrent exploration activities (Fundamentals, Sentiment) executed by the workflow.
- Scoring function that converts exploration signals into a 0–100 score (`internal/domain.ScoreExplorations`).
- Persistence layer (Postgres) for completed analyses, with retrieval by request ID.
- Simple HTTP API:
- POST /analyze?ticker={TICKER} → starts a workflow; returns workflow execution ID (202)
- GET /analysis/{id} → returns stored analysis: id, ticker, score, report
- GET /health → basic liveness check
- HTTP front-end receives user requests and starts a Temporal workflow.
- Temporal workflow executes exploration activities concurrently, aggregates their results, computes a score, and invokes a storage activity to persist the analysis.
- Storage is a Postgres-backed store accessed via the `AnalysisRepository` interface.
- `cmd/server` — process entrypoint: initializes the Temporal client, starts the Temporal worker, and sets up HTTP routes.
- `internal/workflow` — Temporal workflow definition (`AnalysisWorkflow`).
- `internal/activity` — exploration activities (Fundamentals, Sentiment) and a storage activity for persistence.
- `internal/service` — `AnalysisService` for workflow orchestration (decouples HTTP handlers from Temporal).
- `internal/domain` — domain types (`Signal`, `ExplorationResult`), scoring function, and validation logic.
- `internal/platform/db` — PostgreSQL storage with the `AnalysisRepository` interface for loose coupling.
- `internal/platform/llm` — LLM `Provider` interface and mock provider used by activities.
- `internal/platform/temporal` — Temporal worker setup and activity registration.
- `internal/platform/logger` — `Logger` interface for flexible logging implementations.
- Client POSTs `/analyze?ticker=XXX`.
- HTTP handler delegates to `AnalysisService`, which starts a Temporal workflow.
- Workflow concurrently executes exploration activities:
  - Each activity calls the LLM provider to produce structured signals.
- Workflow aggregates exploration results and computes a score (0–100).
- Workflow calls `StorageActivity.Save` to write the analysis to PostgreSQL.
- Client can later GET `/analysis/{workflowID}` to retrieve the stored result.
- Temporal for orchestration: chosen for durable, fault-tolerant async workflows; adds operational dependency but simplifies retries and visibility.
- Service layer: `AnalysisService` abstracts workflow orchestration from HTTP handlers, improving testability and separation of concerns.
- Interface-based dependencies: all external dependencies (DB, LLM, Logger) are accessed via interfaces, enabling easy mocking and implementation swapping.
- Struct-based activities: Activities are methods on dependency-injected structs rather than closures, improving testability and reducing coupling.
- Repository pattern: database operations use the `AnalysisRepository` interface, decoupling business logic from PostgreSQL.
- Activities use short StartToClose timeouts (5s) to keep workflows responsive; heavy work should be moved to separate background tasks or given longer activity timeouts.
- Persistence is intentionally minimal and synchronous (DB write in storage activity) to keep state consistent and auditable.
Statefulness
- HTTP server is stateless with respect to analysis state; workflow state is maintained by Temporal and persisted in Temporal’s DB.
- Analysis results are persisted to Postgres for durable retrieval and reporting.
Sync vs async
- Client interaction is asynchronous: POST returns a workflow execution ID immediately; processing happens in the background in Temporal workers.
API boundaries
- HTTP API is a thin orchestration surface that triggers Temporal workflows and fetches stored artifacts from Postgres.
- Language: Go (go 1.25.3, as declared in `go.mod`)
- Orchestration: Temporal (go.temporal.io/sdk v1.39.0)
- Database: PostgreSQL (via github.com/lib/pq and database/sql)
- Internal: small custom LLM interface and mock provider
- Tooling: Go toolchain (`go run`, `go test`); Docker/Compose files are included for development
- `cmd/server/` – Application entrypoint, HTTP handlers, and server initialization
- `internal/workflow/` – Temporal workflow definitions
- `internal/activity/` – Temporal activities (ExplorationActivity, StorageActivity)
- `internal/domain/` – Domain models, validation, and scoring logic
- `internal/service/` – Service layer for business logic orchestration
- `internal/platform/db/` – PostgreSQL store and AnalysisRepository interface
- `internal/platform/llm/` – LLM Provider interface and mock implementation
- `internal/platform/logger/` – Logger interface for flexible logging
- `internal/platform/temporal/` – Temporal worker and activity registration
- `migrations/` – Database schema migrations
- `Makefile` – Development tasks and build commands
- `DEVELOPMENT.md` – Detailed development guide
Key refactored components (as of latest update):
- ExplorationActivity struct: Consolidates fundamentals and sentiment exploration with explicit LLM provider injection
- StorageActivity struct: Handles persistence with injected repository dependency
- AnalysisRepository interface: Abstracts database operations for loose coupling
- AnalysisService: Orchestrates workflow execution, decoupling HTTP handlers from Temporal
- Domain validation: Centralized ticker and request ID validation
- Logger interface: Abstracted logging for flexible implementations
- Go 1.25+ toolchain
- Docker & Docker Compose (for running Temporal and PostgreSQL locally)
Run a single command to start everything:
`make dev`

This will:
- Start PostgreSQL and Temporal in Docker
- Wait for services to be ready
- Create the database schema
- Start the FIP server
The server will be available at http://localhost:8080
- `TEMPORAL_ADDRESS`: Temporal server address (default: `localhost:7233`)
- `DATABASE_URL`: PostgreSQL connection string (default: `postgres://fip:fip@localhost/fip`)
- `HTTP_ADDR`: Server listen address (default: `:8080`)
- `API_KEY`: Optional API key for authentication
For a complete list of available commands:
`make`

Common commands:

- `make dev` – Start everything (Docker + server)
- `make up` – Start only Docker services (postgres, temporal)
- `make down` – Stop all services
- `make run` – Run server locally (requires services running)
- `make test` – Run tests
- `make test-coverage` – Generate coverage report
- `make build` – Build binary to `bin/fip`
- `make lint` – Check code quality
- `make fmt` – Format code
See DEVELOPMENT.md for detailed development workflows.
Example requests
- Start an analysis (returns workflow execution ID):

  `curl -X POST "http://localhost:8080/analyze?ticker=AAPL"`

  Response: 202 Accepted; the body contains the workflow execution ID.
- Retrieve an analysis:

  `curl "http://localhost:8080/analysis/{workflowID}"`

  Response: 200 OK with JSON: `{ "id": "{workflowID}", "ticker": "AAPL", "score": ..., "report": ... }`
Inputs → processing → outputs
- Input: ticker (string) from HTTP POST.
- Processing:
- Workflow launches Fundamentals and Sentiment activities concurrently.
- Each activity calls the LLM provider's `GenerateJSON` to produce a typed result (signals).
- Workflow aggregates signals and computes a score via `domain.ScoreExplorations`.
- Workflow calls the storage activity (`Storage.Save`) to persist the result.
- Output:
- Persisted analysis row in Postgres.
- Workflow completion; client can retrieve stored result by workflow ID.
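The repo's actual `domain.ScoreExplorations` logic is not shown here, but a plausible shape for mapping signals onto the documented 0–100 range can be sketched; the `Signal` type and the [-1, 1] convention below are assumptions for illustration only:

```go
package main

// Signal is an illustrative stand-in: a named observation with a value
// in [-1, 1] (negative = bearish, positive = bullish). The repo's actual
// Signal type and scoring rules may differ.
type Signal struct {
	Name  string
	Value float64
}

// scoreSignals maps the mean signal value from [-1, 1] onto [0, 100],
// clamping out-of-range inputs and treating "no signals" as neutral.
func scoreSignals(signals []Signal) float64 {
	if len(signals) == 0 {
		return 50
	}
	var sum float64
	for _, s := range signals {
		v := s.Value
		if v < -1 {
			v = -1
		} else if v > 1 {
			v = 1
		}
		sum += v
	}
	mean := sum / float64(len(signals))
	return (mean + 1) / 2 * 100
}
```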
- Explicit interfaces: LLM providers are abstracted to allow safe replacement and testing.
- Fail-fast on config errors: DB connection is pinged early during initialization.
- Small, focused activities: each activity has a single responsibility (exploration or persistence).
- Observable and durable: Temporal provides durable state and visibility for long-running processes.
- Minimal surface area for HTTP: HTTP layer only starts workflows and returns persisted results.
Known limitations
- No production LLM adapter included: only a `MockProvider` exists (`internal/platform/llm/mock.go`). Replace `NewMockProvider()` with a production provider implementation to integrate real LLM services.
- Database migrations are not applied by the process; `migrations/` exists, but the service expects the `analyses` table to be present.
- The storage schema is assumed by the code; the service does not validate the schema beyond insertion attempts.
- Activity timeouts are short (`StartToCloseTimeout = 5s`); long-running LLM calls may need larger timeouts or chunking.
Suggested minimal Postgres schema (conservative assumption; adapt as needed):
CREATE TABLE analyses (
request_id TEXT PRIMARY KEY,
ticker TEXT NOT NULL,
status TEXT NOT NULL,
score DOUBLE PRECISION,
report JSONB
);

Extension points

- Implement a production LLM provider that satisfies `internal/platform/llm.Provider`.
- Add automated application of `migrations/` at startup (or a CI job).
- Add authentication/authorization on HTTP endpoints and more robust input validation.
- Add tests around workflow behavior (integration tests with Temporal Test Server or local Temporal instance).
To move this prototype toward a more production-ready state, issues should be addressed roughly in the following order:
- Correctness & stability (blocking)
  - Fix the Temporal worker/client lifecycle so there is a single owned client and a clean shutdown path.
  - Ensure activities use the proper Temporal activity context (for cancellation, deadlines, and heartbeats) and have realistic timeouts and retry policies.
  - Remove per-activity DB connection creation; share a properly configured `Store` instance and close it on shutdown.
  - Align the `analyses` schema between code, `run-dev.sh`, and any migrations; add basic idempotency guarantees on `request_id`.
- Security & configuration hygiene (high impact)
  - Stop committing real `.env` files; use `.env.example` and external secret management.
  - Introduce strict validation for external inputs (ticker, request IDs) and plan for auth/rate limiting on HTTP endpoints.
  - Centralize configuration (Temporal address, DB URL, ports, timeouts) in a validated config struct with clear dev vs prod modes.
- Observability & operator experience (high impact, moderate effort)
  - Introduce structured logging with consistent identifiers (request ID, workflow ID, ticker).
  - Add basic health/readiness checks for Temporal and Postgres, and set up minimal metrics (workflow/activity success, DB latency).
- Testing & safety net (medium impact, ongoing)
  - Add unit tests for domain scoring and key activities (with mocked LLM/storage).
  - Add workflow tests using the Temporal test framework and at least one end-to-end test that drives the HTTP API against Postgres.
- Polish & DX (lower risk, incremental)
  - Fill out the `Makefile` with standard targets (`dev`, `test`, `lint`, `migrate`) and ensure `run-dev.sh` delegates to the same tooling.
  - Refine scoring semantics and API responses as product requirements evolve.