An intelligent stock management solution designed for hospitals, public distribution systems, and NGOs to monitor inventory health, predict demand, and automate reordering processes. Built on Snowflake's data platform with real-time monitoring capabilities.
Healthcare facilities and distribution systems face critical challenges:
- Stock-outs of essential medicines and supplies
- Over-stocking leading to waste and expiration
- Fragmented data across multiple systems
- Reactive rather than proactive inventory management
- Manual procurement processes prone to delays
A unified stock management system that provides:
- Real-time inventory visibility across all locations
- Early warning alerts for stock-outs
- Automated demand forecasting
- Data-driven reorder recommendations
- Comprehensive audit trail of procurement actions
Data Platform: Snowflake
- Worksheets/SQL: Core data transformations
- Dynamic Tables: Auto-refreshing metrics and calculations
- Streams: Change data capture for real-time alerts
- Tasks: Scheduled automation and orchestration
- Snowpark Python: Machine learning for demand forecasting
- Streamlit: Interactive web dashboard
Programming Languages:
- SQL (data layer)
- Python 3.9 (forecasting & visualization)
Key Libraries:
- snowflake-snowpark-python
- pandas
- plotly
- streamlit
Table: daily_stock
Primary transaction table storing daily inventory movements.
Columns:
- record_date (DATE): Transaction date
- location (VARCHAR): Facility/warehouse identifier
- item_name (VARCHAR): Product identifier
- opening_stock (NUMBER): Stock at start of day
- received (NUMBER): Units received
- issued (NUMBER): Units dispensed/sold
- closing_stock (NUMBER): Stock at end of day
- lead_time_days (NUMBER): Supplier lead time in days
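A minimal DDL sketch consistent with these columns (the SQL script shipped with the project is the authoritative definition):

```sql
CREATE OR REPLACE TABLE daily_stock (
    record_date     DATE,       -- transaction date
    location        VARCHAR,    -- facility/warehouse identifier
    item_name       VARCHAR,    -- product identifier
    opening_stock   NUMBER,     -- stock at start of day
    received        NUMBER,     -- units received
    issued          NUMBER,     -- units dispensed/sold
    closing_stock   NUMBER,     -- stock at end of day
    lead_time_days  NUMBER      -- supplier lead time in days
);
```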
Dynamic Table: stock_health_metrics
Auto-refreshed aggregated metrics.
Refresh Frequency: 1 hour
Calculates:
- avg_daily_usage: 30-day rolling average
- days_until_stockout: Current stock ÷ daily usage
- stock_status: CRITICAL / LOW / HEALTHY / OVERSTOCK
Dynamic Table: reorder_recommendations
Automated purchase suggestions.
Refresh Frequency: 1 hour
Calculates:
- recommended_reorder_qty: Safety stock - current stock
- priority: 1 (Critical) to 4 (Overstock)
Table: stock_alerts
Stream-powered alert log.
Populated by: process_stock_changes task
Triggers: Status changes to CRITICAL or LOW
Frequency: Every 5 minutes
Table: demand_forecasts
ML-generated predictions.
Generated by: forecast_demand procedure
Horizon: 14 days ahead
Method: 7-day moving average with trend adjustment

```sql
CREATE DATABASE stock_management;
CREATE SCHEMA inventory;
```

Creates an isolated workspace for all inventory objects.
Initial Load:
- INSERT statements for sample data
- Production: CSV upload via Snowflake UI or COPY INTO
Ongoing Updates:
- Daily batch loads from operational systems
- Real-time streaming for high-frequency updates
Dynamic Table: stock_health_metrics
Business logic:
- Aggregates last 30 days of transactions
- Calculates consumption velocity
- Determines stock status thresholds (see the SQL sketch after this list):
- CRITICAL: < lead_time_days coverage
- LOW: < 1.5x lead_time_days coverage
- OVERSTOCK: > 4x lead_time_days coverage
- HEALTHY: Everything else
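A minimal sketch of this dynamic table, assembled from the thresholds above and the threshold-customization snippet later in this document (simplified; the provided SQL script is authoritative):

```sql
CREATE OR REPLACE DYNAMIC TABLE stock_health_metrics
  TARGET_LAG = '1 hour'
  WAREHOUSE  = COMPUTE_WH
AS
SELECT
    location,
    item_name,
    AVG(issued) AS avg_daily_usage,                           -- 30-day average consumption
    MAX(lead_time_days) AS lead_time_days,
    MAX(closing_stock) AS current_stock,
    MAX(closing_stock) / NULLIF(AVG(issued), 0) AS days_until_stockout,
    CASE
        WHEN MAX(closing_stock) / NULLIF(AVG(issued), 0) < MAX(lead_time_days)       THEN 'CRITICAL'
        WHEN MAX(closing_stock) / NULLIF(AVG(issued), 0) < MAX(lead_time_days) * 1.5 THEN 'LOW'
        WHEN MAX(closing_stock) / NULLIF(AVG(issued), 0) > MAX(lead_time_days) * 4   THEN 'OVERSTOCK'
        ELSE 'HEALTHY'
    END AS stock_status
FROM daily_stock
WHERE record_date >= DATEADD(day, -30, CURRENT_DATE())
GROUP BY location, item_name;
```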
Dynamic Table: reorder_recommendations
Reorder formula:
recommended_qty = (avg_daily_usage × lead_time_days × 2) - current_stock
Safety factor of 2x provides buffer for demand variability.
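One way to express that formula (a sketch; it assumes avg_daily_usage, current_stock, and lead_time_days are carried on stock_health_metrics as in the previous sketch):

```sql
CREATE OR REPLACE DYNAMIC TABLE reorder_recommendations
  TARGET_LAG = '1 hour'
  WAREHOUSE  = COMPUTE_WH
AS
SELECT
    location,
    item_name,
    current_stock,
    ROUND((avg_daily_usage * lead_time_days * 2) - current_stock, 0) AS recommended_reorder_qty,
    CASE stock_status
        WHEN 'CRITICAL'  THEN 1
        WHEN 'LOW'       THEN 2
        WHEN 'HEALTHY'   THEN 3
        WHEN 'OVERSTOCK' THEN 4
    END AS priority
FROM stock_health_metrics
WHERE (avg_daily_usage * lead_time_days * 2) > current_stock;  -- only items below safety stock
```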
Stream: stock_health_changes
Captures all INSERT/UPDATE/DELETE operations on stock_health_metrics.
Task: process_stock_changes
Schedule: Every 5 minutes
Condition: WHEN SYSTEM$STREAM_HAS_DATA('stock_health_changes')
Action: Insert alerts into stock_alerts table
Benefit: Alerts fire only when status actually changes, avoiding spam.
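A sketch of this stream/task pair, assuming a minimal stock_alerts layout of (alert_timestamp, location, item_name, stock_status); the actual column list lives in the provided script:

```sql
CREATE OR REPLACE STREAM stock_health_changes
  ON DYNAMIC TABLE stock_health_metrics;

CREATE OR REPLACE TASK process_stock_changes
  WAREHOUSE = COMPUTE_WH
  SCHEDULE  = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('stock_health_changes')
AS
  INSERT INTO stock_alerts (alert_timestamp, location, item_name, stock_status)
  SELECT CURRENT_TIMESTAMP(), location, item_name, stock_status
  FROM stock_health_changes
  WHERE METADATA$ACTION = 'INSERT'            -- new row versions only
    AND stock_status IN ('CRITICAL', 'LOW');
```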
Procedure: forecast_demand
Algorithm:
- Extract last 30 days of usage data
- Calculate 7-day moving average
- Detect linear trend (recent vs. older periods)
- Project 14 days forward
- Apply non-negativity constraint
Returns: Table of (forecast_date, predicted_usage) pairs
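The procedure itself is Snowpark Python, but its core arithmetic can be sketched in plain SQL against daily_stock (the location/item literals are illustrative):

```sql
WITH usage AS (
    SELECT record_date, issued
    FROM daily_stock
    WHERE location = 'Hospital A'
      AND item_name = 'Paracetamol 500mg'
      AND record_date >= DATEADD(day, -30, CURRENT_DATE())
),
windows AS (
    SELECT
        AVG(CASE WHEN record_date >= DATEADD(day, -7, CURRENT_DATE())
                 THEN issued END) AS recent_ma7,   -- 7-day moving average
        AVG(CASE WHEN record_date <  DATEADD(day, -7,  CURRENT_DATE())
                  AND record_date >= DATEADD(day, -14, CURRENT_DATE())
                 THEN issued END) AS prior_ma7     -- the 7 days before that
    FROM usage
)
SELECT
    DATEADD(day, g.seq + 1, CURRENT_DATE()) AS forecast_date,
    -- linear trend per day = (recent - prior) / 7, clamped at zero
    GREATEST(0, w.recent_ma7 + (w.recent_ma7 - w.prior_ma7) / 7 * (g.seq + 1)) AS predicted_usage
FROM windows w,
     (SELECT SEQ4() AS seq FROM TABLE(GENERATOR(ROWCOUNT => 14))) g
ORDER BY forecast_date;
```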
Task: refresh_forecasts
Schedule: Daily at 6 AM UTC
Action: Regenerate forecasts for all active items
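A plausible shape for this task (a sketch; the procedure's exact signature for the all-items run is defined in the provided script):

```sql
CREATE OR REPLACE TASK refresh_forecasts
  WAREHOUSE = COMPUTE_WH
  SCHEDULE  = 'USING CRON 0 6 * * * UTC'   -- daily at 6 AM UTC
AS
  CALL forecast_demand();                  -- hypothetical all-items invocation
```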
Procedure: check_critical_stock
Manual execution to validate alert logic:
```sql
CALL check_critical_stock();
```

Returns count of items in CRITICAL status.
Framework: Streamlit in Snowflake
Layout: Wide mode, 4-tab interface
Data Connection: Snowpark session (auto-authenticated)
Tab: Heatmap
Purpose: Visual overview of inventory status across locations
Implementation:
```python
df.pivot_table(index='ITEM_NAME', columns='LOCATION', values='DAYS_UNTIL_STOCKOUT')
```

Visualization: Plotly heatmap
- Green: Healthy stock levels
- Yellow: Approaching reorder point
- Red: Critical/imminent stock-out
User Benefit: Instantly identify problem areas across entire inventory.
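The pivot above can be fed by a simple query against the metrics table (a sketch; the dashboard issues it through the Snowpark session):

```sql
SELECT item_name, location, days_until_stockout
FROM stock_health_metrics;
```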
Tab: Alerts
Purpose: Actionable notifications of stock issues
Data Source: stock_alerts table (last 50 records)
Key Metric: Critical Alerts count
Features:
- Chronological display (newest first)
- Status-based filtering
- Direct link to problematic items
User Benefit: Prioritize immediate attention to critical items.
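A sketch of the query behind this tab, per the data-source description above:

```sql
SELECT *
FROM stock_alerts
ORDER BY alert_timestamp DESC   -- newest first
LIMIT 50;
```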
Tab: Forecasts
Purpose: Predict future consumption patterns
Interaction Flow:
- User selects location (dropdown)
- User selects item (filtered by location)
- Click "Generate Forecast" button
- System calls forecast_demand stored procedure
- Display 14-day projection as line chart
Visualization: Streamlit native line chart
- X-axis: Future dates
- Y-axis: Predicted usage units
Edge Cases:
- < 7 days of data: Warning message displayed
- Empty result: Graceful degradation
User Benefit: Plan procurement ahead of actual stock-outs.
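What the "Generate Forecast" button effectively issues (the parameter values are illustrative samples from the appendix):

```sql
CALL forecast_demand('Hospital A', 'Paracetamol 500mg');
```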
Tab: Reorder List
Purpose: Export-ready purchase orders
Data Source: reorder_recommendations dynamic table
Metrics Display:
- Total items needing reorder
- Aggregate units to purchase
Table Columns:
- Location
- Item Name
- Current Stock
- Recommended Quantity
- Priority (1-4 scale)
Export Feature:
```python
st.download_button("Download Purchase Order", csv_data, "reorder_list.csv")
```

User Benefit: One-click generation of procurement worksheets.
- Snowflake Account (Trial or Standard edition)
- Warehouse (COMPUTE_WH or equivalent)
- Permissions (example grants are sketched after this list):
- CREATE DATABASE
- CREATE SCHEMA
- CREATE TABLE/PROCEDURE/TASK
- EXECUTE TASK
- CREATE STREAMLIT
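A sketch of how those privileges might be granted to a hypothetical deployment role (stock_deployer is illustrative, not part of the project script):

```sql
CREATE ROLE IF NOT EXISTS stock_deployer;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE stock_deployer;
GRANT EXECUTE TASK    ON ACCOUNT TO ROLE stock_deployer;
-- Once the database and schema exist:
GRANT CREATE SCHEMA ON DATABASE stock_management TO ROLE stock_deployer;
GRANT CREATE TABLE, CREATE PROCEDURE, CREATE TASK, CREATE STREAMLIT
  ON SCHEMA stock_management.inventory TO ROLE stock_deployer;
```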
- Open Snowflake Web UI
- Navigate to: Worksheets > + New Worksheet
- Copy entire SQL script from provided document
- Execute sequentially (do not run all at once)
- Verify each section completes without errors
Key Checkpoints:

```sql
-- After table creation
SELECT COUNT(*) FROM daily_stock;           -- Should return 7

-- After dynamic tables
SELECT COUNT(*) FROM stock_health_metrics;  -- Should return 2-3

-- After tasks
SHOW TASKS;                                 -- Verify process_stock_changes exists
```

Resume the tasks (they are created in a suspended state):

```sql
ALTER TASK process_stock_changes RESUME;
ALTER TASK refresh_forecasts RESUME;
```

Verify status:

```sql
SHOW TASKS;
-- STATE column should show 'started'
```

- Navigate to: Streamlit > + Streamlit App
- Name: "Stock Management Dashboard"
- Warehouse: COMPUTE_WH
- Database: stock_management
- Schema: inventory
- Copy Python code from frontend file
- Click "Run"
Troubleshooting:
- Error: "Table not found" → Verify USE DATABASE/SCHEMA context
- Error: "Procedure not found" → Check forecast_demand was created
- Slow loading → Increase warehouse size (M → L)
Option A: CSV Upload
- Navigate to: Data > Add Data > Load Data
- Select daily_stock table
- Upload CSV file
- Map columns (must match schema exactly)
Option B: SQL COPY INTO

```sql
COPY INTO daily_stock
FROM @my_stage/stock_data.csv
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```

Morning Routine (8:00 AM):
- Open Streamlit dashboard
- Check "Alerts" tab for critical items
- Review "Heatmap" for overall health
- Export reorder list if items in CRITICAL status
Weekly Planning (Monday):
- Review "Forecasts" tab for high-usage items
- Generate 14-day projections
- Compare to current stock levels
- Schedule proactive orders
Monthly Review:
- Analyze alert frequency by location/item
- Adjust safety stock thresholds if needed
- Review forecast accuracy
- Update lead times if supplier performance changed
CRITICAL Status:
- Action: Immediate emergency order
- Timeline: Same-day processing
- Communication: Notify facility managers
LOW Status:
- Action: Regular order process
- Timeline: Within 2 business days
- Communication: Standard procurement workflow
High Confidence (stable patterns):
- Historical usage shows consistent trend
- Seasonal variations accounted for
- Use for long-term planning
Low Confidence (erratic patterns):
- New item with limited history
- Recent demand spikes/drops
- Verify with facility managers before major orders
Edit stock_health_metrics dynamic table:
```sql
-- Current: CRITICAL if < lead_time_days coverage
-- To change to 1.5x lead time:
WHEN MAX(closing_stock) / NULLIF(AVG(issued), 0) < (MAX(lead_time_days) * 1.5)
THEN 'CRITICAL'
```

Edit reorder_recommendations dynamic table:

```sql
-- Current: 2x safety factor
-- To change to 3x:
ROUND((avg_daily_usage * lead_time_days * 3) - current_stock, 0)
```

Adjust task schedules (suspend a task before altering it, then resume):

```sql
-- Change alert checking to every 15 minutes:
ALTER TASK process_stock_changes SET SCHEDULE = '15 MINUTE';
-- Change forecast refresh to twice daily:
ALTER TASK refresh_forecasts SET SCHEDULE = 'USING CRON 0 6,18 * * * UTC';
```

Adding new locations or items is simply an INSERT into daily_stock:
```sql
INSERT INTO daily_stock VALUES
('2026-01-09', 'New Clinic C', 'Aspirin 100mg', 500, 0, 45, 455, 10);
```

Dynamic tables will automatically include new entries on next refresh.
Daily:

```sql
-- Verify tasks are running
SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY())
WHERE NAME = 'PROCESS_STOCK_CHANGES'
ORDER BY SCHEDULED_TIME DESC
LIMIT 5;

-- Check for errors
SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY())
WHERE STATE = 'FAILED'
AND SCHEDULED_TIME > DATEADD(day, -1, CURRENT_TIMESTAMP());
```

Weekly:

```sql
-- Audit alert generation
SELECT
    DATE_TRUNC('day', alert_timestamp) AS alert_date,
    stock_status,
    COUNT(*) AS alert_count
FROM stock_alerts
WHERE alert_timestamp > DATEADD(day, -7, CURRENT_TIMESTAMP())
GROUP BY 1, 2
ORDER BY 1 DESC;
```

If dashboard loads slowly:
- Increase warehouse size: `ALTER WAREHOUSE COMPUTE_WH SET WAREHOUSE_SIZE = 'LARGE';`
- Add result caching: Streamlit caches automatically for 1 hour
- Reduce data retention: Archive records older than 90 days
If tasks are delayed:
- Check warehouse auto-suspend: Set to minimum 5 minutes
- Reduce TARGET_LAG on dynamic tables if data allows
- Consider dedicated warehouse for task execution
Recommended retention periods:
- daily_stock: 2 years (regulatory compliance)
- stock_alerts: 6 months (operational history)
- demand_forecasts: 3 months (validation tracking)
Archive older data:
```sql
CREATE TABLE daily_stock_archive AS
SELECT * FROM daily_stock
WHERE record_date < DATEADD(year, -2, CURRENT_DATE());

DELETE FROM daily_stock
WHERE record_date < DATEADD(year, -2, CURRENT_DATE());
```

Issue: Dynamic tables not refreshing
Solution:
```sql
-- Check target lag settings
SHOW DYNAMIC TABLES;
-- Manually refresh if needed
ALTER DYNAMIC TABLE stock_health_metrics REFRESH;
```

Issue: Forecasts returning empty results
Cause: Insufficient historical data (< 7 days)
Solution: Wait for more data to accumulate, or lower the minimum-history threshold in the forecast_demand procedure
Issue: Tasks not executing
Solution:
```sql
-- Verify task is resumed
ALTER TASK process_stock_changes RESUME;
-- Check warehouse availability
SHOW WAREHOUSES;
-- Ensure COMPUTE_WH is not suspended
```

Issue: Streamlit session timeout
Solution: Refresh the browser page; the Snowpark session auto-reconnects.
```sql
-- Create read-only role for dashboard users
CREATE ROLE stock_viewer;
GRANT USAGE ON DATABASE stock_management TO ROLE stock_viewer;
GRANT USAGE ON SCHEMA inventory TO ROLE stock_viewer;
GRANT SELECT ON ALL TABLES IN SCHEMA inventory TO ROLE stock_viewer;
GRANT SELECT ON ALL VIEWS IN SCHEMA inventory TO ROLE stock_viewer;

-- Create admin role for data management
CREATE ROLE stock_admin;
GRANT ALL ON DATABASE stock_management TO ROLE stock_admin;
```

- No personally identifiable information (PII) stored
- Location codes instead of physical addresses
- Generic item names without pricing information
```sql
-- Track who accessed sensitive data
SELECT
    user_name,
    query_text,
    start_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE query_text ILIKE '%stock_health_metrics%'
ORDER BY start_time DESC
LIMIT 100;
```
- Multi-tier Forecasting
  - ARIMA models for complex patterns
  - Seasonal decomposition
  - External factors (disease outbreaks, campaigns)
- Supplier Integration
  - API connections to vendor systems
  - Automated PO submission
  - Delivery tracking
- Mobile Notifications
  - SMS/email alerts for critical stock-outs
  - Push notifications via mobile app
  - Escalation workflows
- Advanced Analytics
  - ABC analysis (classify by importance)
  - Cost optimization
  - Expiry tracking and FEFO logic
- Collaboration Features
  - Comments on specific items
  - Approval workflows
  - Inter-facility transfers
- Snowflake Documentation: https://docs.snowflake.com
- Streamlit Documentation: https://docs.streamlit.io
- Snowpark Python: https://docs.snowflake.com/en/developer-guide/snowpark/python
For testing with larger datasets:

```python
import pandas as pd
import numpy as np
from datetime import datetime, timedelta

locations = ['Hospital A', 'Hospital B', 'Clinic C', 'Warehouse D']
items = ['Paracetamol 500mg', 'Insulin 100IU', 'Amoxicillin 250mg',
         'Ibuprofen 400mg', 'Metformin 500mg']

data = []
for loc in locations:
    for item in items:
        for i in range(90):
            date = datetime.now() - timedelta(days=90 - i)
            # Carry yesterday's closing stock forward; seed each series at 1000
            opening = 1000 if i == 0 else data[-1][6]
            issued = np.random.randint(50, 200)
            # Restock 500 units whenever stock would otherwise fall below 200
            received = 500 if opening - issued < 200 else 0
            closing = opening + received - issued
            data.append([date.date(), loc, item, opening, received,
                         issued, closing, np.random.choice([5, 7, 10, 14])])

df = pd.DataFrame(data, columns=['record_date', 'location', 'item_name',
                                 'opening_stock', 'received', 'issued',
                                 'closing_stock', 'lead_time_days'])
df.to_csv('sample_stock_data.csv', index=False)
```

This project is designed for AI for Good initiatives. Contributions welcome:
- Bug fixes and optimization
- Additional forecasting algorithms
- Integration with ERP/pharmacy systems
- Localization and internationalization
Open source for healthcare and humanitarian applications.
For technical support or implementation questions, consult Snowflake community forums or engage professional services for enterprise deployments.
Version: 1.0
Last Updated: January 2026
Maintained By: Internal Development Team