Configuration¶
Configuration File¶
On first import, chronos-lab automatically creates ~/.chronos_lab/.env from the bundled template. This file contains all configuration settings.
Location¶
~/.chronos_lab/.env
Default Contents¶
# ArcticDB Settings
ARCTICDB_DEFAULT_BACKEND=lmdb
ARCTICDB_LOCAL_PATH=~/.chronos_lab/arcticdb
ARCTICDB_DEFAULT_LIBRARY_NAME=uscomp
#ARCTICDB_S3_BUCKET=
# Dataset Settings
DATASET_LOCAL_PATH=~/.chronos_lab/datasets
#DATASET_DDB_TABLE_NAME=
#DATASET_DDB_MAP='{
#  "ddb_watchlist": {
#    "pk": "map#ibpm#watchlist",
#    "sk": "name"
#  },
#  "ddb_securities_intrinio": {
#    "pk": "map#intrinio#securities"
#  },
#  "ddb_ohlcv_anomalies": {
#    "pk": "chronos_lab#ohlcv_anomalies"
#  }
#}'
# Hamilton Driver Settings
HAMILTON_CACHE_PATH=~/.chronos_lab/hamilton_cache
# Interactive Brokers Settings
IB_GATEWAY_HOST=127.0.0.1
IB_GATEWAY_PORT=4001
IB_GATEWAY_READONLY=True
IB_GATEWAY_CLIENT_ID=12
#IB_GATEWAY_ACCOUNT=
#IB_REF_DATA_CONCURRENCY=
#IB_HISTORICAL_DATA_CONCURRENCY=
# Intrinio API Settings
#INTRINIO_API_KEY=
# Logging
LOG_LEVEL=WARNING
# Store Settings
STORE_LOCAL_PATH=~/.chronos_lab/store
#STORE_S3_BUCKET=
Configuration Options¶
ArcticDB Storage¶
ArcticDB provides high-performance time series storage with support for three backend types: local LMDB, AWS S3, and in-memory. The backend selection is controlled by ARCTICDB_DEFAULT_BACKEND and can be overridden per operation.
ARCTICDB_DEFAULT_BACKEND¶
Default storage backend for ArcticDB operations.
Valid values: lmdb (local), s3 (AWS S3), mem (in-memory)
Default: lmdb
Used by: ohlcv_to_arcticdb(), ohlcv_from_arcticdb(), and ArcDB class when backend parameter is not specified
Backend characteristics:
- lmdb: Local filesystem storage using LMDB. Fast, persistent, suitable for single-machine workflows
- s3: AWS S3 cloud storage. Scalable, distributed, suitable for multi-machine workflows and data sharing
- mem: In-memory storage. Fastest but not persistent. Only recommended for testing
Example:
# Use local LMDB backend (default)
ARCTICDB_DEFAULT_BACKEND=lmdb
# Use S3 backend for distributed workflows
ARCTICDB_DEFAULT_BACKEND=s3
# Use in-memory backend for testing
ARCTICDB_DEFAULT_BACKEND=mem
Override in code:
from chronos_lab.sources import ohlcv_from_arcticdb
from chronos_lab.storage import ohlcv_to_arcticdb
# Use default backend from configuration
prices = ohlcv_from_arcticdb(symbols=['AAPL'], period='1y')
# Override to use S3 backend for this operation
prices = ohlcv_from_arcticdb(
    symbols=['AAPL'],
    period='1y',
    backend='s3'
)
# Store to LMDB backend explicitly
ohlcv_to_arcticdb(
    ohlcv=prices,
    backend='lmdb',
    library_name='yfinance'
)
ARCTICDB_LOCAL_PATH¶
Local filesystem path for ArcticDB LMDB backend storage.
Default: ~/.chronos_lab/arcticdb
Supports: Tilde expansion (~)
Used when: ARCTICDB_DEFAULT_BACKEND=lmdb or when backend='lmdb' is specified in code
Example:
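Pointing storage at a larger data drive (path illustrative):

```shell
ARCTICDB_LOCAL_PATH=~/data/arcticdb
```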
ARCTICDB_S3_BUCKET¶
AWS S3 bucket name for ArcticDB S3 backend storage.
Default: None (S3 backend disabled)
Requires:
- AWS CLI configuration (see AWS S3 Setup below)
- ARCTICDB_DEFAULT_BACKEND=s3 or explicit backend='s3' in code
Used when: ARCTICDB_DEFAULT_BACKEND=s3 or when backend='s3' is specified in code
Example:
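Bucket name illustrative:

```shell
ARCTICDB_S3_BUCKET=my-arcticdb-bucket
```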
ARCTICDB_DEFAULT_LIBRARY_NAME¶
Default ArcticDB library name used when none is specified. Libraries provide logical separation of datasets within the same backend (similar to database schemas).
Default: uscomp
Used by: All ArcticDB operations when library_name parameter is not specified
Example:
# Use different libraries for different data sources
ARCTICDB_DEFAULT_LIBRARY_NAME=market_data
# Or organize by environment
ARCTICDB_DEFAULT_LIBRARY_NAME=production
Multiple libraries example:
from chronos_lab.storage import ohlcv_to_arcticdb
from chronos_lab.sources import ohlcv_from_arcticdb
# Store Yahoo Finance data in 'yfinance' library
ohlcv_to_arcticdb(ohlcv=yf_prices, library_name='yfinance')
# Store Intrinio data in 'intrinio' library
ohlcv_to_arcticdb(ohlcv=intrinio_prices, library_name='intrinio')
# Retrieve from specific library
prices = ohlcv_from_arcticdb(
    symbols=['AAPL'],
    period='1y',
    library_name='yfinance'
)
Dataset Storage¶
Datasets provide structured data storage for portfolio composition, watchlists, security metadata, and other non-time-series data. Datasets can be stored locally as JSON files or in AWS DynamoDB for distributed workflows.
Important: Datasets are for structured/metadata storage, NOT time series data. Use ArcticDB for OHLCV price data.
DATASET_LOCAL_PATH¶
Local filesystem path for dataset JSON file storage.
Default: ~/.chronos_lab/datasets
Supports: Tilde expansion (~)
Used by: to_dataset() and from_dataset() for local storage
Example:
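A custom dataset directory (path illustrative):

```shell
DATASET_LOCAL_PATH=~/data/datasets
```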
DATASET_DDB_TABLE_NAME¶
AWS DynamoDB table name for dataset storage. Required for DynamoDB-backed datasets (names starting with ddb_ prefix).
Default: None (DynamoDB disabled)
Requires:
- AWS CLI configuration (see AWS DynamoDB Setup below)
- DATASET_DDB_MAP configuration
Used by: to_dataset() and from_dataset() for datasets with ddb_ prefix
Example:
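Table name illustrative (any existing table with the pk/sk schema shown in AWS DynamoDB Setup works):

```shell
DATASET_DDB_TABLE_NAME=my-datasets-table
```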
DATASET_DDB_MAP¶
JSON string mapping dataset names to DynamoDB key structure. Defines partition key (pk) and sort key (sk) patterns for each DynamoDB dataset.
Default: None
Format: JSON object with dataset names as keys, each containing:
- pk: Partition key pattern (required)
- sk: Sort key field name (optional, defaults to dataset item key)
Example:
DATASET_DDB_MAP='{
  "ddb_watchlist": {
    "pk": "map#ibpm#watchlist",
    "sk": "name"
  },
  "ddb_securities_intrinio": {
    "pk": "map#intrinio#securities"
  },
  "ddb_ohlcv_anomalies": {
    "pk": "chronos_lab#ohlcv_anomalies"
  }
}'
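Because DATASET_DDB_MAP must be valid JSON, it is easy to paste a malformed value into .env. A small stdlib-only sanity check (this is a sketch independent of chronos-lab; it only validates the JSON shape documented above):

```python
import json

# The value you intend to place in DATASET_DDB_MAP (between the single quotes)
ddb_map = '''{
  "ddb_watchlist": {"pk": "map#ibpm#watchlist", "sk": "name"},
  "ddb_securities_intrinio": {"pk": "map#intrinio#securities"}
}'''

parsed = json.loads(ddb_map)  # raises json.JSONDecodeError if malformed
for name, keys in parsed.items():
    # DynamoDB-backed datasets must use the ddb_ prefix
    assert name.startswith("ddb_"), f"{name}: missing ddb_ prefix"
    # pk is required; sk is optional
    assert "pk" in keys, f"{name}: partition key pattern is required"
print(f"OK: {len(parsed)} dataset mappings")
```

Running this before editing .env catches quoting and comma mistakes early.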
Use Cases:
- Local datasets: Portfolio composition, custom watchlists, backtesting configurations
- DynamoDB datasets: Distributed workflows where multiple processes share datasets
File Storage¶
General-purpose file storage for plots, reports, and other binary content. Supports local filesystem and S3 backends.
Important: File storage is for arbitrary files (plots, PDFs, CSVs), NOT for time series data. Use ArcticDB for OHLCV price data and datasets for structured metadata.
STORE_LOCAL_PATH¶
Local filesystem path for general file storage.
Default: ~/.chronos_lab/store
Supports: Tilde expansion (~)
Used by: to_store() for saving plots, charts, and other generated files locally
Example:
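A custom store directory (path illustrative):

```shell
STORE_LOCAL_PATH=~/data/store
```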
STORE_S3_BUCKET¶
AWS S3 bucket name for general file storage.
Default: None (S3 storage disabled)
Requires: AWS CLI configuration (see AWS S3 Setup below)
Used by: to_store() when stores=['s3'] or stores=['local', 's3']
Example:
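Bucket name illustrative:

```shell
STORE_S3_BUCKET=my-charts-bucket
```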
Common Use Cases:
- Saving analysis reports and visualizations
- Sharing generated content across distributed systems
Hamilton Driver¶
Hamilton Driver settings.
HAMILTON_CACHE_PATH¶
Directory path for Hamilton Driver cache storage. Hamilton's caching system stores computation results to disk, enabling significant performance improvements for repeated calculations with the same inputs.
Default: ~/.chronos_lab/hamilton_cache
Supports: Tilde expansion (~)
Used by: AnalysisDriver class when enable_cache=True
Interactive Brokers¶
Interactive Brokers settings for connecting to IB Gateway or Trader Workstation (TWS) to retrieve real-time and historical market data.
Requirements:
- Interactive Brokers account (paper trading or live)
- IB Gateway or TWS running and configured to accept API connections
- chronos-lab[ib] extra installed
IB_GATEWAY_HOST¶
Hostname or IP address of the IB Gateway or TWS instance.
Default: 127.0.0.1 (localhost)
Used by: get_ib(), ohlcv_from_ib(), ohlcv_from_ib_async(), and IBMarketData.connect()
IB_GATEWAY_PORT¶
Port number for connecting to IB Gateway or TWS.
Default: 4001
Common ports:
- 4001: IB Gateway live trading
- 4002: IB Gateway paper trading
- 7496: TWS live trading
- 7497: TWS paper trading
Used by: get_ib(), ohlcv_from_ib(), ohlcv_from_ib_async(), and IBMarketData.connect()
IB_GATEWAY_READONLY¶
Read-only connection mode. When True, prevents order placement and account modifications.
Default: True
Valid values: True, False
Used by: IBMarketData.connect()
Important: Always use True for data retrieval workflows to prevent accidental trading operations.
IB_GATEWAY_CLIENT_ID¶
Unique client ID for the IB API connection. Each connection must have a unique client ID.
Default: 12
Valid values: Any positive integer
Used by: IBMarketData.connect()
Note: If running multiple applications connecting to the same Gateway/TWS instance, each must use a different client ID.
IB_GATEWAY_ACCOUNT¶
IB account identifier for the connection.
Default: None (uses primary account)
Format: Account number (e.g., DU1234567 for paper trading, U1234567 for live)
Used by: IBMarketData.connect()
Required when: Account has multiple sub-accounts or for explicit account selection
IB_REF_DATA_CONCURRENCY¶
Maximum number of concurrent reference data requests (contract details lookups) to IB API. Controls rate limiting for asynchronous operations.
Default: 20
Valid values: Positive integer (recommended: 10-50)
Used by: Async methods in IBMarketData class for contract qualification and details lookup
Note: Setting too high may trigger IB API rate limits. Adjust based on your connection type and IB account.
IB_HISTORICAL_DATA_CONCURRENCY¶
Maximum number of concurrent historical data requests to IB API. Controls rate limiting for asynchronous data retrieval operations.
Default: 20
Valid values: Positive integer (recommended: 10-50)
Used by: ohlcv_from_ib_async() and async methods in IBMarketData class
Note: IB API has rate limits on historical data requests. Adjust based on your account type and subscription level.
Intrinio API¶
INTRINIO_API_KEY¶
Your Intrinio API key for accessing institutional financial data.
Required for: Using ohlcv_from_intrinio() or securities_from_intrinio()
How to get: Sign up at intrinio.com
Example:
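With your key substituted for the placeholder (no quotes or surrounding spaces):

```shell
INTRINIO_API_KEY=your_api_key_here
```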
Logging¶
LOG_LEVEL¶
Logging level for chronos-lab operations.
Valid values: DEBUG, INFO, WARNING, ERROR, CRITICAL
Default: WARNING
Example:
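Enable verbose output during development:

```shell
LOG_LEVEL=DEBUG
```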
Environment Variable Overrides¶
All settings can be overridden using environment variables. This is useful for:
- CI/CD environments
- Docker containers
- Temporary configuration changes
Example:
export INTRINIO_API_KEY="my_api_key"
export DATASET_DDB_TABLE_NAME="prod-datasets"
export ARCTICDB_DEFAULT_LIBRARY_NAME="production"
export IB_GATEWAY_PORT="7497"
export IB_GATEWAY_ACCOUNT="U1234567"
export LOG_LEVEL="WARNING"
python my_script.py
Environment variables take precedence over .env file settings.
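The precedence rule can be illustrated with the standard library alone (a sketch; chronos-lab's actual settings loader is not shown here):

```python
import os

# Stand-in for values parsed from ~/.chronos_lab/.env
dotenv_values = {"LOG_LEVEL": "WARNING"}

# Simulates `export LOG_LEVEL=DEBUG` in the shell
os.environ["LOG_LEVEL"] = "DEBUG"

# The environment variable wins over the .env value
effective = os.environ.get("LOG_LEVEL", dotenv_values["LOG_LEVEL"])
print(effective)  # DEBUG
```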
AWS S3 Setup¶
To use ArcticDB with AWS S3 backend:
Step 1: Install Dependencies¶
Step 2: Configure AWS CLI¶
# Install AWS CLI (if not already installed)
# macOS
brew install awscli
# Linux
pip install awscli
# Configure credentials
aws configure
You'll be prompted for:
- AWS Access Key ID
- AWS Secret Access Key
- Default region name
- Default output format
This creates ~/.aws/credentials and ~/.aws/config.
Step 3: Set Environment Variables (Optional)¶
If using named AWS profiles:
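AWS SDKs honor the standard AWS_PROFILE variable (profile name illustrative):

```shell
export AWS_PROFILE=research
```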
Step 4: Configure chronos-lab¶
Edit ~/.chronos_lab/.env:
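Set the backend and bucket (bucket name illustrative):

```shell
ARCTICDB_DEFAULT_BACKEND=s3
ARCTICDB_S3_BUCKET=my-arcticdb-bucket
```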
Step 5: Verify¶
from chronos_lab.arcdb import ArcDB
# This will use S3 backend
ac = ArcDB(library_name='test')
print("✓ S3 backend configured successfully")
AWS DynamoDB Setup¶
To use datasets with AWS DynamoDB backend for distributed workflows:
Step 1: Install Dependencies¶
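The exact dependency set isn't recorded here; DynamoDB access from Python typically goes through boto3:

```shell
pip install boto3
```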
Step 2: Configure AWS CLI¶
Follow the same aws configure steps as in AWS S3 Setup above. This creates ~/.aws/credentials and ~/.aws/config.
Step 3: Create DynamoDB Table¶
Use an existing table, or create one with a partition key (pk) and sort key (sk):
aws dynamodb create-table \
  --table-name my-datasets-table \
  --attribute-definitions \
    AttributeName=pk,AttributeType=S \
    AttributeName=sk,AttributeType=S \
  --key-schema \
    AttributeName=pk,KeyType=HASH \
    AttributeName=sk,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST
Step 4: Configure chronos-lab¶
Edit ~/.chronos_lab/.env:
DATASET_DDB_TABLE_NAME=my-datasets-table
DATASET_DDB_MAP='{
  "ddb_securities": {
    "pk": "DATASET#securities",
    "sk": "ticker"
  },
  "ddb_portfolio": {
    "pk": "DATASET#portfolio",
    "sk": "symbol"
  }
}'
Step 5: Verify¶
from chronos_lab.storage import to_dataset
from chronos_lab.sources import from_dataset
# Write to DynamoDB
data = {
    'AAPL': {'name': 'Apple Inc.', 'sector': 'Technology'},
    'MSFT': {'name': 'Microsoft', 'sector': 'Technology'}
}
result = to_dataset(dataset_name='ddb_securities', dataset=data)
# Read from DynamoDB
securities = from_dataset(dataset_name='ddb_securities')
print(f"✓ DynamoDB backend configured successfully: {len(securities)} items")
Distributed Workflow Example:
One process writes datasets:
# Process 1: Update security metadata daily
from chronos_lab.sources import securities_from_intrinio
from chronos_lab.storage import to_dataset
securities = securities_from_intrinio()
to_dataset(dataset_name="ddb_securities", dataset=securities.to_dict(orient='index'))
Other processes read datasets:
# Process 2: Research workflow reads latest metadata
from chronos_lab.sources import from_dataset, ohlcv_from_arcticdb
securities = from_dataset(dataset_name='ddb_securities')
Configuration in Code¶
You can also access and use configuration programmatically:
from chronos_lab.settings import get_settings
settings = get_settings()
print(f"Intrinio API Key: {settings.intrinio_api_key}")
print(f"Dataset Local Path: {settings.dataset_local_path}")
print(f"Dataset DDB Table: {settings.dataset_ddb_table_name}")
print(f"Store Local Path: {settings.store_local_path}")
print(f"Store S3 Bucket: {settings.store_s3_bucket}")
print(f"ArcticDB Path: {settings.arcticdb_local_path}")
print(f"Default Library: {settings.arcticdb_default_library_name}")
print(f"IB Gateway Host: {settings.ib_gateway_host}")
print(f"IB Gateway Port: {settings.ib_gateway_port}")
print(f"IB Gateway Client ID: {settings.ib_gateway_client_id}")
print(f"Log Level: {settings.log_level}")
Multiple Environments¶
Development vs Production¶
Use different configuration files for different environments:
Development (~/.chronos_lab/.env):
DATASET_LOCAL_PATH=~/dev/datasets
STORE_LOCAL_PATH=~/dev/store
ARCTICDB_LOCAL_PATH=~/dev/arcticdb
ARCTICDB_DEFAULT_LIBRARY_NAME=dev
LOG_LEVEL=DEBUG
Production (environment variables):
export DATASET_DDB_TABLE_NAME=prod-datasets-table
export STORE_S3_BUCKET=prod-charts-bucket
export ARCTICDB_S3_BUCKET=prod-timeseries
export ARCTICDB_DEFAULT_LIBRARY_NAME=production
export IB_GATEWAY_HOST=prod-ib-gateway.internal
export IB_GATEWAY_PORT=4002
export IB_GATEWAY_ACCOUNT=U1234567
export LOG_LEVEL=WARNING
Docker¶
For Docker containers, mount configuration or use environment variables:
Option 1: Mount configuration file
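One way to mount the host configuration directory into the container (image name illustrative):

```shell
docker run -v ~/.chronos_lab:/root/.chronos_lab my-chronos-image
```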
Option 2: Environment variables
ENV INTRINIO_API_KEY=your_key
ENV DATASET_DDB_TABLE_NAME=my-datasets-table
ENV STORE_S3_BUCKET=my-charts-bucket
ENV ARCTICDB_S3_BUCKET=my-bucket
ENV IB_GATEWAY_HOST=host.docker.internal
ENV IB_GATEWAY_PORT=4001
ENV IB_GATEWAY_CLIENT_ID=12
ENV LOG_LEVEL=INFO
Troubleshooting¶
Configuration not loading¶
Symptom: Settings show None or defaults
Solution: Check file location and permissions:
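Confirm the file exists, is readable, and contains the expected settings:

```shell
ls -la ~/.chronos_lab/.env
cat ~/.chronos_lab/.env
```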
AWS S3 connection errors¶
Symptom: "Unable to locate credentials"
Solution: Verify AWS CLI configuration:
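Both commands below are standard AWS CLI; the second confirms your credentials actually authenticate:

```shell
aws configure list
aws sts get-caller-identity
```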
Intrinio API errors¶
Symptom: "Invalid API key"
Solution: Verify API key:
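Inspect the stored value directly:

```shell
grep INTRINIO_API_KEY ~/.chronos_lab/.env
```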
Make sure there are no extra spaces or quotes around the key.
Interactive Brokers connection errors¶
Symptom: "Connection refused" or "Failed to connect to IB"
Solution: Verify IB Gateway/TWS is running and configured:
1. Check that IB Gateway or TWS is running
2. Verify API connections are enabled (Configuration → API → Settings)
3. Confirm the port matches your configuration (4001/4002 for IB Gateway, 7496/7497 for TWS)
4. Check that the firewall allows connections on the specified port
5. Ensure the client ID is unique if running multiple applications
Best Practices¶
- Never commit .env files - Add them to .gitignore, use environment variables in CI/CD, and never store secrets in code
- Rotate API keys regularly - Update them in the configuration file
- Use separate configurations per environment - dev/staging/prod
- Monitor API usage - Especially for paid services like Intrinio
- Back up S3 buckets - Enable versioning and replication
- Use read-only mode for IB connections - Keep IB_GATEWAY_READONLY=True for data retrieval workflows
- Use unique client IDs - Assign a different IB_GATEWAY_CLIENT_ID value to each application connecting to IB