Databento
NautilusTrader provides an adapter for integrating with the Databento API and Databento Binary Encoding (DBN) format data. As Databento is purely a market data provider, there is no execution client provided - although a sandbox environment with simulated execution could still be set up. It's also possible to match Databento data with Interactive Brokers execution, or to calculate traditional asset class signals for crypto trading.
The capabilities of this adapter include:
- Loading historical data from DBN files and decoding into Nautilus objects for backtesting or writing to the data catalog.
- Requesting historical data which is decoded to Nautilus objects to support live trading and backtesting.
- Subscribing to real-time data feeds which are decoded to Nautilus objects to support live trading and sandbox environments.
Databento currently offers 125 USD in free data credits (historical data only) for new account sign-ups.
With careful requests, this is more than enough for testing and evaluation purposes. We recommend you make use of the /metadata.get_cost endpoint.
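For example, you can estimate the cost of a historical request before spending credits. A minimal sketch using the official `databento` Python client (the dataset, symbols, and date range here are illustrative assumptions):

```python
import databento as db

# The Historical client sources the DATABENTO_API_KEY env var when no key is passed
client = db.Historical()

# Estimate the cost (in USD) of a historical request before submitting it
cost = client.metadata.get_cost(
    dataset="GLBX.MDP3",  # Illustrative dataset
    symbols=["ES.FUT"],   # Illustrative parent symbol
    stype_in="parent",
    schema="trades",
    start="2024-01-01",
    end="2024-01-31",
)
print(f"Estimated cost: {cost:.2f} USD")
```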
Overview
The adapter implementation takes the databento-rs crate as a dependency, which is the official Rust client library provided by Databento.
There is no need for an optional extra installation of Databento, as the core components of the adapter are compiled as static libraries and linked automatically during the build process.
The following adapter classes are available:
- `DatabentoDataLoader`: Loads Databento Binary Encoding (DBN) data from files.
- `DatabentoInstrumentProvider`: Integrates with the Databento API (HTTP) to provide latest or historical instrument definitions.
- `DatabentoHistoricalClient`: Integrates with the Databento API (HTTP) for historical market data requests.
- `DatabentoLiveClient`: Integrates with the Databento API (raw TCP) for subscribing to real-time data feeds.
- `DatabentoDataClient`: Provides a `LiveMarketDataClient` implementation for running a trading node in real time.
As with the other integration adapters, most users will simply define a configuration for a live trading node (covered below), and won't necessarily need to work with these lower-level components directly.
Examples
You can find live example scripts here.
Databento documentation
Databento provides extensive documentation for new users, which can be found in the Databento new users guide. We recommend also referring to the Databento documentation in conjunction with this NautilusTrader integration guide.
Databento Binary Encoding (DBN)
Databento Binary Encoding (DBN) is an extremely fast message encoding and storage format for normalized market data. The DBN specification includes a simple, self-describing metadata header and a fixed set of struct definitions, which enforce a standardized way to normalize market data.
The integration provides a decoder which can convert DBN format data to Nautilus objects.
The same Rust-implemented Nautilus decoder is used for:
- Loading and decoding DBN files from disk.
- Decoding historical and live data in real time.
Supported schemas
The following Databento schemas are supported by NautilusTrader:
Databento schema | Nautilus data type | Description |
---|---|---|
MBO | OrderBookDelta | Market by order (L3). |
MBP_1 | QuoteTick (and TradeTick for trade events) | Market by price (L1). |
MBP_10 | OrderBookDepth10 | Market depth (L2). |
BBO_1S | QuoteTick | 1-second best bid/offer. |
BBO_1M | QuoteTick | 1-minute best bid/offer. |
CMBP_1 | QuoteTick (and TradeTick for trade events) | Consolidated MBP across venues. |
CBBO_1S | QuoteTick | Consolidated 1-second BBO. |
CBBO_1M | QuoteTick | Consolidated 1-minute BBO. |
TCBBO | (QuoteTick, TradeTick) | Trade-sampled consolidated BBO. |
TBBO | (QuoteTick, TradeTick) | Trade-sampled best bid/offer. |
TRADES | TradeTick | Trade ticks. |
OHLCV_1S | Bar | 1-second bars. |
OHLCV_1M | Bar | 1-minute bars. |
OHLCV_1H | Bar | 1-hour bars. |
OHLCV_1D | Bar | Daily bars. |
OHLCV_EOD | Bar | End-of-day bars. |
DEFINITION | Instrument (various types) | Instrument definitions. |
IMBALANCE | DatabentoImbalance | Auction imbalance data. |
STATISTICS | DatabentoStatistics | Market statistics. |
STATUS | InstrumentStatus | Market status updates. |
Schema considerations
- TBBO and TCBBO: Trade-sampled feeds that pair every trade with the BBO immediately before the trade's effect (TBBO per-venue, TCBBO consolidated across venues). Use when you need trades aligned with contemporaneous quotes without managing two streams.
- MBP-1 and CMBP-1 (L1): Event-level updates; emit trades only on trade events. Choose for a complete top-of-book event tape. For quote+trade alignment, prefer TBBO/TCBBO; otherwise, use TRADES.
- MBP-10 (L2): Top 10 levels with trades. Good for depth-aware strategies that don't need per-order detail; lighter than MBO while still providing much of the structure you need, including the number of orders per level.
- MBO (L3): Per-order events enable queue position modeling and exact book reconstruction. Highest volume/cost; start at node initialization to ensure proper replay context.
- BBO_1S/BBO_1M and CBBO_1S/CBBO_1M: Sampled top-of-book quotes at fixed intervals (1s/1m), no trades. Best for monitoring/spreads/low-cost signal generation; not suited for fine-grained microstructure.
- TRADES: Trades only. Pair with MBP-1 (`include_trades=True`) or use TBBO/TCBBO if you need quote context aligned with trades.
- OHLCV_* (incl. OHLCV_EOD): Aggregated bars derived from trades. Prefer for higher-timeframe analytics/backtests; ensure bar timestamps represent close time (set `bars_timestamp_on_close=True`).
- Imbalance / Statistics / Status: Venue operational data; subscribe via `subscribe_data` with a `DataType` carrying `instrument_id` metadata.
Consolidated schemas (CMBP_1, CBBO_1S, CBBO_1M, TCBBO) aggregate data across multiple venues, providing a unified view of the market. These are particularly useful for cross-venue analysis and when you need a comprehensive market picture.
See also the Databento Schemas and data formats guide.
Schema selection for live subscriptions
The following table shows how Nautilus subscription methods map to Databento schemas:
Nautilus Subscription Method | Default Schema | Available Databento Schemas | Nautilus Data Type |
---|---|---|---|
subscribe_quote_ticks() | mbp-1 | mbp-1, bbo-1s, bbo-1m, cmbp-1, cbbo-1s, cbbo-1m, tbbo, tcbbo | QuoteTick |
subscribe_trade_ticks() | trades | trades, tbbo, tcbbo, mbp-1, cmbp-1 | TradeTick |
subscribe_order_book_depth() | mbp-10 | mbp-10 | OrderBookDepth10 |
subscribe_order_book_deltas() | mbo | mbo | OrderBookDeltas |
subscribe_bars() | varies | ohlcv-1s, ohlcv-1m, ohlcv-1h, ohlcv-1d | Bar |
The examples below assume you're within a `Strategy` or `Actor` class context where `self` has access to subscription methods. Remember to import the necessary types:
```python
from nautilus_trader.model import BarType
from nautilus_trader.model.enums import BookType
from nautilus_trader.model.identifiers import InstrumentId
```
Quote subscriptions (MBP / L1)
```python
# Default MBP-1 quotes (may include trades)
self.subscribe_quote_ticks(instrument_id)

# Explicit MBP-1 schema
self.subscribe_quote_ticks(
    instrument_id=instrument_id,
    params={"schema": "mbp-1"},
)

# 1-second BBO snapshots (quotes only, no trades)
self.subscribe_quote_ticks(
    instrument_id=instrument_id,
    params={"schema": "bbo-1s"},
)

# Consolidated quotes across venues
self.subscribe_quote_ticks(
    instrument_id=instrument_id,
    params={"schema": "cbbo-1s"},  # or "cmbp-1" for consolidated MBP
)

# Trade-sampled BBO (includes both quotes AND trades)
self.subscribe_quote_ticks(
    instrument_id=instrument_id,
    params={"schema": "tbbo"},  # Both QuoteTick and TradeTick will be received on the message bus
)
```
Trade subscriptions
```python
# Trade ticks only
self.subscribe_trade_ticks(instrument_id)

# Trades from MBP-1 feed (only when trade events occur)
self.subscribe_trade_ticks(
    instrument_id=instrument_id,
    params={"schema": "mbp-1"},
)

# Trade-sampled data (includes quotes at trade time)
self.subscribe_trade_ticks(
    instrument_id=instrument_id,
    params={"schema": "tbbo"},  # Also provides quotes at trade events
)
```
Order book depth subscriptions (MBP / L2)
```python
# Subscribe to top 10 levels of market depth
self.subscribe_order_book_depth(
    instrument_id=instrument_id,
    depth=10,  # MBP-10 schema is automatically selected
)

# The depth parameter must be 10 for Databento
# This will receive OrderBookDepth10 updates
```
Order book deltas subscriptions (MBO / L3)
```python
# Subscribe to full order book updates (market by order)
self.subscribe_order_book_deltas(
    instrument_id=instrument_id,
    book_type=BookType.L3_MBO,  # Uses MBO schema
)

# Note: MBO subscriptions must be made at node startup for Databento
# to ensure proper replay from session start
```
Bar subscriptions
```python
# Subscribe to 1-minute bars (automatically uses ohlcv-1m schema)
self.subscribe_bars(
    bar_type=BarType.from_str(f"{instrument_id}-1-MINUTE-LAST-EXTERNAL"),
)

# Subscribe to 1-second bars (automatically uses ohlcv-1s schema)
self.subscribe_bars(
    bar_type=BarType.from_str(f"{instrument_id}-1-SECOND-LAST-EXTERNAL"),
)

# Subscribe to hourly bars (automatically uses ohlcv-1h schema)
self.subscribe_bars(
    bar_type=BarType.from_str(f"{instrument_id}-1-HOUR-LAST-EXTERNAL"),
)

# Subscribe to daily bars (automatically uses ohlcv-1d schema)
self.subscribe_bars(
    bar_type=BarType.from_str(f"{instrument_id}-1-DAY-LAST-EXTERNAL"),
)

# Subscribe to daily bars with end-of-day schema (only valid for DAY aggregation)
self.subscribe_bars(
    bar_type=BarType.from_str(f"{instrument_id}-1-DAY-LAST-EXTERNAL"),
    params={"schema": "ohlcv-eod"},  # Override to use end-of-day bars
)
```
Custom data type subscriptions
For specialized Databento data types like imbalance and statistics, use the generic `subscribe_data` method:
```python
from nautilus_trader.adapters.databento import DATABENTO_CLIENT_ID
from nautilus_trader.adapters.databento import DatabentoImbalance
from nautilus_trader.adapters.databento import DatabentoStatistics
from nautilus_trader.model import DataType
from nautilus_trader.model.data import InstrumentStatus

# Subscribe to imbalance data
self.subscribe_data(
    data_type=DataType(DatabentoImbalance, metadata={"instrument_id": instrument_id}),
    client_id=DATABENTO_CLIENT_ID,
)

# Subscribe to statistics data
self.subscribe_data(
    data_type=DataType(DatabentoStatistics, metadata={"instrument_id": instrument_id}),
    client_id=DATABENTO_CLIENT_ID,
)

# Subscribe to instrument status updates
self.subscribe_data(
    data_type=DataType(InstrumentStatus, metadata={"instrument_id": instrument_id}),
    client_id=DATABENTO_CLIENT_ID,
)
```
Instrument IDs and symbology
Databento market data includes an `instrument_id` field which is an integer assigned by either the original source venue, or internally by Databento during normalization.

It's important to realize that this is different from the Nautilus `InstrumentId`, which is a string made up of a symbol + venue with a period separator, i.e. `"{symbol}.{venue}"`.

The Nautilus decoder will use the Databento `raw_symbol` for the Nautilus `symbol` and an ISO 10383 MIC (Market Identifier Code) from the Databento instrument definition message for the Nautilus `venue`.
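As a minimal illustration of this mapping, the Databento record for Apple on Nasdaq (raw symbol `AAPL`, MIC `XNAS`) corresponds to the following Nautilus identifier:

```python
from nautilus_trader.model.identifiers import InstrumentId

# Databento raw_symbol "AAPL" + venue MIC "XNAS" -> Nautilus InstrumentId
instrument_id = InstrumentId.from_str("AAPL.XNAS")
assert instrument_id.symbol.value == "AAPL"
assert instrument_id.venue.value == "XNAS"
```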
Databento datasets are identified with a dataset code which is not the same as a venue identifier. You can read more about Databento dataset naming conventions here.
Of particular note for CME Globex MDP 3.0 data (`GLBX.MDP3` dataset code): the following exchanges are all grouped under the `GLBX` venue. These mappings can be determined from the instrument `exchange` field:
- `CBCM`: XCME-XCBT inter-exchange spread
- `NYUM`: XNYM-DUMX inter-exchange spread
- `XCBT`: Chicago Board of Trade (CBOT)
- `XCEC`: Commodities Exchange Center (COMEX)
- `XCME`: Chicago Mercantile Exchange (CME)
- `XFXS`: CME FX Link spread
- `XNYM`: New York Mercantile Exchange (NYMEX)
Other venue MICs can be found in the `venue` field of responses from the metadata.list_publishers endpoint.
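You can enumerate publishers and their venue MICs with the official `databento` Python client; a minimal sketch (the exact response fields follow the Databento metadata reference):

```python
import databento as db

client = db.Historical()  # Sources the DATABENTO_API_KEY env var

# Each publisher record includes its dataset code and venue MIC
for publisher in client.metadata.list_publishers():
    print(publisher["dataset"], publisher["venue"], publisher["description"])
```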
Timestamps
Databento data includes various timestamp fields including (but not limited to):
- `ts_event`: The matching-engine-received timestamp expressed as the number of nanoseconds since the UNIX epoch.
- `ts_in_delta`: The matching-engine-sending timestamp expressed as the number of nanoseconds before `ts_recv`.
- `ts_recv`: The capture-server-received timestamp expressed as the number of nanoseconds since the UNIX epoch.
- `ts_out`: The Databento sending timestamp.
Nautilus data includes at least two timestamps (required by the `Data` contract):

- `ts_event`: UNIX timestamp (nanoseconds) when the data event occurred.
- `ts_init`: UNIX timestamp (nanoseconds) when the data object was created.
When decoding and normalizing Databento to Nautilus we generally assign the Databento `ts_recv` value to the Nautilus `ts_event` field, as this timestamp is much more reliable and consistent, and is guaranteed to be monotonically increasing per instrument.
The exceptions to this are the `DatabentoImbalance` and `DatabentoStatistics` data types, which have fields for all timestamps, as these types are defined specifically for the adapter.
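When you need human-readable datetimes, these nanosecond timestamps can be converted with the core datetime helpers; a minimal sketch:

```python
from nautilus_trader.core.datetime import unix_nanos_to_dt

# Convert a nanosecond UNIX timestamp (e.g. a tick's ts_event) to a tz-aware datetime
ts_event = 1_704_067_200_000_000_000
print(unix_nanos_to_dt(ts_event))  # 2024-01-01 00:00:00+00:00
```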
See the Databento documentation for further information.
Data types
The following section discusses Databento schema -> Nautilus data type equivalence and considerations.
See Databento schemas and data formats.
Instrument definitions
Databento provides a single schema to cover all instrument classes; these are decoded to the appropriate Nautilus `Instrument` types.
The following Databento instrument classes are supported by NautilusTrader:
Databento instrument class | Code | Nautilus instrument type |
---|---|---|
Stock | K | Equity |
Future | F | FuturesContract |
Call | C | OptionContract |
Put | P | OptionContract |
Future spread | S | FuturesSpread |
Option spread | T | OptionSpread |
Mixed spread | M | OptionSpread |
FX spot | X | CurrencyPair |
Bond | B | Not yet available |
MBO (market by order)
This schema is the highest granularity data offered by Databento, and represents full order book depth. Some messages also provide trade information, and so when decoding MBO messages Nautilus will produce an `OrderBookDelta` and optionally a `TradeTick`.
The Nautilus live data client will buffer MBO messages until an `F_LAST` flag is seen. A discrete `OrderBookDeltas` container object will then be passed to the registered handler.
Order book snapshots are also buffered into a discrete `OrderBookDeltas` container object, which occurs during the replay startup sequence.
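On the receiving side, your handler is therefore passed the buffered container rather than individual deltas; a minimal sketch (within a `Strategy` or `Actor` subclass):

```python
from nautilus_trader.model.data import OrderBookDeltas

def on_order_book_deltas(self, deltas: OrderBookDeltas) -> None:
    # Called once per buffered batch (flushed when the F_LAST flag is seen)
    for delta in deltas.deltas:
        ...  # Process each OrderBookDelta in sequence
```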
MBP-1 (market by price, top-of-book)
This schema represents the top-of-book only (quotes and trades). As with MBO messages, some messages carry trade information, and so when decoding MBP-1 messages Nautilus will produce a `QuoteTick` and also a `TradeTick` if the message is a trade.
TBBO and TCBBO (top-of-book with trades)
The TBBO (Top Book with Trades) and TCBBO (Top Consolidated Book with Trades) schemas provide both quote and trade data in each message. When subscribing to quotes using these schemas, you'll automatically receive both `QuoteTick` and `TradeTick` data, making them more efficient than subscribing to quotes and trades separately. TCBBO provides consolidated data across venues.
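In practice this means a single `tbbo` or `tcbbo` quote subscription drives both of your tick handlers; a minimal sketch (within a `Strategy` or `Actor` subclass, after subscribing with `params={"schema": "tbbo"}`):

```python
from nautilus_trader.model.data import QuoteTick
from nautilus_trader.model.data import TradeTick

def on_quote_tick(self, tick: QuoteTick) -> None:
    ...  # The BBO sampled immediately before each trade

def on_trade_tick(self, tick: TradeTick) -> None:
    ...  # The trade paired with the quote above
```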
OHLCV (bar aggregates)
The Databento bar aggregation messages are timestamped at the open of the bar interval. The Nautilus decoder will normalize the `ts_event` timestamps to the close of the bar (original `ts_event` + bar interval).
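As a concrete illustration of this normalization (the timestamp value is an arbitrary assumption):

```python
# Databento stamps OHLCV records at the bar open; Nautilus shifts ts_event to the close
ONE_MINUTE_NS = 60 * 1_000_000_000

dbn_ts_event = 1_704_067_200_000_000_000          # 2024-01-01T00:00:00Z (bar open)
nautilus_ts_event = dbn_ts_event + ONE_MINUTE_NS  # 2024-01-01T00:01:00Z (bar close)
```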
Imbalance & Statistics
The Databento `imbalance` and `statistics` schemas cannot be represented as built-in Nautilus data types, and so they have specific types defined in Rust: `DatabentoImbalance` and `DatabentoStatistics`.

Python bindings are provided via pyo3 (Rust), so these types behave a little differently to built-in Nautilus data types: all attributes are pyo3-provided objects and not directly compatible with certain methods which may expect a Cython-provided type. There are pyo3 -> legacy Cython object conversion methods available, which can be found in the API reference.
Here is a general pattern for converting a pyo3 `Price` to a Cython `Price`:
```python
price = Price.from_raw(pyo3_price.raw, pyo3_price.precision)
```
Additionally, requesting and subscribing to these data types requires the use of the lower-level generic methods for custom data types. The following example subscribes to the `imbalance` schema for the `AAPL.XNAS` instrument (Apple Inc trading on the Nasdaq exchange):
```python
from nautilus_trader.adapters.databento import DATABENTO_CLIENT_ID
from nautilus_trader.adapters.databento import DatabentoImbalance
from nautilus_trader.model import DataType
from nautilus_trader.model.identifiers import InstrumentId

instrument_id = InstrumentId.from_str("AAPL.XNAS")
self.subscribe_data(
    data_type=DataType(DatabentoImbalance, metadata={"instrument_id": instrument_id}),
    client_id=DATABENTO_CLIENT_ID,
)
```
Or requesting the previous day's `statistics` schema for the `ES.FUT` parent symbol (all active E-mini S&P 500 futures contracts on the CME Globex exchange):
```python
from nautilus_trader.adapters.databento import DATABENTO_CLIENT_ID
from nautilus_trader.adapters.databento import DatabentoStatistics
from nautilus_trader.model import DataType
from nautilus_trader.model.identifiers import InstrumentId

instrument_id = InstrumentId.from_str("ES.FUT.GLBX")
metadata = {
    "instrument_id": instrument_id,
    "start": "2024-03-06",
}
self.request_data(
    data_type=DataType(DatabentoStatistics, metadata=metadata),
    client_id=DATABENTO_CLIENT_ID,
)
```
Performance considerations
When backtesting with Databento DBN data, there are two options:

- Store the data in DBN (`.dbn.zst`) format files and decode to Nautilus objects on every run.
- Convert the DBN files to Nautilus objects and then write to the data catalog once (stored as Nautilus Parquet format on disk).
Whilst the DBN -> Nautilus decoder is implemented in Rust and has been optimized, the best performance for backtesting will be achieved by writing the Nautilus objects to the data catalog, which performs the decoding step once.
DataFusion provides a query engine backend to efficiently load and stream the Nautilus Parquet data from disk, which achieves extremely high throughput (at least an order of magnitude faster than converting DBN -> Nautilus on the fly for every backtest run).
Performance benchmarks are currently under development.
Loading DBN data
You can load DBN files and convert the records to Nautilus objects using the `DatabentoDataLoader` class. There are two main purposes for doing so:

- Pass the converted data to `BacktestEngine.add_data` directly for backtesting.
- Pass the converted data to `ParquetDataCatalog.write_data` for later streaming use with a `BacktestNode`.
DBN data to a BacktestEngine
This code snippet demonstrates how to load DBN data and pass it to a `BacktestEngine`. Since the `BacktestEngine` needs an instrument added, we'll use a test instrument provided by the `TestInstrumentProvider` (you could also pass an instrument object parsed from a DBN file). The data is a month of TSLA (Tesla Inc) trades on the Nasdaq exchange:
```python
# Add instrument
TSLA_NASDAQ = TestInstrumentProvider.equity(symbol="TSLA")
engine.add_instrument(TSLA_NASDAQ)

# Decode data to legacy Cython objects
loader = DatabentoDataLoader()
trades = loader.from_dbn_file(
    path=TEST_DATA_DIR / "databento" / "temp" / "tsla-xnas-20240107-20240206.trades.dbn.zst",
    instrument_id=TSLA_NASDAQ.id,
)

# Add data
engine.add_data(trades)
```
DBN data to a ParquetDataCatalog
This code snippet demonstrates how to load DBN data and write it to a `ParquetDataCatalog`. We pass a value of `False` for the `as_legacy_cython` flag, which will ensure the DBN records are decoded as pyo3 (Rust) objects. It's worth noting that legacy Cython objects can also be passed to `write_data`, but these need to be converted back to pyo3 objects under the hood (so passing pyo3 objects is an optimization).
```python
# Initialize the catalog interface
# (will use the `NAUTILUS_PATH` env var as the path)
catalog = ParquetDataCatalog.from_env()

instrument_id = InstrumentId.from_str("TSLA.XNAS")

# Decode data to pyo3 objects
loader = DatabentoDataLoader()
trades = loader.from_dbn_file(
    path=TEST_DATA_DIR / "databento" / "temp" / "tsla-xnas-20240107-20240206.trades.dbn.zst",
    instrument_id=instrument_id,
    as_legacy_cython=False,  # This is an optimization for writing to the catalog
)

# Write data
catalog.write_data(trades)
```
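Once written, the data can be queried back from the catalog; a minimal sketch, assuming the catalog populated above (query methods per the `ParquetDataCatalog` API):

```python
# Read the trades back (e.g. for inspection, or adding to a BacktestEngine)
trades = catalog.trade_ticks(
    instrument_ids=[instrument_id.value],
    start="2024-01-07",
    end="2024-02-06",
)
print(f"Loaded {len(trades)} trades")
```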
See also the Data concepts guide.
Historical loader options
The `from_dbn_file` method supports several important parameters:

- `instrument_id`: Passing this improves decode speed by skipping the symbology lookup.
- `price_precision`: Override the default price precision for the instrument.
- `include_trades`: For MBP-1/CMBP-1 schemas, setting this to `True` will emit both `QuoteTick` and `TradeTick` objects when trade data is present.
- `as_legacy_cython`: Set to `False` when loading IMBALANCE or STATISTICS schemas (required), or for performance when writing to the catalog.
IMBALANCE and STATISTICS schemas require `as_legacy_cython=False` as these are PyO3-only types. Setting `as_legacy_cython=True` will raise a `ValueError`.
Loading consolidated data
Consolidated schemas aggregate data across multiple venues:
```python
# Load consolidated MBP-1 quotes
loader = DatabentoDataLoader()
cmbp_quotes = loader.from_dbn_file(
    path="consolidated.cmbp-1.dbn.zst",
    instrument_id=InstrumentId.from_str("AAPL.XNAS"),
    include_trades=True,  # Get both quotes and trades if available
    as_legacy_cython=True,
)

# Load consolidated BBO quotes
cbbo_quotes = loader.from_dbn_file(
    path="consolidated.cbbo-1s.dbn.zst",
    instrument_id=InstrumentId.from_str("AAPL.XNAS"),
    as_legacy_cython=False,  # Use PyO3 for better performance
)

# Load TCBBO (trade-sampled consolidated BBO) - provides both quotes and trades
# Note: include_trades=True loads quotes, include_trades=False loads trades
tcbbo_quotes = loader.from_dbn_file(
    path="consolidated.tcbbo.dbn.zst",
    instrument_id=InstrumentId.from_str("AAPL.XNAS"),
    include_trades=True,  # Loads quotes
    as_legacy_cython=True,
)

tcbbo_trades = loader.from_dbn_file(
    path="consolidated.tcbbo.dbn.zst",
    instrument_id=InstrumentId.from_str("AAPL.XNAS"),
    include_trades=False,  # Loads trades
    as_legacy_cython=True,
)
```
Cost optimization: Avoid subscribing to both TBBO/TCBBO and separate trade subscriptions for the same instrument, as these schemas already include trade data. This prevents duplicates and reduces costs.
Live client architecture
The `DatabentoDataClient` is a Python class which contains other Databento adapter classes. There are two `DatabentoLiveClient`s per Databento dataset:

- One for MBO (order book deltas) real-time feeds.
- One for all other real-time feeds.
There is currently a limitation that all MBO (order book deltas) subscriptions for a dataset have to be made at node startup, so that data can be replayed from the beginning of the session. If subsequent subscriptions arrive after start, an error will be logged (and the subscription ignored). There is no such limitation for any of the other Databento schemas.
A single `DatabentoHistoricalClient` instance is reused between the `DatabentoInstrumentProvider` and `DatabentoDataClient`, which makes historical instrument definition and data requests.
Configuration
The most common use case is to configure a live `TradingNode` to include a Databento data client. To achieve this, add a `DATABENTO` section to your client configuration(s):
```python
from nautilus_trader.adapters.databento import DATABENTO
from nautilus_trader.config import InstrumentProviderConfig
from nautilus_trader.config import TradingNodeConfig

config = TradingNodeConfig(
    ...,  # Omitted
    data_clients={
        DATABENTO: {
            "api_key": None,  # 'DATABENTO_API_KEY' env var
            "http_gateway": None,  # Override for the default HTTP historical gateway
            "live_gateway": None,  # Override for the default raw TCP real-time gateway
            "instrument_provider": InstrumentProviderConfig(load_all=True),
            "instrument_ids": None,  # Nautilus instrument IDs to load on start
            "parent_symbols": None,  # Databento parent symbols to load on start
        },
    },
    ...,  # Omitted
)
```
Then, create a `TradingNode` and add the client factory:
```python
from nautilus_trader.adapters.databento.factories import DatabentoLiveDataClientFactory
from nautilus_trader.live.node import TradingNode

# Instantiate the live trading node with a configuration
node = TradingNode(config=config)

# Register the client factory with the node
node.add_data_client_factory(DATABENTO, DatabentoLiveDataClientFactory)

# Finally build the node
node.build()
```
Configuration parameters
- `api_key`: The Databento API secret key. If `None` then will source the `DATABENTO_API_KEY` environment variable.
- `http_gateway`: The historical HTTP client gateway override (useful for testing and typically not needed by most users).
- `live_gateway`: The raw TCP real-time client gateway override (useful for testing and typically not needed by most users).
- `use_exchange_as_venue`: Use the exchange MIC instead of `GLBX` for CME families. Default `True`.
- `bars_timestamp_on_close`: Choose open vs close timestamps for bars. If `True`, uses bar close for `ts_event`/`ts_init`; if `False`, uses open. Default `True`.
- `venue_dataset_map`: Per-venue dataset override as `dict[Venue, str]`.
- `parent_symbols`: The Databento parent symbols to subscribe to instrument definitions for on start. This is a map of Databento dataset keys to sequences of parent symbols, e.g. ES.FUT, ES.OPT (for all E-mini S&P 500 futures and options products).
- `instrument_ids`: The instrument IDs to request instrument definitions for on start.
- `timeout_initial_load`: The timeout (seconds) to wait for instruments to load (concurrently per dataset).
- `mbo_subscriptions_delay`: The timeout (seconds) to wait for MBO/L3 subscriptions (concurrently per dataset). After the timeout, the MBO order book feed will start and replay messages from the initial snapshot, followed by all deltas.
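As a minimal sketch, several of these parameters could be combined in a client configuration as follows (all values are illustrative assumptions):

```python
from nautilus_trader.adapters.databento import DATABENTO
from nautilus_trader.config import InstrumentProviderConfig

data_clients = {
    DATABENTO: {
        "api_key": None,  # Sources the DATABENTO_API_KEY env var
        "instrument_provider": InstrumentProviderConfig(load_all=True),
        "parent_symbols": {"GLBX.MDP3": ["ES.FUT", "ES.OPT"]},  # Illustrative mapping
        "use_exchange_as_venue": True,    # Map GLBX instruments to exchange MICs
        "bars_timestamp_on_close": True,  # ts_event/ts_init at bar close
        "timeout_initial_load": 10.0,     # Seconds to wait for instrument loading
        "mbo_subscriptions_delay": 3.0,   # Seconds to wait before starting MBO feeds
    },
}
```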
We recommend using environment variables to manage your credentials.