Tardis Machine Server
Locally runnable server with built-in data caching, providing both tick-level historical and consolidated real-time cryptocurrency market data via HTTP and WebSocket APIs
Introduction
Tardis-machine is a locally runnable server with built-in data caching that uses the Tardis.dev HTTP API under the hood. It provides both tick-level historical and consolidated real-time cryptocurrency market data via its HTTP and WebSocket APIs and is available via npm and Docker.
Features
efficient data replay API endpoints returning historical market data for whole time periods (in contrast to the Tardis.dev HTTP API, where a single call returns data for a single-minute time period)
exchange-native market data APIs
tick-by-tick historical market data replay in exchange-native format
WebSocket API providing historical market data replay from any given past point in time with the same data format and 'subscribe' logic as real-time exchanges' APIs - in many cases existing exchanges' WebSocket clients can be used to connect to this endpoint
consistent format for accessing market data across multiple exchanges
consolidated real-time data streaming connecting directly to exchanges' WebSocket APIs
customizable order book snapshots and trade bars data types
transparent historical local data caching (cached data is stored on disk in compressed GZIP format and decompressed on demand when reading the data)
support for top cryptocurrency exchanges: BitMEX, Deribit, Binance, Binance Futures, FTX, OKEx, Huobi Global, Huobi DM, bitFlyer, Bitstamp, Coinbase Pro, Kraken Futures, Gemini, Kraken, Bitfinex, Bybit, OKCoin, CoinFLEX and more
Installation
Docker
Pull and run the latest version of the tardisdev/tardis-machine image:
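A minimal sketch of such a command, assuming the server's default ports are published to the same ports on the host:

```sh
docker run -p 8000:8000 -p 8001:8001 -e "TM_API_KEY=YOUR_API_KEY" -d tardisdev/tardis-machine
```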
Tardis-machine server's HTTP endpoints will be available on port 8000 and WebSocket API endpoints on port 8001. Your API key will be passed via the TM_API_KEY ENV variable - simply replace YOUR_API_KEY with the API key you've received via email.
The command above does not use persistent volumes for local caching (each Docker restart will result in losing the local data cache). In order to use, for example, ./host-cache-dir as a persistent volume (bind mount) cache directory, run:
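A sketch of such a command, using an absolute host path for the bind mount and assuming the image keeps its cache under /.cache inside the container (the cache location can also be set explicitly via the TM_CACHE_DIR variable):

```sh
docker run -v "$(pwd)/host-cache-dir:/.cache" -p 8000:8000 -p 8001:8001 -e "TM_API_KEY=YOUR_API_KEY" -d tardisdev/tardis-machine
```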
Since using volumes can cause issues, especially on Windows, it's fine to run the Docker image without them, with the caveat of a potentially poor local cache hit ratio after each container restart.
Config environment variables
You can set the following environment variables to configure the tardis-machine server:
| name | default | description |
| --- | --- | --- |
| TM_API_KEY | | API key for Tardis.dev HTTP API - if not provided, only the first day of each month of historical data is accessible |
| TM_PORT | 8000 | HTTP port on which the server will be running; the WebSocket port is always this value + 1 (8001 by default) |
| TM_CACHE_DIR | | path to local dir that will be used as cache location - if not provided, a dir inside the OS default temp dir is used |
| TM_CLUSTER_MODE | false | will launch a cluster of Node.js processes to handle the incoming requests if set to true |
| TM_DEBUG | false | server will print verbose debug logs to stdout if set to true |
| TM_CLEAR_CACHE | false | server will clear the local cache dir on startup if set to true |
npm
Requires Node.js v12+ and git installed.
Install and run the tardis-machine server via the npx command:
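For example (replace YOUR_API_KEY with your own key):

```sh
npx tardis-machine --api-key=YOUR_API_KEY
```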
or install globally via npm:
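```sh
npm install -g tardis-machine
```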
and then run:
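For example:

```sh
tardis-machine --api-key=YOUR_API_KEY
```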
Tardis-machine server's HTTP endpoints will be available on port 8000 and WebSocket API endpoints on port 8001. Your API key will be passed via the --api-key config flag - simply replace YOUR_API_KEY with the API key you've received via email.
CLI config flags
You can also configure the tardis-machine server via environment variables, as described in the Docker section.
You can set the following CLI config flags when starting the tardis-machine server installed via npm:
| name | default | description |
| --- | --- | --- |
| --api-key | | API key for Tardis.dev HTTP API - if not provided, only the first day of each month of historical data is accessible |
| --port | 8000 | HTTP port on which the server will be running; the WebSocket port is always this value + 1 (8001 by default) |
| --cache-dir | | path to local dir that will be used as cache location - if not provided, a dir inside the OS default temp dir is used |
| --cluster-mode | false | will launch a cluster of Node.js processes to handle the incoming requests if set to true |
| --debug | false | server will print verbose debug logs to stdout if set to true |
| --clear-cache | false | server will clear the local cache dir on startup if set to true |
| --help | | shows CLI help |
| --version | | shows tardis-machine version number |
Exchange-native market data APIs
Exchange-native market data API endpoints provide historical data in exchange-native format. The main difference between HTTP and WebSocket endpoints is the logic of requesting data:
HTTP API accepts request options payload via query string param
WebSocket API accepts exchange-specific 'subscribe' messages that define what data will then be "replayed" and sent to the WebSocket client
HTTP GET /replay?options={options}

Returns historical market data messages in exchange-native format for the given replay options query string param. A single streaming HTTP response returns data for the whole requested time period as NDJSON.
In our preliminary benchmarks on AMD Ryzen 7 3700X, 64GB RAM, HTTP /replay API endpoint was returning ~700 000 messages/s (already locally cached data).
See also official Tardis.dev Python client library.
Replay options
The HTTP /replay endpoint accepts a required options query string param in URL-encoded JSON format.
| name | type | default | description |
| --- | --- | --- | --- |
| exchange | string | - | requested exchange id - use the /exchanges HTTP API to get the list of valid exchange ids |
| filters | {channel: string, symbols?: string[]}[] | [] | optional filters of the requested historical data feed - check historical data details for each exchange and the /exchanges/:exchange HTTP API to get allowed channels and symbols for the requested exchange |
| from | string | - | replay period start date (UTC) in ISO 8601 format |
| to | string | - | replay period end date (UTC) in ISO 8601 format |
| withDisconnects | boolean \| undefined | undefined | when set to true, the response includes messages marking the events when the real-time connection used to collect the historical data got disconnected |
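The snippet below is a minimal sketch of requesting the /replay endpoint from Node.js; it assumes a tardis-machine instance running locally on the default port 8000 and the node-fetch v2 package (whose response.body is a Node.js readable stream). The exchange, channel and symbol values are only illustrative.

```js
const fetch = require('node-fetch')
const readline = require('readline')

async function replay() {
  const options = {
    exchange: 'bitmex',
    filters: [{ channel: 'trade', symbols: ['XBTUSD'] }],
    from: '2019-07-01',
    to: '2019-07-02'
  }

  // options are passed as a single URL-encoded JSON query string param
  const url = `http://localhost:8000/replay?options=${encodeURIComponent(JSON.stringify(options))}`
  const response = await fetch(url)

  // the response is NDJSON - one JSON-encoded message per line
  const lines = readline.createInterface({ input: response.body, crlfDelay: Infinity })

  for await (const line of lines) {
    const { localTimestamp, message } = JSON.parse(line)
    console.log(localTimestamp, message)
  }
}

replay().catch(console.error)
```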
Response format
The streamed HTTP response provides data in NDJSON format (newline delimited JSON) - each response line is a JSON object with a market data message in exchange-native format plus a local timestamp:

localTimestamp - date when the message has been received, in ISO 8601 format
message - JSON with exactly the same format as provided by the requested exchange's real-time feeds
Sample response
WebSocket /ws-replay?exchange={exchange}&from={fromDate}&to={toDate}

Exchanges' WebSocket APIs are designed to publish real-time market data feeds, not historical ones. The tardis-machine WebSocket /ws-replay API fills that gap and allows "replaying" historical market data from any given past point in time with the same data format and 'subscribe' logic as real-time exchanges' APIs. In many cases existing exchanges' WebSocket clients can be used to connect to this endpoint just by changing the URL, and receive historical market data in exchange-native format for date ranges specified in URL query string params.
After the connection is established, the client has 2 seconds to send its subscription payloads, and then the market data replay starts.
If two clients connect at the same time requesting data for different exchanges and provide the same session key via query string param, then the data being sent to those clients will be synchronized (by local timestamp).
In our preliminary benchmarks on AMD Ryzen 7 3700X, 64GB RAM, the WebSocket /ws-replay API endpoint was sending ~500 000 messages/s (already locally cached data).
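Below is a minimal sketch of connecting to /ws-replay from Node.js, assuming the ws npm package and a tardis-machine instance with the WebSocket API on the default port 8001; the BitMEX trade:XBTUSD subscription is only an example.

```js
const WebSocket = require('ws')

const ws = new WebSocket('ws://localhost:8001/ws-replay?exchange=bitmex&from=2019-07-01&to=2019-07-02')

ws.on('open', () => {
  // exchange-native subscribe message, exactly as for the real-time BitMEX WebSocket API
  ws.send(JSON.stringify({ op: 'subscribe', args: ['trade:XBTUSD'] }))
})

ws.on('message', (message) => {
  console.log(message.toString())
})
```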
You can also try using an existing exchange WebSocket client simply by changing its URL endpoint to the one shown in the example above.
Query string params
| name | type | default | description |
| --- | --- | --- | --- |
| exchange | string | - | requested exchange id - use the /exchanges HTTP API to get the list of valid exchange ids |
| from | string | - | replay period start date (UTC) in ISO 8601 format |
| to | string | - | replay period end date (UTC) in ISO 8601 format |
| session | string \| undefined | undefined | optional replay session key - when specified and multiple clients use it when connecting at the same time, the data being sent to those clients is synchronized (by local timestamp) |
Normalized market data APIs
Normalized market data API endpoints provide data in a unified format across all supported exchanges. Both the HTTP /replay-normalized and WebSocket /ws-replay-normalized APIs accept the same replay options payload via query string param. It's mostly a matter of preference which protocol to use, but the WebSocket /ws-replay-normalized API also has its real-time counterpart, /ws-stream-normalized, which connects directly to exchanges' real-time WebSocket APIs. This opens the possibility of seamless switching between real-time streaming and historical normalized market data replay.
HTTP GET /replay-normalized?options={options}

Returns historical market data for data types specified via query string. A single streaming HTTP response returns data for the whole requested time period as NDJSON. See supported data types, which include normalized trade, order book change, customizable order book snapshots etc.
In our preliminary benchmarks on AMD Ryzen 7 3700X, 64GB RAM, HTTP /replay-normalized API endpoint was returning ~100 000 messages/s and ~50 000 messages/s when order book snapshots were also requested.
Replay normalized options
The HTTP /replay-normalized endpoint accepts a required options query string param in URL-encoded JSON format.
The options JSON needs to be an object or an array of objects with fields as specified below. If an array is provided, data requested for multiple exchanges is returned synchronized (by local timestamp).
| name | type | default | description |
| --- | --- | --- | --- |
| exchange | string | - | requested exchange id - use the /exchanges HTTP API to get the list of valid exchange ids |
| symbols | string[] \| undefined | undefined | optional symbols of the requested historical data feed - use the /exchanges/:exchange HTTP API to get allowed symbols for the requested exchange |
| from | string | - | replay period start date (UTC) in ISO 8601 format |
| to | string | - | replay period end date (UTC) in ISO 8601 format |
| dataTypes | string[] | - | array of normalized data types for which historical data will be returned |
| withDisconnectMessages | boolean \| undefined | undefined | when set to true, disconnect messages are included in the response anytime the underlying connection used to collect the historical data got disconnected |
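As an illustration, the sketch below requests normalized trades plus top 10 levels order book snapshots taken at 100ms intervals via /replay-normalized; it assumes a locally running tardis-machine on port 8000 and the node-fetch v2 package (response.body is a Node.js readable stream), and the exchange/symbol values are only examples.

```js
const fetch = require('node-fetch')
const readline = require('readline')

async function replayNormalized() {
  const options = {
    exchange: 'bitmex',
    symbols: ['XBTUSD'],
    from: '2019-07-01',
    to: '2019-07-02',
    dataTypes: ['trade', 'book_snapshot_10_100ms'],
    withDisconnectMessages: true
  }

  const url = `http://localhost:8000/replay-normalized?options=${encodeURIComponent(JSON.stringify(options))}`
  const response = await fetch(url)

  // the response is NDJSON - one normalized message per line
  const lines = readline.createInterface({ input: response.body, crlfDelay: Infinity })

  for await (const line of lines) {
    console.log(JSON.parse(line))
  }
}

replayNormalized().catch(console.error)
```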
Response format & sample messages
WebSocket /ws-replay-normalized?options={options}

Sends normalized historical market data for data types specified via query string. See supported data types, which include normalized trade, order book change, customizable order book snapshots etc.
WebSocket /ws-stream-normalized is the real-time counterpart of this API endpoint, providing real-time market data in the same format, but not requiring an API key, as it connects directly to exchanges' real-time WebSocket APIs.
Replay normalized options
The WebSocket /ws-replay-normalized endpoint accepts a required options query string param in URL-encoded JSON format.
The options JSON needs to be an object or an array of objects with fields as specified below. If an array is provided, data requested for multiple exchanges is sent synchronized (by local timestamp).
| name | type | default | description |
| --- | --- | --- | --- |
| exchange | string | - | requested exchange id - use the /exchanges HTTP API to get the list of valid exchange ids |
| symbols | string[] \| undefined | undefined | optional symbols of the requested historical data feed - use the /exchanges/:exchange HTTP API to get allowed symbols for the requested exchange |
| from | string | - | replay period start date (UTC) in ISO 8601 format |
| to | string | - | replay period end date (UTC) in ISO 8601 format |
| dataTypes | string[] | - | array of normalized data types for which historical data will be provided |
| withDisconnectMessages | boolean \| undefined | undefined | when set to true, disconnect messages are sent anytime the underlying connection used to collect the historical data got disconnected |
In our preliminary benchmarks on AMD Ryzen 7 3700X, 64GB RAM, WebSocket /ws-replay-normalized API endpoint was returning ~70 000 messages/s and ~40 000 messages/s when order book snapshots were also requested.
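The sketch below connects to /ws-replay-normalized with an array of options, so data for both exchanges is sent synchronized by local timestamp; it assumes the ws npm package and a locally running tardis-machine (WebSocket API on port 8001), and the exchanges/symbols are only illustrative.

```js
const WebSocket = require('ws')

const options = [
  { exchange: 'bitmex', symbols: ['XBTUSD'], from: '2019-07-01', to: '2019-07-02', dataTypes: ['trade'] },
  { exchange: 'deribit', symbols: ['BTC-PERPETUAL'], from: '2019-07-01', to: '2019-07-02', dataTypes: ['trade'] }
]

const ws = new WebSocket(
  `ws://localhost:8001/ws-replay-normalized?options=${encodeURIComponent(JSON.stringify(options))}`
)

// each WebSocket message is a single normalized JSON-encoded market data message
ws.on('message', (message) => console.log(JSON.parse(message)))
```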
Response format & sample messages
WebSocket /ws-stream-normalized?options={options}

Sends normalized real-time market data for data types specified via query string. See supported data types, which include normalized trade, order book change, customizable order book snapshots etc.
Doesn't require an API key, as it connects directly to exchanges' real-time WebSocket APIs, and transparently restarts closed, broken or stale connections (open connections without data being sent for a specified amount of time).
Provides consolidated real-time market data streaming functionality when options is an array - in that case it provides a single consolidated real-time data stream for all exchanges specified in the options array.
WebSocket /ws-replay-normalized is the historical counterpart of this API endpoint, providing historical market data in the same format.
Stream normalized options
The WebSocket /ws-stream-normalized endpoint accepts a required options query string param in URL-encoded JSON format.
The options JSON needs to be an object or an array of objects with fields as specified below. If an array is specified, the API provides a single consolidated real-time data stream for all specified exchanges (see the example below).
| name | type | default | description |
| --- | --- | --- | --- |
| exchange | string | - | requested exchange id - use the /exchanges HTTP API to get the list of valid exchange ids |
| symbols | string[] \| undefined | undefined | optional symbols of the requested real-time data feed |
| dataTypes | string[] | - | array of normalized data types for which real-time data will be provided |
| withDisconnectMessages | boolean \| undefined | undefined | when set to true, disconnect messages are sent anytime the underlying real-time connection gets disconnected or restarted |
| timeoutIntervalMS | number | 10000 | specifies time in milliseconds after which the connection to the real-time exchange's WebSocket API is restarted if no message has been received |
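For example, the sketch below consolidates real-time trades and order book changes from two exchanges into a single stream; it assumes the ws npm package and a locally running tardis-machine (WebSocket API on port 8001), with exchange ids and symbols chosen only as illustrations.

```js
const WebSocket = require('ws')

const options = [
  { exchange: 'bitmex', symbols: ['XBTUSD'], dataTypes: ['trade', 'book_change'] },
  { exchange: 'coinbase', symbols: ['BTC-USD'], dataTypes: ['trade', 'book_change'] }
]

const ws = new WebSocket(
  `ws://localhost:8001/ws-stream-normalized?options=${encodeURIComponent(JSON.stringify(options))}`
)

// single consolidated real-time stream of normalized messages for all exchanges above
ws.on('message', (message) => console.log(JSON.parse(message)))
```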
Response format & sample messages
Normalized data types
• trade
Individual trade
• book_change
Initial L2 (market by price) order book snapshot (isSnapshot=true) plus incremental updates for each order book change. Please note that amount is the updated amount at that price level, not a delta. An amount of 0 indicates the price level can be removed.
• derivative_ticker
Derivative instrument ticker info sourced from real-time ticker & instrument channels.
• book_snapshot_{number_of_levels}_{snapshot_interval}{time_unit}
Order book snapshot for selected number_of_levels (top bids and asks), snapshot_interval and time_unit.
When snapshot_interval is set to 0, snapshots are taken anytime the order book state within the specified levels has changed, otherwise snapshots are taken anytime snapshot_interval time has passed and there was an order book state change within the specified levels. Order book snapshots are computed from exchanges' real-time order book streaming L2 data (market by price).
Examples:
book_snapshot_10_0ms - provides top 10 levels tick-by-tick order book snapshots
book_snapshot_50_100ms - provides top 50 levels order book snapshots taken at 100 millisecond intervals
book_snapshot_30_10s - provides top 30 levels order book snapshots taken at 10 second intervals
quote is an alias of book_snapshot_1_0ms - provides top of the book (best bid/ask) tick-by-tick order book snapshots
quote_10s is an alias of book_snapshot_1_10s - provides top of the book (best bid/ask) order book snapshots taken at 10 second intervals
Available time units:
ms - milliseconds
s - seconds
m - minutes
• trade_bar_{aggregation_interval}{suffix}
Trades data in aggregated form, known as OHLC, candlesticks, klines etc. Not only the most common time-based aggregation is supported, but volume and tick count based as well. Bars are computed from tick-by-tick raw trade data; if no trades happened in a given interval, no bar is produced.
Examples:
trade_bar_10ms - provides time based trade bars with 10 millisecond intervals
trade_bar_5m - provides time based trade bars with 5 minute intervals
trade_bar_100ticks - provides tick based trade bars with 100 tick (individual trades) intervals
trade_bar_100000vol - provides volume based trade bars with 100 000 volume intervals
Allowed suffixes:
ms - milliseconds
s - seconds
m - minutes
ticks - number of ticks
vol - volume size
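For instance, a dataTypes array combining the naming conventions above could look like this (the specific values are only illustrative):

```js
const dataTypes = [
  'trade', // individual trades
  'book_change', // initial order book snapshots plus incremental updates
  'trade_bar_5m', // time based trade bars with 5 minute intervals
  'trade_bar_100ticks', // tick count based trade bars
  'book_snapshot_25_1s' // top 25 levels order book snapshots taken at 1 second intervals
]
```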
• disconnect
Message that marks the events when the real-time WebSocket connection used to collect the historical data got disconnected.