Downloadable CSV files

Quick start

CSV datasets are available via a dedicated datasets API that allows downloading tick-level incremental order book L2 updates, order book snapshots, trades, options chains, quotes, derivative tickers and liquidations data. For ongoing data, CSV datasets for a given day are available on the next day around 06:00 UTC.

CSV datasets are exported from the exchanges' real-time WebSocket feed data we collected (data we also provide via our API as historical data in exchange-native format).

Historical datasets for the first day of each month are available to download without an API key. Our Node.js and Python clients have built-in functions to efficiently download a whole date range of data.

# pip install tardis-dev
# requires Python >=3.6
from tardis_dev import datasets

datasets.download(
    exchange="deribit",
    data_types=[
        "incremental_book_L2",
        "trades",
        "quotes",
        "derivative_ticker",
        "book_snapshot_25",
        "liquidations"
    ],
    from_date="2019-11-01",
    to_date="2019-11-02",
    symbols=["BTC-PERPETUAL", "ETH-PERPETUAL"],
    api_key="YOUR API KEY (optionally)",
)

See the full example below (in the Download via client libraries section) that shows all available download options (download path customization, filename conventions and more).

CSV format details

  • columns delimiter: , (comma)

  • new line marker: \n (LF)

  • decimal mark: . (dot)

  • date time format: microseconds since epoch (https://www.epochconverter.com/)

  • date time timezone: UTC
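
For example, converting a raw timestamp value to a UTC datetime in Python (a minimal sketch; the example value corresponds to 2019-11-01 00:00:00 UTC):

from datetime import datetime, timezone

# 1572566400000000 microseconds since epoch is 2019-11-01 00:00:00 UTC
ts_us = 1572566400000000
print(datetime.fromtimestamp(ts_us / 1_000_000, tz=timezone.utc))
# -> 2019-11-01 00:00:00+00:00

# with pandas, a whole timestamp column can be converted at once:
# pd.to_datetime(df["timestamp"], unit="us", utc=True)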

Data types

• incremental_book_L2

Incremental order book L2 updates collected from exchanges' real-time WebSocket order book L2 data feeds - data as deep and granular as the underlying real-time data source. Please see the FAQ: What is the maximum order book depth available for each supported exchange? for more details.

As exchanges' real-time feeds usually publish multiple order book level updates in a single message, you can recognize such grouped updates by grouping rows by the local_timestamp field if needed.

If you have any doubts about how to correctly reconstruct the full order book state from the incremental_book_L2 CSV dataset, please see this answer or contact us.

If you only need order book data for the top 25 or top 5 levels, we provide datasets with already reconstructed snapshots for every update - see book_snapshot_25 and book_snapshot_5 below.
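
As an illustration, here is a minimal order book reconstruction sketch in Python; the column names (is_snapshot, side, price, amount) and the zero-amount-removes-level convention are assumptions to verify against your downloaded files:

import csv
import gzip

bids, asks = {}, {}
in_snapshot = False

with gzip.open("deribit_incremental_book_L2_2019-11-01_BTC-PERPETUAL.csv.gz", "rt") as f:
    for row in csv.DictReader(f):
        is_snapshot = row["is_snapshot"] == "true"
        if is_snapshot and not in_snapshot:
            # a new snapshot begins - discard the previous book state
            bids.clear()
            asks.clear()
        in_snapshot = is_snapshot

        book = bids if row["side"] == "bid" else asks
        price, amount = float(row["price"]), float(row["amount"])
        if amount == 0:
            book.pop(price, None)  # zero amount removes the price level
        else:
            book[price] = amount   # otherwise insert/update the level

best_bid, best_ask = max(bids), min(asks)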

• book_snapshot_25

Tick-level order book snapshots reconstructed from exchanges' real-time WebSocket order book L2 data feeds. Each row represents the top 25 levels from each side of the limit order book and was recorded every time any of the tracked top 25 bid/ask levels changed.

• book_snapshot_5

Tick-level order book snapshots reconstructed from exchanges' real-time WebSocket order book L2 data feeds. Each row represents the top 5 levels from each side of the limit order book and was recorded every time any of the tracked top 5 bid/ask levels changed.

• trades

Individual trades data collected from exchanges' real-time WebSocket trades data feeds.

• options_chain

Tick-level options summary info (strike prices, expiration dates, open interest, implied volatility, Greeks, etc.) for all active options instruments, collected from exchanges' real-time WebSocket options tickers data feeds. Options chain data is available for Deribit (sourced from the ticker channel) and OKEx Options (sourced from the option/summary and index/ticker channels).

For the options_chain data type, only the 'OPTIONS' symbol is available (one file per day covering all options instruments).
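
For example, downloading a single day of Deribit options chain data with the Python client shown in the quick start:

from tardis_dev import datasets

datasets.download(
    exchange="deribit",
    data_types=["options_chain"],
    from_date="2019-11-01",
    to_date="2019-11-02",
    # the grouped 'OPTIONS' symbol covers all active options instruments
    symbols=["OPTIONS"],
    api_key="YOUR API KEY",
)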

• quotes

Top of the book (best bid/ask) data reconstructed from exchanges' real-time WebSocket order book L2 data feeds - best bid/ask recorded every time the top of the book changed. We deliberately chose this approach over exchanges' native real-time quote feeds, as those vary a lot between exchanges, can be throttled, are sometimes absent entirely, and are often delayed and published in batches compared to the more granular L2 updates that are the basis of our quotes dataset.
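
As a usage sketch, computing spread and mid-price from a downloaded quotes file with pandas (the bid_price/ask_price column names are assumptions - check them against the header row of the actual file):

import pandas as pd

# pandas reads .gz compressed CSV files directly
df = pd.read_csv("deribit_quotes_2019-11-01_BTC-PERPETUAL.csv.gz")
df["timestamp"] = pd.to_datetime(df["timestamp"], unit="us", utc=True)
df["spread"] = df["ask_price"] - df["bid_price"]
df["mid"] = (df["bid_price"] + df["ask_price"]) / 2
print(df[["timestamp", "bid_price", "ask_price", "spread", "mid"]].head())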

• book_ticker

• derivative_ticker

Derivative instrument ticker info (open interest, funding, mark price, index price) collected from exchanges' real-time WebSocket instruments & tickers data feeds. A row was added to the dataset every time any of the tracked values changed.

• liquidations

Liquidations data collected from exchanges' real-time WebSocket data feeds where available.

See details on which exchanges support it and since when.

Grouped symbols

In addition to the standard currency pair & instrument symbols that can be requested via the CSV datasets API, each exchange has special grouped symbols available, depending on whether it supports a given market type: SPOT, FUTURES, OPTIONS and PERPETUALS. When such a symbol is requested, the downloaded file contains the data for all instruments belonging to the given market type. This is especially useful for options instruments, as specifying each option symbol one by one can be a tedious process; using 'OPTIONS' as a symbol gives data for all options available at a given time.

These special symbols are also listed in the response to the /exchanges/:exchange API call.
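
To see which symbols (grouped ones included) and data types are available for an exchange, you can inspect that response, e.g., via the Python client's get_exchange_details helper:

from tardis_dev import get_exchange_details

# lists all downloadable symbols (grouped symbols like 'OPTIONS' or
# 'PERPETUALS' included) together with their available data types
details = get_exchange_details("deribit")
for symbol in details["datasets"]["symbols"]:
    print(symbol["id"], symbol["dataTypes"])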

Datasets API details

  • all downloadable datasets are gzip compressed

  • historical market data is available in daily intervals (separate file for each day) based on local timestamp (timestamp of message arrival), split by exchange, data type and symbol

  • data for a given day is available on the next day around 06:00 UTC - the exact date until which data is available can be requested via the /exchanges/:exchange API call (datasets.exportedUntil), e.g., https://api.tardis.dev/v1/exchanges/ftx

  • datasets are ordered and split into separate daily files by local_timestamp (message arrival timestamp)

  • an empty gzip compressed file is returned when there's no data available for a given day, symbol and data type, e.g., due to exchange downtime or very low volume currency pairs

  • if timestamp equals local_timestamp, it means the exchange didn't provide a timestamp for the message, e.g., BitMEX order book updates

  • a cell in a CSV file is empty if there's no value for it, e.g., no trade id if a given exchange doesn't provide one

  • datasets are sourced from the Tardis.dev HTTP API, which in turn provides data sourced from exchanges' real-time WebSocket market data feeds (in contrast to REST API endpoints)
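
For example, checking the datasets.exportedUntil value mentioned above (a sketch using the requests library):

import requests

# the exact date until which datasets have been exported for an exchange
info = requests.get("https://api.tardis.dev/v1/exchanges/deribit").json()
print(info["datasets"]["exportedUntil"])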

Download via client libraries

Historical datasets for the first day of each month are available to download without an API key.

# pip install tardis-dev
# requires Python >=3.6
from tardis_dev import datasets, get_exchange_details
import logging

# comment out to disable debug logs
logging.basicConfig(level=logging.DEBUG)

# function used by default if not provided via options
def default_file_name(exchange, data_type, date, symbol, format):
    return f"{exchange}_{data_type}_{date.strftime('%Y-%m-%d')}_{symbol}.{format}.gz"


# customized get filename function - saves data in nested directory structure
def file_name_nested(exchange, data_type, date, symbol, format):
    return f"{exchange}/{data_type}/{date.strftime('%Y-%m-%d')}_{symbol}.{format}.gz"


# returns data available at https://api.tardis.dev/v1/exchanges/deribit
deribit_details = get_exchange_details("deribit")
# print(deribit_details)

datasets.download(
    # one of https://api.tardis.dev/v1/exchanges with supportsDatasets:true - use 'id' value
    exchange="deribit",
    # accepted data types - 'datasets.symbols[].dataTypes' field in https://api.tardis.dev/v1/exchanges/deribit,
    # or get those values from the 'deribit_details["datasets"]["symbols"][]["dataTypes"]' dict above
    data_types=["incremental_book_L2", "trades", "quotes", "derivative_ticker", "book_snapshot_25", "book_snapshot_5", "liquidations"],
    # change date ranges as needed to fetch full month or year for example
    from_date="2019-11-01",
    # to_date is non-inclusive
    to_date="2019-11-02",
    # accepted values: 'datasets.symbols[].id' field in https://api.tardis.dev/v1/exchanges/deribit
    symbols=["BTC-PERPETUAL", "ETH-PERPETUAL",],
    # (optional) your API key to get access to non-sample data as well
    api_key="YOUR API KEY",
    # (optional) path where data will be downloaded into, default dir is './datasets'
    # download_dir="./datasets",
    # (optional) one can customize the downloaded file name/path (flat dir structure, nested etc.) - by default the 'default_file_name' function above is used
    # get_filename=default_file_name,
    # (optional) file_name_nested will download data to nested directory structure (split by exchange and data type)
    # get_filename=file_name_nested,
)

If you run into a RuntimeError: This event loop is already running error, try the solution from https://github.com/ipython/ipython/issues/11338#issuecomment-646539516 (adding nest_asyncio), as shown below.
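
In notebooks this typically means running the following before calling datasets.download:

# pip install nest_asyncio
import nest_asyncio

nest_asyncio.apply()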

Datasets API reference

GET https://datasets.tardis.dev/v1/:exchange/:dataType/:year/:month/:day/:symbol.csv.gz

Returns a gzip compressed CSV dataset for the given exchange, data type, date (year, month, day) and symbol.

Path Parameters

  • exchange - exchange id, one of https://api.tardis.dev/v1/exchanges with supportsDatasets:true

  • dataType - one of the data types described above, e.g., trades

  • year, month, day - requested date, e.g., 2019/11/01

  • symbol - instrument symbol or grouped symbol, e.g., BTC-PERPETUAL

Headers

Sample requests
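
For example, fetching a day of Deribit BTC-PERPETUAL trades with Python (a sketch using the requests library; the Bearer Authorization header format is an assumption - the free first-day-of-month files can be fetched without it):

import requests

url = "https://datasets.tardis.dev/v1/deribit/trades/2019/11/01/BTC-PERPETUAL.csv.gz"
# Authorization header format is an assumption - omit it entirely
# for the free first-day-of-month sample files
response = requests.get(url, headers={"Authorization": "Bearer YOUR API KEY"})
response.raise_for_status()

with open("deribit_trades_2019-11-01_BTC-PERPETUAL.csv.gz", "wb") as f:
    f.write(response.content)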
