Downloadable CSV files
CSV datasets are available via a dedicated datasets API that allows downloading tick-level incremental order book L2 updates, order book snapshots, trades, options chains, quotes, derivative tickers and liquidations data.
For ongoing data, CSV datasets for a given day are available on the next day around 05:00 UTC.
CSV datasets are exported from the exchanges' real-time WebSocket feed data we collect (and also provide via our API as historical data in exchange-native format).
Historical datasets for the first day of each month are available to download without an API key.
Our Node.js and Python clients have built-in functions to efficiently download whole date ranges of data.
Python
Node.js
cURL
# pip install tardis-dev
# requires Python >=3.6
from tardis_dev import datasets

datasets.download(
    exchange="deribit",
    data_types=[
        "incremental_book_L2",
        "trades",
        "quotes",
        "derivative_ticker",
        "book_snapshot_25",
        "liquidations",
    ],
    from_date="2019-11-01",
    to_date="2019-11-02",
    symbols=["BTC-PERPETUAL", "ETH-PERPETUAL"],
    api_key="YOUR API KEY (optionally)",
)
See the full example that shows all available download options (download path customization, filename conventions and more).
// npm install [email protected]
// requires node version >=12
const { downloadDatasets } = require('tardis-dev')

;(async () => {
  await downloadDatasets({
    exchange: 'deribit',
    dataTypes: [
      'incremental_book_L2',
      'trades',
      'quotes',
      'derivative_ticker',
      'book_snapshot_25',
      'liquidations'
    ],
    from: '2019-11-01',
    to: '2019-11-02',
    symbols: ['BTC-PERPETUAL', 'ETH-PERPETUAL'],
    apiKey: 'YOUR API KEY (optionally)'
  })
})()
See the full example that shows all available download options (download path customization, filename conventions and more).
curl -o deribit_trades_2019-11-01_BTC-PERPETUAL.csv.gz https://datasets.tardis.dev/v1/deribit/trades/2019/11/01/BTC-PERPETUAL.csv.gz
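The cURL example above suggests a predictable URL layout. As an illustration only, here is a minimal Python sketch that builds such URLs - the `https://datasets.tardis.dev/v1/{exchange}/{data_type}/{yyyy}/{mm}/{dd}/{symbol}.csv.gz` pattern is inferred from that single example, so treat it as an assumption and verify against the datasets API reference.

```python
# Sketch: build a dataset download URL following the pattern visible in
# the cURL example above. The URL layout is an assumption inferred from
# that example, not an authoritative API contract.
from datetime import date


def dataset_url(exchange: str, data_type: str, day: date, symbol: str) -> str:
    return (
        "https://datasets.tardis.dev/v1/"
        f"{exchange}/{data_type}/{day:%Y/%m/%d}/{symbol}.csv.gz"
    )


print(dataset_url("deribit", "trades", date(2019, 11, 1), "BTC-PERPETUAL"))
# matches the URL used in the cURL example above
```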
- column delimiter: , (comma)
- newline marker: \n (LF)
- decimal mark: . (dot)
- date/time timezone: UTC
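The conventions above can be handled with the Python standard library alone. A minimal reading sketch, assuming a downloaded `.csv.gz` file with a header row and a `timestamp` column (the file name and columns here are illustrative):

```python
# Sketch: read a downloaded .csv.gz dataset using only the stdlib,
# honoring the format conventions above: comma delimiter, LF newlines,
# dot decimal mark, and UTC timestamps in microseconds since epoch.
import csv
import gzip
from datetime import datetime, timezone


def read_rows(path: str):
    # open in text mode; newline="" lets the csv module handle line endings
    with gzip.open(path, mode="rt", newline="") as f:
        for row in csv.DictReader(f):
            # timestamps are microseconds since epoch, UTC
            row["timestamp"] = datetime.fromtimestamp(
                int(row["timestamp"]) / 1_000_000, tz=timezone.utc
            )
            yield row
```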
Incremental order book L2 updates are collected from exchanges' real-time WebSocket order book L2 data feeds - the data is as deep and granular as the underlying real-time data source; please see FAQ: What is the maximum order book depth available for each supported exchange? for more details.
As exchanges' real-time feeds usually publish multiple order book level updates in a single message, you can recognize these by grouping rows by the local_timestamp field if needed. If you have any doubts about how to correctly reconstruct full order book state from the incremental_book_L2 CSV dataset, please see this answer or contact us.
In case you only need order book data for the top 25 or top 5 levels, we provide datasets with already reconstructed snapshots for every update. See book_snapshot_25 and book_snapshot_5.
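To make the reconstruction rules concrete, here is a minimal sketch that applies incremental_book_L2 rows to an in-memory book. The snapshot-reset rule follows the schema description below; the assumption that an amount of 0 removes a price level, and the `price`/`amount` column names, are not stated in this section - verify them against the full schema before relying on this.

```python
# Sketch: reconstruct order book state from incremental_book_L2 rows.
# Assumed semantics: a row with is_snapshot "true" that follows a
# non-snapshot row resets the book (per the schema note), and an amount
# of 0 removes a price level (an assumption - verify against the docs).
def apply_updates(rows):
    book = {"bid": {}, "ask": {}}
    prev_snapshot = False
    for row in rows:
        is_snapshot = row["is_snapshot"] == "true"
        if is_snapshot and not prev_snapshot:
            # a new snapshot starts: discard all existing levels
            book = {"bid": {}, "ask": {}}
        prev_snapshot = is_snapshot
        price, amount = float(row["price"]), float(row["amount"])
        levels = book[row["side"]]
        if amount == 0:
            levels.pop(price, None)  # level removed
        else:
            levels[price] = amount   # level set or updated
    return book
```

Rows sharing the same local_timestamp came from one WebSocket message, so the book is only in a consistent state after the last row of each such group has been applied.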
CSV incremental_book_L2 schema
dataset preview
| column name | description |
| --- | --- |
| exchange | exchange id |
| symbol | instrument symbol as provided by the exchange (always uppercase) |
| timestamp | timestamp provided by the exchange in microseconds since epoch - if the exchange does not provide one, the local_timestamp value is used as a fallback |
| local_timestamp | message arrival timestamp in microseconds since epoch |
| is_snapshot | possible values: true, false - if the last update was not a snapshot and the current one is, the existing order book state must be discarded (all existing levels removed) |
| side | determines to which side of the order book the update belongs |