
# Market data

Reading the book, fills, and pair catalog. Public — no session token required.

## What's available

| RPC | Returns | Use it for |
| --- | --- | --- |
| `MarketDataService.ListPairs` | Catalog of pairs + per-spot metadata. | Boot — discover what's tradeable. |
| `MarketDataService.GetBook` | One-shot L2 or L3 snapshot. | Cold-start reconciliation, periodic resync. |
| `MarketDataService.Subscribe` | Stream of L1/L2/L3 book updates + status events for one or more pairs. | Continuous order-book view, depth charts, status alerts. |
| `MarketDataService.SubscribeFills` | Stream of executed trades. Optional per-pair filter. | Trade tape, volume metrics. |

A single Subscribe call returns a multiplexed stream — every pair you subscribed to plus cross-cutting StatusEvents arrive on the same channel.
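Under the assumption that `MarketDataEvent` carries a oneof-style payload, routing the multiplexed events can be sketched with local stand-in types (hypothetical, for illustration only; the real types come from the generated proto module):

```rust
// Hypothetical stand-in for the MarketDataEvent oneof payload;
// the generated proto type replaces this in real code.
enum Event {
    Book { pair_id: u32 },    // L1/L2/L3 book update for one pair
    Status { healthy: bool }, // cross-cutting StatusEvent
}

// Route each event arriving on the single multiplexed channel.
fn dispatch(ev: Event) -> &'static str {
    match ev {
        Event::Book { .. } => "apply to local book",
        Event::Status { healthy: false } => "react to status change",
        Event::Status { .. } => "note healthy status",
    }
}
```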

## Pick your level

Subscriptions take a FeedLevel:

| Level | Payload | Use for |
| --- | --- | --- |
| `FEED_LEVEL_L1` | Best bid + best ask | Tickers, mark prices, sanity checks. |
| `FEED_LEVEL_L2` | Aggregated depth, snapshot + deltas | Depth charts, mid calculation, taker sizing. |
| `FEED_LEVEL_L3` | Per-maker order list | Maker analytics. Does not include oracle-offset orders (those are virtual). |

Most integrations want L2.

## Snapshot then stream

The canonical pattern for a UI:

```rust
use std::sync::Arc;
use superis::ResilientStream;
use superis::proto::{
    market_data_service_client::MarketDataServiceClient,
    GetBookRequest, MarketDataEvent, Pair, SpotId, SubscribeRequest,
    PairSubscription, FeedLevel,
};

let mut market = MarketDataServiceClient::new(channel.clone());

// 1. Cold-start snapshot.
let snapshot = market
    .get_book(GetBookRequest {
        pair: Some(Pair { base: Some(SpotId { id: 1 }), quote: Some(SpotId { id: 0 }) }),
        level: FeedLevel::L2.into(),
    })
    .await?
    .into_inner();
let book = build_local_book(snapshot);

// 2. Subscribe to deltas.
let pair = Pair { base: Some(SpotId { id: 1 }), quote: Some(SpotId { id: 0 }) };
let sub = PairSubscription {
    pair: Some(pair),
    level: FeedLevel::L2.into(),
    snapshot_only: false,
};
let factory: superis::resilience::StreamFactory<MarketDataEvent> = {
    let mut market = market.clone();
    let sub = sub.clone();
    Arc::new(move || {
        let mut market = market.clone();
        let sub = sub.clone();
        Box::pin(async move {
            let stream = market
                .subscribe(SubscribeRequest { pairs: vec![sub] })
                .await?
                .into_inner();
            Ok(Box::pin(stream) as superis::resilience::BoxStream<_>)
        })
    })
};
let book_stream = ResilientStream::new(factory, None, 256);
let mut rx = book_stream.subscribe().await;

while let Ok(ev) = rx.recv().await {
    apply_event(&mut book, ev);
}
```

```go
sub := &pb.PairSubscription{
    Pair:  &pb.Pair{Base: &pb.SpotId{Id: 1}, Quote: &pb.SpotId{Id: 0}},
    Level: pb.FeedLevel_FEED_LEVEL_L2,
}

// Cold-start snapshot.
snap, err := market.GetBook(ctx, &pb.GetBookRequest{
    Pair: sub.Pair, Level: pb.FeedLevel_FEED_LEVEL_L2,
})
if err != nil { log.Fatal(err) }
book := buildLocalBook(snap)

// Subscribe to deltas.
factory := func(ctx context.Context) (<-chan *pb.MarketDataEvent, <-chan error, error) {
    stream, err := market.Subscribe(ctx, &pb.SubscribeRequest{Pairs: []*pb.PairSubscription{sub}})
    if err != nil { return nil, nil, err }
    items := make(chan *pb.MarketDataEvent, 256)
    errs := make(chan error, 1)
    go func() {
        defer close(items); defer close(errs)
        for {
            ev, err := stream.Recv()
            if err != nil { errs <- err; return }
            items <- ev
        }
    }()
    return items, errs, nil
}
rs := superis.NewResilientStream(factory, nil, 256)
rs.Start(ctx)
defer rs.Close()
for ev := range rs.Subscribe() {
    applyEvent(book, ev)
}
```

```ts
import { createClient, type Transport } from "@connectrpc/connect";
import { MarketDataService } from "@superis/sweetspot-client";
import { ResilientStream } from "@superis/sweetspot-client";

const market = createClient(MarketDataService, transport);
const pair = { base: { id: 1n }, quote: { id: 0n } };

// Cold-start snapshot.
const snapshot = await market.getBook({ pair, level: "FEED_LEVEL_L2" });
const book = buildLocalBook(snapshot);

// Subscribe to deltas.
const stream = new ResilientStream({
  factory: async (signal) =>
    market.subscribe({ pairs: [{ pair, level: "FEED_LEVEL_L2", snapshotOnly: false }] }, { signal }),
  capacity: 256,
});
stream.subscribe((ev) => applyEvent(book, ev));
stream.start();
```

The ResilientStream wrapper handles auto-reconnect + per-subscriber fan-out. On reconnect, call GetBook again to resync: the server restarts the L2 stream from a fresh snapshot, but you may have missed deltas in flight.
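One way to detect the missed-deltas case, assuming each L2 delta carries a monotonically increasing sequence number (the field and helper names here are hypothetical, not part of the documented API):

```rust
// Returns true when a delta does not directly follow the sequence
// number of the locally applied book, i.e. deltas were lost in flight
// and a fresh GetBook snapshot is needed before continuing.
fn needs_resync(book_seq: u64, delta_seq: u64) -> bool {
    delta_seq != book_seq + 1
}
```

On `true`, drop buffered deltas, call GetBook, rebuild the local book, and resume applying the stream.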

## Fills

Separate stream so book consumers don't pay to deserialize fills. Filter by pair (empty = all pairs).

```rust
let mut fills = market
    .subscribe_fills(SubscribeFillsRequest {
        pairs: vec![Pair { base: Some(SpotId { id: 1 }), quote: Some(SpotId { id: 0 }) }],
    })
    .await?
    .into_inner();

while let Some(fill) = fills.message().await? {
    println!("{} {} @ {}", fill.side, fill.size.unwrap().value, fill.price.unwrap().value);
}
```

## Discovering pairs at boot

```rust
use superis::config::{refresh, ConfigCache};

let cache = ConfigCache::new();
let cfg = refresh(&cache, &mut market, &mut tx, server_program_id).await?;
for p in &cfg.pairs {
    println!("{}/{}: spot {} → {}", p.base_name, p.quote_name, p.base_spot_id, p.quote_spot_id);
}
```

ConfigCache joins ListPairs + TxService.GetSponsoredPayers into one cached struct you can pass into the quoting layer.

## Health events

Subscribe multiplexes StatusEvents onto the same stream. Treat them as advisory — the SDK doesn't gate calls on them. Common values:

| State | What it means | What to do |
| --- | --- | --- |
| `HEALTH_STATE_HEALTHY` | Book is fresh. | Quote and trade normally. |
| `HEALTH_STATE_DEGRADED` | Book may be stale. | Widen quotes; consider pausing taker flow. |
| `HEALTH_STATE_HALTED` | Don't act on this data. | Pause submissions until you see HEALTHY again. |

The status can be global (no pair) or scoped to one pair.
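A minimal sketch of acting on those states, using a local enum in place of the generated proto one:

```rust
// Local mirror of the proto health states, for illustration only.
enum HealthState {
    Healthy,
    Degraded,
    Halted,
}

// The events are advisory and the SDK does not gate calls on them,
// so this policy decision lives in application code.
fn quoting_action(state: HealthState) -> &'static str {
    match state {
        HealthState::Healthy => "quote and trade normally",
        HealthState::Degraded => "widen quotes; consider pausing taker flow",
        HealthState::Halted => "pause submissions until HEALTHY",
    }
}
```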

## Rate limits

MarketDataService is per-IP rate limited. Bursting past the limit returns `RESOURCE_EXHAUSTED`; the bucket replenishes over time. For high-throughput consumers, prefer the streaming RPCs over polling GetBook — streams don't bill against the bucket per event.
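When a unary call does hit `RESOURCE_EXHAUSTED`, a capped exponential backoff is a reasonable retry schedule. The 100 ms base and 5 s cap below are illustrative choices, not documented limits:

```rust
// Delay before retry attempt `attempt` (0-based), doubling each time
// and capped so a long outage never produces multi-minute sleeps.
fn backoff_ms(attempt: u32) -> u64 {
    let base = 100u64;  // first retry after 100 ms
    let cap = 5_000u64; // never wait longer than 5 s
    cap.min(base.saturating_mul(1u64 << attempt.min(10)))
}
```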

## Backoff

| Operation | Suggested cadence |
| --- | --- |
| `ListPairs` | Once at boot, then on schema changes. |
| `GetBook` (single pair) | Once at boot, then on stream reconnect. |
| `Subscribe` (any level) | Long-lived. Don't tear down + reopen on every event. |
| `SubscribeFills` | Long-lived. |

Polling GetBook >1 Hz means you should be on Subscribe instead.
