Getting the data model right
Before you can serve data fast, you have to decide what data looks like.
We spent time this week on the token data model. The question isn't just what fields to expose — it's how to structure state so reads are instant regardless of write volume.
We settled on a clean separation: static metadata in one place, live market state in another, aggregated metrics in a third. Each part updates independently, so a snapshot request never waits for a write to finish.
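A minimal sketch of that separation, in Python. The names here (`TokenMetadata`, `MarketState`, `AggregatedMetrics`) are assumptions for illustration, not the actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical model: three independently-updated parts of one token record.

@dataclass(frozen=True)
class TokenMetadata:        # static: written once at listing, never changes
    symbol: str
    name: str
    decimals: int

@dataclass
class MarketState:          # live: touched on every trade
    last_price: float = 0.0
    bid: float = 0.0
    ask: float = 0.0

@dataclass
class AggregatedMetrics:    # rebuilt on its own schedule, off the hot path
    volume_24h: float = 0.0
    trade_count_24h: int = 0

@dataclass
class TokenRecord:
    meta: TokenMetadata
    market: MarketState = field(default_factory=MarketState)
    metrics: AggregatedMetrics = field(default_factory=AggregatedMetrics)

    def snapshot(self) -> dict:
        # A read just copies current values; it never blocks on a writer
        # updating a different part of the record.
        return {
            "symbol": self.meta.symbol,
            "price": self.market.last_price,
            "volume_24h": self.metrics.volume_24h,
        }

token = TokenRecord(TokenMetadata("ABC", "Alpha Coin", 9))
token.market.last_price = 1.25   # a trade updates market state only
print(token.snapshot())          # → {'symbol': 'ABC', 'price': 1.25, 'volume_24h': 0.0}
```

The point of the split: a trade write touches `MarketState`, a metrics job touches `AggregatedMetrics`, and neither ever contends with the other or with metadata reads.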
OHLCV candles work the same way — maintained live as trades come in, not computed on read. The current candle is always ready. Historical candles are already built.
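A sketch of live candle maintenance, under the same caveat that this is an illustration rather than the production code. Each incoming trade either updates the current candle in place or rolls it into history when the time bucket changes, so reads never aggregate:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candle:
    open: float
    high: float
    low: float
    close: float
    volume: float

class CandleBuilder:
    """Maintains OHLCV candles as trades arrive; reads never compute."""

    def __init__(self, interval_s: int = 60):
        self.interval_s = interval_s
        self.history: dict[int, Candle] = {}  # closed candles, keyed by bucket start
        self.current: Optional[Candle] = None
        self.bucket: Optional[int] = None

    def on_trade(self, ts: int, price: float, size: float) -> None:
        bucket = ts - ts % self.interval_s
        if bucket != self.bucket:
            # Trade crossed an interval boundary: close out the old candle.
            if self.current is not None:
                self.history[self.bucket] = self.current
            self.bucket = bucket
            self.current = Candle(price, price, price, price, size)
        else:
            c = self.current
            c.high = max(c.high, price)
            c.low = min(c.low, price)
            c.close = price
            c.volume += size

b = CandleBuilder(interval_s=60)
b.on_trade(0, 10.0, 1.0)
b.on_trade(30, 12.0, 2.0)
b.on_trade(61, 11.0, 1.0)   # crosses into the next minute
print(b.history[0])         # → Candle(open=10.0, high=12.0, low=10.0, close=12.0, volume=3.0)
print(b.current.open)       # → 11.0
```

A snapshot request reads `current` directly and historical candles from `history`; nothing is aggregated at read time.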
None of this is visible yet. But it's the kind of decision that determines whether the system holds up at scale.