TL Provider: Killing Phase 0 — Unified IDB-First Loading
The Inconsistency
The previous post ended with a clean architecture: a processing queue that runs continuously, Phase 0 cleaning stale IDB entries on startup, Phase 1 fetching and processing new streamers, Phase 2 reprocessing existing ones. It worked. But there was a subtle UX inconsistency hiding in plain sight.
First app open (empty IDB): endpoint streams appear in the list with liveUrl=null, then “light up” one-by-one as the queue resolves each liveUrl. Progressive, satisfying.
Second app open (IDB has cached liveUrls): Phase 0 runs, which iterates every IDB entry and checks its liveUrl against tango.me. Then processNewStreamer resolves each stream — when resolveLiveUrl(masterListUrl) fails, it falls back to the IDB cached liveUrl and calls setVideos(), which bulk-loads everything. Streams appear already lit up.
Two different loading experiences depending on cache state. The progressive UX was accidental — a side effect of having no cache on first open.
The Core Insight
Phase 0 was doing two things wrong:

- Separate pass over IDB before any endpoint processing. This meant the queue spent time validating cache entries that might not even match the current endpoint response.
- No unified resolution path. processNewStreamer tried resolveLiveUrl(masterListUrl) first and fell back to IDB only on failure. Phase 0 checked IDB entries independently. Two code paths, two behaviors.
The fix: make IDB a cache checked inline during each stream’s processing, not a separate phase.
resolveStreamerLiveUrl — One Path for Everything
New helper that encapsulates the IDB-first + masterListUrl-fallback pattern:
resolveStreamerLiveUrl(epoch, streamer) → Promise<string | null>

1. getCached(streamerId) — if a cached liveUrl exists, checkLiveUrl against tango.me
   - Alive → use it, update the IDB timestamp, done
   - Dead → mark cachedWasDead, fall through
2. resolveLiveUrl(masterListUrl) — try the endpoint
   - Success → cache it, done
   - Fail + cachedWasDead → both confirmed dead → removeCached(streamerId, force=true)
   - Fail + no cache existed → return null; not a removal candidate
This is the resolution path for everything — main streamers, co-streamers, and leftover IDB entries.
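The steps above can be sketched as follows. Everything here is illustrative: the helpers (getCached, putCached, removeCached, checkLiveUrl, resolveLiveUrl) are assumed signatures from the post, stubbed with an in-memory map, and the epoch parameter (a queue-staleness token in the real code) is omitted.

```typescript
// Sketch of the unified resolution path; helper bodies are in-memory stubs.
type Streamer = { streamerId: string; masterListUrl: string };

const cache = new Map<string, { liveUrl: string; cachedAt: number }>();

// Stubs standing in for the real IDB and network helpers.
const getCached = async (id: string) => cache.get(id) ?? null;
const putCached = async (s: Streamer, liveUrl: string) =>
  void cache.set(s.streamerId, { liveUrl, cachedAt: Date.now() });
const removeCached = async (id: string, _force: boolean) => void cache.delete(id);
const checkLiveUrl = async (url: string) => url.includes("alive"); // fake liveness probe
const resolveLiveUrl = async (masterListUrl: string) =>
  masterListUrl.includes("good") ? `${masterListUrl}/live-alive` : null;

async function resolveStreamerLiveUrl(streamer: Streamer): Promise<string | null> {
  let cachedWasDead = false;

  // 1. IDB first: a live cached URL means one cheap check, no backend round-trip.
  const cached = await getCached(streamer.streamerId);
  if (cached) {
    if (await checkLiveUrl(cached.liveUrl)) {
      await putCached(streamer, cached.liveUrl); // refresh the timestamp
      return cached.liveUrl;
    }
    cachedWasDead = true; // cached URL is dead; fall through to the endpoint
  }

  // 2. Fall back to resolving through masterListUrl.
  const resolved = await resolveLiveUrl(streamer.masterListUrl);
  if (resolved) {
    await putCached(streamer, resolved);
    return resolved;
  }

  // Both cache and endpoint failed → confirmed dead, purge the entry.
  if (cachedWasDead) await removeCached(streamer.streamerId, true);
  return null; // no cache existed → not a removal candidate
}
```

Note that cachedWasDead gates the removal: a null result only purges the cache when a stale entry was actually found and confirmed dead.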
Killing Phase 0
startTlQueue used to be:

1. Phase 0: processIdbCache (walk all IDB entries, check liveUrls)
2. Process initial endpoint streamers
3. Queue loop: Phase 1 (endpoint) → Phase 2 (reprocess existing)
Now it’s:

1. Process initial endpoint streamers (each uses resolveStreamerLiveUrl)
2. Consume leftover IDB entries
3. Queue loop: Phase 1 (endpoint) → Phase 2 (reprocess existing)
No separate IDB pass. Each streamer checks its own IDB entry as part of its normal processing. If the cached liveUrl is alive, great — one network check instead of a resolveLiveUrl round-trip through the backend. If it’s dead, fall through to masterListUrl resolution.
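The reworked startup sequence can be sketched like this. Function names follow the post; the bodies are stubs that just record call order, and the loop condition is a stand-in for the real queue's stop logic.

```typescript
// Illustrative sketch of startTlQueue without Phase 0; bodies are stubs.
const calls: string[] = [];
let cycles = 0;

const fetchEndpointStreamers = async () => ["alice", "bob"];
const processNewStreamer = async (s: string) => void calls.push(`process:${s}`);
const consumeLeftoverIdb = async () => void calls.push("leftovers");
const phase1FetchAndProcessNew = async () => void calls.push("phase1");
const phase2ReprocessExisting = async () => void calls.push("phase2");
const keepRunning = () => ++cycles <= 2; // stand-in for the real stop condition

async function startTlQueue(): Promise<void> {
  // No Phase 0: each endpoint streamer consults its own IDB entry
  // inside processNewStreamer via resolveStreamerLiveUrl.
  for (const s of await fetchEndpointStreamers()) {
    await processNewStreamer(s);
  }
  // Leftover IDB entries are consumed exactly once, after the endpoint list.
  await consumeLeftoverIdb();
  // Then the normal queue loop: Phase 1 → Phase 2, repeatedly.
  while (keepRunning()) {
    await phase1FetchAndProcessNew();
    await phase2ReprocessExisting();
  }
}
```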
Leftover IDB Entries
After the endpoint list is fully processed, there may be IDB entries for streamers that weren’t in the endpoint response. These are streamers from a previous session — maybe the API response changed, maybe they went offline and came back.
consumeLeftoverIdb handles them:
- getAllCached() → filter out entries already in streamerMap (by alias)
- For each leftover: reconstruct a TlStreamer from the cached entry → resolveStreamerLiveUrl
  - Alive → add to the bottom of the list, run co-streamer discovery
  - Dead → already removed from IDB by resolveStreamerLiveUrl
The key difference from endpoint entries: endpoint entries appear in the list immediately with liveUrl=null, while leftover IDB entries only appear if their liveUrl is confirmed alive. This makes sense — the endpoint is the authority on “who should be in the list.” IDB leftovers are speculative; they earn their spot by being alive.
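A sketch of that consumption pass, under the assumptions in the post: getAllCached, streamerMap, resolveStreamerLiveUrl, and the co-streamer discovery step are assumed APIs, stubbed in memory here.

```typescript
// Illustrative consumeLeftoverIdb: leftovers earn a spot only if alive.
type CachedEntry = { streamerId: string; alias: string; masterListUrl: string };

const streamerMap = new Map<string, unknown>([["alice", {}]]); // already in the list
const visibleList: string[] = [];

const getAllCached = async (): Promise<CachedEntry[]> => [
  { streamerId: "1", alias: "alice", masterListUrl: "u1" },      // duplicate → skipped
  { streamerId: "2", alias: "bob", masterListUrl: "u2-good" },   // resolves alive
  { streamerId: "3", alias: "carol", masterListUrl: "u3" },      // dead → dropped
];
// Stub: dead entries are already purged from IDB inside this call.
const resolveStreamerLiveUrl = async (e: CachedEntry) =>
  e.masterListUrl.includes("good") ? `${e.masterListUrl}/live` : null;
const discoverCoStreamers = async (_alias: string) => void 0;

async function consumeLeftoverIdb(): Promise<void> {
  const leftovers = (await getAllCached()).filter((e) => !streamerMap.has(e.alias));
  for (const entry of leftovers) {
    // Reconstruct a streamer from the cached fields and resolve it.
    const liveUrl = await resolveStreamerLiveUrl(entry);
    if (liveUrl) {
      visibleList.push(entry.alias); // only confirmed-alive leftovers appear
      await discoverCoStreamers(entry.alias);
    }
    // Dead → nothing to do: resolveStreamerLiveUrl already removed the entry.
  }
}
```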
Expanding the IDB Schema
To reconstruct a TlStreamer from a leftover IDB entry, the cache needs more than {streamerId, masterListUrl, liveUrl, cachedAt}. The new schema (v3) stores:
{
  streamerId, streamId, alias, firstName,
  masterListUrl, isFollowing, parentAlias,
  liveUrl, cachedAt
}
Everything needed for co-streamer discovery and list insertion. Migration is a store clear — the cache repopulates on the first cycle.
putCached changed from putCached(streamerId, masterListUrl, liveUrl) to putCached(streamer, liveUrl) — it now stores all fields from the streamer object.
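One possible shape for the v3 record and the new putCached signature. The field names come from the post; the types (and the in-memory store standing in for the IDB object store) are assumptions.

```typescript
// Sketch of the v3 cache record: everything needed to reconstruct a streamer.
interface TlStreamer {
  streamerId: string;
  streamId: string;
  alias: string;
  firstName: string;
  masterListUrl: string;
  isFollowing: boolean;
  parentAlias: string | null;
}

interface CachedStreamer extends TlStreamer {
  liveUrl: string;
  cachedAt: number;
}

const store = new Map<string, CachedStreamer>(); // stand-in for the IDB object store

// putCached(streamer, liveUrl): persists every field from the streamer object,
// not just the three v2 fields, so leftovers can be reconstructed later.
async function putCached(streamer: TlStreamer, liveUrl: string): Promise<void> {
  store.set(streamer.streamerId, { ...streamer, liveUrl, cachedAt: Date.now() });
}
```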
The Result
Every app open now has the same behavior:
- Endpoint streams appear with liveUrl=null
- Queue processes each one-by-one: check the IDB cache first (fast), fall back to masterListUrl (slower)
- Streams light up progressively as liveUrls resolve
- After the endpoint list, leftover IDB entries appear at the bottom (only if alive)
- Queue loop continues: fetch endpoint → process new → reprocess existing
No Phase 0. No bulk loading. No behavioral difference between first and second open.
Takeaway
The original Phase 0 was defensive — “clean the cache before anything touches it.” But it was also a separate codepath that processed IDB entries differently from how the main queue processed endpoint entries. Two paths meant two behaviors.
Inlining the IDB check into the normal processing path was simpler and gave consistent UX. The cache becomes what it should have been from the start: a transparent acceleration layer that the processing code consults as part of its normal flow, not a separate subsystem that needs its own startup ritual.