Starholder API

The Emission Surface

A complete field-by-field guide to the structured output the world produces every time it thinks — from the wire format to what each field means for builders.

What the Emission Surface Is

Every time someone asks the world a question — whether through the browser, the REST API, an MCP tool, or a directive — the Protocol Emission Engine (PEE) runs an inference turn. The output of that turn is not a chat message. It is a structured, layered document called the Emission Surface.

The Emission Surface carries everything a consumer needs to present, analyze, or build on what the world just thought: the prose response, every entity and setting the reasoning touched, resolved imagery for each, atmospheric signals from the settings involved, how dense or sparse the territory is, what claims were made and how confident the system is, and session context that carries forward into the next turn.

This is what makes Starholder's output a media substrate rather than a text response. A documentary agent reads the prose and media layers. A podcast agent reads the prose and epistemic layers. An interactive experience reads the refs and topology layers. A research tool reads the claims and contradictions. They all consume the same document — they just use different parts of it.

Schema identifier: starholder.emission.v1


How to Get the Emission Surface

Three transport options, same payload:

| Transport | Endpoint | Best For |
| --- | --- | --- |
| SSE (streaming) | GET /api/v1/world/{worldId}/stream | Live rendering, real-time UIs. Events arrive incrementally during generation. |
| Sync JSON | POST /api/v1/world/{worldId}/execute | One-shot agent calls. The complete surface arrives in a single response. |
| Snapshot retrieval | GET /api/v1/world/{worldId}/turns/{turnId}/surface | Historical retrieval, debugging, analytics. Fetch any completed turn. |
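For the sync endpoint, a request can be assembled as below. This is a minimal sketch: the `query` body field, the Bearer auth scheme, and the base URL are illustrative assumptions, not confirmed details of the API.

```typescript
// Build a sync-execute request for a world. The `query` body field and
// Bearer auth are assumptions for illustration, not documented API details.
interface ExecuteRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildExecuteRequest(
  baseUrl: string,
  worldId: string,
  query: string,
  apiKey: string
): ExecuteRequest {
  return {
    url: `${baseUrl}/api/v1/world/${encodeURIComponent(worldId)}/execute`,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ query }),
  };
}

// Usage with fetch (Node 18+ or browsers):
// const req = buildExecuteRequest("https://api.example.com", "starholder_main", "Who is Dr. Chen?", key);
// const surface = await fetch(req.url, req).then(r => r.json());
```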

Layer Projection

You don't always need all eight layers. Add ?layers=text,refs,context to any endpoint that returns a surface and you'll get only those layers (plus identity, which is always included). This reduces payload size when you only care about specific parts of the output.
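A small helper can apply the projection to any surface-returning URL; the helper name is hypothetical, but the `?layers=` parameter follows the description above.

```typescript
// Append a layer projection to a surface endpoint URL.
// The identity layer is always included server-side, so it never
// needs to be requested explicitly.
function withLayers(url: string, layers: string[]): string {
  if (layers.length === 0) return url;
  const sep = url.includes("?") ? "&" : "?";
  return `${url}${sep}layers=${layers.join(",")}`;
}

// withLayers("/api/v1/world/w1/turns/t1/surface", ["text", "refs", "context"])
// → "/api/v1/world/w1/turns/t1/surface?layers=text,refs,context"
```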


The Eight Layers

| Layer | What It Tells You |
| --- | --- |
| identity | Who produced this turn, when, under what directive, and with what authority |
| text | The prose response: streaming raw (during generation), annotated with entity tags, clean for display, and parsed mention spans |
| refs | Everything the system considered, what it used, and what it rejected — with navigable API links |
| media | Resolved imagery per entity/setting, positioned against specific mentions in the text |
| context | Atmosphere from the dominant setting (sounds, smells, light, temperature, horizon), shader/environment hints, music, and semantic scene profiling |
| topology | Retrieval density, confidence metrics, what drove the retrieval, and how much prior reasoning exists |
| epistemic | Claims made, confidence scores, uncertainty, contradictions detected, and reasoning mode |
| session | Cross-turn memory: entity momentum, accumulated claims, open threads, warm summaries |

Identity Layer

Provenance for the turn: everything you need to know about who triggered it, when, and under what conditions.

| Field | Type | Description |
| --- | --- | --- |
| turnId | string | Unique identifier for this reasoning turn |
| executionId | string | Execution run identifier (groups related operations) |
| packetId | string or null | The ThoughtPacket (structured reasoning artifact) committed from this turn. Null if the turn was aborted or commit was deferred. |
| worldId | string | Which world this turn ran against |
| sessionId | string | Conversation session identifier |
| timestamp | string | ISO 8601 when the turn completed |
| turnDurationMs | number | Wall-clock time for the turn in milliseconds |
| ownerUserId | string | The user account that owns this action |
| actorId | string | The actor ref: human:user_..., persona:persona_..., or agent:ext_... |
| actorKind | string | human, persona, or external_agent |
| apiKeyId | string or absent | Present when an external agent triggered the turn |
| originSystem | string or absent | The agent's self-reported system name |
| directive | object or absent | Present when the turn was triggered by a structured directive |

Directive block (when present):

| Field | Description |
| --- | --- |
| directiveId | The directive's unique ID |
| intent | What the directive asked the persona to do |
| anchorRefs | Entity or setting refs the persona should focus on |
| sourceGapId | The gap coordinate motivating this directive |
| targetBountyId | The bounty being fulfilled |
| mode | answer (direct response), explore (open investigation), bridge (connect two things), canonize (establish facts) |
| creativeDirection | Freeform creative guidance |

Text Layer

The persona's prose output in multiple parallel forms.

| Field | Type | What It's For |
| --- | --- | --- |
| streamingRaw | string | The raw text as it streamed during generation. Only present in SSE accumulation — stripped from surface_complete and persisted snapshots. |
| annotated | string | Finalized text with inline entity tags like [ent:starholder_main:dr_chen]. Build navigable, linked text from this. |
| clean | string | Finalized text with all tags stripped. Ready for display, TTS, search indexing, or text analysis. |
| mentions | array | Every entity and setting mention with exact character positions in the clean text. |

Mention fields:

| Field | Description |
| --- | --- |
| ref | Canonical reference (e.g., ent:starholder_main:elara_mihai) |
| type | ent (entity — a person, org, technology), set (setting — a place, era, institution), or txt (story reference) |
| label | Human-readable name |
| surfaceText | The exact words in the prose that matched |
| start / end | Character offsets into clean — use these to highlight, hyperlink, or annotate |
| mentionIndex | 0-based occurrence index. The same entity might appear multiple times. |

Wire note: In the compacted wire format, mentions are not sent separately. Instead, each ref in the refs map carries a spans array of [start, end] pairs. Reconstruct the mentions array from these spans if you need it.
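That reconstruction can be sketched as follows. The `WireSurfaceRef` shape here is inferred from the fields this guide describes; treat the exact type as an assumption.

```typescript
// Rebuild the Text layer's `mentions` array from the compacted wire refs map,
// where each ref carries a spans array of [start, end] offsets into `clean`.
// WireSurfaceRef is sketched from this guide's field tables, not a published type.
interface WireSurfaceRef {
  label?: string;
  spans?: [number, number][]; // [start, end] offsets into the clean text
}

interface Mention {
  ref: string;
  label?: string;
  surfaceText: string;
  start: number;
  end: number;
  mentionIndex: number;
}

function reconstructMentions(
  clean: string,
  refs: Record<string, WireSurfaceRef>
): Mention[] {
  const mentions: Mention[] = [];
  for (const [ref, r] of Object.entries(refs)) {
    (r.spans ?? []).forEach(([start, end], mentionIndex) => {
      mentions.push({
        ref,
        label: r.label,
        surfaceText: clean.slice(start, end),
        start,
        end,
        mentionIndex,
      });
    });
  }
  // Sort by position so downstream highlighting can walk the text once.
  return mentions.sort((a, b) => a.start - b.start);
}
```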


Refs Layer

Everything the system considered, used, or rejected during retrieval and reasoning.

Catalog

The full retrieval universe — every entity, setting, story, and prior reasoning packet the system found relevant during graph traversal. This is the "considered" set, not just what made it into the output.

| Field | Description |
| --- | --- |
| ref | Canonical reference |
| label | Human-readable name |
| type | entity, setting, textroot, or packet (a prior ThoughtPacket from an earlier turn) |
| anchorDistance | Graph distance from the input anchor: 0 = direct match, 1-3 = hops away |
| relevanceScore | 0.0–1.0, how relevant this ref is to the query |
| href | API URL to fetch the full object |

Accepted

Refs the persona actually referenced in its output — confirmed through validation.

| Field | Description |
| --- | --- |
| lifecycleState | render_provisional (appeared during streaming, not yet validated) or commit_validated (passed post-generation validation) |
| provenance | How it entered: pre_retrieval (from the initial query), mid_reasoning (discovered during generation), validator_accepted (added during validation) |

Anomalies

Refs the persona tried to use that were rejected — either not found in retrieval results or forbidden by policy.

| Field | Description |
| --- | --- |
| reason | not_in_traversal_outputs (the persona hallucinated a reference) or forbidden_by_policy (valid ref but not allowed in this context) |

Wire note: In the compacted format, refs are a single Record<string, WireSurfaceRef> keyed by ref string. Anomalies have status: 'anomaly' and a reason field. Everything else is accepted. The score field carries the relevance score.
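Splitting that map back into accepted refs and anomalies is straightforward. The `status`, `reason`, and `score` field names follow the wire note above; the rest of the `WireRef` shape is an assumption.

```typescript
// Partition the compacted wire refs map: entries with status 'anomaly' are
// rejections; everything else is accepted, with `score` as relevance.
interface WireRef {
  status?: "anomaly";
  reason?: string;
  score?: number;
  label?: string;
}

function partitionRefs(refs: Record<string, WireRef>) {
  const accepted: Array<{ ref: string; score: number }> = [];
  const anomalies: Array<{ ref: string; reason: string }> = [];
  for (const [ref, r] of Object.entries(refs)) {
    if (r.status === "anomaly") {
      anomalies.push({ ref, reason: r.reason ?? "unknown" });
    } else {
      accepted.push({ ref, score: r.score ?? 0 });
    }
  }
  // Highest-relevance refs first.
  accepted.sort((a, b) => b.score - a.score);
  return { accepted, anomalies };
}
```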


Media Layer

Resolved imagery for every entity and setting referenced, with per-mention placement context.

Bundles

One bundle per referenced entity or setting, containing the best available images ranked and categorized.

| Field | Description |
| --- | --- |
| ref | The entity or setting this imagery belongs to |
| items[].mediaRef | Asset reference |
| items[].score | Relevance to this entity (0.0–1.0) |
| items[].tier | Quality: 0 = exact match (tagged to this entity), 1 = contextually relevant, 2 = fallback |
| items[].provenance | direct (explicitly tagged), semantic (found by vector similarity), fallback (best available when nothing better exists) |
| items[].resolveHref | URL to fetch the actual image file: /api/media/resolve/{mediaRef} |

Placements

Maps specific text mentions to specific images. If Dr. Chen is mentioned three times in different contexts, each mention gets its own best-match image based on the surrounding sentence.

| Field | Description |
| --- | --- |
| ref | The entity or setting |
| mentionIndex | Which mention in the text (matches mentions[].mentionIndex in the Text layer) |
| phrase | The surrounding sentence that influenced image selection |
| primaryMediaRef | Best image for this mention in this context |
| score | How well the image matches this specific mention context |
| alternates | Other candidate images with scores |
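A consumer typically prefers the per-mention placement and falls back to the entity's top-ranked bundle item. A minimal sketch, with shapes simplified from the field tables above:

```typescript
// Choose the image for a specific mention: prefer the per-mention placement,
// otherwise fall back to the entity's highest-scored bundle item.
// These interfaces are simplified sketches, not published client types.
interface Placement { ref: string; mentionIndex: number; primaryMediaRef: string; }
interface Bundle { ref: string; items: Array<{ mediaRef: string; score: number }>; }

function imageForMention(
  ref: string,
  mentionIndex: number,
  placements: Placement[],
  bundles: Bundle[]
): string | null {
  const placed = placements.find(
    p => p.ref === ref && p.mentionIndex === mentionIndex
  );
  if (placed) return placed.primaryMediaRef;
  const bundle = bundles.find(b => b.ref === ref);
  if (!bundle || bundle.items.length === 0) return null;
  return [...bundle.items].sort((a, b) => b.score - a.score)[0].mediaRef;
}
```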

Context Layer

The richest layer for immersive applications. Contains atmospheric signals, environment selection hints, music, and semantic scene profiling. This layer has several independent components that serve different purposes.

Shader / Environment Selection

Used by renderers to select visual environments, color palettes, and mood.

| Field | Description |
| --- | --- |
| contextText | Embedding-ready text for environment matching. Contains the turn's output, accepted entity names, and the user's query. Always populated, even when no settings are involved. |
| priorityTerms | Up to 8 entity labels ordered by relevance — used for boosting media scores in visual rendering. (Wire: priorityRefKeys) |

Structured Atmosphere

Sensory signals extracted from the dominant setting — the most relevant setting the persona referenced during reasoning. These are parsed from structured descriptors stored on each setting record.

| Field | Description |
| --- | --- |
| atmosphere.sound | Ambient sounds (e.g., "the gentle hum of electronic equipment and the distant pulse of neon lights") |
| atmosphere.smell | Scent descriptions (e.g., "a faint metallic tang, reminiscent of advanced technology") |
| atmosphere.temperature | Climate signals (e.g., "cool, with a subtle chill from the glass walls") |
| atmosphere.light | Lighting conditions (e.g., "soft, ambient glow from the console, casting gentle reflections") |
| atmosphere.tactile | Physical textures (e.g., "smooth, polished surfaces, cool to the touch") |
| atmosphere.horizon | What you'd see looking out (e.g., "an unobstructed view of the city below, a lattice of light and shadow") |
| visualSignature | The setting's visual identity distilled into one descriptive line |
| essence | The setting's emotional and thematic core in one line (e.g., "a sanctuary of digital echoes and contemplative light") |

When no settings are referenced in the turn, all atmosphere fields are null. This is correct — there is no setting data to surface. Shader selection still works because contextText is always populated from the output text and entity labels.
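Renderers should therefore treat every atmosphere field as nullable. A small sketch of collapsing the structure into a one-line scene description, skipping nulls (the field names match the table; the output format is just one possible rendering):

```typescript
// Collapse the structured atmosphere into a single scene-description line,
// skipping null fields — the no-settings case leaves all of them null.
interface Atmosphere {
  sound: string | null;
  smell: string | null;
  temperature: string | null;
  light: string | null;
  tactile: string | null;
  horizon: string | null;
}

function describeAtmosphere(a: Atmosphere | null): string | null {
  if (!a) return null;
  const parts = [a.light, a.sound, a.smell, a.temperature, a.tactile, a.horizon]
    .filter((v): v is string => v !== null);
  return parts.length > 0 ? parts.join("; ") : null;
}
```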

Canonical Setting

The dominant setting that atmosphere was extracted from, with confidence scoring.

| Field | Description |
| --- | --- |
| canonicalSetting.ref | Setting ref |
| canonicalSetting.label | Human-readable name |
| canonicalSetting.confidence | How confident the system is that this is the right setting for this turn |
| canonicalSetting.scoring | Detailed breakdown: retrievalScore, sceneSimilarity, lexicalAlignment, mismatchPenalty, finalScore |

Semantic Scene Profile

A structured analysis of the turn's semantic content, categorized into thematic phrase buckets. Useful for advanced rendering that adapts to the semantic character of the turn.

| Field | Description |
| --- | --- |
| semanticSceneProfile.phrasesByCategory | Key phrases organized by thematic category (e.g., "technology", "conflict", "nature") |
| semanticSceneProfile.signaturesByCategory | Detailed signatures per category with phrase, score, phraseType, sourceText, sourceSegmentId, and sentenceImportance |

Derivation Mode

How the context layer was assembled — useful for debugging and quality assessment.

| Value | Meaning |
| --- | --- |
| semantic | Normal path — atmosphere extracted from semantically matched settings |
| setting_surface_fallback | Settings were found but semantic matching failed; fell back to surface-level extraction |
| lexical_emergency | Last resort — no semantic or setting match; used lexical heuristics |

Music

Server-side music selection based on phrase extraction from the turn's output.

| Field | Description |
| --- | --- |
| musicRef.ref | Audio asset reference |
| musicRef.resolveHref | URL to stream or download the audio |
| musicRef.confidence | How well this track matches the turn's mood (0.0–1.0) |
| musicRef.queryPhrase | The phrase extracted from the turn that drove the music search |

Primary Subject

The dominant entity or setting for this turn — the "main character" of this particular reasoning act.

| Field | Description |
| --- | --- |
| primarySubject.ref | Canonical reference |
| primarySubject.label | Human-readable name |
| primarySubject.type | entity or setting |

Selection priority: dominant setting (if any) > highest-relevance entity > null.
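That priority can be sketched as a selection function. The input shapes are illustrative; only the priority order comes from the line above.

```typescript
// Stated selection priority: dominant setting wins when present; otherwise
// the highest-relevance entity; otherwise null. Shapes are illustrative.
interface CatalogRef { ref: string; label: string; type: "entity" | "setting"; relevanceScore: number; }
interface Subject { ref: string; label: string; type: "entity" | "setting"; }

function selectPrimarySubject(
  canonicalSetting: { ref: string; label: string } | null,
  acceptedRefs: CatalogRef[]
): Subject | null {
  if (canonicalSetting) {
    return { ...canonicalSetting, type: "setting" };
  }
  const top = [...acceptedRefs]
    .filter(r => r.type === "entity")
    .sort((a, b) => b.relevanceScore - a.relevanceScore)[0];
  return top ? { ref: top.ref, label: top.label, type: "entity" } : null;
}
```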


Topology Layer

Retrieval quality metrics. Tells you how dense or sparse the territory is, what drove the retrieval, and how much prior reasoning the persona had to build on.

| Field | Description |
| --- | --- |
| resultCount | Total retrieval results found |
| meanScore | Average relevance score across all results |
| topScore | Best match relevance (0.0–1.0) |
| bottomScore | Weakest match in the result set |
| scoreStdDev | Score spread — high std dev means a mix of strong and weak matches |
| sparsity | Density classification: dense, moderate, sparse, void |
| perIndex.entity | Results from entity indexes |
| perIndex.setting | Results from setting indexes |
| perIndex.textchunk | Results from story content indexes |
| seedCount | Retrieval seeds (starting points) used |
| expandedCount | Candidates found through graph expansion from seeds |
| inferentialCount | Prior ThoughtPackets found as retrievable context — this is the compounding effect. High numbers mean the persona is building on a rich foundation of prior reasoning. Low numbers mean it's mostly working from source material. |
| anchors | Starting points for retrieval, each with a source: direct_canonical_name (matched by name), hot_frontier (from session momentum), or seed (from a seed signal) |

For builders: Use sparsity to frame output confidence. If it's void, warn users they're in uncharted territory. If inferentialCount is high, the output is well-grounded in prior analysis. Entertainment products can narrate this: "This connection is well-documented across seventeen prior explorations" vs "We're venturing into thin territory here — the historical record is sparse."
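One way to turn those signals into user-facing copy, as a sketch — the thresholds and wording here are illustrative product choices, not part of the API:

```typescript
// Map topology signals to framing copy. The inferentialCount threshold (10)
// and all strings are illustrative assumptions.
type Sparsity = "dense" | "moderate" | "sparse" | "void";

function frameConfidence(sparsity: Sparsity, inferentialCount: number): string {
  if (sparsity === "void") {
    return "You're in uncharted territory; treat this output as speculative.";
  }
  if (inferentialCount >= 10) {
    return `Well-grounded: builds on ${inferentialCount} prior explorations.`;
  }
  if (sparsity === "sparse") {
    return "Thin territory; the record here is sparse.";
  }
  return "Grounded in source material with moderate prior reasoning.";
}
```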


Epistemic Layer

What the persona claims to know and how confidently it knows it.

| Field | Description |
| --- | --- |
| uncertainty | 0.0 = fully grounded in retrieved evidence, 1.0 = pure speculation |
| supportScore | Evidence grounding strength — how well the retrieved material supports the output |
| mode | Reasoning mode: answer (direct response), explore (open-ended), bridge (connecting entities), canonize (establishing new facts) |
| intentTags | Descriptive tags: factual_answer, depth_synthesis, bridge_discovery, etc. |

Claims

Specific factual assertions the persona made, with the evidence backing them.

| Field | Description |
| --- | --- |
| text | The verbatim claim |
| refs | Entity and setting references that support it |
| confidence | 0.0–1.0 |

Contradictions

When the persona's reasoning encounters conflicting information in the knowledge graph.

| Field | Description |
| --- | --- |
| claimA / claimB | The two conflicting claims |
| severity | reconcilable (both can be true — different perspectives, unreliable narration, timeline evolution) or irreconcilable (logical impossibility) |

For builders: Contradictions are narrative texture, not errors. A podcast frames reconcilable contradictions as "competing accounts." A documentary presents both sides. An interactive experience lets users investigate. Only irreconcilable contradictions indicate actual data quality issues.
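A minimal routing sketch along those lines — the framing strings and the two-way split are illustrative, not prescribed by the API:

```typescript
// Route a contradiction by severity: reconcilable conflicts become narrative
// material, irreconcilable ones are flagged as data-quality issues.
interface Contradiction {
  claimA: string;
  claimB: string;
  severity: "reconcilable" | "irreconcilable";
}

function frameContradiction(
  c: Contradiction
): { kind: "narrative" | "data_issue"; framing: string } {
  if (c.severity === "reconcilable") {
    return {
      kind: "narrative",
      framing: `Competing accounts: "${c.claimA}" vs "${c.claimB}"`,
    };
  }
  return {
    kind: "data_issue",
    framing: `Logical conflict needs review: "${c.claimA}" vs "${c.claimB}"`,
  };
}
```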


Session Layer

Accumulated state across turns. This is what gives multi-turn conversations coherence.

| Field | Description |
| --- | --- |
| turnCount | Turns completed in this session |
| activeFacts | Entity and setting refs established as grounded knowledge in prior turns |
| entityMomentum | Which entities keep appearing, weighted by recency and frequency. High-momentum entities are the session's "main characters." Each entry has ref and weight. |
| accumulatedClaims | Every claim across all turns, with turn number, confidence, and cited refs. Track how understanding evolved. |
| openThreads | Unresolved questions or topics. Each has text, status (open, partially_addressed, resolved), and introducedAtTurn. |
| warmSummaries | Condensed per-turn summaries: what the user asked (queryIntent), what the system concluded (summary), and what refs were covered. |

What resets between turns: identity, text, refs, media, context, topology, epistemic — all reset to empty when a new turn starts.

What persists: The session layer carries forward. The system remembers the conversation's accumulated knowledge, momentum, and open threads.

surface_complete is authoritative: When received, it replaces everything — including the session layer — with the server's canonical snapshot.


SSE Event Lifecycle

When consuming via the streaming endpoint, events arrive in this order during a single turn:

| Phase | Events | What's Happening |
| --- | --- | --- |
| Streaming (S1) | text_delta (repeated) | The persona is generating prose. Append each delta to build live text. |
| Streaming (S1) | ref_accepted (render_provisional) | Between generation bursts, entity references the persona has touched are flushed via a sideband queue. |
| Streaming (S1) | ref_media_bundle | Imagery resolved for those entities, also sideband-flushed between LLM read cycles. |
| Finalization (S1 close) | text_replacement | Generation complete. The finalized annotated text replaces the streaming raw text as truth. |
| Validation (S2) | ref_accepted (commit_validated) | Refs that passed post-generation validation. |
| Validation (S2) | ref_anomaly | Refs that failed validation (hallucinated or policy-forbidden). |
| Post-generation | media_placement_map | Per-mention image placements resolved against the final text. |
| Post-generation | atmosphere_context | Full Context layer: atmosphere, canonical setting, music, scene profile. |
| Commit (S3) | thoughtpacket_committed | The ThoughtPacket has been written to the knowledge graph. Carries epistemic data. |
| Commit (S3) | contradiction_detected | Conflicts found during reasoning (if any). |
| Terminal | surface_complete | The full, authoritative Emission Surface in wire format. This is the turn's final word. |

PEE Inner Events

These events arrive wrapped in a pee_event envelope on the hub stream. They provide additional operational visibility:

| Event | Description |
| --- | --- |
| budget_warning | The turn is approaching a resource limit (dimension, remaining, threshold) |
| engine_error | A recoverable or non-recoverable error during reasoning |
| turn_committed | The PEE terminal signal — ThoughtPacket written, turn complete |
| turn_aborted | The turn failed and no ThoughtPacket was committed |
| commit_deferred | Commit was delayed (e.g., pending governance review) |
| story_materialization_ready | The system detected enough accumulated reasoning to suggest a story could be materialized from this topic |
| thoughtpacket:draft:complete | A draft prose rendering of the ThoughtPacket was completed |

Hub-Only Events

These appear only on the browser hub stream (POST /api/world-program/hub), not on the external agent stream:

| Event | Description |
| --- | --- |
| wig_frame | World Interaction Governor state — routing decisions, inflection points, suggestions |
| prompt_payload | The full context payload that was sent to the LLM (debugging/transparency) |
| star:balance_changed | Real-time $STAR balance update for the authenticated user |
| bounty_workbench_open | A bounty workflow was activated |
| bounty_draft_trigger | A gap-to-bounty drafting flow was triggered |

External-Only Events

| Event | Description |
| --- | --- |
| timeout | 300 seconds of idle — the stream will close. Reconnect with Last-Event-ID or ?lastSeq=. |

Quick Reference: Event to Layer Mapping

| SSE Event | Layer Affected | Accumulation |
| --- | --- | --- |
| text_delta | text.streamingRaw | Append |
| text_replacement | text.annotated, clean, mentions | Replace |
| ref_accepted | refs.accepted | Upsert by ref |
| ref_anomaly | refs.anomalies | Append |
| ref_media_bundle | media.bundles | Upsert by ref |
| media_placement_map | media.placements | Upsert by (ref, mentionIndex) |
| atmosphere_context | context (all fields) | Replace entire layer |
| thoughtpacket_committed | epistemic | Merge |
| contradiction_detected | epistemic.contradictions | Append |
| surface_complete | All layers | Replace entire surface |
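A minimal client-side accumulator can implement these rules as a reducer. The event and surface shapes below are simplified sketches covering a few event types; real payloads will carry more fields.

```typescript
// Accumulate SSE events into a surface: Append for deltas and anomalies,
// Replace for finalized text, Upsert by ref for accepted refs, and a full
// replacement on surface_complete. Shapes are simplified sketches.
type SseEvent =
  | { type: "text_delta"; text: string }
  | { type: "text_replacement"; annotated: string; clean: string }
  | { type: "ref_accepted"; ref: string; data: unknown }
  | { type: "ref_anomaly"; ref: string; reason: string }
  | { type: "surface_complete"; surface: Surface };

interface Surface {
  text: { streamingRaw: string; annotated: string; clean: string };
  refs: {
    accepted: Record<string, unknown>;
    anomalies: Array<{ ref: string; reason: string }>;
  };
}

function emptySurface(): Surface {
  return {
    text: { streamingRaw: "", annotated: "", clean: "" },
    refs: { accepted: {}, anomalies: [] },
  };
}

function reduce(surface: Surface, event: SseEvent): Surface {
  switch (event.type) {
    case "text_delta": // Append
      surface.text.streamingRaw += event.text;
      return surface;
    case "text_replacement": // Replace
      surface.text.annotated = event.annotated;
      surface.text.clean = event.clean;
      return surface;
    case "ref_accepted": // Upsert by ref
      surface.refs.accepted[event.ref] = event.data;
      return surface;
    case "ref_anomaly": // Append
      surface.refs.anomalies.push({ ref: event.ref, reason: event.reason });
      return surface;
    case "surface_complete": // Replace entire surface
      return event.surface;
    default:
      return surface;
  }
}
```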