A runtime form surface for live model comparison, not just another static tool list.

Dialog-LAB

Dialog-LAB is the live comparison desk inside SindByte: run A/B/C model rounds, expose reasoning blocks, capture voice input, and keep the session under explicit control.

2-3 models in parallel
Thinking extraction
Voice input + TTS
Session logging
Topic lock + stop-repeat
LMChat companion surface

This form complements the audited MCP catalog. The published tool list can change with build profile, credentials, and registration mode, but Dialog-LAB remains the fastest way to compare actual model behavior side by side.

Dialog-LAB live interface
Live Dialog-LAB surface inside the current SindByte build. Use it when you want model comparison, structured rounds, and visible reasoning output in one place.

What It Adds Beyond a Single Chat Window

Parallel A/B/C Runs

Send the same prompt to multiple models, then inspect differences in tone, structure, and reasoning quality before deciding what to keep.
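A parallel round can be sketched as a simple concurrent fan-out. This is a minimal illustration, not the Dialog-LAB implementation: the ask() function here is a hypothetical stand-in for whatever provider client you actually use.

```python
from concurrent.futures import ThreadPoolExecutor


def ask(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real model call; swap in your provider client.
    return f"[{model}] response to: {prompt}"


def fan_out(prompt: str, models: list[str]) -> dict[str, str]:
    # Send the same prompt to every slot concurrently, collect answers by model.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(ask, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}


answers = fan_out("Summarize the trade-offs.", ["model-a", "model-b", "model-c"])
```

With the answers keyed by model, the side-by-side comparison step is just a matter of diffing the values.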

Thinking Visibility

Reasoning tags such as <think>, <thinking>, and <reason> can be surfaced for review instead of staying buried in raw output.

Explicit Session Control

Use round limits, topic locks, and repetition checks to keep longer model debates productive instead of drifting or looping.

Dialog-LAB and LMChat Work Together

LMChat live interface
LMChat is the lighter chat surface for direct prompting plus IQTools. Dialog-LAB is the heavier comparison bench when you need multiple models, explicit rounds, and reviewable reasoning traces.

Recommended Session Pattern

1. Assign roles: Put candidate models into slots A, B, and optionally C. Keep one slot stable when you benchmark prompt changes.
2. Lock the scope: Use topic_lock and max_rounds when you want deterministic comparisons instead of free-form chatter.
3. Review and merge: Compare answers, inspect extracted reasoning, then merge the best parts into one saved session or hand the outcome to LMChat/IQTools.

Thinking Extraction Commands

The form handles extraction visually, but these SPR helpers define the underlying behavior and are useful when you script or inspect the same flow elsewhere in the product.

LMS.THK

Extract the reasoning blocks from a model response so you can inspect how the answer was formed.

LMS.THS

Strip thinking blocks and keep only the clean user-facing answer when you need production-ready output.

LMS.HTK

Check whether thinking tags are present and how many segments were found before you decide how to display them.
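The three helpers can be approximated with one tag-aware regular expression. This is a sketch of the described behavior, not the product's internal code: the function names only mirror LMS.THK, LMS.THS, and LMS.HTK, and the tag list is taken from the reasoning tags named earlier on this page.

```python
import re

THINK_TAGS = ("think", "thinking", "reason")
_PATTERN = re.compile(
    r"<({tags})>(.*?)</\1>".format(tags="|".join(THINK_TAGS)),
    re.DOTALL,
)


def extract_thinking(text: str) -> list[str]:
    # LMS.THK-style: pull out the reasoning block contents for review.
    return [m.group(2).strip() for m in _PATTERN.finditer(text)]


def strip_thinking(text: str) -> str:
    # LMS.THS-style: drop thinking blocks, keep the user-facing answer.
    return _PATTERN.sub("", text).strip()


def has_thinking(text: str) -> tuple[bool, int]:
    # LMS.HTK-style: report presence and segment count before display.
    count = len(_PATTERN.findall(text))
    return (count > 0, count)
```

A raw response like `<think>plan the steps</think>Final answer.` then yields one extracted segment and a clean answer string, which is the split the form surfaces visually.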

Voice, Logging, and Review Loop

Voice in: Use transcription for rapid prompt entry when you are testing several models on the same question.
Voice out: Turn the winning answer into speech when the comparison result needs to be reviewed hands-free.
Save logs: Keep timestamped session logs when you benchmark models, prompts, or provider settings over time.
Handoff: Move the final prompt or answer into LMChat, IQ workflows, or normal MCP tool steps once the comparison phase is done.
Operational note: Dialog-LAB is most useful when the live catalog is filtered or credential-gated, because it gives you a stable front-end for model comparison even when the runtime-visible tool set differs across builds, credentials, or registration modes.

Next Step

Use the manual for setup details, or jump to workflow recipes that combine Dialog-LAB with LMChat, IQTools, timers, and image generation.