The lighter local chat surface inside SindByte for direct execution after the comparison phase.

LMChat

LMChat is the faster chat desk inside SindByte: pick a model, run a focused prompt, attach an image, call IQ helpers when needed, and keep moving without the heavier multi-model setup of Dialog-LAB.

Single-model focus
Provider + model selection
IQ helper handoff
Attachments
Transcription when configured
Dialog-LAB companion

The current config audit shows 237 feature-flag entries across 19 MCP families, or 239 host-callable tools once the two core/runtime routes are counted. LMChat sits on top of that runtime surface and stays useful even when registration filters intentionally keep the published catalog small.
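The arithmetic behind those audit figures can be spelled out in a tiny sketch. Only the totals (237 feature-flag entries, 19 MCP families, two core/runtime routes, 239 host-callable tools) come from the audit quoted above; the variable names are illustrative.

```python
# Illustrative tally of the config-audit figures quoted above.
# The numbers come from the audit; everything else is a sketch.
feature_flag_entries = 237   # entries across the MCP families
mcp_families = 19
core_runtime_routes = 2      # core/runtime routes counted separately

# Host-callable tools = flag entries plus the two core/runtime routes.
host_callable_tools = feature_flag_entries + core_runtime_routes
print(host_callable_tools)   # 239
```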

LMChat live interface
Live LMChat surface from the current build. Use it when you want direct prompting, model switching, and quick follow-up actions without opening the full comparison lab.

What LMChat Is Best At

Fast Iteration

Use LMChat when you already know the task and want to iterate on prompts, files, or screenshots quickly without staging a full A/B/C session.

Model and Provider Choice

Switch between local and configured provider targets from the operator surface instead of rebuilding a host-side setup for each run.

Selective IQ Help

Pull in IQ helpers when a prompt needs validation, alternative thinking, or a stronger review pass, then return to the main chat flow.

LMChat and Dialog-LAB Are Different Tools

Dialog-LAB live interface
Dialog-LAB is the heavier comparison bench. LMChat is the lighter execution surface once the winning direction is clear.

Recommended Operator Pattern

1. Compare first when uncertain: Start in Dialog-LAB if you need side-by-side model review or extracted reasoning traces.
2. Continue in LMChat: Move the winning prompt, answer, or edited instruction into LMChat for faster one-model execution and follow-up iterations.
3. Escalate only when needed: Pull in IQ helpers, attachments, or transcription support when the conversation needs more than plain chat.
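The three-step pattern above can be read as a simple routing decision. This is a hedged sketch, not product code: the surface names mirror the docs, but the function and its parameters are hypothetical.

```python
# Hypothetical sketch of the operator pattern as a routing decision.
# Surface names follow the docs; the function itself is illustrative.
def pick_surface(uncertain: bool, needs_helpers: bool) -> str:
    if uncertain:
        # Step 1: compare first when the direction is unclear.
        return "Dialog-LAB"
    if needs_helpers:
        # Step 3: escalate within the lighter surface when plain chat
        # is not enough (IQ helpers, attachments, transcription).
        return "LMChat + IQ"
    # Step 2: direct one-model execution once the direction is known.
    return "LMChat"

print(pick_surface(uncertain=True, needs_helpers=False))   # Dialog-LAB
print(pick_surface(uncertain=False, needs_helpers=False))  # LMChat
```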

Practical Capabilities Inside the Surface

Provider and Model Switching

Use the shared provider/model context to move between local LM flows and configured external routes without leaving the product.
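A shared provider/model context like the one described could be modeled along these lines. SindByte's actual API is not documented here, so every name in this sketch (`ModelContext`, `switch`, the provider strings) is an assumption.

```python
# Hypothetical sketch of a shared provider/model context.
# All identifiers are assumptions, not SindByte's real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelContext:
    provider: str  # e.g. "local" or a configured external route
    model: str

    def switch(self, provider: str, model: str) -> "ModelContext":
        # Return a new context instead of mutating, so the previous
        # selection can be restored after a one-off run.
        return ModelContext(provider=provider, model=model)

ctx = ModelContext(provider="local", model="local-lm")
ctx = ctx.switch("external", "provider-model-x")
print(ctx.provider, ctx.model)  # external provider-model-x
```

Keeping the context immutable makes "move between local LM flows and configured external routes" a cheap swap-and-restore rather than a stateful reconfiguration.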

Attachment-based Review

Attach screenshots or other images directly to the chat when a prompt has to react to actual UI or visual state.

Transcription Support

When the required provider path is configured, LMChat can use transcription for faster prompt entry instead of full manual typing.
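The "when configured" gating described above can be sketched as a simple availability check. The config keys below are hypothetical; only the behavior (transcription is offered solely when the required provider path exists) comes from the docs.

```python
# Hedged sketch: gate transcription on provider configuration.
# The config shape and keys are assumptions for illustration.
def transcription_available(config: dict) -> bool:
    # Offered only when a transcription provider is named AND that
    # provider is actually present in the configured provider map.
    provider = config.get("transcription_provider")
    return bool(provider and config.get("providers", {}).get(provider))

cfg = {
    "transcription_provider": "speech-local",
    "providers": {"speech-local": {"enabled": True}},
}
print(transcription_available(cfg))  # True
print(transcription_available({}))   # False
```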

IQ Helper Passes

Escalate a draft to validation, multi-angle review, or other IQ helper logic when a simple answer is not strong enough.

Workflow Handoffs

A good LMChat result can become the next timer prompt, a manual trading note, or a tool-guided follow-up in the wider runtime.

Useful Even with Short Registration

LMChat stays valuable even when the host sees only a compact tool catalog, because the runtime still exposes a guided local chat surface over the same underlying configuration.

Workflow Links That Matter

Dialog-LAB handoff: Compare prompts there, continue execution here. That keeps heavy review and fast follow-up separate.
Timer + IQ handoff: Turn a proven LMChat prompt into a scheduled routine once the interaction stops being one-off.
Trading and image handoff: Carry approved wording into trading notes, screenshot review, or generated asset work when the wider runtime is enabled.
Operational note: LMChat is intentionally lighter than Dialog-LAB. If you need deterministic multi-model comparison, reviewable reasoning traces, or explicit round control, use Dialog-LAB first and return to LMChat after the decision point.

Next Step

Jump to the workflow recipes for Dialog-LAB to LMChat handoff, or back to the manual if you still need setup and registration guidance.