Omics-OS CloudOmni Assistant

Context

How the assistant understands your data and selections

The assistant sees what you see. Select points on a scatter plot, highlight rows in a table, or click a pathway node — your current focus becomes part of every message you send, without you having to describe it.

Aware of What You See

When you make a selection on the canvas, the assistant receives a description of that selection automatically. You can ask "what is different about these cells?" and the assistant already knows which cells you mean, how many there are, and what dataset they come from.
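The exact payload Omics-OS attaches to a message is internal, but a selection context of this kind can be sketched as a small structured record. The field names below (`source`, `dataset`, `count`, `ids`) are illustrative assumptions, not the product's actual schema:

```python
# Hypothetical sketch of the context attached after a canvas selection.
# Field names are illustrative only, not Omics-OS internals.
selection_context = {
    "source": "scatter_plot",           # which canvas element produced it
    "dataset": "pbmc_10k",              # dataset the points belong to
    "count": 42,                        # how many cells are selected
    "ids": ["AAACCTG-1", "AAACGGG-1"],  # truncated list of selected IDs
}

def describe(ctx: dict) -> str:
    """Render the selection the way the assistant might receive it."""
    return f'{ctx["count"]} cells selected on a {ctx["source"]} from dataset "{ctx["dataset"]}"'

print(describe(selection_context))
```

With a record like this accompanying the question "what is different about these cells?", the assistant never needs the cells enumerated in prose.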

Types of Context

Context           Source                  Example
Data selection    Canvas plots, tables    "Explain these 42 selected cells"
Active dataset    Loaded modalities       Knows columns, types, and size
Session history   Prior messages          Builds on previous findings

Selections

Every interactive element on the canvas can produce a selection the assistant understands:

  • Scatter plots — lasso or box-select a group of points
  • Data tables — highlight one or more rows
  • Pathway maps — click a gene, compound, or linked pathway
  • 3D structures — click a residue or chain
  • Genome browser — select a region or feature
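One way to picture the five sources above is a normalization step that maps each canvas element to a common selection record. Everything here — the element names, payload keys, and record shape — is an assumption for illustration, not how Omics-OS is implemented:

```python
# Illustrative only: normalizing different canvas selections into one
# record shape before they reach the assistant. Names are assumptions.
def normalize_selection(element: str, payload: dict) -> dict:
    if element == "scatter":
        return {"kind": "points", "n": len(payload["point_ids"])}
    elif element == "table":
        return {"kind": "rows", "n": len(payload["row_ids"])}
    elif element == "pathway":
        return {"kind": "node", "id": payload["node_id"]}
    elif element == "structure":
        return {"kind": "residue", "id": payload["residue"]}
    elif element == "genome":
        return {"kind": "region", "span": payload["region"]}
    raise ValueError(f"unknown canvas element: {element}")

print(normalize_selection("table", {"row_ids": [3, 7, 11]}))
```

A common record shape is what lets the same question ("explain this selection") work regardless of which element produced it.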

Once you have made a selection, the assistant prompt area shows a suggestion for how to ask about it. You can use the suggestion or write your own question.

Dataset Awareness

When you load a dataset into the Workspace, the assistant becomes aware of its shape — the number of observations, the available columns, and the data types. You do not need to describe your data before asking questions about it.
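The "shape" the assistant receives can be thought of as a compact schema summary like the one sketched below; the function and field names are hypothetical, and the column types shown are placeholders:

```python
# Hypothetical summary of what the assistant learns when a dataset is
# loaded into the Workspace: observation count, columns, and types.
def dataset_summary(columns: dict, n_obs: int) -> str:
    cols = ", ".join(f"{name} ({dtype})" for name, dtype in columns.items())
    return f"{n_obs} observations; columns: {cols}"

print(dataset_summary({"gene": "str", "expression": "float", "cluster": "int"}, 10_000))
```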

Session History

The assistant reads the full conversation from the current session. Earlier findings, clarifications, and uploaded files remain part of the context as the session continues.
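Session history behaves like a rolling transcript: each turn is appended, and the whole list accompanies the next question. The sketch below assumes a simple role/text structure for illustration:

```python
# Sketch of session history as rolling context. Each turn is appended,
# and the full transcript travels with every new message. The role/text
# structure is an assumption, not Omics-OS's internal format.
history: list[dict] = []

def send(role: str, text: str) -> list[dict]:
    history.append({"role": role, "text": text})
    return history  # the complete transcript so far

send("user", "Explain these 42 selected cells")
send("assistant", "They form a distinct T-cell subcluster.")
send("user", "Compare them with the earlier cluster you described")
print(len(history))
```

Because the earlier turns remain in the transcript, a follow-up like "compare them with the earlier cluster" resolves without restating what "them" or "the earlier cluster" means.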
