November 3, 2025
·
2 mins

Domyn’s “ICL Visualizer” at VISXAI 2025: A Backstage Look at How Language Models Learn

Domyn ICL Visualizer logo with an abstract blue shape in the background

At the November 2nd VISXAI Conference, Domyn AI engineer Reetu Raj Harsh introduced an innovative new tool called the ICL Visualizer, an interactive research platform designed to help users explore how Large Language Models (LLMs) learn from context.

To put it simply: the visualizer lets users peek into a model's in-context learning in real time. Through a range of dynamic visualizations — including attention heatmaps, induction timelines, token-level importance graphs, and induction head activity views — users can observe how the model detects patterns, processes examples, and generates predictions.

The system tracks attention patterns and quantifies induction behavior as the model reads each token. By identifying specialized “induction heads”, it distinguishes between copying and genuine pattern recognition — measuring learning strength and style with computed metrics such as induction and copying scores, attention entropy, and sparsity. Users can watch the model “learn” from input examples with color-coded visual cues that signal when attention shifts across tokens: purple for pattern recognition, blue for copying, green for context, and yellow for scanning.
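The post doesn’t publish the exact formulas behind these metrics, but two of them have standard definitions in the mechanistic-interpretability literature. As a rough illustration only (not Domyn’s implementation), here is a minimal Python sketch of an induction score — how much attention a position pays to the token that *followed* an earlier occurrence of its current token — and of mean attention entropy, computed from a single head’s attention matrix:

```python
import numpy as np

def induction_score(attn, tokens):
    """Illustrative induction score for one attention head.

    attn   : [seq, seq] row-stochastic (causal) attention matrix
    tokens : list of token ids of length seq

    For each position i, the "induction targets" are positions k
    such that tokens[k-1] == tokens[i], i.e. the token right after
    a previous occurrence of the current token. The score averages
    the attention mass placed on those targets.
    """
    seq = len(tokens)
    total, count = 0.0, 0
    for i in range(1, seq):
        targets = [k for k in range(1, i + 1) if tokens[k - 1] == tokens[i]]
        if targets:
            total += attn[i, targets].sum()
            count += 1
    return total / count if count else 0.0

def attention_entropy(attn):
    """Mean Shannon entropy of the attention rows.

    Low entropy means sharp, focused attention; high entropy means
    diffuse "scanning" over the context.
    """
    p = np.clip(attn, 1e-12, 1.0)
    return float(np.mean(-(p * np.log(p)).sum(axis=-1)))
```

On a perfectly induction-like pattern (e.g. tokens `[A, B, A, B]` with each later position attending exactly to the token after the previous match), the score is 1.0 and the entropy is near zero; a head that spreads attention uniformly would score low on induction and high on entropy.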

Designed for AI researchers, educators, and practitioners, the ICL Visualizer includes built-in guided tours and automated demos, providing an interactive and user-friendly way to observe how models “learn to learn”. Try it out for yourself!
