**Evaluating Visual Summarization Techniques for Event Sequences**
--------------------------------------------------------------
**Abstract**
Real-world event sequences are often complex and heterogeneous, making simple data aggregation and visual encoding methods inadequate for producing meaningful visualizations. Numerous visual summarization techniques have therefore been developed to generate concise overviews of sequential data. These techniques vary widely in their summarization methods and visualization designs. Despite the steady progress in developing novel techniques, there is currently little understanding of how effective these techniques are relative to one another. In this work, we present the design and results of an insight-based crowdsourcing experiment evaluating three existing visual summarization techniques: CoreFlow, SentenTree, and Sequence Synopsis. We compare the visual summaries generated by these techniques across three tasks, on six datasets, at six levels of granularity, and analyze the effects of these variables on visual summary quality and experiment completion time. Our analysis shows that Sequence Synopsis produces better visual summaries than the other two techniques overall, but its results also take the longest to understand. We also find that participants evaluate visual summary quality along two dimensions: interpretability and content. We discuss the implications of our findings for developing and evaluating new visual summarization techniques.
**Content**
-------
This OSF repository contains all the supplementary materials for our submission. For a full description of the code and the analysis process, please refer to the full paper.