The problem of visualizing huge amounts of data is well known in the field of Computer Graphics. Visualizing a large number of items (on the order of millions) pushes almost any technique to its limits in terms of expressivity and scalability. To deal with this problem we propose a "feature preservation" approach, based on the idea of modelling the final visualization in a virtual space in order to analyze its features (e.g., absolute and relative density, clusters). Through this approach we provide a formal model for measuring the visual clutter that results from representing a large dataset on a physical device, obtain quantitative figures for the visualization decay, and devise an automatic sampling strategy able to preserve relative densities.
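To make the sampling idea concrete, the sketch below shows one way a density-preserving sampler could work: bin the items into a virtual-space grid, then apply a single global sampling rate inside every cell so that the ratio between any two cell densities is unchanged while the total number of drawn points fits the device budget. This is only an illustration under stated assumptions; the paper defines the strategy formally, and the grid size, budget, and function name here are invented for the example.

```python
import numpy as np

def density_preserving_sample(points, grid=(64, 64), budget=10_000, rng=None):
    """Subsample a 2-D point set so that the *relative* densities of
    grid cells are preserved while the drawn points fit `budget`.

    Hypothetical sketch: a uniform per-cell rate keeps the ratio
    between any two cell densities unchanged.
    """
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    n = len(points)
    if n <= budget:
        return points  # the device can show everything as-is

    # Map each point to a cell of the virtual-space grid.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)
    cells = ((points - mins) / span * (np.array(grid) - 1)).astype(int)
    cell_ids = cells[:, 0] * grid[1] + cells[:, 1]

    # A single global rate r = budget / n, applied inside every cell,
    # scales all cell densities by the same factor.
    rate = budget / n
    keep = np.zeros(n, dtype=bool)
    for cid in np.unique(cell_ids):
        idx = np.flatnonzero(cell_ids == cid)
        k = max(1, int(round(len(idx) * rate)))  # keep sparse cells visible
        keep[rng.choice(idx, size=k, replace=False)] = True
    return points[keep]
```

Note one deliberate design choice in the sketch: every occupied cell keeps at least one point, which slightly distorts relative densities in very sparse cells but prevents isolated items from disappearing entirely.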
- Original language: English (US)
- Number of pages: 13
- Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
- State: Published - Dec 1 2004
ASJC Scopus subject areas
- Theoretical Computer Science
- Computer Science (all)