Quality metrics for 2D scatterplot graphics: Automatically reducing visual clutter

Enrico Bertini, Giuseppe Santucci

    Research output: Contribution to journal › Article

    Abstract

    The problem of visualizing huge amounts of data is well known in the field of Computer Graphics. Visualizing a large number of items (on the order of millions) forces almost any technique to reveal its limits in terms of expressiveness and scalability. To deal with this problem we propose a "feature preservation" approach, based on the idea of modelling the final visualization in a virtual space in order to analyze its features (e.g., absolute and relative density, clusters). Through this approach we provide a formal model to measure the visual clutter resulting from the representation of a large dataset on a physical device, obtaining figures that quantify the visualization decay and devising an automatic sampling strategy able to preserve relative densities.
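
    The sampling idea sketched in the abstract (partition the plot into small screen regions, measure each region's data density, and thin the densest regions so that the drawn points still reflect the regions' relative densities) can be illustrated roughly as follows. This is a minimal sketch in Python under assumed parameters; the 32x32 grid and the per-cell point budget are illustrative choices, not values from the paper's formal model.

        # Rough sketch of density-preserving sampling for a cluttered 2D scatterplot.
        # Illustrative only: grid resolution and per-cell capacity are assumptions,
        # not values taken from the paper.
        import numpy as np

        def density_preserving_sample(x, y, grid=(32, 32), cell_capacity=50, seed=0):
            """Return indices of a subsample whose per-cell counts preserve the
            relative densities of the original data within a per-cell budget."""
            rng = np.random.default_rng(seed)
            x, y = np.asarray(x, float), np.asarray(y, float)

            # Map every point to a cell of the "virtual" plotting grid.
            gx = np.clip(((x - x.min()) / (np.ptp(x) or 1.0) * grid[0]).astype(int), 0, grid[0] - 1)
            gy = np.clip(((y - y.min()) / (np.ptp(y) or 1.0) * grid[1]).astype(int), 0, grid[1] - 1)
            cell = gx * grid[1] + gy

            counts = np.bincount(cell, minlength=grid[0] * grid[1])
            densest = counts.max()

            # Give each cell a quota proportional to its density: the densest cell
            # uses the full capacity, the others keep their relative share.
            kept = []
            for c in np.flatnonzero(counts):
                quota = max(1, int(round(cell_capacity * counts[c] / densest)))
                members = np.flatnonzero(cell == c)
                kept.extend(rng.choice(members, size=min(quota, members.size), replace=False))
            return np.sort(np.array(kept))

        # Usage: thin 220,000 points before drawing the scatterplot.
        if __name__ == "__main__":
            gen = np.random.default_rng(1)
            x = np.concatenate([gen.normal(0, 1, 200_000), gen.uniform(-4, 4, 20_000)])
            y = np.concatenate([gen.normal(0, 1, 200_000), gen.uniform(-4, 4, 20_000)])
            keep = density_preserving_sample(x, y)
            print(f"kept {keep.size} of {x.size} points")

    The linear mapping from density to quota is the simplest possible choice and is used here only to keep the sketch short; how much thinning each region actually tolerates is exactly what the paper's clutter model is meant to decide.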

    Original language: English (US)
    Pages (from-to): 77-89
    Number of pages: 13
    Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 3031
    State: Published - 2004

    ASJC Scopus subject areas

    • Computer Science (all)
    • Biochemistry, Genetics and Molecular Biology (all)
    • Theoretical Computer Science

    Cite this

    @article{02b96abcfead48df82ca71b63d44e371,
        title = "Quality metrics for 2D scatterplot graphics: Automatically reducing visual clutter",
        abstract = "The problem of visualizing huge amounts of data is well known in the field of Computer Graphics. Visualizing a large number of items (on the order of millions) forces almost any technique to reveal its limits in terms of expressiveness and scalability. To deal with this problem we propose a {"}feature preservation{"} approach, based on the idea of modelling the final visualization in a virtual space in order to analyze its features (e.g., absolute and relative density, clusters). Through this approach we provide a formal model to measure the visual clutter resulting from the representation of a large dataset on a physical device, obtaining figures that quantify the visualization decay and devising an automatic sampling strategy able to preserve relative densities.",
        author = "Enrico Bertini and Giuseppe Santucci",
        year = "2004",
        language = "English (US)",
        volume = "3031",
        pages = "77--89",
        journal = "Lecture Notes in Computer Science",
        issn = "0302-9743",
        publisher = "Springer Verlag",
    }

    TY - JOUR
    T1 - Quality metrics for 2D scatterplot graphics
    T2 - Automatically reducing visual clutter
    AU - Bertini, Enrico
    AU - Santucci, Giuseppe
    PY - 2004
    Y1 - 2004
    AB - The problem of visualizing huge amounts of data is well known in the field of Computer Graphics. Visualizing a large number of items (on the order of millions) forces almost any technique to reveal its limits in terms of expressiveness and scalability. To deal with this problem we propose a "feature preservation" approach, based on the idea of modelling the final visualization in a virtual space in order to analyze its features (e.g., absolute and relative density, clusters). Through this approach we provide a formal model to measure the visual clutter resulting from the representation of a large dataset on a physical device, obtaining figures that quantify the visualization decay and devising an automatic sampling strategy able to preserve relative densities.
    UR - http://www.scopus.com/inward/record.url?scp=35048892060&partnerID=8YFLogxK
    UR - http://www.scopus.com/inward/citedby.url?scp=35048892060&partnerID=8YFLogxK
    M3 - Article
    AN - SCOPUS:35048892060
    VL - 3031
    SP - 77
    EP - 89
    JO - Lecture Notes in Computer Science
    JF - Lecture Notes in Computer Science
    SN - 0302-9743
    ER -