Faster least squares approximation

Petros Drineas, Michael W. Mahoney, Shanmugavelayutham Muthukrishnan, Tamás Sarlós

    Research output: Contribution to journal › Article

    Abstract

    Least squares approximation is a technique to find an approximate solution to a system of linear equations that has no exact solution. In a typical setting, one lets n be the number of constraints and d be the number of variables, with n ≫ d. Then, existing exact methods find a solution vector in O(nd²) time. We present two randomized algorithms that provide accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms. Both of our algorithms preprocess the data with the Randomized Hadamard transform. One then uniformly randomly samples constraints and solves the smaller problem on those constraints, and the other performs a sparse random projection and solves the smaller problem on those projected coordinates. In both cases, solving the smaller problem provides relative-error approximations, and, if n is sufficiently larger than d, the approximate solution can be computed in O(nd ln d) time.
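
    For readers who want to see the shape of the sampling-based algorithm, the sketch below is a minimal NumPy/SciPy rendering of the abstract's description: pad to a power of two, apply a randomized Hadamard transform with random sign flips, uniformly sample constraints, and solve the small problem. The function name, the sample size r, and the explicit hadamard(m) matrix are illustrative assumptions, not the paper's construction or constants; the O(nd ln d) running time claimed in the paper requires applying the transform with a fast recursive (FFT-like) routine rather than forming H as a dense matrix, and the paper's precise sampling rate and rescaling are omitted here.

    import numpy as np
    from scipy.linalg import hadamard

    def sampled_least_squares(A, b, r, seed=0):
        """Approximate argmin_x ||Ax - b||_2 from r uniformly sampled rows."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        m = 1 << (n - 1).bit_length()               # pad row count to a power of two
        A_pad = np.vstack([A, np.zeros((m - n, d))])
        b_pad = np.concatenate([b, np.zeros(m - n)])
        signs = rng.choice([-1.0, 1.0], size=m)     # random diagonal sign matrix D
        H = hadamard(m) / np.sqrt(m)                # normalized Hadamard matrix (dense here, for clarity only)
        HA = H @ (signs[:, None] * A_pad)           # randomized Hadamard transform H D A
        Hb = H @ (signs * b_pad)                    # ... and H D b
        idx = rng.choice(m, size=r, replace=False)  # uniform sample of r constraints
        # Uniform rescaling of the sampled rows is omitted: it does not change the minimizer.
        return np.linalg.lstsq(HA[idx], Hb[idx], rcond=None)[0]

    # Usage on an overdetermined system with n >> d.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((1024, 10))
    b = rng.standard_normal(1024)
    x_apx = sampled_least_squares(A, b, r=200)
    x_opt = np.linalg.lstsq(A, b, rcond=None)[0]
    print(np.linalg.norm(A @ x_apx - b) / np.linalg.norm(A @ x_opt - b))  # close to 1

    The projection-based variant described in the abstract replaces the uniform sampling step with a sparse random projection applied to the transformed problem; the preprocessing serves the same purpose in both, spreading out the influence of any single constraint so that a small random subproblem is representative.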

    Original language: English (US)
    Pages (from-to): 219-249
    Number of pages: 31
    Journal: Numerische Mathematik
    Volume: 117
    Issue number: 2
    DOI: https://doi.org/10.1007/s00211-010-0331-6
    State: Published - Jan 1 2011

    ASJC Scopus subject areas

    • Computational Mathematics
    • Applied Mathematics

    Cite this

    Drineas, P., Mahoney, M. W., Muthukrishnan, S., & Sarlós, T. (2011). Faster least squares approximation. Numerische Mathematik, 117(2), 219-249. https://doi.org/10.1007/s00211-010-0331-6

