### Abstract

We present a new efficient algorithm for the search version of the approximate Closest Vector Problem with Preprocessing (CVPP). Our algorithm achieves an approximation factor of O(n/√(log n)), improving on the previous best of O(n^{1.5}) due to Lagarias, Lenstra, and Schnorr [1]. We also show, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve the problem (with the slightly worse approximation factor of O(n)). We remark that this still leaves a large gap with respect to the decisional version of CVPP, where the best known approximation factor is O(√(n/log n)) due to Aharonov and Regev [2]. To achieve these results, we show a reduction to the same problem restricted to target points that are close to the lattice, as well as a more efficient reduction to a harder problem, Bounded Distance Decoding with Preprocessing (BDDP). Combining either reduction with the previous best-known algorithm for BDDP by Liu, Lyubashevsky, and Micciancio [3] gives our main result. In the setting of CVP without preprocessing, we also give a reduction from (1+ε)γ-approximate CVP to γ-approximate CVP where the target is at distance at most (1+1/ε) times the minimum distance (the length of the shortest non-zero vector), which relies on the lattice sparsification techniques of Dadush and Kun [4]. As our final and most technical contribution, we present a substantially more efficient variant of the LLM algorithm (both in terms of run-time and amount of preprocessing advice), and via an improved analysis, we show that it can decode up to a distance proportional to the reciprocal of the smoothing parameter of the dual lattice [5]. We show that this is never smaller than the LLM decoding radius, and that it can be up to an Ω̃(√n) factor larger.
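For readers new to the problem: the abstract concerns algorithms that, given a lattice and a target point, find a nearby lattice vector. As a point of reference only (this is *not* the paper's algorithm), Babai's classical rounding-off heuristic for approximate CVP can be sketched in a few lines; the function name and the restriction to dimension 2 are choices made here for illustration.

```python
def babai_rounding(basis, target):
    """Babai's rounding-off heuristic for approximate CVP in dimension 2:
    express the target in lattice-basis coordinates, round each coordinate
    to the nearest integer, and map back to a lattice point. The quality
    of the answer depends heavily on how well-reduced the basis is."""
    (a, b), (c, d) = basis                 # two basis vectors (as rows)
    det = a * d - b * c                    # assumed non-zero (full-rank basis)
    # Coordinates (x, y) solving x*(a, b) + y*(c, d) = target
    x = (target[0] * d - target[1] * c) / det
    y = (a * target[1] - b * target[0]) / det
    kx, ky = round(x), round(y)            # snap to integer coefficients
    return (kx * a + ky * c, kx * b + ky * d)
```

For example, with the standard basis `[(1, 0), (0, 1)]` the heuristic simply rounds each coordinate of the target, so `babai_rounding([(1, 0), (0, 1)], (3.4, -1.6))` returns `(3, -2)`. For skewed bases the returned lattice point can be far from optimal, which is exactly the gap that basis reduction and the more sophisticated algorithms discussed in the paper address.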

Original language | English (US)
---|---
Title of host publication | Proceedings - IEEE 29th Conference on Computational Complexity, CCC 2014
Publisher | IEEE Computer Society
Pages | 98-109
Number of pages | 12
ISBN (Print) | 9781479936267
DOIs | https://doi.org/10.1109/CCC.2014.18
State | Published - 2014
Event | 29th Annual IEEE Conference on Computational Complexity, CCC 2014 - Vancouver, BC, Canada. Duration: Jun 11 2014 → Jun 13 2014

### Other

Other | 29th Annual IEEE Conference on Computational Complexity, CCC 2014
---|---
Country | Canada
City | Vancouver, BC
Period | 6/11/14 → 6/13/14

### Keywords

- BDD
- BDDP
- CVP
- CVPP

### ASJC Scopus subject areas

- Software
- Theoretical Computer Science
- Computational Mathematics

### Cite this

Dadush, D., Regev, O., & Stephens-Davidowitz, N. (2014). **On the closest vector problem with a distance guarantee.** In *Proceedings - IEEE 29th Conference on Computational Complexity, CCC 2014* (pp. 98-109). [6875479] IEEE Computer Society. https://doi.org/10.1109/CCC.2014.18

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
