Lullaby of Cape Cod

English blog of Ilya Razenshteyn

STOC 2018 talk of Erik Waingarten

Sign up for the new posts via the RSS feed.

At STOC 2018, Erik Waingarten (who is starting the fourth year of his PhD at Columbia) gave a beautiful talk on our joint work on nearest neighbor search in general metric spaces, which I briefly described before. Even though the talk involves some math, it is certifiably understandable by non-theorists!

Jul 17  

ICM 2018 survey

Together with Alex Andoni and Piotr Indyk, we have just uploaded to arXiv a survey on (the theory of) the high-dimensional nearest neighbor search problem (NNS), which accompanies Piotr’s talk at the upcoming International Congress of Mathematicians in Rio de Janeiro. At the same congress, Assaf Naor will give a plenary talk about dimension reduction; the corresponding paper will hopefully appear online soon.

In the survey, we touch upon both classic and new topics. Let me briefly go over the sections:

  1. In the introduction, we define and motivate the problem and put it in the historical context.
  2. We describe two classic frameworks based on dimension reduction and locality-sensitive hashing (LSH). In addition to that, we list the known LSH families (for a taste of what an LSH family looks like, see the sketch after this list).
  3. We survey some deterministic and Las Vegas data structures for the problem.
  4. We cover the recent line of work on data-dependent LSH (where we tailor hash families to the dataset at hand), which constitutes the bulk of my thesis. Besides that, we cover a beautiful and underappreciated algorithm for NNS over the $\ell_\infty$ metric.
  5. We go over recent exciting developments for the closest-pair problem (an off-line version of NNS, which makes it somewhat easier), which are based on the polynomial method and fast matrix multiplication.
  6. Finally, we talk about NNS for metrics beyond $\ell_1$, $\ell_2$ and $\ell_\infty$. There are three main tools on this front: metric embeddings (deterministic and randomized), NNS data structures for direct sums of “tractable” metric spaces, and (since very recently) randomized partitions obtained via estimates on nonlinear spectral gaps (featured in the previous post).
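
To give a taste of item 2, here is a minimal sketch of one classic LSH family (Charikar’s random-hyperplane hashing for the angular distance). The code, function names, and parameters below are my own illustration, not taken from the survey.

```python
import numpy as np

# A minimal sketch of random-hyperplane LSH for the angular (cosine) distance:
# each hash bit records on which side of a random hyperplane the point lies,
# so nearby points (small angle) agree on most bits.

def sample_hash(dim, num_bits, rng):
    """Sample a hash function: num_bits random hyperplanes through the origin."""
    return rng.standard_normal((num_bits, dim))

def hash_point(planes, x):
    """Hash x to a bit string given by the signs of its projections."""
    return (planes @ x > 0).astype(int)

rng = np.random.default_rng(0)
planes = sample_hash(dim=100, num_bits=32, rng=rng)

x = rng.standard_normal(100)
near = x + 0.05 * rng.standard_normal(100)  # a slight perturbation of x
far = rng.standard_normal(100)              # an unrelated random point

agreement = lambda a, b: np.mean(a == b)
print(agreement(hash_point(planes, x), hash_point(planes, near)))  # close to 1
print(agreement(hash_point(planes, x), hash_point(planes, far)))   # around 1/2
```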

We are looking forward to your feedback!

2018  

STOC 2018 talks and posters

I’m heading out to Los Angeles to attend STOC 2018 (and, more generally, TheoryFest). My co-authors will present two papers, which I was lucky to be involved with:

Data-Dependent Hashing via Nonlinear Spectral Gaps (joint with Alex Andoni, Assaf Naor, Sasho Nikolov and Erik Waingarten). Paper, poster. In this paper, we develop a new technique for randomized partitioning of general metric spaces, which is based on estimates for nonlinear spectral gaps (see here for a thorough overview). These random partitions imply very efficient data structures in the cell-probe model for the high-dimensional approximate nearest neighbor search problem (ANN) (see here for a survey) over general metric spaces. With more work, we can get truly time-efficient data structures for several cases of interest: most notably, in a forthcoming paper, we obtain an ANN algorithm for a general $d$-dimensional normed space with approximation $2^{\widetilde{O}(\sqrt{\log d})}$ (the previous best result was the trivial $O(\sqrt{d})$ bound, which readily follows from the classic John’s theorem).

This paper is the culmination of a line of work (“nearest neighbor search for general distances”) that I have been involved with for three years (see also our previous paper, where we essentially settle the case of symmetric normed spaces). The connection with spectral gaps that we discovered is, in my opinion, a true gem, which we had been overlooking for years. I will blog about it in greater detail later.

Nonlinear Dimension Reduction via Outer Bi-Lipschitz Extensions (joint with Sepideh Mahabadi, Konstantin Makarychev and Yury Makarychev). Paper, poster. We develop a new geometric primitive, an outer bi-Lipschitz extension, and use it to solve two open problems raised by Elkin, Filtser and Neiman (posed here and here). In both cases, we obtain new dimension reduction results, which are inherently nonlinear.

2018  

Beyond John’s ellipsoid

Here is the video of my talk from the Learning and Optimization workshop, which was a part of Microsoft TechFest.

This covers (parts of) our very fresh work (joint with Alex Andoni, Assaf Naor, Sasho Nikolov and Erik Waingarten), which is currently under submission. In short, we show the first Approximate Nearest Neighbor (ANN) search1 algorithm for general $d$-dimensional normed spaces with approximation $d^{o(1)}$. The only previously known bound was the trivial $O(\sqrt{d})$, which immediately follows from John’s theorem and any good ANN data structure for the Euclidean distance.
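
For readers who want to see where the trivial $O(\sqrt{d})$ bound comes from, here is a sketch of the standard argument (my paraphrase, not taken from the talk; the notation $\|\cdot\|_K$ and $\|\cdot\|_E$ is mine):

```latex
% John's theorem: if K is the (symmetric) unit ball of a norm \|\cdot\|_K on
% \mathbb{R}^d, there is an ellipsoid E, i.e. a Euclidean norm \|\cdot\|_E, with
\[
  E \subseteq K \subseteq \sqrt{d}\,E
  \qquad\Longleftrightarrow\qquad
  \frac{\|x\|_E}{\sqrt{d}} \;\le\; \|x\|_K \;\le\; \|x\|_E
  \quad \text{for all } x \in \mathbb{R}^d.
\]
% Hence a c-ANN data structure for \|\cdot\|_E answers \|\cdot\|_K-queries with
% approximation c\sqrt{d}: the nearest neighbor at \|\cdot\|_K-distance r lies
% at \|\cdot\|_E-distance at most \sqrt{d}\,r, so the returned point is at
% \|\cdot\|_E-distance at most c\sqrt{d}\,r, hence also at \|\cdot\|_K-distance
% at most c\sqrt{d}\,r.
```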

A couple of minor glitches:

  1. The Wasserstein-$p$ distance is not a norm for $p > 1$;
  2. In the last couple of slides, I started writing $D(\varepsilon)$ instead of $R(\varepsilon)$, but it is meant to be the same.

1 For some flavor of the problem, see this classic, or – in case you are more adventurous – my PhD thesis.

2018  

Test your intuition

A new job is a good reason to start a (new) blog. The first post is in the spirit of the series by Gil Kalai.

Let $K$ be a convex body in $\mathbb{R}^d$ symmetric around the origin. Let $\mathcal{D}_1$ be a distribution supported on $K$, and let $\mathcal{D}_2$ be a distribution supported outside of $2K$. Can one always find a direction $v$ such that, given the projections onto $v$ of $\mathrm{poly}(d)$ i.i.d. samples from $\mathcal{D}_i$, one can recover $i$ with high probability?

For example, suppose that $K = [-1, 1]^d$ is a cube. Then, each point in the support of $\mathcal{D}_2$ has at least one coordinate whose absolute value is larger than two. It means that there exists a coordinate which, with probability at least $1/d$ over $\mathcal{D}_2$, falls outside of $[-2, 2]$. But then, using this coordinate, one can distinguish $\mathcal{D}_1$ from $\mathcal{D}_2$ using only $O(d)$ samples.
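
Out of curiosity, here is a quick simulation of the cube example; the particular $\mathcal{D}_1$, $\mathcal{D}_2$, and the threshold test below are my own illustrative choices, not part of the puzzle.

```python
import numpy as np

# Simulating the cube example: D1 is uniform on K = [-1, 1]^d, while D2 pushes
# one random coordinate of each point past 2, so that D2 lives outside 2K.

rng = np.random.default_rng(0)
d = 50

def sample_D1(n):
    return rng.uniform(-1.0, 1.0, size=(n, d))              # supported on K

def sample_D2(n):
    pts = rng.uniform(-1.0, 1.0, size=(n, d))
    idx = rng.integers(0, d, size=n)                         # one large coordinate per point
    pts[np.arange(n), idx] = rng.uniform(2.5, 3.0, size=n)   # outside 2K
    return pts

def guess(samples, coord=0):
    """Declare D2 iff some sample's projection onto e_coord falls outside [-2, 2]."""
    return 2 if np.any(np.abs(samples[:, coord]) > 2) else 1

n = 10 * d  # O(d) samples
trials = 200
correct = sum(guess(sample_D1(n)) == 1 for _ in range(trials)) \
        + sum(guess(sample_D2(n)) == 2 for _ in range(trials))
print(correct / (2 * trials))  # should be close to 1
```

By symmetry of this particular $\mathcal{D}_2$, every coordinate direction works equally well, which is why the test simply projects onto the first one.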

2018