Samson Zhou
Hi, I'm a Postdoctoral Researcher at Carnegie Mellon University, hosted by David Woodruff. My research lies at the intersection of theoretical computer science, data science, and machine learning. In particular, I am currently interested in numerical linear algebra, streaming algorithms, and high-dimensional probability. Here are some of my favorite links. Profiles: [dblp] [Scholar]. Feel free to reach me at: samsonzhou AT gmail DOT com
We introduce \emph{difference estimators} for data stream computation, which provide approximations to $F(v)-F(u)$ for frequency vectors $v\succeq u$ and a given function $F$. We show how to use such estimators to carefully trade error for memory in an iterative manner. The function $F$ is generally non-linear, and we give the first difference estimators for the frequency moments $F_p$ for $p\in[0,2]$, as well as for integers $p>2$. Using these, we resolve a number of central open questions in the adversarially robust streaming and sliding window models: \begin{enumerate} \item For adversarially robust streams, we obtain a $(1+\epsilon)$-approximation to $F_p$ using $\tilde{\mathcal{O}}\left(\frac{\log n}{\epsilon^2}\right)$ bits of space for $p\in[0,2]$, and using $\tilde{\mathcal{O}}\left(\frac{1}{\epsilon^2}n^{1-2/p}\right)$ bits of space for integers $p>2$. We also obtain an adversarially robust algorithm for the $L_2$-heavy hitters problem using $\mathcal{O}\left(\frac{\log n}{\epsilon^2}\right)$ bits of space. Our bounds are optimal up to $\poly(\log\log n + \log(1/\epsilon))$ factors, and improve the $\frac{1}{\epsilon^3}$ dependence of Ben-Eliezer \emph{et al.} (PODS 2020, best paper award) and the $\frac{1}{\epsilon^{2.5}}$ dependence of Hassidim \emph{et al.} (NeurIPS 2020, oral presentation). \item For sliding windows, we obtain a $(1+\epsilon)$-approximation to $F_p$ using $\tilde{\mathcal{O}}\left(\frac{\log^2 n}{\epsilon^2}\right)$ bits of space for $p\in(0,2]$, resolving a longstanding question of Braverman and Ostrovsky (FOCS 2007). For example, for $p = 2$ we improve the dependence on $\epsilon$ from $\frac{1}{\epsilon^4}$ to an optimal $\frac{1}{\epsilon^2}$. \end{enumerate} For both models, our dependence on $\epsilon$ shows, up to $\log\frac{1}{\epsilon}$ factors, that there is no overhead over the standard insertion-only data stream model for any of these problems.
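To make the quantity above concrete, here is a toy Python sketch (not the paper's estimator, which must use sublinear memory) that computes the exact value $F_2(v)-F_2(u)$ a difference estimator would approximate, where $u$ is the frequency vector of a prefix of an insertion-only stream and $v$ is the frequency vector of the full stream.

```python
from collections import Counter

def f2(freq):
    """Second frequency moment F_2 of a frequency vector."""
    return sum(c * c for c in freq.values())

def f2_difference(stream, split):
    """Exact F_2(v) - F_2(u), where u counts the first `split` updates and v
    counts the whole insertion-only stream (so v dominates u coordinate-wise)."""
    u = Counter(stream[:split])
    v = Counter(stream)
    return f2(v) - f2(u)

stream = ["a", "b", "a", "c", "b", "a", "d"]
print(f2_difference(stream, split=4))  # 15 - 6 = 9
```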
We present three provably accurate, polynomial time, approximation algorithms for the Sparse Principal Component Analysis (SPCA) problem, without imposing any restrictive assumptions on the input covariance matrix. The first algorithm is based on randomized matrix multiplication; the second algorithm is based on a novel deterministic thresholding scheme; and the third algorithm is based on a semidefinite programming relaxation of SPCA. All algorithms come with provable guarantees and run in low-degree polynomial time. Our empirical evaluations confirm our theoretical findings.
A proof of sequential work allows a prover to convince a resource-bounded verifier that the prover invested a substantial amount of sequential time to perform some underlying computation. Proofs of sequential work have many applications including time-stamping, blockchain design, and universally verifiable CPU benchmarks. Mahmoody, Moran and Vadhan (ITCS 2013) gave the first construction of proofs of sequential work in the random oracle model, though the construction relied on expensive depth-robust graphs. In a recent breakthrough, Cohen and Pietrzak (EUROCRYPT 2018) gave a more efficient construction that does not require depth-robust graphs. In each of these constructions, the prover commits to a labeling of a directed acyclic graph $G$ with $N$ nodes and the verifier audits the prover by checking that a small subset of labels are locally consistent, e.g., $L_v = H(L_{v_1},\ldots,L_{v_\delta})$, where $v_1,\ldots,v_\delta$ denote the parents of node $v$. Provided that the graph $G$ has certain structural properties (e.g., depth-robustness), the prover must produce a long $\H$-sequence to pass the audit with non-negligible probability. An $\H$-sequence $x_0,x_1,\ldots,x_T$ has the property that $H(x_i)$ is a substring of $x_{i+1}$ for each $i$, i.e., we can find strings $a_i,b_i$ such that $x_{i+1} = a_i \cdot H(x_i) \cdot b_i$. In the parallel random oracle model, it is straightforward to argue that any attacker running in sequential time $T-1$ will fail to produce an $\H$-sequence of length $T$ except with negligible probability -- even if the attacker submits large batches of random oracle queries in each round. In this paper, we introduce the parallel quantum random oracle model and prove that any quantum attacker running in sequential time $T-1$ will fail to produce an $\H$-sequence except with negligible probability -- even if the attacker submits a large batch of quantum queries in each round. The proof is substantially more challenging and highlights the power of Zhandry's recent compressed oracle technique (CRYPTO 2019).
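A small sketch of the $\H$-sequence condition defined above, using SHA-256 as a stand-in for the random oracle $H$ (an assumption for illustration only):

```python
import hashlib

def H(x: bytes) -> bytes:
    # SHA-256 as a stand-in for the random oracle H (illustration only).
    return hashlib.sha256(x).digest()

def is_H_sequence(xs):
    """Check that H(x_i) appears as a contiguous substring of x_{i+1},
    i.e., x_{i+1} = a_i . H(x_i) . b_i for some strings a_i, b_i."""
    return all(H(xs[i]) in xs[i + 1] for i in range(len(xs) - 1))

# Build a valid H-sequence of length 4 and verify it.
xs = [b"seed"]
for _ in range(3):
    xs.append(b"prefix-" + H(xs[-1]) + b"-suffix")
print(is_H_sequence(xs))        # True
print(is_H_sequence(xs[::-1]))  # False (except with negligible probability)
```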
We consider the problem of learning a latent $k$-vertex simplex $K\in\mathbb{R}^d$, given access to $\AA\in\mathbb{R}^{d\times n}$, which can be viewed as a data matrix with $n$ points that are obtained by randomly perturbing latent points in the simplex $K$ (potentially beyond $K$). A large class of latent variable models, such as adversarial clustering, mixed membership stochastic block models, and topic models can be cast as learning a latent simplex. Bhattacharyya and Kannan (SODA 2020) give an algorithm for learning such a latent simplex in time roughly $O(k\cdot\nnz(\AA))$, where $\nnz(\AA)$ is the number of non-zeros in $\AA$. We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $\AA$, which holds in many of these applications. Further, we show this assumption is necessary, as otherwise an algorithm for learning a latent simplex would imply an algorithmic breakthrough for spectral low rank approximation. At a high level, Bhattacharyya and Kannan provide an adaptive algorithm that makes $k$ matrix-vector product queries to $\AA$ and each query is a function of all queries preceding it. Since each matrix-vector product requires $\nnz(\AA)$ time, their overall running time appears unavoidable. Instead, we obtain a low-rank approximation to $\AA$ in input-sparsity time and show that the column space thus obtained has small $\sin\Theta$ (angular) distance to the right top-$k$ singular space of $\AA$. Our algorithm then selects $k$ points in the low-rank subspace with the largest inner product (in absolute value) with $k$ carefully chosen random vectors. By working in the low-rank subspace, we avoid reading the entire matrix in each iteration and thus circumvent the $\Theta(k\cdot\nnz(\AA))$ running time.
We consider the \emph{sensitivity} of algorithms for the maximum matching problem against edge and vertex modifications. When an algorithm $A$ for the maximum matching problem is deterministic, the sensitivity of $A$ on $G$ is defined as $\max_{e \in E(G)}|A(G) \triangle A(G - e)|$, where $G-e$ is the graph obtained from $G$ by removing an edge $e \in E(G)$ and $\triangle$ denotes the symmetric difference. When $A$ is randomized, the sensitivity is defined as $\max_{e \in E(G)}d_{\mathrm{EM}}(A(G),A(G-e))$, where $d_{\mathrm{EM}}(\cdot,\cdot)$ denotes the earth mover's distance between two distributions. Thus the sensitivity measures the difference between the output of an algorithm after the input is slightly perturbed. Algorithms with low sensitivity, or \emph{stable} algorithms, are desirable because they are robust to edge failure or attack. In this work, we show a randomized $(1-\epsilon)$-approximation algorithm with \emph{worst-case} sensitivity $O_{\epsilon}(1)$, which substantially improves upon the $(1-\epsilon)$-approximation algorithm of Varma and Yoshida (arXiv 2020) that obtains \emph{average} sensitivity $n^{O(1/(1+\epsilon^2))}$, and we show a deterministic $1/2$-approximation algorithm with sensitivity $\exp(O(\log^*n))$ for bounded-degree graphs. We then show that any deterministic constant-factor approximation algorithm must have sensitivity $\Omega(\log^* n)$. Our results imply that randomized algorithms are strictly more powerful than deterministic ones in that the former can achieve sensitivity independent of $n$ whereas the latter cannot. We also show analogous results for vertex sensitivity, where we remove a vertex instead of an edge. As an application of our results, we give an algorithm for the online maximum matching problem with $O_{\epsilon}(n)$ total replacements in the vertex-arrival model. By comparison, Bernstein~et~al.~(J. ACM 2019) gave an online algorithm that always outputs the maximum matching, but only for bipartite graphs and with $O(n\log n)$ total replacements. Finally, we introduce the notion of normalized weighted sensitivity, a natural generalization of sensitivity that accounts for the weights of deleted edges. For a graph with weight function $w$, the normalized weighted sensitivity is defined as the maximum, over edges $e$, of the total weight of the edges in the symmetric difference of the algorithm's outputs, normalized by the weight of the deleted edge, i.e., $\max_{e \in E(G)}\frac{1}{w(e)}w\left(A(G) \triangle A(G - e)\right)$. Hence the normalized weighted sensitivity measures the weighted difference between the output of an algorithm after the input is slightly perturbed, normalized by the weight of the perturbation. We show that if all edges in a graph have polynomially bounded weight, then given a trade-off parameter $\alpha>2$, there exists an algorithm that outputs a $\frac{1}{4\alpha}$-approximation to the maximum weighted matching in $O(m\log_{\alpha} n)$ time, with normalized weighted sensitivity $O(1)$.
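For intuition, the (unweighted, edge) sensitivity of a deterministic algorithm can be computed by brute force on small graphs; the sketch below does this for a simple greedy maximal-matching rule, which is my own illustrative choice and not one of the paper's algorithms.

```python
def greedy_matching(edges):
    """Deterministic toy algorithm A: scan edges in the given order and keep
    every edge whose endpoints are both still unmatched (a maximal matching)."""
    matched, matching = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.add((u, v))
            matched.update((u, v))
    return matching

def edge_sensitivity(edges, algo=greedy_matching):
    """max over e of |A(G) symmetric-difference A(G - e)| for deterministic A."""
    base = algo(edges)
    return max(len(base ^ algo([f for f in edges if f != e])) for e in edges)

path = [(1, 2), (2, 3), (3, 4), (4, 5)]
print(edge_sensitivity(path))  # 4: deleting (1, 2) flips the whole greedy matching
```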
We initiate the study of numerical linear algebra in the sliding window model, where only the most recent $W$ updates in a stream form the underlying data set. Although many existing algorithms in the sliding window model use or borrow elements from the smooth histogram framework (Braverman and Ostrovsky, FOCS 2007), we show that many interesting linear-algebraic problems, including spectral and vector induced matrix norms, generalized regression, and low-rank approximation, are not amenable to this approach in the row-arrival model. To overcome this challenge, we first introduce a unified row-sampling based framework that gives \emph{randomized} algorithms for spectral approximation, low-rank approximation/projection-cost preservation, and $\ell_1$-subspace embeddings in the sliding window model, which often use nearly optimal space and achieve nearly input-sparsity runtime. Our algorithms are based on ``reverse online'' versions of offline sampling distributions such as (ridge) leverage scores, $\ell_1$ sensitivities, and Lewis weights to quantify both the importance and the recency of a row; our structural results on these distributions may be of independent interest for future algorithmic design. Although our techniques initially address numerical linear algebra in the sliding window model, our row-sampling framework rather surprisingly implies connections to the well-studied online model; our structural results also give the first sample optimal (up to lower order terms) online algorithm for low-rank approximation/projection-cost preservation. Using this powerful primitive, we give online algorithms for column/row subset selection and principal component analysis that resolve the main open question of Bhaskara~\etal\,(FOCS 2019). We also give the first online algorithm for $\ell_1$-subspace embeddings. We further formalize the connection between the online model and the sliding window model by introducing an \emph{additional} unified framework for \emph{deterministic} algorithms using a merge and reduce paradigm and the concept of online coresets, which we define as a weighted subset of rows of the input matrix that can be used to compute a good approximation to some given function on all of its prefixes. Our sampling-based algorithms in the row-arrival online model yield online coresets, giving deterministic algorithms for spectral approximation, low-rank approximation/projection-cost preservation, and $\ell_1$-subspace embeddings in the sliding window model that use nearly optimal space.
Constructions of locally decodable codes (\LDCs) have one of two undesirable properties: low rate or high locality (polynomial in the length of the message). In settings where the encoder/decoder have already exchanged cryptographic keys and the channel is a probabilistic polynomial time (\PPT) algorithm, it is possible to circumvent these barriers and design \LDCs\ with constant rate and small locality. However, the assumption that the encoder/decoder have exchanged cryptographic keys is often prohibitive. We thus consider the problem of designing explicit and efficient \LDCs\ in settings where the channel is {\em slightly} more constrained than the encoder/decoder with respect to some resource, e.g., space or (sequential) time. Given an explicit function $f$ that the channel cannot compute, we show how the encoder can transmit a random secret key to the local decoder using $f(\cdot)$ and a random oracle $\oracleH$. This allows us to bootstrap from the private-key \LDC\ construction of Ostrovsky, Pandey and Sahai (ICALP, 2007), thereby answering an open question posed by Guruswami and Smith (FOCS 2010) of whether such bootstrapping techniques may apply to \LDCs\ in weaker channel models than just \PPT\ algorithms. Specifically, in the random oracle model we show how to construct explicit constant-rate \LDCs\ with optimal locality of $\polylog$ in the security parameter against various resource-constrained channels.
Adaptive sampling is a useful algorithmic tool for data summarization problems in the classical centralized setting, where the entire dataset is available to the single processor performing the computation. Adaptive sampling repeatedly selects rows of an underlying matrix $\A\in\mathbb{R}^{n\times d}$, where $n\gg d$, with probabilities proportional to their distances to the subspace of the previously selected rows. Intuitively, adaptive sampling seems to be limited to trivial multi-pass algorithms in the streaming model of computation due to its inherently sequential nature of assigning sampling probabilities to each row only after the previous iteration is completed. Surprisingly, we show this is not the case by giving the first one-pass algorithms for adaptive sampling on turnstile streams, using space $\poly(d,k,\log n)$, where $k$ is the number of adaptive sampling rounds to be performed. Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve the state of the art or have previously been studied only in the more relaxed row-arrival model. We give the first relative-error algorithm for column subset selection on turnstile streams. We show our adaptive sampling algorithm also gives the first relative-error algorithm for subspace approximation on turnstile streams that returns $k$ noisy rows of $\A$. The quality of the output can be improved to a $(1+\eps)$-approximation at the tradeoff of a bicriteria algorithm that outputs a larger number of rows. We then give the first algorithm for projective clustering on turnstile streams that uses space sublinear in $n$. In fact, we use space $\poly\left(d,k,s,\frac{1}{\eps},\log n\right)$ to output a $(1+\eps)$-approximation, where $s$ is the number of $k$-dimensional subspaces. Our adaptive sampling primitive also provides the first algorithm for volume maximization on turnstile streams. We complement our volume maximization algorithmic results with lower bounds that are tight up to lower order terms, even for multi-pass algorithms. By a similar construction, we also obtain lower bounds for volume maximization in the row-arrival model, which we match with competitive upper bounds.
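The following sketch shows the classical centralized adaptive sampling loop that the paper simulates in one pass over a turnstile stream; it is only the offline baseline, written with NumPy.

```python
import numpy as np

def adaptive_sampling(A, k, seed=0):
    """Offline adaptive sampling: in each of k rounds, pick a row of A with
    probability proportional to its squared distance to the span of the rows
    selected so far, then project that span out of every row."""
    rng = np.random.default_rng(seed)
    n, _ = A.shape
    selected, residual = [], A.copy()
    for _ in range(k):
        dists = np.sum(residual ** 2, axis=1)
        i = rng.choice(n, p=dists / dists.sum())
        selected.append(i)
        Q, _ = np.linalg.qr(A[selected].T)   # orthonormal basis for the span of chosen rows
        residual = A - (A @ Q) @ Q.T         # distances are now to that subspace
    return selected

rng = np.random.default_rng(1)
A = np.vstack([np.eye(3), 2 * np.eye(3)]) + 0.01 * rng.standard_normal((6, 3))
print(adaptive_sampling(A, k=2))
```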
The problem of selecting a small-size representative summary of a large dataset is a cornerstone of machine learning, optimization and data science. Motivated by applications to recommendation systems and other scenarios with query-limited access to vast amounts of data, we propose a new rigorous algorithmic framework for a standard formulation of this problem as submodular maximization subject to a linear (knapsack) constraint. Our framework is based on augmenting all partial Greedy solutions with the best additional item. It can be instantiated with negligible overhead in any model of computation that allows the classic Greedy algorithm and its variants to be implemented. We give such instantiations in the offline (Greedy+Max), multi-pass streaming (Sieve+Max) and distributed (Distributed+Max) settings. Our algorithms give a $(1/2-\epsilon)$-approximation with most other key parameters of interest being near-optimal. Our analysis is based on a new set of first-order linear differential inequalities and their robust approximate versions. Experiments on typical datasets (movie recommendations, influence maximization) confirm scalability and high quality of solutions obtained via our framework. Instance-specific approximations are typically in the 0.6-0.7 range and frequently beat even the $(1-1/e) \approx 0.63$ worst-case barrier for polynomial-time algorithms.
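A minimal sketch of the Greedy+Max idea on a toy weighted-coverage objective (my own example instance; the streaming and distributed instantiations in the paper are more involved): every partial Greedy solution is augmented with the best single feasible item, and the best augmented solution is returned.

```python
def greedy_plus_max(items, f, cost, budget):
    """Greedy+Max sketch: grow a Greedy solution by marginal-gain density and
    augment every partial Greedy solution with its best single feasible extra
    item; return the best augmented solution encountered."""
    def augment(S, spent):
        cands = [S | {x} for x in items if x not in S and spent + cost[x] <= budget]
        return max(cands, key=f, default=S)

    S, spent = set(), 0.0
    best = augment(S, spent)
    while True:
        feasible = [x for x in items if x not in S and spent + cost[x] <= budget]
        if not feasible:
            return best
        x = max(feasible, key=lambda y: (f(S | {y}) - f(S)) / cost[y])
        S, spent = S | {x}, spent + cost[x]
        best = max(best, augment(S, spent), key=f)

# Toy monotone submodular objective: weighted coverage.
universe = {1: 4.0, 2: 3.0, 3: 2.0, 4: 1.0}
sets = {"a": {1}, "b": {2, 3}, "c": {3, 4}, "d": {1, 2, 3, 4}}
cost = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 2.5}

def coverage(S):
    covered = set().union(*(sets[x] for x in S)) if S else set()
    return sum(universe[e] for e in covered)

print(greedy_plus_max(list(sets), coverage, cost, budget=2.5))  # {'d'}; plain Greedy ends at {'a', 'b'}
```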
Previous work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy. Model compression became a central research topic, as it is crucial for deployment of neural networks on devices with limited computational and memory resources. The majority of the compression methods are based on heuristics and offer no worst-case guarantees on the trade-off between the compression rate and the approximation error for an arbitrarily new sample. We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample. Our method is based on the coreset framework, which finds a small weighted subset of points that provably approximates the original inputs. Specifically, we approximate the output of a layer of neurons by a coreset of neurons in the previous layer and discard the rest. We apply this framework in a layer-by-layer fashion from the top to the bottom. Unlike previous works, our coreset is data independent, meaning that it provably guarantees the accuracy of the function for any input $x\in \mathbb{R}^d$, including an adversarial one. We demonstrate the effectiveness of our method on popular network architectures. In particular, our coresets yield 90\% compression of the LeNet-300-100 architecture on MNIST while improving classification accuracy.
The cumulative pebbling complexity of a directed acyclic graph G is defined as cc(G) = \min_P \sum_i |P_i|, where the minimum is taken over all legal (parallel) black pebblings of G and |P_i| denotes the number of pebbles on the graph during round i. Intuitively, cc(G) captures the amortized Space-Time complexity of pebbling m copies of G in parallel. The cumulative pebbling complexity of a graph G is of particular interest in the field of cryptography as cc(G) is tightly related to the amortized Area-Time complexity of the data-independent memory hard function (iMHF) f_{G,H} (Alwen and Serbinenko, STOC 2015) defined using a constant indegree directed acyclic graph (DAG) G and a random oracle H. A secure iMHF should have amortized Space-Time complexity as high as possible, e.g., to deter a brute-force password attacker who wants to find x such that f_{G,H}(x) = h. Thus, to analyze the (in)security of a candidate iMHF f_{G,H}, it is crucial to estimate the value cc(G), but currently, upper and lower bounds for leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou recently showed that it is NP-Hard to compute cc(G), but their techniques do not even rule out an efficient (1+epsilon)-approximation algorithm for any constant epsilon>0. We show that for any constant c > 0, it is Unique Games hard to approximate cc(G) to within a factor of c. Along the way, we show the hardness of approximation of the DAG Vertex Deletion problem on DAGs of constant indegree. Namely, we show that for any k,epsilon>0, given a DAG G with N nodes and constant indegree, it is Unique Games hard to distinguish between the case that G is (e_1, d_1)-reducible with e_1=N^{1/(1+2epsilon)}/k and d_1=k N^{2epsilon/(1+2epsilon)} and the case that G is (e_2, d_2)-depth-robust with e_2 = (1-epsilon)k e_1 and d_2= 0.9 N^{(1+epsilon)/(1+2epsilon)}, which may be of independent interest. Our result generalizes a result of Svensson, who proved an analogous result for DAGs with indegree O(N).
Memory hard functions (MHFs) are an important cryptographic primitive that are used to design egalitarian proofs of work and in the construction of moderately expensive key-derivation functions resistant to brute-force attacks. Broadly speaking, MHFs can be divided into two categories: data-dependent memory hard functions (dMHFs) and data-independent memory hard functions (iMHFs). iMHFs are resistant to certain side-channel attacks as the memory access pattern induced by the honest evaluation algorithm is independent of the potentially sensitive input, e.g., a password. While dMHFs are potentially vulnerable to side-channel attacks (the induced memory access pattern might leak useful information to a brute-force attacker), they can achieve higher cumulative memory complexity (CMC) than an iMHF. In particular, any iMHF that can be evaluated in $N$ steps on a sequential machine has CMC {\em at most} $\O{\frac{N^2\log\log N}{\log N}}$. By contrast, the dMHF scrypt achieves maximal CMC $\Omega(N^2)$ --- though the CMC of scrypt would be reduced to just $\O{N}$ after a side-channel attack. In this paper, we introduce the notion of computationally data-independent memory hard functions (ciMHFs). Intuitively, we require that the memory access pattern induced by the (randomized) ciMHF evaluation algorithm appears to be independent from the standpoint of a computationally bounded eavesdropping attacker --- even if the attacker selects the initial input. We then ask whether it is possible to circumvent the known upper bound for iMHFs and build a ciMHF with CMC $\Omega(N^2)$. Surprisingly, we answer the question in the affirmative when the ciMHF evaluation algorithm is executed on a two-tiered memory architecture (RAM/Cache). We introduce the notion of a $k$-restricted dynamic graph to quantify the continuum between unrestricted dMHFs $(k=N)$ and iMHFs ($k=1$). For any $\eps > 0$ we show how to construct a $k$-restricted dynamic graph with $k=\Omega(N^{1-\eps})$ that provably achieves maximum cumulative pebbling cost $\Omega(N^2)$. We can use $k$-restricted dynamic graphs to build a ciMHF provided that the cache is large enough to hold $k$ hash outputs and the dynamic graph satisfies a certain property that we call ``amenable to shuffling.'' In particular, we prove that the induced memory access pattern is indistinguishable to a polynomial time attacker who can monitor the locations of read/write requests to RAM, but not cache. We also show that when $k=o\left(N^{1/\log\log N}\right)$, then any $k$-restricted graph with constant indegree has cumulative pebbling cost $o(N^2)$. Our results almost completely characterize the spectrum of $k$-restricted dynamic graphs.
Network performance problems are notoriously difficult to diagnose. Prior profiling systems collect performance statistics by keeping information about each network flow, but maintaining per-flow state is not scalable on resource-constrained NIC and switch hardware. Instead, we propose sketch-based performance monitoring using memory that is sublinear in the number of flows. Existing sketches estimate metrics based on flow sizes. In contrast, performance monitoring typically requires combining information across pairs of packets, such as matching a data packet with its acknowledgment to compute a round-trip time. We define a new class of \emph{lean} algorithms that use memory sublinear in both the size of input data and the number of flows. We then introduce lean algorithms for a set of important statistics, such as identifying flows with high latency, loss, out-of-order, or retransmitted packets. We implement prototypes of our lean algorithms on a commodity programmable switch using the P4 language. Our experiments show that lean algorithms detect $\sim$82\% of top 100 problematic flows among real-world packet traces using just 40KB memory.
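As a toy illustration of combining information across pairs of packets in small memory (not the paper's P4 lean algorithms), the sketch below pairs a data packet with its ACK through a fixed-size hash-indexed table to produce RTT samples; the class name, slot count, and flow-key format are hypothetical choices made only for this example.

```python
import zlib

class ToyRTTMonitor:
    """Pair a data packet with its ACK using a fixed-size table indexed by a
    hash of (flow, expected ack number); the table stores send timestamps, so
    memory does not grow with the number of flows.  Hash collisions simply
    overwrite older entries, so some RTT samples are lost -- this is only a
    toy illustration of the packet-pairing idea, not the paper's algorithms."""

    def __init__(self, num_slots=1024):
        self.slots = [None] * num_slots

    def _slot(self, flow, ack):
        return zlib.crc32(f"{flow}:{ack}".encode()) % len(self.slots)

    def on_data(self, flow, seq, size, now):
        self.slots[self._slot(flow, seq + size)] = (flow, seq + size, now)

    def on_ack(self, flow, ack, now):
        entry = self.slots[self._slot(flow, ack)]
        if entry and entry[:2] == (flow, ack):
            return now - entry[2]  # one RTT sample for this flow
        return None

mon = ToyRTTMonitor()
mon.on_data("10.0.0.1->10.0.0.2:443", seq=1000, size=1460, now=0.000)
print(mon.on_ack("10.0.0.1->10.0.0.2:443", ack=2460, now=0.042))  # 0.042
```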
A function $f : \F_2^n \to \R$ is \emph{$s$-sparse} if it has at most $s$ non-zero Fourier coefficients. Motivated by applications to fast sparse Fourier transforms over $\F_2^n$, we study efficient algorithms for the problem of approximating the $\ell_2$-distance from a given function to the closest $s$-sparse function. While previous works (e.g., Gopalan \emph{et al.} SICOMP 2011) study the problem of distinguishing $s$-sparse functions from those that are far from $s$-sparse under Hamming distance, to the best of our knowledge no prior work has explicitly focused on the more general problem of distance estimation in the $\ell_2$ setting, which is particularly well-motivated for noisy Fourier spectra. Given the focus on efficiency, our main result is an algorithm that solves this problem with query complexity $\O{s}$ for constant accuracy and error parameters, which is only quadratically worse than applicable lower bounds.
We study the problem of constructing a linear sketch of minimum dimension that allows approximation of a given real-valued function $f \colon \ftwo^n \rightarrow \mathbb R$ with small expected squared error. We develop a general theory of linear sketching for such functions through which we analyze their dimension for most commonly studied types of valuation functions: additive, budget-additive, coverage, $\alpha$-Lipschitz submodular and matroid rank functions. This gives a characterization of how many bits of information have to be stored about the input $x$ so that one can compute $f$ under additive updates to its coordinates. Our results are tight in most cases and we also give extensions to the distributional version of the problem where the input $x \in \ftwo^n$ is generated uniformly at random. Using known connections with dynamic streaming algorithms, both upper and lower bounds on dimension obtained in our work extend to the space complexity of algorithms evaluating $f(x)$ under long sequences of additive updates to the input $x$ presented as a stream. Similar results hold for simultaneous communication in a distributed setting.
In the time-decay model for data streams, elements of an underlying data set arrive sequentially, with the recently arrived elements being more important. A common approach for handling large data sets is to maintain a \emph{coreset}, a succinct summary of the processed data that allows approximate recovery of a predetermined query. We provide a general framework that takes any offline coreset construction and gives a time-decay coreset for polynomial time-decay functions. We also consider the exponential time-decay model for $k$-median clustering, where we provide a constant-factor approximation algorithm that utilizes the online facility location algorithm. Our algorithm stores $O(k\log(h\Delta)+h)$ points, where $h$ is the half-life of the decay function and $\Delta$ is the aspect ratio of the dataset. Our techniques extend to $k$-means clustering and $M$-estimators as well.
Data-Independent Memory-hard functions (iMHFs) are a key cryptographic primitive underlying the design of moderately expensive password hashing algorithms and egalitarian proofs of work that are resistant to side-channel attacks. Several goals for MHFs have been proposed, including bandwidth hardness, space-time (ST) complexity, amortized area-time (aAT) complexity and sustained space complexity. An iMHF can be specified using a directed acyclic graph (DAG) $G$ with $N=2^n$ nodes and low indegree, and the cost (aAT, ST, etc.) to evaluate the iMHF can be analyzed using pebbling games. In particular, given a parameter $N$ (e.g., maximum acceptable running time) we would like to design the DAG $G$ to have maximum possible pebbling cost, i.e., to ensure that the iMHF is as expensive as possible for an attacker to compute. Recently, Alwen et al.~\cite{CCS:AlwBloHar17} gave a randomized DAG construction called DRSample and proved that the aAT cost to pebble the graph was $\Omega\left( N^2/\log N\right)$. In an asymptotic sense, DRSample outperformed all prior constructions, including Argon2i, the winner of the password hashing competition, which can be pebbled with aAT cost at most $\bigO\left(N^{1.767}\right)$. In this work we first prove a matching {\em upper bound} on the pebbling cost of DRSample by analyzing the greedy pebbling attack of Boneh et al.~\cite{AC:BonCorSch16}. This sequential attack on DRSample is simple, easy to implement and has good concrete performance. In fact, our results show that, for practical values of $N\leq 2^{24}$, Argon2i provides {\em stronger} resistance to known pebbling attacks than DRSample, reversing a finding of Alwen et al.~\cite{CCS:AlwBloHar17}. We then develop a new iMHF candidate by extending DRSample with the bit-reversal graph, and show that the iMHF resists {\em all known attacks} in practice and has {\em optimal} asymptotic performance under every MHF metric. In particular, we prove that (1) {\em any} (nearly) sequential pebbling attack (including the greedy pebbling attack) has aAT cost $\Omega\left( N^2\right)$, (2) {\em any} parallel attacker has aAT cost at least $\Omega\left(N^2/\log N\right)$ and {\em at least} $\Omega\left(N^2 \log \log N/\log N\right)$ unless one can find new depth-reducing attacks against DRSample which significantly improve upon the state of the art, (3) the graph has high bandwidth-complexity, and (4) any pebbling {\em either} has aAT cost $\omega(N^2)$ or {\em requires} at least $\Omega(N)$ steps with $\Omega(N/\log N)$ pebbles on the DAG. This makes our construction the first practical iMHF with strong guarantees on the sustained space-complexity. We also observe that the Argon2i round function can (trivially) be evaluated in parallel, which would allow an attacker to reduce aAT costs by (nearly) an order of magnitude, and we develop an {\em inherently} sequential version of the Argon2i round function that prevents this attack. We implement our new iMHF candidate (with and without the sequential round function) and show that evaluation speed is nearly identical to Argon2i. Finally, we provide a pebbling reduction which proves that in the parallel random oracle model (PROM) the cost of evaluating an iMHF like Argon2i or DRSample+BRG is given by the pebbling cost of the underlying DAG.
We propose the first adversarially robust algorithm for monotone submodular maximization under single and multiple knapsack constraints with scalable implementations in distributed and streaming settings. For a single knapsack constraint, our algorithm outputs a robust summary of almost optimal (up to polylogarithmic factors) size, from which a constant-factor approximation to the optimal solution can be constructed. For multiple knapsack constraints, our approximation is within a constant-factor of the best known non-robust solution. We evaluate the performance of our algorithms by comparison to natural robustifications of existing non-robust algorithms under two objectives: 1) dominating set for large social network graphs from Facebook and Twitter collected by the Stanford Network Analysis Project (SNAP), 2) movie recommendations on a dataset from MovieLens. Experimental results show that our algorithms give the best objective for a majority of the inputs and show strong performance even compared to offline algorithms that are given the set of removals in advance.
We study the problem of estimating the size of a matching when the graph is revealed in a streaming fashion. Our results are multifold: \begin{enumerate} \item We give a tight structural result relating the size of a maximum matching to the {\em arboricity} of a graph, which has been one of the most studied graph parameters for matching algorithms in data streams. \item We further show that the weight of a maximum weighted matching can be efficiently estimated by augmenting any routine for estimating the size of an unweighted matching. Namely, given an algorithm for computing a $\lambda$-approximation in the unweighted case, we obtain a $2(1+\varepsilon)\cdot \lambda$ approximation for the weighted case, while only incurring a multiplicative logarithmic factor in the space bounds. The algorithm is implementable in any streaming model, including {\em dynamic} streams. \item We also investigate algebraic aspects of computing matchings in data streams, by proposing new algorithms and lower bounds based on analyzing the rank of the {\em Tutte-matrix} of the graph. In particular, we present an algorithm determining whether there exists a matching of size $k$ using $k^2\text{polylog } n $ space, where $n$ is the number of nodes in the graph. We also show a lower bound of $\Omega(n^{1-\varepsilon})$ space for small approximation factors to the rank of a matrix in {\em insertion-only} streams. \end{enumerate}
Memory Hard Functions (MHFs) have been proposed as an answer to the growing inequality between the computational speed of general purpose CPUs and Application Specific Integrated Circuits (ASICs). MHFs have seen widespread applications including password hashing, key stretching and proofs of work. Several metrics have been proposed to quantify the `memory hardness' of a function. Cumulative memory complexity (CMC) \cite{STOC:AlwSer15} (or amortized Area $\times$ Time complexity \cite{CCS:AlwBloHar17}) attempts to quantify the amortized cost to acquire/build the hardware to evaluate the function --- amortized by the number of instances of the function that can be evaluated on this hardware. By contrast, bandwidth hardness \cite{TCC:RenDev17} attempts to quantify the amortized energy costs of evaluating this function on hardware --- which in turn is largely dominated by the number of cache misses. Ideally, a good MHF would be both bandwidth hard and have high cumulative memory complexity. While the cumulative memory complexity of leading MHF candidates is well understood, little is known about the bandwidth hardness of many of the most prominent MHF candidates. Our contributions are as follows: First, we provide the first reduction proving that, in the parallel random oracle model, the bandwidth hardness of a Data-Independent Memory Hard Function (iMHF) is described by the red-blue pebbling cost of the directed acyclic graph (DAG) associated with that iMHF. Second, we show that the goals of designing an MHF with high CMC/bandwidth hardness are well aligned. In particular, we prove that {\em any} function with high CMC also has relatively high bandwidth costs. This result leads to the first {\em unconditional} lower bound on the bandwidth cost of scrypt. Third, we analyze the bandwidth hardness of several prominent iMHF candidates such as Argon2i \cite{BiryukovDK15}, winner of the password hashing competition, aATSample and DRSample \cite{CCS:AlwBloHar17} --- the first practical iMHF with asymptotically optimal CMC. More specifically, we show that Argon2i is maximally bandwidth hard as long as the cache-size $m$ is at most $m \in\O{n^{2/3-\epsilon}}$ where $n$ is the total number of data-labels produced during computation. We also show that aATSample and DRSample are maximally bandwidth hard as long as the cache-size is $m \in\O{n^{1-\epsilon}}$. Finally, we show that the problem of finding a red-blue pebbling with minimum bandwidth cost is NP-hard.
We study the \emph{distinct elements} and \emph{$\ell_p$-heavy hitters} problems in the \emph{sliding window} model, where only the most recent $n$ elements in the data stream form the underlying set. We first introduce the \emph{\histogram}, a simple twist on the exponential (Datar \etal, SODA 2002) and smooth histograms (Braverman and Ostrovsky, FOCS 2007) that may be of independent interest. We then show that the \histogram{} along with a careful combination of existing techniques to track either the identity or frequency of a few specific items suffices to obtain algorithms for both distinct elements and $\ell_p$-heavy hitters that are nearly optimal in both $n$ and $\eps$. Applying our new \histogram{} framework, we provide an algorithm that outputs a $(1+\eps)$-approximation to the number of distinct elements in the sliding window model and uses $\O{\frac{1}{\eps^2}\log n\log\frac{1}{\eps}\log\log n+\frac{1}{\eps}\log^2 n}$ bits of space. For $\ell_p$-heavy hitters, we provide an algorithm using space $\O{\frac{1}{\eps^p}\log^2 n\left(\log\log n+\log\frac{1}{\eps}\right)}$ for $p\in(0,2]$, improving upon the best-known algorithm for $\ell_2$-heavy hitters (Braverman \etal, COCOON 2014), which has space complexity $\O{\frac{1}{\eps^4}\log^3 n}$. We also show complementing nearly optimal lower bounds of $\Omega\left(\frac{1}{\eps}\log^2 n+\frac{1}{\eps^2}\log n\right)$ for distinct elements and $\Omega\left(\frac{1}{\eps^p}\log^2 n\right)$ for $\ell_p$-heavy hitters, both tight up to $\O{\log\log n}$ and $\O{\log\frac{1}{\eps}}$ factors.
Error-correcting codes that admit {\em local} decoding and correcting algorithms have been the focus of much recent research due to their numerous theoretical and practical applications. The goal is to obtain the best possible tradeoffs between the number of queries the algorithm can make to its oracle (the {\em locality} of the task), the amount of redundancy in the encoding (the {\em rate} of the code), and the amount of error it withstands. In the standard adversarial channel model the current tradeoffs are dramatic, allowing either small query complexity and superpolynomial blocklength, or small blocklength but high query complexity. However, in the realistic, computationally bounded channel model, constructions of locally decodable codes (\LDCs) suddenly exhibit small locality and small blocklength, for constant error rate. The first such constructions are due to Ostrovsky, Pandey and Sahai (ICALP 2007), who built private \LDCs under the assumption that one-way functions exist, and in the setting where the sender and receiver share a private key. We study variants of locally decodable and locally correctable codes in computationally bounded but adversarial channels, under the much weaker assumption that collision-resistant hash functions exist, and with no public-key or private-key cryptographic setup. Specifically, we provide constructions of {\em relaxed locally correctable codes} (\RLCCs) and {\em relaxed locally decodable codes} (\RLDCs) over binary alphabets, with constant rate and poly-logarithmic locality, that compare favorably with existing schemes built under much stronger cryptographic assumptions, and with classical \RLCCs in the computationally unbounded Hamming channel. Our constructions crucially employ {\em collision resistant hash functions} and {\em local expander graphs}, extending ideas from recent cryptographic constructions of memory-hard functions.
We investigate the problem of detecting periodic trends within a string $S$ of length $n$, arriving in the streaming model, containing at most $k$ wildcard characters, where $k=o(n)$. We say $S$ has wildcard-period $p$ if there exists an assignment to each of the wildcard characters so that in the resulting stream the length $n-p$ prefix equals the length $n-p$ suffix. We present a two-pass streaming algorithm that computes wildcard-periods of $S$ using $\O{k^3\,\polylog\,n}$ bits of space, while we also show that this problem cannot be solved in sublinear space in one pass. In addition, we present complementing lower bounds, along with a new communication complexity bound for the sparse index problem.
We consider the computational complexity of finding a legal black pebbling of a DAG $G=(V,E)$ with minimum cumulative cost. A black pebbling is a sequence $P_0,\ldots, P_t \subseteq V$ of sets of nodes which must satisfy the following properties: $P_0 = \emptyset$ (we start off with no pebbles on $G$), $\sinks(G) \subseteq \bigcup_{j \leq t} P_j$ (every sink node was pebbled at some point) and $\parents\big(P_{i+1}\backslash P_i\big) \subseteq P_i$ (we can only place a new pebble on a node $v$ if all of $v$'s parents had a pebble during the last round). The cumulative cost of a pebbling $P_0,P_1,\ldots, P_t \subseteq V$ is $\cc(P) = \left| P_1\right| + \ldots + \left| P_t\right|$. The cumulative pebbling cost is an especially important security metric for data-independent memory hard functions, a key primitive for password hashing. Thus, an efficient (approximation) algorithm would be an invaluable tool for the cryptanalysis of password hash functions, as it would provide an automated tool to establish tight bounds on the amortized space-time cost of computing the function. We show that such a tool is unlikely to exist in the most general case. In particular, we prove the following results. \begin{itemize} \item It is $\NPhard$ to find a pebbling minimizing cumulative cost. \item The natural linear program relaxation for the problem has integrality gap $\tilde{O}(n)$, where $n$ is the number of nodes in $G$. We conjecture that the problem is hard to approximate. \item We show that a related problem, finding the minimum-size subset $S\subseteq V$ such that $\depth(G-S) \leq d$, is also $\NPhard$. In fact, under the Unique Games Conjecture there is no $(2-\epsilon)$-approximation algorithm. \end{itemize}
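The pebbling rules above are easy to check mechanically; the following sketch verifies legality of a given black pebbling and computes its cumulative cost.

```python
def is_legal_pebbling(parents, sinks, P):
    """Check the rules above: P[0] is empty, every sink is pebbled at some
    point, and any node newly pebbled in round i+1 has all of its parents
    pebbled in round i.  `parents` maps each node to the set of its parents."""
    if P[0]:
        return False
    if not set(sinks) <= set().union(*P):
        return False
    return all(parents.get(v, set()) <= P[i]
               for i in range(len(P) - 1) for v in P[i + 1] - P[i])

def cumulative_cost(P):
    """cc(P) = |P_1| + ... + |P_t|."""
    return sum(len(Pi) for Pi in P[1:])

# A path 1 -> 2 -> 3, pebbled while keeping only the most recent pebbles.
parents = {2: {1}, 3: {2}}
P = [set(), {1}, {1, 2}, {2, 3}]
print(is_legal_pebbling(parents, sinks={3}, P=P), cumulative_cost(P))  # True 5
```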
We develop an economic model of an offline password cracker which allows us to make quantitative predictions about the fraction of accounts that a rational password attacker would crack in the event of an authentication server breach. We apply our economic model to analyze recent massive password breaches at Yahoo!, Dropbox, LastPass and AshleyMadison. All four organizations were using key-stretching to protect user passwords. In fact, LastPass' use of PBKDF2-SHA256 with $10^5$ hash iterations exceeds the 2017 NIST minimum recommendation by an order of magnitude. Nevertheless, our analysis paints a bleak picture: the adopted key-stretching levels provide insufficient protection for user passwords. In particular, we present strong evidence that most user passwords follow a Zipf's law distribution, and characterize the behavior of a rational attacker when user passwords are selected from a Zipf's law distribution. We show that there is a finite threshold, which depends on the Zipf's law parameters, that characterizes the behavior of a rational attacker --- if the value of a cracked password (normalized by the cost of computing the password hash function) exceeds this threshold then the adversary's optimal strategy is {\em always} to continue attacking until each user password has been cracked. In all cases (Yahoo!, Dropbox, LastPass and AshleyMadison) we find that the value of a cracked password almost certainly exceeds this threshold, meaning that a rational attacker would crack all passwords that are selected from the Zipf's law distribution (i.e., most user passwords). This prediction holds even if we incorporate an aggressive model of diminishing returns for the attacker (e.g., the total value of $500$ million cracked passwords is less than $100$ times the total value of $5$ million passwords). On a positive note, our analysis demonstrates that memory hard functions (MHFs) such as SCRYPT or Argon2i can significantly reduce the damage of an offline attack. In particular, we find that because MHFs substantially increase guessing costs, a rational attacker will give up well before he cracks most user passwords, and this prediction holds even if the attacker does not encounter diminishing returns for additional cracked passwords. Based on our analysis, we advocate that password hashing standards should be updated to require the use of memory hard functions for password hashing and disallow the use of non-memory hard functions such as BCRYPT or PBKDF2.
A palindrome is a string that reads the same as its reverse, such as ``aibohphobia'' (fear of palindromes). Given an integer $d>0$, a {\em $d$-near-palindrome} is a string of Hamming distance at most $d$ from its reverse. We study the natural problem of identifying a longest $d$-near-palindrome in data streams. The problem is relevant to the analysis of DNA databases, and to the task of repairing recursive structures in documents such as XML and JSON. We present an algorithm that returns a $d$-near-palindrome whose length is within a multiplicative $(1+\eps)$-factor of the longest $d$-near-palindrome. Our algorithm also returns the set of mismatched indices of the $d$-near-palindrome, using $\bigO{\frac{d\log^7 n}{\eps\log(1+\eps)}}$ bits of space, and $\bigO{\frac{d\log^6 n}{\eps\log(1+\eps)}}$ update time per arriving symbol. We show that $\Omega(d\log n)$ space is necessary for estimating the length of longest $d$-near-palindromes with high probability. We further obtain an additive-error approximation algorithm and a comparable lower bound, as well as an {\em exact} two-pass algorithm that solves the longest $d$-near-palindrome problem using $\bigO{d^2\sqrt{n}\log^6 n}$ bits of space.
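The definition is easy to state in code; the following check (exhaustive and offline, unlike the streaming algorithms above) tests whether a string is a $d$-near-palindrome.

```python
def is_near_palindrome(s, d):
    """True iff the Hamming distance between s and its reverse is at most d."""
    return sum(a != b for a, b in zip(s, reversed(s))) <= d

print(is_near_palindrome("aibohphobia", d=0))  # True: an exact palindrome
print(is_near_palindrome("racecat", d=2))      # True: Hamming distance 2 from its reverse
print(is_near_palindrome("racecat", d=1))      # False
```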
Argon2i is a data-independent memory hard function that won the password hashing competition. The password hashing algorithm has already been incorporated into several open source crypto libraries such as libsodium. In this paper we analyze the cumulative memory cost of computing Argon2i. On the positive side we provide a lower bound for Argon2i. On the negative side we exhibit an improved attack against Argon2i which demonstrates that our lower bound is nearly tight. In particular, we show that \begin{enumerate} \item An Argon2i DAG is $\left(e,O\left(n^3/e^3\right)\right)$-reducible. \item The cumulative pebbling cost for Argon2i is at most $O\left(n^{1.768}\right)$. This improves upon the previous best upper bound of $O\left(n^{1.8}\right)$ \cite{AB17}. \item An Argon2i DAG is $\left(e,\tilde{\Omega}\left(n^3/e^3\right)\right)$-depth robust. By contrast, the analysis of \cite{ABP17} only established that Argon2i was $\left(e,\tilde{\Omega}\left(n^3/e^2\right)\right)$-depth robust. \item The cumulative pebbling complexity of Argon2i is at least $\tilde{\Omega}\left( n^{1.75}\right)$. This improves on the previous best bound of $\Omega\left( n^{1.66}\right)$ \cite{ABP17} and demonstrates that Argon2i has higher cumulative memory cost than competing proposals such as Catena or Balloon Hashing. \end{enumerate} We also show that Argon2i has high {\em fractional} depth-robustness, which strongly suggests that data-dependent modes of Argon2 are resistant to space-time tradeoff attacks.
Analyzing patterns in streamed data generated by network traffic, sensor networks, or satellite feeds is a challenge for systems in which the available storage is limited. In addition, real data is noisy, which makes designing data stream algorithms even more challenging. Motivated by such challenges, we study algorithms for detecting the similarity of two data streams that can be read in sync. Two strings $S, T\in \Sigma^n$ form a $d$-near-alignment if the distance between them in some given metric is at most $d$. We study the problem of identifying a longest substring of $S$ and $T$ that forms a {\em $d$-near-alignment} under the {\em edit} distance, in the {\em simultaneous streaming model}. In this model, symbols of strings $S$ and $T$ are streamed at the same time, and the amount of available processing space is sublinear in the length of the strings. We give several algorithms, including an exact one-pass algorithm that uses $\O{d^2+d\log n}$ bits of space. We couple these results with comparable lower bounds.
We study the problem of finding all k-periods of a length-n string S, presented as a data stream. S is said to have k-period p if its prefix of length n-p differs from its suffix of length n-p in at most k locations. The study of periodic patterns in sequences is fundamental to string algorithms, time series data mining, and computational biology. Since real data is rarely perfect, exact pattern finding in streamed data can be unrealistic; consequently, one needs to design algorithms that can withstand errors in the patterns. It is often the case that such tasks become much more difficult to analyze than their no-error analogues. This turns out to be the case in the study of near periodicity here. While our algorithms are similar to the ones in the exact version previously studied, our analysis requires a new structural understanding of k-periodicity. We give a one-pass streaming algorithm that computes the k-periods of a string S using poly(k, log n) bits of space, for k-periods of length at most n/2. We also present a two-pass streaming algorithm that computes k-periods of S using poly(k, log n) bits of space, regardless of period length. We complement these results with comparable lower bounds.
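For concreteness, the offline definition of a k-period can be checked directly as below; the streaming algorithms above, of course, avoid storing S.

```python
def is_k_period(s, p, k):
    """True iff s has k-period p: the length (n-p) prefix and the length (n-p)
    suffix of s differ in at most k positions, i.e., s[i] == s[i+p] except for
    at most k indices i."""
    return sum(s[i] != s[i + p] for i in range(len(s) - p)) <= k

s = "abcabdabcabc"
print([p for p in range(1, len(s)) if is_k_period(s, p, k=1)])  # [6, 9, 11]
```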
Group testing is the process of pooling arbitrary subsets from a set of $n$ items so as to identify, with a minimal number of disjunctive tests, a ``small'' subset of $d$ defective items. In ``classical'' non-adaptive group testing, it is known that when $d = o(n^{1-\delta})$ for any $\delta>0$, $\Theta(d\log(n))$ tests are both information-theoretically necessary and sufficient to guarantee recovery with high probability. Group testing schemes in the literature meeting this bound require most items to be tested $\Omega(\log(n))$ times, and most tests to incorporate $\Omega(n/d)$ items. Motivated by physical considerations, we study group testing models in which the testing procedure is constrained to be ``sparse''. Specifically, we consider (separately) scenarios in which (a) items are finitely divisible and hence may participate in at most $\gamma$ tests; and (b) tests are size-constrained to pool no more than $\rho$ items per test. For both scenarios we provide information-theoretic lower bounds on the number of tests required to guarantee high probability recovery. In particular, one of our main results shows that $\gamma$-finite divisibility of items forces {\it any} group testing algorithm with probability of recovery error at most $\epsilon$ to perform at least $\Omega(\gamma d(n/d)^{(1-2\epsilon)/((1+2\epsilon)\gamma)})$ tests. Analogously, for $\rho$-sized constrained tests, we show an information-theoretic lower bound of $\Omega(n\log(n/d)/(\rho\log(n/\rho d)))$. In both scenarios we provide both randomized constructions (under both $\epsilon$-error and zero-error reconstruction guarantees) and explicit constructions of computationally efficient group-testing algorithms (under $\epsilon$-error reconstruction guarantees) that require a number of tests that is optimal up to constant factors in some regimes of $n, d, \gamma \text{ and } \rho$. We also investigate the effect of unreliability/noise in test outcomes.
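A toy illustration of size-constrained non-adaptive group testing (my own sketch using random pools of at most $\rho$ items and the simple COMP decoding rule, not the constructions from the paper):

```python
import random

def toy_group_testing(n, defectives, num_tests, rho, seed=0):
    """Size-constrained non-adaptive group testing: each test pools at most rho
    random items, and the COMP rule declares an item non-defective iff it
    appears in some negative (all-healthy) test.  Illustration only."""
    rng = random.Random(seed)
    pools = [rng.sample(range(n), rho) for _ in range(num_tests)]
    outcomes = [any(i in defectives for i in pool) for pool in pools]
    cleared = {i for pool, positive in zip(pools, outcomes) if not positive for i in pool}
    return sorted(set(range(n)) - cleared)   # estimated defective set (no false negatives)

n, defectives = 200, {3, 77, 150}
print(toy_group_testing(n, defectives, num_tests=120, rho=20))  # contains 3, 77, 150; maybe a few false positives
```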
We present linear-time algorithms for partitioning a path or a tree with weights on the vertices by removing $k$ edges to maximize the minimum-weight component. We also use the same framework to partition a path with weights on the vertices, removing $k$ edges to minimize the maximum-weight component. The algorithms use the parametric search paradigm, testing candidate values until an optimum is found while simultaneously reducing the running time needed for each test. For path-partitioning, the algorithm employs a synthetic weighting scheme that results in a constant-fraction reduction in running time after each test. For tree-partitioning, our dual-pronged strategy makes progress regardless of the layout of the tree.
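The core of the approach is a feasibility test for a candidate value; the sketch below implements that test for the path case and drives it with plain binary search as a simple baseline (the paper's parametric search replaces the binary search to reach linear time).

```python
def can_partition(weights, k, lam):
    """Feasibility test: can removing k edges split the path into k + 1
    consecutive pieces, each of total weight at least lam?  Greedily close a
    piece as soon as its weight reaches lam."""
    pieces, running = 0, 0
    for w in weights:
        running += w
        if running >= lam:
            pieces, running = pieces + 1, 0
    return pieces >= k + 1

def max_min_path_partition(weights, k):
    """Binary search over candidate values for integer vertex weights; the
    linear-time algorithm replaces this O(n log W) search with parametric search."""
    lo, hi = min(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        lo, hi = (mid, hi) if can_partition(weights, k, mid) else (lo, mid - 1)
    return lo

print(max_min_path_partition([3, 1, 4, 1, 5, 9, 2, 6], k=2))  # 8, e.g. [3,1,4] | [1,5,9] | [2,6]
```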
We consider the problem of estimating the weight of a maximum weighted matching of a weighted graph $G(V,E)$ whose edges are revealed in a streaming fashion. Extending the framework from Crouch and Stubbs (APPROX 2014), we develop a reduction from the maximum weighted matching problem to the maximum cardinality matching problem that only doubles the approximation factor of a streaming algorithm developed for the maximum cardinality matching problem. Our results hold for the insertion-only and the dynamic (i.e., insertion and deletion) edge-arrival streaming models. The previous best-known reduction is due to Bury and Schwiegelshohn (ESA 2015), who develop an algorithm whose approximation guarantee scales by a polynomial factor. As an application, we obtain improved estimators for weighted planar graphs and, more generally, for weighted bounded-arboricity graphs, by feeding into our reduction the recent estimators due to Esfandiari \etal\ (SODA 2015) and to Chitnis \etal\ (SODA 2016). In particular, we obtain a $(48+\eps)$-approximation estimator for the weight of a maximum weighted matching in planar graphs.