User profiles for P. Valiant
Paul Valiant, Associate Professor of Computer Science, Purdue University. Verified email at purdue.edu. Cited by 2787.
An automatic inequality prover and instance optimal identity testing
… c, c and a function f(p, ϵ) on the known distribution p and error parameter ϵ, such that our
tester distinguishes p = q from ‖p − q‖₁ ≥ ϵ using f(p, ϵ) samples with success probability > 2/3, …
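To illustrate the testing problem this snippet describes (this is a naive plug-in sketch, not the instance-optimal tester of the paper; the function name, the `threshold` parameter, and the uniform toy distribution are assumptions for illustration), one can compare the known distribution p against the empirical distribution of the samples in ℓ1 distance:

```python
import numpy as np

def naive_identity_test(p, samples, eps, threshold=0.5):
    """Naive plug-in tester (NOT the paper's instance-optimal tester):
    accept 'p = q' when the empirical L1 distance between p and the
    sample histogram is below threshold * eps."""
    counts = np.bincount(samples, minlength=len(p))
    q_hat = counts / counts.sum()          # empirical distribution of samples
    return np.abs(p - q_hat).sum() < threshold * eps

rng = np.random.default_rng(0)
p = np.full(10, 0.1)                       # known distribution: uniform on 10 elements
same = rng.choice(10, size=5000, p=p)      # samples drawn from p itself
print(naive_identity_test(p, same, eps=0.5))
```

The point of the paper's result is precisely that the number of samples needed can be characterized as a function f(p, ϵ) of the known distribution itself, rather than a worst-case bound; the plug-in approach above generally needs far more samples than that.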
Testing symmetric properties of distributions
P Valiant - Proceedings of the fortieth annual ACM symposium on …, 2008 - dl.acm.org
… tions p+, p− that satisfy this indistinguishability condition and where π(p+) is large yet π(p−)
is … shows how we may slightly modify p+, p− into a pair p+, p− whose moments match each …
The power of linear estimators
… Poi(k) is closely concentrated around k, we may often easily replace k-sample testing with
P… We now consider the distribution of the ith entry of a Poi(k)-sample fingerprint, F(i). …
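The fingerprint in this snippet is the standard one: F(i) counts how many domain elements appear exactly i times in the sample. A minimal sketch (the toy uniform distribution and variable names are assumptions; the paper's setting is general):

```python
import collections
import numpy as np

def fingerprint(samples):
    """Fingerprint F: F[i] = number of domain elements seen exactly i times.
    The fingerprint is a sufficient statistic for symmetric properties."""
    counts = collections.Counter(samples)     # multiplicity of each element
    return dict(collections.Counter(counts.values()))

rng = np.random.default_rng(1)
k = 100
m = rng.poisson(k)                      # Poissonized sample size: Poi(k)
samples = rng.integers(0, 50, size=m)   # toy: uniform over 50 elements
F = fingerprint(samples)
print(sum(i * F[i] for i in F) == m)    # identity: sum_i i * F(i) = sample size
```

Drawing Poi(k) samples rather than exactly k, as the snippet notes, makes the entries F(i) easier to analyze, since each element's multiplicity becomes an independent Poisson variable.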
Estimating the unseen: improved estimators for entropy and other properties
We show that a class of statistical properties of distributions, which includes such practically
relevant properties as entropy, the number of distinct elements, and distance metrics …
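For context on what these "unseen" estimators improve upon: the naive baseline for entropy is the plug-in (empirical) estimator, which systematically underestimates when the sample is small relative to the support. A sketch of that baseline (the toy distribution and sample size are assumptions; this is not the paper's estimator):

```python
import numpy as np

def plugin_entropy(samples):
    """Naive plug-in entropy estimate (in nats): entropy of the empirical
    distribution. Biased low when many domain elements go unseen."""
    _, counts = np.unique(samples, return_counts=True)
    q = counts / counts.sum()
    return float(-(q * np.log(q)).sum())

rng = np.random.default_rng(2)
samples = rng.integers(0, 1000, size=500)  # 500 samples from uniform on 1000 elements
print(plugin_entropy(samples))             # falls short of the truth, log(1000) ≈ 6.91
print(len(np.unique(samples)))             # observed distinct elements, well below 1000
```

With 500 samples from a support of 1000, at least half the elements are necessarily unseen, so both the entropy and the distinct-element count are underestimated; correcting for this unseen mass is the subject of the paper.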
Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs
… This bound of (log k)/(8k) ensures that we will almost never see any element of p+ or p− more
than log k times; that is, the portion of the fingerprint below m “captures the whole story”. How…
Optimal algorithms for testing closeness of discrete distributions
… More precisely, given samples from two distributions p and q over an n-element set, we wish
to distinguish whether p = q versus p is at least ε-far from q, in either ℓ1 or ℓ2 distance. Batu …
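A statistic commonly used in this line of work for the ℓ2 version of the problem is the Poissonized "collision-style" statistic, which is an unbiased estimator of m²‖p − q‖₂². A sketch under toy assumptions (the distributions, sample parameter m, and helper name are illustrative; thresholding and normalization in the actual tester are more delicate):

```python
import numpy as np

def closeness_statistic(X, Y):
    """Unbiased estimator of m^2 * ||p - q||_2^2 under Poisson sampling,
    where X_i ~ Poi(m * p_i) and Y_i ~ Poi(m * q_i) independently:
    E[(X_i - Y_i)^2 - X_i - Y_i] = m^2 * (p_i - q_i)^2 per element i."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    return float(((X - Y) ** 2 - X - Y).sum())

rng = np.random.default_rng(3)
m, n = 2000, 20
p = np.full(n, 1 / n)                     # toy: uniform over n elements
q = p.copy()
q[0] += 0.04                              # q is 0.08-far from p in L1 distance
q[1] -= 0.04
Z_same = closeness_statistic(rng.poisson(m * p), rng.poisson(m * p))
Z_diff = closeness_statistic(rng.poisson(m * p), rng.poisson(m * q))
print(Z_same, Z_diff)  # Z_same concentrates near 0; Z_diff near m^2 * ||p-q||_2^2
```

Subtracting X_i + Y_i cancels the Poisson variance terms, which is what makes the estimator unbiased for the squared ℓ2 distance.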
Incrementally verifiable computation or proofs of knowledge imply time/space efficiency
P Valiant - Theory of Cryptography: Fifth Theory of Cryptography …, 2008 - Springer
… Assume we have a machine P that outputs deceptive pairs (x = (M, s1, s3), p) for Ti+1 with …
We note that our extractor runs a logarithmic factor slower than P. Since the running time of P is …
Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process
We consider networks, trained via stochastic gradient descent to minimize $\ell_2$ loss, with
the training labels perturbed by independent noise at each iteration. We characterize the …
Instance optimal learning of discrete distributions
… a distribution p, our algorithm returns a labelled vector whose expected distance from p is …
true unlabeled vector of probabilities of distribution p and simply needs to assign labels—up …
A CLT and tight lower bounds for estimating entropy
We prove two new multivariate central limit theorems; the first relates the sum of independent
distributions to the multivariate Gaussian of corresponding mean and covariance, under …