Hien Duy Nguyen
La Trobe University and Kyushu University. Verified email at latrobe.edu.au. Cited by 1441.
Approximations of conditional probability density functions in Lebesgue spaces via mixture of experts models
Mixture of experts (MoE) models are widely applied for conditional probability density
estimation problems. We demonstrate the richness of the class of MoE models by proving …
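Not from the paper itself — a minimal sketch of the kind of conditional density a mixture of experts represents, assuming a softmax gating network and Gaussian regression experts (all parameter names here are illustrative):

```python
import math

def moe_conditional_density(y, x, gate_w, gate_b, means_w, means_b, sigmas):
    """Density of y given x under a Gaussian mixture of experts."""
    # Softmax gating weights over experts, as functions of the input x
    logits = [w * x + b for w, b in zip(gate_w, gate_b)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    gates = [e / sum(exps) for e in exps]
    # Each expert is a Gaussian regression: y | x ~ N(a*x + c, sigma^2)
    density = 0.0
    for g, a, c, s in zip(gates, means_w, means_b, sigmas):
        mu = a * x + c
        density += g * math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return density
```

For any fixed x, the gating weights sum to one, so the result is a proper density in y — a mixture whose weights and component means both vary with the input.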
Passive superconducting circulator on a chip
An on-chip microwave circulator that is compatible with superconducting devices is a key
element for scaling up superconducting circuits. Previous approaches to integrating circulators …
Approximate Bayesian computation via the energy statistic
Approximate Bayesian computation (ABC) has become an essential part of the Bayesian
toolbox for addressing problems in which the likelihood is prohibitively expensive or entirely …
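Not from the paper — a minimal sketch of rejection ABC, the scheme such discrepancy-based methods build on. The paper uses the energy statistic as the discrepancy; here a simple difference of sample means stands in, and all names are illustrative:

```python
import random
import statistics

def abc_rejection(observed, prior_sample, simulate, distance, eps, n_draws=1000):
    # Rejection ABC: draw theta from the prior, simulate data, and keep
    # theta only when the simulated data lie within eps of the observed data
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        sim = simulate(theta)
        if distance(sim, observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy usage: infer the mean of a normal with known sd = 1, never evaluating a likelihood
random.seed(0)
obs = [random.gauss(2.0, 1.0) for _ in range(200)]
post = abc_rejection(
    observed=obs,
    prior_sample=lambda: random.uniform(-5, 5),
    simulate=lambda th: [random.gauss(th, 1.0) for _ in range(200)],
    distance=lambda a, b: abs(statistics.fmean(a) - statistics.fmean(b)),
    eps=0.1,
)
```

The accepted draws approximate the posterior: only the simulator is called, which is the point of ABC when the likelihood is intractable.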
Functional connectivity subtypes associate robustly with ASD diagnosis
Our understanding of the changes in functional brain organization in autism is hampered by
the extensive heterogeneity that characterizes this neurodevelopmental disorder. Data …
A non-asymptotic penalization criterion for model selection in mixture of experts models
Mixture of experts (MoE) models are a popular class of models in statistics and machine learning that
has sustained attention over the years, due to their flexibility and effectiveness. We consider …
A non-asymptotic risk bound for model selection in a high-dimensional mixture of experts via joint rank and variable selection
We are motivated by the problem of identifying potentially nonlinear regression relationships
between high-dimensional outputs and high-dimensional inputs of heterogeneous data. …
A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models
Mixture of experts (MoE) models are a popular class of statistical and machine learning models that
have gained attention over the years due to their flexibility and efficiency. In this work, we …
Summary statistics and discrepancy measures for approximate Bayesian computation via surrogate posteriors
A key ingredient in approximate Bayesian computation (ABC) procedures is the choice of a
discrepancy that describes how different the simulated and observed data are, often based …
Approximate Bayesian computation with surrogate posteriors
A key ingredient in approximate Bayesian computation (ABC) procedures is the choice of a
discrepancy that describes how different the simulated and observed data are, often based …
Non-asymptotic model selection in block-diagonal mixture of polynomial experts models
Model selection, via penalized likelihood type criteria, is a standard task in many statistical
inference and machine learning problems. Progress has led to deriving criteria with …