A framework to efficiently smooth L1 penalties for linear regression

Georg Hahn, Sharon M. Lutz, Nilanjana Laha, Christoph Lange
doi: https://doi.org/10.1101/2020.09.17.301788
Georg Hahn
1Department of Biostatistics, T.H. Chan School of Public Health, Harvard University, Boston, MA 02115;
  • For correspondence: ghahn@hsph.harvard.edu
Sharon M. Lutz
2Harvard T.H. Chan School of Public Health
Nilanjana Laha
2Harvard T.H. Chan School of Public Health
Christoph Lange
2Harvard T.H. Chan School of Public Health

Abstract

Penalized linear regression approaches that include an L1 term have become an important tool in day-to-day statistical data analysis. One prominent example is the least absolute shrinkage and selection operator (Lasso), though the class of L1 penalized regression operators also includes the fused and graphical Lasso, the elastic net, etc. Although the objective function remains convex under the L1 penalty, the penalty term is not differentiable everywhere, motivating the development of proximal gradient algorithms such as Fista, the current gold standard in the literature. In this work, we take a different approach based on smoothing. The methodological contribution of our article is threefold: (1) We introduce a unified framework to compute closed-form smooth surrogates of a whole class of L1 penalized regression problems using Nesterov smoothing. The surrogates preserve the convexity of the original (unsmoothed) objective functions, are uniformly close to them, and have closed-form derivatives everywhere for efficient minimization via gradient descent; (2) We prove that the estimates obtained with the smooth surrogates can be made arbitrarily close to the ones of the original (unsmoothed) objective functions, and provide explicitly computable bounds on the accuracy of our estimates; (3) We propose an iterative algorithm to progressively smooth the L1 penalty which increases accuracy and is virtually free of tuning parameters. The proposed methodology is applicable to a large class of L1 penalized regression operators, including all the operators mentioned above. Using simulation studies, we compare our framework to current gold standards such as Fista, glmnet, gLasso, etc. Our simulation results suggest that our proposed smoothing framework provides estimates of equal or higher accuracy than the gold standards while keeping the aforementioned theoretical guarantees and having roughly the same asymptotic runtime scaling.
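To make the smoothing idea concrete, below is a minimal Python sketch, not the authors' implementation, of a Nesterov-smoothed Lasso. It assumes the entropy-prox (log-sum-exp) smoother of the absolute value, which is convex, stays uniformly within mu*log(2) of |x|, and has the closed-form derivative tanh(x/mu). The resulting smooth surrogate is minimized by plain gradient descent, and a simple progressive-smoothing loop shrinks mu while warm-starting each stage at the previous solution. All function names, the step-size rule, and the mu schedule are illustrative assumptions, not taken from the paper.

import numpy as np

def smooth_abs(x, mu):
    """Entropy-prox Nesterov smoothing of |x|; uniformly within mu*log(2) of |x|."""
    return mu * (np.logaddexp(x / mu, -x / mu) - np.log(2.0))

def smooth_abs_grad(x, mu):
    """Closed-form derivative of the smoothed absolute value."""
    return np.tanh(x / mu)

def smoothed_lasso_gd(X, y, lam, mu, beta0, n_iter=5000):
    """Gradient descent on 0.5*||y - X b||^2 + lam * sum_j smooth_abs(b_j, mu)."""
    beta = beta0.copy()
    # Conservative step size: inverse of a Lipschitz bound on the gradient
    # (largest eigenvalue of X'X plus lam/mu from the smoothed penalty).
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam / mu)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) + lam * smooth_abs_grad(beta, mu)
        beta = beta - step * grad
    return beta

def progressive_smoothing(X, y, lam, mus=(1.0, 0.1, 0.01, 0.001)):
    """Solve a sequence of surrogates with decreasing mu, warm-starting each stage."""
    beta = np.zeros(X.shape[1])
    for mu in mus:
        beta = smoothed_lasso_gd(X, y, lam, mu, beta)
    return beta

if __name__ == "__main__":
    # Small synthetic example: sparse truth, Gaussian design and noise.
    rng = np.random.default_rng(0)
    n, p, lam = 100, 20, 5.0
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:3] = [2.0, -1.5, 1.0]
    y = X @ beta_true + 0.1 * rng.standard_normal(n)
    beta_hat = progressive_smoothing(X, y, lam)
    obj = 0.5 * np.sum((y - X @ beta_hat) ** 2) + lam * np.sum(smooth_abs(beta_hat, 0.001))
    print(np.round(beta_hat, 2), "surrogate objective:", round(float(obj), 3))

The 1/(||X||_2^2 + lam/mu) step size is a deliberately cautious inverse-Lipschitz choice for illustration; in practice an accelerated or quasi-Newton solver, in the spirit of the Fista comparison in the abstract, would reach the same surrogate minimizer in far fewer iterations.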

Competing Interest Statement

The authors have declared no competing interest.

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. All rights reserved. No reuse allowed without permission.
Posted September 19, 2020.
Citation
A framework to efficiently smooth L1 penalties for linear regression
Georg Hahn, Sharon M. Lutz, Nilanjana Laha, Christoph Lange
bioRxiv 2020.09.17.301788; doi: https://doi.org/10.1101/2020.09.17.301788

Subject Area

  • Bioinformatics