Robust randomized optimization with k nearest neighbors

Henry Reeve*, Ata Kaban

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

Abstract

Modern applications of machine learning typically require the tuning of a multitude of hyperparameters. With this motivation in mind, we consider the problem of optimization given a set of noisy function evaluations. We focus on robust optimization, in which the goal is to find a point in the input space such that the function value remains high when perturbed by an adversary within a given radius. Here we identify the minimax optimal rate for this problem, which turns out to be of order n^(-λ/(2λ+1)), where n is the sample size and λ quantifies the smoothness of the function, for a broad class of problems, including situations where the metric space is unbounded. The optimal rate is achieved (up to logarithmic factors) by a conceptually simple algorithm based on k-nearest neighbor regression.
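The abstract's approach can be illustrated with a minimal sketch: estimate the function from noisy evaluations with k-nearest neighbor regression, then score each candidate point by its worst-case estimated value over perturbations within the given radius. This is an illustrative reading only, not the authors' exact algorithm; the choice of Euclidean space, the random-direction approximation of the adversary, and all function names here are assumptions.

```python
import numpy as np

def knn_regress(X, y, q, k):
    """Estimate f(q) as the mean of y over the k sampled points nearest to q."""
    dists = np.linalg.norm(X - q, axis=1)
    nearest = np.argsort(dists)[:k]
    return y[nearest].mean()

def robust_value(X, y, q, k, radius, n_dirs=32, seed=0):
    """Worst-case k-NN estimate of f over perturbations of q within `radius`.
    The adversary is approximated by probing random unit directions
    (an assumption; the paper works in general metric spaces)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return min(knn_regress(X, y, q + radius * u, k) for u in dirs)

def robust_argmax(X, y, k, radius):
    """Return the sampled point whose worst-case estimated value is highest."""
    scores = [robust_value(X, y, x, k, radius) for x in X]
    return X[int(np.argmax(scores))]
```

For example, with noisy evaluations of f(x) = -||x||^2 on random points in [-1, 1]^2, `robust_argmax` should return a point near the origin, where the function stays high under any small perturbation.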
Original language: English
Pages (from-to): 819-836
Number of pages: 18
Journal: Analysis and Applications
Volume: 17
Issue number: 5
DOIs
Publication status: Published - 27 Aug 2019

Keywords

  • Optimisation for machine learning
  • metric spaces
  • non-parametric methods
  • Optimization for machine learning
