Reframing in context: A systematic approach for model reuse in machine learning

José Hernández-Orallo, Adolfo Martinez-Uso, Ricardo B.C. Prudencio, Meelis Kull, Peter A. Flach, Chowdhury Farhan Ahmed, Nicolas Lachiche

Research output: Contribution to journal › Article (Academic Journal) › peer-review

9 Citations (Scopus)
572 Downloads (Pure)


We describe a systematic approach called reframing, defined as the process of preparing a machine learning model (e.g., a classifier) to perform well over a range of operating contexts. One way to achieve this is by constructing a versatile model, which is not fitted to a particular context, and thus enables model reuse. We formally characterise reframing in terms of a taxonomy of context changes that may be encountered and distinguish it from model retraining and revision. We then identify three main kinds of reframing: input reframing, output reframing and structural reframing. We proceed by reviewing areas and problems where some notion of reframing has already been developed and shown useful, if under different names: re-optimising, adapting, tuning, thresholding, etc. This exploration of the landscape of reframing allows us to identify opportunities where reframing might be possible and useful. Finally, we describe related approaches in terms of the problems they address or the kind of solutions they obtain. The paper closes with a re-interpretation of the model development and deployment process with the use of reframing.
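As an illustration of the thresholding form of output reframing mentioned above, the following is a minimal sketch (not the paper's own implementation): a classifier producing calibrated probability scores is trained once, and only its decision threshold is adapted to each operating context's misclassification costs. The function names and example scores are hypothetical.

```python
# Hedged sketch of output reframing via threshold adjustment.
# The model is fitted once; only the decision threshold changes per context.

def reframe_threshold(cost_fp: float, cost_fn: float) -> float:
    """Cost-optimal threshold on calibrated scores for a given context.

    With a calibrated score p = P(y=1 | x), the expected cost of predicting
    positive is (1 - p) * cost_fp and of predicting negative is p * cost_fn.
    Predicting positive is cheaper exactly when
        (1 - p) * cost_fp < p * cost_fn,
    i.e. when p > cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

def deploy(scores, cost_fp, cost_fn):
    """Reuse the same model scores under a new operating context."""
    t = reframe_threshold(cost_fp, cost_fn)
    return [1 if p > t else 0 for p in scores]

# Hypothetical calibrated outputs of a single trained model:
scores = [0.2, 0.45, 0.55, 0.9]
balanced = deploy(scores, cost_fp=1, cost_fn=1)   # threshold 0.5
fn_averse = deploy(scores, cost_fp=1, cost_fn=4)  # threshold 0.2
```

The same scores yield different predictions in the two contexts: the false-negative-averse context lowers the threshold and accepts more positives, without any retraining.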
Original language: English
Pages (from-to): 551-566
Number of pages: 16
Journal: AI Communications
Issue number: 5
Publication status: Published - 15 Nov 2016

Structured keywords

  • Jean Golding


  • machine learning
  • reframing
  • model reuse
  • operating context
  • cost-sensitive evaluation

