Abstract

In supervised learning, low-quality annotations lead to poorly performing classification and detection models, while also rendering evaluation unreliable. Annotation quality is affected by multiple factors; for example, in the post-hoc self-reporting of daily activities, cognitive biases are among the most common sources of error.
In particular, reporting the start time and duration of an activity after it has ended may incorporate biases introduced by personal time perception, as well as the imprecision and lack of granularity caused by time rounding. When dealing with time-bounded data, the temporal consistency of annotations over the event is particularly important for both event detection and classification. Here we propose a method to model human biases in temporal annotations and propose the use of soft labels. Experimental results on synthetic data show that soft labels are a better approximation of the ground truth across several metrics. We showcase the method on a real dataset of daily activities.
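The paper does not publish code, but a minimal sketch can make the soft-label idea concrete. The snippet below (Python; the function name, the Gaussian perception-bias model, and the 5-minute rounding grid are illustrative assumptions, not the authors' implementation) samples plausible true event boundaries around a reported start and end, and takes, for each time step, the fraction of samples containing it as that step's soft label.

    import numpy as np

    rng = np.random.default_rng(0)

    def soft_labels(reported_start, reported_end, horizon,
                    perception_sd=3.0, grid=5.0, n_samples=10_000):
        # Assumed annotation-noise model (illustrative, not the paper's):
        # invert the reporting process by perturbing the reported boundaries
        # with Gaussian time-perception bias plus uniform rounding jitter.
        starts = (reported_start
                  + rng.normal(0.0, perception_sd, n_samples)
                  + rng.uniform(-grid / 2, grid / 2, n_samples))
        ends = (reported_end
                + rng.normal(0.0, perception_sd, n_samples)
                + rng.uniform(-grid / 2, grid / 2, n_samples))
        # Soft label per minute: fraction of sampled intervals that
        # contain that minute.
        t = np.arange(horizon)
        inside = (t[None, :] >= starts[:, None]) & (t[None, :] < ends[:, None])
        return inside.mean(axis=0)

    # An activity reported as minutes 5-30 of a one-hour window: minutes
    # near the reported boundaries get probabilities strictly between
    # 0 and 1, while minutes deep inside the interval stay close to 1.
    print(soft_labels(5, 30, 60).round(2))

Unlike hard 0/1 labels, these per-step probabilities degrade gracefully near the uncertain boundaries, which is exactly where rounding and perception biases concentrate.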
Original language: English
Number of pages: 6
Publication status: Accepted/In press - 11 Jan 2023
Event: ARDUOUS 2023: Workshop at PerCom 2023 - Atlanta, Georgia, United States
Duration: 13 Mar 2023 - 17 Mar 2023
Internet address: https://text2hbm.org/arduous/

Keywords

  • Annotations
  • Human Biases
  • Bayesian
