Distinguishing Artefacts: Evaluating the Saturation Point of Convolutional Neural Networks

Ric Real*, James A Gopsill, David Edward Jones, Chris M Snider, Ben J Hicks

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review



Prior work has shown that Convolutional Neural Networks (CNNs) trained on surrogate Computer Aided Design (CAD) models can detect and classify real-world artefacts from photographs. Applications of this capability support the twinning of digital and physical assets in design, including rapid extraction of part geometry from model repositories, information search & retrieval, and identification of components in the field for maintenance, repair, and recording. The performance of CNNs in classification tasks has been shown to depend on training data set size and the number of classes. Whereas prior works have used relatively small surrogate model data sets (< 100 models), the question remains as to the ability of a CNN to differentiate between models in increasingly large model repositories.

This paper presents a method for generating synthetic image data sets from online CAD model repositories, and further investigates the capacity of an off-the-shelf CNN architecture trained on synthetic data to classify models as the number of classes increases. 1,000 CAD models were curated and processed to generate large-scale surrogate data sets, featuring model coverage at steps of 10°, 30°, 60°, and 120°.
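The angular steps above determine how many surrogate images are rendered per model. As a minimal sketch (not the paper's code), assuming a single-axis orbit is swept at a fixed step, the viewpoint count per model can be enumerated as follows; the function name `viewpoints` is a hypothetical helper introduced for illustration:

```python
def viewpoints(step_deg):
    """Return azimuth angles (in degrees) covering a full 360-degree orbit
    at a fixed angular step, e.g. for rendering surrogate images of a CAD
    model from multiple camera positions."""
    if 360 % step_deg:
        raise ValueError("step must divide 360 evenly")
    return list(range(0, 360, step_deg))

# Finer steps yield more surrogate images per model:
for step in (10, 30, 60, 120):
    print(step, len(viewpoints(step)))  # 10 -> 36 views, 120 -> 3 views
```

A finer step (10°) gives denser coverage of the model's appearance at the cost of a larger training set; the coarser steps trade coverage for data set size.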

The findings demonstrate the capability of computer vision algorithms to classify artefacts in model repositories of up to 200 models; beyond this point, the CNN's performance is observed to deteriorate significantly, limiting its present ability to automate the twinning of physical to digital artefacts. However, a match is more often found within the top-5 results, showing potential for information search and retrieval on large repositories of surrogate models.
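The top-5 criterion used above counts a prediction as a hit whenever the true model appears among the five highest-scoring classes, which suits a search-and-retrieval workflow where a short candidate list is acceptable. A minimal sketch of this metric (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true class index appears among the k
    highest-scoring classes. `scores` is (n_samples, n_classes)."""
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of k largest scores
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

# Toy example with two samples over three classes:
scores = np.array([[0.1, 0.2, 0.7],
                   [0.5, 0.3, 0.2]])
labels = [2, 1]
print(top_k_accuracy(scores, labels, k=1))  # -> 0.5 (second sample misses)
print(top_k_accuracy(scores, labels, k=2))  # -> 1.0 (both within top-2)
```

Relaxing k widens the candidate list, which is why top-5 retrieval can remain useful on large repositories even when top-1 classification has saturated.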

Original language: English
Pages (from-to): 385-390
Number of pages: 6
Journal: Procedia CIRP
Early online date: 2 Jun 2021
Publication status: E-pub ahead of print - 2 Jun 2021
Event: 31st CIRP Design Conference 2021, University of Twente, Enschede, Netherlands
Duration: 19 May 2021 to 21 May 2021

Bibliographical note

Funding Information:
The work reported in this paper has been undertaken as part of the Twinning of digital-physical models during prototyping project. The work was conducted at the University of Bristol, Design and Manufacturing Futures Laboratory (http://www.dmf-lab.co.uk), funded by the Engineering and Physical Sciences Research Council (EPSRC), grant reference EP/R032696/1. The authors would also like to thank MiniFactory.com and their users for sharing their models.

Publisher Copyright:
© 2021 Elsevier B.V. All rights reserved.


  • Design Repositories
  • Search & Retrieval
  • Convolutional Neural Network
  • CNN
  • Machine Learning
  • ML
  • Synthetic Data
  • Surrogate Models


