The FLINT Project

Task-Aware Novelty Detection for Visual-Based Deep Learning in Autonomous Systems

Authors

Valerie Chen
Man-Ki Yoon
Zhong Shao

Abstract

Deep-learning-driven safety-critical autonomous systems, such as self-driving cars, must be able to detect situations where their trained models cannot make a trustworthy prediction. This ability to determine the novelty of a new input with respect to a trained model is critical for such systems because novel inputs, whether due to changes in the environment, adversarial attacks, or even unintentional noise, can lead to erroneous and potentially life-threatening decisions. This paper proposes a learning framework that leverages information learned by the prediction model in a task-aware manner to detect novel scenarios. We use network saliency to provide the learning architecture with knowledge of the input areas that are most relevant to the decision-making, and we learn an association between the saliency map and the predicted output to determine the novelty of the input. We demonstrate the efficacy of this method through experiments on real-world driving datasets as well as on driving scenarios in our in-house indoor driving environment, where novel images are drawn either from a different driving dataset with similar features or from adversarially attacked images from the training dataset. We find that, through this task-aware approach, our method is able to systematically detect novel inputs and quantify their deviation from the target prediction.
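The general shape of such a saliency-based novelty check can be sketched roughly as follows (Python/PyTorch). This is only a minimal illustration, not the paper's architecture: the saliency here is plain input-gradient saliency, and the SaliencyToOutput auxiliary network and novelty_score function are hypothetical names introduced for this sketch. The idea is that an auxiliary model trained to map saliency maps back to the task output should agree with the main model on familiar inputs and disagree on novel ones.

    import torch
    import torch.nn as nn

    def saliency_map(model, x):
        # Gradient-based saliency: magnitude of d(prediction)/d(input) per pixel.
        x = x.clone().detach().requires_grad_(True)
        y = model(x)                           # e.g. a steering-angle regressor
        y.sum().backward()
        return x.grad.abs().max(dim=1).values  # collapse channels -> (N, H, W)

    class SaliencyToOutput(nn.Module):
        # Hypothetical auxiliary model that maps a saliency map back to the
        # task output, so it can be compared with the main model's prediction.
        def __init__(self, h, w):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(h * w, 64),
                nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, s):
            return self.net(s)

    def novelty_score(model, aux, x):
        # Large disagreement between the two predictions is taken as evidence
        # that the input is novel with respect to the trained model.
        with torch.no_grad():
            y_model = model(x)
        s = saliency_map(model, x)
        with torch.no_grad():
            y_aux = aux(s)
        return (y_model - y_aux).abs()

In this sketch the score is a per-input scalar, so flagging novelty reduces to thresholding it against values observed on held-out training data.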

Published

In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA'20), Paris, France, pages 11060-11066, June 2020.
  • Conference Paper [PDF]
