An invariants based architecture for combining small and large data sets in neural networks

Authors Roelant Ossewaarde, Stefan Leijnen, Thijs van den Berg
Published in Proceedings of BNAIC/BeneLearn 2021.
Publication date 10 November 2021
Research group Artificial Intelligence
Publication type Lecture

Abstract

We present a novel architecture for an AI system that allows a priori knowledge to be combined with deep learning. In traditional neural networks, all available data is pooled at the input layer. Our alternative neural network is constructed so that partial representations (invariants) are learned in the intermediate layers, which can then be combined with a priori knowledge or with other predictive analyses of the same data. This leads to smaller training datasets due to more efficient learning. In addition, because this architecture allows the inclusion of a priori knowledge and interpretable predictive models, the interpretability of the entire system increases while the data can still be used in a black-box neural network. Our system makes use of networks of neurons rather than single neurons to enable the representation of approximations (invariants) of the output.
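The combination described above can be illustrated with a minimal sketch. The paper itself gives no implementation; the sub-network sizes, the ReLU activation, and the concatenation step below are all illustrative assumptions, not the authors' actual design.

```python
import numpy as np

# Hypothetical sketch: an "invariant" sub-network maps the raw data to a
# partial representation in an intermediate layer; that representation is
# then concatenated with a priori features before the final output head.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Invariant sub-network: raw input (8 features) -> invariant representation (4).
W_inv = rng.normal(size=(8, 4))
# Output head: invariant representation (4) + a priori features (2) -> output (1).
W_head = rng.normal(size=(4 + 2, 1))

def forward(x_raw, x_prior):
    invariant = relu(x_raw @ W_inv)                       # learned partial representation
    combined = np.concatenate([invariant, x_prior], axis=-1)
    return combined @ W_head                              # head sees data + prior knowledge

x_raw = rng.normal(size=(3, 8))    # a small batch of raw data
x_prior = rng.normal(size=(3, 2))  # a priori knowledge / external predictions
y = forward(x_raw, x_prior)
print(y.shape)
```

Only the invariant sub-network needs to be trained on data; the a priori branch enters the head directly, which is one way to read the paper's claim that less training data is required and that the combined system stays partly interpretable.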

Contributors to this publication

Language English
ISBN/ISSN URN:ISBN:0-2799-2527-X
Keywords Interpretability, Neural Network architecture, A priori knowledge
Page range 748-749

Roelant Ossewaarde | researcher | Intelligent Data Systems

  • Researcher
  • Research group: Artificial Intelligence