Pullback bundles and the geometry of learning
Abstract
Explainable Artificial Intelligence (XAI) and acceptable artificial intelligence are active research topics in machine learning. For critical applications, it is of utmost importance to be able to prove, or at least to ensure with high probability, the correctness of algorithms. In practice, however, few theoretical tools are known that can be used for this purpose. Using the Fisher Information Metric (FIM) on the output space yields interesting indicators in both the input and parameter spaces, but the underlying geometry is not yet fully understood. In this work, an approach based on the pullback bundle, a well-known construction for describing bundle morphisms, is introduced and applied to the encoder-decoder block. Under a constant rank hypothesis on the derivative of the network with respect to its inputs, a description of its behaviour is obtained. Further generality is gained through the introduction of the pullback generalized bundle, which takes into account the sensitivity with respect to the weights.
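As a rough illustration of the pullback construction underlying the abstract (not the paper's own code), the sketch below pulls the FIM on the output space back to the input space through the network's Jacobian, giving the input-space indicator g(x) = J^T G(f(x)) J. The toy network, the categorical output distribution, and all names are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Hypothetical two-layer network producing logits z(x) of a
    # categorical distribution over 3 classes.
    (W1, b1), (W2, b2) = params
    h = jnp.tanh(W1 @ x + b1)
    return W2 @ h + b2

def categorical_fim(logits):
    # FIM of a categorical distribution in its logit parametrization:
    # G = diag(p) - p p^T, with p = softmax(z).
    p = jax.nn.softmax(logits)
    return jnp.diag(p) - jnp.outer(p, p)

def pullback_metric(params, x):
    # Pullback of the output FIM along f: input space -> output space,
    # in coordinates g(x) = J^T G(f(x)) J with J = df/dx.
    J = jax.jacobian(lambda xi: mlp(params, xi))(x)
    G = categorical_fim(mlp(params, x))
    return J.T @ G @ J

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = ((jax.random.normal(k1, (8, 4)), jnp.zeros(8)),
          (jax.random.normal(k2, (3, 8)), jnp.zeros(3)))
x = jax.random.normal(k3, (4,))
g = pullback_metric(params, x)  # 4x4 positive semi-definite matrix
```

The same Jacobian trick applied with respect to the weights (e.g. `jax.jacobian` over `params` instead of `x`) would give the parameter-space sensitivity that the generalized pullback bundle is meant to capture; the paper's actual construction is a bundle-theoretic refinement of this coordinate picture.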