2022. 03. 30. 16:00 - 2022. 03. 30. 17:30
             online
  
Event type: seminar

Organized by: Institute

Deep learning seminar

Description
Feed-forward networks can be interpreted as mappings with linear decision surfaces at the level of the last layer. We investigate how the tangent space of the network can be exploited to refine the decision in the case of ReLU (Rectified Linear Unit) activations. We show that a simple Riemannian metric, parametrized by the parameters of the network, forms a similarity function at least as good as the original network, and we suggest a sparse metric to increase the similarity gap.
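
The abstract does not spell out the construction. As a minimal sketch of one possible reading, the snippet below (in JAX) treats the gradient of a small ReLU network's output with respect to its parameters as a tangent feature vector, uses its inner product as a similarity function, and adds a top-k sparsified variant; the function names (mlp, tangent_similarity, sparse_tangent_similarity) and the sparsification rule are illustrative assumptions, not the speaker's method.

# Illustrative sketch (not the authors' construction): a parameter-space
# tangent similarity for a small ReLU network, computed with JAX.
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Small feed-forward ReLU network; returns its scalar last-layer output."""
    h = x
    for W, b in params[:-1]:
        h = jax.nn.relu(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

def tangent_features(params, x):
    """Gradient of the output w.r.t. all parameters, flattened into one vector.
    This plays the role of a tangent-space representation of the input."""
    grads = jax.grad(mlp)(params, x)
    return jnp.concatenate([g.ravel() for g in jax.tree_util.tree_leaves(grads)])

def tangent_similarity(params, x1, x2):
    """Inner product of tangent features: a simple metric parametrized by the network."""
    return jnp.dot(tangent_features(params, x1), tangent_features(params, x2))

def sparse_tangent_similarity(params, x1, x2, k=100):
    """Sparse variant: keep only the k largest-magnitude tangent coordinates of x1
    (hypothetical sparsification rule)."""
    g1, g2 = tangent_features(params, x1), tangent_features(params, x2)
    idx = jnp.argsort(jnp.abs(g1))[-k:]
    return jnp.dot(g1[idx], g2[idx])

# Toy usage: random 2-layer ReLU network on 8-dimensional inputs.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = [
    (jax.random.normal(k1, (16, 8)), jnp.zeros(16)),
    (jax.random.normal(k2, (1, 16)), jnp.zeros(1)),
]
x1, x2 = jax.random.normal(k3, (2, 8))
print(tangent_similarity(params, x1, x2))
print(sparse_tangent_similarity(params, x1, x2, k=20))

In this reading, two inputs are similar when small parameter perturbations change the network's output on them in the same direction, which refines the purely last-layer linear decision the abstract starts from.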