Loss Landscape Knowledge

This page collects papers and related content on the loss landscape challenge. If you have a paper or article you would like to share, email ideami@ideami.com.

  • Emergent properties of the local geometry of neural loss landscapes
    Stanislav Fort, Surya Ganguli
    Submitted on 14 Oct 2019
    arXiv:1910.05929

 

  • Loss Landscape Sightseeing with Multi-Point Optimization
    Ivan Skorokhodov, Mikhail Burtsev
    Submitted on 9 Oct 2019
    arXiv:1910.03867

 

  • How noise affects the Hessian spectrum in overparameterized neural networks
    Mingwei Wei, David J. Schwab
    Submitted on 1 Oct 2019
    arXiv:1910.00195

 

  • The Difficulty of Training Sparse Neural Networks
    Utku Evci, Fabian Pedregosa, Aidan Gomez, Erich Elsen
    Submitted on 25 Jun 2019
    arXiv:1906.10732

 

  • Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets
    Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, Rong Ge
    Submitted on 14 Jun 2019
    arXiv:1906.06247

 

  • Large Scale Structure of Neural Network Loss Landscapes
    Stanislav Fort, Stanislaw Jastrzebski
    Submitted on 11 Jun 2019
    arXiv:1906.04724

 

  • The Effect of Network Depth on the Optimization Landscape
    Behrooz Ghorbani, Ying Xiao, Shankar Krishnan
    Submitted on 28 May 2019
    Link

 

  • Visualizing Loss Landscape of Deep Neural Networks…..but can we Trust them?
    Jae Duk Seo
    May 5, 2019
    Link

 

  • Negative eigenvalues of the Hessian in deep neural networks
    Guillaume Alain, Nicolas Le Roux, Pierre-Antoine Manzagol
    Submitted on 6 Feb 2019
    arXiv:1902.02366

 

  • Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
    Charles H. Martin, Michael W. Mahoney
    Submitted on 2 Oct 2018
    arXiv:1810.01075

 

  • On the loss landscape of a class of deep neural networks with no bad local valleys
    Quynh Nguyen, Mahesh Chandra Mukkamala, Matthias Hein
    Submitted on 27 Sep 2018
    arXiv:1809.10749

 

  • The Goldilocks zone: Towards better understanding of neural network loss landscapes
    Stanislav Fort, Adam Scherlis
    Submitted on 6 Jul 2018
    arXiv:1807.02581

 

  • The loss landscape of overparameterized neural networks
    Y Cooper
    Submitted on 26 Apr 2018
    arXiv:1804.10200

 

  • Measuring the Intrinsic Dimension of Objective Landscapes
    Chunyuan Li, Heerad Farkhoor, Rosanne Liu, Jason Yosinski
    Submitted on 24 Apr 2018
    arXiv:1804.08838

 

  • A Mean Field View of the Landscape of Two-Layers Neural Networks
    Song Mei, Andrea Montanari, Phan-Minh Nguyen
    Submitted on 18 Apr 2018
    arXiv:1804.06561

 

  • The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
    Jonathan Frankle, Michael Carbin
    Submitted on 9 Mar 2018
    arXiv:1803.03635

 

  • Essentially No Barriers in Neural Network Energy Landscape
    Felix Draxler, Kambis Veschgini, Manfred Salmhofer, Fred A. Hamprecht
    Submitted on 2 Mar 2018
    arXiv:1803.00885

 

  • Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs
    Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson
    Neural Information Processing Systems 2018
    Submitted on 27 Feb 2018
    arXiv:1802.10026
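    (see the linear-interpolation sketch after this list)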

 

  • Visualizing the Loss Landscape of Neural Nets (a favourite of mine)
    Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein
    Neural Information Processing Systems 2018
    Submitted on 28 Dec 2017
    arXiv:1712.09913
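    (see the filter-normalization sketch after this list)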

 

  • Sharp Minima Can Generalize For Deep Nets
    Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
    Submitted on 28 Mar 2017
    arXiv:1703.04933

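The mode-connectivity entries above (Draxler et al., arXiv:1803.00885; Garipov et al., arXiv:1802.10026) start from a simple diagnostic: evaluate the training loss along the straight line between two independently trained solutions and look for a barrier. Below is a minimal sketch of that baseline check, assuming two PyTorch models with identical architecture on the same device and a mean-reduced loss; the names are placeholders, not code from either paper.

```python
import copy
import torch

def interpolation_losses(model_a, model_b, loss_fn, data_loader, n_points=11):
    """Loss of the model with weights (1 - t) * theta_a + t * theta_b for t in [0, 1]."""
    params_a = [p.detach().clone() for p in model_a.parameters()]
    params_b = [p.detach().clone() for p in model_b.parameters()]
    probe = copy.deepcopy(model_a)   # scratch model whose weights get overwritten
    probe.eval()                     # freeze dropout / batch-norm statistics
    losses = []
    for t in torch.linspace(0.0, 1.0, n_points):
        with torch.no_grad():
            for p, pa, pb in zip(probe.parameters(), params_a, params_b):
                p.copy_((1.0 - t) * pa + t * pb)
            total, count = 0.0, 0
            for x, y in data_loader:
                total += loss_fn(probe(x), y).item() * x.size(0)  # assumes mean reduction
                count += x.size(0)
        losses.append(total / count)
    return losses
```

The contribution of those papers is then to show that, although this straight line usually crosses a high-loss barrier for independently trained networks, low-loss curved paths connecting the two solutions do exist.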

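Li et al. (arXiv:1712.09913, flagged above) visualize the landscape by plotting the loss along random directions whose filters are rescaled to match the norms of the corresponding trained filters. The sketch below follows the idea described in the paper, not the authors' released code, and again assumes a PyTorch model; all function names are placeholders.

```python
import torch

def filter_normalized_direction(model):
    """Random direction with each filter rescaled to the norm of the matching weight filter."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:                      # conv / linear weights: rescale filter by filter
            for d_f, p_f in zip(d, p):
                d_f.mul_(p_f.norm() / (d_f.norm() + 1e-10))
        else:                                # biases / norm parameters: direction set to zero
            d.zero_()
        direction.append(d)
    return direction

def loss_along_direction(model, direction, loss_fn, data_loader, alphas):
    """Training loss at theta + alpha * d for each alpha; restores the weights afterwards."""
    base = [p.detach().clone() for p in model.parameters()]
    losses = []
    with torch.no_grad():
        for alpha in alphas:
            for p, p0, d in zip(model.parameters(), base, direction):
                p.copy_(p0 + alpha * d)
            total, count = 0.0, 0
            for x, y in data_loader:
                total += loss_fn(model(x), y).item() * x.size(0)  # assumes mean reduction
                count += x.size(0)
            losses.append(total / count)
        for p, p0 in zip(model.parameters(), base):   # put the original weights back
            p.copy_(p0)
    return losses
```

A typical 1-D slice uses something like alphas = torch.linspace(-1, 1, 41); the paper's 2-D contour plots apply the same idea with two such directions and a grid of (alpha, beta) pairs.
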
Go deep

What lies within the landscape?