Alexandre Gramfort

@agramfort

Joined on Mar 7, 2019

  • Summary: The GW problem is important and challenging (non-convex quadratic objective over the space of n x n bistochastic matrices). This paper proposes a provably convergent (to a local minimum) single-loop approach, which could be of great practical interest, because single-loop algorithms are easier to tune (no need to set the inner loop's number of iterations or its tolerance). The non-asymptotic convergence theory that is provided is original, but has some limitations: the authors mention at the end of their introduction, in Table 1, and elsewhere in the paper that their algorithm does not always return a feasible solution for GW (i.e. a bistochastic matrix); they also mention in R.3.7 that the algorithm used in the experiments is not provably convergent (the theory supports a symmetric Bregman divergence, i.e. L2, whereas their implementation uses KL). The authors show speed-ups as well as comparable or better accuracy than other approaches on graph alignment/partitioning tasks.
    Process: The paper was supported by all reviewers during the rebuttal phase. After the rebuttal phase, the AC and reviewers discussed shortcomings, which led two reviewers (h6kQ and Jipi) to lower their grades. Because the system was locked, this is only reflected in messages to the authors ("Unfortunately have to lower my score", "Lower my score due to my misunderstanding"). Reviewer h6kQ communicated a new grade of 5 to me. One reviewer participated in a discussion with the others but then stopped participating; another reviewer did not join the discussion after the rebuttal period. Some of the concerns were then discussed with the authors at a later date. The AC and SAC now acknowledge that these discussions came late in the review process and therefore did not give the authors enough opportunity to effectively amend their paper and address the late concerns.
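For concreteness, the "non-convex quadratic objective" mentioned in the summary can be evaluated directly. This is not the paper's single-loop algorithm, just a minimal NumPy sketch of the standard GW objective sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 T[i,j] T[k,l] for a given coupling T, using the usual tensor-product factorization that avoids the quadruple loop; all variable names here are illustrative.

```python
import numpy as np

def gw_objective(C1, C2, T):
    """Quadratic GW objective: sum_{i,j,k,l} (C1[i,k]-C2[j,l])^2 T[i,j] T[k,l].

    Expands the square into two marginal terms plus a cross term, so the
    cost is O(n^3) instead of O(n^4)."""
    p = T.sum(axis=1)  # row marginal of the coupling
    q = T.sum(axis=0)  # column marginal of the coupling
    term1 = (C1 ** 2 @ p) @ p            # sum_{i,k} C1[i,k]^2 p_i p_k
    term2 = (C2 ** 2 @ q) @ q            # sum_{j,l} C2[j,l]^2 q_j q_l
    cross = np.sum((C1 @ T @ C2.T) * T)  # sum C1[i,k] C2[j,l] T[i,j] T[k,l]
    return term1 + term2 - 2.0 * cross

rng = np.random.default_rng(0)
n = 5
C1 = rng.random((n, n)); C1 = (C1 + C1.T) / 2  # symmetric intra-domain costs
C2 = rng.random((n, n)); C2 = (C2 + C2.T) / 2
T = np.full((n, n), 1.0 / n ** 2)  # uniform feasible coupling (scaled bistochastic)
print(gw_objective(C1, C2, T))
```

When C1 == C2 and T is the (scaled) identity coupling, the objective is exactly zero, which is a quick sanity check for the factorized formula.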
  • Author: Alex Gramfort (agramfort@). Moved to Google Doc @ meta: https://docs.google.com/document/d/1JfA7R7DCF2WHDKQkbytgnAE8O-GmcJ8F6ufCPDJ7li8/edit?usp=sharing
  • Ruiyang ZHOU (ruiyang.zhou@telecom-paris.fr): I don't have an account on moodle.polytechnique.fr. I am enrolled at Telecom, 3A interne SD. I tried to contact Fabienne Duc-Seillan (duc-seillan@telecom-paris.fr) and got no answer.
    Peilin TIAN (peilin.tian@telecom-paris.fr): I don't have an account on moodle.polytechnique.fr. I am enrolled at Telecom, 3A interne SD.
  • What is deep learning? Introduce notions: layer, optimizer, gradient descent, data augmentation, early stopping (patience).
    Hands-on: implement the forward pass of an MLP.
    Introduction to gradient computations: backpropagation without too much maths (pre-read on calculus: gradients and function composition, what the Jacobian of a function is; say how much time should be spent on pre-reads).
    Hands-on: add gradient computations to the MLP notebook (shorter than before).
    Hands-on with tabular data: comparison between ResNet/MLP and GBRT on some tabular data; add early stopping, and the tricks to make ResNet competitive with GBRT.
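The first hands-on above (forward pass of an MLP) could start from a minimal NumPy sketch like the one below; the function names, layer sizes, and initialization scheme are illustrative choices, not a prescribed solution.

```python
import numpy as np

def relu(x):
    """Elementwise ReLU activation."""
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass of an MLP: alternate affine layers and ReLU.

    The last layer is left linear (no activation), as is usual before
    a loss such as softmax cross-entropy or mean squared error."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
sizes = [4, 8, 3]  # input dim 4, one hidden layer of 8 units, output dim 3
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(2, 4))  # batch of 2 samples
print(mlp_forward(x, weights, biases).shape)  # (2, 3)
```

Students can then extend this notebook with the backward pass in the next session, reusing the same `weights`/`biases` lists of per-layer parameters.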