Phil 3.9.18

8:00 – 4:30 ASRC MKT

  • Still working on the nomad->flocking->stampede slide. Do I need a “dimensions” arrow?
  • Labeled slides. Need to do timings – done
  • And then Aaron showed up, so lots of reworking. Done again!
  • Put the ONR proposal back in its original form
  • An overview of gradient descent optimization algorithms
    • Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne’s, caffe’s, and keras’ documentation). These algorithms, however, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This blog post aims at providing you with intuitions towards the behaviour of different algorithms for optimizing gradient descent that will help you put them to use. (A minimal sketch of the vanilla update follows this list.)
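As a quick reference for the excerpt above, here is a minimal sketch of the vanilla update rule θ ← θ − η∇J(θ) that the fancier optimizers (momentum, Adagrad, Adam, etc.) build on. The quadratic objective, the step size eta, and the function names are illustrative choices of mine, not from the post itself.

```python
import numpy as np

def grad_descent(grad, theta0, eta=0.1, steps=100):
    """Iterate the vanilla update: theta <- theta - eta * grad(theta)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - eta * grad(theta)
    return theta

# Toy example: minimize J(theta) = ||theta - 3||^2, whose gradient is
# 2 * (theta - 3). The minimizer is theta = [3, 3].
minimum = grad_descent(lambda t: 2.0 * (t - 3.0), theta0=[0.0, 0.0])
print(minimum)  # ~[3. 3.]
```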
