7:00 – 5:00 ASRC MKT

 Aaron found this hyperparameter optimization service: Sigopt
 Improve ML models 100x faster
 SigOpt’s API tunes your model’s parameters through state-of-the-art Bayesian optimization.
 Exponentially faster and more accurate than grid search. Faster, more stable, and easier to use than open source solutions.
 Extracts additional revenue and performance left on the table by conventional tuning.
 A Strategy for Ranking Optimization Methods using Multiple Criteria
 An important component of a suitably automated machine learning process is the automation of model selection, which often includes selecting optimal hyperparameters. The hyperparameter optimization process is often conducted with a black-box tool, but, because different tools may perform better in different circumstances, automating the machine learning workflow might involve choosing the appropriate optimization method for a given situation. This paper proposes a mechanism for comparing the performance of multiple optimization methods on multiple performance metrics across a range of optimization problems. Using nonparametric statistical tests to convert the metrics recorded for each problem into a partial ranking of optimization methods, results from each problem are then amalgamated through a voting mechanism to generate a final score for each optimization method. Mathematical analysis is provided to motivate decisions within this strategy, and sample results are provided to demonstrate the impact of certain ranking decisions.
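The partial-ranking-plus-voting step can be sketched with a Borda-style count. The rankings below are hypothetical, and the tie handling (a nonparametric test declaring no significant difference between two methods) is my assumption, not necessarily the paper's exact mechanism:

```python
from itertools import combinations

# Hypothetical per-problem partial rankings of optimizers (rank 1 = best;
# equal ranks mean the statistical test found no significant difference).
rankings = [
    {"bayesian": 1, "pso": 2, "random": 3},
    {"bayesian": 1, "pso": 1, "random": 2},  # bayesian and pso tied
    {"pso": 1, "bayesian": 2, "random": 2},  # bayesian and random tied
]

def borda_scores(rankings):
    """Amalgamate partial rankings with a Borda-style vote: each method
    earns one point per method ranked strictly below it on each problem."""
    scores = {method: 0 for method in rankings[0]}
    for ranking in rankings:
        for a, b in combinations(ranking, 2):
            if ranking[a] < ranking[b]:
                scores[a] += 1
            elif ranking[b] < ranking[a]:
                scores[b] += 1
    return scores
```

With these made-up rankings, `borda_scores(rankings)` gives PSO the top final score because ties award no points to either side.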
 World Models: Can agents learn inside of their own dreams?
 We explore building generative neural network models of popular reinforcement learning environments[1]. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment.
 Tweaked the SingleNeuron spreadsheet
 This came up again: A new optimizer using particle swarm theory (1995)
 The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed. Relationships between particle swarm optimization and both artificial life and evolutionary computation are reviewed.
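The core particle swarm update is compact enough to sketch. This is a minimal global-best PSO on a toy sphere function; the inertia and cognitive/social coefficients (w, c1, c2) are common modern defaults, not the exact paradigms benchmarked in the 1995 paper:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    """Minimize f over a box using global-best particle swarm optimization."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity = inertia + pull toward personal best + pull toward swarm best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The same loop applies to hyperparameter selection by making `f` a function that trains a model with the candidate settings and returns its validation loss, which is the connection the paper below draws.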
 New: Particle swarm optimization for hyperparameter selection in deep neural networks
 Working with the CIFAR-10 data now. Tradeoff between filters and epochs:
NB_EPOCH   FIRST_FILTERS   MIDDLE_FILTERS   OUTPUT_NEURONS   Test score   Test accuracy   Elapsed (s)   Notes
10         16              32               256              0.8671       0.6972          565.9
5          32              64               512              0.8822       0.6849          514.2
10         32              64               512              0.7007       0.7650          1017.1
10         32              64               512              0.7244       0.7514          1145.7        augmented imagery
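To make the filters-vs-epochs tradeoff explicit, the logged runs can be reduced to accuracy gained per minute of training (accuracies and wall-clock seconds copied from the runs above; the labels are mine):

```python
# (label, test accuracy, elapsed seconds) for the four logged CIFAR-10 runs.
runs = [
    ("10 epochs, half filters",     0.6972,  565.94),
    ("5 epochs, full filters",      0.6849,  514.19),
    ("10 epochs, full filters",     0.7650, 1017.10),
    ("10 epochs, full + augmented", 0.7514, 1145.67),
]

for name, acc, secs in runs:
    print(f"{name:28s} {acc:.4f} acc   {acc / (secs / 60):.4f} acc/min")
```

By this crude measure the short full-filter run is the most time-efficient, while the long full-filter run has the best raw accuracy.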
 And yet, something is clearly wrong:
 Maybe try this version? samyzaf.com/ML/cifar10/cifar10.html