Phil 7.17.18

Some thoughts about Trump’s press conference with Putin, as opposed to the G7 and NATO meetings, from a game-theoretic perspective. Yes, it’s time for some (more) game theory!

Axelrod, in The Evolution of Cooperation, shows that there are several strategies that one can use in the iterated prisoner’s dilemma (IPD), and that these strategies vary by the amount of contact expected in the future. If none or very little future interaction is expected, then it pays to DEFECT, which basically means to screw your opponent.

If, on the other hand, there is an expectation of extensive future contact, the best strategy is some form of TIT-FOR-TAT, which means that you start by cooperating with your opponent, but if they defect, you match that defection with one of your own. If they cooperate, you match that as well.

This turns out to be a simple, clear strategy that rewards cooperative behavior and punishes jerks. It is powerful enough that a small cluster of TIT-FOR-TAT can invade a population of ALL_DEFECT. It has some weaknesses as well. We’ll get to that later.
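Axelrod's setup is easy to sketch. Here's a toy IPD in Python with the standard payoffs (5 for a lone defector, 3 each for mutual cooperation, 1 each for mutual defection, 0 for the sucker); the names and structure are mine, not Axelrod's:

```python
# Toy iterated prisoner's dilemma with the standard Axelrod payoffs:
# mutual cooperation 3/3, mutual defection 1/1, lone defector 5/0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_moves, their_moves):
    # cooperate first, then echo the opponent's last move
    return "C" if not their_moves else their_moves[-1]

def all_defect(my_moves, their_moves):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two TIT-FOR-TATs cooperate all the way: (30, 30) over ten rounds.
# TIT-FOR-TAT vs ALL_DEFECT loses only round one: (9, 14).
```

Against ALL_DEFECT, TIT-FOR-TAT gives up only the first round, which is why a cluster of them can out-score a population of defectors once they mostly play each other.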

Donald Trump, in the vast majority of his interactions, has presented an ALL_DEFECT strategy. That can actually make sense in the world of real estate, where there are lots of players performing similar roles and bankruptcy protections exist.

But with Russia in general and Putin in particular, Trump is very cooperative. Why is this case different?

It turns out that after four bankruptcies (1991, 1992, 2004, and 2009) it became impossible for Trump to get loans through traditional channels. In essence, he had defected on enough banks that the well was poisoned.

As the ability to get loans decreased, the amount of cash sales to Russian oligarchs increased. According to McClatchy, about $109 million was spent purchasing Trump-branded properties from 2003 to 2017. Remember that TIT-FOR-TAT can win over ALL_DEFECT if there is prolonged interaction. Fourteen years is a long time to train someone.

There are two cognitively easy strategies in the IPD – ALL_DEFECT, and ALL_COOPERATE. I think that once Trump tried defection a few times and got punished for it he went the other way with the Russians. My guess is that they have a whole team of people working on how to keep him there.

Which is why, at every turn, Trump cooperates. He knows what will happen if he doesn’t, and frankly, it’s less work than any of the other alternatives. And if you really only care for yourself, that’s a perfectly reasonable place to be.

7:00 – ASRC MKT

  • Still can’t connect to the Service Center (Betriebsdienst Zentrum) at Zurich U. Tried pinging the conference organizer, who appears to be based on campus
  • Travel report for SASO
  • Hotel in Trento
  • Ping Aaron M. about Doodle
  • Start on slides

Phil 7.7.18

8:00 – 9:00 ASRC MKT

  • At CI 2018. Hell of a time setting up eduroam. Nice venue, though. Winston Churchill called for the unification of Europe from that podium. Probably without PowerPoint DSCN0310
  • Patrick Meier – keynote – Digital humanitarian efforts
    • Mission is to pioneer the next generation of humanitarian technology
    • DSCN0313
    • DSCN0315
  • Poster pitches
    • Multiple barriers to crowdsourcing, ranging from operational to strategic
    • Anita Woolley – trust in AI: embedded agency, virtual agency, physical agency
    • Crowdoscope – qualitative and quantitative surveys – open comments. Not lists, but graphs
    • Market volatility with high-frequency trading and humans
    • How many people constitute a ‘crowd’?
    • Is novelty an advantage in crowdfunding?
    • QUEST – annotating questions on stackoverflow-style problems
    • Cyber-physical systems – e.g. smart transportation systems
  • Papers
  • Keynote 2
    • Optimizing the Human-Machine Partnership with Zooniverse DSCN0321 DSCN0322
      • Lucy Fortson
      • Galaxy Zoo
      • Zooniverse is on its third iteration and now supports project building
      • Can also point to a project
  • Session 2
    • Collective Intelligence for Deep Reinforcement Learning (MIT, mostly)
      • Evolutionary strategies (Salimans 2017) DSCN0327
    • Social learning strategies for matters of taste (This is a must-read!)
      • DSCN0326DSCN0325DSCN0324
    • Photo Sleuth: Combining Collective Intelligence and Computer Vision to
      Identify Historical Portraits

      • Good discussion of how to blend human and ML person identification
    • Toward Safer Crowdsourced Content Moderation
  • How Intermittent Breaks in Interaction Improve Collective Intelligence

Phil 6.19.18

7:00 – 9:00, 4:00 – 5:00 ASRC MKT

  • Here’s a list of organizations that are mobilizing to help immigrant children separated from their families
  • SASO trip
  • Rebuilt all the binaries, now I need to put them on the thumb drive – done
  • Added knobs to the implications slide. They sit next to the dimension and SIH lines. I realize that my slide deck is becoming a physical version of a memory palace.
  • Continuing Irrational Exuberance, though feeling like I should be reading Axelrod. Bring Evolution of Cooperation on the flight?
  • Naive Diversification Strategies in Defined Contribution Saving Plans
    • There is a worldwide trend toward defined contribution saving plans and growing interest in privatized social security plans. In both environments, individuals are given some responsibility to make their own asset allocation decisions, raising concerns about how well they do at this task. This paper investigates one aspect of the task, namely diversification. We show that many investors have very naive notions about diversification. For example, some investors follow what we call the 1/n strategy: they divide their contributions evenly across the funds offered in the plan. When this strategy (or others only slightly more sophisticated) is used, the assets chosen depend greatly on the make-up of the funds offered in the plan. We find evidence of naive diversification strategies both in experiments using employees at the University of California and the actual behavior of participants in a wide range of savings plans. In particular, we find the proportion of the assets the participants invest in stocks depends strongly on the proportion of stock funds in the plan. The results raise very serious questions about how privatized social security systems should be designed, questions that would be ignored in most economic analyses.
    • This is very much a dimension reduction exercise.
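A quick sketch of the 1/n effect, with made-up plans rather than the paper's data:

```python
# Under the 1/n heuristic the saver splits contributions evenly across
# whatever funds the plan offers, so the resulting stock allocation is
# a property of the menu, not of any risk preference.
def one_over_n_stock_share(fund_types):
    per_fund = 1.0 / len(fund_types)
    return sum(per_fund for t in fund_types if t == "stock")

plan_a = ["stock", "bond"]                    # -> 50% in stocks
plan_b = ["stock", "stock", "stock", "bond"]  # -> 75% in stocks
```

Same heuristic, different menus, very different portfolios; the only thing the saver effectively controls is the menu composition.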
  • A2P maintenance proposal

9:00 – 4:00 ASRC A2P

  • Coming up to speed on the Angular interface
    • Logging into CI and QA
    • Dashboard configurations

Phil 6.18.18

ASRC MKT 7:00 – 8:00

  • Nice ride on Saturday on Skyline drive
  • Using Social Network Information in Bayesian Truth Discovery
    • We investigate the problem of truth discovery based on opinions from multiple agents who may be unreliable or biased. We consider the case where agents’ reliabilities or biases are correlated if they belong to the same community, which defines a group of agents with similar opinions regarding a particular event. An agent can belong to different communities for different events, and these communities are unknown a priori. We incorporate knowledge of the agents’ social network in our truth discovery framework and develop Laplace variational inference methods to estimate agents’ reliabilities, communities, and the event states. We also develop a stochastic variational inference method to scale our model to large social networks. Simulations and experiments on real data suggest that when observations are sparse, our proposed methods perform better than several other inference methods, including majority voting, the popular Bayesian Classifier Combination (BCC) method, and the Community BCC method.
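Not their Laplace variational method, but the basic loop that truth discovery builds on is easy to sketch: alternate reliability-weighted voting with reliability re-estimation.

```python
# Sketch of the basic truth-discovery loop (not the paper's variational
# inference): estimate each event's state by reliability-weighted vote,
# then re-score each agent by agreement with the current estimates.
def truth_discovery(reports, iterations=10):
    # reports[agent][event] -> 0 or 1
    agents = list(reports)
    events = list(next(iter(reports.values())))
    reliability = {a: 0.5 for a in agents}  # start agnostic
    truth = {}
    for _ in range(iterations):
        for e in events:
            vote = sum(reliability[a] * (1 if reports[a][e] else -1)
                       for a in agents)
            truth[e] = 1 if vote > 0 else 0
        for a in agents:
            agree = sum(reports[a][e] == truth[e] for e in events)
            reliability[a] = agree / len(events)
    return truth, reliability

# Two agreeing agents and one contrarian: the contrarian's weight
# collapses and the majority account is adopted.
reports = {"a1": {"e1": 1, "e2": 0, "e3": 1},
           "a2": {"e1": 1, "e2": 0, "e3": 1},
           "a3": {"e1": 0, "e2": 1, "e3": 0}}
truth, reliability = truth_discovery(reports)
```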
  • Scale-free correlations in starling flocks
    • From bird flocks to fish schools, animal groups often seem to react to environmental perturbations as if of one mind. Most studies in collective animal behavior have aimed to understand how a globally ordered state may emerge from simple behavioral rules. Less effort has been devoted to understanding the origin of collective response, namely the way the group as a whole reacts to its environment. Yet, in the presence of strong predatory pressure on the group, collective response may yield a significant adaptive advantage. Here we suggest that collective response in animal groups may be achieved through scale-free behavioral correlations. By reconstructing the 3D position and velocity of individual birds in large flocks of starlings, we measured to what extent the velocity fluctuations of different birds are correlated to each other. We found that the range of such spatial correlation does not have a constant value, but it scales with the linear size of the flock. This result indicates that behavioral correlations are scale free: The change in the behavioral state of one animal affects and is affected by that of all other animals in the group, no matter how large the group is. Scale-free correlations provide each animal with an effective perception range much larger than the direct inter-individual interaction range, thus enhancing global response to perturbations. Our results suggest that flocks behave as critical systems, poised to respond maximally to environmental perturbations.
  • Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study
    • By reconstructing the three-dimensional positions of individual birds in airborne flocks of a few thousand members, we show that the interaction does not depend on the metric distance, as most current models and theories assume, but rather on the topological distance. In fact, we discovered that each bird interacts on average with a fixed number of neighbors (six to seven), rather than with all neighbors within a fixed metric distance. We argue that a topological interaction is indispensable to maintain a flock’s cohesion against the large density changes caused by external perturbations, typically predation. …
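The metric vs. topological distinction in code (k = 7 per the paper; the flock positions are made up):

```python
# A metric rule takes everyone within a fixed radius, so neighbor count
# swings with flock density; a topological rule always takes the k
# nearest birds, no matter how spread out the flock is.
def metric_neighbors(points, i, radius):
    px, py = points[i]
    return [j for j, (x, y) in enumerate(points)
            if j != i and (x - px) ** 2 + (y - py) ** 2 <= radius ** 2]

def topological_neighbors(points, i, k=7):
    px, py = points[i]
    others = sorted((j for j in range(len(points)) if j != i),
                    key=lambda j: (points[j][0] - px) ** 2 +
                                  (points[j][1] - py) ** 2)
    return others[:k]

sparse = [(0, 0), (10, 0), (20, 0), (30, 0),
          (40, 0), (50, 0), (60, 0), (70, 0)]
dense = [(x * 0.1, 0) for x in range(8)]
# A radius-5 metric rule finds nobody in the sparse flock and everybody
# in the dense one; the topological rule returns 7 neighbors in both.
```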
  • Thread on the failure to replicate the Stanford Prison Experiment by Alex Haslam (scholar) (home page). Paper coming soon
    • The Stanford Prison Experience—as it is presented in textbooks—presents human nature as naturally conforming to oppressive systems. This is a lesson that extends well beyond prison systems and the field of criminology—but it’s wrong. Alex and his colleagues (especially Steve Reicher) have been arguing for years that conformity often emerges when leaders cultivate a sense of shared identity. This is an active, engaged process—very different from automatic and mindless conformity.
  • Started Irrational Exuberance, by Robert Shiller
  • Send note to Don, Aaron and Shimei
  • Read Ego-motion in Self-Aware Deep Learning on Medium. It’s about reflective learning of navigation in physical spaces, though I wonder if there is an equivalent process in belief spaces. Looked through scholar and
  • Slide prep and Fika walkthrough
    • Went well. Ravi suggested adding another slide that discusses the methods in detail, while Sy pretty much demanded that I get rid of “Questions” and put the title of the paper in its place
    • When adding the detail for Ravi, I discovered that the simulator and map reconstruction did not handle single, high dimensional agents well, so I spent a few hours fixing bugs to get the screen captures to build the slides.

Phil 6.15.18

7:00 – 6:00 ASRC MKT

  • Montaigne and the Art of Conversation held on June 11, 2018
    • Michel de Montaigne, the inventor of the essay and the greatest philosopher of the Renaissance, is often imagined to be a solitary figure, lost in his library, writing to himself. However, his understanding of the practice of philosophy and the cultivation of the self were deeply social and tied to the give and take of debate and disputation among friends. Hampton’s talk—his “conversation”—will focus on one of Montaigne’s greatest essays, “On the Art of Conversation.” It will place the essay in Montaigne’s thought, and in the tradition of “philosophical conversation” that underpins the humanist tradition in the European West.
  • Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks (Thread overview)
    • Moreover, convolutional networks have precisely the same order-to-chaos transition as fully-connected networks, with vanishing gradients in the ordered phase and exploding gradients in the chaotic phase.
  • Susan Li (ML articles on Medium)
  • Working on slides. Walk through with Wayne today at 4:00
  • Re-read the paper. I’ve forgotten what’s in it!
  • Forward the Yao article, since it’s an example of what I’m modelling. It belongs up with the Strava maps
  • Strava maps are about discerning environment from behavior. Physical and social structures are visible (shorelines, mountains, and borders), from the perspective of road cyclists, who have simple rules:
    • Up is fun
    • Stations of the cross
    • Different populations on Strava (Commuter, mtn, road, etc)
    • Maps to Hofstede’s cultural dimensions
  • Meeting with Wayne to go over slides. Lots of rework. There is a difference in proposal and DC slides, which are showing a research direction, and a paper, which is showing a result.

Phil 6.13.18

7:00 – 4:00 ASRC MKT

  • International driver’s license – done
  • Add visually-impaired labels to paper – done
  • Start slides
  • Interesting article on dimension reduction: The faces of God in America: Revealing religious diversity across people and politics What strikes me about this study is actually how similar the depictions are. In belief space, this would be a closely woven neighborhood. It would be interesting to see an equivalent study on a less anthropomorphic deity like Vishnu… journal.pone.0198745.g002
    • Literature and art have long depicted God as a stern and elderly white man, but do people actually see Him this way? We use reverse correlation to understand how a representative sample of American Christians visualize the face of God, which we argue is indicative of how believers think about God’s mind. In contrast to historical depictions, Americans generally see God as young, Caucasian, and loving, but perceptions vary by believers’ political ideology and physical appearance. Liberals see God as relatively more feminine, more African American, and more loving than conservatives, who see God as older, more intelligent, and more powerful. All participants see God as similar to themselves on attractiveness, age, and, to a lesser extent, race. These differences are consistent with past research showing that people’s views of God are shaped by their group-based motivations and cognitive biases. Our results also speak to the broad scope of religious differences: even people of the same nationality and the same faith appear to think differently about God’s appearance.
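Reverse correlation itself is a simple idea: average the random patterns that participants said looked like the target face. A toy version (the study's actual pipeline is more elaborate):

```python
# Reverse correlation in miniature: show random pixel patterns, then
# average the ones the participant said resembled the target. The mean
# of the selected patterns approximates the internal template.
def classification_image(patterns, responses):
    chosen = [p for p, r in zip(patterns, responses) if r]
    return tuple(sum(px) / len(chosen) for px in zip(*chosen))

# Three two-pixel "images"; the participant picks the first and third.
mean_image = classification_image([(1, 0), (0, 1), (1, 1)],
                                  [True, False, True])  # -> (1.0, 0.5)
```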
  • Finished paper
  • Working on talk

personal

  • Shopping – done
  • taxes
  • laundry – done
  • generator/un-grounded short extension cord – done. Works!

Phil 5.18.18

7:00 – 4:00 ASRC MKT

Phil 5.17.18

7:00 – 4:00 ASRC MKT

  • How artificial intelligence is changing science – This page contains pointers to a bunch of interesting projects:
  • Multi-view Discriminative Learning via Joint Non-negative Matrix Factorization
    • Multi-view learning attempts to generate a classifier with a better performance by exploiting relationship among multiple views. Existing approaches often focus on learning the consistency and/or complementarity among different views. However, not all consistent or complementary information is useful for learning, instead, only class-specific discriminative information is essential. In this paper, we propose a new robust multi-view learning algorithm, called DICS, by exploring the Discriminative and non-discriminative Information existing in Common and view-Specific parts among different views via joint non-negative matrix factorization. The basic idea is to learn a latent common subspace and view-specific subspaces, and more importantly, discriminative and non-discriminative information from all subspaces are further extracted to support a better classification. Empirical extensive experiments on seven real-world data sets have demonstrated the effectiveness of DICS, and show its superiority over many state-of-the-art algorithms.
  • Add Nomadic, Flocking, and Stampede to terms. And a bunch more
  • Slides
  • Embedding navigation
    • Extend SmartShape to SourceShape. It should be a stripped down version of FlockingShape
    • Extend BaseCA to SourceCA, again, it should be a stripped down version of FlockingBeliefCA
    • Add a sourceShapeList for FlockingAgentManager that then passes that to the FlockingShapes
  • And it’s working! Well, drawing. Next is the interactions: Influence
  • Finally went and joined the IEEE

Phil 5.16.18

7:00 – 3:30 ASRC MKT

  • My home box has become very slow. 41 seconds to do a full recompile of GPM, while it takes 3 sec on a nearly identical machine at work. This may help?
  • Working on terms
  • Working on slides
  • Attending talk on Big Data, Security and Privacy – 11 am to 12 pm at ITE 459
    • Bhavani Thuraisingham
    • Big data management and analytics, emphasizing GANs and deep learning <- the new hotness
      • How do you detect attacks?
      • UMBC has real time analytics in cyber? IOCRC
    • Example systems
      • Cloud centric assured information sharing
    • Research challenges:
      • dynamically adapting and evolving policies to maintain privacy under a changing environment
      • Deep learning to detect attacks that were previously not detectable
      • GANs or attacker and defender?
      • Scalability is a big problem, e.g. policies within Hadoop operations
      • How much information is being lost by not sharing data?
      • Fine grained access control with Hive RDF?
      • Distributed Search over Encrypted Big Data
    • Data Security & Privacy
      • Honeypatching – Kevin xxx on software deception
      • Novel Class detection – novel class embodied in novel malware. There are malware repositories?
    • Lifecycle for IoT
    • Trustworthy analytics
      • Intel SGX
      • Adversarial SVM
      • This resembles hyperparameter tuning. What is the gradient that’s being descended?
      • Binary retrofitting. Some kind of binary man-in-the-middle?
      • Two body problem cybersecurity
    • Question –
      • discuss how a system might recognize an individual from session to session while being unable to identify the individual
      • What about multiple combinatorial attacks
      • What about generating credible false information to attackers, that also has steganographic components for identifying the attacker?
  • I had managed to not commit the embedding xml and the programs that made them, so first I had to install gensim and lxml at home. After that it’s pretty straightforward to recompute with what I currently have.
  • Moving ARFF and XLSX output to the menu choices. – done
  • Get started on rendering
    • Got the data read in and rendering, but it’s very brute force:
      if (getCurrentEmbeddings().loadSuccess) {
          double posScalar = ResizableCanvas.DEFAULT_SCALAR / 2.0;
          List<WordEmbedding> weList = getCurrentEmbeddings().getEmbeddings();
          for (WordEmbedding we : weList) {
              // scale each marker by the term's frequency
              double size = 10.0 * we.getCount();
              SmartShape ss = new SmartShape(we.getEntry(), Color.WHITE, Color.BLACK);
              // place the shape at the (scaled) 2D embedding coordinate
              ss.setPos(we.getCoordinate(0) * posScalar, we.getCoordinate(1) * posScalar);
              ss.setSize(size, size);
              ss.setAngle(0);
              ss.setType(SmartShape.SHAPE_TYPE.OVAL);
              canvas.addShape(ss);
          }
      }

      It took a while to remember how shapes and agents work together. Next steps:

      • Extend SmartShape to SourceShape. It should be a stripped down version of FlockingShape
      • Extend BaseCA to SourceCA, again, it should be a stripped down version of FlockingBeliefCA
      • Add a sourceShapeList for FlockingAgentManager that then passes that to the FlockingShapes

Phil 5.15.18

7:00 – 4:00 ASRC MKT

Phil 5.14.18

7:00 – 3:00 ASRC MKT

  • Working on Zurich Travel. Ricardo is getting tix, and I got a response back from the conference on an extended stay
  • Continue with slides
  • See if there is a binary embedding reader in Java? Nope. Maybe in ml4j, but it’s easier to just write out the file in the format that I want
  • Done with the writer: Vim
  • Fika
  • Finished Simulacra and Simulation. So very, very French. From my perspective, there are so many different lines of thought coming out of the work that I can’t nail down anything definitive.
  • Started The Evolution of Cooperation

Phil 4.5.18

7:00 – 5:00 ASRC MKT

  • More car stampedes: On one of L.A.’s steepest streets, an app-driven frenzy of spinouts, confusion and crashes
  • Working on the first draft of the paper. I think(?) I’m reasonably happy with it.
  • Trying to determine the submission guidelines. Are IEEE papers anonymized? If they are, here’s the post on how to do it and my implementation:
    \usepackage{xcolor}
    \usepackage{soul}
    
    \makeatletter
    \newif\if@blind
    \@blindfalse % use \@blindtrue to anonymize, \@blindfalse on the final version
    \if@blind
    	\sethlcolor{black}
    \else
    	\let\hl\relax
    \fi
    \makeatother
    
    \begin{document}
    this text is \hl{redacted}
    \end{document}
    
    
    
  • So this clever solution doesn’t work, because you can still select the text under the highlight. This is my much simpler solution:
    %\newcommand*{\ANON}{}
    \ifdefined\ANON
    	\author{\IEEEauthorblockN{Anonymous Author(s)}
    	\IEEEauthorblockA{\textit{this line kept for formatting} \\
    		\textit{this line kept for formatting}\\
    		this line kept for formatting \\
    		this line kept for formatting}
    }
    \else
    	\author{\IEEEauthorblockN{Philip Feldman}
    	\IEEEauthorblockA{\textit{ASRC Federal} \\
    	Columbia, USA \\
    	philip.feldman@asrcfederal.com}
    	}
    \fi
  • Submitting to arXiv
  • Boy, this hit home: The Swamp of Sadness
    • Even with Atreyu pulling on his bridle, Artax still had to start walking and keep walking to survive, and so do you. You have to pull yourself out of the swamp. This sucks, because it’s difficult, slow, hand-over-hand, gritty, horrible work, and you will end up very muddy. But I think the muddier the swamp, the better the learning really. I suspect the best kinds of teachers have themselves walked through very horrible swamps.
  • You have found the cui2vec explorer. This website will let you interact with embeddings for over 108,000 medical concepts. These embeddings were created using insurance claims for 60 million Americans, 1.7 million full-text PubMed articles, and clinical notes from 20 million patients at Stanford. More information about the methods used to create these embeddings can be found in our preprint: https://arxiv.org/abs/1804.01486 
  • Going to James Foulds’ lecture on Mixed Membership Word Embeddings for Computational Social Science. Send email for meeting! Such JuryRoom! Done!
  • Kickoff meeting for the DHS proposal. We have until the 20th to write everything. Sheesh

Phil 3.30.18

TF Dev Sumit

Highlights blog post from the TF product manager

Keynote

  • Connecterra tracking cows
  • Google is an AI-first company. All products are being influenced. TF is the dogfood that everyone is eating at Google.

Rajat Monga

  • The last year has been focused on making TF easy to use
  • 11 million downloads
  • blog.tensorflow.org
  • youtube.com/tensorflow
  • tensorflow.org/ub
  • tf.keras – full implementation.
  • Premade estimators
  • three line training from reading to model? What data formats?
  • Swift and tensorflow.js

Megan

  • Real-world data and time-to-accuracy
  • Fast version is the pretty version
  • TensorFlow Lite is a 300% speedup in inference? Just on mobile(?)
  • Training speedup is about 300% – 400% annually
  • Cloud TPUs are available in v2. 180 teraflops of computation
  • github.com/tensorflow/tpu
  • ResNet-50 on Cloud TPU in < 15

Jeff Dean

  • Grand Engineering challenges as a list of ML goals
  • Engineer the tools for scientific discovery
  • AutoML – Hyperparameter tuning
  • Less expertise (What about data cleaning?)
    • Neural architecture search
    • Cloud Automl for computer vision (for now – more later)
  • Retinal data is being improved as the data labeling improves. The trained human trains the system proportionally
  • Completely new, novel scientific discoveries – machine scan explore horizons in different ways from humans
  • Single shot detector

Derrek Murray @mrry (tf.data)

  • Core TF team
  • tf.data  –
  • Fast, Flexible, and Easy to use
    • ETL for TF
    • tensorflow.org/performance/datasets_performance
    • Dataset tf.SparseTensor
    • Dataset.from_generator – builds a Dataset from a Python generator (e.g. one yielding numpy arrays)
    • for batch in dataset: train_model(batch)
    • 1.8 will read in CSV
    • tf.contrib.data.make_batched_features_dataset
    • tf.contrib.data.make_csv_dataset()
    • Figures out types from column names

Alexandre Passos (Eager Execution)

  • Eager Execution
  • Automatic differentiation
  • Differentiation of graphs and code <- what does this mean?
  • Quick iterations without building graphs
  • Deep inspection of running models
  • Dynamic models with complex control flows
  • tf.enable_eager_execution()
  • immediately run the tf code that can then be conditional
  • w = tfe.Variable([[1.0]])
  • tape to record actions, so it’s possible to evaluate a variety of approaches as functions
  • eager supports debugging!!!
  • And profilable…
  • Google collaboratory for Jupyter
  • Customizing gradient, clipping to keep from exploding, etc
  • tf variables are just python objects.
  • tfe.metrics
  • Object-oriented saving of TF models. Kind of like pickle, in that associated variables are saved as well
  • Supports component reuse?
  • Single GPU is competitive in speed
  • Interacting with graphs: call into graphs from eager, and call into eager from a graph
  • Use tf.keras.layers, tf.keras.Model, tf.contribs.summary, tfe.metrics, and object-based saving
  • Recursive RNNs work well in this
  • Live demo goo.gl/eRpP8j
  • getting started guide tensorflow.org/programmers_guide/eager
  • example models goo.gl/RTHJa5
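The “tape” idea above is worth pinning down. Here’s a scalar reverse-mode tape in plain Python, no TF required; the names are mine:

```python
# "Tape" autodiff in miniature: record each op during the forward pass,
# then replay the tape backwards to accumulate gradients.
class Tape:
    def __init__(self):
        self.ops = []  # (output_node, [(input_node, local_gradient), ...])

    def var(self, value):
        return {"value": value, "grad": 0.0}

    def mul(self, a, b):
        out = self.var(a["value"] * b["value"])
        # d(ab)/da = b, d(ab)/db = a
        self.ops.append((out, [(a, b["value"]), (b, a["value"])]))
        return out

    def add(self, a, b):
        out = self.var(a["value"] + b["value"])
        self.ops.append((out, [(a, 1.0), (b, 1.0)]))
        return out

    def backward(self, out):
        out["grad"] = 1.0
        for node, inputs in reversed(self.ops):
            for inp, local in inputs:
                inp["grad"] += node["grad"] * local

tape = Tape()
w = tape.var(2.0)
y = tape.add(tape.mul(w, w), tape.mul(tape.var(3.0), w))  # y = w^2 + 3w
tape.backward(y)
# w["grad"] is dy/dw = 2w + 3 = 7.0 at w = 2
```

Eager-mode TF does the same thing with tensors: record ops on the way forward, replay them backward to accumulate gradients.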

Daniel Smilkov (@dsmilkov) Nikhl Thorat (@nsthorat)

  • In-Browser ML (No drivers, no installs)
  • Interactive
  • Browsers have access to sensors
  • Data stays on the client (preprocessing stage)
  • Allows inference and training entirely in the browser
  • Tensorflow.js
    • Author models directly in the browser
    • import pre-trained models for inference
    • re-train imported models (with private data)
    • Layers API, (Eager) Ops API
    • Can port Keras or TF models
  • Can continue to train a model that is downloaded from the website
  • This is really nice for accessibility
  • js.tensorflow.org
  • github.com/tensorflow/tfjs
  • Mailing list: goo.gl/drqpT5

Brennen Saeta

  • Performance optimization
  • Need to be able to increase performance exponentially to be able to train better
  • tf.data is the way to load data
  • Tensorboard profiling tools
  • Trace viewer within Tensorboard
  • Map functions seem to take a long time?
  • dataset.map(parser_fn, num_parallel_calls=64) <- multithreading
  • Software pipelining
  • Distributed datasets are becoming critical. They will not fit on a single instance
  • Accelerators work in a variety of ways, so optimizing is hardware dependent. For example, lower precision can be much faster
  • bfloat16 brain floating point format. Better for vanishing and exploding gradients
  • Systolic processors load the hardware matrix while it’s multiplying, since you start at the upper left corner…
  • Hardware is becoming harder and harder to compare apples-to-apples. You need to measure end-to-end on your own workloads. As a proxy, Stanford’s DAWNBench
  • Two frameworks: XLA and Graph
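The bfloat16 trick is simple enough to show: keep float32’s 8-bit exponent (same dynamic range, which is what matters for vanishing/exploding gradients) and chop the mantissa to 7 bits, i.e. keep the top 16 bits of the float32 encoding. A sketch (real hardware rounds to nearest even; this truncates for brevity):

```python
import struct

# bfloat16 = the top 16 bits of an IEEE-754 float32: the full 8-bit
# exponent survives, the mantissa drops from 23 bits to 7.
def to_bfloat16(x):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

# Small powers of two survive exactly; pi loses its low mantissa bits:
# to_bfloat16(1.0) -> 1.0, to_bfloat16(3.14159...) -> 3.140625
```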

Mustafa Ispir (tf.estimator, high level modules for experiments and scaling)

  • estimators fill in the model, based on Google experiences
  • define as an ml problem
  • pre made estimators
  • reasonable defaults
  • feature columns – bucketing, embedding, etc
  • estimator = model_to_estimator
  • image = hub.image_embedding_column(…)
  • supports scaling
  • export to production
  • estimator.export_savemodel()
  • Feature columns (from csv, etc) intro, goo.gl/nMEPBy
  • Estimators documentation, custom estimators
  • Wide-n-deep (goo.gl/l1cL3N from 2017)
  • Estimators and Keras (goo.gl/ito9LE Effective TensorFlow for Non-Experts)

Igor Sapirkin

  • distributed tensorflow
  • estimator is TF’s highest level of abstraction in the API. Google recommends using the highest level of abstraction you can be effective in
  • Justine debugging with Tensorflow Debugger
  • plugins are how you add features
  • embedding projector with interactive label editing

Sarah Sirajuddin, Andrew Selle (TensorFlow Lite) On-device ML

  • TF Lite interpreter is only 75 kilobytes!
  • Would be useful as a biometric anonymizer for trustworthy anonymous citizen journalism. Maybe even adversarial recognition
  • Introduction to TensorFlow Lite → https://goo.gl/8GsJVL
  • Take a look at this article “Using TensorFlow Lite on Android” → https://goo.gl/J1ZDqm

Vijay Vasudevan AutoML @spezzer

  • Theory lags practice in valuable discipline
  • Iteration using human input
  • Design your code to be tunable at all levels
  • Submit your idea to an idea bank

Ian Langmore

  • Nuclear Fusion
  • TF for math, not ML

Cory McLain

  • Genomics
  • Would this be useful for genetic algorithms as well?

Ed Wilder-James

  • Open source TF community
  • Developers mailing list developers@tensorflow.org
  • tensorflow.org/community
  • SIGs SIGBuild, other coming up
  • SIG Tensorboard <- this

Chris Lattner

  • Improved usability of TF
  • 2 approaches, Graph and Eager
  • Compiler analysis?
  • Swift language support as a better option than Python?
  • Richard Wei
  • Did not actually see the compilation process with error messages?

TensorFlow Hub Andrew Gasparovic and Jeremiah Harmsen

  • Version control for ML
  • Reusable module within the hub. Less than a model, but shareable
  • Retrainable and backpropagateable
  • Re-use the architecture and trained weights (And save, many, many, many hours in training)
  • tensorflow.org/hub
  • module = hub.Module(…., trainable = true)
  • Pretrained and ready to use for classification
  • Packages the graph and the data
  • Universal Sentence Encoder: semantic similarity, etc. Very little training data
  • Lower the learning rate so that you don’t ruin the existing weights
  • tfhub.dev
  • modules are immutable
  • Colab notebooks
  • use #tfhub when modules are completed
  • Try out the end-to-end example on GitHub → https://goo.gl/4DBvX7

TF Extensions Clemens Mewald and Raz Mathias

  • TFX is developed to support lifecycle from data gathering to production
  • Transform: Develop training model and serving model during development
  • Model takes a raw data model as the request. The transform is being done in the graph
  • RESTful API
  • Model Analysis:
  • ml-fairness.com – ROC curve for every group of users
  • github.com/tensorflow/transform

Project Magenta (Sherol Chen)

People:

  • Suharsh Sivakumar – Google
  • Billy Lamberta (documentation?) Google
  • Ashay Agrawal Google
  • Rajesh Anantharaman Cray
  • Amanda Casari Concur Labs
  • Gary Engler Elemental Path
  • Keith J Bennett (bennett@bennettresearchtech.com – ask about rover decision transcripts)
  • Sandeep N. Gupta (sandeepngupta@google.com – ask about integration of latent variables into TF usage as a way of understanding the space better)
  • Charlie Costello (charlie.costello@cloudminds.com – human robot interaction communities)
  • Kevin A. Shaw (kevin@algoint.com data from elderly to infer condition)