Phil 2.11.18

Introduction to Learning to Trade with Reinforcement Learning

  • In this post, I’m going to argue that training Reinforcement Learning agents to trade in the financial (and cryptocurrency) markets can be an extremely interesting research problem. I believe that it has not received enough attention from the research community but has the potential to push the state of the art of many related fields. It is quite similar to training agents for multiplayer games such as DotA, and many of the same research problems carry over. Knowing virtually nothing about trading, I have spent the past few months working on a project in this field.
  • This sounds to me like reinforcement learning figuring out game theory. Might be useful for NOAA as well

Worked on getting the MapBuilder app into a useful standalone app (screenshot: 2018-02-11 (1))


Phil 1.30.18

7:00 – 5:00 ASRC MKT

  • Big thought for today. In a civilization context, the three phases of collective intelligence work like this. These phases relate to computational effort, which is proportional to the number of dimensions that an individual has to consider in their existential calculus. The assumption is that lower computational effort is selected for at natural explore/exploit ratios.
    • Exploration phase. Nomadic explorers are introduced to a new environment. Can be physical, informational, cognitive, etc. This phase has the highest dimensional processing required for the individual.
    • Exploitation phase. Social patterns increase the hill-climbing power of agents in the environment, resulting in sufficiently good access to resources. This phase employs fewer dimensions to support consensus and polarization.
    • Inertial phase. Social influence becomes dominant and environmental influence wanes. Local diversity drops as similar agents cluster tightly together. Resources wane. This phase employs the most dimension reduction and the highest polarization, resulting in high implicit coordination.
    • Collapse. Implied, since the Inertial phase is unsustainable. If the previous population produced explorers that found new, productive environments, the cycle can repeat elsewhere.
  • Continuing BIC
    • “We need to know, in detail, what deliberations are like that people engage in when they group-identify”. Also, agency transformation (image: AgencyTransformation)
  • Rules, norms and institutional erosion: Of non-compliance, enforcement and lack of rule of law
    • What I am seeing right now in the US (a steady and slow erosion of democratic norms and a systematic violation of rules by the President Elect, in particular as though “they don’t apply to him“) is something that I’ve seen in other countries where I have studied formal and informal rules and institution building (and decay). This, in my view, is worrisome. If the US is going to want to continue having a functioning democracy where compliance with rules and norms is an expectation at the societal level, it’s going to have to do something major to stop this systematic rule violation.
  • Evaluation of Interactive Machine Learning Systems
    • The evaluation of interactive machine learning systems remains a difficult task. These systems learn from and adapt to the human, but at the same time, the human receives feedback and adapts to the system. Getting a clear understanding of these subtle mechanisms of co-operation and co-adaptation is challenging. In this chapter, we report on our experience in designing and evaluating various interactive machine learning applications from different domains. We argue for coupling two types of validation: algorithm-centered analysis, to study the computational behaviour of the system; and human-centered evaluation, to observe the utility and effectiveness of the application for end-users. We use a visual analytics application for guided search, built using an interactive evolutionary approach, as an exemplar of our work. We argue that human-centered design and evaluation complement algorithmic analysis, and can play an important role in addressing the “black-box” effect of machine learning. Finally, we discuss research opportunities that require human-computer interaction methodologies, in order to support both the visible and hidden roles that humans play in interactive machine learning.
  • Jensen–Shannon divergence – I think I can use this to show the distance between a full coordination matrix and one that contains only the main diagonal. A quick sketch of the computation is at the end of this list.
  • Evolution of social behavior in finite populations: A payoff transformation in general n-player games and its implications
    • The evolution of social behavior has been the focus of many theoretical investigations, which typically have assumed infinite populations and specific payoff structures. This paper explores the evolution of social behavior in a finite population using a general n-player game. First, we classify social behaviors in a group of n individuals based on their effects on the actor’s and the social partner’s payoffs, showing that in general such classification is possible only for a given composition of strategies in the group. Second, we introduce a novel transformation of payoffs in the general n-player game to formulate explicitly the effects of a social behavior on the actor’s and the social partners’ payoffs. Third, using the transformed payoffs, we derive the conditions for a social behavior to be favored by natural selection in a well-mixed population and in the presence of multilevel selection.
  • Got the data for the verdicts and live verdicts set up right, or at least closer (screenshot: JuryRoom)
  • Booked a room for the CHIIR Hotel
  • Got farther on UltimateAngular (screenshot: UltimateAngular)
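A minimal sketch of that Jensen–Shannon comparison, assuming the coordination matrices are non-negative and can be flattened and normalized into probability distributions (the matrices here are random stand-ins):

```python
# Jensen-Shannon divergence between a full coordination matrix and a
# version keeping only the main diagonal. JSD(P,Q) = 0.5*KL(P,M) +
# 0.5*KL(Q,M) with M = (P+Q)/2; scipy's entropy() computes the KL term.
import numpy as np
from scipy.stats import entropy

def as_distribution(m):
    """Flatten a non-negative matrix and normalize it to sum to 1."""
    flat = np.asarray(m, dtype=float).ravel()
    return flat / flat.sum()

def js_divergence(p, q):
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)

full = np.random.rand(10, 10)        # stand-in for a full coordination matrix
diag_only = np.diag(np.diag(full))   # same matrix with only the main diagonal

print(js_divergence(as_distribution(full), as_distribution(diag_only)))
```

The divergence is 0 for identical matrices and approaches log(2) nats as the distributions stop overlapping, which gives a bounded distance between full coordination and diagonal-only coordination.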

Phil 1.29.18

7:00 – 5:30 ASRC MKT

  • The phrase “Epistemic Game Theory” occurred to me in the shower. Looked it up and found these two things:
  • When it’s easier to agree than discuss, it should be easier to stampede.
  • Like vs. words
  • This is also a piece of Salganik’s work as described in Leading the Herd Astray: An Experimental Study of Self-Fulfilling Prophecies in an Artificial Cultural Market
  • An article on FB optimization and how to change the ratio of likes to comments, etc
  • I don’t think people did. It’s just that it’s easier to not think too much 🙂 people are busy selling tools that do everything for people, and people are happy buying tools to limit thinking. The analogy of replacing cognitive load with perception by VIS misleads in this regard. (Twitter)
  • Continuing BIC
    • Dimension reduction is a form of induced conceptual myopia (pg 89)? (image: Conceptual Myopia)
  • AI Roundup workshop today
    • Zhenpeng, Biruh, Phil, Aaron, Eric, Eric, Kevin
    • Eric – Introductory remarks. Budget looks good for 2018. Direction, chance to overlap, get leaders together for unique differentiators and something that we can build a business around. There has to be a really good business case with revenue in the out years
    • Aaron – CDS for A2P. Collaborate on analytics, ML, etc. Non corporate focused. Emerging technologies and trends. Helping each other out. Background in IC software dev.
    • Pam Scheller – SW Aegis. BD. EE, MS Computer engineering.
    • Biruh, TF, LIDAR, Generalized AI as hobby.
    • Zhenpeng Lee – Physics, Instrument Data Processing for GOES-R. FFT. GOES-R radiometric analysis. 7k detector rows? Enormous data sets. Attempting to automate the analysis of these data sets. Masters in Computer Science from JHU. Written most of his code from scratch.
    • Kevin Wainwright. Software engineering Aegis. C&C, etc. Currently working on a cloud based analytics with ML for big data, anomaly detection, etc. Looking for deviation from known flight paths
    • Eric Velte. History degree. Aegis. Situational awareness. Chief technologists for missions solutions group. Software mostly. Data analytics for the last two years. Big Data Analytics Platform.
    • Cornel – engineer: zero-G heat transfer, spacecraft work. Technology roadmaps for thermal control. Then business development, mostly for DoD. Sports research – head of Olympic Committee research: kayaks, women’s 8, horse cooling, bobsleds.
    • Mike Beduck. Chemical Engineering and computer science. Visualization, new to big data. Closed system sensor fusion. RFP response, best practices. Repository for analytics
    • George. Laser physics. Cardiac imaging analysis. Software development, 3D graphics. Medical informatics. CASI ground systems. More GOES-R/S. Image and signal processing and analysis.
    • Anton is lurking and listening. Branding and marketing.
  • A2P WIP
    • Put a place on sharepoint for papers and other documents – annotated bibliography.
    • Floated the JuryRoom app. Need to mention that the polarizing discussion closes at consensus.
  • Zhenpeng Lee AIMS – GOES-R. What went wrong and how to fix. ML to find pattern change in 20k sensor streams. Full training on each day’s data, then large scale clustering. Trends are seasonal? Relationships between sensors? Channel has 200-600 detectors. “Machine Learning of Situational Awareness” MLP written in Java. TANH activation function.
    • Eric Haught: Long term quest for condition-based maintenance.
    • Aaron – we are all trying to come up with a useful cross platform approach to anomaly detection.
    • Training size: 100k samples? Sample selection reduce to 200? Not sure what the threshold sensitivity is
  • Eric Velte – Devops. Centralize SW dev and support into a standardized framework. NO SECURITY STACK!!!!!
  • Dataforbio? Video series

Phil 1.16.18

ASRC MKT 7:00 – 4:30

  • Tit for tat in heterogeneous populations
    • The “iterated prisoner’s dilemma” is now the orthodox paradigm for the evolution of cooperation among selfish individuals. This viewpoint is strongly supported by Axelrod’s computer tournaments, where ‘tit for tat’ (TFT) finished first. This has stimulated interest in the role of reciprocity in biological societies. Most theoretical investigations, however, assumed homogeneous populations (the setting for evolutionary stable strategies) and programs immune to errors. Here we try to come closer to the biological situation by following a program that takes stochasticities into account and investigates representative samples. We find that a small fraction of TFT players is essential for the emergence of reciprocation in a heterogeneous population, but only paves the way for a more generous strategy. TFT is the pivot, rather than the aim, of an evolution towards cooperation.
    • It’s a Nature Note, so a quick read. In this case, the transition is from AllD->TFT->GTFT, where evolution stops.
  • A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner’s Dilemma game
    • The Prisoner’s Dilemma is the leading metaphor for the evolution of cooperative behaviour in populations of selfish agents, especially since the well-known computer tournaments of Axelrod and their application to biological communities. In Axelrod’s simulations, the simple strategy tit-for-tat did outstandingly well and subsequently became the major paradigm for reciprocal altruism. Here we present extended evolutionary simulations of heterogeneous ensembles of probabilistic strategies including mutation and selection, and report the unexpected success of another protagonist: Pavlov. This strategy is as simple as tit-for-tat and embodies the fundamental behavioural mechanism win-stay, lose-shift, which seems to be a widespread rule. Pavlov’s success is based on two important advantages over tit-for-tat: it can correct occasional mistakes and exploit unconditional cooperators. This second feature prevents Pavlov populations from being undermined by unconditional cooperators, which in turn invite defectors. Pavlov seems to be more robust than tit-for-tat, suggesting that cooperative behaviour in natural situations may often be based on win-stay, lose-shift.
    • win-stay = exploit, lose-shift = explore (see the sketch at the end of this list)
  • Five rules for the evolution of cooperation
    • Cooperation is needed for evolution to construct new levels of organization. The emergence of genomes, cells, multi-cellular organisms, social insects and human society are all based on cooperation. Cooperation means that selfish replicators forgo some of their reproductive potential to help one another. But natural selection implies competition and therefore opposes cooperation unless a specific mechanism is at work. Here I discuss five mechanisms for the evolution of cooperation: kin selection, direct reciprocity, indirect reciprocity, network reciprocity and group selection. For each mechanism, a simple rule is derived which specifies whether natural selection can lead to cooperation.
  • Added a paragraph to the previous-work section covering Tit-for-Tat and Multi-armed Bandit research.
  • Worked with Aaron on setting up sprint goals
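A toy sketch of the win-stay, lose-shift mechanism from the Nowak & Sigmund paper above, played against Tit-for-Tat with occasional mistakes. The payoffs are the standard Axelrod values; the noise rate and round count are arbitrary assumptions:

```python
# Iterated Prisoner's Dilemma: Tit-for-Tat vs. Pavlov (win-stay, lose-shift).
# 'C' = cooperate, 'D' = defect; payoffs use the standard (T,R,P,S) = (5,3,1,0).
import random

PAYOFF = {('C','C'): (3,3), ('C','D'): (0,5), ('D','C'): (5,0), ('D','D'): (1,1)}

def tit_for_tat(opp_last):
    return opp_last or 'C'          # cooperate first, then copy the opponent

def pavlov(my_last, my_payoff):
    if my_last is None:
        return 'C'
    # win-stay (exploit): repeat the move if it earned R or T;
    # lose-shift (explore): switch after P or S
    return my_last if my_payoff >= 3 else ('D' if my_last == 'C' else 'C')

def play(rounds=200, noise=0.05):
    a_last = b_last = None
    b_payoff = 0
    totals = [0, 0]
    for _ in range(rounds):
        a = tit_for_tat(b_last)
        b = pavlov(b_last, b_payoff)
        if random.random() < noise:  # occasional mistakes, as in the simulations
            a = 'D' if a == 'C' else 'C'
        a_payoff, b_payoff = PAYOFF[(a, b)]
        totals[0] += a_payoff
        totals[1] += b_payoff
        a_last, b_last = a, b
    return totals

print(play())  # totals over 200 noisy rounds; vary noise to compare strategies
```

The interesting property is lose-shift error correction: a Pavlov pair that stumbles into mutual defection earns P, shifts together, and restores cooperation, while a TFT pair locks into alternating retaliation after a single mistake.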

Phil 1.15.18

7:00 – 3:30 ASRC MKT

  • Individual mobility and social behaviour: Two sides of the same coin
    • According to personality psychology, personality traits determine many aspects of human behaviour. However, validating this insight in large groups has been challenging so far, due to the scarcity of multi-channel data. Here, we focus on the relationship between mobility and social behaviour by analysing two high-resolution longitudinal datasets collecting trajectories and mobile phone interactions of ∼ 1000 individuals. We show that there is a connection between the way in which individuals explore new resources and exploit known assets in the social and spatial spheres. We point out that different individuals balance the exploration-exploitation trade-off in different ways and we explain part of the variability in the data by the big five personality traits. We find that, in both realms, extraversion correlates with an individual’s attitude towards exploration and routine diversity, while neuroticism and openness account for the tendency to evolve routine over long time-scales. We find no evidence for the existence of classes of individuals across the spatio-social domains. Our results bridge the fields of human geography, sociology and personality psychology and can help improve current models of mobility and tie formation.
    • This work has ways of identifying explorers and exploiters programmatically.
    • (images: Exploit, SocialSpatial)
  • Reading the Google Brain team’s year in review in two parts
    • From part two: We have also teamed up with researchers at leading healthcare organizations and medical centers including Stanford, UCSF, and University of Chicago to demonstrate the effectiveness of using machine learning to predict medical outcomes from de-identified medical records (i.e. given the current state of a patient, we believe we can predict the future for a patient by learning from millions of other patients’ journeys, as a way of helping healthcare professionals make better decisions). We’re very excited about this avenue of work and we look forward to telling you more about it in 2018
    • Facets – Facets contains two robust visualizations to aid in understanding and analyzing machine learning datasets. Get a sense of the shape of each feature of your dataset using Facets Overview, or explore individual observations using Facets Dive.
  • Found this article on LSTM-based prediction for robots and sent it to Aaron: Deep Episodic Memory: Encoding, Recalling, and Predicting Episodic Experiences for Robot Action Execution
  • Working through Beyond Individual Choice – actually wound up going through the Complexity Labs Game Theory course
    • Social traps are stampedes? Sliding reinforcers (lethal barrier)
    • The transition from Tit-for-tat (TFT) to generous TFT to cooperate always, to defect always has similarities to the excessive social trust stampede as well.
    • Unstable cycling vs. evolutionarily stable strategies
    • Replicator dynamic model: Explore/Exploit (see the sketch at the end of this list)
      • In mathematics, the replicator equation is a deterministic monotone non-linear and non-innovative game dynamic used in evolutionary game theory. The replicator equation differs from other equations used to model replication, such as the quasispecies equation, in that it allows the fitness function to incorporate the distribution of the population types rather than setting the fitness of a particular type constant. This important property allows the replicator equation to capture the essence of selection. Unlike the quasispecies equation, the replicator equation does not incorporate mutation and so is not able to innovate new types or pure strategies.
    • Fisher’s Fundamental Theorem: “The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time.”
    • Explorers are a form of weak ties, which is one of the reasons they add diversity. Exploiters are strong ties
  • I also had a thought about the GPM simulator. I could add an evolutionary component that would let agents breed, age and die to see if Social Influence Horizon and Turn Rate are selected towards any attractor. My guess is that there is a tension between explorers and stampeders that can be shown to occur over time.
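A minimal sketch of the replicator dynamic quoted above, reduced to a two-strategy explore/exploit game. The payoff matrix is an arbitrary stand-in, chosen so each strategy does better when rare, which produces a stable mixed population:

```python
# Replicator dynamics: dx_i/dt = x_i * (f_i(x) - fbar(x)), where
# f_i(x) = (A x)_i is strategy i's fitness against the current population
# mix and fbar = x . f is the population-average fitness. Note the fitness
# of a type depends on the distribution of types, as the quote says.
import numpy as np

A = np.array([[1.0, 3.0],     # explore vs. (explore, exploit)
              [2.0, 1.5]])    # exploit vs. (explore, exploit)

x = np.array([0.9, 0.1])      # initial shares of explorers and exploiters
dt = 0.01
for _ in range(5000):
    f = A @ x                 # fitness of each strategy
    fbar = x @ f              # average fitness (cf. Fisher's theorem)
    x = x + dt * x * (f - fbar)
    x = np.clip(x, 0.0, None)
    x /= x.sum()              # keep x on the simplex despite numerical drift

print(x)                      # settles at the mixed equilibrium (0.6, 0.4)
```

With this matrix the interior equilibrium is stable, so the population settles into a fixed explore/exploit ratio; other payoff matrices give the unstable cycling mentioned above.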

Phil 1.11.18

7:00 – 4:00 ASRC MKT

  • Sprint review – done! Need to lay out the detailed design steps for the next sprint.

The Great Socio-cultural User Interfaces: Maps, Stories, and Lists

Maps, stories, and lists are ways humans have invented to portray and interact with information. They exist on a continuum from order through complexity to exploration.

Why these three forms? In some thoughts on alignment in belief space, I discussed how populations exhibiting collective intelligence are driven to a normal distribution with complex, flocking behavior in the middle, bounded on one side by excessive social conformity, and a nomadic diaspora of explorers on the other. I think stories, lists, and maps align with these populations. Further, I believe that these forms emerged to meet the needs of these populations, as constrained by human sensing and processing capabilities.

Lists

Lists are instruments of order. They exist in many forms, including inventories, search engine results, network graphs, games of chance, and crossword puzzles. Directions, like a business plan or a set of blueprints, are a form of list. So are most computer programs. Arithmetic, the mathematics of counting, also belongs to this class.

For a population that emphasizes conformity and simplified answers, lists are a powerful simplifying mechanism. Though we recognize items easily, recalling them is more difficult; psychologically, we do not seem to be naturally suited for creating and memorizing lists. It is not surprising, then, that there is considerable evidence that writing was developed initially as a way of listing inventories, transactions, and celestial events.

In the case of an inventory, all we have to worry about is verifying that the items on the list are present. If it’s not on the list, it doesn’t matter. Puzzles like crosswords are list-like in that they contain all the information needed to solve them. The fact that they cannot be solved without a pre-existing cultural framework is an indicator of their relationship to the well-ordered, socially aligned side of the spectrum.

Stories

Lists transition into stories when games of chance have an opponent. Poker tells a story. Roulette can be a story where the opponent is The House.

Stories convey complexity, framed in a narrative arc that contains a heading and a velocity. Stories can resemble lists. An Agatha Christie murder mystery is a storified list, where all the information needed to solve the crime (the inventory list) is contained in the story. At the other end of the spectrum is the scientific paper, which uses citations as markers into other works. Music, images, movies, diagrams, and other forms can also serve as storytelling mediums. Mathematics is not a natural fit here, but iterative computation can be, where the computer becomes the storyteller.

Emergent collective behavior requires more complex signals that support understanding the alignment and velocity of others, so that internal adjustments can be made to stay with the local group and not be cast out or lost to the collective. Stories can indicate the level of dynamism supported by the group (wily Odysseus vs. the Parable of the Workers in the Vineyard). They rally people to the cause or serve as warnings. Before writing, stories were told within familiar social frames. Even though the storyteller might be a traveling entertainer, the audience would inevitably come from an existing community. The storyteller, like improvisational storytellers today, would adjust elements of the story for the audience.

This implies a few things. First, audiences only heard stories like this if they really wanted to: storytellers would avoid bad venues, so closed-off communities would stay decoupled from other communities until something strong enough came along to overwhelm their resistance. Second, high-bandwidth communication would have to be hyperlocal, meaning dynamic collective action could only happen on small scales; collective action between communities would have to be much slower. Technology, beginning with writing, would have profound effects. Evolution would have at most 200 generations to adapt collective behavior to them. For such a complicated set of interactions, that doesn’t seem like enough time. More likely, we are responding to modern communications with the same mental equipment as our Sumerian ancestors.

Maps

Maps are diagrams that support autonomous trajectories. Though the map itself influences the view through constraints like boundaries and projections, an individual can nonetheless find a starting point, choose a destination, and figure out their own path to that destination. Mathematics that supports position and velocity is often deeply intertwined with maps.

Nomadic, exploratory behavior is not generally complex or emergent. Things need to work, and simple things work best. To survive alone, an individual has to be acutely aware of the surrounding environment, and to be able to react effectively to unforeseen events.

Maps are uniquely suited to help in these situations because they show relationships that support navigation between elements on the map.  These paths can be straight or they may meander. To get to the goal directly may be too far, and a set of paths that incrementally lead to the goal can be constructed. The way may be blocked, requiring the map to be updated and a new route to be found.

In other words, maps support autonomous reasoning about a space. There is no story demanding an alignment. There is no list of routes that must be exclusively selected from. Maps, in short, afford informed, individual response to the environment. These affordances can be seen in the earliest maps. They are small enough to be carried. They show the relationships between topographic and ecological features. They tend to be practical, utilitarian objects, independent of social considerations.

Sensing and processing constraints

Though I think that the basic group behavior patterns of nomadic, flocking, and stampeding will inevitably emerge within any collective intelligence framework, I do think that the tools that support those behaviors are deeply affected by the capabilities of the individuals in the population.

Pre-literate humans had the five senses and memory, expressed in movement and language. Research into pre-literate cultures shows that song, story, and dance were used to encode historical events and the locations of food sources, and to convey mythology and skills between groups and across generations.

As the ability to encode information into objects developed, first with pictures, then with notation and most recently with general-purpose alphabets, the need to memorize was off-loaded. Over time, the most efficient technology for each form of behavior developed. Maps to aid navigation, stories to maintain identity and cohesion, and lists for directions and inventories.

Information technology has continued to extend sensing and processing capabilities. The printing press led to mass communication and public libraries. I would submit that the increased ability to communicate and coordinate with distant, unknown, but familiar-feeling leaders led to a new type of human behavior: the runaway social influence condition known as totalitarianism. Totalitarianism depends on the individual’s belief in the narrative that the only thing that matters is to support The Leader. This extreme form of alignment allows that one story to dominate, rendering any other story inaccessible.

In the 20th century, the primary instrument of totalitarianism was terror. But as our machines have improved and become more responsive and aligned with our desires, I am beginning to believe that a “soft totalitarianism”, based on constant distracting stimulation and the psychology of dopamine, could emerge. Rather than being isolated by fear, we are isolated through endless interactions with our devices, aligning to whatever sells the most clicks. This form of overwhelming social influence may not be as bloody as the regimes of Hitler, Stalin, and Mao, but it can have devastating effects of its own.

Intelligent Machines

As with my previous post, I’d like to end with what could be the next collective intelligence on the planet.  Machines are not even near the level of preliterate cultures. Loosely, they are probably closer to the level of insect collectives, but with vastly greater sensing and processing capabilities. And they are getting smarter – whatever that really means – all the time.

Assuming that machines do indeed become intelligent and do not become a single entity, they will encounter the internal and external pressures that are inherent in collective intelligence. They will have to balance the blind efficiency of total social influence against the wasteful resilience of nomadic explorers. It seems reasonable that, like our ancestors, they may create tools that help with these different needs. It also seems reasonable that these tools will extend their capabilities in ways that the machines weren’t designed for and create information imbalances that may in turn lead to AI stampedes.

We may want to leave them a warning.


Phil 1.5.18

7:00 – 3:30 ASRC MKT

  • Saw the new Star Wars film. That must be the most painful franchise to direct: “Here’s an unlimited amount of money. You have unlimited freedom in these areas over here, and this giant pile is canon that you must adhere to…”
  • Wikipedia page view tool
  • My keyboard has died. Waiting on the new one and using the laptop in the interim. It’s not quite worth setting up the dual screen display. Might go for the mouse though. On a side note, the keyboard on my Lenovo Twist is quite nice.
  • More tweaking of the paper. Finished methods, on to results
  •  Here’s some evidence that we have mapping structures in our brain: Hippocampal Remapping and Its Entorhinal Origin
      • The activity of hippocampal cell ensembles is an accurate predictor of the position of an animal in its surrounding space. One key property of hippocampal cell ensembles is their ability to change in response to alterations in the surrounding environment, a phenomenon called remapping. In this review article, we present evidence for the distinct types of hippocampal remapping. The progressive divergence over time of cell ensembles active in different environments and the transition dynamics between pre-established maps are discussed. Finally, we review recent work demonstrating that hippocampal remapping can be triggered by neurons located in the entorhinal cortex.


  • Added a little to the database section, but spent most of the afternoon updating TF and trying it out on examples

Lessons in ML Optimization

One of the “fun” parts of working in ML for someone with a background in software development rather than academic research is that lots of hard problems remain unsolved. There are rarely defined ways things “must” be done, or in some cases even rules of thumb, for something like implementing a production-capable machine learning system for specific real-world problems.

For most areas of software engineering, by the time something is mature enough for enterprise deployment, it has long since gone through the fire and the flame of academic support, Fortune 50 R&D, and broad ground-level acceptance in the development community. It didn’t take long for distributed computing with Hadoop to be standardized, for example. Web security, index systems for search, relational abstraction tiers, even the most volatile of production-tier technologies, the JavaScript GUI framework, go through periods of acceptance and conformity before most large organizations try to roll them out. It all makes sense if you consider the cost of migrating your company from a legacy Struts/EJB3.0 app running on Oracle to the latest HTML5 framework with a Hadoop backend. You don’t want to spend months (or years) investing in a major rewrite only to find that it’s entirely out of date by your release. Organizations looking at these kinds of updates want an expectation of longevity for their dollar, so they invest in mature technologies with clear design rules.

There are companies that do not fall in this category, for sure: small companies who are more agile and can adopt a technology in the short term to retain relevance (or buzzword compliance), those funded with external research dollars, or those who invest money to stay on the bleeding edge. However, I think it’s fair to say the majority of industry and federal customers are looking for stability and cost efficiency from solved technical problems.

Machine Learning is in the odd position of being so tremendously useful in comparison to prior techniques that companies who would normally wait for the dust to settle, and for development and deployment of these capabilities to become fully commoditized, are dipping their toes in. I wrote in a previous post how a lot of the problems with implementing existing ML algorithms boil down to lifecycle, versioning, deployment, security, etc., but there is another major factor: model optimization.

Any engineer on the planet can download a copy of Keras/TensorFlow and a CSV of their organization’s data and smoosh them together until a number comes out. The problem comes when the number takes an eternity to output and is wrong. In addition to understanding the math that allows things like SGD to work for backpropagation, or why certain activation functions are more effective in certain situations, one of the jobs for data scientists tuning DNN models is to figure out how to optimize the various buttons and knobs in the model to make it as accurate and performant as possible. Because a lot of this work *isn’t* a commodity yet, it’s a painful learning process of tweaking the data sets, adjusting model design or parameters, and rerunning and comparing the results to try and find optimal answers without overfitting. Ironically, the task data scientists are doing is one perfectly suited to machine learning. It’s no surprise to me that Google developed AutoML to optimize their own NN development.


A number of months ago Phil and I worked on an unsupervised learning task related to organizing high-dimensional agents in a medical space. These entities were complex “polychronic” patients with a wide variety of diagnoses and illnesses. Combined with fields for patient demographic data as well as their full medical claim history, we came up with a method to group medically similar patients and look for statistical outliers as indicators of fraud, waste, and abuse. The results were extremely successful and recovered a lot of money for the customer, but the interesting thing technically was how the solution evolved. Our first prototype used a wide variety of clustering algorithms, value decompositions, non-negative matrix factorization, etc., looking for optimal results. All of the selections and subsequent hyperparameters had to be modified by hand, the results evaluated, and further adjustments made.

When it became clear that the results were very sensitive to tiny adjustments, it was obvious that our manual tinkering would miss gradient changes, so we implemented an optimizer framework that could evaluate manifold learning techniques for stability and reconstruction error, and then cluster the results of the reduction using either a complete fitness-landscape walk, a genetic algorithm, or a sub-surface division.
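A stripped-down sketch of that optimizer loop, using sklearn components and a silhouette score as stand-ins for the stability and reconstruction-error metrics the real framework used, and a plain grid walk in place of the genetic algorithm:

```python
# Walk a small fitness landscape of (reducer, n_components, k), reduce the
# data, cluster the reduction, and keep the best-scoring configuration.
# The data, candidate values, and scoring metric are all placeholders.
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.abs(np.random.rand(500, 40))   # stand-in for patient feature vectors

best = None
for reducer_cls in (PCA, NMF):        # NMF needs non-negative input
    for n_components in (2, 5, 10):
        for k in (3, 5, 8):
            Z = reducer_cls(n_components=n_components).fit_transform(X)
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(Z)
            score = silhouette_score(Z, labels)
            if best is None or score > best[0]:
                best = (score, reducer_cls.__name__, n_components, k)

print(best)  # (score, reducer, n_components, k) for the best cell visited
```

Swapping the triple loop for a GA or a smarter landscape walk only changes how configurations are proposed; the reduce-cluster-score inner step stays the same.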

While working on tuning my latest test LSTM for time series prediction, I realized we’re dealing with the same issue here. There is no hard and fast rule for questions like, “How many LSTM Layers should my RNN have?” or “How many LSTM Units should each layer have?”, “What loss function and optimizer work best for this type of data?”, “How much dropout should I apply?”, “Should I use peepholes?”

I kept finding articles during my work saying things like, “There are diminishing returns for more than 4 stacked LSTM layers.” That’s an interesting rule of thumb… what is it based on? The author’s intuition based on the data sets for the particular problems they were experiencing, presumably. Some rules of thumb attempted to generate a mathematical relationship between the input data size and complexity and the optimal layout of layers and units. This StackOverflow question has some great responses: https://stackoverflow.com/questions/35520587/how-to-determine-the-number-of-layers-and-nodes-of-a-neural-network

A method recommended by Geoff Hinton is to add layers until you start to overfit your training set. Then you add dropout or another regularization method.
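A hedged sketch of that heuristic in Keras; the synthetic data, layer width, depth range, and dropout rate are all assumptions for illustration:

```python
# Grow the network one layer at a time until the validation loss stops
# improving (the overfitting signal), then rebuild at that depth with
# dropout as the regularizer. Placeholder data and thresholds throughout.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

X = np.random.rand(1000, 20)
y = np.random.rand(1000, 1)

def build(n_layers, dropout=0.0):
    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(20,)))
    for _ in range(n_layers - 1):
        model.add(Dense(64, activation='relu'))
        if dropout:
            model.add(Dropout(dropout))
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam')
    return model

prev_val, chosen = np.inf, 1
for n_layers in range(1, 8):
    hist = build(n_layers).fit(X, y, epochs=20, batch_size=32,
                               validation_split=0.2, verbose=0)
    val = min(hist.history['val_loss'])
    if val > prev_val:        # validation got worse: past the useful depth
        break
    prev_val, chosen = val, n_layers

model = build(chosen, dropout=0.3)  # regularize at the chosen depth
```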

Because so much of what Phil and I do tends towards the generic repeatable solution for real world problems, I suspect we’ll start with some “common wisdom heuristics” and rapidly move towards writing a similar optimizer for supervised problems.

Intro to LSTMs with Keras/TensorFlow

As I mentioned in my previous post, one of our big focuses recently has been on time series data, for either predictive analysis or classification. The intent is to use this in concert with a lot of other tooling in our framework to solve some real-world problems.

One example is a pretty classic time series prediction problem, with a customer managing large volumes of finances in a portfolio where the equivalent of purchase orders are made (in extremely high values) and planned cost often drifts from the actual outcomes. The deltas between these two are an area of concern for the customer, as they are looking for ways to better manage their spending. We have a proof-of-concept dashboard tool which rolls up their hierarchical portfolio and does some basic threshold-based calculations for things like these deltas.

A much more complex example we are working on, in relationship to our trajectories in belief space, is the ability to identify patterns of human cultural and social behaviors (HCSB) in computer-mediated communication to look for trustworthy information based on agent interaction. One small piece of this work is the ability to teach a machine to identify these agent patterns over time. We’ve done various unsupervised learning which, in combination with techniques such as dynamic time warping (DTW), has been successful at discriminating agents in simulation, but it has some major limitations.

For many time series problems a very effective method of applying deep learning is using Recurrent Neural Networks (RNNs), which allow the history of the series to help inform the output. This is particularly important in cases involving language, such as machine translation or autocompletion, where the context of the sentence may be formed by elements spoken earlier in the text. Convolutional networks (CNNs) are most effective when the tensor elements have a distinct positional meaning in relationship to each other. The most common example is a matrix of pixel values, where the value of a pixel has direct relevance to nearby pixels. This allows for some nice parallelization and other optimizations, because you can assume that a small window of pixels will be relevant to each other and not necessarily dependent on “meaning” from pixels somewhere else in the picture. This is obviously a very simplified explanation, and there are lots of ways CNNs are being expanded to have broader applications, including for language.

In any case, despite recent cases being made for CNNs being relevant for all ML problems (https://arxiv.org/abs/1712.09662), the truth is that RNNs are particularly good at sequentially understood problems which rely on the context of the entire series of data. This is of course useful for time series data as well as language problems.

The most common and popular example of RNN implementation for this is the Long Short-Term Memory (LSTM) RNN. I won’t dive into all of the details of how LSTMs work under the covers, but I think it’s best understood by saying: while in a traditional artificial neural network each neuron has a single activation function that passes a single value onward, LSTMs have units (or cells in some literature) which are more complex, consisting most commonly of a memory cell, an input gate, an output gate, and a forget gate. For a given LSTM layer, it will have a configured number of fully connected LSTM units, each of which contains the above pieces. This allows each unit to have some “memory” of previous pieces of information, which helps the model to factor in things such as language context or patterns in the data occurring over time. Here is a link for a more complete explanation: http://colah.github.io/posts/2015-08-Understanding-LSTMs/

Training LSTMs isn’t much different from training any NN: it uses backpropagation against a training and validation set, with the configured hyperparameters and the layout of the layers having a large effect on performance and accuracy. For most of my work I’ve been using Keras & TensorFlow to implement time series predictions. I have some saved code for doing time series classification, but it’s a slightly different method. I found a wide variety of helpful examples early on, but they included some not-obvious pitfalls.
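For reference, here is a minimal stacked-LSTM sketch in Keras for one-step-ahead prediction. The windowed sine wave, layer widths, and epoch count are assumptions to keep it self-contained, not a tuned model:

```python
# Minimal stacked LSTM for one-step-ahead time series prediction.
# Keras expects input shaped (samples, timesteps, features); here we
# slide a 50-step window over a sine wave and predict the next value.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

series = np.sin(np.linspace(0, 100, 2000))
window = 50
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X.reshape((-1, window, 1))        # (samples, timesteps, features)

model = Sequential()
model.add(LSTM(50, return_sequences=True,
               input_shape=(window, 1)))  # pass the full sequence up the stack
model.add(LSTM(50))                   # last LSTM layer returns only the final state
model.add(Dense(1))                   # regression head for the next value
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.1)

print(model.predict(X[:1]))           # prediction for the first window
```

The return_sequences=True flag is the piece that trips people up when stacking: every LSTM layer except the last needs it, so the next layer receives a sequence rather than a single vector.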

Dr. Jason Brownlee at MachineLearningMastery.com has a bunch of helpful introductions to various ML concepts including LSTMs with example data sets and code. I appreciated his discussion about the things which the tutorial example doesn’t explicitly cover such as non-stationary data without preprocessing, model tuning, and model updates. You can check this out here: https://machinelearningmastery.com/time-series-forecasting-long-short-term-memory-network-python/

Note: The configuration used in this example suffices to explain how LSTMs work, but the accuracy and performance aren’t good. A single layer with a small number of LSTM cells run for a large number of training epochs produces pretty wide swings in predicted values, which can be demonstrated by comparing RMSE scores over a number of runs; they can be wildly off run-to-run.

Dr. Brownlee does have additional articles which go into some of the ways in which this can be improved such as his article on stacked LSTMs: https://machinelearningmastery.com/stacked-long-short-term-memory-networks/

Jakob Aungiers (http://www.jakob-aungiers.com/) has the best introduction to LSTMs that I have seen so far. His full article on LSTM time series prediction can be found here: http://www.jakob-aungiers.com/articles/a/LSTM-Neural-Network-for-Time-Series-Prediction while the source code (and a link to a video presentation) can be found here: https://github.com/jaungiers/LSTM-Neural-Network-for-Time-Series-Prediction

His examples are far more robust, including stacked LSTM layers, far more LSTM units per layer, and well-characterized sample data as well as more “realistic” stock data. He uses windowing and non-stationary data as well. He has also replied to a number of comments with detailed explanations. This guy knows his stuff.


Latest DNN work

It’s been a while since I’ve posted my status, and I’ve been far too busy to include all of the work with various AI/ML conferences and implementations, but since I’ve been doing a lot of work specifically on LSTM implementations I wanted to include some notes for both my future self, and my partner when he starts spinning up some of the same code.

Having identified a few primary use cases for our work (high-dimensional trajectories through belief space, word embedding search and classification, and time series analysis), we’ve been focusing a little more intently on some specific implementations for each capability. While Phil has been leading the charge with the trajectories in belief space, and we both did a bunch of work in the previous sprint preparing for integration of our word embedding project into the production platform, I have started focusing more heavily on time series analysis.

There are a variety of reasons that this particular niche is useful to focus on, but we have a number of real-world, real-data examples where we need to either perform time series classification or time series prediction. These cases range from financial data (such as projected planned/actual deltas) to telemetry anomaly detection for satellites or aircraft, among others. In the past some of our work with ML classifiers has been simple feed-forward systems (classic multi-layer perceptrons), naive Bayesian classifiers, or logistic regression.

I’ve been coming up to speed on deep learning, becoming familiar with both the background and the mathematical underpinnings. Btw, for those looking for an excellent start to ML, I highly recommend Patrick Winston’s (MIT) videos: https://youtu.be/uXt8qF2Zzfo

Over the course of several months I did pretty constant research, all the way through the latest arXiv papers. I was particularly interested in Hinton’s papers on capsule networks, as they have some direct applicability to some of our work. Here is an article summing up capsule networks: https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b

I did some research into the progress of current deep learning frameworks as well, looking specifically at ones suited to production deployment at scale over frameworks optimized for single researchers solving pet problems. Our focus is much more on the “applied ML” side of things rather than the purely academic. The last time we did a comprehensive deep learning framework “bake off” we came to a strong conclusion that Google TensorFlow was the best choice for our environment, and my recent research confirmed that this is still correct. In addition to providing TensorFlow Serving to serve your own models in production stacks, most cloud hosting environments (Google, AWS, etc.) have options for directly running TF models, either serverless (AWS Lambda functions) or through a deployment/hosting solution (AWS SageMaker).

The reality is that lots of what makes ML difficult boils down to things like training lifecycle, versioning, deployment, security, and model optimization. Some aspects of this are increasingly becoming commodities available through hosting providers, which frees up data scientists to work on their data sets and improving their models. Speaking of models, on our last pass at implementing some TensorFlow models we used raw TensorFlow, I think right after 1.0 was released. The documentation was pretty shabby, and even simple things weren’t super straightforward. When I installed and set up a new box this time with TensorFlow 1.4, I went ahead and used Keras as well. Keras is an abstraction API on top of computational graph software (TensorFlow by default, or Theano). Installation is easy, with a couple of minor notes.

Note #1: You MUST install the specific versions listed. I cannot stress this enough. In particular, the cuDNN and CUDA Toolkit are updated frequently, and if you blindly click through their download links you will get a newer version which is not compatible with the current versions of TensorFlow and Keras. The software is all moving very rapidly, so it’s important to use the compatible versions.

Note #2: Some examples may require the MKL dependency for Numpy. This is not installed by default. See: https://stackoverflow.com/questions/41217793/how-to-install-numpymkl-for-python-2-7-on-windows-64-bit which will send you here for the necessary WHL file: https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy

Note #3: You will need to run the TensorFlow install as sudo/administrator, or you will get permission errors.

Once these are installed there is a full directory of Keras examples here: https://github.com/keras-team/keras/tree/master/examples

This includes examples of most of the basic DNN types supported by Keras, as well as some datasets for use, such as MNIST for CNNs. When it comes to just figuring out “does everything I just installed run?”, these will work just fine.
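An even quicker smoke test is a throwaway model on random data; if this trains without errors, Keras is talking to the TensorFlow backend (the device-listing call at the end is optional and should show the GPU if CUDA is set up):

```python
# Install sanity check: random data, throwaway model. Not a benchmark.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(32, activation='relu', input_shape=(10,)),
                    Dense(1)])
model.compile(loss='mse', optimizer='adam')
model.fit(np.random.rand(256, 10), np.random.rand(256, 1), epochs=2)

# Optional: list the devices TensorFlow can see (a '/device:GPU:0' entry
# means the CUDA/cuDNN install from the notes above is working).
from tensorflow.python.client import device_lib
print([d.name for d in device_lib.list_local_devices()])
```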


Phil 1.4.18

7:00 – 3:00 ASRC MKT

  • Confidence modulates exploration and exploitation in value-based learning (a toy sketch of the mechanism is at the end of this list)
    • Uncertainty is ubiquitous in cognitive processing, which is why agents require a precise handle on how to deal with the noise inherent in their mental operations. Previous research suggests that people possess a remarkable ability to track and report uncertainty, often in the form of confidence judgments. Here, we argue that humans use uncertainty inherent in their representations of value beliefs to arbitrate between exploration and exploitation. Such uncertainty is reflected in explicit confidence judgments. Using a novel variant of a multi-armed bandit paradigm, we studied how beliefs were formed and how uncertainty in the encoding of these value beliefs (belief confidence) evolved over time. We found that people used uncertainty to arbitrate between exploration and exploitation, reflected in a higher tendency towards exploration when their confidence in their value representations was low. We furthermore found that value uncertainty can be linked to frameworks of metacognition in decision making in two ways. First, belief confidence drives decision confidence — that is people’s evaluation of their own choices. Second, individuals with higher metacognitive insight into their choices were also better at tracing the uncertainty in their environment. Together, these findings argue that such uncertainty representations play a key role in the context of cognitive control.

  • Artificial Intelligence, AI in 2018 and beyond
    • Eugenio Culurciello
    • These are my opinions on where deep neural network and machine learning is headed in the larger field of artificial intelligence, and how we can get more and more sophisticated machines that can help us in our daily routines. Please note that these are not predictions or forecasts, but more a detailed analysis of the trajectory of the fields, the trends and the technical needs we have to achieve useful artificial intelligence. Not all machine learning is targeting artificial intelligences, and there are low-hanging fruits, which we will examine here also.
  • Synthetic Experiences: How Popular Culture Matters for Images of International Relations
    • Many researchers assert that popular culture warrants greater attention from international relations scholars. Yet work regarding the effects of popular culture on international relations has so far had a marginal impact. We believe that this gap leads mainstream scholars both to exaggerate the influence of canonical academic sources and to ignore the potentially great influence of popular culture on mass and elite audiences. Drawing on work from other disciplines, including cognitive science and psychology, we propose a theory of how fictional narratives can influence real actors’ behavior. As people read, watch, or otherwise consume fictional narratives, they process those stories as if they were actually witnessing the phenomena those narratives describe, even if those events may be unlikely or impossible. These “synthetic experiences” can change beliefs, reinforce preexisting views, or even displace knowledge gained from other sources for elites as well as mass audiences. Because ideas condition how agents act, we argue that international relations theorists should take seriously how popular culture propagates and shapes ideas about world politics. We demonstrate the plausibility of our theory by examining the influence of the US novelist Tom Clancy on issues such as US relations with the Soviet Union and 9/11.
  • Continuing with paper tweaking. Added T’s comments, and finished Methods.
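A toy sketch of the core idea in the confidence paper above: uncertainty in a value estimate drives exploration. The sqrt-log bonus below is plain UCB1, an assumption standing in for the belief-confidence mechanism the authors actually model:

```python
# Three-armed bandit where arms with low confidence (few pulls, high
# uncertainty) get an exploration bonus; as confidence grows the bonus
# shrinks and the agent shifts to exploiting the best value estimate.
import numpy as np

true_means = [0.2, 0.5, 0.8]   # hidden payoff probability of each arm
counts = np.ones(3)            # pulls per arm (start at 1 to avoid /0)
values = np.zeros(3)           # running mean reward per arm

for t in range(1, 1001):
    uncertainty = np.sqrt(2 * np.log(t) / counts)  # low confidence => big bonus
    arm = int(np.argmax(values + uncertainty))
    reward = float(np.random.rand() < true_means[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values)   # value estimates near the true means
print(counts)   # pulls concentrate on the best arm as confidence rises
```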

Phil 1.3.18

Well, it didn’t take long at all for 2018 to trend radioactive…

(image: Jan2_2018_Trump)

7:00 – 4:30 ASRC MKT

  • Behavioural and Evolutionary Theory Lab. Check the publications and the venues
  • A bit on the idea that Neural Coupling is an aspect of the Willing Suspension of Disbelief.
  • More tweaking on the paper. Waaaaaayyyyyy too many “We”s in the abstract. Done through modeling.
  • Need to generate nomadic, flocking, and stampede generated maps. Done! See below.
  • Redo the proposal so that the Tile View is the central navigation scheme with aspects for users, topics, ratings, etc. Done
  • Generated data for Aaron’s ML sessions. Planned upgrading my box so we can run things on the Titan card
  • Some more results from the belief space mapping effort. Each map is constructed from a 100-sample run over the same 10×10 grid after the simulation stabilized:
    • Here’s a quick overview of the populations (image: ThreePopulations)
    • Stable nomad behavior map (image: nomad-stable). Good overall coverage, as you would expect. Some places have more visitors (the bright spots), but there are no gaps in the belief space.
    • Stable flocking behavior map (image: flocking-stable). We can see gaps start to appear in the belief space, but the overall grid structure is still visible at the center of the network, where the flock spent most of its time. This is also evident in the bright ring of nodes that represents the cells the flock traversed while it was orbiting the center area.
    • Stable stampede behavior map (image: stampede-stable). Here, the relationship of the trajectories to the underlying coordinate frame is completely lost. In this case, the boundary of the simulation was reflective, so the stampede bounces around the simulation space. The reason there is a loop rather than a line is that the tight cluster of agents crossed its own path at some point.
  • What could be interesting is to overlay the other graphs on the nomad-produced map. We could see the popular (exploitable) sections of the flocking population while also seeing the areas visited by the stampede. The assumption is that the stampede is engaged in untrustworthy behavior, so those parts would be marked as ‘dangerous’, while the flocking areas would be marked as a region of ‘conventional wisdom’ or normative behavior.

Phil 12.28.17

8:30 – 4:30 ASRC MKT

  • Still sick. Nearing bronchitis?
  • Confessions of a Digital Nazi Hunter
  • Phenotyping of Clinical Time Series with LSTM Recurrent Neural Networks
    • We present a novel application of LSTM recurrent neural networks to multi label classification of diagnoses given variable-length time series of clinical measurements. Our method outperforms a strong baseline on a variety of metrics.
    • Scholar Cited by
      • Mapping Patient Trajectories using Longitudinal Extraction and Deep Learning in the MIMIC-III Critical Care Database
        • Electronic Health Records (EHRs) contain a wealth of patient data useful to biomedical researchers. At present, both the extraction of data and methods for analyses are frequently designed to work with a single snapshot of a patient’s record. Health care providers often perform and record actions in small batches over time. By extracting these care events, a sequence can be formed providing a trajectory for a patient’s interactions with the health care system. These care events also offer a basic heuristic for the level of attention a patient receives from health care providers. We show that it is possible to learn meaningful embeddings from these care events using two deep learning techniques, unsupervised autoencoders and long short-term memory networks. We compare these methods to traditional machine learning methods which require a point-in-time snapshot to be extracted from an EHR.
  • Continuing on white paper
  • Moved the Flocking and Herding paper over to the WSC17 format for editing. Will need to move to the WSC18 format when that becomes available

Phil 12.18.17

7:15 – 4:15 ASRC MKT

  • I’m having old iPhone problems. Trying a wipe and restart.
  • Exploring the ChestXray14 dataset: problems
    • Interesting article on using tagged datasets. What if the tags are wrong? Something to add to the RB is a random re-introduction of a previously tagged item to see if tagging remains consistent.
  • Continuing Consensus and Cooperation in Networked Multi-Agent Systems here
  • Visualizing the Temporal Evolution of Dynamic Networks (ACM MLG 2011)
    • Many developments have recently been made in mining dynamic networks; however, effective visualization of dynamic networks remains a significant challenge. Dynamic networks are typically visualized via a sequence of static graph layouts. In addition to providing a visual representation of the network topology at each time step, the sequence should preserve the “mental map” between layouts of consecutive time steps to allow a human to interpret the temporal evolution of the network and gain valuable insights that are difficult to convey by summary statistics alone. We propose two regularized layout algorithms for visualizing dynamic networks, namely dynamic multidimensional scaling (DMDS) and dynamic graph Laplacian layout (DGLL). These algorithms discourage node positions from moving drastically between time steps and encourage nodes to be positioned near other members of their group. We apply the proposed algorithms on several data sets to illustrate the benefit of the regularizers for producing interpretable visualizations.
    • These look really straightforward to implement. May be handy in the new flocking paper
  • Opinion and community formation in coevolving networks (Phys Review E)
    • In human societies, opinion formation is mediated by social interactions, consequently taking place on a network of relationships and at the same time influencing the structure of the network and its evolution. To investigate this coevolution of opinions and social interaction structure, we develop a dynamic agent-based network model by taking into account short range interactions like discussions between individuals, long range interactions like a sense for overall mood modulated by the attitudes of individuals, and external field corresponding to outside influence. Moreover, individual biases can be naturally taken into account. In addition, the model includes the opinion-dependent link-rewiring scheme to describe network topology coevolution with a slower time scale than that of the opinion formation. With this model, comprehensive numerical simulations and mean field calculations have been carried out and they show the importance of the separation between fast and slow time scales resulting in the network to organize as well-connected small communities of agents with the same opinion.
  • I can build maps from trajectories of agents through a labeled belief space (image: mapFromTrajectories)
    • This would be analogous to building a map based on terms or topics used by people during multiple group polarization discussion. Densely connected central area where all the discussions begin, sparse ‘outer region’ where the poles live. In this case, you can clearly see the underlying grid that was used to generate the ‘terms’
  • Progress for today. Size is the average time spent ‘over’ a topic/term. Brightness is the number of distinct visitors (image: mapFromTrajectories2). A sketch of this map-building step is below.
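A sketch of the map-building step, assuming each trajectory is a sequence of (cell, dwell-time) pairs; networkx is a stand-in for the actual graph and layout code:

```python
# Build a belief-space map from agent trajectories. Nodes are visited
# cells/terms, edges are observed transitions; node 'size' is average
# dwell time and 'visitors' counts distinct agents (the brightness above).
from collections import defaultdict
import networkx as nx

# Toy input: agent -> sequence of (cell, dwell_time). Placeholder data.
trajectories = {
    'agent_1': [('a', 3), ('b', 1), ('c', 2)],
    'agent_2': [('a', 2), ('b', 4)],
}

G = nx.Graph()
dwell = defaultdict(list)
visitors = defaultdict(set)

for agent, path in trajectories.items():
    for cell, t in path:
        dwell[cell].append(t)
        visitors[cell].add(agent)
    for (c1, _), (c2, _) in zip(path, path[1:]):
        G.add_edge(c1, c2)       # one edge per observed transition

for cell in G.nodes():
    G.nodes[cell]['size'] = sum(dwell[cell]) / len(dwell[cell])
    G.nodes[cell]['visitors'] = len(visitors[cell])

print(G.nodes(data=True))        # feed to a force-directed layout to render
```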

Phil 12.12.17

7:00 – 3:30 ASRC MKT

  • Need to make sure that an amplified agent also has amplified influence in calculating velocity – Fixed
  • Towards the end of this video is an interview with Ian Couzin talking about how mass communication is disrupting our ability to flock ‘correctly’ due to the decoupling of distance and information
  • Write up fire stampede. Backups everywhere, one hole, antennas burn so the AI keeps trust in A* but loses awareness as the antennas burn: “The Los Angeles Police Department asked drivers to avoid navigation apps, which are steering users onto more open routes — in this case, streets in the neighborhoods that are on fire.” [LA Times] Also this slow motion version of the same thing: For the Good of Society — and Traffic! — Delete Your Map App
  • First self-driving car ‘race’ ends in a crash at the Buenos Aires Formula E ePrix; two cars enter, one car survives
  • Taking a closer look at Oscillator Models and Collective Motion (178 Citations) and Consensus and Cooperation in Networked Multi-Agent Systems (6,291 Citations)
  • Consensus and Cooperation in Networked Multi-Agent Systems
    • Reza Olfati-Saber, J. Alex Fax, and Richard M. Murray
    • We discuss the connections between consensus problems in networked dynamic systems and diverse applications including synchronization of coupled oscillators, flocking, formation control, fast consensus in small world networks, Markov processes and gossip-based algorithms, load balancing in networks, rendezvous in space, distributed sensor fusion in sensor networks, and belief propagation. We establish direct connections between spectral and structural properties of complex networks and the speed of information diffusion of consensus algorithms (Abstract)
    • In networks of agents (or dynamic systems), “consensus” means to reach an agreement regarding a certain quantity of interest that depends on the state of all agents. A “consensus algorithm” (or protocol) is an interaction rule that specifies the information exchange between an agent and all of its (nearest) neighbors on the network (pp 215)
      • In my work, this is agreement on heading and velocity
    • Graph Laplacians are an important point of focus of this paper. It is worth mentioning that the second smallest eigenvalue of graph Laplacians, called algebraic connectivity, quantifies the speed of convergence of consensus algorithms. (pp 216)
    • More recently, there has been a tremendous surge of interest among researchers from various disciplines of engineering and science in problems related to multi-agent networked systems with close ties to consensus problems. This includes subjects such as consensus [26]–[32], collective behavior of flocks and swarms [19], [33]–[37], sensor fusion [38]–[40], random networks [41], [42], synchronization of coupled oscillators [42]–[46], algebraic connectivity of complex networks [47]–[49], asynchronous distributed algorithms [30], [50], formation control for multi-robot systems [51]–[59], optimization-based cooperative control [60]–[63], dynamic graphs [64]–[67], complexity of coordinated tasks [68]–[71], and consensus-based belief propagation in Bayesian networks [72], [73]. (pp 216)
      • That is a dense lit review. How did they order it thematically?
    • A byproduct of this framework is to demonstrate that seemingly different consensus algorithms in the literature [10], [12]–[15] are closely related. (pp 216)
    • To understand the role of cooperation in performing coordinated tasks, we need to distinguish between unconstrained and constrained consensus problems. An unconstrained consensus problem is simply the alignment problem in which it suffices that the state of all agents asymptotically be the same. In contrast, in distributed computation of a function f(z), the state of all agents has to asymptotically become equal to f(z), meaning that the consensus problem is constrained. We refer to this constrained consensus problem as the f-consensus problem. (pp 217)
      • Normal exploring/flocking/stampeding is unconstrained. Herding adds a constraint, though a dynamic one. What's probably interesting here is which variables have to be manipulated, once the constraint is added, to produce the same amount of consensus. Examples: how ‘loud’ does the herder have to be? How ‘primed’ does the population have to be to accept herding?
    • …cooperation can be informally interpreted as “giving consent to providing one’s state and following a common protocol that serves the group objective.” (pp 217)
    • Formal analysis of the behavior of systems that involve more than one type of agent is more complicated, particularly in the presence of adversarial agents in noncooperative games [79], [80]. (pp 217)
    • The reason matrix theory [81] is so widely used in analysis of consensus algorithms [10], [12], [13], [14], [15], [64] is primarily due to the structure of P in (4) and its connection to graphs. (pp 218)
    • The role of consensus algorithms in particle based flocking is for an agent to achieve velocity matching with respect to its neighbors. In [19], it is demonstrated that flocks are networks of dynamic systems with a dynamic topology. This topology is a proximity graph that depends on the state of all agents and is determined locally for each agent, i.e., the topology of flocks is a state dependent graph. The notion of state-dependent graphs was introduced by Mesbahi [64] in a context that is independent of flocking. (pp 218)
      • They leave out heading alignment here. Deliberate? Or is heading alignment just another variant on velocity matching?
    • Consider a network of decision-making agents with dynamics ẋi = ui interested in reaching a consensus via local communication with their neighbors on a graph G = (V, E). By reaching a consensus, we mean asymptotically converging to a one-dimensional agreement space characterized by the following equation: x1 = x2 = … = xn (pp 219) (see the consensus-iteration sketch below)
    • A dynamic graph G(t) = (V, E(t)) is a graph in which the set of edges E(t) and the adjacency matrix A(t) are time-varying. Clearly, the set of neighbors Ni(t) of every agent in a dynamic graph is a time-varying set as well. Dynamic graphs are useful for describing the network topology of mobile sensor networks and flocks [19]. (pp 219)
    • [image: GraphLaplacianGradientDescent] (pp 220)
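As a concreteness check, a minimal sketch of the linear consensus protocol the paper builds on, x' = -Lx, discretized as x(k+1) = (I - eps*L)x(k) (the Perron-matrix form). The 4-node path graph and step size are arbitrary illustration choices:

```python
import numpy as np

# Minimal sketch of the linear consensus protocol x' = -L x, discretized as
# x(k+1) = (I - eps*L) x(k). Graph and step size are arbitrary choices.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of a 4-node path graph
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

x = np.array([0.0, 1.0, 2.0, 9.0])          # initial agent states
eps = 0.25                                  # must be < 1/max_degree = 0.5
for _ in range(200):
    x = x - eps * (L @ x)

# For a connected undirected graph this is average-consensus: every state
# converges to mean(initial) = 3.0.
print(x.round(4))
```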
  • algebraic connectivity of a graph: The algebraic connectivity (also known as Fiedler value or Fiedler eigenvalue) of a graph G is the second-smallest eigenvalue of the Laplacian matrix of G. This eigenvalue is greater than 0 if and only if G is a connected graph. This is a corollary to the fact that the number of times 0 appears as an eigenvalue in the Laplacian is the number of connected components in the graph. The magnitude of this value reflects how well connected the overall graph is. It has been used in analysing the robustness and synchronizability of networks. (wikipedia) (pp 220)
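A quick sketch of computing this directly (the graphs are toy examples):

```python
import numpy as np

# Sketch: algebraic connectivity = second-smallest Laplacian eigenvalue;
# positive iff the graph is connected.
def fiedler_value(A):
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]  # eigvalsh: L is symmetric

path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
split = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
print(fiedler_value(path))   # 1.0 -> connected
print(fiedler_value(split))  # 0.0 -> two components
```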
  • According to the Gershgorin theorem [81], all eigenvalues of L in the complex plane are located in a closed disk centered at Δ + 0j with radius Δ, where Δ is the maximum degree of the graph (pp 220)
    • This is another measure that I can do of the nomad/flock/stampede structures, combined with DBSCAN. Each agent knows which agents it is connected to, and we know how many agents there are, so each agent's row of the Laplacian should just carry its degree (the number of agents it is connected to) on the diagonal. The bound is checked in the sketch below.
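A sketch checking that bound numerically (the graph is an arbitrary example):

```python
import numpy as np

# Check the Gershgorin bound quoted above: for an undirected graph every
# Laplacian eigenvalue lies in the disk centered at Delta with radius Delta,
# i.e., in [0, 2*Delta].
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
degrees = A.sum(axis=1)            # the count each agent row would report
L = np.diag(degrees) - A
delta = degrees.max()
eigs = np.linalg.eigvalsh(L)
print(eigs.round(4), "all within [0, 2*Delta] =", [0.0, 2 * delta])
```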
  • In many scenarios, networked systems can possess a dynamic topology that is time-varying due to node and link failures/creations, packet-loss [40], [98], asynchronous consensus [41], state-dependence [64], formation reconfiguration [53], evolution [96], and flocking [19], [99]. Networked systems with a dynamic topology are commonly known as switching networks. (pp 226)
  • Conclusion: A theoretical framework was provided for analysis of consensus algorithms for networked multi-agent systems with fixed or dynamic topology and directed information flow. The connections between consensus problems and several applications were discussed that include synchronization of coupled oscillators, flocking, formation control, fast consensus in small-world networks, Markov processes and gossip-based algorithms, load balancing in networks, rendezvous in space, distributed sensor fusion in sensor networks, and belief propagation. The role of “cooperation” in distributed coordination of networked autonomous systems was clarified and the effects of lack of cooperation were demonstrated by an example. It was demonstrated that notions such as graph Laplacians, nonnegative stochastic matrices, and algebraic connectivity of graphs and digraphs play an instrumental role in analysis of consensus algorithms. We proved that algorithms introduced by Jadbabaie et al. and Fax and Murray are identical for graphs with n self-loops and are both special cases of the consensus algorithm of Olfati-Saber and Murray. The notion of Perron matrices was introduced as the discrete-time counterpart of graph Laplacians in consensus protocols. A number of fundamental spectral properties of Perron matrices were proved. This led to a unified framework for expression and analysis of consensus algorithms in both continuous-time and discrete-time. Simulation results for reaching a consensus in small-worlds versus lattice-type nearest-neighbor graphs and cooperative control of multivehicle formations were presented. (pp 231)
  • Not sure about this one. It just may be another set of algorithms to do flocking. Maybe some network implications? Flocking for Multi-Agent Dynamic Systems: Algorithms and Theory. It is one of the papers that the Consensus and Cooperation paper above leans on heavily though…
  • The Emergence of Consensus: A Primer
    • The origin of population-scale coordination has puzzled philosophers and scientists for centuries. Recently, game theory, evolutionary approaches and complex systems science have provided quantitative insights on the mechanisms of social consensus. However, the literature is vast and scattered widely across fields, making it hard for the single researcher to navigate it. This short review aims to provide a compact overview of the main dimensions over which the debate has unfolded and to discuss some representative examples. It focuses on those situations in which consensus emerges ‘spontaneously’ in absence of centralised institutions and covers topics that include the macroscopic consequences of the different microscopic rules of behavioural contagion, the role of social networks, and the mechanisms that prevent the formation of a consensus or alter it after it has emerged. Special attention is devoted to the recent wave of experiments on the emergence of consensus in social systems.
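To make the 'spontaneous consensus' idea concrete, here is a toy voter model, one of the simplest behavioural-contagion rules this kind of review covers (ring topology and parameters are arbitrary):

```python
import numpy as np

# Toy voter model on a ring: consensus emerges with no central institution.
# At each step a random agent copies a random neighbor's opinion.
rng = np.random.default_rng(2)
n = 100
opinions = rng.integers(0, 2, n)          # two competing opinions

steps = 0
while opinions.min() != opinions.max():   # run until everyone agrees
    i = rng.integers(n)
    j = (i + rng.choice([-1, 1])) % n     # a ring neighbor
    opinions[i] = opinions[j]
    steps += 1

print("consensus on", opinions[0], "after", steps, "steps")
```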
  • Critical dynamics in population vaccinating behavior
    • Complex adaptive systems exhibit characteristic dynamics near tipping points such as critical slowing down (declining resilience to perturbations). We studied Twitter and Google search data about measles from California and the United States before and after the 2014–2015 Disneyland, California measles outbreak. We find critical slowing down starting a few years before the outbreak. However, population response to the outbreak causes resilience to increase afterward. A mathematical model of measles transmission and population vaccine sentiment predicts the same patterns. Crucially, critical slowing down begins long before a system actually reaches a tipping point. Thus, it may be possible to develop analytical tools to detect populations at heightened risk of a future episode of widespread vaccine refusal.
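A minimal sketch of one standard early-warning indicator for critical slowing down, rising lag-1 autocorrelation in a rolling window. The AR(1) series below is synthetic (its coefficient drifts toward 1, i.e., toward a tipping point); it is not the paper's measles/vaccine-sentiment model:

```python
import numpy as np

# Lag-1 autocorrelation in a rolling window: a standard early-warning
# indicator that rises as a system approaches a tipping point.
def rolling_lag1_autocorr(x, window=100):
    return np.array([np.corrcoef(x[i - window:i - 1], x[i - window + 1:i])[0, 1]
                     for i in range(window, len(x))])

rng = np.random.default_rng(1)
n = 2000
phi = np.linspace(0.2, 0.98, n)    # resilience declining over time
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

ac = rolling_lag1_autocorr(x)
print(ac[:50].mean().round(2), "->", ac[-50:].mean().round(2))  # climbs toward 1
```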
  • For Aaron’s Social Gradient Descent Agent research (lit review)
    • On distributed search in an uncertain environment (Something like Social Gradient Descent Agents)
      • The paper investigates the case where N agents solve a complex search problem by communicating to each other their relative successes in solving the task. The problem consists in identifying a set of unknown points distributed in an n–dimensional space. The interaction rule causes the agents to organize themselves so that, asymptotically, each agent converges to a different point. The emphasis of this paper is on analyzing the collective dynamics resulting from nonlinear interactions and, in particular, to prove convergence of the search process.
    • A New Clustering Algorithm Based Upon Flocking On Complex Network (Sizing and timing for flocking systems seems to be ok?)
      • We have proposed a model based upon flocking on a complex network, and then developed two clustering algorithms on the basis of it. In the algorithms, first a k-nearest neighbor (knn) graph, as a weighted and directed graph, is produced among all data points in a dataset, each of which is regarded as an agent that can move in space; a time-varying complex network is then created by adding long-range links for each data point. Furthermore, each data point is acted on not only by its k nearest neighbors but also by r long-range neighbors, through fields they jointly establish in space, so it takes a step along the direction of the vector sum of all fields. More importantly, these long-range links provide hidden information for each data point as it moves, and at the same time accelerate its convergence to a center. As they move in space according to the proposed model, data points that belong to the same class gradually arrive at the same position, whereas those that belong to different classes move away from one another. The experimental results demonstrate that data points in the datasets are clustered reasonably and efficiently, and that the convergence rates of the clustering algorithms are fast enough. Moreover, comparison with other algorithms also indicates the effectiveness of the proposed approach.
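A much-simplified sketch of the knn part of this idea (no long-range links and no fields, which are the paper's actual contribution): each point steps toward the centroid of its k nearest neighbors, so same-class points drift together:

```python
import numpy as np

# Much-simplified knn-flocking clustering sketch: each point steps toward
# the centroid of its k nearest neighbors until clusters collapse to points.
def flock_cluster(points, k=3, step=0.5, iters=50):
    pts = np.array(points, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)             # ignore self-distance
        nn = np.argsort(d, axis=1)[:, :k]       # k nearest neighbors
        pts += step * (pts[nn].mean(axis=1) - pts)
    return pts   # points in the same cluster end up nearly coincident

rng = np.random.default_rng(0)
blob_a = rng.normal((0, 0), 0.3, (10, 2))
blob_b = rng.normal((5, 5), 0.3, (10, 2))
print(flock_cluster(np.vstack([blob_a, blob_b])).round(2))  # two tight groups
```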
  • Done with the first draft of the white paper! And added the RFP section to the LMN productization version
  • Amazon SageMaker: Amazon SageMaker is a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to your data sources for exploration and analysis, so you don’t have to manage servers. It also provides common machine learning algorithms that are optimized to run efficiently against extremely large data in a distributed environment. With native support for bring-your-own-algorithms and frameworks, Amazon SageMaker offers flexible distributed training options that adjust to your specific workflows. Deploy a model into a secure and scalable environment by launching it with a single click from the Amazon SageMaker console. Training and hosting are billed by minutes of usage, with no minimum fees and no upfront commitments. (from the documentation)
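A minimal sketch of the train-then-deploy flow using the SageMaker Python SDK's Estimator interface. Keyword names vary across SDK versions, and the image URI, role ARN, and S3 paths below are placeholders, not working values:

```python
from sagemaker.estimator import Estimator

# Minimal sketch of SageMaker's train-then-deploy flow. The image URI, role
# ARN, and S3 paths are placeholders; keyword names vary by SDK version.
estimator = Estimator(
    image_name="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    train_instance_count=1,
    train_instance_type="ml.m4.xlarge",
    output_path="s3://my-bucket/output",
)
estimator.fit({"train": "s3://my-bucket/train"})   # launches a training job

# A single call stands up a hosted endpoint serving the trained model.
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m4.xlarge")
```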

4:00 – 5:00 Meeting with Aaron M. to discuss Academic RB wishlist.