# Phil 4.19.18

8:00 – ASRC MKT/BD

• Good discussion with Aaron about agents navigating an embedding space. This would be a great example of creating “more realistic” data from simulation that bridges the gap between simulation and human data. It becomes the basis for work producing text for inputs such as DHS input streams.
• Get the embedding space from the Jack London corpora (crawl here)
• Train a classifier that recognizes JL using the embedding vectors instead of the words. This allows for contextual closeness. Additionally, it might allow a corpus to be trained “at once” as a pattern in the embedding space using CNNs.
• Train an NN (what type?) to produce sentences that incorporate the words sent by agents and fool the classifier
• Record the sentences as the trajectories
• Reconstruct trajectories from the sentences and compare to the input
• Some thoughts WRT generating Twitter data
• Closely aligned agents can retweet (alignment measure?)
• Less closely aligned agents can mention/respond, and also add their tweet
• Handed off the proposal to Red Team. Still need to rework the Exec Summary. Nope. Doesn’t matter that the current exec summary does not comply with the requirements.
• A dog with high social influence creates an adorable stampede:
• Using Machine Learning to Replicate Chaotic Attractors and Calculate Lyapunov Exponents from Data
• This is a paper that describes how ML can be used to predict the behavior of chaotic systems. An implication is that this technique could be used for early classification of nomadic/flocking/stampede behavior
• Visualizing a Thinker’s Life
• This paper presents a visualization framework that aids readers in understanding and analyzing the contents of medium-sized text collections that are typical for the opus of a single or few authors. We contribute several document-based visualization techniques to facilitate the exploration of the work of the German author Bazon Brock by depicting various aspects of his texts, such as the TextGenetics that shows the structure of the collection along with its chronology. The ConceptCircuit augments the TextGenetics with entities – persons and locations that were crucial to his work. All visualizations are sensitive to a wildcard-based phrase search that allows complex requests towards the author’s work. Further development, as well as expert reviews and discussions with the author Bazon Brock, focused on the assessment and comparison of visualizations based on automatic topic extraction against ones that are based on expert knowledge.
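• A minimal sketch of the classifier idea above (recognizing JL from embedding vectors rather than surface words): average a sentence’s word vectors and compare against a class centroid. The tiny 3-d “embeddings” and all names here are made up for illustration; a real run would use vectors trained on the Jack London corpora.

```python
# Sketch: nearest-centroid author classification in embedding space.
# The 3-d word vectors below are invented toy data.
import math

EMB = {  # hypothetical word embeddings
    "wolf":  [0.9, 0.1, 0.0], "snow":  [0.8, 0.2, 0.1],
    "trail": [0.7, 0.3, 0.0], "stock": [0.0, 0.9, 0.8],
    "price": [0.1, 0.8, 0.9],
}

def mean_vector(words):
    # Average the embedding vectors of the known words in a sentence
    vecs = [EMB[w] for w in words if w in EMB]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Centroid built from words we treat as in-class ("Jack London-like")
jl_centroid = mean_vector(["wolf", "snow", "trail"])

def looks_like_jl(words, threshold=0.9):
    # Contextual closeness: near the centroid counts as in-class,
    # even if the exact words never appeared in training
    return cosine(mean_vector(words), jl_centroid) >= threshold
```

This is what “contextual closeness” buys over word matching: a sentence scores as in-class because its vectors land near the class region, not because it reuses specific tokens.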

# Phil 4.18.18

7:00 – 6:30 ASRC MKT/BD

• Meeting with James Foulds. We talked about building an embedding space for a body of literature (the works of Jack London, for example) that agents can then navigate across. At the same time, train an LSTM on the same corpus so that the ML system, when given the vector of terms from the embedding (with probabilities/similarities?), produces a line that could be from the work and incorporates those terms. This provides a much more realistic model of the agent output that could be used for mapping. Nice paper to continue the current work while JuryRoom comes up to speed.
• Recurrent Neural Networks for Multivariate Time Series with Missing Values
• Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
• The fall of RNN / LSTM
• We fell for Recurrent neural networks (RNN), Long-short term memory (LSTM), and all their variants. Now it is time to drop them!
• JuryRoom
• Back to proposal writing
• Done with section 5! LaTex FTW!
• Clean up Abstract, Exec Summary and Transformative Impact tomorrow
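• The agent side of the embedding-space idea above can be sketched simply: an agent at a position in the space emits its k nearest vocabulary terms, which would then seed the LSTM’s generated line. Everything here (the 2-d coordinates, the helper names) is invented for illustration.

```python
# Sketch: an agent's position in embedding space maps to the vocabulary
# terms it would hand to a generator. Toy 2-d coordinates, not real vectors.
import math

VOCAB = {  # hypothetical embedding coordinates
    "fire": (0.9, 0.1), "frost": (0.8, 0.9), "dog": (0.5, 0.5),
    "gold": (0.2, 0.1), "river": (0.3, 0.8),
}

def nearest_terms(agent_pos, k=2):
    # Return the k vocabulary terms closest to the agent's position
    def dist(term):
        x, y = VOCAB[term]
        return math.hypot(x - agent_pos[0], y - agent_pos[1])
    return sorted(VOCAB, key=dist)[:k]
```

As the agent moves through the space, the emitted term list shifts smoothly, which is what makes the generated lines a usable trajectory record.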

# Phil 4.17.18

7:00 – ASRC MKT

• Listening to an interview with Niall Ferguson this morning in which he talks about how the Chinese IT model aligns more closely with developing countries because it has solved the payment problem. And the surveillance-state apparatus comes along for free.
• An ML/AI trained on that population will provide even closer alignment, feel more “native”, and increase the traction of Chinese IT. The Chinese approach expands its footprint in the developing world because it feels better and solves problems.
• This sets up a conflict between corporate systems in the US and EU and state systems in China. In sheer demographics, that means it’s more likely that the dominant ML/AI perspective would reflect the surveillance biases of the Chinese government.
• Payment systems are Socio-cultural user interfaces
• Submitted to SASO. Submission #32. Updated the arXiv file too. arXiv “forgets” all the attachments, so the tarball approach is soooooo much nicer.
• Alt text for screen readers using LaTex
\documentclass{article}
\usepackage{graphicx}
\usepackage{pdfcomment}
\pagestyle{empty}

\begin{document}
one two three

\pdftooltip{\includegraphics{img.png}}{This is the ALT text}%

four five six
\end{document}

# Phil 4.16.18

9:00 – ASRC MKT

• Finished up and submitted the CI 2018 paper and also put it up on arXiv. Probably 90 minutes total?
• Abstract submission (extended) April 23, 2018
• Submission (extended) April 30, 2018
• Some diversity injection: Report for America Supports Journalism Where Cutbacks Hit Hard
• Report for America, a nonprofit organization modeled after AmeriCorps, aims to install 1,000 journalists in understaffed newsrooms by 2022. Now in its pilot stage, the initiative has placed three reporters in Appalachia. It has chosen nine more, from 740 applicants, to be deployed across the country in June.
• An information-theoretic, all-scales approach to comparing networks
• As network research becomes more sophisticated, it is more common than ever for researchers to find themselves not studying a single network but needing to analyze sets of networks. An important task when working with sets of networks is network comparison, developing a similarity or distance measure between networks so that meaningful comparisons can be drawn. The best means to accomplish this task remains an open area of research. Here we introduce a new measure to compare networks, the Portrait Divergence, that is mathematically principled, incorporates the topological characteristics of networks at all structural scales, and is general-purpose and applicable to all types of networks. An important feature of our measure that enables many of its useful properties is that it is based on a graph invariant, the network portrait. We test our measure on both synthetic graphs and real world networks taken from protein interaction data, neuroscience, and computational social science applications. The Portrait Divergence reveals important characteristics of multilayer and temporal networks extracted from data.

3:00 – 4:00 Fika

# Phil 4.13.18

7:00 – ASRC MKT/BD

• That Politico article on “news deserts” doesn’t really show what it claims to show
• Its heart is in the right place, and the decline of local news really is a big threat to democratic governance.
• Firing up the JuryRoom effort again
• And a lot of fixing plugins. Big update
• Ok, back to having PHP and MySQL working. Need to see how to integrate it with the Angular CLI
• Updated CLI as per stackoverflow
• In order to update the angular-cli package installed globally in your system, you need to run:

npm uninstall -g angular-cli
npm cache clean
npm install -g @angular/cli@latest


Depending on your system, you may need to prefix the above commands with sudo.

Also, most likely you want to also update your local project version, because inside your project directory it will be selected with higher priority than the global one:

rm -rf node_modules
npm uninstall --save-dev angular-cli
npm install --save-dev @angular/cli@latest
npm install


thanks grizzm0 for pointing this out on GitHub.

• Updated my work environment too. Some PHP issues, and the Angular CLI wouldn’t update until I turned on the VPN. Duh.
• Angular 4 + PHP: Setting Up Angular And Bootstrap – Part 2
• Back to proposal writing

# Phil 4.12.18

7:00 – 5:00 ASRC MKT/BD

• Downloaded my FB DB today. Honestly, the only thing that seems excessive is the contact information
• Interactive Semantic Alignment Model: Social Influence and Local Transmission Bottleneck
• Dariusz Kalociński
• Marcin Mostowski
• Nina Gierasimczuk
• We provide a computational model of semantic alignment among communicating agents constrained by social and cognitive pressures. We use our model to analyze the effects of social stratification and a local transmission bottleneck on the coordination of meaning in isolated dyads. The analysis suggests that the traditional approach to learning—understood as inferring prescribed meaning from observations—can be viewed as a special case of semantic alignment, manifesting itself in the behaviour of socially imbalanced dyads put under mild pressure of a local transmission bottleneck. Other parametrizations of the model yield different long-term effects, including lack of convergence or convergence on simple meanings only.
• Starting to get back to the JuryRoom app. I need a better way to get the data parts up and running. This tutorial seems to have a minimal piece that works with PHP. That may be for the best since this looks like a solo effort for the foreseeable future
• Proposal
• Cut implementation down to proof-of-concept?
• We are keeping the ASRC format
• Got Dr. Lee’s contribution
• And a lot of writing and figuring out of things

# Phil 4.11.18

7:00 – 5:00 ASRC MKT

• Fixed the quotes in Simon’s Anthill
• Ordered Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations by Yoav Shoham.
• Meeting with Aaron and T about aligning dev plan
• More writing. We got a week extension!
• Triaged exec summary
• Triaged Transformational
• Introducing TensorFlow Probability
• At the 2018 TensorFlow Developer Summit, we announced TensorFlow Probability: a probabilistic programming toolbox for machine learning researchers and practitioners to quickly and reliably build sophisticated models that leverage state-of-the-art hardware. You should use TensorFlow Probability if:
• You want to build a generative model of data, reasoning about its hidden processes.
• You need to quantify the uncertainty in your predictions, as opposed to predicting a single value.
• Your training set has a large number of features relative to the number of data points.
• Your data is structured — for example, with groups, space, graphs, or language semantics — and you’d like to capture this structure with prior information.
• You have an inverse problem — see this TFDS’18 talk for reconstructing fusion plasmas from measurements.
• TensorFlow Probability gives you the tools to solve these problems. In addition, it inherits the strengths of TensorFlow such as automatic differentiation and the ability to scale performance across a variety of platforms: CPUs, GPUs, and TPUs.

# Phil 4.10.18

7:00 – 5:00 ASRC MKT

• Incorporating Wajanat’s changes
• Discovered the csquotes package!
\documentclass{article}
\usepackage[autostyle]{csquotes}

\begin{document}

\enquote{Thanks!}

\end{document}
• Meeting with Drew
• Nice chat. Basically, “use the databases!”
• Also found this:
• A Mechanism for Reasoning about Time and Belief
• Hideki Isozaki
• Several computational frameworks have been proposed to maintain information about the evolving world, which embody a default persistence mechanism; examples include time maps and the event calculus. In multi-agent environments, time and belief both play essential roles. Belief interacts with time in two ways: there is the time at which something is believed, and the time about which it is believed. We augment the default mechanisms proposed for the purely temporal case so as to maintain information not only about the objective world but also about the evolution of beliefs. In the simplest case, this yields a two dimensional map of time, with persistence along each dimension. Since beliefs themselves may refer to other beliefs, we have to think of a statement referring to an agent’s temporal belief about another agent’s temporal belief (a nested temporal belief statement). It poses both semantical and algorithmic problems. In this paper, we concentrate on the algorithmic aspect of the problems. The general case involves multi-dimensional maps of time called Temporal Belief Maps.
• Register for CI 2018 – done
• Finalize and submit paper by April 27, 2018
• Did not get a go ahead for ONR
• More work on the DHS proposal. Thinking about having a discussion about using latent values and clustering as the initial detection approach, and using ML as the initial simulation approach.
• Then much banging away at keyboards. Good progress, I think
• Neural Artistic Style Transfer: A Comprehensive Look

# Phil 4.9.18

7:00 – ASRC MKT / BD

• The Collective Intelligence 2018 paper was accepted! Now I need to start thinking about the presentation. And lodging, travel, etc.
• Tweaking the SASO paper
• The reasonably current version is on arXiv! Will update after submission to SASO this week.
• This One Simple Trick Disrupts Digital Communities
• This paper describes an agent-based simulation used to model human actions in belief space, a high-dimensional subset of information space associated with opinions. Using insights from animal collective behavior, we are able to simulate and identify behavior patterns that are similar to nomadic, flocking and stampeding patterns of animal groups. These behaviors have analogous manifestations in human interaction, emerging as solitary explorers, the fashion-conscious, and members of polarized echo chambers. We demonstrate that a small portion of nomadic agents that widely traverse belief space can disrupt a larger population of stampeding agents. Extending the model, we introduce the concept of Adversarial Herding, where bad actors can exploit properties of technologically mediated communication to artificially create self-sustaining runaway polarization. We call this condition the Pishkin Effect as it recalls the large-scale buffalo stampedes that could be created by Native American hunters. We then discuss opportunities for system design that could leverage the ability to recognize these negative patterns, and discuss affordances that may disrupt the formation of natural and deliberate echo chambers.
• Kind of between things, so I wrote up my notes on Influence of augmented humans in online interactions during voting events
• Looks important: Lessons Learned Reproducing a Deep Reinforcement Learning Paper
• Proposal all day today probably
• Fika
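• The behavior classes named in the abstract above can be made concrete with a toy alignment measure over agent headings. The thresholds and function names below are arbitrary illustrations, not the paper’s actual model.

```python
# Sketch: classify a group as nomadic / flocking / stampeding from how
# aligned its members' heading vectors are. Thresholds are invented.
import math

def mean_alignment(headings):
    # headings: angles in radians; returns 1.0 for perfect alignment,
    # near 0.0 for headings that cancel out
    sx = sum(math.cos(h) for h in headings)
    sy = sum(math.sin(h) for h in headings)
    return math.hypot(sx, sy) / len(headings)

def label(headings, flock=0.5, stampede=0.95):
    a = mean_alignment(headings)
    if a >= stampede:
        return "stampede"
    if a >= flock:
        return "flocking"
    return "nomadic"
```

Tightly clustered headings score near 1.0 (stampede); headings spread over the circle cancel and score near 0.0 (nomadic), with flocking in between.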

# Phil 4.6.18

7:00 – 9:00 ASRC MKT

• Heard a San Francisco comedian refer to Google as “Mordor” to knowing laughter in the audience. That says a lot about the relationship between the SF folks and their technology nation-states to the south. It also makes me rethink what Mordor actually was…
• More arXiv submission
• Tips for submitting to arXiv for the first time
• Make sure that only the used pix are uploaded
• EchoChamberAngle
• Explore-Exploit
• directionpreserving
• SlewAngle
• Explorer
• coloredFlocking
• stampede
• RunawayTrace
• populations
• HerdingImpact
• It may be possible to submit as a single zipped (.gz? .tar?) package. Will try that next time
• Submitted and pending approval.
• Start on DHS proposal
• Built LaTex document
• The templates provided by ASRC are completely wrong. Fixed in the LaTex template
• Lots of discussion and negotiation on the form of the concept. I think we’re ready to start Monday
• Nice chat with Wajanat about the paper and then her work. It’s interesting to hear how references and metaphors that I think are common get missed when they are read by a non-native English speaker from a different cultural frame. For example, I refer to a “plague of locusts”, which I had to explain as one of the biblical plagues of Egypt. Once explained, Wajanat immediately got it, and mentioned the Arabic word طاعون. We then asked Ali, who is Iranian. He didn’t know about the plagues either, but through طاعون he was able to get the entire context. She also suggested improving the screenshot at the beginning of the paper and expanding the transition to the intelligent-vehicle stampede section.
• Then a meandering and fun chat with Shimei, mostly about psychology and AI ethics. Left at 9:00

# Phil 3.22.18

7:00 – 5:00 ASRC MKT

• The ONR proposal is in!
• Promoted the Odyssey thoughts to Phlog
• More BIC
• The problem posed by Heads and Tails is not that the players lack a common understanding of salience; it is that game theory lacks an adequate explanation of how salience affects the decisions of rational players. All we gain by adding preplay communication to the model is the realisation that game theory also lacks an adequate explanation of how costless messages affect the decisions of rational players. (pg 180)
• More TF crash course
• Invert the ratio for train and validation
• Add the check against test data
• Get started on LSTM w/Aaron?

# Phil 3.20.18

7:00 – 3:00 ASRC MKT

• What (satirical) denying a map looks like. Nice application of believability.
• Need to make a folder with all the CUDA bits and Visual Studio to get all my boxes working with GPU tensorflow
• Assemble one-page resume for ONR proposal
• More BIC
• The fundamental principle of this morality is that what each agent ought to do is to co-operate, with whoever else is co-operating, in the production of the best consequences possible given the behaviour of non-co-operators’ (Regan 1980, p. 124). (pg 167)
• Ordered On Social Facts
• Are social groups real in any sense that is independent of the thoughts, actions, and beliefs of the individuals making up the group? Using methods of philosophy to examine such longstanding sociological questions, Margaret Gilbert gives a general characterization of the core phenomena at issue in the domain of human social life.

Back to the TF crash course

• Had to update my numpy from Christoph Gohlke’s Unofficial Windows Binaries for Python Extension Packages. It’s wonderful, but WHY???
• Also had this problem updating numpy
D:\installed>pip3 install "numpy-1.14.2+mkl-cp37-cp37m-win_amd64.whl"
numpy-1.14.2+mkl-cp37-cp37m-win_amd64.whl is not a supported wheel on this platform.
• That was solved by installing numpy-1.14.2+mkl-cp36-cp36m-win_amd64.whl. The cp36/cp37 part of the filename is the CPython version tag, so a cp37 wheel won’t install on the Python 3.6 interpreter I’m running; pip only accepts wheels whose tag matches the running Python.
• Left early due to snow
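• A quick stdlib check of which cpNN wheel tag the current interpreter expects; the helper name here is hypothetical, but the version-to-tag mapping is how wheel filenames work.

```python
# The cpNN tag in a wheel filename (e.g. cp36 in ...-cp36-cp36m-win_amd64.whl)
# must match the CPython version running pip. This prints the tag to look for.
import sys

def cpython_tag():
    return "cp{}{}".format(sys.version_info.major, sys.version_info.minor)

print(cpython_tag())
```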

# Phil 3.16.18

7:00 – 4:00 ASRC MKT

• Umwelt
• In the semiotic theories of Jakob von Uexküll and Thomas A. Sebeok, umwelt (plural: umwelten; from the German Umwelt meaning “environment” or “surroundings”) is the “biological foundations that lie at the very epicenter of the study of both communication and signification in the human [and non-human] animal”.[1] The term is usually translated as “self-centered world”.[2] Uexküll theorised that organisms can have different umwelten, even though they share the same environment. The subject of umwelt and Uexküll’s work is described by Dorion Sagan in an introduction to a collection of translations.[3] The term umwelt, together with companion terms umgebung and innenwelt, have special relevance for cognitive philosophers, roboticists and cyberneticians, since they offer a solution to the conundrum of the infinite regress of the Cartesian Theater.
• Benjamin Kuipers
• How Can We Trust a Robot? (video)
• Advances in artificial intelligence (AI) and robotics have raised concerns about the impact on our society of intelligent robots, unconstrained by morality or ethics
• Socially-Aware Navigation Using Topological Maps and Social Norm Learning
• We present socially-aware navigation for an intelligent robot wheelchair in an environment with many pedestrians. The robot learns social norms by observing the behaviors of human pedestrians, interpreting detected biases as social norms, and incorporating those norms into its motion planning. We compare our socially-aware motion planner with a baseline motion planner that produces safe, collision-free motion. The ability of our robot to learn generalizable social norms depends on our use of a topological map abstraction, so that a practical number of observations can allow learning of a social norm applicable in a wide variety of circumstances. We show that the robot can detect biases in observed human behavior that support learning the social norm of driving on the right. Furthermore, we show that when the robot follows these social norms, its behavior influences the behavior of pedestrians around it, increasing their adherence to the same norms. We conjecture that the legibility of the robot’s normative behavior improves human pedestrians’ ability to predict the robot’s future behavior, making them more likely to follow the same norm.
• Erin’s defense
• Nice slides!
• Slide 4 – narrowing from big question to dissertation topic. Nice way to set up framing
• Intellectual function vs. adaptive behavior
• Loss of self-determination
• Maker culture as a way of having your own high-dimensional vector? Does this mean that the maker culture is inherently more exploratory when compared to …?
• “Frustration is an easy way to end up in off-task behavior”
• Peer learning as gradient descent?
• Emic ethnography
• Pervasive technology in education
• Turn-taking
• Antecedent behavior consequence theory
• Reducing the burden on the educators. Low-level detection and to draw attention to the educator and annotate. Capturing and labeling
• Helina – bring the conclusions back to the core questions
• Diversity injection works! Mainstream students gained broader appreciation of students with disability
• Q: Does it make more sense to focus on potentially charismatic technologies that will include the more difficult outliers even if it requires a breakthrough? Or to make incremental improvements that can improve accessibility to some people with disabilities faster?
• Boris analytic software

# Phil 3.15.18

8:30 – 4:30 ASRC MKT

# Phil 3.9.18

8:00 – 4:30 ASRC MKT

• Still working on the nomad->flocking->stampede slide. Do I need a “dimensions” arrow?
• Labeled slides. Need to do timings – done
• And then Aaron showed up, so lots of reworking. Done again!
• Put the ONR proposal back in its original form
• An overview of gradient descent optimization algorithms
• Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne’s, caffe’s, and keras’ documentation). These algorithms, however, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This blog post aims at providing you with intuitions towards the behaviour of different algorithms for optimizing gradient descent that will help you put them to use.
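• A bare-bones sketch of the vanilla gradient descent that the post surveys, on a toy quadratic; the function and names are chosen here purely for illustration.

```python
# Vanilla gradient descent: repeatedly step against the gradient of a loss.
# Toy loss f(x) = (x - 3)^2 has gradient 2*(x - 3), so descent should
# converge toward x = 3.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step downhill
    return x

x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Each step here shrinks the error by a constant factor (1 - 2*lr for this quadratic), which is exactly the behavior the fancier optimizers in the post (momentum, Adam, etc.) try to improve on for harder loss surfaces.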