The ability for an agent to localize itself within an environment is crucial for many real-world applications. For unknown environments, Simultaneous Localization and Mapping (SLAM) enables incremental and concurrent building of and localizing within a map. We present a new, differentiable architecture, Neural Graph Optimizer, progressing towards a complete neural network solution for SLAM by designing a system composed of a local pose estimation model, a novel pose selection module, and a novel graph optimization process. The entire architecture is trained in an end-to-end fashion, enabling the network to automatically learn domain-specific features relevant to visual odometry and avoid the involved process of feature engineering. We demonstrate the effectiveness of our system on a simulated 2D maze and the 3D ViZDoom environment.
IR context -> Sociocultural context
Writing Fika. Make a few printouts of the abstract
Write up LMN4A2P thoughts
Storing a corpus (raw text, BoW, TF-IDF, matrix)
Uploading from file
Uploading from link/crawl
Corpora labeling and exploring
Index with ElasticSearch
Production of word vectors or ‘effigy documents’
Effigy search using Google CSE for public documents that are similar
Semantic (Academic, etc)
Lists (reweightable) of terms and documents
Cluster-based map (pan/zoom/search)
I’m as enthusiastic about the future of AI as (almost) anyone, but I would estimate I’ve created 1000X more value from careful manual analysis of a few high quality data sets than I have from all the fancy ML models I’ve trained combined. (Thread by Sean Taylor on Twitter, 8:33 Feb 19, 2018)
Prophet is a procedure for forecasting time series data. It is based on an additive model where non-linear trends are fit with yearly and weekly seasonality, plus holidays. It works best with daily periodicity data with at least one year of historical data. Prophet is robust to missing data, shifts in the trend, and large outliers.
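The additive structure Prophet fits can be illustrated with an ordinary least-squares toy: a linear trend plus one weekly Fourier pair recovers both components from noisy daily data. A sketch of the decomposition only, not Prophet's actual fitting procedure (which handles changepoints, yearly seasonality, and holidays):

```python
import numpy as np

# Toy additive time-series model: y(t) = trend + weekly seasonality + noise.
rng = np.random.default_rng(0)
t = np.arange(365)  # one year of daily data
y = 0.05 * t + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.5, t.size)

# Design matrix: intercept, linear trend, and one weekly Fourier pair.
X = np.column_stack([
    np.ones_like(t, dtype=float),
    t.astype(float),
    np.sin(2 * np.pi * t / 7),
    np.cos(2 * np.pi * t / 7),
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

trend = X[:, :2] @ coef[:2]
seasonal = X[:, 2:] @ coef[2:]
print(coef[1], coef[2])  # slope near 0.05, weekly sine amplitude near 3
```

Prophet's robustness to missing data falls out of this framing: rows with missing y simply drop out of the regression rather than breaking a recursive update.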
Cambridge researchers developed a game to help people understand, broadly, how fake news works by having users play trolls and create misinformation. By “placing news consumers in the shoes of (fake) news producers, they are not merely exposed to small portions of misinformation,” the researchers write in their accompanying paper.
In recent years, many physicists have used evolutionary game theory combined with a complex systems perspective in an attempt to understand social phenomena and challenges. Prominent among such phenomena is the issue of the emergence and sustainability of cooperation in a networked world of selfish or self-focused individuals. The vast majority of research done by physicists on these questions is theoretical, and is almost always posed in terms of agent-based models. Unfortunately, more often than not such models ignore a number of facts that are well established experimentally, and are thus rendered irrelevant to actual social applications. I here summarize some of the facts that any realistic model should incorporate and take into account, discuss important aspects underlying the relation between theory and experiments, and discuss future directions for research based on the available experimental knowledge.
We investigate the alignment of international attention of news media organizations within 193 countries with the expressed international interests of the public within those same countries from March 7, 2016 to April 14, 2017. We collect fourteen months of longitudinal data of online news from Unfiltered News and web search volume data from Google Trends and build a multiplex network of media attention and public attention in order to study its structural and dynamic properties. Structurally, the media attention and the public attention are both similar and different depending on the resolution of the analysis. For example, we find that 63.2% of the country-specific media and the public pay attention to different countries, but local attention flow patterns, which are measured by network motifs, are very similar. We also show that there are strong regional similarities with both media and public attention that is only disrupted by significantly major worldwide incidents (e.g., Brexit). Using Granger causality, we show that there are a substantial number of countries where media attention and public attention are dissimilar by topical interest. Our findings show that the media and public attention toward specific countries are often at odds, indicating that the public within these countries may be ignoring their country-specific news outlets and seeking other online sources to address their media needs and desires.
Sent Jen a note about carpooling to CHIIR. Need to check out one day earlier
Two phases – theoretical model building, then study
Implications for design based on
Something about velocity? Academic journal papers (slow production, slow consumption) at one end and twitter on the other (fast production, fast consumption)
Finished the first draft of the CI 2018 extended abstract!
And I also figured out how to run the sub projects in the Ultimate Angular src collection. You need to go to the root directory for the chapter, run yarn install, then yarn start. Everything works then.
This is the kind of data that compels us to rethink how we understand Twitter — and what I feel are more influential platforms for reaching regular people that include Facebook, Instagram, Google, and Tumblr, as well as understand ad tech tracking and RSS feed–harvesting as part of the greater propaganda ecosystem.
The News Landscape (NELA) Toolkit is an open source toolkit for the systematic exploration of the news landscape. The goal of NELA is to both speed up human fact-checking efforts and increase the understanding of online news as a whole. NELA is made up of multiple independent modules that work at article-level granularity: reliability prediction, political impartiality prediction, text objectivity prediction, and reddit community interest prediction, as well as modules that work at source-level granularity: reliability prediction, political impartiality prediction, and content-based feature visualization.
I built ANN-benchmarks to address this. It pits a bunch of implementations (including Annoy) against each other in a death match: which one can return the most accurate nearest neighbors in the fastest time possible. It’s not a new project, but I haven’t actively worked on it for a while.
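The core metric behind that death match can be sketched in a few lines: recall@k against a brute-force ground truth. Here a crude random-hyperplane filter stands in for a real index like Annoy; this is illustrative only, not ANN-benchmarks' actual harness:

```python
import numpy as np

# recall@k: what fraction of the true k nearest neighbors does the
# approximate search return?
rng = np.random.default_rng(1)
data = rng.normal(size=(2000, 32))
query = rng.normal(size=32)
k = 10

# Exact answer by brute force.
true_ids = np.argsort(np.linalg.norm(data - query, axis=1))[:k]

# Crude "approximate" index: only search points on the query's side of a few
# random hyperplanes (the idea behind Annoy's random-projection trees).
planes = rng.normal(size=(4, 32))
mask = ((data @ planes.T > 0) == (planes @ query > 0)).all(axis=1)
cand = np.where(mask)[0]
approx_ids = cand[np.argsort(np.linalg.norm(data[cand] - query, axis=1))[:k]]

recall = len(set(true_ids) & set(approx_ids)) / k
print(f"searched {cand.size}/{data.shape[0]} points, recall@{k} = {recall:.2f}")
```

The speed/accuracy trade-off is visible even in the toy: searching fewer candidates is faster, but any true neighbor on the wrong side of a hyperplane is lost.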
Technology is increasingly shaping our social structures and is becoming a driving force in altering human biology. Besides, human activities already proved to have a significant impact on the Earth system which in turn generates complex feedback loops between social and ecological systems. Furthermore, since our species evolved relatively fast from small groups of hunter-gatherers to large and technology-intensive urban agglomerations, it is not a surprise that the major institutions of human society are no longer fit to cope with the present complexity. In this note we draw foundational parallelisms between neurophysiological systems and ICT-enabled social systems, discussing how frameworks rooted in biology and physics could provide heuristic value in the design of evolutionary systems relevant to politics and economics. In this regard we highlight how the governance of emerging technology (i.e. nanotechnology, biotechnology, information technology, and cognitive science), and the one of climate change both presently confront us with a number of connected challenges. In particular: historically high level of inequality; the co-existence of growing multipolar cultural systems in an unprecedentedly connected world; the unlikely reaching of the institutional agreements required to deviate abnormal trajectories of development. We argue that wise general solutions to such interrelated issues should embed the deep understanding of how to elicit mutual incentives in the socio-economic subsystems of Earth system in order to jointly concur to a global utility function (e.g. avoiding the reach of planetary boundaries and widespread social unrest). We leave some open questions on how techno-social systems can effectively learn and adapt with respect to our understanding of geopolitical complexity.
Social networks are frequently cited as vital for facilitating successful adaptation and transformation in linked social–ecological systems to overcome pressing resource management challenges. Yet confusion remains over the precise nature of adaptation vs. transformation and the specific social network structures that facilitate these processes. Here, we adopt a network perspective to theorize a continuum of structural capacities in social–ecological systems that set the stage for effective adaptation and transformation. We begin by drawing on the resilience literature and the multilayered action situation to link processes of change in social–ecological systems to decision making across multiple layers of rules underpinning societal organization. We then present a framework that hypothesizes seven specific social–ecological network configurations that lay the structural foundation necessary for facilitating adaptation and transformation, given the type and magnitude of human action required. A key contribution of the framework is explicit consideration of how social networks relate to ecological structures and the particular environmental problem at hand. Of the seven configurations identified, three are linked to capacities conducive to adaptation and three to transformation, and one is hypothesized to be important for facilitating both processes.
Starting to trim paper down to three pages
Starting on CHIIR slide stack – Still need to add future work
From October 1993 to late 1994, RTLM was used by Hutu leaders to advance an extremist Hutu message and anti-Tutsi disinformation, spreading fear of a Tutsi genocide against Hutu, identifying specific Tutsi targets or areas where they could be found, and encouraging the progress of the genocide. In April 1994, Radio Rwanda began to advance a similar message, speaking for the national authorities, issuing directives on how and where to kill Tutsis, and congratulating those who had already taken part.
Set up Fika Writing group that will meet Wednesdays at 4:00. We’ll see how that goes.
Took four much needed days off on Sanibel island. Forgot to pack some things? Need to call the hotel at (239) 215-3401
Starting CI 2018 abstract. And oddly, the abstract isn’t showing??? Sent a note to the conference chair. In the meantime, I have a subsection for the abstract. It appears to be acmlarge for the most part, so maybe use that?
Was going to get back to Angular, but stuck with 404s on CRUD operations:
Working on the 3D map application. Decided to go with JavaFX and their 3d implementation. It’s going quickly.
I’ve also gotten the graph generator creating spreadsheets that the map app can read in. So the next job will be to wire everything together, where the position information is based on the nomad trajectories, with the size and visitor (height) data being overlaid with the different colors.
Efficient computation of N-body forces
By: Jeffrey Heer
Computers can serve as exciting tools for discovery, with which we can model and explore complex phenomena. For example, to test theories about the formation of the universe, we can perform simulations to predict how galaxies evolve. To do this, we could gather the estimated mass and location of stars and then model their gravitational interactions over time.
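The naive version of those gravitational interactions is direct summation over every pair of bodies, which is where the O(n^2) cost comes from (a minimal sketch, not Heer's code; tree codes like Barnes-Hut approximate distant groups to cut this to O(n log n)):

```python
import numpy as np

G = 6.674e-11  # gravitational constant

def pairwise_forces(pos, mass):
    """Net gravitational force on each body from all the others, O(n^2)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[j] - pos[i]             # vector from body i to body j
            dist = np.linalg.norm(r)
            forces[i] += G * mass[i] * mass[j] * r / dist**3
    return forces

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mass = np.array([1e10, 1e10, 1e10])
f = pairwise_forces(pos, mass)
# Newton's third law: internal forces cancel, so the total is ~zero.
print(f.sum(axis=0))
```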
More Angular, feeling my way through the Http code, which has been deprecated. Looked at the similar code in Tour of Heroes. We’ll see if the old stuff works and then try to update? Need to ask Jeremy.
Back to BIC. Evolutionary reasons for cooperation as group fitness, where group payoff is maximized. This makes the stag salient in the stag hunt.
A thorough explanation of synchronization/phase locking. My mental model is this: Imagine a set of coaxial but randomly oscillating identical weights sliding back and forth in their section of lightweight tubing. From the outside, the tube would be stationary, as all the forces would be cancelling. If the weights can synchronize, then the lightweight tube will be doing most of the moving. Since the mass of the tube is lower than the mass of the combined weights, the force required for the whole system will be lower, and as a result (I think?) the system will run more efficiently and longer. Need to work out the math.
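One standard way to work out that math is the Kuramoto model: coupled phase oscillators whose order parameter r sits near 0 when phases are random and climbs toward 1 once coupling exceeds a threshold. A minimal sketch, not tied to the tube-and-weights setup:

```python
import math, cmath, random

# Kuramoto model: N phase oscillators with random natural frequencies,
# coupled through the mean field. r = |mean(e^{i*theta})| measures coherence.
random.seed(2)
N, K, dt, steps = 50, 2.0, 0.05, 2000
omega = [random.gauss(0, 0.3) for _ in range(N)]           # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases

def order(thetas):
    return abs(sum(cmath.exp(1j * t) for t in thetas) / len(thetas))

r0 = order(theta)
for _ in range(steps):
    z = sum(cmath.exp(1j * t) for t in theta) / N   # complex mean field
    r, psi = abs(z), cmath.phase(z)
    # Each oscillator is pulled toward the mean phase, strength K * r.
    theta = [t + dt * (w + K * r * math.sin(psi - t)) for t, w in zip(theta, omega)]
r1 = order(theta)
print(f"order parameter: {r0:.2f} -> {r1:.2f}")
```

With coupling K well above the critical value for this frequency spread, the ensemble locks; drop K toward zero and r1 stays near the incoherent baseline of roughly 1/sqrt(N).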
Most of us last saw calculus in school, but derivatives are a critical part of machine learning, particularly deep neural networks, which are trained by optimizing a loss function. Pick up a machine learning paper or the documentation of a library such as PyTorch and calculus comes screeching back into your life like distant relatives around the holidays. And it’s not just any old scalar calculus that pops up—you need differential matrix calculus, the shotgun wedding of linear algebra and multivariate calculus.
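A concrete instance of the matrix calculus in question: for the loss L = ||Wx - y||^2, the gradient with respect to the whole matrix W is 2 (Wx - y) x^T, which a finite-difference check confirms entry by entry (a sketch of the standard identity, not from the article):

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)
y = rng.normal(size=3)

def loss(W):
    return float(np.sum((W @ x - y) ** 2))

# Analytic matrix gradient: dL/dW = 2 (Wx - y) x^T  (an outer product).
grad_analytic = 2 * np.outer(W @ x - y, x)

# Numeric check of one entry by forward difference.
eps = 1e-6
Wp = W.copy()
Wp[1, 2] += eps
grad_numeric = (loss(Wp) - loss(W)) / eps
print(grad_analytic[1, 2], grad_numeric)
```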
Explaining the evolution of any human behavior trait (say, a tendency to play C in Prisoner’s Dilemmas) raises three questions. The first is the behavior selection question: why did this trait, rather than some other, get selected by natural selection? Answering this involves giving details of the selection process, and saying what made the disposition confer fitness in the ecology in which selection took place. But now note that ‘When a behavior evolves, a proximate mechanism also must evolve that allows the organism to produce the target behavior. Ivy plants grow toward the light. This is a behavior, broadly construed. For phototropism to evolve, there must be some mechanism inside of ivy plants that causes them to grow in one direction rather than in another’ (Sober and Wilson 1998, pp. 199-200). This raises the second question, the production question: how is the behavior produced within the individual-what is the ‘proximate mechanism’? In the human case, the interest is often in a psychological mechanism: we ask what perceptual, affective and cognitive processes issue in the behavior. Finally, note that these processes must also have evolved, so an answer to the second question brings a third: why did this proximate mechanism evolve rather than some other that could have produced the same behavior? This is the mechanism selection question. (pg 95)
These are good questions to answer, or at least address. Roughly, I think my answers are:
Selection Question: The three phases are a very efficient way to exploit an environment
Production Question: Neural coupling, as developed in physical swarms and moving on to cognitive clustering
Mechanism Question: Oscillator frequency locking provides a natural foundation for collective behavior. Dimension reduction is how axes are selected for matching.
Tweaked my hypotheses from this post. I need to promote to a Phlog page.
Using Self-Organizing Maps to solve the Traveling Salesman Problem
The Traveling Salesman Problem is a well known challenge in Computer Science: it consists of finding the shortest route possible that traverses all cities in a given map only once. To solve it, we can try to apply a modification of the Self-Organizing Map (SOM) technique. Let us take a look at what this technique consists of, and then apply it to the TSP once we understand it better.
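A minimal sketch of the idea (mine, not the post's implementation): treat a ring of neurons as an elastic band, repeatedly pull the winning neuron and its ring-neighbors toward a randomly chosen city, and read the tour off as the order in which each city's nearest neuron appears on the ring:

```python
import math, random

# SOM-for-TSP sketch: 8 cities on a circle, a ring of 64 neurons.
random.seed(4)
cities = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
          for i in range(8)]
n_neurons = 64
neurons = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n_neurons)]

lr, radius = 0.8, n_neurons / 4
for _ in range(5000):
    cx, cy = random.choice(cities)
    win = min(range(n_neurons),
              key=lambda j: (neurons[j][0] - cx) ** 2 + (neurons[j][1] - cy) ** 2)
    for j in range(n_neurons):
        d = min(abs(j - win), n_neurons - abs(j - win))  # distance along the ring
        if d < radius:
            g = math.exp(-d * d / (2 * max(radius, 1.0) ** 2))
            nx, ny = neurons[j]
            neurons[j] = (nx + lr * g * (cx - nx), ny + lr * g * (cy - ny))
    lr *= 0.999          # decay learning rate...
    radius = max(1.0, radius * 0.999)  # ...and neighborhood size

# Tour: visit cities in the order their winning neurons occur on the ring.
def winner(i):
    return min(range(n_neurons),
               key=lambda j: (neurons[j][0] - cities[i][0]) ** 2
                             + (neurons[j][1] - cities[i][1]) ** 2)

tour = sorted(range(len(cities)), key=winner)
length = sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print(tour, round(length, 2))
```

The decaying neighborhood is what makes this work: early on, large moves untangle the ring globally; later, small local moves snap each neuron onto its nearest city.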
Everybody that has an interest in influencing public opinion will happily pay a handful of Dollars to amplify their voices. Governments, political groups, corporations, traders, and just simple plain trolls will continue to shout through bot armies—as long as it is so cheap. Bots are cheaper than buying ad space, less risky than a network of spies, more efficient and less prone to failure than creating 50 fake accounts by hand. If bots could be identified and tagged, the fake news industry would suffer a heavy blow. Here is how we can make this happen.
Groups are defined by a common location, orientation, and velocity through a physical or virtual space. They influence each other dependent on awareness and trust. The lower the number of dimensions, the easier it is to produce a group.
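A hypothetical sketch of that definition (the field names and thresholds here are illustrative assumptions, not part of the model): two agents share a group when their positions, headings, and speeds all fall within chosen tolerances.

```python
import math

# Illustrative group-membership test over the three shared dimensions:
# location, orientation, and velocity. Thresholds are arbitrary placeholders.
def same_group(a, b, d_pos=1.0, d_heading=math.radians(30), d_speed=0.5):
    dx = a["pos"][0] - b["pos"][0]
    dy = a["pos"][1] - b["pos"][1]
    # Smallest angular difference between headings, wrapped to [-pi, pi].
    dh = abs((a["heading"] - b["heading"] + math.pi) % (2 * math.pi) - math.pi)
    return (math.hypot(dx, dy) < d_pos
            and dh < d_heading
            and abs(a["speed"] - b["speed"]) < d_speed)

a = {"pos": (0.0, 0.0), "heading": 0.0, "speed": 1.0}
b = {"pos": (0.5, 0.2), "heading": 0.1, "speed": 1.2}
c = {"pos": (0.5, 0.2), "heading": math.pi, "speed": 1.2}  # opposite heading
print(same_group(a, b), same_group(a, c))  # prints: True False
```

Adding dimensions (awareness, trust weights) shrinks the matching volume, which is one way to read the claim that fewer dimensions make groups easier to produce.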
This post examines one full spectrum case to illustrate the method. @DFRLab examined this case in an earlier post; since then, further evidence emerged, which changed and improved our understanding of the technique.
More Angular. Nice progress. I had some issues where I wanted to keep an old version of the app directory and did a refactor. This (of course) refactored the calling program, so I broke quite a few things figuring it out. That being said, Angular 1.5 is really, really nice.
Long chat about handling Trolls in the discussion app