Phil 11.14.17

7:00 – 4:00 ASRC MKT

  • Reinforcement Learning: An Introduction (2nd Edition)
    • Richard S. Sutton (Scholar): I am seeking to identify general computational principles underlying what we mean by intelligence and goal-directed behavior. I start with the interaction between the intelligent agent and its environment. Goals, choices, and sources of information are all defined in terms of this interaction. In some sense it is the only thing that is real, and from it all our sense of the world is created. How is this done? How can interaction lead to better behavior, better perception, better models of the world? What are the computational issues in doing this efficiently and in realtime? These are the sort of questions that I ask in trying to understand what it means to be intelligent, to predict and influence the world, to learn, perceive, act, and think. In practice, I work primarily in reinforcement learning as an approach to artificial intelligence. I am exploring ways to represent a broad range of human knowledge in an empirical form–that is, in a form directly in terms of experience–and in ways of reducing the dependence on manual encoding of world state and knowledge.
    • Andrew G. Barto : Most of my recent work has been about extending reinforcement learning methods so that they can work in real-time with real experience, rather than solely with simulated experience as in many of the most impressive applications to date. Of particular interest to me at present is what psychologists call intrinsically motivated behavior, meaning behavior that is done for its own sake rather than as a step toward solving a specific problem of clear practical value. What we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. Recent work by my colleagues and me on what we call intrinsically motivated reinforcement learning is aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that form the building blocks for open-ended learning. Visit the Autonomous Learning Laboratory page for some more details.
  • There was a piece on BBC Business Daily on social network moderators. Aside from being a horrible job, the show touched on how international criminal cases often rest on video uploaded to services like Twitter and Facebook. This process worked as long as the moderators were human and could tell the difference between criminal activity and the documentation of criminal activity, but now that ML solutions are being implemented, these videos are being deleted. First, this shows how ad hoc the use of these networks is as a place for legal and journalistic activity. Second, it shows the need for a mechanism that is built to support these activities, with a more expansive role for reporter/researcher and editor. This is near the center of gravity for the TACJOUR project.
  • Flying home yesterday, I was thinking about how the maps need to get built. One way of thinking about it is that you are given a set of directions that run through a geographic area and have to build a map from that. We know the adjacencies from the sequence of the directions, so we should be able to build a map by overlaying all the routes in an n-dimensional space. I was then reading Technical Perspective: Exploring a Kingdom by Geodesic Measures, and at least some of the concepts appear related. In the case of the game at least, we have the center ‘post’, which is the discussion starting point. The discussion is (or can be) a random walk towards the poles created in that iteration. Multiple walks create multiple paths over this unknown manifold. I’m thinking that this should be enough information to build a self-organizing map. This might help: Visual analysis of self-organizing maps
    • Had some discussions with Aaron about this. It should be pretty straightforward to build a map, grid or hex, that trajectories can be recorded from. Then the trajectories can be used to reconstruct the map. Success is evaluated by the similarity between the source map and the reconstructed one.
    • I could also add recorded trajectories to the generated spreadsheet, as a list of the cells that the agent traverses. Comparing explore, flocking, and stampede behaviors via their reconstructed maps?
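As a quick sanity check on the reconstruction idea, here's a minimal sketch (my own toy code, not the simulator): record trajectories as lists of grid cells, union them into a reconstructed map, and score the result against the source map.

```python
def reconstruct(trajectories):
    """Union of all cells visited by any recorded trajectory."""
    visited = set()
    for traj in trajectories:
        visited.update(traj)
    return visited

def similarity(source_cells, visited):
    """Fraction of the source map's cells recovered by the trajectories."""
    return len(visited & source_cells) / len(source_cells)

# Toy 4x4 open map and two short walks
source = {(x, y) for x in range(4) for y in range(4)}
walks = [[(0, 0), (0, 1), (1, 1), (2, 1)],
         [(3, 3), (3, 2), (2, 2), (2, 1)]]
print(similarity(source, reconstruct(walks)))  # 7 of 16 cells -> 0.4375
```

With real explore/flocking/stampede runs, the interesting part is how this score differs across behaviors for the same budget of steps.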
  • Continuing with From Keyword Search to Exploration
    • The mSpace Browser is a multi-faceted, column-based client for exploring large data sets in the way that makes sense to you. You decide the columns and the order that best suit your browsing needs.
    • Yippy search
    • Exalead search
    • pg 62, animation
  • Continuing along with Angular
  • Multiple discussions with Aaron about next steps, particularly for anomaly detection

Phil 11.8.17

ASRC MKT 7:00 – 5:00, with about two hours for personal time

  • After the fall of DNAinfo, it’s time to stop hoping local news will scale
    • I think people understand that this sensation of unreality has a lot to do with the platforms that deliver our news, because Facebook and Google package journalism and bullshit identically. But I’d argue that it also has a lot to do with the death of local news to a degree few of us recognize.
    • This is not unheard of in digital local news: People pay to drink with the investigative reporters at The Lens in New Orleans and to watch Steelers games with the staff of The Incline in Pittsburgh.
  • And as a counterbalance: Weaken from Within
    • The turtle didn’t know, and never will, that information warfare is the purposeful training of an enemy in how to remove its own shell.
  • Rescuing Collective Wisdom when the Average Group Opinion Is Wrong
    • Yet the collective knowledge will remain inaccessible to us unless we are able to find efficient knowledge aggregation methods that produce reliable decisions based on the behavior or opinions of the collective’s members.
    • Our analysis indicates that in the ideal case, there should be a matching between the aggregation procedure and the nature of the knowledge distribution, correlations, and associated error costs. This leads us to explore how machine learning techniques can be used to extract near-optimal decision rules in a data-driven manner.
  • Inferring Relations in Knowledge Graphs with Tensor Decompositions
  • From today’s Pulse of the Planet episode:
    • Colin Ellard is a cognitive neuroscientist and the author of Places of the Heart: the Psychogeography of Everyday Life. He says that the choices we make in siting a house or even where we choose to sit in a crowded room give us clues about the way humans have evolved. The idea of prospect and refuge is an inherently biological idea. It goes back through the history of human beings. In fact for any kind of animal selecting a habitat, kind of the holy grail of good habitat choice can be summed up by the principle of seeing but not being seen.
      Ideally what we want is a set of circumstances where we have some protection, visual protection, in the sense of not being able to be easily located ourselves, and that’s Refuge. But we also want to be able to know what’s going on around us. We need to be able to see out from wherever that refuge is. And that’s Prospect. The operation of our preference for situations that are high in both refuge and prospect is something that cuts across everything we build or everywhere we find ourselves.
  • So, prospect-refuge theory sounds interesting. It seems to come from psychology rather than ecology-related fields. Still, it’s a discussion of affordances. Searching around, I found this: Methodological characteristics of research testing prospect–refuge theory: a comparative analysis. Couldn’t get it directly, so I’m trying ILL.
    • Prospect–refuge theory proposes that environments which offer both outlook and enclosure provoke not only feelings of safety but also of spatially derived pleasure. This theory, which was adopted in environmental psychology, led Hildebrand to argue for its relevance to architecture and interior design. Hildebrand added further spatial qualities to this theory – including complexity and order – as key measures of the environmental aesthetics of space. Since that time, prospect–refuge theory has been associated with a growing number of works by renowned architects, but so far there is only limited empirical evidence to substantiate the theory. This paper analyses and compares the methods used in 30 quantitative attempts to examine the validity of prospect–refuge theory. Its purpose is not to review the findings of these studies, but to examine their methodological bases and biases and comment on their relevance for future research in this field.
    • This is the book by Hildebrand: The Wright Space: Patterns and Meaning in Frank Lloyd Wright’s Houses. Ordered.
  • Ok, back to Angular2
    • Done with chapter 3.

Phil 11.3.17

7:00 – ASRC MKT

  • Good comments from Cindy on yesterday’s work
  • Facebook’s 2016 Election Team Gave Advertisers A Blueprint To A Divided US
  • Some flocking activity? AntifaNov4
  • I realized that I had not added the herding variables to the Excel output. Fixed.
  • DINH Q. LÊ: South China Sea Pishkun
    • In his new work, South China Sea Pishkun, Dinh Q. Lê references the horrifying events that occurred on April 30th, 1975 (the day Saigon fell), as hundreds of thousands of people tried to flee Saigon from the encroaching North Vietnamese Army and Viet Cong. The mass exodus was a “Pishkun,” a term used to describe the way in which the Blackfoot American Indians would drive roaming buffalo off cliffs in what is known as a buffalo jump.
  • Back to writing – got some done, mostly editing.
  • Stochastic gradient descent with momentum
  • Referred to in this: There’s No Fire Alarm for Artificial General Intelligence
    •  AlphaGo did look like a product of relatively general insights and techniques being turned on the special case of Go, in a way that Deep Blue wasn’t. I also updated significantly on “The general learning capabilities of the human cortical algorithm are less impressive, less difficult to capture with a ton of gradient descent and a zillion GPUs, than I thought,” because if there were anywhere we expected an impressive hard-to-match highly-natural-selected but-still-general cortical algorithm to come into play, it would be in humans playing Go.
  • In another article: The AI Alignment Problem: Why It’s Hard, and Where to Start
    • This is where we are on most of the AI alignment problems, like if I ask you, “How do you build a friendly AI?” What stops you is not that you don’t have enough computing power. What stops you is that even if I handed you a hypercomputer, you still couldn’t write the Python program that if we just gave it enough memory would be a nice AI.
    • I think this is where models of flocking and “healthy group behaviors” matter. Exploring in small numbers is healthy – it defines the bounds of the problem space. Flocking is a good way to balance bounded trust and balanced awareness. Runaway echo chambers are very bad. These patterns are recognizable, regardless of whether they come from humans, machines, or bison.
  • Added contacts and invites. I think the DB is ready: polarizationgameone
  • While out riding, I realized what I can do to show results in the herding paper. There are at least four conditions (the first being the no-herding baseline):
    1. No herding
    2. Take the average of the herd
    3. Weight a random agent
    4. Weight random agents (randomly select an agent and leave it that way for a few cycles, then switch)
  • Look at the time each takes to converge and see which one is best. Also look at the DTW to see if they would be different populations.
  • Then re-do the above for the two populations inverted case (max polarization)
  • Started to put in the code changes for the above. There is now a combobox for herding with the above options.
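The four combobox options could be sketched like this (a toy Python model; the function and mode names are my own, not the actual simulator code):

```python
import random

def herder_heading(headings, mode, pinned_index=0):
    """Heading the herder broadcasts under each of the four options.
    headings: current heading (radians) of each agent in the herd."""
    if mode == "no_herding":           # 1. baseline: herder stays silent
        return None
    if mode == "herd_average":         # 2. take the average of the herd
        return sum(headings) / len(headings)
    if mode == "random_agent":         # 3. weight a fresh random agent each step
        return random.choice(headings)
    if mode == "pinned_agent":         # 4. keep one randomly chosen agent for a
        return headings[pinned_index]  #    few cycles before switching
    raise ValueError(f"unknown mode: {mode}")
```

Convergence time per mode, plus DTW between the resulting trajectory populations, would then distinguish the strategies.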

Phil 11.2.17

ASRC MKT 7:00 – 4:30

  • Add a switch to the GPM that makes the adversarial herders point in opposite directions, based on this: Russia organized 2 sides of a Texas protest and encouraged ‘both sides to battle in the streets’
  • It’s in and running. Here’s a screenshot: 2017-11-02 There are some interesting things to note. First, the vector is derived from the average heading of the largest group (green in this case). This explains why the green agents are more tightly clustered than the red ones. In the green case, the alignment is intrinsic. In the red case, it’s extrinsic. What this says to me is that although adversarial herding works well when amplifying the heading already present, it is not as effective when enforcing a heading that does not already predominate. That being said, when we have groups existing in opposition to each other, that is a tragically easy thing to enhance.
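A minimal sketch of the opposite-direction mechanism described above (my own toy code, assuming headings in radians; function names are mine): take the circular mean heading of the larger group, amplify it, and push the smaller group the opposite way.

```python
import math

def mean_heading(group):
    """Circular mean of a list of headings (radians)."""
    x = sum(math.cos(h) for h in group)
    y = sum(math.sin(h) for h in group)
    return math.atan2(y, x)

def adversarial_vectors(group_a, group_b):
    """Opposite-direction switch: the larger group's own mean heading is
    amplified (intrinsic); the smaller group gets the reverse heading
    imposed on it (extrinsic)."""
    big = group_a if len(group_a) >= len(group_b) else group_b
    h = mean_heading(big)
    return h, (h + math.pi) % (2 * math.pi)
```

Because the extrinsic vector ignores the smaller group's own headings, that group should stay more loosely clustered, which matches the screenshot.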
  • Hierarchical Representations for Efficient Architecture Search
    • We explore efficient neural architecture search methods and present a simple yet powerful evolutionary algorithm that can discover new architectures achieving state of the art results. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches and represents the new state of the art for evolutionary strategies on this task. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the architecture search time from 36 hours down to 1 hour.
  • Continuing with the schema. Here’s where we are today: polarizationgameone

Phil 11.1.17

Phil 7:00 – ASRC MKT

    • The identity of the machine is just as important as the identity of the human, argues Jeff Hudson.
    • Agent-based simulation for economics: The Tool Central Bankers Need Most Now
    • Introducing Vega-Lite 2.0 (from MIT Interactive Data Lab)
      • Vega-Lite enables concise descriptions of visualizations as a set of encodings that map data fields to the properties of graphical marks. Vega-Lite uses a portable JSON format that compiles to full specifications in the larger Vega language. Vega-Lite includes support for data transformations such as aggregation, binning, filtering, and sorting, as well as visual transformations such as stacking and faceting into small multiples.
    • Wayne says ‘awareness’ is too overloaded, at least in CSCW where it means ‘a shared awareness’. What about alertness, cognition, or perception?
    • Started Simulating Flocking and Herding in Belief Space. Shared with Wayne, Aaron and Cindy
    • Yay, finally got the array problems solved. The problem is that a PHP array is actually an ordered map, not a numerically indexed list. But you can convert any PHP array into a zero-indexed one using array_values(). So now all my arrays begin at zero, as God intended.
    • Meeting with the lads. Some really good stuff.
      • Add tmanage
        • dungeon_master
        • game
        • scenario
        • min_players
        • max_players
        • time_to_live
        • state (waiting, running, timeout, terminated, success)
        • open (true/false)
        • visible
      • Add trating
        • target_message
        • relevance
        • quality
        • vote
        • rating_player
      • Add ttopics
        • title
        • description
        • parent
      • Add tplayerstate
        • player
        • game
        • state (waiting, playing, finished, terminated)
      • Add tcontact
        • player
        • name
        • email
        • facebook (oAuth)
        • google (oAuth)
      • Add tinvite
        • contact
        • game
        • player


  • Humans + Machines (CNAS livestream)
    12:30 – 1:35 PM
    Dr. Jeff Clune, Assistant Professor of Computer Science, University of Wyoming
    Kimberly Jackson Ryan, Senior Human Systems Engineer, Draper Laboratory
    Dr. John Hawley, Engineering Psychologist, Army Research Laboratory
    Dr. Caitlin Surakitbanharn, Research Scientist, Purdue University
    Dan Lamothe, National Security Writer, The Washington Post (moderator)

Phil 10.31.17

7:00 – 4:30 ASRC MKT

    • Wrote up notes from yesterday’s meeting
    • Look for JCMC requirements
    • Change the rest of the “we” to “I” in the DC, then submit. Done, did a spell check because I had forgotten to integrate a spell checker!
    • Saw this today on the Google Research Blog: Closing the Simulation-to-Reality Gap for Deep Robotic Learning. In it they show how simulation can be used to improve deep learning because of the vast increase in conditions that can be simulated rather than found or built in the real world. The reason that it’s important in my work is that the simulation can feed and support the training of the classifiers once the simulation becomes sufficiently realistic.
    • Because I can’t stop reading horrible things, ordered Totalitarianism, Terrorism and Supreme Values: History and Theory, by  Peter Bernholz
    • Not the most exciting thing, but yay!
      ID	posted		message					playerID	gameID	parentID
      1	1509458541	message 0 of 20 by Abbe, Karleen	5	6	
      2	1509458541	message 1 of 20 by Abbey, Abbi	7	6	
      3	1509458541	message 2 of 20 by Abbey, Abbi, responding to message 1	7	6	2
      4	1509458542	message 3 of 20 by Abbe, Karleen, responding to message 2	5	6	3
      5	1509458542	message 4 of 20 by Abbe, Karleen, responding to message 1	5	6	2
      6	1509458542	message 5 of 20 by Abbe, Karleen, responding to message 4	5	6	5
      7	1509458542	message 6 of 20 by Abbe, Karleen, responding to message 3	5	6	4
      8	1509458542	message 7 of 20 by Abbe, Karleen, responding to message 1	5	6	2
      9	1509458542	message 8 of 20 by Abbe, Karleen, responding to message 1	5	6	2
      10	1509458542	message 9 of 20 by Aaren, Abbie, responding to message 2	3	6	3
      11	1509458542	message 10 of 20 by Abbey, Abbi, responding to message 5	7	6	6
      12	1509458542	message 11 of 20 by Abbe, Karleen, responding to message 10	5	6	11
      13	1509458542	message 12 of 20 by Abbey, Abbi, responding to message 7	7	6	8
      14	1509458542	message 13 of 20 by Aaren, Abbie	3	6	
      15	1509458542	message 14 of 20 by Abbe, Karleen, responding to message 8	5	6	9
      16	1509458542	message 15 of 20 by Abbe, Karleen, responding to message 11	5	6	12
      17	1509458542	message 16 of 20 by Abbe, Karleen	5	6	
      18	1509458542	message 17 of 20 by Abbe, Karleen, responding to message 4	5	6	5
      19	1509458542	message 18 of 20 by Aaren, Abbie, responding to message 14	3	6	15
      20	1509458542	message 19 of 20 by Aaren, Abbie, responding to message 2	3	6	3
  • Cleaning up some cases where scenario is set to null. Fixed. It’s the first array index problem again. Grrrrr. Ok, broke some things trying to make things better….
    • Then it’s time to make some REST interfaces
    • Meeting with Cindy. Much progress!
      • User-specified scenarios, seeded with some fun topics like conspiracy theories
      • Private deliberations.
      • Esperanto for verdict: verdikto
      • Lobbies for collecting users
      • Game starts when a DM-specified minimum is met, though there may be time to accumulate up to a max as well
      • Game ‘dies’ if no contribution (by all players?) in a certain window
      • One user can kill a game by withdrawing. This can be attached to a user (troll), so the player can anonymously block in the future
      • Games can be respawned, optionally without a triggering troll from the last time
      • Games/Scenarios can be cloned
      • Highest-quality games that reach a verdict are featured on the site. Quality could be determined by tagging or NLP+heuristics.


Phil 10.30.17

7:00 – 4:30 ASRC MKT

  • The discussion and conclusion
  • Tweaked the “Future Work” section of the CHIIR DC proposal to reflect the herding work. More words means fewer bullet points!
  • Updated Java and XAMPP on my home machine
  • Pointed the IDE at the correct places
  • I don’t think I have PhpInspections (EA Extended) installed at work? It does nice things – Have it now
  • Working through creating a strawman game. Having some issues with a one-to-many relationship with RedBeanPHP. Ah – it’s because you have to sync the beans. I think rather than have a game point at all the players, I’ll have the players point at the scenario, and the chat messages point at the game and players.
  • Got that mostly working, but having null player issues
  • Important PHP issue – arrays don’t need to start at zero! The bean arrays are indexed with respect to their db id!
  • Meeting with Wayne
  • The DC is good to submit
  • Start working on a JCMC article that connects the flocking model to qualitative theory.
  • Keep on working on the game. Possible project for a class/group in either 729 – design and evaluate class (Komlodi) or 728 – Online Communities & Social Media (Branham)

Phil 10.11.17

7:00 – 3:30 ASRC MKT

  • Call ACK today about landing pad 7s. Nope – closed today
  • The Thirteenth International Conference on Spatial Information Theory (COSIT 2017)
  • Topic-Relevance Map: Visualization for Improving Search Result Comprehension
    • We introduce topic-relevance map, an interactive search result visualization that assists rapid information comprehension across a large ranked set of results. The topic-relevance map visualizes a topical overview of the search result space as keywords with respect to two essential information retrieval measures: relevance and topical similarity. Non-linear dimensionality reduction is used to embed high-dimensional keyword representations of search result data into angles on a radial layout. Relevance of keywords is estimated by a ranking method and visualized as radiuses on the radial layout. As a result, similar keywords are modeled by nearby points, dissimilar keywords are modeled by distant points, more relevant keywords are closer to the center of the radial display, and less relevant keywords are distant from the center of the radial display. We evaluated the effect of the topic-relevance map in a search result comprehension task where 24 participants were summarizing search results and produced a conceptualization of the result space. The results show that topic-relevance map significantly improves participants’ comprehension capability compared to a conventional ranked list presentation.
  • Important to remember for the Research Browser: Where to Add Actions in Human-in-the-Loop Reinforcement Learning
    • In order for reinforcement learning systems to learn quickly in vast action spaces such as the space of all possible pieces of text or the space of all images, leveraging human intuition and creativity is key. However, a human-designed action space is likely to be initially imperfect and limited; furthermore, humans may improve at creating useful actions with practice or new information. Therefore, we propose a framework in which a human adds actions to a reinforcement learning system over time to boost performance. In this setting, however, it is key that we use human effort as efficiently as possible, and one significant danger is that humans waste effort adding actions at places (states) that aren’t very important. Therefore, we propose Expected Local Improvement (ELI), an automated method which selects states at which to query humans for a new action. We evaluate ELI on a variety of simulated domains adapted from the literature, including domains with over a million actions and domains where the simulated experts change over time. We find ELI demonstrates excellent empirical performance, even in settings where the synthetic “experts” are quite poor.
  • This is interesting. DARPA had a Memex project that they open-sourced
  • Got PHP and xdebug set up on my home machines, mostly following these instructions. The dll that matches the PHP install needs to be downloaded from here and placed in the \php\ext directory. Then add the following to the php.ini file:
    zend_extension = "C:\xampp\php\ext\php_xdebug.dll"
    xdebug.profiler_append = 0
    xdebug.profiler_enable = 1
    xdebug.profiler_enable_trigger = 1
    xdebug.profiler_output_dir = "C:\xampp\tmp"
    xdebug.profiler_output_name = "cachegrind.out.%t-%s"
    xdebug.remote_enable = 0
    xdebug.remote_handler = "dbgp"
    xdebug.remote_host = ""
    xdebug.remote_port = "9876"
    xdebug.trace_output_dir = "C:\xampp\tmp"

    Then go to settings->Languages & Frameworks -> PHP, and either attach to the php CLI or refresh. The debugger should become visible: PHPsetup

  • Reworking the CHI DC to a CHIIR DC
    • There is a new version of the LaTeX templates as of Oct 2 here. I wonder if that fixes the CHI problems?
    • Put things in the right format, got the pix in the columns. Four pages! Working on fixing text.
    • Finished first pass (time for multiple passes! Woohoo!)
    • Working on paragraph
    • Start schema for PolarizationGame
  • Theresa asked me to set up a new set of CSEs. Will need a credit card and the repository location. Waiting for that.

Phil 12.2.15

7:00 –

  • Learning: Neural Nets, Back Propagation
    • Synaptic weights are higher for some synapses than others
    • Cumulative stimulus
    • All-or-none threshold for propagation.
    • Once we have a model, we can ask what we can do with it.
    • Now I’m curious about the MIT approach to calculus. It’s online too: MIT 18.01 Single Variable Calculus
    • Back-propagation algorithm. Starts from the end and works forward so that each new calculation depends only on its local information plus values that have already been calculated.
    • Overfitting and under/over damping issues are also considerations.
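To make the lecture notes concrete, here is a minimal 2-2-1 back-propagation sketch (my own toy code, not from the lecture) learning OR. The deltas flow from the output back toward the inputs, each step reusing only local values already computed downstream:

```python
import math, random

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden layer: 2 inputs + bias
W2 = [random.uniform(-1, 1) for _ in range(3)]                      # output layer: 2 hidden + bias
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]         # the OR function

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    return h, sig(W2[0] * h[0] + W2[1] * h[1] + W2[2])

def error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

e0 = error()
for _ in range(2000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                                # output delta (chain rule at the output)
        d_h = [d_o * W2[j] * h[j] * (1 - h[j]) for j in range(2)]  # hidden deltas reuse d_o
        for j in range(2):
            W2[j] -= 0.5 * d_o * h[j]
            W1[j][0] -= 0.5 * d_h[j] * x[0]
            W1[j][1] -= 0.5 * d_h[j] * x[1]
            W1[j][2] -= 0.5 * d_h[j]
        W2[2] -= 0.5 * d_o
print(f"squared error before: {e0:.3f}, after: {error():.3f}")
```

The all-or-none threshold from the notes is softened here into a sigmoid so the error surface is differentiable, which is what makes the backward pass possible.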
  • Scrum meeting
  • Remember to bring a keyboard tomorrow!!!!
  • Checking that my home dev code is the same as what I pulled down from the repository
    • No change in definitelytyped
    • No change in the other files either, so those were real bugs. Don’t know why they didn’t get caught. But that means the repo is good and the bugs are fixed.
  • Validate that PHP runs and debugs in the new dev env. Done
  • Add a new test that inputs large numbers (thousands to millions) of unique ENTITY entries with small-ish star networks of partially shared URL entries. Time view retrieval for SELECT COUNT(*) FROM tn_view_network_items WHERE network_id = 8;
    • Computer: 2008 Dell Precision M6300
    • System: Processor Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz, 2201 Mhz, 2 Core(s), 2 Logical Processor(s), Available Physical Memory 611 MB
    • 100 is 0.09 sec
    • 1000 is 0.14 sec
    • 10,000 is 0.84 sec
    • Using OpenOffice’s linear regression function, I get the equation t = 0.00007657x + 0.0733 with an R squared of 0.99948.
    • That means 1,000,000 view entries can be processed in 75 seconds or so as long as things don’t get IO bound
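The fit is quick to double-check with pure-Python least squares over the three timing measurements (note the intercept comes out to about 0.073):

```python
# Least-squares fit of retrieval time (sec) vs. row count, from the
# three measurements above.
xs = [100, 1000, 10000]
ys = [0.09, 0.14, 0.84]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(slope, intercept)                # ~7.66e-05, ~0.073
print(slope * 1_000_000 + intercept)   # ~76.6 sec for a million view entries
```

Extrapolating two orders of magnitude past the data is optimistic, of course; the IO-bound caveat is the real constraint.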
  • Got the PHP interpreter and debugger working. In this case, it was just refreshing in settings->languages->php

Phil 11.25.15

7:00 – 1:00 Leave

  • Constraints: Search, Domain Reduction
    • Order from most constrained to least.
    • For a constrained problem, check over- and under-allocations to see where the gap between fast failure and fast completion lies.
    • Only recurse through neighbors whose domain (set of choices) has been reduced to 1.
  • Dictionary
    • Add an optional ‘source_text’ field to the tn_dictionaries table so that user added words can be compared to the text. Done. There is the issue that the dictionary could be used against a different corpus, at which point this would be little more than a creation artifact
    • Add a ‘source_count’ to the tn_dictionary_entries table that is shown in the directive. Defaults to zero? Done. Same issue as above, when compared to a new corpus, do we recompute the counts?
    • Wire up Attach Dictionary to Network
      • Working on AlchemyDictReflect that will place keywords in the tn_items table and connect them in the tn_associations table.
      • Had to add a few helper methods in networkDbIo.php to handle the modifying of the network tables, since alchemyNLPbase doesn’t extend baseBdIo. Not the cleanest thing I’ve ever done, but not *horrible*.
      • Done and working! Need to deploy.

Phil 11.24.15

7:00 – Leave

  • Constraints: Interpreting Line Drawings
    • Successful research:
      • Finds a problem
      • Finds a method that solves the problem
      • Using some principle (that can be generalized)
  • Gave Aaron M. A subversion account and sent him a description of the structure of the project
  • Back to dictionary creation
    • Wire up Extract into Dictionary
      • I think I’m going to do most of this on the server. If I do a select text from tn_view_network_items where network = X, then I can run that text that is already in the DB through the term extractor, which should be the fastest thing I can do.
      • The next fastest thing would be to pull the text from the url (if it exists) and add that to the text pull.
      • Added a getTextFromNetwork() method to NetworkDbObject.
      • The html was getting extracted badly, so I had to add a call to alchemy to return the cleaned text. TODO: in the future add a ‘clean_text’ column to tn_items so this is done on ingestion. I also added
      • Added all the pieces to the rssPull.php file and tested. And integrated with the client. Looks like it takes about 8 seconds to go through my resume, so some offline processing will probably be needed for ACM papers, for example.
    • Wire up Attach Dictionary to Network
      • The current setup is set so that a new item that is read in will associate with the current network dictionary. Need to add a way to have the items that are already in the network to check themselves against the new dictionary.
      • Added class AlchemyDictReflect that will place keywords in the DB. Still need to debug. And don’t forget that the controller will have to reload the network after all the changes are made.


Phil 11.23.15

7:00 – Leave

  • Search: Games, Minimax, and Alpha-Beta
    • Branching factor (B)
    • Search depth (D)
    • Combining the two gives the number of leaf nodes or B^D
    • Branching factor of chess is approximately 14?
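Since the leaf count grows as B^D, pruning is the whole game; here is a toy minimax with alpha-beta over a nested-list tree (my own sketch of the lecture's algorithm):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a nested-list game tree:
    leaves are numbers, internal nodes are lists of children."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        v = float("-inf")
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:
                break  # prune: the min player already has a better option
        return v
    v = float("inf")
    for child in node:
        v = min(v, alphabeta(child, alpha, beta, True))
        beta = min(beta, v)
        if alpha >= beta:
            break
    return v

# B=2, D=2 gives B**D = 4 leaves; here pruning skips the 9 entirely
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 3
```

With perfect move ordering, pruning effectively reduces the exponent from D to about D/2, which is why it matters so much at chess-like branching factors.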
  • Dictionaries
    • Wire up Create New Dictionary – done
    • Wire up Extract into Dictionary
      • I think I’m going to do most of this on the server. If I do a select text from tn_view_network_items where network = X, then I can run that text that is already in the DB through the term extractor, which should be the fastest thing I can do.
      • The next fastest thing would be to pull the text from the url (if it exists) and add that to the text pull.
    • Wire up Attach Dictionary to Network
      • The current setup is set so that a new item that is read in will associate with the current network dictionary. Need to add a way to have the items that are already in the network to check themselves against the new dictionary.

Phil 11.19.15

7:00 – 5:00 Leave

  • Reasoning: Goal Trees and Rule-Based Expert Systems
    • There, now I’m back in order.
    • H. Simon – The complexity of the behavior is max(cplx(prgm), cplx(env))
    • More Genesis – Elaboration graphs
    • Genesis judges similarity in multiple ways: (this presentation, page 25)
      • Using word vectors
      • Using concept vectors: seeing similarities not evident in the words.
    • Genesis aligns similar stories for analogical reasoning (Needleman-Wunsch algorithm, which is a way of comparing string similarity using matrices)
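For reference, that matrix-based similarity can be sketched as the classic Needleman-Wunsch recurrence (my own toy implementation, not the Genesis code):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two sequences via the classic
    Needleman-Wunsch dynamic program."""
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap          # aligning a prefix against nothing
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + s,  # match/mismatch
                          D[i - 1][j] + gap,    # gap in b
                          D[i][j - 1] + gap)    # gap in a
    return D[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # 0, the textbook example
```

For story alignment, the "characters" would be concept or event tokens rather than letters, with the match score coming from the word/concept vectors mentioned above.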
  • IRB renewal – ask Wayne
  • In a fit of orderliness, created shortcuts to the cmd windows that the makefiles run in
  • Dictionaries
    • Don’t forget to move the text-extraction calls to the methods that need it and see if that speeds up the others. Done. Much faster. PHP is dumb. Or needs a preloader/compiler
    • Cleaned up the loading and display code a bit. Need to add a tree view (which looks like it can be done in pure CSS), but that can wait.
    • Adding manual entry
      • Finished the form
      • Clear for entry and separate parent is done
      • addition
      • deletion
      • modification (adding parents in particular)
    • Adding text extraction

Phil 11.17.15

7:00 – 4:00 leave

  • Reasoning: Goal Trees and Problem Solving (why is this pertinent to me?)
    • Apply all safe transforms
    • Apply heuristics
    • AND nodes and OR nodes (AND/OR, problem reduction, or goal tree)
  • Realized that I have a redundant user_id in tn_dictionary_entries. This could be used to allow non-owners to add words to the dictionary. Which means that dictionaries could be shared. On the whole, I think that’s a good idea. Adding the change to the dictionary code.
  • Ok, back to text extraction (Is this a safe transform?)
  • Added a _buildExtractor() private method and imported all the extractor parts.
  • It does take a long time. Just to load? I’d like to try profiling…
  • Wow. Extracting, loading into the database, getting the JSON output and deleting the dictionary all work!
  • I think the next step is to either get some definitions or start building the directive. After my lunchtime ride, I decided that the directive is probably the best thing to do next. Adding user functionality is a good way of ensuring that the server functionality makes sense.
  • Added ‘get_user_dictionaries‘ to rssPull.php. We’ll start with that.
  • Got the skeleton of a directive up and retrieving the dictionary list.