Category Archives: Ranking

Phil 6.2.16

7:00 – 5:00 VTX

  • Writing
  • Write up sprint story – done
    • Develop a ‘training’ corpus of known bad actors (KBAs) for each domain.

      • KBAs will be pulled from http://w3.nyhealth.gov/opmc/factions.nsf, which provides a large list.
      • List of KBAs will be added to the content rating DB for human curation
      • HTML and PDF data will be used to populate a list of documents that will then be scanned and analyzed to prepare TF-IDF and LSI term-document tables.
      • The resulting table will in turn be analyzed using term centrality, with the output being an ordered list of terms to be evaluated for each domain.
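      • For my own reference, a minimal sketch of the TF-IDF step above (class and variable names here are hypothetical, not the production code):

        import java.util.*;

        public class TfIdfSketch {
            // tf-idf for one document: normalized term frequency times
            // log(total docs / docs containing the term)
            public static Map<String, Double> tfIdf(List<List<String>> docs, int docIndex) {
                List<String> doc = docs.get(docIndex);
                Map<String, Double> tf = new HashMap<>();
                for (String term : doc) {
                    tf.merge(term, 1.0 / doc.size(), Double::sum);
                }
                Map<String, Double> tfidf = new HashMap<>();
                for (Map.Entry<String, Double> e : tf.entrySet()) {
                    int docsWithTerm = 0;
                    for (List<String> d : docs) {
                        if (d.contains(e.getKey())) { docsWithTerm++; }
                    }
                    tfidf.put(e.getKey(), e.getValue() * Math.log((double) docs.size() / docsWithTerm));
                }
                return tfidf;
            }
        }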

  • Building a view to get person, rating and link from the db – done, or at least V1:
    -- one row per result link, joined to its query, provider, and human rating
    CREATE VIEW view_ratings AS
      SELECT io.link, qo.search_type, po.first_name, po.last_name, po.pp_state, ro.person_characterization
        FROM item_object io
        INNER JOIN query_object qo ON io.query_id = qo.id
        INNER JOIN rating_object ro ON io.id = ro.result_id
        INNER JOIN poi_object po ON qo.provider_id = po.id;
  • Took results from w3.nyhealth.gov and ran them through the whole system. The full results are in the Corpus file under w3.nyhealth.gov-PDF-centrality_06_02_16-13_12_09.xlsx and w3.nyhealth.gov-WEB-centrality_06_02_16-13_12_09.xlsx. The results seem to produce incredibly specific searches. Here are the first two examples (note that there are very few .com sites):

Phil 5.17.16

7:00 – 7:00

  • Great discussion with Greg yesterday. Very encouraging.
  • Some thoughts that came up during Fahad’s (Successful!) defense
    • It should be possible to determine the ‘deletable’ codes at the bottom of the ranking by setting the allowable difference between the initial ranking and the trimmed rank.
    • The ‘filter’ box should also be set by clicking on one of the items in the list of associations for the selected items. This way, selection is a two-step process in this context.
    • Suggesting grouping of terms based on connectivity? Maybe second degree? Allows for domain independence?
    • Using a 3D display to show the shared second, third and nth degrees as different layers
    • NLP tagged words for TF-IDF to produce a more characterized matrix?
    • 50 samples per iteration, 2,000 iterations? Check! And add info to spreadsheet! Done, and it’s 1,000 iterations
  • Writing
  • Parsing Jeremy’s JSON file
    • Moving the OptionalContent and JsonLoadable over to JavaUtils2
    • Adding javax.persistence-2.1.0
    • Adding json-simple-1.1.1
    • It worked, but it’s junk. It looks like these are un-curated pages
  • Long discussion with Aaron about calculating flag rollups.

Phil 5.11.16

7:00 – 4:30 VTX

  • Continuing paper – working on the ‘motivations’ section
  • Need to set the mode to interactive after a successful load
  • Need to find out where the JSON ratings are in the medicalpractitioner db. Or just rely on Jeremy’s interface? I guess it depends on what gets blown away. But it doesn’t seem like the JSON is in the db.
  • Added a stanfordNLP package to JavaUtils
    • NLPtoken stores all the extracted information about a token (word, lemma, index, POS, etc)
    • DocumentStatistics holds token data across one or more documents
    • StringAnnotator parses strings into NLPtokens.
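    • Roughly, StringAnnotator wraps the standard CoreNLP pipeline. A sketch of the idea (the NLPtoken hand-off is simplified):

      import edu.stanford.nlp.ling.CoreAnnotations;
      import edu.stanford.nlp.ling.CoreLabel;
      import edu.stanford.nlp.pipeline.Annotation;
      import edu.stanford.nlp.pipeline.StanfordCoreNLP;
      import edu.stanford.nlp.util.CoreMap;
      import java.util.Properties;

      public class AnnotateSketch {
          public static void annotate(String text) {
              Properties props = new Properties();
              props.setProperty("annotators", "tokenize,ssplit,pos,lemma");
              StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
              Annotation doc = new Annotation(text);
              pipeline.annotate(doc);
              for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
                  for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                      String word = token.word();    // surface form
                      String lemma = token.lemma();  // lemma
                      String pos = token.tag();      // part of speech
                      int index = token.index();     // position in sentence
                      // ...an NLPtoken would be built from these and fed to DocumentStatistics
                  }
              }
          }
      }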
  • Fixed a bunch of math issues (in Excel, too), but here are the two versions:
    am = 1.969
    be = 2.523
    da = 0.984
    do = 1.892
    i = 1.761
    is = 1.130
    it = 1.130
    let = 1.130
    not = 1.380
    or = 3.523
    thfor = 1.380
    think = 1.380
    to = 1.469
    what = 1.380

    And Excel:

     da	is	 it	 let	 not	 thfor	 think	 what	 to	 i	 do	 am	 be	 or
    0.984	1.130	1.130	1.130	1.380	1.380	1.380	1.380	1.469	1.761	1.892	1.969	2.523	3.523
    

Phil 5.5.16

7:00 – 5:30 VTX

  • Continuing An Introduction to the Bootstrap.
  • This helped a lot. I hope it’s right…
  • Had a thought about how to build the Bootstrap class: build it using RealVector and then use the RealVectorPreservingVisitor interface to do whatever calculation is desired. Default methods for Mean, Median, Variance and StdDev. It will probably need arguments for max iterations and epsilon.
  • Didn’t do that at all. Wound up using ArrayRealVector for the population and Percentile to hold the mean and variance values. I can add something else later. (Sketch below.)
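  • For reference, a minimal sketch of the shape this took (not the actual class; the data and iteration counts here are arbitrary):

    import org.apache.commons.math3.linear.ArrayRealVector;
    import org.apache.commons.math3.stat.descriptive.moment.Mean;
    import org.apache.commons.math3.stat.descriptive.rank.Percentile;
    import java.util.Random;

    public class BootstrapSketch {
        public static void main(String[] args) {
            // resample the population with replacement, collect the statistic
            // (the mean here), then read bounds off a Percentile
            ArrayRealVector population = new ArrayRealVector(
                    new double[]{0.5, 1.2, 1.3, 2.0, 2.2, 3.1, 3.3, 4.0});
            int iterations = 1000;
            int sampleSize = 50;
            double[] means = new double[iterations];
            Random rand = new Random();
            Mean mean = new Mean();
            for (int i = 0; i < iterations; i++) {
                double[] sample = new double[sampleSize];
                for (int j = 0; j < sampleSize; j++) {
                    sample[j] = population.getEntry(rand.nextInt(population.getDimension()));
                }
                means[i] = mean.evaluate(sample);
            }
            Percentile pct = new Percentile();
            System.out.printf("mean of means: %.3f%n", mean.evaluate(means));
            System.out.printf("95%% interval: [%.3f, %.3f]%n",
                    pct.evaluate(means, 2.5), pct.evaluate(means, 97.5));
        }
    }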
  • To capture how the centrality affects the makeup of the data in a matrix, I think it makes sense to use the normalized eigenvector to multiply the counts in the initial matrix and submit that population (the whole matrix) to the Bootstrap.
  • Meeting with Wayne? Need to finish tool updates though.
  • Got bogged down in understanding the Percentile class and how binomial distributions work.
  • Built and then fixed a copy ctor for Labled2DMatrix.
  • Testing. It looks ok, but I want to try multiplying the counts by the eigenVec. Tomorrow.

Phil 5.3.16

7:00 – 3:30 VTX

  • Out riding, I realized that I could have a column called ‘counts’ that would add up the total number of ‘terms per document’ and ‘documents per term’. Unitizing the values would then show the number of unique terms per document. That’s useful, I think.
  • Helena pointed to an interesting CHI 2016 site. This is sort of the other side of extracting pertinence from relevant data. I wonder where they got their data from?
    • Found it! It’s in a public set of Google docs, in XML and JSON formats. I found it by looking at the GitHub home page. In the example code there was this structure:
      source: {
          gdocId: '0Ai6LdDWgaqgNdG1WX29BanYzRHU4VHpDUTNPX3JLaUE',
          tables: "Presidents"
        }

      That gave me a hint of what to look for in the document source of the demo, where I found this:

      var urlBase = 'https://ca480fa8cd553f048c65766cc0d0f07f93f6fe2f.googledrive.com/host/0By6LdDWgaqgNfmpDajZMdHMtU3FWTEkzZW9LTndWdFg0Qk9MNzd0ZW9mcjA4aUJlV0p1Zk0/CHI2016/';
      

      And that’s the link from above.

    • There appear to be other useful data sets as well. For example, there is an extensive CHI paper database sitting behind this demo.
    • So this makes generalizing the PageRank approach much simpler, since it looks like I can pull the data down pretty easily. In my case I think the best thing for now would be to write small apps that pull down the data and build Excel spreadsheets that are read in by the tool. (Sketch below.)
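    • Something like this sketch for the pull-and-save apps (URL and field names are made up; assumes Apache POI for the spreadsheet side and json-simple for parsing):

      import org.apache.poi.ss.usermodel.Row;
      import org.apache.poi.ss.usermodel.Sheet;
      import org.apache.poi.ss.usermodel.Workbook;
      import org.apache.poi.xssf.usermodel.XSSFWorkbook;
      import org.json.simple.JSONArray;
      import org.json.simple.JSONObject;
      import org.json.simple.parser.JSONParser;
      import java.io.FileOutputStream;
      import java.io.InputStreamReader;
      import java.io.OutputStream;
      import java.io.Reader;
      import java.net.URL;

      public class DataPuller {
          public static void main(String[] args) throws Exception {
              // placeholder URL - the real one would be one of the hosted tables above
              String url = "https://example.com/chi2016/papers.json";
              JSONParser parser = new JSONParser();
              JSONArray rows;
              try (Reader r = new InputStreamReader(new URL(url).openStream())) {
                  rows = (JSONArray) parser.parse(r);
              }
              try (Workbook wb = new XSSFWorkbook();
                   OutputStream out = new FileOutputStream("papers.xlsx")) {
                  Sheet sheet = wb.createSheet("data");
                  int rowNum = 0;
                  for (Object o : rows) {
                      JSONObject obj = (JSONObject) o;
                      Row row = sheet.createRow(rowNum++);
                      // 'title' and 'session' are assumed field names
                      row.createCell(0).setCellValue((String) obj.get("title"));
                      row.createCell(1).setCellValue((String) obj.get("session"));
                  }
                  wb.write(out);
              }
          }
      }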
  • Exporting a new data set from Atlas. Done and committed. I need to do runs before meeting with Wayne.
  • Added Counts in and refactored a bit.
  • I think I want a list of what a doc or term is directly linked to and the number of references. Added the basics. Wiring up next. Done! But now I want to click on an item in the counts list and have it be selected? Or at least highlighted?
  • Stored the new version on dropbox: https://www.dropbox.com/s/92err4z2posuaa1/LMN.zip?dl=0
  • Meeting with Wayne
    • There’s some bug with counts. Add it to the WeightedItem.toString() and test.
    • Add a ‘move to top’ button near the weight slider that adds just enough weight to move the item to the top of the list. This could be iterative?
    • Add code that compares the population of ranks with the population of scaled ranks. Maybe bootstrapping? Apache Commons Math has KolmogorovSmirnovTest, which has public double kolmogorovSmirnovTest(double[] x, double[] y, boolean strict), which looks promising (sketch below).
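    • The call would look something like this (the arrays are hypothetical stand-ins for the two rank populations):

      import org.apache.commons.math3.stat.inference.KolmogorovSmirnovTest;

      public class KsSketch {
          public static void main(String[] args) {
              double[] ranks = {0.61, 0.62, 0.64, 0.22, 0.41, 0.59, 0.78};       // base ranks
              double[] scaledRanks = {0.58, 0.60, 0.66, 0.25, 0.39, 0.61, 0.75}; // after weighting
              KolmogorovSmirnovTest ks = new KolmogorovSmirnovTest();
              // p-value for the null hypothesis that both samples are drawn
              // from the same distribution ('strict' as in the signature above)
              double p = ks.kolmogorovSmirnovTest(ranks, scaledRanks, false);
              System.out.println("p = " + p);
          }
      }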
  • Added ability to log out of the rating app.

Phil 4.29.16

7:00 – 5:00 VTX

  • Expense reports and timesheets! Done.
  • Continuing Informed Citizenship in a Media-Centric Way of Life
    • The pertinence interface may be an example of a UI affording the concept of monitorial citizenship.
      • Page 219: The monitorial citizen, in Schudson’s (1998) view, does environmental surveillance rather than gathering in-depth information. By implication, citizens have social awareness that spans vast territory without having in-depth understanding of specific topics. Related to the idea of monitorial instead of informed citizenship, Pew Center (2008) data identified an emerging group of young (18–34) mobile media users called news grazers. These grazers find what they need by switching across media platforms rather than waiting for content to be served.
    • Page 222: Risk as Feelings. The abstract is below. There is an emotional hacking aspect here that traditional journalism has used (heuristically?) for most(?) of its history.
      • Virtually all current theories of choice under risk or uncertainty are cognitive and consequentialist. They assume that people assess the desirability and likelihood of possible outcomes of choice alternatives and integrate this information through some type of expectation-based calculus to arrive at a decision. The authors propose an alternative theoretical perspective, the risk-as-feelings hypothesis, that highlights the role of affect experienced at the moment of decision making. Drawing on research from clinical, physiological, and other subfields of psychology, they show that emotional reactions to risky situations often diverge from cognitive assessments of those risks. When such divergence occurs, emotional reactions often drive behavior. The risk-as-feelings hypothesis is shown to explain a wide range of phenomena that have resisted interpretation in cognitive–consequentialist terms.
    • At page 223 – Elections as the canon of participation

  • Working on getting tables to sort – Done

  • Loading Excel file – done
  • Calculating – done
  • Using weights – done
  • Reset weights – done
  • Saving (don’t forget to add sheet with variables!) – done
  • Wrapped in executable – done
  • Uploading to dropbox. Wow – the files with JavaFX are *much* bigger than Swing.

Phil 4.28.16

7:00 – 5:00 VTX

  • Reading Informed Citizenship in a Media-Centric Way of Life
    • Jessica Gall Myrick
    • This is a bit out of the concentration of the thesis, but it addresses several themes that relate to system and social trust. And I’m thinking that behind these themes of social vs. system is the designer’s Social Trust of the user. Think of it this way: if the designer has a high Social Trust intention with respect to the benevolence of the users, then a more ‘human’ interactive site may result, with more opportunities for the user to see more deeply into the system and contribute more meaningfully. There are risks in this, such as hellish comment sections, but also rewards (see the YouTube comments section for The Idea Channel episodes). If the designer has a System Trust intention with respect to, say, the reliability of the user watching ads, then different systems get designed that learn to generate click-bait using neural networks (such as Clickotron). Or, closer to home, Instagram might decide to curate a feed for you without affordances to support changing of feed options. The truism goes ‘If you’re not paying, then you’re the product’. And products aren’t people. Products are systems.
    • Page 218: Graber (2001) argues that researchers often treat the information value of images as a subsidiary to verbal information, rather than having value themselves. Slowly, studies employing visual measures and examining how images facilitate knowledge gain are emerging (Grabe, Bas, & van Driel, 2015; Graber, 2001; Prior, 2014). In a burgeoning media age with citizens who overwhelmingly favor (audio)visually distributed information, research momentum on the role of visual modalities in shaping informed citizenship is needed. Paired with it, reconsideration of the written word as the preeminent conduit of information and rational thought are necessary.
      • The rise of infographics makes me believe that it’s not image and video per se, but clear information with low cognitive load.
  • ————————–
  • Bob had a little trouble with the ‘inappropriate’ and ‘unclear identity’ ratings, as well as ‘education’, ‘info’ and ‘other’.
  • Got tables working for terms and docs.
  • Got callbacks working from table clicks
  • Couldn’t get the table to display. Had to use this ugly hack.
  • Realized that I need name, weight and eigenval. Sorting is by eigenval. Weight is the multiplier of the weights in a row or column associated with a term or document. Mostly done.

Phil 4.22.16

7:00 – 4:30 VTX

  • Had a thought going to sleep last night that it would be interesting to see the difference between a ‘naive’ ranking based on the number of quotes vs. PageRank. Pretty much as soon as I got up, I pulled down the spreadsheet and got the lists. They’re in the previous post, but I’ll post them here too:
    • Sorted from most to least quotes
      P61: A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web.pdf
      P13: The Egyptian Blogosphere.pdf
      P10: Sensing_And_Shaping_Emerging_Conflicts.pdf
      P85: Technology Humanness and Trust-Rethinking Trust in Technology.pdf
      P 5: Saracevic_relevance_75.pdf
      P 1: Social Media and Trust during the Gezi Protests in Turkey.pdf
      P77: The Law of Group Polarization.pdf
      P43: On the Accuracy of Media-based Conflict Event Data.pdf
      System Trust
      P37: Security-control methods for statistical databases – a comparative study.pdf
    • Sorted on Page Rank eigenvector
      P85: Technology Humanness and Trust-Rethinking Trust in Technology.pdf
      System Trust
      Social Trust
      P61: A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web.pdf
      P84: What is Trust_ A Conceptual Analysis–AMCIS-2000.pdf
      P 1: Social Media and Trust during the Gezi Protests in Turkey.pdf
      Credibility Cues
      P13: The Egyptian Blogosphere.pdf
      P10: Sensing_And_Shaping_Emerging_Conflicts.pdf
      P82: The ‘like me’ framework for recognizing and becoming an intentional agent.pdf
  • To me it’s really interesting how much better the codes are mixed into the results. I actually thought it could be the other way, since the codes are common across many papers. Also, the concepts of System Trust, Social Trust and Credibility Cues very much became a central point in my mind as I worked through the papers.
  • A second thought, which is the next step in the research, is to see how weighting affects relationships. Right now, the papers and codes are weighted by the number of quotes. What happens when all the weights are normalized (set to 1.0)? And then there is the setup of the interactivity. With zero optimizations, this took 4.2 seconds to calculate on a modern laptop. Not slider-bar rates, but fast enough to change a value (or some values?) and click a ‘run’ button.
  • So, moving forward, the next steps are to create the Swing App that will:
    • read in a spreadsheet (xls and xlsx)
    • Write out spreadsheets (with a page containing the run information):
      • File
      • User
      • Date run
      • Settings used
    • allow for manipulation of row and column values (in this case, papers and codes, but the possibilities are endless)
      • Select the value to manipulate (reset should be an option)
      • Spinner/entry field to set changes (original value in label)
      • ‘Calculate’ button
      • Sorted list(s) of rows and columns. (indicate +/- change in rank)
    • Reset all button
    • Normalize all button
  • I’d like to do something with the connectivity graph. Not sure what yet.
  • And I think I’ll do this in JavaFX rather than Swing this time.
  • Huh. JavaFX Scene Builder is no longer supported by Oracle. Now it’s a Gluon project.
  • Documentation still seems to be at Oracle though
  • Spent most of the day seeing what’s going on with the Crawl. Turns out it was bad formatting on the terms?

Phil 4.21.16

7:00 – VTX

  • A little more bitcoin
  • Installed *another* new Java 1.8.0_92
  • Discovered the arXiv API page. This might be very helpful. I need to dig into it a bit.
  • Testing ranking code. I hate to say this, but if it works I think I’m going to write *another* Swing app to check interactivity rates. Which means I need to instrument the matrix calculations for timing.
  • Ok, the rank table is consistent across all columns. In my test code, the eigenvector stabilizes after 5 iterations:
    initial
     , col1, col2, col3, col4,
    row1, 11, 21, 31, 41,
    row2, 12, 22, 32, 42,
    row3, 13, 23, 33, 43,
    
    derived
    , row1, row2, row3, col1, col2, col3, col4,
    row1, 1, 0, 0, 0.26, 0.49, 0.72, 0.95,
    row2, 0, 1, 0, 0.28, 0.51, 0.74, 0.98,
    row3, 0, 0, 1, 0.3, 0.53, 0.77, 1,
    col1, 0.26, 0.28, 0.3, 1, 0, 0, 0,
    col2, 0.49, 0.51, 0.53, 0, 1, 0, 0,
    col3, 0.72, 0.74, 0.77, 0, 0, 1, 0,
    col4, 0.95, 0.98, 1, 0, 0, 0, 1,
    
    rank
    , row1, row2, row3, col1, col2, col3, col4,
    row1, 0.61, 0.62, 0.64, 0.22, 0.41, 0.59, 0.78,
    row2, 0.62, 0.65, 0.67, 0.23, 0.42, 0.61, 0.8,
    row3, 0.64, 0.67, 0.69, 0.24, 0.43, 0.63, 0.83,
    col1, 0.22, 0.23, 0.24, 0.08, 0.15, 0.22, 0.29,
    col2, 0.41, 0.42, 0.43, 0.15, 0.27, 0.4, 0.52,
    col3, 0.59, 0.61, 0.63, 0.22, 0.4, 0.58, 0.76,
    col4, 0.78, 0.8, 0.83, 0.29, 0.52, 0.76, 1,
    
    EigenVec
    row1, 1, 0.71, 0.62, 0.61, 0.61, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    row2, 0, 0.46, 0.61, 0.62, 0.62, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    row3, 0, 0.48, 0.63, 0.64, 0.64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    col1, 0.26, 0.13, 0.21, 0.22, 0.22, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    col2, 0.49, 0.25, 0.38, 0.41, 0.41, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    col3, 0.72, 0.37, 0.55, 0.59, 0.59, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    col4, 0.95, 0.49, 0.73, 0.78, 0.78, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  • And after a lot of banging away, here’s my lit review run through PageRank.
  • And here’s the difference between PageRank and sorting based on number of quotes:
  • Page Rank
    P85: Technology Humanness and Trust-Rethinking Trust in Technology.pdf
    System Trust
    Social Trust
    P61: A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web.pdf
    P84: What is Trust_ A Conceptual Analysis–AMCIS-2000.pdf
    P 1: Social Media and Trust during the Gezi Protests in Turkey.pdf
    Credibility Cues
    P13: The Egyptian Blogosphere.pdf
    P10: Sensing_And_Shaping_Emerging_Conflicts.pdf
    P82: The ‘like me’ framework for recognizing and becoming an intentional agent.pdf
  • Sorted from most to least quotes
    P61: A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web.pdf
    P13: The Egyptian Blogosphere.pdf
    P10: Sensing_And_Shaping_Emerging_Conflicts.pdf
    P85: Technology Humanness and Trust-Rethinking Trust in Technology.pdf
    P 5: Saracevic_relevance_75.pdf
    P 1: Social Media and Trust during the Gezi Protests in Turkey.pdf
    P77: The Law of Group Polarization.pdf
    P43: On the Accuracy of Media-based Conflict Event Data.pdf
    System Trust
    P37: Security-control methods for statistical databases – a comparative study.pdf

Phil 4.20.16

7:00 – 4:00 VTX

  • Read a little more of the BitCoin article. Nice description of the blockchain.
  • Generated a fresh Excel matrix of codes and papers. I excluded the ‘meta’ codes (Definitions, Methods, etc) and all the papers that have zero quotes
  • Duke Ellington & His Orchestra make great coding music.
  • Installed new Java
  • Well drat. I can’t increase the size of an existing matrix. Will have to return a new one.
  • Yay!
    FileUtils.getInputFileName() opening: mat1.xls
    initial
     , col1, col2, col3, col4, 
    row1, 11.0, 21.0, 31.0, 41.0, 
    row2, 12.0, 22.0, 32.0, 42.0, 
    row3, 13.0, 23.0, 33.0, 43.0, 
    done calculating
    done creating
    derived
     , row1, row2, row3, col1, col2, col3, col4, 
    row1, 0.0, 0.0, 0.0, 11.0, 21.0, 31.0, 41.0, 
    row2, 0.0, 0.0, 0.0, 12.0, 22.0, 32.0, 42.0, 
    row3, 0.0, 0.0, 0.0, 13.0, 23.0, 33.0, 43.0, 
    col1, 11.0, 12.0, 13.0, 0.0, 0.0, 0.0, 0.0, 
    col2, 21.0, 22.0, 23.0, 0.0, 0.0, 0.0, 0.0, 
    col3, 31.0, 32.0, 33.0, 0.0, 0.0, 0.0, 0.0, 
    col4, 41.0, 42.0, 43.0, 0.0, 0.0, 0.0, 0.0,
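  • The ‘derived’ step above is just embedding the r×c count matrix M into a symmetric (r+c)×(r+c) adjacency matrix of the form [[0, M], [M-transpose, 0]]. A sketch of the idea (not the actual matrix class):

    import org.apache.commons.math3.linear.MatrixUtils;
    import org.apache.commons.math3.linear.RealMatrix;

    public class DeriveSketch {
        // build [[0, M], [M^T, 0]], matching the block structure printed above
        public static RealMatrix derive(RealMatrix m) {
            int r = m.getRowDimension();
            int c = m.getColumnDimension();
            RealMatrix d = MatrixUtils.createRealMatrix(r + c, r + c); // zero-filled
            d.setSubMatrix(m.getData(), 0, r);              // upper-right block
            d.setSubMatrix(m.transpose().getData(), r, 0);  // lower-left block
            return d;
        }
    }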
  • Need to set Identity and other housekeeping.
  • Added ‘normalizeByMatrix’, which sets the entire matrix on a unit scale
  • Need to have a calcRank function that squares and normalizes until the difference between output eigenvectors is below a certain threshold or an iteration limit is hit. Done? (Sketch below.)
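  • A sketch of what calcRank might look like (squaring and normalizing, with the normalized first column as the eigenvector estimate; not the tool’s actual code):

    import org.apache.commons.math3.linear.RealMatrix;
    import org.apache.commons.math3.linear.RealVector;

    public class CalcRankSketch {
        public static RealVector calcRank(RealMatrix m, double epsilon, int maxIter) {
            RealVector prev = m.getColumnVector(0).unitVector();
            for (int i = 0; i < maxIter; i++) {
                m = m.multiply(m); // square
                // rescale so the largest entry is 1 (keeps values from blowing up)
                double max = 0;
                for (double[] row : m.getData()) {
                    for (double v : row) { max = Math.max(max, Math.abs(v)); }
                }
                m = m.scalarMultiply(1.0 / max);
                RealVector cur = m.getColumnVector(0).unitVector();
                if (cur.subtract(prev).getNorm() < epsilon) {
                    return cur; // eigenvector estimate has stabilized
                }
                prev = cur;
            }
            return prev; // hit the iteration limit
        }
    }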