Monthly Archives: February 2016

Phil 2.29.16

7:00 – 3:00 VTX

  • Seminar today, sent Aaron a reminder.
    • Some discussion about my publication quantity. Amy suggests 8 papers as the baseline for credibility, so here are some preliminary thoughts about what could come out of my work:
      • PageRank-style sorting of returned documents by pertinence
      • User Interfaces for trustworthy input
      • Rating the raters / harnessing the Troll
      • Trustworthiness inference using network shape
      • Adjusting relevance through GUI pertinence
      • Something about ranking of credibility cues – Video, photos, physical presence, etc.
      • Something about the patterns of posting indicating the need for news. Sweden vs. Gezi. And how this can be indicative of emerging crisis informatics needs
      • Something about fragment synthesis across disciplines and being able to use it to ‘cross reference’ information?
      • Fragment synthesis vs. community fragmentation.
    • 2013 SenseCam paper
    • Narrative Clip
  • Continuing Incentivizing High-quality User-Generated Content.
    • Looking at the authors
    • The proportional mechanism therefore improves upon the baseline mechanism by disincentivizing q = 0, i.e., it eliminates the worst reviews. Ideally, we would like to be able to drive the equilibrium qualities to 1 in the limit as the number of viewers, M, diverges to infinity; however, as we saw above, this cannot be achieved with the proportional mechanism.
    • This reflects my intuition. The lower the quality of the rating, the worse the proportional rating system performs, and the lower the bar for quality is for the contributor. The three places I can think of offhand that have high-quality UGC (Idea Channel, StackOverflow and Wikipedia) all have people rating the data (contextually!!!) rather than a simple up/downvote.
      • Idea Channel – The main content creators read the comments and incorporate the best in the subsequent episode.
      • StackOverflow – Has become a place to show off knowledge, there are community mechanisms of enforcement, and the number of answers is low enough that it’s possible to look over all of them.
      • Others that might be worth thinking about:
        • Quora? Seems to be an odd mix of questions. Some just seem lazy (how do I become successful) or very open-ended (what kind of guy is Barack Obama?). The quality of the writing is usually good, but I don’t wind up using it much. So why is that?
        • Reddit. So ugly that I really don’t like using it. Is there a System Quality/Attractiveness as well as System Trust?
        • Slashdot. Good headline service, but low information in the comments. Occasionally something insightful, but often it seems like rehearsed talking points.

    • So the better the raters, the better the quality. How can the System evaluate rater quality? Link analysis? Pertinence selection? And if we know a rater is low-quality, can we use that as a measure in its own right?
  • Trying to test the redundant web page filter, but the URLs for most identical pages are actually slightly different:
  • I think tomorrow I might parse the URL or look at page content. Tomorrow.
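One way to attack the near-duplicate URL problem described above is to normalize each URL before comparison. A minimal sketch of the idea (class and method names are mine, not from the actual filter):

```java
import java.net.URI;

// Two URLs that differ only in scheme, "www." prefix, query string, or a
// trailing slash often point at the same page. This normalizer ignores
// those differences; it is a hypothetical stand-in, not the actual code.
public class UrlNormalizer {
    public static String normalize(String url) {
        URI u = URI.create(url.trim());
        String host = u.getHost() == null ? "" : u.getHost().toLowerCase();
        if (host.startsWith("www.")) {
            host = host.substring(4);
        }
        String path = u.getPath() == null ? "" : u.getPath();
        if (path.endsWith("/")) {
            path = path.substring(0, path.length() - 1);
        }
        // Scheme, query, and fragment are dropped for comparison purposes.
        return host + path;
    }

    public static boolean sameDocument(String a, String b) {
        return normalize(a).equals(normalize(b));
    }
}
```

Comparing page content would still be needed for the cases where the paths themselves differ.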

Phil 2.26.16

7:00 – 4:30 VTX

  • Continuing Incentivizing High-quality User-Generated Content.
    • Suppose there are M viewers. The distribution of the total available attention from these M viewers amongst the K participating contributors is determined by the mechanism M being used to display the content. But what if it’s the Idea Channel’s hybrid approach? Google does some ranking of the replies (that has to do with viewer rating?), but then (Mike Rugnetta? Staff?) go through some sample of the comments looking for those that are worth incorporating into the show. Oh, wait, are the comments on Reddit? Or is that where we go to comment on the comments? I’m confused. There does seem to be more dialog on Reddit. Is this cultural? Design? Both?
    • After poking around a bit, I discovered that YouTube creators have special tools to search through their comments:
  • More rating tool stuff
    • Working on Blacklist. Kinda done? The JPA query that uses LIKE isn’t behaving the way I think it should. Using ‘flaggable match’ for now instead of ‘match’. Oh. Duh. You need to use ‘%’ wildcards to indicate the pattern: %pattern%. Now I’m done.
    • Create a loop that changes all the QueryObjects so that qo.getUnquotedName() is used and persist. Done.
    • Moving on to eliminating redundant URLs that have the same rating per person (maybe also start skipping when the same rating for two people?)
    • I think it’s done – need to test. I’m a bit worried about recursion in loadNextPage/loadNextQuery. Might have to clean that up a bit.
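The LIKE gotcha above is worth pinning down: JPQL’s LIKE does no implicit wildcarding, so a parameter of ‘match’ only matches exactly ‘match’; the caller has to wrap it in ‘%’ for a contains-style search. A hedged sketch (the helper and query string are illustrative, not the actual code base):

```java
// Builds a "%...%" contains pattern for a JPQL LIKE clause, escaping any
// literal wildcard characters in the user's term first. Hypothetical helper.
public class LikePatterns {
    public static String buildContainsPattern(String term) {
        String escaped = term.replace("!", "!!")
                             .replace("%", "!%")
                             .replace("_", "!_");
        return "%" + escaped + "%";
    }

    // Typical usage with an EntityManager (sketch):
    // em.createQuery(
    //       "SELECT q FROM QueryObject q WHERE q.name LIKE :pattern ESCAPE '!'")
    //   .setParameter("pattern", buildContainsPattern(userTerm))
    //   .getResultList();
}
```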

Phil 2.25.16

7:00 – 5:00 VTX

  • Thinking more about the economics of contributing trustworthy information. Recently, I’ve discovered the PBS Idea Channel, which is a show that explores pop culture with a philosophical bent (LA Times review here). For example, Deadpool is explored from a phenomenology perspective. But what’s really interesting and seems unique to me is the relationship of the show with its commenters. For each show, there is a follow-on show where the most interesting comments are discussed by the host, Mike Rugnetta. And the comments are surprisingly cogent and good. I think that this is because Rugnetta is acting like the anchor of an interactive news program where the commenters are the reporters. He sets up the topic, gets the ball rolling, and then incorporates the best comments (stories) to wrap up the story. Interestingly, in a recent comment section on aesthetics (which I can’t find now?), he brings up a comment about science and philosophy, invites the commenter into a deeper discussion, and floats the potential of an episode about that.
  • To get a flavor, here’s one of the longer comments (with 25 replies on its own) from the Deadpool show:
    I could actually buy that DeadPool’s ability to understand the medium he’s in if it weren’t for one thing he does very often: references to our world. If his fourth wall breaks were limited to interacting with the panels, making quips and nods about the idea of “readers”, and joking about general comic book (or video game or movie) tropes, then I’d be on board with the idea that he is hyper-aware due to his constant physical torment and knowledge of his own perceptions. however, he somehow has knowledge of things that do not seem to exist in the world he inhabits, such as memes, pop culture references, and things like “Leeroy Jenkins”. His hypersensitivity can explain his knowledge of the medium he’s in (an integral part of the reality he inhabits), but I don’t see a way that it could explain him knowing about things that, as far as I’m aware, do not exist in his reality.
  • Compare that to the comments for the MIT OpenCourseWare intro course 6.034, which I ‘took’ and found well presented and deeply interesting, though not as flashy. Here’s a rough equivalent (with 21 replies):
    wow’s such an overwhelming feeling for a guy like me ..who had no chance in hell of ever getting into MIT or any other ivy’s to be able to listen and learn from this lectures online and that too free. :’)
  • To me, it seems like the Deadpool post is deeply involved with the subject matter of the episode, while the MIT comment is more typical of a YouTube comment in that it is more about the commenter and less about the content. This implies that rewarding good commenting with inclusion in the content of the show can improve the quality and relevance of the comments.
  • To continue the ‘News Anchor’ thought from above, it might be possible to structure a news entity of some kind where different areas (sports, entertainment, local/regional, etc) could have their own anchors that produce interactive content with their commenters. Some additional capability to handle multimedia uploads from commenters should probably be supported and better navigation, but this sounds more to me like a 21st century news product than many other things that I’ve seen. It’s certainly the opposite of the Sweden paper.
  • And speaking of papers, here’s one on YouTube comments: Commenting on YouTube Videos: From Guatemalan Rock to El Big Bang
  • Starting on Incentivizing High-quality User-Generated Content.
    • References look really good. Only 8? For a WWW paper?
    • This is starting to look like what I was trying to find. Nash Equilibrium. Huh. The model predicts, as observed in practice, that if exposure is independent of quality, there will be a flood of low quality contributions in equilibrium. An ideal mechanism in this context would elicit both high quality and high participation in equilibrium.
  • Need to add ‘change password’ option. Done. And now that I know my way around JPA, I like it a lot.
  • Added role-based enabling of menu choices
  • The code base could really use a cleanup. We have the classic research->production problem…
  • Adding match/nomatch and blacklist queries. Note that blacklist needs to be by search engine
    • Finished match
    • Finished nomatch
    • Working on Blacklist
    • Create a loop that changes all the QueryObjects so that qo.getUnquotedName() is used and persist.

Phil 2.24.16

7:00 – 4:00 VTX

Phil 2.23.16

7:00 – 3:30 VTX

  • Much needed vacation is now history. I started Information Rules – A Strategic Guide to the Network Economy. Probably not going to read the whole book, but it does address the economics issues I’m thinking about, though with more of a focus on financial transactions. For example, it discusses how people place different values on information, which makes me think about the differences between Sweden, Egypt and Turkey, as well as Crisis Informatics in general.
    • From my notes: This is why there is no blogging in Sweden. Since the reporting is good enough for most people, the only things people blogged about were things that weren’t covered in the news – personal expression or similar arcana. Where news is not available, the value of this kind of information goes up, and people who respond to the perceived need step in to fill the gap. This is important – an individual can have an information need, but also a perception of information needs in others, and a need/desire to provide for that need.
  • And based on one of Paul Krugman’s blog entries, I went and found the Wikipedia entry on information economics, which looks like it will be worth looking at. This part in particular leaped out at me: The subject of “information economics” is treated under Journal of Economic Literature classification code JEL D8 – Information, Knowledge, and Uncertainty. The present article reflects topics included in that code. There are several subfields of information economics. Information as signal has been described as a kind of negative measure of uncertainty.[2] It includes complete and scientific knowledge as special cases. The first insights in information economics related to the economics of information goods.
  • Submitting paperwork for CHIR
  • And, back to normal… Continue to refine the rating app?
    • Make uploading a super user thing. Which means user accounts and passwords. Probably add everyone to a DB and just let them put in/change passwords.
    • Add code to scan the DB for previous pages that had the same rating for the same doctor (and the same term?)
    • Add an analytics app that looks for ratings that disagree, either as outliers (watch out for that reviewer) or there is disagreement (are we having problems with terms, fuzzy matching, or what?)
    • Add a second app that tags the ontology onto the ‘Flaggable Match’
    • Write up a guidance manual for edge conditions. Comes up when you click ‘help’
    • Add a ‘total MATCH’ search. That shows how many relevant documents were returned
    • Add a ‘total NO MATCH’ search. That shows how many non-relevant documents were returned – basically
      select search_type, count(*) as matches, total_results from view_rated_items2 where rating NOT LIKE '%match%' group by search_type, total_results;
    • Add a blacklist query that lists all root domains that only show up in non-match results
    • Incorporate Flywaydb
      • Verified that I can generate just the table structure with mysqldump: mysqldump -u xxx-pyyy -d googlecse1 > gcse1Tables.sql
    • Get DB deployed somewhere and validate – talk to Damien and specify what’s needed. He’ll cost out hours. Done
    • Build a web repo that contains gold standard data that we can point a special test GoogleCSE and keep track of return changes.
    • Machine Learning framework
      • Get back up to speed on WEKA
      • probably have to write some java data translator generator code
      • Run some tests, get some results in the interactive mode,
      • Redo programmatically, so a collection of URLs (text? Yeah, extracted text. Compare Stanford and Alchemy?)
      • Data flow:
        • Raw pages,
        • Cleaned content
        • Machine learning (per provider?) returns scored pages
        • Extraction of flags from highly-ranked pages
  • Took all of the above and rolled it into stories. For points I built an Excel spreadsheet. Turns out that Excel doesn’t have Fibonacci, so I used this version.
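The machine-learning data flow listed above (raw pages → cleaned content → scored pages → flag extraction) could be sketched as a simple pipeline. Everything here is a placeholder stand-in – real cleaning would strip HTML and the scores would come from WEKA models – just to show the shape:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the four-step data flow. Stage names come from the notes;
// the implementations are hypothetical placeholders.
public class ScoringPipeline {
    public record ScoredPage(String content, double score) {}

    static String clean(String rawPage) {
        // Placeholder "cleaning": collapse whitespace and lowercase.
        return rawPage.trim().replaceAll("\\s+", " ").toLowerCase();
    }

    static double score(String cleaned) {
        // Placeholder scorer: fraction of flag terms present.
        // A real version would be a trained per-provider model.
        List<String> flags = List.of("sanction", "malpractice", "criminal");
        long hits = flags.stream().filter(cleaned::contains).count();
        return (double) hits / flags.size();
    }

    public static List<String> flagsFrom(List<String> rawPages, double threshold) {
        // Raw pages -> cleaned content -> scored pages -> keep high scorers.
        return rawPages.stream()
                .map(ScoringPipeline::clean)
                .map(c -> new ScoredPage(c, score(c)))
                .filter(p -> p.score() >= threshold)
                .map(ScoredPage::content)
                .collect(Collectors.toList());
    }
}
```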

Phil 2.18.16

7:00 – 6:00 VTX

I think that this is more an issue of information economics. The incentives in social publication are honor, glory and followers. Maybe some money from ad revenue sharing (though this is changing?). Traditional news media offers a more direct model where the product (news) is sold to readers and/or advertisers so that the news-making product can be made.

Connectivism states that there is now an emphasis on learning how to find information as opposed to knowing the information (since information obsolescence happens more rapidly, the value of the information is lower than the knowledge of how to find current knowledge).

Traditional news media tends to aggregate information to produce stories because that makes learning entertaining and worth the price paid (cash or time watching commercials). However, if the friction of finding free alternatives to the initial information for the story is low, then the value of the story drops, since now all you’re paying for is a pleasing presentation.

Blogs and other free sources make this more difficult for the consumer, since what appears credible may not be, but may be confused with an actual information source nonetheless. Or, looking at confirmation bias, a free pleasing story may have higher value for a consumer than a (non-free) well researched story that disputes the reader’s beliefs.

There is also an emotional cost to checking rumors that you agree with: going to Snopes to find out that the politician that you hate didn’t actually do that stupid thing you just saw in your feed. So the traditional few-channel media is being subsumed by networks that we construct to support our biases?

  • Banged away at the white paper. Done! Off to Key West for a long weekend!

Phil 2.17.16

7:00 – 5:00 VTX

  • Starting to list strawman hypotheses
  • Reading Connectivism paper. Very good so far.
  • Albert-László Barabási publications – Google Scholar profile
  • LexRank: graph-based lexical centrality as salience in text
  • Talked to Thresea about the human rating app/results and sent her this article on
  • Add doctor disambiguation popup – done
  • Add a ‘total results’ search. That shows how many relevant documents exist.
    MariaDB [googlecse1]> select distinct search_type, total_results from query_object where total_results > 0 order by total_results desc;
    | search_type                                    | total_results |
    | RESTRICTED_COM(Ram Singh: board actions)       |         12600 |
    | RESTRICTED_COM(Ram Singh: criminal)            |          7490 |
    | ALL_ORG(Ram Singh: board actions)              |          4200 |
    | BASELINE(Ram Singh: board actions)             |          3360 |
    | BASELINE(Ram Singh: criminal)                  |          1880 |
    | RESTRICTED_COM(Ram Singh: sanctions)           |          1580 |
    | ALL_ORG(Ram Singh: criminal)                   |          1390 |
    | ALL_ORG(Ram Singh: sanctions)                  |           539 |
    | ALL_GOV(Ram Singh: board actions)              |           401 |
    | BASELINE(Ram Singh: sanctions)                 |           284 |
    | ALL_US(Ram Singh: board actions)               |           157 |
    | ALL_EDU(Ram Singh: criminal)                   |           126 |
    | ALL_EDU(Ram Singh: board actions)              |           125 |
    | RESTRICTED_COM(Ram Singh: malpractice)         |           108 |
    | ALL_US(Ram Singh: criminal)                    |           103 |
    | ALL_GOV(Ram Singh: criminal)                   |            57 |
    | ALL_EDU(Ram Singh: sanctions)                  |            50 |
    | BASELINE(Ram Singh: malpractice)               |            34 |
    | ALL_ORG(Ram Singh: malpractice)                |            31 |
    | ALL_GOV(Ram Singh: sanctions)                  |            15 |
    | RESTRICTED_COM(Russell Johnson: criminal)      |             9 |
    | ALL_US(Ram Singh: sanctions)                   |             8 |
    | RESTRICTED_COM(Tommy Osborne: criminal)        |             8 |
    | ALL_EDU(Ram Singh: malpractice)                |             7 |
    | RESTRICTED_COM(Russell Johnson: board actions) |             7 |
    | RESTRICTED_COM(Tommy Osborne: board actions)   |             7 |
    | RESTRICTED_COM(Tommy Osborne: malpractice)     |             7 |
    | ALL_ORG(Tommy Osborne: board actions)          |             5 |
    | ALL_GOV(Ram Singh: malpractice)                |             4 |
    | ALL_US(Ram Singh: malpractice)                 |             4 |
    | ALL_ORG(Tommy Osborne: malpractice)            |             3 |
    | BASELINE(Tommy Osborne: board actions)         |             3 |
    | BASELINE(Tommy Osborne: malpractice)           |             3 |
    | ALL_GOV(Tommy Osborne: board actions)          |             2 |
    | ALL_GOV(Tommy Osborne: criminal)               |             2 |
    | ALL_GOV(Tommy Osborne: malpractice)            |             2 |
    | ALL_GOV(Tommy Osborne: sanctions)              |             2 |
    | ALL_ORG(Tommy Osborne: criminal)               |             2 |
    | RESTRICTED_COM(Tommy Osborne: sanctions)       |             2 |
    | BASELINE(Tommy Osborne: criminal)              |             1 |
    | BASELINE(Tommy Osborne: sanctions)             |             1 |
    | RESTRICTED_COM(Russell Johnson: malpractice)   |             1 |
    | RESTRICTED_COM(Russell Johnson: sanctions)     |             1 |
    43 rows in set (0.00 sec)
  • Need to run about 30 doctors through the system to get statistical significance for making recommendations
  • CommonCrawl vs. Google approximation. For this analysis, I listed all the domains that produced a ‘flaggable match’ and fed them into the common crawl index search for November 2015 (the most recent at the time of this writing). In the results listed below, each number indicates how many blocks are stored in the CommonCrawl for that domain; a value of zero indicates that the CommonCrawl index did not contain any reference to that domain:
    1 -
    6 -
    2 -
    3 -
    40 -
    0 -
    1 -
    1 -
    2 -
    0 -
    2 -
    2 -
    0 -
    0 -
    240 -
    0 -
    3 -
    3 -
  • As can be seen, 5 out of 18 domains (approximately 28%) containing useful information are missing. For the remaining sites, it is an open question whether the crawl contains the full data from the site.
  • Here are the ratios of flagged results (pertinence) to total search results (relevance):
    search type                                     pertinence  relevance    ratio
    ALL_GOV(Tommy Osborne: board actions)                    2          2  100.00%
    ALL_GOV(Tommy Osborne: criminal)                         2          2  100.00%
    ALL_GOV(Tommy Osborne: malpractice)                      2          2  100.00%
    ALL_GOV(Tommy Osborne: sanctions)                        2          2  100.00%
    BASELINE(Tommy Osborne: criminal)                        1          1  100.00%
    BASELINE(Tommy Osborne: sanctions)                       1          1  100.00%
    RESTRICTED_COM(Russell Johnson: malpractice)             1          1  100.00%
    ALL_ORG(Tommy Osborne: malpractice)                      2          3   66.67%
    ALL_ORG(Tommy Osborne: board actions)                    3          5   60.00%
    RESTRICTED_COM(Tommy Osborne: board actions)             4          7   57.14%
    ALL_GOV(Ram Singh: malpractice)                          2          4   50.00%
    RESTRICTED_COM(Tommy Osborne: sanctions)                 1          2   50.00%
    BASELINE(Tommy Osborne: board actions)                   1          3   33.33%
    BASELINE(Tommy Osborne: malpractice)                     1          3   33.33%
    RESTRICTED_COM(Russell Johnson: board actions)           2          7   28.57%
    RESTRICTED_COM(Tommy Osborne: malpractice)               2          7   28.57%
    ALL_US(Ram Singh: malpractice)                           1          4   25.00%
    ALL_GOV(Ram Singh: sanctions)                            2         15   13.33%
    RESTRICTED_COM(Tommy Osborne: criminal)                  1          8   12.50%
    ALL_ORG(Ram Singh: malpractice)                          3         31    9.68%
    ALL_GOV(Ram Singh: criminal)                             1         57    1.75%
    ALL_GOV(Ram Singh: board actions)                        4        401    1.00%
    ALL_US(Ram Singh: criminal)                              1        103    0.97%
    RESTRICTED_COM(Ram Singh: malpractice)                   1        108    0.93%
    ALL_ORG(Ram Singh: criminal)                             2       1390    0.14%
    ALL_ORG(Ram Singh: board actions)                        3       4200    0.07%
    RESTRICTED_COM(Ram Singh: criminal)                      2       7490    0.03%
    RESTRICTED_COM(Ram Singh: board actions)                 2      12600    0.02%
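For the record, the ratio column is just pertinence divided by relevance, expressed as a percentage with two decimals. A quick helper (hypothetical name, using Locale.US so the decimal point is stable):

```java
import java.util.Locale;

// Computes the pertinence/relevance ratio as a formatted percentage,
// matching the two-decimal formatting of the table above.
public class SearchRatios {
    public static String ratioPercent(int pertinence, int relevance) {
        double pct = 100.0 * pertinence / relevance;
        return String.format(Locale.US, "%.2f%%", pct);
    }
}
```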

Phil 2.16.16

7:00 – 4:00 VTX

  • Interesting stuff from Stephen Wolfram’s blog: Data Science of the Facebook World. Makes me wonder if you can infer age and gender from writing. Is this global or just US?
  • Meeting today with Wayne at 4:00
  • Added Config load
  • Added provider load
  • Added query generation. I realized that there is no need to generate a new query that duplicates one already queued but never run, so after generating all the potential new queries, I compare them to the untested list and remove any common items before persisting.
    • HOWEVER, while doing the calculation, I was adding all the QueryObjects to the ProviderObjects and then deleting them, so that when I persisted, I was adding HUGE numbers of rows. Moved the test so that it happens before a potential QueryObject is created.
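The fix amounts to filtering candidates against the already-known set before constructing (and attaching) any new QueryObject, rather than creating objects and deleting them afterwards. A sketch with illustrative stand-in types:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Keep only candidate queries that are neither duplicates within the batch
// nor already known to the system. Names are hypothetical stand-ins for
// the real entity classes; only the surviving names would be turned into
// persisted QueryObjects.
public class QueryGenerator {
    public static List<String> newQueries(List<String> candidates,
                                          Set<String> alreadyKnown) {
        Set<String> seen = new HashSet<>(alreadyKnown);
        return candidates.stream()
                .filter(seen::add)   // add() is false for anything seen before
                .collect(Collectors.toList());
    }
}
```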

Phil 2.15.16

7:30 – 1:30 VTX

Phil 2.12.16

6:30 – 4:30 VTX

  • Continuing Participatory journalism – the (r)evolution that wasn’t. Content and user behavior in Sweden 2007–2013
  • Create xml configuration file
  • Integrate Flyway?
  • Meeting on rating tool. Thoughts:
    • Add an ‘I goofed’ button to the GUI (or maybe a ‘back’ button that lets you change the rating?)
    • Add more info that pops up medical provider.
    • Add an analytics app that looks for ratings that disagree, either as outliers (watch out for that reviewer) or there is disagreement (are we having problems with terms, fuzzy matching, or what?)
    • Add a second app that tags the ontology onto the ‘Flaggable Match’
    • Write up a guidance manual for edge conditions. Comes up when you click ‘help’
    • When a URL comes up that has already been reviewed more than N times for the same provider and the reviews substantially agree (a majority? – which implies an odd number of reviews), don’t run that result item; just add a copy of the rating object with the name set to ‘computed’
  • Return from NJ
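The ‘computed’ rating rule from the meeting notes could be sketched like this (all names hypothetical): given a URL’s existing ratings for a provider, only produce a computed rating when there are at least N reviews and a strict majority of them agree.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;

// Returns the majority rating if at least minReviews exist and more than
// half of them agree; otherwise empty, meaning a human review is needed.
public class ComputedRating {
    public static Optional<String> majorityRating(List<String> ratings, int minReviews) {
        if (ratings.size() < minReviews) {
            return Optional.empty();
        }
        Map<String, Long> counts = ratings.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
        // Strict majority: more than half of all reviews carry this rating.
        return counts.entrySet().stream()
                .filter(e -> e.getValue() * 2 > ratings.size())
                .map(Map.Entry::getKey)
                .findFirst();
    }
}
```

The caller would persist a copy of the majority rating object attributed to ‘computed’ and skip queuing the item for review.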