Phil 3.8.16

7:00 – 3:00 VTX

  • Continuing A Survey on Assessment and Ranking Methodologies for User-Generated Content on the Web. Dense paper, slow going.
    • Ok, Figure 3 is terrible. Blue and slightly darker blue in an area chart? Sheesh.
    • Here’s a nice nugget though regarding detecting fake reviews with machine learning: for assessing spam product reviews, three types of features are used [Jindal and Liu 2008]: (1) review-centric features, which include rating- and text-based features; (2) reviewer-centric features, which include author-based features; and (3) product-centric features. The highest accuracy is achieved by using all features, but it performs nearly as well without the rating-based ones; rating-based features are not effective for distinguishing spam from nonspam because ratings (feedback) can also be spammed [Jindal and Liu 2008]. With regard to deceptive product reviews, deceptive and truthful reviews vary in complexity of vocabulary, personal and impersonal use of language, trademarks, and personal feelings. Nevertheless, linguistic features of a text are simply not enough on their own to distinguish between false and truthful reviews (that comparison was of deceptive and truthful travel reviews). Here’s a later paper that cites the previous one, and it looks like some progress has been made: Using Supervised Learning to Classify Authentic and Fake Online Reviews. (A toy sketch of combining those three feature groups is at the end of this post.)
    • And here’s a good nugget on calculating credibility. Correlating with expert sources has been very important: examining approaches for assessing credibility or reliability more closely indicates that most of the available approaches use supervised learning and are mainly based on external sources of ground truth [Castillo et al. 2011; Canini et al. 2011]. Features such as author activities and history (e.g., an author’s bio), author network and structure, propagation (e.g., a resharing tree of a post and who shares it), and topic-based features affect source credibility [Castillo et al. 2011; Morris et al. 2012]. Castillo et al. [2011] and Morris et al. [2012] show that text- and content-based features are not enough by themselves for this task, and Castillo et al. [2011] indicate that author features alone are also inadequate. Moreover, in a study on explicit and implicit credibility judgments, Canini et al. [2011] find that expertise has a strong impact on judging credibility, whereas social status has less impact. Based on these findings, they suggest that to better convey credibility, the way social search results are displayed needs to improve [Canini et al. 2011]. Morris et al. [2012] also suggest that information about an author’s credentials should be readily accessible (“accessible at a glance”), because it is time consuming for a user to search for it. Such information includes factors related to consistency (e.g., the number of posts on a topic), ratings by other users (or resharing or number of mentions), and information related to an author’s personal characteristics (bio, location, number of connections).
    • On centrality in finding representative posts, from Beyond trending topics: Real-world event identification on twitter: The problem is approached in two concrete steps: first, each event and its associated tweets are identified using a clustering technique that groups topically similar posts; second, for each event cluster, the posts that best represent the event are selected. Centrality-based techniques are used to identify relevant posts with high textual quality that are useful for people looking for information about the event. Quality refers to the textual quality of the messages, i.e., how well the text can be understood by any person. Of the three centrality-based approaches (Centroid, LexRank [Radev 2004], and Degree), Centroid is found to be the preferred way to select tweets given a cluster of messages related to an event [Becker et al. 2012]. (A rough sketch of the centroid idea is at the end of this post.) Furthermore, Becker et al. [2011a] investigate approaches for analyzing the stream of tweets to distinguish between relevant posts about real-world events and nonevent messages. First, they identify each event and its related tweets using a clustering technique that clusters topically similar tweets together. Then they compute a set of features for each cluster to help determine which clusters correspond to events, and use these features to train a classifier to distinguish between event and nonevent clusters.
  • Meeting with Wayne at 4:15
  • Crawl Service
    • Had the ‘&q=’ part in the wrong place
    • Was setting the key to the CSE id in the payload, which caused many errors. And it’s working now! Here’s the full payload:
      {
       "query": "phil+feldman+typescript+angular+oop",
       "engineId": "cx=017379340413921634422:swl1wknfxia",
       "keyId": "key=AIzaSyBCNVJb3v-FvfRbLDNcPX9hkF0TyMfhGNU",
       "searchUrl": "https://www.googleapis.com/customsearch/v1?",
       "requestId": "0101016604"
      }
    • Only the “query” field is required; there are hard-coded defaults for engineId, keyId, and searchUrlPrefix. (A rough sketch of how the full URL gets assembled from these is at the end of this post.)
    • Ok, time for tests, but before I try them in the Crawl Service, I’m going to try out Mockito in a sandbox (a minimal sketch of the kind of test I have in mind is at the end of this post)
    • Added mockito-core to the GoogleCSE2 sandbox. Starting on the documentation. Ok – that makes sense
    • Added SearchRequestTest to CrawlService
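
A toy sketch of the fake-review feature idea from the Jindal and Liu notes above: assemble review-centric, reviewer-centric, and product-centric features into one vector for whatever classifier sits downstream. The specific fields (reviewLength, reviewerReviewCount, productAvgRating, and so on) are placeholders I made up for illustration; the survey excerpt doesn't spell them out.

// Illustrative only: combine the three feature groups from the survey
// excerpt into one vector a classifier could consume. Field names are
// placeholders, not the features from Jindal and Liu.
public class ReviewFeatureVector {

    // Review-centric: rating- and text-based features
    double rating;            // star rating given in the review
    int reviewLength;         // number of tokens in the review text
    double exclamationRatio;  // crude text-based signal (illustrative)

    // Reviewer-centric: author-based features
    int reviewerReviewCount;  // how many reviews this author has written
    double reviewerAvgRating; // average rating the author hands out

    // Product-centric features
    double productAvgRating;  // average rating of the reviewed product
    int productReviewCount;   // how many reviews the product has

    /** Flatten into the array a downstream classifier would consume. */
    public double[] toArray(boolean includeRatingFeatures) {
        // Per the excerpt, accuracy is about the same without rating-based
        // features, since ratings themselves can be spammed.
        if (includeRatingFeatures) {
            return new double[] { rating, reviewLength, exclamationRatio,
                    reviewerReviewCount, reviewerAvgRating,
                    productAvgRating, productReviewCount };
        }
        return new double[] { reviewLength, exclamationRatio,
                reviewerReviewCount, reviewerAvgRating,
                productAvgRating, productReviewCount };
    }
}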
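And the centroid selection idea from the Becker et al. notes, sketched with my own toy term-frequency vectors and cosine similarity (not their implementation): average the vectors in a cluster, then return the post closest to that average.

import java.util.*;

// Toy centroid-based selection: given a cluster of topically similar posts,
// return the post whose term-frequency vector is most similar to the
// cluster centroid, i.e., closest to "what the cluster is about".
public class CentroidSelector {

    static Map<String, Double> termFrequencies(String post) {
        Map<String, Double> tf = new HashMap<>();
        for (String token : post.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) tf.merge(token, 1.0, Double::sum);
        }
        return tf;
    }

    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
            normA += e.getValue() * e.getValue();
        }
        for (double v : b.values()) normB += v * v;
        return (normA == 0 || normB == 0) ? 0 : dot / Math.sqrt(normA * normB);
    }

    /** Pick the post in the cluster closest to the centroid vector. */
    public static String selectRepresentative(List<String> cluster) {
        // Centroid = average of all term-frequency vectors in the cluster.
        Map<String, Double> centroid = new HashMap<>();
        for (String post : cluster) {
            termFrequencies(post).forEach((t, v) -> centroid.merge(t, v, Double::sum));
        }
        centroid.replaceAll((t, v) -> v / cluster.size());

        String best = null;
        double bestScore = -1;
        for (String post : cluster) {
            double score = cosine(termFrequencies(post), centroid);
            if (score > bestScore) { bestScore = score; best = post; }
        }
        return best;
    }
}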
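To keep the ‘&q=’ and key/cx placement straight, here's roughly how I think of the request URL being assembled from that payload. The class and method names are mine, not necessarily what's in the Crawl Service; the fall-back-to-defaults behavior matches the note above, and the id strings are truncated here since the full values are in the payload.

// Rough sketch of assembling the Google Custom Search request URL from the
// payload above. Names are illustrative, not the actual Crawl Service code.
public class SearchRequestSketch {

    // Hard-coded defaults used when the payload only supplies "query"
    // (ids truncated; the real values are in the payload above).
    static final String DEFAULT_SEARCH_URL = "https://www.googleapis.com/customsearch/v1?";
    static final String DEFAULT_ENGINE_ID  = "cx=01737934...";
    static final String DEFAULT_KEY_ID     = "key=AIzaSyBC...";

    /** Build the request: prefix, then key, then cx, then '&q=' and the query. */
    public static String buildUrl(String query, String engineId, String keyId, String searchUrl) {
        String prefix = (searchUrl != null) ? searchUrl : DEFAULT_SEARCH_URL;
        String key    = (keyId     != null) ? keyId     : DEFAULT_KEY_ID;
        String cx     = (engineId  != null) ? engineId  : DEFAULT_ENGINE_ID;
        // The earlier bug: the key field held the cx value, and '&q=' was in
        // the wrong place. Keeping the ordering explicit here avoids both.
        return prefix + key + "&" + cx + "&q=" + query;
    }

    public static void main(String[] args) {
        // Only "query" supplied; everything else falls back to the defaults.
        System.out.println(buildUrl("phil+feldman+typescript+angular+oop", null, null, null));
    }
}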
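And the Mockito sandbox test I'm working toward looks roughly like this: a minimal stub-and-verify sketch using a made-up SearchClient interface rather than the real Crawl Service classes.

import static org.mockito.Mockito.*;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Minimal Mockito sketch. SearchClient is a made-up interface standing in
// for whatever the Crawl Service actually calls; the point is the
// stub-then-verify pattern, not the real API.
public class MockitoSandboxTest {

    interface SearchClient {
        String search(String url);
    }

    @Test
    public void returnsStubbedResponseForBuiltUrl() {
        SearchClient client = mock(SearchClient.class);
        when(client.search(anyString())).thenReturn("{\"items\":[]}");

        String response = client.search("https://www.googleapis.com/customsearch/v1?key=...&cx=...&q=test");

        assertEquals("{\"items\":[]}", response);
        verify(client, times(1)).search(anyString());
    }
}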