Transcript PPT

We toiled, we submitted, we conquered … got rejected
Discussion on close rejects
Saptarshi Ghosh
CNeRG Retreat
Typical review process (SIGIR)
3 reviewers review paper
Primary Area Chair discusses with reviewers, writes meta-review
Secondary Area Chair double-checks reviews, and may provide additional review
Area Chairs → PC chairs: Accept / Reject / Accept If Room
PC chairs rank papers by average score
Clear accepts, clear rejects identified
“Accept if room” papers discussed further
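As a rough illustration of the last two steps, here is a minimal Python sketch of how papers might be ordered by average reviewer score and split into clear accepts, clear rejects, and an "accept if room" pile for further discussion. The 1-5 scale and the cut-off values are illustrative assumptions, not SIGIR's actual thresholds.

```python
# Minimal sketch: rank papers by average reviewer score and bucket them.
# The 1-5 scale and the cut-offs below are illustrative assumptions only.
from statistics import mean

papers = {
    "paper-A": [5, 4, 4],   # per-reviewer overall scores (hypothetical)
    "paper-B": [2, 1, 2],
    "paper-C": [3, 3, 4],
}

# Rank papers from highest to lowest average score.
ranked = sorted(papers.items(), key=lambda kv: mean(kv[1]), reverse=True)

for title, scores in ranked:
    avg = mean(scores)
    if avg >= 4.0:                  # hypothetical "clear accept" cut-off
        bucket = "clear accept"
    elif avg < 2.5:                 # hypothetical "clear reject" cut-off
        bucket = "clear reject"
    else:
        bucket = "accept if room -> discuss further"
    print(f"{title}: avg={avg:.2f} -> {bucket}")
```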
Methodology and Disclaimer
What I should have done, but did not:
Did not read the papers
Did not find out about the state-of-the-art
So, I believe what the reviewers said
What I did:
Read the reviews carefully, formed my own views
Discussed with the 1st authors why they think the paper got rejected
Disclaimer: Views are not personal attacks on anyone
Rejects
Work | Venue | Evaluation
Nested query segmentation | SIGIR 2013, 19.9% (73/366) | 5, 3, 3, meta-review: 2 (scale 1-5, threshold: 3)
Community detection | WWW 2014, 12.9% (84/650) | -4, -2, -2 (-4: should reject, -2: marginal)
Twitter topic search | WSDM 2014, 18% (out of 356) | -1, 2, -2 (weak reject, accept, reject)
Broadcast delay in DTN | INFOCOM 2014, 19.5% (320/1645) | 3, 3, 3, 1 (3: accept if room, 1: reject)
Attack tolerance of time-varying networks | PRE | Rejected after editorial review
Rejects turned into Accepts
Work | Failures | Final success
Spam and link farming in Twitter | Rejected at IMC 2012; rejected at WSDM 2012 | Accepted at WWW 2012
Coverage maximization under resource constraints | Initially rejected after PRE editorial review | Normal review process after report by Editorial Board members; finally accepted to PRE
SIGIR 2013 (5, 3, 3, 2)
Submission:
Nested Query Segmentation for Information Retrieval
Rishiraj, Anusha Suresh, NG, Monojit Choudhury
Reasons for rejection:
Dataset used is not well-known; 2 reviewers advise using TREC
Improvement from the proposed method is very low
No comparison with the method in [Metzler, Croft], “the most commonly used method to segment queries”
How important / necessary is nested query segmentation?
SIGIR 2013 (5, 3, 3, 2)
Scores in range 1 – 5, accept threshold: 3
Scores from 3 reviewers, one meta-reviewer (last)
Relevance to SIGIR: 5 – 4 – 4 – 5
Originality of Work: 4 – 4 – 4 – 4
Technical Soundness: 4 – 4 – 2 – 2
Quality of Presentation: 4 – 4 – 4 – 4
Impact of Ideas or Results: 4 – 2 – 3 – 3
Adequacy of Citations: 4 – 4 – 4 – 3
Reproducibility of Methods: 3 – 4 – 3 – 3
Overall Recommendation (1-6): 5-3-3-2 (meta-review)
WWW 2014 (-4, -2, -2)
Submission:
Stay where you belong: on the permanence of vertices in network communities
Tanmoy, Sriram Srinivasan, NG, AM, Sanjukta Bhowmick
Reasons for rejection:
Presentation: other community detection methods heavily criticized
Incomplete literature survey
Evaluation: compared the local measure with other local measures, not with global measures like modularity
WWW 2014 (-4, -2, -2)
Reasons for rejection:
Questions over the basic approach of how the metric is defined
Contribution not enough
Permanence maximization yields poor performance on the LFR benchmarks
Poor performance for mu=0.6 questions the whole usefulness of the measure: “Why would one need it if there are already better techniques?”
WSDM 2014 (-1, 2, -2)
Submission:
Searching for Topical Content in Microblogs: On the Wisdom of Experts vs. Crowds
Bilal, Parantapa, NG, Saptarshi, Krishna Gummadi
Reasons for rejection:
Motivation / story-line was not clear; reviewers did not realize that a solution to a new type of search was proposed
Evaluation: more quantitative results required
Writing / presentation was not good
INFOCOM 2014 (3, 3, 3, 1)
Submission:
Segmented message broadcast in delay tolerant networks: An analytical and numerical study
Biswajit Paria, Rajib, NG, AM, Tyll Krueger
Reasons for rejection:
Positioning of the work as a DTN paper was not clear
Justification for some technical design choices not given
Presentation not good: a lot of missing information
PRE (rejected after editorial review)
Submission:
Attack tolerance of correlated time-varying social networks with well-defined communities
Souvik, NG, AM
Reasons for rejection:
“will consider only papers with significant and new results”
“your manuscript is a variant of existing work in the literature, displays predictable results, and lacks novelty”
Failure → Success: PRE
Submission:
Coverage maximization under resource constraints using nonuniform proliferating random walk
Sudipta Saha, NG
Editorial review: not suitable for publication in PRE
Review by two Editorial Board Members:
Statistics of random walks is a reasonable topic for PRE
Results are a rather small technical incremental advance with respect to the previous methods
Basic idea suitable, but awkward presentation of theoretical arguments, limited numerical experiments
Failure → Success: PRE
For the first review:
Addition of more results on some different types of graphs
For the second review:
Reorientation of the content
Added a small theory to explain the whole phenomenon
Possibly the physics community is not as excited by the development of an algorithm as they are by a new theory or model which explains some phenomenon
Failure → Success: WWW
Submission at IMC, WSDM:
Who let the spammers in? Analyzing the Vulnerability of the Twitter Social Network to Spammer Infiltration
Saptarshi, students at MPI, NG, Fabricio, Krishna
Rejected at IMC (3, 3, 2, 2, 2)
3: Good paper: can accept, but will not champion it
2: Weak paper: should reject, but not strongly against it
Rejected at WSDM: -1, 0, -2
-1: weak reject, 0: borderline, -2: reject
Failure → Success: WWW
IMC and WSDM: most reviewer issues were on
Not enough done to identify spammers
Not much distinction between spammers and marketers
Study explains only a small fraction of spammers’ links
Observations are mostly obvious
Accepted at WWW (12%): 2, 2, 0, 2 (meta)
Understanding and Combating Link Farming in the Twitter Social Network
Focused more on marketers than on spammers
Clearly differentiated between the two
Summary of reasons for rejection
Venue | Reasons
SIGIR | Low improvement / contribution; motivation, significance of problem not clear; lack of comparison with related work
WWW | Low improvement / contribution; lack of comparison with related work
WSDM | Positioning of the work not good
INFOCOM | Positioning of the work not good; motivation, technical decisions not clear
PRE | Low improvement / contribution
Questions: Are we …
choosing the right journals / conferences in terms of scope?
addressing sufficiently important problems?
aiming too high for some projects without realistically estimating the novelty / contribution?
contributing sufficiently for the chosen problems?
doing sufficient literature survey?
comparing with state-of-the-art?
using acceptable evaluation methodologies / metrics?
thinking of alternative / counter arguments?
writing the paper well?
giving sufficient time to a project?
Possible solutions
Do a comprehensive literature survey ‘early’
Establish that the problem is really important
Discuss works in progress with others:
To know alternative points of view / positioning
Better to be grilled by peers than by reviewers
Use reading group
Thank You