Symposium on The Impact of Linguistics Journal Rankings and Citations Organized by members of the Committee of Editors of Linguistics Journals.


Symposium on
The Impact of Linguistics Journal
Rankings and Citations
Organized by members of the
Committee of Editors of Linguistics Journals
Outline
• Brian Joseph (The Ohio State University): The editor’s
perspective
• Tim Stowell (University of California, Los Angeles): The
administrator’s perspective
• John Cullars (University of Illinois at Chicago): The
bibliometrician’s perspective
• Marian Hollingsworth (Thomson Scientific): The
industry perspective
• Keren Rice (University of Toronto): Summation
• Martha Ratliff (Wayne State University): Discussant
Rossner, M. et al. 2007. Show me the data.
Journal of Cell Biology.
… Impact factor data … have a strong influence
on the scientific community, affecting decisions
on where to publish, whom to promote or hire,
the success of grant applications, and even
salary bonuses. Yet, members of the
community seem to have little understanding of
how impact factors are determined … .
What kind of material is there?
Bare lists of journals implicitly or explicitly deemed ‘major’:
• DeMiller, Anna L. 2000. Linguistics. A guide to the reference
literature (2nd edn.). Englewood: Libraries Unlimited. (on-line)
Citation indexes:
• Thomson Scientific, Web of Science, “impact factor”
• www.journal-ranking.com (on-line, interactive)
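For context, the "impact factor" named above is, in its standard two-year form, a simple ratio: citations received in a given year to a journal's items from the two preceding years, divided by the number of "citable items" published in those two years. The sketch below illustrates only that arithmetic; the journal name and all figures are hypothetical, and the real Thomson calculation depends on its proprietary classification of which items count as citable.

```python
def two_year_impact_factor(citations_to_prev_two_years: int,
                           citable_items_prev_two_years: int) -> float:
    """Impact factor for year Y: citations in Y to items published
    in Y-1 and Y-2, divided by citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 150 citations in 2008 to its 2006-2007
# articles, of which 60 were counted as citable items.
print(two_year_impact_factor(150, 60))  # 2.5
```

Note how sensitive the ratio is to the denominator: reclassifying even a handful of items as non-citable can shift the result noticeably, which is one source of the opacity the Rossner et al. editorial complains about.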
Other efforts to evaluate, rate or rank:
• European Science Foundation, “European Reference Index for the
Humanities”. [A, B, C assigned to each journal.]
• Rolf A. Zwaan & Anton J. Nederhof. 1990. Some aspects of
scholarly communication in linguistics: An empirical study.
Language 66.3.553-57. [Survey data from Dutch linguists.]
• Other academic bibliometric work like Georgas and Cullars.
The citations approach
“Citations can be counted, and one way of
measuring the quality of a publication is to
assess whether it appears in a high-citation
outlet. … The chain of inference … runs as
follows. The more papers in the journal are
cited, the more impact that journal has. The
more impact a journal has, the more authors
will want to publish in that journal. …
“The more authors who want to publish in the
journal, the more demanding will be the selection
criteria applied in the refereeing process. The
more demanding the selection criteria applied in
the refereeing process, the better the average
paper will be. The better the average paper in the
journal, the more it will be cited. And so a virtuous
circle is completed.”
—British Academy Report
An alternative
• “The ERIH lists will help identify excellence in
Humanities scholarship and should prove useful
for the aggregate benchmarking of national
research systems, for example, in determining
the international standing of the research
activity carried out in a given field in a
particular country. As they stand, the lists are
not a bibliometric tool. The ERIH Steering
Committee and the Expert Panels therefore
advise against using the lists as the only basis
for assessment.”
Thomson
journal-ranking.com
Reactions & meta-evaluation
• Wouter Gerritsma, Citation analysis for research evaluation.
• Pauly & Stergiou. 2005. Equivalence of results from two
citation analyses: Thomson ISI’s Citation Index and Google’s
Scholar service. Inter-Research (int-res.com).
• Various (negative) editorials in Nature.
• Peer Review: the challenges for the humanities and social
sciences. A British Academy Report, Sept. 2007.
• Show me the data. Mike Rossner, Heather Van Epps, and
Emma Hill. Journal of Cell Biology 2007.
• Chronicle of Higher Education, 10 October 2008: “New
Ratings of Humanities Journals Do More Than Rank — They
Rankle” (and earlier articles there.)
Evaluitis: A New Disease (orig. “Eine neue Krankheit”)
Bruno S. Frey, University of Zurich, 2006
"Evaluitis" - i.e. ex post assessments of
organizations and persons - has become a rapidly
spreading disease. In addition to the well-known
costs imposed on evaluees and evaluators,
additional significant costs are commonly
disregarded: incentives are distorted, ossification
is induced and the decision approach is wrongly
conceived. As a result, evaluations are used too
often and too intensively.
But really, a good system could …
• encourage rigor,
• establish standards,
• and set a tone.
More fundamental problems
• Limited or problematic data and measures
• Poorly defined standards
Worse:
• Results being put to uses that weren’t
intended or aren’t justified?
An example: Rejection rates
There is no standard and no way to verify, yet the
numbers are widely quoted. Two dramatically
different views:
• #1: Count every editorial decision as a submission.
• #2: Don’t count submissions not sent out for
review, or revisions of earlier submissions.
We (ELJ) are working to establish a norm for linguistics.
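The two counting conventions can yield strikingly different rejection rates for the very same editorial year. A minimal sketch, with all figures invented for illustration:

```python
# Hypothetical year of decisions at one journal.
new_submissions = 200        # new manuscripts received
desk_rejections = 80         # rejected without external review
revisions_resubmitted = 50   # revised versions of earlier submissions
accepted = 30                # manuscripts eventually accepted

# View #1: every editorial decision counts as a submission.
total_v1 = new_submissions + revisions_resubmitted          # 250
rate_v1 = 1 - accepted / total_v1                           # 0.88

# View #2: exclude desk rejections and revisions from the count.
total_v2 = new_submissions - desk_rejections                # 120
rate_v2 = 1 - accepted / total_v2                           # 0.75

print(f"View #1: {rate_v1:.0%} rejected")  # View #1: 88% rejected
print(f"View #2: {rate_v2:.0%} rejected")  # View #2: 75% rejected
```

A 13-point spread from bookkeeping alone, which is why an agreed norm of the kind ELJ proposes matters before the numbers are compared across journals.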
Time is tight, so let’s get going …
These slides are available at …
http://groups.google.com/group/editors-of-linguistics-journals.
We plan to produce and post a white paper
based on this session … stay tuned.
Our original questions
1. Philosophical
• Why is ‘objective’ ranking of journals considered important?
• Why have organizations like Thomson ISI and Google Scholar become
important within academia?
• What are the arguments for and against rankings and impact factors?
2. Practical
• Can measurement of journal impact and rank be done objectively?
• Is this being done within linguistics in ways that are valuable?
• Is this being done within linguistics in ways that might threaten the vitality
of research and of journals?
3. Professional
• What role does the assessment of journal impact and rank actually play in
grantsmanship, promotion, tenure and hiring decisions, library
acquisitions, etc.?