An Introduction to Using Incites to Evaluate Research Performance and
Support Submissions for REF 2014
[email protected]
http://incites.isiknowledge.com
Objectives:
After this session you will be able to:
• Understand the basic components of InCites (Slide 3)
• Understand the normalised indicators and how to use them (Slide 15)
• Navigate in Research Performance Profiles, Global Comparisons and Institutional
Profiles (RPP = Slide 22, GC = 57, IP = 76)
• Understand the preset reports and what they inform on (Slide 24)
• Perform an analysis of authors/departments/subject areas/collaborations using
standard and normalised indicators (Slide 32)
• Create simple custom reports (Slide 46)
• Save and share reports with colleagues (Slide 51)
• Benchmark the performance of institutions/countries against global averages in multiple
subject schema (Slides 57-75)
• Compare the performance of an institution against peer institutions across a wide
range of activities and subject focus (Slides 76-88)
• Use InCites to support submissions for REF 2014 (Slides 89-93)
2
Objective: Understand the basic components
of InCites
• InCites is a customised, citation-based research
evaluation tool on the web that enables you to
analyse institutional productivity and benchmark
your output against peers worldwide.
• All bibliographic and citation data are drawn from the
Web of Science.
• The InCites platform offers three modules:
– Research Performance Profiles (RPP)
– Global Comparisons (GC)
– Institutional Profiles (IP)
3
InCites Modules

Research Performance Profiles (RPP)
 Bibliometrics driven
 Internal view - data for your institution’s published work.
 Granular - detail at the paper, author, discipline level, and more.
 Customized - based on customer requirements. Optional author-based data sets.
 Current - updated quarterly.

Global Comparisons (GC)
 Bibliometrics driven
 Comparative - across institutions and countries.
 Top-level - detail summarized at the institution and country level for various disciplines.
 Collaboration data.
 Standardized - annual production, uniform data cut-off for all institutions.
 Diverse - a wide range of metrics place the influence of published research into multiple perspectives for an institution.

Institutional Profiles (IP)
 360° view of the world’s leading research institutions.
 Institution-submitted data, Academic Reputation Survey data and bibliometrics.
 A huge undertaking, providing a unique set of data, objective and subjective, and tools through which to examine and compare institutional resources, influence, and reputation.
4
Research Performance Profiles
• A custom-built dataset created by Thomson Reuters to match customer
specifications
• Datasets can be compiled using the following search criteria:
• Address (extracting from WOS records that contain at least one
occurrence of an address e.g. Univ Manchester and variants as
identified by the customer)
• Author (extracting from WOS records that contain specific authors/ or
papers as identified by the customer)
• Other datasets are available for topic and journal
• Updated quarterly from the date of issue. Customers can work with the InCites
team to request changes for better unification and to improve future updates
• InCites can include source articles published between 1981 and 2013 as
indexed in the Web of Science
• Customers can extract the data via the API to populate their CRIS systems.
The vendor must be approved by Thomson Reuters.
5
Research Performance Profiles
RPP can be used to inform on..
• The overall performance of research at an institution
• The performance of authors
• The performance of departments
• The performance of collaborations
• The performance of areas of research
• The performance of individual papers
• The performance of papers in specific journals
• The impact/influence of published research
• The performance of papers funded by a funding agency
6
RPP- Web of Science data
1. All document types that match the customer specification are included (articles, reviews, editorials, letters, etc.)
2. All authors indexed
– Last name + initials
– Variants included
– Name as published
– Full author name displays in the Author Ranking report in an author-based dataset
3. All addresses indexed
– Author affiliation as published
– Main organisation (e.g. Univ Manchester) displayed in RPP
4. Funding information from 2008 onwards
– Funding agency as published and grant numbers from the Funding Acknowledgement
5. Web of Science Subject Area applied at journal level
– 249 WOS/JCR subject categories
– Source records inherit all journal-level categories (an article published in the Journal of Dental
Research will inherit the categories Dentistry, Oral Surgery & Medicine)
– Multidisciplinary journals categorised as ‘Multidisciplinary Sciences’
– For some multidisciplinary journals (Science, Nature, British Medical Journal, etc.) articles are
reassigned a new WOS category based on analysis of citing/cited relationships
6. Journal Impact Factor from the 2011 JCR
7. Author Keywords and KeyWords Plus
7
RPP- Web of Science Data
[Screenshot: a Web of Science record annotated with numbered callouts corresponding to the data elements (1-7) listed on the previous slide.]
8
RPP Key Metrics
These metrics enable comparison of an article’s impact with global averages:
• Journal Expected Citation Rate
– Average citations for records of the same type, from the same journal, published in the same year
• Category Expected Citation Rate
– Average citations for records of the same type, from the same category, published in the same year
• Percentile in Field
– Citation performance relative to records of the same document type, from the same category,
published in the same year. The most cited paper is awarded the lowest percentile (0%) and the
least cited or uncited papers the highest percentile (100%)
• h-index
• Journal Actual/ Journal Expected
– Ratio of the actual citation count of a paper to the expected count for papers published in the
same journal, year and document type
• Category Actual/ Category Expected
– Ratio of the actual citation count of a paper to the expected count for papers from the
same category, year and document type
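Restated compactly as formulas (the symbols below are introduced here for readability and are not InCites notation):

\[
\text{Journal Actual/Expected}(p) = \frac{c_p}{E_{\text{journal}}(p)}, \qquad
\text{Category Actual/Expected}(p) = \frac{c_p}{E_{\text{category}}(p)}
\]

where \(c_p\) is the actual citation count of paper \(p\), and \(E_{\text{journal}}(p)\) and \(E_{\text{category}}(p)\) are the Journal and Category Expected Citation Rates defined above (mean citations of records of the same document type and publication year in the same journal or category respectively).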
9
Global Comparisons (GC)
• Global Comparisons contains aggregated comparative statistics for institutions, countries and
fields of research
• Built by Thomson Reuters and common to all customers: all customers see the same data in GC
• All data drawn from the Web of Science (SCI, SSCI & AHCI)
• File depth from 1981-2011
• Updated annually
• Data for Articles, Reviews and Research Notes
• Use Institutional Comparisons to compare the performance of an institution or groups of
institutions overall, across fields or within fields
– Institutional name variant unification (main organisation)
– New! Corporation entities
• Use National Comparisons to compare the performance of more than 180 countries
and 9 geopolitical regions overall, across fields or within fields
• Multiple Subject Categories:
– WOS - 249 subject categories
– Essential Science Indicators - 22 broad categories
– Regional Categories (UK, Australia, Brazil and China)
– OECD
10
Institutional Comparisons Key Metrics
• Web of Science documents (%international collaboration)
• Times Cited
• Cites per document (Average Impact)
• % Documents Cited (at least 1 citation)
• Impact Relative to Subject Area (average cites of an institution in a subject
area compared to the expected impact in the subject area)
• Impact Relative to Institution (average cites of papers in a field compared to
the average cites overall for the institution)
• % Documents in Subject Area (market share)
• % Documents in Institution
• % Documents Cited Relative to Subject Area
• % Documents Cited Relative to Institution
• Aggregate Performance Indicator: this metric normalises for period,
document type and subject area and is a useful indicator to compare
institutions of different age, size and subject focus.
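Restating the two relative-impact ratios from the parenthetical definitions above (notation introduced here for readability, not taken from InCites):

\[
\text{Impact Relative to Subject Area} = \frac{\bar{c}_{\text{institution, field}}}{\bar{c}_{\text{field (expected)}}}, \qquad
\text{Impact Relative to Institution} = \frac{\bar{c}_{\text{institution, field}}}{\bar{c}_{\text{institution, overall}}}
\]

where \(\bar{c}\) denotes average cites per document; values above 1 indicate citation impact above the relevant baseline.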
11
National Comparisons Key Metrics
• Web of Science documents
• Times Cited
• Cites per document (Average Impact)
• % Documents Cited (at least 1 citation)
• Impact Relative to Subject Area (average cites of an institution in a subject
area compared to the expected impact in the subject area)
• Impact Relative to Country (average cites of papers in a field compared to
the average cites overall for the country)
• % Documents in Subject Area (market share)
• % Documents in Country
• % Documents Cited Relative to Subject Area
• % Documents Cited Relative to Country
12
Institutional Profiles
• Institutional Profiles is a dynamic web-based resource presenting
portraits on more than 550 of the world’s leading research
institutions.
• Through rigorous collection, vetting, aggregation and normalization
of both quantitative and qualitative data, the profiles present details
on a wide array of indicators such as faculty size, reputation,
funding, citation measures and more.
• Citation metrics from Web of ScienceSM
• Profile information from the institutions themselves.
• Reputational data from the Global Institutional Profiles Project. Data
from this project are used by Times Higher Education (THE) to inform the World University
Rankings
13
Key Features
• Visualization tools facilitate instant comparisons of
performance across a wide array of indicators and
subjects:
– Research Footprint™
– Trend Graph
– Scatter Plot
• The ability to create customized Peer Groups for
continuous comparative tracking
14
Objective: Understand the normalised
indicators and how to use them
‘The number of times that papers are cited is not
in itself an informative indicator; citation counts
need to be benchmarked or normalised against
similar research. In particular citations accumulate
over time, so the year of publication needs to be
taken into account; citation patterns differ greatly in
different disciplines, so the field of research
needs to be taken into account; and citations to
review papers tend to be higher than for articles
and this also needs to be taken into account.’
Source: REF Pilot Study
15
Normalisation
• It is necessary to normalise absolute citation
counts for:
– Document type (reviews cited more than articles, some
document types cited less readily)
– Journal where published
– Year of publication (citations accumulate over time)
– Category (there is a marked difference in citation activity
between categories)
• Golden rule: Compare like with like
16
Is this a high citation count?
This paper has been cited
1104 times.
How does this citation count
compare to the expected
citation count of other reviews
published in the same journal,
in the same year?
It is necessary to normalise
for:
• Journal = Angewandte Chemie-International Edition
• Year = 2006
• Document type = review
17
Create a benchmark- the expected citations
Search for papers that
match the criteria
Run the Citation Report
on the results page
18
Create a benchmark- the expected citations
Reviews published in Angewandte Chemie-International Edition in 2006
have been cited on average 190.10 times. This is the Expected Count.
We compare the total citations received by the paper to what is expected:
1104 (Journal Actual) / 190.10 (Journal Expected) = 5.81
The paper has been cited 5.81 times as often as expected. We call this ratio
Journal Actual/Journal Expected
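A minimal sketch of the same calculation, assuming the benchmark citation counts (reviews from the same journal and year) have been exported to a list; the counts below are hypothetical and chosen so that their mean matches the example above:

```python
# Journal Actual/Expected: a paper's citation count divided by the mean citation
# count of the benchmark set (same journal, publication year and document type).

def journal_actual_expected(paper_citations: int, benchmark_citations: list[int]) -> float:
    expected = sum(benchmark_citations) / len(benchmark_citations)  # Journal Expected Citation Rate
    return paper_citations / expected

# Hypothetical benchmark set whose mean is 190.10, as in the example above:
benchmark = [190, 191, 189, 190, 190, 190, 191, 189, 190, 191]
print(round(journal_actual_expected(1104, benchmark), 2))  # 5.81
```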
19
Percentile in Field. How many papers in the
dataset are in the top 1%, 5% or 10% in their
respective fields?
This is an example of the citation frequency distribution of a set of papers in
a given category, database year and document type. The papers are ordered from
uncited/least cited on the left to the most highly cited papers in the set on the right,
and each paper can be assigned a percentile within the set. In any given set there are
always many lowly cited or uncited papers (bottom of the distribution, towards 100%)
and always only a few highly cited papers (top 1%, towards 0%).
Only the document types article, note and review are used to determine the percentile
distribution, and only those same document types receive a percentile value. If a journal is
classified into more than one subject area, the percentile is based on the subject area in
which the paper performs best, i.e. the lowest value
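A minimal sketch of this percentile assignment, assuming one list of citation counts per category/year/document-type set (function and variable names are illustrative; InCites' exact tie-handling is not described in this deck):

```python
# Percentile in Field: the most cited paper in the set receives 0%, the least
# cited or uncited papers approach 100%. For journals classified into more than
# one subject area, InCites keeps the best (lowest) value across the categories.

def percentile_in_field(citation_counts: list[int]) -> list[float]:
    n = len(citation_counts)
    return [100.0 * sum(1 for other in citation_counts if other > c) / n
            for c in citation_counts]

cites = [1104, 250, 40, 12, 3, 0, 0]      # hypothetical set of papers
print(percentile_in_field(cites))         # most cited paper -> 0.0
```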
20
No All Purpose Indicator
This is a list of a number of
different purposes a university
might have for evaluating its
research performance.
Each purpose calls for
particular kinds of information.
Identify the question the results
will help to answer and collect
the data accordingly
21
InCites Access
•http://incites.isiknowledge.com
•Enter username and password
or
•IP Authentication
22
InCites Start Page
These are the modules.
Click on ‘Get Started’ to
open a module
23
Objective: Navigate the principal modules:
1. Research Performance Profiles
• RPP is custom built for each institution
– Article level statistics
– Aggregations as a whole dataset or create custom
subsets
Create a custom
report to analyse a
subset of papers
Run a preset report
on the whole dataset
24
Executive Summary- an overall synopsis
•107,781 source papers
•1979-2011 timespan
•949,293 citing papers
•Green bar = papers published
per year, scale on left side
•Blue bar = citations received to
papers published in that year,
scale on right side
•Tables to highlight
frequently occurring
authors, subject areas
and most cited authors
25
Source Article Listing-Paper level metrics
Article citation data and
normalised metrics
Article bibliographic
information
Click on article title to
navigate to the record in
Web of Science
Order the papers by the
metrics available in drop
down menu
•Times Cited
•Percentile in Field
•2nd Generation Citations
26
Source Article Listing Key Metrics- for
individual paper evaluation
• Times Cited
– Measures: total cites to the paper
– Identifies: most cited papers
• Second Generation Cites
– Measures: total cites to the citing papers
– Identifies: long-term impact of a paper
• Journal Expected Citations
– Measures: average Times Cited count for papers from the same journal, publication year and document type
– Identifies: papers which perform above or below what is expected compared to similar papers from the same journal and period
• Category Expected Citations
– Measures: average Times Cited count for papers from the same category, publication year and document type
– Identifies: papers which perform above or below what is expected compared to similar papers in the same subject category from the same period
• Percentile in Subject Area
– Measures: the percentile a paper is assigned to, with papers from the same subject category/year/document type ordered most cited (0%) to least cited (100%)
– Identifies: papers which perform the highest or lowest in their field based on the paper's citation count
• Journal Impact Factor
– Measures: average cites in 2010 to papers published in the previous 2 years
– Identifies: journals which have high or low impact in 2011
27
Summary Metrics- a dashboard of
performance indicators
Citation data and
normalised metrics which
give an overview of the
overall performance of
the papers in the data set
Percentile Graph
For each percentile range, the “expected” percentage of papers (articles,
reviews & notes) in each would be equal to that same percentile,
meaning…
We would expect 5% of this institution’s papers to rank in the 5th
percentile.
However, 6.79% of this institution’s papers rank in the 5th percentile.
6.79% - 5% = 1.79%
Therefore, the number of papers this institution has placed in the top
5% of all papers published exceeds what is expected by 1.79%.
This 1.79% is what is presented on the graph, in green because it
exceeds the expected value. Below-expected values would be presented in red.
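A minimal sketch of the calculation behind the graph, assuming each paper's Percentile in Field is already known (names and thresholds are illustrative):

```python
# Observed share of papers at or below each percentile threshold, minus the
# expected share (the threshold itself). Positive values plot green (better than
# expected); negative values plot red.

def percentile_graph_values(paper_percentiles: list[float],
                            thresholds=(1, 5, 10, 25, 50)) -> dict[float, float]:
    n = len(paper_percentiles)
    # e.g. if 6.79% of papers fall in the top 5%, the value at threshold 5 is 1.79
    return {t: 100.0 * sum(1 for p in paper_percentiles if p <= t) / n - t
            for t in thresholds}
```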
28
Summary Metrics Key Indicators (for an
author, institution, department..)
• % Cited to % Uncited
– Measures: % of papers in the dataset that have received at least one cite
– Identifies: the amount of research in the dataset with no citation impact
• Mean Percentile
– Measures: average percentile for the set of papers in the dataset. A percentile is assigned to a paper within a set of papers from the same subject category/year/document type, ordered most cited (0%) to least cited (100%)
– Identifies: the average ranking of papers in the dataset; how well the papers perform compared to papers from the same category/year/document type
• Average cites per document
– Measures: efficiency (or average impact) of the papers in the dataset
– Identifies: datasets with high/low average impact (use when making comparisons)
• Mean Journal Actual/Expected Citations
– Measures: average ratio for papers in the dataset. The ratio is the relationship between the actual citations to each paper and what is expected for papers in the same journal, publication year and document type
– Identifies: papers that perform above (>1) or below (<1) the expected journal citation count
• Mean Category Actual/Expected Citations
– Measures: average ratio for papers in the dataset. The ratio is the relationship between the actual citations to each paper and what is expected for papers in the same category, publication year and document type
– Identifies: papers that perform above (>1) or below (<1) the expected category citation count
• Percentage articles above/below what is expected
– Measures: 1% of papers are expected to be in the top 1% percentile. A green bar indicates by what percentage the papers are performing better than expected; a red bar indicates the percentage by which the papers are performing lower than expected at a given percentile range
– Identifies: how well the papers in the dataset are performing at the specific percentile ranges (1%, 5%, 10%, 50%)
29
Funding Agency Listing
Click on the WOS document
column to view the papers
funded by the agency
Order the Funding Agencies
by the indicators in the drop
down menu
30
Article Type Listing
Use the Article Type Listing to
examine the weighting of each
document type in the dataset and
differences in performance/
impact between the document
types
31
Objective: Perform analysis of
authors/collaborations/subject areas using
citation data and normalised metrics
32
Author Ranking Report
Order authors using the citation and
normalised metrics in the menu
Click on any data
value to view the
Author Profile
Report
•It may be necessary to establish
thresholds to focus on authors who
achieve a minimum parameter such as:
Papers published
Citations received
•Create an ‘Author Ranking Report’ in
Custom Reports and establish the
thresholds required.
33
Author Profile- Author dataset
A profile of an author’s
performance including:
•Collaborations
(external and internal)
•Subject focus
•Publication activity
•Citation Impact
34
Author Profile- Address dataset
A profile of an author’s
performance including:
•Collaborations
•Subject focus
•Publication activity
•Citation Impact
35
Author Ranking Report
36
Author Ranking Report for Author Dataset
•Full author names
•Only authors who have been
identified by the customer appear
in this report
•Removes contamination from
co-authors from other institutions
as viewed in an Address Dataset
37
Author Ranking Key Metrics
• Times Cited
– Measures: total cites to an author's papers
– Identifies: authors with the highest/lowest total cites to their papers
• WOS documents
– Measures: total number of papers by an author in the dataset
– Identifies: authors with the highest/lowest number of publications
• Average cites per document
– Measures: efficiency (or average impact) of an author's papers
– Identifies: authors with the highest/lowest average impact
• h-index
– Measures: an author's research performance. Publications are ranked in descending order by times cited; the value of h is equal to the number of papers (N) in the list that have N or more citations
– Identifies: authors with the highest impact and quantity of publications in a single indicator
• Journal Actual/Expected Citations
– Measures: average ratio for an author's papers. The ratio is the relationship between the actual citations to each paper and what is expected for papers in the same journal, publication year and document type
– Identifies: authors whose papers perform above (>1) or below (<1) what is expected in their respective journals. Useful when comparing authors in different fields or with different career lengths
• Category Actual/Expected Citations
– Measures: average ratio for an author's papers. The ratio is the relationship between the actual citations to each paper and what is expected for papers in the same category, publication year and document type
– Identifies: authors whose papers perform above (>1) or below (<1) what is expected in their respective subject categories. Useful when comparing authors in different fields or with different career lengths
• Average percentile
– Measures: average percentile for the set of an author's papers. A percentile is assigned to a paper within a set of papers from the same subject category/year/document type, ordered most cited (0%) to least cited (100%)
– Identifies: authors whose papers are performing at the top or bottom of their respective fields
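A minimal sketch of the h-index calculation described above (illustrative only; InCites computes this for you):

```python
# h-index: rank papers by citations in descending order; h is the largest rank N
# such that the paper at rank N has at least N citations.

def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 17, 9, 6, 3, 1]))  # -> 4 (four papers have at least 4 citations each)
```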
38
Time Series and Trend Report
Total citations received to
papers published in an
individual year.
E.g. Papers published in 1981
have received 23,789 citations.
Raw data in table below
Papers published per year.
1981= 1833 documents.
Raw data in table below
Average citations to papers
published in an individual year.
Papers published in 1981 have
been cited an average of 12.98
times.
Raw data in table below.
Use this indicator to identify
the year/s in which the research
had the highest average impact.
39
Collaborating Institutions Report
•Order the collaborations using the indicators
in the menu.
•The Collaborating Institutions report is
important not only for identifying the most
frequent collaborating institutions, but also
for identifying the collaborations producing
the most influential research. In practical
terms, one can identify the collaborations
that produce the most return on investment.
•Sorting by Category Actual/Expected
Cites is an easy way to identify this.
•Customise this report to focus on
collaborations that meet a minimum
threshold.
40
Collaborating Countries Report
•Order the country level
collaborations using the indicators
in the menu.
•Customise this report to focus on
collaborations that meet a minimum
threshold.
41
Collaboration Reports Key Metrics
• Times Cited
– Measures: total cites to the set of collaboration papers
– Identifies: institutions/countries with which the research has the most impact (cites)
• WOS documents
– Measures: total number of papers published in collaboration with an institution/country
– Identifies: institutions/countries with which your researchers collaborate the most
• Average cites per document
– Measures: efficiency (or average impact) of the papers
– Identifies: institutions/countries with which the research has the highest/lowest average impact
• h-index
– Measures: performance of a set of papers. Publications are ranked in descending order by times cited; the value of h is equal to the number of papers (N) in the list that have N or more citations
– Identifies: institutions/countries with which the collaboration has the highest impact and quantity of publications as measured in this single-number indicator
• Journal Actual/Expected Citations
– Measures: average ratio for the collaboration papers. The ratio is the relationship between the actual citations to each paper and what is expected for papers in the same journal, publication year and document type
– Identifies: collaborations with an institution or country in which the papers perform above or below what is expected compared to similar papers in their respective journals; collaborations with the best return on investment
• Category Actual/Expected Citations
– Measures: average ratio for the collaboration papers. The ratio is the relationship between the actual citations to each paper and what is expected for papers in the same category, publication year and document type
– Identifies: collaborations with an institution or country in which the papers perform above or below what is expected in their respective subject categories
• Average percentile
– Measures: average percentile for the set of collaboration papers. A percentile is assigned to a paper within a set of papers from the same subject category/year/document type, ordered most cited (0%) to least cited (100%)
– Identifies: collaborations in which the papers on average rank high (0%) or low (100%) with regard to their total cites in the respective fields the papers belong to
42
Subject Area Ranking Report
•Order the subject areas using the
indicators in the menu.
•Use this report to determine the
intensity of publication output for
each subject area and compare the
performance of papers across
disciplines.
43
Journal Ranking Report
•Order the journals using the indicators
in the menu.
•Use this report to identify the journals
in which the source papers are
published and compare the
performance of papers in these
journals using the standard and
normalised metrics.
44
Impact and Citation Ranking Reports
•949,293 Citing Papers in dataset
Examine the citing papers to
determine:
 Who is influenced (authors,
institutions)
Where is the influence
(countries)
What is influenced (fields,
journals and article type)
45
Objective: Create Custom Reports
1. Specify a
report type from
the menu
3. Set the
time period
2. Select the
metrics to be
included in the
report
4. Use the
delimiters to
create a custom
dataset
5. You can preview
the papers that match
the parameters
specified, run the
report or save the
selections
46
Create Custom Reports- Preview Documents
Save your
Refined
Collection
to ‘Folders’
Use the Refine
Document Collection to
refine your custom
dataset
47
Create Ego Networks-Custom Reports
• Author Ego Networks
• Institution Ego Networks
•To create an Ego
Network, go to Custom
Reports and open the
Report Type menu
•Choose from author and
institution ego networks
•Customise your network
even further by using the
delimiters
48
Author Ego Network- Customise Network
Customise
node size
indicator
Search within
author names
Customise
indicator for
intensity
colour
49
Institution Ego Network
Customise
node size
indicator
Customise
indicator for
intensity
colour
50
Objective: Save and share reports with
colleagues
51
Folders
• My Saved Reports
– Save reports you generate
• My Saved Custom Report Selections
– Save selections for the report you frequently run
• My Saved Document Collections
– Save collections (subset) of the documents
• Shared Reports
• Shared Custom Report Selections
• Shared Document Collections
52
Save Selections
Provide a
title for your
saved
selection
Save your
selection to
‘My Folders’
53
Open a Custom Report
Click on the
title of report to
open it
Create a folder,
share the
report or delete
54
Shared Reports
Click on the
title of any
report in the
‘Shared
Reports’ folder
to open it
55
Create PDFs
You can print,
export to
Excel, or
create a PDF
of any report
56
Objective: Navigate the principal modules
2. Global Comparisons
• Institutional Comparisons
– Compare output and impact for institutions/corporate entities
• National Comparisons
– Compare output and impact for countries
• Updated on an annual basis. 2011 is the current file
• WOS documents include articles, reviews and notes
only
57
Institutional Comparisons
• Compare the overall impact and productivity of a single institution for specified period.
Include World for benchmarking against global averages.
 Select Comparison Tab
 Select country grouping
 Select institution and World
 Select Time period
58
Institutional Profiles- a single institution
 101,902 papers have ‘Imperial Coll London’ in the address
 39,903 papers (39.2%) have a non-UK institution in the address
 The 101,902 papers have received a total of 2,753,153 citations
 Imperial publications written with an international collaboration received 43.3% of all citations to
Imperial publications
 Average cites per paper is 27.02 (2,753,153/101,902)
 Imperial papers have an average cites per paper 1.65 times the world average (27.02/16.38)
 Imperial’s percentage of documents cited relative to world is 89.92%. For the international collaboration
papers this is slightly lower, at 89.59%
 Imperial’s % documents cited is above the world’s % cited (89.92 vs 80.00)
 0.44% of all Web of Science papers have ‘Imperial Coll London’ in the address
 Imperial’s Aggregate Performance Indicator is 1.51, indicating that Imperial’s average cites per paper is
above the expected cites per paper for papers from the same period, subject area and document type.
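A minimal sketch showing how the derived figures above follow from the raw counts (a hypothetical illustration, not an InCites function):

```python
# Derived indicators from the raw counts on this slide.
papers = 101_902
total_cites = 2_753_153
world_cites_per_paper = 16.38

cites_per_paper = total_cites / papers                               # ~27.02
impact_relative_to_world = cites_per_paper / world_cites_per_paper   # ~1.65
print(round(cites_per_paper, 2), round(impact_relative_to_world, 2))
```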
59
Institutional Comparisons- multiple institutions
compared in a field of interest
• Compare the overall performance of selected institutions in a particular field. Include
World to view subject area baselines.
 Select Comparison Tab
 Select Country groupings
 Select institutions of interest (include World for global averages)
 Select subject (WOS, ESI, RAE 2008)
 Select Time Period
60
Institutional Comparisons- multiple institutions
compared in a field of interest
•Generate graphs for
each indicator in the
table
•Use the ‘Subject
Metrics’ to inform on
how papers from each
institution perform in
that subject when
compared to what is
expected in that
subject area.
61
Institutional Comparisons
• Compare trended performance of selected institutions in a field
 Select Comparison tab
 Select country grouping
 Select institutions of interest
 Select field (WOS,ESI, RAE 2008)
 Select in 5 year groupings (or use the time period 1981-2010 to select a preferred
time period)
•Trended graphs are
useful for tracking
changes over time,
illustrating changes
that may have arisen
from policy decisions,
hiring of staff,
investment etc..
62
Institutional Comparisons
• Compare the overall performance of a single institution in multiple subject areas
 Select Institution Tab
 Select country grouping
 Select preferred institution
 Select fields (ESI works best)
 Select time period (overall or trended)
63
Institutional Comparisons
•Use the % in institution
graph to examine the
areas of research with a
strong focus at that
institution
64
Institutional Comparisons
• Compare the trended/overall performance of All institutions in a single field
 Select ‘Subject Area’ tab
 Select for example, UK or other UK grouping (Russell Group etc..)
 Select All United Kingdom or All for other grouping
 Select time period (overall or trended)
65
Institutional Comparisons
•Use the ‘impact relative
to field’ graph to identify
institutions that have an
impact above what is
expected in that field
(greater than 1).
66
Global Comparisons
• Examine the overall performance of a single country during the period 1981-2010. Add
World to view global averages.
 Select Compare Tab
 Select Country grouping
 Select country of interest (add World for global averages)
 Select All Years Cumulative
7.37% of all Web of Science (world) documents have England in the
Address field.
The average impact of documents from England is 20.11. This is 27%
above the world’s average impact (15.84).
The % cited for England is 84.69. This is 6% greater than the world’s % cited,
indicating that documents with at least one occurrence of
England in the address received more citations per document than the world
average.
67
Global Comparisons
• Compare the overall performance of multiple countries for the period 1981-2010
 Select Comparison Tab
 Select country grouping
 Select countries of interest
 Select time period Overall (Cumulative)
68
Global Comparisons
•Use the ‘Impact
Relative to World’
indicator to identify
countries that have a
higher ratio of cites per
document than the
world average cites per
document (red line in
graph)
69
Global Comparisons
• Compare the trended performance of multiple countries in a Subject Area for a preferred period of time
 Select Subject Area Tab
 Select country grouping
 Select ‘All’ grouping
 Select a field (WOS, ESI, OECD)
 Select in 5 year groupings (or any other preferred time period)
70
Global Comparisons
•Use the % documents in
country to track changes
in a field of research
over time between
countries.
71
Global Comparisons
• Compare the overall performance of selected country groupings for the time period 1981-2010
 Select Comparison Tab
 Select country groupings
 Select All Years (Cumulative)
72
Global Comparisons
•Use the ‘%
documents in world’
graph to examine each
grouping’s share of the
world’s total research
output
73
Global Comparisons
• Compare the trended performance of selected country groupings in a subject area for a preferred period of time
 Select Comparison Tab
 Select country groupings
 Select Field (WOS, ESI, OECD)
 Select in 5 year groupings or any preferred time period
74
Global Comparisons
•Use the ‘%
documents in Subject
Area’ indicator to
examine changes in
each territory’s share
of papers in an area of
research over time
75
Objective: Navigate the principal modules
3. Institutional Profiles
76
Institutional Profiles can answer questions such as:
• Where is my institution viewed most favorably in terms of the
reputation of research produced or teaching practices?
• How does our research output compare with the results it yields –
both in terms of impact and income?
• What is the optimal balance between capacity and performance?
• Is my institution's output being appropriately reflected in our
perceived reputation?
• How does my institution's research perform against citation impact?
• How does our research income compare to others and are we
seeing a return on investment?
77
Institutional Profiles
Select a visualization tool
78
Create an Institutional Profile
•Create a profile from over 500 research
institutions from 47 countries
•Use country groupings, the index or perform
a search
79
Institutional Profile
•Change the indicators in the
radar graph using the indicator
groups listed
•The table provides the raw
value and the score (see below)
for each indicator included in
the group selected.
Cumulative probability is a statistical method of representing a
single value within a normally distributed set of data.
For example, if the value of research income for a given
institution is $443,500,650 and its cumulative probability score is
90, then there is a 90% chance that the research income of a
randomly selected institution will be less than $443,500,650.
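A minimal sketch of a cumulative-probability score computed from peer values under the normal-distribution assumption described above (values and names are hypothetical, not the Institutional Profiles implementation):

```python
# Cumulative-probability score: the probability that a randomly selected
# institution's value falls below the given value, assuming the indicator is
# normally distributed across institutions.
from statistics import NormalDist, mean, stdev

def cumulative_probability_score(value: float, peer_values: list[float]) -> float:
    dist = NormalDist(mean(peer_values), stdev(peer_values))
    return round(100 * dist.cdf(value))   # score out of 100

research_income = [50e6, 120e6, 90e6, 300e6, 443.5e6, 75e6, 200e6]  # hypothetical peers
print(cumulative_probability_score(443.5e6, research_income))
```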
80
InCites – Institutional Profiles: View an Institutional Profile
The Research Footprint facilitates the
visualization of levels of performance for
the various indicators.
One simply needs to align the
range of scores visually with the graphic.
Reputation -- The value is the percentage of the vote
that went to this university, i.e. what percentage of all
the responses in the reputation survey suggested
Caltech as one of the top institutions, based on an
invitation-only survey of more than 13,000
academics around the world.
Score = 99; therefore a randomly selected university
will, 99% of the time, fall below the Caltech value for
Reputation.
InCites – Institutional Profiles: View an Institutional Profile
All institutions represented in Institutional
Profiles have supplied up-to-date facts and
statistics about:
 the size of their research and academic staffs
 their levels of funding
 the number of undergraduate and graduate
degrees awarded
82
Create a Research Footprint
1. Select institutions
from country groupings
or use the search
feature. You can select
up to 5 institutions
2a. Select the
indicators to include in
the graph. Choose from
indicator groups or
individual indicators
2b. Select subject
areas
3. Select a time period
(individual years)
Hovering the mouse over
an Indicator Group will
provide detail of individual
indicators.
Options on this page enable you to create radar
graphs that:
•illustrate the high and low achievers in a small group
of institutions
•identify the strong and weak subject areas of an
institution
•support focused analyses of research performance
based on a customized group of indicators
83
Research Footprint
The Research Footprint is a radar graph that illustrates
the relative strength or weakness of performance
indicators. It is accompanied by a table containing two
measures of size, strength or activity for each indicator
84
Create a Trended Graph
1. Select institutions
from country groupings
or use the search
feature. You can select
up to 6 institutions
2a. Select the
indicators to include in
the graph. Choose
from individual
indicators
2b. Select subject
areas
3. Select a time period (individual
years or a time span from 2004-2009)
85
Trended Graph
•A trend graph illustrates the activity or
progress of one indicator in one subject
area.
•An upward trend with a modest slope
indicates improvement over time.
•A downward trend over a period of more
than two years shows decline.
•A perfectly horizontal line might represent
steady activity, or it might signify stasis,
depending on the indicator selected.
•Trend graphs are especially useful for
comparing the performance of institutions
of comparable size and mission.
86
Create a Scatter Plot Graph
Creation of a Scatter Plot.
1. Select one or more
institutions or country averages.
Each institution you select will be
represented by a large green
circle on the scatter plot.
All other institutions will be
represented by small circles.
2. Select indicators
The selections you make here
determine the values of the
coordinates that form each
datapoint. They also determine
the scale of the x and y axes.
3. Select a Time Period
Select one of the years listed.
The values that form the
datapoints will derive from data for
the selected year.
87
Scatter Plot Graph
•Each circle or datapoint on a scatter
plot represents the relationship of two
indicators for one institution or one
country average.
•The large green circles represent the
relationship of two indicators for the
institutions you selected.
•The other circles represent the
relationship of two indicators for all
other institutions in the dataset.
88
Objective: Understand the use of citation data for the 2014
Research Excellence Framework and how InCites may be used
to inform on submissions
89
Research Excellence Framework 2014
• Purpose: a new system for assessing the quality of research in higher education institutions in
the UK
• Informs UK funding bodies’ allocation of research grants (£1.76 billion for research)
• Conducted by HEFCE, SFC, HEFCW & DEL
• 36 Units of Assessment
• Process of expert review by expert panels
• Assessment criteria: 3 elements
– Output: assesses the quality of research outputs in terms of their ‘originality, significance and
rigour’ with reference to international research quality standards. Weighting 65%
– Impact: significant additional recognition will be given where researchers have built on
excellent research to deliver demonstrable benefits to the economy, society, public
policy, culture or quality of life. Weighting 20%
– Environment: assesses the research environment in terms of its ‘vitality and sustainability’,
including its contribution to the vitality and sustainability of the wider discipline or research
base. Weighting 15%
• Research outputs: details of up to FOUR research outputs produced by each member of
submitted staff during the publication period (1 January 2008 to 31 December 2013)
90
REF Units of Assessment 2014
The following sub-panels will make use of citation data to inform their assessment of the
quality of research outputs:
• Sub-panel 1: Clinical Medicine
• Sub-panel 2: Public Health, Health Services and Primary Care
• Sub-panel 3: Allied Health Professions, Dentistry, Nursing and Pharmacy
• Sub-panel 4: Psychology, Psychiatry and Neuroscience
• Sub-panel 5: Biological Sciences
• Sub-panel 6: Agriculture, Veterinary, and Food Science
• Sub panel 7: Earth Systems and Environmental Sciences
• Sub-panel 8: Chemistry
• Sub-panel 9: Physics
• Sub-panel 11: Computer Science and Informatics
• Sub-panel 18: Economics and Econometrics
The REF team will also provide contextual data to the sub-panels to assist them in interpreting
citation counts, given that citations depend partly on the field of research and on a
publication’s age.
91
Contextual data
• The REF team will provide the following information for each publication year in the
period 2008 to 2012, and for each relevant ASJC code (Scopus journal classification):
• The average (mean) number of times that journal articles and conference proceedings
published worldwide in that year, in that ASJC code, were cited
• The number of times that journal articles and conference proceedings in that ASJC code
would need to be cited to fall in the following percentage ranges of papers published worldwide in that
year (a sketch of this threshold calculation follows the list):
– top 1%
– top 5%
– top 10%
– top 25%
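A minimal sketch of how such thresholds can be derived from a list of worldwide citation counts for one publication year and ASJC code (hypothetical data; the REF team's exact method is not specified here):

```python
# Citation count needed to fall within the top X% of papers published worldwide
# in a given year and subject code, derived from the full list of citation counts.

def top_percent_thresholds(citation_counts: list[int],
                           bands=(1, 5, 10, 25)) -> dict[int, int]:
    ranked = sorted(citation_counts, reverse=True)
    thresholds = {}
    for pct in bands:
        cutoff_rank = max(1, int(len(ranked) * pct / 100))   # papers allowed in the band
        thresholds[pct] = ranked[cutoff_rank - 1]            # cites of the last paper in the band
    return thresholds

# Hypothetical citation counts for one publication year / ASJC code:
counts = [0, 0, 1, 2, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584]
print(top_percent_thresholds(counts))
```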
92
Using InCites in REF preparation
The most useful report to inform on paper level metrics is the Source Article Listing.
Use the following metrics:
• Times Cited Count for individual papers
• Percentile in Subject Area to inform on papers which are in the top 1%, top 5%, top
10% and top 25% of their field
• Category Expected Citations to inform on the Average (mean) cites per paper for a
given category, year and document type.
• Category Actual/ Category Expected to inform on papers that have an impact that is
above the expected impact
Other useful reports in InCites
• Use Subject Area Ranking to inform on which UOA to submit to
• Use Collaboration Reports to inform on papers that are a result of an international
collaboration
• Use the Summary Metrics (customised for a researcher, department, subject area or
the organisation overall) to inform on the number of papers in the top 1%, top 5%, top
10% and top 25% of their field.
93
Thank You!
[email protected]
http://researchanalytics.thomsonreuters.com/cu/inc-support/