Transcript Slide 1

A limit on the diffuse flux of muon
neutrinos using data from 2000 – 2003
Light at the end
of the tunnel
Jessica Hodges
University of Wisconsin – Madison
Baton Rouge Collaboration Meeting
April 13, 2006
Search for a Diffuse Flux of Neutrinos (TeV – PeV)
2000 – 2003: 807 days of detector livetime
(Figure: downgoing muons and neutrinos in the detector; "Signal" E^-2 and atmospheric E^-3.7 spectra indicated.)
Monte Carlo simulation:
Atmospheric Muons: muons created when cosmic rays hit the atmosphere, including simulation of simultaneous downgoing muons.
Atmospheric Neutrinos: neutrinos created when cosmic rays hit the atmosphere. Have an E^-3.7 energy spectrum.
Signal Neutrinos: extraterrestrial neutrinos with an E^-2 energy spectrum.
<1> Remove downgoing events with a zenith angle cut and by requiring
high quality event observables.
<2> Separate atmospheric neutrinos from signal by an energy cut.
After Event Quality Cuts
(Figure: the zenith angle distribution of high quality events before an energy cut is applied, shown on linear and log scales; upgoing and horizontal events and the region to keep are indicated.)
The signal test flux is E^2 Φ = 10^-6 GeV cm^-2 s^-1 sr^-1.
After event quality cuts, this is the final sample of upgoing events for 4 years.
Key Elements for Setting a Limit:
1) number of actual data events observed
2) number of background events predicted
3) number of signal events predicted given the signal strength that you are testing
Signal hypothesis: E^2 Φ = 10^-6 GeV cm^-2 s^-1 sr^-1
How to calculate the number of background events in the final sample…
1) Count the number of data events above and below the final energy cut (NChannel >= 100).
2) Count the number of atmospheric neutrinos above and below the final cut.
3) Apply a scale factor to the low energy Monte Carlo events so that the number of events exactly matches the low energy data.
4) Apply this same scale factor to the background Monte Carlo above NChannel = 100. This is the number of background events (b) that goes into the final computation of the limit.
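As a rough illustration, a minimal Python sketch of these four steps, assuming hypothetical arrays of NChannel values and Monte Carlo weights (names like data_nch and mc_weights are illustrative, not from the analysis code):

import numpy as np

def background_above_cut(data_nch, mc_nch, mc_weights, nch_cut=100):
    """Normalize the atmospheric neutrino MC to data below the energy cut,
    then apply the same scale factor above it to predict the background b."""
    data_nch = np.asarray(data_nch)
    mc_nch = np.asarray(mc_nch)
    mc_weights = np.asarray(mc_weights)
    n_data_low = np.sum(data_nch < nch_cut)             # step 1: data below the cut
    mc_low = np.sum(mc_weights[mc_nch < nch_cut])        # step 2: MC below the cut
    mc_high = np.sum(mc_weights[mc_nch >= nch_cut])      #         MC above the cut
    scale = n_data_low / mc_low                          # step 3: low-energy scale factor
    return scale * mc_high                               # step 4: predicted background b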
This sounds very easy, but what does it mean to “normalize the
Monte Carlo”?
Scaling the low energy atmospheric neutrino Monte Carlo prediction to the low energy data can mean either…
1) You are "correcting" the theoretical flux that went into the Monte Carlo prediction because you believe that the theorists incorrectly predicted the atmospheric neutrino flux.
2) You believe the atmospheric neutrino theory to be correct and you are "correcting" the detector acceptance or efficiency. (This could be many factors: ice, muon propagation, OM sensitivity….) This was done in the 1997 B10 analysis.
Two Interpretations of What it Means to Scale the Low Energy Monte Carlo Events to the Low Energy Data
(Figure: atmospheric neutrino MC, data, and signal distributions under each interpretation.)
1) Apply the scale factor to correct the theory that predicts the atmospheric neutrino flux. Do NOT apply the correction to the signal since it was meant for only atmospheric neutrinos.
2) Apply the scale factor to correct for detector efficiency. DO apply the correction factor to the signal since we are correcting the entire detector efficiency or acceptance.
We chose to attribute the scale factor to the uncertainties in the
atmospheric neutrino flux.
Hence, I will NOT apply the scale factor to the signal Monte Carlo.
This means that any uncertainties in our detection of the predicted flux
must be accounted for separately.
Key Elements for Setting a Limit:
1) number of actual data events observed
That’s easy! 6 events observed
2) number of background events predicted
3) number of signal events predicted given the signal strength that you are
testing
2 & 3 are based on Monte Carlo simulations that contain many uncertain
inputs!
We must consider how systematic errors would change the amount of
signal or background in the final sample.
First, let’s consider the
uncertainties in the
background prediction.
Every model or uncertainty that is applied to the
background spectrum affects the number of
events that will survive to the final event sample.
In the past, most AMANDA analyses have used the Lipari model for
atmospheric neutrinos.
Instead, I will use the more up-to-date calculations done by two different
groups.
Barr, Gaisser, Lipari, Robbins, Stanev 2004
BARTOL
Honda, Kajita, Kasahara, Midorikawa 2004
HONDA
Instead of 1 background model (Lipari), there are now 2 background predictions: Bartol and Honda.
The atmospheric neutrino flux models (Bartol and Honda) are
affected by:
1) uncertainties in the cosmic ray spectrum
2) uncertainties about hadronic interactions
(Figure: Cosmic Ray Proton Flux, taken from the HKKM 2004 paper. Short dashed green line = old HONDA; pink solid line = HONDA 2004; dashed green line = BARTOL 2004.)
Uncertainties in the cosmic ray spectrum and hadronic
interactions were estimated as a function of energy.
(Figure: percentage uncertainty in the atmospheric neutrino flux as a function of Log10(Eν).)
Every background Monte Carlo event can be weighted with this function by
using the event’s true energy.
If every background event is weighted UP by its maximum error,
then you can get a new, higher prediction of the background.
Every event can also be weighted DOWN by its maximum amount
of error. This gives a new, minimum prediction.
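For illustration, a minimal sketch of this reweighting, assuming a hypothetical function flux_uncertainty(log10_e) that returns the fractional flux error at a given true neutrino energy:

import numpy as np

def shifted_weights(weights, true_energy, flux_uncertainty, direction=+1):
    """Scale each background MC event weight by its maximum energy-dependent
    error: direction = +1 gives the maximum prediction, -1 the minimum."""
    frac = flux_uncertainty(np.log10(true_energy))   # fractional error, e.g. 0.15 for 15%
    return np.asarray(weights) * (1.0 + direction * frac)

# w_max = shifted_weights(w, e_true, flux_uncertainty, +1)   # higher prediction
# w_min = shifted_weights(w, e_true, flux_uncertainty, -1)   # minimum prediction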
There are now 6 background predictions: Bartol Max, Bartol Central, Bartol Min, Honda Max, Honda Central, Honda Min.
Our Monte Carlo simulation is NOT perfect.
Reasons for disagreement between data and Monte Carlo:
Ice properties, muon propagation, OM sensitivity, other unknowns???
Consider what happens when you cut on a distribution that does not
show perfect agreement.
(Figure: BLUE is the true distribution (data) and orange is the Monte Carlo, with the cut boundary dividing the "cut" and "keep" regions.)
A cut on this distribution would yield TOO MANY Monte Carlo events compared to the truth (data).
Fortunately, I have a good sample of downgoing muons
and minimum bias data that can be used to study the
uncertainties in my cuts!
(Figure: downgoing data compared to atmospheric μ Monte Carlo.)
I have performed an inverted analysis to select the
highest quality downgoing events.
All cuts and reconstructions are the same as the upgoing analysis – just
turned upside-down.
Examining the Cuts with Downgoing Muons
See disagreement at the final cut level.
Go back to the level before event quality cuts were made. Estimate a percentage shift for the Monte Carlo in each parameter that will provide better data – MC agreement.
Apply the shifted Monte Carlo cuts at the final cut level.
(Figure: Median Resolution (degrees).)
Examining the Cuts with Downgoing Muons (same procedure as above)
(Figure: Ndirc, the number of direct hits.)
“Shift” the downgoing Monte Carlo in the parameters that show
disagreement. This is tricky because you must find a way to shift each
parameter into agreement without creating disagreement in other parameters.
Estimate a correction to the Monte Carlo for each parameter before any cuts
are applied.
Apply all of the shifted cuts at the final cut level and see if all of the
distributions show agreement.
RESULTING SHIFT showing good agreement across all important parameters:
Number of direct hits (ndirc) -> 1.1 * ndirc
Smoothness of hits along track (smootallphit) -> 1.08 * smootallphit
Median Resolution -> 1.05 * med_res
Likelihood Ratio (up to down) -> 1.01 * L.R.
I will use the modified cuts from the downgoing analysis to apply an
additional uncertainty on my upgoing events (both signal and background).
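As an illustration only, applying the shifted cuts to a Monte Carlo event could look like the sketch below; the field names are hypothetical and the zenith-dependent likelihood-ratio threshold is represented by a placeholder value per event:

def passes_shifted_cuts(ev):
    """Event-quality cuts with the MC observables scaled by the shifts
    estimated from the downgoing study (1.10, 1.08, 1.05, 1.01)."""
    return (1.10 * ev["ndirc"] > 13 and
            abs(1.08 * ev["smootallphit"]) < 0.250 and
            1.05 * ev["med_res"] < 4.0 and
            1.01 * ev["likelihood_ratio"] > ev["lr_threshold"])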
Each model (Bartol Max, Bartol Central, Bartol Min, Honda Max, Honda Central, Honda Min) can be considered WITH and WITHOUT the Monte Carlo shifted on those 4 parameters.
There are now 12 background predictions.
The signal is likewise considered with the normal and with the shifted Monte Carlo, so there are now 2 signal predictions.
Number of Events in the Final Data Set (Nch >= 100)

Background Model               Signal Model   # bg (Nch>=100)   # sig (Nch>=100)
Bartol Max                     Normal         7.79              68.4
Bartol Central                 Normal         6.87              68.4
Bartol Min                     Normal         5.20              68.4
Honda Max                      Normal         7.11              68.4
Honda Central                  Normal         6.28              68.4
Honda Min                      Normal         4.66              68.4
Bartol Max – Shifted MC        Shifted MC     7.20              65.0
Bartol Central – Shifted MC    Shifted MC     6.41              65.0
Bartol Min – Shifted MC        Shifted MC     4.84              65.0
Honda Max – Shifted MC         Shifted MC     6.64              65.0
Honda Central – Shifted MC     Shifted MC     5.88              65.0
Honda Min – Shifted MC         Shifted MC     4.52              65.0

Average Background = 6.12
Average Signal = 66.7
These are 12 background predictions for the final sample, ranging
from 4.52 to 7.79.
(Figure: red = background from normal cuts, blue = background from shifted MC cuts.)
Another look at the inverted analysis…. does the
detector have a linear response in NChannel?
(Figure: NChannel comparison between the minimum bias data and dCorsika atmospheric muon simulation, on linear and log scales.)
It would be desirable for the ratio of data to atmospheric
muons as a function of NChannel to be flat.
This suggests that we should not use events between 0 and 50 channels hit to perform the upgoing normalization.
A fit to the ratio as a function of NChannel is not exactly flat,
but its slope is very small.
How does this affect the signal and background predictions?
I used the fit as a function of NChannel to scale up the Monte Carlo from the upgoing analysis.
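A minimal sketch of this rescaling, assuming a linear fit to the data/MC ratio versus NChannel (the coefficients slope and intercept are placeholders, not the fitted values from the analysis):

import numpy as np

def rescale_by_nch_fit(weights, nch, slope, intercept):
    """Multiply each upgoing MC event weight by the fitted data/MC ratio
    evaluated at that event's NChannel."""
    ratio = intercept + slope * np.asarray(nch)
    return np.asarray(weights) * ratio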
How the Upgoing Atmospheric Neutrinos Responded….
This only caused a small change in the number of background events
predicted to be in my final sample. The uncertainty due to NChannel
being a non-linear parameter is at most 10%. This is much less than the
other errors I have been considering and I will not worry about it.
How the Upgoing Signal Neutrinos Responded…
The number of signal events predicted above the NChannel cut changed
from 68.4 events to 85.6 events. This is a 25% error.
There may be additional detector effects which
mean that our signal efficiency is not 1.0.
Changing the OM sensitivity seems to have a linear effect
on the NChannel spectrum of the signal. Consider the
uncertainty in the number of signal events predicted in the
final sample from this effect to be 10%.
Adding the 25% uncertainty due to NChannel non-linearity in quadrature with the 10% OM-sensitivity uncertainty gives the uncertainty in our overall detection of the signal:
(10^2 + 25^2)^(1/2) = 27%
Now it’s time to build the confidence belt…..
How to include systematic errors in confidence belt
construction…
Systematic Errors included by the methods laid out in:
Cousins and Highland (Nucl. Instrum. Methods Phys. Res. A, 1992)
Conrad et al. (Phys. Rev. D, 2003)
Hill (Phys. Rev. D, 2003)
P (x | μ, P(ε,b)) = ∫ P (x | ε’μ + b’) P(ε’,b’) dε’ db’
I have found 12 different background predictions, b, and 3
different values for the signal efficiency, ε. By integrating
over these options, I can include systematic errors in my
confidence belt construction.
This can be simplified with a summation and the Poisson formula.
The construction of the Feldman-Cousins
confidence belt relies on the Poisson distribution.
P(x | μ + b) = (μ + b)^x e^-(μ + b) / x!
At every value of the signal strength, μ, you can calculate this probability distribution function P(x | μ + b).
Applying the efficiency error to the PDF: replace (μ + b) with (εμ + b),
where ε is the efficiency uncertainty on the signal: ε = 0.73, 1.00 or 1.27.
Why not put a factor of ε on the background?
1) Linear nch-dependent effects in the detector will be removed by
normalization at low Nch.
2) Non-linear nch-dependent effects were computed to be very small compared
to the other errors in the background that we are considering.
There are now 12 values for the background prediction and 3 values for the signal efficiency.
This makes a total of 36 values of (εμ + b) that you can use to construct the confidence belt.
Here you see the PDF for μ = 0.5.
Three PDFs are averaged into one
(the magenta line).
For my analysis, 36 PDFs are
averaged into one for every
value of μ.
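A minimal sketch of how such an averaged PDF could be built from the Poisson formula above, using the 12 background values and 3 efficiency factors quoted earlier (the function itself is illustrative, not the analysis code):

import numpy as np
from scipy.stats import poisson

backgrounds = [7.79, 6.87, 5.20, 7.11, 6.28, 4.66,
               7.20, 6.41, 4.84, 6.64, 5.88, 4.52]   # 12 background predictions
efficiencies = [0.73, 1.00, 1.27]                    # 3 signal efficiency factors

def averaged_pdf(x, mu):
    """P(x | mu) averaged over the 36 (eps*mu + b) combinations."""
    probs = [poisson.pmf(x, eps * mu + b)
             for b in backgrounds for eps in efficiencies]
    return np.mean(probs, axis=0)

# e.g. the averaged probability of observing 6 events if mu = 0.5:
# p6 = averaged_pdf(6, 0.5)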
My Sensitivity: event upper limit = 5.78
The most probable observation if μ = 0 is 6 events. (This is also the median.)
The sensitivity is the maximum signal strength μ that is consistent with the most probable observation if there is no signal (μ = 0).
My Limit = My Sensitivity: event upper limit = 5.78
6 events were observed.
The upper limit is the maximum signal strength μ that is consistent with 6 events observed.
Limit with no systematic errors: event upper limit = 4.91
This confidence band, without systematic errors, is not as wide.
(The background assumption is the average of Bartol Central + Honda Central.)
E^2 Φ < (E^2 Φ_test) * (event upper limit / n_signal)
E^2 Φ < 10^-6 * (5.78 / 66.7)
E^2 Φ_90%(E) < 8.7 x 10^-8 GeV cm^-2 s^-1 sr^-1
Limit with no systematic errors = 10^-6 * 4.91 / 68.4 = 7.2 x 10^-8 GeV cm^-2 s^-1 sr^-1
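A quick sketch of this conversion, using the numbers quoted above:

test_flux = 1e-6            # E^2 Phi_test in GeV cm^-2 s^-1 sr^-1
event_upper_limit = 5.78    # with systematic errors included
n_signal = 66.7             # average signal events predicted at the test flux

flux_limit = test_flux * event_upper_limit / n_signal
print(flux_limit)           # ~8.7e-8 GeV cm^-2 s^-1 sr^-1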
90% signal region
The limit is valid over the region that contains 90% of the signal in the final data sample (Nch >= 100).
90% region = 10^4.2 GeV to 10^6.4 GeV = 15.8 TeV to 2.51 PeV
(Figure: limit from this analysis shown over the 90% signal region.)
Testing signal models other than E^-2

MODEL              Bgd predicted in   Signal predicted in   Nch >= X   Sensitivity
                   final sample       final sample
SDSS               1.23               1.8                   139        1.8 Φ_sdss
MPR                1.23               1.4                   139        2.3 Φ_mpr
CharmD             27.52              26.2                  71         0.33 Φ_charmd
Naumov RQPM        27.52              4.75                  71         1.8 Φ_naumov
Martin GBW         94.72              0.71                  55         29.0 Φ_martin
Waxman Starburst   24.5               0.60                  73         15.8 Φ_starburst
These models have not yet been unblinded, but the sensitivity and suggested
NChannel cut are listed.
Thanks to Gary Hill – who says he "always has time for another question"
Thanks to Teresa Montaruli for help with the neutrino flux models and their
uncertainties.
Thanks to the “diffuse” discussion group – Gary, Teresa, Chris, Paolo,
Albrecht and Francis
Thanks to John and Jim for your advice in the office everyday…
Ndirc > 13
Ldirb > 170
abs(Smootallphit) < 0.250
Median resolution < 4.0
No Cogz cut
Likelihood ratio cut was zenith dependent:
Jkchi(down - Bayesian) – Jkchi(up - Pandel) > 38.2*cos(Zenith[7]/57.29) + 27.506
Zenith > 100°
(Figure: zenith angles ~100°, ~123°, ~180° indicated.)
FINAL CUTS:
Ndirc (Pandel) > 13
Ldirb (Pandel) > 170
abs(Smootallphit (Pandel)) < 0.250
Zenith (Pandel) > 100°
Median_resolution (P08err1, P08err2) < 4.0°
Jkchi(down - Bayesian) – Jkchi(up - Pandel) > 38.2*cos(Zenith[7]/57.29) + 27.506
Nch >= 100
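As an illustration, the final selection can be written as one boolean test per event; the field names below are hypothetical, while the cut values are the ones listed above:

import math

def passes_final_cuts(ev):
    """Final event selection using the Pandel reconstruction observables."""
    lr_threshold = 38.2 * math.cos(ev["zenith"] / 57.29) + 27.506
    return (ev["ndirc"] > 13 and
            ev["ldirb"] > 170 and
            abs(ev["smootallphit"]) < 0.250 and
            ev["zenith"] > 100.0 and
            ev["median_resolution"] < 4.0 and
            ev["jkchi_down"] - ev["jkchi_up"] > lr_threshold and
            ev["nch"] >= 100)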