From jgl@uhheph.phys.hawaii.edu  Wed Oct 1 11:00:49 1997
Date: Sun, 28 Sep 1997 17:45:06 -1000 (HST)
From: "John G. Learned"
To: atmpd all, US high energy analysis SuperK
Subject: a better dm2 measurement (?)

Dear Oscillation Fans:

### Measuring the Oscillation Parameters

Circulated as an email note to the HE group, 28 Sep. '97.

Below you will find a figure illustrating a new way to "measure" dm2. The value I get for nu_mu<->nu_x is (3.09 +2.66 -1.71)*10^-3 eV^2 at the 1 sigma points, under the assumption of maximal mixing. This is explained below.

In thinking about how to proceed with the oscillations game it seems to me we might break our process down into steps something like the following:

1) validation of the MC
2) demonstration of the cleanliness of the data sample
3) testing for the inadequacy of the null hypothesis
4) testing for acceptable models
5) measuring the appropriate parameters of acceptable model(s)

We have been working on steps 1 through 3. Perhaps the up-down asymmetry tests demonstrate 4 (although other methods would work too). Then, assuming we are satisfied that we have the right model(s) (for which the only candidate right now seems to be nu_mu <-> nu_x), we need the best tests to squeeze out s2th and dm2 values, moving from hypothesis testing to parameter estimation. This may involve the use of different analytical tools, but not necessarily (e.g., devotees of maximum likelihood may claim it does everything).

First, it seems that we can use the plateau region in L/E space as a measure of s2th, decoupled from dm2 (as long as we fit in the plateau region alone). We need to work on this, but the plateau level is close to 1/2 (see the section on threshold cuts below), corresponding to maximal mixing, which is what we assume in the following.
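To spell out why the plateau decouples s2th from dm2: in the standard two-flavor formula, the nu_mu survival probability is P = 1 - sin^2(2theta) * sin^2(1.27 dm2[eV^2] L[km]/E[GeV]). Once the argument 1.27 dm2 L/E spans many cycles, the second factor averages to 1/2, so P plateaus at 1 - sin^2(2theta)/2 independent of dm2, and the plateau level alone fixes s2th (= 1/2 for maximal mixing, s2th = 1). A minimal numerical sketch (illustrative numbers only, not our data):

```python
import numpy as np

def survival_prob(L_over_E, dm2, s2th):
    """Two-flavor nu_mu survival probability.
    L_over_E in km/GeV, dm2 in eV^2, s2th = sin^2(2 theta)."""
    return 1.0 - s2th * np.sin(1.27 * dm2 * L_over_E) ** 2

# Deep in the plateau (many oscillation cycles across the L/E range),
# sin^2 averages to 1/2, so P -> 1 - s2th/2 regardless of dm2:
loe = np.linspace(5e3, 5e4, 100_000)            # illustrative L/E values
plateau = survival_prob(loe, 3.09e-3, 1.0).mean()
# plateau ~ 1 - 1/2 = 0.5 for maximal mixing
```

The same average with a different dm2 (still deep in the plateau) lands at the same 0.5, which is the decoupling claimed above.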

In talking with John Flanagan, we have been thinking that we need some method which puts as many events into as few bins as possible, for maximal statistical power to measure dm2. I will not trouble you with the variations we considered, but in the end we came back to good old R ([nu_mu/nu_e]_data / [nu_mu/nu_e]_MC), though with R computed as a function of the dm2 inserted into the MC. This has the added virtue that it is easy to explain to people, as they are already used to R.

This you will see plotted in the following figure. The leftmost value is actually for dm2 = 0.0 (no oscillations), and you see we get back the R = 0.65 of the no-oscillation case. So, we can estimate dm2 from where the curve crosses R = 1.0, and take the errors to be where the upper and lower statistical error bands cross 1.0 as well. Note that we account only for statistics here (data and MC). Note also that since the curve crosses 1.0 with a shallow slope, this illustrates why we get such a wide region of acceptability in dm2. It also points out that the error band will shrink quickly with better statistics, which is good.
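The crossing procedure can be sketched numerically. This is a toy illustration of the method only, not the actual analysis: the L/E spectrum is made up, and the "data" ratio is faked by applying oscillations at dm2 = 3e-3 eV^2 with maximal mixing. A trial dm2 is inserted into the MC mu sample via survival weights, R is formed, and the estimate is the first dm2 at which R reaches 1.0.

```python
import numpy as np

# Toy sketch of the R-vs-dm2 method (all numbers invented for
# illustration).  R(dm2) = [nu_mu/nu_e]_data / [nu_mu/nu_e]_MC(dm2);
# inserting oscillations into the MC depletes the predicted mu rate,
# so R rises from its no-oscillation value toward 1.0 at the true dm2.

def mu_weight_mean(dm2, loe, s2th=1.0):
    """Mean two-flavor nu_mu survival weight over the MC mu events.
    loe: array of L/E in km/GeV, dm2 in eV^2."""
    return np.mean(1.0 - s2th * np.sin(1.27 * dm2 * loe) ** 2)

rng = np.random.default_rng(1)
loe = rng.uniform(1.0, 500.0, 50_000)       # hypothetical L/E spectrum

true_dm2 = 3.0e-3                            # "truth" used to fake the data
data_ratio = mu_weight_mean(true_dm2, loe)   # fake [mu/e]_data relative to
                                             # the no-oscillation MC

# Scan the dm2 inserted into the MC; R climbs from below toward 1.0:
dm2_grid = np.logspace(-4, -1, 200)
R_vals = data_ratio / np.array([mu_weight_mean(d, loe) for d in dm2_grid])
crossing = dm2_grid[int(np.argmax(R_vals >= 1.0))]  # first upward crossing
```

In this toy the crossing lands back at the inserted truth (within the grid spacing); with real data the same scan would be done against the full MC, and the error band read off where the statistical bands cross 1.0.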

It is not obvious to me that this is the optimal combination of measurables for obtaining dm2. I welcome suggestions.