Note -- italics indicate decisions reached
Engineering mode since February; frequent stoppages for DAQ upgrades and calibrations; close to 2 billion events.
Problems: air conditioner cycling frequently; power outages are very frequent, and a manual mg reset is still required; FSCC hangs; PMTs dying - 67 muon tubes and 26 shower tubes now dead, with failures continuing at a rate of ~5 per month. This is less than the December/January rate, which is a good sign (?).
Issues - calibrations; when we decide we're actually taking data (so taking down the experiment has to be requested in advance); what data we're saving (raw, reconstructed, etc.); who will run shifts, and how to train and schedule them.
Work that needs to be done this summer: fix pmts, cable plant for WACT and outriggers
Front End boards - 2 TDC channels needed repair, and were fixed by Dave Williams in April.
ADC - Diane Evans noted problems on cables or ADC outputs.
CAEN HV supplies - a problem noted in February on the block connector has been fixed; parts are available for fixing cables.
Acopian low voltage supplies - problems with the power switches - should be replaced this summer. For now these supplies should be powered off by unplugging rather than with the switch, since the switches are going bad. We should buy a spare power supply unit for $850 - decided to buy this (guaranteed by Jordan Goodman).
There is now a manual for the front end electronics, in PDF format - it will be loaded onto our website; right now it is on Linda's web page: http://scipp.ucsc.edu/~linda/
Temperature stability on the crates: rapid variations within 7°F, long term within 10°F. Could be somewhat better, but requirements are loose.
Most experiments do not cool their electronics with air conditioners!! Michael proposes that we use a dual-loop water heat exchanger, possibly using the cold water in the pond to cool a secondary loop of cooling water. Such a system could be had for ~$4k, with a cooling capacity of 34 kW. This price does not include the pumps and plumbing. He got 4 heat exchangers and a manifold for free from BaBar.
Total cost would be around $7k, but it could control the electronics temperature to about 1°C. Installation could be accomplished by 2 people in 2 days; there is space under the racks at present.
How does the pond water get from the pond to the counting house? There is an available conduit, apparently, through which we could run a 1" diameter flexible hose. We need a backup system for when the pond water recirculation system fails; Michael suggests using the domestic water supply, but this is probably too warm to be of use.
Linda says running between 60 and 70°F is fine. Reliability gets better as temperature goes down.
We have achieved 20m attenuation lengths longward of 3200 Angstroms. Actually, it could be even better than this; his equipment won't measure longer. Consistent measurements since 1996 show steady improvement over time. Changing the 0.2-micron filter 5 months ago presaged the latest improvement, from 12m to 20m.
Richard Miller would like to study gamma-hadron separation as a function of the attenuation length - might be better with dirtier water! How long does it take to get from 6m water to 20m water? Not known.
The portable water testing unit is just about ready to be shipped to the Milagro site. We can discuss where precisely this will go and how it will be fed with water. There are some safety considerations: it uses a 2 mW HeCd laser.
Is there something we can measure in the data itself as an independent monitor of attenuation? Possible that the scalers may tell you this, if you can take out the effects of temperature and pressure. Abe is working on this.
There are prototype outriggers now, 11 tanks with 2 more to build. 8 of these have 2 layers of Tyvek on all sides, 2 of them have 1 layer of Tyvek on all sides, and 1 just has 2 layers of Tyvek on top and bottom only. These are set out mainly on the north and west of the pond.
Need to review the acquired data to monitor the tank response - not done yet. Should also take water samples, may be pretty bad. How to calibrate? Use fitted showers? Reference time back to the pond?
Monte Carlo work - did new CORSIKA simulations of gamma showers with 172 tanks on a 15m grid. The median angle error improves using the tanks, especially for showers with fewer top layer hits and for showers with cores far from the center of the pond. Core resolution also improves dramatically for distant cores.
Should we use the tanks in the trigger criterion? Say pmttop < 50 and tank > 5 (i.e. no pond requirement). The tank trigger gives 2.5° angular resolution and effectively doubles the effective area at low primary energies. Using both together doubles the effective area at all primary energies.
Are there any PMT failures that look like they might not be leaks? Maybe 2 (out of 90+).
Use waterproof heat shrink tubing (made by Raychem) - apply 4" over the bulkhead and cable.
Another possibility is from Swagelok - Tony tested two of these at 16 psi on PMTs for 3 weeks, and they were fine.
Tony made 47 of the Raychem heat shrink sample connections for testing - just stub ends of cable with leads to apply HV, RTV'ed. 38 of these were tested at 63 psi, 55 psi, and 72 psi for a period of about 6 weeks. One sample drew a microamp; the others were okay. The other 9 were tested at 4 to 8 psi.
This looks like the right fix to apply (Raychem heat shrink) and we will apply this, unless something catastrophic happens between now and summer.
We have $52k left in the construction budget. UNH not included.
Operations budget - NSF meant to give us more, but they didn't; then Patricia Rankin left, and the supplemental request was dropped on the floor. The operations budget from the following year was released early, but this doesn't meet the original need. Even if they give us all we ask for, we will run out of money in April.
We need to write new proposals, which will be helped by the fact that we are now returning results. They will not let us turn off a detector that is returning results.
New call: travel support from NASA for the ICRC, for students and post docs, covering travel and hotel accommodations. Urge people to apply for these.
Either the Swagelok or the Raychem would work, but the consensus is that the heat shrink from Raychem would be easier to apply in the pond. With the Swagelok, we would have to take the tubes out of the pond, fix them, and then replace them. On the heat shrink, you have to ensure you hold the tubing firmly so that it doesn't slip back off the thicker portion or pry the connector loose. One way to fix this is a wire twisted around the cable.
Use an electrical heat gun or a propane torch (easier in the pond). Raychem doesn't recommend a propane torch, because the heating will not be uniform enough; direct flame is too hot. So Michael tried to design his own torch which would give more uniform heating: a copper shield fits on the end of a conventional torch and is heated up, and you then apply the heat from the shield. This doesn't work well, and even if it did, there would be significant problems getting it approved for safety at LANL.
So - go back to the electrical heat gun; we just have to get electricity to the boat. One possibility: battery-powered inverters for marine use giving 120 V at 1800 watts (700 watts will do). These cost $2k, weigh 75 pounds, and the batteries last two hours. Need rubber boots in the boat! Another possibility: a waterproof cable to a GFCI outlet at the pond edge, with an additional GFCI on the boat itself. Sounds like the cable is the way to go; take out life insurance policies on the boat people -- more seriously, we need to work up realistic Hazard Control Plans for this work and understand the risks.
The water is still < 40°F; it should start warming soon, but hasn't yet!
Scuba divers go down to get the tubes and release them to the surface (or just below the surface, with a small weight), doing them all at once on the same day (Alan estimates you can only do ~50 tubes per day, realistically, in this water). Then follows the repair job, using two rafts - possibly with people in the water too - repair/dry/install heat shrink. We do the testing at this point also. When all the repairs are done, the scuba divers come back and replace all the tubes. During the repair process, the tubes remain above their grid locations.
When do we start? Not till it gets warm. Also need a safety plan for doing it. July? August? We don't know how long it's going to take, so we should start as early as we possibly can.
Options - (A) fix all muon layer tubes and dead shower layer tubes [~300]; (B) fix only dead muon and shower layer tubes [~100]; (C) fix all PMTs [~723]. The last option assumes we know this is the final best answer. We should have the material to repair all of them, and we'll decide as we go how far to take the repairs. Emptying the pond down to shower PMT level would help in option C (fixing all tubes), but we lose a month or two of data doing that. Under option A, we may be able to run the air shower layer at night during the fix, though this may be doubtful because of the care needed to seal up against light leaks. Sentiment generally runs towards option A (all muon, dead shower), with a staged approach, being sensitive to the time, without lowering the water level.
We must learn: how long does it take per PMT? Are there other possible fixes? Will this work? Do we have to replace the connectors? How many people will this take?
All data is attached to Cosmo, but computing resources are down on the PCs. If you want to run fast, you run on Cosmo, but you screw everyone else who needs access to the data. The best policy is not to run on Cosmo. More realistic is to avoid starting the job that uses the last CPU on Cosmo (which will soon have 4 CPUs).
Offline computing - there are 3 major tasks: Monte Carlo, the assault on the Crab, and analysis of reconstructed data. All of these take loads of computing; Crab and analysis also make demands on the tape archive. Crab reconstruction in a +/- 5° dec. band is 6% of our data.
Recommend - move Monte Carlo efforts off site; get more CPUs (20 or so) and more disk space. All our disk space at the moment is devoted to Milagrito analysis.
One year of Milagrito is 140 GB, but Milagro will be 1.2 TB. How do we handle this? HSM - Hierarchical Storage Management - $30k for software. Or buy 1.2 TB of disk, for $60k (per year). The cost of the latter option will come down, and we will eventually go that way.
$20k more would fill out the RAID unit and buy more CPUs at LANL, we should probably do this.
The DAQ system on Milagro is very different from Milagrito's - it uses message queues. We've been taking engineering runs since Jan 27. Lots of down time for testing software; grounding problems have been fixed; clock problems (known since Grito) have now been "fixed" - that is, we no longer see these errors, because we've added multiple reads and voting. Relatively stable now, but the FSCC still hangs every other day or so.
We currently have ~100% dead time at 2001 Hz, though it works (10% dead time) at 2000 Hz. There are delays and inefficiencies in the code, but without them, things hang. This clearly needs some work to understand. We should be able to get up to 7 kHz with the new code.
The online code has been working since 4/29. Six worker CPUs: decode/compress/calibrate/core fit/angle fit/pool info. Each worker takes about 1/2 a CPU at a rate of 1 kHz. With this system we could probably get up to 4 kHz, with some overlapping. For the online analysis to run at 7 kHz, we would probably need more CPUs up there, but not on the Challenge, because of expense.
Output streams: CMP (compressed raw data) / REC (compressed reconstructed data, 24 bytes/event) / sun (compressed raw data) / moon (compressed raw data) / Crab (compressed raw data) / Mrk501 (compressed raw data) / Mrk421 (compressed raw data) / Calib (not determined yet) / SAVE (compressed raw data at 350 Hz only: biggest events, current trigger). CMP will go to /DATA/temp/; the others go to tape. On a GCN alert, /DATA/temp goes to tape for an hour or so, else to /dev/null. If we do this, we can run with ~5 tapes per day, of which 2.5 are SAVE, 0.12 is REC, and the sources take the rest. We discussed the value of the various sources (Crab/Mrk501/Mrk421) for helping us with gamma/hadron separation.
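As a sanity check, the quoted tape fractions can be inverted to see what tape capacity and compressed raw event size they imply. This sketch assumes a sustained ~2 kHz trigger rate for the REC stream and that "0.12" and "2.5" are tapes per day; both are assumptions, not numbers from the minutes.

```python
# Back-of-envelope check of the tape budget quoted above.
# ASSUMPTIONS (not from the minutes): a sustained 2 kHz trigger rate
# for the REC stream, and that "0.12" and "2.5" are tapes per day.
SECONDS_PER_DAY = 86400

rec_bytes_per_event = 24            # REC: compressed reconstructed data
trigger_rate_hz = 2000              # assumed sustained trigger rate
rec_per_day = rec_bytes_per_event * trigger_rate_hz * SECONDS_PER_DAY

tape_bytes = rec_per_day / 0.12     # REC is said to fill 0.12 tape/day
save_per_day = 2.5 * tape_bytes     # SAVE fills 2.5 tapes/day at 350 Hz
save_rate_hz = 350
raw_bytes_per_event = save_per_day / (save_rate_hz * SECONDS_PER_DAY)

print(f"implied tape capacity: {tape_bytes / 1e9:.1f} GB")
print(f"implied compressed raw event size: {raw_bytes_per_event:.0f} bytes")
```

The implied figures (roughly 35 GB per tape and ~3 kB per compressed raw event) at least hang together; if either assumption is off, the other numbers scale accordingly.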
To start a run: log in to kahuna as daq and type "START". To stop a run, type "STOP". There are commands to reboot the FSCC (REBOOT_FSCC), reset the fastbus (RESET_FSCC), pause triggers (TRIGHOLD), resume triggers (TRIGRESUME), kill the online (KILL), and view the status of the message queues (vq).
We took data in February and March - all that was required to calibrate Milagro. Unfortunately, it ended up being useless, because of a bug in the compression code and because of a reset in the hardware. The laser is also known to be unstable, but it died in March (went to the ground state!).
We now have a new laser, but it's also unstable. We've taken data from two laser balls, but the Lab has issued an order to stand-down our laser operations, because our laser shack is no longer considered safe. They changed the requirements for laser enclosures. We are attempting to fix this.
We have some slewing curves, some of them look nice, others pathological.
Still need to make a link between the calibration and daq, to get Monte Carlo data for fitter testing, and to find a laser expert.
To get Milagrito sensitivity, need the correct Mrk501 flux, the correct trigger criterion we were using, the contribution of heavier nuclei to the background rate, and the dead and uncalibrated tubes.
The Milagrito rate in the Mrk501 source bin (1.1°) is not constant with time - some changes are understood, but the general instability is not.
The Whipple flux and HEGRA spectra from Mrk 501 agree extremely well. From this spectrum, the ratio of Mrk 501 to Crab in Milagrito should be about 3.6.
Our trigger criterion was "soft" - this has to be mimicked in the Monte Carlo as well.
Heavier nuclei - protons are 67% of the background cosmic ray rate, helium accounts for 26% of the rate, CNO for 4%, other nuclei should be comparable to CNO.
From Mrk 501, we predict 12.9 +/- 0.4 gamma events per day in the source bin, with a background of 3080 (+205/-110) events per day. What we see is 8.7 +/- 3, giving us 3 to 4 sigma. For the Crab we don't see any excess; the Monte Carlo predicted 3.6 +/- 0.2.
Our excess on Mrk 501 is not evenly distributed in time, but has a significant peak in May-June 1997.
We should publish the Mrk 501 result in a journal. Unfortunately, the analysis of Mrk 501 is somewhat unstable, owing to the fact that it is only a 3 sigma result. Let's look at how to improve it. We should try lower values of nfit, since the most likely value (30) is lower than the value we use (40).
Solution: make an unbinned sky map using separate sigmas for each event, spreading each of them out accordingly. Over the whole lifetime of Milagrito, at (0,0) we get a 3.5 sigma result, but the highest significance is offset by 0.7°. If you look just at events with inferred angular resolution between 0.4° and 0.8°, you get a 5 sigma excess on source! This resolution bin includes most of the best events.
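A minimal sketch of the unbinned-map idea: each event is spread by its own inferred sigma, and the map value at a point is the sum of the per-event Gaussian densities. The normalization and background treatment used in the real analysis are not recorded in these minutes, so this is illustrative only.

```python
import math

def unbinned_map_value(x, y, events):
    """Sum per-event 2D Gaussian densities at map point (x, y).

    events: list of (ex, ey, sigma), each event spread by its own
    inferred angular resolution sigma (degrees). Illustrative only;
    the real analysis' normalization/background handling may differ.
    """
    total = 0.0
    for ex, ey, sigma in events:
        r2 = (x - ex) ** 2 + (y - ey) ** 2
        total += math.exp(-r2 / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)
    return total
```

A well-reconstructed event (small sigma) near the test point contributes more to the map value than a poorly reconstructed one, which is the mechanism behind the 0.4°-0.8° resolution bin standing out.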
If we use only data from when the source was known to be in its high state, we get a significance of about 4.2 sigma (at 0,0).
Crab gives a signal at 1.8 sigma, which is consistent with the expectations, given that we see Mrk 501 at about 4 sigma.
The distribution of angular resolution in our data is very non-Gaussian. It is difficult to decide how best to extract a signal from our data.
The shadows of the sun and moon were both seen by Milagrito, the moon at 9 sigma, the sun at 7 sigma (all Milagrito data, smoothed with a square bin). Both are offset in the same direction (-1/3, -1/3 in dec., ra). The deficit for the moon is fit by a Gaussian with a width of 1.57°, but the deficit depth is only about 1/3 that expected. Taking account of the smearing of the shadow by the geomagnetic field may resolve this discrepancy.
Gus sees an event on an all-sky burst analysis; Julie sees an event by looking through the BATSE triggers.
The BATSE trigger 6188, on 97/04/17, may have been seen by Milagrito. T90 was 7.9 seconds; the background event density is 3.45 per bin, and we saw 18 events in a 1.6° bin - a 3e-08 probability of occurring by chance. If you open up to a 2.7° bin, you get 8 more events. The zenith angle was about 22 degrees, declination 54 degrees. Our overall trigger rate did not flinch. There was no water on the cover; some light leaks were present, but covered with tarps; no run status anomalies were reported. The EMS has not yet been checked. The time sequence of our events overlaid on the BATSE light curve shows everything fitting into T90, with possible clumps associated with minor BATSE peaks.
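The quoted chance probability is consistent with a simple Poisson tail, P(k >= 18) for a background mean of 3.45, which can be verified directly:

```python
import math

def poisson_tail(mu, n):
    """P(k >= n) for a Poisson distribution with mean mu."""
    head = sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n))
    return 1.0 - head

# Background 3.45 events per bin, 18 events observed:
p = poisson_tail(3.45, 18)
print(f"{p:.1e}")  # ~3e-08, matching the quoted chance probability
```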
No delayed emission - looking for an hour after T90, get 1.6 sigma.
Trials: we had 51 bursts in our field of view, with 34.5 independent bins in the field of view and 2209 total bins. The number of trials is therefore between ~1700 and ~110000. Testing with fake signal maps suggests the trials factor is at the lower end of this range, but this is still uncertain.
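The bracket on the number of trials is just the burst count times the two bin counts. A quick check, carrying the 3e-08 pre-trials probability through for illustration:

```python
# Trials bracket: 51 bursts, searched over either 34.5 independent
# bins or all 2209 bins in the field of view.
bursts = 51
low = bursts * 34.5      # ~1760, quoted as "1700"
high = bursts * 2209     # ~112,659, quoted as "110000"

# Illustration only: post-trials probability for the 3e-08 result.
pre_trials = 3e-8
print(low * pre_trials, high * pre_trials)  # roughly 5e-05 to 3e-03
```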
Isabel looked at 54 BATSE GRBs within 45°, within T90, using square bins. Only 12 of these have IPN arcs. Two of them have BeppoSAX localizations.
For background, look at T90-length bins over a six-hour interval before and after. Get the optimal bin size as a function of background counts, and use this for the study of each burst. In the ensemble of bursts, only one had a significant signal, and it was the same one that Julie found. (This was an independent analysis.) For 6188, there were 324 bins, each a square of 2.85° on a side. There were 17 events on source, with a background of 3.4. The hottest bin center was 54.3° dec., 290.8° ra; the BATSE position was 55.77° dec., 295.66° ra. The BATSE statistical error was 6.2°; the 95% confidence radius was 11°.
Looking at sky maps to study the sensitivity of Milagrito. Finds a few 5 sigma points on the whole sky.
Our NIM / Astroparticle Physics paper is in progress, Cy still needs lots of help, figures, numbers, acknowledgments (including all summer students and REU students). Authors: Cy, Jordan, ... Editorial Committee - Morgan, Gus, David Williams, Dave Berley.
We decided to write a paper on the Markarian 501 result. If we can say that it turned off, so much the better. Stefan should initiate the writing of this, with help from Andy. Editorial Committee - Alan, David, Gus, Jordan, Scott H. Deadline 1st June.
We should write a paper on the CME/Nov Solar Event - to be written by Abe and Jim, with Editorial Committee - Gaurang, Galen, Rich, Cy. Deadline 1st June.
We should write a paper on the GRB BATSE 6188 trigger - authors Julie, Isabel, Brenda & Andy - within 2 weeks we should have a memo (towards paper) telling what's been found. Cy has a list of questions that need answering. Editorial Committee: Galen, Don, Todd, Joe. Deadline 10th May.
All Sky Burst Search - Julie, Rob A
Analysis of Scaler Data
Drafts exist for all these except Jim's, Lazar's, and the last, but we are assured those are in progress.
Some computing to be moved to Santa Cruz. Official plots will be placed on two web sites, Maryland and probably mioruilt, vetted by an "Official Plots Committee" - soliciting official plots of Mrk 501, sun & moon data, pictures, event displays, official parameters, CME plots, Crab, etc.
The outrigger proposal has to go in this October's cycle - it should succeed, but it needs to be written in early fall. Everybody also has to write their renewal proposals this fall. A KDI proposal for the outriggers would get the computers upgraded.
The concrete pad in the NE corner of the pond enclosure has a cinderblock wall and a frame for the canvas covering. The building will be complete in < 2 weeks. One mirror mount is there and will be tested this spring and summer. The plan for summer is to build the remaining 5 pads, and to figure out what to do with the telescopes as we get data from the one running mount.
Exterior cables are needed for the 6 WACT stations, 25 PMTs per station. Need spares, AC power, 2-3 fibers, and EMS. Also need cables for the outriggers - 170 tanks, each having a cable, spares, and possibly fibers and EMS.
Proposal - install main trunk lines, bury them, and anchor them to the inner fence. Make four main junction boxes at the 4 corners of the pond enclosure, with patch panels and spark gaps. These junction boxes feed both the outriggers and the WACT stations. All the conduit to the NE corner would be put in this summer, including conduit ultimately destined to lead to the SE corner. This is the "scary" corner, because all the power goes in through the gate region. We will have to get the right permits, of course, and contract with someone, probably P&M, to do the digging.
Milagro so far - 1.66 billion events. Version 31 software is reconstructing the data, using rough proto-calibrations. The goal is to measure the Crab as quickly as we can, so we're stripping the Crab data, +/- 10°, taking about 1 billion events, to April 14.
Moon plot shown: there is a deficit, closer to the center than the Milagrito one, with a significance of ~5 sigma (in 2 months). We are throwing out more tubes than Milagrito, but saving many more good tubes. Things will improve greatly when we do the calibrations right, but seeing the shadow at this point is a good sign.
Crab plot shown, nothing shows up.
We joined the GCN in Feb 1999 and have received 131 burst notifications (45 triggers) via email. By the end of March, 7 of the triggers had zenith angles less than 45 degrees, and we were on for 6 of these.
Julie looked at the 6, and nothing really significant shows up. Lowest probability 2e-5, before trials factor is applied.
Adding data sets from two sources with different S/N can reduce the significance below that of the dataset with the greater significance. The weighting method is a way of combining datasets with different significance that does not wash the significance out. The appropriate weights for combining are the significances themselves. The errors are combined in quadrature.
Doing this with the data: parameterize the angular resolution (e.g. by nfit and del e-o), produce a different error for each event, then add them in quadrature to get the error in the ensemble.
Gaussian weighting gives significances averaging 20% better than normal MC technique.
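A sketch of the weighting scheme as described above (weights equal to the significances, errors combined in quadrature); the exact estimator used in the analysis may differ in detail.

```python
import math

def combined_significance(sigs, weights=None):
    """Combine independent significances with given weights.

    S = sum(w_i * s_i) / sqrt(sum(w_i ** 2))
    (the weighted sum's error is the quadrature sum of the weights).
    """
    if weights is None:
        weights = [1.0] * len(sigs)
    num = sum(w * s for w, s in zip(weights, sigs))
    return num / math.sqrt(sum(w * w for w in weights))

# Unweighted addition dilutes a good dataset with a poor one:
print(combined_significance([5.0, 0.0]))              # ~3.54, worse than 5
# Weighting by the significances themselves avoids the washout:
print(combined_significance([5.0, 0.0], [5.0, 0.0]))  # 5.0
# For two real signals it reduces to quadrature addition:
print(combined_significance([3.0, 4.0], [3.0, 4.0]))  # 5.0
```

With weights equal to the significances, the combined value is just the quadrature sum of the individual significances, so a zero-significance dataset no longer drags the result down.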
How many surviving single hadrons do we expect to see in Milagro?
Looking at a selection for single hadrons in Milagro shower data (based on simulations by Stefan). These could be muons, unaccompanied single hadrons, or surviving single hadrons. Making a cut in the t vs. r plane in the top layer, he got 55 events out of 6000 shower triggers in simulations. All except one are below 1 TeV. All make a bigger hit in the bottom layer than in the top. This fraction, ~0.01, compares with what one might expect in the real data. Muon burst events are expected to be ~5% of surviving primaries with energies above 100 GeV, and this decreases with energy.
Should think about developing a possible trigger which depends only on the bottom layer.
Should get > 400 events above 60 TeV and 1 event at 1 PeV in 3 years - a measure of the protons in the CR. The physics interest is in the CR composition at the knee of the cosmic ray spectrum. This will be complementary to the physics addressed by WACT. We should also be able to compare directly with the JACEE flux.
A maximum likelihood fitter is used to fit the ensemble of tube hits to give the arrival plane for the event. There is no PE cut and no t-chi cut; it uses all the information and gives better angular resolution, but it is slow.
Use a functional fit to the t-chi distribution, with 5 parameters. This function is Gaussian for early hits, exponential for later hits, smoothly matched at the peak. For all but the weakest hits (lowest PE bin), the function is an excellent match to the distributions. The function is needed to get the derivatives, which are needed to apply the method.
However, the function and gradient evaluations are slow. It is better to use a lookup table based on the function; with linear interpolation, this works with no resolution loss.
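The shape and the lookup-table trick can be sketched as follows. The parameter values and the simplified 4-parameter form here are placeholders (the real function has 5 parameters and is matched smoothly at the peak); the point is that a table with linear interpolation reproduces the direct evaluation.

```python
import math

def tchi_pdf(t, t0=0.0, sigma=1.0, tau=2.0, amp=1.0):
    """Gaussian rise for early hits (t < t0), exponential tail for late
    hits, matched in value at the peak t0. A simplified stand-in for the
    5-parameter function described in the talk."""
    if t < t0:
        return amp * math.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
    return amp * math.exp(-(t - t0) / tau)

# Lookup table with linear interpolation, replacing the slow direct call:
T_MIN, T_MAX, N = -10.0, 40.0, 5000
STEP = (T_MAX - T_MIN) / N
TABLE = [tchi_pdf(T_MIN + i * STEP) for i in range(N + 1)]

def tchi_lookup(t):
    """Linearly interpolated table lookup of tchi_pdf."""
    if t <= T_MIN:
        return TABLE[0]
    if t >= T_MAX:
        return TABLE[-1]
    x = (t - T_MIN) / STEP
    i = int(x)
    frac = x - i
    return TABLE[i] * (1 - frac) + TABLE[i + 1] * frac
```

With a modest table the interpolated values track the direct evaluation far more closely than the timing resolution requires, which is why the method loses no resolution.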
The maximization step uses the conjugate gradient method, in a variant from Numerical Recipes. The likelihood function is smooth, with a good global minimum and no other local minima, so convergence is extremely good. Starting parameters are taken from a quick flat-top fit. It runs quickly, then returns theta, phi, and T. Running on Milagro, we get a 5.4% improvement in del-angle (14% in del-e-o) over the chi-square fit on gamma showers, and similar for proton showers. The fit runs at ~5 ms per event on his PC (< 200 Hz).
Likelihood fitting never has to fail. But fitting high-theta events takes much longer and does much worse.
On Milagrito data, this method runs 6 times slower than the chi-square method, but gives a 17% improvement in del-angle compared with the simulations. The delta-theta versus theta systematic is also improved using this method. He proposes to take the Mrk 501 data to UCSC (the format needs changing) and run this method on that data.
August 1 - Sun
TeV workshop starts Aug 13 - we should have our meeting before this -
ICRC starts Aug 17 -
We don't need a weekend meeting during the summer
July 28 is Cherenkov's birthday
Jordan not available 25th July thru 4th August
Julie away 16th thru 26th July
Stefan away 5th thru 15th August
How about 14th July? Dave Williams is on vacation the week of the 11th.
How about 9th & 10th July? Settled. This is a Friday and Saturday, which some folks objected to.
RawVise - lossless raw data compression - the key is in throwing away what you don't need. Fit a plane to the leading edges. Exploit the peaked statistics of the residuals to lump things together: group things in the peak using a short word, then a longer word describes those out of the peak, etc.
This achieves a compression of 3:2 over the current raw compression. The current technique is actually fairly efficient. Other features: it is truly lossless; it is believed to be byte-swap insensitive (it can run on a PC or an SGI with no problems - the current system doesn't satisfy this); and it can be used in a multi-stream context.
He proposes to implement this. He will implement it on kahuna, run some tests there, and then it will be ready to go (one minor change?). It's very modular and should be very easy to link in. It can be called with one or two lines of code for both compression and decompression. The raw event must be in memory to compress. There are no assumptions about the input data structure. It works at 400 Hz on one kahuna CPU.
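The short-word/long-word idea can be illustrated with a toy coder. This is not RawVise itself - the word width and escape convention here are invented for illustration: residuals inside the peak fit in a short word, and an all-ones escape flags outliers stored at full width.

```python
def encode_residuals(residuals, short_bits=4):
    """Toy variable-length coder in the spirit described above (NOT RawVise).

    Residuals in the peak fit in one short word; the all-ones short word
    is an escape, followed by the full-width value for outliers.
    """
    escape = (1 << short_bits) - 1          # all-ones short word
    half = escape // 2
    out = []
    for r in residuals:
        if -half <= r < half:
            out.append(r + half)            # small residual: one short word
        else:
            out.append(escape)              # outlier: escape + full value
            out.append(r)
    return out

def decode_residuals(words, short_bits=4):
    """Exact inverse of encode_residuals (lossless round trip)."""
    escape = (1 << short_bits) - 1
    half = escape // 2
    res, i = [], 0
    while i < len(words):
        if words[i] == escape:
            res.append(words[i + 1])
            i += 2
        else:
            res.append(words[i] - half)
            i += 1
    return res
```

Because most residuals cluster in the peak of the distribution, the stream is dominated by short words, which is where the claimed 3:2 gain over plain fixed-width storage would come from.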
The grid was built pinned. The crosses shouldn't have moved, but things sank to the lowest point. Between the beginning and end of Milagrito, things compressed by 1/4" per grid point. Don't know when this happened.
Thermal effects as well as gravity compression effects go into this. Alan has built a correction taking these into account. This correction changes theta by an average of -0.7 degrees (rms 0.9 degrees), in the direction away from zenith. This effect may be getting worse in Milagro, because there are more tubes on the slope. For Milagro, the compression effect has already taken place, but the thermal effect is still there and worse. However, the cycling effect during Milagro will be smaller, because the temperature variance will be smaller.
Shifts will go in one-week units. Everyone must take between one and two shifts per year, perhaps three every two years. Cy will define this a little more precisely. We usually start shifts on Wednesdays, so change of shift occurs when people are around, in case of problems with that transition. We can be a bit flexible about this in case travel plans go awry.
We will keep an on-line schedule on the operations web site. Cy will put out a blank schedule, and people should sign up for particular times, first come first served. Questions about fairness and priority, especially for teaching faculty. Consider using the SuperK model.
We have two people on shift, primary and backup. The primary carries the pager, the backup should be available in case two people are needed for a job.
Plan is to start this in about a month or so. People need to get trained for this, should arrive a day before they start shift to be shown how to do things. Milagro site training from last year is still sufficient.
There should be a list of tasks that shift people could do while they're on shift. Maintaining the site, support of the experiment, besides routine shift duties. The shift person should also be writing entries in the log book, especially for when the experiment shuts down. There should be a desk assigned to the shift person.