ID | Date | Author | Type | Category | Subject
11430 | Mon Jul 20 11:57:17 2015 | ericq | Update | General | Arm Locking recovered
The interferometer is warming up!
I had some issues locking the IMC at first. It turned out that the MC3 side OSEM signal wasn't getting to the ADC. A satellite box squish fixed it.
I touched up the PMC alignment; the best I could do was 0.75V, probably due to the AOM being in place.
I haven't touched the WFS offsets, but the current ones seem to be doing ok. I'll touch them up tonight when the seismic activity has calmed.
I made some changes to the state of the PZT/PC crossover gain in the mcdown script, resulting in the IMC catching lock quicker.
Thankfully, the tip tilt pointing stayed good during the upgrade. I barely had to touch the ETM alignment to lock the arms. ETMX is showing some errant motion, though...
11429 | Sat Jul 18 16:59:01 2015 | jamie | Update | CDS | unloaded, turned off loading of, symmetricom kernel module on fb
fb has been loading a 'symmetricom' kernel module, presumably because it was once being used to help with timing. It's no longer needed, so I unloaded it and commented out the lines that loaded it in /etc/conf.d/local.start.
11428 | Sat Jul 18 16:03:00 2015 | jamie | Update | CDS | EPICS freezes persist
I notice that the periodic EPICS freezes persist. They last for 5-10 seconds. MEDM completely freezes up, but then it comes back.
The sites have been noticing similar issues on a less dramatic scale. Maybe we can learn from whatever they figure out.
11427 | Sat Jul 18 15:37:19 2015 | Jamie | Summary | CDS | CDS upgrade: current status
So it appears we have found a semi-stable configuration for the DAQ system post upgrade:

Here are the issues:
daqd
daqd is running mostly stably for the moment, although it still crashes at the top of every hour (see below). Here are some relevant points about the current configuration:
- recording data from only a subset of front-ends, to reduce the overall load:
- c1x01
- c1scx
- c1x02
- c1sus
- c1mcs
- c1pem
- c1x04
- c1lsc
- c1ass
- c1x05
- c1scy
- 16 second main buffer:
start main 16;
- trend lengths: second: 600, minute: 60
start trender 600 60;
- writing to frames:
- full
- second
- minute
- (NOT raw minute trends)
- frame compression ON
This eliminates most of the random daqd crashing. However, daqd still crashes at the top of every hour after writing out the minute trend frame. It's still unclear what the issue is, but Keith is investigating. In some sense this is no worse than where we were before the upgrade, since daqd was also crashing hourly then. It's still crappy, though, so hopefully we'll figure something out.
The inittab on fb automatically restarts daqd after it crashes, and monit on all of the front ends automatically restarts the mx_stream processes.
front ends
The front end modules are mostly running fine.
One issue is that the execution times seem to have increased a bit, which is problematic for models that were already on the hairy edge. For instance, the rough average for c1sus has gone from ~48us to ~50us. This is most problematic for c1cal, which is now running at ~66us out of 60, which is obviously untenable. We'll need to reduce the load in c1cal somehow.
All other front end models seem to be working fine, but a full test is still needed.
There was an issue with the DACs on c1sus, but I rebooted and everything came up fine, optics are now damped:

11426 | Sat Jul 18 14:55:33 2015 | jamie | Update | General | all front ends back up and running
After some surgery yesterday the front ends are all back up and running:
- Eric found that one of the DAC cards in the c1sus front end was not being properly initialized (with the new RCG code). Turned out that it was an older version DAC, with a daughter board on top of a PCIe board. We suspected that there was some compatibility issue with that version of the card, so Eric pulled an unused card from c1ioo to replace the one in c1sus. That worked and now c1sus is running happily.
- Eric put the old DAC card into c1ioo, but it didn't like it and was having trouble booting. I removed the card and c1ioo came up fine on its own.
- After all front end were back up and running, all RFM connections were dead. I tracked this down to the RFM switch being off, because the power cable was not fully seated. This probably happened when Steve was cleaning up the 1X4/5 racks. I re-powered the RFM switch and all the RFM connections came back on-line
- All receivers of Dolphin (DIS) "PCIE" IPC signals from c1ioo were throwing errors. I tracked this down to the Dolphin cable going to c1ioo being plugged into the wrong port on the c1ioo dolphin card. I unplugged it and plugged it into the correct port, which of course caused all front end models using dolphin to crash. Once I restarted all those models, everything is back:

11425 | Sat Jul 18 06:12:07 2015 | Ignacio | Update | General | MCL Wiener filtering + FIR to IIR conversion using vectfit (Update)
After Eric gave me feedback on my previous elog post, I went back and fixed some of the silly stuff I stated.
First of all, I have come to realize that it makes zero sense to plot the ASDs of the mode cleaner against the seismometer noise. These measurements are not only quite different; more elementarily, they possess different units. I have focused my attention on the MCL being Wiener filtered with the three seismometer signals.
The major improvements that I made in the following analysis are:
1) Prefiltering: a bandpass filter from 1 to 5 Hz, in order to emphasize subtraction of the bump shown in the figure below (sketched in code after this list).
2) I have used vectfit exclusively in the 1 to ~5 Hz range, in order to model the FIR filter properly in the band where the subtraction matters. Limiting myself to the 1-5 Hz range has allowed me to play freely with the number of poles, hence being able to fit the FIR filter properly with an IIR rational transfer function.
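For concreteness, here is a minimal Python/scipy sketch of the prefiltering step. The actual analysis is the attached MATLAB code; the sample rate and filter order below are assumptions for illustration, not values from this work.

from scipy import signal

fs = 256.0  # assumed sample rate of the MCL/seismometer channels
# 4th-order Butterworth bandpass from 1 to 5 Hz
b, a = signal.butter(4, [1.0, 5.0], btype='bandpass', fs=fs)

def prefilter(x):
    # forward-backward filtering gives zero phase distortion, which matters
    # when the filtered witnesses are used to train the Wiener filter
    return signal.filtfilt(b, a, x)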
The resulting ASDs are shown below: in blue the raw MCL output, in black the Wiener filter (FIR) result, and finally the result of filtering the data with the fitted IIR Wiener filter.

Now, in the following plots I show the IIR Wiener filters for each of the three seismometers,
X Seismometer,


For the Y seismometer,


and for the Z seismometer,


The matlab code for this work is attached: code.zip
Attachment 1: Wiener_MCL_seismometers_iir.png
Attachment 2: seisx_mag.png
Attachment 3: seisx_mag.png
Attachment 4: seisx_mag.png
Attachment 5: seisx_ph.png
Attachment 6: seisy_mag.png
Attachment 7: seisy_mag.png
Attachment 8: seisy_mag.png
Attachment 9: seisy_ph.png
Attachment 10: seisz_ph.png
Attachment 11: seisz_ph.png
Attachment 12: code.zip
Attachment 13: seisz_mag.png
Attachment 14: seisz_mag.png
Attachment 15: seisz_ph.png
11424 | Fri Jul 17 04:56:37 2015 | Ignacio | Update | General | MCL Wiener filtering + FIR to IIR conversion using vectfit
We took data for the mode cleaner a while ago, June 30th I believe. This data contained signals from the six accelerometers and the three seismometers. Here I have focused only on the seismometer signals as witnesses, in order to construct Wiener filters for each of the three seismometer signals (x, y, z) and for the combined seismometer signal. The following plot of the ASDs shows the results:

Wiener filtering works beautifully for the seismometers. Note that subtraction is best when we use all three seismometers as the witnesses in the Wiener filter calculation, as can be clearly seen in the first plot above.
Now, I used vectfit to convert the Wiener FIR filters for each seismometer to their IIR versions. The following are the bode plots for the IIR filters.
For the x-direction seismometer,


For the y-direction seismometer,


And for the z-direction seismometer,


The IIR filters were computed using 5 zeros and 5 poles in vectfit. That was the maximum number of poles that I could use without running into trouble with matrices being almost singular in Matlab (see the sketch below). I still need to figure out how to deal with this issue in more detail, as fitting the y-seismometer was a bit problematic. I think having a greater number of poles will make the fitting a bit easier.
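vectfit itself is MATLAB-only; as an illustration of the kind of FIR-to-IIR rational fit involved, and of why the matrices go nearly singular as the order grows, here is a sketch in Python of a simple linearized least-squares (Levy-style) fit. This is a stand-in, not the vectfit algorithm, which iteratively relocates poles and is much better conditioned.

import numpy as np
from scipy.signal import freqz

def fit_iir_to_fir(h_fir, n_zeros=5, n_poles=5, n_freq=512):
    # Fit B(z)/A(z) to the FIR response H(w) by linearizing B - H*A = 0
    # with a[0] fixed to 1 (Levy's method).
    w, H = freqz(h_fir, worN=n_freq)
    E = np.exp(-1j * np.outer(w, np.arange(max(n_zeros, n_poles) + 1)))
    M = np.hstack([E[:, :n_zeros + 1], -(H[:, None] * E[:, 1:n_poles + 1])])
    # This design matrix is what becomes ill-conditioned ("almost singular")
    # as the number of poles grows.
    A_ri = np.vstack([M.real, M.imag])
    y_ri = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(A_ri, y_ri, rcond=None)
    return x[:n_zeros + 1], np.concatenate([[1.0], x[n_zeros + 1:]])  # b, a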
Attachment 1: Wiener_MCL_seismometers.png
Attachment 2: seisx_mag.png
Attachment 3: seisx_mag.png
Attachment 4: seisx_mag.png
Attachment 5: seisx_phase.png
Attachment 6: seisy_mag.png
Attachment 7: seisy_phase.png
Attachment 8: seisz_mag.png
Attachment 9: seisz_phase.png
11423 | Fri Jul 17 02:46:07 2015 | Ignacio | Update | General | New huddle test data for Wilcoxon 731A results
On Thursday, new huddle test data for the Wilcoxon 731A was acquired by Eric.
The differences between this new data and the previous data are:
1) We used three accelerometers instead of six this time around.
2) We used a foam box, and clamped cables on the experimental set up as shown in the previous elog, http://nodus.ligo.caltech.edu:8080/40m/11389
I have analyzed the new data. Here I present my results.
The following plot shows the ASDs of the three accelerometers' raw outputs, as well as their error signals computed using the three-cornered-hat method:

As before, I computed the mean for the output signals of the accelerometers above, as well as their mean self noise, to get the following plot:

Now, below I compare the new results with the results that I got from the old data,

Did the enclosure and cable clamping do much? Not really, according to the computed three-cornered-hat results. Also, notice how much better a result, even if the improvement is small, we get from using six accelerometers and calculating their self noise by the six-cornered-hat method.
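For reference, here is a sketch in Python of the cross-spectral form of the three-cornered-hat estimate; the actual analysis was presumably done in MATLAB, and the exact formulation used there may differ.

import numpy as np
from scipy.signal import csd, welch

def self_noise_psd(x1, x2, x3, fs, nperseg=4096):
    # Self-noise PSD of sensor 1, given three colocated sensors seeing the
    # same ground signal: N11 = P11 - P21 * P13 / P23 (the common signal
    # cancels out of the cross-spectra).
    f, P11 = welch(x1, fs=fs, nperseg=nperseg)
    _, P21 = csd(x2, x1, fs=fs, nperseg=nperseg)
    _, P13 = csd(x1, x3, fs=fs, nperseg=nperseg)
    _, P23 = csd(x2, x3, fs=fs, nperseg=nperseg)
    return f, np.abs(P11 - P21 * P13 / P23)

# Permute (x1, x2, x3) for the other two sensors; the ASD is the sqrt.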
Now, I moved on to analyzing the same data with Wiener Filtering.
Here again are the raw outputs, and the self noises of each individual accelerometer calculated using Wiener filtering:

The accelerometer in the Y direction is showing a kind of funky signal at low frequencies. Why? Anyways, I calculated the mean of the above signals, as I did for the three-cornered-hat method, to get the following. I also show the means of the signals computed from the old data using Wiener filtering:

Is the enclosure really doing much? The Wiener filter that I applied to the old huddle test data gave me a self noise curve that was better by an order of magnitude. Keep in mind that this was using SIX accelerometers, not THREE as we did this time. I want to REDO the huddle test for the Wilcoxon accelerometers using SIX accelerometers with the improved experimental setup to see what I get.
Finally, I compare the computed self noises above with what the manufacturer gives:

As I expected, the self noise using six accelerometers and Wiener filtering is the best I could work out. The three-cornered-hat method works out pretty well from 1 to 10 Hz, but the noise is just too much anywhere higher than 10 Hz. The enclosed, clamped, 3-accelerometer Wiener filter result is an order of magnitude worse than the six-accelerometer Wiener-filtered result, and two orders of magnitude worse than the three-cornered-hat method in the 1 to 10 Hz frequency band.
As I stated, I think we must perform the huddle test with SIX accelerometers and see what kind of results we get.
Attachment 1: selfnoise_allthree_threehat_enclosed.png
Attachment 2: selfnoise_3hat_enclosed_averages.png
Attachment 3: selfnoise_3hat_6hat_enc.png
Attachment 4: miso_wiener_enclosedall.png
Attachment 5: selfnoise_wiener_enclosed.png
Attachment 6: compare_encl.png
11422 | Thu Jul 16 16:46:18 2015 | ericq | Update | General | Starting IFO recovery, DAC troubles
Jamie showed me how to use the SDF system. We created new safe.snap files for all of the running models based on the autoburts from the morning of July 1st, before the upgrade began, and then pruned them of invalid channels.
Now all of the models start up without having to race for the BURT button. 
We saw that c1sus was timing out all over the place once the filter settings had been restored. I was thinking I would move one of the vertex optics into c1mcs, but instead I found it easier to remove the global damping parts. Now the c1sus model runs at ~50usec.
The c1sus frontend's DAC is still nonfunctional. Jamie is seeking advice.
11421 | Thu Jul 16 16:33:56 2015 | Jessica | Update | General | Added Bode Plots of Bandpass Filter
I updated the bandpass filter I was using, finding that having different stopband attenuations before and after the passband better emphasized the area from 3 Hz to 20 Hz. I chose a low passband ripple but high stopband attenuation to do this. My passband ripple was 2 dB, the first stopband attenuation was 25 dB, and the second stopband attenuation was 40 dB. As can be seen in the filter magnitude plot, this resulted in a fairly smooth passband and a fairly steep dropoff to the stopband, which will better emphasize the region I am trying to isolate. My goal was to emphasize the 3-20 Hz region 10-30 times more than the outside regions. Judging by the Bode plot, I think I accomplished this, but I may have chosen the second stopband attenuation to be slightly too high.
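For reference, one way to get different attenuations in the two stopbands is an equiripple design with unequal band weights; here is a hedged Python/scipy sketch, where the sample rate, filter length, band edges, and weights are placeholders, not the actual design parameters used above.

from scipy.signal import remez, freqz

fs = 256.0       # placeholder sample rate
numtaps = 513    # placeholder filter length
# stopband 1 | transition | passband 3-20 Hz | transition | stopband 2
bands = [0, 1.5, 3, 20, 25, fs / 2]
desired = [0, 1, 0]
# A heavier weight forces smaller ripple in that band, i.e. deeper
# attenuation; unequal stopband weights give unequal attenuations.
weight = [10, 1, 60]
taps = remez(numtaps, bands, desired, weight=weight, fs=fs)
w, H = freqz(taps, worN=4096, fs=fs)  # Bode magnitude: 20*log10(abs(H))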
Attachment 1: acc1_update.png
Attachment 2: acc2_update.png
Attachment 3: acc3_update.png
Attachment 4: bp_BodeMag.png
Attachment 5: bp_BodePhase.png
11420 | Thu Jul 16 11:18:37 2015 | jamie | Update | General | Starting IFO recovery, DAC troubles
Quote: |
I've been trying to start recovering IFO functionality, but quickly hit a frustrating roadblock.
Upon opening the PSL shutter, and deactivating the MC mirror watchdogs, I saw the MC reflected beam moving way more than normal.
A series of investigations revealed no signals coming out of c1sus's DAC. 
The IOP (c1x02) shows two of its DAC-related statewords (DAC and DK) in a fault mode, which means (quoting T1100625):
"As of RCG V2.7, if an error is detected in oneor more DAC modules, the IOP will continue to run but only write zero values to the DAC modules as a protective measure. This can only be cleared by restarting the IOP and all applications running on the affected computer."
The offending card may be DAC1, which has its fourth bit red even with only the IOP running, which corresponds to a "FIFO error". /proc/c1x02/status states, in part:
DAC #0 16-bit fifo_status=2 (OK)
DAC #1 16-bit fifo_status=3 (empty)
DAC #2 16-bit fifo_status=2 (OK)
Squishing cables and restarting the frontend have not helped anything.
c1lsc, c1isce[x/y] are not suffering from this problem, and appear to be happily using their DACs. c1ioo does not use any DAC channels.
|
We need to update the indicators on the CDS_FE_STATUS screen to expose the new indicators, so that we have better visibility for these issues.
I'm not sure why this DAC is failing. It may indicate an actual problem with the DAC itself.
Quote: |
As a further headache, any time I restart any of the models on the c1sus frontend, the BURT restore is totally bunk. Moreover, using burtgooey to restore a good snapshot to the c1sus model triggers a timing overflow and model crash, maybe not so surprising since the model seems to be averaging ~56usec or so.
|
This is related to changes to how the front ends load their safe.snaps. I think they're now explicitly expecting the file:
target/<model>/<model>epics/burt/safe.snap
I'll come over this afternoon and we can get acquainted with the new SDF system that now handles management of the safe.snap files.
11419 | Thu Jul 16 03:01:57 2015 | ericq | Update | LSC | Old beatbox hooked back up
I was having issues trying to get reasonable noise performance out of the aLIGO demod board as an ALS DFD. Terminating the LSC whitening inputs did not show much 60Hz noise, and gave an RMS in the single-Hz range.
A 60Hz line of hundreds of uV was visible in the power spectrum of the single ended BNC and double-ended DB25 outputs of the board no matter how I drove or terminated.
So, I tried out hooking up the ALS beatbox. It turns out to work better for the time being; not only is the 60Hz line in the analog outputs about ten times smaller, the broadband noise floor in the resultant beat spectrum when driven by a 55MHz LO on the LSC rack is a fair bit lower too. I wonder if this is due to not driving the aLIGO board LO at the +10dBm it expects. With the amplifiers and beat note amplitudes we have, we'd only be able to supply around 0 dBm anyways.
Here's a comparison of the aLIGO board (black) and ALS beatbox (dark green) driven with the 55MHz LO, both going through the LSC whitening filters for a resultant magnitude of 3kCounts in the I-Q plane. The RMS sensing noise is about 30 times lower for the beatbox. (Note, this is with the old delay cables. When we switch to the 50m cables, we'll win further frequency noise sensitivity through the better degrees->Hz calibration.) I'm very interested to see what the green beat spectrum looks like with this setup.
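As a quick sanity check on why longer cables help (the 0.66 cable velocity factor here is an assumption about the cable type): in a delay-line discriminator the beat note accrues phase $\phi = 2\pi f \tau$ across the delay $\tau$, so

\[
\frac{\Delta f}{\Delta\phi} = \frac{1}{360\,\tau}\ \mathrm{Hz/deg},
\qquad
\tau_{50\,\mathrm{m}} \approx \frac{50\ \mathrm{m}}{0.66\,c} \approx 250\ \mathrm{ns}
\;\Rightarrow\; \sim 11\ \mathrm{kHz/deg},
\]

so the same readout phase noise in degrees maps to proportionally less frequency noise as the delay grows.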

Not only is the 60Hz line smaller, there is simply less junk in the beatbox signal. I did not expect this to be the case.
There were some indications of the funky status of the aLIGO board: channels 3 and 4 are totally nonfunctional, so who knows what's going on in there. I've pulled it out, to take a gander and see if I can figure out how to make it suitable for our purposes.
Attachment 1: beat_comparison.png
Attachment 2: aLIGO_vs_beatbox.xml.zip
11418 | Thu Jul 16 01:04:21 2015 | ericq | Update | General | Starting IFO recovery, DAC troubles
I've been trying to start recovering IFO functionality, but quickly hit a frustrating roadblock.
Upon opening the PSL shutter, and deactivating the MC mirror watchdogs, I saw the MC reflected beam moving way more than normal.
A series of investigations revealed no signals coming out of c1sus's DAC. 
The IOP (c1x02) shows two of its DAC-related statewords (DAC and DK) in a fault mode, which means (quoting T1100625):
"As of RCG V2.7, if an error is detected in oneor more DAC modules, the IOP will continue to run but only write zero values to the DAC modules as a protective measure. This can only be cleared by restarting the IOP and all applications running on the affected computer."
The offending card may be DAC1, which has its fourth bit red even with only the IOP running, which corresponds to a "FIFO error". /proc/c1x02/status states, in part:
DAC #0 16-bit fifo_status=2 (OK)
DAC #1 16-bit fifo_status=3 (empty)
DAC #2 16-bit fifo_status=2 (OK)
Squishing cables and restarting the frontend have not helped anything.
c1lsc, c1isce[x/y] are not suffering from this problem, and appear to be happily using their DACs. c1ioo does not use any DAC channels.
As a further headache, any time I restart any of the models on the c1sus frontend, the BURT restore is totally bunk. Moreover, using burtgooey to restore a good snapshot to the c1sus model triggers a timing overflow and model crash; maybe not so surprising, since the model seems to be averaging ~56usec or so.
11417 | Wed Jul 15 18:19:12 2015 | Jamie | Summary | CDS | CDS upgrade: tentative stability?
Keith Thorne provided his eyes on the situation today and had some suggestions that might have helped things:
Reorder ini file list in master file. Apparently the EDCU.ini file (C0EDCU.ini in our case), which describes EPICS subscriptions to be recorded by the daq, now has to be specified *after* all other front end ini files. It's unclear why, but it has something to do with RTS 2.8, which changed all slow channels to be transported over the mx network. This alone did not fix the problem, though.
Increase second trend frame size. Interestingly, this might have been the key. The second trend frame size was increased to 600 seconds:
start trender 600 60;
The two numbers are the lengths in seconds for the second and minute trends respectively. They had been set to "60 60", but Keith suggested that longer second trend frames are better, for whatever reason. It seems he may be right, given that daqd has been running and writing full and trend frames for 1.5 hours now without issue.
As I'm writing this, though, the daqd just crashed again. I note, though, that it's right after the hour, and immediately following writing out a one hour minute trend file. We've been seeing these hour, on the hour, crashes of daqd for quite a while now. So maybe this is nothing new. I've actually been wondering if the hourly daqd crashes were associated with writing out the minute trend frames, and I think we might have more evidence to point to that.
If increasing the size of the second trend frames from 60 seconds (35M) to 600 seconds (70M) made a difference in stability, could there be an issue with writing out files that are smaller than some size? The full frames are 60M, and the minute trends are 35M.
11416 | Wed Jul 15 17:05:06 2015 | Jessica | Update | General | Bandpass Pre-Filter created
I applied a bandpass filter to the accelerometer huddle data as a pre-filter. The passband was from 5 Hz to 20 Hz. I found that applying this pre-filter did very little when comparing the PSD after pre-filtering to the PSD with no pre-filtering. There was some improvement, just not a significant amount. For some reason, the second accelerometer seemed to improve the most from pre-filtering, while the first and third remained closer to the unfiltered noise. Also, I have not yet figured out a consistent method for choosing the passband ripple and stopband attenuation, both of which determine how good the filter is.
My next step in pre-filtering will be determining a good method for choosing the passband ripple and stopband attenuation, along with implementing other pre-filtering methods to combine with the bandpass filter.
Attachment 1: acc1.png
Attachment 2: acc2.png
Attachment 3: acc3.png
11415 | Wed Jul 15 13:19:14 2015 | Jamie | Summary | CDS | CDS upgrade: reducing mx end-points as last ditch effort
I tried one last thing, suggested by Keith and Gerrit. I tried reducing the number of mx end-points on fb to one, which should reduce the total number of fb threads, in the hope that the extra threads were causing the chokes.
On Tue, Jul 14 2015, Keith Thorne <kthorne@ligo-la.caltech.edu> wrote:
> Assumptions
> 1) Before the upgrade (from RCG 2.6?), the DAQ had been working, reading out front-ends, writing frames trends
> 2) In upgrading to RCG 2.9, the mx start-up on the frame builder was modified to use multiple end-points
> (i.e. /etc/init.d/mx has a line like
> # 1 10G card - X2
> MX_MODULE_PARAMS="mx_max_instance=1 mx_max_endpoints=16 $MX_MODULE_PARAMS"
> (This can be confirmed by the daqd log file with lines at the top like
> 263596
> MX has 16 maximum end-points configured
> 2 MX NICs available
> [Fri Jul 10 16:12:50 2015] ->4: set thread_stack_size=10240
> [Fri Jul 10 16:12:50 2015] new threads will be created with the stack of size 10240K
>
> If this is the case, the problem may be that the additional thread on the frame-builder (one per end-point) take up so many slots on the 8-core
> frame-builder that they interrupt the frame-writing thread, thus preventing the main buffer from being emptied.
>
> One could go back to a single end-point. This only helps keep restart of front-end A from hiccuping DAQ for front-end B.
>
> You would have to remove code on front-ends (/etc/init.d/mx_stream) that chooses endpoints. i.e.
> # find line number in rtsystab. Use that to mx_stream slot on card (0-15)
> line_num=`grep -v ^# /etc/rtsystab | grep --perl-regexp -n "^${hostname}\s" | sed 's/^\([0-9]*\):.*/\1/g'`
> line_off=$(expr $line_num - 1)
> epnum=$(expr $line_off % 2)
> cnum=$(expr $line_off / 2)
>
> start-stop-daemon --start --quiet -b -m --pidfile /var/log/mx_stream0.pid --exec /opt/rtcds/tst/x2/target/x2daqdc0/mx_stream -- -e 0 -r "$epnum" -W 0 -w 0 -s "$sys" -d x2daqdc0:$cnum -l /opt/rtcds/tst/x2/target/x2daqdc0/mx_stream_logs/$hostname.log
As per Keith's suggestion, I modified the mx startup script to only initialize a single endpoint, and I modified the mx_stream startup to point them all to endpoint 0. I verified that indeed daqd was a single MX end-point:
MX has 1 maximum end-points configured
It didn't help. After 5-10 minutes daqd crashes with the same "0 empty blocks" messages.
I should also mention that I'm pretty sure the start of these messages does not seem coincident with any frame writing to disk; further evidence that it's not a disk IO issue.
Keith is looking at the system now, so we'll see if he can spot anything obvious. If not, I will start reverting to 2.5.
11414 | Tue Jul 14 17:14:23 2015 | Eve | Summary | Summary Pages | Future summary pages improvements
Here is a list of suggested improvements to the summary pages. Let me know if there's something you'd like for me to add to this list!
- A lot of plots are missing axis labels and titles, and I often don't know what to call these labels. I could use some help with this.
- Check the weather and vacuum tabs to make sure that we're getting the expected output. Set the axis labels accordingly.
- Investigate past periods of missing data on DataViewer to see if the problem was with the data acquisition process, the summary page production process, or something else.
- Based on trends in data over the past three months, set axis ranges accordingly to encapsulate the full data range.
- Create a CDS tab to store statistics of our digital systems. We will use the CDS signals to determine when the digital system is running and when the minute trend is missing. This will allow us to exclude irrelevant parts of the data.
- Provide duty ratio statistics for the IMC.
- Set triggers for certain plots. For example, for the channels C1:LSC-XARM_OUT_DQ and C1:LSC-YARM_OUT_DQ to be plotted in the Arm LSC Control signals figures, C1:LSC-TRX_OUT_DQ and C1:LSC-TRY_OUT_DQ must be higher than 0.5, thus acting as triggers.
- Include some flag or other marking indicating when data is not being represented at a certain time for specific plots.
- Maybe include some cool features like interactive plots.
11413 | Tue Jul 14 17:06:00 2015 | jamie | Update | CDS | running test on daqd, please leave undisturbed
I have reverted daqd to the previous configuration, so that it's writing frames to disk. It's still showing instability.
11412 | Tue Jul 14 16:51:01 2015 | Jamie | Summary | CDS | CDS upgrade: problem is not disk access
I think I have now determined once and for all that the daqd problems are NOT due to disk IO contention.
I have mounted a tmpfs at /frames/tmp and have told daqd to write frames there. The tmpfs exists entirely in RAM. There is essentially zero IO wait for such a filesystem, so daqd should never have trouble writing out the frames.
Yet daqd continues to fail with the "0 empty blocks in the buffer" warnings. I've been down a rabbit hole.
11411 | Tue Jul 14 16:47:18 2015 | Eve | Update | Summary Pages | Summary page updates continue during upgrade
I've continued to make changes to the summary pages on my own environment, which I plan on implementing on the main summary pages when they are back online.
Motivation:
I created my own summary page environment and manipulated data from June 30 to make additional plots and change already existing plots. The main summary pages (https://nodus.ligo.caltech.edu:30889/detcharsummary/ or https://ldas-jobs.ligo.caltech.edu/~max.isi/summary/) are currently down due to the CDS upgrade, so my own summary page environment acts as a temporary playground to continue working on my SURF project. My summary pages can be found here (https://ldas-jobs.ligo.caltech.edu/~eve.chase/summary/day/20150630/); they contain identical plots to the main summary pages, except for the Summary tab. I'm open to suggestions, so I can make the summary pages as useful as possible.
What I did:
- SUS OpLev: For every already existing optical lever timeseries, I created a corresponding spectrum, showing all channels present in the original timeseries. The spectra are now placed to the right of their corresponding timeseries. I'm still playing with the axes to make sure I set the best ranges.
- SUSdrift: I added two new timeseries, DRMI SUS Pitch and DRMI SUS Yaw, to add to the four already-existing timeseries in this tab. These plots represent channels not previously displayed on the summary pages.
- Minor changes
- Added axis labels on IOO plot 6
- Changed axis ranges of IOO: MC2 Trans QPD and IOO: IMC REFL RFPD DC
- Changed axis label on PSL plot 6
Results:
So far, all of these changes have been properly implemented into my personal summary page environment. I would like some feedback as to how I can improve the summary pages.
11410 | Tue Jul 14 13:55:28 2015 | jamie | Update | CDS | running test on daqd, please leave undisturbed
I'm running a test with daqd right now, so please do not disturb for the moment.
I'm temporarily writing frames into a tmpfs, which is a filesystem that exists purely in memory. There should be ZERO IO contention for this filesystem, so if the daqd failures are due to IO then all problems should disappear. If they don't, then we're dealing with some other problem.
There will be no data saved during this period.
11409 | Tue Jul 14 11:57:27 2015 | jamie | Summary | CDS | CDS upgrade: left running in semi-stable configuration
Quote: |
There remains a pattern to some of the restarts, the following times are all reported as restart times. (There are others in between, however.)
daqd: Tue Jul 14 00:02:48 PDT 2015
daqd: Tue Jul 14 01:02:32 PDT 2015
daqd: Tue Jul 14 03:02:33 PDT 2015
daqd: Tue Jul 14 05:02:46 PDT 2015
daqd: Tue Jul 14 06:01:57 PDT 2015
daqd: Tue Jul 14 07:02:19 PDT 2015
daqd: Tue Jul 14 08:02:44 PDT 2015
daqd: Tue Jul 14 09:02:24 PDT 2015
daqd: Tue Jul 14 10:02:03 PDT 2015
Before the upgrade, we suffered from hourly crashes too:
daqd_start Sun Jun 21 00:01:06 PDT 2015
daqd_start Sun Jun 21 01:03:47 PDT 2015
daqd_start Sun Jun 21 02:04:04 PDT 2015
daqd_start Sun Jun 21 03:04:35 PDT 2015
daqd_start Sun Jun 21 04:04:04 PDT 2015
daqd_start Sun Jun 21 05:03:45 PDT 2015
daqd_start Sun Jun 21 06:02:43 PDT 2015
daqd_start Sun Jun 21 07:04:42 PDT 2015
daqd_start Sun Jun 21 08:04:34 PDT 2015
daqd_start Sun Jun 21 09:03:30 PDT 2015
daqd_start Sun Jun 21 10:04:11 PDT 2015
So, this isn't necessarily new behavior, just something that remains unfixed.
|
That's interesting, that we're still seeing those hourly crashes.
We're not writing out the full set of channels, though, and we're getting more failures than just those at the hour, so we're still suffering.
11408 | Tue Jul 14 10:28:02 2015 | ericq | Summary | CDS | CDS upgrade: left running in semi-stable configuration
There remains a pattern to some of the restarts; the following times are all reported as restart times. (There are others in between, however.)
daqd: Tue Jul 14 00:02:48 PDT 2015
daqd: Tue Jul 14 01:02:32 PDT 2015
daqd: Tue Jul 14 03:02:33 PDT 2015
daqd: Tue Jul 14 05:02:46 PDT 2015
daqd: Tue Jul 14 06:01:57 PDT 2015
daqd: Tue Jul 14 07:02:19 PDT 2015
daqd: Tue Jul 14 08:02:44 PDT 2015
daqd: Tue Jul 14 09:02:24 PDT 2015
daqd: Tue Jul 14 10:02:03 PDT 2015
Before the upgrade, we suffered from hourly crashes too:
daqd_start Sun Jun 21 00:01:06 PDT 2015
daqd_start Sun Jun 21 01:03:47 PDT 2015
daqd_start Sun Jun 21 02:04:04 PDT 2015
daqd_start Sun Jun 21 03:04:35 PDT 2015
daqd_start Sun Jun 21 04:04:04 PDT 2015
daqd_start Sun Jun 21 05:03:45 PDT 2015
daqd_start Sun Jun 21 06:02:43 PDT 2015
daqd_start Sun Jun 21 07:04:42 PDT 2015
daqd_start Sun Jun 21 08:04:34 PDT 2015
daqd_start Sun Jun 21 09:03:30 PDT 2015
daqd_start Sun Jun 21 10:04:11 PDT 2015
So, this isn't necessarily new behavior, just something that remains unfixed.
11407 | Tue Jul 14 10:23:27 2015 | Ignacio | Update | General | Optimal detector array placement thoughts
Over the past few days, I've been thinking about how to work out the details concerning Rana's request for a 'map' of the vicinity of the 40m interferometer. This map will take the positions of N randomly placed seismic sensors, the signals measured by each one of them, and the calculated cross-correlations between the sensors and between the sensors and the test mass of interest, and give out a displacement vector with new sensor positions that are close to optimum for better seismic (and Newtonian) noise cancellation.
Now, I believe that much of the mathematical detail has already been worked out by Jenne in her thesis. She explains that the quantity of interest that we wish to minimize in order to find an optimal array is of the form

R = \sqrt{1 - \frac{\vec{c}^{\,T}\,\mathbf{C}^{-1}\,\vec{c}}{\sigma^2}}

where $\vec{c}$ is the cross-correlation vector between the seismic detectors and the seismic (or Newtonian) noise, $\mathbf{C}$ is the cross-correlation matrix between the sensors, and $\sigma^2$ is the seismic (or Newtonian) noise variance.
I looked at the paper that Jenne cited, from which she obtained the above quantity, and noted that it is a bit different, as it contains an extra term inside the square root:

R = \sqrt{1 - \frac{\vec{c}^{\,T}\,(\mathbf{C} + \mathbf{N})^{-1}\,\vec{c}}{\sigma^2}}

where the new term, $\mathbf{N}$, is the matrix describing the self noise of the sensors. I think Jenne set this term to zero since we can always perform a huddle test on our detectors and know the self noise, thus effectively subtracting it from the signals of interest that we use to calculate the other cross-correlation quantities.
Anyways, the quantity above is a function of the positions of the sensors. In order to apply it to our situation, I'm planning on (see the sketch after this list):
1) Performing the huddle tests on our sensors, redoing it for the accelerometers and then the seismometers (once the data acquisition system is working...).
2) Placing the sensors around the interferometer (not quite randomly; there are some assumptions we can make as to what might work best in terms of sensor placement). I'm planning on using all six Wilcoxon 731A accelerometers, the two Guralps, and the STS seismometer (any more?).
3) Measuring the ground signals and using Wiener filtering in order to cancel out their self noises.
4) From the measured signals and their present positions we should be able to figure out where to move the sensors in order to optimize subtraction.
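As a sketch of step 4, one could evaluate the residual R directly for a candidate layout and hand it to an optimizer. Everything below is hypothetical Python scaffolding (the model functions especially), not working code from this analysis:

import numpy as np

def residual(positions, cvec_model, Cmat_model, Nmat, sigma2):
    # R = sqrt(1 - c^T (C + N)^{-1} c / sigma^2) for sensors at `positions`.
    # cvec_model / Cmat_model are hypothetical functions returning the modeled
    # sensor-to-noise cross-correlation vector and the sensor-sensor
    # correlation matrix for a given layout (e.g. from a simulated field).
    c = cvec_model(positions)
    C = Cmat_model(positions) + Nmat  # include the sensor self-noise term
    x = 1.0 - (c @ np.linalg.solve(C, c)) / sigma2
    return np.sqrt(max(x, 0.0))

# e.g. minimize residual() over the flattened sensor coordinates with
# scipy.optimize.minimize(..., method='Nelder-Mead') to nudge a starting
# layout toward a (locally) optimal one.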
I have also been messing around with Jenne's code on seismic field simulations, with the hope of simulating a version of the seismic field around the 40m in order to understand the NN of the site a little better... maybe. While the data acquisition gets back to a working state, I'm planning on using my simulated NN curve as a way to play around with sensor optimization before it's done experimentally.
I have also been thinking and learning a little bit about source characterization through machine learning methods, especially using neural networks as Masha did back in her SURF project in 2012. I have also been looking at support vector machines. The reason I have been looking at machine learning algorithms is the ever-changing nature of the seismic field around the interferometer. Suppose we find a pretty good sensor array that we like. How do we make sure that this array is any good at some time t after it has been found? If the array mostly deals with the usual seismic background (quiet) of the site of interest, we could incorporate machine learning techniques in order to mitigate the more random disturbances that happen around the sites, like delivery trucks, earthquakes, etc.
11406 | Tue Jul 14 09:08:37 2015 | Jamie | Summary | CDS | CDS upgrade: left running in semi-stable configuration
Overnight daqd restarted itself only about twice an hour, which is an improvement:
controls@fb /opt/rtcds/caltech/c1/target/fb 0$ tail logs/restart.log
daqd: Tue Jul 14 03:13:50 PDT 2015
daqd: Tue Jul 14 04:01:39 PDT 2015
daqd: Tue Jul 14 04:09:57 PDT 2015
daqd: Tue Jul 14 05:02:46 PDT 2015
daqd: Tue Jul 14 06:01:57 PDT 2015
daqd: Tue Jul 14 06:43:18 PDT 2015
daqd: Tue Jul 14 07:02:19 PDT 2015
daqd: Tue Jul 14 07:58:16 PDT 2015
daqd: Tue Jul 14 08:02:44 PDT 2015
daqd: Tue Jul 14 09:02:24 PDT 2015
Un-exporting /frames might have helped a bit. However, the problem is obviously still not fixed.
11405 | Mon Jul 13 18:27:27 2015 | Eve | Configuration | General | How to set up your own summary page environment on the LDG cluster
I'd like to build off of Koji's instructions with a few useful tips I discovered while setting up my own summary page environment.
To only make a specified selection of tabs for the summary pages, copy only the corresponding .ini files into /home/albert.einstein/summary/config and run the gw_daily_summary_custom following Koji's instructions below. When asked for nodus's password either hit "enter" three times without providing the password or comment out this section of the code to stop the summary page creation process from taking current data files from nodus. This is especially helpful when the 40m is down (like it is now).
After running the summary page code, the pages can be viewed at https://ldas-jobs.ligo.caltech.edu/~albert.einstein/summary/day/YYYYMMDD/ and corresponding error logs can be found at ~/public_html/summary/logs/gw_summary_pipe_local-687496-0.err.
11404 | Mon Jul 13 18:12:50 2015 | Jamie | Summary | CDS | CDS upgrade: left running in semi-stable configuration
I have been watching daqd all day and I don't feel particularly closer to understanding what the issues are. However, things are at least semi-stable for the moment.
Interestingly, though, the stability appears highly variable at the moment. This morning, daqd was very unstable and was crashing within a couple of minutes of starting. However, this afternoon things seemed much more stable. As of this moment, daqd has been running for 25 minutes now, writing full frames as well as minute and second trends (no minute_raw), without any issues. What has changed?
To reiterate, I have been closely watching disk IO to /frames. I see no indication that there is any disk contention while daqd is failing. It's still possible, though, that there are disk IO issues affecting daqd at a level that is not readily visible. From dstat, the frame writes are visible, but nothing else.
I have made one change that could be positively affecting things right now: I un-exported /frames from NFS. This eliminates anything external from reading /frames over the network. In particular, it also shuts off the transfer of frames to LDAS. Since I've done this, daqd has appeared to be more stable. It's NOT totally stable, though, as the instance that I described above did eventually just die after 43 minutes, as I was writing this.
In any event, as things are currently as stable as I've seen them, I'm leaving it running in this configuration for the moment, with the following relevant daqdrc parameters:
start main 16;
start frame-saver;
sync frame-saver;
start trender 60 60;
start trend-frame-saver;
sync trend-frame-saver;
start minute-trend-frame-saver;
sync minute-trend-frame-saver;
start profiler;
start trend profiler;
11403 | Mon Jul 13 14:08:10 2015 | ericq | Update | Electronics | New RF amps, housed
I made a little box for the new RF amplifiers we'll be using for the green beatnotes, to keep things tidy on the PSL table. They are both Minicircuits model ZHL-3A-S.

I took TFs of their response with the Agilent analyzer (calibrating out the cables, splitters, etc.). Powered at +24V, we get a solid ~27dB of gain up to around 200MHz, which is fine for our needs. The phase profile is mostly a 6-7 nsec delay, which is negligible for ALS. Data files are attached.

Koji looked at me like I was crazy for using a BNC connector for the DC power. I haven't yet been able to find panel mount banana connectors, but when I do, I'll replace it.
Banana'd:

Attachment 1: ampBox.jpg
Attachment 2: ampTFs.png
Attachment 3: ampTFs.zip
Attachment 4: ampBox2.jpg
11402 | Mon Jul 13 01:11:14 2015 | Jamie | Summary | CDS | CDS upgrade: current assessment
daqd is still behaving unstably. It's still unclear what the issue is.
The current failures look like disk IO contention. However, it's hard to see any evidence that daqd is suffering from large IO wait while it's failing.
The frame size itself is currently smaller than it was before the upgrade:
controls@fb /frames/full 0$ ls -alth 11190 | head
total 369G
drwxr-xr-x 321 controls controls 36K Jul 12 22:20 ..
drwxr-xr-x 2 controls controls 268K Jun 23 06:06 .
-rw-r--r-- 1 controls controls 67M Jun 23 06:06 C-R-1119099984-16.gwf
-rw-r--r-- 1 controls controls 68M Jun 23 06:06 C-R-1119099968-16.gwf
-rw-r--r-- 1 controls controls 69M Jun 23 06:05 C-R-1119099952-16.gwf
-rw-r--r-- 1 controls controls 69M Jun 23 06:05 C-R-1119099936-16.gwf
-rw-r--r-- 1 controls controls 67M Jun 23 06:05 C-R-1119099920-16.gwf
-rw-r--r-- 1 controls controls 68M Jun 23 06:05 C-R-1119099904-16.gwf
-rw-r--r-- 1 controls controls 68M Jun 23 06:04 C-R-1119099888-16.gwf
controls@fb /frames/full 0$ ls -alth 11208 | head
total 17G
drwxr-xr-x 2 controls controls 20K Jul 13 01:00 .
-rw-r--r-- 1 controls controls 45M Jul 13 01:00 C-R-1120809632-16.gwf
-rw-r--r-- 1 controls controls 50M Jul 13 01:00 C-R-1120809408-16.gwf
-rw-r--r-- 1 controls controls 50M Jul 13 00:56 C-R-1120809392-16.gwf
-rw-r--r-- 1 controls controls 50M Jul 13 00:56 C-R-1120809376-16.gwf
-rw-r--r-- 1 controls controls 50M Jul 13 00:56 C-R-1120809360-16.gwf
-rw-r--r-- 1 controls controls 50M Jul 13 00:55 C-R-1120809344-16.gwf
-rw-r--r-- 1 controls controls 50M Jul 13 00:55 C-R-1120809328-16.gwf
controls@fb /frames/full 0$
This would seem to indicate that it's not an increase in frame size that's to blame.
Because slow data is now transported to daqd over the MX data concentrator network rather than via EPICS (RTS 2.8), there is more traffic on the MX network. I note also that the channel lists have increased in size:
controls@fb /opt/rtcds/caltech/c1/chans/daq 0$ ls -alt archive/C1LSC* | head -20
-rw-r--r-- 1 4294967294 4294967294 262554 Jul 6 18:21 archive/C1LSC_150706_182146.ini
-rw-r--r-- 1 4294967294 4294967294 262554 Jul 6 18:16 archive/C1LSC_150706_181603.ini
-rw-r--r-- 1 4294967294 4294967294 262554 Jul 6 16:09 archive/C1LSC_150706_160946.ini
-rw-r--r-- 1 4294967294 4294967294 43366 Jul 1 16:05 archive/C1LSC_150701_160519.ini
-rw-r--r-- 1 4294967294 4294967294 43366 Jun 25 15:47 archive/C1LSC_150625_154739.ini
...
I would have thought, though, that data transmission errors would show up in the daqd status bits.
11401 | Fri Jul 10 17:57:38 2015 | Max Isi | Update | General | Summary pages down
The summary pages are currently unstable due to priority issues on the cluster*. The plots had been empty ever since the CDS upgrade started anyway. This issue will (presumably) disappear once the jobs are moved to the new 40m shared LDAS account by the end of next week.
*namely, the jobs are put on hold (rather, status "idle") because we have low priority in the processing queue, making the usual 30min latency impossible.
11400 | Thu Jul 9 16:50:13 2015 | Jamie | Summary | CDS | CDS upgrade: if all else fails try throwing metal at the problem
I roped Rolf into coming over and adding his eyes to the problem. After much discussion we couldn't come up with any reasonable explanation for the problems we've been seeing other than daqd just needing a lot more resources than it did before. He said he had some old Sun SunFire X4600s from which we could pilfer memory. I went over to Downs and ripped all the CPU/memory cards out of one of his machines and stuffed them into fb:
fb now has 8 CPU and 16G of RAM
Unfortunately, this is still not enough. Or at least it didn't solve the problem; daqd is showing the same instabilities, falling over a couple of minutes after I turn on trend frame writing. As always, before daqd fails it starts spitting out the following to the logs:
[Thu Jul 9 16:37:09 2015] main profiler warning: 0 empty blocks in the buffer
followed by lines like:
[Thu Jul 9 16:37:27 2015] GPS MISS dcu 44 (ASX); dcu_gps=1120520264 gps=1120519812
right before it dies.
I'm no longer convinced that this is a resource issue, though, judging by the resource usage right before the crash:
top - 16:47:32 up 48 min, 5 users, load average: 0.91, 0.62, 0.61
Tasks: 2 total, 0 running, 2 sleeping, 0 stopped, 0 zombie
Cpu(s): 8.9%us, 0.9%sy, 0.0%ni, 89.1%id, 0.9%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 15952104k total, 13063468k used, 2888636k free, 138648k buffers
Swap: 1023996k total, 0k used, 1023996k free, 7672292k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12016 controls 20 0 8098m 4.4g 104m S 106 29.1 6:45.79 daqd
4953 controls 20 0 53580 6092 5096 S 0 0.0 0:00.04 nds
Load average less than 1 per CPU, plenty of free memory (~3G free, 0 swap), no waiting for IO (0.9%wa), etc. daqd is utilizing lots of threads, which should be spread across many cpus, so even the >100% CPU should be ok. I'm at a loss...
11399 | Thu Jul 9 16:39:03 2015 | Koji | Configuration | General | How to set up your own summary page environment on the LDG cluster
Here is the summary of my investigation of how to set up my own "summary page" environment on the LDG (LIGO Data Grid) cluster.
Here all albert.einstein must be replaced with your own LIGO.ORG user name.
1. Obtain LDAS cluster account
Run the following from any of the terminal and use your LIGO.ORG credential
ssh albert.einstein@ssh.ligo.org
You will be suggested to visit a particular web site. Fill the form on the web site and wait for the approval e-mails.
2. Use LDG SSH login portal
Once you received the approval of the account, you should be able to log in to the system. Type the following command again from your local terminal
ssh albert.einstein@ssh.ligo.org
You are asked to select the site and machines. Select 2- CIT and b. ldas-pcdev1, c. ldas-pcdev2, or d. ldas-pcdev3.
3. Setup bash environment
Setting up the python library path is very important for the proper processing.
Here is my setup for .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
if [ -f ~/.profile ]; then
. ~/.profile
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# So that ssh will work, take care with X logins - see .xsession
[[ -z $SSH_AGENT_PID && -z $DISPLAY ]] &&
exec -l ssh-agent $SHELL -c "bash --login"
|
and .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# Set Python environment (based on gpwy-env script)
# clean path environment variable of duplicate entries
cleanpath() {
if [[ -z "$1" ]]; then
set -- PATH    # default to cleaning PATH when no argument is given
fi
# map to local variable
local badpath=$(eval echo \$$1)
badpath=${badpath%%:}
# remove duplicates
badpath="$(echo "${badpath}" | awk -v RS=':' -v ORS=":" '!a[$1]++')"
# remove trailing colon
badpath=${badpath%%:}
# reset variable and export
eval $1=${badpath}
eval export $1
}
# set PATH
cleanpath PATH
cleanpath PYTHONPATH
PATH=${HOME}/.local/bin:${PATH}
PYTHONPATH=${HOME}/.local/lib/python2.6/site-packages:${PYTHONPATH}
|
The order of cleanpath and PYTHONPATH= is important as we want to use the local library installation before anything kicks in.
4. Install required Python libraries
Run the following lines in this order so that the packages are installed in your ~/.local:
# PIP installation
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py --user
# numpy, scipy, distribute, matplotlib, astropy, importlib installation
pip install numpy --upgrade --user
pip install scipy --upgrade --user
pip install distribute --upgrade --user
pip install matplotlib --user --upgrade
pip install astropy --upgrade --user
pip install importlib --user --upgrade
# We need to use dev branch of gwpy to run gwsumm propery
cp -r /home/detchar/opt/gwpysoft/lib/python2.6/site-packages/gwpy* ~/.local/lib/python2.6/site-packages/
# gwsumm installation
pip install --user git+https://github.com/gwpy/gwsumm
|
5. Setup summary pages for the 40m
Copy summary page setting from Max's directory.
cp -r ~max.isi/summary ~/
And make temporary directory for the summary pages.
mkdir /usr1/albert.einstein/summary
6. Fix typos in gw_daily_summary_custom
Use your own editor to fix the typos:
emacs ~/summary/bin/gw_daily_summary_custom
replace max.isi with albert.einstein
change summary40m -> summary
Now the installation is done. From here, the description is for the routine procedure.
7. Run your summary page code
Run Kerberos authentication
kinit albert.einstein@LIGO.ORG
Run a summary page code for a specific date (e.g. for Jul 1st, 2015)
bash ${HOME}/summary/bin/gw_daily_summary_custom --day 20150701
The result can be checked under
https://ldas-jobs.ligo.caltech.edu/~albert.einstein/summary/
https://ldas-jobs.ligo.caltech.edu/~albert.einstein/summary/day/20150701/
Rerun a code for a specific page. This requires the page structure already exists.
The red texts should be modified depending on what ini file you want to run for what day.
/home/albert.einstein/.local/bin/gw_summary day --on-segdb-error warn --verbose --output-dir . --multi-process 20 --no-html --ifo C1 --archive C1EVE 20150630 --config-file /mnt/qfs2/albert.einstein/public_html/summary/etc/defaults.ini,/mnt/qfs2/albert.einstein/public_html/summary/etc/c1eve.ini
This command can actually be found in
https://ldas-jobs.ligo.caltech.edu/~albert.einstein/summary/gw_summary_pipe.sh
8. Some useful command
To check which python library is used
python -c 'import gwpy; print gwpy.__file__'
To list installed python libraries and versions
pip list
This should return the list like the following.
...
astropy (1.0.3)
...
gwpy (0.1b1.dev121)
gwsumm (0.0.0.dev854)
...
matplotlib (1.4.3)
...
numpy (1.9.2)
...
scipy (0.15.1)
...
|
11398 | Thu Jul 9 13:26:47 2015 | Jamie | Summary | CDS | CDS upgrade: new mx 1.2.16 installed
I rebuilt/installed mx 1.2.16 to use "ether-mode", instead of the default MX-10G:
controls@fb /opt/src/mx-1.2.16 0$ ./configure --enable-ether-mode --prefix=/opt/mx-1.2.16
...
controls@fb /opt/src/mx-1.2.16 0$ make
..
controls@fb /opt/src/mx-1.2.16 0$ make install
...
I then rebuilt/installed daqd so that it properly linked against the updated mx install:
controls@fb /opt/rtcds/rtscore/release/src/daqd 0$ ./configure --enable-debug --disable-broadcast --without-myrinet --with-mx --with-epics=/opt/rtapps/epics/base --with-framecpp=/opt/rtapps/framecpp --enable-local-timing
...
controls@fb /opt/rtcds/rtscore/release/src/daqd 0$ make
...
controls@fb /opt/rtcds/rtscore/release/src/daqd 0$ install daqd /opt/rtcds/caltech/c1/target/fb/
It's now back to running and receiving data from the front ends (still not stable yet, though).
11397 | Wed Jul 8 21:02:02 2015 | Jamie | Summary | CDS | CDS upgrade: another step forward, so we're back to where we started (plus a bit?)
Koji did a bit of googling to determine that the 'Wrong Network' status message could be explained by the fb myrinet operating in the wrong mode:
(This was the useful link to track down the issue (KA))
Network: Myrinet 10G
I didn't notice it before, but we should in fact be operating in "Ethernet" mode, since that's the fabric we're using for the DC network. Digging a bit deeper we found that the new version of mx (1.2.16) had indeed been configured with a different compile option than the 1.2.15 version had:
controls@fb ~ 0$ grep '$ ./configure' /opt/src/mx-1.2.15/config.log
$ ./configure --enable-ether-mode --prefix=/opt/mx
controls@fb ~ 0$ grep '$ ./configure' /opt/src/mx-1.2.16/config.log
$ ./configure --enable-mx-wire --prefix=/opt/mx-1.2.16
controls@fb ~ 0$
So that would entirely explain the problem. I re-linked mx to the older version (1.2.15), reloaded the mx drivers, and everything showed up correctly:
controls@fb ~ 0$ /opt/mx/bin/mx_info
MX Version: 1.2.12
MX Build: root@fb:/root/mx-1.2.12 Mon Nov 1 13:34:38 PDT 2010
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
===================================================================
Instance #0: 299.8 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
Status: Running, P0: Link Up
Network: Ethernet 10G
MAC Address: 00:60:dd:46:ea:ec
Product code: 10G-PCIE-8AL-S
Part number: 09-03916
Serial number: 352143
Mapper: 00:60:dd:46:ea:ec, version = 0x00000000, configured
Mapped hosts: 6
ROUTE COUNT
INDEX MAC ADDRESS HOST NAME P0
----- ----------- --------- ---
0) 00:60:dd:46:ea:ec fb:0 1,0
1) 00:25:90:0d:75:bb c1sus:0 1,0
2) 00:30:48:be:11:5d c1iscex:0 1,0
3) 00:30:48:d6:11:17 c1iscey:0 1,0
4) 00:30:48:bf:69:4f c1lsc:0 1,0
5) 00:14:4f:40:64:25 c1ioo:0 1,0
controls@fb ~ 0$
The front end hosts are also showing good omx info (even though they had been previously as well):
controls@c1lsc ~ 0$ /opt/open-mx/bin/omx_info
Open-MX version 1.5.2
build: controls@fb:/opt/src/open-mx-1.5.2 Tue May 21 11:03:54 PDT 2013
Found 1 boards (32 max) supporting 32 endpoints each:
c1lsc:0 (board #0 name eth1 addr 00:30:48:bf:69:4f)
managed by driver 'igb'
Peer table is ready, mapper is 00:30:48:d6:11:17
================================================
0) 00:30:48:bf:69:4f c1lsc:0
1) 00:60:dd:46:ea:ec fb:0
2) 00:25:90:0d:75:bb c1sus:0
3) 00:30:48:be:11:5d c1iscex:0
4) 00:30:48:d6:11:17 c1iscey:0
5) 00:14:4f:40:64:25 c1ioo:0
controls@c1lsc ~ 0$
This got all the mx_stream connections back up and running.
Unfortunately, daqd is back to being a bit flaky. With all frame writing enabled we saw daqd crash again. I then shut off all trend frame writing and we're back to a marginally stable state: we have data flowing from all front ends, and full frames are being written, but not trends.
I'll pick up on this again tomorrow, and maybe try to rebuild the new version of mx with the proper flags.
11396 | Wed Jul 8 20:37:02 2015 | Jamie | Summary | CDS | CDS upgrade: one step forward, two steps back
After determining yesterday that all the daqd issues were coming from the frame writing, I started to dig into it more today. I also spoke to Keith Thorne, and got some good suggestions from Gerrit Kuhn at GEO.
I realized that it probably wasn't the trend writing per se, but that turning on more writing to disk was causing increased load on daqd, and consequently on the system itself. With more frame writing turned on, the memory consumption increased to the point of maxing out the physical RAM. The system then probably started swapping, which certainly would have choked daqd.
I noticed that fb only had 4G of RAM, which Keith suggested was just not enough. Even if the memory consumption of daqd has increased significantly, it still seems like 4G would not be enough. I opened up fb only to find that fb actually had 8G of RAM installed! Not sure what happened to the other 4G, but somehow they were not visible to the system. Koji and I eventually determined, via some frankenstein operations with megatron, that the RAM was just dead. We then pulled 4G of RAM from megatron and replaced the bad RAM in fb, so that fb now has a full 8G of RAM.
Unfortunately, when we got fb fully back up and running, we found that fb is not able to see any of the other hosts on the data concentrator network. mx_info, which displays the card and network status for the myricom myrinet fiber card, shows:
MX Version: 1.2.16
MX Build: controls@fb:/opt/src/mx-1.2.16 Tue May 21 10:58:40 PDT 2013
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
===================================================================
Instance #0: 299.8 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
Status: Running, P0: Wrong Network
Network: Myrinet 10G
MAC Address: 00:60:dd:46:ea:ec
Product code: 10G-PCIE-8AL-S
Part number: 09-03916
Serial number: 352143
Mapper: 00:60:dd:46:ea:ec, version = 0x63e745ee, configured
Mapped hosts: 1
ROUTE COUNT
INDEX MAC ADDRESS HOST NAME P0
----- ----------- --------- ---
0) 00:60:dd:46:ea:ec fb:0 D 0,0
Note that all front end machines should be listed in the table at the bottom, and they're not. Also note the "Wrong Network" note in the Status line above. It appears that the card has maybe been initialized in a bad state? Or Koji and I somehow disturbed the network when we were cleaning up things in the rack. "sudo /etc/init.d/mx restart" on fb doesn't solve the problem. We even rebooted fb and it didn't seem to help.
In any event, we're back to no data flow. I'll pick up again tomorrow.
11395 | Wed Jul 8 17:46:20 2015 | Jessica | Summary | General | Updated Time Delay Plots
I re-measured the transfer function for Cable B, because the residuals in my previous post for cable B indicated a bad fit.
I also realized I had made a mistake in calculating the time delay, and calculated more reasonable time delays today.
Cable A had a delay of 202.43 +- 0.01 ns.
Cable B had a delay of 202.44 +- 0.01 ns.
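For reference, the delay is presumably extracted from the slope of the fitted transfer-function phase: with $\phi$ in degrees,

\[
\tau = -\frac{1}{360}\,\frac{d\phi}{df},
\]

so a linear fit of $\phi(f)$ over the measurement band yields $\tau$ and its statistical uncertainty.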
Attachment 1: resid_CableA.png
Attachment 2: resid_CableB.png
11394
|
Tue Jul 7 23:26:19 2015 |
Koji | Update | CDS | Attempt to list CDS issues | As Jamie succeeded in getting the 40m CDS into a somewhat workable condition, I tried to list the obvious CDS issues so that we can attack them one by one.
c1cal is constantly timing out now (t>60us). c1sus is close to it (t=56~57us).
- We should check the trends of the CPU meters ("C1:FEC-**_CPU_METER"). In fact this should be listed in the summary pages in a new CDS tab. (A sketch of how to fetch these trends is below.)
- Probably this is related to 1) (the c1cal timeouts): c1lsc is constantly showing an IPC error (bit0 = shmem). C1LSC_IPC_STATUS.adl tells us that this is coming from the IPC error between c1lsc and c1cal ("C1:CAL-LSC_SENSMAT_OSC_****"). This information is found by opening the C1LSC_GDS_TP.adl screen and clicking the RT NET STAT button next to the IPC error status.
- We wonder whether RFM access has been sped up or slowed down by this upgrade.
- We need tests to see if the time delays of the models/IPCs are still reasonable.
- The LSC Overview screen has a small digest of the CDS status. Now there are many white boxes that correspond to the channels "C1:FEC-**_DIAG1".
- All realtime systems have default (0 or 1) EPICS channel values (e.g. gains, FM switches, matrices, etc.). Need burtrestores.
- I tried to burtrestore the models, but burtgooey indicated there were some errors.
- Detailed check of the snapshot files, comparing /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2015/Jul/7/19:07 and /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2015/Jun/1/19:07:
c1alsepics shows a bunch of volatile channels to be snapshot. It seems that all of the static EPICS channels are missing from the snapshot file. Is this related to the current omission of the slow data acquisition? => No, actually this must be due to the modification of the ALS model to accommodate the ALS in the LSC model for the new ALS setup.
c1lscepics was checked; indeed, the slow channels were properly snapshot. So what was the problem in burting??? (A sketch of a snapshot-comparison helper is below.)
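For this kind of comparison, something like the following hypothetical helper could list the channels present in one autoburt snapshot but missing from another; it assumes the standard BURT header markers and that the first token of each body line is the channel name.
def burt_channels(path):
    # Collect channel names from a BURT .snap file, skipping the header
    chans = set()
    in_header = False
    with open(path) as f:
        for line in f:
            if line.startswith('--- Start BURT header'):
                in_header = True
            elif line.startswith('--- End BURT header'):
                in_header = False
            elif not in_header and line.strip():
                chans.add(line.split()[0])
    return chans

old = burt_channels('/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2015/Jun/1/19:07/c1alsepics.snap')
new = burt_channels('/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2015/Jul/7/19:07/c1alsepics.snap')
print('\n'.join(sorted(old - new)))  # channels that disappeared in July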
- I made a simple csh script to restore the snapshots one by one while collecting the error messages. The script is located at /users/koji/150707/burtrevert.sh:
#!/bin/csh
# Restore every BURT snapshot (*epics.snap) found in the given directory,
# one file at a time, printing a banner before each so that any burtwb
# errors can be matched to the file that produced them.
echo 'This script restores all of the snapshot files found in' $argv[1] '.'
echo 'Are you sure? y/n'
set ans = $<
# Normalize the answer to lower case
set ANS = `echo $ans | tr "[:upper:]" "[:lower:]"`
if ($ANS == y) then
    foreach fname ($argv[1]/*epics.snap)
        echo ''
        echo '#################################'
        echo $fname
        echo '#################################'
        burtwb -f $fname
    end
else
    echo "exiting..."
endif
- Now I ran the command:
./burtrevert.sh /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2015/Jun/1/19:07 &>burt.log
This lists the missing channels. The zipped log is attached to this entry.
- Burting an old snapshot always crashes the RT process "c1sus" (not the c1sus host). If I use the snapshot newly generated today, the process does not crash outright, but it halts at a cycle time of 74us (>60us). I left the process crashed so that we can take a new snapshot with the matrix numbers filled in. Once we have the correct snapshot, we won't need to worry about this crash. Let's see.
- c1sus still crashes with the new burt file. There must be a trigger that freezes the model. We need to split the burt file into pieces to figure out which line causes the halt (a bisection sketch follows).
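One way to do the splitting, as a hypothetical sketch: write the two halves of a snapshot's channel lines into new .snap files (keeping the BURT header intact), restore each half with burtwb, and recurse into whichever half freezes the model.
def split_snap(path):
    # Split a BURT snapshot into two half-snapshots, preserving the header
    with open(path) as f:
        lines = f.readlines()
    end = next(i for i, l in enumerate(lines)
               if l.startswith('--- End BURT header')) + 1
    header, body = lines[:end], lines[end:]
    mid = len(body) // 2
    for tag, part in (('a', body[:mid]), ('b', body[mid:])):
        with open(path + '.' + tag, 'w') as f:
            f.writelines(header + part)

split_snap('c1susepics.snap')
# then: burtwb -f c1susepics.snap.a  (and .b), and recurse on the bad half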
|
Attachment 1: burt.log.zip
|
11393
|
Tue Jul 7 18:27:54 2015 |
Jamie | Summary | CDS | CDS upgrade: progress! | After a couple of days of struggle, I made some progress on the CDS upgrade today:

Front end status:
- RTS upgraded to 2.9.4, and linked in as "release":
/opt/rtcds/rtscore/release -> tags/advLigoRTS-2.9.4
- mbuf kernel module built and installed
- All front ends have been rebooted with the latest patched kernel (from 2.6 upgrade)
- All models have been rebuilt, installed, restarted. Only minor model issues had to be corrected (unterminated unused inputs mostly).
- awgtpman rebuilt, and installed/running on all front-ends
- open-mx upgraded to 1.5.2:
/opt/open-mx -> open-mx-1.5.2
- All front ends running latest version of mx_stream, built against 2.9.4 and open-mx-1.5.2.
We have new GDS overview screens for the front end models:

It's possible that our current lack of IRIG-B GPS distribution means that the 'TIM' status bit will always be red on the IOP models. Will consult with Rolf.
There are other new features in the front ends that I can get into later.
DAQ (fb) status:
- daqd and nds rebuilt against 2.9.4, both now running on fb
40m daqd compile flags:
cd src/daqd
./configure --enable-debug --disable-broadcast --without-myrinet --with-mx --enable-local-timing --with-epics=/opt/rtapps/epics/base --with-framecpp=/opt/rtapps/framecpp
make
install daqd /opt/rtcds/caltech/c1/target/fb/
make clean
However, daqd has unfortunately been very unstable, and I've been trying to figure out why. I originally thought it was some sort of timing issue, but now I'm not so sure.
I had to make the following changes to the daqdrc:
set gps_leaps = 820108813 914803214 1119744016;
That enumerates the GPS times of some set of leap seconds. Not sure if it actually does anything, but I added the latest leap second anyway:
set symm_gps_offset=315964803;
This updates the silly, arbitrary GPS offset, which is required to be correct when not using an external GPS reference.
Finally, the thing that actually got it running stably was to turn off all trend frame writing:
# start trender;
# start trend-frame-saver;
# sync trend-frame-saver;
# start minute-trend-frame-saver;
# sync minute-trend-frame-saver;
# start raw_minute_trend_saver;
For whatever reason, it's the trend frame writing that was causing daqd to fall over after a short amount of time. I'll continue investigating tomorrow.
We still have a lot of cleanup, burt restores, testing, etc. to do, but we're getting there. |
11392
|
Tue Jul 7 17:22:16 2015 |
Jessica | Summary | | Time Delay in ALS Cables | I measured the transfer functions in the delay line cables, and then calculated the time delay from that.
The first cable had a time delay of 1272 ns and the second had a time delay of 1264 ns.
Below are the plots I created to calculate this. There does seem to be a pattern in the residual plots, however, which was not expected.
The R-square parameter was very close to 1 for both fits, which would normally indicate a good fit; the structure in the residuals, though, suggests the fits should be revisited. |
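For reference, a minimal sketch of the delay calculation under stated assumptions: the transfer function's unwrapped phase is fit to a line in frequency, and the delay is the slope over -2*pi. The file name and column layout are hypothetical, not the actual analysis code.
import numpy as np

# Hypothetical two-column file: frequency (Hz), phase (degrees)
freq, phase_deg = np.loadtxt('cableA_tf.txt', unpack=True)
phase = np.unwrap(np.deg2rad(phase_deg))

# For a pure delay, phase = -2*pi*f*tau, so tau comes from the fitted slope
slope, intercept = np.polyfit(freq, phase, 1)
tau = -slope / (2 * np.pi)

residuals = phase - (slope * freq + intercept)  # structure here means a bad model
print('delay = %.2f ns' % (tau * 1e9))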
Attachment 1: cableA_fit.jpg
|
|
Attachment 2: cableA_resid.jpg
|
|
Attachment 3: cableB_fit.jpg
|
|
Attachment 4: cableB_resid.jpg
|
|
11391
|
Sun Jul 5 18:14:13 2015 |
Ignacio | Update | PEM | Wilcoxon Accelerometer Huddle Test | Updated: On Thursday/Friday (sorry for the late elog) I was working with Eric's Wilcoxon 731A accelerometer huddle test data, which was taken without the box and cables being adjusted properly. Anyway, I performed the three-cornered hat analysis as he had done, but I also performed a six-cornered hat analysis, instead of permuting through the accelerometers in groups of three. The following plots of the ASDs show the results,

It is interesting to note the improvement at low frequencies when six accelerometers are used instead of three, while at higher frequencies we can clearly see that the results are worse than the three-hat results. (A sketch of the basic three-cornered hat computation is below.)

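For context, a minimal sketch of the three-cornered hat step, under stated assumptions: six synchronized time series in an array x of shape (6, N) at sample rate fs (both names hypothetical; a random placeholder stands in for the real data). With three instruments seeing the same ground signal, the common signal cancels in each pairwise difference, so each instrument's noise PSD follows from the difference-signal PSDs. The six-instrument version generalizes this by combining all pairings.
import numpy as np
from scipy import signal

fs = 256.0  # assumed sample rate
x = np.random.randn(6, int(600 * fs))  # placeholder for the real data

def three_cornered_hat(xi, xj, xk, fs):
    # Self-noise PSD of instrument i, assuming uncorrelated instrument noises
    f, p_ij = signal.welch(xi - xj, fs, nperseg=4096)
    _, p_ik = signal.welch(xi - xk, fs, nperseg=4096)
    _, p_jk = signal.welch(xj - xk, fs, nperseg=4096)
    # PSD(xi - xj) = N_i + N_j  =>  N_i = (P_ij + P_ik - P_jk) / 2
    return f, 0.5 * (p_ij + p_ik - p_jk)

f, n0 = three_cornered_hat(x[0], x[1], x[2], fs)
asd0 = np.sqrt(np.abs(n0))  # ASD, as plotted above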
I decided to take the mean of all six accelerometers' measured ground signals, as well as of their computed self-noises; this is plotted below,

Notice the obvious improvement along the entire frequency band of the measurements when all accelerometers are used in the data analysis.
I also performed some Wiener filtering of this data, and there was an obvious improvement in the results.
The mean of the signals is also plotted below, just as with the cornered hat methods,

I also compared the mean self-noise of the accelerometers against the manufacturer's calculated self-noise that Rana put up on GitHub. Both methods are compared against what the manufacturer claims,

As expected, the measured noise curves of the Wilcoxon are not as good as what the manufacturer stated. This should improve once we redo the huddle test with a better experimental setup. We have some pretty interesting results with the six-cornered hat method at around 5 Hz; it is surprisingly accurate given how rough the calculations seemed to be. (A sketch of the Wiener filter step is below.)
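For reference, a minimal sketch of an FIR Wiener filter step like the one described above, under stated assumptions: target is one accelerometer's time series and witness a reference built from the others (both hypothetical names, with random placeholders; the real analysis lives in the attached code_accel.zip). The filter solves the Wiener-Hopf equations for the taps that best predict the target from the witness.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

witness = np.random.randn(2**16)  # placeholder for the real reference signal
target = np.random.randn(2**16)   # placeholder for the real target signal

def wiener_fir(witness, target, ntaps=512):
    n = len(witness)
    # Autocorrelation of the witness, and its cross-correlation with the target
    r = np.array([np.dot(witness[:n - k], witness[k:]) for k in range(ntaps)])
    p = np.array([np.dot(witness[:n - k], target[k:]) for k in range(ntaps)])
    # Solve the (Toeplitz) Wiener-Hopf system R w = p for the FIR taps
    return solve_toeplitz(r, p)

w = wiener_fir(witness, target)
residual = target - lfilter(w, 1.0, witness)  # self-noise estimate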
I have attached my code for reference: code_accel.zip
See the attachments for better plots of the six accelerometers... |
Attachment 5: code_accel.zip
|
Attachment 6: selfnoise_allsix_miso.png
|
|
Attachment 8: selfnoise_allsix.png
|
|
11390
|
Wed Jul 1 19:16:21 2015 |
Jamie | Summary | CDS | CDS upgrade in progress | The CDS upgrade is now underway
Here's what's happened so far:
- RTS 2.9.4 checked out to /opt/rtcds/rtscore/tags/advLigoRTS-2.9.4
That's it for today. Will pick up again first thing tomorrow. |
11389
|
Wed Jul 1 16:16:46 2015 |
Ignacio | Update | General | Accelerometers reinstalled for future huddle test | Today, I installed the Wilcoxon accelerometers on the table near the end of the mode cleaner. I only set up three of them instead of all six. They were set up just as Rana suggested, i.e. with the cables tightened up and a box on top to prevent any airflow from introducing disturbances. We are planning on running the huddle test on these guys once the upgrade to the interferometer is done.

The cables were tightly clamped to the table as shown below; I used a thick piece of shock-absorbing rubber to do this.

A small piece of thin rubber was used to hold each of the accelerometers tightly to the table in order not to damage them.

We had to borrow Megan's and Kate's piece of black foam in order to seal one of the sides properly, as the cable had to come out somewhere. We didn't want to mess with drilling any holes into the box!

There was a small crack even after using the foam. I sealed it up with duct tape.

The box isn't perfect, so there were multiple cracks along the bottom and top of it that could potentially allow air to flow inside. Eric suggested that we should be super careful this time and do it right, so every crack was sealed up with duct tape.


Finally, we needed something heavy to be placed on top of the box to hold everything down. We used Rana's baby to accomplish this goal.

Just kidding! Rana's baby is too delicate for our purposes. A box of heavy cables was used instead.

|
11388
|
Wed Jul 1 11:45:48 2015 |
Steve | Update | PEM | cleaning around ETMX chamber |
Quote: |
Keven, our regular janitor, is out for a few weeks.
The sub, Mario, is careful and gentle. We wiped around the north side of the vertex chambers on the floor and the east arm racks.
|
Mario wiped the floor around the ETMX chamber today. |
11387
|
Wed Jul 1 10:01:25 2015 |
Steve | Update | VAC | RGA scan pd78 day 275 |
|
Attachment 1: rga275d.png
|
|
11386
|
Wed Jul 1 09:33:31 2015 |
Koji | Update | General | Shutters closed, watch dogs disabled for the RCG upgrade | I closed the PSL/GREEN shutters and shut off the LSC feedback/SUS watch dogs at 9AM PDT, to allow Jamie to start his disruptive work.
|
11385
|
Tue Jun 30 20:26:24 2015 |
Eve | Update | General | Minor Summary Page Changes | I made several small, nit-picky changes to the summary pages.
Motivation:
I'm still working on getting used to editing the summary pages. I also wanted to change some of the easy-to-alter cosmetics of the pages.
What I did:
I changed axis ranges and axis labels and fixed typos throughout the summary pages. Read below for an excruciating list of the minor details of my alterations, if you wish:
- Changed axes on LSC control signals plots on the Summary tab (but will probably change these back to their original state)
- Moved an OpLev plot from the Sandbox tab to "Eve" tab
- Increased the y axis range on IOO MC2 Trans QPD and IMC REFLY RFPD DC plots (which may change when I better incorporate triggers into these plots)
- Fixed title on IOO Whitened Spectrogram and Rayleigh Spectrogram
- Fixed degree sign on Weather: Temperature and PSL Table Temperature
- Fixed percent sign on Weather: Humidity
Results:
So far, everything looks good. I'll continue to make more changes later this week and hope to soon get on to more substantial changes. |
11384
|
Tue Jun 30 11:33:00 2015 |
Jamie | Summary | CDS | prepping for CDS upgrade | This is going to be a big one. We're at version 2.5 and we're going to go to 2.9.3.
RCG components that need to be updated:
- mbuf kernel module
- mx_stream driver
- iniChk.pl script
- daqd
- nds
Supporting software:
- EPICS 3.14.12.2_long
- ldas-tools (framecpp) 1.19.32-p1
- libframe 8.17.2
- gds 2.16.3.2
- fftw 3.3.2
Things to watch out for:
- RTS 2.6:
- raw minute trend frame location has changed (CRC-based subdirectory)
- new kernel patch
- RTS 2.7:
- supports "commissioning frames", which we will probably not utilize. need to make sure that we're not writing extra frames somewhere
- RTS 2.8:
- "slow" (EPICS) data from the front-end processes is acquired via DAQ network, and not through EPICS. This will increase traffic on the DAQ lan. Hopefully this will not be an issue, and the existing network infrastructure can handle it, but it should be monitored.
|
11383
|
Tue Jun 30 05:47:38 2015 |
rana | Update | LSC | Use BALUNs |
Quote: |
The RMS in both channels mostly comes from a whole mess of 60Hz harmonics. I'll see what I can do by taking better care of the delay line cables, but it is kind of weird that this would be worse now, given that there was little care given to them before either.
|
BALUNs |
11382
|
Mon Jun 29 17:40:56 2015 |
Max Isi | Update | General | Summary pages "Code status" page fixed | It was brought to my attention that the "Code status" page (https://nodus.ligo.caltech.edu:30889/detcharsummary/status.html) had been stuck showing "Unknown status" for a while.
This was due to a sync error with LDAS and has now been fixed. Let me know if the issue returns. |
11381
|
Mon Jun 29 12:28:45 2015 |
ericq | Update | LSC | ALS reconstruction in progress | Turns out the reason that the BEATY signal wasn't working is that one of the two RF amplifiers (both of which are model ZHL-32A), isn't amplifying. Voltage at the pins is fine, so maybe its just broken. When the ZHL-3As that Rana ordered arrive, I'll install those.
Switching the working amplifier between the two channels, and using a Marconi driving -20dBm (the Y green beatnote amplitude), the phase tracker output RMSs are 70Hz and 150Hz for X and Y, respectively, which isn't too exciting. There is enough whitening gain and filtering that I don't think ADC noise is an issue (the magnitude of the phase tracker Q is ~10 kcounts after +6dB whitening gain).
The RMS in both channels mostly comes from a whole mess of 60Hz harmonics. I'll see what I can do by taking better care of the delay line cables, but it is kind of weird that this would be worse now, given that there was little care given to them before either.
Also, for now, so I don't have to lug the Marconi around everywhere, I'm driving both channels of the demod board with a spare 55MHz LO output of the LSC LO distribution box, which gives a factor of 5 smaller phase tracker error signal, but the noise level is about the same as with the Marconi. |
|