40m Log
ID   Date   Author   Type   Category   Subject
  11626   Mon Sep 21 11:40:30 2015   ericq   Update   General   Megatron maintenance

The MC autolocker and FSSslow scripts weren't running on megatron. They were started by running the following commands:

sudo initctl start MCautolocker
sudo initctl start FSSslow

The new autoburt cronjob was failing because the .cron file was not executable (fixed by chmod +x burtnew.cron), and because the new perl script didn't use the full path for ifconfig. Similarly, the simulink webview updating script was failing because the full path for matlab wasn't being given. Both of these fixes have been tested and committed to SVN. 

In general, cron scripts can be a real pain, since the cron process doesn't run our .bashrc, and so doesn't know about updates to $PATH or other environment variables that get set through /ligo/cdscfg/workstationrc.sh, which is called by .bashrc. So something that works fine when run manually in a terminal may not play out as expected when run by cron.

  11627   Mon Sep 21 15:22:19 2015   jamie   Update   DAQ   working on new fb replacement

I've been putting together a new machine that Rolf got for us as a replacement for fb.

I've installed and configured the OS, and compiled daqd and the necessary supporting software.  I want to try acquiring data with it next.  This will require removing the current/old fb from the DAQ network and adding the new machine.  This should be doable relatively non-invasively, so that none of the front end configuration needs to be adjusted and the old fb can be put back in place easily.

If the test is successful, then I'll push ahead with the rest of the replacement (such as either moving or copying the /frames RAID to the new machine).

I will do this work in the early AM tomorrow, September 22, 2015.

  11628   Mon Sep 21 18:31:06 2015   gautam   Summary   Computer Scripts / Programs   Frequency counting algorithm

I have been working on setting up a frequency counting module that can give us a readout of the beat frequency, divided by a factor of 2^14 using the Wenzel frequency dividers as described here. This is a summary of what I have thus far.

The algorithm, and simulink model

The basic idea is to pass the digitized signal through a Schmitt trigger (an existing RCG module), which provides some noise immunity and should in theory output a clean square wave with the same frequency as the input. The output of the Schmitt trigger module is either 0 (for input below the lower threshold) or 1 (for input above the upper threshold). By differencing this between successive samples, we can detect a "zero-crossing", and by measuring the time interval between successive zero crossings, we can take the reciprocal to get the frequency. The last bit of this operation (i.e. measuring the interval) is done using a piece of custom C code.

Initially, I was trying to use the part "GPS" from CDS_PARTS to get the current GPS time and hence measure intervals between successive zero-crossings, but this didn't work out because the output of GPS is in seconds, which doesn't give me the required precision to count frequency. I tried implementing more precise timing using the clock_gettime() function, which is capable of giving nanosecond precision, but this didn't work for me. So I am now using a cruder way of measuring the interval: a counter variable is incremented each time a zero-crossing is NOT detected, and then converted to time using the FE_RATE macro (=16384). In any case, the ADC sampling rate limits the resolution of frequency counting using zero-crossing detection (more on this later). Attachment 1 shows the SIMULINK block diagram for this entire procedure.
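
For illustration, here is a minimal sketch of the kind of custom C code described above. This is not the actual code installed on c1tst; the function name, the signature and the rising-edge-only counting are simplifications made for this example.

/* Sketch of a zero-crossing frequency counter (illustrative only).
 * schmitt_in is the Schmitt trigger output (0 or 1), one value per FE cycle.
 * Returns the latest frequency estimate in Hz, held between zero crossings. */
#define FE_RATE 16384.0   /* front-end model rate, samples per second */

double freq_counter(double schmitt_in)
{
    static double prev    = 0.0;   /* Schmitt output on the previous cycle */
    static double counter = 0.0;   /* samples since the last detected crossing */
    static double freq    = 0.0;   /* held frequency readout, Hz */

    /* a 0 -> 1 transition of the Schmitt output marks a zero crossing */
    if (schmitt_in > 0.5 && prev < 0.5) {
        freq    = FE_RATE / (counter + 1.0);   /* one period since the last edge */
        counter = 0.0;                         /* reset and wait for the next edge */
    } else {
        counter += 1.0;                        /* no crossing in this cycle */
    }

    prev = schmitt_in;
    return freq;
}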

Testing the model

I implemented all of this on c1tst, and followed the steps listed here to get the model up and running. I then used one of the DB37 breakout boards to send a signal to the ADC using the DS345 function generator. Attachment 2 shows some diagnostic plots - input signal was a 2.5Vpp (chosen to match the output from the Wenzel dividers) square wave at 2kHz:

  • Bottom left: digitized version of the input signal - I used this to set the upper and lower thresholds on the Schmitt trigger at +1000 counts and -1000 counts respectively.
  • Top left: Schmitt trigger output (red trace) and the difference between successive samples of the Schmitt trigger output (blue trace - this variable is used to detect a zero crossing)
  • Top right: Counter variable used to measure intervals between successive zero crossings, and hence the frequency. The frequency output is held until the next zero crossing is detected, at which time the counter is reset
  • Bottom right: frequency output in Hz.

The right column pointed me to the limitations of frequency counting using this method - even though the input frequency was constant (2kHz), the counter variable, and hence the frequency readout, was neither accurate nor precise. But this is to be expected given the limitations imposed by ADC sampling: we only get information about the state of the input signal once per sampling interval, and hence we cannot know that a zero crossing has occurred until the next sampling interval. Moreover, we can only count frequency in discrete steps. In attachments 3 and 4, I've plotted these discrete measurable frequencies - the error bars indicate the error in the frequency readout if the counter variable is 1 more or less than the "true" value. This can (and does) happen if the high and low times of the Schmitt trigger are not equal over time (see the top left plot in Attachment 2; it's not very obvious, but the "low" times are not all equal, and so the interval between detected zero crossings is not constant). This becomes a problem for small values of the counter variable, i.e. at high input frequencies. I had a look at the elogs Aidan wrote some years ago for a different digital frequency counting approach, and the conclusion there was similar - for high input frequencies, the error is large. 
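
To put rough numbers on the quantization, assuming one counted crossing per period at the model rate f_s = 16384 Hz: the measurable frequencies are f_N = f_s/N for integer N, so neighbouring values are separated by roughly f_s/N^2 = f^2/f_s. At f = 2 kHz that is ~244 Hz per count, while at 100 Hz it is only ~0.6 Hz; multiplying by the 2^14 divider ratio, these correspond to ~4 MHz and ~10 kHz of pre-divider frequency respectively, consistent with the error growing towards high input frequencies.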

I further did two frequency sweeps using the DS345, to see if I could recover them in the frequency readout. Attachments 5 and 6 show the results of these sweeps. For low frequencies, i.e. 100-500 Hz, the jitter in the readout is small (though this will be multiplied by a factor of 2^14), but by the time the input frequency gets up to 2kHz, the jitter in the readout is pretty bad (and gets worse for even higher frequencies).

Bottom line

Some refinements can be made to the algorithm, perhaps by introducing some averaging (i.e. not reading out frequency for every pair of zero crossings, but every 5) which may improve the jitter in the readout, but I would think that the current approach is not very useful above 2kHz (corresponding to ~30MHz of pre-divider frequency), because of the limitations shown in attachments 3 and 4. 
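
As a rough estimate of what such averaging buys, using the same ±1-count error model as above: averaging over M zero-crossing intervals is equivalent to timing M periods with the same ±1 sample uncertainty, so the quantization error shrinks to roughly f^2/(M*f_s), e.g. ~50 Hz instead of ~244 Hz at 2 kHz for M = 5, at the cost of a readout rate reduced by the same factor.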

Attachment 1: Simulink_model.pdf
Attachment 2: diagnostic_plots.pdf
Attachment 3: Error_high_frequency.pdf
Attachment 4: Error_low_frequency.pdf
Attachment 5: Frequency_sweep_100_500_Hz.pdf
Attachment 6: Frequency_sweep_100_2000_Hz.pdf
  11629   Mon Sep 21 23:18:55 2015   ericq   Summary   Computer Scripts / Programs   Frequency counting algorithm

I definitely think lowpassing the output is the way to go. Since this frequency readback will be used for slow control of the beatnote frequency via the auxiliary laser temperature, even lowpassing at tens of Hz is fine. The jitter doesn't mean it's useless, though.

If we lowpass at 16Hz, we're effectively averaging over 1024 samples, bringing, for example, the ±2kHz jitter of a 6kHz signal that you posted down to 2kHz/sqrt(1024) ~ 60Hz, which is 1% of the carrier. This seems ok to me. 
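
Written out, with the model running at 16384 Hz: a 16 Hz lowpass averages over roughly N = 16384/16 = 1024 samples, and if the sample-to-sample jitter is roughly uncorrelated it averages down as sqrt(N), i.e. 2 kHz / sqrt(1024) = 2 kHz / 32 ~ 62 Hz, about 1% of a 6 kHz carrier.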

  11631   Tue Sep 22 02:11:17 2015   rana   Summary   Computer Scripts / Programs   Frequency counting algorithm

I was going to suggest using a software PLL, but perhaps averaging gives the same result. The same ADC signal can be fed to multiple blocks with different averaging times, and we can just use whichever one seems the most useful.
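
For concreteness, here is a rough sketch of what a software PLL frequency readout could look like, in the same spirit as the counter code in the previous entry. The gains, the initial frequency guess and the single-pole smoothing are illustrative values and have not been tuned on the real beat signal.

#include <math.h>

#define FS     16384.0              /* sample rate, Hz */
#define TWO_PI 6.283185307179586

/* One update per sample.  'input' is the digitized beat, assumed roughly
 * unit amplitude.  Returns the tracked frequency in Hz. */
double pll_track(double input)
{
    static double phase = 0.0;      /* NCO phase, radians */
    static double ferr  = 0.0;      /* integrator state (frequency correction, Hz) */
    static double pd_lp = 0.0;      /* low-passed phase detector output */

    const double f0 = 2000.0;       /* initial frequency guess, Hz */
    const double a  = 0.01;         /* one-pole smoothing of the phase detector */
    const double Kp = 50.0;         /* proportional gain, Hz per unit error */
    const double Ki = 1.0;          /* integral gain, Hz per unit error per sample */

    /* phase detector: for small errors, input * -sin(phase) ~ phase error */
    double pd = input * -sin(phase);
    pd_lp += a * (pd - pd_lp);      /* suppress the 2f term from the mixer */

    /* PI loop filter updates the frequency estimate */
    ferr += Ki * pd_lp;
    double freq = f0 + ferr + Kp * pd_lp;

    /* advance the numerically controlled oscillator */
    phase += TWO_PI * freq / FS;
    if (phase > TWO_PI) phase -= TWO_PI;

    return freq;                    /* smoothed frequency readout */
}

The tracked NCO frequency is already smoothed by the loop itself, so the loop bandwidth here plays the same role as the averaging time in the counter approach.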

  11632   Tue Sep 22 03:48:18 2015   ericq   Update   LSC   DRMI tweaked, briefly held with ALS arms

Following the RF component power supply grounding work, POP110, POP22 and REFL165 all changed somewhat. They have all been rephased for the DRMI, as they were before. 

I tweaked the 3F DRMI settings, and chose to phase REFL165I to PRCL, instead of SRCL as before, to try and minimize the PRCL->MICH coupling instead of the SRCL->MICH coupling. 

With these settings, I once locked the DRMI for ~5 seconds with the arms held off on ALS, during which I could see some indications of necessary demod angle changes. Haven't gotten longer locks yet, but we're getting there...

  11633   Tue Sep 22 08:58:38 2015   Steve   Update   VAC   cold cathode is flaky

The cold cathode gauge is back to normal. CC4 is the last gauge that is still "functioning".

MKS is not responding. The spare controller and gauges have been sent back for repair.

 

Attachment 1: 4and80days.png
  11634   Tue Sep 22 16:42:39 2015   ericq   Update   IOO   Housekeeping

I've moved the OAF MC2 signal path to go directly from c1oaf to c1mcs, so that the LSC being ON/OFF doesn't interfere with the MC length seismic feedforward. Since the FB is currently down, I can't do a full test, but looking at monitor points in StripTool indicates it's working as intended. 

I also cleaned up some LSC medm stuff; exposing the existing SRCL UGF servo, and removing a misleading arrow. This reminds me that I need to get calibration lockins back up and running...

  11635   Tue Sep 22 16:52:36 2015   ericq   Summary   General   Random Notes

Some things bouncing around my head that haven't made it to ELOG yet:

  • Last week, Rana and I were investigating excess power line noise coming from the DFD demodulation. We put transformers on the green beat signals where they arrive at the LSC rack, to avoid connecting their signal ground from the PSL table to the LSC rack ground. This didn't help; it's unclear what the culprit is - maybe the demod board's power board?
  • Lately, when the interferometer loses lock, the Y arm will not lock on POY, or even flash its IR resonance, for a little while. The green beam can be locked, the X arm can be locked, and no excess angular noise is evident from glancing at the oplev XY plots. Mysterious.  
  • Sometimes, when writing new values to the C1LSC SDF table, the c1lscepics process dies (though the write is successful). This is highly annoying. This may have been addressed in some slightly newer RCG code. 
  • C1OAF is running with a big red NO SYNC message on its GDS screen. C1LSC has shown this too, but I think only when the SDF/epics crash happens. 
  • C1OAF also doesn't seem to properly load the "safe" SDF table when starting up, and errantly puts ones in every element in the static FF matrix. Be careful when restarting OAF!
  11636   Tue Sep 22 17:30:55 2015   jamie   Update   DAQ   attempts at getting new fb working

Today I've been trying to get the new frame builder, tentatively 'fb1', to work.  It's not fully working yet, so I'm about to revert the system back to using 'fb'.  The switch-over process is annoying, since our one myrinet card has to be moved between the hosts.

A brief update on the process so far:

I'm being a little bold with this system by trying to build daqd against more system libraries, instead of the manually installed stuff nominally required.  Here's some of the relevant info about the fb1 system:

  • Debian 7 (wheezy)
  • lscsoft ldas-tools-framecpp-dev 2.4.1-1+deb7u0
  • lscsoft gds-dev 2.17.2-2+deb7u0
  • lscsoft libmetaio-dev 8.4.0-1+deb7u0
  • lscsoft libframe-dev 8.20-1+deb7u0
  • /opt/rtapps/epics-1.4.12.2_long
  • /opt/mx-1.2.16
  • advLigoRTS trunk

I finally managed to get daqd to build against the advLigoRTS trunk (post 2.9 branch).  I'll post a detailed build log once I work out all the kinks.  It runs ok, including writing out full frames, as well as second and minute trends and raw minute trends, but there are a couple of show-stopper problems:

  • daqd segfaults if the C1EDCU.ini is specified.  If I comment out that one file from the 'master' channel ini file list then it runs without segfaulting.
  • Something is going on with the mx_streams from the front ends:
    • They appear to look ok from the daqd side, but the FEC-<ID>_FB_NET_STATUS indicators remain red.  The "DAQ" bit in the STATE_WORD is also red.  Again, this is even though data seems to be flowing.
    • The mx_stream processes on the front ends are dying (and restarting via monit) about every 2 minutes.  It's unclear what exactly is happening, but they all die around the same time, so it was possibly initiated by a daqd problem.  Around the time of the mx_stream failures, we see this in the daqd log:
[Tue Sep 22 17:24:07 2015] GPS MISS dcu 91 (TST); dcu_gps=1127003062 gps=1127003063

Aborted 1 send requests due to remote peer Aborted 1 send requests due to remote peer 00:25:90:0d:75:bb (c1sus:0) disconnected
mx_wait failed in rcvr eid=004, reqn=11; wait did not complete; status code is Remote endpoint is closed
00:30:48:d6:11:17 (c1iscey:0) disconnected
mx_wait failed in rcvr eid=002, reqn=235; wait did not complete; status code is Remote endpoint is closed
disconnected from the sender on endpoint 002
mx_wait failed in rcvr eid=005, reqn=253; wait did not complete; status code is Bad session (missing mx_connect?)
disconnected from the sender on endpoint 005
disconnected from the sender on endpoint 004
[Tue Sep 22 17:24:13 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127003062 gps=1127003069
  • Occasionally the daqd process dies when the front end mx_stream processes die.

I'll keep investigating, hopefully with some feedback from Keith and Rolf tomorrow.

  11637   Wed Sep 23 03:08:50 2015   ericq   Update   LSC   DRMI + ALS Arms

[ericq, Gautam]

We can reliably lock the DRMI with the arms held off on ALS.

I have not been able to hold it at zero CARM offset, but this is probably just a matter of setting up the right loop shapes with enough phase margin to handle the CARM fluctuations (or figuring out high bandwidth ALS...)

Right now, it's the most stable at CARM offsets larger (in magnitude) than -1. Positive CARM offsets don't work well for some reason. 


The key to getting this to work was to futz around, starting from the misaligned-arms DRMI settings, until brief locks were seen (triggering all 3 DRMI DoFs on POP22, since the correct AS110 sign was ambiguous). I could tell from how the control signals responded to gain changes that REFL165Q, which was being used as the MICH error signal, was seeing significant cross coupling from both PRCL and SRCL, suggesting the demod angle of REFL165 had to be adjusted. I randomly tweaked the REFL165 demod angle until a 20 second lock was achieved, with excitations running. Then, I downloaded that data and analyzed the sensing matrix. This showed me that the REFL33 demod angle was ok, and the PRCL-from-SRCL subtraction factor determined with the arms misaligned was still valid. The main difference was indeed the SRCL angle in REFL165.

With the REFL165 demod angle properly adjusted, the DRMI would briefly lock, but the DRMI had become somewhat misaligned at this point, and the SRC could be seen to mode hop. Interestingly, the higher order modes had an opposite sign in AS110, with respect to the TM00. At that point, I went back to PRMI on carrier to dither-align the BS and PRM. 

With alignment set, the DRMI would lock on TM00 readily, still only triggering on POP22. I set the AS110 angle, and moved SRCL triggering over to that, which sped up acquisition even more. The input matrix and FM gains from no-arms DRMI still work for acquisition; UGF servos were used to adjust overall gains a bit. 

At CARM offsets larger in magnitude than -1, the DRMI lock seems indefinite. I just broke it to see how fast it would acquire; 3 seconds.

Lastly, here is the sensing matrix at a CARM offset of -4, measured over five minutes. REFL11 is the only degenerate looking PD. Thus, I feel like controlling the DRMI of the DRFPMI should be more manageable than I had feared.

(I didn't include/excite CARM or DARM, because I'm not sure it would really mean anything at such a large CARM offset)

Attachment 1: DRMIarms.pdf
  11638   Wed Sep 23 10:31:49 2015   ericq   Update   LSC   DRMI + ALS Arms

Looking good. How many meters of CARM is '-1 counts'?

  11639   Wed Sep 23 12:51:03 2015   Jenne   Update   LSC   DRMI + ALS Arms

Nice!!

  11640   Thu Sep 24 17:01:37 2015   ericq   Update   Computer Scripts / Programs   Freeing up some space on /cvs/cds

I noticed that Chiara's backup HD (which has a capacity of 1.8TB, vs. the main drive's 2TB) was close to full, meaning that we would soon be without a local backup. 

I freed up ~200GB of space by compressing the autoburt snapshots from 2012, 2013, 2014. Nothing is deleted, I've just compressed text files into archives, so we can still dig out the data whenever we want.

  11641   Thu Sep 24 17:06:14 2015   ericq   Update   Calibration-Repair   C1CAL Lockins

Just a quick note for now: I've repopulated C1CAL with a limited set of lockin oscillators/demodulators, informed by the aLIGO common LSC model. Screens are updated too. 

Rather than trying to do the whole magnitude/phase decomposition, it just does the demodulation of the RFPD signals online; everything beyond that is up to the user to do offline. 

Briefly testing with PRMI, it seems to work as expected. There is some beating evident from the fact that the MICH and PRCL oscillation frequencies are only 2Hz apart; the demod low pass is currently at an arbitrary 1Hz, so it doesn't filter the beat much. 
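
As a rough number, assuming the demod low pass is a single pole at 1 Hz: the 2 Hz beat between the MICH and PRCL lines is only attenuated by 1/sqrt(1 + 2^2) ~ 0.45, i.e. about 7 dB, which is why the beating is clearly visible.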

Screens, models, etc. all svn'd.

  11642   Fri Sep 25 11:08:33 2015   Steve   Update   SUS   wire standoffs

Our last effort to change the existing Al-6061 wire standoffs was in April 2012.

We requested sapphire and/or ruby materials with a smaller radius R at the bottom of the groove. Groove polishing was also requested.

Insaco Inc. quoted (quote 84740) a "best effort" with NO POLISHING. The groove is to be cut with an excimer laser.

Jeff Lewis, 9-12-2012: the LIGO sapphire prism grooves were NOT POLISHED, but Resonatics used the corner of a razor blade to scrape off the ablated material which was redeposited in and around the grooves.
 

Attachment 1: wireStandOff_SOS.PDF
  11643   Fri Sep 25 14:52:08 2015   Steve   Update   SUS   ETMX is not drifting

We talked about the drift of the ETMX suspension at the Wednesday meeting.

It stopped moving on Jan 8, 2015 and has been reasonably stable since then.

 

Attachment 1: ETMXstoppedDrifting.png
Attachment 2: 25daysArmsT.png
  11644   Fri Sep 25 17:00:38 2015   Steve   Update   VAC   cold cathode is flaky

The IFO pressure is estimated at ~1E-6 Torr with a modified vac-normal valve configuration.

CC4 is jumping between 2e-5 and 1e-6 Torr.

The pressure-based interlock kicks in to close VM1 at 2e-5 Torr to protect the RGA.

I have opened VM1 repeatedly over the last few mornings, but whenever CC4 jumps, VM1 closes again.

With VM1 closed, the RGA scans are not seeing the IFO. I will look at some scans on Monday.

Meanwhile, I opened VM2 to lower the pressure at the RGA. This change will show up as "Current Status: Undefined State".

So do not panic, the IFO pressure is normal.

I need someone's help to raise the interlock threshold to 5e-5 Torr.

I'm buying a new cold cathode gauge on Monday.

 

Note: CC1 is out of order!

Just read P1: the pressure is < 7e-4 Torr. This gauge is very reliable, and it is at the low end of its range.

 

Attachment 1: flakyCC4.png
  11645   Fri Sep 25 17:51:11 2015   jamie   Update   DAQ   fb replacement work update

Brief update about the fb replacement status.

The new hardware for fb is in the rack, temporarily sitting on top of megatron, and on the CDS network with the name 'fb1'.  I've installed an OS on it and have re-built daqd.

Earlier this week I swapped it into the network and tried to get it to acquire data from the front ends.  I was ultimately unsuccessful.  The problem seemed to be the mx_stream communication from the front ends to the new host.

The swap is sort of a pain because we only have one Myrinet fiber network adapter card that has to be moved between machines, which of course requires shutting down both machines and opening up their chassis.  I instructed Steve to order us a new Myrinet card for the new machine, which will allow us to swap daqd machines by just moving the fiber connection.  Once that's in place (early next week) I'll go back to trying to figure out what the issue is with the mx_streams.

If all else fails I'll take the repulsive last resort of either swapping or cloning the disk from the old fb.

  11646   Fri Sep 25 19:06:13 2015   rana   Update   SUS   ETMX IS drifting

I don't see any evidence of it getting more stable. It seems there was a big step in January, but the problem we were talking about - the suspension shifting when it gets a big kick - can't be proven to be gone or not by just looking at the trends. The real issue is whether or not it slips when we put in a large step in the LSC.

Quote:

We have talked about the drift of ETMX sus on the Wednesday meeting.

It stopped moving on Jan 8, 2015 and has been reasonably stable since then.

 

  11647   Tue Sep 29 03:14:04 2015   gautam   Update   CDS   Frequency divider box

Earlier today, the front panels for the 1U chassis I obtained to house the Wenzel dividers + RF amplifiers arrived, which meant that finally I had everything needed to complete the assembly. Pictures of the finished arrangement attached. 

Summary of the arrangement:

  • Two identical channels (RF amplifier + /64 divider + /256 divider), one for each arm
  • The front panels are anodized, and isolated SMA feedthroughs are used 
  • Given the large number of units to be supplied with DC power (2 amplifiers + 4 dividers), I chose to use two D1000217 power regulators (the default configuration takes +-18V as input, and outputs regulated +-15V, which was fine for the dividers, but the ZKL-1R5 requires +12V, so I changed the resistor R2 in the schematic from a 10.7K to 8.451K so as to accommodate this).
  • The amplifiers and dividers are mounted on a steel plate, which is itself mounted on the chassis via insulating posts. 

Testing:

  • I first checked the power regulator circuitry without hooking up the amplifiers/dividers - with a multimeter, I verified I was getting +15V and +12V as expected.
  • I then connected the amplifiers and dividers, and decided to first check the behaviour of each channel using the Fluke 6061 RF function generator and an oscilloscope. One of the channels (X-arm in the current configuration) worked fine - I got a 0-2.5V square wave as the output for input signals as low as -38dBm at 130MHz (consistent with our earlier observations).
  • The Y-arm channel however did not give me any output. In order to debug the problem, I decided to check the output after the amplifier first. The amplifier does not seem to be working for this channel - I get the same amplitude at the output as at the input. I verified with a multimeter that the correct DC supply voltage of +12V was present, but I am not sure how to debug this further. The amplifier is basically straight out of the box, and as far as I can tell I have not done anything to damage it, as this was the first time I connected it to anything, and I repeated the same steps on the Y-arm as on the X-arm, which seems to work alright.
  • The rest of the Y-arm signal chain was verified to be working by bypassing the amplifier stage (the attached photographs show the box in this configuration). There seem to be no issues with the divider part of the signal chain. 

Once I figure out the problem with this amplifier/replace it, the box is ready to be installed. 

 

Attachment 1: IMG_0014.JPG
Attachment 2: IMG_0015.JPG
  11648   Tue Sep 29 16:52:49 2015   ericq   Update   LSC   Fast ALS troubles - unknown zero

Fast ALS control continues to elude me. 

I fixed my LPF to take the input impedance of the CM board input into account; this unfortunately results in about -12dB of DC gain for the ALS signal due to the voltage divider formed with the board input, but by my estimation this still puts the DFD noise above the input-referred voltage noise of the input AD829 on the CM board, so it'll do for now. The 120Hz pole shows up as expected when comparing the usual digital channels and the CM_SLOW output, and is digitally compensated with a zero at 120Hz (with a digital pole at 5kHz so nothing blows up). 
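
For reference, with the values quoted above, the analog path is intended to look like H(f) = g / (1 + i*f/120 Hz) with |g| ~ -12 dB at DC, and the digital compensation is D(f) = (1 + i*f/120 Hz) / (1 + i*f/5 kHz), which is flat below 120 Hz and tops out at 20*log10(5000/120) ~ 32 dB above 5 kHz.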

However, there seems to be some zero in the analog path somewhere that spoils the loop shape for the AO path. Here's a measurement of the X arm OLG from 10-100kHz, when the digital control is happening with a ~100Hz UGF via ALS X I -> CM IN2 -> CM_SLOW -> LSC_CARM -> ETMX, and there is some AO action via ALS X I -> CM IN2 -> IMC IN2.

The peak is recognizable as the gain peaking in the IMC servo (and changes predictably with changes to the IMC crossover and loop gains), which is expected. However, one can see that the magnitude is roughly flat before the peak, and the phase is around 0. With the 1/f LPF, we should see a downward slope and phase starting around -90 degrees. 

Thus, there must be some zero in the fast or common path, maybe at a few kHz where the digital loop wouldn't really see its effect. I'm not sure what it could be at this point in time.

One thought I had is that I never really checked the TF of the DFD response to frequency modulation of the RF beat. I used an SR785 to drive the external FM input of a Fluke 1061A synthesizer, and found the response to be totally flat from 1-100kHz with carriers from 30-100MHz, so that should be fine. (For a little while I was confused by what seemed to be some heavy high-passing going on, but it turns out that the Fluke just can't push much low frequency FM; the manual says -3dB at 20Hz.)

Attachment 1: OLG_fastALS.pdf
  11649   Tue Sep 29 18:03:11 2015   rana   Update   LSC   use LISO

Use LISO - see what it tells you. I would think that you should make a differential RC filter to get the right behavior. (e.g. 1K on each leg and 1 uF between them)

Each leg of the diff input of the board has a 4k input impedance.
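
A quick estimate with those values, treating the board input as 4k per leg (8k differentially): ignoring the load, 1k + 1k with 1 uF across gives a pole at 1/(2*pi*2k*1uF) ~ 80 Hz; including the 8k load, the resistance seen by the capacitor is 2k || 8k = 1.6k, moving the pole to ~100 Hz, with a DC attenuation of 8k/10k, about -2 dB.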

But surely the AO input to the MC servo should also make sense independently.

Attachment 1: Screen_Shot_2015-09-29_at_5.55.34_PM.png
  11650   Tue Sep 29 19:38:09 2015   gautam   Update   General   FOL fiber box revamp

The new 2x2 fiber couplers arrived today, so Eric gave me an overview of the changes to be made to the existing configuration of the FOL fiber box. I removed the box from the table after ensuring that the PDs were powered OFF and removing and capping all fiber leads on the front panel. Here is a summary of the changes made.

  • On-Off positions for the rocker switches corrected - these switches for the power to the PDs were installed such that the "1" position was OFF. I flipped both the switches such that the "1" position now corresponds to ON (see Attachment #1).
  • All the couplers/beam combiners/splitters were initially removed. 
  • I then re-configured the layout as per the schematic (Attachment #2). I only needed to use one of the 4 new 2x2 couplers ordered. I think the 1x2 couplers are appropriate for mixing the PSL and AUX beams, since with a 2x2 coupler half of the mixed light would go nowhere. Indeed, if we had one more such coupler, we could do away with the 2x2 coupler I am now using to divide the PSL light into two. 
  • The spec-sheets on the inside of the top cover were updated to reflect the new hardware (Attachment #3).
  • The old hardware from the box that was not used, along with their spec-sheets, are stored temporarily in a Thorlabs lab snacks box (all the fibers have been capped).
  • The finished layout is shown in Attachment #4.

I then ran a quick check of the power levels at the input to the PDs, using the fiber coupled power meter. However, I found that there was no light in the fiber marked "PSL light in" (the power meter read "Sig. Low"). The X arm AUX light had an input power of 1.12 mW, which after the various coupling losses etc. went down to 63 uW just before the PD. The corresponding figures for the Y arm are 200 uW and 2.2 uW. I am not too sure how the AUX light is coupled into the fibers, so I am not going to tweak the alignment to see if I can get more power. 

Attachment 1: IMG_0017.JPG
Attachment 2: FOL_schematic.pdf
Attachment 3: IMG_0018.JPG
Attachment 4: IMG_0016.JPG
  11651   Wed Sep 30 10:00:02 2015   ericq   Update   LSC   used LISO

LISO confirms that I did my algebra right in picking the component values, and shows no extra zeros. 

I also took some TFs with the SR785 and confirmed that both CM board inputs behave the same, and that including the LPF on the input gives the expected 1/f shape at the slow and fast outputs.

  11652   Wed Sep 30 13:07:13 2015   gautam   Update   General   FOL fiber box revamp

Eric pointed out that the 1x2 couplers that were used in the previous arrangement and which I recycled, were in fact NOT appropriate - they are not 50-50 couplers but 90-10 couplers, which explains the measured power levels I quoted here.

I switched out these for a pair of the newly arrived 2x2 couplers, and have also replaced the datasheets on the inside of the top cover. I then redid the power level measurements, and got some sensible values this time (see Attachment #1 for revised layout and measured power levels, numbers in red are powers for PSL light, numbers in green are for AUX laser light, and all numbers are in mW). I did find that the 90-10 splitter in the PSL+Y path was not working (though the one in the PSL+X path seems to be working fine), and hence, have not quoted power levels at the output of these splitters. For now, I guess we can bypass the splitters and take the PSL+AUX light from the 2x2 couplers directly to the PDs.

Attachment 1: FOL_schematic.pdf
  11653   Wed Sep 30 13:59:49 2015   jamie   Update   DAQ   attempts at getting new fb working

I got Steve to get us a new Myrinet fiber network adapter card for fb1:

  • Myrinet 10G-PCIE-8B-S

I just finished installing the card in fb1, and it came up fine.  We happened to have a spare fiber, and a spare fiber jack in the DAQ switch, so I went ahead and plugged it in in parallel to the old fb:

controls@fb1:~/rtbuild/trunk 130$ /opt/mx/bin/mx_info
MX Version: 1.2.16
MX Build: controls@fb1:/opt/src/mx-1.2.16 Fri Sep 18 18:32:59 PDT 2015
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
    8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
===================================================================
Instance #0:  364.4 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
    Status:         Running, P0: Link Up
    Network:        Ethernet 10G

    MAC Address:    00:60:dd:43:74:62
    Product code:   10G-PCIE-8B-S
    Part number:    09-04228
    Serial number:  485052
    Mapper:         00:60:dd:46:ea:ec, version = 0x00000000, configured
    Mapped hosts:   7

                                                        ROUTE COUNT
INDEX    MAC ADDRESS     HOST NAME                        P0
-----    -----------     ---------                        ---
   0) 00:60:dd:43:74:62 fb1:0                             1,0
   1) 00:25:90:0d:75:bb c1sus:0                           1,0
   2) 00:30:48:be:11:5d c1iscex:0                         1,0
   3) 00:30:48:d6:11:17 c1iscey:0                         1,0
   4) 00:30:48:bf:69:4f c1lsc:0                           1,0
   5) 00:14:4f:40:64:25 c1ioo:0                           1,0
   6) 00:60:dd:46:ea:ec fb:0                              1,0

We can now work on fb1 while fb continues to run and collect data from the front ends.

I'm still not getting the mx_stream connections to the new fb1 daq to work.  I'm leaving everything running as is on fb for the moment.

  11654   Wed Sep 30 15:44:06 2015   Steve   Update   General   Sun Fire X4600

Gautam and Steve,

The decommissioned server from LDAS has been retired to the 40m. It has 32 cores and 128GB of memory, and is installed in rack 1X7: http://docs.oracle.com/cd/E19121-01/sf.x4600/

  11655   Thu Oct 1 19:49:52 2015   jamie   Update   DAQ   more failed attempts at getting new fb working

Summary

I've not really been able to make additional progress with the new 'fb1' DAQ.  It's still flaky as hell.  Therefore we're still using old 'fb'.

Issues

mx_stream

The mx_stream processes on the front ends initially run fine, connecting to daqd and transferring data, with both the DAQ-..._STATUS and FE-..._FB_NET_STATUS indicators green.  Then, after about two minutes, all the mx_stream processes on all the front ends die.  Monit eventually restarts them all, at which point they come up green for a while until they crash again ~2 minutes later.  This is essentially the same situation as reported previously.

In the daqd logs when the mx_streams die:

Aborted 2 send requests due to remote peer 00:30:48:be:11:5d (c1iscex:0) disconnected
Aborted 2 send requests due to remote peer 00:14:4f:40:64:25 (c1ioo:0) disconnected
Aborted 2 send requests due to remote peer 00:30:48:d6:11:17 (c1iscey:0) disconnected
Aborted 2 send requests due to remote peer 00:25:90:0d:75:bb (c1sus:0) disconnected
Aborted 1 send requests due to remote peer 00:30:48:bf:69:4f (c1lsc:0) disconnected
mx_wait failed in rcvr eid=000, reqn=176; wait did not complete; status code is Remote endpoint is closed
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=177; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=178; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=179; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=180; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
[Thu Oct  1 19:00:09 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127786407 gps=1127786425

[Thu Oct  1 19:00:09 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127786408 gps=1127786426

[Thu Oct  1 19:00:09 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127786408 gps=1127786426

In the mx_stream logs:

controls@c1iscey ~ 0$ /opt/rtcds/caltech/c1/target/fb/mx_stream -r 0 -W 0 -w 0 -s 'c1x05 c1scy c1tst' -d fb1:0
mmapped address is 0x7f0df23a6000
mmapped address is 0x7f0dee3a6000
mmapped address is 0x7f0dea3a6000
send len = 263596
Connection Made
isendxxx failed with status Remote Endpoint Unreachable
disconnected from the sender

daqd

While the mx_stream processes are running, daqd seems to write out data just fine, at least for the full frames.  I manually verified that there is indeed data in the frames that are written.

Eventually, though, daqd itself crashes with the same error that we've been seeing:

main profiler warning: 0 empty blocks in the buffer

I'm not exactly sure what the crashes are coincident with, but it looks like they are also coincident with the writing out of the minute and/or second trend files.  It's unclear how it's related to the mx_stream crashes, if at all.  The mx_stream crashes happen every couple of minutes, whereas the daqd itself crashes much less frequently.

The new daqd can't handle EDCU files.  If an EDCU file is specified (e.g. C0EDCU.ini in our case), the daqd will segfault very soon after startup.  This was an issue with the current daqd on fb, but was "fixed" by moving where the EDCU file was specified in the master file.

Conclusion

There are a number of differences between the fb1 and fb configurations:

  • newer OS (Debian 7 vs. ancient gentoo)
  • newer advLigoRTS (trunk vs. 2.9.4)
  • newer framecpp library installed from LSCSoft Debian repo (2.4.1-1+deb7u0 vs. 1.19.32-p1)

It's possible those differences could account for the problems (/opt/rtapps/epics incompatible with this Debian install, for instance).  Somehow I doubt it.  I wonder if all the weird network issues we've been seeing are somehow involved.  If the NFS mount of chiara is problematic for some reason that would affect everything that mounts it, which includes all the front ends and fb/fb1.

There are two things to try:

  • Fix the weird network problem.  Try removing EVERYTHING from the network except for chiara, fb/fb1, and the front ends, and see if that helps.
  • Rebuild fb1 with Ubuntu and daqd as prescribed by Keith Thorne.
  11656   Thu Oct 1 20:24:02 2015   jamie   Update   DAQ   more failed attempts at getting new fb working

I just realized that when running fb1, if a single mx_stream dies they all die.

  11657   Thu Oct 1 20:26:21 2015   jamie   Update   DAQ   Swapping between fb and fb1

Swapping between fb and fb1 as DAQ is very straightforward, now that they are both on the DAQ network:

  • stop daqd on fb
  • on fb sudoedit /diskless/root/etc/init.d/mx_stream and set: endpoint=fb1:0
  • start daqd on fb1.  The "new" daqd binary on fb1 is at: ~controls/rtbuild/trunk/build/mx-localtime/daqd

Once daqd starts, the front end mx_stream processes will be restarted by their monits, and be pointing to the new location.

Moving back is just reversing those steps.

  11658   Fri Oct 2 03:29:16 2015   ericq   Update   LSC   Fast ALS progress - AO path crossed over, but no high BW

I've been using an SR560 to experiment with different pole frequencies, to try and cancel the mystery zero. It's after the ALS demod board, before the Pomona LPF, with a gain of five. 

A pole frequency of 3kHz seems to recover sensible loop shapes. I've been able to cross over the AO path to make a nice long phase bubble, which isn't the prettiest, but seems workable.

Getting to this point is now almost entirely scripted and repeatable; one just has to make sure that the ALS beat has the correct sign and adjust the delay line length. Most frustratingly, due to the dependence of the ALS gain on beat frequency / magnitude / delay, which can all vary on the order of a few dB, the AO gain settings to get to the crossed over point are not always the same, so at the end it's a lot of small steps and frequent loop measurements. 

The FSS crossover and overall IMC loop gain have to be pretty actively managed too. It's all too easy to drive the Pockels cell crazy. And if it's going crazy on its own anyway, there's no hope in trying to pile ALS sensing noise on top of it... It would really help this effort to fix the whole PC situation up. 

Unfortunately, lock is lost when increasing the overall gain on the common mode board even by 1dB. We've seen in the single arm tests that the gain settings have an appreciable difference in offset between them. Maybe this step is more than the loop can handle? Or maybe it's the voltage glitches... Maybe some gain reallocation can put me on a region of the slider that glitches less.

In terms of the mystery plant features, I figure I'd like to take the analog TF from the AO control signal to, say, AS55, and see what may or may not be there. I just haven't done this tonight since it would involve recabling the analyzer, and I still need frequent loop measurements to get to the crossed-over state. Having ITMY misaligned and using the digital AS55Q spectrum as an out-of-loop monitor has been very helpful. 

Attachment 1: crossedover.pdf
  11659   Fri Oct 2 15:11:08 2015   Steve   Update   PEM   cable squashed

Cable #53, from Accelerometer 4 to 1X7 / DAQ input c26, was squashed while removing the network card from the Sun Fire X4600 today.

This cable has to be tested.

  11660   Fri Oct 2 15:33:09 2015   Steve   Update   safety   safety training

Gautam received 40m-specific basic safety training today.

  11661   Sun Oct 4 12:07:11 2015   jamie   Configuration   CDS   CSD network tests in progress

I'm about to start conducting some tests on the CDS network.  Things will probably be offline for a bit.  Will post when things are back to normal.

  11662   Sun Oct 4 13:53:30 2015   jamie   Update   LSC   SENSMAT oscillator used for EPICS tests

I've taken over one of the SENSMAT oscillators for a test of the EPICS system.

These are the channels I've modified, with their original and current settings:

controls@donatella|~ > caget C1:LSC-OUTPUT_MTRX_7_13 C1:CAL-SENSMAT_CARM_OSC_FREQ C1:CAL-SENSMAT_CARM_OSC_CLKGAIN
C1:LSC-OUTPUT_MTRX_7_13          -1
C1:CAL-SENSMAT_CARM_OSC_FREQ    309.21
C1:CAL-SENSMAT_CARM_OSC_CLKGAIN   0
controls@donatella|~ > caget C1:LSC-OUTPUT_MTRX_7_13 C1:CAL-SENSMAT_CARM_OSC_FREQ C1:CAL-SENSMAT_CARM_OSC_CLKGAIN
C1:LSC-OUTPUT_MTRX_7_13           0
C1:CAL-SENSMAT_CARM_OSC_FREQ      0.1
C1:CAL-SENSMAT_CARM_OSC_CLKGAIN   3
controls@donatella|~ >

 

 

  11663   Sun Oct 4 14:23:42 2015   jamie   Configuration   CDS   CSD network test complete

I've finished, for now, the CDS network tests that I was conducting.  Everything should be back to normal.

What I did:

I wanted to see if I could make the EPICS glitches we've been seeing go away if I unplugged everything from the CDS martian switch in 1X6 except for:

  • fb
  • fb1
  • chiara
  • all the front end machines

What I unplugged were things like megatron, nodus, the slow computers, etc.  The control room workstations were still connected, so that I could monitor.

I then used StripTool to plot the output of a front end oscillator that I had set up to generate a 0.1 Hz sine wave (see elog 11662).  The slow sine wave makes it easy to see the glitches, which show up as flatlines in the trace.

More tests are needed, but there was evidence that unplugging all the extra stuff from the switch did make the EPICS glitches go away: for the duration of the test I did not see any EPICS glitches.  Once I plugged everything back in, I started to see them again.  However, I'm currently not seeing many glitches (with everything plugged back in), so I'm not sure what that means.  If unplugging everything did help, we still need to figure out which machine is the culprit.

  11664   Sun Oct 4 14:28:03 2015   jamie   Update   DAQ   more failed attempts at getting new fb working

I tried to look at fb1 again today, but still haven't made any progress.

The one thing I did notice, though, is that every hour on the hour the fb1 daqd process dies in an identical manner to how the fb daqd dies, with these:

[Sun Oct  4 12:02:56 2015] main profiler warning: 0 empty blocks in the buffer

errors right as/after it tries to write out the minute trend frames.

This makes me think that this new hardware isn't actually going to fix the problem we've been seeing with the fb daqd, even if we do get daqd "working" on fb1 as well as it's currently working on fb.

  11665   Sun Oct 4 14:32:49 2015   jamie   Configuration   CDS   CSD network test complete

Here's an example of the glitches we've been seeing, as seen in the StripTool trace of the front end oscillator:

You can clearly see the glitch at around T = -18.  Obviously during non-glitch times the sine wave is nice and cleanish (there are still very small discretisation steps from the EPICS sample times).

  11666   Mon Oct 5 10:11:35 2015   Steve   Update   VAC   RGA scan pd78 day 386

RGA background scan.

The IFO is closed off from the RGA with VM1.

CC4 (and CC1) is still flaky, and its interlock closes VM1.

 

Attachment 1: RGA_background.png
  11667   Mon Oct 5 11:25:21 2015   ericq   Update   SUS   ETMY OL laser dead

Gautam alerted me that the Y arm looked like it was being dithered, even though the ASS was turned off. I found that the ETMY OL signals were garbage, leading to the servos flipping back and forth between their rails. 

We went out to the ETMY table, and found the HeNe laser to be emitting a paltry <0.5mW; the OL QPD could not register the puny beam incident on it.

Here is the last 30 days of OL_SUM:

Steve will replace the laser this afternoon. 

  11668   Mon Oct 5 16:41:22 2015   Steve   Update   SUS   ETMY OL laser replaced

This JDSU 1103P laser, sn P892324, lived for 2 years. Its power output is 0.05 mW now.

It was replaced with brand new JDSU 1103P,  sn P919645, Mfg date 12/2014 with 2.75 mW output.

There is 0.14 mW of light returning to the QPD (= 7,250 counts), without AR 632 lenses.

  11669   Tue Oct 6 03:30:17 2015   ericq   Update   LSC   DRFPMI Progress

[ericq, Gautam]

Highlight of the night: the DRFPMI was held at arm powers > 110 for 20 seconds. ALS feedback was still running, but so was some nonzero REFL11 AO path action.

In short, time was spent finding the right FM trigger settings to keep the DRMI locked while CARM is fluctuating through resonance, what CARM offset to acquire DRMI lock at, order of operations of turning on AO / turning up overall CARM gain, etc. 

Sadly, for the past hour or so, the DRMI has refused to stay locked for more than ~20 seconds, so I haven't been able to push things much further. This is a shame, since I'm very nearly at the equivalent point in the PRFPMI locking script where the ALS control is turned off completely. 

  11670   Tue Oct 6 16:56:40 2015   gautam   Update   General   FOL fiber box revamp

[gautam, ericq]

We had a look at the IR beat (PSL+Xarm) today using the new FOL fiber box, and compared it to the green beat signal for the same combination. We first switched out the green Y beat input into the RF amplifiers on the PSL table with the PSL+Xarm IR beat input (so in all the plots, the BEATY channels really correspond to the IR beat for PSL+X). The IR and green beat notes were found without much difficulty, and we compared the beat signal PSDs for the green and IR signals (see Attachment #1 - arms were locked to green and the X slow control was turned on). The pink trace (labeled REF1) corresponds to the green beat signal, and was in good agreement with an earlier reference trace Eric had saved for the same signal. The teal trace (labeled REF0) corresponds to the the IR beat signal monitored simultaneously. 

We then went back to the PSL table to check the amplitude of the signal from the broadband fiber PDs using the Agilent network analyzer. An initial measurement yielded a beat note (@~50MHz) at ~-22dBm (17mV rms). We figured that by bypassing the 90-10 splitter in this path, we could get a stronger signal. But after switching out the fiber connections we found that the signal amplitude had fallen to ~-27dBm (10mV rms). As per my earlier measurements here, we expect ~600uW of light on the PD, and a quick calculation suggested the signal should be more like 60mV, so we used the fiber power meter to check the power levels after each of the couplers again. We then found that the fiber connector on the front panel of the box for the PSL input wasn't ideal (the laser power after the first 50-50 coupler was only ~250 uW, though the input was ~1.2 mW). The power after the first coupler also fluctuated unpredictably (<100 uW to 350 uW) in response to slightly tightening/loosening the fiber connections on the front panel. I then switched the PSL input to one of the two unused fiber connectors on the front panel (meant for the 10% of the beat signal for the DC readout), and found that this input behaved much better, with ~450 uW of power available after the first 50-50 coupler. The power going into the beat PD was also measured to be ~550uW, closer to what was expected. The beat signal peak is now ~-14dBm (~30mV rms).
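
(For reference, converting with the analyzer's 50 ohm input assumed: V_rms = sqrt(50 ohm x 10^(P_dBm/10) x 1 mW), so -22 dBm corresponds to ~18 mV rms, consistent with the ~17 mV quoted above.)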

We then once again repeated the comparison between green and IR beat signals - but while in the control room, I noticed that the beat signal amplitude on the network analyzer in the control room was fluctuating by nearly 1.5 divisions on the vertical scale - not sure what the reason for this is. A look at the PSD of the IR beat with higher power incident on the PD was also not encouraging (see blue trace in Attachment #1), it seems to have gotten worse in the 10-30 Hz range. We also looked at the coherence between the beat spectrum and the beat note amplitude in order to look for any linear coupling between the two, but from Attachment #2, we cannot explain the disparity between the green and IR beat spectra. This warrants further investigation.

Everything on the PSL table has now been restored to the configurations before these investigations (i.e. the Y+PSL green beat cable has been reconnected to the RF amplifier, and both green beat PDs have been powered back ON. The fiber PDs are powered OFF) 

Attachment 1: 20151006_Xbeat_psd.pdf
Attachment 2: 20151006_Xbeat_coherence.pdf
  11671   Thu Oct 8 04:48:50 2015   ericq   Update   LSC   DRFPMI Progress

Progress was made. CARM was stably locked on RF only. DARM was RF only for a few moments before I typed in a wrong number...

A change was made to the LSC model's triggering section to make the DRMI hold more reliably at zero CARM offset. Namely, the POPDC signal now has its absolute value taken before the trigger matrix. Even unwhitened, it occasionally would somehow go negative enough to break the DRMI trigger.

AUX X laser was acting up again. As before, tweaking laser current is the temporary fix.

  11672   Thu Oct 8 13:13:20 2015   Koji   Update   LSC   DRFPMI Progress

Please clarify: I wonder if you were at the zero offset for CARM and DARM or not. I am 25% excited right now.

  11673   Thu Oct 8 14:14:50 2015   ericq   Update   LSC   DRFPMI Progress
Quote:

Please clarify: I wonder if you were at the zero offset for CARM and DARM or not. 

Yes, this was at the full DRFPMI resonance.

  11674   Thu Oct 8 16:48:23 2015   Koji   Update   LSC   DRFPMI Progress

Awesome

  11675   Thu Oct 8 21:35:49 2015   rana   Update   LSC   DRFPMI Progress

Give us a lockloss or other kind of time series plot so we can bask in the glory.

  11676   Fri Oct 9 09:22:38 2015   ericq   Update   LSC   DRFPMI Progress

Look upon this three second lock, ye Mighty, and rejoice!

Attachment 1: oct8_allRF.pdf