40m Log
ID | Date | Author | Type | Category | Subject
  12763 | Fri Jan 27 17:49:41 2017 | jamie | Update | CDS | test of new daqd code on fb1

Just FYI I'm running a test of updated daqd code on fb1. 

fb1 has its own fiber to the DAQ network switch, so nothing had to be modified to do this test. This *should* not affect anything in the rest of the system, but as we all know these are famous last words... If something is going haywire, and you can't get in touch with me and can't figure out what else to do, you can just log on to fb1 and shut it down. It's not writing any data to any of the network filesystems.

The daqd code under test is from the latest advLigoRTS 3.2.1 tag, which has daqd stability fixes that will hopefully address the problems we were seeing last time I tried this upgrade.  We'll see...

I'm going to let it run over the weekend, and will check in periodically.

  12762 | Fri Jan 27 17:07:52 2017 | Lydia | Update | CDS | slow machine bootfest

Rebooted c1iscaux, c1auxex and c1auxey, which were all not responding to telnet. The watchdogs for the ETMs were turned off and then I keyed all 3 crates. All slow machines are responding to telnet now. Both green lasers locked to the arms, so I didn't do any burt restore.

  12761 | Fri Jan 27 15:36:17 2017 | Koji | Update | SUS | wire standoffs update

Very nice! I got excited.

  • Why don't you ask Calum and co. to check the groove size with their microscopes? Just give them the samples and the wire.
  • Do we want to make a simple "guitar" setup to measure the vibration Qs with an Al piece, glass prism, ungrooved sapphire, this grooved sapphire, grooved ruby, etc.?
  12760 | Fri Jan 27 14:50:04 2017 | Steve | Update | SUS | wire standoffs update

The 3 sapphire v-groove test cut pieces are back. They look good. The 0.0017" (43 micron) suspension wire is shown in some of the pictures.

  12759 | Fri Jan 27 00:14:02 2017 | gautam | Summary | IOO | IMC WFS RF power levels

It was raised at the Wednesday meeting that I did not check the RF pickup levels while measuring the RF error signal levels into the demod board. So I closed the PSL shutter and re-did the measurement with the same scheme. The detailed power levels (with no light incident on the WFS, so everything here is RF pickup) are reported in the table below.

IMC WFS RF pickup levels @ 29.5 MHz

WFS   Quadrant   Pin (pW)   Preflected (pW)
1     1          0.21       10.
      2          1.41       148
      3          0.71       7.1
      4          0.16       3.6
2     1          0.16       10.5
      2          1.48       166
      3          0.81       5.1
      4          0.56       0.33

These numbers can be subtracted from the corresponding columns in the previous elog to get a more accurate estimate of the true RF error signal levels. Note that the abnormal behaviour of Quadrant #2 on both WFS demod boards persists.
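As a concrete (if trivial) illustration, a short Python sketch of that subtraction, using the pickup numbers above and the well-aligned Pin column from elog 12748 below; the numbers are transcribed by hand, so treat the output as indicative only:

# Subtract the RF pickup (PSL shutter closed) from the measured input powers
# (IMC well aligned, elog 12748). All values in pW, transcribed from the tables.
pickup = {1: [0.21, 1.41, 0.71, 0.16],    # WFS1, quadrants 1-4
          2: [0.16, 1.48, 0.81, 0.56]}    # WFS2, quadrants 1-4
signal = {1: [50.1, 20.0, 28.2, 70.8],    # WFS1 Pin, IMC well aligned
          2: [100.0, 56.2, 125.9, 17.8]}  # WFS2 Pin, IMC well aligned

for wfs in (1, 2):
    corrected = [round(s - p, 2) for s, p in zip(signal[wfs], pickup[wfs])]
    print(f"WFS{wfs} pickup-corrected Pin [pW]: {corrected}")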

  12758 | Wed Jan 25 19:39:07 2017 | gautam | Update | IMC | 29.5 MHz modulation depth measurement plan

Just collecting some links from my elog searching today here for easy reference later.

  • EOM datasheet: Newfocus 4064 (according to this, the input impedance is 10 pF, and it can handle up to 10 W max input RF power).
  • An elog thread with some past measurement details: elog 5339. According to this, the modulation depth at 29.5 MHz is 4 mrad. The EOM's manual says 13 mrad/V @ 1000 nm, so we expect an input signal at 29.5 MHz of 0.3 V (pk?). But presumably there is some dependence of this coefficient on the actual modulation frequency, which I could not find in the manual. Also, Kiwamu's note (see next bullet) says that the EOM was measured to have a modulation depth of 8 mrad/V.
  • A 2015 update from Kiwamu on the triple resonant circuit: elog 11109. In this elog, there is also a link to quite a detailed note that Kiwamu wrote, based on his analysis of how to make this circuit better. I will go through this, perhaps we want to pursue installing a better triple resonant circuit...

I couldn't find any details of the actual measurement technique, though perhaps I just didn't look for the right keywords. But Koji's suggestion of measuring powers with the bi-directional coupler before the triple resonant circuit (but after the power combiner) should be straightforward. 
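As a quick sanity check of the drive-level numbers quoted above, a back-of-the-envelope sketch (the 8 mrad/V figure is the measured value attributed to Kiwamu's note):

# Required RF drive amplitude for a 4 mrad modulation depth at 29.5 MHz,
# for the spec'd and the measured EOM coefficients.
m_target = 4e-3          # target modulation depth [rad], from elog 5339
coefficients = {"spec (13 mrad/V)": 13e-3,
                "measured (8 mrad/V)": 8e-3}   # [rad/V]

for label, k in coefficients.items():
    v_pk = m_target / k
    print(f"{label}: required drive ~ {v_pk*1e3:.0f} mV (pk?)")

With the 13 mrad/V spec this reproduces the ~0.3 V estimate above; the measured coefficient would imply ~0.5 V instead.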

  12757 | Wed Jan 25 18:18:08 2017 | Koji | Summary | IOO | MCL / MCF / Calibration

Some notes on the FSS configuration: ELOG 10321

  12756 | Wed Jan 25 17:30:03 2017 | Koji | Update | IMC | 29.5 MHz modulation depth measurement plan

I'm afraid that the bidirectional coupler, designed to be 50 ohm in/out, disturbs the resonant circuit designed for the EOM, which is almost purely capacitive.

One possible way would be to measure the transfer function from the input of the triple resonant circuit to its output, with the EOM attached, using the active FET probe.

Another way: how about measuring the reflection before the resonant circuit? Then, of course, the triple resonant interface circuit sits between the power combiner and the EOM. In this case, we will see how much power is consumed in the EOM and the resonant circuit. Then we can use the previous measurement to get the conversion factor between power consumption and modulation depth. Kiwamu may give us his measurement.

  12755 | Wed Jan 25 15:41:29 2017 | Lydia | Update | IMC | 29.5 MHz modulation depth measurement plan

[Lydia, gautam]

To measure the modulation depth of the 29.5 MHz sideband, we plan to connect a bidirectional coupler between the EOM and the triple resonant circuit box. This will let us measure the power going into the EOM and the power in the reflection. According to the manual for the EOM (Newport 4064), the modulation depth is 13 mrad/V at a wavelength of 1000 nm. Before disconnecting these we will turn off the Marconi.

Hopefully we can be gentle enough that the EOM can be realigned without too much trouble. Before touching anything we'll measure the beam power before and after the EOM so we know what to match after.

If anyone has an objection to this plan, speak now or we will proceed tomorrow morning.

  12754 | Wed Jan 25 14:30:20 2017 | gautam | Update | CDS | slow machine bootfest

[gautam, lydia]

We rebooted c1psl, c1iscaux and c1aux, which were all showing the typical symptoms of responding to ping but not to telnet (and blanked-out EPICS fields on the MEDM screens). Keyed all these crates.

Restored burt snapshots for c1psl; the PMC locked fine, and the IMC is also locked now.

Johannes forgot to elog this yesterday, but he rebooted c1susaux following the usual procedure to avoid getting ITMX stuck. 

 

  12753 | Wed Jan 25 10:46:58 2017 | steve | Summary | SUS | oplev laser summary updated

                    Oct.  5, 2015              ETMY He/Ne replaced by 1103P, sr P919645,  made Dec 2014, after 2 years

                   Jan. 24, 2017              ETMY He/Ne replaced by 1103P,  sr P947049,  made Apr 2016,  after 477 hrs running hot

Attachment 1: oplev_sums.png
  12752 | Wed Jan 25 09:00:39 2017 | Max Isi | Update | Summary Pages | Cluster maintenance
LDAS has not recovered from maintenance, so the pages will remain unavailable until further notice.

> System-wide CIT LDAS cluster maintenance may cause disruptions to summary pages today. 
  12751 | Wed Jan 25 01:27:45 2017 | gautam | Update | IMC | IMC feedforward checkup

This is probably just a confirmation of something we discussed a couple of weeks back, but I wanted to get more familiar with using the multi-coherence (using EricQ's nice function from the pynoisesub package) as an indicator of how much feedforward noise cancellation can be achieved. In particular, in light of our newly improved WFS demod/whitening boards, I wanted to see if there was anything to be gained by adding the WFS to our current MCL feedforward topology.

I used a 1 hour data segment. The witness channels I looked at were the vertex seismometer (X, Y, Z) and the pitch and yaw signals of the two WFS, and I computed the coherence of the uncorrelated part of these multiple witnesses with MCL. I tried a few combinations to see what the theoretically best achievable subtraction is:

  1. Vertex seismometer X and Y channels - in the plot, this is "Seis only"
  2. Seis + WFS 1 P & Y
  3. Seis + WFS 2 P & Y
  4. Seis + WFS 1 & 2 P
  5. Seis + WFS 1 & 2 Y

The attached plot suggests that there is negligible benefit from adding the WFS, in any combination, to the MCL feedforward, at least from the point of view of the theoretically achievable subtraction.

I also wanted to put up a plot of the current FF filter performance, for which I collected 1 hour of data tonight with the FF on. While the feedforward does improve the MCL spectrum, I expected better performance judging by previous entries in the elog, which suggest that the FIR implementation almost saturates the achievable lower bound. The performance seems to have degraded particularly around 3 Hz, despite the multi-coherence being near unity at these frequencies. Perhaps it is time to retrain the Wiener filter? I will also look into installation of the accelerometers on the MC2 chamber, which we have been wanting to do for a while now...
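For reference, the single-witness version of this bound is straightforward to compute directly from the coherence; a minimal sketch with scipy (placeholder data, an assumed 256 Hz sample rate, and none of the pynoisesub multi-witness machinery):

import numpy as np
from scipy import signal

fs = 256                       # assumed sample rate of the downsampled data [Hz]
n = 3600 * fs                  # 1 hour segment

# Placeholder time series; in practice these would be MCL and one witness
# (e.g. a vertex seismometer channel) fetched from the frames.
witness = np.random.randn(n)
mcl = 0.5 * witness + np.random.randn(n)

f, coh = signal.coherence(mcl, witness, fs=fs, nperseg=16 * fs)

# Best-case residual after ideal Wiener subtraction with this single witness:
# residual ASD = sqrt(1 - coherence) x original ASD.
residual_factor = np.sqrt(1.0 - coh)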

Attachment 1: IMC_FF_potential.pdf
  12750 | Tue Jan 24 17:52:15 2017 | Craig | Update | Optical Levers | ETMY Oplev HeNe is replaced

Steve, Craig, Gautam

Today Steve replaced the ETMY He/Ne OpLev laser (sr P919645) with sr P947049, and Craig realigned it using new AR-coated lenses.

Attached are the RIN spectra of the OpLev QPD sum channels. The ETMY OpLev RIN is much lower than when Gautam took the same measurement yesterday.
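For anyone reproducing Attachment 1, a minimal sketch of how a RIN spectrum can be formed from a QPD sum time series (the sample rate and data below are placeholders):

import numpy as np
from scipy import signal

fs = 2048                                   # assumed sample rate of the QPD sum channel [Hz]
# Placeholder data; in practice fetch e.g. the ETMY OpLev QPD sum time series.
qpd_sum = 1.0e4 + 10.0 * np.random.randn(600 * fs)

f, psd = signal.welch(qpd_sum, fs=fs, nperseg=8 * fs)
rin_asd = np.sqrt(psd) / np.mean(qpd_sum)   # relative intensity noise [1/rtHz]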

Also attached are the pitch and yaw OLG TFs to ensure we still have acceptable phase margins at the UGF.

The last three plots show the optical layout of the ETMY OpLev, a QPD reflection blocker we added to the table, and green light to ETMY not being blocked by any changes to the OpLev.

Quote:

ETMY He/Ne body temp is ~45 C. The laser was seated loosely in the V-mount with black rubber padding.

The enclosure has a stinky plastic smell from this black plastic. This laser was installed on Oct 5, 2016. See 1 year plot.

Oplev servo turned off. Thermocouple attached to the He/Ne.

It will be replaced tomorrow morning.

Quote:

On the control room monitors, I noticed that the IR TEM00 spot was moving around rather a lot in the Y arm. The last time this happened had something to do with the ETMY Oplev, so I took a look at the 30 day trend of the QPD sum, and saw that it was decaying steeply (Steve will update with a long term trend plot shortly). I noticed the RIN also seemed rather high, judging by how much the EPICS channel reading for the QPD sum was jumping around. Attached are the RIN spectra, taken with the OL spot well centered on the QPD and the arms locked to IR. Steve will swap the laser out if it is indeed the culprit.

 

 

Attachment 1: OpLevRIN24Jan2017.pdf
Attachment 2: ETMYpit_24Jan2017.pdf
Attachment 3: ETMYyaw_24Jan2017.pdf
Attachment 4: IMG_3510.JPG
Attachment 5: IMG_3513.JPG
Attachment 6: IMG_3514.JPG
  12749 | Tue Jan 24 07:36:56 2017 | Max Isi | Update | Summary Pages | Cluster maintenance
System-wide CIT LDAS cluster maintenance may cause disruptions to summary pages today. 
  12748 | Tue Jan 24 01:04:16 2017 | gautam | Summary | IOO | IMC WFS RF power levels

Summary:

I got around to doing this measurement today, using a minicircuits bi-directional coupler (ZFBDC20-61-HP-S+), along with some SMA-LEMO cables.

  • With the IMC "well aligned" (MC transmission maximized, WFS control signals ~0), the RF power per quadrant into the demod board is of the order of tens of pW, up to ~100 pW.
  • With MC1 misaligned such that the MC transmission dropped by ~10%, the power per quadrant into the demod board is of the order of hundreds of pW.
  • In both cases, the peak at 29.5 MHz was well above the analyzer noise floor (>20 dB for the smaller RF signals), and was all that was visible in the 1 MHz span centered around 29.5 MHz (except for the side lobes described later).
  • There is an anomalously large reflection from the Quadrant #2 input to the demod board for both WFS.
  • The LO levels are ~ -12 dBm, ~2 dB lower than the -10 dBm that I gather is the recommended level from the AD831 datasheet.
Quote:

We should insert a bi-directional coupler (if we can find some LEMO to SMA converters) and find out how much actual RF is getting into the demod board.


Details:

I first aligned the mode cleaner, and offloaded the DC offsets from the WFS servos.

The bi-directional coupler has 4 ports: Input, Output, Coupled Forward RF and Coupled Reverse RF. I connected the LEMO going to the input of the demod board to the Input port, and connected the Output of the coupler to the demod board (via some SMA-LEMO adaptor cables). The two (20 dB) coupled ports were connected to the Agilent spectrum analyzer, whose inputs are 50 ohm and hence should be impedance matched to the coupled outputs. I set the analyzer to a 1 MHz span (29-30 MHz), IF BW 30 Hz, 0 dB input attenuation. It was not necessary to turn on averaging to resolve the peaks at ~29.5 MHz since the IF bandwidth was fine enough.

I took two sets of measurements, one with the IMC well aligned (I maximized the MC Trans as best as I could, to ~15,000 cts), and one with a macroscopic misalignment applied to MC1 such that the MC Trans fell to 90% of its usual value (~13,500 cts). The peak function on the analyzer was used to read off the peak height in dBm. I then converted this to RF power, which is summarized in the tables below. I did not account for the main line loss of the coupler, but according to the datasheet the maximum value is 0.25 dB, so these numbers should be accurate to ~10% (so I'm really quoting more significant figures than I should be).
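The conversion itself is just undoing the 20 dB coupling factor and the dBm scale; a short sketch (the 0.25 dB main-line loss correction mentioned above is optional and off by default):

def coupled_dbm_to_line_pw(peak_dbm, coupling_db=20.0, mainline_loss_db=0.0):
    """Convert a spectrum analyzer reading at a coupled port [dBm] to the
    power on the main line [pW], undoing the coupling factor and, optionally,
    the coupler's main-line insertion loss."""
    line_dbm = peak_dbm + coupling_db + mainline_loss_db
    return 10.0 ** (line_dbm / 10.0) * 1e-3 * 1e12   # dBm -> W -> pW

# Example: a -93 dBm peak at the forward-coupled port corresponds to ~50 pW
# on the line, i.e. the WFS1 quadrant 1 entry in the table below.
print(coupled_dbm_to_line_pw(-93.0))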

IMC well aligned

WFS   Quadrant   Pin (pW)   Preflected (pW)   P into demod board (pW)
1     1          50.1       12.6              37.5
      2          20.0       199.5             -179.6
      3          28.2       10.0              18.2
      4          70.8       5.0               65.8
2     5          100        19.6              80.0
      6          56.2       158.5             -102.3
      7          125.9      6.3               119.6
      8          17.8       6.3               11.5
 

MC1 misaligned

WFS   Quadrant   Pin (pW)   Preflected (pW)   P into demod board (pW)
1     1          501.2      5.0               496.2
      2          630.6      208.9             422
      3          871.0      5.0               866
      4          407.4      16.6              190.8
2     5          407.4      28.2              379.2
      6          316.2      141.3             175.0
      7          199.5      15.8              183.7
      8          446.7      10.0              436.7
 

For the well aligned measurement, there was ~0.4mW incident on WFS1, and ~0.3mW incident on WFS2 (measured with Ophir power meter, filter out).

I am not sure how to interpret the numbers for quadrants #2 and #6 in the first table, where the reverse coupled RF power was greater than the forward coupled RF power. But this measurement was repeatable, and even in the second table, the reverse coupled power from these quadrants is more than 10x that of the other quadrants. The peaks were also well above (>10 dB) the analyzer noise floor.
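One way to quantify how odd those quadrants are is the apparent return loss of each demod board input; a quick sketch using two entries from the well-aligned table above:

import math

def return_loss_db(p_forward_pw, p_reflected_pw):
    """Return loss = 10*log10(Pfwd/Prefl). A negative value means more power
    came back than went in, which is unphysical for a passive load and points
    to a measurement or hardware problem."""
    return 10.0 * math.log10(p_forward_pw / p_reflected_pw)

print(return_loss_db(28.2, 10.0))    # WFS1 Q3: ~ +4.5 dB, a plausible mismatch
print(return_loss_db(20.0, 199.5))   # WFS1 Q2: ~ -10 dB, the anomalous case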

I haven't gone through the full misalignment -> power coupled to the TEM10 mode algebra to see if these numbers make sense, but assuming a photodetector responsivity of 0.8 A/W, the product (P1P2) of the powers of the beating modes works out to ~tens of pW (for the IMC well aligned case), which seems reasonable, as something like P1 ~ 10 uW, P2 ~ 5 uW would lead to P1P2 ~ 50 pW. This discussion was based on me wrongly looking at numbers for the aLIGO WFS heads, and Koji pointed out that we have a much older generation here. I will try and find numbers for the version we have and update this discussion.

Misc:

  1. For the sake of completeness, the LO levels are ~ -12.1 dBm for both WFS demod boards (reflected coupling was negligible).
  2. In the input signal coupled spectrum, there were side lobes (about 10 dB lower than the central peak) at 29.44875 MHz and 29.52125 MHz (central peak at 29.485 MHz) for all of the quadrants. These were not seen for the LO spectra.
  3. Attached is a plot of the OSEM sensor signals during the time I misaligned MC1 (in both pitch and yaw, by approximately equal amounts). Assuming 2 V/mm for the OSEM calibration, the approximate misalignment was ~10 urad in each direction.
  4. No IMC suspension glitching the whole time I was working today.

 

Attachment 1: MC1_misalignment.png
  12747 | Mon Jan 23 17:24:26 2017 | Steve | Update | Optical Levers | ETMY Oplev HeNe is running hot

ETMY He/Ne 1103P body temp is ~45 C. The laser was seated loosely in the V-mount with black rubber padding.

The enclosure has a stinky plastic smell from this black plastic. This laser was installed on Oct 5, 2016. See 1 year plot.

Oplev servo turned off. Thermocouple attached to the He/Ne.

It will be replaced tomorrow morning.

Quote:

On the control room monitors, I noticed that the IR TEM00 spot was moving around rather a lot in the Y arm. The last time this happened had something to do with the ETMY Oplev, so I took a look at the 30 day trend of the QPD sum, and saw that it was decaying steeply (Steve will update with a long term trend plot shortly). I noticed the RIN also seemed rather high, judging by how much the EPICS channel reading for the QPD sum was jumping around. Attached are the RIN spectra, taken with the OL spot well centered on the QPD and the arms locked to IR. Steve will swap the laser out if it is indeed the culprit.

 

Attachment 1: ETMY_oplev__lasers.png
  12746 | Mon Jan 23 15:16:52 2017 | gautam | Update | Optical Levers | ETMY Oplev HeNe needs to be replaced

On the control room monitors, I noticed that the IR TEM00 spot was moving around rather a lot in the Y arm. The last time this happened had something to do with the ETMY Oplev, so I took a look at the 30 day trend of the QPD sum, and saw that it was decaying steeply (Steve will update with a long term trend plot shortly). I noticed the RIN also seemed rather high, judging by how much the EPICS channel reading for the QPD sum was jumping around. Attached are the RIN spectra, taken with the OL spot well centered on the QPD and the arms locked to IR. Steve will swap the laser out if it is indeed the culprit.

Attachment 1: ETMY_Oplev.pdf
  12745 | Mon Jan 23 10:24:01 2017 | Steve | Update | SUS | MC1 SUS electronics investigation

Two-day plot of glitching suspensions: MC3, ITMY and ETMX

Attachment 1: 3glitchingSUS.png
  12744 | Fri Jan 20 17:12:37 2017 | Steve | Update | PEM | EM172 mic is hooked up in the PSL

Gautam and Steve,

It is hanging in the middle of the PSL enclosure. It is wired to 1X1 to get plus and minus 15 V through a fuse. Its output is connected to FB input c17.

GV: C17 corresponds to "MIC 1" in the PEM model. So the output is saved as "C1:PEM-MIC_1_OUT_DQ"

Quote:

I added an EM172 to my soldered circuit and it seems to be working so far. I have taken spectra using the EM172 in ambient noise in the control room as well as in white noise from Audacity. My computer's speakers are not very good, so the white noise results aren't great, but this was mainly to confirm that the microphone is actually working.


 

Attachment 1: EM172c.jpg
  12743 | Fri Jan 20 14:42:12 2017 | Steve | Update | PEM | Caltech weather station

We should be able to connect to this station

  12742 | Fri Jan 20 11:16:30 2017 | gautam | Update | SUS | MC1 SUS electronics investigation

Both suspensions have been relatively well behaved for the better part of the last two days, since I made the satellite box swap. This morning, I set about re-enabling the damping and locking the MC. Judging by the wall StripTool, it stayed locked for about 30 mins or so, after which the glitching returned.

Attached is a screenshot of the sensor signals from MC1 and MC3 (second trend), and also the highest band (>30Hz) BLRMS output for the same 10 channels (full data sampled at 16Hz). Note that MC1 and MC3 satellite boxes remain swapped. So the glitches now have migrated to the MC3 channels.

I need to think about whether this is just coincidence, or if me re-enabling the damping has something to do with the re-occurrence of the glitching...


Addendum 4:30 pm: I've also re-aligned the Y arm. Its alignment has been stable over the last few hours; despite several mode cleaner lock losses in between, it recovers good IR transmission. The X arm has been re-aligned to green, but I can't get it locked to the IR - every time I turn the LSC output to ETMX on, there seems to be some large misalignment applied to it. c1iscaux was dead; I restarted it by keying the crate. I haven't had time to investigate the X arm locking in detail, and will continue to debug this.

  12741 | Thu Jan 19 19:56:09 2017 | rana | Update | SUS | MC1 SUS electronics investigation

Might be. Or it might be in the satellite box cabling. Hard to tell without a tester. I recommend you squish the cables on there and lock the MC and get back to the usual business. I'll check on sat. box with Ben.

Quote:

 

Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...

 

  12740 | Thu Jan 19 16:36:35 2017 | ericq | Update | Computer Scripts / Programs | nodus web apache simlinks too soft
Quote:

EQ: https://nodus.ligo.caltech.edu:30889/FE is live

This was done by adding "Options +Indexes" to /etc/apache/sites-available/nodus

I've added a little more info about the apache configuration on the wiki: ApacheOnNodus

  12739 | Thu Jan 19 12:00:10 2017 | gautam | Update | SUS | MC1 SUS electronics investigation

Going through the last ~20 hours of data, the MC1 sensor channels look glitch-free for the entire period. However, there is a ~10 min period around 1 PM UTC today when there were a couple of glitches ~80 counts in size in all the MC3 sensor channels. The attachment shows the full 2k data from all 10 channels (MC1 and MC3 sensors) around this time.

Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...

 

  12738 | Thu Jan 19 10:21:54 2017 | Ashley | Update | General | Preliminary Microphone Data

Brief summary: I am currently looking at the acoustic noise around both arms to see if there are any frequencies from machinery around the lab that stand out, and to see what we can remove/change. I am using a Bluebird microphone suspended with surgical tubing from the cable trays to isolate it from vibrations. I am also using a preamp and the SR785 spectrum analyzer, taking 6 sets of data every 1.5 meters (0 to 200 Hz, 200 Hz to 400 Hz, 400 Hz to 800 Hz, 800 Hz to 3200 Hz, 3.2 kHz to 12 kHz, 12 kHz to 100 kHz).

 

  • Attachment 1 is a PSD of the first 3 measurements (from 0 to 12 kHz) that I took every 1.5 meters along the X arm with the preamp and spectrum analyzer.
  • Attachment 2 is a BLRMS color map of the first 6 sets of data I took (from 2.4 m to 9.9 m).
  • Attachment 3 is a picture of the microphone setup with the surgical tubing.

Problems that occurred: settings on the preamp made the first set of data I took significantly smaller than the data I took with the 0 dB button off, and the spectrum analyzer was reading only from -50 to -50 dBVpk.
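For reference, a minimal sketch of the band-limited RMS computation behind the Attachment 2 color map, starting from an exported analyzer spectrum (the PSD below is a synthetic placeholder, and the band edges follow the measurement bands listed above):

import numpy as np

# Placeholder spectrum; in practice, load the exported analyzer trace
# (frequency [Hz], PSD [V^2/Hz]) for one measurement position.
f = np.linspace(0.0, 12000.0, 4801)
psd = 1e-8 / (1.0 + (f / 50.0) ** 2)

def blrms(f, psd, f_lo, f_hi):
    """Band-limited RMS from a one-sided PSD, integrated over [f_lo, f_hi)."""
    mask = (f >= f_lo) & (f < f_hi)
    return np.sqrt(np.trapz(psd[mask], f[mask]))

# One value per band per microphone position fills the color map.
for band in [(0, 200), (200, 400), (400, 800), (800, 3200), (3200, 12000)]:
    print(band, blrms(f, psd, *band))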

 

 

Attachment 1: xend_psd.png
Attachment 2: xblrms.png
Attachment 3: IMG_3734.JPG
  12737 | Thu Jan 19 08:25:12 2017 | Steve | Update | SUS | MC1 SUS electronics investigation
Quote:
Quote:

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

No change.

Attachment 1: MC1_MC3_ITMY_ETMX_sensors.png
Attachment 2: sensors_UL.png
  12736 | Wed Jan 18 18:44:53 2017 | gautam | Update | SUS | MC1 SUS electronics investigation
Quote:

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

  12735 | Wed Jan 18 15:17:38 2017 | rana | Update | Computer Scripts / Programs | nodus web apache simlinks too soft

I suppose that before directory listings were turned off we should have fixed the script to make an index.html, but that's how it goes with "up"-grades. How about re-allowing directory listings so that our old links for the webview work again?


EQ: https://nodus.ligo.caltech.edu:30889/FE is live

  12734 | Wed Jan 18 14:23:47 2017 | gautam | Update | SUS | MC1 SUS electronics investigation

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

  12733 | Wed Jan 18 12:46:47 2017 | ericq | Update | Computer Scripts / Programs | nodus web apache simlinks too soft
Quote:

I tried to follow these instructions today to make the Simulink Webview accessible:

controls@nodus|public_html > ln -sfn /users/public_html/FE /export/home/

But...I got a "403 Forbidden" message. What is the secret handshake to get this to work? And why have we added this extra step of security?

This link works for me: https://nodus.ligo.caltech.edu:30889/FE/c1als_slwebview.html. The problem with just /FE/ is that there is no index.html, and we have turned off automatic directory listings.

IIRC, this arrangement was due to the fact that authentication of some of the folders (maybe the wikis) was broken during the nodus upgrade, so there was sensitive information being publicly displayed. This setup gives us discretion over what gets exposed.

  12732 | Wed Jan 18 12:34:21 2017 | ericq | Summary | IOO | MCL / MCF / Calibration
Quote:

In the filter banks there were various versions of a 'dewhite' filter. They were all approximately z=150, p=15, g=1 @ DC, but with ~1% differences. I don't trust their provenance and so I've enforced symmetry and fixed their names to reflect what they are (150:15).

The filters were made in response to a measurement of the pentek whitening boards in 2015 (ELOG 11550), but this level of accuracy probably isn't important.

  12731 | Wed Jan 18 11:40:54 2017 | gautam | Update | SUS | MC1 SUS electronics investigation

After the repair of the faulty LEMO cable, I left MC1 with its watchdog off overnight. Unfortunately, it looks like the problem still persists. The first attachment shows a second trend plot for the past 15 hours. Towards the left end of the plot, you can see where I re-connected the LEMO cable for the LR/UR channels.

A couple of months ago, I added a BLRMS block for the IMC optics that calculates BLRMS for the shadow sensor outputs as well as the coil outputs. Looking at this trend overnight, I noticed that the glitches appear in the coil outputs as well, as shown in the plot below, which is for a 1 hour stretch last night (I used the full data from a 16 Hz coil output channel and not the BLRMS; I am not sure if there is a DQ'ed version of the coil outputs).

Zooming in further to one of these glitches, we can see that the glitches in the coil and shadow sensor signals are in fact coincident.

But given that the watchdog was turned off all this time, the only voltage going to the coils should be the DC bias voltages. So does this not support the hypothesis that the problem lies in the part of the signal chain that supplies the bias voltage to the coils?

Never mind - the "coil output" channel isn't a true readback of the voltage to the coil, but is the calculated damping output (which is not sent to the coils when the watchdog is shut down)...

  12730 | Wed Jan 18 10:41:14 2017 | gautam | Update | General | ETMX suspension electronics problems?

Summary pages show no kicking in the ETMX watchdogs from midnight to 6 AM (0800 - 1400 UTC):

https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20170118/sus/watchdogs/

  12729 | Tue Jan 17 21:31:57 2017 | gautam | Update | General | ETMX suspension electronics problems?

Last night, I plugged the ETMX suspension coils back into the satellite box. Tonight, we turned on the damping loops for ETMX. Rana centered the Oplev so we can use that as an additional diagnostic to see if the optic gets kicked around overnight. We will re-assess the situation tomorrow.

Sometime earlier today, Lydia noticed that the +/- 5V Sorensens at the X end were not displaying their nominal voltage/current values (as per the stickers on them). She corrected this.

  12728 | Tue Jan 17 21:29:52 2017 | gautam | Update | SUS | MC1 SUS electronics investigation

 

Quote:
 

After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints.

The faulty cable has been re-soldered (with heat shrink) and replaced. All 5 sensor signals appear normal on dataviewer now. I am leaving things in this state for the night, let us see if the glitches return overnight.

PSL shutter remains closed

  12727 | Tue Jan 17 20:47:23 2017 | rana | Update | CDS | Simulink Webview updated

Seems like this stops working every ~2 years. It's been busted since early 2016 according to cron, so I fixed up the paths, restored some missing files, and committed things to the SVN (with comments!), and now it's working and grabbing the web-viewable versions of the front end models. We just need to restore its viewability and then the world can watch our models any time.

Quote:

Back in 2011, JoeB wrote some entries on how to automatically update the Simulink webview stuff.

Somehow, the cron broke down over the years. I reran the matlab file by hand today and it worked fine, so now you can see the up to date models using the internet.

https://nodus.ligo.caltech.edu:30889/FE/

 

  12726 | Tue Jan 17 20:39:30 2017 | rana | Update | Computer Scripts / Programs | nodus web apache simlinks too soft

I tried to follow these instructions today to make the Simulink Webview accessible:

controls@nodus|public_html > ln -sfn /users/public_html/FE /export/home/

Quote:

The story is: we currently don't expose the whole /users/public_html folder. Instead, we are symlinking the folders from public_html to /export/home/ on nodus, which is where apache looks for things

So, I fixed the links on the Core Optics page by running:

controls@nodus|~ > ln -sfn /users/public_html/40m_phasemap /export/home/

But...I got a "403 Forbidden" message. What is the secret handshake to get this to work? And why have we added this extra step of security?

  12725 | Mon Jan 16 23:25:07 2017 | gautam | Update | SUS | MC1 SUS electronics investigation

[rana,gautam]

Summary:

  • MC1 glitchy behaviour is back
  • Found a broken LEMO cable, left unplugged for the night -> to be repaired tomorrow
  • Further diagnosis to follow

During the course of Rana's inspection of the general state of the IFO, he commented that there seemed to be several seismic-related IMC lock losses in the time that he had been observing it. This issue looked suspiciously like the MC1 glitches I had noticed sometime late last year, especially since each time the IMC would unlock, we could see significant amounts of motion on MC REFL. To diagnose, we did the following:

  1. Closed PSL shutter
  2. Ramped down the gains of the MC1 damping loops by a factor of 1000 in ~4 secs using z step (see the sketch after this list)
  3. Shut down the watchdog for MC1
  4. Observed dataviewer traces for glitches
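The ramp in step 2 was done with the command-line z step tool; purely for reference, a rough sketch of scripting the same thing with pyepics (the channel name, step count, and timing below are illustrative, not what was actually typed):

import time
import numpy as np
from epics import caget, caput

chan = "C1:SUS-MC1_SUSPOS_GAIN"   # illustrative damping gain channel
g0 = caget(chan)                  # assumes a non-zero starting gain

# Step the gain down by a factor of 1000 over ~4 seconds, logarithmically,
# preserving the sign of the original gain.
sgn = 1.0 if g0 >= 0 else -1.0
mag = abs(g0)
for g in sgn * np.logspace(np.log10(mag), np.log10(mag / 1000.0), 20):
    caput(chan, g)
    time.sleep(0.2)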

Sure enough, there were several glitches that occurred in all 5 sensor channels. These glitches varied in size from a few counts (the smaller ones) to 60-70 counts for the bigger ones. In the past, squishing the LEMO connector on the front of the PD whitening board (D000210) had apparently made the glitching go away. So tonight, for starters, we squished everything else - Sat. Box connectors, the breakout board from the Sat. Box to the whitening board on the back of 1X6, and the DB connector on the front of the whitening board. This had no effect - the glitching remained consistent.

Next, Rana pulled out two of the three 4pin LEMOs, and left only those corresponding to UL/LL plugged in - but the glitching persisted in these two channels. We then pulled out the board. It was installed in 1998, but has a sticker on it that says "fixed in 2003". Not sure what the fix was. Visual inspection of the circuit didn't show anything obviously faulty, but it did look like the two MAX333A quad switches (these control whether the whitening is bypassed or not) had been replaced at some point. There are other undesirable features, such as the use of thick film resistors, but nothing that would explain the glitchy behaviour.

Next, we re-inserted the whitening board back into its original slot in the Eurocrate, but switched the cables (both D sub and LEMO, but only on the whitening board end) between the boards for MC1 and MC3 (i.e. MC1 cables were routed through the whitening board that was originally used for MC3, and vice-versa). But the glitches remained consistent on the MC1 channels. So it looks like the board is not a likely culprit.

Finally, we went in and squished all the cables from the PD whitening board to the ADC (via an AA filter board). For some of the LEMO cables from the whitening board, the LEMO backshells were not properly tightened. Rana fixed these before putting them back in. Some of the connectors were also not pushed in tightly enough, Rana heard the click when he pushed them in. The cables from the adaptor board to the ADC itself looked fine, it was screwed on at both ends, and all these connections looked snug enough. In the interest of completeness, Rana also pushed in the backplane connectors on the Eurocrate (these supply the signals from the BIO cards to switch the whitening ON/OFF). The one corresponding to MC1 was indeed a little loose.

Coming back to the control room, we saw that the MC1 LR sensor was dead. After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints. Could this explain the glitchy behaviour? Perhaps, but the glitches remained in the 3 channels that were connected. Anyways, I will repair this cable tomorrow, and we can see if this has fixed the problem or not..


Some misc points:

  1. Regarding the adaptor boards that take the PD signals from the satellite box and route it to the whitening board, there are some clamps that hold the IDE connectors in place for MC1, MC2 and MC3 boards, but not for the others (see attached picture). Steve, can we install clamps for all of the boards? [taken care of, see here]
  2. The whitening boards are not screwed in place into the Eurocrate. This should be rectified.

PSL shutter is closed, MC1 watchdog is shutdown for the night.

Attachment 1: 20170116_231625.png
Attachment 2: IMG_7175.JPG
Attachment 3: IMG_7174.JPG
  12724 | Mon Jan 16 22:03:30 2017 | jamie | Configuration | Computers | Megatron update
Quote:
 

We should consider upgrading a few of our workstations to Ubuntu 14 LTS to see how painful it is to run our scripts and DTT and DV. Better to upgrade a bit before we are forced to by circumstance.

I would recommend upgrading the workstations to one of the reference operating systems, either SL7 or Debian squeeze, since that's what the sites are moving towards.  If you do that you can just install all the control room software from the supported repos, and not worry about having to compile things from source anymore.

  12723 | Mon Jan 16 21:03:47 2017 | rana | Summary | IOO | MCL / MCF / Calibration

Oot on the streets and in the chat rooms, people often ask, "What is up with the MC_F calibration?".

Not being sure of the wiring in the c1ioo model, I have made this screencap of today's model and put it here. MC_LENGTH and MC_FREQ are the filter banks which would calibrate these channels. In the filter banks there were various versions of a 'dewhite' filter. They were all approximately z=150, p=15, g=1 @ DC, but with ~1% differences. I don't trust their provenance and so I've enforced symmetry and fixed their names to reflect what they are (150:15). I have also turned on one filter in MC_FREQ so that now the whitening of the Pentek Interface board is compensated.
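For reference, the shape of the renamed 150:15 filter (zero at 150 Hz, pole at 15 Hz, unity gain at DC) is easy to sketch with scipy:

import numpy as np
from scipy import signal

# 150:15 "dewhite" filter: zero at 150 Hz, pole at 15 Hz, gain 1 at DC,
# i.e. the response falls by 20 dB between 15 Hz and 150 Hz.
f_zero, f_pole = 150.0, 15.0
sys = signal.lti([-2 * np.pi * f_zero], [-2 * np.pi * f_pole], f_pole / f_zero)

f = np.logspace(-1, 3, 500)                 # 0.1 Hz to 1 kHz
w, mag_db, phase_deg = signal.bode(sys, w=2 * np.pi * f)
print(mag_db[0], mag_db[-1])                # ~0 dB at DC, ~-20 dB above 150 Hz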

Why is this TF 1/f? It should be -20 dB/decade if MC_F is in units of Hz* and MCL is a pendulum response. Perhaps it's because the combination of the Koji summing box, the Thorlabs HV driver, and the Pomona box forms an additional 1/f? If so, this would explain the TF we see. Once we get confirmation from Koji, we can load the TF into the MC_FREQ filter bank, and then MC_F will be in units of Hz (as will the summary pages).

(along the way I've also turned off the craaaazzzy servo input enable tickling that gets put in the MC AutoLocker every April Fool's leap year - resist the temptation)

Since we have a frequency counter system here and some oscillators, I wonder if we can just calibrate the MC_L and MC_F directly using a mixer lashed up to one of the counters. If so, and we can get the stabilized laser frequency noise down below 10 mHz/rHz, maybe this is a viable alternative method to the photon calibrators. Counting zero crossings is more honest than counting photons.

Attachment 1: c1ioo_zoom_MCLF.png
Attachment 2: MCL.pdf
  12722 | Mon Jan 16 18:54:01 2017 | rana | Update | SUS | BS: whitening re-engaged

Found that the BS whitening was off. Gautam says that "it has always been that way" and "there's nothing in the elog about this" and "I have no special relationship with Putin".

I looked at DV and DTT while turning the OSEM whitening back on. As expected, the sensor noise improved by 10x above 10 Hz. The time series shows no problems - it's just less fuzzy now.

All OSEM spectra after the switch are shown in the upper panel of the plot. The lower panel shows a comparison of BS UL before/after. To rotate the DTT landscape PDF output I typed this:

pdftk BS-white.pdf cat 1N output BSwhite.pdf

"if you see something, do something"

Attachment 1: BSwhite.pdf
  12721 | Mon Jan 16 12:49:06 2017 | rana | Configuration | Computers | Megatron update

The "apt-get update" was failing on some machines because it couldn't find the 'Debian squeeze' repos, so I made some changes so that Megatron could be upgraded.

I think Jamie set this up for us a long time ago, but now the LSC has stopped supporting these versions of the software. We're running Ubuntu12 and 'squeeze' is meant to support Ubuntu10. Ubuntu12 (which is what LLO is running) corresponds to 'Debian-wheezy' and Ubuntu14 to 'Debian-Jessie' and Ubuntu16 to 'debian-stretch'.

We should consider upgrading a few of our workstations to Ubuntu 14 LTS to see how painful it is to run our scripts and DTT and DV. Better to upgrade a bit before we are forced to by circumstance.

I followed the instructions from software.ligo.org (https://wiki.ligo.org/DASWG/DebianWheezy) and put the recommended lines into the /etc/apt/sources.list.d/lsc-debian.list file.

but I still got 1 error (previously there were ~7 errors):

W: Failed to fetch http://software.ligo.org/lscsoft/debian/dists/wheezy/Release  Unable to find expected entry 'contrib/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)

Restarting now to see if things work. If it's OK, we ought to change our squeeze lines to wheezy for all workstations so that our LSC software can be upgraded.

  12720 | Sat Jan 14 22:39:30 2017 | rana | Summary | SUS | ITMY is drifting?

https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20170114/sus/susdrift/

ITMY is not like the others. Real or just OSEM madness?

  12719 | Sat Jan 14 12:36:57 2017 | ericq | Update | DAQ | minute trends missing

Yes, writing minute trends causes hourly FB crashes in the current state of things. The "raw" minute trending is turned on, but I think that these are unknown to nds.

  12718 | Sat Jan 14 12:12:03 2017 | rana | Update | DAQ | minute trends missing

Did we turn off minute trend writing in one of the recent FrameBuilder debug sessions? It seems we only have second trends in 2016. Maybe this explains why it's so slow to get minute trends? Dataviewer has to rebuild them from the second trends.

controls@nodus|frames > l
total 64
drwx------   2 root     root     16384 Jun  8  2009 lost+found/
drwxr-xr-x   2 controls controls  4096 Jul 14  2015 tmp/
-rw-r--r--   1 controls controls     0 Jul 14  2015 test-file
drwxr-xr-x   5 controls controls  4096 Apr  7  2016 trend/
drwxr-xr-x   4 root     root      4096 Apr 11  2016 archive/
drwxr-xr-x 789 controls controls 36864 Jan 13 19:34 full/
controls@nodus|frames > cd trend
controls@nodus|trend > l
total 3340
drwxr-xr-x 258 controls controls 3342336 Jul  6  2015 minute_raw/
drwxr-xr-x 387 controls controls   36864 Nov  5  2015 minute/
drwxr-xr-x 969 controls controls   36864 Jan 13 19:49 second/

  12717 | Sat Jan 14 00:53:05 2017 | rana | HowTo | DAQ | Get 40m data using NDS2 and Python

Minute trend data seems not to be available using the NDS2 server. It's super slow using dataviewer from the control room.

Did some digging into the NDS2 config on megatron. It hasn't been updated in 2 years.

All of the stuff is run by the user 'nds2mgr'. The CronTab for this user was running all the channel name updates and server restarts at 3 AM each day; I've moved it to 5:05 AM. I don't know the password for this user, so I just did 'sudo su nds2mgr' to become him.

On megatron, in /home/nds2mgr/nds2-megatron/ there is a list of channels and configs. The file for the minute trend (C-M-ChanList.txt), hasn't been updated since Nov-2015. ???

  12716 | Fri Jan 13 23:39:46 2017 | gautam | Update | General | ETMX suspension electronics problems?

[Koji,gautam]

After Koji's leap second fix, we were playing around with the X arm locking. In particular, we experimented with the limit value on the X arm LSC filter bank - the nominal value is 4000, and we wanted to see if we could increase this without kicking the optic while acquiring arm lock. We initially increased it to 8000, and then turned it off altogether. Then we rapidly turned the output of the servo ON/OFF, and looked at the arm transmission to see if it came back to the level before unlocking, as an indication of whether the optic was kicked.

These trials suggested that a value of 8000 for the limiter was OK, so we left the LSC mode on with the limiter set to 8000. But just as we were about to leave for the night, I noticed on the wall StripTool that the X arm was unlocked. Investigating, we found that the green wasn't even locking to a HOM. Further investigation of the Oplev spot showed that ETMX had received a large kick (both pitch and yaw errors were ~200 urad). ITMX was unaffected.

We initially tried lowering the LSC limit value back to 4000, then used first the Oplev spot and then the green to align the arm. But turning on LSC misaligned the arm after acquiring lock. So we decided to leave LSC off, thinking that the notorious ETMX suspension problems had resurfaced. As a diagnostic, we figured we'd leave the watchdog tripped and use the Oplev to see if the optic was getting kicked. But the act of turning the watchdog off kicked the optic again (WHY?!).

Looking at the ETMX sus screen, turning off all the damping and LSC (but leaving the watchdog on) still leaves a non-zero offset in the "Vmon" field, between 0.02 and 0.05 V depending on the coil. Turning the watchdog OFF takes all of these to 0.009 V, although I can see the LR value fluctuating between 0.004 V and 0.009 V. I went to the X end and squished all the cables on the Sat. Box, but the problem persisted.

At this time, I can't think of any explanation, so I am giving up for the night. To avoid unnecessarily kicking the optic, I am going to unplug the suspension from the Sat. Box and leave one of our tester boxes plugged in; let's see if that sheds any light on the situation...


Notes:

  1. The +/-20V Sorensens at this end were "tripped" for a few days after the power glitch until they were reset and turned back on yesterday. But this should not affect Vmon, as these Sorensens only supply the DC voltage for the coil bias, which is a slow machine channel?
  2. The X arm was staying locked and well aligned for hours on end earlier this afternoon - in fact it was locked for about 2 hours, 6-8 hours ago; I can still see the trace on the wall StripTool...
  12715 | Fri Jan 13 21:41:23 2017 | Koji | Update | CDS | DC errors

I think I fixed the DC error issue

1. I added the leap second (leapsecond ?) entry for 2016/12/31, 23:60:00 UTC to daqdrc


[OLD]
set gps_leaps = 820108813 914803214 1119744016;
[NEW]
set gps_leaps = 820108813 914803214 1119744016 1167264018;

2. Restarted FB and all realtime models

Now I don't see any RED light.

  12714 | Fri Jan 13 21:32:49 2017 | rana | HowTo | DAQ | Get 40m data using NDS2 and Python

The attached file is a Python notebook that you can use to get data. Minimal syntax.
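If you just want the bare calls outside the notebook, a minimal sketch with the nds2-client Python bindings (the server address, GPS times, and channel name below are assumptions, not taken from the notebook):

import nds2

# Connect to the 40m NDS2 server (assumed host/port).
conn = nds2.connection('nds40.ligo.caltech.edu', 31200)

# Fetch 10 minutes of an example channel starting at an example GPS time.
gps_start = 1168000000
gps_stop = gps_start + 600
bufs = conn.fetch(gps_start, gps_stop, ['C1:IOO-MC_F_DQ'])

data = bufs[0].data                      # numpy array of samples
fs = bufs[0].channel.sample_rate         # sample rate [Hz]
print(len(data), fs)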

Attachment 1: get40mData.ipynb
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Get some 40m data using NDS"
   ]
  },
  {
... 137 more lines ...