ID   Date   Author   Type   Category   Subject
  16201   Tue Jun 15 11:46:40 2021   Ian MacMillan   Update   CDS   SUS simPlant model

I have added more degrees of freedom. The model now includes x, y, z, pitch, yaw, and roll, and is controlled by a matrix of transfer functions (see Attachment 2). I have added 5 control filters to individually control UL, UR, LL, LR, and side. Eventually this should become a matrix too, but for the moment this is fine.

Note the Unit Delay blocks in the control path in Attachment 1; the model will not compile without these blocks.
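For intuition, each suspension DOF behaves as a damped harmonic oscillator. A minimal Python sketch of a single-DOF plant transfer function (illustrative f0, Q, and mass only, not the real SOS parameters, and not the Simulink model itself):

import numpy as np
from scipy import signal

f0, Q, m = 1.0, 5.0, 0.25              # resonance [Hz], quality factor, mass [kg] (assumed)
w0 = 2 * np.pi * f0
# x/F = 1 / (m * (s^2 + (w0/Q)*s + w0^2)) for a single damped-pendulum DOF
plant = signal.TransferFunction([1 / m], [1, w0 / Q, w0**2])

f = np.logspace(-1, 2, 500)            # 0.1 Hz to 100 Hz
w, mag, phase = signal.bode(plant, 2 * np.pi * f)
print(f"DC gain: {mag[0]:.1f} dB; peak near {f[np.argmax(mag)]:.2f} Hz")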

  16203   Tue Jun 15 21:48:55 2021   Koji   Update   CDS   Opto-isolator for c1auxey

If my understanding is correct, the (photo-receiving) NPN transistor side of the optocoupler is energized through the Acromag. The LED side should be driven by the coil driver circuit. This is properly done for the "enable mon" outputs through 750 Ohm and +V. However, "Run/Acquire" is a relay switch and there is nothing to drive the line. I propose adding the same pull-up network to the Run/Acquire outputs. This way all 8 outputs become identical and symmetric.

We should test whether this configuration works properly. This can be done with just a manual switch, R = 750 Ohm, and a +V supply (+18 V, I guess).

  16205   Wed Jun 16 17:24:29 2021   Yehonathan   Update   CDS   Opto-isolator for c1auxey

I updated the wiring diagram according to Koji's suggestion. According to the isolator manual, this configuration requires that the isolator input be configured as PNP.

Additionally, when the switch in the coil driver is open, the LED in the isolator signals an on-state. Therefore, we might need to configure the Acromag to invert the input.

There are also the Run/Acquire channels, which we might need to add to the wiring diagram. If we do need to read them using slow channels, we will have to pull them up like the EnableMon channels shown in the wiring diagram.

  16206   Wed Jun 16 19:34:18 2021   Koji   Update   General   HVAC

I made a flow sensor with a stick and tissue paper to check the airflow.

- The HVAC indicator was not lit, but it was just a bulb problem. The replacement bulb is inside the gray box.

- I went to the south arm. There are two big vent ducts for the outlets and intakes. Neither is flowing air.
  The current temp at 7 pm was ~30 degC. Max and min were 31 degC and 18 degC.

- Then I went to the vertex and the east arm. The outlets and intakes are flowing.

  16207   Wed Jun 16 20:32:39 2021   Yehonathan   Update   CDS   Opto-isolator for c1auxey

I installed 2 additional isolators in the Acromag chassis. I set all the input channels to PNP. I ran the digital inputs (EnableMon channels) through these isolators according to the previous post.

I tested the digital inputs in the following way:

I connected an 18 V voltage source to the signal wire under test through a 1 kOhm resistor. I connected the GND of the voltage source to the RTN wire of the feedthrough. When the voltage source was connected, the LED on the isolator turned on and the EPICS channel under test showed Enabled. When I disconnected the voltage source or shorted the signal wire to GND, the LED on the isolator turned off and the EPICS channel showed a Disabled state.
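To watch the channel react while toggling the input, something like this pyepics sketch could be used (the channel name is a placeholder, not the actual c1auxey channel):

import time
import epics

CH = "C1:SUS-ETMY_UL_ENABLEMon"        # hypothetical channel name

def report(pvname=None, value=None, **kw):
    # called by pyepics on every value change
    print(f"{time.strftime('%H:%M:%S')} {pvname} -> {'Enabled' if value else 'Disabled'}")

epics.camonitor(CH, callback=report)
time.sleep(60)                         # toggle the test voltage during this window
epics.camonitor_clear(CH)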

  16208   Thu Jun 17 11:19:37 2021   Ian MacMillan   Update   CDS   CDS Upgrade

Jon and I tested the ADC and DAC cards in both of the systems on the test stand. We had to swap out an 18-bit DAC for a 16-bit one that worked, but now both machines have at least one working ADC and DAC.

[Still working on this post. I need to look at what is in the machines to say everything.]

  16209   Thu Jun 17 11:45:42 2021   Anchal, Paco   Update   SUS   MC1 Gave trouble again

TL;DR

MC1 LL sensor showed signs of fluctuating large offsets. We tried to find the issue in the box but couldn't find any. On power cycling, the sensor went back to normal. But while putting the box back, we bumped something and the c1susaux slow channels froze. We tried to reboot it, but that didn't work, and the channels no longer exist.


This morning we found that the IMC had struggled to lock all night (see Attachment 1). We had some indication yesterday evening that the MC1 LL sensor PD had a higher variance than usual, and Paco had to reset the WFS offsets because they had integrated the noise from this sensor. Something similar happened last night: a false offset and its fluctuation overwhelmed the WFS and MC1 got misaligned, making it impossible for the IMC to lock.

In the morning, Paco again reset the WFS offsets, but now we were sure that the PD variance from the MC1 LL OSEM was very high. See Attachment 2 for how only 1 OSEM shows higher noise in comparison to the other 4 OSEMs. This behavior is similar to what we saw earlier in 16138, but for the UL sensor. Koji and I fixed that in 16139, and we tested all the other channels too.

So Paco and I went ahead and took out the MC1 satellite amplifier box (S2100029, D1002812), opened the top, and checked all the PD channel testpoints with no input current. We didn't find anything odd. Next we checked the LED driver circuit testpoints with LED OUT and GND shorted. We got 4.997 V on all the LED MON testpoints, which indicates normal functioning.

We hooked everything back up on the MC1 satellite box and checked the sensor channels again on the medm screens. To our surprise, it started functioning normally. So maybe just a power cycle was required, but we still don't know what caused this issue.

BUT when I (Anchal) was plugging the power cables and DB25 connectors back in on the back side in 1X4, after moving the box back into the rack, we found that the slow channels had stopped updating. They just froze!

We got worried for some time because the negative power supply indicator LEDs on the acromag chassis (which is just below the MC1 satellite box) were not ON. We checked the power cables and had to open the side panel of the 1X4 rack to see how they are connected. We found that there is no third wire in the power cables; the acromag chassis takes only a single-rail supply. We confirmed this by looking at another acromag chassis at the X end. We pasted a note on the chassis for future reference that it uses only the positive rail, so the negative LED monitors are normally not ON.

Back to the frozen-acromag issue: we conjectured that maybe the ethernet connection was broken. The DB25 cables for the satellite box are a bit short and pull other cables around with them when connected. We checked all the ethernet cabling; it looked fine. On the c1susaux computer, we saw that the monitor LED for ethernet port 2, which is connected to the acromag chassis, was solid ON, while the other one (which is probably the connection to the switch) was blinking.

We tried telnet to the computer; it didn't work, with the host refusing the connection from the pianosa workstation. Pinging the c1susaux computer did work, though. So we concluded that most probably the EPICS modbus server hosting the slow channels on c1susaux is unable to communicate with the acromag chassis, hence the solid LED on that ethernet port instead of a blinking one. We checked the computer restart procedure page for SLOW computers on the wiki, which said that if telnet is not working, we can hard-reboot the computer.

We hard-rebooted the computer by long-pressing the power button and then pressing it again to power back on. We repeated this process 3 times with the same result: the ethernet port 2 LED (acromag chassis) would blink, but the ethernet port 1 LED (connected to the switch) would not turn ON. We now cannot even ping the machine, let alone telnet into it. All SUS slow monitor channels are absent now, of course. We also tried pressing the reset button once (which the manual said would reboot the machine), but we got the same outcome.

Now we have decided to stop poking around until someone with more experience can help us.


Bottom line: we don't know what caused the LL sensor issue, so it has not been fixed and can happen again. We lost all C1SUSAUX slow channels, which are the OSEM and coil slow monitor channels for PRM, BS, ITMX, ITMY, MC1, MC2, and MC3.

  16210   Thu Jun 17 16:37:23 2021   Anchal, Paco   Update   SUS   c1susaux computer rebooted

Jon suggested rebooting the acromag chassis and then the computer, and we did this without success. Then Koji suggested we try running ifup eth0, so we ran `sudo /sbin/ifup eth0`, which put c1susaux back on the martian network, but the modbus service was still down. We switched off the chassis and rebooted the computer, and we had to run `sudo /sbin/ifup eth0` again (why do we need to do this manually every time?). We switched on the chassis, but still no channels. `sudo systemctl status modbusIOC.service` gave us an inactive (dead) status, so we ran `sudo systemctl restart modbusIOC.service`.

The status became:


● modbusIOC.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
   Active: inactive (dead)
           start condition failed at Thu 2021-06-17 16:10:42 PDT; 12min ago
           ConditionPathExists=/opt/rtcds/caltech/c1/burt/autoburt/latest/c1susaux.snap was not met

After another iteration we finally got a modbusIOC.service OK status, and we then repeated Jon's reboot procedure. This time the acromags were on but reading 0.0, so we just needed to run `sudo /sbin/ifup eth1`, and finally some sweet slow channels were read. As a final step we burt-restored to today's 05:19 AM c1susaux.snap file and managed to relock the IMC >> will keep an eye on it... Finally, in the process of damping all the suspended optics, we noticed some OSEM channels on BS and PRM reading 0.0 (they show red as we browse them)... We succeeded in locking both arms, but this remains an unknown for us.

  16211   Thu Jun 17 22:19:12 2021   Koji   Update   Electronics   25 HAM-A coil driver units delivered

25 HAM-A coil driver units were fabricated by Todd, and I've transported them to the 40m.
We already received 2 units earlier.
The last unit has been completed, but Luis wants to use it for some A+ testing, so 1 more unit is coming.

  16212   Thu Jun 17 22:25:38 2021   Koji   Update   SUS   New electronics: Sat Amp / Coil Drivers

This is a belated report: we received 5 more sat amps on June 4th. (I said 7 more, but it was 6 more.) So we still have one more sat amp coming from Todd.

- 1 already delivered long ago
- 8 received from Todd -> DeLeone -> Chub; they are in the lab
- 11 units received on May 21st
- 5 units received on Jun 4th
Total: 1 + 8 + 11 + 5 = 25
1 more unit is coming

 

Quote:

11 new Satellite Amps were picked up from Downs. 7 more are coming from there. I have one spare unit I made. 1 sat amp has already been used at MC1.

We had 8 HAM-A coil drivers delivered from the assembling company. We also have two coil drivers delivered from Downs (Anchal tested)

 

  16215   Fri Jun 18 19:02:00 2021   Yehonathan   Update   BHD   SOS assembly

Today I glued some magnets to dumbbells.

First, I took 6 magnets (the maximum I can glue in one go) and divided them into 3 north and 3 south, each triplet on a different razor (Attachment 1).

I put the gluing fixture I found on top of these magnets so that each magnet sits in a hole in the fixture. I closed the fixture, but not all the way, so that the dumbbells could get in easily (Attachment 2).

I prepared EP-30 glue according to this DCC document. I tested the mixture by putting some of it in the small toaster oven in the cleanroom for 15 min at 200 degrees F.

The first two batches came out sticky and soft, so I discarded that glue cartridge and opened a new one. The oven-test results with the new cartridge were much better: a smooth, hard surface. I picked up some glue with a needle and applied it to the surfaces of 6 dumbbells I had prepared in advance. I dropped the dumbbells, glue side down, into the magnet holes in the fixture (Attachment 3). I tightened the fixture and put some weight on it. I let it cure over the weekend.

I also pushed the cut Viton tips that Jordan cleaned into the vented screws. While screwing the small EQ stops into the lower clamps, I found some problems: 4 of the lower clamps need rethreading. This is quite urgent because without those 4 clamps we don't have enough SOS towers. Moreover, I found that the screws we bought from UC Components to hold the lower clamps on the SOS towers are silver plated. This is a mistake in the SOS schematics (part 23) - they should be SS.

  16216   Fri Jun 18 23:53:08 2021   Koji   Update   BHD   SOS assembly

Then, can we replace the four small EQ stops at the bottom (barrel surface) with two 1/4-20 EQ stops? This will require drilling the bottom EQ stop holders (two per SOS).

 

  16217   Mon Jun 21 17:15:49 2021   Ian MacMillan   Update   CDS   CDS Upgrade

Anchal and I wrote a script (Attachment 1) that tests the ADC and DAC connections by sweeping offsets from -3000 to 3000 and reading them back on the INMON channels (a sketch of the idea follows below). We could not run it because some of the channels seemed to be frozen.
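A minimal pyepics sketch of the idea (the channel names below are placeholders, not the actual test-stand channels):

import time
import numpy as np
import epics

DAC_OFFSET = "C1:TST-DAC_CH0_OFFSET"   # hypothetical channel name
ADC_INMON  = "C1:TST-ADC_CH0_INMON"    # hypothetical channel name

offsets = np.linspace(-3000, 3000, 61)
readings = []
for off in offsets:
    epics.caput(DAC_OFFSET, off, wait=True)
    time.sleep(0.5)                    # let the value settle
    readings.append(epics.caget(ADC_INMON))

# save (offset, reading) pairs for later plotting
np.savetxt("loopback_ch0.txt", np.column_stack([offsets, readings]))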

  16218   Tue Jun 22 11:56:16 2021   Anchal, Paco   Update   SUS   ADC/Slow channels issues

We looked back in time to see when the BS and PRM OSEM slow channels went to zero. It was clear that they became zero when we worked on this issue on Thursday, June 17th. So we simply power cycled the c1susaux acromag chassis. After that, we had to log in to the c1susaux computer and run

sudo /sbin/ifdown eth1
sudo /sbin/ifup eth1

This restarted the ethernet port the acromag chassis is connected to, which solved the issue, and we were able to see all the slow channels for BS and PRM.

But then we noticed that the ITMX OPLEV was unable to read the position of the beam on the QPD at all; no light was reaching the QPD. We went in, opened the ITMX table cover, and confirmed that the returning OPLEV beam is way off and is not even hitting the steering mirror that brings it to the QPD. We switched off the OPLEV contribution to the damping.

We did a burt restore to the June 16th morning state using
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Jun/16/06:19/c1susaux.snap -l /tmp/controls_1210622_095432_0.write.log -o /tmp/controls_1210622_095432_0.nowrite.snap -v

This did not solve the issue.

Then we noticed that the OSEM signals from ITMX were saturated in opposite directions for the left and right OSEMs. The left OSEM fast channels were saturated at 1.918 um for UL and 1.399 um for LL, while both right OSEM channels were bottomed out at 0 um. On the other hand, the acromag slow PD monitors showed 0 on the right channels but 1097 cts on the UL PDMon and 802 cts on the LL PDMon. We actually went in and checked the DC voltages from the PD input monitor LEMO ports on the ITMX dewhitening board (D000210-A1) and measured non-zero voltages on all the channels. Following is a summary:

ITMX OSEM readouts:

OSEM   Fast ADC channels            Slow Acromag monitors        Multimeter at dewhitening
       C1-SUS-ITMX_xxSEN_OUT (um)   C1-SUS-ITMX_xxPDMon (cts)    board inputs (V)
UL     1.918                        1097                         0.901
LL     1.399                        802                          0.998
UR     0                            0                            0.856
LR     0                            0                            0.792
SD     0.035                        20                           0.883

We even took out the 4-pin LEMO outputs from the dewhitening boards that go to the anti-aliasing chassis and checked the voltages. They are the same as the input voltages, as expected. So the dewhitening board is doing its job fine, and the OSEMs are doing their jobs fine.

It is weird that both the ADC and the acromags are reading these values wrong. We believe this is causing a big yaw offset in the ITMX control signal, turning ITMX enough to make the OPLEV go out of range. We checked the CDS FE status (Attachment 1). Other than c1rfm showing a yellow bar (bit 2 = GE FANUC RFM card 0) in RT Net Status, nothing else seems wrong with the c1sus computer; the c1sus FE model is running fine. c1x02 (the lower-level model) does show a red bar in TIM, which suggests some timing issue. This is present in c1x04 too.


Bottomline:

Currently, the ITMX coil outputs are disabled, as we can't trust the OSEM channels. We're investigating why any of this is happening. Any input is welcome.


  16219   Tue Jun 22 16:52:28 2021   Paco   Update   SUS   ADC/Slow channels issues

After sliding the alignment bias around and browsing through the elog searching for "stuck", we concluded the ITMX OSEMs needed to be freed. The procedure is to slide the alignment bias back and forth ("shaking") and then, as the OSEM readings start to vary, enable the damping. We did just this, and then slowly restored the alignment bias sliders to their original positions. Attachment 1 shows the ITMX OSEM sensor input monitors throughout this procedure.


At the end, since the MC had trouble catching lock after opening the PSL shutter, I tried burt-restoring the IOO to 2021/Jun/17/06:19/c1iooepics.snap, but the problem persists.

  16220   Tue Jun 22 16:53:01 2021   Ian MacMillan   Update   CDS   Front-End Assembly and Testing

The channels on both C1BHD and C1SUS2 seem to be frozen: they aren't updating and are holding one value. To fix this, Anchal and I tried:

  • restarting the computers 
    • restarting basically everything including the models
  • Changing the matrix values
  • adding filters
  • messing with the offset 
  • restarting the network ports (Paco suggested this; apparently it worked for him at some point)
  • Checking to make sure everything was still connected inside the case (DAC, ADC, etc..)

I wonder if Jon has any ideas. 

  16221   Tue Jun 22 17:05:26 2021   Yehonathan   Update   BHD   SOS assembly

According to the schematics, the distance between the original EQ tap holes is 0.5". Given that the original tap holes' diameter is 0.13", there is enough room for a 1/4" drill.

Quote:

Then, can we replace the four small EQ stops at the bottom (barrel surface) with two 1/4-20 EQ stops? This will require drilling the bottom EQ stop holders (two per SOS).


  16222   Wed Jun 23 09:05:02 2021   Anchal   Update   SUS   MC lock acquired back again

The MC was unable to acquire lock because the WFS offsets had been cleared to zero at some point, leaving the MC too misaligned to catch lock again. In such cases, one wants the WFS to start accumulating offsets as soon as a minimal lock is attained so that the mode cleaner can be automatically aligned. So I did the following, which worked:

  • Changed C1:IOO-WFS_TRIG_WAIT_TIME (the WFS trigger delay) from 3 s to 0 s.
  • Reduced C1:IOO-WFS_TRIGGER_THRESH_ON (the switch-on threshold) from 5000 to 1000.
  • Then, as soon as a TEM00 mode was locked (with poor efficiency), the WFS loops started aligning the optics to bring it back to a good lock.
  • After a robust lock had been acquired, I restored the two settings I changed above.
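For the record, these channel writes can be scripted; a minimal pyepics sketch using the channel names from this entry:

import epics

# Let the WFS engage immediately, even on a weak lock:
epics.caput("C1:IOO-WFS_TRIG_WAIT_TIME", 0)        # trigger delay: 3 s -> 0 s
epics.caput("C1:IOO-WFS_TRIGGER_THRESH_ON", 1000)  # switch-on threshold: 5000 -> 1000

# ...once a robust lock is acquired, restore the nominal settings:
epics.caput("C1:IOO-WFS_TRIG_WAIT_TIME", 3)
epics.caput("C1:IOO-WFS_TRIGGER_THRESH_ON", 5000)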
Quote:

 


At the end, since the MC had trouble catching lock after opening the PSL shutter, I tried burt-restoring the IOO to 2021/Jun/17/06:19/c1iooepics.snap, but the problem persists.

 

  16223   Thu Jun 24 16:40:37 2021   Koji   Update   SUS   MC lock acquired back again

[Koji, Anchal]

The issue with the PD output was that the PD whitened outputs of the sat amp (D080276) are differential, while the downstream circuit (the D000210 PD whitening unit) has single-ended inputs. This means the negative outputs (D080276 U2) have always been shorted to GND with no output resistor. This forced the AD8672 to work hard at its output current limit. Maybe there was a heat problem due to this current saturation, as Anchal reported that the unit came back sane after some power-cycling or opening the lid. But the heat and the forced differential voltage at the input stage of the chip eventually caused it to fail, I believe.

Anchal came up with a brilliant idea to bypass this issue: the sat amp box has PD mon channels, which are single-ended. We simply shifted the output cables to the mon connectors. The MC1 sus was nicely damped and the IMC locked as usual. Anchal will keep checking whether the circuit keeps working over the next few days.

  16224   Thu Jun 24 17:32:52 2021   Ian MacMillan   Update   CDS   Front-End Assembly and Testing

Anchal and I ran tests on the two systems (C1-SUS2 and C1-BHD). Attached are the results and the code and data to recreate them.

We connected one DAC channel to one ADC channel, so all of the results represent a DAC/ADC pair. We then set the offset to different values from -3000 to 3000 and recorded the measured signal. I then plotted the response curve of every DAC/ADC pair, so each was tested at least once.

There are two types of plots included in the attachments:

1) A summary plot, found on the last pages of the PDF files. This is a quick-and-dirty way to see if all of the channels are working. It is NOT a replacement for the other plots; it shows all the data quickly but sacrifices precision.

2) An in-depth look at an ADC/DAC pair. Here I show the measured value for a defined DC offset. The gain of the system should be 0.5 (put in an offset of 100 and measure 50). I included a line to show where this should be. I also plotted the difference between the 0.5-gain line and the measured data.

As seen in the provided plots, the channels saturate beyond roughly the -2000 to 2000 mark, which is why the difference graph concentrates on the -2000 to 2000 range.
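A sketch of the per-pair check (assuming (offset, reading) pairs saved to a two-column text file, as produced by a sweep like the one sketched in entry 16217 above; this is not the exact analysis code in the attachments):

import numpy as np

offsets, readings = np.loadtxt("loopback_ch0.txt", unpack=True)

mask = np.abs(offsets) < 2000                  # stay clear of saturation
gain, intercept = np.polyfit(offsets[mask], readings[mask], 1)
residual = readings[mask] - 0.5 * offsets[mask]

print(f"fitted gain = {gain:.4f} (expected 0.5), offset = {intercept:.2f} cts")
print(f"max |deviation| from the 0.5 line = {np.max(np.abs(residual)):.2f} cts")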

Summary: all the channels look to be working; they all report very little deviation from the theoretical gain.

Note: ADC channel 31 is the timing signal, so it is the only channel that is wildly off. It is not a measurement channel; we just measured it by mistake.

  16225   Fri Jun 25 14:06:10 2021   Jon   Update   CDS   Front-End Assembly and Testing

Summary

Here is the final summary (from me) of where things stand with the new front-end systems. With Anchal and Ian's recent scripted loopback testing [16224], all the testing that can be performed in isolation with the hardware on hand has been completed. We currently have no indication of any problem with the new hardware. However, the high-frequency signal integrity and noise testing remains to be done.

I detail those tests and link some DTT templates for performing them below. We have not yet received the Myricom 10G network card being sent from LHO, which is required to complete the standalone DAQ network. Thus we do not have a working NDS server in the test stand, so we cannot yet run any of the usual CDS tools such as Diaggui. Another option would be to just connect the new front-ends to the 40m Martian/DAQ networks and test them there.

Final Hardware Configuration

Due to the unavailability of the 18-bit DACs that were expected from the sites, we elected to convert all the new 18-bit AO channels to 16-bit. I was able to locate four unused 16-bit DACs around the 40m [16185], with three of the four found to be working. I was also able to obtain three spare 16-bit DAC adapter boards from Todd Etzel. With the addition of the three working DACs, we ended up with just enough hardware to complete both systems.

The final configuration of each I/O chassis is as follows. The full setup is pictured in Attachment 1.

Component            Qty installed in C1BHD   Qty installed in C1SUS2
16-bit ADC                    1                         2
16-bit ADC adapter            1                         2
16-bit DAC                    1                         3
16-bit DAC adapter            1                         3
16-channel BIO                1                         1
32-channel BO                 0                         6

This hardware provides the following breakdown of channels available to user models:

Channel Type   C1BHD channel count   C1SUS2 channel count
16-bit AI*              31                    63
16-bit AO               16                    48
BO                       0                   192

*The last channel of the first ADC is reserved for timing diagnostics.

The chassis have been closed up and their permanent signal cabling installed. They do not need to be reopened, unless future testing finds a problem.

RCG Model Configuration

An IOP model has been created for each system reflecting its final hardware configuration. The IOP models are permanent and system-specific. When ready to install the new systems, the IOP models should be copied to the 40m network drive and installed following the RCG-compilation procedure in [15979]. Each system also has one temporary user model which was set up for testing purposes. These user models will be replaced with the actual SUS, OMC, and BHD models when the new systems are installed.

The current RCG models and the action to take with each one are listed below:

Model   Host     CPU   DCUID   Path (all paths local to the chiara clone machine)     Action
c1x06   c1bhd     1     23     /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl    Copy to same location on 40m network drive; compile and install
c1x07   c1sus2    1     24     /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl    Copy to same location on 40m network drive; compile and install
c1bhd   c1bhd     2     25     /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl    Do not copy; replace with permanent OMC/BHD model(s)
c1su2   c1su2     2     26     /opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl    Do not copy; replace with permanent SUS model(s)

Each front-end can support up to four user models.

Future Signal-Integrity Testing

Recently, the CDS group released a well-documented procedure for testing General Standards ADCs and DACs: T2000188. They've also automated the tests using a related set of shell scripts (T2000203). Unfortunately, I don't believe these scripts will work at the 40m, as they require the latest v4.x RCG.

However, there is an accompanying set of DTT templates that could be very useful for accelerating the testing. They are available from the LIGO SVN (log in with username: "first.last@LIGO.ORG"). I believe these can be used almost directly, with only minor updates to channel names, etc. There are two classes of DTT-templated tests:

  1. DAC -> ADC loopback transfer functions
  2. Voltage noise floor PSD measurements of individual cards

The T2000188 document contains images of normal/passing DTT measurements, as well as known abnormalities and failure modes. More sophisticated tests could also be configured, using these templates as a guiding example.

Hardware Reordering

Due to the unexpected change from 18- to 16-bit AO, we are now short on several pieces of hardware:

  • 16-bit AI chassis. We originally ordered five of these chassis, and all are obligated as replacements within the existing system. Four of them are now (temporarily) in use in the front-end test stand. Thus four of the new 18-bit AI chassis will need to be retrofitted with 16-bit hardware.
  • 16-bit DACs. We currently have exactly enough DACs. I have requested a quote from General Standards for two additional units to have as spares.
  • 16-bit DAC adapters. I have asked Todd Etzel for two additional adapter boards to also have as spares. If no more are available, a few more should be fabricated.
  16226   Fri Jun 25 19:14:45 2021   Jon   Update   Equipment loan   Zurich Instruments analyzer

I returned the Zurich Instruments analyzer I borrowed some time ago to test out at home. It is sitting on the first table across from Steve's old desk.

  16227   Mon Jun 28 12:35:19 2021   Yehonathan   Update   BHD   SOS assembly

On Thursday, I glued another set of 6 dumbbells+magnets using the same method as before. I made sure that the dumbbells were pressed onto the magnets.

I came in today to check the gluing situation. It looks much better than before: the glue seems stable against small forces (magnetic, etc.). I checked the assemblies under a microscope.

It seems I used excessive amounts of glue (Attachments 1, 2). The surfaces of the dumbbells were also contaminated (Attachment 3). I cleaned the dumbbells' surfaces using acetone and IPA (Attachment 4) and scraped some of the glue residue from the sides of the assemblies.

Next time, I will make a shallow bath of glue so I can pick up precise amounts with a needle.

I glued a sample assembly to a metal bracket using epoxy. Once it cures, I will hang a weight from the dumbbell to test the bond strength.

  16229   Tue Jun 29 20:45:52 2021   Yehonathan   Update   BHD   SOS assembly

I glued another batch of 6 magnet+dumbbell assemblies. I will take a look at them under the microscope once they are cured.

I also hung a weight of ~150 g from a sample dumbbell made in the previous batch (attachments) to test the magnet+dumbbell bond strength.

  16230   Wed Jun 30 14:09:26 2021   Ian MacMillan   Update   CDS   SUS simPlant model

I have looked at my code from the previous plot of the transfer function and realized that there is a slight error that must be fixed before we can analyze the difference between the theoretical transfer function and the measured transfer function.

The theoretical transfer function, which was generated from Photon, has approximately 1000 data points, while the measured one has about 120. No points in the two datasets share the same frequency values, so they are not directly comparable; to compare them, I must interpolate between the points. In the previous post [16195] I expanded the measured dataset, i.e. I filled in the space between the measured points (via a spline) so that I could compare the two datasets, using this code:

# make values for the comparison: interpolate the sparse MEASURED data
# onto the dense simulated frequencies (the problematic direction)
import numpy as np
from scipy.interpolate import splrep, splev

tck_mag = splrep(tst_f, tst_mag)           # B-spline representation of measured magnitude
gen_mag = splev(sim_f, tck_mag)            # interpolated "measured" magnitude at sim freqs
dif_mag = gen_mag - np.asarray(sim_mag)    # measured minus predicted

tck_ph = splrep(tst_f, tst_ph)             # B-spline representation of measured phase
gen_ph = splev(sim_f, tck_ph)              # interpolated "measured" phase at sim freqs
dif_ph = gen_ph - np.asarray(sim_ph)

At features like a sharp peak, where the measured dataset was sparse compared to the feature, the difference was computed between the interpolated "measured" values and the theoretical ones, which made the difference look much larger than it really was.

To fix this, I changed the code to generate the intermediate values from the theoretical dataset instead, using the code here:

# interpolate the dense SIMULATED data onto the sparse measured frequencies
tck_mag = splrep(sim_f, sim_mag)           # B-spline representation of simulated magnitude
gen_mag = splev(tst_f, tck_mag)            # interpolated prediction at measured freqs
dif_mag = np.asarray(tst_mag) - gen_mag    # measured minus predicted

tck_ph = splrep(sim_f, sim_ph)             # B-spline representation of simulated phase
gen_ph = splev(tst_f, tck_ph)              # interpolated prediction at measured freqs
dif_ph = np.asarray(tst_ph) - gen_ph

Because this dataset has far more values (about 10 times more), the previous problem is not such an issue. In addition, no inferred measured value is ever used, which makes the comparison more representative of the true accuracy of the measured transfer function.

This is an update to a previous plot, so I am still using the same data, just changing the way it is processed. This plot/data does not have a Q of 1000; that plot will be in a later post, along with the error estimation that we talked about in this week's meeting.

The new plot is shown below in Attachment 1. Data and code are contained in Attachment 2.

  16234   Thu Jul 1 11:37:50 2021   Paco   Update   General   restarted c0rga

Physically rebooted the c0rga workstation after failing to ssh into it (even though we were able to ping it). The RGA seems to be off, though. The last log with data in it appears to date back to 2020 Nov 10, but reasonable spectra only appear in logs from before 11-05. Gautam verified that the RGA was intentionally turned off then.

  16235   Thu Jul 1 16:45:25 2021   Yehonathan   Update   BHD   SOS assembly

The bonding test passed - the weight still hangs from the dumbbell. Unfortunately, I broke the bond while trying to release the assembly from the bracket. I made another batch of 6 dumbbell+magnet assemblies.

I used some of the leftover epoxy to bond an assembly from the previous batch to a bracket so I can test it.

  16238   Tue Jul 6 10:47:07 2021   Paco, Anchal   Update   IOO   Restored MC

MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we

  1. Cleared the bogus WFS offsets
  2. Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
  3. With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
  4. Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.

The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.

  16239   Tue Jul 6 16:35:04 2021   Anchal, Paco, Gautam   Update   IOO   Restored MC

We found that megatron is unable to properly run the scripts/MC/WFS/mcwfsoff and scripts/MC/WFS/mcwfson scripts; the cdsutils commands fail due to a library conflict. This meant that the WFS loops were not turned off when the IMC got unlocked, so they would keep integrating noise into the offsets. The mcwfsoff script is also supposed to clear the WFS loop offsets, but that wasn't happening either, and the mcwfson script was not bringing the WFS loops back on.

Gautam fixed these scripts temporarily for running on megatron by using ezcawrite and ezcaswitch commands instead of cdsutils commands. Now these scripts run normally. This could be the reason for the wildly fluctuating WFS offsets that we have seen in the past few months.

gautam: the problem here is that megatron is running Ubuntu 18 - I'm not sure there is any dedicated CDS group packaging for Ubuntu, so we're using a shared install of the cdsutils (hosted on the shared chiara NFS drive), which is complaining about missing linked lib files. Depending on people's mood, it may be worth biting the bullet and making megatron run Debian 10, for which the CDS group maintains packages.

Quote:

MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we

  1. Cleared the bogus WFS offsets
  2. Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
  3. With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
  4. Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.

The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.

  16243   Fri Jul 9 18:35:32 2021   Yehonathan   Update   CDS   Opto-isolator for c1auxey

Following Koji's channel list review, we made changes to the wiring spreadsheet.

Today, I implemented the changes in the Acromag chassis: I went through the channel list one by one and made sure each channel is wired correctly. Additionally, since we now need all the channels the existing isolators have, I replaced the isolator with the defective channel with a new one.

The things to do next:

1. Create entries for the spare coil driver and satellite box channels in the EPICs DB.

2. Test the spare channels.

  16244   Mon Jul 12 18:06:25 2021   Yehonathan   Update   CDS   Opto-isolator for c1auxey

I edited /cvs/cds/caltech/target/c1auxey1/ETMYaux.db (after creating a backup) and added the spare coil driver channels.

I tested those channels using caget while fixing wiring issues; the tests were all successful. The digital output channels were tested using the Windows machine, since they are locked by some EPICS mechanism I don't yet understand.

One worrying point: I found the differential analog inputs to be unstable unless I connected the reference to some stable voltage source, unlike what previous tests showed. It was unstable (though less so) even when I connected the ref to the ground connectors on the power supplies on the workbench. This is really puzzling.

By unstable I mean that most of the time the voltage reading shows the right value, but occasionally there is a transient sharp voltage drop on the order of 0.5 V. I will do a more quantitative analysis tomorrow.
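One way to make that quantitative (a sketch; the channel name is a placeholder for the actual differential input channel):

import time
import numpy as np
import epics

CH = "C1:AUX-ETMY_DIFF_IN0"            # hypothetical channel name

samples = []
for _ in range(600):                   # ~10 minutes at 1 Hz
    samples.append(epics.caget(CH))
    time.sleep(1.0)

samples = np.asarray(samples)
baseline = np.median(samples)
drops = np.sum(samples < baseline - 0.25)   # count transients > 0.25 V below baseline
print(f"baseline {baseline:.3f} V, {drops} glitchy samples out of {len(samples)}")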

 

  16245   Wed Jul 14 16:19:44 2021   gautam   Update   General   Brrr

Since the repair work, the temperature is significantly cooler. Surprisingly, even at the vertex (more specifically, inside the PSL enclosure, which for the time being is the only place where we have a logged temperature sensor; this is not attributable to any change in the HEPA speed), the temperature is a good 3 degC cooler than it was before the HVAC work (even though Koji's wind vane suggested the vents at the vertex were working). Was the setpoint for the entire lab modified? What should the setpoint even be?

Quote:
 

- I went to the south arm. There are two big vent ducts for the outlets and intakes. Neither is flowing air.
  The current temp at 7 pm was ~30 degC. Max and min were 31 degC and 18 degC.

- Then I went to the vertex and the east arm. The outlets and intakes are flowing.

  16246   Wed Jul 14 19:21:44 2021   Koji   Update   General   Brrr

Jordan reported on Jun 18, 2021:
"HVAC tech came today, and replaced the thermostat and a coolant tube in the AC unit. It is working now and he left the thermostat set to 68F, which was what the old one was set to."

  16247   Wed Jul 14 20:42:04 2021   gautam   Update   LSC   Locking

[paco, gautam]

we decided to give the PRFPMI lock a go early-ish. Summary of findings today eve:

  1. Arms under ALS control display normal noise and loop UGFs.
  2. PRMI took longer than usual to lock (when the arms are held off resonance) - could be elevated seismic, but it warrants measuring the PRMI loop TFs to rule out any funkiness. The MICH loop also displayed some saturation on acquisition, but after the boosts and other filters were turned on, the lock seemed robust and the in-loop noise was at the usual levels.
  3. We are gonna do the high bandwidth single arm locking experiments during daytime to rule out any issues with the CM board.

The ALS--> IR CARM handoff is the problematic step. In the past, getting over this hump has just required some systematic loop TF measurements / gain slider readjustments. We will do this in the next few days. I don't think the ALS noise is any higher than it used to be, and I could do the direct handoff as recently as March, so probably something minor has changed.

  16248   Thu Jul 15 14:25:48 2021   Paco   Update   LSC   CM board

[gautam, paco]

We tested the CM board by implementing the high-bandwidth IR lock (single arm). In preparation for this test we temporarily connected the POY11_Q_MON output to the CM board IN1 input and checked the YARM POY transfer function by running the AA_YARM_TEMPLATE under users/Templates/LSC/LSC_loops/YARM_POY/. We made sure the YARM dither optimized TRY so as to maximize the optical gain. Then we proceeded as follows:

  • From the LSC --> CM Servo screen, we controlled the REFL 1 Gain (dB) slider (nominal +25) and MC Servo IN2 Gain (dB) slider (nominal -32 dB) to transfer the low bandwidth (digital) control to the high bandwidth (analog) control of the YARM.
  • During this game, we monitored the C1:LSC-POY11_I_ERR_DQ & C1:LSC-CM_SLOW_OUT_DQ error signal channels for saturation, oscillations, or stability.
  • Once a set of gains was successful in maintaining a stable lock, we measured the OLTF using the SR785 to track the UGF as we mixed the two paths.
  • Once the gains had increased, boost and super-boost stages could be enabled as well.

Ultimately, our ability to progressively increase the control bandwidth of the YARM is a proxy for the CM board working properly. Attachment 1 shows the OLTF progression as we increased the loop's UGF. Note how, as we approached the maximum measured UGF of ~22 kHz, our phase margin decreased, signifying poorer stability.


At the end of this measurement, at about ~15:45, I restored the CM board IN1 input and disconnected the POY11_Q_MON.

gautam: the conclusion here is that the CM board seems to work as advertised, and it's not solely responsible for not being able to achieve the IR handoff. 

  16249   Fri Jul 16 16:26:50 2021   gautam   Update   Computers   Docker installed on nodus

I wanted to try hosting some docker images on a "private" server, so I installed Docker on nodus following the instructions here. The install seems to have succeeded, and as far as I can tell, none of the functionality of nodus has been disturbed (I can ssh in, access shared drive, elog seems to work fine etc). But if you find a problem, maybe this action is responsible. Note that nodus is running Scientific Linux 7.3 (Nitrogen).

  16250   Sat Jul 17 00:52:33 2021   Koji   Update   General   Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

  16251   Mon Jul 19 22:16:08 2021   paco   Update   LSC   PRFPMI locking

[gautam, paco]

Gautam managed to lock the PRFPMI a little before ~22:00 local time. The ALS to RF handoff logic was found to be repeatable, which enabled us to lock a total of 4 times this evening. In this nominal state, we can work on the PRFPMI to narrow down less well understood issues and carry out systematic optimization. The second time we achieved lock, we ran sensing lines before entering the ASC stage (which we knew would destroy the lock); offline analysis of the sensing matrix is pending (gpstime = 1310792709 + 5 min).

Things to note:

(a) there is an unexpected offset suggesting that the ALS and RF disagreed on what the lock setpoint should be, and it is still unclear where the offset is coming from.

(b) the first time the lock was reached, the ASC up stage destroyed it, suggesting these loops need some care (we were able to engage the ASC loops at low gains (0.2 instead of 1), but as soon as we enabled some integrators, this consistently destroyed the lock)

(c) gautam had (burt) restored to the settings from back in March when the PRFPMI was last locked, suggesting there was a small but somehow significant difference in the IFO configuration that helped today relative to last week


Take-home message --> the mere fact that we were able to lock the PRFPMI rules out the considerably more serious problems with the signal-chain electronics or processing. This should also be a good starting point for further debugging and optimization.


gautam: the circulating power, when the ASC was tweaked, hit 400 (normalized to a single arm locked with the PRM misaligned), suggesting a recycling gain of 22.5 and an average arm loss of ~30 ppm round trip (assuming 2% loss in the PRC).
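A sketch of the arithmetic behind those numbers (assuming a nominal PRM power transmissivity T_PRM ~ 5.6%, which is not measured in this entry): with the PRM misaligned, the single-arm normalization beam still makes one pass through the PRM, so the normalized circulating power reads

\mathrm{TRX}_\mathrm{norm} \approx \frac{G_\mathrm{PRC}}{T_\mathrm{PRM}} \approx \frac{22.5}{0.056} \approx 400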

  16252   Wed Jul 21 14:50:23 2021   Koji   Update   SUS   New electronics

Received:

Jun 29, 2021: BIO I/F, 6 units
Jul 19, 2021: PZT drivers x2 / QPD transimpedance amps x2

 

  16253   Wed Jul 21 18:08:35 2021   yehonathan   Update   Loss Measurement   Loss measurement

{Gautam, Yehonathan, Anchal, Paco}

We prepared for the loss measurement using the DC reflection method. We made the following changes:

1. REFL55_Q was disconnected and replaced with the MC_T cable coming from the PD on the MC2 table (the cable has a red tag on it). Consequently, we lost the AS beam; we realigned the optics and regained the arm locks. The spot on the AS QPD had to be corrected.

2. We tried using AS55 as the PD for the DC measurement, but we got ratios of ~0.97, which implies losses of more than 100 ppm (see the relation sketched below). We decided to go with the traditional PD520 used for these measurements in the past.

3. We placed the PD520 used for loss measurements in front of the AS55 PD and optimized its position.

4. AS110 cable was disconnected from the PD and connected to PD520 to be used as the loss measurement cable.

5. In 1Y2 rack, AS110 PD cable was disconnected, REFL55_I was disconnected and AS110 cable was connected to REFL55_I channel.

So for the test, the MC transmission was measured at REFL55_Q and the AS DC was measured at REFL55_I.
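For reference, the first-order relation behind that inference (a sketch; T_1 ~ 1.4% is an assumed nominal ITM power transmissivity, not a number measured here): for an overcoupled arm with round-trip loss \Lambda \ll T_1, the ratio of reflected DC power locked vs. misaligned is

\rho \equiv \frac{P_\mathrm{refl}^\mathrm{locked}}{P_\mathrm{refl}^\mathrm{misaligned}} \approx \frac{1 - 4\Lambda/T_1}{1 - T_1} \quad\Rightarrow\quad \Lambda \approx \frac{T_1}{4}\,(1 - \rho + T_1)

With \rho ~ 0.97 this gives \Lambda on the order of 150 ppm, i.e. the "more than 100 ppm" quoted in item 2.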

We used the scripts/lossmap_scripts/armLoss/measArmLoss.py script. Note that this script assumes that you begin with the arm locked.

We are leaving the IFO in the configuration described above overnight and we plan to measure the XARM loss early AM. After which we shall restore the affected electrical and optical paths.


We ran the /scripts/lossmap_scripts/armLoss/measureArmLoss.py script on pianosa with 25 repetitions and a 30 s "duty cycle" (wait time) for the Y arm. Preliminary results give an estimated individual arm loss of ~30 ppm (on both X/Y arms), but we will provide a better estimate with this measurement.

  16254   Thu Jul 22 16:06:10 2021   Paco   Update   Loss Measurement   Loss measurement

[yehonathan, anchal, paco, gautam]

We finished estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals, representing the PD520 and MC_TRANS, were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat we encountered today was the need to add a "macroscopic" misalignment to the ITMs when doing the measurement, to avoid any accidental resonances.

The final measurements were done with 16 repetitions, 30 second duration, and the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt

Finally, the estimated YARM loss is 39 ± 7 ppm, while the estimated XARM loss is 38 ± 8 ppm. This is consistent with the inferred PRC gain from Monday and a PRM loss of ~2%.


Future measurements may want to look into the slow drift of the locked vs. misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty (e.g. by splitting the raw time traces into short segments).

  16255   Sun Jul 25 18:21:10 2021   Koji   Update   General   Canon camera / small silver tripod / macro zoom lens / LED ring light returned / Electronics borrowed

Camera and accessories returned.

One HAM-A coildriver and one sat amp borrowed -> QIL

https://nodus.ligo.caltech.edu:8081/QIL/2616

 

  16256   Sun Jul 25 20:41:47 2021   rana   Update   Loss Measurement   Loss measurement

What are the quantitative root causes for why the statistical uncertainty is so large? It's larger than 1/sqrt(N).

  16257   Mon Jul 26 17:34:23 2021   Paco   Update   Loss Measurement   Loss measurement

[gautam, yehonathan, paco]

We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.

Before, we simply stitched all N=16 repetitions into a single time series and computed the loss: see Attachment 1 for such YARM loss data. The mean and stdev of this long time series give the loss quoted last time. We knew that the uncertainty was most certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g. beam angular motion, unnormalizable power fluctuations, etc.).


Today we analyzed the individual locked/misaligned cycles separately. From each cycle it is possible to obtain a mean value of the loss as well as a std dev *across the duration of the trace*, but because we have a measurement ensemble, it is also possible to obtain an ensemble-averaged mean and a statistical uncertainty estimate *across the independent cycle realizations*. While the mean values don't change much, the latter estimate gives a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 ± 2.6 ppm and a YARM loss of 38.9 ± 0.6 ppm. To make the distinction more clear, Attachments 2 and 3 show the YARM and XARM loss measurement ensembles respectively, with single-realization (time-series) standard deviations as vertical error bars and the 1-sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across different realizations (which happen to be ordered in time); we think this arises from inconsistent ASS dither alignment convergence. This is yet to be tested.
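A sketch of the two estimates described above (cycle_losses is a hypothetical list of per-sample loss arrays, one per locked/misaligned cycle):

import numpy as np

cycle_means = np.array([np.mean(c) for c in cycle_losses])   # one mean per cycle
cycle_stds  = np.array([np.std(c)  for c in cycle_losses])   # within-trace spread

ensemble_mean = np.mean(cycle_means)
# statistical uncertainty across the N independent realizations:
ensemble_err = np.std(cycle_means, ddof=1) / np.sqrt(len(cycle_means))

print(f"loss = {ensemble_mean:.1f} +/- {ensemble_err:.1f} ppm "
      f"(median within-trace std: {np.median(cycle_stds):.1f} ppm)")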


To budget the excess uncertainty within a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences in the paths of the two reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc., and we might be able to correlate recorded oplev signals with the reflection data to identify angular drift. We have not done this yet.

  16259   Tue Jul 27 17:14:18 2021   Yehonathan   Update   BHD   SOS assembly

Jordan has made 1/4" tap holes in the lower EQ stop holders (attachment). The 1/4" stops (schematics) fit nicely in them. Also, they are about the same length as the small EQ stops, so they can be used.

However, counting all the 1/4"-3/4" vented screws we have shows that we are missing 2 screws to cover all 7 SOSs. We can either:

1. Order new vented screws.

2. Use 2 old (stained but clean) EQ stops.

3. Screw holes into existing 1/4"-3/4" screws and clean them.

4. Use small EQ stops for one SOS.

etc.

Also, I found a mistake in the schematics of the SOS tower: the 4-40 screws used to hold the lower EQ stop holders should be SS, not silver plated as noted. I'll have to find some (28) spares in the cleanroom or order new ones.

 

  16260   Tue Jul 27 20:12:53 2021   Koji   Update   BHD   SOS assembly

1 or 2. The stained ones are just fine. If you find vented 1/4-20 screws in the clean room, you can use them.

For the 28 screws, yeah, find some spares in the clean room (faster); otherwise just order them.

  16261   Tue Jul 27 23:04:37 2021   Anchal   Update   LSC   40 meter party

[ian, anchal, paco]

After our second attempt at locking the PRFPMI tonight, we tried to restore the XARM and YARM locks to IR by clicking IFO_CONFIGURE>Restore XARM (POX) and IFO_CONFIGURE>Restore YARM (POY), but the arms did not lock. The green lasers were locked to the arms at maximum power, so the relative alignment of each cavity was OK. We were also able to lock the PRMI using IFO_CONFIGURE>Restore PRMI carrier.

This was very weird to us. We were pretty sure that the alignment was correct, so we decided to check the POX/POY signal chain. There was essentially no signal coming in at POX11, and there was a -100 offset on it. We could see some PDH signal on POY11, but not enough to catch the locks.

We tried running IFO_CONFIGURE>LSC OFFSETS to cancel out any dark current DC offsets. The changes made by the script are shown in attachment 1.

We went to check the tables and found no light visible on the beam finder cards at POX11 or POY11. We found that ITMX was stuck on one of the coils; we unstuck it using the shaking method. The OPLEV loops on ITMX could not then be switched on, as the OPLEV servos were railing at their limits. But when we ran Restore XARM (POX) again, they started working fine. Something is done by this script that we are not aware of.

We're stopping here. We still cannot lock either of the single arms.


Wed Jul 28 11:19:00 2021 Update:

[gautam, paco]

Gautam found that the restoring of POX/POY failed to restore the whitening filter gains on POX11/POY11. These are meant to be restored to 30 dB and 18 dB for POX11 and POY11 respectively, but were set to 0 dB, to the detriment of any POX/POY triggering/locking. The reason these are lowered is to avoid saturating the speakers during lock acquisition. Yesterday, burt-restore didn't work because we restored c1lscepics.snap, but said gains actually live in c1lscaux.snap. After manually restoring the POX11 and POY11 whitening filter gains, gautam ran the LSCOffsets script. The XARM and YARM locked quickly after we restored these settings.

The root of our issue may be that we didn't run the CARM & DARM watch script (which can be accessed from the ALS/Watch Scripts in medm). Gautam added a line to the Transition_IR_ALS.py script to run the watch script instead.

  16262   Wed Jul 28 12:00:35 2021   Yehonathan   Update   BHD   SOS assembly

After receiving two new tubes of EP-30, I resumed the gluing activities. I made a spreadsheet to track the assemblies that have been made, their position on the metal sheet in the cleanroom, their magnetic field, and the batch number.

I made another batch of 6 magnets yesterday (4th batch); the assembly from the 2nd batch is currently being tested for bonding strength.

One thing that we overlooked in calculating the amount of glue needed: in addition to the minimum 8 g of EP-30 needed for every gluing session, another 4 g of EP-30 is wasted in the mixing tube, so 12 g of EP-30 is used per session. We need 5 more batches, so at least 60 g of EP-30 is needed. Luckily, we bought two 50 g tubes.

  16263   Wed Jul 28 12:47:52 2021   Yehonathan   Update   CDS   Opto-isolator for c1auxey

To simulate a differential output, I used two power supplies connected in series. The outer connectors were used as the outputs, and the common connector was connected to ground and used as a reference. I hooked these outputs to one of the differential analog channels and measured it over time using StripTool. The setup is shown in Attachment 3.

I tested two cases: with the reference disconnected (Attachment 1), and connected (Attachment 2). Clearly, the unreferenced case is way too noisy.

  16264   Wed Jul 28 17:10:24 2021   Anchal   Update   LSC   Schnupp asymmetry

[Anchal, Paco]

I redid the measurement of the Schnupp asymmetry today and found it to be 3.8 ± 0.9 cm.


Method

  • One of the arms is misaligned, both at the ITM and the ETM.
  • The other arm is locked and aligned using ASS.
  • The SRCL oscillator's output is switched to the ETM of the chosen arm.
  • The AS55_Q channel demodulated at the SRCL oscillator frequency is configured (phase corrected) so that all the signal appears in C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT.
  • The rotation angle of the AS55 RFPD is scanned, and C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT is averaged over 10 s after waiting 5 s to let the transients pass.
  • This data is used to find the zero crossing of the AS55_Q signal when light is coming from one particular arm only.
  • The same is repeated for the other arm.
  • The difference between the zero-crossing phase angles is twice the phase accumulated by a 55 MHz signal in travelling the length difference between the arm cavities, i.e. the Schnupp asymmetry (see the worked conversion below).

I measured a phase difference of 5 ± 1 degrees between the two paths.
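For completeness, the phase-to-length conversion (the factor of two is the double pass noted in the method above):

\Delta L = \frac{c\,\Delta\phi}{4\pi f} = \frac{(3\times 10^{8}\,\mathrm{m/s})\,(5^{\circ}\times\pi/180)}{4\pi\times 55\times 10^{6}\,\mathrm{Hz}} \approx 3.8\,\mathrm{cm}

consistent with the quoted 3.8 ± 0.9 cm.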

The uncertainty in this measurement is much larger than in gautam's measurement in 15956. I'm not sure yet why, but I will look into it.

 

Quote:

I used the Valera technique to measure the Schnupp asymmetry to be ≈ 3.5 cm; see Attachment #1. The data points are points, and the zero crossing is estimated using a linear fit. I repeated the measurement 3 times for each arm to see if I get consistent results - seems like I do. Subtle effects like possible differential detuning of each arm cavity (since the measurement is done one arm at a time) are not included in the error analysis, but I think it's not controversial to say that our Schnupp asymmetry has not changed by a huge amount from past measurements. Jamie set a pretty high bar with his plot which I've tried to live up to.

 
