ID Date Author Type Category Subject
  14222   Mon Oct 1 20:39:09 2018   gautam   Configuration   ASC   c1asy

We need to set up a copy of the c1asx model (which currently runs on c1iscex), to be named c1asy, on c1iscey for the green steering PZTs. The plan discussed at the meeting last Wednesday was to rename the existing model c1tst into c1asy, and recompile it with the relevant parts copied over from c1asx. However, I suspect this will create some problems related to the "dcuid" field in the CDS params block (I ran into this issue when I tried to use the dcuid for an old model which no longer exists, called c1imc, for the c1omc model).

From what I can gather, we should be able to circumvent this problem by deleting the .par file corresponding to the c1tst model living at /opt/rtcds/caltech/c1/target/gds/param/, renaming the model to c1asy, and recompiling it. But I thought I should post this here to check whether anyone knows of other potential conflicts that will need to be managed before I start poking around and breaking things. Alternatively, there are plenty of cores available on c1iscey, so we could just set up a fresh c1asy model...

 
  14221   Mon Oct 1 13:33:55 2018   yuki   Configuration   ASC   QPD calibration
Quote:

I assume this QPD set is a D1600079/D1600273 combo.

How much was the SUM output during the measurement? Also how much were the beam radii of this beam (from the error func fittings)?
Then the calibration [V/m] is going to be the linear/inv-linear function of the incident power and the beam radius.

You mean the linear range is +/-50mV (for a given beam), I guess.

  • The SUM output was from -174 to -127 mV.
  • The beam radius calculated from the error function fittings was 0.47 mm.
  • Total optical path length measured with a ruler = 36 cm.
  • Beam power measured at the QPD was 2.96 mW. (There are some loss mechanisms in the setup.)

Then the calibration factor of the QPD is

X axis: 584 * (POWER / 2.96mW) * (0.472mm /  RADIUS) [mV/mm]
Y axis: 588 * (POWER / 2.96mW) * (0.472mm /  RADIUS) [mV/mm].
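
For reference, here is a minimal sketch (not part of the original entry) of how the scaling above can be applied in practice; the function name and the example power/radius values are made up for illustration.

def qpd_calibration_mV_per_mm(axis_factor_mV_per_mm, power_mW, radius_mm,
                              ref_power_mW=2.96, ref_radius_mm=0.472):
    # Rescale the measured factor: linear in incident power,
    # inverse-linear in beam radius (see the formulas above).
    return axis_factor_mV_per_mm * (power_mW / ref_power_mW) * (ref_radius_mm / radius_mm)

# Example: X-axis factor for a 1.5 mW beam with a 0.60 mm radius
print(qpd_calibration_mV_per_mm(584.0, power_mW=1.5, radius_mm=0.60))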

Attachment 1: Pic_QPDcalibration.jpg
  14220   Mon Oct 1 12:03:41 2018   not yuki   Configuration   ASC   PZT driver board verification

I assume this QPD set is a D1600079/D1600273 combo.

How much was the SUM output during the measurement? Also how much were the beam radii of this beam (from the error func fittings)?
Then the calibration [V/m] is going to be the linear/inv-linear function of the incident power and the beam radius.

You mean the linear range is +/-50mV (for a given beam), I guess.

 

  14219   Sun Sep 30 20:14:51 2018   yuki   Configuration   ASC   QPD calibration

[ Yuki, Gautam, Steve ]

Results:
I calibrated a QPD (D1600079, V1009) and made sure it performs well. The calibration constants are as follows:

X-Axis: 584 mV/mm
Y-Axis: 588 mV/mm

Details:
The QPD calibration is needed in order to calibrate the steering PZT mirrors. It was measured by moving the QPD on a translation stage. The QPD was connected to its amplifier (D1700110-v1) and ±18 V was supplied from a DC power supply. The amplifier has three output ports: Pitch, Yaw, and Sum. I did the calibration as follows:

  • Centered the beam spot on the QPD using a steering mirror, which was confirmed by the monitored Pitch and Yaw signals being around zero.
  • Kept the Y-axis micrometer fixed, moved the X-axis micrometer, and measured the outputs.
  • Repeated the procedure for the Y-axis. 

The results are attached. The main signal was fitted with an error function and I took the slope at the zero crossing, which is the calibration factor. I determined the linear range of the QPD to be where the output was within -50 mV to +50 mV; the corresponding displacement range is about 0.2 mm wide. Using this result, the PZT mirrors will be calibrated within the linear range of the QPD tomorrow.
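
As an illustration of the analysis described above, here is a minimal sketch of the error-function fit and of taking the slope at the zero crossing; the data arrays are synthetic stand-ins, not the measured sweep.

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def qpd_signal(x, amp, x0, w, offset):
    # QPD difference signal for a Gaussian beam swept across the gap:
    # an error function with 1/e^2 beam radius w, centered at x0.
    return amp * erf(np.sqrt(2.0) * (x - x0) / w) + offset

# Synthetic stand-in for the micrometer sweep (mm) and QPD output (mV)
x_mm = np.linspace(-0.4, 0.4, 21)
v_mV = qpd_signal(x_mm, 60.0, 0.0, 0.47, 0.0) + np.random.normal(0.0, 1.0, x_mm.size)

(amp, x0, w, offset), _ = curve_fit(qpd_signal, x_mm, v_mV, p0=[50.0, 0.0, 0.5, 0.0])
slope_mV_per_mm = amp * 2.0 * np.sqrt(2.0) / (w * np.sqrt(np.pi))  # derivative at x = x0
print(f"calibration ~ {slope_mV_per_mm:.0f} mV/mm, beam radius ~ {w:.2f} mm")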

Comments:

  • Some X-Y coupling existed. When the micrometer for one axis was moved, a small signal also appeared in the other direction.
  • As Gautam pointed out in the previous study, there is some hysteresis. That would introduce some error into this result.
  • The micrometer scale is in INCHES!
  • The micrometer I used was supposed to have a 1/2 inch range, but it didn't work well and the range of the X-axis was much narrower.

Reference:
previous experiment by Gautam for X-arm: elog:40m/8873, elog:40m/8884

Attachment 1: QPDcalibrationXaxis.pdf
Attachment 2: QPDcalibrationYaxis.pdf
  14218   Thu Sep 27 14:02:55 2018   yuki   Configuration   ASC   PZT driver board verification

[ Yuki, Gautam ]

I fixed the input terminal that had come off, and made sure the PZT driver board performs as we expect.

At first I ran a simulation of the PZT driver circuit using LTspice (Attachments #1 and #2). It shows that when the bias is 30 V the driver performs well only with a high input voltage (bigger than 3 V). Then I measured the performance in the following way:

  1. Applied ±15 V to the board with an expansion card and 31.8 V to the high-voltage port, which is the maximum voltage of the PS280 DC power supply C10013.
  2. Terminated the input and connected the input bias to GND, then set the offset to -10.4 V. This value is taken from elog:40m/8832.
  3. Injected DC signal into input port using a function generator.
  4. Measured voltage at the OUT port and MON port.

The results are in Attachments #3 and #4. They are consistent with the simulation. All ports performed well.

  • V(M1_PIT_OUT) = -4.86 *Vin +49.3 [V]
  • V(M1_YAW_OUT) = -4.86 *Vin +49.2 [V]
  • V(M2_PIT_OUT) = -4.85 *Vin +49.4 [V]
  • V(M2_YAW_OUT) = -4.86 *Vin +49.1 [V]
  • V(M1_PIT_MON) = -0.333 *Vin +3.40 [V]
  • V(M1_YAW_MON) = -0.333 *Vin +3.40 [V]
  • V(M2_PIT_MON) = -0.333 *Vin +3.40 [V]
  • V(M2_YAW_MON) = -0.333 *Vin +3.40 [V]

The high voltage points (100V DC) remain to be tested.
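
For completeness, a minimal sketch of how the DC response fits above can be produced; the sample points below are illustrative numbers consistent with the quoted M1_PIT_OUT fit, not the actual measurement.

import numpy as np

vin_V  = np.array([-2.0,  0.0,  2.0,  4.0,  6.0,  8.0])   # injected DC input [V]
vout_V = np.array([59.0, 49.3, 39.6, 29.9, 20.2, 10.5])   # measured M1_PIT_OUT [V]

slope, offset = np.polyfit(vin_V, vout_V, 1)
print(f"V(M1_PIT_OUT) = {slope:.2f} * Vin + {offset:.1f} [V]")  # expect ~ -4.86 * Vin + 49.3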

Attachment 1: PZTdriverSimulationDiagram.pdf
Attachment 2: PZTdriverSimulationResult.pdf
Attachment 3: PZTdriverPerformanceCheck_ResultOUT.pdf
Attachment 4: PZTdriverPerformanceCheck_ResultMON.pdf
Attachment 5: PZTdriver.asc
  14217   Wed Sep 26 10:07:16 2018   Steve   Update   VAC   why reboot c1vac1

Precondition: c1vac1 & c1vac2 all LED warning lights are green [ atm3 ], the only error message is in the gauge readings, NO COMM, dataviewer will plot zero [ atm1 ], and the valves are operational.

When our vacuum gauges read "NO COMM", our INTERLOCKS do NOT communicate either.

So the V1 gate valve and the PSL output shutter cannot be triggered to close if the IFO pressure goes up.

   [ only the CC1_HORNET_PRESSURE reading is working in this condition because it goes to a different computer ]
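
To make the failure mode explicit, here is a minimal sketch (not the actual interlock code, and the threshold is a made-up number) of why a "NO COMM" gauge readback defeats the trip logic.

TRIP_PRESSURE_TORR = 1e-4        # hypothetical trip threshold, for illustration only

def interlock_should_close(pressure_torr):
    # pressure_torr is None when the gauge reads "NO COMM"
    if pressure_torr is None:
        return False             # no readback -> interlock can never request closure
    return pressure_torr > TRIP_PRESSURE_TORR

print(interlock_should_close(None))    # NO COMM: V1 and the PSL shutter are never triggered
print(interlock_should_close(3e-4))    # healthy readback above threshold: closure requested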

Quote:

[steve, gautam]

Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button.

While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional ones, for the various valve closures, see the slow machine reset procedure elog):

  • Turn power off using switch on rear.
  • Remove 4 connecting cables on the back.
  • Switch controllers.
  • Reconnect 4 cables on the back panel.
  • Turn power back on using switch on rear.

However, we were foiled by a Phillips screw on the DB37 connector labelled "MAG BRG", whose head was completely stripped. We had to cut a slot in this screw using a saw blade and use a flathead ("-") screwdriver to get this troublesome screw out. Steve suspects this is a metric gauge screw, and will request the company to send us a new one; we will replace it when re-installing the serviced controller.

Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.

Quote:

The problem will be revisited on Monday.

 

Attachment 1: NOcomm.png
Attachment 2: Reboot_&_sawp.png
Attachment 3: c1vac1&2_.jpg
  14216   Tue Sep 25 18:08:50 2018   yuki   Configuration   ASC   Y end table upgrade plan

[ Yuki, Gautam ]

We want to remotely control the steering PZT mirrors, so a driver is needed. We already have a PZT driver board (D980323-C); its output voltage is expected to be in the range 0-100 V DC for input voltages in the range -10 to 10 V DC, and this needed to be verified.
Then I checked that it performs as we expected. The input signal was supplied using a voltage calibrator and the output was monitored using a multimeter.
But it didn't perform well. Some tuning of the voltage bias seemed to be needed. I will calculate its transfer function by simulation and check the performance again tomorrow. I also found that one solder joint had come off, so it needs fixing.

Reference:
diagram --> elog 8932
 

Plan of Action:

  • Check that the PZT driver performs as we expect
  • Also check the cables, high voltage, PZT mirrors, and anti-imaging board
  • Obtain the calibration factor of the PZT mirrors using the QPD
  • Measure some status values before changing the setup (such as the transmitted power of the green laser)
  • Revise the setup after a new lens arrives
  • Align the setup and check the mode matching
  • Measure the status values again and confirm they improve
  • (write code to control the alignment automatically)
  14215   Mon Sep 24 15:06:10 2018   gautam   Update   VAC   c1vac1 reboot + TP1 controller replacement

[steve, gautam]

Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button.

While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional ones, for the various valve closures, see the slow machine reset procedure elog):

  • Turn power off using switch on rear.
  • Remove 4 connecting cables on the back.
  • Switch controllers.
  • Reconnect 4 cables on the back panel.
  • Turn power back on using switch on rear.

However, we were foiled by a Phillips screw on the DB37 connector labelled "MAG BRG", whose head was completely stripped. We had to cut a slot in this screw using a saw blade and use a flathead ("-") screwdriver to get this troublesome screw out. Steve suspects this is a metric gauge screw, and will request the company to send us a new one; we will replace it when re-installing the serviced controller.

Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.

Quote:

The problem will be revisited on Monday.

Attachment 1: beforeReboot.png
Attachment 2: afterReboot.png
Attachment 3: CC1.png
  14214   Mon Sep 24 11:09:05 2018   yuki   Configuration   ASC   Y end table upgrade plan

[ Yuki, Steve ]

With Steve's help, we checked that a new lens can be placed just after the dichroic mirror.

Quote:

There may be a problem: one lens should be placed just after the dichroic mirror, but there is little room to mount it. (Attachment #4; it will be placed where the pedestal is.) Tomorrow we will check this problem again.

Attachment 1: pic0924_1.jpg
  14213   Sun Sep 23 20:15:35 2018   Koji   Summary   OMC   Monte Carlo simulation of the phase difference between P and S pols for a modeled HR mirror

Link to OMC_Lab ELOG 308

  14212   Sun Sep 23 19:32:23 2018   yuki   Configuration   ASC   Y end table upgrade plan

[ Yuki, Gautam ]

The setup I designed before has an abrupt Gouy phase shift between the two steering mirrors, which makes the alignment very sensitive. So I designed a new one (Attachments #1, #2, and #3). It improves the slope of the Gouy phase, and the difference between the steering mirrors is about 100 deg. To install this, we need new lenses (f = 100 mm, f = 200 mm, and f = -250 mm) with 532 nm coating. If this setup is OK, I will order them.
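
As a cross-check of such a layout, the Gouy phase difference between the two PZT mirrors can be computed by q-parameter propagation; the sketch below uses placeholder waist, distances, and focal lengths, not the actual new-layout values.

import numpy as np

def free_space(q, L):
    # Propagate a distance L; return the new q and the Gouy phase picked up [rad].
    dpsi = np.arctan((q.real + L) / q.imag) - np.arctan(q.real / q.imag)
    return q + L, dpsi

def thin_lens(q, f):
    # A thin lens changes q but adds no Gouy phase.
    return 1.0 / (1.0 / q - 1.0 / f)

lam = 532e-9
w0 = 50e-6                          # assumed waist at the doubling crystal [m]
q = 1j * np.pi * w0**2 / lam        # waist located at z = 0

gouy = 0.0
gouy_at = {}
# Placeholder layout: ("space", length [m]), ("lens", focal length [m]), ("PZT", name)
layout = [("space", 0.10), ("lens", 0.10), ("space", 0.25), ("PZT", "M1"),
          ("space", 0.30), ("lens", 0.20), ("space", 0.20), ("PZT", "M2")]
for kind, val in layout:
    if kind == "space":
        q, dpsi = free_space(q, val)
        gouy += dpsi
    elif kind == "lens":
        q = thin_lens(q, val)
    else:
        gouy_at[val] = np.degrees(gouy)

print(gouy_at, "difference:", gouy_at["M2"] - gouy_at["M1"], "deg")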

There may be a problem: one lens should be placed just after the dichroic mirror, but there is little room to mount it. (Attachment #4; it will be placed where the pedestal is.) Tomorrow we will check this problem again.

And another problem: one steering mirror at the corner of the box is not easy to access (Attachment #5). I have to design a new setup that takes this into account.

Quote:

One example of an improvement is simply adding a new lens (f = 10 cm) just after the doubling crystal. That will make the mode matching better (100%) and also make the separation better (85 deg) (Attachments #4 and #5). I'm checking whether we have the lens and whether there is space to set it. And I will measure the current transmitted power of the main laser in order to confirm the improvement of the alignment.

 

Attachment 1: Pic_NewSetup0923_AUXYgreen.jpeg
Attachment 2: ModeMatchingSolution_Result.pdf
Attachment 3: ModeMatchingSolution_Magnified_0923.jpg
Attachment 4: pic0923_1.jpg
Attachment 5: pic0923_2.jpg
  14211   Sun Sep 23 17:38:48 2018   yuki   Update   ASC   Alignment of AUX Y end green beam was recovered

[ Yuki, Koji, Gautam ]

The alignment of the AUX Y end green beam was bad. With Koji and Gautam's advice, it was recovered on Friday. The maximum value of TRY was about 0.5.

  14210   Sat Sep 22 00:21:07 2018   Koji   Update   CDS   Frequent time out

[Gautam, Koji]

We had another crash of c1sus, and Gautam did a full power cycle of c1sus. It was a struggle to recover all the front ends, but this solved the timing issue.

We went through full reset of c1sus, and rebooting all the other RT hosts, as well as daqd and fb1.

Attachment 1: 23.png
  14208   Fri Sep 21 19:50:17 2018   Koji   Update   CDS   Frequent time out

Multiple realtime processes on c1sus are suffering from frequent timeouts. This eventually knocks out the c1sus processes.

Obviously this has started since the fiber swap this afternoon.

gautam 10pm: there are no clues as to the origin of this problem on the c1sus frontend dmesg logs. The only clue (see Attachment #3) is that the "ADC" error bit in the CDS status word is red - but opening up the individual ADC error log MEDM screens show no errors or overflows. Not sure what to make of this. The IOP model on this machine (c1x02) reports an error in the "Timing" bit of the CDS status word, but from the previous exchange with Rolf / J Hanks, this is down to a misuse of ADC0 Ch31 which is supposed to be reserved for a DuoTone diagnostic signal, but which we use for some other signal (one of the MC suspension shadow sensors iirc). The response is also not consistent with this CDS manual - which suggests that an "ADC" error should just kill the models. There are no obvious red indicator lights in the c1sus expansion chassis either.
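
Since the CDS status word is just a bit mask, the individual error bits mentioned above can be checked programmatically; the mapping below is purely illustrative (the real bit assignments are defined in the CDS documentation, not here), but it shows the kind of check being described.

# Purely illustrative bit assignments -- NOT the real CDS mapping.
STATUS_BITS = {
    "ADC": 0x0001,
    "DAC": 0x0002,
    "TIMING": 0x0004,
    "DC": 0x4000,
}

def decode_status_word(word):
    return [name for name, mask in STATUS_BITS.items() if word & mask]

print(decode_status_word(0x4000))   # -> ['DC']
print(decode_status_word(0x0005))   # -> ['ADC', 'TIMING']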

Attachment 1: 33.png
Attachment 2: 49.png
Attachment 3: Screenshot_from_2018-09-21_21-52-54.png
  14207   Fri Sep 21 16:51:43 2018   gautam   Update   VAC   c1vac1 is unresponsive

Steve pointed out that some of the vacuum MEDM screen fields were reporting "NO COMM". Koji confirmed that this is a c1vac1 problem, likely the same as reported here and can be fixed using the same procedure.

However, Steve is worried that the interlock won't kick in in case of a vacuum emergency, so we are leaving the PSL shutter closed over the weekend. The problem will be revisited on Monday.

  14206   Fri Sep 21 16:46:38 2018   gautam   Update   CDS   New PCIe fiber installed and routed

[steve, koji, gautam]

We took another pass at this today, and it seems to have worked - see Attachment #1. I'm leaving CDS in this configuration so that we can investigate stability. IMC could be locked. However, due to the vacuum slow machine having failed, we are going to leave the PSL shutter closed over the weekend.

Attachment 1: PCIeFiber.png
Attachment 2: IMG_5878.JPG
  14205   Fri Sep 21 09:59:09 2018   yuki   Configuration   ASC   Y end table upgrade plan

[Yuki, Gautam]

Attachment #1 shows the current setup of the AUX Y green locking; it has to be improved because:

  • the current efficiency of the mode matching is about 50%
  • the current setup doesn't separate the TEM01 degrees of freedom with the PZT mirrors (the Gouy phase difference between the PZT mirrors should be around 90 deg)
  • we want to remotely control the PZT mirrors for alignment
    (Attachments #2 and #3)

About the above two: 

One example of an improvement is simply adding a new lens (f = 10 cm) just after the doubling crystal. That will make the mode matching better (100%) and also make the separation better (85 deg) (Attachments #4 and #5). I'm checking whether we have the lens and whether there is space to set it. And I will measure the current transmitted power of the main laser in order to confirm the improvement of the alignment.

About the last:

I am considering what components are needed.

Reference:

Attachment 1: Pic_CurrentSetup_AUXYgreen.jpeg
Attachment 2: ModeMatchingSolution_Current.pdf
Attachment 3: ModeMatchingSolution_Current_Magnified.pdf
Attachment 4: ModeMatchingSolution_Optimized.pdf
Attachment 5: ModeMatchingSolution_Optimized_Magnified.pdf
  14203   Thu Sep 20 16:19:04 2018   gautam   Update   CDS   New PCIe fiber install postponed to tomorrow

[steve, gautam]

This didn't go as smoothly as planned. While there were no issues with the new fiber over the ~3 hours that I left it plugged in, I didn't realize the fiber has distinct ends for the "HOST" and "TARGET" (-5 points to me, I guess). So while we had plugged in the ends correctly (by accident) for the pre-lunch test, we switched the ends while routing the fiber on the overhead cable tray (because the "HOST" end of the cable is close to the reel and we felt it would be easier to do the routing the other way).

Anyway, we will fix this tomorrow. For now, the old fiber was re-connected, and the models are running. IMC is locked.

Quote:

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

  14202   Thu Sep 20 11:29:04 2018   gautam   Update   CDS   New PCIe fiber housed

[steve, yuki, gautam]

The plastic tubing/housing for the fiber arrived a couple of days ago. We routed ~40m of fiber through roughly that length of the tubing this morning, using some custom implements Steve sourced. To make sure we didn't damage the fiber during this process, I'm now testing the vertex models with the plastic tubing just routed casually (= illegally) along the floor from 1X4 to 1Y3 (NOTE THAT THE WIKI PAGE DIAGRAM IS OUT OF DATE AND NEEDS TO BE UPDATED), and have plugged in the new fiber to the expansion chassis and the c1lsc front end machine. But I'm seeing a DC error (0x4000), which is indicative of some sort of timing error (Attachment #1) **. Needs more investigation...

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

** In the past, I have been able to fix the 0x4000 error by manually rebooting fb (simply restarting the daqd processes on fb using sudo systemctl restart daqd_* doesn't seem to fix the problem). Sure enough, this seems to have done the job this time as well (Attachment #2). So my initial impression is that the new fiber is functioning alright.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3)

Attachment 1: PCIeFiberSwap.png
Attachment 2: PCIeFiberSwap_FBrebooted.png
  14201   Thu Sep 20 08:17:14 2018   Steve   Update   SUS   local 3.4M earth quake

M3.4 Colton shake did not trip sus.

 

Attachment 1: local_3.4M.png
  14200   Tue Sep 18 17:56:01 2018   not gautam   Update   IOO   PMC and IMC relocked, WFS inputs turned off

I restarted the LSC models in the usual way via the c1lsc reboot script. After doing this I was able to lock the YARM configuration for more noise coupling scripting.

Quote:

The PMC and IMC were unlocked. Both were re-locked, and alignment of both cavities were adjusted so as to maximize MC2 trans (by hand, input alignment to PMC tweaked on PSL table, IMC alignment tweaked using slow bias voltages). I disabled the inputs to the WFS loops, as it looks like they are not able to deal with the glitching IMC suspensions. c1lsc models have crashed again but I am not worrying about that for now.

9pm: The alignment is wandering all over the place so I'm just closing the PSL shutter for now.

 

  14199   Tue Sep 18 14:02:37 2018   Steve   Update   safety   safety training

Yuki Miyazaki received 40m specific basic safety training.

 

  14198   Mon Sep 17 12:28:19 2018   gautam   Update   IOO   PMC and IMC relocked, WFS inputs turned off

The PMC and IMC were unlocked. Both were re-locked, and alignment of both cavities were adjusted so as to maximize MC2 trans (by hand, input alignment to PMC tweaked on PSL table, IMC alignment tweaked using slow bias voltages). I disabled the inputs to the WFS loops, as it looks like they are not able to deal with the glitching IMC suspensions. c1lsc models have crashed again but I am not worrying about that for now.

9pm: The alignment is wandering all over the place so I'm just closing the PSL shutter for now.

  14197   Wed Sep 12 22:22:30 2018   Koji   Update   Computers   SSL2.0, SSL3.0 disabled

LIGO GC notified us that nodus had SSL2.0 and SSL3.0 enabled. This has been disabled now.
The details are described on 40m wiki.

  14196   Mon Sep 10 12:44:48 2018   Jon   Update   CDS   ADC replacement in c1lsc expansion chassis

Gautam and I restarted the models on c1lsc, c1ioo, and c1sus. The LSC system is functioning again. We found that restarting only c1lsc, as Rolf had recommended, did in fact kill the models running on the other two machines. We simply reverted the rebootC1LSC.sh script to its previous form, since that does work. I'll keep using that as required until the ongoing investigations find the source of the problem.

Quote:

Looks like the ADC was not to blame, same symptoms persist.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue.

 

  14195   Fri Sep 7 12:35:14 2018   gautam   Update   CDS   ADC replacement in c1lsc expansion chassis

Looks like the ADC was not to blame, same symptoms persist.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue.

Attachment 1: Screenshot_from_2018-09-07_12-34-52.png
  14194   Thu Sep 6 14:21:26 2018   gautam   Update   CDS   ADC replacement in c1lsc expansion chassis

Todd E. came by this morning and gave us (i) 1x new ADC card and (ii) 1x roll of 100m (2017 vintage) PCIe fiber. This afternoon, I replaced the old ADC card in the c1lsc expansion chassis, and have returned the old card to Todd. The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with the red indicator light, and replacing it has solved the issue. CDS is back to what is now the nominal state (Attachment #1) and the Y arm is locked for Jon to work on his IFO coupling study. We will monitor the stability in the coming days.

Quote:

(i) to replace the old generation ADC card in the expansion chassis which has a red indicator light always on and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.

Attachment 1: CDSoverview.png
  14193   Wed Sep 5 10:59:23 2018   gautam   Update   CDS   CDS status update

Rolf came by today morning. For now, we've restarted the FE machine and the expansion chassis (note that the correct order in which to do this is: turn off computer--->turn off expansion chassis--->turn on expansion chassis--->turn on computer). The debugging measures Rolf suggested are (i) to replace the old generation ADC card in the expansion chassis which has a red indicator light always on and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.

Another tip from Rolf: if the c1lsc FE is responsive but the models have crashed, then doing sudo reboot by ssh-ing into c1lsc should suffice* (i.e. it shouldn't take down the models on the other vertex FEs, although if the FE is unresponsive and you hard reboot it, this may still be a problem). I've modified the c1lsc reboot script accordingly.

* Seems like this can still lead to the other vertex FEs crashing, so I'm leaving the reboot script as is (so all vertex machines are softly rebooted when c1lsc models crash).

Quote:

c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.

  14192   Tue Sep 4 10:14:11 2018   gautam   Update   CDS   CDS status update

c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.

Quote:

Starting c1cal now, let's see if the other c1lsc FE models are affected at all... Moreover, since MC1 seems to be well-behaved, I'm going to restore the nominal eurocrate configuration (sans extender board) tomorrow.

  14191   Wed Aug 29 14:51:05 2018   Steve   Update   General   tomorrow morning

An electrician is coming to fix one of the fluorescent light fixture holders in the east arm tomorrow morning at 8am. He will be out by 9am.

The job did not get done. There was no scaffolding or ladder to reach the troubled areas.

  14190   Wed Aug 29 11:46:27 2018   Jon   Update   SUS   local 4.4M earth quake

I freed ITMX and coarsely realigned the IFO using the OPLEVs. All the alignments were a bit off from overnight.

The IFO is still only able to lock in MICH mode currently, which was the situation before the earthquake. This morning I additionally tried restoring the burt state of the four machines that had been rebooted in the last week (c1iscaux, c1aux, c1psl, c1lsc) but that did not solve it.

Quote:

All suspension tripped. Their damping restored. The MC is locked.

ITMX-UL & side magnets are stuck.

 

 

  14189   Wed Aug 29 09:56:00 2018   Steve   Update   VAC   Maglev controller needs service

The TP-1 Osaka maglev controller [ model TC010M, ser V3F04J07 ] needs maintenance. The alarm LED is on, indicating that we need Lv2 service.

The turbo and the controller are in good working order.

*****************************

Hi Steve,

Our maintenance level 2 service price is $...... It consists of a complete disassembly of the controller for internal cleaning of all ICB’s, replacement of all main board capacitors, replacement of all internal cooling units, ROM battery replacement, re-assembly, and mandatory final testing to make sure it meets our factory specifications. Turnaround time is approximately 3 weeks.

  RMA 5686 has been assigned to Caltech's returning TC010M controller. Attached please find our RMA forms. Complete and return them to us via email, along with your PO, prior to shipping the controller.

Best regards,

Pedro Gutierrez

Osaka Vacuum USA, Inc.

510-770-0100 x 109

*************************************************

our TP-1 TG390MCAB is 9 years old. What is the life expectancy of this turbo?

                        The Osaka maglev turbopumps are designed with a 100,000 hour (or ~10 operating year) life span, but as you know most of our end-users are running their Osaka maglev turbopumps in excess of 10+, 15+ years continuously. The 100,000 hour design value is based upon the AL material being rotated at the given speed, but the design fudge factor has somehow elongated the practical life span.
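
As a quick sanity check of the quoted figure (assuming continuous 24/7 operation, which is an assumption on my part):

design_life_hours = 100_000
print(design_life_hours / (24 * 365.25))   # ~11.4 years of continuous running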

We should have the cost of a new maglev & controller in next year's budget. I put the quote into the wiki.

 

                         

 

  14188   Wed Aug 29 09:20:27 2018   Steve   Update   SUS   local 4.4M earth quake

All suspension tripped. Their damping restored. The MC is locked.

ITMX-UL & side magnets are stuck.

 

Attachment 1: 4.4_La_Verne.png
Attachment 2: 3.4_&_4.4M_EQ.png
  14187   Tue Aug 28 18:39:41 2018   Jon   Update   CDS   C1LSC, C1AUX reboots

I found c1lsc unresponsive again today. Following the procedure in elog #13935, I ran the rebootC1LSC.sh script to perform a soft reboot of c1lsc and restart the epics processes on c1lsc, c1sus, and c1ioo. It worked. I also manually restarted one unresponsive slow machine, c1aux.

After the restarts, the CDS overview page shows the first three models on c1lsc are online (image attached). The above elog references c1oaf having to be restarted manually, so I attempted to do that. I connected via ssh to c1lsc and ran the script startc1oaf. This failed as well, however.

In this state I was able to lock the MICH configuration, which is sufficient for my purposes for now, but I was not able to lock either of the arm cavities. Are some of the still-dead models necessary to lock in resonant configurations?

Attachment 1: CDS_FE_STATUS.png
  14186   Tue Aug 28 15:29:19 2018   Steve   Frogs   PEM   Rat is cut

The rat was caught and killed by the mechanical trap and was removed from the ITMX south-west location.

The trap caught and killed the big fat rat.

Attachment 1: rat#2.png.png
  14185   Mon Aug 27 09:14:45 2018   Steve   Update   PEM   small earth quakes

Small earthquakes and the suspensions. Which one is the most free and most sensitive? ITMX.

 

Attachment 1: small_EQs_vs_SUSs.png
  14184   Fri Aug 24 14:58:30 2018   Steve   Update   SUS   ETMX trips again

The second big glitch tripped the ETMX suspension. There were small earthquakes around the glitches. Its damping was recovered.

Quote:

Glitch, small amplitude, 350 counts  &  no trip.

Quote:

Here is another big one

Quote:

A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.
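
A minimal sketch of the trip condition described in the quoted analysis above (not the actual watchdog code; the threshold, window, and glitch size are made-up numbers for illustration):

import numpy as np

THRESHOLD_COUNTS = 1000.0            # hypothetical trip level

def watchdog_tripped(osem_counts, window=512):
    # RMS of the most recent `window` shadow-sensor samples vs. threshold
    recent = np.asarray(osem_counts)[-window:]
    return np.sqrt(np.mean(recent**2)) > THRESHOLD_COUNTS

quiet  = np.random.normal(0.0, 5.0, 2048)                  # normal OSEM noise
glitch = np.concatenate([quiet, np.full(256, 4000.0)])     # UL-style fast glitch
print(watchdog_tripped(quiet), watchdog_tripped(glitch))   # False, True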

 

 

 

Attachment 1: glitches.png
  14183   Fri Aug 24 10:51:23 2018   Steve   Update   VAC   pumpdown 81 at day 38

 

 

Attachment 1: d38.png
  14182   Fri Aug 24 08:04:37 2018   Steve   Update   General   small earth quake

 

 

Attachment 1: small_EQ.png
  14181   Thu Aug 23 16:10:13 2018   not Koji   Update   IMC   MC/PMC trouble

Great, thanks!

Quote:

I don't know what had been wrong, but I could lock the PMC as usual.
The IMC got relocked by AutoLocker. I checked the LSC and confirmed at least Y arm could be locked just by turning on the LSC servos.

 

  14180   Thu Aug 23 16:05:24 2018   Koji   Update   IMC   MC/PMC trouble

I don't know what had been wrong, but I could lock the PMC as usual.
The IMC got relocked by AutoLocker. I checked the LSC and confirmed at least Y arm could be locked just by turning on the LSC servos.

  14179   Thu Aug 23 15:26:54 2018   Jon   Update   IMC   MC/PMC trouble

I tried unsuccessfully to relock the MC this afternoon.

I came in to find it in a trouble state with a huge amount of noise on C1:PSL-FSS_PCDRIVE visible on the projector monitor. Light was reaching the MC but it was unable to lock.

  • I checked the status of the fast machines on the CDS>FE STATUS page. All up.
  • Then I checked the slow machine status. c1iscaux and c1psl were both down. I manually reset both machines. The large noise visible on C1:PSL-FSS_PCDRIVE disappeared.
  • After the reset, light was no longer reaching the MC, which I take to mean the PMC was not locked. On the PSL>PMC page, I blanked the control signal, re-enabled it, and attempted to relock by adjusting the servo gain as Gautam had shown me before. The PMC locks were unstable, with each one lasting only a second or so.
  • Next I tried restoring the burt states for c1iscaux and c1psl from a snapshot taken earlier today, before the machine reboots. That did not solve the problem either.
  14178   Thu Aug 23 08:24:38 2018   Steve   Update   SUS   ETMX trip follow-up

Glitch, small amplitude, 350 counts  &  no trip.

Quote:

Here is another big one

Quote:

A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.

 

 

Attachment 1: ETMX-UL_glitch.png
Attachment 2: PEM_4d.png
  14177   Wed Aug 22 12:22:27 2018   rana   Summary   Electronics   Inspection of the possible dual backplane interfaces for Acromag DAQ

I think we don't need to keep Crystal Ref: we can change this into a regular Wenzel box with no outside control or monitoring.

Quote:

 

  • Crystal Ref (D980353)
    • Schematic source: LIGO DCC D980353
    • Assesment: Only P1 (1A-4A) is to be connected to Acromag. (Just one DSub is sufficient)
    • P1 1A-4A

 

  14176   Wed Aug 22 08:44:09 2018   Steve   Update   General   earth quake

6.2M Bandon, OR did not trip any sus

 

Attachment 1: yesterday_EQs.png
  14175   Wed Aug 22 00:22:05 2018   Koji   Summary   Electronics   Inspection of the possible dual backplane interfaces for Acromag DAQ

[Johannes, Koji]

We went around the LSC, PSL, IOO, and SUS racks to check how many dual backplane interfaces will be required.

Euro card modules are connected to the backplane with two DIN 41612 connectors (as you know). The backplane connectors provide DC supplies and GND connections.
In addition, they are also used for the input and output connections with the fast and slow machines.

According to the past inspection by Johannes, most of the modules just use the upper DIN 41612 connector (called P1). But some modules showed the possibility of additionally using the other connector (P2).

Tuesday afternoon Johannes and I made a list of the modules with possible dual use, and I took some time to check the modules against the DCC, Jay's schematics, and visual inspection of the actual modules.

LSC Rack

  • Common mode servo (D040180 Rev B)
    • Schematic source D040180 Rev B D1500308
    • Assessment: Both P1 and P2 are to be connected to Acromag, but there are only a few channels on P2
    • P1: 1A-32A Digital In
    • P2: 1A-3A Analog Out (D32/33/34, SLOW MON and spare?)
            9A Digital Out for D35 (Limiter)
            10A-15A Spare
            16A Digital In (Latch Enable/Disable)
            25A, 25C  Differential Analog in (Differential offset input, indicated as "BIAS") 
  • PD Interface (D990543 Rev B)
    • Schematic source D990543 RevB
    • Assessment: No connection necessary. We don't monitor/control anything of any LSC PDs from Acromag.

PSL Rack

  • Generic DAQ Interface (D990155) - This is a DAC interface.
    • Schematic source: Jay's page D990155 Rev.B All the lines between P2 and P3 are connected.
    • Assessment: Only P2 is to be connected to Acromag.
    • P1 DAC mon -> not necessary
    • P2 A1-A16, Connected to DAC in P2-P3
  • PMC Servo
    • Schematic source: LIGO DCC D980352
    • Assessment: Only P1 (1A-9A) is to be connected to Acromag. (Just one DSub is sufficient)
    • P1 1A-9A
  • Crystal Ref (D980353)
    • Schematic source: LIGO DCC D980353
    • Assessment: Only P1 (1A-4A) is to be connected to Acromag. (Just one DSub is sufficient)
    • P1 1A-4A
  • TTFSS REV A
    • Schematic source: Not found
    • Assessment: Probably only P1 is sufficient. We need to analyze the board to figure out the channel assignment.

IOO Rack

  • PD Interface (D990543 Rev B)
    • Schematic source D990543 RevB
    • Assessment: Only a P1 connection is sufficient.
  • Generic DAQ Interface (D990155)
    • Assessment: Remove the module. We already have the same module in the PSL Rack. This is redundant.
  • Common mode servo (D040180 Rev B)
    • See above
  • Pentek Generic Input Board D020432
    • Schematic source Jay's page D020432-A
    • Assessment: No connection. There is no signal on the backplane.

SUS Rack

  • SUS Dewhitening
    • Schematic source: Jay's page D000316-A
    • Assessment: No connection.
    • We can omit Mon CHs.
    • Bypass/Inputs are already connected to the fast channels.

 

  14174   Tue Aug 21 17:32:51 2018   awade   Bureaucracy   Equipment loan   One P-810.10 Piezo Actuators element removed

I've taken a PI Piezo Actuator (P-810.10) from the 40m collection. I forgot to note it on the equipment checklist by the door, will do so when I next drop by.

  14173   Tue Aug 21 09:16:23 2018   Steve   Update   Wiki   AP table layout 20180821

 

 

Attachment 1: 20180821.JPG
  14172   Tue Aug 21 03:09:59 2018   johannes   Omnistructure   DAQ   Panels for Acromag DAQ chassis

I expanded the previous panels to 6U height for the new DAQ chassis we're buying for the upgrade. I figure it's best if we stick to the modular design, so I'm showing a panel for 8 BNC connectors as an example. The front panel has 12 slots, the back has 10 plus power connectors, switches, and the ethernet plug.

I moved the power switch to the rear because it's a waste of space to put it in the front, and it's not like we're power cycling this thing all the time. Note that the unit only requires +24V (for general operation, +20V also does the trick, as is the situation for ETMX) and +15V (excitation field for the binary I/O modules). While these could fit into a single CONEC power connector, it's probably for the better if we don't make a version that supplies a large positive voltage where negative is expected, so I put in two CONEC plugs for +/- 15 and +/- 24.

I want to order 5-6 of these as soon as possible, so if anyone wants anything changed or sees a problem, please do tell!

Attachment 1: auxdaq_40m_6U_front.pdf
Attachment 2: auxdaq_40m_6U_rear.pdf
Attachment 3: auxdaq_40m_6U_BNC.pdf
  14171   Mon Aug 20 15:16:39 2018   Jon   Update   CDS   Rebooted c1lsc, slow machines

When I came in this morning no light was reaching the MC. One fast machine was dead, c1lsc, as were a number of the slow machines: c1susaux, c1iool0, c1auxex, c1auxey, c1iscaux. Gautam walked me through resetting the slow machines manually and the fast machines via the reboot script. The computers are all back online and the MC is again able to lock.
