40m Log, Page 229 of 335
82 | Thu Nov 8 00:55:44 2007 | pkp | Update | OMC | Suspension tests
[Sam, Pinkesh]

We tried to measure the transfer functions of the 6 degrees of freedom of the OMC SUS. To our chagrin, we found that it was very hard to get the OSEMs to center and get a mean value of around 6000 counts. Somehow the LEFT and TOP OSEMs were coupled, and we tried to see if any of the OSEMs/suspension parts were touching each other. But there is still significant coupling between the various OSEMs.

In theory, the only OSEMs that are supposed to couple are [SIDE], [LEFT, RIGHT], [TOP1, TOP2, TOP3], since the motion along these 3 sets is orthogonal to the other sets. Thus an excitation along any one OSEM in a set should only couple to another OSEM in the same set and not to the others. The graphs below were obtained by driving all the OSEMs one by one at 7 Hz and at 500 counts (I still have to figure out how much that is in units of length). These graphs show that there is some sort of contact somewhere. I can't locate any physical contact at this point; TOP2 is suspicious and we moved it a bit, but it seems to be hanging free now. This could also be caused by the stiff wire with the PEEK on it: this wire is very stiff, and it can transmit motion from one degree of freedom to another quite easily.

I also have a graph showing the transfer function of the longitudinal degree of freedom. I did this first because it was simple and I only had to deal with SIDE, which seems to be decoupled from the other DOFs. This graph is similar to the one Norna has for the longitudinal DOF transfer function, with the addition of a peak around 1.8 Hz. This, I reckon, could well be due to the wire, although it is hard to claim for certain. I am going to stop the measurement now, start a fresh high-resolution spectrum, and leave it running overnight.

There is an extra peak in the high res spectrum that is disturbing.
Attachment 1: shakeleft.pdf
Attachment 2: shakeright.pdf
Attachment 3: shakeside.pdf
Attachment 4: shaketop1.pdf
Attachment 5: shaketop2.pdf
Attachment 6: shaketop3.pdf
Attachment 7: LongTransfer.pdf
Attachment 8: Shakeleft7Nov2007_2.pdf
Attachment 9: Shakeleft7Nov2007_2.png
12648 | Wed Nov 30 01:47:56 2016 | gautam | Update | LSC | Suspension woes

Short summary:

• Looks like Satellite boxes are not to blame for glitchy behaviour of shadow sensor PD readouts
• Problem may lie at the PD whitening boards (D000210) or with the Contec binary output cards in c1sus
• Today evening, similar glitchy behaviour was observed in all MC1 PD readout channels, leading to frequent IMC unlocking. Cause unknown, although I did work at 1X5, 1X6 today, and pulled out the PD whitening board for ITMY which sits in the same eurocrate as that for MC1. MC2/MC3 do not show any glitches.

Detailed story below...

Part 1: Satellite box swap

Yesterday, I switched the ITMY and ETMY satellite boxes, to see if the problems we have been seeing with ITMY UL would move with the box to ETMY. They did not - ITMY UL remained glitchy (based on data from approximately 10pm PDT 28 Nov to 10am PDT 29 Nov). Together with the tabletop diagnosis I did with the tester box, this led me to conclude that the satellite box is not to blame.

Part 2: Tracing the signal chain (actually this was part 3 chronologically but this is how it should have been done...)

So if the problem isn't with the OSEMs themselves or the satellite box, what is wrong? I attempted to trace the signal chain from the satellite box into our CDS system as best as I could. The suspension wiring diagram on our wiki page is (I think) a past incarnation. Of course putting together a new diagram was a monumental task I wasn't prepared to undertake tonight, but in the long run this may be helpful. I will put up a diagram of the part I did trace out tomorrow, but the relevant links for this discussion are as follows (? indicates I am unsure):

1. Sat box (?)--> D010069 via 64pin IDE connector --> D000210 via DB15 --> D990147 via 4pin LEMO connectors --> D080281 via DB25 --> ADC0 of c1sus
2. D000210 backplane --> cross-connect (mis)labelled "ITMX white" via IDE connector
3. c1sus CONTEC DO-32L-PE --> D080478 via DB37 --> BO0-1 --> cross-connect labelled "XY220 1Y4-33-16A" via IDE --> (?)  cross-connect (mis)labelled "ITMX white" via IDE connector

I have linked to the DCC page for the various parts where available. Unfortunately I can't locate (on the new DCC or the old one, or in the elog or wiki) drawings for D010069 (Satellite Amplifier Adapter Board), D080281 (the "anti-aliasing interface"), or D080478 (the binary output breakout box). I have emailed Ben Abbott, who may have access to some other archive - the drawings would be useful, as it is looking likely that the problem lies with the binary output.

So presumably the first piece of electronics after the Satellite box is the PD whitening board. After placing tags on the 3 LEMOs and 1 DB15 cable plugged into this board, I pulled out the ITMY board to do some tabletop diagnosis in the afternoon around 2pm 29Nov.

Part 3: PD whitening board debugging

This particular board has been reported as problematic in the recent past. I started by inserting a tester board into the slot occupied by this board - the LEDs on the tester board suggested that power-supply from the backplane connectors were alright, confirmed with a DMM.

Looking at the board itself, C4 and C6 are tantalum capacitors, and I have had problems with this type of capacitor in the past. In fact, the corresponding capacitors on the MC3 board (which is the only one visible; I didn't want to pull out boards unnecessarily) have been replaced with electrolytic capacitors, which are presumably more reliable. In any case, these capacitors do not seem to be at fault; the board receives +/-15 V as advertised.

The whitening switching is handled by the MAX333 - this is what I looked at next. This IC is essentially a quad SPDT switch, and a binary input supplied via the backplane connector serves to route the PD input either through a whitening filter, or bypass it via a unity gain buffer. The logic levels that effect the switching are +15V and 0V (and not the conventional 5V and 0V), but according to the MAX333 datasheet, this is fine. I looked at the supply voltage to all ICs on the board, DC levels seemed fine (as measured with a DMM) and I also looked at it on an oscilloscope, no glitches were seen in ~30sec viewing stretch. I did notice something peculiar in that with no input supplied to the MAX333 IC (i.e. the logic level should be 15V), the NO and NC terminals appear shorted when checked with a DMM. Zach has noticed something similar in the past, but Koji pointed out that the DMM can be fooled into thinking there is a short. Anyway, the real test was to pull the logic input of the MAX333 to 0, and look at the output, this is what I did next.

The schematic says the whitening filter has poles at 30,100Hz and a zero at 3 Hz. So I supplied as "PD input" a 12Hz 1Vpp sinewave - there should be a gain of ~x4 when this signal passes through the path with the whitening filter. I then applied a low frequency (0.1Hz) square wave (0-5V) to the "bypass" input, and looked at the output, and indeed saw the signal amplitude change by ~4x when the input to the switch was pulled low. This behaviour was confirmed on all five channels, there was no problem. I took transfer functions for all 5 channels (both at the "monitor" point on the backplane connector and on the front panel LEMOs), and they came out as expected (plot to be uploaded soon).
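As a cross-check on that expected factor, the whitening shape from the schematic (zero at 3 Hz, poles at 30 Hz and 100 Hz; I am assuming unity gain at DC) can be evaluated at the 12 Hz test frequency:

```python
import numpy as np

def whitening_gain(f, fz=3.0, fp=(30.0, 100.0)):
    """|H| at frequency f for H(s) = (1 + s/wz) / prod_k (1 + s/wp_k), unity DC gain."""
    s = 2j * np.pi * f
    H = 1 + s / (2 * np.pi * fz)
    for p in fp:
        H = H / (1 + s / (2 * np.pi * p))
    return abs(H)

print(whitening_gain(12.0))   # ~3.8, consistent with the ~x4 amplitude change seen on the scope
```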

Next, I took the board back to the eurocrate. I first put in a tester box into the slot and measured the voltage levels on the backplane pins that are meant to trigger bypassing of the whitening stage, all the pins were at 0V. I am not sure if this is what is expected, I will have to look inside D080478 as there is no drawing for it. Note that these levels are set using a Contec binary output card. Then I attached the PD whitening board to the tester board, and measured the voltages at the "Input" pins of all the 5 SPDT switches used under 2 conditions - with the appropriate bit sent out via the Contec card set to 0 or 1 (using the button on the suspension MEDM screens). I confirmed using the BIO medm screen that the bit is indeed changing on the software side, but until I look at D080478, I am not sure how to verify the right voltage is being sent out, except to check at the pins on the MAX333. For this test, the UL channel was indeed anomalous - while the other 4 channels yielded 0V (whitening ON, bit=1) and 15V (whitening OFF, bit=0), the corresponding values for the UL channel were 12V and 10V.

I didn't really get any further than this tonight. But this still leaves unanswered questions - if the measured values are faithful, then the UL channel always bypasses the whitening stage. Can this explain the glitchy behaviour?

Part 4: MC1 troubles

At approximately 8pm, the IMC started losing lock far too often - see the attached StripTool trace. There was a good ~2hour stretch before that when I realigned the IMC, and it held lock, but something changed abruptly around 8pm. Looking at the IMC mirror OSEM PD signals, all 5 MC1 channels are glitching frequently. Indeed, almost every IMC lockloss in the attached StripTool is because of the MC1 PD readouts glitching, and subsequently, the damping loops applying a macroscopic drive to the optic which the FSS can't keep up with. Why has this surfaced now? The IMC satellite boxes were not touched anytime recently as far as I am aware. The MC1 PD whitening board sits in the same eurocrate I pulled the ITMY board out of, but squishing cables/pushing board in did not do anything to alleviate the situation. Moreover, MC2 and MC3 look fine, even though their PD whitening boards also sit in the same eurocrate. Because I was out of ideas, I (soft) restarted c1sus and all the models (the thinking being if something was wrong with the Contec boards, a restart may fix it), but there was no improvement. The last longish lock stretch was with the MC1 watchdog turned off, but as soon as I turned it back on the IMC lost lock shortly after.

I am leaving the autolocker off for the night, hopefully there is an easy fix for all of this...

Attachment 1: IMCwoes.png
10345 | Thu Aug 7 12:34:56 2014 | Jenne | Update | LSC | Suspensions not kicking?

Yesterday, Q helped me look at the DACs for some of the suspensions, since Gabriele pointed out that the DACs may have trouble with zero crossings.

First, I looked at the oplevs of all the test masses with the oplev servos off, as well as the coil drive outputs from the suspension screen which should go straight out to the DACs.  I put some biases on the suspensions in either pitch or yaw so that one or two of the coil outputs was crossing zero regularly.  I didn't see any kicks.

Next, we turned off the inputs of the coil driver filter banks, unplugged the cable from the coil driver board to the satellite box, and put in sinusoidal excitations to each of the coils using awggui.  We then looked with a 'scope at the monitor point of the coil driver boards, but didn't see any glitches or abnormalities.  (We then put everything back to normal)

Finally, I locked and aligned the 2 arms, and just left them sitting.  The oplev servos were engaged, but I didn't ever see any big kicks.

I am suspicious that there was something funny going on with the computers and RFM over the weekend, when we were not getting RFM connections between the vertex and the end stations, and that somehow weird signals were also getting sent to some of the optics.  Q's nuclear reboot (all the front ends simultaneously) fixed the RFM situation, and I don't know that I've seen any kicks since then, although Eric thinks that he has, at least once.  Anyhow, I think they might be gone for now.

107 | Thu Nov 15 18:23:55 2007 | John | HowTo | Computers | Swap CAPS and CTRL on a Windows 2000/XP machine
I've swapped ctrl and caps on the four control room Windows machines. Right ctrl is unchanged.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout

Click on the Keyboard Layout entry.

Edit -> New -> Binary Value; name it "Scancode Map".

Then select the new Scancode Map entry.

In the dialog box enter the following data:

0000: 00 00 00 00 00 00 00 00
0008: 03 00 00 00 3A 00 1D 00
0010: 1D 00 3A 00 00 00 00 00

Exit the Registry Editor. You need to log off and then on in XP (and restart in Windows 2000) for the changes to be made.
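For reference, the binary value can be reconstructed programmatically. This is my reading of the Scancode Map layout (version and flags DWORDs, an entry count including the terminator, one DWORD per mapping with the code to emit in the low word and the key pressed in the high word, then a zero terminator), so treat it as a sketch:

```python
import struct

CAPS, LCTRL = 0x3A, 0x1D                  # "set 1" scancodes for CapsLock and left Ctrl

def scancode_map(swaps):
    """Build a Windows 'Scancode Map' registry value. All fields little-endian."""
    out = struct.pack('<II', 0, 0)                    # version, flags
    out += struct.pack('<I', len(swaps) + 1)          # entry count incl. terminator
    for emit, pressed in swaps:
        out += struct.pack('<HH', emit, pressed)      # low word: code emitted; high word: key pressed
    return out + struct.pack('<I', 0)                 # zero terminator

data = scancode_map([(CAPS, LCTRL), (LCTRL, CAPS)])   # swap left Ctrl <-> CapsLock
print(data.hex(' '))                                  # matches the three lines of hex above
```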
3947 | Thu Nov 18 14:19:01 2010 | josephb | Update | CDS | Swapped c1auxex and c1auxey codes

Problem:

We had not switched the c1aux crates when we renamed the arms; thus the watchdogs labeled ETMX were really watching ETMY and vice-versa.

Solution:

I used telnet to connect to c1auxey, and then c1auxex.

I used the bootChange command to change the IP address of c1auxey to 192.168.113.59 (c1auxex's IP), and its startup script.  Similarly c1auxex was changed to c1auxey and then both were rebooted.

c1auxey > bootChange

'.' = clear field;  '-' = go to previous field;  ^D = quit
boot device          : ei
processor number     : 0
host name            : linux1
file name            : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.60:ffffff00 192.168.113.59:ffffff00
inet on backplane (b):
host inet (h)        : 192.168.113.20
gateway inet (g)     :
user (u)             : controls
ftp password (pw) (blank = use rsh):
flags (f)            : 0x0
target name (tn)     : c1auxey c1auxex
startup script (s)   : /cvs/cds/caltech/target/c1auxey/startup.cmd /cvs/cds/caltech/target/c1auxex/startup.cmd
other (o)            :
value = 0 = 0x0

c1auxex > bootChange

'.' = clear field;  '-' = go to previous field;  ^D = quit
boot device          : ei
processor number     : 0
host name            : linux1
file name            : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.59:ffffff00 192.168.113.60:ffffff00
inet on backplane (b):
host inet (h)        : 192.168.113.20
gateway inet (g)     :
user (u)             : controls
ftp password (pw) (blank = use rsh):
flags (f)            : 0x0
target name (tn)     : c1auxex c1auxey
startup script (s)   : /cvs/cds/caltech/target/c1auxex/startup.cmd /cvs/cds/caltech/target/c1auxey/startup.cmd
other (o)            :
value = 0 = 0x0

11657 | Thu Oct 1 20:26:21 2015 | jamie | Update | DAQ | Swapping between fb and fb1

Swapping between fb and fb1 as DAQ is very straightforward, now that they are both on the DAQ network:

• stop daqd on fb
• on fb sudoedit /diskless/root/etc/init.d/mx_stream and set: endpoint=fb1:0
• start daqd on fb1.  The "new" daqd binary on fb1 is at: ~controls/rtbuild/trunk/build/mx-localtime/daqd

Once daqd starts, the front end mx_stream processes will be restarted by their monits, and be pointing to the new location.

Moving back is just reversing those steps.

1135 | Fri Nov 14 17:41:50 2008 | Jenne | Omnistructure | Electronics | Sweet New Soldering Iron
The fancy new Weller Soldering Iron is now hooked up on the electronics bench.

Accessories for it are in the blue twirly cabinet (spare tips of different types, a CD, and a USB cable to connect it to a computer, should we ever decide to do so).

Rana: the soldering iron has a USB port?
Attachment 1: newSolderingIron.JPG
689 | Thu Jul 17 12:15:21 2008 | Eric | Update | PSL | Swept PMC PZT voltage range
I unlocked the PMC and swept over C1:PSL-PMC_RAMP's full range a couple of times this morning.  The PMC should now be relocked and returned
to normal.
14339 | Mon Dec 10 15:53:16 2018 | gautam | Update | LSC | Swept-sine measurement with DTT

Disclaimer: This is almost certainly some user error on my part.

I've been trying to get this running for a couple of days, but am struggling to understand some behavior I've been seeing with DTT.

Test:

I wanted to measure some transfer functions in the simulated model I set up.

• To start with, I put a pendulum (f0 = 1Hz, Q=5) TF into one of the filter modules
• Isolated it from the other interconnections (by turning off the MEDM ON/OFF switches).
• Set up a DTT swept-sine measurement
• EXC channel was C1:OMC-TST_AUX_A_EXC
• Monitored channels were C1:OMC-TST_AUX_A_IN2 and C1:OMC-TST_AUX_A_OUT.
• Transfer function being measured was C1:OMC-TST_AUX_A_OUT/C1:OMC-TST_AUX_A_IN2.
• Coherence between the excitation and output were also monitored.
• Sweep parameters:
• Measurement band was 0.1 - 900 Hz
• Logarithmic, downward.
• Excitation amplitude = 1ct, waveform = "Sine"
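For reference, the pendulum filter used in this test is a standard second-order section; a quick sketch of its expected magnitude over the sweep band (my parametrization, assuming unity DC gain):

```python
import numpy as np
from scipy import signal

f0, Q = 1.0, 5.0                          # pendulum parameters used in the test
w0 = 2 * np.pi * f0
# H(s) = w0^2 / (s^2 + (w0/Q)*s + w0^2): unity gain at DC, gain ~Q at the resonance
sys = signal.TransferFunction([w0**2], [1.0, w0 / Q, w0**2])

f = np.logspace(-1, np.log10(900), 500)   # 0.1 Hz - 900 Hz, as in the sweep
_, H = signal.freqresp(sys, 2 * np.pi * f)
mag = np.abs(H)                           # ~1 at low f, ~5 at 1 Hz, 1/f^2 rolloff above
```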

Unexplained behavior:

• The transfer function measurement fails with a "Synchronization error", at ~15 Hz.
• I don't know what is special about this frequency, but it fails repeatedly at the same point in the measurement.
• Coherence is not always 1
• Why should the coherence deviate from 1 since everything is simulated? I think numerical noise would manifest when the gain of the filter is small (i.e. high frequencies for the pendulum), but the measurement and coherence seem fine down to a few tens of Hz.

To see if this is just a feature of the simulated model, I tried measuring the "plant" filter in the C1:LSC-PRCL filter bank (which is also just a pendulum TF), and ran into the same error. I also tried running the DTT template on donatella (Ubuntu12) and pianosa (SL7), and got the same error, so this must be something I'm doing wrong with the way the measurement is being run / set up. I couldn't find any mention of similar problems in the SimPlant elogs I looked through; does anyone have an idea as to what's going on here?

I can't get the "import" feature of DTT to work - I go through the GUI prompts to import an ASCII txt file exported from FOTON, but nothing selectable shows up in DTT once the import dialog closes (which I presume means the import was successful). Are we using an outdated version of DTT (GDS-2.15.1)? But Attachment #1 shows the measured part of the pendulum TF, and it is consistent with what is expected until the measurement terminates with a synchronization error.

The import problem is fixed - when importing, you have to give names to the two channels that define the TF you're importing (these can be arbitrary, since the ASCII file doesn't have any channel name information). Once I did that, the import works. You can see that while the measurement ran, the FOTON TF matches the DTT-measured counterpart.

11 Dec 2pm: After discussing with Jamie and Gabriele, I also tried changing the # of points, start frequency, etc., but ran into the same error (though admittedly I only tried 4 combinations of these, so not exhaustive).

Attachment 1: SimTF.pdf
3589 | Mon Sep 20 11:39:45 2010 | josephb | Update | CDS | Switch over

I talked to with Alex this morning, discussing what he needed to do to have a frame builder running that was compatible with the new front ends.

1) We need a heavy-duty router for a separate network dedicated to data acquisition, running between the front ends and the frame builder.  Alex says they have one over at Downs, although a new one may need to be ordered to replace that one.

2) The frame builder is a linux machine (basically we stop using the Sun fb40m and start using the linux fb40m2 directly).

3) He is currently working on the code today.  Depending on progress today, it might be installable tomorrow.

815 | Fri Aug 8 12:21:57 2008 | josephb | Configuration | Computers | Switched X end ethernet connections over to new switch
In 1X4, I've switched the ethernet connections from c1iscex and c1auxex over to the new Prosafe 24 port switches. They also use the new cat6 cables, and are labeled.

At the moment, everything seems to be working as it was before. In addition:

I can telnet into c1auxex (and can do the same to c1auxey which I didn't touch).
I can't telnet into c1iscex (but I couldn't do that before, nor can I telnet into c1iscey; I think these are computers which, once running, don't let you in).
852 | Tue Aug 19 13:34:58 2008 | josephb | Configuration | Computers | Switched c1pem1, c0daqawg, c0daqctrl over to new switches
Moved the Ethernet connections for c1pem1, c0daqawg, and c0daqctrl over to the Netgear Prosafe switch in 1Y6, using new cat6 cables.
4791 | Mon Jun 6 22:41:22 2011 | rana | Update | SUS | Switching problem in SUS models

Some weeks ago, Joe, Jamie, and I reworked the ETMY controls.

Today we found that the model rebuilds and BURT restores have conspired to put the SUS damping into a bad state.

1) The FM1 files in the XXSEN modules should switch the analog shadow sensor whitening. I found today that, at least on ETMY and ETMX, they do nothing. This needs to be fixed before we can use the suspensions.

2) I found all of the 3:30 and cts2um buttons OFF AGAIN. There's something certainly wrong with the way the models are being built or BURTed. All of our suspension tuning work is being lost as a consequence. We (Joe and Jamie) need to learn to use CONLOG and check that the system is not in a nonsense state after rebuilds. Just because the monitors have lights and the MEDM values are fluctuating doesn't mean that "ITS WORKING". As a rule, when someone says "it seems to work", that basically means that they have no idea if anything is working.

3) We need a way to test that the CDS system is working...

Attachment 1: a.pdf
15077 | Thu Dec 5 14:54:15 2019 | gautam | Update | General | Symlink to SRmeasure and AGmeasure

I symlinked the SRmeasure and AGmeasure commands into /usr/bin/ on donatella (as is done on pianosa) so that these scripts are in $PATH and may be run without having to navigate to the labutils directory.

13909 | Fri Jun 1 19:25:11 2018 | pooja | Update | Cameras | Synchronizing video data with the applied motion to the mirror

Aim: To synchronize data from the captured video and the signal applied to ETMX

In order to correlate the intensity fluctuations of the scattered light with the motion of the test mass, we are planning to use a neural network. For this, we need a video of the scattered light synchronized with the signal applied to the test mass. Gautam helped me capture a 60 sec video of the scattering of infrared laser light after ETMX was dithered in pitch at ~0.2 Hz.

I developed a Python program to capture the video and convert it into a time series of the sum of pixel values in each frame using OpenCV, to see the variation. Initially we had tried the same with green laser light and a signal of approximately 11.12 Hz, but in order to see the variation clearly, we repeated the measurement with a lower-frequency signal after locking the IR laser today. I have attached the plots that we got below. The first graph gives the intensity fluctuations from the video. The third and fourth graphs are those of the transmitted light and the signal applied to ETMX to shake it. Since the video captured using the camera was very noisy, and the intensity fluctuations in the scattered light had twice the frequency of the applied signal, we also captured a video after turning off the laser. The second plot gives the background noise, probably from the camera. Since the camera noise is very high, it may not be possible to train a neural network on this data set.
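The frame-summing step can be sketched as follows; this is a minimal stand-in for the actual sync_plots.py, with the OpenCV video-reading part left as a comment (the filename there is hypothetical):

```python
import numpy as np

def frame_brightness(frames):
    """Sum of pixel values in each frame -> one time-series sample per frame."""
    return np.array([np.asarray(f).sum() for f in frames], dtype=float)

# Reading the video would use OpenCV along these lines (filename is hypothetical):
# import cv2
# cap = cv2.VideoCapture("scatter_video.avi")
# frames = []
# while True:
#     ok, frame = cap.read()
#     if not ok:
#         break
#     frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
# timeseries = frame_brightness(frames)
```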

Since the videos captured consume a lot of memory I haven't uploaded it here. I have uploaded the python code 'sync_plots.py' in github (https://github.com/CaltechExperimentalGravity/GigEcamera/tree/master/Pooja%20Sekhar/PythonCode).

Attachment 1: camera_mirror_motion_plots.pdf
446 | Thu Apr 24 23:50:10 2008 | rana | Update | General | Syringes in George the Freezer
There are some packets of syringes in the freezer which are labeled as belonging to an S. Waldman.
Thu Apr 24 23:48:55 2008

Be careful of them, don't give them out to the undergrads, and just generally leave them alone. I
will consult with the proper authorities about it.
16315 | Tue Sep 7 18:00:54 2021 | Tega | Summary | Calibration | System Identification via line injection

[paco]

This morning, I spent some time restoring the jupyter notebook server running on allegra. This server was first set up by Anchal to be able to use the latest nds python API tools, which are handy for the calibration stuff. The process to restore the environment was to run "source ~/bashrc.d/*" to restore some of the aliases, variables, paths, etc. that made the nds server work. I then ran ssh -N -f -L localhost:8888:localhost:8888 controls@allegra from pianosa and carried on with the experiment.

[paco, hang, tega]

We started a notebook under /users/paco/20210906_XARM_Cal/XARM_Cal.ipynb, in which the first part does the following:

• Set up list of excitations for C1:LSC-XARM_EXC (for example three sine waveforms) using awg.py
• Make sure the arm is locked
• Read a reference time trace of the C1:LSC-XARM_IN2 channel for some duration
• Start excitations (one by one at the moment, ramptime ~ 3 seconds, same duration as above)
• Get data for C1:LSC-XARM_IN2 for an equal duration (raw data in Attachment #1)
• Generate the excitation sine and cosine waveforms using numpy and demodulate the raw timeseries using a 4th order lowpass filter with fc ~ 10 Hz
• Estimate the correct demod phase by computing arctan(Q / I) and rerunning the demodulation to dump the information into the I quadrature (Attachment #2).
• Plot the estimated ASD of all the quadratures (Attachment #3)
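The demodulation steps above can be sketched on synthetic data (sample rate, line frequency, amplitude, and phase here are made-up stand-ins, not values from the measurement):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, f0, dur = 2048.0, 53.0, 8.0          # sample rate, line frequency, duration (assumed)
t = np.arange(0, dur, 1 / fs)
# Stand-in for C1:LSC-XARM_IN2: a 53 Hz line of amplitude 0.5 and phase 0.3 rad, plus noise
x = 0.5 * np.sin(2 * np.pi * f0 * t + 0.3) + 0.01 * rng.standard_normal(t.size)

# Demodulate: mix against sine/cosine at the line frequency, then 4th-order lowpass, fc ~ 10 Hz
sos = signal.butter(4, 10.0, fs=fs, output='sos')
I = signal.sosfilt(sos, 2 * x * np.sin(2 * np.pi * f0 * t))
Q = signal.sosfilt(sos, 2 * x * np.cos(2 * np.pi * f0 * t))

# After the filter settles, I + iQ estimates the line's complex amplitude;
# rotating by -phi dumps everything into the I quadrature
I0, Q0 = I[t > 2].mean(), Q[t > 2].mean()
phi = np.arctan2(Q0, I0)                  # estimated demod phase (~0.3 rad)
amp = np.hypot(I0, Q0)                    # estimated line amplitude (~0.5)
```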

[paco, hang, tega]

Estimation of open loop gain:

• Grab data from the C1:LSC-XARM_IN1 and C1:LSC-XARM_IN2 test points
• Infer the excitation from their difference, i.e. C1:LSC-XARM_EXC = C1:LSC-XARM_IN2 - C1:LSC-XARM_IN1
• Compute the open loop gain as follows : G(f) = csd(EXC,IN1)/csd(EXC,IN2), where csd computes the cross spectra density of the input arguments
• For the uncertainty in G, dG, we repeat steps (1) to (3) with & without signal injection in the C1:LSC-XARM_EXC channel. In the absence of signal injection, the signal in C1:LSC-XARM_IN2 is of the form: Y_ref = Noise/(1-G), whereas with nonzero signal injection, the signal in C1:LSC-XARM_IN2 has the form: Y_cal = EXC/(1-G) + Noise/(1-G), so their ratio, Y_cal/Y_ref = EXC/Noise, gives the SNR, which we can then invert to give the uncertainty in our estimation of G, i.e dG = Y_ref/Y_cal.
• For the excitation at 53 Hz, our measurement of the open loop gain comes out to about 5 dB, which is consistent with previous measurements.
• We seem to have an SNR in excess of 100 for a measurement time of 35 seconds and 1 count of amplitude, which gives a relative uncertainty on G of 0.1%.
• The analysis details are ongoing. Feedback is welcome.
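The open-loop-gain estimator in step 3 can be sketched with scipy on synthetic data, replacing the loop with a known filter so the ratio of cross spectral densities can be checked against the exact response (all names and values here are illustrative):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
# Synthetic stand-in: "IN2" drives a known plant to give "IN1", so the csd ratio should recover it
x = rng.standard_normal(2**16)                   # plays the role of IN2 (= EXC here)
b, a = signal.butter(2, 0.1)                     # known "plant" (arbitrary choice)
y = signal.lfilter(b, a, x)                      # plays the role of IN1

f, Pxy = signal.csd(x, y, fs=1.0, nperseg=4096)  # csd(EXC, IN1)
_, Pxx = signal.csd(x, x, fs=1.0, nperseg=4096)  # csd(EXC, IN2)
G = Pxy / Pxx                                    # the estimator from step 3

_, H = signal.freqz(b, a, worN=2 * np.pi * f)    # exact response for comparison
```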
Attachment 1: raw_timeseries.pdf
Attachment 2: demod_signals.pdf
Attachment 3: cal_noise_asd.pdf
2474 | Mon Jan 4 17:26:01 2010 | Mott | Update | General | T & R plots for Y1 and Y1S mirrors

The most up-to-date T and R plots for the Y1 and Y1S mirrors, as well as a T measurement for the ETM, can be found on:

11229 | Tue Apr 21 01:17:13 2015 | Jenne | Update | Modern Control | T-240 self-noise propagated through stack and pendulum

Going back to Wiener filtering for a moment, I took a look at what the T-240 noise level looks like in terms of pitch motion on one of our SOS optics (eg. PRM).

The self-noise of the T-240 (PSD, in dB referenced to 1m^2/s^4/Hz) was taken by pulling numbers from the Users Guide.  This is the ideal noise floor, if our installation was perfect.  I'm not sure where Kissel got the numbers from, but on page 13 of G1200556 he shows higher "measured" noise values for a T-240, although his numbers are already transformed to m/rtHz.

To get the noise numbers to meters, I use:  $\left[ \frac{\rm m}{\sqrt{\rm Hz}} \right] = \frac{\sqrt{10^{\frac{[\rm dB/\sqrt{Hz}]}{10}}}}{(2 \pi f)^2}$.  The top of that fraction is (a) getting to magnitude from power-dB and (b) getting to asd units from psd units.  The bottom of the fraction is getting rid of the extra 1/s^2.
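A small numeric version of that conversion (the -180 dB input below is an arbitrary illustrative level, not a T-240 datasheet number):

```python
import numpy as np

def psd_db_to_asd_m(db, f):
    """dB (re 1 (m/s^2)^2/Hz) -> displacement ASD in m/rtHz:
    undo the power-dB, take the square root, divide by (2*pi*f)^2."""
    return np.sqrt(10.0 ** (db / 10.0)) / (2 * np.pi * f) ** 2

print(psd_db_to_asd_m(-180.0, 1.0))   # ~2.5e-11 m/rtHz at 1 Hz
```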

Next I propagate this seismometer noise (in units of m/rtHz) to effective pendulum pitch motion, by propagating through the stacks and the transfer function from pos motion at the anchor point of the pendulum to pitch motion of the mirror (see eq 63 of T000134 for the calculation of this TF).  This gives me radians/rtHz of mirror motion caused by the ground motion (see Attachment 1).


I have not actually calibrated the POP QPD, so I will need to do that in order to compare this seismometer noise to my Wiener filter results.

Attachment 1: T240selfnoise.png
Attachment 2: Limits.tar.gz
8558 | Thu May 9 02:47:23 2013 | Jenne | Update | PEM | T240 at corner station - cabling thoughts

Something that I want to look at is the coherence between seismic motion and PRM motion.  Since Den has been working on the fancy new seismometer installations, I got caught up for the day with getting the new corner seismometer station set up with a T240.  Later, Rana pointed out that we already have a Guralp sitting underneath the POX table, and that will give us a good first look at the coherence.  However, I'm still going to write down all the cable thoughts that I had today:

The cables that came with the electronics that we have (from Vladimir and tiltmeter-land) are not long enough to go from the seismometer to 1X7, which is where I'd like to put the readout box (since the acquisition electronics are in that rack). I want to make a long cable that is 19-pin MilSpec on one end and 25-pin D-sub on the other. This will eliminate the creative connector-type changes that happen in the existing setup. However, before making the cable, I had to figure out what pins go to what:

25pin Dsub    19pin MilSpec
1             P
2             N
3             E
4             No conn
5             D
6             R and V
7             H
8             J
9             No conn
10            T
11            F
12            L
13            No conn
14            B
15            A
16            R and V
17            No conn
18            C
19            G
20            G
21            K
22            U
23            No conn
24            S
25            M

I am not sure why R and V are shorted to each other, but this connection is happening on the little PCB MilSpec->ribbon changer, right at the MilSpec side.  I need to glance at the manual to see if these are both ground (or something similar), or if these pins should be separate. Also, I'm not sure why 19 and 20 are shorted together.  I can't find (yet) where the short is happening. This is also something that I want to check before making the cable.

Den had one Female 19 pin MilSpec connector, meant for connecting to a T240, but the cable strain relief pieces of the connector have 'walked off'.  I can't find them, and after a solid search of the control room, the electronics bench, and the place inside where all of Den's connectors were stored, I gave up and ordered 2 more.  If we do find the missing bits for this connector, we can use it for the 2nd T240 setup, since we'll need 2 of these per seismometer. If anyone sees mysterious camo-green metal pieces that could go with a MilSpec connector, please let me know.

14992 | Thu Oct 24 18:37:15 2019 | gautam | Update | PEM | T240 checkout

Summary:

The Trillium T240 seismometer needs mass re-centering. Has anyone done this before, and do we have any hardware to do this?

Details:

I went to the Trillium interface box in 1X5. In this elog, Koji says it is D1000749-v2. But looking at the connector footprint on the back panel, it is more consistent with the v1 layout. Anyway I didn't open it to check. Main point is that none of the backplane data I/O ports are used. We are digitizing (using the fast CDS system) the front panel BNC outputs for the three axes. So of the various connectors available on the interface box, we are only using the front panel DB25, the front panel BNCs, and the rear panel power.

The cable connecting this interface box to the actual seismometer is a custom one I believe. It has a 19 pin military circular type hermetic connector on one end, and a DB25 on the other. Power is supplied to the seismometer from the interface box via this cable, so in order to run the test, I had to use a DB25 breakout board to act as a feedthrough and peek at the signals while the seismometer and interface boards were connected. I used Jenne's mapping of the DB25--> 19 pin connector (which also seems consistent with the schematic). Findings:

1. We are supplying the Trillium with 39 V DC between the +PWR and -PWR pins, while the datasheet specifies 9 V to 36 V DC isolated. Is this okay, given that 39 V exceeds the specified maximum?
2. The analog (AGND) and digital (DGND) ground pins are shorted. Is this okay?
3. I measured the DC voltages between the AGND pin and each of the mass position outputs.
• These are supposed to indicate when the masses need re-centering.
• The nominal output ranges for these are +/- 4 V single-ended.
• I measured the following values (I don't know how the U,V,W basis is mapped onto the cartesian X,Y,Z coordinates):
U_MP: 0.708 V
V_MP: -2.151 V
W_MP: -3.6 V
• So at the very least, the mass needs centering in the W direction (the manual recommends doing the re-centering procedure when one of these indicators exceeds 3.5 V in absolute value).
4. I also checked the DC voltages of the (X,Y,Z) outputs of the seismometer on the front panel BNCs, and also on the DB25 connector (so directly from the seismometer). These are rated to have a range of 40 Vpp differential between the pins. I measured ~0 V on all three axes. This is a bit confusing, as I assumed a de-centered mass would lead to saturation in one of these outputs, but maybe we are measuring velocities and not positions?
5. We should probably consider monitoring these signals long term to track such drifts. What is the spare channel situation in the c1sus Acromag?
6. Interestingly, today evening, there is no excess noise in the 0.1-0.3 Hz band in the X-axis of the seismometer even though it is past 6pm PDT now, which is usually the time when the excess begins to show up. The z-axis 0.3-1Hz BLRMS channel has flatlined though...
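As a trivial sketch, the check in item 3 above can be scripted. The 3.5 V threshold is the manual's recommendation as quoted above, and the voltages are the values measured here (the channel-to-axis mapping is still unknown):

```python
# Compare the measured T240 mass-position voltages against the
# re-centering threshold (|V| > 3.5 V, per the manual as quoted above).
THRESHOLD_V = 3.5

mass_positions = {
    "U_MP": 0.708,   # values measured in this test, in volts
    "V_MP": -2.151,
    "W_MP": -3.6,
}

needs_recentering = sorted(axis for axis, v in mass_positions.items()
                           if abs(v) > THRESHOLD_V)

for axis in needs_recentering:
    print(f"{axis}: {mass_positions[axis]:+.3f} V -> re-centering recommended")
```

With the values above, only W_MP exceeds the threshold, consistent with the conclusion in item 3.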

I am holding off on attempting any re-centering, for more experienced people to comment.

15013   Tue Nov 5 12:37:50 2019 gautamUpdatePEMT240 interface unit pulled out

I removed the Trillium T240 DAQ interface unit from 1X4 for investigation.

It was returned to the electronics rack and all the connections were re-made. Some details:

1. The board is indeed a D1000749-v2 as Koji said it is. There is just an additional board (labelled D1001872 but for which there is no schematic on the DCC) inside the 1U box that breaks out the D37 connector of the v2 into 3 D15 connectors. I took photos.
2. Armed with the new cable Chub got, and following the manual, I ran the re-centering routine.
• Now all the mass-monitoring position voltages are <0.3 V DC, as the manual tells me they should be.
• I noticed that when the seismometer is just plugged in and powered, it takes a few minutes for the mass monitoring voltages to acquire their steady state values.
• The V indicator reported ~-2V DC, and the W indicator reported -3.9V DC.
• While running the re-centering routine, I monitored the mass-position indicator voltages (via the backplane D15 connector) on an oscilloscope. See Attachment #1 for the time series. The data was rather noisy (I don't know why), so I plot the raw data in light colors and a filtered version in darker colors. Also, there seems to be a gain of x2 in the backplane voltages relative to both what the T240 manual tells me to expect and the values reported when I query the unit via the serial port.
• We should ideally just install another Acromag ADC in the c1susaux box and acquire these and other available diagnostic information, since the signals are available.
• We should also probably check the mass position indicator values in a few days to see if they've drifted off again.
• Looking at the raw time series / spectra of the BS channels, I see no obvious signatures of any change.
• I will run a test by locking the PRC and looking for coherence between the seismometer data and angular motion witnessed by the POP QPD, as this was what signalled my investigation in the first place.

Update 445pm: Seems to have done something good - the old feedforward filters reduce the YAW RMS motion by a factor of a few. Pitch performance is not so good, maybe the filter needs re-training, but I see coherence, see Attachment #2 for the frequency domain WF.

Attachment 1: T240_recenter.pdf
Attachment 2: ffPotential.pdf
12063   Tue Apr 5 11:42:17 2016 ericq, gautamUpdateendtable upgradeTABLE REMOVAL

There is currently no table at the X end!

We have moved the vast majority of the optics to a temporary storage breadboard, and moved the end table itself to the workbench at the end.

Steve says Transportation is coming at 1PM to put the new table in.

15693   Wed Dec 2 12:35:31 2020 PacoSummaryComputer Scripts / ProgramsTC200 python driver

Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here

*Warning* this first version of the driver remains untested
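As a rough sketch of what such a pyserial driver might look like (the default port, baud rate, and carriage-return termination here are illustrative assumptions, not taken from the actual driver or the instrument manuals):

```python
class SerialDriver:
    """Minimal sketch of a shared pyserial driver for simple
    command/response instruments like the MDT694B or TC200.

    The default port, baud rate, and CR termination are assumptions
    for illustration; check the instrument manual for the real values.
    """

    TERMINATOR = "\r"  # assumed command terminator

    def __init__(self, port="/dev/ttyUSB0", baudrate=115200, timeout=1.0):
        self.port = port
        self.baudrate = baudrate
        self.timeout = timeout
        self.ser = None

    def open(self):
        # pyserial imported lazily so the sketch is importable without it
        import serial
        self.ser = serial.Serial(self.port, baudrate=self.baudrate,
                                 timeout=self.timeout)

    def encode(self, cmd):
        """Bytes actually written to the wire for a command string."""
        return (cmd + self.TERMINATOR).encode("ascii")

    def query(self, cmd):
        """Send a command and return the (stripped) one-line reply."""
        self.ser.write(self.encode(cmd))
        return self.ser.readline().decode("ascii").strip()

    def close(self):
        if self.ser is not None:
            self.ser.close()
```

Since the two instruments share this command/response pattern, per-device drivers can subclass this and just add their own command names.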

15694   Wed Dec 2 15:27:06 2020 gautamSummaryComputer Scripts / ProgramsTC200 python driver

FYI, there is this. It seems pretty well maintained, and so might be more useful in the long run. The available catalog of instruments is quite impressive - the TC200 temp controller and SRS345 func gen are included and are things we use in the lab. Maybe you can make a pull request to add the MDT694B (there is some nice API already built, I think). We should also put our netgpibdata stuff and the vacuum gauge control (basically everything that isn't rtcds) on there (unless there are some intellectual property issues that the Caltech lawyers have to sort out).

 Quote: Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here.  *Warning* this first version of the driver remains untested
356   Tue Mar 4 19:14:09 2008 ranaConfigurationCDSTDS & SVN
Matt, Rob, Rana

Today we added the TDS software to the 40m SVN repo.

First we rationalized things by deleting all the old TDS directories and making the tds_mevans dir the main one (apps/linux/tds).

We also deleted all of the TDS directories in the project area. It is now very likely that several scripts will not work. We're going to have the teething problems of repointing everything to the nominal paths (in the apps areas).

Finally we did:
svn import tds https://40m.ligo.caltech.edu/svn/40m/tds --username rana

to stick it in. To check it out do:
svn checkout https://40m.ligo.caltech.edu/svn/40m/tds --username rana

We'll get a couple of the O'Reilly SVN books as well to supplement our version control knowledge.
Until then you can use the SVN cheat sheets available at:
http://www.digilife.be/quickreferences/quickrefs.htm
12770   Mon Jan 30 18:41:41 2017 jamieUpdateCDSTEST ABORTED of new daqd code on fb1

I just aborted the fb1 test and reverted everything to the nominal configuration.  Everything looks to be operating nominally.  Front ends are mostly green, except for c1rfm and c1asx, which are currently not being acquired by the DAQ, and an unknown IPC error with c1daf.  Please let me know if any unusual problems are encountered.

The behavior of daqd on fb1 with the latest release (3.2.1) was not improved.  After turning on the full pipe, it was back to crashing every 10 minutes or so when the full and second-trend frames were being written out.  Lame.  Back to the drawing board...

13628   Fri Feb 9 13:37:44 2018 gautamUpdateALSTHD measurement trial

I quickly put together some code that calculates the THD from CDS data and generates a plot (see e.g. Attachment #1).

Algorithm is:

1. Grab a time series of the channel of interest from CDS.
2. Compute the power spectrum using scipy.signal.periodogram. I use a Kaiser window with beta=38 based on some cursory googling, do 10 averages (i.e. nfft is total length / 10), and set the scaling to "spectrum" so as to directly get a power spectrum as opposed to a spectral density.
3. Find the fundamental (assumed to be the highest peak) and N harmonics using scipy.signal.find_peaks_cwt. I downsample 16k data to 2k for speed; a 120 second time series takes ~5 seconds.
4. Compute the THD as $\mathrm{THD} = \frac{\sqrt{\sum_{i=2}^{N}V_i^2}}{V_1}$, where $V_i$ denotes an rms voltage (for a "spectrum"-scaled power spectrum, $V_i^2$ is just the y-axis value).
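The steps above can be sketched as follows. This is not the actual analysis code: the harmonics are read off at integer multiples of the fundamental bin rather than with find_peaks_cwt, there is no averaging, and the input is a synthetic sine with a 1% second harmonic just to sanity-check the formula:

```python
import numpy as np
from scipy import signal

def thd(x, fs, n_harmonics=5):
    """Estimate THD from a time series via a Kaiser-windowed periodogram
    (beta=38 as in the text, 'spectrum' scaling so bins are V_rms^2)."""
    f, pxx = signal.periodogram(x, fs=fs, window=("kaiser", 38),
                                scaling="spectrum")
    i0 = np.argmax(pxx)          # fundamental = highest peak
    harm_power = 0.0
    for n in range(2, n_harmonics + 1):
        # nearest bin to the n-th harmonic of the fundamental frequency
        i = np.argmin(np.abs(f - n * f[i0]))
        harm_power += pxx[i]
    # sqrt(sum V_i^2) / V_1, with V^2 read off the power spectrum
    return np.sqrt(harm_power / pxx[i0])

# Sanity check: 100 Hz sine plus 1% second harmonic -> THD of about 0.01
fs = 16384
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.01 * np.sin(2 * np.pi * 200 * t)
print(f"THD = {thd(x, fs):.4f}")
```

The ratio of peak bin heights is insensitive to the window's scalloping, since the fundamental and its harmonics are spread identically, so the synthetic case recovers the injected 1% distortion.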

I conducted a trial on the Y arm ALS channel whitening board (while the X arm counterpart is still undergoing surgery). With the whitening gain set to 0dB, and a 1Vpp input signal (so nothing should be saturated), I measure a THD of ~0.08% according to the above formula. Seems rather high - the LT1125 datasheet tells us to expect <0.001% THD+N at ~100Hz for a closed loop gain of ~10. I can only assume that the digitization process somehow introduces more THD? Of course the FoM we care about is what happens to this number as we increase the gain.

 Quote: I'm going to work on putting together some code that gives me a quick readback on the measured THD, and then do the test for real with different amplitude input signal and whitening gain settings.

Attachment 1: THD_trial.pdf
3243   Mon Jul 19 13:51:09 2010 kiwamuUpdateCDSTiming card at X end

[Joe, Kiwamu]

The timing slave in the IO chassis at the new X end was not working, with symptoms of no front "OK" green light, no "PPS" light, a dead 3.3 V testpoint, and the ERROR testpoint bouncing between 5 and 6 V.

We took the timing slave out of the X end IO chassis and put it into the new Y end IO chassis.

It worked perfectly there. We then took the working one from the Y end and put it in the X end IO chassis.

We slowly added cables. First we added power; it worked fine and we saw the green "OK" light. Then we added the 1PPS signal via fiber, and it also worked.

We turned everything off and then added the 40-pin IPC cable from the chassis and the Infiniband cable from the computer.

When we turned it on, we didn't see the green light.

This suggests something in the computer configuration might be wrong, rather than the timing card; we are now trying to make contact with Alex.

We are comparing the setup of the C1SCX  machine and the working C1ISCEX machine.

1796   Mon Jul 27 14:12:14 2009 ranaSummarySUSTM Coil Noise Spectra
Rob noticed that the ITMY DAC channels were saturating occasionally. Looking at the spectrum we can see why. With an RMS of 10000 cts, the peak excursions sometimes cause saturations.

There's a lot of mechanical noise showing up on the ITM oplevs and then going to the mirror via the oplev servo. We need to reduce the mechanical noise and/or modify the filters to compensate. The ITM COIL_OUT RMS needs to be less than ~3000.
Attachment 1: Coils.pdf
750   Mon Jul 28 17:58:05 2008 SharonUpdate TOP screen changes
I wanted to test the adaptive code with a downsampling rate of 32 instead of 16. To do this I entered a 32 Hz ((2048/32)/2, to match the downsampled Nyquist frequency) low-pass filter on the ERROR EMPH, MC1, and the relevant PEM channels.
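The corner-frequency arithmetic above can be sketched like this (scipy's decimate stands in for the front-end filter chain, purely as an illustration):

```python
import numpy as np
from scipy import signal

fs = 2048              # model rate, Hz
down = 32              # new downsampling factor
f_nyq = fs / down / 2  # (2048/32)/2 = 32 Hz -> low-pass corner

# decimate applies an anti-alias low-pass before keeping every 32nd
# sample; a 10 Hz line is below the new Nyquist and survives.
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 10 * t)
x_ds = signal.decimate(x, down, ftype="fir")

print(f_nyq)                       # 32.0
print(len(x), "->", len(x_ds))
```

Without the 32 Hz low-pass, any content between 32 Hz and the old Nyquist would alias into the downsampled band.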
16643   Thu Feb 3 10:25:59 2022 JordanUpdateVACTP1 and Manual Gate Valve Install

Jordan, Chub

Chub and I installed the new manual gate valve (Nor-Cal GVM-6002-CF-K79) and reinstalled TP1. The new gate valve was placed with the sealing side towards the main 40m volume, then TP1 was installed on top and the foreline reattached to TP1.

This valve has a hard stop in the actuator to prevent over torquing.

Attachment 1: 20220203_101455.jpg
Attachment 2: 20220203_094831.jpg
Attachment 3: 20220203_094823.jpg
16634   Mon Jan 31 10:39:19 2022 JordanUpdateVACTP1 and Manual Gate Valve Removal

Jordan, Chub

Today, Chub and I removed TP1 and the failed manual gate valve off of the pumping spool.

First, P2 needed to be vented in order to remove TP1. TP1 has a purge valve on the side of the pump, which we slowly opened, bringing the P2 volume up to atmosphere. Although this volume was not vented with dry air/N2, using the purge valve eliminated the need to vent the RGA volume.

Then we disconnected the TP1 foreline, removed TP1 plus the 8" flange reducer, then the gate valve. All of the removed hardware looked good, so there is no need to replace bolts/nuts; only new gaskets are needed. TP1 and the failed valve are sitting on a cart, wrapped in foil, next to the pumping station.

Attachment 1: 20220131_100637.jpg
Attachment 2: 20220131_102807.jpg
Attachment 3: 20220131_102818.jpg
Attachment 4: 20220131_100647.jpg
11593   Mon Sep 14 10:41:03 2015 SteveUpdateVACTP2 dry pump replaced

TP2 dry fore pump sn PLE10082 was replaced at pressure 717 mTorr,  TP2 50K rpm 0.33A @ 112,677 hrs

Top seal life was 8,160 hrs

Model SH110, Sn LP1007L556 was installed. Its foreline pressure after 30 minutes of running was 38 mTorr, with the TP2 turbo at 50K rpm, 0.18A

Attachment 1: TP2_dry_pump_replaced.png
15602   Wed Sep 23 15:06:54 2020 JordanUpdateVACTP2 Forepump Re-install

I removed the forepump to TP2 this morning after the vacuum failure, and tested in the C&B lab. I pumped down on a small volume 10 times, with no issue. The ultimate pressure was ~30 mtorr.

I re-installed the forepump in the afternoon, and restarted TP2, leaving V4 closed. This will run overnight to test, while TP3 backs TP1.

In order to open V1, with TP3 backing TP1, the interlock system had to be reset since it is expecting TP2 as a backing pump. TP2 is running normally, and pumping of the main volume has resumed.

gautam 2030:

1. The monitor (LCD display) at the vacuum rack doesn't work - this has been the case since Monday at least. I usually use my laptop to ssh in so I didn't notice it so it could have been busted from before. But for anyone wishing to use the workstation arrangement at 1X8, this is not great. Today, we borrowed the vertex laptop to ssh in, the vertex laptop has since been returned to its nominal location.
2. The modification to the interlock condition was made by simply commenting out the line requiring V4 to be open for V1 to be opened. I made a copy of the original .yaml file which we can revert to once we go back to the normal config.
3. I also opened VM1 to allow the RGA scans to continue to be meaningful.
4. At the time of writing, all systems seem nominal. See Attachment #2. The vertical line indicates when we started pumping on the main volume again earlier today, with TP3 backing TP1.

It's unclear why the TP2 foreline pump failed in the first place; it has been running fine for several hours now (although TP2 has no load, since V4 isolates it from the main volume). Koji's plots show that the TP2 foreline pressure did not recover even after the interlock tripped and V4 was closed (i.e. the same conditions TP2 sees right now).

Attachment 1: Screenshot_from_2020-09-23_15-15-43.png
Attachment 2: MainVolPumpDown.png
15409   Thu Jun 18 15:25:08 2020 JordanUpdateVACTP2 and TP3 Forepump removal

I removed the backing pumps for TP2 and TP3 today to test ultimate pressure and determine if they need a tip seal replacement. This was done with Jon backing me on Zoom. We closed off TP3 and powered down TP3 and the auxiliary pump, in order to remove the forepumps from the exhaust line.

1. Close V1
2. Close V5
3. Turn off TP3
4. Turn off aux dry pump (manually)
5. Once the PTP3 foreline pressure has come up to atmosphere, you can disconnect the TP3 dry pump and cap the exhaust line with a KF blank.
6. Restore the vac configuration in reverse order: dry pump ON, TP3 ON, open V5, open V1

Once pumps were removed I connected a Pirani gauge to the pump directly and pumped down, results as follows:

TP2 Forepump (Agilent IDP 7):

• Ultimate Pressure: 123 mtorr
• Hours: 10903

TP3 Forepump (Varian SH 110):

• Ultimate pressure: ~70 torr
• Hours: 60300

The TP3 forepump definitely needs a new tip seal. While the pressure on the TP2 forepump was good, a significant amount of particulate came out of the exhaust line, so a new tip seal may not be strictly needed but is recommended.

15411   Thu Jun 18 16:56:34 2020 JordanUpdateVACTP2 and TP3 Forepump removal
 Quote: I removed the backing pumps for TP2 and TP3 today to test ultimate pressure and determine if they need a tip seal replacement. This was done with Jon backing me on Zoom. We closed off TP3 and powered down TP3 and the auxilliary pump, in order to remove the forepumps from the exhaust line. Close V1 Close V5 Turn off TP3 Turn off aux dry pump (manually) Once the PTP3 foreline pressure has come up to atmosphere, you can disconnect the TP3 dry pump and cap the exhaust line with a KF blank. Restore the vac configuration in reverse order: dry pump ON, TP3 ON, open V5, open V1 Once pumps were removed I connected a Pirani gauge to the pump directly and pumped down, results as follows: TP2 Forepump (Agilent IDP 7): Ultimate Pressure: 123 mtorr Hours: 10903 TP3 Forepump (Varian SH 110): Ultimate pressure: ~70 torr Hours: 60300 TP3 forepump defintely needs a new tip seal, and while the pressure on TP2 Forepump was good there was a significant amount of particulate that came out of the exhaust line, so a new tip seal might not be needed but it is recommeded.

I agree with your assessment, Jordan.  If I'm not mistaken, the scroll pump for TP2 is new; we had a very early failure of the tip seals on the last new scroll pump (the forepump for TP3) at just over 5000 hours.  Glad to see my replacement seals held up for over 60K hours. If this is the trend with these pumps, we can simply run them to around 60,000 hours and replace the seals at that time, rather than waiting for failure! - Chub

8978   Wed Aug 7 15:36:29 2013 SteveUpdateVACTP2 drypump replaced

TP2's foreline dry pump was replaced at a performance level of 600 mTorr, after 10,377 hrs of continuous operation.

Where are the foreline pressure gauges? These values are not on the vac.medm screen.

The new tip-seal dry pump lowered the small turbo's foreline pressure by 10x.

TP2fl after 2 days of pumping: 65 mTorr

Attachment 1: forelinesPs.jpg
9876   Tue Apr 29 16:42:29 2014 SteveUpdateVACTP2 drypump replaced

 Quote: TP2's fore line - dry pump replaced at performance level 600 mTorr after 10,377 hrs of continuous operation. Where are the foreline pressure gauges? These values are not on the vac.medm screen. The new tip seal dry pump lowered the small turbo foreline pressure 10x TP2fl after 2 day of pumping 65mTorr

TP2 dry pump replaced at fore pump pressure 1 Torr,  TP2 50K_rpm 0.34A

Top seal life 6,362 hrs

New seal performance at 1 hr  36 mTorr,

Maglev at 560 Hz, cc1 6e-6 Torr

Attachment 1: dryforepumpreplaced.png
13407   Mon Oct 30 10:09:41 2017 SteveUpdateVACTP2 failed

IFO pressure 1.2e-5 Torr at 9:30am

 Quote: Valve configuration: Vacuum normal Note: Tp2 running at 75Krpm 0.25A 26C has a  load high pitch sound today. It's fore line pressure 78 mTorr. Room temp 20C

Atm. 1,  This was the vacuum condition  this morning.

IFO P1 9.7 mTorr, V1 open, V4 in the closed position; the Maglev ~37 C warm at its normal 560 Hz rotation speed, with foreline pressure 3.9 Torr because V4 closed 2 days ago when TP2 failed ..... see Atm. 3

The error message at the TP2 controller was: fault overtemp.

I did the following to restore IFO pumping: stopped pumping the annuli with TP3, and configured the valves so TP3 could be the forepump of the Maglev.

closed VM1 to protect the RGA, closed the PSL shutter ..... see Gautam's entry

aux fan on to cool down the Maglev (TP1), room temp 20 C,

aux drypump turned on and opened to the TP3 foreline to gain pumping speed,

closed PAN to isolate annulus pumping,

opened V7 to pump the Maglev foreline with TP3 running at 50 Krpm; it took 10 minutes to reach P2 1 mTorr from 3.9 Torr,

aux drypump closed off at P2 1 mTorr, TP3 foreline pressure 362 mTorr ....... see Atm. 2

As we are running now:

IFO pressure 7e-6 Torr on the Hornet cold cathode gauge at 15:50.  We have no IFO CC1 logging now.  The annuli are in the 3-5 mTorr range and are not being pumped.

TP3 as the foreline pump of TP1, at 50 Krpm, 0.24 A, 24 C; its drypump foreline pressure is 324 mTorr

V4 valve cable is disconnected.

I need help with wiring up the logging of the Hornet cold cathode gauge.

Attachment 1: tp2failed.png
Attachment 2: ifo_1.0E-5_Torrit.png
Attachment 3: tp2failed2dago.png
Attachment 4: 4days.png
14512   Wed Apr 3 10:42:36 2019 gautamUpdateVACTP2 forepump replaced

Bob and Chub concluded that the drypump that serves as TP2's forepump had failed. Steve had told me the whereabouts of a spare Agilent IDP-7. This was meant to be a replacement for the TP3 foreline pump when it failed, but we decided to swap it in while diagnosing the failed drypump (which had 2182 hours continuous running according to the hour counter). Sure enough, the spare pump spun up and the TP2fl pressure dropped at a rate consistent with what is expected. I was then able to spin up TP1, TP2 and TP3.

However, when opening V4 (the foreline of TP1 pumped by TP2), I heard a loud repeated click track (~5Hz) from the electronics rack. Shortly after, the interlocks shut down all the TPs again, citing "AC power loss". Something is not right, I leave it to Jon and Chub to investigate.

14514   Wed Apr 3 16:17:17 2019 JonUpdateVACTP2 forepump replaced

I can't explain the mechanical switching sound Gautam reported. The relay controlling power to the TP2 forepump is housed in the main AC relay box under the arm tube, not in the Acromag chassis, so it can't be from that. I've cycled through the pumpdown sequence several times and can't reproduce the effect. The Acromag switches for TP2 still work fine.

In any case, I've made modifications to the vacuum interlocks that will help with two of the issues:

1. For the "AC power loss" over-triggering: New logic added requiring the UPS to be out of the "on line power, battery OK" state for ~5 seconds before tripping the interlock. This will prevent electrical transients from triggering an emergency shutdown, as seems to be the case here (the UPS briefly isolates the load to battery during such events).
2. PSL interlocking: New logic added which directly sets C1:AUX-PSL_ShutterRqst --> 0 (closes the PSL shutter) when the main volume pressure is between 3 mtorr and 500 torr. Previously there was a channel exposed for this interlock (C1:Vac-interlock_high_voltage), but c1aux was not actually monitoring it. Following the convention of every vac interlock, after the PSL shutter has been closed, it has to be manually reopened. Once the pressure is out of this range, the vac system will stop blocking the shutter from reopening, but it will not perform the reopen action itself. gautam: a separate interlock logic needs to be implemented on c1aux (the shutter machine) that only permits the shutter to be opened if the vac pressure range is okay. The SUS watchdog style AND logic in the EPICS database file should work just fine.
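The hold-off logic in item 1 can be sketched like so (the class and state names are illustrative of the logic described, not the actual interlock code; the ~5 s value is from the text):

```python
import time

UPS_OK = "on line power, battery OK"   # nominal UPS readback
TRIP_HOLDOFF_S = 5.0                   # ~5 s out-of-nominal before tripping

class UPSTripDebounce:
    """Trip only after the UPS has been out of its nominal state for
    TRIP_HOLDOFF_S seconds, so brief transients (e.g. the UPS isolating
    the load to battery for a moment) do not trigger a shutdown."""

    def __init__(self, now=time.monotonic):
        self._now = now          # injectable clock, for testing
        self._bad_since = None   # time the UPS first left the OK state

    def update(self, ups_state):
        """Call on each poll of the UPS; returns True when we should trip."""
        if ups_state == UPS_OK:
            self._bad_since = None   # healthy again: reset the timer
            return False
        if self._bad_since is None:
            self._bad_since = self._now()
        return (self._now() - self._bad_since) >= TRIP_HOLDOFF_S
```

A one-second glitch resets the timer once the UPS returns to nominal, so only a sustained power loss reaches the emergency-shutdown path.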

After finishing this vac work, I began a new pumpdown at ~4:30pm. The pressure fell quickly and has already reached ~1e-5 torr. TP2 current and temp look fine.

 Quote: However, when opening V4 (the foreline of TP1 pumped by TP2), I heard a loud repeated click track (~5Hz) from the electronics rack. Shortly after, the interlocks shut down all the TPs again, citing "AC power loss". Something is not right, I leave it to Jon and Chub to investigate.
Attachment 1: IMG_3180.jpg
15599   Wed Sep 23 08:57:18 2020 gautamUpdateVACTP2 running HOT

The interlocks tripped at ~630am local time. Jordan reported that TP2 was supposedly running at 52 C (!).

V1 was already closed, but TP2 was still running. With him standing by the rack, I remotely executed the following sequence:

• VM1 closed (isolates RGA volume).
• VA6 closed (isolates annuli from being pumped).
• V7 opened (TP3 now backs TP1, temporarily, until I'm in the lab to check things out further).
• TP2 turned off.

Jordan confirmed (by hand) that TP2 was indeed hot and this is not just some serial readback issue. I'll do the forensics later.

Attachment 1: Screen_Shot_2020-09-23_at_8.55.39_AM.png
15600   Wed Sep 23 10:06:52 2020 KojiUpdateVACTP2 running HOT

Here is the timeline. This suggests TP2 backing RP failure.

1st line: TP2 foreline pressure went up. Accordingly TP2 P, current, voltage, and temp went up. TP2 rotation went down.

2nd line: TP2 temp triggered the interlock. TP2 foreline pressure was still high (10torr) so TP2 struggled and was running at 1 torr.

3rd line: Gautam's operation. TP2 was isolated and stopped.

Between the 1st and 2nd lines, the TP2 pressure (= TP1 foreline pressure) went up to 1 torr. This made the TP1 current increase from 0.55 A to 0.68 A (not shown in the plot), but the TP1 rotation was not affected.

Attachment 1: Screen_Shot_2020-09-23_at_10.00.43.png
5887   Mon Nov 14 15:46:14 2011 steveUpdateVACTP2's forepump changed

The foreline pressure of TP2 was 1.4 Torr this morning. This drypump worked well for ten months.

A recently rebuilt drypump with a new seal was swapped in.

This is how you do it: close V1 and V4 and turn off TP2. Replace the drypump and start up TP2.

Set the pump speed to 50 K rpm and open V4 to TP1. Note that the Maglev was not turned off, because V4 was closed off for only 5-10 minutes.

Open V1; the status is Vac Normal.

TP2 is rotating at 50K rpm with a current pickup of 0.2 A; the temp is 26 C and its foreline pressure is 33 mTorr

Attachment 1: drypump.png
4654   Fri May 6 15:59:40 2011 steveUpdateVACTP3 fore pump replaced

The dry fore pump of TP3 was replaced by a brand new Varian SH-110 at 1.1 Torr, 75,208 hrs.

The annuli were closed off for 25 minutes. We are back to VACUUM NORMAL mode.

The TP3 foreline pressure dropped to 44 mTorr at 25 minutes in operation ....... 9.4 mTorr at day 2 with full annulus load

15591   Mon Sep 21 15:57:08 2020 JordanUpdateVACTP3 Forepump Replacement and Vac reset

I removed the forepump (Varian SH-110) for TP3 today to see why it had failed over the weekend. I tested it in the C&B lab and the ultimate pressure was only ~40torr. I checked the tip seals and they were destroyed. The scroll housing also easily pulled off of the motor drive shaft, which is indicative of bad bearings. The excess travel in the bearings likely led to significant increase in tip seal wear. This pump will need to be scrapped, or rebuilt.

I tested the spare Varian SH-110 pump located at the X-end and the ultimate pressure was ~98 mtorr. This pump had tip seals replaced on 11/5/18, and is currently at 55163 operating hours. It has been installed as the TP3 forepump.

Once installed, restarting the pump line occurred as follows: V5 closed, VA6 closed, VASE closed, VASV closed, VABSSCI closed, VABS closed, VABSSCO closed, VAEV closed, VAEE closed; TP3 was restarted, and once it was at normal operation, the valves were opened in the same order.

The pressure differential interlock condition for V5 was temporarily changed to 10 torr (by Gautam), so that valves could be opened in a controlled manner. Once the vacuum system was back to its normal state, the V5 interlock condition was set back to the nominal 1 torr. The vacuum system is now running normally.

Attachment 1: Screenshot_from_2020-09-21_15-56-04.png
Attachment 2: 20200921_145043.jpg
Attachment 3: 20200921_145040.jpg
16806   Fri Apr 22 14:14:33 2022 JordanUpdateVACTP3 Forepump tip seal replacement

Jordan, JC

While the pumpspool is vented, I thought it would be a convenient time to change out the tip seal on the TP3 forepump. This one had not been changed since 2018, so as preventive maintenance I had JC remove the pump and begin cleaning/installing the new tip seal.

Unfortunately the tip seal broke, but I have ordered another. We should have this pump ready to go late next week. If one is needed sooner, there is a spare IDP 7 pump we can install as the TP3 forepump.

Attachment 1: IMG_0572.jpeg
Attachment 2: IMG_0573.jpeg
16822   Mon May 2 10:00:34 2022 JordanUpdateVACTP3 Forepump tip seal replacement

[JC, Jordan]

Jordan received the new tip seal Friday afternoon, and I continued the replacement process in the morning. Finishing up, we proceeded to test the pump in the Clean and Bake room. The pump's pressure lowered to 110 mTorr, and we continue pumping so the seal can get a good fitting.

Update: We have confirmed the pump is working well and have reinstalled it in the vacuum system. Note: the same O-rings were used.

 Quote: Jordan, JC While the pumpspool is vented, I thought it would be a convenient time to change out the tip seal on the TP3 forepump. This one had not been changed since 2018, so as preventative maintence I had JC remove the pump and begin cleaning/installing the new tip seal. Unfortunately the tip seal broke, but I have ordered another. We should have this pump ready to go late next week. If one is needed sooner, there is a spare IDP 7 pump we can install as the TP3 forepump.

15582   Sat Sep 19 18:07:35 2020 KojiUpdateVACTP3 RP failure

I came to the campus and Gautam notified that he just had received the alert from the vac watchdog.

I checked the vac status at c1vac. PTP3 went up to 10 torr-ish and this made the diff pressure for TP3 over 1torr. Then the watchdog kicked in.

To check the TP3 functionality, AUX RP was turned on and the manual valve (MV in the figure) was opened to pump the foreline of TP3. This easily made PTP3 <0.2 torr and TP3 happy (I didn't try to open V5 though).

So the conclusion is that the RP for TP3 has failed; presumably, the tip seal needs to be replaced.

Right now TP3 was turned off and is ready for the tip-seal replacement. V5 was closed since the watchdog tripped.

Attachment 1: vac.png
Attachment 2: Screen_Shot_2020-09-19_at_17.52.40.png