ID   Date   Author   Type   Category   Subject
  3528   Mon Sep 6 21:08:44 2010 ranaUpdateCDSSuspension model reviewed

We must remember that we are using the Rev.B SOS Coil Drivers and not the Rev. A. 

The main change from A->B was the addition of the extra path for the bias inputs. These inputs were previously handled by the slow EPICS system and not a part of the front end. So we used to have a separate bias screen for these than the bias which is in the front end. The slow bias is what was used for the alignment to avoid overloading the range of the main coil driver path.

  9982   Wed May 21 13:18:47 2014 ericqUpdateCDSSuspension MEDM Bug

I fixed a bug in the SUS_SINGLE screen, where the total YAW output was incorrectly displayed (TO_COIL_3_1 instead of TO_COIL_1_3). I noticed this by seeing that the yaw bias slider had no effect on the number that claimed to be the yaw sum. The first time I did this, I accidentally changed the screen size a bit, which smushed things together, but that's fixed now.

I committed it to the svn, along with some uncommitted changes to the oplev servo screen.

  14499   Thu Mar 28 23:29:00 2019 KojiUpdateSUSSuspension PD whitening and I/F boards modified for susaux replacement

Now the sus PD whitening boards are ready to have the backplane connectors moved to the lower row and the Acromag interface board plugged into the upper row.


Sus PD whitening boards on 1X5 rack (D000210-A1) had slow and fast channels mix in a single DIN96 connector. As we are going to use the rear-side backplane connector for Acromag access, we wanted to migrate the fast channel somewhere. For this purpose, the boards were modified to duplicate the fast signals to the lower DIN96 connector.

The modification was done on the back layer of the board (Attachment 1).
The 28A~32A and 28C~32C of P1 are connected to the corresponding pins of P2 (Attachment 2). The connections were thoroughly checked with a multimeter.

After the modification the boards were returned to the same place of the crate. The cables, which had been identified and noted before disconnection, were returned to the connectors.

The functionality of the 40 (8 sus * 5 ch) whitening switches was confirmed one by one using DTT, by looking at the transfer functions from SUS LSC EXC to the PD input filter IN1. All the switches showed the proper whitening in the measurements.

The PD slow mon (like C1:SUS-XXX_xxPDMon) channels were also checked and they returned to the values before the modification, except for the BS UL PD. As the fast version of the signal returned to the previous value, the monitor circuit was suspect. Therefore the opamp of the monitor channel (LT1125) was replaced, and the value came back to the previous value (Attachment 3).

 

Attachment 1: IMG_7474.JPG
Attachment 2: D000210_backplane.pdf
Attachment 3: Screenshot_from_2019-03-28_23-28-23.png
  2642   Fri Feb 26 01:00:07 2010 JenneUpdateCOCSuspension Progress

This is going to be a laundry list of the mile markers achieved so far:

* Guiderod and wire standoff glued to each ITMX and ITMY

* Magnets glued to dumbbells (4 sets done now).  ITMX has 244 +- 3 Gauss, ITMY has 255 +- 3 Gauss.  The 2 sets for SRM and PRM are 255 +- 3 G and 264 +- 3 G.  I don't know which set will go with which optic yet.

* Magnets glued to ITMX.  There were some complications removing the optic from the magnet gluing fixture.  The way the optic is left with the glue to dry overnight is with "pickle picker" type grippers holding the magnets to the optic.  After the epoxy had cured, Kiwamu and I took the grippers off, in preparation to remove the optic from the fixture.  The side magnet (thankfully the side where we won't have an OSEM) and dumbbell assembly snapped off.  Also, on the UL magnet, the magnet came off of the dumbbell (the dumbbell was still glued to the glass).  We left the optic in the fixture (to maintain the original alignment), and used one of the grippers to glue the magnet back to the UL dumbbell.  The gripper in the fixture has very little slop in where it places the magnet/dumbbell, so the magnet was reglued with very good axial alignment.  Since the side magnet and dumbbell broke apart after coming off the glass, we did not glue them back onto the optic.  They were reattached to each other, so that we can put the extra side magnet on in the future, but I don't think that will be necessary, since we already know which side the OSEM will be on.

* Magnets glued to ITMY.  This happened today, so it's drying overnight.  Hopefully the grippers won't be sticky and jerky like last time when we were removing them from the fixture, so hopefully we won't lose any magnets when I take the optic out of the fixture.

* ITMX has been placed in its suspension cage.  The first step, before getting out the wire, is to set the optic on the bottom EQ stops, and get the correct height and get the optic leveled, to make things easier once the wire is in place.  Koji and I did this step, and then we clamped all of the EQ stops in place to leave it for the night.

* The HeNe laser has been leveled, to a beam height of 5.5inches, in preparation for the final leveling of the optics, beginning tomorrow.  The QPD with the XY decoder is also in place at the 5.5 inch height for the op lev readout.  The game plan is to leave this set up for the entire time that we're hanging optics.  This is kind of a pain to set up, but now that it's there, it can stay out of the way huddled on the side of the flow bench table, ready for whenever we get the ETMs in, and the recoated PRM. 

* Koji and Steve got the ITMX OSEMs from in the vacuum, and they're ready for the hanging and balancing of the optic tomorrow.  Also, they got out the satellite box, and ran the crazy-long cable to control the OSEMs while they're on the flow bench in the clean room.

 

Koji and I discovered a problem with the small EQ stops, which will be used in all of the SOS suspensions for the bottom EQ stops.  They're too big.  :(  The original document (D970312-A-D) describing the size for these screws was drawn in 1997, and it calls for 4-40 screws.  The updated drawing, from 2000 (D970312-B-D) calls for 6-32 screws.  I naively trusted that updated meant updated, and ordered and prepared 6-32 screws for the bottom EQ stops for all of the SOSes.  Unfortunately, the suspension towers that we have are tapped for 4-40.  Thumbs down to that.  We have a bunch of vented 4-40 screws in the clean room cabinets, which I can drill, and have Bob rebake, so that Zach and Mott can make viton inserts for them, but that will be a future enhancement.  For tonight, Koji and I put in bare vented 4-40 screws from the clean room supply of pre-baked screws.  This is consistent with the optics in our chambers having bare screws for the bottom EQ stops, although it might be nicer to have cushy viton for emergencies when the wire might snap.  The real moral of this story is: don't trust the drawings.  They're good for guidelines, but I should have confirmed that everything fit and was the correct size.

  16597   Wed Jan 19 14:41:23 2022 KojiUpdateBHDSuspension Status

Is this the correct status? Please directly update this entry.


LO1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
LO2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]

AS4 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR3 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
SR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]


Last updated: Fri Jan 28 10:34:19 2022

  176   Thu Dec 6 19:19:47 2007 AndreyConfigurationSUSSuspension damping Gain was restored

Suspension damping gain was disabled for some reason (all the indicators in the rightmost part of the screen C1SUS_ETMX.adl were red); it is now restored.
  14725   Thu Jul 4 10:54:21 2019 KojiSummarySUSSuspension damping recovered, ITMX stuck

So Cal Earthquake. All suspension watchdogs tripped.

Tried to recover the OSEM damping. 

=> The watchdogs for all suspensions except for ITMX were restored. ITMX seems to be stuck. No further action by me for now.

  14471   Wed Feb 27 21:34:21 2019 gautamUpdateGeneralSuspension diagnosis

In my effort to understand what's going on with the suspensions, I've kicked all the suspensions and shutdown the watchdogs at 1235366912. PSL shutter is closed to avoid trying to lock to the swinging cavity. The primary aims are

  1. To see how much the resonant peaks have shifted w.r.t. the database, if at all - I claim that the ETMY resonances have shifted by a large amount, and that one of the resonant peaks has also been lost.
  2. To check the status of the existing diagonalization.

All the tests I have done so far (looking at free swinging data, resonant frequencies in the Oplev error signals etc) seem to suggest that the problem is mechanical rather than electrical. I'll do a quick check of the OSEM PD whitening unit in 1Y4 to be sure. But the fact that the same three peaks appear in the OSEM and Oplev spectra suggests to me that the problem is not electrical.

Watchdogs restored at 10 AM PST

  844   Mon Aug 18 08:07:10 2008 YoichiConfigurationSUSSuspension free swinging
I've started a free swinging measurement of OSEM spectra now. Please leave the watchdogs untouched.
  15610   Sun Oct 4 15:32:21 2020 gautamUpdateSUSSuspension health check

Summary:

After the earthquake on September 19 2020, it looks to me like the only lasting damage to suspensions in vacuum is the ETMY UR magnet being knocked off. 

Suspension ringdown tests:

I did the usual suspension kicking/ringdown test:

  • One difference is that I now kick the suspension "N" times where N is the number of PSD averages desired. 
  • After kicking the suspension, it is allowed to ring down with the damping disabled, for ~1100 seconds so that we can get spectra with 1mHz resolution.
  • We may want to get more e-folding times in, but since the Qs of the modes are a few hundred, I figured this is long enough.
  • I think this kind of approach gives better SNR than letting it ring down for 10,000 seconds (for 10 averages with 10 non-overlapping segments of 1000 seconds), and I wanted to test this scheme out; it seems to work well (a minimal sketch of the spectra computation follows this list).
  • Attachment #1 shows a summary of the results.
  • Attachment #2 has more plots (e.g. transfer function from UL to all other coils), in case anyone is interested in more forensics. The data files are large but if anyone is interested in the times that the suspension was kicked, you can extract it from here.
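As a rough illustration of the averaging scheme described in the list above, here is a minimal Python sketch (the sampling rate, number of kicks, and placeholder data are assumptions for illustration, not the actual analysis code used for Attachment #1):

import numpy as np
import scipy.signal as sig

fs = 2048          # assumed OSEM sensor sampling rate [Hz]
T_seg = 1100       # ringdown duration per kick [s] -> ~1 mHz resolution
n_kicks = 10       # number of kicks = number of PSD averages

# segments[i] is the free-swinging sensor time series after the i-th kick
# (placeholder data here; in practice this would be fetched from frames/NDS)
segments = [np.random.randn(int(fs * T_seg)) for _ in range(n_kicks)]

# One periodogram per kick, then average the power across kicks
psds = []
for seg in segments:
    f, p = sig.welch(seg, fs=fs, window='hann', nperseg=len(seg))
    psds.append(p)
asd = np.sqrt(np.mean(psds, axis=0))   # averaged amplitude spectral density
print('frequency resolution ~ %.4f Hz' % (f[1] - f[0]))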

Conclusions:

  1. My cursory scans of the analysis don't throw up any red flags (apart from the known problem of ETMY UR being dislodged) 👌 
  2. The PRM data is weird 
    • I believe this is because the DC bias voltage to the coils was significantly off from what it normally is when the PRC is aligned.
    • In any case, I am able to lock the PRC, so I think the PRM magnets are fine.
  3. The PRC angular FF initially appeared to no longer work; it turns out this was just a weird interaction with the Oplev loop because the beam was significantly off-centered on the Oplev QPD. Better alignment fixed it, and the FF works as it did before.
    • With the PRC locked and the carrier resonant (no ETMs), the old feedforward filters significantly degrade the angular stability to the point that the lock is lost.
    • My best hypothesis is that the earthquake caused a spot shift on PR2/PR3, which changed the TF from seismometer signal to PRC spot motion.
    • Anyways, we can retrain the filter.
    • The fact that the PRC can be locked suggests PR2/PR3 are still suspended and okay.
  4. The SRM data is also questionable, because the DC bias voltage wasn't set to the values for an aligned SRC when the data was collected
    • Nevertheless, the time series shows a clean ringdown, so at least all 5 OSEMs are seeing a signal.
    • The fact that the beam comes out at the AS port suggests the SR3/SR2 suspensions are fine 👍

Attachment #2 also includes info about the matrix diagonalization, and the condition numbers of the resulting matrices are as large as ~30 for some suspensions, but I think this isn't a new feature. 

Attachment 1: combined.pdf
Attachment 2: allPlots.zip
  15506   Thu Jul 30 16:16:43 2020 gautamUpdateSUSSuspension recovery

This earthquake and friends had tripped all watchdogs. I used the scripted watchdog re-enabler, and released the stuck ITMX (this operation still requires a human and hasn't been scripted yet). IMC is locked again and all Oplevs report healthy optic alignment.

  82   Thu Nov 8 00:55:44 2007 pkpUpdateOMCSuspension tests
[Sam , Pinkesh]

We tried to measure the transfer functions of the 6 degrees of freedom of the OMC SUS. To our chagrin, we found that it was very hard to get the OSEMs to center and get a mean value of around 6000 counts. Somehow the left and top OSEMs were coupled, and we tried to see if any of the OSEMs/suspension parts were touching each other. But there is still a significant coupling between the various OSEMs. In theory, the only OSEMs that are supposed to couple are [SIDE], [LEFT, RIGHT], [TOP1, TOP2, TOP3], since the motion along these 3 sets is orthogonal to the other sets. Thus an excitation along any one OSEM in a set should only couple with another OSEM in the same set and not with the others. The graphs below were obtained by driving all the OSEMs one by one at 7 Hz and at 500 counts (I still have to figure out how much that is in units of length). These graphs show that there is some sort of contact somewhere. I can't locate any physical contact at this point; TOP2 was suspicious and we moved it a bit, but it seems to be hanging free now. This could also be caused by the stiff wire with the PEEK on it; this wire is very stiff and it can transmit motion from one degree of freedom to another quite easily.

I also have a graph showing the transfer function of the longitudinal degree of freedom. I decided to do this first because it was simple and I only had to deal with SIDE, which seems to be decoupled from the other DOFs. This graph is similar to the one Norna has for the longitudinal DOF transfer function, with the addition of a peak around 1.8 Hz. This I reckon could very well be due to the wire, although it is hard to claim for certain. I am going to stop the measurement at this time, start a fresh high resolution spectrum, and leave it running overnight.

There is an extra peak in the high res spectrum that is disturbing.
Attachment 1: shakeleft.pdf
Attachment 2: shakeright.pdf
Attachment 3: shakeside.pdf
Attachment 4: shaketop1.pdf
Attachment 5: shaketop2.pdf
Attachment 6: shaketop3.pdf
Attachment 7: LongTransfer.pdf
Attachment 8: Shakeleft7Nov2007_2.pdf
Attachment 9: Shakeleft7Nov2007_2.png
  12648   Wed Nov 30 01:47:56 2016 gautamUpdateLSCSuspension woes

Short summary:

  • Looks like Satellite boxes are not to blame for glitchy behaviour of shadow sensor PD readouts
  • Problem may lie at the PD whitening boards (D000210) or with the Contec binary output cards in c1sus
  • Today evening, similar glitchy behaviour was observed in all MC1 PD readout channels, leading to frequent IMC unlocking. Cause unknown, although I did work at 1X5, 1X6 today, and pulled out the PD whitening board for ITMY which sits in the same eurocrate as that for MC1. MC2/MC3 do not show any glitches.

Detailed story below...


Part 1: Satellite box swap

Yesterday, I switched the ITMY and ETMY satellite boxes, to see if the problems we have been seeing with ITMY UL move with the box to ETMY. It did not, while ITMY UL remained glitchy (based on data from approximately 10pm PDT on 28Nov - 10am PDT 29 Nov). Along with the tabletop diagnosis I did with the tester box, I concluded that the satellite box is not to blame.


Part 2: Tracing the signal chain (actually this was part 3 chronologically but this is how it should have been done...)

So if the problem isn't with the OSEMs themselves or the satellite box, what is wrong? I attempted to trace the signal chain from the satellite box into our CDS system as best as I could. The suspension wiring diagram on our wiki page is (I think) a past incarnation. Of course putting together a new diagram was a monumental task I wasn't prepared to undertake tonight, but in the long run this may be helpful. I will put up a diagram of the part I did trace out tomorrow, but the relevant links for this discussion are as follows (? indicates I am unsure):

  1. Sat box (?)--> D010069 via 64pin IDE connector --> D000210 via DB15 --> D990147 via 4pin LEMO connectors --> D080281 via DB25 --> ADC0 of c1sus
  2. D000210 backplane --> cross-connect (mis)labelled "ITMX white" via IDE connector
  3. c1sus CONTEC DO-32L-PE --> D080478 via DB37 --> BO0-1 --> cross-connect labelled "XY220 1Y4-33-16A" via IDE --> (?)  cross-connect (mis)labelled "ITMX white" via IDE connector

I have linked to the DCC page for the various parts where available. Unfortunately I can't locate (on new DCC or old or elog or wiki) drawings for D010069 (Satellite Amplifier Adapter Board), D080281 ("anti-aliasing interface") or D080478 (which is the binary output breakout box). I have emailed Ben Abbott who may have access to some other archive - the diagrams would be useful as it is looking likely that the problem may lie with the binary output.

So presumably the first piece of electronics after the Satellite box is the PD whitening board. After placing tags on the 3 LEMOs and 1 DB15 cable plugged into this board, I pulled out the ITMY board to do some tabletop diagnosis in the afternoon around 2pm 29Nov.


Part 3: PD whitening board debugging

This particular board has been reported as problematic in the recent past. I started by inserting a tester board into the slot occupied by this board - the LEDs on the tester board suggested that power-supply from the backplane connectors were alright, confirmed with a DMM.

Looking at the board itself, C4 and C6 are tantalum capacitors, and I have faced problems with this type of capacitor in the past. In fact, on the corresponding MC3 board (which is the only one visible, I didn't want to pull out boards unnecessarily) these have been replaced with electrolytic capacitors, which are presumably more reliable. In any case, these capacitors do not seem to be at any fault, the board receives +/-15 V as advertised.

The whitening switching is handled by the MAX333 - this is what I looked at next. This IC is essentially a quad SPDT switch, and a binary input supplied via the backplane connector serves to route the PD input either through a whitening filter, or bypass it via a unity gain buffer. The logic levels that effect the switching are +15V and 0V (and not the conventional 5V and 0V), but according to the MAX333 datasheet, this is fine. I looked at the supply voltage to all ICs on the board, DC levels seemed fine (as measured with a DMM) and I also looked at it on an oscilloscope, no glitches were seen in ~30sec viewing stretch. I did notice something peculiar in that with no input supplied to the MAX333 IC (i.e. the logic level should be 15V), the NO and NC terminals appear shorted when checked with a DMM. Zach has noticed something similar in the past, but Koji pointed out that the DMM can be fooled into thinking there is a short. Anyway, the real test was to pull the logic input of the MAX333 to 0, and look at the output, this is what I did next.

The schematic says the whitening filter has poles at 30,100Hz and a zero at 3 Hz. So I supplied as "PD input" a 12Hz 1Vpp sinewave - there should be a gain of ~x4 when this signal passes through the path with the whitening filter. I then applied a low frequency (0.1Hz) square wave (0-5V) to the "bypass" input, and looked at the output, and indeed saw the signal amplitude change by ~4x when the input to the switch was pulled low. This behaviour was confirmed on all five channels, there was no problem. I took transfer functions for all 5 channels (both at the "monitor" point on the backplane connector and on the front panel LEMOs), and they came out as expected (plot to be uploaded soon).

Next, I took the board back to the eurocrate. I first put in a tester box into the slot and measured the voltage levels on the backplane pins that are meant to trigger bypassing of the whitening stage, all the pins were at 0V. I am not sure if this is what is expected, I will have to look inside D080478 as there is no drawing for it. Note that these levels are set using a Contec binary output card. Then I attached the PD whitening board to the tester board, and measured the voltages at the "Input" pins of all the 5 SPDT switches used under 2 conditions - with the appropriate bit sent out via the Contec card set to 0 or 1 (using the button on the suspension MEDM screens). I confirmed using the BIO medm screen that the bit is indeed changing on the software side, but until I look at D080478, I am not sure how to verify the right voltage is being sent out, except to check at the pins on the MAX333. For this test, the UL channel was indeed anomalous - while the other 4 channels yielded 0V (whitening ON, bit=1) and 15V (whitening OFF, bit=0), the corresponding values for the UL channel were 12V and 10V.

I didn't really get any further than this tonight. But this still leaves unanswered questions - if the measured values are faithful, then the UL channel always bypasses the whitening stage. Can this explain the glitchy behaviour?


Part 4: MC1 troubles

At approximately 8pm, the IMC started losing lock far too often - see the attached StripTool trace. There was a good ~2hour stretch before that when I realigned the IMC, and it held lock, but something changed abruptly around 8pm. Looking at the IMC mirror OSEM PD signals, all 5 MC1 channels are glitching frequently. Indeed, almost every IMC lockloss in the attached StripTool is because of the MC1 PD readouts glitching, and subsequently, the damping loops applying a macroscopic drive to the optic which the FSS can't keep up with. Why has this surfaced now? The IMC satellite boxes were not touched anytime recently as far as I am aware. The MC1 PD whitening board sits in the same eurocrate I pulled the ITMY board out of, but squishing cables/pushing board in did not do anything to alleviate the situation. Moreover, MC2 and MC3 look fine, even though their PD whitening boards also sit in the same eurocrate. Because I was out of ideas, I (soft) restarted c1sus and all the models (the thinking being if something was wrong with the Contec boards, a restart may fix it), but there was no improvement. The last longish lock stretch was with the MC1 watchdog turned off, but as soon as I turned it back on the IMC lost lock shortly after.

I am leaving the autolocker off for the night, hopefully there is an easy fix for all of this...

Attachment 1: IMCwoes.png
  10345   Thu Aug 7 12:34:56 2014 JenneUpdateLSCSuspensions not kicking?

Yesterday, Q helped me look at the DACs for some of the suspensions, since Gabriele pointed out that the DACs may have trouble with zero crossings.  

First, I looked at the oplevs of all the test masses with the oplev servos off, as well as the coil drive outputs from the suspension screen which should go straight out to the DACs.  I put some biases on the suspensions in either pitch or yaw so that one or two of the coil outputs was crossing zero regularly.  I didn't see any kicks. 

Next, we turned off the inputs of the coil driver filter banks, unplugged the cable from the coil driver board to the satellite box, and put in sinusoidal excitations to each of the coils using awggui.  We then looked with a 'scope at the monitor point of the coil driver boards, but didn't see any glitches or abnormalities.  (We then put everything back to normal)

Finally, I locked and aligned the 2 arms, and just left them sitting.  The oplev servos were engaged, but I didn't ever see any big kicks. 

I am suspicious that there was something funny going on with the computers and RFM over the weekend, when we were not getting RFM connections between the vertex and the end stations, and that somehow weird signals were also getting sent to some of the optics.  Q's nuclear reboot (all the front ends simultaneously) fixed the RFM situation, and I don't know that I've seen any kicks since then, although Eric thinks that he has, at least once.  Anyhow, I think they might be gone for now.

  16873   Wed May 25 16:38:27 2022 yutaUpdateSUSSuspensions quick health check

[JC, Yuta]

We did a quick health check of suspensions after the pump down.

Summary:
 - ITMX LRSEN is too bright (~761) and not responding to any optic motions (we knew this before the pump down)
 - ITMY ULCOIL is not working
 - LO1 LLCOIL is not working
 - Damping loops need to be retuned, especially for ETMY (too much damping), SRM, PR3 and AS4 (damping too weak)
 - MC1 sensor outputs are minus instead of plus
 - LO2 OSEMs got stuck during the pump down, but now it is free after some kicks. OSEM sensor values almost came back (see attached)

What we did:
 1. Kicked optics with C1:SUS-{optic}_{UL,LL,UR,LR,SD}COIL_OFFSET one by one with offsets of +/- 10000 (or 100000), and checked if C1:SUS-{optic}_{UL,LL,UR,LR,SD}SEN_OUT16 move in both directions (a minimal scripted version of this kick test is sketched after this list).

 2. Check if the optic damps nicely.

 3. Attached photo of the note is the result.
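For the record, this kind of kick-and-check can be scripted. Below is a minimal sketch assuming the ezca library is available; the optic name, offset values, and wait times are illustrative and not the exact procedure we followed:

import time
import ezca

ez = ezca.Ezca(ifo='C1')   # prefixes channel names with 'C1:'

optic = 'ITMY'             # hypothetical example optic
for coil in ['UL', 'LL', 'UR', 'LR', 'SD']:
    chan_off = 'SUS-%s_%sCOIL_OFFSET' % (optic, coil)
    chan_sen = 'SUS-%s_%sSEN_OUT16' % (optic, coil)
    for kick in (+10000, -10000):
        before = ez.read(chan_sen)
        ez.write(chan_off, kick)      # apply the kick
        time.sleep(5)
        after = ez.read(chan_sen)
        ez.write(chan_off, 0)         # remove the offset
        time.sleep(10)                # let the damping settle
        print(coil, kick, 'sensor moved by', after - before)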

Attachment 1: Screenshot_2022-05-25_16-46-13.png
Attachment 2: OSEMcheck.JPG
  107   Thu Nov 15 18:23:55 2007 JohnHowToComputersSwap CAPS and CTRL on a Windows 2000/XP machine
I've swapped ctrl and caps on the four control room Windows machines. Right ctrl is unchanged.



Start menu->Run "regedit"

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout

Click on the Keyboard Layout entry.

Edit->New Binary Value name it Scancode Map.

Then select the new Scancode Map entry.

Edit menu->Modify Binary Data.

In the dialog box enter the following data:

0000: 00 00 00 00 00 00 00 00
0008: 03 00 00 00 3A 00 1D 00
0010: 1D 00 3A 00 00 00 00 00

Exit the Registry Editor. You need to log off and then on in XP (and restart in Windows 2000) for the changes to be made.
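For reference, the same registry change can be scripted; here is an untested minimal sketch in Python (Windows-only, run as Administrator), where the byte string is just the three rows of data above packed in order:

import winreg

# 8-byte header, entry count = 3 (two mappings + null terminator),
# entry 0x1D -> 0x3A (Left Ctrl key emits Caps Lock),
# entry 0x3A -> 0x1D (Caps Lock key emits Left Ctrl), then the terminator.
scancode_map = bytes([
    0x00, 0x00, 0x00, 0x00,  0x00, 0x00, 0x00, 0x00,
    0x03, 0x00, 0x00, 0x00,  0x3A, 0x00, 0x1D, 0x00,
    0x1D, 0x00, 0x3A, 0x00,  0x00, 0x00, 0x00, 0x00,
])

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"SYSTEM\CurrentControlSet\Control\Keyboard Layout",
                     0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "Scancode Map", 0, winreg.REG_BINARY, scancode_map)
winreg.CloseKey(key)
# Log off (XP) or restart (Windows 2000) for the change to take effect.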
  3947   Thu Nov 18 14:19:01 2010 josephbUpdateCDSSwapped c1auxex and c1auxey codes

Problem:

We had not switched the c1aux crates when we renamed the arms, thus the watchdogs labeled ETMX were really watching ETMY and vice-versa.

Solution:

I used telnet to connect to c1auxey, and then c1auxex.

I used the bootChange command to change the IP address of c1auxey to 192.168.113.59 (c1auxex's IP), and its startup script.  Similarly c1auxex was changed to c1auxey and then both were rebooted.

 

c1auxey > bootChange

'.' = clear field;  '-' = go to previous field;  ^D = quit

boot device          : ei
processor number     : 0
host name            : linux1
file name            : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.60:ffffff00 192.168.113.59:ffffff00
inet on backplane (b):
host inet (h)        : 192.168.113.20
gateway inet (g)     :
user (u)             : controls
ftp password (pw) (blank = use rsh):
flags (f)            : 0x0
target name (tn)     : c1auxey c1auxex
startup script (s)   : /cvs/cds/caltech/target/c1auxey/startup.cmd /cvs/cds/caltech/target/c1auxex/startup.cmd
other (o)            :

value = 0 = 0x0

c1auxex > bootChange

'.' = clear field;  '-' = go to previous field;  ^D = quit

boot device          : ei
processor number     : 0
host name            : linux1
file name            : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.59:ffffff00 192.168.113.60:ffffff00
inet on backplane (b):
host inet (h)        : 192.168.113.20
gateway inet (g)     :
user (u)             : controls
ftp password (pw) (blank = use rsh):
flags (f)            : 0x0
target name (tn)     : c1auxex c1auxey
startup script (s)   : /cvs/cds/caltech/target/c1auxex/startup.cmd /cvs/cds/caltech/target/c1auxey/startup.cmd
other (o)            :

value = 0 = 0x0

  11657   Thu Oct 1 20:26:21 2015 jamieUpdateDAQSwapping between fb and fb1

Swapping between fb and fb1 as DAQ is very straightforward, now that they are both on the DAQ network:

  • stop daqd on fb
  • on fb sudoedit /diskless/root/etc/init.d/mx_stream and set: endpoint=fb1:0
  • start daqd on fb1.  The "new" daqd binary on fb1 is at: ~controls/rtbuild/trunk/build/mx-localtime/daqd

Once daqd starts, the front end mx_stream processes will be restarted by their monits, and be pointing to the new location.

Moving back is just reversing those steps.

  1135   Fri Nov 14 17:41:50 2008 JenneOmnistructureElectronicsSweet New Soldering Iron
The fancy new Weller Soldering Iron is now hooked up on the electronics bench.

Accessories for it are in the blue twirly cabinet (spare tips of different types, a CD, and a USB cable to connect it to a computer, should we ever decide to do so).

Rana: the soldering iron has a USB port?
Attachment 1: newSolderingIron.JPG
  689   Thu Jul 17 12:15:21 2008 EricUpdatePSLSwept PMC PZT voltage range
I unlocked the PMC and swept over C1:PSL-PMC_RAMP's full range a couple of times this morning.  The PMC should now be relocked and returned 
to normal.
  14339   Mon Dec 10 15:53:16 2018 gautamUpdateLSCSwept-sine measurement with DTT

Disclaimer: This is almost certainly some user error on my part.

I've been trying to get this running for a couple of days, but am struggling to understand some behavior I've been seeing with DTT.

Test:

I wanted to measure some transfer functions in the simulated model I set up.

  • To start with, I put a pendulum (f0 = 1Hz, Q=5) TF into one of the filter modules
  • Isolated it from the other interconnections (by turning off the MEDM ON/OFF switches).
  • Set up a DTT swept-sine measurement
    • EXC channel was C1:OMC-TST_AUX_A_EXC
    • Monitored channels were C1:OMC-TST_AUX_A_IN2 and C1:OMC-TST_AUX_A_OUT.
    • Transfer function being measured was C1:OMC-TST_AUX_A_OUT/C1:OMC-TST_AUX_A_IN2.
    • Coherence between the excitation and output were also monitored.
  • Sweep parameters:
    • Measurement band was 0.1 - 900 Hz
    • Logarithmic, downward.
    • Excitation amplitude = 1ct, waveform = "Sine"

Unexplained behavior:

  • The transfer function measurement fails with a "Synchronization error", at ~15 Hz.
    • I don't know what is special about this frequency, but it fails repeatedly at the same point in the measurement.
  • Coherence is not 1 always
    • Why should the coherence deviate from 1 since everything is simulated? I think numerical noise would manifest when the gain of the filter is small (i.e. high frequencies for the pendulum), but the measurement and coherence seem fine down to a few tens of Hz.

To see if this is just a feature in the simulated model, I tried measuring the "plant" filter in the C1:LSC-PRCL filter bank (which is also just a pendulum TF), and run into the same error. I also tried running the DTT template on donatella (Ubuntu12) and pianosa (SL7), and get the same error, so this must be something I'm doing wrong with the way the measurement is being run / setup. I couldn't find any mention of similar problems in the SimPlant elogs I looked through, does anyone have an idea as to what's going on here?

* I can't get the "import" feature of DTT to work - I go through the GUI prompts to import an ASCII txt file exported from FOTON but nothing selectable shows up in DTT once the import dialog closes (which I presume means that the import was successful). Are we using an outdated version of DTT (GDS-2.15.1)?  But Attachment #1 shows the measured part of the pendulum TF, and is consistent with what is expected until the measurement terminates with a synchronization error.


The import problem is fixed - when importing, you have to give names to the two channels that define the TF you're importing (these can be arbitrary since the ASCII file doesn't have any channel name information). Once I did that, the import works. You can see that while the measurement ran, the Foton TF matches the DTT-measured counterpart.


11 Dec 2pm: After discussing with Jamie and Gabriele, I also tried changing the # of points, start frequency etc, but run into the same error (though admittedly I only tried 4 combinations of these, so not exhaustive).

Attachment 1: SimTF.pdf
  3589   Mon Sep 20 11:39:45 2010 josephbUpdateCDSSwitch over

I talked with Alex this morning, discussing what he needed to do to have a frame builder running that was compatible with the new front ends.

 

1) We need a heavy duty router as a separate network dedicated to data acquisition running between the front ends and the frame builder.  Alex says they have one over at downs, although a new one may need to be ordered to replace that one.

2) The frame builder is a linux machine (basically we stop using the Sun fb40m and start using the linux fb40m2 directly).

3) He is currently working on the code today.  Depending on progress today, it might be installable tomorrow.

  815   Fri Aug 8 12:21:57 2008 josephbConfigurationComputersSwitched X end ethernet connections over to new switch
In 1X4, I've switched the ethernet connections from c1iscex and c1auxex over to the new Prosafe 24 port switches. They also use the new cat6 cables, and are labeled.

At the moment, everything seems to be working as normally as it was before. In addition:

I can telnet into c1auxex (and can do the same to c1auxey which I didn't touch).
I can't telnet into c1iscex (but I couldn't do that before, nor can I telnet into c1iscey either, and I think these are computers which once running don't let you in).
  852   Tue Aug 19 13:34:58 2008 josephbConfigurationComputersSwitched c1pem1, c0daqawg, c0daqctrl over to new switches
Moved the Ethernet connections for c1pem1, c0daqawg, and c0daqctrl over to the Netgear Prosafe switch in 1Y6, using new cat6 cables.
  4791   Mon Jun 6 22:41:22 2011 ranaUpdateSUSSwitching problem in SUS models

Some weeks ago, Joe, Jamie, and I reworked the ETMY controls.

Today we found that the model rebuilds and BURT restores have conspired to put the SUS damping into a bad state.

1) The FM1 files in the XXSEN modules should switch the analog shadow sensor whitening. I found today that, at least on ETMY and ETMX, they do nothing. This needs to be fixed before we can use the suspensions.

2) I found all of the 3:30 and cts2um buttons OFF AGAIN. There's something certainly wrong with the way the models are being built or BURTed. All of our suspension tuning work is being lost as a consequence. We (Joe and Jamie) need to learn to use CONLOG and check that the system is not in a nonsense state after rebuilds. Just because the monitors have lights and the MEDM values are fluctuating doesn't mean that "ITS WORKING". As a rule, when someone says "it seems to work", that basically means that they have no idea if anything is working.

3) We need a way to test that the CDS system is working...

Attachment 1: a.pdf
  15077   Thu Dec 5 14:54:15 2019 gautamUpdateGeneralSymlink to SRmeasure and AGmeasure

I symlinked the SRmeasure and AGmeasure commands to /usr/bin/ on donatella (as it is done on pianosa) so that these scripts are in $PATH and may be run without having to navigate to the labutils directory.

  13909   Fri Jun 1 19:25:11 2018 poojaUpdateCamerasSynchronizing video data with the applied motion to the mirror

Aim: To synchronize data from the captured video and the signal applied to ETMX

In order to correlate the intensity fluctuations of the scattered light with the motion of the test mass, we are planning to use the technique of neural networks. For this, we need a synchronised video of scattered light with the signal applied to the test mass. Gautam helped me capture a 60 sec video of scattering of infrared laser light after ETMX was dithered in PITCH at ~0.2 Hz.

I developed a python program to capture the video and convert it into a time series of the sum of pixel values in each frame using OpenCV to see the variation. Initially we had tried the same with green laser light and signal of approximately 11.12Hz. But in order to see the variation clearly, we repeated with a lower frequency signal after locking IR laser today. I have attached the plots that we got below. The first graph gives the intensity fluctuations from the video. The third and fourth graphs are that of transmitted light and the signal applied to ETMX to shake it. Since the video captured using the camera was very noisy and intensity fluctuations in the scattered light had twice the frequency of the signal applied, we captured a video after turning off the laser. The second plot gives the background noise probably from the camera. Since camera noise is very high, it may not be possible to train this data set in neural network.

Since the videos captured consume a lot of memory I haven't uploaded it here. I have uploaded the python code 'sync_plots.py' in github (https://github.com/CaltechExperimentalGravity/GigEcamera/tree/master/Pooja%20Sekhar/PythonCode).
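The core of the video-to-timeseries conversion is simple; here is a minimal OpenCV sketch of the idea (the filename is a placeholder, and this is not the actual sync_plots.py):

import cv2
import numpy as np

cap = cv2.VideoCapture('scatter_video.avi')   # placeholder filename
fps = cap.get(cv2.CAP_PROP_FPS)

pixel_sums = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pixel_sums.append(gray.sum())             # total intensity in this frame

cap.release()
pixel_sums = np.array(pixel_sums, dtype=float)
t = np.arange(len(pixel_sums)) / fps          # time axis to overlay on the applied signal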

 

Attachment 1: camera_mirror_motion_plots.pdf
  446   Thu Apr 24 23:50:10 2008 ranaUpdateGeneralSyringes in George the Freezer
There are some packets of syringes in the freezer which are labeled as belonging to an S. Waldman.
Thu Apr 24 23:48:55 2008

Be careful of them, don't give them out to the undergrads, and just generally leave them alone. I
will consult with the proper authorities about it.
  16315   Tue Sep 7 18:00:54 2021 TegaSummaryCalibrationSystem Identification via line injection

[paco]

This morning, I spent some time restoring the jupyter notebook server running in allegra. This server was first set up by Anchal to be able to use the latest nds python API tools which is handy for the calibration stuff. The process to restore the environment was to run "source ~/bashrc.d/*" to restore some of the aliases, variables, paths, etc. that made the nds server work. I then ran ssh -N -f -L localhost:8888:localhost:8888 controls@allegra from pianosa and carried on with the experiment.


[paco, hang, tega]

We started a notebook under /users/paco/20210906_XARM_Cal/XARM_Cal.ipynb on which the first part was doing the following;

  • Set up list of excitations for C1:LSC-XARM_EXC (for example three sine waveforms) using awg.py
  • Make sure the arm is locked
  • Read a reference time trace of the C1:LSC-XARM_IN2 channel for some duration
  • Start excitations (one by one at the moment, ramptime ~ 3 seconds, same duration as above)
  • Get data for C1:LSC-XARM_IN2 for an equal duration (raw data in Attachment #1)
  • Generate the excitation sine and cosine waveforms using numpy and demodulate the raw timeseries using a 4th order lowpass filter with fc ~ 10 Hz (a minimal sketch of this demodulation follows the list below)
  • Estimate the correct demod phase by computing arctan(Q / I) and rerunning the demodulation to dump the information into the I quadrature (Attachment #2).
  • Plot the estimated ASD of all the quadratures (Attachment #3)
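Here is a minimal sketch of the demodulation step referenced in the list above (assuming the raw time series, excitation frequency and sampling rate are already in hand; the filter order and corner match the description, everything else is illustrative, and the sign convention of the phase rotation may need flipping):

import numpy as np
from scipy import signal

def demodulate(x, f_exc, fs, f_corner=10.0):
    # Demodulate time series x at f_exc and low-pass the quadratures
    t = np.arange(len(x)) / fs
    b, a = signal.butter(4, f_corner, btype='low', fs=fs)   # 4th order LP, fc ~ 10 Hz
    I = signal.filtfilt(b, a, x * np.cos(2 * np.pi * f_exc * t))
    Q = signal.filtfilt(b, a, x * np.sin(2 * np.pi * f_exc * t))
    phase = np.arctan2(np.mean(Q), np.mean(I))
    # re-demodulate with the estimated phase so the signal lands in the I quadrature
    I2 = signal.filtfilt(b, a, x * np.cos(2 * np.pi * f_exc * t + phase))
    Q2 = signal.filtfilt(b, a, x * np.sin(2 * np.pi * f_exc * t + phase))
    return I2, Q2, phase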

[paco, hang, tega]

Estimation of open loop gain:

  • Grab data from the C1:LSC-XARM_IN1 and C1:LSC-XARM_IN2 test points
  • Infer the excitation from their difference, i.e. C1:LSC-XARM_EXC = C1:LSC-XARM_IN2 - C1:LSC-XARM_IN1
  • Compute the open loop gain as follows: G(f) = csd(EXC,IN1)/csd(EXC,IN2), where csd computes the cross spectral density of the input arguments (see the sketch after this list)
  • For the uncertainty in G, dG, we repeat steps (1) to (3) with & without signal injection in the C1:LSC-XARM_EXC channel. In the absence of signal injection, the signal in C1:LSC-XARM_IN2 is of the form: Y_ref = Noise/(1-G), whereas with nonzero signal injection, the signal in C1:LSC-XARM_IN2 has the form: Y_cal = EXC/(1-G) + Noise/(1-G), so their ratio, Y_cal/Y_ref = EXC/Noise, gives the SNR, which we can then invert to give the uncertainty in our estimation of G, i.e dG = Y_ref/Y_cal.
  • For the excitation at 53 Hz, our measurement for the open loop gain comes out to about 5 dB, which is consistent with previous measurements.
  • We seem to have an SNR in excess of 100 at a measurement time of 35 seconds and 1 count of amplitude, which gives a relative uncertainty on G of 0.1%
  • The analysis details are ongoing. Feedback is welcome.
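A minimal sketch of the open loop gain estimate described above (the IN1/IN2 arrays are assumed to have been fetched already; this reproduces only the csd ratio, not the uncertainty analysis):

import numpy as np
from scipy import signal

def olg_estimate(in1, in2, fs, nperseg=2**14):
    # G(f) = csd(EXC, IN1) / csd(EXC, IN2), with EXC inferred as IN2 - IN1
    exc = in2 - in1
    f, c1 = signal.csd(exc, in1, fs=fs, nperseg=nperseg)
    f, c2 = signal.csd(exc, in2, fs=fs, nperseg=nperseg)
    return f, c1 / c2

# e.g. read off the magnitude (in dB) nearest the 53 Hz line:
# f, G = olg_estimate(in1, in2, fs=16384)
# idx = np.argmin(np.abs(f - 53.0))
# print(20 * np.log10(np.abs(G[idx])))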
Attachment 1: raw_timeseries.pdf
Attachment 2: demod_signals.pdf
Attachment 3: cal_noise_asd.pdf
  2474   Mon Jan 4 17:26:01 2010 MottUpdateGeneralT & R plots for Y1 and Y1S mirrors

The most up-to-date T and R plots for the Y1 and Y1S mirrors, as well as a T measurement for the ETM, can be found on:

http://lhocds.ligo-wa.caltech.edu:8000/40m/Upgrade_09/Optics/RTmeasurement

 

  11229   Tue Apr 21 01:17:13 2015 JenneUpdateModern ControlT-240 self-noise propagated through stack and pendulum

Going back to Wiener filtering for a moment, I took a look at what the T-240 noise level looks like in terms of pitch motion on one of our SOS optics (eg. PRM).

The self-noise of the T-240 (PSD, in dB referenced to 1m^2/s^4/Hz) was taken by pulling numbers from the Users Guide.  This is the ideal noise floor, if our installation was perfect.  I'm not sure where Kissel got the numbers from, but on page 13 of G1200556 he shows higher "measured" noise values for a T-240, although his numbers are already transformed to m/rtHz.

To get the noise numbers to meters, I use:  \left[ \frac{\rm m}{\sqrt{\rm Hz}} \right] = \frac{\sqrt{10^{\frac{[\rm dB/\sqrt{Hz}]}{10}}}}{(2 \pi f)^2}.  The top of that fraction is (a) getting to magnitude from power-dB and (b) getting to asd units from psd units.  The bottom of the fraction is getting rid of the extra 1/s^2.
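As a concrete (hypothetical) numerical example of that conversion:

import numpy as np

def db_accel_psd_to_disp_asd(db_psd, f):
    # PSD in dB re 1 (m/s^2)^2/Hz  ->  displacement ASD in m/rtHz
    accel_asd = np.sqrt(10.0 ** (db_psd / 10.0))   # (m/s^2)/rtHz
    return accel_asd / (2 * np.pi * f) ** 2        # divide by (2*pi*f)^2

# e.g. -175 dB at 0.1 Hz (an illustrative number, not the actual T-240 spec):
print(db_accel_psd_to_disp_asd(-175.0, 0.1))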

 

Next I propagate this seismometer noise (in units of m/rtHz) to effective pendulum pitch motion, by propagating through the stacks and the transfer function for pos motion at the anchor point of the pendulum to pitch motion of the mirror (see eq 63 of T000134 for the calculation of this TF).   This gives me radians/rtHz of mirror motion, caused by the ground motion:

[Plot: T-240 self-noise propagated to mirror pitch motion, in rad/rtHz]

I have not actually calibrated the POP QPD, so I will need to do that in order to compare this seismometer noise to my Wiener filter results.

 

Attachment 1: T240selfnoise.png
Attachment 2: Limits.tar.gz
  8558   Thu May 9 02:47:23 2013 JenneUpdatePEMT240 at corner station - cabling thoughts

Something that I want to look at is the coherence between seismic motion and PRM motion.  Since Den has been working on the fancy new seismometer installations, I got caught up for the day with getting the new corner seismometer station set up with a T240.  Later, Rana pointed out that we already have a Guralp sitting underneath the POX table, and that will give us a good first look at the coherence.  However, I'm still going to write down all the cable thoughts that I had today:

The cables that came with the electronics that we have (from Vladimir and tilt meter -land) are not long enough to go from the seismometer to 1X7, which is where I'd like to put the readout box (since the acquisition electronics are in that rack). I want to make a long cable that is 19pin MilSpec on one end, and 25pin Dsub on the other.  This will eliminate the creative connector type changes that happen in the existing setup.  However, before making the cable, I had to figure out what pins go to what.  So.

25pin Dsub    19pin MilSpec
 1            P
 2            N
 3            E
 4            No conn
 5            D
 6            R and V
 7            H
 8            J
 9            No conn
10            T
11            F
12            L
13            No conn
14            B
15            A
16            R and V
17            No conn
18            C
19            G
20            G
21            K
22            U
23            No conn
24            S
25            M

I am not sure why R and V are shorted to each other, but this connection is happening on the little PCB MilSpec->ribbon changer, right at the MilSpec side.  I need to glance at the manual to see if these are both ground (or something similar), or if these pins should be separate. Also, I'm not sure why 19 and 20 are shorted together.  I can't find (yet) where the short is happening. This is also something that I want to check before making the cable.

Den had one Female 19 pin MilSpec connector, meant for connecting to a T240, but the cable strain relief pieces of the connector have 'walked off'.  I can't find them, and after a solid search of the control room, the electronics bench, and the place inside where all of Den's connectors were stored, I gave up and ordered 2 more.  If we do find the missing bits for this connector, we can use it for the 2nd T240 setup, since we'll need 2 of these per seismometer. If anyone sees mysterious camo-green metal pieces that could go with a MilSpec connector, please let me know.

  14992   Thu Oct 24 18:37:15 2019 gautamUpdatePEMT240 checkout

Summary:

The Trillium T240 seismometer needs mass re-centering. Has anyone done this before, and do we have any hardware to do this?

Details:

I went to the Trillium interface box in 1X5. In this elog, Koji says it is D1000749-v2. But looking at the connector footprint on the back panel, it is more consistent with the v1 layout. Anyway I didn't open it to check. Main point is that none of the backplane data I/O ports are used. We are digitizing (using the fast CDS system) the front panel BNC outputs for the three axes. So of the various connectors available on the interface box, we are only using the front panel DB25, the front panel BNCs, and the rear panel power.

The cable connecting this interface box to the actual seismometer is a custom one I believe. It has a 19 pin military circular type hermetic connector on one end, and a DB25 on the other. Power is supplied to the seismometer from the interface box via this cable, so in order to run the test, I had to use a DB25 breakout board to act as a feedthrough and peek at the signals while the seismometer and interface boards were connected. I used Jenne's mapping of the DB25--> 19 pin connector (which also seems consistent with the schematic). Findings:

  1. We are supplying the Trillium with 39 V DC between the +PWR and -PWR pins, while the datasheet specifies 9V to 36V DC isolated. Probably this is okay?
  2. The analog (AGND) and digital (DGND) ground pins are shorted. Is this okay?
  3. I measured the DC voltages between the AGND pin and each of the mass position outputs.
    • These are supposed to indicate when the masses need re-centering.
    • The nominal output ranges for these are +/- 4 V single-ended.
    • I measured the following values (I don't know how the U,V,W basis is mapped onto the cartesian X,Y,Z coordinates):
      U_MP: 0.708 V
      V_MP: -2.151 V
      W_MP: -3.6 V
    • So at the very least, the mass needs centering in the W direction (the manual recommends doing the re-centering procedure when one of these indicators exceeds 3.5 V in absolute value).
  4. I also checked the DC voltages of the (X,Y,Z) outputs of the seismometer on the front panel BNCs, and also on the DB25 connector (so directly from the seismometer). These are rated to have a range of 40 Vpp differential between the pins. I measured ~0V on all three axes - this is a bit confusing as I assumed a de-centered mass would lead to saturation in one of these outputs, but maybe we are measuring velocities and not positions?
  5. We probably should consider monitoring these signals long term to inform of such drifts, what is the spare channel situation in the c1sus acromag?
  6. Interestingly, today evening, there is no excess noise in the 0.1-0.3 Hz band in the X-axis of the seismometer even though it is past 6pm PDT now, which is usually the time when the excess begins to show up. The z-axis 0.3-1Hz BLRMS channel has flatlined though...

I am holding off on attempting any re-centering, for more experienced people to comment.

  15013   Tue Nov 5 12:37:50 2019 gautamUpdatePEMT240 interface unit pulled out

I removed the Trillium T240 DAQ interface unit from 1X4 for investigation.

It was returned to the electronics rack and all the connections were re-made. Some details:

  1. The board is indeed a D1000749-v2 as Koji said it is. There is just an additional board (labelled D1001872 but for which there is no schematic on the DCC) inside the 1U box that breaks out the D37 connector of the v2 into 3 D15 connectors. I took photos.
  2. Armed with the new cable Chub got, and following the manual, I ran the re-centering routine.
    • Now all the mass-monitoring position voltages are <0.3 V DC, as the manual tells me they should be.
    • I noticed that when the seismometer is just plugged in and powered, it takes a few minutes for the mass monitoring voltages to acquire their steady state values.
    • The V indicator reported ~-2V DC, and the W indicator reported -3.9V DC.
    • While running the re-centering routine, I monitored the mass-position indicator voltages (via the backplane D15 connector) on an oscilloscope. See Attachment #1 for the time series. The data was rather noisy, I don't know why this is, so I plot the raw data in light colors and a filtered version in darker colors. Also, there seems to be a gain of x2 in the voltages on the backplane relative to what the T240 manual tells me I should expect, and the values reported when I query the unit via the serial port.
    • We should ideally just install another Acromag ADC in the c1susaux box and acquire these and other available diagnostic information, since the signals are available.
    • We should also probably check the mass position indicator values in a few days to see if they've drifted off again.
    • Looking at the raw time series / spectra of the BS channels, I see no obvious signatures of any change. 
    • I will run a test by locking the PRC and looking for coherence between the seismometer data and angular motion witnessed by the POP QPD, as this was what signalled my investigation in the first place.

Update 445pm: Seems to have done something good - the old feedforward filters reduce the YAW RMS motion by a factor of a few. Pitch performance is not so good, maybe the filter needs re-training, but I see coherence, see Attachment #2 for the frequency domain WF.

Attachment 1: T240_recenter.pdf
Attachment 2: ffPotential.pdf
  12063   Tue Apr 5 11:42:17 2016 gautam, ericqUpdateendtable upgradeTABLE REMOVAL

There is currently no table at the X end!

We have moved the vast majority of the optics to a temporary storage breadboard, and moved the end table itself to the workbench at the end.

Steve says Transportation is coming at 1PM to put the new table in.

  15693   Wed Dec 2 12:35:31 2020 PacoSummaryComputer Scripts / ProgramsTC200 python driver

Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here

*Warning* this first version of the driver remains untested

  15694   Wed Dec 2 15:27:06 2020 gautamSummaryComputer Scripts / ProgramsTC200 python driver

FYI, there is this. Seems pretty well maintained, and so might be more useful in the long run. The available catalog of instruments is quite impressive - TC200 temp controller and SRS345 func gen are included and are things we use in the lab. maybe you can make a pull request to add MDT694B (there is some nice API already built I think). We should also put our netgpibdata stuff and the vacuum gauge control (basically everything that isn't rtcds) on there (unless there is some intellectual property rights issues that the Caltech lawyers have to sort out).

Quote:

Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here

*Warning* this first version of the driver remains untested

  356   Tue Mar 4 19:14:09 2008 ranaConfigurationCDSTDS & SVN
Matt, Rob, Rana

Today we added the TDS software to the 40m SVN repo.

First we rationalized things by deleting all the old TDS directories and taking
the tds_mevans dir and making it be the main one (apps/linux/tds).

We also deleted all of the TDS directories in the project area. It is now very
likely that several scripts will not work.
We're going to have the teething
problems of repointing everything to the nominal paths (in the apps areas).

Finally we did:
svn import tds https://40m.ligo.caltech.edu/svn/40m/tds --username rana

to stick it in. To check it out do:
svn checkout https://40m.ligo.caltech.edu/svn/40m/tds --username rana

We'll get a couple of the O'Reilly SVN books as well to supplement our verion control knowledge.
Unitl then you can use the SVN cheat sheets available at:
http://www.digilife.be/quickreferences/quickrefs.htm
  12770   Mon Jan 30 18:41:41 2017 jamieUpdateCDSTEST ABORTED of new daqd code on fb1

I just aborted the fb1 test and reverted everything to the nominal configuration.  Everything looks to be operating nominally.  Front ends are mostly green except for c1rfm and c1asx which are currently not being acquired by the DAQ, and an unknown IPC error with c1daf.  Please let me know if any unusual problems are encountered.

The behavior of daqd on fb1 with the latest release (3.2.1) was not improved.  After turning on the full pipe it was back to crashing every 10 minutes or so when the full and second trend frames were being written out.  lame.  back to the drawing board...

  13628   Fri Feb 9 13:37:44 2018 gautamUpdateALSTHD measurement trial

I quickly put together some code that calculates the THD from CDS data and generates a plot (see e.g. Attachment #1).

Algorithm is:

  1. Get data (for now, from an offline file, but this can be readily adapted to download data live or from NDS).
  2. Compute power spectrum using scipy.signal.periodogram. I use a Kaiser window with beta=38 based on some cursory googling, and do 10 averages (i.e. nfft is total length / 10), and set the scaling to "spectrum" so as to directly get a power spectrum as opposed to a spectral density.
  3. Find fundamental (assumed highest peak) and N harmonics using scipy.signal.find_peaks_cwt. I downsample 16k data to 2k for speed. A 120second time series takes ~5 seconds.
  4. Compute THD as \mathrm{THD} = \frac{\sqrt{\sum_{i=2}^{N} V_i^2}}{V_1}, where V_i denotes an rms voltage, or in the case of a power spectrum, just the y-axis value (a minimal sketch of this calculation follows the list).
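Here is a minimal sketch of steps 2-4 (scipy.signal.welch is used here to do the 10-segment averaging, and the harmonic search is simplified to picking the bins nearest integer multiples of the fundamental rather than the CWT peak finder; fs and the data array are placeholders):

import numpy as np
from scipy import signal

def measure_thd(x, fs, n_harmonics=5):
    # Kaiser(beta=38) window, nperseg = len(x)/10 -> ~10 averages
    f, pxx = signal.welch(x, fs=fs, window=('kaiser', 38),
                          nperseg=len(x)//10, scaling='spectrum')
    i0 = np.argmax(pxx)                        # fundamental = highest peak
    v = [np.sqrt(pxx[i0])]
    for k in range(2, n_harmonics + 2):
        ik = np.argmin(np.abs(f - k * f[i0]))  # bin nearest the k-th harmonic
        v.append(np.sqrt(pxx[ik]))
    v = np.array(v)
    return np.sqrt(np.sum(v[1:]**2)) / v[0]    # THD = sqrt(sum_{i>=2} V_i^2) / V_1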

I conducted a trial on the Y arm ALS channel whitening board (while the X arm counterpart is still undergoing surgery). With the whitening gain set to 0dB, and a 1Vpp input signal (so nothing should be saturated), I measure a THD of ~0.08% according to the above formula. Seems rather high - the LT1125 datasheet tells us to expect <0.001% THD+N at ~100Hz for a closed loop gain of ~10. I can only assume that the digitization process somehow introduces more THD? Of course the FoM we care about is what happens to this number as we increase the gain.

Quote:
 

I'm going to work on putting together some code that gives me a quick readback on the measured THD, and then do the test for real with different amplitude input signal and whitening gain settings.

 

Attachment 1: THD_trial.pdf
  3243   Mon Jul 19 13:51:09 2010 kiwamuUpdateCDSTiming card at X end

[Joe, Kiwamu]

 

The timing slave in the IO chassis on the new X end was not working, with the following symptoms: no front "OK" green light, no "PPS" light, the 3.3V testpoint not working, and the ERROR testpoint bouncing between 5-6V.

We took the timing slave out of the X end IO chassis and put it into the new Y end IO chassis.

It worked perfectly there. We took the working one from the Y end and put it in the X end IO chassis.

We slowly added cables. First we added power; it worked fine and we saw the green "OK" light. Then we added the 1PPS signal via a fiber, and it also worked.

We turned everything off and then we added 40pin IPC cable from the chassis and infiniband cable from the  computer.

When we turned it ON, we didn't see the green light.

This means something in the computer configuration might be wrong, not in the timing card. We are now trying to make contact with Alex.

We are comparing the setup of the C1SCX  machine and the working C1ISCEX machine.

  1796   Mon Jul 27 14:12:14 2009 ranaSummarySUSTM Coil Noise Spectra
Rob noticed that the ITMY DAC channels were saturating occasionally. Looking at the spectrum, we can see why.
With an RMS of 10000 cts, the peak excursions (several times the RMS) sometimes cause saturations.

There's a lot of mechanical noise which is showing up on the ITM oplevs and then going to the mirror via
the oplev servo. We need to reduce the mechanical noise and/or modify the filters to compensate. The ITM
COIL_OUT RMS needs to be less than ~3000.
Attachment 1: Coils.pdf
Coils.pdf
  750   Mon Jul 28 17:58:05 2008 SharonUpdate TOP screen changes
I wanted to test the adaptive code with a downsampling rate of 32 instead of 16. To do this, I entered a 32 Hz low-pass filter ((2048/32)/2 = 32 Hz, to match the new Nyquist frequency) on the ERROR EMPH, MC1, and the relevant PEM channels.
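
For context, here is a small scipy sketch of the anti-aliasing reasoning. The actual filter lives in the CDS filter modules (foton), so this is purely illustrative and the filter order is a guess.

    from scipy import signal

    fs = 2048.0                    # original adaptive-code sample rate [Hz]
    decim = 32                     # new downsampling rate
    f_corner = (fs / decim) / 2.0  # = 32 Hz, the new Nyquist frequency
    # Any low-pass with its corner at or below 32 Hz prevents aliasing when
    # downsampling; a 4th-order Butterworth is just an example choice.
    sos = signal.butter(4, f_corner, btype='low', fs=fs, output='sos')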
  16643   Thu Feb 3 10:25:59 2022 JordanUpdateVACTP1 and Manual Gate Valve Install

Jordan, Chub

Chub and I installed the new manual gate valve (Nor-Cal GVM-6002-CF-K79) and reinstalled TP1. The new gate valve was placed with the sealing side towards the main 40m volume, then TP1 was installed on top and the foreline was reattached to TP1.

This valve has a hard stop in the actuator to prevent over-torquing.

 

Attachment 1: 20220203_101455.jpg
20220203_101455.jpg
Attachment 2: 20220203_094831.jpg
20220203_094831.jpg
Attachment 3: 20220203_094823.jpg
20220203_094823.jpg
  16634   Mon Jan 31 10:39:19 2022 JordanUpdateVACTP1 and Manual Gate Valve Removal

Jordan, Chub

Today, Chub and I removed TP1 and the failed manual gate valve from the pumping spool.

First, P2 needed to be vented in order to remove TP1. TP1 has a purge valve on the side of the pump, which we slowly opened to bring the P2 volume up to atmosphere. Although this was not done with dry air/N2, using the purge valve eliminated the need to vent the RGA volume.

Then we disconnected the TP1 foreline, removed TP1 plus the 8" flange reducer, and then removed the gate valve. All of the removed hardware looked good, so there is no need to replace bolts/nuts; only new gaskets are needed. TP1 and the failed valve are sitting on a cart, wrapped in foil, next to the pumping station.

Attachment 1: 20220131_100637.jpg
20220131_100637.jpg
Attachment 2: 20220131_102807.jpg
20220131_102807.jpg
Attachment 3: 20220131_102818.jpg
20220131_102818.jpg
Attachment 4: 20220131_100647.jpg
20220131_100647.jpg
  11593   Mon Sep 14 10:41:03 2015 SteveUpdateVACTP2 dry pump replaced

TP2's dry forepump (SN PLE10082) was replaced at a pressure of 717 mTorr; TP2 was at 50K rpm, 0.33 A, @ 112,677 hrs.

Tip seal life was 8,160 hrs.

Model SH110, SN LP1007L556, was installed. Its foreline pressure after 30 minutes of running was 38 mTorr, with the TP2 turbo at 50K rpm, 0.18 A.

 

 

Attachment 1: TP2_dry_pump_replaced.png
TP2_dry_pump_replaced.png
  15602   Wed Sep 23 15:06:54 2020 JordanUpdateVACTP2 Forepump Re-install

I removed the forepump for TP2 this morning after the vacuum failure and tested it in the C&B lab. I pumped down on a small volume 10 times with no issue. The ultimate pressure was ~30 mTorr.

I re-installed the forepump in the afternoon and restarted TP2, leaving V4 closed. It will run overnight as a test, while TP3 backs TP1.

In order to open V1 with TP3 backing TP1, the interlock system had to be reset, since it expects TP2 as the backing pump. TP2 is running normally, and pumping of the main volume has resumed.


gautam 2030:

  1. The monitor (LCD display) at the vacuum rack doesn't work - this has been the case since Monday at least. I usually use my laptop to ssh in, so I didn't notice it; it could have been busted from before. But for anyone wishing to use the workstation arrangement at 1X8, this is not great. Today we borrowed the vertex laptop to ssh in; it has since been returned to its nominal location.
  2. The modification to the interlock condition was made by simply commenting out the line requiring V4 to be open for V1 to be opened. I made a copy of the original .yaml file which we can revert to once we go back to the normal config.
  3. I also opened VM1 to allow the RGA scans to continue to be meaningful.
  4. At the time of writing, all systems seem nominal. See Attachment #2. The vertical line indicates when we started pumping on the main volume again earlier today, with TP3 backing TP1.

It is unclear why the TP2 foreline pump failed in the first place; it has been running fine for several hours now (although TP2 has no load, since V4 isolates it from the main volume). Koji's plots show that the TP2 foreline pressure did not recover even after the interlock tripped and V4 was closed (i.e. the same conditions TP2 sees right now).

Attachment 1: Screenshot_from_2020-09-23_15-15-43.png
Screenshot_from_2020-09-23_15-15-43.png
Attachment 2: MainVolPumpDown.png
MainVolPumpDown.png
  15409   Thu Jun 18 15:25:08 2020 JordanUpdateVACTP2 and TP3 Forepump removal

I removed the backing pumps for TP2 and TP3 today to test their ultimate pressure and determine whether they need a tip seal replacement. This was done with Jon backing me up on Zoom. We closed off TP3 and powered down TP3 and the auxiliary pump in order to remove the forepumps from the exhaust line. The isolation sequence was as follows (a hypothetical scripted version is sketched after the list):

  1. Close V1
  2. Close V5
  3. Turn off TP3
  4. Turn off aux dry pump (manually)
  5. Once the PTP3 foreline pressure has come up to atmosphere, you can disconnect the TP3 dry pump and cap the exhaust line with a KF blank.
  6. Restore the vac configuration in reverse order: dry pump ON, TP3 ON, open V5, open V1
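
Below is a hypothetical sketch of how steps 1-3 and 5 might look if scripted with pyepics. The channel names are placeholders, NOT the real 40m vacuum PVs; in practice the valves were operated from the vacuum control screens, and step 4 is done by hand at the pump.

    # Hypothetical sketch only: placeholder PV names, not the real 40m channels.
    import time
    import epics

    epics.caput('C1:Vac-V1_cmd', 'CLOSE')    # placeholder PV for V1
    epics.caput('C1:Vac-V5_cmd', 'CLOSE')    # placeholder PV for V5
    epics.caput('C1:Vac-TP3_cmd', 'OFF')     # placeholder PV to spin down TP3

    # Wait for the PTP3 foreline gauge to reach ~atmosphere before
    # disconnecting the dry pump and capping the exhaust with a KF blank.
    while epics.caget('C1:Vac-PTP3_pressure') < 700:   # placeholder PV [Torr]
        time.sleep(10)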

Once the pumps were removed, I connected a Pirani gauge directly to each pump and pumped down; the results are as follows:

TP2 Forepump (Agilent IDP 7):

  • Ultimate Pressure: 123 mtorr
  • Hours: 10903

TP3 Forepump (Varian SH 110):

  • Ultimate pressure: ~70 torr
  • Hours: 60300

The TP3 forepump definitely needs a new tip seal. While the pressure of the TP2 forepump was good, a significant amount of particulate came out of the exhaust line, so a new tip seal might not be strictly needed but is recommended.

  15411   Thu Jun 18 16:56:34 2020 JordanUpdateVACTP2 and TP3 Forepump removal
Quote:

I removed the backing pumps for TP2 and TP3 today to test ultimate pressure and determine if they need a tip seal replacement. This was done with Jon backing me on Zoom. We closed off TP3 and powered down TP3 and the auxiliary pump, in order to remove the forepumps from the exhaust line.

  1. Close V1
  2. Close V5
  3. Turn off TP3
  4. Turn off aux dry pump (manually)
  5. Once the PTP3 foreline pressure has come up to atmosphere, you can disconnect the TP3 dry pump and cap the exhaust line with a KF blank.
  6. Restore the vac configuration in reverse order: dry pump ON, TP3 ON, open V5, open V1

Once pumps were removed I connected a Pirani gauge to the pump directly and pumped down, results as follows:

TP2 Forepump (Agilent IDP 7):

  • Ultimate Pressure: 123 mtorr
  • Hours: 10903

TP3 Forepump (Varian SH 110):

  • Ultimate pressure: ~70 torr
  • Hours: 60300

TP3 forepump definitely needs a new tip seal, and while the pressure on TP2 Forepump was good there was a significant amount of particulate that came out of the exhaust line, so a new tip seal might not be needed but it is recommended.

I agree with your assessment, Jordan.  If I'm not mistaken, the scroll pump for TP2 is new; we had a very early tip seal failure with the last new scroll pump (the forepump for TP3) at just over 5000 hours.  Glad to see my replacement seals held up for over 60K hours. If this is the trend with these pumps, we can simply run them to around 60,000 hours and replace the seals at that time, rather than waiting for failure! - Chub

  8978   Wed Aug 7 15:36:29 2013 SteveUpdateVACTP2 drypump replaced

 

TP2's foreline dry pump was replaced at a performance level of 600 mTorr, after 10,377 hrs of continuous operation.

Where are the foreline pressure gauges? These values are not on the vac.medm screen.

The new tip-seal dry pump lowered the small turbo's foreline pressure by 10x.

TP2fl after 2 days of pumping: 65 mTorr.

Attachment 1: forelinesPs.jpg
forelinesPs.jpg