40m Log, Page 50 of 341
ID Date Author Type Category Subject
3180   Thu Jul 8 16:24:30 2010   Gopal   Update   Optic Stacks   Completion of single stack layer

A single layer of the stack was successfully modeled in COMSOL. I'm now working on adding Viton springs and extending it to a full stack; I'm having some difficulty finding consistent parameters to work with.

9708   Mon Mar 10 21:12:30 2014   Koji   Summary   LSC   Composite Error Signal for ARms (1)

The ALS error (i.e. phase tracker output) is linear everywhere, but noisy.
The 1/sqrt(TR) signal is less noisy and linear over a wide range, but is not linear around the resonance and has no sign.
The PDH signal is even less noisy, but its linear range is limited.

Why don't we combine all of these to produce a composite error signal that is linear everywhere and less noisy at the resonance?

This concept was confirmed by a simple Mathematica calculation:

The following plot shows the raw signals with arbitrary normalizations

1) ALS: (Blue)
2) 1/SQRT(TR): (Purple)
3) PDH: (Yellow)
4) Transmission (Green)

The following plot shows the preprocessed signals for composition

1) ALS: no preprocess (Blue)
2) 1/SQRT(TR): multiply sign(PDH) (Purple)
3) PDH: linearization with the transmission (if TR&lt;0.1, use 0.1 for the normalization) (Yellow)
4) Transmission (Green)

The composite error signal

1) Use ALS at TR<0.03. Use 1/SQRT(TR)*sign(PDH)*(1-TR) + PDH*TR at TR>0.03
2) Transmission (Purple)
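As a toy sketch, the blending rule above can be written out in Python. The function names and structure are mine, not from the actual Mathematica notebook; only the thresholds (0.03 for the ALS handoff, 0.1 for the normalization floor) come from the entries above.

```python
def sign(x):
    # Sign of the PDH signal, used to give 1/sqrt(TR) a sign
    return (x > 0) - (x < 0)

def linearized_pdh(pdh, tr, floor=0.1):
    # If TR < 0.1, use 0.1 for the normalization (as in the preprocessing above)
    return pdh / max(tr, floor)

def composite_error(als, sqrt_inv, pdh, tr, thresh=0.03):
    """Composite (CESAR) error signal: below the transmission threshold,
    use ALS alone; above it, crossfade sign(PDH)/sqrt(TR) into the
    linearized PDH with weight TR."""
    if tr < thresh:
        return als
    tr_c = min(max(tr, 0.0), 1.0)  # confine TR to [0, 1]
    return sqrt_inv * sign(pdh) * (1.0 - tr_c) + linearized_pdh(pdh, tr) * tr_c
```

At TR = 1 this reduces to pure linearized PDH; at the threshold it is dominated by the signed 1/sqrt(TR) term.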

Attachment 1: composite_linear_signal.nb.zip
9715   Tue Mar 11 15:14:34 2014   den   Summary   LSC   Composite Error Signal for ARms (1)

 Quote: The composite error signal

Very nice error signal. Still, I think we need to take into account the frequency shape of the transfer function TR -> CARM.

9717   Tue Mar 11 15:21:08 2014   Koji   Summary   LSC   Composite Error Signal for ARms (1)

True. But we first want to realize this for a single arm, then move on to the two-arm case.
At that point we'll need to incorporate the frequency dependence.

9710   Mon Mar 10 21:14:58 2014   ericq   Summary   LSC   Composite Error Signal for ARms (3)

Using Koji's mathematica notebook, and Nic's python work, I set out to run a time domain simulation of the error signal, with band-limited white noise added in.

Basically, I sweep the displacement of the cavity (with no noise), and pass it to the analytical formulae with the coefficients Koji used, with some noise added in. I also included some 1/0 protection for the linearized PDH signal. I ran a sweep, and then compared it to an ALS sweep that Jenne ran on Monday; reconstructing what the CESAR signal would have looked like in the sweep.

The noise amounts were totally made up.

They matched up very well, qualitatively! [Since the real sweep was done by a (relatively) noisy ALS, the lower noise of the real pdh signal was obscured.]
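The sweep procedure might look something like this in Python. These are Lorentzian stand-ins for the cavity signals, the noise amounts are made up (as in the original simulation), and none of these functions come from the actual notebook.

```python
import numpy as np

rng = np.random.default_rng(0)

def transmission(x, hwhm=1.0):
    # Lorentzian transmission vs. detuning, standing in for the cavity response
    return 1.0 / (1.0 + (x / hwhm) ** 2)

def pdh(x, hwhm=1.0):
    # PDH-like dispersive signal: linear near resonance, range limited to ~hwhm
    return -(x / hwhm) / (1.0 + (x / hwhm) ** 2)

def sweep(n=2000, span=20.0, noise_als=0.05, noise_ir=0.005):
    """Sweep the cavity displacement (no noise on the sweep itself),
    evaluate the analytic signals, and add white noise."""
    x = np.linspace(-span, span, n)
    als = x + noise_als * rng.standard_normal(n)
    tr = transmission(x) + noise_ir * rng.standard_normal(n)
    raw_pdh = pdh(x) + noise_ir * rng.standard_normal(n)
    # 1/0 protection for the linearized PDH signal
    pdh_lin = raw_pdh / np.clip(tr, 0.1, None)
    return x, als, tr, raw_pdh, pdh_lin
```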

Given this good match, we were motivated to start trying to implement it on Monday.

At this point, since we've gotten it working on the actual IFO, I don't plan on doing much more with this simulation right now, but it may come in handy in the future...

9751   Wed Mar 26 11:16:59 2014   ericq   Summary   LSC   Composite Error Signal for ARms (3)

Extending the previous model, I've closed a rudimentary CESAR loop in Simulink. Error signals with varying noise levels are combined to bring a "cavity" to lock.

There are many things that are flat out arbitrary at this point, but it qualitatively works. The main components of this model are:

• The "Plant": A pendulum with f0 = 2Hz, Q = 10
• Some white force noise, low passed at 1Hz before input to the plant.
• The Controller: A very rough servo design that is stable...
• ALS signal: Infinite range Linear signal, with a bunch of noise
• Transmission and PDH signals are computed with compiled C code containing analytic functions (which can be a total pain to get working); these have less noise than ALS
• Some logic for computing linearized PDH and SqrtInv signals
• A C code block for doing the CESAR mixing, and feeding to the servo

And it can lock!
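A toy discrete-time stand-in for this loop: a 2 Hz, Q = 10 pendulum driven by force noise, closed with a crude proportional-integral servo on a noisy linear error signal. All gains and noise levels here are arbitrary, just as in the model.

```python
import numpy as np

def simulate(n=20000, fs=2048.0, f0=2.0, q=10.0, kp=50.0, ki=100.0, seed=0):
    """Close a loop on a 2 Hz, Q = 10 pendulum driven by force noise,
    using a noisy linear error signal and a simple PI servo."""
    rng = np.random.default_rng(seed)
    dt, w0 = 1.0 / fs, 2.0 * np.pi * f0
    x = v = integ = 0.0
    out = np.empty(n)
    for i in range(n):
        err = x + 1e-4 * rng.standard_normal()        # noisy error signal
        integ += err * dt
        force = -(kp * err + ki * integ)              # servo output
        force += 1e-3 * rng.standard_normal()         # force noise on the plant
        # pendulum: x'' + (w0/q) x' + w0^2 x = force  (semi-implicit Euler)
        v += (force - (w0 / q) * v - w0**2 * x) * dt
        x += v * dt
        out[i] = x
    return out
```

With these (arbitrary) gains the closed loop is stable and the "cavity" displacement stays bounded near zero.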

Right now, all of the functions and noise levels are similar to the previous simulation, and therefore don't tell us anything about anything real...

However, at this point, I can tune the parameters and noise levels to make it more like our interferometer, and thus maybe actually useful.

9711   Mon Mar 10 21:16:13 2014   Koji   Summary   LSC   Composite Error Signal for ARms (4)

The LSC model was modified for CESAR.

A block called ALSX_COMBINE was made in the LSC block. This block receives the signals for ALS (Phase Tracker output), TRX_SQRTINV, TRX, POX11 (Unnormalized POX11I).
It spits out the composite ALS signal.

Inside of the block we have several components:

1) a group of components for the sign(x) function. We use the PDH signal to produce the sign for the transmission signal.

2) Hard triggering between ALS and TR/PDH signals. An epics channel "THRESH" is used to determine how much transmission
we should have to turn on the TR/PDH signals.

3) Blending of the TR and PDH. Currently we confine TR between 0 and 1 using a saturation module. When the TR is 0, we use the 1/SQRT(TR) signal for the control;
when the TR is 1, we use the PDH signal for the control.

4) Finally the three processed signals are combined into a single signal by an adder.

It is important to consider the offsets. We want all of ALS, 1/SQRT(TR), and PDH to have their zero crossing at the resonance.
ALS tends to have an arbitrary offset, so we decided to use two offsets. One is before the CESAR block, in the ALS path.
The other is after the CESAR block.
Right now we are using the XARM servo offset for the latter purpose.

We run the resonance search script to find the first offset. Once this is set, we never touch this offset until the lock is lost.
Then for further scanning of the arm length, we used the offset in the XARM servo filter module.

Attachment 1: ss1.png
Attachment 2: ss2.png
Attachment 3: CESAR_OFFSETS.pdf
9712   Mon Mar 10 21:16:56 2014   Koji   Summary   LSC   Composite Error Signal for ARms (5)

After making the CDS modification, CESAR was tested with ALS.

First of all, we ran CESAR with a threshold of 10. This means that the error signal always used ALS.
The ALS was scanned over the resonance. The plot of the scan can be found in EricQ's elog.
At each point of the scan, the arm stability is limited by the ALS.

Using this scan data, we could adjust the gains for the TR and PDH signals. Once the gains were adjusted,
the threshold was lowered to 0.25. This activated dynamic signal blending.

ALS was stabilized with XARM FM1/2/3/5/6/7/9. The resonance was scanned. No glitch was observed.
This is some level of success already.

The next step was to fully hand off the control to PDH, but this was not successful. Every time the gain for the TR was
reduced to zero, the lock was lost. When the TR is removed from the control, the raw PDH signal is used for the control
without normalization. Without turning on FM4, we lose 60 dB of DC gain. Therefore the residual motion may have been
too big for the linear range of the PDH signal. This could be mitigated by normalizing the PDH signal by the TR.

9718   Tue Mar 11 18:33:21 2014   Koji   Update   LSC   Composite Error Signal for ARms (6)

Today we modified the CESAR block.

- Now the sign(X) function is in a block.

- We decided to use the linearization of the PDH.

- By adding the offset to the TR signal used for the switching between TR and PDH, we can force pure 1/sqrt(TR) or pure PDH to control the cavity.
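The switching-offset trick can be sketched as a saturation on the blend weight (a hypothetical helper; the 0-to-1 confinement follows entry 9711):

```python
def blend_weight(tr, switch_offset=0.0):
    """Crossfade weight: PDH gets weight w, 1/sqrt(TR) gets 1 - w.
    Adding a large offset to the TR copy used for switching saturates w,
    forcing pure 1/sqrt(TR) (large negative) or pure PDH (large positive)."""
    return min(max(tr + switch_offset, 0.0), 1.0)
```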

Attachment 1: 14.png
9719   Tue Mar 11 18:34:11 2014   Jenne   Update   LSC   Composite Error Signal for ARms (7)

[Nic, Jenne, EricQ, and Koji]

We have used CESAR successfully to bring the Xarm into resonance.  We start with the ALS signal, then as we approach resonance, the error signal is automatically transitioned to 1/sqrt(TRX), and ramped from there to POX, which we use as the PDH signal.

In the first plot, we have several spectra of the CESAR output signal (which is the error signal for the Xarm), at different arm resonance conditions.  Dark blue is the signal when we are locked with the ALS beatnote, far from IR resonance.  Gold is when we are starting to see IR resonance (arm buildup of about 0.03 or more), and we are using the 1/sqrt(TRX) signal for locking.  Cyan is after we have achieved resonance, and are using only the POX PDH signal.  Purple is the same condition as cyan, except that we have also engaged the low frequency boosts (FM 2, 3, 4) in the locking servo.  FM4 is only usable once you are at IR resonance, and locked using the PDH signal.  We see in the plot that our high frequency noise (and total RMS) decreases with each stage of CESAR (ALS, 1/sqrt(TR) and PDH).

To actually achieve the gold noise level of 1/sqrt(TR), we first had to increase the analog gain by swapping out a resistor on the whitening board.

The other plots attached are time series data.  For the python plots (last 2), the error signals are calibrated to nanometers, but the dark blue, which is the transmitted power of the cavity, is left in normalized power units (where 1 is full IR resonance).

In the scan from off resonance to on resonance, around the 58 second mark, we see a glitch when we engage FM4, the strong low frequency boosts.  Around the 75 second mark we turned off any contribution from 1/sqrt(TR), so the noise decreases once we are on pure PDH signal.

In the scan through the resonance, we see a little more clearly the glitch that happens when we switch from ALS to IR signals, around the 7 and 12 second marks.

We want to make some changes, so that the transition from ALS to IR signals is more smooth, and not a discrete switch.

Attachment 2: Screenshot-1.png
Attachment 3: ScanFromOffToOnResonance.pdf
Attachment 4: ScanThroughResonance.pdf
9724   Thu Mar 13 01:18:00 2014   Jenne   Update   LSC   Composite Error Signal for ARms (8)

[Jenne, EricQ]

As Koji suggested in his email this afternoon, we looked at how much actuator range is required for various forms of arm locking:  (1) "regular" PDH lock acquisition, (2) ALS lock acquisition, (3) CESAR cooling.

To start, I looked at the spectra and time series data of the control signal (XARM_OUT) for several locking situations.  Happily, when the arm is at the half fringe, where we expect the 1/sqrt(TRX) signal to be the most sensitive (versus the same signal at other arm powers), we see that it is in fact more quiet than even the PDH signal.  Of course, we can't ever use this signal once the arm is at resonance, so we haven't discovered anything new here.

EricQ then made some violin plots with the time series data from these situations, and we determined that a limit of ~400 counts encompasses most of the steady-state peak-to-peak output from locking on the PDH signal.

[ericq: What's being plotted here are "kernel density estimates" of the time series data of XARM_OUT when locked on these signals. The extent of the line goes to the furthest outlier, while the dashed and dotted lines indicate the median and quartiles, respectively]
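For reference, a kernel density estimate like the ones plotted can be computed with a few lines of numpy. The data here are synthetic, standing in for the real XARM_OUT record, and the bandwidth rule is my assumption, not necessarily what was used.

```python
import numpy as np

def gaussian_kde_1d(samples, grid, bandwidth=None):
    """Minimal 1-D Gaussian kernel density estimate.
    Uses Silverman's rule of thumb for the bandwidth if none is given."""
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * samples.std() * len(samples) ** (-0.2)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return kernels.sum(axis=1) / (len(samples) * bandwidth)

# Synthetic stand-in for the XARM_OUT time series (counts)
rng = np.random.default_rng(1)
xarm_out = 150.0 * rng.standard_normal(5000)

grid = np.linspace(-600.0, 600.0, 401)
density = gaussian_kde_1d(xarm_out, grid)

# Median and quartiles, as drawn with dashed/dotted lines on the violins
q1, med, q3 = np.percentile(xarm_out, [25, 50, 75])
```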

I tried acquiring ALS and transitioning to final PDH signals with different limiters set in the Xarm servo.  I discovered that it's not too hard to do with a limit of 400 counts, but that below ~350 counts, I can't keep the ALS locked for long enough to find the IR resonance.  Here's a plot of acquiring ALS lock, scanning for the resonance, and then using CESAR to transition to PDH, with the limit of 400 counts in place, and then a lockloss at the end.  Even though I'm hitting the rails pretty consistently, until I transition to the more quiet signals, I don't ever lose lock (until, at the end, I started pushing other buttons...).

After that, I tried acquiring lock using our "regular" PDH method, and found that it wasn't too hard to capture lock with a limit of 400, but with limits below that I can't hold the lock through the boosts turning on.

Finally, I took spectra of the XARM_OUT control signal while locked using ALS only, but with different limiter values. Interestingly, I see much higher noise between 30-300 Hz with the limiter engaged, but the high frequency noise goes down.  Since the high frequency is dominating the RMS, we see that the RMS value is actually decreasing a bit (although not much).

We have not made any changes to the LSC model, so there is still no smoothing between the ALS and IR signals.  That is still on the to-do list.  I started modifying things to be compatible with CARM rather than a single arm, but that's more of a daytime-y task, so that version of the c1lsc model is saved under a different name, and the model that is currently compiled and running is reverted as the "c1lsc.mdl" file.

9728   Fri Mar 14 12:18:55 2014   Koji   Update   LSC   Composite Error Signal for ARms (9)

Asymptotic cooling of the mirror motion with CESAR was tested.

With ALS and the full control bandwidth (UGF 80-100 Hz), an actuator amplitude of 8000 cnt_pp is required.

By varying the control bandwidth depending on the noise level of the signal, the arm was brought to the final configuration with an actuator amplitude of 800 cnt_pp.

Attachment 1: asymptotic_cooling.pdf
477   Wed May 14 14:05:40 2008   Andrey   Update   Computers   Computer Linux-2, MEDM screen "Watchdogs"

Computer "Linux-2", MEDM screen "C1SUS_Watchdogs.adl": there is no indication for the ETMY watchdogs; everything is white. There is information on that screen about all the other systems (MC, ETMX, ...), but something is wrong with the indicators for ETMY on that particular control computer.
447   Fri Apr 25 11:33:40 2008   Andrey   Configuration   Computers   Computer controlling vacuum equipment

The old computer (located at the south end of the interferometer room), which was almost unable to fulfill its duties of controlling vacuum equipment, has been replaced by "Linux-3". MEDM runs on "Linux-3".

We checked later that day, together with Steve Vass, that the vacuum equipment (like the vacuum valves) can really be controlled from the MEDM screen 'VacControl.adl'.

Unused flat LCD monitor, keyboard and mouse (parts of the former LINUX-3 computer) were put on the second shelf of the computer rack in the computer room near the HP printer.
192   Sun Dec 16 16:52:40 2007   dmass   Update   Computers   Computer on the end Fixed
I had Mike Pedraza look at the computer on the end (tag c21256). It was running funny, and it turns out it had a bad HD.

I backed up the SURF files as attachments to their wiki entries. Nothing else seemed important so the drive was (presumably) swapped, and a clean copy of xp pro was installed. The username/login is the standard one.

Also - that small corner of desk space is now clean, and it would be lovely if it stayed that way.
10010   Mon Jun 9 11:42:00 2014   Jenne   Update   CDS   Computer status

Current computer status:

All fast machines except c1iscey are up and running. I can't ssh to c1iscey, so I'll need to go down to the end station and have a look-see. On the c1lsc machine, neither the c1oaf nor the c1cal models are running (but for the oaf model, we know that this is because we need to revert the blrms block changes to some earlier version, see Jamie's elog 9911).

The daqd process is running on the framebuilder.  However, when I try to open dataviewer, I get the popup error saying "Can't connect to rb", as well as an error in the terminal window that said something like "Error getting chan info".

Slow machines c1psl, c1auxex and c1auxey are not running (can't telnet to them, and white boxes on related medm screens for slow channels).  All other slow machines seem to be running, however nothing has been done to them to point them at the new location of the shared hard drive, so their status isn't ready to green-light yet.

Things that we did on Friday for the fast machines:

The shared hard drive is "physically" on Chiara, at /home/cds/.  Links are in place so that it looks like it's at the same place that it used to be:  /opt/rtcds/......

The first nameserver on all of the workstation machines inside of the file /etc/resolv.conf has been changed to be 192.168.113.104, which is Chiara's IP address (it used to be 192.168.113.20, which was linux1).  This change has also been made on the framebuilder, and in the framebuilder's /diskless/root/etc/resolv.conf file, which is what all of the fast front ends look to.

On the framebuilder, and in the /diskless place for the fast front ends, presumably we must have changed something to point at the new location for the shared drive, but I don't remember how we did that [ERIC, what did we do???]

The slow front ends that we have tried changing have not worked out.

First, we tried plugging a keyboard and monitor into c1auxey.  When we key the crate to reboot the machine, we get some error message about a "disk A drive error", but then it goes on to prompt pushing F1 for something, and F2 for entering setup.  No matter what we press, nothing happens.  c1auxey is still not running.

We were able to telnet into c1auxex, c1psl, and c1iool0.  On each of those machines, at the prompt, we used the command "bootChange".  This initially gives us a series of:

telnet c1susaux
Trying 192.168.113.55...
Connected to c1susaux.
Escape character is '^]'.
c1susaux > bootChange

'.' = clear field;  '-' = go to previous field;  ^D = quit

boot device          : ei
processor number     : 0
host name            : linux1
file name            : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.55:ffffff00
inet on backplane (b):
host inet (h)        : 192.168.113.20
gateway inet (g)     :
user (u)             : controls
ftp password (pw) (blank = use rsh):
flags (f)            : 0x0
target name (tn)     : c1susaux
startup script (s)   : /cvs/cds/caltech/target/c1susaux/startup.cmd
other (o)            :
value = 0 = 0x0
c1susaux >

If we go through that again (it comes up line-by-line, and you must press Enter to go to the next line) and put a period at the end of the Host Name line and the Host Inet (h) line, they will come up blank the next time around. So, the next time you run bootChange, you can type "chiara" for the host name, and "192.168.113.104" for the "host inet (h)". If you run bootChange one more time, you'll see that the new values are in there, so that's good.

However, when we then tried to reboot the computer, I think the machines weren't coming back after this point. (Unfortunately, this is one of those things that I should have elogged back on Friday, since I don't remember precisely.) Certainly whatever the effect was, it wasn't what I wanted, and I left with the machines that I had tried rebooting not running.

10011   Mon Jun 9 12:19:17 2014   ericq   Update   CDS   Computer status

Quote: The first nameserver on all of the workstation machines inside of the file /etc/resolv.conf has been changed to be 192.168.113.104, which is Chiara's IP address (it used to be 192.168.113.20, which was linux1). This change has also been made on the framebuilder, and in the framebuilder's /diskless/root/etc/resolv.conf file, which is what all of the fast front ends look to.
On the framebuilder, and in the /diskless place for the fast front ends, presumably we must have changed something to point at the new location for the shared drive, but I don't remember how we did that [ERIC, what did we do???]

In all of the fstabs, we're using chiara's IP instead of its name, so that if the nameserver part isn't working, we can still get the NFS mounts. On control room computers, we mount the NFS through /etc/fstab having lines like:

192.168.113.104:/home/cds /cvs/cds nfs rw,bg 0 0
fb:/frames /frames nfs ro,bg 0 0

Then, things like /cvs/cds/foo are locally symlinked to /opt/foo.

For the diskless machines, we edited the files in /diskless/root. On FB, /diskless/root/etc/fstab becomes:

master:/diskless/root / nfs sync,hard,intr,rw,nolock,rsize=8192,wsize=8192 0 0
master:/usr /usr nfs sync,hard,intr,ro,nolock,rsize=8192,wsize=8192 0 0
master:/home /home nfs sync,hard,intr,rw,nolock,rsize=8192,wsize=8192 0 0
none /proc proc defaults 0 0
none /var/log tmpfs size=100m,rw 0 0
none /var/lib/init.d tmpfs size=100m,rw 0 0
none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
none /sys sysfs defaults 0 0
master:/opt /opt nfs async,hard,intr,rw,nolock 0 0
192.168.113.104:/home/cds/rtcds /opt/rtcds nfs nolock 0 0
192.168.113.104:/home/cds/rtapps /opt/rtapps nfs nolock 0 0

("master" is defined in /diskless/root/etc/hosts to be 192.168.113.202, which is fb's IP)

and /diskless/root/etc/resolv.conf becomes:

search martian
nameserver 192.168.113.104 #Chiara

10585   Wed Oct 8 15:31:31 2014   Jenne   Update   CDS   Computer status

After the Great Computer Meltdown of 2014, we forgot about poor c0rga, which is why the RGA hasn't been recording scans for the past several months (as Steve noted in elog 10548). Q helped me remember how to fix it. We added 3 lines to its /etc/fstab file, so that it knows to mount from Chiara and not Linux1. We changed the resolv.conf file, and Q made some symlinks.
Steve and I ran ..../scripts/RGA/RGAset.py on c0rga to set up the RGA's settings after the power outage, and we're checking to make sure that the RGA will run right now; then we'll set it back to the usual daily 4am run via cron.

EDIT, JCD: Ran ..../scripts/RGA/RGAlogger.py, saw that it works and logs data again. Also, c0rga had a slightly off time, so I ran sudo ntpdate -b -s -u pool.ntp.org, and that fixed it.

Quote: In all of the fstabs, we're using chiara's IP instead of its name, so that if the nameserver part isn't working, we can still get the NFS mounts. [...]

10018   Tue Jun 10 09:25:29 2014   Jamie   Update   CDS   Computer status: should not be changing names

I really think it's a bad idea to be making all these name changes. You're making things much much harder for yourselves.
Instead of repointing everything to a new host, you should have just changed the DNS to point the name "linux1" to the IP address of the new server. That way you wouldn't need to reconfigure all of the clients. That's the whole point of name service: use a name so that you don't need to point to a number.

Also, pointing to an IP address for this stuff is not a good idea. If the IP address of the server changes, everything will break again. Just point everything to linux1, and make the DNS entries for linux1 point to the IP address of chiara. You're doing all this work for nothing!

RXA: Of course, I understand what DNS means. I wanted to make the changes to the startup to remove any misconfigurations or spaghetti mount situations (of which we found many). The way the VME162 boards are designed, changing the name doesn't make the fix - it uses the number instead. And, of course, the main issue was not the DNS, but just that we had to set up RSH on the new machine. This is all detailed in the ELOG entries we've made, but it might be difficult to understand remotely if you are not familiar with the 40m CDS system.

9662   Mon Feb 24 13:40:13 2014   Jenne   Update   CDS   Computer weirdness with c1lsc machine

I noticed that the fb lights on all of the models on the c1lsc machine are red, and that even though the MC was locked, there was no light flashing in the IFO. Also, all of the EPICS values on the LSC screen were frozen.

I tried restarting the ntp server on the frame builder, as in elog 9567, but that didn't fix things. (I realized later that the symptom there was a red light on every machine, while I'm just seeing problems with c1lsc.) I did an mxstream restart, as a harmless thing that had some small hope of helping (it didn't). I logged on to c1lsc and restarted all of the models (rtcds restart all), which stops all of the models (IOP last), and then restarts them (IOP first).
This did not change the status of the lights on the status screen, but it did change the positioning of some optics (I suspect the tip tilts) significantly, and I was again seeing flashes in the arms. The LSC master enable switch was off, so I don't think that it was trying to send any signals out to the suspensions. The ASS model, which sends signals out to the input pointing tip tilts, runs on c1lsc, and it was about when the ASS model was restarted that the beam came back. Also, there are no jumps in any of the SOS OSEM sensors in the last few hours, except me misaligning and restoring the optics. We don't have sensors on the tip tilts, so I can't show a jump in their positioning, but I suspect them.

I called Jamie, and he suggested restarting the machine, which I did. (Once again, the beam went somewhere, and I saw it scattering big-time off of something in the BS chamber, as viewed on the PRM-face camera.) This made the oaf and cal models run (I think they were running before I did the restart all, but they didn't come back after that; now they're running again). Anyhow, that did not fix the problem. For kicks, I re-ran mxstream restart and diag reset, to no avail. I also tried running the sudo /etc/init.d/ntp-client restart command on just the lsc machine, but it doesn't know the command 'ntp-client'.

Jamie suggested looking at the timing card in the chassis, to ensure all of the link lights are on, etc. I will do this next.

9663   Mon Feb 24 15:25:29 2014   Jenne   Update   CDS   Computer weirdness with c1lsc machine

The LSC machine isn't any better, and now c1sus is showing the same symptoms. Lame.

The link lights on the c1lsc I/O chassis and on the fiber timing system are the same as on all other systems. On the timing card in the chassis, the light above the fibers was solid on, and the light below blinks at 1 pps.
Koji and I power-cycled both the lsc I/O chassis and the computer, including removing the power cables (after softly shutting down) so there was seriously no power. Upon plugging back in and turning everything on, no change to the timing status. It was after this reboot that the c1sus machine also started exhibiting symptoms.

10717   Fri Nov 14 15:45:34 2014   Jenne   Update   CDS   Computers back up after reboot

[Jenne, Q]

Everything seems to be back up and running. The computers weren't such a big problem (or at least didn't seem to be). I turned off the watchdogs, and remotely rebooted all of the computers (except for c1lsc, which Manasa had already gotten working). After this, I also ssh-ed to c1lsc and restarted all of the models, since half of them froze or something while the other computers were being power cycled.

However, this power cycling somehow completely screwed up the vertex suspensions. The MC suspensions were fine, and SRM was fine, but the ITMs, BS and PRM were not damping. To get them to kind of damp rather than ring up, we had to flip the signs on the pos and pit gains. Also, we were a little suspicious of potential channel-hopping, since touching one optic was occasionally time-coincident with another optic ringing up. So, no hard evidence on the channel hopping, but suspicions.

Anyhow, at some point I was concerned about the suspension slow computer, since the watchdogs weren't tripping even though the OSEM sensor RMSes were well over the thresholds, so I keyed that crate. After this, the watchdogs tripped as expected when we enabled damping while the RMS was higher than the threshold. I eventually remotely rebooted c1sus again. This totally fixed everything. We put all of the local damping gains back to the values we found them at (in particular, undoing our sign flips), and everything seems good again. I don't know what happened, but we're back online now.

Q notes that the bounce mode for at least ITMX (haven't checked the others) is rung up.
We should check if it is starting to go down in a few hours. Also, the FSS slow servo was not running; we restarted it on op340m.

695   Fri Jul 18 17:06:20 2008   Jenne   Update   Computers   Computers down for most of the day, but back up now

[Sharon, Alex, Rob, Alberto, Jenne]

Sharon and I have been having trouble with the C1ASS computer the past couple of days. She has been corresponding with Alex, who has been rebooting the computers for us. At some point this afternoon, as a result of this work or other stuff (I'm not totally sure which), about half of the computers' status lights on the MEDM screen were red. Alberto and Sharon spoke to Alex, who then fixed all of them except C1ASC. Alberto and I couldn't telnet into C1ASC to follow the restart procedures on the Wiki, so Rob helped us hook up a monitor and keyboard to the computer and restart it the old-fashioned way. It seems like C1ASC has some confusion as to what its IP address is, or some other computer is now using C1ASC's IP address. As of now, all the computers are back up.

8324   Thu Mar 21 10:29:12 2013   Manasa   Update   Computers   Computers down since last night

I'm trying to figure out what went wrong last night. But the morning status... the computers are down.

Attachment 1: down.png

9130   Mon Sep 16 13:11:15 2013   Evan   Update   Computer Scripts / Programs   Comsol 4.3b upgrade

Comsol 4.3b is now installed under /cvs/cds/caltech/apps/linux64/COMSOL43b. I've left the existing Comsol 4.2 installation alone; according to the Comsol installation guide [PDF], it is unaffected by the new install.

On megatron I've made a symlink so that you can call comsol in bash to start Comsol 4.3b. The first time I ran comsol server, it asked me to choose a username/password combo, so I made it the same as the combo used to log on to megatron.

Edit: I've also added a ~/.screenrc on megatron (based on this Stackoverflow answer) so that I don't constantly go nuts trying to figure out if I'm already inside a screen session.
9770   Tue Apr 1 17:37:57 2014   Evan   Update   Computer Scripts / Programs   Comsol 4.4 upgrade

Comsol 4.4 is now installed under /cvs/cds/caltech/apps/linux64/COMSOL44. I've left the other installations alone. I've changed the symlink on megatron so that comsol now starts Comsol 4.4. The first time I ran comsol server, it asked me to choose a username/password combo, so I made it the same as the combo used to log on to megatron.

We should consider uninstalling some of the older Comsol versions; right now we have 4.0, 4.2, 4.3b, and 4.4 installed.

15389   Thu Jun 11 09:37:38 2020   Jon   Update   BHD   Conclusions on Mode-Matching Telescopes

After further astigmatism/tolerance analysis [ELOG 15380, 15387], our conclusion is that the stock-optic telescope designs [ELOG 15379] are sufficient for the first round of BHD testing. However, for the final BHD hardware we should still plan to procure the custom-curvature optics [DCC E2000296]. The optimized custom-curvature designs are much more error-tolerant and have a high probability of achieving < 2% mode-matching loss. The stock-curvature designs can only guarantee about 95% mode-matching.

Below are the final distances between optics in the relay paths. The base set of distances is taken from the 2020-05-21 layout. To minimize the changes required to the CAD model, I was able to achieve near-maximum mode-matching by moving only one optic in each relay path. In the AS path, AS3 moves inwards (towards the BHDBS) by 1.06 cm. In the LO path, LO4 moves backwards (away from the BHDBS) by 3.90 cm.
### AS Path

Interval      Distance (m)  Change (cm)
SRMAR-AS1     0.7192         0
AS1-AS2       0.5405         0
AS2-AS3       0.5955        -1.06
AS3-AS4       0.7058        -1.06
AS4-BHDBS     0.5922         0
BHDBS-OMCIC   0.1527         0

### LO Path

Interval      Distance (m)  Change (cm)
PR2AR-LO1     0.4027         0
LO1-LO2       2.5808         0
LO2-LO3       1.5870         0
LO3-LO4       0.3691        +3.90
LO4-BHDBS     0.2573        +3.90
BHDBS-OMCIC   0.1527         0

11491   Tue Aug 11 10:13:32 2015 JessicaUpdateGeneralConductive SMAs seem to work best

After testing both the conductive and isolated front panels on the ALS delay line box using the actual beatbox, and comparing this to the previous setup, I found that the conductive SMAs improved crosstalk the most. Also, since the old cables were 30 m and the new ones are 50 m, Eric gave me a conversion factor to apply to the new cables to normalize the comparison.

I used an amplitude of 1.41 Vpp and drove the following frequencies through each cable:

X: 30.019 MHz
Y: 30.019203 MHz

which gives a difference of 203 Hz. In the first figure it can be seen that, for the old setup with the 30 m cables, both cables show a spike at 203 Hz with an amplitude above 4 m/s^2/sqrt(Hz). When the 50 m cables were measured in the box with the conductive front panel, the amplitude at 203 Hz drops by a factor of around 3. I also compared the isolated front panel with the old setup, and found that the isolated front panel was worse than the old setup by a factor of just over 2. Therefore, I think that using the conductive front panel for the ALS delay line box will reduce noise and crosstalk between the cables the most.
Attachment 1: best4.png
Attachment 2: isolated4.png

8433   Wed Apr 10 01:10:22 2013 JenneUpdateLockingConfigure screen and scripts updated

I have gone through the different individual degrees of freedom on the IFO_CONFIGURE screen (I haven't done anything to the full IFO yet) and updated the burt snapshot request files to include all of the trigger thresholds (the DoF triggers were there, but the FM triggers and the FM mask - which filter modules to trigger - were not). I also made all of the restore scripts (which do the burt restore for all those settings) the same. They were widely different, rather than just differing in which optics are misaligned and restored. Before doing any of this work, I moved the whole folder ..../caltech/c1/burt/c1ifoconfigure to ..../caltech/c1/burt/c1ifoconfigure_OLD_but_SAVE, so we can go back and look at the past settings if we need to.

I also changed the "C1save{DoF}" scripts to ask for keyboard input, and then added them as options to the CONFIGURE screen. The keyboard input is so that people randomly pushing buttons don't overwrite our saved burt files. Here's the secret: it asks if you are REALLY sure you want to save the configuration. If you are, type the word "really", then hit enter (as in, yes, I am really sure). Any other answer, and the script will tell you that it is quitting without saving.

I have also removed the "PRM" option, since we really only want the "PRMsb" for almost all purposes. Also, I removed access to the very, very old text file about how to lock from the screen. That information is now on the wiki: https://wiki-40m.ligo.caltech.edu/How_To/Lock_the_Interferometer

I have noted in the drop-down menus that the "align" functions are not yet working. I know that Den has gotten at least one of the arms' ASSes working today, so once those scripts are ready, we can call them from the configure screen. Anyhow, the IFO_CONFIGURE screen should be back to being useful!
8722   Wed Jun 19 02:46:19 2013 JenneUpdateCDSConnected ADC channels from IOO model to ASS model

Following Jamie's table in elog 8654, I have connected channels 0, 1 and 2 from ADC0 on the IOO computer to RFM send blocks, which send the signals over to the rfm model, and then I use Dolphin send blocks to get over to the ass model on the lsc machine. I'm using the first 3 channels on the Pentek generic interface board, which is why I'm using channels 0, 1, 2. I compiled all 3 models (ioo, rfm, ass) and restarted them. I also restarted the daqd on the fb, since I put a temporary set of filter banks into the ass model to use as sinks for the signal (since I haven't done anything else to the ASS model yet). All 3 models were checked in to the svn.

16479   Mon Nov 22 17:42:19 2021 AnchalUpdateGeneralConnected Megatron to battery backed ports of another UPS

[Anchal, Paco]

I used the UPS that was providing battery backup for chiara earlier (an APC Back-UPS Pro 1000) to provide battery backup to Megatron. This completes UPS backup for all important computers in the lab. Note that this UPS nominally consumes 36% of its capacity in power delivery, but at start-up Megatron's many fans use up to 90% of the capacity. So we should not use this UPS for any other computer or equipment.

While doing so, we found that PS3 on Megatron was malfunctioning. Its green LED was not lighting up on connecting to power, so we replaced it with the PS3 from the old FB computer in the same rack. This solved the issue.

Another thing we found was that Megatron on restart does not get configured with the correct nameserver resolution settings and loses the ability to resolve the names chiara and fb1. This causes the nfs mounts to fail, which in turn causes the script services to fail. We fixed this by identifying that the NetworkManager of Ubuntu was not disabled and would mess up the nameserver settings, which we want to be managed by systemd-resolved instead.
We corrected the symbolic link /etc/resolv.conf -> /run/systemd/resolve/resolv.conf, then we stopped and disabled the NetworkManager service to keep this persistent across reboots. These are the steps (note that ln takes the target first and the link name second):

> sudo rm /etc/resolv.conf
> sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
> sudo systemctl stop NetworkManager.service
> sudo systemctl disable NetworkManager.service

16396   Tue Oct 12 17:20:12 2021 AnchalSummaryCDSConnected c1sus2 to martian network

I connected c1sus2 to the martian network by splitting the c1sim connection with a 5-way switch. I also ran another ethernet cable from the second port of c1sus2 to the DAQ network switch on 1X7.

Then I logged into chiara and added the following in chiara:/etc/dhcp/dhcpd.conf:

host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.92;
}

And the following line in chiara:/var/lib/bind/martian.hosts:

c1sus2          A    192.168.113.92

Note that entries for c1bhd are already present in these files, probably from some earlier testing by Gautam or Jon. Then I ran the following to restart the dhcp server and nameserver:

~> sudo service bind9 reload
[sudo] password for controls:
 * Reloading domain name service... bind9    [ OK ]
~> sudo service isc-dhcp-server restart
isc-dhcp-server stop/waiting
isc-dhcp-server start/running, process 25764

Now, as I switched on c1sus2 from the front panel, it booted over the network from fb1 like the other FE machines, and I was able to log in to it by first logging in to fb1 and then sshing to c1sus2.

Next, I copied the simulink models and the medm screens of c1x06, c1x07, c1bhd, c1sus2 from the paths mentioned on this wiki page. I also copied the medm screens from chiara(clone):/opt/rtcds/caltech/c1/medm to the martian-network chiara in the appropriate places. I have placed the file /opt/rtcds/caltech/c1/medm/teststand_sitemap.adl, which can be used to open the sitemap for the c1bhd and c1sus2 IOP and user models.
Then I logged into c1sus2 (via fb1) and did the make, install, start procedure:

controls@c1sus2:~ 0$ rtcds make c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### building c1x07...
Cleaning c1x07...
Done
Parsing the model c1x07...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1x07...
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl

Successfully compiled c1x07
***********************************************
Compile Warnings, found in c1x07_warnings.log:
***********************************************
***********************************************
controls@c1sus2:~ 0$ rtcds install c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### installing c1x07...
Installing system=c1x07 site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1X07.txt
Installing /opt/rtcds/caltech/c1/target/c1x07/c1x07epics
Installing /opt/rtcds/caltech/c1/target/c1x07
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1x07
/opt/rtcds/caltech/c1/scripts/startc1x07
sudo: unable to resolve host c1sus2
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl -par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_211012_174226.par -gds_node=24 -site_letter=C -system=c1x07 -host=c1sus2
Installing GDS node 24 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1x07.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1X07.ini
Installing Epics MEDM screens
Running post-build script
safe.snap exists
controls@c1sus2:~ 0$ rtcds start c1x07
Cannot start/stop model 'c1x07' on host c1sus2.
controls@c1sus2:~ 4$ rtcds list
controls@c1sus2:~ 0$


One can see that even after making and installing, the model c1x07 is not listed among the available models by rtcds list. The same is the case for c1sus2. So I could not proceed with testing.

The good news is that nothing I did affected the current CDS functioning. So we can probably do this testing safely from the main CDS setup.

16397   Tue Oct 12 23:42:56 2021 KojiSummaryCDSConnected c1sus2 to martian network

Don't you need to add the new hosts to /diskless/root/etc/rtsystab at fb1? --> There appear to be many elogs talking about editing "rtsystab".

controls@fb1:/diskless/root/etc 0$ cat rtsystab
#
# host     list of control systems to run, starting with IOP
#
c1iscex  c1x01  c1scx c1asx
c1sus    c1x02  c1sus c1mcs c1rfm c1pem
c1ioo    c1x03  c1ioo c1als c1omc
c1lsc    c1x04  c1lsc c1ass c1oaf c1cal c1dnn c1daf
c1iscey  c1x05  c1scy c1asy
#c1test  c1x10  c1tst2

164   Wed Dec 5 10:57:08 2007 albertoHowToComputersConnecting the GPIBto USB interface to the Dell laptop
The interface works only on one of the USB ports of the laptop (the one on the right, looking at the computer from the back).
1972   Tue Sep 8 12:26:16 2009 AlbertoUpdatePSLConnection of the RC heater's power supply replaced

I have replaced the temporary clamps that were connecting the RC heater to its power supply with a new permanent connection.

In the IY1 rack, I connected the control signal of the RC PID temperature servo - C1:PSL-FSS_TIDALSET - to the input of the RC heater's power supply.

The signal comes from a DAC in the same rack, through a pair of wires connected to the J9-4116*3-P3 cross-connector (FLKM). I joined the pair to the wires of the BNC cable coming from the power supply by twisting and screwing them into two available clamps of the breakout FLKM in the IY1 rack - the same one connected to the ribbon cable from the RC Temperature box.

Instead of opening up the BNC cable coming from the power supply, I thought it was a cleaner and more robust solution to use a BNC-to-crocodile-clip cable from which I had cut the clips off.

During the transition, I connected the power supply's BNC input to a voltage source that I set to the same voltage as the control signal before I disconnected it (~1.145 V).

I monitored the temperature signals and it looked like the RC Temperature wasn't significantly affected by the operation.

290   Fri Feb 1 10:43:05 2008 JohnUpdateEnvironmentConstruction work
The boys next door have some bigger noisier toys.
Attachment 1: DSC_0433.JPG
14273   Tue Nov 6 10:03:02 2018 SteveUpdateElectronicsContec board found

The Contec test board with Dsub37Fs was on the top shelf of E7

Attachment 1: DSC01836.JPG
823   Mon Aug 11 12:42:04 2008 josephbConfigurationComputersContinuing saga of c1susvme1
Coming back after lunch around 12:30pm, c1susvme1's status was again red. After switching off the watchdogs, a reboot (ssh, su, reboot) and restarting startup.cmd, c1susvme1 is still reporting a max sync value (16384), occasionally dropping down to about 16377. The error light cycles between green and red as well.

At this point, I'm under the impression further reboots are not going to solve the problem.

Currently leaving the watchdogs associated with c1susvme1 off for the moment, at least until I get an idea of how to proceed.
12564   Fri Oct 14 19:59:09 2016 YinziUpdateGreen LockingContinuing work with the TC 200

Oct. 15, 2016

Another attempt (following elog 8755) to extract the oven transfer function from time series data using Matlab’s system identification functionalities.

The same time series data from elog 8755 was used in Matlab’s system identification toolbox to try to find a transfer function model of the system.

From elog 8755: H(s) is known from the current PID gains: H(s) = 250 + 60/s + 25s, and from the approximation G(s) = K/(1+Ts), we can expect the transfer function of the system to have 3 poles and 2 zeros.

I tried fitting a continuous-time and a discrete time transfer function with 3 poles and 2 zeros, as well as using the "quick start" option. Trying to fit a discrete time transfer function model with 3 poles and 2 zeros gave the least inaccurate results, but it’s still really far off (13.4% fit to the data).
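As a sanity check of the fitting approach, here is a minimal sketch (not the Matlab toolbox workflow; the plant values K = 2, T = 50 s are hypothetical, and the data are noise-free and synthetic) of identifying a discretized first-order oven model G(s) = K/(1+Ts) from step-response data by ARX least squares:

```python
import numpy as np

# Simulate a discretized first-order oven model G(s) = K / (1 + T s)
K, T, dt = 2.0, 50.0, 1.0                   # hypothetical gain, time constant [s], sample time
a = np.exp(-dt / T)                         # discrete pole
b = K * (1 - a)                             # discrete gain
n = 500
u = np.ones(n)                              # step input (heater drive)
y = np.zeros(n)
for k in range(1, n):
    y[k] = a * y[k - 1] + b * u[k - 1]

# ARX least-squares fit: y[k] = a_hat * y[k-1] + b_hat * u[k-1]
Phi = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

T_hat = -dt / np.log(a_hat)                 # recovered time constant
K_hat = b_hat / (1 - a_hat)                 # recovered DC gain
print(T_hat, K_hat)                         # recovers T = 50, K = 2 on clean data
```

On real TC 200 data the fit quality depends heavily on input excitation, which is consistent with idea 1 below: a step alone poorly constrains the model, so modulating the input should improve the identification.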

Ideas:

1. Obtain more time domain data with some modulation of the input signal (also gives a way to characterize nonlinearities like passive cooling). This can be done with some minor modifications to the existing code on the raspberry pi. This should hopefully lead to a better system ID.

2. Try iterative tuning approach (sample gains above and below current gains?) so that a tune can be obtained without having to characterize the exact behavior of the heater.

Oct. 16, 2016

-Found the raspberry pi but it didn’t have an SD card

-Modified code to run directly on a computer connected to the TC 200. Communication seems to be happening, but a UnicodeDecodeError is thrown saying that the received data can’t be decoded.

-Some troubleshooting: tried utf-8 and utf-16 but neither worked. The raw data coming in is just strings of K’s, [‘s, and ?’s

-Will investigate possible reasons (update to Mac OS or a difference in Python version?), but it might be easier to just find an SD card for the raspberry pi which is known to work. In the meantime, modify code to obtain more time series data with variable input signals.

12190   Thu Jun 16 15:57:46 2016 gautamUpdateCOCContrast as a function of RoC of ETMX

Summary

In a previous elog, I demonstrated that the RoC mismatch between ETMX and ETMY does not result in appreciable degradation in the mode overlap of the two arm modes. Koji suggested also checking the effect on the contrast defect. I'm attaching the results of this investigation (I've plotted the contrast, $C = \frac{P_{\mathrm{max}}-P_{\mathrm{min}}}{P_{\mathrm{max}}+P_{\mathrm{min}}}$, rather than the contrast defect $1-C$).

Details and methodology

• I used the same .kat file that I had made for the current configuration of the 40m, except that I set the reflectivities of the PRM and the SRM to 0.
• Then, I traced the Y arm cavity mode back to the node at which the laser sits in my .kat file to determine what beam coming out of the laser would be 100% matched to the Y arm (code used to do this attached)
• I then set the beam coming out of the laser for the subsequent simulations to the value thus determined using the gauss command in finesse.
• I then varied the RoC of ETMX (I varied the sagittal and tangential RoCs simultaneously) between 50 m and 70 m. As per the wiki page, the spare ETMs have an RoC between 54 and 55 m, while the current ETMs have RoCs of 60.26 m and 59.48 m for the Y and X arms respectively (I quote the values in the "ATF" column). At each value of the RoC of ETMX, I swept the microscopic position of the ETMX HR surface through 2pi radians (-180 degrees to 180 degrees) using the phi functionality of finesse, while monitoring the power at the AS port of this configuration using a pd in finesse. This guarantees that I sweep through all the resonances. I then calculate the contrast using the above formula. I divided the parameter space into a grid of 50 points for the RoC of ETMX and 1000 points for the microscopic position of ETMX.
• I fixed the RoC of ETMY as 57.6m in the simulations... Also, the maxtem option in the .kat file is set to 4 (i.e. higher order modes with indices m+n<=4 are accounted for...)
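For intuition about why the RoC mismatch matters, a back-of-the-envelope estimate (my own sketch, not the Finesse model; it idealizes each arm as a plano-concave cavity with a flat ITM, with the arm length and RoC numbers assumed from this entry and the wiki) compares the arm waists and their single-pass mode overlap:

```python
import numpy as np

LAM = 1064e-9   # wavelength [m]
L_ARM = 37.79   # assumed 40m arm length [m]

def waist(roc):
    """Waist radius at the flat ITM of an idealized plano-concave cavity."""
    return np.sqrt((LAM / np.pi) * np.sqrt(L_ARM * (roc - L_ARM)))

def mode_overlap(w1, w2):
    """Power coupling of two fundamental modes with co-located waists."""
    return (2 * w1 * w2 / (w1**2 + w2**2)) ** 2

w_y = waist(60.2)    # current ETMY RoC
w_x = waist(54.8)    # spare ETM RoC
print("overlap = %.4f" % mode_overlap(w_y, w_x))
```

The resulting ~0.5% mismatch between the two arm modes is the same order as the contrast loss the Finesse scan predicts for a ~54-55 m spare, so the simple picture and the full simulation are consistent.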

Result:

Attachment #1 shows the result of this scan (as mentioned earlier, I plot the contrast C and not the contrast defect 1-C; sorry for the wrong plot title, but it takes ~30 mins to run the simulation, which is why I didn't want to do it again). If the RoC of the spare ETMs is about 54 m, the loss in contrast is about 0.5%. This is in good agreement with this technical note by Koji - it tells us to expect a contrast defect in the region of 0.5%-1% (depending on what parameter you use as the RoC of ETMY).

Conclusion:

It doesn't seem that switching out the current ETM with one of the spare ETMs will result in dramatic degradation of the contrast defect...

Misc notes:

1. Regarding the phase command in Finesse - EricQ pointed out that its default value is 3, which as per the manual can sometimes give unphysical results. The flags "0" or "2" are guaranteed to always yield physical results according to the manual, so it is best to set this flag appropriately for all future Finesse simulations.
2. I quickly poked around inside the cabinet near the EX table labelled "clean optics" to see if I could locate the spare ETMs. In my (non-exhaustive) search, I could not find them in any of the boxes labelled "2010 upgrade" or something to that effect. I did, however, find empty boxes for ETMU05 and ETMU07, which are the ETMs currently in the IFO... Does anyone know if I should look elsewhere for these?
EDIT 17Jun2016: I have located ETMU06 and ETMU08, they are indeed in the cabinet at the X end...
3. I'm attaching a zip file with all the code used to do this simulation. The phase flag has been appropriately set in the (only) .kat file. setLaserQparam.py was used to determine what beam parameter to assign to be perfectly matched to the Y arm. modeMatchCheck_ETM.py was used to generate the contrast as a function of the RoC of ETMX.
4. With regards to the remaining checks to be done - I will post results of my investigations into the HOM scans as a function of the RoC of the ETMs and also the folding mirrors shortly...
Attachment 1: contrastDefect.pdf
Attachment 2: finesseCode.zip
12193   Thu Jun 16 18:42:12 2016 ranaUpdateCOCContrast as a function of RoC of ETMX

That sounds weird. If the ETMY RoC is 60 m, why would you use 57.6 m in the simulation? According to the phase map web page, it really is 60.2 m.

12194   Thu Jun 16 23:02:57 2016 gautamUpdateCOCContrast as a function of RoC of ETMX
 Quote: That sounds weird. If the ETMY RoC is 60 m, why would you use 57.6 m in the simulation? According to the phase map web page, it really is 60.2 m.

This was an oversight on my part. I've updated the .kat file to have all the optics have the RoC as per the phase map page. I then re-did the tracing of the Y arm cavity mode to determine the appropriate beam parameters at the laser in the simulation, and repeated the sweep of RoC of ETMX while holding RoC of ETMY fixed at 60.2m. The revised contrast defect plot is attached (this time it is the contrast defect, and not the contrast, but since I was running the simulation again I thought I may as well change the plot).

As per this plot, if the ETMX RoC is ~54.8m (the closer of the two spares to 60.2m), the contrast defect is 0.9%, again in good agreement with what the note linked in the previous elog tells us to expect...

Attachment 1: contrastDefect.pdf
12197   Mon Jun 20 01:38:04 2016 ranaUpdateCOCContrast as a function of RoC of ETMX

So, it seems that changing the ETMX for one of the spares will change the contrast defect from ~0.1% to 0.9%. True? Seems like that might be a big deal.

12204   Mon Jun 20 18:07:15 2016 gautamUpdateCOCContrast as a function of RoC of ETMX
 Quote: So, it seems that changing the ETMX for one of the spares will change the contrast defect from ~0.1% to 0.9%. True? Seems like that might be a big deal.

That is what the simulation suggests... I repeated the simulation for a PRFPMI configuration (i.e. no SRM, everything else as per the most up-to-date 40m numbers), and the conclusion is roughly the same - the contrast defect degrades from ~0.1% to ~1.4%... So I would say this is significant. I also attempted to see what the contribution of the loss asymmetry between the arms is, by running the simulation with the current loss numbers of 230 ppm for the Y arm and 484 ppm for the X arm (split equally between the ITMs and ETMs in both cases), and then again with lossless arms - see Attachment #1. While this is a factor, the plot suggests that the RoC mismatch effect dominates the contrast defect...
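A toy model (my own sketch, not the Finesse run; the arm finesse of ~450 and the 2F/pi loss-enhancement factor are assumptions) illustrates the loss-asymmetry contribution alone. Treating each arm as returning a field amplitude a_i, the Michelson contrast is C = 2 a1 a2 / (a1^2 + a2^2), and the arm cavities enhance the effective round-trip loss:

```python
import numpy as np

# Assumed numbers: round-trip losses from this entry, arm finesse ~450
LOSS_Y, LOSS_X = 230e-6, 484e-6
FINESSE = 450

def contrast(a1, a2):
    """Michelson contrast for returning field amplitudes a1, a2."""
    return 2 * a1 * a2 / (a1**2 + a2**2)

# Plain Michelson: the loss asymmetry alone is negligible
c_simple = contrast(np.sqrt(1 - LOSS_Y), np.sqrt(1 - LOSS_X))

# FPMI: assume the arm cavity enhances the effective loss by roughly 2F/pi
gain = 2 * FINESSE / np.pi
c_fpmi = contrast(np.sqrt(1 - gain * LOSS_Y), np.sqrt(1 - gain * LOSS_X))

print("1-C (simple Michelson) = %.1e" % (1 - c_simple))
print("1-C (FPMI)             = %.1e" % (1 - c_fpmi))
```

With these assumed numbers the cavity-enhanced loss asymmetry gives a contrast defect of order 0.1%, i.e. comparable to the lossless-vs-lossy difference in the simulation, while remaining well below the ~1.4% RoC-mismatch effect.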

Attachment 1: contrastDefectComparison.pdf
17020   Tue Jul 19 18:41:42 2022 yutaUpdateBHDContrast measurements for Michelson and ITM-LO

[Paco, Yuta]

We measured contrast of Michelson fringe in both arms locked and mis-aligned. It was around 90%.
We also measured the contrast of ITM single bounce vs LO beam using BHD DC PDs. It was around 43%.
The measurement depends on the alignment and on how the maximum and minimum of the fringe are measured. The ITM-LO fringe was also not stable because the motions of the AS/LO mirrors are large. More tuning is necessary.

Background
- As measured in elog 40m/17012, we see a lot of CARM in AS, which indicates large contrast defect.
- We want to check mode-matching of LO beam to AS beam.

BHD DC PD conditioning
- We added DCPD_A and DCPD_B to /opt/rtcds/caltech/c1/scripts/LSC/LSCoffsets3 script, which zeros the offsets when shutters are closed.
- We also set C1:LSC-DCPD_(A|B)_GAIN = -1 since they are inverted.

Contrast measurement
- Contrast was measured using channels ['C1:LSC-ASDC_OUT','C1:LSC-POPDC_OUT','C1:LSC-REFLDC_OUT','C1:LSC-DCPD_A_OUT','C1:LSC-DCPD_B_OUT']. For LO, only DCPD_(A|B) are used.
- We took the top and bottom 15% of samples (40% for the ITM-LO fringe) relative to the maximum and minimum of the data, and took the median of each to estimate the maximum and minimum values (see Attachment).
- Contrast = (Imax - Imin) / (Imax + Imin)
- We measured three times in each configuration to estimate the standard error.
- Jupyter notebook: https://git.ligo.org/40m/scripts/-/blob/main/CAL/BHD/measureContrast.ipynb
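The percentile-and-median estimator above can be sketched in numpy (a stand-alone illustration, not the actual notebook; the synthetic fringe amplitude and noise level are made up):

```python
import numpy as np

def fringe_contrast(power, tail=0.15):
    """Estimate fringe contrast from a swept-fringe power time series.

    Takes the top and bottom `tail` fraction of samples and uses the
    median of each to robustly estimate Imax and Imin.
    """
    x = np.sort(np.asarray(power))
    n = max(1, int(tail * len(x)))
    imin = np.median(x[:n])
    imax = np.median(x[-n:])
    return (imax - imin) / (imax + imin)

# Synthetic fringe with 90% true contrast plus a little sensor noise
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
p = 1 + 0.9 * np.sin(2 * np.pi * 1.3 * t) + 0.01 * rng.standard_normal(t.size)
print("C = %.3f" % fringe_contrast(p))
```

Because the median of each tail sits inside the true extrema, this estimator slightly underestimates the contrast (the synthetic 90% fringe comes out a few percent low), which is consistent with the need to tune the threshold noted in the Discussion.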

Results
Both arms locked, MICH fringe (15% percentile)
Contrast measured by C1:LSC-ASDC_OUT is 89.75 +/- 0.17 %
Contrast measured by C1:LSC-POPDC_OUT is 79.41 +/- 0.86 %
Contrast measured by C1:LSC-REFLDC_OUT is 97.34 +/- 0.34 %
Contrast measured by C1:LSC-DCPD_A_OUT is 95.41 +/- 1.55 %
Contrast measured by C1:LSC-DCPD_B_OUT is 89.76 +/- 1.49 %
Contrast measured by all is 90.34 +/- 1.68 %

Both arms mis-aligned, MICH fringe (15% percentile)
Contrast measured by C1:LSC-ASDC_OUT is 89.32 +/- 0.57 %
Contrast measured by C1:LSC-POPDC_OUT is 94.55 +/- 0.62 %
Contrast measured by C1:LSC-REFLDC_OUT is 97.95 +/- 1.37 %
Contrast measured by C1:LSC-DCPD_A_OUT is 96.40 +/- 1.04 %
Contrast measured by C1:LSC-DCPD_B_OUT is 90.98 +/- 1.07 %
Contrast measured by all is 93.84 +/- 0.94 %

ITMY-LO fringe (40% percentile)
Contrast measured by C1:LSC-DCPD_A_OUT is 45.51 +/- 0.45 %
Contrast measured by C1:LSC-DCPD_B_OUT is 38.69 +/- 0.43 %
Contrast measured by all is 42.10 +/- 1.03 %

ITMX-LO fringe (40% percentile)
Contrast measured by C1:LSC-DCPD_A_OUT is 46.65 +/- 0.65 %
Contrast measured by C1:LSC-DCPD_B_OUT is 39.82 +/- 0.51 %
Contrast measured by all is 43.24 +/- 1.45 %

Discussion
- As you can see from the attachment, REFLDC is noisy and overestimates the contrast. ASDC is reliable. We need to tune the threshold used to measure the maximum and minimum values. We should also use the mode instead of the median.
- Contrast depends very much on the alignment. We didn't tweak too much today.
- ITM-LO fringe was not stable, probably due to too much motion in AS1, AS4, LO1, LO2. Their damping needs to be re-tuned.

Next:
- Model FPMI sensing matrix with measured contrast defect
- Estimate AS-LO mode-mismatch using the measured contrast
- Lock ITM-LO fringe using DCPD_(A|B) as error signal, and ITM or LO1/2 as actuator
- Lock MICH with DCPD_(A|B), and with LO beam
- Get better contrast data with better alignment and better AS1, AS4, LO1, LO2 damping

Attachment 1: ContrastMeasurements.pdf
15595   Tue Sep 22 16:29:30 2020 KojiUpdateGeneralControl Room AC setting for continuous running

I came to the lab. The control room AC was off -> Now it is on.

Here is the setting of the AC meant for continuous running

Attachment 1: P_20200922_161125.jpg
7866   Thu Dec 20 19:46:20 2012 ranaConfigurationEnvironmentControl Room Projector

Needs

1927   Wed Aug 19 02:17:52 2009 ranaOmnistructureEnvironmentControl Room Workstation desks lowered to human height

There were no injuries...Now we need to get some new chairs.

1931   Thu Aug 20 09:16:32 2009 steveHowToPhotosControl Room Workstation desks lowered to human height

 Quote: There were no injuries...Now we need to get some new chairs.

The control room desk top heights on the east side were lowered by 127 mm

Attachment 1: P1040788.png
Attachment 2: P1040782.png
Attachment 3: P1040786.png
Attachment 4: P1040789.png
Attachment 5: P1040785.png
14778   Fri Jul 19 15:54:47 2019 gautamUpdateGeneralControl room UPS Batteries need replacement

The control room UPS started making a beeping noise saying the batteries need replacement. I hit the "Test" button and the beeping went away. According to the label on it, the batteries were last replaced in March 2016, so maybe it is time for a replacement. @Chub, please look into this.
