  14316   Mon Nov 26 10:22:16 2018   aaron   Update   General   projector light bulb replaced

I replaced the projector bulb. Previous bulb was shattered.

  14324   Thu Nov 29 17:46:43 2018   gautam   Update   General   Some to-dos

[koji, gautam, jon, steve]

  • We suspect the analog voltage from the N2 pressure gauge is connected to the interfacing Omega controller with the 'wrong' polarity (i.e., the pressure appears to rise over ~4 days and then rapidly fall, instead of the other way around). This should be fixed.
  • N2 check script logic doesn't seem robust. Indeed, it has not been sending out warning emails (the threshold is set to 60 psi, and the pressure has certainly gone below this threshold even with the "wrong" polarity pressure gauge hookup). Probably the 40m list is rejecting the emails because controls isn't a member of the 40m group. (A sketch of a more robust check is included after this list.)
  • Old frames have to be re-integrated from JETSTOR to the new FB in order to have long timescale lookback.
  • N2 cylinder pressure gauges (at the cylinder end) need a power supply - @ Steve, has this been purchased? If not, perhaps @ Chub can order it.
  • The MEDM vacuum screen should be updated so that gate valves are a different color from the spring-loaded valves. The manual valve between TP1 and V1 should also be added.
  • P2, P3 and P4 aren't returning sensible values (they should all be reading ~760 torr, as P1 is). @ Steve, any idea if these gauges are broken?
  • Hornet gauges (CC and Pirani) should be hooked up to the new vacuum system.
  • Add slow channels for the foreline pressures of TP2 & TP3 and for C1:Vac-IG1_status_pressure.
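
A minimal sketch of what a more robust check could look like, assuming the line pressure is readable over EPICS with pyepics and that warnings are mailed directly over SMTP rather than through the 40m list. The channel name, recipient addresses, and mail host below are placeholders, not the actual script:

# Hypothetical sketch of a more robust N2 line-pressure check (not the script in use).
# Channel name, addresses, and mail host are placeholders; assumes pyepics is installed.
import smtplib
from email.mime.text import MIMEText
from epics import caget

PRESSURE_CHANNEL = "C1:Vac-N2_pressure"    # placeholder channel name
THRESHOLD_PSI = 60.0                       # warning threshold quoted above
RECIPIENTS = ["someone@example.com"]       # mail people directly, bypassing the 40m list

pressure = caget(PRESSURE_CHANNEL, timeout=5.0)
if pressure is None:
    body = "N2 check: could not read %s (EPICS timeout?)" % PRESSURE_CHANNEL
elif pressure < THRESHOLD_PSI:
    body = "N2 check: line pressure %.1f psi is below %.1f psi" % (pressure, THRESHOLD_PSI)
else:
    body = ""

if body:
    msg = MIMEText(body)
    msg["Subject"] = "N2 pressure warning"
    msg["From"] = "controls@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    with smtplib.SMTP("localhost") as server:   # assumes a local MTA is available
        server.sendmail(msg["From"], RECIPIENTS, msg.as_string())
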
  14325   Fri Nov 30 15:53:52 2018   Jon   Omnistructure   General   N2 pressure gauge fix

I've made a repair to the N2 pressure monitor. I don't believe the polarity of the analog signal into the controller actually was reversed. I found the data sheet (attached) for the transducer model we have installed. Its voltage should read ~0 at 0 PSI and 100mV at 100 PSI. As wired, the input voltage reads +80 mV as it should.

The controller calibrates the sensor voltage to PSI (i.e., applies a scale and offset) based on two settable reference points which appeared to be incorrect. I changed them to:

  1. 0 mV = 0 PSI (neglecting the small dark bias)
  2. 100 mV = 100 PSI

After the change, the pressure reads 80 PSI. Let's see if the time history now shows a sensible trend. 

Quote:

[koji, gautam, jon, steve]

  • We suspect analog voltage from N2 pressure gauge is connected to interfacing Omega controller with the 'wrong' polarity (i.e pressure is rising over ~4 days and then rapidly falling instead of the other way around). This should be fixed.

 

  14331   Tue Dec 4 18:24:05 2018   gautam   Omnistructure   General   N2 line disconnected

[jon, gautam]

In the latest installment of this puzzler: it turns out that the trend of the "N2 pressure" channel increasing over the ~3 day timescale it takes a cylinder of N2 to run out may be real, and a feature of the way our two N2 cylinder lines/regulators are set up (for the automatic switching between cylinders when one runs out). In order to test this hypothesis, we'd like to have the line pressure be 0 initially, and then have just 1 cylinder hooked up. When we went into the drill-press area, we heard a hiss; it turns out that one of the cylinders is leaking (to be fair, this was labelled, but I thought it isn't great to have an elevated N2 concentration in an enclosed space). Since we don't need any actuation ability, I valved off the leaky cylinder and disconnected the other, properly functioning one. Attachment #1 shows the current state.

  14333   Thu Dec 6 17:33:33 2018   Jon   Omnistructure   General   N2 line disconnected

I believe I finally have the N2 gauge working correctly. The wiring is unchanged from its original state and the controller has been recalibrated.

After letting the line pressure drop to 0 PSI as indicated by the analog gauge in the drill-press room, I recorded the number of counts read by the Omega controller. Then I pressurized the line to 80 PSI, again as indicated by the analog gauge, and recorded the Omega counts again. I entered these two reference points into the controller (which automatically determines the gain and offset from them), then confirmed that the readings agreed with the analog gauge as I varied the line pressure.

The two reference points are:

0 PSI  :  13 counts
80 PSI : 972 counts
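
For reference, the linear calibration implied by these two points (a back-of-envelope check, not the controller's internal routine):

# Two-point calibration implied by the reference values above.
p0, c0 = 0.0, 13.0      # 0 PSI at 13 counts
p1, c1 = 80.0, 972.0    # 80 PSI at 972 counts

gain = (p1 - p0) / (c1 - c0)    # ~0.083 PSI per count
offset = p0 - gain * c0         # ~-1.1 PSI

def counts_to_psi(counts):
    """Convert raw Omega controller counts to line pressure in PSI."""
    return gain * counts + offset

print(counts_to_psi(972))       # -> 80.0, reproduces the upper reference point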

 

Quote:

[jon, gautam]

In the latest installment in this puzzler: turns out that maybe the trend of the "N2 pressure" channel increasing over the ~3 day timescale it takes a cylinder of N2 to run out is real, and is a feature of the way our two N2 cylinder lines/regulators are setup (for the automatic switching between cylinders when one runs out). In order to test this hypothesis, we'd like to have the line pressure be 0 initially, and then just have 1 cylinder hooked up. 

 

  14347   Wed Dec 12 11:53:29 2018   aaron   Update   General   Power Outage

At 11:13 am there was a ~2-3 second interruption of all power at the 40m.

I checked that nobody was in any of the lab areas at the time of the outage.

I walked along both arms of the 40m and looked for any indicator lights or unusual activity. I took photos of the power supplies that I encountered, attached. I tried to be somewhat complete, but didn't have a list of things in mind to check, so I may have missed something. 

I noticed an electrical buzzing that seemed to emanate from one of the AC adapters on the vacuum rack. I've attached a photo of which one; the buzzing changes when I touch the case of the adapter. I did not modify anything on the vacuum rack. There is also 

Most of the cds channels are still down. I am going through the wiki for procedures on what to log when the power goes off, and will follow the procedures here to get some useful channels.

  14349   Thu Dec 13 01:26:34 2018   gautam   Update   General   Power Outage recovery

[koji, gautam]

After several combinations of soft/hard reboots for FB, FEs and expansion chassis, we managed to recover the nominal RTCDS status post power outage. The final reboots were undertaken by the rebootC1LSC.sh script while we went to Hotel Constance. Upon returning, Koji found all the lights to be green. Some remarks:

  1. It seems that we need to first turn on FB (the bring-up commands are also collected into a short script after this list)
    • Manually start the open-mx and mx services using
      sudo systemctl start open-mx.service 
      sudo systemctl start mx.service
    • Check that the system time returned by gpstime matches the gpstime reported by internet sources.
    • Manually start the daqd processes using
      sudo systemctl start daqd_*
  2. Then fully power cycle (including all front and rear panel power switches/cables) the FEs and the expansion chassis.
    • This seems to be a necessary step for models run on c1sus (as reported by the CDS MEDM screen) to pick up the correct system time (the FE itself seems to pick up the correct time, not sure what's going on here).
    • This was necessary to clear 0x4000 errors.
  3. Power on the expansion chassis.
  4. Power on the FE.
  5. Start the RTCDS models in the usual way
    • For some reason, there is a 1 second mismatch between the gpstime returned on the MEDM screen for a particular CDS model status, and that in the terminal for the host machine.
    • This in itself doesn't seem to cause any timing errors. But see remark about c1sus above in #2.
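
The FB bring-up commands from item 1, collected into a script for reference. The commands are the ones quoted above; the Python wrapper itself (and the assumption that it is run with Python 3, as root or via sudo on FB) is just a sketch:

#!/usr/bin/env python3
# Wraps the FB bring-up commands from item 1 above; run as root (or with sudo) on FB.
import subprocess

STEPS = [
    "systemctl start open-mx.service",
    "systemctl start mx.service",
    "gpstime",                       # compare the printed GPS time against an internet source by eye
    "systemctl start daqd_*",        # same shell glob as quoted above
]

for cmd in STEPS:
    print("+ " + cmd)
    subprocess.run(cmd, shell=True, check=True)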

The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.

  14350   Thu Dec 13 10:03:07 2018   Chub   Update   General   OMC chamber

Bob, Aaron, and I removed the door from the OMC chamber this morning.  Everything went well.

  14351   Thu Dec 13 12:06:35 2018   gautam   Update   General   Power Outage recovery

I did a walkaround and checked the status of all the interlock switches I could find based on the SOP and interlock wiring diagram, but the PSL remains interlocked. I don't want to futz around with AC power lines, so I will wait for Koji before debugging further. None of the "Danger" signs at the VEA entry points are on, suggesting to me that the problem lies pretty far upstream in the wiring, possibly at the AC line input? The red lights around the PSL enclosure, which are supposed to signal if the enclosure doors are not properly closed, also do not turn on, supporting this hypothesis...

I confirmed that there is nothing wrong with the laser itself - I manually shorted the interlock pins on the rear of the controller and the laser turned on fine, but I am not comfortable operating in this hacky way, so I have restored the interlock connections until we decide the next course of action...

Quote:
 

The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.

  14353   Thu Dec 13 20:10:08 2018   Koji   Update   General   Power Outage recovery

[Gautam, Aaron, Koji]

The PSL interlock system was fixed and now the 40m lab is laser hazard as usual.


- The schematic diagram of the interlock system D1200192
- We opened the interlock box and immediately found that the DC switching supply (OMRON S82K-00712) is no longer functioning. (Attachment #1)
- We could not remove the module as the power supply was attached to the DIN rail. We decided to leave the broken supply there (it is still AC powered, with no DC output).

- Instead, we brought in a DC supply adapter from somewhere and chopped off its connector so that we could hook it up to the crimp-type quick connects. In Attachment #1, the gray wire is +12V, and the orange and black lines are GND.

- Upon inspection, the wires of the "door interlock reset button" had fallen off and the momentary switch (GRAYHILL 30-05-01-502-03) was broken. So it was replaced with another momentary switch, which is unfortunately much smaller than the original. (Attachments 2 and 3)

- Once the DC supply adapter was plugged into an AC tap, we heard the relays working, and we recovered the laser hazard lamps and the PSL door alarm lamps. It was also confirmed that the PSL Innolight is operable now.

- BTW, there is a big switch box on the wall close to the PSL enclosure. Some of the green lamps were out. We found that we have plenty of spare lamps and relays inside the box, so we replaced the bulbs and now the A.C. lights are functioning. (Attachments 4 & 5)

  14360   Fri Dec 14 22:19:22 2018   Koji   Update   General   Chiara new USB 4TB Disk

Edit: It was not a 4TB disk but in fact a 6TB disk. (We actually ordered a 4TB disk...)

I think the problem of the backup disk was the flaky power supply for the external drive.
I swapped the drive to a new HGST 4TB one, but it was neither recognized nor spun up with the external power supply we had. So I decided to put both the new and old drives in the PC chassis and power them with the internal power supply. I tested the old disk via a USB-SATA cable. However, this disk was not recognized. I noticed that the disk was not an HGST 4TB but a Seagate 3TB. Is that possible? I thought it was 4TB... Did I miss something?

Once the new 4TB drive was connected via the USB-SATA cable, it was very smooth to get it mounted. The disk is now mounted as /media/40mBackup, as before. /etc/fstab was also modified with the new UUID. All the command logs are below.

Let's see how the morning backup goes. It would take a while to copy everything on the new disk. So it was actually very nice to set this disk up by Friday midnight.


controls@chiara|~> lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk 
├─sda1   8:1    0 446.9G  0 part /
├─sda2   8:2    0     1K  0 part 
└─sda5   8:5    0  18.9G  0 part [SWAP]
sdb      8:16   0   1.8T  0 disk 
└─sdb1   8:17   0   1.8T  0 part 
sdc      8:32   0   3.7T  0 disk 
└─sdc1   8:33   0   3.7T  0 part /home/cds
sr0     11:0    1  1024M  0 rom  
sdd      8:64   0   5.5T  0 disk 

controls@chiara|~> sudo mkfs -t ext4 /dev/sdd1

mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183144448 inodes, 1465130385 blocks
73256519 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
44713 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
    102400000, 214990848, 512000000, 550731776, 644972544
Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done      

controls@chiara|~> blkid

/dev/sda1: UUID="972db769-4020-4b74-b943-9b868c26043a" TYPE="ext4" 
/dev/sda5: UUID="a3f5d977-72d7-47c9-a059-38633d16413e" TYPE="swap" 
/dev/sdc1: UUID="92dc7073-bf4d-4c58-8052-63129ff5755b" TYPE="ext4" 
/dev/sdd1: UUID="1843f813-872b-44ff-9a4e-38b77976e8dc" TYPE="ext4" 

controls@chiara|~> sudo emacs -nw /etc/fstab
controls@chiara|~> cat /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/sda1 during installation
UUID=972db769-4020-4b74-b943-9b868c26043a /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=a3f5d977-72d7-47c9-a059-38633d16413e none            swap    sw              0       0
#UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0
UUID="1843f813-872b-44ff-9a4e-38b77976e8dc"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0

#fb:/frames      /frames nfs     ro,bg


UUID=92dc7073-bf4d-4c58-8052-63129ff5755b   /home/cds    ext4    defaults,relatime,commit=60    0   0

controls@chiara|~> sudo mount -a
controls@chiara|~> df

Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sda1       461229088   10694700  427105320   3% /
udev             15915020         12   15915008   1% /dev
tmpfs             3185412        868    3184544   1% /run
none                 5120          0       5120   0% /run/lock
none             15927044        484   15926560   1% /run/shm
/dev/sdc1      3845709644 1809568856 1840789912  50% /home/cds
/dev/sdd1      5814346836     190408 5521130352   1% /media/40mBackup
  14361   Sat Dec 15 18:29:53 2018   Koji   Update   General   Chiara new USB 4TB Disk

The local backup was done at 18:18 after 11h18m of running.

2018-12-15 07:00:01,699 INFO       Updating backup image of /cvs/cds
2018-12-15 18:17:56,378 INFO       Backup rsync job ran successfully, transferred 5717707 files.

 

  14364   Tue Dec 18 11:42:40 2018   Chub   Update   General   Acromag box wired

The Auxiliary DAQ Chassis, or Acromag box, is now wired and ready for testing.  I will be sorting the cables at the vacuum rack to make connection to the box easier.

 

  14369   Wed Dec 19 19:51:19 2018   gautam   Update   General   Pumpdown prep

[Koji, gautam]

Summary:

We are ready to put the heavy doors back on the chambers and do some test pumpdowns tomorrow morning if Jon gives us the go-ahead. Also, Koji made the OMC resonate some of the AUX beam light we send into it.

Details:

  1. EY work:
    • IMC was locked, and we attempted to locate the beam with an IR card inside the chamber.
    • Koji found that the beam was too high, we were over-shooting the entire black-glass baffle on the EY table.
    • So I moved the TTs to try and center the beam through the aperture of aforementioned baffle.
    • Once this was done, we found that the beam was misaligned in yaw by ~1-inch in transmission on the EY optics table (there was an iris in place marking the cavity transmission axis). This explains why I couldn't find any TRY flashes while moving the TTs around.
    • We hypothesize that without the 2 degree ETM wedge in place, there isn't a compatible axis for the ITM transmission to also make it through the EY baffle and transmission iris. Over ~1m, the 2 degree wedge makes roughly 1.4 inch translation in yaw, so this seems to be a plausible hypothesis.
    • The ETMY suspension was moved from the mini-cleanroom setup back into the EY vacuum chamber. Two clamps (finger tightened only) hold it in place on the NE edge of the optical table. We decided that this is a better resting place for the cage over the holidays than an in-air cleanroom.
  2. OMC chamber work:
    • While we were in clean garb, we decided to also investigate the OMC situation a bit.
    • It quickly became apparent that it was hopeless for me to work in the tightly confined IOO chamber. So Koji went in to have a look.
    • Koji will post the detailed alignment procedure - but after some alignment of the AUX laser input beam axis using in air steering mirrors and Koji's expert tweaking of the pointing into the OMC, we observed some resonances of the OMC.
    • Attachment #1 shows the full-range triangle ramp applied to the OMC length PZT (top row) and the OMC REFL signal (bottom row), measured using a PDA520 (chosen for its large active area) connected to a scope (AC-coupled, 1Mohm impedance, averaged to make the dips more prominent).
    • The OMC transmission was also (barely) visible on an IR card.
    • So the OMC length PZT seems capable of sweeping the length of the cavity. Based on the size of the dips we saw, the MM into the cavity is sub 1-percent.
    • The transmission PDs didn't output any measurable signal - but I'm not sure that the satellite box / readout electronics have been carefully characterized on the electronics bench, so that will have to be done first.
    • We replaced the copper cover of the OMC (finger tightened for now) in case we do any test pumpdowns tomorrow. HV supply has been turned off, and the AUX laser has been reverted to standby mode.
  14371   Wed Dec 19 22:11:28 2018   Koji   Update   General   How to align the copper OMC

The OMC input optics layout is attached

Checked the spot position on OMMT-FM1. It was off from the center, which was causing the spot on OMMT1 to be off-center. This was fixed with the steering mirror for the AUX laser.

The beam alignment onto the OMC was tweaked with OMC-SM1 and OMC-SM2. This was the painful part. We had to make a sensor card that could get into the narrow space of the OMC. (Attachment 2 right)

Attachment 2 left shows the naming convention of the OMC mirrors.

For the alignment, we fed 5 Vpp triangle waves at 3.1 Hz to the input of the PZT amp so that the cavity is scanned continuously. First, check the rough spot positions on OMC-CM1 and OMC-CM2. If you use the card carefully, you can check whether the beam is returning to OMC-IC. This return beam should have roughly the same height as the incident beam; this can be adjusted with either of the steering mirrors.

Once the beam is going around the mirrors multiple times, the spot alignment can be checked at OMC-CM1. Bring a card right in front of CM1. If the card is lifted slightly above the incident spot, this automatically allows the outgoing beam to go through. Depending on the pitch alignment, the next roundtrip (1RT) will be seen on the card. As you lift the card up more, you will be able to see more round-trip beams (e.g. 2RT, 3RT in the figure). If the yaw alignment is perfect, these spots will be lined up vertically. So you can first align the horizontal direction with the steering mirrors; then the vertical alignment can be done with the pitch knobs.

At this point you should be able to see some super high-order transmission at the OMC trans port. For today, we stopped here as we had already run out of range on multiple knobs. This is because the beam height in the mode matching telescope was not right, and the steering mirrors had to work beyond their range.

  14385   Fri Jan 4 15:18:15 2019   Koji   Update   General   Chiara disk clean up and internally mounted

[Koji Gautam]

We took the opportunity of the power glitch to take care of the disk situation on chiara.

- Unmounted /cvs/cds from nodus. This did not affect the services on nodus as they don't use /cvs/cds

- Went to chiara, shut it down, and physically checked the labels of the drives.

root = 0.5TB
/cvs/cds = 4TB HGST
backup of /cvs/cds= 6TB HGST

- These three disks are internally mounted and connected with SATA. Previously, 6TB was on USB.

- There were two other drives (2TB and 3TB), but they seemed logically or physically broken. These two disks were removed from chiara. (They came back online after reformatting on a Mac, so they still seem physically alive.)

controls@chiara|~> df
df: `/var/lib/lightdm/.gvfs': Permission denied
Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sda1       461229088   10690932  427109088   3% /
udev             15915020          4   15915016   1% /dev
tmpfs             3185412        848    3184564   1% /run
none                 5120          0       5120   0% /run/lock
none             15927044        144   15926900   1% /run/shm
/dev/sdb1      5814346836 1783407788 3737912972  33% /media/40mBackup
/dev/sdc1      3845709644 1884187232 1766171536  52% /home/cds
controls@chiara|~> lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
├─sda1   8:1    0 446.9G  0 part /
├─sda2   8:2    0     1K  0 part
└─sda5   8:5    0  18.9G  0 part [SWAP]
sdb      8:16   0   5.5T  0 disk
└─sdb1   8:17   0   5.5T  0 part /media/40mBackup
sdc      8:32   0   3.7T  0 disk
└─sdc1   8:33   0   3.7T  0 part /home/cds
sr0     11:0    1  1024M  0 rom

- Rebooted the machine and it came back without any errors. This time the control room machines were not shut down; they just recovered the NFS mount once chiara came back.

  14389   Tue Jan 8 10:27:27 2019   gautam   Update   General   Near-term in-chamber work

Here is a list of tasks I think we should prioritize for the next two weeks. The idea is to get back to the previous state of being able to do single arm, PRMI-on-carrier and DRMI locking, before making further changes.

Once the new folding mirrors arrive, I'd like to modify the SRC length to allow locking in the signal-recycled config as opposed to RSE. Still need to do the detailed layout, but I think the in-vacuum layout will work. In that case, I'd like to also move the OMC and OMMT to the IY table, and also move the in-air AS photodiodes to the IY in-air optical table. This is why I've omitted the OMC alignment from this near-term list, but if we want to not move the OMC, then we probably should add alignment of the AS beam to the OMC to this list.

List of in-chamber tasks for 1/2019

Chamber Task(s)
EY
  • Clean ETMY optic and suspension
  • Put ETMY suspension back in place, recover Y-arm cavity alignment
  • Remove any residual hardware from unused heater setup
  • Restore parabolic heater setup, center radiation pattern as best as possible on ETMY
  • Check beam position on IPANG steering mirror
IY
  • Clean ITMY optic and suspension cage
  • Restore ITMY suspension, recover Y arm cavity alignment.
  • Check position of AS beam on OM1/OM2
BS/PRM (if we decide to open it)
  • Replace BS/PRM Oplev HeNe, bring the beam in and out of vacuum with the beam well centered on the in-vacuum mirrors (we can take this opportunity to fix the in-air layout as well, to minimize unnecessary steering mirrors)
  • Check position of AS beam on OM3/OM4, adjust if necessary
  • Check position of IPPOS and IPANG beams on their respective steering optics
OMC (if we decide to open it)
  • Check position of AS beam on OM5/OM6
  • Ensure AS beam exits the vacuum cleanly
  14397   Fri Jan 11 16:38:57 2019   gautam   Update   General   Some alignment checks

The pumpdown seems to be progressing smoothly, so I think we are going to stick with the plan decided on Wednesday, and vent the IFO on Monday at 8am. I decided to do some checks of the IFO alignment.

I turned on the PSL again (so goggles are advisable again inside the VEA until this work is done), re-locked the PMC, and opened the PSL shutter into the vacuum (still low power 100 mW beam going into vacuum). The IMC alignment required minor tweaking, but I recovered ~1300 cts transmission which is what it was --> so we didn't macroscopically change the input pointing into the IMC while working on the IOO table.

With the ITMY oplev spot centered, there is a spot on the AS camera roughly centered on the control room monitor, so the TT pointing must also be pretty close.

Then I centered the ITMY oplev spot to check how well-aligned or otherwise the Michelson was - the BS has no Oplev so there was considerable angular motion of the Michelson spot, but it looked like on average, it was swinging around through a well aligned place. I saved the slow bias voltages for the ITMs and BS in this config.

Then I re-aligned ETMX and checked the green transmission - it was okay, ~0.3, and I was able to increase it to ~0.4 using the EX green PZT mirrors. So far so good.

Finally, I tried to lock the X-arm on IR - after zeroing the offsets on the transmission QPD, there seem to be a few flashes as the cavity swings through resonances, but no discernible PDH error signal. Moreover, the input pointing of the IR into the X arm is controlled by the BS, which is swinging around all over the place right now, so perhaps locking is hopeless; but the overall alignment of the IFO seems not too bad. Once ETMY is cleaned and put back in place, perhaps the Y arm can be locked.

I shuttered the PSL and inserted a manual beam block, and also turned off the EX laser so that we can vent on Monday without laser goggles.

*Not directly related to this work: we still have to implement the vacuum interlock condition that closes the PSL shutter in the event of a vacuum failure. It's probably fine now while the PSL power is attenuated, but once we have the high power beam going in, it'd be good to revert to the old standard.

  14400   Tue Jan 15 15:27:36 2019   gautam   Update   General   Lasers and other stuff turned back on

The VEA is now a laser hazard area as usual; several 1064 nm lasers in the lab have been turned back on. Apart from this:

  • the IFR was reset to the nominal modulation settings of +13dBm output at 11.066209 MHz (this has to be done manually following each power failure).
  • The temperature control unit for the EY doubling oven PID control was turned back on.
  • The EY Oplev HeNe was turned back on.
  • EY green PZT HV Kepco was turned back on.
  14471   Wed Feb 27 21:34:21 2019   gautam   Update   General   Suspension diagnosis

In my effort to understand what's going on with the suspensions, I've kicked all the suspensions and shut down the watchdogs at 1235366912. The PSL shutter is closed to avoid trying to lock to the swinging cavity. The primary aims are

  1. To see how much the resonant peaks have shifted w.r.t. the database, if at all - I claim that the ETMY resonances have shifted by a large amount and that ETMY has also lost one of its resonant peaks.
  2. To check the status of the existing diagonalization.

All the tests I have done so far (looking at free-swinging data, resonant frequencies in the Oplev error signals, etc.) seem to suggest that the problem is mechanical rather than electrical. I'll do a quick check of the OSEM PD whitening unit in 1Y4 to be sure. But the fact that the same three peaks appear in the OSEM and Oplev spectra suggests to me that the problem is not electrical.
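
For reference, a minimal sketch of the peak identification being done on the free-swinging data, assuming the OSEM sensor time series has already been exported to a text file (the file name, channel, and sample rate below are placeholders):

# Peak identification on an exported free-swinging OSEM time series.
# File name, channel, and sample rate are placeholders for the real export.
import numpy as np
from scipy import signal

fs = 16.0                                # sample rate of the exported channel [Hz]
data = np.loadtxt("etmy_ul_sensor.txt")  # e.g. an export of C1:SUS-ETMY_SENSOR_UL (or similar)

# Long segments (~1024 s) give ~1 mHz resolution, enough to separate the eigenmodes
f, psd = signal.welch(data, fs=fs, nperseg=int(1024 * fs))

# Look for peaks in the band where the pendulum/pitch/yaw/side modes live
band = (f > 0.4) & (f < 1.2)
peaks, _ = signal.find_peaks(psd[band], prominence=10 * np.median(psd[band]))
print("Resonant peaks [Hz]:", f[band][peaks])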

Watchdogs restored at 10 AM PST

  14483   Mon Mar 18 12:27:42 2019   gautam   Update   General   IFO status
  1. c1iscaux2 VME crate is damaged - see Attachment #1. 
    • It is not generating the 12V supply voltage, and so nothing in the crate works.
    • Tried resetting via front panel button, power cycling by removing power cable on rear, all to no effect.
    • Tried pulling out all cards and checking if there was an internal short that was causing the failure - looks like the problem is with the crate itself.
    • Not sure how long this machine has been unresponsive as we don't have any readback of the status of the eurocrate machines.
    • Not a showstopper, mainly we can't control the whitening settings for AS55, REFL55, REFL165 and ALSY. 
    • Acromag installation schedule should be accelerated.
    • * Koji reminded me that VME crate ≠ eurocrate. The former is used for the slow machines; the latter holds the iLIGO-style electronics boards.
  2. ITMX oplev is dead - see Attachment #2.
    • Lasted ~3 years (installed March 2016).
    • I confirmed that no light is coming out of the laser head on the optical table.
    • I'll ask Chub to replace it this afternoon.
  3. c1susaux is unresponsive
    • I didn't reboot it as I didn't want to spend some hours freeing ITMY. 
    • At some point we will have to bite the bullet and do it.
  4. Input pointing is still not stable
    • I aligned the input pointing using TT1/TT2 to maximize TRX/TRY before lunch, but in 1 hour, the pointing has already drifted.
  5. POX/POY locking is working okay. TRX has large low-frequency fluctuations because of ITMX not having an Oplev servo, should be rectified once we swap out the HeNe.

The goal for this week is to test out the ALS system, so this is kind of a workable state since POX/POY locking is working. But the number of broken things is accumulating fast.

  14485   Mon Mar 18 18:10:14 2019   Koji   Summary   General   Task items and priority

[Gautam/Chub/Koji] ~ Mini discussion

Maintenance / Upgrade Items

(Priority high to low)

  • TT/IO suspension upgrade (solidworks work) -> order components -> TT characterization
  • Acromag upgrade c1susaux
    • Produce a spreadsheet for the DB files. Learn the new format of the DB file with Acromag. Develop a Python script for DB file generation (Jon->Koji)
  • Satellite Box upgrade
    • Rack mount? Front panel DB connectors. New circuits (PD-LED)
       
  • Acromag iscaux1/2 & isc whitening upgrade
     
  • new RC mirror characterization -> installation
  14553   Fri Apr 19 09:42:18 2019   Koji   Bureaucracy   General   Item borrowing (40m->OMC)

Apr 16, 2019
Borrowed two laser goggles from the 40m. (Returned Apr 29, 2019)
Apr 19, 2019
Borrowed from the 40m:
- Universal camera mount
- 50mm CCD lens
- zoom CCD lens (Returned Apr 29, 2019)
- Olympus SP-570UZ (Returned Apr 29, 2019)
- Special Olympus USB Cable (Returned Apr 29, 2019)

 

  14572   Thu Apr 25 10:13:15 2019   Chub   Update   General   Air Handler Out of Commission

The air handler on the roof of the 40M that supplies the electronics shop and computer room is out of operation until next week.  Adding insult to injury, there is a strong odor of Liquid Wrench oil (a creeping oil for loosening stuck bolts that has a solvent additive) in the building.  If you don't truly need to be in the 40M, you may want to wait until the environment is back to being cool and "unscented".  On a positive note, we should have a quieter environment soon!

  14594   Fri May 3 15:40:33 2019   gautam   Update   General   CVI 2" beamsplitters delivered

Four new 2" CVI 50/50 beamsplitters (2 for p-pol and 2 for s-pol) were delivered. They have been stored in the optics cabinet, along with the "Test Data" sheets from CVI.

  14601   Fri May 10 13:00:25 2019   Chub   Update   General   crane inspection complete

The 40M jib cranes all passed inspection!

  14606   Mon May 13 18:48:32 2019   gautam   Update   General   Vent prep
  1. c1auxey and c1aux VME crates were keyed.
  2. EX and EY NPROs were turned on.
  3. Y arm was aligned to the IR - best effort TRY ~0.75.
  4. EY green was aligned to the Y arm cavity. The spot is on the lower right quadrant on the CCD monitor, but GTRY ~0.35.
  5. #3 and #4 were repeated for XARM.
  6. All beams were centered on the Oplev and IP POS QPDs with this reference alignment - see Attachment #1. SOS optic and TT DC bias positions were saved to burt snap files.
  7. I've never really used it but I updated all the SUS "driftmon" values - Attachment #2.
  8. Power going into the IMC was cut from 945 mW to 100 mW (both numbers measured with the FieldMate power meter) by rotating the HWP installed last time for this purpose from 244 degrees (OLD) to 208 degrees (NEW). There was no beam dump for the reflected port of the PBS used to cut the power, so I installed one, see Attachment #4. (A quick consistency check of this power reduction is sketched after this list.)
  9. The T=90% BeamSplitter in the MC REFL path was replaced with a 2" HR mirror as is the norm for the low power IMC locking. Alignment of the MC REFL beam onto the MC REFL PD was tweaked.
  10. The init.d file was edited and the MCautolocker initctl process was restarted on Megatron to adopt the low power settings. The IMC was locked, MCT ~1350 counts, see Attachment #3. I also adjusted the threshold above which the slow PID offloading of the FSS PZT voltage kicks in, from 10000 to 1000.
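
The consistency check mentioned in item 8: assuming the HWP was sitting near a transmission maximum of the HWP + PBS combination at the old setting, a 36 degree waveplate rotation should reduce the transmitted power by cos^2(72 deg), which lands close to the measured 100 mW.

# Consistency check of the power reduction in item 8.
# Assumes the HWP was near a transmission maximum of the HWP+PBS at 244 degrees.
import numpy as np

P_in = 945e-3                     # measured power at the old HWP setting [W]
dtheta = np.radians(244 - 208)    # waveplate rotation

# A half-wave plate rotates the polarization by twice the plate angle,
# so the PBS transmission scales as cos^2(2*dtheta) relative to the maximum.
P_expected = P_in * np.cos(2 * dtheta)**2
print("expected %.0f mW, measured ~100 mW" % (P_expected * 1e3))   # ~90 mW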

I believe this completes the non-Chub portions of the pre-vent checklist; we will start letting air into the main volume ASAP tomorrow morning after crossing off the remaining items.

The main goal of this vent is to investigate the oddness of the YARM suspensions. I am leaving the PSL NPRO on overnight in the interest of data gathering; it's been running ~10 hrs now - I suspect it'll turn itself off before we are ready to vent in the AM.

  14607   Tue May 14 10:35:58 2019   gautam   Update   General   Vent underway
  1. PSL had stayed on overnight. There was an EQ (M 4.6 near Costa Rica) which showed up on the Seis BLRMS, and I noticed that several optics were reporting Oplev spots off their QPDs (I had just centered these yesterday). So I did a quick alignment check:
    • IMC was readily locked
    • After moving test mass bias sliders to bring Oplev spots back to the center, the EX and EY green beams were readily locked to a TEM00 mode
    • IR flashes could be seen in TRX and TRY (though their levels are low, since we are operating with 1/10th the nominal power)
    • The IP-POS QPD channels were reporting a "segmentation fault" so I keyed the c1iscaux crate and they came back. Still the QPD was reporting a low SUM value, but this too is because of the lower power. Conveniently, there was an ND2.0 filter in the beam path on a flip mount which I just flipped out of the way for the low-power tracking.
    • Then, PSL and green shutters were closed and Oplev loops were disengaged.
  2. Checked that we have an RGA scan from today
  3. During the walkthrough to check the jam nuts, Chub noticed that the outer nuts on the bellows between the OMC chamber and the IMC chamber were loose to the finger! He is tightening them now and checking the remaining jam nuts. AFAIK, Steve made it sound like this was always a formality. Should we be concerned? The other jam nuts are fine according to Chub.
  4. We valved off the pumpspool from the main volume and annuli, and started letting Nitrogen into the main volume at ~1045am.
  5. Started letting instrument grade air into the main volume at ~1130am. We are aiming for a pressure increase of 3 torr/min
  6. 4 cylinders of dry air were exhausted by ~3:30pm. It actually looks like we over-pressured the main volume by ~20 torr - this is bad; we should've stopped letting air in at 700 torr and then let it equilibrate to lab air pressure.
  7. At some point during the vent, the main volume pressure exceeded the working range of the cold cathode gauge CC1. It reports "Current Fail" on its LED display, which I assume means it automatically shut off its HV to protect itself; Jon tells me the vacuum code isn't responsible for initiating any such shutoff.
  8. A new vacuum state was added to reflect these conditions (pumpspool under vacuum, main volume at atmosphere).
  9. The annuli remain under vacuum for now. Tomorrow, when we remove the EY door, we will vent the EY annulus.

The IMC was locked, MC2T ~ 1200 cts after some alignment touch-ups. The test mass oplevs indicate some drift, ~100 urad. I didn't realign them.

The EY door removal will only be done tomorrow. I will take some free-swinging ETMY data today (suspension was kicked at 1241919438) to see if anything has changed (it shouldn't have). I need to think up a systematic debugging plan in the meantime.

  14642   Tue May 28 17:41:13 2019   gautam   Update   General   IFO status

[chub, gautam]

Today, we tried to resuscitate the c1iscaux2 channels by swapping the existing, failed VME crate with the newly freed-up crate from c1susaux. In summary, the crate gets power and the EPICS server gets started, but I am unable to switch the whitening gain on the whitening boards. I believe this has to do with the FAIL LEDs that are on for the XVME-220 units. We were careful to preserve the location of the various cards in the VME crates during the swap. Rather than do detailed debugging with custom RJ45 cables and terminal emulators, I think we should just focus our efforts on getting the Acromag system up and running.

Our work must have bumped a cable to the c1lsc expansion chassis in the same rack - the c1lsc FE had crashed. I rebooted it using the script - everything came back gracefully.

  14724   Thu Jul 4 10:47:37 2019   Milind   Update   General   Earthquake now

There was a magnitude 6.6 earthquake just a few minutes ago. I am attaching photographs of the monitor feeds for reference here. Is there a standard protocol to be followed in this situation? I'm looking through the wiki now.

Further, the IMC seems to be misaligned and is not locking! As Koji has let me know, I really hope this is not too serious and can be fixed easily.

  14739   Tue Jul 9 18:17:48 2019   gautam   Update   General   Projector lightbulb blown out

Last documented replacement in Nov 2018, so ~7 months, which I believe is par for the course. I am disconnecting its power supply cable.

  14743   Wed Jul 10 14:55:32 2019   Koji   Update   General   Projector lightbulb blown out

In fact the projector is still working. The lamp timer showed ~8200hrs. I just reset the timer, but not sure it was the cause of the shutdown. I also set the fan mode to be "High Altitude" to help cooling.

  14752   Thu Jul 11 16:22:54 2019   Kruthi   Update   General   Projector lightbulb blown out

I heard a popping sound in the control room; the projector lightbulb has blown out.

  14756   Fri Jul 12 18:54:47 2019   Koji   Update   General   Item loan: optical chopper from Cryo Lab

Optical chopper borrowed from CryoLab to 40m

https://nodus.ligo.caltech.edu:8081/Cryo_Lab/2458

  14777   Fri Jul 19 15:51:55 2019   gautam   Update   General   Projector lightbulb blown out

[chub, gautam]

Bulb replaced. Projector is back on.

  14778   Fri Jul 19 15:54:47 2019   gautam   Update   General   Control room UPS Batteries need replacement

The control room UPS started making a beeping noise, saying the batteries need replacement. I hit the "Test" button and the beeping went away. According to the label on it, the batteries were last replaced in March 2016, so maybe it is time for a replacement. @Chub, please look into this.

  14780   Fri Jul 19 17:42:58 2019   gautam   Update   General   rossa Xdisp bricked

For some reason, rossa's Xdisplay won't start up anymore. This happened right after the UPS reset. Koji and I tried ~1.5 hours of debugging, got nowhere.

  14784   Sat Jul 20 11:24:04 2019   gautam   Update   General   rossa bricked

Summary:

SnapPy scripts made to work on Pianosa.

Details:

Of course rossa was the only machine in the lab that could run the python scripts to interface with the GigE camera. And it is totally bricked now. Lame.

So I installed several packages. The key was to install pypylon - if you go to the Basler webpage, pypylon 1.4.0 does not offer Python 2.7 support for the x86_64 architecture, so I installed pypylon 1.3.0. Here are the relevant lines from the changelog:

gstreamer-plugins-bad-0.10.23-5.el7.x86_64    Sat 20 Jul 2019 11:22:21 AM PDT
gstreamer-plugins-good-0.10.31-13.el7.x86_64  Sat 20 Jul 2019 11:22:11 AM PDT
gstreamer-plugins-ugly-0.10.19-31.el7.x86_64  Sat 20 Jul 2019 11:20:08 AM PDT
gstreamer-python-devel-0.10.22-6.el7.x86_64   Sat 20 Jul 2019 10:34:35 AM PDT
pygtk2-devel-2.24.0-9.el7.x86_64              Sat 20 Jul 2019 10:34:34 AM PDT
pygobject2-devel-2.28.6-11.el7.x86_64         Sat 20 Jul 2019 10:34:33 AM PDT
pygobject2-codegen-2.28.6-11.el7.x86_64       Sat 20 Jul 2019 10:34:33 AM PDT
gstreamer-devel-0.10.36-7.el7.x86_64          Sat 20 Jul 2019 10:34:32 AM PDT
gstreamer-python-0.10.22-6.el7.x86_64         Sat 20 Jul 2019 10:34:31 AM PDT
gtk2-devel-2.24.31-1.el7.x86_64               Sat 20 Jul 2019 10:34:30 AM PDT
libXrandr-devel-1.5.1-2.el7.x86_64            Sat 20 Jul 2019 10:34:28 AM PDT
pango-devel-1.42.4-1.el7.x86_64               Sat 20 Jul 2019 10:34:27 AM PDT
harfbuzz-devel-1.7.5-2.el7.x86_64             Sat 20 Jul 2019 10:34:26 AM PDT
graphite2-devel-1.3.10-1.el7_3.x86_64         Sat 20 Jul 2019 10:34:26 AM PDT
pycairo-devel-1.8.10-8.el7.x86_64             Sat 20 Jul 2019 10:34:25 AM PDT
cairo-devel-1.15.12-3.el7.x86_64              Sat 20 Jul 2019 10:34:25 AM PDT
mesa-libEGL-devel-18.0.5-3.el7.x86_64         Sat 20 Jul 2019 10:34:24 AM PDT
libXi-devel-1.7.9-1.el7.x86_64                Sat 20 Jul 2019 10:34:24 AM PDT
pygtk2-doc-2.24.0-9.el7.noarch                Sat 20 Jul 2019 10:34:23 AM PDT
atk-devel-2.28.1-1.el7.x86_64                 Sat 20 Jul 2019 10:34:21 AM PDT
libXcursor-devel-1.1.15-1.el7.x86_64          Sat 20 Jul 2019 10:34:20 AM PDT
fribidi-devel-1.0.2-1.el7.x86_64              Sat 20 Jul 2019 10:34:20 AM PDT
pixman-devel-0.34.0-1.el7.x86_64              Sat 20 Jul 2019 10:34:19 AM PDT
libXinerama-devel-1.1.3-2.1.el7.x86_64        Sat 20 Jul 2019 10:34:19 AM PDT
libXcomposite-devel-0.4.4-4.1.el7.x86_64      Sat 20 Jul 2019 10:34:19 AM PDT
libicu-devel-50.1.2-15.el7.x86_64             Sat 20 Jul 2019 10:34:18 AM PDT
gdk-pixbuf2-devel-2.36.12-3.el7.x86_64        Sat 20 Jul 2019 10:34:17 AM PDT
pygobject2-doc-2.28.6-11.el7.x86_64           Sat 20 Jul 2019 10:34:16 AM PDT
pygtk2-codegen-2.24.0-9.el7.x86_64            Sat 20 Jul 2019 10:34:15 AM PDT

The camera server is running in a tmux session on pianosa. But it keeps throwing up some gstreamer warnings/errors, and periodically (~every 20 mins) crashes. Kruthi tells me that this behavior was seen on Rossa as well, so whatever the problem is, it doesn't seem to be because I missed installing some packages on pianosa. Moreover, when the server is in fact running, I am able to take a snapshot - but the camera client does not run.
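
For completeness, a minimal pypylon check (the standard grab pattern from the pypylon samples) that can be used to confirm the install itself works, independent of the camera server. It assumes the GigE camera is reachable and is the first device the transport layer finds:

# Minimal pypylon sanity check, independent of the camera server.
# Assumes the GigE camera is the first device found on the network.
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()
camera.StartGrabbingMax(1)                # grab a single frame
while camera.IsGrabbing():
    grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if grab.GrabSucceeded():
        print("Got a frame with shape", grab.Array.shape)
    grab.Release()
camera.Close()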

  14794   Sun Jul 21 22:16:34 2019   rana   Update   General   rossa Xdisp bricked

"bricked" is to mean that it has the functionality of a brick and can be tossed. But rossa seems to have just gotten some software config corruption. I spent a couple hours reinstalling SL7 today as per my previous elog notes and the X display seems to work as before.

i.e. it was fine with the default setup, except for the ole "X crashes if the mouse goes to the left side of the screen" problem. As before, I

  1. blacklisted the nouveau driver (which is used by default)
  2. downloaded the NVIDIA driver as per the link
  3. ran its installation from the no-X terminal

left side of screen is safe again

This time I installed SL7.6 and followed the K Thorne wiki. But it's having trouble installing cds-root because it can't find root.

 

  14826   Sun Aug 4 14:39:41 2019   gautam   Update   General   some lab activity
  1. Unresponsive c1psl, c1iool0, c1auxey and c1iscaux VME crates were keyed.
  2. c1psl channels were burt-restored, and the PMC was re-locked. Tweaked the pointing into the PMC on the PSL table to increase the PMC transmission from ~0.69 to ~0.71.
  3. Re-locked IMC. Ran WFS offset script to relieve the ~100 DAC counts (~10 urad) DC offset from the WFS servos to the IMC suspensions (a serious calibration of this into physical units should be made part of the planned 40m WFS activity). Now that I think about it, since we change the IMC alignment to match the input beam alignment, some post-IMC clipping could modulate the power incident on the ITMs, which is a source of error for the arm cavity loss determination using DC reflection. We need a better normalizing data stream than the IMC transmission.
  4. The IFO_OVEREVIEW medm screen was modified such that the threshold for the PMC transmitted beam to be visible was lowered from 0.7 to 0.6, so that now there is a continuous beam line from the NPRO to the PRM when the IMC is locked even when the PMC transmission degrades by 5% due to thermally driven pointing drifts on the PSL table.
  5. The wmctrl utility on pianosa wasn't working so well, so I wasn't able to use my usual locking MEDM autoconfig scripts. This turned out to be due to a zombie MEDM window, which I killed with xkill; now it is working okay again.
  6. The misaligned XARM was re-aligned and the loss measuring PDA520 at the AS port was removed from the beam path (mainly to avoid ADC saturations the fringing Michelson will cause).
  7. I noticed that the ETMX Oplev HeNe SUM level has degraded to ~50% of its power level from 200 days ago [Attachment #1], may need a new HeNe here soon. @Chub, do we have spare HeNes in stock?

I want to collect some data with the arms locked to investigate the possibility/usefulness of implementing seismic feedforward for the arms (it is already known to help the IMC length and PRC angular stability at low frequencies). To facilitate diagnostics, I modified the file /users/Templates/Seismic/Seismic_vs_TRXTRYandMC.xml to have the correct channel names in light of Lydia's channel name changes in 2016. Looking at the coherence data, the alignment between the Cartesian coordinate system of the seismometers at the ends and the global interferometer coordinate system can be improved.

I don't know whether, for the MISO filter design, there is any difference between using TRX/TRY as the target and using the arm length control signal.

Data collection started at 1249018179. I've set up a script running in a tmux shell to turn off the LSC enable in 2 hours.
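
A minimal sketch of the kind of coherence check described above, assuming the seismometer and arm transmission time series have been exported to arrays at a common sample rate (the file names and channels mentioned in the comments are placeholders only):

# Seismometer -> TRX coherence, on exported data at a common sample rate fs.
# File names and channels (e.g. an end-station C1:PEM seismometer channel and TRX) are placeholders.
import numpy as np
from scipy import signal

fs = 256.0                        # common sample rate [Hz]
seis = np.load("seis_ex_x.npy")   # exported end-station seismometer data
trx = np.load("trx.npy")          # exported arm transmission data

f, coh = signal.coherence(seis, trx, fs=fs, nperseg=int(64 * fs))

# Bands with high coherence are the candidates for feedforward subtraction
for fi, ci in zip(f, coh):
    if 0.1 < fi < 10 and ci > 0.5:
        print("%.2f Hz  coherence %.2f" % (fi, ci))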

  14881   Mon Sep 16 12:00:16 2019   aaron   HowTo   General   Moved some immovable optics

When I put away the lenses we had used for measuring the RF transfer functions of the QPD heads, I saw that I'd removed them from the cabinet containing green endtable optics, but hadn't noticed the sign forbidding their removal. I'll talk with Koji/Gautam about what happened and what should be done.

  14919   Tue Oct 1 18:35:12 2019   gautam   Update   General   Beam centering campaign
  1. With TRX and TRY maximized using ASS, I centered the Oplev spots on the respective QPDs for the four test masses and the BS. I also centered the spot onto the IPPOS QPD by moving the available steering mirror.
  2. At EX, I tweaked the input pointing of the green beam into the arm by manually twiddling with the PZT mirrors. I was able to get GTRX~0.4.
  3. On the AS table, Koji and I found that a steering mirror had been placed in the AS beam path such that no light was reaching the AS110 or AS55 PDs. Please - when you are done with your measurement, return the optical configuration to the state it was in before, so that the usual locking activity isn't disturbed by a needless few hours of troubleshooting electronics.

Once Koji is done with his checkout of the whitening electronics, I will try and lock the PRMI.

  14930   Thu Oct 3 12:08:47 2019   gautam   Update   General   Make the Jenne-laser setup fiber-coupled

I propose the following re-organization of the PDFR measurement breadboard. We have all the parts on hand; it just needs ~30 mins of setup work and some characterization afterwards. The fiber beamsplitter will not be PM, but for this measurement I don't think that matters (the patch fiber from the diode laser head isn't PM anyway). We have one spare 1 GHz BW NF1611 that is fiber-coupled (it used to live on the ITMY in-air table and is (conveniently) labelled "REF DET", but I'm not sure what its function was). In any case, we have at least 1 free-space NF1611 photodiode available as well. I suggest confirming that the FC version works as expected by calibrating against the free-space PD first.

Update 2:45pm: Implemented, see Attachment #2. Aaron is testing it now and will post the characterization results.

  14931   Thu Oct 3 14:32:37 2019   rana   Update   General   Make the Jenne-laser setup fiber-coupled

I'm curious to see if we really need the 1611, or if we can calibrate the diode laser vs. the 1611 one time and then just use that calibration to get the absolute cal for the DUT.

  14932   Thu Oct 3 14:54:33 2019   Koji   Update   General   Make the Jenne-laser setup fiber-coupled

I'm afraid that the RF modulation of the laser is nonlinear and that the electrical and optical response depends on the LD pumping current and RF input power. So I would feel safer if we keep the reference PD. Of course, this is my feeling and it should be quantitatively tested.

  14934   Thu Oct 3 21:05:04 2019   aaron   Update   General   Make the Jenne-laser setup fiber-coupled

I measured the RF response of the fiber-coupled NewFocus 1611, calibrating out the cable delay. The laser current was set to 20.0 mA, and the RF power going into the splitter was -10 dBm. The DC voltage was 1.87 V, and Gautam and I measured the power from the fiber at 344uW.

Something still looks very wrong -- the PD is supposed to be flat out to 1GHz, and physical units pending, need food.

  14936   Thu Oct 3 23:15:39 2019   Koji   Update   General   Make the Jenne-laser setup fiber-coupled

The 1 GHz PD has a somewhat flatter response, but the laser and the driving network have more frequency dependence, as you saw.

  14937   Fri Oct 4 00:30:31 2019   gautam   Update   General   Make the Jenne-laser setup fiber-coupled

I think the metric of interest here is the consistency of the AC transimpedance of the proposed new "Reference PD" (= fiber coupled NF1611) vs the old reference (free space NF1611), since everything will be calibrated against that.

Quote:

Something still looks very wrong -- the PD is supposed to be flat out to 1GHz, and physical units pending, need food.

  14940   Fri Oct 4 14:25:59 2019   aaron   Update   General   Make the Jenne-laser setup fiber-coupled

Summary:

The fiber-coupled PD seems to have a factor of ~1.5 difference in responsivity compared to the free-space PD. There are some differences in the two ways I made the measurement that I don't yet understand.

Details

I measured relative responsivities of the fiber and free coupled NewFocus 1611 PDs (scaled by the Jenne AM transfer function).

I made the measurement in two ways, see Attachment 3. In Attachment 1 I show the response for separately measuring the two PDs relative to a pickoff of the source (two-port thru calibration). In Attachment 2 I measure the relative responses directly, without picking off a reference (three-port calibration). I scaled the transfer functions by their DC voltages; both PDs have transimpedances of 700 V/A.

However, there are some clear differences in the response (an overall 0.5 dB offset that may be explained by a miscalibrated DC level; an apparent periodicity in Attachment 1) that I don't yet understand. The free path of the non-fiber PD is ~5-6 inches, which accounts for the ~45 degrees of phase advance of the fiber-coupled relative to the free-space-coupled PD signal (12.7 cm / (c / 300 MHz) * 360 degrees ~ 45 degrees).
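
The phase estimate in parentheses above, written out (a quick check of the arithmetic):

# Phase advance expected from the ~5-6 inch (~12.7 cm) extra free-space path at 300 MHz.
c = 299792458.0        # m/s
L = 0.127              # m
f = 300e6              # Hz
print("%.1f degrees" % (360.0 * f * L / c))   # ~45.7 degrees, consistent with the observed ~45 deg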

I didn't find Agilent's manual very helpful for learning about the available calibration schemes, and didn't find a resource online that I liked -- is there a good one?
I think I want to characterize the WFS heads treating the DUT as a three-port device (AM in, ref PD, WFS segment PD).
  14964   Thu Oct 10 23:36:02 2019   Koji   Update   General   Wednesday cleaning work

[Jon, Yehonathan, Gautam, Aaron, Shruti, Koji]

We got together on Wednesday afternoon to clean the lab. In particular, we collected e-waste: VME crates, VME modules, old slow control cables, and other old/broken electronics. They are piled up in the office area and the cage outside right now (Attachments 1/2). We asked Liz to come pick them up (in coordination with either Gautam or Koji). Eventually this will free up two office desks.

We also organized the Acromag components into plastic boxes. (Attachment 3)
