ID   Date   Author   Type   Category   Subject
  14138   Mon Aug 6 09:42:10 2018   Koji | Summary | Computers | Transition of the main NFS disk on chiara

Follow up:

- At least it was confirmed that the local backup (4TB->2TB) is regularly running every morning.

- The 2TB disk was used up to 95%. To free up some of the remaining space, I have further compressed the burt snapshot folders (from ~2016). This released another 150GB. The 2TB disk is currently used up to 87%.

Prev

Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sdc1      3845709644 1731391748 1918967020  48% /home/cds
/dev/sdd1      2113786796 1886162780  120249888  95% /media/40mBackup

Now

Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sdc1      3845709644 1731706744 1918652024  48% /home/cds
/dev/sdd1      2113786796 1728124828  278287840  87% /media/40mBackup
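
For reference, a minimal sketch of the kind of compression used above (tar+gzip each old snapshot subdirectory). The path and year are placeholders, not the actual 40m directory layout:

#!/usr/bin/env python
# Hypothetical sketch only: archive an old burt snapshot year, and remove the
# originals only after the archive is verified. Paths are placeholders.
import os
import shutil

SNAP_ROOT = "/path/to/burt/snapshots"   # placeholder, not the real location

for year in ("2016",):
    src = os.path.join(SNAP_ROOT, year)
    if not os.path.isdir(src):
        continue
    # writes <SNAP_ROOT>/<year>.tar.gz next to the original directory
    archive = shutil.make_archive(src, "gztar", root_dir=SNAP_ROOT, base_dir=year)
    print("wrote", archive)
    # shutil.rmtree(src)   # only after checking the archive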

 

  14144   Tue Aug 7 23:06:30 2018   Koji | Update | PSL | EOM measurement preparation

I was preparing for the aLIGO EOM measurement to be carried out tomorrow afternoon.

I did a few modifications to the PLL setup.

  • The frequency mixer in the PLL setup was replaced with a ZP3 (level 7), swapped in for the ZAD-6
  • The PLL gain was reduced from 3.10 to 2.80 to prevent servo oscillation
  • The main PSL Marconi is connected to the PLL mixer and provides a fixed 200MHz, 8dBm signal.
  • The main PSL modulation is off.

Tomorrow I am going to modulate the EOM with the AUX Marconi via an amplifier (probably).

Automated scripts (AGinit.py and AGmeas.py) are in /users/koji/scripts

I will revert the setup once the measurement is done tomorrow.

  14145   Wed Aug 8 20:56:11 2018   Koji | Update | PSL | EOM measurement preparation

Rich and I worked on the EOM measurement. After the measurement, the setup was reverted to the nominal state.

  • The AUX PLL mixer was restored to the ZAD-6
  • The PLL gain was restored to 3.10
  • The main PSL Marconi is connected to the frequency generator again. Using the beat note, I've confirmed that the modulations are applied to the beam.
  • The PSL HEPA was reduced from 100 to 30.
  14175   Wed Aug 22 00:22:05 2018   Koji | Summary | Electronics | Inspection of the possible dual backplane interfaces for Acromag DAQ

[Johannes, Koji]

We went around the LSC, PSL, IOO, and SUS racks to check how many dual backplane interfaces will be required.

Euro card modules are connected to the backplane with two DIN 41612 connectors (as you know). The backplane connectors provide DC supplies and GND connections.
In addition, they are also used for the input and output connections with the fast and slow machines.

According to the past inspection by Johannes, most of the modules use only the upper DIN41612 connector (called P1). But some modules showed possible additional use of the other connector (P2).

On Tuesday afternoon, Johannes and I made a list of the modules with possible dual use, and I took time to check them against the DCC, Jay's schematics, and visual inspection of the actual modules.

LSC Rack

  • Common mode servo (D040180 Rev B)
    • Schematic source D040180 Rev B D1500308
    • Assessment: Both P1 and P2 are to be connected to Acromag, but there are only a few channels on P2
    • P1: 1A-32A Digital In
    • P2: 1A-3A Analog Out (D32/33/34, SLOW MON and spare?)
            9A Digital Out for D35 (Limiter)
            10A-15A Spare
            16A Digital In (Latch Enable/Disable)
            25A, 25C  Differential Analog in (Differential offset input, indicated as "BIAS") 
  • PD Interface (D990543 Rev B)
    • Schematic source D990543 RevB
    • Assessment: No connection necessary. We don't monitor/control anything of the LSC PDs from Acromag.

PSL Rack

  • Generic DAQ Interface (D990155) - This is a DAC interface.
    • Schematic source: Jay's page D990155 Rev.B All the lines between P2 and P3 are connected.
    • Assessment: Only P2 is to be connected to Acromag.
    • P1 DAC mon -> not necessary
    • P2 A1-A16, Connected to DAC in P2-P3
  • PMC Servo
    • Schematic source: LIGO DCC D980352
    • Assessment: Only P1 (1A-9A) is to be connected to Acromag. (Just one DSub is sufficient)
    • P1 1A-9A
  • Crystal Ref (D980353)
    • Schematic source: LIGO DCC D980353
    • Assessment: Only P1 (1A-4A) is to be connected to Acromag. (Just one DSub is sufficient)
    • P1 1A-4A
  • TTFSS REV A
    • Schematic source: Not found
    • Assessment: Probably only P1 is sufficient. We need to analyze the board to figure out the channel assignment.

IOO Rack

  • PD Interface (D990543 Rev B)
    • Schematic source D990543 RevB
    • Assessment: Only P1 connection is sufficient.
  • Generic DAQ Interface (D990155)
    • Assessment: Remove the module. We already have the same module in the PSL Rack. This is redundant.
  • Common mode servo (D040180 Rev B)
    • See above
  • Pentek Generic Input Board D020432
    • Schematic source Jay's page D020432-A
    • Assessment: No connection. There is no signal on the backplane.

SUS Rack

  • SUS Dewhitening
    • Schematic source: Jay's page D000316-A
    • Assessment: No connection.
    • We can omit Mon CHs.
    • Bypass/Inputs are already connected to the fast channels.

 

  14180   Thu Aug 23 16:05:24 2018   Koji | Update | IMC | MC/PMC trouble

I don't know what had been wrong, but I could lock the PMC as usual.
The IMC got relocked by the AutoLocker. I checked the LSC and confirmed that at least the Y arm could be locked just by turning on the LSC servos.

  14197   Wed Sep 12 22:22:30 2018   Koji | Update | Computers | SSL2.0, SSL3.0 disabled

LIGO GC notified us that nodus had SSL2.0 and SSL3.0 enabled. This has been disabled now.
The details are described on 40m wiki.

  14208   Fri Sep 21 19:50:17 2018   Koji | Update | CDS | Frequent time out

Multiple realtime processes on c1sus are suffering from frequent time outs. It eventually knocks out c1sus (process).

Obviously this has started since the fiber swap this afternoon.

gautam 10pm: there are no clues as to the origin of this problem on the c1sus frontend dmesg logs. The only clue (see Attachment #3) is that the "ADC" error bit in the CDS status word is red - but opening up the individual ADC error log MEDM screens show no errors or overflows. Not sure what to make of this. The IOP model on this machine (c1x02) reports an error in the "Timing" bit of the CDS status word, but from the previous exchange with Rolf / J Hanks, this is down to a misuse of ADC0 Ch31 which is supposed to be reserved for a DuoTone diagnostic signal, but which we use for some other signal (one of the MC suspension shadow sensors iirc). The response is also not consistent with this CDS manual - which suggests that an "ADC" error should just kill the models. There are no obvious red indicator lights in the c1sus expansion chassis either.

Attachment 1: 33.png
Attachment 2: 49.png
Attachment 3: Screenshot_from_2018-09-21_21-52-54.png
  14210   Sat Sep 22 00:21:07 2018   Koji | Update | CDS | Frequent time out

[Gautam, Koji]

We had another crash of c1sus, and Gautam did a full power cycle of c1sus. It was a struggle to recover all the frontends, but this solved the timing issue.

We went through full reset of c1sus, and rebooting all the other RT hosts, as well as daqd and fb1.

Attachment 1: 23.png
  14213   Sun Sep 23 20:15:35 2018   Koji | Summary | OMC | Monte Carlo simulation of the phase difference between P and S pols for a modeled HR mirror

Link to OMC_Lab ELOG 308

  14231   Fri Oct 5 00:46:17 2018   Koji | Configuration | ASC | Y-end table upgrade

???

The SHG crystal has a conversion efficiency of ~2%/W (i.e. if you have 1W input @1064nm, you get 2% conversion efficiency -> 20mW @532nm).

It is not possible to produce 0.58mW @532nm from 20.9mW @1064nm because this would already be 2.8% efficiency.
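
A quick numeric check of that statement, assuming the usual quadratic single-pass SHG scaling P_532 = eta * P_1064^2 with eta = 2%/W as quoted above:

eta = 0.02          # 1/W, conversion coefficient ("2%/W")
p_in = 20.9e-3      # W at 1064nm
p_532 = eta * p_in**2
print(p_532 * 1e6)               # ~8.7 uW expected at 532nm
print(0.58e-3 / p_in * 100)      # ~2.8% claimed efficiency, inconsistent with 2%/W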

 

  14261   Thu Oct 18 00:27:37 2018   Koji | Update | SUS | SUS PD Whitening board inspection

[Gautam, Koji]

As a part of the preparation for the replacement of c1susaux with Acromag, I inspected the coil-OSEM transfer functions for the vertex SUSs.

The TFs showed the typical f^-2 with the whitening on, except for ITMY UL (Attachment 1). Gautam told me that this has been a known issue for ~5 years.
We made a thorough inspection/replacement of the components and identified the mechanism of the problem.
It turned out that the inputs to the MAX333s are as listed below.

      Whitening ON   Whitening OFF
UL    ~12V           ~8.6V
LL    0V             15V
UR    0V             15V
LR    0V             15V
SD    0V             15V

The switching voltage for UL is obviously incorrect. We thought this came from a broken BIO board and thus swapped the corresponding board. But the issue remained. There are 4 BIO boards in total on c1sus, so maybe we replaced the wrong board?

Initially, we thought that the BIO could not drive the 5kOhm pull-up resistor from 15V to 0V (= 3mA of current). So I replaced the pull-up resistor with 30kOhm. But this did not help. These 30k resistors are left on the board.
 

Attachment 1: 43.png
  14341   Tue Dec 11 13:42:44 2018   Koji | Update | OMC | OMC channels

FYI:

D050368 Anti-Imaging Chassis
https://dcc.ligo.org/LIGO-D050368

https://labcit.ligo.caltech.edu/~tetzel/files/equip

D050368 Adl SUS/SEI Anti-Image filter board 
S/N 100-102 Assembled by screaming circuits. Begin testing 4/3/06 
S/N xxx Mohana returned it to the shop. No S/N or traveler. Put in shop inventory 4/24/06 
S/N 103 Rev 01. Returned from Screaming circuits 7/10/06. complete except for C28, C29 
S/N 104-106 Rev 01. Returned from Screaming circuits 7/10/06. complete except for C28, C29 Needs DRV-135’s installed 
S/N 107-111 Rev 02 (32768 Hz) Back from assembly 7/14/06 
S/N 112-113 Rev 03 (65536 Hz) assembled into chassis and waiting for test 1/29/07 
S/N 114 Rev 03 (65536 Hz) assembled and ready for test 020507 


D050512 RBS Interface Chassis Power Supply Board (Just an entry. There is no file)

https://dcc.ligo.org/LIGO-D050512

RBS Interface Chassis Power Board D050512-00

https://labcit.ligo.caltech.edu/~rolf/jayfiles/drawings/D050512-00.pdf
 

 

  14353   Thu Dec 13 20:10:08 2018   Koji | Update | General | Power Outage recovery

[Gautam, Aaron, Koji]

The PSL interlock system was fixed and now the 40m lab is laser hazard as usual.


- The schematic diagram of the interlock system D1200192
- We opened the interlock box. We immediately found that the DC switching supply (OMRON S82K-00712) was not functioning anymore. (Attachment #1)
- We could not remove the module as the power supply was attached to the DIN rail. We decided to leave the broken supply there (it is still AC powered with no DC output).

- Instead, we brought a DC supply adapter from somewhere and chopped off its plug so that we could hook it up to the crimp-type quick connects. In Attachment #1, the gray wire is +12V, and the orange and black wires are GND.

- Upon inspection, the wires of the "door interlock reset button" had fallen off and the momentary switch (GRAYHILL 30-05-01-502-03) was broken. So it was replaced with another momentary switch, which is unfortunately much smaller than the original. (Attachments 2 and 3)

- Once the DC supply adapter was plugged into an AC tap, we heard the relays working, and we recovered the laser hazard lamps and the PSL door alarm lamps. It was also confirmed that the PSL Innolight is operable now.

- BTW, there is a big switch box on the wall close to the PSL enclosure. Some of the green lamps were gone. We found that we have plenty of spare lamps and relays inside the box. So we replaced the bulbs, and now the AC lights are functioning. (Attachments 4 & 5)

Attachment 1: OMRON_S82K-00712.JPG
Attachment 2: reset_button_repaired1.JPG
Attachment 3: reset_button_repaired2.JPG
Attachment 4: gray_box.JPG
Attachment 5: gray_box2.JPG
  14359   Fri Dec 14 14:25:36 2018   Koji | Update | CDS | chiara backup

fsck of the chiara backup disk (UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000") was done. But this required many files to be fixed, so the backed-up files are not reliable now.
On top of that, the disk stopped being recognized by the machine.

I went to the disk and disconnected the USB and then the power supply, which was/is connected to the UPS.
Then they were reconnected. This made the disk come back as /media/90a5c98a-22fb-4685-9c17-77ed07a5e000. (*)
After unmounting this disk, I ran "sudo mount -a" to mount it the way fstab does.
Now I am running the backup script manually so that we at least maintain a snapshot of the day.

(*) This is the same situation we found at the recovery from the power shutdown. So my hypothesis is that on Oct 16 at 7 AM, during the backup, there was a USB or disk failure or something else which unmounted the disk. This caused some files to be damaged. It also caused the disk to be mounted as /media/90a5c98a-22fb-4685-9c17-77ed07a5e000. So we have had no backup since then.
Update (20:00): The disk connection failed again. I think this disk is no longer reliable.

 

Attachment 1: fsck_log.log
sudo fsck -yV UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000"           [238/276]
[sudo] password for controls:
fsck from util-linux 2.20.1
[/sbin/fsck.ext4 (1) -- /media/40mBackup] fsck.ext4 -y /dev/sde1
e2fsck 1.42 (29-Nov-2011)
/dev/sde1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Error reading block 527433852 (Attempt to read block from filesystem resulted in
 short read) while getting next inode from scan.  Ignore error? yes

... 283 more lines ...
  14360   Fri Dec 14 22:19:22 2018   Koji | Update | General | Chiara new USB 4TB Disk

Edit: It was in fact not a 4TB disk but a 6TB disk. (We actually ordered a 4TB disk...)

I think the problem with the backup disk was the flaky power supply for the external drive.
I swapped the drive to a new HGST 4TB one, but it was neither recognized nor spun up with the external power supply we had. So I decided to put both the new and old drives in the PC chassis to power them up with the internal power supply. I tested the old disk via a USB-SATA cable. However, this disk was not recognized. I noticed that the disk was not an HGST 4TB but a Seagate 3TB. Is that possible? I thought it was 4TB... Did I miss something?

Once the new 4TB disk was connected to the USB-SATA cable, it was very smooth to get it mounted. Now the disk is mounted as /media/40mBackup as before. /etc/fstab was also modified with the new UUID. All the command logs are found below.

Let's see how the morning backup goes. It will take a while to copy everything to the new disk, so it was actually very nice to set this disk up by Friday midnight.


controls@chiara|~> lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk 
├─sda1   8:1    0 446.9G  0 part /
├─sda2   8:2    0     1K  0 part 
└─sda5   8:5    0  18.9G  0 part [SWAP]
sdb      8:16   0   1.8T  0 disk 
└─sdb1   8:17   0   1.8T  0 part 
sdc      8:32   0   3.7T  0 disk 
└─sdc1   8:33   0   3.7T  0 part /home/cds
sr0     11:0    1  1024M  0 rom  
sdd      8:64   0   5.5T  0 disk 

controls@chiara|~> sudo mkfs -t ext4 /dev/sdd1

mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183144448 inodes, 1465130385 blocks
73256519 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
44713 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
    102400000, 214990848, 512000000, 550731776, 644972544
Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done      

controls@chiara|~> blkid

/dev/sda1: UUID="972db769-4020-4b74-b943-9b868c26043a" TYPE="ext4" 
/dev/sda5: UUID="a3f5d977-72d7-47c9-a059-38633d16413e" TYPE="swap" 
/dev/sdc1: UUID="92dc7073-bf4d-4c58-8052-63129ff5755b" TYPE="ext4" 
/dev/sdd1: UUID="1843f813-872b-44ff-9a4e-38b77976e8dc" TYPE="ext4" 

controls@chiara|~> sudo emacs -nw /etc/fstab
controls@chiara|~> cat /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/sda1 during installation
UUID=972db769-4020-4b74-b943-9b868c26043a /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=a3f5d977-72d7-47c9-a059-38633d16413e none            swap    sw              0       0
#UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0
UUID="1843f813-872b-44ff-9a4e-38b77976e8dc"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0

#fb:/frames      /frames nfs     ro,bg


UUID=92dc7073-bf4d-4c58-8052-63129ff5755b   /home/cds    ext4    defaults,relatime,commit=60    0   0

controls@chiara|~> sudo mount -a
controls@chiara|~> df

Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sda1       461229088   10694700  427105320   3% /
udev             15915020         12   15915008   1% /dev
tmpfs             3185412        868    3184544   1% /run
none                 5120          0       5120   0% /run/lock
none             15927044        484   15926560   1% /run/shm
/dev/sdc1      3845709644 1809568856 1840789912  50% /home/cds
/dev/sdd1      5814346836     190408 5521130352   1% /media/40mBackup
  14361   Sat Dec 15 18:29:53 2018   Koji | Update | General | Chiara new USB 4TB Disk

The local backup was done at 18:18 after 11h18m of running.

2018-12-15 07:00:01,699 INFO       Updating backup image of /cvs/cds
2018-12-15 18:17:56,378 INFO       Backup rsync job ran successfully, transferred 5717707 files.
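
For context, a minimal sketch of what such a local rsync mirror job looks like. This is not the actual 40m backup script; the paths, rsync options, and log format are assumptions.

#!/usr/bin/env python
# Hedged sketch of an rsync mirror job like the one logged above.
import logging
import subprocess

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)-10s %(message)s")

SRC = "/cvs/cds/"           # assumed source (trailing slash: copy contents)
DST = "/media/40mBackup/"   # assumed destination mount point

logging.info("Updating backup image of /cvs/cds")
result = subprocess.run(["rsync", "-a", "--delete", "--stats", SRC, DST])
if result.returncode == 0:
    logging.info("Backup rsync job ran successfully")
else:
    logging.error("rsync exited with code %d", result.returncode)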

 

  14367   Wed Dec 19 14:19:15 2018   Koji | Summary | VAC | Plan for pumping down test

We still need an elaborated test procedure posted.

12/19 Wed

  • Jon continues to work on valve actuator tests.
  • Chub continues to work on wiring / fixing wiring.
  • At the end of the day Jon is going to send out a notification email of "GO"/"NO GO" for pumping.

 

12/20 Thu

  • 9AM: Start closing two doors unless Jon gives us NO GO sign.
  • 10AM: Start pumping down
    • Test roughing pump capability via new control system
    • (Independently) Test turbo rotating procedure. This time we will not open the gate valve between the TP1 and the main volume. This is because we want to take care of the backing turbo loads while we gradually open the gate valve. This will take more hours to be done and we will not be able to finish this test by the end of Thu.
    • At the end of the procedure, we isolate the main volume, stop all the pumps, and vent the roughing pumps to save them from oil backstreaming.

gautam: Koji and I were just staring at the vacuum screen, and realized that the drypumps, which are the backing pumps for TP2 and TP3, are not reflected on the MEDM screen. This should be rectified.

Steve also mentioned that the new small turbo controller does not directly interface with the drypump. So we need some system to delay the starting of the turbo itself, once the drypump has been engaged. Does this system exist?
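
Purely as an illustration of the kind of delayed-start logic being asked about (the channel names and the delay value are invented placeholders, not the real vacuum-system channels):

# Illustrative sketch only: refuse to start the turbo unless the backing
# drypump is running, and wait before issuing the turbo start command.
import time
import epics   # pyepics, assumed available on the vacuum control machine

DRYPUMP_RUNNING = "C1:Vac-TP3_drypump_status"   # hypothetical readback channel
TURBO_START     = "C1:Vac-TP3_start"            # hypothetical command channel
SPINUP_DELAY    = 60.0                          # seconds, made-up value

def start_turbo_with_delay():
    if epics.caget(DRYPUMP_RUNNING) != 1:
        raise RuntimeError("drypump not running; refusing to start turbo")
    time.sleep(SPINUP_DELAY)                    # let the foreline pressure drop
    if epics.caget(DRYPUMP_RUNNING) != 1:       # re-check before commanding
        raise RuntimeError("drypump dropped out during the delay")
    epics.caput(TURBO_START, 1)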

Attachment 1: Screenshot_from_2018-12-19_14-49-34.png
  14371   Wed Dec 19 22:11:28 2018   Koji | Update | General | How to align the copper OMC

The OMC input optics layout is attached.

Checked the spot position on OMMT-FM1. It was off from the center. This was causing the spot on OMMT1 to be off-center. This was fixed with the steering mirror for the AUX laser.

The beam alignment onto the OMC was tweaked with OMC-SM1 and OMC-SM2. This was the painful part. We had to make a sensor card that could get into the narrow space of the OMC. (Attachment 2 right)

Attachment 2 left shows the naming convention of the OMC mirrors.

For the alignment, we applied a 5Vpp triangle wave at 3.1Hz to the input of the PZT amp so that the cavity is kept scanning continuously. First, check the rough spot positions on OMC-CM1 and OMC-CM2. If you use the card carefully, you can check whether the beam is returning to OMC-IC. This return beam should have roughly the same height as the incident beam. This can be adjusted with either of the steering mirrors.

Once the beam is going around the mirrors multiple times, the spot alignment can be checked at OMC-CM1. Bring a card right in front of CM1. If the card is lifted slightly above the incident spot, this automatically lets the outgoing beam go through. Depending on the pitch alignment, the next roundtrip (1RT) will be seen on the card. As you lift the card up more, you will be able to see more roundtrip beams (e.g. 2RT, 3RT in the figure). If the yaw alignment is perfect, these spots will be lined up vertically. So you can try to align the horizontal direction with the steering mirrors. Then the vertical alignment can be done with the pitch knobs.

At this point you should be able to see some very high-order transmission at the OMC trans port. For today, we stopped here as we had already run out of range on multiple knobs. This is because the beam height in the mode matching telescope was not right, and the steering mirrors had to work beyond their range.

Attachment 1: 110804_40m_OMC_layout.pdf
Attachment 2: OMC_alignment.pdf
  14379   Fri Dec 21 12:57:10 2018   Koji | Omnistructure | VAC | N2 line valved off

Independent question: Are all the turbo forelines vented automatically? We manually did it for the main roughing line.

 

  14385   Fri Jan 4 15:18:15 2019   Koji | Update | General | Chiara disk clean up and internally mounted

[Koji Gautam]

Took the opportunity of the power glitch to take care of the disk situation of chiara.

- Unmounted /cvs/cds from nodus. This did not affect the services on nodus as they don't use /cvs/cds.

- Went to chiara, shut it down, and physically checked the labels of the drives.

root = 0.5TB
/cvs/cds = 4TB HGST
backup of /cvs/cds= 6TB HGST

- These three disks are internally mounted and connected with SATA. Previously, 6TB was on USB.

- There were two other drives (2TB and 3TB), but they seemed logically or physically broken. These two disks were removed from chiara. (They came back online after reformatting on a Mac, so they still seem physically alive.)

controls@chiara|~> df
df: `/var/lib/lightdm/.gvfs': Permission denied
Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sda1       461229088   10690932  427109088   3% /
udev             15915020          4   15915016   1% /dev
tmpfs             3185412        848    3184564   1% /run
none                 5120          0       5120   0% /run/lock
none             15927044        144   15926900   1% /run/shm
/dev/sdb1      5814346836 1783407788 3737912972  33% /media/40mBackup
/dev/sdc1      3845709644 1884187232 1766171536  52% /home/cds
controls@chiara|~> lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
├─sda1   8:1    0 446.9G  0 part /
├─sda2   8:2    0     1K  0 part
└─sda5   8:5    0  18.9G  0 part [SWAP]
sdb      8:16   0   5.5T  0 disk
└─sdb1   8:17   0   5.5T  0 part /media/40mBackup
sdc      8:32   0   3.7T  0 disk
└─sdc1   8:33   0   3.7T  0 part /home/cds
sr0     11:0    1  1024M  0 rom

- Rebooted the machine and it just came back without any error. This time the control room machines were not shut down; they just recovered the NFS mount once chiara came back.

Attachment 1: P_20190104_143336.jpg
Attachment 2: P_20190104_143357.jpg
  14408   Sat Jan 19 05:07:45 2019   Koji | Update | SUS | Unused optic on EY table

I don't think it was used. It is not on the diagram either. You can remove it.

  14429   Sat Feb 2 21:53:24 2019   Koji | Update | VAC | overnight leak rate

The pressure of the main volume increased from ~1mtorr to 50mtorr over the past 24 hours (86ksec). This rate is about x1000 of the number reported on Jan 10. Do we suspect a vacuum leak?
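
A quick back-of-the-envelope check of the x1000 factor, using only the 33,000 liter IFO volume and numbers from the Jan 10 entry quoted below:

V = 33000.0                           # liters (from the quoted entry)

rate_jan = (264 - 247) * V / 30000    # Jan 10: ~19 uTorr L/s
rate_now = (50e3 - 1e3) * V / 86400   # this entry: ~19,000 uTorr L/s

print(rate_now / rate_jan)            # ~1000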

Quote:

Overnight, the pressure increased from 247 uTorr to 264 uTorr over a period of 30000 seconds. Assuming an IFO volume of 33,000 liters, this corresponds to an average leak rate of ~20 uTorr L / s.

 

Attachment 1: Screen_Shot_2019-02-02_at_21.49.33.png
  14431   Sun Feb 3 20:52:34 2019   Koji | Update | VAC | overnight leak rate

We can pump down (or vent) the annuli. If this is a leak between the main volume and the annuli, we will be able to see the effect on the leak rate. If this is a leak of an outer O-ring, again pumping down (or venting) the annuli should temporarily decrease (or increase) the leak rate..., I guess. If the leak rate does not depend on the pressure of the annuli, we can conclude that it is internal outgassing.

  14439   Thu Feb 7 17:27:53 2019   Koji | Summary | Tip-Tilt | Five FiveNine Optics optics delivered

5 PR3/SR3 optics from FiveNine Optics were delivered. The data sheets were scanned and uploaded to the following wiki page. https://wiki-40m.ligo.caltech.edu/Aux_Optics

  14470   Mon Feb 25 20:20:07 2019   Koji | Update | SUS | DIN 41612 (96pin) shrouds installed to vertex SUS coil drivers

The forthcoming Acromag c1susaux is supposed to use the backplane connectors of the sus euro card modules.

However, the backplane connectors of the vertex sus coil drivers were already used by the fast switches (dewhitening) of c1sus.

Our plan is to connect the Acromag cables to the upper connectors, while the switch channels are wired to the lower connector by soldering jumper wires between the upper and lower connectors on board.

To make the lower 96pin DIN connector available for this, we needed DIN 41612 (96pin) shrouds. Tyco Electronics 535074-2 is the correct component for this purpose. The shrouds have been installed on the backplane pins of the coil driver circuit D010001. The shroud has a 180deg rotation degree of freedom; the direction of the shrouds was matched with the ones on the upper connectors.

Attachment 1: P_20190222_175058.jpg
P_20190222_175058.jpg
  14485   Mon Mar 18 18:10:14 2019   Koji | Summary | General | Task items and priority

[Gautam/Chub/Koji] ~ Mini discussion

Maintenance / Upgrade Items

(Priority high to low)

  • TT/IO suspension upgrade (solidworks work) -> order components -> TT characterization
  • Acromag upgrade c1susaux
    • Produce a spreadsheet for DB files. Learn the new format of the DB file with Acromag. Develop a python code for the DB file generation (Jon->Koji)
  • Satellite Box upgrade
    • Rack mount? Front panel DB connectors. New circuits (PD-LED)
       
  • Acromag iscaux1/2 & isc whitening upgrade
     
  • new RC mirror characterization -> installation
  14492   Thu Mar 21 18:09:36 2019   Koji | Update | CDS | db file preparation for acromag c1susaux

I have updated the google doc spreadsheet to indicate the required action for the new dbfile generation.

There are three types of actions:

1. COPY - Just duplicate the old EPICS db entry. This is for soft channels, calc channels.
2. DELETE - Delete the entry for some physical channels that will not be implemented on Acromag (oplev, dewhitening mon, AI monitor, etc)
3. REPLACE - For the physical channels, we want to replace the port names.

The blue part of the spreadsheet indicates the action for each channel. If it is a physical channel, the assigned module and channel are indicated there. What we still want to do is to use this information for generating the port name, which looks like "@asynMask(C1VAC_XT1221A_ADC 1 -16)MODBUS_DATA".
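
As a concrete illustration of that port-name generation, a minimal sketch; the record fields and the example channel/module names below are assumptions, not entries from the spreadsheet:

# Sketch of generating an EPICS ai record whose INP points at an Acromag
# Modbus port, following the "@asynMask(...)MODBUS_DATA" pattern above.
def make_ai_record(channel, module, adc_ch):
    port = "@asynMask({} {} -16)MODBUS_DATA".format(module, adc_ch)
    return (
        'record(ai, "{chan}")\n'
        "{{\n"
        '    field(DTYP, "asynInt32")\n'
        '    field(INP,  "{port}")\n'
        "}}\n"
    ).format(chan=channel, port=port)

# Hypothetical example (not a real spreadsheet row):
print(make_ai_record("C1:SUS-ITMX_ULPDMon", "C1SUSAUX_XT1221A_ADC", 1))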

The links to the spreadsheets can be found on 40m wiki: https://wiki-40m.ligo.caltech.edu/CDS/SlowControls/c1susaux

Attachment 1: Screen_Shot_2019-03-21_at_18.06.53.png
  14499   Thu Mar 28 23:29:00 2019   Koji | Update | SUS | Suspension PD whitening and I/F boards modified for susaux replacement

Now the sus PD whitening boards are ready to have the backplane connectors moved to the lower row and the Acromag interface board plugged into the upper row.


The sus PD whitening boards on the 1X5 rack (D000210-A1) had slow and fast channels mixed in a single DIN96 connector. As we are going to use the rear-side backplane connector for Acromag access, we wanted to migrate the fast channels elsewhere. For this purpose, the boards were modified to duplicate the fast signals to the lower DIN96 connector.

The modification was done on the back layer of the board (Attachment 1).
Pins 28A~32A and 28C~32C of P1 are connected to the corresponding pins of P2 (Attachment 2). The connections were thoroughly checked with a multimeter.

After the modification the boards were returned to the same place in the crate. The cables, which had been identified and noted before disconnection, were returned to the connectors.

The functionality of the 40 (8 sus x 5 ch) whitening switches was confirmed one by one using DTT, by looking at the transfer functions from SUS LSC EXC to the PD input filter IN1. All the switches showed the proper whitening in the measurements.

The PD slow mon channels (like C1:SUS-XXX_xxPDMon) were also checked, and they returned to the values before the modification, except for the BS UL PD. As the fast version of the signal returned to the previous value, the monitor circuit was suspect. Therefore, the op-amp of the monitor channels (LT1125) was replaced, and the value came back to the previous one (Attachment 3).

 

Attachment 1: IMG_7474.JPG
Attachment 2: D000210_backplane.pdf
Attachment 3: Screenshot_from_2019-03-28_23-28-23.png
  14513   Wed Apr 3 12:32:33 2019   Koji | Update | ALS | Note about new fiber couplers

Andrew seems to have an integrated solution of PBS+HWP in a single mount. Or, I wonder if we should use a HWP/QWP before the coupler. I am interested in a general solution to this problem for my OMC setup too.

  14535   Thu Apr 11 11:42:10 2019   Koji | Update | PSL | PSL fan is noisy

This thread: ELOG 10295

My interpretation of these ELOGs is that we did not have a replacement, and I then brought an unknown fan from WB. At the same time, Steve ordered replacement fans, which we found in the blue tower yesterday.
The next action is to replace the internal fan, I believe.

  14553   Fri Apr 19 09:42:18 2019   Koji | Bureaucracy | General | Item borrowing (40m->OMC)

Apr 16, 2019
Borrowed two laser goggles from the 40m. (Returned Apr 29, 2019)
Apr 19, 2019
Borrowed from the 40m:
- Universal camera mount
- 50mm CCD lens
- zoom CCD lens (Returned Apr 29, 2019)
- Olympus SP-570UZ (Returned Apr 29, 2019)
- Special Olympus USB Cable (Returned Apr 29, 2019)

 

  14612   Wed May 15 19:36:29 2019   Koji | Update | SUS | ETMY inspection

A pair of tweezers is OK as long as there are no magnets around. You need to (somewhat) constrain the mirror with the EQ stops so that you can pull the fiber without dragging the mirror.

  14659   Thu Jun 6 22:11:53 2019   Koji | Update | IOO | IMC diagnostics

As per Gautam's request, I looked at the IMC situation.

Locking path

  • Acquisition: IMC IN1 Gain +4 (nominal), Boost 0, VCO Gain (-32), FSS Common +6 (nominal), FSS FAST +20
    This gain is too low. So oscillate the VCO Gain between -32 and ~0 until TEM00 lock is acquired
  • Once lock is acquired, bring the VCO gain to +11 (new nominal), and increase the FSS FAST to +23 (new nominal). Change the IMC BOOST to 3 (nominal)

Diagnosis

  • The PMC servo gain was checked. The control signal monitor for the PMC actuation was hooked up to an SR785. The nominal gain was +18dB. Increasing the gain to 20dB made the servo oscillate. So the nominal gain of +18dB still seems reasonable.
  • The status of the NOISE EATER was checked. Both the PMC REFL and TRANS were looked at with the AG4395A. Their power spectra did not change much in the kHz~MHz region. It improved the PSD slightly (x2~3) below 1kHz. I also did not recognize the relaxation oscillation peak, so I could not figure out where to look. The NOISE EATER was on and is still on.
  • IFO Modulation Freq: I took this chance to look at the IMC absolute length using the peak at 3.6MHz. The TP1A output of the IMC servo board was hooked up to the AG4395A.
    The new FSR of the IMC (and thus the modulation frequency for the IFO) is 11.066275MHz (instead of the previous 11.066209MHz).
    This corresponds to a 0.16mm difference in the roundtrip length (a quick numerical check is sketched after this list).
  • (*Still working) IMC SERVO configuration:
    • FAST gain 25 (nominal) sometimes invokes oscillation. 24 has gain peaking at ~30kHz. There is a big line peak at 35kHz, so I wanted to avoid the servo bump (PZT-EOM crossover) there and decided to use 23dB. (This is not optimal for the CM servo, as we need as much bandwidth as possible for the CM servo.)
    • IMC VCO GAIN (a bad name; this is actually the overall output gain for the IMC) was increased from the nominal 7 to 11. Increasing this above 11 makes the servo oscillate at ~200kHz.
  • (*Still working) Measured the power spectrum of the error signal. Too many line peaks.
  • (*Still working) Single trigger observation: Oscilloscope monitoring started from 35kHz going up, and a ~20kHz oscillation of +/-6V at the IMC servo output was observed. Could not capture good data for this. Will try again another day.
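
The numerical check of the FSR-to-length statement mentioned above, using only the two measured frequencies:

c = 299792458.0        # m/s
f_new = 11.066275e6    # Hz, newly measured IMC FSR
f_old = 11.066209e6    # Hz, previous value

L_new = c / f_new      # roundtrip length, ~27.09 m
L_old = c / f_old
print((L_old - L_new) * 1e3, "mm")   # ~0.16 mm, as stated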

I'll complete the entry later.

  14672   Thu Jun 13 22:21:44 2019   Koji | Configuration | CDS | Paola wireless connected to martian

SURFs had trouble connecting paola to martian via wireless.
Of course, it requires a fixed IP, but it did not have one yet. So I went to chiara and assigned 192.168.113.110 as "paolawl". Note that the wired connection has .111 and it is "paola".

Followed the instruction on http://nodus.ligo.caltech.edu:8080/40m/14121

  14673   Thu Jun 13 22:46:41 2019   Koji | Update | IOO | Left IMC at the intermediate gains

The SURFs want some locking of the IMC for camera adjustment.

So I left the IMC with intermediate gains so that it keeps locking and unlocking.

VCO (overall) IMC gain of -32, FSS common gain 3, and FAST gain 20. I believe MC2tickle is ON too.

  14685   Fri Jun 21 19:22:40 2019   Koji | Configuration | BHD | Reviving the single OMC BHD design?

I think a Faraday rotator rotates the polarizations the same way for both the forward and backward beams, not as shown in this figure.
Also, the transmission through multiple Faradays will be a big issue.

  14686   Fri Jun 21 19:36:26 2019   Koji | Update | IOO | IMC diagnostics

The IMC REFL error signal was measured to compare it with other spectra (if we have any).

The blue curve is the in-loop IMC error and the red is the dark noise, so they are not an apples-to-apples comparison. But the red noise is going to be suppressed by the loop, and still the red is below the blue. This means that the blue curve is measured noise rather than readout noise.

We suspect that the current issue is the PC drive saturation (as usual). Does this indicate that the laser frequency noise has actually increased?

----

Another suspect was degradation of the LO level. We used to have an issue with slowly dying ERA-5s (ERA-5SM, in fact). The RF levels on the demod board were measured using an active probe.

The LO input: 0dBm; ERA-5 inputs: -2.7dBm and -2.1dBm for I and Q. I found that the outputs of the ERA-5SMs were +10.5dBm and +10.6dBm.
This led me to replace the chips, but the situation did not change. Then I realized that the LO levels should have been measured with the mixers replaced by a 50Ohm load. Somehow these mixers lower the apparent LO levels. So I decided this is OK.

Attachment 1: IMC_error.pdf
  14689   Sun Jun 23 14:43:14 2019   Koji | Update | IOO | IMC is locking normally again

Note that I removed an SR785, an oscilloscope, and some SRS instruments from the PSL and PMC setup last night.

But they (and the RF network analyzer) were not there when the problem started.

Should we record the IMC error (at the test point monitor) too? If the IMC locks on Monday too, I'll do it.

  14712   Sun Jun 30 23:52:09 2019   Koji | Update | IOO | PMC and IMC locked again, some MEDM maintenance

> For channels corresponding to continuous values (such as say exposure time or the like) changes to abs(1+current_value)

Why abs? If the current_value is like -5.4321 (for example, for an alignment slider), this returns +4.4321 and the suspension will suffer a huge motion (well, it will be returned to the original value soon though).
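
The concern in one line:

x = -5.4321            # e.g. an alignment-slider value
print(abs(1 + x))      # 4.4321 -- a large jump away from -5.4321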

  14725   Thu Jul 4 10:54:21 2019   Koji | Summary | SUS | Suspension damping recovered, ITMX stuck

So Cal Earthquake. All suspension watchdogs tripped.

Tried to recover the OSEM damping. 

=> The watchdogs for all suspensions except for ITMX were restored. ITMX seems to be stuck. No further action by me for now.

  14727   Fri Jul 5 20:57:04 2019   Koji | Update | SUS | Another M7.1 EQ

[Kruthi, Koji]

Koji came to the lab to align the IMC/IFO, but found the mirrors dancing around. Kruthi told me that there was an M7.1 EQ at Ridgecrest. It looks like aftershocks of this EQ are going on, so we need to wait for an hour before starting the alignment work.

ITMX and ETMX are stuck.

Attachment 1: Screenshot_from_2019-07-05_21-03-06.png
  14728   Fri Jul 5 21:53:10 2019   Koji | Update | SUS | Another M7.1 EQ

- ITM unstuck now
- IMC briefly locked at TEM00

A series of aftershocks came. I could unstick ITMX by turning on the damping during one of the aftershocks.
Between the aftershocks, MC1~3 were aligned to the previous DoF values. This allowed the IMC to flash. Once I got a lock on a low-order TEM mode, it was easy to recover the alignment to a weak TEM00.
Now, at least temporarily, the full alignment of the IMC has been recovered.

  14729   Fri Jul 5 22:21:13 2019   Koji | Update | SUS | Another M7.1 EQ

In fact, ETMX was not stuck until today's M7.1 EQ. After that it got stuck, but during the aftershocks, all the OSEMs occasionally showed full swing of the light levels. So I believe the magnets are OK.

Attachment 1: Screenshot_from_2019-07-05_22-19-57.png
  14743   Wed Jul 10 14:55:32 2019   Koji | Update | General | Projector lightbulb blown out

In fact the projector is still working. The lamp timer showed ~8200hrs. I just reset the timer, but I'm not sure that was the cause of the shutdown. I also set the fan mode to "High Altitude" to help cooling.

  14744   Wed Jul 10 14:57:01 2019   Koji | Summary | CDS | Channel recipe for iscaux upgrade

The list of the iscaux channels and pin assignments was posted to Google Drive.
The spreadsheet is viewable via the link sent to the 40m ML. It was shared with foteee@gmail for full access.

Summary

  • We need
    4 ADC modules
    5 DAC modules
    5 Binary I/O modules
  • Be aware that there are bundled multi-bit digital I/O channels such as "mbboDirect" and "mbbi".
  • The full db records of the new channels need to be inferred from the existing channels.

Necessary electronics modification

1. D990694 whitening filter modification (4 modules)

This module shares the fast and slow channels on the top DIN96 (P1) connector. Also, the whitening selector (done by an analog signal per channel) is assigned over 17 pins of the P1 connector, necessitating a second DSUB cable. By migrating the fast channels, we can swap the cable from P1 to P2, and the whitening selectors are then concentrated on the first Dsub. (See Attachment 1, P1)

2. D040180 / D1500308 Common Mode Board

The CM servo board itself doesn't need any modification. The CM board uses P1 and P2, so we need to manufacture a special connector for the CM board P2 (cf. the adapter board for P1, T1800260). See also D1700058.

3. D990543A1 LSC Photodiode Interface

The PD I/F board has the DC mon channels spread beyond the 16-pin limit. P1 21A can be connected to 6A so that we can accommodate it in the first Dsub.
Also, the board uses AD797s. This is not necessary; we can replace them with OP27s. I actually don't know what is happening with the bias control, temp mon, enable, and status lines. These features should be disabled at the I/F and the PDs. (See Attachment 2, P1)

Attachment 1: D990694-B.pdf
Attachment 2: D990543A1.PDF
  14756   Fri Jul 12 18:54:47 2019   Koji | Update | General | Item loan: optical chopper from Cryo Lab

Optical chopper borrowed from CryoLab to 40m

https://nodus.ligo.caltech.edu:8081/Cryo_Lab/2458

  14764   Tue Jul 16 15:17:57 2019   Koji | HowTo | CDS | Final bit bug of the BIO CDS module

Yutaro talked about the BIO bug in the KAGRA elog: http://klog.icrr.u-tokyo.ac.jp/osl/?r=9536

I think I made a similar change for the 40m models somewhere (I don't remember where), but be aware of the presence of this bug.

  14767   Wed Jul 17 17:56:18 2019   Koji | Configuration | Computers | Gave resolv.conf to giada

Kruthi noticed that she could not login to rossa from giada.

I checked /etc/resolv.conf and it was

nameserver 127.0.0.1

so obviously it is useless to refer to localhost (i.e. giada itself) as the nameserver.

I copied our usual resolv.conf to giada as following:

nameserver 192.168.113.104
nameserver 131.215.125.1
nameserver 8.8.8.8

search martian

Giada's ssh known_hosts had a stale entry for rossa, so I had to clean it up, but after that we can connect to rossa from giada just by "ssh rossa".

Case closed.

  14770   Thu Jul 18 00:51:52 2019   Koji | Summary | CDS | iscaux electronics modifications

Following the plan in ELOG 14744, the ISC PD interface and the whitening filter boards have been modified. The ISC PD I/Fs were restored to the crate and the cables were connected. The whitening filters are still on the electronics bench for some more tests before being returned to the crate.

The updated schematics were uploaded as https://dcc.ligo.org/D1900318 and https://dcc.ligo.org/D1900319

- Modification of the ISC PD interface: jumpers between DIN96 P1 and P2. Replaced all AD797s with OP27s. In fact, only I/F #1 (the leftmost) had a total of 12 AD797s; the other units already had OP27s.

- Modification of the whitening filter: Jumpers between DIN96 P1 and P2.

Attachment 1: LSC_whitening2.jpg
Attachment 2: LSC_whitening.jpg
  14775   Thu Jul 18 22:34:40 2019   Koji | Summary | CDS | iscaux electronics modifications

The whitening filter modules have been restored to the crates. The SMA cables have been restored and fastened with a spanner. The ribbon cable to the antialiasing board was also connected. The backplane cables have not been moved from the upper DIN96 connector to the lower one.

Everything is expected to be good, but just keep an eye on the LSC signals as the boards have not been quantitatively tested yet. If you find something suspicious, report it on the elog.
