ID | Date | Author | Type | Category | Subject
16501 | Fri Dec 10 19:22:01 2021 | Koji | Update | VAC | Pumping down the RGA section |
The scan result was ~x10 higher than the previously reported scan from 2020/9/15 (https://nodus.ligo.caltech.edu:8081/40m/15570), which itself was already somewhat high compared to the reference taken on 2018/7/18.
This could simply mean that the vacuum level at the RGA was ~x10 higher.
We'll just go ahead with the vacuum repair and come back to the RGA once we return to "vacuum normal".
Meanwhile, I asked Jordan to turn off the RGA to let it cool down. I shut off the RGA section and turned TP2 off. |
16502 | Fri Dec 10 21:35:15 2021 | Koji | Summary | SUS | Vertex SUS DAC adapter ready |
Four units of the Vertex SUS DAC adapter (https://dcc.ligo.org/LIGO-D2100035) are ready:
https://dcc.ligo.org/LIGO-S2101689
https://dcc.ligo.org/LIGO-S2101690
https://dcc.ligo.org/LIGO-S2101691
https://dcc.ligo.org/LIGO-S2101692
The units are completely passive right now, with the option of adding a dewhitening board inside later.
So the power switch does nothing.
Some of the components for the dewhitening enhancement are attached inside the units.
Attachment 1: PXL_20211211_053155009.jpg
Attachment 2: PXL_20211211_053209216.jpg
Attachment 3: PXL_20211211_050625141-1.jpg
16510 | Wed Dec 15 17:44:18 2021 | Koji | Summary | BHD | Part VIII of BHR upgrade - Placed LO1 |
If ITMX already has another side magnet, we can migrate the ITMX side OSEM to the other side. This way, interference between the OSEMs can be avoided. |
16515 | Thu Dec 16 15:54:08 2021 | Koji | Update | Electronics | ITMX feedthroughs and in-vac cables installed |
Thanks for the installation.
With regard to the connector convention, let's use the attached arrangement so that it is consistent with the existing flange DSUB configuration. Not a big deal.
Attachment 1: PXL_20211216_235056582.jpg
16516 | Thu Dec 16 17:41:12 2021 | Koji | Update | BHD | Coil driver test failed for S2100619-v1 |
Good catch. It turned out that both the + and - sides of the output stages for CH2 were oscillating at ~600kHz. Touching around the components with a capacitance stick stopped the oscillation, and the outputs stayed calm.
This means that the phase margin becomes small as the circuit starts up.
I decided to increase the capacitances C6 and C20 (WIMA 150pF) to 330pF (WIMA FPK2 100V), and the oscillation was tamed; 220pF didn't stop it. After visually checking the signal behavior with an oscilloscope, the unit was passed to Anchal for the TF test.
The modification was also recorded in DCC S2100619. |
Attachment 1: PXL_20211217_001735762.jpg
Attachment 2: PXL_20211217_001719345.jpg
Attachment 3: PXL_20211217_005344828.jpg
Attachment 4: PXL_20211217_010131027.PORTRAIT.jpg
Attachment 5: PXL_20211217_011423823.jpg
Attachment 6: HAMA_Driver_V4.pdf
16519 | Fri Dec 17 12:32:35 2021 | Koji | Update | SUS | Remaining task for 2021 |
Anything else? Feel free to edit this entry.
- SUS: AS1 hanging
- SUS: PR3/SR2/LO2 3/4" thick optic hanging
v Electronics chain check (From DAC to the end of the in-air cable / From the end of the in-air cable to the ADC)
[ELOG 16522]
- Long cable installation (4x 70ft)
- In-air cable connection to the flange
- In-vac wiring (connecting LO1 OSEMs)
- LO1 OSEM insertion/alignment
- LO1 Damping test
16521 | Fri Dec 17 19:16:45 2021 | Koji | Update | BHD | SOS assembly |
Our convention @40m is the arrow on the thinnest side, pointing to the HR side, but nobody says Lambda does the same.
We can just remount the mirror without breaking the wires and adjust the pitch, if you do it carefully.
Does this mean that LO1 is also likely to have the wedge pointing up? Or did you rotate the mirror to make the wedge reflection as horizontal as possible? |
16522 | Fri Dec 17 19:19:42 2021 | Koji | Update | SUS | Remaining task for 2021 |
I feared that any mistake in the electronics chain could turn out to be a showstopper.
So I quickly checked the signal assignments for the ADC and DAC chains.
I had initial confusion (see below), but it was confirmed that the electronics chains (at least for LO1) are correct.
Note: One 70ft cable is left around the 1Y0 rack
There are a few points to be fixed:
- It looks like the ADC/DAC card # assignment has been messed up.
CDS ADC0 -> Cable label ADC1 -> AA A1 -> ...
CDS ADC1 -> Cable label ADC0 -> AA A0 -> ...
CDS DAC0 -> Cable label DAC2 -> AI D2 -> ...
CDS DAC1 -> Cable label DAC0 -> AI D0 -> ...
CDS DAC2 -> Cable label DAC1 -> AI D1 -> ...
(What is going on here... please confirm and correct so that they become straightforward)
Once this puzzle was solved, I could confirm reasonable connections from the ends of the 70ft cables to the ADC/DAC.
- We also want to change the ADC card assignment. The face OSEM readings must be assigned to ADC1 and the side OSEM readings to ADC0.
My system wiring diagram needs to be fixed accordingly too.
This is because the last channel of the first ADC (ADC0) is not available for us and is used for DuoTone. |
Attachment 1: PXL_20211218_030145356.MP.jpg
16524 | Sat Dec 18 00:56:14 2021 | Koji | Update | BHD | SOS assembly |
Sad... We just need to check the wedge direction every time, unfortunately.
Pencil: can you try to gently wipe it off with solvent (IPA / acetone) & a swab?
If it does not come off in the end, it's all right to leave it. Do we want to scribe the arrow mark? You need a diamond pen. |
16526 | Mon Dec 20 13:52:01 2021 | Koji | Update | BHD | SOS assembly |
LO1: No need to remove the pencil mark for the damping test. Until we see serious contamination on the LO1 optic, we don't need to take the optic off the mount and clean it. If there is a chance of rehanging (because of a broken wire/etc.), we will wipe off the pencil mark then.
Other optics: wipe the pencil mark as much as possible. |
16529 | Tue Dec 21 16:35:39 2021 | Koji | Update | VAC | ITMX NW feedthru (LO1-1) connector pin bent |
I've received a report that a pin of an ITMX NW feedthru connector was bent. (Attachment 1)
The connector is #1 (upper left) and planned to be used for LO1-1.
This is Pin 25, used for the PD K (cathode) of OSEM #1. This means that Coil Driver #1 (3 OSEMs) uses this pin, but Coil Driver #2 (2 OSEMs) does not.
Anyway, I tried to fix it by bending it back. With some tools, it was straightened enough for plugging in the cable connector. (Attachment 2)
The pins seemed exceptionally soft compared to the ones used for usual DSUBs, probably because of the vacuum compatibility.
So it's better to approach the pins parallel to the surface and not apply mating pressure until you are sure that all 25 pins are inserted in the counterpart holes. |
Attachment 1: PXL_20211222_002019620.jpg
Attachment 2: PXL_20211222_003014068.jpg
16532 | Wed Dec 22 14:57:05 2021 | Koji | Update | General | chiara local backup |
The chiara local backup of /cvs/cds has not been running since the move of chiara on Nov 19. The remote backup has not been taken since 2017.
The lack of the local backup was due to a misconfiguration of /etc/fstab.
It was fixed, and the backup disk is now mounted. We'll see whether the backup script runs tomorrow morning.
The backup disk is smaller than the main disk, so sooner or later we will face the backup capacity problem again.
The localbackup script was complaining because there was no backup disk mounted:
backup>pwd
/opt/rtcds/caltech/c1/scripts/backup
backup>tail localbackup.log
2021-12-18 07:00:02,002 INFO Updating backup image of /cvs/cds
2021-12-18 07:00:02,002 ERROR External drive not mounted!!!
2021-12-19 07:00:01,146 INFO Updating backup image of /cvs/cds
2021-12-19 07:00:01,146 ERROR External drive not mounted!!!
2021-12-20 07:00:01,255 INFO Updating backup image of /cvs/cds
2021-12-20 07:00:01,255 ERROR External drive not mounted!!!
2021-12-21 07:00:01,361 INFO Updating backup image of /cvs/cds
2021-12-21 07:00:01,361 ERROR External drive not mounted!!!
2021-12-22 07:00:01,469 INFO Updating backup image of /cvs/cds
2021-12-22 07:00:01,470 ERROR External drive not mounted!!!
fstab had no entry for the backup disk.
backup>cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda1 during installation
UUID=972db769-4020-4b74-b943-9b868c26043a / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=a3f5d977-72d7-47c9-a059-38633d16413e none swap sw 0 0
# OLD BACKUP DISK
#UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000" /media/40mBackup ext4 defaults,relatime,commit=60 0 0
# CURRENT BACKUP DISK as of 2021/09/02
#UUID="1843f813-872b-44ff-9a4e-38b77976e8dc" /media/40mBackup ext4 defaults,relatime,commit=60 0 0
#fb:/frames /frames nfs ro,bg
# CURRENT MAIN DISK as of 2021/09/02
# UUID=92dc7073-bf4d-4c58-8052-63129ff5755b /home/cds ext4 defaults,relatime,commit=60 0 0
UUID="1843f813-872b-44ff-9a4e-38b77976e8dc" /home/cds ext4 defaults,relatime,commit=60 0 0
Checked the dev names of the disks and their UUIDs:
backup>sudo lsblk
[sudo] password for controls:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 446.9G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 18.9G 0 part [SWAP]
sdb 8:16 0 5.5T 0 disk
└─sdb1 8:17 0 5.5T 0 part /home/cds
sdc 8:32 0 3.7T 0 disk
└─sdc1 8:33 0 3.7T 0 part
sr0 11:0 1 1024M 0 rom
backup> sudo blkid
/dev/sda1: UUID="972db769-4020-4b74-b943-9b868c26043a" TYPE="ext4"
/dev/sda5: UUID="a3f5d977-72d7-47c9-a059-38633d16413e" TYPE="swap"
/dev/sdb1: UUID="1843f813-872b-44ff-9a4e-38b77976e8dc" TYPE="ext4"
/dev/sdc1: UUID="92dc7073-bf4d-4c58-8052-63129ff5755b" TYPE="ext4"
Added the fstab entry for the backup disk
media>cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda1 during installation
UUID=972db769-4020-4b74-b943-9b868c26043a / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=a3f5d977-72d7-47c9-a059-38633d16413e none swap sw 0 0
# OLD BACKUP DISK
#UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000" /media/40mBackup ext4 defaults,relatime,commit=60 0 0
# OLD BACKUP DISK as of 2021/09/02
#UUID="1843f813-872b-44ff-9a4e-38b77976e8dc" /media/40mBackup ext4 defaults,relatime,commit=60 0 0
# Current backup disk as of 2021/12/22
UUID="92dc7073-bf4d-4c58-8052-63129ff5755b" /media/40mBackup ext4 defaults,relatime,commit=60 0 0
#fb:/frames /frames nfs ro,bg
# CURRENT MAIN DISK as of 2021/09/02
# UUID=92dc7073-bf4d-4c58-8052-63129ff5755b /home/cds ext4 defaults,relatime,commit=60 0 0
UUID="1843f813-872b-44ff-9a4e-38b77976e8dc" /home/cds ext4 defaults,relatime,commit=60 0 0
16534 | Wed Dec 22 18:16:23 2021 | Koji | Update | SUS | Remaining task for 2021 |
The in-vacuum installation team has reported that the side OSEMs of ITMX and LO1 are going to interfere if LO1 is placed at the planned location.
I confirmed that ITMX has the side magnet on the other side (Attachment 1, an ITMX photo taken on 2016/7/21). So we can do this swap.
The ITMX side OSEM sticks out the most. By doing this operation, we will recover most of the space between ITMX and LO1. (Attachment 2) |
Attachment 1: ITMX_2016_07_21.jpg
Attachment 2: Screen_Shot_2021-12-22_at_18.03.42.png
16535 | Thu Dec 23 16:38:21 2021 | Koji | Update | General | Is megatron down? (Re: chiara local backup) |
The local backup seems to be working fine again. But I found that megatron is down, and this is a real issue. It should be fixed at the earliest chance.
It seems that the local backup was successfully taken this morning.
controls@nodus|backup> tail /opt/rtcds/caltech/c1/scripts/backup/localbackup.log
2021-12-19 07:00:01,146 INFO Updating backup image of /cvs/cds
2021-12-19 07:00:01,146 ERROR External drive not mounted!!!
2021-12-20 07:00:01,255 INFO Updating backup image of /cvs/cds
2021-12-20 07:00:01,255 ERROR External drive not mounted!!!
2021-12-21 07:00:01,361 INFO Updating backup image of /cvs/cds
2021-12-21 07:00:01,361 ERROR External drive not mounted!!!
2021-12-22 07:00:01,469 INFO Updating backup image of /cvs/cds
2021-12-22 07:00:01,470 ERROR External drive not mounted!!!
2021-12-23 07:00:01,594 INFO Updating backup image of /cvs/cds
2021-12-23 07:19:55,560 INFO Backup rsync job ran successfully, transferred 338425 files.
However, I noticed that autoburt has been stalled since Dec 6 (I usually check whether the backup is up to date using the autoburt snapshots).
Dec>pwd
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Dec
Dec>ls -l
total 24
drwxr-xr-x 26 controls controls 4096 Dec 1 23:07 1
drwxr-xr-x 26 controls controls 4096 Dec 2 23:07 2
drwxr-xr-x 26 controls controls 4096 Dec 3 23:07 3
drwxr-xr-x 26 controls controls 4096 Dec 4 23:07 4
drwxr-xr-x 26 controls controls 4096 Dec 5 23:07 5
drwxr-xr-x 19 controls controls 4096 Dec 6 16:07 6
There are a bunch of errors in the log file, as follows, but maybe this is not an issue:
controls@nodus|burt> pwd
/opt/rtcds/caltech/c1/burt
controls@nodus|burt> tail burtcron.log
!!! ERROR !!! Target c1supepics Snapshot file inconsistent with Request file
!!! ERROR !!! Target c1tstepics Snapshot file inconsistent with Request file
!!! ERROR !!! Target c1x10epics Snapshot file inconsistent with Request file
!!! ERROR !!! Target c1aux Snapshot file inconsistent with Request file
!!! ERROR !!! Target c1dcuepics Snapshot file inconsistent with Request file
!!! ERROR !!! Target c1iscaux Snapshot file inconsistent with Request file
!!! ERROR !!! Target c1iscepics Snapshot file inconsistent with Request file
!!! ERROR !!! Target c1losepics Snapshot file inconsistent with Request file
!!! ERROR !!! Target c1psl Snapshot file inconsistent with Request file
!!! ERROR !!! Target c1susaux Snapshot file inconsistent with Request file
The real issue seems to be that megatron is down. It has a lot of housekeeping jobs on cron, including the N2 pressure alert.
https://wiki-40m.ligo.caltech.edu/Computers_and_Scripts/CRON
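A quick way to confirm the host state from nodus (a sketch; assumes megatron resolves on the martian network):
ping -c 3 megatron    # no replies while the host is down
ssh megatron uptime   # likewise fails/hangs while the host is down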
This needs to be fixed at the earliest chance. |
16536 | Fri Dec 24 16:49:41 2021 | Koji | Update | General | Is megatron down? (Re: chiara local backup) |
It turned out that the UPS installed on Nov 22 failed (cf https://nodus.ligo.caltech.edu:8081/40m/16479). In fact, it was alive for just 2 weeks!
The APC UPS unit indicated F06. According to the manual (https://www.apc.com/shop/us/en/products/APC-Power-Saving-Back-UPS-Pro-1000VA/P-BR1000G), F06 means "Relay Welding" and cannot be fixed by a user. Resetting the UPS eliminated the error, but since I didn't want to risk the same issue while no one is in the lab, I moved the megatron power source from the UPS to the power strip on 1Y7. So megatron is currently vulnerable to a power glitch.
After the power cords were restored, megatron eventually recovered ssh terminals. I manually ran autoburt.cron at 16:50 so that the latest snapshot was taken. |
Attachment 1: PXL_20211224_235652821.jpg
16538 | Sun Jan 2 20:46:46 2022 | Koji | Update | SUS | End SUS Electronics building |
19:00~ Start working on the electronics bench
The following units were tested and ready to be installed. These are the last SUS electronics units and we are now ready to upgrade the end SUS electronics too.
40m End ADC Adapter Unit D2100016 / 2 units (S2200001, S2200002)
40m End DAC Adapter Unit D2100647 / 2 units (S2200003, S2200004)
These are placed on Tega's desk together with the vertex DAC adapters.
0:30 End work |
Attachment 1: PXL_20220103_081133119.jpg
16547 | Thu Jan 6 13:54:28 2022 | Koji | Update | CDS | Yearly DAQD fix 2022! |
Just restarting all the c1sus2 models fixed the issue. (Attachment 1)
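For the record, the restart from a terminal on the host goes something like this (a sketch; the model names below are placeholders for whatever actually runs on c1sus2):
ssh c1sus2
rtcds restart c1x07   # restart the IOP first (placeholder name)
rtcds restart c1su2   # then each user model (placeholder name)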
SUS2 ADC1 CH21 is saturated. I'm not yet sure if this is an electronics issue or an ADC issue.
SUS2 ADC1 CH10 also has a large offset. This should also be investigated. |
Attachment 1: Screenshot_2022-01-06_13-57-40.png
16548 | Thu Jan 6 14:08:14 2022 | Koji | Update | CDS | More BHD SUS screens added to sitemap |
More BHD SUS screens added to sitemap (Attachment 1) |
Attachment 1: Screenshot_2022-01-06_14-06-15.png
16549 | Thu Jan 6 15:10:38 2022 | Koji | Update | SUS | ITMX Chamber work |
[Anchal, Koji]
=== Summary ===
- ITMX SD OSEM migration done
- LO1 OSEM insertion and precise adjustment (part 1) done
- LO1 POS/PIT/YAW/SD motions were damped
=== General Remarks ===
- 15:00 Entered into ITMX.
- We were equipped with N95 and took physical distance as much as possible.
- 17:00 Temporarily came out from the lab.
- 18:30? Came into the chamber again
- 20:00 Sus damped. OSEM work continues
- 21:00 OSEM installation work done. Exit.
=== ITMX SD OSEM position swap ===
- Moved the LO1 suspension to the center of the chamber
- Removed the ITMX SD OSEM from the right side (west side) and tried to move it to the other side.
- Noted that the open light output of the ITMX SD was 908 at the output of the SDSEN filter module, so the half-light target is 454. These numbers include the "cnt2um" calibration of 0.36. That means the open light raw ADC count was supposed to be 2522 (= 908 / 0.36).
- The OSEM set screw (silver plated, with a plunger) was removed from the old position. We first tried to recycle it on the other side, but it didn't go into the thread with fingers. After convincing ourselves that the threaded holes were identical on both sides, we put in a new identical plunger set screw with an Allen key, and it went in!
- Now the ITMX SD OSEM was inserted from the east side. Once we saw some shadow on the OSEM signal, the SD damping was turned on with the previous setting. And this successfully damped the side motion. ⭕️
- A bit finer adjustment has been done. After a few trials, we reached the stable output of ~400. Considering the temporary leveling of the table, we decided this is enough for now ⭕️. The set screw was tightened.
- To make the further work safer w.r.t. the ITMX magnets, Anchal fastened the EQ stops of the ITMX sus except for the bottom four.
- Photo: [Attachment 1]
=== LO1 OSEM installation ~ wiring ===
- Now LO1 was moved back to the planned position.
- For the wiring, we (temporarily) clamped the in-vac DSUB cables to the stack table with metal clamps.
- Started plugging the OSEMs into the DSUB cables.
- Looking at the LO1-1 cable from the mating side with the longer side up: the top-right pin of the female connector is Pin 1 as usual. From right to left, the LL / UR / UL OSEMs were plugged in one by one while we watched the OSEM PD signals.
- The LO1-2 cable got the LR / SD OSEMs connected (from right to left).
- Photo: [Attachment 2]
- LO1 open light levels (raw ADC counts); the 2nd number is the target half-light level:
- UL 27679 (-> 13840)
- UR 29395 (-> 14697)
- LR 30514 (-> 15257)
- LL 27996 (-> 13998)
- SD 26034 (-> 13017)
=== RTS Filter implementation ===
- Anchal copied the filter module settings from other suspensions.
- We also implemented the simple input and output matrices.
=== LO1 OSEM insertion ===
- We struggled to make the suspension freely swinging with the OSEMs inserted.
- It seemed that the magnets were attracted to the OSEMs due to magnetic components.
- It turned out that the OSEMs were not fastened well and not seated in the holder plates.
- Once this was fixed, we found that the mirror height was too high for the given OSEM heights.
The suspension height (or the OSEM height) should be decided with the OSEMs not inserted but fully fastened, to prevent their misalignment.
- Decided to lift up the OSEM plates in situ.
- Soon we found that the OSEM holder plates are not fastened at all [Attachment 3 arrows]
- The plates were successfully lifted up and the suspension became much more freely swinging even with the OSEMs inserted. ⭕️
=== LO1 damping and more precise OSEM insertion ===
- Once the OSEMs were inserted to a light level of 30~70%, we started trying to damp the motion. The side damping was somewhat successful, but the face ones were not.
- We checked the filters and found that the coil output filters didn't have the alternating signs.
- Once the coil signs were corrected, the damping became more straightforward.
- And the robust damping allowed us to fine-tune the OSEM insertion.
- In the end, what we had for the light levels were
- UL 14379 (52%)
- UR 14214 (48%)
- LR 14212 (47%)
- LL 12869 (46%)
- SD 14358 (55%)
The damping is working well. [Attachment 4]
Post continues at 40m/16552. |
Attachment 1: PXL_20220107_044739280.MP.jpg
Attachment 2: PXL_20220107_044958224.jpg
Attachment 3: PXL_20220107_044805503.NIGHT.jpg
Attachment 4: Screen_Shot_2022-01-06_at_20.54.04.png
16553 | Thu Jan 6 22:18:47 2022 | Koji | Update | CDS | SUS screen debugging |
Indicated by the red arrow:
Even when the side damping servo is off, a number appears at the input of the output matrix.
Indicated by the green arrows:
The face magnets and the side magnets use different ADCs. How about opening a custom ADC panel that accommodates all ADCs at once? Same for the DAC.
Indicated by the blue arrows:
This button opens a custom FM window. When the pitch gain was modified with a ramping time, the pitch and yaw gains grew at the same time even though only the pitch gain had been modified.
Indicated by the orange circle:
The numbers are not indicated here, but they are input-related numbers (for watchdogging) rather than output-related numbers. It is confusing to place them here. |
Attachment 1: Screen_Shot_2022-01-06_at_18.03.24.png
16557 | Fri Jan 7 18:24:25 2022 | Koji | Update | BHD | SOS assembly -- SR2 |
Vacseal is in the freezer. It could have expired sooooo many years ago; we need some cure testing.
Can you release the part numbers of the ordered components (and how/where to use them), so that we can incorporate them into the CAD model?
Quote: |
Again, we should apply some glue to the counterweight to prevent it from spinning on the setscrew. Is there a glue other than EP30 that we can use?
Related: PEEK nuts, screws, and washers were ordered from McMaster.
16558 | Fri Jan 7 18:28:13 2022 | Koji | Update | BHD | PR2 Sat Amp has a bad channel |
Leave the unit to me. I can look at it on Mon. In the meantime, you can take a replacement unit from the electronics stack.
Also: was this unit tested before? If so, what was the testing result at the time? |
16564 | Mon Jan 10 15:59:46 2022 | Koji | Update | BHD | PR2 Sat Amp has a bad channel |
The issue was present in the cable between the small adapter board and the rear panel. The cable and the DSUB 25 connectors were replaced. The removed parts were resoldered. I did the basic test of the channel.
Attachment 1: I cleaned up the area of the PD3 circuit of S2100556 and checked the voltage when the circuit was energized. The PD photocurrent line from the rear panel still showed the anomaly even with R25 removed. So the problem was between the rear panel and the outer side of R25. I started to remove the cables to localize the issue and found that the issue disappeared when the ribbon cable was removed.
Attachment 2: I didn't investigate how the ribbon cable went bad; it was just trashed. The cable and the 25-pin DSUB connectors were replaced, and the line in question looked normal.
Attachment 3: All the components removed were stuffed again. The I/V output of the circuit showed a 0.7mV offset, but it seemed within the normal range. Touching R25 with a finger brought it up to ~10mV, as the other channels do. BTW: for the 1000pF cap (C10) I used a stock 1000pF cap (KEMET C330C102JDG5TA, 5%, 1kV, C0G) instead of the nominal one (KEMET C317C102G1G5TA, 2%, 100V, C0G).
Attachment 4: Noticed that the jumpers for shield grounding were missing, so they were installed (Attachment 5). This jumper is connected to Pin 13. This line becomes Pin 1 of the DSUB25 sat-amp cable because of the adapter board D2100148. The sat amp cable is D2100675. Hmm. In fact, this line does not touch the shield anywhere (unlike the aLIGO case). So only the chassis provides the cable shielding, whether or not the jumpers are connected.
Attachment 6: Final state of the circuit |
Attachment 1: trouble_shoot1.jpg
Attachment 2: trouble_shoot2.jpg
Attachment 3: S2100556_PD3.jpg
Attachment 4: shield_grounding_before.jpg
Attachment 5: shield_grounding_after.jpg
Attachment 6: S2100737.jpg
16573 | Tue Jan 11 13:43:14 2022 | Koji | Update | SUS | Temporary watchdog |
I don't remember the syntax of the db file, but this calc channel computes A&B, where A and B correspond to INPA and INPB:
field(CALC,"A&B")
field(INPA,"C1:SUS-LO1_UL_COMM PP NMS")
field(INPB,"C1:SUS-LO1_LATCH_OFF PP MS")
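For reference, a complete record block in the db file would look something like this (a sketch; the record name is a hypothetical placeholder):
# the record name below is a hypothetical placeholder
record(calc, "C1:SUS-LO1_UL_CALC")
{
    field(CALC, "A&B")
    field(INPA, "C1:SUS-LO1_UL_COMM PP NMS")
    field(INPB, "C1:SUS-LO1_LATCH_OFF PP MS")
}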
What is this LATCH doing?
16584 | Fri Jan 14 03:07:04 2022 | Koji | Update | BHD | PR2 transmission calculation |
I opened the notebook, but I was not sure where you have the loss per bounce for the arm cavity.
PRC_RT_Loss = 2 * PR3_T + 2 * PR2_T + 2 * Arm_Cavity_Finesse * ETM_T + PRM_T
Do you count the arm reflection loss to be only 2 * 13ppm * 450 = 1.17%? |
16594 | Tue Jan 18 18:19:22 2022 | Koji | Summary | BHD | AS4 placed in ITMY Chamber, OSEMs connected |
AS4 satellite amplifier D1002818 / D080276 troubleshoot
I dug into the circuit to see what/where things were wrong.
- UL saturation issue: The open light voltage at the TIA output (I-V out) was 10.4V. The photocurrent of 86uA was simply a little too much for the transimpedance gain of 121kOhm. So R18 was replaced with 100kOhm. This made the I-V out 8.6V and the ADC input count 28200 (Attachment 1). This modification was done on the unit S2100742 CH1 (LEFT CH1).
- Non-responding LL issue: Now moved on to LL (LEFT CH2). The basic circuit test didn't reveal any problem, so the DSUB25 cables were swapped at the vacuum feedthru flange. The result is shown in Attachment 2: the LL OSEM issue moved to the 2nd ch of the right channel of the sat amp (CH6). This means that the problem is somewhere in the vacuum chamber (including the vacuum feedthru). We need to check the in-vacuum cable and the OSEM. We can test the OSEM by swapping the OSEM connector between LL and UL (for example).
Attachment 1: Screen_Shot_2022-01-18_at_17.48.16.png
Attachment 2: cable_swap.png
16597 | Wed Jan 19 14:41:23 2022 | Koji | Update | BHD | Suspension Status |
Is this the correct status? Please directly update this entry.
LO1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
LO2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS4 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR3 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
SR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
Last updated: Fri Jan 28 10:34:19 2022 |
16601 | Thu Jan 20 00:26:50 2022 | Koji | Update | SUS | Temporary watchdog |
As the new db is made for c1susaux: 1) it needs to be configured to be read by c1susaux, 2) c1susaux needs to be restarted, 3) the channels need to be recorded by FB, and 4) FB needs to be restarted.
(^- Maybe not the exact procedure, but conceptually like this)
cf.
https://wiki-40m.ligo.caltech.edu/How_To/Add_or_rename_a_daq_channel
16602 | Thu Jan 20 01:48:02 2022 | Koji | Update | BHD | PR2 transmission calculation updated |
The IMC is not that lossy. The IMC output is supposed to be ~1W.
The critical coupling condition is G_PRC = 1/T_PRM = 17.7. If we really have L_arm = 50ppm, we will be very close to critical coupling. Maybe we are OK with such a condition, as our testing time would be much longer in PRMI than in PRFPMI during the first phase. If the arm loss turns out to be higher, we'll be saved by falling into undercoupling.
When the PRC is close to critical coupling (like the 50ppm case), we roughly have Tprc x 2 and Tarm almost equal. So each beam will have ~1/3 of the input power, i.e. ~300mW. That's probably too much even for the two OMCs (i.e. 4 DCPDs). That's OK: we can reduce the input power by a factor of 3~5.
Quote: |
LO Power
LO power when PRFPMI is locked is calculated by assuming 1 W of input power to the IMC. The IMC is assumed to let pass 10% of the power.
16607 | Thu Jan 20 17:34:07 2022 | Koji | Update | BHD | V6-704/705 Mirror now @Downs |
The PR2 candidate V6-704/705 mirrors (Qty 2) are now @Downs. Camille picked them up for the measurements.
To identify the mirrors, I labeled them (on the box) as M1 and M2. Also, the HR side was checked to be the side pointed to by the arrow mark on the barrel; e.g., Attachment 1 shows the HR side up. |
Attachment 1: PXL_20220120_225248265_2.jpg
Attachment 2: PXL_20220120_225309361_2.jpg
16612 | Fri Jan 21 14:51:00 2022 | Koji | Update | BHD | V6-704/705 Mirror now @Downs |
Camille@Downs measured the surface of these M1 and M2 using Zygo.
Result of the ROC measurements:
M1: ROC = 2076m (convex)
M2: ROC = 2118m (convex)
Here are screenshots. One file shows the entire surface and the other shows the central 30mm. |
Attachment 1: M1.PNG
Attachment 2: M1_30mm.PNG
Attachment 3: M2.PNG
Attachment 4: M2_30mm.PNG
16629 | Thu Jan 27 20:46:38 2022 | Koji | Summary | BHD | Part III of BHR upgrade - Install PR2, balance, and attempt OSEM tuning |
- Started debugging D1002818 / S2100737 (8:30PM)
- Confirmed the issue of the negative output of the UR sensor with the dummy OSEM connected at the air side of the ITMY chamber. Both PD Out and PD Mon have negative outputs.
- The same issue remains when the dummy OSEM box is connected to the chassis with a short DB25M/F cable at the rack.
- Started debugging the setup at the workbench.
- CH1 TIA Output=-3.0V / CH2 (in question) TIA Output =-2.7V => No Problem
- CH1 Whitening Out=+3.0V / CH2 Whitening Out=-1.4V => Problem
- Resolder the components around whitening CH2 => no change
- Remove AD822 and replace it with a new one => CH2 Whitening Out = +2.7V ==> Problem solved
- PD1~3 channels of the left and right PCBs tested with the OSEM box ==> nearly +3V/-3V differential output (All Clear)
- Chassis closing
- Chassis restored in Rack 1Y1 and the normal output with the dummy OSEM box confirmed
- Mission Completed (9:30PM)
- Elog finished (9:40PM)
- Case closed
Attachment 1: Screen_Shot_2022-01-27_at_20.33.56.png
Attachment 2: Screen_Shot_2022-01-27_at_21.30.16.png
16631 | Fri Jan 28 11:30:52 2022 | Koji | Summary | BHD | Part III of BHR upgrade - PR2 OSEM finalized, reinstall LO1 OSEMs |
All green! Great work, Team! |
16635 | Tue Feb 1 15:33:28 2022 | Koji | Summary | BHD | Optomechanics configuration for the in-vacuum aux small optics |
I've summarized the optomechanics configuration for the in-vacuum aux small optics.
It's not obvious here, but the post for POP_SM4 is the stack of BA2V, Newport 9953, PLS-T238, and LMR1V. The mirror is a CM254-750-E03 curved mirror. This should go on the suspension base. I hope I did not make a mistake there... |
Attachment 1: Invac_opto_mechanics_v2.pdf
16640 | Wed Feb 2 18:55:58 2022 | Koji | Summary | BHD | Optomechanics configuration for the in-vacuum aux small optics |
Here is more detail of the POP_SM4 mount assembly.
It's a combination of BA2V + PLS-T238 + BA1V + TR-1.5 + LMR1V + Mirror: CM254-750-E03
Between BA1V and PLS-T238, we have to do a washer action to fix the post (8-32) through the 1/4-20 slot. Maybe we can use a 1" post shim from Thorlabs/Newport.
Otherwise, we should be able to fasten the other joints with silver-plated screws we already have/ordered.
I think TR-1.5 (and a shim) have not been given to Jordan for C&B. I'll take a look at these. |
Attachment 1: Screen_Shot_2022-02-02_at_6.46.46_PM.png
16641 | Wed Feb 2 21:16:23 2022 | Koji | Summary | BHD | Optomechanics configuration for the in-vacuum aux small optics |
Asked Jordan to clean 2x 1.5" posts (0.5" dia) and a washer with 1" dia.
Attachment 1: PXL_20220203_050156031.jpg
16651 | Mon Feb 7 16:53:02 2022 | Koji | Update | General | Scheduled power outage recovery |
I went to the X end and found it was warm. It turned out that not all the A/Cs were on. They have now been turned on. |
Attachment 1: PXL_20220208_001646282.jpg
Attachment 2: PXL_20220208_001657871.jpg
16653 | Wed Feb 9 13:55:05 2022 | Koji | Update | General | Bringing back CDS |
Great recovery work and cleanup of the rebooting process.
I'm just curious: did you observe that the c1sus2 cards came up in a different numbering order than before, along with the power outage/cycling? |
16656 | Thu Feb 10 14:39:31 2022 | Koji | Summary | Computers | Network security issue resolved |
[Mike P / Koji / Tega / Anchal]
IMSS/LIGO IT notified us that the "ILOM ports" of one of our hosts on the "114" network were open. We tried to shut down the obvious machines but could not identify the host in question. So we decided to do a bit more systematic search for the host.
[@Network Rack]
- First of all, we disconnected the optical cables coming to the GC router while a ping was running on a laptop connected via AIRLIGO (i.e. outside of the 40m network). This made the ping stop. This means that the issue was definitely in the 40m.
- Secondly, we started to disconnect (and reconnect) the ethernet cables from the GC router one by one. We found that the ping response stopped when the cable named "NODUS" was disconnected.
[@40m IFO lab]
- So we tracked the cable down in the 40m lab. After a while, we identified that the cable was indeed connected to nodus.
- Nodus was supposed to have only one network connection, to the martian network, since the introduction of the bidirectional NAT router (rather than the old configuration with a single-direction NAT router).
- In fact, the cable was connected to a "non-networking" port of nodus (Attachment 1). I guess the cable had been connected like this for a long time, but somehow the ILOM (IPMI) port was activated along with the recent power cycling.
- The cable was disconnected at nodus too (Attachment 2), and a piece of tape was attached to the port so that we don't connect anything to it anymore. |
Attachment 1: PXL_20220210_220816955.jpg
Attachment 2: PXL_20220210_220827167.jpg
16659 | Thu Feb 10 19:03:23 2022 | Koji | Update | General | Scheduled power outage recovery - Locking mode cleaner(s) |
I came back to the 40m and started the investigation.
If I ping 192.168.113.92, it responds, but telnet (port 23) is rejected. I somehow tried ssh and it responds! I could even log in to the host using the usual password. Here is the prompt:
controls@nodus|~> ssh 192.168.113.92
controls@192.168.113.92's password:
...
controls@c1sus2:~ 0$
Oh no...
Looks like c1sus2 and the videomux have an IP address conflict.
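One way to confirm a duplicate IP from another martian machine (a sketch; the interface name eth0 is an assumption):
sudo arping -D -I eth0 192.168.113.92   # duplicate address detection; replies from two different MACs confirm the conflict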
Here are the useful ELOG links:
https://nodus.ligo.caltech.edu:8081/40m/4498
https://nodus.ligo.caltech.edu:8081/40m/4529 |
16660 | Thu Feb 10 19:46:37 2022 | Koji | Update | General | Scheduled power outage recovery - Locking mode cleaner(s) |
== Assign new IP address to c1sus2 ==
cf: [40m ELOG 16398] [40m ELOG 16396]
- Shutdown c1sus2 (Oh, no. This killed c1lsc/c1sus/c1ioo... This should be taken care of later)
- Confirmed 192.168.113.87 is not alive
- Go to chiara
- Modify /diskless/root/etc/hosts
192.168.113.87 c1sus2 c1sus2.martian
- Modify /etc/dhcp/dhcpd.conf
host c1sus2 {
hardware ethernet 00:25:90:06:69:C2;
fixed-address 192.168.113.87;
}
- Modify /var/lib/bind/martian.hosts
c1sus2 A 192.168.113.87
videomux A 192.168.113.92
- Modify /var/lib/bind/rev.113.168.192.in-addr.arpa
87 PTR c1sus2.martian
92 PTR videomux.martian
- Reload/restart bind9 / dhcpd by running the following commands:
sudo service bind9 reload
sudo service isc-dhcp-server restart
- Restart c1sus2 and confirm if the IP address was actually changed
controls@c1sus2:~ 0$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:25:90:06:69:c2
inet addr:192.168.113.87 Bcast:192.168.113.255 Mask:255.255.255.0
...
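To double-check the name resolution after the bind reload (a sketch):
host c1sus2     # should now return 192.168.113.87
host videomux   # should return 192.168.113.92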
== Restart c1lsc / c1sus /c1ioo ==
- Reboot c1lsc/c1sus/c1ioo
- Go to scripts/cds
- Run startC1LSC.sh and follow the instruction
16661 | Thu Feb 10 21:10:43 2022 | Koji | Update | General | Video Mux setting reset |
Now the video matrix is responding correctly and the web interface shows up (Attachment 1).
Also, the video buttons respond as usual. I pushed the Locking Template button to bring the settings back to nominal. (Attachment 2) |
Attachment 1: Screenshot_2022-02-10_21-11-21.png
Attachment 2: Screenshot_2022-02-10_21-11-54.png
16662 | Thu Feb 10 21:16:27 2022 | Koji | Summary | CDS | chiara resolv.conf weirdo |
During the videomux debugging, I noticed that hostname resolution on chiara didn't behave well. Basically, I could not log in to anything from chiara using hostnames.
I found that there was no /etc/resolv.conf. Instead, there is an /etc/resolvconf directory.
According to my research, the live resolv.conf is placed at /run/resolvconf/resolv.conf:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.113.20
nameserver 131.215.125.1
nameserver 8.8.8.8
This 113.20 points to the old "linux1" machine, which is long obsolete. If I modify this file as
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.113.104
nameserver 131.215.125.1
nameserver 8.8.8.8
search martian
Then the name resolution became reasonable. However, during rebooting / service resetting / etc., the resolvconf -u command is executed and /run/resolvconf/resolv.conf is overwritten, as indicated in the file.
I modified /etc/resolvconf/resolv.conf.d/base to include 192.168.113.104 and search martian. The latter was included, but the former did not show up.
Finally, I figured out that, after resolv.conf is constructed from the base and head files in /etc/resolvconf/resolv.conf.d/, NetworkManager overrides the nameserver addresses.
The configuration was found in /etc/NetworkManager/system-connections/Wired\ connection\ 1 .
Here is the modified setting (dns entry was modified)
>sudo cat /etc/NetworkManager/system-connections/Wired\ connection\ 1
[sudo] password for controls:
[802-3-ethernet]
duplex=full
mac-address=68:05:CA:36:4E:B4
[connection]
id=Wired connection 1
uuid=ed177e70-d10e-42be-8165-3bf59f8f199d
type=802-3-ethernet
timestamp=1438810765
[ipv6]
method=auto
[ipv4]
method=manual
dns=192.168.113.104;131.215.125.1;8.8.8.8;
addresses1=192.168.113.104;24;192.168.113.2;
And
>cat /etc/resolvconf/resolv.conf.d/base
search martian
# See Also /etc/NetworkManager/system-connections/Wired\ connection\ 1
So complicated...
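For next time: after touching these files, the live file can be regenerated and checked with something like this (a sketch, assuming the resolvconf setup described above):
sudo resolvconf -u                # rebuild /run/resolvconf/resolv.conf
cat /run/resolvconf/resolv.conf   # confirm 192.168.113.104 is the first nameserver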
16663 | Thu Feb 10 21:51:02 2022 | Koji | Update | CDS | [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW |
Huge random numbers are flowing into ETMX/ETMY ASC PIT/YAW. Because of this, I could not damp the ETMX/ETMY suspension at the beginning during the recovery from rebooting. (Attachment 1)
By turning off the output of the ASC filters, the mirrors were successfully damped.
Looking at the FE model view of the end RTSs, there were two possibilities: (Attachment 2)
- They are coming from RFM connection
- They are coming from ASXASY
ASX/ASY are not active and I could not see anything producing these numbers. Burtrestore didn't help.
The possibility was something at the other side of the RFM, or corruption of the RFM signal.
- Looking at the RFM model (Attachment 3), the ASC signals come from ASS and IOO. The ASS path has a filter module (C1:RFM-ETMX_PIT etc.). This FM is quiet and not guilty.
- Why do we have the RFM from IOO? I went to IOO and found the new ASC (WFS) model there. I hadn't realized the presence of this model. In fact, the ASC screen showed that these random numbers were flowing into the end SUSs.
So I did a burtrestore of c1iooepics. Alas! They are gone.
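For the record, such a restore can also be done from the command line (a sketch; the snapshot path is an assumption based on the usual autoburt layout):
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/latest/c1iooepics.snap   # write the saved values back to EPICS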
Now I can go home. |
Attachment 1: Screenshot_2022-02-10_21-46-02.png
Attachment 2: Screen_Shot_2022-02-10_at_21.54.21.png
Attachment 3: Screen_Shot_2022-02-10_at_22.14.23.png
16671 | Mon Feb 14 21:03:25 2022 | Koji | Update | General | Scheduled power outage recovery |
I opened the boxes. Allegra has obvious venting of at least 4 caps, and the power supply did not respond even when a paper clip test was performed (https://www.silverstonetek.com/downloads/QA/PSU/PSU-Paper%20Clip-EN.pdf).
=> The motherboard and the PSU are dead.
Then Ottavia was also checked. The motherboard looked OK, but the PSU did not respond. I quickly opened the PSU, and it had a bunch of bulged capacitors in it. => PSU dead
Conclusion: salvage the cards/memory etc. as much as possible. Migrate the allegra HDD to another healthy PC, or obtain a used PC from Larry. Otherwise, we just want to buy another WS and copy the disk into it.
Attachment 1: PXL_20220215_025325118.jpg
16672 | Tue Feb 15 19:32:50 2022 | Koji | Update | General | Scheduled power outage recovery - IMC recovery progress |
Reduced the IMC power to 100mW
Setup: The power meter was placed right before the final aperture (Attachment 1).
Before the adjustment: the HWP was at 37.29 deg and the input power was 987mW (Attachments 2/3).
After the adjustment: the HWP was at 74.00 deg and the input power was 100mW (Attachments 4/5).
This made the MCREFL reading 0.549.
The MC refl path optics have not been modified. |
Attachment 1: PXL_20220216_001731377.jpg
Attachment 2: Screen_Shot_2022-02-15_at_16.18.16.png
Attachment 3: PXL_20220216_001727465.jpg
Attachment 4: Screen_Shot_2022-02-15_at_16.22.16.png
Attachment 5: PXL_20220216_002229572.jpg
16673 | Tue Feb 15 19:40:02 2022 | Koji | Update | General | IMC locking |
The IMC is locking now. There was nothing wrong: just careful alignment + proper gain adjustment.
=== Primary Alignment ===
- I used the WFS error signals as the indicator of the PDH error signal. I checked C1:IOO-WFS1_(I/Q)n_ERR and ended up using C1:IOO-WFS1_I4_ERR, as it showed the largest PDH error peak-to-peak.
- Then used MC2 and MC3 to align the IMC by maximizing the PDH error and the MC trans (C1:IOO-MC_TRANS_SUM_ERR)
=== Locking procedure ===
Note that the MC REFL path is still configured for the full power input
- (Only at the beginning) Run scripts/MC/mcdown for initialization / Run scripts/MC/MC2tickleOFF just in case
- Enable IOO-MC-SW1 (MC SERVO switch right after "IN1 Gain (dB)").
- Disable 40:4000 boost
- Increase VCO Gain from -15 to 0
- Jiggle IN1 Gain from low to +31 until the lock is achieved
- As soon as the lock is acquired, enable 40:4000
- Increase VCO Gain to +10
- Turn up "SUPER BOOST" from 0 to 3
=== Lock loss procedure ===
Note that the MC REFL path is still configured for the full power input
- Disable IOO-MC-SW1
- Disable 40:4000 boost
- Reduce VCO Gain to 0
- Turn down "SUPER BOOST" to 0
- Then jiggle IN1 Gain again to lock the IMC
=== MC2 spot ===
- It was obvious that the MC2F spot was not on the center of the optic.
- I tried to move the spot on the camera as much as possible, but this did not bring the trans beam to the center of the MC end QPD.
- I had the impression that the trans beam started to be clipped when the beam was moved towards the end QPD.
We need to reestablish the reasonable/consistent MC2 spot on the mirror, the MC end optics, and the QPD.
We will need to use MC2 dithering and A2L coupling to determine the center of the mirror
But as long as the transmission is maximized, the transmitted beam through MC1 and MC3 follows the input beam. So we can continue the vent work.
The current maximized transmission was ~1300. The MC1 refl CCD view was largely off -> the camera path was adjusted.
=== MC2 alignment note ===
During the alignment, I noticed a sudden change of the MC2 alignment. There might be some hysteresis in the MC2 suspension. If you are locking the IMC and notice significant misalignment, the first thing to try is touching the MC2 alignment. |
16684 | Sat Feb 26 23:48:14 2022 | Koji | Update | SUS | ETMY SUS Electronics Replacement |
[Ian, Koji] - Activity on 25th (Fri)
We continued working on the ETMY electronics replacement.
- The units were fixed on the rack according to the rack plan.
- Unnecessary Eurocard modules were removed from the crate.
- Unnecessary IDC cables and the sat amp were removed from the wiring chain. The side cross-connects became obsolete and were also removed.
- An 18V DC power strip was attached to one of the side DIN rails.
Warning:
- Right now the ETMY suspension is free and not damped. We are relying on the EQ stops.
Next things to do:
- Lay out the coil driving cables from the vacuum feedthru to the sat amp (2x D2100675-01, 30ft) [40m wiki]
- Lay out DB cables between the units
- Lay out the DC power cables from the power strip to the units
- Reassign ADC/DAC channels in the iscey model.
- Recover the optic damping
- Measure the change of the PD gains and the actuator gains. |
Attachment 1: PXL_20220226_023111179_2.jpg
16685 | Sun Feb 27 00:37:00 2022 | Koji | Update | General | IMC Locking Recovery |
Summary:
- IMC was locked.
- Some alignment change in the output optics.
- The WFS servos working fine now.
- You need to follow the proper alignment procedure to recover the good alignment condition.
Locking:
- Basically followed the previous procedure 40m/16673.
- The autolocker was turned off. Used MC2 and MC3 for the alignment.
- Once I hit the low-order modes, I increased the IN1 gain to acquire the lock. This helped me bring the alignment to TEM00.
- Found the MC2 spot was way too off in pitch and yaw.
- Moved MC1/2/3 to bring the MC2 spot around the center of the mirror.
- Found reasonably good visibility (<90%) at an MC2 spot. Decided to take this as the reference (at least for now).
SP Table Alignment Work
- Went to the SP table and aligned the WFS1/2 spots.
- I saw no spot on the camera. The beam for the camera was way too weak, and a PO mirror was useless for bringing the spot onto the CCD.
- So, instead, I decided to catch an AR reflection of the 90% mirror. (See Attachment 1)
- This made the CCD vulnerable to the stronger incident beam to the IMC. Work on the CCD path before increasing the incident power.
MC2 end table alignment work
- I knew that the focusing lens there and the end QPD had inconsistent alignment.
- The true MC2 spot needs to be optimized with A2L (and noise analysis / transmitted beam power analysis / etc)
- So, just aligned the QPD spot using today's beam as the temporary target of the MC alignment. (See Attachment 2)
Resulting CCD image on the quad display (Attachment 3)
WFS Servo
- To activate the WFS with the low transmitted power, the trigger threshold was reduced from 5000 to 500. (See Attachment 4)
- WFS offset was reset with /opt/rtcds/caltech/c1/scripts/MC/WFS/WFS_RF_offsets
- Resulting working state looks like Attachment 5 |
Attachment 1: PXL_20220226_093809056.jpg
Attachment 2: PXL_20220226_093854857.jpg
Attachment 3: PXL_20220226_100859871.jpg
Attachment 4: Screenshot_2022-02-26_01-56-31.png
Attachment 5: Screenshot_2022-02-26_01-56-47.png
16686 | Sun Feb 27 01:12:46 2022 | Koji | Update | General | IMC manual alignment procedure |
We expect that the MC sus are susceptible to temperature changes and that the alignment drifts away with time.
Here is the proper alignment procedure.
0) Assume there is no TEM00 flash or locking, but the IMC is still flashing with higher-order modes.
1) Use the CCD camera and WFS DC spots to bring the beam to the nominal position.
2) Use only MC2 and MC3 to align the cavity to have low-order modes (TEM00,01,02 etc)
3) You should be able to lock the cavity on one of these modes. Minimize the reflection (maximize the transmission) for that mode.
4) This should allow you to jump to a better lower-order mode. Continue alignment optimization only with MC2/3 until you get TEM00.
5) Optimize the TEM00 alignment only with MC2/3
6) Look at the MC end QPD. Use one of the scripts in scripts/MC/moveMC2. Note that the spot moves opposite to the name of the script, i.e. MC2_spot_down moves the spot up, MC2_spot_right moves the spot left, etc...
These scripts move MC1/2/3 and try to keep the good MC transmission.
7) moveMC2 scripts are not perfect. As you use them, it makes the MC alignment gradually degraded. Use MC2 and MC3 to recover good transmission.
8) If MC2 spot is satisfactory, you are done.
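A typical invocation of step 6 from a terminal (a sketch; the script names are the ones from step 6, and the directory prefix is the usual scripts location):
cd /opt/rtcds/caltech/c1/scripts/MC/moveMC2
./MC2_spot_down   # NB: this moves the spot UP (the names are reversed; see step 6)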
-------------
Step 6-8 can be done with the WFS on. This way, you can skip step 7 as the WFS servo takes care of it. But if the spot move is too fast, the servo can't keep up with the change. If so, you have to wait for the settling of the servo. Once the spot position is satisfactory, MC servo relief should be run so that the servo offset (in actuation) can be offloaded to the bias slider.
Attachment 1: PXL_20220226_100859871.jpg