Thu May 20 10:35:57 2021, Anchal, Update, SUS, IMC settings reverted
|
For future reference, the new settings can be uploaded using a script in the same directory. Run python /users/anchal/20210505_IMC_Tuned_SUS_with_Gains/uploadNewConfigIMC.py
from allegra.
Quote:
|
Fri May 28 17:32:48 2021, Anchal, Summary, ALS, Single Arm Actuation Calibration with IR ALS Beat 
|
I attempted a single arm actuation calibration using the IR beatnote (in the direction of the soCal idea for DARM calibration).
Measurement and Inferences:
I sent 4 excitation signals at C1:SUS-ITM_LSC_EXC with 30 cts at 31 Hz,
200 cts at 197 Hz, 600 cts at 619 Hz and 1000 cts at 1069 Hz.
These were sent simultaneously using the compose function in python awg.
The |
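The actual injection used the compose function of the CDS python awg package; as a rough offline illustration of the same four-tone drive (numpy only, not the awg API; the sampling rate is my assumption):

import numpy as np

# Illustration only: the real excitation was injected with python awg's compose
# function on C1:SUS-ITM_LSC_EXC. This just builds the equivalent four-tone
# waveform offline for reference.
fs = 16384                       # assumed model/DAC rate in Hz
t = np.arange(0, 60, 1/fs)       # 60 s excitation stretch
tones = [(30, 31), (200, 197), (600, 619), (1000, 1069)]   # (counts, Hz)
excitation = sum(a * np.sin(2*np.pi*f*t) for a, f in tones)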
Thu Jun 3 17:35:31 2021, Anchal, Summary, IMC, Fixed medm button
|
I fixed the PSL shutter button on the Shutters summary page C1IOO_Mech_Shutter.adl. Now the PSL switch changes the C1:PSL-PSL_ShutterRqst channel. Earlier it was
C1:AUX-PSL_ShutterRqst, which doesn't do anything.
|
Thu Jun 10 14:01:36 2021, Anchal, Summary, AUX, Xend Green Laser PDH OLTF measurement loop algebra
|
Attachment 1 shows the closed loop of the Xend green laser arm PDH lock loop. Free running laser noise gets injected at the laser head after the PZT actuation.
The PDH error signal at the output of the mixer is fed to a gain 1 SR560 used as a summing
junction here. Used in 'A-B mode', the B port is used for sending in the excitation |
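A minimal sketch of the loop algebra for this kind of in-loop excitation measurement (symbol names are mine, not from the original entry; the overall sign depends on how the A and B ports are wired): with the excitation $x_{exc}$ summed in at the SR560, $V_{err}$ the mixer output going into port A, and $V_{ctrl}$ the SR560 output,
$$ V_{ctrl} = V_{err} - x_{exc}, \qquad V_{err} = G(f)\, V_{ctrl} \;\Rightarrow\; G(f) = \frac{V_{err}(f)}{V_{ctrl}(f)}, $$
so the open-loop transfer function is the ratio of the demodulated signals on the two sides of the injection point at the excitation frequency.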
Mon Jun 14 18:57:49 2021, Anchal, Update, AUX, Xend is unbearably hot. Green laser is losing lock in 10's of seconds
|
Working in Xend with a mask on has become unbearable. It is very hot there and I would really like it if we fixed this issue.
Today, the Xend green laser was just unable to hold lock for longer than tens of seconds. The longest I could see it hold lock was
for about 2 minutes. I couldn't find anything obviously wrong with it. Attached are noise spectra of the error and control points. The control point |
Fri Jun 18 14:53:37 2021, Anchal, Summary, PEM, Temperature sensor network proposal
|
I propose we set up a temperature sensor network as described in attachment 1.
Here there are two types of units:
BASE-GATEWAY
Holds the processor to talk to the network through |
Wed Jun 23 09:05:02 2021, Anchal, Update, SUS, MC lock acquired back again
|
MC was unable to acquire lock because the WFS offsets were cleared to zero at some point, and because of that MC was too misaligned to be able to catch
lock again. In such cases, one wants the WFS to start accumulating offsets as soon as minimal lock is attained so that the mode cleaner can be automatically
aligned. So I did the following, which worked: |
Wed Jun 30 15:31:35 2021, Anchal, Summary, Optical Levers, Centered optical levers on ITMY, BS, PRM and ETMY
|
When both arms were locked, we found that the ITMY optical lever was very off-center. This seems to have happened after the c1susaux rebooting we did on
June 17th. I opened the ITMY table and realigned the OpLev beam to the center while the arm was locked. I repeated this process for BS, PRM and ETMY. I
did PRM because I know that we have been keeping its OpLev off. The reason was clear once I opened the table. The oplev reflection beam was hitting |
Wed Jun 30 18:44:11 2021, Anchal, Summary, LSC, Tried fixing ETMY QPD
|
I worked in the Yend station, trying to get the ETMY QPD to work properly. When I started, only one (quadrant #3) of the 4 quadrants was seeing any light.
By just changing the beam splitter that reflects some light off to the QPD, I was able to get some amount of light in quadrant #2. However, no amount of
steering would show any light in any other quadrant. |
Thu Jul 1 16:55:21 2021, Anchal, Summary, Optical Levers, Fixed: Centering optical levers PRM
|
This was a mistake. When the arms are locked, PRM is misaligned by setting a -800 offset in the PIT dof of PRM. The oplev is set to function in the normal state, not
this misaligned configuration. I undid my changes today by switching off the offset, realigning the oplev to center and then restoring the single arm locked
state. The PRM OpLev loops are off now. |
Fri Jul 9 15:39:08 2021, Anchal, Summary, ALS, Single Arm Actuation Calibration with IR ALS Beat [Correction]
|
I did this analysis again by just doing demodulation of 5 s time segments of the 60 s excitation signal. The major difference is that earlier I was not summing
up the sine-cosine multiplied signals, so the associated error was a lot larger. If I simply multiply the whole beatnote signal with a digital LO created at the
excitation frequency, divide it up into 12 segments of 5 s each, sum them up individually, then take the mean and standard deviation, I get the answer as: |
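A minimal numpy sketch of this segmented demodulation (the sampling rate, variable names and the placeholder beatnote array are my assumptions, not the actual analysis code):

import numpy as np

fs = 16384                                   # assumed sampling rate in Hz
f_exc = 619.0                                # one of the excitation lines, in Hz
beat = np.random.randn(int(60*fs))           # placeholder for the 60 s beatnote time series
t = np.arange(len(beat)) / fs

# digital LO at the excitation frequency
lo_cos = np.cos(2*np.pi*f_exc*t)
lo_sin = np.sin(2*np.pi*f_exc*t)

# split the 60 s stretch into 12 segments of 5 s, summing each segment
n_seg, seg_len = 12, int(5*fs)
I = np.array([np.sum((beat*lo_cos)[k*seg_len:(k+1)*seg_len]) for k in range(n_seg)])
Q = np.array([np.sum((beat*lo_sin)[k*seg_len:(k+1)*seg_len]) for k in range(n_seg)])
amp = 2*np.sqrt(I**2 + Q**2)/seg_len         # demodulated amplitude per segment

print(amp.mean(), amp.std())                 # mean and standard deviation across the 12 segments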
Tue Jul 27 23:04:37 2021, Anchal, Update, LSC, 40 meter party
|
[ian, anchal, paco]
After our second attempt of locking PRFPMI tonight, we tried to restore the XARM and YARM locks to IR by clicking on IFO_CONFIGURE>Restore XARM
(POX) and IFO_CONFIGURE>Restore YARM (POY), but the arms did not lock. The green lasers were locked to the arms at maximum power, so the relative alignments |
Wed Jul 28 17:10:24 2021, Anchal, Update, LSC, Schnupp asymmetry
|
[Anchal, Paco]
I redid the measurement of the Schnupp asymmetry today and found it to be 3.8 cm ± 0.9 cm. |
Tue Aug 3 20:20:08 2021, Anchal, Update, Optical Levers, Recentered ETMX, ITMX and ETMY oplevs at good state
|
Late elog. Original time 08/02/2021 21:00.
I locked both arms and ran ASS to reach the optimum alignment. ETMY PIT > 10 urad, ITMX P > 10 urad and ETMX P < -10 urad. Everything else
was ok, absolute value less than 10 urad. I recentered these three. |
Thu Aug 5 14:59:31 2021, Anchal, Update, General, Added temperature sensors at Yend and Vertex too
|
I've added the other two temperature sensor modules on the Y end (on 1Y4, IP: 192.168.113.241) and in the vertex (on 1X2, IP: 192.168.113.242). I've
updated the martian host table accordingly. From inside the martian network, one can point
a browser to the IP address to see the temperature sensor status. These sensors can be set to trigger an alarm and send emails/sms etc. if the temperature |
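As a rough illustration of polling one of these units from inside the martian network (the exact path and response format of the status page are assumptions; the units serve their own web interface):

import requests

# Hypothetical poll of the Yend temperature sensor module's status page.
r = requests.get("http://192.168.113.241/", timeout=5)
print(r.status_code, r.text[:200])   # dump the beginning of the status page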
Fri Aug 6 13:13:28 2021, Anchal, Update, BHD, c1teststand subnetwork now accessible remotely
|
The c1teststand subnetwork is now accessible remotely. To log into this network, one needs to do the following:
Log into nodus or pianosa. (This will only work from these two computers.)
ssh -CY controls@192.168.113.245
The password is our usual workstation password.
This will log you into the c1teststand network.
From |
Mon Aug 9 10:38:48 2021, Anchal, Update, BHD, c1teststand subnetwork now accessible remotely
|
I had to add the following two lines in the /etc/network/interfaces file to make the special ip routes persistent even after reboot:
post-up ip route add 192.168.113.200 via 10.0.1.1 dev eno1
post-up ip route add 192.168.113.216 via 10.0.1.1 dev eno1 |
Wed Aug 18 20:30:12 2021, Anchal, Update, ASS, Fixed runASS scripts
|
Late elog: Original time of work Tue Aug 17 20:30 2021
I locked the arms yesterday remotely and tried running the runASS.py scripts (generally run by clicking the Run ASS buttons on the IFO OVERVIEW screen
or the ASC screen). We have known for a few weeks that this script stopped working for some reason. It would start the dithering and would optimize the alignment |
Thu Aug 19 03:23:00 2021, Anchal, Update, CDS, Time synchronization not running
|
I tried to read a bit and understand the NTP synchronization implementation in the FE computers. I'm quite sure that 'NTP synchronized' should read 'yes'
in the output of timedatectl on these computers if timesyncd is running correctly. As Koji reported in 15791,
this is not the case. I logged into c1lsc, c1sus and c1ioo and saw that the RTC has drifted from the software clocks too, which does not happen if NTP synchronization |
Fri Aug 20 00:28:55 2021, Anchal, Update, CDS, Time synchronization not running
|
I added ntpserver as a known host name for address 192.168.113.201 (fb1's address, where the ntp server is running) in the martian host list in the following
files on chiara:
/var/lib/bind/martian.hosts
/var/lib/bind/rev.113.168.192.in-addr.arpa
Note: |
Fri Aug 20 06:24:18 2021, Anchal, Update, CDS, Time synchronization not running
|
I read on some stack exchange that the 'NTP synchronized' indicator turns 'yes' in the output of the command timedatectl only when the RTC clock
has been adjusted at some point. I also read that timesyncd does not make the change if the time difference is too large, roughly more than 3 seconds.
So I logged into all FE machines and ran sudo hwclock -w to synchronize them all to the system |
Mon Aug 23 22:51:44 2021, Anchal, Update, General, Time synchronization efforts
|
Related elog thread: 16286
I didn't really achieve anything but I'm listing what I've tried.
I know now that the timesyncd isn't working because systemd-timesyncd is known to have issues when running on a read-only file system. |
Tue Aug 24 09:22:48 2021, Anchal, Update, General, Time synchronization working now
|
Jamie told me to use chroot to log into the chroot jail of the Debian OS that is exported for the FEs and install ntp there. I took the following steps, at
the end of which all FEs now have NTP synchronized.
I logged into fb1 through nodus.
chroot /diskless/root.jessie /bin/bash took |
Tue Aug 24 22:37:40 2021, Anchal, Update, General, Time synchronization not really working
|
I attempted to install chrony and run it on one of the FE machines. It didn't work, and in doing so I lost the working NTP client service on the
FE computers as well. Following are some details:
I added the following two mirrors in the apt source list of root.jessie at /etc/apt/sources.list |
Mon Sep 13 15:14:36 2021, Anchal, Update, LSC, Xend Green laser injection mirrors M1 and M2 not responsive
|
While I was showing some green laser locking to Tega, I noticed that changing the PZT sliders of the M1/M2 angular position on Xend had no effect on the locked TEM01
or TEM00 mode. This is odd, as changing these sliders should increase or decrease the mode-matching of these modes. I suspect that the controls are not
working correctly and the PZTs are either not powered up or not connected. We'll investigate this in the near future as per priority. |
Tue Sep 14 17:22:21 2021, Anchal, Update, CDS, Added temp sensor channels to DAQ list
|
[Tega, Paco, Anchal]
We attempted to reboot fb1 daqd today to get the new temperature sensor channels recording. However, the FE models got stuck, apparently due
to reasons explained in 40m/16325. Jamie cleared the /var/logs in fb1 so that the FEs can reboot. |
Thu Sep 16 10:07:25 2021, Anchal, Update, General, Melting 2
|
Put outside.
Quote:
It happened again. Defrosting |
Thu Sep 16 20:18:13 2021, Anchal, Update, General, Reset
|
Fridge brought back inside.
Quote:
Put outside. |
Tue Sep 21 11:09:34 2021, Anchal, Summary, CDS, XARM YARM UGF Servo and Oscillators added
|
I've updated the c1lsc simulink model to add the so-called UGF servos in the XARM and YARM single arm loops as well. These were earlier present in
the DARM, CARM, MICH and PRCL loops only. The UGF servos themselves serve a larger purpose, but we won't be using that. What we have access to now is to
add an oscillator in the single arm and get a realtime demodulated signal before and after the addition of the oscillator. This would allow us to get the |
Wed Sep 22 12:40:04 2021, Anchal, Summary, CDS, XARM YARM UGF Servo and Oscillators shifted to OAF
|
To reduce the burden on c1lsc, I've shifted the added UGF block to the c1oaf model. c1lsc had to be modified to allow addition of an oscillator in the
XARM and YARM control loops and take out test points before and after the addition, sent to c1oaf through shared memory IPC to do realtime demodulation in the c1oaf
model. |
Wed Sep 29 17:10:09 2021, Anchal, Summary, CDS, c1teststand problems summary
|
[anchal, ian]
We went and collected some information for the overlords to fix the c1teststand DAQ network issue.
From c1teststand, the c1bhd and c1sus2 computers were not accessible through ssh (No route to host), so we restarted both the computers |
Thu Sep 30 14:09:37 2021, Anchal, Summary, CDS, New way to ssh into c1teststand
|
Late elog, original time Wed Sep 29 14:09:59 2021
We opened a new port (22220) in the router to the martian subnetwork, which is forwarded to port 22 on c1teststand (192.168.113.245), allowing
direct ssh access to the c1teststand computer from the outside world using: |
Thu Sep 30 14:13:18 2021, Anchal, Update, LSC, HV supply to Xend Green laser injection mirrors M1 and M2 PZT restored
|
Late elog, original date Sep 15th.
We found that the power switch of the HV supply that powers the PZT drivers for M1 and M2 on the Xend green laser injection alignment was tripped off.
We could not find any log of someone doing it; it is a physical switch. Our only explanation is that this supply might have a solenoid mechanism to shut |
Mon Oct 4 11:05:44 2021, Anchal, Summary, CDS, c1teststand problems summary
|
[Anchal, Paco]
We tried to fix the ntp synchronization in c1teststand today by repeating the steps listed in 40m/16302.
Even though the cloned fb1 now has the exact same package version, conf & service files, and status, the FE machines (c1bhd and c1sus2) fail to sync |
Tue Oct 5 17:58:52 2021, Anchal, Summary, CDS, c1teststand problems summary
|
open-mx service is running successfully on the fb1(clone), c1bhd and c1sus.
Quote:
I don't know |
Tue Oct 5 18:00:53 2021, Anchal, Summary, CDS, c1teststand time synchronization working now
|
Today I got a new router that I used to connect c1teststand, fb1 and chiara. I was able to see internet access in c1teststand and fb1, but not in
chiara. I'm not sure why that is the case.
The good news is that the ntp server on fb1 (clone) is working fine now and both FE computers, c1bhd and c1sus2, are successfully synchronized to |
Wed Oct 6 15:39:29 2021, Anchal, Summary, SUS, PRM and BS Angular Actuation transfer function magnitude measurements
|
Note that your tests were done with the output matrix for BS and PRM in the compensated state as done in 40m/16374.
The changes made there were supposed to clear out any coil actuation imbalance in the angular degrees of freedom. |
Mon Oct 11 17:31:25 2021, Anchal, Summary, CDS, Fixed mounting of mx devices in fb. daqd_dc is running now.
|
|
Mon Oct 11 18:29:35 2021, Anchal, Summary, CDS, Moving forward?
|
The teststand has some non-trivial issue with the Myrinet card (either software or hardware) which even the experts say they don't remember how
to fix. CDS with mx was in use more than a decade ago, so it is hard to find support for issues with it now, and it will be the same in the future. We need
to wrap up this test procedure one way or another now, so I have the following two options moving forward: |
Tue Oct 12 17:10:56 2021, Anchal, Summary, CDS, Some more information
|
Chris pointed out some information-displaying scripts that show whether the DAQ network is working or not. I thought it would be nice to log this information
here as well.
controls@fb1:/opt/mx/bin 0$ ./mx_info
MX |
Tue Oct 12 17:20:12 2021, Anchal, Summary, CDS, Connected c1sus2 to martian network
|
I connected c1sus2 to the martian network by splitting the c1sim connection with a 5-way switch. I also ran another ethernet cable from the second port
of c1sus2 to the DAQ network switch on 1X7.
Then I logged into chiara and added the following in chiara:/etc/dhcp/dhcpd.conf : |
Wed Oct 13 11:25:14 2021, Anchal, Summary, CDS, Ran c1sus2 models in martian CDS. All good!
|
Three extra steps (when adding new models or a new FE):
Chris pointed out that the sudo command in c1sus2 is giving the error
sudo: unable to resolve host c1sus2
This error comes in when the computer cannot figure out its own hostname. Since the FEs are network booted off fb1, we need to update the /etc/hosts |
Fri Oct 15 16:46:27 2021, Anchal, Summary, Optical Levers, Vent Prep   
|
I centered all the optical levers on ITMX, ITMY, ETMX, ETMY, and BS at a position where the single arm locks on both arms were best aligned. Unfortunately,
we are seeing TRX at 0.78 and TRY at 0.76 at the most aligned positions. It seems less power has been getting out of the PMC since last month (Attachment 1).
Then, I tried to lock PRMI with carrier, with no luck. But I was able to see flashing of up to 4000 counts in POP_DC. At this position, I centered |
Wed Oct 20 11:16:21 2021, Anchal, Summary, PEM, Particle counter setup near BS Chamber
|
I have placed a GT321 particle counter on top of the MC1/MC3 chamber next to the BS chamber. The serial cable is connected to the c1psl computer on 1X2 using
2 USB extenders (blue in color) over the PSL enclosure and over the 1X1 rack.
The main serial communication script for this counter, by Radhika, is present in 40m/labutils/serial_com/gt321.py. |
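The real communication logic lives in 40m/labutils/serial_com/gt321.py; a minimal pyserial sketch of the kind of polling it does (the port name, baud rate and command bytes below are placeholders, not the GT321's documented protocol):

import serial

# Placeholder settings: the actual port and protocol are defined in gt321.py.
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2) as counter:
    counter.write(b"R\r\n")                      # hypothetical 'read counts' command
    reply = counter.readline()
    print(reply.decode(errors="replace"))        # raw counts string from the unit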
Wed Oct 20 11:48:27 2021, Anchal, Summary, CDS, Power supply configured correctly.
|
This was horrible! That's my bad, I should have checked the configuration before assuming that it is right.
I fixed the power supply configuration. Now the strip has two rails of +/- 18V and the GND is referenced to power supply earth GND.
Ian should redo the tests. |
Thu Oct 21 11:41:31 2021, Anchal, Summary, PEM, Particle counter setup near BS Chamber
|
The particle count channel names were changed yesterday to follow the naming conventions used at the sites. Following are the new names:
C1:PEM-BS_DUST_300NM
C1:PEM-BS_DUST_500NM |
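These channels can be read like any other slow channel; for example with pyepics from a martian workstation (assuming pyepics is installed there):

from epics import caget

# Read the latest particle counts from the new PEM channels
for ch in ["C1:PEM-BS_DUST_300NM", "C1:PEM-BS_DUST_500NM"]:
    print(ch, caget(ch))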
Mon Oct 25 13:23:45 2021, Anchal, Summary, BHD, Before photos of BSC
|
[Yehonathan, Anchal]
On Thursday Oct 21 2021, Yehonathan and I opened the door to the BSC and took some photos. We set up the HEPA stand next to the door with anti-static
curtains covering all sides. We spent about 15 minutes trying to understand the current layout and taking photos and a video. Any suggestions on improvement |
Mon Oct 25 17:37:42 2021, Anchal, Summary, BHD, Part I of BHR upgrade - Removed optics from BSC 
|
[Anchal, Paco, Ian]
Clean room etiquette
Two people in coverall suits, head covers, masks and AccuTech ultra clean
gloves.
One person in just booties to interact with the outside "dirty" world.
Anything that comes into the chamber is first cleaned |
Wed Oct 27 16:27:16 2021, Anchal, Summary, BHD, Part II of BHR upgrade - Prep 
|
[Anchal, Paco, Ian]
Before we could start working on Part II, which is to relocate TT2 to its new location, we had to clear space in front of the injection chamber door
and clean the floor, which was very dusty. This required us to disconnect everything we could safely from the OMC North short electronics rack, remove 10-15 |
Wed Oct 27 16:31:35 2021, Anchal, Summary, BHD, Part III of BHR upgrade - Removal of PR2 Small Suspension
|
I went inside the ITMX chamber to read off specs from the PR2 edge. This was required to confirm our calculations of LO power for BHR later. The numbers
that I could read from the edge were kind of meaningless: "0.5 088 or 2.0 088". To make this opening of the chamber more worthwhile, we decided
to remove the PR2 suspension unit so that the optic can be removed and installed on an SOS in the cleanroom. We covered the optic in clean aluminum foil |