ID   Date   Author   Type   Category   Subject
  16299   Wed Aug 25 18:20:21 2021   Jamie   Update   CDS   GPS time on fb1 fixed, daqd writing correct frames again

I have no idea what happened to the GPS timing on fb1, but it seems like the issue was coincident with the power glitch on Monday.

As was noted by Koji above, the GPS time kernel interface was off by a year, which was causing the frame builder to write out files with the wrong names.  fb1 was using DAQD components from the advligorts 3.3 release, which used the old "symmetricom" kernel module for the GPS time.  This old module was also known to have issues with time offsets.  This issue is reminiscent of previous timing issues with the DAQ on fb1.

I noted that a newer version of advligorts, version 3.4, was available for Debian Jessie, the system running on fb1.  advligorts 3.4 includes a newer version of the GPS time module, renamed gpstime.  I checked with Jonathan Hanks that the interfaces did not change between 3.3 and 3.4, and that 3.4 was mostly a bug-fix and packaging release, so I decided to upgrade the DAQ to get the new components.  I therefore did the following:

  • updated the archive info in /etc/apt/sources.list.d/cdssoft.list, and added the "jessie-restricted" archive which includes the mx packages: https://git.ligo.org/cds-packaging/docs/-/wikis/home

  • removed the symmetricom module from the kernel

    sudo rmmod symmetricom

  • upgraded the advligorts-daqd components (NOTE I did not upgrade the rest of the system, although there are outstanding security upgrades needed):

    sudo apt install advligorts-daqd advligorts-daqd-dc-mx

  • loaded the new gpstime module and checked that the GPS time was correct:

    sudo modprobe gpstime

  • restarted all the daqd processes

    sudo systemctl restart daqd_*

Everything came up fine at that point, and I checked that the correct frames were being written out.

  16300   Thu Aug 26 10:10:44 2021   Paco   Update   CDS   FB is writing the frames with a year old date

[paco, ]

We went over to the X end to check what was going on with the TRX signal. We spotted that the ground terminal coming from the QPD was loosely touching the handle of one of the computers on the rack. When we detached it completely from the rack, the noise was gone (Attachment 1).

We taped this terminal so it doesn't touch anything accidentally. We don't know if this is the best solution, since it probably needs a stable voltage reference. At the Y end those ground terminals are connected to the same point on the rack. The other ground terminals at the X end are just cut.

We also took the PSD of these channels (Attachment 2). The noise seems to be gone, but TRX is still a bit noisier than TRY. Maybe we should set up a proper ground for the X arm QPD?


We saw that the X end station ALS laser was off. We turned it on, and also the crystal oven, and re-enabled the temperature controller. Green light immediately appeared. We are now working to restore the ALS lock. After running XARM ASS we were unable to lock the green laser, so we went to the X end and moved the piezo X ALS alignment mirrors until we maximized the transmission in the right mode. We then locked the ALS beams on both arms successfully. It very well could be that the PZT offsets were reset by the power glitch. The XARM ALS still needs some tweaking; its level is ~25% of what it was before the power glitch.

Attachment 1: Screenshot_from_2021-08-26_10-09-50.png
Attachment 2: TRXTRY_Spectra.pdf
  16305   Wed Sep 1 14:16:21 2021   Jordan   Update   VAC   Empty N2 Tanks

The right N2 tank had a bad/loose valve and did not fully open. This morning the left tank was just about empty and the right tank showed 2000+ psi on the gauge. Once the changeover happened the copper line emptied but the valve to the N2 tank was not fully opened. I noticed the gauges were both reading zero at ~1pm just before the meeting. I swapped the left tank, but not in time. The vacuum interlocks tripped at 1:04 pm today when the N2 pressure to the vacuum valves fell below 65psi. After the meeting, Chub tightened the valve, fully opened it and refilled the lines. I will monitor the tank pressures today and make sure all is ok.

There used to be a mailer that was sent out when the sum of the two tank pressures fell below 600 psi, telling you to swap tanks. Does this no longer exist?

  16308   Thu Sep 2 19:28:02 2021   Koji   Update   This week's FB1 GPS Timing Issue Solved

After the disk system trouble, we could not get the RTS running in its nominal state. As part of the troubleshooting, fb1 was rebooted. But then we found that the GPS time was a year off from the current time:

controls@fb1:/diskless/root/etc 0$ cat /proc/gps 
1283046156.91
controls@fb1:/diskless/root/etc 0$ date
Thu Sep  2 18:43:02 PDT 2021
controls@fb1:/diskless/root/etc 0$ timedatectl 
      Local time: Thu 2021-09-02 18:43:08 PDT
  Universal time: Fri 2021-09-03 01:43:08 UTC
        RTC time: Fri 2021-09-03 01:43:08
       Time zone: America/Los_Angeles (PDT, -0700)
     NTP enabled: no
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2021-03-14 01:59:59 PST
                  Sun 2021-03-14 03:00:00 PDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2021-11-07 01:59:59 PDT
                  Sun 2021-11-07 01:00:00 PST


Paco went through the process described in Jamie's elog [40m ELOG 16299] (except for the installation part) and it actually made the GPS time even stranger:

controls@fb1:~ 0$ cat /proc/gps
967861610.89

I decided to remove the gpstime module and then load it again. This brought the GPS time back to normal:

controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 0$ cat /proc/gps
cat: /proc/gps: No such file or directory
controls@fb1:~ 1$ sudo modprobe gpstime
controls@fb1:~ 0$ cat /proc/gps
1314671254.11

 

  16309   Thu Sep 2 19:47:38 2021   Koji   Update   CDS   This week's FB1 GPS Timing Issue Solved

After the reboot, daqd_dc was not working, but manually starting the open-mx / mx services solved the issue:

sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*

 

  16310   Thu Sep 2 20:44:18 2021   Koji   Update   CDS   Chiara DHCP restarted

We had the issue of the RT machines not coming back after a reboot. Once we hooked up a display on c1iscex, it turned out that no IP address was being assigned at boot-up.

I went to chiara and confirmed that the DHCP service was not running:

~>sudo service isc-dhcp-server status
[sudo] password for controls:
isc-dhcp-server stop/waiting

So the DHCP service was manually restarted

~>sudo service isc-dhcp-server start
isc-dhcp-server start/running, process 24502
~>sudo service isc-dhcp-server status
isc-dhcp-server start/running, process 24502

 

 

  16311   Thu Sep 2 20:47:19 2021   Koji   Update   CDS   Chiara DHCP restarted

[Paco, Tega, Koji]

Once chiara's DHCP was back, things became much more straightforward.
c1iscex and c1iscey were rebooted and the IOPs were launched without any hesitation.

Paco ran rebootC1LSC.sh, and for the first time this year all the processes launched without any issue.

  16316   Wed Sep 8 18:00:01 2021   Koji   Update   VAC   cronjobs & N2 pressure alert

In the weekly meeting, Jordan pointed out that we didn't receive the alert for the low N2 pressure.

To check the situation, I went around the machines and summarized the cronjob situation.
[40m wiki: cronjob summary]
Note that this list does not include the vacuum watchdog and mailer, as those are not run from cron.

Now, I found that there are two N2 scripts running:

1. /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh, running on megatron every minute (!)
2. /opt/rtcds/caltech/c1/scripts/Admin/N2check/pyN2check.sh, running on c1vac every 3 hours.

Then, the N2 log file was checked: /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log

Wed Sep 1 12:38:01 PDT 2021 : N2 Pressure: 76.3621
Wed Sep 1 12:38:01 PDT 2021 : T1 Pressure: 112.4
Wed Sep 1 12:38:01 PDT 2021 : T2 Pressure: 349.2
Wed Sep 1 12:39:02 PDT 2021 : N2 Pressure: 76.0241
Wed Sep 1 12:39:02 PDT 2021 : N2 pressure has fallen to 76.0241 PSI !

Tank pressures are 94.6 and 98.6 PSI!

This email was sent from Nodus.  The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh

Wed Sep 1 12:40:02 PDT 2021 : N2 Pressure: 75.5322
Wed Sep 1 12:40:02 PDT 2021 : N2 pressure has fallen to 75.5322 PSI !

Tank pressures are 93.6 and 97.6 PSI!

This email was sent from Nodus.  The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh

...

The error messages started at 12:39 and repeated every minute until 13:01. So this was coming from the script on megatron. We were supposed to have received ~20 alert emails (but got none).
So what happened to the mails? I tested the script with my own mail address and the test mail came to me. Then I sent a test mail to the 40m mailing list. It did not arrive.
-> Decided to put the mail address (specified in /etc/mailname, I believe) on the whitelist so that the mailing list can accept it.
I ran the test again and it was successful. So I suppose the system can now send us the alerts again.
Also, alerting every minute is excessive, so I changed the check frequency to every ten minutes.
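
For reference, the new cadence corresponds to a crontab entry on megatron along these lines (a sketch only; the actual entry and any output redirection may differ):

*/10 * * * * /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh >> /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log 2>&1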

What happened to the Python version running on c1vac?
1) The script is running, spitting out some errors in the cron report (email on c1vac), but it seems to be working.
2) This script checks the pressures of the bottles rather than the N2 pressure downstream, so it's complementary.
3) During the incident on Sept 1, the checker did not trip, as the pressure drop happened between cronjob runs and the script didn't notice it.
4) On top of that, the alert was set to send mail only to one of our former grad students. I changed it to deliver to the 40m mailing list. As the "From" address is set to some ligox...@gmail.com, which is a member of the mailing list (why?), we are supposed to receive the alerts. (And we do for other vacuum alerts from this address.)

  16317   Wed Sep 8 19:06:14 2021   Koji   Update   General   Backup situation

Tega mentioned in the meeting that it could be safer to separate some of nodus's functions from the martian file system.
That's an interesting thought. The summary pages and other web services are linked to the user dir. This has high traffic and could cause an issue for the internal network if we crash the disk.
And if the internal system crashes, we would still want to use the elog as the source of recovery info. Also, we currently have no backup of the elog. This is dangerous.

We can reduce some of these risks by adding two identical 2TB disks to nodus to accommodate svn/elog/web and their daily backup.

host | file system or contents | condition | note
nodus | root | none or unknown |
nodus | home (svn, elog) | none |
nodus | web (incl summary pages) | backed up | linked to /cvs/cds
chiara | root | maybe | need to check with Jon/Anchal
chiara | /home/cds | local copy | the backup disk is smaller than the main disk
chiara | /home/cds | remote copy - stalled | we used to have, but stalled since 2017/11/17
fb1 | root | maybe | need to check with Jon/Anchal
fb1 | frame | rsync pulled from LDAS | according to Tega

 

  16319   Mon Sep 13 04:12:01 2021   Tega   Update   General   Added temperature sensors at Yend and Vertex too

I finally got the modbus part working on chiara, so we can now view the temperature data on any machine on the martian network, see Attachment 1. 

I also updated the entries in /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini, as suggested by Koji, to include the SensorGateway temperature channels, but I still don't see their EPICS channels on https://ldvw.ligo.caltech.edu/ldvw/view. This means the channels are not available via NDS, so I think the temperature data is not being written to frame files on the framebuilder, but I am not sure what this entails, since I assumed C0EDCU.ini is the framebuilder DAQ channel list.

Once the EPICS channels are available via NDS, we should be able to display the temperature data on the summary pages.

Quote:

I've added the other two temperature sensor modules, at the Y end (on 1Y4, IP: 192.168.113.241) and in the vertex (on 1X2, IP: 192.168.113.242). I've updated the martian host table accordingly. From inside the martian network, one can point a browser at the IP address to see the temperature sensor status. These sensors can be set to trigger an alarm and send emails/SMS etc. if the temperature goes out of a defined range.

I feel something is off though. The vertex sensor shows a temperature of ~28 degrees C, the X end says 20 degrees C, and the Y end says 26 degrees C. I believe these sensors might need calibration.

The remaining tasks are the following:

  • Modbus TCP solution:
    • If we get it right, this will be easiest solution.
    • We just need to add these sensors as streaming devices in the .cmd file of some slow EPICS machine and add the temperature sensing channels to a corresponding database file.
  • Python workaround:
    • Might be faster but dirty.
    • We run a python script on megatron which requests temperature values every second or so from the IP addresses and writes them to soft EPICS channels.
    • We would still need to create a soft EPICS channel for this and add it to the framebuilder data acquisition list.
    • An even shorter workaround for the near future could be to just write the temperature every 30 min to a log file in some location.

[anchal, paco]

We made a script under scripts/PEM/temp_logger.py and ran it on megatron. The script uses the requests package to query the latest sensor data from the three sensors every 10 minutes as JSON and logs it accordingly. This is not a permanent solution.
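
A minimal sketch of this kind of poller is below (this is not the actual temp_logger.py; the JSON endpoint path, the reply format, and the X end IP are assumptions):

#!/usr/bin/env python
# Sketch of a SensorGateway poller: query each unit every 10 minutes and log the JSON reply.
import json
import time
import requests

SENSORS = {
    "Yend":   "192.168.113.241",  # 1Y4, from the earlier elog
    "Vertex": "192.168.113.242",  # 1X2, from the earlier elog
    "Xend":   "192.168.113.240",  # assumed; the X end IP is not quoted above
}

while True:
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    for name, ip in SENSORS.items():
        try:
            # hypothetical endpoint; the real SensorGateway URL may differ
            reply = requests.get("http://%s/data.json" % ip, timeout=10)
            print("%s %s %s" % (stamp, name, json.dumps(reply.json())))
        except Exception as exc:
            print("%s %s query failed: %s" % (stamp, name, exc))
    time.sleep(600)  # 10 minutes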

 

Attachment 1: Screen_Shot_2021-09-13_at_4.16.22_AM.png
  16320   Mon Sep 13 09:15:15 2021   Paco   Update   LSC   MC unlocked?

Came in at ~9 PT this morning to find the IFO "down". The IMC had lost its lock ~6 hours before, so at about 03:00 AM. Nothing seemed like an obvious cause: there was no record of increased seismic activity, all suspensions were damped and no watchdog had tripped, and the pressure trends, similar to those in recent pressure incidents, show nominal behavior (Attachment #1). What happened?

Anyway, I simply tried reopening the PSL shutter, and the IMC caught its lock almost immediately. I then locked the arms and everything seems fine for now.

Attachment 1: VAC_2021-09-13_09-32-45.png
  16321   Mon Sep 13 14:32:25 2021   Yehonathan   Update   CDS   c1auxey assembly

So we agreed that the RTN points on the c1auxex Acromag chassis should just be grounded to the local Acromag ground, as they just need a stable reference. Normally, the RTNs are not connected to any ground, so there should be no danger of forming ground loops by doing that. It is probably best to use the common wire from the 15V power supplies since it also powers the VME crate. I took the spectra of the ETMX OSEMs (attachment) for reference and am proceeding with the grounding work.

 

Attachment 1: ETMX_OSEMS_Noise.png
  16322   Mon Sep 13 15:14:36 2021   Anchal   Update   LSC   Xend Green laser injection mirrors M1 and M2 not responsive

While showing some green laser locking to Tega, I noticed that changing the PZT sliders for the M1/M2 angular positions on the X end had no effect on the locked TEM01 or TEM00 mode. This is odd, as changing these sliders should increase or decrease the mode matching of these modes. I suspect that the controls are not working correctly and the PZTs are either not powered up or not connected. We'll investigate this in the near future as priorities allow.

  16324   Mon Sep 13 18:19:25 2021   Tega   Update   Computer Scripts / Programs   Moved modbus service from chiara to c1susaux

[Tega, Anchal, Paco]

After talking to Anchal, it was made clear that chiara is not the place to host the modbus service for the temperature sensors. The obvious machine is c1pem, but its startup cmd script loads C object files and it is not clear how easily it would integrate the modbus functionality, since we can only log in via telnet, so we decided to host the service on c1susaux instead. We also modified the /etc/motd file on c1susaux, which displays the welcome message during login, to inform the user that this machine hosts the modbus service for the temperature sensors. Anchal plans to also document this information on the temperature sensor wiki at some point in the future, when the page is updated to include what has been learnt so far.

We might also consider updating the database file to a more modern way of reading the temperature sensor data using FLOAT32_LE, which is available in EPICS version 3.14 and above, instead of the current method, which works but leaves the reader bemused by the bitwise operations that convert the two 16-bit words (A and B) to an IEEE-754 32-bit float, via

field(CALC, "(A&D?(A&C?-1:1):0)*((G|A&E)*J+B)*2^((A&D)/G-F)")

where 

   field(INPA, "$HiWord")
   field(INPB, "$LoWord")
   field(INPC, "0x8000")   # Hi word, sign bit
   field(INPD, "0x7F80")   # Hi word, exponent mask
   field(INPE, "0x00FF")   # Hi word, mantissa mask (incl hidden bit)
   field(INPF, "150")      # Exponent offset plus 23-bit mantissa shift
   field(INPG, "0x0080")   # Mantissa hidden bit
   field(INPJ, "65536")    # Hi/Lo mantissa ratio
   field(CALC, "(A&D?(A&C?-1:1):0)*((G|A&E)*J+B)*2^((A&D)/G-F)")
   field(PREC, "4")

as opposed to the more modern form

field(INP,"@asyn($(PORT) $(OFFSET))FLOAT32_LE")
  16326   Tue Sep 14 16:12:03 2021   Jordan   Update   SUS   SOS Tower Hardware

Yehonathan noticed today that the silver plated hardware on the assembled SOS towers had some pretty severe discoloration on it. See attached picture.

These were all brand new screws from UC Components, and they have been sitting on the flow bench for a couple of months now. I believe this is just oxidation and is not an issue. I spoke to Calum as well and showed him the attached picture, and he agreed it was likely oxidation and should not be a problem once installed.

He did mention if there is any concern from anyone, we could take an FTIR sample and send it to JPL for analysis, but this would cost a few hundred dollars.

I don't believe this to be an issue, but it is odd that they oxidized so quickly. Just wanted to relay this to everyone else to see if there was any concern.

Attachment 1: 20210914_160111.jpg
  16328   Tue Sep 14 17:14:46 2021   Koji   Update   SUS   SOS Tower Hardware

Yup this is OK. No problem.

 

  16330   Tue Sep 14 17:22:21 2021   Anchal   Update   CDS   Added temp sensor channels to DAQ list

[Tega, Paco, Anchal]

We attempted to restart the fb1 daqd today to get the new temperature sensor channels recording. However, the FE models got stuck, apparently due to the reasons explained in 40m/16325. Jamie cleared /var/log on fb1 so that the FEs could reboot. After this work we were able to reboot the FE machines successfully and get the models running too. During the day, the FE machines were shut down and brought back manually, a couple of times in the case of the c1iscex machine. The only change on fb1 is in /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini, where the new channels were added, and some hacking was done by Jamie in the gpstime module (see 40m/16327).

  16332   Wed Sep 15 11:27:50 2021   Yehonathan   Update   CDS   c1auxey assembly

{Yehonathan, Paco}

We turned off the ETMX watchdogs and OpLevs. We went to the X end and shut down the Acromag chassis. We labeled the chassis feedthroughs and disconnected all the cables from it.

We took it out and tied the common wire of the power supplies (the commons of the 20V and 15V power supplies were shorted so there is no difference which we connect) to the RTNs of the analog inputs.

The chassis was put back in place. All the cables were reconnected. Power was turned on.

We rebooted c1auxex and the channels went back online. We turned on the watchdogs and watched the ETMX motion get damped. We turned on the OpLev. We waited until the beam position got centered on the ETMX.

The attachments show a comparison between the OSEM spectra before and after the grounding work. It seems like there is no change.

We were able to lock the arms with no issues.

 

Attachment 1: c1auxex_Grounding_OSEM_comparison1.pdf
Attachment 2: c1auxex_Grounding_OSEM_comparison2.pdf
  16333   Wed Sep 15 23:38:32 2021   Koji   Update   ALS   ALS ASX PZT HV was off -> restored

It was known that the Y end ALS PZTs are not working. But Anchal reported in the meeting that the X end PZTs are not working either.

We went down to the X arm in the afternoon and checked the status. The HV supply (KEPCO) was off at its mechanical switch. I don't know whether this KEPCO has a function that shuts off the switch at a power glitch or not.
But anyway, the power switch was re-engaged. We also saw a large misalignment of the X end green. The alignment was manually adjusted. Anchal was able to reach ~0.4 green TRX, but no more. He recalled that it used to be ~0.8.

We tried tweaking the SHG temperature from 36.4 degC. We found that TRX had a (local) maximum of ~0.48 at 37.1 degC. This is the new setpoint for now.

Attachment 1: P_20210915_151333.jpg
  16335   Thu Sep 16 00:00:20 2021   Koji   Update   General   RIO Planex 1064 Lasers in the south cabinet

RIO Planex 1064 Lasers in the south cabinet

Property Number C30684/C30685/C30686/C30687

Attachment 1: P_20210915_232426.jpg
  16336   Thu Sep 16 01:16:48 2021   Koji   Update   General   Frozen 2

It happened again. Defrosting required.

Attachment 1: P_20210916_003406_1.jpg
  16337   Thu Sep 16 10:07:25 2021   Anchal   Update   General   Melting 2

Put outside.

Quote:

It happened again. Defrosting required.

 

Attachment 1: PXL_20210916_170602832.jpg
  16338   Thu Sep 16 12:06:17 2021   Tega   Update   Computer Scripts / Programs   Temperature sensors added to the summary pages

We can now view the minute trend of the temperature sensors under the PEM tab of the summary pages. See attachment 1 for an example of today's temperature readings. 

Attachment 1: TempPlot_2021-09-16_12.04.19PM.png
  16340   Thu Sep 16 20:18:13 2021   Anchal   Update   General   Reset

Fridge brought back inside.

Quote:

Put outside.

Quote:

It happened again. Defrosting required.

 

 

Attachment 1: PXL_20210917_031633702.jpg
  16341   Fri Sep 17 00:56:49 2021   Koji   Update   General   Awesome

The Incredible Melting Man!

 

  16342   Fri Sep 17 20:22:55 2021   Koji   Update   SUS   EQ M4.3 Long beach

EQ M4.3 @ Long Beach
2021-09-18 02:58:34 (UTC) / 2021-09-17 19:58:34 (PDT)
https://earthquake.usgs.gov/earthquakes/eventpage/ci39812319/executive

  • All SUS Watchdogs tripped, but the SUSs looked OK except for the stuck ITMX.
  • Damped the SUSs (except ITMX)
  • IMC automatically locked
  • Turned off the damping of ITMX and shook it only with the pitch bias -> Easily unstuck -> damping recovered -> realignment of the ITMX probably necessary.
  • Done.
  16344   Mon Sep 20 14:11:40 2021   Koji   Update   BHD   End DAC Adapter Unit D2100647

I've uploaded the schematic and PCB PDF for End DAC Adapter Unit D2100647.

Please review the design.

  • CH1-8 SUS actuation channels.
    • 5CHs out of 8CHs are going to be used, but for future extensions, all the 8CHs are going to be filled.
    • It involves diff-SE conversion / dewhitening / SE-diff conversion. Does this make sense?
  • CH9-12: PZT actuation channels. They are designed to send out 4x SE channels for compatibility. The channels have jumpers to convert them to pass the differential signals straight through.
  • CH13-16 are general purpose DIFF/SE channels. CH13 is going to be used for ALS Laser Slow control. The other 3CHs are spares.

The internal assembly drawing & BOM are still coming.

Attachment 1: D2100647_End_DAC_Adapter.pdf
  16346   Mon Sep 20 15:23:08 2021   Yehonathan   Update   Computers   Wifi internet fixed

Over the weekend and today, the wifi was acting up, with frequent disconnections and no internet access. I tried to log into the web interface of the ASUS wifi router, but with no success.

I pushed the reset button for several seconds to restore the factory settings. After that, I was able to log in. I did the automatic setup and set the wifi passwords to what they used to be.

Internet access was restored. I also unplugged and replugged all the wifi extenders in the lab and moved the extender from the vertex inner wall to the outer wall of the lab close to 1X3.

Now, there seems to be wifi reception both in X and Y arms (according to my android phone).

 

  16349   Mon Sep 20 20:43:38 2021   Tega   Update   Electronics   Sat Amp modifications

Running update on the Sat Amp modification work, which involves the following procedure (x8 channels) per unit:

  1. Replace R20 & R24 with 4.99K ohms, R23 with 499 ohms, and remove C16.
  2. (Testing) Connect LEDDrive output to GND and check that
    • TP4 is ~ 5V
    •  TP5-8 ~ 0V. 
  3. Install 40m Satellite to Flange Adapter (D2100148-v1)

 

Unit Serial Number | Issues | Status
S1200740 | NONE | DONE
S1200742 | NONE | DONE
S1200743 | NONE | DONE
S1200744 | TP4 @ LED1,2 on PCB S2100568 is 13V instead of 5V; TP4 @ LED4 on PCB S2100559 is 13V instead of 5V | DONE
S1200752 | NONE | DONE

Attachment 1: IMG_20210920_203456226.jpg
  16350   Mon Sep 20 21:56:07 2021   Koji   Update   Computers   Wifi internet fixed

Ugh, factory resets... Caltech IMSS announced that there was intermittent network service due to maintenance between Sept 19 and 20, and there seemed to be some aftermath of it. Check out "Caltech IMSS".

 

  16356   Wed Sep 22 17:22:59 2021   Tega   Update   Electronics   Sat Amp modifications

[Koji, Tega]

 

Decided to do a quick check of the remaining Sat Amp units before component replacement to identify any unit with defective LED circuits. Managed to examine 5 out of 10 units, so still have 5 units remaining. Also installed the photodiode bias voltage jumper (JP1) on all the units processed so far.

Unit Serial Number | Issues | Debugging | Status
S1200738 | TP4 @ LED3 on chan 1-4 PCB was ~0.7 V instead of 5V | Koji checked the solder connections of the various components, then swapped out the IC OPAMP. Removed the DB9 connections to the front panel to get access to the bottom of the board. Upon close inspection, it looked like an issue of a short between the Emitter & Base legs of the Q1 transistor. Solution - remove the short between the Emitter & Base legs of Q1. | DONE
S1200748 | TP4 @ LED2 on chan 1-4 PCB was ~0.7 V instead of 5V | This issue was caused by a short between the Emitter & Base legs of the Q1 transistor. Solution - remove the short between the Emitter & Base legs of Q1. | DONE
S1200749 | NONE | N/A | DONE
S1200750 | NONE | N/A | DONE
S1200751 | NONE | N/A | DONE

 

Defective unit with updated resistors and capacitors from the previous elog:

Unit Serial Number | Issues | Debugging | Status
S1200744 | TP4 @ LED1,2 on PCB S2100568 is 13V instead of 5V; TP4 @ LED4 on PCB S2100559 is 13V instead of 5V | This issue was caused by a short between the Collector & Base legs of the Q1 transistor. Solution - remove the short between the Collector & Base legs of Q1. Complications - during the process of flipping the board to get access to its bottom, a connector holding the two middle black wires on P1 came loose. I resecured the wires to the connector and checked all TP4s on the board afterwards to make sure things are as expected. | DONE

Quote:

Running update of Sat Amp modification work, which involves the following procedure (x8) per unit:

  1. Replace R20 & R24 with 4.99K ohms, R23 with 499 ohms, and remove C16.
  2. (Testing) Connect LEDDrive output to GND and check that
    • TP4 is ~ 5V
    •  TP5-8 ~ 0V. 
  3. Install 40m Satellite to Flange Adapter (D2100148-v1)

 

Unit Serial Number | Issues | Status
S1200740 | NONE | DONE
S1200742 | NONE | DONE
S1200743 | NONE | DONE
S1200744 | TP4 @ LED1,2 on PCB S2100568 is 13V instead of 5V; TP4 @ LED4 on PCB S2100559 is 13V instead of 5V | DONE
S1200752 | NONE | DONE

  16357   Thu Sep 23 14:17:44 2021   Tega   Update   Electronics   Sat Amp modifications debugging update

Debugging complete.

All units now have the correct TP4 voltage reading needed to drive a nominal current of 35 mA through the OSEM LED. The next step is to go ahead and replace the components and test afterward that everything is OK.

 

Unit Serial Number | Issues | Debugging | Status
S1200736 | TP4 @ LED4 on chan 1-4 PCB reads 13V instead of 5V | This issue was caused by a short between the Collector & Base legs of the Q1 transistor. Solution - remove the short between the Collector & Base legs of Q1. | DONE
S1200737 | NONE | N/A | DONE
S1200739 | NONE | N/A | DONE
S1200746 | TP4 @ LED3 on chan 5-8 PCB reads 0.765 V instead of 5V | This issue was caused by a short between the Emitter & Base legs of the Q1 transistor. Solution - remove the short between the Emitter & Base legs of Q1. Complications - I was extra careful this time because of the loose cable from the last flip-over of the right PCB containing chan 5-8. Anyway, after I was done I noticed one of the pink wires (it carries the +14V to the left PCB) had come off at P1. At least this time I could also see the corresponding front panel green LED turn off as a result. So I resecured the wire to the connector (using solder, since my last attempt yesterday to reattach it via crimping didn't work after a long time trying; I hope this is not a problem) and checked that the front panel LED turns on when the unit is powered before closing the unit. These connectors are quite flimsy. | DONE
S1200747 | TP4 @ LED2 on chan 1-4 PCB reads 13V instead of 5V | This issue was caused by a short between the Collector & Base legs of the Q1 transistor. Solution - remove the short between the Collector & Base legs of Q1. | DONE

  16359   Thu Sep 23 18:18:07 2021   Yehonathan   Update   BHD   SOS assembly

I have noticed that the dumbbells coming back from C&B had glue residue on them. An example is shown in Attachment 1: it can be seen that half of the dumbbell's surface is covered with glue.

Jordan gave me P800 sandpaper to remove the glue. I picked up the dumbbells with the dirty face down and slid them over the sandpaper in figure-8 patterns several times, trying to keep the surface untilted. Attachment 2 shows the surface from Attachment 1 after this process.

Next, the dumbbells will be sent for another C&B.

Attachment 1: dumbell_before.png
Attachment 2: dumbell_after.png
  16364   Wed Sep 29 09:36:26 2021   Jordan   Update   SUS   2" Adapter Ring Parts for SOS Arrived 9/28/21

The remaining machined parts for the SOS adapter ring have arrived. I will inspect these today and get them ready for C&B.

Attachment 1: 20210929_092418.jpg
  16368   Thu Sep 30 14:13:18 2021   Anchal   Update   LSC   HV supply to Xend Green laser injection mirrors M1 and M2 PZT restored

Late elog, original date Sep 15th

We found that the power switch of the HV supply that powers the PZT drivers for M1 and M2 in the X end green laser injection alignment was tripped off. We could not find any log of someone doing it; it is a physical switch. Our only explanation is that this supply might have a solenoid mechanism that shuts it off during power glitches, and it probably did so on Aug 23 (see 40m/16287). We were able to align the green laser using the PZTs again; however, the maximum green transmission power from the X arm cavity is now about half of what it used to be before the glitch. Maybe the seed laser on the X end died a little.

  16370   Fri Oct 1 12:12:54 2021   Stephen   Update   BHD   ITMY (3002) CAD layout pushed to Box

Koji requested the current state of the BHD 3D model. I pushed this to Box after adding the additional SOSs and creating an EASM representation (also posted, Attachment 1). I also posted the PDF used to dimension this model (Attachment 2). This process raised some points that I'll jot down here:

1) Because the 40m CAD files are not 100% confirmed to be clean of any student-license efforts, we cannot post these files to the PDM Vault or transmit them that way. When working on BHD layout efforts, these assemblies which integrate new design work must therefore be checked against the most current revisions of the vault-managed files - this Frankenstein approach is not ideal but can be managed for this effort.

2) Because the current files reflect the 40m as-built state (as far as I can tell), I shared the files in a zip directory without incrementing the revisions. It is unclear whether revision control is adequate to separate [current 40m state as reflected in CAD] from [planned 40m state after BHD upgrade]. Typically a CAD user would trust that we could find the version N assembly referenced in the drawing from year Y, so we wouldn't hesitate to create future design work in a version N+1 assembly file pending a current drawing. However, this form of revision control is not implemented. Perhaps we want to use configurations to separate design states (in other words, create a parallel model of every changed component, without creating parallel files - these configurations can be selected internal to the assembly without a need to replace files)? Or more simply (and perhaps more tenuously), we could snapshot the Box revisions and create a DCC page which notes the point of departure for BHD efforts?

Anyway, the cold hard facts:

 - Box location: 40m/40m_cad_models/Solidworks_40m (LINK)

 - Filenames: 3002.zip and 3002 20211001 ITMY BHD for Koji presentation images.easm (healthy disregard for concerns about spaces in filenames)

Attachment 1: 3002_20211001_ITMY_BHD_for_Koji_presentation_images.easm
Attachment 2: 40m_upgrade_layout_20200611-ITMY_Beam_Dim.pdf
  16373   Mon Oct 4 15:50:31 2021   Hang   Update   Calibration   Fisher matrix estimation on XARM parameters

[Anchal, Hang]

What: Anchal and I measured the XARM OLTF last Thursday.

Goal: 1. measure the 2 zeros and 2 poles in the analog whitening filter, and potentially constrain the cavity pole and an overall gain. 

          2. Compare the parameter distribution obtained from measurements and that estimated analytically from the Fisher matrix calculation.

          3. Obtain the optimized excitation spectrum for future measurements.   

How: we inject at C1:SUS-ETMX_LSC_EXC so that each digital count should be directly proportional to the force applied to the suspension. We read out the signal at C1:SUS-ETMX_LSC_OUT_DQ. We use an approximately white excitation in the 50-300 Hz band, and intentionally choose the coherence to be only slightly above 0.9 so that we can get some statistical error to be compared with the Fisher matrix's prediction. For each measurement, we use a bandwidth of 0.25 Hz and 10 averages (no overlapping between adjacent segments). 
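
For the offline processing, the TF and coherence estimates from the two time series can be computed along these lines (a scipy-based sketch with synthetic data standing in for the real channels; the actual analysis lives in the 40m/sysID repo, and the 16384 Hz sampling rate is an assumption):

import numpy as np
from scipy import signal

fs = 16384                      # assumed sampling rate of the DQ channels
nperseg = int(fs / 0.25)        # 0.25 Hz bandwidth -> 4 s segments
rng = np.random.default_rng(0)

# stand-ins for C1:SUS-ETMX_LSC_EXC (drive) and C1:SUS-ETMX_LSC_OUT_DQ (readback)
exc = rng.standard_normal(10 * nperseg)
resp = np.convolve(exc, np.ones(8) / 8.0, mode="same") + 0.1 * rng.standard_normal(exc.size)

# cross- and auto-spectra with 10 non-overlapping averages, as in the measurement
f, Pxy = signal.csd(exc, resp, fs=fs, nperseg=nperseg, noverlap=0)
_, Pxx = signal.welch(exc, fs=fs, nperseg=nperseg, noverlap=0)
_, coh = signal.coherence(exc, resp, fs=fs, nperseg=nperseg, noverlap=0)

tf = Pxy / Pxx                  # transfer function estimate (readback / drive)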

The 2 zeros and 2 poles in the analog whitening filter and an overall gain are treated as free parameters to be fitted, while the rest are taken from the model by Anchal and Paco (elog:16363). The optical response of the arm cavity seems to be missing in that model, and thus we additionally include a real pole (for the cavity pole) in the model we fit. Thus in total, our model has 6 free parameters: 2 zeros, 3 poles, and 1 overall gain.

The analysis codes are pushed to the 40m/sysID repo. 

===========================================================

Results:

Fig. 1 shows one measurement. The gray trace is the data and the olive one is the maximum likelihood estimation. The uncertainty for each frequency bin is shown in the shaded region. Note that the SNR is related to the coherence as 

        SNR^2 = [coherence / (1-coherence)] * (# of averages), 

and for a complex TF written as G = A * exp[1j*Phi], one can show the uncertainty is given by 

        \Delta A / A = 1/SNR,  \Delta \Phi = 1/SNR [rad]. 
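
As a quick numerical check of the expressions above (a sketch; 0.9 coherence and 10 averages are the settings quoted in this entry):

import numpy as np

def tf_uncertainty(coherence, n_avg):
    """Fractional magnitude error dA/A and phase error dPhi [rad] of a TF estimate."""
    snr = np.sqrt(coherence / (1.0 - coherence) * n_avg)
    return 1.0 / snr, 1.0 / snr

dA_over_A, dPhi = tf_uncertainty(0.9, 10)
print(dA_over_A, dPhi)  # ~0.105, i.e. ~10% magnitude error and ~0.1 rad phase error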

Fig. 2. The gray contours show the 1- and 2-sigma levels of the model parameters using the Fisher matrix calculation. We repeated the measurement shown in Fig. 1 three times, and the best-fit parameters for each measurement are indicated in the red-crosses. Although we only did a small number of experiments, the amount of scattering is consistent with the Fisher matrix's prediction, giving us some confidence in our analytical calculation. 

One thing to note though is that in order to fit the measured data, we would need an additional pole at around 1,500 Hz. This seems a bit low for the cavity pole frequency. For aLIGO w/ 4km arms, the single-arm pole is about 40-50 Hz. The arm is 100 times shorter here and I would naively expect the cavity pole to be at 3k-4k Hz if the test masses are similar. 

Fig. 3. We then follow the algorithm outlined in Pintelon & Schoukens, sec. 5.4.2.2, to calculate how we should change the excitation spectrum. Note that here we keep the rms of the force applied to the suspension fixed. 

Fig. 4 then shows how the expected error changes as we optimize the excitation. It seems that in this case a white-ish excitation is already decent (as the TF itself is quite flat in the range of interest), and we only get some mild improvement as we iterate the excitation spectra (note we use the colors gray, olive, and purple for the results after the 0th, 1st, and 2nd iteration; same color coding as in Fig. 3).   

 

 

 

Attachment 1: tf_meas.pdf
Attachment 2: fisher_est_vs_data.pdf
Attachment 3: Pxx_evol.pdf
Attachment 4: fisher_evol.pdf
  16377   Mon Oct 4 18:35:12 2021   Paco   Update   Electronics   Satellite amp box adapters

[Paco]

I have finished assembling the 1U adapters from 8 to 5 DB9 connectors for the satellite amp boxes. One thing I had to "hack" was the corners of the front-panel end of the PCB. Because the PCB was a bit too wide, it wasn't really flush against the front panel (see Attachment #1), so I just filed the corners by ~3 mm and covered them with kapton tape to prevent contact between the ground planes and the chassis. After this, I made DB9 cables, connected everything in place and attached it to the rear panel (Attachment #2). Four units are resting near the CAD machine (next to the bench area); see Attachment #3.

Attachment 1: pcb_no_flush.jpg
Attachment 2: 1U_assembly.jpg
Attachment 3: fourunits.jpg
  16378   Mon Oct 4 20:46:08 2021   Koji   Update   Electronics   Satellite amp box adapters

Thanks. You should be able to find the chassis-related hardware on the left side of the benchtop drawers at the middle workbench.

Hardware: The special low profile 4-40 standoff screw / 1U handles / screws and washers for the chassis / flat-top screws for chassis panels and lids

  16379   Mon Oct 4 21:58:17 2021   Tega   Update   Electronics   Sat Amp modifications

Trying to finish 2 more Sat Amp units so that we have the 7 units needed for the X-arm install. 

S2100736 - All good

S2100737 - This unit presented with an issue on the PD1 circuit of the channel 1-4 PCB, where the voltage readings on TP6, TP7 and TP8 are -15.1V, -14.2V, and +14.7V respectively, instead of ~0V. The unit also has an issue on the PD2 circuit of the channel 1-4 PCB, where the voltage readings on TP7 and TP8 are -14.2V and +14.25V respectively, instead of ~0V.

 

  16380   Tue Oct 5 17:01:20 2021   Koji   Update   Electronics   Sat Amp modifications

Make sure the inputs for the PD amps are open. This is the current amplifier and we want to leave the input pins open for the test of this circuit.

TP6 is the first stage of the amps (TIA). So this stage has the issue. Usual checks: is the power properly supplied / are the pins properly connected/isolated / is the opamp alive or not.

For TP8: if TP8 gets railed, TP5 and TP7 are going to be railed too. Is that the case? If so, check the whitening stage in the same way as above.
If the problem is only on TP5 and/or TP7, it is a differential driver issue. Check the final stage as above. Replacing the opamp could help.

 

  16384   Wed Oct 6 15:04:36 2021   Hang   Update   SUS   PRM L2P TF measurement & Fisher matrix analysis

[Paco, Hang]

Yesterday afternoon Paco and I measured the PRM L2P transfer function. We drove C1:SUS-PRM_LSC_EXC with a white noise in the 0-10 Hz band (effectively a white, longitudinal force applied to the suspension) and read out the pitch response in C1:SUS-PRM_OL_PIT_OUT. The local damping was left on during the measurement. Each FFT segment in our measurement is 32 sec and we used 8 non-overlapping segments for each measurement. The empirically determined results are also compared with the Fisher matrix estimation (similar to elog:16373).

Results:

Fig. 1 shows one example of the measured L2P transfer function. The gray traces are measurement data and shaded region the corresponding uncertainty. The olive trace is the best fit model. 

Note that for a single-stage suspension, the ideal L2P TF should have two zeros at DC and two pairs of complex poles for the length and pitch resonances, respectively. We found the two resonances at around 1 Hz from the fitting, as expected. However, the zeros were not at DC as the ideal, theoretical model suggested. Instead, we found that a pair of right-half-plane zeros was required to explain the measurement results. If we cast such a pair of right-half-plane zeros into an (f, Q) pair, it corresponds to a negative value of Q. This means the system is not minimum-phase and suggests some dirty cross-coupling exists, which might not be surprising. 

Fig. 2 compares the distribution of the fitting results for 4 different measurements (4 red crosses) and the analytical error estimation obtained using the Fisher matrix (the gray contours; the inner one is the 1-sigma region and the outer one the 3-sigma region). The Fisher matrix appears to underestimate the scattering from this experiment, yet it does capture the correlation between different parameters (the frequencies and quality factors of the two resonances).

One caveat, though, is that the fitting routine is not especially robust. We used the vectfit routine with human intervention to get initial guesses for the model. We then used a standard scipy least-squares routine to find the maximum likelihood estimator of the restricted model (with a fixed number of zeros and poles; here 2 complex zeros and 4 complex poles). The initial guess for the scipy routine was obtained from the vectfit model.  
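
A minimal sketch of that refinement step (one complex zero pair + two complex pole pairs + gain, fit to complex TF data with scipy; synthetic data stands in for the measurement and the vectfit initial-guess step is omitted, so the numbers below are made up):

import numpy as np
from scipy.optimize import least_squares

def l2p_model(f, fz, qz, fp1, qp1, fp2, qp2, k):
    """One complex zero pair and two complex pole pairs (qz < 0 gives RHP zeros)."""
    s = 2j * np.pi * f
    wz, wp1, wp2 = 2 * np.pi * fz, 2 * np.pi * fp1, 2 * np.pi * fp2
    num = s**2 + (wz / qz) * s + wz**2
    den = (s**2 + (wp1 / qp1) * s + wp1**2) * (s**2 + (wp2 / qp2) * s + wp2**2)
    return k * num / den

def residuals(x, f, data, sigma):
    r = (l2p_model(f, *x) - data) / sigma
    return np.concatenate([r.real, r.imag])

f = np.linspace(0.2, 5.0, 200)
x_true = [3.0, -5.0, 0.95, 8.0, 1.05, 8.0, 1.0]   # made-up "true" parameters
data = l2p_model(f, *x_true) * (1 + 0.01 * np.random.randn(f.size))
x0 = [2.5, -4.0, 0.9, 5.0, 1.1, 5.0, 0.8]         # stand-in for the vectfit guess
fit = least_squares(residuals, x0, args=(f, data, 0.01 * abs(data)))
print(fit.x)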

Fig. 3 shows how we may shape our excitation PSD to maximize the Fisher information while keeping the RMS force applied to the PRM suspension fixed. In this case the result is very intuitive: we simply concentrate our drive around the resonance at ~1 Hz, focusing on locations where we initially have good SNR. So at least the code is not suggesting something crazy... 

Fig. 4 then shows how the new uncertainty (3-sigma contours) should change as we optimize our excitation. Basically one iteration (from gray to olive) is sufficient here. 

We will find a time soon to repeat the measurement with the optimized injection spectrum.

Attachment 1: prm_l2p_tf_meas.pdf
Attachment 2: prm_l2p_fisher_vs_data.pdf
Attachment 3: prm_l2p_Pxx_evol.pdf
Attachment 4: prm_l2p_fisher_evol.pdf
  16386   Wed Oct 6 16:31:02 2021   Tega   Update   Electronics   Sat Amp modifications

[Tega, Koji]

(S2100737) - Debugging showed that the opamp, AD822ARZ, for the PD2 circuit was not working as expected, so we replaced it with a spare and this fixed the problem. Somehow, the PD1 circuit no longer presents any issues, so everything is now fine with the unit.

(S2100741) - All good.

Quote:

Trying to finish 2 more Sat Amp units so that we have the 7 units needed for the X-arm install. 

S2100736 - All good

S2100737 - This unit presented with an issue on the PD1 circuit of channel 1-4 PCB where the voltage reading on TP6, TP7 and TP8 are -15.1V,  -14.2V, and +14.7V respectively, instead of ~0V.  The unit also has an issue on the PD2 circuit of channel 1-4 PCB because the voltage reading on TP7 and TP8 are  -14.2V, and +14.25V respectively, instead of ~0V.

 

 

  16387   Thu Oct 7 02:04:19 2021   Koji   Update   Electronics   Satellite amp adapter chassis

The 4 units of Satellite Amp Adapter were done:
- The ears were fixed with the screws
- The handles were attached (The stock of the handles is low)
- The boards are now supported by plastic stand-offs. (The chassis were drilled)
- The front and rear panels were fixed to the chassis
- The front and rear connectors were fixed with the low profile 4-40 stand-off screws (3M 3341-1S)
 

Attachment 1: P_20211006_205044.jpg
  16388   Fri Oct 8 17:33:13 2021   Hang   Update   SUS   More PRM L2P measurements

[Raj, Hang]

We did some more measurements on the PRM L2P TF. 

We tried to compare the parameter estimation uncertainties of white vs. optimal excitation. We drove C1:SUS-PRM_LSC_EXC with "Normal" excitation and digital gain of 700.

For the white noise excitation, we simply put a butter("LowPass",4,10) filter in place to select out the <10 Hz band.

For the optimal excitation, we use butter("BandPass",6,0.3,1.6) gain(3) notch(1,20,8) to approximate the spectral shape reported in elog:16384. We tried to use awg.ArbitraryLoop, yet this function seems to have some bugs and didn't run correctly; an issue has been submitted to the gitlab repo with more details. We also noticed that in elog:16384, the pitch motion should be read out from C1:SUS-PRM_OL_PIT_IN1 instead of the OUT channel, as there are some extra filters between IN1 and OUT. Consequently, the exact optimal excitation should be revisited, yet we think the main result should not be altered significantly.

While a more detailed analysis will be done later offline, we post in the attached plot a comparison between the white (blue) and optimal (red) excitation. Note that in this case we kept the total force applied to the PRM the same (as the matching RMS levels show).

In this simple case, the optimal excitation appears reasonable on two counts.

First, the optimization tries to concentrate the power around the resonance. We would naturally expect that near the resonance, we would get more Fisher information, as the phase changes the fastest there (i.e., large derivatives in the TF).

Second, while we move the power in the >2 Hz band to the 0.3-2 Hz band, from the coherence plot we see that we don't lose any information in the > 2 Hz region. Indeed, even with the original white excitation, the coherence is low and the > 2 Hz region would not be informative. Therefore, it seems reasonable to give up this band so that we can gain more information from locations where we have meaningful coherence.

Attachment 1: Screenshot_2021-10-08_17-30-52.png
  16389   Mon Oct 11 11:13:04 2021   rana   Update   SUS   More PRM L2P measurements

For the oplev, there are DQ channels you can use so that it's possible to look back in the past for long measurements. They have names like PERROR.

  16390   Mon Oct 11 13:59:47 2021   Hang   Update   SUS   More PRM L2P measurements

We report here the analysis results for the measurements done in elog:16388.

Figs. 1 & 2 are measurements with the white-noise excitation and the optimized excitation, respectively. The shaded region corresponds to the 1-sigma uncertainty at each frequency bin. By eye, one can already see that the constraints on the phase in the 0.6-1 Hz band are much tighter in the optimized case than in the white-noise case. 

We found the transfer function was best described by two real poles + one pair of complex poles (i.e., a resonance) + one pair of complex zeros in the right-half plane (non-minimum phase). The measurement in fact suggested a right-half-plane pole somewhere between 0.05-0.1 Hz, which cannot be right. For now, I just manually flipped the sign of this lowest-frequency pole to the left-hand side. However, this introduced some systematic deviation in the phase in the 0.3-0.5 Hz band, where our coherence was still good. Therefore, a caveat is that our model with 7 free parameters (4 poles + 2 zeros + 1 gain, as one would expect for an ideal single-stage L2P TF) might not sufficiently capture the entire physics. 

In Fig. 3 we show the comparison of the two sets of measurements together with the predictions based on the Fisher matrix. Here the color gray is for the white-noise excitation and olive is for the optimized excitation. The solid and dotted contours are respectively the 1-sigma and 3-sigma regions from the Fisher calculation, and the crosses are the maximum likelihood estimates from each measurement (though the scipy optimizer might not find the true maximum).

Note that the mean values don't match between the two sets of measurements, suggesting potential bias or other systematics exist in the current measurement. Moreover, there could be multiple local maxima in the likelihood in this high-dimensional parameter space (not surprising). For example, one could reduce the resonant Q but enhance the overall gain to keep the shoulder of a resonance at the same amplitude. However, this correlation is not explicit in the Fisher matrix (first-order derivatives of the TF, i.e., local gradients), as it does not show up in the error ellipse. 

In Fig. 4 we show the further-optimized excitation for the next round of measurements. Here the cyan and olive traces are obtained assuming different values of the "true" physical parameters, yet the overall shapes of the two are quite similar and are close to the optimized excitation spectrum we already used in elog:16388.

 

Attachment 1: prm_l2p_tf_meas_white.pdf
Attachment 2: prm_l2p_tf_meas_opt.pdf
Attachment 3: prm_l2p_fisher_vs_data_white_vs_opt.pdf
Attachment 4: prm_l2p_Pxx_evol_v2.pdf
  16399   Wed Oct 13 15:36:38 2021   Hang   Update   Calibration   XARM OLTF

We did a few quick XARM OLTF measurements. We excited C1:LSC-ETMX_EXC with broadband white noise up to 4 kHz. The GPS timestamps for the measurements are 1318199043 (start) - 1318199427 (end).

We will process the measurement to compute the cavity pole and analog filter poles & zeros later.
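
For the offline processing, the data for that span can be pulled back with the nds2 client along these lines (a sketch only; the server name/port and the channel names below are placeholders, not verified against the actual frames):

import nds2

conn = nds2.connection("fb1", 8088)   # assumed local NDS2 server
bufs = conn.fetch(1318199043, 1318199427,
                  ["C1:LSC-XARM_IN1_DQ", "C1:SUS-ETMX_LSC_OUT_DQ"])  # placeholder channels
for b in bufs:
    print(b.channel.name, b.channel.sample_rate, len(b.data))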

Attachment 1: Screenshot_2021-10-13_15-32-16.png
  16400   Thu Oct 14 09:28:46 2021   Yehonathan   Update   PSL   PMC unlocked

The PMC has been unlocked since ~2:30 AM. It seems like the PZT got saturated. I moved the DC output adjuster and the PMC locked immediately, although with a low transmission of 0.62 V (>0.7 V is the usual case) and high REFL.

The IMC locked immediately, but the IFO seems to be completely misaligned. The beams on the AS monitor are moving quite a lot synchronously. The BS watchdog tripped. I enabled the coil outputs. Waiting for the RMS motion to relax...

It's not relaxing. The RMS motion is still high. I disabled the coils again and re-enabled them. This seems to have worked. The arms were locked quite easily, but the ETM oplevs were way off and the ASS couldn't get TRX and TRY above 0.7. I aligned the ETMs to center the oplevs. I realigned everything else and locked the arms. The maximum TR is still < 0.8.

 

 

  16401   Thu Oct 14 11:25:49 2021   Yehonathan   Update   PSL   PMC unlocked

{Yehonathan, Anchal}

I went to get a sandwich around 10:20 AM, and when I came back the BS was moving like crazy. We shut down the watchdog.

We looked at the spectra of the OSEMs (Attachment 1). Clearly, the UR sensing is bad.

We took the BS satellite box out. Anchal opened the box and nothing seemed wrong visually. We returned the box and connected it to the fake OSEM box. The sensor spectra seemed normal.

We connected the box to the vacuum chamber and the spectra are still normal (Attachment 2).

We turned on the coils and the motion was damped very quickly (RMS < 0.5 mV).

Either the problem was solved by disconnecting and reconnecting the cables, or it will come back to haunt us.

Attachment 1: BS_OSEM_Sensor_PSD.pdf
Attachment 2: BS_OSEM_Sensor_PSD_AfterReconnectingCables.pdf