ID | Date | Author | Type | Category | Subject
  14672   Thu Jun 13 22:21:44 2019 | Koji | Configuration | CDS | Paola wireless connected to martian

SURFs had trouble connecting paola to martian via wireless.
Of course, wireless access requires a fixed IP, but paola did not have one yet. So I went to chiara and assigned 192.168.113.110 as "paolawl". Note that the wired connection has .111 and is named "paola".

Followed the instruction on http://nodus.ligo.caltech.edu:8080/40m/14121
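For future reference, a minimal sketch of what this amounts to and how to check it, assuming the martian names are served from a standard BIND-style zone file on chiara (the zone entry shown is illustrative, not a record of the actual file):

# hypothetical zone entry on chiara (exact file and syntax depend on the actual DNS setup):
#   paolawl    IN    A    192.168.113.110
# verify from any martian machine:
host paolawl 192.168.113.104    # query chiara directly
ping -c 3 paolawl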

  14685   Fri Jun 21 19:22:40 2019 | Koji | Configuration | BHD | Reviving the single OMC BHD design?

I think a Faraday rotator rotates the polarization in the same way for both the forward and backward beams, so it is not like what is shown in this figure.
The transmission through multiple Faradays will also be a big issue.

  14692   Mon Jun 24 13:48:36 2019 | Kruthi | Configuration | CDS | Giada wireless connection

[Gautam, Kruthi]

This afternoon, Gautam helped me set up Giada to access the GigE installed for MC2. Unlike Paola, which was being used earlier, Giada has a better battery life and doesn't shut down when the charger is unplugged. Gautam configured Giada to enable its wireless connection to Martian, just like Koji had configured Paola (https://nodus.ligo.caltech.edu:8081/40m/14672). We also rerouted the ethernet cable we were using with the PoE adaptor from the Netgear switch in 1X2 to 1X6.

  14767   Wed Jul 17 17:56:18 2019 | Koji | Configuration | Computers | Gave resolv.conf to giada

Kruthi noticed that she could not login to rossa from giada.

I checked /etc/resolv.conf and it was

nameserver 127.0.0.1

so obviously it is useless to refer to localhost (i.e., giada itself) as the nameserver.

I copied our usual resolv.conf to giada as follows:

nameserver 192.168.113.104
nameserver 131.215.125.1
nameserver 8.8.8.8

search martian

Giada's ssh known_hosts file had a stale entry for rossa, so I had to clean it up; after that we could connect to rossa from giada just by "ssh rossa".
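For the record, the stale entry can presumably be cleared with the standard OpenSSH tool rather than by editing the file by hand:

ssh-keygen -R rossa    # drop any cached host key for "rossa" from ~/.ssh/known_hosts
ssh rossa              # accept the new host key on the first connection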

Case closed.

  14812   Thu Jul 25 14:28:03 2019 | gautam | Configuration | Computers | firewalld disabled for EPICS CA

I think rana made some more changes to this workstation to make it useful for commissioning activities, but the MEDM screens were still white blanks. The problem was that firewalld wasn't disabled (the last two steps of the KThorne setup wiki). I disabled it. Now donatella can run MEDM, ndscope and StripTool. DTT doesn't work for getting online data because of a "Synchronization Error"; I'm not bothering with this for now. I think Kruthi successfully demonstrated the fetching of offline data with DTT.
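For reference, a minimal sketch of the firewalld step (assuming this is what the last two steps of the KThorne wiki amount to; not checked against that page):

sudo systemctl stop firewalld       # stop the running firewall so EPICS CA traffic can pass
sudo systemctl disable firewalld    # keep it from coming back on the next boot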

Attachment 1: donatellaCommissioning.png
  15085   Sun Dec 8 20:48:29 2019 | rana | Configuration | Computers | Megatron: starts up grade

I noticed recently that Megatron was running Ubuntu 12, so I've started its OS upgrade.

  1. Unlocked the IMC + disabled the autolocker from the LockMC screen + closed the PSL shutter (the IMC REFL shutter doesn't seem to do anything)
  2. Disabled the "FSS" slow servo on the FSS screen
  3. did sudo apt-get update, sudo apt-get upgrade, and then sudo do-release-upgrade, which starts the actual upgrade
  4. According to the internet, the LTS upgrades go in series rather than up to 18 in one shot, so it's now doing 12 -> 14 (Trusty Tahr)

Megatron and IMC autolocking will be down for a while, so we should use a different 'script' computer this week.


Mon Dec 9 14:52:58 2019

upgrade to Ubuntu 14 complete; now upgrading to 16

  15095   Wed Dec 11 22:01:24 2019 | rana | Configuration | Computers | Megatron: starts up grade

Megatron is now running Ubuntu 18.04 LTS.

We should probably be able to load all the LSC software on there by adding the appropriate Debian repos.

I have re-enabled the cron jobs in the crontab.

The MC Autolocker and the PSL NPRO Slow/Temperature control are run using 'initctl', so I'll leave that up to Shruti to run/test.

  15117   Mon Jan 13 15:47:37 2020 | shruti | Configuration | Computer Scripts / Programs | c1psl burt restore

[Yehonathan, Jon, Shruti]

Since the PMC would not lock, we initially burt-restored the c1psl machine to the last available snapshot (Dec 10th 2019), but it still would not lock.

Then, it was burt-restored to midnight of Dec 1st, 2019, after which it could be locked.

  15125   Wed Jan 15 14:10:28 2020 | Jon | Configuration | PSL | New EPICS database for C1PSL + C1IOO

Summary

I have completed the new EPICS channel database for the c1psl and c1ioo channels (now combined into the new c1psl Acromag machine). I've tested a small subset of channels on the electronics bench to confirm that the addressing and analog channel calibrations are correct in a general sense. At this point, we are handing the chassis off to Chub to complete the wiring of the Acromag terminals to Dsub feedthroughs. At the 40m meeting today, we identified Feb. 17-22 as a potential window for installation in the interferometer (Gautam is out of town then). Below are some implementation details for future reference.

Analog channel calibration for Acromag

For analog input (ai) channels, the Acromag outputs raw values spanning +/-30,000 counts, but the EPICS IOC interprets the data type as spanning +/-2^15 = +/-32,768. Similarly, for analog output (ao) channels, the Acromag expects a drive signal in the range +/-30,000 counts. To achieve proper scaling, Johannes had previously changed the EGUF and EGUL fields from +/-10 V to +/-10.923 V. However, changing the engineering fields makes it much harder for a human to read off the real physical I/O range of the channel.

A better way to achieve the correct scaling is to simply set the field ASLO=1.09225 (65,536 / 60,001) in addition to the normal EGUF and EGUL field values (+/-10 V). Setting this field forces a rescaling of the raw counts that works as follows (assuming a 16-bit bipolar ADC or DAC, as the Acromags are):

OVAL = (RVAL * ASLO + AOFF + 2^15) * (EGUF - EGUL) / 2^16 + EGUL

In the above mapping, OVAL is the value of the channel in engineering units (e.g., V) and RVAL is its raw value in counts. It is not the case that only one of the ASLO/AOFF or EGUF/EGUL field pairs is used. The ASLO/AOFF parameters are always applied (but their default values are ASLO=1 and AOFF=0, so they have no effect unless changed). The EGUF and EGUL parameters are then additionally applied if the field LINR="LINEAR" is set.

This conversion allows the engineering fields to remain unchanged from the real physical range. The ASLO value is the same for both analog input and output channels. I have implemented this on all the new c1psl and c1ioo channels and confirmed it to work using a calibrated input voltage source.
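As a quick sanity check of this mapping (with the values above and AOFF = 0): a full-scale raw reading RVAL = +30,000 gives

OVAL = (30,000 * 1.09225 + 0 + 32,768) * (10 - (-10)) / 2^16 + (-10) ≈ +10.0 V

and RVAL = -30,000 similarly gives ≈ -10.0 V, so the engineering fields can stay at the true physical +/-10 V range.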

  15142   Wed Jan 22 19:17:20 2020 | gautam | Configuration | Computers | Megatron: starts up grade

upgrade was done

cronjob testing wasn't one by one 😢 

burt snapshots were gone

i brought them back home 🏠 

Quote:

Megatron is now running Ubuntu 18.04 LTS.

  15145   Thu Jan 23 15:32:42 2020 | gautam | Configuration | Computers | Megatron: starts up grade

The burt snapshotting is still not so reliable - for whatever reason, the number of snapshot files that actually get written looks random. For example, the 14:19 backup today got all the snaps, but 15:19 did not. There are no obvious red flags in either the cron job logs or the autoburt log files. I also don't see any clues when I run the script in a shell. It'll be good if someone can take a look at this.
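One way to quantify this might be to count how many snapshot files actually get written in a given hour; the directory below is a placeholder, not the actual autoburt path:

# <autoburt_dir> is hypothetical -- substitute the real autoburt snapshot area
find <autoburt_dir> -name '*.snap' -newermt '14:00' ! -newermt '15:00' | wc -l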

  15150   Thu Jan 23 23:07:04 2020 | Jon | Configuration | PSL | c1psl breakout board wiring

To facilitate wiring the c1psl chassis and scripting loopback tests, I've compiled a distilled spreadsheet with the Acromag-to-breakout board wiring, broken down by connector. This information is extractable from the master spreadsheet, but not easily. There were also a few apparent typos which are fixed here.

The wiring assignments at the time of writing are attached below. Here is the link to the latest spreadsheet.

Attachment 1: c1psl_feedthrough_wiring.pdf
  15158   Mon Jan 27 14:01:01 2020 | Jordan | Configuration | General | Repurposed Sorenson Power Supply

The 24 V Sorenson (2nd from bottom) in the small rack west of 1x2 was repurposed to 12V 600 mA, and was run to a terminal block on the north side of 1X1. Cables were routed underneath 1X1 and 1X2 to the terminal blocks. 12V was then routed to the PSL table and banana clip terminals were added.

  15159   Mon Jan 27 18:16:30 2020 | gautam | Configuration | Computers | Sluggish megatron?

I've also been noticing that the IMC Autolocker scripts are running rather sluggishly on Megatron recently. Some evidence - on Feb 11 2019, the time between the mcup script starting and finishing is ~10 seconds (I don't post the raw log output here to keep the elog short). However, post upgrade, the mean time is more like ~45-50 seconds. Rana mentioned he didn't install any of the modern LIGO software tools post upgrade, so maybe we are using some ancient EPICS binaries. I suspect the cron job for the burt snapshot is also just timing out due to the high latency in channel access. Rana is doing the software install on the new rossa, and once he verifies things are working, we will try implementing the same solution on megatron. The machine is an old Sun Microsystems one, but the system diagnostics don't signal any CPU timeouts or memory overflows, so I'm thinking the problem is software related...
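One quick check of the channel-access latency itself (the channel below is just an example of an existing slow channel; compare the wall-clock time on megatron against a workstation):

time caget C1:IOO-MC_TRANS_SUM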

Quote:

The burt snapshotting is still not so reliable - for whatever reason, the number of snapshot files that actually get written looks random. For example, the 14:19 backup today got all the snaps, but 15:19 did not. There are no obvious red flags in either the cron job logs or the autoburt log files. I also don't see any clues when I run the script in a shell. It'll be good if someone can take a look at this.

  15164   Tue Jan 28 15:39:04 2020 | gautam | Configuration | Computers | Sluggish megatron?

There were a bunch of medm processes stalled on megatron (connected with screenshot taking). To see if they were interfering with the other scripts, I killed all of the medm processes, and commented out the line in the crontab that runs the screenshots every 10 mins. Let's see if this improves stability.

  15167   Tue Jan 28 17:36:45 2020 | gautam | Configuration | Computers | Local EPICS7.0 installed on megatron

[Jon, gautam]

We found that the caput commands were taking much longer to execute on megatron than on pianosa (for example). Suspecting that this had something to do with the fact that megatron was using EPICS binaries from the shared NFS drive which were compiled for a much older OS, I installed the latest stable release of EPICS on megatron. The new caput commands execute much faster. I also added the local EPICS directory to the head of the $PATH variable used by the MC autolocker and FSS Slow scripts, so that they use the new caput command. But mcup is still slow - maybe my new path definition isn't picked up and it is still using the NFS binaries? To be looked into...
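A minimal way to confirm which binaries the scripts actually pick up (the install prefix below is only illustrative, not the actual path on megatron):

# in the environment the autolocker/FSS scripts run under:
which caput caget    # should point at the local EPICS install, not the NFS mount
# e.g., prepend the local install to the search path (adjust to the real prefix):
export PATH=/usr/local/epics/base/bin/linux-x86_64:$PATH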

Quote:

There were a bunch of medm processes stalled on megatron (connected with screenshot taking). To see if they were interfering with the other scripts, I killed all of the medm processes, and commented out the line in the crontab that runs the screenshots every 10 mins. Let's see if this improves stability.

  15168   Tue Jan 28 19:12:30 2020 | Jon | Configuration | PSL | Spare channels added to c1psl chassis

After some discussion with Gautam, I decided to build more spare channels into the new c1psl machine. This is in anticipation of adding new laser and ISS channels in the near future, to avoid having to disconnect the installed chassis and pull it out of the rack. The spare channels will be wired to DB37M feedthroughs on the front side of the chassis, with enough wire length to be able to pull the breakout boards out of the front to reconfigure their wiring as needed (e.g., split off channels onto a separate connector).

To have enough overhead, this will require installing 1 additional ADC unit (XT1221) and 1 additional DAC (XT1541). We have enough spare BIO channels among the existing units (both sinking and sourcing). This will give us:

  • 13 spare ADC channels
  • 14 spare DAC channels
  • 16 spare sinking BIO channels
  • 12 spare sourcing BIO channels

The updated c1psl chassis wiring assignments are attached. It adds 4 new DB37M connectors for the spare channels (highlighted in yellow) and fixes one typo Jordan found while wiring today. The most current spreadsheet is available here.

Attachment 1: c1psl_feedthrough_wiring_v2.pdf
  15421   Mon Jun 22 10:43:25 2020 | Jon | Configuration | VAC | Vac maintenance at 11 am

The vac system is going down at 11 am today for planned maintenance:

  • Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
  • Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]

We will advise when the work is completed.

  15424   Mon Jun 22 20:06:06 2020 | Jon | Configuration | VAC | Vac maintenance complete

This work is finally complete. The dry pump replacement was finished quickly but the controls updates required some substantial debugging.

For one, the mailer code I had been given to install would not run against Python 3.4 on c1vac, the version run by the vac controls since about a year ago. There were some missing dependencies that proved difficult to install (related to Debian Jessie becoming unsupported). I ultimately solved the problem by migrating the whole system to Python 3.5. Getting the Python keyring working within systemd (for email account authentication) also took some time.

Edit: The new interlock flag channel is named C1:Vac-interlock_flag.

Along the way, I discovered why the interlocks had been failing to auto-close the PSL shutter: The interlock was pointed to the channel C1:AUX-PSL_ShutterRqst. During the recent c1psl upgrade, we renamed this channel C1:PSL-PSL_ShutterRqst. This has been fixed.

The main volume is being pumped down, for now still in a TP3-backed configuration. As of 8:30 pm the pressure had fallen back to the upper 1E-6 range. The interlock protection is fully restored. Any time an interlock is triggered in the future, the system will send an immediate notification to the 40m mailing list. 👍

Quote:

The vac system is going down at 11 am today for planned maintenance:

  • Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
  • Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]
Attachment 1: Pumpdown-6-22-20.png
  15425   Tue Jun 23 17:54:56 2020 | rana | Configuration | VAC | Vac maintenance complete

I propose we go for all CAPS for all channel names. The lower-case names are just a holdover from Steve/Alan from the '90s. All other systems are all CAPS.

It would avoid us having to force them all to UPPER in the scripts and channel lists.

  15446   Wed Jul 1 18:03:04 2020 | Jon | Configuration | VAC | UPS replacements

​I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:

  • Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However the documentation warns this package does not work for all models...
  • Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager.
  • Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.

I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant.

  15465   Thu Jul 9 18:00:35 2020 | Jon | Configuration | VAC | UPS replacements

Chub has placed the order for two new UPS units (115V for TP2/3 and a 220V version for TP1).

They will arrive within the next two weeks.

Quote:

​I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:

  • Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However the documentation warns this package does not work for all models...
  • Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager.
  • Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.

I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant.

  15510   Sat Aug 8 07:36:52 2020 | Sanika Khadkikar | Configuration | Calibration-Repair | BS Seismometer - Multi-channel calibration

Summary : 

I have been working on analyzing the seismic data obtained from the 3 seismometers present in the lab. I noticed while looking at the combined time series and the gain plots of the 3 seismometers that there is some error in the calibration of the BS seismometer. The EX and the EY seismometers seem to be well-calibrated as opposed to the BS seismometer.

The calibration factors have been determined to be :

BS-X Channel: 2.030 ± 0.079

BS-Y Channel: 2.840 ± 0.177

BS-Z Channel: 1.397 ± 0.182


Details :

The seismometers each have 3 channels, i.e., X, Y, and Z, for measuring the displacements in all 3 directions. The X channels of the three seismometers should be more or less coherent in the absence of any seismic excitation, with the gain among all the similar channels being 1; the same holds for the Y and Z channels. After analyzing multiple datasets, it was observed that the values of all three channels of the BS seismometer differed very significantly from their corresponding channels in the EX and EY seismometers, and they were not properly calibrated even in the region where they were found to be coherent.


Method :

Note: All the frequency-domain plots shown here were calculated for a sampling rate of 32 Hz. The plots were found to be extremely coherent in a certain frequency range, ~0.1 Hz to 2 Hz, so this frequency range is used to understand the relative calibration errors. The spread around the function is due to the error caused by coherence values differing from unity and the averages performed for the Welch function. 9 averages have been performed for the following analysis, keeping in mind the needed frequency resolution (~0.01 Hz) and the accuracy of the power calculated at every frequency.

  1. I first analyzed the regions in which the similar channels were found to be coherent, to allow a proper gain analysis. The EY seismometer was found to be the most stable one, so it has been used as a reference. I looked at the coherence between similar channels of the two seismometers together with the Bode plots. A transfer function estimator was used to analyze the relative calibration between all 3 pairs of seismometers. In the given frequency range EX and EY have a gain of 1, so their relative calibration is proper. The relative calibration between the BS and the EY seismometers is not proper, as the resultant gain is not 1. The attached plots show the discrepancies clearly:
  • BS-X & EY-X Transfer Function : Attachment #1
  • BS-Y & EY-Y Transfer Function : Attachment #2

          The gain in the given frequency range is ~3. The phase plotting also shows a 180-degree phase as opposed to 0 so a negative sign would also be required in the calibration factor. Thus the calibration factor for the Y channel of the BS seismometer should be around ~3. 

  • BS-Z & EY-Z Transfer Function : Attachment #3

The mean value of the gain in the given frequency range is the desired calibration factor and the error would be the mean of the error for the gain dataset chosen which is caused due to factors mentioned above.

Note: The standard error envelope plotted in the attached graphs is calculated as follows :

         1. Divide the data into n segments according to the resolution wanted for the Welch averaging to be performed later. 

         2. Calculate PSD for every segment (no averaging).

         3. Calculate the standard error for every value in the data segment by looking at the distribution formed by the n values obtained by taking that respective value from every segment.
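In other words (assuming the usual estimator of the standard error of the mean), the envelope at each frequency bin f is

SE(f) = sigma(f) / sqrt(n)

where sigma(f) is the standard deviation of the n per-segment PSD values at that bin, and n = 9 here.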

Discussions :

The BS seismometer is a different model than the EX and EY seismometers, which might be a major reason why we need a special calibration for the BS seismometer while EX and EY are fine. The sign flip in the BS-Y channel may cause a lot of errors in future data acquisitions. The time series plots in Attachment #4 show an evident DC offset present in the data. All of the information mentioned above indicates that there is some electrical or mechanical defect present in the seismometer, and it may require a reset. Kindly let me know if and when the seismometer is reset so that I can calibrate it again.

Attachment 1: BS_X-EY_X.png
Attachment 2: BS_Y-EY_Y.png
Attachment 3: BS_Z-EY_Z.png
Attachment 4: timeseries.png
  15526   Fri Aug 14 10:10:56 2020 | Jon | Configuration | VAC | Vacuum repairs today

The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.

  15527   Sat Aug 15 02:02:13 2020 | Jon | Configuration | VAC | Vacuum repairs today

Vacuum work is completed. The TP2 and TP3 interlocks have been overhauled as proposed in ELOG 15499 and seem to be performing reliably. We're now back in the nominal system state, with TP2 again backing for TP1 and TP3 pumping the annuli. I'll post the full implementation details in the morning.

I did not get to setting up the new UPS units. That will have to be scheduled for another day.

Quote:

The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.

  15528   Sat Aug 15 15:12:22 2020 | Jon | Configuration | VAC | Overhaul of small turbo pump interlocks

Summary

Yesterday I completed the switchover of small turbo pump interlocks as proposed in ELOG 15499. This overhaul altogether eliminates the dependency on RS232 readbacks, which had become unreliable (glitchy) in both controllers. In their place, the V4(5) valve-close interlocks are now predicated on an analog controller output whose voltage goes high when the rotation speed is >= 80% of the nominal setpoint. The critical speed is 52.8 krpm for TP2 and 40 krpm for TP3. There already exist hardware interlocks of V4(5) using the same signals, which I have also tested.

Interlock signal

Unlike the TP1 controller, which exposes simple relays whose open/closed states are sensed by Acromags, what the TP2(3) controllers output is an energized 24V signal for controlling such a relay (output circuit pictured below). I hadn't appreciated this difference and it cost me time yesterday. The ultimate solution was to route the signals through a set of new 24V Phoenix Contact relays installed inside the Acromag chassis. However, this required removing the chassis from the rack and bringing it to the electronics bench (rather than doing the work in situ, as I had planned). The relays are mounted to the second DIN rail opposite the Acromags. Each TP2(3) signal controls the state of a relay, which in turn is sensed using an Acromag XT1111.

Signal routing

The TP2(3) "normal-speed" signals are already in use by hardware interlocks of V4(5). Each signal is routed into the main AC relay box, where it controls an "interrupter" relay through which the Acromag control signal for the main V4(5) relay is passed. These signals are now shared with the digital controls system using a passive DB15 Y-splitter. The signal routing is shown below.

Interlock conditions

The new turbo-pump-related interlock conditions and their channel predicates are listed below. The full up-to-date channel list and wiring assignments for c1vac are maintained here.

Channel | Type | New? | Interlock-triggering condition
C1:Vac-TP1_norm | BI | No | Rotation speed < 90% of nominal setpoint (29 krpm)
C1:Vac-TP1_fail | BI | No | Critical fault occurrence
C1:Vac-TP1_current | AI | No | Current draw > 4 A
C1:Vac-TP2_norm | BI | Yes | Rotation speed < 80% of nominal setpoint (52.8 krpm)
C1:Vac-TP3_norm | BI | Yes | Rotation speed < 80% of nominal setpoint (40 krpm)

There are two new channels, both of which provide a binary indication of whether the pump speed is outside its nominal range. I did not have enough 24V relays to also add the C1:Vac-TP2(3)_fail channels listed in ELOG 15499. However, these signals are redundant with the existing interlocks, and the existing serial "Status" readback will already print failure messages to the MEDM screens. All of the TP2(3) serial readback channels remain, which monitor voltage, current, operational status, and temperature. The pump on/off and low-speed mode on/off controls remain implemented with serial signals as well.
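A quick command-line sanity check of the two new binary channels (caget just reads them back; it does not change anything):

caget C1:Vac-TP2_norm C1:Vac-TP3_norm    # should track the controllers' normal-speed outputs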

The new analog readbacks have been added to the MEDM controls screens, circled below:

Other incidental repairs

  • I replaced the (dead) LED monitor at the vac controls console. In the process of finding a replacement, I came across another dead spare monitor as well. Both have been labeled "DEAD" and moved to Jordan's desk for disposal.
  • I found the current TP3 Varian V70D controller to be just as glitchy in the analog outputs as well. That likely indicates there is a problem with the microprocessor itself, not just the serial communications card as I thought might be the case. I replaced the controller with the spare unit which was mounted right next to it in the rack [ELOG 13143]. The new unit has not glitched since the time I installed it around 10 pm last night.
Attachment 1: small_tp_signal_routing.png
Attachment 3: small_tp_signal_routing.png
Attachment 4: medm_screen.png
  15738   Fri Dec 18 22:59:12 2020 | Jon | Configuration | CDS | Updated CDS upgrade plan

Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:

  • Existing FEs stay where they are (they are not moved to a single rack)

  • Dolphin IPC remains PCIe Gen 1

  • RFM network is entirely replaced with Dolphin IPC

Please send me any omissions or corrections to the layout.

Attachment 1: CDS_2020_Dec.pdf
Attachment 2: CDS_2020_Dec.graffle
  15742   Mon Dec 21 09:28:50 2020 | Jamie | Configuration | CDS | Updated CDS upgrade plan
Quote:

Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:

  • Existing FEs stay where they are (they are not moved to a single rack)

  • Dolphin IPC remains PCIe Gen 1

  • RFM network is entirely replaced with Dolphin IPC

Please send me any omissions or corrections to the layout.

I just want to point out that if you move all the FEs to the same rack they can all be connected to the Dolphin switch via copper, and you would only have to string a single fiber to every IO rack, rather than the multiple fibers needed now (for network, Dolphin, timing, etc.).

  15746   Wed Dec 23 23:06:45 2020 | gautam | Configuration | CDS | Updated CDS upgrade plan
  1. The diagram should clearly show the host machines and the expansion chassis and the interconnects between them.
  2. We no longer have any Gentoo bootserver or diskless FEs.
  3. The "c1lsc" host is in 1X4 not 1Y3.
  4. The connection between c1lsc and Dolphin switch is copper not fiber. I don't know how many Gbps it is. But if the switch is 10 Gbps, are they really selling interface cables that have lower speed? The datasheet says 10 Gbps.
  5. For the control room workstations, Debian 10 (rossa) is the way forward, I believe. It is true that pianosa remains SL7 (and we should keep it so until all the other machines have been upgraded and tested on Debian 10).
  6. There is no "IOO/OAF". The host is called "c1ioo".
  7. The interconnect between Dolphin switch and c1ioo host is via fiber not copper.
  8. It'd be good to have an accurate diagram of the current situation as well (with the RFM network).
  9. I'm not sure if the 1Y1 rack can accommodate 2 FEs and 2 expansion chassis. Maybe if we clear everything else there out...
  10. There are 2 "2GB/s" Copper traces. I think the legend should make clear what's going on - i.e. which cables are ethernet (Cat 6? Cat 5? What's the speed limitation? The cable? Or the switch?), which are PCIe cables etc etc. 

I don't have omnigraffle - what about uploading the source doc in a format that the excellent (and free) draw.io can handle? I think we can do a much better job of making this diagram reflect reality. There should also be a corresponding diagram for the Acromag system (but that doesn't have to be tied to this task). Megatron (scripts machine) and nodus should be added to that diagram as well.

Please send me any omissions or corrections to the layout.

  15771   Tue Jan 19 14:05:25 2021 | Jon | Configuration | CDS | Updated CDS upgrade plan

I've produced updated diagrams of the CDS layout, taking the comments in 15746 into account. I've also converted the 40m's diagrams from Omnigraffle ($150/license) to the free, cloud-based platform draw.io. I had never heard of draw.io, but I found that it has most of the same functionality. It also integrates nicely with Google Drive.

Attachment 1: The planned CDS upgrade (2 new FEs, fully replace RFM network with Gen 1 Dolphin IPC)
Attachment 2: The current 40m CDS topology

The most up-to-date diagrams are hosted at the following links:

Please send me any further corrections or omissions. Anyone logged in with LIGO.ORG credentials can also directly edit the diagrams.

Attachment 1: 40m_CDS_Network_-_Planned.pdf
Attachment 2: 40m_CDS_Network_-_Current.pdf
  15772   Tue Jan 19 15:43:24 2021 | gautam | Configuration | CDS | Updated CDS upgrade plan

Not sure if 1Y1 can accommodate both c1sus2 and c1bhd as well as the various electronics chassis that will have to be installed. There may need to be some distribution between 1Y1 and 1Y3. Does Koji's new wiring also specify which racks hold which chassis?

Some minor improvements to the diagram:

  1. The GPS receiver in 1X7 should be added. All the timing in the lab is synced to the 1pps from this.
  2. We should add hyperlinks to the various parts datasheets (e.g. Dolphin switch, RFM switch, etc etc) so that the diagram will be truly informative and self-contained.
  3. Megatron and nodus, but especially chiara (NFS server), should be added to the diagram. 
  15921   Mon Mar 15 20:40:01 2021 | rana | Configuration | Computers | installed QTgrace on donatella for dataviewer

I installed QTgrace using yum on donatella. Both Grace and XMgrace are broken due to some boring fight between the Fedora package maintainers and the (non-existent) Grace support team. So I have symlinked it:

controls@donatella|bin> sudo mv xmgrace xmgrace_bak
controls@donatella|bin> sudo ln -s qtgrace xmgrace
controls@donatella|bin> pwd
/usr/bin

I checked that dataviewer works now for realtime and playback. Although the middle-click paste on the mouse doesn't work yet.

Attachment 1: cutiegrace.png
  15928   Wed Mar 17 09:05:01 2021 | Paco, Anchal | Configuration | Computers | 40m Control Room Changes
  • Switched positions of allegra and donatella.
  • While doing so, the HDMI cable previously used by donatella snapped. We replaced this cable with another unused cable we found connected only at one end to rossa. We should get more HDMI cables if that cable was in use for some other purpose.
  • Paco bought a bluetooth speaker/mic that is placed in front of allegra, and its USB adapter is connected to the iMac's keyboard at the bottom. With the new camera installed, the 40m video call environment is now complete.
  • Again, we have placed allegra's monitor as a placeholder, but it is not working; we will need a new monitor for it in the future whenever it is going to be used.
  16027   Wed Apr 14 13:16:20 2021 | Anchal | Configuration | Computers | 40m Control Room Changes
  • I have confirmed that the two old monitors' backlighting is not working. One can see the impression of the display without any brightness on them. Both old monitors are on the shelf behind.
  • Today we got a monitor and mouse from Mike. I had to change GRUB_GFXMODE in /etc/default/grub to 1920x1200@30 on allegra for it to work with the (any) monitor; see the sketch after this list.
  • Allegra is Debian 10 with latest cds-workstation installed on it. It is a good test station to migrate our existing scripts to start using updated cds-workstation configuration.
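The GRUB change was presumably along these lines (a sketch, not a verbatim record of what was typed; assuming Debian's standard grub2 tooling):

# /etc/default/grub
GRUB_GFXMODE=1920x1200@30
# regenerate the grub config and reboot for it to take effect:
sudo update-grub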
Quote:
  • Again, we have placed allegra's monitor as a placeholder, but it is not working; we will need a new monitor for it in the future whenever it is going to be used.

 

  16163   Wed May 26 11:45:57 2021 | Anchal, Paco | Configuration | IMC | MC2 analog camera

[Anchal, Paco]

We went near the MC2 area and opened the lid to inspect the GigE and analog video monitors for MC2. It looked like whatever image comes through the viewport is split between the GigE (for beam tracking) and the analog monitor. We hooked up the monitor found on the floor nearby and tweaked the analog video camera around to get a feel for how the "ghost" image of the transmission moves around. It looks like in order to try and remove these "extra spots" we would need to tweak the beam-tracking BS. We will consult the beam tracking authorities and return to this.

  16302   Thu Aug 26 10:30:14 2021 | Jamie | Configuration | CDS | front end time synchronization fixed?

I've been looking at why the front end NTP time synchronization did not seem to be working.  I think it might not have been working because the NTP server the front ends were pointing to, fb1, was not actually responding to synchronization requests.

I cleaned up some things on fb1 and the front ends, which I think unstuck things.

On fb1:

  • stopped/disabled the default client (systemd-timesyncd), and properly installed the full NTP server (ntp)
  • the ntp server package for Debian jessie is old-style SysVinit, not systemd.  In order to make it more integrated I copied the auto-generated service file to /etc/systemd/system/ntp.service, and added an "[Install]" section that specifies that it should be available during the default "multi-user.target".
  • "enabled" the new service to auto-start at boot ("sudo systemctl enable ntp.service") 
  • made sure ntp was configured to serve the front end network ('broadcast 192.168.123.255') and then restarted the server ("sudo systemctl restart ntp.service")

For the front ends:

  • on fb1 I chroot'd into the front-end diskless root (/diskless/root) and manually specified that systemd-timesyncd should start on boot by creating a symlink to the timesyncd service in the multi-user.target directory:
$ sudo chroot /diskless/root
$ cd /etc/systemd/system/multi-user.target.wants
$ ln -s /lib/systemd/system/systemd-timesyncd.service
  • on the front end itself (c1iscex as a test) I did a "systemctl daemon-reload" to force it to reload the systemd config, and then restarted the client ("systemctl restart systemd-timesyncd")
  • checked the NTP synchronization with timedatectl:
controls@c1iscex:~ 0$ timedatectl 
      Local time: Thu 2021-08-26 11:35:10 PDT
  Universal time: Thu 2021-08-26 18:35:10 UTC
        RTC time: Thu 2021-08-26 18:35:10
       Time zone: America/Los_Angeles (PDT, -0700)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2021-03-14 01:59:59 PST
                  Sun 2021-03-14 03:00:00 PDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2021-11-07 01:59:59 PDT
                  Sun 2021-11-07 01:00:00 PST
controls@c1iscex:~ 0$ 

Note that it is now reporting "NTP enabled: yes" (the service is enabled to start at boot) and "NTP synchronized: yes" (synchronization is happening), neither of which it was reporting previously.  I also note that the systemd-timesyncd client service is now loaded and enabled, is no longer reporting that it is in an "Idle" state and is in fact reporting that it synchronized to the proper server, and it is logging updates:

controls@c1iscex:~ 0$ sudo systemctl status systemd-timesyncd
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
   Active: active (running) since Thu 2021-08-26 10:20:11 PDT; 1h 22min ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 2918 (systemd-timesyn)
   Status: "Using Time Server 192.168.113.201:123 (ntpserver)."
   CGroup: /system.slice/systemd-timesyncd.service
           └─2918 /lib/systemd/systemd-timesyncd

Aug 26 10:20:11 c1iscex systemd[1]: Started Network Time Synchronization.
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 64s/+0.000s/0.000s/0.000s/+26ppm
Aug 26 10:21:15 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 128s/-0.000s/0.000s/0.000s/+25ppm
Aug 26 10:23:23 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 256s/+0.001s/0.000s/0.000s/+26ppm
Aug 26 10:27:40 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 512s/+0.003s/0.000s/0.001s/+29ppm
Aug 26 10:36:12 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 1024s/+0.008s/0.000s/0.003s/+33ppm
Aug 26 10:53:16 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/-0.026s/0.000s/0.010s/+27ppm
Aug 26 11:27:24 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/+0.009s/0.000s/0.011s/+29ppm
controls@c1iscex:~ 0$ 

So I think this means everything is working.

I then went ahead and reloaded and restarted the timesyncd services on the rest of the front ends.

We still need to confirm that everything comes up properly the next time we have an opportunity to reboot fb1 and the front ends (or the opportunity is forced upon us).

There was speculation that the NTP clients on the front ends (systemd-timesyncd) would not work on a read-only filesystem, but this doesn't seem to be true.  You can't trust everything you read on the internet.

  16614   Mon Jan 24 12:33:41 2022 | rana | Configuration | Wiki | AIC Wiki: txz files allowed

I updated the mime.local.conf file for the AIC Wiki so as to allow attachments with the .txz format. This should be persistent over upgrades, since it's a local file.

  16874   Wed May 25 16:56:44 2022 | Paco | Configuration | BHD | IFO recovery - IMC alignment

[Yuta, Paco]

We aligned the IMC to recover the IFO progressively. The first step was to center the MC REFL beam spot on the camera as well as on the WFS DC. Then we slid MC2 and MC3 together. Below are the alignment slider positions before/after.

      | MC1 (before --> after) | MC2 (before --> after) | MC3 (before --> after)
PIT   | -0.3398 --> -0.4768    | 4.1217 --> 4.0737      | -1.9808 --> -1.9308
YAW   | -0.8947 --> -0.7557    | -1.2350 --> -1.3350    | 1.5598 --> 1.5638
  16875   Wed May 25 17:34:47 2022 | yuta | Configuration | BHD | IFO recovery - IFO alignment

IFO aligned to maximize flashings, except for GRY and LO-AS.

What we did:
 0. After recovering the IMC, C1:IOO-MC_TRANS_SUM was ~1300 with C1:IOO-MC_RFPD_DCMON of ~0.11 (~10% better than what we had during the vent). Xarm and Yarm were already flashing and we could see the beam on the AS and POP cameras.
 1. Aligned ETMX and ITMX to green X input beam to maximize C1:ALS-TRX_OUT, to ~0.19.
 2. Aligned TT2-PR3 to get C1:SUS-ETMX_TRX_OUT flashing at 0.09 at max
 3. Aligned ITMY to have nice POP blinking of MICH at POP camera
 4. Aligned ETMY-PR3 to have C1:SUS-ETMY_TRY_OUT flashing at 0.06 at max
 5. Misaligned ITMY (with +2 in C1:SUS-ITMY_PIT_COMM), and aligned PRM to have PRX (PRM-ITMX cavity) flashing at C1:LSC-ASDC_IN1 at ~20 (offset -70) at max
 6. Misaligned PRM, and aligned SRM to have SRX (SRM-ITMX cavity) flashing at C1:LSC-ASDC_IN1 at ~20 (offset -70) at max
 7. Restored all the alignment. ITMY didn't quite come back, so I need to tweak the alignment to maximize the TRY flashing.

Result:
Current alignment is as attached. The IR beam at the AS, REFL, MCR cameras and the green beam at the GTRX camera all seem slightly to the left on the monitors, but look as they did before the pump down. GTRY is still clipped, but green Y locks stably. Oplevs were not so useful for recovering the alignment. The ETMX/Y oplevs did not drift too much, probably because we don't have in-vac steering mirrors.

Next:
 - Tweak alignment of green Y input to follow Yarm
 - Do LO-AS alignment
 - REFL DC is not receiving beam. Re-alignment necessary
 - Oplev centering
 - BHD PDs need to be replaced to lower gain PDs and need to be connected to CDS

Attachment 1: Screenshot_2022-05-25_17-47-57.png
  16877   Thu May 26 19:55:43 2022 | yuta | Configuration | BHD | Oplevs centered, BHD DCPDs are now online

[Paco, Yuta]

We have aligned the IFO (except for LO-AS and GRY), and centered all the oplevs.
We have also restored Gautam's in-air BHD DCPD setup and placed it to ITMY table.
BHD DC PD signals are now online at C1:X04-MADC1_EPICS_CH4 and CH5.

Oplevs:
 Aligned the IFO following the steps in elog 40m/16875.
 When we were working on the BHD DCPDs, we lost the REFL beam on the camera and both arms' flashing. Alignment was restored mostly with TT2 pitch.
 We centered all the oplevs after the recovery (see attached).

BHD DCPDs:
 1. We removed a circuit box with the M2 ISS photodetector readout board from the AP table, and the in-air BHD photodiodes from the optics graveyard. (See LIGO-E2000436 and elog 40m/15493 for the wiring diagram.)
 2. Took out the two temporary Thorlabs PDA100A photodetectors used for aligning LO-AS during the vent from the ITMY table, and placed the BHD setup on the ITMY table (see attached).
 3. A DB9 cable (15 ft + 10 ft) was connected from the M2 ISS box to the anti-aliasing chassis for ADC1 of C1X04 at the 1Y2 rack (see attached).
 4. +/-18V power for the M2 ISS box was supplied from the 1Y1 rack.
 5. BHD DCPD signals are now available at C1:X04-MADC1_EPICS_CH4 and CH5 (see attached).

Next:
 - Tweak alignment of green Y input to follow Yarm
 - Do LO-AS alignment
 - Centering of PDs everywhere with IFO aligned
 - Update RTS model for BHD

Attachment 1: elog_1Y2.JPG
Attachment 2: elog_BHD.JPG
Attachment 3: elog_box.JPG
Attachment 4: Screenshot_2022-05-26_17-37-27_IFOaligned_OplevCentered.png
Attachment 5: Screenshot_2022-05-26_20-35-02.png
  16880   Fri May 27 17:45:53 2022 | yuta | Configuration | BHD | BHD camera installed, GRY aligned

[JC, Paco, Yuta]

After the IFO recovery (elog 40m/16881), we installed an analog camera for the BHD fringe, using the BNC cable from the old SRMF camera so that we can see it from the control room.
We also aligned AS-LO using LO1,LO2 and AS4.
We then aligned GRY injection to get maximum GTRY.

Maximum TEM00s right now are
 C1:SUS-ETMX_TRX_OUT_DQ ~0.1
 C1:SUS-ETMY_TRY_OUT_DQ ~0.05
 C1:ALS-TRX_OUT_DQ ~0.20
 C1:ALS-TRY_OUT_DQ ~0.18

  16886   Thu Jun 2 20:05:37 2022 | yuta | Configuration | PSL | IMC input power recovered to 1W, some alignment works

[Paco, Yuta]

We have increased the output power from the PSL table to 951 mW (it was 96.7 mW).
IMC was recovered including WFS, and both arms are flashing nicely in IR.
We tweaked the alignment of GRX and GRY injection to align them with IR, but it was hard.
Right now IR beams are not centered on TMs. We should center them first.

What we did:
Power increase and IMC recovery
 - Replaced the beam splitter which splits the beam between the IMC REFL RF PD path and the WFS path, from an R=98% one to an R=10% one. The reflection goes to the RF PD.
 - Put a R=98% beam splitter back into WFS path.
 - We also tried to put a window in front of IMC REFL camera to recover the arrangement in 40m wiki, but the beam reflected from the window was too weak for us to align. So, we decided not to place a window in front of the camera.
 - Attached photos are the IMC REFL path before and after the work.
 - Measured the PSL output power as Koji did in elog 40m/16672. It was measured to be 96.7+/- 0.5 mW.
 - Rotated the HWP using the Universal Motion Controller (it was not possible for us to do it from the MEDM screen). The position was changed from 73.99 deg to 36.99 deg. Output power was measured to be 951 +/- 1 mW
 - IMC locked without any other changes.
 - Changed C1:IOO-WFS_TRIGGER_THRESH_ON to 5000 (was 500). IMC WFS also worked.
 - After running the MC WFS relief script, the WFS DC offsets and RF offsets were adjusted following the steps in elog 40m/16835. Below are the results.

C1:IOO-WFS1_SEG1_DC.AOFF => -0.0008882080010759334
C1:IOO-WFS1_SEG2_DC.AOFF => -0.0006527877490346629
C1:IOO-WFS1_SEG3_DC.AOFF => -0.0005847311617496113
C1:IOO-WFS1_SEG4_DC.AOFF => -0.0010395992663688955
C1:IOO-WFS2_SEG1_DC.AOFF => -0.0025944841559976334
C1:IOO-WFS2_SEG2_DC.AOFF => -0.003191715502180159
C1:IOO-WFS2_SEG3_DC.AOFF => -0.0036688060499727726
C1:IOO-WFS2_SEG4_DC.AOFF => -0.004011172490815322


IOO-WFS1_I1         :  +1977.7 ->    +2250 (Significant change)
IOO-WFS1_I2         :  +3785.8 ->  +3973.2
IOO-WFS1_I3         :  +2014.2 ->  +2277.7 (Significant change)
IOO-WFS1_I4         :  -208.83 ->  +430.96 (Significant change)
IOO-WFS1_Q1         :  +2379.5 ->  +1517.4 (Significant change)
IOO-WFS1_Q2         :  +2260.4 ->  +2172.6
IOO-WFS1_Q3         :  +588.86 ->  +978.98 (Significant change)
IOO-WFS1_Q4         :  +1654.8 ->  +195.38 (Significant change)
IOO-WFS2_I1         :  -1619.9 ->  -534.25 (Significant change)
IOO-WFS2_I2         :  +1610.4 ->  +1619.8
IOO-WFS2_I3         :  +1919.6 ->  +2179.8 (Significant change)
IOO-WFS2_I4         :    +1557 ->  +1426.6
IOO-WFS2_Q1         :   -62.58 ->  +345.56 (Significant change)
IOO-WFS2_Q2         :  +777.01 ->  +805.41
IOO-WFS2_Q3         :  -6183.6 ->  -5365.8 (Significant change)
IOO-WFS2_Q4         :  +4457.2 ->  +4397.


IFO Alignment
 - Aligned both arms using IR. Both arms flash at the following levels, which is consistent with the power increase.
 C1:SUS-ETMX_TRX_OUT_DQ ~1.1
 C1:SUS-ETMY_TRY_OUT_DQ ~0.6
 - With this, we tried to tweak GRX and GRY injection. The following is after the work. We could increase GTRX to 0.204 when the Xarm is aligned to green. This suggests that GRX injection is not aligned nicely yet. But the beams are also not centered on TMs. We should center them first.
 C1:ALS-TRX_OUT_DQ ~0.13
 C1:ALS-TRY_OUT_DQ ~0.07
 - GTRX and GTRY cameras are adjusted to have nicer images. In GRX path, the second and last lens before the PD and CCD was pulled ~ 1 cm behind its original position and both beams realigned. Then, on GRY path, the beam was re-centered on the first and only lens, the whole assembly pushed forward by ~ 2 cm and the beams re-centered.

Next:
 - Center the IR beam on TMs (first by our eyeballs; better to use A2L after arm locking is recovered and coils are balanced)
 - Tweak GRX and GRY injection (restore GRY PZTs?)
 - Install ETMXT camera (if it is easy)
 - Lock Xarm and Yarm (C1:LSC-TRX/Y_OUT needs to be fixed for triggering. Can we use other PDs for triggering?)
 - MICH locking (REFL and AS PDs might need to be re-aligned; they are not receiving much light)
 - RTS model for BHD needs to be updated

Attachment 1: Before.JPG
Attachment 2: After.JPG
  16887   Fri Jun 3 12:13:58 2022 | Paco | Configuration | CDS | Fix RFM channels

[Paco, Yuta]

We tried fixing the issue of LSC_TRY and LSC_TRX channels not working. We first did some investigation, and just like previously reported by Chris, narrowed down the issue to the RFM channels coming from c1iscex/c1iscey.

First attempt : FAIL

In our first attempt, we

  1. Tripped the ETMX/ETMY watchdogs, ssh'd to c1iscex/c1iscey and restarted the rtcds models.
  2. Since the last step didn't fix things, we decided to do the same thing on c1lsc, c1sus, c1ioo.
  3. After hard rebooting c1ioo and c1lsc (because they died during the stopping of the rtcds models), and not experiencing any timing issues (nice), we still hadn't fixed the issue.

Second attempt: Success

A second attempt just followed Koji's previous fix explained here. The basic difference from our first attempt was a hard reboot of c1iscex/c1iscey in addition to restarting the rtcds models. The RFM channels were then clear of errors and we recovered our IR transmission channels in the LSC model.

Attachment 1: SoGreen.png
  16893   Mon Jun 6 16:09:23 2022 | rana | Configuration | DetChar | Summary Pages: seis BLRMS

I updated the config file c1pem.ini in /users/public_html/detcharsummary/ConfigFiles and committed it, so I hope it works, but I did not have git push permissions. Does anyone know what the idea is here? Should we each do our own personal git clone and modify that way, or should we do it with the control account?

Wiki needs to clear out all the outdated information on this workflow.

The changes are to make the y-scales useful. Currently, all of the past seis BLRMS plots are not so useful because the scales have not been set based on the actual signal levels. Let's see if this works, and we  can re-evaluate after a few weeks.

  16924   Thu Jun 16 18:23:15 2022 | Paco | Configuration | BHD | Recovering LO beam in BHD DCPDs

[Paco, Yuta]

We recovered the LO beam on the BHD port. To do this, we first tried reverting to a previously "good" alignment but couldn't see the LO beam hit the sensor. Then we checked the ITMY table and couldn't see the LO beam either, even though the AS beam was coming out fine. The misalignment is likely due to recent changes in the injection alignment on TT1, TT2, PR2, PR3, as well as ITMX and ITMY. We remembered that the LO path is quite constrained in the YAW direction, so we started a random search by steering LO1 YAW around by ~1000 counts in the negative direction, at which point we saw the beam come out of the ITMY chamber.


We proceeded to walk the LO1-LO2 pair in PIT, mostly to try and offload the huge alignment offset from LO2 to LO1, but this resulted in the LO beam disappearing or becoming dimmer (from some clipping somewhere). This is WIP and we shall continue this alignment offload task at least tomorrow, but if we can't offload significantly we will have to move forward with this alignment. Attachment #1 shows the end result of today's alignment.

Attachment 1: Screenshot_2022-06-16_18-29-14_BHDLObeamISBACK.png
  16932   Tue Jun 21 14:17:50 2022 | yuta | Configuration | BHD | BHD DCPDs re-routed to c1sus2

After discussing with Anchal, we decided to route BHD related PD signals directly to ADC of c1sus2, which handles our new suspensions including LO1, LO2, AS1, AS4, so that we can control them directly.
BHD related PD signals will be sent to c1lsc for DARM control.

Re-cabling was done, and now they are online at C1:X07-MADC1_EPICS_CH16 (DC PD A) and CH17 (DC PD B) with 15ft DB9 cable.
Here, DC PD A is the transmission of BHD BS for AS beam, and DC PD B is the reflection of BHD BS for AS beam (see attached photo).

Attachment 1: C1X07ADC1.JPG
Attachment 2: BHDDCPDs.JPG
  17018   Tue Jul 19 16:00:34 2022 | yuta | Configuration | BHD | Fast channels for BHD DCPDs now available in c1lsc but not in c1hpc

[Paco, Anchal-remote-support, Yuta]

We added fast channels to BHD DC PDs.
C1:LSC-DCPD_(A|B)_IN1 are now available, but C1:HPC-DCPD_(A|B)_IN1 still give us zero.

c1hpc situation -> not good
 - We can see the slow signal at C1:X07-MADC1_EPICS_CH16 (DC PD A) and CH17 (DC PD B)
 - C1:HPC-DCPD_(A|B)_IN1 is there, but zero.
 - We have modified c1hpc model to add DCPD_(A|B) filters in front of the input matrix (see Attachment #1).
 - After modifying the model, we ran
ssh c1sus2
rtcds make c1hpc
rtcds install c1hpc
ssh fb1
sudo systemctl restart daqd_*

 - After this, we got a 0x2000 error. So, we ran the following. This removed the 0x2000 error, but the DCPD signals are still zero. They are also not available in the C1HPC-MONITOR_ADC1.adl screen (see Attachment #3).
ssh c1sus2
rtcds restart c1hpc


c1lsc situation -> good
 - We could see the slow signal at C1:X04-MADC1_EPICS_CH4 (DC PD A) and CH5 (DC PD B), and also C1:LSC-DCPD_(A|B)_NORM after making C1:LSC-DCPD_(A|B)_POW_NORM=1. The ADC channel and DCPD channel are exactly the same.
 - After confirming the above, we modified the c1lsc model to add DCPD_(A|B) filters in front of the input matrix (see Attachment #2).
 - After modifying the model, we ran
ssh c1lsc
rtcds make c1lsc
rtcds install c1lsc
ssh fb1
sudo systemctl restart daqd_*

 - After this, we also got 0x2000 error. We also noticed that, for example, C1:X04-MADC0_EPICS_CH31 and C1:LSC-ASDC_INMON are different, which used to be the same (ASDC_INMON was largely attenuated).
 - In the end, we ran the following to remove the 0x2000 error, but it crashed c1lsc, as well as c1sus and c1ioo.
ssh c1lsc
rtcds restart c1lsc

 - So, we did rebootC1LSC.sh. This made c1lsc, c1ioo and c1sus as green as before, except for the RFM issue in TRX/TRY, like we saw in June. We followed the steps in 40m/16887 to hard reboot c1iscex/c1iscey and ran rebootC1LSC.sh again. This made the C1CDS_FE_STATUS.adl screen as green as before (see Attachment #3).

 - Fast channels C1:LSC-DCPD_(A|B)_IN1 are now available. They are also available in C1LSC-MONITOR_ADC1.adl screen (see Attachment #3).

Attachment 1: Screenshot_2022-07-19_14-26-39_c1hpc.png
Attachment 2: Screenshot_2022-07-19_14-24-49_c1lsc.png
Attachment 3: Screenshot_2022-07-19_15-51-25_GreenGreen.png
  17025   Thu Jul 21 21:50:47 2022 | Tega | Configuration | BHD | c1sus2 IPC update

IPC issue still unresolved.

Updated the shared memory tag so that 'SUS' -> 'SU2' in c1hpc, c1bac and c1su2. Removed obsolete 'HPC/BAC-SUS' references from the IPC file, C1.ipc. Restarted the FE models, but the c1sus2 machine froze, so I did a manual reboot. This brought down the vertex machines---which I restarted using /opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh---and the end machines, which I restarted manually. Everything but the BHD optics now has its previous values, so we need to burt-restore those.
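A minimal sketch of that burt-restore step, from memory (the snapshot path is a placeholder, and the -f usage should be checked against burtwb's help before running):

# restore the BHD suspension settings from the last good autoburt snapshot
burtwb -f <snapshot>.snap    # <snapshot>.snap is hypothetical -- use the actual autoburt file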
 

# IPC file:
/opt/rtcds/caltech/c1/chans/ipc/C1.ipc

# Model file locations:
/opt/rtcds/userapps/release/isc/c1/models/isc/c1hpc.mdl
/opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl
/opt/rtcds/userapps/release/isc/c1/models/isc/c1bac.mdl

# Log files:
/cvs/cds/rtcds/caltech/c1/rtbuild/3.4/c1hpc.log
/cvs/cds/rtcds/caltech/c1/rtbuild/3.4/c1su2.log
/cvs/cds/rtcds/caltech/c1/rtbuild/3.4/c1bac.log


SUS overview medm screen :

  • Reduced the entire screen width
  • Revert to old screen style watchdog layout
  17026   Fri Jul 22 15:05:26 2022 | Tega | Configuration | BHD | c1sus2 shared memory and ADC fix

[Tega, Yuta]

We were able to fix the shared memory issue by updating the receiver model name from 'SUS' to 'SU2', and the ADC-zero issue by including both ADC0 and ADC1 in the c1hpc and c1bac models as well as removing the grounding of the unused ADC channels (including chn#16 and chn#17, which are actually used in c1hpc) in c1su2. We also used shared memory to move the DCPD_A/B error signals (after signal conditioning and mixing A/B; now named A_ERR and B_ERR) from c1hpc to c1bac.
C1:HPC-DCPD_A_IN1 and C1:HPC-DCPD_B_IN1 are now available (they are essentially the same as C1:LSC-DCPD_A_IN1 and C1:LSC-DCPD_B_IN1, except that they are digitized with a different ADC; see elog 40m/16954 and Attachment #1).
The Dolphin IPC error in sending signals from c1hpc to c1lsc still remains.

Attachment 1: Screenshot_2022-07-22_15-04-33_DCPD.png
Attachment 2: Screenshot_2022-07-22_15-12-19_models.png
Attachment 3: Screenshot_2022-07-22_15-15-11_ERR.png
Attachment 4: Screenshot_2022-07-22_15-32-19_GDS.png
  17028   Fri Jul 22 17:46:10 2022 | yuta | Configuration | BHD | c1sus2 watchdog update and DCPD ERR channels

[Tega, Yuta]

We have added C1:HPC-DCPD_A_ERR and C1:HPC-DCPD_B_ERR testpoints, which can be used as A+B, A-B etc.
Restarting c1hpc crashed c1sus2, and also made c1lsc/ioo/sus models red.
We ran /opt/rtcds/caltech/c1/Git/40m/scripts/cds/restartAllModels.sh to restart all the machines. It worked perfectly without manually pressing any power buttons! Wow!

We have also edited /opt/rtcds/caltech/c1/medm/c1su2/C1SU2_WATCHDOGS.adl so that it will use new /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/medm/resetFromWatchdogTrip.sh instead of old /opt/rtcds/caltech/c1/scripts/SUS/damprestore.py.

Attachment 1: Screenshot_2022-07-22_17-48-25.png