ID   Date   Author   Type   Category   Subject
  13130   Fri Jul 21 18:03:17 2017   Jamie   Update   CDS   Update on front-end/DAQ rebuild

Update:

  • front ends booting with the new Debian jessie diskless root image and a linux 3.2 version of the RTS-patched kernel
  • dolphin is configured correctly and running on c1lsc and c1sus
  • models building and running with RCG 3.0.3

Up next:

  • add c1ioo to the dolphin network
  • recompile/restart all front end models
  • daqd

I'll try to get the first two of those done tomorrow, although it's unclear what model updates we'll have to do to get things working with the newer RCG.

 

  13132   Sun Jul 23 15:00:28 2017   Jamie   Omnistructure   VAC   strange sound around X arm vacuum pumps

While walking down to the X end to reset c1iscex I heard what I would call a "rhythmic squnching" sound coming from under the turbo pump.  I would have said the sound was coming from a roughing pump, but none of them are on (as far as I can tell).

Steve maybe look into this??

  13136   Mon Jul 24 10:59:08 2017   Jamie   Update   CDS   c1iscex models died
Quote:

This morning, all the c1iscex models were dead. Attachment #1 shows the state of the cds overview screen when I came in. The machine itself was ssh-able, so I just restarted all the models and they came back online without fuss.

This was me.  I had rebooted that machine and hadn't restarted the models.  Sorry for the confusion.

  13138   Mon Jul 24 19:28:55 2017   Jamie   Update   CDS   front end MX stream network working, glitches in c1ioo fixed

MX/OpenMX network running

Today I got the mx/open-mx networking working for the front ends.  This required some tweaking to the network interface configuration for the diskless front ends, and recompiling mx and open-mx for the newer kernel.  Again, this will all be documented.

controls@fb1:~ 0$ /opt/mx/bin/mx_info
MX Version: 1.2.16
MX Build: root@fb1:/opt/src/mx-1.2.16 Mon Jul 24 11:33:57 PDT 2017
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
    8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
===================================================================
Instance #0:  364.4 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
    Status:        Running, P0: Link Up
    Network:    Ethernet 10G

    MAC Address:    00:60:dd:43:74:62
    Product code:    10G-PCIE-8B-S
    Part number:    09-04228
    Serial number:    485052
    Mapper:        00:60:dd:43:74:62, version = 0x00000000, configured
    Mapped hosts:    6

                                                        ROUTE COUNT
INDEX    MAC ADDRESS     HOST NAME                        P0
-----    -----------     ---------                        ---
   0) 00:60:dd:43:74:62 fb1:0                             1,0
   1) 00:30:48:be:11:5d c1iscex:0                         1,0
   2) 00:30:48:bf:69:4f c1lsc:0                           1,0
   3) 00:25:90:0d:75:bb c1sus:0                           1,0
   4) 00:30:48:d6:11:17 c1iscey:0                         1,0
   5) 00:14:4f:40:64:25 c1ioo:0                           1,0
controls@fb1:~ 0$
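
The open-mx side can be sanity-checked the same way. As a sketch, assuming open-mx was installed under its default /opt/open-mx prefix (the actual install path on fb1 may differ):

controls@fb1:~ 0$ /opt/open-mx/bin/omx_info    # lists the open-mx boards and mapped peers, analogous to mx_info above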

c1ioo timing glitches fixed

I also checked the BIOS on c1ioo and found that the serial port was enabled, which is known to cause timing glitches.  I turned off the serial port (and some power management stuff), and rebooted, and all the c1ioo timing glitches seem to have gone away.

It's unclear why this is a problem that's just showing up now.  Serial ports have always been a problem, so it seems unlikely this is just a problem with the newer kernel.  Could the BIOS have somehow been reset during the power glitch?

In any event, all the front ends are now booting cleanly, with all dolphin and mx networking coming up automatically, and all models running stably:

Now for daqd...

  13145   Wed Jul 26 19:13:07 2017   Jamie   Update   CDS   daqd showing same instability as before

I recompiled daqd on the updated fb1, similar to how I had done before, and we're seeing the same instability: the process crashes when it tries to write out the second trend (technically it looks like it crashes while it's trying to write out the full frame while the second trend is also being written out).  Jonathan Hanks and I are actively looking into it and I'll provide a further report soon.

  13149   Fri Jul 28 20:22:41 2017   Jamie   Update   CDS   possible stable daqd configuration with separate DC and FW

This week Jonathan Hanks and I have been trying to diagnose why the daqd has been unstable in the configuration used by the 40m, with data concentrator (dc) and frame writer (fw) in the same process (referred to generically as 'fb').  Jonathan has been digging into the core dumps and source to try to figure out what's going on, but he hasn't come up with anything concrete yet.

As an alternative, we've started experimenting with a daqd configuration with the dc and fw components running in separate processes, with communication over the local loopback interface.  The separate dc/fw process model more closely matches the configuration at the sites, although the sites put the dc and fw processes on different physical machines.  Our experimentation thus far seems to indicate that this configuration is stable, although we haven't yet tested it with the full configuration, which is what I'm attempting to do now.

Unfortunately I'm having trouble with the mx_stream communication between the front ends and the dc process.  The dc does not appear to be receiving the streams from the front ends and is producing a '0xbad' status message for each.  I'm investigating.
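
As a quick way to confirm whether the dc and fw processes have actually established their loopback connection described above, something like the following works (standard Linux tooling; just a sketch of the kind of check, not the actual debugging):

controls@fb1:~ 0$ sudo ss -tnp | grep 127.0.0.1    # any established daqd connections on localhost indicate the dc/fw link is up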

  13153   Mon Jul 31 18:44:40 2017   Jamie   Update   CDS   CDS system essentially fully recovered

The CDS system is essentially fully recovered at this point.  The mx_streams are all flowing from all front ends, and from all models, and the daqd processes are receiving them and writing the data to frames.

Remaining unresolved issues:

  • IFO needs to be fully locked to make sure ALL components of all models are working.
  • The remaining red status lights are from the "FB NET" diagnostics, which are reflecting a missing status bit from the front end processes due to the fact that they were compiled with an earlier RCG version (3.0.3) than the mx_streams were (3.3+/trunk).  There will be a new release of the RTS soon, at which point we'll compile everything from the same version, which should get us all green again.
  • The entire system has been fully modernized, to the target CDS reference OS (Debian jessie) and more recent RCG versions.  The management of the various RTS components, both on the front ends and on fb, have as much as possible been updated to use the modern management tools (e.g. systemd, udev, etc.).  These changes need to be documented.  In particular...
  • The fb daqd process has been split into three separate components, a configuration that mirrors what is done at the sites and appears to be more stable:
    • daqd_dc: data concentrator (receives data from front ends)
    • daqd_fw: receives frames from dc and writes out full frames and second/minute trends
    • daqd_rcv: NDS1 server (raises test points and receives archive data from frames from 'nds' process)
    The "target" directory for all of these new components is:
    • /opt/rtcds/caltech/c1/target/daqd
    All of these processes are now managed under systemd supervision on fb, meaning the daqd restart procedure has changed.  This needs to be simplified and clarified.
  • Second trend frames are being written, but for some reason they're not accessible over NDS.
  • Have not had a chance to verify minute trend and raw minute trend writing yet.  Needs to be confirmed.
  • Get wiper script working on new fb.
  • Front end RTS kernel will occasionally crash when the RTS modules are unloaded.  Keith Thorne apparently has a kernel version with a different set of patches from Gerrit Kuhn that does not have this problem.  Keith's kernel needs to be packaged and installed in the front end diskless root.
  • The models accessing the dolphin shared memory will ALL crash when one of the front end hosts on the dolphin network goes away.  This results in a boot fest of all the dolphin-enabled hosts.  Need to figure out what's going on there.
  • The RCG settings snapshotting has changed significantly in later RCG versions.  We need to make sure that all burt backup type stuff is still working correctly.
  • Restoration of /frames from old fb SCSI RAID?
  • Backup of entirety of fb1, including fb1 root (/) and front end diskless root (/diskless)
  • Full documentation of rebuild procedure from Jamie's notes.
  13164   Thu Aug 3 19:46:27 2017   Jamie   Update   CDS   new daqd restart procedure

This is the daqd restart procedure:

$ ssh fb1 sudo systemctl restart daqd_*

That will restart all of the daqd services (daqd_dc, daqd_fw, daqd_rcv).

The front end mx_stream processes should all auto-restart after the daqd_dc comes back up.  If they don't (models show "0x2bad" on DC0_*_STATUS) then you can execute the following to restart the mx_stream process on the front end:

$ ssh c1<host> sudo systemctl restart mx_stream
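
If things still look unhealthy after that, the corresponding status checks are just the usual systemd ones, using the same unit names as above:

$ ssh fb1 systemctl status daqd_dc daqd_fw daqd_rcv
$ ssh c1<host> systemctl status mx_stream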

 

 

  13165   Thu Aug 3 20:15:11 2017   Jamie   Update   CDS   dataviewer can not raise test points

For some reason dataviewer is not able to raise test points with the new daqd setup, even though dtt can.  If you raise a test point with dtt then dataviewer can show the data fine.

It's unclear to me why this would be the case.  It might be that all the versions of dataviewer on the workstations are too old??  I'll look into it tomorrow to see if I can figure out what's going on.
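
One hedged thing to check while doing so: which NDS server the workstation clients are actually pointed at. Assuming the clients honor the usual NDSSERVER environment variable (an assumption on my part, not something I've confirmed for every tool):

$ echo $NDSSERVER    # should point at fb1 (host:port) on the workstations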

  13198   Fri Aug 11 19:34:49 2017   Jamie   Update   CDS   CDS final bits status update

So it appears we now have full frames and second, minute, and minute_raw trends.

We are still not able to raise test points with daqd_rcv (e.g. the NDS1 server), which is why dataviewer and nds2-client can't get test points on their own.

We were not able to add the EDCU (EPICS client) channels without daqd_fw crashing.

We have a new kernel image that's supposed to solve the module unload instability issue.  In order to try it we'll need to restart the entire system, though, so I'll do that on Monday morning.

I've got the CDS guys investigating the test point and EDCU issues, but we won't get any action on that until next week.

Quote:

Remaining unresolved issues:

  • IFO needs to be fully locked to make sure ALL components of all models are working.
  • The remaining red status lights are from the "FB NET" diagnostics, which are reflecting a missing status bit from the front end processes due to the fact that they were compiled with an earlier RCG version (3.0.3) than the mx_streams were (3.3+/trunk).  There will be a new release of the RTS soon, at which point we'll compile everything from the same version, which should get us all green again.
  • The entire system has been fully modernized, to the target CDS reference OS (Debian jessie) and more recent RCG versions.  The management of the various RTS components, both on the front ends and on fb, have as much as possible been updated to use the modern management tools (e.g. systemd, udev, etc.).  These changes need to be documented.  In particular...
  • The fb daqd process has been split into three separate components, a configuration that mirrors what is done at the sites and appears to be more stable:
    • daqd_dc: data concentrator (receives data from front ends)
    • daqd_fw: receives frames from dc and writes out full frames and second/minute trends
    • daqd_rcv: NDS1 server (raises test points and receives archive data from frames from 'nds' process)
    The "target" directory for all of these new components is:
    • /opt/rtcds/caltech/c1/target/daqd
    All of these processes are now managed under systemd supervision on fb, meaning the daqd restart procedure has changed.  This needs to be simplified and clarified.
  • Second trend frames are being written, but for some reason they're not accessible over NDS.
  • Have not had a chance to verify minute trend and raw minute trend writing yet.  Needs to be confirmed.
  • Get wiper script working on new fb.
  • Front end RTS kernel will occasionally crash when the RTS modules are unloaded.  Keith Thorne apparently has a kernel version with a different set of patches from Gerrit Kuhn that does not have this problem.  Keith's kernel needs to be packaged and installed in the front end diskless root.
  • The models accessing the dolphin shared memory will ALL crash when one of the front end hosts on the dolphin network goes away.  This results in a boot fest of all the dolphin-enabled hosts.  Need to figure out what's going on there.
  • The RCG settings snapshotting has changed significantly in later RCG versions.  We need to make sure that all burt backup type stuff is still working correctly.
  • Restoration of /frames from old fb SCSI RAID?
  • Backup of entirety of fb1, including fb1 root (/) and front end diskless root (/diskless)
  • Full documentation of rebuild procedure from Jamie's notes.
  13205   Mon Aug 14 19:41:46 2017   Jamie   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors

I'm upgrading the linux kernel for all the front ends to one that is supposedly more stable and won't freeze when we unload RTS models (linux-image-3.2.88-csp).  Since it's a different kernel version it requires rebuilds of all kernel-related support stuff (mbuf, symmetricom, mx, open-mx, dolphin) and all the front end models.  All the support stuff has been upgraded, but we're now waiting on the front end rebuilds, which takes a while.

Initial testing indicates that the kernel is more stable; we're mostly able to unload/reload RTS modules without the kernel freezing.  However, the c1iscey host seems to be oddly problematic and has frozen twice so far on module unloads.  None of the other hosts have frozen on unload (yet), though, so it's still not clear.

We're now seeing some timing errors between the front ends and daqd, resulting in a "0x4000" status message in the 'C1:DAQ-DC0_*_STATUS' channels.  Part of the problem was an issue with the IRIG-B/GPS receiver timing unit, which I'll log in a separate post.  Another part of the problem was a bug in the symmetricom driver, which has been resolved.  That wasn't the whole problem, though, since we're still seeing timing errors.  Working with Jonathan to resolve.

  13215   Wed Aug 16 17:05:53 2017   Jamie   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors

The CDS system has now been moved to a supposedly more stable real-time-patched linux kernel (3.2.88-csp) and RCG r4447 (roughly the head of trunk, intended to be release 3.4).  With one major and one minor exception, everything seems to be working.

The remaining issues are:

  • RFM network down.  The IOP models on all hosts on the RFM network are not detecting their RFM cards.  Keith Thorne thinks that this is because of changes in trunk to support the new long-range PCIe that will be used at the sites, and that we just need to add a new parameter to the cdsParameters block in models that use RFM.  He and Rolf are looking into it for us.
  • The 3.2.88-csp kernel is still not totally stable.  On most hosts (c1sus, c1ioo, c1iscex) it seems totally fine and we're able to load/unload models without issue.  c1iscey is definitely problematic, frequently freezing on module unload.  There must be a hardware/bios issue involved here.  c1lsc has also shown some problems.  A better kernel is supposedly in the works.
  • NDS clients other than DTT are still unable to raise test points.  This appears to be an issue with the daqd_rcv component (i.e. NDS server) not properly resolving the front ends in the GDS network.  Still looking into this with Keith, Rolf, and Jonathan.

Issues that have been fixed:

  • "EDCU" channels, i.e. non-front-end EPICS channels, are now being acquired properly by the DAQ.  The front-ends now send all slow channels to the daq over the MX network stream.  This means that front end channels should no longer be specified in the EDCU ini file.  There were a couple in there that I removed, and that seemed to fix that issue.
  • Data should now be recorded in all formats: full frames, as well as second, minute, and raw_minute trends
  • All FE and DAQD diagnostics are green (other than the ones indicating the problems with the RFM network).  This was fixed by getting the front-end models, mx_stream processes, and daqd processes all compiled against the same version of the advLigoRTS, and adding the appropriate command line parameters to the mx_stream processes.
  13217   Wed Aug 16 18:01:28 2017   Jamie   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors
Quote:

What's the current backup situation?

Good question.  We need to figure something out.  fb1 root is on a RAID1, so there is one layer of safety.  But we absolutely need a full backup of the fb1 root filesystem.  I don't have any great suggestions, other than just getting an external disk, 1T or so, and just copying all of root (minus NFS mounts).
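
A minimal sketch of what that copy could look like, assuming a hypothetical external disk mounted at /mnt/fb1-backup (the -x flag keeps rsync on the root filesystem, so the NFS mounts are skipped automatically):

controls@fb1:~ 0$ sudo rsync -aHx / /mnt/fb1-backup/    # archive mode, preserve hard links, stay on one filesystem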

  13219   Wed Aug 16 18:50:58 2017   Jamie   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors
Quote:

The remaining issues are:

  • RFM network down.  The IOP models on all hosts on the RFM network are not detecting their RFM cards.  Keith Thorne thinks that this is because of changes in trunk to support the new long-range PCIe that will be used at the sites, and that we just need to add a new parameter to the cdsParameters block in models that use RFM.  He and Rolf are looking into it for us.

RFM network is back!  Everything green again.

Use of RFM has been turned off in advLigoRTS trunk in favor of the new long-range PCIe networking being developed for the sites.  Rolf provided a single-line patch that re-enables it:

controls@c1sus:/opt/rtcds/rtscore/trunk 0$ svn diff
Index: src/epics/util/feCodeGen.pl
===================================================================
--- src/epics/util/feCodeGen.pl    (revision 4447)
+++ src/epics/util/feCodeGen.pl    (working copy)
@@ -122,7 +122,7 @@
 $diagTest = -1;
 $flipSignals = 0;
 $virtualiop = 0;
-$rfm_via_pcie = 1;
+$rfm_via_pcie = 0;
 $edcu = 0;
 $casdf = 0;
 $globalsdf = 0;
controls@c1sus:/opt/rtcds/rtscore/trunk 0$

This patch was applied to the RTS source checkout we're using for the FE builds (/opt/rtcds/rtscore/trunk, which is at r4447 and is linked to /opt/rtcds/rtscore/release).  The following models that use RFM were re-compiled, re-installed, and re-started (a sketch of the per-model cycle follows the list):

  • c1x02
  • c1rfm
  • c1x03
  • c1als
  • c1x01
  • c1scx
  • c1asx
  • c1x05
  • c1scy
  • c1tst
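
For reference, the per-model cycle looked roughly like this; a sketch assuming the standard rtcds wrapper from the RTS was used (the exact commands run here may have differed):

controls@c1ioo:~ 0$ rtcds build c1x03      # regenerate and compile the model against the patched source
controls@c1ioo:~ 0$ rtcds install c1x03    # install into the target area
controls@c1ioo:~ 0$ rtcds restart c1x03    # stop and restart the running model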

The re-compiled models now see the RFM cards (dmesg log from c1ioo):

[24052.203469] c1x03: Total of 4 I/O modules found and mapped
[24052.203471] c1x03: ***************************************************************************
[24052.203473] c1x03: 1 RFM cards found
[24052.203474] c1x03:     RFM 0 is a VMIC_5565 module with Node ID 180
[24052.203476] c1x03: address is 0xffffc90021000000
[24052.203478] c1x03: ***************************************************************************

This cleared up all RFM transmission error messages.

CDS upstream are working to make this RFM usage switchable in a reasonable way.

  13258   Mon Aug 28 08:47:32 2017   Jamie   Summary   LSC   First cavity length reconstruction with a neural network
Quote:

Phenomenal!

truly.

  15742   Mon Dec 21 09:28:50 2020   Jamie   Configuration   CDS   Updated CDS upgrade plan
Quote:

Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:

  • Existing FEs stay where they are (they are not moved to a single rack)

  • Dolphin IPC remains PCIe Gen 1

  • RFM network is entirely replaced with Dolphin IPC

Please send me any omissions or corrections to the layout.

I just want to point out that if you move all the FEs to the same rack they can all be connected to the Dolphin switch via copper, and you would only have to string a single fiber to every IO rack, rather than the multiple fibers needed now (for network, dolphin, timing, etc.).

  16299   Wed Aug 25 18:20:21 2021   Jamie   Update   CDS   GPS time on fb1 fixed, daqd writing correct frames again

I have no idea what happened to the GPS timing on fb1, but it seems like the issue was coincident with the power glitch on Monday.

As was noted by Koji above, the GPS time kernel interface was off by a year, which was causing the frame builder to write out files with the wrong names.  fb1 was using DAQD components from the advligorts 3.3 release, which used the old "symmetricom" kernel module for the GPS time.  This old module was also known to have issues with time offsets.  This issue is reminiscent of previous timing issues with the DAQ on fb1.

I noted that a newer version of the advligorts, version 3.4, was available for Debian jessie, the system running on fb1.  advligorts 3.4 includes a newer version of the GPS time module, renamed gpstime.  I checked with Jonathan Hanks that the interfaces did not change between 3.3 and 3.4, and that 3.4 was mostly a bug fix and packaging release, so I decided to upgrade the DAQ to get the new components.  I therefore did the following:

  • updated the archive info in /etc/apt/sources.list.d/cdssoft.list, and added the "jessie-restricted" archive which includes the mx packages: https://git.ligo.org/cds-packaging/docs/-/wikis/home

  • removed the symmetricom module from the kernel

    sudo rmmod symmetricom

  • upgraded the advligorts-daqd components (NOTE I did not upgrade the rest of the system, although there are outstanding security upgrades needed):

    sudo apt install advligorts-daqd advligorts-daqd-dc-mx

  • loaded the new gpstime module and checked that the GPS time was correct:

    sudo modprobe gpstime

  • restarted all the daqd processes

    sudo systemctl restart daqd_*

Everything came up fine at that point, and I checked that the correct frames were being written out.
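
For the record, the GPS time check after loading the module can be as simple as the following, assuming gpstime exposes the same /proc interface that the old symmetricom module did (an assumption on my part):

controls@fb1:~ 0$ cat /proc/gps    # should print the current GPS time in seconds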

  16302   Thu Aug 26 10:30:14 2021   Jamie   Configuration   CDS   front end time synchronization fixed?

I've been looking at why the front end NTP time synchronization did not seem to be working.  I think it was not working because the NTP server the front ends were pointed to, fb1, was not actually responding to synchronization requests.

I cleaned up some things on fb1 and the front ends, which I think unstuck things.

On fb1:

  • stopped/disabled the default client (systemd-timesyncd), and properly installed the full NTP server (ntp)
  • the ntp server package for Debian jessie is old-style SysVinit, not systemd.  In order to make it better integrated I copied the auto-generated service file to /etc/systemd/system/ntp.service, and added an "[Install]" section that specifies that it should be wanted by the default "multi-user.target".
  • "enabled" the new service to auto-start at boot ("sudo systemctl enable ntp.service") 
  • made sure ntp was configured to serve the front end network ('broadcast 192.168.123.255') and then restarted the server ("sudo systemctl restart ntp.service")
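
For the record, the two configuration fragments involved are small.  A sketch, with the paths and addresses as above but the surrounding file contents assumed:

# appended to the copied /etc/systemd/system/ntp.service
[Install]
WantedBy=multi-user.target

# relevant line in /etc/ntp.conf for serving the front-end subnet
broadcast 192.168.123.255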

For the front ends:

  • on fb1 I chroot'd into the front-end diskless root (/diskless/root) and manually specified that systemd-timesyncd should start on boot by creating a symlink to the timesyncd service in the multi-user.target directory:
$ sudo chroot /diskless/root
$ cd /etc/systemd/system/multi-user.target.wants
$ ln -s /lib/systemd/system/systemd-timesyncd.service
  • on the front end itself (c1iscex as a test) I did a "systemctl daemon-reload" to force it to reload the systemd config, and then restarted the client ("systemctl restart systemd-timesyncd")
  • checked the NTP synchronization with timedatectl:
controls@c1iscex:~ 0$ timedatectl 
      Local time: Thu 2021-08-26 11:35:10 PDT
  Universal time: Thu 2021-08-26 18:35:10 UTC
        RTC time: Thu 2021-08-26 18:35:10
       Time zone: America/Los_Angeles (PDT, -0700)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2021-03-14 01:59:59 PST
                  Sun 2021-03-14 03:00:00 PDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2021-11-07 01:59:59 PDT
                  Sun 2021-11-07 01:00:00 PST
controls@c1iscex:~ 0$ 

Note that it is now reporting "NTP enabled: yes" (the service is enabled to start at boot) and "NTP synchronized: yes" (synchronization is happening), neither of which it was reporting previously.  I also note that the systemd-timesyncd client service is now loaded and enabled, is no longer reporting that it is in an "Idle" state and is in fact reporting that it synchronized to the proper server, and it is logging updates:

controls@c1iscex:~ 0$ sudo systemctl status systemd-timesyncd
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
   Active: active (running) since Thu 2021-08-26 10:20:11 PDT; 1h 22min ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 2918 (systemd-timesyn)
   Status: "Using Time Server 192.168.113.201:123 (ntpserver)."
   CGroup: /system.slice/systemd-timesyncd.service
           └─2918 /lib/systemd/systemd-timesyncd

Aug 26 10:20:11 c1iscex systemd[1]: Started Network Time Synchronization.
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 64s/+0.000s/0.000s/0.000s/+26ppm
Aug 26 10:21:15 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 128s/-0.000s/0.000s/0.000s/+25ppm
Aug 26 10:23:23 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 256s/+0.001s/0.000s/0.000s/+26ppm
Aug 26 10:27:40 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 512s/+0.003s/0.000s/0.001s/+29ppm
Aug 26 10:36:12 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 1024s/+0.008s/0.000s/0.003s/+33ppm
Aug 26 10:53:16 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/-0.026s/0.000s/0.010s/+27ppm
Aug 26 11:27:24 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/+0.009s/0.000s/0.011s/+29ppm
controls@c1iscex:~ 0$ 

So I think this means everything is working.

I then went ahead and reloaded and restarted the timesyncd services on the rest of the front ends.
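
Concretely, that was just the same two commands looped over the remaining hosts; a sketch (host names taken from the front-end list earlier, and assuming the usual passwordless ssh/sudo for the controls account):

controls@fb1:~ 0$ for h in c1sus c1ioo c1lsc c1iscey; do ssh $h "sudo systemctl daemon-reload && sudo systemctl restart systemd-timesyncd"; done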

We still need to confirm that everything comes up properly the next time we have an opportunity to reboot fb1 and the front ends (or the opportunity is forced upon us).

There was speculation that the NTP clients on the front ends (systemd-timesyncd) would not work on a read-only filesystem, but this doesn't seem to be true.  You can't trust everything you read on the internet.

  17109   Sun Aug 28 23:14:22 2022   Jamie   Update   Computers   rack reshuffle proposal for CDS upgrade

@tega This looks great, thank you for putting this together.  The rack drawing in particular is great.  Two notes:

  1. In "1X6 - proposed" I would move the "PEM AA + ADC Adapter" down lower in the rack, maybe where "Old FB + JetStor" are, after removing those units since they're no longer needed.  That would keep all the timing stuff together at the top without any other random stuff in between them.  If we can't yet remove Old FB and the JetStor then I would move the VME GPS/Timing chassis up a couple units to make room for the PEM module between the VME chassis and FB1.
  2. We'll eventually want to move FB1 and Megatron into 1X7, since it seems like there will be room there.  That will put all the computers into one rack, which will be very nice.  FB1 should also be on the KVM switch as well.

I think most of this work can be done with very little downtime.

  16794   Thu Apr 21 11:31:35 2022   JC   Update   VAC   Gauges P3/P4

[Jordan, JC]

It was brought to our attention during yesterday's meeting that the pressures in the vacuum system were not equivalent although the valves were open. So this morning, Jordan and I reviewed pressure gauges P3 and P4. We attempted to recalibrate them, but the gauges were unresponsive. Following this, we connected new gauges on the outside to test the calibration. The two new gauges calibrated successfully at atmospheric pressure. We then removed the old gauges and installed the new ones.

Attachment 1: IMG_0560.jpeg
Attachment 2: IMG_0561.jpeg
  16808   Mon Apr 25 14:19:51 2022   JC   Update   General   Nitrogen Tank

Coming in this morning, I checked the levels of the nitrogen tanks. One of the tanks was empty, so I went ahead and swapped it out. One tank is at 946 PSI, the other is at 2573 PSI. I checked for leaks and found none.

  16814   Wed Apr 27 10:05:55 2022   JC   Update   Coil Drivers   Coil Drivers Update

18 coil drivers (9 pairs) have been modified, namely ETMX/ITMX/ITMY/BS/PRM/SRM/MC1/MC2/MC3.

 

ETMX Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100624);  ETMX Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100631)

ITMX Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100620);  ITMX Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100633)

ITMY Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100623);  ITMY Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100632)

BS Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100625);  BS Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100649)

PRM Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100627);  PRM Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100650)

SRM Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100626);  SRM Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100648)

MC1 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100628);  MC1 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100651)

MC2 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100629);  MC2 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100652)

MC3 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100630);  MC3 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100653)

 

 

I will update this entry, linking each coil driver to the DCC.

  16820   Fri Apr 29 08:34:40 2022   JC   Update   VAC   RGA Pump Down

In order to start pumping down the RGA volume, Jordan and I began by opening V7 and VM. Afterwards, we started RP1 and RP3. After this, the pressure in the line between RP1, RP3, and V6 dropped to 3.4 mTorr. Next, we tried to open V6, but an error message popped up, and we haven't been able to clear it since. We were, however, able to turn on TP2 with V4 closed. The pressure in that line is reading 1.4 mTorr.

 

PRP on the sitemap is reporting an incorrect pressure for the line between RP1, RP3, and V6. This was verified against both the control screen and the physical controller.

Attachment 1: Screen_Shot_2022-04-29_at_8.46.53_AM.png
  16823   Mon May 2 13:30:52 2022   JC   Update   Coil Drivers   Coil Drivers Update

The DCC has been updated, along with the modified schematic. Links have been attached.

Quote:

18 coil drivers (9 pairs) have been modified, namely ETMX/ITMX/ITMY/BS/PRM/SRM/MC1/MC2/MC3.

 

ETMX Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100624);  ETMX Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100631)

ITMX Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100620);  ITMX Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100633)

ITMY Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100623);  ITMY Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100632)

BS Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100625);  BS Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100649)

PRM Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100627);  PRM Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100650)

SRM Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100626);  SRM Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100648)

MC1 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100628);  MC1 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100651)

MC2 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100629);  MC2 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100652)

MC3 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100630);  MC3 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100653)

 

 

I will update this entry, linking each coil driver to the DCC.

 

  16825   Tue May 3 13:18:47 2022   JC   Update   VAC   RGA Pump Down

Jordan, Tega, JC

The issue has been resolved. The breaker on RP1 was tripped, so the RP1 button was reporting ON but the pump was not actually on, which continuously tripped the V6 interlock. The breaker was reset and RP1 and RP3 were turned on. V6 was opened to rough out the RGA volume. Once the pressure was at ~100 mtorr, V4 was opened to pump the RGA with TP2. V6 was closed and RP1/3 were turned off.

The RGA is pumping down; we will take scans next week to determine if a bakeout is needed.

Quote:

In order to start pumping down the RGA volume, Jordan and I began by opening V7 and VM. Afterwards, we started RP1 and RP3. After this, the pressure in the line between RP1, RP3, and V6 dropped to 3.4 mTorr. Next, we tried to open V6, but an error message popped up, and we haven't been able to clear it since. We were, however, able to turn on TP2 with V4 closed. The pressure in that line is reading 1.4 mTorr.

 

PRP on the sitemap is reporting an incorrect pressure for the line between RP1, RP3, and V6. This was verified against both the control screen and the physical controller.

 

  16842   Tue May 10 15:46:38 2022   JC   Update   BHD   Relocate green TRX and TRY components from PSL table to BS table

[JC, Tega]

Tega and I cleaned up the BS OPLEV table and took out a couple of mirrors and an extra PD. The PD which was removed is "IP-POS - X/Y Reversed". In addition to this, its cable is zip-tied to the others on the outside of the table in case it is required later on.

Next, we placed the cameras and mirrors for the green beam into their positions. A beam splitter and 4 mirrors were relocated from the PSL table to the BS oplev table to complete this. I will upload an updated photo with arrows showing the beam routes.

Attachment 1: IMG_0741.jpeg
Attachment 2: IMG_0753.jpeg
  16845   Wed May 11 15:49:42 2022   JC   Update   OPLEV Tables   Green Beam OPLEV Alignment

[Paco, JC]

Paco and I began aligning the green beam on the BS oplev table. While aligning the GRN-TRX, the incoming beam was entering the table a bit low. To fix this, Paco went into the chamber and corrected the pitch with the steering mirror. The GRN-TRX is now set up, both the PD and the camera. Paco is continuing to work on the GRN-TRY and will update later today.

In the morning, I will update this post with photos of the new arrangement of the BS OPLEV Table.


Update Wed May 11 16:54:49 2022

[Paco]

GRY is now better mode matched to the YARM and is on the edge of locking, but more work is needed to improve the alignment. The key difference this time with respect to previous attempts was scanning the two lenses on translation stages along the green injection path. This improved the GTRY level by a factor of 2.5, and I know it can be further improved. Anyway, the locked HOMs are nicely centered on the GTRY PD, so we are likely done with the in-vac GTRY/GTRX alignment.


Update Wed May 12 10:59:22 2022

[JC]

The GTRX PD is now set up and connected. The camera has been set at an angle because its cable is too thick for the camera to maintain its original position along the side.

 

Attachment 1: IMG_0770.jpeg
  16851   Fri May 13 14:26:00 2022   JC   Update   Alignment   LO2 Beam

[Yehonathan, JC]

Yehonathan and I attempted to align the LO2 beam today through the BS chamber and the ITMX chamber. We found the LO2 beam was blocked by the POKM1 mirror. During this attempt, I tapped TT2 with the laser card. This caused the mirror to shake and settle into a new position. Afterwards, when putting the door back on ITMX, one of the older cables was pulled and the insulation was torn. This caused some major issues, and we have not been able to restore either of the arms to their original standings.

  16871   Tue May 24 11:04:53 2022   JC   Update   VAC   Beginning Pumpdown

[JC, Jordan, Paco, Chub]

We began the pumpdown this morning. We started with the annulus volume and proceeded with the following steps:

1. Isolate the RGA volume by closing off valves VM3 and V7.

2. Opened valves VASE, VASV, VABSSCT, VABS, VABSSCO, VAEV, and VAEE, in that order.

3. Open VA6 to allow P3, FRG3, and PAN to equalize.

4. Turn on RP1 and RP3, rough out annulus volume, once <1 torr turn on TP3. Close V6. Open V5 to pump the annulus volume with TP3.

5. Re route pumping from RP1 and RP3 to the main volume by opening V3 and slowly opening RV1.

6. After ~3.5 hours the pressure in the arms was <500 mtorr on both FRG1 and P1a. Turn on TP1 and wait for it to reach full speed (560 Hz).

7. Open V1 with RV2 barely open. The pressure diff between P1a and P2/FRG2 needs to be below 1 torr. This took a couple attempts with the manual valve in different positions. The interlocks were tripped for this reason. Repeat step 7 until the manual gate valve was in a position that throttled pumping enough to maintain the <1 torr differential.

8. Slowly open the manual gate valve over the course of ~ 1 hour. Once the manual gate valve fully opened, pressure in the arms was <1mtorr.

9. V7 was closed, leaving only TP2 to back TP1, while TP3 was used to continue pumping the annuli. Left in that configuration overnight (see attached)

 

We did have to replace gauge PAN because it was reading a signal error. In addition, we found its cable is a bit sketchy and has a sharp bend; the signal comes in and out when the cable is fiddled with.

Attachment 1: PUMPDOWN-2022-05-24_16-57-59.png
Attachment 2: C1VAC_Screenshot_2022-05-24_16-59-27.png
  16878   Fri May 27 12:15:30 2022   JC   Update   Electronics   CRT TV / Monitor 6

[Yehonathan, Paco, Yuta, JC]

As we were cleaning up this morning, we heard a high-pitched sound that turned into a buzz. After searching for where the sound came from, we noticed the CRT TV had gone out. We swapped it out with a monitor and used a BNC-to-VGA adapter to display the cameras.

  16882   Tue May 31 14:44:02 2022   JC   Update   Electronics   CRT TV / Monitor 6

[Paco, JC]

Paco and I fixed the ethernet cable that was hanging. We stopped models c1x07 and c1su2, rerouted the cable to follow the top of the shelf, and then turned the computers back on.

 

Note: There was no single ethernet cable long enough, so we used a female-to-female adapter to join two ethernet cables.

Quote:

[Yehonathan, Paco, Yuta, JC]

As we were cleaning up this morning, we heard a high-pitched sound that turned into a buzz. After searching for where the sound came from, we noticed the CRT TV had gone out. We swapped it out with a monitor and used a BNC-to-VGA adapter to display the cameras.

 

  16891   Mon Jun 6 09:37:16 2022   JC   Update   SUS   InMatCalc

[Paco, JC]

Paco and I attempted to calculate the input matrices from May 24, 2022, but the OSEM data was all saturated and not very useful. Therefore, we decided to manually investigate the appropriate coil offsets for all BHD SUS. Previously, the default offset kick was 30000 counts, but we found that LO1, AS1, AS4, and PR2 cannot take more than 5000 counts, while LO2, SR2, and PR3 cannot take more than 2000 counts before saturating. Note that all these kick tests were done by kicking the UL OSEM on each of the BHD optics.

 

We started the freeSwing.py script in a tmux session ("freeSwing"), scheduled for tomorrow at 1:00 am, for only the 5000-count-offset SUS.
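
For reference, kicking the script off in a detached tmux session looks roughly like this; the session name matches the one above, but the interpreter and the arguments to freeSwing.py are assumptions and would need to be filled in:

$ tmux new-session -d -s freeSwing                                            # create the detached session
$ tmux send-keys -t freeSwing 'python freeSwing.py <args for 5000-count SUS>' Enter   # run the kick script inside it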

 

  16906   Fri Jun 10 13:52:22 2022   JC   Update   OPLEV Tables   ITMX, ITMY, and Vertex Table Beam Paths

I have taken photos and added arrows that show the beam paths for the ITMX, ITMY, and Vertex oplev tables.

Attachment 1: DCE4F1D7-5AE0-491C-8AF6-F8B659C0787E_1_105_c.jpeg
Attachment 2: 4B24C891-654D-4C51-A8D9-D316364FCF68_1_105_c.jpeg
Attachment 3: F5B115E5-885F-463C-9645-BB2EB73B6144_1_201_a.jpeg
  16912   Tue Jun 14 08:41:36 2022   JC   Update   OPLEV Tables   BS Oplev Table Sketch

[JC]

Lately, I have been working on a 3D model of the BS OPLEV table in SolidWorks. This is my progress so far; a few of the components, such as the HeNe laser and photodiodes, I will have to model myself. This will just be a general layout of the HeNe laser, optics, and photodiodes.

Attachment 1: BS_OPLEV_Table.PNG
  16916   Wed Jun 15 07:26:35 2022   JC   Update   VAC   Beginning Pumpdown

[Jordan, JC]

Jordan and I went in to restore the vacuum system back to its original state from before the power loss on June 8, 2022. The process went smoothly; we first closed V7 and then opened VM3 (in that order).

The RP1/3 line did not have the KF blank installed. That was added and the RP flex line was capped off.

Quote:

[JC, Jordan, Paco, Chub]

We began the pumpdown this morning. We started with the annulus volume and proceeded with the following steps:

1. Isolate the RGA volume by closing off valves VM3 and V7.

2. Opened valves VASE, VASV, VABSSCT, VABS, VABSSCO, VAEV, and VAEE, in that order.

3. Open VA6 to allow P3, FRG3, and PAN to equalize.

4. Turn on RP1 and RP3, rough out annulus volume, once <1 torr turn on TP3. Close V6. Open V5 to pump the annulus volume with TP3.

5. Re route pumping from RP1 and RP3 to the main volume by opening V3 and slowly opening RV1.

6. After ~3.5 hours the pressure in the arms was <500 mtorr on both FRG1 and P1a. Turn on TP1 and wait for it to reach full speed (560 Hz).

7. Open V1 with RV2 barely open. The pressure diff between P1a and P2/FRG2 needs to be below 1 torr. This took a couple attempts with the manual valve in different positions. The interlocks were tripped for this reason. Repeat step 7 until the manual gate valve was in a position that throttled pumping enough to maintain the <1 torr differential.

8. Slowly open the manual gate valve over the course of ~ 1 hour. Once the manual gate valve fully opened, pressure in the arms was <1mtorr.

9. V7 was closed, leaving only TP2 to back TP1, while TP3 was used to continue pumping the annuli. Left in that configuration overnight (see attached)

 

We did have to replace gauge PAN because it was reading a signal error. In addition, we found its cable is a bit sketchy and has a sharp bend; the signal comes in and out when the cable is fiddled with.

 

  16943   Fri Jun 24 12:13:16 2022   JC   Update   IOO   WFS issues

[Yuta, JC]

It seems that early this morning the MC got very misaligned. Yuta was able to align the mode cleaner again by individually adjusting MC1, MC2, and MC3. Once the transmission reached ~12000, we went ahead and turned on WFS. Oddly enough, the transmission began plummeting and the MC fell out of lock. After this, Yuta reset the WFS offsets and realigned the WFS QPDs. We then locked the MC and turned on WFS once again, but the same issue happened. After fiddling around with this, we found that if we set C1:IOO-MC2_TRANS_PIT_OUTPUT and C1:IOO-WFS1_YAW_OUTPUT equal to 0, WFS does not cause this issue. Is there a proper way to reset WFS, aside from just zeroing the offsets?

Attachment 1: WFS.png
  16953   Tue Jun 28 09:03:58 2022   JC   Update   General   Organizing and Cleaning

The plan for the tools in 40m

As of right now, there are 4 toolboxes: X-end, Y-end, Vertex, and the main toolbox along the X-arm. The plan is to give each toolbox its own set of tools. The tools in the X-end, Y-end, and Vertex toolboxes will be very similar, containing basic tools such as pliers, screwdrivers, and allen ball drivers. Along with this, each toolbox will have a tape measure, caliper, level, and other measuring tools we find convenient.

As for the new toolbox, I have done some research and found a few good options. The only problem I have run into is how the width of the toolbox corresponds with the price. The tool cabinet we have now is 41" wide. The issue is not in finding another toolbox of the same width, but that for a similar price we can find a 54" wide tool cabinet. Would anyone object to making a bit more space for this?

How the tools will stay organized.

The original idea I had was to use a specified color of electrical tape for each toolbox and then wrap the corresponding tools with the same color tape. But it was brought to my attention that the electrical tape would become sticky over time. So I think using the label maker would be the best idea, with the labels being 'X' for X-end, 'Y' for Y-end, 'V' for Vertex, and 'M' for the main toolbox.

An idea for the optical tables:

Anchal brought up to me that it is a hassle to go back and forth searching for the correct sizes of hex keys and allen wrenches. The idea of a pouch on the outside of each optical table was mentioned, so I brought this up to Paco. Paco also gave me the idea of a 3D-printed stand we could make for allen ball drivers. Does anyone have a preference or an idea of what would be the best choice and why?


A few sidenotes: 

Anchal mentioned to me a while back that there are many cables lying on the racks that are not being used. Is there a way we could identify which ones are being used?

I noticed that when we were vented, a few of the chamber doors were leaning up against the wall and not on a wooden stand like the others. The seats for the chamber doors are pretty spacious, though, and do not give us much clearance. For future ones, could we make something sleeker and put the wider seats at the end chambers?

The cabinets along the Y-Arm are labelled, but do not correspond with all the materials inside or are too full to take in more items. Could I organize these? 
 

  16980   Fri Jul 8 14:03:33 2022   JC   HowTo   VAC   Vacuum Preparation for Power Shutdown

[Koji, JC]

Koji and I have prepared the vacuum system for the power outage on Saturday.

  1. Closed V1 to isolate the main volume.
  2. Closed off VASE, VASV, VABSSCI, VABS, VABSSCO, VAEV, and VAEE.
  3. Closed V6, then close VM3 to isolate RGA
  4. Turn off TP1 (You must check the RPMs on the TP1 Turbo Controller Module)
  5. Close V5
  6. Turn off TP3 (There is no way to check the RPMs, so be patient)
  7. Close V4 (System State changes to 'All pneumatic valves are closed').
  8. Turn off TP2 (There is no way to check the RPMs, so be patient)
  9. Close Vacuum Valves (on TP2 and TP3) which connect to the AUX Pump.
  10. Turn off the AUX pump with the breaker switch at the wall plug.

From here, we shutdown electronics.

  1. Run /sbin/shutdown -h now on c1vac to shut the host down.
  2. Manually turn off power to electronic modules on the rack.
    • GP316a
    • GP316b
    • Vacuum Acromags
    • PTP3
    • PTP2
    • TP1
    • TP2 (Unplugged)
    • TP3 (Unplugged)

 

Attachment 1: Screen_Shot_2022-07-12_at_7.02.14_AM.png
  16983   Mon Jul 11 11:16:45 2022   JC   Summary   Electronics   Startup after Shutdown

[Paco, Yehonathan, JC]

We began starting up all the electronics this morning, beginning at the Y-end. Following the steps in the Complete_Power_Shutdown_Procedures page on the 40m wiki, we only came across 2 issues:

  1. The green beam at the Y-end: We turned on the controller and the indicator light began flashing. After waiting until the blinking light became constant, we turned on the beam.
  2. c1lsc reported "could not find operating system" and we were unable to SSH in from Rossa: We found an elog describing how to restart Chiara, and this worked. We added this to the startup procedures.
  16985   Mon Jul 11 15:26:12 2022   JC   HowTo   VAC   Startup after Power Outage

[Koji, Jc]

Koji and I began starting the vacuum system up.

  1. Perform step 2 of the electronics shutdown in reverse order. Anything after that, turn on manually.
  2. If C1vac does not come back, then restart by holding the reset button.
  3. Open VA6
  4. Open VASE, VASV,VABSSCI, VABS, VABSSCO, VAEV, and VAEE
  5. Open V7
  6. Check P3 and P2, if they are at high pressure, approx. 1 Torr range, then you must use the roughing pumps.
  7. Connect Rotary pump tube. (Manually)
  8. Turn on AUX Pump
  9. Manually open TP2 and TP3 valves.
  10. Turn on TP2 and TP3, when the pumps finish startup, turn off Standby to bring to nominal speed.
  11. Turn on RP1 and RP3
  12. Open V6
  13. Once P3 reaches <<1 Torr, close V6 to isolate the Roughing pumps.
  14. When TP2 and TP3 are at nominal speed, open V5 and V4.
  15. Now that TP1 is well backed, turn on TP1.
  16. When TP1 is at nominal speed, Open V1.
  16995   Wed Jul 13 07:16:48 2022   JC   Update   Electronics   Checking Sorensen Power Supplies

[JC]

I went around the 40m picking up any Sorensens that were lying around, to test whether they work or need repair. I gathered up a total of 7 Sorensens and tested each one with a voltmeter. I made sure the voltage would rise on the Sorensen as well as on the voltmeter, maxing out at ~33.4 V. For the current, the voltmeter can only go up to 10 A before it is fused. Many of the Sorensens I found did not have their own wall connection, so I had to use the same one for multiple units.

Of these 7, I have found 5 that work well. One Sorensen I tested shorts its output above 20 V, and the other has yet to be tested.

Attachment 1: 658C5D39-11BD-4EE3-90E2-34CBBC1DBD3C.jpeg
Attachment 2: 5328312A-7918-44CC-82B7-54B57840A336.jpeg
  17005   Fri Jul 15 12:21:58 2022   JC   Update   Electronics   Checking Sorensen Power Supplies

Of the 7 Sorensen power supplies I tested, 5 are working fine, 1 cannot output more than 20 V before shorting, and the other does not output current. Six Sorensens are behind the X-arm.

Quote:

[JC]

I went around the 40m picking up any Sorensens that were lying around, to test whether they work or need repair. I gathered up a total of 7 Sorensens and tested each one with a voltmeter. I made sure the voltage would rise on the Sorensen as well as on the voltmeter, maxing out at ~33.4 V. For the current, the voltmeter can only go up to 10 A before it is fused. Many of the Sorensens I found did not have their own wall connection, so I had to use the same one for multiple units.

Of these 7, I have found 5 that work well. One Sorensen I tested shorts its output above 20 V, and the other has yet to be tested.

 

Attachment 1: 50DF21D7-D61A-4674-B0DA-463378B00ADB.jpeg
Attachment 2: FA4CF579-6C1E-48D5-B152-74F35B4EE90B.jpeg
  17019   Tue Jul 19 17:18:34 2022   JC   Update   Electronics   New Coil Driver on Rack 1X3

[Yehonathan, JC]

Yehonathan and I began to put the electronics on rack 1X3. To do this, we had to move the monitor over to the PD testing table. Before mounting the coil drivers, we added numbers to the spaces to follow the rack plan Koji provided. The drivers which have been mounted are PRM (slots 10, 11), BS (slots 15, 16), ITMX (slots 26, 27), and ITMY (slots 34, 35).

Attachment 1: 22DC1767-6073-4D82-BEED-915318B57C03.jpeg
  17078   Fri Aug 12 13:40:36 2022   JC   Update   General   Preparing for Shutdown on Saturday, Aug 13

[Yehonathan, JC]

Our first step in preparing for the Shutdown was to center all the OpLevs. Next is to prepare the Vacuum System for the shutdown.

 

  17099   Tue Aug 23 14:59:15 2022   JC   Update   Tools   New Toolbox at Y-End

A new toolbox has been placed at the Y-end! Each drawer has its label, so PLEASE put the tools back in their correct location. In addition to this, each tool has its assigned toolbox, so PLEASE RETURN all tools to their designated toolbox. The tools can be distinguished by a marking or heat shrink which corresponds to the color of the tool chest or location. Photo #2 is an example of how the tools have been marked.

Each toolbox from now on will contain a drawer for the following: Measurements, Allen Keys, Pliers and Cutters, Screwdrivers, Zipties and Tapes, Allen Ball Drivers, Crescent Wrenches, Clamps, and Torque Wrenches/Ratchets.

Attachment 1: 9AFD3E49-0C5B-4626-889A-0A5C62590AD7.jpeg
Attachment 2: 99EC2EB1-EEA0-4AD8-B6D7-A494431E91E5.jpeg
  17110   Mon Aug 29 13:33:09 2022   JC   Update   General   Lab Cleanup

The machine shop looked a mess this morning, so I cleaned it up. All power tools are now placed in the drawers in the machine shop. Let me know if there are any questions of where anything here is placed. 

Attachment 1: EDE63209-D556-41F1-9BF2-89CD78E3D7B7.jpeg
  17126   Thu Sep 1 09:00:02 2022   JC   Configuration   Daily Progress   Locked both arms and aligned Op Levs

Each morning now, I am going to try to align and lock both arms. Along with that, sometime towards the end of each week, we should align the oplevs. This is a good habit that should be practiced more often, and not only by me. As for the Y arm, Yehonathan and I had to adjust the gain to 0.15 in order to stabilize the lock.

Attachment 1: Daily.pdf
Attachment 2: Daily.pdf
  17132   Tue Sep 6 09:57:26 2022   JC   Summary   General   Lab cleaning

DB9 cables have been sorted and placed behind the Y-arm. Long BNC cables and ethernet cables have been stored under the Y-arm.

Quote:

We held the lab cleaning for the first time since the campus reopening (Attachment 1).
Now we can use some of the desks for the people to live! Thanks for the cooperation.

We relocated a lot of items into the lab.

  • The entrance area was cleaned up. We believe that there is no 40m lab stuff left.
    • BHD BS optics was moved to the south optics cabinet. (Attachment 2)
    • DSUB feedthrough flanges were moved to the vacuum area (Attachment 3)
  • Some instruments were moved into the lab.
    • The Zurich instrument box
    • KEPCO HV supplies
    • Matsusada HV supplies
  • We moved the large pile of SUPERMICROs in the lab. They are around MC2 while the PPE boxes there were moved behind the tube around MC2 area. (Attachment 4)
  • We have moved PPE boxes behind the beam tube on XARM behind the SUPERMICRO computer boxes. (Attachment 7)
  • ISC/WFS left over components were moved to the pile of the BHD electronics.
    • Front panels (Attachment 5)
    • Components in the boxes (Attachment 6)

We still want to make some more cleaning:

- Electronics workbenches
- Stray setup (cart/wagon in the lab)
- Some leftover on the desks
- Instruments scattered all over the lab
- Ewaste removal

 

Attachment 1: 982146B2-02E5-4C19-B137-E7CC598C262F.jpeg
Attachment 2: 0FBB61AC-E882-458D-A891-7B11F35588FF.jpeg
  17135   Thu Sep 8 11:54:37 2022   JC   Configuration   Lab Organization   Lab Organization

The arms in the 40m laboratory have now been sectioned off. Each arm has been divided up into 15 sections. Along the Y arm, the sections are labelled "Section Y1" through "Section Y15". For the X arm, they are labelled "Section X1" through "Section X15". Anything changed or moved will now be noted in the elog with its appropriate section.

 

Below is an example of Section X6.

Attachment 1: 1A7026BC-82A9-49E9-BA22-1A700DFEC5D2.jpeg
Attachment 2: 2A904809-82F0-40C0-B907-B48C3A0E789E.jpeg
Attachment 3: CB4B8591-B769-454D-9A16-EE9176004099.jpeg
  17136   Thu Sep 8 12:01:02 2022   JC   Configuration   Lab Organization   Lab Organization

The floor cable cover has been changed out for a new one. This is in Section X11.

Attachment 1: F41AD1DA-29E9-4449-99CB-5F43AE527CA6_1_105_c.jpeg
Attachment 2: FF5F2CE8-85E8-4B6F-8F8A-9045D978F670.jpeg