40m Log, Page 276 of 337
ID   Date   Author   Type   Category   Subject
  2037   Thu Oct 1 15:42:55 2009   rob   Update   Locking   c1susvme2 timing problems update

Quote:

We've also been having problems with timing for c1susvme2.  Attached is a one-hour plot of timing data for this cpu, known as SRM.  Each spike is an instance of lateness, and a potential cause of lock loss.  This has been going on for quite a while.

 

 

Attached is a 3 day trend of SRM CPU timing info.  It clearly gets better (though still problematic) at some point, but I don't know why, as it doesn't correspond to any work done.  I've labeled a reboot, which was done to try to clear out the timing issues.  It can also be seen that it gets worse during locking work, but maybe that's a coincidence.

Attachment 1: srmcpu2.png
  2041   Fri Oct 2 14:52:55 2009   rana   Update   Computers   c1susvme2 timing problems update

The attached shows the 200 day '10-minute' trend of the CPU meters and also the room temperature.

To my eye there is no correlation between the signals. It's clear that the c1susvme2 (SRM LOAD) load is going up, and there is no evidence that it's the temperature.

 

Attachment 1: Untitled.png
  2042   Fri Oct 2 15:11:44 2009   rob   Update   Computers   c1susvme2 timing problems update update

It got worse again, starting with locking last night, but it has not recovered.  Attached is a 3-day trend of SRM cpu load showing the good spell.

Attachment 1: srmcpu3.png
  2080   Mon Oct 12 14:51:41 2009   rob   Update   Computers   c1susvme2 timing problems update update update

Quote:

It got worse again, starting with locking last night, but it has not recovered.  Attached is a 3-day trend of SRM cpu load showing the good spell.

 Last week, Alex recompiled the c1susvme2 code without the decimation filters for the OUT16 channels, so these channels are now as aliased as the rest of them.  This appears to have helped with the timing issues: although it's not completely cured it is much better.  Attached is a five day trend.

Attachment 1: srmcpu.png
  1905   Fri Aug 14 15:29:43 2009   Jenne   Update   Computers   c1susvme2 was unmounted from /cvs/cds

When I came in earlier today, I noticed that c1susvme2 was red on the DAQ screens.  Since the vme computers always seem to be happier as a set, I hit the physical reset buttons on sosvme, susvme1 and susvme2.  I then telnetted or ssh'ed in, as appropriate, to each computer in turn.  sosvme and susvme1 came back just fine. However, I couldn't cd to /cvs/cds/caltech/target/c1susvme2 while ssh-ed in to susvme2.  I could cd to /cvs/cds, and then did an ls, and it came back totally blank.  There was nothing at all in the folder. 

Yoichi showed me how to do 'df' to figure out what filesystems are mounted, and it looked as though the filesystem was mounted.  But then Yoichi tried to unmount the filesystem, and it claimed that it wasn't mounted at all.  We then remounted the filesystem, and things were good again.  I was able to continue the regular restart procedure, and the computer is back up again.

Recap: c1susvme2 mysteriously got unmounted from /cvs/cds!  But it's back, and the computers are all good again.

  1634   Sat May 30 12:36:52 2009   rob   Update   Computers   c1susvme2, c1iscex running late

c1susvme2 has been running just a bit late for about a week.  I rebooted it. 

The plot shows SRM_FE_SYNC, which is the number of times in the last second that c1susvme2 was late for the 16k cycle.   Similarly for ETMX.

 

Attachment 1: srmsync.jpg
Attachment 2: etmxsync.jpg
  1635   Mon Jun 1 13:25:00 2009   rob   Update   Computers   c1susvme2, c1iscex running late

Quote:

c1susvme2 has been running just a bit late for about a week.  I rebooted it. 

The plot shows SRM_FE_SYNC, which is the number of times in the last second that c1susvme2 was late for the 16k cycle.   Similarly for ETMX.

 

 

The reboot appears to have worked.

Attachment 1: doublesync.jpg
  16365   Wed Sep 29 17:10:09 2021   Anchal   Summary   CDS   c1teststand problems summary

[anchal, ian]

We went and collected some information for the overlords to fix the c1teststand DAQ network issue.


  • From c1teststand, the c1bhd and c1sus2 computers were not accessible through ssh ("No route to host"), so we restarted both computers (the I/O chassis were ON).
  • After the computers restarted, we were able to ssh into c1bhd and c1sus2, and we ran rtcds start c1x06 and rtcds start c1x07.
  • The first page in attachment shows the screenshot of GDS_TP screens of the IOP models after this step.
  • Then we started the user models by running rtcds start c1bhd and rtcds start c1su2.
  • The second page shows the screenshot of the GDS_TP screens. You can notice that the DAQ status is red on all the screens and the DC statuses are blank.
  • So we checked whether the daqd_ services were running on the fb computer. They were not, so we started them all with sudo systemctl start daqd_*.
  • The third page shows the status of all services after this step. The daqd_dc.service remained in a failed state.
  • open-mx_stream.service was not even loaded in fb. We started it by running sudo systemctl start open-mx_stream.service.
  • The fourth page shows the status of this service. It started without any errors.
  • However, when we went to check the status of mx_stream.service on c1bhd and c1sus2, they were not loaded, and when we tried to start them, they entered a failed state and kept trying to restart every 3 seconds without success (see pages 5 and 6).
  • Finally, we also took a screenshot of timedatectl command output on the three computers fb, c1bhd, and c1sus2 to show that their times were not synced at all.
  • The ntp service is running on fb but it probably does not have access to any of the servers it is following.
  • The timesyncd on c1bhd and c1sus2 (FE machines) is also running but shows the status 'Idle', which suggests they are unable to find the ntp signal from fb.
  • I believe this issue is similar to what Jamie fixed in fb1 on the martian network in 40m/16302. Since the fb on the c1teststand network was cloned before this fix, it might have this dysfunctional ntp as well.

We will try to get internet access to c1teststand soon. Meanwhile, someone with more experience and knowledge should look into this situation and try to fix it. We need to test the c1teststand within a few weeks now.

Attachment 1: c1teststand_issues_summary.pdf
  16372   Mon Oct 4 11:05:44 2021   Anchal   Summary   CDS   c1teststand problems summary

[Anchal, Paco]

We tried to fix the ntp synchronization on c1teststand today by repeating the steps listed in 40m/16302. Even though the cloned fb1 now has the exact same package version, conf & service files, and status, the FE machines (c1bhd and c1sus2) fail to sync their time; timedatectl shows the same status 'Idle'. We also dug a bit deeper into the error messages of daqd_dc on the cloned fb1 and mx_stream on the FE machines, and have some error messages to report here.


Attempt at fixing the ntp

  • We copied the ntp package version 1:4.2.6 deb file from /var/cache/apt/archives/ntp_1%3a4.2.6.p5+dfsg-7+deb8u3_amd64.deb on the martian fb1 to the cloned fb1 and ran:
    controls@fb1:~ 0$ sudo dpkg -i ntp_1%3a4.2.6.p5+dfsg-7+deb8u3_amd64.deb
  • We got error messages about missing dependencies of libopts25 and libssl1.1. We downloaded oldoldstable jessie versions of these packages from here and here. We ensured that these versions are higher than the required versions for ntp. We installed them with:
    controls@fb1:~ 0$ sudo dpkg -i libopts25_5.18.12-3_amd64.deb 
    controls@fb1:~ 0$ sudo dpkg -i libssl1.1_1.1.0l-1~deb9u4_amd64.deb
  • Then we installed the ntp package as described above. It asked us if we want to keep the configuration file, we pressed Y.
  • However, we decided to make the configuration and service files on the cloned fb1 exactly the same as on the martian fb1. We copied the /etc/ntp.conf and /etc/systemd/system/ntp.service files from the martian fb1 to the cloned fb1 in the same locations. Then we enabled ntp, reloaded the daemon, and restarted the ntp service:
    controls@fb1:~ 0$ sudo systemctl enable ntp
    controls@fb1:~ 0$ sudo systemctl daemon-reload
    controls@fb1:~ 0$ sudo systemctl restart ntp
  • But of course, since fb1 doesn't have internet access, we got some errors in the status of ntp.service:
    controls@fb1:~ 0$ sudo systemctl status ntp
    ● ntp.service - NTP daemon (custom service)
       Loaded: loaded (/etc/systemd/system/ntp.service; enabled)
       Active: active (running) since Mon 2021-10-04 17:12:58 UTC; 1h 15min ago
     Main PID: 26807 (code=exited, status=0/SUCCESS)
       CGroup: /system.slice/ntp.service
               ├─30408 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:107
               └─30525 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:107
    
    Oct 04 17:48:42 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
    Oct 04 17:48:52 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
    Oct 04 18:05:05 fb1 ntpd_intres[30525]: host name not found: 0.debian.pool.ntp.org
    Oct 04 18:05:15 fb1 ntpd_intres[30525]: host name not found: 1.debian.pool.ntp.org
    Oct 04 18:05:25 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
    Oct 04 18:05:35 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
    Oct 04 18:21:48 fb1 ntpd_intres[30525]: host name not found: 0.debian.pool.ntp.org
    Oct 04 18:21:58 fb1 ntpd_intres[30525]: host name not found: 1.debian.pool.ntp.org
    Oct 04 18:22:08 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
    Oct 04 18:22:18 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
  • But the ntpq command gives the same output as the ntpq command on the martian fb1 (except for the source servers), i.e. the broadcasting is happening in the same manner:
    controls@fb1:~ 0$ ntpq -p
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
     192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000
    
  • On the FE machine side, though, systemd-timesyncd is still unable to read the time signal from fb1 and shows its status as idle:
    controls@c1bhd:~ 3$ timedatectl
          Local time: Mon 2021-10-04 18:34:38 UTC
      Universal time: Mon 2021-10-04 18:34:38 UTC
            RTC time: Mon 2021-10-04 18:34:38
           Time zone: Etc/UTC (UTC, +0000)
         NTP enabled: yes
    NTP synchronized: no
     RTC in local TZ: no
          DST active: n/a
    controls@c1bhd:~ 0$ systemctl status systemd-timesyncd -l
    ● systemd-timesyncd.service - Network Time Synchronization
       Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
       Active: active (running) since Mon 2021-10-04 17:21:29 UTC; 1h 13min ago
         Docs: man:systemd-timesyncd.service(8)
     Main PID: 244 (systemd-timesyn)
       Status: "Idle."
       CGroup: /system.slice/systemd-timesyncd.service
               └─244 /lib/systemd/systemd-timesyncd
  • So the time synchronization is still not working. We expected the FE machines to just synchronize to fb1 even though it doesn't have any upstream ntp server to synchronize to, but that didn't happen.
  • I'm (Anchal) working on getting internet access to c1teststand computers.

Digging into mx_stream/daqd_dc errors:

  • We went and changed the Restart field in /etc/systemd/system/daqd_dc.service on the cloned fb1 to 2. This allows the service to fail and stop restarting after two attempts, which lets us see the real error message instead of the systemd message that the service is restarting too often. We got the following:
    controls@fb1:~ 3$ sudo systemctl status daqd_dc -l
    ● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
       Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
       Active: failed (Result: exit-code) since Mon 2021-10-04 17:50:25 UTC; 22s ago
      Process: 715 ExecStart=/usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc (code=exited, status=1/FAILURE)
     Main PID: 715 (code=exited, status=1/FAILURE)
    
    Oct 04 17:50:24 fb1 systemd[1]: Started Advanced LIGO RTS daqd data concentrator.
    Oct 04 17:50:25 fb1 daqd_dc_mx[715]: [Mon Oct  4 17:50:25 2021] Unable to set to nice = -20 -error Unknown error -1
    Oct 04 17:50:25 fb1 daqd_dc_mx[715]: Failed to do mx_get_info: MX not initialized.
    Oct 04 17:50:25 fb1 daqd_dc_mx[715]: 263596
    Oct 04 17:50:25 fb1 systemd[1]: daqd_dc.service: main process exited, code=exited, status=1/FAILURE
    Oct 04 17:50:25 fb1 systemd[1]: Unit daqd_dc.service entered failed state.
    
  • It seemed like the only thing the daqd_dc process doesn't like is that the mx_stream services are in a failed state on the FE computers. So we did the same thing on the FE machines to get the real error messages:
    controls@fb1:~ 0$ sudo chroot /diskless/root
    fb1:/ 0#
    fb1:/ 0# sudo nano /etc/systemd/system/mx_stream.service
    fb1:/ 0#
    fb1:/ 0# exit
  • Then I ssh'ed into c1bhd to see the error message on mx_stream service properly.
    controls@c1bhd:~ 0$ sudo systemctl daemon-reload
    controls@c1bhd:~ 0$ sudo systemctl restart mx_stream
    controls@c1bhd:~ 0$ sudo systemctl status mx_stream -l
    ● mx_stream.service - Advanced LIGO RTS front end mx stream
       Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
       Active: failed (Result: exit-code) since Mon 2021-10-04 17:57:20 UTC; 24s ago
      Process: 11832 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
     Main PID: 11832 (code=exited, status=1/FAILURE)
    
    Oct 04 17:57:20 c1bhd systemd[1]: Starting Advanced LIGO RTS front end mx stream...
    Oct 04 17:57:20 c1bhd systemd[1]: Started Advanced LIGO RTS front end mx stream.
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: send len = 263596
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: mx_connect failed Nic ID not Found in Peer Table
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: c1x06_daq mmapped address is 0x7f516a97a000
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: c1bhd_daq mmapped address is 0x7f516697a000
    Oct 04 17:57:20 c1bhd systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
    Oct 04 17:57:20 c1bhd systemd[1]: Unit mx_stream.service entered failed state.
    
  • c1sus2 shows the same error. I'm not sure I understand these errors at all, but they seem to have nothing to do with timing issues. Surprise!

As usual, some help would be helpful

  16376   Mon Oct 4 18:00:16 2021   Koji   Summary   CDS   c1teststand problems summary

I don't know anything about mx/open-mx, but you also need open-mx, don't you?


controls@c1ioo:~ 0$ systemctl status *mx*
● open-mx.service - LSB: starts Open-MX driver
   Loaded: loaded (/etc/init.d/open-mx)
   Active: active (running) since Wed 2021-09-22 11:54:39 PDT; 1 weeks 5 days ago
  Process: 470 ExecStart=/etc/init.d/open-mx start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/open-mx.service
           └─620 /opt/3.2.88-csp/open-mx-1.5.4/bin/fma -d

● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: active (running) since Wed 2021-09-22 12:08:00 PDT; 1 weeks 5 days ago
 Main PID: 5785 (mx_stream)
   CGroup: /system.slice/mx_stream.service
           └─5785 /usr/bin/mx_stream -e 0 -r 0 -w 0 -W 0 -s c1x03 c1ioo c1als c1omc -d fb1:0

 

  16381   Tue Oct 5 17:58:52 2021   Anchal   Summary   CDS   c1teststand problems summary

The open-mx service is running successfully on fb1 (clone), c1bhd and c1sus2.

Quote:

I don't know anything about mx/open-mx, but you also need open-mx,don't you?


  16697   Thu Mar 3 15:37:40 2022   Anchal   Summary   CDS   c1teststand restructured

c1teststand has been restructured. There is no port computer called 'c1teststand' anymore. When you ssh into the c1teststand network using ssh c1teststand from inside martian, or from an outside network using the method mentioned in this wiki page, you will land on the chiara (clone) computer, and you can navigate to any teststand network computer from there.

I'll be repurposing the 1U c1teststand computer into the new c1susaux2 slow machine now. All files from the home directory and from the /etc directory of the former c1teststand have been zipped and stored in /home/controls of chiara (clone). Just as an aside, the network configuration of the teststand can be done from inside the teststand network, by opening a browser on either fb1 (clone) or chiara (clone) and going to address 10.0.1.1. The login and password are the same as our usual workstation username and password.

  16271   Fri Aug 6 13:13:28 2021   Anchal   Update   BHD   c1teststand subnetwork now accessible remotely

The c1teststand subnetwork is now accessible remotely. To log into this network, one needs to do the following:

  • Log into nodus or pianosa. (This will only work from these two computers)
  • ssh -CY controls@192.168.113.245
  • Password is our usual workstation password.
  • This will log you into c1teststand network.
  • From here, you can log into fb1, chiara, c1bhd and c1sus2  which are all part of the teststand subnetwork.

Just to document the IT work I did: setting up this connection was a bit less trivial than usual.

  • The martian subnetwork is created by a NAT router which connects only nodus to outside GC network and all computers within the network have ip addresses 192.168.113.xxx with subnet mask of 255.255.255.0.
  • The cloned test stand network was also running on the same IP address scheme, mostly because fb1 and chiara are clones in this network. So every computer in this network also had ip addresses 192.168.113.xxx.
  • I set up a NAT router to connect to the martian network, forwarding ssh requests to the c1teststand computer. My NAT router creates a separate subnet with IP addresses 10.0.1.xxx and subnet mask 255.255.255.0, gated through 10.0.1.1.
  • However, the issue is that for c1teststand there are now two accessible networks which have the same IP addresses 192.168.113.xxx. So when you try to ssh, it always searches its local c1teststand subnetwork instead of routing through the NAT router to the martian network.
  • To work around this, I had to manually provide ip routes on c1teststand for connecting to two of the computers (nodus and pianosa) in the martian network. This is done by:
    ip route add 192.168.113.200 via 10.0.1.1 dev eno1
    ip route add 192.168.113.216 via 10.0.1.1 dev eno1
  • This gives c1teststand a specific path for ssh requests to/from these computers in the martian network.
  16273   Mon Aug 9 10:38:48 2021   Anchal   Update   BHD   c1teststand subnetwork now accessible remotely

I had to add the following two lines to the /etc/network/interfaces file to make the special ip routes persistent across reboots:

post-up ip route add 192.168.113.200 via 10.0.1.1 dev eno1
post-up ip route add 192.168.113.216 via 10.0.1.1 dev eno1

  16382   Tue Oct 5 18:00:53 2021   Anchal   Summary   CDS   c1teststand time synchronization working now

Today I got a new router that I used to connect c1teststand, fb1 and chiara. I was able to get internet access on c1teststand and fb1, but not on chiara. I'm not sure why that is the case.

The good news is that the ntp server on fb1 (clone) is working fine now, and both FE computers, c1bhd and c1sus2, are successfully synchronized to the fb1 (clone) ntp server. This resolves any possible timing issues in this DAQ network.

On running the IOP and user models, however, I see the same errors as mentioned in 40m/16372. Something to do with:

Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: mx_connect failed Nic ID not Found in Peer Table
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1x07_daq mmapped address is 0x7fa4819cc000
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1su2_daq mmapped address is 0x7fa47d9cc000


Thu Oct 7 17:04:31 2021

I fixed the issue of chiara not getting internet. Now c1teststand, fb1 and chiara all have internet connections. It was an issue with the default gateway and interface and finding the DNS. I have found the correct settings now.

  14239   Tue Oct 9 16:05:29 2018   gautam   Configuration   ASC   c1tst deleted, c1asy deployed.

Setting up c1asy:

  • Backed up old c1tst.mdl as c1tst_old_bak.mdl in /opt/rtcds/userapps/release/cds/c1/models
  • Copied the c1tst model to /opt/rtcds/userapps/release/isc/c1/models/c1asy.mdl as this is where the c1asx.mdl file resides.
  • Backed up original c1rfm.mdl as c1rfm_old.mdl in /opt/rtcds/userapps/release/cds/c1/models (since the old c1tst had an RFM block which is unnecessary).
  • Deleted offending RFM block from c1rfm.mdl.
  • Recompiled and re-installed c1rfm.mdl. Model has not yet been restarted, as I'd like suspension watchdogs to be shutdown, but c1susaux EPICS channels are presently not responsive.
  • Removed c1tst model (C-node91) from /opt/rtcds/caltech/c1/target/gds/param/testpoints.
  • Removed /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1tst.par (at this point, DCUID 91 is free for use by c1asy).
  • Moved c1tst line in /opt/rtcds/caltech/c1/target/daqd/master to "old model definitions models" section.
  • Added /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1asy.par to the master file.
  • Edited /diskless/root.jessie/etc/rtsystab to allow c1asy to be run on c1iscey.
  • Finally, I followed the instructions here to get the channels into frames and make all the indicators green.

Now Yuki can work on copying the simulink model (copy c1asx structure) and implementing the autoalignment servo.

Attachment 1: CDSoverview_ASY.png
  14507   Tue Apr 2 14:53:57 2019   gautam   Update   CDS   c1vac added to burt

I deleted references to c1vac1 and c1vac2 (which no longer exist) and added c1vac to the autoburt request file list at /opt/rtcds/caltech/c1/burt/autoburt/requestfilelist

  14641   Tue May 28 09:51:33 2019   gautam   Update   VAC   c1vac hard-rebooted

The vacuum itself was fine - CC1 gauge reported a pressure of 1.3e-5 torr. Note to self: the C1:Vac-CC1_HORNET_PRESSURE channel, which is the analog readback of the Hornet gauge and which is hooked up to an Acromag ADC in the c1auxex chassis, is independent of the status of the c1vac machine, and so can serve as a diagnostic.

However, I was unable to interact with c1vac in any way, the monitor hooked up directly to it was showing a frozen display. So I hard-rebooted the system. It took a few minutes to come back online - but even after 10 minutes of waiting, still no display. In the process of the reboot, several valves were closed off - when the EPICS processes restart, there are momentary instances where the readback channels get an "undefined" value, which prompts the main interlock process to transition to a "SAFE" state. 

Running df -h, I saw that the /var partition was completely full. Maybe this was somehow interfering with the machine running smoothly? Two files in particular, daemon.log and daemon.log.1 were ~1GB each. The contents of these files seemed to be just the readbacks for the caget and caput commands. So I cleared both these files, and now the /var partition usage is only 26%. I also got the display back up and running on the physical monitor hooked up to the c1vac machine's VGA port. Let's see if this has improved the stability situation. The CPU load is still high (~6-7), with most of this coming from the modbus process. Why is this so high? c1susaux has more Acromag units but claims a much lower load of 0.71. Is the CPU of the c1vac machine somehow inferior?

In the meantime, I ssh-ed into c1vac and restored the "Vacuum normal" valve config. During this little escapade, the main volume pressure rose to ~6e-5 torr. It's coming back down smoothly.


Unrelated to this work: we had turned the RGA off for the vent, I powered it back on and re-initialized it this morning.

Attachment 1: Screen_Shot_2019-05-31_at_12.44.54_PM.png
  14640   Mon May 27 11:37:13 2019   gautam   Update   VAC   c1vac is unresponsive

I've been monitoring the status of the pumpdown remotely with ndscope lookbacks of C1:Vac-CC1_pressure. Today morning, I saw that the channel was putting out a constant value (signature of EPICS server being frozen). caget did not work either. Then I tried ssh-ing into c1vac to see if there were any issues but I was unable to. The machine isn't responding to ping either. The EPICS value has been frozen since ~1030pm PDT 26 May 2019.

I will try and head to campus later today to check on it. Isn't an email alert or something supposed to be sent out in such an event?

  14279   Tue Nov 6 23:19:06 2018   gautam   Update   VAC   c1vac1 FAIL lights on (briefly)

Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see if the box was getting +24V DC from the Sorensen or not. Upon sticking the card in, the FAIL LEDs on all the VME cards came on. We immediately removed the extender card. Without any intervention from us, after ~1 minute, the FAIL LEDs went off again. Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues and the c1vac1 computer is still responsive.

But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything.

Is there a reason why extender cards shouldn't be stuck into eurocrates?

Attachment 1: Screenshot_from_2018-11-06_23-18-23.png
Attachment 2: Screenshot_from_2018-11-06_23-19-26.png
  14281   Wed Nov 7 08:32:32 2018   Steve   Update   VAC   c1vac1 FAIL lights on (briefly)...checked

The vacuum and MC are OK

Quote:

Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see if the box was getting +24V DC from the Sorensen or not. Upon sticking the card in, the FAIL LEDs on all the VME cards came on. We immediately removed the extender card. Without any intervention from us, after ~1 minute, the FAIL LEDs went off again. Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues and the c1vac1 computer is still responsive.

But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything.

Is there a reason why extender cards shouldn't be stuck into eurocrates?

 

Attachment 1: Vac_MC_OK.png
  14207   Fri Sep 21 16:51:43 2018   gautam   Update   VAC   c1vac1 is unresponsive

Steve pointed out that some of the vacuum MEDM screen fields were reporting "NO COMM". Koji confirmed that this is a c1vac1 problem, likely the same as reported here and can be fixed using the same procedure.

However, Steve is worried that the interlock won't kick in in case of a vacuum emergency, so we are leaving the PSL shutter closed over the weekend. The problem will be revisited on Monday.

  14215   Mon Sep 24 15:06:10 2018   gautam   Update   VAC   c1vac1 reboot + TP1 controller replacement

[steve, gautam]

Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button.

While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional steps; for the various valve closures, see the slow machine reset procedure elog):

  • Turn power off using switch on rear.
  • Remove 4 connecting cables on the back.
  • Switch controllers.
  • Reconnect 4 cables on the back panel.
  • Turn power back on using switch on rear.

However, we were foiled by a Phillips screw on the DB37 connector labelled "MAG BRG", whose head was completely worn out. We had to make a cut in this screw using a saw blade and use a "-" screwdriver to get this troublesome screw out. Steve suspects this is a metric-gauge screw and will request that the company send us a new one; we will replace it when re-installing the maintained controller. 

Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.

Quote:

The problem will be revisited on Monday.

Attachment 1: beforeReboot.png
Attachment 2: afterReboot.png
Attachment 3: CC1.png
  14278   Tue Nov 6 19:41:46 2018   Jon   Omnistructure      c1vac1/2 replacement

This afternoon I started setting up the Supermicro 5017A-EP that will replace c1vac1/2. Following Johannes's procedure in 13681, I installed Debian 8.11 (jessie). A more recent stable release, 9.5, has become available since the first acromag machine was assembled, but I stuck to version 8 for consistency; we already know that version works. The setup is sitting on the left side of the electronics bench for now.

  1505   Mon Apr 20 23:27:59 2009   rana   Summary   VAC   c1vac2 rebooted: non-functional for several months
We found several problems with the framebuilder tonight. The first symptom was that it was totally out of
disk space. The latest daqd log file had gone up to 500 MB and filled the space. The log file was full of
a lot of requests from my seisBLRMS.m code, but what was really making it so big was that it couldn't
connect to c1vac2 (aka scipe4) to make connections for some channels.

We looked into the daqd log files and this has been going on since at least December. There were several
'whited out' records for TP2 and TP3 in the Vacuum overview as well as the Checklist screen! Why did no
one notice this and fix it??
WE cannot function if we just ignore any non-functioning displays and say
"Oh, that never worked."

For sure, we know that it was working in 2005. Jay and Steve and Alan looked at it.

Today it was responding to ping and telnet, but not allowing any new connections. I hit the RESET button
on it. Several lights went RED and then it came back up. The readbacks on the EPICS screens are OK too.

I went into fb0 and deleted many of the GB size log files from the past several months. There is now
19GB free out of its local 33GB disk.
  5989   Wed Nov 23 16:48:39 2011   Suresh   Update   General   cable cleanup

[Koji Suresh]

As part of the general lab cleanup we removed many unused BNC cables (long and short) from around the SP table.  We removed one very long BNC cable which was connected on one side to a PEM input and not connected on the other side, near the 1X2 rack.  There were several cables from an old SURF phase camera project which were still attached to a couple of RF amps on the SP tables and running towards the 1X6 rack. 

We also removed some unused power cables  plugged into a power distribution strip near Megatron.

 

  875   Mon Aug 25 10:23:53 2008   steve   HowTo   General   cable killer
Rack 1Y7 double violation:

BNC cables left to be jammed by door

and see destroyed BNCs

RED fibers should be rerouted.
I placed protective obstacle in position
so the door can not be closed.

Please do not do this!

DNA analysis is in progress on your finger prints.
Attachment 1: cablkill.png
Attachment 2: cablkll2.png
  7873   Thu Jan 3 19:19:59 2013   rana   HowTo   Electronics   cable racks

Today I found 3 power cables in the orange Pomona cable tray, put in in such a way that the cables were damaged and therefore dangerous.

Please think about what you are doing before doing it. Damaging these things because you are in a hurry or frustrated will just waste our time and damage our interferometer.

For reference, we only use the thick blue Pomona racks for power cables. We use the orange and black ones for thinner cables. Pay attention and keep the cables organized.

Cable Rack Selection

 

  4390   Wed Mar 9 16:07:42 2011   kiwamu   Update   VIDEO   cable session

[Koji, Steve, Suresh, Kiwamu]

The following video cables have been newly laid down :

  - MC1F/MC3F (65 ft.)

  - PMCR (100 ft.)

  - PSL spare (100 ft.)

  - PSL1  (100 ft.)

  - PSL2  (100 ft.)

 

  11659   Fri Oct 2 15:11:08 2015   Steve   Update   PEM   cable squashed

Cable #53, from Accelerometer 4 to 1X7 / DAQ input c26, was squashed while removing a network card from the Sun Fire x4600 today.

This cable has to be tested.

  7807   Tue Dec 11 08:53:52 2012   Steve   HowTo   PEM   cables needs care

How NOT to:

The janitor cannot clean in areas like this. He may step on these cables accidentally as he dust-wipes our chambers.

Attachment 1: IMG_1839.JPG
  7809   Tue Dec 11 10:09:04 2012   Ayaka   HowTo   PEM   cables needs care

Quote:

How NOT to:

The janitor cannot clean in areas like this. He may step on these cables accidentally as he dust-wipes our chambers.

 Sorry for the mess. I fixed it.

  3996   Tue Nov 30 12:33:27 2010   kiwamu   Summary   IOO   cabling of in-vac PZT mirrors

  10066   Wed Jun 18 22:34:44 2014   ericq   Update   IOO   caget frustration

Quote:

 Somehow the caget/caput commands are really slow. I'm not sure if this is new behavior or not, but after changing values, it takes ~1-2 seconds to move on to the next command.

This is still happening. Specifically: on all of the control room computers, calls to caget display the result immediately, but then hang for five seconds (consistently five). We had also seen a situation where calls hang indefinitely on ottavia/pianosa, but a reboot "fixes" this.

Some observations:

  • Front end machines and the FB have proper caget/caput response times.
  • Control room machines have some odd ping behavior when targeting frontends/FB; namely the ping times themselves are ok, but each ping line takes quite some time to show up, which made us think that there is odd network routing issue happening with some network switch. 
  • Front ends and FB get epics from /opt/rtapps, whereas control room machines get epics from /ligo/apps, which has different contents. (Is this for Gentoo vs. Ubuntu? I don't really get why this is the case...). This means different environment setting scripts to be called, so maybe the control room machines are misconfigured in some way for the new name server?

I poked around the network settings on all of these machines, but everything seemed reasonable. Nothing was changed. Rossa and Pianosa have their network settings done through some Ubuntu GUI, but I don't know where the settings are written. I had expected their settings to be in /etc/network/interfaces; maybe we should change this to be consistent with other machines, and easier to administrate via the terminal. 

Despite all this, ezcaread is fine.

  10077   Thu Jun 19 22:04:23 2014   ericq   Update   Computer Scripts / Programs   caget/caput now return in reasonable time

I think I've fixed the caget/caput issue. Rana's observation that pinging the IP directly was faster than pinging the hostname set me on a path of googling which led to the following changes to the DNS setup on chiara (specifically, informed by this thread: http://www.dslreports.com/forum/r11836974-BIND-slow-to-reply-over-LAN-Solved)

/etc/bind/named.conf.local has these lines:

zone "martian" IN {
 type master;
 file "/etc/bind/zones/martian.db";
 };
zone "113.168.192.in-addr.arpa" {
 type master;
 file "/etc/bind/zones/rev.113.168.192.in-addr.arpa";
};

The first zone command links hostnames like c1lsc to an IP like 192.168.113.62, but apparently in the second, we need to do the inverse. So, for each line in martian.db like

c1lsc           A       192.168.113.62

I added a line in rev.113.168.192.in-addr.arpa like so:

62 IN PTR c1lsc.martian

This seems kind of silly, but now if you do the host command from a workstation, it can find the hostname associated with an IP. 

controls@pianosa|~ > host 192.168.113.62
62.113.168.192.in-addr.arpa domain name pointer scipe12.martian.113.168.192.in-addr.arpa.
62.113.168.192.in-addr.arpa domain name pointer c1lsc.martian.113.168.192.in-addr.arpa.
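
If one wanted to avoid maintaining the forward and reverse zone files by hand, a small helper along these lines could generate the PTR lines directly from martian.db. This is only a hypothetical sketch (the A-record format is assumed from the example line above), not something that was actually set up:

    import re

    # Print a reverse-zone PTR line for every 192.168.113.x A record in the forward zone file
    with open("/etc/bind/zones/martian.db") as fwd:
        for line in fwd:
            m = re.match(r"^(\S+)\s+(?:IN\s+)?A\s+192\.168\.113\.(\d+)\s*$", line)
            if m:
                host, last_octet = m.groups()
                print(f"{last_octet} IN PTR {host}.martian")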

[At this point, note that we have a bunch of duplicate entries in https://wiki-40m.ligo.caltech.edu/Martian_Host_Table  with these scipe## hostnames. What are these for?]


 
Now (edited for brevity):
 
controls@ottavia|~ > ping -c 5 -D c1sus
PING c1sus.martian (192.168.113.85) 56(84) bytes of data.
<SNIP>
--- c1sus.martian ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3997ms
rtt min/avg/max/mdev = 0.051/0.075/0.114/0.028 ms
controls@ottavia|~ > ping -c 5 -D 192.168.113.85
PING 192.168.113.85 (192.168.113.85) 56(84) bytes of data.
<SNIP>
--- 192.168.113.85 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3998ms
rtt min/avg/max/mdev = 0.052/0.130/0.380/0.127 ms
 
controls@pianosa|~ > time caget C1:LSC-XARM_GAIN
C1:LSC-XARM_GAIN               0.015
real    0m0.039s
 
controls@pianosa|~ > time caput C1:LSC-XARM_GAIN 0.0151
Old : C1:LSC-XARM_GAIN               0.015
New : C1:LSC-XARM_GAIN               0.0151
real    0m0.054s
 
 
 

 

  6931   Fri Jul 6 14:10:31 2012   yuta   Summary   LSC   calculation of FPMI using ALS

From this calculation, the phase fluctuation of the beam reflected from a length-stabilized arm does not disturb the MI lock.

Easy calculation:
  The phase the PD at the AS port senses is

phi = phi_x - phi_y = 2*l_MICH*omega/c + (phi_X - phi_Y)

  where l_MICH is the Michelson differential length change, omega is the laser frequency, and phi_X and phi_Y are the phases of the arm-reflected beams. From a very complicated calculation,

phi_X ~ F/2 * Phi_X

  near resonance, where F is the arm finesse and Phi_X is the round-trip phase change in the X arm. So,

phi = 2*l_MICH*omega/c + F/2 * 2*L_DARM*omega/c

  Our ALS stabilizes the arm length to ~70 pm (see elogs #6835 and #6858). The finesse for IR is ~450. Considering l_MICH is ~1 um, the MICH signal at the AS port should be larger than the stabilized DARM signal by an order of magnitude.
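
  As a quick numerical check of that last claim (my own arithmetic with the numbers quoted above, not part of the original entry):

    # phi_MICH / phi_DARM from the formula above: (2*l_MICH) / (F*L_DARM)
    l_mich = 1e-6      # m, Michelson differential length change
    finesse = 450.0    # arm finesse for IR
    l_darm = 70e-12    # m, residual arm length fluctuation with ALS
    print((2 * l_mich) / (finesse * l_darm))   # ~60, i.e. MICH dominates by more than an order of magnitude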

Length sensing matrix of FPMI:
  Calculated length sensing matrix of 40m FPMI is below. Here, I'm just considering 11 MHz modulation. I assumed input power to be 1 W, modulation index 0.1i, Schnupp asymmetry 26.6 mm. PRM/SRM transmissivity is not taken into account.

[W/m]     DARM      CARM      MICH
REFL_I    0         1.69e8    0
REFL_Q    7.09e1    0        -3.61e3
AS_I      0         0         0
AS_Q      1.04e6    0         3.61e3


  Maybe we should use REFL_Q as MICH signal, but since IQ separation is not perfect, we see too much CARM. I tried to lock MI with REFL11_Q yesterday, but failed.

  4402   Thu Mar 10 17:03:48 2011   Larisa Thorne   Configuration   Electronics   calculations for passive low pass filter on X arm

[Kiwamu, Larisa] 

 

We want to increase gain in the lower frequencies, so a circuit must be designed (a passive low pass filter). 

 

First, measurements were taken at the X arm for impedance and capacitance, which were 104.5 kOhms and 84.7 pF respectively. Kiwamu decided to make the circuit resemble a voltage divider for ease of calculation, such that Vout/Vin would be a ratio of the equivalent circuit reactances. After a few algebra mistakes, this Vout/Vin value was simplified in terms of the measured R and C and the R' and C' that would be needed to complete the circuit. 

Since the measured C was very small and the measured R was fairly high, the simplified form allowed us to pick values of R' and C' that would make the critical frequency occur at 0.1 Hz: setting the R' resistance to 1 MOhm and the C' capacitance to 10 uF would yield a gain of ~1.

With these values chosen, we can start actually building the circuit.

  8248   Thu Mar 7 01:43:35 2013   yuta   Update   LSC   calibrated MI differential length spectra

Free swing MI differential length is 86 nm RMS and residual length when locked is 0.045 nm RMS(in-loop).
Looks very quiet. Comparison with PRMI is the next step.

Openloop transfer function:
  OLTF of simple MI lock using AS55_Q_ERR as error signal and ITMs as actuators is below.
  UGF ~ 90 Hz, phase margin ~ 40deg
  I added 16 Hz resonant gain to suppress bounce mode.
LSCMICHOLTF_MI.png

MI differential length spectra:
  Below. Calibration was done using calibrated AS55_Q_ERR and actuator response(elog #8242)
MImotion.png


  Expected free swing is calculated using

x_free = (1+G)/G * A * fb

where G is the open-loop transfer function, A is the actuator response, and fb is the feedback signal (C1:LSC_ITMX/Y_IN1) spectrum. I modeled A as a simple pendulum with resonant frequency 1 Hz and Q = 5. Since the free-swing RMS is dominated by this resonance, the RMS depends on this Q assumption.
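
A minimal sketch of this reconstruction (not the script actually used; the pendulum DC gain is a placeholder chosen to match the ITM actuator calibrations quoted elsewhere in this log, and G is assumed to be interpolated onto the frequency vector of the feedback spectrum):

    import numpy as np

    def pendulum_response(f, dc_gain=4.8e-9, f0=1.0, Q=5.0):
        # Simple pendulum actuator response [m/count]: resonance at f0, quality factor Q
        s = 2j * np.pi * f
        w0 = 2 * np.pi * f0
        return dc_gain * w0**2 / (s**2 + s * w0 / Q + w0**2)

    def free_swing_asd(f, G, fb_asd):
        # x_free = (1+G)/G * A * fb, applied to amplitude spectral densities
        return np.abs((1 + G) / G * pendulum_response(f)) * fb_asd

    # usage sketch: f, G from a swept-sine export; fb_asd from a spectrum of the feedback signal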

  6841   Wed Jun 20 18:43:57 2012   yuta   Update   LSC   calibrated POX error signal

[Jenne, Yuta]

We did the same calibration for POX. It was 3.8e12 counts/m. See elog #6834 for the details of calibration we did.

According to Kiwamu's calibration, actuator response of ITMX is;

A_ITMX  = 4.913e-09 Hz^2*counts/m / freq^2

Plots below are results from our calibration measurement.

LSCxarmTF_usingITMX.png  LSCxarm_HAover1plusG.png  POXerrorcalibration.png

  6834   Tue Jun 19 23:36:19 2012   yuta   Update   LSC   calibrated POY error signal

[Jenne, Yuta]

We calibrated POY error signal(C1:LSC-POY11_I_ERR). It was 1.4e12 counts/m.

Modeling of Y arm lock:
  Let's say H is transfer function from Y arm length displacement to POY error signal. This is what we want to measure.
  F is the servo filter (filter module C1:LSC-YARM).
  A is the actuator TF using ITMY. According to Kiwamu's calibration using MICH (see elog #5583),

  A_ITMY  = 4.832e-09 Hz^2*counts/m / freq^2

  We used ITMY to lock Y arm because ITMY is already calibrated.

What we did:
  1. Measured the open-loop transfer function of the Y arm lock using the POY error signal, with ITMY as the actuator (G = HFA). We noticed some discrepancy in phase with our model. If we include an 1800 usec delay, the phase fits the measurement well. I think this delay is too big.
LSCyarmTF_usingITMY.png


  2. Measured a transfer function between actuator to POY error signal during lock. This should give us HA/(1+G).
LSCyarm_HAover1plusG.png

  4. Calculated H using the measurements above. Assuming there's no frequency dependence in H, we got

  H = 1.4e12 counts/m

POYerrorcalibration.png

 For a sanity check: the peak-to-peak of the POY error signal when crossing the IR resonance is about 800 counts. The FWHM is about 1 nm, so our measurement is not so crazy.
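
For reference, a minimal numerical sketch of steps 1-4 (an assumed workflow, not the actual script; both measured transfer functions are taken to be on the same frequency vector):

    import numpy as np

    def itmy_actuator(f, gain=4.832e-9):
        # Kiwamu's ITMY calibration: A = gain / f^2  [m/counts]
        return gain / f**2

    def poy_sensor_gain(f, G, M):
        # G = HFA is the measured open-loop TF; M = HA/(1+G) is the measured
        # actuator-to-error-signal TF, so H = M * (1+G) / A  [counts/m]
        return np.abs(M * (1 + G) / itmy_actuator(f))

    # usage sketch: H = poy_sensor_gain(f, G, M); print(H.mean())   # ~1.4e12 counts/m per this entry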

  6835   Wed Jun 20 00:01:04 2012   Jenne   Update   LSC   calibrated POY error signal

[Yuta, Jenne]

We have measured the out of loop residual motion of the Yarm while locked with the ALS.  We see ~70pm RMS, as compared to Kiwamu's best of ~24pm RMS.  So we're not yet meeting Kiwamu's best measurement, but we're certainly not in crazy-land.

The Yarm ALS was locked, I took a spectrum of POY11_I_ERR, and used the calibration that we determined earlier this evening.  For reference, I attach a screenshot of our ALS loop filters - we had on all the boosts, and both resonant gain filters (~3Hz and ~16Hz).

A large part of the RMS is coming from the 60Hz power line and the 180Hz harmonic....if we could get rid of these (how were they eliminated from the measurement that Kiwamu used in the paper?? - plotted elog 6780) we would be closer. 

Also, it looks like the hump (in our measurement at ~100 Hz, in Kiwamu's at ~200 Hz) is not quite an order of magnitude higher in amplitude in our measurement vs. Kiwamu's.  We have ~5e-11 m/rtHz, Kiwamu had ~7e-12 m/rtHz.  This increase in noise could be coming from the fact that Yuta and Koji decreased the gain in the Ygreen PDH loop to prevent the PDH box from oscillating. 

While we should still think about why we can't use the same gain that Kiwamu was able to ~6 months ago, we think that we're good enough that we can move on to doing mode scans and residual motion measurements of the Xarm.

 

Attachment 1: LSC_POY_11_I_ERR_calib_19June2012.pdf
Attachment 2: POY_calib_19June2012_FiltBankSettings.png
  8256   Fri Mar 8 03:07:19 2013   yuta   Update   LSC   calibrated PRM-ITMY length spectra

Measured free swing PRM-ITMY length was 230 nm RMS.
MI differential length was 85 nm RMS(elog #8248). This tells you that PR2, PR3 are not so noisy compared with usual suspensions.

Openloop transfer function:
  OLTF of PRM-ITMY cavity lock using REFL55_Q_ERR as error signal and PRM as actuator is below.
  UGF ~ 120 Hz, phase margin ~ 50 deg.
  Somehow, phase delay was 460 usec, which is smaller than the empirical value 550 usec.
LSCPRCLOLTF_PRITMY.png


PRM-ITMY length spectra:
  Below. Calibration was done using calibrated REFL55_Q_ERR and actuator response(elog #8255).
PRITMYmotion.png

  9606   Wed Feb 5 20:41:57 2014   Den   Update   LSC   calibrated spectra from OAF test

We did an online adaptive filtering test with the IMC and arms 1 year ago (log 7771). In the 40m presentations I can still see the plot with uncalibrated control spectra that was attached to that log. Now is the time to attach the calibrated one.

The template is in /users/den/oaf.

Attachment 1: oaf_cal.pdf
  6938   Sun Jul 8 00:27:54 2012   yuta   Summary   Locking   calibrating phase tracking mode scan data

FSR for X/Y arm are 3.97 +/- 0.03 MHz and 3.96 +/- 0.02 MHz respectively. This means X/Y arm lengths are 37.6 +/- 0.3 m and 37.9 +/- 0.2 m respectively.
I calibrated the mode scan results using 11MHz sideband as frequency reference.
Calibration factor between the phase of the phase tracker and IR frequency is 9.81 +/- 0.05 kHz/deg for X arm, 9.65 +/- 0.02 kHz/deg for Y arm.

Calculation:
  For the mode scan measurements, we swept the phase of the phase tracker linearly with time. The previous calculation was done without calibrating seconds into actual IR frequency. A first-order calibration can be done using the modulation frequency as a reference. Note that I'm still assuming our sweep was linear here.

  The relation between the FSR and the modulation frequency can be written as

f_mod = n * nu_FSR + nu_f

  where f_mod is the modulation frequency, n is an integer, and nu_f = mod(f_mod, nu_FSR).
  nu_FSR and nu_f are measurable values (in seconds) from the mode scan. We know that f_mod = 11065910 Hz (elog #6027). We also know that nu_FSR is designed to be ~3.7 MHz(=c/2L). So, n = 2.
  We can calculate f_mod in seconds, so we can calibrate seconds into IR frequency.


Calibrating X arm mode scan:
  From the 8FSR mode-scan data (see elog #6859), positions of TEM00 and upper/lower 11 MHz sidebands in seconds are;

TEM00    242.00     214.76     187.22     159.27     131.33     102.96     74.61     46.00     17.51
upper    236.70     209.05     181.36     153.42     125.06      96.86     68.43     40.20
lower    220.35     192.96     165.03     136.98     108.92      80.65     52.25     23.90


  So, FSR and nu_f in seconds are;

FSR    27.24     27.54     27.95     27.94     28.37     28.35     28.61     28.49
nu_f   21.80     21.82     22.14     22.19     22.26     22.28     22.40     22.40


  By using formula above, modulation frequency in seconds are;

f_mod    76.28    76.90    78.04    78.07    79.00    78.98    79.62    79.38

  By taking average, FSR and f_mod in seconds are

FSR    28.1 +/- 0.2
f_mod    78.3 +/- 0.4

  We know that f_mod = 11065910 Hz, so conversion constant from seconds to frequency is

k1 = 0.1413 +/- 0.0007 MHz/sec

  We swept the phase by 3600 deg in 250 sec, so conversion constant from degree to frequency is

k2 = 9.81 +/- 0.05 kHz/deg

  Also, using k1, FSR for X arm is

FSR = 3.97 +/- 0.03 MHz

  This means, X arm length is

L = c/(2*FSR) = 37.6 +/- 0.3 m


Calibrating Y arm mode scan:
  From the 8FSR mode-scan data (see elog #6832), positions of TEM00 and upper/lower 11 MHz sidebands in seconds are;

TEM00    246.70     218.15     190.06     161.87     133.26     104.75     76.01     47.19     18.60
upper    240.86     212.78     184.32     155.73     127.23      98.48     69.78     41.26
lower    224.53     195.73     167.31     139.13     110.81      82.27     53.60     24.50


  So, FSR and nu_f in seconds are;

FSR    28.55     28.09     28.19     28.61     28.51     28.74     28.82     28.59
nu_f   22.44     22.57     22.60     22.61     22.47     22.48     22.50     22.68


  By using formula above, modulation frequency in seconds are;

f_mod    79.54    78.75    78.98    79.825    79.485    79.955    80.14    79.855


  By taking average, FSR and f_mod in seconds are

FSR    28.5 +/- 0.1
f_mod    79.6 +/- 0.2

  We know that f_mod = 11065910 Hz, so conversion constant from seconds to frequency is

k1 = 0.1390 +/- 0.0003 MHz/sec

  We swept the phase by 3600 deg in 250 sec, so conversion constant from degree to frequency is

k2 = 9.65 +/- 0.02 kHz/deg

  (k2 of X arm and Y arm is different because delay-line lengths are different)
  Using k1, FSR for Y arm is

FSR = 3.96 +/- 0.02 MHz

  This means, Y arm length is

L = c/(2*FSR) = 37.9 +/- 0.2 m
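
For reference, a minimal sketch of the reduction above for the X arm data (my own re-derivation from the tabulated peak positions, not the original script; it reproduces the quoted k2, FSR and length to within the stated uncertainties):

    import numpy as np

    c = 299792458.0        # m/s
    f_mod = 11065910.0     # Hz, modulation frequency (elog #6027)
    sweep_deg, sweep_sec = 3600.0, 250.0   # phase tracker sweep used for the scan

    # X arm peak positions in seconds, copied from the table above
    tem00 = np.array([242.00, 214.76, 187.22, 159.27, 131.33, 102.96, 74.61, 46.00, 17.51])
    upper = np.array([236.70, 209.05, 181.36, 153.42, 125.06, 96.86, 68.43, 40.20])
    lower = np.array([220.35, 192.96, 165.03, 136.98, 108.92, 80.65, 52.25, 23.90])

    fsr_sec = -np.diff(tem00)                                     # FSR in seconds (peaks move toward smaller t)
    nu_f_sec = ((tem00[:-1] - lower) + (upper - tem00[1:])) / 2   # carrier-to-sideband offset in seconds
    f_mod_sec = 2 * fsr_sec + nu_f_sec                            # f_mod = 2*nu_FSR + nu_f  (n = 2)

    k1 = f_mod / f_mod_sec.mean()        # Hz per second of sweep
    k2 = k1 * sweep_sec / sweep_deg      # Hz per degree of phase tracker phase
    fsr = k1 * fsr_sec.mean()
    print(k2 / 1e3, "kHz/deg;", fsr / 1e6, "MHz;", c / (2 * fsr), "m")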


Summary of mode scan results:
X arm
  Mode matching    MMR = 91.2 +/- 0.3 % (elog #6859) Note that we had ~2% of 01/10 mode.
  FSR         FSR = 3.97 +/- 0.03 MHz (this elog)
  finesse    F = 416 +/- 6 (elog #6859)
  g-factor    g1*g2 = 0.3737 +/- 0.002 (elog #6922)

  length        L = 37.6 +/- 0.3 m (this elog)
  ETM RoC  R2 = 60.0 +/- 0.5 m (this elog and #6922; assuming ITM is flat)

Y arm
  Mode matching    MMR = 86.7 +/- 0.3 % (elog #6828) Note that we had ~5% of 01/10 mode.
  FSR         FSR = 3.96 +/- 0.02 MHz (this elog)
  finesse    F = 421 +/- 6 (elog #6832)
  g-factor    g1*g2 = 0.3765 +/- 0.003 (elog #6922)

  length       L = 37.9 +/- 0.2 m (this elog)
  ETM RoC R2 = 60.7 +/- 0.3 m (this elog and #6922; assuming ITM is flat)

  I think these are all the important arm parameters we can derive just from mode scan measurement.

  All errors shown above are 1-sigma statistical errors. We need a linearity check to assign a systematic error. Also, we will need a more precise calibration after that if the sweep has considerably large non-linearity. To do the linearity check, I think the most straightforward way is to install a frequency divider to monitor the actual beat frequency during the sweep.

  6939   Sun Jul 8 00:58:08 2012   Koji   Summary   Locking   calibrating phase tracking mode scan data

Quote:

FSR for X/Y arm are 3.97 +/- 0.03 MHz and 3.96 +/- 0.02 MHz respectively. This means X/Y arm lengths are 37.6 +/- 0.3 m and 37.9 +/- 0.2 m respectively.

These aren't so bad. (Look at this entry)

And interestingly the ETM curvatures are closer to ATF measurements than Coastline's measurement. (Look at wiki)

  6815   Wed Jun 13 17:39:13 2012   yuta   Update   Green Locking   calibrating the beatbox

[Jenne, Yuta]

We put a 0 dBm sine wave into the RF input of the beatbox and linearly swept the frequency of the sine wave from 0 to 200 MHz using a network analyzer (Agilent 4395A).
(We first tried to use the 11 MHz EOM Marconi.)

During the sweep, we recorded the outputs of the beatbox, C1:ALS-BEATY_(FINE|COARSE)_(I|Q)_IN1_DQ. We made them DQ channels today. Also, we put a gain of 10 after the beatbox, before the ADC, as a temporary whitening filter using SR560s.

We fitted the signals with a sine wave using a least-squares fit (scipy.optimize.leastsq).
The transition time of the frequency from 200 MHz back to 0 Hz can be seen as the discontinuity in the time series. We can convert time to frequency using this, supposing the linear sweep of the network analyzer is perfect.

Plots below are time series data of each signal(top) and expansion of the fitted region with x axis calibrated in frequency (bottom).

ALS-BEATY_COARSE_I_IN1_DQ.png  ALS-BEATY_COARSE_Q_IN1_DQ.png
ALS-BEATY_FINE_I_IN1_DQ.png  ALS-BEATY_FINE_Q_IN1_DQ.png


We got

C1:ALS-BEATY_COARSE_I_IN1_DQ = -1400 sin(0.048 freq + 1.17pi) - 410
C1:ALS-BEATY_COARSE_Q_IN1_DQ = 1900 sin(0.045 freq + 0.80pi) - 95

C1:ALS-BEATY_FINE_I_IN1_DQ = 1400 sin(0.89 freq + 0.74pi) + 15
C1:ALS-BEATY_FINE_Q_IN1_DQ = 1400 sin(0.89 freq + 1.24pi) - 3.4

(freq in MHz)

The delay line length calculated from this fitted value (supposing speed of signal in cable is 0.7c) is;

  D_coarse = 0.7c * 0.048/(2*pi*1MHz) =  1.6 m
  D_fine = 0.7c * 0.89/(2*pi*1MHz) = 30 m

So, the measurements look quite reasonable.

The FINE signals look nice because we have similar responses with a 0.5 pi phase difference.
For COARSE, maybe we need to do the measurement again because the frequency discontinuity may have affected the shape of the signal.
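
As a minimal sketch of the fit and the delay-line arithmetic above (an assumed form of the analysis, not the actual script; here the sweep is already converted to frequency in MHz):

    import numpy as np
    from scipy.optimize import leastsq

    def model(p, freq_mhz):
        amp, k, phase, offset = p
        return amp * np.sin(k * freq_mhz + phase) + offset

    def fit_beat_signal(freq_mhz, counts, p0=(1000.0, 0.5, 0.0, 0.0)):
        # Least-squares fit of the beatbox output (counts) vs swept frequency; returns (amp, k, phase, offset)
        residual = lambda p: model(p, freq_mhz) - counts
        p, _ = leastsq(residual, p0)
        return p

    def delay_line_length(k, v_factor=0.7):
        # Cable length implied by the fitted k [rad/MHz], assuming the signal travels at v_factor * c
        c = 299792458.0
        return v_factor * c * k / (2 * np.pi * 1e6)

    print(delay_line_length(0.048), delay_line_length(0.89))   # ~1.6 m (COARSE), ~30 m (FINE)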

  1185   Mon Dec 8 00:10:42 2008   caryn   Summary   General   calibrating the jenne laser
I apologize in advance for the long list of numbers in the attachment. I can't seem to make them hide for some reason.

So, since Jenne's laser will probably be used for the Stoch mon calibration, Alberto and I took some measurements to calibrate Jenne's laser.
We focused the beam onto the New Focus RF 1GHz photodetector that we stole from rana's lab (powered with NewFocus power 0901). Measured the DC output of the photodetector with scope. Aligned the beam so DC went up (also tried modulating laser at 33MHz and aligning so 33MHz peak went up). Hooked up the 4395a Spectrum/Network Analyzer to the laser and to the AC out of the photodetector (after calibrating Network analyzer with the cables) so that the frequency response of the laser*photodetector could be measured.
(Note: for a while, we were using a splitter, but for the measurements here, I got rid of the splitter and just sent the RFout through the cables to channel A for the calibration, sent RFout to the laser and photodetector to channel A for the measurement)

Measured the frequency response. At first, we got this weird thing with a dip around 290MHz (see jcal_dip_2_norm.png below).
After much fiddling, it appeared that the dip was from the laser itself. And if you pull up just right on the corner of this little metal flap on the laser (see picture), then the dip in the frequency response seems to go away and the frequency response is pretty flat(see jcal_flat_3_norm below). If you press down on the flap, the dip returns. This at least happened a couple of times.
Note that despite dividing the magnitude by the DC, the frequency responses don't all line up. I'm not sure why. In some cases the DC was drifting a bit (I presume the laser was coming out of alignment or decided to align itself better), and maybe with avgfactor=16 and measuring the mean DC on the scope, the DC measurement didn't match up with the frequency response measurement...
I've attached the data for the measurements made (I'm so sorry for all the #'s. I can't figure out how to hide them)
name/lasercurrent/DC/analyzer SourcePower/analyzer avgfactor
jcal7_1/I=31.7mA/DC=-4.41/SourcePower=0dBm/avgfactor=16
jcal7_2/I=31.7mA/DC=-1.56/SourcePower=0dBm/avgfactor=none
jcal8_1/I=31.7mA/DC=-4.58/SourcePower=0dBm/avgfactor=16
jcal8_2/I=31.7mA/DC=-2.02/SourcePower=0dBm/avgfactor=16
jcal8_3/I=31.7mA/DC=-3.37/SourcePower=0dBm/avgfactor=16
Note also that the data from the 4395a seems to have column1-frequency, column2-real part, column3-imaginary part...I think. So, to calculate the magnitude, I just took (column2)^2+(column3)^2.
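
A minimal sketch of reading one of these files under that assumption (hypothetical file name and loader, not the original analysis); note the square root, which turns the real/imaginary columns into a magnitude before normalizing by the DC level:

    import numpy as np

    def load_response(path, dc_volts):
        # Assumed 4395A export format: column 1 frequency, column 2 real part, column 3 imaginary part
        freq, re_part, im_part = np.loadtxt(path, unpack=True)
        mag = np.sqrt(re_part**2 + im_part**2) / abs(dc_volts)
        return freq, mag

    # usage sketch: freq, mag = load_response("jcal8_1.txt", dc_volts=4.58)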


To get sort of an upper-bound on the DC, I measured how DCmax varied with laser current, where DCmax is the DC for the best alignment I could get. After setting the current, the laser was modulated at 33MHz and the beam was aligned such that the 33MHz peak in the photodetector output was as tall as I could manage. Then DC was measured. See IvsDCmax.png. Note the DC is negative. I don't know why.

Also, the TV's don't look normal, the alarm's going off and I don't think the mode cleaner's locked.
Attachment 1: IvsDCmax.png
Attachment 2: data.tar.gz
Attachment 3: jcal_dip_2_norm_log.png
Attachment 4: jcal_flat_3_norm_log.png
  1189   Tue Dec 9 10:48:17 2008   Caryn   Summary   General   calibrating the jenne laser: impedance mismatch?

We sent RFout of network analyzer to a splitter, with one side going back to the network analyzer and the other to the laser modulation input. We observed a rippled transfer function through the splitter. The ripple is probably due to reflection due to an impedance mismatch in the laser.
Attachment 1: reflection.png
  8255   Fri Mar 8 02:17:04 2013   yuta   Update   LSC   calibration of PRM actuator

[Manasa, Yuta]

We measured AC response of PRM actuator using PRM-ITMY cavity.
Result is

PRM:  (19.6 +/- 0.3) x 10^{-9} (Hz/f)^2 m/counts

It is almost the same as in 2011 (elog #5583). We followed the same procedure as Kiwamu did.

What we did:
  1. Aligned PRMI in usual procedure, mis-aligned ITMX and locked PRM-ITMY cavity using REFL55_Q_ERR. POP DC was about 18 when locked.

  2. Set the UGF of the PRM-ITMY cavity lock to 10 Hz and introduced an elliptic LPF at 50 Hz (OLTF below).
OLTF_PRCL.png


  3. Measured transfer function from C1:LSC_ITMY_EXC to C1:LSC_REFL55_Q_ERR. Dividing this by ITMY actuator response(measured in elog #8242) gives calibration of REFL55_Q.

  4. Measured transfer function from C1:LSC_PRM_EXC to C1:LSC_REFL55_Q_ERR to calibrate PRM actuator.

Result:
  Calibration factor for REFL55_Q for PRM-ITMY cavity was (1.37 +/- 0.02) x 10^9 counts/m (plot below). Error is mainly from statistical error of the average.
calibREFL55Q.png


  Measured AC response (50-200 Hz) of PRM is below.
actcalibPRM.png
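
A minimal sketch of the arithmetic in steps 3-4 above (an assumed reduction, not the script actually used; both swept-sine TFs are taken to be exported on the same frequency vector in the 50-200 Hz band):

    import numpy as np

    def itmy_actuator(f, gain=4.832e-9):
        # ITMY actuator response: gain / f^2  [m/counts]
        return gain / f**2

    def prm_actuator_coefficient(f, tf_itmy_to_refl, tf_prm_to_refl):
        # Step 3: REFL55_Q calibration [counts/m] from the ITMY excitation
        sensor = np.abs(tf_itmy_to_refl) / itmy_actuator(f)
        # Step 4: PRM response [m/counts], then the 1/f^2 coefficient
        a_prm = np.abs(tf_prm_to_refl) / sensor
        return np.mean(a_prm * f**2)   # ~2.0e-8, consistent with the result quoted above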


Next:
  - Measure free-run length spectrum of PRM-ITMY cavity and compare with MICH free-run.

  5637   Sat Oct 8 00:44:42 2011   kiwamu   Update   LSC   calibration of SRM actuator

The AC response of the SRM actuator has been calibrated.

 actuators.png
(Summary of the calibration results)
     BS    = 2.190e-08 / f^2  [m/counts]
     ITMX  = 4.913e-09 / f^2  [m/counts]
     ITMY  = 4.832e-09 / f^2  [m/counts]
     PRM   = 2.022e-08 / f^2  [m/counts]
     SRM   = 2.477e-08 / f^2  [m/counts]    ( NEW ! )
 
(Measurement)
The same technique as I reported some time ago (#4721) was used.
The Signal-Recycled ITMY was locked for measuring the actuator response.
Since the ITMY actuator had already been calibrated, first the sensor was calibrated into [counts/m] by exciting the ITMY actuator, and then the SRM actuator was calibrated with a swept-sine measurement.
 
 - - notes to myself
   SRCL GAIN = 2.2
   Sensor = REFL11_I
   Demod. phase = 40 deg
   Resonant condition = Carrier resonant
   Gain in WF = 0 dB

Quote from #5583
The AC responses of the BS, ITMs and PRM actuators have been calibrated.

 
