ID   Date   Author   Type   Category   Subject
  2042   Fri Oct 2 15:11:44 2009   rob   Update   Computers   c1susvme2 timing problems update update

It got worse again, starting with locking last night, but it has not recovered.  Attached is a 3-day trend of SRM cpu load showing the good spell.

Attachment 1: srmcpu3.png
  2080   Mon Oct 12 14:51:41 2009   rob   Update   Computers   c1susvme2 timing problems update update update

Quote:

It got worse again, starting with locking last night, but it has not recovered.  Attached is a 3-day trend of SRM cpu load showing the good spell.

 Last week, Alex recompiled the c1susvme2 code without the decimation filters for the OUT16 channels, so these channels are now as aliased as the rest of them.  This appears to have helped with the timing issues: although it's not completely cured, it is much better.  Attached is a five-day trend.

Attachment 1: srmcpu.png
  1905   Fri Aug 14 15:29:43 2009   Jenne   Update   Computers   c1susvme2 was unmounted from /cvs/cds

When I came in earlier today, I noticed that c1susvme2 was red on the DAQ screens.  Since the vme computers always seem to be happier as a set, I hit the physical reset buttons on sosvme, susvme1 and susvme2.  I then telnetted or ssh'ed in, as appropriate, to each computer in turn.  sosvme and susvme1 came back just fine.  However, I couldn't cd to /cvs/cds/caltech/target/c1susvme2 while ssh'ed in to susvme2.  I could cd to /cvs/cds, and then did an ls, and it came back totally blank.  There was nothing at all in the folder.

Yoichi showed me how to do 'df' to figure out what filesystems are mounted, and it looked as though the filesystem was mounted.  But then Yoichi tried to unmount the filesystem, and it claimed that it wasn't mounted at all.  We then remounted the filesystem, and things were good again.  I was able to continue the regular restart procedure, and the computer is back up again.

Recap: c1susvme2 mysteriously got unmounted from /cvs/cds!  But it's back, and the computers are all good again.

  1634   Sat May 30 12:36:52 2009   rob   Update   Computers   c1susvme2, c1iscex running late

c1susvme2 has been running just a bit late for about a week.  I rebooted it. 

The plot shows SRM_FE_SYNC, which is the number of times in the last second that c1susvme2 was late for the 16k cycle.   Similarly for ETMX.

 

Attachment 1: srmsync.jpg
Attachment 2: etmxsync.jpg
  1635   Mon Jun 1 13:25:00 2009   rob   Update   Computers   c1susvme2, c1iscex running late

Quote:

c1susvme2 has been running just a bit late for about a week.  I rebooted it. 

The plot shows SRM_FE_SYNC, which is the number of times in the last second that c1susvme2 was late for the 16k cycle.   Similarly for ETMX.

 

 

The reboot appears to have worked.

Attachment 1: doublesync.jpg
  17100   Tue Aug 23 22:30:24 2022   Tega   Update   Computers   c1teststand OS upgrade - I

[JC, Tega, Chris]

After moving the test stand front-ends, chiara (name server) and fb1 (boot server) to the new rack behind 1X7, we powered everything up and checked that we can reach c1teststand via pianosa and that the front-ends are still able to boot from fb1. After confirming these tests, we decided to start the software upgrade to Debian 10. We installed buster on fb1 and are now in the process of setting up diskless boot. I have been looking around for CDS instructions on how to do this and found the CdsFrontEndDebian10 page, which contains most of the info we require. The page suggests that it may be cleaner to start the Debian 10 installation on a front-end that is connected to an I/O chassis with at least 1 ADC and 1 DAC card, then move the installation disk to the boot server and continue from there. So I moved the disk from fb1 to one of the front-ends, but I had trouble getting it to boot. I decided instead to do a clean install on another disk on the c1lsc front-end, which has a host adapter card that can be connected to the c1bhd I/O chassis. We can then mount this disk on fb1 and use it to set up the diskless boot OS.

  16365   Wed Sep 29 17:10:09 2021   Anchal   Summary   CDS   c1teststand problems summary

[anchal, ian]

We went and collected some information for the overlords to fix the c1teststand DAQ network issue.


  • From c1teststand, the c1bhd and c1sus2 computers were not accessible through ssh ("No route to host"), so we restarted both computers (the I/O chassis were ON).
  • After the computers restarted, we were able to ssh into c1bhd and c1sus2, and we ran rtcds start c1x06 and rtcds start c1x07.
  • The first page in attachment shows the screenshot of GDS_TP screens of the IOP models after this step.
  • Then we started the user models by running rtcds start c1bhd and rtcds start c1su2.
  • The second page shows the screenshot of the GDS_TP screens. Note that the DAQ status is red on all the screens and the DC statuses are blank.
  • So we checked if daqd_ services are running in the fb computer. They were not. So we started them all by sudo systemctl start daqd_*.
  • The third page shows the status of all services after this step; daqd_dc.service remained in a failed state.
  • open-mx_stream.service was not even loaded in fb. We started it by running sudo systemctl start open-mx_stream.service.
  • The fourth page shows the status of this service. It started without any errors.
  • However, when we went to check the status of mx_stream.service on c1bhd and c1sus2, they were not loaded, and when we tried to start them, they showed a failed state and kept trying to restart every 3 seconds without success (see pages 5 and 6).
  • Finally, we also took screenshots of the timedatectl command output on the three computers fb, c1bhd, and c1sus2 to show that their times were not synced at all.
  • The ntp service is running on fb, but it probably does not have access to any of the servers it is following.
  • The timesyncd on c1bhd and c1sus2 (the FE machines) is also running but shows status 'Idle', which suggests they are unable to find the ntp signal from fb.
  • I believe this issue is similar to what Jamie fixed in fb1 on the martian network in 40m/16302. Since the fb on the c1teststand network was cloned before this fix, it might have this dysfunctional ntp as well.

We will try to get internet access to c1teststand soon. Meanwhile, someone with more experience and knowledge should look into this situation and try to fix it. We need to test the c1teststand within a few weeks now.
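For reference, a condensed sketch of the restart sequence we followed above (hostnames and model names as in the list; run the rtcds commands on the respective FE machine and the systemctl commands on fb):

    # on c1bhd and c1sus2: start the IOP models, then the user models
    rtcds start c1x06    # IOP on c1bhd
    rtcds start c1x07    # IOP on c1sus2
    rtcds start c1bhd
    rtcds start c1su2
    # on fb: start the daqd services and the open-mx stream service
    sudo systemctl start daqd_*
    sudo systemctl start open-mx_stream.service
    # back on the FE machines: check mx_stream
    sudo systemctl status mx_stream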

Attachment 1: c1teststand_issues_summary.pdf
  16372   Mon Oct 4 11:05:44 2021   Anchal   Summary   CDS   c1teststand problems summary

[Anchal, Paco]

We tried to fix the ntp synchronization in c1teststand today by repeating the steps listed in 40m/16302. Even though the cloned fb1 now has the exact same package version, conf & service files, and status, the FE machines (c1bhd and c1sus2) fail to sync to the time; timedatectl shows the same status 'Idle'. We also dug a bit deeper into the error messages of daqd_dc on the cloned fb1 and mx_stream on the FE machines, and have some error messages to report here.


Attempt on fixing the ntp

  • We copied the ntp package version 1:4.2.6 deb file from /var/cache/apt/archives/ntp_1%3a4.2.6.p5+dfsg-7+deb8u3_amd64.deb on the martian fb1 to the cloned fb1 and ran:
    controls@fb1:~ 0$ sudo dpkg -i ntp_1%3a4.2.6.p5+dfsg-7+deb8u3_amd64.deb
  • We got error messages about missing dependencies of libopts25 and libssl1.1. We downloaded oldoldstable jessie versions of these packages from here and here. We ensured that these versions are higher than the required versions for ntp. We installed them with:
    controls@fb1:~ 0$ sudo dpkg -i libopts25_5.18.12-3_amd64.deb
    controls@fb1:~ 0$ sudo dpkg -i libssl1.1_1.1.0l-1~deb9u4_amd64.deb
  • Then we installed the ntp package as described above. It asked us whether we wanted to keep the existing configuration file; we pressed Y.
  • However, we decided to make the configuration and service files on the cloned fb1 exactly the same as on the martian fb1. We copied the /etc/ntp.conf and /etc/systemd/system/ntp.service files from the martian fb1 to the cloned fb1 in the same locations. Then we enabled ntp, reloaded the daemon, and restarted the ntp service:
    controls@fb1:~ 0$ sudo systemctl enable ntp
    controls@fb1:~ 0$ sudo systemctl daemon-reload
    controls@fb1:~ 0$ sudo systemctl restart ntp
  • But of course, since fb1 doesn't have internet access, we got some errors in the status of the ntp.service:
    controls@fb1:~ 0$ sudo systemctl status ntp
    ● ntp.service - NTP daemon (custom service)
       Loaded: loaded (/etc/systemd/system/ntp.service; enabled)
       Active: active (running) since Mon 2021-10-04 17:12:58 UTC; 1h 15min ago
     Main PID: 26807 (code=exited, status=0/SUCCESS)
       CGroup: /system.slice/ntp.service
               ├─30408 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:107
               └─30525 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:107
    
    Oct 04 17:48:42 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
    Oct 04 17:48:52 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
    Oct 04 18:05:05 fb1 ntpd_intres[30525]: host name not found: 0.debian.pool.ntp.org
    Oct 04 18:05:15 fb1 ntpd_intres[30525]: host name not found: 1.debian.pool.ntp.org
    Oct 04 18:05:25 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
    Oct 04 18:05:35 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
    Oct 04 18:21:48 fb1 ntpd_intres[30525]: host name not found: 0.debian.pool.ntp.org
    Oct 04 18:21:58 fb1 ntpd_intres[30525]: host name not found: 1.debian.pool.ntp.org
    Oct 04 18:22:08 fb1 ntpd_intres[30525]: host name not found: 2.debian.pool.ntp.org
    Oct 04 18:22:18 fb1 ntpd_intres[30525]: host name not found: 3.debian.pool.ntp.org
  • But the ntpq command gives the same output as the ntpq command on the martian fb1 (except for the source servers), showing that the broadcasting is happening in the same manner (a sketch of the broadcast-related ntp.conf lines is given right after this list):
    controls@fb1:~ 0$ ntpq -p
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
     192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000
    
  • On the FE machine side, though, systemd-timesyncd is still unable to read the time signal from fb1 and shows the status as idle:
    controls@c1bhd:~ 3$ timedatectl
          Local time: Mon 2021-10-04 18:34:38 UTC
      Universal time: Mon 2021-10-04 18:34:38 UTC
            RTC time: Mon 2021-10-04 18:34:38
           Time zone: Etc/UTC (UTC, +0000)
         NTP enabled: yes
    NTP synchronized: no
     RTC in local TZ: no
          DST active: n/a
    controls@c1bhd:~ 0$ systemctl status systemd-timesyncd -l
    ● systemd-timesyncd.service - Network Time Synchronization
       Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
       Active: active (running) since Mon 2021-10-04 17:21:29 UTC; 1h 13min ago
         Docs: man:systemd-timesyncd.service(8)
     Main PID: 244 (systemd-timesyn)
       Status: "Idle."
       CGroup: /system.slice/systemd-timesyncd.service
               └─244 /lib/systemd/systemd-timesyncd
  • So the time synchronization is still not working. We expected the FE machines to just synchronize to fb1 even though it doesn't have any upstream ntp server to synchronize to, but that didn't happen.
  • I'm (Anchal) working on getting internet access to c1teststand computers.
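For reference, a minimal sketch of the kind of ntp.conf lines that would produce the broadcast behavior seen in the ntpq output above; these exact lines are my assumption, I have not re-checked the actual /etc/ntp.conf copied from the martian fb1:

    # local clock as a fallback reference so ntpd will serve time without internet access (assumed)
    server 127.127.1.0
    fudge 127.127.1.0 stratum 10
    # broadcast time onto the local subnet, matching the 192.168.123.255 line in ntpq -p
    broadcast 192.168.123.255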

Digging into mx_stream/daqd_dc errors:

  • We changed the Restart field in /etc/systemd/system/daqd_dc.service on the cloned fb1 to 2. This lets the service fail and stop restarting after two attempts, so we can see the real error message instead of the systemd message that the service is restarting too often (a less invasive drop-in alternative is sketched at the end of this entry). We got the following:
    controls@fb1:~ 3$ sudo systemctl status daqd_dc -l
    ● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
       Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
       Active: failed (Result: exit-code) since Mon 2021-10-04 17:50:25 UTC; 22s ago
      Process: 715 ExecStart=/usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc (code=exited, status=1/FAILURE)
     Main PID: 715 (code=exited, status=1/FAILURE)
    
    Oct 04 17:50:24 fb1 systemd[1]: Started Advanced LIGO RTS daqd data concentrator.
    Oct 04 17:50:25 fb1 daqd_dc_mx[715]: [Mon Oct  4 17:50:25 2021] Unable to set to nice = -20 -error Unknown error -1
    Oct 04 17:50:25 fb1 daqd_dc_mx[715]: Failed to do mx_get_info: MX not initialized.
    Oct 04 17:50:25 fb1 daqd_dc_mx[715]: 263596
    Oct 04 17:50:25 fb1 systemd[1]: daqd_dc.service: main process exited, code=exited, status=1/FAILURE
    Oct 04 17:50:25 fb1 systemd[1]: Unit daqd_dc.service entered failed state.
    
  • It seemed like the only thing the daqd_dc process doesn't like is that the mx_stream services are in a failed state on the FE computers. So we did the same thing on the FE machines to get the real error messages:
    controls@fb1:~ 0$ sudo chroot /diskless/root
    fb1:/ 0#
    fb1:/ 0# sudo nano /etc/systemd/system/mx_stream.service
    fb1:/ 0#
    fb1:/ 0# exit
  • Then I ssh'ed into c1bhd to see the error message on mx_stream service properly.
    controls@c1bhd:~ 0$ sudo systemctl daemon-reload
    controls@c1bhd:~ 0$ sudo systemctl restart mx_stream
    controls@c1bhd:~ 0$ sudo systemctl status mx_stream -l
    ● mx_stream.service - Advanced LIGO RTS front end mx stream
       Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
       Active: failed (Result: exit-code) since Mon 2021-10-04 17:57:20 UTC; 24s ago
      Process: 11832 ExecStart=/etc/mx_stream_exec (code=exited, status=1/FAILURE)
     Main PID: 11832 (code=exited, status=1/FAILURE)
    
    Oct 04 17:57:20 c1bhd systemd[1]: Starting Advanced LIGO RTS front end mx stream...
    Oct 04 17:57:20 c1bhd systemd[1]: Started Advanced LIGO RTS front end mx stream.
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: send len = 263596
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: mx_connect failed Nic ID not Found in Peer Table
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: c1x06_daq mmapped address is 0x7f516a97a000
    Oct 04 17:57:20 c1bhd mx_stream_exec[11832]: c1bhd_daq mmapped address is 0x7f516697a000
    Oct 04 17:57:20 c1bhd systemd[1]: mx_stream.service: main process exited, code=exited, status=1/FAILURE
    Oct 04 17:57:20 c1bhd systemd[1]: Unit mx_stream.service entered failed state.
    
  • c1sus2 shows the same error. I'm not sure I understand these errors at all, but they seem to have nothing to do with timing issues. Surprise!

As usual, some help would be helpful.
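As referenced in the first bullet above, a less invasive way to stop the rapid restart loop and surface the real error is a systemd drop-in instead of editing the unit file directly; a sketch (the drop-in file name is my choice):

    sudo mkdir -p /etc/systemd/system/daqd_dc.service.d
    printf '[Service]\nRestart=no\n' | sudo tee /etc/systemd/system/daqd_dc.service.d/override.conf
    sudo systemctl daemon-reload
    sudo systemctl restart daqd_dc
    sudo systemctl status daqd_dc -l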

  16376   Mon Oct 4 18:00:16 2021   Koji   Summary   CDS   c1teststand problems summary

I don't know anything about mx/open-mx, but you also need open-mx, don't you?


controls@c1ioo:~ 0$ systemctl status *mx*
● open-mx.service - LSB: starts Open-MX driver
   Loaded: loaded (/etc/init.d/open-mx)
   Active: active (running) since Wed 2021-09-22 11:54:39 PDT; 1 weeks 5 days ago
  Process: 470 ExecStart=/etc/init.d/open-mx start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/open-mx.service
           └─620 /opt/3.2.88-csp/open-mx-1.5.4/bin/fma -d

● mx_stream.service - Advanced LIGO RTS front end mx stream
   Loaded: loaded (/etc/systemd/system/mx_stream.service; enabled)
   Active: active (running) since Wed 2021-09-22 12:08:00 PDT; 1 weeks 5 days ago
 Main PID: 5785 (mx_stream)
   CGroup: /system.slice/mx_stream.service
           └─5785 /usr/bin/mx_stream -e 0 -r 0 -w 0 -W 0 -s c1x03 c1ioo c1als c1omc -d fb1:0

 

  16381   Tue Oct 5 17:58:52 2021   Anchal   Summary   CDS   c1teststand problems summary

The open-mx service is running successfully on fb1 (clone), c1bhd, and c1sus2.

Quote:

I don't know anything about mx/open-mx, but you also need open-mx, don't you?


  17083   Tue Aug 16 18:22:59 2022   Tega   Update   Computers   c1teststand rack mounting for CDS upgrade

[Tega, Yuta]

I keep getting confused about the purpose of the teststand. The view I am adopting going forward is to use it as a platform for testing the compatibility of the new hardware upgrade, instead of thinking of it as an independent system that works with the old hardware.

The initial idea of clearing 1X7 cannot be carried out for now, because I missed the deadline for providing a detailed enough plan before Monday's power-up of the lab, so we are just going to go ahead and use the new rack as initially intended and test the latest hardware and software there.

We mounted the DAQ, subnet, and dolphin IX switches, see attachment 1. The mounting ears that came with the dolphin switch did not fit and so could not be used for mounting. We looked around the lab and decided to use one of the NavePoint mounting brackets we found next to the teststand, see attachment 2.

We plan to move the new rack to the current location of the teststand and use the power connection from there. It is also closer to 1X7, so moving the front-ends and switches to 1X7 should be straightforward after we complete all CDS upgrade testing.

Attachment 1: IMG_20220816_180157132.jpg
Attachment 2: IMG_20220816_175125874.jpg
  17088   Wed Aug 17 11:10:51 2022   rana   Update   Computers   c1teststand rack mounting for CDS upgrade

We want to be able to run SimPlant on the teststand, test our new controls algorithms, test watchdogs, and test any other software upgrades. Ideally, in the steady state it will run some plants with suspensions and cavities, and we will develop our measurement scripts there as well (e.g. IFOtest).

Quote:

[Tega, Yuta]

I keep getting confused about the purpose of the teststand. The view I am adopting going forward is to use it as a platform for testing the compatibility of the new hardware upgrade, instead of thinking of it as an independent system that works with the old hardware.

  17098   Mon Aug 22 19:02:15 2022   Tega   Update   Computers   c1teststand rack mounting for CDS upgrade II

[Tega, JC]

Moved the rack to the location of the test stand just behind 1X7 and plan to remove the other two small test stand racks to create some space there.  We then mounted the c1bhd I/O chassis and 4 front-end machines on the test stand (see attachment 1).

Installed the dolphin IX cards on all 4 front-end machines: c1bhd, c1ioo, c1sus, c1lsc. I also removed the dolphin DX card that was previously installed on c1bhd.

Found a single OneStop host card with a mini PCI slot mounting plate in a storage box (see attachment 2). Since this only fits into the dual PCI riser card slot on c1bhd, I swapped out the full-length PCI slot OneStop host card on c1bhd and installed it on c1lsc (see attachments 3 & 4).

 

Attachment 1: IMG_20220822_185437763.jpg
Attachment 2: IMG_20220822_131340214.jpg
Attachment 3: c1bhd.jpeg
Attachment 4: c1lsc.jpeg
  16697   Thu Mar 3 15:37:40 2022   Anchal   Summary   CDS   c1teststand restructured

c1teststand has been restructured. There is no longer a separate gateway computer called 'c1teststand'. When you ssh into the c1teststand network using ssh c1teststand from inside martian, or from an outside network using the method mentioned in this wiki page, you land on the chiara (clone) computer, and you can navigate to any teststand network computer from there.

I'll be repurposing the 1U c1teststand computer into the new c1susaux2 slow machine now. All files from the home directory and from the /etc directory of the former c1teststand have been zipped and stored in /home/controls of chiara (clone). Just as an aside, the network configuration of the teststand can be done from inside the teststand network by opening a browser on either fb1 (clone) or chiara (clone) and going to the address 10.0.1.1. The login and password are the same as our usual workstation username and password.

  16271   Fri Aug 6 13:13:28 2021   Anchal   Update   BHD   c1teststand subnetwork now accessible remotely

c1teststand subnetwork is now accessible remotely. To log into this network, one needs to do the following:

  • Log into nodus or pianosa. (This will only work from these two computers)
  • ssh -CY controls@192.168.113.245
  • Password is our usual workstation password.
  • This will log you into c1teststand network.
  • From here, you can log into fb1, chiara, c1bhd, and c1sus2, which are all part of the teststand subnetwork.

Just to document the IT work I did: setting up this connection was a bit less trivial than usual.

  • The martian subnetwork is created by a NAT router which connects only nodus to outside GC network and all computers within the network have ip addresses 192.168.113.xxx with subnet mask of 255.255.255.0.
  • The cloned test stand network was also running on the same IP address scheme, mostly because fb1 and chiara are clones in this network. So every computer in this network also had ip addresses 192.168.113.xxx.
  • I set up a NAT router to connect to the martian network, forwarding ssh requests to the c1teststand computer. My NAT router creates a separate subnet with IP addresses 10.0.1.xxx and subnet mask 255.255.255.0, gated through 10.0.1.1.
  • However, the issue is that from c1teststand there are now two accessible networks which use the same IP address range 192.168.113.xxx. So when you try to ssh, it always searches its local c1teststand subnetwork instead of routing through the NAT router to the martian network.
  • To work around this, I had to manually provide an ip route on c1teststand for connecting to two of the computers (nodus and pianosa) in the martian network. This is done by:
    ip route add 192.168.113.200 via 10.0.1.1 dev eno1
    ip route add 192.168.113.216 via 10.0.1.1 dev eno1
  • This gives c1teststand a specific path for ssh requests to/from these computers in the martian network (a quick way to verify these routes is sketched below).
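A quick way to verify these routes from c1teststand (addresses and interface name as above):

    ip route show | grep 192.168.113
    ip route get 192.168.113.200   # should report "via 10.0.1.1 dev eno1"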
  16273   Mon Aug 9 10:38:48 2021   Anchal   Update   BHD   c1teststand subnetwork now accessible remotely

I had to add the following two lines to the /etc/network/interfaces file to make the special ip routes persistent across reboots:

post-up ip route add 192.168.113.200 via 10.0.1.1 dev eno1
post-up ip route add 192.168.113.216 via 10.0.1.1 dev eno1
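For context, here is a sketch of where these lines sit in a static stanza of /etc/network/interfaces; the address, netmask, and gateway values below are placeholders, not copied from the actual file:

    auto eno1
    iface eno1 inet static
        address 10.0.1.2
        netmask 255.255.255.0
        gateway 10.0.1.1
        post-up ip route add 192.168.113.200 via 10.0.1.1 dev eno1
        post-up ip route add 192.168.113.216 via 10.0.1.1 dev eno1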

  16382   Tue Oct 5 18:00:53 2021   Anchal   Summary   CDS   c1teststand time synchronization working now

Today I got a new router that I used to connect the c1teststand, fb1 and chiara. I was able to see internet access in c1teststand and fb1, but not in chiara. I'm not sure why that is the case.

The good news is that the ntp server on fb1 (clone) is working fine now and both FE computers, c1bhd and c1sus2, are successfully synchronized to the fb1 (clone) ntp server. This resolves any possible timing issues in this DAQ network.

On running the IOP and user models, however, I see the same errors as mentioned in 40m/16372. Something to do with:

Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: OMX: Failed to find peer index of board 00:00:00:00:00:00 (Peer Not Found in the Table)
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: mx_connect failed Nic ID not Found in Peer Table
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1x07_daq mmapped address is 0x7fa4819cc000
Oct 06 00:47:56 c1sus2 mx_stream_exec[21796]: c1su2_daq mmapped address is 0x7fa47d9cc000


Thu Oct 7 17:04:31 2021

I fixed the issue of chiara not getting internet. Now c1teststand, fb1, and chiara all have internet connections. It was an issue with the default gateway, the interface, and finding the DNS. I have found the correct settings now.

  14239   Tue Oct 9 16:05:29 2018   gautam   Configuration   ASC   c1tst deleted, c1asy deployed.

Setting up c1asy:

  • Backed up old c1tst.mdl as c1tst_old_bak.mdl in /opt/rtcds/userapps/release/cds/c1/models
  • Copied the c1tst model to /opt/rtcds/userapps/release/isc/c1/models/c1asy.mdl as this is where the c1asx.mdl file resides.
  • Backed up original c1rfm.mdl as c1rfm_old.mdl in /opt/rtcds/userapps/release/cds/c1/models (since the old c1tst had an RFM block which is unnecessary).
  • Deleted offending RFM block from c1rfm.mdl.
  • Recompiled and re-installed c1rfm.mdl. The model has not yet been restarted, as I'd like the suspension watchdogs to be shut down first, but the c1susaux EPICS channels are presently not responsive.
  • Removed c1tst model (C-node91) from /opt/rtcds/caltech/c1/target/gds/param/testpoints.
  • Removed /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1tst.par (at this point, DCUID 91 is free for use by c1asy).
  • Moved c1tst line in /opt/rtcds/caltech/c1/target/daqd/master to "old model definitions models" section.
  • Added /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1asy.par to the master file.
  • Edited /diskless/root.jessie/etc/rtsystab to allow c1asy to be run on c1iscey.
  • Finally, I followed the instructions here to get the channels into frames and make all the indicators green.

Now Yuki can work on copying the simulink model (copy c1asx structure) and implementing the autoalignment servo.
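For reference, the recompile/re-install/restart steps above correspond to the usual rtcds commands; a hedged sketch, since the exact sequence run here was not recorded:

    # run on the FE machine that hosts the model (e.g. c1iscey for c1asy)
    rtcds build c1asy
    rtcds install c1asy
    rtcds restart c1asy    # only after the suspension watchdogs are shut down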

Attachment 1: CDSoverview_ASY.png
  14507   Tue Apr 2 14:53:57 2019   gautam   Update   CDS   c1vac added to burt

I deleted references to c1vac1 and c1vac2 (which no longer exist) and added c1vac to the autoburt request file list at /opt/rtcds/caltech/c1/burt/autoburt/requestfilelist

  14641   Tue May 28 09:51:33 2019   gautam   Update   VAC   c1vac hard-rebooted

The vacuum itself was fine - CC1 gauge reported a pressure of 1.3e-5 torr. Note to self: the C1:Vac-CC1_HORNET_PRESSURE channel, which is the analog readback of the Hornet gauge and which is hooked up to an Acromag ADC in the c1auxex chassis, is independent of the status of the c1vac machine, and so can serve as a diagnostic.

However, I was unable to interact with c1vac in any way; the monitor hooked up directly to it was showing a frozen display. So I hard-rebooted the system. It took a few minutes to come back online - but even after 10 minutes of waiting, still no display. In the process of the reboot, several valves were closed off - when the EPICS processes restart, there are momentary instances where the readback channels get an "undefined" value, which prompts the main interlock process to transition to a "SAFE" state.

Running df -h, I saw that the /var partition was completely full. Maybe this was somehow interfering with the machine running smoothly? Two files in particular, daemon.log and daemon.log.1 were ~1GB each. The contents of these files seemed to be just the readbacks for the caget and caput commands. So I cleared both these files, and now the /var partition usage is only 26%. I also got the display back up and running on the physical monitor hooked up to the c1vac machine's VGA port. Let's see if this has improved the stability situation. The CPU load is still high (~6-7), with most of this coming from the modbus process. Why is this so high? c1susaux has more Acromag units but claims a much lower load of 0.71. Is the CPU of the c1vac machine somehow inferior?
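For reference, a minimal sketch of the commands I'd use to find and clear such offenders (the truncate target is the daemon.log mentioned above; zeroing logs this way is a judgment call):

    sudo du -xh /var | sort -h | tail -n 20    # largest directories/files under /var
    sudo du -sh /var/log/*                     # per-file sizes in /var/log
    sudo truncate -s 0 /var/log/daemon.log     # empty a log file without deleting it
    df -h /var                                 # confirm the partition usage dropped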

In the meantime, I ssh-ed into c1vac and restored the "Vacuum normal" valve config. During this little escapade, the main volume pressure rose to ~6e-5 torr. It's coming back down smoothly.


Unrelated to this work: we had turned the RGA off for the vent, I powered it back on and re-initialized it this morning.

Attachment 1: Screen_Shot_2019-05-31_at_12.44.54_PM.png
  14640   Mon May 27 11:37:13 2019   gautam   Update   VAC   c1vac is unresponsive

I've been monitoring the status of the pumpdown remotely with ndscope lookbacks of C1:Vac-CC1_pressure. This morning, I saw that the channel was putting out a constant value (a signature of the EPICS server being frozen). caget did not work either. Then I tried ssh-ing into c1vac to see if there were any issues, but I was unable to. The machine isn't responding to ping either. The EPICS value has been frozen since ~1030pm PDT 26 May 2019.

I will try and head to campus later today to check on it. Isn't an email alert or something supposed to be sent out in such an event?

  17081   Mon Aug 15 18:06:07 2022   Anchal   Update   General   c1vac issues, 1 pressure gauge died

[Anchal, Paco, Tega]


Disk full issue:

c1vac was showing the /var disk to be full. We moved all gunzipped backup logs to /home/controls/logBackUp. This freed up 36% of the space on /var. Ideally, we need not log so much; some solution needs to be found for reducing these log sizes or monitoring them for smarter handling.


Pressure sensor malfunctioning:

We were unable to open the PSL shutter due to the interlock with C1:Vac-P1a_pressure. We found that C1:Vac-P1a_pressure is not being written by the serial_MKS937a service on c1vac. The issue was that the sensor itself has gone bad and needs to be replaced. We believe that "L 0E-04" in the status message (C1:Vac-P1a_status) indicates a malfunctioning sensor.

Quick fix:

We removed the writing of C1:Vac-P1a_pressure and C1:Vac-P1a_status from the MKS937a and moved them to the XGS600, which is reading sensor 1 from the main volume. See this commit.

Now we are able to open the PSL shutter. The sensor should be replaced ASAP, and this commit can then be reverted.

  17082   Mon Aug 15 20:09:18 2022   Koji   Update   General   c1vac issues, 1 pressure gauge died

- Disk Full: Just use the usual /etc/logrotate thing

- Vacuum gauge

I'd rather not replace P1a. We used to have Ps and CCs because a single gauge didn't cover the entire pressure range. However, this new FRG (= Full Range Gauge) does cover from 1 atm to 4 nTorr.

Why don't we have a couple of FRG spares, instead?

Questions to Tega: How many FRGs can our XGS-600 controller handle?

 

  17086   Wed Aug 17 10:23:05 2022   Tega   Update   General   c1vac issues, pressure gauge replacement

- Disk full

I updated the configuration file '/etc/logrotate.d/rsyslog' to set a file size limit of 50M on 'syslog' and 'daemon.log', since these are the two log files that capture caget & caput terminal outputs. I also reduced the number of backup files to 2. (A quick way to check the new rules is sketched right after the config listing below.)

controls@c1vac:~$ cat /etc/logrotate.d/rsyslog
/var/log/syslog
{
    rotate 2
    daily
    size 50M
    missingok
    notifempty
    delaycompress
    compress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}

/var/log/mail.info
/var/log/mail.warn
/var/log/mail.err
/var/log/mail.log
/var/log/daemon.log
{
    rotate 2
    missingok
    notifempty
    size 50M
    compress
    delaycompress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
    rotate 4
    weekly
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}
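A quick way to sanity-check the new rotation rules (logrotate's -d flag does a dry run, -f forces an immediate rotation):

    sudo logrotate -d /etc/logrotate.d/rsyslog   # dry run, prints what would be rotated
    sudo logrotate -f /etc/logrotate.d/rsyslog   # force a rotation now to test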

- Vacuum gauge

The XGS-600 can handle 6 FRGs and we currently have 5 of them connected. Yes, having a spare would be good. I'll see about placing an order for these then.

Quote:

- Disk Full: Just use the usual /etc/logrotate thing

- Vacuum gauge

I'd rather not replace P1a. We used to have Ps and CCs because a single gauge didn't cover the entire pressure range. However, this new FRG (= Full Range Gauge) does cover from 1 atm to 4 nTorr.

Why don't we have a couple of FRG spares, instead?

Questions to Tega: How many FRGs can our XGS-600 controller handle?

 

 

  14279   Tue Nov 6 23:19:06 2018   gautam   Update   VAC   c1vac1 FAIL lights on (briefly)

Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see if the box was getting +24V DC from the Sorensen or not.  Upon sticking the card in, the FAIL LEDs on all the VME cards came on.  We immediately removed the extender card.  Without any intervention from us, after ~1 minute, the FAIL LEDs went off again.  Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues and the c1vac1 computer is still responsive.

But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything.

Is there a reason why extender cards shouldn't be stuck into eurocrates?

Attachment 1: Screenshot_from_2018-11-06_23-18-23.png
Attachment 2: Screenshot_from_2018-11-06_23-19-26.png
  14281   Wed Nov 7 08:32:32 2018   Steve   Update   VAC   c1vac1 FAIL lights on (briefly)...checked

The vacuum and MC are OK

Quote:

Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see if the box was getting +24V DC from the Sorensen or not.  Upon sticking the card in, the FAIL LEDs on all the VME cards came on.  We immediately removed the extender card.  Without any intervention from us, after ~1 minute, the FAIL LEDs went off again.  Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues and the c1vac1 computer is still responsive.

But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything.

Is there a reason why extender cards shouldn't be stuck into eurocrates?

 

Attachment 1: Vac_MC_OK.png
  14207   Fri Sep 21 16:51:43 2018   gautam   Update   VAC   c1vac1 is unresponsive

Steve pointed out that some of the vacuum MEDM screen fields were reporting "NO COMM". Koji confirmed that this is a c1vac1 problem, likely the same as reported here and can be fixed using the same procedure.

However, Steve is worried that the interlock won't kick in in case of a vacuum emergency, so we are leaving the PSL shutter closed over the weekend. The problem will be revisited on Monday.

  14215   Mon Sep 24 15:06:10 2018   gautam   Update   VAC   c1vac1 reboot + TP1 controller replacement

[steve, gautam]

Following the procedure in this elog, we effected a reset of the vacuum slow machines. Usually, I just turn the key on these crates to do a power cycle, but Steve pointed out that for the vacuum machines, we should only push the "reset" button.

While TP1 was spun down, we took the opportunity to replace the TP1 controller with a spare unit the company has sent us for use while our unit is sent to them for maintenance. The procedure was in principle simple (I only list the additional steps; for the various valve closures, see the slow machine reset procedure elog):

  • Turn power off using switch on rear.
  • Remove 4 connecting cables on the back.
  • Switch controllers.
  • Reconnect 4 cables on the back panel.
  • Turn power back on using switch on rear.

However, we were foiled by a Phillips screw on the DB37 connector labelled "MAG BRG", which had its head completely worn out. We had to make a cut in this screw using a saw blade, and use a "-" screwdriver to get this troublesome screw out. Steve suspects this is a metric gauge screw, and will request the company to send us a new one; we will replace it when re-installing the maintained controller.

Attachments #1 and #2 show the Vacuum MEDM screen before and after the reboot respectively - evidently, the fields that were reading "NO COMM" now read numbers. Attachment #3 shows the main volume pressure during this work.

Quote:

The problem will be revisited on Monday.

Attachment 1: beforeReboot.png
Attachment 2: afterReboot.png
Attachment 3: CC1.png
  14278   Tue Nov 6 19:41:46 2018   Jon   Omnistructure   c1vac1/2 replacement

This afternoon I started setting up the Supermicro 5017A-EP that will replace c1vac1/2. Following Johannes's procedure in 13681, I installed Debian 8.11 (jessie). A more recent stable release, 9.5, has become available since the first Acromag machine was assembled, but I stuck with version 8 for consistency; we already know that version works. The setup is sitting on the left side of the electronics bench for now.

  1505   Mon Apr 20 23:27:59 2009   rana   Summary   VAC   c1vac2 rebooted: non-functional for several months
We found several problems with the framebuilder tonight. The first symptom was that it was totally out of disk space. The latest daqd log file had gone up to 500 MB and filled the space. The log file was full of a lot of requests from my seisBLRMS.m code, but what was really making it so big was that it couldn't connect to c1vac2 (aka scipe4) to make connections for some channels.

We looked into the daqd log files and this has been going on since at least December. There were several 'whited out' records for TP2 and TP3 in the Vacuum overview as well as the Checklist screen! Why did no one notice this and fix it??
We cannot function if we just ignore any non-functioning displays and say "Oh, that never worked."

For sure, we know that it was working in 2005. Jay and Steve and Alan looked at it.

Today it was responding to ping and telnet, but not allowing any new connections. I hit the RESET button on it. Several lights went RED and then it came back up. The readbacks on the EPICS screens are OK too.

I went into fb0 and deleted many of the GB size log files from the past several months. There is now 19GB free out of its local 33GB disk.
  5989   Wed Nov 23 16:48:39 2011   Suresh   Update   General   cable cleanup

[Koji Suresh]

As part of the general lab clean-up, we removed many unused BNC cables (long and short) from around the SP table.  We removed one very long BNC cable which was connected on one side to a PEM input and not connected on the other side, near the 1X2 rack.  There were several cables from an old SURF phase camera project which were still attached to a couple of RF amps on the SP tables and running towards the 1X6 rack.

We also removed some unused power cables  plugged into a power distribution strip near Megatron.

 

  875   Mon Aug 25 10:23:53 2008   steve   HowTo   General   cable killer
Rack 1Y7 double violation:

BNC cables were left to be jammed by the door,

and see the destroyed BNCs.

The RED fibers should be rerouted. I placed a protective obstacle in position so the door cannot be closed.

Please do not do this!

DNA analysis is in progress on your fingerprints.
Attachment 1: cablkill.png
Attachment 2: cablkll2.png
  7873   Thu Jan 3 19:19:59 2013   rana   HowTo   Electronics   cable racks

Today I found 3 power cables in the orange Pomona cable tray, put in in such a way that the cables were damaged and therefore dangerous.

Please think about what you are doing before doing it. Damaging these things because you are in a hurry or frustrated will just waste our time and damage our interferometer.

For reference, we only use the thick blue Pomona racks for power cables. We use the orange and black ones for thinner cables. Pay attention and keep the cables organized.

Cable Rack Selection

 

  4390   Wed Mar 9 16:07:42 2011   kiwamu   Update   VIDEO   cable session

[Koji, Steve, Suresh, Kiwamu]

The following video cables have been newly laid down:

  - MC1F/MC3F (65 ft.)

  - PMCR (100 ft.)

  - PSL spare (100 ft.)

  - PSL1  (100 ft.)

  - PSL2  (100 ft.)

 

  11659   Fri Oct 2 15:11:08 2015   Steve   Update   PEM   cable squashed

Cable #53, from Accelerometer 4 to 1X7 / DAQ input c26, was squashed while removing a network card from the Sun Fire x4600 today.

This cable has to be tested.

  7807   Tue Dec 11 08:53:52 2012   Steve   HowTo   PEM   cables need care

How NOT to:

The janitor cannot clean in areas like this. He may accidentally step on these cables as he dust-wipes around our chambers.

Attachment 1: IMG_1839.JPG
  7809   Tue Dec 11 10:09:04 2012   Ayaka   HowTo   PEM   cables need care

Quote:

How NOT to:

The janitor cannot clean in areas like this. He may accidentally step on these cables as he dust-wipes around our chambers.

 Sorry for the mess. I fixed it.

  3996   Tue Nov 30 12:33:27 2010   kiwamu   Summary   IOO   cabling of in-vac PZT mirrors

  10066   Wed Jun 18 22:34:44 2014   ericq   Update   IOO   caget frustration

Quote:

 Somehow the caget/caput commands are really slow. I'm not sure if this is new behavior or not, but after changing values, it takes ~1-2 seconds to move on to the next command.

This is still happening. Specifically: on all of the control room computers, calls to caget display the result immediately, but then hang for five seconds (consistently five). We had also seen a situation where calls hang indefinitely on ottavia/pianosa, but a reboot "fixes" this.

Some observations:

  • Front end machines and the FB have proper caget/caput response times.
  • Control room machines have some odd ping behavior when targeting frontends/FB; namely, the ping times themselves are ok, but each ping line takes quite some time to show up, which made us think that there is an odd network routing issue happening with some network switch.
  • Front ends and FB get epics from /opt/rtapps, whereas control room machines get epics from /ligo/apps, which has different contents. (Is this for Gentoo vs. Ubuntu? I don't really get why this is the case...) This means different environment-setting scripts get called, so maybe the control room machines are misconfigured in some way for the new name server?

I poked around the network settings on all of these machines, but everything seemed reasonable. Nothing was changed. Rossa and Pianosa have their network settings done through some Ubuntu GUI, but I don't know where the settings are written. I had expected their settings to be in /etc/network/interfaces; maybe we should change this to be consistent with other machines, and easier to administrate via the terminal. 

Despite all this, ezcaread is fine.

  10077   Thu Jun 19 22:04:23 2014   ericq   Update   Computer Scripts / Programs   caget/caput now return in reasonable time

I think I've fixed the caget/caput issue. Rana's observation that pinging the IP directly was faster than pinging the hostname set me on a path of googling that led to the following changes to the DNS setup on chiara (specifically, informed by this thread: http://www.dslreports.com/forum/r11836974-BIND-slow-to-reply-over-LAN-Solved)

/etc/bind/named.conf.local has these lines:

zone "martian" IN {
 type master;
 file "/etc/bind/zones/martian.db";
 };
zone "113.168.192.in-addr.arpa" {
 type master;
 file "/etc/bind/zones/rev.113.168.192.in-addr.arpa";
};

The first zone command links hostnames like c1lsc to an IP like 192.168.113.62, but apparently in the second, we need to do the inverse. So, for each line in martian.db like

c1lsc           A       192.168.113.62

I added a line in rev.113.168.192.in-addr.arpa like so:

62 IN PTR c1lsc.martian

This seems kind of silly, but now if you do the host command from a workstation, it can find the hostname associated with an IP. 

controls@pianosa|~ > host 192.168.113.62
62.113.168.192.in-addr.arpa domain name pointer scipe12.martian.113.168.192.in-addr.arpa.
62.113.168.192.in-addr.arpa domain name pointer c1lsc.martian.113.168.192.in-addr.arpa.
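One step not spelled out above: after editing the zone files, bind has to be told to reload them. Assuming the name server on chiara is the stock bind9 service, either of these should do it:

    sudo rndc reload
    sudo service bind9 reload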

[At this point, note that we have a bunch of duplicate entries in https://wiki-40m.ligo.caltech.edu/Martian_Host_Table  with these scipe## hostnames. What are these for?]


 
Now (edited for brevity):
 
controls@ottavia|~ > ping -c 5 -D c1sus
PING c1sus.martian (192.168.113.85) 56(84) bytes of data.
<SNIP>
--- c1sus.martian ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3997ms
rtt min/avg/max/mdev = 0.051/0.075/0.114/0.028 ms
controls@ottavia|~ > ping -c 5 -D 192.168.113.85
PING 192.168.113.85 (192.168.113.85) 56(84) bytes of data.
<SNIP>
--- 192.168.113.85 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3998ms
rtt min/avg/max/mdev = 0.052/0.130/0.380/0.127 ms
 
controls@pianosa|~ > time caget C1:LSC-XARM_GAIN
C1:LSC-XARM_GAIN               0.015
real    0m0.039s
 
controls@pianosa|~ > time caput C1:LSC-XARM_GAIN 0.0151
Old : C1:LSC-XARM_GAIN               0.015
New : C1:LSC-XARM_GAIN               0.0151
real    0m0.054s
 
 
 

 

  6931   Fri Jul 6 14:10:31 2012   yuta   Summary   LSC   calculation of FPMI using ALS

From this calculation, the phase fluctuation of the beam reflected from a length-stabilized arm does not disturb the MI lock.

Easy calculation:
  The phase the PD at the AS port senses is

phi = phi_x - phi_y = 2*l_MICH*omega/c + (phi_X - phi_Y)

  where l_MICH is the Michelson differential length change, omega is the laser frequency, and phi_X and phi_Y are the phases of the arm-reflected beams. From a very complicated calculation,

phi_X ~ F/2 * Phi_X

  at near resonance, where F is the arm finesse and Phi_X is the round-trip phase change in the X arm. So,

phi = 2*l_MICH*omega/c + F/2 * 2*L_DARM*omega/c

  Our ALS stabilizes the arm length to ~70 pm (see elogs #6835 and #6858). The finesse for IR is ~450. Considering l_MICH is ~1 um, the MICH signal at the AS port should be larger than the stabilized DARM signal by an order of magnitude (a quick numerical check is given below).
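A quick numerical check of that last claim, using the numbers quoted above (F ~ 450, L_DARM ~ 70 pm, l_MICH ~ 1 um; the common factor omega/c cancels):

2*l_MICH / (F/2 * 2*L_DARM) = 2e-6 m / (225 * 2 * 7e-11 m) ~ 60

so the MICH term is indeed more than an order of magnitude above the residual DARM term.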

Length sensing matrix of FPMI:
  The calculated length sensing matrix of the 40m FPMI is below. Here, I'm just considering the 11 MHz modulation. I assumed an input power of 1 W, a modulation index of 0.1i, and a Schnupp asymmetry of 26.6 mm. PRM/SRM transmissivity is not taken into account.

[W/m]     DARM      CARM      MICH
REFL_I    0         1.69e8    0
REFL_Q    7.09e1    0        -3.61e3
AS_I      0         0         0
AS_Q      1.04e6    0         3.61e3


  Maybe we should use REFL_Q as the MICH signal, but since the I/Q separation is not perfect, we see too much CARM. I tried to lock the MI with REFL11_Q yesterday, but failed.

  4402   Thu Mar 10 17:03:48 2011   Larisa Thorne   Configuration   Electronics   calculations for passive low pass filter on X arm

[Kiwamu, Larisa] 

 

We want to increase the gain at lower frequencies, so a circuit must be designed (a passive low-pass filter).

 

First, measurements were taken at the X arm for impedance and capacitance, which were 104.5 kOhm and 84.7 pF respectively. Kiwamu decided to make the circuit resemble a voltage divider for ease of calculation, such that Vout/Vin would be a ratio of the equivalent-circuit impedances. After a few algebra mistakes, this Vout/Vin value was simplified in terms of the measured R and C and the R', C' that would be needed to complete the circuit.

Since the measured C was very small and the measured R was fairly high, the simplified form allowed us to pick values of R' and C' that would put the corner frequency at 0.1 Hz: set the R' resistance to 1 MOhm and the C' capacitance to 10 uF, which would yield a gain of ~1.
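For reference, the single-pole corner of a plain R'C' low-pass is

f_c = 1/(2*pi*R'*C')

though the actual corner of the circuit built here also depends on the measured R and C of the X arm electronics through the divider analysis described above.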

With these values in hand, we can start actually building the circuit.

  17160   Tue Sep 27 10:50:11 2022   Paco   Update   BHD   calibrated LO phase noise

Locked the LO phase to the ITMX single-bounce beam at the AS port, using the DCPD (A-B) error point and actuating on LO1 POS. For this, the gain was tuned from 0.6 to 4.0. A rough Michelson fringe calibration gives a counts-to-meters conversion of ~0.212 nm/count, and the OLTF looks qualitatively like the one in a previous measurement (~20 dB at 1 Hz, UGF = 30 Hz). The displacement was then converted to phase using lambda = 1e-6 m; I'm not sure what the requirement is on the LO phase (G1802014 says 1e-4 rad/rtHz at 1 Hz, but our requirement doc says 1 to 20 nrad/rtHz (rms?)); anyway, with this rough calibration we are still off in either case.

The balancing gain is obvious at DC in the individual DCPD spectra, and the common mode rejection in the (A-B) signal is also appreciable. I'll keep working on refining this, and implementing a different control scheme.

Attachment 1: lo_phase_asd.pdf
  17161   Wed Sep 28 16:37:26 2022   Paco   Update   BHD   calibrated LO phase noise; update

[yuta, paco]

Update: the high-frequency (> 100 Hz) drop is of course not real; it comes from a 4th-order LP filter in the HPC demod I filter which I hadn't accounted for. Furthermore, we have gone through the calibration factors and corrected a factor of 2 in the optical gain. I also added the CLTF to show the in-loop and out-of-loop error respectively. The updated plot, though not final, is in Attachment #1.

Attachment 1: lo_phase_asd.pdf
  17163   Wed Sep 28 21:54:08 2022   Paco   Update   BHD   calibrated LO phase noise; update

Repeated the LO phase noise measurement, this time with the LO - ITMY single bounce, and a couple of fixes Koji hinted at, including:

  1. The DEMOD angle was the missing piece! The previous error point showed lower noise than the individual DCPDs because the demodulation angle had not been checked. I corrected it so that the error point in LO_PHASE control was exactly equal to the LO-ITMY single bounce fringe. With this, the gain on the servo had to be adjusted from 4.00 to 0.12, still using FM4, FM5, and this time also FM8 (BLP600).
  2. Turned off the 60 Hz harmonic comb notches on the DCPDs; they are unnecessary.
  3. Acquired noise spectra down to 0.1 Hz, with 0.03 Hz bin width to increase resolution and identify resonant SUS noise near 1 Hz.

This time, after alignment the fringe amplitude was 500 counts. Attachment #1 shows the updated plot with the calibrated noise spectra for the individual DCPD signals A and B as well as their rms values. Attachment #2 shows the error point, in loop and the estimated out of loop spectra with their rms as well. The peak at ~ 240 Hz is quite noticeable in the error point time series, and dominates the high frequency rms noise. The estimated rms out of loop noise is ~ 9.2 rad, down to 100 mHz.

Attachment 1: dcpd_phase_asd.pdf
Attachment 2: lo_phase_asd.pdf
  8248   Thu Mar 7 01:43:35 2013   yuta   Update   LSC   calibrated MI differential length spectra

Free swing MI differential length is 86 nm RMS and the residual length when locked is 0.045 nm RMS (in-loop).
Looks very quiet. Comparison with PRMI is the next step.

Openloop transfer function:
  OLTF of simple MI lock using AS55_Q_ERR as error signal and ITMs as actuators is below.
  UGF ~ 90 Hz, phase margin ~ 40deg
  I added 16 Hz resonant gain to suppress bounce mode.
LSCMICHOLTF_MI.png

MI differential length spectra:
  Below. Calibration was done using the calibrated AS55_Q_ERR and the actuator response (elog #8242).
MImotion.png


  Expected free swing is calculated using

x_free = (1+G)/G * A * fb

where G is the open-loop transfer function, A is the actuator response, and fb is the feedback signal (C1:LSC_ITMX/Y_IN1) spectrum. I used A as a simple pendulum with a resonant frequency of 1 Hz and Q = 5. Since the free-swing RMS is dominated by this resonance, the RMS depends on this Q assumption.

  6841   Wed Jun 20 18:43:57 2012   yuta   Update   LSC   calibrated POX error signal

[Jenne, Yuta]

We did the same calibration for POX. It was 3.8e12 counts/m. See elog #6834 for the details of the calibration we did.

According to Kiwamu's calibration, the actuator response of ITMX is:

A_ITMX  = 4.913e-09 Hz^2*counts/m / freq^2

Plots below are results from our calibration measurement.

LSCxarmTF_usingITMX.png  LSCxarm_HAover1plusG.png  POXerrorcalibration.png

  6834   Tue Jun 19 23:36:19 2012   yuta   Update   LSC   calibrated POY error signal

[Jenne, Yuta]

We calibrated the POY error signal (C1:LSC-POY11_I_ERR). It was 1.4e12 counts/m.

Modeling of Y arm lock:
  Let's say H is the transfer function from Y arm length displacement to the POY error signal. This is what we want to measure.
  F is the servo filter (filter module C1:LSC-YARM).
  A is the actuator TF using ITMY. According to Kiwamu's calibration using MICH (see elog #5583),

  A_ITMY  = 4.832e-09 Hz^2*counts/m / freq^2

  We used ITMY to lock Y arm because ITMY is already calibrated.

What we did:
  1. Measured the open-loop transfer function of the Y arm lock using the POY error signal, actuating with ITMY (G = HFA). We noticed some discrepancy in phase with our model; if we include an 1800 usec delay, the phase fits the measurement well. I think this delay is too big.
LSCyarmTF_usingITMY.png


  2. Measured the transfer function from the actuator to the POY error signal during lock. This should give us HA/(1+G).
LSCyarm_HAover1plusG.png

  3. Calculated H using the measurements above (the algebra is spelled out at the end of this entry). Assuming there's no frequency dependence in H, we got

  H = 1.4e12 counts/m

POYerrorcalibration.png

 As a sanity check: the peak-to-peak of the POY error signal when crossing the IR resonance is about 800 counts and the FWHM is about 1 nm, so our measurement is not so crazy.
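For clarity, the algebra behind step 3, using the definitions above (G = HFA, with G taken from the step-1 measurement and the step-2 measurement giving HA/(1+G)):

H = [HA/(1+G)]_measured * (1+G) / A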

  6835   Wed Jun 20 00:01:04 2012   Jenne   Update   LSC   calibrated POY error signal

[Yuta, Jenne]

We have measured the out of loop residual motion of the Yarm while locked with the ALS.  We see ~70pm RMS, as compared to Kiwamu's best of ~24pm RMS.  So we're not yet meeting Kiwamu's best measurement, but we're certainly not in crazy-land.

The Yarm ALS was locked, I took a spectrum of POY11_I_ERR, and used the calibration that we determined earlier this evening.  For reference, I attach a screenshot of our ALS loop filters - we had on all the boosts, and both resonant gain filters (~3Hz and ~16Hz).

A large part of the RMS is coming from the 60 Hz power line and the 180 Hz harmonic. If we could get rid of these (how were they eliminated from the measurement that Kiwamu used in the paper? - plotted in elog 6780), we would be closer.

Also, it looks like the hump (in our measurement at ~100 Hz, in Kiwamu's at ~200 Hz) is not quite an order of magnitude higher in amplitude in our measurement vs. Kiwamu's.  We have ~5e-11 m/rtHz, Kiwamu had ~7e-12 m/rtHz.  This increase in noise could be coming from the fact that Yuta and Koji decreased the gain in the Ygreen PDH loop to prevent the PDH box from oscillating.

While we should still think about why we can't use the same gain that Kiwamu was able to use ~6 months ago, we think we're good enough that we can move on to doing mode scans and residual motion measurements of the Xarm.

 

Attachment 1: LSC_POY_11_I_ERR_calib_19June2012.pdf
Attachment 2: POY_calib_19June2012_FiltBankSettings.png
  8256   Fri Mar 8 03:07:19 2013   yuta   Update   LSC   calibrated PRM-ITMY length spectra

The measured free swing PRM-ITMY length was 230 nm RMS.
The MI differential length was 85 nm RMS (elog #8248). This tells you that PR2 and PR3 are not so noisy compared with the usual suspensions.

Openloop transfer function:
  OLTF of PRM-ITMY cavity lock using REFL55_Q_ERR as error signal and PRM as actuator is below.
  UGF ~ 120 Hz, phase margin ~ 50 deg.
  Somehow, phase delay was 460 usec, which is smaller than the empirical value 550 usec.
LSCPRCLOLTF_PRITMY.png


PRM-ITMY length spectra:
  Below. Calibration was done using the calibrated REFL55_Q_ERR and the actuator response (elog #8255).
PRITMYmotion.png
