ID | Date | Author | Type | Category | Subject
3056 | Tue Jun 8 18:39:36 2010 | rana | Update | PEM | DAQ down
As before, I am unable to get data from the past. With DTT on Allegra I got data from now, but it's unavailable from 1 hour ago. Same problem using mDV on mafalda. I blame Joe again - or the military industrial complex.
Quote:
    Quote:
        Although trends are available, I am unable to get any full data from the past (using DTT or DV). I started the FB's daqd process a few times, but no luck.
        I blame Joe's SimPlant monkeying from earlier today for lack of a better candidate. I checked and the frames are actually on the FB disk, so it's something else.
    I tried running dataviewer and dtt this morning. Dataviewer seemed to be working. I was able to get trends, full data on a 2k channel (seismic channels), and full data on a 16k channel (C1:PEM-AUDIO_MIC1). This was tried for a period 24 hours ago, for a 10 minute stretch.
    I also tried dtt and was able to get 2k and 16k channel data, for example C1:PEM-AUDIO_MIC1. Was this problem fixed by someone last night, or did time somehow fix it?
6397 | Fri Mar 9 20:44:24 2012 | Jim Lough | Update | CDS | DAQ restart with new ini file
DAQ reload/restart was performed at about 1315 PST today. The previous ini file was backed up as c1pem20120309.ini in the /chans/daq/working_backups/ directory.
I set the following to record:
The two JIMS channels at 2048:
[C1:PEM-JIMS_CH1_DQ] Persistent version of the JIMS channel. When the bit drops to zero, indicating that something bad has happened (BLRMS threshold exceeded), it stays at zero for >= the value of the persist EPICS variable.
[C1:PEM-JIMS_CH2_DQ] Non-persistent version of JIMS channel.
And all of the BLRMS channels at 256:
Names are of the form:
[C1:PEM-RMS_ACC1_F0p1_0p3_DQ]
[C1:PEM-RMS_ACC1_F0p3_1_DQ]
On monday I intend to look at the weekend seismic data to establish thresholds on the JIMS channels.
256 was the lowest rate possible according to the RCG manual. The JIMS channels are recorded at 2048 because I couldn't figure out how to disable the decimation filter. I will look into this further. |
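For reference, the corresponding channel blocks in C1PEM.ini presumably look something like the following (field names quoted from memory of the standard daqd .ini format, so they may not match the actual file exactly):

    [C1:PEM-JIMS_CH1_DQ]
    acquire=1
    datarate=2048
    datatype=4

    [C1:PEM-RMS_ACC1_F0p1_0p3_DQ]
    acquire=1
    datarate=256
    datatype=4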
6404 | Tue Mar 13 13:28:31 2012 | Ryan Fisher | Update | CDS | DAQ restart with new ini file
Extra note: This was the ini file that was edited:
/cvs/cds/rtcds/caltech/c1/chans/daq/C1PEM.ini |
16793 | Thu Apr 21 10:35:23 2022 | Koji | Update | CDS | DAQ seemed down
Yesterday, when I worked on the damping servo, I found that none of the DAQ tools (ndscope, dtt, dataviewer, ...) was available. We may need to restart the fb and rt machines. |
4171 | Thu Jan 20 00:39:22 2011 | kiwamu | HowTo | CDS | DAQ setup : another trick
Here is another trick for the DAQ setup when you add a DAQ channel associated with a new front end code.
Once you have set things up properly according to the wiki page (this page), you have to go to
/cvs/cds/rtcds/caltech/c1/target/fb
and then edit the file called master.
This file contains the paths that fb should look at for the daqd initialization.
Add the paths associated with your new front end code to this file, for example:
/opt/rtcds/caltech/c1/chans/daq/C1LSC.ini
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1lsc.par
After editing the file, restart the daqd on fb by the usual commands:
telnet fb 8088
shutdown |
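Putting the steps from this entry together, the whole procedure is roughly the following (a sketch only; paths and commands are the ones quoted above):

    # edit the daqd master file and append the new paths
    cd /cvs/cds/rtcds/caltech/c1/target/fb
    vi master    # add the C1LSC.ini and tpchn_c1lsc.par lines shown above

    # restart daqd so it picks up the new channel list
    telnet fb 8088
    shutdown     # daqd exits here (it normally comes back up on its own with the new configuration)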
16811 | Mon Apr 25 17:24:06 2022 | Anchal | Update | CDS | DAQ still down
I investigated this issue today. At first, it seemed that only the new suspension testpoints were inaccessible. I was able to use diaggui for a measurement on MC2. The DAQ network cable between 1X4 and 1Y1 was tied down and is very taut now (we should relieve this as soon as possible; the best solution is to lay a longer cable over the bridge). My hypothesis is that the DAQ network might have broken while this cable was being tied, and it probably has not come back since then.
The simplest solution would have been to restart the c1su2 models. As I restarted those models though, I found that the c1lsc and c1sus models failed. This is very unusual, as the c1su2 models are independent and share nothing with the other vertex models. So I had to restart all the FE computers eventually. But this did not solve the issue. Worse, now the DAQ isn't working for the vertex machines either.
The next step was to try restarting fb1 itself. We switched off all the FE computers, restarted fb1, stopped the daqd_* processes, reloaded the gpstime module, and restarted the open-mx, mx, nds, and daqd_* processes. But the mx.service failed to load with the following error message:
● mx.service - LSB: starts MX driver for Myrinet card
Loaded: loaded (/etc/init.d/mx)
Active: failed (Result: exit-code) since Mon 2022-04-25 17:18:02 PDT; 1s ago
Process: 4261 ExecStart=/etc/init.d/mx start (code=exited, status=1/FAILURE)
Apr 25 17:18:02 fb1 mx[4261]: Loading mx driver
Apr 25 17:18:02 fb1 mx[4261]: insmod: ERROR: could not insert module /opt/mx/sbin/mx_mcp.ko: File exists
Apr 25 17:18:02 fb1 mx[4261]: insmod: ERROR: could not insert module /opt/mx/sbin/mx_driver.ko: File exists
Apr 25 17:18:02 fb1 systemd[1]: mx.service: control process exited, code=exited status=1
Apr 25 17:18:02 fb1 systemd[1]: Failed to start LSB: starts MX driver for Myrinet card.
Apr 25 17:18:02 fb1 systemd[1]: Unit mx.service entered failed state.
(Ignore the timestamp above, I ran the command again to capture the error message.)
However, I was able to start all the FE models without any errors, and the daqd processes are also all running without showing any errors. Everything is green on the CDS screen with no error messages. The only thing still wrong is mx.service, which is not running.
From my limited knowledge and experience, mx.service is a one-time script that mounts mx devices in /dev and loads the mx driver. I tried running the script /opt/mx/sbin/mx_start_stop:
controls@fb1:/opt/mx/sbin 1$ sudo ./mx_start_stop start
Loading mx driver
insmod: ERROR: could not insert module /opt/mx/sbin/mx_mcp.ko: File exists
insmod: ERROR: could not insert module /opt/mx/sbin/mx_driver.ko: File exists
This gave the same error. On searching a bit online, the "insmod: ERROR: could not insert module" error comes up when the kernel version of the driver does not match the running Linux kernel (whatever that means!). Such deep issues should not appear out of nowhere in a previously perfectly running system. I'll check more into what changed on fb1, network cables, etc. |
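For what it's worth, insmod usually reports "File exists" when the module is already loaded, rather than when there is a genuine version mismatch. So a first sanity check (a guess, not something that has been tried on fb1) would be something like:

    lsmod | grep mx               # see whether mx_driver / mx_mcp are already loaded
    sudo rmmod mx_mcp mx_driver   # unload them if they are
    sudo /etc/init.d/mx start     # then retry the service start
    sudo systemctl status mx.service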
3631 | Thu Sep 30 21:55:31 2010 | rana | Update | CDS | DAQ sys update
It's pretty exciting to see that Joe got Alex to actually use the ELOG. It's proof that even rare events occur if you are patient enough.
1) I fixed the MEDM links to point to the new sitemap.adl in /opt/rtcds. There is a link on the new sitemap which points to the old sitemap so that there is nothing destroyed yet.
2) Some of the fields in the screen are white. These are from the new c1sus processor, not issues with the slow controls. I think it's just stuff that has not yet been created in the C1SUS simulink module.

3) The PZT steering controls are gone. Without this we cannot get the beam down the arm. Must fix before aligning things after the MC. Since the PZT used to be controlled by ASC, we'll have to wire the Piezo Jena PZT controls in from a different VME 4116. Possibly c1iool0's crate?
4) Also, the IPANG and IPPOS are somehow not working right. I guess this is because they are part of the ASC / the old ETMX system. We'll have to wire the IPANG QPD into the new ETMY ADC system if we want to get the initial alignment into the Y-arm correct.
5) I've started migrating things over from the old SITEMAP. Please just use the new SITEMAP. It has a red link to the old one, but eventually everything on the new one will work after Joe, Alex, Kiwamu, and I are done tweaking.
3629 | Thu Sep 30 17:11:01 2010 | alex i | Update | CDS | DAQ system update
The frame builder is timed from the Symmetricom GPS card now, which is getting the IRIGB timecode from the freq. distribution amplifier (from the VME GPS receiver card).
I have adjusted the GPS seconds to match the real GPS time and the DTT seems to be happy: sweeping MC2 MCL filter module produces nice plot.
Test points are working on SUS.
Excitations are working on SUS.
I am leaving the frame builder running and acquiring the data.
Alex
3247 | Mon Jul 19 21:47:36 2010 | rana | Summary | DAQ | DAQ timing test
Since we now have a good measurement of the phase noise of the Marconi locked to the Rb clock, I wanted to use that to check out the old DAQ system:
I used Megan's phase noise setup - Marconi #2 is putting out 11000013 Hz at 13 dBm into the ZP-3MH mixer. Marconi #1 is putting out 3 dBm at 11000000 Hz into the RF input.
The output goes through a 50 Ohm load and then a Mini-Circuits BNC LP filter (either 2 or 5 MHz). Then an SR560 set for low noise, G = 5, AC coupling, 1-pole LP @ 1 kHz.
This SR560 output goes into the channel C1:IOO-MC_DRUM1 (which is sampled at 16384 Hz with ICS-110B after the usual Sander Liu AA chassis containing the INA134s). |
3299 | Tue Jul 27 16:03:36 2010 | rana | Summary | DAQ | DAQ timing test
Quote:
    Since we now have a good measurement of the phase noise of the Marconi locked to the Rb clock, I wanted to use that to check out the old DAQ system:
    I used Megan's phase noise setup - Marconi #2 is putting out 11000013 Hz at 13 dBm into the ZP-3MH mixer. Marconi #1 is putting out 3 dBm at 11000000 Hz into the RF input.
    The output goes through a 50 Ohm load and then a Mini-Circuits BNC LP filter (either 2 or 5 MHz). Then an SR560 set for low noise, G = 5, AC coupling, 1-pole LP @ 1 kHz.
    This SR560 output goes into the channel C1:IOO-MC_DRUM1 (which is sampled at 16384 Hz with ICS-110B after the usual Sander Liu AA chassis containing the INA134s).
This is the 0.3 mHz BW spectrum of this test - as you can see the apparent linewidth (assuming the width is all caused by the DAQ jitter) is comparable to the BW and therefore not resolved.
Basically, the Hanning window function is not sharp enough to do this test and so I will do it offline in Matlab. |
Attachment 1: Untitled.png
16853 | Sat May 14 08:36:03 2022 | Chris | Update | DAQ | DAQ troubleshooting
I heard a rumor about a DAQ problem at the 40m.
To investigate, I tried retrieving data from some channels under C1:SUS-AS1 on the c1sus2 front end. DQ channels worked fine, testpoint channels did not. This pointed to an issue involving the communication with awgtpman. However, AWG excitations did work. So the issue seemed to be specific to the communication between daqd and awgtpman.
daqd logs were complaining of an error in the tpRequest function: error code -3/couldn't create test point handle. (Confusingly, part of the error message was buffered somewhere, and would only print after a subsequent connection to daqd was made.) This message signifies some kind of failure in setting up the RPC connection to awgtpman. A further error string is available from the system to explain the cause of the failure, but daqd does not provide it. So we have to guess...
One of the reasons an RPC connection can fail is if the server name cannot be resolved. Indeed, address lookup for c1sus2 from fb1 was broken:
$ host c1sus2
Host c1sus2 not found: 3(NXDOMAIN)
In /etc/resolv.conf on fb1 there was the following line:
search martian.113.168.192.in-addr.arpa
Changing this to search martian got address lookup on fb1 working:
$ host c1sus2
c1sus2.martian has address 192.168.113.87
But testpoints still could not be retrieved from c1sus2, even after a daqd restart.
In /etc/hosts on fb1 I found the following:
192.168.113.92 c1sus2
Changing the hardcoded address to the value returned by the nameserver (192.168.113.87) fixed the problem.
It might be even better to remove the hardcoded addresses of front ends from the hosts file, letting DNS function as the sole source of truth. But a full system restart should be performed after such a change, to ensure nothing else is broken by it. I leave that for another time. |
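A quick way to spot this kind of mismatch in the future (a sketch using standard commands, nothing 40m-specific) is to compare what the nameserver and /etc/hosts return for each front end:

    for h in c1sus2 c1lsc c1sus c1ioo c1iscex c1iscey; do
        echo "== $h =="
        host $h             # answer from DNS
        getent hosts $h     # answer the resolver libraries actually use (includes /etc/hosts)
        grep -w $h /etc/hosts
    done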
16854 | Mon May 16 10:49:01 2022 | Anchal | Update | DAQ | DAQ troubleshooting
[Anchal, Paco, JC]
Thanks Chris for the fix. We are able to access the testpoints now but we started facing another issue this morning, not sure how it is related to what you did.
- The C1:LSC-TRX_OUT and C1:LSC-TRY_OUT channels are stuck to zero value.
- These were the channels we used until last Friday to align the interferometer.
- These channels are routed through the c1rfm FE model (Reflected Memory model is the name, I think). These channels carry the IR transmission photodiode monitors at the two ends of the interferometer, where they are first logged into the local FEs as C1:SUS-ETMX_TRX and C1:SUS-ETMY_TRY .
- These channels are then fed to C1:SCX-RFM_TRX -> C1:RFM_TRX -> C1:RFM-LSC_TRX -> C1:LSC-TRX and similar for Y side.
- We are able to see channels in the end FE filtermodule testpoints (C1:SUS-ETMX_TRX_OUT & C1:SUS-ETMY_TRY_OUT)
- However, we are unable to see the same signal in c1rfm filter module testpoints like C1:RFM_TRX_IN1, C1:RFM_TRY_IN1 etc
- There is an IPC error shown in CDS FE status screen for c1rfm in c1sus. But we remember seeing this red for a long time and have been ignoring it so far as everything was working regardless.
The steps we have tried to fix this are:
- Restart all the FE models in c1lsc, c1sus, and c1ioo (without restarting the computers themselves) , and then burt restore.
- Restart all the FE models in c1iscex, and c1iscey (only c1iscey computer was restarted) , and then burt restore.
These steps did not fix the issue. Since we have the testpoints (C1:SUS-ETMX_TRX_OUT & C1:SUS-ETMY_TRY_OUT) for now to monitor the transmission levels, we are going ahead with our upgrade work without resolving this issue. Please let us know if you have any insights. |
16855 | Mon May 16 12:59:27 2022 | Chris | Update | DAQ | DAQ troubleshooting
It looks like the RFM problem started a little after 2am on Saturday morning (attachment 1). It’s subsequent to what I did, but during a time of no apparent activity, either by me or others.
The pattern of errors on c1rfm (attachment 2) looks very much like this one previously reported by Gautam (errors on all IRFM0 ipcs). Maybe the fix described in Koji’s followup will work again (involving hard reboots). |
Attachment 1: timeseries.png
Attachment 2: err.png
3057 | Tue Jun 8 20:52:25 2010 | josephb | Update | PEM | DAQ up (for the moment)
As a test, I did a remote reboot of both Megatron and c1iscex, to make sure there was no code running that might interfere with the dataviewer. Megatron is behind a firewall, so I don't see how it could be interfering with the frame builder. c1iscex was only running a test module from earlier today when I was testing the multi-filter matrix part. No daqd or similar processes were running on this machine either, but it is not behind a firewall at the moment.
Neither of these seemed to affect the lack of past data. I note the error message from dataviewer was "read(); errno=9".
Going to the frame builder machine, I ran dmesg. I get some disturbing messages from May 26th and June 7th. There are 6-7 of these pairs of lines for each of these days, spread over the course of about 30 minutes.
Jun 7 14:05:09 fb ufs: [ID 213553 kern.notice] NOTICE: realloccg /: file system full
Jun 7 14:11:14 fb last message repeated 19 times
There's also one:
Jun 7 13:35:14 fb syslogd: /usr/controls/main_daqd.log: No space left on device
I went to /usr/controls/ and looked at the file. I couldn't read it with less; it errored with "Value too large for defined data type". It turns out the file was 2.3 GB and had not been updated since June 7th. There were also a bunch of core dump files from May 25th, and a few more recent. However, the ones from May 25th were somewhat large, half a gig each or so. I decided to delete the main_daqd.log file as well as the core files.
This seems to have fixed the data history for the moment (at least with one 16k channel I tested quickly). However, I'm now investigating why that log file seems to have filled up, and see if we can prevent this in the future.
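As a quick way of spotting this kind of thing before the disk actually fills (generic commands, not anything already set up on fb), one could periodically check:

    df -k /usr/controls                            # free space on the partition holding the daqd logs
    du -sk /usr/controls/* | sort -n | tail        # the largest files/directories, e.g. main_daqd.log
    find /usr/controls -name 'core*' -mtime +7     # stale core dumps that can probably be deleted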
Quote:
    As before, I am unable to get data from the past. With DTT on Allegra I got data from now, but it's unavailable from 1 hour ago. Same problem using mDV on mafalda. I blame Joe again - or the military industrial complex.
    Quote:
        Quote:
            Although trends are available, I am unable to get any full data from the past (using DTT or DV). I started the FB's daqd process a few times, but no luck.
            I blame Joe's SimPlant monkeying from earlier today for lack of a better candidate. I checked and the frames are actually on the FB disk, so it's something else.
        I tried running dataviewer and dtt this morning. Dataviewer seemed to be working. I was able to get trends, full data on a 2k channel (seismic channels), and full data on a 16k channel (C1:PEM-AUDIO_MIC1). This was tried for a period 24 hours ago, for a 10 minute stretch.
        I also tried dtt and was able to get 2k and 16k channel data, for example C1:PEM-AUDIO_MIC1. Was this problem fixed by someone last night, or did time somehow fix it?
9422 | Fri Nov 22 09:54:22 2013 | Steve | Update | CDS | DAQ?
Jamie, I think the computers know that you are away. c1lsc keeps going down.
The short time plots are correct. |
Attachment 1: comp8d.png
9423 | Fri Nov 22 14:21:43 2013 | Jamie | Update | Computer Scripts / Programs | DAQ?
Quote:
    Jamie, I think the computers know that you are away. c1lsc keeps going down.
    The short time plots are correct.
Is there some indication from the attached image that there is a problem with c1lsc? I see some dropouts in the channels you're plotting, but those are not c1lsc channels.
The channels with the dropouts are, I think, derived channels, as opposed to ones that are generated on the front end. Therefore, they could have been affected by the c1auxey outages from earlier in the week. |
1737 | Mon Jul 13 15:14:57 2009 | Alberto | Update | Computers | DAQAWG
Today Alex came over, performed his magic rituals on the DAQAWG computer and fixed it. Now it's up and running again.
I asked him what he did, but he's not sure what fixed it. He couldn't remember exactly, but he said that he poked around, did something somewhere somehow, maybe tinkered with tpman, and eventually the computer came up again.
Now everything is fine. |
1752 | Wed Jul 15 17:18:24 2009 | Jenne | DAQ | Computers | DAQAWG gone, now back
Yet again, the DAQAWG flipped out for an unknowable reason. Following the order of restart activities listed on the Wiki, I keyed the crate and nothing really happened, then I hit the physical reset button and nothing happened, and then I did the 'telnet....vmeBusReset', and a couple of minutes later it was all good again. |
12152 | Tue Jun 7 11:12:47 2016 | jamie | Update | CDS | DAQD UPGRADE WORK UNDERWAY
I am re-starting work on the daqd upgrade again now. Expect the daqd to be offline for most of the day. I will report progress. |
12155 | Tue Jun 7 20:49:50 2016 | jamie | Update | CDS | DAQD work ongoing
Summary: new daqd code running overnight test on fb1. Stability issues persist.
The code is from Keith's "tests/advLigoRTS-40m" branch, which is a branch of the current trunk. It's supposed to include patches to fix the crashes when multiple frame types are written to disk at the same time. However, the issue is not fixed:
2016-06-07_20:38:55 about to write frame @ 1149392336
2016-06-07_20:38:55 Begin Full WriteFrame()
2016-06-07_20:38:57 full frame write done in 2seconds
2016-06-07_20:39:11 about to write frame @ 1149392352
2016-06-07_20:39:11 Begin Full WriteFrame()
2016-06-07_20:39:13 full frame write done in 2seconds
2016-06-07_20:39:27 about to write frame @ 1149392368
2016-06-07_20:39:27 Begin Full WriteFrame()
2016-06-07_20:39:29 full frame write done in 2seconds
2016-06-07_20:39:43 about to write second trend frame @ 1149391800
2016-06-07_20:39:43 Begin second trend WriteFrame()
2016-06-07_20:39:43 about to write frame @ 1149392384
2016-06-07_20:39:43 Begin Full WriteFrame()
2016-06-07_20:39:44 full frame write done in 1seconds
2016-06-07_20:39:59 about to write frame @ 1149392400
2016-06-07_20:40:04 Begin Full WriteFrame()
2016-06-07_20:40:04 Second trend frame write done in 21 seconds
2016-06-07_20:40:14 [Tue Jun 7 20:40:14 2016] main profiler warning: 1 empty blocks in the buffer
2016-06-07_20:40:15 [Tue Jun 7 20:40:15 2016] main profiler warning: 0 empty blocks in the buffer
2016-06-07_20:40:16 [Tue Jun 7 20:40:16 2016] main profiler warning: 0 empty blocks in the buffer
2016-06-07_20:40:17 [Tue Jun 7 20:40:17 2016] main profiler warning: 0 empty blocks in the buffer
2016-06-07_20:40:18 [Tue Jun 7 20:40:18 2016] main profiler warning: 0 empty blocks in the buffer
2016-06-07_20:40:19 [Tue Jun 7 20:40:19 2016] main profiler warning: 0 empty blocks in the buffer
2016-06-07_20:40:20 [Tue Jun 7 20:40:20 2016] main profiler warning: 0 empty blocks in the buffer
2016-06-07_20:40:21 [Tue Jun 7 20:40:21 2016] main profiler warning: 0 empty blocks in the buffer
2016-06-07_20:40:22 [Tue Jun 7 20:40:22 2016] main profiler warning: 0 empty blocks in the buffer
2016-06-07_20:40:23 [Tue Jun 7 20:40:23 2016] main profiler warning: 0 empty blocks in the buffer
This failure comes when a full frame (1149392384+16) is written to disk at the same time as a second trend (1149391800+600). It seems like every time this happens daqd crashes.
I have seen other stability issues as well, maybe caused by mx flakiness, or some sort of GPS time synchronization issue caused by our lack of IRIG-B cards. I'm going to look to see if I can get the GPS issue taken care of so we take that out of the picture.
For the last couple of hours I've only seen issues with the frame writing every 20 minutes, when the full and second trend frames happen to be written at the same time. Running overnight to gather more statistics. |
12156 | Wed Jun 8 08:34:55 2016 | jamie | Update | CDS | DAQD work ongoing
38 restarts overnight. Problem definitely not fixed. I'll be reverting back to old daqd and fb this morning. Then regroup and evaluate options. |
10481 | Wed Sep 10 02:26:20 2014 | Jenne | Update | LSC | DARM -> AS55 optickle
Q has pointed out that we expect a sign flip in the AS55 signal for DARM as we reduce the CARM offset in the PRFPMI case. Koji also mentioned that the SRC will help broaden the DARM linewidth. I wanted to check and think about these things with my Optickle simulation. Q is working on confirming my results with Mist.
The simulation situation:
* The demod phase for AS55 is set in the MICH-only case so that the MICH signal is maximized in the Q-phase. I do not change the demod phase at all in these simulations.
* Cavity lengths (arms, recycling cavities) are the measured lengths.
* I look at AS55 I and Q as DARM sensors (i.e. I'm doing DARM sweeps) as a function of CARM offset, for both PRFPMI and DRFPMI cases.
Spoiler alert! Conclusions:
* In the PRFPMI case, the DARM signal shows up with approximately equal strength in the I and Q phases, so we suffer only about a factor of 2 if we do not re-optimize the demod angle for AS55.
* In the DRFPMI case, the DARM signal is a factor of 1,000 smaller in the Q-phase than the I-phase, which means that the ideal demod phase angle has moved by about 90 degrees from the MICH-only case. We must either use the I-phase signal or change the demod phase by 90 degrees in order to acquire lock.
* In the PRFPMI case, there is a sign flip for DARM on the AS55 PD around 100pm, so we don't want to use AS55 for DARM until we are well inside 50pm, and aren't going to fluctuate out of that range.
* In the DRFPMI case, there is no such sign flip, at least out to 1nm, so we can use AS55 for DARM as soon as we see a viable signal. This is super awesome. The caveat is that the gain changes significantly as we reduce the CARM offset, so we either need a UGF servo (eventually) or careful watching (for now).
* The AS55 linear(-ish) range is much broader in the dual recycled case, which is yet another reason why getting DARM on AS55 will be easier for DRFPMI.
Why didn't we do it already?
* To put the SRM QPD back, we'd also have to move Steve/EricG's laser. Since I had other things to do, I left the setup for tonight, but I think I will want it for tomorrow night.
* Monday night (and tonight) we can pretty reliably get DARM onto AS55Q for the PRFPMI case, and I don't know what the cause has been for my locklosses, so I thought I'd try to figure that out first.
Plots!
First up, the current transition we've been trying to handle, PRFPMI DARM to AS55Q. I also plot AS55I, and we see that the signals are roughly the same magnitude (the x axis isn't the same between these plots! sorry), so we aren't screwed if we don't change the demod phase angle. We'll be better off once we can do a re-optimization, but this is assuming we are stuck (at first) with our MICH-only demod phase angle.
 
Next up, the same plots, but for the DRFPMI case. Note here that there is a factor of about 1000 in the y-axis scales, and also that there is no switch in the sign of the zero-crossing slope for the I-phase.
 
And here is the same data (DRFPMI case), but zoomed out for the Q-phase, so you can see the craziness of this phase. Again, this is much smaller than the signals in the I-phase, so I'm not too worried.

Game plan:
* Steve leaves the SRM oplev back in its nominal location (we can worry about aligning the mirror, and aligning the beam on the PD, but please put it back approximately where it came from).
* Try DRMI + 2 arm locking, which I don't think we have ever actually done before. Hopefully there won't be any tricks, and we can get to an equivalent place and successfully get DARM to AS55.
* .... Keep going? |
1549 | Tue May 5 14:02:16 2009 | rob | Update | LSC | DARM DC response varies with DARM offset
Note the effect of quadrature rotation for small offsets. |
Attachment 1: DARM_DARM_AS_DC_2.png
Attachment 2: DARM_DARM_AS_DC_3.png
Attachment 3: DARM_DARM_AS_DC_2.pdf
Attachment 4: DARM_DARM_AS_DC_3.pdf
13799 | Sun Apr 29 22:53:06 2018 | gautam | Update | General | DARM actuation estimate
Motivation:
We'd like to know how much actuation is required on the ETMs to lock the DARM degree of freedom. The "disturbance" we are trying to cancel is the seismic driven length fluctuation of the arm cavity. In order to try and estimate what the actuation required will be, we can use data from POX/POY locks. I'd collected some data on Friday which I looked at today. Here are the results.
Method:
- I collected the error and control signals for both arm cavities while they were locked to the PSL.
- Knowing the POX/POY sensing response and the actuator transfer functions, we can back out the free running displacements of the two arm cavities.
- I used numbers from the cal filters, which may not be accurate (although the POX sensing response was recently measured).
- But the spectra computed using this method seem reasonable, and the X and Y arm ASDs line up around 1 Hz (albeit on a log scale).
- In this context, what I call L_X is really a proxy for the free-running arm length fluctuation (and similarly for L_Y), so I think the algebra works out correctly.
- I didn't include any of the violin mode/AA/AI filters in this calculation.
- Having calculated the arm cavity displacements, I computed "DARM" as L_Y - L_X and then plotted its ASD.
- For good measure, I also added the quadrature sum of 4 optics' displacement noise as per the 40m GWINC model - there seems to be a pretty large discrepancy, not sure why.
If this approach looks legit, I will compute the control signal that is required to stabilize this level of disturbance using the DARM control loop, and see what the maximum permissible series resistance is that we can use in order to realize this stabilization. We can then compare various scenarios like different whitening schemes, with/without Barry puck etc, and look at coil driver noise levels for each of them. |
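For concreteness, here is a minimal sketch (Python, with made-up variable names) of the kind of reconstruction described above: the free-running arm motion is the in-loop error signal converted to displacement by the sensing gain, plus the control signal converted by the actuator transfer function, and DARM is then the difference of the two arms.

    import numpy as np

    def free_running_disp(err, ctrl, C_of_f, A_of_f, fs):
        # err, ctrl : in-loop error and control signal time series
        # C_of_f    : sensing gain [cts/m] as a function of frequency
        # A_of_f    : actuator transfer function [m/ct] as a function of frequency
        f = np.fft.rfftfreq(len(err), d=1.0 / fs)
        ERR = np.fft.rfft(err)
        CTRL = np.fft.rfft(ctrl)
        # residual displacement seen by the sensor + displacement removed by the actuator
        L = ERR / C_of_f(f) + CTRL * A_of_f(f)
        return f, L

    # e.g. f, Lx = free_running_disp(pox_err, pox_ctrl, Cx, Ax, 16384)
    #      f, Ly = free_running_disp(poy_err, poy_ctrl, Cy, Ay, 16384)
    # The DARM proxy is then Ly - Lx, and its ASD follows with the usual normalization.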
Attachment 1: darmEst.pdf
13805 | Tue May 1 19:37:50 2018 | gautam | Update | General | DARM actuation estimate
Here is an updated plot - the main difference is that I have added a trace that is the frequency-domain Wiener filter subtraction of the coherent power between the L_X and L_Y time series. I tried reproducing the calculation with the time-domain Wiener filter subtraction as well, using half of the time series (i.e. 5 mins) to train the Wiener filter (with L_X as target and L_Y as witness), but I don't get any subtraction above 5 Hz on the half of the data that is a test data set. Probably I am not doing the pre-filtering correctly - I downsampled the signal to 1 kHz, weighted it by low-passing the signal above 40 Hz, and trained the Wiener filter on the resulting time series. But this frequency-domain Wiener filter subtraction should be at least a lower bound on DARM - indeed, it is slightly lower everywhere than simply taking the time-domain subtraction of the two data streams.
To do:
- Re-measure calibration numbers used.
- Redo calculation once the numbers have been verified.
Putting a slightly cleaned up version of this plot in now - I'm only including the coherence-inferred DARM estimate now, instead of the straight-up time-domain subtraction. So this is likely to be an underestimate. At low (<10 Hz) frequencies, the time-domain computation lines up fairly well, but I suspect that I am getting huge amounts of spectral leakage (see Attachment #2) in the way I compute the spectrum using scipy's filtering routine (once the Wiener filter has been computed). Note that in Attachment #2, I didn't break up the data into a training/testing set, as in this case we just care about the one-off offline performance in order to get an estimate of DARM.
The Python version of the Wiener filter generating code only supports [b,a] output of the digital filter; an SOS filter might give better results. Need to figure out the least painful way of implementing the low-noise digital filtering in Python... |
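For reference, the frequency-domain (coherence-inferred) estimate mentioned above can be written in a few lines - a sketch only, with arbitrary parameter choices:

    import numpy as np
    from scipy import signal

    fs = 1024          # downsampled rate [Hz], arbitrary choice for this sketch
    nperseg = 16 * fs  # ~16 s segments

    # L_X is the target, L_Y the witness (time series of equal length)
    f, Pxx = signal.welch(L_X, fs=fs, nperseg=nperseg)
    f, coh = signal.coherence(L_X, L_Y, fs=fs, nperseg=nperseg)

    asd_X = np.sqrt(Pxx)
    # lower bound on what a perfect linear (Wiener) subtraction of L_Y could leave behind
    asd_residual = asd_X * np.sqrt(1.0 - coh)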
Attachment 1: darmEst.pdf
Attachment 2: darmEst_time.pdf
13822 | Mon May 7 16:23:06 2018 | gautam | Update | General | DARM actuation estimate
Summary:
Using the Wiener filter estimate of the DARM disturbance we will have to cancel, I computed what the control signal would look like for a few scenarios. Our DACs are 16-bit, +/-10V (i.e. +/-32,768 cts-pk, or ~23,000 cts RMS). We need to consider the shape of the de-whitening filter to conclude whether it is feasible to increase the series resistance by x10 or not.
Some details:
Note that in this first computation, I have not considered
- Actuation range required by other loops (e.g. local damping, Oplev etc).
- At some point, I need to add the 2P/c radiation pressure disturbance as well.
- The control signal is calculated assuming we are actuating equally on both ETMs (but with opposite phase).
- RMS computation is done from 30 Hz downwards, as above 30 Hz, I think the estimate from the previous elog is not true seismic displacement.
- De-whitening filters (or digital whitening), which will be required to suppress DAC noise at 100Hz.
- DARM loop shape, specifically low-pass to avoid sensing noise injection. In this calculation, I just used the pendulum transfer function.
While doing this calculation, I have accounted for the fact that right now, the analog de-whitening filters in the ETM drive chain have a x3 gain, which we will remove. Actually, this is an assumption - I have not yet measured a transfer function; maybe I'll do one channel at EY to confirm. Also, the actuator gains themselves need to be confirmed.
As I was looking at the coil driver schematic more closely, I realized that there are actually two separate series resistances, one for the fast controls path, and another for the DC bias voltage from the slow ADCs. So I think we have been underestimating the Johnson noise of the coil drivers by sqrt(2). I've also attached screenshots of the IFOalign and MCalign screens. The two ITMs and ETMX have pitch DC bias values that are compatible with a x10 increase of the series resistance. But even so, we will have ~3pA/rtHz per coil from the two resistances.
gautam 8pm May8: Seems like I had confirmed the x3 gain in the EX de-whitening board when Johannes and I were investigating the AI board offset. |
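As a sanity check of the ~3 pA/rtHz number, assuming (an assumption, not read off the schematic) that both the fast-path and bias-path series resistors end up around 4.3 kOhm after the x10 increase:

    import numpy as np

    kB, T = 1.38e-23, 300          # Boltzmann constant [J/K], temperature [K]
    R_fast, R_bias = 4.3e3, 4.3e3  # assumed series resistances [Ohm]

    i_fast = np.sqrt(4 * kB * T / R_fast)      # Johnson current noise of one path
    i_bias = np.sqrt(4 * kB * T / R_bias)
    i_total = np.sqrt(i_fast**2 + i_bias**2)   # the two paths add in quadrature
    print(i_total)                             # ~2.8e-12 A/rtHz per coil, i.e. ~3 pA/rtHz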
Attachment 1: darmProj.pdf
Attachment 2: 37.png
Attachment 3: MCalign_20180507.png
1514 | Fri Apr 24 03:57:30 2009 | Yoichi | Update | Locking | DARM demod phase
Tonight, I was able to go up to arm power = 40 by tweaking the DARM demodulation phase.
I think the DARM loop became unstable because the demodulation phase was not right and the error signal contained some junk from I-phase.
I did not do any sophisticated demodulation phase optimization. Rather I just tweaked the phase so that the dark port image becomes stable.
I will do more careful demodulation phase tuning next time. |
1515 | Fri Apr 24 04:38:49 2009 | Yoichi | Update | Locking | DARM demod phase
Quote:
    Tonight, I was able to go up to arm power = 40 by tweaking the DARM demodulation phase.
    I think the DARM loop became unstable because the demodulation phase was not right and the error signal contained some junk from I-phase.
    I did not do any sophisticated demodulation phase optimization. Rather I just tweaked the phase so that the dark port image becomes stable.
    I will do more careful demodulation phase tuning next time.
In the next try, I was actually able to go up to arm power = 70 stably.
At this power level we are ready for the RF CARM hand off. |
1516 | Fri Apr 24 11:34:32 2009 | rob | Update | Locking | DARM demod phase
Quote:
    Quote:
        Tonight, I was able to go up to arm power = 40 by tweaking the DARM demodulation phase.
        I think the DARM loop became unstable because the demodulation phase was not right and the error signal contained some junk from I-phase.
        I did not do any sophisticated demodulation phase optimization. Rather I just tweaked the phase so that the dark port image becomes stable.
        I will do more careful demodulation phase tuning next time.
    In the next try, I was actually able to go up to arm power = 70 stably.
    At this power level we are ready for the RF CARM hand off.
There's actually code in place in the LSC to dynamically adjust the demod phase for AS1. I've never made much use of it, because it's possible to get around the problem with some gain tweaking if you start at the right phase, or because I did the DC readout handoff earlier.
Attached is a cartoon showing how the demod phase at the dark port changes as the CARM offset is decreased. |
Attachment 1: darm_phase_rotate.png
1519 | Fri Apr 24 17:26:57 2009 | Yoichi | Update | Locking | DARM demod phase
Quote:
    There's actually code in place in the LSC to dynamically adjust the demod phase for AS1. I've never made much use of it, because it's possible to get around the problem with some gain tweaking if you start at the right phase, or because I did the DC readout handoff earlier.
    Attached is a cartoon showing how the demod phase at the dark port changes as the CARM offset is decreased.
The cartoon is very nice.
I actually changed the demod phase continuously as the CARM offset was reduced to get up to arm power = 70.
As the CARM offset is changed, not only the DARM signal gain but also the phase margin around 100Hz changes if you use a fixed demodulation phase.
So it was necessary to change the demodulation phase to keep the DARM loop stable. |
10621 | Fri Oct 17 03:05:00 2014 | ericq | Update | LSC | DARM locked on DC Transmission difference
I've been able to repeatedly get off of ALS and onto (TRY-TRX)/(TRY+TRX). Nevertheless, lock is lost between arm powers of 10 and 20.
I do the transition at the same place as the CARM->SqrtInv transition, i.e. arm powers of about 1.0. Jenne started a script for the transition, and I've modified it with settings that I found to work, and integrated it into the carm_cm_up script. I've also modified carm_cm_down to zero the DARM normalization elements.
I was thwarted repeatedly by the frequent crashing of daqd, so I was not able to take OLTFs of CARM or DARM, which would've been nice. As it was, I tuned the DARM gain by looking for gain peaking in the error signal spectrum. I also couldn't really get a good look at the lock loss events. Once the FB is behaving properly, we can learn more.
Turning over to difference in transmission as an error signal naturally squashes the difference in arm transmissions:

I was able to grab spectra of the error and control signals, though I did not take the time to calibrate them... We can see the high frequency sensing noise for the transmission derived signals fall as the arm power increases. The low frequency mirror motion stays about the same.

So, it seems that DARM was not the main culprit in breaking lock, but it is still gratifying to get off of ALS completely, given its out-of-loop noise's strong dependence on PSL alignment. |
10737 | Wed Nov 26 22:24:28 2014 | Jenne | Update | LSC | DARM loop improved, other work
[Jenne, Koji]
We have done several things this evening, which have incrementally helped the lock stability. We are still locking CARM and DARM on ALS, and PRMI on REFL165.
- Saw peaks in CARM error signal at 24Hz and 29 Hz, so put in moderate-Q resonant gains.
- DARM at low frequency was much noisier than CARM. We discovered that we had put in a nice boost at some point for CARM in FM1, but hadn't transferred that over to DARM. Copying FM1 from CARM to DARM (so replacing an integrator with a boost below ~10Hz) dropped the DARM noise down to match the CARM noise at low frequencies.
- Koji noticed that we were really only illuminating one quadrant of the Xend QPD, so we aligned both trans QPDs. Also, I reset the transmission normalization so that all 4 diodes (Thorlabs and QPDs at each end) all read 1 with good alignment.
- We've got some concerns about the ASS. It needs some attention and tuning.
- The X ASS needs an overall gain of about 0.3. This may be because I forgot to put the new lower gains into the burt restore after Rana's oplev work, or this may be something new.
- When Koji did a very careful arm alignment, we turned on the Y ASS and saw it methodically reduce the transmitted power. Mostly it was moving ETMY in yaw. Why is the DC response of the ASS not good? The oplev work shouldn't have affected DC.
- We don't like the way the ASS offloads the alignment. Maybe there's a better way to do it overall, but one thought is to have an option to offload (for long-term alignment fixing, so maybe once a day) and another option to just freeze the current output (for the continual tweak-ups that we do throughout the evening). We'd want the ASS to start up again with these frozen values, and not clear them.
- ETMY was being fussy, in the same way that ETMX had been for the last few months. I went down to squish the cables, and found that it was totally not strain-relieved, and that the cable was pulling on the connector. I have zip tied the cable to the rack so that it's not pulling anymore.
- At high arm powers, it is hard to see what is going on at the AS port because there is so much light. Koji has added an ND filter to the AS camera so that we can more easily tweak alignment to improve the contrast.
Something that has been bothering me the last few days is that early in the evening, I would be able to get to very high arm powers, but later on I couldn't. I think this has to do with setting the contrast at the AS port separately for the sideband versus the carrier. I had been minimizing the AS port power with the arms held off resonance, PRMI locked. But, this is mostly sideband. If instead I optimize the Michelson fringes when the arms are held with ALS at arm powers of 1, and PRM is still misaligned, I end up with much higher arm powers later. Some notes about this though: most of this alignment was done with the arm cavity mirrors, specifically the ETMs, to get the nice Michelson fringes. When the PRM is restored and the PRMI locked, the AS port contrast doesn't look very good. However, when I leave the alignment alone at this point, I get up to arm powers above 100, whereas if I touch the BS, I have trouble getting above 50.
Around GPS time 1101094920, I moved the DARM offset after optimizing the CARM offset. We were able to see a pretty nice zero crossing in AS55, although that wasn't at the same place as the ALS diff zero offset (close though). At this time, the arm powers got above 250, and TRY claimed almost 200. These are the plots below, first as a wide-view, then zoomed in. During this time, PRCL still has a broadband increase in noise when the arm powers are high, and CARM is seeing a resonance at a few tens of Hz. But, we can nicely see the zerocrossing in AS55, so I think there's hope of being able to transition DARM over.




Now, the same data, but zoomed in more.




During the 40m meeting, we had a few ideas of directions to pursue for locking:
- Look into using POX or POY as a proxy for POP and instead of REFL, for CARM control. Maybe we have some nice juicy SNR.
- Check the linearity of our REFL signals by holding the arms on (or close to) resonance, then do a swept sine exciting CARM ctrl and taking a transfer function to the RF signals.
- Q is going to look into the TRX QPD, since he thought it looked funny last week, although this may no longer be necessary after we put the beam at the center of the QPD.
- Koji had a thought for making it easier to blend the CARM error signals. What if we put a pole into the ALS CARM signals at the place where the final coupled cavity pole will be, and then compensate for this in the CARM loop. Since any IR signals will naturally have this pole, we want the CARM loop to be stable when it's present.
- Diego tells us that the Xarm IR beatnote is basically ready to go. We need to see how big the peak is so we can put it into the frequency counter and read it out via EPICS. The freq counter wants at least -15dBm, so we may need an amplifier.
15350 | Tue May 26 02:37:19 2020 | gautam | Update | LSC | DARM loop measurement and fitting
Summary:
In order to estimate the free-running DARM displacement noise, I measured the DARM OLTF using the usual IN1/IN2 prescription. The measured data was then used to fit some model parameters for a loop model that can be used over a larger frequency range.
Details:
- Attachment #1 shows an overlay of the measured and modelled TFs.
- Attachment #2 shows the various components that went into building up this model.
- The digital AA and AI filter coefficients were taken from the RTCDS code.
- The analog AA and AI filter zpks were taken from here and here respectively.
- CDS filters were taken from the enabled filter banks. The 20Hz : 0Hz z:p filter in the CARM_B path is also accounted for, as are the violin-mode notches.
- Pendulum TF is just 1/f^2; the overall scaling is unimportant because it will be fitted (in combination with the overall scaling uncertainty on the DC optical gain), but I used a value of 10 nm/f^2 which should be in the right ballpark.
- The optical gain includes the DARM pole at ~4.5 kHz for this config.
- With all these components, to make the measurement and fit line up, I added two free parameters - an overall gain, and a delay.
- NLSQ minimizer was used to find the best-fit values for these parameters.
- I'm not sure what to make of the relatively large disagreement between measurement and model below 100 Hz - I'm pretty sure I got all the CDS filters included...
- Moreover, I don't have a good explanation for why the best-fit delay is 400 us. One RTCDS clock cycle is only 60 us, and even with an extra clock cycle for the RFM transfer, I still can't get up to such a high delay...
In summary, the UGF is ~150 Hz and phase margin is ~30 deg. This loop would probably benefit from some low-pass filter being turned on. |
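The gain/delay fit itself is simple enough to sketch here (Python; the measured data f_meas, G_meas and the loop model G_model are placeholders standing in for the quantities described above):

    import numpy as np
    from scipy.optimize import least_squares

    # f_meas: measurement frequencies [Hz]; G_meas: measured complex OLTF
    # G_model(f): complex loop model built from the CDS/analog filters, pendulum TF, optical gain

    def residuals(x, f, G_meas, G_model):
        gain, tau = x
        G_fit = gain * G_model(f) * np.exp(-2j * np.pi * f * tau)
        err = G_fit / G_meas - 1.0            # complex fractional error
        return np.concatenate([err.real, err.imag])

    x0 = [1.0, 200e-6]                         # initial guess: unity gain, 200 us delay
    res = least_squares(residuals, x0, args=(f_meas, G_meas, G_model))
    best_gain, best_delay = res.x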
Attachment 1: DARM_TF.pdf
Attachment 2: DARM_TF_breakdown.pdf
10864 | Wed Jan 7 09:44:33 2015 | ericq | Update | LSC | DARM phase budget
As Jenne mentioned, I created a model of the DARM OLG to see why we have so little phase margin. However, it turns out I can explain the phase after all.
Chris sent me his work for the aLIGO DARM phase budget, which I adapted for our situation. Here's a stacked-area plot that shows the contributions of various filters and delays to our phase margin, and a real measurement from a few days ago.

This isn't so great! Informed by Chris's model, the digital delays look like: (Here I'm only listing pure delays, not phase lags from filters)
- 64k cycle (End IOP)
- 16k cycle (End isce[x/y])
- 16k cycle x 2 (end to LSC through RFM) [See ELOG 10811]
- 16k cycle (LSC)
- 16k cycle (LSC to SUS through dolphin) [See ELOG 9881]
- 16k cycle (SUS)
- 16k cycle x2 (SUS to end through RFM)
- 16k cycle (End isce[x/y])
- 64k cycle (SUS IOP)
- DAC zero order hold
This adds up to about 570usec, 20.5 degrees at 100Hz, largely due to the sheer number of computer hops the transmission loops involve.
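As a quick check of that number: a pure delay of t seconds costs 360*f*t degrees of phase, so 570 us at 100 Hz gives 360 x 100 x 570e-6 ≈ 20.5 degrees, consistent with the total quoted above.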
As a check, I divided the measured OLG by my model OLG, to see if there is any shape to the residual, that my model doesn't explain. It looks like it fits pretty well. Plot:

So, unless we undertake a bunch of computer work, we can only improve our transmission loops through our control filter design.
Everything I used to generate these plots is attached. |
Attachment 3: 2015-01-DARMphase.zip
10868 | Wed Jan 7 13:39:42 2015 | Chris | Update | LSC | DARM phase budget
I think the dolphin and RFM transit times are double-counted in this budget. As I understand it, all IPC transit times are already built in to the cycle time of the sending model. That is, the sending model is required to finish its computational work a little bit early, so there's time left to transmit data to the receivers before the start of the next cycle. Otherwise you get IPC errors. (This is why the LSC models at the sites can't use the last ~20 usec of their cycle without triggering IPC errors. They have to allow that much time for the RFM to get their control signals down the arms to the end stations.)
For instance, the delay measurement in elog 9881 (c1als to c1lsc via dolphin) shows only the c1lsc model's own 61 usec delay. If the dolphin transfer really took an additional cycle, you would expect 122 usec.
And in elog 10811 (c1scx to c1rfm to c1ass), the delay is 122 usec, not because the RFM itself adds delay, but because an extra model is traversed.
Bottom line: there may still be some DARM phase unaccounted for. And it would definitely help to bypass the c1rfm model, as suggested in 9881. |
1575 | Tue May 12 01:11:55 2009 | Yoichi | Update | LSC | DARM response (DC Readout)
I measured the DARM response with DC readout.
This time, I first measured the open loop transfer function of the X single arm lock.
The open loop gain (Gx) can be represented as a product of the optical gain (Cx), the filter (Fx), and the suspension response (S), i.e. Gx = Cx*Fx*S.
We know Fx because this is the transfer function of the digital filters. Cx can be modeled as a simple cavity pole, but we need to know the finesse to calculate it.
In order to estimate the current finesse of the XARM cavity, I ran the armLoss script, which measures the ratio of the reflected light power between the locked and the unlocked state. Using this ratio and the designed transmissivity of the ITMX (0.005), I estimated the round trip loss in the XARM, which was 170 ppm. From this number, the cavity pole was estimated to be 1608Hz.
Using the measured Gx, the knowledge of Fx and the estimated Cx, I estimated the ETMX suspension response S, which is shown in the first attachment.
Note that this is not a pure suspension response. It includes the effects of the digital system time delay, the anti-imaging and anti-aliasing filters and so on.
Now the DARM open loop gain (Gd) can also be represented as a product of the optical gain (Cd), the filter (Fd) and the suspension response (S).
Since the actuations are applied again to the ETMs and we think ETMX and ETMY are quite similar, we should be able to use the same suspension response as XARM for DARM. Therefore, using the knowledge of the digital filter shape and the measured open loop gain, we can compute the DARM optical gain Cd.
The second attachment shows the estimated DARM response along with an Optickle prediction.
The DARM loop gain was measured with darm_offset_dc = 350. Since we haven't calibrated the DARM signal, I don't know how many meters of offset this number corresponds to. The Optickle prediction was calculated using a 20pm DARM offset. I chose this to make the prediction look similar to the measured one, though they look quite different around the RSE peak. The input power was set to 1.7W in the Optickle model (again this is just my guess).
It looks as if the measured DARM response is skewed by an extra low pass filter at high frequencies. I don't know why this is so. |
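For reference, the cavity pole estimate above can be reproduced with the usual high-finesse cavity formulas (a sketch; the arm length here is an assumed ~37.8 m):

    import numpy as np

    c = 299792458.0      # speed of light [m/s]
    L = 37.8             # assumed arm length [m]
    T_itm = 0.005        # design ITMX transmissivity
    loss = 170e-6        # estimated round-trip loss

    finesse = 2 * np.pi / (T_itm + loss)   # ~1215 (ETM transmission neglected)
    f_pole = c / (4 * L * finesse)         # = FSR / (2 * finesse)
    print(finesse, f_pole)                 # ~1215, ~1.6 kHz, close to the 1608 Hz quoted above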
Attachment 1: SUS_Resp.png
Attachment 2: DARM_Resp.png
11606 | Wed Sep 16 15:04:33 2015 | ericq | Summary | LSC | DC PD Whitening Board Fixed
Quote:
    Tonight we noticed that the REFL_DC signal has gone bipolar, even though the whitening gain is 0 dB and the whitening filter is requested to be OFF.
Fixed! I noticed that whitening gain changes weren't having any effect on CM_SLOW. I then checked REFL_DC, where this also seemed to be the case. Since the gain is controlled via VME machine, and whitening filter switching is controlled via RCG, I figured there must be something wrong with the board. I checked all of the DC PD signals, which share a whitening filter board, and they all had the same symptoms.
I went and peeked at the board, and it turns out the backplane cable had fallen off. 
I plugged it in, things look ok. |
10872 | Wed Jan 7 15:53:01 2015 | Jenne | Update | LSC | DC PD analog settings exposed
I have added another block to the LSC screen (and made the corresponding sub-screen) to expose the analog settings for the DC photodiodes.
Note that we have 2 open channels there, which are still called something like "PD2" and "PD3" from olden times.
If we ever choose to use those, we will probably want to change their names in /cvs/cds/caltech/target/c1iscaux2/LSC_aux2.db and /cvs/cds/caltech/target/c1iscaux/LSC_aux.db |
12710 | Fri Jan 13 08:54:32 2017 | Johannes | Update | General | DC PD installed
I installed a DC PD (Thorlabs PDA 520) in the beam path to AS55. I placed a 2" 90/10 BS on a flip mount that picks off enough light for the PD to spit out ~8V when the port is bright. Single arm continuous signal will be ~2V. While most of the light still continues towards AS55, the displacement from the BS moves the beam off AS55, so I used the flip mount in case anyone needs to use AS55. The current configuration is UP.
When we're done with loss investigations the flip mount should be removed from the bench.
 
I hooked the PD up to an ethernet-enabled scope and started scripting the loss map measurement (scope can receive commands via http so we can automate the data acquisition). The scope that was present at the bench and had been used for the MC ringdown measurements had a 'scrambled' screen that I couldn't fix so I had to retrieve another scope ("scope1"). I'll try to find out what's wrong with it but we may have to send it in for repair.

|
16150 | Fri May 21 00:15:33 2021 | Koji | Update | Electronics | DC Power Strip delivered / stored
DC Power Strip Assemblies delivered and stored behind the Y arm tube (Attachment 1)
- 7x 18V Power Strip (Attachment 2)
- 7x 24V Power Strip (Attachment 2)
- 7x 18V/24V Sequencer / 14x Mounting Panel (Attachment 3)
- DC Power Cables 3ft, 6ft, 10ft (Attachments 4/5)
- DC Power Cables AWG12 Orange / Yellow (Attachments 6/7)
I also moved the spare 1U Chassis to the same place.
- 5+7+9 = 21x 1U Chassis (Attachments 8/9)
Attachment 1: P_20210520_233112.jpeg
Attachment 2: P_20210520_233123.jpg
Attachment 3: P_20210520_233207.jpg
Attachment 4: P_20210520_231542.jpg
Attachment 5: P_20210520_231815.jpg
Attachment 6: P_20210520_195318.jpg
Attachment 7: P_20210520_231644.jpg
Attachment 8: P_20210520_233203.jpg
Attachment 9: P_20210520_195204.jpg
2056 | Tue Oct 6 01:41:20 2009 | rob | Update | Locking | DC Readout
Lock acquisition working well tonight. Was able to engage CM boost (not superboost) with bandwidth of ~10kHz. Also succeeded once in handing off DARM to DC readout. |
1544 | Tue May 5 05:16:12 2009 | Yoichi | Update | Locking | DC Readout and DARM response
Tonight, I was able to switch the DARM to DC readout a couple of times.
But the lock was not as stable as the RF DARM. It lost lock when I tried to measure the DARM loop gain.
I also measured DARM response when DARM is on RF.
The attached plot shows the DARM optical gain (from the mirror displacement to the PD output).
The magnitude is in an arbitrary unit.
I measured a transfer function from DARM excitation to the DARM error signal. Then I corrected it for the DARM open loop gain and the pendulum response to get the plot below.
There is an RSE peak at 4kHz as expected. The origins of the small bump and dip around 2.5kHz and 1.5kHz are unknown.
I will consult with the Optickle model.
I don't know why the optical gain decreases below 50Hz (I don't think it actually decreases).
Seems like the DARM loop gain measured at those frequencies is too low.
I will retry the measurement. |
Attachment 1: DARM-TF.png
1545 | Tue May 5 08:26:56 2009 | rob | Update | Locking | DC Readout and DARM response
Quote:
    Tonight, I was able to switch the DARM to DC readout a couple of times.
    But the lock was not as stable as the RF DARM. It lost lock when I tried to measure the DARM loop gain.
    I also measured DARM response when DARM is on RF.
    The attached plot shows the DARM optical gain (from the mirror displacement to the PD output).
    The magnitude is in an arbitrary unit.
    I measured a transfer function from DARM excitation to the DARM error signal. Then I corrected it for the DARM open loop gain and the pendulum response to get the plot below.
    There is an RSE peak at 4kHz as expected. The origin of the small bump and dip around 2.5kHz and 1.5kHz are unknown.
    I will consult with the Optickle model.
    I don't know why the optical gain decreases below 50Hz (I don't think it actually decreases).
    Seems like the DARM loop gain measured at those frequencies are too low.
    I will retry the measurement.
The optical gain does decrease below ~50Hz--that's the optical spring in action. The squiggles are funny. Last time we did this we measured the single arm TFs to compensate for any tough-to-model squiggles in the transfer functions which might arise from electronics or the suspensions. |
15572 | Tue Sep 15 17:04:43 2020 | gautam | Update | Electronics | DC adaptors delivered
These were delivered to the 40m today and are on Rana's desk
Quote:
    I'll order a couple of these (5 ordered for delivery on Wednesday) in case there's a hot demand for the jack / plug combo that this one has.
14776 | Fri Jul 19 12:50:10 2019 | gautam | Update | SUS | DC bias actuation options for SOS
Rana and I talked about some (genius) options for the large range DC bias actuation on the SOS, which do not require us to supply high-voltage to the OSEMs from outside the vacuum.
What we came up with (these are pretty vague ideas at the moment):
- Some kind of thermal actuation.
- Some kind of electrical actuation where we supply normal (+/- 10 V) from outside the vacuum, and some mechanism inside the chamber integrates (and hence also low-pass filters) the applied voltage to provide a large DC force without injecting a ton of sensor noise.
- Use the blue piers as a DC actuator to correct for the pitch imbalance --- Kruthi and Milind are going to do some experiments to investigate this possibility later today.
For the thermal option, I remembered that (exactly a year ago to the day!) when we were doing cavity mode scans, once the heaters were turned on, I needed to apply significant correction to the DC bias voltage to bring the cavity alignment back to normal. The mechanism of this wasn't exactly clear to me - furthermore, we don't have a FLIRcam picture of where the heater radiation pattern was centered prior to my re-centering of it on the optic earlier this year, so we don't know what exactly we were heating. Nevertheless, I decided to look at the trend data from that night's work - see Attachment #1. This is a minute trend of some ETMY channels from 0000 UTC on 18 July 2018, for 24 hours. Some remarks:
- We did multiple trials that night, both with the elliptical reflector and the cylindrical setup that Annalisa and Terra implemented. I think the most relevant part of this data is starting at 1500 UTC (i.e. ~8am PDT, which is around when we closed shop and went home). So that's when the heaters were turned off, and the subsequent drift of PIT/YAW are, I claim, due to whatever thermal transients were at play.
- Just prior to that time, we were running the heater at close to its maximum rated current - so this relaxation is indicative of the range we can get out of this method of actuation.
- I had wrongly claimed in my discussion with Rana this morning that the change in alignment was mostly in pitch - in fact, the data suggests the change is almost equal in the two DoFs. Oplev and OSEMs report different changes though, by almost a factor of 2....
- The timescale of the relaxation is ~20 minutes - what part(s) of the suspension take this timescale to heat up/cool down? Unlikely to be the wire/any metal parts because the thermal conductivity is high?
- In the optimistic scenario, let's say we get 100 urad of actuation range - over 40m, this corresponds to a beam spot motion of ~8mm (see the quick estimate sketched at the end of this entry), which isn't a whole lot. Since the mechanism of what is causing this misalignment is unclear, we may end up with significantly less actuation range as well.
- I will repeat the test (i.e. drive the heater and look for drift in the suspension alignment using OSEMs/Oplev) in the afternoon - now I claim the radiation pattern is better centered on the optic so maybe we will have a better understanding of what mechanisms are at play.
Also see this elog by Terra.
Attachment #2 shows the results from today's heating. I did 4 steps, which are obvious in the data - I=0.6A, I=0.76A, I=0.9A, and I=1.05A.
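A quick check of the ~8mm figure above - this is just the small-angle steering estimate, assuming the full 100 urad range tilts one cavity mirror and that the reflected beam is steered by twice the tilt over the 40 m arm (it ignores any cavity g-factor corrections):

theta = 100e-6      # assumed DC actuation range [rad]
L_arm = 40.0        # arm length [m]

# A mirror tilt of theta steers the reflected beam by 2*theta, so the spot on the
# far optic moves by roughly 2*theta*L_arm.
spot_shift = 2 * theta * L_arm
print(f"approx. beam spot motion: {spot_shift * 1e3:.1f} mm")   # ~8 mm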
|
Attachment 1: heaterPitch_2018.pdf
|
|
Attachment 2: Screenshot_from_2019-07-19_16-39-21.png
|
|
12708
|
Thu Jan 12 17:31:51 2017 |
gautam | Update | CDS | DC errors | The IFO is more or less back to an operational state. Some details:
- The IMC mirror excess motion alluded to in the previous elog was due to some timing issues on c1sus. The "DAC" and "DK" blocks in the c1x02 diag word were red instead of green. Restarting all the models on c1sus fixed the problem.
- When c1ioo was restarted, all of Koji's changes (digital) to the MC WFS servo were lost, as they had not been committed to the SDF. Eric suggested that I could just restore them from burt snapshots, which is what I did. I used the c1iooepics.snap file from 12:19PM PST on 26 December 2016, which was a time when the WFS servo was working well as per this elog by Koji. I have also committed all the changes to the SDF. IMC alignment has been stable for the last 4 hours.
- Johannes aligned and locked the arms today. There was a large DC offset on POX11, which was zeroed out by closing the PSL shutter and running LSC offsets. Both arms lock and stay aligned now.
- The doubling oven controller at the Y end was switched off. Johannes turned it on.
- Eric and I started a data consistency check on the RAID array yesterday; it completed today and indicated no issues.
- NDS2 is now running again on megatron so channel access from outside should(???) be possible again.
One error persists - the "DC" indicator (data concentrator?) on the CDS medm screen for the various models spontaneously goes red and returns to green often. Is this a known issue with an easy fix?
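Since channel access through NDS2 is mentioned above, here is a minimal, hedged sketch of fetching 40m data with the nds2-client Python bindings. The server name, port, GPS times, and channel name below are placeholders/assumptions, not verified settings:

import nds2

# Assumed NDS2 server/port for the 40m; substitute the actual ones served by megatron.
conn = nds2.connection('nds40.ligo.caltech.edu', 31200)

# Placeholder GPS interval and channel name, purely for illustration.
start, stop = 1168000000, 1168000060
bufs = conn.fetch(start, stop, ['C1:LSC-TRX_OUT_DQ'])
print(bufs[0].channel.name, len(bufs[0].data))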
|
12715
|
Fri Jan 13 21:41:23 2017 |
Koji | Update | CDS | DC errors | I think I fixed the DC error issue
1. I added the leap second (leapsecond ?) entry for 2016/12/31, 23:59:60 UTC to daqdrc; the corresponding GPS value is sanity-checked in the sketch below.
[OLD]
set gps_leaps = 820108813 914803214 1119744016;
[NEW]
set gps_leaps = 820108813 914803214 1119744016 1167264018;
2. Restarted FB and all realtime models
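For reference, the new value can be checked with nothing more than the GPS epoch and the published GPS-UTC offset (18 s after this leap second); a minimal sketch:

from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
utc = datetime(2017, 1, 1, tzinfo=timezone.utc)   # the instant just after the leap second
gps_minus_utc = 18                                # GPS-UTC offset after 2016-12-31 23:59:60

gps = int((utc - GPS_EPOCH).total_seconds()) + gps_minus_utc
print(gps)   # 1167264018, the value added to gps_leaps above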
Now I don't see any RED light. |
15735
|
Tue Dec 15 12:38:41 2020 |
gautam | Update | Electronics | DC power strip | I installed a DC power strip (24 V variant, 12 outlets available) on the NW upright of the 1X1 rack. This is for the AS WFS. Seems to work, all outlets get +/- 24 V DC.
The FSS_RMTEMP channel is very noisy after this work. I'll look into it, but probably some Acromag grounding issue.
In the afternoon, Jordan and I also laid out 4x SMA LMR240 cables and 1x DB15 M/F cable from 1X2 to the NE corner of the AP table via the overhead cable trays. |
15699
|
Thu Dec 3 10:46:39 2020 |
gautam | Update | Electronics | DC power strip requirements | Since we will have several new 1U / 2U aLIGO style electronics chassis installed in the racks, it is desirable to have a more compact power distribution solution than the fusible terminal blocks we use currently.
- The power strips come in 2 varieties, 18 V and 24 V. The difference is in the Dsub connector that is used - the 18 V variant has 3 pins / 3 sockets, while the 24V version uses a hybrid of 2 pins / 1 socket (and the mirror on the mating connector).
- Each strip can accommodate 24 individual chassis. It is unlikely that we will have >24 chassis in any collection of racks, so per area (e.g. EX/EY/IOO/SUS), one each of the 18V and 24V strips should be sufficient. We can even migrate our Acromag chassis to be powered via these strips.
- Details about the power strip may be found here.
I did a quick walkaround of the lab and the electronics rack today. I estimate that we will need 5 units of the 24 V and 5 units of the 18 V power strips. Each end will need 1 each of 18 V and 24 V strips. The 1Y1/1Y2/1Y3 (LSC/OMC/BHD sus) area will be served by 1 each 18 V and 24 V. The 1X1/1X2 (IOO) area will be served by 1 each 18 V and 24 V. The 1X5/1X6 (SUS Shadow sensor / Coil driver) area will be served by 1 each of 18 V and 24 V. So I think we should get 7 pcs of each to have 2 spares.
Most of the chassis which will be installed in large numbers (AA, AI, whitening) support 24V DC input. A few units, like the WFS interface head, OMC driver, OMC QPD interface, require 18V. It is not so clear what the input voltage for the Satellite box and Coil Drivers should be. For the former, an unregulated tap-off of the supply voltage is used to power the LT1021 reference and a transistor that is used to generate the LED drive current for the OSEMs. For the latter, the OPA544 high current opamp used to drive the coil current has its supply rails powered, again, by an unregulated tap-off of the supply voltage. It doesn't seem like a great idea to drive any ICs with the unregulated switching supply voltage from a noise point of view, particularly given the recent experience with the HV coil driver testing and the PSRR, but I think it's a bit late in the game to do anything about this. The datasheet specs ~50 dB of PSRR on the negative rail, but we have a couple of decoupling caps close to the IC and this IC is itself in a feedback loop with the low noise AD8671 IC, so maybe this won't be much of an issue.
For the purposes of this discussion, I think both Satellite Amp and Coil Driver chassis can be driven with +/- 24 V DC.
On a side note - after the upgrade will the "Satellite Amplifiers" be in the racks, and not close to the flange as they currently are? Or are we gonna have some mini racks next to the chambers? Not sure what the config is at the sites, and if the circuits are designed to drive long cables. |
4738
|
Wed May 18 15:54:50 2011 |
Koji | Update | RF System | DC power supplies for the RF generation box in place | [Koji, Steve]
DC power supplies for the RF generation box are now in place. They are the top two of the 6 Sorensens in the OMC short rack next to 1X2.
We made the connections as we did for the RF distribution box, labeled the power supplies, and strain-relieved the cables.
The power supply is not yet connected to the actual RF generation box. This should be done by Suresh, or by someone else under his supervision.
Note:
We have two +18V supplies on the short OMC rack in total. One is for the RF source, the other is for the OMC PZTs, whitening, etc.
This is to avoid unnecessary ground loops, although the grounding situation of the OMC side is not known to me. |
|