ID | Date | Author | Type | Category | Subject |
13393 | Wed Oct 18 19:17:42 2017 |
gautam | Update | General | PRC angular feedforward | Last night, I collected ~30mins of data for the vertex seismometer channels and the POP QPD PIT/YAW signals with the PRMI locked on carrier (angular FF OFF). The ITM Oplev loops weren't DC coupled, as they are in the full IFO locking sequence, but I feel like the angular FF filters can be improved - there are frequent sharp dives in the AS110 signal level which are correlated with large amplitude motion of the POP spot on the control room CCD monitor.
Repeating the frequency domain multicoherence analysis using BS_X and BS_Y seismometer channels as witnesses suggests that we can win significantly (see Attachment #1).
I've never really implemented feedforward filters - I was planning on using ericq's latest entry on this subject as a guide. From what I gather, the procedure is as follows:
- Pre-filter the target (POP QPD PIT or YAW) and witness (BS_X, BS_Y) channels (a rough sketch of these pre-filtering steps is given at the end of this list):
- Downsample the 2k target data and 256Hz witness data to 32 Hz (how to choose this?)
- Detrend (linear?)
- Apply elliptic low pass filter (previously, a 3rd order Elliptic Low pass with 3dB ripple, 40dB stopband attenuation, corner at 5Hz was used).
- Filter the target signal (i.e. POP QPD PIT/YAW) by the inverse actuator TF.
- This "actuator TF" is a measurement of how actuating on the angular DoFs of the PRM affects the POP QPD spot.
- So by pre-filtering the target signal through the inverse actuator TF, we get a measure of how large the PRM angular motion actually is.
- The reason we want to do this is so that the FIR filter, which takes the ground motion sensed by the seismometers as its input and produces the predicted optic motion as its output, has fewer poles/zeros to fit (?).
- The actual actuator TF has to be measured using DTT, and fit - is there anything critical about this fitting? Seems like this should be just a simple pendulum transfer function so a pair of complex poles should be sufficient?
- The actual Wiener filter is calculated by the function miso_firlev.m. There are many versions of this floating around from what I can gather.
- This function requires 3 input parameters.
- Order of filter to be fit
- Witness channels (can be multiple)
- Target channel (has to be single, hence the "miso" in the function name).
- Today, at the meeting, we talked about weighting the cost function that the optimal Wiener filter calculator minimizes.
- The canonical Wiener filter minimizes the mean squared error between the output of the filter and the desired signal profile (which for this particular problem is the angular motion of the PRM, calculated by dividing the target signal by the actuator TF; knowing this, we can cancel it out).
- But as seen in Attachment #1, the main reduction in RMS comes below f=5Hz.
- So can we weight the cost function more heavily at lower frequencies? From what I can find in previous calculations, it looks like this weighting happens in the pre-filtering stage, which is not the same thing as including a frequency-dependent weighting in the calculation of the Wiener filter itself. The PSD and acf are F.T. pairs per the Wiener-Khinchin theorem, so intuitively I would think that weighting in the frequency domain corresponds to weighting on the lags at which the acf is calculated, but I need to think about this.
- What kind of low-pass filter do we use to prevent noise injection at higher frequencies? Does the optimal filter calculation automatically roll-off the filter response at high frequencies?
- As I write this, seems like there is another level of optimization of "meta-parameters" possible in this whole process - e.g. what is the optimal order of filter to fit? what is the optimal pre-filtering of training data? Not sure how much we can gain from this though.
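For concreteness, here is a minimal python/scipy sketch of the pre-filtering steps listed above (downsample to 32 Hz, linear detrend, 3rd-order elliptic low pass with 3dB ripple / 40dB stopband / 5Hz corner). This is an illustration only - it assumes the raw 2k target and 256 Hz witness data are already in numpy arrays, and it does not include the inverse-actuator-TF pre-filtering of the target:

import numpy as np
import scipy.signal as sig

FS_OUT = 32  # Hz, common rate at which the Wiener filter is calculated

def prefilter(data, fs_in, f_corner=5.0):
    """Downsample to FS_OUT, linearly detrend, then elliptic low-pass."""
    x = np.asarray(data, dtype=float)
    q = int(fs_in // FS_OUT)
    # Decimate in stages (2048 Hz -> 32 Hz is 64x) to keep the anti-aliasing filters well behaved
    while q > 1:
        step = 8 if q % 8 == 0 else q
        x = sig.decimate(x, step, ftype='fir', zero_phase=True)
        q //= step
    x = sig.detrend(x, type='linear')
    # 3rd-order elliptic low pass: 3 dB passband ripple, 40 dB stopband attenuation
    b, a = sig.ellip(3, 3, 40, f_corner / (FS_OUT / 2.0))
    return sig.filtfilt(b, a, x)

# e.g. target  = prefilter(pop_qpd_pit_2k, fs_in=2048)   # POP QPD PIT
#      witness = prefilter(bs_x_seis_256, fs_in=256)     # BS_X seismometer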
Some notes from Rana from some years ago: https://nodus.ligo.caltech.edu:8081/40m/11519
If anyone has pointers / other considerations I should take into account, please post here. |
Attachment 1: pop_feedforward_potential.pdf
|
|
13394 | Wed Oct 18 23:11:53 2017 |
gautam | Update | CDS | FEs unresponsive | This happened again just now - it was at roughly the same time that this happened last night as well.
There was certainly an EPICS freeze of the kind we were used to seeing prior to replacing the martian wireless router sometime in late 2015 (or early 2016?). I was trying to run the dither alignment servos on the Y-arm at this time, and all the StripTool traces flatlined.
I took the opportunity to try accessing testpoints from the iscey ADCs - specifically C1:SUS-TRY_OUT, and it seemed to work just fine. However, I couldn't ssh into c1iscey.
Looking at dmesg once I was eventually able to ssh in (~2 minutes of deadtime tonight; I feel like it was longer yesterday but can't quantify), I see the following - not sure if there are any clues in here, or whether this is even the correct log to check. But there are many instances of the nfs-server-related message in the log. Note that the system timestamp corresponds to when this freeze happened.
[5461308.784018] nfs: server 192.168.113.201 not responding, still trying
[5461412.936284] nfs: server 192.168.113.201 OK
[5461412.937130] systemd[1]: Starting Journal Service...
[5461412.947947] systemd-journald[20281]: Received SIGTERM from PID 1 (systemd).
[5461412.996063] systemd[1]: Unit systemd-journald.service entered failed state.
[5461413.002627] systemd[1]: systemd-journald.service has no holdoff time, scheduling restart.
[5461413.008983] systemd[1]: Stopping Journal Service...
[5461413.014664] systemd[1]: Starting Journal Service...
[5461413.044262] systemd[1]: Started Journal Service.
[5461413.694838] systemd-journald[400]: Received request to flush runtime journal from PID 1
|
13396 | Fri Oct 20 16:30:17 2017 |
gautam | Update | CDS | FB1 installed on shelves | [steve, jamie, gautam]
The machine that now serves as our Frame Builder, FB1, was sitting on top of megatron. I decided that this wasn't ideal, and asked Steve to get some alternative mounting solution. Today, he procured some shelves to put FB1 on. Jamie suggested looking for the slider-rail that came with the machine, and using that instead, as it would allow us to slide FB1 out of the rack as we do megatron and the old FB. But as luck would have it, the distance between the rack's vertical posts is 26 inches, while the rail is 27 inches. So we had to accept the less ideal solution of putting FB1 on two shelves, with no sliding option. Photo to be uploaded shortly.
For this work, I had to shutdown FB1 for about 1 hour between 3pm and 4pm. It seems to have come back up fine now. |
13398 | Tue Oct 24 16:22:53 2017 |
gautam | Update | CDS | Toy DARM model setup in c1tst | [alex, gautam]
Alex is going to have an undergrad work on a calibration optimization project on the 40m RTCDS system. For this purpose, we wanted to set up a "Simulated DARM loop". Today, Alex and I set this up. I figured we can use the c1tst model for this purpose. We basically copied the topology from Figure 2 of the h(t) paper. Attached are screenshots of the MEDM screens of the system we set up, and the Simulink block diagram - the main screen can be accessed from the "SIM PLANT" tab in the sitemap.
It remains to set up the appropriate filters in the filter banks, and an EPICS channel monitor for the single excitation testpoint in the model. We also did not set up any DQ channels for the time being, as it is not even clear to me what channels need to be DQ-ed. |
Attachment 1: TOY_DARM.png
|
|
Attachment 2: TOY_DARM_SIMULINK.png
|
|
13404 | Sat Oct 28 00:36:26 2017 |
gautam | Update | CDS | 40m files backup situation - ddrescue | None of the 3 dd backups I made were bootable - at boot, selecting the drive put me into grub rescue mode, which seemed to suggest that the /boot partition did not exist on the backed up disk, despite the fact that I was able to mount this partition on a booted computer. Perhaps for the same reason, but maybe not.
After going through various StackOverflow posts / blogs / other googling, I decided to try cloning the drives using ddrescue instead of dd.
This seems to have worked for nodus - I was able to boot to console on the machine called rosalba which was lying around under my desk. I deliberately did not have this machine connected to the martian network during the boot process for fear of some issues because of having multiple "nodus"-es on the network, so it complained a bit about starting the elog and other network related issues, but seems like we have a plug-and-play version of the nodus root filesystem now.
chiara and fb1 rootfs backups (made using ddrescue) are still not bootable - I'm working on it.
Nov 6 2017: I am now able to boot the chiara backup as well - although mysteriously, I cannot boot it from the machine called rosalba, but can boot it from ottavia. Anyways, seems like we have usable backups of the rootfs of nodus and chiara now. FB1 is still a no-go, working on it.
Quote: |
Looks to have worked this time around.
controls@fb1:~ 0$ sudo dd if=/dev/sda of=/dev/sdc bs=64K conv=noerror,sync
33554416+0 records in
33554416+0 records out
2199022206976 bytes (2.2 TB) copied, 55910.3 s, 39.3 MB/s
You have new mail in /var/mail/controls
I was able to mount all the partitions on the cloned disk. Will now try booting from this disk on the spare machine I am testing in the office area now. That'd be a "real" test of if this backup is useful in the event of a disk failure.
|
|
Attachment 1: 415E2F09-3962-432C-B901-DBCB5CE1F6B6.jpeg
|
|
Attachment 2: BFF8F8B5-1836-4188-BDF1-DDC0F5B45B41.jpeg
|
|
13408 | Mon Oct 30 11:15:02 2017 |
gautam | Update | CDS | slow machine bootfest + vacuum snafu | Eurocrate key-turning reboots this morning for c1psl and c1aux. c1auxex and c1auxey are also down, but I didn't bother keying them for now. The PSL FSS slow loop is now active again (its inactivity was what prompted me to check the status of the slow machines).
Note that the EPICS channels for the PSL shutter are hosted on c1aux. But it looks like the slow machine became unresponsive at some point during the weekend, so plotting the trend data for the PSL shutter channel would have you believe that the PSL shutter was open the whole time. The MC_REFL DC channel tells a different story - it suggests that the PSL shutter was closed at ~4AM on Sunday, presumably by the vacuum interlock system. I wonder:
- How does the vacuum interlock close the PSL shutter? Is there a non-EPICS channel path? Because if the slow machine happens to be unresponsive when the interlock wants to close the PSL shutter via EPICS commands, it will be unable to. The fact that the PSL shutter did close suggests that there is indeed another path.
- We should add some feature to the vacuum interlock (if it doesn't already exist) such that the PSL shutter isn't accidentally re-opened until any vacuum related issues are resolved. Steve was immediately able to identify that the problem was vacuum related, but I think I would have just re-opened the PSL shutter thinking that the issue was slow computer related.
|
13410 | Mon Nov 6 11:15:43 2017 |
gautam | Update | CDS | slow machine bootfest + IFO re-alignment | Eurocrate key-turning reboots this morning for c1susaux, c1auxex and c1auxey. The usual precautions were taken to minimize the risk of ITMX getting stuck.
The IFO hasn't been aligned in ~1week, so I recovered arm and PRM alignment by locking individual arms and also PRMI on carrier. I will try recovering DRMI locking in the evening.
As far as MC1 glitching is concerned, there hasn't been any major one (i.e. one in which MC1 is kicked by such a large amount that the autolocker can't lock the IMC) for the past 2 months - but the MC WFS offsets are an indication of when smaller glitches have taken place, and there were large DC offsets on the MC WFS servo outputs, which I offloaded to the DC MC suspension sliders using the MC WFS relief script.
I'd like for the save-restore routine that runs when the slow machines reboot to set the watchdog state default to OFF (currently, after a key-turning reboot, the watchdogs are enabled by default), but I'm not really sure how this whole system works. The relevant files seem to be in the directory /cvs/cds/caltech/target/c1susaux. There is a script in there called startup.cmd, which seems to be the initialization script that runs when the slow machine is rebooted. But looking at this file, it is not clear to me where the default values are loaded from. There are a few "saverestore" files in this directory as well:
saverestore.sav
saverestore.savB
saverestore.sav.bu
saverestore.req
Are the "default" channel values loaded from one of these? |
13412 | Tue Nov 7 17:45:05 2017 |
gautam | Update | LSC | DRMI Noise Budget v3.1 | Some days ago, I had tried to measure the SRCL->MICH and PRCL->MICH cross couplings using broadband noise injected between 120-180 Hz - a frequency band chosen arbitrarily; in hindsight, I could have done a more broadband test. I've spent some time including the infrastructure to calculate "White-Noise TFs" in the noise budgeting code, where a transfer function is estimated by injecting a "broadband" excitation into a channel of interest and looking at the resulting response in MICH. I figured this would be useful for estimating other couplings as well, e.g. laser intensity noise, oscillator noise etc.
I estimate the transfer function of the coupling using the relation (MICH is the median ASD of the MICH error signal in the below expression, and similarly for PRCL)
[coupling TF relation - inline equation image not reproduced in this export]
Attachments #1 and #2 show the spectra of the MICH, PRCL and SRCL signals during 'quiet' times and during the injection, while Attachment #3 shows the calculated coupling TFs using the above relation. These are significantly different (more than 10dB lower) than the numbers I reported in elog 13367, where the measurement was made using swept sine. As can be seen in the attached plots, the injected broadband excitation is visible above the nominal noise level, and I calculated the white noise TFs using ~5mins of data which should be plenty, so I'm not sure atm what to make of the answers from swept-sine and broadband injections being so different.
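The expression above was an inline image; for the record, here is a sketch of a standard quadrature-subtraction form of this kind of white-noise TF estimate. Treat it as an illustrative assumption rather than the exact expression used in the NB code:

import numpy as np

def coupling_tf(mich_inj, mich_quiet, aux_inj, aux_quiet):
    """|TF| from an AUX DoF (PRCL or SRCL) to MICH, from median ASD arrays."""
    # Excess noise appearing in MICH during the injection...
    num = np.sqrt(np.clip(mich_inj**2 - mich_quiet**2, 0.0, None))
    # ...referred to the excess injected into the AUX error signal.
    den = np.sqrt(np.clip(aux_inj**2 - aux_quiet**2, 0.0, None))
    return num / den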
Attachment #4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz.
Nov 8 1600: Updating NB to include estimated Oplev A2L.
Quote: |
AUX coupling
This is the other find.
- While chatting with Gabriele, he suggested measuring the SRCL->MICH and PRCL->MICH cross couplings.
- I injected a signal in SRCL servo EXC channel, and adjusted amplitude till coherence in MICH_IN1 was good.
- The actual TF measured was MICH_IN1 / SRCL_IN1 (so units of cts/ct).
- My multiplying the in-lock PRCL and SRCL IN1 signals by these coupling coefficients (assumed flat in frequency for now, note that measurement was only made between 100Hz and 1kHz), I get the trace labelled "AUX coupling" in Attachment #1 (this is the quadrature sum for SRCL and PRCL couplings).
- Also repeated for PRCL -> MICH coupling in the same way.
- Measurements of these TFs and coherence are shown in Attachment #5 (again png screenshot because of DTT).
- However, there is no significant coherence in MICH/SRCL or MICH/PRCL in this frequency range.
This seems to be limiting us from saturating the dark noise once the coil de-whitening is engaged. But lack of coherence means the mechanism is not re-injection of SRCL/PRCL sensing noise? Need to think about what this means / how we can mitigate it.
|
|
Attachment 1: SRCL_MICH_whitenoise_tf.pdf
|
|
Attachment 2: PRCL_MICH_whitenoise_tf.pdf
|
|
Attachment 3: MICH_aux.pdf
|
|
Attachment 4: C1NB_disp_40m_MICH_NB_2017-10-08.pdf
|
|
13413 | Tue Nov 7 22:56:21 2017 |
gautam | Update | LSC | DRMI locking recovered | I hadn't re-locked the DRMI after the work on the AS55 demod board. Tonight, I was able to recover the DRMI locking with the old settings.
The feature in the PRCL spectrum (uncalibrated, y-axis is cts/rtHz) at ~1.6kHz is mysterious, I wonder what that's about.
Wasted some time tonight futzing around with various settings because I couldn't catch a DRMI lock, thinking I may have to re-tune demod phases etc given that I've been mucking around the LSC rack a fair bit. But fortunately, the problem turned out to be that the correct feedforward filters were not enabled in the angular feedforward path (seems like these are not SDF monitored). Clue was that there was more angular motion of the POP spot on the CCD than I'm used to seeing, even in the PRMI carrier lock.
After fixing this, lock was acquired within seconds, and the locks are as robust as I remember them - I just broke one after ~20mins locked because I went into the lab. I've been putting off looking at this angular feedforward stuff and trying out some ideas rana suggested, seems like it can be really useful.
As part of the pre-lock work, I dither aligned the arms, and then ran the PRCL/MICH dithers as well, following which I re-centered the ITM, PRM and BS Oplev spots onto their respective QPDs - they had not been centered for a couple of months.
I'm now going to try and measure some other couplings like PSL RIN->MICH, Marconi phase noise->MICH etc.
|
Attachment 1: DRMI_7Nov20178.png
|
|
13414 | Wed Nov 8 00:28:16 2017 |
gautam | Update | LSC | Laser intensity coupling measurement attempt | I tried measuring the coupling of PSL intensity noise by driving some broadband noise bandpassed between 80-300Hz using the spare DAC channel at 1Y3 that I had set up for this purpose a couple of weeks ago (via a battery powered SR560 buffer set to low-noise operation mode because I'm not sure if the DAC output can drive a ~20m long cable). I was monitoring the MC2 TRANS QPD Sum channel spectrum while driving this broadband noise - the "nominal" spectrum isn't very clean, there are a bunch of notches from a 60Hz comb and a forest of peaks over a broad hump from 300Hz-1kHz (see Attachment #1).
I was able to increase the drive to the AOM till the RIN in the band being driven increased by ~10x, and saw no change in the MICH error signal spectrum [see Attachment #1] - during this measurement, the RFPD whitening was turned on for REFL11, REFL55 and AS55, and the ITM coil drivers were de-whitened, so as to get a MICH spectrum that is about as "low-noise" as I've gotten it so far.
I tried increasing the drive further, but at this point, started seeing frequent MC locklosses - I'm not convinced this is entirely correlated to my AOM activities, so I will try some more, but at the very least, this places an upper bound on the coupling from intensity noise to MICH. |
Attachment 1: PSL_RIN.pdf
|
|
13416 | Wed Nov 8 09:59:12 2017 |
gautam | Update | LSC | DRMI Noise Budget v3.1 | The Oplev trace is missing for now, as I have not re-measured the A2L coupling since modifying the Oplev loop shape (specifically the low pass filter and overall gain) to allow engaging the coil de-whitening.
The averaging for the white noise TFs plotted is computed using median averaging - I have used a python transcription of Sujan's matlab code. I use scipy.signal.spectrogram to compute the fft bins (I've set some defaults like 8s fft length and a tukey window), and then take the median average using np.median(). I've also incorporated the ln(2) correction factor.
It seems like GWpy has some built-in capability to compute median (and indeed other percentile) averages, but since we aren't using it, I just coded this up.
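For reference, a minimal sketch of such a median-averaged PSD (8s FFTs, Tukey window, ln(2) bias correction). The parameter choices here are illustrative defaults, not a copy of the NB code:

import numpy as np
import scipy.signal as sig

def median_psd(x, fs, fftlen=8.0):
    """Median-averaged one-sided PSD with the ln(2) bias correction."""
    nperseg = int(fftlen * fs)
    f, t, Sxx = sig.spectrogram(x, fs=fs, window=('tukey', 0.25),
                                nperseg=nperseg, noverlap=nperseg // 2,
                                scaling='density', mode='psd')
    # The median of chi^2(2)-distributed periodogram bins underestimates the
    # mean by a factor of ln(2), so divide it back out.
    return f, np.median(Sxx, axis=-1) / np.log(2)

# asd = np.sqrt(psd) then gives the amplitude spectral density, e.g. in cts/rtHz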
Quote: |
why no oplev trace in the NB ?
also, this method would work better if we had a median averaging python PSD instead of mean averaging as in Welch's method.
|
|
13417 | Wed Nov 8 12:19:55 2017 |
gautam | Update | SUS | coil driver series resistance | We've been talking about increasing the series resistance for the coil driver path for the test masses. One consequence of this will be that we have reduced actuation range.
This may not be a big deal since for almost all of the LSC loops, we currently operate with a limiter on the output of the control filter bank. The value of the limit varies, but to get an idea of what sort of "threshold" velocities we are looking at, I calculated this for our finesse-400 arm cavities. The calculation is rather simplistic (see Attachment #1), but I think we can still draw some useful conclusions from it:
- In Attachment #1, I've indicated with dashed vertical lines some series resistances that are either currently in use, or are values we are considering.
- The table below tabulates the fraction of passages through a resonance we will be able to catch, assuming velocities sampled from a Gaussian with width ~3um/s, which a recent ALS study suggests describes our SOS optic velocity distribution pretty well (with local damping on). A sketch of this estimate is given at the end of this entry.
- I've assumed that the maximum DAC output voltage available for length control is 8V.
- Presumably, this Gaussian velocity distribution will be modified because of the LSC actuation exerting impulses on the optic on failed attempts to catch lock. I don't have a good model right now for how this modification will look like, but I have some ideas.
- It would be interesting to compare the computed success rates below with what is actually observed.
- The implications of different series resistances on DAC noise are computed here (although the non-linear nature of the DAC noise has not been taken into account).
Series resistance [ohms] | Predicted Success Rate [%] | Optics with this resistance
100 | >90 | BS, PRM, SRM
400 | 62 | ITMX, ITMY, ETMX, ETMY
1000 | 45 | -
2000 | 30 | -
So, from this rough calculation, it seems like we would lose ~25% efficiency in locking the arm cavity if we up the series resistance from 400ohm to 1kohm. Doesn't seem like a big deal, because currently, the single arm locking |
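As a rough cross-check of the table, the success-rate estimate boils down to an error function of the threshold velocity. The sketch below assumes the problem reduces to |v| < v_threshold for velocities drawn from the ~3um/s Gaussian; the threshold value in the example is a placeholder, not the number from Attachment #1:

import numpy as np
from scipy.special import erf

SIGMA = 3e-6  # m/s, width of the SOS optic velocity distribution (local damping on)

def catch_fraction(v_threshold):
    """Fraction of resonance crossings with |v| below the threshold velocity."""
    return erf(v_threshold / (np.sqrt(2) * SIGMA))

# A hypothetical threshold of 2.5 um/s gives ~60%, comparable to the 400 ohm row:
# print(catch_fraction(2.5e-6))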
Attachment 1: vthresh.pdf
|
|
13418 | Wed Nov 8 14:28:35 2017 |
gautam | Update | General | MC1 glitches return | There hasn't been a big glitch (one that misaligns MC1 by so much that the autolocker can't lock) for at least 3 months, but it seems there was one about an hour ago.
I disabled autolocker and feedback to the PSL, manually aligned MC1 till the MC_REFL spot looked right on the CCD to me, and then re-engaged the autolocker, all seems to have gone smoothly.
|
Attachment 1: MC1_glitchy.png
|
|
Attachment 2: 6AFDA67D-79B1-469C-A58A-9EC5F8F01D32.jpeg
|
|
13420 | Wed Nov 8 17:04:21 2017 |
gautam | Update | CDS | gds-2.17.15 [not] installed | I wanted to use the foton.py utility for my NB tool, and I remember Chris telling me that it was shipping as standard with the newer versions of gds. It wasn't available in the versions of gds available on our workstations - the default version is 2.15.1. So I downloaded gds-2.17.15 from http://software.ligo.org/lscsoft/source/, and installed it to /ligo/apps/linux-x86_64/gds-2.17.15/gds-2.17.15. In it, there is a file at GUI/foton/foton.py.in - this is the one I needed.
Turns out this was more complicated than I expected. Building the newer version of gds throws up a bunch of compilation errors. Chris had pointed me to some pre-built binaries for ubuntu12 on the llo cds wiki, but those versions of gds do not have foton.py. I am dropping this for now. |
13421 | Thu Nov 9 10:51:37 2017 |
gautam | Summary | LSC | current procedure for compiling and installing c1dnn code | Jamie pointed out that the compile and install instructions are different for c1dnn:
cd /opt/rtcds/caltech/c1/rtbuild/test/nn-test
make c1dnn
make install-c1dnn
See also: https://nodus.ligo.caltech.edu:8081/40m/13383.
I think these build instructions have to be run on the c1lsc frontend - in the past, I have been able to compile and install models on any computer with the shared drive mounted (including the control room workstations), but I'm guessing that something has changed since the RCG upgrade. Jamie can correct me on this if I'm wrong. |
13428 | Wed Nov 15 01:37:07 2017 |
gautam | Update | LSC | DRMI low freq. noise improved | Pianosa just crashed and ate my elog, along with all the DTT/Foton windows I had open, so more details tomorrow... This workstation has been crashing ~once a month for the last 6 months.
Summary:
Below ~100Hz, the hypothesis is that the BS oplev A2L contribution dominates the MICH displacement noise. I wanted to see if I could mitigate this by modifying the BS Oplev loop shape.
Details:
- Swept sine TF measurements suggested that the BS A2L contribution is between 10-100x that of the ITM A2L
- The Oplev loop shape for the BS is different from that of the ITMs - specifically, there is a res-gain centered at ~3.3 Hz. The low frequency ~0.6Hz boost filter present in the ITM Oplev loops was disengaged for the BS Oplevs.
- I turned off the BS OL loops and looked at error signal spectra - didn't seem that different from ITM OL error signals, so I decided to try turning off the res-gain and engage the 0.6Hz boost.
- This change also gave me much more phase at ~6Hz, which is roughly the UGF of the loop. So I put in another roll-off low pass filter with corner frequency 25Hz.
- This worked okay - RMS went down by ~5x (which is even better than the original config), and although the performance between ~3 and 10Hz is slightly worse than with the old combination, this region isn't the dominant contribution to the RMS. PM at the upper UGF is ~30 degrees in the new configuration.
- I wanted to give DRMI locking a shot with the new OL loop - expectations were that the noise between 30-100Hz would improve, and perhaps the engaging of de-whitening filters on BS would also be easier given the more severe roll-off at high-frequencies.
- Attachment #1 shows the NB for tonights lock. All MICH optics had their coil drivers de-whitened, and all the LSC PDs were whitened for this measurement.
- I've edited the NB code to make the A2L calculation more straightforward: I now just make the coupling 1/f^2 and give the function a measured overall gain, so that this curve can be easily added to all future NBs. I've also transcribed the matlab function used for parsing Foton files into python, which allows me to convert the DQ-ed OL error signals into control signals. Will update git with the changes.
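A sketch of what this simplified A2L estimate looks like (illustrative only - the actual NB code also handles the Foton-parsed conversion from the DQ-ed error signals to control signals):

import numpy as np

def a2l_estimate(f, control_asd, gain):
    """MICH displacement ASD from one angular loop, assuming a gain/f^2 coupling."""
    return gain / f**2 * control_asd

# The "A2L" trace is then the quadrature sum over the angular loops, e.g.
# total = np.sqrt(sum(a2l_estimate(f, asd_i, g_i)**2 for asd_i, g_i in loops))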
Remarks:
- MICH noise has improved by ~2x between 40-80Hz.
- Not sure what to make of the broad hump around 60Hz - scatter shelf?
- There is still unexplained noise below 100Hz, the A2L estimate is considerably lower than the measured noise.
- We are still more than an order of magnitude away from the estimated seismic noise floor at low frequencies (but getting closer!).
I've been banging my head against optimal loop shaping, with the OL loop as a test-case, without much success - as was the case with coating PSO, the magic is in smartly defining the cost function, but right now, my optimizer seems to be pushing most of the roots I'm making available for it to place to high frequencies. I've got a term in there that is supposed to guard against this, need to tweak further...
Attachment #2: Eye-fits of measured OL A2L coupling TFs to a 1/f^2 shape, with the gain being the parameter "fitted". I used these values, and the DQ-ed OL error signal in lock, to estimate the red curve labelled "A2L" in Attachment #1. The dots are the measurement, and the lines are the 1/f^2 estimates. |
Attachment 1: C1NB_disp_40m_MICH_NB_2017-11-15.pdf
|
|
Attachment 2: OL_A2L_couplings.pdf
|
|
13430 | Thu Nov 16 00:45:39 2017 |
gautam | Update | SUS | SOS Sapphire Prism design |
Quote: |
- If I could get pictures of the lower mirror clamp (document D960008), it would be helpful in making solidworks model. Document is unclear. Same for sensor/actuator head assembly.
|
If you go through this thread of elogs, there are lots of pictures of the SOS assembly with the optic in it from the vent last year. I think there are many different perspectives, close ups of the standoffs, and of the OSEMs in their holders in that thread.
This elog has a measurement of the pendulum resonance frequencies with ruby standoffs - although the ruby standoff used was cylindrical, and the newer generation will be in the shape of a prism. There is also a link in there to a document that tells you how to calculate the suspension resonance frequencies using analytic equations. |
13431 | Thu Nov 16 00:53:26 2017 |
gautam | Update | LSC | DRMI noise sub-budgets | I've incorporated the functionality to generate sub-budgets for the various grouped traces in the NBs (e.g. the "A2L" trace is really the quadrature sum of the A2L coupling from 6 different angular servos).
For now, I'm only doing this for the A2L coupling, and the AUX length loop coupling groups. But I've set up the machinery in such a way that doing so for more groups is easy.
Here are the sub-budget plots for last night's lock - for the OL plot, there are only 3 lines (instead of 6) because I group the PIT and YAW contributions in the function that pulls the data from the nds server, and don't ever store these data series individually. This should be rectified, because part of the point of making these sub-budgets is to see if there is a particularly bad offender in a given group.
I'll do a quick OL loop noise budget for the ITM loops tomorrow.
I also wonder if it is necessary to measure the Oplev A2L coupling from lock to lock? This coupling will be dependent on the spot position on the optic, and though I run the dither alignment servos to minimize REFL_DC and AS_DC, I don't have any intuition for how the offset from the center of the optic varies from lock to lock, and whether it is at all significant. I've been using a number from a measurement made in May. Need to do some algebra... |
Attachment 1: C1NB_a2l_40m_MICH_NB_2017-11-15.pdf
|
|
Attachment 2: C1NB_aux_40m_MICH_NB_2017-11-15.pdf
|
|
13432 | Thu Nov 16 13:57:01 2017 |
gautam | Update | Optical Levers | Optical lever noise | I disabled the OL loops for ITMX, ITMY and BS at GPStime 1194897655 to come up with an Oplev noise budget. OL spots were reasonably well centered - by that, I mean that the PIT/YAW error signals were less than 20urad in absolute value.
Attachment #1 is a first look at the DTT spectra - I wonder why the BS Oplev signals don't agree with the ITMs at ~1Hz? Perhaps the calibration factor is off? The sensing noise is not really flat above 100Hz - I wonder what all those peaky features are. Recall that the ITM OLs have analog whitening filters before the ADC, but the BS doesn't...
In Attachment #2, I show a comparison of the error signal spectra for ITMY and SRM - they're on the same stack, but the SRM channels don't have analog whitening before the ADC.
For some reason, DTT won't let me save plots with latex in the axes labels... |
Attachment 1: VertexOLnoise.pdf
|
|
Attachment 2: ITMYvsSRM.pdf
|
|
13436 | Tue Nov 21 11:21:26 2017 |
gautam | Update | CDS | RFM network down | I noticed yesterday evening that I wasn't able to engage the single arm locking servos - turned out that they weren't getting triggered, which in turn pointed me to the fact that the arm transmission channels seemed dead. Poking around a little, I found that there was a red light on the CDS overview screen for c1rfm.
- The error seems to be in the receiving model only, i.e. c1rfm, all the sending models (e.g. c1scx) don't report any errors, at least on the CDS overview screen.
- Judging by dataviewer trending of the c1rfm status word, seems like this happened on Sunday morning, around 11am.
- I tried restarting both sender and receiver models, but error persists.
- I got no useful information from the dmesg logs of either c1sus (which runs c1rfm), or c1iscex (which runs c1scx).
- There are no physical red lights in the expansion chassis that I could see - in the past, when we have had some timing errors, this would be a signature.
Not sure how to debug further...
* Fix seems to be to restart the sender RFM models (c1scx, c1scy, c1asx, c1asy). |
Attachment 1: RFMerrors.png
|
|
13437 | Tue Nov 21 11:37:29 2017 |
gautam | Update | Optical Levers | BS OL calibration updated | I calibrated the BS oplev PIT and YAW error signals as follows:
- Locked X-arm, ran dither alignment servos to maximize transmission.
- Applied an offset to the ASC PIT/YAW filter banks. Set the ramp time to something long, I used 60 seconds.
- Monitored the X arm transmission while the offset was being ramped, and also the oplev error signal with its current calibration factor.
- Fit the data (oplev error signal vs arm transmission) with a Gaussian, and extracted the scaling factor (i.e. the number by which the current Oplev error signals have to be multiplied for the error signal to correspond to urad of angular misalignment, as per the overlap of the beam axis with the cavity axis). A sketch of this fit is given below.
- Fits are shown in Attachment #1 and #2.
- I haven't done any error analysis yet, but the open loop OL spectra for the BS now line up better with the other optics, see Attachment #3 (although their calibration factors may need to be updated as well...). Need to double check against OSEM readout during the sweep.
- New numbers have been SDF-ed.
The numbers are:
BS Pitch 15 / 130 (old/new) urad/counts
BS Yaw 14 / 170 (old/new) urad/counts
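A sketch of the fit step referred to above - the arrays and initial guesses are placeholders; the measured Gaussian width (in current oplev units), compared against the misalignment width expected from the cavity geometry, is what gives the rescaling factors quoted:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, w):
    return a * np.exp(-(x - x0)**2 / (2.0 * w**2))

def fit_width(oplev_err, arm_trans):
    """Fit arm transmission vs oplev error signal; return the Gaussian width."""
    p0 = [np.max(arm_trans), 0.0, np.std(oplev_err)]  # rough initial guess
    popt, _ = curve_fit(gaussian, oplev_err, arm_trans, p0=p0)
    return abs(popt[2])

# scale = expected_width_urad / fit_width(oplev_err, arm_trans)   # hypothetical names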
Quote: |
I bet the calibration is out of date; probably we replaced the OL laser for the BS and didn't fix the cal numbers. You can use the fringe contrast of the simple Michelson to calibrate the OLs for the ITMs and BS.
|
|
Attachment 1: OL_calib_BS_PERROR.pdf
|
|
Attachment 2: OL_calib_BS_YERROR.pdf
|
|
Attachment 3: VertexOLnoise_updated.pdf
|
|
13439 | Tue Nov 21 16:28:23 2017 |
gautam | Update | Optical Levers | BS OL calibration updated | The numbers I have from the fitting don't agree very well with the OSEM readouts. Attachment #1 shows the Oplev pitch and yaw channels, and also the OSEM ones, while I swept the ASC_PIT offset. The output matrix is the "naive" one of (+1,+1,-1,-1). SUSPIT_IN1 reports ~30urad of motion, while SUSYAW_IN1 reports ~10urad of motion.
From the fits, the BS calibration factors were ~x8 for pitch and x12 for yaw - so according to the Oplev channels, the applied sweep was ~80urad in pitch, and ~7urad in yaw.
Seems like either (i) neither the Oplev channels nor the OSEMs are well diagonalized and that their calibration is off by a factor of ~3 or (ii) there is some significant imbalance in the actuator gains of the BS coils...
Quote: |
Need to double check against OSEM readout during the sweep.
|
|
Attachment 1: BS_oplev_sweep.png
|
|
13441 | Tue Nov 21 23:04:12 2017 |
gautam | Update | Optical Levers | Oplev "noise budget" | Per our discussions in the meetings over the last week, I've tried to put together a simple Oplev noise budget. The only two terms in this for now are the dark noise and a model for the seismic noise, and are plotted together with the measured open-loop error signal spectra.
- Dark noise
- Beam was taken off the OL QPD
- A small DC offset was added to all the oplev segment input filters to make the sum ~20-30 cts [call this testSum] (usually it varies from 4000-13000 for the BS/ITMs, call this nominalSum).
- I downloaded 20mins of dq-ed error signal data, and computed the ASD, dividing by a factor of nominalSum / testSum to account for the usual light intensity on the QPD.
- Seismic noise
- This is a very simplistic 1/f^2 pendulum TF with a pair of Q=2 poles at 1Hz.
- I adjusted the overall gain such that the 1Hz peak roughly line up in measurement and model.
- The stack isn't modelled at all.
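A sketch of this simple seismic model (illustrative only; the overall gain is just set by eye to line up with the measured 1 Hz peak):

import numpy as np

def pendulum_tf(f, f0=1.0, Q=2.0):
    """Magnitude of a single complex-pole-pair (pendulum-like) transfer function."""
    return f0**2 / np.sqrt((f0**2 - f**2)**2 + (f0 * f / Q)**2)

f = np.logspace(-1, 2, 500)     # 0.1 Hz - 100 Hz
gain = 1.0                      # placeholder, adjusted by eye to match the 1 Hz peak
seismic_model = gain * pendulum_tf(f)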
Some remarks:
- The BS oplev doesn't have any whitening electronics, and so has a higher electronics noise floor compared to the ITMs. But it doesn't look like we are limited by this lower noise floor anywhere..
- I wonder what all those high frequency features seen in the ITM error signal spectra are. They are definitely above the dark noise floor, so I am inclined to believe this is real beam motion on the QPD, but surely it can't be test-mass motion - if it were, the measured A2L would be much higher than the level it is adjudged to be at now. Perhaps it's mechanical resonances of the steering optics?
- The seismic displacement @100Hz per the GWINC model is ~1e-19 m/rtHz. Assuming the model A2L = d_rms * theta(f) where d_rms is the rms offset of the beam spot from the optic center, and theta(f) is the angular control signal to the optic, for a 5mm rms offset of the spot from the center, theta(f) must be ~1e-17 urad @100Hz. This gives some requirement on the low pass required - I will look into adding this to the global optimization cost.
|
Attachment 1: vertexOL_noises.pdf
|
|
13442 | Tue Nov 21 23:47:51 2017 |
gautam | Configuration | Computers | nodus post OS migration admin | I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001. There wasn't a crontab, so I made one using sudo crontab -e.
This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and backup the crontab itself.
I've commented out the backup of nodus' /etc and /export for now, while we get back to fully operational nodus (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive), they can be re-enabled by un-commenting the appropriate lines in the crontab.
Quote: |
The post-OS-migration admin for nodus (apache, elogd, svn, iptables, etc.) can be found at https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017
Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface is running. "websvn" is not installed.
|
|
13445 | Wed Nov 22 11:51:38 2017 |
gautam | Configuration | Computers | nodus post OS migration admin | Confirmed that this crontab is running - the daily backup of the crontab seems to have successfully executed, and there is now a file crontab_nodus.ligo.caltech.edu.20171122080001 in the directory quoted below. The $HOSTNAME seems to be "nodus.ligo.caltech.edu" whereas it was just "nodus", so the file names are a bit longer now, but I guess that's fine...
Quote: |
I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001). There wasn't an existing crontab, so I made one using sudo crontab -e.
This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and backup the crontab itself.
I've commented out the backup of nodus' /etc and /export for now, while we get back to fully operational nodus (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive), they can be re-enabled by un-commenting the appropriate lines in the crontab.
|
|
13448 | Wed Nov 22 15:29:23 2017 |
gautam | Update | Optical Levers | Oplev "noise budget" | [steve, gautam]
What is the best way to set this test up?
I think we need a QPD to monitor the spot rather than a single element PD, to answer this question about the sensor noise. Ideally, we want to shoot the HeNe beam straight at the QPD - but at the very least, we need a lens to size the beam down to the same size as we have for the return beam on the Oplevs. Then there is the power - Steve tells me we should expect ~2mW at the output of these HeNes. Assuming 100kohm transimpedance gain for each quadrant and Si responsivity of 0.4A/W at 632nm, this corresponds to 10V (ADC limit) for 250uW of power - so it would seem that we need to add some attenuating optics in the way.
Also, does anyone know of spare QPDs we can use for this test? We considered temporarily borrowing one of the vertex OL QPDs (mark out its current location on the optics table, and move it over to the SP table), but decided against it as the cabling arrangement would be too complicated. I'd like to use the same DAQ electronics to acquire the data from this test as that would give us the most direct estimate of the sensor noise for supposedly no motion of the spot, although by adding 3 optics between the HeNe and the QPD, we are introducing possible additional jitter couplings...
Quote: |
For the OL NB, probably don't have to fudge any seismic noise, since that's a thing we want to suppress. More important is "what the noise would be if the suspended mirrors were not moving w.r.t. inertial space".
For that, we need to look at the data from the OL test setup that Steve is putting on the SP table.
|
|
Attachment 1: OplevTest.jpg
|
|
13450 | Wed Nov 22 17:52:25 2017 |
gautam | Update | Optimal Control | Visualizing cost functions | I've attempted to visualize the various components of the cost function in the way I've defined it for the current iteration of the Oplev optimal control loop design code. For each term in the cost function, the way the cost is computed depends on the ratio of the abscissa value to some threshold value (set by hand for now) - if this ratio is >1, the cost is the logarithm of the ratio, whereas if the ratio is <1, the cost is the square of the ratio. Continuity is enforced at the point at which this transition happens. I've plotted the cost function for some of the terms entering the code right now - indicated in dashed red lines are the approximate values of each of these costs for our current Oplev loop. The weights were chosen so that each of the costs was O(10) for the current controller, and the idea was that the optimizer could drive these down to hopefully O(1), but I've not yet gotten that to happen.
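For reference, a sketch of the per-term cost described above: quadratic below the threshold, logarithmic above it. The text says continuity is enforced at the transition; offsetting the log branch by +1 (so both branches equal 1 at ratio = 1) is one way to do that, and is an assumption here rather than necessarily what the actual code does:

import numpy as np

def term_cost(value, threshold, weight=1.0):
    """Quadratic below threshold, logarithmic above; continuous at ratio = 1."""
    r = np.abs(value) / threshold
    return weight * np.where(r > 1.0, 1.0 + np.log(r), r**2)

# A term that is "okay" (r < 1) contributes less than its weight, while a badly
# violated term grows only logarithmically, so no single term swamps the total.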
Based on the meeting yesterday, some possible ideas:
- For minimizing the control noise injection - we know the transfer function from the Oplev control signal coupling to MICH from measurements, and we also have a model for the seismic noise. So one term could be a weighted integral of (coupling - seismic) - the weight can give more importance to f>30Hz, and even more importance to f>100Hz. Right now, I don't have any such frequency-dependent requirement on the control signal.
- Try a simpler problem first - pendulum local damping. The position damping controller for example has fewer roots in the complex plane. Although it too has some B/R notches, which account for 16 complex roots, and hence, 32 parameters, so maybe this isn't really a simpler problem?
- How do we pick the number of excess poles compared to zeros in the overall transfer function? The OL loop low-pass filters are elliptic filters, which achieve the fastest transition between the passband and stopband, but for the Oplev loop roll-off, perhaps it's better to just have some poles to roll off the HF response?
|
Attachment 1: globalCosts.pdf
|
|
13452 | Wed Nov 22 23:56:14 2017 |
gautam | Update | Optical Levers | Oplev "noise budget" | Do not turn on BS/ITMY/SRM/PRM Oplev servos without reading this elog and correcting the needful!
I've setup a test setup on the ITMY Oplev table. Details + pics to follow, but for now, be aware that
- I've turned off the HeNe that is used for the SRM and ITMY Oplev.
- Moved one of the HeNe's Steve setup on the SP table to the ITMY Oplev table.
- Output power was 2.5mW, whereas normal power incident on this PD was ~250uW.
- So I changed all transimpedance gains on the ITMY Oplev QPD from 100k to 10k thin film - these should be changed back when we want to use this QPD for Oplev purposes again. Note that I did not change the compensation capacitors C3-C6, as with 10k transimpedance, and assuming they are 2.2nF, we get a corner frequency of 6.7kHz. The original schematic recommends 0.1uF. In hindsight, I should have changed these to 22nF to keep roughly the same corner frequency of ~700Hz.
I've implemented this change as of ~5pm Nov 23 2017 - C3-C6 are now 22nF, so the corner frequency is 676Hz, as opposed to 723Hz before... This should also be undone when we use this QPD as an Oplev QPD again...
- I marked the position of the ITMY Oplev QPD with sharpie and also took pics so it should be easy enough to restore when we are done with this test.
- I couldn't get the HeNe to turn on with any of the power supplies I found in the cabinet, so I borrowed the one used to power the BS/PRM. So these oplevs are out of commission until this test is done.
- There is a single steering mirror in a Polaris mount which I used to center the spot on the QPD.
- The specular reflection (~250uW, i.e. 10% of the power incident on the QPD) is dumped onto a clean razor beam stack. Steve can put in a glass beam dump on Monday.
- Just in case someone accidentally turns on some servos - I've disabled the inputs to the BS, PRM and SRM oplevs, and set the limiter on the ITMY servo to 0.
Here are some pics of the setup: https://photos.app.goo.gl/DHMINAV7aVgayYcf1. None of the existing Oplev input/output steering optics were touched. Steve can make modifications as necessary; perhaps we can make similar mods to the SRM Oplev QPD and the BS one to run the HeNe test for a few days...
Quote: |
too complex; just shoot straight from the HeNe to the QPD. We lower the gain of the QPD by changing the resistors; there's no sane reason to keep the existing 100k resistors for a 2 mW beam. The specular reflection of the QPD must be dumped on a black glass V dump (not some flimsy anodized aluminum or dirty razor stack)
|
|
13453 | Thu Nov 23 18:03:52 2017 |
gautam | Update | Optical Levers | Oplev "noise budget" | Here are a couple of preliminary plots of the noise from a 20-minute stretch of data - the new curve is the orange one, labelled sensing, which is the spectrum of the PIT/YAW error signal from the HeNe beam in a single bounce off one steering mirror onto the QPD, normalized to account for the difference in QPD sum. The peaky features that were absent in the dark noise are present here.
I am a bit confused about the total sum though - there is ~2.5mW of light incident on the PD, and the transimpedance gain is 10.7kohm. So I would expect 2.5e-3 W * 0.4 A/W * 10.7 kV/A ~ 10.7V summed over the 4 quadrants. The ADC is 16-bit with a range of +/- 10V, so 10.7 V should be ~35,000 cts. But the observed QPD sum is ~14,000 counts. The reflected power was measured to be ~250uW, i.e. ~10% of the total input power. Not sure if this is factored into the photodiode efficiency value of 0.4A/W. I guess there is some fraction of the QPD that doesn't generate any photocurrent (i.e. the grooves defining the quadrants), but is it reasonable that when the Oplev beam is well centered, ~50% of the power is not measured? I couldn't find any sneaky digital gains between the quadrant channels and the sum channel either... But in the Oplev setup, the QPD had ~250uW of power incident on it, and was reporting a sum of ~13,000 counts with a transimpedance gain of 100kohm, so at least the scaling seems to hold...
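Writing the back-of-the-envelope count estimate out explicitly (assuming the full 16-bit range maps onto +/-10 V):

P_inc = 2.5e-3       # W incident on the QPD
resp = 0.4           # A/W, Si responsivity at 632 nm
Z = 10.7e3           # V/A, transimpedance per quadrant (10k thin film as installed)
V_sum = P_inc * resp * Z            # ~10.7 V summed over the four quadrants
cts_per_volt = 2**15 / 10.0         # 16-bit ADC over +/-10 V
print(V_sum, V_sum * cts_per_volt)  # ~10.7 V  ->  ~35,000 counts expected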
I guess we want to monitor this over a few days, see how stationary the noise profile is etc. I didn't look at the spectrum of the intensity noise during this time.
Quote: |
I've setup a test setup on the ITMY Oplev table. Details + pics to follow, but for now, be aware that
Here are some pics of the setup: https://photos.app.goo.gl/DHMINAV7aVgayYcf1. None of the existing Oplev input/output steering optics were touched. Steve can make modifications as necessary; perhaps we can make similar mods to the SRM Oplev QPD and the BS one to run the HeNe test for a few days...
|
|
Attachment 1: ITMY_P_noise.pdf
|
|
Attachment 2: ITMY_Y_noise.pdf
|
|
13461 | Sun Dec 3 05:25:59 2017 |
gautam | Configuration | Computers | sendmail installed on nodus | Pizza mail didn't go out last weekend - looking at logfile, it seems like the "sendmail" service was missing. I installed sendmail following the instructions here: https://tecadmin.net/install-sendmail-server-on-centos-rhel-server/
Except that to start the sendmail service, I used systemctl and not init.d. i.e. I ran systemctl start sendmail.service (as root). Test email to myself works. Let's see if it works this weekend. Of course this isn't so critical, more important are the maintenance emails that may need to go out (e.g. disk usage alert on chiara / N2 pressure check, which looks like nodus' responsibilities). |
13476 | Thu Dec 14 19:33:20 2017 |
gautam | Frogs | ASS | c1ass slow channel offloading scripts with small | I don't think this is really a problem - we offload to the fast channels and not to the slow (although we really should offload to the slow channels). I think the best approach is to use the ezcaservo utility to offload the DC part of the ASS control signals to the slow channels, so as to not waste fast channel DAC counts on DC offsets. In principle, this approach should be somewhat immune to the slow channel calibration not being perfect.
Quote: |
While staring at epics records all day I noticed something about the PIT/YAW offset sliders and ASS offset offloading to slow channels scripts that I'm not sure others are aware off, so I'll briefly discuss it in this post.
The PIT and YAW sliders directly control soft channels that are hosted on the slow machine. Secondary epics records disentangle them for the individual coils:
- UL = PIT+YAW
- LL = -PIT+YAW
- UR = PIT-YAW
- LR = -PIT-YAW
These channels are the direct input for the physical output channels that generate the control voltage.
The fast channels for PIT and YAW have a numerical correction factor built in that accounts for differences between the OSEMs, but the slow channels don't. This means that the slow PIT/YAW controls are not entirely orthogonal but have crosstalk on the order of 10 percent. This in itself is not that dramatic, however the offload offsets scripts for the dither alignment use the fast PIT/YAW values as inputs, which represent the necessary adjustments to the OSEMs only after the individual correction factors have been applied. The offloading to slow knows nothing of this calibration difference between the OSEMs. The result is that there is a ~10 percent of the offset correction error on the mirror alignment AFTER offloading. This will of course converge after a few iterations, but in any case it is recommendable to run the dither alignment again after offloading and not offload the new offsets to the fast channels.
|
|
13477 | Thu Dec 14 19:41:00 2017 |
gautam | Update | CDS | CDS recovery, NFS woes | [Koji, Jamie(remote), gautam]
Summary: The CDS system seems to be back up and functioning. But there seems to be some pending problems with the NFS that should be looked into.
We locked the Y-arm and hand-aligned the transmission to 1. Some pending problems with the ASS model (possibly symptomatic of something more general). Didn't touch the X-arm because we don't know what exactly the status of ETMX is.
Problems raised in elogs in the thread of 13474 and also 13436 seem to be solved.
I would make a detailed post with how the problems were fixed, but unfortunately, most of what we did was not scientific/systematic/repeatable. Instead, I note here some general points (Jamie/Koji can add to / correct me):
- There is a "known" problem with unloading models on c1lsc. Sometimes, running rtcds stop <model> will kill the c1lsc frontend.
- Sometimes, when one machine on the dolphin network goes down, all 3 go down.
- The new FB/RCG means that some of the old commands no longer work. Specifically, telnet fb 8087 followed by shutdown (to fix DC errors) no longer works; instead, ssh into fb1 and run sudo systemctl restart daqd_*.
- Timing error on c1sus machine was linked to the mx_stream processes somehow not being automatically started. The "!mxstream restart" button on the CDS overview MEDM screen should run the necessary commands to restart it. However, today, I had to manually run sudo systemctl start mx_stream on c1sus to fix this error. It is a mystery why the automatic startup of this process was disabled in the first place. Jamie has now rectified this problem, so keep an eye out.
- c1oaf persistently reported DC errors (0x2bad) that couldn't be addressed by running mxstream restart or restarting the daqd processes on FB1. Restarting the model itself (i.e. rtcds restart c1oaf) fixed this issue (though of course I took the risk of having to go into the lab and hard-reboot 3 machines).
- At some point, we thought we had all the CDS lights green - but at that point, the END FEs crashed, necessitating Koji->EX and Gautam->EY hard reboots. This is a new phenomenon. Note that the vertex machines were unaffected.
- At some point, all the DC lights on the CDS overview screen went white - at the same time, we couldn't ssh into FB1, although it was responding to ping. After ~2mins, the green lights came back and we were able to connect to FB1. Not sure what to make of this.
While trying to run the dither alignment scripts for the Y-arm, we noticed some strange behaviour:
Even when there was no signal (looking at EPICS channels) at the input of the ASS servos, the output was fluctuating wildly by ~20cts-pp.
This is not simply an EPICS artefact, as we could see corresponding motion of the suspension on the CCD.
A possible clue is that when I run the "Start Dither" script from the MEDM screen, I get a bunch of error messages (see Attachment #2).
Similar error messages show up when running the LSC offset script for example. Seems like there are multiple ports open somehow on the same machine?
There are no indicator lights on the CDS overview screen suggesting where the problem lies.
Will continue investigating tomorrow.
Some other general remarks:
- ETMX watchdog remains shutdown.
- ITMY and BS oplevs have been hijacked for HeNe RIN / Oplev sensing noise measurement, and so are not enabled.
- Y arm trans QPD (Thorlabs) has large 60Hz harmonics. These can be mitigated by turning on a 60Hz comb filter, but we should check if this is some kind of ground loop. The feature is much less evident when looking at the TRANS signal on the QPD.
UPDATE 8:20pm:
Koji suggested trying to simply restart the ASS model to see if that fixes the weird errors shown in Attachment #2. This did the trick. But we are now faced with more confusion - during the restart process, the various indicators on the CDS overview MEDM screen froze up, which is usually symptomatic of the machines being unresponsive and requiring a hard reboot. But we waited for a few minutes, and everything mysteriously came back. Over repeated observations and looking at the dmesg of the frontend, the problem seems to be connected with an unresponsive NFS connection. Jamie had noted some time ago that the NFS seems unusually slow. How can we fix this problem? Is it feasible to have a dedicated machine that is not FB1 do the NFS serving for the FEs? |
Attachment 1: CDS_14Dec2017.png
|
|
Attachment 2: CDS_errors.png
|
|
13481 | Fri Dec 15 11:19:11 2017 |
gautam | Update | CDS | CDS recovery, NFS woes | Looking at the dmesg on c1iscex for example, at least part of the problem seems to be associated with FB1 (192.168.113.201, see Attachment #1). The "server" can be unresponsive for O(100) seconds, which is consistent with the duration for which we see the MEDM status lights go blank, and the EPICS records get frozen. Note that the error timestamped ~4000 was from last night, which means there have been at least 2 more instances of this kind of freeze-up overnight.
I don't know if this is symptomatic of some more widespread problem with the 40m networking infrastructure. In any case, all the CDS overview screen lights were green today morning, and MC autolocker seems to have worked fine overnight.
I have also updated the wiki page with the updated daqd restart commands.
Unrelated to this work - Koji fixed up the MC overview screen such that the MC autolocker button is now visible again. The problem seems to be related to my migrating some of the c1ioo EPICS channels from the slow machine to the fast system, as a result of which the EPICS variable type changed from "ENUM" to something that was not "ENUM". In any case, the button exists now, and the MC autolocker blinky light is responsive to its state.
Quote: |
I don't think the problem is fb1. The fb1 NFS is mostly only used during front end boot. It's the rtcds mount that's the one that sees all the action, which is being served from chiara.
|
|
Attachment 1: NFS.png
|
|
Attachment 2: MCautolocker.png
|
|
13482 | Fri Dec 15 17:05:55 2017 |
gautam | Update | PEM | Trillium seismometer DC offset | Yesterday, while we were bringing the CDS system back online, we noticed that the control room wall StripTool traces for the seismic BLRMS signals did not come back to the levels we are used to seeing even after restarting the PEM model. There are no red lights on the CDS overview screen indicative of DAQ problems. Trending the DQ-ed seismometer signals (these are the calibrated (?) seismometer signals, not the BLRMS) over the last 30 days, it looks like
- On ~1st December, the signals all went to 0 - this is consistent with signals in the other models, I think this is when the DAQ processes crashed.
- On ~8 December, all the signals picked up a DC offset of a few 100s (counts? or um/s? this number is after a cts2vel calibration filter). I couldn't find anything in the elog on 8 December that may be related to this problem.
I poked around at the electronics rack (1X5/1X6) which houses the 1U interface box for these signals - on its front panel, there is a switch with selectable positions "UVW" and "XYZ". It is currently set to the latter. I believe "UVW" refers to the raw outputs along the seismometer's three inclined sensor axes, while "XYZ" gives the derived vertical/horizontal components. Is this the nominal state? I didn't spend too much time debugging the signal further for now.
|
Attachment 1: Trillium.png
|
|
13485
|
Fri Dec 15 19:09:49 2017 |
gautam | Update | IOO | IMC lockloss correlated with PRM alignment? | Motivation:
To test the hypothesis that the IMC lock duty cycle is affected by the PRM alignment. Rana pointed out today that the input Faraday has not been tuned to maximize the output->input isolation in a while, so the idea is that perhaps when the PRM is aligned, some of the reflected light comes back towards the PSL through the Faraday and hence messes with the IMC lock.
A script to test this hypothesis is running over the weekend (in case anyone was thinking of doing anything with the IFO over the weekend).
Methodology:
I've made a simple script - the pseudocode is the following:
- Align PRM
- For the next half hour, monitor the EPICS record for MC TRANS and look for downward steps larger than 5000 cts - each such step is a proxy for an MC lockloss
- At the end of 30 minutes, record number of locklosses in the last 30 minutes
- Misalign PRM, repeat the above 3 bullets
The idea is to keep looping the above over the weekend, so we can expect ~100 datapoints, 50 each for PRM misaligned/aligned. The times at which PRM was aligned/misaligned are also being logged, so we can make some spectrograms of PC drive RMS (for example) with PRM aligned/misaligned. The script lives at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/FaradayIsolCheck.py. The script is running inside a tmux session on pianosa; hopefully the machine doesn't crash over the weekend and MC1/CDS stays happy.
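For concreteness, a minimal sketch of what such a loop could look like (this is only an illustration - the PRM align/misalign step, the log format, and the lockloss counter below are placeholders, not the actual contents of the script above):

import time

def toggle_prm(aligned):
    # placeholder for the actual PRM align/misalign step (e.g. the existing restore/misalign machinery)
    pass

def count_mc_locklosses(duration_s):
    # placeholder: count MC lockloss events seen over duration_s seconds
    return 0

with open('PRM_stepping.txt', 'a') as log:
    aligned = True
    while True:
        toggle_prm(aligned)
        log.write('%.0f PRM %s\n' % (time.time(), 'aligned' if aligned else 'misaligned'))
        n = count_mc_locklosses(30 * 60)    # 30 minute segment per alignment state
        log.write('%.0f locklosses in segment: %d\n' % (time.time(), n))
        aligned = not aligned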
A more direct measurement of the input Faraday isolation can be made by putting a photodiode in place of the beam dump shown in Attachment #1 (borrowed from this elog). I measured ~100uW of power leaking through this mirror with the PRM misaligned (but IMC locked). I'm not sure what kind of SNR we can expect for a DC measurement, but if we have a chopper handy, we could chop the leaked beam just before the PD (so as to allow the IMC to stay locked) and demodulate at that frequency for a cleaner measurement? This way, we could also measure the contribution from prompt reflections (up to the input side of the Faraday) by simply blocking the beam going into the vacuum. The window itself is wedged so that shouldn't be a big contributor. |
Attachment 1: PSL_layout.JPG
|
|
13486
|
Mon Dec 18 16:45:44 2017 |
gautam | Update | IOO | IMC lockloss correlated with PRM alignment? | I stopped the test earlier today morning around 11:30am. The log file is located at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/PRM_stepping.txt. It contains the times at which the PRM was aligned/misaligned for lookback, and also the number of MC unlocks during every 30 minute period that the PRM alignment was toggled. This was computed by:
- continuously reading the current value of the EPICS record for MC Trans.
- comparing its current value to its values 3 seconds ago.
- If there is a downward step in this comparison greater than 5000 counts, increment a counter variable by 1.
- Reset counter at the end of 30 minute period.
I think this method is a pretty reliable proxy, because the MC autolocker certainly takes >3 seconds to re-acquire the lock (it has to run mcdown, wait for the next cavity flash, and then run mcup).
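For reference, a rough sketch of this counting logic (assuming pyepics for the EPICS reads; the MC trans channel name below is a guess and should be checked against the actual record):

import time
from epics import caget            # pyepics, assumed available on the workstation

MC_TRANS = 'C1:IOO-MC_TRANS_SUM'   # assumed channel name - verify against the real EPICS record
THRESH = 5000                      # counts; a downward step larger than this counts as a lockloss

def count_mc_locklosses(duration_s):
    n = 0
    prev = caget(MC_TRANS)
    t0 = time.time()
    while time.time() - t0 < duration_s:
        time.sleep(3)                  # compare against the value 3 seconds ago
        now = caget(MC_TRANS)
        if prev - now > THRESH:        # downward step => MC lost lock
            n += 1
        prev = now
    return n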
Preliminary analysis suggests no obvious correlation between MC lock duty cycle and PRM alignment.
I leave further analysis to those who are well versed in the science/art of PRM/IMC statistical correlations. |
13488
|
Mon Dec 18 20:37:18 2017 |
gautam | Update | PSL | PMC MEDM cleanup | There are fewer lies on this screen now. For reference, the details of the electronics modifications made are in this elog.
- Error and control signals are now in units of nm, the appropriate filter switches have been SDF'ed.
- I think it's useful to see the control voltage to the PZT in volts as well, so I've made two readbacks available at the control point, one in V and one in nm.
- Indicated that the on-board LO mon readback, which reads "nan", is no longer meaningful, as the mixer is off the demod board.
- Indicated that the PMC Trans readback of "0" is because of a dead ADC.
Quote: |
I think many of the readbacks on the PMC MEDM screen are now bogus and misleading since the PMC RF upgrade that Gautam did awhile ago. We ought to fix the screen and clearly label which readbacks and actuators are no longer valid.
|
|
Attachment 1: PMC_revamped.png
|
|
13493
|
Thu Dec 28 17:22:02 2017 |
gautam | Update | General | power outage - CDS recovery |
- I had to manually reboot c1lsc, c1sus and c1ioo.
- I edited the line in /etc/rt.sh (specifically, on FB, /diskless/root.jessie/etc/rt.sh) that lists the models running on a given frontend, to exclude c1dnn and c1oaf, as these are the models that have been giving us the most trouble on startup. After this, I was able to bring back all models on these three machines using rtcds restart --all. The original line in this file has just been commented out, and can be restored whenever we wish to do so.
- mx_stream processes are showing failed status on all the frontends. As a result, the daqd processes are still not working. Usual debugging methods didn't work.
- Restored all sus dampings.
- Slow computers all seem to be responsive, so no action was required there.
- Burtrestored c1psl to solve the "sticky slider" problem, relocked PMC. I didn't do anything further on the PSL table w.r.t. the manual beam block Steve has placed there till the vacuum situation returns to normal.
@Steve: I noticed that we are down to our final bottle of N2, not sure if it will last till 2 Jan which is presumably when the next delivery will come in. Since V1 is closed and the PSL beam is blocked, perhaps this doesn't matter.
from Steve: there are spare full N2 bottles at the south end outside and inside. I replaced the N2 on Sunday night. So the system should be Ok as is.
I also hard-rebooted megatron and optimus as these were unresponsive to ping.
*Seems like the mx_stream errors were due to the mx process not being started on FB. I could fix this by running sudo systemctl start mx on FB. After which I ran sudo systemctl restart daqd_*. But the DC errors persist - not sure how to fix this. Elogging suggests that "0x4000" errors are connected to timing problems on FB, but restarting the ntp service on FB (which is the suggested fix in said elogs) didn't fix it. Also unsure if mx process is supposed to automatically start on FB at startup. |
Attachment 1: 28.png
|
|
13496
|
Tue Jan 2 16:24:29 2018 |
gautam | Update | safety | Projector periodically shuts itself off | I noticed this behaviour since ~Dec 20th, before the power failure. The bulb itself seems to work fine, but the projector turns itself off after <1 minute after being manually turned on by the power button. AFAIK, there was no changes made to the projector/Zita. Perhaps this is some kind of in-built mechanism that is signalling that the bulb is at the end of its lifetime? It has been ~4.5 months (3240 hours) since the last bulb replacement (according to the little sticker on the back which says the last bulb replacement was on 15 Aug 2017 |
13497
|
Tue Jan 2 16:37:26 2018 |
gautam | Update | Optimal Control | Oplev loop tuning | I've made various changes to the optimal loop design approach, but am still not having much success. A summary of changes made:
- Parametrization of filter - enforcing uniqueness
- Previously, the input to the particle swarm was a vector of root frequencies and associated Q-factors.
- This way of parametrizing is not unique - permuting the order of the roots yields the same filter, but particles traversing the high (65) dimensional parameter space may have to cross very expensive regions in order to converge towards the global minimum / best-performing particle.
- One way around this is to parametrize the filter by the highest pole/zero frequency, and then specify the remaining roots by their cumulative separation from this highest root. This guarantees that a unique vector input to the particle swarm function specifies a unique filter.
- To avoid negative frequencies, I manually set a particular element of the vector to 0 if the cumulative sum yields a negative frequency. I believe this is how MATLAB's particle swarm implements the "constraints" in the constrained optimization routines.
- Cost function - I've reformulated this into something that makes more sense to me, but probably can be improved further.
- Term #1 - integral of the area (evaluated with MATLAB's trapz utility) between the in-loop (i.e. suppressed) error signal and the sensing noise spectrum (for the latter, I use the orange curve from this plot). This is a signed number, so that suppression below the sensing noise is penalized. Target value is 1 urad rtHz. One problem I see with this approach is that if we believe the sensing noise measurement, then even at 10mHz, it looks like sensing noise is below the out-of-loop error signal level. So the optimizer doesn't seem to want to make the loop AC coupled.
- Term #2 - stability margin. I'm using this number, which is the distance-of-closest-approach to the point -1 in the Nyquist plot, instead of gain and phase margins, as this yields a more conservative robustness measure. Target value is 0.65. (A short numerical sketch of this metric is given after this list.)
- Term #3 - A2L contribution of in-loop control signal. This contribution is calculated using measurements of A2L coupling for the DRMI. The actual term that goes into the cost function is the ratio of the area under the in-loop control signal to that under the seismic noise curve above 35Hz. Further, f>100Hz is given 10x the weight of 35Hz<f<100Hz (I've not really played around with this weighting function). The goal is to be as close to the seismic curve as possible, at which point this term becomes 1.
- Terms #4 and #5 - the maximum open loop gain evaluated in a 1Hz wide bin centered around the bounce and roll resonances. The aim is to not exceed -40dB in these bins. Perhaps this needs to be reformulated, as the optimizer seems to be giving this term too much importance - the optimized loops have extremely deep bandstops around the BR resonances.
- To normalize each term, I divide by the "target" value mentioned above, so as to make the various terms comparable.
- Each term in the cost function has two regimes - one where it is rapidly varying close to the desired operating point, and one far away where the cost still increases monotonically, but slower (see Attachment #2).
- A scalar cost function is evaluated by taking a weighted sum of the above terms. The weights are chosen so as to make each term ~10 for the controller currently implemented.
- All of the above are only applicable if the resulting loop is stable - else, a large cost is assigned (exponential of sum of real parts of poles of OLTF).
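To make the stability margin term (#2) concrete, here is a small numpy sketch of the distance-of-closest-approach calculation - the zpk model below is an arbitrary stand-in, not the actual Oplev OLTF:

import numpy as np

def oltf(f, zeros, poles, k):
    # evaluate a zero-pole-gain model on the imaginary axis
    s = 2j * np.pi * f
    num = np.prod([s - z for z in zeros], axis=0) if zeros else np.ones_like(s)
    den = np.prod([s - p for p in poles], axis=0)
    return k * num / den

f = np.logspace(-2, 3, 5000)    # 10 mHz to 1 kHz
G = oltf(f, zeros=[-2*np.pi*0.3],
         poles=[-2*np.pi*(1.0 + 10.0j), -2*np.pi*(1.0 - 10.0j)], k=500.0)
# distance of closest approach of the Nyquist curve to -1; target ~0.65 per the cost function
stability_margin = np.min(np.abs(1.0 + G))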
Attachment #1 shows the outcome of a typical optimization run - I am having somewhat more success with this than before, when the PSO algorithm was stalling and terminating before any actual optimization was done, but it seems like I need to re-think the cost function yet again...
Attachment #2 shows the current terms entering the cost function, and their "desired" values.
The current version of the code I am using is here, although I may not have included some of the data files required to run it - to be fixed... |
Attachment 1: loopOpt_180102_1706.pdf
|
|
Attachment 2: globalCosts.pdf
|
|
13501
|
Wed Jan 3 18:00:46 2018 |
gautam | Update | PonderSqueeze | plan of action | Notes of stuff we discussed @ today's meeting, and afterwards, towards measuring ponderomotive squeezing at the 40m.
- Displacement noise requirements
- Kevin is going to see if we can measure any kind of squeezing on a short timescale by tuning various parameters.
- Specifically, without requiring crazy ultra low current noise level for the coil driver noise.
- Investigate how much actuation range we need for lock acquisition and maintaining lock.
- Specifically, for DARM.
- We will measure this by having the arms controlled with ALS in the CARM/DARM basis.
- Build up a noise budget for this, see how significant the laser noise contribution is.
- RC folding mirrors
- In the present configuration, these are introducing ~2.5% RT loss in the RCs.
- This affects PRG, and on the output side, measurable squeezing.
- We want to see if we can relax the requirements on the RC folding mirrors such that we don't have to spend > 20 k$.
- Specifically, consider spec'ing the folding mirror coatings to only have HR @1064 nm, and take what we get at 532 nm.
- But still demand tolerances on RoC driven by mode-matching between the RCs and the arm cavities.
- ALS with Beat Mouth
- Use the fiber coupled light from the ends to make the ALS signals.
- Gautam will update diagram to show the signal chain from end-to-end (i.e. starting at AUX laser, ending at ADC input).
- Make a noise budget for the same - preliminary analysis suggests a sensing noise floor of ~10 mHz/rtHz.
RXA:
- For the ALS-DARM budget the idea is that we can do lock acquisition better, so we don't need to care about the acquisition reqs. i.e. we just need to set the ETM coil driver current range based on the DARM in-lock values.
- To get the coil driver noise to be low enough to detect squeezing we need to use a ~10-15 kOhm series resistor (a rough Johnson-noise number for this is sketched at the end of this list).
- We assume that all DAC and coil driver input noises can be sufficiently filtered.
- We are assuming that we don't change the magnet sizes or the number of coil windings in the OSEMs.
- The noise in the ITMs doesn't matter because we don't use them for any locking activity, so we can easily set the coil driver series resistors to 15 kOhm.
- We will do the bias for the ETMs and ITMs using some HV circuit (not the existing ones on the coil driver boards) and doing the summation after the main coil driver series resistor. This HV bias module needs to handle the ~ (2 V / 400 Ohm) = 5 mA which is now used; pushing 5 mA through a 15 kOhm series resistor means (5 mA) x (15 kOhm) = 75 V, so we need 60+ V drivers.
- IF we can get away with doing the ALS beat note with just red (still using GREEN light from the end laser to lock to the arms from the ends), we will not have any requirements for the 532 nm transmission of any optics in the DRMI area.
- Get some quotes for the new PR/SR mirrors having tight RoC tolerance, high R for 1064, and no spec for 532.
- Check that the 1-way fiber noise for 1064 nm is < 100 mHz/rtHz in the 50-1000 Hz band. If it's more, explore putting better acoustic foam around the fiber run.
- Improve the mode-matching of the IR beam into the fibers at the ends. We want >80% to reduce the noise due to scattering; we don't really care about the amount of light available in the PSL - this is just to reduce the IR-ALS noise.
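As a sanity check on the series resistor numbers above (my own back-of-the-envelope, not from the meeting): the Johnson current noise of the series resistor is sqrt(4*kB*T/R), so going from 400 Ohm to 15 kOhm buys roughly a factor of 6 in current noise.

import numpy as np

kB, T = 1.381e-23, 300.0       # Boltzmann constant [J/K], room temperature assumed [K]
for R in (400.0, 15e3):        # current series resistor vs proposed value [Ohm]
    i_n = np.sqrt(4 * kB * T / R)
    print('R = %6.0f Ohm : i_n ~ %.2g A/rtHz' % (R, i_n))
# -> ~6.4e-12 A/rtHz at 400 Ohm vs ~1.1e-12 A/rtHz at 15 kOhm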
|
13502
|
Thu Jan 4 12:46:27 2018 |
gautam | Update | ALS | Fiber ALS assay | Attachment #1 is the updated diagram of the Fiber ALS setup. I've indicated part numbers, power levels (optical and electrical). For the light power levels, numbers in green are for the AUX lasers, numbers in red are for the PSL.
I confirmed that the output of the power splitter is going to the "RF input" and the output of the delay line is going to the "LO input" of the demodulator box. Shouldn't this be the other way around? Unless the labels are misleading and the actual signal routing inside the 1U chassis is correctly done :/
- Mode-matching into the fibers is rather abysmal everywhere.
- In this diagram, only the power levels measured at the lasers and inputs of the fiber couplers are from today's measurements. I just reproduced numbers for inside the beat mouth from elog13254.
- Inside the beat mouth, the PD output actually goes through a 20dB coupler, which is omitted from this diagram for brevity. Both the direct and coupled outputs are available at the front panel of the beat mouth. The latter is meant for diagnostic purposes. The -8dBm of beat @30MHz quoted is measured using the direct output, and not the coupled output.
Still facing some CDS troubles, will start ALS recovery once I address them.
Attachment #2 is the svg file of Attachment #1, which we can update as we improve things. I'll put it on the DCC 40m tree eventually. |
Attachment 1: FiberALS.pdf
|
|
Attachment 2: FiberALS.svg.zip
|  |
13503
|
Thu Jan 4 14:39:50 2018 |
gautam | Update | General | power outage - timing error | As mentioned in my previous elog, the CDS overview screen "DC" indicators are all RED (everything else is green). Opening up the displays for individual CPUs, the error message shown is "0x4000", which is indicative of some sort of timing error. Indeed, it seems to me that on the FB machine, the gpstime command shows a gps time that is ~1 second ahead of the times on other FE machines.
Running gpstime on other FE machines throws up an error, saying that it cannot connect to the network to update leap second data. Not sure what this is about...
I double checked the GPS timing module, we had some issues with this in the recent past. But judging by its front panel display, everything seems to be in order...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/gpstime", line 9, in <module>
load_entry_point('gpstime==0.2', 'console_scripts', 'gpstime')()
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 356, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2476, in load_entry_point
return ep.load()
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2190, in load
['__name__'])
File "/usr/lib/python3/dist-packages/gpstime/__init__.py", line 41, in <module>
LEAPDATA = ietf_leap_seconds.load_leapdata(notify=True)
File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 158, in load_leapdata
fetch_leapfile(leapfile)
File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 115, in fetch_leapfile
r = requests.get(LEAPFILE_IETF)
File "/usr/lib/python3/dist-packages/requests/api.py", line 60, in get
return request('get', url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 407, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(101, 'Network is unreachable'))
|
13507
|
Fri Jan 5 22:19:53 2018 |
gautam | Update | General | power outage - timing error | Just putting the relevant line from email from Rolf which at least identifies the problem here:
Looks like FB time is actually off by 1 year, as your timing system does not get year info.
There still seems to be something funky with the X arm transmission PDs - I can't seem to get the triggering to switch between the QPD and the Thorlabs PD, and the QPD signal seems to be wildly fluctuating by several orders of magnitude from 0.01-100. The c1iscex FE was pulled out, and it seemed to me like someone was doing some cable re-arrangement at the X end.
I will look into this tomorrow.
Quote: |
Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.
|
|
13510
|
Sat Jan 6 18:27:37 2018 |
gautam | Update | General | power outage - IFO recovery | Mostly back to nominal operating conditions now.
- EX TransMon QPD is not giving any sensible output. Seems like only one quadrant is problematic, see Attachment #1. I blame team EX_Acromag for bumping some cabling somewhere. In any case, I've disabled output of the QPD, and forced the LSC servo to always use the Thorlabs "High Gain" PD for now. Dither alignment servo for X arm does not work so well with this configuration - to be investigated.
- BS Seismometer (Trillium) is still not giving any sensible output.
- I looked under the can, the little spirit level on the seismometer is well centered.
- I jiggled all the cabling to rule out any obvious loose connections - found none at the seismometer, or at the interface unit (labelled D1002694 on the front panel) in 1X5/1X6.
- All 3 axes are giving outputs with DC values of a few hundred - I guess there could've been some big earthquake in early December which screwed the internal alignment of the sensing mass in the seismometer. I don't know how to fix this.
- Attachment #2 = spectra for the 3 channels. Can't say they look very seismicy. I've assumed the units are in um/sec.
- This is mainly bothering me in the short term because I can't use the angular feedforward on PRC alignment, which is usually quite helpful in DRMI locking.
- But I think the PRM Oplev loop is actually poorly tuned, in which case perhaps the feedforward won't really be necessary once I touch that up.
What I did today (may have missed some minor stuff but I think this is all of it):
- At EX:
- Toggled power to Thorlabs trans monitoring PD, checked that it was actually powered, squished some cables in the e- rack.
- Removed PDA55 in the green path (put there for EX laser AM/PM measurement). So green beam can now enter the X arm cavity.
- Re-connected ALS cabling.
- Turned on HV supply for EX Green PZT steering mirrors (this has to be done every time there is a power failure).
- At ITMY table:
- Removed temporary HeNe RIN/ Oplev sensing noise measurement setup. HeNe + 1" vis-coated steering mirror moved to SP table.
- Turned on ITMY/SRM Oplev HeNe.
- Undid changes on ITMY Oplev QPD and returned it to its original position.
- Centered ITMY reflected beam on this QPD.
- At vertex area
- Looked under Trillium seismometer can - I've left the clamps undone for now while we debug this problem.
- Power-cycled Trillium interface box.
- Touched up PMC alignment.
- Control room
- Recover IFO alignment using combination of IR and Green beams.
- Single arm locking recovered, dither alignment servos run to maximize arm transmission. Single arm locks holding for hours, that's good.
- The X arm dither alignment isn't working so well, the transmission never quite hits 1 and it undergoes some low frequency (T~30secs) oscillations once the transmission reaches its peak value.
- Had to do the usual ipcrm thing to get dataviewer to run on pianosa.
Next order of business:
- Recover ALS:
- aim is to replace the vertex area ALS signals derived from 532nm with their 1064nm counterparts.
- Need to touch up end PDH servos, alignment/MM into arms, and into Fibers at ends etc.
- Control the arms (with RMs misaligned) in the CARM/DARM basis using the revised ALS setup.
- Make a noise budget - specifically, we are interested in how much actuation range is required to maintain DARM control in this config.
- Recover DRMI locking
- Continue NBing.
- Do a statistical study of actuation range required for acquiring and maintaining DRMI locking.
|
Attachment 1: EX_QPD_Quad1_Faulty.pdf
|
|
Attachment 2: Trillium_faulty.pdf
|
|
13514
|
Sun Jan 7 17:27:13 2018 |
gautam | Update | PonderSqueeze | Displacement requirements for short-term squeezing | Maybe you've accounted for this already in the Optickle simulations - but in Finesse (software), the "tuning" corresponds to the microscopic (i.e. at the nm level) position of the optics, whereas the macroscopic lengths, which determine which fields are resonant inside the various cavities, are set separately. So it is possible to change the microscopic tuning of the SRC, which need not necessarily mean that the correct resonance conditions are satisfied. If you are using the Finesse model of the 40m I gave you as a basis for your Optickle model, then the macroscopic length of the SRC in that was ~5.38m. In this configuration, the f2 (i.e. 55MHz sideband) field is resonant inside the SRC while the f1 and carrier fields are not.
If we decide to change the macroscopic length of the SRC, there may also be a small change to the requirements on the RoCs of the RC folding mirrors. Actually, come to think of it, the difference in macroscopic cavity lengths explains the slight differences in mode-matching efficiencies between the arms and the RCs that I was seeing before.
Quote: |
Yes, this SRC detuning is very close to extreme signal recycling (0° in this convention), and the homodyne angle is close to the amplitude quadrature (90° in this convention).
For T(SRM) = 5% at the optimal angles (SRC detuning of -0.01° and homodyne angle of 89°), we can see 0.7 dBvac at 210 Hz.
|
|
13518
|
Tue Jan 9 11:52:29 2018 |
gautam | Update | CDS | slow machine bootfest | Eurocrate key turning reboots today morning for and c1susaux, c1auxey and c1iscaux. These were responding to ping but not telnet-able. Usual precautions were taken to minimize risk of ITMX getting stuck.
|
13519
|
Tue Jan 9 21:38:00 2018 |
gautam | Update | ALS | ALS recovery |
- Aligned IFO to IR.
- Ran dither alignment to maximize arm transmission.
- Centered Oplev reflections onto their respective QPDs for ITMs, ETMs and BS, as DC alignment reference. Also updated all the DC alignment save/restore files with current alignment.
- Undid the first 5 bullets of elog13325. The AUX laser power monitor PD remains to be re-installed and re-integrated with the DAQ.
- I stupidly did not refer to my previous elog of the changes made to the X end table, and so spent ages trying to convince Johannes that the X end green alignment had shifted; it turned out that the green locking wasn't working because of the 50ohm terminator added to the X end NPRO PZT input. I am sorry for the hours wasted

- GTRY and GTRX at levels I am used to seeing (i.e. ~0.25 and ~0.5) now. I tweaked input pointing of green and also movable MM lenses at both ends to try and maximize this.
- Input green power into X arm after re-adjusting previously rotated HWP to ~100 degrees on the dial is ~2.2mW. Seems consistent with what I reported here.
- Adjusted both GTR cameras on the PSL table to have the spots roughly centered on the monitors.
Will update shortly with measured OLTFs for both end PDH loops.
- X end PDH seems to have UGF ~9kHz, Y end has ~4.5kHz. Phase margin ~60 degrees in both cases. Data + plotting code attached. During the measurement, GTRY ~0.22, GTRX~0.45.
Next, I will work on commissioning the BEAT MOUTH for ALS beat generation.
Note: In the ~40mins that I've been typing out these elogs, the IR lock has been stable for both the X and Y arms. But the X green has dropped lock twice, and the Y green has been fluctuating rather more, but has managed to stay locked. I think the low frequency Y-arm GTRY fluctuations are correlated with the arm cavity alignment drifting around. But the frequent X arm green lock dropouts - not sure what's up with that. Need to look at IR arm control signals and ALS signals at lock drop times to see if there is some info there. |
Attachment 1: GreenLockStability.png
|
|
Attachment 2: ALS_OLTFs_20180109.pdf
|
|
Attachment 3: ALS_OLTF_data_20180109.tar.bz2
|
13520
|
Tue Jan 9 21:57:29 2018 |
gautam | Update | Optimal Control | Oplev loop tuning | After some more tweaking, I feel like I may be getting closer to a cost-function definition that works.
- The main change I made was to effectively separate the BR-bandstop filter poles/zeros from the rest of the poles and zeros.
- So now the input vector is still a list of the highest pole frequency followed by frequency separations, but I can specify much tighter frequency bounds for the roots of the part of the transfer function corresponding to the Bounce/Roll bandstops (a small sketch of this vector-to-frequency mapping follows this list).
- This in turn considerably reduces the swarming area - at the moment, half of the roots are for the notches, and in the (f0,Q) basis, I see no reason for the bounds on f0 to be wider than [10,30]Hz.
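For illustration, a small sketch of the vector-to-frequency mapping (the numbers are arbitrary, and the Q values that would also be part of the input vector are omitted here):

import numpy as np

def vector_to_freqs(x):
    # x = [f_highest, df_1, df_2, ...]; remaining roots are built by cumulative subtraction
    f_highest, separations = x[0], np.asarray(x[1:], dtype=float)
    freqs = f_highest - np.cumsum(separations)
    freqs[freqs < 0] = 0.0          # clamp to avoid negative frequencies
    return np.concatenate(([f_highest], freqs))

# e.g. vector_to_freqs([55., 10., 5., 20., 30.]) -> [55., 45., 40., 20., 0.]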
Some things to figure out:
- How to "force" the loop to be AC coupled without explicitly requiring it to be so? What should the AC coupling frequency be? From the (admittedly cursory) sensing noise measurement, it would seem that the Oplev error signal is above sensing noise even at frequencies as low as 10mHz.
- In general, the loops seem to do well in reducing sensing noise injection - but they seem to do this at the expense of the loop gain at ~1Hz, which is not what we want.
- I am going to try and run the optimizer with an excess of poles relative to zeros
- Currently, n(Poles) = n(Zeros), and this is the condition required for elliptic low pass filters, which achieve fast transition between the passband and stopband - but we could just as well use a less rapid, but more monotonic roll-off. So the gain at 50Hz might be higher, but at 200Hz, we could perhaps do better with this approach.
- The loop shape between 10 and 30Hz that the optimizer outputs seems a bit weird to me - it doesn't really quite converge to a bandstop. Need to figure that out.
|
Attachment 1: loopOpt_180108_2232.pdf
|
|
13522
|
Wed Jan 10 12:24:52 2018 |
gautam | Update | CDS | slow machine bootfest | MC autolocker got stuck (judging by wall StripTool traces, it has been this way for ~7 hours) because c1psl was unresponsive so I power cycled it. Now MC is locked. |
|