Attachment #1 - Measured / modelled noises for the AS55 demod board. I've plotted the quadrature sum of the LISO trace with the SR785 noise floor, with the input terminated to ground via 50 ohms. Note that these measurements were made after all the changes in the marked up schematic in the previous elog were implemented.
Both channels should be identical - I don't understand why the I channels are noisier than their Q counterparts. This is almost certainly a problem on the daughter board, as the orange traces are pretty much identical for both channels.
The dark red curves were measured by shorting the inputs to D040179 to ground via 50 ohms using some Pomona minigrabbers - I wanted to avoid ripping the daughter board out, but this probably explains the excess noise compared to the green trace at low frequencies. All other measurements were made with the board installed in the LSC rack eurocrate, with the LO input driven at the nominal level (I didn't measure this yesterday, but a measurement from ~6 months ago says that this level is 1.5 dBm).
Wall StripTool traces showed that the IMC had not been locked for at least 8 hours when I came in this morning. Going to the IMC autolocker log, it looks like the last timestamp was at ~6pm yesterday. Megatron was responding to ping, but I couldn't ssh into it. So I went over to the machine and did a hard reboot via the front panel power switch. The computer took ~10 mins to come back online and respond to ping. Once it did, I was able to ssh into it. However, trying the usual commands to restart the IMC autolocker and FSS Slow loops didn't work. Specifically, monitoring the logfile with tail -f Autolocker.log, I would see that the autolocker seemed to get stuck after starting the "blinky" script. Trying to restart the process using sudo initctl restart MCautolocker, init would print to shell that the restart had worked and report the PID, but the logfile wouldn't update "live" as it should when tail is used with the -f option. All very strange.
Anyways, as a last resort, I kill -9'ed the PID for the init instance, and init automatically restarted the Autolocker - this did the trick, IMC is locked now and logfile seems to be getting updated normally.
I also cleared a bunch of matlab crash dump files in the home directory.
Koji suggested looking at the output of the AS55 demod board on a fast oscilloscope to look for differences in the two channel outputs (if there are high-frequency oscillations, for example, we could miss this information in the SR785 spectra). Besides, I was only looking at spectra out to a few kHz on the SR785. I grabbed this data with a 300 MHz BW Tektronix oscilloscope (battery mode) today. The input impedance of both channels was set to 1 Mohm, and the measurement was made with the RFPD input terminated; the output of the daughter board is what is measured. The vertical scaling of the channels was set to the minimum allowed, 1mV/div.
Attachment #1 shows that there is indeed a visible difference between the two channels - the (noisier) I channel has a much larger DC offset of ~5mV compared to the Q channel (I tried switching channels on the O'scope and the larger DC offset remained on the I channel, so it seems real). There is also some kind of oscillation going on in the I channel, although the frequency is pretty low, with the peaks spaced ~50us apart. Indeed, in the ASD of the acquired data, the excess power in the I channel at 20kHz and its higher harmonics is evident (see Attachment #2). Anyway, all of this points to something being anomalous on the daughter board I channel signal path - I will pull it out and monitor the outputs at various points along the signal path with the fast scope to see if I can narrow down what's going on where.
We took a closer look at the AS55 demod board today. The procedure was to just be as thorough as possible, and check the behaviour of the circuit (both Transfer Function and Noise) stage by stage. Checking the transfer function was the key.
During this process, we found that the reason why the Q channels had lower noise than the I channels was that the gain of the AD829 stage of the circuit was 0dB rather than 4dB (which is what it should be according to the component values used). Specifically, resistor R12, which is supposed to be 1.30kohm, was measured to be 1.03kohm. Replacing this resistor, the transfer functions (see Attachment #1) and noise levels (see Attachment #2) match the expectations from LISO. Some notes:
Assuming the whitened ADC noise level is much below this (it should only be ~10nV/rtHz), and given the measured sensing element of 6.2e8 V/m, this means that the dark noise sets a maximum achievable sensitivity of 2e-16m/rtHz.
To figure out what (if anything) is to be done next, we need to first figure out what the goal is. In the end, we care about DARM and not MICH. The optical gain for the former is ~300x the latter, so the dark noise contribution gets scaled by this factor (giving us a number of 7e-19 m/rtHz). There are certainly many noises above that level which have to be handled first. Indeed, looking at the DARM spectrum from the DRFPMI lock back in March 2016, it looks like the current 1f DRMI (with coils de-whitened) Michelson sensitivity is within a factor of 2 of DARM in the full lock (albeit with vertex DoFs on 3f signals, and no coil de-whitening). Koji pointed out that we need to consider the photodiode resonant circuit itself too.
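As a quick sanity check of the arithmetic above (the ~124 nV/rtHz dark noise figure below is just back-inferred from the numbers quoted, not an independent measurement):

# Sanity check of the dark-noise-limited sensitivity numbers quoted above.
# The dark noise ASD here is back-inferred from those numbers, not measured.
sensing_gain_mich = 6.2e8      # V/m, measured MICH sensing element
dark_noise_asd = 124e-9        # V/rtHz (assumed, consistent with the quoted limit)

mich_limit = dark_noise_asd / sensing_gain_mich   # ~2e-16 m/rtHz
darm_limit = mich_limit / 300                     # DARM optical gain ~300x MICH -> ~7e-19 m/rtHz
print(mich_limit, darm_limit)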
TODO: Upload all this onto the DCC
While working on the IFO tonight, I noticed that the blinky status lights on c1iscex and c1iscey were frozen (but those on the other 3 FEs seemed fine). All other lights on the CDS overview screen were green. I couldn't access testpoints from these machines, and the EPICS readbacks for models on these FEs (e.g. Oplev servo inputs, outputs etc.) were frozen at some fixed value. This lasted for a good 5 minutes at least. But the blinky lights started blinking again without me doing anything. Not sure what to make of this. I am also not sure how to diagnose this problem, as trending the slow EPICS records of the CPU execution cycle time (for example) doesn't show any irregularity.
So this wasn't just an EPICS freeze? I don't see how this had anything to do with any of the work I did earlier today. I didn't modify any of the running front ends, didn't touch either of the end station machines or the DAQ, and didn't modify the network in any way. I didn't leave anything running.
If you couldn't access test points then it sounds like it was more than just EPICS. It sounds like maybe the end machines somehow fell off the network momentarily. Was there anything else going on at the time?
I was looking at the ASDC channel on dataviewer, and toggling various settings like whitening gain. At some point, the signal just froze. So I quit dataviewer and tried restarting it, at which point it complained about not being able to connect to FB. This is when I brought up the CDS_OVERVIEW medm screen, and noticed the frozen 1pps indicator lights. There was certainly something going on with the end FEs, because I was able to ping the machine, but not ssh into it. Once the 1pps lights came back, I was able to ssh into c1iscex and c1iscey, no problems.
Could it be that some of the mx processes stalled, but the systemctl routine automatically restarted them after some time?
An mx_stream glitch would have interrupted data flowing from the front end to the DAQ, but it wouldn't have affected the heartbeat. The heartbeat stop could mean either that the front end process froze, or the EPICS communication stopped. The fact that everything came back fine after a couple of minutes indicates to me that the front end processes all kept running fine. If they hadn't I'm sure the machines would have locked up. The fact that you couldn't connect to the FE machine is also suspicious.
My best guess is that there was a network glitch on the martian network. I don't know how to account for the fact that pings still worked, though.
The signal path for the ASDC signal is AS55 PD --> D990543 (interface board) --> D990694 (whitening board) --> D000076 (AA board) --> ADC Ch 31. Everything in this signal chain should be able to handle signals in the range +/- 10V, which should correspond to the full range of our +/-10V, 16bit ADCs. But the ASDC signal seems to saturate at ~2000 counts (i.e. turning up the analog whitening gain doesn't make the signal get any bigger than this). I investigated this a little more today.
So the problem lies somewhere downstream of the D990694. There are other anomalous behaviours of this channel - e.g. engaging the analog whitening filters changes the DC offset of the signal. I am going to pull out this board to check it out.
Why does this matter? I want to calibrate the ASDC level (and eventually the other PD DC signals as well) into Watts. This is useful for IFO diagnostics, noise budgeting the shot noise level etc.
According to the AS55 schematic, the DC transimpedance is 66.7 ohms. I claim that the DC power on the AS55 photodiode during a DRMI (no arms) lock is ~1mW. The C30642 photodiode (InGaAs) responsivity is ~0.8 A/W. So I'd expect ~50mV to be the signal level into the ADC (assuming the gain of all the other electronics in the signal chain at the start of this elog is unity). This corresponds to ~163 counts (since the ADC conversion factor is 2^16 counts over 20 volts). The DC signal level I observed is ~200 counts. So things seem roughly consistent.
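For reference, the same arithmetic in a few lines of python (values copied from above):

# Expected ASDC level, using the numbers quoted above.
P_dc = 1e-3              # W, claimed DC power on the AS55 PD in the DRMI lock
resp = 0.8               # A/W, C30642 InGaAs responsivity
Z_dc = 66.7              # ohm, DC transimpedance from the schematic
adc_gain = 2**16 / 20.0  # counts/V for the +/-10 V, 16-bit ADC

V_dc = P_dc * resp * Z_dc    # ~53 mV, assuming unity gain downstream
counts = V_dc * adc_gain     # ~175 counts (the ~163 above uses the rounded 50 mV)
print(V_dc, counts)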
*Note: Despite my above statement, I don't think it is true that the AS110 PD has more light on it - the BS splitting the light between AS55 and AS110 PDs is a 50-50 BS, and using the crude method of putting an Ophir power meter in front of both PDs and monitoring the power while the Michelson was swinging around freely showed roughly the same maximum value.
Last night, I collected ~30mins of data for the vertex seismometer channels and the POP QPD PIT/YAW signals with the PRMI locked on carrier (angular FF OFF). The ITM Oplev loops weren't DC coupled, as they are in the full IFO locking sequence, but I feel like the angular FF filters can be improved - there are frequent sharp dives in the AS110 signal level which are correlated with large amplitude motion of the POP spot on the control room CCD monitor.
Repeating the frequency domain multicoherence analysis using the BS_X and BS_Y seismometer channels as witnesses suggests that we can win significantly (see Attachment #1).
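For the record, the multicoherence is computed roughly along the lines of the sketch below (this is not the actual analysis code; the FFT parameters and channel handling are placeholders):

import numpy as np
from scipy import signal

def multicoherence(witnesses, target, fs, nperseg=2**12):
    # Multiple coherence of a set of witness channels (e.g. BS_X, BS_Y
    # seismometers) with a target channel (e.g. POP QPD PIT):
    # gamma^2(f) = Pxy^H Pxx^-1 Pxy / Pyy
    witnesses = np.atleast_2d(witnesses)
    n = witnesses.shape[0]
    f, Pyy = signal.welch(target, fs=fs, nperseg=nperseg)
    Pxx = np.zeros((len(f), n, n), dtype=complex)
    Pxy = np.zeros((len(f), n), dtype=complex)
    for i in range(n):
        _, Pxy[:, i] = signal.csd(witnesses[i], target, fs=fs, nperseg=nperseg)
        for j in range(n):
            _, Pxx[:, i, j] = signal.csd(witnesses[i], witnesses[j], fs=fs, nperseg=nperseg)
    coherent = np.einsum('fi,fij,fj->f', np.conj(Pxy), np.linalg.inv(Pxx), Pxy).real
    return f, coherent / Pyy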
I've never really implemented feedforward filters - I was planning on using ericq's latest entry on this subject as a guide. From what I gather, the procedure is as follows:
Some notes from Rana from some years ago: https://nodus.ligo.caltech.edu:8081/40m/11519
If anyone has pointers / other considerations I should take into account, please post here.
This happened again just now - it was roughly this time when this happened last night as well.
There was certainly an EPICS freeze of the kind we were used to seeing prior to replacing the martian wireless router sometime in late 2015 (or early 2016?). I was trying to run the dither alignment servos on the Y-arm at this time, and all the StripTool traces flatlined.
I took the opportunity to try accessing testpoints from the iscey ADCs - specifically C1:SUS-TRY_OUT, and it seemed to work just fine. However, I couldn't ssh into c1iscey.
Looking at the dmesg once I was able to ssh in eventually (~2 minutes deadtime tonight, I feel like it was longer yesterday but can't quantify), I see the following: not sure if there are any clues in here, or whether this is the correct log to check. But there are many instances of the nfs server related message in the log. Note that the system time-stamp corresponds to when this freeze happened.
[5461308.784018] nfs: server 192.168.113.201 not responding, still trying
[5461412.936284] nfs: server 192.168.113.201 OK
[5461412.937130] systemd: Starting Journal Service...
[5461412.947947] systemd-journald: Received SIGTERM from PID 1 (systemd).
[5461412.996063] systemd: Unit systemd-journald.service entered failed state.
[5461413.002627] systemd: systemd-journald.service has no holdoff time, scheduling restart.
[5461413.008983] systemd: Stopping Journal Service...
[5461413.014664] systemd: Starting Journal Service...
[5461413.044262] systemd: Started Journal Service.
[5461413.694838] systemd-journald: Received request to flush runtime journal from PID 1
[steve, jamie, gautam]
The machine that now serves as our Frame Builder, FB1, was sitting on top of megatron. I decided that this wasn't ideal, and asked Steve to get some alternative mounting solution. Today, he procured some shelves to put FB1 on. Jamie suggested looking for the slider-rail that came with the machine, and using that instead, as it will allow us to slide FB1 out of the rack as we do megatron and the old FB. But as luck would have it, the distance between the rack vertical posts is 26 inches, but the rail is 27 inches. So we had to accept the less ideal solution of putting FB1 on two shelves, with no sliding option. Photo to be uploaded shortly.
For this work, I had to shutdown FB1 for about 1 hour between 3pm and 4pm. It seems to have come back up fine now.
Do not leave organic trash or food boxes in the 40m to attract ants!
Alex is going to have an undergrad work on a calibration optimization project on the 40m RTCDS system. For this purpose, we wanted to set up a "Simulated DARM loop". Today, Alex and I set this up. I figured we can use the c1tst model for this purpose. We basically copied the topology from Figure 2 of the h(t) paper. Attached are screenshots of the MEDM screens of the system we set up, and the simulink block diagram - the main screen can be accessed from the "SIM PLANT" tab in the sitemap.
It remains to setup the appropriate filters in the filter banks, and an EPICS channel monitor for monitoring the single excitation testpoint in the model. We also did not set up any DQ channels for the time being, as it is not even clear to me what channels need to be DQ-ed.
[ Gautam , Steve ]
c1susaux & c1iscaux were rebooted manually.
Had to reboot c1psl, c1susaux, c1auxex, c1auxey and c1iscaux today. PMC has been relocked. ITMX didn't get stuck. According to this thread, there have been two instances in the last 10 days in which c1psl and c1susaux have failed. Since we seem to be doing this often lately, I've made a little script that uses the netcat utility to check which slow machines respond to telnet; it is located at /opt/rtcds/caltech/c1/scripts/cds/testSlowMachines.bash.
The script can be executed by ./testSlowMachines.bash.
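The actual script is just a netcat loop, but the idea is the same as the python sketch below (the hostnames listed are only examples):

# Check which slow (VME/EPICS) machines still accept telnet connections.
# A python sketch of the same idea as testSlowMachines.bash, which uses netcat.
import socket

SLOW_MACHINES = ['c1psl', 'c1susaux', 'c1auxex', 'c1auxey', 'c1iscaux']

for host in SLOW_MACHINES:
    try:
        with socket.create_connection((host, 23), timeout=2):
            print(host, 'responding on the telnet port')
    except OSError:
        print(host, 'NOT responding - probably needs a key-turn reboot')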
Lompoc 4.3M and 3.7M Avalon
Valve configuration: Vacuum normal
Note: TP2 running at 75 krpm, 0.25 A, 26 C has a loud high-pitched sound today. Its foreline pressure is 78 mTorr. Room temp 20 C.
None of the 3 dd backups I made were bootable - at boot, selecting the drive put me into grub rescue mode, which seemed to suggest that the /boot partition did not exist on the backed up disk, despite the fact that I was able to mount this partition on a booted computer. Perhaps for the same reason, but maybe not.
After going through various StackOverflow posts / blogs / other googling, I decided to try cloning the drives using ddrescue instead of dd.
This seems to have worked for nodus - I was able to boot to console on the machine called rosalba which was lying around under my desk. I deliberately did not have this machine connected to the martian network during the boot process for fear of some issues because of having multiple "nodus"-es on the network, so it complained a bit about starting the elog and other network related issues, but seems like we have a plug-and-play version of the nodus root filesystem now.
chiara and fb1 rootfs backups (made using ddrescue) are still not bootable - I'm working on it.
Nov 6 2017: I am now able to boot the chiara backup as well - although mysteriously, I cannot boot it from the machine called rosalba, but can boot it from ottavia. Anyways, seems like we have usable backups of the rootfs of nodus and chiara now. FB1 is still a no-go, working on it.
Looks to have worked this time around.
controls@fb1:~ 0$ sudo dd if=/dev/sda of=/dev/sdc bs=64K conv=noerror,sync
33554416+0 records in
33554416+0 records out
2199022206976 bytes (2.2 TB) copied, 55910.3 s, 39.3 MB/s
You have new mail in /var/mail/controls
I was able to mount all the partitions on the cloned disk. Will now try booting from this disk on the spare machine I am testing in the office area now. That'd be a "real" test of if this backup is useful in the event of a disk failure.
Udit Kahndelwal received 40m-specific basic safety training on Friday, Oct. 27
IFO pressure 1.2e-5 Torr at 9:30am
Atm. 1: this was the vacuum condition this morning.
IFO P1 9.7 mTorr, V1 open, V4 in closed position, Maglev ~37 C warm at its normal 560 Hz rotation speed, with foreline pressure 3.9 Torr because V4 closed 2 days ago when TP2 failed ..... see Atm. 3
The error message at the TP2 controller was: fault overtemp.
I did the following to restore IFO pumping: stopped pumping of the annuli with TP3, and configured the valves so that TP3 can be the forepump of the Maglev.
closed VM1 to protect the RGA, closed PSL shutter ..... see Gautam's entry
aux fan on to cool down Maglev-TP1, room temp 20 C,
aux drypump turned on and opened to the TP3 foreline to gain pumping speed,
closed PAN to isolate annulus pumping,
opened V7 to pump the Maglev foreline with TP3 running at 50 krpm. It took 10 minutes to reach P2 = 1 mTorr from 3.9 Torr
aux drypump closed off at P2 1 mTorr, TP3 foreline pressure 362 mTorr.......see Atm.2
As we are running now:
IFO pressure 7e-6 Torr at the Hornet cold cathode gauge at 15:50. We have no IFO CC1 logging now. The annuli are in the 3-5 mTorr range and are not pumped.
TP3 as foreline pump of TP1 at 50 krpm, 0.24 A, 24 C; its drypump foreline pressure is 324 mTorr
V4 valve cable is disconnected.
I need help with wiring up the logging of the Hornet cold cathode gauge.
Eurocrate key-turning reboots this morning for c1psl and c1aux. c1auxex and c1auxey are also down, but I didn't bother keying them for now. The PSL FSS slow loop is now active again (its inactivity was what prompted me to check the status of the slow machines).
Note that the EPICS channels for the PSL shutter are hosted on c1aux. But it looks like the slow machine became unresponsive at some point during the weekend, so plotting the trend data for the PSL shutter channel would have you believe that the PSL shutter was open all the time. But the MC_REFL DC channel tells a different story - it suggests that the PSL shutter was closed at ~4 AM on Sunday, presumably by the vacuum interlock system. I wonder:
Our new Agilent Technology TwisTorr 84FS AG rack controller (English manual pages 195-297), RS232/485, product number X3508-64001, S/N IT1737C383
This controller, the turbo and its drypump need to be set up in our existing vacuum system. The intake valve of this turbo (V4) has to have a hardwired interlock that closes V4 when the rotation speed is less than 20% of the preset RPM.
The unit has an analog 10 Vdc output that is proportional to rotation speed. This can be used with a comparator to drive the interlock, or there may be a software option in the controller to close the valve if the turbo slows down by more than 20%.
The last upgrade of the 40m vacuum system (1/2/2000) is described in LIGO-T000054-00-R.
In that upgrade the LabView / Metrabus controls were replaced by a VME processor and an EPICS interface.
We do not have schematics of the hardware wiring.
We need help with this.
Eurocrate key-turning reboots this morning for c1susaux, c1auxex and c1auxey. The usual precautions were taken to minimize the risk of ITMX getting stuck.
The IFO hasn't been aligned in ~1week, so I recovered arm and PRM alignment by locking individual arms and also PRMI on carrier. I will try recovering DRMI locking in the evening.
As far as MC1 glitching is concerned, there hasn't been any major one (i.e. one in which MC1 is kicked by such a large amount that the autolocker can't lock the IMC) for the past 2 months. But the MC WFS offsets are an indication of when smaller glitches have taken place, and there were large DC offsets on the MC WFS servo outputs, which I offloaded to the MC suspension DC sliders using the MC WFS relief script.
I'd like for the save-restore routine that runs when the slow machines reboot to set the watchdog state default to OFF (currently, after a key-turning reboot, the watchdogs are enabled by default), but I'm not really sure how this whole system works. The relevant files seem to be in the directory /cvs/cds/caltech/target/c1susaux. There is a script in there called startup.cmd, which seems to be the initialization script that runs when the slow machine is rebooted. But looking at this file, it is not clear to me where the default values are loaded from. There are a few "saverestore" files in this directory as well:
Are the "default" channel values loaded from one of these?
Some days ago, I had tried to measure the SRCL->MICH and PRCL->MICH cross couplings using broadband noise injected between 120-180 Hz, a frequency band chosen arbitrarily; in hindsight, I could have done a more broadband test. I've spent some time including the infrastructure to calculate "White-Noise TFs" in the noise budgeting code, where a transfer function is estimated by injecting a "broadband" excitation into a channel of interest, and looking at the resulting response in MICH. I figured this would be useful to estimate other couplings as well, e.g. laser intensity noise, oscillator noise etc.
I estimate the transfer function of the coupling using the relation (MICH is the median ASD of the MICH error signal in the below expression, and similarly for PRCL)
Attachments #1 and #2 show the spectra of the MICH, PRCL and SRCL signals during 'quiet' times and during the injection, while Attachment #3 shows the calculated coupling TFs using the above relation. These are significantly different (more than 10dB lower) than the numbers I reported in elog 13367, where the measurement was made using swept sine. As can be seen in the attached plots, the injected broadband excitation is visible above the nominal noise level, and I calculated the white noise TFs using ~5mins of data which should be plenty, so I'm not sure atm what to make of the answers from swept-sine and broadband injections being so different.
Attachment #4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz.
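For the record, here is a sketch of the white-noise TF estimate as I think of it - the exact expression referred to above didn't survive into this entry, so the quadrature-difference form below is my reconstruction, not a quote of the NB code. The ASDs are the median-averaged ones (see the note further down on median averaging):

import numpy as np

def white_noise_tf(asd_aux_quiet, asd_aux_inj, asd_mich_quiet, asd_mich_inj):
    # Coupling TF magnitude (e.g. PRCL->MICH) from a broadband injection:
    # quadrature-difference the 'injection on' and 'quiet' ASDs of both
    # channels and take the ratio. Only meaningful inside the injection band.
    num = np.sqrt(np.clip(asd_mich_inj**2 - asd_mich_quiet**2, 0, None))
    den = np.sqrt(np.clip(asd_aux_inj**2 - asd_aux_quiet**2, 0, None))
    return num / den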
Nov 8 1600: Updating NB to include estimated Oplev A2L.
This seems to be preventing us from reaching the dark noise once the coil de-whitening is engaged. But the lack of coherence means the mechanism is not re-injection of SRCL/PRCL sensing noise? Need to think about what this means / how we can mitigate it.
I hadn't re-locked the DRMI after the work on the AS55 demod board. Tonight, I was able to recover the DRMI locking with the old settings.
The feature in the PRCL spectrum (uncalibrated, y-axis is cts/rtHz) at ~1.6kHz is mysterious, I wonder what that's about.
Wasted some time tonight futzing around with various settings because I couldn't catch a DRMI lock, thinking I may have to re-tune demod phases etc given that I've been mucking around the LSC rack a fair bit. But fortunately, the problem turned out to be that the correct feedforward filters were not enabled in the angular feedforward path (seems like these are not SDF monitored). Clue was that there was more angular motion of the POP spot on the CCD than I'm used to seeing, even in the PRMI carrier lock.
After fixing this, lock was acquired within seconds, and the locks are as robust as I remember them - I just broke one after ~20mins locked because I went into the lab. I've been putting off looking at this angular feedforward stuff and trying out some ideas rana suggested, seems like it can be really useful.
As part of the pre-lock work, I dither-aligned the arms, and then ran the PRCL/MICH dithers as well, following which I re-centered the ITM, PRM and BS Oplev spots onto their respective QPDs - they have not been centered for a couple of months now.
I'm now going to try and measure some other couplings like PSL RIN->MICH, Marconi phase noise->MICH etc.
I tried measuring the coupling of PSL intensity noise by driving some broadband noise bandpassed between 80-300Hz using the spare DAC channel at 1Y3 that I had set up for this purpose a couple of weeks ago (via a battery powered SR560 buffer set to low-noise operation mode because I'm not sure if the DAC output can drive a ~20m long cable). I was monitoring the MC2 TRANS QPD Sum channel spectrum while driving this broadband noise - the "nominal" spectrum isn't very clean, there are a bunch of notches from a 60Hz comb and a forest of peaks over a broad hump from 300Hz-1kHz (see Attachment #1).
I was able to increase the drive to the AOM till the RIN in the band being driven increased by ~10x, and saw no change in the MICH error signal spectrum [see Attachment #1] - during this measurement, the RFPD whitening was turned on for REFL11, REFL55 and AS55, and the ITM coil drivers were de-whitened, so as to get a MICH spectrum that is about as "low-noise" as I've gotten it so far.
I tried increasing the drive further, but at this point, started seeing frequent MC locklosses - I'm not convinced this is entirely correlated to my AOM activities, so I will try some more, but at the very least, this places an upper bound on the coupling from intensity noise to MICH.
why no oplev trace in the NB ?
#4 shows the noise budget from the October 8 DRMI lock with the updated SRCL->MICH and PRCL->MICH couplings (assumed flat, extrapolated from Attachment #2 in the 120-180Hz band). If these updated coupling numbers are to be believed, then there is still some unexplained noise around 100Hz before we hit the PD dark noise. To be investigated. But if Attachment #4 is to be believed, it is not surprising that there isn't significant coherence between SRCL/PRCL and MICH around 100Hz
also, this method would work better if we had a median averaging python PSD instead of mean averaging as in Welch's method.
The Oplev trace is missing for now, as I have not re-measured the A2L coupling since modifying the Oplev loop shape (specifically the low pass filter and overall gain) to allow engaging the coil de-whitening.
The averaging for the white noise TFs plotted is computed using median averaging - I have used a python transcription of Sujan's matlab code. I use scipy.signal.spectrogram to compute the fft bins (I've set some defaults like 8s fft length and a tukey window), and then take the median average using np.median(). I've also incorporated the ln(2) correction factor.
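For reference, a minimal sketch of the median-averaged PSD described above (not the actual NB code; the Tukey shape parameter and overlap here are just the scipy defaults):

import numpy as np
from scipy import signal

def median_psd(x, fs, fftlen=8.0):
    # Median-averaged one-sided PSD with 8 s FFTs and a Tukey window.
    nperseg = int(fftlen * fs)
    f, _, Sxx = signal.spectrogram(x, fs=fs, window=('tukey', 0.25),
                                   nperseg=nperseg, noverlap=nperseg // 2,
                                   scaling='density', mode='psd')
    # For chi^2(2)-distributed periodogram bins the median underestimates the
    # mean by a factor of ln(2), hence the correction.
    return f, np.median(Sxx, axis=-1) / np.log(2)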
It seems like GwPy has some in-built capability to compute median (and indeed various other percentile) averages, but since we aren't using it, I just coded this up.
We've been talking about increasing the series resistance for the coil driver path for the test masses. One consequence of this will be that we have reduced actuation range.
This may not be a big deal since for almost all of the LSC loops, we currently operate with a limiter on the output of the control filter bank. The value of the limit varies, but to get an idea of what sort of "threshold" velocities we are looking at, I calculated this for our Finesse 400 arm cavities. The calculation is rather simplistic (see Attachment #1), but I think we can still draw some useful conclusions from it:
So, from this rough calculation, it seems like we would lose ~25% efficiency in locking the arm cavity if we up the series resistance from 400 ohm to 1 kohm. Doesn't seem like a big deal, because currently the single arm locking ...
There hasn't been a big glitch that misaligns MC1 by so much that the autolocker can't lock for at least 3 months, seems like there was one ~an hour ago.
I disabled autolocker and feedback to the PSL, manually aligned MC1 till the MC_REFL spot looked right on the CCD to me, and then re-engaged the autolocker, all seems to have gone smoothly.
Gautam and I measured the noise of the ADC for channels 17, 18, and 19. We plan to use those channels for measuring the noise of the temperature sensors, and we need to figure out whether or not we will need whitening and if so, how much. The figure below shows the actual measurements (red, green and blue lines), and a rough fit. I used the same fitting function as in Gautam's elog here (with units of nV/sqrt(Hz)), with a = 1, b = 1e6, c = 2000. Since we are interested in measuring at lower frequencies, we must whiten the signal from the temperature sensors enough to have the ADC noise be negligible.
We want to be able to measure to a certain temperature accuracy at 1 Hz, which translates to a corresponding current accuracy from the AD590 (because it outputs 1 uA/K). Since we have a 10K resistor and V = IR, this sets the voltage accuracy we want to resolve. We would need whitening at lower frequencies to see such fluctuations.
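To put rough numbers on this (the 1 mK/rtHz target below is an assumed figure for illustration, not a settled requirement):

# Required voltage resolution at the ADC input for an assumed temperature target.
dT_target = 1e-3    # K/rtHz at 1 Hz (assumed, for illustration only)
ad590_gain = 1e-6   # A/K, AD590 output
R = 10e3            # ohm, readout resistor

dI = ad590_gain * dT_target   # ~1 nA/rtHz
dV = dI * R                   # ~10 uV/rtHz - compare against the measured ADC noise at 1 Hz
print(dV)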
To do the measurements, we put a BNC end cap on the channels we wanted to measure, then took measurements from 0-900Hz with a bandwidth of 0.001Hz. This setup is shown in the last two attachments. We used the ADC in 1X7.
I wanted to use the foton.py utility for my NB tool, and I remember Chris telling me that it was shipping as standard with the newer versions of gds. It wasn't available in the versions of gds available on our workstations - the default version is 2.15.1. So I downloaded gds-2.17.15 from http://software.ligo.org/lscsoft/source/, and installed it to /ligo/apps/linux-x86_64/gds-2.17.15/gds-2.17.15. In it, there is a file at GUI/foton/foton.py.in - this is the one I needed.
Turns out this was more complicated than I expected. Building the newer version of gds throws up a bunch of compilation errors. Chris had pointed me to some pre-built binaries for ubuntu12 on the llo cds wiki, but those versions of gds do not have foton.py. I am dropping this for now.
We probably want to get a dedicated machine that will handle the EPICS channel serving for the Acromag system
This is the machine that Larry suggested when I asked him for his opinion on a low workload rack-mount unit. It only has an atom processor, but I don't think it needs anything particularly powerful under the hood. He said that we will likely be able to let us borrow one of his for a couple days to see if it's up to the task. The dual ethernet is a nice touch, maybe we can keep the communication between the server and the DAQ units on their separate local network.
PSL shutter closed at 6e-6 Torr-it
The foreline pressure of the drypump is 850 mTorr at 8,446 hrs of seal life
V1 will be closed for ~20 minutes for drypump replacement..........
9:30am dry pump replaced, PSL shutter opened at 7.7E-6 Torr-it
Valve configuration: vacuum normal as TP3 is the forepump of the Maglev & the annuli are not pumped.
TP3 drypump replaced at 655 mTorr, no load, tp3 0.3A
This seal lasted only for 33 days at 123,840 hrs
The replacement is performing well: TP3 foreline pressure is 55 mTorr, no load, tp3 0.15A at 15 min [ 13.1 mTorr at d5 ]
Valve configuration: Vacuum Normal, ITcc 8.5E-6 Torr
Dry pump of TP3 replaced after 9.5 months of operation.[ 45 mTorr d3 ]
The annuli are pumped.
Valve configuration: vac normal, IFO pressure 4.5E-5 Torr [1.6E-5 Torr d3 ] on new ITcc gauge, RGA is not installed yet.
Note how fast the pressure is dropping when the vent is short.
IFO pressure 1.7E-4 Torr on new not logged cold cathode gauge. P1 <7E-4 Torr
Valve configuration: vac normal with the annuli closed off.
TP3 was turned off with a failing drypump. It will be replaced tomorrow.
All time stamps are blank on the MEDM screens.
Pianosa just crashed and ate my elog, along with all the DTT/Foton windows I had open, so more details tomorrow... This workstation has been crashing ~once a month for the last 6 months.
Below ~100Hz, the hypothesis is that the BS oplev A2L contribution dominates the MICH displacement noise. I wanted to see if I could mitigate this by modifying the BS Oplev loop shape.
I've been banging my head against optimal loop shaping, with the OL loop as a test-case, without much success - as was the case with coating PSO, the magic is in smartly defining the cost function, but right now, my optimizer seems to be pushing most of the roots I'm making available for it to place to high frequencies. I've got a term in there that is supposed to guard against this, need to tweak further...
Attachment #2: Eye-fits of measured OL A2L coupling TFs to a 1/f^2 shape, with the gain being the parameter "fitted". I used these values, and the DQ-ed OL error signal in lock, to estimate the red curve labelled "A2L" in Attachment #1. The dots are the measurement, and the lines are the 1/f^2 estimates.
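The bookkeeping that produces the "A2L" curve is essentially the following (a sketch, not the NB code itself; the fitted gains are the ones from Attachment #2):

import numpy as np

def a2l_displacement(f, oplev_asds, fitted_gains):
    # Each OL error signal ASD is multiplied by its eye-fitted coupling g/f^2,
    # and the contributions are quadrature-summed into the single "A2L" trace.
    contributions = [g / f**2 * asd for g, asd in zip(fitted_gains, oplev_asds)]
    return np.sqrt(np.sum(np.square(contributions), axis=0))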
If you go through this thread of elogs, there are lots of pictures of the SOS assembly with the optic in it from the vent last year. I think there are many different perspectives, close ups of the standoffs, and of the OSEMs in their holders in that thread.
This elog has a measurement of the pendulum resonance frequencies with ruby standoffs - although the ruby standoff used was cylindrical, and the newer generation will be in the shape of a prism. There is also a link in there to a document that tells you how to calculate the suspension resonance frequencies using analytic equations.
I've incorporated the functionality to generate sub-budgets for the various grouped traces in the NBs (e.g. the "A2L" trace is really the quadrature sum of the A2L coupling from 6 different angular servos).
For now, I'm only doing this for the A2L coupling, and the AUX length loop coupling groups. But I've set up the machinery in such a way that doing so for more groups is easy.
Here are the sub-budget plots for last night's lock - for the OL plot, there are only 3 lines (instead of 6) because I group the PIT and YAW contributions in the function that pulls the data from the nds server, and don't ever store these data series individually. This should be rectified, because part of the point of making these sub-budgets is to see if there is a particularly bad offender in a given group.
I'll do a quick OL loop noise budget for the ITM loops tomorrow.
I also wonder if it is necessary to measure the Oplev A2L coupling from lock to lock? This coupling will be dependent on the spot position on the optic, and though I run the dither alignment servos to minimize REFL_DC, AS_DC, I don't have any intuition for how the offset from the center of the optic varies from lock to lock, and whether it is at all significant. I've been using a number from a measurement made in May. Need to do some algebra...
I disabled the OL loops for ITMX, ITMY and BS at GPStime 1194897655 to come up with an Oplev noise budget. OL spots were reasonably well centered - by that, I mean that the PIT/YAW error signals were less than 20urad in absolute value.
Attachment #1 is a first look at the DTT spectra - I wonder why the BS Oplev signals don't agree with the ITMs at ~1Hz? Perhaps the calibration factor is off? The sensing noise is not really flat above 100Hz - I wonder what all those peaky features are. Recall that the ITM OLs have analog whitening filters before the ADC, but the BS doesn't...
In Attachment #2, I show a comparison of the error signal spectra for ITMY and SRM - they're on the same stack, but the SRM channels don't have analog whitening before the ADC.
For some reason, DTT won't let me save plots with latex in the axes labels...
I bet the calibration is out of date; probably we replaced the OL laser for the BS and didn't fix the cal numbers. You can use the fringe contrast of the simple Michelson to calibrate the OLs for the ITMs and BS.
I noticed yesterday evening that I wasn't able to engage the single arm locking servos - turned out that they weren't getting triggered, which in turn pointed me to the fact that the arm transmission channels seemed dead. Poking around a little, I found that there was a red light on the CDS overview screen for c1rfm.
Not sure how to debug further...
* Fix seems to be to restart the sender RFM models (c1scx, c1scy, c1asx, c1asy).
I calibrated the BS oplev PIT and YAW error signals as follows:
The numbers are:
BS Pitch: 15 / 130 (old / new) urad/count
BS Yaw: 14 / 170 (old / new) urad/count
I performed a test with the can last week with one layer of insulation to see how well it worked. First, I soldered two heaters together in series so that the total resistance was 48.6 ohms. I placed the heaters on the sides of the can and secured them. Then I wrapped the sides and top of the can in insulation and sealed the edges with tape, only leaving the handles open. I didn't insulate the bottom. I connected the two ends of the heater directly into the DC source and drove the current as high as possible (around 0.6A). I let the can heat up to a final value of 37.5C, turned off the current and manually measured the temperature, recording the time every half degree. I then plotted the results, along with a fit. The intersection of the red line with the data marks the time constant and the temperature at which we get the time constant. This came out to be about 1.6 hours, much longer than expected considering that only one layer instead of four was used. With only one layer, we would expect the time constant to be about 13 min, while for 4 layers it should be 53 min (the area A is 0.74 m^2 and not 2 m^2).
I made a model for our seismometer can using actual data so that we know approximately what the time constant should be when we test it out. I used the appendix in Megan Kelley's report to make a relation for the temperature in terms of time.
In our case, we will heat the can to a certain temperature and wait for it to cool on its own, so we only need to model the cooling.
We know that the heat flow through the insulation is dQ/dt = (k A / d) ΔT, where k is the k-factor of the insulation we are using, A is the area of the surface through which heat is flowing, ΔT is the temperature difference across the insulation, and d is the thickness of the insulation.
Equating this heat flow to the rate of change of the can's heat content gives m c T'(t) = -(k A / d) (T(t) - T_room).
We can guess the solution to be T(t) = C1 exp(-t/tau) + C2,
where tau is the time constant, which we would like to find.
The boundary conditions are T(0) = 40 C and T(t -> infinity) = T_room = 24 C. I assumed we would heat up the can to 40 Celsius while the room temp is about 24. Plugging this into our equations, C2 = 24 C and C1 = 16 C.
We can plug everything back into the expression for T'(t).
Equating the exponential terms on both sides, we can solve for tau: tau = m c d / (k A).
Plugging in the values that we have, m = 12.2 kg, c = 500 J/(kg*K) (stainless steel), d = 0.1 m, k = 0.26 W/(m^2*K), A = 2 m^2, we get that the time constant is 0.326 hr. I have attached the plot that I made using these values. I would expect to see something similar to this when I actually do the test.
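A quick check of this number (same values as above):

import numpy as np

# tau = m*c*d/(k*A) for exponential cooling T(t) = T_room + (T0 - T_room)*exp(-t/tau)
m, c = 12.2, 500.0        # kg, J/(kg*K) stainless steel
d, k, A = 0.1, 0.26, 2.0  # m, k-factor as quoted above, m^2

tau = m * c * d / (k * A)   # ~1170 s
print(tau / 3600)           # ~0.326 hr, as quoted

t = np.linspace(0, 5 * tau, 500)
T = 24 + (40 - 24) * np.exp(-t / tau)   # modelled cooling from 40 C to 24 C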
To set up the experiment, I removed the can (with Steve's help) and will place a few heating pads on the outside and wrap the whole thing in a few layers of insulation to make the total thickness 0.1m. Then, we will attach the heaters to a DC source and heat the can up to 40 Celsius. We will wait for it to cool on its own and monitor the temperature to create a plot and find the experimental time constant. Later, we can use the heating circuit we used for the PSL lab and modify the parts as needed to drive a few amps through the circuit. I calculated that we'd need about 6A to get the can to 50 Celsius using the setup we used previously, but we could drive a smaller current by using a higher heater resistance.
The numbers I have from the fitting don't agree very well with the OSEM readouts. Attachment #1 shows the Oplev pitch and yaw channels, and also the OSEM ones, while I swept the ASC_PIT offset. The output matrix is the "naive" one of (+1,+1,-1,-1). SUSPIT_IN1 reports ~30urad of motion, while SUSYAW_IN1 reports ~10urad of motion.
From the fits, the BS calibration factors were ~8x (pitch) and ~12x (yaw) larger than the old ones - so according to the Oplev channels, the applied sweep was ~80 urad in pitch, and ~7 urad in yaw.
Seems like either (i) neither the Oplev channels nor the OSEMs are well diagonalized and that their calibration is off by a factor of ~3 or (ii) there is some significant imbalance in the actuator gains of the BS coils...
Need to double check against OSEM readout during the sweep.
Per our discussions in the meetings over the last week, I've tried to put together a simple Oplev noise budget. The only two terms in this for now are the dark noise and a model for the seismic noise, and are plotted together with the measured open-loop error signal spectra.
For the OL NB, probably don't have to fudge any seismic noise, since that's a thing we want to suppress. More important is "what the noise would be if the suspended mirrors were not moving w.r.t. inertial space".
For that, we need to look at the data from the OL test setup that Steve is putting on the SP table.
Updated some values, most importantly, the k-factor. I had assumed that it was in the correct units already, but when converting it to 0.046 W/(m^2*K) from 0.26 BTU/(h*ft^2*F), I got the following plot. The time constant is still a bit larger than what we'd expect, but it's much better with these adjustments.
For our next steps, I will measure the time constant of the heater without any insulation and then decide how many layers of it we will need. I'll need to construct and calibrate a temperature sensor like the ones I've made before and use it to record the values more accurately.
For the insulation, I have decided to use this one (Buna-N/PVC Foam Insulation Sheets). We will need 3 of the 1 inch plain backing ones (9349K4) to wrap a few layers around it. I'll try two layers for now, since the insulation seems to be doing quite well according to initial testing.