Today we were looking at the MC TFs and pulled out the FSS box to measure it. We took photos and removed a capacitor with only one leg.
Still, we were unable to see the weird, flat TF from 0.1-1 MHz and the bump around 1 MHz. It's not in the FSS box or the IMC servo card. So we looked around for a rogue Pomona box and found one sneakily located between the IMC and FSS box, underneath some cables next to the Thorlabs HV driver for the NPRO.
It was meant to be a 14k:140k lead filter (with a high-frequency gain of unity) to give us more phase margin (see elog 4366; it's been there for 3.5 years).
From the comparison below, you can see what the effect of the filter was. Neither the red nor purple TFs are what we want, but at least we've tracked down where the bump comes from. Now we have to figure out why and what to do about it.
* all of the stuff above ~1-2 MHz seems to be some kind of pickup.
** notice how the elog is able to make thumbnails of PDFs now that it's not on Solaris!
I looked at the endtable for possible space to set up optics in order to couple the X end laser into a PM fiber.
Attached is a layout showing where the setup will go and which existing components will be moved.
Since we will not be doing any major locking, I am taking this chance to move things on the X end table and install the fiber coupler.
The first steering mirror shown in the earlier elog will be a Y1 (HR mirror) and the second one will be a beam sampler (similar to the one installed at the Y endtable for the fiber setup).
Doubler --> Y1 ---> Lens (f=12.5cm) ---> Beam sampler --->Fiber coupler
The fiber coupler mount will be installed in the green region to the right of the TRX camera.
This work will involve moving around the TRX camera and the optic that brings the trans image onto it.
Let me know if this work should not be done tomorrow morning for any reason.
In order to fix ELOG search, I have started running ELOG v2.9.2 on Nodus.
Sadly, due to changes in the software, we can no longer use one global write password. Instead, we must now operate with registered users.
Based on recent elog users, I'll be creating user accounts with the following names, using the same old ELOG write password. (These will be valid across all logbooks)
All of these users will be "Admins" as well, meaning they can add new users and change settings, using the "Config" link.
Let me know if I neglected to add someone, and sorry for the inconvenience.
RXA: What Eric means to say is that "upgrading" from Solaris to Linux broke the search and made us get a new elog software that's worse than what we had.
The IMC OL TF has been measured from 10 kHz to 10 MHz.
What we want is to have the high and low noise spectra on the same plot. The high noise one should be triggered by a high PC DRIVE signal.
I tried to find one of my own entries and was faced with some strange behavior of the elog.
The search button invoked the following link, but no real search was done:
If I ran the following link directly, it returned the correct search results. So something must be wrong.
The error spectra I have taken so far are not that informative, I'm afraid. The first three posted here refer to Wed 17 in the afternoon, when things were quiet, the LSC control was off and the MC was reliably locked. The last two plots refer to Wed night, while Q and I were doing some locking work; in particular, these were taken just after one of the locklosses described in elog 10814. Sadly, they aren't much different from the "quiet" ones.
I can add some considerations, though: Q and I saw some weird effects during that night using a live reading of such spectra, which unfortunately couldn't be saved. These effects were quite fast both in appearance and disappearance, and therefore difficult to capture with the snapshot measurement, which is the only one that can save data as of now. Moreover, these effects were certainly seen during the locklosses, but sometimes also in normal circumstances. What we saw was a broad peak in the range 5e4-1e5 Hz with a peak value of ~1e-5 V/rtHz, just after the main peak shown in the attached spectra.
I ssh'd in, and was able to run each script manually successfully. I ran the initctl commands, and they started up fine too.
We've seen this kind of behavior before, generally after reboots; see ELOGS 10247 and 10572.
Today Q moved the FSS slow servo over to some init thing on megatron, and some time ago he did the same thing to the MC auto locker script. It isn't working though.
Even though megatron was rebooted, neither script started up automatically. As Diego mentioned in elog 10823, we ran sudo initctl start MCautolocker and sudo initctl start FSSslow, and the blinky lights for both of the scripts started. However, that seems to be the only thing the scripts are doing. The MC autolocker is not detecting locklosses, and is not resetting things to allow the MC to relock. The MC is happy to lock if I do it by hand though. Similarly, the blinky light for the FSS is on, but the PSL temperature is moving a lot faster than normal. I expect that it will hit one of the rails in under an hour or so.
The MC autolocker and the FSS loop were both running earlier today, so maybe Q used some magic when he started them up that he didn't include in the elog instructions?
Everything seems reasonably back to normal:
The EPICS freeze that we had noticed a few weeks ago (and several times since) has happened again, but this time it has not come back on its own. It has been down for almost an hour so far.
So far, we have reset the Martian network's switch that is in the rack by the printer. We have also power cycled the NAT router. We have moved the NAT router from the old GC network switch to the new faster switch, and reset the Martian network's switch again after that.
We have reset the network switch that is in 1X6.
We have reset what we think is the DAQ network switch at the very top of 1X7.
So far, nothing is working. EPICS is still frozen, we can't ping any computers from the control room, and new terminal windows won't give you the prompt (so perhaps we aren't able to mount the nfs, which is required for the bashrc).
We need help please!
EricQ suggested it may be some NFS related issue: if something, maybe some computer in the control room, is asking too much of chiara, then all the other machines accessing chiara will slow down, and this could escalate and lead to the Big Bad Freeze. As a matter of fact, chiara's dmesg showed its eth0 interface being brought up constantly, as if something were making it go down repeatedly. Anyhow, after the shutdown of all the computers in the control room, chiara, megatron and the fb were rebooted.
Then I rebooted pianosa, and most of the issues seem gone so far; I had to "mxstream restart" all the frontends from medm, and every one of them but c1scy seems to behave properly. I will now bring the other machines back to life and see what happens next.
Given that op340m showed some undesired behavior, and that the FSS slow seems prone to railing lately, I've moved the FSS slow servo job over to megatron in the same way I did for the MC autolocker.
Namely, there is an upstart configuration (megatron:/etc/init/FSSslow.conf) that invokes the slow servo. The log file is in the same old place (/cvs/cds/caltech/logs/scripts), and the servo can be (re)started by running:
controls@megatron|~ > sudo initctl start FSSslow
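For reference, here is a minimal sketch of what such an upstart job could look like (the trigger conditions, user, and script path below are assumptions, not a copy of the real FSSslow.conf):

# /etc/init/FSSslow.conf -- hedged sketch, not the actual file
description "FSS slow servo"
# start once the usual runlevels are reached, stop on shutdown (assumed triggers)
start on runlevel [2345]
stop on runlevel [!2345]
# restart the job automatically if the servo dies
respawn
# run as the controls user; the script path here is hypothetical
setuid controls
exec /opt/rtcds/caltech/c1/scripts/PSL/FSS/FSSSlowServo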
Maybe this won't really change the behavior. We'll see.
I've set up nodus to start the ELOG on boot, through /etc/init/elog.conf. Also, thanks to this, we don't need to use the start-elog.csh script any more. We can now just do:
controls@nodus:~ $ sudo initctl restart elog
I also tweaked some of the ELOG settings, so that image thumbnails are produced at higher resolution and quality.
I swapped out one of the channels on Q's lockloss plotter - we don't need POP22Q, but I do want the PC drive.
So, we still need to look into why the PC drive goes crazy, and if it is related to the buildup in the arms or just something intrinsic in the current FSS setup, but it looks like that was the cause of the lockloss that Q and Diego had on Wednesday.
The elog was not responding for unknown reasons, even though the elogd process on nodus was alive; anyway, I restarted it.
I just stumbled upon this while poking around:
Since the great crash of June 2014, the scripts backup script has not been working on op340m. For some reason, it's only grabbing the PRFPMI folder, and nothing else.
Megatron seems to be able to run it. I've moved the job to megatron's crontab for now.
Since the Nodus switch, the offsite backup scripts (scripts/backup/rsync.backup) had not been running successfully. I tracked it down to the weird NFS file ownership issues we've been seeing since making Chiara the fileserver. Since the backup script uses rsync's "archive" mode, which preserves ownership, permissions, modification dates, etc, not seeing the proper ownership made everything wacky.
Despite 99% of the searches you do about this problem saying you just need to match your user's uid and gid on the NFS client and server, it turns out NFSv4 doesn't use this mechanism at all, opting instead for some ID mapping service (idmapd), which I have no inclination of figuring out at this time.
Thus, I've configured /etc/fstab on Nodus (and the control room machines) to use NFSv3 when mounting /cvs/cds. Now, all the file ownerships show up correctly, and the offsite backup of /cvs/cds is churning along happily.
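For reference, the relevant fstab entry looks something like this (the server name and export path here are assumptions; the important part is pinning the mount to vers=3):

# /etc/fstab on nodus -- sketch only, the actual export path may differ
chiara:/home/cds    /cvs/cds    nfs    rw,bg,vers=3    0    0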
Some locking efforts tonight; many locklosses due to PRC angular motion. Furthest progress was arm powers of 15, and I've stared at the corresponding lockloss plot, with little insight into what went wrong. (BTW, lastlock.sh seems to catch the lock loss reliably in the window)
CARM and DARM loops were measured not long before this lock loss, and had nominal UGFs (~120Hz, ~20deg PM). However, there was a reasonably clear 01 mode shape at the AS camera, which I did nothing to correct. Here's a spectrum from *just* before the lockloss, recovered via nds. Nothing stands out to me, other than a possible loss of DARM optical gain. (I believe the references are the error signal spectra taken in ALS arms held away + PRMI on 3F configuration)
The shape in the DARM OLTF that we had previously observed and hypothesized as possible DARM optical spring was not ever observed tonight. I didn't induce a DARM offset to try and look for it either, though.
Looking into some of the times when I was measuring OLTFs, the AS55 signals do show coherence with the live DARM error signal at the excitation frequencies, but little to no coherence under 30Hz, which probably means we weren't close enough to swap DARM error signals yet. This arm power regime is where the AS55 sign flip has been modeled to be...
A fair amount of time was spent in pre-locking prep, including:
I wonder what to do with the X arm.
The primary purpose of the ASS is to align the arm (=transmission), and the secondary purpose is to adjust the input pointing.
As the BS is the only steering actuator, we can't adjust two dof out of 8 dof.
In the old (my) topology, the spot position on ITMX was left unadjusted.
If my understanding of the latest configuration is correct, the alignment of the cavity (i.e., the matching of the input axis with the cavity axis) is deteriorated in order to move the cavity axis to the center of the two test masses. This is not what we want, as it causes a deterioration of the power recycling gain.
I made the Xarm follow the new (old) topology of Length -> test masses, and Trans -> input pointing.
It takes a really long time to converge (2+ min), since the input pointing loops actuate on the BS, which has an optical lever, which is slow. So, everything has to be super duper slow for the input pointing to be fast relative to the test mass motion.
Also, between last night and this afternoon, I moved the green ASX stuff from a long list of ezca commands to a burt file, so turning it on is much faster now. Also, I chose new frequencies to avoid intermodulation issues, set the lockin demodulation phases, and tuned all 4 loops. So, now the green ASX should work for all 4 mirrors, no hand tuning required. While I was working on it, I also removed the band pass filters, and made the low pass filters the same as we are using for the IR ASS. The servos converge in about 30 seconds.
I have completed all of the model modifications and medm screen updates to allow for feedback from the transmon QPD pitch and yaw signals to the ITMs. Now, we can design and test actual loops...
The signals come from c1sc[x/y] to c1rfm via RFM, and then go to c1ass via dolphin.
Out of curiosity about the RFM+dolphin delay, I took a TF of an excitation at the end SUS model (C1:SUS-ETM[X/Y]_QPD_[PIT/YAW]_EXC) to the input FM in the ASC model (C1:ASC-ETM[X/Y]_QPD_[PIT/YAW]_IN1). All four signals exhibit the same delay of 122usec. I saved the dtt file in Templates/ASC/transmonQPDdelay.xml
This is less than a degree under 20Hz, so we don't have to worry about it.
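As a sanity check on that statement: a pure time delay of tau = 122 usec corresponds to a phase lag of

phi(f) = 360 deg * f * tau = 360 deg * 20 Hz * 122e-6 s ~= 0.88 deg

at 20 Hz, i.e. just under a degree, consistent with the number quoted above.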
EricQ's crazy people filter has been deleted. I'm trying to lock right now, to see if all is well in the world.
However, the PRMI would not acquire lock with the arms held off resonance.
This is entirely my fault.
Last week, while doing some stuff with PRY, I put this filter in SUS_PRM_LSC, to stop some saturations from high frequency sensing noise
After the discussion at today's meeting, it struck me that I might have left it on. Turns out I did.
A 20 degree phase lag at 200 Hz can explain the instability, and some non-flat shape at a few hundred Hz explains the non-1/f shape.
Sorry about all that...
I was working around the PSL table and Y endtable today.
I modified the Y arm optical layout that couples the 1064nm light leaking from the SHG crystal into the fiber for frequency offset locking.
The ND filter that was used to attenuate the power coupled into the fiber has been replaced with a beam sampler (Thorlabs BSF-10C). The reflected power after this optic is ~1.3mW and the transmitted power has been dumped to a razor blade beam dump (~210mW).
Since we have a spare fiber running from the Y end to the PSL table, I installed an FC/APC fiber connector on the PSL table to connect them and monitored the output power at the Y end itself. After setting up, I have ~620uW of Y arm light on the PSL table (~48% coupling).
During the course of the alignment, I lowered the power of the Y end NPRO and disengaged the ETMY oplev. These were reset after I closed the end table.
Attached is the out of loop noise measurement of the Y arm ALS error signal before (ref plots) and after.
Did a big reconfig to make the Y-arm work again since it was bad again.
With the arm aligned and the A2L signals all zeroed, we centered the beam on QPDY (after freezing the ASS outputs). I saw the beam going to the QPD on an IR card, along with a host of green spots. It seems bad to have green beams hitting the QPD along with the IR, so we are asking Steve to buy a bunch of the broad dielectric bandpass filters from Thorlabs (FL1064-10), so that we can also be immune to the EXIT sign. I wonder if it's legal to make a baffle to block it on the bottom side?
P.S. Why is the Transmon QPD software different from the OL stuff? We should take the Kissel OL package and put it in place of our old OL junk as well as the Transmons.
Diego is going to give us some spectra of the MC error point at various levels of Pockels cell drive. Is it always the same frequencies that are popping up, or is it random?
I found out that the Spectrum Analyzer gives bogus data... Since it's locking time now, I'll go and figure out what is not working tomorrow.
Nodus (solaris) is dead, long live Nodus (ubuntu).
Diego and I are smoothing out the kinks as they appear, but the ELOG is running smoothly on our new machine.
SVN is working, but your checkouts may complain because they expect https, and we haven't turned SSL on yet...
SSL, https and backups are now working too!
A backup of nodus's configuration (with some explaining) will be done soon.
Nodus should be visible again from outside the Caltech Network; I added some basic configuration for postfix and smartmontools; configuration files and instructions for everything are in the svn in the nodus_config folder
[Jenne, Rana, Diego]
After deciding that the Yend QPD situation was not significant enough to prevent us from locking tonight, we got started. However, the PRMI would not acquire lock with the arms held off resonance.
This started some PRMI investigations.
With no arms, we can lock the PRMI with either REFL55 I&Q or REFL165 I&Q. We checked the demod phase for both REFL55 and REFL165. REFL55 did not need changing, but REFL165 was off significantly (which probably contributed to the difficulty in using it to acquire lock). I didn't write down what REFL165 was, but it is now -3 degrees. To set the phase (this is also how Rana checked the 55 phase), I put in an oscillation using the sensing matrix oscillators. For both REFL165I and 165Q, I set the sensing matrix demod phases such that all of the signal was in the I phase (so I_I and Q_I, and basically zero in I_Q and Q_Q). Then, I set the main PD demod phase so that the REFL165Q phase (the Q_I phase) was about zero.
Here are the recipes for PRMI-only, REFL55 and REFL165:
In both cases, the actuation was PRCL = 1*PRM and MICH = (0.5*BS - 0.2625*PRM). Trigger thresholds for the DoFs and FMs were always on POP22I, 10 up and 0.5 down.
REFL55, demod phase = 31deg.
MICH = 2*R55Q, gain = 2.4, trig FMs 2, 6, 8.
PRCL = 12*R55I, gain = -0.022, trig FMs 2,6,9.
REFL165, demod phase = -3deg.
MICH = -1*R165Q, gain = 2.4, trig FMs 2,6,8.
PRCL = 2.2*R165I, gain = -0.022, trig FMs 2,6,9.
These recipes assume Rana's new resonant gain filter for MICH's FM6, with only 2 resonant gains at 16 and 24 Hz instead of a whole mess of them: elog 10803. Also, we have turned down the waiting time between the MICH loop locking, and the filters coming on. It used to be a 5 second delay, but now is 2 sec. We have been using various delays for the PRCL filters, between 0.2s and 0.7s, with no particular preference in the end.
We compared the PRCL loop with both PDs, and note that the REFL 165 error signal has slightly more phase lag, although we do not yet know why. This means that if we only have a marginally stable PRCL loop for REFL55, we will not be stable with REFL165. Also, both loops have a non-1/f shape at a few hundred Hz. This bump is still there even if all filters except the acquisition ones (FM4,5 for both MICH and PRCL) are turned off, and all of the violin filters are turned off. I will try to model this to see where it comes from.
To Do list:
Go back to the QPDY situation during the daytime, to see if tapping various parts of the board makes the noise worse. Since it goes up to such high frequencies, it might not be just acoustic. Also, it's got to be in something common like the power or something, since we see the same spectra in all 4 quadrants.
The ASS needs to be re-tuned.
Rana was talking about perhaps opening up the ETMX chamber and wiggling the optic around in the wire. Apparently it's not too unusual for the wire to get a bit twisted underneath, which creates a set of places that the optic likes to go to.
This is ridiculous.
How many RGs can I fit into one button???
We manually realigned the BS and PRM optical levers on the optical table.
[Jenne, Rana, Diego]
We did some tests on the modified QPD board for the Yend; we saw some weird oscillations at high frequencies, so we went and checked more closely, directly at the rack. The oscillations disappear when the cable from the QPD is disconnected, so it seems something is happening within the board itself; however, looking closely at the board with an oscilloscope in several locations, with the QPD cable connected or disconnected, there is nothing strange and definitely nothing changing whether the cable is connected or not. The plots show the usual channels we monitor, and the 64kHz original channels before they are downsampled.
Overall it doesn't seem to be a huge factor, as the RMS at high frequencies shows; it may be some random noise coming up, but in any case this will be investigated further in the future.
Found that the PMC gain has been set to 5.3 dB instead of 10 dB since 9 AM this morning, with no elog entry.
I also re-aligned the beam into the PMC to minimize the reflection. It was almost all in pitch.
Details later - empty entry for a reply.
Short story - the Yend is now the same as the Xend, both filters-wise and lack-of-gain-sliders-wise. Both ends have 13.7k resistors around the AD620 to give them gains of ~4.5 (see the note below).
Xend seems fine.
Yend seems not fine. Even the dark noise spectrum sees giganto peaks. See Diego's elog 10801 for details on this investigation.
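For reference on the 13.7k number mentioned above: the AD620 gain is set by the resistor across its gain pins as G = 1 + 49.4k/R_G, so

G = 1 + 49.4k/13.7k ~= 4.6

which is consistent with the ~4.5 quoted.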
Yesterday, we were seeing anomalously high low-frequency RIN in the y-arm (RMS of 4% or so). I swung by the lab briefly to check this out. It turns out that, despite TRY of 1.0, there was reasonable misalignment. Running the ASS with the excitation lowered by a factor of two and an overall gain of 0.5 or so aligned things to TRY = 1.2, and the RIN is back down to ~0.5%. I reset the Thorlabs FM to make the power = 1.0.
I then went to center the transmitted beam on the transmon QPD. Looking at the quadrant counts as I moved the beam around, things looked odd, and I poked around a little...
I strongly suspect that we have significantly mismatched gains for the different quadrants on the ETMY QPD.
Reasoning: With the y-arm POY locked, I used a lens to focus down the TRY beam, to illuminate the quadrants individually. Quadrants 2 and 3 would go up to 3 counts, while 1 and 4 would go up to 0.3 and 0.6, respectively. (These counts are in some arbitrary units that were set by setting the sum to 1.0 when pitch and yaw claimed to be centered, but mismatched gains makes that meaningless.)
I haven't looked more deeply into where the mismatch is occurring. The four individual whitening gain sliders did affect the signals, so the sliders don't seem sticky; however, I didn't check the actual change in gains. Will the latest round of whitening board modifications help this?
Hopefully, once this is resolved, the DC transmission signals will be much more reliable when locking...
16 bit. There aren't any 14-bit ADCs anywhere in LIGO. The aLIGO suspensions have 18-bit DACs.
The Y-End gains seem reasonable to me. I think that we only use TRX/Y as error signals once we have arm powers of >5 so we should consider if the SNR is good enough at that point; i.e. what would be the actual arm motion if we are limited only by the dark noise?
I seem to remember that the estimate for the ultimate arm power is ~200, considering that we have such high losses in the arms.
Okay, I have finished modifying the Xend QPD whitening board, although I will likely need to change the gain on Monday.
Rather than following my plan in elog 10782, I removed the AD602's entirely, and just use the AD620's as the amplifiers. We don't need remotely adjustable gains, and the AD620s are a less noisy part.
I set the gain to be 30dB using a 1.65k resistor for R_G, which turns out to be too high. After I installed the board and saw that my counts were much higher than they used to be, I realized that what we had been calling +30dB was in fact +13.2dB. (I am assuming that the DAC for the gain sliders was putting out a maximum of +10V. The AD620 used to have a 1/10 voltage divider at the input and an overall gain of 1, so the output of the AD620 was 100mV. This goes into pin 16 of the AD602, which has a gain in dB of 32*V_set + 10, which gives 32*0.1 + 10 = 13.2dB. Oops. We've been lying to ourselves.)
Anyhow, before I made the gain realization, I was happily going along, setting the AD620s' gains all to 30dB. I also copied Koji's modification from April of this year, and permanently enabled the whitening filters.
Here is the schematic of what ended up happening. The red modifications were already in place, and the greens are what I did today.
You can see the "before" picture in my elog Wednesday, elog 10774. Here is an "after" photo:
Here is a spectrum comparing the dark noise of the Xend QPD after modification to the current Yend QPD (which is still using the AD602 as the main instrumentation amplifier). I have given the Yend data an extra 16.8dB to make things match.
And, here is a set of spectra comparing both ends, dark noise versus single arm lock. While I'll have to sacrifice a lot of it, there's oodles more SNR in the Xend now. The Yend data still has the "gain fixing" extra 16.8dB.
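(Presumably that 16.8dB is just the difference between the new Xend AD620 gain and the old effective gain: 30dB - 13.2dB = 16.8dB.)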
The Xend quadrant input counts (before the de-whitening filters) now go up to peak values of about 1,000 at single arm lock. If (optimistically) we got full power recycling and the arms got to powers of 300, that would mean we would have 300,000 counts, which is obviously way more than we actually have ADC range for. Currently, the Yend quadrant input counts go as high as 50, which with arm powers of 300 would give 15,000 counts. I think I need to bring the Xend gain down to about the level of the Yend, so that we don't saturate at full arm powers. I can't remember right now - are the ends 14-bit or 16-bit ADCs? If they're 16-bit, then I can set the gain somewhere between the current X and Y values.
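For scale: a 16-bit ADC spans +/-2^15 = +/-32768 counts, so 300,000 counts would saturate by nearly a factor of 10, while 15,000 counts would leave roughly a factor of two of headroom.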
Finally, I added a section to the 40m's DCC document tree for the QPD whitening: E1400473, with a page for each end. Xend = D1400414, Yend = D1400415.
We ran a Cat 6+ Ethernet cable from the 1X7 rack (where the new nodus is located) to the fast GC switch in the control room rack; now I will learn how to setup the 'outside world' network, iptables, and the like.
As a reminder, the current hardware/software status is posted in elog 10697; if additions or corrections are needed, let me know.
After I check a couple of things, we can use the new nodus (which is currently known on the martian network as rosalba) as a local test to see that everything is working. After that (and, mostly, after I have the network working), we will sync the data from the old nodus to the new one and make the switch.
Update: work is almost completed; the old nodus is still online, as I don't feel confident making the switch and leaving it on its own for the weekend. However, the new nodus is online with the IP address 184.108.40.206, so everyone can check that everything works. From my tests I can say that:
Once everything is in place, I will save every reasonably important configuration file from nodus into the svn.
As a reminder, every change made while accessing the 220.127.116.11 machine will be purged during the sync & switch.
Unfortunately the order placed for beam samplers last week did not go through. These will be used at the X and Y end tables to dump the unwanted light appropriately. Since they will not be here until Tuesday, I revised the timeline for FOL related activities accordingly.
I was working on the PSL table today.
Since the rejected 1064nm light after the SHG crystal is not easily accessible for measuring beam widths close to its waist, I put in a lens (f=300mm) and measured the beam size around its focus. I used this data to redesign the telescope using 'a la mode'.
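As a rough illustration of the waist-fitting step, here is a generic Gaussian-beam fit (this is not the actual 'a la mode' code, and the measurement numbers in it are made up):

# Sketch: fit measured beam radii w(z) to a Gaussian beam profile to get the
# waist size w0 and waist location z0, for 1064nm light.
import numpy as np
from scipy.optimize import curve_fit

lam = 1064e-9  # wavelength [m]

def w_of_z(z, w0, z0):
    zR = np.pi * w0**2 / lam              # Rayleigh range
    return w0 * np.sqrt(1 + ((z - z0) / zR)**2)

# example data: distance from the f=300mm lens [m] and measured beam radii [m]
z = np.array([0.20, 0.25, 0.30, 0.35, 0.40])
w = np.array([260e-6, 205e-6, 180e-6, 205e-6, 260e-6])

(w0, z0), _ = curve_fit(w_of_z, z, w, p0=[200e-6, 0.3])
print("waist w0 = %.1f um at z = %.3f m" % (w0 * 1e6, z0))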
I used a beam splitter to attenuate the beam directed towards the fiber. The reflected beam from the BS has been dumped (I need to find a better beam dump than what is being used right now).
I have only ~200uW at the input of the fiber coupler after the BS and 86uW at the output of the fiber (43% coupling).
I moved the GTRY DC photodiode and the lens in front of it to make space for the fiber coupler mount.
The layout on the PSL table right now is as shown below.
I have also put the fiber chassis inside the PSL enclosure on the rack. I moved the coherent spectrum analyser controller that is not being used to make space on the rack.
The first real rain of this year finds only one leak at the 40m
We decided that tonight was the night for ASS tuning.
We started by choosing new frequencies, looking at the spectra of the transmission and the servo control signals to find areas that weren't too full of peaks. We chose to be above the OpLev UGF by at least a factor of ~2, so our lowest frequency is about 18Hz. This way, even if the oplevs are retuned, or the gains are increased, the ASS should still function.
We set the peak heights for the lowest frequency of each arm to have good SNR, and then calculated what the amplitudes of the higher frequencies ought to be, such that the mirrors are moving about the same amount in all directions.
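(Assuming the dither is applied to the suspensions well above their ~1 Hz pendulum resonances, the angular response to a fixed drive falls off as 1/f^2, so equal mirror motion at each frequency requires scaling the oscillator amplitudes as (f/f_lowest)^2 relative to the lowest one.)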
We re-did the low pass filters, and eliminated the band pass filters in the demodulation part of the servo. The band passes aren't strictly necessary, as long as you have adequate lowpassing, so we have turned them off, which gives us the freedom to change excitation frequencies at will. We modified the lowpass filter so that we had more attenuation at 2Hz, since we spaced our excitation frequencies at least ~2.5 Hz apart.
The same lowpass filter is in every single demodulator filter bank (I's and Q's, for both length and transmission demodulation). We are getting the gain hierarchy just by setting the servo gains appropriately.
We ran ezcaservos to set the demodulation phase of each lockin, to minimize the Q-phase signal.
We then tuned up the gains of the servos. Rana did the Y arm, but for the X arm I tried to find the gains where the servos went unstable, and then reduced the gain by a factor of 2. The Xarm is having trouble getting good alignment if you start with something less than about 0.7, so there is room for improvement.
Rana wrote a little shell script that will save the burt snapshot, in case the gains need adjusting and should be re-saved.
The scripts have been modified (just with the new oscillator amplitudes - everything else is in the burt snapshots), so you should be able to run the "start from nothing" and "start from frozen" scripts for both arms. However, please watch them just in case, to make sure they don't run away.
I forgot to elog this earlier. I have temporarily removed the DC photodiode for GTRY to install the fiber holder on the PSL table. So GTRY will not be seeing anything right now.
After some confusion, I discovered this a few hours ago.
I assembled the telescope to couple PSL light into the fiber. The maximum coupling that I could obtain was 10mW out of 65mW (~15%).
I was expecting to achieve 80-90% coupling from my design estimates. It makes me wonder if the beam waist measurements made by Harry during summer were correct in the first place. I would like to go back and check the beam waist at the PSL table.
Also, we need a pair of 8m (~25 feet) long SMA cables to carry the RF signal from the beat PD on the PSL table to frequency counter module on the IOO rack.
Steve says that we had a spool of SMA cable and it was borrowed by someone a few months ago. Any updates on either who is holding it or if it has been used up already would help.
The X end slow computer was down this morning. So I used only the Y arm ALS to record the noise level for reference. DTT data for ALSY out of loop noise before opening PSL enclosure is saved in /users/manasa/data/141211/ALSYoutLoop.xml
All of the QPDX matrix fields had a missing underscore in them. So I committed all of the c1asc screens to the SVN (because no one but me and Jamie seems to be able to remember to do this).
Then I did find/replace on the QPDY screen and saved it over the QPDX screen and committed the new thing to SVN as well. Values are now accessible.
- It was not clear how the whitening gains were being set.
- The corresponding database entry was found in /cvs/cds/caltech/target/c1auxey/ETMYaux.db as
- The gains for S2-S4 were set to be 30. However, C1:ASC-QPDY_S1WhiteGain was set to be 8.62068.
And it was not writable.
- After some investigation, it was found that the database was wrong. The DAC channel was changed from S100 to S0.
The corrected entry is shown here.
field(DESC,"Whitening gain for QPDY Seg 1")
field(OUT,"#C0 S0 @")
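For context, a sketch of what the full corrected record might look like (the record type and braces are assumptions; only the DESC and OUT fields are from the actual database entry, and the OUT field is the part that changed from S100 to S0):

# hedged sketch -- not copied verbatim from ETMYaux.db
record(ao, "C1:ASC-QPDY_S1WhiteGain")
{
    field(DESC, "Whitening gain for QPDY Seg 1")
    field(OUT,  "#C0 S0 @")
}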
- Once c1auxey was rebooted, the S1 whitening gain became writable. All of the channels are now set to +30dB (max).
This exact situation was happening at ETMX. I did the exact same change to the database, now I can read and write all four gain segments.
The other day, I hooked up the Agilent analyzer to OUT2 of the MC board, which is currently set to output the MC refl error signal. I've written a GPIB-based program that continuously polls the analyzer and plots the live spectrum, an exponentially weighted running mean, and the first measured spectrum.
The intended use case is to see if the FSS or MC loops are going crazy when we're locking. Sometimes the GPIB interface hangs/loses its connection, and the script needs a restart.
The script lives in scripts/MC/MCerrmon
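The live-averaging part of the script is just an exponentially weighted running mean; here is a minimal sketch of that idea (the read_spectrum() function standing in for the GPIB query is hypothetical, not the real script's interface):

# Sketch: keep a live trace, an exponentially weighted running mean, and the
# first measured spectrum, as described above.
import numpy as np

def read_spectrum():
    # Hypothetical stand-in for the GPIB query to the analyzer; returns fake
    # data here so the sketch runs on its own.
    f = np.linspace(1e3, 1e5, 801)
    mag = 1e-6 + 1e-7 * np.abs(np.random.randn(f.size))
    return f, mag

alpha = 0.1            # weight given to the newest trace
running_mean = None
first = None

for _ in range(10):    # the real script loops until interrupted
    f, mag = read_spectrum()
    if first is None:
        first = mag.copy()
        running_mean = mag.copy()
    else:
        running_mean = alpha * mag + (1 - alpha) * running_mean
    # (plotting of mag, running_mean, and first vs. f would go here)

print("mean level: %.2e" % running_mean.mean())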
With some advice from Jamie, I've gotten the lock loss plotting script that is used at LHO working on our machines. The other night, I modified the ALSwatch.py script to log lockloss times. Tying it together, I've written a small wrapper script that grabs the last time from the lockloss log, and plots it.
It is: scripts/LSC/LocklossData/lastlock.sh
Jamie's going to make an adjustment to the pydv codebase that will let me implement the auto y-scaling that we like. We also will need to get a feel for the right timing window, once we see what kind of delay in the ALSwatch script is typical.
Here's an example of the output, with the window of [-10,+2] seconds from the logged GPS time: