Steve asked about calibrating the QPD, so I set up some new epics records so that we can have calibrated versions of the QPD output.
The new channels are called C1:ASC-TESTQPD_Y_Calc and C1:ASC-TESTQPD_X_Calc for pitch and yaw, respectively.
* I modified /cvs/cds/caltech/target/c1iscaux/QPD.db to add 2 new channels. Since we are currently plugged into the IPPOS channels, I didn't want to modify the units of IPPOS, which is why I created new channels. The new channels are just the IPPOS normalized X and Y channels, multiplied by a calibration factor. Steve has already done a rough calibration for his setup, so I used those numbers (0.15 urad/ct for pitch and 0.25 urad/ct for yaw).
* Rebooted c1iscaux. This required adding it to chiara's /etc/hosts file.
* Added the channels to the /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini file so that the channels would appear in dataviewer.
* Restarted the framebuilder daqd process.
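For reference, the arithmetic the two new calc records implement is just a multiplication of the normalized QPD count by the rough calibration factor. A minimal Python sketch (not the IOC code itself; the factors are the rough numbers quoted above):

```python
# Sketch of the conversion the new _Calc records perform:
# calibrated output [urad] = normalized QPD count * calibration factor.
PITCH_CAL_URAD_PER_CT = 0.15  # C1:ASC-TESTQPD_Y_Calc
YAW_CAL_URAD_PER_CT = 0.25    # C1:ASC-TESTQPD_X_Calc

def calibrate(normalized_counts, urad_per_ct):
    """Convert a normalized QPD count into microradians."""
    return normalized_counts * urad_per_ct
```

So 100 normalized counts of pitch comes out as ~15 urad.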
How to modify the calibration:
1) On a control room workstation, cd /cvs/cds/caltech/target/c1iscaux to get to the right folder. (Note that this is still in the old cvs/cds place, *not* the new opt/rtcds place)
2) Open the epics database file. Since this is a protected file, you need to use the "sudo" command, and will have to type in the usual controls password:
sudo emacs QPD.db
3) Find the "records" that have the channel names C1:ASC-TESTQPD_Y_Calc and C1:ASC-TESTQPD_X_Calc by scrolling down. (Right now they are on lines #550 and #561 of the text file).
4) For each of these 2 records, modify the calibration in the line that says something like field(CALC,"(A*0.25)"). In this example, the current calibration is 0.25 urad/oldCount. Change the number to the new value.
5) Save the file. If you followed step 2 and are using emacs without a mouse: hold down the "ctrl" key, press "x", then (still holding ctrl) press "s" (i.e. Ctrl-x Ctrl-s).
6) Close the file. If you followed step 2 and are using emacs without a mouse: hold down the "ctrl" key, press "x", then (still holding ctrl) press "c" (i.e. Ctrl-x Ctrl-c).
7) Reboot the slow computer called c1iscaux. You should be able to do this remotely by typing telnet c1iscaux, and then typing reboot. If that doesn't work, you may have to go into the IFO room and power cycle the crate by turning the key. This computer is in 1Y3, near the bottom.
8) Check that you can see your channels - you should be finished now!
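If you just want to check the current calibration without opening an editor, a short script can pull the CALC fields out of the database file. This is a hedged sketch: the record/field syntax below matches the example in step 4, but the real file's layout may differ slightly.

```python
import re

# Example text in the shape of the QPD.db records from steps 3-4.
SAMPLE_DB = '''
record(calc, "C1:ASC-TESTQPD_Y_Calc")
{
    field(CALC,"(A*0.15)")
}
record(calc, "C1:ASC-TESTQPD_X_Calc")
{
    field(CALC,"(A*0.25)")
}
'''

def find_calibrations(db_text):
    """Return {record name: CALC expression} for the _Calc records."""
    pattern = re.compile(
        r'record\(calc,\s*"(C1:ASC-TESTQPD_[XY]_Calc)"\)\s*\{\s*'
        r'field\(CALC,\s*"([^"]+)"\)', re.S)
    return {name: expr for name, expr in pattern.findall(db_text)}

print(find_calibrations(SAMPLE_DB))
```

Run it against /cvs/cds/caltech/target/c1iscaux/QPD.db instead of the sample string to see the live values.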
For steps 3 and 4, here is a screenshot of the lines in the text file:
TEST QPD sn 222 was calibrated with 1103P directly looking into it from 1 m. ND2 filter was on the qpd.
2005 ALL oplev servos use Coherent DIODE LASERS # 31-0425-000, 670 nm, 1 mW
Sep. 28, 2006 optical lever noise budget with DC readout in 40m, LIGO- T060234-00-R, Reinecke & Rana
May 22, 2007 BS, SRM & PRM He Ne 1103P takes over from diode
May 29, 2007 low RIN He Ne JDSU 1103P selected, 5 purchased sn: T8078254, T8078256, T8078257, T8078258 & T8077178 in Sep. 2007
Nov 30, 2007 Uniphase 1103P divergence measured
Nov. 30, 2007 ETMX old Uniphase 1103P from 2002 dies; running time not known, ~3-5 years?
May 19, 2008 ETMY old Uniphase 1103P from 1999 dies; running time not known, ~?
Oct. 2, 2008 ITMX & ITMY are still diodes, meaning the others were converted to 1103P earlier
JDSU 1103P were replaced as follows:
May 11, 2011 ETMX replaced, life time 1,258 days or 3.4 years
May 13, 2014 ETMX , LT 1,098 days or 3 y
May 22, 2012 ETMY, LT 1,464 days or 4 y
Oct. 5, 2011 BS & PRM, LT 4 years, laser in place at 1,037 days or 2.8 y
Sep. 13, 2011 ITMY old 1103P & SRM diode laser replaced by 1125P; old HeNe lifetime not known, 1125P in place 1,059 days or 2.9 y
June 26, 2013 ITMX, 622 days or 1.7 y; note: we changed because of beam quality; laser in place 420 days or 1.2 y
Sep. 27, 2013 purchased 3 JDSU 1103P lasers, sn: P893516, P893518, P893519; 2 spares (also 2 spares of 1125P, 5 mW & larger body)
May 13, 2014 ETMX, laser in place 90 d
Oct. 7, 2013 ETMY, LT 503 d or 1.4 y; bad beam quality?
Aug. 8, 2014 ETMY, laser in place 425 days or 1.2 y
Sept. 5, 2014 new 1103P, sn P893516 installed at SP table for aLIGO oplev use qualification
The room temp drops 1 degree C on the 4th day. The weather has changed.
ITMY in vac table needs leveling.
ETMX is misbehaving again. I went to go squish his cable at the rack and at the satellite box, but it still happened at least once.
Anecdotally and without science, it seems to happen when ETMX is being asked to move a "big" amount. If I move the sliders too quickly (steps of 1e-3, but holding down the arrow key for about 1 second) or if I offload the ASS outputs when they're too large (above 10ish?), ETMX jumps so that it's about 50 urad off in yaw according to the oplev (sometimes right, more often left), and either 0 or 50urad off in pitch (up if right in yaw, down if left in yaw).
So far, by-hand slowly offloading the ASS outputs using the sliders seems to keep it happy.
I would ask if this is some DAC bit flipping or something, but it's happening for outputs through both the fast front ends (ASS offloading) and the slow computers (sliders moved too fast). So. I don't know what it could be, except the usual cable jiggling out issue.
Anyhow, annoying, but not a show stopper.
Okay, now ETMX's badness is a show-stopper. I'm not sure why, but after this last lockloss, ETMX won't stay put. Right now (as opposed to earlier tonight) it seems to only be happening when I enable LSC pushing on the SUS. ETMX is happy to sit and stay locked on TEM00 green while I write this entry, but if I go and try to turn on the LSC it'll be wacky again. Daytime work.
Anyhow, this is too bad, since I was feelin' pretty good about transitioning DARM over to AS55.
I had a line on (50 counts at 503.1 Hz pushing differentially on the ETMs), and could clearly see the sign flip happen in normalized AS55Q between arm powers of 4 and 6. The line also told me that I needed a matrix element of negative a few x10^-4 in the AS55Q -> DARM spot. Unfortunately, I was missing a zero (so I was making my matrix element too big by a factor of 10) in my ezcastep line, so both times I tried to transition I lost lock.
So. I think that we should put values of 0.5 into the power normalization for our test case (I was using SRCL_IN1 as my tester) since that's the approximate value that the DCtrans uses, and see what size AS55Q matrix element DARM wants tomorrow (tonight was 1.6-3 x 10^-4, but with 1's in the normalization matrix). I feel positive about us getting over to AS55.
Also, Q is (I assume) going to work some more tomorrow on PRMI->REFL165, and Diego is going to re-test his new IR resonance finding script. Manasa, if you're not swamped with other stuff, can you please see if you can have a look at ETMX? Maybe don't change any settings, but see what things being turned on makes ETMX crazy (if it's still happening in the morning).
I looked at what are the situations that make ETMX lose alignment.
This did not occur all that often this morning; fewer than 10 times in maybe the last 4 hours of poking the X arm. I found that the bad behavior of ETMX also exists in certain other cases apart from the case when we enable LSC.
(I) Even the MISALIGN and RESTORE scripts for the suspensions make the suspension behave badly. The RESTORE script, while in the process of bringing the suspension back to where it was, sometimes kicks it somewhere else (even with LSC disabled).
(II) The suspension also gets kicked while realigning ETMX manually using the sliders at 10^-3 steps (a pace of 2-3 steps at a time).
I suspect something is wrong right at the coil inputs and gains of the suspension.
Also, I recollect that we haven't done a check on the X arm LSC limiters and filters ramping times like it was done for the Y arm ( Elog 9877 ). We should do this check to be sure that we are not seeing a mixed puddle of problems from 2 sources.
PRM sus damping recovered and PMC locked.
We copied the new SRM filters over onto the OL banks for the ITMs and ETMs. We then adjusted the gain to be 3x lower than the gain at which it has a high frequency oscillation. This is the same recipe used for the SRM OL tuning.
Before this tune up, we also set the damping gains of the 4 arm cavity mirrors to give step response Q's of ~5 for all DOF and ~7-10 for SIDE.
PRM, SRM and the ENDs are kicking up. Computers are down. PMC slider is stuck at low voltage.
We noticed last night that the yarm couldn't handle the old nominal gain for the ASS servos. We were able to run the ASS using about 0.3 in the overall gain. So, I have reduced the gain in each of the individual servos by a factor of 3, so that the scripts work, and can set the overall gain to 1.
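The rescaling is trivial, but for the record, something like the sketch below is all that was done (divide each individual servo gain by 3 so the overall slider can go back to 1; the channel names here are placeholders, not the real C1:ASS ones):

```python
# Sketch of the ASS gain renormalization described above.  Each individual
# servo gain is divided by `factor` so the overall gain can be set to 1.
def rescale_gains(gains, factor=3.0):
    """Return a copy of the gain dict with every servo gain divided by factor."""
    return {chan: g / factor for chan, g in gains.items()}

before = {"YARM_PIT1_GAIN": 3.0, "YARM_YAW1_GAIN": -3.0}
after = rescale_gains(before)  # {"YARM_PIT1_GAIN": 1.0, "YARM_YAW1_GAIN": -1.0}
```

In practice the writes would go through ezca to the individual servo gain channels.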
EDIT: some images look bad on the elog, and the notebook is parsed, which is bad. Almost everything posted here is in the compressed file attachment.
As we've been discussing, we want to reduce the laser's jitter effect on the QPDs of the OpLevs, without losing sensitivity to angular motion of the mirror; the current setup is roughly described in this picture:
The idea is to place an additional lens (or lenses) between the mirror and the QPD, as shown in the proposed setup in this picture:
I did some ray tracing calculations to find out how the system would change with the addition of the lens. The step-by-step calculations are done at the several points shown in the pictures, but here I will just summarize. I chose to put the telescope at a variable relative distance x from the QPD, such that x=0 at the QPD, and x=1 at the mirror.
Here are the components that I used in the calculations:
I used a 3x3 matrix formalism in order to simplify the calculations and reduce everything to matrix multiplications; that's because the tilted mirror has an annoying additive term, which I could thereby get rid of:
Therefore, in the results the third line is a dummy line and has no meaning.
For the first case (first schematic), we have, for the final r and Theta seen at the QPD:
In the second case, we have a quite heavy output, which also depends on x and f:
Now, some plots to help understand the situation.
What we want is to reduce the angular effect on the laser displacement, without sacrificing the sensitivity to the mirror signal. I defined two quantities:
Beta is the laser jitter we want to reduce, while Gamma is the mirror signal we don't want to lose. I plotted both of them as a function of the position x of the new lens, for a range of focal lengths f. I used d1 = d2 = 2m, which should be a realistic value for the 40m's OpLevs.
Plot of Beta
Plot of Gamma
Even if it is a bit cluttered, it is useful to see both on the same plot:
Plot of Beta & Gamma
Apart from any kind of horrific mistakes that I may have made in my calculations, it seems that for converging lenses our signal Gamma is always reduced more than the jitter we want to suppress. For diverging lenses, the opposite happens, but we would have to put the lens very near the mirror, which is somehow not what I would expect. Negative values of Beta and Gamma should mean that the final values at the QPD are on the opposite side of the axis/center of symmetry of the QPD with respect to their initial position.
I will stare at the plots and calculations a bit more, and try to figure out if I missed something obvious. The Mathematica notebook is attached.
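To make the bookkeeping concrete, here is a small numerical sketch (Python/numpy, not the Mathematica notebook) of the augmented 3x3 formalism with a (r, Theta, 1) state vector, so the mirror's additive 2*Alpha term becomes a matrix entry. d1 = d2 = 2 m as above, and x runs from 0 (at the QPD) to 1 (at the mirror). Sign and placement conventions here may differ from the notebook's.

```python
import numpy as np

def space(d):
    """Free-space propagation over distance d."""
    return np.array([[1.0, d, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

def lens(f):
    """Thin lens of focal length f."""
    return np.array([[1.0, 0.0, 0.0], [-1.0 / f, 1.0, 0.0], [0.0, 0.0, 1.0]])

def mirror(alpha=1.0):
    """Mirror tilted by alpha: adds 2*alpha to the beam angle.
    With alpha=1 the constant column directly carries the tilt coefficient."""
    return np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 2.0 * alpha], [0.0, 0.0, 1.0]])

def coefficients(x, f, d1=2.0, d2=2.0):
    """Coefficients of laser jitter (Theta) and mirror tilt (Alpha) in the
    displacement r' at the QPD, with a lens of focal length f at relative
    position x along the mirror->QPD path."""
    M = space(x * d2) @ lens(f) @ space((1 - x) * d2) @ mirror() @ space(d1)
    return M[0, 1], M[0, 2]

def beta_gamma(x, f, d1=2.0, d2=2.0):
    """Beta and Gamma: the lensed coefficients normalized by the no-lens
    case, where r' = r + (d1 + d2)*Theta + 2*d2*Alpha."""
    ct, ca = coefficients(x, f, d1, d2)
    return ct / (d1 + d2), ca / (2.0 * d2)
```

A handy sanity check: as f goes to infinity the lens disappears and both Beta and Gamma go to 1.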
I stared a bit longer at the plots and, thanks to Eric's feedback, I noticed I paid too much attention to the comparison between Beta and Gamma and not enough attention to the fact that Beta has some zero-crossings...
I made new plots, focusing on this fact and using some real values for the focal lengths; some of them are still a bit extreme, but I wanted to plot also the zero-crossings for high values of x, to see if they make sense.
If we are not interested in the sign of our signals/noises (apart from knowing what it is), it is maybe more clear to see regions of interest by plotting Beta and Gamma in absolute value:
I don't know if putting the telescope far from the QPD and near the mirror has some disadvantage, but that is the region with the most benefit, according to these plots.
The plots shown so far only consider the coefficients of the various terms; this makes sense if we want to exploit the zero-crossing of Beta's coefficient and see how things work, but the real noise and signal values also depend on Alpha and Theta themselves. Therefore I made another kind of plot, where I put the ratio r'(Alpha)/r'(Theta) and called it Tau. This may be, in a very rough way, an estimate of our "S/N" ratio, as Alpha is the tilt of the mirror and Theta is the laser jitter. In order to plot this quantity, I had to introduce the laser parameters r and Theta (taken from the Edmund Optics 1103P datasheet), and also estimate a mean value for Alpha; I used Alpha = 200 urad. In these plots, the contribution of r'(r) is not considered, because it doesn't change when adding the telescope and is overall small.
In these plots the dashed line is the No Telescope case (as there is no variable quantity), and after the general plot I made two zoomed subplots for positive and negative focal lengths.
If these plots can be trusted as meaningful, they show that for negative focal lengths our tentative "S/N" ratio is always decreasing, which, given the plots shown before, makes little sense: although for these negative f Gamma never crosses zero, Beta surely does, so I would expect one singular value for each.
ETMX sus damping restored and PMC locked manually.
Earlier this afternoon, while locking PRMI, I saw a big peak at 1883.48 Hz. This comes closest to the PRM's 627.75 Hz *3, so I infer that it is the 3rd order harmonic of the PRM violin mode.
While putting this in, I noticed that my addition of ETM filters the other day (elog 10746) had gotten deleted. Koji pointed out that Foton can do this - it allows you to create and save filters that are higher than 20th order, but secretly it deletes them. I went into the filter archive and recovered the old ETM filters, and split things up. I have now totally reorganized the filters, and I have made every single optic (ETMs, ITMs, PRM, SRM, BS, MC2) all the same.
FM1 is the BS 1st and 2nd harmonics, and FM6 directly below that is a generic 3rd order notch that is wide enough that it encompasses 3*BS.
FM2 is the PRM 1st and 2nd order, and FM7 below it is the PRM 3rd order.
FM3 is the SRM 1st order, FM4 is the ETMs' 1st order, and FM5 is the MC2 1st and 2nd order filters.
All of these filters are triggered on if any degree of freedom is triggered. They all have a ramp time of 3 sec. We may want to consider having separate trigger options for each optic, so that we're not including the PRM notch on the ETMs, for example, and vice versa.
When all of these filters are on, according to Foton we lose 5.6 degrees of phase at 100 Hz.
We looked at the spectra of POX and POY during IR lock, and Q saw a peak at 1285 Hz in POX only. We're actuating on the ETMs, so it must be an ETMX violin mode, although it doesn't match the others that are in the table.
Anyhow, I added it to FM9. While I was doing that, I realized that yesterday I had forgotten to put back the 3rd order ETM violin notch, so that is also in FM9.
All suspensions were tripped. Damping was restored. No obvious sign of damage. BS OSEM-UR may be sticking?
The BS was showing some excess motion. I think I've fixed it. Order of operations:
I'm not sure how this might have gotten switched on...
The SUS Drift Monitor screen has been updated:
The MEDM screen has been updated: the new buttons, one for each optic, call the scripts/general/SUS_DRIFTMON_update_reference.py script, which measures (and averages) for 30s the current values of the POS/PIT/YAW drifts, and then sets the average as the new reference value.
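For the record, the measure-and-average step can be sketched as below. This is a guess at the script's structure, not a copy of it; `read` stands in for an EPICS channel read of a drift value, and the injectable clock/sleep are just there to make the sketch testable.

```python
import time

# Sketch of the update-reference logic: average a drift channel for ~30 s,
# then the caller writes the mean back as the new DRIFTMON reference.
def averaged_reference(read, duration_s=30.0, dt=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Sample read() every dt seconds for duration_s and return the mean."""
    samples = []
    t0 = clock()
    while clock() - t0 < duration_s:
        samples.append(read())
        sleep(dt)
    return sum(samples) / len(samples)
```

The real script does this for POS/PIT/YAW of the chosen optic and then writes the averages to the reference channels.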
100 and 10 day trends of ETMX and ETMY SUSPIT. One can clearly see the earthquakes of Dec. 30 and 31 on the 10 day plot. You cannot see the two shakes, M3.0 & M4.3, of Jan. 3.
The long term plot looks OK, but the 10 day plot shows the problem with ETMX, as it was shaken 4 times.
I made little scripts to go with the sus driftmon buttons, that will servo the alignment sliders until the susyaw and suspit values match the references on the driftmon screen.
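The servo logic is roughly the following sketch (illustrative gain and tolerance, not the scripts' actual numbers; `read` stands in for the SUSPIT/SUSYAW readback and `step` for a nudge of the alignment slider):

```python
# Sketch of the slider servo: integrate the error between the DRIFTMON
# reference and the current PIT/YAW value onto the alignment slider.
def servo_to_reference(read, step, reference, gain=0.2, tol=0.01,
                       max_steps=500):
    """Nudge the slider via step(delta) until read() matches reference
    to within tol.  Returns True on convergence."""
    for _ in range(max_steps):
        error = reference - read()
        if abs(error) < tol:
            return True
        step(gain * error)  # small integrator step on the slider
    return False
```

With a roughly linear slider-to-angle response, this converges geometrically for any gain between 0 and 1 (in units of the plant gain).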
I measured the bounce/roll frequencies for all the optics, and updated the Mechanical Resonances wiki page accordingly.
I put the DTT templates I used in the /users/Templates/DTT_BounceRoll folder; I wrote a python script which takes the exported ASCII data from such templates and does all the rest; the only tricky part is to remember to export the channel data in the order "UL UR LL" for each optic; the ordering of the optics in a single template export is not important, as long as you remember it...
Anyhow, the script is documented and the only things that may need to be modified are:
The script is in scripts/SUS/BR_freq_finder.py and in the SVN. I attach the plots I made with this method.
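The core of the peak identification can be sketched like this (the band edges below are made-up examples, not the real bounce/roll bands; the real script works on the ASCII spectra exported from the DTT templates):

```python
import numpy as np

# Sketch of the peak-finding step: given an exported spectrum
# (frequency, magnitude), report the frequency of the largest peak
# inside the expected bounce (or roll) band.
def peak_frequency(freq, asd, band=(15.0, 18.0)):
    """Frequency of the maximum spectrum value within `band` (Hz)."""
    freq = np.asarray(freq)
    asd = np.asarray(asd)
    mask = (freq >= band[0]) & (freq <= band[1])
    return float(freq[mask][np.argmax(asd[mask])])
```

Restricting to a band first keeps 60 Hz lines and low-frequency junk from winning the argmax.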
We centered the OpLevs for ITMX and ITMY.
ETMX YAW stopped drifting Jan 8, 2015
A Baja 4.9 M earthquake tripped the suspensions, except ETMX. Sus damping recovered. MC is locking.
This has been edited several times over the last several hours, as I try to change different parameters, to see if they affect the movement of ETMX. So far, I don't know what is causing the motion. If it is there, it is only present when the LSC is engaged, so I don't think it's wobbling constantly on a twisted wire.
FINAL EDIT, 9:10pm: The arm ASC was turning itself on when the arms were locked. Whelp, that was only 3 hours of confusion. Blargh.
For his penance for leaving the arm ASC engaged, Q has made a set of warning lights on the LSC screen, right next to the ASS warning lights.
ETMX might be having one of those days today, which is lame.
So far tonight, I have run the LSC offset script, set the FSS slow value to +0.2, and run the arm ASS scripts. Nothing too crazy I think.
Sometimes when I lock the single arms, the ETMs move around like crazy. Other times, not. What is going on here??? The ETMs don't move at all when they are not being actuated on with the LSC.
In this screenshot you can see the end of a POX/POY lock stretch where everything was nice and good. Then, the arms were unlocked, and they have a bit of a DC offset. After settling from that step, they continue sitting nice and still. Then, I relock the cavities on POX and POY a little before -4 minutes. ETMY takes a moment to pull itself together, but then it's steady. ETMX just wobbles around for several minutes, until I turn off the LSC enable switch (happened after the end of this plot).
I'm not going to be able to lock like this. Eeek!
This is somehow related to light being in the Xarm. This next plot was taken while the arms were held with ALS in CARM/DARM mode.
I closed and re-opened all 3 green shutters. Now (at least for the last 8 arm locks in the last 6 minutes) ETMX has not gone wobbly, except for a little bit right after acquisition, to deal with whatever the DC offset is. Why is this changing?
The arms were fine for one long ~30 minute lock while I stepped out for dinner. At some point after I returned, the MC lost lock. When the arms came back, ETMX was being fussy again. Then, it decided that it was done.
In this plot, at -1 minute I started the ASS. Other than that, I did not touch any buttons at all, just observed. I have no idea why at about -3 minutes the bad stuff seems to go away.
I was curious if it had to do with the DC pointing of the optics, so I unlocked the arms, put ETMX about where it was during the long good lock stretch, then reacquired lock. I had to undo a little of that so that it would lock on TEM00, but at the beginning of the lock stretch (starting at about -3 minutes) the pitch is in about the same spot. But, the oscillations persist. This time it was clear that the oscillations were around 80 mHz, and they started getting bigger until they settled to an amplitude they seemed to like.
Seems pretty independent of FSS temp. There are 3 lock stretches in the next plot (easier to see by looking at the Yarm transmission, green trace). In the first one, the FSS slow was at 0.35. In the middle one, it was around 0.05. In the last one, it was around -0.4. Other than the different DC pointings (which I don't know if they are related), I don't see anything qualitatively different in the movement of ETMX.
The temperature of the east and south ends are normal, they are about the same.
The BS oplev servo was kicking up the BS. It was turned off.
I just realized that the "damprestore" script that can be called from the watchdog screen did not have the new oplev names. I have updated it, and added it to the svn.
The oplev situation still seems unresolved - notice this DTT. I guess there are still inconsistencies in the screens / models etc.
Could use some some investigation and ELOGGING from Eric.
March 19, 2015: 2 new JDSU 1103P, sn P919645 & P919639, received from Thailand through Edmund Optics. Mfg date 12/2014, as spares.
In addition to (and probably related to) the XARM ASS not working today, the ITMX has been jumping around kind of like ETMX sometimes does. It's very disconcerting.
Earlier today, Q and I tried turning off both the LSC and the oplev damping (leaving the local OSEM damping on), and ITMX still jumped, far enough that it fell off the oplev PD.
I'm not sure what is wrong with ITMX, but probably ASS won't work well until we figure out what's up.
I tried a few lock stretches (after realigning the Xgreen on the PSL table) after hand-aligning the Xarm, but the overall alignment just isn't good enough. Usually POPDC gets to 400 or 450 while the arms are held off resonance, but today (after tweaking BS and PRM alignment), the best I can get POPDC is about 300 counts.
Den and I are looking at the ASS and ITMX now.
So, was there real shifting in the ITMX alignment as seen in the DV trend or just mis-diagnosis from the ETMX violin mode? Or how would the ETMX violin mode drive the ITMX with the LSC feedback disabled?
I've been poking around the oplev situation. One thing I came across regarding ITMX was that the gain on segment 4 seems to be higher than that of the other segments. I was led to believe this by steering the optic around and looking at the counts on each quadrant when the other 3 were dark.
Putting a gain of 0.86 (the ratio of the other segments' max counts over segment 4's max counts) in the segment 4 FM flattens the 1 Hz peak in the ITMX_OL_SUM spectrum, as well as significantly reducing the sub-Hz coherence of the sum with the individual quadrant counts. This is what I would expect from reducing the coupling of angular motion due to quadrant gain mismatch into the sum.
Here are the ITMX_OL_SUM spectra before and after (oplev servos are off).
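The balancing arithmetic is simply the following sketch (the 0.86 above came from exactly this kind of ratio; with three matched segments and one hot one, only the hot segment's gain moves off 1):

```python
# Sketch of the quadrant-balancing step: steer the beam so each segment in
# turn sees the full spot, record its maximum counts, then scale every
# segment down to the weakest segment's response.
def balancing_gains(max_counts):
    """Per-segment gains that equalize the four quadrant responses."""
    ref = min(max_counts)
    return [ref / c for c in max_counts]
```

For example, maxima of (860, 860, 860, 1000) counts give gains of (1, 1, 1, 0.86).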
The "burps" and control filter saturations are still unexplained. Investigations continue...
ITMX, ETMY, BS and SRM are oscillating ?
The BS oplev has been misbehaving and kicking the optic from time to time since noon. The kicks are not strong enough to trip the watchdogs (current watchdog max counts for the sensors is 135).
I took a look at the spectrum of the BS oplev error in pit and yaw with both loops enabled while the optic was stable. There is nothing alarmingly big except for some additional noise above 4Hz.
I have turned the BS oplev servo OFF for now.
I saw this kicking before
I think that this happens when the beam gets too close to the edge of the QPD. We see this regularly in the ETMs, if they've been kicked a bit, but not enough to trip the watchdogs. I think it might be the step/impulse response of the RES3.3 filter, which rings for almost 20 seconds.
Anyhow, I've just recentered the BS oplev. It was at -21urad in pitch, and had more than 400 counts on the top two quadrants, but only about 100 counts on the bottom two. Now it's around 300 counts on all 4 quadrants.
As a totally unrelated aside, I have installed texlive on Donatella, so that I could run pdflatex.
The laser below is dead. JDSU 1103P, SN P845655 lived for 3.5 years.
JDSU 1103P died after 4 years of service. It was replaced with a new identical head of 2.9 mW output. The power supply was also changed.
The return spots of 0.04 mW 2.5 mm diameter on qpds are BS 3,700 counts and PRM 4,250 counts.
It was replaced by JDSU P/N 22037130 (the new name for the Uniphase 1103P), sn P919639, mfg date 12-2014.
Beam shape at 5 m is nicely round. Output power 2.8 mW at 633 nm.
BS spot size on qpd ~1 mm & 60 micro W
PRM spot size on qpd ~1 mm & 50 micro W
ETMX sus damping restored.
Recently, Steve replaced the HeNe which was sourcing the BS & PRM OL. After replacement, no one checked the beam sizes and we've been living with a mostly broken BS OL. The beam spot on the QPD was so tiny that we were seeing the 'beam is nearly the size of the segment gap' effect.
Today I removed 2 of the lenses which were in the beam path: one removed from the common PRM/BS path, and one removed from the PRM path. The beams on both the BS & PRM got bigger. The BS beam is bigger by a factor of 7. I've increased the loop gains by a factor of 6 and now the UGFs are ~6 Hz. The loop gains were much too high with the small beam spots that Steve had left there. I would prefer for the beams to be ~1.5-2x smaller than they are now, but it's not terrible.
Many of the mounts on the table are low quality and not constructed stably. One of the PRM turning mirror mounts twisted all the way around when I tried to align it. This table needs some help this summer.
In the future: never try locking after an OL laser change. Always redo the telescope and alignment and check the servo shape before the OL job is done.
Also, I reduced the height of the RG3.3 in the OL loops from 30 to 18 dB. The BS OL loops were conditionally stable before, and that's a no-no. It makes them oscillate if they saturate.
After last week's work on the BS/PRM oplev table, I think the PRM oplev got centered while the PRM was misaligned. With the PRM aligned, the oplev spot was not on the QPD. It has been centered.
PRM watchdog tripped, but the damprestore.py script wouldn't run.
It turns out the script tries to import some ezca stuff from /users/yuta (), which had been moved to /users/OLD/yuta ().
I've moved the yuta directory back to /users/ until I fix the damprestore script.
I will move it back. We need to fix our scripts to not use any users/ libraries ever again.