All suspensions were tripped. Damping was restored. No obvious sign of damage. BS OSEM-UR may be sticking?
For tuning the phase and amplitude of the mod. drive:
- since we don't have access to both RF phases, I just maximized the gain using the RF phase slider. First, I flipped the sign using the 'phase flip' button so that we would be near the linear range of the slider. Then I put the servo close to oscillation and adjusted the phase to maximize the height of the ~13 kHz body mode. For the amplitude, I cranked the modulation depth up until it started to show up as a ~0.2% reduction in the transmission, then reduced it by a factor of ~3. That makes it ~5x larger than before.
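For the record, here's the rough arithmetic behind that, sketched in Python (the small-angle J0 approximation for the carrier power is my assumption; the 0.2% dip and the factor of ~3 are from above):

from math import sqrt

# carrier power for phase mod depth Gamma: P/P0 = J0(Gamma)^2 ~ 1 - Gamma^2/2
dip = 0.002                 # ~0.2% transmission reduction at onset
gamma_max = sqrt(2 * dip)   # depth where the dip first showed up, ~0.063 rad
gamma_now = gamma_max / 3   # after backing off the drive by ~3x, ~0.021 rad
print(f"Gamma: {gamma_max:.3f} rad -> {gamma_now:.3f} rad")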
I wonder if the variable bump around 100 kHz could be something related to the NPRO, and if the bump we see is the closed-loop response due to the Noise Eater.
This plot (from the Mephisto manual) shows the effect of the NE on the RIN, but not on the frequency noise. I assume it's similar, since the laser frequency noise above 10 kHz probably just comes from the pump diode noise.
I went out to the PSL and turned off the NE at ~4:53 PM local time today to see what happened. Although the overall PCDRIVE signal looks more ratty, there is no difference between the ON/OFF spectra when the PCDRIVE is low. When it's noisy, I see a tiny peak around 1 MHz with the NE OFF. Turned it back on after a few hours.
Koji and I noticed that there was a comb* of peaks in the MC and FSS at harmonics of ~37 kHz. Today I saw that this shows up (at a much reduced level) even when the input to the MC board is disconnected.
It also shows up in the PMC. At nominal gains, there is just the 37 kHz peak. After tweaking up the phase shifter settings, I was able to get the PMC servo to oscillate; it then makes a comb, but the actual oscillation fundamental is 1/3 of 37 kHz (some related info from Jenne in elog 978, back in 2008).
Not sure what, if anything, we do about this. It is curious that the peak shows up in the MC with a different harmonic ratio than in the PMC. Any theories?
Anyway, after some screwing around with the phase and amplitude of the RF modulation for the PMC from the phase shifter screen**, I think the loop gain is higher and it looks like the comb is gone from the MC spectrum.
Another clue I noticed is that the PCDRIVE mad times are often coincident with DC shifts in the SLOWDC. Does this mean it's some flakiness in the laser? While watching the PCDRIVE output from the TTFSS interface board on a scope, I also looked at the MIXER mon. It looks like many of the high noise events are associated with a broadband noise increase from ~50-140 kHz, rather than with some specific lines. I don't know if this is characteristic of all of the noisy times, though.
* this 'comb' has several peaks, but they seem not to be precise harmonics of each other: (f3 - 3*f1)/f3 ~ 0.1%
** I think we never optimized this after changing the ERA-5 this summer, so we'd better do it next.
We ran out of N2 for the vacuum system. The pressure peaked at 1.3 mTorr with the MC locked. V1 did not close because the N2 pressure sensor failed.
We are back to vac normal. I will be here tomorrow to check on things.
ITMX damping restored.
The plot shows the behaviour of the PSL-FSS_SLOWDC signal during the last week; the blue rectangle marks an approximate estimate of when the scripts were moved to megatron. Apart from the bad things that happened on Friday during the big crash, and the work ongoing since yesterday, it seems that something is not working well. The scripts on megatron are actually running, but I'll try to have a look at it.
I reset the threshold to +6666 counts (the aligned MC transmission is ~16000 for the TEM00 mode) so that it only turns on when we're in a good locked state.
I've updated the scripts for the MC auto locking. Due to some permissions issues or general SVN messiness, most of the scripts in there were not saved anywhere and so I've overwritten what we had before.
After all of the electronics changes from Monday/Tuesday, the lock acquisition had to be changed a lot. The MC seems to catch on the HOM more often, so I lowered a bunch of the gains to make it less likely to hold the HOM locks.
A very nice feature of the Autolocker running on megatron is that the whole 'mcup' sequence now runs very fast and as soon as it catches the TEM00, it gets to the final state in less than 2 seconds.
I've also increased the amplitude of the MC2 tickle from 100 to 300 counts, to move it through more fringes and to break the HOM locks more often. Using the 2009 MC2 calibration of 6 nm/count, this is 1.8 microns-peak @ 0.03 Hz, which seems like a reasonable excitation.
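A one-line sanity check of that number, using the calibration quoted above:

counts, cal = 300, 6e-9   # tickle amplitude [counts], 2009 MC2 cal [m/count]
print(f"{counts * cal * 1e6:.1f} um-peak at 0.03 Hz")   # 1.8 um-peak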
Using this, the MC has relocked several times, so it's a good start. We'll have to work on tuning the settings to make things a little spicier as we move ahead.
That directory is still in a conflicted state and I leave it to Eric/Diego to figure out what's going on in there. Seems like more fallout from the nodus upgrade:
controls@chiara|MC > svn up
svn: REPORT of '/svn/!svn/vcc/default': Could not read chunk size: Secure connection truncated (https://nodus.ligo.caltech.edu:30889)
Today we decided to continue to modify the TTFSS board.
The modified schematic can be found here: https://dcc.ligo.org/D1400426-v1 as part of the 40m electronics DCC Tree.
What we did
1) Modified the input elliptic filter (L1, C3, C4, C5) to give a zero and a pole at 30 kHz and 300 kHz, respectively. L1 was replaced with a 1 kOhm resistor, C3 was replaced with 5600 pF, and C4 and C5 were removed. The expected locations of the zero and pole are therefore 28.4 kHz and 256 kHz, respectively. This lead filter replaces the Pomona box, and does so without causing the terrible resonance around 1 MHz.
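Here's the corner-frequency arithmetic as a quick Python sketch, assuming a standard RC lead network (the 125 Ohm downstream resistance is my assumption, consistent with the load mentioned in item 4 below):

from math import pi

# series R1 (the 1 kOhm that replaced L1); pole set by C3 against
# R1 in parallel with the assumed downstream input resistance R2
R1, R2, C3 = 1000.0, 125.0, 5600e-12
f_zero = 1 / (2 * pi * R1 * C3)                      # ~28.4 kHz
f_pole = 1 / (2 * pi * (R1 * R2 / (R1 + R2)) * C3)   # ~256 kHz
print(f"zero: {f_zero/1e3:.1f} kHz, pole: {f_pole/1e3:.0f} kHz")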
2) Removed the notch filters for the PC and fast path. This was done by removing L2, L3, and C52.
At this point we tested the MC locking and measured the transfer function. We successfully turned the UGF up to 170 kHz with the two super-boosts on.
3) Now a peak at 1.7 MHz was visible and probably causing noise. We decided to reinstall L2 and adjust C50 to tune the PC-path notch filter to suppress this possible PC resonance. Again the TF was measured. We confirmed that the peak at 1.7 MHz is at -7 dB and not causing an oscillation. The suppression of the peak is limited by the Q of the notch; since it's in a weird feedback loop, we're not sure how to make it deeper at the moment.
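For reference, the notch-retuning arithmetic, with placeholder component values (the real L2 and C50 should be read off the D1400426 schematic):

from math import pi, sqrt

L2 = 100e-6                            # H, placeholder guess
for C50 in (68e-12, 82e-12, 100e-12):  # candidate capacitors, guesses
    f0 = 1 / (2 * pi * sqrt(L2 * C50))   # series-LC notch frequency
    print(f"C50 = {C50*1e12:.0f} pF -> notch at {f0/1e6:.2f} MHz")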
4) The connection from the MC board output now goes in through the switchable Test1 input, rather than the fixed 'IN1'. The high frequency gain of this input is now ~4x higher than it was. I'm not sure that the AD829 in the MC board can drive such a small load (125 Ohms + the ~20 Ohms ON resistance of the MAX333A) very well, so perhaps we ought to up the output resistor to ~100-200 Ohms?
Also, we modified the MC servo board: mainly changed the corner frequencies of the Super Boost stages, plus some random cleanup and photo taking. I lost the connecting cable from the CM to the AO input (unlabeled).
I ssh'd in, and was able to run each script manually successfully. I ran the initctl commands, and they started up fine too.
We've seen this kind of behavior before, generally after reboots; see ELOGS 10247 and 10572.
So, despite having registered users, it turns out that the "Author" field is still open for editing when making posts. I.e. we don't really need to make new accounts for everyone.
Thus, I've made a user named "elog" with the old write password that can write to all ELOGs.
(Also, I've added a user called "jamie")
TP2's foreline dry pump was replaced at a performance level of 600 mTorr, after 10,377 hrs of continuous operation.
Where are the foreline pressure gauges? These values are not on the vac.medm screen.
The new tip-seal dry pump lowered the small turbo's foreline pressure by 10x:
- TP2 foreline after 2 days of pumping: 65 mTorr
- TP2 dry pump was replaced at a foreline pressure of 1 Torr (TP2 at 50 krpm, 0.34 A)
- Tip seal life: 6,362 hrs
- New seal performance at 1 hr: 36 mTorr
- Maglev at 560 Hz, CC1 at 6e-6 Torr
TP3's dry pump was replaced at 540 mTorr, with TP3 at 50 krpm drawing 0.3 A with the annulus load. Its tip seal lifetime was 11,252 hrs.
We had an unexpected power shutdown for 5 sec at ~ 9:15 AM.
Chiara had to be powered up, and I am in the process of getting everything else back up again.
Steve checked the vacuum and everything looks fine with the vacuum system.
The PSL Innolight laser and the 3 IFO air conditioning units were turned on.
The vacuum system's reaction to losing power: V1 closed and the Maglev shut down. The Maglev runs on 220 VAC, so it is not connected to the VAC-UPS. The V1 interlock was triggered by the Maglev "failure" message.
The Maglev was reset and started. After Chiara was turned on manually, I could bring up the vac control screen through Nodus and open V1.
"Vacuum Normal" valve configuration was recovered instantly.
It is arriving Thursday
EricQ and Steve,
Steve preset the vacuum for safe-reboot mode with C1vac1 and C1vac2 running normally: closed valves as shown, stopped the Maglev, and disconnected valve V1 plus the valves with "moving" labels.
(The position indicator of a valve changes to "moving" when its cable is disconnected.)
Eric shut down Chiara, installed APC's UPS Pro 1000 and restarted it.
All went well; nothing unexpected happened. So we can conclude that the vacuum system, with C1vac1 and C1vac2 running, is not affected by Chiara losing AC power.
Steve and I switched chiara over to the UPS we bought for it, after ensuring the vacuum system was in a safe state. Everything went without a hitch.
Also, Diego and I have been working on getting some of the new computers up and running. Zita (the striptool projecting machine) has been replaced. One ThinkPad laptop is missing an HD and battery, but the other one is fine. Diego has been working on a Dell laptop, too. I was having problems editing the MAC address rules on the martian wifi router, but the working ThinkPad's MAC was already listed.
Turns out that, as the martian wifi router is quite old, it doesn't like Chrome; Firefox worked like a charm, and now giada (the Dell laptop) is also on 40MARS.
Since we will not be doing any major locking, I am taking this chance to move things on the X end table and install the fiber coupler.
The first steering mirror shown in the earlier elog will be a Y1 (HR mirror) and the second one will be a beam sampler (similar to the one installed at the Y endtable for the fiber setup).
Doubler --> Y1 --> Lens (f=12.5cm) --> Beam sampler --> Fiber coupler
The fiber coupler mount will be installed in the green region to the right of the TRX camera.
This work will involve moving the TRX camera and the optic that brings the trans image onto it.
Let me know if this work should not be done tomorrow morning for any reason.
I was working around the X endtable and PSL table today.
1. Y1 mirror, beam sampler and the fiber coupler have been installed.
2. Removed the TRX camera temporarily. The camera will be put back on the table once we have the 532 nm filter to go with it.
3. Removed an old fiber mount that was not being used from the table.
4. Lowered the current of the X end NPRO while working, and put it back up to 2 A before closing.
5. The fibers running from the X end to the PSL table are connected at an FC/APC connector on the PSL table.
6. Found the HEPA left on high (probably from yesterday's work around the PSL table). I have turned it back down and left it that way.
I have not installed the coupling lens yet, owing to space restrictions - there is not enough space for the footprint of the lens. I have to revisit the telescope design.
Some TFs of the TTFSS box
Today we were looking at the MC TFs and pulled out the FSS box to measure it. We took photos, and removed a capacitor that had only one leg.
Still, we were unable to see the weird, flat TF from 0.1-1 MHz and the bump around 1 MHz. It's not in the FSS box or the IMC servo card. So we looked around for a rogue Pomona box and found one sneakily located between the IMC and FSS boxes, underneath some cables, next to the Thorlabs HV driver for the NPRO.
It was meant to be a 14k:140k lead filter (with a high-frequency gain of unity) to give us more phase margin (see elog 4366; it's been there for 3.5 years).
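Just to quantify what that filter was buying us, assuming an ideal single-zero/single-pole lead at 14 kHz / 140 kHz (my arithmetic):

from math import asin, sqrt, degrees

f_z, f_p = 14e3, 140e3
k = f_p / f_z                                # pole/zero ratio of 10
phi_max = degrees(asin((k - 1) / (k + 1)))   # ~55 deg of extra phase
f_mid = sqrt(f_z * f_p)                      # ~44 kHz, where the boost peaks
print(f"max lead: {phi_max:.0f} deg at {f_mid/1e3:.0f} kHz")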
From the comparison below, you can see what the effect of the filter was. Neither the red nor the purple TF is what we want, but at least we've tracked down where the bump comes from. Now we have to figure out why, and what to do about it.
* all of the stuff above ~1-2 MHz seems to be some kind of pickup.
** notice how the elog is able to make thumbnails of PDFs now that it's not on Solaris!
I looked at the endtable for possible space to setup optics in order to couple the X end laser into a PM fiber.
Attached is the layout of where the setup will go and the existing stuff that will be moved.
In order to fix ELOG search, I have started running ELOG v2.9.2 on Nodus.
Sadly, due to changes in the software, we can no longer use one global write password. Instead, we must now operate with registered users.
Based on recent elog users, I'll be creating user accounts with the following names, using the same old ELOG write password. (These will be valid across all logbooks)
All of these users will be "Admins" as well, meaning they can add new users and change settings, using the "Config" link.
Let me know if I neglected to add someone, and sorry for the inconvenience.
RXA: What Eric means to say is that "upgrading" from Solaris to Linux broke the search and made us get new elog software that's worse than what we had.
The IMC OL TF has been measured from 10 kHz to 10 MHz.
What we want is to have the high and low noise spectra on the same plot. The high noise one should be triggered by a high PC DRIVE signal.
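One offline way to do the triggering, as a rough sketch (the file names, sample rate, and threshold below are placeholders, not real settings):

import numpy as np
from scipy.signal import welch
import matplotlib.pyplot as plt

fs = 16384                        # Hz, placeholder sample rate
seg = fs                          # sort the data in 1 s chunks
# hypothetical pre-fetched time series of the error point and PCDRIVE:
err = np.loadtxt("mc_err.txt")
pcdrive = np.loadtxt("pcdrive.txt")

quiet, noisy = [], []
for i in range(0, len(err) - seg, seg):
    f, pxx = welch(err[i:i + seg], fs=fs, nperseg=seg // 4)
    # trigger: sort each chunk by the PCDRIVE level during the same second
    (noisy if pcdrive[i:i + seg].max() > 3.0 else quiet).append(pxx)

for pile, label in [(quiet, "PCDRIVE low"), (noisy, "PCDRIVE high")]:
    if pile:
        plt.loglog(f, np.sqrt(np.mean(pile, axis=0)), label=label)
plt.xlabel("Frequency [Hz]"); plt.ylabel("ASD [V/rtHz]")
plt.legend(); plt.show()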
I tried to find my own entry and was faced with some strange behavior of the elog.
The search button invoked the following link, and no real search was done:
If I ran the following link, it returned a correct search. So something must be wrong.
The error spectra I have taken so far are not that informative, I'm afraid. The first three posted here refer to Wed 17 in the afternoon, when things were quiet, the LSC control was off, and the MC was reliably locked. The last two plots refer to Wed night, while Q and I were doing some locking work; in particular, these were taken just after one of the locklosses described in elog 10814. Sadly, they aren't much different from the "quiet" ones.
I can add some considerations, though. Q and I saw some weird effects during that night, using a live reading of such spectra (which couldn't be saved): the effects were quite fast both in appearance and disappearance, and therefore difficult to capture with the snapshot measurement, which is the only one that can save data as of now. Moreover, these effects were certainly seen during the locklosses, but sometimes also in normal circumstances. What we saw was a broad peak in the range 5e4-1e5 Hz with a peak value of ~1e-5 V/rtHz, just after the main peak shown in the attached spectra.
Today Q moved the FSS slow servo over to an init job on megatron; some time ago he did the same thing for the MC autolocker script. It isn't working, though.
Even though megatron was rebooted, neither script started up automatically. As Diego mentioned in elog 10823, we ran sudo initctl start MCautolocker and sudo initctl start FSSslow, and the blinky lights for both scripts started. However, that seems to be the only thing the scripts are doing. The MC autolocker is not detecting locklosses, and is not resetting things to allow the MC to relock. The MC is happy to lock if I do it by hand, though. Similarly, the blinky light for the FSS is on, but the PSL temperature is moving a lot faster than normal. I expect it will hit one of the rails in under an hour or so.
The MC autolocker and the FSS loop were both running earlier today, so maybe Q had some magic that he used when he started them up, that he didn't include in the elog instructions?
Everything seems reasonably back to normal:
The EPICS freeze that we had noticed a few weeks ago (and several times since) has happened again, but this time it has not come back on its own. It has been down for almost an hour so far.
So far, we have reset the Martian network's switch that is in the rack by the printer. We have also power cycled the NAT router. We have moved the NAT router from the old GC network switch to the new faster switch, and reset the Martian network's switch again after that.
We have reset the network switch that is in 1X6.
We have reset what we think is the DAQ network switch at the very top of 1X7.
So far, nothing is working. EPICS is still frozen, we can't ping any computers from the control room, and new terminal windows won't give you the prompt (so perhaps we aren't able to mount the nfs, which is required for the bashrc).
We need help please!
EricQ suggested it may be some NFS-related issue: if something, maybe some computer in the control room, is asking too much of chiara, then all the other machines accessing chiara will slow down, and this could escalate and lead to the Big Bad Freeze. As a matter of fact, chiara's dmesg showed its eth0 interface being brought up constantly, as if something were making it go down repeatedly. Anyhow, after shutting down all of the computers in the control room, we rebooted chiara, megatron, and the fb.
Then I rebooted pianosa, and most of the issues seem gone so far; I had to "mxstream restart" all the frontends from medm, and every one of them except c1scy seems to behave properly. I will now bring the other machines back to life and see what happens next.
Given that op340m showed some undesired behavior, and that the FSS slow servo seems prone to railing lately, I've moved the FSS slow servo job over to megatron, in the same way I did for the MC autolocker.
Namely, there is an upstart configuration (megatron:/etc/init/FSSslow.conf), that invokes the slow servo. Log file is in the same old place (/cvs/cds/caltech/logs/scripts), and the servo can be (re)started by running:
controls@megatron|~ > sudo initctl start FSSslow
Maybe this won't really change the behavior. We'll see.
I've set up nodus to start the ELOG on boot, through /etc/init/elog.conf. Thanks to this, we don't need to use the start-elog.csh script any more; we can now just do:
controls@nodus:~ $ sudo initctl restart elog
I also tweaked some of the ELOG settings, so that image thumbnails are produced at higher resolution and quality.
I swapped out one of the channels on Q's lockloss plotter - we don't need POP22Q, but I do want the PC drive.
So, we still need to look into why the PC drive goes crazy, and whether it is related to the buildup in the arms or just something intrinsic to the current FSS setup, but it looks like that was the cause of the lockloss that Q and Diego had on Wednesday.
The elog was not responding for unknown reasons, even though the elogd process on nodus was alive; anyway, I restarted it.
I just stumbled upon this while poking around:
Since the great crash of June 2014, the scripts backup script has not been working on op340m. For some reason, it only grabs the PRFPMI folder, and nothing else.
Megatron seems to be able to run it. I've moved the job to megatron's crontab for now.
Since the Nodus switch, the offsite backup scripts (scripts/backup/rsync.backup) had not been running successfully. I tracked it down to the weird NFS file-ownership issues we've been seeing since making Chiara the fileserver. Since the backup script uses rsync's "archive" mode, which preserves ownership, permissions, modification dates, etc., not seeing the proper ownership made everything wacky.
Despite 99% of the searches you do about this problem saying you just need to match your user's uid and gid on the NFS client and server, it turns out NFSv4 doesn't use this mechanism at all, opting instead for an ID-mapping service (idmapd), which I have no inclination to figure out at this time.
Thus, I've configured /etc/fstab on Nodus (and the control room machines) to use NFSv3 when mounting /cvs/cds. Now, all the file ownerships show up correctly, and the offsite backup of /cvs/cds is churning along happily.
Some locking efforts tonight; many locklosses due to PRC angular motion. The furthest progress was arm powers of 15, and I've stared at the corresponding lockloss plot with little insight into what went wrong. (BTW, lastlock.sh seems to catch the lockloss reliably in the window.)
CARM and DARM loops were measured not long before this lockloss, and had nominal UGFs (~120 Hz, ~20 deg PM). However, there was a reasonably clear 01 mode shape on the AS camera, which I did nothing to correct. Here's a spectrum from *just* before the lockloss, recovered via NDS. Nothing stands out to me, other than a possible loss of DARM optical gain. (I believe the references are the error-signal spectra taken with the ALS arms held away + PRMI on 3F configuration.)
The shape in the DARM OLTF that we had previously observed, and hypothesized to be a possible DARM optical spring, was not observed tonight. I didn't induce a DARM offset to look for it, though.
Looking at some of the times when I was measuring OLTFs, the AS55 signals do show coherence with the live DARM error signal at the excitation frequencies, but little to no coherence under 30 Hz, which probably means we weren't close enough to swap DARM error signals yet. This arm-power regime is where the AS55 sign flip has been modeled to be...
A fair amount of time was spent in pre-locking prep, including:
I wonder what to do with the X arm.
The primary purpose of the ASS is to align the arm (= maximize transmission), and the secondary purpose is to adjust the input pointing.
As the BS is the only steering actuator, we can't adjust two dof out of 8 dof.
In the old (my) topology, the spot position on ITMX was left unadjusted.
If my understanding of the latest configuration is correct, the alignment of the cavity (= matching of the input axis to the cavity axis) is deteriorated in order to move the cavity axis to the center of the two test masses. This is not what we want, as it causes a deterioration of the power recycling gain.
I made the Xarm follow the new (old) topology of Length -> test masses, and Trans -> input pointing.
It takes a really long time to converge (2+ min), since the input pointing loops actuate on the BS, which has an optical lever, which is slow. So, everything has to be super duper slow for the input pointing to be fast relative to the test mass motion.
Also, between last night and this afternoon, I moved the green ASX stuff from a long list of ezca commands into a burt file, so turning it on is much faster now. I also chose new frequencies to avoid intermodulation issues (a quick collision check is sketched below), set the lockin demodulation phases, and tuned all 4 loops. So now the green ASX should work for all 4 mirrors, no hand tuning required. While I was working on it, I also removed the band-pass filters and made the low-pass filters the same as we are using for the IR ASS. The servos converge in about 30 seconds.
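Here's the kind of intermodulation check I mean, as a toy sketch (these frequencies are made up, not the ones actually loaded into the lockins):

def intermod_collisions(freqs, tol=0.5):
    """List second-order intermod products (fi +/- fj) that land within
    tol of another dither line; those would corrupt the demodulation."""
    hits = []
    for i, fi in enumerate(freqs):
        for fj in freqs[i + 1:]:
            for prod in (fi + fj, abs(fi - fj)):
                hits += [(fi, fj, prod, fk) for fk in freqs
                         if abs(prod - fk) < tol]
    return hits

# placeholder dither frequencies in Hz:
print(intermod_collisions([18.13, 23.71, 29.39, 34.97]) or "no collisions")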
I have completed all of the model modifications and medm screen updates to allow for feedback from the transmon QPD pitch and yaw signals to the ITMs. Now, we can design and test actual loops...
The signals come from c1sc[x/y] to c1rfm via RFM, and then go to c1ass via dolphin.
Out of curiosity about the RFM+dolphin delay, I took a TF of an excitation at the end SUS model (C1:SUS-ETM[X/Y]_QPD_[PIT/YAW]_EXC) to the input FM in the ASC model (C1:ASC-ETM[X/Y]_QPD_[PIT/YAW]_IN1). All four signals exhibit the same delay of 122usec. I saved the dtt file in Templates/ASC/transmonQPDdelay.xml
This is less than a degree of phase below 20 Hz, so we don't have to worry about it.
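The arithmetic, for the record:

# phase cost of the measured transport delay at the top of the loop band
tau, f = 122e-6, 20.0              # s, Hz
print(f"{360 * f * tau:.2f} deg")  # ~0.88 deg at 20 Hz -- negligible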
EricQ's crazy people filter has been deleted. I'm trying to lock right now, to see if all is well in the world.
However, the PRMI would not acquire lock with the arms held off resonance.
This is entirely my fault.
Last week, while doing some stuff with PRY, I put this filter in SUS_PRM_LSC to stop some saturations from high-frequency sensing noise.
After the discussion at today's meeting, it struck me that I might have left it on. Turns out I did.
A 20 degree phase lag at 200 Hz can explain the instability, and some non-flat shape at a few hundred Hz explains the non-1/f shape.
Sorry about all that...
I was working around the PSL table and Y endtable today.
I modified the Y arm optical layout that couples the 1064nm light leaking from the SHG crystal into the fiber for frequency offset locking.
The ND filter that was used to attenuate the power coupled into the fiber has been replaced with a beam sampler (Thorlabs BSF-10C). The reflected power after this optic is ~1.3 mW, and the transmitted power (~210 mW) has been dumped to a razor-blade beam dump.
Since we have a spare fiber running from the Y end to the PSL table, I installed an FC/APC fiber connector on the PSL table to connect them, and monitored the output power at the Y end itself. After setting up, I have ~620 uW of Y arm light on the PSL table (~48% coupling).
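For the record, the coupling number is just the ratio of the two powers quoted above:

p_in, p_out = 1.3e-3, 620e-6   # W into the fiber, W out on the PSL table
print(f"{100 * p_out / p_in:.0f}% coupling")   # ~48%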
During the course of the alignment, I lowered the power of the Y end NPRO and disengaged the ETMY oplev. These were reset after I closed the end table.
Attached is the out of loop noise measurement of the Y arm ALS error signal before (ref plots) and after.
Did a big reconfig to make the Y arm work again, since it was bad again.
With the arm aligned and the A2L signals all zeroed, we centered the beam on QPDY (after freezing the ASS outputs). I saw the beam going to the QPD on an IR card, along with a host of green spots. It seems bad to have green beams hitting the QPD along with the IR, so we are asking Steve to buy a bunch of the broad, dielectric bandpass filters from Thorlabs (FL1064-10), so that we can also be immune to the EXIT sign. I wonder if it's legal to make a baffle to block it on the bottom side?
P.S. Why is the Transmon QPD software different from the OL stuff? We should take the Kissel OL package and use it to replace our old OL junk, as well as for the Transmons.
Diego is going to give us some spectra of the MC error point at various levels of Pockels cell drive. Is it always the same frequencies popping up, or is it random?
I found out that the Spectrum Analyzer gives bogus data... Since now is locking time, I'll go and figure out what is not working tomorrow.
Nodus (solaris) is dead, long live Nodus (ubuntu).
Diego and I are smoothing out the kinks as they appear, but the ELOG is running smoothly on our new machine.
SVN is working, but your checkouts may complain because they expect https, and we haven't turned SSL on yet...
SSL, https and backups are now working too!
A backup of nodus's configuration (with some explaining) will be done soon.
Nodus should be visible again from outside the Caltech network; I added some basic configuration for postfix and smartmontools; configuration files and instructions for everything are in the SVN, in the nodus_config folder.
[Jenne, Rana, Diego]
After deciding that the Y end QPD situation was not significant enough to prevent us from locking tonight, we got started. However, the PRMI would not acquire lock with the arms held off resonance.
This started some PRMI investigations.
With no arms, we can lock the PRMI with either REFL55 I&Q or REFL165 I&Q. We checked the demod phase for both REFL55 and REFL165. REFL55 did not need changing, but REFL165 was off significantly (which probably contributed to the difficulty in using it to acquire lock). I didn't write down what REFL165 was before, but it is now -3 degrees. To set the phase (this is also how Rana checked the 55 phase), I put in an oscillation using the sensing matrix oscillators. For both REFL165I and 165Q, I set the sensing-matrix demod phases such that all of the signal was in the I phase (so I_I and Q_I, and basically zero in I_Q and Q_Q). Then I set the main PD demod phase so that the REFL165Q phase (the Q_I component) was about zero. A toy version of this nulling procedure is sketched below.
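Toy version of the nulling, with made-up signals (not real data), just to show the rotation being tuned:

import numpy as np

def rotate_iq(i_sig, q_sig, phi_deg):
    """Apply a demod phase rotation to an I/Q pair."""
    phi = np.radians(phi_deg)
    return (i_sig * np.cos(phi) + q_sig * np.sin(phi),
            -i_sig * np.sin(phi) + q_sig * np.cos(phi))

t = np.linspace(0, 1, 16384, endpoint=False)
line = np.sin(2 * np.pi * 311.1 * t)   # injected line (made-up frequency)
i_raw, q_raw = 1.0 * line, 0.6 * line  # line leaks into both quadratures

# scan the demod phase and pick the one that nulls the line in Q
phis = np.linspace(-90, 90, 1801)
q_rms = [rotate_iq(i_raw, q_raw, p)[1].std() for p in phis]
print(f"phase that nulls Q: {phis[np.argmin(q_rms)]:.1f} deg")  # ~31 deg here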
Here are the recipes for PRMI-only, REFL55 and REFL165:
In both cases, the actuation was PRCL = 1*PRM and MICH = (0.5*BS - 0.2625*PRM). Trigger thresholds for DoFs and FMs were always POP22I, 10 up and 0.5 down.
REFL55, demod phase = 31deg.
MICH = 2*R55Q, gain = 2.4, trig FMs 2, 6, 8.
PRCL = 12*R55I, gain = -0.022, trig FMs 2,6,9.
REFL165, demod phase = -3deg.
MICH = -1*R165Q, gain = 2.4, trig FMs 2,6,8.
PRCL = 2.2*R165I, gain = -0.022, trig FMs 2,6,9.
These recipes assume Rana's new resonant-gain filter for MICH's FM6, with only 2 resonant gains at 16 and 24 Hz instead of a whole mess of them: elog 10803. Also, we have turned down the waiting time between the MICH loop locking and the filters coming on. It used to be a 5 second delay, but is now 2 sec. We have been using various delays for the PRCL filters, between 0.2 s and 0.7 s, with no particular preference in the end.
We compared the PRCL loop with both PDs, and note that the REFL165 error signal has slightly more phase lag, although we do not yet know why. This means that if we only have a marginally stable PRCL loop with REFL55, we will not be stable with REFL165. Also, both loops have a non-1/f shape at a few hundred Hz. This bump is still there even if all filters except the acquisition ones (FM4,5 for both MICH and PRCL) are turned off, and all of the violin filters are turned off. I will try to model this to see where it comes from.
To Do list:
Go back to the QPDY situation during the daytime, to see if tapping various parts of the board makes the noise worse. Since it goes up to such high frequencies, it might not be just acoustic. Also, it's got to be in something common, like the power, since we see the same spectra in all 4 quadrants.
The ASS needs to be re-tuned.
Rana was talking about perhaps opening up the ETMX chamber and wiggling the optic around in the wire. Apparently it's not too unusual for the wire to get a bit twisted underneath, which creates a set of places that the optic likes to go to.
This is ridiculous.
How many RGs can I fit into one button???
We manually realigned the BS and PRM optical levers on the optical table.
[Jenne, Rana, Diego]
We did some tests on the modified QPD board for the Y end; we saw some weird oscillations at high frequencies, so we went and checked more closely, directly at the rack. The oscillations disappear when the cable from the QPD is disconnected, so it seems something is happening within the board itself; however, looking closely at the board with an oscilloscope in several locations, with the QPD cable connected or disconnected, there is nothing strange and definitely nothing that changes whether the cable is connected or not. The plots show the usual channels we monitor, and the 64 kHz original channels before they are downsampled.
Overall it doesn't seem to be a huge factor, as the RMS shows at high frequencies; maybe it is some random noise coming up, but anyway this will be investigated further in the future.