Pumping again after 7 days at atmosphere.
Only the BS, ITMY and OMC chambers were open.
Checked: jam nuts, viewport covers and beam shutters.
Oplev servos turned off and MEDM screen shots taken.
New item in vacuum: green shade 14 glass beam block at IR-input [ from the PSL ] viewport to block green reflection-scatter.
Reminder: viewport is not AR coated for green!
Unfortunately, it seems that the large power supply which is used for the heater is dead. Or maybe I don't remember how to use it?
The AC power cord was plugged into a power strip which seems to work, since it also powers the IO chassis. We also tried swapping power strip ports.
We checked the front panel fuses. The power one was 3 Ohms and the 'bias' one was 55 Ohms. We also checked that the EPICS slider did, in fact, make voltage changes at the bias control input.
None of the front panel lights come on, but I also don't remember if that is normal.
Have those lights been dead a long time? We also reconnected the heater cable at the reference cavity side.
These old specs are not so bad. But we now want to get replacements for the TRX, TRY and PSL viewports that have R < 0.1% at both 532 and 1064 nm.
I don't know of any issues with keeping BK-7 as the substrate.
[ericq, lydia, steve, gautam]
AS beam on OM1
Link to IMG_2337.JPG
AS beam on OM2
AS beam on OM3
AS beam on OM4
I didn't manage to get a picture of the beam on OM5 because it is difficult to hold a card in front of it and simultaneously take a photo, but I did verify the centering...
It remains to update the CAD diagram to reflect the new AS beam path - there are also a number of optics/other in-vacuum pieces I noticed in the BS/PRM and OMC chambers which are not in the drawings, but I should have enough photos handy to fix this.
Here is the link to the Picasa album with a bunch of photos from the OMC, BS/PRM and ITMY chambers prior to putting the heavy doors back on...
SRM satellite box has been removed for diagnostics by Rana. I centered the SRM Oplev prior to removing this, and I also turned off the watchdog and set the OSEM bias voltages to 0 before pulling the box out (the PIT and YAW bias values in the save files were accurate). Other Oplevs were centered after dither-aligning the arms (see Attachment #8, ignore SRM). Green was aligned to the arms in order to maximize green transmission (GTRX ~0.45, GTRY ~0.5, but transmission isn't centered on cameras).
I don't think I have missed out on any further checks, so unless anyone thinks otherwise, I think we are ready for Steve to start the pumpdown tomorrow morning.
Tilted viewports installed in horizontal position. Atm2
Everybody is happy, except ITMY_UL or its satellite box.
Gautam shows perfect form in the OMC chamber.
[ericq, lydia, gautam]
Lydia and I investigated the extra green beam situation. Here are our findings.
I can't think of an easy fix for this - the layout on the OMC chamber is pretty crowded, and potential places to install a beam dump are close to the AS and IMC REFL beam paths (see Attachment #1). Perhaps Steve can suggest the best, least invasive way to do this. I will also try and nail down more accurately the origin of these spots tomorrow.
Light doors are back on for the night. I re-ran the dithers and centered the oplevs for all the test masses + BS. I am leaving the PSL shutter closed for the night.
I connected to the serial port using screen (through Terminal) and using Arduino's serial monitor, and in both cases received the same strings that were received through Python, so it's not a Python issue. Checked the other TC 200 module and was also receiving nonsense, but it was all question marks instead of mostly K's and ['s.
This rules out a few possible reasons for the weird data. Next steps are to set up and configure the Raspberry Pi (which has been interfaced before) and see if the problem continues.
IMC realignment, Arm dither alignment
Ashley Fowler, high school student, received basic 40m safety training; Lydia is her guardian angel.
Oct. 15, 2016
Another attempt (following elog 8755) to extract the oven transfer function from time series data using Matlab’s system identification functionalities.
The same time series data from elog 8755 was used in Matlab’s system identification toolbox to try to find a transfer function model of the system.
From elog 8755: H(s) is known from current PID gains: H(s) = 250 + 60/s +25s, and from the approximation G(s)=K/(1+Ts), we can expect the transfer function of the system to have 3 poles and 2 zeros.
I tried fitting both a continuous-time and a discrete-time transfer function with 3 poles and 2 zeros, as well as using the "quick start" option. The discrete-time transfer function model with 3 poles and 2 zeros gave the least inaccurate results, but it's still really far off (13.4% fit to the data).
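As a cross-check on what the fit should look like, here is a minimal sketch (not the actual Matlab fit) of the frequency response implied by the PID from elog 8755 together with the first-order plant approximation; K and T below are placeholder values, not measured oven parameters.

```python
import numpy as np

# H(s) = 250 + 60/s + 25 s (PID from elog 8755);
# G(s) = K/(1 + T s) is the assumed first-order plant.
K, T = 1.0, 10.0                   # placeholder plant parameters

def H(s):
    return 250 + 60 / s + 25 * s   # P + I/s + D*s

def G(s):
    return K / (1 + T * s)

w = np.logspace(-4, 1, 400)        # rad/s
s = 1j * w
olg = G(s) * H(s)                  # open-loop gain
cl = olg / (1 + olg)               # closed-loop response the fit should approach

# the integrator in H forces the closed loop to unity gain at DC
dc_gain = abs(cl[0])
```

Comparing a measured closed-loop response against this shape (for trial K, T) is one way to sanity-check the sysID output before trusting the fitted pole/zero counts.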
1. Obtain more time domain data with some modulation of the input signal (also gives a way to characterize nonlinearities like passive cooling). This can be done with some minor modifications to the existing code on the raspberry pi. This should hopefully lead to a better system ID.
2. Try iterative tuning approach (sample gains above and below current gains?) so that a tune can be obtained without having to characterize the exact behavior of the heater.
Oct. 16, 2016
- Found the Raspberry Pi, but it didn't have an SD card
- Modified code to run directly on a computer connected to the TC 200. Communication seems to be happening, but a UnicodeDecodeError is thrown saying that the received data can't be decoded.
- Some troubleshooting: tried utf-8 and utf-16, but neither worked. The raw data coming in is just strings of K's, ['s, and ?'s
- Will investigate possible reasons (update to Mac OS or a difference in Python version?), but it might be easier to just find an SD card for the Raspberry Pi which is known to work. In the meantime, modify code to obtain more time series data with variable input signals.
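A decode-agnostic sanity check (a sketch, independent of the TC 200 itself) is to dump the raw bytes in hex before trying to interpret them; latin-1 maps every byte value to a character, so it never raises UnicodeDecodeError the way utf-8/utf-16 do, and repeating patterns like 0x4b ('K') often point at a baud-rate or framing mismatch rather than a host-side bug.

```python
# Inspect garbled serial data without committing to an encoding.
def hexdump(raw: bytes) -> str:
    """Render raw bytes as space-separated hex pairs."""
    return " ".join(f"{b:02x}" for b in raw)

def safe_decode(raw: bytes) -> str:
    """latin-1 is a 1:1 byte->char map, so this can never raise."""
    return raw.decode("latin-1")

# e.g. a chunk like the K's and ['s we were seeing:
chunk = b"K[K[?"
print(hexdump(chunk))      # 4b 5b 4b 5b 3f
```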
In the afternoon, we took the heavy door off the OMC chamber as well, such that we could trace the AS beam all the way out to the AP table.
In summary, we determined the following today:
Attachment #5 is extracted from the 40m CAD drawing, which was last updated in 2012. It shows the beam path for the output beam from the BS all the way to the table (you may need to zoom in to see some labels). The drawing may not be accurate for the OMC chamber, but it does show all the relevant optics approximately in their current positions.
EQ will put up photos from the ITMY and BS/PRM chambers.
Plan for Monday: Reconfirm all the findings from today immediately after running the dither alignment so that we can be sure that the ITMs are well-aligned. Then start at OM1 and steer the beam out of the chambers, centering the beam as best as possible given other constraints on all the optics sequentially. All shutters are closed for the weekend, though I left the SOS iris in the chamber...
Here is the link to the Picasa album with a bunch of photos from the OMC chamber prior to us making any changes inside it - there are also some photos in there of the AS beam path inside the OMC chamber...
I say just fix the clipping. Don't worry about the PRM OSEM filters. We can do that next time when we put in the ITM baffles. No need for them on this round.
We re-checked IMC locking and arm alignments (we were able to lock and dither-align both arms today, and also made the Michelson spot look reasonable on the camera), and made sure that the AS and REFL spots were in the camera ballpark. We then proceeded to take the heavy doors off the ITMY and BS/PRM chambers. We also quickly made sure that it is possible to remove the side door of the OMC chamber with the current crane configuration, but have left it on for now.
The hunt for clipping now begins.
I did the following today to prepare for taking the doors off tomorrow.
I am leaving all shutters closed overnight.
So I think we are ready to take the doors off at 8am tomorrow morning, unless anyone thinks there are any further checks to be done first.
Should we look to do anything else now? One thing that comes to mind is should we install ITM baffles? Or would this be more invasive than necessary for this vent?
Steve reported to me that he was unable to ssh into the control room machines from the laptops at the Xend and near the vacuum rack. The problem was with pianosa being frozen up. I did a manual reboot of pianosa and was able to ssh into it from both laptops just now.
IFO is at atmosphere. The MC can be locked in air now.
The doors will be coming off tomorrow 8am sharp.
Do we want to install the ITM baffles?
What about the found OSEM filters?
XLR(F)-XLR(M) cable for the blue microphone is missing. Steve ordered one.
We found one in the fibox setup. As the fibox is not used during the vent, we are using this cable for the microphone.
Once we get the new one, it will go to the fibox setup.
I have completed the following non-Steve portions of the pre-vent checklist [wiki-40m.ligo.caltech.edu]
All shutters are closed. Ready for Steve to check nuts and begin venting!
- checked all jam nuts
- checked all viewports are covered
- turned oplev servos off
- took pictures of MEDM screens: SUS summaries, aligned oplev centering, IFO & MC alignment biases, and vacuum configuration
- checked particle counts
- checked crane operational safety
- closed V1, VM1, and annuluses
- opened VV1 and vented with Airgas brand, Industrial Grade Nitrogen [ 99.99% ] to 25 Torr
- switched over to Airgas-brand compressed air, Alphagas "AI UZ300", with total hydrocarbon content of 0.1 ppm
We engaged the HV driver for the output-port PZTs, hoping to mitigate the AS port clipping. Basically, the range of the PZTs is not enough to make the beam look clean. Our observations also suggest there is possibly multiple clipping in the chamber. We need another vent to set things right. Eric came into the lab and is preparing the IFO for it.
1. Before the test, the test masses have been aligned with the dither servo.
2. We looked at the beam shape on the AS camera with a single bounce beam. We confirmed that the beam is hard-clipped at the upper and left sides of the beam on the video display. This clipping is not happening outside of the chamber.
3. We brought an HV power supply to the short OMC rack. There is a power supply cable with two spades. The red and black wires are +150V and GND respectively.
4. A voltage of +/-10 V was applied to each of the four PZT drive inputs. We found that the motion of the beam on the camera was tiny, and in any case we could not improve the beam shape.
5. We wondered whether we were observing ANY improvement of the clipping. To check, we re-maximized the signal on the AS110 sensor each time we misaligned with the PZTs. Basically, the original alignment already gave the best power we could get. We thought this was weird.
6. Then we moved the AS port spot with ITMX. We could clearly make the spot more round. However, this reduced the power at the AS port by ~15%. When the beam was clipped further, the power went down again. Basically, the initial alignment gave us the max power we could get. As the max power was obtained with the clipped beam, we got confused, and felt it safer to check the situation with the chambers open.
During this investigation, we moved the AS port optics and the AS camera, so they are no longer a precise reference for the alignment. The PZT HV setup has been removed.
Johannes acquired a crate to contain the Acromag setup and wiring, and installed a rail along the bottom panel so that the ADC units will be oriented vertically with the ethernet ports facing up. We briefly talked about what the layout should be, and are thinking of using 2 rails, one for ADCs and one for DACs. We want to design a generic front panel to accept 25-pin D-sub inputs and maybe also BNCs, which we can use for all the Acromag crates.
I got the EPICS session for the Acromag to run on c1iscex and was able to access the channel values using caget on donatella. However, I get the following warning:
cas warning: Using dynamically assigned TCP port 48154,
cas warning: but now two or more servers share the same UDP port.
cas warning: Depending on your IP kernel this server may not be
cas warning: reachable with UDP unicast (a host's IP in EPICS_CA_ADDR_LIST)
It seems like there might be a way to assign a port for each unit, if this is a problem.
Also, c1iscex doesn't have tmux; what's the best way to run the modbusApp and then detach? Right now I just left an EPICS session running in an open terminal.
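One possible workaround, sketched below (the paths, st.cmd name and port number are placeholders, not the actual c1iscex setup): pin the portable CA server to a fixed port via EPICS_CAS_SERVER_PORT so the UDP-sharing warning goes away, and detach the IOC with nohup since tmux isn't available.

```shell
# Hypothetical launcher -- paths, st.cmd name, and port are assumptions.
export EPICS_CAS_SERVER_PORT=5064           # pin the CA server to a known port
cd /path/to/modbusIOC
nohup ./modbusApp st.cmd > /tmp/modbusIOC.log 2>&1 &
echo $! > /tmp/modbusIOC.pid                # stash the PID for later cleanup
```

If multiple portable CA servers must run on the same host, each one would need its own EPICS_CAS_SERVER_PORT so that clients can reach them by unicast.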
The two 40 mm aperture baffles at the ends were replaced by 50 mm ones. ITM baffles with 50 mm aperture are baked and ready for installation.
Green welding glass, 7" x 9" shade #14 with a 40 mm hole, and mounting fixtures are ready to reduce scattered light on the SOS.
PEEK 450CA shims and U-shaped clips will keep these plates damped.
Looks like what were PRM problems are now seen in the SRM channels, while PRM itself seems well behaved. This supports the hypothesis that the satellite box is problematic, rather than any in-vacuum shenanigans.
Eric noted in this elog that when this problem was first noticed, switching Satellite boxes didn't seem to fix the problem. I think that the original problem was that the Al foil shorted the contacts on the back of the OSEM. Presumably, running the current driver with (close to) 0 load over 2 months damaged that part of the Satellite box circuitry, which led to the subsequent observations of glitchy behaviour after the pumpdown. Which begs the question - what is the quick fix? Do we try swapping out the LM6321 in the LR LED current driver stage?
GV Edit Nov 2 2016: According to Rana, the load of the high speed current buffer LM6321 is 20 ohms (13 from the coil, and 7 from the wires between the Sat. Box and the coil). So, while the Al foil was shorting the coil, the buffer would still have seen at least 7 ohms of load resistance, not quite a short circuit. Moreover, the schematic suggests that the kind of overvoltage protection scheme suggested on page 6 of the LM6321 datasheet has been employed. So it is becoming harder to believe that the problem lies with the output buffer. In any case, we have procured 20 of these discontinued ICs for debugging should we need them, and Steve is looking to buy some more. Ben Abbot will come by later in the afternoon to try and help us debug.
Perhaps the problem is electrical? The attached plot shows a downward trend for the LR sensor output over the past 20 days that is not visible in any of the other 4 sensor signals. The Al foil was shorting the electrical contacts for nearly 2 months, so perhaps some part of the driver circuit needs to be replaced? If so a Satellite Box swap should tell us more, I will switch the PRM and SRM satellite boxes. It could also be a dying LED on the OSEM itself I suppose. If we are accessing the chamber, we should come up with a more robust insulating cap solution for the OSEMs rather than this hacky Al foil + kapton arrangement.
The PRM and SRM Satellite boxes have been switched for the time being. I had to adjust some of the damping loop gains for both PRM and SRM and also the PRM input matrix to achieve stable damping as the PRM Satellite box has a Side sensor which reads out 0-10V as opposed to the 0-2V that is usually the case. Furthermore, the output of the LR sensor going into the input matrix has been turned off.
100 sapphire prisms arrived.
It started here
Summary pages will be unavailable today due to LDAS server maintenance. This is unrelated to the issue that Rana reported.
I've re-submitted the Condor job; pages should be back within the hour.
Been non-functional for 3 weeks. Anyone else notice this? images missing since ~Sep 21.
Still no luck relocking, but got a little further. I disabled the output of the problematic PRM OSEM, and it seems to work ok. Looking at the sensing of the PRMI with the arms held off, REFL165 has better MICH SNR due to its larger separation in demod angle. So, I tried the slightly odd arrangement of 33I for PRCL and 165Q for MICH. This can indefinitely hold through the buzzing resonance. However, I haven't been able to find the sweet spot for turning on the CARM_B (CM_SLOW) integrator, which is necessary for turning up the AO and overall CARM gain. This is a familiar problem, usually solved by looking at the value far from resonance on either side, and taking the midpoint as the filter module offset, but this didn't work tonight. Tried different gains and signs to no avail.
Tonight, and during last week's locking, we noticed something intermittently kicking the PRM. I've determined that PRM's LR OSEM is problematic again. The signal is coming in and out, which kicks the OSEM damping loops. I've had the watchdog tripped for a little bit, and here's the last ten minutes of the free swinging OSEM signal:
Here's the hour trend of the PRM OSEMs over the last 7 days, and a plot of just LR since the fix on the 9th of September.
It looks like it started misbehaving again on the evening of the 5th, which was right when we were trying to lock... Did we somehow jostle the suspension hard enough to knock the foil cap back into a bad spot?
I did a quick survey of the drive electronics for the PZT OM mirrors today. The hope is that we can correct for the clipping observed in the AS beam by using OM4 (in the BS/PRM chamber) and OM5 (in the OMC chamber).
Here is a summary of my findings.
I hope these have the correct in-vacuum connections. We also have to hope that the clipping is downstream of OM4 for us to be able to do anything about it using the PZT mirrors.
We did the following today morning:
It is unfortunate we have to do this dance each time c1susaux has to be restarted, but I guess it is preferable to repeated unsticking of the optic, which presumably applies considerable shear force on the magnets...
After Wednesday's locking effort, Eric had set the IFO to the PRMI configuration, so that we could collect some training data for the PRC angular feedforward filters and see if the filter has changed since it was last updated. We should have plenty of usable data, so I have restored the arms now.
Local earthquake, magnitude 3.7, trips PRM
What about the MC?
I wanted to see what the reason is for such a large coupling between the pitch and yaw motions.
The first test was to check orthogonality of the bias sliders. It was done by monitoring the suspension motion using the green beam.
The Y arm cavity was aligned to the green. The damping of ITMY was all turned off except for SD.
Then ITMY was misaligned using the bias sliders. The ITMY face CCD view shows that the beam responds reasonably orthogonally to the pitch and yaw sliders.
I also confirmed that the OPLEV signals showed a reasonably orthogonal response to the pitch and yaw misalignments.
=> My intuition was that the coils (including the gain balance) are OK to a first approximation.
Then, I started to excite the resonant modes. I agree that it is difficult to excite a pure pitch motion at the resonance.
So I wanted to see how the mixing is frequency dependent.
The transfer functions between ITMY_ASCPIT/YAW_EXC to ITMY_OPLEV_PERROR/YERROR were measured.
The attached PDFs show that the transfer functions are basically orthogonal (i.e. pitch exc goes to pitch, yaw exc goes to yaw) except at the resonant frequencies.
I think the problem is that the two modes are almost degenerate but not completely. This elog shows that the resonant freq of the ITMY modes are particularly close compared to the other suspensions.
If they were completely degenerate, the motion would just obey our excitation. However, they are slightly split. Therefore, we suffer from the coupled modes of P and Y at the resonant freq.
However, the mirror motion obeys the excitation off resonance, as these two modes are similar enough.
This means that the problem exists only at the resonant frequencies. If the damping servos have a 1/f slope around the resonant freqs (the usual case), the antiresonance due to the mode coupling does not cause servo instability, thanks to the sufficient phase margin.
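The near-degenerate picture above can be illustrated with a toy two-mode model (illustrative numbers only, not measured ITMY parameters): take the eigenmodes to be the pitch/yaw basis rotated by some angle, with two closely split resonances. The cross-coupling term is proportional to the difference of the two mode responses, so it only becomes large near the resonances.

```python
import numpy as np

# Two nearly degenerate modes, rotated by an assumed mixing angle.
f1, f2, Q = 0.95, 0.97, 200.0           # close mode freqs [Hz], quality factor
w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
theta = np.deg2rad(30)                   # assumed mode-mixing angle
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])          # eigenmode basis -> pit/yaw basis

def response(f):
    """2x2 transfer matrix in the pit/yaw basis at drive frequency f [Hz]."""
    w = 2 * np.pi * f
    h1 = 1 / (w1**2 - w**2 + 1j * w * w1 / Q)
    h2 = 1 / (w2**2 - w**2 + 1j * w * w2 / Q)
    return R @ np.diag([h1, h2]) @ R.T   # cross terms ~ sin*cos*(h1 - h2)

def cross_ratio(f):
    H = response(f)
    return abs(H[1, 0]) / abs(H[0, 0])   # yaw response to a pitch drive

off = cross_ratio(0.5)   # well below resonance: excitation basis is followed
on = cross_ratio(f1)     # on a mode frequency: the split modes mix strongly
```

With these numbers the cross-coupling ratio grows by more than an order of magnitude between the off-resonance drive and the drive at f1, consistent with the measured TFs being orthogonal except at the resonances.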
In conclusion, unfortunately we can't diagonalize the sensors and actuators using the natural modes, because our assumption of mode purity is not valid.
We can leave the pitch/yaw modes undiagonalized, or just trust the oplevs as a relatively reliable reference of pitch and yaw and set the output matrix accordingly.
The figures will be rotated later.
Found the MC autolocker kept failing. It turned out that c1iool0 and c1psl had gone bad and did not accept EPICS commands.
Went to the rack and power cycled them. Burt restored with the snapshot files at 5:07 today.
The PMC lock was restored, IMC was locked, WFS turned on, and WFS output offloaded to the bias sliders.
The PMC seemed highly misaligned, but I didn't bother myself to touch it this time.
The IFO room temp is up
The IFO room temp is up a bit, and it is coming down. The outside temp is not really high.
The RGA has been removed for repair. Its volume is at atmosphere and sealed. The P4 reading of 38 Torr is not correct.
Summary: At the 40m meeting yesterday, Eric Q. gave the suggestion that we accept the input matrix weirdness and adjust the output matrix by driving each coil individually so that it refers to the same degrees of freedom. After testing this strategy, I don't think it will work.
Yesterday evening I tested this idea by driving one ITMY coil at a time, and measuring the response of each of the free swing modes at the drive frequency. I followed more or less the same procedure as the standard diagonalization: responses to each of the possible stimuli are compared to build a matrix, which is inverted to describe the responses given the stimuli. For the input matrix, the sensor readings are the responses and the free swing peaks are the stimuli. For the output matrix, the sensor signals transformed by the diagonalized input matrix are the responses of the dofs being compared, and the drive-frequency peak associated with each coil output is the stimulus. However, the normalization still happens to each dof independently, not to each coil independently.
The output matrix I got had good agreement with the ITMY input matrix in the previous elog: for each dof/osem the elements had the same sign in both input and output matrices, so there are no positive feedback loops. The relative magnitude of the elements also corresponded well within rows of the input matrix. So the input and output matrices, while radically different from the ideal, were consistent with each other and referred to the same dof basis. So, I applied these new matrices (both input and output) to the damping loops to test whether this approach would work.
drive-generated output matrix:
UL UR LR LL SD
pit 1.701 -0.188 -2.000 -0.111 0.452
yaw 0.219 -1.424 0.356 2.000 0.370
pos 1.260 1.097 0.740 0.903 -0.763
sid 0.348 0.511 0.416 0.252 1.000
but 0.988 -1.052 0.978 -0.981 0.060
However, when Gautam attempted to lock the Y arm, we noticed that this change significantly impacted alignment. The alignment biases were adjusted accordingly and the arm was locking. But when the dither was run, the lock was consistently destroyed. This indicates that the dither alignment signals pass through the SUS screen output matrix. If the output matrix pitch and yaw columns refer instead to the free swing eigenmodes, anything that uses the output matrix and attempts to align pitch and yaw will fail. So, the ITMY matrices were restored to their previous values: a close to ideal input matrix and naive output matrix. We could try to change everything that is affected by the output matrices to be independent of a transformation to the free swing dof basis, and then implement this strategy. But to me, that seems like an unnecessary amount of changes with unpredictable consequences in order to fix something that isn't really broken. The damping works fine, maybe even better, when the input matrix is set by the output matrix: we define pitch, for example, to be "the mode of motion produced by a signal to the coils proportional to the pitch row of the naive output matrix," and the same for the other dofs. Then you can drive one of these "idealized" dofs at a time and measure the sensor responses to find the input matrix. (That is how the input matrix currently in use for ITMY was found, and it seems to work well.)
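The matrix bookkeeping described above can be sketched as follows (with made-up mixing numbers, not the measured ITMY responses): drive one stimulus at a time, collect every response into a matrix, and invert it so that applying the result to the raw signals recovers the driven basis.

```python
import numpy as np

# Illustrative diagonalization: response[i, j] = sensor i's response
# when dof j is driven. An ideal suspension would give the identity,
# so perturb it a little to mimic real sensor/actuator mixing.
rng = np.random.default_rng(0)
ndof = 5  # e.g. pit, yaw, pos, sid, but

response = np.eye(ndof) + 0.1 * rng.standard_normal((ndof, ndof))

# the input matrix maps sensor signals back to dofs
input_matrix = np.linalg.inv(response)

# applying the input matrix to the sensor signals recovers the driven dofs
dof_est = input_matrix @ response
```

The same inversion works for the output-matrix direction (coils as stimuli, transformed sensors as responses); as noted above, the per-dof normalization is a separate step applied row by row afterwards.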
[ericq, Gautam, Lydia]
We spent some time tonight trying to revive the PRFPMI. (Why PR instead of DR? Not having to deal with SRM alignment and potentially get a better idea of our best-case PRG). After the usual set up and warm up, we found ourselves unable to hold on to the PRMI while the arms flash. In the past, this was generally solved through clever trigger matrix manipulations, but this didn't really work tonight. We will meditate on the solution.
This elog is meant to review some of the important changes made during the vent this summer - please add to this if I've forgotten something important. I will be adding this to the wiki page for a more permanent record shortly.
Optics, OSEM and suspension status:
ITMX & ITMY
Summary of characterization tasks to be done:
There are multiple methods by which the arm loss can be measured, including, but not limited to:
We found that the second method is extremely sensitive to errors in the ITM transmissivity. The first method was not an option for a while because the AOM (which serves as a fast shutter to cut the light to the cavity and thereby allow measurement of the cavity ringdown) was not installed. Johannes and Shubham have re-installed this so we may want to consider this method.
Most of the recent efforts have relied on the 3rd method, which itself is susceptible to many problems. As Yutaro found, there is something weird going on with ASDC which makes it perhaps not so reliable a sensor for this measurement (unfortunately, no one remembered to follow up on this during the vent, something we may come to regret...). He performed some checks and found that for the Y arm, POY is a suitable alternative sensor. However, the whitening gain was at 0dB for the measurements that Johannes recently performed (Yutaro does not mention what whitening gain he used, but presumably it was not 0). As a result, the standard deviation during the 10s averaging was such that the locked and misaligned readings had their 'fuzz' overlapping significantly. The situation is worse for POX DC - today, Eric checked that the POX DC and POY DC channels are indeed reporting what they claim, but we found little to no change in the POX DC level while misaligning the ITM - even after cranking the whitening gain up to 40!
Eric then suggested deriving ASDC from the AS110 photodiode, where there is more light. This increased the SNR significantly - in a 10s averaging window, the fuzz is now about 10 ADC counts out of ~1500 (~<1%) as opposed to ~2counts out of 30 previously. We also set the gains of POX DC, POY DC and ASDC to 1 (they were 0.001,0.001 and 0.5 respectively, for reasons unknown).
I ran a quick measurement of the X arm loss with the new ASDC configuration, and got a number of 80 +/- 10 ppm (7 datapoints), which is wildly different from the ~250ppm number I got from last night's measurement with 70 datapoints. I was simultaneously recording the POX DC value, which yielded 40 +/- 10 ppm.
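For reference, here is a sketch of how a reflected-power ratio maps to a loss number under the usual two-mirror-cavity model. All numbers below (the ITM and ETM transmissivities and the locked/misaligned power ratio) are assumptions for illustration, not the measured 40m values.

```python
import numpy as np
from scipy.optimize import brentq

T1 = 0.0138   # assumed ITM power transmissivity
T2 = 15e-6    # assumed ETM power transmissivity

def refl_ratio(loss):
    """On-resonance reflected/incident power for a lossy two-mirror cavity."""
    r1 = np.sqrt(1 - T1)
    r2 = np.sqrt(1 - T2 - loss)        # fold round-trip loss into the far mirror
    r_cav = (r1 - r2) / (1 - r1 * r2)  # on-resonance amplitude reflectivity
    return r_cav**2

# Measured quantity: locked / misaligned DC power ratio
# (misaligned ITM ~ full reflection). Example value only:
m = 0.97
loss = brentq(lambda L: refl_ratio(L) - m, 1e-7, 1e-3)
```

With these assumed numbers the solver lands in the tens-of-ppm range; the strong dependence of the answer on T1 is why the text above calls method 2 extremely sensitive to errors in the ITM transmissivity.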
We also discovered another possible problem today - the spot on the AS camera has been looking rather square (clearly not round) since, I presume, closing up and realigning everything. By looking at the beam near the viewport on the AS table for various configurations of the ITM, we were able to confirm that whatever is causing this distortion is in the vacuum. By misaligning the ITM, we are able to recover a nice round spot on the AS camera. But after running the dither align script, we revert to this weirdly distorted state. While closing up, no checks were done to see how well centered we are on the OMs, and moreover, the DRMI has been locked since the vent I believe. It is not clear how much of an impact this will have on locking the IFO (we will know more after tonight). There is also the possibility of using the PZT mounted OMs to mitigate this problem, which would be ideal.
Long story short -
GV Edit 8 Oct 2016: Going through some old elogs, I came across this useful reference for loss measurement. It doesn't talk about the reflection method (Method 3 in the list at the top of this elog), but suggests that cavity ringdown with the Trans PD yields the most precise numbers, and also allows for measuring TITM
I installed a 10% BS to pick off some of the light going to the IR fiber, and have added a Thorlabs PDA55 PD to the EX table setup. The idea is to be able to monitor the power output of the EX NPRO over long time scales, and also to serve as an additional diagnostic tool for when ALS gets glitchy etc. There is about 0.4mW of IR power incident on the PD (as measured with the Ophir power meter), which translates to ~2500 ADC counts (~1.67V as measured with an Oscilloscope set to high impedance directly at the PD output). The output of the PD is presently going to Ch5 of the same board that receives the OL QPD voltages (which corresponds to ADC channel 28). Previously, I had borrowed the power and signal cables from the High-Gain Transmon PD to monitor this channel, but today I have laid out independent cabling and also restored the Transmon PD to its nominal state.
On the CDS side of things, I edited C1SCX to route the signal from ADC Ch28 to the ALS block. I also edited the ALS_END library part to have an additional input for the power monitor, to keep the naming conventions consistent. I have added a gain in the filter module to calibrate the readout into mW using these numbers. The channel is called C1:ALS-X_POWER_OUT, and is DQed for long-term trending purposes.
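The calibration gain mentioned above amounts to a simple ratio of the quoted numbers (a sketch; the actual filter-module gain may fold in sign or whitening factors):

```python
# 0.4 mW of incident power reads ~2500 ADC counts, so the counts->mW
# calibration gain is just the ratio of the two measurements.
incident_mw = 0.4
adc_counts = 2500.0
gain_mw_per_count = incident_mw / adc_counts   # 1.6e-4 mW/count

def counts_to_mw(counts):
    """Convert C1:ALS-X_POWER raw counts to mW using the measured ratio."""
    return counts * gain_mw_per_count
```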
The main ALS screen is a bit cluttered so I have added this channel to the ALS overview MEDM screen for now..
We left the PSL shutter closed overnight and observed the POXDC, POYDC and ASDC offsets. While POY has small fluctuations compared to the signal level, POX is worse off, and the drifts we observed live in the DC reading are in the same ballpark as the offset fluctuations. The POXDC level also unexpectedly increased suddenly without the PSL shutter being opened, which we can't explain. The data we took using POXDC cannot be trusted.
Even the ASDC occasionally shows some fluctuations, which is concerning because the change in value rivals the difference between locked and misaligned state. It turns out that the green shutters were left open, but that should not really affect the detectors in question.
We obtained loss numbers by measuring the arm reflections at the ASDC port instead. LSCoffsets was run before the data-taking run. For each arm, we misaligned the respective other ITM to the point that moving it no longer had an impact on the ASDC reading. From a few quick data points we conclude the following numbers:
XARM: 247 ppm +/- 12 ppm
YARM: 285 ppm +/- 13 ppm
This is not in good agreement with the POYDC value. The script is currently running for the YARM for better statistics, which will take a couple hours.
ITMX is misaligned for the purpose of this measurement, with the original values saved.
GV edit 5Oct2016: Forgot to mention here that Johannes marked the spot positions on the ITMs and ETMs (as viewed on the QUAD in the control room) with a sharpie to reflect the current "well aligned" state.
The last good RGA scan, at vent 78, day 38
RGA background scan
Vacuum Status: Chamber Open
All chamber annuluses are vented. The vacuum monitor screen is not communicating with the gauges. The valve position indicators are working.
RGA is pumped by Maglev through VM2
We poked around trying to figure out the X PDH situation. In brief, the glitchiness comes and goes; not sure what causes it. Tried temp servo on/off and flow bench fan on/off. Gautam placed a PD to pick off the pre-doubler AUX X IR light to see if there is some intermittent intensity fluctuation overnight. During non-glitchy times, the ALSX noise profile doesn't look too crazy, but there is a new peak around 80 Hz and somewhat elevated noise compared to historical levels above 100 Hz. It's all coherent with the PDH control up there though, and still looks like smooth frequency noise...
NB: The IR intensity monitoring PD is temporarily using the high gain Transmon PD ADC channel, and is thus the source of the signal at C1:LSC-TRY_OUT_DQ. If you want to IR lock the X arm, you must change the transmon PD triggering to use the QPD.
I started a script on Friday night to collect some data for a reflection armloss measurement of the XARM. Unfortunately there seems to have been a hiccup in the data transfer and some errors were produced, so we couldn't really trust the numbers.
Instead, we took a series of manual measurements today and made sure the interferometer is well behaved during the averaging process. I wrote up the math behind the measurement in the attached pdf.
The numbers we used for the calculations are the following:
While we average about 50 ppm +/- 15 ppm for the XARM loss with a handful of samples, in a few instances the calculations actually yielded negative numbers, so there's a flaw in the way I'm collecting the data. There seems to be a ~3% drift in the signal level on the PO port on a timescale of minutes that does not show up in the mode cleaner transmission. The signals are somewhat small, so we're closing the shutter overnight to see if there could be an offset, and will investigate further tomorrow. I went back and checked my data for the YARM, but that doesn't seem to be affected by it.
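For reference, a sketch of the simplest version of the reflection method (the full math is in the attached pdf): for an overcoupled cavity, the locked/misaligned reflected-power ratio is roughly 1 - 4L/T_ITM, so a ratio drifting slightly above unity, i.e. an offset of the kind described above, immediately produces a negative loss. The ITM transmission below is an assumed placeholder value, not the measured one:

```python
T_ITM = 0.014  # assumed ITM power transmission; check against the real value

def arm_loss_ppm(p_locked, p_misaligned, t_itm=T_ITM):
    """Round-trip loss from the reflected-power ratio, using the
    overcoupled-cavity approximation P_lock/P_mis ~ 1 - 4*L/T_ITM."""
    ratio = p_locked / p_misaligned
    return 0.25 * t_itm * (1.0 - ratio) * 1e6  # ppm

print(arm_loss_ppm(0.986, 1.000))  # ~49 ppm, in the ballpark quoted above
print(arm_loss_ppm(1.003, 1.000))  # a 0.3% upward drift -> negative "loss"
```

The second call shows how sensitive the result is: a sub-percent drift in the PO signal level is enough to push the inferred loss negative.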
Some things I did last night:
I measured the X PDH OLG and turned the gain down by ~6dB to bring the UGF back to 10kHz, with ~50deg phase margin and 10dB gain margin. However, the error signal on the oscilloscope remained pretty ratty. Zooming in, it was dominated by glitches occurring at 120Hz. I went to hook up the SR785 to the control signal monitor to see what the spectrum of these glitches looked like, but weirdly enough, connecting the SR785's input made the glitches go away. In fact, with one end of a BNC cable plugged into a floating SR785 input, touching the other end's shield to any of the BNC shields on the uPDH chassis made the glitches go away.
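As a sanity check on the numbers above (assuming the loop falls off roughly like 1/f near the crossover, so the UGF scales linearly with gain), a -6 dB change is an amplitude factor of ~0.5, consistent with the UGF having drifted up to roughly twice its nominal 10 kHz:

```python
db_change = -6.0
factor = 10 ** (db_change / 20)  # amplitude (gain) ratio
print(round(factor, 2))          # -> 0.5

# For a ~1/f loop near the crossover, the UGF scales with the gain,
# so a hypothetical ~20 kHz UGF comes back down to ~10 kHz:
print(round(20e3 * factor))      # Hz
```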
This suggested some ground loop shenanigans to me; everything in the little green PDH shelves is plugged into a power strip which is itself plugged into a power strip at the X end electronics rack, behind all of the sorensens. I tried plugging the power strip into some different places (including over by the chamber where the laser and green refl PD are powered), but nothing made the glitches go away. In fact, it often resulted in being unable to lock the PDH loop for unknown reasons. This remains unsolved.
As Gautam and Johannes observed, the X green beat was puny. By hooking up a fast scope directly to the beat PD output, I was able to fine tune the alignment to get an 80mVpp beat, which I think is substantially bigger than what we used to have. (Is this, plus the PDH gain change, really attributable to arm loss reduction? Hm)
However, the DFD I and Q outputs have intermittent glitches that are big enough to saturate the ADC when the whitening filters are on, even with 0dB whitening gain, which makes it hard to see any real ALS noise above a few tens of Hz or so. Turning off the whitening and cranking up the whitening gain still shows a reasonably elevated spectrum from the glitches. (I left a DTT instance with a spectrum open on the desktop, but forgot to export...) The glitches are not uniformly spaced at 120Hz as in the PDH error signal. However, the transmitted green power also showed intermittent quick drops. This also remains unsolved for the time being.
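Since the DFD glitches aren't at a fixed 120 Hz spacing, one generic way to characterize them from downloaded time-series data is to timestamp outlier sample-to-sample jumps and look at their spacing statistics. A crude sketch on synthetic data (the sample rate and threshold here are arbitrary; this is not the analysis we actually ran):

```python
import numpy as np

def find_glitches(x, fs, n_sigma=5.0):
    """Return times (s) where the sample-to-sample jump exceeds
    n_sigma times the median absolute jump."""
    dx = np.abs(np.diff(x))
    threshold = n_sigma * np.median(dx)
    return np.flatnonzero(dx > threshold) / fs

# Synthetic test: a smooth 3 Hz signal with two injected glitches
fs = 16384
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 3 * t)
x[5000] += 2.0
x[12000] -= 3.0
print(find_glitches(x, fs))  # flags jumps around t ~ 0.31 s and ~ 0.73 s
```

Each single-sample glitch shows up as a pair of flagged jumps (into and out of the outlier sample), so consecutive timestamps should be deduplicated before computing spacings.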
Today we re-installed the fiber coupler on the X endtable to couple some of the EX AUX IR light into a fiber that runs to the PSL table, where it is combined with a similar PSL pickoff to make an IR beat between the EX AUX laser and the PSL. The main motivation behind this was to make the process of finding the green beatnote easier. We used JAMMT (just another mode matching tool) to calculate a two lens solution to couple the light into the collimator - we use a +200mm and -200mm lens, I will upload a more detailed mode matching calculation + plot + picture soon. We wanted to have a beam waist of 350um at the collimator, a number calculated using the following formula from the Thorlabs website:
where d is the diameter of the output beam from the collimator, f is the collimating lens focal length and MFD is 6.6um for the fiber we use.
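The formula itself does not appear to have survived into this entry; given the quantities defined above, it is presumably the standard collimated-beam-diameter relation from the Thorlabs fiber collimator pages:

```latex
d = \frac{4 \lambda f}{\pi \, \mathrm{MFD}}
```

With lambda = 1064 nm and MFD = 6.6 um, a beam diameter of d = 700 um (i.e. a 350 um waist) would correspond to f ~ 3.4 mm; treat this as a back-of-the-envelope check rather than the actual calculation.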
There is ~26mW of IR light coming through the BS after the EX AUX - after playing around with the 6 axis stage that the coupler is mounted on, Johannes got the IR transmission to the PSL table up to ~11.7mW. The mode matching efficiency of 45% is certainly not stellar, but we were more curious to find a beat and possibly measure the X arm loss so we decided to accept this for now - we could probably improve this by moving the lenses around. We then attenuated the input beam to the fiber by means of an ND filter such that the light incident on the coupler is now ~1.3mW, and the light arriving at the PSL table from the EX laser is ~550uW. Along with the PSL light, after the various couplers, we have ~500uW of light going to the IR beat PD - well below its 2mW threshold.
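The power budget above is self-consistent; as a quick arithmetic check (using only the numbers quoted in this entry):

```python
p_in, p_out = 26.0, 11.7  # mW at the coupler / delivered to the PSL table
eta = p_out / p_in         # mode-matching (coupling) efficiency
print(f"{eta:.0%}")        # -> 45%

# After the ND filter: ~1.3 mW onto the coupler should give
# eta * 1.3 ~ 0.59 mW at the PSL table, consistent with the ~550 uW measured.
print(round(eta * 1.3, 2))
```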
The IR beat was easily found with the frequency counter setup. However, there was no evidence of a green beat. So we went to the PSL table and did the near-field-far-field alignment onto the beat PD. After doing this, we were able to see a beat - but the amplitude was puny (~-60dBm, we are more used to seeing ~-20dBm on the network analyzer in the control room). Perhaps this can be improved by tweaking the alignment onto the PD while monitoring the RF output with an oscilloscope.
Moreover, the green PDH problems with the X end persist - even though the arm readily locks to a TEM00 mode, it frequently spontaneously drops lock. I twiddled the gain on the uPDH box while watching the locked error signal on an oscilloscope, but was unable to mitigate the situation. Perhaps the loop shape needs to be measured; that should tell us if the gain is too low or too high. But ALS is getting closer to its nominal state...
Johannes is running his loss measurement script on the X arm - but this should be done by ~10pm tonight.
[ Gautam and Steve ]
c1vac1 and c1vac2 were rebooted and the gauges are communicating now. V1, VA6, V5 and V4 were closed and disconnected to avoid unexpected valve switching. All went smoothly.
The new ITcc gauge reads 1e-5 Torr as CC1. This is the gauge that should be logged in a slow channel.
TP2 fore line dry pump was replaced this morning after 382 days of operation.
TP3 dry pump is very noisy, but its pressure is still 47 mTorr
[ericq, Gautam, Steve]
Following roughly the same procedure as ELOG 11354, c1vac1 and c1vac2 were rebooted. The symptoms were identical to the situation in that ELOG; c1vac1 could be pinged and telneted to, but c1vac2 was totally unresponsive.
The only change in the linked procedure was that we did not shut down the maglev. Since I unwittingly had it running for days without V4 open while Steve was away, we now know that it can handle shorter periods of time than that...
Upon reboot, many channels were readable again; unfortunately, the channels for TP2 and TP3 are still blank. We were able to return to the "Vacuum normal" state, but because of unknown communication problems with VM1's interlock, we can't open VM1 for the RGA. Instead we opened VM2 to expose the RGA to the main IFO volume, but this isn't part of the "Normal" state definition, so things currently read "Undefined state".