I'm just on an elog roll this morning...
Again while poking around inside the IFO room, I noticed that they have replaced all of our incandescent lights with CFLs. Do we care? The point of having the incandescent lights on a separate switch from the big fluorescent lights was so that we could have only 60Hz lines, not wide-band noise if we want the lights on while locking.
I'm not sure that we actually care, because more often we just turn off all the lights while trying to do serious locking, but if we do care, then someone needs to ask the custodial staff (or someone else?) to undo the change.
I included the 'Servo' output from the D040180 in c1ioo, which I hoped would be a better measure of the MC length fluctuations. It goes into ADC6, labeled CH7 on the physical board.
Servo-output => C1:IOO-MC_SERVO. (Already present is Out1-output => C1:IOO-MC_F).
At low frequencies the servo signal is about 4.5 dB bigger. Both are recorded at 256 Hz now, which is the reason for the downward slope at about 100 Hz.
For the past couple of days, Jan and I have been discussing a major issue in COMSOL involving modeling both oscillatory and non-oscillatory forces simultaneously while using FDA. It turns out that he and I had run into the same problem at different times and with different projects. After discussing with an expert, Jan had decided in the past that this simple task was impossible via direct means.
The issue could still be resolved if there was a way for us to work on the Weak Form of the differential equations describing the system:
According to current documentation, however, Weak Form analysis is not yet possible in COMSOL 4.0. Jan suggested moving my work over to ANSYS or waiting for a later upgrade, but there's probably not enough time left in my SURF for either of these options. I suggested attempting a backwards-compatibility test in COMSOL 3.5; Jan and I will be exploring this option some time next week.
I changed the carm_cm_up.sh script so that it requires fewer human interventions. Rather than stopping and asking for things like "Press enter to confirm PRMI is locked", it checks for itself. The sequence that we have in the up script works very reliably, so we don't need to babysit the first several steps anymore.
Another innovation tonight that Q helped put in was servoing the CARM offset to get a certain arm power. A failing of the script had been that depending on what the arm power was during transition over to sqrtInvTrans, the arm power was always different even if the digital offset value was the same. So, now the script will servo (slowly!!) the offset such that the arm power goes to a preset value.
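The offset-servo logic can be sketched as a toy loop. This is a minimal illustration only, not the actual carm_cm_up code; the plant model, gains, and function names here are made up:

```python
def arm_power(carm_offset):
    """Toy plant: arm power builds up as the CARM offset shrinks.
    (Illustrative only -- not the real IFO response.)"""
    return 10.0 / (1.0 + (carm_offset / 50.0) ** 2)

def servo_offset(target_power, offset, gain=5.0, tol=0.05, max_steps=500):
    """Slowly walk the digital CARM offset until the (normalized)
    arm transmission reaches target_power."""
    for _ in range(max_steps):
        error = target_power - arm_power(offset)
        if abs(error) < tol:
            break
        # small steps only -- the real script servos slowly on purpose
        offset -= gain * error * (1.0 if offset > 0 else -1.0)
    return offset

offset = servo_offset(target_power=3.0, offset=200.0)
```

The point of servoing on arm power rather than jumping to a fixed digital offset is exactly as described above: the same offset value lands at a different power depending on where the sqrtInvTrans transition happened.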
The biggest real IFO progress tonight was that I was able to actually measure the CARM and DARM loops (thanks ChrisW!), and so I discovered that even though we are using (TRX-TRY)/(TRX+TRY) for our IR DARM error signal, we needed to increase the digital gain for DARM as the CARM offset was reduced. For ALS lock and DC trans diff up to arm powers of 3, we use the same ol' gain of 6. However, between 3 - 6, we need a gain of 7. Then, when we go to arm powers above 6 we need a gain of 7.5. I was also measuring the CARM loop at each of these arm powers (4, 6, 7, 8, 9), but the gain of 4 that we use for sqrtInvTrans was still fine.
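For reference, the gain schedule found tonight can be written as a simple lookup (these are the empirical values from this particular night; as noted in a later entry, they may drift day-to-day):

```python
def darm_gain(arm_power):
    """Empirical DARM digital gain schedule for the (TRX-TRY)/(TRX+TRY)
    error signal, as measured this night."""
    if arm_power <= 3:
        return 6.0    # ALS lock and DC trans diff, up to arm powers of 3
    elif arm_power <= 6:
        return 7.0    # arm powers between 3 and 6
    return 7.5        # arm powers above 6
```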
So, the carm_cm_up script will do everything that it used to without any help (unless it fails to find IR resonance for ALS, or can't lock the PRMI, in which case it will ask for help), and then once it gets to these servo lines to slowly increase the arm power and increase the DARM gain, it will ask you to confirm before each step is taken. The script should get you all the way to arm powers of 9, which is pretty much exactly 100pm according to Q's Mist plot that is posted.
The CARM and DARM loops (around the UGFs) don't seem to be appreciably changing shape as I increase the arm powers up to 9 (as long as I increase the DARM loop gain appropriately). So, we may be able to go a little bit farther, but since we're at about 100pm, it might be time to look at whether we think REFL11 or REFLDC is going to be more promising in terms of loop stability for the rest of the way to resonance.
Here are some plots from this evening.
First, the time I was able to get to and hold at arm powers of 9. I have a striptool to show the long time trends, and then zooms of the lockloss. I do not see any particular oscillations or anything that strikes me as the cause for the lockloss. If anyone sees something, that would be helpful.
This next lockloss was interesting because the DARM started oscillating as soon as the normalization matrix elements were turned on for DARM on DC transmissions. The script should be measuring values and putting in matrix elements that don't change the gain when they are turned on, but perhaps something didn't work as expected. Anyhow, the arm powers were only 1ish at the time of lockloss. There was some kind of glitch in the DARM_OUT (see 2nd plot below, and zoom in 3rd plot), but it doesn't seem to have caused the lockloss.
We spent the afternoon working on the new scan for IR resonance script. It is getting much closer, although we need to work on a plan for the fine scanning at the end - so far, the result from the wavelet thing mis-estimates the true peak phase, and so if we jump to where it recommends, we are only at about half of the arm resonance. So, in progress, but moving forward.
Tonight we repeated the process of reducing the CARM offset and measuring the DARM loop gain as we went. I'm not sure if I just had the wrong numbers yesterday, or if the gains are changing day-by-day. The gains that it wanted at given arm buildups were constant throughout this evening, but they are about a factor of 2 higher than yesterday. If they really do change, we may need to implement a UGF servo for DARM. New gains are in the carm_cm_up script.
We also actually saved our DARM loop measurements as a function of CARM offset (as indicated by arm buildups). The loop stays the same through arm powers of 4. However, once we get to arm powers of 6, the magnitude around 100 Hz starts to flatten out, and we get some weird features in the phase. It's almost like the phase bubble has a peak growing out of it. I saw these yesterday, and they just keep getting more pronounced as we go up to arm powers of 7, 8 and 9 (where we lost lock during the measurement). The very last point in the power=9 trace was just before/during the lockloss, so I don't know if we trust it, or if it is real and telling us something important. But, I think that it's time to see about getting both CARM and DARM onto a different set of error signals now that we're at about 100pm.
[Jenne, Rana, Koji]
Since the MOPA has been having a bad few weeks (and got even more significantly worse in the last day or so), we opened up the MOPA box to increase the power. This involved some adjusting of the NPRO, and some adjusting of the alignment between the NPRO and the Amplifier. Afterward, the power out of the MOPA box was increased. Hooray!
0. Before we touched anything, the AMPMON was 2.26, PMC_Trans was 2.23, and PSL-126MOPA_126MON was 152 (when the photodiode was blocked, its dark reading was 23).
1. We took off the side panel of the MOPA box nearest the NPRO, to gain access to the potentiometers that control the NPRO settings. We selectively changed some of the pots while watching PSL-126MOPA_126MON on Striptool.
2. We adjusted the pot labeled "DTEMP" first. (You have to use a dental mirror to see the labels on the PCB, but they're there). We went 3.25 turns clockwise, and got the 126MON to 158.
3. To give us some elbow room, we changed the PSL-126MOPA_126CURADJ from +10.000 to 0.000 so that we have some space to move around on the slider. This changed 126MON to 142. The 126MOPA_CURMON was at 2.308.
4. We tried adjusting the "USR_CUR" pot, which is labeled "POWER" on the back panel of the NPRO (you reach this pot through a hole in the back of the NPRO, not through the side which we took off, like all the other pots today). This pot did nothing at all, so we left it in its original position. This may have been disabled since we use the slider.
5. We adjusted the CUR_SET pot, and got the 126MON up to 185. This changed the 126MOPA_CURMON to 2.772 and the AMPMON to 2.45.
We decided that that was enough fiddling with the NPRO, and moved on to adjusting the alignment into the Amplifier.
6. We teed off of the AMPMON photodiode so that we could see the DC values on a DMM. When we used a T to connect both the DMM and the regular DAQ cable, the DMM read a value a factor of 2 smaller than when the DMM was connected directly to the PD. This shouldn't happen; it's something on the to-fix-someday list.
7. Rana adjusted the 2 steering mirrors immediately in front of the amplifier, inside the MOPA box. This changed the DMM reading from its original 0.204 to 0.210, and the AMPMON reading from 2.45 to 2.55. While this did help increase the power, the mirrors weren't really moved very much.
8. We then noticed that the beam wasn't really well aligned onto the AMPMON PD. When Rana leaned on the MOPA box, the PD's reading changed. So we moved the PD a little bit to maximize its readings. After this, the AMPMON read 2.68, and the DMM read 0.220.
9. Then Rana adjusted the 2 waveplates in the path from the NPRO to the Amplifier. The first waveplate in the path didn't really change anything. Adjusting the 2nd waveplate gave us an AMPMON of 2.72, and a DMM reading of 0.222.
10. We closed up the MOPA box, and locked the PMC. Unfortunately, the PMC_Trans was only 1.78, down from the 2.23 when we began our activities. Not so great, considering that in the end, the MOPA power went up from 2.26 to 2.72.
11. Koji and I adjusted the steering mirrors in front of the PMC, but we could not get a transmission higher than 1.78.
12. We came back to the control room, and changed the 126MOPA_126CURADJ slider to -2.263, which gives a 126MOPA_CURMON of 2.503. This increased PMC_TRANS up to 2.1.
13. Koji did a bit more steering mirror adjustment, but didn't get any more improvement.
14. Koji then did a scan of the FSS SLOW actuator, and found a better temperature place (~ -5.0) for the laser to sit in. This place (presumably with less mode hopping) lets the PMC_TRANS get up to 2.3, almost 2.4. We leave things here, with the 126MOPA_126CURADJ slider at -2.263.
Now that the MOPA is putting out more power, we can adjust the waveplate before the PBS to determine how much power we dump, so that we have ~constant power all the time.
Also, the PMCR view on the Quad TVs in the Control Room has been changed so it actually is PMCR, not PMCT like it has been for a long time.
This is a trend of the last 20 days. After our work with the NPRO, we have recovered only 5% in PMC trans power, although there's an apparent 15% increase in AMPMON.
The AMPMON increase is partly fake; the AMPMON PD has too much of an ND filter in front of it and it has a strong angle dependence. In the future, we should not use this filter in a permanent setup. This is not a humidity dependence.
The recovery of the refcav power mainly came from tweaking the two steering mirrors just before and just after the 21.5 MHz PC. I used those knobs because that is the part of the refcav path closest to the initial disturbance (NPRO).
BTW, the cost of a 1W Innolight NPRO is $35k and a 2W Innolight NPRO is $53k. Since Jenne is on fellowship this year, we can afford the 2W laser, but she has to be given priority in naming the laser.
To complete the story before moving on to ALS, I decided to measure the X arm loss. It is estimated to be 20 +/- 5 ppm. This is surprising to say the least, so I'm skeptical - the camera image of the ETMX spot when locked almost certainly looks brighter than in Oct 2016, but I don't have numerical proof. But I don't see any obvious red flags in the data quality/analysis yet. If true, this suggests that the "cleaning" of the Yarm optics actually did more harm than good, and if that's true, we should attempt to identify where in the procedure the problem lies - was it in my usage of non-optical grade solvents?
From the measurements I have, the Y arm loss is estimated to be 58 +/- 12 ppm. The quoted values are the median (50th percentile) and the distance to the 25th and 75th quantiles. This is significantly worse than the ~25 ppm number Johannes had determined. The data quality is questionable, so I would want to get some better data and run it through this machinery and see what number that yields. I'll try and systematically fix the ASS tomorrow and give it another shot.
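The quoted statistic can be reproduced with numpy on any set of posterior samples. Toy Gaussian samples are used here for illustration, not the real chain:

```python
import numpy as np

# Toy posterior samples of the RT loss [ppm] -- illustrative numbers only
samples = np.random.default_rng(0).normal(58.0, 12.0, 10000)

median = np.percentile(samples, 50)
lower = median - np.percentile(samples, 25)   # distance down to the 25th quantile
upper = np.percentile(samples, 75) - median   # distance up to the 75th quantile
# quote as: median (+upper / -lower) ppm
```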
Model and analysis framework:
Johannes and I have cleaned up the equations used for this calculation - while we may make more edits, the v1 of the document lives here. The crux of it is that we would like to measure the ratio of the power reflected from the resonant cavity to the power reflected off just the ITM (cavity misaligned). This quantity can then be used to back out the round-trip loss in the resonant cavity, with further model parameters which are:
If we ignore the 3rd for a start, we can calculate the "expected" value of this reflection ratio as a function of the round-trip loss, for some assumed uncertainties on the above-mentioned model parameters. This is shown in the top plot in Attachment #1; while it was generated using emcee, it is consistent with the first-order uncertainty propagation result I posted in my previous elog on this subject. The actual samples of the model parameters used to generate these curves are shown in the bottom plot. What this is telling us is that even if we have no measurement uncertainty on the ratio itself, the systematic uncertainties are of order 5 ppm for the assumed variation in model parameters.
The same machinery can be run backwards: given multiple measurements of the reflection ratio, we also have its sample variance. The uncertainty on the sample variance estimator is also known, and serves to quantify the prior distribution on that parameter for our Monte-Carlo sampling; the parameter itself is required to quantify the likelihood of a given set of model parameters, given our measurement. Plugging in my best estimate of the sample variance from this week's measurements, and assuming uncorrelated Gaussian uncertainties on the model parameters, I can back out the posterior distributions.
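For the Gaussian case, the "known" uncertainty on the unbiased sample variance estimator is sqrt(2/(n-1)) * s^2. A quick numerical check of that formula, with toy numbers rather than the actual measurement set:

```python
import numpy as np

def sample_variance_sigma(s2, n):
    """1-sigma uncertainty of the unbiased sample variance of n Gaussian
    measurements: sqrt(2/(n-1)) * s^2."""
    return np.sqrt(2.0 / (n - 1)) * s2

# Brute-force check: scatter of many sample-variance draws should
# match the analytic formula
rng = np.random.default_rng(1)
n, sigma = 10, 2.0
s2_draws = np.array([np.var(rng.normal(0.0, sigma, n), ddof=1)
                     for _ in range(20000)])
analytic = sample_variance_sigma(sigma**2, n)   # sqrt(2/9) * 4
empirical = s2_draws.std()
```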
For convenience, I separate the parameters into two groups - (i) All the model parameters excluding the RT loss, and (ii) the RT loss. Attachment #2 and Attachment #3 show the priors (orange) and posteriors (black) of these quantities.
I'm posting all of this detail so that the experts on MC analysis can correct me where I'm wrong.
To conclude my PMC noise investigations: Attachment #1 shows the PMC noise inferred from the calibrations earlier in this thread and the fitted OLTF for the PMC loop. Attachment #2 compares the frequency noise (inferred from the error point of the PMC servo) when the IMC is locked / unlocked. I don't know what to make of the fact that the PMC suggests improvement from ~20 Hz onwards already - does this mean that the NPRO noise model is wrong by 1 order of magnitude at 30 Hz?
While I initially thought the 1/f^2 rise below ~100 Hz is attributable to the IMC cavity length fluctuations, I found that this profile is present even in the measurement with the PSL shutter closed. I am not embarking on a detailed PMC noise budgeting project for now. Note however that we are not shot noise limited anywhere in this measurement band.
Anchal mentioned it would be good to document how I arrived at the values needed to configure the modbus driver for the temperature sensor, since this information is not in the manual and is hard to find on the internet. So here is a breakdown.
The generic format, and the values we plugged in for our case, are shown below.
As can be seen, the parameters of the first two functions, "drvAsynIPPortConfigure" and "modbusInterposeConfig", are straightforward, so we restrict our discussion to the third function, "drvModbusAsynConfigure". After hours of trawling the internet, I was able to piece together a coherent picture of what needs doing, summarised in the table below.
Once the asyn IP or serial port driver has been created, and the modbusInterpose driver has been configured, a modbus port driver is created with the following command:
drvModbusAsynConfigure(portName,           # used by channel definitions in .db file to reference this unit
                       tcpPortName,        # reference to the portName created with the drvAsynIPPortConfigure command
                       slaveAddress,       # modbus slave address of the device
                       modbusFunction,     # modbus function code (e.g. 4 = read input registers)
                       modbusStartAddress, # starting register address (16-bit, vendor offset removed)
                       modbusLength,       # length in dataType units
                       dataType,           # data type of the register contents
                       pollMsec,           # how frequently to request a value in [ms]
                       plcType)            # string describing the PLC type
Modbus addresses are specified by a 16-bit integer address. The location of inputs and outputs within the 16-bit address space is not defined by the Modbus protocol, it is vendor-specific. Note that 16-bit Modbus addresses are commonly specified with an offset of 400001 (or 300001). This offset is not used by the modbus driver, it uses only the 16-bit address, not the offset.
For ServersCheck, the offset is "30001", so that
modbusStartAddress = 30200 - 30001 = 199
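The address arithmetic can be captured in a small helper (hypothetical function name, just to document the convention):

```python
def vendor_to_modbus(vendor_addr, offset=30001):
    """Convert a vendor-style input-register address (3xxxx convention)
    to the raw 16-bit address the EPICS modbus driver expects."""
    raw = vendor_addr - offset
    if not 0 <= raw <= 0xFFFF:
        raise ValueError("address out of 16-bit modbus range")
    return raw

# ServersCheck temperature register from the example above:
modbusStartAddress = vendor_to_modbus(30200)   # 30200 - 30001 = 199
```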
I begin modeling the initial BHD setup using Finesse. I started with copying C1_w_BHD.kat from the 40m/bhd repo and making changes to reflect the current BHD setup:
1. OMCs were removed.
2. Only 1 PD per BHD port was left.
3. Transmission of PR2 was changed to 2.2%. The PRG was calculated to be ~15.5.
4. Actual RoCs of new optics were dialed in (yesterday Paco and I went into the cleanroom to measure the RoCs, and they seem to match the datasheets).
Here's a table comparing the old (design?) RoCs with the new RoCs:
The changes looked quite alarming, especially for LO4 and AS3, so I wrote a script to calculate the mode matching between the LO and AS beams called AS_LO_ModeMatching.ipynb and pushed it into the repo. In the notebook a bright AS beam is created by creating a small asymmetry between the arms of ~ 0.003 degrees (~10pm). Amplitude detectors were put at the input ports of the BHD BS to calculate the fields in the AS and LO beams. In particular TEM00, TEM02 and TEM20 were measured for each beam.
The calculation shows that with the old RoCs the mode matching between the LO and AS beams is 99% while for the new RoCs it is ~ 50%.
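As a standalone cross-check of the Finesse result, the power coupling between two fundamental Gaussian modes on a common axis can be computed directly from their complex beam parameters. This is the standard TEM00 overlap formula for axisymmetric beams (astigmatism ignored), not a reproduction of what the notebook does:

```python
import numpy as np

def mode_overlap(q1, q2):
    """Power coupling between two axisymmetric TEM00 beams on a common
    axis, given complex beam parameters q = z + 1j*zR."""
    return 4.0 * q1.imag * q2.imag / abs(np.conj(q1) - q2) ** 2

perfect = mode_overlap(0.5 + 1.0j, 0.5 + 1.0j)   # identical beams -> 1.0
mismatched = mode_overlap(1.0j, 2.0j)            # co-located waists, zR of 1 m vs 2 m
```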
Ok, it turns out these optics were purchased on purpose, as this elog shows. Jon considered building a mode-matching telescope with stock optics as an initial step before purchasing the custom optics (referred to as "design" optics in my elog).
I dialed in the new distances between the optics into the .kat file as described in this elog and pushed the changes to the repo. With the new distances, I got mode-matching of 87% for the full IFO and 89% for FPMI so there's probably no need to worry as the mode-matching with these optics was already designed.
I was finally able to set up a stable suspension model with the help of Yuta, and I'm now ready to start doing some MICH noise budgeting with BHD readout. (Tip: it turns out that with Matlab's zpk function you should multiply the poles and zeros by -2*pi to match the zpk TFs in Foton.)
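A quick check of that Hz-to-rad/s convention, in a numpy sketch mirroring what the Matlab tip does (Foton lists zeros/poles as positive frequencies in Hz; an s-domain zpk wants s-plane roots in rad/s, hence the factor of -2*pi):

```python
import numpy as np

poles_hz = [1.0]                        # a single real pole at 1 Hz, Foton-style
p = [-2 * np.pi * f for f in poles_hz]  # s-plane root in rad/s
k = abs(p[0])                           # gain chosen for unity DC response

def H(w):
    """Evaluate the transfer function at angular frequency w [rad/s]."""
    s = 1j * w
    return k / np.prod([s - pi for pi in p])

dc_gain = abs(H(1e-6))                  # ~1
corner_gain = abs(H(2 * np.pi * 1.0))   # ~1/sqrt(2) at the 1 Hz pole
```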
I copied all the filters from the suspension MEDM screens into Matlab. Those filters were concatenated with a single-pendulum suspension TF with poles at [0.05e-1+1i, 0.05e-1-1i] and a gain of 4 N/kg.
I multiplied the OLTF by the real gains of the ADC/DAC, OSEMs, coil driver, and coils. I ignore whitening/dewhitening for now. The OLTF was calculated with no additional ad-hoc gain.
Attachment 1 shows the calculated open-loop transfer function.
Attachment 2 shows OLTF of ETMY measured last week.
Attachment 3 shows the step and impulse responses of the closed-loop system.
The guy from KroneCrane (sp?) came today and started the crane inspection on the X End Crane. There were issues with our crane, so he's going to resume on Monday. We turned off the MOPA for the duration of the inspection.
The plan is that they will bring enough weight to test it at slightly over the rating (1 Ton + 10 %) and we'll retry the certification after the oiling on Monday.
The south end crane has one more flaw: the wall cantilever is imbalanced, meaning it wants to rotate southward because its axis is off.
This affects the rope winding on the drum, as shown in Atm2.
Atm1 shows Jay Swar of KoneCrane and the two 1250 lb loads that were used for the test. Overloading the crane to 125% is general practice in load testing.
It was good to see that the load brakes were working well at 2500 lbs. Finally we found a good service company! And thanks to Rana and Alberto for coming in on Saturday.
We went into the vertex today to see about fixing the alignment. The in-air access connector is in place, and we took heavy doors off of BS, ITMY, and ETMY chambers.
We started by looking at the pointing from the PZTs. Manasa and Raji hooked up HV power supplies to the PZTs and set them to the middle of their ranges (75 V).
We installed a target on the BS cage, and new "free standing" targets made special by Steve for the SOSs on ITMY and ETMY.
Using a free-standing aperture target we looked at the beam height before PZT2. It was a little high, so we adjusted it with PZT1. Once that was done we looked at the beam height at PR2, and adjusted that height with PZT1.
We then tried to use the hysteresis in PR2 to adjust the beam height at ITMY. Pushing just a little bit at the top or bottom of PR2 would repoint the beam in pitch. This sort of works, but it's stupid. Using this method we got the beam more or less centered vertically at ITMY.
We moved on to ETMY with the idea that we would again use the hysteresis in PR3 to get the vertical pointing to the ETM correct. This was a good demonstration of just how stupid the tip-tilts really are. Just touching slightly at the top or bottom of PR3, we could completely change the pointing at ETMY, by milliradians (~4 cm over 40 m).
At this point I cried foul. This is not an acceptable situation. Very little stimulation to the tip-tilts can repoint the beam inside the PR cavity.
Steve says that the TT weights, which will attach to the base of the TT mirror mounts and should help keep the mirrors vertical and not hysteretic, are being baked now and should be available tomorrow. We therefore decided to stop what we were doing today, since we'll have to just redo it all again tomorrow once the weights are installed.
I have moved the 1W Innolight + controller from the PSL table to the SP table for beam profiling.
Koji and Kevin
We unpacked the Innolight 2W laser, took an inventory, and scanned the operations manual.
[Edit by KA]
The scanned PDFs are placed on the following wiki page
We will measure the P-I curve, the mode profile, frequency actuator responses, and so on.
Koji and Kevin measured the output power vs injection current for the Innolight 2W laser.
The threshold current is 0.75 A.
The following data was taken with the laser crystal temperature at 25.04ºC (dial setting: 0.12).
Koji and Kevin measured the vertical beam profile of the Innolight 2W laser at one point.
This data was taken with the laser crystal temperature at 25.04°C and the injection current at 2.092A.
The distance from the razor blade to the flat black face on the front of the laser was 13.2cm.
The data was fit to the function y(x)=a*erf(sqrt(x)*(x-x0)/w)+b with the following results.
Reduced chi squared = 14.07
x0 = (1.964 +- 0.002) mm
w = (0.216 +- 0.004) mm
a = (3.39 +- 0.03) V
b = (3.46 +- 0.03) V
razor height (mm) Voltage (V)
Back in Gainesville in 1997, I learned how to do this using the chopper wheel. We had to make the assumption that the wheel's blade was moving horizontally during the time of the chop.
One advantage is that the repetitive slices reduce the random errors by a lot: you can trigger the scope and average. Another advantage is that you can download the averaged scope trace via USB, floppy, or ethernet instead of pencil and paper.
But, I never analyzed it in enough detail to see if there was some kind of nasty systematic error.
Good fit. I assumed sqrt(x) is a typo of sqrt(2).
x0 = (1.964 +- 0.002) mm
w = (0.216 +- 0.004) mm
a = (3.39 +- 0.03) V
b = (3.46 +- 0.03) V
What kind of fit did you use? How are the uncertainties in the parameters obtained?
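Not speaking for Kevin, but one standard way to get numbers like these is a nonlinear least-squares fit whose parameter uncertainties come from the diagonal of the covariance matrix. A sketch with synthetic data, using the sqrt(2) form and the fitted values quoted above (not the real razor-scan data):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def razor_scan(x, a, x0, w, b):
    # Integrated Gaussian beam profile for a razor-blade scan
    return a * erf(np.sqrt(2) * (x - x0) / w) + b

# Synthetic data built from the fitted values quoted in the elog
rng = np.random.default_rng(0)
x = np.linspace(0.5, 3.5, 60)        # razor height [mm]
true = (3.39, 1.964, 0.216, 3.46)    # a [V], x0 [mm], w [mm], b [V]
y = razor_scan(x, *true) + rng.normal(0, 0.03, x.size)

popt, pcov = curve_fit(razor_scan, x, y, p0=(3.0, 2.0, 0.3, 3.0))
perr = np.sqrt(np.diag(pcov))        # 1-sigma parameter uncertainties
```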
When I got back from lunch just now, I noticed that the PMC TRANS and REFL cameras were showing no spots. I went onto the PSL table, and saw that the NPRO was in fact turned off. I turned it back on.
The laser was definitely ON when I left for lunch around 1:30 pm, and this happened around 1:40 pm. Anjali says no one was in the lab in between. None of the FEs are dead, suggesting there wasn't a lab-wide power outage, and the EX and EY NPROs were not affected. I had pulled out the diagnostics connector logged by Acromag; I'm restoring it now in the hope that we can get some more info on what exactly happened if this is a recurring event. So FSS_RMTEMP isn't working from now on. The sooner we get the PSL Acromag crate together, the better...
Happened again at ~730pm.
The NPRO diag channels don't really tell me what happened in a causal way, but the interlock channel seems suspicious. Why is the nominal value 0.04 V? From the manual, it looks like the TGUARD is an indication of deviations between the set temperature and actual diode laser temperature. Is it normal for it to be putting out 11V?
I'm not going to turn it on again right now while I ponder which of my hands I need to chop off.
After discussing with Koji, I turned the NPRO back on at ~4 PM local time. I first dialled the injection current down to 0 A, then switched the control unit to "ON", and ramped the current back up using the front panel dial. Lasing started at 0.5 A, and I saw no abrupt swings in the power (I used PMC REFL as a monitor; the dips in the power are mode flashes, and the x-axis is in units of time, not pump current). The PMC was relocked, and the IMC autolocker locked the IMC almost immediately.
Now we wait and watch I guess.
The NPRO shut off at ~1517 local time this afternoon. Again, not many clues from the NPRO diagnostics channels, but to my eye, the D1_POW channel shows the first variation from the "steady state", about 0.1 sec before the other channels register any change, so I don't know how much we can trust the synchronization of the EPICS data streams. I won't turn it on again for now. I did check that the little fan on the back of the NPRO controller is still rotating.
gautam 10am 4/29: I also added a longer term trend of these diagnostic channels, no clear trends suggesting a fault are visible. The y-axis units for all plots are in Volts, and the data is sampled at 16 Hz.
I suggested in an earlier elog that after the repair of the NPRO, the PZT capacitance may have changed dramatically. This seems unlikely - I measured the PZT capacitance with the BK Precision LCR meter and found it to be 2.62 nF, in excellent agreement with the numbers from elogs 3640 and 4354 - but this makes me wonder how the old setup ever worked. If the PZT capacitance were indeed that value, then for the Pomona box design in elog 4354, and assuming the PM coefficient at ~216 kHz (the old modulation frequency) was ~30 rad/V as suggested by the data in this elog, we would have had a modulation depth of 0.75 with the function generator set to output 2 Vpp (2 Vpp * 0.5 * 0.05 * 30 rad/V = 1.5 rad pp)! Am I missing something here?
Instead of using an attenuator, we could instead change the capacitor in the pomona box from 47pF mica to 5pF mica to realize a modulation depth of ~0.2 at the new modulation frequency of 231.25 kHz. In any case, as elog 4354 suggests, the phase introduced by this high-pass filter is non-zero at the modulation frequency, so we may also want to install an all-pass filter which will allow us to control the demodulation phase. This should be easy enough to implement with an Op27 and passive components we have in hand...
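A sanity check of the modulation-depth arithmetic quoted above (numbers copied straight from the elog expression; the 0.5 factor is as given there):

```python
# Numbers as quoted in the elog arithmetic
vpp = 2.0         # function generator output [Vpp]
half = 0.5        # the 0.5 factor from the elog expression
box_gain = 0.05   # Pomona-box transmission at ~216 kHz
pm_coeff = 30.0   # PZT PM response [rad/V]

depth_pp = vpp * half * box_gain * pm_coeff   # 1.5 rad, peak-to-peak
depth = depth_pp / 2.0                        # 0.75 rad modulation depth
```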
After adjusting the alignment of the two beams onto the PD, I managed to recover a stronger beatnote of ~ -10 dBm. I took some measurements with the PLL locked, and will put up a more detailed post later in the evening. I turned the IMC autolocker off, turned the 11 MHz Marconi output off, and closed the PSL shutter for the duration of my work, but have reverted these to their nominal states now. There are a few extra cables running from the PSL table to the area near the IOO rack where I was doing the measurements; I've left these as-is for now in case I need to take some more data later in the evening.
The Innolight 1W 1064 nm NPRO, s/n 1634, was purchased on 9-18-2006 at CIT. It came to the 40m around 2010.
Its diodes should be replaced, based on its age and performance.
The RIN and noise eater are bad. I will get a quote on this job.
The frequency noise plot in the Innolight manual is the same as Lightwave's; see elog 11956.
I don't think there's any evidence that the noise eater is bad. That would change the behavior of the relaxation oscillation which is at 1 MHz ?
While I was investigating the AM/PM ratio of the Innolight, I found that there was a pronounced peak in the RIN at ~400 kHz, which did not change despite toggling the noise eater switch on the front panel (see plot attached). The plot in the manual suggests the relaxation oscillations should be around 600 kHz, but given that the laser power has dropped by a factor of ~3, and the relaxation oscillation frequency scales roughly as the square root of the power, I think it's reasonable that the relaxation oscillations are now at ~400 kHz?
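A quick check of that scaling, assuming the relaxation-oscillation frequency goes as the square root of the output power above threshold (a rough assumption, since it strictly scales as sqrt(r - 1) with pump ratio r):

```python
import math

f_manual = 600e3    # relaxation oscillation frequency per the manual [Hz]
power_drop = 3.0    # output power is down by roughly this factor

# If output power is proportional to (r - 1), then f_RO ~ sqrt(P):
f_now = f_manual / math.sqrt(power_drop)   # ~350 kHz, near the observed ~400 kHz peak
```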
It is strange that there is no difference between with and without NE, isn't it?
The Innolight laser control unit has a 25 pin D-sub connector on the rear which is meant to serve as a diagnostics aid, and the voltages at the various pins should tell us the state of various things, like the diode power monitor, laser crystal TEC error temperature, NE status etc etc. Unfortunately, I am unable to locate a manual for this laser (online or physical copy in the filing cabinets), so the only thing I have to go on is a photocopied page that Steve had obtained sometime ago from the manual for the 2W NPRO. According to that, Pin 1 is "Diode laser 1, power monitor, 1V/W". The voltage I measured (with one of the 25 pin breakout boards and a DMM) is 1.038V. I didn't see any fast fluctuations in this value either. It may be that the coefficient indicating "normal" state of operation is different for the 1W model than the 2W model, but this measurement suggests the condition of the diode is alright after all?
I also measured the voltage at Pin 12, which is described in the manual as "Noise Eater, monitor". This value was fluctuating between ~20 mV and ~40 mV. Toggling the NE switch on the front of the control unit between ON and OFF did not change this behaviour. The one page of the manual that we have, however, doesn't shed any light on how we are supposed to interpret the voltage measured at this pin...
This is the same manual as the one you got from Steve, but here you can find all the pages.
It was shipped out for repair evaluation, and arrived at Hayward, CA on 2016 Feb 16.
We decided to rename the Input Beam channels (while keeping temporary backwards compatible aliases) as:
C1:ASC-IB_POS_X, C1:ASC-IB_POS_Y, C1:ASC-IB_ANG_SUM, etc.
The upgrade's input mode matching telescope design is complete. A summary document is on the MMT wiki page, as are the final distances between the optics in the chain from the mode cleaner to the ITMs. Unless we all failed kindergarten and can't use rulers, we should be able to get very good mode matching overlap. We seem to be able (in Matlab simulation land) to achieve better than 99.9% overlap even if we are wrong on the optics' placement by ~5 mm. Everything is checked in to the svn, and is ready for output mode matching when we get there.
The summary pages are now online (Daily Summary), and will eventually be found on the 40m Wiki page under "LOGS-Daily Summary". (Currently, the linked website is the former summary page site)
Currently, all of the IFO and Acoustic channels have placeholders (they are not showing real data yet) and the Weather channels are not working, even though the Weather Station in the interferometer room is working (I am looking into this - any theories as to why would be appreciated!!).
I am looking for advice on what else to include in these pages. It would be fantastic if everyone could take a moment to look over what I have so far (especially the completed page from July 23, 2012) and give me their opinions on:
1. What else you would like to see included
2. Any specific applications to your area of work that I have overlooked
3. What the most helpful parts of the pages are
4. Any ways that I could make the existing pages more helpful
5. Any other questions, comments, clarifications or suggestions
Finally, are the hourly subplots actually helpful? It seems to me like they would be superfluous if the whole page were updating every 1-2 hours (as it theoretically eventually will). These subplots can be seen on the July 24, 2012 page.
My email address is firstname.lastname@example.org.
When I came in this afternoon, I saw that the PZT voltage to the PMC had railed. Following the usual procedure of turning the servo gain to zero and adjusting the DC offset, I got the PMC to relock, but the PMCR level was high and the alignment looked poor on the control room monitor. So I tweaked the input alignment on the PSL till I felt it was more reasonable. The view on the control room monitor now looks more like the usual state, and the "REFL (V)" field on the PMC MEDM screen now reads 0.02-0.03 which is the range I remember it being in nominally.
I turned on the power supplies for the output PZTs and pitch and yaw for PZT2. This is back to the condition that we had during atmosphere alignment, so after Ayaka has finished tweaking the MC, we can have a look at alignment of the interferometer.
Checking the drift in input pointing (TT2 is the main suspect)
I have centered IPPOS and the portion (~2/3) of the IPANG beam that comes out of the vacuum onto the QPDs, to see the drift in input pointing over the weekend or at least overnight.
If anybody will be working on the IFO alignment over the weekend, do so only after recording the drift in IPANG and IPPOS; if you will be working later tonight, center them on the QPDs before leaving.
I centered ipang and ippos on the QPDs (using only the steering mirrors) and wanted to see the drift over the weekend.
1. IPANG has drifted (QPD sum changed from -6 to -2.5); but it is still on the QPD.
2. IPPOS does not show any drift.
3. In the plot: the jump in IPANG on the left occurred when I centered the beam on the QPD, and the one on the right is from the 4.7 earthquake and its aftershocks this morning.
1. Do we need to worry about this drift?
2. Which of the two TTs is responsible for the drift?
3. Do the TTs tend to drift in the same direction every time?
P.S. The TTs were not touched to center on IPANG and IPPOS. The last time they were touched was nearly 6 hours before the centering. So the question of any immediate hysteresis is ruled out.
[Jenne, Manasa, Yuta]
We temporarily centered the beam on IPANG to see input pointing drift. From eyeball, drift was ~ 0.1 mrad/h in pitch.
What we did:
1. Aligned TT1/TT2 and aligned input pointing to Yarm.
2. Tweaked TT2 in pitch to center the beam on the first steering mirror of IPANG path. We still saw Yarm flash in higher order modes at this point. Before tweaking, the beam was hitting at the top edge.
3. Centered the beam on IPANG QPD.
4. Moved IPPOS first steering mirror because IPPOS beam was not on the mirror (off in yaw, on mirror edge). Also, IPPOS beam was coming out clipped in yaw.
5. Centered the beam on the IPPOS QPD. We put a lens in the path to focus the beam on the QPD.
6. Left input pointing untouched for 4 hours.
7. Restored TT2 again. We tried to align the Y arm with IPANG available, but it was not possible without touching the TRY path, and AS was also clipped.
Below is the trend of IPANG sum, X, and Y. IPANG Y (IBQPD_Y) drifted by ~0.8 counts in 4 hours. IPANG is not calibrated yet, but Jenne used her eyeball to measure the beam position shift on the IPANG steering mirror: it shifted by ~2 mm. This means the input pointing drifts by ~0.1 mrad/h in pitch.
Compared with yaw, the pitch drift is quite large considering the beam size at ETMY (~5 mm). We can monitor the input pointing drift over the weekend to get a longer trend.
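The eyeball calibration above can be sanity-checked with a line of arithmetic. The lever arm below is an assumed value, chosen only to show that the quoted numbers are self-consistent (the entry does not state the actual distance from the pointing pivot to the IPANG steering mirror):

```python
# Rough check of the drift-rate estimate (lever arm is an assumption):
beam_shift = 2e-3   # m, eyeball estimate of spot shift on the IPANG steering mirror
lever_arm  = 5.0    # m, assumed distance from the effective pivot to that mirror
hours      = 4.0    # observation time

angle = beam_shift / lever_arm      # rad, total angular drift
rate  = angle / hours * 1e3         # mrad per hour
print(rate)                         # -> 0.1 (mrad/h), matching the entry
```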
- IPANG and IPPOS are both changed from the state before pumping.
There is no beam going into the IFO at the moment. There was definitely a spot on the AS camera after I restored the suspensions yesterday, as you can see from the ASDC level in Attachment #1. But at around 2pm Pacific yesterday, the ASDC level went to 0. I suspect the TTs. There is no beam on the REFL camera either when PRM is aligned, and PRM's DC alignment is good as judged by Oplev.
Normally, I am able to recover the beam by scanning the TTs around with some low frequency sine waves, but not today. We don't have any readback (Oplev/OSEM) of the TT alignment, and the DC bias values haven't jumped abnormally around the time this happened, judging by the OUT16 monitor points (see Attachment #2). The IMC was also locked at the time when this abrupt drop in the ASDC level happened. Unfortunately, we don't have a camera on the Faraday, so I don't know where the misalignment is happening, but the beam is certainly not making it to the BS. All the SOS optics (e.g. BS, ITMX and ITMY) are well aligned as judged by Oplev.
Being debugged now...
As suspected - the problem was with the TTs. I tested the TT signal chain by driving a low frequency sine wave using AWG and looking at the signal on an o'scope. But I saw nothing, neither at the AI board monitor point, nor at the actual coil driver mon point. I decided to look at the IOP testpoints for the DAC channels, to see if the signals were going through okay on the digital side. But the IOP channels were flatlined, as viewed on dataviewer (see Attachment #1). This despite the fact that the DAC output monitor screen in the model itself was showing some sensible numbers, again see Attachment #1.
Looking at the CDS overview screen, there were no red flags. But there was a red indicator sneakily hidden away in the IOP model's CDS status screen, the "DAC" field in the state word is red. As Attachment #2 shows, a change in the state word is correlated with the time ASDC went to 0.
Note that there are also no errors on the c1lsc frontend itself, judging by dmesg. I want to do a model restart, but (i) this will likely require reboots of all vertex FEs and (ii) I want to know if any CDS experts want to sniff for clues to what's going on before a model restart wipes out some secret logfiles. I'm a little confused that the rtcds isn't throwing up any errors and causing models to crash if the values are not being written to the registers of the DAC. It may also be that the DAC card itself is dead. To re-iterate, all the EPICS readbacks were suggesting that I am injecting a signal right up to the IOP.
Quoting from the runtime diagnostics note:
Below is the bottom view of the geophone preamplifier and controller for the STACIS. It slides into the upper part of the STACIS, under the blue platform. The geophone signal goes in the bottom left, gets amplified, filtered, and otherwise pampered, and goes out from the bottom right. From there it goes on to the high voltage amplifier, and finally to the PZT stacks. Below right is a closer view of the output port for the preamplifier, top and bottom.
I suggest de-soldering and bending up the pins that carry the geophone signal (so the signals don't go directly to the high voltage amplifier), and adding the circuit below between the preamp and amplifier. The preamp connector is still attached to the high voltage amplifier connector in this setup, only the geophone signal pins are disconnected.
More on the circuit and its placement:
The first op-amp is a summing junction, and the second is just a unity gain inverter so that signal doesn't go into the high voltage amplifier inverted. I tested this with the breadboard, and it seems to work fine (amplitudes of two signals add, no obvious distortion). The switches allow you to choose local feedback, external feedforward, or both.
The geo input will be wires from the preamp (soldered to where the pins used to go), and the external input will be BNC cables, with the source probably a DAC. The output will go to the bent-up pins that used to be connected to the preamp (they go into the high voltage amplifier). This circuit can sit outside of the STACIS; there is a place to feed wires in and out right near where the preamplifier sits. For power, it can use the STACIS preamp supply, which is +/- 15V. The resistors I used in the breadboard test were 10 kOhm, and the op-amp I used was LT1012 (whose noise should be less than either input, see eLog 7190).
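With equal 10 kOhm resistors everywhere, the ideal behavior of the two-op-amp stage is simply that the two inputs add with no sign flip. A minimal sketch of that ideal transfer function (not a model of op-amp noise or bandwidth):

```python
def summing_stage(v_geo, v_ext, r_in=10e3, r_fb=10e3):
    """Ideal two-op-amp sum: an inverting summing junction followed by
    a unity-gain inverter. With equal resistors the output is v_geo + v_ext."""
    v_sum = -(r_fb / r_in) * (v_geo + v_ext)  # inverting summing amplifier
    return -v_sum                              # unity-gain inverter restores the sign

print(summing_stage(0.5, 0.25))  # -> 0.75 (amplitudes simply add)
```

This matches the breadboard observation: the amplitudes of the two signals add, and the second inverter keeps the signal into the high voltage amplifier non-inverted.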
This is visually represented below, with the preamp pin diagram corresponding to the soldering points with the preamp upside down (top right picture):
I put in the new matrices from the free-swinging test for: ITMX, ITMY, ETMX, ETMY, PRM, BS.
Some of the optics damped okay, but ETMX and BS were not good at all. ETMX was ringing up when I turned on the damping. BS wasn't, but when I gave it a kick, it wouldn't damp. No good.
I tried ITMY, and it was totally fine, with nice damping Qs of ~5. So, I don't know what's going on.
Anamaria is trying a new 4x4 matrix-inverter, so we can look at the inversion of just the face osems. We'll see how it goes.
Since things were crappy, I did a BURT restore, so things are as they were earlier this morning.
After Kiwamu set the REFL11 phases in the PRMI configuration (maximizing the PRM->REFL11I response), I tried to measure the MC length and 11 MHz frequency mismatch by modulating the 11 MHz frequency and measuring the PM-to-AM conversion after the MC using the REFL11Q signal. The modulation appears in REFL11Q with a good SNR, but the amplitude does not seem to go through a clear minimum as the 11 MHz goes through the MC resonance.
We could not relock the PRMI during the day, so I resorted to a weaker method: measuring the amplitude of the 11 MHz sideband in the MC reflection (RF PD mon output on the demod board) with an RF spectrum analyzer. The frequency of the minimum on the IFR is 11.065650 MHz, while the nominal setting was 11.065000 MHz. The sensitivity of this method is about 50 Hz.
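The 650 Hz offset can be translated into an equivalent MC length error. This sketch assumes the 11 MHz modulation frequency is meant to sit on one MC free spectral range, i.e. f = c / L_roundtrip (the usual arrangement, but taken here as an assumption):

```python
# Convert the 650 Hz frequency offset into an equivalent MC length mismatch,
# assuming the modulation frequency matches one MC FSR: f = c / L_roundtrip.
c      = 299_792_458.0   # m/s
f_meas = 11.065650e6     # Hz, frequency of minimum sideband reflection
f_nom  = 11.065000e6     # Hz, nominal IFR setting

L_rt = c / f_meas                        # round-trip length implied by f_meas
dL   = L_rt * (f_meas - f_nom) / f_nom   # equivalent round-trip length mismatch
print(L_rt, dL)   # ~27.09 m round trip, ~1.6 mm mismatch
```

So the measurement implies the MC round-trip length is short (or the nominal frequency low) at roughly the millimeter level, well above the ~50 Hz (~0.1 mm) sensitivity of the method.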