Lightwave NPRO information:
Serial Number: 337
Manufactured: December 1998!!
Details of checks performed:
Koji tuned the parameters on the laser controller and we observed the following:
Ericq has begun the characterization of the repaired Innolight. We checked that it outputs 1W of power. We will now have to perform the following measurements:
All of these will have to be done before installing this laser at the endtable.
I believe the consensus as of now is to go ahead with carrying out the above measurements. Meanwhile, we will keep the Lightwave NPRO on and see if there is some miraculous improvement. So the decision as to whether to use the Innolight is deferred for a day or two.
I've performed the temperature sweep of PSL vs Innolight 1W AUX laser.
It remains to measure the output power vs diode current, and the beam profile. I will do the latter on the SP table, where there is a little more space. Because we have 1W from this NPRO, the knife-edge method requires a power meter that has a large dynamic range and is sensitive enough to profile the beam accurately. After consulting the datasheets of the power meters we have available (Scientech, Ophir and Coherent) together with Koji, I have concluded that the Coherent calorimeter will be suitable. Its datasheet claims it can accurately measure incident powers down to 100uW, although I think the threshold is more like 5-10mW, but this should still be plenty of resolution for a Gaussian intensity profile of a beam with 1W total power. We also checked that the maximum power density we are likely to have during the waist measurement process (1W in a beam of diameter 160um) is within the 6kW/cm^2 quoted on the datasheet.
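For reference, the knife-edge measurement reduces to fitting an erf-shaped curve: the power transmitted past the blade is P(x) = (P0/2)·erfc(√2(x−x0)/w), where w is the 1/e² intensity radius. A minimal fitting sketch, using synthetic data (not measured values):

```python
# Knife-edge profiling sketch: fit the integrated power behind the blade to an
# erfc profile to extract the 1/e^2 beam radius w. All numbers are synthetic.
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def knife_edge(x, p0, x0, w):
    """Power past the knife edge for a Gaussian beam of 1/e^2 radius w."""
    return 0.5 * p0 * erfc(np.sqrt(2) * (x - x0) / w)

# Synthetic scan: 1 W beam, 160 um radius, with some power-meter noise
x = np.linspace(-0.5e-3, 0.5e-3, 50)   # knife positions [m]
rng = np.random.default_rng(0)
p_meas = knife_edge(x, 1.0, 0.0, 160e-6) + rng.normal(0, 2e-3, x.size)

popt, _ = curve_fit(knife_edge, x, p_meas, p0=[1.0, 0.0, 100e-6])
p0_fit, x0_fit, w_fit = popt
print(f"fitted radius w = {abs(w_fit)*1e6:.1f} um")
```

With the ~5-10mW resolution estimated above, the noise term here would be correspondingly larger, but the erfc shape is quite forgiving to fit.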
I re-measured the power levels today.
We have ~205mW out of the NPRO, and ~190mW after the Faraday. It doesn't look like the situation is going to improve dramatically. I'm going to work on a revised layout with the Innolight as soon as I've profiled the beam from it, and hopefully, by Monday, we can decide that we are going ahead with using the Innolight.
I have moved the 1W Innolight + controller from the PSL table to the SP table for beam profiling.
I've finished up the remaining characterization of the repaired 1W Innolight NPRO - the beamscan yielded results that are consistent with an earlier beam-profiling and also the numbers in the datasheet. The output power vs diode current plot is mainly for diagnostic purposes in the future - so the plot itself doesn't signify anything, but I'm uploading the data here for future reference. The methodology and analysis framework for the beamscan is the same as was used here.
Attachment #1 - Beam-scan results for X-direction
Attachment #2 - Beam-scan results for Y-direction
Attachment #3 - Beam profile using fitted beam radii
Attachment #4 - Beam-scan data
Attachment #5 - Output power vs Injection current plot
Even though I remember operating at a diode current of 2.1A at some point in the past, attempting to increase the current above 2.07A during this scan resulted in the "Clamp" LED on the front panel turning on. According to the manual, this means that the internal current limiting circuitry has kicked in. I don't think this is a problem, as we don't really even need 1W of output power, though it may also be an indicator of the health of the diode.
Attachment #6 - Output power vs Injection current data
It remains to redo the mode-matching into the doubling oven and make slight modifications to the layout to accommodate the new laser + beam profile.
I plan to do these in the morning tomorrow, and unless there are any objections, I will begin installing the repaired 1W Innolight Mephisto on the X endtable tomorrow (18 April 2016) afternoon.
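The beamscan analysis mentioned above amounts to fitting the measured beam radii against position using the Gaussian-beam propagation law to extract the waist size and location. A minimal sketch with synthetic (illustrative, not measured) numbers:

```python
# Fit measured 1/e^2 radii w(z) to w(z) = w0*sqrt(1 + ((z-z0)/zR)^2)
# to extract waist size w0 and waist location z0. Numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

LAM = 1064e-9  # Nd:YAG wavelength [m]

def w_of_z(z, w0, z0):
    zr = np.pi * w0**2 / LAM            # Rayleigh range
    return w0 * np.sqrt(1 + ((z - z0) / zr)**2)

# Synthetic "beamscan" data: waist 150 um at z0 = 0.2 m, 1% noise
z = np.linspace(0, 1.0, 10)
rng = np.random.default_rng(1)
w_meas = w_of_z(z, 150e-6, 0.2) * (1 + rng.normal(0, 0.01, z.size))

(w0_fit, z0_fit), _ = curve_fit(w_of_z, z, w_meas, p0=[100e-6, 0.0])
print(f"w0 = {w0_fit*1e6:.1f} um at z0 = {z0_fit:.3f} m")
```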
Summary of work done over the last two days
Immediate next steps:
I've made progress on the new layout up to the doubling oven. After doing the coarse alignment with the diode current to the NPRO at ~1A, I turned it back up to the nominal 2A. I then rotated the HWP before the IR Faraday such that only ~470mW of IR power is going into the doubler (the rest is being dumped on razor beam dumps). After tuning the alignment of the IR into the doubling oven using the steering mirror + 4 axis translation stage on which the doubling oven is mounted, I get ~3.2mW of green after the harmonic separator and a HR mirror for green. The mode looks pretty good to the eye (see attachment #1), and the conversion efficiency is ~1.45%/W - which is somewhat less than the expected 2%/W but in the ballpark. It may be that some fine tweaking of the alignment + polarization while monitoring the green power can improve the situation a little bit (I think it may go up to ~4mW, which would be pretty close to 2%/W conversion efficiency). The harmonic separator also seems to be reflecting quite a bit of green light along with IR (see attachment #2) - so I'm not sure how much of a correction that introduces to the conversion efficiency.
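As a sanity check on the quoted efficiency: single-pass SHG power scales quadratically with the input power, so the efficiency in %/W is just P_green / P_IR², using the powers from this entry:

```python
# Quick check of the single-pass doubling efficiency quoted above:
# P_green = eta * P_IR^2, so eta = P_green / P_IR^2.
p_ir = 470e-3     # IR power into the doubling oven [W]
p_green = 3.2e-3  # green power after the harmonic separator [W]

eta = p_green / p_ir**2
print(f"conversion efficiency = {eta*100:.2f} %/W")

# For comparison, 4 mW of green out would correspond to:
eta_4mw = 4e-3 / p_ir**2
print(f"with 4 mW green: {eta_4mw*100:.2f} %/W")
```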
While doing the alignment, I noticed that some amount of IR light is actually transmitted through the HR mirrors. With ~500mW of incident light at ~45 degrees, this transmitted light amounts to ~2mW. It turns out that this is also polarization dependent (see attachment #3) - for S-polarized light, as at the first two steering mirrors after the NPRO, there is no transmitted light, while for P-polarized light, which is what we want for the doubling crystal, the transmitted fraction is ~0.5%. The point is, I think the measured levels are consistent with the CVI datasheet. We just have to take care to find all these stray beams and dump them.
I will try and optimize the amount of green power we can get out of the doubler a little more (but anyway 3mW should still be plenty for ALS). Once I'm happy with that, I will proceed with laying out the optics for mode-matching the green to the arm.
Layout as of today. Most of the green path is done. The Green REFL PD + PZT mirrors have not been hooked up to their respective power sources yet (I wonder if it is okay to start laying cables through the feedthroughs on either end of the table already, or if we want to first install whatever it is that will eventually make them airtight?). A rough power budget has been included (with no harmonic separator just before the window), though some optimization can be done once the table is completely repopulated.
A zoomed-in version of the REFL path.
Some general notes:
I am closing the PSL shutter and the EX laser shutters for the night, as I have applied a layer of first contact to the window for cleaning purposes and we don't want any laser light incident on it. The window may be so dirty that we need multiple F.C. cleaning rounds - we will see how it looks tomorrow...
The IR Transmon system is almost completely laid out, only the QPD remains to be installed. Some notes:
I feel like once the above are resolved, the next step would be to PDH lock the green to the arm and see what sort of transmission we get on the PSL table. It may be the polarization or just alignment, but for some reason, the transmitted green light from the X arm is showing up at GTRY now (up to 0.5, which is the level we are used to when the Y arm has green locked!). So a rough plan of action:
Using the modulation frequency suggested here, I hooked up the PDH setup at the X-end and succeeded in locking the green to the X arm. I then rotated the HWP after the green Faraday to maximize the TRX output, which after a cursory alignment optimization is ~0.2 (I believe we were used to seeing ~0.3 before the end laser went wonky). Obviously much optimization/characterization remains to be done. But for tonight, I am closing the PSL and EX laser shutters and applying first contact to the window once more, courtesy of more PEEK from Koji's lab in W Bridge. Once this is taken care of, I can install the Oplev tomorrow, and then set about optimizing various things in a systematic way. The MC autolocker has also been disabled...
Side note: for the IR Transmon QPD, we'd like a post that is ~0.75" taller given the difference in beam height from the arm cavity and on the endtable. I will put together a drawing for Steve tomorrow..
After a second round of F.C. application, I think the window is clean enough and there are no residual F.C. pieces anywhere near the central parts of the window (indeed I think we got most of it off). So I am going to go ahead and install the Oplev.
It looks very promising.
With Steve's help, I installed the Oplev earlier today. I adjusted the positions of the two lenses until I deemed the spot size on the QPD satisfactory by eye. As a quick check, I verified using the DTT template that the UGF is ~5Hz for both pitch and yaw. There is ~300uW of power incident on the QPD (out of ~2mW from the HeNe). In terms of ADC counts, this is ~13,000 counts which is about what we had prior to taking the endtable apart. There are a couple of spots from reflections off the black glass plate in the vacuum chamber, but in general, I think the overall setup is acceptable.
This completes the bulk of the optical layout. The only bits remaining are to couple the IR into the fiber and to install a power monitoring PD. Pictures to follow shortly.
Now that the layout is complete, it remains to optimize various things. My immediate plan is to do the following:
I will also need to upload the layout drawing to reflect the layout finally implemented.
Not directly related:
The ETMx oplev servo is now on. I then wanted to see if I could lock both arms to IR. I've managed to do this successfully - BUT I think there is something wrong with the X arm dither alignment servo. By manually tweaking the alignment sliders on the IFOalign MEDM screen, I can get the IR transmission up to ~0.95. But when I run the dither, it drives the transmission back down to ~0.6, where it plateaus. I will need to investigate further.
GV Edit: There was some confusion while aligning the Oplev input beam as to how the wedge of the ETM is oriented. We believe the wedge is horizontal, but its orientation (i.e. thicker side on the right or left?) was still ambiguous. I've made a roughly-to-scale sketch (attachment #1) of what I think is the correct orientation - which turns out to be in the opposite sense of the schematic pinned up in the office area.. Does this make sense? Is there some schematic/drawing where the wedge orientation is explicitly indicated? My search of the elog/wiki did not yield any..
Today we spent some time looking into the PDH situation at the X end. A summary of our findings.
I suggested in an earlier elog that after the repair of the NPRO, the PZT capacitance may have changed dramatically. This seems unlikely - I measured the PZT capacitance with the BK Precision LCR meter and found it to be 2.62 nF, which is in excellent agreement with the numbers from elogs 3640 and 4354 - but this makes me wonder how the old setup ever worked. If the PZT capacitance were indeed that value, then for the Pomona box design in elog 4354, and assuming the phase modulation coefficient at the old modulation frequency of ~216kHz was ~30rad/V as suggested by the data in this elog, we would have had a modulation depth of 0.75 rad if the function generator were set to output a signal at 2Vpp (2Vpp * 0.5 * 0.05 * 30rad/V = 1.5rad pp, i.e. an amplitude of 0.75 rad)! Am I missing something here?
Instead of using an attenuator, we could change the capacitor in the Pomona box from 47pF mica to 5pF mica to realize a modulation depth of ~0.2 at the new modulation frequency of 231.25 kHz. In any case, as elog 4354 suggests, the phase introduced by this high-pass filter is non-zero at the modulation frequency, so we may also want to install an all-pass filter that will allow us to control the demodulation phase. This should be easy enough to implement with an OP27 and passive components we have in hand...
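A crude way to estimate the effect of swapping the series capacitor is to treat the Pomona box as a pure capacitive divider into the PZT; the real box (elog 4354) also contains resistors, so this is only the high-frequency asymptote. The 30 rad/V PM coefficient and 2 Vpp drive below are assumptions carried over from the discussion above:

```python
# Rough modulation-depth estimate for different series capacitors, modeling
# the Pomona box as a capacitive divider into the PZT (an approximation -
# the actual box also has resistors that set the high-pass corner and phase).
C_PZT = 2.62e-9   # measured PZT capacitance [F]
PM_COEFF = 30.0   # assumed phase modulation coefficient [rad/V]
V_PK = 1.0        # 2 Vpp from the function generator -> 1 V peak

def mod_depth(c_series):
    """Modulation depth (rad) for a series cap c_series driving the PZT."""
    divider = c_series / (c_series + C_PZT)   # high-frequency divider ratio
    return V_PK * divider * PM_COEFF

m_47pF = mod_depth(47e-12)
m_5pF = mod_depth(5e-12)
print(f"47 pF: m ~ {m_47pF:.3f} rad,  5 pF: m ~ {m_5pF:.3f} rad")
```

This simple divider model won't reproduce the ~0.2 rad figure exactly (the resistive parts of the network matter at the modulation frequency), but it shows the scaling with the series capacitance.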
It looks like the hardware reset did the trick. Previously, I had just tried ssh-ing into c0rga and rebooting it. At the time, however, Steve and I noticed that the various LEDs on the RGA unit weren't on, as they are supposed to be in the nominal operating state. Today, Steve reported that all LEDs except the RS232 one were on, so I tried following the steps in this elog again, and things look like they are back up and running. I'm attaching a plot of the scan generated using the plotrgascan MATLAB script; it looks comparable to the plot in elog 11697, which, if I remember right, was acceptable.
Unless there is some reason we want to keep this c0rga machine, I will recommission one of the spare Raspberry Pis lying around to interface with the RGA scanner when I get the time...
Our last RGA scan is from February 14, 2016. We had a power outage on the 15th.
Gautam has not succeeded in resetting it. The old c0rga computer looks dead. Q may resurrect it, if he can.
The c0rga computer was off, so I turned it on via the front panel button. After running RGAset.py, RGAlogger.py seems to run. However, there are error messages in the output of the plotrgascan MATLAB script; evidently there are some negative/bogus values in the output.
I'll look into it more tomorrow.
This is a cold scan.
I've been working on putting together a Finesse model for the current 40m configuration. The idea was to see if I could reproduce a model that is in agreement with what we have been seeing during the recent DRFPMI locks. With Antonio and EricQ's help, I've been making slow progress in my forays into Finesse and pyKat. Here is a summary of what I have so far.
Having put together the .kat file (code attached, though this is probably of limited use - the new model with the RC folding mirrors oriented the right way will be what is relevant), I was able to recover a power recycling gain of ~7.5. The arm transmission at full lock also matches the expected value (125*80uW ~ 10mW) based on a recent measurement I did while putting the X endtable together. I also tuned the arm losses to see (qualitatively) that the power recycling gain tracked this curve by Yutaro. EricQ suggested I do a few more checks:
Conclusion: It doesn't look like I've done anything crazy. So unless anyone thinks there are any further checks I should do on this "toy" model, I will start putting together the "correct" model - using RC folding mirrors that are oriented the right way, and using the "ideal" RC cavity lengths as detailed on this wiki page. The plan of action then is
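As an analytic cross-check on the Finesse power recycling gain, the textbook two-mirror formula can be evaluated directly: PRG = T_prm / (1 - r_prm·r_arm)², with r_arm the arm's on-resonance amplitude reflectivity. The mirror parameters below are illustrative placeholders, not the exact 40m numbers:

```python
# Analytic sanity check of the power recycling gain, independent of Finesse.
# All transmissivities/losses below are assumed round numbers, not 40m specs.
import numpy as np

T_ITM = 1.4e-2      # ITM power transmissivity (assumed)
T_ETM = 15e-6       # ETM power transmissivity (assumed)
L_RT = 300e-6       # arm round-trip loss (assumed)
T_PRM = 5.6e-2      # PRM power transmissivity (assumed)

r_i = np.sqrt(1 - T_ITM)
r_e = np.sqrt(1 - T_ETM - L_RT)
r_p = np.sqrt(1 - T_PRM)

# On-resonance arm reflectivity (lossless ITM assumed, overcoupled cavity)
r_arm = (r_e - r_i) / (1 - r_i * r_e)

prg = T_PRM / (1 - r_p * r_arm)**2
print(f"arm reflectivity on resonance: {r_arm:.4f}, PRG ~ {prg:.1f}")
```

The exact value depends strongly on the assumed arm losses (compare Yutaro's curve), so this is only a consistency check on the order of magnitude.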
Sidenote to self: It would be nice to consolidate the most recent cavity length measurements in one place sometime...
As we realized during the EX table switch, the transmitted beam height from the arm is not exactly 4" relative to the endtable; it is more like 4.75" at the X-end (yet to be investigated at the Y-end). As a result, the present configuration involves the steering optics immediately before the Oplev and TransMon QPDs sending the beam downwards at about 5 degrees. Although this isn't an extremely large angle, we would like to have things more level. For this purpose, Steve has ordered some Aluminium I-beams (1/2" thick) which we can cut to size as we require. The idea is to have the QPD enclosures mounted on these beams and then clamped to the table. One concern was electrical isolation, but Steve thinks Delrin washers between the QPD enclosure and the mount will suffice. We will move ahead with getting these machined once I investigate the situation at the Y end as well... The I-beams should be here sometime next week...
Having played around with a toy finesse model, I went about setting up a model in which the RC folding mirrors are not flipped. I then repeated the low-level tests detailed in the earlier elog, after which I ran a few spatial mode overlap analyses, the results of which are presented here. It remains to do a stability analysis.
Overview of model parameters (more details to follow):
Results (general note: positive RoC in these plots mean a concave surface as seen by the beam):
Next step is to carry out a stability analysis...
In a previous elog, I demonstrated that the RoC mismatch between ETMX and ETMY does not result in appreciable degradation in the mode overlap of the two arm modes. Koji suggested also checking the effect on the contrast defect. I'm attaching the results of this investigation (I've plotted the contrast, rather than the contrast defect 1-C).
Details and methodology
Attachment #1 shows the result of this scan (as mentioned earlier, I plot the contrast C and not the contrast defect 1-C; sorry for the wrong plot title, but it takes ~30 mins to run the simulation, which is why I didn't want to do it again). If the RoC of the spare ETMs is about 54m, the loss in contrast is about 0.5%. This is in good agreement with this technical note by Koji - it tells us to expect a contrast defect in the region of 0.5%-1% (depending on what parameter you use as the RoC of ETMY).
It doesn't seem that switching out the current ETM with one of the spare ETMs will result in dramatic degradation of the contrast defect...
That sounds weird. If the ETMY RoC is 60 m, why would you use 57.6 m in the simulation? According to the phase map web page, it really is 60.2 m.
This was an oversight on my part. I've updated the .kat file to have all the optics have the RoC as per the phase map page. I then re-did the tracing of the Y arm cavity mode to determine the appropriate beam parameters at the laser in the simulation, and repeated the sweep of RoC of ETMX while holding RoC of ETMY fixed at 60.2m. The revised contrast defect plot is attached (this time it is the contrast defect, and not the contrast, but since I was running the simulation again I thought I may as well change the plot).
As per this plot, if the ETMX RoC is ~54.8m (the closer of the two spares to 60.2m), the contrast defect is 0.9%, again in good agreement with what the note linked in the previous elog tells us to expect...
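A back-of-the-envelope version of this calculation can be done outside Finesse: treating each arm as flat-concave (ITM approximately flat), the cavity waist sits at the ITM, so the two arm modes differ mainly in waist size, and the TEM00 power overlap is (2·w1·w2/(w1²+w2²))². This neglects ITM curvature and any waist-position offset, so it's only a rough cross-check:

```python
# Crude mode-mismatch estimate for the ETM RoC swap, assuming flat ITMs
# so that each arm's waist sits at the ITM. Arm length is approximate.
import numpy as np

LAM = 1064e-9         # wavelength [m]
L_ARM = 37.8          # arm length [m], approximate

def waist(roc_etm):
    """Waist (at the flat ITM) of a flat-concave cavity of length L_ARM."""
    g2 = 1 - L_ARM / roc_etm
    return np.sqrt((L_ARM * LAM / np.pi) * np.sqrt(g2 / (1 - g2)))

w_y = waist(60.2)     # ETMY RoC per the phase map page
w_x = waist(54.8)     # closer of the two spare ETMs

overlap = (2 * w_y * w_x / (w_y**2 + w_x**2))**2
print(f"w_y = {w_y*1e3:.2f} mm, w_x = {w_x*1e3:.2f} mm, "
      f"TEM00 overlap = {overlap:.4f} (mismatch ~ {(1-overlap)*100:.2f}%)")
```

The resulting fraction-of-a-percent mode mismatch is the same order as the simulated contrast defect, which is reassuring given how crude the model is.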
The drill room floor will be retiled Thursday, June 16. Temporary nitrogen line set up will allow emptying the hole area.
Ifo room entry will be through control room.
The retiling work has finished, Steve and I restored the N2 supply configuration to its normal state. The sequence of steps followed was:
Note: the valve isolating the RGA automatically shutoff during this work, possibly because it detected a pressure above its threshold - after checking the appropriate pressure gauges, we reopened this valve as well.
The attached screenshot suggests that everything went as planned and that the vacuum system is back to normal...
So, it seems that changing the ETMX for one of the spares will change the contrast defect from ~0.1% to 0.9%. True? Seems like that might be a big deal.
That is what the simulation suggests... I repeated the simulation for a PRFPMI configuration (i.e. no SRM, everything else as per the most up-to-date 40m numbers), and the conclusion is roughly the same - the contrast defect degrades from ~0.1% to ~1.4%... So I would say this is significant. I also attempted to see what the contribution of the asymmetry in the arm losses is, by re-running the simulation with the current loss numbers of 230ppm for the Y arm and 484ppm for the X arm (split equally between the ITM and ETM in each case), and then again with lossless arms - see attachment #1. While this is a factor, the plot suggests that the RoC mismatch effect dominates the contrast defect...
Having investigated the mode-overlap as a function of RoC of the PRC and SRC folding mirrors, I've now been looking into possible stability issues, with the help of some code that EricQ wrote some time back for a similar investigation, but using Finesse to calculate the round trip Gouy phase and other relevant parameters for our current IFO configuration.
To do so, I've been using:
As a first check, I used flat folding mirrors to see what the HOM coupling structure into the IFO is like (the idea being then to track the positions of HOM resonances in terms of CARM offset as I sweep the RoC of the folding mirror).
However, just working with the flat folding mirror configuration suggests that there are order-2 22MHz and order-4 44MHz HOM resonances that are really close to the carrier resonance (see attached plots). This seems to originate from the fact that the Y-arm length is 37.81m (while the "ideal" length is 37.795m), and also from the fact that the ETM RoCs are ~3m larger than the design specification of 57m. Interestingly, this problem isn't completely mitigated if we use the ideal arm lengths - the order-2 resonances do move further away from the carrier resonance, but are still around a CARM offset of +/- 2nm. If we use the design RoC for the ETMs of 57m, then the HOM resonances move completely off the scale of these plots...
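The underlying bookkeeping can be sketched without the full field simulation: compute the arm FSR and the transverse mode spacing from the one-way Gouy phase (ITM treated as flat), then fold each order-n sideband HOM frequency back to the nearest carrier resonance and convert to a CARM offset via df/dL = ν/L. This simplified picture won't exactly reproduce the Finesse plots, but it shows why the resonance positions are so sensitive to the arm length and ETM RoC:

```python
# Where order-n sideband HOMs land relative to a carrier resonance, for a
# flat-concave arm cavity. Simplified bookkeeping, not a field simulation.
import numpy as np

C = 299792458.0
L_ARM = 37.81          # measured Y-arm length [m]
ROC_ETM = 60.2         # ETM RoC [m]
NU = C / 1064e-9       # laser frequency [Hz]

fsr = C / (2 * L_ARM)
g2 = 1 - L_ARM / ROC_ETM
f_tms = fsr * np.arccos(np.sqrt(g2)) / np.pi   # transverse mode spacing

def carm_offset_nm(order, f_sb):
    """CARM offset (nm) at which an order-n HOM of sideband f_sb resonates."""
    df = (order * f_tms + f_sb) % fsr          # distance past a carrier resonance
    df = min(df, fsr - df)                     # fold to the nearest resonance
    return df * L_ARM / NU * 1e9               # df/dL = nu/L for the cavity

for order, f_sb in [(2, 2 * 11.066e6), (4, 4 * 11.066e6)]:
    print(f"order {order}, f_sb {f_sb/1e6:.2f} MHz: "
          f"CARM offset ~ {carm_offset_nm(order, f_sb):.2f} nm")
```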
Last night, we set about trying to see if we could measure and verify the predictions of the simulations, and if there are indeed HOM sidebands co-resonating with the carrier. Koji pointed out that if we clip the transmitted beam from the arm incident on a PD, then the power of the higher order HG modes no longer integrate to 0 (i.e. the orthogonality is broken), and so if there are indeed some co-resonating modes, we should be able to see the beat between them on a spectrum analyzer. The procedure we followed was:
We then repeated the above steps at the X-end (but here, an additional lens had to be installed to focus the IR beam onto the PDA10CF - there was, however, sufficient space on the table so we didn't need to remove the PDA520 for this measurement).
Y-end: DC power on the photodiode at optimal alignment ~ 200mV => spectra taken by deliberately misaligning the beam incident on the PD till the DC power was ~120mV (see remarks about these values).
I converted the peak heights seen on the spectrum analyzer in volts to power by dividing by transimpedance (=5*10^3 V/A into a 50ohm load) * responsivity at 1064nm (~0.6A/W for PDA10CF).
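The conversion described above is just division by the product of transimpedance and responsivity:

```python
# Convert a spectrum-analyzer peak height (V) to optical power on the PD.
GAIN = 5e3          # PDA10CF transimpedance into a 50 ohm load, as quoted [V/A]
RESP = 0.6          # responsivity at 1064 nm [A/W]

def volts_to_watts(v_peak):
    return v_peak / (GAIN * RESP)

# e.g. a 1 mV peak on the analyzer corresponds to:
p_example = volts_to_watts(1e-3)
print(f"1 mV peak -> {p_example*1e9:.1f} nW on the PD")
```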
With Koji's help, I've hacked together an arrangement that will allow us to monitor the output of the coil driver to the UL coil.
The arrangement consists of a short custom ribbon cable with female DB25 connectors on both ends - the particular wire sending the signal to the UL coil has a 100 ohm resistor wired in series. Because the coil has a resistance of only ~20 ohm while the output of the coil driver board has a series 200(?) ohm resistor, a glitch might register as too small a signal to see if we monitored the voltage at that point directly; hence the added series resistor. Tangentially related: the schematic of the coil driver board suggests that the buffered output monitor has a gain of 0.5.
To monitor the voltage, I use the board to which the 4 Oplev signals are currently hooked up. Channel 7 on this particular board (corresponding to ADC channel 30 on c1scx) was conveniently wired up for some prior test, so I used this channel. I then modified the C1SCX model to add a testpoint to monitor the output of this ADC. Next, I turned OFF the input on the coil output filter for the UL coil (i.e. C1:SUS-ETMX_ULCOIL_SW1) so that we can send a known, controlled signal to the UL coil by means of awggui. I then added an excitation at 5 Hz, amplitude 20 counts (as the signal to the coil under normal conditions was approximately of this amplitude), to the excitation channel of the same filter module, which is the state I am leaving the setup in for the night. I have confirmed that I see this 5Hz oscillation on the monitor channel I set up. Oddly, the 0 crossings of the oscillations happen at approximately -1000 counts and not at 0 counts - I wonder where this offset is coming from. The two points across which I am monitoring the voltage are shown in the attached photograph - the black clip is connected to the lead carrying the return signal from the coil.
I also wanted to set up a math block in the model itself that monitors, in addition to the raw ADC channel, a copy from which the known applied signal has been cancelled, as presumably a glitch would be more obvious in such a record. However, I was unable to access the excitation channel to the ULCOIL filter from within the SCX model. So I am just recording the raw output for tonight...
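An offline version of that cancellation is straightforward: least-squares fit a sinusoid of the known excitation frequency (plus a DC term) to the recorded channel and subtract it, leaving a residual in which a glitch should stand out. A sketch with synthetic data (the 16384 Hz sampling rate is an assumption; with real data, x would come from the test point):

```python
# Fit and subtract a known-frequency sinusoid (the 5 Hz awggui excitation)
# from an ADC record, leaving a residual for glitch hunting. Synthetic data.
import numpy as np

FS = 16384.0          # assumed front-end sampling rate [Hz]
F_EXC = 5.0           # known excitation frequency [Hz]

t = np.arange(0, 10, 1 / FS)
rng = np.random.default_rng(2)
x = 20 * np.sin(2 * np.pi * F_EXC * t + 0.3) - 1000 + rng.normal(0, 2, t.size)

# Least-squares fit of [sin, cos, DC] at the known frequency
basis = np.column_stack([np.sin(2 * np.pi * F_EXC * t),
                         np.cos(2 * np.pi * F_EXC * t),
                         np.ones_like(t)])
coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
residual = x - basis @ coeffs
print(f"fitted amplitude = {np.hypot(coeffs[0], coeffs[1]):.1f} counts, "
      f"DC = {coeffs[2]:.0f} counts, residual RMS = {residual.std():.2f} counts")
```

The fitted DC term would also reveal the ~-1000 count offset seen on the monitor channel.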
I've made a few changes to the monitoring setup in the hope we catch a glitch in the DAC output/ sus coil driver electronics. Summary of important changes:
It remains to see if we will actually be able to see the glitch in long stretches of data - it is unclear to me how big a glitch will be in terms of ADC counts.
The relevant channels are : C1:SCX-UL_DIFF_MON and C1:SCX-UL_DIFF_MON_EPICS (pardon the naming conventions as the setup is only temporary after all). Both these should be hovering around 0 in the absence of any glitching. The noise in the measured signal seems to be around 2 ADC counts. I am leaving this as is overnight, hopefully the ETMX coil drive signal chain obliges and gives us some conclusive evidence...
I have not committed any of the model changes to the SVN.
One of pianosa's monitors has ceased to function. For now, pianosa has been set up to operate with just the one monitor.
One of Donatella's monitors has a defective display as well. Maybe we should source some replacements. Koji has said we will talk to Larry Wallace about this..
It may be advantageous to look at the coil output data from when the OSEM damping is on, to try and reproduce the real output signal amplitude that gets sent to the coils.
The amplitude of the applied signal (20) was indeed chosen to roughly match what goes to the coils normally when the OSEM damping is on.
There appears to be no evidence of a detectable glitch in the last 10 hours or so (see attachment #1 - of course this is a 16Hz channel and the full data is yet to be looked at)... I guess the verdict on this is still inconclusive.
Yesterday, I expanded the extent of the ETMX suspension coil driver investigation. I set up identical monitors for two more coils (so now we are monitoring the voltage sent to UL, UR and LL - I didn't set one up for LR because it is on a second DB25 connector). Furthermore, I increased the excitation amplitude from ~20 to ~2000 (each coil had an independent oscillator at slightly different frequency between 5Hz and 8.5 Hz), the logic being that during LSC actuation we send signals of approximately this amplitude to the coils and we wanted to see if a larger amplitude signal somehow makes the system more prone to glitches.
Over ~10 hours of observation, there is no clear evidence of any glitch. About 2 hours ago (~930am PDT Fri Jul 8), the watchdog tripped - but this was because even though I had increased the trip threshold to ~800 for the course of this investigation, megatron runs this script every 20 minutes or so that automatically reduces this threshold by 17 counts - so at some point, the threshold went lower than the coil voltage, causing the watchdog to trip. So this was not a glitch. The other break around 2am PDT earlier today was an FB crash.
Do we now go ahead and pull the suspension out, and proceed with the swap?
While ETMX is out, I'm leaving the larger amplitude excitations to the coils on over the weekend, in case any electronic glitch decides to rear its head over the weekend. The watchdog should be in no danger of tripping now that we have removed the ETM.
Unrelated to this work: while removing the ETMX suspension from the chamber, I also removed the large mirror that was placed inside to aid photo taking, so that there is no danger of an earthquake knocking it over and flooding the chamber with dust.
I have obtained 2x100cc bottles of in-date first contact from Garilynn (use before date is 09/14/2016) for cleaning of our test-masses. They are presently wrapped in foil in the plastic box with all the other first contact supplies.
Today, we attempted to progress as far as we could towards getting the mirror suspended and gluing the second wire standoff. We think we have a workable setup now. At this stage, the suspension wire has been looped around the magnet, the second wire standoff has been inserted, coarse pitch balancing has been done, and we have verified that side OSEM/magnet positioning is tenable. Details below.
Attachment #3 - Unglued stand off with wire in the groove, mirror freely suspended.
Attachment #4 - Glued stand off with wire in the groove, mirror freely suspended. Clearance between wire and magnet looks reasonable.
Attachment #5 - Barrel of optic (underside), mirror freely suspended. The wire seems to be in a reasonable orientation along the barrel, albeit not perfectly parallel.
Koji just pointed out that we should check that the unglued ruby standoff is in good contact with the barrel of the optic. Attachment #1 suggests that maybe this is not the case. If you zoom into Attachment #1, it is not clear if the standoff is sitting on the glue.
Summary: We did some preliminary tests to check if at least one of the side magnet positions is usable for the side OSEM. We mainly wanted to check how much dynamic range we lose because of the sub-optimal longitudinal positioning of the side magnet. We found that when the side magnet was mainly moving along the axis of the side OSEM (with minimal yaw motion as gauged by eye), the PD voltage bottomed out at ~80 counts (while the completely unoccluded readout was ~800 counts).
Today, we did the following:
I will have another look at the spectra tomorrow morning, to see if the damping improves overnight.
Brief summary, some pictures and such follow in the daytime.
The epoxy needs at least 12 hours of room temperature air curing, so no touchy until 3:30PM on Jul 28!
Attachment #1 - After multiple trials shimming the magnet gluing rig with teflon spacers, we think that we managed to find a configuration in which the side magnet edge is between 0.25 mm and 0.5 mm from the groove in the ruby wire standoff in which the wire will sit.
Attachment #2 - Zoomed in view of the side magnet.
Of course we won't know until we suspend the optic, but we believe that we have mitigated the misalignment between the side OSEM axis and side magnet.
The short term plan is to try and suspend ETMY in the end chamber and have a look at the alignment between all magnets and OSEM coils for it. Once the epoxy on ETMX is cured, we will try and suspend the optic again, this time taking extra care while tightening the wire clamps.
Unrelated to this work: Bob just informed me that we had left the air bake oven on overnight - this unfortunately melted the plastic thermocouple inside.
While the ETMX magnets were curing, I wanted to try and suspend ETMY in the endchamber, put in the OSEMs, see if the magnets aligned well with the coils, and run the same type of diagnostics we have been doing for ETMX. However, while I was trying to slip the optic into the wire, the UL magnet on ETMY broke off. I recovered the magnet, and now both optic and magnet are back in the cleanroom. The magnet dumbbell has been cleaned with acetone and then sandpaper to remove residual epoxy - it remains to clean the residue off the optic itself before re-gluing the magnet tonight.
I also noticed that the existing wire in the suspension had a kink in it. It looks fairly sharp, and I think we should change the wire while re-inserting the optic. Putting the optic into an existing loop of wire is tricky: if you go in from the front of the suspension cage, the magnets on the AR side attract the wire, which makes it quite difficult to loop the wire around. I have to think of some way of holding the wires in place while the optic is being placed, and then, once the optic is roughly in position, slip the wire into the grooves in the standoffs.
I took the opportunity to replace the face OSEM coil holder screws while the chamber was open.
EDIT 9 August 2016: It was in fact the LR magnet that was knocked off.
Summary: Third unsuccessful attempt at getting ETMX suspended. I think we should dial the torque wrench back down to 1.0 N m from 1.5 N m for tightening the primary clamp at the top of the SOS tower. No damage to magnets; the standoff was successfully retrieved (it is sitting in the steel bowl).
Unfortunately I don't know of a more deterministic way of deciding on a "safe" torque with which to tighten the bolts except by trial and error. It is also possible that the clamping piece is damaged in some way and is responsible for these breakages, but short of getting the edges chamfered, I am not sure what will help in this regard.
Unrelated to this work: earlier today, before the first wire failure, while I was optimistic about doing fine pitch balancing and gluing the standoff, I set up an optical lever arm ~3 m in length, with the beam from the HeNe on the clean bench 5.5 inches above the table and parallel to it (verified using an iris close to the HeNe and another at the end of the lever arm). I also set up the PZT buzzer - it needs a function generator as well for our application, so I brought one into the cleanroom from the lab and wiped it down with isopropanol. The procedure says to apply a 5 Vrms triangular wave at 1000 Hz, but our SR function generators can't put out such a large signal; the most they could manage was ~2 Vrms (we also have to be careful about applying an offset so as to not send any negative voltages to the PZT voltage unit's "External input"). All the pieces we need for the fine pitch balancing should be in the cleanroom now.
[lydia, steve, ericq, gautam]
[lydia, ericq, gautam]
Lydia also briefly played around with the IR camera to inspect the OSEMs. A more thorough investigation will be done once the cage is in for air baking. From our initial survey, we feel that the beams are pretty well aligned along the straight line between PD and LED - we estimate the upper bound on any misalignment to be ~10 degrees.
Part 1: Rotation of optic
Part 2: Replacement of holder for top pair of OSEMs
Part 3: Fine pitch balancing
Attachment #1: Striptool trace showing OSEMs are pretty well centered (towards the end, I turned on the HEPA filters again, which explains the shift of the traces). The y-axis is normalized such that the maximum displayed corresponds to the fully open PD output for each OSEM
Attachment #2: Fine pitch balancing optical lever setup
Attachment #3: Tower assembly
Attachment #4: SIDE OSEM close-up
Attachment #5: UR OSEM close-up
Attachment #6: UL OSEM close-up
Attachment #7: LL OSEM close-up (this is the concerning one)
Attachment #8: LR OSEM close-up
We should also check the following (I forgot and don't want to wear my clean jumpsuit again now to take more photos):
Attachment #1: Wire is in the groove in the unglued wire-standoff, groove rotation looks pretty good.
Attachment #2: Ruby standoff is sitting on the barrel of the optic (if you zoom in)
Attachment #3: Side magnet is well centered w.r.t OSEM coil
Attachment #4: UR magnet is well centered w.r.t OSEM coil
Attachment #5: UL magnet is well centered w.r.t OSEM coil
Attachment #6: LL magnet is well centered w.r.t OSEM coil
Attachment #7: LR magnet is well centered w.r.t OSEM coil
Attachment #8: Wire is in the groove in the glued Ruby standoff
Attachment #9: Standoff after gluing. 3-4 drops of epoxy are visible on the wire, but none looks to have seeped into the groove itself
Attachment #10: Side view of newly glued Ruby standoff
Attachment #11: Before and After gluing shots.
In order to help Praful do his huddle test, I have temporarily arranged for the outputs of the 3 channels he wants to monitor to be acquired as DQ channels at 2048 Hz by editing the C1PEM model. No prior DQ channels were set up for the microphones. Data collected overnight should be sufficient for Praful's analysis, so we can remove these DQ channels from C1PEM before committing the updated model to the svn. There is in fact a filter that is enabled for these microphone channels that claims to convert the amplified microphone output to Pascals, but it is just a gain of 0.0005.
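For reference, the counts-to-Pascals conversion mentioned above is just a flat gain. A minimal sketch of applying it offline (the function and variable names are my own placeholders, not anything in the C1PEM model; the raw samples here are made-up numbers):

```python
# Flat calibration from the filter mentioned above: 0.0005 Pa per count
# of amplified microphone output.
MIC_CAL_PA_PER_COUNT = 0.0005

def counts_to_pascals(raw_counts):
    """Apply the flat microphone calibration to a sequence of raw samples."""
    return [x * MIC_CAL_PA_PER_COUNT for x in raw_counts]

# Example with made-up raw samples from a 2048 Hz DQ stretch:
raw = [200.0, -150.0, 1000.0]
print(counts_to_pascals(raw))  # calibrated samples in Pa
```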
In the long term, once we install microphones around the IFO, we can update C1PEM to reflect the naming conventions for the microphones as is appropriate.
How much pitch bias do you need in order to correct this pitch misalignment?
That may give you an idea of how bad this misalignment is.
I needed to move the pitch slider on the IFO align screen to -2.10 (V?) from 0 to get the HeNe spot to the center of the iris. The slider runs from -10V to 10V, so this is something like 10% of its range. I am not sure if it means anything, but the last saved backup value of this pitch slider was -3.70. Of course, application of the bias will affect all the coils, and when the optic is pitch balanced, the lower magnets are a little too far out and the upper magnets are a little too far in (see Attachment #1), as we expect for a downward pitch misalignment to be corrected. I suppose we can iteratively play with the coil positions and the bias such that the coils are centered and we are well balanced (maybe this explains the old value of -3.70).
I also checked that the side magnet can completely occlude its PD. With the damping on, by pushing the coil all the way in, the output of the side PD went down to 0.
This elog is meant to summarize my numerical simulations for looking into the effects of curvature on the RC mirrors. I've tried to go through my reasoning (which may or may not be correct) and once this gets a bit more refined, I will put all of this into a technical note.
I assume that we are prepared to live with the pitch bias situation of ETMY (i.e. we can achieve a configuration in which there is some pitch bias to the coils, and the OSEMs are inserted such that the PD outputs are half their maximum value). Or at least that we don't want to go through the whole standoff-regluing procedure for ETMY as well.
So today I took the optic out, and began to make some preparations for the air bake.
In summary, the questions that remain (to me) are:
I think we can start the baking of the optics tomorrow. The timeline for the suspension towers is unclear; it depends on how we want to deal with the sanding dilemma.
Summary of roundtable meeting yesterday between EricG, EricQ, Koji and Gautam:
We identified two possible courses of action.
I have done some calculations to evaluate the first alternative.
Something else that came up in yesterday's meeting was whether we should go in for 1" optics rather than 2", seeing as the beam spot is only ~3mm on these. It is not clear what (if any) advantages this would offer us (indeed, for the same RoC, the sag is smaller for a 1" optic than for a 2").
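To make the sag comparison concrete, here is a quick check using the spherical-cap expression s = R - sqrt(R^2 - r^2) ≈ r^2/(2R); the 5 km RoC is just an illustrative value from the range considered above, not a spec:

```python
import math

def sag(roc_m, optic_radius_m):
    """Spherical sag of a mirror face: s = R - sqrt(R^2 - r^2) ~ r^2/(2R)."""
    return roc_m - math.sqrt(roc_m**2 - optic_radius_m**2)

R = 5e3           # illustrative RoC of 5 km
r_1inch = 0.0127  # half-diameter of a 1" optic, in metres
r_2inch = 0.0254  # half-diameter of a 2" optic

print(sag(R, r_1inch) * 1e9)  # ~16 nm
print(sag(R, r_2inch) * 1e9)  # ~65 nm
```

So at km-scale RoCs the sag is tens of nanometres either way, which is part of why such curvatures are hard to specify tightly.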
Attachment #1: Mode-matching maps between PRX and Xarm cavities, PRY and Yarm cavities with some contours overlaid.
Attachment #2: Mode-matching maps between SRX and Xarm cavities, SRY and Yarm cavities with some contours overlaid.
Attachment #3: Gouy phase calculations for the PRC
Attachment #4: Gouy phase calculations for the SRC
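The mode-matching numbers in these maps are overlap integrals between the cavity eigenmodes. As a minimal sketch of the underlying quantity, here is the standard power-coupling expression for two fundamental Gaussian modes with waist radii w1, w2 and waist positions separated by dz (this is just the textbook formula, not the actual code behind the maps; the example waist sizes are made up):

```python
import math

LAMBDA = 1064e-9  # Nd:YAG wavelength in metres

def mode_overlap(w1, w2, dz, lam=LAMBDA):
    """Power coupling between two fundamental Gaussian modes with waist
    radii w1, w2 (metres) whose waist positions are separated by dz (metres)."""
    return 4.0 / ((w1 / w2 + w2 / w1) ** 2 + (lam * dz / (math.pi * w1 * w2)) ** 2)

# Perfectly matched beams couple with unity efficiency:
print(mode_overlap(3e-3, 3e-3, 0.0))  # 1.0
# A 10% waist-size mismatch alone costs only ~1%:
print(mode_overlap(3e-3, 3.3e-3, 0.0))
```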
Here are the results for case 2 (flat PR3/SR3; for the purpose of simulation, I've used a concave mirror with RoC in the range 5-15km, and concave PR2/SR2, for which I've looked at the RoC range 300m-4km).
Attachment #1: Mode matching between PRC cavities and arm cavities with some contour plots
Attachment #2: Mode matching between SRC cavities and arm cavities with some contour plots
Attachment #3: Gouy phase and TMS for the PRC. I've plotted two sets of curves, one for a PR3 with RoC 5km, and the other for a PR3 with RoC 15km
Attachment #4: Gouy phase and TMS for the SRC. Two sets of curves plotted, as above.
Hopefully EricG will have some information with regards to what is practical to spec at tomorrow's meeting.
EDIT: Added 9pm, 16 Aug 2016
A useful number to have is the designed one-way Gouy phase and TMS for the various cavities. To calculate these, I assume flat folding mirrors, and that the PRM has an RoC of 115.5m, SRM has an RoC of 148m (numbers taken from the wiki). The results may be summarized as:
So, there are regions in parameter space for both options (i.e. keep current G&H mirrors, or order two new sets of folding mirrors) that get us close to the design numbers...
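As a sanity check on such numbers, the one-way Gouy phase and TMS of a simple two-mirror cavity follow from the g-parameters, g_i = 1 - L/R_i. This is the textbook plane/curved-mirror result, not the folded-RC code used for the calculations above, and the 40 m length and 60 m RoC below are purely illustrative:

```python
import math

C = 299792458.0  # speed of light, m/s

def gouy_and_tms(length_m, roc1_m, roc2_m):
    """One-way Gouy phase (rad) and transverse mode spacing (Hz) for a
    simple two-mirror cavity, via g_i = 1 - L/R_i and
    psi = arccos(sqrt(g1*g2)), TMS = FSR * psi / pi."""
    g1 = 1.0 - length_m / roc1_m
    g2 = 1.0 - length_m / roc2_m
    gouy = math.acos(math.sqrt(g1 * g2))  # valid for 0 <= g1*g2 <= 1
    fsr = C / (2.0 * length_m)
    tms = fsr * gouy / math.pi
    return gouy, tms

# Illustrative numbers: 40 m cavity, flat input mirror, 60 m RoC end mirror
gouy, tms = gouy_and_tms(40.0, float("inf"), 60.0)
print(math.degrees(gouy))  # one-way Gouy phase in degrees, ~54.7
print(tms / 1e6)           # TMS in MHz, ~1.14
```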
I put both ETMX and ETMY into the air-bake oven at approximately 8.45pm tonight. They can be removed at 8.45am tomorrow morning.
Keeping these design numbers in mind, here are a few possible scenarios. The "designed" TMS numbers from my previous elog are above for quick reference.
Case 1: Keep existing G&H mirror, flip it back the right way, and order new PR3/SR3.
Case 2: Order two new sets of folding mirrors
At first glance, it looks like the tolerances are much larger for Case 2, but we also have to keep in mind that for such large RoCs, in the km range, it may be impractical to specify tolerances as tight as in the hundreds-of-metres range. So these are a set of numbers to keep in mind, which we can revisit once we hear back from vendors as to what they can do.
For consolidation purposes, here are the aLIGO requirements for the coatings on the RC folding mirrors: PR2, PR3, SR2, SR3
I just put in the following into the air bake oven for a 12 hour, 70C bake:
I put these in at 10.30pm, so the oven will be turned off at 10.30am tomorrow morning. The oven temperature seems stable in the region 70-80 C (there is no temperature control except for the built-in oven control; I just adjusted the dial until I found that the oven remains at ~70C).
Tomorrow, we will look to put on first contact onto the ETMs, and then get about to re-suspending them.