  40m Log, Page 204 of 339
ID   Date   Author   Type   Category   Subject
  12545   Mon Oct 10 18:34:52 2016   gautam   Update   General   PZT OM Mirrors

I did a quick survey of the drive electronics for the PZT OM mirrors today. The hope is that we can correct for the clipping observed in the AS beam by using OM4 (in the BS/PRM chamber) and OM5 (in the OMC chamber).

Here is a summary of my findings.

  • Schematic for (what I assume is) the driver unit (located in the short electronics rack by the OMC chamber/AS table) can be found here
  • This is not hooked up to any HV power supply. There is a (short) cable on the back that is labelled '150V' but it isn't connected to anything. There are a bunch of 150V KEPCO power supplies in 1X1, looks like we will have to lay out some cable to power the unit
  • The driver is also not connected to any fast front-end machine or slow machine. According to the schematic, we can use J4, a DSub9 connector on the front panel, to supply drive signals to the two PZTs' X and Y axes. Presumably we can use this, together with a function generator or DC power supply, to drive the PZTs. I have fashioned a cable with a DSub9 connector and some BNC connectors for this purpose.

I hope these have the correct in-vacuum connections. We also have to hope that the clipping is downstream of OM4 for us to be able to do anything about it using the PZT mirrors. 

  12551   Tue Oct 11 13:30:49 2016   gautam   Update   SUS   PRM LR problematic again

Perhaps the problem is electrical? The attached plot shows a downward trend in the LR sensor output over the past 20 days that is not visible in any of the other 4 sensor signals. The Al foil was shorting the electrical contacts for nearly 2 months, so perhaps some part of the driver circuit needs to be replaced? If so, a Satellite Box swap should tell us more; I will switch the PRM and SRM satellite boxes. It could also be a dying LED on the OSEM itself, I suppose. If we are accessing the chamber, we should come up with a more robust insulating cap solution for the OSEMs rather than this hacky Al foil + kapton arrangement.

The PRM and SRM Satellite boxes have been switched for the time being. I had to adjust some of the damping loop gains for both PRM and SRM and also the PRM input matrix to achieve stable damping as the PRM Satellite box has a Side sensor which reads out 0-10V as opposed to the 0-2V that is usually the case. Furthermore, the output of the LR sensor going into the input matrix has been turned off.

 

  12552   Wed Oct 12 13:34:28 2016   gautam   Update   SUS   PRM LR problematic again

Looks like what were PRM problems are now seen in the SRM channels, while PRM itself seems well behaved. This supports the hypothesis that the satellite box is problematic, rather than any in-vacuum shenanigans.

Eric noted in this elog that when this problem was first noticed, switching Satellite boxes didn't seem to fix the problem. I think that the original problem was that the Al foil shorted the contacts on the back of the OSEM. Presumably, running the current driver with (close to) 0 load over 2 months damaged that part of the Satellite box circuitry, which led to the subsequent observations of glitchy behaviour after the pumpdown. This raises the question - what is the quick fix? Do we try swapping out the LM6321 in the LR LED current driver stage?

GV Edit Nov 2 2016: According to Rana, the load of the high speed current buffer LM6321 is 20 ohms (13 from the coil, and 7 from the wires between the Sat. Box and the coil). So, while the Al foil was shorting the coil, the buffer would still have seen at least 7 ohms of load resistance - not quite a short circuit. Moreover, the schematic suggests that the kind of overvoltage protection scheme suggested on page 6 of the LM6321 datasheet has been employed. So it is becoming harder to believe that the problem lies with the output buffer. In any case, we have procured 20 of these discontinued ICs for debugging should we need them, and Steve is looking to buy some more. Ben Abbot will come by later in the afternoon to try and help us debug.
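The resistance bookkeeping above can be sanity-checked with a couple of lines; the 13 ohm and 7 ohm figures are the ones quoted in this entry.

```python
# Load seen by the LM6321 current buffer, per the figures above:
# 13 ohms from the coil, 7 ohms from the Sat. Box <-> coil wiring.
R_COIL = 13.0  # ohms
R_WIRE = 7.0   # ohms

r_normal = R_COIL + R_WIRE  # normal operation: 20 ohms total
r_shorted = R_WIRE          # Al foil shorts the coil; wiring remains

print(r_normal, r_shorted)  # -> 20.0 7.0
```

So even with the foil in place, the buffer saw a ~7 ohm load rather than a dead short, which is why the output-buffer failure hypothesis is getting harder to believe.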

  12560   Thu Oct 13 19:28:14 2016   gautam   Update   General   in-air alignment

I did the following today to prepare for taking the doors off tomorrow.

  • Locked MC at low power
    • low power autolocker used during the last vent isn't working so well now
    • so I manually locked the IMC - locks are holding for ~30 mins and MC transmission was maximized by tweaking MC1 and MC2 alignments. The transmission is now ~1150 which is what I remember it being from the last vent
    • I had to restart c1aux to run LSCoffsets
  • Aligned arms to green using bias sliders on IFO align
    • X green transmission is ~0.4 and Y green transmission is ~0.5 which is what I remember it being before this vent
  • Removed ND filters from end Transmon QPDs since there is so little light now
  • Locked Y arm, ran the dither
    • Some tip tilt beam walking was required before any flashes were seen
    • I had to tweak the LSC gain for this to work
    • TRY is ~0.3 - in the previous vent, in air low power locking yielded TRY of ~0.6 but the 50-50 BS that splits the light between the high gain PD and the QPD was removed back then so these numbers are consistent
  • Tried locking X arm
    • For some reason, I can't get the triggering to work well - the trigger in monitor channel (LSC-XARM_TRIG_IN) and LSC-TRX_OUT_DQ are not the same, should they not be?
    • Tried using both QPD and high gain PD to lock, no luck. I also checked the error signal for DC offsets and that the demod phase was okay
    • In any case, there are TRX flashes of ~0.3 as well, this plus the reasonable green transmission makes me think the X arm alignment is alright
  • All the oplev spots are on their QPDs in the +/- 100 range. I didn't bother centering them for now

I am leaving all shutters closed overnight.

So I think we are ready to take the doors off at 8am tomorrow morning, unless anyone thinks there are any further checks to be done first.


Vent objectives:

  1. Fix AS beam clipping issues (elog1, elog2)
  2. Look into the green scatter situation (elog)

Should we look to do anything else now? One thing that comes to mind: should we install the ITM baffles? Or would that be more invasive than necessary for this vent?


Steve reported to me that he was unable to ssh into the control room machines from the laptops at the Xend and near the vacuum rack. The problem was with pianosa being frozen up. I did a manual reboot of pianosa and was able to ssh into it from both laptops just now.

  12561   Fri Oct 14 10:31:53 2016   gautam   Update   General   doors are off ITMY and BS/PRM chambers

[steve,ericq,gautam]

We re-checked IMC locking and the arm alignments (we were able to lock and dither-align both arms today, and also made the michelson spot look reasonable on the camera), and made sure that the AS and REFL spots were in the camera ballpark. We then proceeded to take the heavy doors off the ITMY and BS/PRM chambers. We also quickly made sure that it is possible to remove the side door of the OMC chamber with the current crane configuration, but have left it on for now.

The hunt for clipping now begins.

  12563   Fri Oct 14 18:33:55 2016   gautam   Update   General   AS clipping investigations

[steve,ericq,gautam]

In the afternoon, we took the heavy door off the OMC chamber as well, such that we could trace the AS beam all the way out to the AP table. 

In summary, we determined the following today:

  1. Beam is centered on SRM, as judged by placing the SOS iris on the tower
     
  2. Beam is a little off on OM1 in yaw, but still >2 beam diameters away from the edge of the steering optic, pitch is pretty good
  3. Beam is okay on OM2 
  4. Beam is okay on OM3 - but the beam from OM3 to OM4 is perilously close to clipping on the green steering mirror between these two steering optics (see CAD drawing). We think this is where the effect of the SR2 hysteresis, whatever it is, would show up first.
  5. Beam is a little low and a little to the left on OM4 (the first PZTJena mirror)
  6. Beam is well clear of other optics in the BS PRM chamber on the way from OM4 to OM5 in the OMC chamber
  7. Beam is a little low and a little to the left of OM5 in the OMC chamber. This is the second PZTJena mirror. We are approximately 1 beam diameter away from clipping on this 1" optic
    Link to IMG_2289.JPG
  8. Beam is off center on OMPO-OMMTSM partially transmissive optic, but because this is a 2" optic, the room for error is much more
    Link to IMG_2294.JPG
  9. Beam is well clear of optics on OMC table on the way from OMPO-OMMTSM to OM6, the final steering mirror bringing the AS beam out onto the table
  10. Beam is low and to the left on OM6. It is pretty bad here, we are < 1 beam diameter away from clipping on this optic, this along with the near miss on the BS/PRM chamber are the two most precarious positions we noticed today, consistent with the hypothesis in this elog that there could be multiple in vacuum clipping points
    Link to IMG_2306.JPG
  11. Beam clears the mirror just before the window pretty comfortably (see photo, CAD drawing). But this mirror is not being used for anything useful at the moment. More importantly, there is some reflection off the window back onto this mirror frame, which is then scattering and creating some ghost beams - this could explain the anomalous ASDC behaviour Koji and Yutaro saw. In any case, I would favour removing this mirror since it is serving no purpose at the moment.
    Link to IMG_2310.JPG

Attachment #5 is extracted from the 40m CAD drawing, which was last updated in 2012. It shows the beam path for the output beam from the BS all the way to the AP table (you may need to zoom in to see some labels). The drawing may not be accurate for the OMC chamber, but it does show all the relevant optics approximately in their current positions.

EQ will put up photos from the ITMY and BS/PRM chambers.

Plan for Monday: Reconfirm all the findings from today immediately after running the dither alignment so that we can be sure that the ITMs are well-aligned. Then start at OM1 and steer the beam out of the chambers, centering the beam as best as possible given other constraints on all the optics sequentially. All shutters are closed for the weekend, though I left the SOS iris in the chamber...

Here is the link to the Picasa album with a bunch of photos from the OMC chamber prior to us making any changes inside it - there are also some photos in there of the AS beam path inside the OMC chamber...

Attachment 1: IMG_2289.JPG
Attachment 2: IMG_2294.JPG
Attachment 3: IMG_2306.JPG
Attachment 4: IMG_2310.JPG
Attachment 5: ASBeamClipping.pdf
  12566   Mon Oct 17 22:45:16 2016   gautam   Update   General   AS beam centered on all OMs

[ericq, lydia, gautam]

IMC realignment, Arm dither alignment

  • We started today by re-locking the PMC (required a c1psl restart), re-locking the IMC and then locking the arms
  • While trying to dither align the arms, I could only get the Y arm transmission to a maximum of ~0.09, whereas something like 0.3 has been typical for a well-aligned arm this vent
  • As it turns out, Y arm was probably locked to an HOM, as a result of some minor drift in the ITMY optical table leveling due to the SOS tower aperture being left in over the weekend

ITMY chamber

  • We then resolved to start at the ITMY chamber, and re-confirm that the beam is indeed centered on the SRM by means of the above-mentioned aperture
  • Initially, there was considerable yaw misalignment on the aperture, probably due to the table level drifting because of the additional weight of the aperture
  • As soon as I removed the aperture, eric was able to re-dither-align the arms and their transmission went back up to the usual level of ~0.3 we are used to this vent
  • We quickly re-inserted the aperture and confirmed that the beam was indeed centered on the SRM
  • Then we removed the aperture from the chamber and set about inspecting the beam position on OM1
  • While the beam position wasn't terribly bad, we reasoned that we may as well do as good a job as we can now - so OM1 was moved ~0.5 inches such that the beam through the SRM is now well centered on OM1 (see Attachment #1 for a CAD drawing of the ITMY table layout and the direction in which OM1 was moved)
  • Naturally this affected the beam position on OM2 - I re-centered the beam on OM2 by first coarsely rotating OM1 about the post it is mounted on, and then with the knobs on the mount. The beam is now well centered on OM2
  • We then went about checking the table leveling and found that the leveling had drifted substantially - I re-levelled the table by moving some of the weights around, but this has to be re-checked before closing up... 

BS/PRM chamber

  • The beam from OM2 was easily located in the BS/PRM chamber - it required minor yaw adjustment on OM2 to center the beam on OM3
  • Once the beam was centered on OM3, minor pitch and yaw adjustments on the OM3 mount were required to center the beam on OM4
  • The beam path from OM3 to OM4, and OM4 to the edge of the BS/PRM chamber towards the OMC chamber was checked. There is now good clearance (>2 beam diameters) between the beam from OM4 to the OMC chamber, and the green steering mirror in the path, which was one of the prime clipping candidates identified on Friday

OMC chamber

  • First, the beam was centered on OM5 by minor tweaking of the pitch and yaw knobs on OM4 (see Attachment #2)
  • Next, we set about removing the unused mirror just prior to the window on the AP table (see Attachment #3). PSL shutter was closed for this stage of work, in order to minimize the chance of staring directly into the input beam!
  • Unfortunately, we neglected to check the table leveling prior to removing the optic. A check after removing the optic suggested that the table wasn't level - this isn't so easy to check, as the table is really crowded and we can only really check near the edges of the table (see Attachment #3). But placing the level near the edge introduces an unknown amount of additional tilt due to its weight. We tried to minimize these effects by using the small spirit level, which confirmed that the table was indeed not level
  • To mitigate this, we placed a rectangular weight (clean) around the region where the removed mirror used to sit (see Attachment #3). Approximately half the block extends over the edge of the table, but it is bolted down. The leveling still isn't perfect - but we don't want to be too invasive on this table (see next bullet point). Since there are no suspended optics on this table, I think the leveling isn't as critical as on the other tables. We will take another pass at this tomorrow, but I think we are in a good enough state right now.
  • All this must have bumped the table quite a bit, because when we attempted re-locking the IMC, we noticed substantial misalignment. We should of course have anticipated this, because the mirror launching the input beam into the IMC, and also MMT2 launching the beam into the arms, sit on this table! After exploring the alignment space of the IMC for a while, eric was able to re-lock the IMC and recover nominal transmission levels of ~1200 counts.
  • We then re-locked the arms (needed some tip-tilt tweaking) and ran the dither again, setting us up for the final alignment onto OM6
  • OM5 pitch and yaw knobs were used to center the beam on OM6 - the resulting beam spot on OMPO-OMMTSM and OM6 are shown in Attachment #4 and Attachment #5 respectively. The centering on OMPO-OMMTSM isn't spectacular, but I wanted to avoid moving this optic if possible. Moreover, we don't really need the beam to follow this path (see last bullet in this section)
  • Beam path in the OMC chamber (OM5 --> OMPO-OMMTSM --> OM6 --> window) was checked and no significant danger of clipping was found
  • Beam makes it cleanly through the window onto the AP table. We tweaked the pitch and yaw knobs on OM6 to center the beam on the first in-air pick off mirror steering the AS beam on the AP table. The beam is now visible on the camera, and looks clean, no hint of clipping
  • As a check, I wondered where the beam into the OMC is actually going. Turns out that as things stand, it is hitting the copper housing (see Attachment #6; it's hard to get a good shot because of the crowded table...). While this isn't critical, perhaps we can avoid this extra scatter by dumping this beam?
  • Alternatively, we could just bypass OMPO-OMMTSM altogether - so rotate OM5 in-situ such that we steer the beam directly onto OM6. This way, we avoid throwing away half (?) the light in the AS beam. If this is the direction we want to take, it should be easy enough to make the change tomorrow

In summary...

  • AS beam has been centered on all steering optics (OM1 through OM6)
  • Table leveling has been checked on ITMY and OMC chambers - this will be re-checked prior to closing up
  • Green-scatter issue has to be investigated, should be fairly quick..
  • In the interest of neatness, we may want to install a couple of beam dumps - one to catch the back-reflection off the window in the OMC chamber, and the other for the beam going to the OMC (unless we decide to swivel OM5 and bypass the OMC section altogether, in which case the latter is superfluous)

C1SUSAUX re-booting

  • Not really related to this work, but we couldn't run the MC relief script due to c1susaux being unresponsive
  • I re-started c1susaux (taking care to follow the instructions in this elog to avoid getting ITMX stuck)
  • Afterwards, I was able to re-lock the IMC, recover nominal transmission of ~1200 counts. I then ran the MC relief servo
  • All shutters have been closed for the night
Attachment 1: OM1Moved.pdf
Attachment 2: IMG_3304.JPG
Attachment 3: OMCchamber.pdf
Attachment 4: IMG_3292.JPG
Attachment 5: IMG_3307.JPG
Attachment 6: IMG_3297.JPG
  12568   Tue Oct 18 18:56:57 2016   gautam   Update   General   OM5 rotated to bypass OMC, green scatter is from window to PSL table

[ericq, lydia, gautam]

  • We started today by checking leveling of ITMY table, all was okay on that front after the adjustment done yesterday. Before closing up, we will have detailed pictures of the current in vacuum layout
  • We then checked centering on OMs 1 and 2 (after having dither aligned the arms), nothing had drifted significantly from yesterday and we are still well centered on both these OMs
  • We then moved to the BS/PRM chamber and checked the leveling, even though nothing was touched on this table. Like in the OMC chamber, it is difficult to check the leveling here because of layout constraints, but I verified that the table was pretty close to being level using the small (clean) spirit level in two perpendicular directions
  • Beam centering was checked on OMs 3 and 4 and verified to be okay. Clearance of beam from OM4 towards the OMC chamber was checked at two potential clipping points - near the green steering mirror and near tip-tilt 2. Clearance at both locations was deemed satisfactory so we moved onto the OMC chamber
  • We decided to go ahead and rotate OM5 to send the beam directly to OM6 and bypass the partially transmissive mirror meant to send part of the AS beam to the OMC
  • In order to accommodate the new path, I had to remove a razor beam dump on the OMC setup, and translate OM5 back a little (see Attachment #1), but we have tried to maintain ~45 degree AOI on both OMs 5 and 6
  • Beam was centered on OM6 by adjusting the position of OM5. We initially fiddled around with the pitch and yaw knobs of OM4 to try and center the beam on OM5, but it was decided that it was better just to move OM5 rather than mess around on the BS/PRM chamber and introduce potential additional scatter/clipping
  • OMC table leveling was checked and verified to not have been significantly affected by todays work
  • It was necessary to loosen the fork and rotate OM6 to extract the AS beam from the vacuum chambers onto the AP table
  • AS beam is now on the camera, and looks nice and round, no evidence of any clipping. Some centering on in air lenses and mirrors on the AP table remains to be done. We are now pretty well centered on all 6 OMs and should have more power at the AS port given that we are now getting light previously routed to the OMC out as well. A quantitative measure of how much more light we have now will have to be done after pumping down and turning the PSL power back up
  • I didn't see any evidence of back-scattered light from the window even though there were hints of this previously (sadly the same can't be said about the green). I will check once again tomorrow, but this doesn't look like a major problem at the moment

Lydia and I investigated the extra green beam situation. Here are our findings.

  1. There appear to be 3 ghost beams in addition to the main beam. These ghosts appeared when we locked the X green and Y green individually, which led us to conclude that whatever is causing this behaviour is located downstream of the periscope on the BS/PRM chamber
    Link to greenGhosts.JPG
  2. I then went into the BS/PRM chamber and investigated the spot on the lower periscope mirror. It isn't perfectly centered, but it isn't close to clipping on any edge, and the beam leaving the upper mirror on the periscope looks clean as well (only the X-arm green was used for this, and subsequent checks). The periscope mirror looks a bit dusty and scatters rather a lot which isn't ideal...
    Link to IMG_3322.JPG
  3. There are two steering mirrors on the IMC table which we do not have access to this vent. But I looked at the beam coming into the OMC chamber and it looks fine, no ghosts are visible when letting the main beam pass through a hole in one of our large clean IR viewing cards - and the angular separation of these ghosts seen on the PSL table suggests that we would see these ghosts if they exist prior to the OMC chamber on the card...
  4. The beam hits the final steering mirror which sends it out onto the PSL table on the OMC chamber cleanly - the spot leaving the mirror looks clean. However, there are two reflections from the two surfaces of the window that come back into the OMC chamber. Space constraints did not permit me to check what surfaces these scatter off and make it back out to the PSL table as ghosts, but this can be checked again tomorrow.
    Link to IMG_3326.JPG

I can't think of an easy fix for this - the layout on the OMC chamber is pretty crowded, and potential places to install a beam dump are close to the AS and IMC REFL beam paths (see Attachment #1). Perhaps Steve can suggest the best, least invasive way to do this. I will also try and nail down more accurately the origin of these spots tomorrow.


Light doors are back on for the night. I re-ran the dithers, and centered the oplevs for all the test-masses + BS. I am leaving the PSL shutter closed for the night

 

Attachment 1: OMCchamber.pdf
Attachment 2: greenGhosts.JPG
Attachment 3: IMG_3322.JPG
Attachment 4: IMG_3326.JPG
  12571   Wed Oct 19 16:41:55 2016   gautam   Update   General   Heavy doors back on

[ericq, lydia, steve, gautam]

  • We aligned the arms, and centered the in-air AS beam onto the PDs and camera
  • Misaligned the ITMs in a controlled ramp, observed ASDC level, didn't see any strange features
  • We can misalign the ITMs by +/- 100urad in yaw and not see any change in the ASDC level (i.e. no clipping). We think this is reasonable, and it is unlikely that we will have to deal with such large misalignments. We also scanned a much larger range of ITM misalignments (approximately +/-1mrad), and saw none of the strange features in the ASDC levels that were noted in this elog - we used both the signal from the AS110 PD, which had better SNR, and also the AS55 PD. We take this to be a good sign, and will conduct further diagnostics once we are back at high power.
  • Opened up all light doors, checked centering on all 6 OM mirrors again, these were deemed to be satisfactory 
  • To solve the green scattering issue, we installed a 1in wide glass piece (~7inches tall) mounted on the edge of the OMC table to catch the reflection off the window (see Attachment #1) - this catches most of the ghost beams on the PSL table, there is one that remains directly above the beam which originates at the periscope in the BS/PRM chamber (see Attachment #2) but we decided to deal with this ghost on the PSL table rather than fiddle around in the vacuum and possibly make something else worse
    Link to IMG_2332.JPG
    Link to IMG_2364.JPG
  • Re-aligned arms, ran the dither, and then aligned the PRM and SRM - we saw nice round DRMI flashes on the cameras
  • Took lots of pictures in the chamber, put heavy doors back on. Test mass Oplev spots looked reasonably well centered; I re-centered the PRM and SRM spots in their aligned states, and then misaligned both
  • The window from the OMC chamber to the AS table looked clean enough to not warrant a cleaning..
  • PSL shutter is closed for now. I will check beam alignment, center Oplevs, and realign the green in the evening. Plan is to pump down first thing tomorrow morning
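The clipping check in the bullets above (ramp the ITM misalignment, watch ASDC for features) can be sketched offline along these lines; the data, threshold, and band here are all hypothetical stand-ins, not the actual scan:

```python
import numpy as np

# Hypothetical ASDC-vs-yaw scan; a real check would use the recorded
# AS110/AS55 signals against the applied bias ramp.
yaw_urad = np.linspace(-1000.0, 1000.0, 201)  # scanned misalignment
asdc = np.ones_like(yaw_urad)                 # stand-in: perfectly flat

band = np.abs(yaw_urad) <= 100.0              # the +/-100 urad region
ref = np.median(asdc[band])
frac_dev = np.max(np.abs(asdc[band] - ref)) / ref

CLIP_THRESHOLD = 0.05  # flag anything beyond a 5% deviation (assumed)
print("clipping suspected:", frac_dev > CLIP_THRESHOLD)
```

A flat ASDC across the band, as seen in the actual scan, would leave the flag unset.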

AS beam on OM1

Link to IMG_2337.JPG

AS beam on OM2

AS beam on OM3

AS beam on OM4

 
AS beam on OM6

I didn't manage to get a picture of the beam on OM5 because it is difficult to hold a card in front of it and simultaneously take a photo, but I did verify the centering...

It remains to update the CAD diagram to reflect the new AS beam path - there are also a number of optics/other in-vacuum pieces I noticed in the BS/PRM and OMC chambers which are not in the drawings, but I should have enough photos handy to fix this.  

Here is the link to the Picasa album with a bunch of photos from the OMC, BS/PRM and ITMY chambers prior to putting the heavy doors back on...


SRM satellite box has been removed for diagnostics by Rana. I centered the SRM Oplev prior to removing this, and I also turned off the watchdog and set the OSEM bias voltages to 0 before pulling the box out (the PIT and YAW bias values in the save files were accurate). Other Oplevs were centered after dither-aligning the arms (see Attachment #8, ignore SRM). Green was aligned to the arms in order to maximize green transmission (GTRX ~0.45, GTRY ~0.5, but transmission isn't centered on cameras).

I don't think I have missed out on any further checks, so unless anyone thinks otherwise, I think we are ready for Steve to start the pumpdown tomorrow morning.

Attachment 1: IMG_2332.JPG
Attachment 2: IMG_2364.JPG
Attachment 3: IMG_2337.JPG
Attachment 4: IMG_2338.JPG
Attachment 5: IMG_2356.JPG
Attachment 6: IMG_2357.JPG
Attachment 7: IMG_2335.JPG
Attachment 8: Oplevs_19Oct2016.png
  12576   Fri Oct 21 02:06:20 2016   gautam   Update   General   IFO recovery

The pressure on the newly installed gauge on the X arm was 6E-5 torr when I came in this evening, so I decided to start the recovery process.

  1. I first tried working at low power. I was able to lock the IMC as well as the arms. But the dither alignment didn't work so well. So I decided to go to nominal PSL power.
  2. I first swapped out the 2" HR mirror, used during low-power operation to send all the MC REFL light to the MC REFL PD, for the nominal 10% BS. I then roughly aligned the beam onto the PD using the tiny steering mirror. At this point, I also re-installed the ND filters on the end Transmon QPDs and also the CCD at the Y end.
  3. I then rotated the waveplate (the second one from the PSL aperture) until I maximized the power as measured just before the PSL shutter with a power meter. I then re-aligned the PMC to maximize transmission. After both these steps, we currently have 1.09W of IR light going into the IMC
  4. I then re-aligned MC REFL onto the PD (~90mW of light comes through to the PD) and maximized the DC output using an oscilloscope. I then reverted the Autolocker to the nominal version from the low power variant that has been running on megatron during the vent (although we never really used it). The autolocker worked well and I was able to lock the IMC without much trouble. I tweaked the alignment sliders for the IMC optics, but wasn't able to improve the transmission much. It is ~14600 cts right now, which is normal I think
  5. I then centered the beams onto the WFS QPDs, ran the WFSoffsets script after turning the inputs to the WFS servos off, and ran the relief script as well - I didn't try anything further with the IMC
  6. I then tried to lock the arms - I first used the green to align the test-masses. Once I was able to lock to a green 00-mode, I saw strong IR flashes and so I was able to lock the Y arm. I then ran the dither. Next, I did the same for the X arm. Even though I ran LSCoffsets before beginning work tonight, the Y arm transmission after maximization is ~5, and that for the X arm is ~2.5. I refrained from running the normalization scripts in case I am missing something here, but the mode itself is clearly visible on the cameras and is a 00-mode.
    GV edit 21Oct2016: For the Y-arm, the discrepancy was down to TRY being derived from the high gain PD as opposed to the QPD. Switching these and running the dither, TRY now maxes out at around 1.0. For TRX, the problem was that I did not install one of the ND filters - so the total ND was 1.2 rather than 1.6, which is what we were operating at and which is the ND on TRY. Both arms now have transmission ~1 after maximizing with the dither alignment...
  7. The AS spot looks nice and round on the camera, although the real check would be to do the sort of scan Yutaro and Koji did, and monitor the ASDC levels. I am leaving this task for tomorrow, along with checking the recycling cavities.
  8. Lastly, I centered the Oplevs for all the TMs
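The ND-filter bookkeeping in the edit to point 6 checks out numerically (ND values are optical densities, so transmission scales as 10^-ND):

```python
# With ND 1.2 installed instead of the nominal ND 1.6, the transmitted
# power (and hence TRX) reads high by the missing attenuation factor.
nd_nominal = 1.6
nd_installed = 1.2
excess = 10 ** (nd_nominal - nd_installed)

print(round(excess, 2))  # -> 2.51, consistent with TRX maxing at ~2.5
```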

 

  12578   Mon Oct 24 11:39:13 2016   gautam   Update   General   ALS recovered

I worked on recovering ALS today. Alignments had drifted sufficiently that I had to redo the alignment onto the green beat PDs on the PSL table for both arms. As things stand, both green (and IR) beats have been acquired, and the noise performance looks satisfactory (see Attachment #1), except that the X beat noise above 100Hz looks slightly high. I measured the OLTF of the X end green PDH loop (after having maximized the arm transmission, dither alignment etc., measurement done at the error point with an excitation amplitude of 25mV), and adjusted the gain such that the UGF is ~10kHz (see Attachment #2).
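The gain adjustment can be sketched as follows, assuming the OLTF is close to 1/f around the crossover so that scaling the servo gain scales the UGF proportionally; the measured-UGF number below is a hypothetical placeholder, not from the measurement.

```python
# Hedged sketch: for a roughly 1/f open-loop TF near the crossover,
# multiplying the loop gain by k moves the UGF from f0 to k*f0.
f_target = 10e3    # desired UGF in Hz (from this entry)
f_measured = 7e3   # hypothetical UGF read off the measured OLTF

gain_scale = f_target / f_measured  # factor to apply to the servo gain
print(round(gain_scale, 2))
```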

Attachment 1: ALSOutOfLoop20161024.pdf
Attachment 2: XendPDHOLTF20161024.pdf
  12579   Tue Oct 25 15:56:11 2016   gautam   Update   General   PRFPMI locked, arms loss improved

[ericq,gautam]

Given that most of the post vent recovery tasks were done, and that the ALS noise performance looked good enough to try locking, we decided to try PRFPMI locking again last night. Here are the details:


PRM alignment, PRMI locking

  • We started by trying to find the REFL beam on the camera - the alignment biases for the 'correct' PRM alignment have changed after the vent
  • After aligning, the Oplev was way off center so that was fixed. We also had to re-center the ITMX oplev after a few failed locking attempts
  • The REFL beam was centered on all the RFPDs on the ASDC table

After the most recent vent, in which we bypassed the OMC altogether, we have a lot more light at the AS port. It has not yet been quantified how much more, but from the changes that had to be made to the loop gain for a stable loop, we estimate we have 2-3 times more power at the AS port now.


PRFPMI locking

  • We spent a while unsuccessfully trying to get the PRMI locked and reduce the carm offset on ALS control to bring the arms into the 'buzzing' state - the reason was that we had forgotten it was established a couple of weeks ago that REFL165 has better MICH SNR. Once this change was made, we were readily able to reduce the carm offset to 0
  • Then we spent a few attempts trying to blend in RF control - as mentioned in the above referenced elog, the point of failure was always trying to turn on the integrator in the CARM B path. We felt that the appearance of the CARM B IN1 signal on dataviewer was not what we are used to seeing, but were unable to figure out why (as it turns out, we were locking CARM on POY11 and not REFL11, more on this later)
  • Eric found that switching the sign of the CARM B gain was the solution - we spent some time puzzling over why this should have changed, and hypothesized that perhaps we are now overcoupled, but it is more likely that this was because of the error signal mix up mentioned above...
  • We also found the DC coupling of the ITM Oplev loops to be not so reliable - perhaps this has to do with the wonky ITMY UL OSEM, more on this later. We usually turn the DC coupling on after dither aligning the arms, and in the past, it has been helpful. But we had more success last night with the DC coupling turned off rather than on.
  • Once the sign flip was figured out, we were repeatedly able to achieve locks with CARM partially on RF - we got through 3 or 4 of these, though each was stable for only tens of seconds. We progressed to full RF control of CARM on only one attempt, and that lock lasted just a few seconds
  • Unfortunately, the mode cleaner decided to act up just about after we figured all this out, and it was pushing 4am so we decided to give up for the night.
  • The arm transmissions hit 300! We had run the transmission normalization scripts just before starting the lock, so this number should be reliable (compare to ~130 in October last year). The corresponding PRG is about 16.2, which according to my Finesse model suggests we are still undercoupled, but close to critical coupling (this needs a bit more investigation, supporting plots to follow). => Average arm loss is ~150ppm! So it looks like we did some good with the vent, although of course an independent arm loss measurement has to be done...
  • Lockloss plot for one of the locks is Attachment #1

Other remarks:

  • Attachment #2 shows that the ITMY UL sensor is glitchy (while the others are not). At some point last night, we turned off this sensor input to the damping servos, but for the actual locks, we turned it back on. I will do a Satellite box swap to see if this is a Sat. Box problem (which I suspect it is; the bad Sat. Boxes are piling up...)
  • Just now, eric was showing me the CM board setup in the LSC rack, because for the next lock attempts, we want to measure the CARM loop - but we found that the input to the CM board was POY and not REFL! This probably explains the sign flip mentioned above. The mix-up has been rectified
  • The MICH dither align doesn't seem to be working too well - possibly due to the fact that we have a lot more ASDC light now, this has to be investigated. But last night, we manually tweaked the BS alignment to make the dark port dark, and it seemed to work okay, although each time we aligned the PRMI on carrier, then went back to put the arms on ALS, and came back to PRMI, we would see some yaw misalignment in the AS beam...
  • I believe the SRM sat. box is still being looked at by Ben so it has not been reinstalled...
  • Eric has put together a configure script for the PRFPMI configuration which I have added to the IFO configure MEDM screen for convenience
  • For some reason, the appropriate whitening gain for POX11 and the XARM loop gain to get the XARM to lock has changed - the appropriate settings now are +30dB and 0.03 respectively. These have not been updated in some scripts, so for example, when the watch script resets the IFO configuration, it doesn't revert to these values. Just something to keep in mind for now...
Attachment 1: PRFPMIlock_25Oct2016.pdf
PRFPMIlock_25Oct2016.pdf
Attachment 2: ITMYwoes.png
ITMYwoes.png
  12583   Thu Oct 27 12:06:39 2016 gautamUpdateGeneralPRFPMI locked, arms loss improved
Quote:

Great to hear that we have the PRG of ~16 now!

Is this 150ppm an avg loss per mirror, or per arm?

I realized that I did not have a Finesse model to reflect the current situation of flipped folding mirrors (I've been looking at 'ideal' RC cavity lengths with folding mirrors oriented with HR side inside the cavity so we didn't have to worry about the substrate/AR surface losses), and it took me a while to put together a model for the current configuration. Of course this calculation does not need a Finesse model but I thought it would be useful nevertheless. 

In summary - the model with which the attached plot was generated assumes the following:

  • Arm lengths of 37.79m, given our recent modification of the Y arm length
  • RC lengths are all taken from here, I have modelled the RC folding mirrors as flipped with the substrate and AR surface losses taken from the spec sheet
  • The X axis is the average arm loss - i.e. (LITMX+LITMY+LETMX+LETMY)/2. In the model, I have distributed the loss equally between the ITMs and ETMs.

This calculation agrees well with the analytic results Yutaro computed here - the slight difference is possibly due to assuming different losses in the RC folding mirrors. 

The conclusion from this study seems to be that the arm loss is now in the 100-150ppm range (so each mirror has 50-75ppm loss). But these numbers are only as reliable as the model; we need an independent loss measurement to verify. In fact, during last night's locking efforts, the arm transmission sometimes touched 400 (=> PRG ~22), which according to these plots suggests total arm losses of ~50ppm, i.e. each mirror has only 25ppm loss, which seems a bit hard to believe.
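The Finesse result can be cross-checked against a simple analytic model of the power-recycled cavity on resonance. Here is a minimal sketch; the PRM/ITM transmissions are nominal 40m numbers, and the one-way PRC loss (standing in for the flipped folding mirrors) is a guessed placeholder, not a value taken from the model files.

```python
import numpy as np

def arm_refl(T_itm, arm_loss):
    """On-resonance amplitude reflectivity of an arm cavity, lumping the
    round-trip arm loss into an effective ETM transmission (the ETM power
    transmission itself is neglected)."""
    r_i = np.sqrt(1 - T_itm)
    r_e = np.sqrt(1 - arm_loss)
    return (r_e - r_i) / (1 - r_i * r_e)

def prg(avg_arm_loss, T_prm=0.05637, T_itm=0.01384, prc_loss=0.02):
    """Power recycling gain vs average arm loss. T_prm/T_itm are nominal
    40m numbers; prc_loss is a guessed one-way PRC loss standing in for
    the flipped folding mirrors."""
    r_p = np.sqrt(1 - T_prm)
    r_a = arm_refl(T_itm, avg_arm_loss) * np.sqrt(1 - prc_loss)
    return T_prm / (1 - r_p * r_a) ** 2
```

With these placeholder numbers, 150 ppm average arm loss gives a PRG of roughly 16, in the same ballpark as the Finesse curve, and the PRG falls monotonically as the arm loss grows.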

Attachment 1: PRG.pdf
PRG.pdf
  12586   Fri Oct 28 01:44:48 2016 gautamUpdateGeneralPRFPMI model vs data studies

Following Koji's suggestion, I decided to investigate the relation between my Finesse model and the measured data.

For easy reference, here is the loss plot again:

Sticking with the model, I used the freedom Finesse offers me to stick in photodiodes wherever I desire, to monitor the circulating power in the PRC directly, and also REFLDC. Note that REFLDC goes to 0 because I am using Finesse's amplitude detector at the carrier frequency for the 00 mode only. 

  

Both the above plots essentially show the same information, except the X axis is different. So my model tells me to expect the point of critical coupling to be where the average arm loss is ~100ppm, corresponding to a PRG of ~17.
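The REFL DC zero-crossing at critical coupling can also be located analytically: the on-resonance PRC reflectivity changes sign where the effective arm reflectivity equals the PRM reflectivity. A sketch with guessed parameters (nominal transmissions; the 2% one-way PRC loss is a placeholder for the flipped folding mirrors, not a measured number):

```python
import numpy as np

# Nominal 40m transmissions; PRC_LOSS is a guessed one-way PRC loss.
T_PRM, T_ITM, PRC_LOSS = 0.05637, 0.01384, 0.02

def refl_dc(avg_arm_loss):
    """Normalized on-resonance reflected power of the PRC. The sign of
    (r_p - r_a) flips at critical coupling, where REFL DC dips to zero."""
    r_i = np.sqrt(1 - T_ITM)
    r_e = np.sqrt(1 - avg_arm_loss)
    r_arm = (r_e - r_i) / (1 - r_i * r_e)   # arm cavity on resonance
    r_a = r_arm * np.sqrt(1 - PRC_LOSS)     # fold in the PRC loss
    r_p = np.sqrt(1 - T_PRM)
    return ((r_p - r_a) / (1 - r_p * r_a)) ** 2

losses = np.linspace(10e-6, 500e-6, 2000)
critical = losses[np.argmin(refl_dc(losses))]
```

With these guesses the minimum lands in the 100-150 ppm region, consistent with the Finesse curve above; the exact value shifts with the assumed PRC loss.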

Eric has already put up a scatter plot, but I reproduce another from a fresh lock tonight. The data shown here corresponds to the IFO initially being in the 'buzzing' state where the arms are still under ALS control and we are turning up the REFL gain - then engaging the QPD ASC really takes us to high powers. The three regimes are visible in the data. I show here data sampled at 16 Hz, but the qualitative shape of the scatter does not change even with the full data. As an aside, today I saw the transmission hit ~425!

  

I have plotted the scatter between TRX and REFL DC, but the scatter between POP DC and REFL DC looks similar - specifically, there is an 'upturn' in the REFL DC values in a region similar to that seen in the above scatter plot. POP DC is a proxy for the PRG, and I confirmed that for the above dataset there is a monotonic, linear relationship between TRX and POP DC, so I think it is legitimate to compare the plot on the RHS in the row directly above to the plot from the Finesse model one row further up. In the data, REFL DC seems to hit a minimum around TRX=320. Assuming a PRM transmission of 5.5%, a TRX of 320 corresponds to a PRG of 17.5, which is in the ballpark of where the model tells us to expect it. Based on this, I conclude the following:

  • It seems like the Finesse model I have is quite close to the current state of the IFO 
  • Given that we can trust the model, the PRC is now OVERCOUPLED - the scatter plot of data supports this hypothesis
  • Given that in today's lock, I saw the arm transmission go up to ~425, this suggests that at optimal alignment, the PRG can reach 23. Then, Attachment #1 suggests the average arm loss is <50ppm, which means the average loss per optic is <25ppm. I am not sure how physical this is, given that I remember the specs for the ITMs and ETMs calling for scatter less than 25ppm; perhaps the optics exceed the specs, or I remember the wrong numbers, or the model is wrong
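The transmission-to-PRG conversion used above is just a rescaling by the PRM power transmission (this assumes the TRX normalization is to the single-arm lock, where the non-resonant PRC simply transmits T_PRM):

```python
def prg_from_trans(tr_norm, t_prm=0.055):
    """PRG implied by the normalized arm transmission, with t_prm = 5.5%
    PRM power transmission as assumed in the text."""
    return tr_norm * t_prm
```

So TRX = 320 gives a PRG of 17.6 (the ~17.5 quoted above), and TRX ~ 425 gives a PRG of ~23.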

In other news, I wanted to try and do the sensing matrix measurements which we neglected to do yesterday. I turned on the notches in CARM, DARM, PRCL and MICH, and then tuned the LO amplitudes until I saw a peak in the error signal for that particular DOF with peak height a factor of >10 above the noise floor. The LO amplitudes I used are:

MICH: 40
PRCL: 0.7
CARM: 0.08
DARM: 0.08

There should be about 15 minutes of good data. More impressively, the lock tonight lasted 1 hour (see Attachment #6, unfortunately FB crashed in between). Last night we lost lock while trying to transition control to 1f signals and tonight, I believe a P.C. drive excursion of the kind we are used to seeing was responsible for the lockloss, so the PRFPMI seems pretty stable.

With regards to the step in the lock acquisition sequence where the REFL gain is turned up, I found in my (4) attempts tonight that I had most success when I adjusted the CARM A slider while turning up the REFL gain to offload the load on the CARM B servo. Of course, this may mean nothing... 

Attachment 1: loss.pdf
loss.pdf
Attachment 2: REFLDC.pdf
REFLDC.pdf
Attachment 3: CriticalCoupling.pdf
CriticalCoupling.pdf
Attachment 4: PRFPMI_Oct282016.pdf
PRFPMI_Oct282016.pdf
Attachment 5: PRFPMI_scatter.pdf
PRFPMI_scatter.pdf
Attachment 6: 1hourPRFPMILock.png
1hourPRFPMILock.png
  12587   Fri Oct 28 15:46:29 2016 gautamSummaryLSCX/Y green beat mode overlap measurement redone

I've been meaning to do this analysis ever since putting in the new laser at the X-end, and finally got down to taking all the required measurements. Here is a summary of my results, in the style of the preceding elogs in this thread. I dither aligned the arms and maximized the green transmission DC levels, and also the alignment on the PSL table to maximize the beat note amplitude (both near and far field alignment was done), before taking these measurements. I measured the beat amplitude in a few ways, and have reported all of them below...

             XARM   YARM 
o BBPD DC output (mV), all measured with Fluke DMM
 V_DARK:     +1.0    +3.0
 V_PSL:      +8.0    +14.0
 V_ARM:      +175.0  +11.0


o BBPD DC photocurrent (uA)
I_DC = V_DC / R_DC ... R_DC: DC transimpedance (2kOhm)

 I_PSL:       3.5    5.5
 I_ARM:      87.0    4.0


o Expected beat note amplitude
I_beat_full = I1 + I2 + 2 sqrt(e I1 I2) cos(w t) ... e: mode overlap (in power)

I_beat_RF = 2 sqrt(e I1 I2)

V_RF = 2 R sqrt(e I1 I2) ... R: RF transimpedance (2kOhm)

P_RF = V_RF^2/2/50 [Watt]
     = 10 log10(V_RF^2/2/50*1000) [dBm]

     = 10 log10(e I1 I2) + 82.0412 [dBm]
     = 10 log10(e) +10 log10(I1 I2) + 82.0412 [dBm]

for e=1, the expected RF power at the PDs [dBm]
 P_RF:      -13.1  -24.5


o Measured beat note power (measured with oscilloscope, 50 ohm input impedance)      
 P_RF:      -17.8dBm (81.4mVpp)  -29.8dBm (20.5mVpp)   (38.3MHz and 34.4MHz)  
    e:        34                    30  [%]                          
o Measured beat note power (measured with Agilent RF spectrum analyzer)       
 P_RF:      -19.2  -33.5  [dBm] (33.2MHz and 40.9MHz)  
    e:       25     13    [%]                          
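The arithmetic above can be packaged into a small helper. Everything here follows the formulas in this entry (R = 2 kOhm RF transimpedance, 50 ohm load), with photocurrents in amperes:

```python
import math

def expected_dbm(e, i1, i2, r_rf=2e3):
    """Expected beat note power [dBm] into 50 ohm for mode overlap e and
    DC photocurrents i1, i2 [A], per V_RF = 2 R sqrt(e I1 I2)."""
    v_rf = 2 * r_rf * math.sqrt(e * i1 * i2)
    return 10 * math.log10(v_rf ** 2 / 2 / 50 * 1000)

def overlap_from_dbm(p_meas_dbm, i1, i2):
    """Mode overlap implied by a measured beat power [dBm]."""
    return 10 ** ((p_meas_dbm - expected_dbm(1.0, i1, i2)) / 10)
```

For example, expected_dbm(1, 3.5e-6, 87e-6) reproduces the -13.1 dBm e=1 figure for the X arm, and overlap_from_dbm(-17.8, 3.5e-6, 87e-6) gives the ~34% overlap quoted from the oscilloscope measurement.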

I also measured the various green powers with the Ophir power meter: 

o Green light power (uW) [measured just before PD, does not consider reflection off the PD]
 P_PSL:       16.3    27.2
 P_ARM:       380     19.1

Measured beat note power at the RF analyzer in the control room
 P_CR:      -36    -40.5    [dBm] (at the time of measurement with oscilloscope)
Expected    -17    - 9    [dBm] (TO BE UPDATED)

Expected Power: (TO BE UPDATED)
Pin + External Amp Gain (25dB for X, Y from ZHL-3A-S)
    - Isolation trans (1dB)
    + GAV81 amp (10dB)
    - Coupler (10.5dB)


The expected numbers for the control room analyzer in red have to be updated. 

The main difference seems to be that the PSL power on the Y broadband PD has gone down by about 50% from what it used to be. In either measurement, it looks like the mode matching is only 25-30%, which is pretty abysmal. I will investigate the situation further - I have been wanting to fiddle around with the PSL green path in any case, so as to facilitate having an IR beat even when the PSL green shutter is closed, and I will try to optimize the mode matching as well... I should point out that at this point, the poor mode-matching on the PSL table isn't limiting the ALS noise performance, as we are able to lock reliably...

  12592   Wed Nov 2 22:56:45 2016 gautamUpdateCDSc1pem revamped

Installing the BLRMS 2k blocks turned out to be quite non-trivial due to a whole host of CDS issues that had to be debugged, but I've restored everything to a good state now, and the channels are being logged. A detailed entry with all the changes will follow.

  12594   Thu Nov 3 11:33:24 2016 gautamUpdateGeneralpower glitch - recovery

I did the following:

  • Hard reboots for fb, megatron, and all the frontends, in that order
  • Checked time on all FEs, ran sudo ntpdate -b -s -u pool.ntp.org where necessary
  • Restarted all realtime models
  • Restarted monit on all FEs
  • Reset Marconi to nominal settings, fCarrier=11.066209MHz, +13dBm amplitude
  • In the control room, restarted the projector and set up the usual StripTool traces
  • Realigned PMC
  • Slow machines did not need any touchups - interestingly, ITMX did not get stuck during this power glitch!

There was a regular beat coming from the speakers. After muting all the channels on the mixer and pulling the 3.5mm cable out, the sound persisted. It now looks like the mixer is broken.

     ProFX8v2

 

  12595   Thu Nov 3 12:38:42 2016 gautamUpdateCDSc1pem revamped

A number of changes were made to C1PEM and some library parts. Recall that the motivation was to add BLRMS channels for all our suspension coils and shadow sensor PDs, which we are first testing out on the IMC mirrors.

Here is the summary:

BLRMS_2k library block

  • The custom C code block in this library part was named 'BLRMSFILTER', which conflicted with the name of the function call in the C code it is linked to and led to compilation errors
  • Even though the part was found in /opt/rtcds/userapps/release/cds/c1/models and not in the common repository, just to be safe, I made a copy of the part called BLRMS_2k_40m which lives in the above directory. I also made a copy of the code it calls in /opt/rtcds/userapps/release/cds/c1/src

C1PEM model + filter channels

  • Adding the updated BLRMS_2k_40m library part still resulted in some compilation errors - specifically, it was telling me to check for missing links around the ADC parts
  • Eric suggested that the error messages might not be faithfully reporting what the problem is - true enough, the problem lay in the fact that c1pem wasn't updated to follow the namespace convention that we now use in all the RT models - the compiler was getting confused by the fact that the BLRMS stuff was in a namespace block called 'SUS', but the rest of the PEM stuff wasn't in such a block
  • I revamped c1pem to add namespace blocks called PEM and DAF, and put the appropriate stuff in the blocks, after which there were no more compilation errors
  • However, this namespace convention messed up the names of the filter modules and associated channels - this was resolved with Eric's help (find and replace did the job, this is a familiar problem that we had encountered not too long ago when C1IOO was similarly revamped...)
  • There was one last twist in that the model would compile and install, but just would not start. I tried the usual voodoo of restarting all the models, and even did a soft reboot of c1sus, to no avail. Looking at dmesg, I tracked the problem down to a burt restore issue - the solution was to press the little 'BURT' button next to c1pem on the CDS overview MEDM screen as soon as it appeared while restarting the model

All the channels seem to exist, and FB seems to not be overloaded judging by the performance overnight up till the power outage. I will continue to monitor this...

GV Edit 3 Nov 2016 7pm:

I had meant to check the suitability of the filters used - there is a detailed account of the filters implemented in BLRMSFILTER.c here, and I quickly looked at the file on hand to make sure the BP filters made sense (see Attachment #1). The BP filters are 8th-order elliptic filters, and the lowpass filters are 16th-order elliptic filters scaled for the appropriate frequency band. These are somewhat different from what we use on the seismometer BLRMS channels, where the filters are order 4, but I don't think we are significantly overloaded computationally, and since the lowpass filters have sufficiently steep roll-off, these should be okay...
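As a sanity check of the topology (not a reproduction of the exact coefficients in BLRMSFILTER.c), the band-pass / square / low-pass / square-root chain can be sketched with scipy. The 1 dB ripple, 60 dB stopband attenuation, and the low-pass corner at a tenth of the lower band edge are guesses on my part:

```python
import numpy as np
from scipy import signal

FS = 2048.0  # BLRMS_2k model rate [Hz]

def blrms(x, f_lo, f_hi, fs=FS):
    """Band-limited RMS: 8th-order elliptic band-pass (scipy's N=4
    doubles to 8th order for a band-pass), square, 16th-order elliptic
    low-pass, square-root. Ripple / attenuation / low-pass corner are
    guesses, not the values in BLRMSFILTER.c."""
    bp = signal.ellip(4, 1, 60, [f_lo, f_hi],
                      btype='bandpass', output='sos', fs=fs)
    lp = signal.ellip(16, 1, 60, f_lo / 10,
                      btype='lowpass', output='sos', fs=fs)
    banded = signal.sosfilt(bp, x)
    return np.sqrt(np.abs(signal.sosfilt(lp, banded ** 2)))
```

Feeding a unit-amplitude sine inside a 20-60 Hz band through this chain settles near 1/sqrt(2) (less the passband ripple), while an out-of-band sine is suppressed by the 60 dB stopband.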

Attachment 1: BLRMSresp.pdf
BLRMSresp.pdf
  12596   Thu Nov 3 12:40:10 2016 gautamUpdateGeneral projector light bulb is out

The projector failed just now with a pretty loud 'pop' sound - I've never been present when the lamp goes out, so I don't know if this is usual. I have left the power cable unplugged for now...

A replacement was ordered Nov 4

  12602   Mon Nov 7 16:05:55 2016 gautamUpdateSUSPRM Sat. Box. Debugging

Short summary of my Sat. Box. debugging activities over the last few days. Recall that the SRM Sat. Box has been plugged into the PRM suspension for a while now, while the SRM has just been hanging out with no electrical connections to its OSEMs.

As Steve mentioned, I had plugged in Ben's extremely useful tester box (I have added these to the 40m Electronics document sub-tree on the DCC) into the PRM Sat. Box and connected it to the CDS system over the weekend for observation. The problematic channel is LR.  Judging by Steve's 2 day summary plots, LR looks fine. There is some unexplained behavior in the UR channel - but this is different from the glitchy behaviour we have seen in the LR channel in the past. Moreover, subsequent debugging activities did not suggest anything obviously wrong with this channel. So no changes were made to UR. I then pulled out the PRM sat.box for further diagnostics, and also, for comparison, the SRM sat. box which has been hooked up to the PRM suspension as we know this has been working without any issues. 

Tracing out the voltages through the LED current driver circuit for the individual channels, and comparing the performance between PRM and SRM sat. boxes, I narrowed the problem down to a fault in either the LT1125CSW Quad Op-Amp IC or the LM6321M current driver IC in the LR channel. Specifically, I suspected the output of U3A (see Attachment #1) to be saturated, while all the other channels were fine. Looking at the spectrum at various points in the circuit with an SR785, I could not find significant difference between channels, or indeed, between the PRM/SRM boxes (up to 100kHz). So I decided to swap out both these ICs. Just replacing the OpAmp IC did not have any effect on the performance. But after swapping out the current buffer as well, the outputs of U3A and U11 matched those of the other channels. It is not clear to me what the mode of failure was, or if the problem is really fixed. I also checked to make sure that it was indeed the ICs that had failed, and not the various resistors/capacitors in the signal path. I have plugged in the PRM sat. box + tester box setup back into our CDS data acquisition for observation over a couple of days, but hopefully this does the job... I will update further details over the coming days.

I have restored control to PRM suspensions via the working SRM sat. box. The PRM Sat. Box and tester box are sitting near the BS/PRM chamber in the same configuration as Steve posted in his earlier elog for further diagnostics...


GV Edit 2230 hrs 7Nov2016: The signs from the last 6 hours have been good - see the attached minute trend plot. Usually, the glitches tend to show up in this sort of time frame. I am not quite ready to call the problem solved just yet, but I have restored the connections to the SRM suspension (the PRM and SRM Sat. Boxes are still switched). I've also briefly checked the SRM alignment, and am able to lock the DRMI, but the lock doesn't hold for more than a few seconds. I am leaving further investigations for tomorrow; let's see how the Sat. Box does overnight.

Attachment 1: D961289-B2.pdf
D961289-B2.pdf
Attachment 2: PRMSatBoxtest.png
PRMSatBoxtest.png
  12603   Mon Nov 7 17:24:12 2016 gautamUpdateGreen LockingGreen beat setup on PSL table

I've been trying to understand the green beat setup on the PSL table to see if I can explain the abysmal mode-matching of the arm and PSL green beams on the broadband beat PDs. My investigations suggest that the mode-matching is very sensitive to the position of one of the lenses in the arm green path. I will upload a sketch of the PSL beat setup along with some photos, but here is the quick summary.

  1. I first mapped the various optical components and distances between them on the PSL table, both for the arm green path and the PSL green path
  2. Next, setting the PSL green waist at the center of the doubling oven and the arm green waist at the ITMs (in vacuum distances for the arm green backed out of CAD drawing), I used a la mode to trace the Gaussian beam profile for our present configuration. The main aim here was to see what sort of mode matching we can achieve theoretically, assuming perfect alignment onto the BBPDs. The simulation is simplified, the various beam splitters and other transmissive optics are treated as having 0 width
  3. It is pretty difficult to accurately measure path lengths to mm accuracy, so to validate my measurement, I measured the beam widths of the arm and PSL green beams at a few locations, and compared them to what my simulation told me to expect. The measurements were taken with a beam profiler I borrowed from Andrew Wade, and both the arm and PSL green beams have smooth Gaussian intensity profiles for the TEM00 mode (as they should!). I will upload some plots shortly. The agreement is pretty good, to within 10%, although geometric constraints on the PSL table limited the number of measurements I could take (I didn't want to disturb any optics at this point)
  4. I then played around with the position of a fast (100mm EFL) lens in the arm green path, to which the mode matching efficiency on the BBPD is most sensitive, and found that in a +/- 1cm range, the mode matching efficiency changes dramatically
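The sensitivity study in step 4 amounts to ABCD-propagating a Gaussian beam to the BBPD plane and evaluating the power coupling against the unshifted beam. A stripped-down version is below; the waist size and path lengths are made-up illustrative numbers, not the measured 40m distances:

```python
import numpy as np

LAM = 532e-9  # green wavelength [m]

def space(d):
    return (1.0, d, 0.0, 1.0)

def lens(f):
    return (1.0, 0.0, -1.0 / f, 1.0)

def propagate(q, *elements):
    """Apply ABCD elements in order to the complex beam parameter q."""
    for (a, b, c, d) in elements:
        q = (a * q + b) / (c * q + d)
    return q

def overlap(q1, q2):
    """Power coupling of two co-axial TEM00 modes at the same plane:
    4 Im(q1) Im(q2) / |q1 - q2*|^2 (equals 1 for q1 == q2)."""
    return 4 * q1.imag * q2.imag / abs(q1 - q2.conjugate()) ** 2

def q_waist(w0):
    return 1j * np.pi * w0 ** 2 / LAM

def arm_beam_at_pd(lens_shift):
    """Hypothetical arm-green path: a roughly collimated 1 mm beam, a
    100 mm EFL lens whose position is shifted by lens_shift [m], then
    propagation to the (fixed) PD plane."""
    return propagate(q_waist(1e-3),
                     space(0.20 + lens_shift), lens(0.10),
                     space(0.30 - lens_shift))
```

Scanning overlap(arm_beam_at_pd(0), arm_beam_at_pd(s)) over a few mm of lens travel shows the coupling falling off steeply, qualitatively reproducing the sharp dependence in Attachment #3 (the exact curve depends entirely on the made-up mode parameters).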

Results:

Attachments #1 and 2: Simulated and measured beam profiles for the PSL and arm green beams. The origin is chosen such that both beams have travelled to the same coordinate when they arrive at the BBPD. The agreement between simulation and measurement is pretty good, suggesting that I have modelled the system reasonably well. The solid black line indicates the (approximate) location of the BBPD

     

Attachment #3: Mode matching efficiency as a function of shift of the above-mentioned fast lens. Currently, my best efforts to align the arm and PSL green beams in the near and far fields before sending them to the BBPD result in a mode matching efficiency of ~30% - the corresponding coordinate in the simulation is not 0 because my length measurements are evidently not precise to the mm level. But clearly the mode matching efficiency is strongly sensitive to the position of this lens. Nevertheless, I believe the conclusion that shifting this lens by just 2.5mm from its optimal position degrades the theoretical maximum mode matching efficiency from >95% to 50% remains valid. I propose that we align the beams onto the BBPD in the near and far fields, and then shift this lens, which is conveniently mounted on a translational stage, by a few mm to maximize the beat amplitude from the BBPDs. 

Unrelated to this work: I also wish to shift the position of the PSL green shutter. Currently, it is located before the doubling oven. But the IR pickoff for the IR beat setup currently is located after the doubling oven, so when the PSL green shutter is closed, we don't have an IR beat. I wish to relocate the shutter to a position such that it being open or closed does not affect the IR beat setup. Eventually, we want to implement some kind of PID control to make the end laser frequencies track the PSL frequency continuously using the frequency counter setup, for which we need this change...

Attachment 1: CurrentX.pdf
CurrentX.pdf
Attachment 2: CurrentY.pdf
CurrentY.pdf
Attachment 3: ProposedShift_copy.pdf
ProposedShift_copy.pdf
  12606   Tue Nov 8 11:54:38 2016 gautamUpdateSUSPRM Sat. Box. looks to be fixed

Looks like the PRM Sat. Box is now okay, no evidence of the kind of glitchy behaviour we are used to seeing in any of the 5 channels.

Quote:
 
GV Edit 2230 hrs 7Nov2016: The signs from the last 6 hours have been good - see the attached minute trend plot. Usually, the glitches tend to show up in this sort of time frame. I am not quite ready to call the problem solved just yet, but I have restored the connections to the SRM suspension (the PRM and SRM Sat. Boxes are still switched). I've also briefly checked the SRM alignment, and am able to lock the DRMI, but the lock doesn't hold for more than a few seconds. I am leaving further investigations for tomorrow; let's see how the Sat. Box does overnight.

 

  12609   Wed Nov 9 23:21:44 2016 gautamUpdateGreen LockingGreen beat setup on PSL table

I tried to realize an improvement in the mode matching onto the BBPDs by moving the lens mentioned in the previous elog in this thread. My best efforts today yielded X and Y beats at amplitudes -15.9dBm (@37MHz) and -25.9dBm (@25MHz) respectively. The procedure I followed was roughly:

  1. Do the near-field far-field alignment of the arm and PSL green beams
  2. Steer beam onto BBPD, center as best as possible using the usual technique of walking the beam across the photodiode
  3. Hook up the output of the scope to the Agilent network analyzer. Tweak the arm and PSL green alignments to maximize the beat amplitude. Then move the lens to maximize the beat amplitude.

As per my earlier power budget, these numbers translate to a mode matching efficiency of ~53% for the X arm beat and ~58% for the Y arm beat, which is a far cry from the numbers promised by the a la mode simulation (~90% at the optimal point; I could not achieve this for either arm while scanning the lens through a maximum of the beat amplitude). Looks like this is the best we can do without putting in any extra lenses. Still a marginal improvement over the previous state though...

  12610   Thu Nov 10 19:02:03 2016 gautamUpdateCDSEPICS Freezes are back

I've been noticing over the last couple of days that the EPICS freezes are occurring more frequently again. Attached is an instance of StripTool traces flatlining. Not sure what has changed recently in terms of the network to cause the return of this problem... Also, they don't occur simultaneously on multiple workstations, but they do pop up on both pianosa and rossa.

Not sure if it is related, but we have had multiple slow machine crashes today as well. Specifically, I had to power cycle C1PSL, C1SUSAUX, C1AUX, C1AUXEX, and C1IOOL0 at some point today.

Attachment 1: epicsFreezesBack.png
epicsFreezesBack.png
  12611   Sat Nov 12 01:09:56 2016 gautamUpdateLSCRecovering DRMI locking

Now that we have all Satellite boxes working again, I've been working on trying to recover the DRMI 1f locking over the last couple of days, in preparation for getting back to DRFPMI locking. Given that the AS light levels have changed, I had to change the whitening gains on the AS55 and AS110 channels to take this into account. I found that I also had to tune a number of demod phases to get the lock going. I had some success with the locks tonight, but noticed that the lock would be lost when the MICH/SRCL boosts were triggered ON - when I turned off the triggering for these, the lock would hold for ~1min, but I couldn't get a loop shape measurement in tonight.


As an aside, we have noticed in the last couple of months glitchy behaviour in the ITMY UL shadow sensor PD output - qualitatively, these glitches were similar to what was seen in the PRM sat. box, and since I was able to get that working again, I did a similar analysis on the ITMY sat. box today with the help of Ben's tester box. However, unlike with the PRM sat. box, I found nothing obviously wrong. Looking back at the trend, the glitchy behaviour seems to have stopped some days ago; the UL channel has been well behaved over the last week. Not sure what has changed, but we should keep an eye on this...

  12613   Mon Nov 14 14:21:06 2016 gautamSummaryCDSReplacing DIMM on Optimus

I replaced the suspected faulty DIMM earlier today (actually I replaced a pair of them as per the Sun Fire X4600 manual). I did things in the following sequence, which was the recommended set of steps according to the maintenance manual and also the set of graphics on the top panel of the unit:

  1. Checked that Optimus was shut down
  2. Removed the power cables from the back to cut the standby power. Two of the fan units near the front of the chassis were displaying fault lights, perhaps this has been the case since the most recent power outage after which I did not reboot Optimus
  3. Took off the top cover, removed CPU 6 (labelled "G" in the unit). The manual recommends finding faulty DIMMs by looking for an LED that is supposed to indicate the location of the bad card, but I couldn't find any such LEDs in the unit we have, perhaps this is an addition to the newer modules?
  4. Replaced the topmost (w.r.t the orientation the CPU normally sits inside the chassis) DIMM card with one of the new ones Steve ordered
  5. Put everything back together, powered Optimus up again. Reboot went smoothly, fan unit fault lights which I mentioned earlier did not light up on the reboot so that doesn't look like an issue.

I then checked for memory errors using edac-utils, and over the last couple of hours, found no errors (corrected or otherwise, see Praful's earlier elog for the error messages that we were getting prior to the DIMM swap)- I guess we will need to monitor this for a while more before we can say that the issue has been resolved.

Looking at dmesg after the reboot, I noticed the following error messages (not related to the memory issue I think):

[   19.375865] k10temp 0000:00:18.3: unreliable CPU thermal sensor; monitoring disabled
[   19.375996] k10temp 0000:00:19.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376234] k10temp 0000:00:1a.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376362] k10temp 0000:00:1b.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376673] k10temp 0000:00:1c.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376816] k10temp 0000:00:1d.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376960] k10temp 0000:00:1e.3: unreliable CPU thermal sensor; monitoring disabled
[   19.377152] k10temp 0000:00:1f.3: unreliable CPU thermal sensor; monitoring disabled

I wonder if this could explain why the fans on Optimus often go into overdrive and make a racket? For the moment, the fan volume seems normal, comparable to the other SunFire X4600s we have running like megatron and FB...

  12616   Tue Nov 15 19:22:17 2016 gautamUpdateGeneralhousekeeping

PRM and SRM sat. boxes have been switched for some time now - but the PRM sat. box has one channel with a different transimpedance gain, and the damping loops for the PRM and SRM were not systematically adjusted to take this into account (I just tweaked the gains for the PRM and SRM side damping loops until the optics damped). Since both sat. boxes are nominally functioning now, I saw no reason to maintain the switched configuration, so I swapped the boxes back and restored the damping settings to their values from March 29 2016, well before either of this summer's vents. In addition, I want to collect some data to analyze the sat. box noise performance, so I am leaving the SRM sat. box connected to the DAQ, but with the tester box connected where the vacuum feedthroughs would normally go (so SRM has no actuation right now). I will collect a few hours of data and revert later tonight for locking activities....

  12619   Wed Nov 16 03:10:01 2016 gautamUpdateLSCDRMI locked on 1f and 3f signals

After much trial and error with whitening gains, demod phases and overall loop gains, I was finally able to lock the DRMI on both 1f and 3f signals! I went through things in the following order tonight:

  1. Lock the arms, dither align
  2. Lock the PRMI on carrier and dither align the PRM to get good alignment
  3. Tried to lock the DRMI on 1f signals - this took a while. I realized the reason I had little to no success with this over the last few days was because I did not turn off the automatic unwhitening filter triggering on the demod screens. I had to tweak the SRM alignment while looking at the AS camera, and also adjust the demod phases for AS55 (MICH is on AS55Q) and REFL55 (SRCL is on REFL55I). Once I was able to get locks of a few seconds, I used the UGF servos to set the overall loop gain for MICH, PRCL and SRCL, after which I was able to revert the filter triggering to the usual settings
  4. Once I adjusted the overall gains and demod phases, the DRMI locks were very stable - I left a lock alone for ~20mins, and then took loop shape measurements for all 3 loops
  5. Then I decided to try transferring to 3f signals - I first averaged the IN1s to the 'B' channels for the 3 vertex DOFs using cds avg while locked on the 1f signals. I then set a ramp time of 5 seconds and ramped the gain of the 'A' channels to 0 and the 'B' channels to 1. The transition wasn't smooth in that the lock was broken, but it was reacquired in a couple of seconds.
  6. The lock on 3f signals was also pretty stable, the current one has been going for >10 minutes and even when it loses lock, it is able to reacquire in a few seconds
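The 1f to 3f handoff in step 5 amounts to a linear crossfade between the two error signals. As a sketch (the 5 second ramp matches what was used here; everything else is illustrative):

```python
def blended_error(err_1f, err_3f, t, ramp_time=5.0):
    """Crossfade from the 1f ('A') to the 3f ('B') error signal:
    the A gain ramps 1 -> 0 while the B gain ramps 0 -> 1 over
    `ramp_time` seconds, so the blended error moves smoothly."""
    r = min(max(t / ramp_time, 0.0), 1.0)
    return (1.0 - r) * err_1f + r * err_3f
```

If the two signals have an offset or scale mismatch, the blend sweeps through that difference during the ramp, which is presumably why the transition wasn't perfectly smooth.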

I have noted all the settings I used tonight, I will post them tomorrow. I was planning to try a DRFPMI lock if I was successful with the DRMI earlier tonight, but I'm calling it a night for now. But I think the DRMI locking is now back to a reliable level, and we can push ahead with the full IFO lock...

It remains to update the auto-configure scripts to restore the optimized settings from tonight, I am leaving this to tomorrow as well...


Updated 16 Nov 2016 1130am

Settings used were as follows:

DRMI Locking 16 Nov 2016:

1f/3f | DOF      | Error signal         | Whitening gain (dB) | Demod phase (deg) | Loop gain | Trigger
------+----------+----------------------+---------------------+-------------------+-----------+----------
1f    | MICH (A) | AS55Q                | 0                   | -42               | -0.026    | POP22I=1
1f    | PRCL (A) | REFL11I              | 18                  | 18                | -0.0029   | POP22I=1
1f    | SRCL (A) | REFL55I              | 18                  | -175              | -0.035    | POP22I=10
3f    | MICH (B) | REFL165Q             | 24                  | -86               | -0.026    | POP22I=1
3f    | PRCL (B) | REFL33I              | 30                  | 136               | -0.0029   | POP22I=1
3f    | SRCL (B) | REFL165I and REFL33I | -                   | -                 | -0.035    | POP22I=10

 

  12623   Thu Nov 17 15:17:16 2016 gautamUpdateIMCMCL Feedback

As a starting point, I looked at some of the old elogs and tried turning on the MCL feedback path with the existing control filters today. I tried various combinations of MCL feedback and FF on and off, and looked at the MCL error signal spectrum (which I believe comes from the analog MC servo board?) for each case. EricQ and I had used this loop earlier this year, while debugging the EX laser frequency noise, to stabilize the low frequency excursions of the PSL frequency. The low frequency suppression can be seen in Attachment #1; there looks to be some excess MCL noise around 16Hz when the servo is turned on. But the MC transmission (and hence the arm transmission) decays and gets noisier when the MCL feedback path is turned on (see attached StripTool screenshots).

Attachment 1: MCLerror.pdf
MCLerror.pdf
Attachment 2: MCLtest.png
MCLtest.png
Attachment 3: YarmCtrl.pdf
YarmCtrl.pdf
  12627   Fri Nov 18 17:52:42 2016 gautamUpdatePSLFSS Slow control -> Python, WFS re-engaged

[yinzi, craig, gautam]

Yinzi had translated the Perl PID script used to implement the discrete-time PID control, and had implemented it with Andrew at the PSL lab. This afternoon we made some minor edits to make it suitable for the FSS slow loop (essentially just putting the right channel names into her Python script). I then made an init file to run this script on megatron, and it looks to be working fine over the last half hour or so of observation. I am going to leave things in this state over the weekend to see how it performs.
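For reference, the update such a discrete-time PID script performs each cycle looks something like this (a generic sketch; the actual gains, EPICS channel names and sample time of the FSS slow loop are not shown here):

```python
class DiscretePID:
    """Minimal discrete-time PID: u = Kp*e + Ki*integral(e) + Kd*de/dt,
    with the integral and derivative approximated at sample time dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In the real script the measurement would be read from and the output written to EPICS channels each cycle.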


We have been running with just the MC2 Transmission QPD for angular control of the IMC for a couple of months now because the WFS loops seemed to drag the alignment away from the optimum. We did the following to try and re-engage the WFS feedback:

  • Closed the PSL shutter, turned off all the lights in the lab and ran the WFS DC offsets script: /opt/rtcds/caltech/c1/scripts/MC/WFS/WFS_DC_offsets
  • Locked the IMC and optimized the alignment by hand (WFS feedback turned off)
  • Unlocked the IMC, went to the AS table and centered the spots on the WFS
  • Ran the WFS RF offsets script - this should be done with the IMC unlocked (after good alignment has been established)
  • Re-engaged the WFS servo

GV addendum 23Nov2016: The WFS have been working well over the last few days - I've had to periodically (~ once a day) run the WFS relief script to keep the outputs to the suspension PIT and YAW DOFs below 50cts, but the WFS aren't dragging the alignment away as we had noticed before. The only thing I did differently is to follow Rana's suggestion and set the RF offsets with the MC unlocked as opposed to locked. I've added a line to the script to remind the user to do so... Also, note that EricQ has recently cleaned up the scripts directory to remove the numerous obsolete scripts in there...

 

  12630   Mon Nov 21 14:02:32 2016 gautamUpdateLSCDRMI locked on 3f signals, arms held on ALS

Over the weekend, I was successful in locking the DRMI with the arms held on ALS. The locks were fairly robust, lasting on the order of minutes, and when lock was lost it was reacquired automatically in <1 min. I had to tweak the demod phases and loop gains further compared to the 1f lock with no arms, but eventually I was able to run a sensing matrix measurement as well. A summary of the steps I had to follow:

  • Lock on 1f signals, no arms, and run sensing lines; adjust the REFL33 and REFL165 demod phases to align PRCL, MICH and SRCL as best as possible to REFL33I, REFL165Q and REFL165I respectively
  • I also set the offsets to the 'B' inputs at this stage
  • Lock arms on ALS, engage DRMI locking on 3f signals (the restore script resets some values like the 'B' channel offsets, so I modified the restore script to set the offsets I most recently measured)
  • I was able to achieve short locks on the settings from the locking with no arms - I set the loop gains using the UGF servos and ran some sensing lines to get an idea of what the final demod phases should be
  • Adjusted the demod phases, locked the DRMI again (with CARM offset = -4.0), and took another sensing matrix measurement (~2mins). The data was analyzed using the set of scripts EricQ has made for this purpose, here is the result from a lock yesterday evening (the radial axis is meant to be demod board output volts per meter but the calibration I used may be wrong)
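Picking a demod phase to put a DOF into the desired quadrature is just a rotation of the sensing element in the I/Q plane. A sketch (illustrative, not the actual sensing matrix scripts):

```python
import cmath
import math

def rotate_iq(i_val, q_val, phase_deg):
    """Return (I', Q') of a sensing element after rotating the demod
    phase by phase_deg (one sign convention; the hardware may differ)."""
    z = complex(i_val, q_val) * cmath.exp(-1j * math.radians(phase_deg))
    return z.real, z.imag

def phase_to_quadrature(i_val, q_val):
    """Demod phase (deg) that puts the measured element entirely in I'."""
    return math.degrees(cmath.phase(complex(i_val, q_val)))
```

So from a measured (I, Q) pair for a sensing line, `phase_to_quadrature` gives the phase adjustment that rotates that DOF into the I quadrature.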

I've updated the appropriate fields in the restore script. Now that the DRMI locking is somewhat stable again, I think the next step towards the full lock would be to zero the CARM offset and turning on the AO path.

On the downside, I noticed yesterday that the ITMY UL shadow sensor readback was glitching again - for the locking yesterday, I simply held the output of that channel to the input matrix, which worked fine. I had already done some debugging on the Sat. Box with the help of the tester box, but unlike the PRM sat. box, I did not find anything obviously wrong with the ITMY one... I also ran into a CDS issue when I tried to run the script that sets the phase tracker UGF - the script reported that the channels it was supposed to read (the I and Q outputs of the ALS signal, e.g. C1:ALS-BEATX_FINE_I_OUT) did not exist. The same channels worked on dataviewer though, so I am not sure what the problem was. Some time later, the script worked fine too. Something to look out for in the future I guess..

Attachment 1: DRMIArms_Nov20.pdf
DRMIArms_Nov20.pdf
  12631   Mon Nov 21 15:34:24 2016 gautamUpdateCOCRC folding mirrors - updated specs

Following up on the discussion from last week's Wednesday meeting, two points were raised:

  1. How do we decide what number we want for the coating on the AR side for 532nm?
  2. Do we want to adjust T@1064nm on the HR side to extract a stronger POP beam?

With regards to the coating on the AR side, I've put in R<300ppm@1064nm and R<1000ppm@532nm on the AR side. On the HR side, we have T>97% @ 532nm (copied from the current PR3/SR3 spec), and T<50ppm @1064nm. What are the ghost beams we need to be worried about? 

  • Scattered light the AR side interfering with the main transmitted green beam possibly making our beat measurement noisier
    • With the above numbers, accounting for the fact that we ask for a 2 degree wedge on PR3, the first ghost beam from reflection on the AR side will have an angular separation from the main beam of ~7.6 degrees. So over the ~4m the green beam travels before reaching the PSL table, I think there is sufficient angular separation for us to catch this ghost and dump it. 
    • Moreover, the power in this first ghost beam will be ~30ppm relative to the main green beam. If we can get R<100ppm @532nm on the AR side, the number becomes 3ppm
  • Prompt reflection from the HR surface of PR3 scattering green light back into the arm cavity mode 
    • The current spec has T>97% @532nm. So 3% is promptly reflected at the HR side of PR3
    • I'm not sure how much of a problem this really will be - I couldn't find the reflectivities of PR2 and PRM @532nm (were these ever measured?)
    • In any case, if we can have T<50ppm @1064nm and R>99.9% @532nm, that would be better
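The quoted ghost powers follow directly from the coating numbers, assuming the first AR-side ghost picks up one AR reflection and one internal HR reflection relative to the main transmitted green beam:

```python
def ghost_rel_power(r_ar, r_hr):
    """Power in the first AR-side ghost, relative to the main
    transmitted beam: one bounce off the AR coating and one off
    the back of the HR coating (common transmission factors cancel)."""
    return r_ar * r_hr

# With R_AR = 1000 ppm and R_HR = 1 - 0.97 = 3% at 532 nm:
#   ghost_rel_power(1000e-6, 0.03) -> 30e-6, i.e. the quoted ~30 ppm
# Tightening the AR spec to 100 ppm gives the quoted ~3 ppm.
```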

So in conclusion, with the specs as they are now, I don't think the ALS noise performance is adversely affected. I have updated the spec to have the following numbers now.

HR side: T < 50ppm @1064nm, T>99.9% @532nm

AR side: R < 100ppm @1064nm and @532nm

 

As for the POP question, if we want to extract a stronger POP beam, we will have to relax the requirement on the transmission @1064nm on the HR side. But recall that the approach we are now considering is to replace only PR3, and flip PR2 back the right way around. Currently, POP is extracted at PR2, so if we want to stick with the idea of getting a new PR3 and extracting a stronger POP beam, there needs to be a major optical layout reshuffle in the BS/PRM chamber. Koji suggested that in the interest of keeping things moving along, we don't worry about POP for the time being...


Alternatively, if it turns out that the vendor can meet the specs for our second requirement (which requires 1.5% of lambda @632nm measurement precision to meet the 10+/-5km RoC tolerance on PR3), then we can ask for T<1000ppm @1064nm for the HR coating on PR2, and keep the coating specs on PR3 as above. 

 

Attached is a pdf with the specs updated to reflect all the above considerations...

Attachment 1: Recycling_Mirrors_Specs_Nov2016.pdf
Recycling_Mirrors_Specs_Nov2016.pdf
  12635   Wed Nov 23 01:13:02 2016 gautamUpdateIMCMCL Feedback

I wanted to get a clearer idea of the FSS servo and the various boxes in the signal chain and so Lydia and I poked around the IOO rack and the PSL table - I will post a diagram here tomorrow.

We then wanted to characterize the existing loop. It occurred to me later in the evening to measure the plant itself to verify the model shape used to construct the invP filter in the feedback path. I made the measurement with a unity gain control path, and I think there may be an extra zero @10Hz in the model.

Earlier in the evening, we measured the OLG of the MCL loop using the usual IN1/IN2 prescription, in which above 10Hz, the measurement and FOTON disagree, which is not surprising given Attachment #1.

I didn't play around with the loop shape too much tonight, but we did perform some trials using the existing loop, taking into account some things I realized since my previous attempts. The summary of the performance of the existing loop is:

  • Below 1Hz, MCL loop injects noise to the arm control signal. I need to think about why this is, but perhaps it is IMC sensing noise?
  • Between 1-4Hz, the MCL loop suppresses the arm control signal
  • Between 4-10Hz (and also between 60-100Hz for the Xarm), the MCL loop injects noise. Earlier in the evening, we had noticed that there was a bump in the X arm control signal between 60-100Hz (which was absent in the Y arm control signal). Koji later helped me diagnose this as too low loop gain, this has since been rectified, but the HF noise of the X arm remains somewhat higher than the Y arm.

All of the above is summarized in the below plots - this behaviour is (not surprisingly) in line with what Den observed back when he put these in.

  

 

The eventual goal here is to figure out if we can get an adaptive feedback loop working in this path, which can take into account prevailing environmental conditions and optimally shape the servo to make the arms follow the laser frequency more closely at low frequencies (i.e. minimize the effect of the noise injected by IMC length fluctuations at low frequency). But first we need to make a robust 'static' feedback path that doesn't inject control noise at higher frequencies, I need to think a little more about this and work out the loop algebra to figure out how to best do this...
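As a starting point on the loop algebra: the free-running arm signal is suppressed by 1/(1+G), sensing noise is re-injected with G/(1+G), and near the UGF |1+G| can dip below one, so the loop amplifies noise there. A toy 1/f loop shows the pattern (the UGF and phase lag are made-up numbers, not the measured MCL loop):

```python
import cmath
import math

def suppression(f, ugf=4.0, phase_lag_deg=135.0):
    """|1/(1+G)| for a toy open-loop gain G = (ugf/f)*exp(-i*phase_lag):
    <1 means the loop suppresses, >1 means it injects noise."""
    G = (ugf / f) * cmath.exp(-1j * math.radians(phase_lag_deg))
    return 1.0 / abs(1.0 + G)

# Well below the UGF the loop suppresses (suppression(0.1) ~ 0.03),
# while around the UGF |1+G| < 1 and noise is amplified
# (suppression(4.0) ~ 1.3) - qualitatively the 4-10 Hz injection seen above.
```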

Attachment 1: MCL_plant.pdf
MCL_plant.pdf
Attachment 2: OLG.pdf
OLG.pdf
Attachment 3: MC_armSpectra_X.pdf
MC_armSpectra_X.pdf
Attachment 4: MC_armSpectra_Y.pdf
MC_armSpectra_Y.pdf
  12638   Wed Nov 23 16:21:02 2016 gautamUpdateLSCITMY UL glitches are back

 

Quote:

As an aside, we have noticed in the last couple of months glitchy behaviour in the ITMY UL shadow sensor PD output - qualitatively, these were similar to what was seen in the PRM sat. box, and since I was able to get that working again, I did a similar analysis on the ITMY sat. box today with the help of Ben's tester box. However, I found nothing obviously wrong, as I did for the PRM sat. box. Looking back at the trend, the glitchy behaviour seems to have stopped some days ago, the UL channel has been well behaved over the last week. Not sure what has changed, but we should keep an eye on this...

I've noticed that the glitchy behaviour in ITMY UL shadow sensor readback is back - as mentioned above, I looked at the Sat. Box and could not find anything wrong with it, perhaps I'll plug the tester box in over the Thanksgiving weekend and see if the glitches persist...

  12643   Mon Nov 28 10:27:13 2016 gautamUpdateSUSITMY UL glitches are back

I left the tester box plugged in from Thursday night to Sunday afternoon, and in this period, the glitches still appeared in (and only in) the UL channel.

So yesterday evening, I pulled the Sat. Box. out and checked the DC voltages at various points in the circuit using a DMM, including the output of the high current buffer that supplies the drive current to the shadow sensor LEDs. When we had similar behaviour in the PRM box, this kind of analysis immediately identified the faulty component as the high current buffer IC (LM6321M) in the bad channel, but everything seems in order for the ITMY box. 

I then checked the Satellite Amplifier Termination Board, which basically just adds 100ohm series resistors to the output of the PD readout; all the resistors seem fine, and the piece of insulating material affixed to the bottom of this board is also intact. I then used the SR785 in AC-coupled mode to look at the high frequency spectra at the same points where I checked the DC voltages with the DMM (namely the drive voltage to the LEDs, and the PD readout voltages both on the PCB and on the pins of the connector on the outside of the box after the termination board, leading to the DAQ), and nothing sticks out in the UL channel here either. Of course it could be that the glitches are intermittent, and during my tests they just weren't there...

I am hesitant to start pulling out ICs and replacing them without any obvious signs of failure from them, but I am out of debugging ideas...


One possibility is that the problem lies upstream of the Sat. Box - perhaps the UL channel in the Suspension PD Whitening and Interface Board is faulty. To test, I have now hooked up ITMY Sat. Box. + tester box to the signal chain of ETMY. If I can get the other tester box back from Ben, I will plug in the ETMY sat. box. + tester to the ITMY signal chain. This should tell us something...

Attachment 1: ITMY_satboxSpectra.pdf
ITMY_satboxSpectra.pdf
  12648   Wed Nov 30 01:47:56 2016 gautamUpdateLSCSuspension woes

Short summary:

  • Looks like Satellite boxes are not to blame for glitchy behaviour of shadow sensor PD readouts
  • Problem may lie at the PD whitening boards (D000210) or with the Contec binary output cards in c1sus
  • This evening, similar glitchy behaviour was observed in all MC1 PD readout channels, leading to frequent IMC unlocking. The cause is unknown, although I did work at 1X5 and 1X6 today, and pulled out the PD whitening board for ITMY, which sits in the same eurocrate as that for MC1. MC2/MC3 do not show any glitches.

Detailed story below...


Part 1: Satellite box swap

Yesterday, I switched the ITMY and ETMY satellite boxes, to see if the problems we have been seeing with ITMY UL move with the box to ETMY. It did not, while ITMY UL remained glitchy (based on data from approximately 10pm PDT on 28Nov - 10am PDT 29 Nov). Along with the tabletop diagnosis I did with the tester box, I concluded that the satellite box is not to blame.


Part 2: Tracing the signal chain (actually this was part 3 chronologically but this is how it should have been done...)

So if the problem isn't with the OSEMs themselves or the satellite box, what is wrong? I attempted to trace the signal chain from the satellite box into our CDS system as best as I could. The suspension wiring diagram on our wiki page is (I think) a past incarnation. Of course putting together a new diagram was a monumental task I wasn't prepared to undertake tonight, but in the long run this may be helpful. I will put up a diagram of the part I did trace out tomorrow, but the relevant links for this discussion are as follows (? indicates I am unsure):

  1. Sat box (?)--> D010069 via 64pin IDE connector --> D000210 via DB15 --> D990147 via 4pin LEMO connectors --> D080281 via DB25 --> ADC0 of c1sus
  2. D000210 backplane --> cross-connect (mis)labelled "ITMX white" via IDE connector
  3. c1sus CONTEC DO-32L-PE --> D080478 via DB37 --> BO0-1 --> cross-connect labelled "XY220 1Y4-33-16A" via IDE --> (?)  cross-connect (mis)labelled "ITMX white" via IDE connector

I have linked to the DCC page for the various parts where available. Unfortunately I can't locate (on the new DCC or the old one, or the elog or wiki) drawings for D010069 (Satellite Amplifier Adapter Board), D080281 ("anti-aliasing interface") or D080478 (which is the binary output breakout box). I have emailed Ben Abbott, who may have access to some other archive - the drawings would be useful, as it is looking likely that the problem lies with the binary output.

So presumably the first piece of electronics after the Satellite box is the PD whitening board. After placing tags on the 3 LEMOs and 1 DB15 cable plugged into this board, I pulled out the ITMY board to do some tabletop diagnosis in the afternoon around 2pm 29Nov.


Part 3: PD whitening board debugging

This particular board has been reported as problematic in the recent past. I started by inserting a tester board into the slot occupied by this board - the LEDs on the tester board suggested that the power supplies from the backplane connectors were alright, confirmed with a DMM.

Looking at the board itself, C4 and C6 are tantalum capacitors, and I have had problems with this type of capacitor in the past. In fact, on the corresponding MC3 board (the only one visible; I didn't want to pull out boards unnecessarily), these have been replaced with electrolytic capacitors, which are presumably more reliable. In any case, these capacitors do not seem to be at fault; the board receives +/-15 V as advertised.

The whitening switching is handled by the MAX333 - this is what I looked at next. This IC is essentially a quad SPDT switch, and a binary input supplied via the backplane connector serves to route the PD input either through a whitening filter, or bypass it via a unity gain buffer. The logic levels that effect the switching are +15V and 0V (and not the conventional 5V and 0V), but according to the MAX333 datasheet, this is fine. I looked at the supply voltage to all ICs on the board, DC levels seemed fine (as measured with a DMM) and I also looked at it on an oscilloscope, no glitches were seen in ~30sec viewing stretch. I did notice something peculiar in that with no input supplied to the MAX333 IC (i.e. the logic level should be 15V), the NO and NC terminals appear shorted when checked with a DMM. Zach has noticed something similar in the past, but Koji pointed out that the DMM can be fooled into thinking there is a short. Anyway, the real test was to pull the logic input of the MAX333 to 0, and look at the output, this is what I did next.

The schematic says the whitening filter has poles at 30,100Hz and a zero at 3 Hz. So I supplied as "PD input" a 12Hz 1Vpp sinewave - there should be a gain of ~x4 when this signal passes through the path with the whitening filter. I then applied a low frequency (0.1Hz) square wave (0-5V) to the "bypass" input, and looked at the output, and indeed saw the signal amplitude change by ~4x when the input to the switch was pulled low. This behaviour was confirmed on all five channels, there was no problem. I took transfer functions for all 5 channels (both at the "monitor" point on the backplane connector and on the front panel LEMOs), and they came out as expected (plot to be uploaded soon).
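The ~x4 number is consistent with the quoted pole/zero locations. A quick check of the whitening filter magnitude (DC-normalized, zero at 3Hz, poles at 30 and 100Hz):

```python
def whitening_gain(f, f_zero=3.0, f_poles=(30.0, 100.0)):
    """|H(f)| for H(s) = (1 + s/wz) / ((1 + s/wp1)(1 + s/wp2)),
    normalized to unity gain at DC."""
    mag = abs(complex(1.0, f / f_zero))
    for fp in f_poles:
        mag /= abs(complex(1.0, f / fp))
    return mag

# whitening_gain(12.0) -> ~3.8, i.e. the ~x4 amplitude change seen
# on the scope when the 12Hz, 1Vpp input is routed through the
# whitening path instead of the unity gain buffer.
```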

Next, I took the board back to the eurocrate. I first put in a tester box into the slot and measured the voltage levels on the backplane pins that are meant to trigger bypassing of the whitening stage, all the pins were at 0V. I am not sure if this is what is expected, I will have to look inside D080478 as there is no drawing for it. Note that these levels are set using a Contec binary output card. Then I attached the PD whitening board to the tester board, and measured the voltages at the "Input" pins of all the 5 SPDT switches used under 2 conditions - with the appropriate bit sent out via the Contec card set to 0 or 1 (using the button on the suspension MEDM screens). I confirmed using the BIO medm screen that the bit is indeed changing on the software side, but until I look at D080478, I am not sure how to verify the right voltage is being sent out, except to check at the pins on the MAX333. For this test, the UL channel was indeed anomalous - while the other 4 channels yielded 0V (whitening ON, bit=1) and 15V (whitening OFF, bit=0), the corresponding values for the UL channel were 12V and 10V.

I didn't really get any further than this tonight. But this still leaves unanswered questions - if the measured values are faithful, then the UL channel always bypasses the whitening stage. Can this explain the glitchy behaviour?


Part 4: MC1 troubles

At approximately 8pm, the IMC started losing lock far too often - see the attached StripTool trace. There was a good ~2hour stretch before that when I realigned the IMC, and it held lock, but something changed abruptly around 8pm. Looking at the IMC mirror OSEM PD signals, all 5 MC1 channels are glitching frequently. Indeed, almost every IMC lockloss in the attached StripTool is because of the MC1 PD readouts glitching, and subsequently, the damping loops applying a macroscopic drive to the optic which the FSS can't keep up with. Why has this surfaced now? The IMC satellite boxes were not touched anytime recently as far as I am aware. The MC1 PD whitening board sits in the same eurocrate I pulled the ITMY board out of, but squishing cables/pushing board in did not do anything to alleviate the situation. Moreover, MC2 and MC3 look fine, even though their PD whitening boards also sit in the same eurocrate. Because I was out of ideas, I (soft) restarted c1sus and all the models (the thinking being if something was wrong with the Contec boards, a restart may fix it), but there was no improvement. The last longish lock stretch was with the MC1 watchdog turned off, but as soon as I turned it back on the IMC lost lock shortly after.

I am leaving the autolocker off for the night, hopefully there is an easy fix for all of this...

Attachment 1: IMCwoes.png
IMCwoes.png
  12652   Wed Nov 30 17:08:56 2016 gautamUpdateLSCBinary output breakout box removed

[ericq, gautam]

To diagnose the glitches in OSEM readouts, we have removed one of the PCIE BO D37 to IDE50 adaptor boxes from 1X5. All the watchdogs were turned off, and the power to the unit was cut before the cables on the front panel were removed. I am working on the diagnosis, I will update more later in the evening. Note that according to the c1sus model, the box we removed supplies backplane logic inputs that control whitening for ITMX, ITMY, BS and PRM (in case anyone is wondering/needs to restore damping to any of these optics). The whitening settings for the IMC mirrors resides on the other unit in 1X5, and should not be affected.

  12653   Thu Dec 1 02:19:13 2016 gautamUpdateLSCBinary output breakout box restored

As we suspected, the binary breakout board (D080478, no drawing available) is simply a bunch of tracks printed on the PCB to route the DB37 connector pins to two IDE50 connectors. There was no visible damage to any of the tracks (some photos uploaded to the 40m picasa). Further, I checked the continuity between pins that should be connected using a DMM.

I got a slightly better understanding of the binary output signal chain - the relevant pages are 44 and 48 in the CONTEC manual. The diagram on pg. 44 maps the pins on the DB37 connector, while the diagram on pg. 48 shows how the switching actually occurs. The "load" in our case is the 4.99kohm resistor on the PD whitening board D000210. Following the logic in the diagram on pg. 48 is easy - setting a "high" bit in the software pulls the load resistor to 0V, while setting a "low" bit keeps the load at 15V (so effectively the whole setup of CONTEC card + breakout board + pull-up resistor can be viewed as a simple NOT gate, with the software bit as the input and the output connected to the "IN" pin of the MAX333).
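A toy model of the chain as described above (logic thresholds from the MAX333 datasheet; the bit-to-path mapping assumes whitening is engaged when the IN pin is pulled low, as measured on the healthy channels):

```python
def backplane_voltage(bit):
    """CONTEC BO + pull-up as a NOT gate: software bit high pulls the
    4.99k load to 0V, bit low leaves it sitting at +15V."""
    return 0.0 if bit else 15.0

def whitening_engaged(bit, v_high=2.4, v_low=0.8):
    """MAX333 logic per its datasheet: 'low' < 0.8V, 'high' > 2.4V.
    IN low selects the whitening path in this setup."""
    v = backplane_voltage(bit)
    if v < v_low:
        return True
    if v > v_high:
        return False
    return None  # undefined logic region
```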

Since I was satisfied with the physical condition of the BO breakout board, I re-installed the box on 1X5. Then, with the help of a breakout board, I diagnosed the situation further - I monitored the voltage to the pins on the backplane connector to the whitening boards while toggling the whitening state with the MEDM switches. For all channels except ITMY UL, the behaviour was as expected, in line with the preceding paragraph - the voltage swings between ~0V and ~15V. As mentioned in my post yesterday, the ITMY UL channel remains dodgy, with voltages of 12.84V (bit=1) and 10.79V (bit=0). So unless I am missing something, this must point to a faulty CONTEC card? We do have spares, do we want to replace this? It also looks like this problem has been present since at least 2011...

In any case, why should this lead to ITMY UL glitching? According to the MAX333 datasheet, the switch wants "low"<0.8V and "high">2.4V - so even if the CONTEC card is malfunctioning and the output is toggling between these two states, the condition should be that the whitening stage is always bypassed for this channel. The bypassed route works just fine, I measured the transfer function and it is unity as expected.

So what could possibly be leading to the glitches? I doubt that replacing the BO card will solve this problem. One possibility that came up in today's meeting is that perhaps the +24V to the Sat. Box. (which is used to derive the OSEM LED drive current) is glitching - of course we have no monitor for this, but given that all the Sat. Amp. Adaptor boards are on 1X5 near the Acromag, perhaps Lydia and Johannes can recommission the PSL diagnostic Acromag as a power-supply-monitoring Acromag?


What do these glitches look like anyway? Here is a few second snapshot from one of the many MC1 excursions from yesterday - the original glitch itself is very fast, and then that gives an impulse to the damping loop which eventually damps away.

And here is one from when there was a glitch while the tester box was plugged into the ITMY signal chain (so we can rule out anything in the vacuum, and also the satellite box itself, as the glitches remain even when boxes are shuffled around and don't migrate with the box). So even though the real glitch happens in the UL channel (note the y axes are very different for the channels), the UR, LR and LL channels also "feel" it. Recall that this is with the tester box (so no damping loops involved), and the fact that the side channel is more immune to it than the others is hard to explain. Could this just be electrical cross-coupling?

Still beats me what in the signal chain could cause this problem.


Some good news - Koji was running some tests on the modified WFS demod board and locked the IMC for this. We noticed that MC1 seemed well behaved for extended periods of time, unlike last night. I realigned the PMC and IMC, and we have been having lock stretches of a few hours as we usually do. I looked at the MC1 OSEM PD readbacks during the couple of lock losses in the last few hours, and didn't notice anything dramatic. So if things remain in this state, at least we can do other stuff with the IFO... I have plugged in the ITMY sat. box again, but have left the watchdog disabled; let's see what the glitching situation is overnight... The original ITMY sat. box has been plugged into the ETMY DAQ signal chain with a tester box. The 3 day trend supports the hypothesis that the sat. box is not to blame, so I am plugging the ETMY suspension back in as well...

Attachment 4: ULcomparison.pdf
ULcomparison.pdf
  12655   Thu Dec 1 20:20:15 2016 gautamUpdateIMCIMC loss measurement plan

We want to measure the IMC round-trip loss using the Isogai et. al. ringdown technique. I spent some time looking at the various bits and pieces needed to make this measurement today, this elog is meant to be a summary of my thoughts.

  1. Inventory
    • AOM (in its new mount to have the right polarization) has been installed upstream of the PMC by Johannes. He did a brief check to see that the beam is indeed diffracted, but a more thorough evaluation has to be done. There is currently no input to the AOM, the function generator on the PSL table is OFF.
    • The Isogai paper recommends 3 high-BW PDs for the ringdown measurement. Scouring through some old elogs, I gather that the QPDs aren't good for this kind of measurement, but the PDA255 (50MHz BW) is a suitable candidate. I found two in the lab today - one I used to diagnose the EX laser intensity noise, so I know it works; I need to check the other one. We also have a working PDA10CF detector (150 MHz BW). In principle, we could get away with just two, as the ringdowns in reflection and transmission do not have to be measured simultaneously, but it would be nice to have three.
    • DAQ - I think the way to go is to use a fast scope triggered on the signal sent to the AOM to cut the light to the IMC. I need to figure out how to script this, though judging by some 2007 elogs by rana, it shouldn't be too hard...
  2. Layout plans
    • Where to put the various PDs? Keeping with the terminology of the Isogai paper, the "Trans diode" can go on the MC2 table - from past measurements, there is already a pickoff from the beam going to the MC TRANS QPD which is currently being dumped, so this should be straightforward...
    • For the "Incident Diode", we can use the beam that was used for the 3f cancellation trials - I checked that the beam still runs along the edge of the PSL table, we can put a fast PD in there...
    • For the "REFL diode" - I guess the MC REFL PD is high BW enough, but perhaps it is better to stick another PD in on the AS table, we can use one of the existing WFS paths? That way we avoid the complicated transfer function of the IMC REFL PD which is tuned to have a resonance at 29.4MHz, and keeps interfacing with the DAQ also easy, we can just use BNC cables...
    • We should be able to measure and calibrate the powers incident on these PDs relatively easily.
       
  3. Other concerns
    • I have yet to do a thorough characterization of the AOM performance, there have been a number of elogs noting possible problems with the setup. For one, the RF driver datasheet recommends 28V supply voltage but we are currently giving it 24V. In the (not too distant) past, the AOM has been seen to not be very efficient at cutting the power, the datasheet suggests we should be able to diffract away 80% of the central beam but only 10-15% was realized, though this may have been due to sub-optimal alignment or that the AOM was receiving the wrong polarization...
  4. Plan of action
    • Check RF driver, AOM performance, I have in mind following the methodology detailed here
    • Measure PMC ringdown - this elog says we want it to be faster than 1us
    • Put in the three high BW PDs required for the IMC ringdown, check that these PDs are working
    • Do the IMC ringdown

Does this sound like a sensible plan? Or do I need to do any further checks?
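For reference, the loss number the ringdown ultimately yields comes from the 1/e decay time of the stored power. A minimal sketch of that arithmetic is below; the round-trip length and mirror transmission used here are illustrative assumptions, not measured 40m numbers.

```python
# Minimal arithmetic for turning a fitted 1/e power decay time into a
# round-trip loss. L_rt and the transmission below are illustrative
# assumptions, NOT measured 40m numbers.
c = 299792458.0      # speed of light [m/s]
L_rt = 27.0          # assumed IMC round-trip length [m]

def excess_round_trip_loss(tau, T_total):
    """tau: fitted 1/e decay time of the transmitted power [s].
    T_total: total mirror power transmission per round trip.
    The stored power decays as exp(-t/tau) with
        tau = (L_rt / c) / (T_total + loss),
    so the excess loss is the total inferred round-trip loss minus the
    known transmission."""
    return (L_rt / c) / tau - T_total

# e.g. a 20 us decay with an assumed 4000 ppm total transmission
print(excess_round_trip_loss(20e-6, 4000e-6) * 1e6, "ppm")
```

With these made-up numbers the excess loss comes out around 500 ppm; the point is only that the measurement boils down to one fitted time constant plus known transmissions.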

  12657   Fri Dec 2 11:56:42 2016 gautamUpdateLSCMC1 LEMO jiggled

I noticed 2 periods of frequent IMC locklosses on the StripTool trace, and so checked the MC1 PD readout channels to see if there were any coincident glitches. Turns out there weren't - BUT the LR and UR signals had changed significantly over the last couple of days, which is when I've been working at 1X5. The fast LR readback was actually showing ~0, but the slow monitor channel had been steady, so I suspected some cabling shenanigans.

Turns out, the problem was that the LEMO connector on the front of the MC1 whitening board had gotten jiggled ever so slightly - I re-jiggled it till the LR fast channel registered a similar number of counts to the other channels. All looks good for now. For good measure, I checked the 3 day trend of the fast PD readbacks for all 8 SOS optics (40 channels in all; I didn't look at the ETMs as their whitening boards are at the ends), and everything looks okay... This whole situation seems very precarious to me; perhaps we should have a more robust signal routing from the OSEMs to the DAQ that is more immune to cable touching etc...

  12659   Fri Dec 2 16:21:12 2016 gautamUpdateGeneralrepaired projector, new mixer arrived and installed

The most recent power outage took out our projector and mixer. The projector was sent for repair while we ordered a new mixer. Both arrived today. Steve is working on re-installing the projector right now, and I installed the mixer which was verified to be working with our DAFI system (although the 60Hz issue still remains to be sorted out). The current channel configuration is:

Ch1: 3.5mm stereo output from pianosa

Ch2: DAFI (L)

Ch3: DAFI (R)

I've set some random gains for now, but we will have audio again when locking.

  12660   Fri Dec 2 16:40:29 2016 gautamUpdateIMC24V fuse pulled out

I've pulled out the 24V fuse block which supplies power to the AOM RF driver. The way things are set up on the PSL table, this same voltage source powers the RF amplifiers which amplify the green beatnote signals before sending them to the LSC rack. So I turned off the green beat PDs before pulling out the fuse. I then disconnected the input to the RF driver (it was plugged into a DS345 function generator on the PSL table) and terminated it with a 50 ohm terminator. I want to figure out a smart way of triggering the AOM drive and recording a ringdown on the scope, after which I will re-connect the RF driver to the DS345. The RF driver, as well as the green beat amplifiers and green beat PDs, remain unpowered for now...

  12663   Mon Dec 5 01:58:16 2016 gautamUpdateIMCIMC ringdowns

Over the weekend, I worked a bit on getting these ringdowns going. I will post a more detailed elog tomorrow but here is a quick summary of the changes I made hardware-wise in case anyone sees something unfamiliar in the lab...

  • PDA10CF PD installed on PSL table in the beam path that was previously used for the 3f cancellation trials
  • PDA255 installed on MC2 trans table, long BNC cable running from there to vertex via overhead cable tray
  • PDA255 installed on AS table in front of one of the (currently unused) WFS

I spent a while in preparation for these trials (details tomorrow), optimizing the AOM alignment/diffracted power ratio, checking the AOM and PMC switching times, etc., but once the hardware is laid out, it is easy to do a bunch of ringdowns in quick succession with an ethernet scope. Tonight I did about 12 ringdowns - but stupidly, for the first 10, I was only saving 1 channel from the oscilloscope instead of the 3 we need to apply the MIT method.

Here is a representative plot of the ringdown - at the moment, I don't have an explanation for the funky oscillations in the reflected PD signal, need to think on this.. More details + analysis to follow...


Dec 5 2016, 130pm:

Actually the plot I meant to put up is this one, which has the time window acquired slightly longer. The feature I am referring to is the 100kHz oscillation in the REFL signal. Any ideas as to what could be causing this?

Attachment 1: IMCringdown.pdf
IMCringdown.pdf
Attachment 2: IMCringdown_2.pdf
IMCringdown_2.pdf
  12664   Mon Dec 5 15:05:37 2016 gautamUpdateLSCMC1 glitches are back

For no apparent reason, the MC1 glitches are back. Nothing has been touched near the PD whitening chassis today, and the trend suggests the glitching started about 3 hours ago... I had disabled the MC1 watchdog for a while to avoid the damping loop kicking the suspension around when these glitches occur, but have re-enabled it now. The IMC is holding lock for some minutes at a time... I was hoping to do another round of ringdowns tonight, but if this persists, it's going to be difficult...

  12665   Mon Dec 5 15:55:25 2016 gautamUpdateIMCIMC ringdowns

As promised, here is the more detailed elog.


Part 1: AOM alignment and diffraction efficiency optimization

I started out by plugging the input to the AOM driver back into the DS345 on the PSL table, after which I re-inserted the 24V fuse that was removed. I first wanted to optimize the AOM alignment and see how well we could cut the input power by driving the AOM. In order to investigate this, I closed the PMC, unlocked the PSL shutter, and dialed the PSL power down to ~100mW using the waveplate in front of the laser. Before touching anything, the power just before the AOM was 1.36W, as measured with the Coherent power meter.

The photodiode (PDA255) for this experiment was placed downstream of the 1%(?) transmissive optic that steers the beam into the PMC (this PD would also be used in Part 2, but has since been removed)...

Then I tuned the AOM alignment till I maximized the DC power on this newly installed PD. It would have been nicer to have the AOM installed on the mount such that the alignment screws were more easily accessible, but I opted against doing any major re-organization for the time being. Even after optimizing the AOM alignment, the diffraction efficiency was only ~15%, for 1V to the AOM driver input. So I decided to play with the AOM driver a bit.

Note that the AOM driver is powered by 24V DC, even though the spec sheet says it wants 28V. Also, the "ALC" input is left unconnected, which should be fine for our purposes. I opted not to mess with this for the time being - rather, I decided to tweak the RF adjust potentiometer on the front of the unit, which the spec sheet says can adjust the RF power between 1W and 2W. By iteratively tuning this pot and the AOM alignment, I was able to achieve a diffraction efficiency of ~87% (the spec sheet tells us to expect 80%), with a switching time of ~130ns (the spec sheet says 200ns, but this is presumably a function of the beam size in the AOM). These numbers seemed reasonable to me, so I decided to push on. Note that I did not do a thorough check of the linearity of the AOM driver after touching the RF adjust potentiometer as Koji did - this would be relevant if we want to use the AOM as an ISS servo actuator, but for the ringdown, all that matters is the diffraction efficiency and switching time, both of which seemed satisfactory.
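The diffraction efficiency quoted above is just a ratio of PD voltages with the drive on and off. As a sanity check, a minimal sketch (the example voltages are made up, chosen only to reproduce the ~87% figure):

```python
# Example voltages below are made up, chosen to reproduce the ~87% figure.
def diffraction_efficiency(v_drive_on, v_drive_off):
    """Fraction of power removed from the undiffracted (zeroth-order) beam,
    from the DC voltage of a PD watching that beam with the AOM drive on
    vs. off. Assumes a linear PD with the dark offset subtracted."""
    return 1.0 - v_drive_on / v_drive_off

# e.g. PD reads 2.0 V with the drive off and 0.26 V with 1 V at the driver input
print(diffraction_efficiency(0.26, 2.0))  # ~0.87
```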

At this point, I turned the PSL power back up (measured 1.36W just before the AOM). Beforehand, I estimated the PD would have ~10mW incident on it, and I wanted it to be more like 1mW, so I put an ND 1.0 filter on to avoid saturation.


Part 2: PMC "ringdown"

As mentioned in my earlier elog, we want the PMC to cut the light to the IMC in less than 1us. While I was at it, I decided to see if I could do a ringdown measurement for the PMC. For this, I placed two more PDs in addition to the one mentioned in Part 1. One monitored the transmitted intensity (PDA10CF, installed in the old 3f cancellation trial beam path, ~1mW incident on it when PMC is locked and well aligned). I also split off half the light to the PMC REFL CCD (2mW, so after splitting, PMC CCD gets 1mW through some ND filters, and my newly installed PD (PDA255) receives ~1mW). Unfortunately, the PMC ringdown attempts were not successful - the PMC remains locked even if we cut the incident light by 85%. I guess this isn't entirely surprising, given that we aren't completely extinguishing the input light - this document deals with this issue.... But the PMC transmitted intensity does fall in <200ns (see plot in earlier elog), which is what is critical for the IMC ringdown anyways. So I moved on.


Part 3: IMC ringdown

The PDA10CF installed in part 2 was left where it was. The reflected and transmitted light monitors were PDA255. The former was installed in front of the WFS2 QPD on the AS table (needed an ND1.0 filter to avoid damage if the IMC unlocks not as part of the ringdown, in which case ~6mW of power would be incident on this PD), while the latter was installed on the MC2 transmission table. We may have to remove the former, but I don't see any reason to remove the latter PD. I also ran a long cable from the MC2 trans table to the vertex area, which is where I am monitoring the various signals.

  

The triggering arrangement is shown below.

  

To actually do the ringdown, here is the set of steps I followed.

  1. Make sure the settings on the scope (X & Y scales, triggering) are optimized for data capture. All channels are set to 50ohm input impedance. The trigger comes from the "TTL" output of the DS345, whose "signal" output drives the AOM driver. Set the trigger to external; the mode should be "normal" and not "auto" (this keeps the data on the screen until the next trigger, allowing us to download the data via ethernet).
  2. The DS345 is set to output a low frequency (0.005Hz) square wave, with 1Vpp amplitude, 0.5V offset (so the AOM driver input is driven between 0V and 1V DC, which is what we want). This gives us ~100 seconds to re-lock the IMC, and download the data, all while chilling in the control room
  3. The autolocker was excellent yesterday, re-acquiring the IMC lock in ~30secs almost every time. In the few instances it didn't work, turn the autolocker off (but make sure the MC2 tickle is on, it helps) and manually lock the IMC by twiddling the gain slider (basically doing by hand what the autolock script does). As mentioned above, you have ~100 secs to do this; if not, just wait ~200 secs for the next trigger...
  4. In the meantime, download the data (script details to follow). I've made a little wrapper script (/users/gautam/2016_12_IMCloss/grabChans.sh) which uses Tobin's original python script, which unfortunately only grabs data one channel at a time. The shell script just calls the function thrice, and needs two command line arguments, namely the base name for the files to which the data will be written, and an IP address for the scope...

It is possible to do ~15 ringdowns in an hour, provided the seismic activity is low and the IMC is in a good mood. Unfortunately, I messed up my data acquisition yesterday, so I only have data from 2 ringdowns, which I will work on fitting and extracting a loss number from. The ringing in the REFL signal is also a mystery to me. I will try using another PDA255 and see if this persists. In any case, I think we can exclude the later part of the REFL signal and fit the early exponential decay in the worst case. The ringdown signal plots have been uploaded to my previous elog. Also, the triggering arrangement can be optimized further, for example by using the binary output from one of our FEs to trigger the actual waveform instead of leaving it in this low frequency oscillation, but given our recent experience with the Binary Output cards, I thought this unnecessary for the time being...

Data analysis to follow.


I have left all the PDs I put in for this measurement. If anyone needs to remove the one in front of WFS2, go ahead, but I think we can leave the one on the MC2 trans table there...

Attachment 2: AOMswitching.pdf
AOMswitching.pdf
Attachment 6: electricalLayout.pdf
electricalLayout.pdf
  12666   Mon Dec 5 19:29:52 2016 gautamUpdateIMCIMC ringdowns

The MC1 suspension troubles vanished as suddenly as they came - but the IMC was remaining locked stably, so I decided to do another round of ringdowns and investigate this feature in the reflected light a bit more closely. Over 9 ringdowns, as seen in the figure below, the feature doesn't quite repeat exactly, but qualitatively the behaviour is similar.

Steve helped me find another PDA255 and so I will try switching out this detector and do another set of ringdowns later tonight. It just occurred to me that I should check the spectrum of the PD output out to high frequencies, but I doubt I will see anything interesting as the waveform looks clean (without oscillations) just before the trigger...

Attachment 1: REFLanomaly.pdf
REFLanomaly.pdf
  12667   Tue Dec 6 00:43:41 2016 gautamUpdateIMCmore IMC ringdowns

In an effort to see if I could narrow down the cause of the 100kHz ringing seen in the reflected PD signal, I tried a few things.

  1. Changed the PD - there was a PDA 255 sitting on the PSL table by the RefCav. Since it wasn't being used, I swapped the PD I was using with this. Unfortunately, this did not solve the problem.
  2. Used a different channel on the oscilloscope - ringing persisted
  3. Changed BNC cable running from PD to oscilloscope - ringing persisted
  4. Checked the spectrum of the PD under dark and steady illumination conditions for any features at 100kHz, saw nothing (as expected) 

I was working under the hypothesis that the ringing was due to some impedance mismatch between the PD output and the oscilloscope, and 4 above supports this. However, most documents I can find online, for example this one, recommend connecting the PD output via 50ohm BNC to a scope with input impedance 50ohms to avoid ringing, which is what I have done. But perhaps I am missing something.

Moreover, the ringdown in reflection actually supplies two of the five variables needed to apply the MIT method of loss estimation. I suppose we could fit the parameter "m4" from the ringdown in transmission, and then use this fitted value on the ringdown in reflection to see where the reflected power settles (i.e. the parameter "m3" as per the MIT paper). I will try analyzing the data on this basis.
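As a sketch of what the fitting step looks like (synthetic data here, not the actual scope traces; the parameter names are illustrative), an exponential fit with scipy could be:

```python
# Synthetic-data sketch of fitting the transmission ringdown to an
# exponential decay, to extract the 1/e time used in the loss estimate.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau, c0):
    # amplitude * exp(-t/tau) + offset
    return a * np.exp(-t / tau) + c0

np.random.seed(0)
t = np.linspace(0, 100e-6, 500)          # 100 us of "scope" time
true_tau = 20e-6
data = model(t, 1.0, true_tau, 0.02) + np.random.normal(0, 0.005, t.size)

popt, pcov = curve_fit(model, t, data, p0=(1.0, 10e-6, 0.0))
print("fitted tau = %.2f us" % (popt[1] * 1e6))
```

On real data, one would restrict the fit window to the early clean decay, as suggested above for the REFL signal.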

I also measured the power levels at each of the PDs, these should allow us to calibrate the PD voltage outputs to power in Watts. All readings were taken with the Ophir power meter, with the filter removed, and the IMC locked.

PD        Power level
REFL      0.47 mW (measured before 1.0 ND filter)
Trans     203 uW
Incident  1.06 mW
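For the calibration itself, something like the conversion below would apply. The responsivity and transimpedance-gain numbers here are ballpark assumptions for a silicon PD near 1064 nm, not measured or datasheet values; the table above gives the actual measured powers.

```python
# Ballpark conversion from PD voltage to optical power. Responsivity and
# gain are assumed round numbers (silicon PD near 1064 nm), NOT calibrated
# values for our particular detectors.
def pd_volts_to_watts(v, responsivity=0.2, gain=1e4, load_factor=0.5):
    """v: PD output voltage on a 50-ohm scope input [V].
    responsivity: photodiode responsivity [A/W] (assumed ~0.2 at 1064 nm).
    gain: transimpedance gain [V/A] (assumed 1e4).
    load_factor: 0.5 for a 50-ohm output driving a 50-ohm input."""
    return v / (responsivity * gain * load_factor)

print("%.2f mW" % (pd_volts_to_watts(1.0) * 1e3))
```

In practice the power-meter readings above pin down the overall scale directly, so the per-PD parameters only need to be self-consistent.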

 

  12701   Tue Jan 10 22:55:43 2017 gautamUpdateCDSpower glitch - recovery steps

Here is a link to an elog with the steps I had to follow the last time there was a similar power glitch.

The RAID array restart was also done not too long ago; we should also do a data consistency check as detailed here, if not already done...

If someone hasn't found the time to do this, I can take care of it tomorrow afternoon after I am back.

Quote:

Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?

The computers are all done.

megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely

 

  12702   Wed Jan 11 16:35:03 2017 gautamUpdateCDSpower glitch - recovery progress

[lydia, ericq, gautam]

We set about following the instructions linked in the previous elog. A few notes/remarks:

  1. It is important to run the ntpdate commands before restarting the models. Sometimes, multiple restarts of the models were required to turn all the indicator blocks on the MEDM screen green.
  2. There was also an issue of multiple ntpd processes running on the same machine, which obviously caused all sorts of timing havoc. EricQ helped us diagnose and fix these. At the moment, all the lights are green on the CDS status MEDM screen
  3. On the hardware side, apart from the usual suspects of frontends/megatron/optimus/fb needing to be rebooted, I noticed that the ETMX OSEM lights were off on the control room monitors. Investigation pointed to the 2 20V sorensens at the X end outputting 0V, 0A after the power glitch. We turned down both dials, and then gradually ramped them up again. Both Sorensens now read +/-20V, 0.3A, which is in agreement with the label stuck onto them.
  4. Restarted MC autolocker and FSS Slow scripts on megatron. I have not yet looked at the status of the nds2 server on megatron.
  5. The 11 MHz Marconi has yet to be restarted - but I am unable to get even the IMC locked at the moment. For some reason, the RMS of the MC1 and MC3 coil outputs is way higher than what I am used to seeing (~5mV rms, as compared to the <1mV rms typical for a damped optic). I will investigate further. Leaving the MC autolocker disabled for now.
  12708   Thu Jan 12 17:31:51 2017 gautamUpdateCDSDC errors

The IFO is more or less back to an operational state. Some details:

  1. The IMC mirror excess motion alluded to in the previous elog was due to some timing issues on c1sus. The "DAC" and "DK" blocks in the c1x02 diag word were red instead of green. Restarting all the models on c1sus fixed the problem
  2. When c1ioo was restarted, all of Koji's (digital) changes to the MC WFS servo were lost as they were not committed to the SDF. Eric suggested that I could just restore them from burt snapshots, which is what I did. I used the c1iooepics.snap file from 12:19PM PST on 26 December 2016, which was a time when the WFS servo was working well as per this elog by Koji. I have also committed all the changes to the SDF. IMC alignment has been stable for the last 4 hours.
  3. Johannes aligned and locked the arms today. There was a large DC offset on POX11, which was zeroed out by closing the PSL shutter and running LSC offsets. Both arms lock and stay aligned now.
  4. The doubling oven controller at the Y end was switched off. Johannes turned it on.
  5. Eric and I started a data consistency check on the RAID array yesterday, it has completed today and indicated no issues
  6. NDS2 is now running again on megatron so channel access from outside should(???) be possible again.

One error persists - the "DC" indicator (data concentrator?) on the CDS medm screen for the various models spontaneously go red and return to green often. Is this a known issue with an easy fix?

ELOG V3.1.3-