ID | Date | Author | Type | Category | Subject
  2023 | Tue Sep 29 22:51:20 2009 | Koji | Update | MZ | Possible gain mis-calibration at other places (Re: MZ work done)

Probably there is the same mistake for the PMC gain slider. Possibly on the FSS slider, too???

Quote:

2) The EPICS setting for the MZ gain slider was totally wrong.
    Today I learned from the circuit that the full scale of the gain slider C1:PSL-MZ_GAIN gives us +/-10V at the DAC.
    This yields +/-1V at V_ctrl of the AD602 after the internal 1/10 attenuation stage.
    This +/-1V doesn't correspond to -10dB ~ +30dB, but to -22dB ~ +42dB, which is beyond the spec of the chip.
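For reference, a quick numeric check of the slider calibration described above, assuming the AD602 gain law G[dB] = 32 dB/V * V_ctrl + 10 dB from the datasheet and the 1/10 attenuation stage mentioned in the quote (the function name is illustrative only):

# Sanity check of the quoted slider calibration (not the EPICS database itself).
# AD602 gain law: G[dB] = 32 dB/V * V_ctrl + 10 dB, specified for V_ctrl in
# [-0.625 V, +0.625 V], i.e. -10 dB to +30 dB.

def ad602_gain_db(dac_volts, attenuation=10.0, slope_db_per_v=32.0, offset_db=10.0):
    """Gain implied by a DAC voltage after the internal 1/10 attenuation stage."""
    v_ctrl = dac_volts / attenuation
    return slope_db_per_v * v_ctrl + offset_db

for v in (-10.0, +10.0):   # full-scale slider -> +/-10 V at the DAC
    print(f"DAC {v:+5.1f} V -> V_ctrl {v/10:+.2f} V -> {ad602_gain_db(v):+.0f} dB")
# prints -22 dB and +42 dB, i.e. beyond the -10 to +30 dB range of the chip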

  9330 | Sat Nov 2 19:36:15 2013 | Charles | Update | General | Possible misalignment?

 I was working on the electronics bench and what sounded like a huge truck rolled by outside. I didn't notice anything until now, but it looks like something became misaligned when the truck passed by (~6:45-6:50 pm). I can hear a lot of noise coming out of the control room speakers and pretty much all of the IOO plots on the wall have sharp discontinuities.

I haven't been moving around much for the past 2 hours so I don't think it was me, but I thought it was worth noting.

  3238 | Fri Jul 16 16:07:14 2010 | josephb | Update | Computers | Possible solution for the last ADC

After talking with Jenne, I realized the ADC card in the c1ass machine was currently going unused.  As we are short an ADC card, a possible solution is to press that card into service.  Unfortunately, it's currently on a PMC to PCI adapter, rather than a PMC to PCIe adapter.  The two options I have are to try to find a different adapter board (I was handed 3 for RFM cards, so it's possible there's another spare over in Downs - unfortunately I missed Jay when I went over at 2:30 to check).  The other option is to put it directly into a computer, the only option being megatron, as the other machines don't have a full-length PCI slot.

I'm still waiting to hear back from Alex (who is in Germany for the next 10 days) whether I can connect both in the computer as well as with the IO chassis.

So to that end, I briefly turned off the c1ass machine, and pulled the card.  I then turned it back on, restarted all the code as per the wiki instructions, and had Jenne go over how it looked with me, to make sure everything was ok.

There is something odd with some of the channels reading 1e20 from the RFM network.  I believe this is related to those particular channels not being refreshed by their source (which is other suspension front end machines), so it's just sitting at a default until the channel value actually changes.

 

 

  13609 | Tue Feb 6 11:13:26 2018 | gautam | Update | ALS | Possible source of ground loop identified

I think I've narrowed down the source of this ground loop. It originates from the fact that the DAC from which the signals for this board are derived sits in an expansion chassis in 1Y3, whereas the LSC electronics are all in 1Y2.

  • I pulled the board out and looked at the output of Ch8 on the oscilloscope with the board powered by a bench power supply - the signal looked clean, no evidence of the noisy ~20mVpp signal I mentioned in my previous elog.
  • Put the board back into a different slot in the eurocrate chassis and looked at the signal from Ch8 - looked clean, so the ground of the eurocrate box itself isn't to blame.
  • Put the board back in its original slot and looked at the signal from Ch8 - the same noisy signal of ~20mVpp I saw yesterday was evident again.
  • Disconnected the backplane connector which routes signals from the DAC adaptor box to the D000316 board - the noisy signal vanished.

Looking at Jamie's old elog from the time when this infrastructure was installed, there is a remark that the signal didn't look too noisy - so either this is a new problem, or the characterization back then wasn't done in detail. The main reason why I think this is non-ideal is that the tip-tilt steering mirrors sending the beam into the IFO are controlled by analogous infrastructure - I confirmed using the LEMO monitor points on the D000316 that routes signals to TT1 and TT2 that those signals look similarly noisy (see e.g. Attachment #1). So we are injecting some amount (about 10% of the DC level) of beam jitter into the IFO because of this noisy signal - seems non-ideal. If I understand correctly, there are no damping loops on these suspensions which would suppress this injection.

How should we go about eliminating this ground loop?

 

Quote:
 

So either something is busted on this board (power regulating capacitor perhaps?), or we have some kind of ground loop between electronics in the same chassis (despite the D990694 having differential receiving inputs). Seems like further investigation is needed. Note that the D000316 just two boards over in the same Eurocrate chassis is responsible for driving our input steering mirror Tip-Tilt suspensions. I wonder if that board too is suffering from a similarly noisy ground?

 

Attachment 1: A68AF89C-E8A9-416D-BBD2-A1AD0A51E0B5.jpeg
  13612 | Tue Feb 6 22:55:51 2018 | gautam | Update | ALS | Possible source of ground loop identified

[koji, gautam]

We discussed possible solutions to this ground loop problem. Here's what we came up with:

  1. Option #1 - Configure the DAC card to receive a ground voltage reference from the same source as that which defines the LSC rack ground.
  2. Option #2 - construct a differential-to-single-ended receiving adapter, which we can then tack on to these boards.
  3. Option #3 - use the D000186-revD board as the receiver for the DAC signals - this looks to have differential receiving of the DAC signals (see secret schematic). We might want to modify the notches on these given the change in digital clock frequency.

Why do we care about this so much anyways? Koji pointed out that the tip tilt suspensions do have passive eddy current damping, but that presumably isn't very effective at frequencies in the 10Hz-1kHz range, which is where I observed the noise injection.

Note that all our SOS suspensions are also possibly being plagued by this problem - the AI board that receives signals is D000186, but not revision D I think. But perhaps for the SOS optics this isn't really a problem, as the expansion chassis and the coil driver electronics may share a common power source? 

gautam 1530 7 Feb: Judging by the footprint of the front panel connectors, I would say that the AI boards that receive signals from the DACs for our SOS suspended optics are of the Rev B variety, and so receive the DAC voltages single-ended. Of course, the real test would be to look inside these boards. But they certainly look distinct from the black front-panelled RevD variant linked above, which has differential inputs. Rev D uses OP27s, although Rana mentioned that the LT1125 isn't the right choice; from what I remember, the LT1125 is just a quad OP27...

  3735 | Mon Oct 18 15:33:00 2010 | josephb, alex | Update | CDS | Possibly broken timing cards

It looks as though we may have two IO chassis with bad timing cards.

Symptoms are as follows:

We can get our front end models writing data and timestamps out on the RFM network.

However, they get rejected on the receiving end because the timestamps don't match up with the receiving front end's timestamp.  Once started, the system is consistently off by the same amount. Stopping the front end module on c1ioo and restarting it generated a new consistent offset - say off by 29,000 cycles in the first case, and on restart we might be 11,000 cycles off.  Essentially, on start up, the IOP isn't using the 1PPS signal to determine when to start counting.

We tried swapping the spare IO chassis (intended for the LSC) in ....

# Joe will finish this in 3 days.

# Basically, in conclusion, in a word, we found that the c1ioo IO chassis is the bad one.

  7852 | Tue Dec 18 16:37:17 2012 | Jamie | Update | Alignment | Post vent, pre door removal alignment

[Jenne, Manasa, Jamie]

Now that we're up to air we relocked the mode cleaner, tweaked up the alignment, and looked at the spot positions:

mcspot_post_vent.pdf

The measurements from yesterday were made before the input power was lowered.  It appears that things have not moved by that much, which is pretty good.

We turned on the PZT1 voltages and set them back to their nominal values as recorded before shut-down yesterday.  Jenne had centered IPPOS before shutdown (IPANG was unfortunately not coming out of the vacuum).  Now we're at the following value: (-0.63, 0.66).  We need to calibrate this to get a sense of how much motion this actually is, but this is not insignificant.

 

  8989 | Thu Aug 8 21:25:36 2013 | Koji | Update | General | Post-vent alignment cont'd

- IPANG aligned on the QPD. The beam seems to be partially clipped in the chamber.

- Oplev of the IFO mirrors are aligned.

- After the oplev alignment, ITMX Yaw oplev servo started to oscillate. Reduced the gain from -50 to -20.

  8990 | Fri Aug 9 16:49:35 2013 | Jenne, manasa | Update | Electronics | Post-vent alignment cont'd - RFPDs

Notes to the fiber team:

I am aligning beam onto the RFPDs (I have finished all 4 REFL diodes, and AS55), in preparation for locking. 

In doing so, I have noticed that the fiber lasers for the RFPD testing are always illuminating the photodiodes!  This seems bad!  Ack!  

For now, I blocked the laser light coming from the fiber, did my alignment, then removed my blocks.  The exception is REFL55, where I have left an aluminum beam dump in place, so that we can use REFL55 for PRM-ITMY locking while I align the POP diodes.

EDIT:  I have also aligned POP QPD, and POP110/22.  The fiber launcher for POP110 was not tight in its mount, so when I went to put a beam block in front of it and touched the mount, the whole thing spun a little bit.  Now the fiber to POP110 is totally misaligned, and should be realigned.

What was done for the alignment:

1. Aligned the arms (ran ASS).

2. Aligned the beam to all the REFL and AS PDs. 

3. Misaligned the ETMs and ITMX. 

4. Locked PRM+ITMY using REFL11.
The following were modified to enable locking
(1) PRCL gain changed from +2.0 to -12.
(2) Power normalization matrix for PRCL changed from +10.0 to 0.
(3) FM3 in PRCL servo filter module was turned OFF.

5. POP PDs were aligned.

  15301 | Mon Apr 13 15:28:07 2020 | Koji | Update | General | Power Event and recovery

[Larry (on site), Koji & Gautam (remote)]

Network recovery (Larry/KA)

  • Asked Larry to get into the lab. 

  • 14:30 Larry went to the lab office area. He restarted (power cycled) the edge-switch (on the rack next to the printer). This recovered the ssh-access to nodus. 

  • Also Larry turned on the CAD WS. Koji confirmed the remote access to the CAD WS.

Nodus recovery (KA)

  • Apr 12, 22:43 nodus was restarted.

  • Apache (dokuwiki, svn, etc.) recovered by running the systemctl command described on the wiki

  • ELOG recovered by running the script

Control Machines / RT FE / Acromag server Status

  • Judging by uptime, basically only the machines that are on UPS (all control room workstations + chiara) survived the power outage. All RT FEs are down. Apart from c1susaux, the acromag servers are back up (but the modbus processes have NOT been restarted yet). Vacuum machine is not visible on the network (could just be a networking issue and the local subnet to valves/pumps is connected, but no way to tell remotely).

  • KA imagines that FB took some finite time to come up, but the RT machines need FB to be up in order to download their OS, which would have left the RT machines down. If so, what we need to do is power cycle them.

  • Acromag: unknown state

The power was lost at Apr 12 22:39:42, according to the vacuum pressure log. The power loss was for a few min.

  9746 | Mon Mar 24 19:42:12 2014 | Charles | Frogs | VAC | Power Failure

 The 40m experienced a building-wide power failure for ~30 seconds at ~7:38 pm today.

Thought that might be important...

  9747 | Mon Mar 24 21:36:28 2014 | Koji | Update | General | Power Failure

I'm checking the status from home.

P1 is 8e-4 torr

nodus did not feel the power outage (is it UPS supported?)

linux1 booted automatically

c1ioo booted automatically.

c1sus, c1lsc, c1iscex, c1iscey need manual power button push.

  9748 | Mon Mar 24 22:13:37 2014 | steve | Update | General | Power Failure

Quote:

I'm checking the status from home.

P1 is 8e-4 torr

nodus did not feel the power outage (is it UPS supported?)

linux1 booted automatically

c1ioo booted automatically.

c1sus, c1lsc, c1iscex, c1iscey need manual power button push.

 9:11pm closed PSL shutter, turned Innolight 2W laser on,

         turned 3 IFO air cond on,

        CC1 5.1e-5 torr, V1 is closed, Maglev has failed, valve configuration is "Vacuum Normal" with V1 & VM1 closed, RGA not running, c1vac1 and c1vac2 were saved by UPS,

        (Maglev is not connected to the UPS because it is running on 220V)

        reset & started Maglev.........I can not open V1 without the 40mars running...........

        Rossa is the only computer running in the control room,

        Nodus and Linux1 were saved by UPS,

        turned on IR lasers at the ends, green shutters are closed

It is safe to leave the lab as is.

     

Attachment 1: poweroutage.png
  9750 | Tue Mar 25 16:11:24 2014 | Koji | Update | General | Power Failure

As far as I know the system is running as usual. I had the IMC locked and one of the arms flashing.
But the other arm had no flash and none of the arms were locked before lunch time.



This morning Steve and I went around the lab to turn on the realtime machines.

Also we took advantage of this opportunity to shut down linux1 and nodus
to replace the extension cables for their AC power.

I also installed a 3TB hard disk on linux1. This was to provide a local daily copy of our
working area. But I could not make the disk recognized by the OS.
It seems that there is a "2TB" barrier: disks bigger than 2.2TB can't be recognized
by the older machines. I'll wait for the upgrade of the machine.

Rebooting the realtime machines did not help FB to talk with them. I fixed them.
Basically what I did was:

- Stop all of the realtime codes by running rtcds kill all on c1lsc, c1ioo, c1sus, c1iscex, c1iscey

- run sudo ntpdate -b -s -u pool.ntp.org on c1lsc, c1ioo, c1sus, c1iscex, c1iscey, and fb

- restart realtime codes one by one. I checked which code makes FB unhappy. But in reality
  FB was happy with all of them running.

Then slow machines except for c1vac1 and c1vac2 were burtrestored.

-------

Zach reported that svn was down. I went to the 40m wiki and searched "apache".
There are instructions there on how to restart apache.

  9752 | Wed Mar 26 11:30:07 2014 | Koji | Update | General | Power Failure

Recovery work: now arms are locking as usual

- FB is failing very frequently. Every time I see red signals in the CDS summary, I have to run "sudo ntpdate -b -s -u pool.ntp.org"

- PMC was aligned

- The main Marconi returned to initial state. Changed the frequency and amplitude to the nominal value labeled on the unit

- The SHG oven temp controllers were disabled. I visited all three units and pushed "enable" buttons.

- Y arm was immediately locked. It was aligned using ASS.

- X arm did not show any flash. I found that the scx model was not successfully burtrestored yesterday.
  The setting was restored using Mar 22 snapshot.

- After a little tweak of the ETMX alignment, a decent flash was achieved. But still it could not be locked.

- Ran s/LSC/LSCoffset.py. This immediately got the X arm locked.

- Checked the green alignment. The X arm green is beating with the PSL at  ~100MHz but is misaligned beyond the PZT range.
  The Y arm green is locked on TEM00 and is beating with the PSL at ~100MHz.

  12010 | Thu Feb 25 11:02:36 2016 | Steve | Update | General | Power Glitch again
Quote:

Chiara reports an uptime of >195 days, so its UPS is working fine.

FB, megatron, optimus booted via front panel button.

Jetstor RAID array (where the frames live) was beeping, since its UPS failed as well. The beep was silenced by clicking on "View Events/Mute Beeper" at 192.168.113.119 in a browser on a martian computer. I've started a data consistency check via the web interface, as well. According to the log, this was last done in July 2015, and took ~19 hrs.

Frontends powered up; models don't start automatically at boot anymore, so I ran rtcds start all on each of them. 

All frontends except c1ioo had a very wrong datetime, so I ran sudo ntpdate -b -s -u pool.ntp.org on all of them, and restarted the models (just updating the time isn't enough). There is an /etc/ntp.conf in the frontend filesystem that points to nodus, which is set up as an NTP server, but I guess this isn't working.

PMC locking was hindered by sticky sliders. I burtrestored the c1psl.snap from Friday, and the PMC locked up fine. (One may be fooled by the unchanged HV mon when moving the offset slider into thinking the HV KEPCO power supplies need to be brought down and up again, but it's just the sliders)

Mode cleaner manually locked and somewhat aligned. Based on my memory of PMC camera/transmission, the pointing changed; the WFS need a round of MC alignment and WFS offset setting, but the current state is fine for operation without all that. 

10:15 power glitch today. ETMX Lightwave and air conditioning turned back on.

Attachment 1: powerGlitch.png
  12011 | Thu Feb 25 11:32:04 2016 | gautam | Update | General | Power Glitch again

 

Quote:

10:15 power glitch today. ETMX Lightwave and air conditions turned back on

The CDS situation was not as catastrophic as the last time; it was sufficient for me to ssh into all the frontends and restart all the models. I also checked that monit was running on all the FEs and that there were no date/time issues like we saw last week. Everything looks to be back to normal now, except that the ntpd process being monitored on c1iscex says "execution failed". I tried restarting the process a couple of times, but each time it returns the same status after a few minutes.

  11993 | Tue Feb 16 15:02:19 2016 | ericq | Update | General | Power Glitch recovery

Chiara reports an uptime of >195 days, so its UPS is working fine.

FB, megatron, optimus booted via front panel button.

Jetstor RAID array (where the frames live) was beeping, since its UPS failed as well. The beep was silenced by clicking on "View Events/Mute Beeper" at 192.168.113.119 in a browser on a martian computer. I've started a data consistency check via the web interface, as well. According to the log, this was last done in July 2015, and took ~19 hrs.

Frontends powered up; models don't start automatically at boot anymore, so I ran rtcds start all on each of them. 

All frontends except c1ioo had a very wrong datetime, so I ran sudo ntpdate -b -s -u pool.ntp.org on all of them, and restarted the models (just updating the time isn't enough). There is an /etc/ntp.conf in the frontend filesystem that points to nodus, which is set up as an NTP server, but I guess this isn't working.

PMC locking was hindered by sticky sliders. I burtrestored the c1psl.snap from Friday, and the PMC locked up fine. (One may be fooled by the unchanged HV mon when moving the offset slider into thinking the HV KEPCO power supplies need to be brought down and up again, but it's just the sliders)

Mode cleaner manually locked and somewhat aligned. Based on my memory of PMC camera/transmission, the pointing changed; the WFS need a round of MC alignment and WFS offset setting, but the current state is fine for operation without all that. 

  11995 | Tue Feb 16 23:42:22 2016 | gautam | Update | General | Power Glitch recovery - arms recovered

 I was able to realign the arms, lock them, and have run the dither align to maximize IR transmission - looks like things are back to normal now. For the Y-end, I used the green beam initially to do some coarse alignment of the ITM and ETM, till I was able to see IR flashes in the control room monitors. I then tweaked the alignment of the tip-tilts till I saw TEM00 flashes, and then enabled LSC. Once the arm was locked, I ran the dither align. I then tweaked ITMX alignment till I saw IR flashes in the X arm as well, and was able to lock it with minimal tweaking of ETMX. The LSC actuation was set to ETMX when the models were restarted - I changed this to ITMX actuation, and now both arms are locked with nominal IR transmissions. I will center all the Oplev spots tomorrow before I start work on getting the X green back - I've left the ETM Oplev servos on for now.

While I was working, I noticed that frame builder was periodically crashing. I had to run mxstream restart a few times in order to get CDS back to the nominal state. I wonder if this is a persistent effect of the date/time issues we were seeing earlier today?

  2645 | Sun Feb 28 16:45:05 2010 | rana | Summary | General | Power ON Recovery
  1. Turned ON the RAID above linux1.
  2. Hooked up a monitor and keyboard and then turned ON linux1.
  3. After linux1 booted, turned ON nodus - then restarted apache and elog on it using the wiki instructions.
  4. Turned on all of the control room workstations, tuned Pandora to Johnny Cash, started the auto package updater on Rosalba (517 packages).
  5. Started the startStrip script on op540m.
  6. turned on RAID for frames - wait for it to say 'SATA', then turn on daqctrl and then fb40m and then daqawg and then dcuepics
  7. turned on all the crates for FEs, the Sorensens and Kepcos for LSC, and op340m; mafalda was already on
  8. fb40m again doesn't mount the RAID!
  9. I turned on fb40m2 and that fixes the problem. The fb40m /etc/vfstab points to 198.168.1.2, not the JetStor IP address.
  10. I plugged in the Video Switch - its power cord was disconnected.
  11. FEs still timing out saying 'no response from EPICS', but Alberto is now here.

Sun Feb 28 18:23:09 2010

Hi. This is Alberto. It's Sun Feb 28 19:23:09 2010

  1. Turned on c1dcuepics, c0daqctrl and c0daqawg. c0daqawg had a "bad" status on the daqdetail medm screen. The FEs still don't come up.
  2. Rebooted c1dcuepics and power cycled c0daqctrl and c0daqawg. The problem is still there.
  3. Turned on c1omc. Problem solved.
  4. Rebooted c1dcuepics and power cycled c0daqctrl and c0daqawg. c0daqawg now good. The FEs are coming up.
  5. Plugged in the laser for ETMY's oplev
  6. Turned on the laser of ETMX's oplev from its key.

 Monday, March 1, 9:00 2010 Steve turns on PSL-REF cavity ion pump HV at 1Y1

  14347 | Wed Dec 12 11:53:29 2018 | aaron | Update | General | Power Outage

At 11:13 am there was a ~2-3 second interruption of all power at the 40m.

I checked that nobody was in any of the lab areas at the time of the outage.

I walked along both arms of the 40m and looked for any indicator lights or unusual activity. I took photos of the power supplies that I encountered, attached. I tried to be somewhat complete, but didn't have a list of things in mind to check, so I may have missed something. 

I noticed an electrical buzzing that seemed to emanate from one of the AC adapters on the vacuum rack. I've attached a photo of which one; the buzzing changes when I touch the case of the adapter. I did not modify anything on the vacuum rack. There is also 

Most of the cds channels are still down. I am going through the wiki for procedures on what to log when the power goes off, and will follow the procedures here to get some useful channels.

Attachment 1: IMG_0033.HEIC
Attachment 2: IMG_1027.HEIC
Attachment 3: IMG_2605.HEIC
  14349 | Thu Dec 13 01:26:34 2018 | gautam | Update | General | Power Outage recovery

[koji, gautam]

After several combinations of soft/hard reboots for FB, FEs and expansion chassis, we managed to recover the nominal RTCDS status post power outage. The final reboots were undertaken by the rebootC1LSC.sh script while we went to Hotel Constance. Upon returning, Koji found all the lights to be green. Some remarks:

  1. It seems that we need to first turn on FB
    • Manually start the open-mx and mx services using
      sudo systemctl start open-mx.service 
      sudo systemctl start mx.service
    • Check that the system time returned by gpstime matches the gpstime reported by internet sources.
    • Manually start the daqd processes using
      sudo systemctl start daqd_*
  2. Then fully power cycle (including all front and rear panel power switches/cables) the FEs and the expansion chassis.
    • This seems to be a necessary step for models run on c1sus (as reported by the CDS MEDM screen) to pick up the correct system time (the FE itself seems to pick up the correct time, not sure what's going on here).
    • This was necessary to clear 0x4000 errors.
  3. Power on the expansion chassis.
  4. Power on the FE.
  5. Start the RTCDS models in the usual way
    • For some reason, there is a 1 second mismatch between the gpstime returned on the MEDM screen for a particular CDS model status, and that in the terminal for the host machine.
    • This in itself doesn't seem to cause any timing errors. But see remark about c1sus above in #2.

The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.

  14351 | Thu Dec 13 12:06:35 2018 | gautam | Update | General | Power Outage recovery

I did a walkaround and checked the status of all the interlock switches I could find based on the SOP and interlock wiring diagram, but the PSL remains interlocked. I don't want to futz around with AC power lines so I will wait for Koji before debugging further. None of the "Danger" signs at the VEA entry points are on, suggesting to me that the problem lies pretty far upstream in the wiring, possibly at the AC line input. The red lights around the PSL enclosure, which are supposed to signal if the enclosure doors are not properly closed, also do not turn on, supporting this hypothesis...

I confirmed that there is nothing wrong with the laser itself - I manually shorted the interlock pins on the rear of the controller and the laser turned on fine, but I am not comfortable operating in this hacky way so I have restored the interlock connections until we decide the next course of action...

Quote:
 

The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.

  14353 | Thu Dec 13 20:10:08 2018 | Koji | Update | General | Power Outage recovery

[Gautam, Aaron, Koji]

The PSL interlock system was fixed and now the 40m lab is laser hazard as usual.


- The schematic diagram of the interlock system D1200192
- We have opened the interlock box. Immediately we found that the DC switching supply (OMRON S82K-00712) is not functioning anymore.  (Attachment #1)
- We could not remove the module as the power supply was attached to the DIN rail. We decided to leave the broken supply there (it is still AC powered with no DC output).

- Instead, we brought a DC supply adapter from somewhere and chopped off its connector so that we could hook it up to the crimping-type quick connects. In Attachment #1, the gray wire is +12V, and the orange and black lines are GND.

- Upon inspection, the wires of the "door interlock reset button" had fallen off and the momentary switch (GRAYHILL 30-05-01-502-03) was broken. So it was replaced with another momentary switch, which is unfortunately way smaller than the original. (Attachments 2 and 3)

- Once the DC supply adapter was plugged into an AC tap, we heard the sounds of the relays working, and we recovered the laser hazard lamps and the PSL door alarm lamps. It was also confirmed that the PSL Innolight is operable now.

- BTW, there is a big switch box on the wall close to the PSL enclosure. Some of the green lamps were out. We found that we have plenty of spare lamps and relays inside the box. So we replaced the bulbs and now the AC lights are functioning. (Attachments 4 & 5)

Attachment 1: OMRON_S82K-00712.JPG
Attachment 2: reset_button_repaired1.JPG
Attachment 3: reset_button_repaired2.JPG
Attachment 4: gray_box.JPG
Attachment 5: gray_box2.JPG
  6141 | Wed Dec 21 04:29:01 2011 | kiwamu | Update | Green Locking | Power Recycled Single Arm

I made the first trial of locking a Power-recycled single arm.

 This is NOT work in the main stream,

but it gives us some prospects towards the full lock and perhaps some useful thoughts.

 

      Optical Configuration         

  • Y arm and PRM aligned. They form a three-mirror coupled optical cavity
    • The Power Recycling Cavity (PRC) is kept at anti-resonance for the carrier when the arm length is off from the resonance point
    • Hence bringing the arm length to the resonance point lets the carrier resonate in the coupled cavity
    • The BS behaves as a loss term in the PRC and hence results in a low recycling gain
  • Everything else is misaligned, including ITMX, ETMX, SRM and BS
    • Therefore there is no Michelson, X arm, or Signal Recycling Cavity (SRC)

   Lock Acquisition Steps    

  1. Misalign PRM such that there is only Y arm flashing at 1064 nm
  2. Do ALS and bring the arm length to the resonance point
  3. Record the beat-note frequency such that we can go back to this resonance point later
  4. Displace the arm length by 13 nm, corresponding to a frequency shift of 200 kHz in the green beat note (see the quick check after this list)
  5. Restore the alignment of PRM.
  6. Lock PRC to the carrier anti-resonance condition using REFL33I. At this point the arm doesn't disturb the lock because it is off from the resonance anyway
  7. Reduce the displacement in the arm and bring it back to the resonance
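A quick check of the number in step 4: for the frequency-doubled beat, the shift is (c / 532 nm) x dL / L. The arm length of ~37.8 m is an assumption of this sketch, not something stated in the entry:

# Rough check of the 13 nm <-> 200 kHz conversion quoted in step 4 above.
# The arm length (~37.8 m) and the use of the green (doubled) beat are assumptions here.
c = 299_792_458.0      # m/s
lam_green = 532e-9     # m, doubled 1064 nm
L_arm = 37.8           # m, approximate 40m arm length (assumed)
dL = 13e-9             # m, displacement from step 4

df_green = (c / lam_green) * dL / L_arm
print(df_green)        # ~1.9e5 Hz, i.e. roughly the quoted 200 kHz green beat shift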

 

     Actual Time Series     

Below is a plot of the actual lock acquisition sequence in time series.

time_series.png

  • The data starts from the time when the arm length was kept at the resonance point by the  ALS servo.
    • At this point PRM was still misaligned.
  • At 120 sec, the arm length started to be displaced off from the resonance point.
  • At 250 sec, the alignment of PRM was restored and the normalized DC reflection went to 1.
    • Error signals of PRC showed up in both REFL33 and POY11
  • At 260 sec, PRC was locked to the carrier anti-resonance point using the REFL33_I signal.
    • Both REFL33 and POY11 became quiet.
    • REFLDC started staying at 1, because the carrier doesn't enter the cavities and directly goes back to the REFL port.
  • At 300 sec, the arm length started to be brought to the resonance point.
  • At 400 sec, the arm length got back to the resonance point.
    • The intracavity power went to 3.5 or so
    • REFLDC went down a bit because some part of the light started entering the cavities
    • REFL33 became noisier, possibly because the Y arm length error signal leaked into it.
  6144 | Wed Dec 21 16:55:30 2011 | kiwamu | Update | Green Locking | Power Recycled Single Arm
 I did some brief parameter checks for the power-recycled single arm work I did yesterday.
The purpose is to make sure that the interferometer and I weren't crazy.
So far the measured quantities look reasonable.
  

         Assumptions on the parameter estimations          

   No losses.
   Tprm = 0.05637
   Titm =0.01384
   Tetm = 15 ppm
   Tbs = 0.5
 
        Parameter estimations and comparison with measurement      
   Recycling gain G = Tprm / (1 - ritm * rprm * Tbs)^2 = 0.21
   Amplitude reflectivity of the arm rarm =   (retm - ritm) / (1 - ritm * retm) = 0.99785
   Effective ITM's amplitude reflectivity ritm' = ( ritm + rprm * Tbs) / (1 + ritm * rprm * Tbs) = 0.9976
   Arm finesse = pi * sqrt (ritm' * retm) / (1 - ritm' * retm) = 1298
 
  + Power build up from single arm to power-recycled arm = G / Tprm = 3.73
      => measured value is 3.8 at maximum
 
  + Reflectance of the coupled cavity R = (rprm - rarm * Tbs)^2 / (1 - rprm * rarm * Tbs)^2 = 0.841
     => measured value was about 0.85 at minimum
 
 
  + Cavity full linewidth = lambda / arm_finesse / 2 = 0.41 nm
     => narrower than that of the usual single arm by a factor of 2.9
     => I guess this was the reason why the intracavity power looked more fluctuating after everything was locked
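A minimal numpy sketch that reproduces the estimates above under the stated lossless assumptions (amplitude reflectivities r = sqrt(1 - T), formulas as written above):

import numpy as np

# Lossless assumptions from the table above
T_prm, T_itm, T_etm, T_bs = 0.05637, 0.01384, 15e-6, 0.5
r_prm, r_itm, r_etm = (np.sqrt(1 - T) for T in (T_prm, T_itm, T_etm))

G         = T_prm / (1 - r_itm * r_prm * T_bs)**2                         # ~0.21
r_arm     = (r_etm - r_itm) / (1 - r_itm * r_etm)                         # ~0.998
r_itm_eff = (r_itm + r_prm * T_bs) / (1 + r_itm * r_prm * T_bs)           # ~0.9975
finesse   = np.pi * np.sqrt(r_itm_eff * r_etm) / (1 - r_itm_eff * r_etm)  # ~1.3e3
buildup   = G / T_prm                                                     # ~3.7 (measured 3.8)
R_cc      = (r_prm - r_arm * T_bs)**2 / (1 - r_prm * r_arm * T_bs)**2     # ~0.84 (measured ~0.85)
linewidth = 1064e-9 / finesse / 2                                         # ~0.4 nm full linewidth

print(G, r_arm, r_itm_eff, finesse, buildup, R_cc, linewidth)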

Quote from #6141

I made the first trial of locking a Power-recycled single arm.

 

  8927 | Fri Jul 26 14:39:08 2013 | Charles | Update | ISS | Power Regulation for ISS Board

I constructed a regulator board that can take ±24 V and supply a regulated ±15 V or ±5 V. I followed the schematics from LIGO-D1000217-v1.

I was going to make 2 boards, one for ±15 V and one for ±5 V, but Chub just gave me a second assembled board when I asked him for the parts to construct it.

 

  9355 | Wed Nov 6 15:57:22 2013 | Jenne | Update | LSC | Power Supply solution

We have decided that, rather than replacing the power source for the amplifiers that are on the rack, and leaving the Thorlabs PD as POP22/110, we will remove all of the temporary elements, and put in something more permanent.

So, I have taken the broadband PDs from Zach's Gyro experiment in the ATF.  We will figure out what needs to be done to modify these to notch out unwanted frequencies, and amplify the signal nicely.  We will also create a pair of cables - one for power from the LSC rack, and one for signal back to the LSC rack.  Then we'll swap out the currently installed Thorlabs PD and replace it with a broadband PD.

  15356 | Tue May 26 16:00:06 2020 | gautam | Update | LSC | Power buildup diagnostics

Summary:

I looked at some DC signals for the buildup of the carrier and sideband fields in various places. The results are shown in Attachments #1 and #2.

Details:

  • A previous study may be found here.
  • For the carrier field, REFL, POP and TRX/TRY all show the expected behavior. In particular, the REFL/TRX variation is consistent with the study linked in the previous bullet.
  • There seems to be some offset between TRX and TRY - I don't yet know if this is real or just some PD gain imbalance issue.
  • The 1-sigma variation in TRX/TRY seen here is consistent with the RMS RIN of 0.1 evaluated here.
  • For the sideband powers, I guess the phasing of the POP22 and AS110 photodiodes should be adjusted? These are proxies for the buildup of the 11 MHz and 55 MHz sidebands in the vertex region, and so shouldn't depend on the arm offset; hence adjusting the digital demod phases shouldn't affect the LSC triggering for the PRMI locking, I think.
  • Based on this data, the recycling gain for the carrier is ~12 +/- 2, so still undercoupled. In fact, at some points I saw the transmitted power exceed 300, which would be a recycling gain of ~17, nearly the point of critical coupling (see the sketch below). REFLDC doesn't hit 0 because of the mode mismatch, I guess.
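As a rough cross-check of the numbers above - assuming TRX/TRY are normalized to the single-arm lock with the PRM misaligned, so that PRG ~ TR x T_PRM, and taking T_PRM ~ 5.6% from older 40m numbers (both are assumptions of this sketch):

# Hypothetical mapping from normalized arm transmission to carrier recycling gain,
# assuming TR is normalized to the single-arm lock with PRM misaligned.
T_PRM = 0.0564   # assumed PRM power transmission

def prg_from_tr(tr_normalized):
    return tr_normalized * T_PRM

print(prg_from_tr(300))   # ~17, the "nearly critically coupled" excursion mentioned above
print(prg_from_tr(215))   # ~12, roughly the typical recycling gain quoted above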
Attachment 1: PRFPMIcorner_DC_1274419354_1274419654.pdf
Attachment 2: PRFPMIcorner_SB_1274419354_1274419654.pdf
  15358 | Wed May 27 17:41:57 2020 | Koji | Update | LSC | Power buildup diagnostics

This is very interesting. Do you have the ASDC vs PRG (~ TRX or TRY) plot? That gives you insight into what is causing the low recycling gain.

  15367 | Wed Jun 3 02:08:00 2020 | gautam | Update | LSC | Power buildup diagnostics

Attachments #1 and #2 are in the style of elog 15356, but with data from a more recent lock. It'd be nice to calibrate the ASDC channel (and in general all channels) into power units, so we have an estimate of how much sideband power we expect, and the rest can be attributed to carrier leakage to ASDC.

On the basis of Attachment #1, the PRG is ~19, and at times the arm transmission goes even higher. I'd say we are now in the regime where the uncertainty in the recycling cavity losses - maybe beamsplitter clipping? - matters when using this info to try and constrain the arm cavity losses. I'm also not sure what to make of the asymmetry between TRX and TRY. Allegedly, the Y arm is supposed to be lossier.

Quote:

This is very interesting. Do you have the ASDC vs PRG (~ TRX or TRY) plot? That gives you insight into what is causing the low recycling gain.

Attachment 1: PRFPMIcorner_DC_1275190251_1275190551.pdf
Attachment 2: PRFPMIcorner_SB_1275190251_1275190551.pdf
  15019 | Wed Nov 6 20:34:28 2019 | Koji | Update | IOO | Power combiner loss (EOM resonant box installed)

Gautam and I were talking about some modulation and demodulation and wondered what the power-combining situation is for the triple-resonant EOM installed 8 years ago. We noticed that the current setup has an additional ~5 dB of loss associated with the 3-to-1 power combiner. (Figure a)

N-to-1 broadband power combiners have an intrinsic loss of 10 log10(N). You can think about the reciprocal process (power splitting) (Figure b). A 2W input coming into the 2-port power splitter gives us two 1W outputs. The opposite process is power combining, as shown in Figure c. In this case, the two identical signals are constructively added in the combiner, but the output is not 20Vpk but 14Vpk. Considering the linearity, when one of the ports is terminated, the output power is going to be half. So we expect 27dBm output for a 30dBm input (Figure d). This fact is frequently overlooked, particularly when one combines signals at multiple frequencies (Figure e). We can avoid this kind of loss by using a frequency-dependent power combiner like a diplexer or a triplexer.
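A small numeric sketch of the bookkeeping above, assuming an ideal, matched broadband combiner (the function is illustrative only):

import math

def combiner_intrinsic_loss_db(n_ports):
    """Intrinsic loss of an ideal N-to-1 broadband power combiner."""
    return 10 * math.log10(n_ports)

print(combiner_intrinsic_loss_db(3))       # ~4.8 dB, the "~5 dB" quoted for the 3-to-1 combiner

# Two identical coherent signals into an ideal 2-to-1 combiner: each path scales
# the amplitude by 1/sqrt(2), so 10 Vpk + 10 Vpk sums to sqrt(2)*10 ~ 14 Vpk, not 20 Vpk.
print(math.sqrt(2) * 10.0)                 # ~14.1 Vpk

# A single 30 dBm tone with the other port terminated loses 10*log10(2) ~ 3 dB:
print(30 - combiner_intrinsic_loss_db(2))  # ~27 dBm, as in Figure d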

Attachment 1: power_combiner.pdf
  15955 | Tue Mar 23 09:16:42 2021 | Paco, Anchal | Update | Computers | Power cycled C1PSL; restored C1PSL

So actually, it was the C1PSL channels that had died. We did the following to get them back:

  • We went to this page and tried the telnet procedure. But it was unable to find the host.
  • So we followed the next advice. We went to the 1X1 rack and manually hard shut off the C1PSL computer by holding down the power button until the LEDs went off.
  • We waited 5-7 seconds and switched it back on.
  • By the time we were back in control room, the C1PSL channels were back online.
  • The mode cleaner however was struggling to keep the lock. It was going in and out of lock.
  • So we followed the next advice and did burt restore which ran following command:
    burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/22/17:19/c1psl.snap -l /tmp/controls_1210323_085130_0.write.log -o /tmp/controls_1210323_085130_0.nowrite.snap -v 
  • Now the mode cleaner was locked, but we found that the input switches of the C1IOO-WFS1_PIT and C1IOO-WFS2_PIT filter banks were off, which meant that only the YAW sensors were in loop in the lock.
  • We went back in dataviewer and checked when these channels were shut down. See attachments for time series.
  • It seems this happened yesterday, March 22nd near 1:00 pm (20:00:00 UTC). We can't find any mention of anyone else doing it on elog and we left by 12:15pm.
  • So we shut down the PSL shutter (C1:PSL-PSL_ShutterRqst) and switched off MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Switched on C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1.
  • Turned back on PSL shutter (C1:PSL-PSL_ShutterRqst) and MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Mode cleaner locked back easily and now is keeping lock consistently. Everything looks normal.
Attachment 1: MCWFS1and2PITYAW.pdf
Attachment 2: MCWFS1and2PITYAW_Zoomed.pdf
  13034 | Fri Jun 2 12:32:16 2017 | gautam | Update | General | Power glitch

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

  13035 | Fri Jun 2 16:02:34 2017 | gautam | Update | General | Power glitch

Today's recovery seems to be a lot more complicated than usual.

  • The vertex area of the lab is pretty warm - I think the ACs are not running. The wall switch-box (see Attachment #1) shows some red lights which I'm pretty sure are usually green. I pressed the push-buttons above the red lights; hopefully this fixed the AC and the lab will cool down soon.
  • Related to the above - C1IOO has a bunch of warning orange indicator lights ON that suggest it is feeling the heat. Not sure if that is why, but I am unable to bring any of the C1IOO models back online - the rtcds compilation just fails, after which I am unable to ssh back into the machine as well.
  • C1SUS was problematic as well. I found that the expansion chassis was not powered. Fortunately, this was fixed by simply switching to the one free socket on the power strip that powers a bunch of stuff on 1X4 - this brought the expansion chassis back alive, and after a soft reboot of c1sus, I was able to get these models up and running. Fortunately, none of the electronics seem to have been damaged. Perhaps it is time for surge-protecting power strips inside the lab area as well (if they aren't already)? 
  • I was unable to successfully resolve the dmesg problem alluded to earlier. Looking through some forums, I gather that the output of dmesg should be written to a file in /var/log/. But no such file exists on any of our 5 front-ends (though it does on Megatron, for example). So is this way of setting up the front end machines deliberate? Why does this matter? Because it seems that the buffer which we see when we simply run "dmesg" on the console gets periodically cleared. So sometime back, when I was trying to verify that the installed DACs are indeed 16-bit DACs by looking at dmesg, running "dmesg | head" showed a first line that was written well after the last reboot of the machine. Anyway, this probably isn't a big deal, and I also verified during the model recompilation that all our DACs are indeed 16-bit.
  • I was also trying to set up the Upstart processes on megatron such that the MC autolocker and FSS slow control scripts start up automatically when the machine is rebooted. But since C1IOO isn't co-operating, I wasn't able to get very far on this front either...

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

GV Jun 5 6pm: From my discussion with Jamie, I gather that the dmesg output is not written to file because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically).

 

Quote:

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

 

Attachment 1: IMG_7399.JPG
  13036 | Fri Jun 2 22:01:52 2017 | gautam | Update | General | Power glitch - recovery

[Koji, Rana, Gautam]

Attachment #1 - CDS status at the end of today's efforts. There is one red indicator light showing an RFM error which couldn't be fixed by running the "global diag reset" or "mxstream restart" scripts, but getting to this point was a journey so we decided to call it for today.


The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:

  1. Killed all models on the four front ends other than c1ioo.
  2. Hard reboot for c1ioo - at this point, we could ssh into c1ioo. With all other models killed, we restarted the c1ioo models one by one. They all came online smoothly.
  3. We then set about restarting the models on the other machines.
    • We started with the IOP models, and then restarted the others one by one
    • We then tried running "global diag reset", "mxstream restart" and "telnet fb 8087 -> shutdown" to get rid of all the red indicator fields on the CDS overview screen.
    • All models came back online, but the models on c1sus indicated a DC (data concentrator?) error. 
  4. After a few minutes, I noticed that all the models on c1iscex had stalled
    • dmesg pointed to a synchronization error when trying to initialize the ADC
    • The field that normally pulses at ~1pps on the CDS overview MEDM screen when the models are running normally was stuck
    • Repeated attempts to restart the models kept throwing up the same error in dmesg 
    • We even tried killing all models on all other frontends and restarting just those on c1iscex as detailed earlier in this elog for c1ioo - to no avail.
    • A walk to the end station to do a hard reboot of c1iscex revealed that both green indicator lights on the slave timing card in the expansion chassis were OFF.
    • The corresponding lights on the Master Timing Sequencer (which supplies the synchronization signal to all the front ends via optical fiber) were also off.
    • Some time ago, Eric and I had noticed a similar problem. Back then, we simply switched the connection on the Master Timing Sequencer to the one unused available port; this fixed the problem. This time, switching the fiber connection on the Master Timing Sequencer had no effect.
    • Power cycling the Master Timing Sequencer had no effect
    • However, switching the optical fiber connections going to the X and Y ends led to the green LED on the suspect port on the Master Timing Sequencer (originally the X end fiber was plugged in here) turning back ON when the Y end fiber was plugged in.
    • This suggested a problem with the slave timing card, and not the master. 
  5. Koji and I then did the following at the X-end electronics rack:
    • Shutdown c1iscex, toggled the switches in the front and back of the expansion chassis
    • Disconnect AC power from rear of c1iscex as well as the expansion chassis. This meant all LEDs in the expansion chassis went off, except a single one labelled "+5AUX" on the PCB - to make this go off, we had to disconnect a jumper on the PCB (see Attachment #2), and then toggle the power switches on the front and back of the expansion chassis (with the AC power still disconnected). Finally all lights were off.
    • Confident we had completely cut all power to the board, we then started re-connecting AC power. First we re-started the expansion chassis, and then re-booted c1iscex.
    • The lights on the slave timing card came on (including the one that pulses at ~1pps, which indicates normal operation)!
  6. Then we went back to the control room, and essentially repeated bullet points 2 and 3, but starting with c1iscex instead of c1ioo.
  7. The last twist in this tale was that though all the models came back online, the DC errors on c1sus models persisted. No amount of "mxstream restart", "global diag reset", or restarting fb would make these go away.
  8. Eventually, Koji noticed that there was a large discrepancy in the gpstimes indicated in c1x02 (the IOP model on c1sus), compared to all the other IOP models (even though the PDT displayed was correct). There were also a large number of IRIG-B errors indicated on the same c1x02 status screen, and the "TIM" indicator in the status word was red.
  9. Turns out, running ntpdate before restarting all the models somehow doesn't sync the gps time - so this was what was causing the DC errors. 
  10. So we did a hard reboot of c1sus (and for good measure, repeated the bullet points of 5 above on c1sus and its expansion chassis). Then, we tried starting the c1x02 model without running ntpdate first (on startup, there is an 8 hour mismatch between the actual time in Pasadena and the system time - but system time is 8 hours behind, so it isn't even somehow syncing to UTC or any other real timezone?)
    • Model started up smoothly
    • But there was still a 1 second discrepancy between the gpstime on c1x02 and all the other IOPs (and the 8 hour discrepancy between displayed PDT and actual time in Pasadena)
    • So we tried running ntpdate after starting c1x02 - this finally fixed the problem, gpstime and PDT on c1x02 agreed with the other frontends and the actual time in Pasadena.
    • However, the models on c1lsc and c1ioo crashed
    • So we restarted the IOPs on both these machines, and then the rest of the models.
  11. Finally, we ran "mxstream restart", "global diag reset", and restarted fb, to make the CDS overview screen look like it does now.

Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error? 

Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and also the PMC transmission is pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14500 counts, while we are used to more like 16,500 counts nowadays.

Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params. 

Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.

Attachment #4 - Warning lights on C1IOO

Quote:

Today's recovery seems to be a lot more complicated than usual.

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

 

Attachment 1: power_glitch_recovery.png
Attachment 2: IMG_7406.JPG
Attachment 3: IMG_7407.JPG
Attachment 4: IMG_7400.JPG
  13038 | Sun Jun 4 15:59:50 2017 | gautam | Update | General | Power glitch - recovery

I think the CDS status is back to normal.

  • Bit 2 of the C1RFM status word was red, indicating something was wrong with "GE FANUC RFM Card 0".
  • You would think the RFM errors occur in pairs, in C1RFM and in some other model - but in this case, the only red light was on c1rfm.
  • While trying to re-align the IFO, I noticed that the TRY time series flatlined at 0 even though I could see flashes on the TRANSMON camera.
  • Quick trip to the Y-End with an oscilloscope confirmed that there was nothing wrong with the PD.
  • I crawled through some elogs, but didn't really find any instructions on how to fix this problem - the couple of references I did find to similar problems reported red indicator lights occurring in pairs on two or more models, and the problem was then fixed by restarting said models.
  • So on a hunch, I restarted all models on c1iscey (no hard or soft reboot of the FE was required)
  • This fixed the problem
  • I also had to start the monit process manually on some of the FEs like c1sus. 

Now IFO work like fixing ASS can continue...

Attachment 1: powerGlitchRecovery.png
  4894 | Tue Jun 28 07:46:54 2011 | Suresh | Update | IOO | Power incident on REFL11 and REFL55

I measured the power incident on REFL11 and REFL55.  Steve was concerned that it is too high.  If we consider this elog, the incident power levels were REFL11: 30 mW and REFL55: 87 mW (assuming an efficiency of ~0.8 A/W @ 1064nm for the C30642 PD).  However, currently there is a combination of a polarising BS and a half-waveplate with which we have attenuated the power incident on the REFL PDs.  We now have (with the PRM misaligned):

REFL11:  Power incident = 7.60 mW ;  DC out = 0.330 V  => efficiency = 0.87 A/W

REFL55:  Power incident = 23 mW ;  DC out = 0.850 V  => efficiency = 0.74 A/W

and with the PRM aligned:

REFL11:  DC out = 0.35 V  => 8 mW is incident

REFL55: DC out = 0.975 V  => 26 mW is incident

These power levels may go up further when everything is working well.

The max rated photo-current is 100mA => max power 125mW @0.8 A/W.
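A small sketch of the arithmetic implied above; the ~50 Ohm DC transimpedance is an assumption of this sketch (it is not stated here, but it reproduces the quoted numbers):

# Implied responsivity from incident power and DC output, assuming the DC output
# is read across ~50 ohm (an assumption, not stated in this entry).
R_DC = 50.0   # ohm, assumed DC transimpedance

def responsivity_A_per_W(p_inc_mW, v_dc):
    return (v_dc / R_DC) / (p_inc_mW * 1e-3)

print(responsivity_A_per_W(7.60, 0.330))   # ~0.87 A/W (REFL11)
print(responsivity_A_per_W(23.0, 0.850))   # ~0.74 A/W (REFL55)

# With the PRM aligned, invert to get the incident power from the measured DC level:
print((0.350 / R_DC) / 0.87 * 1e3)         # ~8 mW on REFL11
print((0.975 / R_DC) / 0.74 * 1e3)         # ~26 mW on REFL55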

 

  4896 | Tue Jun 28 10:11:13 2011 | steve | Update | IOO | Power incident on REFL11 and REFL55

Quote:

I measured the power incident on REFL11 and REFL55.  Steve was concerned that it is too high.  If we consider this elog, the incident power levels were REFL11: 30 mW and REFL55: 87 mW (assuming an efficiency of ~0.8 A/W @ 1064nm for the C30642 PD).  However, currently there is a combination of a polarising BS and a half-waveplate with which we have attenuated the power incident on the REFL PDs.  We now have (with the PRM misaligned):

REFL11:  Power incident = 7.60 mW ;  DC out = 0.330 V  => efficiency = 0.87 A/W

REFL55:  Power incident = 23 mW ;  DC out = 0.850 V  => efficiency = 0.74 A/W

and with the PRM aligned:

REFL11:  DC out = 0.35 V  => 8 mW is incident

REFL55: DC out = 0.975 V  => 26 mW is incident

These power levels may go up further when everything is working well.

The max rated photo-current is 100mA => max power 125mW @0.8 A/W.

 

What is the power level on the MC_REFL PDs and WFS when the MC is not locked?

  3151 | Wed Jun 30 23:03:46 2010 | rana | Configuration | IOO | Power into MC restored to max

Kiwamu, Nancy, and I restored the power into the MC today:

  1. Changed the 2" dia. mirror ahead of the MC REFL RFPD back to the old R=10% mirror.
  2. Since the MC axis has changed, we had to redo the alignment of the optics in that area. Nearly all optics had to move by 1-2 cm.
  3. 2 of the main mounts there had the wrong handedness (e.g. the U100-A-LH instead of RH). We rotated them to some level of reasonableness.
  4. Tuned the penultimate waveplate on the PSL (ahead of the PBS) to maximize the transmission to the MC and to minimize the power in the PBS reject beam.
  5. MC_REFL DC  =1.8 V.
  6. Beams aligned on WFS.
  7. MC mirrors alignment tweaked to maximize transmission. In the morning we will check the whole A2L centering again. If it's OK, fine. Otherwise, we'll restore the bias values and align the PSL beam to the MC via the periscope.
  8. waveplates and PBS in the PSL were NOT removed.
  9. MC TRANS camera and QPD have to be recentered after we are happy with the MC axis.
  10. MC REFL camera has to be restored.
  11. WFS measurements will commence after the SURF reports are submitted.

We found many dis-assembled Allen Key sets. Do not do this! Return tools to their proper places or else you are just wasting everyone's time!

 

  4103 | Tue Jan 4 02:58:53 2011 | Jenne | Update | IOO | Power into Mode Cleaner increased

What was the point:

I twiddled with several different things this evening to increase the power into the Mode Cleaner.  The goal was to have enough power to be able to see the arm cavity flashes on the CCD cameras, since it's going to be a total pain to lock the IFO if we can't see what the mode structure looks like.

Summed-up list of what I did:

* Found the MC nicely aligned.  Did not ever adjust the MC suspensions.

* Optimized MC Refl DC, using the old "DMM hooked up to DC out" method.

* Removed the temporary BS1-1064-33-1025-45S that was in the MC refl path, and replaced it with the old BS1-1064-IF-2037-C-45S that used to be there.  This undoes the temporary change from elog 3878.  Note however, that Yuta's elog 3892 says that the original mirror was a 1%, not 10% as the sticker indicates. The temporary mirror was in place to get enough light to MC Refl while the laser power was low, but now we don't want to fry the PD.

* Noticed that the MCWFS path is totally wrong.  Someone (Yuta?) wanted to use the MCWFS as a reference, but the steering mirror in front of WFS1 was switched out, and now no beam goes to WFS2 (it's blocked by part of the mount of the new mirror). I have not yet fixed this, since I wasn't using the WFS tonight, and had other things to get done.  We will need to fix this.

* Realigned the MC Refl path to optimize MC Refl again, with the new mirror.

* Replaced the last steering mirror on the PSL table before the beam goes into the chamber from a BS1-1064-33-1025-45S to a Y1-45S.  I would have liked a Y1-0deg mirror, since the angle is closer to 0 than 45, but I couldn't find one.  According to Mott's elog 2392 the CVI Y1-45S is pretty much equally good all the way down to 0deg, so I went with it.  This undoes the change of keeping the laser power in the chambers to a nice safe ~50mW max while we were at atmosphere.

* Put the HWP in front of the laser back to 267deg, from its temporary place of 240deg.  The rotation was to keep the laser power down while we were at atmosphere.  I put the HWP back to the place that Kevin had determined was best in his elog 3818.

* Tried to quickly align the Xarm by touching the BS, ITMX and ETMX.  I might be seeing IR flashes on the Xarm CCD (I blocked the green beam on the ETMX table so I wouldn't be confused, and unblocked it before finishing for the night), but that might also be wishful thinking.  There's definitely something lighting up / flashing near the center of ETMX on the camera, but I can't decide if it's scatter off of part of the suspension tower or if it's really the resonance.  Note to self:  Rana reminds me that the ITM should be misaligned while using the BS to get the beam onto the ETM, and then the ETM to get the beam back onto the ITM; only then should I have realigned the ITM.  I had the ITM aligned (just left where it had been) the whole time, so I was making my life way harder than it should have been.  I'll work on it again more today (Tuesday).

What happened in the end:

The MC Trans signal on the MC Lock screen went up by almost an order of magnitude (from ~3500 to ~32,000).  When the count was near ~20,000 I could barely see the spot on a card, so I'm not worried about the QPD.  I do wonder, however, if we are saturating the ADC. Suresh changed the transimpedance of the MC Trans QPD a while ago (Suresh's elog 3882), and maybe that was a bad idea? 

Xarm not yet locked. 

Can't really see flashes on the Test Mass cameras. 
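
On the ADC saturation question above, a quick sanity check (a sketch only, assuming the usual 16-bit ADC so a channel clips near +/-32768 counts; the count values are the ones quoted in this entry):

ADC_FULL_SCALE = 2**15      # counts, assuming a 16-bit ADC (+/-32768)

def headroom(counts, full_scale=ADC_FULL_SCALE):
    # Fractional headroom left before the ADC clips.
    return 1.0 - abs(counts) / full_scale

old_counts = 3500     # MC trans before the power increase
new_counts = 32000    # MC trans after the power increase

print(f"increase in counts: x{new_counts / old_counts:.1f}")
print(f"headroom now:       {headroom(new_counts) * 100:.1f} %")   # ~2 %, i.e. essentially at the rail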

  4104   Tue Jan 4 11:06:32 2011   Koji   Update   IOO   Power into Mode Cleaner increased

- Previously, MC TRANS was 9000~10000 when the alignment was good. This means that the MC TRANS PD saturates if the full power is given (a quick scaling sketch follows after this list).
==> The transimpedance must be changed again.

- The Y1-45S has 4% transmission. We would definitely prefer a Y1-0 or something similar. The mirror that was swapped out must still be around somewhere;
I think Suresh replaced it, so he should remember where it is.

- We must confirm the beam pointing on the MC mirrors with A2L.

- We must check the MCWFS path alignment and configuration.

- We should take a picture of the new PSL setup in order to update the photo on the wiki.
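
A rough scaling sketch for the first point, assuming the PD/ADC chain is linear and that we just want the full-power MC TRANS reading to land back near the 9000~10000 counts that used to correspond to good alignment (the target value is my choice, not a requirement of the electronics):

saturated_counts = 32000   # reading reported after the power increase (essentially at the ADC rail)
target_counts = 10000      # counts that used to correspond to good alignment

# The PD output scales linearly with the transimpedance, so the gain has to
# come down by at least this factor:
reduction = saturated_counts / target_counts
print(f"reduce the MC TRANS QPD transimpedance by >= x{reduction:.1f}")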

Quote:

What was the point:

I twiddled with several different things this evening to increase the power into the Mode Cleaner.  The goal was to have enough power to be able to see the arm cavity flashes on the CCD cameras, since it's going to be a total pain to lock the IFO if we can't see what the mode structure looks like.

Summed-up list of what I did:

* Found the MC nicely aligned.  Did not ever adjust the MC suspensions.

* Optimized MC Refl DC, using the old "DMM hooked up to DC out" method.

* Removed the temporary BS1-1064-33-1025-45S that was in the MC refl path, and replaced it with the old BS1-1064-IF-2037-C-45S that used to be there.  This undoes the temporary change from elog 3878.  Note however, that Yuta's elog 3892 says that the original mirror was a 1%, not 10% as the sticker indicates. The temporary mirror was in place to get enough light to MC Refl while the laser power was low, but now we don't want to fry the PD.

* Noticed that the MCWFS path is totally wrong.  Someone (Yuta?) wanted to use the MCWFS as a reference, but the steering mirror in front of WFS1 was switched out, and now no beam goes to WFS2 (it's blocked by part of the mount of the new mirror). I have not yet fixed this, since I wasn't using the WFS tonight, and had other things to get done.  We will need to fix this.

* Realigned the MC Refl path to optimize MC Refl again, with the new mirror.

* Replaced the last steering mirror on the PSL table before the beam goes into the chamber from a BS1-1064-33-1025-45S to a Y1-45S.  I would have liked a Y1-0deg mirror, since the angle is closer to 0 than 45, but I couldn't find one.  According to Mott's elog 2392 the CVI Y1-45S is pretty much equally good all the way down to 0deg, so I went with it.  This undoes the change of keeping the laser power in the chambers to a nice safe ~50mW max while we were at atmosphere.

* Put the HWP in front of the laser back to 267deg, from its temporary place of 240deg.  The rotation was to keep the laser power down while we were at atmosphere.  I put the HWP back to the place that Kevin had determined was best in his elog 3818.

* Tried to quickly align the Xarm by touching the BS, ITMX and ETMX.  I might be seeing IR flashes on the Xarm CCD (I blocked the green beam on the ETMX table so I wouldn't be confused, and unblocked it before finishing for the night), but that might also be wishful thinking.  There's definitely something lighting up / flashing near the center of ETMX on the camera, but I can't decide if it's scatter off of part of the suspension tower or if it's really the resonance. 

What happened in the end:

The MC Trans signal on the MC Lock screen went up by almost an order of magnitude (from ~3500 to ~32,000).  When the count was near ~20,000 I could barely see the spot on a card, so I'm not worried about the QPD.  I do wonder, however, if we are saturating the ADC. Suresh changed the transimpedance of the MC Trans QPD a while ago (Suresh's elog 3882), and maybe that was a bad idea? 

Xarm not yet locked. 

Can't really see flashes on the Test Mass cameras. 

 

  7410   Wed Sep 19 13:12:48 2012   Jenne   Update   IOO   Power into vacuum increased to 75mW

The power buildup in the MC is ~400, so 100mW of incident power would give about 40W circulating in the mode cleaner.

Rana points out that the ATF had a 35W beam running around the table in air, with a much smaller spot size than our MC has, so 40W should be totally fine in terms of coating damage.

I have therefore increased the power into the vacuum envelope to ~75mW.  The MC REFL PD should be totally fine up to ~100mW, so 75mW is plenty low.  The MC transmission is now a little over 1000 counts.  I have changed the low power mcup script so that it no longer brings the VCO gain all the way up to 31dB.  Now it seems happy with a VCO gain of 15dB (the same setting as at normal power).
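
For reference, the arithmetic behind the buildup statement above, as a short sketch using the quoted buildup factor of ~400:

def mc_circulating_power_W(p_in_mW, buildup=400):
    # Circulating power in the MC for a given input power, using the
    # quoted power buildup factor of ~400.
    return p_in_mW * buildup / 1000.0

for p_in in (100, 75):      # mW incident on the MC
    print(f"{p_in} mW in -> ~{mc_circulating_power_W(p_in):.0f} W circulating")
# 100 mW -> ~40 W, 75 mW -> ~30 W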

  4958   Fri Jul 8 20:50:49 2011   sonali   Update   Green Locking   Power of the AUX laser increased.

The ETMY laser was operating at a current of 1.5 A with an output power of 197 mW.

For efficient frequency doubling of the AUX laser beam at the ETMY table, higher power is required.

Steve and I changed the laser current from 1.5 A to 2.1 A in steps of 0.1 A and noted the corresponding power output. The graph is attached.

The laser current has been set to 1.8 A. At this current, the power of the output beam measured just after the laser output is 390 mW.

The power of the beam being coupled into the optical fibre is measured to be between 159 mW and 164 mW (the power meter was showing fluctuating readings).

The power of the beam coming out of the far end of the fibre at the PSL table is measured to be 72 mW. I have attached a picture of the beam paths on the ETMY table, with the beams labelled with their respective powers.

Next we are going to adjust the green alignment on the ETMY and then measure the power of the beam.

At the output end of the fibre on the PSL table, a power meter has been placed to dump the beam for now, as well as to help with the alignment at the ETMY table.
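
A quick bookkeeping sketch of the fibre throughput implied by the numbers above; the input figure is just the midpoint of the fluctuating 159-164 mW reading:

p_laser_mW = 390.0                      # at the laser output, I = 1.8 A
p_into_fiber_mW = (159.0 + 164.0) / 2   # midpoint of the fluctuating reading
p_out_of_fiber_mW = 72.0                # measured at the PSL table

print(f"fraction of laser power sent toward the fibre: {p_into_fiber_mW / p_laser_mW:.0%}")
print(f"fibre throughput (coupling + losses):          {p_out_of_fiber_mW / p_into_fiber_mW:.0%}")
# roughly 41% of the laser power heads to the fibre, and ~45% of that makes it to the PSL table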

Attachment 1: Graph3.png
Attachment 2: ETMY_beam_powers.png
  4965   Thu Jul 14 02:32:11 2011   sonali   Update   Green Locking   Power of the AUX laser increased.

Quote:

The power of the beam which is being coupled into the optical fibre is measured to be between 159 mW to 164 mW (The power meter was showing fluctuating readings).

The power out of the beam coming out of the fibre far-end at the PSL table is measured to be 72 mW. Here, I have attached a picture of the beam paths of the ETMY table with the beams labelled with their respective powers.

 For the phase locking or beat-note measurement we only need ~1 mW. It's a bad idea to send so much power into the fiber because of SBS and safety. The power should be lowered until the output at the PSL is < 2 mW. In terms of SNR, there's no advantage to using such high powers.

  4973   Fri Jul 15 13:48:56 2011   sonali   Update   Green Locking   Power of the AUX laser increased.

Quote:

Quote:

The power of the beam which is being coupled into the optical fibre is measured to be between 159 mW to 164 mW (The power meter was showing fluctuating readings).

The power out of the beam coming out of the fibre far-end at the PSL table is measured to be 72 mW. Here, I have attached a picture of the beam paths of the ETMY table with the beams labelled with their respective powers.

 For the phase locking or beat note measuring we only need ~1 mW. Its a bad idea to send so much power into the fiber because of SBS and safety. The power should be lowered until the output at the PSL is < 2 mW. In terms of SNR, there's no advantage to use such high powers.

 

Well, the plan is to put a neutral density filter in the beam path before it enters the fibre. Before doing that, I set up the camera on the PSL table to look at the fibre output; I will need it while realigning the beam after putting in the neutral density filter. I have attached the ETMY layout with the neutral density filter in place.
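
A back-of-the-envelope sketch of how strong the ND filter needs to be to meet the < 2 mW target at the PSL end, given the 72 mW currently coming out of the fibre (the OD 2.0 value below is just a common stock filter, not a claim about what is in the lab):

import math

p_now_mW = 72.0     # currently at the fibre output on the PSL table
p_target_mW = 2.0   # requested upper limit

od_needed = math.log10(p_now_mW / p_target_mW)   # OD = log10(attenuation factor)
print(f"need OD >= {od_needed:.2f}")             # ~1.56

# e.g. a stock OD 2.0 filter would leave
print(f"OD 2.0 -> {p_now_mW * 10**-2.0:.2f} mW at the PSL table")   # 0.72 mW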

Attachment 1: ETMY_after_fibre_coupling_labelled.pdf
  5631   Fri Oct 7 17:35:26 2011   Katrin   Update   Green Locking   Power on green YARM table

After all realignment is finished, here are the powers at several positions:

 

Attachment 1: DSC_3496_power.JPG (photo of the table with the measured powers labelled)

  15523   Thu Aug 13 18:10:22 2020   gautam   Update   General   Power outage

There was a power outage ~30 mins ago that knocked out CDS, PSL etc. The lights in the office area also flickered briefly. Working on recovery now. The elog was also down (since nodus presumably rebooted); I restarted the service just now. Vacuum status seems okay, even though the status string reads "Unrecognized".

The recovery was complete at 1830 local time. Curiously, the EX NPRO and the doubling oven temperature controllers stayed on; usually they get knocked out as well. Also, all the slow machines and associated Acromag crates survived. I guess the interruption was so fleeting that some devices rode it out.

The control room workstation, zita, which drives the IFO status StripTool display on the large TV screen, seems to have some display driver issues - it crashed twice when I tried to change the default display arrangement (large TV + small monitor). It also wants to update to Ubuntu 18.04 LTS, but I decided not to for the time being (it is running Ubuntu 16.04 LTS). Anyway, after a couple of power cycles, the wall StripTools are up once again.

  10586   Thu Oct 9 10:52:37 2014   manasa   Update   General   Power outage II & recovery

After the unexpected 30-40 min power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines, except for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

  10587   Thu Oct 9 11:56:35 2014   Steve   Update   VAC   Power outage II & recovery

Quote:

After the unexpected 30-40 min power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines, except for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

 

 IFO vacuum, air conditioning, and PMC HV are still down. The PSL output beam is blocked on the table.
