Probably there is the same mistake for the PMC gain slider. Possibly on the FSS slider, too???
2) The EPICS setting for the MZ gain slider was totally wrong.
Today I learned from the circuit that the full scale of the gain slider C1:PSL-MZ_GAIN gives us +/-10V at the DAC.
This yields +/-1V at V_ctrl of the AD602 after the internal 1/10 attenuation stage.
This +/-1V doesn't correspond to -10dB~+30dB, but to -22dB~+42dB, which is beyond the spec of the chip.
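These numbers follow from the AD602's control law, gain(dB) = 32·V_G + 10 (from the datasheet, with the specified V_G range being +/-0.625 V); a quick sanity-check sketch of the slider scaling described above:

```python
# AD602 gain law (datasheet): gain_dB = 32 * v_ctrl + 10, v_ctrl in volts.
# The specified control range is +/-0.625 V, i.e. -10 dB to +30 dB.
def ad602_gain_db(v_ctrl):
    return 32.0 * v_ctrl + 10.0

dac_full_scale = 10.0            # +/-10 V at the DAC (full-scale slider)
v_ctrl = dac_full_scale / 10.0   # internal 1/10 attenuation -> +/-1 V

print(ad602_gain_db(-v_ctrl))    # -22 dB, below the -10 dB spec limit
print(ad602_gain_db(+v_ctrl))    # +42 dB, above the +30 dB spec limit
print(ad602_gain_db(0.625))      # +30 dB: in-spec if the DAC is capped at +/-6.25 V
```

So to stay within spec, the slider full scale should map to +/-6.25 V at the DAC rather than +/-10 V.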
I was working on the electronics bench and what sounded like a huge truck rolled by outside. I didn't notice anything until now, but it looks like something became misaligned when the truck passed by (~6:45-6:50 pm). I can hear a lot of noise coming out of the control room speakers, and pretty much all of the IOO plots on the wall have sharp discontinuities.
I haven't been moving around much for the past 2 hours so I don't think it was me, but I thought it was worth noting.
After talking with Jenne, I realized the ADC card in the c1ass machine was currently going unused. As we are short an ADC card, a possible solution is to press that card into service. Unfortunately, it's currently on a PMC-to-PCI adapter, rather than a PMC-to-PCIe adapter. One option is to try to find a different adapter board (I was handed 3 for RFM cards, so it's possible there's another spare over in Downs - unfortunately I missed Jay when I went over at 2:30 to check). The other option is to put it directly into a computer, the only candidate being megatron, as the other machines don't have a full-length PCI slot.
I'm still waiting to hear back from Alex (who is in Germany for the next 10 days) whether I can connect both in the computer as well as with the IO chassis.
So to that end, I briefly turned off the c1ass machine, and pulled the card. I then turned it back on, restarted all the code as per the wiki instructions, and had Jenne go over how it looked with me, to make sure everything was ok.
There is something odd with some of the channels reading 1e20 from the RFM network. I believe this is related to those particular channels not being refreshed by their source (the other suspension front-end machines), so it's just sitting at a default until the channel value actually changes.
I think I've narrowed down the source of this ground loop. It originates from the fact that the DAC from which the signals for this board are derived sits in an expansion chassis in 1Y3, whereas the LSC electronics are all in 1Y2.
Looking at Jamie's old elog from the time when this infrastructure was installed, there is a remark that the signal didn't look too noisy - so either this is a new problem, or the characterization back then wasn't done in detail. The main reason why I think this is non-ideal is that the tip-tilt steering mirrors sending the beam into the IFO are controlled by analogous infrastructure - I confirmed using the LEMO monitor points on the D000316 that routes signals to TT1 and TT2 that they look similarly noisy (see e.g. Attachment #1). So we are injecting some amount (about 10% of the DC level) of beam jitter into the IFO because of this noisy signal - seems non-ideal. If I understand correctly, there are no damping loops on these suspensions which would suppress this injection.
How should we go about eliminating this ground loop?
So either something is busted on this board (power regulating capacitor perhaps?), or we have some kind of ground loop between electronics in the same chassis (despite the D990694 having differential input receiving). Seems like further investigation is needed. Note that the D000316 just two boards over in the same Eurocrate chassis is responsible for driving our input steering mirror Tip-Tilt suspensions. I wonder if that board too is suffering from a similarly noisy ground?
We discussed possible solutions to this ground loop problem. Here's what we came up with:
Why do we care about this so much anyways? Koji pointed out that the tip tilt suspensions do have passive eddy current damping, but that presumably isn't very effective at frequencies in the 10Hz-1kHz range, which is where I observed the noise injection.
Note that all our SOS suspensions are also possibly being plagued by this problem - the AI board that receives signals is D000186, but not revision D I think. But perhaps for the SOS optics this isn't really a problem, as the expansion chassis and the coil driver electronics may share a common power source?
gautam 1530 7 Feb: Judging by the footprint of the front panel connectors, I would say that the AI boards that receive signals from the DACs for our SOS suspended optics are of the Rev B variety, and so receive the DAC voltages single ended. Of course, the real test would be to look inside these boards. But they certainly look distinct from the black front panelled RevD variant linked above, which has differential inputs. Rev D uses OP27s, although rana mentioned that the LT1125 isn't the right choice and from what I remember, LT1125 is just Quad OP27...
It looks as though we may have two IO chassis with bad timing cards.
Symptoms are as follows:
We can get our front end models writing data and timestamps out on the RFM network.
However, they get rejected on the receiving end because the timestamps don't match up with the receiving front end's timestamp. Once started, the system is consistently off by the same amount. Stopping the front end module on c1ioo and restarting it, generated a new consistent offset. Say off by 29,000 cycles in the first case and on restart we might be 11,000 cycles off. Essentially, on start up, the IOP isn't using the 1PPS signal to determine when to start counting.
We tried swapping the spare IO chassis (intended for the LSC) in ....
# Joe will finish this in 3 days.
# Basically, in conclusion, we found that the c1ioo IO chassis is bad.
[Jenne, Manasa, Jamie]
Now that we're up to air we relocked the mode cleaner, tweaked up the alignment, and looked at the spot positions:
The measurements from yesterday were made before the input power was lowered. It appears that things have not moved by that much, which is pretty good.
We turned on the PZT1 voltages and set them back to their nominal values as recorded before shut-down yesterday. Jenne had centered IPPOS before shutdown (IPANG was unfortunately not coming out of the vacuum). Now we're at the following value: (-0.63, 0.66). We need to calibrate this to get a sense of how much motion this actually is, but this is not insignificant.
- IPANG aligned on the QPD. The beam seems to be partially clipped in the chamber.
- Oplevs of the IFO mirrors are aligned.
- After the oplev alignment, ITMX Yaw oplev servo started to oscillate. Reduced the gain from -50 to -20.
Notes to the fiber team:
I am aligning beam onto the RFPDs (I have finished all 4 REFL diodes, and AS55), in preparation for locking.
In doing so, I have noticed that the fiber lasers for the RFPD testing are always illuminating the photodiodes! This seems bad! Ack!
For now, I blocked the laser light coming from the fiber, did my alignment, then removed my blocks. The exception is REFL55, in front of which I have left an aluminum beam dump, so that we can use REFL55 for PRM-ITMY locking, which I need in order to align the POP diodes.
EDIT: I have also aligned POP QPD, and POP110/22. The fiber launcher for POP110 was not tight in its mount, so when I went to put a beam block in front of it and touched the mount, the whole thing spun a little bit. Now the fiber to POP110 is totally misaligned, and should be realigned.
What was done for the alignment:
1. Aligned the arms (ran ASS).
2. Aligned the beam to all the REFL and AS PDs.
3. Misaligned the ETMs and ITMX.
4. Locked PRM+ITMY using REFL11.
The following were modified to enable locking
(1) PRCL gain changed from +2.0 to -12.
(2) Power normalization matrix for PRCL changed from +10.0 to 0.
(3) FM3 in PRCL servo filter module was turned OFF.
5. POP PDs were aligned.
[Larry (on site), Koji & Gautam (remote)]
Network recovery (Larry/KA)
Asked Larry to get into the lab.
14:30 Larry went to the lab office area. He restarted (power cycled) the edge-switch (on the rack next to the printer). This recovered the ssh-access to nodus.
Also Larry turned on the CAD WS. Koji confirmed the remote access to the CAD WS.
Nodus recovery (KA)
Apr 12, 22:43 nodus was restarted.
Apache (dokuwiki, svn, etc) was recovered using the systemctl command per the wiki
ELOG was recovered by running the script
Control Machines / RT FE / Acromag server Status
Judging by uptime, basically only the machines that are on UPS (all control room workstations + chiara) survived the power outage. All RT FEs are down. Apart from c1susaux, the acromag servers are back up (but the modbus processes have NOT been restarted yet). Vacuum machine is not visible on the network (could just be a networking issue and the local subnet to valves/pumps is connected, but no way to tell remotely).
KA suspects that FB took some finite time to come up. However, the RT machines require FB in order to download their OS, so they came up dead. If so, what we need is to power cycle them.
Acromag: unknown state
The power was lost at Apr 12 22:39:42, according to the vacuum pressure log. The power loss was for a few min.
The 40m experienced a building-wide power failure for ~30 seconds at ~7:38 pm today.
Thought that might be important...
I'm checking the status from home.
P1 is 8e-4 torr
nodus did not feel the power outage (is it UPS supported?)
linux1 booted automatically
c1ioo booted automatically.
c1sus, c1lsc, c1iscex, c1iscey need manual power button push.
9:11pm closed PSL shutter, turned Innolight 2W laser on,
turned 3 IFO air cond on,
CC1 5.1e-5 torr, V1 is closed, Maglev has failed, valve configuration is "Vacuum Normal" with V1 & VM1 closed, RGA not running, c1vac1 and c1vac2 were saved by UPS,
(Maglev is not connected to the UPS because it is running on 220V)
reset & started Maglev... I cannot open V1 without the 40mars running...
Rossa is the only computer running in the control room,
Nodus and Linux1 were saved by UPS,
turned on IR lasers at the ends, green shutters are closed
It is safe to leave the lab as is.
As far as I know the system is running as usual. I had the IMC locked and one of the arms flashing.
But the other arm had no flash, and neither arm was locked before lunch time.
This morning Steve and I went around the lab to turn on the realtime machines.
Also we took the advantage of this opportunity to shutdown linux1 and nodus
to replace the extension cables for their AC power.
I also installed a 3TB hard disk on linux1. This was to provide a local daily copy of our
working area. But I could not get the disk recognized by the OS.
It seems that there is a "2TB" barrier, such that disks bigger than 2.2TB can't be recognized
by the older machines. I'll wait for the upgrade of the machine.
Rebooting the realtime machines did not help FB to talk with them. I fixed them.
Basically what I did was:
- Stop all of the realtime codes by running rtcds kill all on c1lsc, c1ioo, c1sus, c1iscex, c1iscey
rtcds kill all
- run sudo ntpdate -b -s -u pool.ntp.org on c1lsc, c1ioo, c1sus, c1iscex, c1iscey, and fb
sudo ntpdate -b -s -u pool.ntp.org
- restart realtime codes one by one. I checked which code makes FB unhappy. But in reality
FB was happy with all of them running.
Then slow machines except for c1vac1 and c1vac2 were burtrestored.
Zach reported that svn was down. I went to the 40m wiki and searched "apache".
There are instructions on how to restart apache.
Recovery work: now arms are locking as usual
- FB is failing very frequently. Every time I see red signals in the CDS summary, I have to run "sudo ntpdate -b -s -u pool.ntp.org"
- PMC was aligned
- The main Marconi returned to initial state. Changed the frequency and amplitude to the nominal value labeled on the unit
- The SHG oven temp controllers were disabled. I visited all three units and pushed the "enable" buttons.
- Y arm was immediately locked. It was aligned using ASS.
- X arm did not show any flash. I found that the scx model was not successfully burtrestored yesterday.
The setting was restored using Mar 22 snapshot.
- After a little tweak of the ETMX alignment, a decent flash was achieved. But still it could not be locked.
- Run s/LSC/LSCoffset.py. This immediately made the X arm locked.
- Checked the green alignment. The X arm green is beating with the PSL at ~100MHz but is misaligned beyond the PZT range.
The Y arm green is locked on TEM00 and is beating with the PSL at ~100MHz.
Chiara reports an uptime of >195 days, so its UPS is working fine
FB, megatron, optimus booted via front panel button.
Jetstor RAID array (where the frames live) was beeping, since its UPS failed as well. The beep was silenced by clicking on "View Events/Mute Beeper" at 192.168.113.119 in a browser on a martian computer. I've started a data consistency check via the web interface, as well. According to the log, this was last done in July 2015, and took ~19 hrs.
Frontends powered up; models don't start automatically at boot anymore, so I ran rtcds start all on each of them.
rtcds start all
All frontends except c1ioo had a very wrong datetime, so I ran sudo ntpdate -b -s -u pool.ntp.org on all of them, and restarted the models (just updating the time isn't enough). There is an /etc/ntp.conf in the frontend filesystem that points to nodus, which is set up as an NTP server, but I guess this isn't working.
PMC locking was hindered by sticky sliders. I burtrestored the c1psl.snap from Friday, and the PMC locked up fine. (One may be fooled by the unchanged HV mon when moving the offset slider into thinking the HV KEPCO power supplies need to be brought down and up again, but it's just the sliders)
Mode cleaner manually locked and somewhat aligned. Based on my memory of PMC camera/transmission, the pointing changed; the WFS need a round of MC alignment and WFS offset setting, but the current state is fine for operation without all that.
10:15 power glitch today. ETMX Lightwave and air conditioning turned back on
The CDS situation was not as catastrophic as the last time, it was sufficient for me to ssh into all the frontends and restart all the models. I also checked that monit was running on all the FEs and that there was no date/time issues like we saw last week. Everything looks to be back to normal now, except that the ntpd process being monitored on c1iscex says "execution failed". I tried restarting the process a couple of times, but each time it returns the same status after a few minutes.
I was able to realign the arms, lock them, and have run the dither align to maximize IR transmission - looks like things are back to normal now. For the Y-end, I used the green beam initially to do some coarse alignment of the ITM and ETM, till I was able to see IR flashes in the control room monitors. I then tweaked the alignment of the tip-tilts till I saw TEM00 flashes, and then enabled LSC. Once the arm was locked, I ran the dither align. I then tweaked ITMX alignment till I saw IR flashes in the X arm as well, and was able to lock it with minimal tweaking of ETMX. The LSC actuation was set to ETMX when the models were restarted - I changed this to ITMX actuation, and now both arms are locked with nominal IR transmissions. I will center all the Oplev spots tomorrow before I start work on getting the X green back - I've left the ETM Oplev servos on for now.
While I was working, I noticed that frame builder was periodically crashing. I had to run mxstream restart a few times in order to get CDS back to the nominal state. I wonder if this is a persistent effect of the date/time issues we were seeing earlier today?
Sun Feb 28 18:23:09 2010
Hi. This is Alberto. It's Sun Feb 28 19:23:09 2010
Monday, March 1, 9:00 2010 Steve turns on PSL-REF cavity ion pump HV at 1Y1
At 11:13 am there was a ~2-3 second interruption of all power at the 40m.
I checked that nobody was in any of the lab areas at the time of the outage.
I walked along both arms of the 40m and looked for any indicator lights or unusual activity. I took photos of the power supplies that I encountered, attached. I tried to be somewhat complete, but didn't have a list of things in mind to check, so I may have missed something.
I noticed an electrical buzzing that seemed to emanate from one of the AC adapters on the vacuum rack. I've attached a photo of which one, the buzzing changes when I touch the case of the adapter. I did not modify anything on the vacuum rack. There is also
Most of the cds channels are still down. I am going through the wiki for procedures on what to log when the power goes off, and will follow the procedures here to get some useful channels.
After several combinations of soft/hard reboots for FB, FEs and expansion chassis, we managed to recover the nominal RTCDS status post power outage. The final reboots were undertaken by the rebootC1LSC.sh script while we went to Hotel Constance. Upon returning, Koji found all the lights to be green. Some remarks:
sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*
The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.
I did a walkaround and checked the status of all the interlock switches I could find based on the SOP and interlock wiring diagram, but the PSL remains interlocked. I don't want to futz around with AC power lines so I will wait for Koji before debugging further. All the "Danger" signs at the VEA entry points aren't on, suggesting to me that the problem lies pretty far upstream in the wiring, possibly at the AC line input? The Red lights around the PSL enclosure, which are supposed to signal if the enclosure doors are not properly closed, also do not turn on, supporting this hypothesis...
I confirmed that there is nothing wrong with the laser itself - I manually shorted the interlock pins on the rear of the controller and the laser turned on fine, but I am not comfortable operating in this hacky way, so I have restored the interlock connections until we decide the next course of action...
[Gautam, Aaron, Koji]
The PSL interlock system was fixed and now the 40m lab is laser hazard as usual.
- The schematic diagram of the interlock system D1200192
- We have opened the interlock box. Immediately we found that the DC switching supply (OMRON S82K-00712) is not functioning anymore. (Attachment #1)
- We could not remove the module as the power supply was attached on the DIN rail. We decided to leave the broken supply there (it is still AC powered with no DC output).
- Instead, we brought a DC supply adapter from somewhere and chopped the head so that we can hook it up on the crimping-type quick connects. In Attachment #1, the gray is +12V, and the orange and black lines are GND.
- Upon inspection, the wires of the "door interlock reset button" fell off and the momentary switch (GRAYHILL 30-05-01-502-03) was found broken. So it was replaced with another momentary switch, which is unfortunately much smaller than the original. (Attachments 2 and 3)
- Once the DC supply adapter was plugged into an AC tap, we heard the sounds of the relays working, and we recovered the laser hazard lamps and the PSL door alarm lamps. It was also confirmed that the PSL Innolight is operable now.
- BTW, there is a big switch box on the wall close to the PSL enclosure. Some of the green lamps were out. We found that we have plenty of spare lamps and relays inside the box. So we replaced the bulbs, and now the A.C. lights are functioning. (Attachments 4 & 5)
I made the first trial of locking a Power-recycled single arm.
This is NOT mainstream work,
but it gives us some prospects towards the full lock, and perhaps some useful thoughts.
Lock Acquisition Steps
Actual Time Series
Below is a plot of the actual lock acquisition sequence in time series.
Assumptions on the parameter estimations
I constructed a regulator board that can take ±24 V and supply a regulated ±15 V or ±5 V. I followed the schematics from LIGO-D1000217-v1.
I was going to make 2 boards, one for ±15 V and one for ±5 V, but Chub just gave me a second assembled board when I asked him for the parts to construct it.
We have decided that, rather than replacing the power source for the amplifiers that are on the rack, and leaving the Thorlabs PD as POP22/110, we will remove all of the temporary elements, and put in something more permanent.
So, I have taken the broadband PDs from Zach's Gyro experiment in the ATF. We will figure out what needs to be done to modify these to notch out unwanted frequencies, and amplify the signal nicely. We will also create a pair of cables - one for power from the LSC rack, and one for signal back to the LSC rack. Then we'll swap out the currently installed Thorlabs PD and replace it with a broadband PD.
I looked at some DC signals for the buildup of the carrier and sideband fields in various places. The results are shown in Attachments #1 and #2.
This is very interesting. Do you have the ASDC vs PRG (~ TRX or TRY) plot? That gives you insight into the cause of the low recycling gain.
Attachments #1 and #2 are in the style of elog15356, but with data from a more recent lock. It'd be nice to calibrate the ASDC channel (and in general all channels) into power units, so we have an estimate of how much sideband power we expect, and the rest can be attributed to carrier leakage to ASDC.
On the basis of Attachment #1, the PRG is ~19, and at times, the arm transmission goes even higher. I'd say we are now in the regime where the uncertainty of the losses in the recycling cavity (maybe beamsplitter clipping?) is important in using this info to try and constrain the arm cavity losses. I'm also not sure what to make of the asymmetry between TRX and TRY. Allegedly, the Y arm is supposed to be lossier.
Gautam and I were talking about modulation and demodulation, and wondered what the power-combining situation is for the triple resonant EOM installed 8 years ago. And we noticed that the current setup has an additional ~5dB loss associated with the 3-to-1 power combiner. (Figure a)
N-to-1 broadband power combiners have an intrinsic loss of 10 log10(N). You can think about the reciprocal process (power splitting) (Figure b). The 2W input coming into the 2-port power splitter gives us two 1W outputs. The opposite process is power combining, as shown in Figure c. In this case, the two identical signals are constructively added in the combiner, but the output is not 20Vpk but 14Vpk. Considering the linearity, when one of the ports is terminated, the output power is going to be half. So we expect 27dBm output for a 30dBm input (Figure d). This fact is frequently overlooked, particularly when one combines signals at multiple frequencies (Figure e). We can avoid this kind of loss by using a frequency-dependent power combiner like a diplexer or a triplexer.
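The arithmetic above can be checked numerically (a sketch of the ideal lossless-combiner bookkeeping, not a model of the actual part):

```python
import math

def combiner_loss_db(n):
    # intrinsic insertion loss of an ideal N-to-1 broadband combiner
    return 10 * math.log10(n)

print(combiner_loss_db(3))     # ~4.8 dB -- the ~5 dB loss of the 3-to-1 combiner

# Two identical coherent 10 Vpk signals into a 2-to-1 combiner:
v_out = 2 * 10 / math.sqrt(2)  # 14.1 Vpk, not 20 Vpk
print(v_out)

# By linearity, one port driven and the other terminated loses half the power:
p_in_dbm = 30.0
p_out_dbm = p_in_dbm - combiner_loss_db(2)  # ~27 dBm out for 30 dBm in
print(p_out_dbm)
```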
So actually, it was the C1PSL channels that had died. We did the following to get them back:
Looks like there was a power glitch at around 10am today.
All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).
Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.
GV Jun 5 6pm: From my discussion with jamie, I gather that the fact that the dmesg output is not written to file is because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically)
[Koji, Rana, Gautam]
The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:
Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error?
Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and also the PMC transmission is pretty low (while the lab temperature equilibriates after the AC being off during peak daytime heat). So the MC transmission is ~14500 counts, while we are used to more like 16,500 counts nowadays.
Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params.
Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.
Attachment #4 - Warning lights on C1IOO
Now IFO work like fixing ASS can continue...
I measured the power incident on REFL11 and REFL55. Steve was concerned that it is too high. If we consider this elog, the incident power levels were REFL11: 30 mW and REFL55: 87 mW (assuming efficiency of ~0.8 A/W @1064nm for the C30642 PD). However, currently there is a combination of Polarising BS and Half-waveplate with which we have attenuated the power incident on the REFL PDs. We now have (with the PRM misaligned):
REFL11: Power incident = 7.60 mW ; DC out = 0.330 V => efficiency = 0.87 A/W
REFL55: Power incident = 23 mW ; DC out = 0.850 V => efficiency = 0.74 A/W
and with the PRM aligned:
REFL11: DC out = 0.35 V => 8 mW is incident
REFL55: DC out = 0.975 V => 26 mW is incident
These power levels may go up further when everything is working well.
The max rated photo-current is 100mA => max power 125mW @0.8 A/W.
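The quoted efficiencies are consistent with reading the photocurrent across a 50 Ω DC transimpedance - an assumption in this sketch, which should be checked against the actual PD schematic:

```python
R_DC = 50.0  # ohm -- assumed DC transimpedance (not confirmed from the schematic)

def responsivity_A_per_W(p_inc_mW, v_dc):
    """Inferred responsivity from incident power (mW) and DC output voltage (V)."""
    i_mA = v_dc / R_DC * 1e3  # photocurrent in mA
    return i_mA / p_inc_mW

print(responsivity_A_per_W(7.60, 0.330))  # ~0.87 A/W (REFL11)
print(responsivity_A_per_W(23.0, 0.850))  # ~0.74 A/W (REFL55)

# Max rated photocurrent of 100 mA => max power at 0.8 A/W:
print(100e-3 / 0.8 * 1e3)                 # 125 mW
```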
What is the power level on MC_REFL_ PDs and WFS when the MC is not locked?
Kiwamu, Nancy, and I restored the power into the MC today:
We found many dis-assembled Allen Key sets. Do not do this! Return tools to their proper places or else you are just wasting everyone's time!
What was the point:
I twiddled with several different things this evening to increase the power into the Mode Cleaner. The goal was to have enough power to be able to see the arm cavity flashes on the CCD cameras, since it's going to be a total pain to lock the IFO if we can't see what the mode structure looks like.
Summed-up list of what I did:
* Found the MC nicely aligned. Did not ever adjust the MC suspensions.
* Optimized MC Refl DC, using the old "DMM hooked up to DC out" method.
* Removed the temporary BS1-1064-33-1025-45S that was in the MC refl path, and replaced it with the old BS1-1064-IF-2037-C-45S that used to be there. This undoes the temporary change from elog 3878. Note however, that Yuta's elog 3892 says that the original mirror was a 1%, not 10% as the sticker indicates. The temporary mirror was in place to get enough light to MC Refl while the laser power was low, but now we don't want to fry the PD.
* Noticed that the MCWFS path is totally wrong. Someone (Yuta?) wanted to use the MCWFS as a reference, but the steering mirror in front of WFS1 was switched out, and now no beam goes to WFS2 (it's blocked by part of the mount of the new mirror). I have not yet fixed this, since I wasn't using the WFS tonight, and had other things to get done. We will need to fix this.
* Realigned the MC Refl path to optimize MC Refl again, with the new mirror.
* Replaced the last steering mirror on the PSL table before the beam goes into the chamber from a BS1-1064-33-1025-45S to a Y1-45S. I would have liked a Y1-0deg mirror, since the angle is closer to 0 than 45, but I couldn't find one. According to Mott's elog 2392 the CVI Y1-45S is pretty much equally good all the way down to 0deg, so I went with it. This undoes the change of keeping the laser power in the chambers to a nice safe ~50mW max while we were at atmosphere.
* Put the HWP in front of the laser back to 267deg, from its temporary place of 240deg. The rotation was to keep the laser power down while we were at atmosphere. I put the HWP back to the place that Kevin had determined was best in his elog 3818.
* Tried to quickly align the Xarm by touching the BS, ITMX and ETMX. I might be seeing IR flashes (I blocked the green beam on the ETMX table so I wouldn't be confused. I unblocked it before finishing for the night) on the CCD for the Xarm, but that might also be wishful thinking. There's definitely something lighting up / flashing in the ~center of ETMX on the camera, but I can't decide if it's scatter off of a part of the suspension tower, or if it's really the resonance. Note to self: Rana reminds me that the ITM should be misaligned while using BS to get beam on ETM, and then using ETM to get beam on ITM. Only then should I have realigned the ITM. I had the ITM aligned (just left where it had been) the whole time, so I was making my life way harder than it should have been. I'll work on it again more today (Tuesday).
What happened in the end:
The MC Trans signal on the MC Lock screen went up by almost an order of magnitude (from ~3500 to ~32,000). When the count was near ~20,000 I could barely see the spot on a card, so I'm not worried about the QPD. I do wonder, however, if we are saturating the ADC. Suresh changed the transimpedance of the MC Trans QPD a while ago (Suresh's elog 3882), and maybe that was a bad idea?
Xarm not yet locked.
Can't really see flashes on the Test Mass cameras.
- Previously MC TRANS was 9000~10000 when the alignment was good. This means that the MC TRANS PD is saturated if the full power is given.
==> Transimpedance must be changed again.
- Y1-45S has 4% transmission. Definitely we'd like to use Y1-0 or anything else. The replaced mirror must be somewhere.
I think Suresh replaced it, so he must remember where it is.
- We must confirm the beam pointing on the MC mirrors with A2L.
- We must check the MCWFS path alignment and configuration.
- We should take the picture of the new PSL setup in order to update the photo on wiki.
* Tried to quickly align the Xarm by touching the BS, ITMX and ETMX. I might be seeing IR flashes (I blocked the green beam on the ETMX table so I wouldn't be confused. I unblocked it before finishing for the night) on the CCD for the Xarm, but that might also be wishful thinking. There's definitely something lighting up / flashing in the ~center of ETMX on the camera, but I can't decide if it's scatter off of a part of the suspension tower, or if it's really the resonance.
The power buildup in the MC is ~400, so 100mW of incident power would give about 40W circulating in the mode cleaner.
Rana points out that the ATF had a 35W beam running around the table in air, with a much smaller spot size than our MC has, so 40W should be totally fine in terms of coating damage.
I have therefore increased the power into the vacuum envelope to ~75mW. The MC REFL PD should be totally fine up to ~100mW, so 75mW is plenty low. The MC transmission is now a little over 1000 counts. I have changed the low power mcup script to not bring the VCO gain all the way up to 31dB anymore. Now it seems happy with a VCO gain of 15dB (which is the same as normal power).
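The circulating-power numbers follow directly from the buildup factor:

```python
buildup = 400  # MC power buildup quoted above

def mc_circulating_W(p_in_mW):
    # circulating power in the mode cleaner for a given incident power
    return buildup * p_in_mW * 1e-3

print(mc_circulating_W(100))  # 40 W for 100 mW incident
print(mc_circulating_W(75))   # 30 W at the current 75 mW input
```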
The ETMY laser was operating at 1.5 A current and 197 mW power.
For the efficient frequency doubling of the AUX laser beam at the ETMY table, a higher power is required.
Steve and I changed the current level of the laser from 1.5 A to 2.1 A in steps of 0.1 A and noted the corresponding power output . The graph is attached here.
The laser has been set to current 1.8 Amperes. At this current, the power of the output beam just near the laser output is measured to be 390 mW.
The power of the beam which is being coupled into the optical fibre is measured to be between 159 mW and 164 mW (the power meter was showing fluctuating readings).
The power of the beam coming out of the fibre far-end at the PSL table is measured to be 72 mW. Here, I have attached a picture of the beam paths of the ETMY table with the beams labelled with their respective powers.
Next we are going to adjust the green alignment on the ETMY and then measure the power of the beam.
At the output end of the fibre on the PSL, a power meter has been put to dump the beam for now as well as to help with the alignment at the ETMY table.
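A quick power budget from the measurements above (taking the midpoint of the fluctuating fiber-input reading):

```python
p_laser     = 390.0              # mW at the laser output (1.8 A)
p_fiber_in  = (159 + 164) / 2.0  # mW coupled into the fiber (midpoint of reading)
p_fiber_out = 72.0               # mW at the fiber far end on the PSL table

print(round(p_fiber_in / p_laser, 2))      # ~0.41 of the power reaches the fiber input
print(round(p_fiber_out / p_fiber_in, 2))  # ~0.45 fiber transmission
print(round(p_fiber_out / p_laser, 2))     # ~0.18 overall laser-to-PSL efficiency
```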
For the phase locking or beat note measuring we only need ~1 mW. It's a bad idea to send so much power into the fiber because of SBS and safety. The power should be lowered until the output at the PSL is < 2 mW. In terms of SNR, there's no advantage to using such high powers.
Well, the plan is to put a neutral density filter in the beam path before it enters the fibre. But before I could do that, I set up the camera on the PSL table to look at the fiber output. I will need it while I realign the beam after putting in the Neutral Density Filter. I have attached the ETMY layout with the Neutral Density filter in place herewith.
After all realignment is finished, here are the powers at several positions:
There was a power outage ~30 mins ago that knocked out CDS, PSL etc. The lights in the office area also flickered briefly. Working on recovery now. The elog was also down (since nodus presumably rebooted), I restarted the service just now. Vacuum status seems okay, even though the status string reads "Unrecognized".
The recovery was complete at 1830 local time. Curiously, the EX NPRO and the doubling oven temp controllers stayed on, usually they are taken out as well. Also, all the slow machines and associated Acromag crates survived. I guess the interruption was so fleeting that some devices survived.
The control room workstation, zita, which is responsible for the IFO status StripTool display on the large TV screen, has some display driver issues I think - it crashed twice when I tried to change the default display arrangement (large TV + small monitor). It also wants to update to Ubuntu 18.04 LTS, but I decided not to for the time being (it is running Ubuntu 16.04 LTS). Anyways, after a couple of power cycles, the wall StripTools are up once again.
Post 30-40min unexpected power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.
I brought back the FE machines and keyed all the crates to bring back the slow machines, except for the vac computers.
c1vac1 is not responding as of now. All other computers have come back and are alive.
IFO vacuum, air conditioning and PMC HV are still down. PSL output beam is blocked on the table.