ID | Date | Author | Type | Category | Subject
13058 | Fri Jun 9 19:18:10 2017 | gautam | Update | IMC | IMC wonkiness
It happened again. MC2 UL seems to have gotten the biggest glitch. It's a rather small jump in the signal level compared to what I have seen in the recent past in connection with suspect Satellite boxes, and LL and UR sensors barely see it.
I will squish Sat box cables and check the cabling at the coil driver board end as well, given that these are two areas where there has been some work recently. WFS loops will remain off till I figure this out. At least the (newly centered) DC spot positions on the WFS and MC2 TRANS QPD should serve as some kind of reference for good MC alignment.
GV edit 9pm: I tightened up all the cables, but it doesn't seem to have helped. There was another, larger glitch just now. UR and LL basically don't see it at all (see Attachment #2). It also seems to be a much slower process than the glitches seen on MC1, with the misalignment happening over a few seconds. I have to see if this is consistent with a glitch in the bias voltage to one of the coils, which gets low-passed by a 4xpole@1Hz filter.
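As a quick sanity check of that last hypothesis, the step response of four real poles at 1 Hz can be simulated and compared against the few-second misalignment timescale seen in the sensor signals. This is only an illustrative sketch (the bias path is modeled as four identical real poles at 1 Hz with unity DC gain):

```python
import numpy as np
from scipy import signal

# Model the coil bias low-pass as 4 identical real poles at 1 Hz, unity DC gain
f0 = 1.0                                   # corner frequency [Hz]
w0 = 2 * np.pi * f0                        # [rad/s]
bias_lp = signal.lti([], [-w0] * 4, w0**4) # zpk form: no zeros, 4 poles

t = np.linspace(0, 5, 1000)                # seconds
t, step_resp = signal.step(bias_lp, T=t)

# Time for the filtered step to reach 90% of its final value,
# to compare with the observed misalignment timescale
t90 = t[np.argmax(step_resp >= 0.9)]
print("step reaches 90%% of final value after ~%.2f s" % t90)
```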
Quote:
Once we had the beam approximately centered for all of the above 3 PDs, we turned on the locking for the IMC, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.
Attachment 1: MC2_UL_glitchy.png
Attachment 2: MC2_glitch_fast.png
13059 | Mon Jun 12 10:34:10 2017 | gautam | Update | CDS | slow machine bootfest
Reboots for c1susaux, c1iscaux, c1auxex today. I took this opportunity to squish the Sat. Box cabling for MC2 (both on the Sat. Box end and at the vacuum feedthrough), as some work has been ongoing there recently - maybe something got accidentally jiggled during the process and was causing the MC2 alignment to jump around.
Relocked PMC to offload some of the DC offset, and re-aligned IMC after c1susaux reboot. PMC and IMC transmission back to nominal levels now. Let's see if MC2 is better behaved after this sat. box. voodoo.
Interestingly, since Feb 6, there were no slow machine reboots for almost 3 months, while there have been three reboots in the last three weeks. Not sure what (if anything) to make of that. |
13060 | Mon Jun 12 17:42:39 2017 | gautam | Update | ASS | ETMY Oplev Pentek board pulled out
As part of my Oplev servo investigations, I have pulled out the Pentek Generic Whitening board (D020432) from the Y-end electronics rack. The ETMY watchdog was shut down for this; I will restore it once the Oplev is re-installed.
13061 | Mon Jun 12 22:23:20 2017 | rana | Update | IMC | IMC wonkiness
I wonder if it's possible that the slow glitches in the MC are just glitches in the MC2 trans QPD? Steve sometimes dances on top of the MC2 chamber when he adjusts the MC2 camera.
I've re-enabled the WFS at 22:25 (I think Gautam had them off as part of the MC2 glitch investigation). WFS1 spot position seems way off in pitch & yaw.
From the turn on transient, it seems that the cross-coupled loops have a time constant of ~3 minutes for the MC2 spot, so maybe that's not consistent with the ~30 second long steps seen earlier. |
13062 | Tue Jun 13 08:40:32 2017 | Steve | Update | IMC | IMC wonkiness
Happy MC after the last glitch at 10:28, so the credit goes to Rana.
GV edit 11:30am: I think the stuff at 10:28 is not a glitch but just the WFS servos coming on - the IMC was only hand aligned before this.
Quote:
It happened again. MC2 UL seems to have gotten the biggest glitch. It's a rather small jump in the signal level compared to what I have seen in the recent past in connection with suspect Satellite boxes, and LL and UR sensors barely see it.
I will squish Sat box cables and check the cabling at the coil driver board end as well, given that these are two areas where there has been some work recently. WFS loops will remain off till I figure this out. At least the (newly centered) DC spot positions on the WFS and MC2 TRANS QPD should serve as some kind of reference for good MC alignment.
GV edit 9pm: I tightened up all the cables, but it doesn't seem to have helped. There was another, larger glitch just now. UR and LL basically don't see it at all (see Attachment #2). It also seems to be a much slower process than the glitches seen on MC1, with the misalignment happening over a few seconds. I have to see if this is consistent with a glitch in the bias voltage to one of the coils, which gets low-passed by a 4xpole@1Hz filter.
Quote:
Once we had the beam approximately centered for all of the above 3 PDs, we turned on the locking for the IMC, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.
Attachment 1: happy_MC.png
Attachment 2: last_glitch.png
13063 | Wed Jun 14 18:15:06 2017 | gautam | Update | ASS | ETMY Oplev restored
I replaced the Pentek Generic Whitening Board and the Optical Lever PD Interface Board (D010033) which I had pulled out. The ETMY optical lever servo is operational again. I will post a more detailed elog with deviations from schematics + photos + noise and TF measurements shortly.
Quote:
As part of my Oplev servo investigations, I have pulled out the Pentek Generic Whitening board (D020432) from the Y-end electronics rack. The ETMY watchdog was shut down for this; I will restore it once the Oplev is re-installed.
13064 | Thu Jun 15 01:56:50 2017 | gautam | Update | ASS | ETMY Oplev restored
Summary:
I tried playing around with the Oplev loop shape on ITMY, in order to see if I could successfully engage the Coil Driver whitening. Unfortunately, I had no success tonight.
Details:
I was trying to guess a loop shape that would work - I guess this will need some more careful thought about loop shape optimization. I was basically trying to keep all the existing filters, and modify the low-passing so as to minimize control noise injection. Adding a 4th order elliptic low pass with corner at 50 Hz and stopband attenuation of 60 dB yielded a stable loop with an upper UGF of ~6 Hz and ~25 deg of phase margin (which is on the low side). But I was able to successfully engage this loop, and as seen in Attachment #1, the noise performance above 50 Hz is vastly improved. It also seems that there is some injection of noise around 6 Hz. In any case, as soon as I tried to engage the dewhitening, the DAC output quickly saturated. The whitening filter for the ITMs has ~40 dB of gain at ~40 Hz already, so it looks like the high-frequency roll-off has to be more severe.
I am not even sure if the elliptic filter is the right choice here - it does have the steepest roll-off for a given filter order, but I need to look up how to achieve good roll-off without compromising the phase margin of the overall loop. I am going to try and do the optimization in a more systematic way, and perhaps play around with some of the other filters' poles and zeros as well to get a stable controller that minimizes control noise injection everywhere.
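For reference, the extra phase such a low-pass contributes at the loop UGF can be checked quickly with scipy. The sketch below assumes an analog 4th-order elliptic low-pass with a 50 Hz corner and 60 dB stopband attenuation, and (since it isn't recorded above) 2 dB of passband ripple:

```python
import numpy as np
from scipy import signal

# 4th-order analog elliptic low-pass: 2 dB ripple (assumed), 60 dB stopband, 50 Hz corner
rp_dB, rs_dB, f_corner = 2, 60, 50.0
b, a = signal.ellip(4, rp_dB, rs_dB, 2 * np.pi * f_corner, btype='low', analog=True)

# Evaluate the filter at the ~6 Hz upper UGF of the Oplev loop
f_ugf = 6.0
_, h = signal.freqs(b, a, worN=[2 * np.pi * f_ugf])
print("phase contribution at %.1f Hz: %.1f deg" % (f_ugf, np.degrees(np.angle(h[0]))))
print("magnitude at %.1f Hz: %.2f dB" % (f_ugf, 20 * np.log10(abs(h[0]))))
```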
Attachment 1: ITMY_OLspec.pdf
13065 | Thu Jun 15 14:24:48 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Switched On
Today, Jigyasa and I connected the Ottavia to one of the unused monitor screens, Donatella. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, dated back in March 2015 and written by Jenne, had an update regarding the Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian network we can test it further. The Donatella screen we used seems to have a graphics problem - some damage to the display screen. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use the Ottavia in the future. We will power it down if there is an issue with it.
13066 | Thu Jun 15 18:56:31 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.
The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa
Apologies for any inconvenience.
Data analysis will follow.
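For context, a minimal sketch of this kind of offset scan is shown below. This is not the actual MC_TRANS_1.py - the EPICS channel names, offset ranges, and settle time are placeholders and would need to be replaced with the real MC2 alignment offset and transmission channels.

```python
import time
import random
import numpy as np
from epics import caget, caput   # pyepics

# Placeholder channel names - substitute the real MC2 alignment offset
# and IMC transmission sum channels.
PIT_OFS = 'C1:SUS-MC2_PIT_OFFSET'
YAW_OFS = 'C1:SUS-MC2_YAW_OFFSET'
TRANS   = 'C1:IOO-MC_TRANS_SUM'

records = []
for _ in range(50):
    pit = random.uniform(-50, 50)        # offset ranges are illustrative
    yaw = random.uniform(-50, 50)
    caput(PIT_OFS, pit)
    caput(YAW_OFS, yaw)
    time.sleep(10)                       # let the cavity / autolocker settle
    records.append((pit, yaw, caget(TRANS)))

np.savetxt('mc2_scan.txt', np.array(records), header='pitch yaw trans_sum')
```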
13067 | Thu Jun 15 19:49:03 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Switched On
It has been working fine the whole day (we didn't do much testing on it, though). We are leaving it on for the night.
Quote:
Today, Jigyasa and I connected the Ottavia to one of the unused monitor screens, Donatella. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, dated back in March 2015 and written by Jenne, had an update regarding the Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian network we can test it further. The Donatella screen we used seems to have a graphics problem - some damage to the display screen. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use the Ottavia in the future. We will power it down if there is an issue with it.
13068 | Fri Jun 16 12:37:47 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Switched On
Ottavia had been left running overnight and it seems to work fine. There has been no smell or any other noticeable problem. This morning Gautam, Kaustubh and I connected Ottavia to the Martian network through the Netgear switch in the 40m lab area. We were able to SSH into Ottavia from Pianosa and access directories. On Ottavia itself we were able to run ipython and access the internet. Since it seems to be working fine, Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now.
Quote:
It has been working fine the whole day (we didn't do much testing on it, though). We are leaving it on for the night.
Quote:
Today, Jigyasa and I connected the Ottavia to one of the unused monitor screens, Donatella. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, dated back in March 2015 and written by Jenne, had an update regarding the Ottavia smelling 'burny'. It seems to be working fine for about 2 hours now. Once it is connected to the Martian network we can test it further. The Donatella screen we used seems to have a graphics problem - some damage to the display screen. It's a minor issue and does not affect the display that much, but perhaps it'll be better to use another screen if we plan to use the Ottavia in the future. We will power it down if there is an issue with it.
13069 | Fri Jun 16 13:53:11 2017 | gautam | Update | CDS | slow machine bootfest
Reboots for c1psl, c1iool0, c1iscaux today. MC autolocker log was complaining that the C1:IOO-MC_AUTOLOCK_BEAT EPICS channel did not exist, and running the usual slow machine check script revealed that these three machines required reboots. PMC was relocked, IMC Autolocker was restarted on Megatron and everything seems fine now.
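For reference, the kind of check such a script performs can be sketched with pyepics: poll one EPICS channel hosted by each slow machine and flag the machine if the caget times out. Only the C1:IOO-MC_AUTOLOCK_BEAT entry is taken from the log above (presumably hosted by c1iool0); the other channel names are placeholders.

```python
from epics import caget

# One representative EPICS channel per slow machine. Only the c1iool0 entry
# comes from this log; the others are placeholders.
slow_machines = {
    'c1iool0':  'C1:IOO-MC_AUTOLOCK_BEAT',
    'c1psl':    'C1:PSL-SOME_CHANNEL',
    'c1iscaux': 'C1:LSC-SOME_CHANNEL',
}

for host, chan in slow_machines.items():
    value = caget(chan, timeout=2.0)     # returns None if the IOC does not respond
    status = 'OK' if value is not None else 'NOT RESPONDING - needs a reboot?'
    print('%-10s %-28s %s' % (host, chan, status))
```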
13071 | Fri Jun 16 23:27:19 2017 | Kaustubh, Jigyasa | Update | Computers | Ottavia Connected to the Netgear Box
I just connected the Ottavia to the Netgear box and it's working just fine. It'll remain switched on over the weekend.
Quote:
Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now.
13072 | Mon Jun 19 18:32:18 2017 | jigyasa | Update | Computer Scripts / Programs | Software Installation for image analysis
The IRAF software from the National Optical Astronomy Observatory has been installed locally on Donatella (for testing), following the instructions listed at http://www.astronomy.ohio-state.edu/~khan/iraf/iraf_step_by_step_installation_64bit
This is a step towards "aperture photometry" and would help identify point scatterers in the images of the test masses.
I will be testing this software, in particular, the use of DAOPHOT and if it seems to work out, we may install it on the shared directory.
Hope this isn't an inconvenience.
13073 | Mon Jun 19 18:41:12 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
The previous run of the script had produced some dubious results!
The script has been modified and now scans the transmission sum for a longer duration to provide a better estimate on the average transmission. The pitch and yaw offsets have been set to the values that were randomly generated in the previous run as this would enable comparison with the current data.
I am starting it on Donatella and it should run for a couple of hours.
Apologies for the inconvenience.
Quote:
A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.
The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa
13074 | Tue Jun 20 14:58:08 2017 | Steve | Update | Cameras | GigE camera at ETMX
The GigE can be connected to ethernet. The AR-coated 1064 nm f=50 mm lens can arrive any day now.
Quote:
One of the additional GigE cameras has been IP configured for use and installation.
Static IP assigned to the camera - 192.168.113.152
Subnet mask - 255.255.255.0
Gateway - 192.168.113.2
Attachment 1: ETMXgige.jpg
13075 | Tue Jun 20 16:28:23 2017 | Steve | Update | VAC | RGA scan
Attachment 1: RGAscan243d.png
Attachment 2: RGAscan.png
13076 | Tue Jun 20 17:44:12 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
The script didn't run properly last night, due to an oversight of variable names! It's been started again and has been running for half an hour now.
Quote:
I am starting it on Donatella and it should run for a couple of hours.
Apologies for the inconvenience.
Quote:
A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.
The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa
13078 | Fri Jun 23 02:55:18 2017 | Kaustubh | Update | Computer Scripts / Programs | Script Running
I am leaving a script running on Pianosa for the night. For this purpose, the AG4395A is also being kept on. I'll see the result of the script in the morning (it should be complete by then). Please check before fiddling with the Analyzer.
Thank you. |
13079 | Sun Jun 25 22:30:57 2017 | gautam | Update | General | c1iscex timing troubles
I saw that the CDS overview screen indicated problems with c1iscex (also, ETMX was erratic). I took a closer look and thought it might be a timing issue - a walk to the X-end confirmed this: the 1pps status light on the timing slave card was no longer blinking.
I tried all versions of power cycling and debugging this problem known to me, including those suggested in this thread and from a more recent time. I am leaving things as they are for the night and will look into this more tomorrow. I've also shut down the ETMX watchdog for the time being. Looks like this has been down since 24 Jun 8am UTC.
Attachment 1: c1iscex_status.png
13081 | Mon Jun 26 22:01:08 2017 | Koji | Update | General | c1iscex timing troubles
I tried a couple of things, but there was no fundamental improvement - the LED on the timing board is still not lighting up.
- The power supply cable to the timing board at c1iscex indicated +12.3V
- I swapped the timing fiber to the new one (orange) in the digital cabinet. It didn't help.
- I swapped the opto-electronic I/F for the timing fiber with the Y-end one. The X-end one worked at Y-end, and Y-end one didn't work at X-end.
- I suspected the timing board itself -> I brought a "spare" timing board from the digital cabinet and tried to swap the board. This didn't help.
Some ideas:
- Bring the X-end fiber to C1SUS or C1IOO to see if the fiber is OK or not.
- We checked the opto-electronic I/F is OK
- Try to swap the IO chassis with the Y-end one.
- If this helps, swap only the timing board to see whether it is the problem or not.
13082 | Tue Jun 27 16:11:28 2017 | gautam | Update | Electronics | Coil whitening
I got back to trying to engage the coil driver whitening today, the idea being to try and lock the DRMI in a lower noise configuration - from the last time we had the DRMI locked, it was determined that A2L coupling from the OL loops and coil driver noise were dominant from ~10-200Hz. All of this work was done on the Y-arm, while the X-arm CDS situation is being resolved.
To re-cap, every time I tried to do this in the last month or so, the optic would get kicked around. I suspected that the main cause was the insufficient low-pass filtering on the Oplev loops, which was causing the DAC rms to rail when the whitening was turned on.
I had tried some loop-tweaking by hand of the OL loops without much success last week - today I had a little more success. The existing OL loops are comprised of the following:
- Differentiator at low frequencies (zero at DC, 2 poles at 300Hz)
- Resonant gain peaked around 0.6 Hz with a Q of ______ (to be filled in)
- BR notches
- A 2nd order elliptic low pass with 2dB passband ripple and 20dB stopband attenuation
The elliptic low pass was too shallow. For a first pass at loop shaping today, I checked if the resonant gain filter had any effect on the transmitted power RMS profile - turns out it had negligible effect. So I disabled this filter and replaced the elliptic low pass with a 5th order ELP with 2dB passband ripple and 80dB stopband attenuation. I also adjusted the overall loop gain to have an upper UGF for the OL loops around 2Hz. Looking at the spectrum of one coil output in this configuration (ITMY UL), I determined that the DAC rms was no longer in danger of railing.
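To quantify how much extra roll-off the swap buys, the old and new elliptic low-passes can be compared directly. The sketch below assumes both are analog designs with a 50 Hz corner (the corner frequencies are not recorded in this entry, so that value is an assumption carried over from the earlier ITMY loop work):

```python
import numpy as np
from scipy import signal

f_corner = 50.0                            # assumed corner frequency [Hz]
w_corner = 2 * np.pi * f_corner

# Old: 2nd-order ELP, 2 dB ripple, 20 dB stopband. New: 5th-order ELP, 2 dB, 80 dB.
filters = {'old 2nd-order ELP': signal.ellip(2, 2, 20, w_corner, btype='low', analog=True),
           'new 5th-order ELP': signal.ellip(5, 2, 80, w_corner, btype='low', analog=True)}

freqs_hz = np.array([50.0, 100.0, 200.0, 500.0])
for label, (b, a) in filters.items():
    _, h = signal.freqs(b, a, worN=2 * np.pi * freqs_hz)
    mags = ', '.join('%.0f Hz: %6.1f dB' % (f, 20 * np.log10(abs(hi)))
                     for f, hi in zip(freqs_hz, h))
    print('%s -> %s' % (label, mags))
```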
However, I was still unable to smoothly engage the de-whitening. The optic again kept getting kicked around each time I tried. So I tried engaging the de-whitening on the ITM with just the local damping loop on, but with the arm locked. This transition was successful, but not smooth. Looking at the transmon spot on the camera, every time I engage the whitening, the spot gets a sizeable kick (I will post a video shortly). In my ~10 trials this afternoon, the arm is able to stay locked when turning the whitening on, but always loses lock when turning the whitening off.
The issue here is certainly not the DAC rms railing. I had a brief discussion with Gabriele just now about this, and he suggested checking for some electronic voltage offset between the two paths (de-whitening engaged and bypassed). I also wonder if this has something to do with some latency between the actual analog switching of paths (done by a slow machine) and the fast computation by the real time model? To be investigated.
GV 170628 11pm: I guess this isn't a viable explanation, as the de-whitening switching is handled by one of the BIO cards, which is also handled by the fast FEs, so there isn't any question of latency.
With the Oplev loops disengaged, the initial kick given to the optic when engaging the whitening settles down in about a second. Once the ITM was stable again, I was able to turn on both Oplev loops without any problems. I did not investigate the new Oplev loop shape in detail, but compared to the original loop shape, there wasn't a significant difference in the TRY spectrum in this configuration (plot to follow). This remains to be done in a systematic manner.
Plots to support all of this to follow later in the evening.
Attachment #1: Video of ETMY transmission CCD while engaging whitening. I confirmed that this "glitch" happens while engaging the whitening on the UL channel. This is reminiscent of the Satellite Box glitches seen recently. In that case, the problem was resolved by replacing the high-current buffer in the offending channel. Perhaps something similar is the problem here?
Attachment #2: Summary of the ITMY UL coil output spectra under various conditions.
Attachment 1: ETMYT_1182669422.mp4
Attachment 2: ITMY_whitening_studies.pdf
13083 | Tue Jun 27 16:18:59 2017 | jigyasa | Update | Cameras | GigE camera at ETMX
The 50mm lens has arrived. (Delivered yesterday).
Also, the GigE has been wired and connected to the Martian. Image acquisition is possible with Pylon.
Quote:
The GigE can be connected to ethernet. The AR-coated 1064 nm f=50 mm lens can arrive any day now.
13084 | Tue Jun 27 18:47:49 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
The values generated from the script were analyzed and a 3D scatter plot in addition to a 2D map were plotted.
Yesterday, Rana pointed me to another method of collecting and analyzing the data. So I worked on the code today and have left a script (MC2rerun.py) running on Ottavia which should run overnight.
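For context, the 2D map can be produced from the recorded (pitch, yaw, transmission) triples with a few lines of matplotlib; the file name and column layout below are stand-ins for the actual output of the scan script:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in file name/format: columns of pitch offset, yaw offset, MC transmission
pit, yaw, trans = np.loadtxt('mc2_scan.txt', unpack=True)

fig, ax = plt.subplots()
sc = ax.scatter(pit, yaw, c=trans, cmap='viridis')   # color encodes transmission
ax.set_xlabel('MC2 pitch offset [counts]')
ax.set_ylabel('MC2 yaw offset [counts]')
fig.colorbar(sc, label='MC transmission (sum)')
fig.savefig('mc2_trans_map.png')
```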
Quote:
The script didn't run properly last night, due to an oversight of variable names! It's been started again and has been running for half an hour now.
13085 | Wed Jun 28 20:15:46 2017 | gautam | Update | General | c1iscex timing troubles
[Koji, gautam]
Here is a summary of what we did today to fix the timing issue on c1iscex. The power supply to the timing card in the X end expansion chassis was to blame.
- We prepared the Y-end expansion chassis for transport to the X end. To do so, we disconnected the following from the expansion chassis
- Cables going to the ADC/DAC adaptor boards
- Dolphin connector
- BIO connector
- RFM fiber
- Timing fiber
- We then carried the expansion chassis to the X end electronics rack. There we repeated the above steps for the X-end expansion chassis
- We swapped the X and Y end expansion chassis in the X end electronics rack. Powering the unit, we immediately saw the green lights on the front of the timing card turn on, suggesting that the Y-end expansion chassis works fine at the X end as well (as it should). To further confirm that all was well, we were able to successfully start all the RT models on c1iscex without running into any timing issues.
- Next, we decided to verify if the spare timing card is functional. So we swapped out the timing card in the expansion chassis brought over to the X end from the Y end with the spare. In this test too, all worked as expected. So at this stage, we concluded that
- There was nothing wrong with the fiber bringing the timing signal to the X end
- The Y-end expansion chassis works fine
- The spare timing card works fine.
- Then we decided to try the original X-end expansion chassis timing card in the Y-end expansion chassis. This test too was successful - so there was nothing wrong with any of the timing cards!
- Next, we decided to power the X-end expansion chassis with its original timing card, which was just verified to work fine. Surprisingly, the indicator lights on the timing card did not turn on.
- The timing card has 3 external connections
- A 40 pin IDE connector
- Power
- Fiber carrying the timing signal
- We went back to the Y-end expansion chassis, and checked that the indicator lights on the timing card turned on even when the 40 pin IDE connector was left unconnected (so the timing card just gets power and the timing signal).
- We concluded that the power supply in the X end expansion chassis was to blame. Indeed, when Koji jiggled the connector around a little, the indicator lights came on!
- The connection was diagnosed to be somewhat flaky - it employs the screw-in variety of terminal blocks, and one of the connections was quite loose - Koji was able to pull the cable out of the slot applying a little pressure.
- I replaced the cabling (swapped the wires for thicker gauge, more flexible variety), and re-tightened the terminal block screws. The connection was reasonably secure even when I applied some force. A quick test verified that the timing card was functional when the unit was powered.
- We then replaced the X and Y-end expansion chassis (complete with their original timing cards, so the spare is back in the CDS cabinet), in the racks. The models started up again without complaint, and the CDS overview screen is now in a good state [Attachment #1]. The arms are locked and aligned for maximum transmission now.
- There was some additional difficulty in getting the 40-pin IDE connector in on the Y-end expansion chassis. Looked like we had bent some of the pins on the timing board while pulling this cable out. But Koji was able to fix this with a screw driver. Care should be taken when disconnecting this cable in the future!
There were a few more flaky things in the Expansion chassis - the IDE connectors don't have "keys" that fix the orientation they should go in, and the whole timing card assembly is kind of difficult and not exactly secure. But for now, things are back to normal it seems.
Wouldn't it be nice if this fix also eliminates the mystery ETMX glitching problem? After all, seems like this flaky power supply has been a problem for a number of years. Let's keep an eye out. |
Attachment 1: CDS_status_28Jun2017.png
13086 | Thu Jun 29 00:13:08 2017 | Kaustubh | Update | Computer Scripts / Programs | Transfer Function Testing
In continuation of my previous posts, I have been working on evaluating the transfer function data. Recently, I calculated the correlation values between the real and imaginary parts of the transfer function. I have also written code to plot the transfer function data stream at each frequency in the Argand plane, just for reference. In addition, I did a few calculations to find the errors in magnitude and phase from the errors in the real and imaginary parts of the transfer function. More details on the process are in this git repository.
The following attachments have been added:
- The correlation plot at different frequencies. This data is for 100 data files.
- The test files used to produce the above plot, along with the plotting code and the text file containing the correlation values. (Most of the code is commented out, as that part wasn't needed for the recent changes.)
Conclusion:
Looking at the correlation values, it seems reasonable that the approximation of Gaussian-distributed real and imaginary parts actually holds, because the correlation values are mostly quite small. This can also be seen by studying the distribution of the transfer function in the Argand plane. The entire distribution appears to be somewhat, if not entirely, circular. Even when the ellipticity of the distribution seems high, it is still elliptical along the real and imaginary axes, i.e., the correlation between them is still low.
To Do:
- Use a better way to estimate the errors in magnitude and phase, as the method used right now is only valid in the linear approximation and gives insane values, totally out of bounds, when the magnitude is extremely small and the phase is varying wildly.
- Use the errors in the transfer function to estimate the coherence of the data at each frequency point - that is, basically produce a coherence vs. frequency plot showing how the coherence of the measurements varies with frequency.
In order to test the above again with an even larger data set, I am leaving a script running on Ottavia. It should take more than just the night (I estimate around 10-11 hours) if there are no problems.
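As an illustration of the statistics described above, the sketch below takes repeated complex transfer-function measurements at a single frequency, computes the Re/Im correlation, and propagates the Re/Im spreads to magnitude and phase errors using the linearized approximation mentioned in the to-do list. The input array is a stand-in for the measured data:

```python
import numpy as np

# Stand-in for N repeated measurements of the TF at one frequency (complex values)
np.random.seed(0)
tf = 0.5 + 0.2j + 0.01 * (np.random.randn(100) + 1j * np.random.randn(100))

re, im = tf.real, tf.imag
corr = np.corrcoef(re, im)[0, 1]              # Re/Im correlation coefficient

mean = tf.mean()
sig_re, sig_im = re.std(ddof=1), im.std(ddof=1)

# Linearized error propagation (breaks down when |TF| is very small)
mag = abs(mean)
sig_mag = np.sqrt((mean.real * sig_re)**2 + (mean.imag * sig_im)**2) / mag
sig_phase = np.sqrt((mean.imag * sig_re)**2 + (mean.real * sig_im)**2) / mag**2

print("corr(Re, Im) = %+.3f" % corr)
print("|TF|    = %.4f +/- %.4f" % (mag, sig_mag))
print("arg(TF) = %.4f +/- %.4f rad" % (np.angle(mean), sig_phase))
```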
Attachment 1: Correlation_Plot.pdf
Attachment 2: 2x100_Test_Files_and_Code_and_Correlation_Files.zip
13087 | Thu Jun 29 10:04:18 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
The script is being executed again, now.
Quote:
I worked on the code today and have left a script (MC2rerun.py) running on Ottavia which should run overnight.
13088 | Fri Jun 30 02:13:23 2017 | gautam | Update | General | DRMI locking attempt
Summary:
I attempted to re-lock the DRMI and try and realize some of the noise improvements we have identified. Summary elog, details to follow.
- Locked arms, ran ASS, centered OLs on ITMs and BS on their respective QPDs.
- Looked into changing the BS Oplev loop shape to match that of the ITMs - it looks like the analog electronics that take the QPD signals in for the BS Oplev is a little different, the 800Hz poles are absent. But I thought I had managed to do this successfully in that the error signal suppression improved and it didn't look like the performance of the modified loop was worse anywhere except possibly at the stack resonance of ~3Hz --- see Attachment #1 (will be rotated later). The TRX spectra before and after this modification also didn't raise any red flags.
- Re-aligned PRM - went to the AS table and centered beam on all REFL PDs
- Locked PRMI on carrier, ran MICH and AS dither alignment. PRC angular feedforward also seemed to work well.
- Re-aligned SRM, looked for DRMI locks - there was a brief lock of a couple of seconds, but after this, the BS behaviour changed dramatically.
Basically after this point, I was unable to repeat stuff I did earlier in the evening just a couple of hours ago. The single arm locks catch quickly, and seem stable over the hour timescale, but when I run the X arm dither, the BS PITCH loop starts to oscillate at ~0.1 Hz. Moreover, I am unable to acquire PRMI carrier lock. I must have changed a setting somewhere that I am not catching right now (although I've scripted most of these things for repeatability, so I am at a loss what I'm missing ). The only change I can think of is that I changed the BS Oplev loop shape. But I went back into the filter file archives and restored these to their original configuration. Hopefully I'll have better luck figuring this out tomorrow. |
Attachment 1: BS_OLmods.pdf
13089 | Fri Jun 30 11:08:26 2017 | jigyasa | Update | Cameras | GigE camera at ETMX
With Steve's help in getting the right depth of field for imaging and focusing on the test mass with the new AR-coated lens, Gautam's help with locking the arm, and trying my hand at adjusting the focus of the camera yesterday, we were able to get some images of the IR beam, with the green shutter on and off, at different exposures. Since the CCD is at an angle to the optic, the exposure time had to be increased significantly (and varied between 0.08 and 0.5 seconds) to capture bright images.
A few frames without the IR on and with the green shutter closed were captured.
These show the OSEM and the Oplev on the test mass.
Attachment 2: picture is taken through the dirty window.
Quote:
Also, the GigE has been wired and connected to the Martian. Image acquisition is possible with Pylon.
Attachment 1: PicturesETMX.pdf
Attachment 2: dirtyETMXwindow.jpg
13090 | Fri Jun 30 11:50:17 2017 | gautam | Update | General | DRMI locking attempt
Seems like the problem is actually with ITMX - the attached DV plots are for ITMX with just local damping loops on (no OLs), LR seems to be suspect.
I'm going to go squish cables and the usual sat. box voodoo, hopefully that settles it.
Quote:
Summary:
I attempted to re-lock the DRMI and try and realize some of the noise improvements we have identified. Summary elog, details to follow.
- Locked arms, ran ASS, centered OLs on ITMs and BS on their respective QPDs.
- Looked into changing the BS Oplev loop shape to match that of the ITMs - it looks like the analog electronics that take the QPD signals in for the BS Oplev is a little different, the 800Hz poles are absent. But I thought I had managed to do this successfully in that the error signal suppression improved and it didn't look like the performance of the modified loop was worse anywhere except possibly at the stack resonance of ~3Hz --- see Attachment #1 (will be rotated later). The TRX spectra before and after this modification also didn't raise any red flags.
- Re-aligned PRM - went to the AS table and centered beam on all REFL PDs
- Locked PRMI on carrier, ran MICH and AS dither alignment. PRC angular feedforward also seemed to work well.
- Re-aligned SRM, looked for DRMI locks - there was a brief lock of a couple of seconds, but after this, the BS behaviour changed dramatically.
Basically after this point, I was unable to repeat stuff I did earlier in the evening just a couple of hours ago. The single arm locks catch quickly, and seem stable over the hour timescale, but when I run the X arm dither, the BS PITCH loop starts to oscillate at ~0.1 Hz. Moreover, I am unable to acquire PRMI carrier lock. I must have changed a setting somewhere that I am not catching right now (although I've scripted most of these things for repeatability, so I am at a loss what I'm missing ). The only change I can think of is that I changed the BS Oplev loop shape. But I went back into the filter file archives and restored these to their original configuration. Hopefully I'll have better luck figuring this out tomorrow.
Attachment 1: ITMX_glitchy.png
13091 | Fri Jun 30 15:25:19 2017 | jigyasa | Update | Cameras | GigE camera at ETMX
All thanks to Steve, we cleaned the view port on the ETMX on which the camera is installed, and with a little fine tuning of the focus of the camera, here's a really good image of the beam spot at 6 and 14 ms.
Attachment 1: Image__2017-06-30__15-10-05.pdf
Attachment 2: 14ms.pdf
13092 | Fri Jun 30 16:03:54 2017 | jigyasa | Update | Cameras | GigE camera at ETMX
Quote:
All thanks to Steve, we cleaned the view port on the ETMX on which the camera is installed, and with a little fine tuning of the focus of the camera, here's a really good image of the beam spot at 6 and 14 ms.
Attachment 1: 14msexposure.png
13093 | Fri Jun 30 22:28:27 2017 | gautam | Update | General | DRMI re-locked
Summary:
Reverted to old settings and tried to reproduce the DRMI lock with settings as close as possible to those used in May this year. Tonight, I was successful in getting a couple of ~10 min DRMI 1f locks. Now I can go ahead and try to reduce the noise.
I am not attempting a full characterization tonight, but the important changes since the May locks are in the de-whitening boards and coil driver boards. I did not attempt to engage the coil-dewhitening, but the PD whitening works fine.
As a quick check, I tested the hypothesis that the BS OL loop A2L coupling dominates between ~10-50Hz. The attached control signal spectra [Attachment #2] supports this hypothesis. Now to actually change the loop shape.
I've centered Oplevs of all vertex optics, and also the beams on the REFL and AS PDs. The ITMs and BS have been repeatedly aligned since re-installing their respective coil driver electronics, but the SRM alignment needed some adjustment of the bias sliders.
Full characterization to follow. Some things to check:
- Investigate and fix the suspect X-arm ASS loop
- Is there too much power on the AS110 PD post Oct2016 vent? Is the PD saturating?
Lesson learnt: Don't try and change too many things at once!
GV July 5 1130am: Looks like the MICH loop gain wasn't set correctly when I took the attached spectra, seems like the bump around 300Hz was caused by this. On later locks, this feature wasn't present. |
Attachment 1: DRMI_relocked.png
Attachment 2: MICH_OL.pdf
13094 | Sat Jul 1 14:27:00 2017 | Koji | Update | General | DRMI re-locked
Basically we use the arm cavities as the reference of the beam alignment. The incident beam is aligned such that the ITMY angle dither is minimized (at least at the dither freq).
This means that we have no capability to adjust the spot positions on the PRM, SRM, BS, and ITMX optics.
We are still able to minimize A2L by adding intentional asymmetry to the coil actuators.
13095 | Wed Jul 5 10:23:18 2017 | Steve | Update | safety | liquid nitrogen boil off
The liquid nitrogen container has a pressure relief valve set to 35 PSI. This valve will open periodically when the container holds LN2.
The exiting gas is very cold and can cause burns, so it should not hit your eyes or skin directly. Point this valve into the corner.
Leave the entry door open so that the nitrogen concentration cannot build up.
Oxygen deficiency
Nitrogen can displace oxygen in the air, reducing the percentage of oxygen to below safe levels. Because the brain needs a continuous supply of oxygen to remain active, lack of oxygen prevents the brain from functioning properly, and it shuts down.
Being odorless, colorless, tasteless, and nonirritating, nitrogen has no properties that can warn people of its presence. Inhalation of excessive amounts of nitrogen can cause dizziness, nausea, vomiting, loss of consciousness, and death.
Attachment 1: liqued_nitrogen_boil_off.jpg
13096 | Wed Jul 5 16:09:34 2017 | gautam | Update | CDS | slow machine bootfest
Reboots for c1susaux, c1iscaux today.
13097 | Wed Jul 5 19:10:36 2017 | gautam | Update | General | NB code checkout - updated
I've been making NBs on my laptop, and thought I would get the copy under version control up to date, since I've been negligent in doing so.
The code resides in /ligo/svncommon/NoiseBudget, which as a whole is a git directory. For neatness, most of Evan's original code has been put into the sub-directory /ligo/svncommon/NoiseBudget/H1NB/, while my 40m-specific NB adaptations of it are in the sub-directory /ligo/svncommon/NoiseBudget/NB40. So to make a 40m noise budget, you would have to clone the repository, edit the parameter file accordingly, and run, for example, python C1NB.py C1NB_2017_04_30.py. I've tested that it works in its current form. I had to install a font package to make the code run (with sudo apt-get install tex-gyre), and also had to comment out calls to GwPy (it kept throwing up an error related to the package "lal"; I opted against trying to debug this problem as I am using nds2 instead of GwPy to get the time series data anyway).
There are a few things I'd like to implement in the NB like sub-budgets, I will make a tagged commit once it is in a slightly neater state. But the existing infrastructure should allow making of NBs from the control room workstations now.
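For reference, pulling the IFO time series into the NB with the nds2 client looks roughly like the sketch below; the server name, port, channel, and GPS times are assumptions and should be replaced with the 40m's actual NDS server and the channels defined in the parameter file:

```python
import numpy as np
import nds2

# Assumed NDS server/port and channel - substitute the 40m's actual values
conn = nds2.connection('nds40.ligo.caltech.edu', 31200)

gps_start, gps_stop = 1167228000, 1167228600      # example 10-minute stretch
channels = ['C1:LSC-DARM_IN1_DQ']                  # placeholder channel name

buffers = conn.fetch(gps_start, gps_stop, channels)
data = np.asarray(buffers[0].data)
fs = buffers[0].channel.sample_rate
print("fetched %d samples at %.0f Hz" % (len(data), fs))
```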
Quote:
[evan, gautam]
We spent some time trying to get the noise-budgeting code running today. I guess eventually we want this to be usable on the workstations so we cloned the git repo into /ligo/svncommon. The main objective was to see if we had all the dependencies for getting this code running already installed. The way Evan has set the code up is with a bunch of dictionaries for each of the noise curves we are interested in - so we just commented out everything that required real IFO data. We also commented out all the gwpy stuff, since (if I remember right) we want to be using nds2 to get the data.
Running the code with just the gwinc curves produces the plots it is supposed to, so it looks like we have all the dependencies required. It now remains to integrate actual IFO data, I will try and set up the infrastructure for this using the archived frame data from the 2016 DRFPMI locks..
13098 | Thu Jul 6 11:58:28 2017 | jigyasa | Update | Cameras | HDR images of ETMX
I captured a few images of the beam spot on ETMX at 5ms, 10ms, 14ms, 50ms, 100ms, 500ms, 1000ms exposure and ran them through my python script for HDR images. Here's what I obtained.
The resulting image is an improvement over the highly saturated images at say, 500ms and 1 second exposures.
I also included a colormapped version of the image.
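For context, one common way to fuse such an exposure stack into an HDR radiance map is OpenCV's Debevec calibration/merge. The sketch below is not the script used above - the file names and exposure times are stand-ins - but it illustrates the approach:

```python
import cv2
import numpy as np

# Stand-in file names and exposure times [s]; substitute the actual captures
files = ['etmx_5ms.png', 'etmx_10ms.png', 'etmx_14ms.png', 'etmx_50ms.png',
         'etmx_100ms.png', 'etmx_500ms.png', 'etmx_1000ms.png']
times = np.array([0.005, 0.010, 0.014, 0.050, 0.100, 0.500, 1.000], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge the stack into a radiance map
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)   # 32-bit float

# Simple tonemap for display; the raw 'hdr' array keeps the full dynamic range
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite('etmx_hdr_preview.png', np.clip(ldr * 255, 0, 255).astype('uint8'))
```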
Attachment 1: ETMXHDRcolormap.png
Attachment 2: ETMXHDRimage.png
13100 | Fri Jul 7 14:34:27 2017 | rana | Update | Cameras | HDR images of ETMX
i wonder how 'HDR' these images really are. is there a quantitative way to check that we are really getting more bits? also, how many bits does the PNG format allow for monochrome images? i worry that these elog images are already lossy.
13101 | Sat Jul 8 17:09:50 2017 | gautam | Update | General | ETMY TRANS QPD anomaly
About 2 weeks ago, I noticed some odd behaviour of the LSC TRY data stream. Its DC value seems to be drifting ~10x more than TRX. Both signals come from the transmission QPDs. At the time, we were dealing with various CDS FE issues but things have been stable on that end for the last two weeks, so I looked into this a bit more today. It seems like one particular channel is bad - Quadrant 4 of the ETMY TRANS QPD. Furthermore, there is a bump around 150Hz, and some features above 2kHz, that are only present for the ETMY channels and not the ETMX ones.
Since these spectra were taken with the PSL shutter closed and all the lab room lights off, it would suggest something is wrong in the electronics - to be investigated.
The drift in TRY can be as large as 0.3 (with 1.0 being the transmitted power in the single arm lock). This seems unusually large, indeed we trigger the arm LSC loops when TRY > 0.3. Attachment #2 shows the second trend of the TRX and TRY 16Hz EPICS channels for 1 day. In the last 12 hours or so, I had left the LSC master switch OFF, but the large drift of the DC value of TRY is clearly visible.
In the short term, we can use the high-gain THORLABS PD for TRY monitoring. |
Attachment 1: ETMY_QPD.pdf
Attachment 2: ETMY_QPD.png
13102 | Sun Jul 9 08:58:07 2017 | rana | Update | General | ETMY TRANS QPD anomaly
Indeed, the whole point of the high/low gain setup is to never use the QPDs for the single arm work. Only use the high gain Thorlabs PD and then the switchover code uses the QPD once the arm powers are >5.
I don't know how the operation procedure went so higgledy piggledy. |
13103 | Mon Jul 10 09:49:02 2017 | gautam | Update | General | All FEs down
Attachment #1: State of CDS overview screen as of 9.30AM today morning when I came in.
Looks like there may have been a power glitch, although judging by the wall StripTool traces, if there was one, it happened more than 8 hours ago. FB is down at the moment, so I can't trend the data to find out when this happened.
All FEs and FB are unreachable from the control room workstations, but Megatron, Optimus and Chiara are all ssh-able. The latter reports an uptime of 704 days, so all seems okay with its UPS. Slow machines are all responding to ping as well as telnet.
Recovery process to begin now. Hopefully it isn't as complicated as the most recent effort [FAMOUS LAST WORDS] |
Attachment 1: CDS_down_10Jul2017.png
13104 | Mon Jul 10 11:20:20 2017 | gautam | Update | General | All FEs down
I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".
Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.
In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling. |
13105 | Mon Jul 10 17:13:21 2017 | jigyasa | Update | Computer Scripts / Programs | Capture image without pylon GUI
Over the day, I have been working on a C++ program to interface with Pylon to capture images and reduce dependence on the Pylon GUI. The program uses the Pylon header files along with opencv headers. While ultimately a wrapper in python may be developed for the program, the current C++ program at,
/users/jigyasa/GigEcode/Grab/Grab.cpp when compiled as
g++ -Wl,--enable-new-dtags -Wl,-rpath,/opt/pylon5/lib64 -o Grab Grab.o -L/opt/pylon5/lib64 -Wl,-E -lpylonbase -lpylonutility -lGenApi_gcc_v3_0_Basler_pylon_v5_0 -lGCBase_gcc_v3_0_Basler_pylon_v5_0 `pkg-config opencv --cflags --libs`
returns an executable file named Grab which can be executed as ./Grab
This captures one image from the camera and displays it; additionally, it also displays the gray value of the first pixel.
I am working on adding more utility to the program, such as manually adjusting exposure and gain, and also on the python wrapper (Cython has been installed locally on Ottavia for this purpose)!
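For comparison, the same single-frame grab can be done directly from Python with the pypylon bindings (an alternative to writing a Cython wrapper around the C++ code). This is a sketch under the assumption that pypylon is installed and that the first camera found on the network is the intended one:

```python
from pypylon import pylon

# Connect to the first Basler camera found by the transport layer factory
tlf = pylon.TlFactory.GetInstance()
camera = pylon.InstantCamera(tlf.CreateFirstDevice())
camera.Open()

result = camera.GrabOne(5000)            # timeout in ms
if result.GrabSucceeded():
    frame = result.Array                 # numpy array of pixel values
    print("image shape:", frame.shape)
    print("gray value of first pixel:", frame[0, 0])
camera.Close()
```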
13106 | Mon Jul 10 17:46:26 2017 | gautam | Update | General | All FEs down
A bit more digging on the diagnostics page of the RAID array reveals that the two power supplies actually failed on Jun 2 2017 at 10:21:00. Not surprisingly, this was the date and approximate time of the last major power glitch we experienced. Apart from this, the only other error listed on the diagnostics page is "Reading Error" on "IDE CHANNEL 2", but these errors precede the power supply failure.
Perhaps the power supplies are not really damaged, and the unit has just been in some funky state since the power glitch. After discussing with Jamie, I think it should be safe to power cycle the Jetstor RAID array once the FB machine has been powered down. Perhaps this will bring back one or both of the faulty power supplies. If not, we may have to get new ones.
The problem with FB may or may not be related to the state of the Jetstor RAID array. It is unclear to me at what point during the boot process we are getting stuck. It may be that because the RAID disk is in some funky state, the boot process is getting disrupted.
Quote:
I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".
Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.
In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling.
13107 | Mon Jul 10 19:15:21 2017 | gautam | Update | General | All FEs down
The Jetstor RAID array is back in its nominal state now, according to the web diagnostics page. I did the following:
- Powered down the FB machine - to avoid messing around with the RAID array while the disks are potentially mounted.
- Turned off all power switches on the back of the Jetstor unit - there were 4 of them, all of them were toggled to the "0" position.
- Disconnected all power cords from the back of the Jetstor unit - there were 3 of them.
- Reconnected the power cords, turned the power switches back on to their "1" position.
After a couple of minutes, the front LCD display seemed to indicate that it had finished running some internal checks. The messages indicating failure of the power units, which were previously constantly displayed on the front LCD panel, were no longer seen. Going back to the control room and checking the web diagnostics page, everything seemed back to normal.
However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now. |
13108 | Mon Jul 10 21:03:48 2017 | jamie | Update | General | All FEs down
Quote:
However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now.
It's possible the fb bios got into a weird state. fb definitely has its own local boot disk (*not* diskless boot). Try to get to the BIOS during boot and make sure it's pointing to its local disk to boot from.
If that's not the problem, then it's also possible that fb's boot disk got fried in the power glitch. That would suck, since we'd have to rebuild the disk. If it does seem to be a problem with the boot disk then we can do some invasive poking to see if we can figure out what's up with the disk before rebuilding. |
13110 | Mon Jul 10 22:07:35 2017 | Koji | Update | General | All FEs down
I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. The OK indicator of the disk turned solid green almost immediately, and it was recognized by the BIOS in the boot section as "Hard Disk". On the contrary, the original disk in slot #0 has its "OK" indicator flashing continuously, and the BIOS can't find the hard disk.
13111 | Tue Jul 11 15:03:55 2017 | gautam | Update | General | All FEs down
Jamie suggested verifying that the problem is indeed with the disk and not with the controller, so I tried switching the original boot disk to Slot #1 (from Slot #0 where it normally resides), but the same problem persists - the green "OK" indicator light keeps flashing even in Slot #1, which was verified to be a working slot using the spare 2.5 inch disk. So I think it is reasonable to conclude that the problem is with the boot disk itself.
The disk is a Seagate Savvio 10K.2 146GB disk. The datasheet doesn't explicitly suggest any recovery options, but Table 24 on page 54 suggests that a blinking LED means that the disk is "spinning up or spinning down". Is this indicative of any particular failure mode? Any ideas on how to go about recovery? Is it even possible to access the data on the disk if it doesn't spin up to the nominal operating speed?
Quote:
I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. The OK indicator of the disk turned solid green almost immediately, and it was recognized by the BIOS in the boot section as "Hard Disk". On the contrary, the original disk in slot #0 has its "OK" indicator flashing continuously, and the BIOS can't find the hard disk.
13112 | Tue Jul 11 15:12:57 2017 | Koji | Update | General | All FEs down
If we have a SATA/USB adapter, we can test whether the disk is still responding or not. If it is still responding, we can probably salvage the files.
Chiara used to have a 2.5" disk that is connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), so we can borrow the USB/SATA interface from Chiara.
If the disk is completely gone, we need to rebuild the disk according to Jamie, and I don't know how to do that. (Don't we have any spare copy?)