40m Log, Page 292 of 339
ID Date Author Type Category Subject
  2389   Thu Dec 10 17:05:21 2009  Alberto  Configuration  LSC  166 MHz LO SMA-to-Heliax connection repaired

I replaced the SMA end connector for the 166 MHz Local Oscillator signal that goes to the back of the flange in the 1Y2 rack. The connector got damaged when it twisted as I was tightening the N connector of the Heliax cable on the front panel.

  2388   Thu Dec 10 16:51:35 2009  steve  Update  VAC  pumpdown will start tomorrow morning

The vacuum system is at 760 torr. All chambers have their doors on, and their annuli are pumped down.

The PSL output shutter is still closed. We are fully prepared to start the slow pumpdown tomorrow.

The plan is to reach 1 torr in ~6 hrs, without creating a sandstorm.

  2387   Thu Dec 10 15:18:55 2009  Jenne  Update  VAC  seisBLRMS

last 20 days - including the pounding from next door

Attachment 1: Untitled.png
Untitled.png
  2386   Thu Dec 10 13:50:02 2009  Jenne  Update  VAC  All doors on, ready to pump

[Everybody:  Alberto, Kiwamu, Joe, Koji, Steve, Bob, Jenne]

The last heavy door was put on after lunch.  We're now ready to pump.

  2385   Thu Dec 10 13:13:08 2009  Jenne  Update  Computers  fb40m backup restarted

The frame builder was power cycled during the morning bootfest.  I have restarted the backup script once more.

  2384   Thu Dec 10 13:10:25 2009  Alberto  Configuration  LSC  166 LO Disconnected

I temporarily disconnected the Heliax cable that brings the 166MHz LO to the LSC rack.

I'm doing a couple of measurement and I'll put it back in as soon as I'm done.

  2383   Thu Dec 10 10:31:18 2009  Jenne  Update  Computers  Front-ends down

Quote:

All the front ends are back up.  

Quote:

Quote:

I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen, but it is in a "bad" status.

I'll go for a big boot fest.

Since I wanted to understand once and for all which system is at fault when these situations occur, I tried to reboot the computers one by one.

1) I reset the RFM Network by pushing the reset button on the bypass switch in the 1Y7 rack. Then I tried to bring C1SOSVME up by power-cycling and restarting it as in the procedure in the wiki. I repeated it a second time, but it didn't work. At some point in the restarting process I got the error message "No response from EPICS".
2) I also tried rebooting only C1DCUEPICS, but it didn't work: I kept getting the same response when restarting C1SOSVME.
3) I tried to reboot C0DAQCTRL and C1DCU1 by power-cycling their crate; power-cycled and restarted C1SOSVME. Nada. Same response from C1SOSVME.
4) I restarted the framebuilder; power-cycled and restarted C1SOSVME. Nothing. Same response from C1SOSVME.
5) I restarted the framebuilder, then rebooted C0DAQCTRL and C1DCU, then power-cycled and restarted C1SOSVME. Niente. Same response from C1SOSVME.
 
The following is the so-called "Nuclear Option", the only solution that has so far proven to work in these circumstances. Execute the following steps in the order they are listed, waiting for each step to complete before moving on to the next.
0) Switch off: the frame builder, the C0DAQCTRL and C1DCU crate, C1DCUEPICS
1) turn on the frame builder
2) reset the RFM Network switch on 1Y7 (it's not clear whether this step is really necessary, but it costs nothing)
3) turn on C1DCUEPICS
4) turn on the C0DAQCTRL and C1DCU crate
 
One other possibility remains to be explored to avoid the Nuclear Option. And that is to just try to reset both RFM Network switches: the one in 1Y7 and the one in 1Y6.

 

 I burtrestored all the snapshots to Dec 9 2009 at 18:00.

  2382   Thu Dec 10 10:01:16 2009  Jenne  Update  Computers  Front-ends down

All the front ends are back up.  

Quote:

Quote:

I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen, but it is in a "bad" status.

I'll go for a big boot fest.

Since I wanted to understand once and for all which system is at fault when these situations occur, I tried to reboot the computers one by one.

1) I reset the RFM Network by pushing the reset button on the bypass switch in the 1Y7 rack. Then I tried to bring C1SOSVME up by power-cycling and restarting it as in the procedure in the wiki. I repeated it a second time, but it didn't work. At some point in the restarting process I got the error message "No response from EPICS".
2) I also tried rebooting only C1DCUEPICS, but it didn't work: I kept getting the same response when restarting C1SOSVME.
3) I tried to reboot C0DAQCTRL and C1DCU1 by power-cycling their crate; power-cycled and restarted C1SOSVME. Nada. Same response from C1SOSVME.
4) I restarted the framebuilder; power-cycled and restarted C1SOSVME. Nothing. Same response from C1SOSVME.
5) I restarted the framebuilder, then rebooted C0DAQCTRL and C1DCU, then power-cycled and restarted C1SOSVME. Niente. Same response from C1SOSVME.
 
The following is the so-called "Nuclear Option", the only solution that has so far proven to work in these circumstances. Execute the following steps in the order they are listed, waiting for each step to complete before moving on to the next.
0) Switch off: the frame builder, the C0DAQCTRL and C1DCU crate, C1DCUEPICS
1) turn on the frame builder
2) reset the RFM Network switch on 1Y7 (it's not clear whether this step is really necessary, but it costs nothing)
3) turn on C1DCUEPICS
4) turn on the C0DAQCTRL and C1DCU crate
 
One other possibility remains to be explored to avoid the Nuclear Option. And that is to just try to reset both RFM Network switches: the one in 1Y7 and the one in 1Y6.

 

  2381   Thu Dec 10 09:56:32 2009  Koji  Update  PSL  RCPID settings not saved

Note: The set point C1:PSL-FSS_RCPID_SETPOINT is 37.0 on C1PSL_FSS_RCPID.adl.

Now the temp is recovering at full speed. At some point we will have to restore the value of the FSS SLOW DC, as the temp change drags it up.

Quote:

Koji, Jenne, Rob

We found that the RCPID servo "setpoint" was not in the relevant saverestore.req file, and so when c1psl got rebooted earlier this week, this setting was left at zero.  Thus, the RC got a bit chilly over the last few days.  This channel has been added. 

Also, RCPID channels have been added (manually) to conlog_channels. 

 

Attachment 1: RC_TEMP.png
RC_TEMP.png
  2379   Thu Dec 10 09:51:06 2009  rob  Update  PSL  RCPID settings not saved

Koji, Jenne, Rob

 

We found that the RCPID servo "setpoint" was not in the relevant saverestore.req file, and so when c1psl got rebooted earlier this week, this setting was left at zero.  Thus, the RC got a bit chilly over the last few days.  This channel has been added. 

 

Also, RCPID channels have been added (manually) to conlog_channels. 
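The failure mode described above (a servo left running with a zero setpoint) can be sketched with a toy loop. This is only an illustration: the proportional gain and the one-line "plant" below are invented, and the real RCPID servo is a full PID running in EPICS, not this code. The 37.0 C setpoint is the value quoted in entry 2381.

```python
def servo_step(temp, setpoint, gain=0.1):
    """One step of a toy proportional temperature loop.
    The real RCPID servo is a full PID; this gain is invented."""
    return temp + gain * (setpoint - temp)

# Cavity sitting at the intended 37.0 C setpoint...
temp = 37.0
# ...but after the reboot the servo reads setpoint = 0, so it steadily
# drives the cavity temperature down -- "the RC got a bit chilly".
for _ in range(50):
    temp = servo_step(temp, 0.0)
print(f"temp after 50 steps: {temp:.2f}")
```

The point is simply that a servo never "notices" a wrong setpoint: it dutifully regulates to whatever value it holds, which is why the setpoint channel belongs in saverestore.req.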

  2378   Thu Dec 10 08:50:33 2009  Alberto  Update  Computers  Front-ends down

Quote:

I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen, but it is in a "bad" status.

I'll go for a big boot fest.

Since I wanted to single out the faulty system when these situations occur, I tried to reboot the computers one by one.

1) I reset the RFM Network by pushing the reset button on the bypass switch in the 1Y7 rack. Then I tried to bring C1SOSVME up by power-cycling and restarting it as in the procedure in the wiki. I repeated it a second time, but it didn't work. At some point in the restarting process I got the error message "No response from EPICS".
2) I also tried rebooting only C1DCUEPICS, but it didn't work: I kept getting the same response when restarting C1SOSVME.
3) I tried to reboot C0DAQCTRL and C1DCU1 by power-cycling their crate; power-cycled and restarted C1SOSVME. Nada. Same response from C1SOSVME.
4) I restarted the framebuilder; power-cycled and restarted C1SOSVME. Nothing. Same response from C1SOSVME.
5) I restarted the framebuilder, then rebooted C0DAQCTRL and C1DCU, then power-cycled and restarted C1SOSVME. Niente. Same response from C1SOSVME.
 
Then I did the so-called "Nuclear Option", the only solution that had so far proven to work in these circumstances. I executed the steps in the order they are listed, waiting for each step to complete before moving on to the next.
0) Switch off: the frame builder, the C0DAQCTRL and C1DCU crate, C1DCUEPICS
1) turn on the frame builder
2) reset the RFM Network switch on 1Y7 (it's not clear whether this step is really necessary, but it costs nothing)
3) turn on C1DCUEPICS
4) turn on the C0DAQCTRL and C1DCU crate
5) power-cycle and restart the single front-ends
6) burt-restore all the snapshots
 
When I tried to restart C1SOSVME by power-cycling it, I still got the same response: "No response from EPICS". But after I reset C1SUSVME1 and C1SUSVME2, I was able to restart C1SOSVME.
 
It turned out that while I was checking the efficacy of the steps of the Grand Reboot, trying to single out the crucial one, I was getting fooled by C1SOSVME's status. C1SOSVME was stuck, hanging on C1SUSVME1 and C1SUSVME2.
 
So the Nuclear Option is still unproven as the only working procedure. It might not be necessary.
 
Maybe restarting BOTH RFM switches, the one in 1Y7 and the one in 1Y6, would be sufficient. Or maybe just power-cycling C0DAQCTRL and C1DCU1 is sufficient. This has to be confirmed next time we run into the same problem.
  2377   Thu Dec 10 08:43:25 2009  steve  Frogs  Environment  diesel fumes are less

Quote:

 Instead of doing RCG stuff, I went to Millikan to work on data analysis as I couldn't stand the fumes from the construction.  (this morning, 8am) 

Diesel fumes are pumped away from the control room AC intakes with the help of newly installed reflector boxes on the CES wall fans. See #2272.

Attachment 1: P1050817.JPG
P1050817.JPG
  2376   Thu Dec 10 08:40:12 2009  Alberto  Update  Computers  Front-ends down

I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen, but it is in a "bad" status.

I'll go for a big boot fest.

  2375   Thu Dec 10 00:46:15 2009  Koji  Update  SUS  Re: free swinging spectra of ETMY and ITMX

Well, I get the point now. It could be either seismic or a change in the suspension Q.

The pendulum memorizes its own state for a period of ~Q T_pend (T_pend is the period of the pendulum).
If the pendulum Q is very high (>10^4), once the pendulum is excited, the effect of the excitation can last many hours.

On the other hand, in our current case, we turned on the damping once, and then turned off the damping.
Again, it takes ~Q T_pend for the motion to settle into equilibrium.

In those cases, the peak height is not yet in equilibrium, and can be higher or lower than expected.

So, my suggestion is:
Track the peak height over a long time scale (~10 hrs) and compare the previous one with the current one.
This may indicate whether it is in equilibrium or not, and where the equilibrium is.
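The ~Q T_pend timescale above can be made concrete with a short calculation. The Q value and mode period below are hypothetical examples (not measured values for these optics), and the 1/e decay-time formula agrees with Q T_pend only up to a factor of order unity:

```python
import math

def memory_time(Q, T_pend):
    """1/e energy decay time of a resonator, tau = Q * T_pend / (2*pi).
    This is the 'memory' timescale -- ~Q * T_pend up to a factor of order unity."""
    return Q * T_pend / (2.0 * math.pi)

# Hypothetical numbers: a 0.5 Hz pendulum mode (T_pend = 2 s) with Q = 1e5
tau = memory_time(1e5, 2.0)
print(f"tau ~ {tau / 3600.0:.1f} hours")  # many hours, as argued above
```

So a free-swinging spectrum taken within a few hours of turning the damping off is sampling a transient, not the equilibrium excitation.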

Quote:

If such variation of the peak heights is caused by seismic activity, it means the seismic level changed by several tens of times. That sounds large to me.

 

  2374   Wed Dec 9 21:09:28 2009  kiwamu  Update  SUS  Re: free swinging spectra of ETMY and ITMX

Okay, now the data are attached. At that time I just wanted to say the following.

- - -

In the free-swinging spectra around ~0.5 Hz, you can see two resonances, which come from the pitch and yaw modes of the pendulum.

Note that the vertical and horizontal axes are adjusted to be the same for the two plots in the figure.

And I found that

* the floor levels are almost the same (within a factor of about 1.5 or so) compared to the past.

* however, the peak heights of the two resonances are several tens of times smaller than in the past.

* this tendency is seen in all of the data (ITMX, ETMX, ETMY).

If such variation of the peak heights is caused by seismic activity, it means the seismic level changed by several tens of times. That sounds large to me.
 

Quote:

Where is the plot for the trend?
It can be either something very important or just a daydream of yours.
We can't say anything before we see the data.

Quote:

By the way, I found a trend which can be seen in all of the data taken today and yesterday.

The resonances of pitch and yaw around 0.5 Hz look like they are being damped, because their heights above the floor have become lower than in the past.

I don't know what is going on, but it is interesting because you can see the trend in all of the data.

 

 

Attachment 1: Pitch-Yaw_modes.png
Pitch-Yaw_modes.png
  2373   Wed Dec 9 18:01:06 2009  Koji  Update  COC  Wiping finished

[Kiwamu, Jenne, Alberto, Steve, Bob, Koji]

We finished wiping the four test masses without any trouble. ITMY looked a little bit dusty, but not as much as ITMX did.
We checked the surface of ITMX again, since we worked at the vertex a lot today. It still looked clean.

We closed the light doors. The suspensions are left free tonight in order to check their behavior.
Tomorrow morning from 9 AM, we will replace the doors with the heavy ones.

  2372   Wed Dec 9 17:51:03 2009  kiwamu  Update  SUS  watchdogs

Please do not touch the watchdogs of any SUS except for the MCs,

because I am going to measure the free-swinging spectra of the ITMs, ETMs, BS, PRM, and SRM tonight.

Today is a good chance to summarize those data under atmospheric pressure.

Thank you.

 

  2371   Wed Dec 9 10:53:41 2009  josephb  Update  Cameras  Camera client wasn't able to talk to server on port 5010, reboot fixed it.

I finally got around to taking a look at the digital camera setup today. Rob had complained that the client had stopped working on Rosalba.

After watching the code start up without complaint, yet not produce any window output, it looked like a network problem. I tried rebooting Rosalba, but that didn't fix anything.

Using netstat -an, I looked for port 5010 on both rosalba and ottavia, since that is the port being used by the camera. Ottavia reported 6 established connections after Rosalba had rebooted (rosalba is 131.215.113.103). I can only presume 6 instances of the camera code had somehow shut down in such a way that they did not close their connections.

[root@ottavia controls]#netstat -an | grep 5010
tcp        0      0 0.0.0.0:5010                0.0.0.0:*                   LISTEN     
tcp        0      0 131.215.113.97:5010         131.215.113.103:57366       ESTABLISHED
tcp        0      0 131.215.113.97:5010         131.215.113.103:58417       ESTABLISHED
tcp        1      0 131.215.113.97:46459        131.215.113.97:5010         CLOSE_WAIT 
tcp        0      0 131.215.113.97:5010         131.215.113.103:57211       ESTABLISHED
tcp        0      0 131.215.113.97:5010         131.215.113.103:57300       ESTABLISHED
tcp        0      0 131.215.113.97:5010         131.215.113.103:57299       ESTABLISHED
tcp        0      0 131.215.113.97:5010         131.215.113.103:57315       ESTABLISHED

 

I switched the code to use port 5022, which worked fine. However, I'm not sure what would have caused the original connection-closure failures, as I tested several close methods (including the kill command on the server end used by the MEDM screen), and none seemed to generate this broken connection state. I rebooted Ottavia, and this seemed to fix the connections and allowed port 5010 to work. I also tried creating 10 connections, which all seemed to run fine simultaneously. So it's not someone overloading that port with too many connections that caused the problem. It's like the port stopped working somehow, which froze the connection status, but how or why I don't know at this point.
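For reference, the kind of server-side socket hygiene that tends to prevent this stale-connection pileup can be sketched as below. This is a generic illustration, not the actual camera server code; the port matches the log, but the payload and function name are placeholders.

```python
import socket

def serve_once(port=5010):
    """Accept one client, send it a payload, and clean up unconditionally."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # rebind fast after restart
    srv.bind(("127.0.0.1", port))
    srv.listen(5)
    conn, _addr = srv.accept()
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # detect dead peers eventually
    try:
        conn.sendall(b"frame data placeholder")
    finally:
        conn.close()  # always close, even on error, so no ESTABLISHED sockets linger
        srv.close()
```

Closing in a finally block guards against handler exceptions leaving half-open connections, and TCP keepalive lets the OS eventually reap connections whose peer vanished (e.g. a rebooted client), which is one plausible route to the ESTABLISHED entries in the netstat listing.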

  2370   Wed Dec 9 09:07:32 2009  steve  Update  PEM  high seismic activity

The construction activity is shaking the tables in the control room. The compactor, a large remote-controlled jackhammer, is at the bottom of the 16-17 ft deep hole 15 ft east of ITMY in the CES bay. The suspensions are holding OK. PRM, MC1, and MC3 are affected the most.

Attachment 1: seis24d.png
seis24d.png
Attachment 2: seis4h.png
seis4h.png
Attachment 3: P1050833.JPG
P1050833.JPG
Attachment 4: P1050836.JPG
P1050836.JPG
  2369   Wed Dec 9 00:23:28 2009  Koji  Update  SUS  free swinging spectra of ETMY and ITMX

Where is the plot for the trend?
It can be either something very important or just a daydream of yours.
We can't say anything before we see the data.

We would like to see it if you think this is interesting.

... Just a naive guess: is it just because the seismic level got quiet in the night?

 

P.S.

You seem to be consistently confusing some words like damping, Q, and peak height.

  • Q is defined by the transfer function of the system (= pendulum).
     
  • Damping (either active or passive) makes the Q lower.
     
  • The peak height of the resonance in the spectrum dy is determined by the disturbance dx and the transfer function H (=y/x).

dy = H dx

As the damping makes the Q lower, the peak height also gets lowered by the damping.
But if the disturbance gets smaller, the peak height can become small even without any change in the damping or the Q.
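The relation dy = H dx above can be sketched numerically with the standard damped-oscillator transfer function. The Q values, frequencies, and disturbance level below are illustrative only, not measurements:

```python
import math

def H_mag(f, f0, Q):
    """|H(f)| for a simple damped harmonic oscillator (disturbance in -> motion out).
    At resonance |H(f0)| = Q, so the peak height dy = |H| * dx scales with Q."""
    return f0 ** 2 / math.sqrt((f0 ** 2 - f ** 2) ** 2 + (f * f0 / Q) ** 2)

f0 = 0.5      # ~0.5 Hz pitch/yaw mode, per the entries above
dx = 1.0e-8   # some disturbance level (arbitrary units)

# The same factor-of-30 drop in the peak, from two different causes:
dy_lower_Q  = H_mag(f0, f0, Q=1e4 / 30) * dx     # damping lowered the Q
dy_quiet_dx = H_mag(f0, f0, Q=1e4) * (dx / 30)   # seismic got quieter
assert abs(dy_lower_Q - dy_quiet_dx) < 1e-12
```

This is exactly the ambiguity in the entries: a lower peak alone cannot distinguish a lower Q from a quieter disturbance, which is why comparing the floor level (which tracks dx) against the peak (which tracks Q dx) is informative.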

Quote:

By the way, I found a trend which can be seen in all of the data taken today and yesterday.

The resonances of pitch and yaw around 0.5 Hz look like they are being damped, because their heights above the floor have become lower than in the past.

I don't know what is going on, but it is interesting because you can see the trend in all of the data.

 

  2368   Tue Dec 8 23:13:32 2009  kiwamu  Update  SUS  free swinging spectra of ETMY and ITMX

The free-swinging spectra of ETMY and ITMX were taken after today's wiping, in order to check the test masses.

These data were taken under atmospheric pressure, as were the spectra of ETMX taken yesterday.

Compared with the past (see Yoichi's good summary from Aug. 7, 2008), there is no significant difference.

There is nothing wrong with ETMY or ITMX.

 --

By the way, I found a trend which can be seen in all of the data taken today and yesterday.

The resonances of pitch and yaw around 0.5 Hz look like they are being damped, because their heights above the floor have become lower than in the past.

I don't know what is going on, but it is interesting because you can see the trend in all of the data.

Attachment 1: SUS-ETMY.png
SUS-ETMY.png
Attachment 2: SUS-ITMX.png
SUS-ITMX.png
  2367   Tue Dec 8 16:27:13 2009  Jenne  Update  COC  ITMX wiped

Jenne, Kiwamu, Koji, Alberto, Steve, Bob

ITMX was wiped without having to move it. 
After 'practice' this morning on ETMY, Kiwamu and I successfully wiped ITMX by leaning into the chamber to get at the front face. 

Most notable (other than not moving it) was that inspection with the fiber light before touching showed many very small particles on the coated part of the optic (versus ETMY, where we saw very few, but larger, particles). The after-wiping fiber-light inspection showed many, many fewer particles on the optical surface. I have high hopes for lower optical loss here!

  2366   Tue Dec 8 13:03:26 2009  Koji  Update  COC  ETMY drag wiped

Jenne, Kiwamu, Alberto, Steve, Bob, Koji

We wiped ETMY after the recovery of the computer system. We are taking lunch and will resume at 14:00 for ITMX.
Detailed reports will follow.

  2365   Tue Dec 8 10:20:33 2009  Alberto  DAQ  Computers  Bootfest successfully completed

Alberto, Kiwamu, Koji,

this morning we found the RFM network and all the front-ends down.

To fix the problem, we first tried a soft strategy: restarting C0DAQCTRL and C1DCUEPICS alone. It didn't work.

We then went for a big bootfest. We first powered off fb40m, C1DCUEPICS, and C0DAQCTRL, and reset the RFM Network switch. Then we rebooted them in the same order in which we turned them off.

Then we power cycled and restarted all the front-ends.

Finally we restored all the burt snapshots to Monday Dec 7th at 20:00.

  2364   Tue Dec 8 09:18:07 2009  Jenne  Update  Computers  A *great* way to start the day....

Opening of ETMY has been put on hold to deal with the computer situation. Currently all front-end computers are down. The DAQ AWGs are flashing green, but everything else is red (fb40m is also green). Anyhow, we'll deal with this and open ETMY as soon as we can.

The computers take priority because we need them to tell us how the optics are doing while we're in the chambers futzing around. We need to be sure we're not overly kicking up the suspensions.

  2363   Tue Dec 8 03:53:49 2009  kiwamu  Update  SUS  Free swinging spectra of ETMX

Tonight I checked the free-swinging spectra of ETMX, to make sure nothing went wrong with ETMX from the wiping.

Compared with the past (Aug. 6, 2008), the spectra of ETMX don't show any significant change.

The wiping didn't change its configuration much, and didn't create any bad situations

(a bad situation being, for example, the suspended components hitting something else).

 

The spectra of ETMX from DTT are attached. You can also see the past spectra in Yoichi's entry.

Yoichi's data was taken under atmospheric pressure, so it's good for comparison.

Actually, I compared those data by eye, because I somehow could not get the past raw data.

The resonant frequencies and their typical heights changed a little bit, but I think those changes are not significant.

NOTE: In the figure, the pitch and yaw modes (~0.57 Hz and ~0.58 Hz) appear to have a smaller Q-factor than in the past.

 

Attachment 1: ETMX_air.png
ETMX_air.png
  2362   Mon Dec 7 19:02:22 2009  Mott  Update  General  Reflectivity Measurements

I have made some measurements of the R value for some coatings we are interested in. The plots have statistical error bars from repeated measurements, but I suspect these do not dominate the noise; I would guess the results should be trusted to plus or minus 5% or so. They should still give some indication of how useful these coatings will be for the green light. I plan to measure the ITM as soon as possible, but with the venting and finals this may not be until late this week.
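As a sketch of the statistics behind those error bars (the sample readings below are invented, not Mott's data): the statistical bar from repeated measurements is the standard error of the mean, which is why it can sit well below the total +/-5% uncertainty when systematics (alignment, power drift, etc.) dominate.

```python
import math

def mean_and_stderr(values):
    """Mean and standard error of the mean from repeated measurements."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Hypothetical repeated reflectivity readings for one coating/angle:
R = [0.991, 0.988, 0.993, 0.990, 0.989]
m, se = mean_and_stderr(R)
# se comes out below 1e-3 here, far smaller than a 5% systematic band,
# consistent with repetition scatter not being the dominant error.
```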

 

EDIT (12/9/09): I fixed the label on the y axis of the plots, and changed them to png format.

Attachment 1: Y1-45P_R.png
Y1-45P_R.png
Attachment 2: Y1-45S_R.png
Y1-45S_R.png
Attachment 3: Y1-50CC_R.png
Y1-50CC_R.png
  2361   Mon Dec 7 18:18:55 2009  Jenne  Update  COC  ETMX drag wiped

[Koji, Jenne, Alberto, Steve, Bob]

ETMX has been drag wiped. 

Around 2:45pm, after the main IFO volume had come up to atmospheric pressure, we removed both doors to the ETMX chamber.  Regular procedures (wiping of O-rings with a dry, lint-free cloth, covering them with the light O-ring covers, etc.) were followed.  Koji took several photos of the optic, and the rest of the ETMX chamber before anything was touched. These will be posted to the 40m Picasa page.  Steve and Koji then deionized the optic.

Koji removed the bottom front earthquake stop, and clamped the optic with the remaining earthquake stops.

The clean syringes were prepared: these are all-glass-and-metal (nothing else) medical syringes. The size used was 100 microliters. Earlier today, we had prepared our solvents in small beakers which had been baked over the weekend. Brand-new glass bottles of Acetone and Isopropyl Alcohol were opened and poured into the small beakers. To make sure we have enough, we have three ~10 ml beakers each of Acetone and Isopropyl.

We started with Acetone.  The syringe was filled completely with acetone, then squirted onto a kimwipe.  This was repeated ~twice, to ensure the syringe was well rinsed.  Then the syringe was filled a little past the 100 microliter mark.  Koji held a piece of lens cleaning paper to ETMX and used an allen wrench underneath the optic to help guide the paper, and keep it near the optic (of course, the only thing in actual contact with the optic was the lens paper).  In one smooth shot, the plunger of the syringe was pressed all the way down.   (This is a bit tricky, especially when the syringe is totally full.  You have to squeeze it so the plunger moves fairly quickly down the barrel of the syringe to get a good arc of liquid.  The goal is to shoot all of the solvent to the same place on the lens paper, so that it makes a little circle of wetness on the paper which covers the coated part of the optic.  The amount of solvent used should be balanced between having too little, so that the paper is dry by the time it has been wiped all the way down, and too much such that there is still a residue of liquid on the optic after the paper has been removed.)  The target was to hit the optic just above the center mark (the oplev was on, so I went for just above the red oplev dot).  Immediately after applying the liquid onto the paper, Koji slowly and smoothly pulled down on the lens paper until it came off of the bottom of the optic.  The acetone was repeated, for a total of 2 acetone wipes.  Because acetone evaporates very quickly, more acetone is used than isopropyl.  The optimal amount turned out to be ~115 microliters of acetone.  It is hard to say exactly how much I had on the second wipe, because the syringe is not marked past 100 microliters.  On the first wipe, with about 105 microliters, the lens paper was too dry at the bottom of the optic.

We then switched to Isopropyl.  A new syringe was used, and again we rinsed it by filling it completely with isopropyl, and emptying it onto a kimwipe.  This was repeated at least twice.  We followed the same procedure for applying liquid to the optic and wiping the optic with the lens paper.  On the first try with isopropyl, we used 100 microliters, since that was the preferred amount for acetone.  Since isopropyl evaporates much slower than acetone, this was determined to be too much liquid.  On the second isopropyl wipe, I filled the syringe to 50 microliters, which was just about perfect.  The isopropyl wiping was done a total of 2 times.

After wiping, we replaced the front bottom earthquake stop, and released the optic from the other earthquake stops' clamping.  The OSEM values were checked against the values from the screenshots taken yesterday afternoon, and were found to be consistent.  Koji took more photos, all of which will be placed on the 40m Picasa page.

We visually inspected the optic, and we couldn't see anything on the optical surface of the mirror.  Koji said that he saw a few particulates on some horizontal surfaces in the chamber.  Since the optic seemed (at least to the level of human vision without a strong, focused light) to be free of particulates on the optical surface to start with, the suspense will have to remain until we button down, pump down, and try to lock the IFO to determine our new finesse, to see if the wiping helped any substantial amount. 

We replaced the regular, heavy door on the inner side of the ETMX chamber (the side closer to the CES building), and put only a light door on the outer side of the chamber (the side closer to the regular walkway down the arm).  We will look at the spectra of the OSEMS tomorrow, to confirm that none of the magnets are stuck.

We commence at ~9am tomorrow with ETMY.

LESSONS LEARNED:

The LED lights are awesome.  It's easy to use several lights to get lots of brightness (more than we've had in the past), and the chamber doesn't get hot.

We should get larger syringes for the acetone for the large optics.  It's challenging to smoothly operate the plunger of the syringe while it's so far out.  We should get 200 microliter syringes, so that for the acetone we only fill them about half way.  It was noticeably easier to apply the isopropyl when the syringe only had 50 microliters.

* It may be helpful to have a strong, focused optical light to inspect the surface of the mirror.  Rana says that Garilynn might have such an optical fiber light that we could borrow.

  2360   Mon Dec 7 09:38:05 2009  Koji  Update  VAC  Vent started

Steve, Jenne, Koji

The PSL was blocked by the shutter and the manual block.
We started venting at 9:30.

09:30  25 torr
10:30 180 torr
11:00 230 torr

12:00 380 torr

13:00 520 torr
14:30 680 torr - Finished. It is already over-pressured.
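From the logged points above, the average rise rate works out to roughly 130 torr/hr; a quick check (times converted to hours after 09:30):

```python
# Pressure log from the entry above: (hours since 09:30, torr)
log = [(0.0, 25), (1.0, 180), (1.5, 230), (2.5, 380), (3.5, 520), (5.0, 680)]

# Segment-by-segment rise rates, and the overall average
rates = [(p2 - p1) / (t2 - t1) for (t1, p1), (t2, p2) in zip(log, log[1:])]
avg_rate = (log[-1][1] - log[0][1]) / (log[-1][0] - log[0][0])
print(f"average rise: {avg_rate:.0f} torr/hr")  # (680 - 25) / 5 = 131 torr/hr
```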

  2359   Sat Dec 5 22:31:52 2009  rob  Update  IOO  frequency noise problem

Quote:

There's a large broadband increase in the MC_F spectrum.  I'm not totally sure it's real--it could be some weird bit-swapping thing.  I've tried soft reboots of c1susvme2 and c1iovme, which haven't helped.  In any case, it seems like this is preventing any locking success today.  Last night it was fine.

 Rebooting c1iovme (by keying off the crate, waiting 30 seconds, and then keying it back on and restarting) has resolved this.  The frequency noise is back to the 'usual' trace.

  2358   Sat Dec 5 18:23:48 2009  Koji  Update  oplevs  Oplevs centered, IP_POS and IP_ANG centered

NOTE: The HEPA is on at full speed.

[[[OK]]] Align the suspended optics (by Rob)
[[[OK]]] Align the oplevs again
[[[OK]]] Take snapshots for the suspensions/QPDs/IO QPDs/PZT strain gauges
[[[OK]]] Align the IP_POS, IP_ANG
[[[OK]]] Align the PSL table QPDs, the MC WFS QPDs, and the MCT QPD
[[[OK]]] Align the aux laser for the absolute length 


Alignment of the aux laser

o Go to ITMX-only mode:
Save the alignment of the mirrors. Activate the X-arm mode. Misalign ITMY and ETMX.

o Inject the aux beam:
Open the shutter of the aux NPRO. Turn the injection flipper on.

o Look at the Faraday output:
There are several spots, but only one was the right one. Confirm the alignment onto the Thorlabs PD. Connect the oscilloscope to the PD output with a 50 Ohm termination.
Thanks to Alberto's adjustment, the beat was already there at around 10 MHz. After the PD adjustment, the DC was about 600 mV and the beat amplitude was about 50 mVpp.

o Adjust the aux beam alignment:
Adjust the alignment of the aux beam using the steering mirrors before the Faraday isolator. These change the alignment of the aux beam independently of the IFO beam.
After the alignment, a beat amplitude of 100 mVpp was obtained.

o Closing
Close the shutter of the NPRO. Turn off the flipper mirror. Restore the full alignment of the IFO.

Attachment 1: Screenshot_091205_1830.png
Screenshot_091205_1830.png
  2357   Sat Dec 5 17:34:30 2009  rob  Update  IOO  frequency noise problem

There's a large broadband increase in the MC_F spectrum.  I'm not totally sure it's real--it could be some weird bit-swapping thing.  I've tried soft reboots of c1susvme2 and c1iovme, which haven't helped.  In any case, it seems like this is preventing any locking success today.  Last night it was fine.

Attachment 1: mcf.png
mcf.png
  2356   Sat Dec 5 15:20:10 2009  Jenne  AoG  all down cond.  sea of red, again

Quote:

Taking  a cue from entry 2346, I immediately went for the nuclear option and powered off fb40m.  Someone will probably need to restart the backup script.

 Backup script restarted.

  2355   Sat Dec 5 14:41:07 2009  rob  AoG  all down cond.  sea of red, again

Taking  a cue from entry 2346, I immediately went for the nuclear option and powered off fb40m.  Someone will probably need to restart the backup script.

  2354   Sat Dec 5 01:40:11 2009  Koji  Update  oplevs  Oplevs centered, IP_POS and IP_ANG centered

We restarted daqd, and that resolved the problem:
http://lhocds.ligo-wa.caltech.edu:8000/40m/Computer_Restart_Procedures#fb40m
i.e., restart the 'daqd' process: 'telnet fb40m 8087', then type "shutdown" at the prompt. The framebuilder will restart itself in ~20 s.

 

It was not related to the problem, but we also cleaned up the processes related to dtt and dataviewer with pkill.

After that, the alignment scripts started to work again. As a result, we got some misalignment of the oplevs.
I am going to come in on Sunday to:
- Align the optics
- Align the oplevs again
- Take snapshots for the suspensions
- Align the IP_POS, IP_ANG
- Align the aux laser for the absolute length
- Align PSL table QPDs, and MCT QPD

  2353   Fri Dec 4 23:17:55 2009  rob  Update  oplevs  Oplevs centered, IP_POS and IP_ANG centered

Quote:

[Jenne Koji]

 We aligned the full IFO, and centered all of the oplevs and the IP_POS and IP_ANG QPDs.  During alignment of the oplevs, the oplev servos were disabled.

Koji updated all of the screenshots of 10 suspension screens.  I took a screenshot (attached) of the oplev screen and the QPD screen, since they don't have snapshot buttons.

We ran into some trouble while aligning the IFO.  We tried running the regular alignment scripts from the IFO_CONFIGURE screen, but the scripts kept failing, reporting "Data Receiving Error".  We ended up aligning everything by hand, and then did some investigating of the c1lsc problem.  With our hand alignment we got TRX to a little above 1, and TRY to almost 0.9.  SPOB got to ~1200 in PRM mode, and REFL166Q got high while in DRM (I don't remember the number). We also saw a momentary lock of the full interferometer: on the camera view we saw that the Y arm locked by itself momentarily, and at that same time TRX was above 0.5, so both arms were locked simultaneously.  We accepted this alignment as "good", and aligned all of the oplevs and QPDs.

It seems that C1LSC's front end code runs fine, and that it sees the RFM network, and the RFM sees it, but when we start running the front end code, the ethernet connection goes away.  That is, we can ping or ssh c1lsc, but once the front end code starts, those functions no longer work.  During these investigations, We once pushed the physical reset button on c1lsc, and once keyed the whole crate.  We also did a couple rounds of hitting the reset button on the DAQ_RFMnetwork screen.

 A "Data Receiving Error" usually indicates a problem with the framebuilder/testpoint manager, rather than the front-end in question.  I'd bet there's a DTT somewhere that's gone rogue.

  2352   Fri Dec 4 21:48:01 2009 JenneUpdateoplevsOplevs centered, IP_POS and IP_ANG centered

[Jenne Koji]

 We aligned the full IFO, and centered all of the oplevs and the IP_POS and IP_ANG QPDs.  During alignment of the oplevs, the oplev servos were disabled.

Koji updated all of the screenshots of 10 suspension screens.  I took a screenshot (attached) of the oplev screen and the QPD screen, since they don't have snapshot buttons.

We ran into some trouble while aligning the IFO.  We tried running the regular alignment scripts from the IFO_CONFIGURE screen, but the scripts kept failing, reporting "Data Receiving Error".  We ended up aligning everything by hand, and then did some investigating of the c1lsc problem.  With our hand alignment we got TRX to a little above 1, and TRY to almost 0.9.  SPOB got to ~1200 in PRM mode, and REFL166Q got high while in DRM (I don't remember the number). We also saw a momentary lock of the full interferometer: on the camera view we saw that the Y arm locked by itself momentarily, and at that same time TRX was above 0.5, so both arms were locked simultaneously.  We accepted this alignment as "good", and aligned all of the oplevs and QPDs.

It seems that C1LSC's front end code runs fine, and that it sees the RFM network, and the RFM sees it, but when we start running the front end code, the ethernet connection goes away.  That is, we can ping or ssh c1lsc, but once the front end code starts, those functions no longer work.  During these investigations, We once pushed the physical reset button on c1lsc, and once keyed the whole crate.  We also did a couple rounds of hitting the reset button on the DAQ_RFMnetwork screen.

Attachment 1: Oplev_IPang_screenshot_4Dec2009.png
Oplev_IPang_screenshot_4Dec2009.png
  2351   Fri Dec 4 18:54:03 2009 JenneUpdatePEMRanger moved

The Ranger was left in a place where it could be bumped during next week's activities (near the crawl-space to access the inside of the "L" of the IFO on the Yarm).  It has been moved a meter or so to a safer place.

Also, so that Steve can replace the battery in the SR560 that is used for the Ranger, I swapped it out with one of the ones which already has a new, charged battery.  All of the settings are identical.  For posterity, I took a pic of the front panel before unplugging the old SR560.

Attachment 1: RangerSeismometer_SR560settings_4Dec2009.JPG
RangerSeismometer_SR560settings_4Dec2009.JPG
  2350   Thu Dec 3 15:55:24 2009 AlbertoAoGLSCRF AM Stabilizer Output Power

Today I measured the max output power at the EOM output of one of the RF AM Stabilizers that we use to control the modulation depth. I needed to know that number for the designing of the new RF system.

When the EPICS slider for the 166 MHz modulation depth is at 0, the modulation depth is at its maximum (the slider's values are reversed: 0 is max, 5 is min; values above 5 also behave like 0, despite the slider's range of 0 to 10).

I measured 9.5 V from the EOM output, which is about 32 dBm into a 50 Ohm impedance.
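As a sanity check of the quoted power level, assuming the 9.5 V reading is an RMS value (an assumption; the entry does not say RMS or peak), a quick conversion:

```python
import math

# Assumption: the 9.5 V figure is RMS; the entry does not specify RMS or peak.
V_rms = 9.5   # measured EOM output voltage [V]
R = 50.0      # load impedance [Ohm]

P_watts = V_rms**2 / R                    # power dissipated in the 50 Ohm load
P_dBm = 10 * math.log10(P_watts / 1e-3)   # convert watts to dBm

print(f"{P_watts:.2f} W = {P_dBm:.1f} dBm")  # 1.81 W = 32.6 dBm
```

The result (~32.6 dBm) is consistent with the ~32 dBm quoted above, which suggests the RMS reading is the right interpretation.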

  2349   Mon Nov 30 19:23:50 2009 JenneUpdateMZMZ down

Came back from dinner to find the Mach Zehnder unlocked.  The poor IFO is kind of having a crappy day (computers, MZ, and I think the Mode Cleaner alignment might be bad too).

  2348   Mon Nov 30 16:23:51 2009 JenneUpdateComputersc1omc restarted

I found the FEsync light on the OMC GDS screen red.  I power cycled C1OMC, and restarted the front end code and the tpman.  I assume this is a remnant of the bootfest of the morning/weekend, and the omc just got forgotten earlier today.

  2347   Mon Nov 30 11:45:54 2009 JenneUpdateComputersWireless is back

When Alberto was parting the Red Sea this morning, and turning it green, he noticed that the wireless had gone sketchy.

When I checked it out, the ethernet light was definitely blinking, indicating that it was getting signal.  So this was not the usual case of a bad cable/connector, which is a known problem for our wireless (one of these days we should probably re-lay that ethernet cable....but not today).  After power cycling and replugging the ethernet cable, the light for the 2.4GHz wireless was blinking, but the 5GHz wasn't.  Since the wireless still wasn't working, I checked the advanced configuration settings, as described by Yoichi's wiki page:  40m Network Page

The settings had the 5GHz disabled, while Yoichi's screenshots of his settings showed it enabled.  Immediately after enabling the 5GHz, I was able to use the laptop at Alberto's length measurement setup to get online.  I don't know how the 5GHz got disabled, unless that happened during the power cycle (which I doubt, since no other settings were lost), but it's all better now.

 

  2346   Mon Nov 30 11:29:40 2009 AlbertoAoGall down cond.sea of red

Quote:

Quote:

Came in, found all front-ends down.

 

Keyed a bunch of crates, no luck:

Requesting coeff update at 0x40f220 w/size of 0x1e44
No response from EPICS 

Powered off/restarted c1dcuepics.  Still no luck.

Powered off megatron.  Success!  Ok, maybe it wasn't megatron.  I also did c1susvme1 and c1susvme2 at this time.

 

BURT restored to Nov 26, 8:00am

 

But everything is still red on the C0_DAQ_RFMNETWORK.adl screen, even though the front-ends are running and synced with the LSC.  I think this means the framebuilder or the DAQ controller is the one in trouble--I keyed the crates with DAQCTRL and DAQAWG a couple of times, with no luck, so it's probably fb40m.    I'm leaving it this way--we can deal with it tomorrow.

 I found the red sea when I came in this morning.

I tried several things.

  1. ssh into fb40m: connection refused
  2. telnet fb40m 8087: didn't respond
  3. shutdown fb40m by physically pushing the power button: it worked and the FB came back to life but still with a red light on the MEDM DAQ_DETAIL screen;
  4. powercycled fb40m AND C0DAQCTRL: no improvement
  5. shut down fb40m, C0DAQCTRL, C1DCUEPICS and pushed the reset button on the RF network crate; then restarted the computers in this order: fb40m, C1DCUEPICS, C0DAQCTRL. It worked: they came back to life and the lights eventually turned green on the MEDM monitor screen

I'm now going to restart the individual front-ends and burtgooey them if necessary.

Everything is back on.

Restarted all the front ends. As usual, c1susvme2 was stubborn, but eventually it came up.

I burt-restored all the front-ends to Nov 26 at 8am.

The mode cleaner is locked.

  2345   Mon Nov 30 10:28:47 2009 AlbertoAoGall down cond.sea of red

Quote:

Came in, found all front-ends down.

 

Keyed a bunch of crates, no luck:

Requesting coeff update at 0x40f220 w/size of 0x1e44
No response from EPICS 

Powered off/restarted c1dcuepics.  Still no luck.

Powered off megatron.  Success!  Ok, maybe it wasn't megatron.  I also did c1susvme1 and c1susvme2 at this time.

 

BURT restored to Nov 26, 8:00am

 

But everything is still red on the C0_DAQ_RFMNETWORK.adl screen, even though the front-ends are running and synced with the LSC.  I think this means the framebuilder or the DAQ controller is the one in trouble--I keyed the crates with DAQCTRL and DAQAWG a couple of times, with no luck, so it's probably fb40m.    I'm leaving it this way--we can deal with it tomorrow.

 I found the red sea when I came in this morning.

I tried several things.

  1. ssh into fb40m: connection refused
  2. telnet fb40m 8087: didn't respond
  3. shutdown fb40m by physically pushing the power button: it worked and the FB came back to life but still with a red light on the MEDM DAQ_DETAIL screen;
  4. powercycled fb40m AND C0DAQCTRL: no improvement
  5. shut down fb40m, C0DAQCTRL, C1DCUEPICS and pushed the reset button on the RF network crate; then restarted the computers in this order: fb40m, C1DCUEPICS, C0DAQCTRL. It worked: they came back to life and the lights eventually turned green on the MEDM monitor screen

I'm now going to restart the individual front-ends and burtgooey them if necessary.

  2344   Sun Nov 29 16:56:56 2009 robAoGall down cond.sea of red

Came in, found all front-ends down.

 

Keyed a bunch of crates, no luck:

Requesting coeff update at 0x40f220 w/size of 0x1e44
No response from EPICS 

Powered off/restarted c1dcuepics.  Still no luck.

Powered off megatron.  Success!  Ok, maybe it wasn't megatron.  I also did c1susvme1 and c1susvme2 at this time.

 

BURT restored to Nov 26, 8:00am

 

But everything is still red on the C0_DAQ_RFMNETWORK.adl screen, even though the front-ends are running and synced with the LSC.  I think this means the framebuilder or the DAQ controller is the one in trouble--I keyed the crates with DAQCTRL and DAQAWG a couple of times, with no luck, so it's probably fb40m.    I'm leaving it this way--we can deal with it tomorrow.

  2343   Sat Nov 28 20:27:12 2009 KojiUpdatePSLFSS oscillation: Total gain reduced

I stopped by the 40m for some reason and found that the MC trans was 7.5.
This was caused by an oscillation of the FSS, which seemed to have started by itself.

The oscillation stopped when I reduced the FSS total gain to +9 dB (from +11 dB).
This is not a permanent fix (i.e., the autolocker will restore the gain).
If it turns out the FSS gain always needs to be reduced, we should change the MC autolocker script.

Attachment 1: 091128_PSL.png
091128_PSL.png
  2342   Fri Nov 27 02:25:26 2009 ranaUpdateABSLPLL Open Loop Gain Measured

Quote:

I measured the open loop gain of the PLL in the AbsL experiment.

 Plots don't really make sense. The second one is inherently unstable - and what's g?

  2341   Thu Nov 26 02:08:34 2009 KojiUpdateElectronicsMulti-resonant EOM --- Q-factor ----

The key point of the story is:
"The recipe to exploit maximum benefit from a resonant EOM"
- Make a resonant EOM circuit. Measure the impedance Z at the resonance.
- This Z determines the optimum turn ratio n of the step-up transformer
  (n² = Z/Rin, where Rin is 50 Ohm in our case).
- This n gives the maximum gain Gmax (= n/2) that can be obtained with the step-up transformer.
  And the impedance matching is also satisfied in this condition.

OK: The larger Z, the better. The higher the Q, the larger Z, and thus the better.
(Although the relationship between Z and Q was not described in the original post.)

So, how can we make the Q higher? What is the recipe for the resonant circuit?
=> Choose the components with smaller loss (resistance). The details will be provided by Kiwamu soon??? 


When I was young (3 months ago), I thought...

  • Hey! Let's increase the Q of an EOM! It will increase the modulation!
  • Hey! Let's use the step-up transformer with n as high as possible! It will increase the modulation!
  • Hey! Take the impedance matching! It will increase the modulation!

I was just too thoughtless. In reality, they are closely related to each other.

A high-Q resonant circuit has a high residual resistance at the resonant frequency. As long as this impedance is higher than the equivalent output impedance of the driving circuit (i.e. Z > Rin·n²), we get the benefit of increasing the turn ratio of the transformer. In other words, "the performance of the resonant EOM is limited by the turn ratio of the transformer." (Give us more turns!)

OK. So can we increase the turn ratio infinitely? No. Once Rin·n² gets larger than Z, we no longer get the benefit of the impedance transformation. The output impedance of the signal source causes too much voltage drop.

There is an optimum point for n. That is the above recipe. 
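The recipe above can be sketched numerically. In this minimal sketch, Rin = 50 Ohm comes from the post, while the 5 kOhm resonant impedance is a hypothetical example value:

```python
import math

Rin = 50.0   # source output impedance [Ohm] (from the post)
Z = 5000.0   # measured impedance at resonance [Ohm] (hypothetical example value)

# Optimum turn ratio: the load reflected to the primary, Z/n², matches Rin
n_opt = math.sqrt(Z / Rin)

# Maximum voltage gain obtainable with the step-up transformer
G_max = n_opt / 2

print(f"n_opt = {n_opt:.1f}, G_max = {G_max:.1f}")  # n_opt = 10.0, G_max = 5.0
```

So a tenfold-higher resonant impedance buys a factor of √10 in turn ratio, and hence in achievable gain.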

So a low-Q resonant EOM is destined to be useless. But a high-Q EOM still needs to be optimized: as long as we use a transformer with a low turn ratio, it only shows ordinary performance.

 

 

  2340   Wed Nov 25 20:44:48 2009 kiwamuUpdateElectronicsMulti-resonant EOM --- Q-factor ----

Now I am studying the behavior of the Q-factor in the resonant circuit, because the Q-factor of the circuit directly determines its performance as an EOM driver.

Here I summarize the fundamentals, which explain why the Q-factor is important.

 --------------------------------------

The EOM driver circuit can be approximately described as shown in the figure below.

trans.png

Z represents the impedance of a resonant circuit.

In the ideal case, the transformer simply raises the voltage level by a factor of n.  Rin is the output impedance of the signal source, usually 50 [Ohm].

The transformer also makes the impedance Z appear 1/n² smaller. Therefore this configuration gives the following relation between Vin and Vout.

eq1.png

where G is the voltage gain. G reaches its maximum when Rin = Z/n². This relation is shown clearly in the following plot.

 

impedance.png

 Note that I used Rin = 50 [Ohm] for calculating the plot.

Under the condition Rin = Z/n² (generally referred to as impedance matching), the maximum gain can be expressed as:

eq2.png

 

This means that a larger Z gives a larger gain. In our case, the Z of interest is the impedance at resonance.

So what we should do is make a resonant circuit that has a high impedance at the resonance (i.e. a high-Q resonant circuit).
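Reading the description above as a voltage divider (a sketch, since the actual formulas live in the eq1.png/eq2.png attachments): the load reflected to the transformer primary is Z/n², so Vout/Vin = n·Z/(Z + n²·Rin). With a hypothetical Z this can be checked numerically:

```python
Rin = 50.0   # source output impedance [Ohm] (from the post)
Z = 5000.0   # impedance at resonance [Ohm] (hypothetical example value)

def gain(n):
    """Voltage gain Vout/Vin: divider of Rin against the reflected load Z/n²,
    stepped back up by the turn ratio n."""
    return n * Z / (Z + n**2 * Rin)

# Scan turn ratios and locate the peak of the gain curve
ns = [i / 10 for i in range(1, 301)]
n_best = max(ns, key=gain)

print(f"gain peaks at n = {n_best:.1f}")                      # n² = Z/Rin gives n = 10.0
print(f"G_max = {gain(n_best):.2f}, n/2 = {n_best / 2:.2f}")  # both 5.00
```

The scan peaks exactly where Rin = Z/n², and the peak gain equals n/2, consistent with the matching condition and the Gmax quoted in the previous entry.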

 

 

  2339   Wed Nov 25 20:28:17 2009 AlbertoUpdateABSLStopped working on the AbsL

I closed the shutter of the NPRO for the night.

ELOG V3.1.3-