40m elog, Page 311 of 357
ID   Date   Author   Type   Category   Subject
  6176   Fri Jan 6 11:49:13 2012   Jamie   Update   CDS   framebuilder taken offline to diagnose problem with symmetricom timing card

Alex and I have taken the framebuilder offline to try to see what's wrong with the symmetricom card.  We have removed the card from the chassis, and Alex has taken it back to Downs to do some more debugging.

We have been formulating some alternate methods to get timing to the fb in case we can't get the card working.

  4772   Tue May 31 14:29:00 2011   jzweizig   Update   CDS   frames

There seems to be something strange going on with the 40m frame builder.
Specifically, there is a gap in the frames in /frames/full near the start of
each 100k second subdirectory. For example, frames for the following times are missing:

990200042-990200669
990300045-990300492
990400044-990400800
990500032-990500635
990600044-990600725
990700037-990700704
990800032-990800677
990900037-990900719


To summarize, after writing the first two frames in a data directory, the next ~10 minutes of frames are usually missing. To make matters worse (for
the nds2 frame finder, at least) the first frame after the gap (and all successive frames) start at an arbitrary time, usually not aligned to a 16-second boundary. Is there something about the change of directories that is causing the frame builder to crash? Or is the platform/cache disk too slow to complete the directory switch-over without loss of data?

  28   Mon Oct 29 23:25:42 2007   tobin   Software Installation   CDS   frames mounted
I mounted the frames directory on mafalda and linux3. It's intentionally not listed in the /etc/fstab so that an fb crash won't prevent the controls machines from booting. The command to mount the frames directory is:

mount fb40m:/frames/frames /frames
  4887   Sun Jun 26 18:35:16 2011   rana   HowTo   SUS   free swing all optics

I used scripts/SUS/freeswing-all.csh to give the optics a kick and then turn off their watchdogs and collect the free swinging data.  Final script end time = 993173551. Start taking data ~ 993173751

I had to fix up the script a little: it had amateur stuff in there, such as undefined variables.

It still doesn't work that well. On the new Ubuntu workstation, pianosa, it fails by silently not setting some of the EPICS variables via the EZCA stuff.

On Allegra, it failed on ~1 out of 10 commands by returning "epicsThreadOnceOsd: epicsMutexLock failed" ???

On Pianosa, it sometimes instead says "epicsThreadOnceOsd: pthread_mutex_lock returned Invalid argument."   Ah... now I understand?

So finally, I had to run the script on op340m to get it to actually run all of its commands. That's right; I used a 15-year-old Solaris 9 Blade 150 because none of our fancy new Linux machines could do the job reliably.

Fixing our EZCA situation is a pretty high priority; if the locking scripts fail to run ~1 command every hour, it's going to completely derail the lock acquisition attempts.

If you want to use the IFO tonight, just run the script on op340m again when you're done.

Attachment 1: ringdown.png
  4892   Tue Jun 28 01:18:53 2011   rana   HowTo   SUS   free swing all optics

Chris Wipf tells me that the EPICS Mutex Mumbo Jumbo can be overcome by upgrading our EPICS. We should get one of Jamie's assistants to get this going on one of the Ubuntu workstations.

  5204   Fri Aug 12 04:11:13 2011   kiwamu   Update   SUS   free swinging

Excited all optics - -
Fri Aug 12 03:34:12 PDT 2011
997180467

  5220   Sat Aug 13 02:11:33 2011   kiwamu   Update   SUS   free swinging again

I am leaving all of the suspensions free swinging. They will automatically recover 5 hours from now.

--
Excited all optics
Sat Aug 13 02:08:07 PDT 2011
997261703
--

FYI : I ran a combination of two scripts:   ./freeswing && ./opticshutdown

  5234   Sun Aug 14 22:48:37 2011   kiwamu   Update   SUS   free swinging again

Excited all optics
Sun Aug 14 20:22:33 PDT 2011
997413768

  5246   Tue Aug 16 04:50:17 2011   kiwamu   Update   SUS   free swinging again

Since Suresh and I changed the DC biases on most of the suspensions, the free swinging spectra will be different from past measurements.

- -

Excited ETMX ETMY ITMX ITMY PRM SRM BS

Tue Aug 16 04:48:02 PDT 2011
997530498

  2405   Sun Dec 13 17:43:10 2009   kiwamu   Update   SUS   free swinging spectra (vacuum)

The free swinging spectra of the ITMs, ETMs, BS, PRM and SRM were measured under vacuum. The attachment shows the measured spectra.

It looks like nothing is wrong, because no significant difference appears between the current data and the past data (taken under atmospheric pressure).

So everything is going well.

Attachment 1: summary_FreeSwinging_vacuum.pdf
  2368   Tue Dec 8 23:13:32 2009   kiwamu   Update   SUS   free swinging spectra of ETMY and ITMX

The free swinging spectra of ETMY and ITMX were taken after today's wiping, in order to check the test masses.

These data were taken under atmospheric pressure, as were the ETMX spectra taken yesterday.

Compared with the past (see Yoichi's good summary from Aug 7, 2008), there is no significant difference.

So nothing appears to be wrong with ETMY and ITMX.

 --

By the way, I found a trend which can be seen in all of the data taken today and yesterday.

The pitch and yaw resonances around 0.5 Hz look as if they are being damped, because their height above the floor is lower than in the past.

I don't know what is going on, but it is interesting because the trend appears in all of the data.

Attachment 1: SUS-ETMY.png
Attachment 2: SUS-ITMX.png
  2369   Wed Dec 9 00:23:28 2009   Koji   Update   SUS   free swinging spectra of ETMY and ITMX

Where is the plot for the trend?
It could be either something very important or just a daydream of yours.
We can't say anything before we see the data.

We would like to see it if you think it is interesting.

... Just a naive guess: Is it just because the seismic level got quiet in the night?

 

P.S.

You seem to be consistently confusing some terms: damping, Q, and peak height.

  • Q is defined by the transfer function of the system (= pendulum).
     
  • Damping (either active or passive) makes the Q lower.
     
  • The peak height of the resonance in the spectrum, dy, is determined by the disturbance dx and the transfer function H (= y/x):

dy = H dx

As the damping makes the Q lower, the peak height also gets lowered by the damping.
But if the disturbance gets smaller, the peak height can become small even without any change of the damping and the Q.
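
To make the last point concrete, here is a minimal single-pole pendulum model (my illustration, not part of the original entry): near a resonance at f_0,

H(f) = \frac{f_0^2}{f_0^2 - f^2 + i\, f f_0 / Q},
\qquad |H(f_0)| = Q,
\qquad dy(f_0) = Q\, dx(f_0).

So the peak height halves if either Q or the drive dx(f_0) halves; the spectrum alone cannot tell which one changed.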

Quote:

By the way, I found a trend which can be seen in all of the data taken today and yesterday.

The pitch and yaw resonances around 0.5 Hz look as if they are being damped, because their height above the floor is lower than in the past.

I don't know what is going on, but it is interesting because the trend appears in all of the data.

 

  5282   Tue Aug 23 01:09:44 2011   kiwamu   Update   SUS   free swinging test

excited all the optics ---

Tue Aug 23 01:08:00 PDT 2011
998122096

  5287   Tue Aug 23 11:57:22 2011   kiwamu   Update   SUS   free swinging test during lunch time

excited all the optics. (with ITMY WTF OFF)

Tue Aug 23 11:52:52 PDT 2011
998160788

  5290   Tue Aug 23 17:21:45 2011   kiwamu   Update   SUS   free swinging test for ETMY

 

Excited ETMY

Tue Aug 23 17:20:45 PDT 2011
998180460

 

  5375   Sat Sep 10 02:28:45 2011   kiwamu   Update   SUS   free swinging test in vacuum condition

All the optics were excited

Sat Sep 10 02:14:11 PDT 2011
999681266

 

  5421   Thu Sep 15 18:12:21 2011   Jenne   Update   SUS   free swinging test in vacuum condition

Quote:

All the optics were excited

Sat Sep 10 02:14:11 PDT 2011
999681266

 

 

Optic The Plot Input Matrix BADness
ITMX  ITMX.png       pit     yaw     pos     side    butt
UL    0.601   0.680   1.260  -1.009   0.223 
UR    0.769  -1.254  -0.175  -0.179   0.581 
LR   -1.231   0.065   0.566  -0.480   0.252 
LL   -1.399   2.000   2.000  -1.310  -2.944 
SD   -0.580   0.868   2.451   1.000  -1.597 

 
7.95029
ITMY  ITMY.png       pit     yaw     pos     side    butt
UL    1.067   0.485   1.145  -0.195   0.929 
UR    0.548  -1.515   0.949  -0.142  -1.059 
LR   -1.452  -0.478   0.855  -0.101   1.051 
LL   -0.933   1.522   1.051  -0.153  -0.962 
SD   -0.530   0.903   2.115   1.000   0.142 
3.93939
ETMX ETMX.png       pit     yaw     pos     side    butt
UL    0.842   1.547   1.588  -0.018   1.026 
UR    0.126  -0.453   1.843   0.499  -1.173 
LR   -1.874  -0.428   0.412   0.511   0.934 
LL   -1.158   1.572   0.157  -0.006  -0.867 
SD    1.834   3.513  -0.763   1.000  -0.133
5.39825
ETMY ETMY.png       pit     yaw     pos     side    butt
UL   -0.344   1.280   1.425  -0.024   0.903 
UR    1.038  -0.720   1.484  -0.056  -1.161 
LR   -0.618  -1.445   0.575  -0.040   0.753 
LL   -2.000   0.555   0.516  -0.007  -1.184 
SD   -0.047  -0.038   0.986   1.000   0.083 
4.15747
BS  BS.png       pit     yaw     pos     side    butt
UL    1.549   0.655   0.393   0.263   0.997 
UR    0.192  -1.345   1.701  -0.063  -0.949 
LR   -1.808  -0.206   1.607  -0.085   0.952 
LL   -0.451   1.794   0.299   0.241  -1.101 
SD    0.724   0.293  -3.454   1.000   0.037 
5.66432
PRM  PRM.png       pit     yaw     pos     side    butt
UL    0.697   1.427   1.782  -0.337   0.934 
UR    1.294  -0.573   0.660  -0.068  -0.943 
LR   -0.706  -1.027   0.218   0.016   0.867 
LL   -1.303   0.973   1.340  -0.254  -1.257 
SD    0.369  -0.448  -0.496   1.000   0.456 
5.1026
SRM   Can't invert....need to fix the peak-finding.  
MC1  MC1.png       pit     yaw     pos     side    butt
UL    0.872   0.986   0.160   0.054   0.000 
UR    0.176  -0.752   0.917   0.018   0.000 
LR   -1.824  -2.000   1.840   0.002   3.999 
LL   -1.128  -0.262   1.083   0.038  -0.000 
SD    0.041   0.036  -0.193   1.000  -0.001 
5.31462
MC2  MC2.png       pit     yaw     pos     side    butt
UL    1.042   0.767   0.980   0.131   0.928 
UR    0.577  -1.233   1.076  -0.134  -0.905 
LR   -1.423  -0.640   1.020  -0.146   1.050 
LL   -0.958   1.360   0.924   0.120  -1.117 
SD   -0.073  -0.164  -0.702   1.000  -0.056 
4.07827
MC3  MC3.png       pit     yaw     pos     side    butt
UL    1.595   0.363   1.152   0.166   1.107 
UR    0.025  -1.629   1.135   0.197  -0.994 
LR   -1.975   0.008   0.848   0.105   0.904 
LL   -0.405   2.000   0.865   0.074  -0.995 
SD   -0.433   0.400  -1.624   1.000   0.022 
3.64881

 

  5480   Tue Sep 20 15:23:16 2011   Jenne   Update   SUS   free swinging test in vacuum condition

This is using data for the SRM from: 20 Sept 2011 03:20:00 PDT = 1000549215

You can see that there are still some funny peaks between Pit and Yaw, but I finessed the peak-finding, and I was able to fit all of the correct peaks and invert the matrix:

 SRM now has its new matrix, and is damping happily.

Optic The Plot Matrix Badness
SRM SRM.png                pit     yaw     pos     side    butt
UL    0.877   0.983   1.105  -0.288   1.092 
UR    1.010  -1.017   1.123  -0.145  -1.055 
LR   -0.990  -1.002   0.895  -0.091   0.848 
LL   -1.123   0.998   0.877  -0.234  -1.006 
SD    0.089   0.064   3.752   1.000  -0.009
 4.4076

 

 

  5481   Tue Sep 20 15:39:57 2011   Koji   Update   SUS   free swinging test in vacuum condition

Can't we use Yuta's auto-Q adjust script?

 http://nodus.ligo.caltech.edu:8080/40m/3723

Edit by KI :

Of course we can use it, but first we have to fix some pynds calls, since his script was written for the OLD pynds.

  5346   Tue Sep 6 17:56:12 2011   Jenne   Update   SUS   free swinging test on ITMX

Quote:

Tue Sep  6 17:48:02 PDT 2011
999391697

 Kiwamu excited ITMY (which Suresh had already started).  I just kicked ITMX:

Tue Sep  6 17:55:21 PDT 2011
999392136

  5345   Tue Sep 6 17:48:57 2011   kiwamu   Update   SUS   free swinging test on ITMY

Tue Sep  6 17:48:02 PDT 2011
999391697

  5594   Sun Oct 2 02:21:27 2011   kiwamu   Update   SUS   free swinging test once more

The following optics were kicked:
MC1 MC2 MC3 ETMX ETMY ITMX ITMY PRM SRM BS
Sun Oct  2 02:13:40 PDT 2011
1001582036

They will automatically recover after 5 hours.

  5240   Mon Aug 15 17:23:55 2011   jamie   Update   SUS   freeswing script updated

I have updated the freeswing scripts, combining all of them into a single script that takes arguments to specify the optic to kick:

pianosa:SUS 0> ./freeswing
usage: freeswing SET
usage: freeswing OPTIC [OPTIC ...]

Kick and free-swing suspended optics.
Specify optics (i.e. 'MC1', 'ITMY') or a set:
'all' = (MC1 MC2 MC3 ETMX ETMY ITMX ITMY PRM SRM BS)
'ifo' = (ETMX ETMY ITMX ITMY PRM SRM BS)
'mc'  = (MC1 MC2 MC3)
pianosa:SUS 0>

I have removed all of the old scripts, and committed the new one to the SVN.
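
For reference, a minimal sketch of how such a kick script can work, written in Python with pyepics (this is my illustration, not the actual SVN script; the channel names and the kick amplitude are hypothetical):

#!/usr/bin/env python
import sys
import time
import epics  # pyepics

SETS = {
    'all': ['MC1', 'MC2', 'MC3', 'ETMX', 'ETMY', 'ITMX', 'ITMY', 'PRM', 'SRM', 'BS'],
    'ifo': ['ETMX', 'ETMY', 'ITMX', 'ITMY', 'PRM', 'SRM', 'BS'],
    'mc':  ['MC1', 'MC2', 'MC3'],
}

def freeswing(args):
    if not args:
        sys.exit("usage: freeswing SET | OPTIC [OPTIC ...]")
    # expand a set name ('all', 'ifo', 'mc'); otherwise treat args as optic names
    optics = SETS.get(args[0].lower(), args)
    for optic in optics:
        # hypothetical channel name and kick amplitude, for illustration only
        epics.caput('C1:SUS-%s_LSC_OFFSET' % optic, 30000)
    time.sleep(0.5)                                      # let the kick act
    for optic in optics:
        epics.caput('C1:SUS-%s_LSC_OFFSET' % optic, 0)   # remove the kick
        # ... then turn off the watchdog so the optic swings freely

if __name__ == '__main__':
    freeswing(sys.argv[1:])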

  6311   Fri Feb 24 04:12:44 2012   kiwamu   Update   SUS   freeswing test
The following optics were kicked:
MC1 MC2 MC3 ETMX ETMY ITMX ITMY PRM SRM BS
Fri Feb 24 04:11:15 PST 2012
1014120690
 
Steve (or anyone), can you restore the watchdogs when you come to the lab in the morning?

Quote from #6305

Kiwamu (or whoever is here last tonight): please run the free-swing/kick script (/opt/rtcds/caltech/c1/scripts/SUS/freeswing) before you leave, and I'll check the matrices and update the suspensions tomorrow morning.

  2737   Wed Mar 31 02:57:48 2010   kiwamu   Update   Green Locking   frequency counter for green PLL

Rana found that we had a frequency counter SR620 which might be helpful for lock acquisition of the green phase lock.

It has a response of 100 MHz/V up to 350 MHz, which is a wide range and good for our purpose. However, it has a noise level of 200 Hz/rtHz @ 10 Hz, which is 1000 times worse than the circuit Matt made (see the entry).

The attached figure is the noise curve measured while I injected a signal of several hundred kHz. In fact, I made sure that the noise level doesn't depend on the frequency of the input signal.

The black curve represents the noise of the circuit Matt made; the red curve represents that of the SR620.

Attachment 1: FCnoise.png
  2741   Wed Mar 31 12:30:31 2010   rana   Update   Green Locking   frequency counter for green PLL

It's a good measurement - you should adjust the input range of the 620 using the front panel 'scale' buttons to see how the noise compares to Matt's circuit when the range is reduced to 1 MHz. In any case, we would use it in the 350 MHz range mode. What about the noise of the frequency discriminator from MITEQ?

  2751   Thu Apr 1 15:21:12 2010   rana   Update   Green Locking   frequency counter for green PLL

 

  2718   Sun Mar 28 17:28:26 2010   matt, kiwamu   Update   Green Locking   frequency discriminator for green PLL

Last Friday, Matt made a frequency discriminator circuit on a breadboard in order to test the idea and study the noise level. I think it will work for phase lock acquisition of the green locking.

The result is a response of 100 kHz/V and a noise level of 2 uV/rtHz @ 10 Hz, which corresponds to 0.2 Hz/rtHz @ 10 Hz.

The motivation for using a frequency discriminator is that it widens the frequency range and makes lock acquisition of the PLLs easier in the green locking experiment.

As another possibility to help phase lock acquisition, Rana suggested using a commercial discriminator from Miteq.


(principle idea)

The diagram below shows a schematic of the circuit which Matt has built.

FD.png

Basically, the input signal is split into two copies right after the input; one copy goes directly to a NAND comparator.

The other copy goes through a delay line composed of RC filters, and arrives at the NAND comparator with a certain delay.

After going through the NAND comparator, the signal looks like a train of periodic pulses (see below).

A higher input frequency produces more pulses after passing through the NAND.

pulses.png

Finally, the pulse signal is integrated by the low pass filter and converted to a DC signal.

Thus the amplitude of the DC signal depends on the number of pulses per unit time, so the output DC signal is proportional to the frequency of the input signal.
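
As a sanity check of this principle, here is a minimal numerical sketch (my own illustration, not part of the original entry) of an idealized delay-line + NAND discriminator in Python; the sample rate and delay are arbitrary assumed values:

import numpy as np

fs = 100e6        # simulation sample rate (assumed)
tau = 1e-6        # delay-line delay (assumed; sets the slope and the range)

def discriminator_dc(f_in, duration=2e-3):
    """DC output of an idealized delay-line + NAND frequency discriminator."""
    t = np.arange(0, duration, 1/fs)
    sq = np.sin(2*np.pi*f_in*t) > 0        # TTL-like input square wave
    n = int(round(tau * fs))
    delayed = np.roll(sq, n)               # delayed copy of the input
    nand = ~(sq & delayed)                 # NAND comparator output
    return nand[n:].mean()                 # low pass filter ~ time average

for f in (50e3, 100e3, 200e3, 400e3):
    print("%6.0f kHz -> DC = %.3f (arb.)" % (f/1e3, discriminator_dc(f)))

For a 50% duty-cycle input at frequency f with tau < 1/(2f), the averaged NAND output is 0.5 + tau*f: linear in frequency, which is exactly the behavior described below.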

 

 

(result)

Driving it with a TTL high-low signal, the output of the circuit shows a 100 kHz/V linear response.

This means we get a DC voltage of 1 V if a 100 kHz signal is injected into the input.

The noise measurement was done while injecting an input signal; a noise level of 0.2 Hz/rtHz @ 10 Hz was obtained.

Therefore we can lock the green PLL using an ordinary VCO loop, after we roughly steer the beat note using this kind of discriminator.

 FDnoise.png

Attachment 1: DSC_1407.JPG
Attachment 2: FD.png
Attachment 3: FDnoise.png
  2728   Mon Mar 29 15:19:33 2010   mevans   Update   Green Locking   frequency discriminator for green PLL

Thanks for the great entry!

In order to make this work for higher frequencies, I would add Hartmut's suggestion of a frequency-dividing input stage.  If we divide the input down by 100, the overall range will be about 200 MHz, and the noise will be about 20 Hz/rtHz.  That might be good enough... but we can hope that the commercial device is lower noise!

Quote:

Last Friday, Matt made a frequency discriminator circuit on a breadboard in order to test the idea and study the noise level. I think it will work for phase lock acquisition of the green locking.

The result is a response of 100 kHz/V and a noise level of 2 uV/rtHz @ 10 Hz, which corresponds to 0.2 Hz/rtHz @ 10 Hz.

The motivation for using a frequency discriminator is that it widens the frequency range and makes lock acquisition of the PLLs easier in the green locking experiment.

FD.png

  6553   Fri Apr 20 23:02:25 2012   Den   Update   Adaptive Filtering   frequency domain filter

DFT-LMS is a frequency-domain adaptive filter that converges faster than the time-domain LMS filter. I've tested a Discrete Fourier Transform (DFT-LMS) filter. It converts the witness signal to the frequency domain using a DFT and corrects the eigenvalues of the covariance matrix to make them as equal to each other as possible (i.e., it pre-whitens the witness signal).
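
Here is a minimal numpy sketch of the transform-domain idea (my illustration; Den's actual implementation is not shown in this entry): each tap vector is transformed with a unitary DFT, each frequency bin is normalized by a running power estimate (the pre-whitening), and an LMS update is applied to the frequency-domain weights. A real implementation would use a sliding DFT to keep the per-sample cost O(M).

import numpy as np

def dft_lms(witness, target, M=256, mu=0.1, eps=1e-8):
    """Transform-domain (DFT) LMS with per-bin power normalization."""
    W = np.zeros(M, dtype=complex)        # frequency-domain weights
    p = np.ones(M)                        # running per-bin power (whitening)
    y = np.zeros(len(target))
    for n in range(M, len(target)):
        x = witness[n-M:n][::-1]          # latest M witness samples
        X = np.fft.fft(x, norm="ortho")   # unitary DFT of the tap vector
        y[n] = np.real(np.vdot(W, X))     # filter output
        e = target[n] - y[n]              # error against the target (e.g. mcl)
        p = 0.99 * p + 0.01 * np.abs(X)**2
        W += mu * e * X / (p + eps)       # power-normalized LMS update
    return y, W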

The left plot compares learning curves for the time-domain LMS and DFT-LMS algorithms on simulated data from the seismometers and mcl (number of averages = 30). The right plot shows the evolution of the filter coefficient norm (the Euclidean norm of the coefficient vector). Though the LMS algorithm works in the time domain and DFT-LMS in the frequency domain, the coefficient vectors must have the same norm, because the Fourier transform is achieved by applying a unitary operator, so the vector norm must not change.

dft.png   norm.png

The plots show that both algorithms converge to the same coefficient-vector norm, but DFT-LMS does it much faster than LMS.
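
The unitarity claim is easy to check numerically with numpy's orthonormal FFT scaling (a small demo I added):

import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(256)        # some coefficient vector

W = np.fft.fft(w, norm="ortho")     # unitary DFT (orthonormal scaling)

# Parseval: a unitary transform preserves the Euclidean norm,
# so the two printed values agree to rounding error.
print(np.linalg.norm(w), np.linalg.norm(W))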

Online realization: 

Good news: the algorithm complexity is linear in the filter length. Though the algorithm does a Fourier transform, its complexity is still O(M), where M is the number of coefficients. Simulations show that DFT-LMS is ~8-9 times slower than LMS. This is not so bad; maybe we can do even slightly better.

Bad news: the downsampling process is not simple. Due to the Fourier transform, the filter needs the whole witness signal vector before calculating the output. This is sad, and in contrast with the LMS algorithm, where we could start to calculate the new output immediately after computing the previous one. We either need to calculate the whole output at once, introduce a delay in the output, or approximate the Fourier transform using some previous witness signal values.

Realization in the kernel: I asked Alex about complex numbers, exponents, and sin and cos functions in kernel C, and he answered that we do not have complex numbers; about exp, cos, and sin he is not sure. But for the DFT-LMS algorithm we can get around these difficulties. Complex numbers can be represented as 2 real numbers; then exp(ia) = cos(a) + i*sin(a). All we need for DFT-LMS are sin(2*pi*k/M) and cos(2*pi*k/M), k = 0, 1, 2, ..., M-1. Fortunately M (the filter length) is big enough that the typical value pi/M ~ 0.001, and we can calculate sin(2*pi/M) and cos(2*pi/M) using Taylor series. As the argument is small, 5-6 terms are enough to get precision ~1e-20. Then we build the whole table of cos and sin by induction: cos(2*pi/M*k) = cos(2*pi/M*(k-1))cos(2*pi/M) - sin(2*pi/M*(k-1))sin(2*pi/M), and sin(2*pi/M*k) = cos(2*pi/M*(k-1))sin(2*pi/M) + sin(2*pi/M*(k-1))cos(2*pi/M). We should do this only once, so the algorithm will build these values during the first several iterations, then reuse them.
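
A sketch of that table construction (in Python for readability; the kernel version would be the same arithmetic in plain C with doubles, and M here is an arbitrary example value):

import math

M = 4096                          # example filter length
a = 2 * math.pi / M               # small base angle

def taylor_sin(x, terms=6):
    """sin(x) by Taylor series -- what you'd do in kernel C without libm."""
    s, t = 0.0, x
    for k in range(terms):
        s += t
        t *= -x * x / ((2*k + 2) * (2*k + 3))
    return s

def taylor_cos(x, terms=6):
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= -x * x / ((2*k + 1) * (2*k + 2))
    return s

c1, s1 = taylor_cos(a), taylor_sin(a)

# Build the full table with the angle-addition recurrence; done once at startup.
cos_t, sin_t = [1.0] * M, [0.0] * M
for k in range(1, M):
    cos_t[k] = cos_t[k-1] * c1 - sin_t[k-1] * s1
    sin_t[k] = cos_t[k-1] * s1 + sin_t[k-1] * c1

# Sanity check against libm: the maximum error stays near double precision.
print(max(abs(cos_t[k] - math.cos(a*k)) for k in range(M)))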

The main problem is downsampling. I need to think more about it.

  7000   Sat Jul 21 18:04:02 2012   Den   Update   Adaptive Filtering   frequency domain filter

I've implemented an online frequency-domain filter and applied it to MC_F.

freq_af.png

The magnitude of the filter output at 1 Hz is the same as MC_F. This means that it is not hard for the FIR filter to match the resonance. The problem is with the phase: we cannot match the resonance exactly. If the resonance is at f0 and we match at f0 +/- df, then in the frequency range between f0 and f0 +/- df the phase is mismatched by up to 180 degrees. I guess the filter does not diverge because df is small, but the filter also cannot account for this huge phase lag. We need to slightly change the simulated actuator TF and see how the filter reacts.

  2076   Fri Oct 9 16:36:13 2009   rob   Update   IOO   frequency noise problem

I used the XARM as a reference to measure the frequency noise after the MC.  It's huge around 4kHz--hundreds of times larger than the frequency noise the MC servo is actually squashing.  This presents a real problem for our noise performance.

An elog search reveals that this noise has been present (although not calibrated till now) for years.  We're not sure what's causing it, but suspicion falls on the piezojena input PZTs. 

I didn't bother too much about it before because we previously had enough common mode servo oomph to squash it below other DARM noises, and I didn't worry too much about stuff at 4 kHz. Now that we have a weaker FSS and thus much weaker CM servo, we can't squash it, and the most interesting feature of our IFO is at 4 kHz.

I'll measure the actual voltage noise going to the PZTs.  I remember doing this before and concluding it was ok, but can't find an elog entry.  So this time maybe I'll do it right.

Attachment 1: freqnoiseaftermc.png
  2172   Tue Nov 3 03:45:04 2009   rob   Update   IOO   frequency noise problem

Quote:

I used the XARM as a reference to measure the frequency noise after the MC.  It's huge around 4kHz--hundreds of times larger than the frequency noise the MC servo is actually squashing.  This presents a real problem for our noise performance.

An elog search reveals that this noise has been present (although not calibrated till now) for years.  We're not sure what's causing it, but suspicion falls on the piezojena input PZTs. 

I didn't bother too much about it before because we previously had enough common mode servo oomph to squash it below other DARM noises, and I didn't worry too much about stuff at 4 kHz. Now that we have a weaker FSS and thus much weaker CM servo, we can't squash it, and the most interesting feature of our IFO is at 4 kHz.

I'll measure the actual voltage noise going to the PZTs.  I remember doing this before and concluding it was ok, but can't find an elog entry.  So this time maybe I'll do it right.

 

This level of frequency noise has not changed, but we now have increased common mode servo gain and so it's not as huge of a deal, although we should still probably do something about it. 

 

Attached is a plot of the piezojena noise measurement, converted into Hz, along with another measurement of frequency noise as described above.

To get the piezojena voltage noise into Hz, I estimated that the PZTs inside have a flat 2 micron/V response (based on a rough knowledge of their geometry and assuming a 10 milliradian / 150 V steering range; with a lever arm of a few cm — my assumption — 10 mrad / 150 V ≈ 67 urad/V works out to roughly 2 microns/V).  This is the voltage noise with the PZTs operating in closed loop mode, which is how we normally run them.  This plot also ignores the transfer function of the Pomona box, as we are mainly looking at noise in the kHz band.  I think this plot shows that these PZTs are a good candidate for creating this frequency noise, especially near their mechanical resonances (the manual says the unloaded resonances are in the 3-4 kHz range).

I've been operating one DOF of the piezojenas in open loop mode for a couple of weeks now, and the feared drift has not been a problem at all.  If we plan to keep using these after the upgrade, we should definitely put some big resistors in series at the outputs and operate them in open loop mode.

Also attached is a plot of RF DARM noise, with a frequency noise spectrum.  That spectrum is a REFL 2I spectrum put into DARM units using a measured TF (driving MC_L and measuring REFL 2I and DARM_ERR), and then put into meters using the same DARM calibration as used for the DARM curve.

Attachment 1: noise.png
Attachment 2: spectra.pdf
  2357   Sat Dec 5 17:34:30 2009   rob   Update   IOO   frequency noise problem

There's a large broadband increase in the MC_F spectrum.  I'm not totally sure it's real--it could be some weird bit-swapping thing.  I've tried soft reboots of c1susvme2 and c1iovme, which haven't helped.  In any case, it seems like this is preventing any locking success today.  Last night it was fine.

Attachment 1: mcf.png
  2359   Sat Dec 5 22:31:52 2009   rob   Update   IOO   frequency noise problem

Quote:

There's a large broadband increase in the MC_F spectrum.  I'm not totally sure it's real--it could be some weird bit-swapping thing.  I've tried soft reboots of c1susvme2 and c1iovme, which haven't helped.  In any case, it seems like this is preventing any locking success today.  Last night it was fine.

 Rebooting c1iovme (by keying off the crate, waiting 30 seconds, and then keying it back on and restarting) has resolved this.  The frequency noise is back to the 'usual' trace.

  14086   Thu Jul 19 04:44:09 2018   Annalisa, Terra   Summary   Thermal Compensation   frequency shift observed with heating!

Annalisa, Gautam, Koji, Terra

Summary: with the reflector setup, we measured a frequency shift of the first and second order modes! First looks at the shifts show a 1st HOM shift of ~-10 kHz and a 2nd HOM shift of ~-18 kHz (carrier ~4 kHz). We saw no shift with the cylinder/lenses setup.

- - - - -

Tonight we modified the cavity scan setup: the LO is provided by the Marconi, which, at the same time, is also used to scan the AUX laser frequency instead of the Agilent. To get rid of the free-running noise between the Marconi and the Agilent, the Marconi frequency was scanned and, point by point, the Agilent center frequency was changed accordingly. To speed up the process, the whole procedure was automated. The script is called AGfast.py and can be found in /users/annalisa/postVent.
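
A minimal sketch of what such a point-by-point scan loop can look like (the instrument-control helpers here are hypothetical placeholders; the actual AGfast.py may be structured quite differently):

import numpy as np

# Hypothetical helpers -- stand-ins for whatever GPIB/network commands
# AGfast.py actually sends to the Marconi and the AG4395A.
def set_marconi_frequency(f_hz): ...
def set_agilent_center(f_hz): ...
def read_agilent_point(): ...

def scan(f_start, f_stop, n_points):
    """Step the Marconi LO and re-center the Agilent in lock-step, so the
    free-running drift between the two instruments never enters the data."""
    freqs = np.linspace(f_start, f_stop, n_points)
    points = []
    for f in freqs:
        set_marconi_frequency(f)     # move the LO / AUX laser offset
        set_agilent_center(f)        # keep the analyzer centered on it
        points.append(read_agilent_point())
    return freqs, points

# e.g. the carrier scan from this entry:
# freqs, resp = scan(31.68e6, 31.78e6, 201)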

One thing that helped improve the data quality of the phase information was setting the Agilent IF bandwidth to 1 kHz. It's not yet clear why, but it was better than having a lower bandwidth. To be further investigated.

With this setup, we made some coarse scans of the full FSR and then "zoomed" around the main peaks in order to increase the resolution and get more precise information about the peak frequencies.

Here are the frequency ranges that we scanned:

  • carrier - central frequency: 31.73MHz; range: [31.68MHz - 31.78MHz]
  • HOM1 - central frequency: 32.88MHz; range: [32.84MHz - 32.93MHz]
  • HOM2 - central frequency: 34.03MHz; range: [33.95MHz - 34.06MHz]
  • HOM3 - central frequency: 35.18MHz; range: [35.09MHz - 35.25MHz]

We powered the heater of the lenses setup at 4:55 am with 14.4 V and 0.9 A. Then we slightly increased the power at 5:05 am; the final "hot state" configuration has the heater powered at 16 V and 0.9 A.

With this setup we couldn't see any frequency shift.

Then, at around 6:30 am, we turned on the reflector setup and measured a frequency shift of the first and second order modes. First scans show a 1st HOM shift of ~10 kHz and a 2nd HOM shift of ~18 kHz. The first attachment shows the carrier hot/cold; the second attachment shows HOM2 hot/cold. We started to get plagued by high seismic noise. Heaters were turned off at 7:45 am. Lots of scans and actual analysis to go.


gautam: about the questionable plotting -

  • 10 faint (alpha~0.3) lines are individual measurements with the reflector doing its heating (AG4395A, zero span, single-frequency measurements plotted together).
  • charcoal line, labelled mean, is the mean of the 10 above lines.
  • bright green ("Reference") is the mean of a coarse scan (cold ETM) overlaid for comparison. 
  • "cold" - self explanatory.

My personal favourite plot is Attachment #3, which is a 5 MHz scan (cold) to identify the positions of the various peaks. The power of including phase information in the analysis is clear. The second FSR on the right edge of the plot is not as prominent as the first because the arm transmission was degrading throughout the measurement. For future measurements, we should consider locking the IMC length to the arm cavity - this would eliminate such alignment drifts, and maybe also make the PLL control signal RMS smaller.

Attachment 1: scanning_fine_2018-07-19-07-32-08_parsed.pdf
Attachment 2: scanning_fine_2018-07-19-06-57-47_parsed.pdf
Attachment 3: Yscan_scanning_parsed_2am.txt.pdf
  9224   Wed Oct 9 15:00:48 2013   Steve   Update   PEM   freshmen tour

 [Alan, Koji, Manasa]

A record number of 50 freshmen were given a tour of the 40m this afternoon.

Attachment 1: freshmen2013.jpg
  7493   Fri Oct 5 16:21:48 2012   Steve   Update   General   freshmen visitors

40-plus freshmen visited the 40m today.

Attachment 1: IMG_1693.JPG
Attachment 2: IMG_1694.JPG
  6203   Tue Jan 17 02:27:49 2012   kiwamu   Update   LSC   fringe tests : all the suspensions are innocent

I did a quick test to check a hypothesis that PRM is interfering with some other optics in the single bounce configuration.

I shook all the suspensions (except the MC mirrors) at 3 Hz in POS, PIT and YAW with an amplitude of 1000 counts.

No effects were found in the REFL demod signals.

So it is NOT a fringe effect caused by the other suspended mirrors.

Quote from #6202

The REFL11 and REFL55 demod signals show high frequency noise depending on how big signals go to the POS actuator of PRM.

Is PRM making some fringes with some other optics ??   

 

  6088   Thu Dec 8 11:59:53 2011   Zach   Update   RF System   fringing indeed

Here is a trend of 11 & 55 I&Q, along with the EOM temperature and PSL RMTEMP signals. You can see that there is definitely some fringe-like behavior for monotonic changes in temperature. This is consistent with what I have seen on the gyro table in the past.

Some other notes:

  • The EOM temperature (or at least the sensor temperature) seems to track RMTEMP almost exactly when there is no foam box on the EOM. I have verified that the max-min swing here is the same for both signals (~0.77 K).
  • Something crazy appears to happen at ~10:15, and all the RAM signals get much noisier. Does anyone know what happened at this time (2:15am local)?

We ought to get to the bottom of the fringing. The CTE of LiNbO3 is ~2 ppm/K, so given that the temperature swing is on the order of 0.5 K, this is probably not caused by the etalon effect (2 ppm/K * 0.5 K * ~1 cm ≈ 10 nm << 1064 nm).

RAM_with_temp_overnight.png

  7197   Wed Aug 15 17:23:22 2012   jamie   Update   CDS   front end IOP models changed to reflect actual physical hardware

As Rolf pointed out when he was here yesterday, all of our IOPs are filled with parts for ADCs and DACs that don't actually exist in the system.  This was causing needless module error messages and IOP GDS screens that were full of red indicators.  All the IOP models were identically stuffed with 9 ADC parts, 8 DAC parts, and 4 BO parts, even though none of the actual front end IO chassis had physical configurations even remotely like that.  This was probably not causing any particular malfunctions, but it's not right nonetheless.

I went through each IOP, c1x0{1-5}, and changed them to reflect the actual physical hardware in those systems.  I have committed these changes to the svn, but I haven't rebuilt the models yet.  I'll need to be able to restart all models to test the changes, so I'm going to wait until we have a quiet time, probably next week.

  9074   Tue Aug 27 19:34:36 2013   Jamie   Configuration   CDS   front end IPC configuration

So the IPC situation on the front end network is not so great right now.  For various no-longer-valid reasons, c1lsc had no RFM card; all the IPC connections were routed through the c1rfm model on c1sus, and routed to c1lsc via dolphin PCIe as needed.  As things grew, c1rfm became overloaded.  Koji tried to fix the situation by breaking things out of c1rfm to make direct connections where we could.  This cleared up c1rfm a bit, but now c1mcs is overloading.

Reminder: PCIe (dolphin) is faster and higher bandwidth than RFM.  The more things we can put on PCIe the better.

Attached is a graph of my rough accounting of the intended direct IPC connections between the front ends.  By "intended direct" I mean what should be direct connections if we had all the appropriate hardware.  Right now the actual connection graph is more convoluted than this since things are passing through c1rfm.  I note this graph was NOT particularly easy to make, which is very unfortunate.  I had to manually look through every model and determine the ultimate source of every incoming IPC.  Kind of a pain in the butt.  It would be nice if there was a simple way to represent this.

Here are some various solutions to the problem as I see it:

a) put c1lsc on the RFM network

This would allow c1lsc to talk to c1ioo, c1iscex, and c1iscey without having to go through c1sus, thereby eliminating c1rfm altogether.  I'm not sure why we didn't just do this originally.

Requires:

  • One RFM card for c1lsc

b) put c1ioo on the PCIe network (and move c1sus's RFM card to c1lsc)

This is probably the most robust solution.

b1) There are roughly 8 IPCs going from c1ioo to c1sus, 4 going the other way, and 3 IPCs from c1ioo to c1lsc.  If we put c1ioo on PCIe, all of these currently-RFM connections would become direct PCIe connections, which would be a big win.

At this point only the end station front ends would be on RFM, and most of the connections to those come from c1lsc, so it would make sense to give c1lsc the RFM card, thereby eliminating a lot of stuff from c1rfm.

Requires:

  • dolphin card for c1ioo (do the old Sun machines support these?  if they don't, we could swap the old Sun machine for one of the new spare aLIGO-approved Supermicro machines, which we have on hand)
  • dolphin fibre to go to dolphin switch in 1X3 rack

b2) OR, we could move c1ioo to 1X4 with c1lsc and c1sus, and get a OneStop fibre cable to connect to its IO chassis.  We would still need a dolphin card, but we could use copper instead of fibre.  This is my preferred solution, since it moves c1ioo out of 1X1, where it's really in the way and making a lot of noise.  It would also be easier to manage all the machines if they're together in one rack.

Requires:

  • dolphin card for c1ioo
  • dolphin copper cable for c1ioo
  • OneStop fibre for c1ioo

c) put another cpu in c1sus

c1sus is (I believe) able to support another 6-core CPU.  If we added more cores to c1sus, we could break c1rfm up into c1rfm0, c1rfm1, etc.  This is a less elegant solution imho, but it would probably do the job.

Requires:

  • one new CPU for c1sus
Attachment 1: hosts.png
  9076   Tue Aug 27 20:43:34 2013   Koji   Configuration   CDS   front end IPC configuration

The reason we had the mixed PCIe/RFM system was to test this configuration prior to the actual implementation at the sites.
Has this configuration been intensively tested at the sites with a practical configuration?

Quote:

Attached is a graph of my rough accounting of the intended direct IPC connections between the front ends. 

It's hard to believe that c1lsc -> c1sus has only 4 channels. We actuate ITMX/Y/BS/PRM/SRM for the length control.
In addition to these, we control the angles of ITMX/Y/BS/PRM (and SRM in the future) via the c1ass model on c1lsc.
So there should be at least 12 connections (and more, as I ignored MCL).

I personally prefer to give the PCIe card to c1ioo and move the RFM card to c1lsc.
But in either case, we want to quantitatively compare the current configuration (not omitting the bridging by c1rfm)
with the future configuration, including the additional channels we want to add in the near future,

because RFM connections are really costly, and moving the RFM card to c1lsc may simply cause timeouts on c1lsc
instead of c1sus.

  9086   Wed Aug 28 19:47:28 2013   jamie   Configuration   CDS   front end IPC configuration

Quote:

It's hard to believe that c1lsc -> c1sus has only 4 channels. We actuate ITMX/Y/BS/PRM/SRM for the length control.
In addition to these, we control the angles of ITMX/Y/BS/PRM (and SRM in the future) via the c1ass model on c1lsc.
So there should be at least 12 connections (and more, as I ignored MCL).

Koji was correct that I missed some connections from c1lsc to c1sus.  I corrected the graph in the original post.

Also, I should have noted, that that graph doesn't actually include everything that we now have.  I left out all the simplant stuff, which adds extra connections between c1lsc and c1sus, mostly because the sus simplant is being run on c1lsc only because there was no space on c1sus.  That should be corrected, either by moving c1rfm to c1lsc, or by adding a new core to c1sus.

I also spoke to Rolf today about the possibility of getting a OneStop fiber and dolphin card for c1ioo.  The dolphin card and cable we should be able to order no problem.  As for the OneStop, we might have to borrow a new fiber-supporting card from India, then send our current card to OneStop for fiber-supporting modifications.  It sounds kind of tricky.  I'll post more as I figure things out.

Rolf also said that in newer versions of the RCG, the RFM direct memory access (DMA) has improved considerably in performance, which reduces the model run-time delay involved in using the RFM.  In other words, the long-awaited RCG upgrade might alleviate some of our IPC woes.

We need to upgrade the RCG to the latest release (2.7).

  13138   Mon Jul 24 19:28:55 2017   Jamie   Update   CDS   front end MX stream network working, glitches in c1ioo fixed

MX/OpenMX network running

Today I got the mx/open-mx networking working for the front ends.  This required some tweaking to the network interface configuration for the diskless front ends, and recompiling mx and open-mx for the newer kernel.  Again, this will all be documented.

controls@fb1:~ 0$ /opt/mx/bin/mx_info
MX Version: 1.2.16
MX Build: root@fb1:/opt/src/mx-1.2.16 Mon Jul 24 11:33:57 PDT 2017
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
    8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
===================================================================
Instance #0:  364.4 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
    Status:        Running, P0: Link Up
    Network:    Ethernet 10G

    MAC Address:    00:60:dd:43:74:62
    Product code:    10G-PCIE-8B-S
    Part number:    09-04228
    Serial number:    485052
    Mapper:        00:60:dd:43:74:62, version = 0x00000000, configured
    Mapped hosts:    6

                                                        ROUTE COUNT
INDEX    MAC ADDRESS     HOST NAME                        P0
-----    -----------     ---------                        ---
   0) 00:60:dd:43:74:62 fb1:0                             1,0
   1) 00:30:48:be:11:5d c1iscex:0                         1,0
   2) 00:30:48:bf:69:4f c1lsc:0                           1,0
   3) 00:25:90:0d:75:bb c1sus:0                           1,0
   4) 00:30:48:d6:11:17 c1iscey:0                         1,0
   5) 00:14:4f:40:64:25 c1ioo:0                           1,0
controls@fb1:~ 0$

c1ioo timing glitches fixed

I also checked the BIOS on c1ioo and found that the serial port was enabled, which is known to cause timing glitches.  I turned off the serial port (and some power management stuff), and rebooted, and all the c1ioo timing glitches seem to have gone away.

It's unclear why this is a problem that's just showing up now.  Serial ports have always been a problem, so it seems unlikely this is just a problem with the newer kernel.  Could the BIOS have somehow been reset during the power glitch?

In any event, all the front ends are now booting cleanly, with all dolphin and mx networking coming up automatically, and all models running stably:

Now for daqd...

  3292   Mon Jul 26 12:31:36 2010   kiwamu   Update   CDS   front end machine for the X end

A brief report about the new front end machine "C1ISCEX" installed on the X end (old Y end).

Still the DAC is not working.

 

- Timing card

It's working correctly.

The 1PPS clock signal is supplied by the vertex clock distributor via a 40 m long fiber.

 

- ADC

All 16 channels are working well.

We can see the signals in the medm screen while injecting some signals to the ADC by using a function generator.

 

-DAC

All 16 channels do NOT work.

We cannot see any signals at the DAC SCSI cable while digitally injecting a signal from the medm screen.

  16302   Thu Aug 26 10:30:14 2021   Jamie   Configuration   CDS   front end time synchronization fixed?

I've been looking at why the front end NTP time synchronization did not seem to be working.  I think it might not have been working because the NTP server the front ends were pointing to, fb1, was not actually responding to synchronization requests.

I cleaned up some things on fb1 and the front ends, which I think unstuck things.

On fb1:

  • stopped/disabled the default client (systemd-timesyncd), and properly installed the full NTP server (ntp)
  • the ntp server package for debian jessie is old-style sysVinit, not systemd.  In order to make it better integrated, I copied the auto-generated service file to /etc/systemd/system/ntp.service, and added an "[Install]" section that specifies that it should be available during the default "multi-user.target".
  • "enabled" the new service to auto-start at boot ("sudo systemctl enable ntp.service") 
  • made sure ntp was configured to serve the front end network ('broadcast 192.168.123.255') and then restarted the server ("sudo systemctl restart ntp.service")

For the front ends:

  • on fb1 I chroot'd into the front-end diskless root (/diskless/root) and manually specified that systemd-timesyncd should start on boot by creating a symlink to the timesyncd service in the multi-user.target directory:
$ sudo chroot /diskless/root
$ cd /etc/systemd/system/multi-user.target.wants
$ ln -s /lib/systemd/system/systemd-timesyncd.service
  • on the front end itself (c1iscex as a test) I did a "systemctl daemon-reload" to force it to reload the systemd config, and then restarted the client ("systemctl restart systemd-timesyncd")
  • checked the NTP synchronization with timedatectl:
controls@c1iscex:~ 0$ timedatectl 
      Local time: Thu 2021-08-26 11:35:10 PDT
  Universal time: Thu 2021-08-26 18:35:10 UTC
        RTC time: Thu 2021-08-26 18:35:10
       Time zone: America/Los_Angeles (PDT, -0700)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2021-03-14 01:59:59 PST
                  Sun 2021-03-14 03:00:00 PDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2021-11-07 01:59:59 PDT
                  Sun 2021-11-07 01:00:00 PST
controls@c1iscex:~ 0$ 

Note that it is now reporting "NTP enabled: yes" (the service is enabled to start at boot) and "NTP synchronized: yes" (synchronization is happening), neither of which it was reporting previously.  I also note that the systemd-timesyncd client service is now loaded and enabled, is no longer reporting that it is in an "Idle" state and is in fact reporting that it synchronized to the proper server, and it is logging updates:

controls@c1iscex:~ 0$ sudo systemctl status systemd-timesyncd
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
   Active: active (running) since Thu 2021-08-26 10:20:11 PDT; 1h 22min ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 2918 (systemd-timesyn)
   Status: "Using Time Server 192.168.113.201:123 (ntpserver)."
   CGroup: /system.slice/systemd-timesyncd.service
           └─2918 /lib/systemd/systemd-timesyncd

Aug 26 10:20:11 c1iscex systemd[1]: Started Network Time Synchronization.
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 26 10:20:11 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 64s/+0.000s/0.000s/0.000s/+26ppm
Aug 26 10:21:15 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 128s/-0.000s/0.000s/0.000s/+25ppm
Aug 26 10:23:23 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 256s/+0.001s/0.000s/0.000s/+26ppm
Aug 26 10:27:40 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 512s/+0.003s/0.000s/0.001s/+29ppm
Aug 26 10:36:12 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 1024s/+0.008s/0.000s/0.003s/+33ppm
Aug 26 10:53:16 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/-0.026s/0.000s/0.010s/+27ppm
Aug 26 11:27:24 c1iscex systemd-timesyncd[2918]: interval/delta/delay/jitter/drift 2048s/+0.009s/0.000s/0.011s/+29ppm
controls@c1iscex:~ 0$ 

So I think this means everything is working.

I then went ahead and reloaded and restarted the timesyncd services on the rest of the front ends.

We still need to confirm that everything comes up properly the next time we have an opportunity to reboot fb1 and the front ends (or the opportunity is forced upon us).

There was speculation that the NTP clients on the front ends (systemd-timesyncd) would not work on a read-only filesystem, but this doesn't seem to be true.  You can't trust everything you read on the internet.

  2982   Tue May 25 16:32:26 2010   kiwamu   HowTo   Electronics   front ends are back

 [Alex, Joe, Kiwamu]

Eventually all the front end computers came back !! 

There were two problems.

(1): C0DCU1 didn't want to come back to the network. After we tried several things, it turned out the ADC board for C0DCU1 wasn't working correctly.

(2): C1PEM1 and C0DAQAWG were cross-talking via the back panel of the crate.


(what we did)

* installed a VME crate with a single back panel in 1Y6 and mounted C1PEM1 and C0DAQAWG on it. However, it turned out this configuration was bad because the two CPUs could cross-talk via the back panel.

* removed the VME crate and then installed another VME crate which has two back panels so that we can electrically separate C1PEM1 and C0DAQAWG.  After this work, C0DAQAWG started working successfully.

 * rebooted all the front ends, fb40m and c1dcuepics.

 * reset the RFM bypass. But these things didn't bring C0DCU1 back.

 * telnetted to C0DCU1 and ran "./startup.cmd" manually. In fact "./startup.cmd" should be called automatically when it boots.

 * looked at the error messages from "./startup.cmd" and found it failed during initialization of the ADC board. It said "Init Failure !! could not find ICS".

* went to the 1Y7 rack and checked the ADCs. We found C0DCU1 had two ADC boards, one of which was not in use.

* disconnected both ADCs and put back the one which had not been in use. At the same time we changed the switch address of this ADC to have the same address as the other ADC.

* powered off/on 1Y7 rack. Finally C0DCU1 got back.

* burt-restored the EPICS settings to last Friday, May 21st, 6:07 am

  7477   Thu Oct 4 14:04:21 2012   jamie   Update   CDS   front ends back up

All the front end machines are back up after the outage.  It looks like none of the front end machines came back up once power was restored, and they all needed to be powered manually.  One of the things I want to do in the next CDS upgrade is put all the front end computers in one rack, so we can control their power remotely.

c1sus was the only one that had a little trouble.  Its timing was for some reason not syncing with the frame builder.  Unclear why, but after restarting the models a couple of times things came back.

There's still a little red, but it mostly has to do with the fact that c1oaf is busted and not running (it actually crashes the machine when I tried to start it, so this needs to be fixed!).

  6171   Wed Jan 4 16:40:52 2012   Jamie   Update   Computers   front-end fb communication restored

Communication between the front end models and the framebuilder has been restored.  I'm not sure exactly what the issue was, but rebuilding the framebuilder daqd executable and restarting seems to have fixed the issue.

I suspect that the problem might have had to do with how I left things after the last attempt to upgrade to RCG 2.4.  Maybe the daqd that was running was linked against some library that I accidentally moved after starting the daqd process.  It would have kept running fine as it was, but if the process died and was restarted, its broken linking might have kept it from running correctly.  I don't have any other explanation.

It turns out this was not (best I can tell) related to the new-year time sync issues that were seen at the sites.
