  40m Log, Page 205 of 335
ID | Date | Author | Type | Category | Subject
5758 | Fri Oct 28 15:45:52 2011 | Mirko | Update | Computers | Nifty screen generator

Quote:

Suresh showed me a cool script that Mirko made, but didn't elog about.

You tell the script what filter banks you want, and it creates a screen for each with a bunch of different filter module display formats.  Then you can copy the format you like into the actual screen you're modifying. 

Currently PEM, LSC and IOO (and maybe others?) have "fmX" folders inside their medm/c1.../master folders.  For each subsystem, we need to copy this folder, and modify the generic .adl file so that it puts in the correct subsystem letters.  Once this is done, you can just run the generateFMscreens.py after putting in your filter bank names.

 Wasn"t me.

5763 | Sat Oct 29 22:57:03 2011 | Mirko | Update | LSC | AM modulation due to non-optimal SB frequency

[Kiwamu, Mirko]

Non-optimal 11MHz SB frequency causes PM to be transformed into AM.
m_AM / m_PM = 4039 * 1 kHz / df, with df being the amount the SB frequency is off.

Someone might want to double-check this.

Attachment 1: IMC.pdf
5774 | Tue Nov 1 13:41:38 2011 | Mirko | Update | LSC | AM modulation due to non-optimal SB frequency

Quote:

[Kiwamu, Mirko]

Non-optimal 11MHz SB frequency causes PM to be transformed into AM.
m_AM / m_PM = 4039 * 1 kHz / df, with df being the amount the SB frequency is off.

Someone might want to double-check this.

 Actually there was an error.

For 11MHz it is:
m_AM / m_PM = 2228 * 1kHz / df

For 55MHz:
m_AM / m_PM = 99.80 * 1kHz / df

see PDF
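A quick numerical illustration of these corrected factors (the detuning values below are arbitrary examples, not measured offsets):

# m_AM / m_PM = factor * 1 kHz / df, using the corrected factors above
for f_sb, factor in [(11e6, 2228.0), (55e6, 99.80)]:
    for df in (100.0, 1e3, 1e4):          # assumed SB frequency offsets in Hz
        ratio = factor * 1e3 / df
        print(f"f_SB = {f_sb/1e6:.0f} MHz, df = {df:7.0f} Hz -> m_AM/m_PM = {ratio:.3g}")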

Attachment 1: IMC.pdf
5811 | Fri Nov 4 15:24:13 2011 | Mirko | Update | Adaptive Filtering | Adaptive FF on the MC doesn't make sense

[Den, Jenne, Mirko]

DSC_3585.JPG

Here is the story:

1. High gain
The control loop has a high gain at the interesting frequencies. The error point (EP) before the servo is approx. zero, and the information about how much the mirror would move is in the feedback point (FB) behind the servo. The mirror doesn't actually move because of the high gain. This is the case for the grav. wave detectors at medium frequencies (> approx. 50 Hz, << 1 kHz). Adding feed-forward (FF) to this doesn't actually keep the mirror quieter. In fact, if you look into the FB and subtract the seismic you make the mirror move more. Yes, this is the case we have for the mode cleaner, so FF there doesn't make sense.
In a real GW detector FF however isn’t totally useless. The FB tells you how much the mirror moves, due to GWs, seismic etc. When you record the FB and subtract (offline) the seismic you get closer to the real GW signal.

2. Low gain
When you, for technical reasons, can’t have a high gain in your control loop the EP contains information of how the mirror actually moves. You can then feed this into the adaptive filter and add its output to the FB. This will minimize the EP reducing the actual mirror motion. This is the case we will have for most or all other degrees of freedom in the 40m.

The reason we have so much gain in the mode cleaner length control is that we don’t actually move mirrors around. We change the frequency of the incoming laser light. You can do that crazy fast with a big amplitude. This gives us a high UGF and lots of gain in the 1Hz range we are interested in.

We now change the adaptive filter to look at the EP for all DOFs except for the MC. We calculate the effect of the FF on the MC length signal without ever applying the FF to the MC length control.
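A minimal numerical sketch of the error-point / feedback-point argument above (a single-frequency toy loop with made-up gains, not a model of the actual MC servo):

def loop_signals(G, d=1.0):
    """Disturbance d entering a simple unity-feedback loop with open-loop gain G."""
    ep = d / (1 + G)         # error point: suppressed when G is large
    fb = -G * d / (1 + G)    # feedback point: approaches -d when G is large
    return ep, fb

for G in (1000.0, 0.1):      # high-gain (MC length) case vs. low-gain case
    ep, fb = loop_signals(G)
    print(f"G = {G:6.1f}: EP = {ep:+.4f}, FB = {fb:+.4f}")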

5841 | Tue Nov 8 17:48:21 2011 | Mirko | Update | CDS | Dolphin weirdness

Since yesterday evening I had some trouble getting a channel from rfm on c1sus to oaf on c1lsc via Dolphin. Several restarts of c1lsc and c1sus didn't help. At some point this morning a restart of c1lsc helped. Everything ok again.
At the bad time the dolphin TF looked like this:

Dolphn_TF.png

Should be flat at gain 1 and no phase change obviously.

5842 | Tue Nov 8 18:06:43 2011 | Mirko | Update | Adaptive Filtering | Noise injections to MC1-3 PIT & YAW

With fancy analysis tools approaching usability I looked some more into noise projections from PIT,YAW motion of the MC mirrors to MC length.

Injection channels are: C1:IOO-MC1-PIT_EXC. Actual injection signal is recorded in C1:IOO-MC1-PIT_OUT and similar.
Source channels for the projection are C1:IOO-WFS1_I_PIT_OUT_DQ and similar.
Response channel is C1:OAF-MCL_IN_IN1 or C1:IOO-MC_F_DQ.
MC auto-alignment was off.

1. Fixed sine injection @ 0.5Hz

Every injection lasted 4mins.

Start time   DOF      Amplitude [counts_pk]   Rough SNR in ASD (BW 0.04 Hz)
16:09        MC1PIT   25                      -
16:14        MC1YAW   40                      12
16:21        MC2PIT   35                      5
16:26        MC2YAW   35                      5
16:34        MC3PIT   45                      5
16:39        MC3YAW   45                      -

2. Filtered white noise injection

Generated white noise from 0.5 Hz - 20 Hz, then filtered that in the C1:IOO-MC1-PIT and similar filters by the following TF (notch at 1 Hz, Q = 3, 40 dB, and 2 zeros @ 1.1 Hz):

INJ_filter.png
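A rough scipy sketch of this kind of injection signal (the generation rate and notch implementation are assumptions; the extra zeros at 1.1 Hz and the exact 40 dB depth are not reproduced):

import numpy as np
from scipy import signal

fs = 2048.0                               # assumed generation rate
t = np.arange(0, 240.0, 1 / fs)           # 4 min per injection, as in the list below

rng = np.random.default_rng(0)
b_bp, a_bp = signal.butter(4, [0.5, 20.0], btype='bandpass', fs=fs)   # 0.5-20 Hz white noise
noise = signal.lfilter(b_bp, a_bp, rng.standard_normal(t.size))

b_n, a_n = signal.iirnotch(1.0, 3.0, fs=fs)                           # 1 Hz notch, Q = 3
injection = signal.lfilter(b_n, a_n, noise)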

All injections lasted 4mins. I left the filters in the first filter bank but disabled them. 

Start time   DOF      Amp. @ 0.5 Hz [counts_pk (?)]
17:01        MC1PIT   250
17:10        MC1YAW   400
17:16        MC2PIT   400
17:21        MC2YAW   400
17:26        MC3PIT   500
17:31        MC3YAW   500

 

Attachment 2: MC1PIT_Noise_inj.png
Attachment 3: Noise_Inj_MC2PIT.png
Attachment 4: Noise_Inj_MC2YAW.png
Attachment 5: Noise_Inj_MC3PIT.png
Attachment 6: Inj_Noise_MC3YAW.png
Attachment 7: Inj_Noise_MC1YAW.png
5843 | Tue Nov 8 19:08:21 2011 | Mirko | HowTo | Computers | New DV

Quote:
To use the new ligoDV (previously GEO DV) to look at 40m data, open up a matlab, set up for mDV as usual,
and then from the /cvs/cds/caltech/apps/ligoDV/ directory, type 'ligoDV'.

Then select which NDS server you want to look at and then start clicking to get some plots.

To start ligodv, go in matlab to /cvs/cds/caltech/apps/ligoDV/ and call ligodv. Ligodv will also start up when you are in another directory, but will give strange errors. It only seems to work with the NDS2 server mafalda, port 31200, which doesn't have all channels. When pointing it to fb port 8088 it freezes when you try to adjust the start/stop time. Make sure to ask for the correct UTC time, not the local time.
5847 | Wed Nov 9 13:44:04 2011 | Mirko | Update | Computers | NDS1 missing channels in matlab

The Matlab NDS1 access seems to only work for some channels. With other channels it just hangs 'Busy' and does nothing.
Below you can see some channels that make matlab hang. Everything happened on allegra. I tried compiling the NDS1 sources (from https://www.gravity.phy.syr.edu/dokuwiki/doku.php?id=ligodv:nds1_ligodv_install ) into mex files myself. Same result.
 

a=NDS_GetChannels('fb:8088'); %/cvs/cds/caltech/apps/linux64/share/matlab/NDS_GetChannels.m
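% NDS_GetData arguments below: {channel names}, GPS start time, duration [s], server, channel struct from NDS_GetChannels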
%data=NDS_GetData({'C1:IOO-MC_F_DQ'},1004826500,100,'fb:8088',a)     %Works
%data=NDS_GetData({'C1:IOO-WFS1_PIT_IN1_DQ'},1004826500,100,'fb:8088',a)     %Works
data=NDS_GetData({'C1:LSC-AS11_I_OUT'},1004826500,100,'fb:8088',a)         %Doesn't work, hangs
%%%which NDS_GetData.m: /cvs/cds/caltech/apps/linux64/share/matlab/NDS_GetData.m

5856 | Wed Nov 9 20:35:58 2011 | Mirko | Update | Adaptive Filtering | Seismic noise injection into the MC

Very elaborate measurement ;-)

On 11-11-08:
18:40 Stomp near STS1 for 2mins
18:47 Jump near GUR1 for 2mins
18:52 Walk from MC2 approx. half-way to vertex for 2mins


Tried to see if jumping / stomping the ground near STS1 / vertex or GUR1 / MC2 would show up in the seismometer or MC length data.
In GUR1 jumping / stomping clearly shows up in the timeseries. It also clearly shows up as a low-frequency signal if you walk to a position near MC2, e.g. walk from the vertex to MC2 and stop near the cones. That gives a big dip on GUR1X that recovers in 10-20 s if you remain stationary, and a big "hill" if you come from the x-arm end and stop on the x side of MC2. So there is probably a lot of tilt-to-GUR1X coupling at low frequencies.

Nothing was really visible in spectra (see below).

Resonances:

There appear to be a lot of resonances in the 10-20Hz range, see e.g. 1st attached pic.

Coherence:

Looking at the coherence of the different axes of the seismometers. Kind of a dirty measurement; it could have all kinds of reasons.
Quite a bit of coherence in STS1 at 5-6 Hz. Possibly limiting the STS1X to MC-F coherence up to 4 Hz?

Coherence_GUR.png

Coherence_STS1.png

Attachment 1: Inj_spectra_at_GUR1_all_DOFs.fig
Attachment 2: Inj_spectra_at_GUR1_all_DOFs.png
Attachment 3: Inj_spectra_at_STS1_all_DOFs.fig
Attachment 4: Inj_spectra_at_STS1_all_DOFs.png
Attachment 7: Coherence_GUR.fig
Attachment 8: Coherence_STS1.fig
5858 | Wed Nov 9 21:32:38 2011 | Mirko | Update | Adaptive Filtering | Put accelerometers 4-6 on top of MC2 tank

Put the accelerometers on top of MC2, oriented along the same axes as the arms, GUR1 and STS1:
09112011069.jpg

Should be fixed a bit more rigidly.

 

Looking into the signals at a quiet time:

Coherence_quiet_time.png

Hmm. Either the acc. are mislabeled or there is really bad x-y coupling. The connectors in the back of the acc. power supply / amplifier box are in ascending order.

Attachment 3: Coherence_quiet_time.fig
5863 | Thu Nov 10 16:26:46 2011 | Mirko | Update | Computers | Firefox kills elog

Had to restart the elog many times. For some reason firefox 8 on Win 7 kills the elog pretty consistently when trying to make a new entry. IE9 works fine ....

5864 | Thu Nov 10 16:44:54 2011 | Mirko | Update | Adaptive Filtering | Looking into MC_F & PSL misalignment

 [Den, Mirko]

While doing the things below we accidentally misaligned the PSL laser. Thanks to Suresh and Jenne for realigning!!

There are a lot of strange features in MC_F (see for example http://nodus.ligo.caltech.edu:8080/40m/5738 )
To get some better understanding of the signals in the control loop we looked some more into what happens to the MC feedback signal after it exits the MC servo board (D040180 see DCC).

10112011076.jpg
The MC_F signal is actually the servo signal: http://nodus.ligo.caltech.edu:8080/40m/5695
The Thorlabs temperature controller is actually used in the PZT path!

 We measured the LP filter in the PZT path (that is kind of mislabeled as temp.)

10112011073.jpg

 

5867 | Thu Nov 10 22:00:38 2011 | Mirko | Update | Adaptive Filtering | Looking into MC_F & PSL misalignment

Quote:

 [Den, Mirko]

While doing the things below we accidentally misaligned the PSL laser. Thanks to Suresh and Jenne for realigning!!

There are a lot of strange features in MC_F (see for example http://nodus.ligo.caltech.edu:8080/40m/5738 )
To get some better understanding of the signals in the control loop we looked some more into what happens to the MC feedback signal after it exits the MC servo board (D040180 see DCC).

10112011076.jpg
The MC_F signal is actually the servo signal: http://nodus.ligo.caltech.edu:8080/40m/5695
The Thorlabs temperature controller is actually used in the PZT path!

 We measured the LP filter in the PZT path (that is kind of mislabeled as temp.)

10112011073.jpg

 

 

We looked into the signal from the MC servo board at different positions on the PSL table.


We looked at the FB signal going into the temp. and PZT paths.
Temp.:
Screenshot.png

PZT:

Screenshot-1.png
We also looked at the signal just in front of the FSS box. No idea why the elog doesn't preview these PDFs.

Signal_at_MC_servo_board_and_PSL_table_2_1.pdf

Signal_at_MC_servo_board_and_PSL_table_2_2.pdf

Signal_at_MC_servo_board_and_PSL_table_2_3.pdf

Signal_at_MC_servo_board_and_PSL_table_2_4.pdf
Lots of extra noise there. We will check out where that comes from.

5879 | Sat Nov 12 02:00:36 2011 | Mirko | Update | Adaptive Filtering | MC-F and other signals

Regarding http://nodus.ligo.caltech.edu:8080/40m/5867 and http://nodus.ligo.caltech.edu:8080/40m/5869 :

MC_F signal:

The measurements on p. 5867 were done using the ADC attached to the PEM computer. There was a big difference between the MC_F signal recorded directly after the servo board and the signal just before the FSS board as recorded by a PEM channel.
To understand how this happens we measured the signal at different places with a spec. analyzer:

1. With a locked MC, measuring the signal just before the PEM ADCs (meaning after a 60 ft BNC cable)
2. Same position, but unlocked and seemingly dark MC
3. Locked MC, signal just before the FSS box
4. MC_F signal that is usually going into the Pentek Generic board and is recorded in C1:IOO

Compare_signals_at_all_places.png

=> The 60ft BNC cable adds a considerable amount of noise, but doesn't fundamentally change the signal. It is weird that the signal is white from approx. 4Hz on.
Due to Jenne's measurement ( http://nodus.ligo.caltech.edu:8080/40m/5848 ) we know the TF from MCL through PD, mixer Pentek and into C1:IOO looks like this:
OAF-MCL-Delay-9Nov2011.pdf

This is with the double HP from 15Hz on that should be in the Pentek. So one might expect a less white signal going into the FSS board...

 PEM ADCs

The dark noise in the PEM ADCs is actually a factor 10 higher than in the IOO ADCs. Still ok wrt the seismometers.
We also tried to measure essentially the dark noise of the whole seismometer readout (seismometer box, then ADC). That seemed ok, but is of limited value since the seismometer electronics behave a bit strange when there is no seismometer connected.

Channels_attached_to_the_PEM_ADC.png

Attachment 3: Compare_signals_at_all_places.fig
Attachment 5: Channels_attached_to_the_PEM_ADC.fig
5900 | Tue Nov 15 22:31:39 2011 | Mirko | Update | Adaptive Filtering | Towards wiener filtering and improved OAFing


[Jenne, Mirko]

1. We should help the OAF by compensating for the actuator TF:

The actuator TF, from adaptive filter output to MC2, through PD, mixer, Pentek and into C1:IOO looks like this:

 TFofTheMclLoop.pdf

If we assume a white-ish error signal that the adaptive code tries to compensate for, its job gets extra complicated because it has to invert this TF. So we really should compensate for that. The easiest place for that is the CORR filter directly behind the adaptive code block.

Using the TF measurement from above I used vectfit (" /cvs/cds/caltech/apps/mDV/extra/firfit_forFotonMirkoComplex.m" ) to fit a corresponding digital filter:

MCL_round_trip.png

If we swap the zeros and poles in the digital filter we get the inverted TF.
(Todo: figure out how to invert the TF properly; just switching the poles and zeros doesn't work.)
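For reference, a small scipy sketch of the naive zero/pole swap (the zpk values are placeholders, not the actual vectfit result); the comment notes the usual reason a plain swap fails:

from scipy import signal

# Placeholder fitted filter (NOT the actual MCL actuator fit)
z, p, k = [0.95, 0.90], [0.99, 0.80], 2.0

# Naive inversion: swap zeros and poles and invert the overall gain
z_inv, p_inv, k_inv = p, z, 1.0 / k

# This only gives a usable filter if the original zeros are inside the unit
# circle (minimum phase) and the pole/zero counts match; otherwise the
# "inverse" is unstable or acausal, which may be why a plain swap fails.
b_inv, a_inv = signal.zpk2tf(z_inv, p_inv, k_inv)
print(b_inv, a_inv)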

2. Wiener filtering

The idea was to use the adaptive filtering only for small corrections to the wiener filtering. So we really should try to get the wiener filtering going.

Howto:

1. Get data for STS1X and GUR1X and MC_F in matlab. E.g. via ligodv
2. Check the MC was in lock the entire time.
3. Filter MC_F with the actuator TF, so the wiener filter knows about that and compensates for it
4. Calculate the wiener filter " h1winolevLigoDV.m "
5. Export the data to the workspace. It is also saved to the disc as "h1filtcoeffTS.mat". Make sure there are first the witnesses, then MC_F
6. Execute " /cvs/cds/caltech/apps/mDV/extras/LHO/firfit_for_FotonMirko.m" while one directory higher. 
7. Copy the digital filter in SOS form that is printed into the matlab command line and put it into the corresponding filter in the OAF model via foton.

With data from 11-11-15 04:00 to 05:45. Sampling freq. 256 Hz. 8000 taps => length = 30.2 s. Prefiltered to notch the 60 Hz line in MC_F, but not compensating for the actuator TF. This results in the following wiener filter and corresponding SOS filter to be copied into foton.
STS1X:

STS1X_Wiener_filter_data_from_11-11-15.png

GUR1X:

GUR1X_Wiener_filter_data_from_11-11-15.png
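For reference, a rough single-witness sketch of the Wiener-filter calculation in step 4 of the howto above (synthetic stand-in data; the actual calculation used h1winolevLigoDV.m with both witnesses):

import numpy as np
from scipy.linalg import solve_toeplitz

def fir_wiener(witness, target, n_taps):
    """FIR Wiener filter from one witness to one target via the Wiener-Hopf equations."""
    n = len(witness)
    r = np.array([np.dot(witness[:n - k], witness[k:]) for k in range(n_taps)]) / n
    p = np.array([np.dot(witness[:n - k], target[k:]) for k in range(n_taps)]) / n
    return solve_toeplitz(r, p)          # solves R w = p, with R Toeplitz

# Synthetic stand-ins for GUR1X and MC_F at 256 Hz (NOT real data)
rng = np.random.default_rng(0)
wit = rng.standard_normal(256 * 600)
tgt = np.convolve(wit, [0.5, -0.2, 0.1], mode='same') + 0.1 * rng.standard_normal(wit.size)
w = fir_wiener(wit, tgt, n_taps=256)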

Attachment 3: MCL_round_trip.fig
Attachment 6: STS1X_Wiener_filter_data_from_11-11-15.fig
Attachment 7: GUR1X_Wiener_filter_data_from_11-11-15.fig
5901 | Tue Nov 15 23:44:44 2011 | Mirko | Update | CDS | C1:LSC & C1:SUS restarted

Earlier this evening C1:LSC died when I hit the DAQ reload after adding an OAF channel to be recorded. No change to any model. Had to restart C1:SUS too. Reloaded burts from this morning 5am, except for C1:IOO, which I loaded from 16:07.

5915 | Wed Nov 16 17:40:48 2011 | Mirko | Update | IOO | MC unlocked and misaligned.

MC fell out of lock and was then quite badly misaligned. Mostly in pitch. I realigned it and it locked ok.

Turns out the MC often falls out of lock when the WFS servo comes on. I think the MC2_Trans history is not cleared on lockloss. I cleared it manually and realigned. Seems fine for now.

5949 | Fri Nov 18 15:45:11 2011 | Mirko | Update | IOO | Mode cleaner noise projection

[Rana, Den, Mirko]

Updated the MC noise projection to include the longitudinal motion of the MC mirrors.

WholeMCNoiseProjection.png

=> Lots of OSEM - local damping noise!

Consistent with the static Wiener filter showing benefits only in the 1 - 4 Hz region.

Attachment 2: WholeMCNoiseProjection.fig
5952 | Fri Nov 18 19:57:19 2011 | Mirko | Update | IOO | Mode cleaner noise projection

 

Some more info on this:
 

f > 1 Hz:

At these frequencies the pendulum should be quieter than the stacks, by quite a bit actually, since there is a stack resonance at a couple of Hz. 'Glueing' them together via the local control is not wise. We put an elliptic LP (2.5 Hz, 4th order, 6 dB) into the C1:SUS-MC?_SUSPOS paths and MC-F got better above 1 Hz.

MC_ELP.png

Added an extra LP @ 10 Hz afterwards. Doesn't make a visible difference.
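A rough scipy sketch of these SUSPOS filter shapes (the 16 kHz model rate is assumed; the single "6 dB" figure for the elliptic LP is ambiguous, so 1 dB ripple / 40 dB attenuation are assumed here, matching the conventions of the other foton designs in this log):

from scipy import signal

fs = 16384.0   # assumed SUS model rate

# Elliptic low-pass described above: 2.5 Hz, 4th order (ripple/attenuation assumed)
lp = signal.ellip(4, 1, 40, 2.5, btype='lowpass', output='sos', fs=fs)

# Chebyshev type-II high-pass used below to reduce the OSEM gain:
# 0.3 Hz, 4th order, 6 dB stopband attenuation as stated
hp = signal.cheby2(4, 6, 0.3, btype='highpass', output='sos', fs=fs)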

f < 1 Hz:

Now here is more stuff to consider.

1. The OSEMs glue the MC mirrors to the stacks
2. The pendulum TF should be 1
3. It shouldn't matter if the OSEMs do or do not act on the mirror at these frequencies, assuming they don't add extra noise.
4. Page http://nodus.ligo.caltech.edu:8080/40m/5547 seems to indicate OSEM sensor noise is so low it can be neglected.

Reduced OSEM gain below 1Hz:

If we reduce the gain in the OSEMs by adding additional HP filters (cheby2, HP 0.3 Hz, 6 dB, 4th order) the following happens:

1. MC length gets a bit more noisy at low frequencies - should be looked into some more
2. Coherence between the GUR1 seismometer and MC length goes up between 1E-2 and 1E-1 Hz:

( Ref is with low OSEM gain )

WithAndWithoutHPs.pdf

Possible explanation:

The stacks might move together more coherently than the pendulums do. This would be not so nice for the OAF test, but really fine for actually using the MC.
Todo: Measure the OSEM to seismometer coherences with high and low OSEM gains.

For reference the seismometer coherence with one another:
SeismCoh.pdf

 

 

5960 | Sat Nov 19 12:57:55 2011 | Mirko | Update | IOO | Mode cleaner noise projection

Quote:

Could use some more detail on how this measurement was done. It looks like you used the SUSPOS signal with the mirror moving, however, this is not what we want. Of course, the SUSPOS with the mirror moving will always show the mirror motion because the OSEMs are motion sensors.

Instead, what we want is to project how the actual OSEM noise in the presence of no signal shows up as MC length. For that we should use the old traces of the OSEM noise with no magnets and then inject that spectrum of noise into the SUSPOS filter bank with all the loops running. We can then use this TF to estimate the projection of OSEM noise into the MC length.

As far as improving the damping filter, the 2.5 LP is not so hot since it doesn't help at low frequencies. Instead, one can compute the optimal filter for the SUSPOS feedback given the correct cost function. To first order this turns out to be the usual velocity damping filter but with a resonant gain at the pendulum resonance. This allows us to maintain the same gain at the pendulum mode but ~3x lower gain at other frequencies.

In the past, we had some issues with this due to finite cross-coupling with the angular loops. It would be interesting to see if we can use the optimal damping feedback now that the SUS DOFs have been diagonalized with the new procedure.

 The measurement was done with the MC in lock and the OSEMS active.

1. I injected noise into MC1-3 SUSPOS_EXC at a level that dominated the SUSPOS output.
2. Then I calculated the coupling coefficients of the SUSPOS outputs to MC_F during the time the noise is injected.
3. Without noise injection I projected the SUSPOS outputs to MC_F by multiplying the coupling coefficients with the SUSPOS outputs.

All on 11-11-18. White noise inj. from 0.1Hz to 20Hz. Duration 4mins each.

DOF      Amplitude(counts)     Time(UTC)
MC1      200                           22:08
MC2      25                             22:25
MC3      25                             22:50
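A rough scipy sketch of steps 2-3 above, i.e. estimating the coupling coefficient from the injection and then projecting the quiet-time SUSPOS spectrum onto MC_F (the sampling rate and FFT length are assumptions):

import numpy as np
from scipy import signal

fs, nperseg = 256.0, 4096    # assumed analysis rate and FFT length

def noise_projection(witness_inj, target_inj, witness_quiet):
    """Coupling |TF| from injection data, then projection of the quiet witness ASD."""
    f, Pxy = signal.csd(witness_inj, target_inj, fs=fs, nperseg=nperseg)
    _, Pxx = signal.welch(witness_inj, fs=fs, nperseg=nperseg)
    coupling = np.abs(Pxy) / Pxx                      # |H1| = |Pxy| / Pxx
    _, Pq = signal.welch(witness_quiet, fs=fs, nperseg=nperseg)
    return f, coupling * np.sqrt(Pq)                  # projected ASD in MC_F units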

Some thoughts on this, bear with me:

As you say this does not show the dark / bright noise of the OSEMs. It shows the influence of the OSEMS output onto MC_F in normal operation of the MC. I would have expected that to be very low everywhere except at the pendulum resonance. Reason for that not to be true could either be the OSEMs having considerable gain off of the resonance, or noise intrinsic to the OSEMs knocking the mirrors around. Since we know the OSEM signal to MC_F TF we only need to compare the OSEM signal to OSEM noise to see the noise contribution to MC_F. We know from http://nodus.ligo.caltech.edu:8080/40m/5547 that the OSEM sensor bright noise is considerably below the OSEM signal above 0.1Hz in actual operation. We checked that the MC OSEM signals are above the noise in the reference above 0.1Hz by a factor 3-10.

We actually measured the cost function with the noise projection (valid to 10Hz). It's just the coupling coefficient, right?

CouplingMClengthsToMCF.pdf

 

Attachment 2: NpModeCleaner.pdf
5961 | Sat Nov 19 15:58:04 2011 | Mirko | Update | IOO | Some more looks into OSEM noise

[Den, Mirko]

We looked some more into the OSEM signals and their coherence to the seismometer signals.

We were able to verify that the coherence between the OSEM sensor and seismometer signals goes down when increasing the OSEM gain. This seems to indicate that the OSEM FB adds noise to the mirror <-> frame distance. We injected some noise into the OSEMs to see how the coherence behaves.

MC2 SUSPOS, 0.1Hz - 0.8Hz, 3mins each

Inj. amplitude   Time (UTC)   Note

-                21:35        Free swinging
-                21:42        Big LF OSEM gain
-                21:48        Small LF OSEM gain
150              21:56        -"-
300              22:00        -"-
900              22:05        -"-

Free swinging:

FreeSwinging.png

High OSEM gain:


LocalDampingOn.pdf

Low OSEM gain:

LowOsemGain.pdf

LowOsemGainInj150.pdf

Low_LF_OSEM_Gain_Inj300.fig

LowOsemGainInj900.pdf

 

We left the filters that lower the OSEM gain below 0.3Hz on.

Attachment 2: High_Osem_Gain.pdf
Attachment 4: Low_LF_OSEM_Gain.fig
5969 | Mon Nov 21 15:47:58 2011 | Mirko | Update | IOO | Osem loop shape

[Jenne, Mirko]

To reiterate: we changed the OSEM loop shape for MC1-MC3. Below, in black, is the old loop shape, which has the simulated pendulum response in it. In red is the new loop shape.

OsemFilterShape.pdf

The differences are due to the extra filters in C1:SUS-MC?_SUSPOS modules 6, 7, 9:

6: Elliptical LP @ 2.5Hz
7: Inverse Chebychev HP @0.3Hz
8: 1st order LP @ 10Hz

This has the potential to be unstable, but is not. At some point these filters should be tuned further.

5971 | Mon Nov 21 17:07:34 2011 | Mirko | Update | CDS | c1pem model dead

For some reason C1PEM doesn't seem to work anymore after a recompilation. It did recompile fine. We just changed some channel / subsystem names.

Tried reverting to the svn version. Doesn't work. Reboot C1SUS also no good.

5973 | Mon Nov 21 22:51:55 2011 | Mirko | Update | CDS | c1pem model dead

Quote:

For some reason C1PEM doesn't seem to work anymore after a recompilation. It did recompile fine. We just changed some channel / subsystem names.

Tried reverting to the svn version. Doesn't work. Reboot C1SUS also no good.

 It is fine again. Thanks Jamie.

5981 | Tue Nov 22 20:45:21 2011 | Mirko | Update | IOO | Measurement of the actuator matrix

Tried measuring the actuator matrix for MC1.

With the watchdogs tripped I cut the loops for pos, pitch and yaw open just before the servos. Then I injected a fixed sine at 0.4Hz into the three DOFs (suspos, suspit, susyaw) one by one, while looking into the error signal just before the servos.

 

                          Response DOF
                          pos         pit         yaw
Injection DOF   pos       0.008417    0.00301     0.004975
                pit       0.01295     0.01959     0.0158
                yaw       0.007188    0.002152    0.0144

Inverting that and dividing by the norm gives us

 0.8322   -0.1096   -0.1669
-0.2456    0.2869   -0.2293
-0.3777    0.0118    0.4211
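A minimal numpy sketch of that inversion and normalization (which norm "the norm" refers to is an assumption; the spectral norm is used here):

import numpy as np

# Measured response matrix from the table above (rows: injection DOF, columns: response DOF)
M = np.array([[0.008417, 0.00301,  0.004975],
              [0.01295,  0.01959,  0.0158  ],
              [0.007188, 0.002152, 0.0144  ]])

M_inv = np.linalg.inv(M)
M_corr = M_inv / np.linalg.norm(M_inv, 2)   # normalize by the spectral norm (assumed)
print(np.round(M_corr, 4))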

Somehow putting this into the 'To coil' matrix has an effect even with the watchdog tripped!?!?

 

5986 | Wed Nov 23 02:34:28 2011 | Mirko | Update | PEM | Seismic spectrum & Striptool

The Striptool for the BLRMS seismic channels is running now. Channels are ( still ) recorded as slow EPICS channels.

A big peak in the 0.1 - 0.3Hz seismic region in both GUR1 and STS1 irritated us for a while. I added an extra LP filter @ 0.05Hz to the RMS_LP modules.

SeismicSpectrum.pdf

 

6004 | Thu Nov 24 20:22:42 2011 | Mirko | Update | IOO | F2A filter for MC

I calculated the F2A filters for the input mode cleaner optics as described in T010140-01-D eq. (4). On Rana's recommendation I added an s / (w_0 * Q) term to the numerator.

The values used are:

w_0 = 2*pi / s
h = 0.0009
D = 2.46957e-2
Q = 10

UpperCoils.pdf

LowerCoils.pdf

I put these filters into C1:SUS-MC1_TO_COIL_1_1 to _4_1. For convenience they are split into Z and P. Well, it doesn't work. After a few seconds the optic begins to swing wildly.

6005 | Fri Nov 25 12:46:13 2011 | Mirko | Update | WienerFiltering | Wiener filtering tryout

Tried out the Wiener filter with the TFs from p. 5900:

WienerFiltering.pdf

Adding a filter element that compensates the actuator TF makes the MC lose lock.

6011 | Fri Nov 25 22:11:12 2011 | Mirko | Update | CDS | Beware of fancy filter modules

[Rana, Den, Mirko]

It seems you can shoot yourself in the foot if your filter modules are too complex.

Den discovered this when looking into the C1:SUS-MC?_SUSPOS filter module named Cheby, consisting of cheby1("LowPass",6,1,12)cheby1("LowPass",2,0.1,3)gain(1.13501), by noticing that the coherence between the input and output of the filter is low.

Cheby filter:

Cheby.png

CoherenceCheby.pdf

This is most likely due to the filter requiring more than the ~16 decimal digits of precision that the double data type provides.

The coherence is fine when one splits the filter in two, giving every cheby1 filter its own module. The coherence is also fine when you use the Cheby filter in a 2kHz system, although the freq. response looks very odd

Black: 16kHz, Red 2kHz (yes the filter was converted correctly, no text file editing there)

ChebyAt16kHzBlackand2kHzRed.png

The problem occurs on c1lsc as well as c1sus computer.

 

Looking into the foton files actually points to a precision problem, with the huge range of scale covered in there:

C1:MCS 16kHz (Cheby: Original filter with low coherence. CHbyTST & ChebyTST: Original filter split amongst two filter modules)
################################################################################
### SUS_MC3_LSC                                                              ###
################################################################################
# DESIGN   SUS_MC3_LSC 0 zpk([0],[30],0.333333,"n")
# DESIGN   SUS_MC3_LSC 1 cheby1("LowPass",6,1,12)
# DESIGN   SUS_MC3_LSC 2 cheby1("LowPass",2,0.1,3)gain(1.13501) \
#                       
# DESIGN   SUS_MC3_LSC 3 cheby1("LowPass",2,0.1,3)gain(1.13501)cheby1("LowPass",6,1,12)
# DESIGN   SUS_MC3_LSC 4 ellip("BandStop",4,1,40,16.1,16.9)ellip("BandStop",4,1,40,23.7,24.5)gain(1.25871)
###                                                                          ###
SUS_MC3_LSC 0 12 1  32768      0 30:0.0          9.942903833923793  -0.9885608209680459   0.0000000000000000  -1.0000000000000000   0.0000000000000000
SUS_MC3_LSC 1 21 3      0      0 CHbyTST     9.095012702673064e-18  -1.9978637592754149   0.9978663974923444   2.0000000000000000   1.0000000000000000
                                                                 -1.9984258494490537   0.9984376515442090   2.0000000000000000   1.0000000000000000
                                                                 -1.9994068831713223   0.9994278587363880   2.0000000000000000   1.0000000000000000
SUS_MC3_LSC 2 12 1  32768      0 ChebyTST    1.228759186937126e-06  -1.9972699801052749   0.9972743606395355   2.0000000000000000   1.0000000000000000
SUS_MC3_LSC 3 12 4  32768      0 Cheby       1.117558041371939e-23  -1.9972699801052749   0.9972743606395355   2.0000000000000000   1.0000000000000000
                                                                 -1.9978637592754149   0.9978663974923444   2.0000000000000000   1.0000000000000000
                                                                 -1.9984258494490537   0.9984376515442090   2.0000000000000000   1.0000000000000000
                                                                 -1.9994068831713223   0.9994278587363880   2.0000000000000000   1.0000000000000000
SUS_MC3_LSC 4 12 8  32768      0 BounceRoll     0.9991466189294013  -1.9996634951844035   0.9997010181703262  -1.9999611719719754   0.9999999999999997
                                                                 -1.9999303040590390   0.9999684339228864  -1.9999605309876360   0.9999999999999999
                                                                 -1.9999248796830529   0.9999668732412945  -1.9999594299327190   1.0000000000000002
                                                                 -1.9996385459838455   0.9996812069238987  -1.9999587601905868   1.0000000000000000
                                                                 -1.9996161812709703   0.9996978939989944  -1.9999163485656493   0.9999999999999999
                                                                 -1.9998855694973159   0.9999681878303275  -1.9999154056705493   0.9999999999999998
                                                                 -1.9998788577090287   0.9999671193335300  -1.9999137972442669   1.0000000000000000
                                                                 -1.9995951159123118   0.9996843310430819  -1.9999128255920269   1.0000000000000000

C1:OAF 2kHz
###############################################################################
### YARM_IN                                                                  ###
################################################################################
# DESIGN   YARM_IN 0 zpk([0],[30],0.333333,"n")
# DESIGN   YARM_IN 3 cheby1("LowPass",6,1,12)cheby1("LowPass",2,0.1,3)gain(1.13501)
# DESIGN   YARM_IN 4 ellip("BandStop",4,1,40,16.1,16.9)ellip("BandStop",4,1,40,23.7,24.5)gain(1.25871)
# DESIGN   YARM_IN 8 cheby1("LowPass",6,1,12)cheby1("LowPass",2,0.1,3)gain(1.13501)zpk([],[10],1,"n")
###                                                                          ###
YARM_IN  0 12 1   4096      0 30:0.0           9.56649943398763  -0.9119509539166185   0.0000000000000000  -1.0000000000000000   0.0000000000000000
YARM_IN  3 12 4   4096      0 Cheby       1.829878084970283e-16  -1.9828889048300398   0.9830565293861987   2.0000000000000000   1.0000000000000000
                                                                 -1.9868188576622443   0.9875701115261976   2.0000000000000000   1.0000000000000000
                                                                 -1.9940934073784453   0.9954330165532327   2.0000000000000000   1.0000000000000000
                                                                 -1.9781245722853238   0.9784022621062476   2.0000000000000000   1.0000000000000000
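The listing above shows overall gains as small as ~1e-23 sitting next to order-one biquad coefficients. A generic scipy illustration of this class of precision problem (not a reproduction of the RCG filter code): an 8th-order low-pass with a corner far below Nyquist, evaluated from a single direct-form (b, a) polynomial versus a second-order-section cascade.

import numpy as np
from scipy import signal

fs = 16384.0   # assumed 16 kHz model rate

# 8th-order Chebyshev low-pass at 12 Hz, similar in spirit to the Cheby module above
ba  = signal.cheby1(8, 1, 12.0, btype='lowpass', output='ba',  fs=fs)
sos = signal.cheby1(8, 1, 12.0, btype='lowpass', output='sos', fs=fs)

f = np.logspace(-1, 3, 500)
_, h_ba  = signal.freqz(*ba, worN=f, fs=fs)
_, h_sos = signal.sosfreqz(sos, worN=f, fs=fs)

# The direct-form coefficients span many orders of magnitude, so the two
# representations of the same design can disagree once double precision runs out.
err_db = 20 * np.log10(np.abs(h_ba) / np.abs(h_sos))
print("max |direct-form vs SOS| deviation: %.1f dB" % np.max(np.abs(err_db)))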

Attachment 1: ChebyTST3.png
6012 | Fri Nov 25 23:25:24 2011 | Mirko | Update | IOO | F2A filter for MC

Quote:

Woo. Pretty crazy. The numerators should only be ~10% larger than the denominator below 1 Hz. Let's try again.

 [Rana, Mirko]

I redid this calculation. The idea behind it is to get rid of any pitch that is introduced by applying longitudinal feedback to the mirrors. This coupling happens because the center of percussion for pitch, which coincides with the point where the wires lift off the mirror, is above the center of mass.

With the same values as before, just less faulty math and Q = 2 instead of 10 we end up with the following filters:

For the lower coils (red), compared to corresponding preexisting BS filters (black):

F2aForMCcomparedToBS.pdf

The upper coils' TF is just mirrored about the 0 dB magnitude axis, and has a corresponding frequency response.

I switched the F2a filters on for all MC mirrors. For convenience they are split into F2aZeros and F2aPoles. Everything seems fine. The F2a filters seem to be off for ( all ?) other mirrors.

6013 | Sat Nov 26 02:05:43 2011 | Mirko | Update | CDS | Beware of fancy filter modules

 

We replaced the complicated Cheby filter module with three separate filter modules. Probably the filter doesn't need to be so complicated, but we'd rather not change too many things at once. The new filter modules are called
Ch1, Ch2, Ch3 and sit in filter modules 3, 9, and 10 of the C1:SUS-MC?_SUSPOS banks. The coherence with these filters is fine. Someone should look into the possibility of simplifying these filters.

It would be good to check for numerical problems in other filters!

6014 | Sat Nov 26 02:15:42 2011 | Mirko | Update | SUS | Not adaptive, still cool

[Rana, Mirko]

I tried out the virtual pendulum idea today. The idea is to compensate the physical pendulum via the control system, and then add a virtual pendulum formed in the control system. We know the actuator TF from p. 5900 and apply its inverse to the C1:SUS-MC?_SUSPOS filters. We also add a virtual pendulum with Q = 3 and a resonance frequency of 0.1 Hz to the POS control loops.
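A minimal sketch of what such a virtual pendulum looks like as a digital filter (the bilinear discretization and the 16 kHz model rate are assumptions; the actual filter was built in foton):

import numpy as np
from scipy import signal

fs = 16384.0            # assumed model rate
f0, Q = 0.1, 3.0        # virtual pendulum parameters from the text
w0 = 2 * np.pi * f0

# Continuous-time pendulum response w0^2 / (s^2 + (w0/Q) s + w0^2)
num = [w0 ** 2]
den = [1.0, w0 / Q, w0 ** 2]

bz, az = signal.bilinear(num, den, fs=fs)   # discretize for the digital filter bank
print(signal.tf2sos(bz, az))                # foton-style second-order sections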

The result is pretty awesome!

1. Black: Standard config. Wfs on. New Cheby filter in place ( p. 6031 )
2. Red: With virtual pendulum. Extra elliptic LP filter @ 2.5 Hz

PendulumQ0.1HzWithElip2comma5HzLpWfsOnCorrectedShape.pdf

Filter shape:

VirtualPendulumFilterShape.pdf

This is stable for 5-10minutes, at which point it falls out of lock, swinging in yaw.

 

 

Attachment 3: SetupVirtualPendulumV2.png
6057 | Thu Dec 1 03:27:39 2011 | Mirko | Update | SUS | Not adaptive, still cool

Quote:

[Rana, Mirko]

I tried out the virtual pendulum idea today. The idea is to compensate the physical pendulum via the control system, and then add a virtual pendulum formed in the control system. We know the actuator TF from p. 5900 and apply its inverse to the C1:SUS-MC?_SUSPOS filters. We also add a virtual pendulum with Q = 3 and a resonance frequency of 0.1 Hz to the POS control loops.

The result is pretty awesome!

1. Black: Standard config. Wfs on. New Cheby filter in place ( p. 6031 )
2. Red: With virtual pendulum. Extra elliptic LP filter @ 2.5 Hz

PendulumQ0.1HzWithElip2comma5HzLpWfsOnCorrectedShape.pdf

Filter shape:

VirtualPendulumFilterShape.pdf

This is stable for 5-10minutes, at which point it falls out of lock, swinging in yaw.

 

 

In the above entry the MC_F signal is improved off resonance by the implementation of the pendulum compensation. It should be checked whether this is actually due to the implementation of the virtual pendulum at 0.1 Hz. A way to check that might be to implement a simple double LP at 0.1 Hz instead and look at the resulting MC_F signal. A projection of the OSEM FB noises with the compensation active might be interesting.
The physical resonance at 1 Hz is clearly not compensated correctly, which is probably the reason for the lock losses after a few minutes. It might be a good start to measure the residual resonance with the compensation in place. The filters in bank 7 of C1:SUS-MC?_SUSPOS have both the compensation of the 1 Hz resonance (really the inverse actuator TF) and the virtual pendulum in them. The 'pure' compensation can be found in the InvTF module in the C1:OAF-ADAPT_MCL_CORR filter.
The fact that the beam swings in yaw at lock loss indicates that the separation of the DOFs might not be perfect. One should have a look into the yaw and pitch DOFs with the compensation active.

6008 | Fri Nov 25 19:45:36 2011 | Mirco, Den | Summary | SUS | Excess Noise in Digital Filtering

Quote:

We are now trying to understand why the coherence between SUS-X_SUSPOS_IN1 and SUS-X_SUSPOS_OUT is lost below 1 Hz for X = MC1, MC2, MC3, BS, ITMX, ITMY, ETMX, ETMY, SRM. It is OK only for PRM, but different filters are used there. For PRM - 30:0.0 and Bounce Roll, for all others - 30:0.0 and Cheby. The transfer functions between these two signals plotted in foton and fft tools are also not the same.

If we switch off all the filters between these 2 signals, then the coherence is one. If one of the filters is switched on, everything is also fine. But if there are several present, then they filter the signal in an unexpected way.

Moreover it seems that the coherence is dependent on the input signal. The coherence is better with local damping than without.

 We have done the following on the c1sus and c1lsc computers: provided an excitation, let the signal pass through the filters 30:0.0 and Cheby, and plotted the coherence between the in and out signals.

c1sus computer - coherence is corrupted

c1lsc computer - coherence is not corrupted

Attachment 1: sus_coh-crop.pdf
Attachment 2: lsc_coh-crop.pdf
14626 | Mon May 20 21:45:20 2019 | Milind | Update |  | Traditional cv for beam spot motion

Went through all of Pooja's elog posts, her report and am currently cleaning up her code and working on setting up the simulations of spot motion from her work last year. I've also just begun to look at some material sent by Gautam on resonators.

This week, I plan to do the following:

1) Review Gabriele's CNN work for beam spot tracking and get his code running.

2) Since the relation between the angular motion of the optic and beam spot motion can be determined theoretically, I think a neural network is not mandatory for tracking the beam spot motion. I strongly believe that a more traditional approach such as thresholding, followed by a Hough transform, ought to do the trick as the contours of the beam spot are circles (a rough sketch follows this list). I did try a quick and dirty implementation today using opencv and ran into the problem of no detection or detection of spurious circles (the number of which decreased with the increased application of median blur). I will defer a more careful analysis of this until step (1) is done, as Gautam has advised.

3) Clean up Pooja's code on beam tracking and obtain the simulated data.

4) Also data like this  (https://drive.google.com/file/d/1VbXcPTfC9GH2ttZNWM7Lg0RqD7qiCZuA/view) is incredibly noisy. I will look up some standard techniques for cleaning such data though I'm not sure if the impact of that can be measured until I figure out an algorithm to track the beam spot.
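A rough opencv sketch of the thresholding / median-blur / Hough-transform idea from item (2) above (all Hough parameters below are placeholders, not tuned values):

import cv2
import numpy as np

def find_beam_circles(frame):
    """Median blur followed by a Hough circle transform on one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame
    blurred = cv2.medianBlur(gray, 5)            # heavier blur suppressed spurious circles
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=0)
    return None if circles is None else np.round(circles[0]).astype(int)   # rows of (x, y, r)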

 

A more interesting question Gautam raised was the validity of using the beam spot motion for detection of angular motion in the presence of other factors such as surface irregularities. Another question is the relevance of using the beam spot motion when the oplevs are already in place. It is not immediately obvious to me how I can ascertain this and I will put more thought into this.

14632 | Thu May 23 08:51:30 2019 | Milind | Update | Cameras | Setting up beam spot simulation

I have done the following thus far since elog #14626:

Simulation:

  1. Cleaned up Pooja's code for simulating the beam spot. Added extensive comments and made the code modular. Simulated the Gaussian beam spot to exhibit 
    1. Horizontal motion
    2. Vertical motion
    3. motion along both x and y directions:
  2. The motion exhibited in any direction in the above videos is the combination of four sinusoids at the frequencies 0.2, 0.4, 0.1, 0.3 Hz, with amplitudes that can be found as defaults in the script ((0.1, 0.04, 0.05, 0.08)*64 for these simulations); a minimal sketch of this frame generation follows this list. The variation looks as shown in Attachment 1. For the sake of convenience I have created the above video files with only a hundred frames (fps = 10, total time ~ 10s) and this took around 2.4s to write. Longer files need much longer. As I wish to simply perform image processing on these frames immediately, I don't see the need to obtain long video files right away.
  3. I have yet to add noise at the image level and randomness to the motion itself.  I intend to do that right away. Currently video 3 will show you that even though the time variation of the coordinates of the center of the beam is sinusoidal, the motion of the beam spot itself is along a line as both x and y motions have the same phase. I intend to add the feature of phase between the motion of x and y coordinates of the center of the beam, but it doesn't seem all too important to me right now. The white margins in the videos generated are annoying and make tracking the beam spot itself slightly difficult as they introduce offset (see below). I shall fix them later if simple cropping doesn't do the trick.
  4. I have yet to push the code to git. I will do that once I've incorporated the changes in (3).
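A minimal sketch of the frame generation described in item 2 above (frame size, spot width and 8-bit scaling are assumptions; this is not the actual cleaned-up script):

import numpy as np

def beam_frame(t, n=64, sigma=5.0,
               freqs=(0.2, 0.4, 0.1, 0.3), amps=(0.1, 0.04, 0.05, 0.08)):
    """Gaussian spot whose centre moves as a sum of sinusoids (amplitudes are fractions of n)."""
    dx = sum(a * n * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
    x0 = y0 = n / 2 + dx                       # same phase in x and y, as noted above
    y, x = np.mgrid[0:n, 0:n]
    img = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return (255 * img).astype(np.uint8)

frames = [beam_frame(i / 10.0) for i in range(100)]   # fps = 10, 100 frames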

Circle detection:

  1. If the beam spot intensity variation is indeed Gaussian (as it definitely is in the simulation), then the contours are circular. Consequently, centroid detection of the beam spot reduces to detecting these contours and then finding their centroid (center). I tried this for a simulated video I found in elog post 14005. It was a quick implementation of the following sequence of operations: threshold (arbitrarily set to 127), contour detection (video dependent and needs to be done manually), centroid determination from the required contour. It's evident that the beam spot is being tracked (green circle in the video). Check Attachment #2 for the results. However, no other quantitative claims can be made in the absence of other data.
  2. Following this, Gautam pointed me to a capture in elog post 13908. Again, the steps mentioned in (1) were followed and the results are presented below in Attachment #3. However, this time the contour is no longer circular but distorted. I didn't pursue this further. This test was just done to check that this approach does extend (even if not seamlessly) to real data. I'm really looking forward to trying this with this real data.
  3. So far, the problem has been that there is no source data to compare the tracked centroid with. That ought to be resolved with the use of simulated data that I've generated above. As mentioned before, some matplotlib features such as saving with margins introduce offsets in the tracked beam position. However, I expect to still be able to see the same sinusoidal motion. As a quick test, I'll obtain the fft of the centroid position time series data and check if the expected frequencies are present.

I will wrap up the simulation code today and proceed to going through Gabriele's repo. I will also test if the contour detection method works with the simulated data. During our meeting, it was pointed out that when working with real data, care has to be taken to synchronize the data with the video obtained. However, I wish to put off working on that till later in the pipeline as I think it doesn't affect the algorithm being used. I hope that's alright (?).
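A rough opencv sketch of the threshold / contour / centroid sequence described above (OpenCV 4 return convention assumed; 127 is the arbitrary threshold quoted above):

import cv2

def track_beam_spot(frame, threshold=127):
    """Threshold, take the largest external contour, return its centroid in pixels."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    spot = max(contours, key=cv2.contourArea)   # assume the beam is the largest blob
    m = cv2.moments(spot)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]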

 

Attachment 1: variation.pdf
Attachment 2: contours_simulated.mp4
Attachment 3: contours_real.mp4
14635 | Thu May 23 15:37:30 2019 | Milind | Update | Cameras | Simulation enhancements and performance of contour detection
  1. Implemented image level noise for simulation. Added only uniform random noise.
  2. Implemented addition of uniform random noise to any sinusoidal motion of beam spot.
  3. Implemented motion along y axis according to data in "power_spectrum" file.
  4. Implemented simulation of random motion of beam spot in both x and y directions (done previously by Pooja, but a cleaner version).
  5. Created a video file for 10s with motion of beam spot along the y direction as given by Attachment #1. This was created by mixing four sinusoids at different amplitudes (frequencies (0.1, 0.2, 0.4, 0.8) Hz Amplitudes as fractions of N = 64 (0.1 0.09 0.08 0.09). FPS = 10. Total number of frames = 100 for the sake of convenience.  See Attachment #5.
  6. Following this, I used the thresholding (threshold = 127, chosen arbitrarily), contour detection and centroid computation sequence (see Attachment #6 for results) to obtain the plot in Attachment 2 for the predicted motion of the y coordinate. As is evident, the centering and scale of values obtained are off and I still haven't figured out how to precisely convert from one to another.
  7. Consequently, as a workaround, I simply normalised the values corresponding to each plot by subtracting the mean in each case and dividing the resulting series of values by their maximum. This resulted in the plots in Attachments 3 and 4 which show the normalised values of y coordinate variation and the error between the actual and predicted values between 0 and 1 respectively.

Things yet to be done:

Simulation:

  1. I will implement the mean square error function to compute the relativer performance as conditions change.
  2. I will add noise both to the image and to the motion (meaning introduce some randomness in the motion) to see how the performance, determined by both the curves such as the ones below and the mean square error, changes.
  3. Following this, I will vary the standard deviation of the beam spot along X and Y directions and try to obtain beam spot motion similar to the video in Attachment #2 of elog post 14632.
  4. Currently, I have made no effort to carefully tune the parameters associated with contour detection and threshold and have simply used the popular defaults. While this has worked admirably in the case of the simple simulated videos, I suspect much more tweaking will be needed before I can use this on real data.
  5. It is an easy step to determine the performance of the algorithm for random, circular and other motions of the beam spot. However, I will defer this till later as I do not see any immediate value in this.
  6. Determine noise threshold. In simulation or with real data: obtain a video where the beam spot is ideally motionless (easy to do with simulated data) and then apply the above approach to the video and study the resulting predicted motion. In simulation, I expect the predictions for a motionless beam spot video (without noise) to be constant. Therefore, I shall add some noise to the video and study the prediction of the algorithm.
  7. NOTE: the above approach relies on some previous knowledge of what the video data will look like. This is useful in determining which contours to ignore, if any like the four bright regions at the corners in this video.

Real data:

  1. Obtaining real data and evaluate if the algorithm is succesful in determining contours which can be used to track the beam spot.
  2. Once the kind of video feed this will be used on is decided, use the data generated from such a feed to determine what the best settings of hyperparameters are and detect the beam spot motion.
  3. Synchronization of data stream regarding beam spot motion and video.
  4. Determine the calibration: anglular motion of the optic to beam spot motion on the camera sensor to video to pixel mapping in the frames being processed.

Other approaches:

  1. Review work done by Gabriele with CNNs, implement it and then compare performance with the above method.
Attachment 1: actual_motion.pdf
Attachment 2: predicted_motion.pdf
Attachment 3: normalised_comparison.pdf
Attachment 4: residue_normalised.pdf
Attachment 5: simulated_motion1.mp4
Attachment 6: elog_22may_contours.mp4
14638 | Sat May 25 20:29:08 2019 | Milind | Update | Cameras | Simulation enhancements and performance of contour detection
  1. I used the same motion as defined in the previous elog. I gradually added noise to the images. The noise added was uniform random noise - a 2-dimensional array of random numbers between 0 and a predetermined maximum (noise_amp). The previous elog provides the variation of the y coordinate. Here, I am also uploading the effect of noise on the error in the prediction of the x coordinate. As a reminder, the motion of the beam spot center was purely vertical. Attachment #1 is the error for noise_amp = 0, #2 for noise_amp = 20 and #3 for noise_amp = 40. While Attachment #3 does give the impression of there being a large error, this is not really the case, as without normalization each peak corresponds to a deviation of one pixel about the central value; see Attachment #4 for reference.
  2. While the error does increase marginally, adding noise has no significant effect on the prediction of the y coordinate of the centroid as Attachment #5 shows at noise_amp = 40.
  3. I am currently running an experiment to obtain the variation of mean square error with different noise amplitudes and will put up the plots soon. Further, I shall vary the resolution of the image frames and the standard deviation of the Gaussian beam with time, try to obtain simulations very close to the real data available, and then determine the performance of the algorithm.
  4. The following videos will serve as a quick reference for what the videos and detection look like at
    1. noise_amp = 20
    2. noise_amp = 40
  5. I also performed a quick experiment to see how low the amplitude of motion could be before the algorithm failed to detect the motion, and found this to occur at 2 orders of magnitude below the values used in the previous post. This is a line of thought I intend to pursue more carefully; I am looking into how opencv and python handle images with floats as coordinates and will provide more details about the previous trial soon. This should give us an idea of what the smallest resolvable motion of the beam spot is.
Quote:
  1. Implemented image level noise for simulation. Added only uniform random noise.
  2. Implemented addition of uniform random noise to any sinusoidal motion of beam spot.
  3. Implemented motion along y axis according to data in "power_spectrum" file.
  4. Implemented simulation of random motion of beam spot in both x and y directions (done previously by Pooja, but a cleaner version).
  5. Created a video file for 10s with motion of beam spot along the y direction as given by Attachment #1. This was created by mixing four sinusoids at different amplitudes (frequencies (0.1, 0.2, 0.4, 0.8) Hz Amplitudes as fractions of N = 64 (0.1 0.09 0.08 0.09). FPS = 10. Total number of frames = 100 for the sake of convenience.  See Attachment #5.
  6. Following this, I used the thresholding (threshold = 127, chosen arbitrarily), contour detection and centroid computation sequence (see Attachment #6 for results) to obtain the plot in Attachment 2 for the predicted motion of the y coordinate. As is evident, the centering and scale of values obtained are off and I still haven't figured out how to precisely convert from one to another.
  7. Consequently, as a workaround, I simply normalised the values corresponding to each plot by subtracting the mean in each case and dividing the resulting series of values by their maximum. This resulted in the plots in Attachments 3 and 4 which show the normalised values of y coordinate variation and the error between the actual and predicted values between 0 and 1 respectively.

Things yet to be done:

Simulation:

  1. I will implement the mean square error function to compute the relativer performance as conditions change.
  2. I will add noise both to the image and to the motion (meaning introduce some randomness in the motion) to see how the performance, determined by both the curves such as the ones below and the mean square error, changes.
  3. Following this, I will vary the standard deviation of the beam spot along X and Y directions and try to obtain beam spot motion similar to the video in Attachment #2 of elog post 14632.
  4. Currently, I have made no effort to carefully tune the parameters associated with contour detection and threshold and have simply used the popular defaults. While this has worked admirably in the case of the simple simulated videos, I suspect much more tweaking will be needed before I can use this on real data.
  5. It is an easy step to determine the performance of the algorithm for random, circular and other motions of the beam spot. However, I will defer this till later as I do not see any immediate value in this.
  6. Determine noise threshold. In simulation or with real data: obtain a video where the beam spot is ideally motionless (easy to do with simulated data) and then apply the above approach to the video and study the resulting predicted motion. In simulation, I expect the predictions for a motionless beam spot video (without noise) to be constant. Therefore, I shall add some noise to the video and study the prediction of the algorithm.
  7. NOTE: the above approach relies on some previous knowledge of what the video data will look like. This is useful in determining which contours to ignore, if any like the four bright regions at the corners in this video.

Real data:

  1. Obtaining real data and evaluate if the algorithm is succesful in determining contours which can be used to track the beam spot.
  2. Once the kind of video feed this will be used on is decided, use the data generated from such a feed to determine what the best settings of hyperparameters are and detect the beam spot motion.
  3. Synchronization of data stream regarding beam spot motion and video.
  4. Determine the calibration: anglular motion of the optic to beam spot motion on the camera sensor to video to pixel mapping in the frames being processed.

Other approaches:

  1. Review work done by Gabriele with CNNs, implement it and then compare performance with the above method.

 

Attachment 1: residue_normalised_x.pdf
Attachment 2: residue_normalised_x.pdf
Attachment 3: residue_normalised_x.pdf
Attachment 4: predicted_motion_x.pdf
Attachment 5: normalised_comparison_y.pdf
14649 | Mon Jun 3 21:03:54 2019 | Milind | Update | Cameras | Steps to interact with GigE

The following steps summarize how to set up and interact with a GigE camera.

Launching the PylonViewerApp:

  1. Open a new terminal using Ctrl + Alt + T on the keyboard.
  2. Launch the app using the command pylon.

Using setup python scripts to interact with the GigE (a summary of the steps listed here and here)

  1. Connect the GigE camera to the ethernet cable and record its IP address. If the IP address is not printed on the GigE, launch the PylonViewerApp and navigate to the "Tools" dropdown menu and select "pylon IP configurator" to be presented with a list of all connected cameras and their IP addresses.
  2. To simply observe the camera feed, open a new terminal and run the following commands:
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python camera_server.py -c C1-CAM-ETMX.ini  (only one config file is present currently and more will be added as more cameras are set up. The "Camera IP" in the  .ini file must match that determined in step 1). This starts the camera server.
  3. Open a new tab (Ctrl + Shift + T on the keyboard) in the terminal. You should still be in the same directory as navigated to in step 2.1. Run the following command.
    1. python camera_client.py -c C1-CAM-ETMX.ini
  4. This should bring up a feed from the camera. Close at will.
  5. To record a video file, repeat steps 1 and 2. Open a new tab as described in step 3. Then run the following command:
    1. python camera_client_movie.py -c C1-CAM-ETMX.ini
  6. Enter the full path to the file where you wish to save the movie in the prompt that appears. Use ./your_file_name_here.avi to save the video in the working directory. Press Ctrl + C to stop recording. The recording can be played by navigating to the location where the recording is stored and running vlc your_file_name_here.avi.
  7. To adjust the exposure setting of the camera, open a new terminal and run the command sitemap . This should bring up the medm display in Attachment #1. Click on the Video/Lights button highlighted in red and select GigE. Adjust the exposure value in the next window using the slider before starting the server in step 1. Adjusting the slider once the server is started causes the program to freeze. Also set the Snapshot channel C1:CAM-ETMX_SNAP to off as mentioned in elog 14037.

 

Upcoming updates:

  1. Automatic script to run the above steps.
  2. Pre-determining the time duration of the recorded video.
  3. Obtaining snapshots.

 

Attachment 1: sitemap.pdf
sitemap.pdf
  14650   Mon Jun 3 23:18:59 2019 MilindUpdateComputer Scripts / Programsupdating bashrc

I was working with the git repo in the SnapPy_pypylon folder (/cvs/cds/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon) and needed to create a branch. To avoid any confusion, I modified only the PS1 variable in the bashrc file so that the prompt now displays the current git branch when you enter a repository. This is just an update.

  14654   Tue Jun 4 22:24:45 2019 MilindUpdateCamerasSteps to interact with GigE

Figured out how to get/grab frames by looking at the pypylon documentation, as that turned out to be easier than modifying Jon's code. Still not sure how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). I will figure that out tomorrow and make a script suitable for Kruthi's use (obtain a bunch of images at different exposure times). I will also try to integrate the video saving and streaming code into this and have a neat little script set up asap.
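
For reference, a minimal frame-grab sketch along the lines of the standard pypylon examples (this is not Jon's code; the exposure node name in particular varies between camera models, so treat that part as an assumption to be checked in the pylon viewer):

# Minimal pypylon frame grab, following the standard Basler examples.
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Exposure node name varies by model ("ExposureTimeAbs" in microseconds on many GigE
# cameras, "ExposureTime" on others) -- adjust before uncommenting:
# camera.ExposureTimeAbs.SetValue(2000.0)

camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
result = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if result.GrabSucceeded():
    frame = result.Array          # numpy array holding the raw pixel data
    print(frame.shape, frame.dtype)
result.Release()
camera.StopGrabbing()
camera.Close()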

Quote:

Upcoming updates:

  1. Automatic script to run the above steps.
  2. Pre-determining the time duration of the recorded video.
  3. Obtaining snapshots.
  14656   Wed Jun 5 22:30:13 2019 MilindUpdateCamerasSteps to interact with GigE

Thanks! It does indeed do the trick! With that I was able to

  1. Obtain current exposure value using the terminal command caget C1:CAM-ETMX_EXP
  2. Set exposure value using the terminal command caput C1:CAM-ETMX_EXP <desired_exposure_value>

Further, a quick look at the camera server code in /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py revealed that the script expects the details of "Number of Snapshots" in "Camera Settings" in the configuration file, i.e. in C1-CAM-ETMX.ini (at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini), which wasn't present before. Adding this parameter to the config file allows one to take a snapshot using the medm screen. In fact, unlike what is described in this elog, I was able to start the server and client as described in elog 14649 and then obtain snapshots using the terminal command caput C1:CAM-ETMX_SNAP 1.
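
For completeness, the same get/set/snapshot sequence can be done from Python rather than the shell, assuming the pyepics module is available on the workstation (an assumption; the channel names are the ones quoted above):

# Sketch: drive the camera EPICS channels from Python instead of caget/caput.
import time
import epics

print("current exposure:", epics.caget("C1:CAM-ETMX_EXP"))
epics.caput("C1:CAM-ETMX_EXP", 2000)     # set a new exposure value
time.sleep(1.0)                          # give the server a moment to apply it
epics.caput("C1:CAM-ETMX_SNAP", 1)       # trigger a snapshot, as with the medm button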

Quote:

caget/caput probably does the job.

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

 

  14657   Thu Jun 6 16:01:52 2019 MilindUpdateCamerasSteps to interact with GigE

[Koji, Milind]

 

Today I ran into the following errors:

  1. Inability to access the EPICS channels using the commands caget and caput and thus the generation of a blank medm screen (error in Attachment #1) when simultaneously running the code in camera_server.py (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py).
  2. Inability to run camera_server.py code with an active medm screen with a "... failed to read <EPICS channel>" error.

Therefore, Koji and I took a look at it and, putting our faith in Gautam's hunch from elog 13023, walked down to rack 1Y1 and keyed the crate. Following this, all the functionality previously described was restored! Koji then took a look at all the channels handled by this machine and bestowed upon me the permission to key the crate should I lose control of the GigE again.

Quote:

Thanks! It does indeed do the trick! With that I was able to

  1. Obtain current exposure value using the terminal command caget C1:CAM-ETMX_EXP
  2. Set exposure value using the terminal command caput C1:CAM-ETMX_EXP <desired_exposure_value>

Further, a quick look at the camera server code in /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py revealed that the script expects the details of "Number of Snapshots" in "Camera Settings" in the configuration file i.e in C1-CAM-ETMX.ini at ( /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini) which wasn't present before. Adding this parameter to the config file allows one to take a snapshot using the medm screen. Infact, unlike as described in this elog, I was able to start the server and client as described in elog 14649, and then obtain snapshots using the terminal command  caput C1:CAM-ETMX_SNAP 1.

Quote:

caget/caput probably does the job.

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

 

 

Attachment 1: terminal_medm_error.pdf
terminal_medm_error.pdf
  14661   Mon Jun 10 22:22:19 2019 MilindUpdateCamerasSteps to interact with GigE

Steps to take snapshots using GigE at different exposures [Instructions for Kruthi]:

  1. Set up C1-CAM-ETMX.ini (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini) appropriately. The parameter Number of Snapshots determines how many snapshots will be taken at any given exposure. Set Name Overlay, Time Overlay, Calculation Overlay, Calculations (if using very low values of exposure) and Auto Exposure to False. Ensure that the IP address of the camera in use matches the one in the configuration file.
  2. Launch a server using the following commands (as described in elog 14649)
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python camera_server.py -c C1-CAM-ETMX.ini
  3. Open another terminal in the same directory and then run the following command
    1. python exposure_variation.py --minval <minval> --maxval <maxval> --step <step> where
      1. minval: lower bound of range of exposure values, defaults to 150
      2. maxval: upper bound of range of exposure values, defaults to 100000
      3. step: step size of variation in the range [minval, maxval], defaults to 2000

The python script takes in the above parameters and then takes snapshots by setting the exposure to values starting at minval and going up to maxval, incrementing by step each time. This uses a simple for loop and is nothing elaborate.
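
The loop is essentially of the following form; this is only a sketch of the idea, not a copy of exposure_variation.py, and it assumes the same EPICS channels described in elog 14656 are being driven.

# Sketch of the exposure sweep done by exposure_variation.py (illustrative, not the actual script).
import argparse
import time
import epics

parser = argparse.ArgumentParser(description="Take snapshots over a range of exposures.")
parser.add_argument("--minval", type=int, default=150)
parser.add_argument("--maxval", type=int, default=100000)
parser.add_argument("--step", type=int, default=2000)
args = parser.parse_args()

for exposure in range(args.minval, args.maxval + 1, args.step):
    epics.caput("C1:CAM-ETMX_EXP", exposure)   # set the exposure
    time.sleep(1.0)                            # let the new value take effect
    epics.caput("C1:CAM-ETMX_SNAP", 1)         # server saves "Number of Snapshots" images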


A few unrelated updates:

  1. On a side note, I installed the Sublime Text editor on rossa following the instructions at this site (see the "install using yum" section). I have also installed miniconda but did not set it up fully, as I was in a rush and did not want to disturb any previously set up environment variables.
  2. I have cloned Gabriele's repository and am trying to get it to work on my system. As Gautam has pointed out, the end goal is to get things working on the lab machines, so I will share a .yml file with the necessary environment details upon completion.
  3. In a day or two I will upload details of how I am going to construct the two learning tasks that Rana, Gautam and I discussed, including how simulated data will be used for training in the absence of real data (until Kruthi is done setting up the GigE), which Gautam suggested I do to speed things up.
  14662   Tue Jun 11 00:00:15 2019 MilindHowToPSLSteps to lock the PMC

Today, Rana had me key the PSL crate.

  1. Locating the rack: the crate is 1X1. This link provides details of the locations and functions of the racks.
  2. Keying the crate: the key is located at the bottom of the rack (in this case). Keying it requires one to turn the key through 90 degrees (anticlockwise when facing the rack) and back to the original position.

Locking the PMC:

  1. Accessing the medm screen for the PMC: open a new terminal and use the command sitemap. This should open up the sitemap medm screen. Click on the PSL button and then select C1PSL_PMC from the dropdown that is produced. This opens up a medm screen similar to that in Attachment #1.
  2. The correct toggling: The keying of the crate sometimes scrambles the settings on the medm screen. Rana and I performed extensive toggling of the buttons and concluded that the combination in Attachment #1 ought to be the correct one.
  3. Locking the PMC: The state of the PMC was deduced by observing CH01 on monitor 7. When not locked, there is no observable bright spot. At this point the "Input Offset (V)" slider is set to zero and the "Servo Gain Adjust (dB)" slider is set to minimum. To obtain lock, complete step 2 and then move the "DC Output Adjust (V)"  slider (at the bottom left on the screen) around rapidly while looking for a bright spot. On observing such a spot on the monitor, release the slider and quickly increase the "Servo Gain Adjust (dB)" slider to around 15 dB. Higher gain values produce a bright spot on CH02 as well which vanishes (almost) on decreasing the gain to the aforementioned value.
Attachment 1: pmc_locked_settings.pdf
pmc_locked_settings.pdf
  14667   Wed Jun 12 22:02:04 2019 MilindUpdateCamerasSimulation enhancements

Today, Rana asked me to work on improving simulations based on the ideas we discussed last week. As of the previous elog, the simulation accommodated only

  1. Simulation of Gaussian beam spot.
  2. Arbitrary motion.

Today, I added the simulation of point scatterers.

What?

The image on the sensor (camera) is produced in roughly the following steps.

  1. Motion of the Gaussian beam on the optic (X,Y coordinates) which is what has been simulated so far.
  2. Reflection from the surface of the optic, which can be modeled using knowledge of the BRDF; this has not been included as of this elog, as I wish to do a little more reading before doing so.
  3. Reflection from point scatterers (dust particles burnt into the optic surface by the laser and so forth) which are characterised as peaks (impulses) in the TIS vs position plot. The laser beam is incident nearly normally on the optic and this behaviour is independent of the angle of observation. This is what has been added to the simulation.

How?

  1. Increased the frame resolution to 720 x 480.
  2. Defined an array of the same size and set at most "num_scatter" points at random positions to values drawn randomly between 1 and "scatter_amp" + 1, where scatter_amp is non-negative.
  3. Multiplied the resulting array by the Gaussian beam. The motivation was to imitate the bright specks seen on various camera feeds in the lab. Physically, this also implies normal incidence and normal observation, which is not the real case at all; I shall add these features in a day or two. A minimal sketch of this procedure is given below.
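
The sketch below assumes the non-scatterer pixels are left at 1 so that the Gaussian spot itself is unchanged; the names num_scatter and scatter_amp follow the description above, while everything else is illustrative rather than a copy of the actual simulation code.

# Sketch of the point-scatterer overlay described above (illustrative only).
import numpy as np

HEIGHT, WIDTH = 480, 720      # frame resolution quoted above
num_scatter = 50              # maximum number of scattering points
scatter_amp = 3.0             # non-negative scattering amplitude

# Gaussian beam spot centred in the frame (beam motion would shift x0, y0 from frame to frame)
y, x = np.mgrid[0:HEIGHT, 0:WIDTH]
x0, y0, w = WIDTH / 2.0, HEIGHT / 2.0, 40.0
beam = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * w ** 2))

# Scatterer map: ones everywhere, with up to num_scatter pixels boosted into [1, scatter_amp + 1]
scatter = np.ones((HEIGHT, WIDTH))
rows = np.random.randint(0, HEIGHT, num_scatter)
cols = np.random.randint(0, WIDTH, num_scatter)
scatter[rows, cols] = 1.0 + scatter_amp * np.random.rand(num_scatter)

frame = beam * scatter        # image that would be handed to the video writer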

In Attachments #1, #2 and #3 I am attaching videos obtained by varying the scattering amplitude and the number of scattering points, in a vain attempt to reproduce this data. I shall work more on this simulation on Friday.

 


Scripting stuff:

  1. Previous elogs detail how to take GigE images at various exposure times. I am still waiting on Kruthi to use the script.
  2. Tomorrow I shall work on the scripting software to interact with the GigE and take video for a fixed duration etc. I shall also begin working on a script to autolock the PMC based on what Rana showed me on Monday. I will also take a look at the contents of this elog and try to pick up from there. I hope to make significant progress by the next lab meeting.

Neural network stuff:

GANs for simulation:

  1. Other than putting the physics into the simulation (i.e. the first portion of this elog), GANs can be trained to generate images similar to the original data. I am unfamiliar with training GANs and the various tricks that are used specifically for them. I will do a bit of reading and make an update by Friday. As of now, the data I plan to use is this and I will train on the GTX 1060 on my machine.

Networks for beam tracking:

  1. I will use the architectures suggested in this work with a few modifications, training with an MSE loss function and the Adam optimizer on my local GPU. A generic sketch of the kind of regression network I mean is given below.
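
Since the architecture from the referenced work is not reproduced here, the following is only a generic placeholder: a small convolutional regressor mapping a grayscale frame to the (x, y) spot position, compiled with MSE loss and Adam. The framework choice and the layer sizes are my own assumptions.

# Generic placeholder CNN for beam-spot regression (not the architecture from the cited work).
from tensorflow import keras
from tensorflow.keras import layers

def build_model(height=480, width=720):
    model = keras.Sequential([
        layers.Input(shape=(height, width, 1)),            # single-channel (grayscale) frame
        layers.Conv2D(8, 5, strides=2, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(16, 3, strides=2, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(2),                                    # predicted (x, y) spot position
    ])
    model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
    return model

# model = build_model()
# model.fit(frames, positions, epochs=10, batch_size=32)   # frames: (N, H, W, 1), positions: (N, 2)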
Attachment 1: simulated_motion0.mp4
Attachment 2: simulated_motion0.mp4
Attachment 3: simulated_motion0.mp4
  14669   Thu Jun 13 15:08:31 2019 MilindUpdateElectronicsVCO pickup by Rich

Rich dropped by at around 3:00 PM today and picked up the VCO in Attachment #1 and left the note in Attachment #2 on Gautam's desk with the promise of bringing it back soon.

Attachment 1: WhatsApp_Image_2019-06-13_at_15.06.57.jpeg
WhatsApp_Image_2019-06-13_at_15.06.57.jpeg
Attachment 2: WhatsApp_Image_2019-06-13_at_15.06.57(1).jpeg
WhatsApp_Image_2019-06-13_at_15.06.57(1).jpeg
  14671   Thu Jun 13 21:29:52 2019 MilindUpdateCamerasSteps to interact with GigE

As directed by Gautam, I have set up a single script, interact.py (at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/interact.py), to perform the following two tasks:

  1. View the GigE feed for a fixed period of time.
  2. Record the GigE feed for a fixed amount of time.

 

Steps to view GigE feed for a fixed amount of time:

  1. Run the following commands in the terminal to navigate to the concerned directory and then view the feed
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python interact.py --path_to_config <path_config> --mode 0 --view_time <viewing_time>, where
      1. path_config: full path to configuration file, defaults to /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini if --path_to_config is not used
      2. viewing_time: time in seconds for which the feed is to be displayed. The server is closed after this time; the window then freezes and can be closed manually.
    3. Exiting the feed in between: The script terminates automatically after the specified time. To terminate the feed earlier, close the window manually using the x icon at the top right; this makes sure that the server is correctly closed. If the feed is closed using Ctrl-C in the terminal, the server is left running and any attempt to unwittingly set up another one results in an error (see Attachment #1). In this case, the server and client processes need to be identified manually and killed. I have used the following steps:
      1. ps -eaf | grep server, then identify the PID for the python camera_server.py process
      2. kill PID
      3. similarly for the client file

Steps to record the GigE feed for a fixed amount of time:

I tried to look for elegant solutions that wouldn't require editing the code that Jon wrote and stumbled upon this useful bit of information, but ended up deciding that it was just easier to change camera_client_movie.py (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_client_movie.py). It can still be run as previously described, with video recording terminated using Ctrl-C. The steps to record for a fixed period of time are:

  1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
  2. python interact.py --path_to_config <path_config> --mode 1 --save_time <recording_time> --file_name <filename>, where
    1. path_config: full path to configuration file, defaults to /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini if --path_to_config is not used
    2. recording_time: time in seconds for which the feed is to be saved. No video is displayed during this time.
    3. filename: full path to the file where the video is to be saved. Overwrites any existing files.

I'll make aliases for these to make the whole process more user friendly. I'm halting this for now and will discuss what else needs to be done once Gautam gets back.


Regarding the autolocker: I spoke to Aaron today and as he is in tomorrow, I'll ask him about the burt files and the ideal configuration.

I'm also starting with GANs now.

 

 

Attachment 1: terminal_error_server.pdf
terminal_error_server.pdf
  14678   Mon Jun 17 14:36:13 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

I have begun setting up an environment (as mentioned before, on my local machine) and scripts to run experiments with convolutional networks for beam tracking. All code has been pushed to this folder in the GigEcamera repository. I am presently looking for video pre-processing techniques that go beyond the usual "crop the images, normalize pixel values, convert to grayscale"; a minimal sketch of that baseline pipeline is given below.
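
The baseline is roughly the following (OpenCV based; the crop window and target size are placeholder assumptions, not values I have settled on):

# Baseline video pre-processing: crop, grayscale, normalize (crop window is a placeholder).
import cv2
import numpy as np

def preprocess(video_path, crop=(120, 600, 40, 440), size=(128, 128)):
    """Return an array of cropped, grayscale, [0, 1]-normalized frames."""
    x0, x1, y0, y1 = crop
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # convert to grayscale
        roi = gray[y0:y1, x0:x1]                          # crop to the region of interest
        roi = cv2.resize(roi, size)                       # fixed input size for the network
        frames.append(roi.astype(np.float32) / 255.0)     # normalize pixel values to [0, 1]
    cap.release()
    return np.stack(frames)[..., np.newaxis]              # shape (N, H, W, 1)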

Quote:

Networks for beam tracking:

  1. I will use the architectures suggested in this work with a few modifications. I will use MSE loss function, Adam optimizer and my local GPU for training.

 

  14680   Mon Jun 17 22:19:04 2019 MilindUpdateComputer Scripts / ProgramsPMC autolocker

As Rana asked me to in the last meeting, I dug through the elogs to determine what had become of the previous autolockers. I stumbled upon this elog by Rana from before Gautam cleaned up the medm screen. Out of curiosity, I ran the autolocker script using the instructions in Rana's elog. I did this a total of 5 times and could lock the PMC 3 times fairly quickly. I attempted to decipher the details of the code but did not make much headway owing to my unfamiliarity with the language. From what I could make out from the medm screen while the autolocker was running, it appeared to be the same method as that in this elog. I will take a look at it again tomorrow. However, I intend to spend most of tomorrow working on preprocessing the data, developing the CNN script and then the simulation. 

Quote:
 
  1.  I shall also begin working on a script to autolock the PMC based on what Rana showed me on Monday. I will also take a look at the the contents of this elog and try to pick up from there. I hope to make significant progress by the next lab meeting.
