40m Log
ID | Date | Author | Type | Category | Subject
  542   Wed Jun 18 18:32:09 2008 | Max Jones | Update | Computer Scripts / Programs | NB Update
I am reconfiguring the noisebudget code currently in use at the sites. To that end, I have done the following things (in addition to the elog I posted earlier):

In get_dtt_dataset.m - I added C1-specific cases for DARM_CTRL, SEIS, and ITMTRX, changing the channels to match those in use at Caltech.

In LocalizeSite.m - I changed the NDS_PATH to match Caltech's. I left NDS_HOST untouched.

Since I am trying to get SEIS and DARM working first, I added C1-specific cases for both of these.

Better documentation may be found in /users/mjones/DailyProgressReport/06_18_08. Suggestions are appreciated. Max.
  565   Wed Jun 25 11:36:14 2008 | Max Jones | Update | Computer Scripts / Programs | First Week Update
For the first week I have been modifying the noise budget script in caltech/NB to run with 40m parameters and data. As per Rana's instructions, I have tried to run the script with only seismic and DARM sources. This involves identifying and changing channel names and altering parameter files (such as NB/ReferenceData/C1IFOparams.m). To supply the parameter files, I have copied the H1 files with (as yet only) slight modification. The channel name changes have been made to mirror the sites for the most part. Two figures are attached which show the current noise budget. The day plot was taken 6/23/08 at ~10:30 am and the night plot was taken 6/22/08 at ~11:00 pm. Note that the SRD curve is for the sites and not for the 40m (I hope to change that soon). Also, in one of the plots the DARM noise signal is visible; obviously this needs work. A list of current concerns is:

1) I am using a seismic transfer function made by previous SURF student Ryan Kinney that operates with channels of the form C1:PEM_ACC-ETMY_Y (should I be using C1:DMF-IX_ACCY?), while the channels I am currently using are the accelerometers for the mode cleaner, with names of the form C1:PEM_ACC-MC1_X. Rana said that he thinks these may be the same, but I need to be sure.

2) We don't have a DARM_CTRL channel, but the code requires it. Currently I am using DARM_ERR as a substitute, which is probably partly responsible for the obvious error in the DARM noise.

Any suggestions are appreciated. Max.
Attachment 1: C1_NoiseBudgetPlot_Day.eps
Attachment 2: C1_NoiseBudgetPlot_Night.eps
  572   Thu Jun 26 10:56:15 2008 | Max Jones | Update | PEM | Removed Magnetometer
I moved the Bartington magnetometer from the x arm to one of the outside benches. I'll be trying to determine if and how it works today. It makes a horrible high-pitched sound, probably because the battery is about 16 years old. It still works with AC power, though, and I want to see if it is still operating correctly before I ask to buy a new battery. Sorry for the bother.
  680   Wed Jul 16 11:26:47 2008 | Max Jones | Update | This Week
Baffles.

I got a battery for the magnetometer today which is slightly too large (~2 mm) in one dimension. Not sure what I'm going to do.

I'm attempting to calibrate the magnetometer, but I'm having a hard time with the axis that I cannot simply orient through a coil parallel to the coil's length. I have attempted to use the end fields of the solenoid, but the measurements from the magnetometer are significantly different from the theoretical calculations.

I would appreciate suggestions. - Max.
  691   Thu Jul 17 16:39:58 2008 | Max Jones | Update | DAQ | Magnetometer Installed
Today I installed the magnetometer near the beam splitter chamber. It is located on the BSC chamber at head height, on the inner part of the interferometer (meaning I had to crawl under the arms to install it). I don't think I disturbed anything during installation, but I think it's probably prudent to tell everyone that I was back there, just in case. I plan to run 3 BNC cables (one for each axis) from the magnetometer to the DAQ input either tonight or tomorrow. Suggestions are appreciated. - Max.
  766   Wed Jul 30 13:08:44 2008 | Max Jones | Update | Computer Scripts / Programs | Weekly Summary
This week I've been working on the noise budget script. The goal is to add seismic, DARM, MICH, PRC and magnetometer noise. I believe I've added seismic noise in a reasonable and 40m-specific manner (please see the attached graph). The seismic noise in the noise budget at 100 Hz was 10 times higher than that predicted by Rana in elog #718. This could be because the graph is made from data taken today, when the interferometer is unlocked and construction workers are busy next door. I am currently trying to fix the getDarm.m file to add the DARM source to the noise budget. I have run into several problems, the most pressing of which is that the C1:LSC-DARM_ERR channel is zero except when the interferometer is locked. According to Rob, we only save data for approximately a day (we save trends for much longer, but this is insufficient for the noise budget script) and sometimes we are not locked the night before. Rob showed me how I may introduce an artificial noise in the DARM_ERR signal, but I'm having trouble making the script output a graphic. I'm still unsure how to make the getDarm function 40m specific.

Today I will start working on my second progress report and abstract.
Attachment 1: C1_NoiseBudgetPlot.pdf
  3126   Mon Jun 28 11:27:08 2010 | Megan | Update | Electronics | Marconi Phase Noise

Using the three Marconis in the 40m at 11.1 MHz, I applied the Three Cornered Hat technique to find the individual noise of each Marconi, for different offset ranges and for the rubidium clock used as a direct or indirect frequency source.

Rana explained the TCH technique earlier - by measuring the phase noise of each pair of Marconis, the individual phase noise can be calculated by:

S1 = sqrt( (S12^2 + S13^2 - S23^2) / 2)

S2 = sqrt( (S12^2 + S23^2 - S13^2) / 2)

S3 = sqrt( (S13^2 + S23^2 - S12^2) / 2)
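For reference, a minimal numpy sketch of this combination; the input arrays are placeholders for the measured pair spectra, not actual channel data:

# Three-cornered-hat combination of pairwise phase-noise spectra.
# Sketch only: psd_12, psd_13, psd_23 are placeholder arrays of the
# measured pair spectra on a common frequency axis.
import numpy as np

def three_cornered_hat(psd_12, psd_13, psd_23):
    s1 = np.sqrt((psd_12**2 + psd_13**2 - psd_23**2) / 2)
    s2 = np.sqrt((psd_12**2 + psd_23**2 - psd_13**2) / 2)
    s3 = np.sqrt((psd_13**2 + psd_23**2 - psd_12**2) / 2)
    # note: np.sqrt returns NaN where measurement noise drives the argument negative
    return s1, s2, s3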

I measured the phase noise for offset ranges of 1 Hz, 10 Hz, 1 kHz, and 100 kHz (the maximum allowed for a frequency of 11.1 MHz) and calculated the individual phase noise for each source (using 7 averages, which is why there are spikes in the individual noise curves). The noise from each source is very similar, although not quite identical, while the noise is greater at higher frequencies for higher offset ranges, so the lowest possible offset range should be used. It appears the noise below a range of 10 Hz is fairly constant, with a smoother curve at 10 Hz.

The phase noise for direct vs indirect frequency source was measured with an offset range of 10Hz. While very similar at high and low frequencies for all 3 Marconis, the indirect source was consistently noisier in the middle frequencies, indicating that any Marconis connected to the rubidium clock should use the rubidium clock as a direct frequency reference.

Since I can't adjust settings of the Marconis at the moment, I have yet to finish measurements of the phase noise at 160 MHz and 80 MHz (those used in the PSL lab), but using the data I have for only the first 2 Marconis (so I can't finish the TCH technique), the phase noise appears to be lowest using the 100kHz offset except at the higher frequencies. The 160 MHz signal so far is noisier than the 11.1 MHz signal with offset ranges of 1 kHz and 10 Hz, but less noisy with a 100 kHz offset.

I still haven't measured anything at 80 MHz and have to finish taking more data to be able to use the TCH technique at 160 MHz, then the individual phase noise data will be used to measure the noise of the function generators used in the PSL lab.

Attachment 1: IndividualNoise11100kHzAllRanges.jpg
Attachment 2: IndividualNoise11100kHzSeparate.jpg
Attachment 3: DirectvsIndirectNoise.jpg
Attachment 4: FG12Noise.jpg
  3240   Fri Jul 16 20:25:52 2010 | Megan | Update | PSL | Reference Cavity Insulation

Rana and I

1) took the temperature sensors off the reference cavity;

2) wrapped copper foil around the cavity (during which I learned it is REALLY easy to cut hands with the foil);

3) wrapped electrical tape around the power terminals of the temperature sensors (color-coded, too! Red for the out of loop sensor, Blue for the first one, Brown for the second, Gray for the third, and Violet for the fourth. Yes, we went with an alphabetical coding system, excluding the out of loop sensor);

4) re-wrapped the thermal blanket heater;

5) covered the ends of the cavities with copper, ensuring that the beam can enter and exit;

6) took pretty pictures for your enjoyment!

We will see if this helps the temperature stabilization of the reference cavity.

 

DSC_2271.JPG

The end of the reference cavity, with a lovely square around the beam.

 

DSC_2266.JPG

The entire, well-wrapped reference cavity!

  3260   Wed Jul 21 15:43:38 2010 | Megan | Summary | PSL | Copper Layer Thickness on the Reference Cavity

Using the equation for thermal resistance

Rthermal = L/(k*A)

where k is the thermal conductivity of a material, L is the length, and A is the surface area through which the heat passes, I could find the thermal resistance of the copper and stainless steel on the reference cavity. To reduce temperature gradients across the vacuum chamber, the thermal resistance of the copper must be the same or less than that of the stainless steel. Since the copper is directly on top of the stainless steel, the length and width will be the same for both, just the thickness will be different (for ease of calculation, I assumed flat, rectangular strips of the metal). Assuming we wish to have a thermal resistance of the copper n times less than that of the stainless steel, we have

RCu = RSS/n

or

L/(kCu*w*tCu) = L/(kSS*w*tSS*n)

so that

tCu/tSS = n*kSS/kCu

We know that kCu = 401 W/(m*K) and kSS = 16 W/(m*K), so

tCu/tSS = 0.0399*n

By using the drawings for the short reference cavity vacuum chamber (the only one I could find drawings for online) I found a thickness of the walls of 0.12 in or 0.3048 cm. So for the same thermal resistance in both metals, the copper must be 0.0122 cm thick and for a thermal resistance 10 times less, it must be 0.122 cm thick. So we will have to keep wrapping the copper on the vacuum chamber!
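As a quick cross-check of these numbers (not part of any original calculation script), in Python:

# Thermal conductivities in W/(m*K); wall thickness in cm (0.12 in).
k_cu = 401.0   # copper
k_ss = 16.0    # stainless steel
t_ss = 0.3048  # vacuum chamber wall thickness

for n in (1, 10):   # copper thermal resistance n times less than the steel's
    t_cu = n * (k_ss / k_cu) * t_ss
    print("n = %2d: required copper thickness = %.4f cm" % (n, t_cu))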

  3159   Tue Jul 6 17:05:30 2010 | Megan and Joe | Update | Computers | c1iovme reboot

We rebooted c1iovme because the lines stopped responding to inputs on C1:I00-MC_DRUM1. This fixed the problem.

  6341   Wed Feb 29 17:32:11 2012 | Mike | Update | Computers | PyNDS and a Plot

Quote:

Quote:

Power Spectral Density plot using PyNDS, comparing 5 fast data channels for ETMX.

 Is there any stuff to install, etc?  Y'know, for those of us who don't really know how to use computers and stuff....

 No new stuff for these computers.  Everything should be installed already.

  6479   Tue Apr 3 12:42:19 2012 | Mike J. | Update | Computers | Hysteresis Model

Here's my first hysteresis model in Simulink. It's based on the equation y = Amplitude*sin(frequency*t + phase) - (hysteresis/frequency^2) as a solution to y'' + frequency^2*y + hysteresis = 0. All values in the model are variables that should be manipulated through the model workspace or external code.
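As a sanity check of the equation (this is not the Simulink model itself, just a scipy sketch with arbitrary placeholder values):

# Integrate y'' + frequency^2 * y + hysteresis = 0 numerically.
# Placeholder values only; the real model keeps these in the model workspace.
import numpy as np
from scipy.integrate import odeint

frequency = 2 * np.pi * 1.0   # rad/s
hysteresis = 0.5

def rhs(state, t):
    y, ydot = state
    return [ydot, -frequency**2 * y - hysteresis]

t = np.linspace(0, 10, 1000)
y = odeint(rhs, [1.0, 0.0], t)[:, 0]   # y(t) with y(0) = 1, y'(0) = 0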

Attachment 1: hysteresis1.mdl
Model {
  Name			  "hysteresis1"
  Version		  7.6
  MdlSubVersion		  0
  GraphicalInterface {
    NumRootInports	    0
    NumRootOutports	    0
    ParameterArgumentNames  ""
    ComputedModelVersion    "1.9"
    NumModelReferences	    0
... 734 more lines ...
  6485   Wed Apr 4 21:43:16 2012 | Mike J. | Update | Computers | Better Hysteresis Model

A better hysteresis model based on the simple harmonic oscillator equation. Useless variables have been removed and output can now be saved to workspace for plotting. The model is at "/users/mjenson/matlab/SHO_hyst.mdl".

Attachment 1: SHO_hyst.png
  6487   Thu Apr 5 01:07:08 2012 | Mike J. | Update | Computers | Hysteresis Plots

Here are the hysteresis plots from the most recent model, which uses a modified harmonic oscillator equation y'' = -(Frequency)^2*y - Hysteresis.  The hysteresis constant seems to change both the amplitude and equilibrium point of the pendulums, which is akin to changing the length of a pendulum without changing the frequency. This does not make sense. Perhaps the hysteresis value should be moved to the "spring" constant for the pendula and not restricted to a position-biasing value.

SHO_hyst_plot.png

  6500   Fri Apr 6 19:40:57 2012 | Mike J. | Summary | General | Laser Emergency Shutoff

I accidentally shut off the laser at 19:34 with the emergency shutoff button while trying to tap into a video line for the Sensoray device.

  6502   Fri Apr 6 20:24:31 2012 | Mike J. | Update | Computers | Sensoray

The Sensoray device is currently viewing Monitor 4 and is plugged into Pianosa. The user interface is run from /home/controls/Downloads/sdk_2253_1.2.2_linux with python demo.py. It can preview and capture the video stream; however, the captured files are terrible. I believe it has something to do with the bitrate, since the videos captured with lower bitrates are not as bad as the ones with higher bitrates, but I am not certain.

  6503   Fri Apr 6 20:38:41 2012 | Mike J. | Update | Computers | Sensoray

 Turns out that the "MPEG-4 VES" video format is just bad for captured video.  Everything except "MP4" and "MPEG-TS" works for streaming, and "MP4" and "MPEG-TS" seem to be the only captured formats that can be viewed properly.

  6505   Sat Apr 7 01:45:02 2012 | Mike J. | Update | Computers | Even Better Hysteresis Model and Plots

 The new hysteresis model is loosely based on the SHO equation, but with the force out of phase with the position by the hysteresis amount {x(t) = Amp*sin(freq*t), F(t) = Amp*sin(freq*t + Hyst)}. The new model can be found at /users/mjenson/matlab/hyst_v_3.mdl. Pictures are: the new hysteresis model, the x(t) subsystem in the new model [xh''(t) only lacks the -1 multiplier and includes the hysteresis variable], and the new plots.

hyst_v_3.png  hyst_v_3-x(t).png  hyst_v3.png

  6507   Sat Apr 7 02:01:29 2012 | Mike J. | Update | Computers | Projector Cable Management

I replaced the projector video and power cables with longer ones, and zip-tied them to the ceiling and wall so they don't block the image.

projector_cables.jpg

  6513   Mon Apr 9 20:02:19 2012 | Mike J. | Update | Computers | Sensoray

The highest resolution available is 720x480 pixels. Bit depth of captured images and video is most likely 16 bits per pixel. Video may be captured raw as well, which will be necessary for image subtraction/enhancement, however it cannot currently be played raw. A captured image is shown below, along with MP4 video.

out_0.jpg

 

  6530   Thu Apr 12 22:04:17 2012 | Mike J. | Update | Computers | New Hysteresis Model & Plots

The new hysteresis model uses a triangle wave with offset zero points as the position function and a sinusoidal force function, creating a loop similar to this. Model is at /users/mjenson/matlab/ferro_hyst.mdl.

ferro_hyst.png  hyst_combo.png

  6592   Tue May 1 17:42:15 2012 | Mike J. | Update | Computers | Sensoray

I've upgraded the Sensoray GUI so it can now switch the video channel it receives, thanks to the videoswitch script.

V4L2_Capture_Demo_r01.png

  6645   Tue May 15 23:40:46 2012 | Mike J. | Update | Computers | Image Subtraction

I acquired 2 raw frames of MC2 using "/users/mjenson/sensoray/sdk_2253_1.2.2_linux/capture -n -s 720x480 -f 1", one while the laser was off the mode cleaner and another while it was on:

mc2_1.bmp mc2_2.bmp

I then used "/users/mjenson/sensoray/sdk_2253_1.2.2_linux/imsub/display-image.py" to generate bitmaps of the raw images, which I then subtracted using the Python Imaging Library to generate a new image:

mc2_1-mc2_2.bmp

It doesn't look all that different, but the first image didn't have that much lit up in it to begin with. I should be able to write a script that does all of this without needing to generate new files in between acquisition and subtraction.
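For reference, a minimal sketch of the subtraction step with the Python Imaging Library; which frame had the laser on is assumed here, and ImageChops.difference (the absolute per-pixel difference) stands in for whatever the original script did:

# Sketch only: subtract the two MC2 bitmaps mentioned above.
from PIL import Image, ImageChops

frame_a = Image.open('mc2_1.bmp')   # assumed: laser off the mode cleaner
frame_b = Image.open('mc2_2.bmp')   # assumed: laser on the mode cleaner

diff = ImageChops.difference(frame_b, frame_a)   # absolute per-pixel difference
diff.save('mc2_1-mc2_2.bmp')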

  7362   Fri Sep 7 15:31:52 2012 | Mike J. | Update | Computers | Sensoray back up

Video Capture with the Sensoray works again. Pianosa just needed mplayer installed for it to play properly.

Attachment 1: output_5.mp4
  7364   Fri Sep 7 17:24:16 2012 | Mike J. | Update | Computers | Sensoray Video Capture

To capture video with the Sensoray, open the GUI (python ./demo.py), simply press "Save," enter a filename, and hit "Stop" when you wish to stop recording. If you want to change the video format, there is a dropdown menu labelled "Format." I recommend MP4 for standard video, and nv12 for RAW video.

  7380   Thu Sep 13 19:59:43 2012 | Mike J. | Update | Electronics | AS beam scan

**EDIT:** Mixed up X and Y. Beam is 3.5844 mm tall and 2.7642 mm wide

14.112 hundredths of an inch in the vertical direction

3.5844 millimeters

10.883 hundredths of an inch in the horizontal direction

2.7642 millimeters

Plots and error bars to come soon.

  7386   Fri Sep 14 01:35:55 2012 | Mike J. | Update | Electronics | AS beam scan PLOTS

H_razor.jpeg  V_razor.jpeg

  7404   Tue Sep 18 22:06:21 2012 | Mike J. | Update | Electronics | AS beam scan plots and chi-squared

Results of the Razor Blade Beam Scan

The horizontal blade test measured the beam intensity as a razor blade passed between the beam and a power meter, moving in from the left side of the beam (negative x values) until it blocked the beam. The resulting function, found through least-squares regression of the error function, gives a beam height of 3.6 mm +/- 16 mm. However, the fit has a chi-squared value of 3.2, so that value may not be accurate.

H_raz.png

The vertical blade test measured beam intensity as a razor moved in from below the beam (negative x values) until it blocked the beam. This function, found the same way as above, gives a beam width of 2.8 mm +/- 9.6 mm, and has a chi-squared value of 0.77.

 V_raz.png

Both data sets have a y-error of 0.5 micro-Watts, and an x-error of 0.127 mm. The Python code used to analyze the data and plot the results is attached.

Attachment 1: beam_width.py
#############################################
#   Python code for finding Gaussian-beam   #
# 		spot size w(z) from intensity 		#
# 		 vs. blocked portion of beam  		#
#############################################
#           Coded by Mike Jenson            #
#############################################

import numpy as np
from scipy.special import erf
... 93 more lines ...
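The attached script is truncated above, so as a rough illustration only, the kind of knife-edge erf fit it performs could look like the following; this assumes the common 1/e^2-radius parameterization and placeholder data, and may differ in detail from beam_width.py:

# Sketch of a knife-edge (razor blade) fit: blade moves in from negative x
# and progressively blocks a Gaussian beam of 1/e^2 radius w centered at x0.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def knife_edge(x, p_total, x0, w, offset):
    return offset + 0.5 * p_total * (1 - erf(np.sqrt(2) * (x - x0) / w))

# Placeholder data -- replace with the measured blade positions [mm] and powers [uW].
x_data = np.linspace(-4, 4, 20)
p_data = knife_edge(x_data, 500.0, 0.0, 3.0, 1.0)

popt, pcov = curve_fit(knife_edge, x_data, p_data, p0=[p_data.max(), 0.0, 2.0, 0.0])
print("fitted 1/e^2 radius w = %.3f mm" % popt[2])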
  7427   Fri Sep 21 22:25:44 2012 | Mike J. | Update | General | POX, POY, PR2 pics

Unaltered PR2 images, with IR card, without card, and steering mirror:

PR2_card.jpg  PR2_nocard.jpg  PR2_Steering2.jpg

Unaltered POX and POY images:

out_25.jpg  out_0.jpg

The POX images only needed a major brightness reduction and increased contrast to view:

out_25_brigcon.jpg  out_29_brigcon.jpg

The POY images needed their intensity histograms shifted slightly right and made left-tailed:

out_0_brigcon.jpg  out_13_brigcon.jpg  out_43_brigcon.jpg

  14626   Mon May 20 21:45:20 2019 | Milind | Update | Traditional cv for beam spot motion

I went through all of Pooja's elog posts and her report, and am currently cleaning up her code and setting up the simulations of spot motion from her work last year. I've also just begun to look at some material on resonators sent by Gautam.

This week, I plan to do the following:

1) Review Gabriele's CNN work for beam spot tracking and get his code running.

2) Since the relation between the angular motion of the optic and the beam spot motion can be determined theoretically, I think a neural network is not mandatory for tracking the beam spot. I strongly believe that a more traditional approach, such as thresholding followed by a Hough transform, ought to do the trick, as the contours of the beam spot are circles. I did try a quick and dirty implementation today using OpenCV (a rough sketch is given after this list) and ran into the problem of either no detection or detection of spurious circles (the number of which decreased as more median blur was applied). I will defer a more careful analysis of this until step (1) is done, as Gautam has advised.

3) Clean up Pooja's code on beam tracking and obtain the simulated data.

4) Also, data like this (https://drive.google.com/file/d/1VbXcPTfC9GH2ttZNWM7Lg0RqD7qiCZuA/view) is incredibly noisy. I will look up some standard techniques for cleaning such data, though I'm not sure the impact of that can be measured until I settle on an algorithm to track the beam spot.
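For reference, a rough sketch of the kind of threshold + median blur + Hough transform attempt described in (2); the file name and all parameter values here are untuned placeholders:

# Sketch only: detect circular beam-spot contours with a Hough transform.
import cv2
import numpy as np

frame = cv2.imread('beam_frame.png', cv2.IMREAD_GRAYSCALE)   # placeholder file name
blurred = cv2.medianBlur(frame, 5)                           # suppress speckle
_, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)

circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=50, param2=30, minRadius=5, maxRadius=100)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(frame, (x, y), r, 255, 2)                 # mark detected circles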

 

A more interesting question Gautam raised was the validity of using the beam spot motion for detection of angular motion in the presence of other factors such as surface irregularities. Another question is the relevance of using the beam spot motion when the oplevs are already in place. It is not immediately obvious to me how I can ascertain this and I will put more thought into this.

  14632   Thu May 23 08:51:30 2019 | Milind | Update | Cameras | Setting up beam spot simulation

I have done the following thus far since elog #14626:

Simulation:

  1. Cleaned up Pooja's code for simulating the beam spot. Added extensive comments and made the code modular. Simulated the Gaussian beam spot to exhibit 
    1. Horizontal motion
    2. Vertical motion
    3. motion along both x and y directions:
  2. The motion exhibited in any direction in the above videos is a combination of four sinusoids at frequencies 0.2, 0.4, 0.1, and 0.3 Hz, with amplitudes that can be found as defaults in the script ((0.1, 0.04, 0.05, 0.08)*64 for these simulations); a minimal sketch of this setup follows this list. The variation looks as shown in Attachment 1. For the sake of convenience I have created the above video files with only a hundred frames (fps = 10, total time ~10 s), which took around 2.4 s to write; longer files take much longer. Since I wish to simply perform image processing on these frames immediately, I don't see the need to produce long video files right away.
  3. I have yet to add noise at the image level and randomness to the motion itself.  I intend to do that right away. Currently video 3 will show you that even though the time variation of the coordinates of the center of the beam is sinusoidal, the motion of the beam spot itself is along a line as both x and y motions have the same phase. I intend to add the feature of phase between the motion of x and y coordinates of the center of the beam, but it doesn't seem all too important to me right now. The white margins in the videos generated are annoying and make tracking the beam spot itself slightly difficult as they introduce offset (see below). I shall fix them later if simple cropping doesn't do the trick.
  4. I have yet to push the code to git. I will do that once I've incorporated the changes in (3).
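For reference, a minimal numpy sketch of the frame generation described in this list (the spot width and the identical drive on x and y are assumptions for illustration):

# Sketch only: Gaussian spot on an N x N grid, centre driven by a sum of sinusoids.
import numpy as np

N = 64
freqs = np.array([0.2, 0.4, 0.1, 0.3])            # Hz
amps = np.array([0.1, 0.04, 0.05, 0.08]) * N      # pixels
fps, n_frames = 10, 100
sigma = 6.0                                       # spot width, placeholder

yy, xx = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
frames = []
for k in range(n_frames):
    t = k / float(fps)
    drive = np.sum(amps * np.sin(2 * np.pi * freqs * t))   # same phase on x and y
    x0 = y0 = N / 2.0 + drive
    spot = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
    frames.append((255 * spot).astype(np.uint8))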

Circle detection:

  1. If the beam spot intensity variation is indeed Gaussian (as it definitely is in the simulation), then the contours are circular. Consequently, centroid detection of the beam spot reduces to detecting these contours and then finding their centroid (center). I tried this for a simulated video I found in elog post 14005. It was a quick implementation of the following sequence of operations: threshold (arbitrarily set to 127), contour detection (video dependent and needs to be done manually), and centroid determination from the required contour; a short sketch of this sequence follows this list. It's evident that the beam spot is being tracked (green circle in the video). Check Attachment #2 for the results. However, no other quantitative claims can be made in the absence of other data.
  2. Following this, Gautam pointed me to a capture in elog post 13908. Again, the steps mentioned in (1) were followed and the results are presented below in Attachment #3. However, this time the contour is no longer circular but distorted. I didn't pursue this further. This test was just done to check that this approach does extend (even if not seamlessly) to real data. I'm really looking forward to trying this with this real data.
  3. So far, the problem has been that there is no source data to compare the tracked centroid with. That ought to be resolved with the use of simulated data that I've generated above. As mentioned before, some matplotlib features such as saving with margins introduce offsets in the tracked beam position. However, I expect to still be able to see the same sinusoidal motion. As a quick test, I'll obtain the fft of the centroid position time series data and check if the expected frequencies are present.
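For reference, a short sketch of the threshold -> contour -> centroid sequence used in (1) and (2), assuming an 8-bit grayscale frame (threshold and contour selection still need manual tuning per video):

# Sketch only: centroid of the largest bright contour in a frame.
import cv2

def spot_centroid(frame, thresh_val=127):
    _, binary = cv2.threshold(frame, thresh_val, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]   # works on OpenCV 3 and 4
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # assume the spot is the biggest blob
    m = cv2.moments(largest)
    if m['m00'] == 0:
        return None
    return m['m10'] / m['m00'], m['m01'] / m['m00']   # (x, y) centroid in pixels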

I will wrap up the simulation code today and proceed to going through Gabriele's repo. I will also test if the contour detection method works with the simulated data. During our meeting, it was pointed out that when working with real data, care has to be taken to synchronize the data with the video obtained. However, I wish to put off working on that till later in the pipeline as I think it doesn't affect the algorithm being used. I hope that's alright (?).

 

Attachment 1: variation.pdf
variation.pdf
Attachment 2: contours_simulated.mp4
Attachment 3: contours_real.mp4
  14635   Thu May 23 15:37:30 2019 | Milind | Update | Cameras | Simulation enhancements and performance of contour detection
  1. Implemented image level noise for simulation. Added only uniform random noise.
  2. Implemented addition of uniform random noise to any sinusoidal motion of beam spot.
  3. Implemented motion along y axis according to data in "power_spectrum" file.
  4. Implemented simulation of random motion of the beam spot in both x and y directions (done previously by Pooja, but a cleaner version).
  5. Created a 10 s video file with motion of the beam spot along the y direction as given by Attachment #1. This was created by mixing four sinusoids at different amplitudes (frequencies (0.1, 0.2, 0.4, 0.8) Hz; amplitudes as fractions of N = 64: (0.1, 0.09, 0.08, 0.09)). FPS = 10; total number of frames = 100 for the sake of convenience. See Attachment #5.
  6. Following this, I used the thresholding (threshold = 127, chosen arbitrarily), contour detection and centroid computation sequence (see Attachment #6 for results) to obtain the plot in Attachment 2 for the predicted motion of the y coordinate. As is evident, the centering and scale of values obtained are off and I still haven't figured out how to precisely convert from one to another.
  7. Consequently, as a workaround, I simply normalised the values corresponding to each plot by subtracting the mean in each case and dividing the resulting series of values by their maximum. This resulted in the plots in Attachments 3 and 4 which show the normalised values of y coordinate variation and the error between the actual and predicted values between 0 and 1 respectively.

Things yet to be done:

Simulation:

  1. I will implement the mean square error function to compute the relative performance as conditions change.
  2. I will add noise both to the image and to the motion (meaning introduce some randomness in the motion) to see how the performance, determined by both the curves such as the ones below and the mean square error, changes.
  3. Following this, I will vary the standard deviation of the beam spot along X and Y directions and try to obtain beam spot motion similar to the video in Attachment #2 of elog post 14632.
  4. Currently, I have made no effort to carefully tune the parameters associated with contour detection and threshold and have simply used the popular defaults. While this has worked admirably in the case of the simple simulated videos, I suspect much more tweaking will be needed before I can use this on real data.
  5. It is an easy step to determine the performance of the algorithm for random, circular and other motions of the beam spot. However, I will defer this till later as I do not see any immediate value in this.
  6. Determine noise threshold. In simulation or with real data: obtain a video where the beam spot is ideally motionless (easy to do with simulated data) and then apply the above approach to the video and study the resulting predicted motion. In simulation, I expect the predictions for a motionless beam spot video (without noise) to be constant. Therefore, I shall add some noise to the video and study the prediction of the algorithm.
  7. NOTE: the above approach relies on some previous knowledge of what the video data will look like. This is useful in determining which contours to ignore, if any like the four bright regions at the corners in this video.

Real data:

  1. Obtain real data and evaluate whether the algorithm is successful in determining contours which can be used to track the beam spot.
  2. Once the kind of video feed this will be used on is decided, use the data generated from such a feed to determine what the best settings of hyperparameters are and detect the beam spot motion.
  3. Synchronization of data stream regarding beam spot motion and video.
  4. Determine the calibration: angular motion of the optic, to beam spot motion on the camera sensor, to the video, to the pixel mapping in the frames being processed.

Other approaches:

  1. Review work done by Gabriele with CNNs, implement it and then compare performance with the above method.
Attachment 1: actual_motion.pdf
Attachment 2: predicted_motion.pdf
Attachment 3: normalised_comparison.pdf
Attachment 4: residue_normalised.pdf
Attachment 5: simulated_motion1.mp4
Attachment 6: elog_22may_contours.mp4
  14638   Sat May 25 20:29:08 2019 | Milind | Update | Cameras | Simulation enhancements and performance of contour detection
  1. I used the same motion as defined in the previous elog and gradually added noise to the images. The noise added was uniform random noise - a 2-dimensional array of random numbers between 0 and a predetermined maximum (noise_amp). The previous elog provides the variation of the y coordinate; in this one, I am also uploading the effect of noise on the error in the prediction of the x coordinate. As a reminder, the motion of the beam spot center was purely vertical. Attachment #1 is the error for noise_amp = 0, #2 for noise_amp = 20, and #3 for noise_amp = 40. While Attachment #3 does give the impression of a large error, this is not really the case: without normalization, each peak corresponds to a deviation of one pixel about the central value; see Attachment #4 for reference.
  2. While the error does increase marginally, adding noise has no significant effect on the prediction of the y coordinate of the centroid as Attachment #5 shows at noise_amp = 40.
  3. I am currently running an experiment to obtain the variation of the mean square error with different noise amplitudes and will put up the plots soon. Further, I shall vary the resolution of the image frames and the standard deviation of the Gaussian beam with time, try to obtain simulations very close to the real data available, and then determine the performance of the algorithm.
  4. The following videos will serve as a quick reference for what the videos and detection look like at
    1. noise_amp = 20
    2. noise_amp = 40
  5. I also performed a quick experiment to see how low the amplitude of motion could be before the algorithm failed to detect it, and found this to occur roughly 2 orders of magnitude below the values used in the previous post. This is a line of thought I intend to pursue more carefully; I am looking into how OpenCV and Python handle images with floats as coordinates and will provide more details about this trial soon. This should give us an idea of the smallest beam spot motion that can be resolved.

Attachment 1: residue_normalised_x.pdf
Attachment 2: residue_normalised_x.pdf
Attachment 3: residue_normalised_x.pdf
Attachment 4: predicted_motion_x.pdf
Attachment 5: normalised_comparison_y.pdf
  14649   Mon Jun 3 21:03:54 2019 | Milind | Update | Cameras | Steps to interact with GigE

The following steps summarize how to set up and interact with a GigE camera.

Launching the PylonViewerApp:

  1. Open a new terminal using Ctrl + Alt + T on the keyboard.
  2. Launch the app using the command pylon.

Using setup python scripts to interact with the GigE (a summary of the steps listed here and here)

  1. Connect the GigE camera to the ethernet cable and record its IP address. If the IP address is not printed on the GigE, launch the PylonViewerApp and navigate to the "Tools" dropdown menu and select "pylon IP configurator" to be presented with a list of all connected cameras and their IP addresses.
  2. To simply observe the camera feed, open a new terminal and run the following commands:
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python camera_server.py -c C1-CAM-ETMX.ini  (only one config file is present currently and more will be added as more cameras are set up. The "Camera IP" in the  .ini file must match that determined in step 1). This starts the camera server.
  3. Open a new tab (Ctrl + Shift + T on the keyboard) in the terminal. You should still be in the same directory as navigated to in step 2.1. Run the following command.
    1. python camera_client.py -c C1-CAM-ETMX.ini
  4. This should bring up a feed from the camera. Close at will.
  5. To record a video file, repeat steps 1 and 2. Open a new tab as described in step 3. Then run the following command:
    1. python camera_client_movie.py -c C1-CAM-ETMX.ini
  6. Enter the full path to the file where you wish to save the movie in the prompt that appears. Use ./your_file_name_here.avi to save the video in the working directory. Press Ctrl + C to stop recording. The recording can be played by navigating to the location where the recording is stored and running vlc your_file_name_here.avi.
  7. To adjust the exposure setting of the camera, open a new terminal and run the command sitemap . This should bring up the medm display in Attachment #1. Click on the Video/Lights button highlighted in red and select GigE. Adjust the exposure value in the next window using the slider before starting the server in step 1. Adjusting the slider once the server is started causes the program to freeze. Also set the Snapshot channel C1:CAM-ETMX_SNAP to off as mentioned in elog 14037.

 

Upcoming updates:

  1. Automatic script to run the above steps.
  2. Pre-determining the time duration of the recorded video.
  3. Obtaining snapshots.

 

Attachment 1: sitemap.pdf
  14650   Mon Jun 3 23:18:59 2019 | Milind | Update | Computer Scripts / Programs | updating bashrc

I was working with the git repo in the SnapPy_pypylon folder (/cvs/cds/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon) and needed to create a branch. To avoid any confusion, I modified the PS1 variable (and only that) in the .bashrc file so that the prompt now displays the current git branch when you are inside a repository. This is just an update.

  14654   Tue Jun 4 22:24:45 2019 | Milind | Update | Cameras | Steps to interact with GigE

I figured out how to grab frames by looking at the pypylon documentation, as that turned out to be easier than modifying Jon's code. I am still not sure how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). I will figure that out tomorrow and make a script suitable for Kruthi's usage (obtaining a bunch of images with different exposure times). I will also try to integrate the video saving and streaming code into this and have a neat little script set up asap.

Quote:

Upcoming updates:

  1. Automatic script to run the above steps.
  2. Pre-determining the time duration of the recorded video.
  3. Obtaining snapshots.
  14656   Wed Jun 5 22:30:13 2019 | Milind | Update | Cameras | Steps to interact with GigE

Thanks! It does indeed do the trick! With that I was able to

  1. Obtain current exposure value using the terminal command caget C1:CAM-ETMX_EXP
  2. Set exposure value using the terminal command caput C1:CAM-ETMX_EXP <desired_exposure_value>

Further, a quick look at the camera server code in /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py revealed that the script expects a "Number of Snapshots" entry under "Camera Settings" in the configuration file, i.e. in C1-CAM-ETMX.ini (at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini), which wasn't present before. Adding this parameter to the config file allows one to take a snapshot using the medm screen. In fact, unlike what is described in this elog, I was able to start the server and client as described in elog 14649 and then obtain snapshots using the terminal command caput C1:CAM-ETMX_SNAP 1.

Quote:

caget/caput probably does the job.

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

 

  14657   Thu Jun 6 16:01:52 2019 | Milind | Update | Cameras | Steps to interact with GigE

[Koji, Milind]

 

Today I ran into the following errors:

  1. Inability to access the EPICS channels using the commands caget and caput and thus the generation of a blank medm screen (error in Attachment #1) when simultaneously running the code in camera_server.py (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py).
  2. Inability to run camera_server.py code with an active medm screen with a "... failed to read <EPICS channel>" error.

Therefore, Koji and I took a look at it and putting our faith in Gautam's hunch from elog 13023, we walked down to rack 1Y1 and keyed it. Following this, all the functionality previously described was restored! Koji then took a look at all the channels handled by this machine and bestowed upon me the permission to key the crate should I lose control of the GigE again.


Attachment 1: terminal_medm_error.pdf
  14661   Mon Jun 10 22:22:19 2019 | Milind | Update | Cameras | Steps to interact with GigE

Steps to take snapshots using GigE at different exposures [Instructions for Kruthi]:

  1. Set up C1-CAM-ETMX.ini (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini) appropriately. The parameter Number of Snapshots determines how many snapshots will be taken at any given exposure. Set Name Overlay, Time Overlay, Calculation Overlay, Calculations (if using very low values of exposure) and Auto Exposure to False. Ensure that the IP address of the camera in use and the one in the configuration file match.
  2. Launch a server using the following commands (as described in elog 14649)
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python camera_server.py -c C1-CAM-ETMX.ini
  3. Open another terminal in the same directory and then run the following command
    1. python exposure_variation.py --minval <minval> --maxval <maxval> --step <step> where
      1. minval: lower bound of range of exposure values, defaults to 150
      2. maxval: upper bound of range of exposure values, defaults to 100000
      3. step: step size of variation in the range [minval, maxval], defaults to 2000

The Python script takes in the above parameters and then takes snapshots, setting the exposure to values starting at minval and going up to maxval, incrementing by step each time. This uses a simple for loop and is nothing elaborate.
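For reference, a minimal sketch of such a loop, assuming the pyepics bindings; the real exposure_variation.py may differ:

# Sketch only: sweep the exposure EPICS channel and trigger snapshots.
import time
import epics

def sweep_exposure(minval=150, maxval=100000, step=2000, settle=1.0):
    for exposure in range(minval, maxval + 1, step):
        epics.caput('C1:CAM-ETMX_EXP', exposure)   # set the camera exposure
        time.sleep(settle)                         # let the new setting take effect
        epics.caput('C1:CAM-ETMX_SNAP', 1)         # have the server take a snapshot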


A few unrelated updates:

  1. On a sidenote, I installed Sublime Text editor on rossa following the instructions at this site (check install using yum section). Further, I have also installed miniconda but did not set it up fully as I was in a rush and did not want to disturb any previously set up environment variables.
  2. I have cloned Gabriele's repository and am trying to get it to work on my system. As Gautam has pointed out, the end goal is to get things working on the lab machines, so I will share a .yml file with the necessary environment details upon completion.
  3. In a day or two I will upload details of how I am going to construct the two learning tasks that Rana, Gautam and I discussed, including details of the use of simulated data for training in the absence of real data (until Kruthi is done setting up the GigE), which Gautam suggested I do to speed things up.
  14662   Tue Jun 11 00:00:15 2019 | Milind | HowTo | PSL | Steps to lock the PMC

Today, Rana had me key the PSL crate.

  1. Locating the rack: the crate is 1X1. This link provides details of the locations and functions of the racks.
  2. Keying the crate: the key is located at the bottom of the rack (in this case). Keying it requires one to turn the key through 90 degrees (anticlockwise, facing the rack) and back to the original position.

Locking the PMC:

  1. Accessing the medm screen for the PMC: open a new terminal and use the command sitemap. This should open up the sitemap medm screen. Click on the PSL button and then select C1PSL_PMC from the dropdown that is produced. This opens up a medm screen similar to that in Attachment #1.
  2. The correct toggling: The keying of the crate sometimes scrambles the settings on the medm screen. Rana and I performed extensive toggling of the buttons and concluded that the combination in Attachment #1 ought to be the correct one.
  3. Locking the PMC: The state of the PMC was deduced by observing CH01 on monitor 7. When not locked, there is no observable bright spot. At this point the "Input Offset (V)" slider is set to zero and the "Servo Gain Adjust (dB)" slider is set to minimum. To obtain lock, complete step 2 and then move the "DC Output Adjust (V)"  slider (at the bottom left on the screen) around rapidly while looking for a bright spot. On observing such a spot on the monitor, release the slider and quickly increase the "Servo Gain Adjust (dB)" slider to around 15 dB. Higher gain values produce a bright spot on CH02 as well which vanishes (almost) on decreasing the gain to the aforementioned value.
Attachment 1: pmc_locked_settings.pdf
  14667   Wed Jun 12 22:02:04 2019 | Milind | Update | Cameras | Simulation enhancements

Today, Rana asked me to work on improving the simulations based on the ideas we discussed last week. As of the previous elog the simulation accommodated only

  1. Simulation of Gaussian beam spot.
  2. Arbitrary motion.

Today, I added the simulation of point scatterers.

What?

The image on the sensor (camera) is produced in roughly the following steps.

  1. Motion of the Gaussian beam on the optic (X,Y coordinates) which is what has been simulated so far.
  2. Reflection from the surface of the optic, which can be modeled using knowledge of the BRDF; this has not been included as of this elog, as I wish to do a little more reading before doing so.
  3. Reflection from point scatterers (dust particles burnt into the optic surface by the laser and so forth) which are characterised as peaks (impulses) in the TIS vs position plot. The laser beam is incident nearly normally on the optic and this behaviour is independent of the angle of observation. This is what has been added to the simulation.

How?

  1. Increased the frame resolution to 720 x 480.
  2. Defined an array of the same size and set at most "num_scatter" points at random positions to values chosen randomly between 1 and "scatter_amp" + 1, where scatter_amp is non-negative.
  3. Multiplied the resulting array by the Gaussian beam image; a short sketch of this step follows this list. The motivation was to imitate the bright specks seen on various camera feeds in the lab. Physically, this also implies normal incidence and normal observation, which is not the real case at all. I shall add these features in a day or two.
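A small numpy sketch of the scatterer step in items 2 and 3 (beam_image is assumed to be a float array holding the Gaussian spot):

# Sketch only: multiply the beam image by a sparse random "scatterer" mask.
import numpy as np

def add_point_scatterers(beam_image, num_scatter=20, scatter_amp=3.0):
    mask = np.ones_like(beam_image)
    rows = np.random.randint(0, beam_image.shape[0], num_scatter)
    cols = np.random.randint(0, beam_image.shape[1], num_scatter)
    # at most num_scatter pixels get a gain between 1 and scatter_amp + 1
    mask[rows, cols] = 1.0 + scatter_amp * np.random.rand(num_scatter)
    return beam_image * mask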

Herewith, in attachments #1, #2, #3 I am attaching videos obtained by varying scattering amplitude and number of scattering points in a vain attempt to reproduce this data. I shall work more on this simulation on Friday.

 


Scripting stuff:

  1. Previous elogs detail how to take gige images at various exposure times. I am still waiting on Kruthi to use the script.
  2. Tomorrow I shall work on the scripting software to interact with the GigE and take video for a fixed duration, etc. I shall also begin working on a script to autolock the PMC based on what Rana showed me on Monday. I will also take a look at the contents of this elog and try to pick up from there. I hope to make significant progress by the next lab meeting.

Neural network stuff:

GANs for simulation:

  1. Other than putting the physics into simulation i.e the first portion of this elog, GANs can be trained to generate images similar to the original data. I am unfamiliar with training GANs and the various tricks that are used specifically for them. I will do a bit of reading and make an update by Friday. As of now, the data I plan to use is this and I will train it using the GTX 1060 on my machine.

Networks for beam tracking:

  1. I will use the architectures suggested in this work with a few modifications. I will use MSE loss function, Adam optimizer and my local GPU for training.
Attachment 1: simulated_motion0.mp4
Attachment 2: simulated_motion0.mp4
Attachment 3: simulated_motion0.mp4
  14669   Thu Jun 13 15:08:31 2019 | Milind | Update | Electronics | VCO pickup by Rich

Rich dropped by at around 3:00 PM today and picked up the VCO in Attachment #1 and left the note in Attachment #2 on Gautam's desk with the promise of bringing it back soon.

Attachment 1: WhatsApp_Image_2019-06-13_at_15.06.57.jpeg
Attachment 2: WhatsApp_Image_2019-06-13_at_15.06.57(1).jpeg
  14671   Thu Jun 13 21:29:52 2019 | Milind | Update | Cameras | Steps to interact with GigE

As directed by Gautam, I have set up one script- interact.py (at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/interact.py) to perform the following two tasks:

  1. View the GigE feed for a fixed period of time.
  2. Record the GigE feed for a fixed amount of time.

 

Steps to view GigE feed for a fixed amount of time:

  1. Run the following commands in the terminal to navigate to the concerned directory and then view the feed
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python interact.py --path_to_config <path_config> --mode 0 --view_time <viewing_time>, where
      1. path_config: full path to configuration file, defaults to /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini if --path_to_config is not used
      2. viewing_time: time in seconds for which the feed is to be displayed. The server is closed  after this time and the window freezes and can be manually closed.
    3. Exiting the feed early: the script terminates automatically after the specified time. To terminate the feed before that, close the window manually using the x icon at the top right. This makes sure that the server is correctly closed. If the feed is closed using Ctrl-C in the terminal, the server is left running, and any attempt to unwittingly set up another results in an error (see Attachment #1). In this case, the server and client processes need to be identified manually and killed. I have used the following steps:
      1. ps -eaf | grep server, then identify the PID for the python camera_server.py process
      2. kill PID
      3. similarly for the client file

Steps to record the GigE feed for a fixed amount of time:

I tried to look for elegant solutions that wouldn't require editing the code that Jon wrote and stumbled upon this useful bit of information, but ended up deciding that it was just easier to change camera_client_movie.py (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_client_movie.py). It can still be run as previously described, where video recording is terminated using Ctrl-C. Steps to record for a fixed period of time are:

  1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
  2. python interact.py --path_to_config <path_config> --mode 1 --save_time <recording_time> --file_name filename, where
    1. path_config: full path to configuration file, defaults to /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini if --path_to_config is not used
    2. recording_time: time in seconds for which the feed is to be saved. No video is displayed during this time.
    3. filename: full path to the file where the video is to be saved. Overwrites any existing files.

I'll make aliases for these to make the whole process more user friendly. I'm halting this for now and will discuss what else needs to be done once Gautam gets back.


Regarding the autolocker: I spoke to Aaron today and as he is in tomorrow, I'll ask him about the burt files and the ideal configuration.

I'm also starting with GANs now.

 

 

Attachment 1: terminal_error_server.pdf
  14678   Mon Jun 17 14:36:13 2019 | Milind | Update | Cameras | Convolutional neural networks for beam tracking

Begun setting up an environment (as mentioned before, on my local machine) and scripts to run experiments with Convolutional networks for beam tracking. All code has been pushed to this folder in the GigEcamera repository. I am presently looking for pre-processing techniques for the video which go beyond the usual "Crop the images! Normalize pixel values! Convert to Grayscale!".

Quote:

Networks for beam tracking:

  1. I will use the architectures suggested in this work with a few modifications. I will use MSE loss function, Adam optimizer and my local GPU for training.

 

  14680   Mon Jun 17 22:19:04 2019 | Milind | Update | Computer Scripts / Programs | PMC autolocker

As Rana asked me to in the last meeting, I dug through the elogs to determine what had become of the previous autolockers. I stumbled upon this elog by Rana from before Gautam cleaned up the medm screen. Out of curiosity, I ran the autolocker script using the instructions in Rana's elog. I did this a total of 5 times and could lock the PMC 3 times fairly quickly. I attempted to decipher the details of the code but did not make much headway owing to my unfamiliarity with the language. From what I could make out from the medm screen while the autolocker was running, it appeared to be the same method as that in this elog. I will take a look at it again tomorrow. However, I intend to spend most of tomorrow working on preprocessing the data, developing the CNN script and then the simulation. 

Quote:
 
  1.  I shall also begin working on a script to autolock the PMC based on what Rana showed me on Monday. I will also take a look at the contents of this elog and try to pick up from there. I hope to make significant progress by the next lab meeting.

  14682   Tue Jun 18 22:54:59 2019 | Milind | Update | Cameras | Convolutional neural networks for beam tracking

Worked further on this. I skimmed through a few resources to look for details of what pre-processing can be done. Here (I am planning to convert all these resources, particularly those I come across for GANs, into either a README on the repo or a wiki soon) are some of the useful things I found during today's reading. The work I skimmed through today mostly pointed to the use of a median filter for pre-processing, if any is to be done. I am presently using the Sequential() API in Keras to set up the neural network; I will train it tomorrow.
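For reference, an illustrative Sequential() sketch only -- not the architecture from the referenced work -- mapping a grayscale frame to an (x, y) spot position with the MSE loss and Adam optimizer mentioned above (input and layer sizes are placeholders; the tensorflow.keras bindings are assumed):

# Sketch only: small CNN regressor for beam-spot coordinates.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(8, (3, 3), activation='relu', input_shape=(64, 64, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(16, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(32, activation='relu'),
    Dense(2),                      # predicted (x, y) centroid
])
model.compile(optimizer='adam', loss='mse')
# model.fit(frames, centroids, epochs=..., batch_size=...) once the data are prepared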

Quote:

Begun setting up an environment (as mentioned before, on my local machine) and scripts to run experiments with Convolutional networks for beam tracking. All code has been pushed to this folder in the GigEcamera repository. I am presently looking for pre-processing techniques for the video which go beyond the usual "Crop the images! Normalize pixel values! Convert to Grayscale!".

 

  14694   Tue Jun 25 00:25:47 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

In the previous meeting, Koji pointed out (once again) that I should determine whether the displacement values and frames are synchronized before training a network. Pooja did the following last time. Koji also suggested that I first predict the motion (a series of x and y coordinates) and then slide the resulting plots around until I get the best match with the original motion. This is not possible with a neural-network-based approach, however: the network learns exactly what you show it, so it will learn any mismatch between the labels and the frames and predict exactly that. I therefore came up with what Koji described as a "hacky" method to achieve the same thing, using the OpenCV work described previously in this elog (the only addition being the application of a mask to block out the OSEMs and work only with the beam spot).

Hacky technique to sync frames and labels:

  1. I ran the OpenCV algorithm on the data to obtain the predicted motion shown in Attachment #2. As is evident, the predicted motion is only an approximation of the actual motion and also displays a shift. A plot of the Fourier transform of the signal (see Attachment #1) shows that the same frequency components are present, although the predominant component is at 0.22 Hz rather than the 0.2 Hz stated by Pooja in her elog; I wonder if this is of any consequence. The predicted motion can therefore be slid around until it overlaps "well" with the applied sinusoidal dither signal.
  2. Defining "well": I computed an error signal as the differnece between the predicted signal and the actual motion with each signal being normalized by subtracting the mean and then dividing the resulting signal by the maximum value (see Attachment #3). The lower the power of the resulting signal, the better the synchronization of the predicted and actual signal. Note: To achieve this overlap of signals, datapoints are removed from either the start or end of the signals and this effectively reduces the number of data points available for training by 36 pionts (see Attachment #4, positive and negative shifts merely indicate if the predicted signal is being moved right or left).
  3. Attachments #5 and #6 show the results of shifting the data by 36 samples. It is evident that there is far greater overlap between the prediction and the actual values.
  4. Well, what now? I will use the mapping between labels and frames obtained by the above steps to train a neural network.
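Here is a minimal sketch of the sliding procedure from step 2, assuming the predicted and applied motions are equal-length 1D numpy arrays (the normalization uses the maximum absolute value, and the maximum shift searched is arbitrary):

import numpy as np

def normalize(sig):
    # Subtract the mean and divide by the maximum absolute value.
    sig = sig - sig.mean()
    return sig / np.abs(sig).max()

def best_shift(predicted, applied, max_shift=50):
    # Return the shift (in samples) that minimizes the power of the residual
    # between the normalized predicted and applied signals.
    p, a = normalize(predicted), normalize(applied)
    errors = {}
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            residual = p[s:] - a[:len(a) - s]
        else:
            residual = p[:s] - a[-s:]
        errors[s] = np.mean(residual ** 2)
    return min(errors, key=errors.get)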

[Koji, Milind - 21/06/2019]

  1. Well, the above is fine, but why is contour detection really necessary? Why not take a weighted sum of all the pixel values (in a rectangular region obtained, say, after blocking out the OSEMs) to see what the centroid motion is? Black areas (pixel values of 0) will not contribute to this sum anyway. Perhaps that can be used for the sliding instead of the above (fallible!) approach, especially for cases in which the beam "spot" is just a collection of random speckles?
    1. Something like this was done by Pooja where she computed the sum of pixel intensities in a rectangular region containing the beam spot. However, she did this for very noisy data and observed intensity variation at a frequency double that of the applied signal.
    2. Results of applying a median filter and doing the same are presented in Attachment #7. Clearly, they can't be used for this sliding task.
    3. Results of computing the weighted sum of all the coordinates (with pixel intensities as the weights) are presented in Attachment #8. Clearly, for this data and for this task, the contour approach seems to be the better method. Further, these results just serve to prove Rana's point that such simple, unsophisticated, naive approaches will not produce the desired results, and they shall therefore be presented in this very context in the report that is due. (A sketch of such a weighted-centroid computation is given after this list.)
  2. The contour detection technique does not work if the beam spot is just a collection of speckles. In that case Koji suggested that we use a bounding convex hull instead of a contour. Alternatively, for a bunch of speckles I can perform dilation to reduce it to the same problem.
  3. Using gpstime for time stamping: to determine the absolute time at which a frame is grabbed. However, the latency between the time being recorded and the frame being grabbed needs to be determined for this, which should be doable using linux/python commands.
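For reference, here is a minimal sketch of the weighted-sum (intensity-weighted centroid) idea from point 1, with a median filter applied first; the ROI coordinates and kernel size are placeholders.

import cv2
import numpy as np

def weighted_centroid(frame, roi=(100, 100, 400, 400), ksize=5):
    # Intensity-weighted centroid of a rectangular region chosen to exclude
    # the OSEMs (placeholder coordinates). Returns (x, y) in full-frame pixel
    # coordinates, or None if the region is completely dark.
    x0, y0, x1, y1 = roi
    patch = np.ascontiguousarray(frame[y0:y1, x0:x1])
    patch = cv2.medianBlur(patch, ksize).astype(float)
    total = patch.sum()
    if total == 0:
        return None
    ys, xs = np.indices(patch.shape)
    return (xs * patch).sum() / total + x0, (ys * patch).sum() / total + y0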
Quote:

Worked further on this. I skimmed through a few resources to look for details of what pre-processing can be done. Here (I am planning to convert all these resources, particularly those I come across for GANs, into either a README on the repo or a Wiki soon) are some of the useful things I found during today's reading. The work I skimmed through today mostly pointed to the use of a median filter for pre-processing, if any is to be done at all. I am presently using the Sequential() API in Keras to set up the neural network. I will train it tomorrow.

 


Upcoming work (in the order of priority):

  1. Data acquisition: With the mode cleaner locked and Kruthi having focused onto the beam spot, I will obtain data for training both GANs and the convolutional networks. I really hope that some of the work done above can be extended to the new data. Rana suggested that I automate this by writing a script, which I will do after a discussion with Gautam tomorrow.
  2. Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainty in the predictions. I am not sure how this can be done, but I will look into it.
  3. Simulation:
    1. Putting the physics in: Previously, I worked on adding point scatterers. I shall add the effect of surface roughness and incorporate the BRDF next. Just as Gautam did, Rana also recommended that I go through Hiro Yamamoto's work to improve my understanding of this.
    2. GANs: I will put together a readme (which I will turn into a wiki later) for all the material that I am using to develop my ideas about GAN training. Currently, my understanding of GANs is that they take as input noise vectors which are fed to the generative networks which then produce the fakes. This clearly isn't the only way to do it as GANs are used for several applications such as image generation from text. I am referring to these papers to set up the necessary architecture.
  4. PMC autolocker: I will convert the existing autolocker script to Python. Rana also suggested that it would be interesting to see what the best settings of the hyperparameters would be to lock the PMC the fastest. I will write a script for that and produce a 3D surface plot of the average time taken to lock the PMC as a function of the PZT scan speed and the servo gain, to determine the optimal setting of these "hyperparameters".
  5. Cleaning up/formalizing code: Rana pointed out that any code that messes with channel values must return them to their original settings once the script has finished running. I have overlooked this and will add code to do so to all the files I have created thus far (a sketch of one way to do this follows this list). Further, while most of my code is well documented and frequently pushed to GitHub, I will make sure to push any code that I might have missed.
  6. Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for setting up a dedicated machine for running the camera server and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I talk to him about somehow figuring out the hardware to ensure that the GigE clock is the same as the rest of the system.
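For item 5 above, here is a minimal sketch of one way to guarantee that channel values are restored, assuming pyepics is available; the channel name in the usage comment is purely illustrative.

import contextlib
import epics   # pyepics

@contextlib.contextmanager
def restore_channels(channel_names):
    # Cache the current values of the given EPICS channels and write them
    # back when the block exits, even if the script raises an exception.
    cached = {ch: epics.caget(ch) for ch in channel_names}
    try:
        yield cached
    finally:
        for ch, val in cached.items():
            if val is not None:
                epics.caput(ch, val)

# Usage sketch (the channel name is a placeholder):
# with restore_channels(['C1:XXX-SOME_GAIN']):
#     epics.caput('C1:XXX-SOME_GAIN', 10)
#     ... run the measurement ...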

 

Attachment 1: Spectra.pdf
Spectra.pdf
Attachment 2: normalised_comparison_y.pdf
normalised_comparison_y.pdf
Attachment 3: residue_normalised_y.pdf
residue_normalised_y.pdf
Attachment 4: error_power_sliding.pdf
error_power_sliding.pdf
Attachment 5: normalised_comparison_y.pdf
normalised_comparison_y.pdf
Attachment 6: residue_normalised_y.pdf
residue_normalised_y.pdf
Attachment 7: intensum.pdf
intensum.pdf
Attachment 8: centroid.pdf
centroid.pdf
  14697   Tue Jun 25 22:14:10 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

I discussed this with Gautam and he asked me to come up with a list of signals that I would need and then design the data acquisition task at a high level before proceeding. I'm working on that right now. We came up with a very elementary sketch of what the script will do:

  1. Check the MC is locked.
  2. Choose an exposure value.
  3. Choose a frequency and amplitude value for the applied sinusoidal dither (check warning by Gabriele below).
  4. Apply sinusoidal dither to optic.
  5. Timestamping: record gpstime, the instantaneous channel values and a frame. These frames can later be put together in a sequence and a network can be trained on this. (NEED TO COME UP WITH SOMETHING CLEVERER THAN THIS!) A rough sketch of such an acquisition loop is given after this list.
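Here is a rough sketch of what such an acquisition loop could look like, using pyepics for the slow channels and pypylon for frame grabbing; all channel names, thresholds and file names below are placeholders, and the dither excitation itself (via awg/cdsutils) is left out.

import csv
import time
import numpy as np
import epics                      # pyepics
from pypylon import pylon

MC_LOCK_CHANNEL = 'C1:IOO-MC_TRANS_SUM'       # placeholder lock indicator
DITHER_READBACK = 'C1:SUS-ETMX_ASCPIT_OUT16'  # placeholder channel

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

with open('acquisition_log.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for i in range(1000):
        if epics.caget(MC_LOCK_CHANNEL) < 1.0e4:   # arbitrary "MC locked" threshold
            break                                  # stop if the MC drops lock
        grab = camera.GrabOne(1000)                # grab one frame, 1 s timeout
        stamp = time.time()                        # swap in a GPS time source here
        np.save('frame_%05d.npy' % i, grab.Array)  # save the raw frame
        writer.writerow([stamp, i, epics.caget(DITHER_READBACK)])

camera.Close()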

Tomorrow I will try to prepare a dummy script for this before the meeting at noon. Gautam asked me to familiarize myself with awg and cdsutils (I have already used ezca before) to write the script. This will also help me with the following two tasks:

  1. IFO test scripts that Rana asked me to work on a while ago
  2. The PMC autolocker scripts that Rana asked me to work on
Quote:
 

Upcoming work (in the order of priority):

  1. Data acquisition: With the mode cleaner being locked and Kruthi having focused on to the beam spot, I will obtain data for training both GANs and the convolutional networks. I really hope that some of the work done above can be extended to the new data. Rana suggested that I automate this by writing a script which I will do after a discussion with Gautam tomorrow.

 

I got to speak to Gabriele about the project today. He suggested that if I am using Rana's memory-based approach, I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time, and that if I use the frame-wise approach, I should try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. This is something that Rana and Gautam emphasized as well.

I am pushing the code that I wrote for

  1. Kruthi's exposure variation - CCD calibration experiment
  2. modified camera_client_movie.py code (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)
  3. interact.py (to interact with the GigE in viewing or recording mode) (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)

to the GigEcamera repository.

 

Gautam also asked me to look at Jigyasa's report and elog 13443 to come up with the specs of a machine that would accommodate a dedicated camera server.

 

Quote:
 
  1. Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainty in the predictions. I am not sure how this can be done, but I will look into it.
  2. Cleaning up/ formalizing code: Rana pointed out that any code that messes with channel values must return them to the original settings once the script is finished running. I have overlooked this and will add code to do this to all the files I have created thus far. Further, while most of my code is well documented and frequently pushed to Github, I will make sure to push any code that I might have missed to github.
  3. Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for setting up a dedicated machine for running the camera server and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I talk to him about somehow figuring out the hardware to ensure that the GigE clock is the same as the rest of the system.

 

 

  14698   Tue Jun 25 23:52:37 2019 MilindUpdateCamerasSimulation enhancements

Yesterday, Rana asked me to look at Hiro Yamamoto's docs on the DCC to improve the simulation. I'm performing a first pass (just skimming through to see if they're relevant; I will go through them more carefully soon!) and putting up the useful ones here for future reference. @Kruthi's help much appreciated!

  14700   Wed Jun 26 11:11:40 2019 MilindUpdateIOOPMC and IMC locked again, some MEDM maintenance

After helping Aaron key the crate and do a burt restore, I realized that it would probably be best to record the steps that Koji showed me for doing a burt restore, as a reference for anyone in the future.

Commands (in terminal):

  1. burttoday: changes to the directory with snapshots for the day (/opt/rtcds/caltech/c1/burt/autoburt/today)
  2. burtgooey: opens a new window with several buttons of which "Restore" needs to be selected. This opens up a second window as shown in Attachment #1. Click on Snapshot files and navigate to the snapshot you wish to restore (these are present at /opt/rtcds/caltech/c1/burt/autoburt/snapshots) and select that. A green "OK" button indicates if the Restore can be performed without a hitch. Hit "Restore" to perform the burtrestore.

 

Gautam also explained today that the sticky slider problem is a hardware issue: it basically means that the signal (a voltage output, for instance) that you request from the MEDM screen is not what the hardware delivers. Twice now, we have gotten around this with a burt restore. My understanding of a burt restore is that it restores values from a certain time to the EPICS channels, so I don't understand why a restoration at the software level should fix how the hardware responds. Why does this happen?

Attachment 1: burtgooey.pdf
burtgooey.pdf