  14114   Sun Jul 29 23:15:34 2018   pooja   Update   Cameras   Developing CNN

Aim: To develop a convolutional neural network that resolves mirror motion from video.

Input: previously simulated video of beam-spot motion in pitch, generated by applying 4 sine waves with frequencies of 0.2, 0.4, 0.1 and 0.3 Hz and amplitudes (as fractions of the frame size) of 0.1, 0.04, 0.05 and 0.08, where uniform random noise of up to 0.05 has been added to the amplitudes and frequencies. This is divided into train (0.4), validation (0.1) and test (0.5) fractions.

Model topology:

  • Number of filters = 2
  • Kernel size = 2
  • Size of pooling windows = 2
  • ----->  Dense layer of 4 nodes (selu activation)  ----->  Output layer of 1 node (linear activation)

Batch size = 32, Number of epochs = 128, loss function = mean squared error

Optimizer: Nadam ( learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)
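For reference, a minimal Keras sketch of the topology and training settings described above. The input shape (64x64 grayscale frames) and the use of a 2D convolution are assumptions not stated in this entry, and the convolutional layer's activation is left unspecified as it is above; newer Keras versions spell the learning-rate argument learning_rate instead of lr.

  # Minimal sketch of the CNN described above (assumed: 64x64x1 input, 2D convolution).
  from keras.models import Sequential
  from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
  from keras.optimizers import Nadam

  model = Sequential([
      Conv2D(filters=2, kernel_size=2, input_shape=(64, 64, 1)),  # activation not specified above
      MaxPooling2D(pool_size=2),
      Flatten(),
      Dense(4, activation='selu'),       # dense layer of 4 nodes
      Dense(1, activation='linear'),     # output layer of 1 node
  ])
  model.compile(optimizer=Nadam(lr=1e-5, beta_1=0.8, beta_2=0.85),
                loss='mean_squared_error')
  # model.fit(x_train, y_train, batch_size=32, epochs=128, validation_data=(x_val, y_val))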

Plots of the CNN output and the applied signal are given in Attachment 1. The variation of the loss value with epochs is given in Attachment 2.

This needs to be analysed further by increasing the random uniform noise over the pixels and by training the CNN on simulated data with varying amplitudes and frequencies of the sine waves.

Attachment 1: conv_nn_varying_freq_amp_1.pdf
Attachment 2: conv_nn_varying_freq_amp_2.pdf
  14632   Thu May 23 08:51:30 2019   Milind   Update   Cameras   Setting up beam spot simulation

I have done the following thus far since elog #14626:

Simulation:

  1. Cleaned up Pooja's code for simulating the beam spot. Added extensive comments and made the code modular. Simulated the Gaussian beam spot to exhibit 
    1. Horizontal motion
    2. Vertical motion
    3. motion along both x and y directions:
  2. The motion exhibited in any direction in the above videos is a combination of four sinusoids at frequencies of 0.2, 0.4, 0.1 and 0.3 Hz, with amplitudes given by the script defaults ((0.1, 0.04, 0.05, 0.08)*64 for these simulations). The variation is shown in Attachment 1. For convenience, I have created the above video files with only a hundred frames (fps = 10, total time ~10 s), which took around 2.4 s to write; longer files take much longer. As I wish to simply perform image processing on these frames immediately, I don't see the need to obtain long video files right away.
  3. I have yet to add noise at the image level and randomness to the motion itself; I intend to do that right away. Currently, video 3 shows that even though the time variation of the coordinates of the beam center is sinusoidal, the beam spot itself moves along a line, as the x and y motions have the same phase. I intend to add a phase offset between the motion of the x and y coordinates of the beam center, but it doesn't seem all too important right now. The white margins in the generated videos are annoying and make tracking the beam spot slightly difficult, as they introduce an offset (see below). I shall fix them later if simple cropping doesn't do the trick.
  4. I have yet to push the code to git. I will do that once I've incorporated the changes in (3).

Circle detection:

  1. If the beam spot intensity variation is indeed Gaussian (as it definitely is in the simulation), then the contours are circular. Consequently, centroid detection of the beam spot reduces to detecting these contours and then finding their centroid (center). I tried this for a simulated video I found in elog post 14005. It was a quick implementation of the following sequence of operations: thresholding (arbitrarily set to 127), contour detection (video dependent and needs to be done manually), and centroid determination from the required contour (a minimal sketch of this sequence is given after this list). It's evident that the beam spot is being tracked (green circle in the video); check Attachment #2 for the results. However, no other quantitative claims can be made in the absence of other data.
  2. Following this, Gautam pointed me to a capture in elog post 13908. Again, the steps mentioned in (1) were followed and the results are presented in Attachment #3. This time, however, the contour is no longer circular but distorted. I didn't pursue this further; this test was just done to check that the approach extends (even if not seamlessly) to real data. I'm really looking forward to trying this with real data.
  3. So far, the problem has been that there is no source data to compare the tracked centroid with. That ought to be resolved with the use of simulated data that I've generated above. As mentioned before, some matplotlib features such as saving with margins introduce offsets in the tracked beam position. However, I expect to still be able to see the same sinusoidal motion. As a quick test, I'll obtain the fft of the centroid position time series data and check if the expected frequencies are present.
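A minimal sketch of the threshold / contour / centroid sequence described in (1), together with the FFT check mentioned in (3). The video filename, frame rate and single-contour assumption are placeholders; note that older OpenCV versions return three values from findContours.

  import cv2
  import numpy as np

  def beam_centroid(gray, threshold=127):
      """Centroid (cx, cy) of the largest bright contour in a grayscale frame."""
      _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
      contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      m = cv2.moments(max(contours, key=cv2.contourArea))
      return m['m10'] / m['m00'], m['m01'] / m['m00']

  # Track the centroid through a (hypothetical) video and check the spectrum of the y motion.
  cap = cv2.VideoCapture('simulated_beam.avi')   # placeholder filename
  ys = []
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      ys.append(beam_centroid(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))[1])
  cap.release()

  fps = 10
  freqs = np.fft.rfftfreq(len(ys), d=1.0 / fps)
  spectrum = np.abs(np.fft.rfft(np.asarray(ys) - np.mean(ys)))   # expect peaks at 0.1-0.4 Hz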

I will wrap up the simulation code today and proceed to go through Gabriele's repo. I will also test whether the contour detection method works with the simulated data. During our meeting, it was pointed out that when working with real data, care has to be taken to synchronize the data with the video obtained. However, I wish to put off working on that till later in the pipeline, as I think it doesn't affect the algorithm being used. I hope that's alright (?).

 

Attachment 1: variation.pdf
Attachment 2: contours_simulated.mp4
Attachment 3: contours_real.mp4
  14633   Thu May 23 10:18:39 2019   Kruthi   Update   Cameras   CCD calibration

On Tuesday, I tried reproducing Pooja's measurements (https://nodus.ligo.caltech.edu:8081/40m/13986). The table below shows the values I got. Pictures of the LED circuit, schematic and the setup are attached. The power meter readings fluctuated quite a bit for input voltages (Vcc) > 8 V, so to be on the safe side I expect a maximum uncertainty of 50 µW. Though the readings at lower input voltages didn't vary much over time (variation < 2 µW), I don't know how reliable the Ophir power meter is at such low power levels. The optical power output of the LED was linear for input voltages from 10 V to 20 V (a quick check of this is sketched after the table). I'll proceed with the CCD calibration soon.

Input voltage (Vcc) [V]    Optical power
0 (dark reading)           1.6 nW
2                          55.4 µW
4                          215.9 µW
6                          0.398 mW
8                          0.585 mW
10                         0.769 mW
12                         0.929 mW
14                         1.065 mW
16                         1.216 mW
18                         1.330 mW
20                         1.437 mW
22                         1.484 mW
24                         1.565 mW
26                         1.644 mW
28                         1.678 mW
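As a quick check of the linearity claim above, a straight-line fit to the 10 V to 20 V rows of the table (this is only a sketch of the check, with the values copied from the table in mW):

  import numpy as np

  v = np.array([10, 12, 14, 16, 18, 20])                      # input voltage [V]
  p = np.array([0.769, 0.929, 1.065, 1.216, 1.330, 1.437])    # optical power [mW]
  slope, offset = np.polyfit(v, p, 1)
  residuals = p - (slope * v + offset)
  print("slope = %.1f uW/V, max residual = %.1f uW"
        % (slope * 1e3, np.abs(residuals).max() * 1e3))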

Attachment 1: setup.jpeg
Attachment 2: led_circuit.jpeg
Attachment 3: led_schematic.pdf
  14635   Thu May 23 15:37:30 2019   Milind   Update   Cameras   Simulation enhancements and performance of contour detection
  1. Implemented image level noise for simulation. Added only uniform random noise.
  2. Implemented addition of uniform random noise to any sinusoidal motion of beam spot.
  3. Implemented motion along y axis according to data in "power_spectrum" file.
  4. Implemented simulation of random motion of the beam spot in both x and y directions (done previously by Pooja, but a cleaner version).
  5. Created a 10 s video file with the beam spot moving along the y direction as shown in Attachment #1. This was created by mixing four sinusoids with frequencies of (0.1, 0.2, 0.4, 0.8) Hz and amplitudes, as fractions of N = 64, of (0.1, 0.09, 0.08, 0.09). FPS = 10; the total number of frames is 100 for the sake of convenience. See Attachment #5.
  6. Following this, I used the thresholding (threshold = 127, chosen arbitrarily), contour detection and centroid computation sequence (see Attachment #6 for results) to obtain the plot in Attachment 2 for the predicted motion of the y coordinate. As is evident, the centering and scale of values obtained are off and I still haven't figured out how to precisely convert from one to another.
  7. Consequently, as a workaround, I simply normalised the values corresponding to each plot by subtracting the mean in each case and dividing the resulting series of values by its maximum (a short sketch of this follows the list). This resulted in the plots in Attachments 3 and 4, which show the normalised y-coordinate variation and the error between the actual and predicted values (between 0 and 1), respectively.
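A short sketch of the normalisation used in (7), assuming the actual and predicted traces are 1-D arrays:

  import numpy as np

  def normalise(series):
      """Subtract the mean, then divide by the maximum of the resulting series."""
      series = np.asarray(series, dtype=float)
      series = series - series.mean()
      return series / series.max()

  # residual = normalise(actual_y) - normalise(predicted_y)   # plotted in Attachment 4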

Things yet to be done:

Simulation:

  1. I will implement the mean squared error function to compute the relative performance as conditions change.
  2. I will add noise both to the image and to the motion (meaning introduce some randomness in the motion) to see how the performance, determined by both the curves such as the ones below and the mean square error, changes.
  3. Following this, I will vary the standard deviation of the beam spot along X and Y directions and try to obtain beam spot motion similar to the video in Attachment #2 of elog post 14632.
  4. Currently, I have made no effort to carefully tune the parameters associated with contour detection and threshold and have simply used the popular defaults. While this has worked admirably in the case of the simple simulated videos, I suspect much more tweaking will be needed before I can use this on real data.
  5. It is an easy step to determine the performance of the algorithm for random, circular and other motions of the beam spot. However, I will defer this till later as I do not see any immediate value in this.
  6. Determine noise threshold. In simulation or with real data: obtain a video where the beam spot is ideally motionless (easy to do with simulated data) and then apply the above approach to the video and study the resulting predicted motion. In simulation, I expect the predictions for a motionless beam spot video (without noise) to be constant. Therefore, I shall add some noise to the video and study the prediction of the algorithm.
  7. NOTE: the above approach relies on some previous knowledge of what the video data will look like. This is useful in determining which contours to ignore, if any like the four bright regions at the corners in this video.

Real data:

  1. Obtain real data and evaluate whether the algorithm is successful in determining contours that can be used to track the beam spot.
  2. Once the kind of video feed this will be used on is decided, use the data generated from such a feed to determine what the best settings of hyperparameters are and detect the beam spot motion.
  3. Synchronization of data stream regarding beam spot motion and video.
  4. Determine the calibration: from angular motion of the optic, to beam-spot motion on the camera sensor, to the pixel mapping in the frames being processed.

Other approaches:

  1. Review work done by Gabriele with CNNs, implement it and then compare performance with the above method.
Attachment 1: actual_motion.pdf
Attachment 2: predicted_motion.pdf
Attachment 3: normalised_comparison.pdf
Attachment 4: residue_normalised.pdf
Attachment 5: simulated_motion1.mp4
Attachment 6: elog_22may_contours.mp4
  14638   Sat May 25 20:29:08 2019   Milind   Update   Cameras   Simulation enhancements and performance of contour detection
  1. I used the same motion as defined in the previous elog and gradually added noise to the images. The noise added was uniform random noise: a 2-dimensional array of random numbers between 0 and a predetermined maximum (noise_amp). The previous elog provides the variation of the y coordinate; here, I am also uploading the effect of noise on the error in the prediction of the x coordinate. As a reminder, the motion of the beam spot center was purely vertical. Attachment #1 is the error for noise_amp = 0, #2 for noise_amp = 20 and #3 for noise_amp = 40. While Attachment #3 gives the impression of a large error, this is not really the case: without normalization, each peak corresponds to a deviation of one pixel about the central value (see Attachment #4 for reference).
  2. While the error does increase marginally, adding noise has no significant effect on the prediction of the y coordinate of the centroid as Attachment #5 shows at noise_amp = 40.
  3. I am currently running an experiment to obtain the variation of the mean squared error with different noise amplitudes and will put up the plots soon (a sketch of this sweep is given after this list). Further, I shall vary the resolution of the image frames and the standard deviation of the Gaussian beam with time, try to obtain simulations very close to the real data available, and then determine the performance of the algorithm.
  4. The following videos will serve as a quick reference for what the videos and detection look like at
    1. noise_amp = 20
    2. noise_amp = 40
  5. I also performed a quick experiment to see how low the amplitude of motion could be before the algorithm failed to detect it, and found this to occur at roughly two orders of magnitude below the values used in the previous post. This is a line of thought I intend to pursue more carefully; I am looking into how OpenCV and Python handle images with floats as coordinates and will provide more details about this trial soon. This should give us an idea of the smallest motion of the beam spot that can be resolved.
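A sketch of the noise-amplitude sweep mentioned in (3). Here beam_centroid stands for the thresholding/contour/centroid routine from elog 14632, and frames/true_y are assumed to come from the simulation; none of this is the actual experiment code.

  import numpy as np

  def add_uniform_noise(frame, noise_amp, rng=np.random):
      """Add uniform random noise in [0, noise_amp] to an 8-bit frame."""
      noisy = frame.astype(float) + rng.uniform(0, noise_amp, size=frame.shape)
      return np.clip(noisy, 0, 255).astype(np.uint8)

  # mse_vs_noise = {}
  # for noise_amp in (0, 10, 20, 30, 40):
  #     pred_y = [beam_centroid(add_uniform_noise(f, noise_amp))[1] for f in frames]
  #     mse_vs_noise[noise_amp] = np.mean((np.asarray(pred_y) - true_y) ** 2)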
Quote:
  1. Implemented image level noise for simulation. Added only uniform random noise.
  2. Implemented addition of uniform random noise to any sinusoidal motion of beam spot.
  3. Implemented motion along y axis according to data in "power_spectrum" file.
  4. Implemented simulation of random motion of beam spot in both x and y directions (done previously by Pooja, but a cleaner version).
  5. Created a video file for 10s with motion of beam spot along the y direction as given by Attachment #1. This was created by mixing four sinusoids at different amplitudes (frequencies (0.1, 0.2, 0.4, 0.8) Hz Amplitudes as fractions of N = 64 (0.1 0.09 0.08 0.09). FPS = 10. Total number of frames = 100 for the sake of convenience.  See Attachment #5.
  6. Following this, I used the thresholding (threshold = 127, chosen arbitrarily), contour detection and centroid computation sequence (see Attachment #6 for results) to obtain the plot in Attachment 2 for the predicted motion of the y coordinate. As is evident, the centering and scale of values obtained are off and I still haven't figured out how to precisely convert from one to another.
  7. Consequently, as a workaround, I simply normalised the values corresponding to each plot by subtracting the mean in each case and dividing the resulting series of values by their maximum. This resulted in the plots in Attachments 3 and 4 which show the normalised values of y coordinate variation and the error between the actual and predicted values between 0 and 1 respectively.

Things yet to be done:

Simulation:

  1. I will implement the mean squared error function to compute the relative performance as conditions change.
  2. I will add noise both to the image and to the motion (meaning introduce some randomness in the motion) to see how the performance, determined by both the curves such as the ones below and the mean square error, changes.
  3. Following this, I will vary the standard deviation of the beam spot along X and Y directions and try to obtain beam spot motion similar to the video in Attachment #2 of elog post 14632.
  4. Currently, I have made no effort to carefully tune the parameters associated with contour detection and threshold and have simply used the popular defaults. While this has worked admirably in the case of the simple simulated videos, I suspect much more tweaking will be needed before I can use this on real data.
  5. It is an easy step to determine the performance of the algorithm for random, circular and other motions of the beam spot. However, I will defer this till later as I do not see any immediate value in this.
  6. Determine noise threshold. In simulation or with real data: obtain a video where the beam spot is ideally motionless (easy to do with simulated data) and then apply the above approach to the video and study the resulting predicted motion. In simulation, I expect the predictions for a motionless beam spot video (without noise) to be constant. Therefore, I shall add some noise to the video and study the prediction of the algorithm.
  7. NOTE: the above approach relies on some previous knowledge of what the video data will look like. This is useful in determining which contours to ignore, if any like the four bright regions at the corners in this video.

Real data:

  1. Obtaining real data and evaluate if the algorithm is successful in determining contours which can be used to track the beam spot.
  2. Once the kind of video feed this will be used on is decided, use the data generated from such a feed to determine what the best settings of hyperparameters are and detect the beam spot motion.
  3. Synchronization of data stream regarding beam spot motion and video.
  4. Determine the calibration: angular motion of the optic to beam spot motion on the camera sensor to video to pixel mapping in the frames being processed.

Other approaches:

  1. Review work done by Gabriele with CNNs, implement it and then compare performance with the above method.

 

Attachment 1: residue_normalised_x.pdf
Attachment 2: residue_normalised_x.pdf
Attachment 3: residue_normalised_x.pdf
Attachment 4: predicted_motion_x.pdf
Attachment 5: normalised_comparison_y.pdf
  14639   Sun May 26 21:47:07 2019   Kruthi   Update   Cameras   CCD Calibration

 

On Friday, I tried calibrating the CCD with the following setup. Here, I present the expected values of scattered power (Ps) at θs = 45°, where θs is the scattering angle (refer to the figure). The LED box has a hole with an aperture of 5 mm and the LED is placed approximately 7 mm from the hole, so the aperture angle is 2·tan⁻¹(2.5/7) ≈ 40°. Using this, the spot size of the LED light at a distance d was estimated. The width of the LED holder/stand (approx 4") puts a constraint on the lowest possible θs. At this lowest possible θs, the distance of the CCD/Ophir from the screen is given by √(d² + (2″)²). This was taken as the imaging distance for the other angles as well.

In the table below, Pi is taken to be 1.5 mW, and Ps and Ω were calculated using the following equations (a short sketch of this calculation follows):

  \Omega = \frac{\text{CCD sensor area}}{(\text{imaging distance})^2}, \qquad P_s = \frac{1}{\pi}\, P_i\, \Omega\, \cos(45^{\circ})
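A quick sketch of this calculation, used to fill in the table below (Pi = 1.5 mW as stated; the value for the first row reproduces the ~35 µW entry):

  import numpy as np

  def expected_scattered_power_uW(P_i_mW, omega_sr, theta_s_deg=45.0):
      """Ps = (1/pi) * Pi * Omega * cos(theta_s), returned in microwatts."""
      return (P_i_mW * 1e3 / np.pi) * omega_sr * np.cos(np.radians(theta_s_deg))

  print(expected_scattered_power_uW(1.5, 0.1036))   # first row of the table: ~35 uW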

d (cm)   Estimated spot diameter (cm)   Lowest possible θs (deg)   Distance of CCD/Ophir from screen (cm)   Ω (sr)    Expected Ps at θs = 45° (µW)
1.0      1.2                            78.86                      5.2                                      0.1036    34.98
2.0      2.0                            68.51                      5.5                                      0.0259    8.74
3.0      2.7                            59.44                      5.9                                      0.0115    3.88
4.0      3.4                            51.78                      6.5                                      0.0065    2.19
5.0      4.1                            45.45                      7.1                                      0.0041    1.38
6.0      4.9                            40.25                      7.9                                      0.0029    0.98
7.0      5.6                            35.97                      8.6                                      0.0021    0.71
8.0      6.3                            32.42                      9.5                                      0.0016    0.54
9.0      7.1                            29.44                      10.3                                     0.0013    0.44
10.0     7.8                            26.93                      11.2                                     0.0010    0.34

On measuring the scattered power (Ps) with the Ophir power meter, I got values of the same order as the expected values given in the table above. As Gautam suggested, we could use a photodiode to detect the scattered power, as it will offer better precision, or we could calibrate the power meter using the method mentioned in Johannes's post: https://nodus.ligo.caltech.edu:8081/40m/13391.

 

Attachment 1: CCD_calibration_setup.png
  14644   Fri May 31 01:38:21 2019   Kruthi   Update   Cameras   Telescope

[Kruthi, Milind]

Yesterday, we were able to capture some images of objects at a distance of approx 60 cm (see the attachment) with the GigE mounted onto the telescope. I think Johannes had used it earlier to image the ETMX (https://nodus.ligo.caltech.edu:8081/40m/13375). His elog entry doesn't say anything about the focal lengths of the lenses he used, and the link to the python code he used to calculate the lens solution wasn't working. After Gautam fixed it, I took a look at it. He used 150mm (front lens) and 250mm (back lens) as the focal lengths of the lenses for the calculation. Using the lens formula and an image of a nearby light source, with a very rough measurement, I found the focal lengths to be around 14 cm and 23 cm. So I'm assuming that the lenses in the telescope have the same focal lengths as in his code, i.e. 150mm and 250mm.
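For reference, the rough focal-length estimate above uses the usual thin-lens relation, with d_o the object (source) distance and d_i the image distance:

  \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}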

Attachment 1: telescope_mug_image.pdf
  14646   Mon Jun 3 16:40:48 2019   rana   Update   Cameras   Telescope

no BMP files crying

  14649   Mon Jun 3 21:03:54 2019   Milind   Update   Cameras   Steps to interact with GigE

The following summarizes the steps for setting up and interacting with a GigE camera.

Launching the PylonViewerApp:

  1. Open a new terminal using Ctrl + Alt + T on the keyboard.
  2. Launch the app using the command pylon.

Using setup python scripts to interact with the GigE (a summary of the steps listed here and here)

  1. Connect the GigE camera to the ethernet cable and record its IP address. If the IP address is not printed on the GigE, launch the PylonViewerApp and navigate to the "Tools" dropdown menu and select "pylon IP configurator" to be presented with a list of all connected cameras and their IP addresses.
  2. To simply observe the camera feed, open a new terminal and run the following commands:
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python camera_server.py -c C1-CAM-ETMX.ini  (only one config file is present currently and more will be added as more cameras are set up. The "Camera IP" in the  .ini file must match that determined in step 1). This starts the camera server.
  3. Open a new tab (Ctrl + Shift + T on the keyboard) in the terminal. You should still be in the same directory as navigated to in step 2.1. Run the following command.
    1. python camera_client.py -c C1-CAM-ETMX.ini
  4. This should bring up a feed from the camera. Close at will.
  5. To record a video file, repeat steps 1 and 2. Open a new tab as described in step 3. Then run the following command:
    1. python camera_client_movie.py -c C1-CAM-ETMX.ini
  6. Enter the full path to the file where you wish to save the movie in the prompt that appears. Use ./your_file_name_here.avi to save the video in the working directory. Press Ctrl + C to stop recording. The recording can be played by navigating to the location where it is stored and running vlc your_file_name_here.avi.
  7. To adjust the exposure setting of the camera, open a new terminal and run the command sitemap . This should bring up the medm display in Attachment #1. Click on the Video/Lights button highlighted in red and select GigE. Adjust the exposure value in the next window using the slider before starting the server in step 1. Adjusting the slider once the server is started causes the program to freeze. Also set the Snapshot channel C1:CAM-ETMX_SNAP to off as mentioned in elog 14037.

 

Upcoming updates:

  1. Automatic script to run the above steps (a rough sketch is given below).
  2. Pre-determining the time duration of the recorded video.
  3. Obtaining snapshots.
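A hypothetical sketch of what the automatic script in item (1) could look like, just chaining the commands above with subprocess; the 5 s settle time is a guess.

  import subprocess
  import time

  GIGE_DIR = '/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon'
  CONFIG = 'C1-CAM-ETMX.ini'

  # Start the camera server, give it a moment to come up, then start the client.
  server = subprocess.Popen(['python', 'camera_server.py', '-c', CONFIG], cwd=GIGE_DIR)
  time.sleep(5)
  client = subprocess.Popen(['python', 'camera_client.py', '-c', CONFIG], cwd=GIGE_DIR)
  client.wait()       # returns when the feed window is closed
  server.terminate()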

 

Attachment 1: sitemap.pdf
  14651   Tue Jun 4 00:11:45 2019   Kruthi   Update   Cameras   GigE setup

Chub and I are trying to figure out a way to co-mount GigE into the existing cylindrical enclosure. I'm attaching a picture of the current setup that is being used for imaging MC2. As of now, I have thought of 3 possible setups (schematics attached); but I don't know how feasible they are. Let us know if you have any other ideas.


Update: Setup 3 would require us to use the 52 cm long enclosure. It has a long breadboard welded to it, which makes it very convenient, but the whole setup becomes quite heavy and it's not that safe to install such a heavy enclosure on top of the vacuum system. Also, aligning its components would be more complicated than in the other setups.

I decided to start with the simplest one, so I tried implementing setup 1. Fitting the analog camera horizontally alongside the telescope turned out to be tricky. Though I did manage to fit it in, it didn't leave any room to change the orientation of the beamsplitter. As Koji suggested, I'll be trying setup 2.

Attachment 1: MC2.pdf
Attachment 2: Setup_3.png
Attachment 3: Setup_1.png
Attachment 4: Setup_2.png
  14654   Tue Jun 4 22:24:45 2019   Milind   Update   Cameras   Steps to interact with GigE

Figured out how to grab frames by looking at the pypylon documentation, as that turned out to be easier than modifying Jon's code. I'm still not sure how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). I will figure that out tomorrow and make a script suitable for Kruthi's usage (obtaining a bunch of images with different exposure times). I will also try to integrate the video saving and streaming code into this and have a neat little script set up asap.
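A rough pypylon sketch of the frame grabbing mentioned above. The exposure node name depends on the camera model (ExposureTimeRaw vs. ExposureTimeAbs), so that line is an assumption and is left commented out.

  from pypylon import pylon

  camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
  camera.Open()
  # camera.ExposureTimeRaw.SetValue(2000)   # assumed node name; verify with the pylon app
  camera.StartGrabbingMax(1)
  result = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
  if result.GrabSucceeded():
      frame = result.Array        # numpy array of pixel values
  result.Release()
  camera.Close()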

Quote:

Upcoming updates:

  1. Automatic script to run the above steps.
  2. Pre-determining the time duration of the recorded video.
  3. Obtaining snapshots.
  14655   Tue Jun 4 23:41:13 2019   gautam   Update   Cameras   Steps to interact with GigE

caget/caput probably does the job.

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

  14656   Wed Jun 5 22:30:13 2019   Milind   Update   Cameras   Steps to interact with GigE

Thanks! It does indeed do the trick! With that I was able to

  1. Obtain current exposure value using the terminal command caget C1:CAM-ETMX_EXP
  2. Set exposure value using the terminal command caput C1:CAM-ETMX_EXP <desired_exposure_value>

Further, a quick look at the camera server code in /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py revealed that the script expects a "Number of Snapshots" entry under "Camera Settings" in the configuration file (i.e. in C1-CAM-ETMX.ini at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini), which wasn't present before. Adding this parameter to the config file allows one to take a snapshot using the medm screen. In fact, unlike as described in this elog, I was able to start the server and client as described in elog 14649 and then obtain snapshots using the terminal command caput C1:CAM-ETMX_SNAP 1.

Quote:

caget/caput probably does the job.

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

 

  14657   Thu Jun 6 16:01:52 2019   Milind   Update   Cameras   Steps to interact with GigE

[Koji, Milind]

 

Today I ran into the following errors:

  1. Inability to access the EPICS channels using the commands caget and caput and thus the generation of a blank medm screen (error in Attachment #1) when simultaneously running the code in camera_server.py (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py).
  2. Inability to run camera_server.py code with an active medm screen with a "... failed to read <EPICS channel>" error.

Therefore, Koji and I took a look at it and putting our faith in Gautam's hunch from elog 13023, we walked down to rack 1Y1 and keyed it. Following this, all the functionality previously described was restored! Koji then took a look at all the channels handled by this machine and bestowed upon me the permission to key the crate should I lose control of the GigE again.

Quote:

Thanks! It does indeed do the trick! With that I was able to

  1. Obtain current exposure value using the terminal command caget C1:CAM-ETMX_EXP
  2. Set exposure value using the terminal command caput C1:CAM-ETMX_EXP <desired_exposure_value>

Further, a quick look at the camera server code in /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py revealed that the script expects a "Number of Snapshots" entry under "Camera Settings" in the configuration file (i.e. in C1-CAM-ETMX.ini at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini), which wasn't present before. Adding this parameter to the config file allows one to take a snapshot using the medm screen. In fact, unlike as described in this elog, I was able to start the server and client as described in elog 14649 and then obtain snapshots using the terminal command caput C1:CAM-ETMX_SNAP 1.

Quote:

caget/caput probably does the job.

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

 

 

Attachment 1: terminal_medm_error.pdf
  14660   Sun Jun 9 21:24:00 2019   Kruthi   Update   Cameras   GigE setup

I managed to fit all the parts into the cylindrical enclosure without having to drill a hole in the enclosure to mount the analog camera (pictures attached); thanks to Koji for helping me find some fancy mechanical components (swivel post clamps, right angle post clamps and brackets). On Thursday, with Chub's help, I took a look at all the current analog camera positions with respect to the cylindrical enclosures. I think this setup gives me enough flexibility to align the components, as necessary, to be able to image the test masses/mirrors in all the cavities. I'll set it up for MC2 tomorrow.

 

Attachment 1: GigE_setup.jpg
Attachment 2: GigE_setup_top_view.jpg
  14661   Mon Jun 10 22:22:19 2019   Milind   Update   Cameras   Steps to interact with GigE

Steps to take snapshots using GigE at different exposures [Instructions for Kruthi]:

  1. Set up C1-CAM-ETMX.ini (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini) appropriately. The parameter Number of Snapshots determines how many snapshots will be taken at any given exposure. Set Name Overlay, Time Overlay, Calculation Overlay, Calculations (if using very low exposure values) and Auto Exposure to False. Ensure that the IP address of the camera in use matches the one in the configuration file.
  2. Launch a server using the following commands (as described in elog 14649)
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python camera_server.py -c C1-CAM-ETMX.ini
  3. Open another terminal in the same directory and then run the following command
    1. python exposure_variation.py --minval <minval> --maxval <maxval> --step <step> where
      1. minval: lower bound of range of exposure values, defaults to 150
      2. maxval: upper bound of range of exposure values, defaults to 100000
      3. step: step size of variation in the range [minval, maxval], defaults to 2000

The python script takes in the above parameters and then takes snapshots, setting the exposure to values starting at minval and going up to maxval, incrementing by step each time. This uses a simple for loop and is nothing elaborate (a sketch of the idea is given below).
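A sketch of the idea behind that loop, using the caput command-line tool on the channels from the earlier entries (this is only an illustration, not the actual exposure_variation.py; the 2 s settle time is a guess):

  import subprocess
  import time

  def sweep_exposures(minval=150, maxval=100000, step=2000, settle=2):
      """Step C1:CAM-ETMX_EXP from minval to maxval and trigger a snapshot at each value."""
      for exposure in range(minval, maxval + 1, step):
          subprocess.call(['caput', 'C1:CAM-ETMX_EXP', str(exposure)])
          time.sleep(settle)
          subprocess.call(['caput', 'C1:CAM-ETMX_SNAP', '1'])

  sweep_exposures()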


A few unrelated updates:

  1. On a sidenote, I installed Sublime Text editor on rossa following the instructions at this site (check install using yum section). Further, I have also installed miniconda but did not set it up fully as I was in a rush and did not want to disturb any previously set up environment variables.
  2. I have cloned Gabriele's repository and am trying to get it to work on my system. As Gautam has pointed out that the end goal is to get things working on the lab machines, I will share a .yml file with the necessary environment details upon completion.
  3. I will upload details of how I am going to construct the two learning tasks that Rana, Gautam and I discussed in a day or two, including details of using simulation data for training in the absence of real data (until Kruthi is done setting up the GigE), which Gautam suggested I do to speed things up.
  14663   Tue Jun 11 00:25:05 2019   Kruthi   Update   Cameras   GigE setup

[Kruthi, Milind]

Today, with Milind's help, I installed the analog camera into the MC2 enclosure [picture attached]; but it is not yet focused. We replaced the bulky angular bracket with a simple one, this saved a lot of space inside and it's easier to align other components now. I'll finish setting it up tomorrow.


Telescope design for MC2: Instead of using two 3" long stackable lens tubes (SM2L30), we can use one 3" lens tube with an adjustable lens tube (SM2V10), as shown in the picture. This gives flexibility to change the focal plane distance by 1" and also reduces the overall length of the telescope from 9 inches to 6-7 inches. I decided to use two 150 mm biconvex lenses instead of a combination of 150 mm and 250 mm lenses, as the former results in a lower focal plane distance for a given distance between the lenses.

Specifications of current telescope system (for future reference):

Focal length of lenses used     150 mm & 150 mm
Distance between the lenses     1 cm - 2 cm (wasn't able to make a more accurate measurement)

With the above telescope, assuming the MC2 mirror to be at a distance of approx 75 cm, the focal plane distance will range from 7.9 cm to 8.1 cm (a quick check of these numbers is sketched below). Using the adjustable lens tube, we can then make the fine adjustment.
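A quick check of those numbers, applying the thin-lens equation sequentially to the two 150 mm lenses (all distances in cm; this is only a sanity check of the quoted 7.9-8.1 cm range):

  def image_distance(f, u):
      """Thin lens 1/u + 1/v = 1/f; u > 0 for a real object, u < 0 for a virtual object."""
      return 1.0 / (1.0 / f - 1.0 / u)

  def focal_plane(separation, f1=15.0, f2=15.0, object_distance=75.0):
      v1 = image_distance(f1, object_distance)   # image formed by the first lens
      u2 = separation - v1                       # virtual object for the second lens
      return image_distance(f2, u2)

  print(focal_plane(1.0), focal_plane(2.0))      # ~8.1 cm and ~7.9 cm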

Attachment 1: MC2_analog_setup.jpg
Attachment 2: telescope.pdf
  14665   Wed Jun 12 02:15:50 2019   Kruthi   Update   Cameras   GigE setup

[Koji, Kruthi]

Yesterday, Koji helped me clean all the optics that are being used for the setup. We tried aligning the cameras with the previous configuration we had, but after connecting the analog camera cables there wasn't much room to align the beam splitter. Today, I tried a different configuration and tested the alignment of analog camera, GigE, beam splitter and the mirror using a laser beam [pictures attached]. But the MC2 isn't locked to test if the whole setup is actually aligned with the mirror inside the vacuum. 

Also, with this setup, just by using posts of different lengths with the middle 90º-post-clamp, we will be able to move all the components. This way, we can easily image the beam spot in all the cavities.

Attachment 1: MC2_GigE_setup.jpg
  14666   Wed Jun 12 21:55:34 2019   Kruthi   Update   Cameras   GigE setup

I'm attaching a picture of the screen. I just positioned the enclosure by turning it a bit and I suppose we can see the mirror inside the vacuum now (the MC2 is still not locked). 

Quote:

[Koji, Kruthi]

Yesterday, Koji helped me clean all the optics that are being used for the setup. We tried aligning the cameras with the previous configuration we had, but after connecting the analog camera cables there wasn't much room to align the beam splitter. Today, I tried a different configuration and tested the alignment of analog camera, GigE, beam splitter and the mirror using a laser beam [pictures attached]. But the MC2 isn't locked to test if the whole setup is actually aligned with the mirror inside the vacuum. 

Also, with this setup, just by using posts of different lengths, we will be able to image the beam spot in all the cavities.

 

Attachment 1: MC2_analog_pic.jpg
  14667   Wed Jun 12 22:02:04 2019   Milind   Update   Cameras   Simulation enhancements

Today, Rana asked me to work on improving the simulations based on the ideas we discussed last week. As of the previous elog, the simulation accommodated only

  1. Simulation of Gaussian beam spot.
  2. Arbitrary motion.

Today, I added the simulation of point scatterers.

What?

The image on the sensor (camera) is produced in roughly the following steps.

  1. Motion of the Gaussian beam on the optic (X,Y coordinates) which is what has been simulated so far.
  2. Reflection from the surface of the optic which can be modeled using knowledge of the BRDF has not been included as of this elog as I wish to do a little more reading before doing so.
  3. Reflection from point scatterers (dust particles burnt into the optic surface by the laser and so forth) which are characterised as peaks (impulses) in the TIS vs position plot. The laser beam is incident nearly normally on the optic and this behaviour is independent of the angle of observation. This is what has been added to the simulation.

How?

  1. Increased the frame resolution to 720 x 480.
  2. Defined an array of the same size and set at most "num_scatter" points at random positions to values drawn uniformly between 1 and "scatter_amp" + 1, where scatter_amp is non-negative.
  3. Multiplied the resulting array by the Gaussian beam. The motivation was to imitate the bright specks seen on various camera feeds in the lab. Physically, this also implies normal incidence and normal observation, which is not the real case at all; I shall add these features in a day or two. (A minimal sketch of this model follows this list.)
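A minimal sketch of this scatterer model, with the parameter names taken from the description above; the rest (e.g. the Gaussian beam array itself) is assumed:

  import numpy as np

  def apply_point_scatterers(beam, num_scatter=50, scatter_amp=5.0, rng=np.random):
      """Multiply a Gaussian beam image by a gain map with bright specks at random pixels."""
      gain = np.ones_like(beam, dtype=float)
      rows = rng.randint(0, beam.shape[0], size=num_scatter)
      cols = rng.randint(0, beam.shape[1], size=num_scatter)
      gain[rows, cols] = rng.uniform(1.0, scatter_amp + 1.0, size=num_scatter)
      return beam * gain       # at most num_scatter specks (random positions may repeat)

  # frame = apply_point_scatterers(gaussian_beam)   # gaussian_beam: 480x720 array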

Herewith, in Attachments #1, #2 and #3, I am attaching videos obtained by varying the scattering amplitude and the number of scattering points in a vain attempt to reproduce this data. I shall work more on this simulation on Friday.

 


Scripting stuff:

  1. Previous elogs detail how to take gige images at various exposure times. I am still waiting on Kruthi to use the script.
  2. Tomorrow I shall work on the scripting software to interact with the GigE, take video for a fixed duration, etc. I shall also begin working on a script to autolock the PMC based on what Rana showed me on Monday. I will also take a look at the contents of this elog and try to pick up from there. I hope to make significant progress by the next lab meeting.

Neural network stuff:

GANs for simulation:

  1. Other than putting the physics into simulation i.e the first portion of this elog, GANs can be trained to generate images similar to the original data. I am unfamiliar with training GANs and the various tricks that are used specifically for them. I will do a bit of reading and make an update by Friday. As of now, the data I plan to use is this and I will train it using the GTX 1060 on my machine.

Networks for beam tracking:

  1. I will use the architectures suggested in this work with a few modifications. I will use MSE loss function, Adam optimizer and my local GPU for training.
Attachment 1: simulated_motion0.mp4
Attachment 2: simulated_motion0.mp4
Attachment 3: simulated_motion0.mp4
  14668   Thu Jun 13 14:28:46 2019   rana   Update   Cameras   GigE setup

don't need to lock - make sure the 4 OSEMs are centered on the camera field just as we have for the arm cavity mirrors

Quote:

I'm attaching a picture of the screen. I just positioned the enclosure by turning it a bit and I suppose we can see the mirror inside the vacuum now (the MC2 is still not locked). 

 

  14671   Thu Jun 13 21:29:52 2019   Milind   Update   Cameras   Steps to interact with GigE

As directed by Gautam, I have set up one script, interact.py (at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/interact.py), to perform the following two tasks:

  1. View the GigE feed for a fixed period of time.
  2. Record the GigE feed for a fixed amount of time.

 

Steps to view GigE feed for a fixed amount of time:

  1. Run the following commands in the terminal to navigate to the concerned directory and then view the feed
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python interact.py --path_to_config <path_config> --mode 0 --view_time <viewing_time>, where
      1. path_config: full path to configuration file, defaults to /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini if --path_to_config is not used
      2. viewing_time: time in seconds for which the feed is to be displayed. The server is closed  after this time and the window freezes and can be manually closed.
    3. Exiting the feed in between: the script terminates automatically after the specified time. To terminate the feed earlier, close the window manually using the x icon at the top right; this makes sure that the server is correctly closed. If closed using Ctrl-C in the terminal, the server is left running and any attempt to unwittingly set up another results in an error (see Attachment #1). In this case, the server and client processes need to be identified manually and killed. I have used the following steps:
      1. ps -eaf | grep server, then identify the PID for the python camera_server.py process
      2. kill PID
      3. similarly for the client file

Steps to record the GigE feed for a fixed amount of time:

I tried to look for elegant solutions that wouldn't require editing the code that Jon wrote and stumbled upon this useful bit of information but ended up deciding that it was just easier to change the camera_client_movie.py (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_client_movie.py). It can still be run as previously described, where video recording is terminated by using Ctrl-C. Steps to record for a fixed period of time are

  1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
  2. python interact.py --path_to_config <path_config> --mode 1 --save_time <recording_time> --file_name filename, where
    1. path_config: full path to configuration file, defaults to /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini if --path_to_config is not used
    2. recording_time: time in seconds for which the feed is to be saved. No video is displayed during this time.
    3. filename: full path to the file where the video is to be saved. Overwrites any existing files.

I'll make aliases for these to make the whole process more user friendly. I'm halting this for now and will discuss what else needs to be done once Gautam gets back.


Regarding the autolocker: I spoke to Aaron today and as he is in tomorrow, I'll ask him about the burt files and the ideal configuration.

I'm also starting with GANs now.

 

 

Attachment 1: terminal_error_server.pdf
  14674   Fri Jun 14 00:40:33 2019   Kruthi   Update   Cameras   GigE setup

Today, I tried aligning it further; I'm attaching a picture of it. We are not able to see all the 4 OSEMs yet. In the reference picture I had taken, before taking off the previous analog setup, the OSEMs are not seen. So, I don't really understand what the other 2 spots seen on the current screen are. Are they actually OSEMs?

I need a laptop next to MC2 so that I can have a look at it and make further alignments, so I tried accessing the GigE attached to the telescope using Paola. The pylon app on it throws an error a few seconds after running it in continuous shot mode and disconnects the GigE; everything works fine on Rossa though. I'll put up further details soon.

Quote:

don't need to lock - make sure the 4 OSEMs are centered on the camera field just as we have for the arm cavity mirrors

Quote:

I'm attaching a picture of the screen. I just positioned the enclosure by turning it a bit and I suppose we can see the mirror inside the vacuum now (the MC2 is still not locked). 

 

 

Attachment 1: Reference_image_taken_with_previous_analog_camera_setup.jpeg
Attachment 2: MC2_image.JPG
  14676   Sat Jun 15 00:03:26 2019   Kruthi   Update   Cameras   GigE setup

The analog camera is aligned and we are able to see all 4 OSEMs (pictures attached). Due to secondary reflection from the beamsplitter (BS1-1064-33-2037-45S), when the MC2 is locked we get a ghost image of the beam spot along with the primary image.


The pylon app on Paola was reporting an error saying "0xE1000014: The buffer was incompletely grabbed". I followed the instructions given on this site and changed the 'Packet Size' to 1500 and the 'Inter-Packet Delay' parameter to a value greater than 20,000 (µs). This did the trick and I was able to use the continuous shot mode without any interruption. I'm attaching a picture of MC2 that I captured using the GigE.

Quote:

Today, I tried aligning it further; I'm attaching a picture of it. We are not able to see all the 4 OSEMs yet. In the reference picture I had taken, before taking off the previous analog setup, the OSEMs are not seen. So, I don't really understand what the other 2 spots seen on the current screen are. Are they actually OSEMs?

I need a laptop next to MC2, so that I can have a look at it and make further alignments. So, I tried accessing the GigE attached to the telescope using Paola. The pylon app in it, throws an error, few seconds after running it in continuous shot mode, and disconnects the GigE; everything works fine on Rossa though. I'll put up further details soon.

Quote:

don't need to lock - make sure the 4 OSEMs are centered on the camera field just as we have for the arm cavity mirrors

Quote:

I'm attaching a picture of the screen. I just positioned the enclosure by turning it a bit and I suppose we can see the mirror inside the vacuum now (the MC2 is still not locked). 

 

 

 

Attachment 1: mc2_GigE.pdf
Attachment 2: MC2_analog.jpeg
Attachment 3: MC2_analog_OSEMs.jpeg
  14678   Mon Jun 17 14:36:13 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

Begun setting up an environment (as mentioned before, on my local machine) and scripts to run experiments with Convolutional networks for beam tracking. All code has been pushed to this folder in the GigEcamera repository. I am presently looking for pre-processing techniques for the video which go beyond the usual "Crop the images! Normalize pixel values! Convert to Grayscale!".

Quote:

Networks for beam tracking:

  1. I will use the architectures suggested in this work with a few modifications. I will use MSE loss function, Adam optimizer and my local GPU for training.

 

  14682   Tue Jun 18 22:54:59 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

Worked further on this. I skimmed through a few resources to look for details of what pre-processing can be done; here are some of the useful things I found during today's reading (I am planning to convert all these resources, particularly those I come across for GANs, into either a README on the repo or a wiki soon). The work I skimmed through today mostly pointed to the use of a median filter for pre-processing, if any is to be done. I am presently using the Sequential() API in Keras to set up the neural network; I will train it tomorrow.
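A sketch of the kind of pre-processing discussed above (median filter plus the usual crop / grayscale / normalisation) before frames go into the network. The crop window and filter size are placeholders:

  import cv2
  import numpy as np

  def preprocess(frame, crop=(slice(100, 300), slice(200, 400)), ksize=5):
      """Grayscale, median filter, crop and scale a BGR frame to [0, 1]."""
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      filtered = cv2.medianBlur(gray, ksize)        # kernel size must be odd
      return filtered[crop].astype(np.float32) / 255.0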

Quote:

Begun setting up an environment (as mentioned before, on my local machine) and scripts to run experiments with Convolutional networks for beam tracking. All code has been pushed to this folder in the GigEcamera repository. I am presently looking for pre-processing techniques for the video which go beyond the usual "Crop the images! Normalize pixel values! Convert to Grayscale!".

 

  14694   Tue Jun 25 00:25:47 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

In the previous meeting, Koji pointed out (once again) that I should determine whether the displacement values and frames are synchronized before training a network. Pooja did the following last time. Koji also suggested that I first predict the motion (a series of x and y coordinates) and then slide the resulting plots around until I get the best match with the original motion. This is not possible with a neural-network-based approach, however, as the network learns exactly what you show it: it will learn any mismatch between the labels and the frames and predict exactly that. Therefore, I came up with what Koji described as a "hacky" method to achieve the same thing using the OpenCV work described previously in this elog (the only addition being the application of a mask to block out the OSEMs and work only with the beam spot).

Hacky technique to sync frames and labels:

  1. I ran the OpenCV algorithm on the data to obtain the plot of predicted motion in Attachment #2. As is evident, the predicted motion is only an approximation of the actual motion and also displays a shift. However, a plot of the Fourier transform of the signal (see Attachment #1) shows that the components present are the same. The predominant frequency component is 0.22 Hz rather than 0.2 Hz as stated by Pooja in her elog; I wonder if this is of any consequence. This predicted motion can therefore be slid around until it overlaps with the applied sinusoidal dither signal "well".
  2. Defining "well": I computed an error signal as the difference between the predicted signal and the actual motion, with each signal normalized by subtracting the mean and then dividing the resulting signal by the maximum value (see Attachment #3). The lower the power of the resulting signal, the better the synchronization of the predicted and actual signals (a minimal sketch of this sliding procedure is given after this list). Note: to achieve this overlap, data points are removed from either the start or the end of the signals, which effectively reduces the number of data points available for training by 36 points (see Attachment #4; positive and negative shifts merely indicate whether the predicted signal is being moved right or left).
  3. Attachments #5 and #6 show the results of shifting the data by 36 samples. It is evident that there is far greater overlap between the prediction and the actual values.
  4. Well, what now? I will use the mapping between labels and frames obtained by the above steps to train a neural network.
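A minimal sketch of the sliding procedure in (1)-(2): normalise both series, try a range of shifts, and keep the one that minimises the power of the residual (both series are assumed to be equal-length 1-D arrays):

  import numpy as np

  def normalise(x):
      x = np.asarray(x, dtype=float)
      x = x - x.mean()
      return x / np.abs(x).max()

  def best_shift(actual, predicted, max_shift=50):
      """Shift (in samples) of the predicted series that minimises the residual power."""
      actual, predicted = normalise(actual), normalise(predicted)
      powers = {}
      for shift in range(-max_shift, max_shift + 1):
          if shift >= 0:
              a, p = actual[shift:], predicted[:len(predicted) - shift]
          else:
              a, p = actual[:shift], predicted[-shift:]
          powers[shift] = np.mean((a - p) ** 2)
      return min(powers, key=powers.get)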

[Koji, Milind - 21/06/2019]

  1. Well, the above is fine, but why is contour detection really necessary? Why not take a weighted sum of all the pixel values (in a rectangular region obtained, say, after blocking out the OSEMs) to see what the centroid motion is? Black areas (0 pixel intensity) will not contribute to this sum anyway. Perhaps that can be used for the sliding instead of the above (fallible!) approach, especially for cases in which the beam "spot" is just a collection of random speckles?
    1. Something like this was done by Pooja where she computed the sum of pixel intensities in a rectangular region containing the beam spot. However, she did this for very noisy data and observed intensity variation at a frequency double that of the applied signal.
    2. Results of applying a median filter and doing the same are presented in Attachment #7. Clearly, they can't be used for this sliding task.
    3. Results of computing the weighted sum of all the coordinates (with the pixel intensities as weights) are presented in Attachment #8 (a small sketch of this computation is given after this list). Clearly, for this data and for this task, the contour approach is the better method. Further, these results serve to prove Rana's point that such simple, unsophisticated, naive approaches will not produce the desired results, and they shall therefore be presented in this very context in the report that is due.
  2. The contour detection technique does not work if the beam spot is just a collection of speckles. In that case Koji suggested that we use a bounding convex hull instead of a contour. Alternately, for a bunch of speckles I can perform dilation to reduce it to the same problem.
  3. Using gpstime for time stamping: To determine the absolute time which a frame is grabbed. However, the time between the time being recorded and grabbing of frame needs to be determined for this which should be doable using linux/python commands.
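A small sketch of the intensity-weighted centroid from (1), computed over a rectangular region with the pixel values as weights (no contour detection):

  import numpy as np

  def weighted_centroid(frame, region=(slice(None), slice(None))):
      """Intensity-weighted centroid (cx, cy) of a grayscale frame over a given region."""
      patch = frame[region].astype(float)
      ys, xs = np.indices(patch.shape)
      total = patch.sum()
      return (xs * patch).sum() / total, (ys * patch).sum() / total
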
Quote:

Worked further on this. I skimmed through a few resources to look for details of what pre-processing can be done. Here (am planning to convert all these resources, particularly those I come across for GANs into either a README on the repo or a Wiki soon) are some of the useful things I found during today's reading. The work I skimmed through today mostly pointed to the use of a median filter for pre-processing, if any is to be done. I am presently using the Sequential() API in Keras to set up the neural network. I will train it tomorrow.

 


Upcoming work (in the order of priority):

  1. Data acquisition: With the mode cleaner being locked and Kruthi having focused on to the beam spot, I will obtain data for training both GANs and the convolutional networks. I really hope that some of the work done above can be extended to the new data. Rana suggested that I automate this by writing a script which I will do after a discussion with Gautam tomorrow.
  2. Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewise predictions and hence did some of the work described above; however, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output the uncertainty in its predictions. I am not sure how this can be done, but I will look into it.
  3. Simulation:
    1. Putting the physics in: previously, I worked on adding point scatterers. I shall add the effect of surface roughness and incorporate the BRDF next. Just as Gautam did, Rana also recommended that I go through Hiro Yamamoto's work to improve my understanding of this.
    2. GANs: I will put together a readme (which I will turn into a wiki later) for all the material that I am using to develop my ideas about GAN training. Currently, my understanding of GANs is that they take as input noise vectors which are fed to the generative networks which then produce the fakes. This clearly isn't the only way to do it as GANs are used for several applications such as image generation from text. I am referring to these papers to set up the necessary architecture.
  4. PMC autolocker: I will convert the existing autolocker script to python. Rana also suggested that it would be interesting to see what the best settings of the hyperparameters would be to lock the PMC the fastest. I will write a script to do that and plot a 3D surface plot of the average time taken to lock the PMC as a function of the PZT scan speed and the Servo gain to determine the optimal setting of these "hyperparameters".
  5. Cleaning up/ formalizing code: Rana pointed out that any code that messes with channel values must return them to the original settings once the script is finished running. I have overlooked this and will add code to do this to all the files I have created thus far. Further, while most of my code is well documented and frequently pushed to Github, I will make sure to push any code that I might have missed to github.
  6. Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for setting up a dedicated machine for running the camera server and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I talk to him about somehow figuring out the hardware to ensure that the GigE clock is the same as the rest of the system.

 

Attachment 1: Spectra.pdf
Attachment 2: normalised_comparison_y.pdf
Attachment 3: residue_normalised_y.pdf
Attachment 4: error_power_sliding.pdf
Attachment 5: normalised_comparison_y.pdf
Attachment 6: residue_normalised_y.pdf
Attachment 7: intensum.pdf
Attachment 8: centroid.pdf
  14695   Tue Jun 25 11:54:47 2019   Kruthi   Update   Cameras   GigE

Turns out, focusing the GigE is actually a bit tricky. With pylon, every time I change the exposure or the focus I run into the error I mentioned earlier in one of my elogs, so I tried using the python scripts to interact with the GigE. But whenever I try to change the focal plane distance by rotating the lens coupler, the ethernet cable connection becomes loose and the camera server needs to be relaunched. Also, every time we want to change the distance between the lenses, the telescope needs to be dismantled and refocused. I'll try to come up with a better telescope design for this.

Yesterday, I focused the GigE using a low exposure time and a small iris aperture, to make sure that we are actually seeing a sharp image of the beam spot. I'm attaching a picture of the beam spot taken while focusing it; unfortunately, I forgot to take a picture after I had focused it completely. I'm also attaching a picture of the final setup for future reference.


Last night, Rana asked me to lock the MC2. I figured out that the PSL shutter was closed; I just opened it and was able to see the beam spot on the analog camera screen.

 

Attachment 1: MC2_GigE.pdf
MC2_GigE.pdf
Attachment 2: Cameras_final_setup.JPG
Cameras_final_setup.JPG
  14697   Tue Jun 25 22:14:10 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

I discussed this with Gautam and he asked me to come up with a list of signals that I would need for my use and then design the data acquisition task at a high level before proceeding. I'm working on that right now. We came up with a very elementary sketch of what the script will do-

  1. Check that the MC is locked.
  2. Choose an exposure value.
  3. Choose a frequency and amplitude for the applied sinusoidal dither (see the warning from Gabriele below).
  4. Apply the sinusoidal dither to the optic.
  5. Timestamping: record the GPS time, the instantaneous channel values and a frame. These frames can later be put together into a sequence and a network can be trained on this. (NEED TO COME UP WITH SOMETHING CLEVERER THAN THIS!) A rough sketch of what this acquisition loop might look like follows this list.
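
A rough sketch of such a dummy script (my assumption of the structure, not the final thing: the lock-check channel and the frame grab are placeholders, and the excitation would really be injected with awg/cdsutils rather than being stubbed out):

# Rough sketch of the acquisition loop only. The lock-check channel and the
# frame grab are placeholders; the dither would be injected with awg.
import time
import epics

MC_LOCK_CH = 'C1:IOO-MC_TRANS_SUM'   # placeholder lock indicator
QPD_CHANNELS = ('C1:IOO-MC_TRANS_PIT_ERR', 'C1:IOO-MC_TRANS_YAW_ERR')

def mc_is_locked(threshold=1e4):
    """Crude lock check: transmitted power above some threshold."""
    return epics.caget(MC_LOCK_CH) > threshold

def grab_frame():
    """Stub: the real version would pull a frame from the GigE camera client."""
    return None

def acquire(duration_s, sample_dt=0.04):
    """Record (time, QPD readings, frame) tuples for duration_s seconds."""
    if not mc_is_locked():
        raise RuntimeError('MC not locked, aborting')
    records = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        qpd = tuple(epics.caget(ch) for ch in QPD_CHANNELS)
        records.append((time.time(), qpd, grab_frame()))
        time.sleep(sample_dt)
    return records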

Tomorrow I will try to prepare a dummy script for this before the meeting at noon. Gautam asked me to familiarize myself with awg and cdsutils (I have already used ezca) in order to write the script. This will also help me with the following two tasks:

  1. IFO test scripts that Rana asked me to work on a while ago
  2. The PMC autolocker scripts that Rana asked me to work on
Quote:
 

Upcoming work (in the order of priority):

  1. Data acquisition: With the mode cleaner being locked and Kruthi having focused on to the beam spot, I will obtain data for training both GANs and the convolutional networks. I really hope that some of the work done above can be extended to the new data. Rana suggested that I automate this by writing a script which I will do after a discussion with Gautam tomorrow.

 

I got to speak to Gabriele about the project today. He suggested that if I use Rana's memory-based approach, I should be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time, and that if I use the frame-wise approach, I should try to incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible, something that Rana and Gautam emphasized as well.

I am pushing the code that I wrote for

  1. Kruthi's exposure variation - CCD calibration experiment
  2. modified camera_client_movie.py code (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)
  3. interact.py (to interact with the GigE in viewing or recording mode) (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)

to the GigEcamera repository.

 

Gautam also asked me to look at Jigyasa's report and elog 13443 to come up with the specs of a machine that would accommodate a dedicated camera server.

 

Quote:
 
  1. Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewize predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainity in the predictions. I am not sure how this can be done, but I will look into it.
  2. Cleaning up/ formalizing code: Rana pointed out that any code that messes with channel values must return them to the original settings once the script is finished running. I have overlooked this and will add code to do this to all the files I have created thus far. Further, while most of my code is well documented and frequently pushed to Github, I will make sure to push any code that I might have missed to github.
  3. Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for setting up a dedicated machine for running the camera server and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I talk to him about somehow figuring out the hardware to ensure that the GigE clock is the same as the rest of the system.

 

 

  14698   Tue Jun 25 23:52:37 2019 MilindUpdateCamerasSimulation enhancements

Yesterday, Rana asked me to look at Hiro Yamamoto's docs on the DCC to improve the simulation. I'm doing a first pass (i.e., just skimming through to see if they're relevant; I will go through them more carefully soon) and putting up stuff here for future reference. @Kruthi's help much appreciated!

  14702   Wed Jun 26 19:12:00 2019 KruthiUpdateCamerasGigE

The GigE is focused now (judged by eye) and I have closed the lid. I'm attaching a picture of the MC2 beam spot, captured using GigE at an exposure time of 400µs.

What was the solution to the flaky video streaming during the alignment process?

-> I think the issue was with either the poor wireless network connection or the GigE-PoE ethernet cable.

Quote:

Turns out, focusing the GigE is actually a bit tricky. With pylon, everytime I change the exposure or the focus, I'm running into the error I had mentioned earlier in one of my elogs; so I tried using the python scripts to interact with the GigE. But whenever I try to change the focal plane distance by rotating the lens coupler, the ethernet cable connection becomes loose and the camera server needs to be relaunched every now and then. Also, everytime we want to change the distance between the lenses, the telescope needs to be dismantled and refocused again. I'll try to come up with a better telescope design for this.

Yesterday, I had focused the GigE using a low exposure time and small aperture of iris, to make sure that we are actually seeing a sharp image of the beam spot. I'm attaching a picture of the beam spot I had clicked while focusing it, unfortunately, I forgot to take a picture after I had focused it completely. I'm also attaching a picture of the final setup for future reference. 


Yesterday night, Rana asked me to lock the MC2. I figured that the PSL shutter was closed; I just opened it and was able to see the beam spot on the analog camera screen.

Attachment 1: MC2_GigE_image.pdf
MC2_GigE_image.pdf
  14703   Wed Jun 26 20:45:03 2019 gautamUpdateCamerasField of view options

For the beam spot position tracking, I am wondering if there is any benefit to going for a wider field of view and getting the OSEMs in the frame? It may provide some "anchor points" against which whatever algorithm we use can calibrate the spot position. But there are also several point scatterers visible in the current view, and perhaps tracking the scattered intensity from these point scatterers as the Gaussian beam profile moves over them serves the same function? I don't know of a good solution for a "switchable" field of view configuration in the already cramped camera enclosure though.

Also, I think it may be useful to have a cron job take a picture of MC2 and archive it (once a week? or daily?) to have some long term diagnostic of how the scattered light received by the camera changes over several months.

Quote:

The GigE is focused now and I have closed the lid. I'm attaching a picture of the MC2 beam spot, captured using GigE at an exposure time of 400µs

  14706   Thu Jun 27 20:48:22 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

And finally, a network is trained!

Result summary (TLDR :-P): No memory was used. Model trained. Results were garbage. Will tune hyperparameters now. Code pushed to GitHub.

 

More details of the experiment:

Aim:

  1. To train a network to check that training occurs and get a feel for what the learning might be like.
  2. To set up the necessary framework to perform multiple experiments and record results in a manner facilitating comparison.
  3. To track beam spot motion.

What I did:

  1. Set up a network that learns a framewise mapping as described here (a minimal sketch of such a model follows this list).
  2. Training data: 0.9 x 1791 frames. Validation data: 0.1 x 1791 frames. Test data (only prediction): all the 1791 frames
  3. Hyperparameters: Attachment #1
  4. Did no tuning of hyperparameters.
  5. Compiled and fit the models and saved the results.
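
For reference, a minimal Keras sketch of this kind of framewise model; the layer counts and filter sizes below are my guesses rather than the exact trained architecture, but the hyperparameters follow Attachment #1:

# Minimal sketch (layer sizes are guesses, hyperparameters from the readme):
# a few conv/pool blocks, a 64-unit dense layer with dropout, linear output.
from tensorflow.keras import layers, models, optimizers

def build_framewise_model(input_shape=(350, 350, 1), n_outputs=1):
    model = models.Sequential([
        layers.Conv2D(8, 3, activation='relu',
                      kernel_initializer='glorot_uniform',
                      input_shape=input_shape),
        layers.MaxPooling2D(2),
        layers.Conv2D(16, 3, activation='relu',
                      kernel_initializer='glorot_uniform'),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dropout(0.8),            # dropout as given in the readme
        layers.Dense(n_outputs, activation='linear'),
    ])
    model.compile(optimizer=optimizers.Adam(1e-4), loss='mse')
    return model

# model.fit(frames, positions, batch_size=32, epochs=50, validation_split=0.1)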

 

What I saw

  1. Attachment #2: data fed to the network after pre-processing - median blur + crop
  2. Attachment #3: learning curves.
  3. Attachment #4: true and predicted motion. Nothing great.

What I think is going wrong-

  1. No hyperparameter tuning. This was only a first pass but is being reported as it will form the basis of all future experiments.
  2. Too little data.
  3. Maybe wrong architecture.

Well, what now?

  1. Tune hyperparameters (try to get the network to overfit on the data and then test on that; we'll then know for sure that all we probably need is more data?)
  2. Currently the network has around 200k parameters. Maybe reduce that.
  3. Set up a network that takes as input a stack of frames (one example corresponding to one forward pass) and predicts a vector of position values that can be used as continuous data.
Quote:

I got to speak to Gabriele about the project today and he suggested that if I am using Rana's memory based approach, then I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time and that if I use the frame wise approach I try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. Something that Rana and Gautam emphasized as well.

 
Quote:
 
  1. Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewize predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainity in the predictions. I am not sure how this can be done, but I will look into it.

 

 

Attachment 1: readme.txt
Experiment file: train.py
batch_size: 32
dropout_probability: 0.8
eta: 0.0001
filter_size: 19
filter_type: median
initializer: xavier
num_epochs: 50
activation_function: relu
dense_layer_units: 64
... 10 more lines ...
Attachment 2: frame0.pdf
frame0.pdf
Attachment 3: Learning_curves.png
Learning_curves.png
Attachment 4: Motion.png
Motion.png
  14708   Sat Jun 29 03:11:18 2019 KruthiUpdateCamerasCCD Calibration

Finding the gain of the photodiode: The three-position rotary switch of the photodiode being used (PDA520) wasn't working, so I determined its gain by making a comparative measurement between the Ophir power meter and the photodiode. The photodiode has a responsivity of 0.34 A/W at 1064 nm (obtained from the responsivity curve in the spec sheet using curve-digitizing software). Using the following equation, I determined the gain setting, which turned out to be 20 dB.

Transimpedance\ gain\ (V/A) = \frac{Photodiode\ reading\ (V)}{Ophir\ reading\ (W) \times Responsivity\ (A/W)}

Setup: Here a 1050 nm LED (the closest we have to 1064 nm) is used as the light source instead of a laser, to eliminate coherence effects that might affect our radiometric calibration. The LED is placed in a box with a hole of diameter 5 mm (aperture angle approx. 40 degrees). Suitable lenses are used to focus the light onto a piece of white paper, which is fixed at an arbitrary angle and serves as a Lambertian scatterer. To make a comparative measurement between the photodiode (PDA520) and the GigE, we need to account for their different sensor areas, 8.8 mm (aperture diameter) and 3.7 mm x 2.8 mm respectively. This can be done either by using an iris with a common aperture so that both the photodiode and the GigE receive the same amount of light, or by calculating the power incident on the GigE from the power incident on the photodiode and the ratio of sensor areas (here we are using the fact that the power scattered by a Lambertian scatterer per unit solid angle is constant).

Calibration of GigE 152 unit: I took around 50 images, starting with an exposure time of 2000 µs and increasing in steps of 2000 µs, using the exposure_variation.py code. The code doesn't allow exposure times greater than 100 ms, so I took a few more images at higher exposures manually. From each image I subtracted a dark image (not in the sense of the usual CCD calibration, but simply an image at the same exposure time with the LED off). These dark images do the job of the usual dark frame + bias frame and also account for stray light. A plot of pixel sum vs exposure time is attached. From a linear fit to the unsaturated region, I obtained the slope and calculated the calibration factor.

Equations:   Power\ (P) = \frac{Photodiode\ reading\ (V)}{Transimpedance\ gain\ (V/A) \times Responsivity\ (A/W)}        Calibration\ factor\ (CF) = \frac{P}{slope}
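
A small sketch of how these numbers combine, following the area-ratio approach described in the setup paragraph (a cross-check of the arithmetic, not the actual analysis script; values are the ones quoted in this entry):

# Cross-check of the calibration arithmetic (not the analysis code itself).
import numpy as np

R_pd   = 0.34        # photodiode responsivity at ~1064 nm [A/W]
G_tia  = 1e6         # transimpedance gain at the 20 dB setting [V/A]
A_gige = 3.7 * 2.8   # GigE sensor area [mm^2]
A_pd   = 61.36       # PDA520 active area [mm^2]

def calibration_factor(v_pd, exposure_us, pixel_sum):
    """CF = (power incident on the GigE) / (slope of pixel sum vs exposure)."""
    p_pd   = v_pd / (G_tia * R_pd)                     # power on the photodiode [W]
    p_gige = p_pd * (A_gige / A_pd)                    # scale by the area ratio [W]
    slope  = np.polyfit(exposure_us, pixel_sum, 1)[0]  # [counts/us]
    return p_gige / (slope * 1e6)                      # [W*s/count]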

Result: CF = 1.91 x 10^-16 W-sec/count.  Update: I had used a wrong value for the area of the photodiode. Using 61.36 mm^2 as the area, I get 2.04 x 10^-15 W-sec/count.

I'll put up the uncertainties soon. I'm also attaching the GigE spectral response curve for future reference.

Attachment 1: calibration_setup.jpg
calibration_setup.jpg
Attachment 2: CCD_calibration_2.jpeg
CCD_calibration_2.jpeg
Attachment 3: GigE_spectral_response_curve.png
GigE_spectral_response_curve.png
Attachment 4: 152_calibration_plot.png
152_calibration_plot.png
  14710   Sun Jun 30 22:02:26 2019 MilindUpdateCamerasKeyed c1aux crate

I wanted to try out the unstick.py script on c1aux but kept running into timeout errors. I was also confronted by a blank GigE screen. Further, I couldn't telnet into c1aux using telnet c1aux as described here. Therefore, I went in and keyed the c1aux crate (1Y1).

  14714   Mon Jul 1 20:11:34 2019 MilindUpdateCamerasSimulation enhancements

Today, I read a lot more about the BRDF and modelling but could not make much headway on the implementation in the simulation. I've stopped for now and will take another crack at it tomorrow.

Quote:

Yesterday, Rana asked me to look at Hiro Yamamoto's docs on the DCC to improve the simulation. I'm performing a first pass (=> Just skimming through to see if they're relevant, I will go through them more carefully soon!) and putting up stuff here for future reference. @Kruthi's help much appreciated!

  14723   Wed Jul 3 23:53:38 2019 MilindUpdateCamerasdata for nns

Tried collecting data today. Was unable to keep the camera_server code running for any length of time as it threw segfaults. Will take a shot again tomorrow.

Quote:

The GigE is focused now (judged by eye) and I have closed the lid. I'm attaching a picture of the MC2 beam spot, captured using GigE at an exposure time of 400µs.

What was the solution to resolving the flaky video streaming during the alignment process????

-> I think, the issue was with either the poor wireless network conection or the GigE-PoE ethernet cable.

Quote:

Turns out, focusing the GigE is actually a bit tricky. With pylon, everytime I change the exposure or the focus, I'm running into the error I had mentioned earlier in one of my elogs; so I tried using the python scripts to interact with the GigE. But whenever I try to change the focal plane distance by rotating the lens coupler, the ethernet cable connection becomes loose and the camera server needs to be relaunched every now and then. Also, everytime we want to change the distance between the lenses, the telescope needs to be dismantled and refocused again. I'll try to come up with a better telescope design for this.

Yesterday, I had focused the GigE using a low exposure time and small aperture of iris, to make sure that we are actually seeing a sharp image of the beam spot. I'm attaching a picture of the beam spot I had clicked while focusing it, unfortunately, I forgot to take a picture after I had focused it completely. I'm also attaching a picture of the final setup for future reference. 


Yesterday night, Rana asked me to lock the MC2. I figured that the PSL shutter was closed; I just opened it and was able to see the beam spot on the analog camera screen.

  14726   Thu Jul 4 18:19:08 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

The quoted elog has figures which indicate that the network did not learn (train or generalize) on the data used. This is a scary thing as (in my experience) it indicates that something is fundamentally wrong with either the data or the model, and learning will not happen regardless of how the hyperparameters are tuned. To check this, I ran the training experiment for nearly 25 hyperparameter settings (results here) with the old data and was able to successfully overfit the data. Why is this progress? Well, we know that we are on the right track and the task is now to reduce overfitting. Whether that will happen through more hyperparameter tuning, data collection or augmentation remains to be seen. See the attachments for more details.

Why is the fit so perfect at the start and bad later? That's because the first 90% of the test data is the training data I overfit to, and the remainder is the validation data that the network has not generalized well to.

Quote:

And finally, a network is trained!

Result summary (TLDR :-P) : No memory was used. Model trained. Results were garbage. Will tune hyperparameters now. Code pushed to github.

 

More details of the experiment:

Aim:

  1. To train a network to check that training occurs and get a feel for what the learning might be like.
  2. To set up the necessary framework to perform mulitple experiments and record results in a manner facilitating comparison.
  3. To track beam spot motion.

What I did:

  1. Set up a network that learns a framewise mapping as described in here.
  2. Training data: 0.9 x 1791 frames. Validation data: 0.1 x 1791 frames. Test data (only prediction): all the 1791 frames
  3. Hyperparameters: Attachment #1
  4. Did no tuning of hyperparameters.
  5. Compiled and fit the models and saved the results.

 

What I saw

  1. Attachment #2: data fed to the network after pre-processing - median blur + crop
  2. Attachment #3: learning curves.
  3. Attachment #4: true and predicted motion. Nothing great.

What I think is going wrong-

  1. No hyperparameter tuning. This was only a first pass but is being reported as it will form the basis of all future experiments.
  2. Too little data.
  3. Maybe wrong architecture.

Well, what now?

  1. Tune hyperparmeters (try to get the network to overfit on the data and then test on that. We'll then know for sure that all we probably need is more data?)
  2. Currently the network has around 200k parameters. Maybe reduce that.
  3. Set up a network that takes as input (one example corresponding to one forward pass)  a bunch of frames and predicts a vector of position values that can be used as continuous data).
Quote:

I got to speak to Gabriele about the project today and he suggested that if I am using Rana's memory based approach, then I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time and that if I use the frame wise approach I try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. Something that Rana and Gautam emphasized as well.

Quote:
 
  1. Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewize predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainity in the predictions. I am not sure how this can be done, but I will look into it.
Attachment 1: Motion.pdf
Motion.pdf
Attachment 2: Error.pdf
Error.pdf
Attachment 3: Learning_curves.pdf
Learning_curves.pdf
  14732   Sun Jul 7 21:59:28 2019 KruthiUpdateCamerasGhost image due to beamsplitter

The beam splitter (BS1-1064-33-2037-45S) that is currently being used has an antireflection coating on the second surface and a wedge of less than 5 arcmin; yet it leads to ghosting as shown in the figure attached (courtesy: Thorlabs). I'm also attaching its spec sheet I dug up on internet for future reference.

I came across pellicle beamsplitters, that are primarily used to eliminate ghost images. Pellicle beamsplitters have a few microns thick nitrocellulose layer and superimpose the secondary reflection on the first one. Thus the ghost image is eliminated. 

Should we go ahead and order them? (https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=898

https://www.edmundoptics.eu/c/beamsplitters/622/#28438=28438_s%3AUGVsbGljbGU1&27614=27614_d%3A%5B46.18%20TO%2077.73%5D)

Attachment 1: ghosting_schematic.png
ghosting_schematic.png
Attachment 2: Beamsplitter_spec.pdf
Beamsplitter_spec.pdf
  14734   Mon Jul 8 17:52:30 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

After the two earthquakes, I collected some data by dithering the optic and recording the QPD readings. Today, I set up scripts to process the data and then train networks on it. I have pushed all the code to GitHub. I attempted to train a bunch of networks on the new data to test that the code was alright, but quickly realised that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.

Training networks with memory

I set up a network that handles input volumes (stacks of frames) instead of individual frames. It still uses 2D convolution and not 3D convolution. I am currently training it on the new data. However, I was curious to see if it would provide any improvement over the results I put up in the previous elog. After a bit of hyperparameter tuning, I did get some decent results, which I have attached below. However, this is for Pooja's old data, which makes them, ah, not so relevant. Also, this testing isn't truly representative because the test data isn't entirely new to the network. I am going to train this network on the new data now with the following objectives (in this order):

  1. Train on data recorded at one frequency, generalize/ test on unseen data of the same frequency, large amplitude of motion
  2. Train on data recorded at one frequency, generalize/ test on unseen data of a different frequency, large amplitude of motion
  3. Train on data recorded at one frequency, generalize/ test on unseen data of the same/ a different frequency, small amplitude of motion
  4. Train on data at different frequencies and generalize/ test on data with a mixture of frequencies at small amplitudes - Gautam pointed out that the network would truly be superb (good?) if we can just predict the QPD output from the video of the beam spot when nothing is being shaken.

I hope this looks alright. Rana also suggested I try LSTMs today; I'll maybe code that up tomorrow. What I have in mind: a conv-layer encoder, flatten, followed by an LSTM layer (why not plain RNNs? Well, LSTMs handle vanishing gradients, so why the hassle).

Quote:

The quoted elog has figures which indicate that the network did not learn (train or generalize) on the used data. This is a scary thing as (in my experience) it indicates that something is fundamentally wrong with either the data or model and learning will not happen despite how hyperparameters are tuned. To check this, I ran the training experiment for nearly 25 hyperparameter settings (results here)with the old data and was able to successfully overfit the data. Why is this progress? Well, we know that we are on the right track and the task is to reduce overfitting. Whether, that will happen through more hyperparameter tuning, data collection or augmentation remains to be seen. See attachments for more details. 

Why is the fit so perfect at the start and bad later? Well, that's because the first 90% of the test data is  the training data I overfit to and the latter the validation data that the network has not generalized well to.

Attachment 1: Motion.pdf
Motion.pdf
  14735   Mon Jul 8 21:42:39 2019 ranaUpdateCamerasGhost image due to beamsplitter

you have to use a BS with a larger wedge angle (5 arcmin ~ 1 mrad) so that the beams don't overlap on the camera

  14741   Tue Jul 9 22:13:26 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

I received access today. After some incredible hassle, I was able to set up my repository and code on the remote system. Following this, Gautam wrote to Gabriele to ask which GPUs to use and whether there was a previously set up environment I could use directly. Gabriele suggested that I use pcdev2 / pcdev3 / pcdev11 as they have good GPUs. He also said that I could use source ~gabriele.vajente/virtualenv/bin/activate to get a virtualenv with tensorflow, numpy etc. preinstalled. However, I could not get that working, so I created my own virtual environment with the necessary tensorflow, keras, scipy, numpy etc. libraries at suitable versions. On ssh-ing into the cluster, it can be activated using source /home/millind.vaddiraju/beamtrack/bin/activate. How do I know everything works? Well, I trained a network on it! With the new data. Attached (see Attachment #1) is the prediction on completely new test data. Yeah, it's not great, but I got to observe the time it takes for the network to train for 50 epochs:

  1. On pcdev5 CPU: one epoch took ~1500s which is roughly 25 minutes (see Attachment #2). Gautam suggested that I try to train my networks on Optimus. I think this evidence should be sufficient to decide against that idea.
  2. On my GTX 1060: one epoch took ~30 s, which works out to about 25 minutes to train a network for 50 epochs.
  3. On pcdev11 GPU (Titan X I think): each epoch took ~16s which is a far more reasonable time.

Therefore, I will carry out all training on this machine from now on.

 


Note to self:

Steps to repeat what you did are:

  1. ssh in to the cluster using ssh albert.einstein@ssh.ligo.org as described here.
  2. activate the virtualenv as described above
  3. navigate to the code and run it.
Quote:

 I attempted to train a bunch of networks on the new data to test if the code was alright but realised quickly that, training on my local machine is not feasilble at all as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.

Attachment 1: predicted_motion_first.pdf
predicted_motion_first.pdf
Attachment 2: pcdev5_time.png
pcdev5_time.png
  14746   Wed Jul 10 22:32:38 2019 MilindUpdateCamerasConvolutional neural networks for beam tracking

I trained a bunch of networks today (around 25, to tune hyperparameters). They were all CNNs. They all produced garbage. I also looked at LSTM networks with CNN encoders (see this very useful link) and gave some thought to what kind of architecture we want to use and how to go about programming it (in Keras; I will use TensorFlow if I feel I need more control). I will code it up tomorrow after some thought and discussion. I am not sure if abandoning CNNs is the right thing to do, or if I should continue probing this with more architectures and tuning attempts. Any thoughts?

Right now, after speaking to Stuart (ldas_admin) I've decided on coding up the LSTM thing and then running that on one machine while probing the CNN thing on another.

 


Update on 10 July, 2019: I'm attaching all the results of training here in case anyone is interested in the future.

Quote:

I received access today. After some incredible hassle, I was able to set up my repository and code on the remote system. Following this, Gautam wrote to Gabriele to ask him about which GPUs to use and if there was a previously set up environment I could directly use. Gabriele suggested that I use pcdev2 / pcdev3 / pcdev11 as they have good gpus. He also said that I could use source ~gabriele.vajente/virtualenv/bin/activate to use a virtualenv with tensorflow, numpy etc. preinstalled. However, I could not get that working, Therefore I created my own virtual environment with the necessary tensorflow, keras, scipy, numpy etc. libraries and suitable versions. On ssh-ing into the cluster, it can be activated using source /home/millind.vaddiraju/beamtrack/bin/activate. How do I know everything works? Well, I trained a network on it! With the new data. Attached (see attachment #1) is the prediction data for completely new test data. Yeah, its not great, but I got to observe the time it takes for the network to train for 50 epochs-

  1. On pcdev5 CPU: one epoch took ~1500s which is roughly 25 minutes (see Attachment #2). Gautam suggested that I try to train my networks on Optimus. I think this evidence should be sufficient to decide against that idea.
  2. On my GTX 1060: one epoch took ~30s. Which is 25 minutes (for 50 epochs) to train a network.
  3. On pcdev11 GPU (Titan X I think): each epoch took ~16s which is a far more reasonable time.

Therefore, I will carry out all training only on this machine from now.

 


Note to self:

Steps to repeat what you did are:

  1. ssh in to the cluster using ssh albert.einstein@ssh.ligo.org as described here.
  2. activate virtualenv as descirbed above
  3. navigate to code  and run it.
Quote:

 I attempted to train a bunch of networks on the new data to test if the code was alright but realised quickly that, training on my local machine is not feasilble at all as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.

  14757   Sun Jul 14 00:24:29 2019 KruthiUpdateCamerasCCD Calibration

On Friday, I took images for different power outputs of the LED. I calculated the calibration factor as explained in my previous elog (plots attached).

Vcc (V) | PD reading (V) | Power on photodiode (W) | Power on GigE (W) | Slope (counts/µs) | Uncertainty in slope (counts/µs) | CF (W-sec/count)
16      | 0.784          | 2.31E-06                | 3.89E-07          | 180.4029          | 1.02882                          | 2.16E-15
18      | 0.854          | 2.51E-06                | 4.24E-07          | 207.7314          | 0.7656                           | 2.04E-15
20      | 0.92           | 2.71E-06                | 4.57E-07          | 209.8902          | 1.358                            | 2.18E-15
22      | 0.969          | 2.85E-06                | 4.81E-07          | 222.3862          | 1.456                            | 2.16E-15
25      | 1.026          | 3.02E-06                | 5.09E-07          | 235.2349          | 1.53118                          | 2.17E-15
Average CF: 2.14E-15

To estimate the uncertainty, I assumed an error of at most 20 mV in the photodiode reading (due to stray light or the difference in orientation of the GigE and photodiode). Combining this with the uncertainty in the slope from the linear fit, I expect an uncertainty of at most 4%. Note: I haven't accounted for the error in the responsivity value of the photodiode.

GigE sensor area: 10.36 mm^2
PDA520 area: 61.364 mm^2
Responsivity: 0.34 A/W
Transimpedance gain (at the 20 dB setting): 10^6 V/A +/- 0.1%
Pixel format used: Mono 8-bit

Johannes had reported a CF of 0.0858E-15 W-sec/count for 12-bit images, measured with a laser source. That value and the one I got are off by a factor of 25. The difference in pixel formats and the coherence of the light used might be possible reasons.
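
As a quick numerical cross-check of one row of the table and of the quoted uncertainty (using the parameters above; the 20 mV reading error is the assumption stated above, and this is not the analysis script itself):

# Cross-check of the CF arithmetic and a crude error estimate for the
# Vcc = 16 V row; numbers are taken from the tables above.
v_pd, dv_pd   = 0.784, 0.020       # photodiode reading and assumed error [V]
slope, dslope = 180.4029, 1.02882  # fit slope and its uncertainty [counts/us]
R, G          = 0.34, 1e6          # responsivity [A/W], transimpedance gain [V/A]
area_ratio    = 10.36 / 61.364     # GigE sensor area / PDA520 area

p_gige = (v_pd / (G * R)) * area_ratio   # power incident on the GigE [W]
cf     = p_gige / (slope * 1e6)          # [W*s/count], comes out ~2.2e-15

# Fractional errors added in quadrature (responsivity error neglected).
frac_err = ((dv_pd / v_pd) ** 2 + (dslope / slope) ** 2) ** 0.5
print(f"CF = {cf:.2e} W*s/count  +/- {100 * frac_err:.1f} %")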

Attachment 1: CCD_calibration.png
CCD_calibration.png
  14760   Mon Jul 15 14:09:07 2019 MilindUpdateCamerasCNN LSTM for beam tracking

I've set up a network with a CNN encoder (front end) feeding into a single LSTM cell followed by the output layer (see Attachment #1). The network requires significantly more memory than the previous ones; it takes around 30 s for one epoch of training. Attached are the predicted yaw motion and its FFT. The FFT looks rather curious. I still haven't done any tuning and these are only preliminary results.
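
For reference, a minimal Keras sketch of this kind of architecture, with a TimeDistributed CNN encoder over a short sequence of frames feeding a single LSTM layer (the layer sizes here are my guesses; the actual network is the one in Attachment #1):

# Minimal sketch of a CNN encoder feeding an LSTM (layer sizes are guesses).
from tensorflow.keras import layers, models

def build_cnn_lstm(seq_len=10, frame_shape=(350, 350, 1), n_outputs=2):
    frames = layers.Input(shape=(seq_len,) + frame_shape)
    # Apply the same small CNN encoder to every frame in the sequence.
    x = layers.TimeDistributed(layers.Conv2D(8, 3, activation='relu'))(frames)
    x = layers.TimeDistributed(layers.MaxPooling2D(4))(x)
    x = layers.TimeDistributed(layers.Conv2D(16, 3, activation='relu'))(x)
    x = layers.TimeDistributed(layers.MaxPooling2D(4))(x)
    x = layers.TimeDistributed(layers.Flatten())(x)
    # A single LSTM layer summarizes the sequence of frame encodings.
    x = layers.LSTM(32)(x)
    outputs = layers.Dense(n_outputs, activation='linear')(x)
    model = models.Model(frames, outputs)
    model.compile(optimizer='adam', loss='mse')
    return model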

Quote:

 Rana also suggested I try LSTMs today. I'll maybe code it up tomorrow. What I have in mind- A conv layer encoder, flatten, followed by an LSTM layer (why not plain RNNs? well LSTMs handle vanishing gradients, so why the hassle).

Well, what about the previous conv nets?

What I did:

  1. Extensive tuning of the learning rate, batch size, dropout ratio and input size using a grid search
  2. Trained each network for 75 epochs and obtained weights, predicted motion and corresponding FFT, error etc.

What I observed:

  1. Loss curves look okay, validation loss isn't going up, so I don't think overfitting is the issue
  2. Training for even more than 75 epochs seems to be pointless.

What I think is going wrong:

  1. Input size- relatively large input size: 350 x 350. Here, the input image size seems to be 128 x 128.
  2. Inadequate pre-processing.
    1. I have not applied any filters/blurs etc. to the frames.
    2. I have also not tried dimensionality reduction techniques such as PCA

What I will try now:

  1. Collect new data: with smaller amplitudes and different frequencies
  2. Tune the LSTM network for the data I have
  3. Try new CNN architectures with more aggressive max pooling and fewer parameters
  4. Ensembling the models (see this and this). Right now, I have multiple models trained either with the same architecture and different hyperparameters, or with different architectures. As a first pass, I intend to average the predictions of all the models and see if that improves performance (a minimal sketch of this averaging follows this list).
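
The averaging in point 4 would, as a first pass, be nothing fancier than this (sketch only; `models` is assumed to be a list of already-trained Keras models):

# Naive ensembling: average the predictions of several trained models.
import numpy as np

def ensemble_predict(models, X):
    """Average the position predictions (n_samples x 2) over all models."""
    preds = np.stack([m.predict(X) for m in models], axis=0)
    return preds.mean(axis=0)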
Attachment 1: cnn-lstm.png
cnn-lstm.png
Attachment 2: fft_yaw.pdf
fft_yaw.pdf
Attachment 3: yaw_motion.pdf
yaw_motion.pdf
  14768   Wed Jul 17 20:12:26 2019 KruthiUpdateCamerasAnother GigE in place of analog camera

I've taken the MC2 analog camera down and put another GigE (unit 151) in its place. This is just temporary and I'll put the analog camera back once I finish the MC2 loss map calibration. I'm using a 25mm focal length camera lens with it and it gives a view of MC2 similar to the analog camera one. But I don't think it is completely focused yet (pictures attached).

...more to follow

gautam - Attachment #3 is my (sad) attempt at finding some point scatterers - Kruthi is going to play around with photUtils to figure out the average size of some point scatterers.

Attachment 1: zoomed_out_gige.png
zoomed_out_gige.png
Attachment 2: osems_mc2.png
osems_mc2.png
Attachment 3: MC2.pdf
MC2.pdf
  14774   Thu Jul 18 22:03:00 2019 KruthiUpdateCamerasMC2 and cameras

[Kruthi, Yehonathan, Gautam]

This evening, Yehonathan and I aligned the MC2 cameras. As of now there are two GigEs in the MC2 enclosure. For the temporary GigE (which is in the analog camera's place), we are using an ethernet cable connection from the Netgear switch in 1x6. The MC2 was misaligned and the autolocker wasn't able to lock the mode cleaner, so Gautam disabled the autolocker and manually changed the settings; the autolocker was able to take over eventually.

  14779   Fri Jul 19 16:47:06 2019 MilindUpdateCamerasCNNs for beam tracking || Analysis of results

I did a whole lot of hyperparameter tuning for convolutional networks (without 3D convolution). Of the results I obtained, I am attaching the best below.

Define "best"?

The lower the power of the error signal (the difference between the true and predicted X and Y positions), essentially the MSE on the test data, the better the performance of the model. Of the trained models I had, I chose the one with the lowest MSE.

Attached results:

  1. Attachment 1: Training configuration
  2. Attachment 2: Predicted motion along the Y direction for the test data
  3. Attachment 3: Predicted motion along the Y direction for the training data
  4. Attachment 4: Learning curves
  5. Attachment 5: Error in test predictions
  6. Attachment 6: Video of image histogram plots
  7. Attachment 7: Plot of percentage of pixels with intensity over 240 with time

(Note: Attachment 6 and 7 present information regarding a fraction of the data. However, the behaviour remains the same for the rest of the data.)

Observations and analysis:

  1. Data:
    1. From Attachments 2, 3, 5: maximum deviation from the true labels occurs at the peaks of the applied dither/motion. Possible reasons:
      1. Stupid cropping? I checked (by watching the video of cropped frames, i.e. visually) to ensure that the entire motion of the beam spot is captured. Therefore, this is not the case.
      2. Intensity variation: The intensity (brightness?) of the beam spot varies (decreases) significantly at the maximum displacement. This, I think, is creating a skewed dataset with very few frames with low intensity pixels. Therefore, I think it makes sense to even this out and get more data points (frames) with similar (lower) pixel intensities. I can think of two ways of doing this:
        1. Collect more data with lower amplitude of sinusoidal dither. I used an amplitude of 80 cts to dither the optic. Perhaps something like 40 is more feasible. This will ensure the dataset isn't too skewed.
        2. Increase exposure time. I used an exposure time of 500us to capture data. Perhaps a higher exposure time will ensure that the image of the beam spot doesn't fade out at the peak of motion.
    2. From Attachment 5, saturated images?: We would like to aim for a maximum deviation of 10% (0.1 in this case) from the true values in the predicted labels (to be honest, I'm not sure why this is a good baseline; I ought to give that some thought. The maximum deviation of the OpenCV thing I did at the start might also be a good baseline?). Clearly, we're not meeting that. One possible reason is that the video might be saturated (too many pixels at 255, bleeding into surrounding pixels), leading to loss of information. I set the exposure time to 500 µs precisely to avoid this. However, I also created videos of the image histograms of the frames to make sure the frames weren't saturated (is there some better standard way of doing this? The check I used is sketched after this list). From Attachments 6 and 7, I think it's evident that saturation is not an issue. Consequently, I think increasing the exposure time and collecting more data is a good idea.
  2. The network:
    1. From Attachment 4: training beyond 25 epochs seems to produce overfitting, though it doesn't seem too terrible (from Attachments 2 and 3). The network is still learning after 75 epochs, so I'll tinker with the learning rate and dropout, and maybe put in annealing.
    2. I don't think there is a need to change the architecture yet. The model seems to generalize okay (validation error is close to training error), so I think it'll be a good idea to increase dropout for the fully connected layers and train for longer / with a higher learning rate.
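
The saturation check mentioned above is essentially the following (a rough sketch of the idea, not the exact script; the 240-count threshold is the one used for Attachment #7, and the filename is a placeholder):

# Fraction of pixels per frame above a near-saturation threshold (8-bit frames).
import cv2
import numpy as np
import matplotlib.pyplot as plt

def saturation_fraction(video_path, threshold=240):
    cap = cv2.VideoCapture(video_path)
    fractions = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fractions.append(np.mean(gray >= threshold) * 100.0)
    cap.release()
    return np.array(fractions)

sat = saturation_fraction('beam_spot_video.avi')   # placeholder filename
plt.plot(sat)
plt.xlabel('frame number')
plt.ylabel('% of pixels >= 240')
plt.show()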

 


 

P.S. I will also try the 2D convolution followed by the 1D convolution thing now. 

P.P.S. Gabriele suggested that I try average pooling instead of max pooling as this is a regression task. I'll give that a shot.

 

Attachment 1: readme.txt
Experiment file: train_both.py
batch_size: 32
dropout_probability: 0.5
eta: 0.0001
filter_size: 1
filter_type: median
initializer: Xavier
memory_size: 10
num_epochs: 75
activation_function: relu
... 22 more lines ...
Attachment 2: yaw_motion_test.pdf
yaw_motion_test.pdf
Attachment 3: yaw_motion_train.pdf
yaw_motion_train.pdf
Attachment 4: Learning_curves_replotted.pdf
Learning_curves_replotted.pdf
Attachment 5: yaw_error_test.pdf
yaw_error_test.pdf
Attachment 6: intensity_histogram.mp4
Attachment 7: saturation_percentage.pdf
saturation_percentage.pdf
  14786   Sat Jul 20 12:16:39 2019 gautamUpdateCamerasCNNs for beam tracking || Analysis of results
  1. Make the MSE a subplot on the same axes as the time series for easier interpretation.
  2. Describe the training dataset - what is the pk-to-pk amplitude of the beam spot motion you are using for training in physical units? What was the frequency of the dither applied? Is this using a zoomed-in view of the spot or a zoomed out one with the OSEMs in it? If the excursion is large, and you are moving the spot by dithering MC2, the WFS servos may not have time to adjust the cavity alignment to the nominal maximum value.
  3. What is the minimum detectable motion given the CCD resolution?
  4. Please upload a cartoon of the network architecture for easier visualization. What is the algorithm we are using? Is the approach the same as using the bright point scatterers to signal the beam spot motion that Gabriele demonstrated successfully?
  5. What is the significance of Attachment #6? I think the x-axis of that plot should also be log-scaled.
  6. Is the performance of the network still good if you feed it a time-shuffled test dataset? i.e. you have (pictures,Xcoord,Ycoord) tuples, which don't necessarily have to be given to the network in a time-ordered sequence in order to predict the beam spot position (unless the network is somehow using the past beam position to predict the new beam position).
  7. Is the time-sync problem Koji raised limiting this approach?
  14787   Sat Jul 20 14:43:45 2019 MilindUpdateCamerasCNNs for beam tracking || Analysis of results

<Adding details>

See Attachment #2.

Quote:

Make the MSE a subplot on the same axes as the time series for easier interpretation.

Training dataset:

  1. Peak to peak amplitude in physical units: ?
  2. Dither frequency: 0.2 Hz
  3. Video data: zoomed-in video of the beam spot obtained from the GigE camera 198.162.113.153 at 500 µs exposure time. Each frame has a resolution of 640 x 480, which I have cropped to 350 x 350. Attachment #1 is one such frame.
  4. Yes, therefore I am going to obtain video at lower amplitudes. I think that should help me avoid the problem of not-nominal-maximum value?
  5. Other details of the training dataset:
    1. Dataset created from four videos of duration ~30, 60, 60, 60 s at 25 FPS.
    2. 4032 training data points
      1. Input (one example/data point): 10 successive frames stacked to form a 3D volume of shape 350 x 350 x 10 (a sketch of how these volumes might be assembled follows this list)
      2. Output (2-dimensional vector): QPD readings (C1:IOO-MC_TRANS_PIT_ERR, C1:IOO-MC_TRANS_YAW_ERR)
    3. Pre-processing: none
    4. Shuffling: the dataset was shuffled before every epoch
    5. No thresholding: binary images are going to be of little use if the expectation is that the network will learn to interpret intensity variations of the pixels.
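
A sketch of how such input volumes might be assembled from a frame sequence and the corresponding QPD time series (this is my assumption of the bookkeeping, not the actual preprocessing code):

# Stack 10 successive (cropped) frames into one 350x350x10 input volume,
# paired with the QPD reading at the last frame in the stack.
import numpy as np

def make_volumes(frames, qpd, memory=10):
    """frames: (N, 350, 350) array; qpd: (N, 2) array of PIT/YAW readings."""
    X, y = [], []
    for i in range(memory, len(frames) + 1):
        X.append(np.stack(frames[i - memory:i], axis=-1))  # -> (350, 350, 10)
        y.append(qpd[i - 1])
    return np.array(X), np.array(y)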

Do I need to provide any more details here?

Quote

Describe the training dataset - what is the pk-to-pk amplitude of the beam spot motion you are using for training in physical units? What was the frequency of the dither applied? Is this using a zoomed-in view of the spot or a zoomed out one with the OSEMs in it? If the excursion is large, and you are moving the spot by dithering MC2, the WFS servos may not have time to adjust the cavity alignment to the nominal maximum value.

?

Quote:

What is the minimum detectable motion given the CCD resolution?

see attachment #4.

Quote:
  1. Please upload a cartoon of the network architecture for easier visualization. What is the algorithm we are using? Is the approach the same as using the bright point scatterers to signal the beam spot motion that Gabriele demonstrated successfully

 

I wrote what I think is a handy script to check whether the frames are saturated; this might be useful if/when I collect data at higher exposure times. I had assumed there was no saturation in the images because I'd set the exposure to something low, but I thought it'd be useful to verify that. Attachment #3 has a log scale on the x axis.

Quote:

What is the significance of Attachment #6? I think the x-axis of that plot should also be log-scaled.

 

Quote:
  1. Is the performance of the network still good if you feed it a time-shuffled test dataset? i.e. you have (pictures,Xcoord,Ycoord) tuples, which don't necessarily have to be given to the network in a time-ordered sequence in order to predict the beam spot position (unless the network is somehow using the past beam position to predict the new beam position).
  2. Is the time-sync problem Koji raised limiting this approach?

 

Attachment 1: frame0.pdf
frame0.pdf
Attachment 2: subplot_yaw_test.pdf
subplot_yaw_test.pdf
Attachment 3: intensity_histogram.mp4
Attachment 4: network2.pdf
network2.pdf