ID   Date   Author   Type   Category   Subject
13348   Mon Oct 2 12:44:45 2017   johannes   Update   Cameras   Basler 120gm calibration

Disclaimer: Wrong calibration factors! See https://nodus.ligo.caltech.edu:8081/40m/13391

The two acA640-120gm Basler GigE cameras have been calibrated. I used the collimated output of a fiber that carried the auxiliary laser light from the PSL table. With a non-polarizing beam splitter some of the light was picked off onto a PD, and I modified the RF amplitude of the AOM drive signal to vary the power coming out of the fiber. The fiber output was directed at a white paper placed 1.06 m from the front of the lens tube assembly, which is where the focal plane is. Using the Pylon Viewer App I made sure that the entirety of the beam spot was imaged onto the CCD. Since the camera sensor is only 1/4" across, I removed the camera from the lens tube, placed the Ophir power meter head at the position of the sensor, and measured the reported power versus PD voltage, which turned out to be 1.5 V/uW.

The camera was put back in place and I used the PyPylon package Gautam had stumbled upon to sweep the exposure time from 100 us to 10 ms at different light power settings, including no laser light at all for background subtraction. Rather than keeping the full bitmap data for O(100s) of images, I recorded only the quantities

  1. Pixel Max
  2. Pixel Sum
  3. Pixel Mean
  4. Pixel Standard Deviation
  5. Pixel Median

I performed this procedure for both the 152 and 153 cameras and plotted the pixel sum and the pixel max vs the exposure time. All exposures were taken at a gain setting of 100, which is the smallest possible setting (the range is 100-600). To obtain the calibration factor I used the input power Pin = 75 nW in the 'safe' region from 1 ms to 10 ms, where the pixel sum looks smooth and the CCD is reportedly not saturated.

Camera IP Calibration Factor CF
192.168.113.152 8.58 W*s
192.168.113.153 7.83 W*s

The incident power can be calculated as Pin = CF * Total(Counts - DarkCounts) / ExposureTime.
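For concreteness, a minimal Python sketch of this conversion (the array and argument names are placeholders; CF and exposure time just need to be in consistent units, e.g. the corrected pW*us values from elog 13391):

import numpy as np

def incident_power(frame, dark_frame, exposure, cf):
    """Convert a background-subtracted pixel sum to incident power.

    frame, dark_frame : 2D arrays of raw pixel counts (same exposure time)
    exposure          : exposure time (e.g. in us)
    cf                : calibration factor (e.g. in pW*us, see elog 13391)
    Returns power in the units implied by cf/exposure (e.g. pW).
    """
    net_counts = frame.astype(float).sum() - dark_frame.astype(float).sum()
    return cf * net_counts / exposure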

Attachment 1: calib_20170930_152.pdf
Attachment 2: calib_20170930_153.pdf
13352   Mon Oct 2 23:16:05 2017   gautam   HowTo   Cameras   CCD calibration

Going through some astronomy CCD calibration resources ([1]-[3]), I gather that there are in general 3 distinct types of correction that are applied:

  1. Dark frames --- this would be what we get with a "zero duration" capture; some documents further subdivide this into various categories like thermal noise in the CCD / readout electronics, Poissonian offsets on individual pixels, etc.
  2. Bias frames --- this effect is attributed to the charge applied to the CCD array prior to the readout.
  3. Flat-field calibration --- this effect accounts for the non-uniform responsivity of individual pixels on the CCDs. 

The flat-field calibration seems to be the most complicated - the idea is to use a source of known radiance, and capture an image of this known radiance with the CCD. Then, assuming we know the source radiance well enough, we can use some math to back out what the actual response function of each individual pixel is. Then, for an actual image, we would divide by this response map to get the actual image. There are a number of assumptions that go into this, such as:

  • We know the source radiance perfectly (I guess we are assuming that the white paper is a Lambertian scatterer so we know its BRDF, and hence the radiance, perfectly, although the work that Jigyasa and Amani did this summer suggests that white paper isn't really a Lambertian scatterer).
  • There is only one wavelength incident on the CCD.
  • We can neglect the effects of dust on the telescope/CCD array itself, which would obviously modify the responsivity of the CCD, and is presumably not stationary. The best we can do is to try and keep the setup as clean as possible during installation.

I am not sure what error is incurred by ignoring corrections 2 and 3 in the list at the beginning of this elog; perhaps this won't affect our ability to estimate the scattered power from the test masses to within a factor of 2. But it may be worth it to do these additional calibration steps.
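Schematically, the standard astronomy-style reduction would combine the three corrections roughly as below (a generic sketch, not code we are actually running here; the frame names are placeholders):

import numpy as np

def reduce_frame(raw, bias, dark, flat):
    """Standard CCD reduction: subtract bias and dark, divide by the normalized flat.

    raw  : science exposure
    bias : zero-exposure frame (readout offset)
    dark : dark frame scaled to the science exposure time, with bias already removed
    flat : flat-field exposure of a (nominally) uniform source
    """
    flat_corr = (flat - bias).astype(float)
    flat_norm = flat_corr / np.mean(flat_corr)   # unit-mean pixel response map
    return (raw.astype(float) - bias - dark) / flat_norm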

I also wonder what the uncertainty in the 1.5 V/uW number for the photodiode is (i.e. how much do we trust the Ophir power meter at low power levels?). The datasheet for the PDA100A says the transimpedance gain at the 60 dB gain setting is 1.5 MV/A (into a high impedance load), and the Si responsivity at 1064 nm is ~0.25 A/W, so naively I would expect 0.375 V/uW, which is a factor of ~4 lower. Is there a reason to trust one method over the other?

Also, are the calibration factor units correct? Jigyasa reported something like 0.5nW s / ct in her report.

Camera IP Calibration Factor CF
192.168.113.152 8.58 W*s
192.168.113.153 7.83 W*s

The incident power can be calculated as Pin =CF*Total(Counts-DarkCounts)/ExposureTime.

References:

[1] http://www.astrophoto.net/calibration.php

[2] https://www.eso.org/~ohainaut/ccd/

[3] http://www.astro.ufl.edu/~lee/ast325/handouts/ccd.pdf

13354   Tue Oct 3 01:58:32 2017   johannes   HowTo   Cameras   CCD calibration

Disclaimer: Wrong calibration factors! See https://nodus.ligo.caltech.edu:8081/40m/13391

The factors were indeed enormously off. The correct table reads:

Camera IP Calibration Factor CF
192.168.113.152 85.8 pW*s
192.168.113.153 78.3 pW*s

I did subtract a 'dark' frame from the images, though not in the sense of your point 1, just an exposure of identical duration with the laser turned off. This was mostly to reduce the effect of residual light, but given similar initial conditions it would somewhat compensate for the offset that pre-existing charge and electronics noise put on the pixel values. The white field is of course a different story.

I wonder how close we can get to a white field by putting a thin piece of paper in front of the camera without lenses and illuminating it from the other side. A problem is of course the coherence if we use a laser source... Or we scrap any sort of screen/paper and illuminate directly with a strongly divergent beam? Then there wouldn't be a speckle pattern.

I'm not sure I understand your point about the 1.5 V/uW. Just to make sure we're talking about the same thing I made a crude drawing:

The PD sees plenty of light at all times, and the 1.5 V/uW came from a comparative measurement PD <--> Ophir (which took the place of the CCD) while adjusting the power deflected with the AOM, so it doesn't have an immediate connection to the conversion gain of silicon in this case. I can't remember the gain setting of the PD, but I believe it was 0 dB, 20 dB at most.

Attachment 1: gige_calibration.pdf
13375   Thu Oct 12 01:03:49 2017   johannes   HowTo   Cameras   ETMX GigE side view

I calculated a better lens solution for the ETMX side view with the simple python script that's attached. The camera is still not as close to the viewport as we would like, and now the front lens is almost all the way up to the end of the tube. With a little more playing around there may be a better way, especially if we expand the repertoire of focal lengths. Using Steve's wonderful camera fixture I put the beam spot in focus. I turned the camera sideways for better use of the field of view, and now the beam spot actually fills the center area of the image, to the point where we probably don't want more magnification or else we start losing the tails of the Gaussian.

We'll take a series of images tomorrow, and will have an estimate of the scatter loss by the end of tomorrow.

 

Attachment 1: IMG_20171011_164549698.jpg
Attachment 2: Image__2017-10-11__16-52-01.png
Attachment 3: GigE_lens_position_helper.py.zip
13377   Thu Oct 12 07:56:33 2017   Steve   HowTo   Cameras   ETMX GigE side view at 50 deg of IR scattering

Telescope front lens to wall distance 25 cm, GigE camera length 6 cm, and Cat6 cable 2 cm.

Atm3: The existing short camera can has 16 cm of length to the Lexan guard on the viewport. The available 2" OD periscope tube length is 8 cm. The one in use is 16 cm long.

            Note: we can fabricate a lightweight cover with a tube that would accommodate a longer telescope.

            Can we calibrate the AR-coated M5018-SW and compare its performance against the 2" periscope?

            Look at the Edmund Optics 3" OD camera lens with AR coating.

This lower priced 1" aperture Navitar lens can be an option too.

Atm1: Now I can see dust. This is much better. The focus is not right yet.

Atm2: Chamber viewport wiped and image refocused. Actually I was focusing on the dust.

Quote:

I calculated a better lens solution for the ETMX side view with the simple python script that's attached. The camera is still not as close to the viewport as we would like, and now the front lens is almost all the way up to the end of the tube. With a little more playing around there may be a better way, especially if we expand the repertoire of focal lengths. Using Steve's wonderful camera fixture I put the beam spot in focus. I turned the camera sideways for better use of the field of view, and now the beam spot actually fills the center area of the image, to the point where we probably don't want more magnification or else we start losing the tails of the Gaussian.

We'll take a series of images tomorrow, and will have an estimate of the scatter loss by the end of tomorrow.

 

 

Attachment 1: Image__2017-10-11__15-29-52_15k400g.png
Attachment 2: Image__2017-10-12__15-50-18wipedRefocud2.png
Attachment 3: camCan16cm.jpg
13389   Wed Oct 18 11:37:58 2017   johannes   HowTo   Cameras   ETMX GigE side view at 50 deg
Quote:

Telescope front lens to wall distance 25 cm, GigE camera length 6 cm, and Cat6 cable 2 cm.

Atm3: The existing short camera can has 16 cm of length to the Lexan guard on the viewport. The available 2" OD periscope tube length is 8 cm. The one in use is 16 cm long.

            Note: we can fabricate a lightweight cover with a tube that would accommodate a longer telescope.

            Can we calibrate the AR-coated M5018-SW and compare its performance against the 2" periscope?

            Look at the Edmund Optics 3" OD camera lens with AR coating.

Atm1: Now I can see dust. This is much better. The focus is not right yet.

Atm2: Chamber viewport wiped and image refocused. Actually I was focusing on the dust.

We don't really have to calibrate the lens, just the CCD, which we've done. It's more about knowing the true aperture size to know how much solid angle you're capturing to infer the total amount of scatter. For our custom lens tubes this is the ID of the retaining ring.

The Edmund Optics lens tube looks tempting, but it comes at a price. Thorlabs sells lens tubes that offer more flexibility than what we have right now, so I bought a few different ones, and also more 150 mm 2" lenses. This will allow for more compact solutions and offer some in-situ focusing ability that doesn't require detaching the lens tube like now. They should be here in a couple of days; then we'll be able to enclose the GigE camera in the viewport can with a similar field of view to what we have now.

I also bought a collimation package for the AS port fiber stuff so we can move ahead with the ringdown measurements and also mode spectroscopy.

13391   Wed Oct 18 15:26:58 2017   johannes   HowTo   Cameras   Revision: CCD calibration

The units were still off in my previous post. Here's the corrected, sanity-checked version:

Camera IP Calibration Factor
192.168.113.152 85.8 +/- 4.3 pW*μs
192.168.113.153 78.3 +/- 3.9 pW*μs

I estimated the uncertainties based on a linear fit to the data I recorded with 75 nW incident on the CCD, and assumed a 5% uncertainty in that number. This is just an upper limit, to be safe. I had calibrated the power reading by placing the Ophir power meter where the CCD would otherwise be and comparing it to the PD voltage of a picked-off beam. In my previous figures the axes were mislabeled, so I reproduce them here:

Using the current camera position I recorded 50 exposures both with and without beam (XARM locked vs PSL shutter closed) and averaged the images to see how much the reading fluctuates. The exposure time was 10 ms, which left the maximum reported pixel value in all exposures below 3800 out of 4096. The gain setting was 100, which is what I used to calibrate the CCDs.

Counts with XARM locked: (2.799 +/- 0.027) x 10^7
Counts with shutter closed: (3.220 +/- 0.047) x 10^6
Power on CCD: 193.9 +/- 2.2 nW
Power scattered into 2π (*): 254 +/- 39 μW
ETMX scatter loss (**): 25.4 +/- 3.9 ppm

(*) I calculated the lens positions to focus at a plane 65 cm from the front lens. We're pretty close to that, but I can't confirm the actual distance easily, so I assumed a 5 cm error on the distance, which is where most of the error is coming from. This also assumes uniform scatter.

(**) This is assuming 10W of circulating power
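For reference, the arithmetic behind (*) and (**) is roughly the following sketch (the 25.4 mm collection-aperture radius and the 10 W circulating power are assumptions for illustration; the true limiting aperture is the retaining-ring ID of the lens tube):

import numpy as np

P_ccd  = 193.9e-9    # W, calibrated power on the CCD (XARM locked, dark-subtracted)
d      = 0.65        # m, distance from the ETMX spot to the front lens (assumed)
r      = 25.4e-3     # m, collection aperture radius (assumption: full 2" lens)
P_circ = 10.0        # W, circulating arm power (assumed)

omega    = np.pi * r**2 / d**2         # solid angle subtended by the aperture
P_2pi    = P_ccd * 2 * np.pi / omega   # extrapolation assuming uniform scatter into 2pi
loss_ppm = P_2pi / P_circ * 1e6

print(f"collected solid angle: {omega:.2e} sr")
print(f"power scattered into 2pi: {P_2pi * 1e6:.0f} uW")   # ~254 uW
print(f"ETMX scatter loss: {loss_ppm:.1f} ppm")            # ~25 ppm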

Attachment 1: calib_20170930_152.pdf
Attachment 2: calib_20170930_153.pdf
13868   Fri May 18 20:03:14 2018   Pooja   Update   Cameras   Telescopic lens solution for GigE

Aim: To find telescopic lens solution to image test mass onto the sensor of GigE camera.

I wrote a python code to find an appropriate combination of lenses to focus the optic onto the camera, keeping in mind practical constraints: the distance of the GigE camera from the optic is ~1 m, and the distance between the lenses needs to be in accordance with the Thorlabs lens tubes available. We have to image both the entire optic of size 3" and the beam spot of 1" using this combination of lenses. The image size that efficiently utilizes the entire sensor array is 1/4". Therefore the magnification required for imaging the entire optic is 1/12 and that for the beam spot is 1/4.

I checked the Thorlabs website for the available focal lengths of 2" lenses (instead of 1" lenses, to collect sufficient power). I have tried several combinations of lenses, and the ones I found close enough to what is required are listed below along with their colorbar plots.

a) 150mm-150mm (Attachment 2 & 3)

With this combination, the object distance varies from 50 cm for the 1" beam spot to 105 cm for the 3" optic. It therefore poses a difficulty: there is a difference of ~48 cm in the distance between the optic and camera in the two cases, imaging the entire optic and imaging the beam spot.

b) 125mm-150mm (Attachment 4 & 5)

With this combination, the object distance varies from 45 cm for the 1" beam spot to 95 cm for the 3" optic. There is a difference of ~43 cm in the distance between the optic and camera in the two cases: imaging the entire optic and imaging the beam spot.

c) 125mm-125mm (Attachment 6 & 7)

The object distance varies from 45 cm for the 1" beam spot to 90 cm for the 3" optic. There is a difference of ~39 cm in the distance between the optic and camera in the two cases: imaging the entire optic and imaging the beam spot.

A sensitivity check was also done for these combinations of lenses. An error of 1 cm in the object distance and 5 mm in the distance between the lenses gives an error in magnification of <2%.

The schematic of the telescopic lens system has been given in Attachment 8.
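As an illustration of the kind of calculation the script performs, a thin-lens sketch for a two-lens telescope is below. This is a generic re-derivation under the thin-lens approximation, not Pooja's actual code; the example lens separation of 5 cm is only illustrative, and signed distances follow the usual Gaussian convention (a negative intermediate object distance means a virtual object for the second lens).

def two_lens_image(f1, f2, d_obj, d_lens):
    """Image distance after lens 2 and overall magnification for two thin lenses.

    f1, f2 : focal lengths [m]
    d_obj  : object distance to lens 1 [m]
    d_lens : separation between the two lenses [m]
    """
    # Gaussian lens formula for lens 1: 1/v1 + 1/d_obj = 1/f1
    v1 = 1.0 / (1.0 / f1 - 1.0 / d_obj)
    m1 = -v1 / d_obj
    # The intermediate image is the object for lens 2 (signed distance)
    u2 = d_lens - v1
    v2 = 1.0 / (1.0 / f2 - 1.0 / u2)
    m2 = -v2 / u2
    return v2, m1 * m2

# Example: the 150 mm / 150 mm pair, optic ~1.05 m away, lenses 5 cm apart (illustrative)
v2, mag = two_lens_image(0.15, 0.15, 1.05, 0.05)
print(f"image {v2 * 100:.1f} cm after the second lens, magnification {mag:.3f}")
# prints roughly 6.8 cm and -0.09, i.e. close to the ~1/12 demagnification quoted above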

 

Attachment 1: calib_20170930_152.pdf
Attachment 2: plot_2018-05-18_tel-lens_150_150_1.pdf
Attachment 3: plot_2018-05-18_tel-lens_150_150_3.pdf
Attachment 4: plot_2018-05-18_tel-lens_125_150_1.pdf
Attachment 5: plot_2018-05-18_tel-lens_125_150_3.pdf
Attachment 6: plot_2018-05-18_tel-lens_125_125_1.pdf
Attachment 7: plot_2018-05-18_tel-lens_125_125_3.pdf
Attachment 8: tel_design.pdf
13874   Mon May 21 17:36:00 2018   pooja   Update   Cameras   GigE camera image of ETMX

Today Steve and I tried to capture an image of the scattering of light by dust particles on the surface of ETMX using the GigE camera. The image (at gain = 100, exposure time = 125000) obtained is attached. Unlike the previous images, a creepy shape of bright spots was seen. Gautam helped us lock the arm with infrared light and see the image. A similar, less intense shape was seen. This may be because of dust on the lens.

Attachment 1: Image__2018-05-21__17-34-15_125k100g.tiff
13893   Fri May 25 14:55:33 2018   Jon Richardson   Update   Cameras   Status of GigE Camera Software Fixes

There is an effort to switch to an all-digital system for the GigE camera feeds similar to the one running at LLO, which uses Joe Betzwieser's custom SnapPy package to interface with the cameras in Python and aggregate their feeds into a fancy GUI. Joe's code is a SWIG-wrapping of the commercial camera-driver API, Pylon, from Basler. The wrapping allows the low-level camera driver methods to be called from within Python, and their feeds are forwarded to a gstreamer stream also initiated from within Python. The problem is that his wrapping (and the underlying Pylon software itself) is only runnable on an older version of Ubuntu. Efforts to run his software on newer distributions at the 40m have failed.

I'm working on a fix to essentially rewrite his high-level SnapPy code (generators of GUIs, etc.) to use the newest version of Pylon (pylon5) to interface at a low level with the cameras. I discovered that since the last attempt to digitize the camera system, Basler has released their own official version of a Python wrapping for Pylon on github (PyPylon).

Progress so far:

  • I've installed from source the newest version of Pylon, pylon5.0.12 on the SL7 machine (rossa). I chose that machine because LIGO is migrating to Scientific Linux, but I think this will also work for any distribution.
  • I've installed from source the newest, official Python wrapping of the Basler Pylon software, pypylon.
  • I've tested the pypylon package and confirmed it can run our cameras.

The next and final step is to modify Joe's SnapPy package to import pypylon instead of his custom wrapping of an older version of the camera software, and update all of the Pylon calls to use the new methods. I'll hopefully get back to this early next week.
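For anyone who wants a quick sanity check that pypylon can talk to a camera, a minimal single-frame grab looks something like the sketch below (not the SnapPy replacement itself; the timeout value is arbitrary, and this simply connects to the first camera pypylon finds on the network):

from pypylon import pylon

# Connect to the first camera pylon can find
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Grab a single frame (timeout in ms) and pull it out as a numpy array
result = camera.GrabOne(5000)
if result.GrabSucceeded():
    frame = result.Array            # 2D numpy array of pixel values
    print(frame.shape, frame.max())
result.Release()
camera.Close()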

13909   Fri Jun 1 19:25:11 2018   pooja   Update   Cameras   Synchronizing video data with the applied motion to the mirror

Aim: To synchronize data from the captured video and the signal applied to ETMX

In order to correlate the intensity fluctuations of the scattered light with the motion of the test mass, we are planning to use a neural network. For this, we need a video of the scattered light synchronized with the signal applied to the test mass. Gautam helped me capture a 60 sec video of the scattered infrared laser light after ETMX was dithered in PITCH at ~0.2 Hz.

I developed a python program that uses OpenCV to read the video and convert it into a time series of the sum of pixel values in each frame, to see the variation. Initially we had tried the same with green laser light and a signal of approximately 11.12 Hz, but in order to see the variation clearly, we repeated the measurement with a lower frequency signal after locking the IR laser today. I have attached the plots below. The first graph gives the intensity fluctuations from the video. The third and fourth graphs are those of the transmitted light and the signal applied to ETMX to shake it. Since the video captured using the camera was very noisy, and the intensity fluctuations in the scattered light had twice the frequency of the applied signal, we also captured a video after turning off the laser. The second plot gives the background noise, probably from the camera. Since the camera noise is very high, it may not be possible to train a neural network on this data set.

Since the videos captured consume a lot of memory I haven't uploaded it here. I have uploaded the python code 'sync_plots.py' in github (https://github.com/CaltechExperimentalGravity/GigEcamera/tree/master/Pooja%20Sekhar/PythonCode).
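The core of that conversion is just a frame-by-frame sum; a stripped-down sketch of the idea (not the actual sync_plots.py, and the video path is a placeholder) is:

import cv2
import numpy as np

def pixel_sum_timeseries(video_path):
    """Return (time, sum of pixel values) for each frame of a video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    sums = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sums.append(gray.astype(np.float64).sum())
    cap.release()
    t = np.arange(len(sums)) / fps
    return t, np.array(sums)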

 

Attachment 1: camera_mirror_motion_plots.pdf
camera_mirror_motion_plots.pdf
13914   Mon Jun 4 11:34:05 2018   Jon Richardson   Update   Cameras   Update on GigE Cameras

I spent a day trying to modify Joe B.'s LLO camera client-server code without ultimate success. His code now runs without throwing any errors, but something inside the black-box handoff of his camera source code to gstreamer appears to be SILENTLY FAILING. Gautam suggested a call with Joe B., which I think is worth a try.

In the meantime, I've implemented a simple Python video feed streamer which does work, and which students can use as a base framework to implement more complicated things (e.g., stream multiple feeds in one window, save a video stream as a movie or animation).

It uses the same PyPylon API to interface with the GigE cameras as does Joe's code. However, it uses matplotlib instead of gstreamer to render the imaging. The matplotlib code is optimized for maximum refresh rate and I observed it to achieve ~5 Hz for a single video feed. However, this demo code does not set any custom cameras settings (it just initializes a camera with its defaults), so it's quite possible that the refresh rate is actually limited by, e.g., the camera exposure time.
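The display loop is essentially the pattern sketched below (an illustration of the approach, not the script on disk; the grab call is the single-frame pypylon grab shown in elog 13893, and figure handling is kept deliberately simple):

import matplotlib.pyplot as plt
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

plt.ion()
first = camera.GrabOne(5000).Array
im = plt.imshow(first, cmap='gray')
plt.title('GigE live view (sketch)')

# Refresh as fast as grabbing + drawing allows (~few Hz in practice)
while plt.fignum_exists(1):
    frame = camera.GrabOne(5000).Array
    im.set_data(frame)
    im.set_clim(frame.min(), frame.max())
    plt.pause(0.01)

camera.Close()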

Location of the code (on the shared network drive):

/opt/rtcds/caltech/c1/scripts/GigE/demo_with_mpl/stream_camera_to_mpl.py

This demo initializes a single GigE camera with its default settings and continuously streams its video feed in a pop-up window. It runs continuously until the window is closed. I installed PyPylon from source on the SL7 machine (rossa) and have only tested it on that machine. I believe it should work on all our versions of Linux, but if not, run the camera software on rossa for now.

Usage:

From within the above directory, the code is executed as 

$python stream_camera_to_mpl.py [Camera IP address]

with a single argument specifying the IP address of the desired camera. At the time I tested, there was only one GigE camera on our network, at 192.168.113.152.

13917   Tue Jun 5 20:31:42 2018   rana   Update   Cameras   Update on GigE Cameras

Aha! Video is back!

I think it would be good to add a flag whereby the video can be saved to disk in some uncompressed video format (ogg, avi, ?) instead of displayed to a matplotlib window. We could then use the default to just display video, but use the save-to-disk flag to grab a few minutes of video for image processing.
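If we go the OpenCV route, the save-to-disk flag could be implemented with something like the sketch below (illustrative only; the codec, frame rate and frame size are placeholders and would need to match the actual camera settings, and MJPG is only lightly compressed rather than fully raw):

import cv2
import numpy as np

def open_writer(filename, width, height, fps=10.0):
    """Open an MJPG-compressed .avi writer for 8-bit frames."""
    fourcc = cv2.VideoWriter_fourcc(*'MJPG')
    return cv2.VideoWriter(filename, fourcc, fps, (width, height))

# inside the streaming loop:
#   frame8 = np.uint8(255.0 * frame / frame.max())          # scale raw counts to 8 bit
#   writer.write(cv2.cvtColor(frame8, cv2.COLOR_GRAY2BGR))  # VideoWriter expects 3-channel BGR
# and when done:
#   writer.release()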

Quote:

In the meantime, I've implemented a simple Python video feed streamer which does work, and which students can use as a base framework to implement more complicated things (e.g., stream multiple feeds in one window, save a video stream as a movie or animation).

13937   Sun Jun 10 15:04:33 2018   pooja   Update   Cameras   Developing neural network

Aim: To develop a neural network in order to correlate the intensity fluctuations in the scattered light to the angular motion of the test mass. A block diagram of the technique employed is given in Attachment 1.

I have used Keras to implement supervised learning with a neural network (NN). Initially I developed a python code that converts a video (59 sec) of scattered light, taken after an excitation (a sine wave of frequency 0.2 Hz) was applied to ETMX pitch, into image frames (of size 480*720), and stores the 2D pixel values of the 1791 image frames captured into an hdf5 file. This array of shape (1791, 36500) is given as input to the neural network. I have tried to implement a regular NN only, not a convolutional or recurrent NN, using the sequential model in Keras. I have tried various numbers of dense layers and varied the number of nodes in each layer. I got a test accuracy of approximately 7% using the following network: two dense layers, the first with 750 nodes and a dropout of 0.1 (10% of the nodes not used) and the second with 500 nodes. To add nonlinearity to the network, both layers are given a tanh activation function. The output layer has 1 node and expects an output of shape (1791, 1). This model has been compiled with a categorical crossentropy loss function and the RMSprop optimizer; we used these since they are what is mostly used in the image analysis examples. The model is then trained against the dataset of mirror motion, which has been obtained by sampling the cosine-wave fit to the mirror motion so that the shapes of the input and output of the NN are consistent. I used a batch size (number of samples per gradient update) of 32 and 20 epochs (number of times the entire dataset passes through the NN). However, using this we got an accuracy of only 7.6%.

I think that the above technique overfits, since the dense layers use all the nodes during training apart from the dropout. Also, the beam spot moves in the video, so it may be necessary to use a convolutional NN to extract the information.

The video file can be accesses from this link https://drive.google.com/file/d/1VbXcPTfC9GH2ttZNWM7Lg0RqD7qiCZuA/view.

Gabriele told us that he had used the beam spot motion to train his neural network. He also informed us that GPUs are necessary for this. So we have to figure out a better way to train the network.


gautam noon 11Jun: This link explains why the straight-up fully connected NN architecture is ill-suited for the kind of application we have in mind. Discussing with Gabriele, he informed us that training on a GPU machine with 1000 images took a few hours. I'm not sure what the CPU/GPU scaling is for this application, but given that he trained for 10000 epochs, and we see that training for 20 epochs on Optimus already takes ~30 minutes, it seems like a futile exercise to keep trying on CPU machines.

Attachment 1: nn_block_diag_2.pdf
13940   Mon Jun 11 17:18:39 2018   pooja   Update   Cameras   CCD calibration

Aim: To calibrate CCD of GigE using LED1050E.

The following table shows some of the specifications for LED1050E as given in Thorlabs datasheet.

Specification                       Typical    Maximum rating
DC forward current (mA)             -          100
Forward voltage (V) @ 20 mA (VF)    1.25       1.55
Forward optical power (mW)          1.6        -
Total optical power (mW)            2.5        -
Power dissipation (mW)              -          130

 The circuit diagram is given in Attachment 1.

Considering a power supply voltage Vcc = 15V, current I = 20mA & forward voltage of led VF = 1.25V, resistance in the circuit is calculated as,

R = (Vcc - VF)/I = 687.5 Ω

Attachment 2 gives a plot of resistance (R) vs input voltage (Vcc) when a current of 20mA flows through the circuit. I hope I can proceed with this setup soon.

 

Attachment 1: led_circuit.pdf
Attachment 2: R_vs_V.pdf
13951   Tue Jun 12 19:27:25 2018   pooja   Update   Cameras   CCD calibration

Today I made the LED (1050 nm) circuit inside a box as given in my previous elog. Steve drilled a 1 mm hole in the box as an aperture for the LED light.

Resistance (R) used = 665 Ω.

We connected a power supply and IR has been detected using the card.

Later we changed the input voltage and measured the optical power using a powermeter.

Input voltage (Vcc in V)   Optical power
0 (dark reading)           60 nW
15                         68 µW
18                         82.5 µW
20                         92 µW

Since the optical power values are very low, we may need to drill a larger hole.

The hole is now approximately 7 mm from the LED, so the aperture full angle is approximately 2*tan^-1(0.5/7) ≈ 8 deg. From the radiometric curve given in the datasheet of LED1050E, most of the power is within 20 deg, so a hole of size 2*tan(10 deg)*7 mm ≈ 2.5 mm may be required.

I have also attached a photo of the led beam spot on the IR detection card.

Attachment 1: IMG_20180612_163831.jpg
13972   Fri Jun 15 09:51:55 2018   pooja   Update   Cameras   Developing neural network

Aim : To develop a neural network on simulated data.

I developed a python code that generates 64*64 images of a white Gaussian beam spot at the centre of a black background. I applied a sine wave of frequency 0.2 Hz that moves the spot vertically (i.e. in pitch), and simulated this video at 10 frames/sec for 10 seconds. I then saved this data into an hdf5 file, reshaped each frame to a 1D array, and gave it as input to a neural network. Out of the 100 image frames, 75 were taken as the training dataset and 25 as test data. I varied several hyperparameters like the learning rate of the optimizer, number of layers, nodes, activation function etc. Finally, I was successful in reducing the mean squared error with the following network model:

  • Sequential model of 2 fully connected layers with 256 nodes each and a dropout of 0.1
  • loss function = mean squared error, optimizer = RMSprop (learning rate = 0.00001) and activation function that adds nonlinearity = relu
  • batch size = 32 and number of epochs = 1000

I have attached the plot of the output of the neural network (NN), the sine signal applied to simulate the video, and their residual error in Attachment 1. The plot of the variation in mean squared error (in log scale) with the number of epochs is given in Attachment 2.

I think this network worked easily since there is no noise in the input. Gautam suggested trying this network on simulated data with a noisy background.
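For reference, a minimal Keras sketch of the model described above (layer sizes, dropout, loss and optimizer as listed; xtrain/ytrain stand for the flattened 64*64 frames and the normalized spot position, and older Keras versions use lr= instead of learning_rate=):

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop

model = Sequential([
    Dense(256, activation='relu', input_shape=(64 * 64,)),
    Dropout(0.1),
    Dense(256, activation='relu'),
    Dropout(0.1),
    Dense(1)                      # predicted spot displacement
])
model.compile(loss='mean_squared_error', optimizer=RMSprop(learning_rate=1e-5))

# xtrain: (N, 4096) flattened frames, ytrain: (N,) applied sine signal
# history = model.fit(xtrain, ytrain, batch_size=32, epochs=1000,
#                     validation_data=(xtest, ytest))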

 

Attachment 1: nn_1.pdf
Attachment 2: nn_2.pdf
13986   Tue Jun 19 14:08:37 2018   pooja   Update   Cameras   CCD calibration using LED1050E

Aim: To measure the optical power from led using a powermeter.

Yesterday Gautam drilled a larger hole of diameter 5 mm in the box as an aperture for the LED (the aperture full angle is approximately 2*tan^-1(2.5/7) ≈ 39 deg). I repeated the measurements that I had done before (https://nodus.ligo.caltech.edu:8081/40m/13951). The measurements of optical power (taken with a power meter) and the corresponding input voltages are listed below.

Input voltage (Vcc in V) Optical power
0 (dark reading) 0.8 nW
10 1.05 mW
12 1.15 mW
15 1.47 mW
16 1.56 mW
18 1.81 mW

So we are able to receive optical power close to the value (1.6mW) given in Thorlabs datasheet for LED1050E (https://www.thorlabs.com/drawings/e6da1d5608eefd5c-035CFFE5-C317-209E-7686CA23F717638B/LED1050E-SpecSheet.pdf). I hope we can proceed to BRDF measurements for CCD calibration.

Steve: did you center the LED ?

13991   Wed Jun 20 20:39:36 2018   pooja   Update   Cameras   CCD calibration using LED1050E

 

Quote:

Aim: To measure the optical power from led using a powermeter.

Yesterday Gautam drilled a larger hole of diameter 5 mm in the box as an aperture for the LED (the aperture full angle is approximately 2*tan^-1(2.5/7) ≈ 39 deg). I repeated the measurements that I had done before (https://nodus.ligo.caltech.edu:8081/40m/13951). The measurements of optical power (taken with a power meter) and the corresponding input voltages are listed below.

Input voltage (Vcc in V) Optical power
0 (dark reading) 0.8 nW
10 1.05 mW
12 1.15 mW
15 1.47 mW
16 1.56 mW
18 1.81 mW

So we are able to receive optical power close to the value (1.6mW) given in Thorlabs datasheet for LED1050E (https://www.thorlabs.com/drawings/e6da1d5608eefd5c-035CFFE5-C317-209E-7686CA23F717638B/LED1050E-SpecSheet.pdf). I hope we can proceed to BRDF measurements for CCD calibration.

Steve: did you center the LED ?

Yes.

14018   Tue Jun 26 10:50:14 2018   pooja   Update   Cameras   Beam spot tracking using OpenCV

Aim: To track the motion of beam spot in simulated video.

I simulated a video in which the beam spot, initially at the centre of the image, is moved by a sinusoidal signal of frequency 0.2 Hz and amplitude 1, i.e. it moves by a maximum of 1 pixel. It can be found at this shared google drive link (https://drive.google.com/file/d/1GYxPbsi3o9W0VXybPfPSigZtWnVn7656/view?usp=sharing). I found a program that uses a Kernelized Correlation Filter (KCF) to track object motion in the video. In the program we can initially define the bounding box (rectangle) that encloses the object we want to track, or select the bounding box by dragging in a GUI. I saved the bounding box parameters (x & y coordinates of the left corner point, width & height) and plotted the variation in the y coordinate. I have yet to figure out how this tracker works, since the program reads the 64*64 image frames of the video as 480*640 frames with 3 colour channels, and the frame rate also changes randomly. The plot of the output of this tracking program & the applied signal is attached below. The output is not exactly sinusoidal because the tracker may not be able to follow very slight movement, especially at the peaks where the slope is 0.
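For reference, the basic usage of the OpenCV KCF tracker is sketched below (a generic example, not the exact program used here; depending on the OpenCV version the constructor lives in cv2 or cv2.legacy, and the filename and initial bounding box are placeholders):

import cv2

cap = cv2.VideoCapture('simulated_beam_motion.mp4')   # placeholder filename
ok, frame = cap.read()

tracker = cv2.TrackerKCF_create()    # cv2.legacy.TrackerKCF_create() on newer OpenCV
bbox = (300, 220, 60, 60)            # (x, y, width, height) around the beam spot
tracker.init(frame, bbox)

y_history = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    success, bbox = tracker.update(frame)
    if success:
        y_history.append(bbox[1])    # y coordinate of the tracked box corner
cap.release()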

Attachment 1: cv2_track_fig.pdf
14020   Tue Jun 26 17:20:33 2018   Jon   Configuration   Cameras   LLO Python Camera Software is Working

Thanks to a discussion yesterday with Joe Betzwieser, I was able to identify and fix the remaining problem with the LLO GigE camera software. It is working now, currently only on rossa, but can be set up on all the machines. I've started a wiki page with documentation and usage instructions here:

https://wiki-40m.ligo.caltech.edu/Electronics/GigE_Cameras

This page is also linked from the main 40m wiki page under "Electronics."

This software has the ability to both stream live camera feeds and to record feeds as .avi files. It is described more on the wiki page.

14021   Tue Jun 26 17:54:59 2018   pooja   Update   Cameras   Developing neural networks

Aim: To find a model that trains on the simulated data of a Gaussian beam spot moving in the vertical direction under the application of a sinusoidal signal. The data also includes random uniform noise ranging from 0 to 10.

All the attachments are in the zip folder.

I simulated 128*128 images at 10 frames/sec by applying a sine wave of frequency 0.2 Hz that moves the beam spot, added random uniform noise ranging from 0 to 10, and resized the image frames to 64*64 using opencv. 1000 cycles of this data are taken as train & 300 cycles as test data for the following cases. Optimizer = Nadam (learning rate = 0.001), loss function = mean squared error, batch size = 32.

Case 1:

Model topology: 256 (selu, dropout = 0.1) -> 256 (selu, dropout = 0.1) -> 1

Number of epochs = 240.

Variation in loss value of train & test datasets is given in Attachment 1 of the attached zip folder & the applied signal as well as the output of neural network given in Attachments 2 & 3 (zoomed version of 2).

The model fits well, but there is no real training, since the test loss is lower than the train loss value. I found on several sites that dropping some of the nodes during training but retaining all of them during testing could be the probable reason for this (https://stackoverflow.com/questions/48393438/validation-loss-when-using-dropout , http://forums.fast.ai/t/validation-loss-lower-than-training-loss/4581 ). So I removed dropout while training next time.

Case 2:

Model topology: 256 (selu, dropout = 0.1) -> 256 (selu, dropout = 0.1) -> 1 (linear)

Number of epochs = 200.

The variation in the loss value of the train & test datasets is given in Attachment 4 of the attached zip folder, and the applied signal as well as the output of the neural network are given in Attachments 5 & 6 (zoomed version of 5).

But still no improvement.

Case 3:

I changed the optimizer to Adam and tried with the same model topology & hyperparameters as case 2 with no success (Attachments 7,8 & 9).

Finally, I think this is because I'm training & testing on the same data. So I'm now training with the simulated video, but moving the spot by a maximum of 2 pixels only, and testing with a video of ETMY that we had captured earlier.

Attachment 1: NN_noise_diag.zip
14037   Wed Jul 4 20:48:32 2018   pooja   Update   Cameras   Medm screen for GigE

(Gautam, Pooja)

Aim: To develop medm screen for GigE.

Gautam helped me set up the medm screen through which we can interact with the GigE camera. The steps adopted are as follows:

(i) Copied CUST_CAMERA.adl file from the location /opt/rtcds/userapps/release/cds/common/medm/ to /opt/rtcds/caltech/c1/medm/MISC/.

(ii) Made the following changes by opening CUST_CAMERA.adl in text editor.  

  • Changed the name of file to "/cvs/cds/rtcds/caltech/c1/medm/MISC/CUST_CAMERA.adl"
  • Replaced all occurrences of "/ligo/apps/linux-x86_64/camera/bin/" with "/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/" and "/ligo/cds/$(site)/$(ifo)/camera/" with "/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/"

(iii) Added this .adl file as a drop-down menu 'GigE' in the VIDEO/LIGHTS section of the sitemap (circled in Attachment 1), i.e. opened the Resource Palette of VIDEO/LIGHTS, clicked on Label/Name/Args & defined the macros as CAMERA=C1:CAM-ETMX,CONFIG=C1-CAM-ETMX in the Arguments box of the Related Display Data dialog box (circled in Attachment 2) that appears. In the Related Display Data dialog box, the Display Label is given as GigE and the Display File as ./MISC/CUST_CAMERA.adl

(iv) All the channel names can be found in Jigyasa's elog https://nodus.ligo.caltech.edu:8081/40m/13023

(v) Since the slider (circled in Attachment 3) for pixel sum was not moving, changed the high limit value to 10000000 in PV Limits dialog box. This value is set such that the slider reaches the other end on setting the exposure time to maximum.

(vi) Set the Snapshot channel C1:CAM-ETMX_SNAP to off (very important!). Otherwise we cannot interact with the camera.

(vii) The GigE camera gstreamer client is run in a tmux session.

Now we can change the exposure time and record a video by specifying the filename and its location using the medm screen. However, while recording the video, the gstreamer video launcher of the GigE stops or gets stuck.

Attachment 1: sitemap.png
Attachment 2: GigE_macros.png
Attachment 3: CUST_CAMERA.png
14053   Wed Jul 11 16:50:34 2018   pooja   Update   Cameras   Update in developing neural networks

Aim: To develop a neural network that resolves mirror motion from video.

I created a python code to find the combination of hyperparameters that trains the neural network. The code (nn_hyperparam_opt.py) is in the github repo and has been running on the cluster for a few days. In the meanwhile I have tried some combinations of hyperparameters by hand.

These give a low loss value of approximately 1e-5, but there is a large error bar on the loss value, since it fluctuates a lot even after 1500 epochs. This is unclear.

Input: 64*64 image frames of simulated video by applying beam motion sine wave of frequency 0.2Hz and at 10 frames per sec. This input data is given as an hdf5 file.

Train : 100 cycles,  Test: 300 cycles, Optimizer = Nadam (learning rate = 0.001)

Model topology: 256 (selu) -> 128 (selu) -> 1 (linear)

Case 1: batch size = 48, epochs = 1000, loss function = mean squared error

Plots of output predicted by neural network (NN) & input signal has been shown in 1st graph & variation in loss value with epochs in 2nd graph.

Case 2: batch size = 32, epochs = 1500, loss function = mean squared logarithmic error

Plots of output predicted by neural network (NN) & input signal has been shown in 3rd graph & variation in loss value with epochs in 4th graph.

 

 

 

Attachment 1: graphs.pdf
14070   Fri Jul 13 23:23:49 2018   pooja   Update   Cameras   Update in developing neural networks

Aim: To develop a neural network that resolves mirror motion from video.

I tried to reduce the overfitting problem in the previous neural network by reducing the number of nodes and layers, and by varying the learning rate and beta factors (exponential decay rates of the moving first and second moments) of the Nadam optimizer, assuming an error of 5% is reasonable.

Input:

32*32 image frames (converted to 1D arrays, with pixel values of 0 to 255 normalized) of a simulated video made by applying a sine signal that moves the beam spot in pitch at a frequency of 0.2 Hz, at 10 frames per second.

Total: 300 cycles ,           Train: 60 cycles,    Validation: 90 cycles,    Test: 150 cycles

Model topology: Input -> Hidden layer of 4 nodes (selu) -> Output layer of 1 node (linear)

Batch size = 32, Number of epochs = 128, loss function = mean squared error

Optimizer: Nadam

Case 1:

Learning rate = 0.00001,    beta_1 = 0.8 (default value in Keras = 0.9),  beta_2 = 0.85 (default value in Keras = 0.999)

Plot of predicted output by neural network, applied input signal & residual error given in 1st attachment.

Case 2:

Changed the number of nodes in the hidden layer from 4 to 8. All other parameters are the same.

These plots show that when the residual error increases, the output of the neural network basically has a smaller amplitude than the applied signal. This kind of training error is unclear to me.

When the beta parameters of the optimizer are changed farther from 1, the error increases.

Attachment 1: nn_simulation_2_nodes4_lr0p00001_beta1_0p8_beta2_0p85.pdf
Attachment 2: nn_simulation_2_nodes8_lr0p00001_beta1_0p8_beta2_0p85.pdf
14089   Thu Jul 19 18:09:17 2018   pooja   Update   Cameras   Update in developing neural networks

Aim: To develop a neural network that resolves mirror motion from video.

Case 1:

Input: Simulated video of beam spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz and amplitude ratios relative to the frame size of 0.1, 0.04, 0.05, 0.08.

The data has been split into train, validation and test datasets and I tried training on neural network with the same model topology & parameters as in my previous elog (https://nodus.ligo.caltech.edu:8081/40m/14070)

The output of the NN and the residual error are shown in Attachment 1. This NN model gives a large error. So I think we have to increase the number of nodes and the learning rate so that we get a lower error value with a single-sine-wave simulated video (but without overfitting), and then try training on a linear combination of sine waves.

Case 2 :

Normalized the target sine signal of the NN so that it ranges from -1 to 1, and then trained the same neural network as in my previous elog on the simulated video created using a single sine wave. This gave a comparatively lower error (shown in Attachment 2). But if we train using this network, we can only get the frequency of the test mass motion; we can't resolve the amount by which the test mass moves. So I'm unclear about whether we can use this.

Attachment 1: nn_simulation_mlt_sine_nodes4_lr0p00001_beta1_0p8_beta2_0p85_marked.pdf
Attachment 2: nn_simulation_2_nodes4_target-1to1_marked.pdf
14097   Sun Jul 22 14:01:07 2018   pooja   Update   Cameras   Developing neural networks on simulated video

Aim: To develop a neural network that resolves mirror motion from video.

Since the error was high for the same input as in my previous elog (http://nodus.ligo.caltech.edu:8080/40m/14089), I modified the network topology by tuning the number of nodes, layers and learning rate so that the model fitted the sum of 4 sine waves efficiently, saved the weights of the final epoch, and then in a different program loaded the saved weights & tested on a simulated video produced by moving the beam spot from the centre of the image by a sum of 4 sine waves whose frequencies and amplitudes change with time.

Input: Simulated video of beam spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz and amplitude ratios relative to the frame size of 0.1, 0.04, 0.05, 0.08. This is divided into train (0.4), validation (0.1) and test (0.5).

Model topology: Input -> Hidden layer of 8 nodes (selu) -> Output layer of 1 node (linear)

Batch size = 32, Number of epochs = 128, loss function = mean squared error

Optimizer: Nadam ( learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)

Normalized the target sine signal of NN by dividing by its maximum value.

The plot of the predicted output of the neural network, the applied input signal & the residual error is given in the 1st attachment. The weights of the model in the final epoch have been saved to an h5 file and then loaded & tested with simulated data of 4 sine waves whose amplitudes and frequencies change with time from their initial values by random uniform noise ranging from 0 to 0.05. The plot of the predicted output of the neural network, the target signal of the sine waves & the residual error is given in the 2nd attachment. The actual signal can be obtained from the predicted output of the NN by multiplying by the normalization constant used before. However, even though the network fits the training & validation sets efficiently, it gives a comparatively large error on the test data of varying amplitude & frequency.

Gautam suggested to try training on this noisy data of varying amplitudes and frequencies. The results using the same model of NN is given in Attachment 3. It was found that tuning the number of nodes, layers or learning rate didn't improve fitting much in this case.

 

 

Attachment 1: nn_simulation_2_normalized_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
Attachment 2: nn_simulation_normalizedtarget_128epochs_mult_sin_load_wt_varyingtest_nodes8_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
Attachment 3: nn_simulation_2_normalized_varying_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
14100   Tue Jul 24 06:11:50 2018   rana   Update   Cameras   Developing neural networks on simulated video

This looks like good progress. Instead of fixed sines or random noise, you should now generate a time series for the motion which is random noise, but with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use an inverse FFT to get the time series from the open loop OL spectra (being careful about edge effects).
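A minimal sketch of that colored-noise generation (a generic recipe: shape random phases by the measured amplitude spectral density and inverse-FFT back to the time domain; the frequency grid, sampling rate and interpolation choices are placeholders):

import numpy as np

def timeseries_from_asd(freqs, asd, fs, duration):
    """Generate a random time series whose ASD follows asd(freqs).

    freqs, asd : measured spectrum (e.g. ETM pitch optical-lever ASD), units X/sqrt(Hz)
    fs         : desired sampling rate [Hz]
    duration   : desired length [s]
    """
    n = int(fs * duration)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    # Interpolate the measured ASD onto the FFT frequency grid
    asd_interp = np.interp(f, freqs, asd, left=asd[0], right=asd[-1])
    # Random phases; amplitude set by the one-sided ASD convention
    phases = np.exp(2j * np.pi * np.random.rand(len(f)))
    spectrum = asd_interp * np.sqrt(fs * n / 2.0) * phases
    spectrum[0] = 0.0                      # no DC offset
    return np.fft.irfft(spectrum, n=n)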

Quote:

Aim: To develop a neural network that resolves mirror motion from video.

14101   Tue Jul 24 09:47:51 2018   gautam   Update   Cameras   Developing neural networks on simulated video

I was thinking a little more about the way we are training the network for the current topology - because the network has no recurrent layers, I guess it has no memory of past samples, and so it doesn't have any sense of the temporal axis. In fact, Keras by default shuffles the training data you give it randomly so the time ordering is lost. So the training amounts to requiring the network to identify the center of the Gaussian beam and output that. So in the training dataset, all we need is good (spatial) coverage of the area in which the spot is most likely to move? Or is the idea to develop some tools to generate video with spot motion close to that on the ETM in lock, so that we can use it with a network topology that has memory? 

Quote:

This looks like good progress. Instead of fixed sines or random noise, you should generate now a time series for the motion which is random noise but with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use inverse FFT to get the time series from the open loop OL spectra (being careful about edge effects)

14114   Sun Jul 29 23:15:34 2018   pooja   Update   Cameras   Developing CNN

Aim: To develop a convolutional neural network that resolves mirror motion from video.

Input: The previous simulated video of beam spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz and amplitude ratios relative to the frame size of 0.1, 0.04, 0.05, 0.08, where random uniform noise ranging up to 0.05 has been added to the amplitudes and frequencies. This is divided into train (0.4), validation (0.1) and test (0.5).

Model topology:

  • Number of filters = 2
  • Kernel size = 2
  • Size of pooling window = 2
  • Conv/pool output -> Dense layer of 4 nodes (selu) -> Output layer of 1 node (linear)

Batch size = 32, Number of epochs = 128, loss function = mean squared error

Optimizer: Nadam ( learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)

Plots of CNN output & applied signal given in Attachment 1. The variation in loss value with epochs given in Attachment 2.

This needs to be further analysed by increasing the random uniform noise over the pixels and by training the CNN on simulated data with varying amplitudes and frequencies of the sine waves.
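For reference, a minimal Keras sketch of the CNN topology listed above (filter count, kernel/pool sizes, dense layers, loss and optimizer as described; the input shape assumes 64*64 single-channel frames, the convolution-layer activation is not stated above and is left at the default, and keyword names may differ slightly between Keras versions):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.optimizers import Nadam

model = Sequential([
    Conv2D(2, kernel_size=2, input_shape=(64, 64, 1)),   # 2 filters, 2x2 kernel
    MaxPooling2D(pool_size=2),
    Flatten(),
    Dense(4, activation='selu'),
    Dense(1, activation='linear')        # predicted (normalized) mirror motion
])
model.compile(loss='mean_squared_error',
              optimizer=Nadam(learning_rate=1e-5, beta_1=0.8, beta_2=0.85))
model.summary()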

Attachment 1: conv_nn_varying_freq_amp_1.pdf
Attachment 2: conv_nn_varying_freq_amp_2.pdf
14632   Thu May 23 08:51:30 2019   Milind   Update   Cameras   Setting up beam spot simulation

I have done the following thus far since elog #14626:

Simulation:

  1. Cleaned up Pooja's code for simulating the beam spot. Added extensive comments and made the code modular. Simulated the Gaussian beam spot to exhibit 
    1. Horizontal motion
    2. Vertical motion
    3. motion along both x and y directions:
  2. The motion exhibited in any direction in the above videos is a combination of four sinusoids at frequencies 0.2, 0.4, 0.1, 0.3 Hz, with amplitudes that can be found as defaults in the script ((0.1, 0.04, 0.05, 0.08)*64 for these simulations); a minimal sketch of this frame generation appears after this list. The variation looks as shown in Attachment 1. For the sake of convenience I have created the above video files with only a hundred frames (fps = 10, total time ~10 s), and this took around 2.4 s to write. Longer files need much longer. As I wish to simply perform image processing on these frames immediately, I don't see the need to obtain long video files right away.
  3. I have yet to add noise at the image level and randomness to the motion itself; I intend to do that right away. Currently, video 3 will show you that even though the time variation of the coordinates of the center of the beam is sinusoidal, the motion of the beam spot itself is along a line, as both the x and y motions have the same phase. I intend to add the feature of a phase between the motion of the x and y coordinates of the center of the beam, but it doesn't seem all too important right now. The white margins in the videos generated are annoying and make tracking the beam spot itself slightly difficult as they introduce an offset (see below). I shall fix them later if simple cropping doesn't do the trick.
  4. I have yet to push the code to git. I will do that once I've incorporated the changes in (3).
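A compact sketch of how such frames can be generated is below (a re-derivation of the idea, not the cleaned-up script itself; the spot width sigma and the 8-bit amplitude are illustrative defaults):

import numpy as np

def beam_frame(x0, y0, n=64, sigma=4.0, amplitude=255.0):
    """Render one frame: a Gaussian spot of width sigma centered at (x0, y0) on an n x n grid."""
    y, x = np.mgrid[0:n, 0:n]
    return amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

# Vertical motion as a sum of four sinusoids (frequencies in Hz, amplitudes in pixels)
fps, duration, n = 10, 10, 64
t = np.arange(0, duration, 1.0 / fps)
freqs = [0.2, 0.4, 0.1, 0.3]
amps = np.array([0.1, 0.04, 0.05, 0.08]) * n
y_center = n / 2 + sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))

frames = np.array([beam_frame(n / 2, yc, n=n) for yc in y_center])
print(frames.shape)   # (100, 64, 64)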

Circle detection:

  1. If the beam spot intensity variation is indeed Gaussian (as it definitely is in the simulation), then the contours are circular. Consequently, centroid detection of the beam spot reduces to detecting these contours and then finding their centroid (center). I tried this for a simulated video I found in elog post 14005. It was a quick implementation of the following sequence of operations: threshold (arbitrarily set to 127), contour detection (video dependent and needs to be done manually), centroid determination from the required contour (see the sketch after this list). It's evident that the beam spot is being tracked (green circle in the video). Check Attachment #2 for the results. However, no other quantitative claims can be made in the absence of other data.
  2. Following this, Gautam pointed me to a capture in elog post 13908. Again, the steps mentioned in (1) were followed and the results are presented in Attachment #3. However, this time the contour is no longer circular but distorted. I didn't pursue this further; this test was just done to check that this approach does extend (even if not seamlessly) to real data. I'm really looking forward to trying this with real data.
  3. So far, the problem has been that there is no source data to compare the tracked centroid with. That ought to be resolved with the use of simulated data that I've generated above. As mentioned before, some matplotlib features such as saving with margins introduce offsets in the tracked beam position. However, I expect to still be able to see the same sinusoidal motion. As a quick test, I'll obtain the fft of the centroid position time series data and check if the expected frequencies are present.
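A minimal sketch of the threshold -> contour -> centroid sequence (choosing the largest contour is a simplification; as noted above, the relevant contour in real video may need to be selected manually, and the threshold value is just the arbitrary 127 used here):

import cv2
import numpy as np

def spot_centroid(frame, threshold=127):
    """Track the beam spot: threshold, find the largest contour, return its centroid (x, y)."""
    gray = frame if frame.ndim == 2 else cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = np.uint8(np.clip(gray, 0, 255))
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    # findContours returns (contours, hierarchy) in OpenCV 4, (img, contours, hierarchy) in OpenCV 3
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m['m00'] == 0:
        return None
    return m['m10'] / m['m00'], m['m01'] / m['m00']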

I will wrap up the simulation code today and proceed to going through Gabriele's repo. I will also test if the contour detection method works with the simulated data. During our meeting, it was pointed out that when working with real data, care has to be taken to synchronize the data with the video obtained. However, I wish to put off working on that till later in the pipeline as I think it doesn't affect the algorithm being used. I hope that's alright (?).

 

Attachment 1: variation.pdf
Attachment 2: contours_simulated.mp4
Attachment 3: contours_real.mp4
14633   Thu May 23 10:18:39 2019   Kruthi   Update   Cameras   CCD calibration

On Tuesday, I tried reproducing Pooja's measurements (https://nodus.ligo.caltech.edu:8081/40m/13986). The table below shows the values I got. Pictures of the LED circuit, schematic and the setup are attached. The power meter readings fluctuated quite a bit for input voltages (Vcc) > 8 V, so I expect a maximum uncertainty of 50 µW, to be on the safe side. Though the readings at lower input voltages didn't vary much over time (variation < 2 µW), I don't know how reliable the Ophir power meter is at such low power levels. The optical power output of the LED was linear for input voltages from 10 V to 20 V. I'll proceed with the CCD calibration soon.

Input Voltage (Vcc) in volts Optical power
0 (dark reading) 1.6 nW
2 55.4 µW
4 215.9 µW
6 0.398 mW
8 0.585 mW
10 0.769 mW
12 0.929 mW
14 1.065 mW
16 1.216 mW
18 1.330 mW
20 1.437 mW
22 1.484 mW
24 1.565 mW
26 1.644 mW
28 1.678 mW

Attachment 1: setup.jpeg
Attachment 2: led_circuit.jpeg
Attachment 3: led_schematic.pdf
14635   Thu May 23 15:37:30 2019   Milind   Update   Cameras   Simulation enhancements and performance of contour detection
  1. Implemented image level noise for simulation. Added only uniform random noise.
  2. Implemented addition of uniform random noise to any sinusoidal motion of beam spot.
  3. Implemented motion along y axis according to data in "power_spectrum" file.
  4. Implemented simulation of random motion of the beam spot in both x and y directions (done previously by Pooja, but a cleaner version).
  5. Created a video file of 10 s with the motion of the beam spot along the y direction as given by Attachment #1. This was created by mixing four sinusoids at different amplitudes (frequencies (0.1, 0.2, 0.4, 0.8) Hz, amplitudes as fractions of N = 64: (0.1, 0.09, 0.08, 0.09)). FPS = 10. The total number of frames = 100 for the sake of convenience. See Attachment #5.
  6. Following this, I used the thresholding (threshold = 127, chosen arbitrarily), contour detection and centroid computation sequence (see Attachment #6 for results) to obtain the plot in Attachment 2 for the predicted motion of the y coordinate. As is evident, the centering and scale of the values obtained are off, and I still haven't figured out how to precisely convert from one to the other.
  7. Consequently, as a workaround, I simply normalised the values corresponding to each plot by subtracting the mean in each case and dividing the resulting series of values by its maximum. This resulted in the plots in Attachments 3 and 4, which show the normalised values of the y coordinate variation and the error between the actual and predicted values (both between 0 and 1), respectively.

Things yet to be done:

Simulation:

  1. I will implement the mean square error function to compute the relative performance as conditions change.
  2. I will add noise both to the image and to the motion (meaning introduce some randomness in the motion) to see how the performance, determined by both the curves such as the ones below and the mean square error, changes.
  3. Following this, I will vary the standard deviation of the beam spot along X and Y directions and try to obtain beam spot motion similar to the video in Attachment #2 of elog post 14632.
  4. Currently, I have made no effort to carefully tune the parameters associated with contour detection and threshold and have simply used the popular defaults. While this has worked admirably in the case of the simple simulated videos, I suspect much more tweaking will be needed before I can use this on real data.
  5. It is an easy step to determine the performance of the algorithm for random, circular and other motions of the beam spot. However, I will defer this till later as I do not see any immediate value in this.
  6. Determine noise threshold. In simulation or with real data: obtain a video where the beam spot is ideally motionless (easy to do with simulated data) and then apply the above approach to the video and study the resulting predicted motion. In simulation, I expect the predictions for a motionless beam spot video (without noise) to be constant. Therefore, I shall add some noise to the video and study the prediction of the algorithm.
  7. NOTE: the above approach relies on some previous knowledge of what the video data will look like. This is useful in determining which contours to ignore, if any (like the four bright regions at the corners in this video).

Real data:

  1. Obtain real data and evaluate whether the algorithm is successful in determining contours which can be used to track the beam spot.
  2. Once the kind of video feed this will be used on is decided, use the data generated from such a feed to determine what the best settings of hyperparameters are and detect the beam spot motion.
  3. Synchronization of data stream regarding beam spot motion and video.
  4. Determine the calibration: angular motion of the optic, to beam spot motion on the camera sensor, to the video, to the pixel mapping in the frames being processed.

Other approaches:

  1. Review work done by Gabriele with CNNs, implement it and then compare performance with the above method.
Attachment 1: actual_motion.pdf
actual_motion.pdf
Attachment 2: predicted_motion.pdf
predicted_motion.pdf
Attachment 3: normalised_comparison.pdf
normalised_comparison.pdf
Attachment 4: residue_normalised.pdf
residue_normalised.pdf
Attachment 5: simulated_motion1.mp4
Attachment 6: elog_22may_contours.mp4
  14638   Sat May 25 20:29:08 2019 MilindUpdateCamerasSimulation enhancements and performance of contour detection
  1. I used the same motion as defined in the previous elog and gradually added noise to the images. The noise added was uniform random noise - a 2-dimensional array of random numbers between 0 and a predetermined maximum (noise_amp); a minimal sketch of this is given after this list. The previous elog shows the variation of the y coordinate. Here I am also uploading the effect of noise on the error in the prediction of the x coordinate. As a reminder, the motion of the beam spot center was purely vertical. Attachment #1 is the error for noise_amp = 0, #2 for noise_amp = 20 and #3 for noise_amp = 40. While Attachment #3 does give the impression of a large error, this is not really the case: without normalization, each peak corresponds to a deviation of one pixel about the central value, see Attachment #4 for reference.
  2. While the error does increase marginally, adding noise has no significant effect on the prediction of the y coordinate of the centroid, as Attachment #5 shows at noise_amp = 40.
  3. I am currently running an experiment to obtain the variation of mean square error with different noise amplitudes and will put up the plots soon. Further, I shall vary the resolution of the image frames and the standard deviation of the Gaussian beam with time, try to obtain simulations very close to the real data available, and then determine the performance of the algorithm.
  4. The following videos will serve as a quick reference for what the videos and detection look like at
    1. noise_amp = 20
    2. noise_amp = 40
  5. I also performed a quick experiment to see how low the amplitude of motion could be before the algorithm failed to detect it, and found this to occur about two orders of magnitude below the values used in the previous post. This is a line of thought I intend to pursue more carefully; I am looking into how OpenCV and python handle images with floats as coordinates and will provide more details about this trial soon. This should give us an idea of the smallest beam spot motion that can be resolved.
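For reference, a minimal sketch of the image-level noise described in point 1, assuming 8-bit frames; noise_amp matches the name used above, the rest of the names are mine.

    import numpy as np

    def add_uniform_noise(frame, noise_amp, rng=None):
        # Add uniform random noise in [0, noise_amp) to an 8-bit frame and clip back to [0, 255].
        rng = rng or np.random.default_rng()
        noise = rng.uniform(0.0, noise_amp, size=frame.shape)
        return np.clip(frame.astype(float) + noise, 0, 255).astype(np.uint8)

    # Example: noise_amp = 40, as in Attachment #3
    clean = np.zeros((480, 720), dtype=np.uint8)
    noisy = add_uniform_noise(clean, noise_amp=40)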

Attachment 1: residue_normalised_x.pdf
residue_normalised_x.pdf
Attachment 2: residue_normalised_x.pdf
residue_normalised_x.pdf
Attachment 3: residue_normalised_x.pdf
residue_normalised_x.pdf
Attachment 4: predicted_motion_x.pdf
predicted_motion_x.pdf
Attachment 5: normalised_comparison_y.pdf
normalised_comparison_y.pdf
  14639   Sun May 26 21:47:07 2019 KruthiUpdateCamerasCCD Calibration

 

On Friday, I tried calibrating the CCD with the following setup. Here, I present the expected values of scattered power (P_s) at θ_s = 45°, where θ_s is the scattering angle (refer to the figure). The LED box has a hole with an aperture of 5mm and the LED is placed approximately 7mm from the hole. Thus the aperture angle is 2·tan⁻¹(2.5/7) ≈ 40°. Using this, the spot size of the LED light at a distance 'd' was estimated. The width of the LED holder/stand (approx 4") puts a constraint on the lowest possible θ_s. At this lowest possible θ_s, the distance of the CCD/Ophir from the screen is given by sqrt(d² + (2")²). This was taken as the imaging distance for the other angles as well.

In the table below, P_i is taken to be 1.5mW, and P_s and Ω were calculated using the following equations (a numerical cross-check is sketched after the table):

  Ω = (CCD sensor area) / (imaging distance)²,        P_s = (1/π) · P_i · Ω · cos(45°)

 d (cm)   Estimated spot diameter (cm)   Lowest possible θ_s (deg)   CCD/Ophir-screen distance (cm)   Ω (sr)    Expected P_s at θ_s = 45° (µW)
  1.0            1.2                            78.86                           5.2                    0.1036             34.98
  2.0            2.0                            68.51                           5.5                    0.0259              8.74
  3.0            2.7                            59.44                           5.9                    0.0115              3.88
  4.0            3.4                            51.78                           6.5                    0.0065              2.19
  5.0            4.1                            45.45                           7.1                    0.0041              1.38
  6.0            4.9                            40.25                           7.9                    0.0029              0.98
  7.0            5.6                            35.97                           8.6                    0.0021              0.71
  8.0            6.3                            32.42                           9.5                    0.0016              0.54
  9.0            7.1                            29.44                          10.3                    0.0013              0.44
 10.0            7.8                            26.93                          11.2                    0.0010              0.34
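As a quick numerical cross-check of the last column, here is a minimal sketch that recomputes P_s from the Ω values in the table (variable names are mine):

    import numpy as np

    # P_s = (1/pi) * P_i * Omega * cos(theta_s), with P_i and Omega taken from the table above
    P_i = 1.5e-3                     # incident power [W]
    theta_s = np.deg2rad(45.0)       # scattering angle [rad]
    omega = np.array([0.1036, 0.0259, 0.0115, 0.0065, 0.0041,
                      0.0029, 0.0021, 0.0016, 0.0013, 0.0010])   # solid angles [sr]
    P_s = P_i * omega * np.cos(theta_s) / np.pi                  # scattered power [W]
    print(np.round(P_s * 1e6, 2))    # in uW: 34.98, 8.74, 3.88, 2.19, 1.38, 0.98, 0.71, 0.54, 0.44, 0.34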

On measuring the scattered power (P_s) with the Ophir power meter, I got values of the same order as the expected values in the table above. As Gautam suggested, we could use a photodiode to detect the scattered power, since it would offer better precision, or we could calibrate the power meter using the method mentioned in Johannes's post: https://nodus.ligo.caltech.edu:8081/40m/13391.

 

Attachment 1: CCD_calibration_setup.png
CCD_calibration_setup.png
  14644   Fri May 31 01:38:21 2019 KruthiUpdateCamerasTelescope

[Kruthi, Milind]

Yesterday, we were able to capture some images of objects at a distance of approx 60cm (see the attachment), with the GigE mounted onto the telescope. I think Johannes had used it earlier to image the ETMX (https://nodus.ligo.caltech.edu:8081/40m/13375). His elog entry doesn't say anything about the focal lengths of the lenses that he had used, and the link to the python code he had used to calculate the lens solution wasn't working. After Gautam fixed it, I took a look at it: he used 150mm (front lens) and 250mm (back lens) as the focal lengths for the calculation. Using the lens formula and an image of a nearby light source, with a very rough measurement, I found the focal lengths to be around 14 cm and 23 cm. So I'm assuming that the lenses in the telescope have the same focal lengths as in his code, i.e. 150mm and 250mm.

Attachment 1: telescope_mug_image.pdf
telescope_mug_image.pdf
  14646   Mon Jun 3 16:40:48 2019 ranaUpdateCamerasTelescope

no BMP files crying

  14649   Mon Jun 3 21:03:54 2019 MilindUpdateCamerasSteps to interact with GigE

The following summarizes the steps to set up and interact with a GigE camera.

Launching the PylonViewerApp:

  1. Open a new terminal using Ctrl + Alt + T on the keyboard.
  2. Launch the app using the command pylon.

Using setup python scripts to interact with the GigE (a summary of the steps listed here and here)

  1. Connect the GigE camera to the ethernet cable and record its IP address. If the IP address is not printed on the GigE, launch the PylonViewerApp and navigate to the "Tools" dropdown menu and select "pylon IP configurator" to be presented with a list of all connected cameras and their IP addresses.
  2. To simply observe the camera feed, open a new terminal and run the following commands:
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python camera_server.py -c C1-CAM-ETMX.ini  (only one config file is present currently and more will be added as more cameras are set up. The "Camera IP" in the  .ini file must match that determined in step 1). This starts the camera server.
  3. Open a new tab (Ctrl + Shift + T on the keyboard) in the terminal. You should still be in the same directory as navigated to in step 2.1. Run the following command.
    1. python camera_client.py -c C1-CAM-ETMX.ini
  4. This should bring up a feed from the camera. Close at will.
  5. To record a video file, repeat steps 1 and 2. Open a new tab as described in step 3. Then run the following command:
    1. python camera_client_movie.py -c C1-CAM-ETMX.ini
  6. Enter the full path to the file where you wish to save the movie in the prompt that appears. Use ./your_file_name_here.avi to save the video in the working directory. Press Ctrl + C to stop recording. The recording can be played by navigating to the location where it is stored and running vlc your_file_name_here.avi.
  7. To adjust the exposure setting of the camera, open a new terminal and run the command sitemap. This should bring up the medm display in Attachment #1. Click on the Video/Lights button highlighted in red and select GigE. Adjust the exposure value in the next window using the slider before starting the server (step 2 above); adjusting the slider once the server is started causes the program to freeze. Also set the Snapshot channel C1:CAM-ETMX_SNAP to off, as mentioned in elog 14037. (A sketch that chains steps 2 and 3 together into one script is given after this list.)
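As a rough illustration of what the automatic script mentioned under "Upcoming updates" might look like, here is a minimal, untested sketch that just chains steps 2 and 3 above; the directory and config file name are the ones used above, while everything else (sleep time, process handling) is my own assumption.

    #!/usr/bin/env python
    # Launch the GigE camera server and then the live-view client (steps 2 and 3 above).
    import os
    import subprocess
    import time

    GIGE_DIR = "/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon"
    CONFIG = "C1-CAM-ETMX.ini"

    os.chdir(GIGE_DIR)
    server = subprocess.Popen(["python", "camera_server.py", "-c", CONFIG])
    time.sleep(5)                       # give the server a moment to come up
    try:
        subprocess.call(["python", "camera_client.py", "-c", CONFIG])
    finally:
        server.terminate()              # stop the server when the client window is closed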

 

Upcoming updates:

  1. Automatic script to run the above steps.
  2. Pre-determining the time duration of the recorded video.
  3. Obtaining snapshots.

 

Attachment 1: sitemap.pdf
sitemap.pdf
  14651   Tue Jun 4 00:11:45 2019 KruthiUpdateCamerasGigE setup

Chub and I are trying to figure out a way to co-mount GigE into the existing cylindrical enclosure. I'm attaching a picture of the current setup that is being used for imaging MC2. As of now, I have thought of 3 possible setups (schematics attached); but I don't know how feasible they are. Let us know if you have any other ideas.


Update: Setup 3 would require us to use the 52cm long enclosure. It has a long breadboard welded to it, which makes it very convenient, but the whole setup becomes quite heavy and it's not that safe to install such a heavy enclosure on top of the vacuum system. Also, aligning its components would be more complicated than in the other setups.

I decided to start with the simplest one, so I tried implementing setup 1. Fitting the analog camera in horizontally alongside the telescope turned out to be tricky: though I did manage to fit it in, it didn't leave any room to change the orientation of the beamsplitter. As Koji suggested, I'll try setup 2 next.

Attachment 1: MC2.pdf
MC2.pdf
Attachment 2: Setup_3.png
Setup_3.png
Attachment 3: Setup_1.png
Setup_1.png
Attachment 4: Setup_2.png
Setup_2.png
  14654   Tue Jun 4 22:24:45 2019 MilindUpdateCamerasSteps to interact with GigE

Figured out how to get/grab frames by looking at the pypylon documentation, as that turned out to be easier than modifying Jon's code (a minimal frame-grab sketch is given below). I am still not sure how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). I will figure that out tomorrow and make a script suitable for Kruthi's usage (obtain a bunch of images with different exposure times). I will also try to integrate the video saving and streaming code into this and have a neat little script set up asap.
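For reference, a minimal pypylon frame-grab sketch along the lines of the standard pypylon examples; note that the exposure node name depends on the camera model/firmware (e.g. ExposureTimeRaw or ExposureTimeAbs for the GigE cameras), so the commented-out line is an assumption.

    from pypylon import pylon

    # Connect to the first camera found and grab a single frame.
    camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
    camera.Open()

    # Exposure setting (node name is an assumption, see the note above):
    # camera.ExposureTimeRaw.SetValue(2000)

    result = camera.GrabOne(1000)        # timeout in ms
    if result.GrabSucceeded():
        frame = result.Array             # numpy array of pixel values
        print(frame.shape, frame.max())
    camera.Close()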

Quote:

Upcoming updates:

  1. Automatic script to run the above steps.
  2. Pre-determining the time duration of the recorded video.
  3. Obtaining snapshots.
  14655   Tue Jun 4 23:41:13 2019 gautamUpdateCamerasSteps to interact with GigE

caget/caput probably does the job.

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

  14656   Wed Jun 5 22:30:13 2019 MilindUpdateCamerasSteps to interact with GigE

Thanks! It does indeed do the trick! With that I was able to

  1. Obtain current exposure value using the terminal command caget C1:CAM-ETMX_EXP
  2. Set exposure value using the terminal command caput C1:CAM-ETMX_EXP <desired_exposure_value>

Further, a quick look at the camera server code in /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py revealed that the script expects a "Number of Snapshots" parameter under "Camera Settings" in the configuration file, i.e. in C1-CAM-ETMX.ini (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini), which wasn't present before. Adding this parameter to the config file allows one to take a snapshot using the medm screen. In fact, unlike what is described in this elog, I was able to start the server and client as described in elog 14649 and then obtain snapshots using the terminal command caput C1:CAM-ETMX_SNAP 1.

Quote:

caget/caput probably does the job.

Quote:

Still not sure about how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). 

 

  14657   Thu Jun 6 16:01:52 2019 MilindUpdateCamerasSteps to interact with GigE

[Koji, Milind]

 

Today I ran into the following errors:

  1. Inability to access the EPICS channels using the commands caget and caput, and consequently a blank medm screen (error in Attachment #1), while simultaneously running the code in camera_server.py (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py).
  2. Inability to run the camera_server.py code with an active medm screen; it failed with a "... failed to read <EPICS channel>" error.

Therefore, Koji and I took a look at it and, putting our faith in Gautam's hunch from elog 13023, we walked down to rack 1Y1 and keyed the crate. Following this, all the functionality previously described was restored! Koji then took a look at all the channels handled by this machine and bestowed upon me the permission to key the crate should I lose control of the GigE again.


Attachment 1: terminal_medm_error.pdf
terminal_medm_error.pdf
  14660   Sun Jun 9 21:24:00 2019 KruthiUpdateCamerasGigE setup

I managed to fit all the parts into the cylindrical enclosure without having to drill a hole in the enclosure to mount the analog camera (pictures attached); thanks to Koji for helping me find some fancy mechanical components (swivel post clamps, right angle post clamps and brackets). On Thursday, with Chub's help, I took a look at all the current analog camera positions with respect to the cylindrical enclosures. I think this setup gives me enough flexibility to align the components, as necessary, to be able to image the test masses/mirrors in all the cavities. I'll set it up for MC2 tomorrow.

 

Attachment 1: GigE_setup.jpg
GigE_setup.jpg
Attachment 2: GigE_setup_top_view.jpg
GigE_setup_top_view.jpg
  14661   Mon Jun 10 22:22:19 2019 MilindUpdateCamerasSteps to interact with GigE

Steps to take snapshots using GigE at different exposures [Instructions for Kruthi]:

  1. Set up C1-CAM-ETMX.ini (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini) appropriately. The parameter Number of Snapshots determines how many snapshots will be taken at any given exposure. Set Name Overlay, Time Overlay, Calculation Overlay, Calculations (if using very low exposure values) and Auto Exposure to False. Ensure that the IP address of the camera in use matches that in the configuration file.
  2. Launch a server using the following commands (as described in elog 14649)
    1. cd /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon
    2. python camera_server.py -c C1-CAM-ETMX.ini
  3. Open another terminal in the same directory and then run the following command
    1. python exposure_variation.py --minval <minval> --maxval <maxval> --step <step> where
      1. minval: lower bound of range of exposure values, defaults to 150
      2. maxval: upper bound of range of exposure values, defaults to 100000
      3. step: step size of variation in the range [minval, maxval], defaults to 2000

The python script takes in the above parameters and then takes snapshots, setting the exposure to values starting at minval and going up to maxval, incrementing by step each time. This uses a simple for loop and is nothing elaborate (a sketch of the idea is given below).
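To illustrate, a minimal sketch of what such a loop might look like, using the C1:CAM-ETMX_EXP and C1:CAM-ETMX_SNAP channels mentioned in the earlier entries; this is not the actual exposure_variation.py, and the settle time is my own assumption.

    import subprocess
    import time

    def sweep_exposures(minval=150, maxval=100000, step=2000, settle=1.0):
        # Step the camera exposure via EPICS and trigger a snapshot at each value.
        for exposure in range(minval, maxval + 1, step):
            subprocess.call(["caput", "C1:CAM-ETMX_EXP", str(exposure)])
            time.sleep(settle)                                    # let the new exposure take effect
            subprocess.call(["caput", "C1:CAM-ETMX_SNAP", "1"])   # snapshot via the EPICS channel

    if __name__ == "__main__":
        sweep_exposures()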


A few unrelated updates:

  1. On a side note, I installed the Sublime Text editor on rossa following the instructions at this site (check the 'install using yum' section). I have also installed miniconda, but did not set it up fully as I was in a rush and did not want to disturb any previously set up environment variables.
  2. I have cloned Gabriele's repository and am trying to get it to work on my system. As Gautam has pointed out, the end goal is to get things working on the lab machines, so I will share a .yml file with the necessary environment details upon completion.
  3. In a day or two, I will upload details of how I am going to construct the two learning tasks that Rana, Gautam and I discussed, including the use of simulated data for training in the absence of real data (until Kruthi is done setting up the GigE), which Gautam suggested I do to speed things up.
  14663   Tue Jun 11 00:25:05 2019 KruthiUpdateCamerasGigE setup

[Kruthi, Milind]

Today, with Milind's help, I installed the analog camera into the MC2 enclosure [picture attached], but it is not yet focused. We replaced the bulky angular bracket with a simple one; this saved a lot of space inside, and it's easier to align the other components now. I'll finish setting it up tomorrow.


Telescope design for MC2:  Instead of using two 3" long stackable lens tubes (SM2L30), we can use one 3" lens tube with an adjustable lens tube (SM2V10), as shown in the picture. This gives the flexibility to change the focal plane distance by 1" and also reduces the overall length of the telescope from 9 inches to 6-7 inches. I decided to use two 150mm biconvex lenses instead of a combination of 150mm and 250mm lenses, as the former results in a shorter focal plane distance for a given distance between the lenses.

Specifications of current telescope system (for future reference):

Focal length of lenses used:    150mm & 150mm
Distance between the lenses:    1cm - 2cm (wasn't able to make a more accurate measurement)

With the above telescope, assuming the MC2 mirror to be at a distance of approx 75cm, the focal plane distance will range from 7.9cm to 8.1cm (a quick thin-lens cross-check is sketched below). Using the adjustable lens tube, we can then make the fine adjustment.
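A minimal sketch of that thin-lens cross-check, assuming two ideal 150mm lenses, an object (the MC2 mirror) 75cm from the first lens, and the 1cm-2cm separation quoted above; all names are mine.

    # Thin-lens cross-check of the focal plane distance quoted above (all distances in mm).
    def image_distance(f, s_o):
        # Gaussian thin-lens equation 1/s_o + 1/s_i = 1/f; a negative s_o is a virtual object.
        return 1.0 / (1.0 / f - 1.0 / s_o)

    f1 = f2 = 150.0
    s_object = 750.0

    for sep in (10.0, 20.0):
        s_i1 = image_distance(f1, s_object)   # image formed by the first lens
        s_o2 = sep - s_i1                     # object distance for the second lens (virtual, hence negative)
        s_i2 = image_distance(f2, s_o2)
        print("separation %.0f mm -> focal plane %.1f cm behind the second lens" % (sep, s_i2 / 10.0))
    # prints ~8.1 cm for 10 mm separation and ~7.9 cm for 20 mm separation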

Attachment 1: MC2_analog_setup.jpg
MC2_analog_setup.jpg
Attachment 2: telescope.pdf
telescope.pdf
  14665   Wed Jun 12 02:15:50 2019 KruthiUpdateCamerasGigE setup

[Koji, Kruthi]

Yesterday, Koji helped me clean all the optics that are being used for the setup. We tried aligning the cameras with the previous configuration we had, but after connecting the analog camera cables there wasn't much room to align the beam splitter. Today, I tried a different configuration and tested the alignment of the analog camera, GigE, beam splitter and the mirror using a laser beam [pictures attached]. But MC2 isn't locked, so I couldn't test whether the whole setup is actually aligned with the mirror inside the vacuum.

Also, with this setup, just by using posts of different lengths with the middle 90º-post-clamp, we will be able to move all the components. This way, we can easily image the beam spot in all the cavities.

Attachment 1: MC2_GigE_setup.jpg
MC2_GigE_setup.jpg
  14666   Wed Jun 12 21:55:34 2019 KruthiUpdateCamerasGigE setup

I'm attaching a picture of the screen. I just positioned the enclosure by turning it a bit and I suppose we can see the mirror inside the vacuum now (the MC2 is still not locked). 

Quote:

[Koji, Kruthi]

Yesterday, Koji helped me clean all the optics that are being used for the setup. We tried aligning the cameras with the previous configuration we had, but after connecting the analog camera cables there wasn't much room to align the beam splitter. Today, I tried a different configuration and tested the alignment of analog camera, GigE, beam splitter and the mirror using a laser beam [pictures attached]. But the MC2 isn't locked to test if the whole setup is actually aligned with the mirror inside the vacuum. 

Also, with this setup, just by using posts of different lengths, we will be able to image the beam spot in all the cavities.

 

Attachment 1: MC2_analog_pic.jpg
MC2_analog_pic.jpg
  14667   Wed Jun 12 22:02:04 2019 MilindUpdateCamerasSimulation enhancements

Today, Rana asked me to work on improving the simulations based on the ideas we discussed last week. As of the previous elog, the simulation accommodated only

  1. Simulation of Gaussian beam spot.
  2. Arbitrary motion.

Today, I added the simulation of point scatterers.

What?

The image on the sensor (camera) is produced in roughly the following steps.

  1. Motion of the Gaussian beam on the optic (X, Y coordinates), which is what has been simulated so far.
  2. Reflection from the surface of the optic, which can be modeled using knowledge of the BRDF; this has not been included as of this elog, as I wish to do a little more reading before doing so.
  3. Reflection from point scatterers (dust particles burnt into the optic surface by the laser and so forth), which show up as peaks (impulses) in the TIS vs position plot. The laser beam is incident nearly normally on the optic and this behaviour is independent of the angle of observation. This is what has been added to the simulation.

How?

  1. Increased the frame resolution to 720 x 480.
  2. Defined an array of the same size and, at up to "num_scatter" random positions, set the values to numbers drawn randomly between 1 and "scatter_amp" + 1, where scatter_amp is non-negative.
  3. Multiplied the resulting array by the Gaussian beam (a minimal sketch of this is given after this list). The motivation was to imitate the bright specks seen on various camera feeds in the lab. Physically, this also implies normal incidence and normal observation, which is not the real case at all. I shall add these features in a day or two.
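For reference, a minimal sketch of steps 2-3 above; num_scatter and scatter_amp match the names used above, the rest are mine, and the choice of a background of 1 for the scatter map (so the beam is unchanged away from the scatterers) is my own reading of the description.

    import numpy as np

    def scatter_map(shape=(480, 720), num_scatter=50, scatter_amp=5.0, rng=None):
        # Multiplicative map of point scatterers: at up to num_scatter random pixels,
        # the value is a random number between 1 and scatter_amp + 1; elsewhere it is 1.
        rng = rng or np.random.default_rng()
        scatter = np.ones(shape)
        rows = rng.integers(0, shape[0], num_scatter)
        cols = rng.integers(0, shape[1], num_scatter)
        scatter[rows, cols] = 1.0 + scatter_amp * rng.random(num_scatter)
        return scatter

    # Example: apply to a Gaussian beam spot on a 720 x 480 frame (numpy shape is rows x columns)
    yy, xx = np.mgrid[0:480, 0:720]
    beam = np.exp(-((xx - 360)**2 + (yy - 240)**2) / (2 * 40.0**2))
    frame = beam * scatter_map()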

In Attachments #1, #2 and #3, I am attaching videos obtained by varying the scattering amplitude and the number of scattering points, in a (so far vain) attempt to reproduce this data. I shall work more on this simulation on Friday.

 


Scripting stuff:

  1. Previous elogs detail how to take GigE images at various exposure times. I am still waiting on Kruthi to use the script.
  2. Tomorrow I shall work on the scripting software to interact with the GigE and take video for a fixed duration, etc. I shall also begin working on a script to autolock the PMC, based on what Rana showed me on Monday. I will also take a look at the contents of this elog and try to pick up from there. I hope to make significant progress by the next lab meeting.

Neural network stuff:

GANs for simulation:

  1. Other than putting the physics into the simulation (i.e. the first portion of this elog), GANs can be trained to generate images similar to the original data. I am unfamiliar with training GANs and the various tricks that are used specifically for them, so I will do a bit of reading and make an update by Friday. As of now, the data I plan to use is this, and I will train using the GTX 1060 on my machine.

Networks for beam tracking:

  1. I will use the architectures suggested in this work with a few modifications, with the MSE loss function, the Adam optimizer and my local GPU for training.
Attachment 1: simulated_motion0.mp4
Attachment 2: simulated_motion0.mp4
Attachment 3: simulated_motion0.mp4
  14668   Thu Jun 13 14:28:46 2019 ranaUpdateCamerasGigE setup

don't need to lock - make sure the 4 OSEMs are centered on the camera field just as we have for the arm cavity mirrors

Quote:

I'm attaching a picture of the screen. I just positioned the enclosure by turning it a bit and I suppose we can see the mirror inside the vacuum now (the MC2 is still not locked). 

 
