ID | Date | Author | Type | Category | Subject
  13054 | Fri Jun 9 09:13:26 2017 | Steve | Update | Cameras | GigE camera lens with AR

We should move ahead with getting this lens from Edmund Optics (#67-717), which has R < 3% at 1064 nm.

The Computar M5018-SWIR is another option.

AR coatings with R < 1% over 500-1100 nm are expensive.

 

Quote:

50mm 1.8 lens with Basler camera at MC2 face with micro clamp 350617    Camera manuals plus

 

Attachment 1: coating_curve.pdf
  13070 | Fri Jun 16 18:21:40 2017 | jigyasa | Configuration | Cameras | GigE camera IP

One of the additional GigE cameras has been IP configured for use and installation. 

Static IP assigned to the camera- 192.168.113.152
Subnet mask- 255.255.255.0
Gateway- 192.168.113.2
 

  13074 | Tue Jun 20 14:58:08 2017 | Steve | Update | Cameras | GigE camera at ETMX

GigE can be connected to ethernet. AR coated 1064 f50 can arrive any day now.

Quote:

One of the additional GigE cameras has been IP configured for use and installation. 

Static IP assigned to the camera- 192.168.113.152
Subnet mask- 255.255.255.0
Gateway- 192.168.113.2
 

 

Attachment 1: ETMXgige.jpg
  13083 | Tue Jun 27 16:18:59 2017 | jigyasa | Update | Cameras | GigE camera at ETMX

The 50mm lens has arrived. (Delivered yesterday).

Also, the GigE has been wired and connected to the Martian network. Image acquisition is possible with Pylon.

Quote:

GigE can be connected to ethernet. AR coated 1064 f50 can arrive any day now.

 

  13089 | Fri Jun 30 11:08:26 2017 | jigyasa | Update | Cameras | GigE camera at ETMX
With Steve's help in getting the right depth of field and focusing on the test mass with the new AR-coated lens, and with Gautam's help in locking the arm, I tried my hand at adjusting the camera focus yesterday, and we were able to get some images of the IR beam with the green shutter on and off at different exposures. Since the CCD is at an angle to the optic, the exposure time had to be increased significantly (it was varied between 0.08 and 0.5 seconds) to capture bright images.
A few frames without the IR on and with the green shutter closed were captured.
These show the OSEM and the Oplev on the test mass. 
 
Steve's note: AR coated camera lens M5018-SW installed at ~40 degrees
                    Atm2: picture is taken through a dirty window
 
Quote:

Also, the GigE has been wired and connected to the Martian network. Image acquisition is possible with Pylon.

 

 

Attachment 1: PicturesETMX.pdf
Attachment 2: dirtyETMXwindow.jpg
  13091 | Fri Jun 30 15:25:19 2017 | jigyasa | Update | Cameras | GigE camera at ETMX

All thanks to Steve, we cleaned the view port on the ETMX on which the camera is installed, and with a little fine tuning of the focus of the camera, here's a really good image of the beam spot at 6 and 14 ms.

Quote:
Steve's note: AR coated camera lens M5018-SW installed at ~40 degrees

 

Attachment 1: Image__2017-06-30__15-10-05.pdf
Attachment 2: 14ms.pdf
  13092 | Fri Jun 30 16:03:54 2017 | jigyasa | Update | Cameras | GigE camera at ETMX

 

Quote:

All thanks to Steve, we cleaned the view port on the ETMX on which the camera is installed, and with a little fine tuning of the focus of the camera, here's a really good image of the beam spot at 6 and 14 ms.

Quote:
Steve's note: AR coated camera lens M5018-SW installed at ~40 degrees

 

 

Attachment 1: 14msexposure.png
  13098 | Thu Jul 6 11:58:28 2017 | jigyasa | Update | Cameras | HDR images of ETMX

I captured a few images of the beam spot on ETMX at 5ms, 10ms, 14ms, 50ms, 100ms, 500ms, 1000ms exposure and ran them through my python script for HDR images. Here's what I obtained. 
The resulting image is an improvement over the highly saturated images at say, 500ms and 1 second exposures. 
Additionally, I also included a colormapped version of the image. 
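For reference, a minimal sketch of one common way to combine such an exposure stack (this is a generic illustration, not necessarily the method used in the script above; the saturation threshold and array names are placeholders):

import numpy as np

def combine_hdr(frames, exposures, saturation=250):
    # frames: list of 2D arrays (same shape), one per exposure
    # exposures: list of exposure times in seconds
    num = np.zeros(frames[0].shape, dtype=float)
    den = np.zeros(frames[0].shape, dtype=float)
    for img, t in zip(frames, exposures):
        mask = img < saturation          # keep only unsaturated pixels
        num += np.where(mask, img / t, 0.0)
        den += mask
    return num / np.maximum(den, 1)      # average counts/second over usable exposures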

Attachment 1: ETMXHDRcolormap.png
Attachment 2: ETMXHDRimage.png
  13100 | Fri Jul 7 14:34:27 2017 | rana | Update | Cameras | HDR images of ETMX

I wonder how 'HDR' these images really are. Is there a quantitative way to check that we are really getting more bits? Also, how many bits does the PNG format allow for monochrome images? I worry that these elog images are already lossy.

 

  13118 | Sat Jul 15 01:28:53 2017 | jigyasa | Update | Cameras | BRDF Calibrations

This evening, Gautam helped me with setting up the apparatus for calibrating the GigE for BRDF measurements.
The SP table was chosen to set up the experiment and for this reason a few things including a laser and power meter (presumably set up by Steve) had to be moved around.

We initially started by setting up the Crysta laser with its power source (Crysta #2, 150-190 mW 1064 laser) on the SP table. The Ophir power meter was used to measure the laser power. We discovered that the laser was highly unstable as its output on the power meter fluctuated (kind of periodically) between 40 and 150 mW. The beam spot on the beam card also appeared to validate this change in intensity. So we decided to use another 1064 nm laser instead.
Gautam got the LightWave NPro laser from the PSL table and set it up on the SP table and with this laser the output as measured by the same power meter was quite stable.

We manually adjusted the power to around 150 mW. This was followed by setting up the half wave plate(HWP) with the polarizing beam splitter (PBS), which was very gently and precisely done by Gautam, while explaining how to handle the optics to me.
 On first installing the PBS, we found that the beam was already quite strongly polarized as there seemed to be zero transmission but a strong reflection.
With the HWP in place, we get a control over the transmitted intensity. The reflected beam is directed to a beam dump.
I have taken down the GigE(+mount) at ETMX and wired a spare PoE injector.
We tried to interface with the camera wirelessly through the wireless network extenders but that seems to render an unstable connection to the GigE so while a single shot works okay, a continuous shot on the GigE didn’t succeed.

The GigE was connected to the Martian via Ethernet cable and images were observed using a continuous shot on the Pylon Viewer App on Paola. 

We deliberated over the need of a beam expander, but it has been omitted presently. White printer paper is currently being used to model the Lambertian scatterer. So light scattered off the paper was observed at a distance of about 40 cm from the sample.
While proceeding with the calibrations further tonight, we realized a few challenges.

While the CCD is able to observe the beam spot perfectly well, measuring the actual power with the power meter seems to be tricky. The scattered power is quite low, so we can't actually see any spot using a beam card. As a result, we can't really tell whether we are capturing the entire beam spot on the active region of the power meter (placed at a distance of ~40 cm from the paper) or losing some light, all while ensuring that the power meter and the CCD are in the same plane.

We tried to think of some ways around that, the description of which will follow. Any ideas would be greatly appreciated.

Thanks a ton for all your patience and help Gautam! :) 

More to follow.. 

  13119 | Sat Jul 15 13:40:59 2017 | rana | Update | Cameras | BRDF Calibrations

Power meter only needed to measure power going into the paper not out. We use the BRDF of paper to estimate the power going out given the power going in.

  13120 | Sat Jul 15 16:19:00 2017 | gautam | Update | Cameras | Makeshift PyPylon

Some days ago, I stumbled upon this github page, by a grad student at KIT who developed this code as he was working with Basler GigE cameras. Since we are having trouble installing SnapPy, I figured I'd give this package a try. Installation was very easy, took me ~10mins, and while there isn't great documentation, basic use is very easy - for instance, I was able to adjust the exposure time, and capture an image, all from Pianosa. The attached is some kind of in-built function rendering of the captured image - it is a piece of paper with some scribbles on it near Jigyasa's BRDF measurement setup on the SP table, but it should be straightforward to export the images in any format we like. I believe the axes are pixel indices.

Of course this is only a temporary solution as I don't know if this package will be amenable to interfacing with EPICS servers etc, but seems like a useful tool to have while we figure out how to get SnapPy working. For instance, the HDR image capture routine can now be written entirely as a Python script, and executed via an MEDM button or something.

A rudimentary example file can be found at /opt/rtcds/caltech/c1/scripts/GigE/PyPylon/examples - some of the dictionary keywords to access various properties of the camera (e.g. Exposure time) are different, but these are easy enough to figure out.

 

Attachment 1: pyPylon_test.png
pyPylon_test.png
  13121 | Sun Jul 16 11:58:36 2017 | jigyasa | Update | Cameras | BRDF Calibrations

 

From what I understood from my reading [Large-angle scattered light measurements for quantum-noise filter cavity design studies (https://arxiv.org/abs/1204.2528)], we do the white paper test in order to calibrate for the radiometric response, i.e. the response of the CCD sensor to radiance: 'We convert the image counts measured by the CCD camera into a calibrated measure of scatter. To do this we measure the scattered light from a diffusing sample twice, once with the CCD camera and once with a calibrated power meter. We then compare their readings.'

But thinking about this further, if we assume that the BRDF remains unscaled and estimate the scattered power from the images, we get a calibration factor for the scattered power and the angle dependence of the scattered power!

Quote:

Power meter only needed to measure power going into the paper not out. We use the BRDF of paper to estimate the power going out given the power going in.

 

  13122 | Sun Jul 16 12:09:47 2017 | jigyasa | Update | Cameras | BRDF Calibrations

With this idea in mind, we can now take images of the illuminated paper at different scattering angles and assume the BRDF takes the constant value 1/π per steradian.

The scattered power is then P_s = BRDF · P_i · cos θ · Ω, where P_i is the incident power, Ω is the solid angle subtended by the camera, and θ is the scattering angle at which the measurement is taken. This must also equal the sum of pixel counts divided by the exposure time, multiplied by some calibration factor.

From these two equations we can obtain the calibration factor of the CCD; for further BRDF measurements, the pixel counts/exposure are scaled by this calibration factor.
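A minimal numerical sketch of this bookkeeping (all numbers below are placeholders, not measured values):

import numpy as np

P_i = 150e-3             # incident power on the paper [W] (placeholder)
theta = np.deg2rad(40)   # scattering angle of the camera line of sight (placeholder)
Omega = 1e-3             # solid angle subtended by the camera aperture [sr] (placeholder)
BRDF = 1.0 / np.pi       # assumed Lambertian BRDF [1/sr]

# Power expected at the camera from the Lambertian assumption
P_s = BRDF * P_i * np.cos(theta) * Omega            # [W]

# Measured quantities from the CCD image (placeholders)
pixel_sum = 2.5e7        # background-subtracted sum of counts in the spot
t_exp = 10e-3            # exposure time [s]

# Calibration factor defined by P_s = CF * pixel_sum / t_exp
CF = P_s * t_exp / pixel_sum
print("Calibration factor: %.3g W*s per count" % CF)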

Quote:

 

From what I understood from my reading [Large-angle scattered light measurements for quantum-noise filter cavity design studies (https://arxiv.org/abs/1204.2528)], we do the white paper test in order to calibrate for the radiometric response, i.e. the response of the CCD sensor to radiance: 'We convert the image counts measured by the CCD camera into a calibrated measure of scatter. To do this we measure the scattered light from a diffusing sample twice, once with the CCD camera and once with a calibrated power meter. We then compare their readings.'

But thinking about this further, if we assume that the BRDF remains unscaled and estimate the scattered power from the images, we get a calibration factor for the scattered power and the angle dependence of the scattered power!

Quote:

Power meter only needed to measure power going into the paper not out. We use the BRDF of paper to estimate the power going out given the power going in.

 

 

  13310 | Mon Sep 11 23:31:50 2017 | johannes | Update | Cameras | post-vent camera capture comparison

The latest captures of the test mass face cameras from before the unintended vent were taken on June 2nd, 2017. Only exposures for ITMYF, ETMYF, and ETMXF exist in /users/sensoray/SensorayCaptures/. I took new captures for those three after locking the arms and having the dither-alignment on for 5+ minutes (exposures were taken after turning the dithering off). The capture script is choking on ITMXF, saying the channel can't lock on. Maybe that's why there's also no reference image for it. Capturing QUAD3, which shows ITMXF in the lower right corner, works, but we don't have a capture for reference. I also recorded dark fields after closing the PSL shutter. Naturally, these don't subtract out as well for the three-month-old pictures, but it's actually not terrible, and qualitatively one can still compare the subtracted images.

Visually, ITMYF and ETMYF do not show a dramatic difference between then and now. ETMXF however, does. To get a numerical estimate for the difference in counts, I worked with the subtracted images and placed an aperture about 1.5x the size of the visible beam blob. I summed up the pixel values inside and subtracted the sum of the pixel values of an equally sized area from the upper left corner of the respective image, which looks free of subtraction artifacts and looks qualitatively similar to the background in the central region.

The pixel sum has gone up by about 50% between the exposures. I still have to do the same for the YARM optics but don't expect such a large discrepancy. Unfortunately we're missing those ITMYF exposures...

All pictures are organized in this format:

Pre-vent exposure   | Post-vent exposure
Pre-vent subtracted | Post-vent subtracted

 

ITMYF: [image grid]

ETMYF: [image grid]

ETMXF: [image grid]
Attachment 11: ETMXF_pre_sub.bmp
  13334 | Tue Sep 26 22:11:08 2017 | johannes | Update | Cameras | post-vent camera capture comparison

I configured the remaining GigE-Camera to work on the 40m network. We currently have 3 operational Basler cameras:

The 120gm's have been assigned the IPs 192.168.113.152  (was already configured) and 192.168.113.153 (freshly configured) and have been labeled accordingly. Note that it was not necessary to connect the out-of-the-box camera directly to a dedicated ethernet adapter whose IP was set manually to 169.254.0.XXX as pointed out in earlier posts - a few seconds after connecting the camera to the control room switch (with PoE adapter to power it) the camera showed up in the configuration software tool which is launched via

/opt/rtcds/caltech/c1/scripts/GigE/pylon5/bin/./IpConfigurator

and can be assigned a static IP.

We have a plethora of 2" tubes for the lens assembly, but not a great variety of focal lengths for 2" lenses. Present with the camera gear were two f=250 mm and one f=150 mm 2" lenses with a NIR broadband AR coating.

To determine the lens positions relative to the sensor, I assumed that the camera we're setting up looks at its test mass from a distance of 1 m. Using the two available focal lengths we can look for solutions which have reasonable lens separations (<~10 cm) and suitable magnification. We primarily want to image the central mirror area onto a 1/4"-sized sensor, which can be achieved with a magnification of ~1/8.

I chose a lens separation of 6 cm, which gives a theoretical magnification of -0.12 and a sensor-to-lens-2 distance of 7.95 cm. I placed the lenses accordingly in the tubes and checked the focusing with Gautam's help.

It's pretty close to what we would expect. We will do the calibration using the auxiliary laser on the PSL table. For this I temporarily routed a fiber from the PSL enclosure to the SP table. Since the main cable hole is sort of cramped it's going in through a gap near the ceiling instead.  
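For reference, a minimal thin-lens sketch that reproduces the quoted numbers (the lens ordering, f = 150 mm toward the test mass and f = 250 mm toward the sensor, is my inference; it is the ordering that gives -0.12 and 7.95 cm):

def image_through_lens(s_obj, f):
    # Thin lens: returns (image distance, magnification); distances in mm.
    # Object distances are positive in front of the lens (negative = virtual object).
    s_img = 1.0 / (1.0 / f - 1.0 / s_obj)
    return s_img, -s_img / s_obj

f1, f2 = 150.0, 250.0    # assumed ordering (test-mass side, sensor side)
d_obj = 1000.0           # test mass to first lens
d_sep = 60.0             # separation between the two lenses

s1, m1 = image_through_lens(d_obj, f1)
s2, m2 = image_through_lens(-(s1 - d_sep), f2)   # intermediate image is a virtual object for lens 2

print("sensor to lens-2 distance: %.1f mm" % s2)      # ~79.5 mm
print("total magnification:       %.3f" % (m1 * m2))  # ~-0.12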

 

Attachment 1: lens_distance.pdf
  13348 | Mon Oct 2 12:44:45 2017 | johannes | Update | Cameras | Basler 120gm calibration

Disclaimer: Wrong calibration factors! See https://nodus.ligo.caltech.edu:8081/40m/13391

The two acA640-120gm Basler GigE-Cams have been calibrated. I used the collimated output of a fiber that carried the auxiliary laser light from the PSL table. With a non-polarizing beam splitter some of the light was picked off onto a PD, and I modified the RF amplitude of the AOM drive signal to vary the power coming out of the fiber. The fiber output was directed at a white paper, which was placed 1.06m from the front of the lens tube assembly, which is where the focal plane is. Using the Pylon Viewer App I made sure that the entirety of the beam spot was imaged onto the CCD. Since the camera sensor is 1/4" across, I removed the camera from the lens tube and instead placed the Ophir power meter head at the position of the sensor and measured the power reported versus PD voltage, which turned out to be 1.5 V/uW.

The camera was put back in place and I used the Pypylon package Gautam had stumbled upon to sweep the exposure time from 100us to 10ms at different light power settings including no laser light at all for background subtraction, and rather than keeping the full bitmap data for O(100s) of images I recorded only the quantities

  1. Pixel Max
  2. Pixel Sum
  3. Pixel Mean
  4. Pixel Standard Deviation
  5. Pixel Median

I performed this procedure for both the 152 and 153 cameras and plotted the pixel sum and the pixel max vs the exposure time. All the exposures were taken at a gain setting of 100, which is the smallest possible setting (out of 100-600). To obtain the calibration factor I use the input power Pin=75nW in the 'safe' region 1ms to 10ms where the pixel sum looks smooth and the CCD is reportedly not saturated.

Camera IP        | Calibration Factor CF
192.168.113.152  | 8.58 W*s
192.168.113.153  | 7.83 W*s

The incident power can be calculated as Pin =CF*Total(Counts-DarkCounts)/ExposureTime.
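Applying this is a one-liner; a minimal sketch (array names are placeholders):

import numpy as np

def incident_power(image, dark, t_exp, CF):
    # image, dark: 2D count arrays at the same exposure time; t_exp in seconds;
    # CF is the calibration factor quoted above (in units of power * time per count).
    return CF * np.sum(image.astype(float) - dark) / t_exp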

Attachment 1: calib_20170930_152.pdf
Attachment 2: calib_20170930_153.pdf
  13352 | Mon Oct 2 23:16:05 2017 | gautam | HowTo | Cameras | CCD calibration

Going through some astronomy CCD calibration resources ([1]-[3]), I gather that there are in general 3 distinct types of correction that are applied:

  1. Dark frames --- this would be what we get with a "zero duration" capture, some documents further subdivide this into various categories like thermal noise in the CCD / readout electronics, poissonian offsets on individual pixels etc.
  2. Bias frames --- this effect is attributed to the charge applied to the CCD array prior to the readout.
  3. Flat-field calibration --- this effect accounts for the non-uniform responsivity of individual pixels on the CCDs. 

The flat-field calibration seems to be the most complicated - the idea is to use a source of known radiance, and capture an image of this known radiance with the CCD. Then, assuming we know the source radiance well enough, we can use some math to back out what the actual response function of each individual pixel is. For an actual image, we would then divide by this response map to get the true image (a sketch of the standard correction follows the list below). There are a number of assumptions that go into this, such as: 

  • We know the source radiance perfectly (I guess we are assuming that the white paper is a Lambertian scatterer so we know its BRDF, and hence the radiance, perfectly, although the work that Jigyasa and Amani did this summer suggests that white paper isn't really a Lambertian scatterer). 
  • There is only one wavelength incident on the CCD.
  • We can neglect the effects of dust on the telescope/CCD array itself, which would obviously modify the responsivity of the CCD, and is presumably not stationary. Best we can do is try and keep the setup as clean as possible during installation.
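For concreteness, a minimal sketch of how these corrections are usually combined in the astronomy references below (not something implemented here yet; matched exposure times for the raw and dark frames are assumed):

import numpy as np

def calibrate_frame(raw, dark, bias, flat):
    # raw  : raw science frame [counts]
    # dark : dark frame at the same exposure time as 'raw'
    # bias : zero-duration bias frame
    # flat : exposure of a (nominally) uniform source
    science = raw.astype(float) - dark          # removes bias + thermal signal
    flat_resp = flat.astype(float) - bias
    flat_resp /= np.mean(flat_resp)             # normalize pixel response to ~1
    return science / flat_resp                  # divide out pixel-to-pixel response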

I am not sure what error is incurred by ignoring 2 and 3 in the list at the beginning of this elog; perhaps this won't affect our ability to estimate the scattered power from the test masses to within a factor of 2. But it may be worth it to do these additional calibration steps. 

I also wonder what the uncertainty in the 1.5 V/uW number for the photodiode is (i.e. how much do we trust the Ophir power meter at low power levels?). The datasheet for the PDA100A says the transimpedance gain at 60 dB gain is 1.5 MV/A (into a high-impedance load), and the Si responsivity at 1064 nm is ~0.25 A/W, so naively I would expect 0.375 V/uW, which is a factor of ~4 lower. Is there a reason to trust one method over the other?  

Also, are the calibration factor units correct? Jigyasa reported something like 0.5nW s / ct in her report.

Camera IP        | Calibration Factor CF
192.168.113.152  | 8.58 W*s
192.168.113.153  | 7.83 W*s

The incident power can be calculated as Pin =CF*Total(Counts-DarkCounts)/ExposureTime.

References:

[1] http://www.astrophoto.net/calibration.php

[2] https://www.eso.org/~ohainaut/ccd/

[3] http://www.astro.ufl.edu/~lee/ast325/handouts/ccd.pdf

  13354 | Tue Oct 3 01:58:32 2017 | johannes | HowTo | Cameras | CCD calibration

Disclaimer: Wrong calibration factors! See https://nodus.ligo.caltech.edu:8081/40m/13391

The factors were indeed enormously off. The correct table reads:

Camera IP        | Calibration Factor CF
192.168.113.152  | 85.8 pW*s
192.168.113.153  | 78.3 pW*s

I did subtract a 'dark' frame from the images, though not in the sense of your point 1, just an exposure of identical duration with the laser turned off. This was mostly to reduce the effect of residual light, but given similar initial conditions it would somewhat compensate for the offset that pre-existing charge and electronics noise put on the pixel values. The white field is of course a different story.

I wonder how close we can get to a white field by putting a thin piece of paper in front of the camera without lenses and illuminating it from the other side. A problem is of course the coherence if we use a laser source... Or we scrap any sort of screen/paper and illuminate directly with a strongly divergent beam? Then there wouldn't be a speckle pattern.

I'm not sure I understand your point about the 1.5 V/uW. Just to make sure we're talking about the same thing I made a crude drawing:

The PD sees plenty of light at all times, and the 1.5V/uW came from a comparative measurement PD<-->Ophir (which took the place of the CCD) while adjusting the power deflected with the AOM, so it doesn't have immediate connection to the conversion gain of silicon in this case. I can't remember the gain setting of the PD, but I believe it was 0dB, 20dB at most.

Attachment 1: gige_calibration.pdf
  13375 | Thu Oct 12 01:03:49 2017 | johannes | HowTo | Cameras | ETMX GigE side view

I calculated a better lens solution for the ETMX side view with the simple python script that's attached. The camera is still not as close to the viewport as we would like, and now the front lens is almost all the way up to the end of the tube. With a little more playing around there may be a better way, especially if we expand the repertoire of focal lengths. Using Steve's wonderful camera fixture I put the beam spot in focus. I turned the camera sideways for better use of the field of view, and now the beam spot actually fills the center area of the image, to the point where we probably don't want more magnification or else we start losing the tails of the Gaussian.

We'll take a series of images tomorrow, and will have an estimate of the scatter loss by the end of tomorrow.

 

Attachment 1: IMG_20171011_164549698.jpg
Attachment 2: Image__2017-10-11__16-52-01.png
Attachment 3: GigE_lens_position_helper.py.zip
  13377 | Thu Oct 12 07:56:33 2017 | Steve | HowTo | Cameras | ETMX GigE side view at 50 deg of IR scattering

 Telescope front lens to wall distance is 25 cm; GigE camera length is 6 cm and the Cat6 cable adds 2 cm.

 Atm3:   The existing short camera can has 16 cm of length to the Lexan guard on the viewport. The available 2" OD periscope tube length is 8 cm; the one in use is 16 cm long.

             Note: we can fabricate a light cover with a tube that would accommodate a longer telescope.

             Can we calibrate the AR-coated M5018-SW and compare its performance against the 2" periscope?

             Look at the Edmund Optics 3" OD camera lens with AR.

This lower-priced 1" aperture Navitar lens can be an option too.

 

 Atm1:   Now I can see dust. This is much better. The focus is not right yet.

Atm2:   Chamber viewport wiped and image refocused. Actually I was focusing on the dust.

Quote:

I calculated a better lens solution for the ETMX side view with the simple python script that's attached. The camera is still not as close to the viewport as we would like, and now the front lens is almost all the way up to the end of the tube. With a little more playing around there may be a better way, especially if we expand the repertoire of focal lengths. Using Steve's wonderful camera fixture I put the beam spot in focus. I turned the camera sideways for better use of the field of view, and now the beam spot actually fills the center area of the image, to the point where we probably don't want more magnification or else we start losing the tails of the Gaussian.

We'll take a series of images tomorrow, and will have an estimate of the scatter loss by the end of tomorrow.

 

 

Attachment 1: Image__2017-10-11__15-29-52_15k400g.png
Attachment 2: Image__2017-10-12__15-50-18wipedRefocud2.png
Attachment 3: camCan16cm.jpg
  13389 | Wed Oct 18 11:37:58 2017 | johannes | HowTo | Cameras | ETMX GigE side view at 50 deg
Quote:

 Telescope front lens to wall distance is 25 cm; GigE camera length is 6 cm and the Cat6 cable adds 2 cm.

 Atm3:   The existing short camera can has 16 cm of length to the Lexan guard on the viewport. The available 2" OD periscope tube length is 8 cm; the one in use is 16 cm long.

             Note: we can fabricate a light cover with a tube that would accommodate a longer telescope.

             Can we calibrate the AR-coated M5018-SW and compare its performance against the 2" periscope?

             Look at the Edmund Optics 3" OD camera lens with AR.

Atm1:   Now I can see dust. This is much better. The focus is not right yet.

Atm2:   Chamber viewport wiped and image refocused. Actually I was focusing on the dust.

We don't really have to calibrate the lens, just the CCD, which we've done. It's more about knowing the true aperture size to know how much solid angle you're capturing to infer the total amount of scatter. For our custom lens tubes this is the ID of the retaining ring.

The Edmund Optics lens tube looks tempting, but it comes at a price. Thorlabs sells lens tubes that offer more flexibility than what we have right now, so I bought a few different ones, and also more 150 mm 2" lenses. This will allow for more compact solutions and offer some in-situ focusing ability that doesn't require detaching the lens tube like now. They should be here in a couple of days, and then we'll be able to enclose the GigE camera in the viewport can with a similar field of view to what we have now.

I also bought a collimation package for the AS port fiber stuff so we can move ahead with the ringdown measurements and also mode spectroscopy.

  13391 | Wed Oct 18 15:26:58 2017 | johannes | HowTo | Cameras | Revision: CCD calibration

The units were still off in my previous post. Here's the corrected, sanity-checked version:

Camera IP        | Calibration Factor
192.168.113.152  | 85.8 +/- 4.3 pW*μs
192.168.113.153  | 78.3 +/- 3.9 pW*μs

I estimated the uncertainties based on a linear fit to the data I recorded with 75 nW incident on the CCD, and assumed a 5% uncertainty in that number. This is just an upper limit, to be safe. I had calibrated the power reading by placing the Ophir power meter where the CCD would otherwise be and comparing it to the PD voltage of a picked-off beam. In my previous figures the axes were mislabeled, so I reproduce them here:

Using the current camera position I recorded 50 exposures both with and without beam (XARM locked vs PSL shutter closed) and averaged the images to see how much the reading fluctuates. The exposure time was 10 ms, which left the maximum reported pixel value in all exposures below 3800 out of 4096. The gain setting was 100, which is what I used to calibrate the CCDs.

Counts with XARM locked     | (2.799 +/- 0.027) x 10^7
Counts with shutter closed  | (3.220 +/- 0.047) x 10^6
Power on CCD                | 193.9 +/- 2.2 nW
Power scattered into 2π (*) | 254 +/- 39 μW
ETMX scatter loss (**)      | 25.4 +/- 3.9 ppm

(*) I calculated the lens positions to focus at a plane 65cm from the front lens. We're pretty close to that, but I can't confirm the actual distance easily, so I assumed a 5cm error on the distance, which is where most of the error is coming from. This is also assuming uniform scatter.

(**) This is assuming 10W of circulating power
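A minimal sketch of the arithmetic behind this table (the choice of the 78.3 pW*μs calibration factor for the ETMX camera, the 1" collection-aperture radius, and the 65 cm distance are my assumptions, chosen to be consistent with the quoted numbers):

import numpy as np

CF = 78.3e-12 * 1e-6          # 78.3 pW*us per count, in W*s (assumed ETMX camera)
t_exp = 10e-3                 # exposure time [s]
counts_locked = 2.799e7       # XARM locked
counts_dark = 3.220e6         # PSL shutter closed

P_ccd = CF * (counts_locked - counts_dark) / t_exp      # ~194 nW on the CCD

r_ap = 0.0254                 # collection-aperture radius [m] (assumption)
d = 0.65                      # distance from the scattering surface [m] (assumption)
Omega = np.pi * r_ap**2 / d**2                          # solid angle of the aperture

P_2pi = P_ccd * 2 * np.pi / Omega                       # ~254 uW, assuming uniform scatter
loss_ppm = P_2pi / 10.0 * 1e6                           # ~25 ppm with 10 W circulating

print(P_ccd * 1e9, P_2pi * 1e6, loss_ppm)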

Attachment 1: calib_20170930_152.pdf
Attachment 2: calib_20170930_153.pdf
  13868 | Fri May 18 20:03:14 2018 | Pooja | Update | Cameras | Telescopic lens solution for GigE

Aim: To find telescopic lens solution to image test mass onto the sensor of GigE camera.

I wrote a python code to find an appropriate combination of lenses to focus the optic onto the camera, keeping in mind practical constraints: the distance of the GigE camera from the optic is ~1 m, and the distance between the lenses needs to be in accordance with the Thorlabs lens tubes available. We have to image both the entire optic of size 3" and the 1" beam spot using this combination of lenses. The image size that efficiently utilizes the entire sensor array is 1/4". Therefore the magnification required for imaging the entire optic is 1/12 and that for the beam spot is 1/4.

I checked the Thorlabs website for the available focal lengths of 2" lenses (instead of 1" lenses, to collect sufficient power). I have tried several combinations of lenses, and the ones I found close enough to what is required are listed below along with their colorbar plots.

a) 150mm-150mm (Attachment 2 & 3)

With this combination, the object distance varies from 50 cm for the 1" beam spot to 105 cm for the 3" optic. It therefore poses a difficulty: there is a difference of ~48 cm in the distances between the optic and camera in the two cases, imaging the entire optic and imaging the beam spot.

b) 125mm-150mm (Attachment 4 & 5)

With this combination, the object distance varies from 45 cm for the 1" beam spot to 95 cm for the 3" optic. There is a difference of ~43 cm in the distances between the optic and camera in the two cases: imaging the entire optic and imaging the beam spot.

c) 125mm-125mm (Attachment 6 & 7)

The object distance varies from 45 cm for the 1" beam spot to 90 cm for the 3" optic. There is a difference of ~39 cm in the distances between the optic and camera in the two cases: imaging the entire optic and imaging the beam spot.

A sensitivity check was also done for these combinations of lenses. An error of 1 cm in the object distance and 5 mm in the distance between the lenses gives an error in magnification of <2%.

The schematic of the telescopic lens system has been given in Attachment 8.

 

Attachment 1: calib_20170930_152.pdf
Attachment 2: plot_2018-05-18_tel-lens_150_150_1.pdf
Attachment 3: plot_2018-05-18_tel-lens_150_150_3.pdf
Attachment 4: plot_2018-05-18_tel-lens_125_150_1.pdf
Attachment 5: plot_2018-05-18_tel-lens_125_150_3.pdf
Attachment 6: plot_2018-05-18_tel-lens_125_125_1.pdf
Attachment 7: plot_2018-05-18_tel-lens_125_125_3.pdf
Attachment 8: tel_design.pdf
  13874 | Mon May 21 17:36:00 2018 | pooja | Update | Cameras | GigE camera image of ETMX

Today Steve and I tried to capture an image of the light scattered by dust particles on the surface of ETMX using the GigE camera. The image obtained (at gain = 100, exposure time = 125000) has been attached. Unlike the previous images, a creepy shape of bright spots was seen. Gautam helped us lock the infrared light and look at the image; a similar, less intense shape was seen. This may be because of dust on the lens.

Attachment 1: Image__2018-05-21__17-34-15_125k100g.tiff
  13893 | Fri May 25 14:55:33 2018 | Jon Richardson | Update | Cameras | Status of GigE Camera Software Fixes

There is an effort to switch to an all-digital system for the GigE camera feeds similar to the one running at LLO, which uses Joe Betzwieser's custom SnapPy package to interface with the cameras in Python and aggregate their feeds into a fancy GUI. Joe's code is a SWIG-wrapping of the commercial camera-driver API, Pylon, from Basler. The wrapping allows the low-level camera driver methods to be called from within Python, and their feeds are forwarded to a gstreamer stream also initiated from within Python. The problem is that his wrapping (and the underlying Pylon software itself) is only runnable on an older version of Ubuntu. Efforts to run his software on newer distributions at the 40m have failed.

I'm working on a fix to essentially rewrite his high-level SnapPy code (generators of GUIs, etc.) to use the newest version of Pylon (pylon5) to interface at a low level with the cameras. I discovered that since the last attempt to digitize the camera system, Basler has released their own official version of a Python wrapping for Pylon on github (PyPylon).

Progress so far:

  • I've installed from source the newest version of Pylon, pylon5.0.12 on the SL7 machine (rossa). I chose that machine because LIGO is migrating to Scientific Linux, but I think this will also work for any distribution.
  • I've installed from source the newest, official Python wrapping of the Basler Pylon software, pypylon.
  • I've tested the pypylon package and confirmed it can run our cameras.

The next and final step is to modify Joe's SnapPy package to import pypylon instead of his custom wrapping of an older version of the camera software, and update all of the Pylon calls to use the new methods. I'll hopefully get back to this early next week.
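For reference, a minimal capture test with the official pypylon package looks something like the following (a sketch, not the planned SnapPy replacement; the exposure node name varies between camera models):

from pypylon import pylon

# Connect to the first camera found by the transport layer
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# On the acA640-120gm GigE cameras the exposure node is ExposureTimeRaw, in microseconds (assumption)
camera.ExposureTimeRaw.SetValue(10000)

# Grab one frame with a 5 s timeout and get it as a numpy array
result = camera.GrabOne(5000)
if result.GrabSucceeded():
    frame = result.Array
    print(frame.shape, frame.max())
camera.Close()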

  13909 | Fri Jun 1 19:25:11 2018 | pooja | Update | Cameras | Synchronizing video data with the applied motion to the mirror

Aim: To synchronize data from the captured video and the signal applied to ETMX

In order to correlate the intensity fluctuations of the scattered light with the motion of the test mass, we are planning to use neural networks. For this, we need a video of the scattered light synchronised with the signal applied to the test mass. Gautam helped me capture a 60 s video of the scattered infrared laser light after ETMX was dithered in pitch at ~0.2 Hz.

I developed a python program to capture the video and convert it into a time series of the sum of pixel values in each frame using OpenCV, to see the variation. Initially we had tried the same with green laser light and a signal of approximately 11.12 Hz, but in order to see the variation clearly, we repeated this with a lower-frequency signal after locking the IR laser today. I have attached the plots that we got below. The first graph gives the intensity fluctuations from the video. The third and fourth graphs are those of the transmitted light and the signal applied to ETMX to shake it. Since the video captured using the camera was very noisy and the intensity fluctuations in the scattered light had twice the frequency of the applied signal, we also captured a video after turning off the laser. The second plot gives the background noise, probably from the camera. Since the camera noise is very high, it may not be possible to train a neural network on this data set.

Since the captured videos consume a lot of memory, I haven't uploaded them here. I have uploaded the python code 'sync_plots.py' to github (https://github.com/CaltechExperimentalGravity/GigEcamera/tree/master/Pooja%20Sekhar/PythonCode).
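The core of such a script is short; a minimal OpenCV sketch (the filename is a placeholder):

import cv2
import numpy as np

cap = cv2.VideoCapture('scatter_video.avi')     # placeholder filename
fps = cap.get(cv2.CAP_PROP_FPS)

pixel_sums = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pixel_sums.append(int(np.sum(gray, dtype=np.int64)))
cap.release()

t = np.arange(len(pixel_sums)) / fps            # time axis for the pixel-sum time series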

 

Attachment 1: camera_mirror_motion_plots.pdf
  13914 | Mon Jun 4 11:34:05 2018 | Jon Richardson | Update | Cameras | Update on GigE Cameras

I spent a day trying to modify Joe B.'s LLO camera client-server code without ultimate success. His code now runs without throwing any errors, but something inside the black-box handoff of his camera source code to gstreamer appears to be SILENTLY FAILING. Gautam suggested a call with Joe B., which I think is worth a try.

In the meantime, I've implemented a simple Python video feed streamer which does work, and which students can use as a base framework to implement more complicated things (e.g., stream multiple feeds in one window, save a video stream movie or animation).

It uses the same PyPylon API to interface with the GigE cameras as does Joe's code. However, it uses matplotlib instead of gstreamer to render the images. The matplotlib code is optimized for maximum refresh rate, and I observed it to achieve ~5 Hz for a single video feed. However, this demo code does not set any custom camera settings (it just initializes a camera with its defaults), so it's quite possible that the refresh rate is actually limited by, e.g., the camera exposure time.

Location of the code (on the shared network drive):

/opt/rtcds/caltech/c1/scripts/GigE/demo_with_mpl/stream_camera_to_mpl.py

This demo initializes a single GigE camera with its default settings and continuously streams its video feed in a pop-up window. It runs continuously until the window is closed. I installed PyPylon from source on the SL7 machine (rossa) and have only tested it on that machine. I believe it should work on all our versions of Linux, but if not, run the camera software on rossa for now.

Usage:

From within the above directory, the code is executed as 

$python stream_camera_to_mpl.py [Camera IP address]

with a single argument specifying the IP address of the desired camera. At the time I tested, there was only one GigE camera on our network, at 192.168.113.152.
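Not the actual script, but a minimal sketch of the same pypylon + matplotlib refresh idea (camera selection by IP address is omitted here; default camera settings are assumed):

import matplotlib.pyplot as plt
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()
camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)

plt.ion()
fig, ax = plt.subplots()
im = None
while plt.fignum_exists(fig.number):
    result = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if result.GrabSucceeded():
        frame = result.Array.copy()     # copy out of the grab buffer
        if im is None:
            im = ax.imshow(frame, cmap='gray')
        else:
            im.set_data(frame)
        plt.pause(0.01)                 # let matplotlib redraw the window
    result.Release()

camera.StopGrabbing()
camera.Close()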

  13917 | Tue Jun 5 20:31:42 2018 | rana | Update | Cameras | Update on GigE Cameras

Aha! Video is back!

I think it would be good to add a flag whereby the video can be saved to disk in some uncompressed video format (ogg, avi, ?) instead of displayed to a matplotlib window. We could then use the default to just display video, but use the save-to-disk flag to grab a few minutes of video for image processing.
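One way to do that (a sketch; the codec choice and frame source are placeholders) is with OpenCV's VideoWriter:

import cv2

def save_frames(frames, filename='capture.avi', fps=5.0):
    # frames: iterable of 2D uint8 arrays from the camera (placeholder source)
    writer = None
    for frame in frames:
        if writer is None:
            h, w = frame.shape
            fourcc = cv2.VideoWriter_fourcc(*'FFV1')   # lossless, if the ffmpeg build supports it
            writer = cv2.VideoWriter(filename, fourcc, fps, (w, h), isColor=False)
        writer.write(frame)
    if writer is not None:
        writer.release()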

Quote:

In the meantime, I've implemented a simple Python video feed streamer which does work, and which students can use as a base framework to implement more complicated things (e.g., stream multiple feeds in one window, save a video stream movie or animation).

  13937 | Sun Jun 10 15:04:33 2018 | pooja | Update | Cameras | Developing neural network

Aim: To develop a neural network in order to correlate the intensity fluctuations in the scattered light to the angular motion of the test mass. A block diagram of the technique employed is given in Attachment 1.

I have used Keras to implement supervised learning with a neural network (NN). Initially I developed a python code that converts a video (59 sec) of scattered light, taken after an excitation (sine wave of frequency 0.2 Hz) was applied to ETMX pitch, into image frames (of size 480*720) and stores the 2D pixel values of the 1791 image frames captured into an hdf5 file. This array of shape (1791, 36500) is given as the input to the neural network. I have tried to implement a regular NN only, not a convolutional or recurrent NN, using the sequential model in Keras. I tried various numbers of dense layers and varied the number of nodes in each layer, and got a test accuracy of approximately 7% using the following network. There are two dense layers, the first with 750 nodes and a dropout of 0.1 (10% of the nodes not used) and the second with 500 nodes. To add nonlinearity to the network, both layers are given a tanh activation function. The output layer has 1 node and expects an output of shape (1791, 1). This model was compiled with a categorical crossentropy loss function and the RMSprop optimizer; we used these since they are mostly used in the image-analysis examples. The model is then trained against the dataset of mirror motion, which was obtained by sampling the cosine-wave fit to the mirror motion so that the shapes of the input and output of the NN are consistent. I used a batch size (number of samples per gradient update) of 32 and 20 epochs (number of times the entire dataset passes through the NN). However, using this we got an accuracy of only 7.6%. 

I think that the above technique overfits, since the dense layers use all the nodes during training apart from the dropout fraction. Also, the beam spot moves in the video, so it may be necessary to use a convolutional NN to extract the information.

The video file can be accesses from this link https://drive.google.com/file/d/1VbXcPTfC9GH2ttZNWM7Lg0RqD7qiCZuA/view.

Gabriele told us that he had used the beam spot motion to train the neural network. He also informed us that GPUs are necessary for this. So we have to figure out a better way to train the network.  


gautam noon 11Jun: This link explains why the straight-up fully connected NN architecture is ill-suited for the kind of application we have in mind. Discussing with Gabriele, he informed us that training on a GPU machine with 1000 images took a few hours. I'm not sure what the CPU/GPU scaling is for this application, but given that he trained for 10000 epochs, and we see that training for 20 epochs on Optimus already takes ~30 minutes, it seems like a futile exercise to keep trying on CPU machines.

Attachment 1: nn_block_diag_2.pdf
  13940 | Mon Jun 11 17:18:39 2018 | pooja | Update | Cameras | CCD calibration

Aim: To calibrate CCD of GigE using LED1050E.

The following table shows some of the specifications for LED1050E as given in Thorlabs datasheet.

Specification                    | Typical | Maximum rating
DC forward current (mA)          |         | 100
Forward voltage (V) @ 20 mA (VF) | 1.25    | 1.55
Forward optical power (mW)       | 1.6     |
Total optical power (mW)         | 2.5     |
Power dissipation (mW)           |         | 130

 The circuit diagram is given in Attachment 1.

Considering a power supply voltage Vcc = 15 V, current I = 20 mA, and LED forward voltage VF = 1.25 V, the resistance in the circuit is calculated as

R = (Vcc - VF)/I = 687.5 Ω

Attachment 2 gives a plot of resistance (R) vs input voltage (Vcc) when a current of 20mA flows through the circuit. I hope I can proceed with this setup soon.

 

Attachment 1: led_circuit.pdf
Attachment 2: R_vs_V.pdf
  13951 | Tue Jun 12 19:27:25 2018 | pooja | Update | Cameras | CCD calibration

Today I made the LED (1050 nm) circuit inside a box, as given in my previous elog. Steve drilled a 1 mm hole in the box as an aperture for the LED light.

Resistance (R) used = 665 Ω.

We connected a power supply and IR was detected using the card.

Later we changed the input voltage and measured the optical power using a power meter.

Input voltage (Vcc in V) | Optical power
0 (dark reading)         | 60 nW
15                       | 68 µW
18                       | 82.5 µW
20                       | 92 µW

Since the optical power values are very low, we may need to drill a larger hole.

The hole is currently about 7 mm from the LED, so the full aperture angle is approximately 2·tan⁻¹(0.5/7) ≈ 8 deg. From the radiometric curve given in the LED1050E datasheet, most of the power is within 20 deg, so a hole of size 2 × 7 mm × tan(10°) ≈ 2.5 mm may be required.

I have also attached a photo of the led beam spot on the IR detection card.

Attachment 1: IMG_20180612_163831.jpg
  13972 | Fri Jun 15 09:51:55 2018 | pooja | Update | Cameras | Developing neural network

Aim : To develop a neural network on simulated data.

I developed a python code that generates a 64*64 image of a white Gaussian beam spot at the centre of a black background. I applied a sine wave of frequency 0.2 Hz that moves the spot vertically (i.e. in pitch), and simulated this video at 10 frames/sec for 10 seconds. I then saved this data into an hdf5 file, reshaped each frame to a 1D array, and gave it as input to a neural network. Out of the 100 image frames, 75 were taken as the training dataset and 25 as test data. I varied several hyperparameters like the learning rate of the optimizer, the number of layers and nodes, the activation function, etc. Finally, I was successful in reducing the mean squared error with the following network model (a minimal code sketch of this model is given just after the list):

  • Sequential model of 2 fully connected layers with 256 nodes each and a dropout of 0.1
  • loss function = mean squared error, optimizer = RMSprop (learning rate = 0.00001) and activation function that adds nonlinearity = relu
  • batch size = 32 and number of epochs = 1000
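A minimal Keras sketch of the model described above (the random arrays are placeholders standing in for the flattened frames and applied sine signal loaded from the hdf5 file):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop

# Placeholder data with the right shapes
x_train = np.random.rand(75, 64 * 64)
y_train = np.sin(2 * np.pi * 0.2 * np.arange(75) / 10.0)
x_test = np.random.rand(25, 64 * 64)
y_test = np.sin(2 * np.pi * 0.2 * (75 + np.arange(25)) / 10.0)

model = Sequential()
model.add(Dense(256, activation='relu', input_shape=(64 * 64,)))
model.add(Dropout(0.1))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1))                        # single linear output node for the spot position

model.compile(loss='mean_squared_error', optimizer=RMSprop(lr=1e-5))
history = model.fit(x_train, y_train, batch_size=32, epochs=1000,
                    validation_data=(x_test, y_test))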

I have attached the plot of the output of the neural network (NN) as well as the sine signal applied to simulate the video, and their residual error, in Attachment 1. The plot of the variation in mean squared error (log scale) with the number of epochs is given in Attachment 2.

I think this network worked easily since there is no noise in the input. Gautam suggested trying this network on simulated data with a noisy background.

 

Attachment 1: nn_1.pdf
Attachment 2: nn_2.pdf
  13986 | Tue Jun 19 14:08:37 2018 | pooja | Update | Cameras | CCD calibration using LED1050E

Aim: To measure the optical power from the LED using a power meter.

Yesterday Gautam drilled a larger hole of diameter 5 mm in the box as an aperture for the LED (the full aperture angle is approximately 2·tan⁻¹(2.5/7) ≈ 39 deg). I repeated the measurements that I had done before (https://nodus.ligo.caltech.edu:8081/40m/13951). The measurements of optical power made with a power meter and the corresponding input voltages are listed below.

Input voltage (Vcc in V) | Optical power
0 (dark reading)         | 0.8 nW
10                       | 1.05 mW
12                       | 1.15 mW
15                       | 1.47 mW
16                       | 1.56 mW
18                       | 1.81 mW

So we are able to receive optical power close to the value (1.6mW) given in Thorlabs datasheet for LED1050E (https://www.thorlabs.com/drawings/e6da1d5608eefd5c-035CFFE5-C317-209E-7686CA23F717638B/LED1050E-SpecSheet.pdf). I hope we can proceed to BRDF measurements for CCD calibration.

Steve: did you center the LED ?

  13991 | Wed Jun 20 20:39:36 2018 | pooja | Update | Cameras | CCD calibration using LED1050E

 

Quote:

Aim: To measure the optical power from the LED using a power meter.

Yesterday Gautam drilled a larger hole of diameter 5 mm in the box as an aperture for the LED (the full aperture angle is approximately 2·tan⁻¹(2.5/7) ≈ 39 deg). I repeated the measurements that I had done before (https://nodus.ligo.caltech.edu:8081/40m/13951). The measurements of optical power made with a power meter and the corresponding input voltages are listed below.

Input voltage (Vcc in V) | Optical power
0 (dark reading)         | 0.8 nW
10                       | 1.05 mW
12                       | 1.15 mW
15                       | 1.47 mW
16                       | 1.56 mW
18                       | 1.81 mW

So we are able to receive optical power close to the value (1.6mW) given in Thorlabs datasheet for LED1050E (https://www.thorlabs.com/drawings/e6da1d5608eefd5c-035CFFE5-C317-209E-7686CA23F717638B/LED1050E-SpecSheet.pdf). I hope we can proceed to BRDF measurements for CCD calibration.

Steve: did you center the LED ?

Yes.

  14018 | Tue Jun 26 10:50:14 2018 | pooja | Update | Cameras | Beam spot tracking using OpenCV

Aim: To track the motion of beam spot in simulated video.

I simulated a video that moves the beam spot, initially at the centre of the image, by applying a sinusoidal signal of frequency 0.2 Hz and amplitude 1, i.e. it moves by at most 1 pixel. It can be found at this shared google drive link (https://drive.google.com/file/d/1GYxPbsi3o9W0VXybPfPSigZtWnVn7656/view?usp=sharing). I found a program that uses a Kernelized Correlation Filter (KCF) to track object motion in the video. In the program we can initially define the bounding box (rectangle) that encloses the object we want to track, or select the bounding box by dragging in the GUI. I then saved the bounding box parameters from the program (x & y coordinates of the corner point, width & height) and plotted the variation in the y coordinate. I have yet to figure out how this tracker works, since the program reads the 64*64 image frames in the video as 480*640 frames with 3 colour channels, and the frame rate also changes randomly. A plot of the output of this tracking program and the applied signal is attached below. The output is not exactly sinusoidal because the tracker may not be able to follow very slight movement, especially at the peaks where the slope = 0.
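A minimal sketch of this kind of tracking (requires the opencv-contrib tracking module; the filename and initial box are placeholders):

import cv2

cap = cv2.VideoCapture('simulated_beam_spot.avi')    # placeholder filename
ok, frame = cap.read()

bbox = (280, 200, 80, 80)            # (x, y, width, height) around the spot -- placeholder
tracker = cv2.TrackerKCF_create()    # cv2.legacy.TrackerKCF_create() in newer OpenCV builds
tracker.init(frame, bbox)

y_trace = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        y_trace.append(bbox[1])      # y coordinate of the box corner, frame by frame
cap.release()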

Attachment 1: cv2_track_fig.pdf
  14020 | Tue Jun 26 17:20:33 2018 | Jon | Configuration | Cameras | LLO Python Camera Software is Working

Thanks to a discussion yesterday with Joe Betzwieser, I was able to identify and fix the remaining problem with the LLO GigE camera software. It is working now, currently only on rossa, but it can be set up on all the machines. I've started a wiki page with documentation and usage instructions here:

https://wiki-40m.ligo.caltech.edu/Electronics/GigE_Cameras

This page is also linked from the main 40m wiki page under "Electronics."

This software has the ability to both stream live camera feeds and to record feeds as .avi files. It is described more on the wiki page.

  14021 | Tue Jun 26 17:54:59 2018 | pooja | Update | Cameras | Developing neural networks

Aim: To find a model that trains on simulated data of a Gaussian beam spot moving vertically under an applied sinusoidal signal. The data also includes random uniform noise ranging from 0 to 10.

All the attachments are in the zip folder.

I simulated 128*128 images at 10 frames/sec by applying a sine wave of frequency 0.2 Hz that moves the beam spot, added random uniform noise ranging from 0 to 10, and resized the image frames to 64*64 using opencv. 1000 cycles of this data are taken as the training set and 300 cycles as the test data for the following cases. Optimizer = Nadam (learning rate = 0.001), loss function = mean squared error, batch size = 32.

Case 1:

Model topology:

                         256 (dropout = 0.1)  ->           256 (dropout = 0.1)   ->       1

Activation :             selu                                         selu

Number of epochs = 240.

Variation in loss value of train & test datasets is given in Attachment 1 of the attached zip folder & the applied signal as well as the output of neural network given in Attachments 2 & 3 (zoomed version of 2).

The model fits well, but the test loss is lower than the training loss, which suggests the network is not really learning. I found on several sites that dropping out some of the nodes during training but retaining all of them at test time could be the probable reason for this (https://stackoverflow.com/questions/48393438/validation-loss-when-using-dropout , http://forums.fast.ai/t/validation-loss-lower-than-training-loss/4581 ). So I removed the dropout the next time I trained.

Case 2:

Model topology:

                         256 (dropout = 0.1)  ->           256 (dropout = 0.1)   ->       1

Activation :             selu                                         selu                          linear

Number of epochs = 200.

Variation in loss value of train & test datasets is given in Attachment 4 of the attached zip folder & the applied signal as well as the output of neural network given in Attachments 5 & 6 (zoomed version of 2).

But still no improvement.

Case 3:

I changed the optimizer to Adam and tried with the same model topology & hyperparameters as case 2 with no success (Attachments 7,8 & 9).

Finally I think this is because I'm training & testing on the same data. So I'm now training with the simulated video but moving it by a maximum of 2 pixels only and testing with a video of ETMY that we had captured earlier.

Attachment 1: NN_noise_diag.zip
  14037 | Wed Jul 4 20:48:32 2018 | pooja | Update | Cameras | Medm screen for GigE

(Gautam, Pooja)

Aim: To develop medm screen for GigE.

Gautam helped me set up the medm screen through which we can interact with the GigE camera. The steps adopted are as follows:

(i) Copied CUST_CAMERA.adl file from the location /opt/rtcds/userapps/release/cds/common/medm/ to /opt/rtcds/caltech/c1/medm/MISC/.

(ii) Made the following changes by opening CUST_CAMERA.adl in text editor.  

  • Changed the name of file to "/cvs/cds/rtcds/caltech/c1/medm/MISC/CUST_CAMERA.adl"
  • Replaced all occurrences of "/ligo/apps/linux-x86_64/camera/bin/" with "/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/" & "/ligo/cds/$(site)/$(ifo)/camera/" with "/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/"

(iii) Added this .adl file as drop-out menu 'GigE' to VIDEO/LIGHTS section of sitemap (circled in Attachment 1) i.e opened Resource Palette of VIDEO/LIGHTS, clicked on Label/Name/Args & defined macros as CAMERA=C1:CAM-ETMX,CONFIG=C1-CAM-ETMX in Arguments box of Related Display Data dialog box (circled in Attachment 2) that appears. In Related Display Data dialog box, Display label is given as GigE and Display File as ./MISC/CUST_CAMERA.adl

(iv) All the channel names can be found in Jigyasa's elog https://nodus.ligo.caltech.edu:8081/40m/13023

(v) Since the slider (circled in Attachment 3) for pixel sum was not moving, changed the high limit value to 10000000 in PV Limits dialog box. This value is set such that the slider reaches the other end on setting the exposure time to maximum.

(vi) Set the Snapshot channel C1:CAM-ETMX_SNAP to off (very important!). Otherwise we cannot interact with the camera.

(vii) The GigE camera gstreamer client is run in a tmux session.

Now we can change the exposure time and record a video by specifying the filename and its location using the medm screen. However, while recording the video, the gstreamer video launcher of the GigE stops or gets stuck.

Attachment 1: sitemap.png
Attachment 2: GigE_macros.png
Attachment 3: CUST_CAMERA.png
  14053 | Wed Jul 11 16:50:34 2018 | pooja | Update | Cameras | Update in developing neural networks

Aim: To develop a neural network that resolves mirror motion from video.

I created a python code to find the combination of hyperparameters that best trains the neural network. The code (nn_hyperparam_opt.py) is in the github repo and has been running on the cluster for a few days. In the meantime I just tried some combinations of hyperparameters by hand.

These give a low loss value of approximately 1e-5, but there is a large error bar on the loss value since it fluctuates a lot even after 1500 epochs. The reason for this is unclear.

Input: 64*64 image frames of simulated video by applying beam motion sine wave of frequency 0.2Hz and at 10 frames per sec. This input data is given as an hdf5 file.

Train : 100 cycles,  Test: 300 cycles, Optimizer = Nadam (learning rate = 0.001)

Model topology:

                    256       ->      128    ->       1

Activation :        selu     selu           linear

Case 1: batch size = 48, epochs = 1000, loss function = mean squared error

Plots of output predicted by neural network (NN) & input signal has been shown in 1st graph & variation in loss value with epochs in 2nd graph.

Case 2: batch size = 32, epochs = 1500, loss function = mean squared logarithmic error

Plots of output predicted by neural network (NN) & input signal has been shown in 3rd graph & variation in loss value with epochs in 4th graph.

 

 

 

Attachment 1: graphs.pdf
  14070 | Fri Jul 13 23:23:49 2018 | pooja | Update | Cameras | Update in developing neural networks

Aim: To develop a neural network that resolves mirror motion from video.

I tried to reduce the overfitting problem in the previous neural network by reducing the number of nodes and layers and by varying the learning rate and the beta factors (exponential decay rates of the moving first and second moment estimates) of the Nadam optimizer, assuming an error of 5% is reasonable.

Input:

32 * 32 image frames (converted to 1d array & pixel values of 0 to 255 normalized) of simulated video by applying sine signal to move beam spot in pitch with frequency 0.2Hz and at 10 frames per second.

Total: 300 cycles ,           Train: 60 cycles,    Validation: 90 cycles,    Test: 150 cycles

Model topology:

                                          Input               -->                  Hidden layer               -->                    Output layer                                  

                                                                                          4 nodes                                              1 node

Activation function:                                  selu                                             linear

Batch size = 32, Number of epochs = 128, loss function = mean squared error

Optimizer: Nadam

Case 1:

Learning rate = 0.00001,    beta_1 = 0.8 (default value in Keras = 0.9),  beta_2 = 0.85 (default value in Keras = 0.999)

Plot of predicted output by neural network, applied input signal & residual error given in 1st attachment.

Case 2:

Changed number of nodes in hidden layer from 4 to 8. All other parameters same.

These plots show that when the residual error increases, the output of the neural network basically has a smaller amplitude than the applied signal. The origin of this kind of training error is unclear to me.

When the beta parameters of the optimizer are moved farther from 1, the error increases.

Attachment 1: nn_simulation_2_nodes4_lr0p00001_beta1_0p8_beta2_0p85.pdf
nn_simulation_2_nodes4_lr0p00001_beta1_0p8_beta2_0p85.pdf
Attachment 2: nn_simulation_2_nodes8_lr0p00001_beta1_0p8_beta2_0p85.pdf
nn_simulation_2_nodes8_lr0p00001_beta1_0p8_beta2_0p85.pdf
  14089   Thu Jul 19 18:09:17 2018 poojaUpdateCamerasUpdate in developing neural networks

Aim: To develop a neural network that resolves mirror motion from video.

Case 1:

Input: simulated video of beam-spot motion in pitch generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz, with amplitudes of 0.1, 0.04, 0.05, 0.08 of the frame size.

The data were split into train, validation and test sets, and I trained a network with the same model topology and parameters as in my previous elog (https://nodus.ligo.caltech.edu:8081/40m/14070).

The NN output and the residual error are shown in Attachment 1. This model gives a large error here, so I think we need to increase the number of nodes and the learning rate until we get a low error on the single-sine-wave simulated video (without overfitting), and only then train on a linear combination of sine waves.

Case 2 :

I normalized the target sine signal of the NN so that it ranges from -1 to 1 and then trained the same network as in my previous elog on the single-sine-wave simulated video. This gave a comparatively lower error (shown in Attachment 2). However, training this way only recovers the frequency of the test-mass motion, not the amount by which the test mass moves, so I'm not sure whether we can use it.

Attachment 1: nn_simulation_mlt_sine_nodes4_lr0p00001_beta1_0p8_beta2_0p85_marked.pdf
nn_simulation_mlt_sine_nodes4_lr0p00001_beta1_0p8_beta2_0p85_marked.pdf
Attachment 2: nn_simulation_2_nodes4_target-1to1_marked.pdf
nn_simulation_2_nodes4_target-1to1_marked.pdf
  14097   Sun Jul 22 14:01:07 2018 poojaUpdateCamerasDeveloping neural networks on simulated video

Aim: To develop a neural network that resolves mirror motion from video.

Since the error was high for the same input as in my previous elog (http://nodus.ligo.caltech.edu:8080/40m/14089), I modified the network topology by tuning the number of nodes, layers and learning rate so that the model fitted the sum of 4 sine waves efficiently. I saved the weights from the final epoch and, in a separate program, loaded the saved weights and tested on a simulated video in which the beam spot moves away from the image centre by a sum of 4 sine waves whose frequencies and amplitudes change with time.

Input: simulated video of beam-spot motion in pitch generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz with amplitudes of 0.1, 0.04, 0.05, 0.08 of the frame size. This is divided into train (0.4), validation (0.1) and test (0.5) fractions.

Model topology:

                    Input   -->   Hidden layer: 8 nodes (selu)   -->   Output layer: 1 node (linear)

Batch size = 32, Number of epochs = 128, loss function = mean squared error

Optimizer: Nadam ( learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)

Normalized the target sine signal of NN by dividing by its maximum value.

Plots of the predicted NN output, the applied input signal and the residual error are given in the 1st attachment. The model weights from the final epoch were saved to an h5 file, then loaded and tested on simulated data in which the amplitudes and frequencies of the 4 sine waves drift from their initial values by uniform random noise in the range 0 to 0.05. The predicted NN output, the target signal and the residual error are given in the 2nd attachment; the physical signal can be recovered from the NN output by multiplying by the normalization constant used earlier. However, even though the network fits the training and validation sets well, it gives a comparatively large error on the test data with varying amplitude and frequency.
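A minimal sketch of the save/load step described above (assuming standalone Keras and 32x32 frames as in the earlier entries; the filename and the test_frames array are illustrative, not the actual script's):

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import Nadam

    def build_model(n_pixels=32 * 32):
        """Recreate the 8-node topology so that saved weights can be reloaded onto it."""
        model = Sequential([
            Dense(8, activation="selu", input_shape=(n_pixels,)),
            Dense(1, activation="linear"),
        ])
        model.compile(optimizer=Nadam(lr=1e-5, beta_1=0.8, beta_2=0.85), loss="mse")
        return model

    model = build_model()
    # ... model.fit(...) on the training fraction ...
    model.save_weights("final_epoch_weights.h5")

    # in the separate test program: rebuild the identical topology, then load the weights
    test_model = build_model()
    test_model.load_weights("final_epoch_weights.h5")
    # predictions = test_model.predict(test_frames, batch_size=32)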

Gautam suggested training directly on this noisy data with varying amplitudes and frequencies. The results with the same NN model are given in Attachment 3; tuning the number of nodes, layers or the learning rate did not improve the fit much in this case.

 

 

Attachment 1: nn_simulation_2_normalized_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
nn_simulation_2_normalized_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
Attachment 2: nn_simulation_normalizedtarget_128epochs_mult_sin_load_wt_varyingtest_nodes8_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
nn_simulation_normalizedtarget_128epochs_mult_sin_load_wt_varyingtest_nodes8_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
Attachment 3: nn_simulation_2_normalized_varying_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
nn_simulation_2_normalized_varying_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
  14100   Tue Jul 24 06:11:50 2018 ranaUpdateCamerasDeveloping neural networks on simulated video

This looks like good progress. Instead of fixed sines or random noise, you should now generate a time series for the motion which is random noise with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use an inverse FFT to get the time series from the open-loop OL spectra (being careful about edge effects).
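A sketch of one way to do this with numpy (my own reconstruction of the recipe, not code from the 40m scripts): interpolate the measured one-sided ASD onto the FFT bins, attach random phases, and inverse-FFT. Windowing/overlap would still be needed to suppress the edge effects mentioned above.

    import numpy as np

    def timeseries_from_asd(freqs, asd, fs, n_samples):
        """Random time series whose one-sided amplitude spectral density ~ `asd`.

        freqs, asd : measured spectrum (asd in units/rtHz, freqs increasing);
        fs : sampling rate [Hz]; n_samples : number of output samples.
        """
        f_bins = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        asd_interp = np.interp(f_bins, freqs, asd)           # ASD at the FFT bin frequencies
        amp = asd_interp * np.sqrt(n_samples * fs / 2.0)     # one-sided ASD -> rFFT magnitude
        phases = np.random.uniform(0.0, 2.0 * np.pi, f_bins.size)
        spec = amp * np.exp(1j * phases)
        spec[0] = 0.0                                        # no DC offset
        if n_samples % 2 == 0:
            spec[-1] = spec[-1].real                         # Nyquist bin must be real
        return np.fft.irfft(spec, n=n_samples)

The returned series can then be used directly as the pitch motion of the simulated beam-spot centre.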

Quote:

Aim: To develop a neural network that resolves mirror motion from video.

  14101   Tue Jul 24 09:47:51 2018 gautamUpdateCamerasDeveloping neural networks on simulated video

I was thinking a little more about the way we are training the network for the current topology - because the network has no recurrent layers, it has no memory of past samples and hence no sense of the temporal axis. In fact, Keras by default shuffles the training data randomly, so the time ordering is lost. The training therefore amounts to asking the network to identify the centre of the Gaussian beam and output it. So in the training dataset, all we need is good (spatial) coverage of the area in which the spot is most likely to move? Or is the idea to develop tools to generate video with spot motion close to that on the ETM in lock, so that we can use it with a network topology that has memory?

Quote:

This looks like good progress. Instead of fixed sines or random noise, you should generate now a time series for the motion which is random noise but with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use inverse FFT to get the time series from the open loop OL spectra (being careful about edge effects)

  14114   Sun Jul 29 23:15:34 2018 poojaUpdateCamerasDeveloping CNN

Aim: To develop a convolutional neural network that resolves mirror motion from video.

Input: the previous simulated video of beam-spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz with amplitudes of 0.1, 0.04, 0.05, 0.08 of the frame size, where uniform random noise of up to 0.05 has been added to the amplitudes and frequencies. This is divided into train (0.4), validation (0.1) and test (0.5).

Model topology:

  • Number of filters = 2
  • Kernel size = 2
  • Size of pooling windows = 2
  • Layer sequence: convolution + pooling  --->  Dense layer of 4 nodes (selu)  --->  Output layer of 1 node (linear)

Batch size = 32, Number of epochs = 128, loss function = mean squared error

Optimizer: Nadam ( learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)
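Put together, the topology above corresponds to roughly the following Keras model. This is a sketch only: the frame size (64x64, single channel) and the use of selu on the convolutional layer are my assumptions, since they are not stated above.

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
    from keras.optimizers import Nadam

    model = Sequential([
        Conv2D(filters=2, kernel_size=2, activation="selu", input_shape=(64, 64, 1)),
        MaxPooling2D(pool_size=2),
        Flatten(),
        Dense(4, activation="selu"),
        Dense(1, activation="linear"),
    ])
    model.compile(optimizer=Nadam(lr=1e-5, beta_1=0.8, beta_2=0.85),
                  loss="mean_squared_error")
    # model.fit(frames, target, batch_size=32, epochs=128, validation_split=0.1)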

Plots of the CNN output and the applied signal are given in Attachment 1; the variation of the loss value with epochs is given in Attachment 2.

This needs to be analysed further by increasing the uniform random noise over the pixels and by training the CNN on simulated data with varying amplitudes and frequencies of the sine waves.

Attachment 1: conv_nn_varying_freq_amp_1.pdf
conv_nn_varying_freq_amp_1.pdf
Attachment 2: conv_nn_varying_freq_amp_2.pdf
conv_nn_varying_freq_amp_2.pdf
  14632   Thu May 23 08:51:30 2019 MilindUpdateCamerasSetting up beam spot simulation

I have done the following thus far since elog #14626:

Simulation:

  1. Cleaned up Pooja's code for simulating the beam spot. Added extensive comments and made the code modular. Simulated the Gaussian beam spot to exhibit 
    1. Horizontal motion
    2. Vertical motion
    3. Motion along both x and y directions
  2. The motion in each of the above videos is a combination of four sinusoids at frequencies 0.2, 0.4, 0.1, 0.3 Hz, with the amplitudes set as defaults in the script ((0.1, 0.04, 0.05, 0.08)*64 pixels for these simulations); the variation is shown in Attachment 1. For convenience I created these video files with only a hundred frames (fps = 10, total time ~ 10 s), which took around 2.4 s to write; longer files take much longer, and since I only want to run image processing on the frames right away, I don't see the need for long video files yet. A minimal sketch of the frame-generation step is given after this list.
  3. I have yet to add noise at the image level and randomness to the motion itself, and I intend to do that right away. Video 3 shows that even though the time variation of the beam-centre coordinates is sinusoidal, the spot moves along a line because the x and y motions have the same phase. I intend to add a phase between the x and y motions of the beam centre, but that doesn't seem too important right now. The white margins in the generated videos are annoying and make tracking the beam spot slightly harder since they introduce an offset (see below); I'll fix them later if simple cropping doesn't do the trick.
  4. I have yet to push the code to git. I will do that once I've incorporated the changes in (3).
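As referenced in item 2, here is a minimal sketch of the frame-generation step for the vertical-motion case (my own reconstruction, not the cleaned-up script itself; the spot width sigma is an assumed value):

    import numpy as np

    def gaussian_frame(n, x0, y0, sigma, amplitude=255.0):
        """Single n x n 8-bit frame with a Gaussian spot centred at (x0, y0) in pixels."""
        y, x = np.mgrid[0:n, 0:n]
        img = amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
        return img.astype(np.uint8)

    # beam-centre motion: sum of four sinusoids (frequencies in Hz, amplitudes in pixels)
    n, fps, duration = 64, 10, 10.0
    freqs = np.array([0.2, 0.4, 0.1, 0.3])
    amps = np.array([0.1, 0.04, 0.05, 0.08]) * n
    t = np.arange(0.0, duration, 1.0 / fps)
    yc = n / 2 + np.sum(amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t), axis=0)
    frames = [gaussian_frame(n, n / 2, y, sigma=0.2 * n) for y in yc]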

Circle detection:

  1. If the beam-spot intensity profile is indeed Gaussian (as it definitely is in the simulation), the contours are circular, so centroid detection of the beam spot reduces to detecting these contours and finding their centroid (centre). I tried this for a simulated video from elog post 14005. It was a quick implementation of the following sequence of operations: threshold (arbitrarily set to 127), contour detection (video dependent, needs to be chosen manually) and centroid determination from the selected contour; a sketch of this sequence is given after this list. It is evident that the beam spot is being tracked (green circle in the video); see Attachment 2 for the results. However, no other quantitative claims can be made in the absence of other data.
  2. Following this, Gautam pointed me to a capture in elog post 13908. The steps in (1) were repeated and the results are presented in Attachment #3; this time the contour is no longer circular but distorted. I didn't pursue this further, since the test was only meant to check that the approach extends (even if not seamlessly) to real data. I'm looking forward to trying this with more real data.
  3. So far, the problem has been that there is no reference data to compare the tracked centroid against. That should be resolved by the simulated data generated above. As mentioned before, some matplotlib features, such as saving with margins, introduce offsets in the tracked beam position; however, I still expect to see the same sinusoidal motion. As a quick test, I'll take the FFT of the centroid-position time series and check that the expected frequencies are present.
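A sketch of the threshold -> contour -> centroid sequence referred to in item 1, using standard OpenCV calls (the exact parameters in my script may differ):

    import cv2

    def spot_centroid(frame, threshold=127):
        """Threshold -> contour detection -> centroid of the largest contour.

        `frame` is an 8-bit grayscale image; returns (cx, cy) in pixels,
        or None if no contour is found above the threshold.
        """
        _, binary = cv2.threshold(frame, threshold, 255, cv2.THRESH_BINARY)
        # [-2] picks the contour list in both the OpenCV 3.x and 4.x return conventions
        contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] == 0:
            return None
        return m["m10"] / m["m00"], m["m01"] / m["m00"]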

I will wrap up the simulation code today and then go through Gabriele's repo. I will also test whether the contour detection method works on the simulated data. During our meeting it was pointed out that, when working with real data, care has to be taken to synchronize the data with the video; however, I would like to put that off until later in the pipeline since I don't think it affects the algorithm being used. I hope that's alright (?).

 

Attachment 1: variation.pdf
variation.pdf
Attachment 2: contours_simulated.mp4
Attachment 3: contours_real.mp4
  14633   Thu May 23 10:18:39 2019 KruthiUpdateCamerasCCD calibration

On Tuesday, I tried reproducing Pooja's measurements (https://nodus.ligo.caltech.edu:8081/40m/13986). The table below shows the values I got; pictures of the LED circuit, its schematic and the setup are attached. The power-meter readings fluctuated quite a bit for input voltages (Vcc) > 8 V, so to be safe I assign a maximum uncertainty of 50 µW there. Although the readings at lower input voltages varied little over time (variation < 2 µW), I don't know how reliable the Ophir power meter is at such low power levels. The optical power output of the LED was linear for input voltages from 10 V to 20 V. I'll proceed with the CCD calibration soon.

Input voltage (Vcc) [V]    Optical power
0 (dark reading)           1.6 nW
2                          55.4 µW
4                          215.9 µW
6                          0.398 mW
8                          0.585 mW
10                         0.769 mW
12                         0.929 mW
14                         1.065 mW
16                         1.216 mW
18                         1.330 mW
20                         1.437 mW
22                         1.484 mW
24                         1.565 mW
26                         1.644 mW
28                         1.678 mW
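As a quick cross-check of the claimed linearity, a sketch of fitting the 10-20 V rows of the table with numpy (values copied from the table above; measurement uncertainties ignored):

    import numpy as np

    # input voltage (V) and measured optical power (mW) over the linear region
    v = np.array([10, 12, 14, 16, 18, 20])
    p = np.array([0.769, 0.929, 1.065, 1.216, 1.330, 1.437])

    slope, intercept = np.polyfit(v, p, 1)
    print("slope = %.3f mW/V, intercept = %.3f mW" % (slope, intercept))

This gives a slope of roughly 67 µW per volt over the linear region.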

Attachment 1: setup.jpeg
setup.jpeg
Attachment 2: led_circuit.jpeg
led_circuit.jpeg
Attachment 3: led_schematic.pdf
led_schematic.pdf
  14635   Thu May 23 15:37:30 2019 MilindUpdateCamerasSimulation enhancements and performance of contour detection
  1. Implemented image-level noise for the simulation; only uniform random noise has been added so far.
  2. Implemented addition of uniform random noise to any sinusoidal motion of the beam spot.
  3. Implemented motion along the y axis according to the data in the "power_spectrum" file.
  4. Implemented simulation of random motion of the beam spot in both x and y directions (done previously by Pooja, but a cleaner version).
  5. Created a 10 s video file with the beam spot moving along the y direction as shown in Attachment #1. The motion was created by mixing four sinusoids at frequencies 0.1, 0.2, 0.4, 0.8 Hz with amplitudes, as fractions of N = 64, of 0.1, 0.09, 0.08, 0.09. FPS = 10; the total number of frames is 100 for convenience. See Attachment #5.
  6. Following this, I used the thresholding (threshold = 127, chosen arbitrarily), contour detection and centroid computation sequence (see Attachment #6 for results) to obtain the plot in Attachment 2 for the predicted motion of the y coordinate. As is evident, the centering and scale of the values obtained are off, and I still haven't figured out how to convert precisely from one to the other.
  7. Consequently, as a workaround, I simply normalised the values for each plot by subtracting the mean and dividing the resulting series by its maximum (a sketch of this step is given after this list). This gives the plots in Attachments 3 and 4, which show the normalised y-coordinate variation and the error between the actual and predicted values between 0 and 1, respectively.
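A minimal sketch of the normalisation workaround in item 7 and of the mean-square-error figure of merit planned below (my own reconstruction; the actual script may differ slightly, e.g. in whether the maximum or the maximum absolute value is used):

    import numpy as np

    def normalise(x):
        """Subtract the mean and scale by the maximum absolute value (item 7 workaround)."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        return x / np.max(np.abs(x))

    def mean_square_error(actual, predicted):
        """Mean square error between the normalised actual and predicted centroid series."""
        a, p = normalise(actual), normalise(predicted)
        return np.mean((a - p) ** 2)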

Things yet to be done:

Simulation:

  1. I will implement the mean-square-error function to compute the relative performance as conditions change.
  2. I will add noise both to the image and to the motion (i.e. introduce some randomness in the motion) to see how the performance, determined both by curves such as the ones below and by the mean square error, changes.
  3. Following this, I will vary the standard deviation of the beam spot along X and Y directions and try to obtain beam spot motion similar to the video in Attachment #2 of elog post 14632.
  4. Currently, I have made no effort to carefully tune the parameters associated with contour detection and threshold and have simply used the popular defaults. While this has worked admirably in the case of the simple simulated videos, I suspect much more tweaking will be needed before I can use this on real data.
  5. It is an easy step to determine the performance of the algorithm for random, circular and other motions of the beam spot. However, I will defer this till later as I do not see any immediate value in this.
  6. Determine noise threshold. In simulation or with real data: obtain a video where the beam spot is ideally motionless (easy to do with simulated data) and then apply the above approach to the video and study the resulting predicted motion. In simulation, I expect the predictions for a motionless beam spot video (without noise) to be constant. Therefore, I shall add some noise to the video and study the prediction of the algorithm.
  7. NOTE: the above approach relies on some previous knowledge of what the video data will look like. This is useful in determining which contours to ignore, if any like the four bright regions at the corners in this video.

Real data:

  1. Obtain real data and evaluate whether the algorithm is successful in determining contours that can be used to track the beam spot.
  2. Once the kind of video feed this will be used on is decided, use the data generated from such a feed to determine what the best settings of hyperparameters are and detect the beam spot motion.
  3. Synchronization of data stream regarding beam spot motion and video.
  4. Determine the calibration chain: angular motion of the optic --> beam-spot motion on the camera sensor --> video --> pixel mapping in the frames being processed.

Other approaches:

  1. Review work done by Gabriele with CNNs, implement it and then compare performance with the above method.
Attachment 1: actual_motion.pdf
actual_motion.pdf
Attachment 2: predicted_motion.pdf
predicted_motion.pdf
Attachment 3: normalised_comparison.pdf
normalised_comparison.pdf
Attachment 4: residue_normalised.pdf
residue_normalised.pdf
Attachment 5: simulated_motion1.mp4
Attachment 6: elog_22may_contours.mp4
  14638   Sat May 25 20:29:08 2019 MilindUpdateCamerasSimulation enhancements and performance of contour detection
  1. I used the same motion as defined in the previous elog and gradually added noise to the images. The noise added was uniform random noise - a 2-dimensional array of random numbers between 0 and a predetermined maximum (noise_amp); a minimal sketch of this step follows the list. The previous elog shows the variation of the y coordinate; here I am also uploading the effect of noise on the error in the prediction of the x coordinate. As a reminder, the motion of the beam-spot centre was purely vertical. Attachment #1 is the error for noise_amp = 0, #2 for noise_amp = 20 and #3 for noise_amp = 40. While Attachment #3 gives the impression of a large error, this is not really the case: without normalization, each peak corresponds to a deviation of one pixel about the central value (see Attachment #4 for reference).
  2. While the error does increase marginally, adding noise has no significant effect on the prediction of the y coordinate of the centroid, as Attachment #5 shows at noise_amp = 40.
  3. I am currently running an experiment to obtain the variation of the mean square error with different noise amplitudes and will put up the plots soon. Further, I shall vary the resolution of the image frames and the standard deviation of the Gaussian beam with time, try to obtain simulations very close to the real data available, and then determine the performance of the algorithm.
  4. The following videos serve as a quick reference for what the videos and detection look like at
    1. noise_amp = 20
    2. noise_amp = 40
  5. I also performed a quick experiment to see how small the amplitude of motion could be before the algorithm failed to detect it, and found this to occur about 2 orders of magnitude below the values used in the previous post. This is a line of thought I intend to pursue more carefully; I am looking into how OpenCV and Python handle images with float coordinates and will provide more details soon. This should give us an idea of the smallest beam-spot motion that can be resolved.
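A minimal sketch of the noise injection in item 1 (my own reconstruction; clipping back to the 8-bit range is an assumption about how the frames are stored):

    import numpy as np

    def add_uniform_noise(frame, noise_amp):
        """Add uniform random noise in [0, noise_amp) to a frame and clip to 8-bit range."""
        noise = np.random.uniform(0.0, noise_amp, size=frame.shape)
        return np.clip(frame.astype(float) + noise, 0, 255).astype(np.uint8)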
Quote:
  1. Implemented image-level noise for the simulation; only uniform random noise has been added so far.
  2. Implemented addition of uniform random noise to any sinusoidal motion of the beam spot.
  [...]
 

Attachment 1: residue_normalised_x.pdf
residue_normalised_x.pdf
Attachment 2: residue_normalised_x.pdf
residue_normalised_x.pdf
Attachment 3: residue_normalised_x.pdf
residue_normalised_x.pdf
Attachment 4: predicted_motion_x.pdf
predicted_motion_x.pdf
Attachment 5: normalised_comparison_y.pdf
normalised_comparison_y.pdf