Today Steve and I tried to capture the image of scattering of light by dust particles on the surface of ETMX using the GigE camera. The image obtained (at gain = 100, exposure time = 125000) is attached. Unlike the previous images, a creepy shape of bright spots was seen. Gautam helped us lock the infrared light and capture the image; a similar, less intense shape was seen. This may be due to dust on the lens.
There is an effort to switch to an all-digital system for the GigE camera feeds similar to the one running at LLO, which uses Joe Betzwieser's custom SnapPy package to interface with the cameras in Python and aggregate their feeds into a fancy GUI. Joe's code is a SWIG-wrapping of the commercial camera-driver API, Pylon, from Basler. The wrapping allows the low-level camera driver methods to be called from within Python, and their feeds are forwarded to a gstreamer stream also initiated from within Python. The problem is that his wrapping (and the underlying Pylon software itself) is only runnable on an older version of Ubuntu. Efforts to run his software on newer distributions at the 40m have failed.
I'm working on a fix to essentially rewrite his high-level SnapPy code (generators of GUIs, etc.) to use the newest version of Pylon (pylon5) to interface at a low level with the cameras. I discovered that since the last attempt to digitize the camera system, Basler has released their own official Python wrapping for Pylon on GitHub (PyPylon).
Progress so far:
The next and final step is to modify Joe's SnapPy package to import pypylon instead of his custom wrapping of an older version of the camera software, and update all of the Pylon calls to use the new methods. I'll hopefully get back to this early next week.
Aim: To synchronize data from the captured video and the signal applied to ETMX
In order to correlate the intensity fluctuations of the scattered light with the motion of the test mass, we plan to use a neural network. For this, we need a video of the scattered light synchronised with the signal applied to the test mass. Gautam helped me capture a 60 sec video of the scattered infrared laser light after ETMX was dithered in PITCH at ~0.2 Hz.
I developed a Python program that uses OpenCV to capture the video and convert it into a time series of the sum of pixel values in each frame, to see the variation. Initially we had tried the same with green laser light and a signal of approximately 11.12 Hz, but in order to see the variation clearly, we repeated the measurement with a lower-frequency signal after locking the IR laser today. I have attached the plots below. The first graph gives the intensity fluctuations from the video. The third and fourth graphs are those of the transmitted light and the signal applied to ETMX to shake it. Since the video captured using the camera was very noisy, and the intensity fluctuations in the scattered light had twice the frequency of the applied signal, we also captured a video after turning off the laser. The second plot gives the background noise, probably from the camera. Since the camera noise is very high, it may not be possible to train a neural network on this data set.
Since the captured videos consume a lot of memory, I haven't uploaded them here. I have uploaded the Python code 'sync_plots.py' to GitHub (https://github.com/CaltechExperimentalGravity/GigEcamera/tree/master/Pooja%20Sekhar/PythonCode).
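For reference, a minimal sketch of the frame-sum extraction (the filename here is hypothetical; see sync_plots.py in the repo for the real code):

# Convert a video into a time series of per-frame pixel sums
import cv2
import numpy as np

cap = cv2.VideoCapture('etmx_scatter.avi')       # hypothetical filename
fps = cap.get(cv2.CAP_PROP_FPS)

sums = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sums.append(np.sum(gray))                    # total intensity per frame
cap.release()

t = np.arange(len(sums)) / fps                   # time axis for the plot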
I spent a day trying to modify Joe B.'s LLO camera client-server code, without ultimate success. His code now runs without throwing any errors, but something inside the black-box handoff of his camera source code to gstreamer appears to be SILENTLY FAILING. Gautam suggested a call with Joe B., which I think is worth a try.
In the meantime, I've implemented a simple Python video feed streamer which does work, and which students can use as a base framework to implement more complicated things (e.g., stream multiple feeds in one window, save a video stream as a movie or animation).
It uses the same PyPylon API to interface with the GigE cameras as Joe's code does. However, it uses matplotlib instead of gstreamer to render the images. The matplotlib code is optimized for maximum refresh rate, and I observed it to achieve ~5 Hz for a single video feed. However, this demo code does not set any custom camera settings (it just initializes a camera with its defaults), so it's quite possible that the refresh rate is actually limited by, e.g., the camera exposure time.
Location of the code (on the shared network drive):
This demo initializes a single GigE camera with its default settings and continuously streams its video feed in a pop-up window. It runs continuously until the window is closed. I installed PyPylon from source on the SL7 machine (rossa) and have only tested it on that machine. I believe it should work on all our versions of Linux, but if not, run the camera software on rossa for now.
From within the above directory, the code is executed as
$python stream_camera_to_mpl.py [Camera IP address]
with a single argument specifying the IP address of the desired camera. At the time I tested, there was only one GigE camera on our network, at 192.168.113.152.
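For reference, a stripped-down sketch of the same approach (pypylon calls as I understand the API; the actual stream_camera_to_mpl.py may differ):

# Stream a GigE camera feed into a matplotlib window
import sys
import matplotlib.pyplot as plt
from pypylon import pylon

info = pylon.DeviceInfo()
info.SetIpAddress(sys.argv[1])                   # e.g. 192.168.113.152
cam = pylon.InstantCamera(
    pylon.TlFactory.GetInstance().CreateDevice(info))
cam.Open()                                       # default camera settings
cam.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)

grab = cam.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
fig, ax = plt.subplots()
im = ax.imshow(grab.Array, cmap='gray')
grab.Release()
plt.ion()
plt.show()
while plt.fignum_exists(fig.number):             # run until window closed
    grab = cam.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if grab.GrabSucceeded():
        im.set_data(grab.Array)
    grab.Release()
    plt.pause(0.001)
cam.StopGrabbing()
cam.Close()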
Aha! Video is back!
I think it would be good to add a flag whereby the video can be saved to disk in some uncompressed video format (ogg, avi, ?) instead of displayed to a matplotlib window. We could then use the default to just display video, but use the save-to-disk flag to grab a few minutes of video for image processing.
Aim: To develop a neural network in order to correlate the intensity fluctuations in the scattered light to the angular motion of the test mass. A block diagram of the technique employed is given in Attachment 1.
I have used Keras to implement supervised learning with a neural network (NN). I first developed a Python code that converts a 59 sec video of the scattered light, taken while a 0.2 Hz sine-wave excitation was applied to ETMX pitch, into image frames (of size 480*720), and stores the 2D pixel values of the 1791 captured frames in an hdf5 file. This array, of shape (1791, 345600), is given as input to the neural network.

So far I have tried only a regular (fully connected) NN, not a convolutional or recurrent one, using the sequential model in Keras. I tried various numbers of dense layers and varied the number of nodes in each layer. I got a test accuracy of approximately 7% with the following network: two dense layers, the first with 750 nodes and a dropout of 0.1 (10% of the nodes unused), the second with 500 nodes. To add nonlinearity to the network, both layers use a tanh activation function. The output layer has 1 node and expects an output of shape (1791, 1). This model was compiled with a categorical-crossentropy loss function and the RMSprop optimizer, since these are the most common choices in the image-analysis examples. The model is then trained against the dataset of mirror motion, obtained by sampling the cosine-wave fit to the mirror motion so that the shapes of the NN input and output are consistent. I used a batch size (number of samples per gradient update) of 32 and 20 epochs (number of times the entire dataset passes through the NN). However, this gave an accuracy of only 7.6%.
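For concreteness, a sketch of the network described above (my reconstruction from the text, with X and y standing for the frame array and the sampled mirror motion; the actual training script may differ):

# Dense network as described: 750 (tanh, dropout 0.1) -> 500 (tanh) -> 1
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(750, activation='tanh', input_shape=(345600,)))
model.add(Dropout(0.1))                       # 10% of nodes dropped
model.add(Dense(500, activation='tanh'))
model.add(Dense(1))                           # output of shape (1791, 1)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(X, y, batch_size=32, epochs=20)     # X: (1791, 345600), y: (1791, 1)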
I think the above technique overfits, since the dense layers use all the nodes during training apart from the dropout. Also, the beam spot moves in the video, so it may be necessary to use a convolutional NN to extract the information.
The video file can be accessed from this link: https://drive.google.com/file/d/1VbXcPTfC9GH2ttZNWM7Lg0RqD7qiCZuA/view.
Gabriele told us that he had used the beam spot motion to train his neural network, and that GPUs are necessary for this. So we have to figure out a better way to train the network.
Gautam, noon 11 Jun: This link explains why the straight-up fully connected NN architecture is ill-suited for the kind of application we have in mind. Discussing with Gabriele, he informed us that training on a GPU machine with 1000 images took a few hours. I'm not sure what the CPU/GPU scaling is for this application, but given that he trained for 10000 epochs, and we see that training for 20 epochs on Optimus already takes ~30 minutes, it seems like a futile exercise to keep trying on CPU machines.
Aim: To calibrate CCD of GigE using LED1050E.
The following table shows some of the specifications for LED1050E as given in Thorlabs datasheet.
The circuit diagram is given in Attachment 1.
Considering a power supply voltage Vcc = 15 V, current I = 20 mA & LED forward voltage VF = 1.25 V, the series resistance in the circuit is calculated as
R = (Vcc - VF)/I = (15 V - 1.25 V)/(20 mA) = 687.5 Ω
Attachment 2 gives a plot of resistance (R) vs input voltage (Vcc) when a current of 20mA flows through the circuit. I hope I can proceed with this setup soon.
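For reference, the R(Vcc) relation behind that plot can be reproduced in a few lines (the plotting range is an assumption):

# R vs Vcc for a fixed LED current of 20 mA
import numpy as np
import matplotlib.pyplot as plt

VF, I = 1.25, 0.020                 # LED forward voltage (V), current (A)
Vcc = np.linspace(5, 20, 100)       # assumed supply-voltage range
R = (Vcc - VF) / I                  # series resistance in ohms
plt.plot(Vcc, R)                    # R = 687.5 ohm at Vcc = 15 V
plt.xlabel('Vcc (V)')
plt.ylabel('R (ohm)')
plt.show()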
Today I built the LED (1050 nm) circuit inside a box as described in my previous elog. Steve drilled a 1 mm hole in the box as an aperture for the LED light.
Resistance (R) used = 665 Ω.
We connected a power supply, and IR light was detected using the IR detection card.
Later we changed the input voltage and measured the optical power using a powermeter.
Since the optical power values are very low, we may need to drill a larger hole.
The hole is now approximately 7 mm from the LED, so the aperture angle is approximately 2·tan⁻¹(0.5/7) ≈ 8°. From the radiometric curve given in the LED1050E datasheet, most of the power is within 20°, so a hole of size 2·7·tan(10°) ≈ 2.5 mm may be required.
I have also attached a photo of the LED beam spot on the IR detection card.
Aim : To develop a neural network on simulated data.
I developed a Python code that generates a 64*64 image of a white Gaussian beam spot at the centre of a black background. I applied a sine wave of frequency 0.2 Hz that moves the spot vertically (i.e. in pitch) and simulated this video at 10 frames/sec for 10 seconds. I then saved this data to an hdf5 file, reshaped it to a 1D array, and gave it as input to a neural network. Out of the 100 image frames, 75 were taken as the training dataset and 25 as test data. I varied several hyperparameters like the learning rate of the optimizer, number of layers, number of nodes, activation function, etc. Finally, I was successful in reducing the mean squared error with the following network model:
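A sketch of the simulated-video generation described above (the spot width and displacement amplitude are assumptions; the rest follows the text):

# Simulate a Gaussian spot moving vertically at 0.2 Hz, 10 fps, 10 s
import numpy as np

N, fps, T, f = 64, 10, 10, 0.2       # image size, frame rate, duration, Hz
sigma, amp = 4.0, 5.0                # spot width and peak motion in pixels
t = np.arange(T * fps) / fps
yy, xx = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')

frames = np.zeros((len(t), N, N))
for i, ti in enumerate(t):
    yc = N / 2 + amp * np.sin(2 * np.pi * f * ti)   # vertical (pitch) motion
    frames[i] = np.exp(-((yy - yc)**2 + (xx - N/2)**2) / (2 * sigma**2))

X = frames.reshape(len(t), -1)       # flatten each frame for the dense NN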
I have attached the plot of the output of the neural network (NN), the sine signal applied to simulate the video, and their residual error in Attachment 1. The plot of the variation in mean squared error (in log scale) with the number of epochs is given in Attachment 2.
I think this network worked easily because there is no noise in the input. Gautam suggested testing this network on simulated data with a noisy background.
Aim: To measure the optical power from led using a powermeter.
Yesterday Gautam drilled a larger hole of diameter 5 mm in the box as an aperture for the LED (the aperture angle is approximately 2·tan⁻¹(2.5/7) ≈ 39°). I repeated the measurements that I had done before (https://nodus.ligo.caltech.edu:8081/40m/13951). The measurements of optical power taken with a powermeter and the corresponding input voltages are listed below.
So we are able to obtain an optical power close to the value (1.6 mW) given in the Thorlabs datasheet for LED1050E (https://www.thorlabs.com/drawings/e6da1d5608eefd5c-035CFFE5-C317-209E-7686CA23F717638B/LED1050E-SpecSheet.pdf). I hope we can proceed to BRDF measurements for CCD calibration.
Steve: did you center the LED ?
Aim: To track the motion of beam spot in simulated video.
I simulated a video in which the beam spot, initially at the centre of the image, is moved by a sinusoidal signal of frequency 0.2 Hz and amplitude 1, i.e. it moves by at most 1 pixel. It can be found at this shared Google Drive link (https://drive.google.com/file/d/1GYxPbsi3o9W0VXybPfPSigZtWnVn7656/view?usp=sharing). I found a program that uses a Kernelized Correlation Filter (KCF) to track object motion in the video. In the program, we can either define the bounding box (rectangle) that encloses the object we want to track, or select the bounding box by dragging in the GUI. I then saved the bounding box parameters (x & y coordinates of the corner point, width & height) and plotted the variation in the y coordinates. I have yet to figure out how this tracker works, since the program reads the 64*64 image frames in the video as 480*640 frames with 3 colour channels, and the frame rate also changes randomly. The plot of the output of this tracking program & the applied signal is attached below. The output is not exactly sinusoidal, probably because the tracker cannot follow very slight movements, especially near the peaks where the slope is 0.
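A minimal sketch of the KCF tracking loop (the video filename and initial bounding box are assumptions; KCF comes from OpenCV-contrib):

# Track the beam spot with KCF and record its vertical position
import cv2

cap = cv2.VideoCapture('simulated_spot.avi')   # hypothetical filename
tracker = cv2.TrackerKCF_create()

ok, frame = cap.read()
bbox = (28, 28, 8, 8)             # (x, y, width, height) around the spot
tracker.init(frame, bbox)

y_coords = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, bbox = tracker.update(frame)
    if ok:
        y_coords.append(bbox[1])  # vertical coordinate traces the pitch
cap.release()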
Thanks to a discussion yesterday with Joe Betzwieser, I was able to identify and fix the remaining problem with the LLO GigE camera software. It is working now, currently only on rossa, but it can be set up on all the machines. I've started a wiki page with documentation and usage instructions here:
This page is also linked from the main 40m wiki page under "Electronics."
This software has the ability to both stream live camera feeds and to record feeds as .avi files. It is described more on the wiki page.
Aim: To find a model that trains the simulated data of Gaussian beam spot moving in a vertical direction by the application of a sinusoidal signal. The data also includes random uniform noise ranging from 0 to 10.
All the attachments are in the zip folder.
I simulated 128*128 images at 10 frames/sec by applying a 0.2 Hz sine wave that moves the beam spot, added random uniform noise ranging from 0 to 10, & resized the frames to 64*64 using OpenCV. 1000 cycles of this data were taken as the training set & 300 cycles as the test set for the following cases. Optimizer = Nadam (learning rate = 0.001), loss function = mean squared error, batch size = 32.
256 (dropout = 0.1) -> 256 (dropout = 0.1) -> 1
Activation : selu selu
Number of epochs = 240.
Variation in loss value of train & test datasets is given in Attachment 1 of the attached zip folder & the applied signal as well as the output of neural network given in Attachments 2 & 3 (zoomed version of 2).
The model fits well, but the test loss is lower than the training loss, which suggests something is off. Several sources indicate that dropping out some nodes during training while retaining all of them at test time is the probable reason for this (https://stackoverflow.com/questions/48393438/validation-loss-when-using-dropout , http://forums.fast.ai/t/validation-loss-lower-than-training-loss/4581). So I removed the dropout for the next run.
Activation : selu selu linear
Number of epochs = 200.
Variation in loss value of train & test datasets is given in Attachment 4 of the attached zip folder & the applied signal as well as the output of neural network given in Attachments 5 & 6 (zoomed version of 5).
But still no improvement.
I changed the optimizer to Adam and tried the same model topology & hyperparameters as in case 2, with no success (Attachments 7, 8 & 9).
Finally, I think this is because I'm training & testing on the same data. So I'm now training on the simulated video, but moving the spot by a maximum of 2 pixels only, and testing on a video of ETMY that we captured earlier.
Aim: To develop medm screen for GigE.
Gautam helped me set up the medm screen through which we can interact with the GigE camera. The steps adopted are as follows:
(i) Copied CUST_CAMERA.adl file from the location /opt/rtcds/userapps/release/cds/common/medm/ to /opt/rtcds/caltech/c1/medm/MISC/.
(ii) Made the following changes by opening CUST_CAMERA.adl in text editor.
(iii) Added this .adl file as a drop-down menu 'GigE' in the VIDEO/LIGHTS section of the sitemap (circled in Attachment 1): opened the Resource Palette of VIDEO/LIGHTS, clicked on Label/Name/Args, & defined the macros CAMERA=C1:CAM-ETMX,CONFIG=C1-CAM-ETMX in the Arguments box of the Related Display Data dialog box (circled in Attachment 2). In the Related Display Data dialog box, the Display Label is given as GigE and the Display File as ./MISC/CUST_CAMERA.adl
(iv) All the channel names can be found in Jigyasa's elog https://nodus.ligo.caltech.edu:8081/40m/13023
(v) Since the slider (circled in Attachment 3) for the pixel sum was not moving, changed the high limit value to 10000000 in the PV Limits dialog box. This value is set so that the slider reaches the other end when the exposure time is set to maximum.
(vi) Set the Snapshot channel C1:CAM-ETMX_SNAP to off (very important!). Otherwise we cannot interact with the camera.
(vii) The GigE camera gstreamer client is run in a tmux session.
Now we can change the exposure time and record a video by specifying the filename and its location using the medm screen. However, while recording the video, the gstreamer video launcher for the GigE stops or gets stuck.
I had created a Python code to find the combination of hyperparameters that best trains the neural network. The code (nn_hyperparam_opt.py) is in the GitHub repo. It has been running on the cluster for a few days. In the meanwhile, I tried some combinations of hyperparameters by hand.
These give a low loss value of approximately 1e-5, but there is a large error bar on the loss value, since it fluctuates a lot even after 1500 epochs. The reason for this is unclear.
Input: 64*64 image frames of a simulated video, generated by applying a 0.2 Hz beam-motion sine wave at 10 frames per second. The input data is given as an hdf5 file.
Train : 100 cycles, Test: 300 cycles, Optimizer = Nadam (learning rate = 0.001)
256 -> 128 -> 1
Activation : selu selu linear
Case 1: batch size = 48, epochs = 1000, loss function = mean squared error
Plots of the output predicted by the neural network (NN) & the input signal are shown in the 1st graph, and the variation in loss value with epochs in the 2nd graph.
Case 2: batch size = 32, epochs = 1500, loss function = mean squared logarithmic error
Plots of the output predicted by the neural network (NN) & the input signal are shown in the 3rd graph, and the variation in loss value with epochs in the 4th graph.
I tried to reduce the overfitting problem in the previous neural network by reducing the number of nodes and layers, and by varying the learning rate and beta factors (exponential decay rates of the moving first and second moments) of the Nadam optimizer, assuming that an error of 5% is reasonable.
32*32 image frames (converted to a 1D array, with pixel values of 0 to 255 normalized) of a simulated video, generated by applying a sine signal that moves the beam spot in pitch at a frequency of 0.2 Hz, at 10 frames per second.
Total: 300 cycles , Train: 60 cycles, Validation: 90 cycles, Test: 150 cycles
Input --> Hidden layer (4 nodes, selu) --> Output layer (1 node, linear)
Batch size = 32, Number of epochs = 128, loss function = mean squared error
Learning rate = 0.00001, beta_1 = 0.8 (default value in Keras = 0.9), beta_2 = 0.85 (default value in Keras = 0.999)
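A sketch of this small model and the modified Nadam settings (layer sizes and parameters from the text; variable names are mine):

# Small dense model: 4 (selu) -> 1 (linear), Nadam with custom betas
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Nadam

model = Sequential()
model.add(Dense(4, activation='selu', input_shape=(32 * 32,)))
model.add(Dense(1, activation='linear'))
model.compile(loss='mean_squared_error',
              optimizer=Nadam(lr=1e-5, beta_1=0.8, beta_2=0.85))
model.fit(X_train, y_train, batch_size=32, epochs=128,
          validation_data=(X_val, y_val))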
The plot of the output predicted by the neural network, the applied input signal & the residual error is given in the 1st attachment.
Changed the number of nodes in the hidden layer from 4 to 8; all other parameters are the same.
These plots show that, when the residual error increases, the output of the neural network basically has a smaller amplitude than the applied signal. The reason for this kind of training error is unclear to me.
When the beta parameters of the optimizer are moved farther from 1, the error increases.
Input: Simulated video of beam spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz and amplitude ratios (relative to the frame size) of 0.1, 0.04, 0.05, 0.08
The data has been split into train, validation and test datasets, and I tried training a neural network with the same model topology & parameters as in my previous elog (https://nodus.ligo.caltech.edu:8081/40m/14070).
The output of the NN and the residual error are shown in Attachment 1. This NN model gives a large error. So I think we have to increase the number of nodes and the learning rate so that we get a lower error with the single-sine-wave simulated video (without overfitting), and then try training on the linear combination of sine waves.
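For reference, the 4-sine drive used for these simulated videos can be generated roughly as follows (frame rate and duration are assumptions):

# Sum of 4 sine waves driving the simulated beam spot
import numpy as np

fps, T = 10, 100
freqs = [0.2, 0.4, 0.1, 0.3]                 # Hz
amps = [0.1, 0.04, 0.05, 0.08]               # fractions of the frame size
t = np.arange(int(T * fps)) / fps
y = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))
# multiply y by the frame size (pixels) to displace the simulated spot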
Case 2 :
Normalized the target sine signal of the NN so that it ranges from -1 to 1, then trained the same neural network as in my previous elog on the simulated video created using a single sine wave. This gave a comparatively lower error (shown in Attachment 2). But if we train using this network, we only get the frequency of the test mass motion; we can't resolve the amount by which the test mass moves. So I'm unclear about whether we can use this.
Since the error was high for the same input as in my previous elog (http://nodus.ligo.caltech.edu:8080/40m/14089), I modified the network topology by tuning the number of nodes, layers and learning rate so that the model fitted the sum of 4 sine waves efficiently, saved the weights of the final epoch, and then, in a separate program, loaded the saved weights & tested on a simulated video produced by moving the beam spot from the centre of the image by a sum of 4 sine waves whose frequencies and amplitudes change with time.
Input: Simulated video of beam spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz and amplitude ratios (relative to the frame size) of 0.1, 0.04, 0.05, 0.08. This is divided into train (0.4), validation (0.1) and test (0.5).
Input --> Hidden layer (8 nodes) --> Output layer (1 node)
Optimizer: Nadam ( learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)
Normalized the target sine signal of NN by dividing by its maximum value.
The plot of the output predicted by the neural network, the applied input signal & the residual error is given in the 1st attachment. The model weights from the final epoch were saved to an h5 file, then loaded & tested on simulated data of 4 sine waves whose amplitudes and frequencies change with time from their initial values by random uniform noise ranging from 0 to 0.05. The plot of the NN's predicted output, the target sine-wave signal & the residual error is given in the 2nd attachment. The actual signal can be recovered from the predicted NN output by multiplying by the normalization constant used before. However, even though the network fits the training & validation sets efficiently, it gives a comparatively large error on the test data of varying amplitude & frequency.
Gautam suggested training directly on this noisy data of varying amplitudes and frequencies. The results using the same NN model are given in Attachment 3. Tuning the number of nodes, layers or learning rate didn't improve the fit much in this case.
This looks like good progress. Instead of fixed sines or random noise, you should generate now a time series for the motion which is random noise but with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use inverse FFT to get the time series from the open loop OL spectra (being careful about edge effects).
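A minimal sketch of that inverse-FFT suggestion, assuming the measured spectrum is available as a one-sided ASD on a frequency grid:

# Colored noise whose one-sided PSD follows asd_meas**2
import numpy as np

def noise_from_asd(f_meas, asd_meas, fs, T):
    n = int(T * fs)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.interp(f, f_meas, asd_meas)         # ASD on the FFT grid
    phase = np.exp(2j * np.pi * np.random.rand(len(f)))
    X = amp * phase * np.sqrt(n * fs / 2.0)      # one-sided PSD scaling
    X[0] = 0.0                                   # drop the DC term
    return np.fft.irfft(X, n)                    # trim the ends against edge effects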
I was thinking a little more about the way we are training the network for the current topology - because the network has no recurrent layers, I guess it has no memory of past samples, and so it doesn't have any sense of the temporal axis. In fact, Keras by default shuffles the training data you give it randomly so the time ordering is lost. So the training amounts to requiring the network to identify the center of the Gaussian beam and output that. So in the training dataset, all we need is good (spatial) coverage of the area in which the spot is most likely to move? Or is the idea to develop some tools to generate video with spot motion close to that on the ETM in lock, so that we can use it with a network topology that has memory?
Input: The previous simulated video of beam spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz and amplitude ratios (relative to the frame size) of 0.1, 0.04, 0.05, 0.08, where random uniform noise ranging from 0 to 0.05 has been added to the amplitudes and frequencies. This is divided into train (0.4), validation (0.1) and test (0.5).
Activation: selu linear
Plots of CNN output & applied signal given in Attachment 1. The variation in loss value with epochs given in Attachment 2.
This needs to be analysed further, with increasing random uniform noise over the pixels and by training the CNN on simulated data with varying amplitudes and frequencies of the sine waves.
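For concreteness, a hedged sketch of a small CNN of this kind (the exact topology isn't spelled out above; this is only one plausible layout with the stated activations):

# Small CNN mapping a frame to the predicted spot displacement
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(8, (3, 3), activation='selu',
                 input_shape=(64, 64, 1)))   # frames as single-channel images
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(1, activation='linear'))     # predicted spot displacement
model.compile(loss='mean_squared_error', optimizer='nadam')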
I have done the following thus far since elog #14626:
I will wrap up the simulation code today and proceed to going through Gabriele's repo. I will also test if the contour detection method works with the simulated data. During our meeting, it was pointed out that when working with real data, care has to be taken to synchronize the data with the video obtained. However, I wish to put off working on that till later in the pipeline as I think it doesn't affect the algorithm being used. I hope that's alright (?).
On Tuesday, I tried reproducing Pooja's measurements (https://nodus.ligo.caltech.edu:8081/40m/13986). The table below shows the values I got. Pictures of the LED circuit, schematic and setup are attached. The powermeter readings fluctuated quite a bit for input voltages (Vcc) > 8 V; therefore, I expect a maximum uncertainty of 50 µW, to be on the safe side. Though the readings at lower input voltages didn't vary much over time (variation < 2 µW), I don't know how reliable the Ophir powermeter is at such low power levels. The optical power output of the LED was linear for input voltages of 10 V to 20 V. I'll proceed with the CCD calibration soon.
Things yet to be done:
On Friday, I tried calibrating the CCD with the following setup. Here, I present the expected values of the scattered power (Ps) at s = 45°, where s is the scattering angle (refer to the figure). The LED box has a hole with an aperture of 5 mm, and the LED is placed approximately 7 mm from the hole, so the aperture angle is 2·tan⁻¹(2.5/7) ≈ 40°. Using this, the spot size of the LED light at a distance d was estimated. The width of the LED holder/stand (approx 4") puts a constraint on the lowest possible s. At this lowest possible s, the distance of the CCD/Ophir from the screen is given by . This was taken as the imaging distance for the other angles as well.
In the table below, Pi is taken to be 1.5mW, and Ps and were calculated using the following equations:
[Table: lowest possible s (in degrees) vs. expected Ps at s = 45° (in µW)]
On measuring the scattered power (Ps) using the Ophir power meter, I got values of the same order as the expected values given in the above table. As Gautam suggested, we could use a photodiode to detect the scattered power, as it will offer better precision, or we could calibrate the power meter using the method mentioned in Johannes's post: https://nodus.ligo.caltech.edu:8081/40m/13391.
Yesterday, we were able to capture some images of objects at a distance of approx 60 cm (see the attachment), with the GigE mounted onto the telescope. I think Johannes had used it earlier to image the ETMX (https://nodus.ligo.caltech.edu:8081/40m/13375). His elog entry doesn't say anything about the focal lengths of the lenses he used. The link to the Python code he used to calculate the lens solution wasn't working; after Gautam fixed it, I took a look at it. He used 150 mm (front lens) and 250 mm (back lens) as the focal lengths of the lenses in the calculation. Using the lens formula and an image of a nearby light source, with a very rough measurement, I found the focal lengths to be around 14 cm and 23 cm. So I'm assuming that the lenses in the telescope have the same focal lengths as in his code, i.e. 150 mm and 250 mm.
no BMP files
The following summarizes the steps for setting up and interacting with a GigE camera.
Launching the PylonViewerApp:
Using setup python scripts to interact with the GigE (a summary of the steps listed here and here)
Chub and I are trying to figure out a way to co-mount the GigE in the existing cylindrical enclosure. I'm attaching a picture of the current setup used for imaging MC2. So far, I have thought of 3 possible setups (schematics attached), but I don't know how feasible they are. Let us know if you have any other ideas.
Update: Setup 3 would require us to use the 52 cm long enclosure. It has a long breadboard welded to it, which makes it very convenient, but the whole setup becomes quite heavy, and it's not that safe to install such a heavy enclosure on top of the vacuum system. Also, aligning its components would be more complicated than in the other setups.
I decided to start with the simplest one, so I tried implementing setup 1. Fitting the analog camera horizontally alongside the telescope turned out to be tricky. Though I did manage to fit it in, it didn't leave any room to change the orientation of the beamsplitter. As Koji suggested, I'll try setup 2.
Figured out how to grab frames by looking at the pypylon documentation, as that turned out to be easier than modifying Jon's code. Still not sure how to modify the exposure time (other than using the pylon app, the only technique I know so far is to adjust the exposure manually on the medm screen and then run the scripts as described in the previous elog). I will figure that out tomorrow and make a script suitable for Kruthi's use (obtain a bunch of images at different exposure times). I will also try to integrate the video saving and streaming code into this and have a neat little script set up asap.
caget/caput probably does the job.
Thanks! It does indeed do the trick! With that I was able to
Further, a quick look at the camera server code in /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_server.py revealed that the script expects a "Number of Snapshots" entry under "Camera Settings" in the configuration file (i.e. in /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/C1-CAM-ETMX.ini), which wasn't present before. Adding this parameter to the config file allows one to take a snapshot using the medm screen. In fact, unlike as described in this elog, I was able to start the server and client as described in elog 14649, and then obtain snapshots using the terminal command caput C1:CAM-ETMX_SNAP 1.
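The same snapshot trigger can be issued from within Python rather than the shell (a minimal sketch, assuming pyepics is available on the workstation):

from epics import caput

caput('C1:CAM-ETMX_SNAP', 1)   # equivalent to: caput C1:CAM-ETMX_SNAP 1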
Today I ran into the following errors:
Therefore, Koji and I took a look at it and, putting our faith in Gautam's hunch from elog 13023, walked down to rack 1Y1 and keyed the crate. Following this, all the functionality previously described was restored! Koji then took a look at all the channels handled by this machine and bestowed upon me permission to key the crate should I lose control of the GigE again.
I managed to fit all the parts into the cylindrical enclosure without having to drill a hole in the enclosure to mount the analog camera (pictures attached); thanks to Koji for helping me find some fancy mechanical components (swivel post clamps, right angle post clamps and brackets). On Thursday, with Chub's help, I took a look at all the current analog camera positions with respect to the cylindrical enclosures. I think this setup gives me enough flexibility to align the components, as necessary, to be able to image the test masses/mirrors in all the cavities. I'll set it up for MC2 tomorrow.
Steps to take snapshots using GigE at different exposures [Instructions for Kruthi]:
The Python script takes in the above parameters and then takes snapshots, setting the exposure to values starting at minval and going up to maxval, incrementing by step each turn. This uses a simple for loop and is nothing elaborate.
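A bare-bones version of that loop might look like the following (the exposure channel name is a guess, and minval/maxval/step stand in for the script's arguments):

# Sweep the exposure time and trigger a snapshot at each value
import time
from epics import caput

minval, maxval, step = 1000, 100000, 1000    # assumed exposure range
for exp in range(minval, maxval + step, step):
    caput('C1:CAM-ETMX_EXP', exp)            # hypothetical channel name
    caput('C1:CAM-ETMX_SNAP', 1)             # trigger a snapshot
    time.sleep(2)                            # let the server write the file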
A few unrelated updates:
Today, with Milind's help, I installed the analog camera in the MC2 enclosure [picture attached], but it is not yet focused. We replaced the bulky angular bracket with a simple one; this saved a lot of space inside, and it's easier to align other components now. I'll finish setting it up tomorrow.
Telescope design for MC2: Instead of using two 3"-long stackable lens tubes (SM2L30), we can use one 3" lens tube with an adjustable lens tube (SM2V10), as shown in the picture. This gives the flexibility to change the focal plane distance by 1" and also reduces the overall length of the telescope from 9 inches to 6-7 inches. I decided to use two 150 mm biconvex lenses instead of a combination of 150 mm and 250 mm lenses, as the former combination results in a lower focal plane distance for a given distance between the lenses.
Specifications of current telescope system (for future reference):
With the above telescope, assuming the MC2 mirror to be at a distance of approx 75 cm, the focal plane distance will range from 7.9 cm to 8.1 cm. Using the adjustable lens tube, we can then make fine adjustments.
Yesterday, Koji helped me clean all the optics being used for the setup. We tried aligning the cameras with the previous configuration, but after connecting the analog camera cables there wasn't much room to align the beam splitter. Today, I tried a different configuration and tested the alignment of the analog camera, GigE, beam splitter and mirror using a laser beam [pictures attached]. But the MC2 isn't locked, so we can't yet test whether the whole setup is actually aligned with the mirror inside the vacuum.
Also, with this setup, just by using posts of different lengths with the middle 90º post clamp, we will be able to move all the components. This way, we can easily image the beam spot in all the cavities.
I'm attaching a picture of the screen. I just positioned the enclosure by turning it a bit and I suppose we can see the mirror inside the vacuum now (the MC2 is still not locked).
Today, Rana asked me to work on improving the simulations based on the ideas we discussed last week. As of the previous elog, the simulation accommodated only
Today, I added the simulation of point scatterers.
The image on the sensor (camera) is produced in roughly the following steps.
Herewith, in Attachments #1, #2 and #3, I am attaching videos obtained by varying the scattering amplitude and the number of scattering points in a (so far vain) attempt to reproduce this data. I shall work more on this simulation on Friday.
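One plausible reading of the point-scatterer step is sketched below (the scatterer positions, amplitudes, beam width and PSF are all assumptions):

# Point scatterers re-radiating in proportion to the local beam power
import numpy as np
from scipy.ndimage import gaussian_filter

N = 64
yy, xx = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
beam = np.exp(-((yy - N/2)**2 + (xx - N/2)**2) / (2.0 * 8**2))  # illumination

n_scat = 50                                   # number of scattering points
pts = np.random.randint(0, N, size=(n_scat, 2))
amps = np.random.rand(n_scat)                 # random scattering amplitudes

img = np.zeros((N, N))
for (py, px), a in zip(pts, amps):
    img[py, px] += a * beam[py, px]           # re-radiation ~ local beam power
img = gaussian_filter(img, sigma=1.5)         # finite camera resolution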
Neural network stuff:
GANs for simulation:
Networks for beam tracking:
don't need to lock - make sure the 4 OSEMs are centered on the camera field just as we have for the arm cavity mirrors
As directed by Gautam, I have set up one script, interact.py (at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/interact.py), to perform the following two tasks:
Steps to view GigE feed for a fixed amount of time:
Steps to record the GigE feed for a fixed amount of time:
I tried to look for elegant solutions that wouldn't require editing the code that Jon wrote, and stumbled upon this useful bit of information, but ended up deciding that it was just easier to change camera_client_movie.py (/opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon/camera_client_movie.py). It can still be run as previously described, with video recording terminated using Ctrl-C. The steps to record for a fixed period of time are
I'll make aliases for these to make the whole process more user friendly. I'm halting this for now and will discuss what else needs to be done once Gautam gets back.
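For reference, a generic time-bounded capture pattern looks roughly like the following (an illustration only, not the modified camera_client_movie.py; grab_frame() is a placeholder for the pypylon read, and OpenCV's VideoWriter stands in for the gstreamer pipeline):

# Record frames for a fixed duration, then close the file
import time
import cv2

duration = 60.0                                    # seconds to record
writer = cv2.VideoWriter('out.avi', cv2.VideoWriter_fourcc(*'XVID'),
                         10.0, (640, 480), isColor=False)
t0 = time.time()
while time.time() - t0 < duration:
    frame = grab_frame()                           # placeholder camera read
    writer.write(frame)
writer.release()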
Regarding the autolocker: I spoke to Aaron today and as he is in tomorrow, I'll ask him about the burt files and the ideal configuration.
I'm also starting with GANs now.
Today, I tried aligning it further; I'm attaching a picture of it. We are not able to see all 4 OSEMs yet. In the reference picture I had taken before removing the previous analog setup, the OSEMs are not visible, so I don't really understand what the other 2 spots on the current screen are. Are they actually OSEMs?
I need a laptop next to MC2, so that I can have a look at it and make further alignments. So, I tried accessing the GigE attached to the telescope using Paola. The pylon app on it throws an error a few seconds after running in continuous shot mode, and disconnects the GigE; everything works fine on Rossa though. I'll put up further details soon.
The analog camera is aligned and we are able to see all 4 OSEMs (pictures attached). Due to a secondary reflection from the beamsplitter (BS1-1064-33-2037-45S), when the MC2 is locked, we get a ghost image of the beam spot along with the primary image.
The pylon app on Paola was reporting the error "0xE1000014: The buffer was incompletely grabbed". I followed the instructions given on this site and changed the 'Packet Size' to 1500 and the 'Inter-Packet Delay' parameter to a value greater than 20,000 (µs). This did the trick, and I was able to use continuous shot mode without any interruption. I'm attaching a picture of MC2 that I captured using the GigE.
I have begun setting up an environment (as mentioned before, on my local machine) and scripts to run experiments with convolutional networks for beam tracking. All code has been pushed to this folder in the GigEcamera repository. I am presently looking for pre-processing techniques for the video that go beyond the usual "crop the images, normalize pixel values, convert to grayscale".
Worked further on this. I skimmed through a few resources for details of what pre-processing can be done. Here are some of the useful things I found during today's reading (I am planning to convert all these resources, particularly those for GANs, into either a README on the repo or a wiki page soon). The work I skimmed through today mostly pointed to the use of a median filter for pre-processing, if any is to be done. I am presently using the Sequential() API in Keras to set up the neural network. I will train it tomorrow.
In the previous meeting, Koji pointed out (once again) that I should determine whether the displacement values and frames are synchronized before training a network. Pooja did the following last time. Koji also suggested that I first predict the motion (a series of x and y coordinates) and then slide the resulting plots around until I get the best match to the original motion. This is not possible with a neural-network-based approach, however, since the network learns exactly what you show it: it will learn any mismatch between the labels and the frames and predict exactly that. Therefore, I came up with what Koji described as a "hacky" method to achieve the same thing, using the OpenCV work described previously in this elog (the only addition being the application of a mask to block out the OSEMs and work only with the beam spot).
Hacky technique to sync frames and labels:
[Koji, Milind - 21/06/2019]
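A minimal sketch of the lag search behind this method (my reconstruction; the two traces are the OpenCV spot positions and the recorded displacement, resampled to a common rate):

# Find the lag that best aligns the video trace with the label trace
import numpy as np

def best_lag(video_trace, label_trace, max_lag=100):
    # Slide one trace against the other and keep the lag that
    # maximizes the correlation coefficient.
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(np.roll(video_trace, l), label_trace)[0, 1]
             for l in lags]
    return lags[int(np.argmax(corrs))]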
Upcoming work (in the order of priority):
Turns out, focusing the GigE is actually a bit tricky. With pylon, every time I change the exposure or the focus, I run into the error I mentioned earlier in one of my elogs, so I tried using the Python scripts to interact with the GigE. But whenever I try to change the focal plane distance by rotating the lens coupler, the ethernet cable connection becomes loose, and the camera server needs to be relaunched every now and then. Also, every time we want to change the distance between the lenses, the telescope needs to be dismantled and refocused. I'll try to come up with a better telescope design for this.
Yesterday, I focused the GigE using a low exposure time and a small iris aperture, to make sure that we are actually seeing a sharp image of the beam spot. I'm attaching a picture of the beam spot I took while focusing; unfortunately, I forgot to take a picture after I had focused it completely. I'm also attaching a picture of the final setup for future reference.
Yesterday night, Rana asked me to lock the MC2. I figured that the PSL shutter was closed; I just opened it and was able to see the beam spot on the analog camera screen.
I discussed this with Gautam and he asked me to come up with a list of signals that I would need for my use and then design the data acquisition task at a high level before proceeding. I'm working on that right now. We came up with a very elementary sketch of what the script will do-
Tomorrow I will try and prepare a dummy script for this before the meeting at noon. Gautam asked me to familiarize myself with the awg, cdsutils (I have already used ezca before) to write the script. This will also help me do the following two tasks-
I got to speak to Gabriele about the project today. He suggested that if I am using Rana's memory-based approach, I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time, and that if I use the frame-wise approach, I should try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible, something that Rana and Gautam emphasized as well.
I am pushing the code that I wrote for
to the GigEcamera repository.
Gautam also asked me to look at Jigyasa's report and elog 13443 to come up with the specs of a machine that would accommodate a dedicated camera server.