ID   Date   Author   Type   Category   Subject
  14708   Sat Jun 29 03:11:18 2019   Kruthi   Update   Cameras   CCD Calibration

Finding the gain of the photodiode: The three-position rotary switch of the photodiode being used (PDA520) wasn't working, so I determined its gain by making a comparative measurement between the Ophir power meter and the photodiode. The photodiode has a responsivity of 0.34 A/W at 1064 nm (obtained from the responsivity curve in the spec sheet using curve-digitizing software). Using the following equation, I determined that the gain setting was 20 dB.

\mathrm{Transimpedance\ gain\ (V/A)} = \frac{\mathrm{Photodiode\ reading\ (V)}}{\mathrm{Ophir\ reading\ (W)} \times \mathrm{Responsivity\ (A/W)}}

Setup: A 1050 nm LED (the closest we have to 1064 nm) is used as the light source instead of a laser, to eliminate coherence effects that might affect the radiometric calibration. The LED is placed in a box with a 5 mm diameter hole (aperture angle approx. 40 degrees). Suitable lenses focus the light onto a white paper, which is fixed at an arbitrary angle and serves as a Lambertian scatterer. To make a comparative measurement between the photodiode (PDA520) and the GigE, we need to account for their different sensor areas, 8.8 mm (aperture diameter) and 3.7 mm x 2.8 mm respectively. This can be done either by using an iris with a common aperture, so that the photodiode and the GigE receive the same amount of light, or by calculating the power incident on the GigE from the power incident on the photodiode and the ratio of sensor areas (using the fact that the power scattered by a Lambertian scatterer per unit solid angle is constant).

Calibration of GigE 152 unit: I took around 50 images using the exposure_variation.py code, starting with an exposure time of 2000 µs and increasing in steps of 2000 µs. The code doesn't allow exposure times greater than 100 ms, so I took a few more images at higher exposures manually. From each image I subtracted a dark image (not in the sense of the usual CCD calibration, but simply an image with the same exposure time and no LED light). These dark images do the job of the usual dark frame + bias frame and also account for stray light. A plot of pixel sum vs exposure time is attached. From a linear fit to the unsaturated region, I obtained the slope and calculated the calibration factor.
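(For illustration, a minimal Python sketch of this dark-subtraction and linear-fit step; the file names, exposure-time list and saturation cutoff below are placeholders of my choosing, not the actual ones used with exposure_variation.py.)

import numpy as np
from PIL import Image

# Hypothetical bright/dark image pairs taken at the same exposure times (in us).
exposures_us = np.arange(2000, 100001, 2000)
bright_files = [f"bright_{t}us.tiff" for t in exposures_us]
dark_files = [f"dark_{t}us.tiff" for t in exposures_us]

pixel_sums = []
for bright, dark in zip(bright_files, dark_files):
    frame = np.asarray(Image.open(bright), dtype=float)
    dark_frame = np.asarray(Image.open(dark), dtype=float)
    # The dark image at the same exposure removes dark current, bias and stray light in one go.
    pixel_sums.append((frame - dark_frame).sum())
pixel_sums = np.array(pixel_sums)

# Fit only the unsaturated region and quote the slope in counts per microsecond.
unsaturated = pixel_sums < 0.9 * pixel_sums.max()   # crude cutoff; adjust by eye
slope, offset = np.polyfit(exposures_us[unsaturated], pixel_sums[unsaturated], 1)
print(f"slope = {slope:.1f} counts/us")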

Equations:      \mathrm{Power}\ (P) = \frac{\mathrm{Photodiode\ reading\ (V)}}{\mathrm{Transimpedance\ gain\ (V/A)} \times \mathrm{Responsivity\ (A/W)}} \qquad \mathrm{Calibration\ factor}\ (CF) = \frac{P}{\mathrm{slope}}

Result: CF = 1.91 x 10^-16 W-sec/counts. Update: I had used a wrong value for the area of the photodiode. Using 61.36 mm^2 as the area, I get 2.04 x 10^-15 W-sec/counts.
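(The arithmetic behind this number, as a sketch: the responsivity and sensor areas are the values quoted above; the photodiode reading, transimpedance gain value and slope are example/assumed numbers, not a specific measurement.)

# Calibration-factor arithmetic, following the equations above.
pd_reading_V = 0.9            # example photodiode reading (V); placeholder
transimpedance_V_per_A = 1e6  # assumed PDA520 transimpedance gain at the 20 dB setting (V/A)
responsivity_A_per_W = 0.34   # PDA520 responsivity at 1064 nm

# Power incident on the photodiode from the PD voltage.
P_pd = pd_reading_V / (transimpedance_V_per_A * responsivity_A_per_W)

# Scale to the GigE sensor using the ratio of collecting areas
# (Lambertian scatterer: power per unit solid angle is constant).
area_pd_mm2 = 61.36           # PDA520 aperture area
area_gige_mm2 = 3.7 * 2.8     # GigE sensor area
P_gige = P_pd * area_gige_mm2 / area_pd_mm2

slope_counts_per_us = 200.0   # from the pixel sum vs exposure time fit; placeholder
CF = P_gige / (slope_counts_per_us * 1e6)   # counts/us -> counts/s, so CF comes out in W-sec/counts
print(f"CF = {CF:.2e} W-sec/counts")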

I'll put up the uncertainties soon. I'm also attaching the GigE spectral response curve for future reference.

Attachment 1: calibration_setup.jpg
Attachment 2: CCD_calibration_2.jpeg
Attachment 3: GigE_spectral_response_curve.png
Attachment 4: 152_calibration_plot.png
  14710   Sun Jun 30 22:02:26 2019   Milind   Update   Cameras   Keyed c1aux crate

I wanted to try out the unstick.py script on c1aux but kept running into timeout errors. I was also confronted by a blank GigE screen. Further, I couldn't telnet into c1aux using telnet c1aux as described here. Therefore, I went in and keyed the c1aux crate (1Y1).

  14714   Mon Jul 1 20:11:34 2019   Milind   Update   Cameras   Simulation enhancements

Today, I read a lot more about BRDF and modelling but could not make much headway regarding the implementation in the simulation. I've stopped for now and I'll take another crack at it tomorrow.

Quote:

Yesterday, Rana asked me to look at Hiro Yamamoto's docs on the DCC to improve the simulation. I'm performing a first pass (=> Just skimming through to see if they're relevant, I will go through them more carefully soon!) and putting up stuff here for future reference. @Kruthi's help much appreciated!

  14723   Wed Jul 3 23:53:38 2019   Milind   Update   Cameras   data for nns

Tried collecting data today. I was unable to keep the camera_server code running for any length of time as it kept throwing segfaults. Will take another shot tomorrow.

Quote:

The GigE is focused now (judged by eye) and I have closed the lid. I'm attaching a picture of the MC2 beam spot, captured using GigE at an exposure time of 400µs.

What was the solution to resolving the flaky video streaming during the alignment process????

-> I think the issue was with either the poor wireless network connection or the GigE-PoE ethernet cable.

Quote:

Turns out, focusing the GigE is actually a bit tricky. With pylon, every time I change the exposure or the focus, I run into the error I mentioned earlier in one of my elogs, so I tried using the python scripts to interact with the GigE. But whenever I try to change the focal-plane distance by rotating the lens coupler, the ethernet cable connection becomes loose and the camera server needs to be relaunched every now and then. Also, every time we want to change the distance between the lenses, the telescope needs to be dismantled and refocused. I'll try to come up with a better telescope design for this.

Yesterday, I focused the GigE using a low exposure time and a small iris aperture, to make sure that we are actually seeing a sharp image of the beam spot. I'm attaching a picture of the beam spot taken while focusing; unfortunately, I forgot to take a picture after it was completely focused. I'm also attaching a picture of the final setup for future reference.


Yesterday night, Rana asked me to lock the MC2. I figured that the PSL shutter was closed; I just opened it and was able to see the beam spot on the analog camera screen.

  14726   Thu Jul 4 18:19:08 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

The quoted elog has figures which indicate that the network did not learn (train or generalize) on the data used. This is a scary thing, as (in my experience) it indicates that something is fundamentally wrong with either the data or the model, and learning will not happen no matter how the hyperparameters are tuned. To check this, I ran the training experiment for nearly 25 hyperparameter settings (results here) with the old data and was able to successfully overfit the data. Why is this progress? Well, we know that we are on the right track and the task is now to reduce overfitting. Whether that will happen through more hyperparameter tuning, data collection or augmentation remains to be seen. See attachments for more details.

Why is the fit so perfect at the start and bad later? That's because the first 90% of the test data is the training data I overfit to, and the latter part is the validation data the network has not generalized to.

Quote:

And finally, a network is trained!

Result summary (TLDR :-P) : No memory was used. Model trained. Results were garbage. Will tune hyperparameters now. Code pushed to github.

 

More details of the experiment:

Aim:

  1. To train a network to check that training occurs and get a feel for what the learning might be like.
  2. To set up the necessary framework to perform multiple experiments and record results in a manner facilitating comparison.
  3. To track beam spot motion.

What I did:

  1. Set up a network that learns a frame-wise mapping, as described here (a minimal sketch of such a network appears below, after the "Well, what now?" list).
  2. Training data: 0.9 x 1791 frames. Validation data: 0.1 x 1791 frames. Test data (only prediction): all the 1791 frames
  3. Hyperparameters: Attachment #1
  4. Did no tuning of hyperparameters.
  5. Compiled and fit the models and saved the results.

 

What I saw

  1. Attachment #2: data fed to the network after pre-processing - median blur + crop
  2. Attachment #3: learning curves.
  3. Attachment #4: true and predicted motion. Nothing great.

What I think is going wrong-

  1. No hyperparameter tuning. This was only a first pass but is being reported as it will form the basis of all future experiments.
  2. Too little data.
  3. Maybe wrong architecture.

Well, what now?

  1. Tune hyperparameters (try to get the network to overfit on the data and then test on that; we'll then know for sure that all we probably need is more data?)
  2. Currently the network has around 200k parameters. Maybe reduce that.
  3. Set up a network that takes as input (one example corresponding to one forward pass) a bunch of frames and predicts a vector of position values that can be used as continuous data.
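As referenced above, a minimal Keras sketch of the kind of frame-wise CNN regressor meant here: a small stack of conv/pool layers ending in a 2-unit dense output trained with an MSE loss. The input size, layer widths and optimizer settings are placeholders of my choosing, not the hyperparameters of Attachment #1.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_framewise_cnn(input_shape=(128, 128, 1)):
    """Map a single pre-processed frame to a 2-vector of beam-spot coordinates."""
    model = models.Sequential([
        layers.Conv2D(8, 5, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(4),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(4),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(2),   # (pitch, yaw) position estimate
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
    return model

model = build_framewise_cnn()
model.summary()   # check that the parameter count stays well below ~200k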
Quote:

I got to speak to Gabriele about the project today and he suggested that if I am using Rana's memory based approach, then I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time and that if I use the frame wise approach I try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. Something that Rana and Gautam emphasized as well.

Quote:
 
  1. Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with frame-wise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output the uncertainty in its predictions. I am not sure how this can be done, but I will look into it.
Attachment 1: Motion.pdf
Attachment 2: Error.pdf
Attachment 3: Learning_curves.pdf
  14732   Sun Jul 7 21:59:28 2019   Kruthi   Update   Cameras   Ghost image due to beamsplitter

The beamsplitter (BS1-1064-33-2037-45S) that is currently being used has an antireflection coating on the second surface and a wedge of less than 5 arcmin; yet it leads to ghosting as shown in the attached figure (courtesy: Thorlabs). I'm also attaching its spec sheet, which I dug up on the internet, for future reference.

I came across pellicle beamsplitters, which are primarily used to eliminate ghost images. Pellicle beamsplitters have a nitrocellulose layer only a few microns thick and superimpose the secondary reflection on the primary one, so the ghost image is eliminated.

Should we go ahead and order them? (https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=898

https://www.edmundoptics.eu/c/beamsplitters/622/#28438=28438_s%3AUGVsbGljbGU1&27614=27614_d%3A%5B46.18%20TO%2077.73%5D)

Attachment 1: ghosting_schematic.png
Attachment 2: Beamsplitter_spec.pdf
  14734   Mon Jul 8 17:52:30 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

After the two earthquakes, I collected some data by dithering the optic and recording the QPD readings. Today, I set up scripts to process the data and then train networks on it. I have pushed all the code to github. I attempted to train a bunch of networks on the new data to test if the code was alright but realised quickly that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.
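As a rough sketch of what such a processing script can look like (my own illustration: the QPD time series is assumed to have been fetched already, and the file path, frame rate and variable names are made up), the idea is to read the video frames with OpenCV and interpolate the QPD readings onto the frame timestamps to form (frame, label) pairs:

import cv2
import numpy as np

def make_dataset(video_path, qpd_times, qpd_pit, qpd_yaw, fps=25.0):
    """Pair each video frame with interpolated QPD pitch/yaw readings.

    qpd_times/qpd_pit/qpd_yaw are 1-D arrays of the recorded QPD time series
    (however they were fetched); fps is the camera frame rate.
    """
    cap = cv2.VideoCapture(video_path)
    frames, labels = [], []
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        t = i / fps                       # frame timestamp relative to the video start
        pit = np.interp(t, qpd_times, qpd_pit)
        yaw = np.interp(t, qpd_times, qpd_yaw)
        frames.append(gray)
        labels.append((pit, yaw))
        i += 1
    cap.release()
    return np.array(frames), np.array(labels)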

Training networks with memory

I set up a network to handle input volumes (stacks of frames) instead of individual frames. It still uses 2D convolution and not 3D convolution. I am currently training it on the new data. However, I was curious to see if it would provide any improved performance over the results I put up in the previous elog. After a bit of hyperparameter tuning, I did get some decent results, which I have attached below. However, this is for Pooja's old data, which makes them, ah, not so relevant. Also, this testing isn't truly representative because the test data isn't entirely new to the network. I am going to train this network on the new data now with the following objectives (in the following steps):

  1. Train on data recorded at one frequency, generalize/ test on unseen data of the same frequency, large amplitude of motion
  2. Train on data recorded at one frequency, generalize/test on unseen data of a different frequency, large amplitude of motion
  3. Train on data recorded at one frequency, generalize/ test on unseen data of  same/ different frequency, small amplitude of motion
  4. Train on data at different frequencies and generalize/ test on data with a mixture of frequencies at small amplitudes - Gautam pointed out that the network would truly be superb (good?) if we can just predict the QPD output from the video of the beam spot when nothing is being shaken.

I hope this looks alright? Rana also suggested I try LSTMs today; I'll maybe code it up tomorrow. What I have in mind: a conv-layer encoder, flatten, followed by an LSTM layer (why not plain RNNs? Well, LSTMs handle vanishing gradients, so why the hassle).

Quote:

The quoted elog has figures which indicate that the network did not learn (train or generalize) on the data used. This is a scary thing, as (in my experience) it indicates that something is fundamentally wrong with either the data or the model, and learning will not happen no matter how the hyperparameters are tuned. To check this, I ran the training experiment for nearly 25 hyperparameter settings (results here) with the old data and was able to successfully overfit the data. Why is this progress? Well, we know that we are on the right track and the task is now to reduce overfitting. Whether that will happen through more hyperparameter tuning, data collection or augmentation remains to be seen. See attachments for more details.

Why is the fit so perfect at the start and bad later? That's because the first 90% of the test data is the training data I overfit to, and the latter part is the validation data the network has not generalized to.

Attachment 1: Motion.pdf
  14735   Mon Jul 8 21:42:39 2019   rana   Update   Cameras   Ghost image due to beamsplitter

You have to use a BS with a larger wedge angle (5 arcmin ~ 1 mrad) so that the beams don't overlap on the camera.

  14741   Tue Jul 9 22:13:26 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

I received access today. After some incredible hassle, I was able to set up my repository and code on the remote system. Following this, Gautam wrote to Gabriele to ask him about which GPUs to use and if there was a previously set up environment I could directly use. Gabriele suggested that I use pcdev2 / pcdev3 / pcdev11 as they have good GPUs. He also said that I could use source ~gabriele.vajente/virtualenv/bin/activate to use a virtualenv with tensorflow, numpy etc. preinstalled. However, I could not get that working, so I created my own virtual environment with the necessary tensorflow, keras, scipy, numpy etc. libraries and suitable versions. On ssh-ing into the cluster, it can be activated using source /home/millind.vaddiraju/beamtrack/bin/activate. How do I know everything works? Well, I trained a network on it! With the new data. Attached (see Attachment #1) is the prediction for completely new test data. Yeah, it's not great, but I got to observe the time it takes for the network to train for 50 epochs:

  1. On pcdev5 CPU: one epoch took ~1500s which is roughly 25 minutes (see Attachment #2). Gautam suggested that I try to train my networks on Optimus. I think this evidence should be sufficient to decide against that idea.
  2. On my GTX 1060: one epoch took ~30s, which is 25 minutes (for 50 epochs) to train a network.
  3. On pcdev11 GPU (Titan X I think): each epoch took ~16s which is a far more reasonable time.

Therefore, I will carry out all training only on this machine from now.

 


Note to self:

Steps to repeat what you did are:

  1. ssh in to the cluster using ssh albert.einstein@ssh.ligo.org as described here.
  2. activate the virtualenv as described above
  3. navigate to the code and run it.
Quote:

I attempted to train a bunch of networks on the new data to test if the code was alright but realised quickly that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.

Attachment 1: predicted_motion_first.pdf
Attachment 2: pcdev5_time.png
  14746   Wed Jul 10 22:32:38 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

I trained a bunch (around 25 or so, to tune hyperparameters) of networks today. They were all CNNs. They all produced garbage. I also looked at LSTM networks with CNN encoders (see this very useful link) and gave some thought to what kind of architecture we want to use and how to go about programming it (in Keras; I will use tensorflow if I feel like I need more control). I will code it up tomorrow after some thought and discussion. I am not sure if abandoning CNNs is the right thing to do or if I should continue probing this with more architectures and tuning attempts. Any thoughts?

Right now, after speaking to Stuart (ldas_admin), I've decided to code up the LSTM thing and then run that on one machine while probing the CNN thing on another.

 


Update on 10 July, 2019: I'm attaching all the results of training here in case anyone is interested in the future.

Quote:

I received access today. After some incredible hassle, I was able to set up my repository and code on the remote system. Following this, Gautam wrote to Gabriele to ask him about which GPUs to use and if there was a previously set up environment I could directly use. Gabriele suggested that I use pcdev2 / pcdev3 / pcdev11 as they have good GPUs. He also said that I could use source ~gabriele.vajente/virtualenv/bin/activate to use a virtualenv with tensorflow, numpy etc. preinstalled. However, I could not get that working, so I created my own virtual environment with the necessary tensorflow, keras, scipy, numpy etc. libraries and suitable versions. On ssh-ing into the cluster, it can be activated using source /home/millind.vaddiraju/beamtrack/bin/activate. How do I know everything works? Well, I trained a network on it! With the new data. Attached (see Attachment #1) is the prediction for completely new test data. Yeah, it's not great, but I got to observe the time it takes for the network to train for 50 epochs:

  1. On pcdev5 CPU: one epoch took ~1500s which is roughly 25 minutes (see Attachment #2). Gautam suggested that I try to train my networks on Optimus. I think this evidence should be sufficient to decide against that idea.
  2. On my GTX 1060: one epoch took ~30s, which is 25 minutes (for 50 epochs) to train a network.
  3. On pcdev11 GPU (Titan X I think): each epoch took ~16s which is a far more reasonable time.

Therefore, I will carry out all training only on this machine from now.

 


Note to self:

Steps to repeat what you did are:

  1. ssh in to the cluster using ssh albert.einstein@ssh.ligo.org as described here.
  2. activate the virtualenv as described above
  3. navigate to the code and run it.
Quote:

I attempted to train a bunch of networks on the new data to test if the code was alright but realised quickly that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.

  14757   Sun Jul 14 00:24:29 2019   Kruthi   Update   Cameras   CCD Calibration

On Friday, I took images for different power outputs of the LED. I calculated the calibration factor as explained in my previous elog (plots attached).

Vcc (V) | Photodiode reading (V) | Power incident on photodiode (W) | Power incident on GigE (W) | Slope (counts/µs) | Uncertainty in slope (counts/µs) | CF (W-sec/counts)
16 | 0.784 | 2.31E-06 | 3.89E-07 | 180.4029 | 1.02882 | 2.16E-15
18 | 0.854 | 2.51E-06 | 4.24E-07 | 207.7314 | 0.7656  | 2.04E-15
20 | 0.92  | 2.71E-06 | 4.57E-07 | 209.8902 | 1.358   | 2.18E-15
22 | 0.969 | 2.85E-06 | 4.81E-07 | 222.3862 | 1.456   | 2.16E-15
25 | 1.026 | 3.02E-06 | 5.09E-07 | 235.2349 | 1.53118 | 2.17E-15
Average CF: 2.14E-15 W-sec/counts

To estimate the uncertainty, I assumed an error of at most 20 mV (due to stray light or a difference in orientation between the GigE and the photodiode) in the photodiode reading. Using this and the uncertainty in the slope from the linear fit, I expect an uncertainty of at most 4%. Note: I haven't accounted for the error in the responsivity value of the photodiode.
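(The ~4% figure can be checked with a short propagation of the two error terms; combining them in quadrature is my own choice here, and the 20 mV number is the assumed worst case quoted above.)

import numpy as np

pd_reading_V = np.array([0.784, 0.854, 0.92, 0.969, 1.026])
slope = np.array([180.4029, 207.7314, 209.8902, 222.3862, 235.2349])   # counts/us
slope_err = np.array([1.02882, 0.7656, 1.358, 1.456, 1.53118])         # counts/us
dV = 0.020   # assumed worst-case error on the photodiode reading (V)

# CF is proportional to (PD reading)/(slope), so the relative errors combine directly.
rel_err = np.sqrt((dV / pd_reading_V) ** 2 + (slope_err / slope) ** 2)
print("relative CF uncertainty per point (%):", np.round(100 * rel_err, 2))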

GigE sensor area: 10.36 sq. mm
PDA520 aperture area: 61.364 sq. mm
Responsivity: 0.34 A/W
Transimpedance gain (at the 20 dB setting): 10^6 V/A +/- 0.1%
Pixel format used: Mono 8 bit

Johannes had reported a CF of 0.0858E-15 W-sec/counts for 12-bit images, measured with a laser source. That value and the one I got differ by a factor of about 25. The difference in pixel formats and the effect of the coherence of the light used might be possible reasons.

Attachment 1: CCD_calibration.png
  14760   Mon Jul 15 14:09:07 2019   Milind   Update   Cameras   CNN LSTM for beam tracking

I've set up a network with a CNN encoder (front end) feeding into a single LSTM cell followed by the output layer (see Attachment #1). The network requires significantly more memory than the previous ones; it takes around 30 s for one epoch of training. Attached are the predicted yaw motion and its FFT. The FFT looks rather curious. I still haven't done any tuning and these are only preliminary results.
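A minimal Keras sketch of this architecture as I understand it (a small convolutional encoder applied per frame via TimeDistributed, a single LSTM layer, then a dense output); the sequence length, input shape and layer sizes here are placeholders, not the values actually used.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(seq_len=10, frame_shape=(128, 128, 1)):
    """CNN encoder applied to each frame of a sequence, feeding an LSTM, then a 2-vector output."""
    encoder = models.Sequential([
        layers.Conv2D(8, 5, activation="relu", input_shape=frame_shape),
        layers.MaxPooling2D(4),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(4),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
    ])
    model = models.Sequential([
        layers.TimeDistributed(encoder, input_shape=(seq_len,) + frame_shape),
        layers.LSTM(32),    # the recurrent part handling the temporal structure
        layers.Dense(2),    # predicted (pitch, yaw) for the sequence
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_cnn_lstm()
model.summary()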

Quote:

Rana also suggested I try LSTMs today. I'll maybe code it up tomorrow. What I have in mind: a conv-layer encoder, flatten, followed by an LSTM layer (why not plain RNNs? Well, LSTMs handle vanishing gradients, so why the hassle).

Well, what about the previous conv nets?

What I did:

  1. Extensive tuning of the learning rate, batch size, dropout ratio and input size, using a grid search
  2. Trained each network for 75 epochs and obtained weights, predicted motion and corresponding FFT, error etc.

What I observed:

  1. Loss curves look okay, validation loss isn't going up, so I don't think overfitting is the issue
  2. Training for over (even) 75 epochs seems to be pointless.

What I think is going wrong:

  1. Input size: relatively large input size of 350 x 350. Here, the input image size seems to be 128 x 128.
  2. Inadequate pre-processing.
    1. I have not applied any filters/blurs etc. to the frames.
    2. I have also not tried dimensionality reduction techniques such as PCA

What I will try now:

  1. Collect new data: with smaller amplitudes and different frequencies
  2. Tune the LSTM network for the data I have
  3. Try new CNN architectures with more aggressive max pooling and fewer parameters
  4. Ensembling the models (see this and this). Right now, I have multiple models trained either with the same architecture and different hyperparameters, or with different architectures. As a first pass, I intend to average the predictions of all the models and see if that improves performance (a minimal sketch of this averaging follows below).
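The averaging mentioned in the last item could look roughly like this (a pure sketch, where models is a list of already-trained Keras models and x_test is the common test input):

import numpy as np

def ensemble_predict(models, x_test):
    """Average the predictions of several trained models on the same test inputs."""
    preds = np.stack([m.predict(x_test) for m in models], axis=0)   # (n_models, n_samples, 2)
    return preds.mean(axis=0)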
Attachment 1: cnn-lstm.png
Attachment 2: fft_yaw.pdf
Attachment 3: yaw_motion.pdf
  14768   Wed Jul 17 20:12:26 2019   Kruthi   Update   Cameras   Another GigE in place of analog camera

I've taken the MC2 analog camera down and put another GigE (unit 151) in its place. This is just temporary and I'll put the analog camera back once I finish the MC2 loss map calibration. I'm using a 25 mm focal-length camera lens with it, and it gives a view of MC2 similar to that of the analog camera. But I don't think it is completely focused yet (pictures attached).

...more to follow

gautam - Attachment #3 is my (sad) attempt at finding some point scatterers - Kruthi is going to play around with photUtils to figure out the average size of some point scatterers.

Attachment 1: zoomed_out_gige.png
Attachment 2: osems_mc2.png
Attachment 3: MC2.pdf
  14774   Thu Jul 18 22:03:00 2019   Kruthi   Update   Cameras   MC2 and cameras

[Kruthi, Yehonathan, Gautam]

This evening, Yehonathan and I aligned the MC2 cameras. As of now there are 2 GigEs in the MC2 enclosure. For the temporary GigE (which is in the analog camera's place), we are using an ethernet cable connection from the Netgear switch in 1x6. MC2 was misaligned and the autolocker wasn't able to lock the mode cleaner, so Gautam disabled the autolocker and manually changed the settings; the autolocker was able to take over eventually.

  14779   Fri Jul 19 16:47:06 2019   Milind   Update   Cameras   CNNs for beam tracking || Analysis of results

I did a whole lot of hyperparameter tuning for convolutional networks (without 3D convolution). Of the results I obtained, I am attaching the best ones below.

Define "best"?

The lower the power of the error signal (the difference between the true and predicted X and Y positions) on the test data, i.e. essentially the MSE, the better the performance of the model. Of the trained models I had, I chose the one with the lowest MSE.

Attached results:

  1. Attachment 1: Training configuration
  2. Attachment 2: Predicted motion along the Y direction for the test data
  3. Attachment 3: Predicted motion along the Y direction for the training data
  4. Attachment 4: Learning curves
  5. Attachment 5: Error in test predictions
  6. Attachment 6: Video of image histogram plots
  7. Attachment 7: Plot of percentage of pixels with intensity over 240 with time

(Note: Attachment 6 and 7 present information regarding a fraction of the data. However, the behaviour remains the same for the rest of the data.)

Observations and analysis:

  1. Data:
    1. From Attachments 2, 3 and 5: maximum deviation from the true labels occurs at the peaks of the applied dither/motion. Possible reasons:
      1. Stupid Cropping? I checked (by watching the video of cropped frames, i.e visually) to ensure that the entire motion of the beam spot is captured. Therefore, this is not the case.
      2. Intensity variation: The intensity (brightness?) of the beam spot varies (decreases) significantly at the maximum displacement. This, I think, is creating a skewed dataset with very few frames with low intensity pixels. Therefore, I think it makes sense to even this out and get more data points (frames) with similar (lower) pixel intensities. I can think of two ways of doing this:
        1. Collect more data with lower amplitude of sinusoidal dither. I used an amplitude of 80 cts to dither the optic. Perhaps something like 40 is more feasible. This will ensure the dataset isn't too skewed.
        2. Increase exposure time. I used an exposure time of 500us to capture data. Perhaps a higher exposure time will ensure that the image of the beam spot doesn't fade out at the peak of motion.
    2. From Attachment 5, saturated images? We would like to gun for a maximum deviation of 10% (0.1 in this case) from the true values in the predicted labels. (To be honest, I'm not sure why this is a good baseline; I ought to give that some thought. The maximum deviation of the OpenCV thing I did at the start might also be a good baseline.) Clearly, we're not meeting that. One possible reason is that the video might be saturated (too many pixels at 255, bleeding into surrounding pixels), leading to loss of information. I set the exposure time to 500 us precisely to avoid this. I also created videos of the image histograms of the frames to make sure the frames weren't saturated (is there some better, standard way of doing this?). From Attachments 6 and 7, I think it's evident that saturation is not an issue. Consequently, I think increasing the exposure time and collecting more data is a good idea.
  2. The network:
    1. From attachment 4: Training post 25 epochs seems to produce overfitting, though it doesn't seem too terrible (from attachments 2 and 3). The network is still learning after 75 epochs, so I'll tinker with the learning rate, dropout and maybe put in annealing.
    2. I don't think there is a need to change the architecture yet. The model seems to generalize okay (validation error is close to the training error), therefore I think it'll be a good idea to increase dropout for the fully connected layers and train for longer / with a higher learning rate.

 


 

P.S. I will also try the 2D convolution followed by the 1D convolution thing now. 

P.P.S. Gabriele suggested that I try average pooling instead of max pooling as this is a regression task. I'll give that a shot.

 

Attachment 1: readme.txt
Experiment file: train_both.py
batch_size: 32
dropout_probability: 0.5
eta: 0.0001
filter_size: 1
filter_type: median
initializer: Xavier
memory_size: 10
num_epochs: 75
activation_function: relu
... 22 more lines ...
Attachment 2: yaw_motion_test.pdf
Attachment 3: yaw_motion_train.pdf
Attachment 4: Learning_curves_replotted.pdf
Attachment 5: yaw_error_test.pdf
Attachment 6: intensity_histogram.mp4
Attachment 7: saturation_percentage.pdf
  14786   Sat Jul 20 12:16:39 2019   gautam   Update   Cameras   CNNs for beam tracking || Analysis of results
  1. Make the MSE a subplot on the same axes as the time series for easier interpretation.
  2. Describe the training dataset - what is the pk-to-pk amplitude of the beam spot motion you are using for training in physical units? What was the frequency of the dither applied? Is this using a zoomed-in view of the spot or a zoomed out one with the OSEMs in it? If the excursion is large, and you are moving the spot by dithering MC2, the WFS servos may not have time to adjust the cavity alignment to the nominal maximum value.
  3. What is the minimum detectable motion given the CCD resolution?
  4. Please upload a cartoon of the network architecture for easier visualization. What is the algorithm we are using? Is the approach the same as using the bright point scatterers to signal the beam spot motion that Gabriele demonstrated successfully?
  5. What is the significance of Attachment #6? I think the x-axis of that plot should also be log-scaled.
  6. Is the performance of the network still good if you feed it a time-shuffled test dataset? i.e. you have (pictures,Xcoord,Ycoord) tuples, which don't necessarily have to be given to the network in a time-ordered sequence in order to predict the beam spot position (unless the network is somehow using the past beam position to predict the new beam position).
  7. Is the time-sync problem Koji raised limiting this approach?
  14787   Sat Jul 20 14:43:45 2019   Milind   Update   Cameras   CNNs for beam tracking || Analysis of results

<Adding details>

See Attachment #2.

Quote:

Make the MSE a subplot on the same axes as the time series for easier interpretation.

Training dataset:

  1. Peak-to-peak amplitude in physical units: ?
  2. Dither frequency: 0.2 Hz
  3. Video data: zoomed-in video of the beam spot obtained from the GigE camera 198.162.113.153 at 500 us exposure time. Each frame has a resolution of 640 x 480, which I have cropped to 350 x 350. Attachment #1 is one such frame.
  4. Yes, therefore I am going to obtain video at lower amplitudes. I think that should help me avoid the problem of not-nominal-maximum value?
  5. Other details of the training dataset:
    1. Dataset created from four videos of duration ~30, 60, 60, 60 s at 25 FPS.
    2. 4032 training data points
      1. Input (one example/ data point): 10 successive frames stacked to form a 3D volume of shape 350 x 350 x 10
      2. Output (2 dimensional vector): QPD readings (C1:IOO-MC_TRANS_PIT_ERR, C1:IOO-MC_TRANS_YAW_ERR)
    3. Pre-processing: none
    4. Shuffling: Dataset was shuffled before every epoch
    5. No thresholding: Binary images are gonna be of little use if the expectation is that the network will learn to interpret intensity variations of pixels.

Do I need to provide any more details here?

Quote:

Describe the training dataset - what is the pk-to-pk amplitude of the beam spot motion you are using for training in physical units? What was the frequency of the dither applied? Is this using a zoomed-in view of the spot or a zoomed out one with the OSEMs in it? If the excursion is large, and you are moving the spot by dithering MC2, the WFS servos may not have time to adjust the cavity alignment to the nominal maximum value.

?

Quote:

What is the minimum detectable motion given the CCD resolution?

See Attachment #4.

Quote:
  1. Please upload a cartoon of the network architecture for easier visualization. What is the algorithm we are using? Is the approach the same as using the bright point scatterers to signal the beam spot motion that Gabriele demonstrated successfully

 

I wrote what I think is a handy script to check whether the frames are saturated. I thought this might be handy for when I collect data with higher exposure times. I had assumed there was no saturation in the images because I'd set the exposure to something low; I thought it'd be useful to just verify that. Attachment #3 has a log scale on the x axis.
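Something along these lines (a sketch of the idea, not the actual script; the 240-count threshold matches the one mentioned in the earlier elog, and the file name is a placeholder):

import cv2
import numpy as np

def saturation_fraction(video_path, threshold=240):
    """Return, per frame, the fraction of pixels at or above threshold (8-bit video assumed)."""
    cap = cv2.VideoCapture(video_path)
    fractions = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fractions.append(np.mean(gray >= threshold))
    cap.release()
    return np.array(fractions)

# Example (hypothetical file name):
# print(100 * saturation_fraction("mc2_beam_spot.avi").max(), "% of pixels >= 240 in the worst frame")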

Quote:

What is the significance of Attachment #6? I think the x-axis of that plot should also be log-scaled.

 

Quote:
  1. Is the performance of the network still good if you feed it a time-shuffled test dataset? i.e. you have (pictures,Xcoord,Ycoord) tuples, which don't necessarily have to be given to the network in a time-ordered sequence in order to predict the beam spot position (unless the network is somehow using the past beam position to predict the new beam position).
  2. Is the time-sync problem Koji raised limiting this approach?

 

Attachment 1: frame0.pdf
Attachment 2: subplot_yaw_test.pdf
Attachment 3: intensity_histogram.mp4
Attachment 4: network2.pdf
  14801   Tue Jul 23 21:59:08 2019   Jon   Update   Cameras   Plan for GigE cameras

This afternoon Gautam and I assessed what to do about restoring the GigE camera software. Here's what I propose:

  • Set up one of the new rackmount Supermicros as a dedicated camera feed server
  • All GigE cameras on a local subnet connected to the second network interface (these Supermicros have two)
  • Put the SnapPy, pypylon, and pylon5 binaries on the shared network drive. These all have to be built from source.
  • All other dependencies can be gotten through the package managers, so create requirements files for yum and pip to automatically install these locally.

I've started resolving the many dependencies of this code on rossa. The idea is to get a working environment on one workstation, then generate requirements files that can be used to set up the rest of the machines. I believe the dependencies have all been installed. However, many of the packages are newer versions than before, and this seems to have broken SnapPy. I'll continue debugging this tomorrow.

  14803   Wed Jul 24 02:06:05 2019   Kruthi   Update   Cameras   HDR images

I have been trying a couple of HDR algorithms; they all seem to give very different results. I don't know how suitable these algorithms are for our purpose, because they are mostly concerned with the final display. I'm attaching the HDR image I got by modifying Jigyasa's code a bit (this image has been modified further to make it suitable for display). Here, I'm trying to compare the plots of images that look similar. The HDR image has a dynamic range of 700:1.

PS: The 300us_image.png file actually looks very similar to the HDR image on my laptop (might be an issue with the elog editor?), so I'm also attaching its .tiff version to avoid any confusion.
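For reference, one standard way of doing the merge with OpenCV (Debevec response recovery plus a simple tonemap for display); this is a generic sketch rather than necessarily the algorithm used for the attached image, and the file names and exposure times are placeholders.

import cv2
import numpy as np

# Hypothetical exposure bracket: 8-bit frames taken at different exposure times (seconds).
files = ["exp_100us.png", "exp_300us.png", "exp_1000us.png"]
times = np.array([100e-6, 300e-6, 1000e-6], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve and merge into a floating-point radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Tonemap only for display; the HDR radiance map itself is what carries the dynamic range.
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("hdr_display.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))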

Attachment 1: HDR_8bit.png
Attachment 2: hdrplot.png
Attachment 3: C_MC2_2019-07-19-01-50-09.tiff
Attachment 4: 300us_image.png
Attachment 5: 300us_image.tiff
Attachment 6: actualimageplot.png
  14806   Wed Jul 24 16:45:32 2019   Jon   Update   Cameras   Upgraded Pylon from 5.0.12 to 5.2.0

I upgraded Pylon, the C/C++ API for the GigE cameras, to the latest release, 5.2.0. It is installed in the same location as before, /opt/rtcds/caltech/c1/scripts/GigE/pylon5, so environment variables do not change. The old version, 5.0.12, still exists at /opt/rtcds/caltech/c1/scripts/GigE/backup_pylon5.

The package contains a GUI application (/bin/PylonViewerApp) for streaming video. The old version supports saving still images, but Milind discovered that the new version supports saving video as well. This required installing a supplementary package supporting MPEG-4 output.

Basler's GUI application is launched from the terminal using the alias pylon. I've tested it and confirm it can save both videos and still-image formats. I recommend also trying to grab images with this program and checking the bit resolution. It would be a useful diagnostic to know whether it's a bug in Joe B.'s code or something deeper in the camera settings.

Attachment 1: IMG_3525.jpg
  14807   Wed Jul 24 20:05:47 2019   Milind   Update   Cameras   CNNs for beam tracking || Tales of desperation

At the lab meeting today, Rana suggested that I use the Pylon app to collect more data if that's what I need. Following this, Jon helped me out by updating the pylon version and installing additional software to record video. Now I am collecting data at

  1. a higher exposure time: 600 us magically gives me a saturation percentage of around 1%, see Attachment #1 (i.e. around 1% of the pixels in the region containing the beam spot are over 240 in value). This is a consequence of my discussion with Gabriele, where we concluded that I was losing information due to the low exposure time I was using.
  2. For much longer: roughly 10 minutes
    1. at an amplitude of 40 cts for 0.2 Hz
    2. at an amplitude of 20 cts for 0.2 Hz
    3. at an amplitude of 10 cts for 0.2 Hz
    4. at an amplitude of 40 cts for 0.4 Hz
    5. at an amplitude of 20 cts for 0.2 Hz
    6. Random motion

Consequently, I have been dithering the MC2 optic since around 9:00 PM.

Attachment 1: saturation_percentage.pdf
  14808   Wed Jul 24 20:23:52 2019   gautam   Update   Cameras   Upgraded Pylon from 5.0.12 to 5.2.0

Since there are multiple SURF projects that rely on the cameras:

  1. I moved the new installs Jon made to "new_pylon5" and "new_pypylon". The old installs were moved back to be the default directories.
  2. The bashrc alias for pylon was updated to allow the recording of videos (i.e. it calls the PylonViewerApp from new_pypylon).
  3. There is a script that can grab images at multiple exposures and save 12-bit data as uint16 numpy arrays to an HDF5 file. Right now, it is located at /users/kruthi/scripts/grabHDR.py. We can move this to a better place later, and also improve the script for auto adjusting the exposure time to avoid saturations.

My changes were necessary because the grabHDR.py script was throwing python exceptions, whereas it was running just fine before Jon's changes. We can move the "new_*" dirs to the default once the SURFs are gone.

Let's freeze the camera software config in this state until next week.

  14809   Thu Jul 25 00:26:47 2019   Milind   Update   Cameras   Convolutional neural networks for beam tracking

Somehow I never got around to doing the pixel-sum thing for the new real data from the GigE. Since I have to do it for the presentation, I'm putting up the results here anyway. I've normalized this and computed the SNR with respect to the true readings.

SNR = (power in true readings)/ (power in error signal between true and predicted values)

Attachment #2 is the SNR of the best-performing CNN, for comparison.
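i.e., something like the following (a small sketch of the calculation as defined above, with true and pred being the normalized time series):

import numpy as np

def snr(true, pred):
    """SNR = (power in the true readings) / (power in the error signal)."""
    true = np.asarray(true, dtype=float)
    err = true - np.asarray(pred, dtype=float)
    return np.mean(true ** 2) / np.mean(err ** 2)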

Attachment 1: centroid.pdf
Attachment 2: subplot_yaw_test.pdf
  14810   Thu Jul 25 09:19:32 2019   Jon   Update   Cameras   Upgraded Pylon from 5.0.12 to 5.2.0

I'll keep developing the camera server on a parallel track using the "new_..." directory naming convention. One thing I forgot to note is that the new pylon/pypylon packages require Python 3, so they will not work with any of the 2.7 scripts. All of the environment I need to set up is exclusively Python 3. I won't change anything in the Python 2.7 environment in current use.

Also, I found the source of the bit-resolution issue: Joe B.'s code loads a set of initialization parameters from a config file. One of them is "Frame Type = Mono8", which sets the dynamic range of the stream. I'll look into how this should be changed.
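If the change ends up being made from Python rather than in the config file, the pixel format can presumably be switched through pypylon along these lines (an untested sketch; the available format names, e.g. Mono12, depend on the camera model):

from pypylon import pylon

# Open the first camera found by the transport layer.
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

print("current pixel format:", camera.PixelFormat.GetValue())
# Switch from Mono8 to Mono12 to get the full ADC bit depth in the stream.
camera.PixelFormat.SetValue("Mono12")

camera.Close()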

Quote:

Since there are multiple SURF projects that rely on the cameras:

  1. I moved the new installs Jon made to "new_pylon5" and "new_pypylon". The old installs were moved back to be the default directories.
  2. The bashrc alias for pylon was updated to allow the recording of videos (i.e. it calls the PylonViewerApp from new_pypylon).
  3. There is a script that can grab images at multiple exposures and save 12-bit data as uint16 numpy arrays to an HDF5 file. Right now, it is located at /users/kruthi/scripts/grabHDR.py. We can move this to a better place later, and also improve the script for auto adjusting the exposure time to avoid saturations.
  14814   Fri Jul 26 19:53:53 2019   Jon   Omnistructure   Cameras   GigE Camera Server

I've started setting up the last new rackmount SuperMicro as a dedicated server for the GigE cameras. The new machine is currently sitting on the end of the electronics test bench. It is assigned the hostname c1cam at IP 192.168.113.116 on the martian network. I've installed Debian 10, which will be officially supported until July 2024.

I've added the /cvs/cds NFS mount and plan to house all the client/server code on this network disk. Any dependencies that must be built from source will be put on the network disk as well. Any dependencies that can be gotten through the package manager, however, will be installed locally but in an automated way using a reqs file.

We should ask Chub to reorder several more SuperMicro rackmount machines, SSD drives, and DRAM cards. Gautam has the list of parts from Johannes' last order.

  14824   Fri Aug 2 16:46:09 2019   Kruthi   Update   Cameras   Clean up

I've put the analog camera back and disconnected the 151 unit GigE. But I ran out of time and wasn't able to replace the beamsplitter. I've put all the equipment back where I took it from. The chopper and beam dump mount that Koji got me for the scatterometer are kept outside, on the table I was working on earlier, in the control room. The camera lenses, additional GigEs, wedge beamsplitter, 1050 nm LED and all related equipment are kept in the GigE box. This box was put back into the CCD cameras' cabinet near the X arm.

Note: To clean stuff up, I entered the lab around 9.30 pm on Monday. This might have affected Yehonathan's loss measurement readings (until then, around 57 readings had been recorded).

Sorry for the late update.

  14856   Fri Aug 23 19:10:02 2019   Jon   Update   Cameras   GigE camera server is online

Following the death of rossa, which was hosting the only working environment for the GigE camera software, I've set up a new dedicated rackmount camera server: c1cam (details here). The Python server script is now configured as a persistent systemd service, which automatically starts on boot and respawns after a crash. The server depends on a set of EPICS channels being available to control the camera settings, so c1cam is also running a softIOC service hosting these channels. At the moment only the ETMX camera is set up, but we can now easily add more cameras.

Usage

Instructions for connecting to a live video feed are posted here. Any machine on the martian network can stream the feed(s). The only requirement is that the client machine have GStreamer 0.10 installed (all the control room workstations satisfy this).

Code Locations

As much as possible, the code and dependencies are hosted on the /cvs/cds network drive instead of installed locally. The client/server code and the Pylon5, PyPylon, and PyEpics dependencies are all installed at /cvs/cds/rtcds/caltech/c1/scripts/GigE. The configuration files for the soft IOC are located at /cvs/cds/caltech/target/c1cam.

Upgrade Goals

The 40m GigE camera code is a slightly-updated version of the 10+ year-old camera code in use at the sites. Consequently every one of its dependencies is now deprecated. Ultimately, we'd like to upgrade to the following:

  • Python 2.7 --> 3.7
  • Basler Pylon 5.0.12 --> 5.2.0
  • PyPylon 1.1.1 --> 1.4.0
  • GStreamer 0.10 --> 1.2

This is a long-term project, however, as many of these APIs are very different between Python 2 and 3.

  14883   Mon Sep 16 17:53:16 2019   aaron   Update   Cameras   MC2 trans camera (?) rotated

We noticed last week that the MC2 trans camera has pitch and yaw swapped; I rotated what I thought was the correct camera by 90 degrees clockwise (as viewed from above, like in the attachment), but I now have doubts. It's the camera on the right in the attachment.

Attachment 1: 47D6ED9C-BF21-4D6E-9947-284FE4A336F4.jpeg
  14884   Mon Sep 16 19:29:24 2019   Koji   Update   Cameras   MC2 trans camera (?) rotated

The left one is analog and 90deg rotated.

See also: This issue tracker

  15048   Tue Nov 26 13:33:33 2019   Yehonathan   Update   Cameras   MC2 Camera rotated by 90 degrees

MC2 analog camera was rotated by 90 degrees. Orientation correctness was verified by exciting the MC2 Yaw degree of freedom.

Attached before and after photos of the camera setup.

Attachment 1: MC2AnalogCameraAfter.jpg
Attachment 2: MC2AnalogCameraBefore.jpg
  15306   Sat Apr 18 13:32:31 2020   rana   Update   Cameras   GigE w better NIR sensitivity

There's this elog from Stephen about better 1064 sensitivity from Basler. We should consider getting one if he finds that its actual SNR is as good as we would expect from the QE improvement.

Might allow for better scatter measurements - not that we need more signal, but it could allow us to use shorter exposure times and reduce blurring due to the wobbly beams.

  15311   Thu Apr 23 09:52:02 2020   Jon   Update   Cameras   GigE w better NIR sensitivity

Nice, and we should also permanently install the camera server (c1cam) which is still sitting on the electronics bench. It is running an adapted version of the Python 2/Debian 8 site code. Maybe if COVID continues long enough I'll get around to making the Python 3 version we've long discussed.

Quote:

There's this elog from Stephen about better 1064 sensitivity from Basler. We should consider getting one if he finds that its actual SNR is as good as we would expect from the QE improvement.

  16060   Wed Apr 21 10:59:07 2021   rana   Summary   Cameras   note on new GigE cam @ 1064

Note from Stephen on more sensitive Baslers.

  16190   Mon Jun 7 15:37:01 2021   Anchal, Paco, Yehonathan   Summary   Cameras   Mon 7 in Control Room Died

We found Mon 7 in the control room dead this afternoon. Its front power-on green light is not lighting up. All other monitors are working as normal.

This monitor was used for looking at IMC camera analog feed. It is one of the most important monitors for us, so we should replace it with a different monitor.

Yehonathan and Paco disconnected the monitor and brought it down. We put it under the back table if anyone wants to fix it. Paco has ordered a BNC to VGA/HDMI converter to put in any normal monitor up there. It will happen this Wednesday. Meanwhile, I have changed the MON4 assignment from POP to Quad2 to be used for IMC.

  16204   Wed Jun 16 13:20:19 2021   Anchal, Paco   Summary   Cameras   Mon 7 in Control Room Replaced

We replaced Mon 7 with an LCD monitor from the back bench. It is fed the analog signal from BNC, converted into VGA with a converter box that Paco bought. We can replace this monitor with another one if it is needed on the back bench. For now, we definitely need a monitor up there to show the IMC camera.

Attachment 1: IMG_20210616_083810.jpg
  16774   Wed Apr 13 15:57:25 2022   Ian MacMillan   Update   Cameras   Camera Battery Test

Tested the Nikon batteries for the camera. They are supposed to be 7 V batteries but they don't hold a charge. I confirmed this with a multimeter after charging them for days. Ordered new ones (Nikon EN-EL9).

  16776   Wed Apr 13 18:55:54 2022   Koji   Update   Cameras   Camera Battery Test

I believe that the Nikon has an exposure problem and that's why we bought the Canon.

 

  16802   Fri Apr 22 07:01:58 2022   JC   Update   Coil Drivers   Adding Resistors and Reinstalling

[Koji, JC]

Coil drivers LO2, SR2, AS4, and AS1 have been updated and reinstalled into the system.

LO2 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (Unit: S2100008)
LO2 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (Unit: S2100530)
SR2 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (Unit: S2100614)
SR2 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (Unit: S2100615)
AS1 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (Unit: S2100610)
AS1 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (Unit: S2100611)
AS4 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (Unit: S2100612)
AS4 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (Unit: S2100613)

Attachment 1: IMG_0536.jpeg
Attachment 2: IMG_0539.jpeg
Attachment 3: IMG_0542.jpeg
Attachment 4: IMG_0545.jpeg
Attachment 5: IMG_0548.jpeg
Attachment 6: IMG_0551.jpeg
Attachment 7: IMG_0554.jpeg
Attachment 8: IMG_0557.jpeg
  16804   Fri Apr 22 12:09:51 2022   Anchal   Update   Coil Drivers   Please update DCC pages

Nice. Please put this information on the DCC pages of the coil driver units. You'll find links to all the units in the document tree LIGO-E2100447. For each page, click on "Change Metadata" in the left panel and add the change made to the resistor (including the resistor name on the PCB, and its previous and new values), add a link to your previous elog post (which has more details, like photos) to "Notes and Changes", and upload an updated version of the circuit schematic by creating an annotation in the previous circuit schematic pdf. Every unit that has a serial number in the lab has a DCC page (if not, we should create one), where we should track all such hardware changes.

 

  16814   Wed Apr 27 10:05:55 2022   JC   Update   Coil Drivers   Coil Drivers Update

18 (9 pairs) Coil Drivers have been modified. Namely ETMX/ITMX/ITMY/BS/PRM/SRM/MC1/MC2/MC3.

 

ETMX Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100624)
ETMX Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100631)
ITMX Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100620)
ITMX Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100633)
ITMY Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100623)
ITMY Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100632)
BS Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100625)
BS Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100649)
PRM Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100627)
PRM Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100650)
SRM Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100626)
SRM Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100648)
MC1 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100628)
MC1 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100651)
MC2 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100629)
MC2 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100652)
MC3 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100630)
MC3 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100653)

 

 

Will be updating this, linking each coil driver to its DCC page.

  16823   Mon May 2 13:30:52 2022   JC   Update   Coil Drivers   Coil Drivers Update

The DCC has been updated, along with the modified schematic. Links have been attached.

Quote:

18 (9 pairs) Coil Drivers have been modified. Namely ETMX/ITMX/ITMY/BS/PRM/SRM/MC1/MC2/MC3.

 

ETMX Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100624)
ETMX Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100631)
ITMX Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100620)
ITMX Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100633)
ITMY Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100623)
ITMY Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100632)
BS Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100625)
BS Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100649)
PRM Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100627)
PRM Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100650)
SRM Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100626)
SRM Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100648)
MC1 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100628)
MC1 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100651)
MC2 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100629)
MC2 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100652)
MC3 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3 (S2100630)
MC3 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3 (S2100653)

 

 

Will be updating this linking each coil driver to the DCC

 

  138   Thu Nov 29 10:36:47 2007   alberto   Configuration   Computer Scripts / Programs   Agilent 82357B GPIB to USB Interface Installation Procedure
Running the Agilent Automation-Ready CD provided with the interface is only the first step of the installation. Apparently there should also be a second CD with the drivers for Windows XP, but I couldn't find it. So, after installing the IO Libraries Suite from the CD, I had to install the drivers with an executable downloaded from Agilent's website at:

http://www.home.agilent.com/agilent/editorial.jspx?cc=US&lc=eng&ckey=1188958&nid=-35199.0.00&id=1188958

and only then could I plug in the interface.
Anyway, I burned a CD with the file and put it together with the other one.
  142   Thu Nov 29 18:10:13 2007   Alberto   HowTo   Computer Scripts / Programs   GPIB Scripts
I've spent a lot of time trying to configure the GPIB-USB interface for the HP4195. After installing 1) the Agilent libraries, 2) the drivers, 3) the Matlab Instrument Toolbox, 4) Jamie's script, and 5) Alice's script, the computer can see the HP but they still can't 'talk' to each other.
I give up. I asked Alice Wang how she managed to get data. I'm not sure she used the GPIB interface. Rob said she might have used the old-fashioned floppy disks that we can't read anymore here.
I would really appreciate any suggestion from anyone who has run into the same problems.
  143   Thu Nov 29 19:35:14 2007   rana   HowTo   Computer Scripts / Programs   GPIB Scripts

Quote:
I've spent a lot of time trying to configure the GPIB-USB interface for the HP4195. After installing 1) the Agilent libraries, 2) the drivers, 3) the Matlab Instrument Toolbox, 4) Jamie's script, and 5) Alice's script, the computer can see the HP but they still can't 'talk' to each other.
I give up. I asked Alice Wang how she managed to get data. I'm not sure she used the GPIB interface. Rob said she might have used the old-fashioned floppy disks that we can't read anymore here.
I would really appreciate any suggestion from anyone who has run into the same problems.


Alice and Jamie used the USB-GPIB interface. You should just try using the black laptop, which already has this capability, or ask Jamie Rollins, who actually knows something.
  157   Mon Dec 3 00:10:42 2007   rana   DAQ   Computer Scripts / Programs   linemon
I've started up one of our first Matlab-based DMT processes as a test.

There's a matlab script running on Mafalda which is measuring the height of the 60 Hz peak
in the MC1 UL SENSOR and writing it to an unused EPICS channel (PZT1_PIT_OFFSET).

The purpose of this is just to see if such a thing is stable over long periods of time. It's
open in a terminal on linux3 so it can be killed at any time if it runs amok.

Right now the code just demods the channel and tracks the absolute value of the peak. The
next upgrade will have it track the actual frequency once per minute and then report that
as well. We also have to figure out how to make it a binary and then make a single script
that launches all of the binaries.

For now you can watch its progress with the StripTool on op540m; it's the cheap and easy DMT viewer.
  159   Mon Dec 3 17:55:39 2007   tobin   HowTo   Computer Scripts / Programs   linemon
Matlab's Signal Processing toolbox has a set of algorithms for identifying sinusoids in data. Some of them (e.g., rootmusic) take the number of sinusoids to find as an argument and return the "most probable N frequencies." These could be useful in line monitoring.
  160   Mon Dec 3 19:06:49 2007   rana   DAQ   Computer Scripts / Programs   linemon
I turned up my nose at Matlab's special tools. I modified the linetracker to use the
relationship phase = 2*pi*f*t to estimate the frequency each minute. The
code uses 'polyfit' to get the mean and trend of the unwrapped phase and then determines
how far the initial frequency estimate was off. It then uses the updated number as the
initial guess for the next minute.
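In outline, that per-minute update looks like the following Python/numpy rendering of the idea (for illustration only; the real code is Matlab, and the sampling rate, guess frequency and smoothing length here are placeholders):

import numpy as np

def refine_line_frequency(x, fs, f_guess):
    # Demodulate at the guess frequency; the residual phase then obeys
    # phase(t) = 2*pi*(f_true - f_guess)*t + const, so a linear fit to the
    # unwrapped phase gives the frequency correction.
    t = np.arange(len(x)) / fs
    z = x * np.exp(-2j * np.pi * f_guess * t)
    n = int(fs)                                    # crude 1-second moving average as a low-pass
    z = np.convolve(z, np.ones(n) / n, mode="same")
    phase = np.unwrap(np.angle(z))
    slope, _ = np.polyfit(t, phase, 1)             # rad/s
    f_corrected = f_guess + slope / (2 * np.pi)
    amplitude = 2 * np.abs(z).mean()               # approximate peak amplitude
    return f_corrected, amplitude

# Example with a synthetic 60.02 Hz line sampled at 2048 Hz:
fs = 2048.0
t = np.arange(int(60 * fs)) / fs
x = np.sin(2 * np.pi * 60.02 * t) + 0.1 * np.random.randn(len(t))
print(refine_line_frequency(x, fs, 60.0))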

I looked at a couple hours of data before letting it run. It looks like the phase of the
'60 Hz' peak varies on 20-second timescales but not much faster; anything faster
would be a glitch rather than a monotonic frequency drift.

From the attached snapshot you can see that the amplitude (PZT1_PIT) varies by ~10 %
and the frequency by ~40 mHz in a couple hour span.
Attachment 1: spd64d1.jpg
  166   Wed Dec 5 16:57:36 2007   tobin   HowTo   Computer Scripts / Programs   SR785 data converter on linux
I was pleased to find that the SR785 Data Viewer (including the command line conversion utility) installs and works in linux using WINE (on my laptop at least). There are some quirks, of course, but I was able to extract my data.
  175   Thu Dec 6 18:11:15 2007   rob   HowTo   Computer Scripts / Programs   Making DMF monitors

I was able to use the matlab compiler to compile a version of the linetracker written by Rana, and run the compiled version on mafalda.

I believe I made the necessary edits to our mDV config file so that it should be easy for others to follow these steps:

1) Write the DMF routine you want, as a matlab function (not a script).

2) If it runs correctly in matlab, then from the matlab command line compile
it using the -m flag (i.e., mcc -m myfun.m). You should run the
compiler from the directory where you want the executable to end up (don't use the mDV/extra
directory so it doesn't get all cluttered).

3) prior to running the resulting executable (which should be called simply myfun),
prepend the directories
/cvs/cds/caltech/apps/linux/matlab/bin/glnx86
/cvs/cds/caltech/apps/linux/matlab/sys/os/glnx86/

to the LD_LIBRARY_PATH environment variable. These directories must be prepended, as the
versions that already exist in /usr/lib don't work; I'm loath to do this in the cshrc.40m
for fear of later conflicts, so it will need to be done in some sort of shell script which
calls the matlab executable.
  180   Fri Dec 7 14:14:48 2007   rob   Metaphysics   Computer Scripts / Programs   tdsread problems on Solaris

tdsread has developed a strange new illness, whereby it cannot read EPICS values from two subsystems at once (e.g., getting an LSC and a SUS value simultaneously). I thought this might have something to do with the fact that both losepics and iscepics are running on the same box,
but the same thing happens with IOO EPICS records, so that's not the culprit.

This is new behaviour, and it's only happening on the solaris machines. I suspect some ENV/cshrc juju has caused it, as the tdsread executable is the same one from April, and I don't think our EPICS infrastructure has changed otherwise. In the near term we can either try running the scripts on linux, or modify the IFO scripts to not do these types of calls.