ID | Date | Author | Type | Category | Subject
  51 | Thu Jun 17 07:40:07 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 1, Getting Started

 For Wednesday, June 16:

I attended the LIGO Orientation and first Introduction to LIGO lecture in the morning. In the afternoon, I ran a few errands (got keys to the office, got some Computer Use Policy documentation done) and toured the lab. I then got Cygwin installed on my laptop along with the proper SSH packages, and was successfully able to log in to and interact with the Hartmann computer in the lab through the terminal, from the office. I have started reading relevant portions of Dr. Brooks' thesis and of "Fundamentals of Interferometric Gravitational Wave Detectors" by Saulson.
  52 | Thu Jun 17 22:03:51 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 2, Getting Started

For Thursday, June 17:

Today I attended a basic laser safety training orientation, the second Introduction to LIGO lecture, a Summer Research Student Safety Orientation, and an Orientation for Non-Students living on campus (lots of mandatory meetings today). I met with Dr. Willems and Dr. Brooks in the morning and went over some background information regarding the project, then in the afternoon I got an idea of where I should progress from here from talking with Dr. Brooks. I read over the paper "Adaptive thermal compensation of test masses in advanced LIGO" and the LIGO TCS Preliminary Design document, and did some further reading in the Brooks thesis.

I'm making a little bit of progress with accessing the Hartmann lab computer with Xming but got stuck, and hopefully will be able to sort that out in the morning and progress to where I want to be (I wasn't able to get much further than that, since I can't access the Hartmann computer in the lab currently due to laser authorization restrictions). I'm currently able to remotely open an X terminal on the server but wasn't able to figure out how to then be able to log in to the Hartmann computer. I can do it via SSH on that terminal, of course, but am having the same access restrictions that I was getting when I was logging in to the Hartmann computer via SSH directly from my laptop (i.e. I can log in to the Hartmann computer just fine, and access the camera and framegrabber programs, but for the vast majority of the stuff on there, including MATLAB, I don't have permissions for some reason and just get 'access denied'). I'm sure that somebody who actually knows something about this stuff will be able to point out the problem and point me in the right direction fairly quickly (I've never used SSH or the X Window system before, which is why it's taking me quite a while to do this, but it's a great learning experience so far at least).

Goals for tomorrow: get that all sorted out and learn how to be able to fully access the Hartmann computer remotely and run MATLAB off of it. Familiarize myself with the camera program. Set the camera into test pattern mode and use the 'take' programs to retrieve images from it. Familiarize myself with the 'take' programs a bit and the various options and settings of them and other framegrabber programs. Get MATLAB running and use fread to import the image data arrays I take with the proper data representation (uint16 for each array entry). Then, set the camera back to recording actual images, take those images from the framegrabber and save them, then import them into MATLAB. I should familiarize myself with the various settings of the camera at this stage, as well.
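For the fread step, I expect something along these lines to work (a sketch only; the exact file name and the need for a transpose are assumptions I still have to check):

fid = fopen('/opt/EDTpdv/JKimg/testpat0000.raw','r');
img = fread(fid,[1024 1024],'uint16');   % read each pixel as a 16-bit unsigned integer
fclose(fid);
imshow(img.',[]);   % fread fills arrays column-first, so the image may need transposing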

 

--James

  53 | Sat Jun 19 17:31:46 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 3, Initial Image Analysis
For Friday, June 18:
(note that I haven't been working on this stuff all of Saturday or anything, despite posting it now. It was getting late on Friday evening so I opted to just type it up now, instead)

(all matlab files referenced can be found in /EDTpdv/JKmatlab unless otherwise noted)

I finally got Xming up and running on my laptop and had Dr. Brooks edit the permissions of the controls account, so now I can fully access the Hartmann computer remotely (run MATLAB, interact with the framegrabber programs, etc.). I was able to successfully adjust camera settings and take images using 'take', saving them as .raw files. I figured out how to import these .raw files into MATLAB using fopen and fread, and display them as grayscale images using the imshow command. I then wrote a program (readimgs.m, as attached) which takes as inputs a base filename and a number of images (n), then automatically loads the first n .raw files located in /EDTpdv/JKimg/ with the inputted base file name, formatting them properly and saving them as a 1024x1024x(n) matrix.
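For example, loading and displaying a series of 200 images with base file name 'hws' (assuming files hws0000.raw onward exist in /EDTpdv/JKimg/) looks like:

A = readimgs('hws',200);   % 1024x1024x200 image array
imshow(A(:,:,1),[]);       % display the first image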

After trying out the test pattern of the camera, I set the camera into normal operating mode. I took 200 images of the HWS illuminated by the OLED, using the following camera settings:

 
Temperature data from the camera was, unfortunately, not taken, though I now know how to take it.
 
The first of these 200 images is shown below:
 
hws0000.png

As a test exercise in MATLAB and also to analyze the stability of the HWS output, I wrote a series of functions to allow me to find and plot the means and standard deviations of the intensity of each pixel over a series of images. First, knowing that I would need it in the following programs in order to use the plot functions on the data, I wrote "ar2vec.m" (as attached), which simply inputs an array and concatenates all of its columns into a single column vector.
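(Incidentally, MATLAB's colon indexing does the same thing in a single step, which could replace the loop in ar2vec.m:)

V = A(:);   % stacks the columns of A into one column vector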

Then, I wrote "stdvsmean.m" (as attached), which inputs a 3D array (such as the 1024x1024x(n) array of n image files) and first calculates the standard deviation and mean of this array along the 3rd dimension (leaving, for example, two 1024x1024 arrays which give the mean and standard deviation of each pixel over the n images). It then uses ar2vec to create two column vectors, representing the mean and standard deviation of each pixel, and plots a scatterplot of the standard deviation of each pixel vs. its mean intensity (with logarithmic axes), along with histograms of the mean intensities and standard deviations of intensities (with logarithmic y-axes).

"imgdevdat.m" (as attached) is simply a master function which combines the previous functions to input image files, format them, analyze them statistically and create plots.

Running this function for the first 20 images gave the following output:

(data from 20 images, over all 1024x1024 pixels)

Note that the background level is not subtracted out in this function, which is apparent from the plots. The logarithmic scatter plot looks pretty linear, as expected, but there are interesting features arising between the intensities of ~120 to ~130 (the obvious spike upward of standard deviation, followed immediately by a large dip downward).

MATLAB gets pretty bogged down trying to plot over a million data points at a time, to the point where it's very difficult to do anything with the plots. I therefore wrote the function "minimgstat.m" (as attached), which is very similar to imgdevdat.m except that before doing the analysis and plotting, it reduces the size of the image array to the upper-left NxN square (where N is an additional argument of the function).

Using this function, I did the same analysis of the upper-left 200x200 pixels over all 200 images:

(data from 200 images, over the upper-left 200x200 pixels)

The intensities of the pixels don't go as high this time because the upper portion of the image is dimmer than much of the rest of the image (as is apparent from looking at the image itself, and as I demonstrate further a little bit later on). Do note the change in axis scaling that results from this when comparing the plots. We do, however, see the same behavior in the ~120-128 intensity level region (more pronounced in this plot because of the change in axis scaling).

I was interested in looking at which pixels constituted this band, so I wrote a function "imgbandfind.m" (as attached), which inputs a 2D array and a minimum and maximum range value, goes through the image array pixel-by-pixel, determines which pixels are within the range, and then constructs an RGB image which displays pixels within the range as red and pixels outside the range as black.

I inputted the first image in the series into this function along with the range of 120-129, and got the following:

(pixels in intensity range of 120-129 in first image)

So the pixels in this range appear to be the pixels on the outskirts of each wavefront dot near the vertical center of the image. The outer circles of the dots on the lower and upper portions of the image do not appear, perhaps because the top of the image is dimmer and the bottom of the image is brighter, such that these outskirt pixels would have lower and higher values, respectively. I plan to investigate this further (what causes this 'flickering', and whether it is a problem at all).

The fact that the background levels are lower nearer to the upper portion of the image is demonstrated in the next image, which shows all intensity levels less than 70:
(pixels in intensity range of 0-70 in first image)

So the background levels appear to be nonuniform across the CCD, as are the intensities of each dot. Again, I plan to investigate this further. (Could it be something to do with stray light hitting the CCD nonuniformly, maybe? I haven't thought through all the possibilities.)
 
The OLED has been turned off, so my next immediate step will be to investigate the background levels further by analyzing the images when not illuminated by the OLED.
 
In other news: today I also attended the third Intro to LIGO lecture, a talk on Artificial Neural Networks and their applications to automated classification of stellar spectra, and the 40m Journal Club on the birth rates of neutron stars (though I didn't think to learn how to access the wiki until a few hours right before, and then didn't actually read the paper. I fully intend to read the paper for next week before the meeting).
 
Attachment 2: ar2vec.m
function V = ar2vec(A)
%AR2VEC V=ar2vec(A)
%concatenates the columns of 2D array A into a single column vector V

sz = size(A);
n=sz(1,2);
i=1;
V=[];

while i<(n+1)
... 7 more lines ...
Attachment 3: readimgs.m
function arr = readimgs(imn,n)
%readimgs('basefilename',n) 
%- A function to load a series of .raw files outputted by 'take'
%and stored in /opt/EDTpdv/JKimg/
%  Inputs: 'basefilename' is a string input (for example, for series of
%   images "testpat####.raw" input 'testpat'). "n" is the number of images,
%   so for testpat0000-testpat0004 input n=5

i=0;
arr=[];
... 32 more lines ...
Attachment 4: stdvsmean.m
function M = stdvsmean(A)
%STDVSMEAN takes a 3D array of image data and computes
%stdev vs. mean for each pixel

%find means/st devs between each image
astd = std(double(A),0,3);
armn = mean(double(A),3);

%convert into column vectors of pixel-by-pixel data
asvec=ar2vec(astd);
... 33 more lines ...
Attachment 5: imgdevdat.m
function imgdevdat(basefilename,imgnum)
%IMGDEVDAT Inputs base file name and number of images stored as .raw files
%in ../EDTpdv/JKimg/, automatically imports as 1024x1024x(n) matrix, finds
%the mean and standard deviation of each pixel in each image and plots
A=readimgs(basefilename,imgnum);
stdvsmean(A)
end

Attachment 6: minimgstat.m
function minimgstat(basefilename,imgnum,N)
%MINIMGSTAT Inputs base file name and number of images stored as .raw files
%in ../EDTpdv/JKimg/, automatically imports them as a 1024x1024x(n) matrix,
%reduces this to the upper-left (N)x(N) square, finds the mean and standard
%deviation of each pixel and plots
%(function renamed to match the file name; third argument named N so as not
%to shadow the built-in size function)
A=readimgs(basefilename,imgnum);
smA=A(1:N,1:N,:);
stdvsmean(smA)
end
Attachment 7: imgbandfind.m
function [HILT] = imgbandfind(img,minb,maxb)
%IMGBANDFIND inputs an image array and minimum and maximum value,
% then finds all values of the array within that range, then plots with
%values in range highlighted in red against a black background

img=double(img);
maxv=max(max(img));
sizm=size(img);
rows=sizm(1,1);
cols=sizm(1,2);
... 20 more lines ...
  54 | Tue Jun 22 00:21:47 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 4, Hartmann Spot Flickering Investigation

 I started out the day by taking some images from the CCD with the OLED switched off, to just look at the pattern when it's dark. The images looked like this:

 
Taken with camera settings:

The statistical analysis of them using the functions from Friday gave the following result:

 
At first glance, the distribution looks pretty Poissonian, as expected. There are a few scattered pixels registering a little brighter, but that's perhaps not so terribly unusual, given the relatively tiny spread of intensities with even the most extreme outliers. I won't say for certain whether or not there might be something unexpected at play, here, but I don't notice anything as unusual as the standard deviation 'spike' seen from intensities 120-129 as observed in the log from yesterday.
 
Speaking of that spike, the rest of the day was spent trying to investigate it a little more. In order to accomplish this, I wrote the following functions (all attached):
 
-spotfind.m -- inputs a 3D array of several Hartmann images, as well as a starting pixel and a threshold intensity level. It analyzes the first image, scanning from the starting pixel until it finds a spot (with its edge determined by the threshold level), after which it finds a box of pixels completely surrounding the spot and shrinks the matrix down to this size, localizing the image to a single spot
 
-singspotcent.m -- inputs the image array outputted from spotfind, subtracts an estimate of the background, then uses the centroiding algorithm sum(x*P^2)/sum(P^2) to find the centroid (where x is the coordinate and P is the intensity level; see the sketch following this list), then outputs the centroid location
 
-hemiadd.m -- inputs the image from spotfind and the centroid from singspotcent, subtracts an estimate of the background, then finds the sum total intensity in the top half of the image above the centroid, the bottom half, the left half and the right half, outputs these values as n-component vectors for an n-image input, subtracts from each vector its mean and then plots the deviations in intensity from the mean in each half of the image as a function of time
 
-edgeadd.m -- similar to hemiadd, except that rather than adding up all pixels on one half of the image, it inputs a threshold, determines how far to the right of the centroid the spot falls past this threshold and uses that as a radial length, then finds the sum of the intensities of a bar of 3 pixels on this "edge" at the radial length away from the centroid.
 
-spotfft.m -- performs a fast Fourier transform on the outputs from edgeadd, outputting the frequency spectrum at which the intensity of these edge pixels oscillates, then plotting these for each of the four edge vectors. See an example output below.
 
--halfspot_fluc.m and halfspot_edgefluc.m -- master functions which combine and automate the previous functions
 
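As referenced above, here is a minimal sketch of the weighted-centroid step in isolation (background subtraction omitted; singspotcent.m, attached, is the actual implementation):

P = double(spotM(:,:,1));                    % single-spot image from spotfind
[X,Y] = meshgrid(1:size(P,2),1:size(P,1));   % pixel coordinate grids
cc = sum(sum(X.*P.^2))/sum(sum(P.^2));       % column (x) centroid
rc = sum(sum(Y.*P.^2))/sum(sum(P.^2));       % row (y) centroid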
Dr. Brooks has suggested that the observed flickering might be an effect of the finite thickness of the Hartmann plate. The OLED can be treated as a point source emitting an approximately spherical wavefront, so the light will hit the edges of the plate holes at an angle and be scattered onto the CCD. If the plate vibrates (which it certainly must, to some degree), the wavefront will hit each edge at a slightly different angle as the edge is displaced, and the scattered light will hit the CCD at a different point, causing the flickering (which is, after all, observed to occur near the edges of the spots). This effect certainly must cause some level of noise, but whether it's the culprit for our 'flickering' spike in the standard deviation remains to be seen.

Here is the frequency spectrum of the edge intensity sums for two separate spots, found over 128 images:
Intensity Sum Amplitude Spectrum of Edge Fluctuations, 128 images, spot search point (100,110), threshold level 110

128 images, spot search point (100,100), threshold level 129
At first glance, I am not able to conclude anything from this data. I should investigate this further.

A few things to note, to myself and others:
--I still should construct a Bode plot from this data and see if I can deduce anything useful from it
--I should think about whether or not my algorithms are good for detecting what I want to look at. Is looking at a 3 pixel vertical or horizontal 'bar' on the edge good for determining what could possibly be a more spherical phenomenon? Are there any other things I need to consider? How will the settings of the camera affect these images and thus the results of these functions?
--Am I forgetting any of the subtleties of FFTs? I've confirmed that I am measuring the amplitude spectrum by looking at reference sine waves, but I should be careful since I haven't worked with these in a while
 
It's late (I haven't been working on this all night, but I haven't gotten the chance to type this up until now), so thoughts on this problem will continue tomorrow morning..

Attachment 1: spotfind.m
function [spotM,r0,c0] = spotfind(M,level,rs,cs)
%SPOTFIND Inputs a 3D array of hartmann spots and spot edge level
%and outputs a subarray located around a single spot located near (rs,cs)
cut=level/65535;
A=double(M(:,:,1)).*double(im2bw(M(:,:,1),cut));

%start at (rs,cs) and sweep to find spot
r=rs;
c=cs;
while A(r,c)==0
... 34 more lines ...
Attachment 2: singspotcent.m
function [rc,cc] = singspotcent(A)
%SINGSPOTCENT returns centroid location for first image in input 3D matrix
MB=double(A(:,:,1));
[rn cn]=size(MB);
M=MB-mean(mean(min(MB)));
r=1;
c=1;
sumIc=0;
sumIr=0;
while c<(cn+1)
... 26 more lines ...
Attachment 3: hemiadd.m
function [topsum,botsum,leftsum,ritsum] = hemiadd(MB,rcd,ccd)
%HEMIADD inputs a 3D image matrix and centroid location and finds the difference of
%the sums of the top half, bottom half, left half and right half at each time
%compared to their means over that time

%round coordinates of centroid
rc=round(rcd);
cc=round(ccd);

%subtract approximate background
... 51 more lines ...
Attachment 4: edgeadd.m
function [topsum,botsum,leftsum,ritsum] = edgeadd(MB,rcd,ccd,edgemax)
%EDGEADD inputs a 3D image matrix and centroid location and finds the difference of
%the sums of 3 edge pixels at radial distance "radial" from centroid for
%the top half, bottom half, left half and right half at each time
%compared to their means over that time

%round coordinates of centroid
rc=round(rcd);
cc=round(ccd);

... 59 more lines ...
Attachment 5: spotfft.m
function spotfft(t,b,l,r)
%SPOTFFT Does an fft and plots the frequency spectrum of four input vectors
%Specifically, this is to be used with halfspot_edgefluc to find the
%frequencies of oscillations about the edges of Hartmann spots
[n,m]=size(t);
NFFT=2^nextpow2(n);
T=fft(t,NFFT)/n;
B=fft(b,NFFT)/n;
L=fft(l,NFFT)/n;
R=fft(r,NFFT)/n;
... 30 more lines ...
Attachment 6: halfspot_fluc.m
function [top,bot,lft,rgt] = halfspot_fluc(M,spotr,spotc,thresh)
%HALFSPOT_FLUC Inputs a 3D array of Hartmann sensor images, along with an
%approximate spot location and intensity threshold. Finds a spot on the
%first image near (spotc,spotr) and defines boundary of spot near an
%intensity of 'thresh'. Outputs fluctuations of the intensity sums of the
%top, bottom, left and right halves of the spot about their means, and
%graphs these against each other automatically.

[I,r0,c0]=spotfind(M,thresh,spotr,spotc);
[r,c]=singspotcent(I);
... 7 more lines ...
Attachment 7: halfspot_edgefluc.m
function [top,bot,lft,rgt] = halfspot_edgefluc(M,spotr,spotc,thresh,plot)
%HALFSPOT_EDGEFLUC Inputs a 3D array of Hartmann sensor images, along with an
%approximate spot location and intensity threshold. Finds a spot on the
%first image near (spotc,spotr) and defines boundary of spot near an
%intensity of 'thresh'. Outputs fluctuations of the intensity sums of the
%top, bottom, left and right edges of the spot about their means, and
%graphs these against each other automatically.
%
%For 'plot', specify 'time' for the time signal or 'fft' for the frequency

... 10 more lines ...
  55 | Tue Jun 22 22:30:24 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 5, more Hartmann image preliminary analysis

 Today I spoke with Dr. Brooks and got a rough outline of what my experiment for the next few weeks will entail. I'll be getting more of the details and getting started a bit more, tomorrow, but today I had a more thorough look around the Hartmann lab and we set up a few things on the optical table. The OLED is now focused through a microscope to keep the beam from diverging quite as much before it hits the sensor, and the beam is roughly aligned to shine onto the Hartmann plate. The Hartmann images currently look like this (on a color scale of intensity):

hws.png

This image was taken with the camera set to an exposure time of 650 microseconds and a frequency of 58Hz. The visible 'streaks' on the image are believed to possibly be an artifact of the camera's data acquisition process.

I tested to see whether the same 'flickering' is present in images under this setup.

For the frequency kept at 58Hz, the following statistics were found from a 200x200 pixel box within a series of 10 images taken at each of several exposure times. Note that the range on the plot has been reduced to the region near the relevant feature, and that this range is not changed from image to image:

750 microseconds:

750us.png

1000 microseconds:

1000us.png

1500 microseconds:

1500us.png

2000 microseconds:

2000us.png

3000 microseconds:

3000us.png

4000 microseconds:

4000us.png

5000 microseconds. Note that the background level is approaching the level of the feature:

5000us.png

6000 microseconds. Note that the axis setup is not restricted to the same region, and that the background level exceeds the level range of the feature. This demonstrates that the 'feature' disappears from the plot when the plot does not include the specific range of ~115-130:

8000us.png

 

When images containing the feature intensities are averaged over a greater number of images, the plot takes on the following appearance (for a 200x200 box within a series of 100 images, 3000us exposure time):

hws3k.png

This pattern changes a bit when averaged over more images. It looks as though this could, perhaps, just be the result of the decrease in the standard deviation of the standard deviations in each pixel resulting from the increased number of images being considered for each pixel (that is, the line being less 'spread out' in the y-axis direction). 

 

To demonstrate that frequency doesn't have any effect, I got the following plots from images where I set the camera to different frequencies then set the exposure time to 3000us (I wouldn't expect this to have any effect, given the previous images, but these appear to demonstrate that the 'feature' does not vary with time):

 

Set to 30Hz:

f30Hz.png

Set to 1Hz:

f1Hz.png

 

To make sure that something weird wasn't going on with my algorithm, I did the following: I constructed a 10-component vector of random numbers, then tiled that vector beside itself ten times to form a 2D array. Then, I stacked scaled copies of that 2D array into a 3D array, using ten different integer multiples so that the standard deviations of each row would be integer multiples of each other when the standard deviation was taken along the direction of the random change (I chose the integer multiples to ensure that some of these values would fall within the range of 115-130). Thus, if my function wasn't making any weird mistakes, I would end up with a linear plot of standard deviation vs. mean, with a slope of 1. When the array was inputted into the function with which the previous plots were found, the output plot was indeed observed to be linear, and a least-squares regression of the mean/deviation data confirmed that the slope was exactly 1 and the intercept exactly 0. So I'm pretty certain that the feature observed in these plots is not any sort of 'artifact' of the algorithm used to analyze the data (and all the functions are pretty simple, so I wouldn't expect it to be, but it doesn't hurt to double-check).

 

I would conjecture from all of this that the observed feature in the plots is the result of some property of the CCD array or other element of the camera. It does not appear to have any dependence on exposure time or to scale with the relative overall intensity of the plots, and, rather, seems to depend on the actual digital number read out by the camera. This would suggest to me, at first glance, that the behavior is not the result of a physical process having to do with the wavefront.

 

EDIT: Some late-night conjecturing. Consider the following:

I don't know how the specific analog-to-digital conversion onboard the camera works, but I got to thinking about ADCs. I assume, perhaps incorrectly, that it works on roughly the same idea as the flash ADCs that I dealt with back in my digital electronics class. That is, I don't know if it has the same structure (a linear resistor ladder hooked up to comparators which compare the ladder voltages to the analog input, followed by combinational logic which takes the comparator outputs and produces a digital level), but I assume that it must, at some level, compare the analog input to a number of different voltage thresholds, take the highest 'threshold' that the analog input exceeds, and output the digital level corresponding to that particular threshold voltage.

Now, consider if there was a problem with such an ADC such that one of the threshold voltages was either unstable or otherwise different than the desired value (for a Flash ADC, perhaps this could result from a problem with the comparator connected to that threshold level, for example). Say, for example, that the threshold voltage corresponding to the 128th level was too low. In that case, an analog input voltage which should be placed into the 127th level could, perhaps, trip the comparator for the 128th level, and the digital output would read 128 even when the analog input should have corresponded to 127.

So if such an ADC was reading a voltage (with some noise) near that threshold, what would happen? Say that the analog voltage corresponded to 126 and had noise equivalent to one digital level. It should, then, give readings of 125, 126 or 127. However, if the voltage threshold for the 128th level was off, it would bounce between 125, 126, 127 and 128 -- that is, it would appear to have a larger standard deviation than the analog voltage actually possessed.

Similarly, consider an analog input voltage corresponding to 128 with noise equivalent to one digital level. It should read out 127, 128 and 129, but with the lower-than-desired threshold for 128 it would perhaps read out only 128 and 129 -- that is, the standard deviation of the digital signal would be lower for points just above 128.

This is very similar to the sort of behavior that we're seeing!
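As a quick sanity check of this picture, here is a toy MATLAB simulation (all numbers illustrative, not a model of the actual camera electronics): quantize a noisy analog level whose mean sits just below 128, once with an ideal quantizer and once with the code-128 threshold set too low.

% toy model: the transition into code 128 sits at 127.0 instead of 127.5
nsamp = 1e5;
analog = 126 + randn(1,nsamp);                % mean at level 126, noise of ~1 level
ideal = round(analog);                        % fault-free quantizer
faulty = ideal;
bad = (analog >= 127.0) & (analog < 127.5);   % inputs caught by the low threshold
faulty(bad) = 128;                            % these should read 127 but read 128
[std(ideal) std(faulty)]                      % the faulty std is inflated

Repeating this with analog = 128 + randn(1,nsamp) shows the opposite effect: samples just below 127.5 get pulled up to 128, and the spread of the digital output shrinks.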

Thinking about this further, I reasoned that if this was what the ADC in the camera was doing, then if we looked in the image arrays for instances of the digital levels 127 and 128, we would see too few instances of 127 and too many instances of 128 -- several of the analog levels which should correspond to 127 would be 'misread' as 128. So I went back to MATLAB and wrote a function to look through a 1024x1024xN array of N images and, for every integer between an inputted minimum level and maximum level, find the number of instances of that level in the images. Inputting an array of 20 Hartmann sensor images, along with minimum and maximum levels of 50 and 200, gave the following:

levelinstances.png

Look at that huge spike at 128! This is more complex behavior than my simple picture, which would just result in 127 having "too few" counts and 128 having "too many", but to me it seems consistent with the hypothesis that the voltage threshold for the 128th digital level is too low, and is thus both giving false output readings of 128 and reducing the number of correct outputs for values just below 128. And assuming that I'm thinking about the workings of the ADC correctly, this is consistent with an increased standard deviation in the digital level for values with a mean just below 128 and a reduced standard deviation for values with a mean just above 128, which is what we observe.
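(For reference, the per-level counting step itself can be done compactly; a sketch, assuming the images are stored in a 1024x1024xN array A:)

counts = histc(double(A(:)),50:200);   % occurrences of each digital level from 50 to 200
bar(50:200,counts)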

 

This is my current hypothesis for why we're seeing that feature in the plots. Let me know what you think, and if that seems reasonable.

 

  56 | Wed Jun 23 06:49:48 2010 | Aidan | Misc | Hartmann sensor | SURF Log -- Day 5, more Hartmann image preliminary analysis

Nice work!

 

Quote:

(Entry 55, "SURF Log -- Day 5, more Hartmann image preliminary analysis", quoted in full; see above.)

  57 | Wed Jun 23 22:57:22 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 6, Centroiding

 So in addition to taking steps towards starting to set stuff up for the experiment in the lab, I spent a good deal of the day figuring out how to use the pre-existing code for finding the centroids in spot images. I spent quite a bit of time trying to use an outdated version of the code that didn't work for the actual captured images, and then once I was directed towards the right version I was hindered for a little while by a bug.

The 'bug' turns out to be something very simple, yet relatively subtle. The function centroid_images.m in '/opt/EDTpdv/hartmann/src/' was finding a threshold of 0 for my images, even though it had been working not long before with an image that Dr. Brooks loaded. Looking through the code, I noticed that before finding the threshold using the MATLAB function graythresh, several adjustments are made so as to subtract out the background and normalize the array. After estimating and subtracting a background, the function divides the entries of the image array by the maximum value in the image so as to normalize it. For arrays of numbers represented as doubles, this is fine. However, the function that I wrote to import my image arrays into MATLAB outputs an image array of integer data. So when the function divided my integer image arrays by the maximum value in the array, it rounded every value in the array to the nearest integer -- that is, the "normalized" array contained only ones and zeros. The function graythresh views this as a black and white image, and thus outputs a threshold of 0.

To remedy this, I edited centroid_images.m to convert the image array into an array of doubles near the very beginning of the function. The only new line is simply "image=double(image);", and I made a note of my edit in a comment above that line. The function started working for me after I did that.
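The rounding behavior is easy to reproduce at the MATLAB prompt (values illustrative):

a = uint16([100 200 300]);
a/max(a)                    % integer arithmetic rounds: returns [0 1 1]
double(a)/double(max(a))    % doubles: returns [0.3333 0.6667 1.0000]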

 

I then wrote a function which automatically centroids an input image and then plots the centroids as scatter-plot of red circles over the image. For an image taken off of the Hartmann camera, it gave the following:

centroidplot_nozoom.png

Zoomed in on the higher-intensity peaks, the centroids look good. They're a little offset, but that could just be an artifact of the plotting procedure; I can't say for certain either way. They all appear offset by the same amount, though:

centroidplot_zoom.png

One problem is that, for spots with a much lower relative intensity than the maximum intensity peak, the centroid appears to be offset:

centroidplot_zoom2.png

Better centering of the beam and more even illumination of the Hartmann plate could mitigate this problem, perhaps.

 

I also wrote a function which inputs two image matrices and outputs vector field plots representing the shift in each centroid from the first to the second images. To demonstrate that I could use this function to display the shifting of the centroids from a change in the wavefront, I translated the fiber mount of the SLED in the direction of the optical axis by about 6 turns of the z-control knob  (corresponding to a translation of about 1.9mm, according to the user's guide for the fiber aligner). This gave the following images:

 

Before the translation:

6turn_before.png

After:

6turn_after.png

 This led to a displacement of the centroids shown as follows:

6turnDisplacementVectors.png

Note that the magnitudes of the actual displacements are small, making the shift difficult to see. However, when we scale the displacement vectors up, we get much more readily visible direction vectors (having the same direction as the actual displacement vectors, but not the same magnitude):

6turnDirectionVectors.png

This was a very rough sort of measurement, since exposure time, focus of the microscope optic, etc. were not adjusted, and the centroids are compared between single images rather than composite images, meaning that random noise could have quite an effect, especially for the lower-magnitude displacements. However, this plot appears to show the centroids 'spreading out', which is as expected for moving the SLED closer to the sensor along the optical axis.

 

The following MATLAB functions were written for this (both attached):

centroidplot.m -- calls centroid_image and plots the data

centroidcompare.m -- calls centroid_image twice for two inputs matrices, using the first matrix's centroid output structure as a reference for the second. Does a vector field plot from the displacements and reference positions in the second output centroids structure.

Attachment 5: 6turn_before.png
6turn_before.png
Attachment 9: centroidplot.m
function centroiddata=centroidplot(M,N)
%a function to read the image matrix M and plot the centroids of each spot
%over the image
H=M(:,:,N);
cd /opt/EDTpdv/hartmann/src/
centroiddata = centroid_image(H);
cd /opt/EDTpdv/JKmatlab/

v=centroiddata.current_centroids;
r=v(:,1);
... 6 more lines ...
Attachment 10: centroidcompare.m
function centroiddata=centroidcompare(A,B,M,N)
%compares the Mth image in 3D image matrix A to Nth in B
H=A(:,:,M);
I=B(:,:,N);
cd /opt/EDTpdv/hartmann/src/
cent0=centroid_image(H);
centroiddata=centroid_image(I,cent0);
cd /opt/EDTpdv/JKmatlab
v=centroiddata.reference_centroids;
dv=centroiddata.displacement_of_centroids;
... 16 more lines ...
  58 | Fri Jun 25 00:11:13 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 7, SLED Beam Characterization

BACKGROUND:


In order to conduct future optical experiments with the SLED and to be able to predict the behavior of the beam as it propagates across the table and through various optics, it is necessary to know the properties of the beam. The spot size, divergence angle, and radius of curvature are all of interest if we wish to be able to predict the pattern which should appear on the Hartmann sensor given a certain optical layout.

It was therefore necessary to conduct an experiment to measure these properties. The wavefront emanating from the SLED is assumed to be approximately Gaussian, and thus has an intensity of the form:

I(x,y) = A*exp(-2*((x-x0)^2 + y^2)/w^2)

where A is some amplitude, w is the spot size, x and y are the coordinates transverse to the optical axis, and x0 is the displacement of the beam center in the x-direction from the coordinate origin. The displacement of the beam center in the y-direction is assumed to be zero (that is, y0=0). A and w are both functions of z, which is the coordinate of displacement parallel to the optical axis.

 

Notice that the total intensity read by a photodetector collecting the entire beam would be the double integral of I(x,y) over all x and y. If an opaque plate blocked the beam from x=xm to x=+inf (where xm is the location of the edge of the plate), then the intensity read by a photodetector collecting the remaining, unblocked portion of the beam would be:

Ipd(xm) = integral[-inf,xm] dx integral[-inf,+inf] dy A*exp(-2*((x-x0)^2 + y^2)/w^2)

Mathematica was used to simplify this integral, and showed it to be equivalent to:

Ipd(xm) = (pi/4)*A*w^2*erfc(sqrt(2)*(x0-xm)/w)

where erfc() is the complementary error function (this is the form replicated in gsbeam.m, attached). Note that for fixed z, this intensity is a function only of xm. If an experiment measured Ipd for multiple values of xm, it would therefore be possible via regression analysis to compute the best-fit values of A, w, and x0 from the measured values of Ipd and xm. This gives A, w, and x0 for that z-value. By repeating this process for multiple values of z, we can find the behavior of these parameters as functions of z.

Furthermore, we know that at z-values well beyond the Rayleigh range, w should be linear with respect to z. Assuming that our measurements are done in the far-field (which, for the SLED, they almost certainly would be) we could therefore find the divergence angle by knowing the slope of the linear relation between w and z. Knowing this, we could further calculate such quantities as the Rayleigh range, the minimum spot size, and the radius of curvature of the SLED output (see p.490 of "Lasers" by Milonni and Eberly for the relevant functional relationships for Gaussian beams).
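As a sketch of the regression step (whether beamdata.m, attached below, uses this exact routine I won't claim; this simply assumes the Optimization Toolbox's lsqcurvefit together with the gsbeam.m model defined later, with made-up data standing in for one razor scan):

xm = (8:0.25:14)';                          % example razor-edge positions at one z
Ipd = gsbeam([7.6 11.1 0.87],xm);           % synthetic readings from the model
p = lsqcurvefit(@gsbeam,[7 11 1],xm,Ipd)    % recovers the best-fit [A, x0, w]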


EXPERIMENT:

An experiment was therefore carried out to measure the intensity of the beam blocked from x=xm to x=+inf, for multiple values of xm, at multiple values of z. A diagram of the optical layout of the experiment is below:

 

(top view)


The razor blade was mounted on a New Focus 9091 Translational Stage, the relative displacement of which in the x-direction was measured with the Vernier micrometer mounted on the base. Tape was placed on the front of the razor so as to block light from passing through any of its holes. The portion of the beam not blocked by the razor then passed through a lens which was used to focus the beam back onto a PDA1001A Large Area Silicon Photodiode, the voltage output of which was monitored using a Fluke digital multimeter. The ruler stayed securely clamped onto the optical table (except when it was translated in the x-direction once during the experiment, as described later).

The following is a picture of this layout, as constructed:

 

 
The procedure of the experiment was as follows: first, the translational stage was clamped securely with the left-most edge of its base lined up with the desired z-value as measured on the ruler. The z-value as measured on the ruler was recorded. Then, the translational stage was moved in the negative x-direction until there was no change in the voltage measured on the DMM (which is directly proportional to the measured intensity of the beam). When no further DMM readout change was yielded from -x translation, it was assumed that the razor was no longer blocking the beam. Then, the stage was moved in the +x direction until the voltage output on the DMM just began to change. The micrometer and DMM values were both recorded. The stage was then moved inward until the DMM read a voltage close to the nearest multiple of 0.5V, and this DMM voltage and micrometer reading were recorded. The stage was then translated until the DMM voltage dropped by approximately 0.5V, the micrometer and DMM readings were recorded, and this process was repeated until the voltage reached ~0.5V. The beam output was then covered by a card so as to completely block it, and the voltage output from the DMM was recorded as the intensity from the ambient light for that measurement. The stage was then unclamped and moved to the next z-value, and this process was repeated for 26 different values of z, starting at z=36.5mm and incrementing z upwards by ~4mm for the first ten measurements, then by increments of ~6mm for the remaining measurements.
 
The data from these measurements can be found on the attached spreadsheet.
 
A few notes on the experiment:
 
The vernier micrometer has a measurement limit of 13.5mm. After the tenth measurement, the measured xm values began to exceed this limit. It was therefore necessary to translate the ruler in the negative x-direction without translating it in the z-direction. Plates were clamped snugly to either side of the ruler such that the ruler could not be translated in the z-direction, but could be moved in the x-direction when the ruler was unclamped. After securing these plates, the ruler was moved in the negative x-direction by approximately 5mm. The ruler was then clamped securely in place at its new x location. In order to better estimate the actual x-translation of the ruler, I took the following series of measurements: I moved the stage to z-values at which sets of measurements were previously taken. Then, I moved the razor out of the beam path and carefully moved it back inwards until the output on the DMM matched exactly the DMM output of the first measurement taken previously at that z-value. The xm value corresponding to this voltage was then read. The translation of the stage should be approximately equal to the difference of the measured xm values for that DMM voltage output at that z-value. This was done for 8 z-values, and the average difference was found to be 4.57+-0.03mm, which should also be the distance of stage translation (this data and calculation is included in the "x translation" sheet of the attached excel workbook).
 
At this same point, I started using two clamps to attach the translational stage to the table for each measurement set, as I was unhappy with the level of secureness which one clamp provided. I do not, however, believe that the use of one clamp compromised the quality of previous sets of measurements.

 

RESULTS:


A MATLAB function 'gsbeam.m' was written to replicate the function Ipd(xm) = (pi/4)*A*w^2*erfc(sqrt(2)*(x0-xm)/w) given above,
and then another function 'beamdata.m' was written to input each dataset, automatically fit the data from each set to a curve of this functional form, and then output PDF files plotting all of the fit curves against each other, each individual fit curve against the data from that measurement, and the widths w as a function of z. Linear regression was done on w against z to find the slope of w(z) (the plot clearly shows that the beam was measured in the far-field, where w is approximately a linear function of z). An array of the z-location of the ruler, the fit parameters A, x0, and w, and the 2-norm of the residual of each fit is also outputted, and is shown below for the experimental data:

 

z(ruler) [mm]   A   x0 [mm]   w [mm]   2-norm of residual
36.5 7.5915 11.089 0.8741 0.1042
39.9 5.2604 11.1246 1.048 0.1013
44 3.8075 11.1561 1.2332 0.1164
48 2.777 11.1628 1.4479 0.0964
52 2.1457 11.1363 1.6482 0.1008
56 1.6872 11.4206 1.858 0.1029
60 1.3831 11.2469 2.0523 0.1021
64 1.1564 11.1997 2.2432 0.1059
68 0.972 11.1851 2.4483 0.0976
72 0.8356 11.1728 2.6392 0.1046
78 0.67 6.8821 2.9463 0.0991
84 0.5559 6.7548 3.2375 0.1036
90 0.4647 6.715 3.5402 0.0958
96 0.3993 6.7003 3.8158 0.1179
112 0.2719 6.8372 4.6292 0.0924
118 0.2398 6.7641 4.925 0.1029
124 0.2117 6.7674 5.2435 0.1002
130 0.189 6.8305 5.5513 0.0965
136 0.1709 6.8551 5.8383 0.1028
142 0.1544 6.8243 6.1412 0.0981
148 0.1408 6.7993 6.4313 0.099
154 0.1286 6.8062 6.7322 0.0948
160 0.1178 6.9059 7.0362 0.1009
166 0.1089 6.904 7.3178 0.0981
172 0.1001 6.8817 7.6333 0.1025
178 0.0998 6.711 7.6333 0

 

All outputted PDFs are included in the attached .zip file. The MATLAB functions themselves are also attached. The plots of the fit curves and the plot of the widths vs. the ruler location are also included below:

 

(note that I could probably improve on the colormap that I chose for this. note also that the 'gap' is because I temporarily forgot how to add integers while taking the measurements, and thus went from 96mm on the ruler to 112mm on the ruler despite going by a 6mm increment otherwise in that range. Also, note that all of these fit curves were automatically centered at x=0 for the plot, so they wouldn't necessarily intersect so neatly if I tried to include the difference in the estimated 'beam centers')

(note that the width calculated from the 26th measurement is not included in the regression calculation or included on this plot. The width parameter was calculated as being exactly the same as it was for the 25th measurement, despite the other parameters varying between the measurements. I suspect that the beam size was starting to exceed the dimensions blocked by the razor and that this caused this problem, and that would be easy to check, but I have yet to do it. Regardless, the fit looks good from just the other 25 measurements)

These results are as expected: the beam spot size should increase as a function of z, and should do so linearly in the far-field. My next step will be to use the results of this experiment to calculate the properties of the SLED beam, characterizing the beam and thus enabling me to predict its behavior within further optical systems.

 

Attachment 1: BeamData.xlsx
Attachment 2: beam_pdfs.zip
Attachment 3: beamdata.m
function D=beamdata(M,guess)
%Imports array of beam characterization measurements. Structure of M is 
% [z, x, I, a] where z is the displacement of the beam blockage along
% the optical axis, x is the coordinate of razor edge, I is the measured
% output of the photodetector and a is the ambient light level
%and guess is an estimate of the parameters [Amplitude x0 width] for the
%first measurement
%Output Structure [z A x0 w residual_2norm]
thisfile=mfilename('fullpath');
thisdir=strrep(thisfile,mfilename(),'');
... 105 more lines ...
Attachment 4: gsbeam.m
function I=gsbeam(x,xdat)
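%GSBEAM razor-scan model: Ipd(xm) = (pi/4)*A*w^2*erfc(sqrt(2)*(x0-xm)/w)
%x(1) = amplitude A, x(2) = beam center x0, x(3) = spot size w
%xdat = razor edge position(s) xm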
I=pi/4*x(1)*x(3)^2*erfc(sqrt(2)*(x(2)-xdat)/x(3));
end
  59 | Fri Jun 25 10:47:08 2010 | Aidan | Misc | aLIGO Modeling | Uploaded aLIGO axicon+ITM COMSOL model to the 40m SVN

I added a COMSOL model of the aLIGO ITM being heated by an axicon-formed annulus to the 40m SVN. The model assumes a fixed input beam size into an axicon pair and then varies the distance between the axicons. The output is imaged onto the ITM with varying magnifications. The thermal lens is determined in the ITM and added to the self-heating thermal lens (assuming 1W absorption, I think - need to check). The power in the annulus is varied until the sum of the two thermal lenses scatters the least amount of power out of the TEM00 mode of the IFO.

https://nodus.ligo.caltech.edu:30889/svn/trunk/comsol/TCS/aLIGO/

The results across the parameter space (axicon separation and post-axicon-magnification) are attached. These were then mapped from this space to the space of annulus thickness vs annulus diameter, (see elog here).

 

 

Attachment 1: aLIGO_axicon_spacing_post-magnification_optimization.jpg
aLIGO_axicon_spacing_post-magnification_optimization.jpg
  60 | Fri Jun 25 10:59:43 2010 | Aidan | Misc | aLIGO Modeling | Uploaded aLIGO axicon+ITM COMSOL model to the 40m SVN

Here are the results in the annulus thickness vs annulus diameter space ...

Quote:

(Entry 59 quoted in full; see above.)

 

Attachment 1: Screen_shot_2010-06-25_at_11.01.38_AM.png
Screen_shot_2010-06-25_at_11.01.38_AM.png
  61 | Wed Jun 30 00:00:13 2010 | Kathryn and Won | Computing | Hartmann sensor | rms of centroid position changes

Given below is a brief overview of calculating the rms of spot position changes, to test the accuracy/precision of the centroiding code. Centroids are obtained by summing over an array of size 30 by 30 around each peak pixel, as opposed to the old method of using MATLAB built-in functions only. Peak pixel positions were still obtained using a built-in MATLAB function. Please see the code detect_peaks_bygrid.m for a bit more detail.
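A minimal sketch of that windowed-centroid idea (the helper function and window convention here are my own guesses for illustration; detect_peaks_bygrid.m in the attached zip is the actual implementation):

function [rc,cc] = window_centroid(img,rp,cp)
%WINDOW_CENTROID intensity-weighted centroid of the 30x30 window around
%peak pixel (rp,cp) -- hypothetical helper, for illustration only
rows = rp-15:rp+14;  cols = cp-15:cp+14;   % the 30x30 window
win = double(img(rows,cols));
[C,R] = meshgrid(cols,rows);               % pixel coordinate grids
rc = sum(R(:).*win(:))/sum(win(:));        % weighted row position
cc = sum(C(:).*win(:))/sum(win(:));        % weighted column position
end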

 

My apologies for the codes not being well modularised and being a bit messy...

 

Please unzip the attached file to find the matlab codes.

The rest of this log is mainly put together by Kathryn.

 

Won

 

(EDIT/PS) The attached codes were run with raw image data saved on the hard disk, but it should be relatively easy to edit the script to use images acquired in real time. We have yet to play with real-time images, and are still operating under Windows XP...

---
When calculating the rms, the code outputs the results of two
different methods. The "old" method uses the built-in matlab
functions, while the "new" method is one Won constructed, which seems
to give results closer to the expected value. In calculating
and plotting the rms, the following codes were used:

- centroid_statics_raw_bygrid.m (main script run to do the analysis)
- process_raw.m (takes raw image data and converts them into 2D array)
- detect_peaks_bygrid.m (returns centroids obtained by old and new methods)
- shuffle.m (used to shuffle the images before averaging)

The reference image frame was obtained by averaging 4000 image frames;
the test image frames were obtained by averaging 1, 2, 5, 10 ... 500,
1000 frames respectively, drawn from the remaining 1000 images.

In order to convert rms values in units of pixels to wavefront
aberration, do the following:

aberration = rms * pixel_width * hole_spacing / lever_arm

pixel_width: 12 micrometer
hole_spacing: about 37*12 micrometer
lever_arm: 0.01 meter

rms of 0.00018 roughly corresponds to lambda over 10000.
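As a quick numerical check of this conversion (a sketch; the ~830nm SLED wavelength is assumed):

rms_px       = 0.00018;               % rms spot displacement [pixels]
pixel_width  = 12e-6;                 % [m]
hole_spacing = 37*12e-6;              % [m]
lever_arm    = 0.01;                  % [m]
aberration   = rms_px*pixel_width*hole_spacing/lever_arm   % ~9.6e-11 m
aberration/830e-9                                          % ~1.2e-4, i.e. roughly lambda/10000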

Note: In order to get smaller rms values the images had to be shuffled
before taking averages. By setting shuffle_array (in
centroid_statics_raw_bygrid.m) to be false one can
turn off the image array shuffling.

N_av        rms

1     0.004018866673087
2     0.002724680286563
5     0.002319477846009
10    0.001230553835673
20    0.000767638027270
50    0.000432681002432
100   0.000427139665006
200   0.000270955332752
500   0.000226521040455
1000  0.000153760240692

fitted_slope = -0.481436501422376 (close to the -0.5 expected if the centroid noise averages down as 1/sqrt(N))
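(A sketch of how this slope can be extracted from the table above, assuming the two columns are stored in vectors N_av and rms:)

N_av = [1 2 5 10 20 50 100 200 500 1000];
rms  = [0.004019 0.002725 0.002319 0.001231 0.000768 0.000433 0.000427 0.000271 0.000227 0.000154];
p    = polyfit(log10(N_av),log10(rms),1);   % p(1) is the fitted slope, ~ -0.48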

Here are some plots:

rms_plot_shuffle.jpg
rms_plot_noshuffle.jpg

---

Next logs will be about centroid testing with simulated images, and wavefront changes due to the change in the camera temperature!

(PS) I uploaded the same figure twice by accident, and the site does not let me remove a copy!...

Attachment 2: rms_plot_shuffle.jpg
rms_plot_shuffle.jpg
Attachment 4: eLOG.zip
  62   Thu Jul 1 09:40:13 2010 James KunertMiscHartmann sensorSURF Log 8 -- more SLED characterization

As I started setting up my next experiment, I noticed that the beam size from the SLED appeared to be larger than expected from previous analysis. It was therefore necessary to conduct further experiments to characterize the divergence angle of the beam.

First, I set up the photodetector and the SLED, and mounted a razor blade on a translational stage, in the same manner as done previously. All of these components were the same ones used in the previous beam size experiment. The only differences in the apparatus were as follows: first, the photodetector was placed considerably closer to the SLED source than previously. Second, a different lens was used to focus the light onto the photodetector: lens LX082 from the lens kit, a one-inch lens of focal length f=50.20mm.

Experiment 1: Collimated Beam Size Measurement

Before repeating the previous experiment, the following experiment was done: the beam was collimated by placing the lens 50.20mm away from the source and then adjusting until collimation was observed. Collimation was confirmed by setting a mirror in the optical path of the beam, directing it to the other side of the room. The position of the lens along the optical axis was adjusted until the beam exiting the lens did not change in size across the length of the table and appeared to be roughly the same size as the spot on the opposite side of the room (as gauged roughly by the apparent size on an IR card and through an IR viewer).

Then, the translational stage onto which the razor was mounted was placed after the lens, against the ruler clamped to the table, and the beam size was measured using the same experimental procedure used to find the width in the previous experiment. The only variation in the procedure was that measurements were not taken strictly at 0.5V intervals; rather, 28 readings were taken across the full range of intensity outputs. The following measurements were collected:


x(mm)    V(V)
13.00    7.25
12.00    7.24
10.80    7.18
10.15    7.09
9.50     6.92
9.30     6.86
9.00     6.74
8.75     6.61
8.50     6.47
8.25     6.31
8.00     6.12
7.75     5.92
7.50     5.69
7.30     5.49
7.15     5.33
7.00     5.17
6.75     4.88
6.50     4.58
6.25     4.27
6.00     3.95
5.75     3.63
5.50     3.32
5.25     3.02
5.00     2.729
4.60     2.306
4.50     2.205
4.25     1.981
4.00     1.770
ambient  0.585

When fit to gsbeam.m using lsqcurvefit, this yielded a width of 4.232mm. Since the beam is collimated by the lens, we know that the lens is approximately f=50.2mm from the source. Thus the divergence angle is approximately 4.232/50.2 ~ 0.084 rad.

At this point, to double-check that the discrepancy between this value and the previous experiment was not the result of a mistake in the function, I wrote a simpler function, 'manualbeam.m' (attached), which goes through the steps of using lsqcurvefit and plotting the fit curve against the data automatically, fitting a curve to one set of data at a constant z-value. Using this one-by-one on each z-value in the previous experiment showed that the slope of the widths was still ~0.05, so the discrepancy was not the result of a mistake in the previous function.

Experiment 2: Blocked Beam Analysis 2

I then placed the razor before the lens in the beampath and repeated the previous experiment exactly. See the previous eLog for details on experimental procedure. Sets of measurements were taken at 6 different z-values, and widths were found using manualbeam.m in MATLAB. A curve of the calculated widths versus the z-position of the stage on the ruler is below:

BeamSpot_Exp3.tif

Note that this appears to be consistent with the first experiment.

Experiment 3: Direct Beam Measurements on CCD

The front-plate of the Hartmann sensor was replaced with the new invar design (on a related note, the thread on the front plate needs a larger chamfer). In doing this, the Hartmann plate was removed. The sensor was moved much closer to the SLED along the optical axis, and an optical filter of OD 0.7 was screwed into the new frontplate. This setup allows for the direct imaging of the intensity of the beam, as shown below:

directbeam.PNG

The spots and distortions on the image are from dust and other blemishes on the optical filter, as was confirmed by rotating the filter and observing the subsequent rotation of each feature.

Note that in some images there may be a jump in intensity in the middle of the image. This is believed to be due to an inconsistent gain between the two sides of the image.

The mean intensity of each row and of each column will follow a Gaussian profile, and thus can be fit to a Gaussian using lsqcurvefit. Function 'gauss_beam1D.m' was written for this model, and it is fit to the data using function 'autogaussfit1', which automatically imports the data from .raw files, fits Gaussians to the means of each row and column, and plots everything.
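(A minimal sketch of this fitting step, assuming the image has already been imported into a 1024x1024 double array img; gauss_beam1D.m is attached below:)

colmeans = mean(img,1);                                          % mean intensity of each column
xdata    = 1:length(colmeans);
guess    = [min(colmeans) max(colmeans)-min(colmeans) 512 100];  % [offset amp x0 w] starting point
p        = lsqcurvefit(@gauss_beam1D,guess,xdata,colmeans);
wx       = p(4);                                                 % fitted column width [pixels]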

An example of the fit for the means of the columns of one image is as follows:

beamfit100mm.PNG

 And for the rows:

beamfit100mmRows.PNG

Note that for all the fits, the fit generally looks a little better along the rows than along the columns (as is true here as well).

The following procedure was used to calculate the change of the beam width as a function of distance: the left edge of the base of the Hartmann sensor was measured against a ruler which was clamped to the table. The ruler position z was recorded. Then, preliminary images would be taken and the exposure time would be adjusted as needed. The exposure time was then noted. Then, an image was taken and curves were fit to it, and the width was calculated. This was done for 15 different positions of the Hartmann sensor along the optical axis.

The calculated widths vs. displacements plot from this can be seen below:

DirectBeams1.tif

Note that the row width and column width are not the same, implying that the beam is not circularly symmetric and is thus probably slightly misaligned. Also, the calculated slopes differ from the value of ~0.085 acquired from the previous two measurements. Further investigation into the beam size and divergence angle is required to finally put this question to rest.

Attachment 6: manualbeam.m
function x=manualbeam(M,guess)
%Fits a single knife-edge scan (constant z) to the gsbeam erfc model.
%M columns: [z, x, I, a]; guess = [Amplitude x0 width]
    x=lsqcurvefit(@gsbeam,guess,M(:,2),M(:,3)-M(:,4));
    figure
    hold on
    grid on
    xlabel('Stage Translation (mm)')
    ylabel('Photodetector Output (V)')
    text(0.8,0.2,['A = ' num2str(x(1))],'FontSize',12,'Interpreter','none','Units','normalized');
    text(0.8,0.15,['x0 = ' num2str(x(2))],'FontSize',12,'Interpreter','none','Units','normalized');
    text(0.8,0.1,['w = ' num2str(x(3))],'FontSize',12,'Interpreter','none','Units','normalized');
... 7 more lines ...
Attachment 7: gauss_beam1D.m
function result = gauss_beam1D(x0, xdata)
% x0(1) = offset
% x0(2) = amplitude
% x0(3) = x centroid location
% x0(4) = x width

result = x0(1) + x0(2).*exp(-2.0.*( ...
                           ((xdata - x0(3)).^2)/(x0(4).^2)));
                       
Attachment 8: autogaussfit1.m
function [x y wx wy]=autogaussfit1(imgname,guess,imgdetails)
%guess = [offset amplitude centroidlocation width]
%imgdetails = [HWSrulerlocation exp.time]
%output vectors same format as guess
thisfile=mfilename('fullpath');
thisdir=strrep(thisfile,mfilename(),'');

rulerz = imgdetails(1,1);
exposure = imgdetails(1,2);

... 56 more lines ...
  63   Sun Jul 4 06:45:50 2010 Kathryn and WonComputingHartmann sensoranalyzing the wavefront aberration

Happy Fourth of July!

The following is a brief overview of how we are analyzing the wavefront aberration and includes the aberration parameters calculated for 10 different temperature differences. So far we are still seeing the cylindrical power even after removing the tape/glue on the Hartmann plate. Attached are the relevant matlab codes and a couple of plots of the wavefront aberration.

We took pictures when the camera was in equilibrium at room temperature and then at each degree increase in camera temperature as we heated the room using the air conditioner. For each degree increase in camera temperature, we compared the spot positions at the increased temperature to the spot positions at room temperature. We used the following codes to generate the aberration parameters and make plots of the wavefront aberration:

-build_M.m (builds 8 by 8 matrix M from centroid displacements)
-wf_aberration_temperature_bygrid.m (main script)
-wf_from_parms.m (generates 2D aberration array from aberration parameters)
-intgrad2.m (generates 2D aberration array from an interpolated array of centroid displacements)

In order to perform the "inverse gradient" method to obtain contours, we first interpolated the centroid displacement vectors to generate a square array. As this array has some NaN (not a number) values, we cropped out some outer region of the array and used array values from (200,200) to (800,800). Sorry, we forgot to put that part of the code in wf_aberration_temperature_bygrid.m.
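(A minimal sketch of that interpolate-crop-integrate sequence, with assumed variable names (cx,cy for centroid positions, dx,dy for their displacements) and assuming the usual intgrad2(fx,fy,dx,dy,f11) calling convention of the attached intgrad2.m:)

[Xq,Yq] = meshgrid(1:1024,1:1024);
dxq = griddata(cx,cy,dx,Xq,Yq);       % interpolate displacements onto a square grid
dyq = griddata(cx,cy,dy,Xq,Yq);
dxc = dxq(200:800,200:800);           % crop the NaN-padded outer region
dyc = dyq(200:800,200:800);
W   = intgrad2(dxc,dyc,1,1,0);        % integrate the gradient to recover the wavefront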

The main script wf_aberration_temperature_bygrid.m needs to be revised so that the sign conventions are less confusing... We will update the code later.

The initial and final temperature values are as follows:

 

          Hand-held   Digitizer Board   Sensor Board
Initial   30.8        44.4              36.0
Final     40.8        51.2              43.2

(all temperatures in Celsius)

 

Aberration parameters:


1) Comparing high temp (+10)  with room temp


        p: 1.888906773203923e-004
       al: -0.295042766811686
      phi: 0.195737681653530
        c: -0.001591869846958
        s: -0.003826146141562
        b: 0.098283157674967
       be: -0.376038636781319
        a: 5.967617809296910


2) Comparing +9 with room temp


        p: 1.629083055002727e-004
       al: -0.222506109890745
      phi: 0.193334452094940
        c: -0.001548838746542
        s: -0.003404217451916
        b: 0.091368295953142
       be: -0.351830698303612
        a: 5.764068008962653


3) Comparing +8 with room temp


        p: 1.485283322069376e-004
       al: -0.212605187544093
      phi: 0.206716196097728
        c: -0.001425962488852
        s: -0.003148796701331
        b: 0.089936286297599
       be: -0.363538909377296
        a: 5.546514425485094


4) Comparing +7 with room temp


        p: 1.284124028380585e-004
       al: -0.163672705473379
      phi: 0.229219952949728
        c: -0.001452457146947
        s: -0.002807207555944
        b: 0.084090100490331
       be: -0.379195428095102
        a: 5.289173743478881


5) Comparing +6 with room temp


        p: 1.141756950753851e-004
       al: -0.149439038317734
      phi: 0.240503450300707
        c: -0.001350015836130
        s: -0.002529240946848
        b: 0.078118977034120
       be: -0.326704416216547
        a: 4.847406652448727


6) Comparing +5 with room temp


        p: 8.833496828581757e-005
       al: -0.071871278822766
      phi: 0.263210114512376
        c: -0.001257787180513
        s: -0.002095618522105
        b: 0.069587080420443
       be: -0.335912998511077
        a: 4.542557551218057


7) Comparing +4 with room temp


        p: 6.217428324604411e-005
       al: 0.019965235199575
      phi: 0.250991433584904
        c: -0.001266061216964
        s: -0.001568527823273
        b: 0.058323732750548
       be: -0.289315790283207
        a: 3.957825468583509


8) Comparing +3 with room temp


        p: 4.781068895714900e-005
       al: 0.140720713391208
      phi: 0.270865276786418
        c: -0.001228146894728
        s: -0.001371110045136
        b: 0.052794990899554
       be: -0.273968130963666
        a: 3.591187350052610


9) Comparing +2 with room temp


        p: 2.491163442408281e-005
       al: 0.495136135872766
      phi: 0.220727346409557
        c: -9.897729773516012e-004
        s: -0.001076008621974
        b: 0.048467660428427
       be: -0.280879088681660
        a: 3.315430577872808


10) Comparing +1 with room temp


       p: 8.160828332639811e-006
      al: 1.368853902659128
     phi: 0.116300954280238
       c: -6.149390553733007e-004
       s: -3.621216621887707e-004
       b: 0.025454969698557
      be: -0.242584267252882
       a: 1.809039775332749

The first plot shows the wavefront aberration obtained by integrating its gradient; the second plot is reconstructed from the fitted aberration parameters, and so is smoother since it is an approximation.


Attachment 1: eLOG.zip
Attachment 2: wf_aberration_plot_hightemp_byintegration.jpg
wf_aberration_plot_hightemp_byintegration.jpg
Attachment 3: wf_aberration_plot_hightemp_fitted.jpg
wf_aberration_plot_hightemp_fitted.jpg
  64   Tue Jul 6 21:57:19 2010 James KunertMiscHartmann sensorSURF Log -- SLED fiber output temporal analysis

In the previous log, I describe the direct measurement of the fiber output beam using the Hartmann sensor with the plate removed. In order to analyze how these properties might change as a function of time, we left the camera running over the holiday weekend, Dr. Brooks having written a bash script which took images from the sensor every 500 seconds. This morning I wrote a MATLAB script to automatically analyze all of these images and plot the fit parameters as a function of time (weekendbeamtime.m, attached). Note that the formatting of a few of the following graphs was edited manually after being output by the program (just to note why the plots look different than the code might imply).

The following plots were made:

Amplitude as a function of time:

Amplitude.png

Amplitude again, focused in with more analysis:

Amplitude2.png

 

Offset level:

Offset.png

 

Beam Size:

BeamSize.png

 

Centroid Displacement (note the axis values, it's fairly zoomed in):

CentroidDisplacement.png

Note that these values were converted into radians by approximating the fiber-output-to-CCD distance and dividing the displacement (after converting from pixels to mm) by this distance. The distance was approximated by assuming a divergence angle of 0.085 and a beam size of ~5.1mm (a value in between the horizontal and vertical beam sizes calculated). This gave a value of ~60mm, which was confirmed as plausible by a quick examination in the lab.
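(The conversion, as a one-line sketch with the numbers above:)

d_px  = 1;                   % centroid displacement [pixels]
theta = d_px*12e-6/0.060     % 12um pixels over a ~60mm lever arm: ~2e-4 rad per pixel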

In the first three plots, there are obvious transient effects which cause the values to fluctuate much more rapidly than they do for the rest of the duration. It is suspected that this could be related to temperature changes within the sensor as the camera begins taking images. Further investigation (tomorrow) will examine these effects while collecting temperature data.

Attachment 1: weekendbeamtime.m
function [X Y wX wY]=weekendbeamtime(basename,guess,N)
%fits 1D gaussian curve to N images in sequence of basename basefile
thisfile=mfilename('fullpath');
thisdir=strrep(thisfile,mfilename(),'');

i=0;
X=[];
Y=[];
wX=[];
wY=[];
... 82 more lines ...
  65   Thu Jul 15 20:06:37 2010 James KMiscHartmann sensorSURF Log: Thermally Induced Defocus Experiments

A quick write-up on recent work can be found at: Google Docs

 

I can't find a TeX interpreter or any other sort of equation editor on the eLog, which is why I kept it on Google Docs for now instead of transferring it over.

 

--James

 

Attachment 1: pydefoc.m
function slopes=pydefoc(mainout,N,tol)
[x,y,dx,dy,time,M,centroids]=pyanalyze(mainout,N);
slopes=xdxslope(x,dx,time,tol);
Attachment 2: pyanalyze.m
function [x,y,dx,dy,time,M,centroids]=pyanalyze(mainout,N)
n=0;
centroids=[];
while n<N
    display(['Image Number ' num2str(n)])
    if n==0
        I=pyimportsingle(mainout,n);
        cent=centroid_image(I);
    else
        I=pyimportsingle(mainout,n);
... 22 more lines ...
Attachment 3: pyimportsingle.m
function I=pyimportsingle(mainoutbase,Nimg)
%Imports mainoutbase.raw generated by python framesumexport2 and outputs
%Nth image matrix I

mainout=['/opt/EDTpdv/' mainoutbase '.raw'];
%open main output array and seek to the Nth image
%(each 1024x1024 image is stored as single-precision floats, 4 bytes each)
fid = fopen(mainout,'r');
fseek(fid,4*1024*1024*(Nimg-1),'bof');
arr = fread(fid,1024*1024,'float');

... 16 more lines ...
Attachment 4: pyimportM.m
function M=pyimportM(mainoutbase,N)
maintxt=['/opt/EDTpdv/' mainoutbase '.txt'];
fid=fopen(maintxt);
str=fread(fid,'*char');
fclose(fid);
str=str';
str=strrep(str,'Camera Temperature on Digitizer Board:','');
str=strrep(str,'Camera Temperature on Sensor Board:','');
str=strrep(str,' Celsius','');
str=strrep(str,'Start Time','');
... 14 more lines ...
Attachment 5: framesumexport2.py
#!/bin/python

import array
import os
import time
#Number of loops LoopN over which to sum SumN images of dim num_H by num_W
LoopN=100
SumN=200
num_H = 1024
num_W = 1024
... 94 more lines ...
  66   Tue Jul 20 15:45:51 2010 AidanComputingGeneralAdd fixed IP addresses to local machines in TCS lab

http://nodus.ligo.caltech.edu:8080/AdhikariLab/859

  67   Tue Jul 20 18:13:06 2010 AidanComputingGeneralAdded TCS channels to frame builder

 http://nodus.ligo.caltech.edu:8080/AdhikariLab/860

 contents of tcs_daq: /target/TCS_westbridge.db

grecord(ai,"C4:TCS-ATHENA_ADC0")
{
field(DTYP,"ATHENA")
field(INP,"#C0 S0")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC1")
{
       field(DTYP,"ATHENA")
field(INP,"#C0 S1")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC2")
{
        field(DTYP,"ATHENA")
        field(INP,"#C0 S2")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC3")
{
        field(DTYP,"ATHENA")
        field(INP,"#C0 S3")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC4")
{
        field(DTYP,"ATHENA")
        field(INP,"#C0 S4")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC5")
{
        field(DTYP,"ATHENA")
        field(INP,"#C0 S5")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC6")
{
        field(DTYP,"ATHENA")
        field(INP,"#C0 S6")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC7")
{
        field(DTYP,"ATHENA")
        field(INP,"#C0 S7")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC8")
{
        field(DTYP,"ATHENA")
        field(INP,"#C0 S8")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC9")
{
        field(DTYP,"ATHENA")
        field(INP,"#C0 S9")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC10")
{
        field(DTYP,"ATHENA")
        field(INP,"#C0 S10")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC11")
        field(DTYP,"ATHENA")
        field(INP,"#C0 S11")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC12")
        field(DTYP,"ATHENA")
        field(INP,"#C0 S12")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC13")
        field(DTYP,"ATHENA")
        field(INP,"#C0 S13")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC14")
        field(DTYP,"ATHENA")
        field(INP,"#C0 S14")
        field(SCAN,".1 second")
}
grecord(ai,"C4:TCS-ATHENA_ADC15")
        field(DTYP,"ATHENA")
        field(INP,"#C0 S15")
        field(SCAN,".1 second")
}
grecord(ao,"C4:TCS-ATHENA_DAC0")
{
        field(DTYP,"ATHENA")
        field(OUT,"#C0 S0")
        field(HOPR,"32768")
        field(LOPR,"-32768")
}
 
grecord(ao,"C4:TCS-ATHENA_DAC1")
{
        field(DTYP,"ATHENA")
        field(OUT,"#C0 S1")
}
grecord(bi,"bi0")
{
        field(SCAN,".1 second")
field(DTYP,"ATHENA")
field(INP,"#C0 S8")
        field(ZNAM,"zero")
        field(ONAM,"one")
}
grecord(bo,"C4:TCS-ATHENA_BO0")
{
        field(DOL,"HeartBeat")
        field(OMSL,"closed_loop")
        field(DTYP,"ATHENA")
        field(OUT,"#C0 S1")
        field(ZNAM,"zero")
        field(ONAM,"one")
}
grecord(bo,"C4:TCS-ATHENA_BO1")
{
        field(DOL,"LevelAlarm")
        field(OMSL,"closed_loop")
        field(DTYP,"ATHENA")
        field(OUT,"#C0 S2")
        field(ZNAM,"zero")
        field(ONAM,"one")
}
grecord(bo,"C4:TCS-ATHENA_BO2")
{
        field(DOL,"Pressure_OK")
        field(OMSL,"closed_loop")
        field(DTYP,"ATHENA")
        field(OUT,"#C0 S3")
        field(ZNAM,"zero")
        field(ONAM,"one")
}
 
 

 

  68   Thu Jul 22 11:02:59 2010 AidanComputingGeneralRestarted hartmann machine

hartmann had started responding to log-in requests with a request to change the password. Attempts to change the password proved unsuccessful. I tried to access the machine locally to change the password, but I couldn't get the display started, so I had to reboot it.

 

 

  69   Thu Jul 22 21:46:55 2010 James KunertMiscHartmann sensorHartmann Sensor Thermal Defocus Measurement Noise & Ambient Light Effects

As discussed during the teleconference, a series of experiments have been conducted which attempt to measure the thermally induced defocus in the Hartmann sensor measurement. However, there was a limiting source of noise which caused a very large displacement of the centroids between images, making the images much too noisy to properly analyze.

The general setup of this series of experiments is as follows: the fiber output from the SLED was mounted about one meter away from the Hartmann sensor. No other optics were placed in the optical path. Everything except for the Hartmann sensor was enclosed (a box was constructed out of wall segments and posterboard, with a hole cut in the end which allowed the beam to propagate into the sensor. The sensor was a short distance from the end of the box, less than a centimeter. There was no obvious difference in test images taken with the lights on and the lights off, which previously suggested to me that ambient light would not have a large effect). Temperature variations in the sensor were induced by changing the set temperature of the lab with the thermostat. A python script was used to take cumulative sums of 200 images (taken at 11Hz) every ~5 minutes.

This overly large centroid displacement appeared only in certain areas of the images. However, changing the orientation of the plate appeared to change the regions which were noisy. That is, if the orientation of the Hartmann plate was not changed between measurements, the noise would appear in the same regions in consecutive experiments (even in experiments conducted on different days). However, if the orientation of the Hartmann plate was changed between measurements, the noise would appear in a different region in the next experiment. This suggests that the noise is perhaps due to a physical phenomenon which would change with the orientation of the plate.

There were a few hypotheses which attempted to explain this noise but were shown to not be the likely cause. I hypothesized that the large thermal expansion coefficient of the aluminum camera housing could be inducing a stress on the invar frontplate, causing the Hartmann plate to warp. This hypothesis was tested by loosening the screws which attach the front and back portion of the frontplate (such that the Hartmann plate was not strongly mechanically coupled with the rest of the frontplate) and running another iteration of the experiment. The noisy regions were seen to still appear, indicating that thermally induced stress was not the cause of the distortion. Furthermore, experiments done while the sensor was in relative thermal equilibrium over long periods still showed noisy regions, and there was no apparent correlation between noise magnitude and sensor temperature for any experiment, indicating that thermal effects in general were not responsible.

Another suspected cause was the increased noise at intensity levels of 128 (as discussed in a previous eLog). However, it was observed that there was no apparent difference in the prevalence of 128-count pixels between the noisy regions and the cleaner regions, indicating that this was not the cause either.

A video was made which shows vector plots of centroid displacements for each summed image relative to the first image taken in an experiment, and was posted as an unlisted youtube video at: http://www.youtube.com/watch?v=HUH1tHRr98I

The length of each vector in the video is proportional to the magnitude of the displacement. The localization of the noise can be seen. Notice also the sudden appearance and disappearance of the noise at images 19 and 33, indicating that the cause of the noise is relatively sudden and does not vary smoothly.

Another video showing a logarithmic plot of the absolute value of the difference of each image from the first image (for the same experiment as previous) can be seen here: http://www.youtube.com/watch?v=_CiaMpw9Ig0

Notice there are jumps in the background level which appear to correspond with the disappearance and appearance of the noisy regions in the centroids (at images 18 and 32) (I forgot to manually set the framerate on these last three .avi's, so they go by a little too quickly, but it's still all there). The one-image delay between the intensity shift and centroid noise shift is perhaps related to the fact that the analysis uses the previous image centroids as the reference to find the new image centroid locations.

A video showing histograms of the intensity of each pixel in an image (within the intensity range of 50 and 140 in the averaged summed-image) for this same experiment can be seen at: http://www.youtube.com/watch?v=MogPd-vaWn4

Notice that the peak of the distribution corresponding to the background appears to shift by ~5 counts at images 18 and 32.

 

An experiment was then done which had the exact same procedure except that it was done at a stabilized lab temperature and with the SLED turned off, such that only the background appears in each image. A logarithmic plot of the absolute value of the difference in intensity at each pixel for each image can be seen at: http://www.youtube.com/watch?v=Y66wL5usN18

Other work was being done in the lab throughout the day, so the lights were on for every image but one. I made a point of turning off the lights while the 38th image was being taken. The framerate of the linked video is unfortunately a little too fast to really see what goes on (I adjusted the framerate while viewing it in MATLAB but forgot to do so for the AVI), but you can clearly see a major change in the image during the 38th image, and during that image only (it looks like a red 'flash' at the 38th frame, near the very end). The only thing that was changed while taking this specific image was the ambient light level, so this major difference must be due to ambient light. A plot of the difference between images 38 and 1 is shown below:

72210b_ambient.png

Note that the maximum difference between the images is 1107 levels, which for the 200 images in each summed image corresponds to an average shift of ~5.5 levels. This is of a very similar magnitude to the shift that can be seen in the histogram of the previous experiment. This suggests that changes in ambient light levels are perhaps somehow responsible for the noisy regions of the image. Note also the non-uniformity of the ambient light; such a non-uniform change could certainly shift the centroid positions.

One question is how, exactly, this change might have propagated into the analysis. The shape of the background level change appears to be very different from the shape of the noisy regions seen for this plate configuration. This is something which I need to examine further; this, combined with the fact that the changes in the noise appear to occur one image after the actual change in intensities, suggests to me that there could perhaps be some subtle things going on with my data analysis procedures which I don't currently fully understand.

Still, I highly suspect that ambient light is the root cause of the noisy regions. It would be a remarkable coincidence if the centroid displacement shift was not ultimately due to the observed intensity shift, or if the intensity shift was not due to a change in ambient light (since the intensity shift in the histogram analysis and ambient light change in the background analysis are observed to correspond to roughly the same magnitude of intensity change). I had initially suspected that effects from ambient light would be negligible since, while taking test images while setting up each experiment, the image did not appear to change based upon whether I had the lights on or off. I checked this a few times, but did not examine the images closely enough to be able to detect such a small non-uniform change in the intensity of each image.

If ambient light was responsible, this could also perhaps explain why the location of the noise appeared to depend on the orientation of the plate. The Hartmann plate would be in the optical path of any ambient light leaking in, so a change in the orientation of the plate could perhaps change the way that the ambient light was propagating onto the sensor (especially since the Hartmann plates are slightly warped and not perfectly planar). That's all purely speculation at this point, but it's something that I intend to investigate further.

I tried analyzing some previous data by subtracting part of the background, but was unsuccessful at reducing the noise in the results. I attempted to reduce the background in previous data by setting all values below a certain threshold equal to zero (before inputting the image into the centroiding function). However, the maximum threshold which I could use before getting an error message was ~130. If I set the threshold to, say, 135, I would receive an error from the centroiding function that the image was 'too dissimilar to the hex grid'. I did the analysis with a threshold of 130, but this still left random patches of background spaced between the spots in each image. The presence of only patches of background, as opposed to the complete background, actually increased the level of noise in the results by about a factor of 3. I would need to come up with a better method of subtracting the background level if I wanted to actually reduce the noise in this data.

The next step in this work, I think, will perhaps be to better enclose the system from ambient light to where I'm confident that it could have little or no effect. If noisy regions were not seen to appear after this was done, that would more or less confirm that ambient light was the cause of all this trouble. Hopefully, if ambient light is indeed the cause of the noise, reducing it will enable an accurate and reliable measurement of thermally induced defocus within the Hartmann sensor.

  70   Fri Jul 23 10:33:08 2010 AidanComputingHartmann sensorDalsa camera ADC 8th digitizer error

I plotted a histogram of the total intensity of the Hartmann sensor when illuminated and found that the 128-count problem extends all the way up through the distribution. This isn't unreasonable, since that bit of the digitizer is going to be exercised multiple times across the distribution.

First things first: a count value of 128 equals a 1 in the 8th bit, so as a 16-bit number in binary it looks like this: 0000 0000 1000 0000, i.e. hex code 080.

The values of the peaks in the attached distribution are as follows:

 

Number of counts   Hex code
128                080
384                180
640                280
896                380
1152               480
1408               580
1664               680
1920               780
2176               880
2432               980
2688               A80
2944               B80
3200               C80

 

Attachment 1: histogram_of_dalsa_intensity.pdf
histogram_of_dalsa_intensity.pdf
  71   Fri Jul 23 12:33:51 2010 AidanComputingHartmann sensorInvar clamp scatter

I illuminated the Hartmann sensor with the output of a fiber placed ~1m away.

I noticed that the illumination was not uniform, rather there was some sort of 'burst' or 'star' right near the center of the image. This turned out to be due to the Hartmann plate clamps - it disappeared when I removed those. It appears that there is scatter off the inner surface of the holes through the clamp plates. I'm not sure if it's from the front or back plates.

Needs further investigation ...

  72   Fri Jul 23 12:38:58 2010 AidanComputingHartmann sensorImages for Dalsa

Attached are the background and 80%-illumination (roughly spatially uniform) images that Dalsa requested.

Note that the gain of the taps does not appear to be balanced.

 

Attachment 1: dark_0000.jpg
dark_0000.jpg
Attachment 2: bright_0000.jpg
bright_0000.jpg
  73   Fri Jul 23 19:52:49 2010 AidanComputingHartmann sensorDalsa camera ADC 8th digitizer error

I've attached an image that shows the locations of those pixels that record a number of counts = (2*n-1)*128. 

The image is the sum of 200 binary images where pixels are given values of 1 if their number of counts = (2*n-1)*128 and 0 otherwise.
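(A sketch of how each binary image can be formed: counts of the form (2*n-1)*128 are exactly those equal to 128 modulo 256. The 1024x1024xN image stack variable imgs is an assumption:)

mask_sum = zeros(1024);
for k = 1:200
    mask_sum = mask_sum + (mod(imgs(:,:,k),256)==128);   % 1 where counts = (2n-1)*128
end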

The excess of counts is clearly coming from the left hand tap. This is good news because the two taps have independent ADCs and it suggests that it is only a malfunctioning ADC on the LHS that is giving us this problem.

Quote:

I plotted a histogram of the total intensity of the Hartmann sensor when illuminated and found that the 128-count problem extends all the way up through the distribution. This isn't unreasonable, since that bit of the digitizer is going to be exercised multiple times across the distribution.

First things first: a count value of 128 equals a 1 in the 8th bit, so as a 16-bit number in binary it looks like this: 0000 0000 1000 0000, i.e. hex code 080.

The values of the peaks in the attached distribution are as follows:

 

Number of counts   Hex code
128                080
384                180
640                280
896                380
1152               480
1408               580
1664               680
1920               780
2176               880
2432               980
2688               A80
2944               B80
3200               C80

 

 

Attachment 1: image-location-of-excess_pixel_count_pixels.jpg
image-location-of-excess_pixel_count_pixels.jpg
  74   Sat Jul 24 10:50:14 2010 AidanElectronicsHartmann sensorLab Temperature and HWS temperature: pre-indium

 Hour-long trend puts the lab temperature at 19.51C

Dalsa temperature:

 

Camera Temperature on Digitizer Board: 41.0 Celsius
Camera Temperature on Sensor Board: 32.9 Celsius
OK>
 
There is currently no Indium in the HWS.

 

  75   Sun Jul 25 16:24:56 2010 AidanComputingSLEDSuperlum SLED test integrated with DAQ - new channel names

 I added some new channels to the Athena DAQ that record the diagnostic channels from the Superlum SLED.

  • C4:TCS-ATHENA_I_SET_VOLTS:  - the set current signal in Volts (1V = 1A)
  • C4:TCS-ATHENA_I_ACTUAL_VOLTS:   - a signal proportional to the actual current flowing to the SLED (1V = 1A)
  • C4:TCS-ATHENA_I_LIM_VOLTS: - the current limit signal in volts (1V = 1A)
  • C4:TCS-ATHENA_TEMP_SENS_V:   - the signal from the on-board temperature sensor [thermistor] (1V = 10kOhm ?)
  • C4:TCS-ATHENA_PD_VOLTAGE: - the signal from the on-board photodetector (1V = 1A?)

The ioc that handles the EPICS channels is on tcs_daq(10.0.1.34) in /target/TCS_westbridge.db

The channels are added to the frame builder in /cvs/cds/caltech/chans/daq/C4TCS.ini

Currently, the driver for the SLED is ON but the current to the SLED is off. This is to check that the zero value of the PD_VOLTAGE signal doesn't wander.

Also, the input noise of the Athena is around +/- 10 counts (where 2^15 counts = 10V, i.e. ~0.3mV per count), which is a pretty poor 3mV.

  76   Mon Jul 26 09:42:30 2010 AidanComputingSLEDLong term test on SLED started - Day 0

 I set up the SLED to test its long term performance. The test began, after a couple of false starts, around 9:15AM this morning.

The output of the fiber-optic patch cord attached to the SLED is illuminating a photodetector. The zero-level on the PD was 72.7mV (with the lights on). Once the SLED was turned on, the output was ~5.50 +/- 0.01V. This is with roughly 900uW exiting the SLED.

The instructions from Superlum suggest limiting the amount of power coupled back into the fiber to less than 3%. With the current setup, the fiber is approximately 2" from the photodetector. What is the power coupled back into the fiber?

Assume a worst case of 100% of the light being reflected from the PD, a wavelength of 830nm, and a waist of about 6um radius at the output of the fiber. The beam size at 4" (from the fiber output to the PD and back again), or ~100mm from the fiber, is about 4.4mm radius. Therefore about (6um/4.4mm)^2, or ~2ppm, will be coupled back into the fiber. This is sufficiently small.
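(The same estimate as a short sketch:)

lambda = 830e-9;  w0 = 6e-6;  z = 0.100;   % wavelength, fiber mode radius, fiber-PD-fiber distance [m]
theta  = lambda/(pi*w0);                   % far-field divergence half-angle, ~0.044 rad
w      = w0*sqrt(1+(z*theta/w0)^2);        % beam radius at z, ~4.4mm
coupling = (w0/w)^2                        % ~2e-6, i.e. ~2ppm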

The attached plots from dataviewer show measurements from the SLED (on-board photodetector, on-board temperature sensor, current setpoint, current limit, current to diode) over the last 15 hours.

Attachment 1: SLED_superlum_long_term_test_0001A.pdf
SLED_superlum_long_term_test_0001A.pdf
Attachment 2: SLED_superlum_long_term_test_0001B.pdf
SLED_superlum_long_term_test_0001B.pdf
  77   Mon Jul 26 12:17:25 2010 AidanElectronicsHartmann sensorAdded Indium to HWS

I added some 0.004"-thick indium sheet between the copper heat spreaders and the heat sinks on the side of the HWS to try and improve the thermal contact. Once installed, the steady-state temperature of the sensor was the same as before. It's possible that the surface of the copper is even more uneven than 0.004".

 

 

Attachment 1: indium-01.jpg
indium-01.jpg
Attachment 2: indium-03.jpg
indium-03.jpg
Attachment 3: indium-05.jpg
indium-05.jpg
Attachment 4: indium-02.jpg
indium-02.jpg
Attachment 5: indium-04.jpg
indium-04.jpg
  78   Mon Jul 26 18:47:12 2010 James KMiscHartmann sensorHex Grid Analysis Errors and Thermal Defocus Noise

My previous eLog details how the noise in Hartmann Sensor defocus measurements appears to vary with ambient light. New troubleshooting analysis reveals that the rapid shifts in the noise were still related to the ambient light, sort of, but that ambient light is not the real issue. Rather, the noise was the result of some trouble with the centroiding algorithm.

The centroiding functions I have been using can be found on the SVN under /users/aidan/cit_centroid_code. When finding centroids for non-uniform intensity distributions, it is desirable to avoid simply using a single threshold level to isolate individual spots, as dimmer spots may be below this threshold and would therefore not be "seen" by the algorithm. The centroiding functions used here get around this issue by initially setting a relatively high threshold to find the centroids of the brighter spots, and then fitting a hexagonal close-packed array to these spots so as to be able to infer where the rest of the spots are located. Centroiding is then done within small boxes around each estimated centroid location (as determined by the hexagonal array). The functions "find_hex_grid.m" and "flesh_out_hex_grid.m" serve the purpose of finding this hexagonal grid. However, there appear to be bugs in these functions which compromise the ability of the functions to accurately locate spots and their centroids.
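(For illustration only, not the SVN code: a minimal sketch of the box-centroiding idea, computing an intensity-weighted centroid in a small box around an estimated spot location (xc,yc); all variable names here are assumptions:)

half  = 15;                                % half-width of the box around the estimated location
rows  = round(yc)+(-half:half);
cols  = round(xc)+(-half:half);
box   = img(rows,cols);
[X,Y] = meshgrid(cols,rows);
cx    = sum(sum(X.*box))/sum(box(:));      % intensity-weighted x centroid
cy    = sum(sum(Y.*box))/sum(box(:));      % intensity-weighted y centroid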

The centroiding error can be clearly seen in the following plot of calculated centroids plotted against the raw image from which they were calculated:

centerror.PNG

At the bottom of the image, it can be seen that the functions fail at estimating the location of the spots. Because of this, centroiding is actually being done on a small box surrounding each point which consists only of the background of the image. This can explain why these centroids were calculated to have much larger displacements and shifted dramatically with small changes in ambient light levels. The centroiding algorithm was being applied to the background surrounding each of these points, so it's very reasonable to believe that a non-uniform background fluctuation could cause a large shift in the calculated centroid of each of these regions.

It was determined that this error arose during the application of the hex grid by going through the centroiding functions step-by-step to narrow down where specifically the results appeared to be incorrect. The function's initial estimate for the centroids right before the application of the hex grid is shown plotted against the original image:

centinit.png

The centroids in this image appear to correspond well to the location of each spot, so it does not appear that the error arises before this point in the function. However, when flesh_out_hex_grid and its subfunction find_hex_grid were called, they produced the following hexagonal grid:

hexgrid.png

It can be seen in this image that the estimated "spot locations" (the intersections of the grid) near the bottom of the image differ from the actual spot locations. The centroiding algorithm is applied to small regions around each of these intersections, which explains why the calculated "spot centroids" appear at incorrect locations.

It will be necessary to fix the hexagonal grid fitting so as to allow for accurate centroiding over non-uniform intensity distributions. However, recent experiments in measuring thermally induced defocus produce images with a fairly uniform distribution. It should therefore be possible to find the centroids of the images from these experiments to decent accuracy by simply temporarily bypassing the hexagonal-grid fitting functions. To demonstrate this, I analyzed some data from last week (experiment 72010a). Without bypassing the hex-grid functions, analysis yielded the following results:

72010a.png

However, when hexagonal grid fitting was bypassed, analysis yielded the following:

72010a_nohex.PNG

The level of noise in the centroid displacement vs. centroid location plot, though still not ideal, is seen to decrease by nearly two orders of magnitude. This indicates that bypassing or fixing the problems with the hexagonal grid fitting functions should enable a more accurate measurement of thermally induced defocus in future experiments.

  79   Tue Jul 27 08:31:10 2010 AidanElectronicsSLEDLong term SLED test - Day 1

The measurement from the on-board PD of the Superlum SLED seems to be falling. This effect started around 5PM last night, which is right about the time we moved the position of the PD that the SLED illuminates (via optical fiber) on the optical table.

Curiously, the current set point and delivered current to the SLED are dropping as well. 

Attachment 1: SLED_superlum_long_term_test_0002A.pdf
SLED_superlum_long_term_test_0002A.pdf
  80   Wed Jul 28 18:32:12 2010 AidanElectronicsSLEDSLED long term test - Day 2

Here's the data from the last 2 1/2 days of running the SLED. The decrease in photo-current measured by the on-board photo-detector is consistent with the decrease in the current set-point and the delivered current, but it is not clear why these should be changing.

Strictly speaking, I should add some analysis showing that delta_PD_voltage_measured = delta_I_set_measured * [d(PD_voltage)/d(I_set)]_calculated, evaluated at the operating set current I_set ...

Attachment 1: SLED_superlum_long_term_test_0003A.pdf
SLED_superlum_long_term_test_0003A.pdf
  81   Thu Jul 29 10:09:19 2010 AidanElectronicsSLEDSLED long term test - Day 2

I've attached the Acceptance Test Report data from SUPERLUM for this SLED. I've also determined the expected percentage decrease in power/photo-current per mA drop in forward current.

The measured decrease in forward current over the last 2 1/2 days is around 1.4mA from around 111mA. The expected drop in power is thus (4.5% per mA)*(1.4mA) = 6.3%.

The drop in photo-current is around 37.5mA to 35mA = 2.5mA. The percentage drop is around 100*(2.5mA)/(36.3mA) = 6.9%. 

Therefore, the drop in measured power is consistent with what we would expect given the decrease in forward current (which is consistent with the drop in the set point). Why the set-point is dropping is still a mystery.

Quote:

Here's the data from the last 2 1/2 days of running the SLED. The decrease in photo-current measured by the on-board photo-detector is consistent with the decrease in the current set-point and the delivered current, but it is not clear why these should be changing.

Strictly speaking, I should add some analysis showing that delta_PD_voltage_measured = delta_I_set_measured * [d(PD_voltage)/d(I_set)]_calculated, evaluated at the operating set current I_set ...

 

Attachment 1: superlum_SLED_ATR.pdf
superlum_SLED_ATR.pdf
  82   Fri Jul 30 10:04:54 2010 AidanComputingHartmann sensorRestarted the HWS EPICS channels

 Restarted the HWS EPICS channels on hartmann with the following command:

/cvs/opt/epics-3.14.10-RC2-i386/base/bin/linux-x86/softIoc -S HWS.cmd &
 

  83   Fri Jul 30 11:01:31 2010 AidanComputingHartmann sensorEPICS softIoc alias

 I added an alias HWSIoc to controls which can be used to start the HWS EPICS softIoc.

 

alias HWSIoc='/cvs/cds/caltech/target/softIoc/startHWSIOC.sh'
 
and the bash script is:
 

#!/bin/bash

cd /cvs/cds/caltech/target/softIoc

/cvs/opt/epics-3.14.10-RC2-i386/base/bin/linux-x86/softIoc -S /cvs/cds/caltech/target/softIoc/HWS.cmd &

cd -
 

 

  84   Fri Jul 30 13:38:39 2010 James KunertComputingHartmann sensorSummary of Thermal Defocus Data Analysis

Below is a table summarizing the results of recent thermal defocus experiments. The values are the calculated change in measured defocus per unit temperature change of the sensor:

Experiment                   72710a     72710b     72810a     72910a
DeltaS/DeltaT (x) [m^-1/K]   -1.31E-4   -1.46E-4   -1.40E-4   -1.52E-4
DeltaS/DeltaT (y) [m^-1/K]   -1.63E-4   -1.53E-4   -1.56E-4   -1.70E-4

More detail on these experiments will be available in my second progress report, which will be uploaded to the LIGO DCC by next Monday.

The main purpose of this particular eLog is to summarize what functions I wrote and used to do this data analysis, and how I used them. All relevant code which is referenced here can be found on the SVN; I uploaded my most recent versions earlier today.

Here is a flowchart summarizing the three master functions which were specifically addressed for each experiment:

ThermDefocMasterFunctions.png

py4plot.m is probably the most complicated of these three functions, in terms of the amount of data analysis done, so here's a flowchart which shows how the function works and the main subfunctions that it addresses:

ThermDefoc--py4plot.png

 

Also, here is a step-by-step example of how these functions might be used during a particular experiment:

 

(1)Suppose that I have an experiment which I have named "73010a", in which I wish to take 40 images of 200 sums. I would open the code for framesumexport2.py and change lines 7, 8 and 17 to read:


7  LoopN=40
8 SumN=200
17 mainoutbase="73010a"

And I would then save the changes. I would double-check that the output basename had indeed been changed to 73010a (the script will overwrite existing data files if you forget to change the basename before running it). I would then let the script run, changing the set temperature of the lab after the first summed image was taken. Note that the total duration of the measurement is a function of how many images are summed and how many summed images are taken: in this example, taking each single image at a rate of 11Hz, data collection would take ~20 seconds per summed image, and data processing (summing the images) would take ~4 minutes per summed image, on the order of ~1 second per image in the sum. The script isn't very quick at summing images, obviously.

EDIT (7/30 3:40pm): I just updated framesumexport2.py so that the program prompts you for this information. I also enabled execute permissions on the copy of the code on the Hartmann machine located in /users/jkunert/, so going to that directory and typing ./framesumexport2.py and then inputting the information when prompted is all you need to do now. There is no need to change the actual code every time any more.

 

(2)Once data collection had ceased entirely, I would open MATLAB and enter the following:

[x,y,dx,dy,time,M,centroids]=pyanalyze_uni('73010a',40);

The function would then look for 73010a.raw and 73010a.txt in /opt/EDTpdv/ and import the 40 images individually and centroid them. The x and y outputs are the centroid locations. If, for example, 952 centroids were located, x and y would be 952x1x40 arrays. M would be a 40x4 array of the form:

[time_before_img_taken      time_after_img_taken      digitizer_temp      sensor_temp]

 

(3)Once MATLAB had finished the previous function, I would input:

tG=struct;
py4plot('73010a',0,39,x,y,'73010a','200',[1 952],2,tG)

The inputs are, respectively:

(1) python output basename,
(2) first image to analyze (where the first image is image 0),
(3) last image to analyze,
(4) x data (or, rather, the data to analyze; to analyze y instead, swap "x" and "y" in the inputs),
(5) y data (or, if analyzing the y-direction, "x" would be the entry here),
(6) experiment name,
(7) number of sums in each image (as a string),
(8) range of centroids to include in the analysis (if you have 952 centroids, for example, and no ridiculous noise at the edges of the CCD, then [1 952] is the best entry here),
(9) outlier tolerance (the number of standard deviations from the initial fit line within which a datapoint must lie to be included in the second line fitting, in the dx vs x plot),
(10) exponential fitting structure (input an empty structure unless the temperature/time exponential fit turns out poorly, in which case a better fit parameter guess can be input as field tG.guess)

  85   Fri Jul 30 19:22:24 2010 AidanComputingEPICSWaveform Channel Access for storing centroids

A waveform channel was added to the HWS softIoc on hartmann. This allows arrays of data to be stored in a single channel. It's not clear whether it is storing this data as a set of numbers or as a string. However, the values can be changed by the following command:

caput -a -n C4:TCS-HWS_CENTROIDSX 5 1,2,3,4,5

Although the <no of values> entry doesn't seem to actually enforce anything - you can have more or fewer values than this in the array and they are still added to the channel. What does seem to be necessary is that there are no spaces between the commas and the values of the array.

This also works:

[controls@fb1 cds]$ caput -a -n C4:TCS-HWS_CENTROIDSX 2 1,2,3n
Old : C4:TCS-HWS_CENTROIDSX          1,2,35.4342 
New : C4:TCS-HWS_CENTROIDSX          1,2,3n 
which suggests that this is really a string variable, even with the -n flag forcing numeric interpretation. The cainfo command suggests this as well.

[controls@fb1 cds]$ cainfo C4:TCS-HWS_CENTROIDSX
C4:TCS-HWS_CENTROIDSX
    State:         connected
    Host:          
    Access:        read, write
    Data type:     DBR_STRING (native: DBF_STRING)
    Element count: 1
 
 

Usage: caput [options] <PV name> <PV value>

       caput -a [options] <PV name> <no of values> <PV value> ...

  -h: Help: Print this message

Channel Access options:

  -w <sec>:  Wait time, specifies longer CA timeout, default is 1.000000 second

Format options:

  -t: Terse mode - print only sucessfully written value, without name

Enum format:

  Default: Auto - try value as ENUM string, then as index number

  -n: Force interpretation of values as numbers

  -s: Force interpretation of values as strings

Arrays:

  -a: Put array

      Value format: number of requested values, then list of values


 

  86   Fri Jul 30 21:19:05 2010 AidanComputingEPICSWaveform Channel Access for storing centroids

After some discussion with Frank, we figured out how to edit the record type in HWS.db so that the waveform/array channel actually behaved like a numerical array and not like a single string. This just involved defining the data type and the element count in the record definition, like so:

record(waveform, "C4:TCS-HWS_CENTROIDSX")
{
field(EGU,"PIXELS")
field(HOPR,"1024")
field(LOPR,"0")
field(FTVL,"DOUBLE")
field(NELM,"1000")
}
 

and then when the ioc was rebooted, examination of the channel showed the following:

 

[controls@hartmann softIoc]$ cainfo C4:TCS-HWS_CENTROIDSX
C4:TCS-HWS_CENTROIDSX
    State:         connected
    Host:          hartmann:5064
    Access:        read, write
    Data type:     DBR_DOUBLE (native: DBF_DOUBLE)
    Element count: 1000
 
 
[controls@hartmann softIoc]$ caput -a -n C4:TCS-HWS_CENTROIDSX 10 1 2 3 4 5 6 7 8 9 10 11 12 13.1
Old : C4:TCS-HWS_CENTROIDSX 13 1 2 3 4 5 6 7 8 9 10 11 12 13.1 
New : C4:TCS-HWS_CENTROIDSX 13 1 2 3 4 5 6 7 8 9 10 11 12 13.1 
 
 
[controls@hartmann softIoc]$ caget C4:TCS-HWS_CENTROIDSX
C4:TCS-HWS_CENTROIDSX 1000 1 2 3 4 5 6 7 8 9 10 11 12 13.1 0 0 0 0 0 0 0 0 0 0 0 0 [...the remaining 975 elements are all 0...]
 

 

Quote:

A waveform channel was added to the HWS softIoc on hartmann. This allows arrays of data to be stored in a single channel. It's not clear whether it is storing this data as a set of numbers or as a string. However, the values can be changed by the following command:

caput -a -n C4:TCS-HWS_CENTROIDSX 5 1,2,3,4,5

Although the <no of values> entry doesn't seem to actually enforce anything - you can have more or fewer values than this in the array and they are still added to the channel. What does seem to be necessary is that there are no spaces between the commas and the values of the array.

This also works:

 

[controls@fb1 cds]$ caput -a -n C4:TCS-HWS_CENTROIDSX 2 1,2,3n
Old : C4:TCS-HWS_CENTROIDSX          1,2,35.4342 
New : C4:TCS-HWS_CENTROIDSX          1,2,3n 
which suggests that this is really a string variable - even with the -n flag forcing interpretation as numbers. The cainfo command suggests this as well. 

[controls@fb1 cds]$ cainfo C4:TCS-HWS_CENTROIDSX
C4:TCS-HWS_CENTROIDSX
    State:         connected
    Host:          
    Access:        read, write
    Data type:     DBR_STRING (native: DBF_STRING)
    Element count: 1
 
Usage: caput [options] <PV name> <PV value>

       caput -a [options] <PV name> <no of values> <PV value> ...

  -h: Help: Print this message

Channel Access options:

  -w <sec>:  Wait time, specifies longer CA timeout, default is 1.000000 second

Format options:

  -t: Terse mode - print only sucessfully written value, without name

Enum format:

  Default: Auto - try value as ENUM string, then as index number

  -n: Force interpretation of values as numbers

  -s: Force interpretation of values as strings

Arrays:

  -a: Put array

      Value format: number of requested values, then list of values


  87   Sat Jul 31 11:54:20 2010 AidanComputingSLEDSLED Test Day 5 - Re-tuned current set-point control voltage

Main Points

  • Re-set the SLED current set-point control voltage to 0.111V
  • The SLED current set-point voltage drops by about 5mV when the SLED is disengaged
  • The reset was done around 11:45AM PDT, 31-Jul-2010

I turned off the SLED for 10s and reset the current set-point voltage (read using a multimeter probing a couple of pins at the back of the driver board). The initial voltage when the test started on Monday was 0.111V with the SLED engaged. This drooped to 0.109V over the week and there was a corresponding (but possibly not resulting) drop in the on-board photo-diode voltage. When the SLED was disengaged, the set-point current control voltage dropped to 0.104V. I turned the LP pot on the front of the SLED driver board until the multimeter read 0.106V and re-engaged the SLED. The current set-point voltage then read 0.111V, occasionally popping up to 0.112V for a moment or two.

The DC Power Supply to the SLED reads 8.9V with 0.26A current being drawn.

  88   Wed Aug 4 09:57:38 2010 Aidan, JamesComputingHartmann sensorRMS measurements with Hartmann sensor

[INCOMPLETE ENTRY]

We set up the Hartmann sensor and illuminated it with the output from the fiber-coupled SLED placed about 1m away. The whole arrangement was covered with a box to block out ambient light. The exposure time on the Hartmann sensor was adjusted so that the maximum number of counts in a pixel was about 95% of the saturation level.

We recorded a set of 5000 images to file and analyzed them using the Caltech and Adelaide centroiding codes. The results are shown below. Basically, we see the same deviation from the ideal improvement that is observed at Adelaide.
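
For reference, a minimal sketch of this kind of analysis in MATLAB, assuming the "ideal improvement" is the usual 1/sqrt(N) scaling of the centroid RMS with the number of averaged frames (the pre-loaded centroids array and all variable names here are hypothetical, not the actual Caltech or Adelaide code):

% rms_vs_averaging.m -- RMS of N-frame-averaged centroids vs N (sketch)
% Assumes 'centroids' is an nFrames x nSpots array of spot x-positions
% [pixels] already extracted from the 5000 recorded images.
ref   = mean(centroids, 1);                 % mean position of each spot
Nlist = unique(round(logspace(0, 3, 15)));  % averaging lengths, 1..1000 frames
rmsN  = zeros(size(Nlist));
for k = 1:numel(Nlist)
    N  = Nlist(k);
    nb = floor(size(centroids, 1) / N);     % number of independent N-frame blocks
    blk = zeros(nb, size(centroids, 2));
    for b = 1:nb
        blk(b, :) = mean(centroids((b-1)*N+1 : b*N, :), 1);
    end
    dev     = blk - repmat(ref, nb, 1);     % residual displacement of each block
    rmsN(k) = sqrt(mean(dev(:).^2));
end
loglog(Nlist, rmsN, 'o-', Nlist, rmsN(1)./sqrt(Nlist), '--')
xlabel('Number of averaged frames N'); ylabel('RMS centroid displacement [pixels]')
legend('measured', 'ideal 1/sqrt(N)')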

Attachment 1: rms_analyze_centroids_aidan.pdf
rms_analyze_centroids_aidan.pdf
Attachment 2: RMS_WonCode.png
RMS_WonCode.png
Attachment 3: RMS_WonCodeLessPrism.png
RMS_WonCodeLessPrism.png
  89   Mon Aug 9 10:58:37 2010 AidanLaserSLEDSLED 15-day trend

 Here's a plot of the 15-day output of the SLED.

Currently there is a 980nm FC/APC fiber-optic patch cord attached to the SLED. It occurred to me this morning that even though the patch cord is angle-cleaved, there may be more back-reflection than desired because the SLED output is 830nm (or thereabouts) while the patch cord is rated for 980nm.

 I'm going to turn off the SLED until I get an 830nm patch-cord and try it then. 

Correction: I removed the fiber-optic connector and put the plastic cap back on the SLED output. The mode overlap (in terms of area) between the reflection off the cap and the output from the fiber is about 1 part in 1000. So even with 100% reflection off the cap, at most ~0.1% is coupled back into the fiber, which is below the 0.3% danger level. The SLED is on again.

Attachment 1: SLED_superlum_long_term_test_0005A_annotated_15-day_result.pdf
SLED_superlum_long_term_test_0005A_annotated_15-day_result.pdf
  90   Tue Aug 17 16:31:55 2010 AidanThings to BuyLaserBought a laser diode from Thorlabs for HWS

http://www.thorlabs.com/thorProduct.cfm?partNumber=CPS180

I bought this laser diode from Thorlabs today to try the current modulation trick Phil and I discussed last Friday. 

That is:

  1. Accept that there will be interference fringes on the Hartmann sensor probe beam with a laser diode source (especially if the probe beam is the retro-reflection from a Michelson interferometer with a macroscopic arm length difference)
  2. Modulate the current of the laser diode source to vary its wavelength by a few hundredths of a nm. Do this on a time scale that is much faster than the exposure time for a Hartmann sensor measurement
  3. The contrast of the interference fringes should then average out and the exposure should appear to be the sum of two incoherent beams (see the sketch below).
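
A toy MATLAB sketch of the averaging in steps 2-3 (all numbers here - wavelength, sweep amplitude, path difference - are hypothetical, and a linear wavelength sweep over the exposure is assumed):

% fringe_washout.m -- toy model of fringe-contrast averaging (hypothetical numbers)
lambda0 = 635e-9;    % nominal diode wavelength [m] (assumed)
dlambda = 0.05e-9;   % peak-to-peak wavelength sweep from the current modulation [m]
DeltaL  = 0.2;       % macroscopic path-length difference of the Michelson [m]
% sample the wavelength many times within one Hartmann exposure
lambda   = linspace(lambda0 - dlambda/2, lambda0 + dlambda/2, 1e5);
contrast = abs(mean(cos(2*pi*DeltaL ./ lambda)));  % residual fringe contrast
fprintf('Residual fringe contrast: %.3g (1 = no washout)\n', contrast);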

  91   Tue Aug 17 22:34:14 2010 ranaMiscGeneralETM temperature after a 1W step

This attachment is a Shockwave Flash animation of the iLIGO ETM with a 1 W beam of 3.5 cm radius fully absorbed onto the surface starting at t = 0.

Attachment 1: etmt.swf
  92   Wed Aug 18 18:38:11 2010 AidanComputingHartmann sensorHartmann sensor code

 I downloaded and tested revision 47 of the Adelaide Hartmann sensor code from the SVN (https://trac.ligo.caltech.edu/Hartmann_Sensor/browser/users/won/HS_OO?rev=47). After giving it the correct input filenames, it centroided the Hartmann sensor images pretty seamlessly. The output and code are attached below.

The code takes two Hartmann images, locates the centroids in both, and then determines the displacements of all the centroids between the two images. The centroid locations are plotted in a diagram and the x- and y-centroid displacements are plotted vs. the index of each centroid.
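
For reference, the core of the centroiding step can be sketched in a few lines of MATLAB (this is not the Adelaide HS_Centroids code, just a minimal illustration; the filename is hypothetical, the background level is the one used in the test script below, and the Image Processing Toolbox is required):

% centroid_sketch.m -- minimal Hartmann spot centroiding (illustration only)
img = double(imread('hartmann_ref.tif'));        % hypothetical image file
img = img - 49.3;                                % subtract the background level
bw  = img > 0.2*max(img(:));                     % threshold to isolate the spots
cc  = bwconncomp(bw);                            % label each Hartmann spot
s   = regionprops(cc, img, 'WeightedCentroid');  % intensity-weighted centroids
xy  = reshape([s.WeightedCentroid], 2, [])';     % N x 2 array of [x, y] in pixels
fprintf('Number of centroids = %d\n', size(xy, 1));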

The following comments are output on the command line in MATLAB:

 

>> test_HS_Classes
Current plot held
Current plot released
----------------------------------------------------------------
Obtained reference and test centroids.
Number of centroids in reference centroids = 951
average position of reference centroids:
x = 506.39615297  y = 512.890603168
Number of centroids in test centroids = 951
average position of test centroids:
x = 506.396160891  y = 512.892513673

---------------------------------------------------------------- 

HWS_code_output.png - shows the output from the code: we'll need to get more labels on these plots.

HWS_input_image.png - the reference input image (using false color scale) to the Hartmann code

Attachment 1: test_HS_Classes.m
% test_HS_classes.m
%
% A script to test and demonstrate the usage of classes HS_Centroids and
% HS_Classes.

% (LIGO) If half_offset is set true, image and centroids won't be in
% sync in the scatter plot.

% Input parameters
background = 49.3;
... 107 more lines ...
Attachment 2: HS_Image.m
% HS_Image.m
%
%
% HS_Image is a class used to store and interact with images from
% Hartmann Sensor camera.
%
% An instance of the class HS_Image is also used as a property of an
% instance of the class HS_Centroids.
%
% Properties:
... 70 more lines ...
Attachment 3: HS_Centroids.m
% HS_Centroids.m
%
%
% HS_Centroids is a class used to generate and interact with centroids
% of Hartmann Sensor images.
%
% An instance of the class HS_Centroids holds a set of centroids of an
% image.
%
% Properties:
... 254 more lines ...
Attachment 4: HWS_code_output.png
HWS_code_output.png
Attachment 5: HWS_input_image.png
HWS_input_image.png
  93   Mon Aug 23 08:43:16 2010 AidanThings to BuyLaserBought a laser diode from Thorlabs for HWS

It arrived on Friday.

Quote:

http://www.thorlabs.com/thorProduct.cfm?partNumber=CPS180

I bought this laser diode from Thorlabs today to try the current modulation trick Phil and I discussed last Friday. 

That is:

  1. Accept that there will be interference fringes on the Hartmann sensor probe beam with a laser diode source (especially if the probe beam is the retro-reflection from a Michelson interferometer with a macroscopic arm length difference)
  2. Modulate the current of the laser diode source to vary its wavelength by a few hundredths of a nm. Do this on a time scale that is much faster than the exposure time for a Hartmann sensor measurement
  3. The contrast of the interference fringes should average out and the exposure should appear to be the sum of two incoherent beams.

  94   Mon Sep 13 18:24:52 2010 AidanLaserHartmann sensorEnclosure for the HWS

I've assembled the box Mindy ordered from Newport that will house the Hartmann sensor. It's mainly there to reduce ambient light and air currents, and to keep the table cleaner than it would otherwise be.

We need to add a few more holes to allow access for extra cables.

 

Attachment 1: 00001.jpg
00001.jpg
Attachment 2: 00002.jpg
00002.jpg
Attachment 3: 00003.jpg
00003.jpg
Attachment 4: 00005.jpg
00005.jpg
  95   Tue Sep 28 10:41:32 2010 AidanLaserHartmann sensorAligning HWS cross-sample experiment - polarization issues

I'm in the process of aligning the cross-sampling experiment for the HWS. I've put the 1" PBS cube into the beam from the fiber-coupled SLED and found that the split between s- and p-polarizations is not 50-50. In fact, it looks more like 80% reflected and 20% transmitted. This is probably due to the polarization-maintaining patch cord that connects to the SLED. I'll try switching it out with a non-PM fiber.

Later ...

That worked.

  96   Tue Sep 28 17:53:40 2010 AidanLaserHartmann sensorCrude alignment of cross-sampling measurement

I've set up a crude alignment of the cross-sampling system (optical layout to come). This was just a sanity check to make sure that the beam could successfully get to the Hartmann sensor. The next step is to replace the crappy beam-splitter with one that is actually 50/50.

Attached is an image from the Hartmann sensor.

Attachment 1: 2010_09_28-HWS_cross_sample_expt_crude_alignment_01.pdf
2010_09_28-HWS_cross_sample_expt_crude_alignment_01.pdf
  97   Wed Sep 29 16:49:36 2010 AidanLaserHartmann sensorCross-sampling experiment power budget

I've been setting up the cross-sampling test of the Hartmann sensor. Right now I'm waiting on a 50/50 BS, so I'm improvising with a BS designed for 1064nm.

The output from the SLED (green beam @ 980nm) is around 420uW (the beam falls completely on the power meter). There are a couple of irises shortly afterwards that cut out a lot of the power - apparently down to 77uW (but the beam is larger than the detection area of the power meter at this point - by ~50%). The BS is not very efficient on reflection and cuts the power down to 27uW (overfilled power meter). The measurement of 39uW is near a focus, where the power meter captures the whole beam. There is a PBS cube that is splitting the beam unequally between s- and p-polarizations (I think this is due to uneven reflections of s- and p-polarizations from the 1064nm BS). The beam is retro-reflected back to the HWS, where about 0.95uW makes it to the detector.
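
A quick tally of the serial chain of measured powers (a sketch only; the 39uW focus measurement is left out since it is not a separate stage in the chain):

% power_budget.m -- cumulative throughput from the powers measured above
P      = [420 77 27 0.95];    % uW at each stage
stages = {'SLED output', 'after irises', 'after BS reflection', 'at HWS detector'};
for k = 2:numel(P)
    fprintf('%-20s -> %-20s : %5.1f%%\n', stages{k-1}, stages{k}, 100*P(k)/P(k-1));
end
fprintf('Overall: %.2f%% of the SLED output reaches the HWS\n', 100*P(end)/P(1));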

There is a 1mW 633nm laser diode that is used to align the optical axis. There are two irises that are used to match the optical axes of the laser diode and the SLED output.

 

Attachment 1: 00001.jpg
00001.jpg
  98   Mon Oct 4 19:44:03 2010 AidanLaserHartmann sensorCross-sampling experiment - two beams on HWS

I've set up the HWS with the probe beam sampling two optics in a Michelson configuration (source = SLED, beamsplitter = PBS cube). The return beams from the Michelson interferometer are incident on the HWS. I misaligned the reflected beam from the transmitted beam to create two Hartmann patterns, as shown below.

The next step is to show that the centroiding is a linear superposition of these two wavefronts.
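
For the record, the expected behaviour where the two (incoherent) patterns overlap on a lenslet is that the combined spot centroid is the power-weighted mean of the individual spot centroids. A toy example with hypothetical numbers:

% superposition_check.m -- centroid of two incoherent overlapping spots
P1 = 0.7;  P2 = 0.3;     % relative powers of the two beams (hypothetical)
x1 = 10.2; x2 = 10.8;    % individual spot centroid x-positions [pixels] (hypothetical)
x_comb = (P1*x1 + P2*x2) / (P1 + P2);   % power-weighted mean
fprintf('Expected combined centroid: %.3f pixels\n', x_comb);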

Attachment 1: test001_two_beams_on_HWS.pdf
test001_two_beams_on_HWS.pdf
  99   Tue Oct 5 12:51:16 2010 AidanLaserHartmann sensorVariable power in two beams of cross-sampling experiment

The SLED in the cross-sampling experiment produces unpolarized light at 980nm. So I added a PBS after the output and then a HWP (one for 1064nm, sadly) after that. In this way I produced linearly p-polarized light from the PBS, which I could then rotate to any angle by rotating the HWP. The only drawback was that the HWP was only close to half a wave of retardation at 980nm. As a result, the output from this plate became slightly elliptically polarized.

The beam then went into another PBS which split it into two beams in a Michelson-type configuration (REFL and TRANS beams) - see the attached image. By rotating the HWP I could vary the relative amount of power in the two arms of the Michelson. The two beams were retro-reflected and were then incident onto the HWS.

I measured the power in the REFL beam relative to the total power as a function of the HWP angle. The results are shown in the attached plot.
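
For an ideal HWP, rotating the plate by theta rotates the polarization by 2*theta, so the reflected (s-polarized) fraction at the second PBS should follow a Malus-type law. A sketch of the expected curve (ideal waveplate assumed, which the real 1064nm plate at 980nm is not):

% refl_fraction.m -- expected REFL power fraction vs HWP angle (ideal waveplate)
theta = 0:1:90;             % HWP angle from the p-axis [deg]
frac  = sind(2*theta).^2;   % s-polarized (reflected) fraction
plot(theta, frac); grid on
xlabel('HWP angle [deg]'); ylabel('P_{REFL} / P_{total}')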

 

Attachment 1: test002_two_beams_on_HWS_analyze.pdf
test002_two_beams_on_HWS_analyze.pdf
Attachment 2: Hartmann_Enclosure_Diagram__x-sampling.png
Hartmann_Enclosure_Diagram__x-sampling.png
  100   Thu Nov 4 13:31:19 2010 Won KimComputingHartmann sensorFrame Grabber SDK installation

 Appended below is the step-by-step procedure that I used to install and
use the frame grabber SDK. Note that the installation process was a lot
simpler with SDK version 4.2.4.3 than with the previous version.

Lines starting with ":" are my inputs and with ">" the computer outputs.

I tried to put this into elog but the web page says the laser password is
wrong so I could not.

Won

---

0. Turn on or restart the computer. For installation of the frame grabber
  SDK, go to step 1. If using the existing installation go to step 5.

1. Copy the script EDTpdv_lnx_4.2.4.3.run to my home folder.

2. Ensure that the script is executable.

: chmod +x EDTpdv_lnx_4.2.4.3.run

3. Run the script.

: sudo ./EDTpdv_lnx_4.2.4.3.run

4. After entering the root password, the script asks for the installation
  directory. Default is /opt/EDTpdv, to which I type 'y'.

  The script then runs, printing out a massive log. This completes the
  installation process.

5. Move to the directory in which the SDK is installed.

: cd /opt/EDTpdv

6. Initialise the camera by loading the camera configuration file
  dalsa_1m60.cfg located in the camera_config folder.

: ./initcam -f camera_config/dalsa_1m60.cfg

  which will output the following message (if successful):

opening pdv unit 0....
done


7. Take an image frame.

: ./take -f ~/matlab/images/test.raw

  which will save the raw file as specified above and generate the following
  message on the terminal:

reading image from Dalsa 1M60 12 bit dual channel camera link
width 1024 height 1024 depth 12  total bytes 2097152
writing 1024x1024x12 raw file to /home/won/matlab/images/test.raw

(actual size 2097152)

1 images 0 timeouts 0 overruns
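
The message above implies 1024 x 1024 pixels at 2 bytes per pixel (2097152 bytes total), so the raw frame can be read into MATLAB like this (a sketch; the path is the example above, and native little-endian byte order is assumed):

% read_raw.m -- read a raw frame written by 'take' (12-bit data stored as uint16)
fid = fopen('/home/won/matlab/images/test.raw', 'r');
img = fread(fid, [1024 1024], 'uint16');   % 1024*1024*2 bytes = 2097152
fclose(fid);
imagesc(img'); axis image; colorbar        % transpose: fread fills column-major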



Whether the image taken was valid or not, I followed exactly the same
procedure. In step 7, when the image was not valid, the message after
executing the take command said "1 timeouts", and when the image was
valid I got "0 timeouts".

You will also get "1 timeouts" if you turn off the camera and execute the
take command. So at least I know that when an image is not valid, it is
because the frame grabber failed to obtain the image from the camera.
