TCS elog
ID | Date | Author | Type | Category | Subject
55 | Tue Jun 22 22:30:24 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 5, more Hartmann image preliminary analysis

Today I spoke with Dr. Brooks and got a rough outline of what my experiment for the next few weeks will entail. I'll get more of the details and get properly started tomorrow, but today I had a more thorough look around the Hartmann lab and we set up a few things on the optical table. The OLED is now focused through a microscope to keep the beam from diverging quite as much before it hits the sensor, and the beam is roughly aligned to shine onto the Hartmann plate. The Hartmann images currently look like this (on a color scale of intensity):

hws.png

This image was taken with the camera set to an exposure time of 650 microseconds and a frequency of 58 Hz. The visible 'streaks' in the image are thought to possibly be an artifact of the camera's data acquisition process.

I tested to see whether the same 'flickering' is present in images under this setup.

With the frequency kept at 58 Hz, the following statistics were found from a 200x200 pixel box within a series of 10 images at each of the following exposure times. Note that the range on the plot has been reduced to the region near the relevant feature, and that this range is not changed from image to image:
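(For reference, a minimal MATLAB sketch of this kind of per-pixel statistic is below. It assumes a 200x200xN array 'imgs' of co-registered images is already in memory and is not the exact code used; the actual functions are described and attached in the Day 3 entry below.)

% Minimal sketch of the per-pixel std-vs-mean statistic (assumes a
% 200x200xN double-convertible array 'imgs' of images is in memory).
pixmean = mean(double(imgs), 3);          % mean of each pixel over the N images
pixstd  = std(double(imgs), 0, 3);        % standard deviation of each pixel
scatter(pixmean(:), pixstd(:), 1);        % one point per pixel
xlabel('mean intensity (counts)');
ylabel('standard deviation (counts)');
xlim([100 150]);                          % restrict to the region near the feature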

750 microseconds:

750us.png

1000 microseconds:

1000us.png

1500 microseconds:

1500us.png

2000 microseconds:

2000us.png

3000 microseconds:

3000us.png

4000 microseconds:

4000us.png

5000 microseconds. Note that the background level is approaching the level of the feature:

5000us.png

6000 microseconds. Note that the axes are not restricted to the same region here, and that the background level exceeds the level range of the feature. This demonstrates that the 'feature' disappears from the plot when the plot does not include the specific range of ~115-130:

8000us.png

 

When images containing the feature intensities are averaged over a greater number of images, the plot takes on the following appearance (for a 200x200 box within a series of 100 images, 3000us exposure time):

hws3k.png

This pattern changes a bit when averaged over more images. This could simply be the result of the decrease in the standard deviation of each pixel's standard-deviation estimate as more images are considered for each pixel (that is, the line becoming less 'spread out' in the y-axis direction).
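(As an illustration of that statistics point, here is a small MATLAB sketch, on purely simulated data, of how the spread of per-pixel standard-deviation estimates shrinks as the number of images grows; the noise level chosen is arbitrary.)

% Illustration of how the spread of per-pixel standard-deviation estimates
% shrinks as more images are used (pure simulation, not camera data).
truesigma = 2;                                 % assumed per-pixel noise, in counts
for N = [10 100 1000]
    samples = truesigma * randn(N, 10000);     % 10000 'pixels', N images each
    s = std(samples, 0, 1);                    % sample std of each 'pixel'
    fprintf('N = %4d: spread of std estimates = %.3f counts\n', N, std(s));
end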

 

To demonstrate that frequency has no effect, I made the following plots from images with the camera set to different frequencies at a fixed exposure time of 3000 us (I wouldn't expect frequency to matter, given the previous images, but these appear to demonstrate that the 'feature' does not vary with time):

 

Set to 30Hz:

f30Hz.png

Set to 1Hz:

f1Hz.png

 

To make sure that nothing weird was going on with my algorithm, I did the following: I constructed a 10-component vector of random numbers, then concatenated that vector beside itself ten times to form a 2D array. I then stacked that 2D array into a 3D array by scaling it with ten different integer multiples, ensuring that the standard deviations of each row would be integer multiples of each other when the standard deviation was found along the direction of the random change (I chose the integer multiples to ensure that some of these values would fall within the range of 115-130). Thus, if my function wasn't making any weird mistakes, I would end up with a linear plot of standard deviation vs. mean, with a slope of 1. When this array was fed into the function with which the previous plots were found, the output plot was indeed linear, and a least-squares regression of the mean/deviation data confirmed that the slope was exactly 1 and the intercept exactly 0. So I'm fairly certain that the feature observed in these plots is not an artifact of the algorithm used to analyze the data (all the functions are pretty simple, so I wouldn't expect it to be, but it doesn't hurt to double-check).
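(A minimal sketch of this kind of sanity check is below. It is not the exact construction described above -- any synthetic stack whose per-pixel means and standard deviations are known in advance will do -- but it exercises the same std-vs-mean analysis path.)

% Sanity check of a std-vs-mean routine on synthetic data with known answers.
% (Not the exact construction described above.)
base   = rand(10, 10);                         % 10x10 array of random 'pixels'
scales = reshape(1:10, 1, 1, 10);              % ten integer multiples
stack  = bsxfun(@times, base, scales);         % 10x10x10 synthetic image stack
m = mean(stack, 3);                            % per-pixel means (known: 5.5*base)
s = std(stack, 0, 3);                          % per-pixel stds (known: base*std(1:10))
p = polyfit(m(:), s(:), 1);                    % linear fit of std vs mean
fprintf('slope = %.4f, intercept = %.4g\n', p(1), p(2));
% For this particular construction the slope should come out as
% std(1:10)/mean(1:10) = 0.5505 with zero intercept, confirming the analysis
% introduces no spurious structure near any particular level.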

 

I would conjecture from all of this that the observed feature in the plots is the result of some property of the CCD array or other element of the camera. It does not appear to have any dependence on exposure time or to scale with the relative overall intensity of the plots, and, rather, seems to depend on the actual digital number read out by the camera. This would suggest to me, at first glance, that the behavior is not the result of a physical process having to do with the wavefront.

 

EDIT: Some late-night conjecturing. Consider the following:

I don't know how the specific analog-to-digital conversion onboard the camera works, but I got to thinking about ADCs. I assume, perhaps incorrectly, that it works on roughly the same idea as the flash ADCs I dealt with back in my digital electronics class. It may not have the same structure (a linear resistor ladder hooked up to comparators which compare the ladder voltages to the analog input, followed by combinational logic which takes the comparator outputs and produces a digital level), but I assume that, at some level, it compares the analog input to a number of different voltage thresholds, takes the highest 'threshold' that the analog input exceeds, and then outputs the digital level corresponding to that particular threshold voltage.

Now, consider what would happen if there were a problem with such an ADC such that one of the threshold voltages was unstable or otherwise different from the desired value (for a flash ADC, perhaps this could result from a problem with the comparator connected to that threshold level, for example). Say, for example, that the threshold voltage corresponding to the 128th level was too low. In that case, an analog input voltage which should be placed into the 127th level could trip the comparator for the 128th level, and the digital output would read 128 even though the analog input corresponded to 127.

So if such an ADC was reading a voltage (with some noise) near that threshold, what would happen? Say that the analog voltage corresponded to 126 and had noise equivalent to one digital level. It should, then, give readings of 125, 126 or 127. However, if the voltage threshold for the 128th level was off, it would bounce between 125, 126, 127 and 128 -- that is, it would appear to have a larger standard deviation than the analog voltage actually possessed.

Similarly, consider an analog input voltage corresponding to 128 with noise equivalent to one digital level. It should read out 127, 128 and 129, but with the lower-than-desired threshold for 128 it would perhaps read out only 128 and 129 -- that is, the standard deviation of the digital signal would be lower for points just above 128.

This is very similar to the sort of behavior that we're seeing!
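(Here is a toy MATLAB simulation of that idea, assuming an otherwise ideal quantizer whose threshold for level 128 sits half a level too low. The size of the fault is an arbitrary assumption, chosen just to show the sign of the effect; it is not a model of the actual camera electronics.)

% Toy simulation of a quantizer whose threshold for level 128 is too low by
% half a level, to see the effect on the apparent noise.
nsamp = 1e5;
for truelevel = [126 128]
    analog  = truelevel + randn(1, nsamp);     % analog signal, noise ~ 1 level
    digital = round(analog);                   % ideal quantization
    % any sample in [127.0, 127.5), which should read 127, is misread as 128
    % because the 128 threshold sits half a level too low (assumed fault)
    digital(analog >= 127.0 & analog < 127.5) = 128;
    fprintf('mean %.0f: std of digital output = %.3f\n', truelevel, std(digital));
end
% With this fault the apparent std is inflated for means just below 128 and
% reduced for means just above it, mimicking the observed feature.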

Thinking about this further, I reasoned that if this was what the ADC in the camera was doing, then if we looked in the image arrays for instances of the digital levels 127 and 128, we would see too few instances of 127 and too many instances of 128 -- several of the analog levels which should correspond to 127 would be 'misread' as 128. So I went back to MATLAB and wrote a function to look through a 1024x1024xN array of N images and, for every integer between an inputted minimum level and maximum level, find the number of instances of that level in the images. Inputting an array of 20 Hartmann sensor images, along with minimum and maximum levels of 50 and 200, gave the following:

levelinstances.png

Look at that huge spike at 128! This is more complex behavior than my simple picture, which would just have 127 show "too few" values and 128 "too many", but to me it seems consistent with the hypothesis that the voltage threshold for the 128th digital level is too low and is thus giving false output readings of 128, while also reducing the number of correct outputs for values just below 128. And assuming that I'm thinking about the workings of the ADC correctly, this is consistent with an increase in the standard deviation of the digital level for values with a mean just below 128 and a lower standard deviation for values with a mean just above 128, which is what we observe.
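(For reference, a minimal sketch of the level-counting check described above; it assumes the images are already loaded into a 1024x1024xN array 'imgs' and is not the actual function used.)

% Count how often each digital level between 50 and 200 occurs in a stack of
% images (assumes the images are in a 1024x1024xN array 'imgs').
levels = 50:200;
counts = histc(double(imgs(:)), levels);       % instances of each integer level
bar(levels, counts);
xlabel('digital level'); ylabel('number of instances');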

 

This is my current hypothesis for why we're seeing that feature in the plots. Let me know what you think, and if that seems reasonable.

 

54 | Tue Jun 22 00:21:47 2010 | James K | Misc | Hartmann sensor | Surf Log -- Day 4, Hartmann Spot Flickering Investigation

 I started out the day by taking some images from the CCD with the OLED switched off, to just look at the pattern when it's dark. The images looked like this:

 
Taken with camera settings:

The statistical analysis of them using the functions from Friday gave the following result:

 
At first glance, the distribution looks pretty Poissonian, as expected. There are a few scattered pixels registering a little brighter, but that's perhaps not so unusual, given the relatively tiny spread of intensities even for the most extreme outliers. I won't say for certain whether or not there might be something unexpected at play here, but I don't notice anything as unusual as the standard deviation 'spike' seen from intensities 120-129 in yesterday's log.
 
Speaking of that spike, the rest of the day was spent trying to investigate it a little more. In order to accomplish this, I wrote the following functions (all attached):
 
-spotfind.m -- inputs a 3D array of several Hartmann images as well as a starting pixel and threshold intensity level. analyzes the first image, scanning starting at the starting pixel until it finds a spot (with an edge determined by the threshold level), after which it finds a box of pixels which completely surrounds the spot and then shrinks the matrix down to this size, localizing the image to a single spot
 
-singspotcent.m -- inputs the image array outputted from spotfind, subtracts an estimate of the background, then uses the centroiding algorithm sum(x*P^2)/sum(P^2) to find the centroid (where x is the coordinate and P is the intensity level), then outputs the centroid location (a minimal sketch of this weighting appears after this list)
 
-hemiadd.m -- inputs the image from spotfind and the centroid from singspotcent, subtracts an estimate of the background, then finds the sum total intensity in the top half of the image above the centroid, the bottom half, the left half and the right half, outputs these values as n-component vectors for an n-image input, subtracts from each vector its mean and then plots the deviations in intensity from the mean in each half of the image as a function of time
 
-edgeadd.m -- similar to hemiadd, except that rather than adding up all pixels on one half of the image, it inputs a threshold, determines how far to the right of the centroid the spot falls past this threshold and uses that as a radial length, then finds the sum of the intensities of a bar of 3 pixels on this "edge" at the radial length away from the centroid.
 
-spotfft.m -- performs a fast Fourier transform on the outputs from edgeadd, giving the frequency spectrum at which the intensity of these edge pixels oscillates, then plots these for each of the four edge vectors. See an example output below.
 
--halfspot_fluc.m and halfspot_edgefluc.m -- master functions which combine and automate the previous functions
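As a reference for the centroiding step mentioned above, here is a minimal sketch of the intensity-squared weighted centroid for a single background-subtracted spot image P (a 2D double array). This is just the formula written out, not the attached singspotcent.m itself.

% Intensity-squared weighted centroid of a background-subtracted spot image P.
[rows, cols] = size(P);
[C, R] = meshgrid(1:cols, 1:rows);             % pixel coordinate grids
W  = P.^2;                                     % intensity-squared weights
rc = sum(sum(R .* W)) / sum(sum(W));           % row coordinate of centroid
cc = sum(sum(C .* W)) / sum(sum(W));           % column coordinate of centroid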
 
Dr. Brooks has suggested that the observed flickering might be an effect of the finite thickness of the Hartmann plate. The OLED can be treated as a point source and thus approximated as emitting a spherical wavefront, so the light from it will hit this edge at an angle and be scattered onto the CCD. If the plate vibrates (which it certainly must to some degree), the wavefront will hit this edge at a slightly different angle as the edge is displaced, and the scattered light will hit the CCD at a different point, causing the flickering (which is, after all, observed to occur near the edge of the spot). This effect must cause some level of noise, but whether it's the culprit for our 'flickering' spike in the standard deviation remains to be seen.

Here is the frequency spectrum of the edge intensity sums for two separate spots, found over 128 images:
(Plot: Intensity Sum Amplitude Spectrum of Edge Fluctuations, 128 images, spot search point (100,110), threshold level 110)

(Plot: 128 images, spot search point (100,100), threshold level 129)
At first glance, I am not able to conclude anything from this data. I should investigate this further.

A few things to note, to myself and others:
--I still should construct a Bode plot from this data and see if I can deduce anything useful from it
--I should think about whether or not my algorithms are good for detecting what I want to look at. Is looking at a 3 pixel vertical or horizontal 'bar' on the edge good for determining what could possibly be a more spherical phenomenon? Are there any other things I need to consider? How will the settings of the camera affect these images and thus the results of these functions?
--Am I forgetting any of the subtleties of FFTs? I've confirmed that I am measuring the amplitude spectrum by looking at reference sine waves (see the sketch below), but I should be careful since I haven't worked with these in a while
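(Referring to the FFT point above, a minimal sketch of an amplitude-spectrum check against a reference sine wave is below. The frame rate is an assumed value and this is not the code actually used; spotfft.m is attached at the end of this entry.)

% Check of single-sided amplitude-spectrum scaling against a known sine wave.
fs = 58;                                       % assumed frame rate, Hz
n  = 128;                                      % number of frames
t  = (0:n-1) / fs;
f0 = 20 * fs / n;                              % put the test tone exactly on a bin
x  = 3.0 * sin(2*pi*f0*t);                     % reference sine, amplitude 3
X  = fft(x) / n;
amp = 2 * abs(X(1:n/2+1));                     % single-sided amplitude spectrum
amp([1 end]) = amp([1 end]) / 2;               % DC and Nyquist terms are not doubled
f = (0:n/2) * fs / n;
plot(f, amp);                                  % peak should read ~3.0 at f0
xlabel('frequency (Hz)'); ylabel('amplitude');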
 
It's late (I haven't been working on this all night, but I haven't gotten the chance to type this up until now), so thoughts on this problem will continue tomorrow morning..

Attachment 1: spotfind.m
function [spotM,r0,c0] = spotfind(M,level,rs,cs)
%SPOTFIND Inputs a 3D array of hartmann spots and spot edge level
%and outputs a subarray located around a single spot located near (rs,cs)
cut=level/65535;
A=double(M(:,:,1)).*double(im2bw(M(:,:,1),cut));

%start at (rs,cs) and sweep to find spot
r=rs;
c=cs;
while A(r,c)==0
... 34 more lines ...
Attachment 2: singspotcent.m
function [rc,cc] = singspotcent(A)
%SINGSPOTCENT returns centroid location for first image in input 3D matrix
MB=double(A(:,:,1));
[rn cn]=size(MB);
M=MB-mean(mean(min(MB)));
r=1;
c=1;
sumIc=0;
sumIr=0;
while c<(cn+1)
... 26 more lines ...
Attachment 3: hemiadd.m
function [topsum,botsum,leftsum,ritsum] = hemiadd(MB,rcd,ccd)
%HEMIADD inputs a 3D image matrix and centroid location and finds the difference of
%the sums of the top half, bottom half, left half and right half at each time
%compared to their means over that time

%round coordinates of centroid
rc=round(rcd);
cc=round(ccd);

%subtract approximate background
... 51 more lines ...
Attachment 4: edgeadd.m
function [topsum,botsum,leftsum,ritsum] = edgeadd(MB,rcd,ccd,edgemax)
%EDGEADD inputs a 3D image matrix and centroid location and finds the difference of
%the sums of 3 edge pixels at radial distance "radial" from centroid for
%the top half, bottom half, left half and right half at each time
%compared to their means over that time

%round coordinates of centroid
rc=round(rcd);
cc=round(ccd);

... 59 more lines ...
Attachment 5: spotfft.m
function spotfft(t,b,l,r)
%SPOTFFT Does an fft and plots the frequency spectrum of four input vectors
%Specifically, this is to be used with halfspot_edgefluc to find the
%frequencies of oscillations about the edges of Hartmann spots
[n,m]=size(t);
NFFT=2^nextpow2(n);
T=fft(t,NFFT)/n;
B=fft(b,NFFT)/n;
L=fft(l,NFFT)/n;
R=fft(r,NFFT)/n;
... 30 more lines ...
Attachment 6: halfspot_fluc.m
function [top,bot,lft,rgt] = halfspot_fluc(M,spotr,spotc,thresh)
%HALFSPOT_FLUC Inputs a 3D array of Hartmann sensor images, along with an
%approximate spot location and intensity threshhold. Finds a spot on the
%first image near (spotc,spotr) and defines boundary of spot near an
%intensity of 'thresh'. Outputs fluctuations of the intensity sums of the
%top, bottom, left and right halves of the spot about their means, and
%graphs these against each other automatically.

[I,r0,c0]=spotfind(M,thresh,spotr,spotc);
[r,c]=singspotcent(I);
... 7 more lines ...
Attachment 7: halfspot_edgefluc.m
function [top,bot,lft,rgt] = halfspot_edgefluc(M,spotr,spotc,thresh,plot)
%HALFSPOT_EDGEFLUC Inputs a 3D array of Hartmann sensor images, along with an
%approximate spot location and intensity threshhold. Finds a spot on the
%first image near (spotc,spotr) and defines boundary of spot near an
%intensity of 'thresh'. Outputs fluctuations of the intensity sums of the
%top, bottom, left and right edges of the spot about their means, and
%graphs these against each other automatically.
%
%For 'plot', specify 'time' for the time signal or 'fft' for the frequency

... 10 more lines ...
53 | Sat Jun 19 17:31:46 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 3, Initial Image Analysis
For Friday, June 18:
(note that I haven't been working on this stuff all of Saturday or anything, despite posting it now. It was getting late on Friday evening so I opted to just type it up now, instead)

(all matlab files referenced can be found in /EDTpdv/JKmatlab unless otherwise noted)

I finally got Xming up and running on my laptop and had Dr. Brooks edit the permissions of the controls account, so now I can fully access the Hartmann computer remotely (run MATLAB, interact with the framegrabber programs, etc.). I was able to successfully adjust camera settings and take images using 'take', saving them as .raw files. I figured out how to import these .raws into MATLAB using fopen and display them as grayscale images using the imshow command. I then wrote a program (readimgs.m, as attached) which takes as inputs a base filename and a number of images (n), then automatically loads the first 'n' .raw files located in /EDTpdv/JKimg/ with the inputted base file name, formatting them properly and saving them as a 1024x1024x(n) matrix.
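(For reference, a minimal sketch of importing a single frame is below. It assumes a bare 1024x1024 frame of 16-bit unsigned integers with no header and native byte order, and an example file name; those details are assumptions. The actual import was done with readimgs.m, attached below.)

% Minimal sketch of importing one 'take' output frame (assumed format:
% bare 1024x1024 uint16 pixels, no header, native byte order).
fid = fopen('/opt/EDTpdv/JKimg/hws0000.raw', 'r');    % example file name
img = fread(fid, [1024 1024], 'uint16=>uint16');
fclose(fid);
imshow(img', []);                              % transpose if rows/columns look swapped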

After trying out the test pattern of the camera, I set the camera into normal operating mode. I took 200 images of the HWS illuminated by the OLED, using the following camera settings:

 
Temperature data from the camera was, unfortunately, not taken, though I now know how to take it.
 
The first of these 200 images is shown below:
 
hws0000.png

As a test exercise in MATLAB and also to analyze the stability of the HWS output, I wrote a series of functions to allow me to find and plot the means and standard deviations of the intensity of each pixel over a series of images. First, knowing that I would need it in following programs in order to use the plot functions on the data, I wrote "ar2vec.m" (as attached), which simply inputs an array and concatenates all of the columns into a single column vector.

Then I wrote "stdvsmean.m" (as attached), which inputs a 3D array (such as the 1024x1024x(n) array of n image files) and first calculates the standard deviation and mean of this array along the 3rd dimension (leaving, for example, two 1024x1024 arrays which give the mean and standard deviation of each pixel over the (n) images). It then uses ar2vec to create two column vectors representing the mean and standard deviation of each pixel, and plots a scatterplot of the standard deviation of each pixel vs. its mean intensity (with logarithmic axes), along with histograms of the mean intensities and standard deviations of intensity (with logarithmic y-axes).

"imgdevdat.m" (as attached) is simply a master function which combines the previous functions to input image files, format them, analyze them statistically and create plots.

Running this function for the first 20 images gave the following output:

(data from 20 images, over all 1024x1024 pixels)

Note that the background level is not subtracted out in this function, which is apparent from the plots. The logarithmic scatter plot looks pretty linear, as expected, but there are interesting features arising between intensities of ~120 and ~130 (the obvious spike upward of standard deviation, followed immediately by a large dip downward).

MATLAB gets pretty bogged down trying to plot over a million data points at a time, to the point where it's very difficult to do anything with the plots. I therefore wrote the function "minimgstat.m" (as attached), which is very similar to imgdevdat.m except that before doing the analysis and plotting, it reduces the size of the image array to the upper-left NxN square (where N is an additional argument of the function).

Using this function, I did the same analysis of the upper-left 200x200 pixels over all 200 images:

(data from 200 images, over the upper-left 200x200 pixels)

The intensities of the pixels don't go as high this time because the upper portion of the image is dimmer than much of the rest of the image (as is apparent from looking at the image itself, and as I demonstrate further a little later on). Note the resulting change in axis scaling when comparing the plots. We do, however, see the same behavior in the ~120-128 intensity level region (more pronounced in this plot because of the change in axis scaling).

I was interested in looking at which pixels constituted this band, so I wrote a function "imgbandfind.m" (as attached), which inputs a 2D array and a minimum and maximum range value, goes through the image array pixel-by-pixel, determines which pixels are within the range, and then constructs an RGB image which displays pixels within the range as red and pixels outside the range as black.
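(A minimal sketch of that band-highlighting idea is below, for a single 2D image array 'img'; the actual function used is imgbandfind.m, attached below.)

% Highlight pixels in a given intensity band in red on a black background.
minb = 120; maxb = 129;
mask = (img >= minb) & (img <= maxb);          % pixels inside the band
rgb  = zeros([size(img) 3]);                   % black RGB background
rgb(:,:,1) = double(mask);                     % band pixels shown in red
imshow(rgb);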

I inputted the first image in the series into this function along with the range of 120-129, and got the following:

(pixels in intensity range of 120-129 in first image)

So the pixels in this range appear to be the pixels on the outskirts of each wavefront dot near the vertical center of the image. The outer circles of the dots on the lower and upper portions of the image do not appear, perhaps because the top of the image is dimmer and the bottom of the image is brighter, and thus these outskirt pixels would then have lower and higher values, respectively. I plan to investigate this and why it happens (what causes this 'flickering' and if it is a problem at all) further.

The fact that the background levels are lower nearer to the upper portion of the image is demonstrated in the next image, which shows all intensity levels less than 70:
(pixels in intensity range of 0-70 in first image)

So the background levels appear to be nonuniform across the CCD, as are the intensities of each dot. Again, I plan to investigate this further. (Could it be something to do with stray light hitting the CCD nonuniformly, maybe? I haven't thought through all the possibilities.)
 
The OLED has been turned off, so my next immediate step will be to investigate the background levels further by analyzing the images when not illuminated by the OLED.
 
In other news: today I also attended the third Intro to LIGO lecture, a talk on Artificial Neural Networks and their applications to automated classification of stellar spectra, and the 40m Journal Club on the birth rates of neutron stars (though I didn't think to learn how to access the wiki until a few hours right before, and then didn't actually read the paper. I fully intend to read the paper for next week before the meeting).
 
Attachment 2: ar2vec.m
function V = ar2vec(A)
%AR2VEC V=ar2vec(A)
%concatenates the columns of 2D array A into a single column vector V

sz = size(A);
n=sz(1,2);
i=1;
V=[];

while i<(n+1)
... 7 more lines ...
Attachment 3: readimgs.m
function arr = readimgs(imn,n)
%readimgs('basefilename',n) 
%- A function to load a series of .raw files outputted by 'take'
%and stored in /opt/EDTpdv/JKimg/
%  Inputs: 'basefilename' is a string input (for example, for series of
%   images "testpat####.raw" input 'testpat'). "n" is the number of images,
%   so for testpat0000-testpat0004 input n=5

i=0;
arr=[];
... 32 more lines ...
Attachment 4: stdvsmean.m
function M = stdvsmean(A)
%STDVSMEAN takes a 3D array of image data and computes
%stdev vs. mean for each pixel

%find means/st devs between each image
astd = std(double(A),0,3);
armn = mean(double(A),3);

%convert into column vectors of pixel-by-pixel data
asvec=ar2vec(astd);
... 33 more lines ...
Attachment 5: imgdevdat.m
function imgdevdat(basefilename,imgnum)
%IMGDEVDAT Inputs base file name and number of images stored as .raw files
%in ../EDTpdv/JKimg/, automatically imports as 1024x1024x(n) matrix, finds
%the mean and standard deviation of each pixel in each image and plots
A=readimgs(basefilename,imgnum);
stdvsmean(A)
end

Attachment 6: minimgstat.m
function minimgstat(basefilename,imgnum,N)
%MINIMGSTAT Inputs base file name and number of images stored as .raw files
%in ../EDTpdv/JKimg/, automatically imports them as a 1024x1024x(n) matrix,
%crops to the upper-left NxN square, then finds the mean and standard
%deviation of each pixel over the images and plots
A=readimgs(basefilename,imgnum);
smA=A(1:N,1:N,:);   % keep only the upper-left NxN region
stdvsmean(smA)
end
Attachment 7: imgbandfind.m
function [HILT] = imgbandfind(img,minb,maxb)
%IMGBANDFIND inputs an image array and minimum and maximum value,
% then finds all values of the array within that range, then plots with
%values in range highlighted in red against a black background

img=double(img);
maxv=max(max(img));
sizm=size(img);
rows=sizm(1,1);
cols=sizm(1,2);
... 20 more lines ...
52 | Thu Jun 17 22:03:51 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 2, Getting Started

For Thursday, June 17:

Today I attended a basic laser safety training orientation, the second Introduction to LIGO lecture, a Summer Research Student Safety Orientation, and an Orientation for Non-Students living on campus (lots of mandatory meetings today). I met with Dr. Willems and Dr. Brooks in the morning and went over some background information regarding the project, then in the afternoon I got an idea of where I should progress from here from talking with Dr. Brooks. I read over the paper "Adaptive thermal compensation of test masses in advanced LIGO" and the LIGO TCS Preliminary Design document, and did some further reading in the Brooks thesis.

I'm making a little bit of progress with accessing the Hartmann lab computer with Xming but got stuck; hopefully I'll be able to sort that out in the morning and progress to where I want to be (I wasn't able to get much further than that, since I can't currently access the Hartmann computer in the lab itself due to laser authorization restrictions). I'm currently able to remotely open an X terminal on the server but wasn't able to figure out how to then log in to the Hartmann computer from it. I can do it via SSH on that terminal, of course, but I run into the same access restrictions I was getting when logging in to the Hartmann computer via SSH directly from my laptop (i.e. I can log in to the Hartmann computer just fine, and access the camera and framegrabber programs, but for the vast majority of the stuff on there, including MATLAB, I don't have permissions for some reason and just get 'access denied'). I'm sure that somebody who actually knows something about this stuff will be able to point out the problem and point me in the right direction fairly quickly (I've never used SSH or the X Window System before, which is why it's taking me quite a while, but it's a great learning experience so far at least).

Goals for tomorrow: get that all sorted out and learn how to be able to fully access the Hartmann computer remotely and run MATLAB off of it. Familiarize myself with the camera program. Set the camera into test pattern mode and use the 'take' programs to retrieve images from it. Familiarize myself with the 'take' programs a bit and the various options and settings of them and other framegrabber programs. Get MATLAB running and use fread to import the image data arrays I take with the proper data representation (uint16 for each array entry). Then, set the camera back to recording actual images, take those images from the framegrabber and save them, then import them into MATLAB. I should familiarize myself with the various settings of the camera at this stage, as well.

 

--James

51 | Thu Jun 17 07:40:07 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 1, Getting Started

 For Wednesday, June 16:

I attended the LIGO Orientation and first Introduction to LIGO lecture in the morning. In the afternoon, I ran a few errands (got keys to the office, got some Computer Use Policy Documentation done) and toured the lab. I then got Cygwin installed on my laptop along with the proper SSH packets and was successfully able to log in to and interact with the Hartmann computer in the lab through the terminal, from the office. I have started reading relevant portions of Dr. Brooks' thesis and of "Fundamentals of Interferometric Gravitational Wave Detectors" by Saulson.
50 | Wed Jun 16 11:47:11 2010 | Aidan | Misc | Hartmann sensor | Spot displacement maps - temperature sensitivity tests - PRISM

I think that the second plot is just showing PRISM and converting it to its radial components. This is due to the fact that the sign of the spot displacement on the LHS is the opposite of the sign of the spot displacement on the RHS. For spherical or cylindrical power, the sign of the spot displacement should be the same on both the RHS and the LHS.

I've attached a Mathematica PDF that illustrates this.

 


Quote:

Results of initial measurement of temperature sensitivity of Hartmann sensor

"Cold" images were taken at the following temperature:
| before | 32.3 | 45.3 | 37.0 |
| after  | 32.4 | 45.6 | 37.3 |

"Hot" Images were taken at the following temperature:

| before | 36.5 | 48.8 | 40.4 |
| after  | 36.4 | 48.8 | 40.4 |

"before" are the temperatures just before taking 5000 images, and "after" are
the temperatures just after taking the images. First column is the temperature
measured using the IR temp sensor pointed at the heat sink, the second column the
camera temperature, and the third column the sensor board temperature.

Temperature change produced by placing a "hat" over the top of the HP assembly and the top of the heatsinks.

Averaged images "cool" and "hot" were created using 200 frames (each).

Aberration parameter values are as follows:

Between cool and hot images (cool spots - hot spots)

     p: 4.504320557133363e-005
    al: 0.408248504544179
   phi: 0.444644135542724
     c: 0.001216006036395
     s: -0.002509569677737
     b: 0.054773177423349
    be: 0.794567342929695
     a: -1.030687344054648

Between cool images only

     p: 9.767143368103721e-007
    al: 0.453972584677992
   phi: -0.625590459774765
     c: 2.738206187344315e-004
     s: 1.235384158257808e-006
     b: 0.010135170457321
    be: 0.807948378729832
     a: 0.256508288049258

Between hot images only

     p: 3.352035441252169e-007
    al: -1.244075541477539
   phi: 0.275705676833192
     c: -1.810992355666772e-004
     s: 7.076678388064736e-005
     b: 0.003706221758158
    be: -0.573902879552339
     a: 0.042442307609231

Attached are two contour plots of the radial spot displacements, one between
cool and hot images, and the other between cool images only. The color
bars roughly indicate the values of maximum and minimum spot
displacements.

Possible causes:

1. anisotropy of the thermal expansion of the invar foil HP caused by the rolling

2. non-uniform clamping of the HP by the clamp plate

3. vertical thermal gradient produced by the temperature raising method

4. buckling of the HP due to slight damage (dent)

 

Attachment 1: Prism_as_radial_vector.pdf
49 | Tue Jun 15 16:30:10 2010 | Peter Veitch | Misc | Hartmann sensor | Spot displacement maps - temperature sensitivity tests

Results of initial measurement of temperature sensitivity of Hartmann sensor

"Cold" images were taken at the following temperature:
| before | 32.3 | 45.3 | 37.0 |
| after  | 32.4 | 45.6 | 37.3 |

"Hot" Images were taken at the following temperature:

| before | 36.5 | 48.8 | 40.4 |
| after  | 36.4 | 48.8 | 40.4 |

"before" are the temperatures just before taking 5000 images, and "after" are
the temperatures just after taking the images. First column is the temperature
measured using the IR temp sensor pointed at the heat sink, the second column the
camera temperature, and the third column the sensor board temperature.

Temperature change produced by placing a "hat" over the top of the HP assembly and the top of the heatsinks.

Averaged images "cool" and "hot" were created using 200 frames (each).

Aberration parameter values are as follows:

Between cool and hot images (cool spots - hot spots)

     p: 4.504320557133363e-005
    al: 0.408248504544179
   phi: 0.444644135542724
     c: 0.001216006036395
     s: -0.002509569677737
     b: 0.054773177423349
    be: 0.794567342929695
     a: -1.030687344054648

Between cool images only

     p: 9.767143368103721e-007
    al: 0.453972584677992
   phi: -0.625590459774765
     c: 2.738206187344315e-004
     s: 1.235384158257808e-006
     b: 0.010135170457321
    be: 0.807948378729832
     a: 0.256508288049258

Between hot images only

     p: 3.352035441252169e-007
    al: -1.244075541477539
   phi: 0.275705676833192
     c: -1.810992355666772e-004
     s: 7.076678388064736e-005
     b: 0.003706221758158
    be: -0.573902879552339
     a: 0.042442307609231

Attached are two contour plots of the radial spot displacements, one between
cool and hot images, and the other between cool images only. The color
bars roughly indicate the values of maximum and minimum spot
displacements.

Possible causes:

1. anisotropy of the thermal expansion of the invar foil HP caused by the rolling

2. non-uniform clamping of the HP by the clamp plate

3. vertical thermal gradient produced by the temperature raising method

4. buckling of the HP due to slight damage (dent)

Attachment 1: spot_displacements_same_temp_0611.jpg
Attachment 2: spot_displacements_diff_temp_0611.jpg
48 | Thu May 27 17:49:02 2010 | Aidan | Electronics | Hartmann sensor | Hartmann sensor cooling fins added - base piece added

 Back to Configuration 1 again - this time the fins were bolted very securely to the camera.

 7:25 PM - [about 2 hours later] - Digitizer = 39.7C, Sensor = 31.4C, Ambient = 19.0C

 

 

Attachment 1: HWS_CONFIG1-tight.jpg
47 | Thu May 27 15:42:06 2010 | Aidan | Electronics | Hartmann sensor | Hartmann sensor with just the base piece

I switched in just the base piece of the Hartmann sensor. The cooling fins are removed. I bolted the camera securely to the base plate and I bolted the plate securely to the table.

 5:00PM - (Digitizer = 41.9C, Sensor = 33.8C, Ambient = 19.3C)

 

Attachment 1: HWS_CONFIG4.jpg
46 | Thu May 27 13:18:51 2010 | Aidan | Electronics | Hartmann sensor | Removed Cooling fins from Hartmann sensor

I removed the cooling fins from the Hartmann sensor to see what steady-state temperature it reached without any passive cooling elements. I also dropped the set-point temperature for the lab to help keep it from getting too hot.

After nearly 3 hours the temperature is:

(Digitizer: 54.3C, Sensor: 46.6C, Ambient: 19.6C)

 

 

Attachment 1: HWS_CONFIG3.jpg
45 | Thu May 27 08:25:37 2010 | Aidan | Electronics | Hartmann sensor | Hartmann sensor cooling fins added - base piece removed

 8:10AM - I removed the base plate from the Hartmann sensor. I want to know what steady-state temperature the HWS achieves without the plate.

The photo below shows the current configuration.

11:22AM - (Digitizer - 52.2C, Sensor - 43.8C, Ambient - 21.8C)

Attachment 1: HWS_CONFIG2.jpg
44 | Wed May 26 14:58:04 2010 | Aidan | Misc | Hartmann sensor | Hartmann sensor cooling fins added

14:55 - Mindy stopped by with the copper heater spreaders and the cooling fins for the Hartmann sensor. We've set them all up and have turned on the camera to see what temperature above ambient it achieves.

17:10 - Temperature of the HWS with no active cooling (Digitizer = 44.1C, Sensor = 36.0C, Ambient = 21.4C)

 

Attachment 1: HWS_CONFIG1.jpg
43 | Wed May 26 06:47:02 2010 | Aidan | Laser | SLED | Switched off SLED - 6:40AM
42 | Mon May 24 19:17:32 2010 | Aidan | Laser | Hartmann sensor | Replaced Brass Plate with Invar Hartmann Plate

I just replaced the brass Hartmann plate with the Invar one. The camera was off during the process but has been turned on again. The camera is now warming up again. I've manually set the temperature in the EPICS channels by looking at the on-board temperature via the serial communications.

 I also made sure the front plate was secured tightly.

41 | Thu May 20 17:08:36 2010 | Aidan | Electronics | SLED | SLED module - and driver - LIGO D1000892 - and Hartmann sensor

Verified that the test-point for the current limit pot on the driver (Wavelength Electronics - LDTC 0520) was at 0.5V. The driver is set to the INTERNAL set point at the moment; this is about 10% below the current-limit point.

Voltage across TP7 and TP9 = 0.970V = LD Current Mon

Voltage across TP2 and TP3 = 0.017V = LD P Mon

 

--- Hartmann sensor ---

- Set the sampling rate on the CCD to 16 Hz. With the current alignment and intensity this gives a maximum intensity of around 3850 out of 4095, so the pixels are not saturated.

- centroid_image located some of the spots - see the attached image, in which the spots located by the algorithm are circled. I need to play with the threshold level and spot_radius to get this to work properly.

 

 

Attachment 1: 2010-05-20_hartmann_image_and_located_spot.jpg
40 | Thu May 20 10:44:13 2010 | Aidan | Electronics | SLED | QPhotonics 980nm source power

dV = 0.385V

Responsivity= 0.65A/W

Transimpedance = 1.5E4 V/A

therefore power = 0.385 V / (1.5E4 V/A * 0.65 A/W) = 40 uW

 

 

39 | Thu May 20 08:20:54 2010 | Aidan | Computing | Hartmann sensor | Centroiding algorithm and code to generate simulated data

 Here's a copy of an email I distributed today that describes the centroid and simulation code I wrote.

Hi Won,

I've written some code that generates an image of Gaussian spots and provides you with the coordinates of the centers used to generate those spots. There is the facility to turn on i) photo-electron shot noise, ii) random displacement of the nominal positions of the centers from a regular array and iii) 12-bit digitization to more accurately model the output from a CCD.

I've included an example routine that calls this function and then centroids those spots using a variant of your centroiding algorithm.

You should be able to use this to generate reliable simulated data to test versions of your centroiding algorithm.

Cheers,
Aidan.

Attached files: 
1. test_spot_generation_and_centroiding.m     - the example routine. Run this first
2. generate_simulated_spots.m         - the function to generate the simulated spots in an image and as a set of positions
3. centroid_image.m - the function to centroid an image
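(As a rough illustration of what the generator produces, here is a minimal sketch of a single simulated Gaussian spot with approximate shot noise and 12-bit digitization. The spot parameters are arbitrary and the Gaussian approximation to shot noise is an assumption; the full routine, with spot grids and optional position jitter, is attachment 2.)

% Minimal sketch of one simulated Hartmann spot with approximate shot noise
% and 12-bit digitization (the full generator is attachment 2).
npix  = 64;                                    % small test image
[x,y] = meshgrid(1:npix, 1:npix);
x0 = 32.3; y0 = 31.7; w = 4; peak = 3000;      % example spot parameters
ideal = peak * exp(-((x-x0).^2 + (y-y0).^2) / (2*w^2));
noisy = ideal + sqrt(ideal) .* randn(npix);    % Gaussian approximation to shot noise
img12 = max(min(round(noisy), 4095), 0);       % clamp and digitize to 12 bits
imagesc(img12); axis image; colorbar;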

Attachment 1: test_spot_generation_and_centroiding.m
% example usage of generate_simulated_spots and centroid_image

clear all
close all



%% example 1 - 
%----------------------------------------------------------
npixels = 1024;          % the number of pixels in the image
... 143 more lines ...
Attachment 2: generate_simulated_spots.m
function output = generate_simulated_spots(npixels, digitizeFLAG, ...
                          IntensityNoiseFLAG, positionNoiseFLAG)
%
% a function to generate an image of spots to centroid and to provide the original 
% locations of the spots. 
% 
% input
% -----
% npixels - pixels in output image
% digitizeFLAG:          0 - floating point array is output
... 160 more lines ...
Attachment 3: centroid_image.m
function centroids = centroid_image(image, centroids)
%
% This function centroids a supplied image. It returns a centroids structure
%
% 'centroids' structure
% --------------------
% centroids.image_background_level  - the background intensity of an image
%                                     with no illumination on it
%          .spot_radius             - the radius of a hartmann spot
%          .spot_threshold_level    - the minimum intensity of pixels used to
... 95 more lines ...
38 | Tue May 18 09:33:44 2010 | Aidan | Computing | EPICS | Added defocus and other Hartmann sensor channels to EPICS and DAQ

 I've added the following channels to the HWS softIoc in /cvs/cds/caltech/target/softIoc/HWS.db

 

 

EPICS and DAQ restart procedure

  1. Kill the existing softIoc. Use a "ps -e | grep softIoc" command to determine the process id.
  2. After editing the HWS.db file restart the softIoc with the following command:
[controls@hartmann softIoc]$  /cvs/opt/epics-3.14.10-RC2-i386/base/bin/linux-x86/softIoc -S  HWS.cmd &
[3] 11280
[controls@hartmann softIoc]$ dbLoadRecords "HWS.db"
iocInit
Starting iocInit
############################################################################
## EPICS R3.14.10- $R3-14-10-RC2$ $2008/10/10 15:01:51$
## EPICS Base built Oct 28 2009
############################################################################
iocRun: All initialization complete

        3. Edit the /cvs/cds/caltech/chans/daq/C4TCS.ini file and kill the daqd process on fb1. It should restart automatically.

Done!

 

37 | Mon May 17 19:41:13 2010 | Aidan | Computing | Frame Grabber | C code that calls MATLAB engine and centroiding algorithms

This is an amended version of simple_take.c.

 

The files below are all in the directory /opt/EDTpdv/hartmann/src

  1. simple_hartmann.c   - the C code to access the frame grabber, retrieve an image, load the MATLAB engine and pass the image to MATLAB for centroiding
  2. centroid_image.m  - the MATLAB routine that centroids the image
  3. get_defocus.m - the MATLAB function that determines the defocus in the centroids
  4. build_simple_hartmann.sh - a shell script I wrote that contains the compile and link options to build the thing correctly 
Attachment 1: simple_hartmann.c
/**
 * @file
 * An example program to show usage of EDT PCI DV library to acquire and
 * optionally save single or multiple images from devices connected to EDT
 * high speed digital video interface such as the PCI DV C-Link or PCI DV
 * FOX / RCX.
 * 
 * Provided as a starting point example for adding digital video acquisition
 * to a user application.  Includes optimization strategies that take
 * advantage of the EDT ring buffer library subroutines for pipelining image
... 521 more lines ...
Attachment 2: centroid_image.m
function centroids = centroid_image(image, centroids)
%
% This function centroids a supplied image. It returns a centroids structure
%
% 'centroids' structure
% --------------------
% centroids.image_background_level  - the background intensity of an image
%                                     with no illumination on it
%          .spot_radius             - the radius of a hartmann spot
%          .spot_threshold_level    - the minimum intensity of pixels used to
... 98 more lines ...
Attachment 3: get_defocus.m
function defocus = get_defocus(centroids)
% 
% a function to extract the defocus of the gradient field
%
% 'centroids' structure
% --------------------
% centroids.image_background_level  - the background intensity of an image
%                                     with no illumination on it
%          .spot_radius             - the radius of a hartmann spot
%          .spot_threshold_level    - the minimum intensity of pixels used to
... 56 more lines ...
Attachment 4: build_simple_hartmann.sh
#!/bin/bash


gcc -O2 -I/opt/EDTpdv -I/apps/matlab_R2008b/extern/include -I/apps/matlab_R2008b/simulink/include -DMATLAB_MEX_FILE -c -D_GNU_SOURCE -fexceptions -I/apps/matlab_R2008b/extern/include -DMX_COMPAT_32 -O -DNDEBUG simple_hartmann.c


gcc -O2 -I/opt/EDTpdv -I/apps/matlab_R2008b/extern/include -I/apps/matlab_R2008b/simulink/include -DMATLAB_MEX_FILE -c -D_GNU_SOURCE -fexceptions -I/apps/matlab_R2008b/extern/include -DMX_COMPAT_32 -O -DNDEBUG /apps/matlab_R2008b/extern/src/mexversion.c

gcc -O2 -O -I/opt/EDTpdv -o /opt/EDTpdv/hartmann/bin/simple_hartmann simple_hartmann.o mexversion.o -L/opt/EDTpdv -lpdv -lpthread -lm -ldl -Wl,-rpath-link,/apps/matlab_R2008b/bin/glnxa64 -L/apps/matlab_R2008b/bin/glnxa64 -leng -lmx -lstdc++
36 | Thu May 13 16:54:46 2010 | Aidan | Computing | Hartmann sensor | Running MATLAB programs in C on CentOS - only use R2008b for less hassle

 After much effort trying to get a MATLAB routine to compile in C I discovered the following pieces of information.

1. CentOS will not install a gcc compiler more recent than 4.1.2 with yum install. This is circa 2007. If you want a more recent compiler it must be installed manually.

2. C programs that call the MATLAB engine must be compiled and linked within MATLAB using the mex command. Every version of MATLAB after R2008b requires gcc 4.2.3.

3. Building gcc 4.2.3 takes a lot more than 1 hour of compile time. I accidentally killed the build process and gave it up as a lost cause. 

 

35 | Tue May 11 10:32:00 2010 | Aidan | Computing | Hartmann sensor | Peak detection and centroiding code - review

This looks really efficient! However, I think there's a systematic error in the calculation. I tested it on some simulated data and it had trouble getting the centroids exactly right. I need to better understand the functions that are called to get an idea of what might be the problem.

 

Quote:

 

 

Attached is the .m file of the custom function that I wrote and used to automatically detect peaks in a Hartmann image, and calculate the centroid coordinates of each of those peaks.

A simple example of its usage,  provided that myimage is a two-dimensional image array obtained from the camera, is

 

radius = 10;
peak_positions = detect_peaks_uml(myimage,radius);
no_of_peaks = length(peak_positions);
centroids_array = zeros(no_of_peaks,2);   % N-by-2 array of (x,y) centroids
for k = 1:no_of_peaks
  centroids_array(k,1) = peak_positions(k).WeightedCentroid(1);
  centroids_array(k,2) = peak_positions(k).WeightedCentroid(2);
end

 

I chose my value of radius by looking at spots in a sample image and counting the number of pixels across a peak. It may be more useful to automatically obtain a value for the radius. I may run some tests to see how different choices of radius affect the centroid calculations.

I may also need to add some error checking and/or image validating codes, but so far I have not encountered any problems. 

Please let me know if anyone needs more explanation!

Won

 

 

 

Attachment 1: test_centroid_code.m
% generate a blank image
imarr = zeros(1024, 1024);

% get the background and intensity levels
bckgrd = 700.0;
I1 = 56000.0;

% get the 2D coordinate arrays
x = 1:1024;
... 61 more lines ...
34 | Thu May 6 21:32:26 2010 | Won Kim | Computing | Hartmann sensor | Peak detection and centroiding code

 

 

Attached is the .m file of the custom function that I wrote and used to automatically detect peaks in a Hartmann image, and calculate the centroid coordinates of each of those peaks.

A simple example of its usage,  provided that myimage is a two-dimensional image array obtained from the camera, is

 

radius = 10;
peak_positions = detect_peaks_uml(myimage,radius);
no_of_peaks = length(peak_positions);
centroids_array = zeros(no_of_peaks,2);   % N-by-2 array of (x,y) centroids
for k = 1:no_of_peaks
  centroids_array(k,1) = peak_positions(k).WeightedCentroid(1);
  centroids_array(k,2) = peak_positions(k).WeightedCentroid(2);
end

 

I chose my value of radius by looking at spots in a sample image and counting the number of pixels across a peak. It may be more useful to automatically obtain a value for the radius. I may run some tests to see how different choices of radius affect the centroid calculations.

I may also need to add some error checking and/or image validating codes, but so far I have not encountered any problems. 

Please let me know if anyone needs more explanation!

Won

Attachment 1: detect_peaks_uml.m
function ctr = detect_peaks_uml(image,radius)
% Usage example:
% positions = detect_peaks_uml(myimage,10);
% 
% total number of peaks detected: length(positions.WeightedCentroid)
% access the coordinates of the nth peak:
% positions(n).WeightedCentroid(1), positions(n).WeightedCentroid(2)

weighted_image = image .^ 2;
background = imopen(weighted_image,strel('disk',radius));
... 11 more lines ...
33 | Thu May 6 12:32:11 2010 | Aidan | Computing | Hartmann sensor | dalsa_to_epics Python script crashed ...

Here's the error:

 

 Traceback (most recent call last):

  File "./dalsa_to_epics.py", line 81, in ?
    stdout = subprocess.PIPE)    
  File "/usr/lib64/python2.4/subprocess.py", line 550, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.4/subprocess.py", line 916, in _execute_child
    errpipe_read, errpipe_write = os.pipe()
OSError: [Errno 24] Too many open files

[2]+  Exit 1                  ./dalsa_to_epics.py  (wd: ~/scripts)
(wd now: /cvs/users/abrooks/advLigo/HWS)
 
32 | Thu May 6 10:34:38 2010 | Aidan | Computing | Hartmann sensor | EPICS and MEDM screen for Hartmann sensor - part 2

I added the camera parameters to EPICS and the MEDM screen. These are available as channels now in EPICS and eventually there will be a python script that writes the EPICS value to those channels, but right now it is just a python script that reads the values off the Dalsa camera.

I updated the channels in /cvs/cds/caltech/chans/daq/C4TCS.ini so that these are saved to the daq and I also restarted the daq daemon.

The python script that gets the camera parameters is here: scripts/Dalsa1M60/GetCameraParameters.py and the script that writes the parameters to the EPICS channels is here scripts/dalsa_to_epics.py.

These are attached as is C4TCS.ini and HWS.db which defines the new channels.

Attachment 1: dalsa_to_epics.py
#!/usr/bin/python

# Import the Dalsa1M60 package
import Dalsa1M60, subprocess

# define the serial command location
serial_cmd_location = '/opt/EDTpdv/serial_cmd'

# start a loop that continually gets the temperatures
getTemperatures = 1
... 75 more lines ...
Attachment 2: GetCameraParameters.py
#!/usr/bin/python

# NAME
#       GetCameraParameters - a module for getting the Dalsa 1M60 parameters
#
# PACKAGE
#       Part of the Dalsa1M60 python package
#
# SYNOPSIS
#       GetCameraParameters( serial_cmd_location  )
... 412 more lines ...
Attachment 3: HWS.db
record(ai,"C4:TCS-HWS_TEMP_DIGITIZER")
record(ai,"C4:TCS-HWS_TEMP_SENSOR")
record(ai, "C4:TCS-HWS_TAP1GAIN")
record(ai, "C4:TCS-HWS_TAP2GAIN")
record(ai, "C4:TCS-HWS_PRETRIGGER")
record(ai, "C4:TCS-HWS_DATA_MODE")
record(ai, "C4:TCS-HWS_BINNING_MODE")
record(ai, "C4:TCS-HWS_GAIN_MODE")
record(ai, "C4:TCS-HWS_OUTPUT_CONFIG")
record(ai, "C4:TCS-HWS_EXPOSURE_MODE")
... 27 more lines ...
Attachment 4: C4TCS.ini
[default]
dcuid=4
datarate=16
gain=1.0
acquire=1
ifoid=0
datatype=4
slope=1.0
offset=0
units=NONE
... 14 more lines ...
31 | Wed May 5 18:45:51 2010 | Aidan | Computing | Hartmann sensor | Python code to interface the Dalsa1M60 and export the temperature to EPICS

Python script

I wrote a Python script, ~/scripts/dalsa_to_epics.py that reads the temperature off the camera using serial_cmd vt and then it writes this to the EPICS channels using ezcawrite. See attached. It is now running continuously in the background as dalsa_to_epics.

Dalsa1M60 baud rate

Also I accessed the menu of the 1M60 and changed the baud rate to 115200 using sbr 115200. Then I edited the dalsa_1m60.cfg file to set the baud rate to 115200 in that file. Finally, I changed the settings on the camera so that it will boot with the new baud rate when it is turned off and on again - this was with wus in the camera menu.

All the files are attached.

~/scripts/dalsa_to_epics.py

~/scripts/Dalsa1M60/VerifyTemperature.py

/opt/EDTpdv/camera_config/dalsa_1m60.cfg

Attachment 1: dalsa_to_epics.py
#!/usr/bin/python

# Import the Dalsa1M60 packzge
import Dalsa1M60, subprocess

# define the serial command location
serial_cmd_location = '/opt/EDTpdv/serial_cmd'

# start a loop that continually gets the temperatures
getTemperatures = 1
... 18 more lines ...
Attachment 2: VerifyTemperature.py
#!/usr/bin/python

# part of the Dalsa1M60 package
# a module for verifying the temperature of the Dalsa 1M60
#
# The serial command 'vt' is sent to the camera. The camera responds as follows
#    > vt
#    Camera Temperature on Digitzer Board: 47.2 Celsius
#    Camera Temperature on Sensor Board: 39.4 Celsius
... 65 more lines ...
Attachment 3: dalsa_1m60.cfg
#
# CAMERA_MODEL "Dalsa 1m60 config file (freerun)"
#

# camera name/description
#
camera_class:                  "Dalsa"
camera_model:                  "1M60"
camera_info:                   "12 bit dual channel camera link"

... 51 more lines ...
30 | Wed May 5 09:04:01 2010 | Aidan | Computing | Hartmann sensor | Added /home/controls/scripts/modules directory to PYTHONPATH on hartmann

 I added the following line to ~/.bashrc

 

export PYTHONPATH=/home/controls/scripts/modules:/usr/local/lib/python

This adds the above directory to PYTHONPATH and allows the modules in that directory to be accessed from anywhere.

 

29 | Tue May 4 13:35:13 2010 | Aidan | Computing | Hartmann sensor | Hartmann temperature channels in frame builder

 I've added the digitizer and sensor board temperature readings from the HWS to the frames. This was done in the following way

1. Create a new file /cvs/cds/caltech/chans/daq/C4TCS.ini - with the channels in it - see below

2.  open /cvs/cds/caltech/target/fb1/master

3. add a line that includes the C4TCS.ini file when the frame builder starts

4. restart frame-builder by killing the daq daemon - kill <process id for daqd> (this is the only thing that needs to be entered as it will automatically restart)

 

C4TCS.ini

[default]
dcuid=4
datarate=16
gain=1.0
acquire=1
ifoid=0
datatype=4
slope=1.0
offset=0
units=NONE

[C4:TCS-HWS_TEMP_SENSOR]
[C4:TCS-HWS_TEMP_DIGITIZER]


 

 

 

28 | Tue May 4 10:30:07 2010 | Aidan | Computing | Hartmann sensor | EPICS and MEDM screen for Hartmann sensor

I added the Dalsa 1M60 temperature measurements to EPICS. The break down is as follows:

|                            | Digitizer Board Temperature                                    | Sensor Board Temperature                                       |
| Dalsa 1M60 menu command    | vt                                                             | vt                                                             |
| Response from 1M60         | Camera Temperature on Digitizer Board: 47.2 Celsius            | Camera Temperature on Sensor Board: 39.4 Celsius               |
| Menu accessed via          | MATLAB: unix('/opt/EDTpdv/serial_cmd vt')                      | MATLAB: unix('/opt/EDTpdv/serial_cmd vt')                      |
| Temperature stored in      | MATLAB: local variable DBtemp (from the numerical sub-string)  | MATLAB: local variable SBtemp (from the numerical sub-string)  |
| EPICS channel written via  | MATLAB: unix(['ezcawrite {channel-name} ' num2str(DBtemp)])    | MATLAB: unix(['ezcawrite {channel-name} ' num2str(SBtemp)])    |
| EPICS channel defined in   | HWS.db                                                         | HWS.db                                                         |
| Channel name               | C4:TCS-HWS_TEMP_DIGITIZER                                      | C4:TCS-HWS_TEMP_SENSOR                                         |

I added a softIoc called HWS to /cvs/cds/caltech/target/softIoc. It added the following channels: C4:TCS-HWS_TEMP_DIGITIZER and C4:TCS-HWS_TEMP_SENSOR. The ioc (input/output controller) is run with the following command:

 

/cvs/opt/epics-3.14.10-RC2-i386/base/bin/linux-x86/softIoc HWS.cmd

although this doesn't execute it in the background. The MATLAB routine /home/controls/matlab_scripts/read_dalsa_temperature_write_to_epics.m is run continuously to access the serial port, get the temperature data and write it to the EPICS channels. These are then available to read in the Hartmann sensor MEDM screen, which is shown below. Also shown is a StripTool plot monitoring the temperatures; I had just turned off a fan that was cooling the 1M60, which is why the temperature is rising.
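(For reference, a minimal MATLAB sketch of one read-then-write iteration described in the table above is below. The regular-expression parsing of the serial response is an assumption; the actual loop is read_dalsa_temperature_write_to_epics.m, attachment 5.)

% One iteration of reading the 1M60 temperatures and writing them to EPICS.
[s, r] = unix('/opt/EDTpdv/serial_cmd vt');                 % query the camera
vals = regexp(r, '(\d+\.\d+)\s*Celsius', 'tokens');         % pull out the numbers (assumed parsing)
DBtemp = str2double(vals{1}{1});                            % digitizer board temperature
SBtemp = str2double(vals{2}{1});                            % sensor board temperature
unix(['ezcawrite C4:TCS-HWS_TEMP_DIGITIZER ' num2str(DBtemp)]);
unix(['ezcawrite C4:TCS-HWS_TEMP_SENSOR ' num2str(SBtemp)]);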
 
 
 

 

 

Attachment 1: Screenshot-C4HWS_medm_21.adl_(edited).png
Attachment 2: Screenshot-StripTool_Graph_Window.png
Attachment 3: HWS.db
record(ai,"C4:TCS-HWS_TEMP_DIGITIZER")


record(ai,"C4:TCS-HWS_TEMP_SENSOR")
Attachment 4: HWS.cmd
dbLoadRecords "HWS.db"
iocInit
Attachment 5: read_dalsa_temperature_write_to_epics.m
% get the temperature off the 1M60
% written by Aidan Brooks. 22nd Apr 2010

% define aliases
ezcawrite = '/cvs/opt/apps/Linux/gds/bin/ezcawrite';


ii = 1;
while ii == 1
    [s, r] = unix('/opt/EDTpdv/serial_cmd vt');
... 54 more lines ...
27 | Tue May 4 09:18:15 2010 | Aidan | Computing | Hartmann sensor | Added aliases and icons for EPICS commands and dataviewer etc. to hartmann

I updated the .bashrc file in controls@hartmann to include aliases for the ezca EPICS commands and a few others. Details shown below:

Also added launchers to the top panel for MATLAB, sitemap, dataviewer and StripTool. The icons for the launchers are located in:

/cvs/users/ops/ligo-launchers/icons

Changes to .bashrc

alias dv="/cvs/opt/apps/Linux/dataviewer/dataviewer"
alias StripTool="/cvs/opt/apps/Linux/medm/bin/StripTool"
alias medm="/cvs/opt/apps/Linux/medm/bin/medm"
alias sitemap='medm -x /cvs/cds/caltech/medm/c2/atf/C2ATF_MASTER.adl'

# EPICS aliases
alias ezcademod="/cvs/opt/apps/Linux/gds/bin/ezcademod"
alias ezcaread="/cvs/opt/apps/Linux/gds/bin/ezcaread"
alias ezcaservo="/cvs/opt/apps/Linux/gds/bin/ezcaservo"
alias ezcastep="/cvs/opt/apps/Linux/gds/bin/ezcastep"
alias ezcaswitch="/cvs/opt/apps/Linux/gds/bin/ezcaswitch"
alias ezcawrite="/cvs/opt/apps/Linux/gds/bin/ezcawrite"
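After re-sourcing the .bashrc, a quick (hypothetical) check that the aliases work, using one of the channels from the EPICS entry above:

source ~/.bashrc
ezcaread C4:TCS-HWS_TEMP_SENSOR    # should return the sensor-board temperature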

  26   Mon May 3 17:43:48 2010 AidanComputingFrame GrabberSuccessful image capture with EDT frame grabber

I noticed that when I ran /opt/EDTpdv/camconfig and selected camera 331, which appeared to be closest to the Dalsa Pantera 1M60 camera, the software loaded the configuration file pantera11m4fr.cfg.

I tried to locate which entry in the camconfig list corresponded to the dalsa_1m60.cfg configuration file, but none of them seemed to. I couldn't select any entry and get it to report that it was using the 1m60 config file.

Next I noticed that there were 659 configuration files in the /opt/EDTpdv/camera_config directory but only 460 configuration options in camconfig. It seemed as though about a third of the config files were somehow not formatted correctly, possibly including the 1M60 config file.

By editing the pantera11m4fr.cfg I verified that the name of the camera, as it appears in the camconfig program, is the second line in the configuration file. For that file it was:

# CAMERA_MODEL 	"Dalsa Pantera 12 bit single channel camera link"
where the first line is just a single hash. The dalsa_1m60.cfg file did not have a name formatted in the same way as above: it was originally as shown below:

# Dalsa 1m60 config file (freerun)
so I changed the name in that configuration file to the following, and it was suddenly available in the list when ./camconfig was run:

# CAMERA_MODEL "Dalsa 1m60 config file (freerun)"
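For reference, the same edit as a one-liner, plus a quick way to spot other config files whose second line lacks a CAMERA_MODEL tag (a sketch only - the camera_config path is taken from above):

cd /opt/EDTpdv/camera_config
sed -i '2s|.*|# CAMERA_MODEL "Dalsa 1m60 config file (freerun)"|' dalsa_1m60.cfg
for f in *.cfg; do sed -n '2p' "$f" | grep -q CAMERA_MODEL || echo "$f"; done    # files camconfig will likely skip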

I selected that camera (number 53 in the list). Once this was done I ran pdv_flshow/pdvshow again and the image displayed from the camera appeared to be correctly demodulated.

Actually, the very first time I ran pdvshow the image was demodulated correctly but it appeared that the origin was offset and then the image wrapped around a little at the edges. However, every successive time I've run pdvshow since then I've had a perfectly demodulated image.

I ran some test patterns by changing the video mode using the serial communications menu in the camera. I also illuminated the Hartmann sensor with a torch/flashlight and got some spot patterns - see attached images.

Also, I've attached the dalsa_1m60.cfg file.

 

 

Attachment 1: 20100503_dalsa1m60_configuration_notes.txt
Configuring HWS to get image in CentOS
----------

9:34AM - Dalsa 1m60 turned on

----
$ /opt/EDTpdv
$ ./serial_cmd
%%this starts the serial communications device in the EDT FG but it isn't configured.

... 123 more lines ...
Attachment 2: 2010-05-03_dalsa1m60_image_test_pattern_and_spots.tif
2010-05-03_dalsa1m60_image_test_pattern_and_spots.tif
Attachment 3: 2010-05-03_dalsa1m60_image_test_pattern_right_side.tif
2010-05-03_dalsa1m60_image_test_pattern_right_side.tif
Attachment 4: dalsa_1m60.cfg
#
# CAMERA_MODEL "Dalsa 1m60 config file (freerun)"
#

# camera name/description
#
camera_class:                  "Dalsa"
camera_model:                  "1M60"
camera_info:                   "12 bit dual channel camera link"

... 39 more lines ...
  25   Mon May 3 17:42:20 2010 AidanComputingEPICSEPICS install by Alex

Alex Ivanov came in on Friday and demonstrated his EPICS kung-fu. His EPICS kung-fu is strong.

We fixed the IP address of the Hartmann machine, renamed it hartmann, and mounted the cvs drives from the frame builder, including the EPICS base from that machine. In principle, with a new softIoc, this should have been enough to run EPICS on the hartmann machine. However, while the softIoc would start, it wouldn't broadcast any channels. Eventually we figured out that this was because the Windows virtualization was adding another IP address to the hartmann machine (revealed with /sbin/ifconfig). Once we removed the virtualization system, EPICS broadcast much better.
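A quick sketch of the diagnosis, plus an optional workaround if the extra interface can't be removed (the broadcast address below is a placeholder, not the lab's real one):

/sbin/ifconfig                           # look for an unexpected virtual interface (e.g. virbr0)
export EPICS_CA_AUTO_ADDR_LIST=NO        # optional: stop channel access from using every interface
export EPICS_CA_ADDR_LIST="10.0.1.255"   # placeholder broadcast address for the real subnet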

The minutiae of the install are shown in the history files for the controls and root users - attached.

 

Attachment 1: history.txt
    1  cd
    2  mkdir -p rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
    3  echo '%_topdir %(echo $HOME)/rpmbuild' > .rpmacros
    4  ls
    5  cd rpmbuild/
    6  rpm -i http://mirror.centos.org/centos/5/updates/SRPMS/kernel-2.6.18.15.1.el5.src.rpm 2
    7  cd ..
    8  rpm -i http://mirror.centos.org/centos/5/updates/SRPMS/kernel-2.6.18.15.1.el5.src.rpm 2>&1 | grep -v mockb
    9  rpm -i http://mirror.centos.org/centos/5/updates/SRPMS/kernel-2.6.18-164.15.1.e15.src.rpm 2>&1 | grep -v mockb
   10  rpm -i http://mirror.centos.org/centos/5/updates/SRPMS/kernel-2.6.18-164.15.1.e15.src.rpm 2>&1 | grep -v mockb
... 183 more lines ...
Attachment 2: history_root.txt
    1  yum
    2  yum install gcc
    3  yum install make
    4  yum install tk
    5  yum install tcl
    6  yum install mm
    7  yum install kernel
    8  yum install source
    9  yum install include
   10  yum install kernel-source
... 797 more lines ...
  24   Thu Apr 22 08:22:18 2010 AidanComputingFrame Grabberfrom the manual install.pdf

Quote:

 

Regarding the installation of EDT software, I overlooked a note from the install.pdf file. The gist of it is that if the scripts do not run, then remount the CD-ROM by typing the following:

mount /mnt/cdrom -o remount,exec

which will then allow the scripts to be run. The directory /mnt/cdrom should be changed if the cdrom is mounted somewhere else. (The note can be found on page 1 of install.pdf.)

Unfortunately I don't have Linux installed at the moment so I cannot test this. My computer was reinstalled with Windows XP, the previous CentOS system being wiped out. However, if this works, then there is probably no need to copy the files to the hard drive.

 

I saw this and tried it when I was installing, but I had more flexibility when I copied the files directly to the hard drive.

 

  23   Thu Apr 22 08:20:51 2010 AidanComputingHartmann sensorInstalled MATLAB and Windows XP Virtualization on Hartmann machine

I installed a Windows XP virtualization on the Hartmann machine. It can be accessed from the desktop, or by running virt-manager at the command line. Once the virtualization manager starts the virtualization of Windows needs to be started. It runs quite slowly.

I also installed MATLAB on this machine in /apps/. This was intended to be /apps/MATLAB/ but apparently the install program doesn't add a top directory called MATLAB as you might expect. I had to run yum install libXp because it was complaining that "/apps/bin/glnxa64/MATLAB: error while loading shared libraries: libXp.so.6: cannot open shared object file: No such file or directory".

  22   Thu Apr 22 01:48:33 2010 Won KimComputingFrame Grabberfrom the manual install.pdf

 

Regarding the installation of EDT software, I overlooked a note from the install.pdf file. The gist of it is that if the scripts do not run, then remount the CD-ROM by typing the following:

mount /mnt/cdrom -o remount,exec

which will then allow the scripts to be run. The directory /mnt/cdrom should be changed if the cdrom is mounted somewhere else. (The note can be found on page 1 of install.pdf.)

Unfortunately I don't have Linux installed at the moment so I cannot test this. My computer was reinstalled with Windows XP, the previous CentOS system being wiped out. However, if this works, then there is probably no need to copy the files to the hard drive.

  21   Wed Apr 21 06:49:51 2010 AidanComputingFrame GrabberInstalling CentOS 5.3 and the EDT frame-grabber - Part 1

Yesterday, I installed CentOS 5.3 on the Gateway GT5482 machine that housed the EDT frame-grabber.

  1. I installed CentOS 5.3 with all the default options
  2. As recommended by the README.lnx_pkg_reqs, I tried and failed to install the "Development Tools", "Development Libraries" and the "X Software Development" using the Add/Remove Software.
  3. I copied the entire install CD to ~/fgdriver on the hard disk.
  4. Installed the following packages at the command line

> yum install gcc

> yum install make

> yum install tk

> yum install kernel

 

I tried to run ~/fgdriver/linux.go at this point to install the EDT driver, but the installation failed about halfway through with the message "problem making the driver module". An investigation revealed that this was due to the failure of ~/fgdriver/linux/module/makefile. I tried running that makefile separately to build the driver module, and it crashed with the message: "Can't find /lib/modules/2.6.18-128.el5/source/include/linux/mm.h". I concluded that the kernel source code wasn't installed.

  • Added "Development Libraries" with Add/Remove Software
  • Ran the following command lines

> yum install kernel-devel

> yum install kernel-xen-devel

 And then I followed the instructions at the link: http://wiki.centos.org/HowTos/I_need_the_Kernel_Source

from: > yum install rpm-build redhat-rpm-config unifdef

 to:  > rpm -i http://mirror.centos.org/centos/5/updates/SRPMS/kernel-2.6.18-164.15.1.el5.src.rpm 2>&1 | grep -v mockb

and at the latter point the rpm build complained that it couldn't find the file kernel-2.6.18-164.15.1.el5.src.rpm.

However, some combination of the above must have worked. I rebooted the computer and logged in again as root. At this point the install script ~/fgdriver/linux.go ran from start to finish without complaining. A quick test of the resulting /opt/EDTpdv/camconfig and then /opt/EDTpdv/serial_cmd showed that I could access the Dalsa 1M60 camera through the frame grabber.
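The quick test looked roughly like this (the vt temperature query is just one example serial command, taken from other entries in this log):

cd /opt/EDTpdv
./camconfig         # select the Dalsa 1M60 configuration
./serial_cmd vt     # query the camera temperature over the serial link to confirm access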

 

 

 

 

 

 

  20   Tue Apr 20 18:05:24 2010 AidanComputingHartmann sensorImages off the Dalsa Camera in CentOS

 I installed CentOS on the machine with the EDT frame-grabber. I then installed the frame-grabber software from the CD.

In the /opt/EDTpdv/ directory the camconfig program was run and I entered "331" to start the frame-grabber and run with the Dalsa 1M60 settings ... this was necessary to get the frame grabber running, but didn't seem to force pdvshow, installed at a later point, to use this configuration file. At this point I could access the camera menu with the serial_cmd program.

 

After some effort, which will be detailed shortly, I managed to finally get the pdv_show GUI program compiled and installed. I found that trying to run that program with the dalsa_1m60.cfg configuration file resulted in a segmentation fault.

However, when I ran it with the default Dalsa configuration file, pantera11m4fr.cfg, and selected "Continuous Exposure", I got a stream of illuminated pixels on the screen. It was clear that the display was presenting the pixels coming back from the camera in the wrong way (for instance, trying to load a 1024x1024 image into a 1440x900 array). However, by changing the frame rate on the camera to 20Hz and waving my hand around in front of the camera, I was able to modulate the intensity of the hash of pixels being displayed. This means that the frame-grabber is successfully getting data - it just isn't interpreting it correctly yet.

Here are a couple of images from pdv_show (hit Alt+PrtScrn to get a screenshot of the active window):

 1. Screenshot-PCI_DV_Display.png - the image on the computer with the camera running unobscured

2. Screenshot-PCI_DV_Display-1.png - the image on the computer with me covering the camera with my hand.

3. -opt-EDTpdv.png - the camera parameters at the time of this test (running serial_cmd)

 

Attachment 1: Screenshot-PCI_DV_Display.png
Screenshot-PCI_DV_Display.png
Attachment 2: Screenshot-PCI_DV_Display-1.png
Screenshot-PCI_DV_Display-1.png
Attachment 3: -opt-EDTpdv.png
-opt-EDTpdv.png
  19   Thu Apr 15 01:47:47 2010 Won KimComputingHartmann sensorNotes on installing EDT PCIe4 DV frame grabber
* EDT PCIe4 DV frame grabber: installation notes for a Linux system

(Note)

The main issue I encountered was that most of the shell scripts did
not run simply by entering them. It's a bit strange, because if you
do ls -al to view the file listing they appear to be executable. So
it's possible that others won't encounter the same kind of problems
as I did.

However, if one executes the command "./linux.go", for example, and
receives the message saying

 bash: ./linux.go: /bin/sh: bad interpreter: Permission denied

then one may follow the steps I took as below.


1. Make a folder to put the content of CD, for example:

     mkdir ~/fgdriver

2. Copy the content of the CD-ROM to the folder.

3. Go to the folder.

     cd ~/fgdriver

4. Check that the following script files are executable, and if not, make
   them so (using "chmod a+x filename"):

     linux.go

     ~/fgdriver/linux/EDTpdv/installpdv (this one should already be 
     executable)

     ~/fgdriver/linux/EDTpdv/pdv/setup.sh      

5. Run ./linux.go and choose DV by clicking it.

(Note)

I am assuming that the programming language Tcl is already installed
in the machine. CentOS 5.4 that I have installed came with Tcl. If Tcl
is not installed, I think that linux.go will run cli_startmenu.sh
instead (located in the same directory as linux.go). So make sure
cli_startmenu.sh is executable (see step 4).

6. Choose default installation directory and start installation

(Note) 

In my first attempt to install the files, the installation message
window hung after displaying many lines of "........". That was
because the file setup.sh was not made executable (see step 4). So I
made setup.sh executable and ran linux.go again; then I could see
further messages flowing through (basically compiling C source
files). I'm not sure whether others will encounter exactly the same
problem though.

7. After the installation completes, go to the /opt/EDTpdv folder.

     cd /opt/EDTpdv

8. Final Step: Make edt_load and edt_unload executable. (See step 4)

(Note)

Most of the other executables we need for running the frame
grabber/camera should already be executable at this point; but somehow
in my installation the above two files were not made executable. I
again do not know whether others will experience the same
problem. Since there are lots of executables generated when
installation completes, I advise that, whenever a certain command does
not run, one should check if that command file is executable or not.
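Putting steps 4 and 8 together, the chmod commands look something like
this (paths as given above):

chmod a+x ~/fgdriver/linux.go
chmod a+x ~/fgdriver/linux/EDTpdv/installpdv
chmod a+x ~/fgdriver/linux/EDTpdv/pdv/setup.sh
chmod a+x /opt/EDTpdv/edt_load /opt/EDTpdv/edt_unload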

----

Please let me know if you find any parts of the above confusing. I will
do my best to clarify.
  18   Mon Apr 12 17:25:01 2010 AidanElectronicsHartmann sensorFiber-Camera Link demonstration

 I installed the EDT PCIe4 DV C-Link frame grabber in a spare Windows XP PC and connected the Dalsa 1M60 camera directly to it via the CameraLink cable. In this configuration I was able to access the menu system in the camera using the supplied serial_cmd.exe routine.

PC --> Frame-Grabber --> Camera-Link Cable --> Dalsa 1M60: works OK

Next, I attached the RCX C-Link: Fiber to Camera Link converters to either end of a 300' fiber, plugged them into the PC and the Dalsa 1M60 and then supplied them with 5V of power. Once again, I was able to access the on-board menu system in the camera (as the attached screen-capture shows). I also did a quick-test using the in-built video display program and verified that I could get an image from the camera - by waving around my hand in front of the CCD I was able to modulate the light in the image on the computer. This, therefore, demonstrates that the camera can be easily accessed and run at a distance of at least 300' via optical fiber.

 

PC --> Frame-Grabber --> RCX C-Link --> 300' optical fiber --> RCX C-Link --> Dalsa 1M60: works OK

The attached images:

hartmann_sensor.JPG: a screencap of the Dalsa 1M60 on-board menu system, captured with the C-Link to fiber connector running

Fiber_Camera_Link_1.jpg: A RCX C-Link and one end of the 300' fiber connected to the Dalsa 1M60

 

Fiber_Camera_Link_3.jpg: A RCX C-Link and the other end of the 300' fiber connected to the PC

Attachment 1: hartmann_sensor.JPG
hartmann_sensor.JPG
Attachment 2: Fiber_Camera_Link_1.jpg
Fiber_Camera_Link_1.jpg
Attachment 3: Fiber_Camera_Link_3.jpg
Fiber_Camera_Link_3.jpg
  17   Mon Apr 12 08:55:37 2010 AidanComputingHartmann sensorEDT frame grabber is here

 The EDT PCIe4 DV C-Link frame grabber arrived this morning. There is a CD of drivers and software with it that I'll back up to the wiki or 40m svn sometime soon.

  16   Fri Feb 12 21:00:06 2010 AidanElectronicsRing HeaterRing heater step function response - time series

Hideously slow internet at the airport is making me write a brief entry. This is the time series of the silver Watlow heater's radiative response to a step function.

Also, United Airlines are a bit cheap ....

Attachment 1: silver_watlow_heater_step_function_response_2010-02-12.pdf
silver_watlow_heater_step_function_response_2010-02-12.pdf
  15   Fri Feb 12 11:39:28 2010 AidanElectronicsRing HeaterRing heater transfer function

I applied a step function to the silver WATLOW heater and measured the response with the photodiode. The power spectrum of the derivative of the PD response is attached. The voltage isn't calibrated, but that's okay because right now we're just interested in the shape of the transfer function. It looks like a single pole around 850uHz. The noise floor is too great above 4 or 5 mHz to say anything about the transfer function.

 

 

Attachment 1: watlow_heater_transfer_fn.jpg
watlow_heater_transfer_fn.jpg
  14   Thu Feb 11 21:46:23 2010 AidanElectronicsRing HeaterRing heater time constant measurement - start time

After leaving the ring heater off for several hours I turned on a 40V, 0.2A supply at a gps time of 949 988 700

The channel recording the PD response is C2:ATF-TCS_PD_HGCDTE_OUT.

However, there is a delay between the time at which something is supposed to be recorded and the time at which it is recorded. I looked at the GPS clock and it read that time when I started the heater voltage. If you play the channel back in dataviewer you see the temperature start to increase around 80s BEFORE the heater current was switched on. This needs to be calibrated away!!!

  13   Thu Feb 11 18:04:08 2010 AidanLaserRing HeaterRing heater time constant

I've been looking to see what the time constant of the ring heater is. The attached plot shows the voltage measured by the photodiode in response to the heater turning on and off with a period of 30 minutes.

The time constant looks to be on the order of 600s.

  12   Mon Feb 8 17:44:38 2010 AidanElectronicsPre-amplifierreplace Pot with fixed Resistor

Quote:

 

Preamp for Bull's Eye detector

It was felt that the pot used at the input stage to remove the offset added noise. To test this, the pot was replaced with a fixed resistor and the offset was removed at the second stage. Noise was measured after the first stage and at the monitor point, first with the pot and then with the pot replaced with a resistor.

First stage gain = 1 + 500/10; test point 1 gain = 51
Second stage gain = 10K/1K; test point 2 gain = 510

1K Pot (R19) present, Chan #1:

                             dBVrms/Hz                    nV/Hz             Input-referred noise (nV/Hz)
                        200Hz   100Hz    50Hz      200Hz   100Hz    50Hz      200Hz   100Hz    50Hz
Test Point #1 (g=51)   -141.1  -140.0  -136.8       88.1   100.0   144.5        1.7     2.0     2.8
Test Point #2 (g=510)  -119.4  -120.4  -118.4     1071.5   955.0  1202.3        2.1     1.9     2.4

Pot replaced with fixed resistor R4, Chan #1:

                             dBVrms/Hz                    nV/Hz             Input-referred noise (nV/Hz)
                        200Hz   100Hz    50Hz      200Hz   100Hz    50Hz      200Hz   100Hz    50Hz
Test Point #1 (g=50)   -142.7  -142.7  -141.9       73.7    73.3    80.8        1.4     1.4     1.6
Test Point #2 (g=500)  -122.0  -121.1  -120.7      794.3   881.0   922.6        1.6     1.7     1.8

When the pot was replaced with R4, the offset was removed with the pot at the second gain stage.
R4 was not a thin-film metal resistor.

 

Just a note: this board was for the QPD not the Bull's eye detector.

 

  11   Mon Feb 8 10:45:50 2010 Steve O'ConnorElectronicsPre-amplifierreplace Pot with fixed Resistor

 

Preamp for Bull's Eye detector

It was felt that the pot used at the input stage to remove the offset added noise. To test this, the pot was replaced with a fixed resistor and the offset was removed at the second stage. Noise was measured after the first stage and at the monitor point, first with the pot and then with the pot replaced with a resistor.

First stage gain = 1 + 500/10; test point 1 gain = 51
Second stage gain = 10K/1K; test point 2 gain = 510

1K Pot (R19) present, Chan #1:

                             dBVrms/Hz                    nV/Hz             Input-referred noise (nV/Hz)
                        200Hz   100Hz    50Hz      200Hz   100Hz    50Hz      200Hz   100Hz    50Hz
Test Point #1 (g=51)   -141.1  -140.0  -136.8       88.1   100.0   144.5        1.7     2.0     2.8
Test Point #2 (g=510)  -119.4  -120.4  -118.4     1071.5   955.0  1202.3        2.1     1.9     2.4

Pot replaced with fixed resistor R4, Chan #1:

                             dBVrms/Hz                    nV/Hz             Input-referred noise (nV/Hz)
                        200Hz   100Hz    50Hz      200Hz   100Hz    50Hz      200Hz   100Hz    50Hz
Test Point #1 (g=50)   -142.7  -142.7  -141.9       73.7    73.3    80.8        1.4     1.4     1.6
Test Point #2 (g=500)  -122.0  -121.1  -120.7      794.3   881.0   922.6        1.6     1.7     1.8

When the pot was replaced with R4, the offset was removed with the pot at the second gain stage.
R4 was not a thin-film metal resistor.
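As a quick cross-check of the columns (taking the 200Hz Test Point #1 values with the pot in place), converting the dB value to a linear voltage and dividing by the first-stage gain reproduces the other two entries:

\[ 10^{-141.1/20}\,\mathrm{V} \approx 88.1\,\mathrm{nV}, \qquad 88.1\,\mathrm{nV} / 51 \approx 1.7\,\mathrm{nV} \]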
  10   Fri Feb 5 12:42:19 2010 SteveElectronicsPre-amplifierChanges to board

Test entry

  9   Thu Feb 4 19:45:56 2010 AidanMiscRing HeaterRing heater transfer function - increasing collection area

I mounted the thinner Aluminium Watlow heater inside a 14" long, 1" inner diameter cylinder. The inner surface was lined with Aluminium foil to provide a very low emissivity surface and scatter a lot of radiation out of the end. ZEMAX simulations show this could increase the flux on a PD by 60-100x. 

There was 40V across the heater and around 0.21A being drawn. The #9005 HgCdTe photo-detector was placed at one end of the cylinder to measure the far-IR (bear in mind this is a 1mm x 1mm detector in an open aperture of approximately 490 mm^2). The measured voltage difference between OFF and the steady-state ON level, after a 5000x gain stage, was around 270mV. This corresponds to 0.054mV at the photo-diode. Using the responsivity of the PD of ~0.05V/W, this corresponds to about 1mW incident on the PD.
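For reference, the arithmetic behind that estimate, using the numbers quoted above:

\[ V_{\mathrm{PD}} = \frac{270\,\mathrm{mV}}{5000} = 0.054\,\mathrm{mV}, \qquad P_{\mathrm{PD}} = \frac{0.054\,\mathrm{mV}}{0.05\,\mathrm{V/W}} \approx 1.1\,\mathrm{mW} \]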

 

Attachment 1: low-emissivity-tube.jpg
low-emissivity-tube.jpg
  8   Thu Feb 4 15:26:37 2010 AidanElectronicsRing HeaterRing heater transfer function measurement 240mHz-5Hz

Quote:

I've been trying to measure the ring heater transfer function (current to emitted power) by sweeping the supply voltage and measuring the emitted power with a photodetector positioned right next to the ring heater.

Last night the voltage was swept with a 1000mV setting on the SR785, which was fed into the Voltage Control of the Kepco Bipolar Operational Power Supply/Amplifier, biased around 10V.

The results are very, very strange. The magnitude of the transfer function decreases at lower frequency. I'll post the data just as soon as I can (ASCII dumps 13 and 14 on the disk from the SR785).

The circuit looks like this:

 

SR785 drive ----> Amplifier ----> Ring Heater : Photodetector ---> SR560 (5000x gain) ----> SR785 input

 

 

 This is wrong. It turns out the SR785 was wired up incorrectly.

  7   Thu Feb 4 14:05:59 2010 AidanElectronicsRing HeaterRing heater transfer function measurement 240mHz-5Hz

I've been trying to measure the ring heater transfer function (current to emitted power) by sweeping the supply voltage and measuring the emitted power with a photodetector positioned right next to the ring heater.

Last night the voltage was swept with a 1000mV setting on the SR785, which was fed into the Voltage Control of the Kepco Bipolar Operational Power Supply/Amplifier, biased around 10V.

The results are very, very strange. The magnitude of the transfer function decreases at lower frequency. I'll post the data just as soon as I can (ASCII dumps 13 and 14 on the disk from the SR785).

The circuit looks like this:

 

SR785 drive ----> Amplifier ----> Ring Heater : Photodetector ---> SR560 (5000x gain) ----> SR785 input

 

 

  6   Fri Jan 29 10:02:15 2010 AidanComputingDAQNew DAQ ordered

 On the advice of Ben Abbott, I've ordered the Diamond Systems Athena II computer w/DAQ, as well as an I/O board, solid state disk and housing for it. The delivery time is 4-6 weeks.

Diamond Systems Athena II

 
