TCS elog, Page 2 of 5

ID   Date   Author   Type   Category   Subject
52   Thu Jun 17 22:03:51 2010   James K   Misc   Hartmann sensor   SURF Log -- Day 2, Getting Started

For Thursday, June 17:

Today I attended a basic laser safety training orientation, the second Introduction to LIGO lecture, a Summer Research Student Safety Orientation, and an Orientation for Non-Students living on campus (lots of mandatory meetings today). I met with Dr. Willems and Dr. Brooks in the morning and went over some background information regarding the project, then in the afternoon I got an idea of where I should progress from here from talking with Dr. Brooks. I read over the paper "Adaptive thermal compensation of test masses in advanced LIGO" and the LIGO TCS Preliminary Design document, and did some further reading in the Brooks thesis.

I'm making a little bit of progress with accessing the Hartmann lab computer with Xming but got stuck, and hopefully will be able to sort that out in the morning and progress to where I want to be (I wasn't able to get much further than that, since I can't access the Hartmann computer in the lab currently due to laser authorization restrictions). I'm currently able to remotely open an X terminal on the server but wasn't able to figure out how to then be able to log in to the Hartmann computer. I can do it via SSH on that terminal, of course, but am having the same access restrictions that I was getting when I was logging in to the Hartmann computer via SSH directly from my laptop (i.e. I can log in to the Hartmann computer just fine, and access the camera and framegrabber programs, but for the vast majority of the stuff on there, including MATLAB, I don't have permissions for some reason and just get 'access denied'). I'm sure that somebody who actually knows something about this stuff will be able to point out the problem and point me in the right direction fairly quickly (I've never used SSH or the X Window system before, which is why it's taking me quite a while to do this, but it's a great learning experience so far at least).

Goals for tomorrow: get that all sorted out and learn how to be able to fully access the Hartmann computer remotely and run MATLAB off of it. Familiarize myself with the camera program. Set the camera into test pattern mode and use the 'take' programs to retrieve images from it. Familiarize myself with the 'take' programs a bit and the various options and settings of them and other framegrabber programs. Get MATLAB running and use fread to import the image data arrays I take with the proper data representation (uint16 for each array entry). Then, set the camera back to recording actual images, take those images from the framegrabber and save them, then import them into MATLAB. I should familiarize myself with the various settings of the camera at this stage, as well.

 

--James

53   Sat Jun 19 17:31:46 2010   James K   Misc   Hartmann sensor   SURF Log -- Day 3, Initial Image Analysis
For Friday, June 18:
(note that I haven't been working on this stuff all of Saturday or anything, despite posting it now. It was getting late on Friday evening so I opted to just type it up now, instead)

(all matlab files referenced can be found in /EDTpdv/JKmatlab unless otherwise noted)

I finally got Xming up and running on my laptop and had Dr. Brooks edit the permissions of the controls account, so now I can fully access the Hartmann computer remotely (run MATLAB, interact with the framegrabber programs, etc.). I was able to successfully adjust camera settings and take images using 'take', saving them as .raw files. I figured out how to import these .raw files into MATLAB using fopen and display them as grayscale images using the imshow command. I then wrote a program (readimgs.m, as attached) which takes as inputs a base filename and a number of images (n), then automatically loads the first n .raw files located in /EDTpdv/JKimg/ with the given base file name, formatting them properly and saving them as a 1024x1024x(n) matrix.
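For reference, the logic of readimgs.m can be sketched in Python/NumPy (the MATLAB original is the attachment; this is just an illustrative equivalent, and the function name, zero-padded file numbering, and little-endian uint16 byte order are my assumptions):

```python
import numpy as np

def read_raw_images(base, n, folder=".", width=1024, height=1024):
    """Load n raw frames named e.g. base0000.raw, base0001.raw, ...
    Assumes each file holds width*height little-endian uint16 samples;
    returns a (height, width, n) stack, mirroring readimgs.m."""
    stack = np.empty((height, width, n), dtype=np.uint16)
    for i in range(n):
        fname = f"{folder}/{base}{i:04d}.raw"
        frame = np.fromfile(fname, dtype="<u2")  # '<u2' = little-endian uint16
        stack[:, :, i] = frame.reshape(height, width)
    return stack
```

The same stack layout (image index along the third dimension) is what the later statistics functions assume.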

After trying out the test pattern of the camera, I set the camera into normal operating mode. I took 200 images of the HWS illuminated by the OLED, using the following camera settings:

 
Temperature data from the camera was, unfortunately, not taken, though I now know how to take it.
 
The first of these 200 images is shown below:
 
hws0000.png

As a test exercise in MATLAB and also to analyze the stability of the HWS output, I wrote a series of functions to allow me to find and plot the means and standard deviations of the intensity of each pixel over a series of images. First, knowing that I would need it in following programs in order to use the plot functions on the data, I wrote "ar2vec.m" (as attached), which simply inputs an array and concatenates all of the columns into a single column vector.

Then, I wrote "stdvsmean.m" (as attached), which inputs a 3D array (such as the 1024x1024x(n) array of n image files), which first calculates the standard deviation and mean of this array along the 3rd dimension (leaving, for example, two 1024x1024 arrays, which give the mean and standard deviation of each pixel over the (n) images). It then uses ar2vec to create two column vectors, representing the mean and standard deviation of each pixel. It then plots a scatterplot of the standard deviation of each pixel vs. its mean intensity (with logarithmic axes), along with histograms of the mean intensities and standard deviation of intensities (with logarithmic y-axes).

"imgdevdat.m" (as attached) is simply a master function which combines the previous functions to input image files, format them, analyze them statistically and create plots.
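The core computation behind stdvsmean.m and imgdevdat.m (minus the plotting) amounts to the following Python/NumPy sketch; the names are my own, and the log-log scatter/histogram plotting is left out:

```python
import numpy as np

def pixel_stats(stack):
    """Per-pixel mean and standard deviation over a (H, W, n) image stack,
    flattened to column vectors. MATLAB's std normalizes by n-1, hence
    ddof=1; order='F' stacks columns like ar2vec.m does."""
    mean_img = stack.mean(axis=2)          # (H, W) mean of each pixel
    std_img = stack.std(axis=2, ddof=1)    # (H, W) sample std of each pixel
    return mean_img.flatten(order="F"), std_img.flatten(order="F")
```

The two returned vectors are exactly what gets fed to the scatter plot of standard deviation vs. mean intensity.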

Running this function for the first 20 images gave the following output:

(data from 20 images, over all 1024x1024 pixels)

Note that the background level is not subtracted out in this function, which is apparent from the plots. The logarithmic scatter plot looks pretty linear, as expected, but there are interesting features arising between the intensities of ~120 to ~130 (the obvious spike upward of standard deviation, followed immediately by a large dip downward).

MATLAB gets pretty bogged down trying to plot over a million data points at a time, to the point where it's very difficult to do anything with the plots. I therefore wrote the function "minimgstat.m" (as attached), which is very similar to imgdevdat.m except that before doing the analysis and plotting, it reduces the size of the image array to the upper-left NxN square (where N is an additional argument of the function).

Using this function, I did the same analysis of the upper-left 200x200 pixels over all 200 images:

(data from 200 images, over the upper-left 200x200 pixels)

The intensities of the pixels don't go as high this time because the upper portion of the image is dimmer than much of the rest of the image (as is apparent from looking at the image itself, and as I demonstrate further a little bit later on). Note the resulting change in axis scaling when comparing the plots. We do, however, see the same behavior in the ~120-128 intensity level region (more pronounced in this plot because of the change in axis scaling).

I was interested in looking at which pixels constituted this band, so I wrote a function "imgbandfind.m" (as attached), which inputs a 2D array and a minimum and maximum range value, goes through the image array pixel-by-pixel, determines which pixels are within the range, and then constructs an RGB image which displays pixels within the range as red and pixels outside the range as black.
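The equivalent operation is easy to do vectorized rather than pixel-by-pixel; here is a Python/NumPy sketch of the imgbandfind.m idea (names and the 8-bit RGB convention are my choices, not the attached MATLAB):

```python
import numpy as np

def band_mask_rgb(img, lo, hi):
    """Return an RGB array that paints pixels with lo <= value <= hi
    red and everything else black."""
    in_band = (img >= lo) & (img <= hi)
    rgb = np.zeros(img.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = np.where(in_band, 255, 0)  # red channel only
    return rgb
```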

I inputted the first image in the series into this function along with the range of 120-129, and got the following:

(pixels in intensity range of 120-129 in first image)

So the pixels in this range appear to be the pixels on the outskirts of each wavefront dot near the vertical center of the image. The outer circles of the dots on the lower and upper portions of the image do not appear, perhaps because the top of the image is dimmer and the bottom of the image is brighter, and thus these outskirt pixels would then have lower and higher values, respectively. I plan to investigate this and why it happens (what causes this 'flickering' and if it is a problem at all) further.

The fact that the background levels are lower nearer to the upper portion of the image is demonstrated in the next image, which shows all intensity levels less than 70:
(pixels in intensity range of 0-70 in first image)

So the background levels appear to be nonuniform across the CCD, as are the intensities of each dot. Again, I plan to investigate this further. (Could it be something to do with stray light hitting the CCD nonuniformly, maybe? I haven't thought through all the possibilities.)
 
The OLED has been turned off, so my next immediate step will be to investigate the background levels further by analyzing the images when not illuminated by the OLED.
 
In other news: today I also attended the third Intro to LIGO lecture, a talk on Artificial Neural Networks and their applications to automated classification of stellar spectra, and the 40m Journal Club on the birth rates of neutron stars (though I didn't think to learn how to access the wiki until a few hours right before, and then didn't actually read the paper. I fully intend to read the paper for next week before the meeting).
 
54   Tue Jun 22 00:21:47 2010   James K   Misc   Hartmann sensor   SURF Log -- Day 4, Hartmann Spot Flickering Investigation

 I started out the day by taking some images from the CCD with the OLED switched off, to just look at the pattern when it's dark. The images looked like this:

 
Taken with camera settings:

The statistical analysis of them using the functions from Friday gave the following result:

 
At first glance, the distribution looks pretty Poissonian, as expected. There are a few scattered pixels registering a little brighter, but that's perhaps not so terribly unusual, given the relatively tiny spread of intensities even including the most extreme outliers. I won't say for certain whether or not there might be something unexpected at play here, but I don't notice anything as unusual as the standard deviation 'spike' seen from intensities 120-129 as observed in the log from yesterday.
 
Speaking of that spike, the rest of the day was spent trying to investigate it a little more. In order to accomplish this, I wrote the following functions (all attached):
 
-spotfind.m -- inputs a 3D array of several Hartmann images as well as a starting pixel and threshold intensity level. Analyzes the first image, scanning from the starting pixel until it finds a spot (with an edge determined by the threshold level), after which it finds a box of pixels which completely surrounds the spot and then shrinks the matrix down to this size, localizing the image to a single spot
 
-singspotcent.m -- inputs the image array outputted from spotfind, subtracts an estimate of the background, then uses the centroiding algorithm sum(x*P^2)/sum(P^2) to find the centroid (where x is the coordinate and P is the intensity level), then outputs the centroid location
 
-hemiadd.m -- inputs the image from spotfind and the centroid from singspotcent, subtracts an estimate of the background, then finds the sum total intensity in the top half of the image above the centroid, the bottom half, the left half and the right half, outputs these values as n-component vectors for an n-image input, subtracts from each vector its mean and then plots the deviations in intensity from the mean in each half of the image as a function of time
 
-edgeadd.m -- similar to hemiadd, except that rather than adding up all pixels on one half of the image, it inputs a threshold, determines how far to the right of the centroid the spot falls past this threshold and uses that as a radial length, then finds the sum of the intensities of a bar of 3 pixels on this "edge" at the radial length away from the centroid.
 
-spotfft.m -- performs a fast Fourier transform on the outputs from edgeadd, outputting the frequency spectrum at which the intensity of these edge pixels oscillates, then plotting these for each of the four edge vectors. See an example output below.
 
--halfspot_fluc.m and halfspot_edgefluc.m -- master functions which combine and automate the previous functions
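As a rough illustration of two of the pieces above -- the P^2-weighted centroid and the edge-intensity amplitude spectrum -- here is a Python/NumPy sketch (not the attached MATLAB; function names and normalization details are my own assumptions):

```python
import numpy as np

def spot_centroid(img):
    """Intensity-squared-weighted centroid sum(x*P^2)/sum(P^2), as in
    singspotcent.m; the background is assumed already subtracted."""
    P2 = img.astype(float) ** 2
    ys, xs = np.indices(img.shape)
    return (xs * P2).sum() / P2.sum(), (ys * P2).sum() / P2.sum()

def edge_spectrum(edge_sums, frame_rate):
    """One-sided amplitude spectrum of an edge-intensity time series
    (the spotfft.m step). frame_rate is the camera frame rate in Hz.
    The mean is removed so the DC bin doesn't dominate the plot."""
    n = len(edge_sums)
    amp = np.abs(np.fft.rfft(edge_sums - np.mean(edge_sums))) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / frame_rate)
    return freqs, amp
```

A sinusoidal edge fluctuation at f Hz shows up as a peak at f in the returned spectrum, with amplitude equal to the sinusoid's amplitude (for an integer number of cycles in the record).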
 
Dr. Brooks has suggested that the observed flickering might be an effect resulting from the finite thickness of the Hartmann plate. The OLED can be treated as a point source and thus approximated as emitting a spherical wavefront, so the light from it will hit the plate edge at an angle and be scattered onto the CCD. If the plate vibrates (which it certainly must to some degree), the wavefront will hit this edge at a different angle as the edge is displaced by the vibration, and thus this light will hit the CCD at a different point, causing the flickering (which is, after all, observed to occur near the edge of the spot). This effect certainly must cause some level of noise, but whether it's the culprit for our 'flickering' spike in the standard deviation remains to be seen.

Here is the frequency spectrum of the edge intensity sums for two separate spots, found over 128 images:
Intensity Sum Amplitude Spectrum of Edge Fluctuations, 128 images, spot search point (100,110), threshold level 110

128 images, spot search point (100,100), threshold level 129
At first glance, I am not able to conclude anything from this data. I should investigate this further.

A few things to note, to myself and others:
--I still should construct a Bode plot from this data and see if I can deduce anything useful from it
--I should think about whether or not my algorithms are good for detecting what I want to look at. Is looking at a 3 pixel vertical or horizontal 'bar' on the edge good for determining what could possibly be a more spherical phenomenon? Are there any other things I need to consider? How will the settings of the camera affect these images and thus the results of these functions?
--Am I forgetting any of the subtleties of FFTs? I've confirmed that I am measuring the amplitude spectrum by looking at reference sine waves, but I should be careful since I haven't worked with these in a while
 
It's late (I haven't been working on this all night, but I haven't gotten the chance to type this up until now), so thoughts on this problem will continue tomorrow morning..

55   Tue Jun 22 22:30:24 2010   James K   Misc   Hartmann sensor   SURF Log -- Day 5, more Hartmann image preliminary analysis

Today I spoke with Dr. Brooks and got a rough outline of what my experiment for the next few weeks will entail. I'll be getting more of the details and getting started a bit more tomorrow, but today I had a more thorough look around the Hartmann lab and we set up a few things on the optical table. The OLED is now focused through a microscope to keep the beam from diverging quite as much before it hits the sensor, and the beam is roughly aligned to shine onto the Hartmann plate. The Hartmann images currently look like this (on a color scale of intensity):

hws.png

This image was taken with the camera set to an exposure time of 650 microseconds and a frequency of 58Hz. The visible 'streaks' on the image are believed to possibly be an artifact of the camera's data acquisition process.

I tested to see whether the same 'flickering' is present in images under this setup.

For the frequency kept at 58Hz, the following statistics were found from a 200x200 pixel box within a series of 10 images taken at different exposure times. Note that the range on the plot has been reduced to the region near the relevant feature, and that this range is not being changed from image to image:

750 microseconds:

750us.png

1000 microseconds:

1000us.png

1500 microseconds:

1500us.png

2000 microseconds:

2000us.png

3000 microseconds:

3000us.png

4000 microseconds:

4000us.png

5000 microseconds. Note that the background level is approaching the level of the feature:

5000us.png

6000 microseconds. Note that the axis setup is not restricted to the same region, and that the background level exceeds the level range of the feature. This demonstrates that the 'feature' disappears from the plot when the plot does not include the specific range of ~115-130:

8000us.png

 

When images containing the feature intensities are averaged over a greater number of images, the plot takes on the following appearance (for a 200x200 box within a series of 100 images, 3000us exposure time):

hws3k.png

This pattern changes a bit when averaged over more images. It looks as though this could, perhaps, just be the result of the decrease in the standard deviation of the standard deviations in each pixel resulting from the increased number of images being considered for each pixel (that is, the line being less 'spread out' in the y-axis direction). 

 

To demonstrate that frequency doesn't have any effect, I got the following plots from images where I set the camera to different frequencies and then set the exposure time to 3000us (I wouldn't expect this to have any effect, given the previous images, but these appear to demonstrate that the 'feature' does not vary with the frame rate):

 

Set to 30Hz:

f30Hz.png

Set to 1Hz:

f1Hz.png

 

To make sure that something weird wasn't going on with my algorithm, I did the following: I constructed a 10-component vector of random numbers. Then, I concatenated that vector beside itself ten times. Then, I stacked that 2D array into a 3D array by scaling it with ten different integer multiples, ensuring that the standard deviations of each row would be integer multiples of each other when the standard deviation was found along the direction of the scaling (I chose the integer multiples to ensure that some of these values would fall within the range of 115-130). Thus, if my function wasn't making any weird mistakes, I would end up with a linear plot of standard deviation vs. mean, with a slope of 1. When the array was inputted into the function with which the previous plots were found, the output plot was indeed observed to be linear, and a least-squares regression of the mean/deviation data confirmed that the slope was exactly 1 and the intercept exactly 0. So I'm pretty certain that the feature observed in these plots is not any sort of 'artifact' of the algorithm used to analyze the data (all the functions are pretty simple, so I wouldn't expect it to be, but it doesn't hurt to double-check).
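That consistency check can be reproduced in a few lines. This Python/NumPy sketch follows the same construction (integer-scaled copies of a random vector stacked along the third dimension); since each pixel's mean and standard deviation are then both proportional to its base value, the log-log regression of std against mean has slope exactly 1:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.uniform(100, 200, size=10)   # 10 random pixel values
tile = np.tile(base, (10, 1))           # 10x10 array of copies
# Stack ten integer-scaled copies along the 3rd dimension: each pixel's
# values over the stack are v*1, v*2, ..., v*10.
stack = np.stack([tile * k for k in range(1, 11)], axis=2)
mean = stack.mean(axis=2).ravel()
std = stack.std(axis=2, ddof=1).ravel()
slope, intercept = np.polyfit(np.log(mean), np.log(std), 1)
print(slope)  # -> 1.0 (up to floating-point error)
```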

 

I would conjecture from all of this that the observed feature in the plots is the result of some property of the CCD array or other element of the camera. It does not appear to have any dependence on exposure time or to scale with the relative overall intensity of the plots, and, rather, seems to depend on the actual digital number read out by the camera. This would suggest to me, at first glance, that the behavior is not the result of a physical process having to do with the wavefront.

 

EDIT: Some late-night conjecturing: Consider the following,

I don't know how the specific analog-to-digital conversion onboard the camera works, but I got to thinking about ADCs. I assume, perhaps incorrectly, that it works on roughly the same idea as the Flash ADCs that I dealt with back in my Digital Electronics class -- that is, I don't know if it has the same structure (a linear resistor ladder hooked up to comparators which compare the ladder voltages to the analog input, followed by combinational logic which takes the comparator outputs and outputs a digital level), but I assume that it must, at some level, be comparing the analog input to a number of different voltage thresholds, taking the highest threshold that the analog input exceeds, then outputting the digital level corresponding to that particular threshold voltage.

Now, consider if there was a problem with such an ADC such that one of the threshold voltages was either unstable or otherwise different than the desired value (for a Flash ADC, perhaps this could result from a problem with the comparator connected to that threshold level, for example). Say, for example, that the threshold voltage corresponding to the 128th level was too low. In that case, an analog input voltage which should be placed into the 127th level could, perhaps, trip the comparator for the 128th level, and the digital output would read 128 even when the analog input should have corresponded to 127.

So if such an ADC was reading a voltage (with some noise) near that threshold, what would happen? Say that the analog voltage corresponded to 126 and had noise equivalent to one digital level. It should, then, give readings of 125, 126 or 127. However, if the voltage threshold for the 128th level was off, it would bounce between 125, 126, 127 and 128 -- that is, it would appear to have a larger standard deviation than the analog voltage actually possessed.

Similarly, consider an analog input voltage corresponding to 128 with noise equivalent to one digital level. It should read out 127, 128 and 129, but with the lower-than-desired threshold for 128 it would perhaps read out only 128 and 129 -- that is, the standard deviation of the digital signal would be lower for points just above 128.

This is very similar to the sort of behavior that we're seeing!

Thinking about this further, I reasoned that if this was what the ADC in the camera was doing, then if we looked in the image arrays for instances of the digital levels 127 and 128, we would see too few instances of 127 and too many instances of 128 -- several of the analog levels which should correspond to 127 would be 'misread' as 128. So I went back to MATLAB and wrote a function to look through a 1024x1024xN array of N images and, for every integer between an inputted minimum level and maximum level, find the number of instances of that level in the images. Inputting an array of 20 Hartmann sensor images, along with minimum and maximum levels of 50 and 200, gave the following:

levelinstances.png

Look at that huge spike at 128! This is more complex of behavior than my simple idea which would result in 127 having "too few" values and 128 having "too many", but to me, this seems consistent with the hypothesis that the voltage threshold for the 128th digital level is too low and is thus giving false output readings of 128, and is also reducing the number of correct outputs for values just below 128. And assuming that I'm thinking about the workings of the ADC correctly, this is consistent with an increase in the standard deviation in the digital level for values with a mean just below 128 and a lower standard deviation for values with a mean just above 128, which is what we observe.
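The faulty-threshold hypothesis is easy to simulate. In the hedged Python sketch below (a toy mid-tread quantizer of my own invention, not the camera's actual ADC), lowering the 128th level's threshold produces exactly the described signature: excess counts at 128 and a deficit at 127 relative to an ideal quantizer:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(analog, bad_level=None, offset=0.0):
    """Ideal mid-tread quantizer: output k when k - 0.5 <= analog < k + 0.5.
    If bad_level is set, that level's lower threshold is shifted by `offset`
    (negative = threshold too low), mimicking the faulty-comparator idea."""
    out = np.rint(analog).astype(int)
    if bad_level is not None:
        lo = bad_level - 0.5 + offset
        # values that now trip the bad level's comparator early
        out[(analog >= lo) & (out == bad_level - 1)] = bad_level
    return out

# analog pixel values spread around the suspect code, ~1 level of noise
analog = rng.normal(loc=127.0, scale=1.0, size=200_000)
ideal = quantize(analog)
faulty = quantize(analog, bad_level=128, offset=-0.3)  # 128's threshold too low
for lvl in (126, 127, 128, 129):
    print(lvl, np.sum(ideal == lvl), np.sum(faulty == lvl))
```

The same mechanism inflates the standard deviation of readouts whose mean sits just below the bad threshold and deflates it just above, matching the plots.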

 

This is my current hypothesis for why we're seeing that feature in the plots. Let me know what you think, and if that seems reasonable.

 

57   Wed Jun 23 22:57:22 2010   James K   Misc   Hartmann sensor   SURF Log -- Day 6, Centroiding

 So in addition to taking steps towards starting to set stuff up for the experiment in the lab, I spent a good deal of the day figuring out how to use the pre-existing code for finding the centroids in spot images. I spent quite a bit of time trying to use an outdated version of the code that didn't work for the actual captured images, and then once I was directed towards the right version I was hindered for a little while by a bug.

The 'bug' turns out to be something very simple, yet relatively subtle. The function centroid_images.m in '/opt/EDTpdv/hartmann/src/' was assuming a threshold of 0 with my images, even though it had been working not long before with an image that Dr. Brooks loaded. Looking through the code, I noticed that before finding the threshold using the MATLAB function graythresh, several adjustments were made so as to subtract out the background and normalize the array. After estimating and subtracting a background, the function divides the entries of the image array by the maximum value in the image so as to normalize it. For arrays of numbers represented as doubles, this is fine. However, the function that I wrote to import my image arrays into MATLAB outputs an image array with integer data. So when the function divided my integer image arrays by the maximum value in the array, it rounded every value in the array to the nearest integer -- that is, the "normalized" array only contained ones and zeros. The function graythresh views this as a black-and-white image, and thus outputs a threshold of 0.

To remedy this, I edited centroid_images.m to convert the image array into an array of doubles near the very beginning of the function. The only new line is simply "image=double(image);", and I made a note of my edit in a comment above that line. The function started working for me after I did that.
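The rounding failure and the fix are easy to demonstrate. NumPy's true division doesn't round the way MATLAB's integer-array division does, so this Python sketch mimics the MATLAB behavior explicitly with np.rint (the pixel values are made up):

```python
import numpy as np

img = np.array([[10, 120, 255], [30, 200, 5]], dtype=np.uint16)

# MATLAB-style behaviour: dividing an integer array rounds the result back
# to integers, so "normalizing" collapses the image to 0s and 1s, and a
# gray-level threshold routine then sees a binary image (threshold -> 0).
matlab_style = np.rint(img / img.max()).astype(int)

# The fix (the image=double(image) line): cast first, then normalize.
fixed = img.astype(float) / img.max()

print(sorted(set(matlab_style.ravel())))  # only 0 and 1 survive the rounding
print(fixed.min(), fixed.max())           # full range of gray levels kept
```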

 

I then wrote a function which automatically centroids an input image and then plots the centroids as scatter-plot of red circles over the image. For an image taken off of the Hartmann camera, it gave the following:

centroidplot_nozoom.png

Zoomed in on the higher-intensity peaks, the centroids look good. They're a little offset, but that could just be an artifact of the plotting procedure; I can't say for certain either way. They all appear offset by the same amount, though:

centroidplot_zoom.png

One problem is that, for spots with a much lower relative intensity than the maximum intensity peak, the centroid appears to be offset:

centroidplot_zoom2.png

Better centering of the beam and more even illumination of the Hartmann plate could mitigate this problem, perhaps.

 

I also wrote a function which inputs two image matrices and outputs vector field plots representing the shift in each centroid from the first to the second images. To demonstrate that I could use this function to display the shifting of the centroids from a change in the wavefront, I translated the fiber mount of the SLED in the direction of the optical axis by about 6 turns of the z-control knob  (corresponding to a translation of about 1.9mm, according to the user's guide for the fiber aligner). This gave the following images:

 

Before the translation:

6turn_before.png

After:

6turn_after.png

 This led to a displacement of the centroids shown as follows:

6turnDisplacementVectors.png

Note that the magnitudes of the actual displacements are small, making the shift difficult to see. However, when we scale the displacement vectors up, we get much more readily visible direction vectors (having the same direction as the actual displacement vectors, but not the same magnitude):

6turnDirectionVectors.png

This was a very rough sort of measurement, since exposure time, focus of the microscope optic, etc. were not adjusted, and the centroids are compared between single images rather than composite images, meaning that random noise could have quite an effect, especially for the lower-magnitude displacements. However, this plot appears to show the centroids 'spreading out', which is as expected for moving the SLED closer to the sensor along the optical axis.

 

The following MATLAB functions were written for this (both attached):

centroidplot.m -- calls centroid_image and plots the data

centroidcompare.m -- calls centroid_image twice for two inputs matrices, using the first matrix's centroid output structure as a reference for the second. Does a vector field plot from the displacements and reference positions in the second output centroids structure.
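The displacement-matching step behind these vector-field plots can be sketched as follows (Python/NumPy, with a nearest-neighbor match and distance cutoff of my own invention; the attached centroidcompare.m instead uses centroid_image's reference-structure mechanism):

```python
import numpy as np

def centroid_displacements(ref, new, max_dist=5.0):
    """Match each reference centroid to the nearest centroid in the second
    image and return (origins, displacement vectors) -- the data behind a
    quiver plot. ref and new are (N, 2) arrays of (x, y) positions; pairs
    farther apart than max_dist pixels are discarded as failed matches."""
    origins, vectors = [], []
    for p in ref:
        d = np.linalg.norm(new - p, axis=1)
        j = np.argmin(d)
        if d[j] <= max_dist:
            origins.append(p)
            vectors.append(new[j] - p)
    return np.array(origins), np.array(vectors)
```

The returned origins and vectors can be fed straight to matplotlib's quiver, with the vectors scaled up for visibility as in the direction-vector plot above.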

58   Fri Jun 25 00:11:13 2010   James K   Misc   Hartmann sensor   SURF Log -- Day 7, SLED Beam Characterization

BACKGROUND:


In order to conduct future optical experiments with the SLED and to be able to predict the behavior of the beam as it propagates across the table and through various optics, it is necessary to know the properties of the beam. The spot size, divergence angle, and radius of curvature are all of interest if we wish to be able to predict the pattern which should appear on the Hartmann sensor given a certain optical layout.

It was therefore necessary to conduct an experiment to measure these properties. The wavefront emanating from the SLED is assumed to be approximately Gaussian, and thus has an intensity of the form:

 

I(x,y) = A * exp( -2*( (x - x0)^2 + y^2 ) / w^2 )

where A is some amplitude, w is the spot size, x and y are the coordinates transverse to the optical axis, and x0 is the x-displacement of the beam center from the optical axis. The displacement of the beam center in the y-direction is assumed to be zero (that is, y0=0). A and w are both functions of z, the coordinate of displacement parallel to the optical axis.

 

Notice that the total intensity read by a photodetector reading the entire beam would be the double integral of I(x,y) from negative infinity to infinity in both x and y. If an opaque plate was placed such that the beam was blocked from some x=xm to x=inf (where xm is the location of the edge of the plate), then the intensity read by a photodetector reading the entire non-blocked portion of the beam would be:

Ipd(xm) = Integral[y: -inf to inf] Integral[x: -inf to xm] A * exp( -2*( (x - x0)^2 + y^2 ) / w^2 ) dx dy

Mathematica was used to simplify this integral, and it showed it to be equivalent to:

Ipd(xm) = (pi*A*w^2/4) * Erfc( sqrt(2)*(x0 - xm)/w )

where Erfc() is the complementary error function. Note that for fixed z, this intensity is a function only of xm. If an experiment was carried out to measure the intensity of the non-blocked portion of the beam for multiple values of xm, it would therefore be possible via regression analysis to compute the best-fit values of A, w, and x0 for the measured values of Ipd and xm. This would give us A, w and x0 for that z-value. By repeating this process for multiple values of z, we could therefore find the behavior of these parameters as a function of z.
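A quick numerical sanity check of the Erfc expression, as a Python sketch using math.erfc (not the lab's Mathematica or MATLAB work; the 1.28*w separation between the 10% and 90% transmission points used at the end is a standard knife-edge result, and all numbers here are made up):

```python
import math

def knife_edge_power(xm, A=1.0, w=1.0, x0=0.0):
    """Power reaching the detector when an opaque edge blocks the Gaussian
    beam from x = xm to +inf:  Ipd = (pi*A*w^2/4)*erfc(sqrt(2)*(x0-xm)/w)."""
    return math.pi * A * w**2 / 4.0 * math.erfc(math.sqrt(2) * (x0 - xm) / w)

# Limits match the Gaussian-beam total power pi*A*w^2/2:
full = math.pi * 2.0**2 / 2.0
assert abs(knife_edge_power(1e3, w=2.0) - full) < 1e-12      # edge withdrawn
assert abs(knife_edge_power(0.0, w=2.0) - full / 2) < 1e-12  # edge at center

# The 10%-90% transmission points are separated by ~1.2816*w, so w can be
# estimated from just two scan positions (a cruder alternative to the fit):
w_true, x0_true = 1.7, 0.4
def frac(xm):  # transmitted fraction of the total power
    return knife_edge_power(xm, w=w_true, x0=x0_true) / (math.pi * w_true**2 / 2)
def crossing(target, lo=-10.0, hi=10.0):
    for _ in range(60):  # bisection; frac() is monotone increasing in xm
        mid = (lo + hi) / 2
        if frac(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
w_est = (crossing(0.9) - crossing(0.1)) / 1.2816
print(w_est)  # close to w_true = 1.7
```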

Furthermore, we know that at z-values well beyond the Rayleigh range, w should be linear with respect to z. Assuming that our measurements are done in the far-field (which, for the SLED, they almost certainly would be) we could therefore find the divergence angle by knowing the slope of the linear relation between w and z. Knowing this, we could further calculate such quantities as the Rayleigh range, the minimum spot size, and the radius of curvature of the SLED output (see p.490 of "Lasers" by Milonni and Eberly for the relevant functional relationships for Gaussian beams).
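Those far-field relations can be sketched numerically. The slope and wavelength below are made-up numbers (the SLED's center wavelength isn't stated in this log); the point is just the chain from the measured w(z) slope to divergence angle, waist, and Rayleigh range, following the Gaussian-beam formulas referenced above:

```python
import math

# Hypothetical inputs -- neither value is taken from the measurements:
slope = 0.05            # fitted dw/dz in the far field = tan(theta)
lam = 830e-9 * 1e3      # assumed wavelength, converted to mm (830 nm)

theta = math.atan(slope)        # far-field divergence half-angle [rad]
w0 = lam / (math.pi * theta)    # minimum spot size (waist) [mm]
zR = math.pi * w0**2 / lam      # Rayleigh range [mm]

def w(z):
    """Gaussian spot size w(z) = w0*sqrt(1 + (z/zR)^2), z from the waist."""
    return w0 * math.sqrt(1 + (z / zR)**2)

# Well beyond the Rayleigh range, w(z)/z approaches theta, recovering the
# measured linear slope:
print(theta, w0, zR, w(100 * zR) / (100 * zR))
```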


EXPERIMENT:

An experiment was therefore carried out to measure the intensity of the non-blocked portion of the beam for multiple values of xm, at multiple values of z. A diagram of the optical layout of the experiment is below:

 

(top view)


The razor blade was mounted on a New Focus 9091 Translational Stage, the relative displacement of which in the x-direction was measured with the Vernier micrometer mounted on the base. Tape was placed on the front of the razor so as to block light from passing through any of its holes. The portion of the beam not blocked by the razor then passed through a lens which was used to focus the beam back onto a PDA1001A Large Area Silicon Photodiode, the voltage output of which was monitored using a Fluke digital multimeter. The ruler stayed securely clamped onto the optical table (except when it was translated in the x-direction once during the experiment, as described later).

The following is a picture of this layout, as constructed:

 

 
The procedure of the experiment was as follows: first, the translational stage was clamped securely with the left-most edge of its base lined up with the desired z-value as measured on the ruler. The z-value as measured on the ruler was recorded. Then, the translational stage was moved in the negative x-direction until there was no change in the voltage measured on the DMM (which is directly proportional to the measured intensity of the beam). When no further DMM readout change was yielded from -x translation, it was assumed that the razor was no longer blocking the beam. Then, the stage was moved in the +x direction until the voltage output on the DMM just began to change. The micrometer and DMM values were both recorded. The stage was then moved inward until the DMM read a voltage close to the nearest multiple of 0.5V, and this DMM voltage and micrometer reading were recorded. The stage was then translated until the DMM voltage dropped by approximately 0.5V, the micrometer and DMM readings were recorded, and this process was repeated until the voltage reached ~0.5V. The beam output was then covered by a card so as to completely block it, and the voltage output from the DMM was recorded as the intensity from the ambient light for that measurement. The stage was then unclamped and moved to the next z-value, and this process was repeated for 26 different values of z, starting at z=36.5mm and incrementing z upwards by ~4mm for the first ten measurements, then by increments of ~6mm for the remaining measurements.
 
The data from these measurements can be found on the attached spreadsheet.
 
A few notes on the experiment:
 
The Vernier micrometer has a measurement limit of 13.5 mm. After the tenth measurement, the measured xm values began to exceed this limit. It was therefore necessary to translate the ruler in the negative x-direction without translating it in the z-direction. Plates were clamped snugly to either side of the ruler such that the ruler could not be translated in the z-direction but could be moved in the x-direction when unclamped. After securing these plates, the ruler was moved in the negative x-direction by approximately 5 mm and then clamped securely in place at its new x location. To better estimate the actual x-translation, I took the following series of measurements: I moved the stage to z-values at which sets of measurements were previously taken, then moved the razor out of the beam path and carefully moved it back inward until the DMM output exactly matched the first measurement previously taken at that z-value, and read the corresponding xm value. The translation of the stage should be approximately equal to the difference of the measured xm values for that DMM voltage at that z-value. This was done for 8 z-values, and the average difference was found to be 4.57 ± 0.03 mm, which should also be the distance of stage translation (these data and calculations are included in the "x translation" sheet of the attached Excel workbook).
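The translation estimate is just the mean of those eight differences with its standard error. A quick sketch of that calculation; the values below are placeholders, not the real differences (those live in the "x translation" sheet of the attached workbook):

```python
import numpy as np

# Placeholder values standing in for the eight measured xm differences (mm).
diffs = np.array([4.55, 4.60, 4.58, 4.54, 4.57, 4.59, 4.56, 4.58])

mean = diffs.mean()
sem = diffs.std(ddof=1) / np.sqrt(len(diffs))   # standard error of the mean
print(f"stage translation = {mean:.2f} +/- {sem:.2f} mm")
```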
 
At this same point, I started using two clamps to attach the translational stage to the table for each measurement set, as I was unhappy with how securely a single clamp held it. I do not, however, believe that the use of one clamp compromised the quality of the previous sets of measurements.

 

RESULTS:


A MATLAB function 'gsbeam.m' was written to replicate the function:

and then another function 'beamdata.m' was written to input each dataset, automatically fit each set of data to a curve of that functional form, and then output PDF files plotting all of the fit curves against each other, each individual fit curve against the data from that measurement, and the widths w as a function of z. Linear regression of w against z was done to find the slope of w(z) (the plot clearly shows that the beam was measured in the far field, so w is approximately a linear function of z). An array of the z-location on the ruler, the fit parameters A, x0, and w, and the 2-norm of the residual of the fit is also output, and is shown below for the experimental data:

 

z(ruler) [mm]   A   x0 [mm]   w [mm]   2-norm of residual
36.5 7.5915 11.089 0.8741 0.1042
39.9 5.2604 11.1246 1.048 0.1013
44 3.8075 11.1561 1.2332 0.1164
48 2.777 11.1628 1.4479 0.0964
52 2.1457 11.1363 1.6482 0.1008
56 1.6872 11.4206 1.858 0.1029
60 1.3831 11.2469 2.0523 0.1021
64 1.1564 11.1997 2.2432 0.1059
68 0.972 11.1851 2.4483 0.0976
72 0.8356 11.1728 2.6392 0.1046
78 0.67 6.8821 2.9463 0.0991
84 0.5559 6.7548 3.2375 0.1036
90 0.4647 6.715 3.5402 0.0958
96 0.3993 6.7003 3.8158 0.1179
112 0.2719 6.8372 4.6292 0.0924
118 0.2398 6.7641 4.925 0.1029
124 0.2117 6.7674 5.2435 0.1002
130 0.189 6.8305 5.5513 0.0965
136 0.1709 6.8551 5.8383 0.1028
142 0.1544 6.8243 6.1412 0.0981
148 0.1408 6.7993 6.4313 0.099
154 0.1286 6.8062 6.7322 0.0948
160 0.1178 6.9059 7.0362 0.1009
166 0.1089 6.904 7.3178 0.0981
172 0.1001 6.8817 7.6333 0.1025
178 0.0998 6.711 7.6333 0

 

All output PDFs are included in the .zip file attached. The MATLAB functions themselves are also attached. The plots of the fit curves and the plot of the widths vs. the ruler location are also included below:

 

(Note that I could probably improve on the colormap I chose for this. Note also that the 'gap' is because I temporarily forgot how to add integers while taking the measurements, and thus went from 96 mm on the ruler to 112 mm despite otherwise using a ~6 mm increment in that range. Also, all of these fit curves were automatically centered at x = 0 for the plot, so they wouldn't necessarily intersect so neatly if the differences in the estimated beam centers were included.)

(Note that the width calculated from the 26th measurement is not included in the regression calculation or on this plot. Its width parameter came out exactly the same as for the 25th measurement, despite the other parameters differing between the two. I suspect the beam size was starting to exceed the region blocked by the razor; that would be easy to check, but I have yet to do it. Regardless, the fit looks good from the other 25 measurements alone.)

These results are as expected: the beam spot size increases as a function of z, and does so linearly in the far field. My next step will be to use the results of this experiment to calculate the properties of the SLED beam, characterizing it and thus enabling me to predict its behavior within further optical systems.
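As a sketch of that next step: the far-field slope dw/dz from the table above gives the divergence half-angle, and for an ideal Gaussian beam the waist follows from w0 = lambda/(pi*theta). A subset of the tabulated (z, w) pairs is used below; the SLED wavelength is an assumed placeholder, not a measured value.

```python
import numpy as np

# (z, w) pairs taken from the fit-results table above, in mm.
z = np.array([36.5, 52.0, 68.0, 84.0, 96.0, 118.0, 142.0, 172.0])
w = np.array([0.8741, 1.6482, 2.4483, 3.2375, 3.8158, 4.9250, 6.1412, 7.6333])

# Far-field: w(z) is linear, so the slope is the divergence half-angle (rad).
theta, intercept = np.polyfit(z, w, 1)

lam = 830e-9                       # m -- assumed SLED wavelength (placeholder)
w0 = lam / (np.pi * theta)         # ideal-Gaussian waist radius, m

print(f"theta = {theta:.4f} rad, w0 = {w0*1e6:.1f} um")
```

The slope comes out near 0.05 rad for these points; the inferred waist of course depends on the assumed wavelength and on the beam being diffraction-limited.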

 

  65   Thu Jul 15 20:06:37 2010 James KMiscHartmann sensorSURF Log: Thermally Induced Defocus Experiments

A quick write-up on recent work can be found at: Google Docs

 

I can't find a TeX interpreter or any other sort of equation editor on the eLog, which is why I've kept this on Google Docs for now instead of transferring it over.

 

--James

 

  78   Mon Jul 26 18:47:12 2010 James KMiscHartmann sensorHex Grid Analysis Errors and Thermal Defocus Noise

My previous eLog details how the noise in Hartmann sensor defocus measurements appears to vary with ambient light. New troubleshooting analysis reveals that the rapid shifts in the noise were indeed correlated with the ambient light, but that ambient light is not the root cause. Rather, the noise was the result of a problem in the centroiding algorithm.

The centroiding functions I have been using can be found on the SVN under /users/aidan/cit_centroid_code. When finding centroids for non-uniform intensity distributions, it is desirable to avoid simply using a single threshold level to isolate individual spots, as dimmer spots may fall below this threshold and would therefore not be "seen" by the algorithm. The centroiding functions used here get around this issue by initially setting a relatively high threshold to find the centroids of the brighter spots, and then fitting a hexagonal close-packed array to these spots so as to infer where the rest of the spots are located. Centroiding is then done within small boxes around each estimated centroid location (as determined by the hexagonal array). The functions "find_hex_grid.m" and "flesh_out_hex_grid.m" find this hexagonal grid. However, there appear to be bugs in these functions which compromise their ability to accurately locate spots and their centroids.
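A minimal Python sketch of the two-stage idea (threshold to find the bright spots, then refine each with an intensity-weighted mean in a small box). The hex-grid inference step done by find_hex_grid.m / flesh_out_hex_grid.m, where the bug appears to live, is deliberately omitted here:

```python
import numpy as np
from scipy import ndimage

def bright_centroids(img, thresh_frac=0.5, box=8):
    """Find centroids of spots brighter than thresh_frac * max, then refine
    each one with an intensity-weighted mean over a small surrounding box."""
    img = np.asarray(img, dtype=float)
    labels, n = ndimage.label(img > thresh_frac * img.max())
    refined = []
    for cy, cx in ndimage.center_of_mass(img, labels, range(1, n + 1)):
        y0 = max(int(round(cy)) - box, 0)
        x0 = max(int(round(cx)) - box, 0)
        sub = img[y0:y0 + 2 * box, x0:x0 + 2 * box]
        yy, xx = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
        tot = sub.sum()
        refined.append((y0 + (yy * sub).sum() / tot,
                        x0 + (xx * sub).sum() / tot))
    return refined

# One synthetic Gaussian spot at (30, 40) to exercise the function.
yy, xx = np.mgrid[0:64, 0:64]
spot = np.exp(-((yy - 30.0) ** 2 + (xx - 40.0) ** 2) / (2 * 3.0 ** 2))
centroids = bright_centroids(spot)
print(centroids)
```

With a hex-grid stage, the refinement boxes would instead be centered on the grid intersections, which is exactly why a misplaced grid drags the "centroids" onto pure background.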

The centroiding error can be clearly seen in the following plot of calculated centroids plotted against the raw image from which they were calculated:

centerror.PNG

At the bottom of the image, it can be seen that the functions fail at estimating the location of the spots. Because of this, centroiding is actually being done on a small box surrounding each point which consists only of the background of the image. This can explain why these centroids were calculated to have much larger displacements and shifted dramatically with small changes in ambient light levels. The centroiding algorithm was being applied to the background surrounding each of these points, so it's very reasonable to believe that a non-uniform background fluctuation could cause a large shift in the calculated centroid of each of these regions.

It was determined that this error arose during the application of the hex grid by going through the centroiding functions step-by-step to narrow down where the results first appeared to be incorrect. The function's initial estimate for the centroids, just before the application of the hex grid, is shown plotted against the original image:

centinit.png

The centroids in this image appear to correspond well to the location of each spot, so it does not appear that the error arises before this point in the function. However, when flesh_out_hex_grid and its subfunction find_hex_grid were called, they produced the following hexagonal grid:

hexgrid.png

It can be seen in this image that the estimated "spot locations" (the intersections of the grid) near the bottom of the image differ from the actual spot locations. The centroiding algorithm is applied to small regions around each of these intersections, which explains why the calculated "spot centroids" appear at incorrect locations.

It will be necessary to fix the hexagonal grid fitting so as to allow for accurate centroiding over non-uniform intensity distributions. However, recent experiments in measuring thermally induced defocus produce images with a fairly uniform distribution. It should therefore be possible to find the centroids of the images from these experiments to decent accuracy by simply temporarily bypassing the hexagonal-grid fitting functions. To demonstrate this, I analyzed some data from last week (experiment 72010a). Without bypassing the hex-grid functions, analysis yielded the following results:

72010a.png

However, when hexagonal grid fitting was bypassed, analysis yielded the following:

72010a_nohex.PNG

The level of noise in the centroid displacement vs. centroid location plot, though still not ideal, is seen to decrease by nearly two orders of magnitude. This indicates that bypassing or fixing the problems with the hexagonal grid fitting functions should enable a more accurate measurement of thermally induced defocus in future experiments.

  4   Tue Dec 29 16:05:09 2009 FrankComputingDAQbooting VME crates from fb1

 http://nodus.ligo.caltech.edu:8080/AdhikariLab/514

  232   Mon Jul 22 18:44:53 2019 Edita BytyqiThings to BuyGeneralNeed to Order Gloves

Small/Medium size gloves need to be ordered in order to handle the optics carefully.

  233   Mon Jul 22 18:46:23 2019 Edita BytyqiLab Infrastructure Laser-Lens-HWS Setup

Today, I set up a system consisting of the 520 nm laser, a 2'' mirror, and two lenses of focal lengths f1 = 40 cm and f2 = 20 cm. The goal was to collimate the beam coming from the laser, so that it passes through the test optic collimated at a radius of ~2.5 cm, and then focus it to a radius of ~1.2 cm to fit the CCD dimensions of the HWS. The mirror was placed about 1 cm from the laser, and the first lens is set up at a distance ~f1 = 40 cm from the mirror. The test optic is placed between the two lenses, and the second lens is placed about 10 cm from the CCD. The distance between the two lenses isn't critical and could change in the future. The lenses and mirrors are all labeled.

I measured the approximate divergence angle of the laser (0.06 rad) by taking the beam diameter at different positions along the propagation axis. This allowed the ABCD matrix calculations to be finalized and the focal lengths of the lenses to be chosen accordingly.
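The ABCD (ray-transfer-matrix) numbers can be sketched quickly in Python. This is pure geometric ray optics, using the 0.06 rad divergence and the f1 = 40 cm / f2 = 20 cm choices above (all distances in cm):

```python
import numpy as np

def free(d):
    """Free-space propagation over distance d (same units throughout)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def lens(f):
    """Thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Ray [height, angle] leaving the source at the measured 0.06 rad divergence.
ray = np.array([0.0, 0.06])

# Lens f1 = 40 cm placed one focal length from the source collimates the beam.
ray = lens(40.0) @ free(40.0) @ ray
print(ray)   # -> [2.4, 0.0]: collimated at a radius of ~2.4 cm

# Lens f2 = 20 cm placed 10 cm before the CCD shrinks the beam.
ray = free(10.0) @ lens(20.0) @ ray
print(ray)   # -> [1.2, -0.12]: ~1.2 cm radius at the CCD
```

The ~2.4 cm collimated radius and ~1.2 cm radius at the CCD match the targets quoted in the setup above; a real Gaussian beam has a waist, so this is only the far-field picture.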

In order to have more space in the box, I removed everything that was not necessary to the side.

  234   Wed Jul 24 16:25:13 2019 Edita BytyqiLab InfrastructureOpticsUpdated 2-lens setup

The previous 2-lens setup focused the beam to a tight spot; however, due to the divergence angle of the laser beam, a significant amount of power was not captured by the first lens at a distance of 40 cm from the source. The divergence angle seems to be larger than 0.06 rad by a factor of 2, so an f = 20 cm lens was used to collimate the beam and an f = 30 cm lens was used to focus it. A mirror was used to reflect the beam, giving us steering control. Additionally, the focusing lens was placed on a small 1-axis stage in order to control its distance from the CCD, providing control over the focused beam size.

Note: The 30 cm lens was cleaned with methanol; however, it still has some residue on the surface. The beam imaged onto the Hartmann sensor looks good, but the lens will be cleaned with a different solvent or replaced by a different 30 cm lens. The 3 lenses at the edge of the box will stay inside to prevent contamination, but they will not be used in the design.

  236   Mon Jul 29 18:53:16 2019 Edita BytyqiLab InfrastructureOpticsMounted Reflector and Heater

Having set up the 2-lens system focusing the laser beam onto the CCD, the next step was to mount the spherical reflector (31 mm wide) and the heater (~3 mm diameter). I used a small 3-axis stage to mount the heater, providing three degrees of freedom: the height of the heater and its position with respect to the reflector (left-right and in-out). The reflector was mounted in such a way that we can control its rotation angle, height, and horizontal displacement. The current design is not very sophisticated, as it is just a first test; I will look into different tools in the lab to see if I can use fewer mounts to get the same degrees of freedom.

The new heaters are supposed to be driven with AC. We used a DC power supply and put ~30 V across the wire, but only about ~50 mA of current flowed through it. Jon will look into the specs of the new heaters to see if the power supply was the problem.

  237   Thu Aug 1 15:20:39 2019 Edita BytyqiLab InfrastructureOpticsReflector Mount and DC Supply

Yesterday, we were able to take some data using the 120 V DC power supply. The reflectors cut at the focal point and at the radius were both tested; the semicircle cut proved to give a better focus, likely because roughly half the heat is lost with the focal-point reflectors. For upcoming tests, the semicircle reflectors will be used. We varied the surface reflectivity by using the dull and shiny sides of Al foil, as well as the machined Al itself. The best result came from the more reflective side of the Al foil.

Figure 1 shows the steady-state surface deformation profile detected by the HWS. The heaters don't have a uniform distribution along the wire, so more heat is radiated from the center of the wire, and thus more of it is focused onto the center of the test optic. The data need to be analyzed to determine the radius of the focus; our rough estimate is ~1.5-2 cm. We cannot collect any more data until we get a new power supply (120 V AC).

Today, I came up with a new design for mounting the reflectors. I used a big 3-axis stage and a small 4-axis stage. This provides 5 degrees of freedom, 3 translational and 2 rotational, which is what we need for fine-tuning the focus and directing it at different angles incident on the test optic. The only problem with this design is that the 3-axis stage is too tall for the box, so the lid won't close. There is a smaller one available, but I have to figure out a way to increase its height, since its screw size differs from that of the available pedestals.

Additionally, Chub used metal-to-metal epoxy to glue a screw to the back of a reflector. I will wait until tomorrow to test it, because it is a slow-acting epoxy. If it works, I have the necessary tools to do the same with the other reflectors. With the current design, the reflector will be screwed in where the round screw is in the stage. If it heats up a lot and affects the material of the stages, a small optical post (at the top of the stage) will be used to offset the absorbed heat.

 

  240   Mon Aug 12 21:15:12 2019 Edita BytyqiElectronics Determining heater/reflector focus

I took images of the heat pattern projected on a piece of paper by the semicircle reflector. I used 108 V to drive current through the heater. I tested the reflector without any coating and then with the dull and shiny sides of Al foil. I wasn't able to test the focal-point-cut reflector because I had to glue a screw to it with epoxy, which cures overnight; I will do those measurements tomorrow. Figure 2 shows the setup I used to get the data. The shiny side of Al foil is a better IR reflector, so we will use that for the wavefront measurements.

  241   Fri Aug 16 17:05:14 2019 Edita BytyqiElectronics FLIR Images of new reflector focusing heat

We got 11 new semi-circle cut reflectors of radius ~3.6 cm. I glued a screw to the back of one reflector using the same epoxy as for the previous reflectors. Due to the bigger ROC of the reflector, a tight focus is achievable at greater distances (~15 cm).

  110   Thu Feb 24 10:23:31 2011 Christopher GuidoLaserLaserLTG initial noise

Cheryl Vorvick, Chris Guido, Phil Willems

Attached is a PDF with some initial noise testing. There are 5 spectrum plots (not including the preamp spectrum) of the laser. The first two are with V_DC around 100 mV, and the other three are with V_DC around 200 mV (as measured after the 100x-gain preamplifier, so ideally 1 and 2 mV actual). We took one spectrum (at each power level) with no attempt at noise reduction and one with the lights off and a makeshift tent to reduce air flow. The 5th plot is at 200 mV with the tent and the PZT on (the other 4 have the PZT off).

 

The second plot is just the spectra divided by their respective V_DC values to get an idea of the RIN.
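That conversion is worth noting: because the preamp gain multiplies both the spectrum and V_DC, it cancels in the ratio, so dividing the measured spectrum by the measured V_DC directly gives the RIN. A sketch with made-up placeholder numbers:

```python
import numpy as np

# Hypothetical measured amplitude spectrum (V/rtHz, after the preamp) and
# the corresponding DC level (V, also after the preamp) -- placeholders.
f = np.logspace(0, 4, 100)          # Hz
spectrum = 1e-5 / np.sqrt(f)        # V/rtHz, made-up 1/f-like shape
v_dc = 100e-3                       # V

# The 100x preamp gain cancels in the ratio: RIN = spectrum / V_DC.
rin = spectrum / v_dc               # 1/rtHz
print(f"RIN at {f[0]:.0f} Hz: {rin[0]:.1e} /rtHz")
```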

  215   Tue Jul 10 17:49:13 2018 Aria ChaderjianLaserGeneralJuly 10, 2018

Went down to the lab and showed Rana the setup. He's fine with me being down there as long as I let someone know. He also recommended using an adjustable mount (three screws) for the test mirror instead of the mount with a top bolt and two nubs on the bottom; he thinks the three-screw constraints on the silica will be easier to model (and give more symmetric constraints).

Mounted the f=8" lens (on a 2" pedestal) and placed it on the table so that the image fit well on the CCD and a sharp object in front of the lens produced a sharp image. The beam was clipping the f=4" lens (between the gold mirror and test mirror), so I spent time moving that gold mirror and the f=4" lens around. I'll still need to finish up that setup.

 

  216   Thu Jul 12 18:48:21 2018 Aria ChaderjianLaserGeneralJuly 12, 2018

The beam reflecting off the test mirror was clipping the lens between gold mirror and test mirror, so I reconfigured some of the optics, unfortunately resulting in a larger angle of incidence.

From the test mirror, the beam size increases much too rapidly to fit onto the 2-inch-diameter f=8" lens that was meant to resize the beam for the CCD of the HWS. It seems that the f=8" lens can go about 6 inches from the test mirror, and an f ≈ 2.3" (60 mm) lens can go about 2 inches in front of the CCD to give the appropriate beam size. However, the image doesn't seem very sharp.

The beam is also not hitting the CCD currently because of the increase in angle of incidence on the test mirror and limitations of the box. I'd like to move the HWS closer to the SLED (and will then have to move the SLED as well).

  217   Fri Jul 13 16:42:50 2018 Aria ChaderjianLaserGeneralJuly 13, 2018

The table is set up. The HWS and SLED were moved slightly, and a minimal angle between the test mirror and HWS was achieved.

There are two possible locations for the f=60mm lens that will achieve appropriate magnification onto the HWS: 64cm or 50 cm from the f=200mm lens. 

At 64cm away, approximately 79000 saturated pixels and 1054 average value.

At 50cm away, approximately 22010 saturated pixels and 1076 average value.

Currently the setup is at 64 cm. The image could afford to be more magnified, so we might want to move the f=60mm lens around. Also, if we're going to need access to the HWS (i.e., to screw on the array), we might want to move to the 50 cm location.

  218   Mon Jul 23 10:04:19 2018 Aria ChaderjianLaserGeneralJuly 20, 2018

With Jon's help, I changed the setup to include a mode-matching telescope built from the f=60mm (1 inch diameter) lens and the f=100mm lens. These lenses are located after the last gold mirror and before the test optic. The height of the beam was also adjusted so that it is more centered on these lenses. Note: these two lenses cannot be much further apart from each other than they currently are, or the beam will be too large for the f=100mm lens.

We considered different possible mounts to use for the test optic, and decided to move it to a mount where there is less contact. The test optic was also moved closer to the HWS to achieve appropriate beamsize on the optic coming from the mode-matching telescope.

The f=200 lens is now approximately 2/3 of the distance from the test optic to the HWS, resulting in an appropriately sized beam at the HWS.

Current was also turned down to achieve 0 saturated pixels.

  219   Tue Jul 24 16:52:44 2018 Aria ChaderjianLaserGeneralJuly 23, 2018 and July 24, 2018

Attached the grid array of the HWS.

Applied voltage (5V, 7V, 9.9V, 14V) to the heater pad and took measurements of T and spherical power (aka defocus).

The adhesive of the temperature sensor isn't very sticky. The first time I applied it, it peeled off (the second time it partially peeled off). We want to put it on the side of the Al if possible.

Bonded a mirror (thickness ~6 mm) to aluminum disk (thickness ~5 mm) and it's still curing.

  220   Fri Aug 3 15:46:12 2018 Aria ChaderjianLaserGeneralAugust 3, 2018

To the best of my ability, calculated the magnification of the plane of the test optic relative to the HWS (2.3) and input this value.

Increased the temperature slightly and saved defocus data points to txt files whenever the temperature leveled out. This was a slow process, as it takes a while for things to settle. I only got up to about 28.5 C, and will need to continue this process.

I also plotted the best-fit defocus for each temperature from COMSOL (temperature vs. defocus); comparing with the values from the HWS, it seems that we're off by a normalization factor of approximately 4.

  157   Tue Jun 5 17:25:43 2012 Alex MauneyMiscaLIGO Modeling6/5/12 Daily Summary

- Had a meeting to talk about the basics of LIGO (esp. TCS) and discuss the project

- Created COMSOL model for the test mass with incident Gaussian beam.

- Added a ring heater to the previous file

- Set up SVN for the COMSOL repository

  158   Wed Jun 6 16:54:09 2012 Alex MauneyMiscaLIGO Modeling6/6/12 Daily Summary

- Got access to and started working with SIS on Rigel1

- Fixed SVN issues

- Refined COMSOL model parameters and worked on a better way to implement the heating ring to get the astigmatic heating pattern.

  160   Thu Jun 7 16:50:16 2012 Alex MauneyMiscaLIGO Modeling6/7/12 Daily Summary

- Created a COMSOL model with thermal deformations

- Added non-symmetrical heating to cause astigmatism

- Worked on a method to compute the optical path length changes in COMSOL

  162   Fri Jun 8 16:36:47 2012 Alex MauneyMiscaLIGO Modeling6/8/12 Daily Summary

- Tried to fix COMSOL error using the (ts) module, ended up emailing support as the issue is new in 4.3

- Managed to get a symmetric geometric distortion by fixing the x and y movements of the mirror to be zero (need to look for a better way to do this as this may be unphysical)

- Worked on getting the COMSOL data into SIS, need to look through the SIS specs to find out how we should be doing this (current method isn't working well)

 

  164   Mon Jun 11 17:11:01 2012 Alex MauneyMiscaLIGO Modeling6/11/12 Daily Summary

- Fixed the (ts) model, got strange results that indicate that the antisymmetric heating mode is much more prominent than previously thought

- Managed to get COMSOL data through matlab and into SIS

 

  166   Wed Jun 13 16:36:14 2012 Alex MauneyMiscaLIGO Modeling6/12 and 6/13 Daily Summary

- Realized that the strange deformations that we were seeing only occur on the face nearest the ring heater, and not on the face we are worried about (the HR face)

- Read papers by Morrison et al. and Kogelnik to get a better understanding of the mathematics and operations of the optical cavity modeled in SIS

- Read some of the SIS manual to better understand the program and the physics that it was using (COMSOL licenses were full)

  168   Thu Jun 14 16:51:03 2012 Alex MauneyMiscaLIGO Modeling6/14/12 Daily Summary

- Plugged the output of the model with uniform heating into SIS using both modification of the radius of curvature, and direct importation of deflection data

- Generated a graph for asymmetric heating and did the same

- Aligned axes in model to better match with the axes in MATLAB and SIS so that the extrema in deflections lie along x and y (not yet implemented in the data below)

  169   Mon Jun 18 16:30:36 2012 Alex MauneyMiscaLIGO Modeling6/18/12 Daily Summary

- Verified that the SIS output satisfies the equations for Gaussian beam propagation

- Investigated how changing the number of data points going into SIS changed the output, as well as how changes in the astigmatic heating affect the output

     + The results are very dependent on the number of data points (changes of similar order to those from changing the heating)

     + Holding the number of data points fixed, more asymmetric heating tends to put more power in the H(2,0) mode and less in the H(0,2) mode

 

  171   Tue Jun 19 16:24:52 2012 Alex MauneyMiscaLIGO Modeling6/19/12 Daily Summary

- Did more modeling for different levels of heating and different mesh densities for the SIS input.

- Lots of orientation stuff

- Started on progress report.

  172   Wed Jun 20 16:44:58 2012 Alex MauneyMiscaLIGO Modeling6/20/12 Daily Summary

- Attended a lot of meetings (Safety, LIGO Orientation)

- Finished draft of week 3 report (images attached)

 

  174   Thu Jun 21 16:54:45 2012 Alex MauneyMiscaLIGO Modeling6/21/12 Daily Summary

- Paper edits and more data generation for the paper (lower resolution grid data)

- Attended a talk on LIGO

 

  177   Wed Jun 27 16:43:56 2012 Alex MauneyMiscaLIGO Modeling6/27/12 Daily Summary

Plan for building the model

- Find the fields that would be incident on the beam splitter from each arm (This is done already)

- Propagate these through until they get to the OMC using the TELESCOPE function in SIS

- Combine the fields incident on the OMC in MATLAB and minimize the power to get the input field for the OMC (Most of this is done, just waiting to figure out what kind of format we need to use it as an SIS input)

- Model the OMC as an FP cavity in SIS

    + Need to think about how to align the cavity in a sensible way in SIS (need to find out more about how they actually do it)

- Pick off the fields from both ends of the OMC-FP cavity for analysis

- Add thermal effects to one of the arms and see how that changes the fields, specifically how the signal to noise ratio changes

  178   Thu Jun 28 16:27:37 2012 Alex MauneyMiscaLIGO Modeling6/28/12 Daily Summary

- Finished the MATLAB code that both combines two fields and simulates adjusting the beamsplitter to minimize the output power (with a small offset).

- Added the signal recycling telescope to the SIS code that generates the fields

To Do: Make the OMC cavity in SIS

 

  180   Mon Jul 9 16:54:17 2012 Alex MauneyMiscaLIGO Modeling7/9/12 Summary

Made a COMSOL model that can include CO2 laser heating, self heating, and ring heating

Figured out how to run SIS out of a script and set up commands to run the two SIS stages of the model

  142   Mon Apr 25 16:28:27 2011 Aidan, JoeComputingNetwork architectureFixed problem network drive fb1:/cvs on Ubuntu & CentOS machines

With Joe's help we fixed the failure of princess_sparkle to mount the fb1:/cvs directory when relying on /etc/fstab.

First we changed the mounting options in fstab to the following:

fb1:/cvs        /cvs            nfs     rw,bg,soft        1 1

When we got the following error trying it directly from the command line,

controls@princess_sparkle:~$ sudo mount /cvs
[sudo] password for controls:
mount: wrong fs type, bad option, bad superblock on fb1:/cvs,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Some quick Google searches suggested installing nfs-common, so we ran sudo apt-get install nfs-common, and that seemed to do the trick.

CentOS

For the CentOS machines, the following was done:

sudo mkdir /cvs

and then the same mounting configuration was added to /etc/fstab
 

Additionally, all three machines now have a /users symbolic link to /cvs/users

  88   Wed Aug 4 09:57:38 2010 Aidan, JamesComputingHartmann sensorRMS measurements with Hartmann sensor

[INCOMPLETE ENTRY]

We set up the Hartmann sensor and illuminated it with the output from the fiber-coupled SLED placed about 1m away. The whole arrangement was covered with a box to block out ambient light. The exposure time on the Hartmann sensor was adjusted so that the maximum number of counts in a pixel was about 95% of the saturation level.

We recorded a set of 5000 images to file and analyzed them using the Caltech and Adelaide centroiding codes. The results are shown below. Basically, we see the same deviation from ideal improvement that is observed at Adelaide.
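The "ideal improvement" referred to here is the 1/sqrt(N) reduction of centroid noise when averaging N independent frames; the deviation is how far the measured curve falls off that baseline. A quick synthetic check of the baseline itself:

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(0.0, 1.0, size=4096)   # stand-in per-frame centroid noise

# The RMS of N-frame averages should fall as 1/sqrt(N) for independent noise.
for n in (1, 16, 256):
    means = frames.reshape(-1, n).mean(axis=1)
    print(n, means.std(), 1.0 / np.sqrt(n))
```

Real centroid data deviate from this once correlated noise (e.g. drifts, ambient light) dominates, which is the effect seen at Adelaide.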

  1   Fri Nov 6 20:09:47 2009 AidanLaserLaserTest

 Does this work?

  3   Mon Dec 28 14:48:29 2009 AidanComputingDAQVME crate has a "new" CPU - needs to be configured

I installed a recycled VME crate in the electronics rack. It currently has a Baja 4700E CPU card in it - and this needs to be configured. We also have the following cards, which are not plugged in right now.

1. ICS-110A-32 Analogue-to-Digital Converter - the jumpers need to be set on this to give it a unique memory address in the VME bus.

2. D000186 LIGO-type Anti Image card.

The CPU card needs to be configured to search for its OS binaries on the network (in this case we're going to store them on the framebuilder in Rana's lab). These settings are accessed by plugging a serial cable into the front of the card and using a terminal window to access the menu system. There are some screen caps of this below. As the card is reset, we get the start-up screen, and then we can either do nothing (and a full boot will take place) or press a key to access the menu. From there we can restart the boot process by entering "@" or change the boot settings by entering "c". These are shown below:

 

 

  5   Tue Dec 29 17:50:57 2009 AidanComputingDAQVME crate has proper boot settings

We fixed the start-up settings on the VME crate to look for a TCS startup file on fb0. The settings on the Baja 4700 are now:

  6   Fri Jan 29 10:02:15 2010 AidanComputingDAQNew DAQ ordered

 On the advice of Ben Abbott, I've ordered the Diamond Systems Athena II computer w/DAQ, as well as an I/O board, solid state disk and housing for it. The delivery time is 4-6 weeks.

Diamond Systems Athena II

 

  7   Thu Feb 4 14:05:59 2010 AidanElectronicsRing HeaterRing heater transfer function measurement 240mHz-5Hz

I've been trying to measure the ring heater transfer function (current to emitted power) by sweeping the supply voltage and measuring the emitted power with a photodetector positioned right next to the ring heater.

Last night the voltage was swept with a 1000 mV setting on the SR785, which was fed into the voltage control of the Kepco Bipolar Operational Power Supply/Amplifier, biased around 10 V.

The results are very, very strange. The magnitude of the transfer function decreases at lower frequency. I'll post the data just as soon as I can (ASCII dumps 13 and 14 on the disk from the SR785).

The circuit looks like this:

 

SR785 drive ----> Amplifier ----> Ring Heater : Photodetector ---> SR560 (5000x gain) ----> SR785 input

 

 

  8   Thu Feb 4 15:26:37 2010 AidanElectronicsRing HeaterRing heater transfer function measurement 240mHz-5Hz

Quote:

I've been trying to measure the ring heater transfer function (current to emitted power) by sweeping the supply voltage and measuring the emitted power with a photodetector positioned right next to the ring heater.

Last night the drive was a 1000 mV sweep on the SR785, fed into the voltage-control input of the Kepco Bipolar Operational Power Supply/Amplifier, which was biased around 10 V.

The results are very, very strange. The magnitude of the transfer function decreases at lower frequency. I'll post the data just as soon as I can (ASCII dumps 13 and 14 on the disk from the SR785).

The circuit looks like this:

 

SR785 drive ----> Amplifier ----> Ring Heater : Photodetector ---> SR560 (5000x gain) ----> SR785 input

 

 

 This is wrong. It turns out the SR785 was wired up incorrectly.

  9 | Thu Feb 4 19:45:56 2010 | Aidan | Misc | Ring Heater | Ring heater transfer function - increasing collection area

I mounted the thinner Aluminium Watlow heater inside a 14" long, 1" inner diameter cylinder. The inner surface was lined with Aluminium foil to provide a very low emissivity surface and scatter a lot of radiation out of the end. ZEMAX simulations show this could increase the flux on a PD by 60-100x. 

There was 40 V across the heater and around 0.21 A being drawn. The #9005 HgCdTe photodetector was placed at one end of the cylinder to measure the far-IR (bear in mind this is a 1 mm x 1 mm detector in an open aperture of approximately 490 mm^2). The measured voltage difference between OFF and the steady-state ON solution, after a 5000x gain stage, was around 270 mV. This corresponds to 0.054 mV at the photodiode. Using the responsivity of the PD ~= 0.05 V/W, this corresponds to about 1 mW incident on the PD.
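The gain-chain arithmetic can be checked directly (numbers taken from this entry; the responsivity is the approximate value quoted above):

```python
gain = 5000.0          # SR560 gain stage
v_measured = 0.270     # V, OFF to steady-state ON difference after the gain stage
responsivity = 0.05    # V/W, approximate responsivity of the #9005 HgCdTe PD

v_pd = v_measured / gain          # voltage at the photodetector itself (~54 uV)
p_incident = v_pd / responsivity  # implied power incident on the PD (~1.1 mW)
```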

 

  12 | Mon Feb 8 17:44:38 2010 | Aidan | Electronics | Pre-amplifier | replace Pot with fixed Resistor

Quote:

 

Preamp for Bull's eye detector

It was felt that the pot used at the input stage to remove the offset was adding noise.
To test this, the pot was replaced with a fixed resistor and the offset was removed at the second stage instead.
Noise was measured after the first stage and at the monitor point, first with the pot in place and then with the pot replaced with a resistor.

First stage gain = 1 + 500/10 = 51, so test point 1 gain = 51
Second stage gain = 10K/1K = 10, so test point 2 gain = 51 x 10 = 510

1K pot (R19) present, Chan #1:

                           dBVrms/rtHz                nV/rtHz                   input-referred nV/rtHz
                           200Hz   100Hz   50Hz       200Hz   100Hz    50Hz     200Hz  100Hz  50Hz
  Test point #1 (g=51)    -141.1  -140.0  -136.8       88.1   100.0   144.5      1.7    2.0    2.8
  Test point #2 (g=510)   -119.4  -120.4  -118.4     1071.5   955.0  1202.3      2.1    1.9    2.4

Pot replaced with resistor (R4), Chan #1:

                           dBVrms/rtHz                nV/rtHz                   input-referred nV/rtHz
                           200Hz   100Hz   50Hz       200Hz   100Hz    50Hz     200Hz  100Hz  50Hz
  Test point #1 (g=50)    -142.7  -142.7  -141.9       73.7    73.3    80.8      1.4    1.4    1.6
  Test point #2 (g=500)   -122.0  -121.1  -120.7      794.3   881.0   922.6      1.6    1.7    1.8

When the pot was replaced with R4, the offset was instead removed with the pot at the second gain stage.
R4 was not a thin-film metal resistor.

 

Just a note: this board was for the QPD, not the Bull's eye detector.
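As a cross-check on the table, the conversion from dBVrms/rtHz to a linear noise level, and the referral back to the preamp input, is straightforward (the function names here are just for illustration):

```python
import math

def dbv_to_nv(dbv):
    """Convert a level in dBVrms/rtHz to nV/rtHz."""
    return 10 ** (dbv / 20) * 1e9

def input_referred_nv(dbv, gain):
    """Refer a measured noise level back to the preamp input, in nV/rtHz."""
    return dbv_to_nv(dbv) / gain

# e.g. the -141.1 dBVrms/rtHz entry at test point 1 (gain 51)
# comes out at ~88.1 nV/rtHz, or ~1.7 nV/rtHz referred to the input.
```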

 

  13 | Thu Feb 11 18:04:08 2010 | Aidan | Laser | Ring Heater | Ring heater time constant

I've been looking to see what the time constant of the ring heater is. The attached plot shows the voltage measured by the photodiode in response to the heater turning on and off with a period of 30 minutes.

The time constant looks to be on the order of 600s.
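A single-pole thermal response with this time constant can be modelled as follows (a sketch; the unit steady-state level v_inf is a placeholder, not a measured value):

```python
import math

TAU = 600.0  # s, time constant estimated from the step response

def step_response(t, v_inf=1.0, tau=TAU):
    """Single-pole step response: V(t) = V_inf * (1 - exp(-t / tau))."""
    return v_inf * (1 - math.exp(-t / tau))

# At t = tau the response has reached 1 - 1/e (about 63%) of its final
# value, which is one way to read the time constant off the measured curve.
```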

  14 | Thu Feb 11 21:46:23 2010 | Aidan | Electronics | Ring Heater | Ring heater time constant measurement - start time

After leaving the ring heater off for several hours, I turned on a 40 V, 0.2 A supply at a GPS time of 949 988 700.

The channel recording the PD response is C2:ATF-TCS_PD_HGCDTE_OUT.

However, there is an offset between the time at which something happens and the time at which it is recorded. The GPS clock read the time above when I switched on the heater voltage, but if you play the channel back in Dataviewer, the temperature starts to increase around 80 s BEFORE the heater current was switched on. This needs to be calibrated away!

  15 | Fri Feb 12 11:39:28 2010 | Aidan | Electronics | Ring Heater | Ring heater transfer function

I applied a step function to the silver WATLOW heater and measured the response with the photodiode. The power spectrum of the derivative of the PD response is attached. The voltage isn't calibrated, but that's okay because right now we're only interested in the shape of the transfer function. It looks like a single pole around 850 uHz. Above 4 or 5 mHz the noise floor is too high to say anything about the transfer function.
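For a single pole at 850 uHz, the equivalent 1/e time constant and the magnitude roll-off follow directly (a quick sanity check, not a fit to the data):

```python
import math

f_pole = 850e-6                          # Hz, estimated pole frequency
tau = 1.0 / (2.0 * math.pi * f_pole)     # equivalent time constant, ~187 s

def pole_mag(f, fp=f_pole):
    """Magnitude of a single-pole low-pass response at frequency f."""
    return 1.0 / math.sqrt(1.0 + (f / fp) ** 2)

# At the pole frequency the magnitude is down by 1/sqrt(2), i.e. -3 dB.
```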

 

 
