Below is a table summarizing the results of recent thermal defocus experiments. The values are the calculated change in measured defocus per unit temperature change of the sensor:
More detail on these experiments will be available in my second progress report, which will be uploaded to the LIGO DCC by next Monday.
The main purpose of this particular eLog is to summarize what functions I wrote and used to do this data analysis, and how I used them. All relevant code which is referenced here can be found on the SVN; I uploaded my most recent versions earlier today.
Here is a flowchart summarizing the three master functions which were used for each experiment:
py4plot.m is probably the most complicated of these three functions in terms of the amount of data analysis done, so here's a flowchart which shows how the function works and the main subfunctions that it calls:
Also, here is a step-by-step example of how these functions might be used during a particular experiment:
(1) Suppose that I have an experiment named "73010a", in which I wish to take 40 images of 200 sums each. I would open the code for framesumexport2.py and change lines 7, 8 and 17 to read:
I would then save the changes and double-check that the output basename had indeed been changed to 73010a (the script will overwrite existing data files if you forget to change the basename before running it). I would then let the script run, changing the set temperature of the lab after the first summed image was taken. Note that the total duration of the measurement depends on how many images are summed and how many summed images are taken: in this example, taking each single image at a rate of 11 Hz, data collection would take ~20 seconds, but data processing (summing the images) would take ~4 minutes, on the order of ~1 second per image in the sum. The script isn't very quick at summing images, obviously.
EDIT (7/30 3:40pm): I just updated framesumexport2.py so that the program prompts you for this information. I also enabled execute permissions on the copy of the code on the Hartmann machine located in /users/jkunert/, so going to that directory, typing ./framesumexport2.py and inputting the information when prompted is all you need to do now. There is no need to change the actual code every time any more.
(2) Once data collection had ceased entirely, I would open MATLAB and enter the following:
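Something along these lines -- a hypothetical call, since the actual import/centroiding master function's name and signature should be checked against the SVN copy:

    [x, y, M] = pyimportM('73010a');   % 'pyimportM' is a made-up name for the import/centroiding function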
The function would then look for 73010a.raw and 73010a.txt in ./opt/EDTpdv/ and import the 40 images individually and centroid them. The x and y outputs are the centroid locations. If, for example, 952 centroids were located, x and y would be 952x1x40 arrays. M would be a 40x4 array of the form:
[time_before_img_taken time_after_img_taken digitizer_temp sensor_temp]
(3) Once MATLAB had finished the previous function, I would input the next command (a hypothetical example call is sketched after the input list below):
The inputs are, respectively:
(1) python output basename,
(2) first image to analyze (where the first image is image 0),
(3) last image to analyze,
(4) x data (or, rather, the data to analyze; to analyze y instead, just swap "x" and "y" in the inputs),
(5) y data (or, if you want to analyze the y-direction, "x" would be the entry here),
(6) number of sums in each image (as a string),
(7) range of centroids to include in the analysis (if you have 952 centroids, for example, and no ridiculous noise at the edges of the CCD, then [1 952] would be the best entry here),
(8) outlier tolerance (the number of standard deviations from the initial fit line within which a datapoint must lie to be included in the second line fitting, in the dx vs. x plot),
(9) exponential fitting structure (input an empty structure unless the temperature/time exponential fit turns out poorly, in which case a better fit parameter guess can be inputted as field tG.guess).
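Putting the list together, a hypothetical example call might look like the following (the argument order follows the list above; the outlier tolerance of 2 is just an illustrative value, and the exact signature should be checked against the SVN copy of py4plot.m):

    tG = struct();   % empty structure: keep the default exponential fit guess
    py4plot('73010a', 0, 39, x, y, '200', [1 952], 2, tG)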
For Wednesday, June 16:
For Thursday, June 17:
Today I attended a basic laser safety training orientation, the second Introduction to LIGO lecture, a Summer Research Student Safety Orientation, and an Orientation for Non-Students Living on Campus (lots of mandatory meetings today). I met with Dr. Willems and Dr. Brooks in the morning and went over some background information regarding the project; in the afternoon, talking with Dr. Brooks gave me an idea of where to go from here. I read over the paper "Adaptive thermal compensation of test masses in advanced LIGO" and the LIGO TCS Preliminary Design document, and did some further reading in the Brooks thesis.
I'm making a little progress with accessing the Hartmann lab computer with Xming, but I got stuck and hope to sort it out in the morning (I can't get much further right now, since laser authorization restrictions keep me from using the Hartmann computer in the lab itself). I'm currently able to remotely open an X terminal on the server, but I wasn't able to figure out how to log in to the Hartmann computer from there. I can do it via SSH on that terminal, of course, but I run into the same access restrictions I hit when logging in to the Hartmann computer via SSH directly from my laptop: I can log in just fine and access the camera and framegrabber programs, but for the vast majority of the stuff on there, including MATLAB, I don't have permissions for some reason and just get 'access denied'. I'm sure that somebody who actually knows this stuff will be able to point out the problem and point me in the right direction fairly quickly (I've never used SSH or the X Window System before, which is why this is taking me a while, but it's been a great learning experience so far).
Goals for tomorrow: get that all sorted out, learn how to fully access the Hartmann computer remotely, and run MATLAB off of it. Familiarize myself with the camera program. Set the camera into test pattern mode and use the 'take' programs to retrieve images from it. Familiarize myself with the 'take' programs, their various options and settings, and the other framegrabber programs. Get MATLAB running and use fread to import the image data arrays I take, with the proper data representation (uint16 for each array entry). Then set the camera back to recording actual images, take those images from the framegrabber, save them, and import them into MATLAB. I should familiarize myself with the various settings of the camera at this stage as well.
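For my own reference, importing one raw frame should look something like this minimal sketch (the filename and the 1024x1024 frame size are assumptions):

    fid = fopen('frame.raw', 'r');                    % hypothetical raw image file
    img = fread(fid, [1024 1024], 'uint16=>uint16');  % one frame, uint16 entries
    fclose(fid);
    imagesc(double(img)); axis image; colorbar;       % quick look at the frame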
I started out the day by taking some images from the CCD with the OLED switched off, to just look at the pattern when it's dark. The images looked like this:
Taken with camera settings:
The statistical analysis of them using the functions from Friday gave the following result:
At first glance, the distribution looks pretty Poissonian, as expected. There are a few scattered pixels registering a little brighter, but that's perhaps not so terribly unusual, given the relatively tiny spread of intensities even for the most extreme outliers. I won't say for certain whether or not there might be something unexpected at play here, but I don't notice anything as unusual as the standard deviation 'spike' seen from intensities 120-129 observed in yesterday's log.
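For reference, the statistical analysis above boils down to per-pixel statistics over the image series, along these lines (a minimal sketch, with the array name and stack dimensions assumed):

    imgs = double(imgs);    % MxNxK stack of K dark images
    m = mean(imgs, 3);      % per-pixel mean over the series
    s = std(imgs, 0, 3);    % per-pixel standard deviation
    plot(m(:), s(:), '.');
    xlabel('mean intensity [counts]'); ylabel('standard deviation [counts]');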
Speaking of that spike, the rest of the day was spent trying to investigate it a little more. In order to accomplish this, I wrote the following functions (all attached):
-spotfind.m -- inputs a 3D array of several Hartmann images, a starting pixel, and a threshold intensity level. It scans the first image starting at the starting pixel until it finds a spot (with an edge determined by the threshold level), then finds a box of pixels which completely surrounds the spot and shrinks the matrix down to this size, localizing the image to a single spot
-singspotcent.m -- inputs the image array output by spotfind, subtracts an estimate of the background, then uses the centroiding algorithm sum(x*P^2)/sum(P^2) to find the centroid (where x is the coordinate and P is the intensity level; see the sketch after this list), then outputs the centroid location
-hemiadd.m -- inputs the image from spotfind and the centroid from singspotcent, subtracts an estimate of the background, then finds the total intensity in the top half of the image above the centroid, the bottom half, the left half and the right half. It outputs these values as n-component vectors for an n-image input, subtracts from each vector its mean, and then plots the deviations in intensity from the mean in each half of the image as a function of time
-edgeadd.m -- similar to hemiadd, except that rather than adding up all pixels in one half of the image, it inputs a threshold, determines how far to the right of the centroid the spot falls past this threshold and uses that as a radial length, then finds the sum of the intensities of a bar of 3 pixels on this "edge" at the radial length away from the centroid.
-spotfft.m -- performs a fast Fourier transform on the outputs from edgeadd, outputting the frequency spectrum at which the intensity of these edge pixels oscillates, then plotting these for each of the four edge vectors. See an example output below.
--halfspot_fluc.m and halfspot_edgefluc.m -- master functions which combine and automate the preceding functions
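For reference, the intensity-squared weighted centroid used in singspotcent.m amounts to something like this minimal sketch (variable names here are mine, not necessarily those in the function):

    P = double(spotimg) - bg;             % background-subtracted spot image
    [X, Y] = meshgrid(1:size(P,2), 1:size(P,1));
    W = P.^2;                             % weight each pixel by intensity squared
    xc = sum(X(:).*W(:)) / sum(W(:));     % centroid x-coordinate
    yc = sum(Y(:).*W(:)) / sum(W(:));     % centroid y-coordinate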
Dr. Brooks has suggested that the observed flickering might be an effect of the finite thickness of the Hartmann plate. The OLED can be treated as a point source, and can thus be approximated as emitting a spherical wavefront, so the light from it will hit the plate edges at an angle and be scattered onto the CCD. If the plate vibrates (which it certainly must, to some degree), the wavefront will hit an edge at a different angle as the edge is displaced, and the light will hit the CCD at a different point, causing the flickering (which is, after all, observed to occur near the edge of the spot). This effect certainly must cause some level of noise, but whether it's the culprit for our 'flickering' spike in the standard deviation remains to be seen.
Here is the frequency spectrum of the edge intensity sums for two separate spots, found over 128 images:
Intensity Sum Amplitude Spectrum of Edge Fluctuations, 128 images, spot search point (100,110), threshold level 110
128 images, spot search point (100,100), threshold level 129
At first glance, I am not able to conclude anything from this data. I should investigate this further.
A few things to note, to myself and others:
--I still should construct a Bode plot from this data and see if I can deduce anything useful from it
--I should think about whether or not my algorithms are good for detecting what I want to look at. Is looking at a 3 pixel vertical or horizontal 'bar' on the edge good for determining what could possibly be a more spherical phenomenon? Are there any other things I need to consider? How will the settings of the camera affect these images and thus the results of these functions?
--Am I forgetting any of the subtleties of FFTs? I've confirmed that I am measuring the amplitude spectrum by looking at reference sine waves, but I should be careful since I haven't worked with these in a while
It's late (I haven't been working on this all night, but I haven't gotten the chance to type this up until now), so thoughts on this problem will continue tomorrow morning.
Today I spoke with Dr. Brooks and got a rough outline of what my experiment for the next few weeks will entail. I'll be getting more of the details and getting started a bit more, tomorrow, but today I had a more thorough look around the Hartmann lab and we set up a few things on the optical table. The OLED is now focused through a microscope to keep the beam from diverging quite as much before it hits the sensor, and the beam is roughly aligned to shine onto the Hartmann plate. The Hartmann images currently look like this (on a color scale of intensity):
This image was taken with the camera set to an exposure time of 650 microseconds and a frequency of 58 Hz. The visible 'streaks' on the image are believed to possibly be an artifact of the camera's data acquisition process.
I tested to see whether the same 'flickering' is present in images under this setup.
With the frequency kept at 58 Hz, the following statistics were found from a 200x200 pixel box within a series of 10 images taken at different exposure times. Note that the range on the plot has been reduced to the region near the relevant feature, and that this range is not changed from image to image:
5000 microseconds. Note that the background level is approaching the level of the feature:
6000 microseconds. Note that the axis setup is not restricted to the same region, and that the background level exceeds the level range of the feature. This demonstrates that the 'feature' disappears from the plot when the plot does not include the specific range of ~115-130:
When images containing the feature intensities are averaged over a greater number of images, the plot takes on the following appearance (for a 200x200 box within a series of 100 images, 3000us exposure time):
This pattern changes a bit when averaged over more images. It looks as though this could, perhaps, just be the result of the decrease in the standard deviation of the standard deviations in each pixel resulting from the increased number of images being considered for each pixel (that is, the line being less 'spread out' in the y-axis direction).
To demonstrate that frequency doesn't have any effect, I got the following plots from images where I set the camera to different frequencies and kept the exposure time at 3000us (I wouldn't expect frequency to have any effect, given the previous images, and these appear to confirm that the 'feature' does not vary with it):
Set to 30Hz:
Set to 1Hz:
To make sure that something weird wasn't going on with my algorithm, I did the following. I constructed a 10-component vector of random numbers, then concatenated that vector beside itself ten times. I then stacked that 2D array into a 3D array by scaling it with ten different integer multiples, ensuring that the standard deviations of each row would be integer multiples of each other when the standard deviation was taken along the direction of the random change (I chose the integer multiples to ensure that some of these values would fall within the range of 115-130). Thus, if my function wasn't making any weird mistakes, I would end up with a linear plot of standard deviation vs. mean, with a slope of 1. When the array was input into the function with which the previous plots were found, the output plot was indeed linear, and a least squares regression of the mean/deviation data confirmed that the slope was exactly 1 and the intercept exactly 0. So I'm pretty certain that the feature observed in these plots is not any sort of 'artifact' of the algorithm used to analyze the data (all the functions are pretty simple, so I wouldn't expect it to be, but it doesn't hurt to double-check).
I would conjecture from all of this that the observed feature in the plots is the result of some property of the CCD array or other element of the camera. It does not appear to have any dependence on exposure time or to scale with the relative overall intensity of the plots, and, rather, seems to depend on the actual digital number read out by the camera. This would suggest to me, at first glance, that the behavior is not the result of a physical process having to do with the wavefront.
EDIT: Some late-night conjecturing. Consider the following:
I don't know how the specific analog-to-digital conversion onboard the camera works, but I got to thinking about ADCs. I assume, perhaps incorrectly, that it works on roughly the same idea as the flash ADCs I dealt with back in my digital electronics class. That is, I don't know if it has the same structure (a linear resistor ladder hooked up to comparators which compare the ladder voltages to the analog input, followed by combinational logic which takes the comparator outputs and produces a digital level), but I assume that it must, at some level, compare the analog input to a number of voltage thresholds, take the highest threshold that the analog input exceeds, and output the digital level corresponding to that threshold voltage.
Now, consider if there was a problem with such an ADC such that one of the threshold voltages was either unstable or otherwise different than the desired value (for a Flash ADC, perhaps this could result from a problem with the comparator connected to that threshold level, for example). Say, for example, that the threshold voltage corresponding to the 128th level was too low. In that case, an analog input voltage which should be placed into the 127th level could, perhaps, trip the comparator for the 128th level, and the digital output would read 128 even when the analog input should have corresponded to 127.
So if such an ADC was reading a voltage (with some noise) near that threshold, what would happen? Say that the analog voltage corresponded to 126 and had noise equivalent to one digital level. It should, then, give readings of 125, 126 or 127. However, if the voltage threshold for the 128th level was off, it would bounce between 125, 126, 127 and 128 -- that is, it would appear to have a larger standard deviation than the analog voltage actually possessed.
Similarly, consider an analog input voltage corresponding to 128 with noise equivalent to one digital level. It should read out 127, 128 and 129, but with the lower-than-desired threshold for 128 it would perhaps read out only 128 and 129 -- that is, the standard deviation of the digital signal would be lower for points just above 128.
This is very similar to the sort of behavior that we're seeing!
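To make the idea concrete, here is a minimal numerical sketch of the hypothesis (every specific number here is made up for illustration):

    thr = 1:256;                   % idealized threshold voltages, in digital-level units
    thr(128) = 127.7;              % suppose level 128's threshold is too low
    quant = @(v) arrayfun(@(x) sum(x >= thr), v);   % output = highest threshold exceeded
    a_lo = 126.5 + randn(1e5,1);   % analog input with mean just below the bad level
    a_hi = 128.5 + randn(1e5,1);   % analog input with mean just above it
    [std(quant(a_lo)) std(quant(a_hi))]  % expect the first inflated, the second reduced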
Thinking about this further, I reasoned that if this was what the ADC in the camera was doing, then if we looked in the image arrays for instances of the digital levels 127 and 128, we would see too few instances of 127 and too many instances of 128 -- several of the analog levels which should correspond to 127 would be 'misread' as 128. So I went back to MATLAB and wrote a function to look through a 1024x1024xN array of N images and, for every integer between an inputted minimum level and maximum level, find the number of instances of that level in the images. Inputting an array of 20 Hartmann sensor images, along with minimum and maximum levels of 50 and 200, gave the following:
Look at that huge spike at 128! This behavior is more complex than my simple picture, which would just result in 127 having "too few" counts and 128 "too many", but to me it seems consistent with the hypothesis that the voltage threshold for the 128th digital level is too low: it produces false output readings of 128 and also reduces the number of correct outputs for values just below 128. And assuming that I'm thinking about the workings of the ADC correctly, this is consistent with an increase in the standard deviation of the digital level for values with a mean just below 128 and a lower standard deviation for values with a mean just above 128, which is what we observe.
This is my current hypothesis for why we're seeing that feature in the plots. Let me know what you think, and if that seems reasonable.
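For completeness, the level-counting function described above amounts to roughly the following (a minimal sketch; the function name is made up):

    function counts = levelcount(imgs, minlev, maxlev)
    % Count the occurrences of each digital level in a 1024x1024xN image array
    levels = minlev:maxlev;
    counts = histc(double(imgs(:)), levels);   % one bin per integer level
    bar(levels, counts);
    xlabel('digital level'); ylabel('number of occurrences');
    end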
So in addition to taking steps towards starting to set stuff up for the experiment in the lab, I spent a good deal of the day figuring out how to use the pre-existing code for finding the centroids in spot images. I spent quite a bit of time trying to use an outdated version of the code that didn't work for the actual captured images, and then once I was directed towards the right version I was hindered for a little while by a bug.
The 'bug' turns out to be something very simple, yet relatively subtle. The function centroid_images.m in '/opt/EDTpdv/hartmann/src/' was returning a threshold of 0 for my images, even though it had been working on an image that Dr. Brooks loaded not long before. Looking through the code, I noticed that before finding the threshold using the MATLAB function graythresh, several adjustments are made to subtract out the background and normalize the array: after estimating and subtracting a background, the function divides the entries of the image array by the maximum value in the image. For arrays of numbers represented as doubles, this is fine. However, the function that I wrote to import my image arrays into MATLAB outputs an array with integer data. So when centroid_images.m divided my integer image arrays by the maximum value in the array, it rounded every value in the array to the nearest integer -- that is, the "normalized" array contained only ones and zeros. graythresh views this as a black-and-white image, and thus outputs a threshold of 0.
To remedy this, I edited centroid_images.m to convert the image array into an array of doubles near the very beginning of the function. The only new line is simply "image=double(image);", and I made a note of my edit in a comment above that line. The function started working for me after I did that.
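The pitfall is easy to reproduce at the command line (a minimal sketch with made-up numbers):

    img_int   = uint16([100 200; 300 400]);   % integer image data, as my import function returns
    bad_norm  = img_int / max(img_int(:));    % integer division rounds: only 0s and 1s remain
    img_dbl   = double(img_int);              % the fix: convert to double first
    good_norm = img_dbl / max(img_dbl(:));    % proper normalization onto [0,1]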
I then wrote a function which automatically centroids an input image and then plots the centroids as scatter-plot of red circles over the image. For an image taken off of the Hartmann camera, it gave the following:
Zoomed in on the higher-intensity peaks, the centroids look good. They're a little offset, but that could just be an artifact of the plotting procedure; I can't say for certain either way. They all appear offset by the same amount, though:
One problem is that, for spots with a much lower relative intensity than the maximum intensity peak, the centroid appears to be offset:
Better centering of the beam and more even illumination of the Hartmann plate could mitigate this problem, perhaps.
I also wrote a function which inputs two image matrices and outputs vector field plots representing the shift in each centroid from the first to the second images. To demonstrate that I could use this function to display the shifting of the centroids from a change in the wavefront, I translated the fiber mount of the SLED in the direction of the optical axis by about 6 turns of the z-control knob (corresponding to a translation of about 1.9mm, according to the user's guide for the fiber aligner). This gave the following images:
Before the translation:
This led to a displacement of the centroids shown as follows:
Note that the magnitudes of the actual displacements are small, making the shift difficult to see. However, when we scale the displacement vectors up, we get much more readily visible direction vectors (having the same direction as the actual displacement vectors, but not the same magnitude):
This was a very rough sort of measurement, since exposure time, focus of the microscope optic, etc. were not adjusted, and the centroids are compared between single images rather than composite images, meaning that random noise could have quite an effect, especially for the lower-magnitude displacements. However, this plot appears to show the centroids 'spreading out', which is as expected for moving the SLED closer to the sensor along the optical axis.
The following MATLAB functions were written for this (both attached):
centroidplot.m -- calls centroid_image and plots the data
centroidcompare.m -- calls centroid_image twice for two input matrices, using the first matrix's centroid output structure as a reference for the second, then makes a vector field plot from the displacements and reference positions in the second output centroid structure.
In order to conduct future optical experiments with the SLED and to be able to predict the behavior of the beam as it propagates across the table and through various optics, it is necessary to know the properties of the beam. The spot size, divergence angle, and radius of curvature are all of interest if we wish to be able to predict the pattern which should appear on the Hartmann sensor given a certain optical layout.
Mathematica was used to simplify this integral, and it showed it to be equivalent to:
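(The equation itself was an attached image; a standard knife-edge form consistent with the parameters discussed below -- my reconstruction, not a transcription of the attachment -- is:

    Ipd(xm) = (pi*A*w^2/4) * Erfc( sqrt(2)*(xm - x0)/w )
)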
where Erfc() is the complementary error function. Note that for fixed z, this intensity is a function only of xm. If an experiment were carried out to measure the intensity of the beam blocked by a plate from x=-inf to x=xm for multiple values of xm, it would therefore be possible, via regression analysis, to compute the best-fit values of A, w, and x0 from the measured values of Ipd and xm. This would give us A, w and x0 for that z-value. By repeating this process for multiple values of z, we could therefore find the behavior of these parameters as functions of z.
The razor blade was mounted on a New Focus 9091 translation stage, whose relative displacement in the x-direction was measured with the Vernier micrometer mounted on the base. Tape was placed on the front of the razor so as to block light from passing through any of its holes. The portion of the beam not blocked by the razor then passed through a lens which focused the beam onto a PDA1001A large-area silicon photodiode, whose output voltage was monitored with a Fluke digital multimeter. The razor assembly stayed securely clamped onto the optical table (except when it was translated in the x-direction once during the experiment, as described later).
A MATLAB function 'gsbeam.m' was written to replicate the function:
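The replicated expression was attached as an image; under the knife-edge form reconstructed above, gsbeam.m plausibly looks something like this sketch (the fitting call below is illustrative only, with made-up variable names):

    function Ipd = gsbeam(x, A, w, x0)
    % Knife-edge photodiode power vs. blade position (assumed form)
    Ipd = (pi*A*w.^2/4) .* erfc(sqrt(2)*(x - x0)./w);
    end

    % Hypothetical regression at one z-position (requires Statistics Toolbox):
    p0 = [max(Ipd_meas) 1e-3 0];   % rough initial guess for [A w x0]
    p  = nlinfit(xm_meas, Ipd_meas, @(p,x) gsbeam(x, p(1), p(2), p(3)), p0);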
(Note that the width calculated from the 26th measurement is not included in the regression calculation nor on this plot. The width parameter was calculated as exactly the same as for the 25th measurement, despite the other parameters varying between the measurements. I suspect that the beam size was starting to exceed the dimensions blocked by the razor and that this caused the problem; that would be easy to check, but I have yet to do it. Regardless, the fit looks good from just the other 25 measurements.)
A quick write-up on recent work can be found at: Google Docs
I can't find a TeX interpreter or any other sort of equation editor on the eLog, which is why I kept it on Google Docs for now instead of transferring it over.
My previous eLog details how the noise in Hartmann sensor defocus measurements appears to vary with ambient light. New troubleshooting analysis reveals that the rapid shifts in the noise were still related to the ambient light, in a sense, but that ambient light is not the real issue: rather, the noise was the result of some trouble with the centroiding algorithm.
The centroiding functions I have been using can be found on the SVN under /users/aidan/cit_centroid_code. When finding centroids for non-uniform intensity distributions, it is desirable to avoid simply using a single threshold level to isolate individual spots, as dimmer spots may be below this threshold and would therefore not be "seen" by the algorithm. The centroiding functions used here get around this issue by initially setting a relatively high threshold to find the centroids of the brighter spots, and then fitting a hexagonal close-packed array to these spots so as to be able to infer where the rest of the spots are located. Centroiding is then done within small boxes around each estimated centroid location (as determined by the hexagonal array). The functions "find_hex_grid.m" and "flesh_out_hex_grid.m" serve the purpose of finding this hexagonal grid. However, there appear to be bugs in these functions which compromise the ability of the functions to accurately locate spots and their centroids.
The centroiding error can be clearly seen in the following plot of calculated centroids plotted against the raw image from which they were calculated:
At the bottom of the image, it can be seen that the functions fail at estimating the location of the spots. Because of this, centroiding is actually being done on a small box surrounding each point which consists only of the background of the image. This can explain why these centroids were calculated to have much larger displacements and shifted dramatically with small changes in ambient light levels. The centroiding algorithm was being applied to the background surrounding each of these points, so it's very reasonable to believe that a non-uniform background fluctuation could cause a large shift in the calculated centroid of each of these regions.
It was determined that this error arose during the application of the hex grid by going through the centroiding functions step-by-step to narrow down where specifically the results appeared to be incorrect. The function's initial estimate for the centroids right before the application of the hex grid is shown plotted against the original image:
The centroids in this image appear to correspond well to the location of each spot, so it does not appear that the error arises before this point in the function. However, when flesh_out_hex_grid and its subfunction find_hex_grid were called, they produced the following hexagonal grid:
It can be seen in this image that the estimated "spot locations" (the intersections of the grid) near the bottom of the image differ from the actual spot locations. The centroiding algorithm is applied to small regions around each of these intersections, which explains why the calculated "spot centroids" appear at incorrect locations.
It will be necessary to fix the hexagonal grid fitting so as to allow for accurate centroiding over non-uniform intensity distributions. However, recent experiments in measuring thermally induced defocus produce images with a fairly uniform distribution. It should therefore be possible to find the centroids of the images from these experiments to decent accuracy by simply temporarily bypassing the hexagonal-grid fitting functions. To demonstrate this, I analyzed some data from last week (experiment 72010a). Without bypassing the hex-grid functions, analysis yielded the following results:
However, when hexagonal grid fitting was bypassed, analysis yielded the following:
The level of noise in the centroid displacement vs. centroid location plot, though still not ideal, is seen to decrease by nearly two orders of magnitude. This indicates that bypassing or fixing the problems with the hexagonal grid fitting functions should enable a more accurate measurement of thermally induced defocus in future experiments.
[JC, Chub, Radhika]
Chub and I ordered a few parts from McMaster in order to build a handrail-like stopper to keep the dewar from falling over. We also cut off the excess 8020 which was hanging over the table, so that everything fits. To hold down the support for the dewar, Radhika and I decided to use C-clamps from the EE shop.
The desktop computer is now running Debian Linux.
Small/medium size gloves need to be ordered so that the optics can be handled carefully.
Today, I set up a system consisting of the 520 nm laser, a 2'' mirror and two lenses of focal lengths f1 = 40 cm and f2 = 20 cm. The goal was to collimate the beam coming from the laser so that it passes parallel through the test optic at a radius of ~2.5 cm, and then to focus it to a radius of ~1.2 cm to fit the CCD dimensions of the HWS. The mirror was placed about 1 cm from the laser, and the first lens is set up at a distance of ~f1 = 40 cm from the mirror. The test optic is placed between the two lenses, and the second lens is placed about 10 cm from the CCD. The distance between the two lenses isn't critical and could change in the future. The lenses and mirrors are all labeled.
I measured the approximate divergence angle (0.06 rad) of the laser by measuring the beam diameter at different positions along the propagation axis. This allowed the ABCD matrix calculations to be finalized and the focal lengths of the lenses to be chosen accordingly.
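For the record, the ABCD bookkeeping amounts to something like this minimal sketch (the 30 cm lens spacing is an arbitrary placeholder, and the source is assumed to sit at the front focus of the first lens):

    theta = 0.06;                % measured divergence angle (rad)
    f1 = 0.40; f2 = 0.20;        % focal lengths (m)
    lens  = @(f) [1 0; -1/f 1];  % thin-lens ABCD matrix
    space = @(d) [1 d; 0 1];     % free-space propagation over d meters
    ray = space(f1) * [0; theta];                   % marginal ray at lens 1: height f1*theta ~ 2.4 cm
    ray = lens(f2) * space(0.30) * lens(f1) * ray;  % collimated by lens 1, focused by lens 2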
In order to have more space in the box, I moved everything that was not necessary off to the side.
The previous 2-lens setup focused the beam to a tight spot; however, due to the divergence angle of the laser beam, a significant amount of power was not being captured by the first lens at a distance of 40 cm from the source. The divergence angle seems to be bigger than 0.06 rad by a factor of ~2, so an f = 20 cm lens was used to collimate the beam and an f = 30 cm lens was used to focus it. A mirror was used to reflect the beam, giving us steering control. Additionally, the focusing lens was placed on a small 1-axis stage in order to control the distance of the lens from the CCD, providing control over the focused beam size.
Note: The 30 cm lens was cleaned with methanol; however, it still has some residue on the surface. The beam imaged onto the Hartmann sensor looks good, but the lens will either be cleaned with a different solvent or replaced by a different 30 cm lens. The 3 lenses at the edge of the box will stay inside in order to prevent contamination; however, they will not be used in the design.
Since we set up the 2-lens system focusing the laser beam onto the CCD, the next step was to mount the spherical reflector (31 mm wide) and the heater (~3 mm diameter). I used a small 3-axis stage to mount the heater, providing 3 degrees of freedom that allow us to manipulate the height of the heater and its position with respect to the reflector (left-right and in-out). The reflector was mounted in such a way that we can control its rotation angle, height and horizontal displacement. The current design is not very sophisticated, as it is just a first test; I will look into different tools in the lab to see if I can use fewer mounts to get the same degrees of freedom.
The new heaters are supposed to be driven with AC. We used a DC power supply and applied ~30 V across the wire; however, only about ~50 mA of current flowed through it. Jon will look into the specs of the new heaters to see if the power supply was the problem.
Yesterday, we were able to take some data using the 120 V DC power supply. The reflectors cut at the focal point and at the radius were both tested; the semi-circle cut proved to give a better focus, likely because roughly half the heat is lost with the focal-point reflectors. For upcoming tests, the semi-circle reflectors will be used. We varied the surface shine by using the dull and reflective sides of Al foil, as well as the machined Al itself. The best result was given by the more reflective side of the Al foil.
Figure 1 shows the steady-state surface deformation profile detected by the HWS. The heaters don't have a uniform distribution along the wire, so more heat is radiated at the center of the wire, and thus more of it is focused onto the center of the test optic. The data needs to be analyzed to determine the radius of the focus; our rough estimate is ~1.5 - 2 cm. We cannot collect any more data until we get a new power supply (120 V AC).
Today, I came up with a new design for mounting the reflectors. I used a big 3-axis stage and a small 4-axis stage. This provides 5 degrees of freedom, 3 translational and 2 rotational, which is what we need for fine-tuning the focus and directing it at different angles incident to the test optic. The only problem with this design is that the 3-axis stage is too tall for the box, so the lid won't close. There is a smaller one available, but I have to figure out a way to increase its height, since its screw size is different from those of the available pedestals.
Additionally, Chub used metal-to-metal epoxy to glue a screw to the back of a reflector. I will wait until tomorrow to test it, because it is a slow-acting epoxy. If it works, I have the necessary tools to do the same with the other reflectors. With the current design, the reflector will be screwed in where the round screw is in the stage. If it heats up a lot and affects the material of the stages, a small optical post (on top of the stage) will be used to make up for the absorbed heat.
I took images of the heat pattern produced by the semi-circle reflector, projected onto a piece of paper. I used 108 V to drive current through the heater. I tested the reflector without any coating and then with the dull and shiny sides of Al foil. I wasn't able to test the focal-point-cut reflector because I had to glue a screw to it with epoxy, which cures overnight; I will do these measurements tomorrow. Figure 2 shows the setup I used to get the data. The shiny side of Al foil is better at IR, so we will use that for the wavefront measurements.
We got 11 new semi-circle cut reflectors of radius ~3.6 cm. I glued a screw to the back of one reflector using the same epoxy as for the previous reflectors. Due to the bigger ROC of the reflector, a tight focus is achievable at greater distances (~15 cm).
Cheryl Vorvick, Chris Guido, Phil Willems
Attached is a PDF with some initial noise testing. There are 5 spectrum plots (not including the preamp spectrum) of the laser. The first two are with V_DC around 100 mV, and the other three are with V_DC around 200 mV (as measured with the 100X-gain preamplifier, so ideally 1 and 2 mV actual). We took one spectrum (at each power level) with no attempt at noise reduction and one spectrum with the lights off and a makeshift tent to reduce air flow. The 5th plot is at 200 mV with the tent and the PZT on (the other 4 have the PZT off).
The second plot shows the spectra divided by their respective V_DC to get an idea of the RIN.
Went down to the lab and showed Rana the setup. He's fine with me being down there as long as I let someone know. He also recommended using an adjustable mount (three screws) for the test mirror instead of the mount with a top bolt and two nubs on the bottom; he thinks the one with three screws as constraints for the silica will be easier to model (and gives more symmetric constraints).
Mounted the f=8" lens (used a 2" pedestal) and placed it on the table so the image fit well on the CCD and so a sharp object in front of the lens resulted in a sharp image. The beam was clipping the f=4" lens (between gold mirror and test mirror) so I spent time moving that gold mirror and the f=4" lens around. I'll still need to finish up that setup.
The beam reflecting off the test mirror was clipping the lens between gold mirror and test mirror, so I reconfigured some of the optics, unfortunately resulting in a larger angle of incidence.
From the test mirror, the beam size increases much too rapidly to fit onto the 2-inch-diameter f=8" lens that was meant to resize the beam for the CCD of the HWS. It seems that the f=8" lens can go about 6 inches from the test mirror, and an f ~ 2.3" (60 mm) lens can go about 2 inches in front of the CCD to give the appropriate beam size. However, the image doesn't seem very sharp.
The beam is also not hitting the CCD currently because of the increase in angle of incidence on the test mirror and limitations of the box. I'd like to move the HWS closer to the SLED (and will then have to move the SLED as well).
The table is set up. The HWS and SLED were moved slightly, and a minimal angle between the test mirror and HWS was achieved.
There are two possible locations for the f=60mm lens that will achieve appropriate magnification onto the HWS: 64cm or 50 cm from the f=200mm lens.
At 64 cm away: approximately 79000 saturated pixels and an average pixel value of 1054.
At 50 cm away: approximately 22010 saturated pixels and an average pixel value of 1076.
Currently the setup is at 64 cm. The image could afford to be more magnified, so we might want to move the f=60mm lens around. Also, if we're going to need to be able to access the HWS (i.e. to screw on the array), we might want to move to the 50 cm location.
With Jon's help, I changed the setup to include a mode-matching telescope built from the f=60mm (1 inch diameter) lens and the f=100mm lens. These lenses are located after the last gold mirror and before the test optic. The height of the beam was also adjusted so that it is more centered on these lenses. Note: these two lenses cannot be much further apart from each other than they currently are, or the beam will be too large for the f=100mm lens.
We considered different possible mounts to use for the test optic, and decided to move it to a mount where there is less contact. The test optic was also moved closer to the HWS to achieve appropriate beamsize on the optic coming from the mode-matching telescope.
The f=200mm lens is now approximately 2/3 of the distance from the test optic to the HWS, resulting in an appropriately sized beam at the HWS.
Current was also turned down to achieve 0 saturated pixels.
Attached the grid array of the HWS.
Applied voltage (5V, 7V, 9.9V, 14V) to the heater pad and took measurements of T and spherical power (aka defocus).
The adhesive of the temperature sensor isn't very sticky. The first time I attached it, it peeled off (the second time it partially peeled off). We want to put it on the Al side if possible.
Bonded a mirror (thickness ~6 mm) to an aluminum disk (thickness ~5 mm); it's still curing.
To the best of my ability, calculated the magnification of the plane of the test optic relative to the HWS (2.3) and input this value.
Increased the temperature slightly and saved data points of defocus to txt files when temperature leveled out. This was a slow process, as it takes a while for things to level out. I only got up to about 28.5C, and will need to continue this process.
I also plotted the best-fit defocus for each temperature from COMSOL (temperature vs. defocus), and comparing with values from the HWS it seems that we're off by a normalization factor of approximately 4.
- Had a meeting to talk about the basics of LIGO (esp. TCS) and discuss the project
- Created COMSOL model for the test mass with incident Gaussian beam.
- Added a ring heater to the previous file
- Set up SVN for the COMSOL repository
- Got access to and started working with SIS on Rigel1
- Fixed SVN issues
- Refined COMSOL model parameters and worked on a better way to implement the heating ring to get the astigmatic heating pattern.
- Created a COMSOL model with thermal deformations
- Added non-symmetrical heating to cause astigmatism
- Worked on a method to compute the optical path length changes in COMSOL
- Tried to fix COMSOL error using the (ts) module, ended up emailing support as the issue is new in 4.3
- Managed to get a symmetric geometric distortion by fixing the x and y movements of the mirror to be zero (need to look for a better way to do this as this may be unphysical)
- Worked on getting the COMSOL data into SIS, need to look through the SIS specs to find out how we should be doing this (current method isn't working well)
- Fixed the (ts) model, got strange results that indicate that the antisymmetric heating mode is much more prominent than previously thought
- Managed to get COMSOL data through matlab and into SIS
- Realized that the strange deformations that we were seeing only occur on the face nearest the ring heater, and not on the face we are worried about (the HR face)
- Read papers by Morrison et al. and Kogelnik to get a better understanding of the mathematics and operations of the optical cavity modeled in SIS
- Read some of the SIS manual to better understand the program and the physics that it was using (COMSOL licenses were full)
- Plugged the output of the model with uniform heating into SIS using both modification of the radius of curvature, and direct importation of deflection data
- Generated a graph for asymmetric heating and did the same
- Aligned axes in model to better match with the axes in MATLAB and SIS so that the extrema in deflections lie along x and y (not yet implemented in the data below)
- Verified that the SIS output does satisfy the equations for Gaussian beam propagation
- Investigated how changing the amount of data points going into SIS changed the output, as well as how changes in the astigmatic heating effect the output
+ The results are very dependent on the number of data points (changes of similar order to those from changing the heating)
+ Holding the number of data points the same, more asymmetric heating tends to lead to more power in the H(2,0) mode, and less in the H(0,2)
- Did more modeling for different levels of heating and different mesh densities for the SIS input.
- Lots of orientation stuff
- Started on progress report.
- Attended a lot of meetings (Safety, LIGO Orientation)
- Finished draft of week 3 report (images attached)
- Paper edits and more data generation for the paper (lower resolution grid data)
- Attended a talk on LIGO
Plan for building the model
- Find the fields that would be incident on the beam splitter from each arm (This is done already)
- Propagate these through until they get to the OMC using the TELESCOPE function in SIS
- Combine the fields incident on the OMC in MATLAB and minimize the power to get the input field for the OMC (Most of this is done, just waiting to figure out what kind of format we need to use it as an SIS input)
- Model the OMC as an FP cavity in SIS
+ Need to think about how to align the cavity in a sensible way in SIS (need to find out more about how they actually do it)
- Pick off the fields from both ends of the OMC-FP cavity for analysis
- Add thermal effects to one of the arms and see how that changes the fields, specifically how the signal to noise ratio changes
- Finished the MATLAB code that both combines two fields and simulates the adjustment of the beamsplitter to minimize the power out (with a small offset).
- Added the signal recycling telescope to the SIS code that generates the fields
To Do: Make the OMC cavity in SIS
Made a COMSOL model that can include CO2 laser heating, self heating, and ring heating
Figured out how to run SIS out of a script and set up commands to run the two SIS stages of the model
[Aidan, Jordan, Radhika]
Radhika and Jordan identified some particulates (hair and flecks of foil) on the O-ring of the IR Labs dewar. Additionally, we saw a scratch in the O-ring groove and a nick in the metal at the base of the dewar where it meets the O-ring. All were in the vicinity of the leak previously identified by the He testing.
We set up a cradle to hold the dewar while we are working on it. It still needs vertical supports.
R & J replaced the O-ring with a new one with Crytox applied.
With Joe's help we fixed the failure of princess_sparkle to mount the fb1:/cvs directory when relying on /etc/fstab.
First we changed the mounting options in fstab to the following:
fb1:/cvs /cvs nfs rw,bg,soft 1 1
We then got the following error trying it directly from the command line:
controls@princess_sparkle:~$ sudo mount /cvs
[sudo] password for controls:
mount: wrong fs type, bad option, bad superblock on fb1:/cvs,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so
Some quick Google searches suggested installing nfs-common, so we ran the following, and that seemed to do the trick:
sudo apt-get install nfs-common
For the CentOS machines, the following was done:
sudo mkdir /cvs
and then the same mounting configuration was added to /etc/fstab
Additionally, all three machines now have a /users symbolic link to /cvs/users
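(created on each machine with something along the lines of:

sudo ln -s /cvs/users /users
)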
We set up the Hartmann sensor and illuminated it with the output from the fiber-coupled SLED placed about 1m away. The whole arrangement was covered with a box to block out ambient light. The exposure time on the Hartmann sensor was adjusted so that the maximum number of counts in a pixel was about 95% of the saturation level.
We recorded a set of 5000 images to file and analyzed them using the Caltech and Adelaide centroiding codes. The results are shown below. Basically, we see the same deviation from ideal improvement that is observed at Adelaide.
I installed a recycled VME crate in the electronics rack. It currently has a Baja 4700E CPU card in it, which needs to be configured. We also have the following cards, which are not plugged in right now.
1. ICS-110A-32 Analogue-to-Digital Converter - the jumpers need to be set on this to give it a unique memory address in the VME bus.
2. D000186 LIGO-type Anti Image card.
The CPU card needs to be configured to look for its OS binaries on the network (in this case we're going to store them on the framebuilder in Rana's lab). These settings are accessed by plugging a serial cable into the front of the card and using a terminal window to access the menu system. There are some screen caps of this below. As the card is reset, we get the start-up screen, and then we can either do nothing (and a full boot will take place) or press a key and access the menu. From there we can restart the boot process by entering "@" or change the boot settings by entering "c". These are shown below:
We fixed the start-up settings on the VME crate to look for a TCS startup file on fb0. The settings on the Baja 4700 are now:
On the advice of Ben Abbott, I've ordered the Diamond Systems Athena II computer w/DAQ, as well as an I/O board, solid state disk and housing for it. The delivery time is 4-6 weeks.
Diamond Systems Athena II
I've been trying to measure the ring heater transfer function (current to emitted power) by sweeping the supply voltage and measuring the emitted power with a photodetector positioned right next to the ring heater.
Last night the voltage was swept with a 1000 mV setting on the SR785, fed into the voltage control input of the Kepco Bipolar Operational Power Supply/Amplifier, which was biased around 10 V.
The results are very, very strange. The magnitude of the transfer function decreases at lower frequency. I'll post the data just as soon as I can (ASCII dumps 13 and 14 on the disk from the SR785).
The circuit looks like this:
SR785 drive ----> Amplifier ----> Ring Heater : Photodetector ---> SR560 (5000x gain) ----> SR785 input
This is wrong. It turns out the SR785 was wired up incorrectly.