TCS elog, Page 4 of 6
ID | Date | Author | Type | Category | Subject
258 | Fri Mar 24 14:57:11 2023 | JC | Lab Infrastructure | Floor work | Lab Preparation for floor work
I prepared one half of the East Side optical table; the other half will be completely saran-wrapped. The computer desk and Crackle chamber will next be moved closer to the southeast corner of the room. Then I will temporarily move the HiCube station to another lab (probably DOPO) until the floor work is finished.

· I have gathered optics into plastic boxes and placed them in the clear door cabinets.
· I placed the HEPA filter under the optical table in the Southeast corner. (This table will be saran wrapped from top to bottom)
· Unplugged the majority of the electronics modules and placed them on the rack.
· Removed loose cables from the wire rack.


Attachment #1 shows what is left of the lab preparation list I made. These are the next steps I plan to take:

· Remove the last bit of equipment from the center room optical table.
· Before the meeting Tuesday, move the computer desk and the crackle chamber to the South East corner.
· Remove HiCube from the laboratory and into another.
· Saran Wrap the optical tables.
Attachment 1: Screen_Shot_2023-03-24_at_3.15.27_PM.png
Attachment 2: 5C51CF4C-0982-45A4-9FE0-A5B05DBD75CA.jpeg
Attachment 3: C5974FE2-8294-4C74-9FC6-854488CF17DB.jpeg
Attachment 4: EB24F74E-6B70-41C4-92CA-170C78707F81.jpeg
Attachment 5: 3BEF083E-2475-4BDC-9C7C-1D6AB431E163.jpeg
1 | Fri Nov 6 20:09:47 2009 | Aidan | Laser | Laser | Test

 Does this work?

2 | Thu Dec 10 22:23:47 2009 | Not Aidan | Laser | Laser | Test

Yes.

Quote:

 Does this work?

13 | Thu Feb 11 18:04:08 2010 | Aidan | Laser | Ring Heater | Ring heater time constant

I've been looking to see what the time constant of the ring heater is. The attached plot shows the voltage measured by the photodiode in response to the heater turning on and off with a period of 30 minutes.

The time constant looks to be on the order of 600s.
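A time constant like this can be extracted from the on/off response by fitting the exponential settle. A minimal sketch on simulated data (not the attached measurement), assuming a single-pole response:

```python
import numpy as np

def estimate_tau(t, v):
    """Estimate a single-pole time constant from a step response by
    fitting a line to log(v_final - v(t))."""
    v_inf = v[-1]
    resid = v_inf - v
    keep = resid > 1e-3 * v_inf          # drop points lost near the asymptote
    slope = np.polyfit(t[keep], np.log(resid[keep]), 1)[0]
    return -1.0 / slope

# simulated photodiode voltage settling with tau = 600 s
t = np.linspace(0.0, 6000.0, 6001)
v = 1.0 - np.exp(-t / 600.0)
print(f"tau ~ {estimate_tau(t, v):.0f} s")
```

With real heater data the fit window would be restricted to one half-cycle of the 30-minute modulation.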

42 | Mon May 24 19:17:32 2010 | Aidan | Laser | Hartmann sensor | Replaced Brass Plate with Invar Hartmann Plate

I just replaced the brass Hartmann plate with the Invar one. The camera was off during the process but has been turned on again. The camera is now warming up again. I've manually set the temperature in the EPICS channels by looking at the on-board temperature via the serial communications.

 I also made sure the front plate was secured tightly.

43 | Wed May 26 06:47:02 2010 | Aidan | Laser | SLED | Switched off SLED - 6:40AM
89 | Mon Aug 9 10:58:37 2010 | Aidan | Laser | SLED | SLED 15-day trend

 Here's a plot of the 15-day output of the SLED.

Currently there is a 980nm FC/APC fiber-optic patch-cord attached to the SLED. It occurred to me this morning that even though the patch cord is angle-cleaved, there may be more back-reflection than desired because the SLED output is 830nm (or thereabouts) while the patch cord is rated for 980nm.

 I'm going to turn off the SLED until I get an 830nm patch-cord and try it then. 

Correction: I removed the fiber-optic connector and put the plastic cap back on the SLED output. The mode over-lap (in terms of area) from the reflection off the cap with the output from the fiber is about 1 part in 1000. So even with 100% reflection, there is less than the 0.3% danger level coupled back into the fiber. The SLED is on again.
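The feedback estimate above can be worked through as numbers. The overlap ratio and the 0.3% danger level are the figures quoted in the entry; the cap reflectivity is a worst-case assumption:

```python
# Worst-case back-reflection estimate for the capped SLED output.
area_overlap = 1e-3        # mode overlap (by area) of cap reflection with fiber mode
cap_reflectivity = 1.0     # worst case: perfectly reflecting cap (assumption)
danger_level = 3e-3        # 0.3% of output coupled back = quoted damage threshold
coupled_fraction = cap_reflectivity * area_overlap
print(coupled_fraction < danger_level)   # safe even in the worst case -> True
```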

Attachment 1: SLED_superlum_long_term_test_0005A_annotated_15-day_result.pdf
94 | Mon Sep 13 18:24:52 2010 | Aidan | Laser | Hartmann sensor | Enclosure for the HWS

I've assembled the box Mindy ordered from Newport that will house the Hartmann sensor. It's mainly there to reduce ambient light and air currents, and to keep the table cleaner than it would otherwise be.

We need to add a few more holes to allow access for extra cables.

 

Attachment 1: 00001.jpg
Attachment 2: 00002.jpg
Attachment 3: 00003.jpg
Attachment 4: 00005.jpg
95 | Tue Sep 28 10:41:32 2010 | Aidan | Laser | Hartmann sensor | Aligning HWS cross-sample experiment - polarization issues

I'm in the process of aligning the cross-sampling experiment for the HWS. I've put the 1" PBS cube into the beam from the fiber-coupled SLED and found that the split between s- and p-polarizations is not 50-50. In fact, it looks more like 80% reflected and 20% transmitted. This is probably due to the polarization-maintaining patch-cord that connects to the SLED. I'll try switching it out with a non-PM fiber.

 


Later ...

That worked.

96 | Tue Sep 28 17:53:40 2010 | Aidan | Laser | Hartmann sensor | Crude alignment of cross-sampling measurement

I've set up a crude alignment of the cross-sampling system (optical layout to come). This was just a sanity check to make sure that the beam could successfully get to the Hartmann sensor. The next step is to replace the crappy beam-splitter with one that is actually 50/50.

Attached is an image from the Hartmann sensor.

Attachment 1: 2010_09_28-HWS_cross_sample_expt_crude_alignment_01.pdf
97 | Wed Sep 29 16:49:36 2010 | Aidan | Laser | Hartmann sensor | Cross-sampling experiment power budget

I've been setting up the cross-sampling test of the Hartmann sensor. Right now I'm waiting on a 50/50 BS, so I'm improvising with a BS designed for 1064nm.

The output from the SLED (green-beam @ 980nm) is around 420uW (the beam completely falls on the power meter.) There are a couple of irises shortly afterwards that cut out a lot of the power - apparently down to 77uW (but the beam is larger than the detection area of the power meter at this point - by ~50%). The BS is not very efficient on reflection and cuts down the power to 27uW (overfilled power meter). The measurement of 39uW is near a focus and the power meter captures the whole beam. There is a PBS cube that is splitting the beam unequally between s- and p-polarizations (I think this is due to uneven reflections for s- and p-polarizations from the 1064nm BS). The beam is retro-reflected back to the HWS where about 0.95uW makes it to the detector.

There is a 1mW 633nm laser diode that is used to align the optical axis. There are two irises that are used to match the optical axis of the laser diode and the SLED output.
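The power budget above can be summarized as an end-to-end throughput; a short sketch using the (partly overfilled) power-meter readings quoted in this entry:

```python
# Cumulative power budget for the cross-sampling beam path.
stages = {
    "SLED output":      420e-6,   # W
    "after irises":      77e-6,   # W (overfilled meter)
    "after 1064nm BS":   27e-6,   # W (overfilled meter)
    "at HWS detector":  0.95e-6,  # W
}
powers = list(stages.values())
throughput = powers[-1] / powers[0]
print(f"end-to-end throughput: {throughput:.2%}")   # ~0.23%
```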

 

Attachment 1: 00001.jpg
98 | Mon Oct 4 19:44:03 2010 | Aidan | Laser | Hartmann sensor | Cross-sampling experiment - two beams on HWS

I've set up the HWS with the probe beam sampling two optics in a Michelson configuration (source = SLED, beamsplitter = PBS cube). The return beams from the Michelson interferometer are incident on the HWS. I misaligned the reflected beam from the transmitted beam to create two Hartmann patterns, as shown below.

The next step is to show that the centroiding is a linear superposition of these two wavefronts.

Attachment 1: test001_two_beams_on_HWS.pdf
99 | Tue Oct 5 12:51:16 2010 | Aidan | Laser | Hartmann sensor | Variable power in two beams of cross-sampling experiment

The SLED in the cross-sampling experiment produces unpolarized light at 980nm. So I added a PBS after the output and then a HWP (for 1064nm sadly) after that. In this way I produced linearly p-polarized light from the PBS. Then I could rotate it to any angle by rotating the HWP. The only drawback was that the HWP was only close to half a wave of retardation at 980nm. As a result, the output from this plate became slightly elliptically polarized.

The beam then went into another PBS which split it into two beams in a Michelson-type configuration (REFL and TRANS beams) - see attached image. By rotating the HWP I could vary the relative amount of power in the two arms of the Michelson. The two beams were retro-reflected and were then incident onto the HWS.

I measured the power in the REFL beam relative to the total power as a function of the HWP angle. The results are shown in the attached plot.
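For an ideal HWP and PBS the expected dependence is Malus's law: rotating the plate by theta rotates the polarization by 2*theta, so the reflected (s-pol) fraction goes as sin^2(2*theta). A hypothetical model check (the real plate is a 1064nm HWP used at 980nm, so the measured contrast will be somewhat degraded):

```python
import numpy as np

# Ideal-waveplate model of REFL power fraction vs HWP angle.
theta = np.radians(np.arange(0, 91, 5))     # HWP rotation angle
refl_fraction = np.sin(2 * theta) ** 2      # Malus's law for the s-pol port
print(refl_fraction[0], refl_fraction[9])   # 0 at 0 deg, 1 at 45 deg
```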

 

Attachment 1: test002_two_beams_on_HWS_analyze.pdf
Attachment 2: Hartmann_Enclosure_Diagram__x-sampling.png
107 | Fri Feb 18 14:53:50 2011 | Phil | Laser | Laser | LTG laser delivering specified power

I got the LTG CO2 laser to deliver 50.02W today, as measured by the Thorlabs 200W power head. This required running the Glassman HV supply at full power (30.0kV, 31.1mA), tweaking the end grating and output coupler alignments, and cleaning the ZnSe Brewster windows on the laser tubes. The output only lasted a few seconds before dropping back to ~48W, but the laser did deliver the specified power. In the factory it delivered 55W at the 10.6 micron line I am using now (I checked it with the CO2 laser spectrum analyzer), so there is more work to do.

110 | Thu Feb 24 10:23:31 2011 | Christopher Guido | Laser | Laser | LTG initial noise

Cheryl Vorvick, Chris Guido, Phil Willems

Attached is a PDF with some initial noise testing. There are 5 spectrum plots (not including the preamp spectrum) of the laser. The first two are with V_DC around 100 mV, and the other three are with V_DC around 200 mV (as measured with the 100X gain preamplifier, so ideally 1 and 2 mV actual). We did one spectrum (at each power level) with no attempt at noise reduction, and one spectrum with the lights off and a makeshift tent to reduce air flow. The 5th plot is at 200 mV with the tent and the PZT on (the other 4 have the PZT off).

 

The second plot is just the spectra divided by their respective V_DC to get an idea of the RIN.

Attachment 1: LTG_InitialTest.pdf
128 | Mon Mar 28 13:00:50 2011 | Aidan | Laser | Hartmann sensor | To do: Check the polarization from the SLED
129 | Wed Mar 30 12:55:54 2011 | Aidan | Laser | Hartmann sensor | Prism modulation experiment

I've set up a quick experiment to modulate the angle of the Hartmann sensor probe beam at 10mHz and to monitor the measured prism. The beam from the SLED is collimated by a lens and this is incident on a galvo mirror. The reflection travels around 19" and is incident on the HWS. When the galvo mirror is sent a 1.1Vpp sine wave, the beam moves around +/- 0.5" on the surface of the Hartmann sensor, giving around 50mrad per Vpp.

The galvo is currently being sent a 0.02Vpp sine wave at 10mHz.

130 | Thu Mar 31 11:27:02 2011 | Aidan | Laser | Hartmann sensor | Prism modulation experiment

I changed the drive amplitude on the function generator to 0.05Vpp and have measured the angle of deflection by bouncing a laser off the galvo mirror and projecting it 5.23m onto the wall. The total displacement of the spot was ~3.3mm +/- 0.4mm, so the amplitude of the angular signal is 1.65mm/5.23m ~ 3.2E-4 radians. The Hartmann sensor should measure a prism of corresponding magnitude.

The frequency is still 10mHz.
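The deflection arithmetic in this entry, written out (the angular amplitude is half the peak-to-peak spot motion divided by the projection distance):

```python
# Galvo angular-amplitude calibration from the projected spot motion.
spot_pp = 3.3e-3                            # m, total spot displacement on the wall
lever = 5.23                                # m, projection distance
angle_amplitude = (spot_pp / 2) / lever     # rad, optical angle amplitude
print(f"{angle_amplitude:.1e} rad")         # ~3.2e-04 rad
```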

Quote:

I've set up a quick experiment to modulate the angle of the Hartmann sensor probe beam at 10mHz and to monitor the measured prism. The beam from the SLED is collimated by a lens and this is incident on a galvo mirror. The reflection travels around 19" and is incident on the HWS. When the galvo mirror is sent a 1.1Vpp sine wave, the beam moves around +/- 0.5" on the surface of the Hartmann sensor, giving around 50mrad per Vpp.

The galvo is currently being sent a 0.02Vpp sine wave at 10mHz.

 

137 | Sun Apr 17 21:55:51 2011 | Aidan | Laser | Hartmann sensor | Hartmann sensor prism/displacement test

I've set up an experiment to test the HWS intensity distribution displacement measurement code. Basically the beam from a SLED is just reflecting off a galvo mirror onto the HWS. The mirror is being fed a 0.02Vpp (x10 gain), 10mHz sine wave from the function generator.

The experimental setup is shown below.

I hacked the HWS code to export the Gaussian X and Y centers to Seidel Alpha and Beta channels in EPICS (C4:TCS-HWSX_PSC_ALPHA, C4:TCS-HWSX_PSC_BETA)

Attachment 1: HWS_prims.jpg
151 | Mon May 30 10:53:00 2011 | Aidan | Laser | Hartmann sensor | Defocus vs time

 I've had the output from a fiber projected about 400mm onto the Hartmann sensor for around 5 days now. (The divergence angle from the fiber is around 86 mrad).

I played around with the temperature of the lab to induce some defocus changes in the Hartmann sensor. The system is mostly linear, but there are relatively frequent jumps in the defocus of the order of 1E-4 m^-1. This may be due to a number of things - the Hartmann plate may be moving, the fiber holder may be shifting back and forth, there may be some issue with the source wavelength shifting.

  • A change in defocus, dS, of around 1E-4 m^-1 from a point source at 400mm from the Hartmann plate (dS/dx = -1/x^2), corresponds to a change in the end position of the fiber of around 16 microns. Seems a little big ... unless it's not secured properly ...
  • Also the defocus vs temperature slope is different from what James measured last year. There is an additional -4E-5 m^-1 K^-1 due to the expansion of the stainless steel table moving the point source further or closer to the Hartmann plate. That leaves about -3E-4 m^-1 K^-1 for the Hartmann sensor. This is roughly twice what James measured last year. 
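The first bullet's arithmetic as a sketch: for a point source at distance x the sensed spherical power is S = 1/x, so dS/dx = -1/x^2 and a defocus jump dS maps to a source displacement of magnitude dx = dS * x^2:

```python
# Fiber displacement implied by a defocus jump for a point source.
x = 0.400                     # m, fiber-to-Hartmann-plate distance
dS = 1e-4                     # 1/m, observed defocus jump
dx = dS * x**2                # m, since |dS/dx| = 1/x^2
print(f"{dx*1e6:.0f} um")     # -> 16 um
```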

Sun 30th May 2011 - 11:40AM - the z-axis control on the NewFocus 9091 fiber coupling mount was not tightened. I tightened that to secure the control.

Attachment 1: HWS_defocus_vs_temperature.pdf
152 | Mon May 30 20:11:27 2011 | Aidan | Laser | Hartmann sensor | Hartmann sensor lever arm calibration

 I ran through the procedure to calibrate the lever arm of the Hartmann sensor. The beam from a 632.8nm HeNe laser was expanded to approximately 12mm diameter and injected into a Michelson interferometer. The Hartmann sensor was placed at the output port of the Michelson.

  1. I tilted one of the mirrors of the interferometer to induce a prism between the two beams at the output. This created about 135 vertical fringes on the CCD.
  2. With the Hartmann plate removed, I recorded the interference pattern and took its 2D FFT. There was a peak in the Fourier transform about 134 pixels from the DC level. 
  3. This next part is questionable ... I centroided the frequencies around the peak in the FFT to try to determine the spatial frequency of the fringes to better than the bandwidth of 1/1024 pixels^-1 (yes, they're strange units). This is probably acceptable for improving the accuracy of the frequency measurement if it is known that the signal is generated by a pure sine wave in the spatial domain - not an unreasonable assumption for the output of an interferometer. Anyway, the peak fluctuated around 133.9 units from DC by around +/- 0.1 units.
  4. The prism between the two beams is the measured spatial frequency (measured as 133.9 oscillations across the CCD) multiplied by the wavelength and divided by the width of the CCD (= 1024 x 12um). In other words, the prism is the ratio of the wavefront change across the CCD divided by the diameter of the CCD (= 6.895 +/- 0.005 mrad)
  5. Next, I inserted the Hartmann plate, blocked one of the beams and recorded the spot pattern. I then blocked the other beam and unblocked the first and recorded another spot pattern. 
  6. The mean displacement between the spot patterns was calculated. Due to a fairly noisy intensity distribution (the 2" mirrors were AR coated for 700-1000nm and hence there were some stray beams), the mean displacement was relatively noisy - about 5.60 pixels with a standard deviation of around 0.3 pixels and a standard error of around 0.01 pixels ( = 67.2 um)
  7. The lever arm is equal to the mean displacement of the spots divided by the prism. In this case, 9.75 +/- 0.02 mm
  8. I removed the Hartmann plate and confirmed that the FFT of the fringes from the IFO still had a peak at 133.9 +/- 0.1 units. It did.
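The arithmetic in steps 4-7 above can be sketched directly from the quoted numbers:

```python
# Lever-arm calibration: prism from the fringe count, then lever arm
# from the mean Hartmann spot displacement.
wavelength = 632.8e-9                 # m, HeNe
pixel = 12e-6                         # m, CCD pixel pitch
n_pixels = 1024
fringes = 133.9                       # fringe count across the CCD (step 3)
prism = fringes * wavelength / (n_pixels * pixel)   # rad (step 4)
spot_shift = 5.60 * pixel             # m, mean spot displacement (step 6)
lever_arm = spot_shift / prism        # m (step 7)
print(f"prism = {prism*1e3:.3f} mrad, lever arm = {lever_arm*1e3:.2f} mm")
```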

 

153 | Thu Sep 1 15:51:38 2011 | Aidan | Laser | Ordering | Access Lasy50 50W laser arrived.

 The 50W Access Laser is now in the lab. We need to wire up the interlock to the laser, plumb the chiller lines to the power supply and to the laser head and also wire up all the electrical and electronics cables. Additionally, we will need to plumb the flow meter and attach a circuit to it that triggers the interlock if the flow falls too low.

 

186 | Mon Jun 5 16:41:59 2017 | Aidan | Laser | Hartmann sensor | HWS long term measurement results.

The data from the long-term measurement of the HWS is presented here. The beam envelope moves by, at most, about 0.3 pixels, or around 3.6 microns. The fiber-launcher is about 5" away from the HWS. Therefore, the motion corresponds to around 30 micro-radians (if it is a tilt). The beam displacement is around 4 microns.

The optical properties change very little over the full 38 days (about 2 micro-radians for tilt and around 2 micro-diopters for spherical power).

The glitches are from when the SLED drivers were turned off temporarily for other use (with the 2004nm laser).
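The tilt estimate above, worked as numbers (assuming the 12 um HWS pixel pitch implied by 0.3 pixels = 3.6 microns):

```python
# Envelope motion converted to an equivalent source tilt.
pixel = 12e-6                   # m, HWS pixel pitch (assumption from 0.3 px = 3.6 um)
motion = 0.3 * pixel            # m, observed envelope displacement
distance = 5 * 25.4e-3          # m, fiber launcher ~5 inches from the HWS
tilt = motion / distance        # rad, if the motion is a pure tilt
print(f"{tilt*1e6:.0f} urad")   # -> 28 urad, i.e. "around 30 micro-radians"
```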

Attachment 1: HWS_envelope_long_term.pdf
Attachment 2: HWS_optical_parameters_long_term.pdf
Attachment 3: HWS_envelope_long_term_mm.pdf
Attachment 4: IMG_9836.JPG
193 | Mon Oct 23 19:02:42 2017 | Jon Richardson | Laser | Hartmann sensor | Mitigated Heating Beam Losses

There were known to be huge (65%) heating beam power losses on the SRM AWC table, somewhere between the CO2 laser and the test optic. Today I profiled the setup with a power meter, looking for the dominant source of losses. It turned out to be a 10" focusing lens which had the incorrect coating for 10.2 microns. I swapped this lens with a known ZnSe 10" FL lens (Laser Research Optics LX-15A0-Z-ET6.0) and confirmed the power transmittance to be >99%, as spec'd. There is now ~310 mW maximum reaching the test optic, meaning that the table losses are now only 10%.

Using a single-axis micrometer stage I also made an occlusion measurement of the heating beam radius just in front of the test optic. I moved the 10" focusing lens back three inches away from the test optic to slightly enlarge the beam size. In this position, I measure a beam radius of 3.5+/-0.25 mm at 1.5" in front of the test optic (the closest I can place the power meter). The test optic is approximately 20" from the 10" FL lens, so the beam has gone through its waist and is again expanding approaching the test optic. I believe that at the test optic, the beam is very close to 4 mm.

200 | Mon Oct 30 17:13:32 2017 | Jon Richardson | Laser | Hartmann sensor | Write-Up of CO2 Projector Measurements

For archive purposes, attached is a write-up of all the HWS measurements I've made to date for the SRM CO2 projector mock-up.

Attachment 1: awc_srm_actuator_v1.pdf
204 | Thu Jan 18 10:09:50 2018 | Jon Richardson | Laser | General | RIN Measurement of 400 mW CO2 Laser in Progress

The 400 mW CO2 laser on the Hartmann table is currently configured for a measurement of its relative intensity noise. It is aligned to a TCS CO2P photodetector powered by a dual DC power supply beside the light enclosure. I got some data last night with the laser current dialed back for low output power (0.5-10 mW incident), but still need to analyze it. In the meantime please don't remove parts from the setup, as I may need to repeat the measurement with better power control.

205 | Mon Jan 22 10:39:32 2018 | Jon Richardson | Laser | General | RIN Measurement of 400 mW CO2 Laser in Progress

Attached for reference is the RIN measurement from the initial data.

Quote:

The 400 mW CO2 laser on the Hartmann table is currently configured for a measurement of its relative intensity noise. It is aligned to a TCS CO2P photodetector powered by a dual DC power supply beside the light enclosure. I got some data last night with the laser current dialed back for low output power (0.5-10 mW incident), but still need to analyze it. In the meantime please don't remove parts from the setup, as I may need to repeat the measurement with better power control.

 

Attachment 1: SRM_CO2_RIN_v1.pdf
207 | Thu Feb 8 15:42:13 2018 | Aidan | Laser | Laser | Low power CO2 laser [400mW] beam size measurement

I did a beam size/beam propagation measurement of the low power CO2 laser (Access Laser L3, SN:154507-154935)

 

Attachment 1: L3_CO2_laser_beam_size.m
% 400mW CO2 laser beam propagation measurement
% measurements of Access Laser L3 CO2 output power (at about 30% PWM)
% SN: 154507-154935
%
% Aidan Brooks, 8-Feb-2018

xposn = 10.5:-0.5:5.0;
dataIN = [25	113.2	113.2	112.7	112.3	110.2	108.6	98.2	74.6	40.6	13.5	2.3	0.3
50	114.5	114.5	114.9	115	114.9	112.1	100	74.2	38.8	12.5	2	-0.1
... 62 more lines ...
Attachment 2: L3_CO2_Laser_154507-154935.pdf
208 | Fri May 11 12:32:21 2018 | Aidan | Laser | Hartmann sensor | HWS green LED fiber launcher

 [Aidan, Marie]

We tested the output of the fiber launcher D1800125-v3. We were using a 6mm spacer in the SM1 lens tube and 11mm spacer in the SM05 lens tube and the 50 micron core fiber.

The output of the fiber launcher was projected directly onto the CCD. Images of these are attached (coordinates are in pixels where 100 pixels = 1.2mm)

There is a lot of high-spatial-frequency light on the output. It looks like there are core and cladding modes in addition to a more uniform background. There was an indication that we could clean up these annular modes with an iris immediately after the fiber launcher, but I didn't get any images. We're going to test this next week when we get an SM1-mountable iris.

Attachment 1: LED_test_01_20180511.pdf
209 | Fri May 11 16:18:47 2018 | Aidan | Laser | Hartmann sensor | HWS green LED fiber launcher

And here's the output of the fiber launcher when I fixed it at 313mm from the camera, attached an iris to the front and slowly reduced the aperture of the iris.

The titles reflect the calculated second moment of the intensity profiles (an estimate of the equivalent Gaussian beam radius). The iris is successful in spatially filtering the central annular mode at first and then the outer annular mode. 

We'll need to determine the optimum diameter to get good transmission spatially without sacrificing too much power.
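The "second moment" radius used in the plot titles can be sketched as follows: for an intensity profile I(x, y), the equivalent Gaussian 1/e^2 radius along x is 2*sqrt(<(x - <x>)^2>). A minimal illustration on a synthetic Gaussian spot (not the real HWS images):

```python
import numpy as np

def second_moment_radius(img, pitch=1.0):
    """Equivalent Gaussian radius along x: twice the intensity-weighted
    standard deviation of the x coordinate."""
    y, x = np.indices(img.shape)
    w = img / img.sum()
    xc = (w * x).sum()
    var = (w * (x - xc) ** 2).sum()
    return 2.0 * np.sqrt(var) * pitch

# synthetic Gaussian beam with 1/e^2 intensity radius of 15 pixels
xx, yy = np.meshgrid(np.arange(200), np.arange(200))
w0 = 15.0
img = np.exp(-2 * ((xx - 100) ** 2 + (yy - 100) ** 2) / w0**2)
print(round(second_moment_radius(img), 1))   # recovers ~15.0
```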

Quote:

 [Aidan, Marie]

We tested the output of the fiber launcher D1800125-v3. We were using a 6mm spacer in the SM1 lens tube and 11mm spacer in the SM05 lens tube and the 50 micron core fiber.

The output of the fiber launcher was projected directly onto the CCD. Images of these are attached (coordinates are in pixels where 100 pixels = 1.2mm)

There is a lot of high-spatial frequency light on the output. It looks like there is core and cladding modes in addition to a more uniform background. There was an indication that we could clear up these annular modes with an iris immediately after the fiber launcher but I didn't get any images. We're going to test this next week when we get an SM1 mountable iris.

 

Attachment 1: LED_test_02_20180511.pdf
210 | Wed May 16 16:47:29 2018 | Aidan | Laser | Hartmann sensor | HWS green LED fiber launcher - D1800125-v5_SN01

Here is the output from D1800125-v5_SN01.

Quote:

And here's the output of the fiber launcher when I fixed it at 313mm from the camera, attached an iris to the front and slowly reduced the aperture of the iris.

The titles reflect the calculated second moment of the intensity profiles (an estimate of the equivalent Gaussian beam radius). The iris is successful in spatially filtering the central annular mode at first and then the outer annular mode. 

We'll need to determine the optimum diameter to get good transmission spatially without sacrificing too much power.

Quote:

 [Aidan, Marie]

We tested the output of the fiber launcher D1800125-v3. We were using a 6mm spacer in the SM1 lens tube and 11mm spacer in the SM05 lens tube and the 50 micron core fiber.

The output of the fiber launcher was projected directly onto the CCD. Images of these are attached (coordinates are in pixels where 100 pixels = 1.2mm)

There is a lot of high-spatial frequency light on the output. It looks like there is core and cladding modes in addition to a more uniform background. There was an indication that we could clear up these annular modes with an iris immediately after the fiber launcher but I didn't get any images. We're going to test this next week when we get an SM1 mountable iris.

 

 

Attachment 1: HWS_LED_D1800125_V5_sn01.pdf
Attachment 2: HWS_LED_D1800125_V5_sn01.m
% get the beam size from the HWS ETM source D1800125-v5_sn01

[out,r] = system('tar -xf HWS*.tar');

% load the files
dist = [1,10,29,51,84,105,140,180,240,295,351,435,490,565]; % beam propagation distance
files = dir('*.raw');
close all

... 49 more lines ...
Attachment 3: HWS_D1800125-v5_sn01_characterize_20180516.tar
211 | Fri May 18 16:32:37 2018 | Aidan | Laser | Hartmann sensor | HWS green LED fiber launcher - D1800125-v5_SN01

Title was wrong - this is actually config [12,2,4,125]

Quote:

Here is the output from D1800125-v5_SN01.

Quote:

And here's the output of the fiber launcher when I fixed it at 313mm from the camera, attached an iris to the front and slowly reduced the aperture of the iris.

The titles reflect the calculated second moment of the intensity profiles (an estimate of the equivalent Gaussian beam radius). The iris is successful in spatially filtering the central annular mode at first and then the outer annular mode. 

We'll need to determine the optimum diameter to get good transmission spatially without sacrificing too much power.

Quote:

 [Aidan, Marie]

We tested the output of the fiber launcher D1800125-v3. We were using a 6mm spacer in the SM1 lens tube and 11mm spacer in the SM05 lens tube and the 50 micron core fiber.

The output of the fiber launcher was projected directly onto the CCD. Images of these are attached (coordinates are in pixels where 100 pixels = 1.2mm)

There is a lot of high-spatial frequency light on the output. It looks like there is core and cladding modes in addition to a more uniform background. There was an indication that we could clear up these annular modes with an iris immediately after the fiber launcher but I didn't get any images. We're going to test this next week when we get an SM1 mountable iris.

 

 

 

214 | Thu Jun 21 17:12:38 2018 | Aidan | Laser | Hartmann sensor | Two inch PBS from Edmund Optics - effect on ETM HWS transmission

I'm considering the 86-711 2" 532nm PBS from Edmund Optics for the ETM HWS at the sites.

https://www.edmundoptics.com/optics/polarization/linear-polarizers/532nm-50mm-diameter-high-energy-laser-line-polarizer/

The effect on the transmission through the system, compared to the Thorlabs PBS, is shown in the attached plot.

Conclusion: it looks almost as effective as the Thorlabs PBS with the added benefit of being 2" in diameter.

Attachment 1: PBS_Thorlabs_v_EdmundOptics.pdf
215 | Tue Jul 10 17:49:13 2018 | Aria Chaderjian | Laser | General | July 10, 2018

Went down to the lab and showed Rana the setup. He's fine with me being down there as long as I let someone know. He also recommended using an adjustable mount (three screws) for the test mirror instead of the mount with a top bolt and two nubs on the bottom - he thinks the one with three screws as constraints on the silica will be easier to model (and gives more symmetric constraints).

Mounted the f=8" lens (used a 2" pedestal) and placed it on the table so the image fit well on the CCD and so a sharp object in front of the lens resulted in a sharp image. The beam was clipping the f=4" lens (between gold mirror and test mirror) so I spent time moving that gold mirror and the f=4" lens around. I'll still need to finish up that setup.

 

Attachment 1: IMG_4518.jpg
Attachment 2: IMG_4517.jpg
216 | Thu Jul 12 18:48:21 2018 | Aria Chaderjian | Laser | General | July 12, 2018

The beam reflecting off the test mirror was clipping the lens between gold mirror and test mirror, so I reconfigured some of the optics, unfortunately resulting in a larger angle of incidence.

From the test mirror, the beam size increases much too rapidly to fit onto the 2-inch diameter f=8" lens that was meant to resize the beam for the CCD of the HWS. It seems that the f=8" lens can go about 6 inches from the test mirror, and an f~2.3" (60 mm) lens can go about 2 inches in front of the CCD to give the appropriate beam size. However, the image doesn't seem very sharp.

The beam is also not hitting the CCD currently because of the increase in angle of incidence on the test mirror and limitations of the box. I'd like to move the HWS closer to the SLED (and will then have to move the SLED as well).

217 | Fri Jul 13 16:42:50 2018 | Aria Chaderjian | Laser | General | July 13, 2018

The table is set up. The HWS and SLED were moved slightly, and a minimal angle between the test mirror and HWS was achieved.

There are two possible locations for the f=60mm lens that will achieve appropriate magnification onto the HWS: 64cm or 50 cm from the f=200mm lens. 

At 64cm away, approximately 79000 saturated pixels and 1054 average value.

At 50cm away, approximately 22010 saturated pixels and 1076 average value.

Currently the setup is at 64cm. Could afford to be more magnified, so might want to move the f=60mm lens around. Also, if we're going to need to be able to access the HWS (i.e. to screw on the array) we might want to move to the 50cm location.

218 | Mon Jul 23 10:04:19 2018 | Aria Chaderjian | Laser | General | July 20, 2018

With Jon's help, I changed the setup to include a mode-matching telescope built from the f=60mm (1 inch diameter) lens and the f=100mm lens. These lenses are located after the last gold mirror and before the test optic. The height of the beam was also adjusted so that it is more centered on these lenses. Note: these two lenses cannot be much further apart from each other than they currently are, or the beam will be too large for the f=100mm lens.

We considered different possible mounts to use for the test optic, and decided to move it to a mount where there is less contact. The test optic was also moved closer to the HWS to achieve appropriate beamsize on the optic coming from the mode-matching telescope.

The f=200 lens is now approximately 2/3 of the distance from the test optic to the HWS, resulting in an appropriately sized beam at the HWS.

Current was also turned down to achieve 0 saturated pixels.

Attachment 1: 37767216_2077953269190954_5080703241788850176_n.jpg
219 | Tue Jul 24 16:52:44 2018 | Aria Chaderjian | Laser | General | July 23, 2018 and July 24, 2018

Attached the grid array of the HWS.

Applied voltage (5V, 7V, 9.9V, 14V) to the heater pad and took measurements of T and spherical power (aka defocus).

The adhesive of the temperature sensor isn't very sticky. The first time I applied it, it peeled off; the second time it partially peeled off. We want to put it on the Al side if possible.

Bonded a mirror (thickness ~6 mm) to aluminum disk (thickness ~5 mm) and it's still curing.

220 | Fri Aug 3 15:46:12 2018 | Aria Chaderjian | Laser | General | August 3, 2018

To the best of my ability, calculated the magnification of the plane of the test optic relative to the HWS (2.3) and input this value.

Increased the temperature slightly and saved data points of defocus to txt files when temperature leveled out. This was a slow process, as it takes a while for things to level out. I only got up to about 28.5C, and will need to continue this process.

I also plotted the best-fit defocus for each temperature from COMSOL (Temperature vs. Defocus), and looking at values from HWS it seems that we're off by a normalization factor of approx. 4.
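One possible source of a constant offset like this is the plane-to-plane scaling of defocus: spherical power referred from the HWS back to the test-optic plane picks up a factor of the magnification squared. A minimal sketch, assuming that convention (the function name is illustrative, not from this entry):

```python
# Hedged sketch: spherical power (defocus) transforms between conjugate
# planes as S_optic = M**2 * S_hws, where M is the transverse
# magnification quoted in this entry.
M = 2.3

def defocus_at_optic(defocus_at_hws, mag=M):
    """Refer a defocus (spherical power) measured at the HWS back to the optic plane."""
    return mag**2 * defocus_at_hws

print(defocus_at_optic(1.0))  # ~5.3
```

With M = 2.3 the factor is about 5.3, the same order as the ~4x normalization discrepancy, so it may be worth checking whether this factor is applied once, twice, or not at all in the HWS/COMSOL comparison.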

  9   Thu Feb 4 19:45:56 2010 AidanMiscRing HeaterRing heater transfer function - increasing collection area

I mounted the thinner Aluminium Watlow heater inside a 14" long, 1" inner diameter cylinder. The inner surface was lined with Aluminium foil to provide a very low emissivity surface and scatter a lot of radiation out of the end. ZEMAX simulations show this could increase the flux on a PD by 60-100x. 

There was 40V across the heater and around 0.21A being drawn. The #9005 HgCdTe photo-detector was placed at one end of the cylinder to measure the far-IR. (Bear in mind this is a 1mm x 1mm detector in an open aperture of approximately 490 mm^2.) The measured voltage difference between OFF and the steady-state ON solution, after a 5000x gain stage, was around 270mV. This corresponds to 0.054mV at the photo-diode. Using the responsivity of the PD ~= 0.05 V/W, this corresponds to about 1mW incident on the PD.
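A quick recomputation of the numbers in this entry, for reference:

```python
# Redoing this entry's arithmetic (all numbers from the text above).
v_measured = 270e-3    # V, OFF-to-ON difference after the gain stage
gain = 5000.0          # voltage gain ahead of the measurement
responsivity = 0.05    # V/W, approximate responsivity of the #9005 PD

v_pd = v_measured / gain            # voltage at the photodiode itself
p_incident = v_pd / responsivity    # W incident on the PD

p_heater = 40.0 * 0.21              # W, electrical power into the heater
print(v_pd * 1e3, p_incident * 1e3, p_heater)  # 0.054 mV, ~1.1 mW, 8.4 W
```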

 

Attachment 1: low-emissivity-tube.jpg
low-emissivity-tube.jpg
  44   Wed May 26 14:58:04 2010 AidanMiscHartmann sensorHartmann sensor cooling fins added

14:55 -  Mindy stopped by with the copper heater spreaders and the cooling fins for the Hartmann sensor. We've set them all up and have turned on the camera to see what temperature above ambient it achieves.

17:10 - Temperature of the HWS with no active cooling (Digitizer = 44.1C, Sensor = 36.0C, Ambient = 21.4C)

 

Attachment 1: HWS_CONFIG1.jpg
HWS_CONFIG1.jpg
  49   Tue Jun 15 16:30:10 2010 Peter VeitchMiscHartmann sensorSpot displacement maps - temperature sensitivity tests

Results of initial measurement of temperature sensitivity of Hartmann sensor

"Cold" images were taken at the following temperatures (C):
|        | heatsink (IR) | camera | sensor board |
| before | 32.3          | 45.3   | 37.0         |
| after  | 32.4          | 45.6   | 37.3         |

"Hot" images were taken at the following temperatures (C):
|        | heatsink (IR) | camera | sensor board |
| before | 36.5          | 48.8   | 40.4         |
| after  | 36.4          | 48.8   | 40.4         |

"before" are the temperatures just before taking 5000 images, and "after" are
the temperatures just after. The first column is measured using the IR temp
sensor pointed at the heat sink, the second is the camera temperature, and the
third is the sensor board temperature.

The temperature change was produced by placing a "hat" over the top of the HP assembly and the top of the heatsinks.

Averaged images "cool" and "hot" were created using 200 frames (each).

Aberration parameter values are as follows:

Between cool and hot images (cool spots - hot spots)

     p: 4.504320557133363e-005
    al: 0.408248504544179
   phi: 0.444644135542724
     c: 0.001216006036395
     s: -0.002509569677737
     b: 0.054773177423349
    be: 0.794567342929695
     a: -1.030687344054648

Between cool images only

     p: 9.767143368103721e-007
    al: 0.453972584677992
   phi: -0.625590459774765
     c: 2.738206187344315e-004
     s: 1.235384158257808e-006
     b: 0.010135170457321
    be: 0.807948378729832
     a: 0.256508288049258

Between hot images only

     p: 3.352035441252169e-007
    al: -1.244075541477539
   phi: 0.275705676833192
     c: -1.810992355666772e-004
     s: 7.076678388064736e-005
     b: 0.003706221758158
    be: -0.573902879552339
     a: 0.042442307609231

Attached are two contour plots of the radial spot displacements, one between
cool and hot images, and the other between cool images only. The color
bars roughly indicate the values of maximum and minimum spot
displacements.

Possible causes:

1. anisotropy of the thermal expansion of the invar foil HP caused by the rolling

2. non-uniform clamping of the HP by the clamp plate

3. vertical thermal gradient produced by the temperature raising method

4. buckling of the HP due to slight damage (dent)

Attachment 1: spot_displacements_same_temp_0611.jpg
spot_displacements_same_temp_0611.jpg
Attachment 2: spot_displacements_diff_temp_0611.jpg
spot_displacements_diff_temp_0611.jpg
  50   Wed Jun 16 11:47:11 2010 AidanMiscHartmann sensorSpot displacement maps - temperature sensitivity tests - PRISM

I think that the second plot is just showing PRISM and converting it to its radial components. This is due to the fact that the sign of the spot displacement on the LHS is the opposite of the sign of the spot displacement on the RHS. For spherical or cylindrical power, the sign of the spot displacement should be the same on both the RHS and the LHS.

I've attached a Mathematica PDF that illustrates this.
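The sign argument can also be made concrete numerically: sample a prism (uniform) and a defocus (radius-proportional) spot-displacement field along a horizontal line through the aperture center and project each onto the outward radial direction (synthetic fields, not HWS data):

```python
import numpy as np

# Sign check: a uniform (prism) displacement field has a radial component
# whose sign flips across the aperture; a defocus field (displacement
# proportional to radius) has one sign everywhere.
x = np.linspace(-1.0, 1.0, 101)
zeros = np.zeros_like(x)
r_hat = np.stack([np.sign(x), zeros])        # outward radial unit vector

prism = np.stack([np.ones_like(x), zeros])   # uniform displacement in +x
defocus = np.stack([x, zeros])               # displacement grows with radius

radial_prism = (prism * r_hat).sum(axis=0)     # -1 on LHS, +1 on RHS
radial_defocus = (defocus * r_hat).sum(axis=0) # |x|: same sign everywhere
print(radial_prism[0], radial_prism[-1], radial_defocus.min())
```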

 


Quote:

Results of initial measurement of temperature sensitivity of Hartmann sensor

[... rest of entry 49, quoted above ...]

 

Attachment 1: Prism_as_radial_vector.pdf
Prism_as_radial_vector.pdf
  51   Thu Jun 17 07:40:07 2010 James KMiscHartmann sensorSURF Log -- Day 1, Getting Started

 For Wednesday, June 16:

I attended the LIGO Orientation and first Introduction to LIGO lecture in the morning. In the afternoon, I ran a few errands (got keys to the office, got some Computer Use Policy Documentation done) and toured the lab. I then got Cygwin installed on my laptop along with the proper SSH packages and was successfully able to log in to and interact with the Hartmann computer in the lab through the terminal, from the office. I have started reading relevant portions of Dr. Brooks' thesis and of "Fundamentals of Interferometric Gravitational Wave Detectors" by Saulson.
  52   Thu Jun 17 22:03:51 2010 James KMiscHartmann sensorSURF Log -- Day 2, Getting Started

For Thursday, June 17:

Today I attended a basic laser safety training orientation, the second Introduction to LIGO lecture, a Summer Research Student Safety Orientation, and an Orientation for Non-Students living on campus (lots of mandatory meetings today). I met with Dr. Willems and Dr. Brooks in the morning and went over some background information regarding the project, then in the afternoon I got an idea of where I should progress from here from talking with Dr. Brooks. I read over the paper "Adaptive thermal compensation of test masses in advanced LIGO" and the LIGO TCS Preliminary Design document, and did some further reading in the Brooks thesis.

I'm making a little bit of progress with accessing the Hartmann lab computer with Xming but got stuck, and hopefully will be able to sort that out in the morning (I can't get to the Hartmann computer in the lab itself right now due to laser authorization restrictions). I'm currently able to remotely open an X terminal on the server, but wasn't able to figure out how to then log in to the Hartmann computer from it. I can do it via SSH on that terminal, of course, but I hit the same access restrictions as when logging in to the Hartmann computer via SSH directly from my laptop: I can log in just fine and access the camera and framegrabber programs, but for the vast majority of the stuff on there, including MATLAB, I don't have permissions for some reason and just get 'access denied'. I'm sure that somebody who actually knows something about this stuff will be able to point out the problem and point me in the right direction fairly quickly (I've never used SSH or the X Window System before, which is why it's taking me quite a while, but it's a great learning experience so far at least).

Goals for tomorrow: get that all sorted out and learn how to be able to fully access the Hartmann computer remotely and run MATLAB off of it. Familiarize myself with the camera program. Set the camera into test pattern mode and use the 'take' programs to retrieve images from it. Familiarize myself with the 'take' programs a bit and the various options and settings of them and other framegrabber programs. Get MATLAB running and use fread to import the image data arrays I take with the proper data representation (uint16 for each array entry). Then, set the camera back to recording actual images, take those images from the framegrabber and save them, then import them into MATLAB. I should familiarize myself with the various settings of the camera at this stage, as well.
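For reference, the planned fread-style import of a raw frame can be sketched as follows (the 1024x1024 uint16 layout is from the text above; the little-endian byte order and the demo filename are assumptions for illustration):

```python
import os
import tempfile

import numpy as np

# Python sketch of the planned raw-image import: one frame of 1024x1024
# little-endian uint16 pixels, as written by 'take'. Byte order and the
# demo filename are ASSUMPTIONS, not from the log.
def read_raw_frame(path, shape=(1024, 1024)):
    data = np.fromfile(path, dtype='<u2')   # uint16 samples
    return data.reshape(shape)

# Round-trip demo with a synthetic frame instead of real camera output:
path = os.path.join(tempfile.gettempdir(), 'demo0000.raw')
frame = (np.arange(1024 * 1024) % 4096).astype('<u2').reshape(1024, 1024)
frame.tofile(path)
loaded = read_raw_frame(path)
print(loaded.shape, loaded.dtype)
```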

 

--James

  53   Sat Jun 19 17:31:46 2010 James KMiscHartmann sensorSURF Log -- Day 3, Initial Image Analysis
For Friday, June 18:
(note that I haven't been working on this stuff all of Saturday or anything, despite posting it now. It was getting late on Friday evening so I opted to just type it up now, instead)

(all matlab files referenced can be found in /EDTpdv/JKmatlab unless otherwise noted)

I finally got Xming up and running on my laptop and had Dr. Brooks edit the permissions of the controls account, so now I can fully access the Hartmann computer remotely (run MATLAB, interact with the framegrabber programs, etc.). I was able to successfully adjust camera settings and take images using 'take', saving them as .raw files. I figured out how to import these .raw files into MATLAB using fopen and display them as grayscale images using the imshow command. I then wrote a program (readimgs.m, as attached) which takes as inputs a base filename and a number of images (n), then automatically loads the first n .raw files located in /EDTpdv/JKimg/ with the given base file name, formatting them properly and saving them as a 1024x1024x(n) matrix.

After trying out the test pattern of the camera, I set the camera into normal operating mode. I took 200 images of the HWS illuminated by the OLED, using the following camera settings:

 
Temperature data from the camera was, unfortunately, not taken, though I now know how to take it.
 
The first of these 200 images is shown below:
 
hws0000.png

As a test exercise in MATLAB and also to analyze the stability of the HWS output, I wrote a series of functions to allow me to find and plot the means and standard deviations of the intensity of each pixel over a series of images. First, knowing that I would need it in following programs in order to use the plot functions on the data, I wrote "ar2vec.m" (as attached), which simply inputs an array and concatenates all of the columns into a single column vector.

Then, I wrote "stdvsmean.m" (as attached), which inputs a 3D array (such as the 1024x1024x(n) array of n image files) and first calculates the standard deviation and mean of this array along the 3rd dimension (leaving two 1024x1024 arrays, which give the mean and standard deviation of each pixel over the n images). It then uses ar2vec to create two column vectors, representing the mean and standard deviation of each pixel, and plots a scatterplot of the standard deviation of each pixel vs. its mean intensity (with logarithmic axes), along with histograms of the mean intensities and standard deviations (with logarithmic y-axes).

"imgdevdat.m" (as attached) is simply a master function which combines the previous functions to input image files, format them, analyze them statistically and create plots.
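The readimgs -> ar2vec -> stdvsmean pipeline above can be sketched compactly in Python; here synthetic Poisson frames stand in for the .raw stack, and the plotting step is omitted:

```python
import numpy as np

# Sketch of the per-pixel statistics pipeline described above, with a
# synthetic Poisson image stack standing in for the camera frames.
rng = np.random.default_rng(0)
stack = rng.poisson(lam=120.0, size=(16, 16, 50)).astype(float)

pixel_mean = stack.mean(axis=2)   # per-pixel mean over the 50 frames
pixel_std = stack.std(axis=2)     # per-pixel standard deviation

mean_vec = pixel_mean.ravel()     # ar2vec equivalent: flatten to a vector
std_vec = pixel_std.ravel()
# A log-log scatter of std_vec vs mean_vec would then show the
# shot-noise baseline (slope 1/2 for Poisson statistics).
print(mean_vec.shape, std_vec.shape)
```

For shot-noise-limited pixels the log-log scatter is linear with slope 1/2, which is the "pretty linear" baseline that the 120-130 spike deviates from.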

Running this function for the first 20 images gave the following output:

(data from 20 images, over all 1024x1024 pixels)

Note that the background level is not subtracted out in this function, which is apparent from the plots. The logarithmic scatter plot looks pretty linear, as expected, but there are interesting features arising between the intensities of ~120 to ~130 (the obvious spike upward of standard deviation, followed immediately by a large dip downward).

MATLAB gets pretty bogged down trying to plot over a million data points at a time, to the point where it's very difficult to do anything with the plots. I therefore wrote the function "minimgstat.m" (as attached), which is very similar to imgdevdat.m except that before doing the analysis and plotting, it reduces the size of the image array to the upper-left NxN square (where N is an additional argument of the function).

Using this function, I did the same analysis of the upper-left 200x200 pixels over all 200 images:

(data from 200 images, over the upper-left 200x200 pixels)

The pixel intensities don't go as high this time because the upper portion of the image is dimmer than much of the rest of the image (as is apparent from looking at the image itself, and as I demonstrate further a little bit later on); note the resulting change in axis scaling when comparing the plots. We do, however, see the same behavior in the ~120-128 intensity level region (more pronounced in this plot because of the change in axis scaling).

I was interested in looking at which pixels constituted this band, so I wrote a function "imgbandfind.m" (as attached), which inputs a 2D array and a minimum and maximum range value, goes through the image array pixel-by-pixel, determines which pixels are within the range, and then constructs an RGB image which displays pixels within the range as red and images outside the range as black.
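A sketch of the imgbandfind idea (in-range pixels red, everything else black), run on a small synthetic array rather than a Hartmann image; the range values are the ones used in this entry:

```python
import numpy as np

# Sketch of imgbandfind: build an RGB image with in-range pixels shown
# red and everything else black (synthetic data for illustration).
def band_highlight(img, lo, hi):
    mask = (img >= lo) & (img <= hi)
    rgb = np.zeros(img.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = mask * 255          # red channel marks the band
    return rgb, mask

img = np.arange(100, 160).reshape(6, 10)   # toy "image", levels 100-159
rgb, mask = band_highlight(img, 120, 129)
print(mask.sum())  # 10 pixels fall in the 120-129 band here
```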

I inputted the first image in the series into this function along with the range of 120-129, and got the following:

(pixels in intensity range of 120-129 in first image)

So the pixels in this range appear to be the pixels on the outskirts of each wavefront dot near the vertical center of the image. The outer circles of the dots on the lower and upper portions of the image do not appear, perhaps because the top of the image is dimmer and the bottom of the image is brighter, and thus these outskirt pixels would then have lower and higher values, respectively. I plan to investigate this and why it happens (what causes this 'flickering' and if it is a problem at all) further.

The fact that the background levels are lower nearer to the upper portion of the image is demonstrated in the next image, which shows all intensity levels less than 70:
(pixels in intensity range of 0-70 in first image)

So the background levels appear to be nonuniform across the CCD, as are the intensities of each dot. Again, I plan to investigate this further. (Could it be something to do with stray light hitting the CCD nonuniformly, maybe? I haven't thought through all the possibilities.)
 
The OLED has been turned off, so my next immediate step will be to investigate the background levels further by analyzing the images when not illuminated by the OLED.
 
In other news: today I also attended the third Intro to LIGO lecture, a talk on Artificial Neural Networks and their applications to automated classification of stellar spectra, and the 40m Journal Club on the birth rates of neutron stars (though I didn't think to learn how to access the wiki until a few hours right before, and then didn't actually read the paper. I fully intend to read the paper for next week before the meeting).
 
Attachment 2: ar2vec.m
function V = ar2vec(A)
%AR2VEC V=ar2vec(A)
%concatenates the columns of 2D array A into a single column vector V

sz = size(A);
n=sz(1,2);
i=1;
V=[];

while i<(n+1)
... 7 more lines ...
Attachment 3: readimgs.m
function arr = readimgs(imn,n)
%readimgs('basefilename',n) 
%- A function to load a series of .raw files outputted by 'take'
%and stored in /opt/EDTpdv/JKimg/
%  Inputs: 'basefilename' is a string input (for example, for series of
%   images "testpat####.raw" input 'testpat'). "n" is the number of images,
%   so for testpat0000-testpat0004 input n=5

i=0;
arr=[];
... 32 more lines ...
Attachment 4: stdvsmean.m
function M = stdvsmean(A)
%STDVSMEAN takes a 3D array of image data and computes
%stdev vs. mean for each pixel

%find means/st devs between each image
astd = std(double(A),0,3);
armn = mean(double(A),3);

%convert into column vectors of pixel-by-pixel data
asvec=ar2vec(astd);
... 33 more lines ...
Attachment 5: imgdevdat.m
function imgdevdat(basefilename,imgnum)
%IMGDEVDAT Inputs base file name and number of images stored as .raw files
%in ../EDTpdv/JKimg/, automatically imports as 1024x1024x(n) matrix, finds
%the mean and standard deviation of each pixel in each image and plots
A=readimgs(basefilename,imgnum);
stdvsmean(A)
end

Attachment 6: minimgstat.m
function minimgstat(basefilename,imgnum,sz)
%MINIMGSTAT Inputs base file name and number of images stored as .raw files
%in ../EDTpdv/JKimg/, imports them, crops to the upper-left (sz)x(sz) square,
%then finds the mean and standard deviation of each pixel and plots
%(renamed from imgdevdat to match the file; 'sz' avoids shadowing size())
A=readimgs(basefilename,imgnum);
smA=A(1:sz,1:sz,:);
stdvsmean(smA)
end
Attachment 7: imgbandfind.m
function [HILT] = imgbandfind(img,minb,maxb)
%IMGBANDFIND inputs an image array and minimum and maximum value,
% then finds all values of the array within that range, then plots with
%values in range highlighted in red against a black background

img=double(img);
maxv=max(max(img));
sizm=size(img);
rows=sizm(1,1);
cols=sizm(1,2);
... 20 more lines ...
  54   Tue Jun 22 00:21:47 2010 James KMiscHartmann sensorSurf Log -- Day 4, Hartmann Spot Flickering Investigation

 I started out the day by taking some images from the CCD with the OLED switched off, to just look at the pattern when it's dark. The images looked like this:

 
Taken with camera settings:

The statistical analysis of them using the functions from Friday gave the following result:

 
At first glance, the distribution looks pretty Poissonian, as expected. There are a few scattered pixels registering a little brighter, but that's perhaps not so terribly unusual, given the relatively tiny spread of intensities with even the most extreme outliers. I won't say for certain whether or not there might be something unexpected at play, here, but I don't notice anything as unusual as the standard deviation 'spike' seen from intensities 120-129 as observed in the log from yesterday.
 
Speaking of that spike, the rest of the day was spent trying to investigate it a little more. In order to accomplish this, I wrote the following functions (all attached):
 
-spotfind.m -- inputs a 3D array of several Hartmann images as well as a starting pixel and threshold intensity level. analyzes the first image, scanning starting at the starting pixel until it finds a spot (with an edge determined by the threshold level), after which it finds a box of pixels which completely surrounds the spot and then shrinks the matrix down to this size, localizing the image to a single spot
 
-singspotcent.m -- inputs the image array outputted from spotfind, subtracts an estimate of the background, then uses the centroiding algorithm sum(x*P^2)/sum(P^2) to find the centroid (where x is the coordinate and P is the intensity level), then outputs the centroid location
 
-hemiadd.m -- inputs the image from spotfind and the centroid from singspotcent, subtracts an estimate of the background, then finds the sum total intensity in the top half of the image above the centroid, the bottom half, the left half and the right half, outputs these values as n-component vectors for an n-image input, subtracts from each vector its mean and then plots the deviations in intensity from the mean in each half of the image as a function of time
 
-edgeadd.m -- similar to hemiadd, except that rather than adding up all pixels on one half of the image, it inputs a threshold, determines how far to the right of the centroid the spot falls past this threshold and uses that as a radial length, then finds the sum of the intensities of a bar of 3 pixels on this "edge" at the radial length away from the centroid.
 
-spotfft.m -- performs a fast fourier transform on the outputs from edgeadd, outputting the frequency spectrum at which the intensity of these edge pixels oscillate, then plotting these for each of the four edge vectors. see an example output below.
 
--halfspot_fluc.m and halfspot_edgefluc.m -- master functions which combine and automate the previous functions
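The centroiding algorithm used in singspotcent, sum(x*P^2)/sum(P^2), can be sketched per-axis as follows (synthetic Gaussian spot and function name are illustrative; background assumed already subtracted):

```python
import numpy as np

# Per-axis sketch of the intensity-squared-weighted centroid
# x_c = sum(x * P^2) / sum(P^2) described above.
def centroid_p2(img):
    P2 = img.astype(float) ** 2
    rows = np.arange(img.shape[0])
    cols = np.arange(img.shape[1])
    rc = (P2.sum(axis=1) * rows).sum() / P2.sum()
    cc = (P2.sum(axis=0) * cols).sum() / P2.sum()
    return rc, cc

# Synthetic Gaussian spot centered at (row, col) = (11.0, 20.5):
y, x = np.mgrid[0:32, 0:32]
spot = np.exp(-((x - 20.5)**2 + (y - 11.0)**2) / (2 * 3.0**2))
rc, cc = centroid_p2(spot)
print(rc, cc)  # close to (11.0, 20.5)
```

Squaring the intensity weights the centroid toward the brightest pixels, which makes it less sensitive to residual background than a plain intensity-weighted centroid.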
 
Dr. Brooks has suggested that the observed flickering might be an effect of the finite thickness of the Hartmann plate. The OLED can be treated as a point source, so it emits an approximately spherical wavefront, and light from it hits the edge of each hole at an angle and is scattered onto the CCD. If the plate vibrates (which it certainly must to some degree), the wavefront hits this edge at a slightly different angle as the edge is displaced, so the scattered light lands on the CCD at a different point, causing the flickering (which is, after all, observed to occur near the edge of the spot). This effect certainly must cause some level of noise, but whether it's the culprit for our 'flickering' spike in the standard deviation remains to be seen.

Here is the frequency spectrum of the edge intensity sums for two separate spots, found over 128 images:
Intensity Sum Amplitude Spectrum of Edge Fluctuations, 128 images, spot search point (100,110), threshold level 110

128 images, spot search point (100,100), threshold level 129
At first glance, I am not able to conclude anything from this data. I should investigate this further.

A few things to note, to myself and others:
--I still should construct a Bode plot from this data and see if I can deduce anything useful from it
--I should think about whether or not my algorithms are good for detecting what I want to look at. Is looking at a 3 pixel vertical or horizontal 'bar' on the edge good for determining what could possibly be a more spherical phenomenon? Are there any other things I need to consider? How will the settings of the camera affect these images and thus the results of these functions?
--Am I forgetting any of the subtleties of FFTs? I've confirmed that I am measuring the amplitude spectrum by looking at reference sine waves, but I should be careful since I haven't worked with these in a while
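The reference-sine check mentioned in the last bullet can be scripted. This Python sketch mirrors the fft/NFFT normalization used in spotfft.m (one-sided amplitude spectrum; the 58 Hz frame rate is assumed as the sampling rate):

```python
import numpy as np

# Amplitude-spectrum sanity check: a reference sine at a known frequency
# should give one peak at that frequency with the right amplitude.
fs = 58.0                     # Hz, frame rate assumed as sampling rate
n = 128                       # number of frames, as in the plots above
t = np.arange(n) / fs
sig = 0.7 * np.sin(2 * np.pi * 10.0 * t)   # 10 Hz, amplitude 0.7

nfft = 2 ** int(np.ceil(np.log2(n)))       # nextpow2 padding, as in spotfft.m
spec = np.fft.rfft(sig, nfft) / n          # matches fft(t,NFFT)/n
amp = 2 * np.abs(spec)                     # one-sided amplitude
freqs = np.fft.rfftfreq(nfft, d=1 / fs)

peak = freqs[amp.argmax()]
print(peak, amp.max())  # peak near 10 Hz, amplitude near 0.7
```

Because 10 Hz does not land exactly on an FFT bin here, a little spectral leakage pulls the peak amplitude slightly below 0.7; that is worth remembering when reading amplitudes off the edge-fluctuation spectra.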
 
It's late (I haven't been working on this all night, but I haven't gotten the chance to type this up until now), so thoughts on this problem will continue tomorrow morning..

Attachment 1: spotfind.m
function [spotM,r0,c0] = spotfind(M,level,rs,cs)
%SPOTFIND Inputs a 3D array of hartmann spots and spot edge level
%and outputs a subarray located around a single spot located near (rs,cs)
cut=level/65535;
A=double(M(:,:,1)).*double(im2bw(M(:,:,1),cut));

%start at (rs,cs) and sweep to find spot
r=rs;
c=cs;
while A(r,c)==0
... 34 more lines ...
Attachment 2: singspotcent.m
function [rc,cc] = singspotcent(A)
%SINGSPOTCENT returns centroid location for first image in input 3D matrix
MB=double(A(:,:,1));
[rn cn]=size(MB);
M=MB-mean(mean(min(MB)));
r=1;
c=1;
sumIc=0;
sumIr=0;
while c<(cn+1)
... 26 more lines ...
Attachment 3: hemiadd.m
function [topsum,botsum,leftsum,ritsum] = hemiadd(MB,rcd,ccd)
%HEMIADD inputs a 3D image matrix and centroid location and finds the difference of
%the sums of the top half, bottom half, left half and right half at each time
%compared to their means over that time

%round coordinates of centroid
rc=round(rcd);
cc=round(ccd);

%subtract approximate background
... 51 more lines ...
Attachment 4: edgeadd.m
function [topsum,botsum,leftsum,ritsum] = edgeadd(MB,rcd,ccd,edgemax)
%EDGEADD inputs a 3D image matrix and centroid location and finds the difference of
%the sums of 3 edge pixels (at a radial distance set by threshold "edgemax"
%from the centroid) for the top half, bottom half, left half and right half
%at each time compared to their means over that time

%round coordinates of centroid
rc=round(rcd);
cc=round(ccd);

... 59 more lines ...
Attachment 5: spotfft.m
function spotfft(t,b,l,r)
%SPOTFFT Does an fft and plots the frequency spectrum of four input vectors
%Specifically, this is to be used with halfspot_edgefluc to find the
%frequencies of oscillations about the edges of Hartmann spots
[n,m]=size(t);
NFFT=2^nextpow2(n);
T=fft(t,NFFT)/n;
B=fft(b,NFFT)/n;
L=fft(l,NFFT)/n;
R=fft(r,NFFT)/n;
... 30 more lines ...
Attachment 6: halfspot_fluc.m
function [top,bot,lft,rgt] = halfspot_fluc(M,spotr,spotc,thresh)
%HALFSPOT_FLUC Inputs a 3D array of Hartmann sensor images, along with an
%approximate spot location and intensity threshold. Finds a spot on the
%first image near (spotc,spotr) and defines boundary of spot near an
%intensity of 'thresh'. Outputs fluctuations of the intensity sums of the
%top, bottom, left and right halves of the spot about their means, and
%graphs these against each other automatically.

[I,r0,c0]=spotfind(M,thresh,spotr,spotc);
[r,c]=singspotcent(I);
... 7 more lines ...
Attachment 7: halfspot_edgefluc.m
function [top,bot,lft,rgt] = halfspot_edgefluc(M,spotr,spotc,thresh,plot)
%HALFSPOT_EDGEFLUC Inputs a 3D array of Hartmann sensor images, along with an
%approximate spot location and intensity threshold. Finds a spot on the
%first image near (spotc,spotr) and defines boundary of spot near an
%intensity of 'thresh'. Outputs fluctuations of the intensity sums of the
%top, bottom, left and right edges of the spot about their means, and
%graphs these against each other automatically.
%
%For 'plot', specify 'time' for the time signal or 'fft' for the frequency

... 10 more lines ...
  55   Tue Jun 22 22:30:24 2010 James KMiscHartmann sensorSURF Log -- Day 5, more Hartmann image preliminary analysis

 Today I spoke with Dr. Brooks and got a rough outline of what my experiment for the next few weeks will entail. I'll be getting more of the details and getting started a bit more, tomorrow, but today I had a more thorough look around the Hartmann lab and we set up a few things on the optical table. The OLED is now focused through a microscope to keep the beam from diverging quite as much before it hits the sensor, and the beam is roughly aligned to shine onto the Hartmann plate. The Hartmann images currently look like this (on a color scale of intensity):

hws.png

Where this image was taken with the camera set to exposure time 650 microseconds, frequency 58Hz. The visible 'streaks' on the image are believed to possibly be an artifact of the camera's data acquisition process.

I tested to see whether the same 'flickering' is present in images under this setup.

For frequency kept at 58Hz, the following statistics were found from a 200x200 pixel box within a series of 10 images taken at different exposure times. Note that the range on the plot has been reduced to the region near the relevant feature, and that this range is not changed from image to image:

750 microseconds:

750us.png

1000 microseconds:

1000us.png

1500 microseconds:

1500us.png

2000 microseconds:

2000us.png

3000 microseconds:

3000us.png

4000 microseconds:

4000us.png

5000 microseconds. Note that the background level is approaching the level of the feature:

5000us.png

6000 microseconds. Note that the axis setup is not restricted to the same region, and that the background level exceeds the level range of the feature. This demonstrates that the 'feature' disappears from the plot when the plot does not include the specific range of ~115-130:

8000us.png

 

When images containing the feature intensities are averaged over a greater number of images, the plot takes on the following appearance (for a 200x200 box within a series of 100 images, 3000us exposure time):

hws3k.png

This pattern changes a bit when averaged over more images. It looks as though this could, perhaps, just be the result of the decrease in the standard deviation of the standard deviations in each pixel resulting from the increased number of images being considered for each pixel (that is, the line being less 'spread out' in the y-axis direction). 

 

To demonstrate that frequency doesn't have any effect, I got the following plots from images where I set the camera to different frequencies then set the exposure time to 3000us (I wouldn't expect this to have any effect, given the previous images, but these appear to demonstrate that the 'feature' does not vary with time):

 

Set to 30Hz:

f30Hz.png

Set to 1Hz:

f1Hz.png

 

To make sure that something weird wasn't going on with my algorithm, I did the following: I constructed a 10-component vector of random numbers, then concatenated that vector beside itself ten times. I then stacked this 2D array into a 3D array, scaling each layer by a different integer multiple, ensuring that the standard deviations of each row would be integer multiples of each other when computed along the direction of the random change (I chose the integer multiples so that some of these values would fall within the range of 115-130). Thus, if my function wasn't making any weird mistakes, I would end up with a linear plot of standard deviation vs. mean with a slope of 1. When the array was fed into the function used for the previous plots, the output plot was indeed linear, and a least-squares regression of the mean/deviation data confirmed that the slope was exactly 1 and the intercept exactly 0. So I'm pretty certain that the feature observed in these plots is not an artifact of the analysis algorithm (all the functions are pretty simple, so I wouldn't expect it to be, but it doesn't hurt to double-check).
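A simplified version of this consistency check can be scripted: build a synthetic stack in which each pixel's standard deviation equals its mean by construction, and confirm the fitted std-vs-mean relation comes out with slope 1 and intercept 0:

```python
import numpy as np

# Simplified algorithm check: per-pixel std == per-pixel mean by
# construction, so the std-vs-mean fit must have slope 1, intercept 0.
rng = np.random.default_rng(1)
z = rng.standard_normal(10)
z = (z - z.mean()) / z.std()            # exactly zero mean, unit std

means = np.linspace(100, 140, 25).reshape(5, 5)   # spans the 115-130 band
stack = means[:, :, None] * (1.0 + z)   # per-pixel std equals per-pixel mean

m = stack.mean(axis=2).ravel()
s = stack.std(axis=2).ravel()
slope, intercept = np.polyfit(m, s, 1)
print(round(slope, 6), round(intercept, 6))
```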

 

I would conjecture from all of this that the observed feature in the plots is the result of some property of the CCD array or other element of the camera. It does not appear to have any dependence on exposure time or to scale with the relative overall intensity of the plots, and, rather, seems to depend on the actual digital number read out by the camera. This would suggest to me, at first glance, that the behavior is not the result of a physical process having to do with the wavefront.

 

EDIT: Some late-night conjecturing: Consider the following,

I don't know how the specific analog-to-digital conversion onboard the camera works, but I got to thinking about ADCs. I assume, perhaps incorrectly, that it works on roughly the same idea as the flash ADCs that I dealt with back in my Digital Electronics class. That is, I don't know if it has the same structure (a linear resistor ladder hooked up to comparators which compare the ladder voltages to the analog input, followed by combinational logic that converts the comparator outputs to a digital level), but I assume that it must, at some level, compare the analog input to a number of voltage thresholds, find the highest threshold that the analog input exceeds, and then output the digital level corresponding to that particular threshold voltage.

Now, consider if there was a problem with such an ADC such that one of the threshold voltages was either unstable or otherwise different than the desired value (for a Flash ADC, perhaps this could result from a problem with the comparator connected to that threshold level, for example). Say, for example, that the threshold voltage corresponding to the 128th level was too low. In that case, an analog input voltage which should be placed into the 127th level could, perhaps, trip the comparator for the 128th level, and the digital output would read 128 even when the analog input should have corresponded to 127.

So if such an ADC was reading a voltage (with some noise) near that threshold, what would happen? Say that the analog voltage corresponded to 126 and had noise equivalent to one digital level. It should, then, give readings of 125, 126 or 127. However, if the voltage threshold for the 128th level was off, it would bounce between 125, 126, 127 and 128 -- that is, it would appear to have a larger standard deviation than the analog voltage actually possessed.

Similarly, consider an analog input voltage corresponding to 128 with noise equivalent to one digital level. It should read out 127, 128 and 129, but with the lower-than-desired threshold for 128 it would perhaps read out only 128 and 129 -- that is, the standard deviation of the digital signal would be lower for points just above 128.

This is very similar to the sort of behavior that we're seeing!
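This hypothesized failure mode is easy to simulate. The sketch below (Python/NumPy; the camera's actual ADC architecture is unknown, so this is purely illustrative) quantizes a noisy analog level with the comparator threshold for code 128 lowered by a fraction of an LSB, and reproduces the signature described above: the measured standard deviation is inflated for means just below 128 and deflated for means just above it.

```python
import numpy as np

def faulty_adc(v, bad_offset=0.4):
    """Quantize analog values (in LSB units) to integer codes, with the
    threshold for code 128 lowered by `bad_offset` LSB, so inputs just
    below 128 trip the faulty comparator early and read out as 128."""
    v = np.asarray(v, dtype=float)
    code = np.floor(v)
    early = (v >= 128.0 - bad_offset) & (v < 128.0)
    return np.where(early, 128.0, code)

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 20000)   # noise ~ one digital level

# std of the digitized signal for a mean just below / just above 128
std_below = faulty_adc(127.0 + noise).std()   # inflated by the bad threshold
std_above = faulty_adc(129.0 + noise).std()   # deflated by the bad threshold
```

With an ideal ADC both values would sit near sqrt(1 + 1/12) ≈ 1.04 LSB; the faulty threshold pushes the sub-128 std up and the above-128 std down, matching the shape of the measured std-vs-mean plots.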

Thinking about this further, I reasoned that if this is what the ADC in the camera is doing, then if we looked through the image arrays for instances of the digital levels 127 and 128, we would see too few instances of 127 and too many instances of 128 -- several of the analog levels that should correspond to 127 would be 'misread' as 128. So I went back to MATLAB and wrote a function that looks through a 1024x1024xN array of N images and, for every integer between an input minimum level and maximum level, finds the number of instances of that level in the images. Inputting an array of 20 Hartmann sensor images, with minimum and maximum levels of 50 and 200, gave the following:

levelinstances.png

Look at that huge spike at 128! This behavior is more complex than my simple picture, in which 127 would merely have 'too few' counts and 128 'too many', but it seems consistent with the hypothesis that the voltage threshold for the 128th digital level is too low, producing false output readings of 128 and also reducing the number of correct outputs for values just below 128. And assuming that I'm thinking about the workings of the ADC correctly, this is consistent with an increased standard deviation in the digital level for pixels with a mean just below 128 and a reduced standard deviation for pixels with a mean just above 128, which is what we observe.
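The level-counting step itself is straightforward to sketch (Python/NumPy stand-in for the MATLAB function; the synthetic image stack below is made up for illustration, so it shows no spike at 128):

```python
import numpy as np

def level_counts(images, lo, hi):
    """Count occurrences of each integer digital level in [lo, hi]
    across a stack of images (any shape)."""
    levels = np.arange(lo, hi + 1)
    flat = np.asarray(images).ravel()
    counts = np.array([(flat == k).sum() for k in levels])
    return levels, counts

# hypothetical stand-in for 20 Hartmann frames: a noisy uniform field
rng = np.random.default_rng(2)
stack = np.clip(rng.normal(125, 20, (20, 64, 64)).round(), 0, 255)
levels, counts = level_counts(stack, 50, 200)
```

Applied to real frames from a camera with the suspected ADC fault, `counts` would show the deficit at 127 and the excess at 128 seen in the histogram above.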

 

This is my current hypothesis for why we're seeing that feature in the plots. Let me know what you think, and if that seems reasonable.

 

  56   Wed Jun 23 06:49:48 2010   Aidan   Misc   Hartmann sensor   SURF Log -- Day 5, more Hartmann image preliminary analysis

Nice work!

 

Quote:

 Today I spoke with Dr. Brooks and got a rough outline of what my experiment for the next few weeks will entail. I'll be getting more of the details and getting started a bit more, tomorrow, but today I had a more thorough look around the Hartmann lab and we set up a few things on the optical table. The OLED is now focused through a microscope to keep the beam from diverging quite as much before it hits the sensor, and the beam is roughly aligned to shine onto the Hartmann plate. The Hartmann images currently look like this (on a color scale of intensity):

hws.png

This image was taken with the camera set to an exposure time of 650 microseconds at a frequency of 58Hz. The visible 'streaks' on the image may be an artifact of the camera's data acquisition process.

I tested to see whether the same 'flickering' is present in images under this setup.

With the frequency kept at 58Hz, the following statistics were found from a 200x200-pixel box within a series of 10 images taken at different exposure times. Note that the plotted range has been reduced to the region near the relevant feature, and that this range is not changed from image to image:

750 microseconds:

750us.png

1000 microseconds:

1000us.png

1500 microseconds:

1500us.png

2000 microseconds:

2000us.png

3000 microseconds:

3000us.png

4000 microseconds:

4000us.png

5000 microseconds. Note that the background level is approaching the level of the feature:

5000us.png

6000 microseconds. Note that the axes are not restricted to the same region here, and that the background level exceeds the level range of the feature. This demonstrates that the 'feature' disappears from the plot when the plot does not include the specific range of ~115-130:

8000us.png

 

When images containing the feature intensities are averaged over a greater number of images, the plot takes on the following appearance (for a 200x200 box within a series of 100 images, 3000us exposure time):

hws3k.png

This pattern changes a bit when averaged over more images. It looks as though this could simply be the result of the reduced spread of the per-pixel standard deviations as more images are included for each pixel (that is, the line becoming less 'spread out' in the y-axis direction).

 

To demonstrate that frequency doesn't have any effect, I took the following plots from images where I set the camera to different frequencies, then set the exposure time to 3000us (I wouldn't expect this to have any effect, given the previous images, but these appear to demonstrate that the 'feature' does not vary with time):

 

Set to 30Hz:

f30Hz.png

Set to 1Hz:

f1Hz.png

 

To make sure that something weird wasn't going on in my algorithm, I did the following check: I constructed a 10-component vector of random numbers, then stacked ten copies of that vector into a 10x10 matrix. I then built a 3D array from ten copies of that matrix, each scaled by a different integer multiple, so that the standard deviations of the rows would be integer multiples of one another when computed along the direction of the random variation (I chose the multiples so that some of the values would fall within the range of 115-130). If my function wasn't making any weird mistakes, I would end up with a linear plot of standard deviation vs. mean, with a slope of 1. When this array was fed into the function used to produce the previous plots, the output plot was indeed linear, and a least-squares regression of the mean/deviation data confirmed that the slope was exactly 1 and the intercept exactly 0. So I'm fairly confident that the feature observed in these plots is not an artifact of the algorithm used to analyze the data (all of the functions involved are pretty simple, so I wouldn't expect it to be, but it doesn't hurt to double-check).

 

I would conjecture from all of this that the observed feature in the plots results from some property of the CCD array or some other element of the camera. It does not appear to depend on exposure time or to scale with the overall intensity of the plots; rather, it seems to depend on the actual digital number read out by the camera. This suggests, at first glance, that the behavior is not the result of a physical process having to do with the wavefront.

 

EDIT: Some late-night conjecturing. Consider the following:

I don't know how the analog-to-digital conversion onboard the camera actually works, but I got to thinking about ADCs. I assume, perhaps incorrectly, that it works on roughly the same idea as the flash ADCs I dealt with in my digital electronics class. It may not have the same structure (a linear resistor ladder feeding comparators that compare the ladder voltages to the analog input, followed by logic that converts the comparator outputs into a digital level), but I assume that it must, at some level, compare the analog input against a set of voltage thresholds, find the highest threshold that the input exceeds, and output the digital level corresponding to that threshold.

Now, consider if there was a problem with such an ADC such that one of the threshold voltages was either unstable or otherwise different than the desired value (for a Flash ADC, perhaps this could result from a problem with the comparator connected to that threshold level, for example). Say, for example, that the threshold voltage corresponding to the 128th level was too low. In that case, an analog input voltage which should be placed into the 127th level could, perhaps, trip the comparator for the 128th level, and the digital output would read 128 even when the analog input should have corresponded to 127.

So if such an ADC was reading a voltage (with some noise) near that threshold, what would happen? Say that the analog voltage corresponded to 126 and had noise equivalent to one digital level. It should, then, give readings of 125, 126 or 127. However, if the voltage threshold for the 128th level was off, it would bounce between 125, 126, 127 and 128 -- that is, it would appear to have a larger standard deviation than the analog voltage actually possessed.

Similarly, consider an analog input voltage corresponding to 128 with noise equivalent to one digital level. It should read out 127, 128 and 129, but with the lower-than-desired threshold for 128 it would perhaps read out only 128 and 129 -- that is, the standard deviation of the digital signal would be lower for points just above 128.

This is very similar to the sort of behavior that we're seeing!

Thinking about this further, I reasoned that if this is what the ADC in the camera is doing, then if we looked through the image arrays for instances of the digital levels 127 and 128, we would see too few instances of 127 and too many instances of 128 -- several of the analog levels that should correspond to 127 would be 'misread' as 128. So I went back to MATLAB and wrote a function that looks through a 1024x1024xN array of N images and, for every integer between an input minimum level and maximum level, finds the number of instances of that level in the images. Inputting an array of 20 Hartmann sensor images, with minimum and maximum levels of 50 and 200, gave the following:

levelinstances.png

Look at that huge spike at 128! This behavior is more complex than my simple picture, in which 127 would merely have 'too few' counts and 128 'too many', but it seems consistent with the hypothesis that the voltage threshold for the 128th digital level is too low, producing false output readings of 128 and also reducing the number of correct outputs for values just below 128. And assuming that I'm thinking about the workings of the ADC correctly, this is consistent with an increased standard deviation in the digital level for pixels with a mean just below 128 and a reduced standard deviation for pixels with a mean just above 128, which is what we observe.

 

This is my current hypothesis for why we're seeing that feature in the plots. Let me know what you think, and if that seems reasonable.

 

 

  57   Wed Jun 23 22:57:22 2010   James K   Misc   Hartmann sensor   SURF Log -- Day 6, Centroiding

 So in addition to taking steps toward setting things up for the experiment in the lab, I spent a good deal of the day figuring out how to use the pre-existing code for finding the centroids in spot images. I lost quite a bit of time trying to use an outdated version of the code that didn't work on the actual captured images, and once I was directed to the right version I was held up for a little while by a bug.

The 'bug' turns out to be something very simple, yet relatively subtle. The function centroid_images.m in '/opt/EDTpdv/hartmann/src/' was returning a threshold of 0 for my images, even though it had been working shortly before on an image that Dr. Brooks loaded. Looking through the code, I noticed that before finding the threshold with the MATLAB function graythresh, several adjustments are made to subtract out the background and normalize the array: after estimating and subtracting a background, the function divides the entries of the image array by the maximum value in the image. For arrays of numbers represented as doubles, this is fine. However, the function that I wrote to import my image arrays into MATLAB outputs arrays with integer data, and integer division in MATLAB rounds each quotient to the nearest integer. So when the function divided my integer image arrays by their maximum values, the "normalized" array contained only ones and zeros. graythresh views this as a black-and-white image, and thus outputs a threshold of 0.

To remedy this, I edited centroid_images.m to convert the image array into an array of doubles near the very beginning of the function. The only new line is simply "image=double(image);", and I made a note of my edit in a comment above that line. The function started working for me after I did that.
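The failure mode is easy to reproduce. MATLAB's integer types round the result of a division to the nearest integer; NumPy's `/` promotes to float instead, so the rounding is applied explicitly below to mimic the MATLAB behavior (the small image array is made up for illustration):

```python
import numpy as np

img = np.array([[10, 200, 90],
                [40, 250, 120]], dtype=np.uint8)

# MATLAB integer arithmetic rounds each quotient, so "normalizing" an
# integer image collapses it to zeros and ones -- graythresh then sees
# a binary image and returns a threshold of 0:
broken = np.round(img / img.max()).astype(np.uint8)

# the fix from the log entry, image = double(image): cast to floating
# point before normalizing, which preserves the intensity structure
fixed = img.astype(float) / img.max()
```

Here `broken` contains only 0s and 1s, while `fixed` keeps six distinct normalized intensities in (0, 1].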

 

I then wrote a function which automatically centroids an input image and then plots the centroids as scatter-plot of red circles over the image. For an image taken off of the Hartmann camera, it gave the following:

centroidplot_nozoom.png

Zoomed in on the higher-intensity peaks, the centroids look good. They're a little offset, but that could just be an artifact of the plotting procedure; I can't say for certain either way. They all appear offset by the same amount, though:

centroidplot_zoom.png

One problem is that, for spots with a much lower relative intensity than the maximum intensity peak, the centroid appears to be offset:

centroidplot_zoom2.png

Better centering of the beam and more even illumination of the Hartmann plate could mitigate this problem, perhaps.

 

I also wrote a function which inputs two image matrices and outputs vector field plots representing the shift in each centroid from the first to the second images. To demonstrate that I could use this function to display the shifting of the centroids from a change in the wavefront, I translated the fiber mount of the SLED in the direction of the optical axis by about 6 turns of the z-control knob  (corresponding to a translation of about 1.9mm, according to the user's guide for the fiber aligner). This gave the following images:

 

Before the translation:

6turn_before.png

After:

6turn_after.png

 This led to a displacement of the centroids shown as follows:

6turnDisplacementVectors.png

Note that the magnitudes of the actual displacements are small, making the shift difficult to see. However, when we scale the displacement vectors up, we get much more readily visible direction vectors (having the same direction as the actual displacement vectors, but not the same magnitude):

6turnDirectionVectors.png

This was a very rough sort of measurement, since exposure time, focus of the microscope optic, etc. were not adjusted, and the centroids are compared between single images rather than composite images, meaning that random noise could have quite an effect, especially for the lower-magnitude displacements. However, this plot appears to show the centroids 'spreading out', which is as expected for moving the SLED closer to the sensor along the optical axis.
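The displacement-field computation itself is simple. Below is a Python/NumPy sketch (not the attached MATLAB code; the function name and the toy 'magnified' spot grid are invented for illustration) that produces reference positions and scaled displacement vectors suitable for a quiver plot:

```python
import numpy as np

def displacement_field(ref_centroids, new_centroids, scale=1.0):
    """Return reference positions and (optionally scaled) displacement
    vectors between two matched sets of centroids (Nx2 arrays)."""
    ref = np.asarray(ref_centroids, dtype=float)
    dv = (np.asarray(new_centroids, dtype=float) - ref) * scale
    return ref, dv

# toy example: a 5x5 spot grid magnified by 1% about its centre (2, 2),
# i.e. spots 'spreading out' as expected when the source moves closer
grid = np.stack(np.meshgrid(np.arange(5), np.arange(5)), -1).reshape(-1, 2).astype(float)
after = 2.0 + (grid - 2.0) * 1.01

# scale=50 exaggerates the tiny shifts into visible direction vectors
ref, dv = displacement_field(grid, after, scale=50.0)
# matplotlib's quiver(ref[:,0], ref[:,1], dv[:,0], dv[:,1]) would draw the field
```

Scaling only changes the arrow lengths, not their directions, which is why the scaled plot still shows the correct radial 'spreading' pattern.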

 

The following MATLAB functions were written for this (both attached):

centroidplot.m -- calls centroid_image and plots the data

centroidcompare.m -- calls centroid_image twice for two input matrices, using the first matrix's centroid output structure as a reference for the second, then makes a vector-field plot from the displacements and reference positions in the second output centroid structure.

Attachment 5: 6turn_before.png
6turn_before.png
Attachment 9: centroidplot.m
function centroiddata=centroidplot(M,N)
%a function to read the image matrix M and plot the centroid of each spot
%on the image
H=M(:,:,N);
cd /opt/EDTpdv/hartmann/src/
centroiddata = centroid_image(H);
cd /opt/EDTpdv/JKmatlab/

v=centroiddata.current_centroids;
r=v(:,1);
... 6 more lines ...
Attachment 10: centroidcompare.m
function centroiddata=centroidcompare(A,B,M,N)
%compares the Mth image in 3D image matrix A to Nth in B
H=A(:,:,M);
I=B(:,:,N);
cd /opt/EDTpdv/hartmann/src/
cent0=centroid_image(H);
centroiddata=centroid_image(I,cent0);
cd /opt/EDTpdv/JKmatlab
v=centroiddata.reference_centroids;
dv=centroiddata.displacement_of_centroids;
... 16 more lines ...