ID | Date | Author | Type | Category | Subject
223 | Fri Oct 26 09:49:47 2018 | Aidan | Misc | Optics | Starting 80C cure of epoxy

I've started an 80C cure of two materials bonded by EPOTEK 353ND. The objective is to see (after curing) how much the apparent glass transition temperature is increased over a room-temperature cure.

224 | Tue Dec 4 16:57:08 2018 | Aidan | Computing | Hartmann sensor | Updated GIT version of HWS code

I changed the HWS code to the new git.ligo HWS version.

  • Object files are in ~/hws
  • Scripts are in ~/hws-server
  • Utilities are still in the old git repo, which has been moved to ~/.HWS_code_temporary_home

I've set up some symbolic links to these directories to mimic the old directory structure, so:

  • ~/pyHWS links to the new object file directory
  • ~/pyHWS/scripts links to the new scripts directory
  • ~/pyHWS/utilities links to the old utilities directory
225 | Fri Feb 8 10:48:33 2019 | Aidan | Lab Infrastructure | General | Water damage repair work in the TCS Lab

Caltech Facilities has determined that the walls in the SE corner of the TCS Lab in West Bridge were water damaged during last weekend’s rain. They are going to remove the plaster from the walls and dehumidify the area for a week or so. All tables in the room are going to be covered with plastic for this process. In the short term I’ve shut down all the equipment in the lab (including FB4). The 2-micron cavity-testing fabrication has been moved next door to the QIL.

242 | Tue Sep 3 10:49:57 2019 | Aidan | Lab Infrastructure | General | Exhaust duct added for new bake area in TCS lab

Facilities came in on Friday and teed off a new duct to provide exhaust for the proposed new vacuum bake area in the TCS Lab. Photos are attached. 

We installed a plastic sheet between the work area and the rest of the lab (the rest of the lab was overpressurized relative to the work area). Also, they used a vacuum when doing any drilling.

 

244 | Tue Jun 15 08:47:27 2021 | Aidan | Lab Infrastructure | General | Cleaned up HWS table in preparation for lab move

I cleaned up the HWS table in preparation for replacement with the 4x10 table. We still need to move the cabinet and get the enclosure out of the way.

 

 

245 | Mon Jul 26 16:23:03 2021 | Aidan | Misc | Flood | Lab flooded from broken pipe leading to sump room

 

Koji: QIL/TCS entrance flooding. Check your lab

Anchal: Can someone take a look at CTN too?

Koji: TCS needs more people @aidan

Koji: CTN ok

Aidan: On my way

Shruti: Cryo seems fine

Aidan: There was a leak in a pipe in the wall of B265A. It was coming from the building air conditioner condensation overflow. Facilities has fixed the pipe and is working on clean-up

246 | Tue Jul 27 08:39:33 2021 | Aidan | Misc | Flood | Tuesday (27-Jul) morning check - lab looks okay

I checked the lab this morning. It was dry and the wall was in the same state as yesterday.

247 | Thu Aug 12 16:13:44 2021 | Aidan | Lab Infrastructure | Flood | Carpentry shop removed wet plaster sections from the wall - it's drying for a few days

The carpentry shop removed wet plaster sections from the wall following the flood (process was gentle scraping of wet plaster flakes, supervised by me). The wet section of wall needs a few days to dry and then they will plaster and paint it.

 

248 | Wed Aug 25 11:28:56 2021 | Aidan | Misc | Flood | Lab flooded from broken pipe leading to sump room

11:29AM - Lab has flooded again this morning. I'm calling PMA. Looks to be the same issue as before.

Quote:

 

Koji: QIL/TCS entrance flooding. Check your lab

Anchal: Can someone take a look at CTN too?

Koji: TCS needs more people @aidan

Koji: CTN ok

Aidan: On my way

Shruti: Cryo seems fine

Aidan: There was a leak in a pipe in the wall of B265A. It was coming from the building air conditioner condensation overflow. Facilities has fixed the pipe and is working on clean-up

 

249 | Thu Aug 26 09:00:39 2021 | Aidan | Misc | Flood | Lab flooded from broken pipe leading to sump room

Some photos of water and clean-up.

Summary: I came into the lab around 11:30AM and found water on the floor in the changing room outside QIL/TCS. It turns out the condensation overflow pipe from the AC blew out again, this time near the ceiling. Water was on the floor but had also sprayed a little onto the tool chest and the East optical table. A few optics got wet on the table. Initial inspection suggests the electronics were spared, with the exception of the "broken" spectrum analyzer that was on the floor.

Facilities came in and cleaned up the water. A small amount got into QIL but stayed near the door, as the lab floor slopes up from the door area. They fixed the pipe and were looking into whether there was a blockage causing this problem. PMA was notified and John Denhart is coordinating follow-up.

Triage effort: given the AC was still active, John and I strung a temporary tarp across the two tables to block any spray.

Quote:

11:29AM - Lab has flooded again this morning. I'm calling PMA. Looks to be the same issue as before.

Quote:

 

Koji: QIL/TCS entrance flooding. Check your lab

Anchal: Can someone take a look at CTN too?

Koji: TCS needs more people @aidan

Koji: CTN ok

Aidan: On my way

Shruti: Cryo seems fine

Aidan: There was a leak in a pipe in the wall of B265A. It was coming from the building air conditioner condensation overflow. Facilities has fixed the pipe and is working on clean-up

 

 

250 | Wed Apr 27 11:41:53 2022 | Aidan | Lab Infrastructure | General | Lab clean-up to get ready for PD testing in old TCS Lab

We (Aidan, Koji, Radhika, Aaron) partially tidied up the TCS Lab. The front table is clean and ready to receive PD testing optics, electronics and vacuum hardware. We moved all electronics units (oscilloscopes, power supplies, etc.) to the rack in the NE corner of the lab. The back table was partially tidied up. We still need to schedule cleaning of the remaining tables in the lab, and also an inventory and disposal of obsolete equipment in all the cupboards.

 

88 | Wed Aug 4 09:57:38 2010 | Aidan, James | Computing | Hartmann sensor | RMS measurements with Hartmann sensor

[INCOMPLETE ENTRY]

We set up the Hartmann sensor and illuminated it with the output from the fiber-coupled SLED placed about 1m away. The whole arrangement was covered with a box to block out ambient light. The exposure time on the Hartmann sensor was adjusted so that the maximum number of counts in a pixel was about 95% of the saturation level.

We recorded a set of 5000 images to file and analyzed them using the Caltech and Adelaide centroiding codes. The results are shown below. Basically, we see the same deviation from ideal improvement that is observed at Adelaide.

142 | Mon Apr 25 16:28:27 2011 | Aidan, Joe | Computing | Network architecture | Fixed problem network drive fb1:/cvs on Ubuntu & CentOS machines

With Joe's help we fixed the failure of princess_sparkle to mount the fb1:/cvs directory when relying on /etc/fstab.

First we changed the mounting options in fstab to the following:

fb1:/cvs        /cvs            nfs     rw,bg,soft        1 1

We then got the following error when trying it directly from the command line:

controls@princess_sparkle:~$ sudo mount /cvs
[sudo] password for controls:
mount: wrong fs type, bad option, bad superblock on fb1:/cvs,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Some quick Google searches suggested installing nfs-common, so we ran sudo apt-get install nfs-common and that seemed to do the trick.

CentOS

For the CentOS machines, the following was done:

sudo mkdir /cvs

and then the same mounting configuration was added to /etc/fstab
 

Additionally, all three machines now have a /users symbolic link to /cvs/users

251 | Wed May 11 19:46:21 2022 | Aidan, Radhika, Jordan | Lab Infrastructure | Vacuum chamber | Leak cleaning on IR labs vacuum chamber - suspects

[Aidan, Jordan, Radhika]

Radhika and Jordan identified some particulates (hair and flecks of foil) on the O-ring on the IR Labs dewar. Additionally, we saw a scratch in the O-ring groove and a nick in the metal at the base of the dewar where it meets the O-ring. All were in the leaky vicinity previously identified by the He leak testing.

We set up a cradle to hold the dewar while we are working on it. It still needs vertical supports.

R & J replaced the O-ring with a new one with Krytox applied.

157 | Tue Jun 5 17:25:43 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/5/12 Daily Summary

- Had a meeting to talk about the basics of LIGO (esp. TCS) and discuss the project

- Created COMSOL model for the test mass with incident Gaussian beam.

- Added a ring heater to the previous file

- Set up SVN for the COMSOL repository

158 | Wed Jun 6 16:54:09 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/6/12 Daily Summary

- Got access to and started working with SIS on Rigel1

- Fixed SVN issues

- Refined COMSOL model parameters and worked on a better way to implement the heating ring to get the astigmatic heating pattern.

160 | Thu Jun 7 16:50:16 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/7/12 Daily Summary

- Created a COMSOL model with thermal deformations

- Added non-symmetrical heating to cause astigmatism

- Worked on a method to compute the optical path length changes in COMSOL

162 | Fri Jun 8 16:36:47 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/8/12 Daily Summary

- Tried to fix COMSOL error using the (ts) module, ended up emailing support as the issue is new in 4.3

- Managed to get a symmetric geometric distortion by fixing the x and y movements of the mirror to be zero (need to look for a better way to do this as this may be unphysical)

- Worked on getting the COMSOL data into SIS, need to look through the SIS specs to find out how we should be doing this (current method isn't working well)

 

164 | Mon Jun 11 17:11:01 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/11/12 Daily Summary

- Fixed the (ts) model, got strange results that indicate that the antisymmetric heating mode is much more prominent than previously thought

- Managed to get COMSOL data through matlab and into SIS

 

166 | Wed Jun 13 16:36:14 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/12 and 6/13 Daily Summary

- Realized that the strange deformations that we were seeing only occur on the face nearest the ring heater, and not on the face we are worried about (the HR face)

- Read papers by Morrison et al. and Kogelnik to get a better understanding of the mathematics and operations of the optical cavity modeled in SIS

- Read some of the SIS manual to better understand the program and the physics that it was using (COMSOL licenses were full)

168 | Thu Jun 14 16:51:03 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/14/12 Daily Summary

- Plugged the output of the model with uniform heating into SIS using both modification of the radius of curvature, and direct importation of deflection data

- Generated a graph for asymmetric heating and did the same

- Aligned axes in model to better match with the axes in MATLAB and SIS so that the extrema in deflections lie along x and y (not yet implemented in the data below)

169 | Mon Jun 18 16:30:36 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/18/12 Daily Summary

- Verified that the SIS output does satisfy the equations for Gaussian beam propagation

- Investigated how changing the number of data points going into SIS changes the output, as well as how changes in the astigmatic heating affect the output

     + The results are very dependent on the number of data points (changes of a similar order to those from changing the heating)

     + Holding the number of data points the same, more asymmetric heating tends to put more power in the H(2,0) mode and less in the H(0,2)

 

171 | Tue Jun 19 16:24:52 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/19/12 Daily Summary

- Did more modeling for different levels of heating and different mesh densities for the SIS input.

- Lots of orientation stuff

- Started on progress report.

172 | Wed Jun 20 16:44:58 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/20/12 Daily Summary

- Attended a lot of meetings (Safety, LIGO Orientation)

- Finished draft of week 3 report (images attached)

 

174 | Thu Jun 21 16:54:45 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/21/12 Daily Summary

- Paper edits and more data generation for the paper (lower resolution grid data)

- Attended a talk on LIGO

 

177 | Wed Jun 27 16:43:56 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/27/12 Daily Summary

Plan for building the model

- Find the fields that would be incident on the beam splitter from each arm (This is done already)

- Propagate these through until they get to the OMC using the TELESCOPE function in SIS

- Combine the fields incident on the OMC in MATLAB and minimize the power to get the input field for the OMC (Most of this is done, just waiting to figure out what kind of format we need to use it as an SIS input)

- Model the OMC as an FP cavity in SIS

    + Need to think about how to align the cavity in a sensible way in SIS (need to find out more about how they actually do it)

- Pick off the fields from both ends of the OMC-FP cavity for analysis

- Add thermal effects to one of the arms and see how that changes the fields, specifically how the signal to noise ratio changes

178 | Thu Jun 28 16:27:37 2012 | Alex Mauney | Misc | aLIGO Modeling | 6/28/12 Daily Summary

- Finished the MATLAB code that both combines the two fields and simulates the adjustment of the beamsplitter to minimize the power out (with a small offset); a rough sketch of this step appears at the end of this entry.

- Added the signal recycling telescope to the SIS code that generates the fields

To Do: Make the OMC cavity in SIS
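
For reference, a minimal Python sketch of the field-combination step described above (the actual code is MATLAB and is not reproduced here); the field file names, array shapes and the size of the offset are assumptions:

import numpy as np

# Hypothetical complex field maps from the two arms, sampled on the same grid
# (in the real workflow these come from the SIS TELESCOPE outputs).
E_x = np.load('field_x_arm.npy')
E_y = np.load('field_y_arm.npy')

def dark_port_power(phi):
    # Power out of the antisymmetric (dark) port for a beamsplitter phase phi.
    E_out = (E_x - E_y * np.exp(1j * phi)) / np.sqrt(2.0)
    return np.sum(np.abs(E_out) ** 2)

# Scan the beamsplitter tuning, keep the phase that minimizes the output power,
# then add a small offset as described in the entry.
phis = np.linspace(-np.pi, np.pi, 2001)
phi_min = phis[np.argmin([dark_port_power(p) for p in phis])]
offset = 0.01  # radians; placeholder for the "small offset"
E_omc_input = (E_x - E_y * np.exp(1j * (phi_min + offset))) / np.sqrt(2.0)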

 

180 | Mon Jul 9 16:54:17 2012 | Alex Mauney | Misc | aLIGO Modeling | 7/9/12 Summary

Made a COMSOL model that can include CO2 laser heating, self heating, and ring heating

Figured out how to run SIS out of a script and set up commands to run the two SIS stages of the model

215 | Tue Jul 10 17:49:13 2018 | Aria Chaderjian | Laser | General | July 10, 2018

Went down to the lab and showed Rana the setup. He's fine with me being down there as long as I let someone know. He also recommended using an adjustable mount (three screws) for the test mirror instead of the mount with a top bolt and two nubs on the bottom - he thinks three screws as constraints for the silica will be easier to model (and give more symmetric constraints).

Mounted the f=8" lens (used a 2" pedestal) and placed it on the table so the image fit well on the CCD and so a sharp object in front of the lens resulted in a sharp image. The beam was clipping the f=4" lens (between gold mirror and test mirror) so I spent time moving that gold mirror and the f=4" lens around. I'll still need to finish up that setup.

 

216 | Thu Jul 12 18:48:21 2018 | Aria Chaderjian | Laser | General | July 12, 2018

The beam reflecting off the test mirror was clipping the lens between gold mirror and test mirror, so I reconfigured some of the optics, unfortunately resulting in a larger angle of incidence.

From the test mirror, the beam size increases much too rapidly to fit onto the 2-inch diameter f=8" lens that was meant to resize the beam for the CCD of the HWS. It seems that the f=8" lens can go about 6 inches from the test mirror, and an f ~ 2.3" (60 mm) lens can go about 2 inches in front of the CCD to give the appropriate beam size. However, the image doesn't seem very sharp.

The beam is also not hitting the CCD currently because of the increase in angle of incidence on the test mirror and limitations of the box. I'd like to move the HWS closer to the SLED (and will then have to move the SLED as well).

217 | Fri Jul 13 16:42:50 2018 | Aria Chaderjian | Laser | General | July 13, 2018

The table is set up. The HWS and SLED were moved slightly, and a minimal angle between the test mirror and HWS was achieved.

There are two possible locations for the f=60mm lens that will achieve appropriate magnification onto the HWS: 64cm or 50 cm from the f=200mm lens. 

At 64cm away, approximately 79000 saturated pixels and 1054 average value.

At 50cm away, approximately 22010 saturated pixels and 1076 average value.

Currently the setup is at 64cm. Could afford to be more magnified, so might want to move the f=60mm lens around. Also, if we're going to need to be able to access the HWS (i.e. to screw on the array) we might want to move to the 50cm location.

218 | Mon Jul 23 10:04:19 2018 | Aria Chaderjian | Laser | General | July 20, 2018

With Jon's help, I changed the setup to include a mode-matching telescope built from the f=60mm (1 inch diameter) lens and the f=100mm lens. These lenses are located after the last gold mirror and before the test optic. The height of the beam was also adjusted so that it is more centered on these lenses. Note: these two lenses cannot be much further apart from each other than they currently are, or the beam will be too large for the f=100mm lens.

We considered different possible mounts to use for the test optic, and decided to move it to a mount where there is less contact. The test optic was also moved closer to the HWS to achieve appropriate beamsize on the optic coming from the mode-matching telescope.

The f=200 lens is now approximately 2/3 of the distance from the test optic to the HWS, resulting in an appropriately sized beam at the HWS.

Current was also turned down to achieve 0 saturated pixels.

219 | Tue Jul 24 16:52:44 2018 | Aria Chaderjian | Laser | General | July 23, 2018 and July 24, 2018

Attached the grid array of the HWS.

Applied voltage (5V, 7V, 9.9V, 14V) to the heater pad and took measurements of T and spherical power (aka defocus).

The adhesive of the temperature sensor isn't very sticky. The first time I applied it, it peeled off (the second time it partially peeled off). We want to put it on the side of the Al if possible.

Bonded a mirror (thickness ~6 mm) to aluminum disk (thickness ~5 mm) and it's still curing.

220 | Fri Aug 3 15:46:12 2018 | Aria Chaderjian | Laser | General | August 3, 2018

To the best of my ability, calculated the magnification of the plane of the test optic relative to the HWS (2.3) and input this value.

Increased the temperature slightly and saved data points of defocus to txt files when temperature leveled out. This was a slow process, as it takes a while for things to level out. I only got up to about 28.5C, and will need to continue this process.

I also plotted the best-fit defocus for each temperature from COMSOL (Temperature vs. Defocus), and looking at values from HWS it seems that we're off by a normalization factor of approx. 4.

110 | Thu Feb 24 10:23:31 2011 | Christopher Guido | Laser | Laser | LTG initial noise

Cheryl Vorvick, Chris Guido, Phil Willems

Attached is a PDF with some initial noise testing. There are 5 spectrum plots (not including the PreAmp spectrum) of the laser. The first two are with V_DC around 100 mV, and the other three are with V_DC around 200 mV. (As measured with the 100X gain preamplifier, so ideally 1 and 2 mV actual.) We did one spectrum (at each power level) with no attempt at noise reduction and one spectrum with the lights off and a makeshift tent to reduce air flow. The 5th plot is at 200 mV with the tent and the PZT on. (The other 4 have the PZT off.)

 

The second plot is just the spectra divided by their respective V_DC to get an idea of the RIN.
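
For illustration only, a rough Python sketch of that normalization (the real spectra came from the spectrum analyzer; the file name, sample rate and use of Welch's method here are assumptions):

import numpy as np
from scipy.signal import welch

fs = 16384.0                                  # Hz, assumed sample rate
v = np.loadtxt('ltg_noise_timeseries.txt')    # hypothetical photodiode time series (V, after 100X preamp)

f, psd = welch(v, fs=fs, nperseg=2**14)       # V^2/Hz
asd = np.sqrt(psd)                            # V/sqrt(Hz)
v_dc = np.mean(v)                             # the measured V_DC (~100 mV or ~200 mV)
rin = asd / v_dc                              # relative intensity noise, 1/sqrt(Hz)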

232 | Mon Jul 22 18:44:53 2019 | Edita Bytyqi | Things to Buy | General | Need to Order Gloves

Small/Medium size gloves need to be ordered in order to handle the optics carefully.

233 | Mon Jul 22 18:46:23 2019 | Edita Bytyqi | Lab Infrastructure | | Laser-Lens-HWS Setup

Today, I set up a system consisting of the 520 nm laser, a 2'' mirror and two lenses of focal lengths f1 = 40 cm and f2 = 20 cm. The goal was to collimate the beam coming from the laser, so it goes parallel through the test optic at a radius of ~2.5 cm, and then focus it to a radius of ~1.2 cm to fit the CCD dimensions of the HWS. The mirror was placed about 1 cm from the laser and the first lens is set up at a distance ~f1 = 40 cm from the mirror. The test optic is placed between the two lenses and the second lens is placed about 10 cm from the CCD. The distance between the two lenses isn't critical and could change in the future. The lenses and mirrors are all labeled.

I measured the approximate angle of divergence (0.06 rad) of the laser by taking the beam diameter at different positions along the propagation axis. This allowed the ABCD matrix calculations to be finalized and the focal lengths of the lenses to be chosen accordingly.
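
A minimal sketch of this kind of calculation (a divergence fit plus an ABCD ray check of the two-lens system); the measurement positions, beam diameters and lens spacings below are placeholders, not the as-built values:

import numpy as np

# Divergence from beam diameters measured at several positions along the axis.
z = np.array([0.0, 0.10, 0.20, 0.30])        # m, measurement positions (placeholders)
d = np.array([0.004, 0.016, 0.028, 0.040])   # m, measured beam diameters (placeholders)
theta = np.polyfit(z, d / 2.0, 1)[0]         # half-angle divergence; ~0.06 rad with these numbers

# ABCD matrices for the f1 = 40 cm collimating lens and f2 = 20 cm focusing lens.
def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def space(L):
    return np.array([[1.0, L], [0.0, 1.0]])

f1, f2 = 0.40, 0.20                          # m, as in the entry
L1, L12, L2 = 0.40, 0.50, 0.20               # m, assumed spacings: source->f1, f1->f2, f2->CCD
M = space(L2) @ lens(f2) @ space(L12) @ lens(f1) @ space(L1)

# Trace a ray leaving the source on-axis at the measured divergence angle.
r_out, th_out = M @ np.array([0.0, theta])
print(f"divergence = {theta:.3f} rad, ray height at the CCD = {r_out * 100:.2f} cm")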

In order to have more space in the box, I moved everything that was not necessary off to the side.

234 | Wed Jul 24 16:25:13 2019 | Edita Bytyqi | Lab Infrastructure | Optics | Updated 2-lens setup

The previous 2-lens setup focused the beam to a tight spot, however due to the divergence angle of the laser beam, a significant amount of power was not being captured by the first lens at a distance of 40 cm from the source. The divergence angle seems to be bigger than 0.06 rad by a factor of 2, so an f = 20 cm lens was used to collimate the beam and an f = 30 cm lens was used to focus it. A mirror was used to reflect the beam, so we obtain steering control. Additionally, the focusing lens was placed on a small 1-axis stage in order to control the distance of the lens from the CCD, providing control over the focused beam size.

Note: The 30 cm lens was cleaned with methanol, however it still has some residue on the surface. The beam imaged onto the Hartmann sensor looks good, however the lens will be cleaned with a different solvent or replaced by a different 30 cm lens. The 3 lenses at the edge of the box will stay inside in order to prevent contamination, however they will not be used in the design.

236 | Mon Jul 29 18:53:16 2019 | Edita Bytyqi | Lab Infrastructure | Optics | Mounted Reflector and Heater

Since we set up the 2-lens system focusing the laser beam onto the CCD, the next step was to mount the spherical reflector (31 mm wide) and the heater (~3 mm diameter). I used a small 3-axis stage to mount the heater, providing 3 degrees of freedom that allow us to manipulate the height of the heater and its position with respect to the reflector (left-right and in-out). The reflector was mounted in such a way that we can control its rotation angle, height and horizontal displacement. The current design is not very sophisticated, as it is just a first test; I will look into different tools in the lab to see if I can use fewer mounts to get the same degrees of freedom.

The new heaters are supposed to be heated using AC. We used a DC power supply and ran ~30V through the wire, however only about ~50 mA of current was running through it. Jon will look into the specs of the new heaters to see if the power supply was the problem.

237 | Thu Aug 1 15:20:39 2019 | Edita Bytyqi | Lab Infrastructure | Optics | Reflector Mount and DC Supply

Yesterday, we were able to take some data using the 120 V DC power supply. The reflectors cut at the focal point and radius were both tested; the semi-circle cut proved to give a better focus, likely because roughly half the heat is lost using the focal-point reflectors. For upcoming tests, the semicircle reflectors will be used. We varied the surface shine by using the dull and reflective side of Al foil, as well as using the machined Al itself. The best result was given by using the more reflective side of Al foil.

Figure 1 shows the steady-state surface deformation profile detected by the HWS. The heaters don't have a uniform distribution along the wire, so more heat is radiated in the center of it, thus more of it is being focused to the center of the test optic. The data needs to be analyzed to determine the radius of the focus. Our rough estimate is about ~1.5 - 2 cm. We cannot collect any more data until we get a new power supply (AC 120 V).

Today, I came up with a new design for mounting the reflectors. I used a big 3-axis stage and a small 4-axis stage. This provides 5 degrees of freedom: 3 translational and 2 rotational, which is what we need for fine-tuning the focus and directing it at different angles incident to the test optic. The only problem with this design is that the 3-axis stage is too tall for the box, so the lid won't close. There is a smaller one available, but I have to figure out a way to increase its height, since the screw size is different from the ones on the available pedestals.

Additionally, Chub used metal-to-metal epoxy to glue a screw to the back of a reflector. I will wait until tomorrow to test it, because it is a slow-acting epoxy. If it works, I have the necessary tools to do the same with the other reflectors. With the current design the reflector will be screwed in where the round screw is on the stage. If it heats up a lot and affects the material of the stages, a small optical post (top of stage) will be used to make up for the absorbed heat.

 

240 | Mon Aug 12 21:15:12 2019 | Edita Bytyqi | Electronics | | Determining heater/reflector focus

I took images of the heat pattern produced by the semi-circle reflector, projected on a piece of paper. I used 108V to drive current through the heater. I tested the reflector without any coating and then with the dull and shiny sides of Al foil. I wasn't able to test the focal-point cut reflector because I had to glue a screw to it with epoxy, which cures overnight. I will do these measurements tomorrow. Figure 2 shows the setup I used to get the data. The shiny side of the Al foil performs better in the IR, so we will use that for the wavefront measurements.

241 | Fri Aug 16 17:05:14 2019 | Edita Bytyqi | Electronics | | FLIR Images of new reflector focusing heat

We got 11 new semi-circle cut reflectors of radius ~3.6 cm. I glued a screw to the back of one reflector using the same epoxy as for the previous reflectors. Due to the bigger ROC of the reflector, a tight focus is achievable at greater distances (~15 cm).

4 | Tue Dec 29 16:05:09 2009 | Frank | Computing | DAQ | booting VME crates from fb1

 http://nodus.ligo.caltech.edu:8080/AdhikariLab/514

253 | Thu Jun 23 10:53:17 2022 | JC | Lab Infrastructure | General | Seat For Photodiode Testing

[JC, Chub, Radhika]

Chub and I ordered a few parts from McMaster in order to build a handrail-like stopper to keep the dewar from falling over. We also cut off the excess 80/20 that was hanging over the table so it would fit. To hold down the support for the dewar, Radhika and I decided to use C-clamps from the EE shop.

255 | Wed Jul 13 15:19:49 2022 | JC | Electronics | General | Desktop Computer

[JC]

The desktop computer is now running Debian Linux

51 | Thu Jun 17 07:40:07 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 1, Getting Started

 For Wednesday, June 16:

I attended the LIGO Orientation and first Introduction to LIGO lecture in the morning. In the afternoon, I ran a few errands (got keys to the office, got some Computer Use Policy documentation done) and toured the lab. I then got Cygwin installed on my laptop along with the proper SSH packages and was successfully able to log in to and interact with the Hartmann computer in the lab through the terminal, from the office. I have started reading relevant portions of Dr. Brooks' thesis and of "Fundamentals of Interferometric Gravitational Wave Detectors" by Saulson.
52 | Thu Jun 17 22:03:51 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 2, Getting Started

For Thursday, June 17:

Today I attended a basic laser safety training orientation, the second Introduction to LIGO lecture, a Summer Research Student Safety Orientation, and an Orientation for Non-Students living on campus (lots of mandatory meetings today). I met with Dr. Willems and Dr. Brooks in the morning and went over some background information regarding the project, then in the afternoon I got an idea of where I should progress from here from talking with Dr. Brooks. I read over the paper "Adaptive thermal compensation of test masses in advanced LIGO" and the LIGO TCS Preliminary Design document, and did some further reading in the Brooks thesis.

I'm making a little bit of progress with accessing the Hartmann lab computer with Xming but got stuck, and hopefully will be able to sort that out in the morning and progress to where I want to be (I wasn't able to get much further than that, since I can't access the Hartmann computer in the lab currently due to laser authorization restrictions). I'm currently able to remotely open an X terminal on the server but wasn't able to figure out how to then be able to log in to the Hartmann computer. I can do it via SSH on that terminal, of course, but am having the same access restrictions that I was getting when I was logging in to the Hartmann computer via SSH directly from my laptop (i.e. I can log in to the Hartmann computer just fine, and access the camera and framegrabber programs, but for the vast majority of the stuff on there, including MATLAB, I don't have permissions for some reason and just get 'access denied'). I'm sure that somebody who actually knows something about this stuff will be able to point out the problem and point me in the right direction fairly quickly (I've never used SSH or the X Window system before, which is why it's taking me quite a while to do this, but it's a great learning experience so far at least).

Goals for tomorrow: get that all sorted out and learn how to be able to fully access the Hartmann computer remotely and run MATLAB off of it. Familiarize myself with the camera program. Set the camera into test pattern mode and use the 'take' programs to retrieve images from it. Familiarize myself with the 'take' programs a bit and the various options and settings of them and other framegrabber programs. Get MATLAB running and use fread to import the image data arrays I take with the proper data representation (uint16 for each array entry). Then, set the camera back to recording actual images, take those images from the framegrabber and save them, then import them into MATLAB. I should familiarize myself with the various settings of the camera at this stage, as well.

 

--James

53 | Sat Jun 19 17:31:46 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 3, Initial Image Analysis
For Friday, June 18:
(note that I haven't been working on this stuff all of Saturday or anything, despite posting it now. It was getting late on Friday evening so I opted to just type it up now, instead)

(all matlab files referenced can be found in /EDTpdv/JKmatlab unless otherwise noted)

I finally got Xming up and running on my laptop and had Dr. Brooks edit the permissions of the controls account, so now I can fully access the Hartmann computer remotely (run MATLAB, interact with the framegrabber programs, etc.). I was able to successfully adjust camera settings and take images using 'take', saving them as .raw files. I figured out how to import these .raw files into MATLAB using fopen and display them as grayscale images using the imshow command. I then wrote a program (readimgs.m, as attached) which takes as inputs a base filename and a number of images (n), then automatically loads the first 'n' .raw files located in /EDTpdv/JKimg/ with the given base file name, formatting them properly and saving them as a 1024x1024x(n) matrix.
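
For reference, a rough Python equivalent of that loading step (readimgs.m itself is MATLAB and attached to the original entry); the zero-padded file numbering and native byte order are assumptions:

import os
import numpy as np

def readimgs(basename, n, directory='/EDTpdv/JKimg'):
    # Load the first n .raw frames named <basename>NNNN.raw into a 1024x1024xn array.
    frames = np.zeros((1024, 1024, n), dtype=np.uint16)
    for i in range(n):
        fname = os.path.join(directory, f'{basename}{i:04d}.raw')   # assumed numbering scheme
        raw = np.fromfile(fname, dtype=np.uint16)                   # 16-bit pixels, byte order assumed
        frames[:, :, i] = raw[:1024 * 1024].reshape(1024, 1024)
    return frames

# e.g. imgs = readimgs('hws', 200)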

After trying out the test pattern of the camera, I set the camera into normal operating mode. I took 200 images of the HWS illuminated by the OLED, using the following camera settings:

 
Temperature data from the camera was, unfortunately, not taken, though I now know how to take it.
 
The first of these 200 images is shown below:
 
hws0000.png

As a test exercise in MATLAB and also to analyze the stability of the HWS output, I wrote a series of functions to allow me to find and plot the means and standard deviations of the intensity of each pixel over a series of images. First, knowing that I would need it in following programs in order to use the plot functions on the data, I wrote "ar2vec.m" (as attached), which simply inputs an array and concatenates all of the columns into a single column vector.

Then, I wrote "stdvsmean.m" (as attached), which inputs a 3D array (such as the 1024x1024x(n) array of n image files) and first calculates the standard deviation and mean of this array along the 3rd dimension (leaving, for example, two 1024x1024 arrays, which give the mean and standard deviation of each pixel over the (n) images). It then uses ar2vec to create two column vectors, representing the mean and standard deviation of each pixel. It then plots a scatterplot of the standard deviation of each pixel vs. its mean intensity (with logarithmic axes), along with histograms of the mean intensities and standard deviations of intensities (with logarithmic y-axes).

"imgdevdat.m" (as attached) is simply a master function which combines the previous functions to input image files, format them, analyze them statistically and create plots.

Running this function for the first 20 images gave the following output:

(data from 20 images, over all 1024x1024 pixels)

Note that the background level is not subtracted out in this function, which is apparent from the plots. The logarithmic scatter plot looks pretty linear, as expected, but there are interesting features arising between the intensities of ~120 to ~130 (the obvious spike upward of standard deviation, followed immediately by a large dip downward).

MATLAB gets pretty bogged down trying to plot over a million data points at a time, to the point where it's very difficult to do anything with the plots. I therefore wrote the function "minimgstat.m" (as attached), which is very similar to imgdevdat.m except that before doing the analysis and plotting, it reduces the size of the image array to the upper-left NxN square (where N is an additional argument of the function).

Using this function, I did the same analysis of the upper-left 200x200 pixels over all 200 images:

(data from 200 images, over the upper-left 200x200 pixels)

The intensities of the pixels don't go as high this time because the upper portion of the images are dimmer than much of the rest of the image (as is apparent from looking at the image itself, and as I demonstrate further a little bit later on). Do note the change in axis scaling resulting from this when comparing the image. We do, however, see the same behavior in the ~120-128 intensity level region (more pronounced in this plot because of the change in axis scaling).

I was interested in looking at which pixels constituted this band, so I wrote a function "imgbandfind.m" (as attached), which inputs a 2D array and a minimum and maximum range value, goes through the image array pixel-by-pixel, determines which pixels are within the range, and then constructs an RGB image which displays pixels within the range as red and pixels outside the range as black.
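
A minimal Python sketch of that masking step, under the same array conventions as above:

import numpy as np

def img_band_find(img, lo, hi):
    # RGB image with pixels inside [lo, hi] shown red and everything else black.
    mask = (img >= lo) & (img <= hi)
    rgb = np.zeros(img.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = 255 * mask   # red channel only
    return rgb

# e.g. highlighted = img_band_find(imgs[:, :, 0], 120, 129)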

I inputted the first image in the series into this function along with the range of 120-129, and got the following:

(pixels in intensity range of 120-129 in first image)

So the pixels in this range appear to be the pixels on the outskirts of each wavefront dot near the vertical center of the image. The outer circles of the dots on the lower and upper portions of the image do not appear, perhaps because the top of the image is dimmer and the bottom of the image is brighter, and thus these outskirt pixels would then have lower and higher values, respectively. I plan to investigate this and why it happens (what causes this 'flickering' and if it is a problem at all) further.

The fact that the background levels are lower nearer to the upper portion of the image is demonstrated in the next image, which shows all intensity levels less than 70:
(pixels in intensity range of 0-70 in first image)

So the background levels appear to be nonuniform across the CCD, as are the intensities of each dot. Again, I plan to investigate this further. (Could it be something to do with stray light hitting the CCD nonuniformly, maybe? I haven't thought through all the possibilities.)
 
The OLED has been turned off, so my next immediate step will be to investigate the background levels further by analyzing the images when not illuminated by the OLED.
 
In other news: today I also attended the third Intro to LIGO lecture, a talk on Artificial Neural Networks and their applications to automated classification of stellar spectra, and the 40m Journal Club on the birth rates of neutron stars (though I didn't think to learn how to access the wiki until a few hours right before, and then didn't actually read the paper. I fully intend to read the paper for next week before the meeting).
 
54 | Tue Jun 22 00:21:47 2010 | James K | Misc | Hartmann sensor | Surf Log -- Day 4, Hartmann Spot Flickering Investigation

 I started out the day by taking some images from the CCD with the OLED switched off, to just look at the pattern when it's dark. The images looked like this:

 
Taken with camera settings:

The statistical analysis of them using the functions from Friday gave the following result:

 
At first glance, the distribution looks pretty Poissonian, as expected. There are a few scattered pixels registering a little brighter, but that's perhaps not so terribly unusual, given the relatively tiny spread of intensities with even the most extreme outliers. I won't say for certain whether or not there might be something unexpected at play, here, but I don't notice anything as unusual as the standard deviation 'spike' seen from intensities 120-129 as observed in the log from yesterday.
 
Speaking of that spike, the rest of the day was spent trying to investigate it a little more. In order to accomplish this, I wrote the following functions (all attached):
 
-spotfind.m -- inputs a 3D array of several Hartmann images as well as a starting pixel and a threshold intensity level. It analyzes the first image, scanning from the starting pixel until it finds a spot (with an edge determined by the threshold level), then finds a box of pixels which completely surrounds the spot and shrinks the matrix down to this size, localizing the image to a single spot
 
-singspotcent.m -- inputs the image array output from spotfind, subtracts an estimate of the background, then uses the centroiding algorithm sum(x*P^2)/sum(P^2) to find the centroid (where x is the coordinate and P is the intensity level), then outputs the centroid location (a Python sketch of this step appears after this list)
 
-hemiadd.m -- inputs the image from spotfind and the centroid from singspotcent, subtracts an estimate of the background, then finds the sum total intensity in the top half of the image above the centroid, the bottom half, the left half and the right half, outputs these values as n-component vectors for an n-image input, subtracts from each vector its mean and then plots the deviations in intensity from the mean in each half of the image as a function of time
 
-edgeadd.m -- similar to hemiadd, except that rather than adding up all pixels on one half of the image, it inputs a threshold, determines how far to the right of the centroid that the spot falls past this threshold and uses it as a radial length, then finds the sum of the intensities of a bar of 3 pixels on this "edge" at the radial length away from the centroid.
 
-spotfft.m -- performs a fast fourier transform on the outputs from edgeadd, outputting the frequency spectrum at which the intensity of these edge pixels oscillate, then plotting these for each of the four edge vectors. see an example output below.
 
--halfspot_fluc.m and halfspot_edgefluc.m -- master functions which combine and automate the previous functions
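
As referenced above, a minimal Python sketch of the background-subtract-and-centroid step using the sum(x*P^2)/sum(P^2) weighting; using the median of the cropped frame as the background estimate is an assumption:

import numpy as np

def single_spot_centroid(spot):
    # spot: 2D array cropped around one Hartmann spot (as produced by spotfind).
    P = spot.astype(float) - np.median(spot)   # crude background estimate (assumption)
    P[P < 0] = 0.0
    y, x = np.indices(P.shape)
    w = P ** 2                                  # intensity-squared weighting
    return np.sum(x * w) / np.sum(w), np.sum(y * w) / np.sum(w)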
 
Dr. Brooks has suggested that the observed flickering might be an effect of the finite thickness of the Hartmann plate. The OLED can be treated as a point source and thus approximated as emitting a spherical wavefront, so the light from it will hit the edges of the plate apertures at an angle and be scattered onto the CCD. If the plate vibrates (which it certainly must to some degree), the wavefront will hit each edge at a different angle as the edge is temporarily displaced, and this light will hit the CCD at a different point, causing the flickering (which is, after all, observed to occur near the edge of the spot). This effect certainly must cause some level of noise, but whether it's the culprit for our 'flickering' spike in the standard deviation remains to be seen.

Here is the frequency spectrum of the edge intensity sums for two separate spots, found over 128 images:
Intensity Sum Amplitude Spectrum of Edge Fluctuations, 128 images, spot search point (100,110), threshold level 110

128 images, spot search point (100,100), threshold level 129
At first glance, I am not able to conclude anything from this data. I should investigate this further.

A few things to note, to myself and others:
--I still should construct a Bode plot from this data and see if I can deduce anything useful from it
--I should think about whether or not my algorithms are good for detecting what I want to look at. Is looking at a 3 pixel vertical or horizontal 'bar' on the edge good for determining what could possibly be a more spherical phenomenon? Are there any other things I need to consider? How will the settings of the camera affect these images and thus the results of these functions?
--Am I forgetting any of the subtleties of FFTs? I've confirmed that I am measuring the amplitude spectrum by looking at reference sine waves, but I should be careful since I haven't worked with these in a while
 
It's late (I haven't been working on this all night, but I haven't gotten the chance to type this up until now), so thoughts on this problem will continue tomorrow morning..

55 | Tue Jun 22 22:30:24 2010 | James K | Misc | Hartmann sensor | SURF Log -- Day 5, more Hartmann image preliminary analysis

 Today I spoke with Dr. Brooks and got a rough outline of what my experiment for the next few weeks will entail. I'll be getting more of the details and getting started a bit more, tomorrow, but today I had a more thorough look around the Hartmann lab and we set up a few things on the optical table. The OLED is now focused through a microscope to keep the beam from diverging quite as much before it hits the sensor, and the beam is roughly aligned to shine onto the Hartmann plate. The Hartmann images currently look like this (on a color scale of intensity):

hws.png

Where this image was taken with the camera set to exposure time 650 microseconds, frequency 58Hz. The visible 'streaks' on the image are believed to possibly be an artifact of the camera's data acquisition process.

I tested to see whether the same 'flickering' is present in images under this setup.

For frequency kept at 58Hz, the following statistics were found from a 200x200 pixel box within a series of 10 images taken at different exposure times. Note that the range on the plot has been reduced to the region near the relevant feature, and that this range is not being changed from image to image:

750 microseconds:

750us.png

1000 microseconds:

1000us.png

1500 microseconds:

1500us.png

2000 microseconds:

2000us.png

3000 microseconds:

3000us.png

4000 microseconds:

4000us.png

5000 microseconds. Note that the background level is approaching the level of the feature:

5000us.png

6000 microseconds. Note that the axis setup is not restricted to the same region, and that the background level exceeds the level range of the feature. This demonstrates that the 'feature' disappears from the plot when the plot does not include the specific range of ~115-130:

8000us.png

 

When images containing the feature intensities are averaged over a greater number of images, the plot takes on the following appearance (for a 200x200 box within a series of 100 images, 3000us exposure time):

hws3k.png

This pattern changes a bit when averaged over more images. It looks as though this could, perhaps, just be the result of the decrease in the standard deviation of the standard deviations in each pixel resulting from the increased number of images being considered for each pixel (that is, the line being less 'spread out' in the y-axis direction). 

 

To demonstrate that frequency doesn't have any effect, I got the following plots from images where I set the camera to different frequencies then set the exposure time to 3000us (I wouldn't expect this to have any effect, given the previous images, but these appear to demonstrate that the 'feature' does not vary with time):

 

Set to 30Hz:

f30Hz.png

Set to 1Hz:

f1Hz.png

 

To make sure that something weird wasn't going on with my algorithm, I did the following: I constructed a 10-component vector of random numbers. Then, I concatenated that vector besides itself ten times. Then, I concatenated that vector into a 3D array by scaling the 2D vector with ten different integer multiples, ensuring that the standard deviations of each row would be integer multiples of each other when the standard deviation was found along the direction of the random change (I chose the integer multiples to ensure that some of these values would fall within the range of  115-130). Thus, if my function wasn't making any weird mistakes, I would end up with a linear plot of standard deviation vs. mean, with a slope of 1. When the array was inputted into the function with which the previous plots were found, the output plot was indeed observed to be linear, and a least squares regression of the mean/deviation data confirmed that the slope was exactly 1 and the intercept exactly 0. So I'm pretty certain that the feature observed in these plots is not any sort of 'artifact' of the algorithm used to analyze the data (and all the functions are pretty simple, so I wouldn't expect it to be, but it doesn't hurt to double-check).

 

I would conjecture from all of this that the observed feature in the plots is the result of some property of the CCD array or other element of the camera. It does not appear to have any dependence on exposure time or to scale with the relative overall intensity of the plots, and, rather, seems to depend on the actual digital number read out by the camera. This would suggest to me, at first glance, that the behavior is not the result of a physical process having to do with the wavefront.

 

EDIT: Some late-night conjecturing: Consider the following,

I don't know how the specific analog-to-digital conversion onboard the camera works, but I got to thinking about ADCs. I assume, perhaps incorrectly, that it works on roughly the same idea as the Flash ADCs that I dealt with back in my Digital Electronics class -- that is, I don't know if it has the same structure (a linear resistor ladder hooked up to comparators which compare the ladder voltages to the analog input, then uses some comb logic circuit which inputs the comparator outputs and outputs a digital level) but I assume that it must, at some level, be comparing the analog input to a number of different voltage thresholds, considering the highest 'threshold' that the analog input exceeds, then outputting the digital level corresponding to that particular threshold voltage.

Now, consider if there was a problem with such an ADC such that one of the threshold voltages was either unstable or otherwise different than the desired value (for a Flash ADC, perhaps this could result from a problem with the comparator connected to that threshold level, for example). Say, for example, that the threshold voltage corresponding to the 128th level was too low. In that case, an analog input voltage which should be placed into the 127th level could, perhaps, trip the comparator for the 128th level, and the digital output would read 128 even when the analog input should have corresponded to 127.

So if such an ADC was reading a voltage (with some noise) near that threshold, what would happen? Say that the analog voltage corresponded to 126 and had noise equivalent to one digital level. It should, then, give readings of 125, 126 or 127. However, if the voltage threshold for the 128th level was off, it would bounce between 125, 126, 127 and 128 -- that is, it would appear to have a larger standard deviation than the analog voltage actually possessed.

Similarly, consider an analog input voltage corresponding to 128 with noise equivalent to one digital level. It should read out 127, 128 and 129, but with the lower-than-desired threshold for 128 it would perhaps read out only 128 and 129 -- that is, the standard deviation of the digital signal would be lower for points just above 128.

This is very similar to the sort of behavior that we're seeing!
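
A quick numerical check of this idea (purely a toy model of a generic ADC, not the camera's actual converter): quantize Gaussian-noisy analog levels with the code-128 threshold deliberately set too low, and compare the apparent standard deviation for means just below and just above 128.

import numpy as np

rng = np.random.default_rng(0)
thresholds = np.arange(4096) - 0.5     # transition point into each code (toy model)
thresholds[128] -= 0.6                 # code-128 threshold set too low (the hypothesis)

def adc(analog):
    # Digitize by counting how many transition points each sample exceeds.
    return np.searchsorted(thresholds, analog, side='right') - 1

for true_level in (126.0, 128.0):
    readings = adc(true_level + rng.normal(0.0, 1.0, 100000))
    print(true_level, readings.std())

# With the skewed threshold, a mean just below 128 reads out with an inflated standard
# deviation and a mean just above 128 with a reduced one -- the shape seen in the plots.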

Thinking about this further, I reasoned that if this was what the ADC in the camera was doing, then if we looked in the image arrays for instances of the digital levels 127 and 128, we would see too few instances of 127 and too many instances of 128 -- several of the analog levels which should correspond to 127 would be 'misread' as 128. So I went back to MATLAB and wrote a function to look through a 1024x1024xN array of N images and, for every integer between an inputted minimum level and maximum level, find the number of instances of that level in the images. Inputting an array of 20 Hartmann sensor images, along with minimum and maximum levels of 50 and 200, gave the following:

levelinstances.png

Look at that huge spike at 128! This is more complex behavior than my simple idea, which would result in 127 having "too few" values and 128 having "too many", but to me, this seems consistent with the hypothesis that the voltage threshold for the 128th digital level is too low and is thus giving false output readings of 128, and is also reducing the number of correct outputs for values just below 128. And assuming that I'm thinking about the workings of the ADC correctly, this is consistent with an increase in the standard deviation in the digital level for values with a mean just below 128 and a lower standard deviation for values with a mean just above 128, which is what we observe.
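
For completeness, a Python sketch of the level-counting function described above (the original was a MATLAB function; here the frames are assumed to already be loaded into a 1024x1024xN integer array):

import numpy as np

def level_instances(imgs, lo=50, hi=200):
    # Count how many pixels across all frames take each integer value in [lo, hi].
    levels = np.arange(lo, hi + 1)
    counts = np.array([(imgs == lvl).sum() for lvl in levels])
    return levels, counts

# levels, counts = level_instances(imgs, 50, 200)
# A mis-set code-128 threshold would show up as an excess of counts at 128.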

 

This is my current hypothesis for why we're seeing that feature in the plots. Let me know what you think, and if that seems reasonable.

 
