  40m Log, Page 67 of 348
ID   Date   Author   Type   Category   Subject
  17374   Fri Dec 30 11:38:45 2022 PacoSummaryCalibrationALS Calibration errors -- single arm actuation

Here are my thoughts on calibration errors. This applies to the single arm actuation calibration, but may easily be extended to calibrate the DARM residual noise for example.

According to the math in this post, there are four parameters whose estimates determine the total calibration uncertainty: arm length, wavelength, loop gain, and beatnote fluctuation. Below is a breakdown of how each is measured, what the sources of statistical versus systematic error are, roughly how large each relative contribution is, and how we may improve on them.

Arm length
  Current estimate: FSR scan using ALS, referenced to the POX/POY (11 MHz) sideband
  Statistical error: Limited by scan range (e.g. DFD range) and resolution
  Systematic error: Limited by frequency reference offsets (Marconi) and residual POS-to-PIT coupling
  Improvements: More FSRs per scan at the best possible resolution

Wavelength
  Current estimate: NPRO specification
  Statistical error: None (from specification)
  Systematic error: NPRO half tuning range
  Improvements: Iodine spectroscopy

ALS beat fluctuation
  Current estimate: DFD + phase tracker
  Statistical error: Shot-noise-limited measurement(?)
  Systematic error: Flicker noise (electronics, other)
  Improvements: Lower ALS residual frequency noise

AUX loop gain
  Current estimate: Swept sine
  Statistical error: Bendat & Piersol Table 9.6 (depends on coherence)
  Systematic error: Swept-sine bias and servo drift (thermal, flicker)
  Improvements: Higher AUX servo gain

Arm length

Statistical

The arm length has been estimated before by locking the arm with the ALS beat, scanning the arm length, and looking at the IR resonances. From the statistical standpoint, the limit seems to be the number of measurable FSRs. Using these numbers, the statistical uncertainty comes to ~0.6%. Other attempts claim to have improved on this by more than an order of magnitude, giving ~0.02%. Simply scanning over more FSRs should improve this with the usual 1/sqrt(N) reduction of statistical error with the number of measurements.
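To put numbers on the 1/sqrt(N) scaling, here is a minimal sketch (Python/numpy) of how the arm length and its statistical error follow from a set of measured FSR values; the FSR list below is a made-up placeholder, not real scan data:

import numpy as np

c = 299792458.0  # speed of light [m/s]

# Placeholder FSR values [Hz] from one ALS scan (hypothetical numbers)
fsr = np.array([3.966e6, 3.968e6, 3.965e6, 3.967e6])

fsr_mean = fsr.mean()
fsr_err = fsr.std(ddof=1) / np.sqrt(len(fsr))   # standard error of the mean, ~1/sqrt(N)

L = c / (2 * fsr_mean)            # arm length [m]
rel_err = fsr_err / fsr_mean      # relative statistical error on L
print(f"L = {L:.3f} m +/- {100 * rel_err:.3f} % (statistical)")

Scanning over more FSRs just makes the list longer and shrinks fsr_err accordingly.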

Systematic

Systematic error comes from the frequency reference used in this measurement (the Marconi 11 MHz sideband oscillator) and from nonlinearities in the mostly linear scan (e.g. POS-to-PIT coupling). I think it's safe to neglect frequency offsets > 1 Hz because of the Rb clock reference and the Marconi's own calibration. I'm not sure about the magnitude of the POS-to-PIT coupling and whether the F2A filters are helping here, but offset scan nonlinearities would distort the FSR nonuniformly, and this error may have sneaked into the statistical estimate above. From the posts referenced above, the scan result seems extremely linear, but repeating the measurement and plotting the residuals would give an accurate estimate of the nonlinearity. I think either of the systematic errors discussed here should be at or below the ppm (0.0001%) level, but this should be confirmed.

Wavelength

Statistical

If we use the NPRO specification, the Mephisto's wavelength is 1064 nm and there is no statistical error.

If on the other hand we do iodine absorption spectroscopy, we may be able to see ~4 lines throughout the 30 GHz tuning range of the NPRO. Fun fact: 30 GHz corresponds to 1 cm^-1 (inverse centimeter). Assuming we can scan our laser with 0.01 cm^-1 resolution, it may be possible to estimate the absolute wavelength roughly 10 times better than any line center. The iodine ATLAS gives strong absorption lines around 532 nm to 0.001 cm^-1, or 0.00001% accuracy. A simple Doppler-broadened absorption measurement should improve this further.

Systematic

If we use the NPRO specification, the systematic error comes from the number of known significant figures: ~0.047%. A slightly better estimate comes from the Prometheus model, whose frequency-doubled output sits near the P 83(33-0) line of iodine at 18787.814 cm^-1. This gives a nominal wavelength of 532.259 nm, or 1064.518 nm for the seed. Because the tuning range is 30 GHz, our systematic error may be +0.03 nm / -0.07 nm about 1064.52 nm. Taking the median of 0.05 nm, the relative systematic error from the Nd:YAG specification is 46.9 ppm = 0.0047%.
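As a cross-check of the numbers above, here is a short sketch (the line wavenumber and 30 GHz tuning range are the ones quoted in the text; taking half the span as a systematic half-width is my assumption):

import numpy as np

c = 299792458.0               # m/s
k = 18787.814e2               # P 83(33-0) iodine line wavenumber [1/m]

lam_532 = 1.0 / k             # ~532.259e-9 m
lam_seed = 2.0 * lam_532      # ~1064.518e-9 m

dlam = lam_seed**2 * 30e9 / c          # wavelength span of the 30 GHz tuning range, ~0.11 nm
rel_sys = 0.5 * dlam / lam_seed        # half the span as a systematic half-width
print(f"{lam_seed*1e9:.3f} nm, systematic ~ {rel_sys*1e6:.0f} ppm ({rel_sys*100:.4f} %)")

This lands at the ~50 ppm level, consistent with the 46.9 ppm quoted above.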

If on the other hand we do iodine spectroscopy, the systematic error will be dominated by residual shifts of the iodine vapor lines, which are negligibly small compared to the Doppler-broadened linewidth. They might add sub-ppm uncertainty to the absolute calibration.

Beat fluctuations

Statistical

The error in estimating beatnote fluctuations is statistically dominated if, for example, our beat detection is shot-noise limited. Other stationary noises with power-law spectra decaying faster than 1/f are subject to the same effect. The Allan deviation discriminates the timescales at which our measurement is dominated by statistical error. Currently our calibration lines have an SNR of 10 to 100. Averaging seems to be limited to ~100 second timescales, so the statistical error on these measurements is ~0.1 to 1.0%.

Systematic

As suggested above, the Allan deviation identifies the timescales over which this measurement is dominated by statistical error. There is a correspondence between noise spectra and Allan deviation, so we should be able to point out which noises contribute to systematic drift in our ALS noise budget.
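A minimal sketch of the Allan deviation estimate (plain numpy, non-overlapping estimator; the 16 Hz rate and the synthetic white-noise record are assumptions, in practice y would be the phase-tracker beat-frequency output):

import numpy as np

def allan_deviation(y, fs, taus):
    # Non-overlapping Allan deviation of a frequency record y sampled at fs [Hz]
    out = []
    for tau in taus:
        m = int(round(tau * fs))              # samples per averaging bin
        if m < 1 or m > len(y) // 2:
            out.append(np.nan)
            continue
        nbins = len(y) // m
        ybar = y[:nbins * m].reshape(nbins, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2)))
    return np.array(out)

fs = 16.0                                        # Hz (assumed monitor rate)
y = np.random.normal(0.0, 1.0, int(fs * 3600))   # placeholder beat-frequency record
taus = np.logspace(0, 2.5, 12)                   # 1 s to ~300 s averaging times
print(np.c_[taus, allan_deviation(y, fs, taus)])

The averaging time where the curve stops falling like 1/sqrt(tau) marks where drift takes over from statistical error.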

Loop gain

Statistical

Invoking Bendat and Piersol, Table 9.6 gives the statistical error of the loop gain estimate in terms of the coherence gamma and the number of averages n_avg. Because of our resonant OL gain filters, assuming G ~ 100 there and high coherence on the OLTF measurement (gamma ~ 0.9), with 12 averages we should get a relative loop gain error of 9.8%.

Systematic

Also from B&P, we should estimate how biased our TF estimate is. This depends on delays, bandwidths, measurement device noise, etc. Furthermore, the analog electronics in the AUX servo will drift, but we neglect this contribution for now. A simple bias estimate (eq. 9.75) tells us that the coherence is biased by the number of averages, such that in our estimate above the unbiased coherence is roughly 0.897. This means our systematic contribution from TF bias is ~0.17%.
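For reference, here is a sketch of the two estimates above using the standard Bendat & Piersol expressions as I recall them (the exact forms should be checked against Table 9.6 and eq. 9.75; the numbers land in the same ballpark as quoted):

import numpy as np

def tf_mag_rel_error(gamma, n_avg):
    # Normalized random error of a TF magnitude estimate: sqrt(1 - g^2) / (|g| sqrt(2 n))
    return np.sqrt(1.0 - gamma**2) / (gamma * np.sqrt(2.0 * n_avg))

def coherence_bias(gamma, n_avg):
    # Approximate bias of the estimated coherence-squared: (1 - g^2)^2 / n
    return (1.0 - gamma**2) ** 2 / n_avg

n_avg, gamma = 12, 0.9
print(tf_mag_rel_error(gamma, n_avg))                        # ~0.099, the ~10 % statistical error

g_unb = np.sqrt(gamma**2 - coherence_bias(gamma, n_avg))     # ~0.898 unbiased coherence
print(g_unb, tf_mag_rel_error(g_unb, n_avg) - tf_mag_rel_error(gamma, n_avg))   # shift at the ~0.1 % level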

We should remember that in the case of the loop gain, the total error (systematic + statistical) gets an extra suppression factor of the order of the loop gain itself. Radhika's resonant digital gain filters should easily allocate 40 dB (a factor of ~100) at every calibration line, such that our total calibration error drops to the 0.1% level.


Conclusions

  • The main calibration error (at the 1% level) seems to be from the ALS beat frequency. We can simply crank up the SNR, but we should also work on the ALS relative stability. I think the low-frequency end of the ALS residual frequency noise is currently limited by the DFD... I wonder if we can improve on this by implementing a digital PLL (e.g. using a Moku).
  17376   Sat Dec 31 19:27:32 2022 PacoSummaryCalibrationETMY actuation strength cal with 5 lines

Calibrated the ETMY actuation strength using ALS. Attachment #1 shows the result, in close agreement with this previous measurement, with a combined uncertainty estimate of 0.88%. The data were taken with 5 oscillators on, at slightly different strengths than in the ITMY run, and the actuation was on ETMY.

The ETMY actuation strength is

10.843e-9 / f^2 +- (0.88)% [m/count]

Note: GPS times for the raw data are 1356490036 to 1356495400 (just under an hour and a half).

Attachment 1: ETMY_cal.pdf
  17377   Sat Dec 31 20:00:03 2022 ranaSummaryCalibrationETMY actuation strength cal with 5 lines
  1. how much frequency dependent variation in the transfer function do we expect? Are there resonances in the mechanical suspension cage or the violin modes?
  2. it would be interesting to make the same TF, but from counts to coil current, and see if the variation is there. I don't expect to see much variation in the electronics, but there might be something. In principle, we can just measure the voltage at the satellite box, but the coil cable's capacitance and the coil inductance make for some wiggle. I think the resonant frequency ought to be ~ 100 kHz, but perhaps there are some wiggles due to frequency response in the AI filter or something.
  17384   Wed Jan 4 12:12:57 2023 PacoSummaryCalibrationALS calibration error from DFD

[Paco, Anchal]

One of the crucial and currently limiting calibration errors is in the ALS beat. We think a major driver of calibration uncertainty may be the DFD TF calibration since it depends on the RF beat power.

The RF beat power as monitored by C1:ALS-BEATY_RF_POW shows relative fluctuations of 0.02% at the 16 Hz sampling rate (actually 0.09% rms from Attachment #1) once the single arm and the YAUX laser are both locked. This sounds OK, but that's just the statistical part. The beat frequency changes every time we break and reacquire the lock, which changes the beat PD frequency response and hence the overall DFD calibration TF. These changes can be significantly larger than 0.02% (e.g. a 4.9% change in the RF power observed after breaking and reacquiring the YAUX lock this morning). This is far more significant than the contribution from the RIN of the two lasers.
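A quick way to put numbers on both effects from the monitor channel (the data fetch is an assumption, any NDS/cdsutils fetch of C1:ALS-BEATY_RF_POW works; here the array is just loaded from a file):

import numpy as np

# e.g. rf_pow = cdsutils.getdata('C1:ALS-BEATY_RF_POW', 600).data  (assumed fetch)
rf_pow = np.loadtxt('beaty_rf_pow.txt')           # placeholder 16 Hz time series [counts]

print("relative rms in lock: %.3f %%" % (100 * rf_pow.std() / rf_pow.mean()))

# Systematic part: fractional change of the mean across a lock reacquisition,
# assuming the first/second halves of the record straddle the relock
b, a = rf_pow[:len(rf_pow)//2].mean(), rf_pow[len(rf_pow)//2:].mean()
print("change across relock: %.2f %%" % (100 * abs(a - b) / b))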

This kind of systematic error could be addressed by any (or all) of the following:

  • Stabilizing the ALS BEAT absolute frequency (offset locking using freq counter and simple integrator or equivalent)
  • Rejecting RF beat amplitude fluctuations by mixing a stable DC voltage in the DFD, or using a comparator. 
    • Our plan this afternoon is to quickly measure the RF AM rejection level from a simple mixer and a stable voltage reference and plan ahead.
  • ISS on both lasers
Attachment 1: als_rf_power_screenshot.png
  17477   Wed Feb 22 23:40:48 2023 AlexUpdateCalibrationAdding calibration constants for sus matrix and filter control buttons to the sus control screen

The calibration constants were updated for the oplev pitch and yaw. The values, changed as noted in 17471, were:

     PITCH: 175.7 → 155 cts/urad

     YAW: 193 → 241 cts/urad

To make these changes to the oplev calibration constants, I went to ETMY - SELECTED OPLEV SERVO BOX.

I then opened OLMATRIX and turned off the PITCH and YAW servos in the ETMY SUSPENSION SCREEN, so that the system does not attempt to actively make corrections while values are being changed.

Then I adjusted the matrix to include our updated calibration constants and re-engaged the oplev pitch and yaw servos.

This updated the calibration constants for everything

 

The next change that was made was the addition of the calibration filters for position, pitch, yaw and side into the sitemap view for the suspension systems.

Adding calibration filters will allow us to calibrate the pos, pitch, yaw, and side signals to physical units of um and urad (see 17459).

The final screen may be seen below (the updated area is outlined in red):

InkedScreenshot_2023-02-22_18-28-41.jpg

When each of the filter buttons is clicked, the following screen will now appear (circled in yellow is the calibration constant gain we will be calculating and entering into the system):

InkedScreenshot_2023-02-22_18-29-00.jpg

To make these edits to the controls screen, we complete the following process:

We can edit the original screen: right click > evaluate > edit this screen.

Then I adjusted the width of the overall screen and moved the right half of the modules over so I could fit in some filter buttons. I then navigated to the c1ioo WFS master screen using the open feature, to copy a pre-existing filter module.

I then adjusted the filter module and its contents to correspond to the features and autogenerated model files from RTCDS

There was some rearranging and adjusting needed to get these files in place first. The autogenerated files from the RTCDS can be found in dir = "/opt/rtcds/caltech/c1/medm/c1sus/"

They were autogenerated with names "C1SUS_BS_PITCAL.adl", "C1SUS_BS_POSCAL.adl", "C1SUS_BS_YAWCAL.adl","C1SUS_BS_SIDECAL.adl" 

We copied these files to dir = "/opt/rtcds/userapps/trunk/sus/c1/medm/templates/NEW_SUS_SCREENS/"

The file names were changed to "SUS_PITCAL.adl", "SUS_POSCAL.adl", "SUS_YAWCAL.adl", "SUS_SIDECAL.adl" 

The directory we placed them in is where the models for c1 sus can be found that are referenced by the sitemap suspension monitor screen

Each file was then opened in VSCode and a few changes were made so that the optic-specific names referenced by the different screens of the sitemap are replaced by macros that work for every instance of the screen and every optic.

There are approximately 50 references to names like "C1:SUS-BS_PITCAL". In each instance we made the following changes:

"-BS" was changed to "-$(OPTIC)"

"C1:" was changed to "$(IFO):"

The new strings should read "$(IFO):SUS-$(OPTIC)_PITCAL"
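The copy-and-substitute step above was done by hand in VSCode, but for the record here is an equivalent scripted sketch (paths and file names as quoted above; the blanket string replace is crude and assumes "-BS" and "C1:" only appear where we want them changed):

import os

SRC = "/opt/rtcds/caltech/c1/medm/c1sus/"
DST = "/opt/rtcds/userapps/trunk/sus/c1/medm/templates/NEW_SUS_SCREENS/"
FILES = {
    "C1SUS_BS_POSCAL.adl":  "SUS_POSCAL.adl",
    "C1SUS_BS_PITCAL.adl":  "SUS_PITCAL.adl",
    "C1SUS_BS_YAWCAL.adl":  "SUS_YAWCAL.adl",
    "C1SUS_BS_SIDECAL.adl": "SUS_SIDECAL.adl",
}

for src_name, dst_name in FILES.items():
    with open(os.path.join(SRC, src_name)) as f:
        text = f.read()
    # Swap the optic-specific strings for MEDM macros, as described above
    text = text.replace("-BS", "-$(OPTIC)").replace("C1:", "$(IFO):")
    with open(os.path.join(DST, dst_name), "w") as f:
        f.write(text)
    print("wrote", dst_name)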

Once this change was made, we can right click on the filter module box and click the "Label/Name/Args" button.

In the display file, we must add the path name for the calibration directory "/opt/rtcds/userapps/trunk/sus/c1/medm/templates/NEW_SUS_SCREENS/SUS_POSCAL.adl"

And for the arguments box we will enter OPTIC=$(OPTIC), IFO=$(IFO)

You can also copy and paste the directory names in the file boxes using right click copy from the file manager and paste into the box using a single click of the mouse scroller wheel

Lastly, the PV limits were changed for each numeric output: right click the value box > PV Limits > Precision > Source changed to "Default" with a value of 1.

The shown value of the position, pitch, yaw, and side was then changed to show the output from the newly added filter. This is done also by right clicking the value box and adjusting the "Readback Channel".

Value changed from "$(IFO):SUS-$(OPTIC)_TO_COIL_1_#_INMON" to the outputs from the filters which are

       "$(IFO):SUS-$(OPTIC)_POSCAL_OUTMON" (for others changing POSCAL to the appropriate variable)

This is how to edit and add the Medm screens for single suspension optics into the sitemap IFO SUS screen

 

Lastly, Tomohiro and I worked on acquiring 6 data sets by DC-stepping through adjustments in pitch and yaw for MC1, MC2, and MC3. These datasets will be fit quadratically and combined with more tests done tomorrow by AC-driving the stepper motors, to find the calibration constants for the mirrors.

Attachment 2: InkedScreenshot_2023-02-22_18-28-41.jpg
Attachment 3: InkedScreenshot_2023-02-22_18-29-00.jpg
  17492   Sat Mar 4 18:57:18 2023 PacoConfigurationCalibrationFPMI DARM calibration run

Locked FPMI, measured the DARM and CARM OLTFs, locked YAUX, and measured the analog loop TF at the cal line frequencies. Turned the cal lines on with the new filters Anchal added on MC2 (resonant gains within, and notches outside, the CARM bandwidth, which is set to 200 Hz), and hope to get 3600 seconds of data this evening. Log and measurement data are saved under /opt/rtcds/caltech/c1/Git/40m/scripts/CAL/FPMI

  17499   Wed Mar 8 18:32:22 2023 AnchalConfigurationCalibrationFPMI DARM calibration run set to happen at 1 am

On rossa, in a tmux session named FPMI_DARM_Cal, a script is running to take FPMI DARM calibration data at 1:00 am on March 9th. Please do not disturb the experiment until 6 am. To stop the script, do the following on rossa:

tmux a -t FPMI_DARM_Cal
ctrl-C

The script will lock both arms, run ASS, then lock FPMI, tune the beatnote frequency with the Y AUX laser to around 40 MHz, set the phase tracker UGF to 2 kHz, clear the phase history, take the OLTF of DARM from 2 kHz to 10 Hz, take the OLTFs of the CARM and AUX loops at the calibration line frequencies, turn on the calibration lines, and wait for FPMI to unlock or 5 hours to pass, whichever happens first. At the end it will turn off the calibration lines.

  17502   Thu Mar 9 19:20:44 2023 AnchalConfigurationCalibrationFPMI DARM calibration run set to happen at 1 am

Running this test again tonight. Will probably run it every night now.

 

  10436   Thu Aug 28 11:02:53 2014 SteveUpdateCalibration-RepairSR785 repair

SN 46,795 of 2003 is back.

Attachment 1: 08281401.PDF
  11641   Thu Sep 24 17:06:14 2015 ericqUpdateCalibration-RepairC1CAL Lockins

Just a quick note for now: I've repopulated C1CAL with a limited set of lockin oscillators/demodulators, informed by the aLIGO common LSC model. Screens are updated too. 

Rather than trying to do the whole magnitude/phase decomposition, it just does the demodulation of the RFPD signals online; everything beyond that is up to the user to do offline.

Briefly testing with PRMI, it seems to work as expected. There is some beating evident from the fact that the MICH and PRCL oscillation frequencies are only 2Hz apart; the demod low pass is currently at an arbitrary 1Hz, so it doesn't filter the beat much. 

Screens, models, etc. all svn'd.

  12040   Mon Mar 21 14:29:32 2016 SteveUpdateCalibration-Repair1W Innolight laser repair diagnoses

 

Quote:
Quote:

After adjusting the alignment of the two beams onto the PD, I managed to recover a stronger beatnote of ~ -10dBm. I managed to take some measurements with the PLL locked, and will put up a more detailed post later in the evening. I turned the IMC autolocker off, turned the 11MHz Marconi output off, and closed the PSL shutter for the duration of my work, but have reverted these to their nominal state now. There are a few extra cables running from the PSL table to the area near the IOO rack where I was doing the measurements from, I've left these as is for now in case I need to take some more data later in the evening...

Innolight 1W 1064nm, sn 1634 was purchased in 9-18-2006 at CIT. It came to the 40m around 2010

Its diodes should be replaced, based on its age and performance.

RIN and noise eater bad. I will get a quote on this job.

The frequency noise plot in the Innolight manual is the same as the Lightwave's (elog 11956).

Diagnosis from Glasgow:

“So far we have analyzed the laser. The pump diode is degraded. Next we would replace it with a new diode. We would realign the diode output beam into the laser crystal. We check all the relevant laser parameters over the whole tuning range. Parameters include single direction operation of the ring resonator, single frequency operation, beam profile and others. If one of them is out of spec, then we would take actions accordingly. We would also monitor the output power stability over one night. Then we repackage and ship the laser.”

  12045   Thu Mar 24 07:56:09 2016 SteveUpdateCalibration-RepairNO Noise Eater for 1W Innolight

1W Innolight is NOT getting Noise Eater as it was decided yesterday at the 40m meeting. Corrected 3-25-2016

Repair quote with adding noise eater is in 40m wiki

Quote: (see entry 12040 above)

 

  12070   Mon Apr 11 17:03:41 2016 SteveUpdateCalibration-Repair1W Innolight repair completed

The laser is back. Test report is in the 40m wiki as New Pump Diode Mephisto 1000

It will go on the PSL table.

  13456   Tue Nov 28 17:27:57 2017 awadeBureaucracyCalibration-RepairSR560 return, still not charging

I brought a bunch of SR560s over for repair from Bridge labs. This unit, picture attached (SN 49698), appears to still not be retaining charge. I’ve brought it back. 

Attachment 1: 96B6ABE6-CC5C-4636-902A-2E5DF553653D.jpeg
Attachment 2: image.jpg
  14759   Mon Jul 15 03:30:47 2019 KruthiUpdateCalibration-RepairWhite paper as a Lambertian scatterer

I made some rough measurements, using the setup I had used for CCD calibration, to get an idea of how good a Lambertian scatterer the white paper is. Following are the values I got:

Angle (deg)   Photodiode reading (V)   Ps (W)       BRDF (1/sr)   % error
12            0.864                    2.54E-06     0.334         20.5
24            0.926                    2.72E-06     0.439         19.0
30            1.581                    4.65E-06     0.528         19.0
41            0.940                    2.76E-06     0.473         19.8
49            0.545                    1.60E-06     0.423         22.5
63            0.371                    1.09E-06     0.475         28

Note: All the measurements are just rough ones and are prone to larger errors than estimated.

I also measured the transmittance of the white paper sample being used (it consists of 2 white papers wrapped together); it was around 0.002.
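For reference, the BRDF numbers in the table follow from the usual definition BRDF = Ps / (Pi * Omega * cos(theta)). A sketch with placeholder geometry (the incident power, PD distance and aperture below are NOT from this measurement; the V-to-W conversion is inferred from the table's first two columns):

import numpy as np

P_i = 1.0e-3                  # incident power on the paper [W] (assumed)
resp = 3.4e5                  # PD volts per watt of scattered light [V/W], from the V and Ps columns
r, a = 0.15, 1.0e-2           # PD distance and aperture radius [m] (assumed)
omega = np.pi * a**2 / r**2   # collection solid angle [sr]

theta = np.deg2rad(30.0)
V_pd = 1.581                                  # photodiode reading at 30 deg, from the table
P_s = V_pd / resp                             # scattered power on the PD [W]
print(P_s / (P_i * omega * np.cos(theta)))    # BRDF [1/sr]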

Attachment 1: BRDF_paper.png
  14804   Wed Jul 24 04:20:35 2019 KruthiUpdateCalibration-RepairMC2 pitch and yaw calibration

Summary:  I calibrated MC2 pitch and yaw offsets to spot position in mm. Here's what I did:

  1. Changed the MC2 pitch and yaw offset values using  ezca.Ezca().write('IOO-MC2_TRANS_PIT_OFFSET', <pitch offset value> ) and ezca.Ezca().write('IOO-MC2_TRANS_YAW_OFFSET', <yaw offset value> )
  2. Waited for ~ 700-800 sec for system to adjust to the assigned values
  3. Took snapshots with the 2 GigEs I had installed - zoomed in and zoomed out. (I'll be using these to make a scatter loss map, verify the calibration results, etc)
  4. Ran the mcassDecenter script, which can be found in /scripts/ASS/MC. This enters the spot position in mm in the specified text file.

Results:  In the pitch/yaw vs pitch_offset/yaw_offset graph attached,

  • intercept_pitch = 6.63 (in mm) ,  slope_pitch = -0.6055 (mm/counts) 
  • intercept_yaw = -4.12 (in mm) ,  slope_yaw = 4.958 (mm/counts) 
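The slope/intercept above presumably come from a straight-line fit of spot position against offset; a sketch of that fit with placeholder data (the offset and spot-position arrays below are invented, only the quoted calibration numbers are real):

import numpy as np

pit_offset = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])        # counts (placeholder)
pit_spot = np.array([7.8, 7.2, 6.6, 6.0, 5.4])            # mm from mcassDecenter (placeholder)

slope, intercept = np.polyfit(pit_offset, pit_spot, 1)
print(f"slope = {slope:.4f} mm/count, intercept = {intercept:.2f} mm")

# Applying the quoted pitch calibration to an arbitrary offset:
spot_mm = -0.6055 * 1.5 + 6.63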
Attachment 1: Pitchyaw_calibration.png
  15510   Sat Aug 8 07:36:52 2020 Sanika KhadkikarConfigurationCalibration-RepairBS Seismometer - Multi-channel calibration

Summary : 

I have been working on analyzing the seismic data obtained from the 3 seismometers present in the lab. While looking at the combined time series and the gain plots of the 3 seismometers, I noticed that there is some error in the calibration of the BS seismometer. The EX and EY seismometers seem to be well calibrated, as opposed to the BS seismometer.

The calibration factors have been determined to be :

BS-X channel: 2.030 ± 0.079

BS-Y channel: 2.840 ± 0.177

BS-Z channel: 1.397 ± 0.182


Details :

The seismometers each have 3 channels, i.e. X, Y, and Z, for measuring displacement in all 3 directions. In the absence of any local excitation, the X channels of the three seismometers should be more or less coherent, with the gain between similar channels being 1; likewise for the Y and Z channels. After analyzing multiple datasets, it was observed that all three channels of the BS seismometer differed very significantly from their corresponding channels in the EX and EY seismometers, and they were not well calibrated even in the region where they were found to be coherent.


Method :

Note: All the frequency-domain quantities were calculated for a sampling rate of 32 Hz. The plots were found to be extremely coherent in a certain frequency range, ~0.1 Hz to 2 Hz, so this range is used to assess the relative calibration errors. The spread around the transfer function comes from the coherence differing from unity and from the finite number of averages in the Welch estimate. 9 averages were performed for the following analysis, keeping in mind the needed frequency resolution (~0.01 Hz) and the accuracy of the power calculated at every frequency.

  1. I first identified the regions in which similar channels were coherent, to allow a proper gain analysis. The EY seismometer was found to be the most stable, so it has been used as the reference. I looked at the coherence between similar channels of each pair of seismometers together with the Bode plots. A transfer function estimator was used to analyze the relative calibration between all 3 pairs of seismometers. In the given frequency range, EX and EY have a gain of 1, so their relative calibration is proper. The relative calibration between the BS and EY seismometers is not proper, as the resultant gain is not 1. The attached plots show the discrepancies clearly:
  • BS-X & EY-X Transfer Function : Attachment #1
  • BS-Y & EY-Y Transfer Function : Attachment #2

          The gain in the given frequency range is ~3. The phase plot also shows 180 degrees rather than 0, so a negative sign would also be required in the calibration factor. Thus the calibration factor for the Y channel of the BS seismometer should be around ~3.

  • BS-Z & EY-Z Transfer Function : Attachment #3

The mean value of the gain in the given frequency range is the desired calibration factor, and the quoted error is the mean error of the chosen gain dataset, caused by the factors mentioned above.

Note: The standard error envelope plotted in the attached graphs is calculated as follows :

         1. Divide the data into n segments according to the resolution wanted for the Welch averaging to be performed later. 

         2. Calculate PSD for every segment (no averaging).

         3. Calculate the standard error for every frequency bin by looking at the distribution formed by the n values obtained by taking that bin from every segment.
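A sketch of the transfer-function estimate and segment-wise error envelope described above (scipy-based; the channel arrays are placeholders here, in practice x and y are the BS and EY channels decimated to 32 Hz):

import numpy as np
from scipy import signal

fs, nper = 32.0, 4096                          # 32 Hz data, ~0.008 Hz resolution
x = np.random.randn(int(fs * 3600))            # placeholder BS channel
y = 3.0 * x + 0.1 * np.random.randn(len(x))    # placeholder EY channel

f, Pxy = signal.csd(x, y, fs=fs, nperseg=nper)
_, Pxx = signal.welch(x, fs=fs, nperseg=nper)
_, coh = signal.coherence(x, y, fs=fs, nperseg=nper)
H = Pxy / Pxx                                  # transfer-function estimate from x to y

# Standard-error envelope: one PSD per segment, then the spread over segments
segs = x[:(len(x) // nper) * nper].reshape(-1, nper)
psds = np.array([signal.periodogram(s, fs=fs)[1] for s in segs])
stderr = psds.std(axis=0, ddof=1) / np.sqrt(len(segs))

band = (f > 0.1) & (f < 2.0)
print(np.abs(H[band]).mean(), coh[band].mean())   # calibration factor and coherence in the band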

Discussions :

The BS seismometer is a different model than the EX and EY seismometers, which might be a major reason why we need a special calibration for the BS seismometer while EX and EY are fine. The sign flip in the BS-Y channel may cause a lot of errors in future data acquisition. The time series plots in Attachment #4 show an evident DC offset present in the data. All of the information above indicates that there is some electrical or mechanical defect in the seismometer, which may require a reset. Kindly let me know if and when the seismometer is reset so that I can calibrate it again.

Attachment 1: BS_X-EY_X.png
Attachment 2: BS_Y-EY_Y.png
Attachment 3: BS_Z-EY_Z.png
Attachment 4: timeseries.png
  227   Tue Jan 8 15:20:17 2008 PkpUpdateCamerasGigE update
[Tobin , Pinkesh]

Finally we got the camera doing something (other than giving out its attributes). The only thing that seems to work so far is a program called AAviewer, which converts the image into ASCII format and displays it on the screen. If you want to play around with it, log into mafalda (131.215.113.23) via rana.ligo.caltech.edu. Go to /cvs/cds/caltech/target/Prosilica/bin-pc/x86/ and there should be a few programs there, one of which is AAviewer; it requires you to give it the camera's IP address (currently 131.215.113.103). (You can also get the IP information via the ListCameras program.) The camera is physically in the 40m near the network rack.

Other programs don't seem to be working, probably due to the network/packet-size issues. Since linux2 can change its packet size to a higher number, I will get the code to compile on linux2 for now and then give it a shot.
  234   Thu Jan 10 13:45:52 2008 PkpUpdateCamerasGLIBC Error
So, I have tried to compile the camera files in /cvs/cds/caltech/target/Prosilica/examples for the past 2 days now and have been unable to get rid of the following error (specifically for ListCameras.cpp, since it doesn't require any other libraries, which would unnecessarily complicate things):

../../bin-pc/x86/libPvAPI.so: undefined reference to `__stack_chk_fail@GLIBC_2.4'
collect2: ld returned 1 exit status
make: *** [sample] Error 1

I used to get this error on mafalda too, but I fixed it by installing the latest version of the glibc libraries. Despite doing the same on linux2, the error still persists. I suspect it has something to do with it being an FC3 machine. My own laptop, which also runs Ubuntu, works fine too. The problem with these Ubuntu machines is that they don't let me set the packet size to 9 kB, which is required by the camera. Linux2 does.

If anyone has any idea how to resolve this issue, please let me know.

Thanks
Pinkesh.
  236   Fri Jan 11 17:01:51 2008 pkpUpdateCamerasGigE again
So, here I detail all the efforts on the GigE so far

(1) The GigE camera requires a minimum 9 kB packet size, which is not available on mafalda or on my laptop (both of which run Ubuntu, and the camera programs compile there). The programs which require smaller packet sizes work perfectly fine on these machines. I tried to statically compile the files on these machines so that I could then port them to the other machines, but that fails because the static libraries given by the company don't work.

(2) On linux2, which lets me set a packet size as high as 9 kB, it doesn't compile because of a GLIBC error. I tried updating glibc and it tells me that the existing version is the latest (which it clearly is not). So I tried to uninstall glibc and reinstall it, but rpm won't let me uninstall glibc, since there are a lot of dependencies. A dead end, in essence.

Steps being taken

(1) Locally installing the whole library suite on linux2. Essentially, install another version of gcc and g++ and see if that helps.
(2) If this doesn't work, then the only course of action I can take is to cannibalize linux2's GigE card and put it on mafalda. (I need permission for this.)

Once again any suggestions welcome.
  245   Thu Jan 17 15:11:13 2008 josephbUpdateCamerasWorking on Malfalda
1) I can statically compile the ListCamera code (which basically just goes out and finds what cameras are connected to the network) on Malfalda and use that compiled code to run on Linux2 without a problem. Simply needed to add explicit links to libpthread.a and librt.a.
(i.e. -Bstatic -L /usr/lib/ -lpthread -Bstatic -L /usr/lib -lrt)

With appropriate static libraries, it should be possible to port this code to other linux machines even if we can't get it to compile on the target machine itself.

2) I've modified the Snap.cpp file so that it uses a packet size of 1000 or less. This simply involves setting the "PacketSize" attribute with the built-in functions provided by their library. After un-commenting some lines in that code, I was able to save TIFF images from the camera of up to 400x240 pixels on Malfalda. The claimed maximum resolution for the camera is 752x480, but that doesn't seem to work with the current setup. The maximum number of pixels seems to be about 100 times the packet size, i.e. a packet size of 1000 will allow up to 400x240 (96,000 pixels) but not 500x240 (120,000). Not sure if this is an issue just with the Snap code or with the general libraries used.

3)Will be working towards getting video running over the next day or so.
  266   Fri Jan 25 11:38:16 2008 josephbConfigurationCamerasWorking GiGE video on Linux - sort of
1)I have been able to compile the SampleViewer program which can stream the video from the Prosilica 750C camera. This was accomplished on my 64-bit laptop running Ubuntu, after about 3 hours of explicitly converting strings to wxStrings and back again within the C++ code. (There was probably an easier way to simply overload the functions that were being called, but I wasn't sure how to go about doing so). By connecting it to the CDS network, I was able to immediately detect the camera and display the images.

Unfortunately, I have not yet been able to get it to compile on Mafalda with the x86 architecture. This may be due to the fact that it has wxWidgets version 2.8.7 while my laptop has 2.8.4. Certainly the failure at compile time looks different from the earlier errors, and seems to be within the wxWidgets code rather than the SampleViewer code. I may simply need to uninstall 2.8.7 and install 2.8.4 of wxWidgets.

The modified code that will compile on my machine has been copied to /cvs/cds/caltech/target/Prosilica/examples/SampleViewer2b.

2)The Snap program (under /cvs/cds/caltech/target/Prosilica/examples/Snap) also will now take full resolution images even on Mafalda. This was achieved by reducing the packet size to 1000 and also increasing the wait until timeout time up to 400 ms, which originally was at 100. Apparently, it takes on the order of 1 ms per packet as far as I can tell. So full resolution at 752x480 required something of order 360 packets.

To Do:
1) Get sample viewer to compile on Mafalda, and then statically compile it so it can be run from any Linux based machine.
2) Get a user friendly version of Snap up and running, statically compiled, with options for a continuous loop every X seconds and also to set desired parameters (such as height, width, file name to save to, save format, etc).
3) Figure out data analysis with the images in Matlab and an after the fact image viewer.

Attached is an example .tiff image from the Snap program.
Attachment 1: snap.tiff
  267   Fri Jan 25 13:36:13 2008 josephbConfigurationCamerasWorking GiGE video on Mafalda
Finally got the GiGE camera sample viewer video running on Mafalda by updating to the latest API (version 1.16 from Dec 16, 2007) from Prosilica and then using the modified Sample Viewer code I had written. The API version previously in cvs was 1.14.

It can currently be run by ssh -X into Mafalda and going to /cvs/cds/caltech/target/Prosilica/bin-pc/x86 and running the SampleViewer executable found there.
  289   Thu Jan 31 16:53:41 2008 josephbConfigurationCamerasImproving camera user interface
There's a new and improved version of the Snap program that people are free to play with.

Located in /cvs/cds/caltech/target/Prosilica/40mCode/

The program Snap now has a -h or --help option which describes some basic command line arguments. The height (in pixels), width (in pixels), exposure time (in micro seconds), file name to be saved to (in .tiff format), and packet size can all be set. The format type (i.e. pixel format such as Mono8 or Mono16) doesn't work at the moment.

At the moment, it only runs on mafalda.

Currently in the process of adding a loop option which will take images every X seconds, saving them to a given file name and then appending the time of capture to the file name.

After that need to add the ability to identify and choose the camera you want (as opposed to the first one it finds).

Lastly, I've been finding on occasion that the frame fails to save. However, if you try again a few seconds later with the exact same parameters, it generally does save the second time. Not sure what's causing this, whether on the camera or the network side of things.

I've attached two images, the first at default exposure time (15,000 microseconds) and the second at 1/5th that time (3,000 microseconds).
The command line used was "./Snap -E 3000 -F 'Camera_exp_3000.tiff' "
Attachment 1: Camera_exp_15000.tiff
Attachment 2: Camera_exp_3000.tiff
  292   Fri Feb 1 15:04:54 2008 josephbConfigurationCamerasSnap with looping functionality available
New GiGE camera code is available in /cvs/cds/caltech/target/Prosilica/40mCode/. Currently only runs on Mafalda.

Snap has expanded functionality to continuously loop infinitely or for a maximum number of images set by the user. File names generated with the loop option have the current Unix time and .tiff appended to them. So -f './test' will produce tiff files with format "test1234567.tiff". The -l option sets the number of seconds between images.

"./Snap -l 5 -i -f './test' " will cause the program to infinitely loop, saving images every 5 seconds. Using "-m 10" instead of "-i" will take a series of 10 images every 5 seconds (so taking a total of 50 seconds to run).

It also now defaults to 16-bit (in reality only 10 bit) output instead of 8 bit output. You can select between the two with -F 'Mono8' or -F 'Mono16'.

Use --help for a full list of options.

Note that if you ctrl-c out of the loop, you may need to run ./ResetCamera 131.215.113.104 (or whatever the IP is - use ./ListCameras to determine IP if necessary) in order to reset the camera because it doesn't close out elegantly at the moment.
  297   Tue Feb 5 15:32:29 2008 josephbConfigurationCamerasPMC and the GigE Camera
The PMC transmission video camera has been removed and replaced with the GigE GC750 camera for the moment.

A ND4.0 filter has been added in the path to that camera to reduce saturation for the moment.

The old camera has been placed on the elevated section inside the enclosure, and the cable for it is still on the table proper.

The Gige camera is currently running the Snap code on Linux3 with the following command line:

./Snap -E 2000 -l 60 -m 1440 -f './pmc_trans/pmc_trans'

So it's going to be taking TIFF images every minute for the next 24 hours into the cvs/cds/caltech/target/Prosilica/40mCode/pmc_trans/ directory.

Attached is an example image with exposure set to 2000, loaded into matlab and plotted with the surf command. 2500 microseconds looked like it was still saturating, but this seems to be a good level (with a max of 58560 out of 65535).
Attachment 1: pmc_trans.jpg
  300   Wed Feb 6 16:50:47 2008 josephbConfigurationCamerasRegions of Interest and max frame rate
The Snap code has once again been modified such that setting the -l option to 0 will take images as fast as possible. Also, the -H and -W options set the height and width, while in principle the -Y and -X options set the position in pixels of the top edge and left edge of the image. It also seems possible to set these values such that the saved image wraps around. I'll be adding some command checking so that the user can't do this in the near future.

Doing some timed runs, using a -H 350 and -W 350 (as opposed to the full 752x480), 100 images can be saved in roughly 8 seconds, and 1000 images took about 73 seconds. This corresponds to a frame rate of about 12-13 frames per second (or a 12-13 Hz display). The size of this area was sufficient to cover the current PMC transmission beam.

The command line I used was

time ./Snap -l 0 -m 1000 -f 'test' -W 350 -H 350 -Y 50 -X 350 -E 2000

Interestingly enough, there would be bursts of failed frame saves if I executed commands in another terminal (such as using ls on the directory where the files were being stored).

As always, this code is available in /cvs/cds/caltech/target/Prosilica/40mCode/.
  301   Wed Feb 6 19:39:11 2008 ranaConfigurationCamerasRegions of Interest and max frame rate
We really need to look into making the 40m CDS network have an all GigE backbone so that we can have cooler cameras as well as collect multiple datastreams...
  378   Fri Mar 14 12:06:29 2008 josephbConfigurationCamerasGC750 looking at ETMX while locked
The GC750 (CMOS) is currently looking at the front of ETMX. Unfortunately, it's being routed through a 10 Mbit connection (which I will be purchasing a replacement for today), so getting it to send images to Mafalda/Linux 2 or 3 isn't working well; but by using a local gigabit switch and a laptop I can get sufficient speed for full images with the sample viewer.

The attached image is at the full 752x480 resolution with a 10,000 microsecond exposure and the X-arm locked, although it looks like I still need to work on the focusing. Will be switching the GC750 with the GC650 (CCD) later today and comparing the resulting images.
Attachment 1: ETMX_zoom_exp_10000_750.tiff
  379   Fri Mar 14 14:59:51 2008 josephbConfigurationCamerasComparison between GC650 (CCD) and GC750 (CMOS) looking at ETMX
Attached are images taken of ETMX while locked.

The first two are 300,000 microsecond exposures, with approximately the same focusing/zoom (the 750 is slightly more zoomed in than the 650 in these images). The second two are 30,000 microsecond exposures.

The CMOS appears to be more sensitive to the 1064 nm reflected light (resulting in bright images for the same exposure time). This may make a difference in applications where images are desired to be taken quickly and repeatedly.

Both seem to be resolving individual specks on the optic reasonably well.

Next test is to place both camera on a Gaussian beam (in a couple different modes say 00, 11, and so forth), probably using the PMC.
Attachment 1: ETMX_z2_exp_300000_650.tiff
Attachment 2: ETMX_z2_exp_300000_750.tiff
Attachment 3: ETMX_z2_exp_30000_650.tiff
Attachment 4: ETMX_z2_exp_30000_750.tiff
  434   Tue Apr 22 08:34:22 2008 josephbConfigurationCamerasCurrent Network Diagram
The attached network diagram has also been added to the 40m Wiki at http://lhocds.ligo-wa.caltech.edu:8000/40m/Image_Processing_with_GigE_Cameras
Attachment 1: Network.pdf
  471   Thu May 8 16:40:36 2008 josephbConfigurationCamerasGige Camera currently on PSL table
Andrey and I were working on the PSL table today, sending a pickoff of a pickoff of the main beam (a microscope slide picking off ~4% of the original pickoff) to the GC750 GigE camera.

At the time we left, we scanned the area with a beam scan and didn't see any new stray beams, and nothing in any useful beam paths should have changed. We also strung a Cat 6 cable from the control room switch out to the PSL table in the cable trays, and then above the PSL table.

Currently it's not as well aligned as it could be, and it also requires a very low exposure setting, -E 50 or so, to avoid saturation.
  481   Thu May 15 16:24:18 2008 josephbSummaryCameras 
The GC750 camera is currently looking at a very small pickoff of the PSL output (transmission of a Y1-1037-45-S mirror). The plan is to take images tomorrow with it and the GC650 from the same spot and do comparisons.

For those interested, the camera can be run with two codes, from mafalda. Use ssh -X mafalda to login, to allow the live stream to work with the SampleViewer code. The codes can be found in:

/cvs/cds/caltech/target/Prosilica/40mCode/Snap

and

/cvs/cds/caltech/target/Prosilica/bin-pc/x86/SampleViewer

Type Snap --help for a list of options for that program. Click the circle looking thing in SampleViewer to start the live stream. Note only 1 of the two programs can be running at a time, and the only way to change settings (such as exposure length) is with Snap at the moment.
  482   Fri May 16 14:38:50 2008 josephbSummaryCamerasTwo cameras setup
I've changed the pickoff setup from yesterday for the GigE cameras to include a 33% beam splitter (the first one I could find). The reflection goes to the GC650 (CCD) camera while the transmission goes to the GC750 (CMOS) camera. This means the CMOS camera has roughly twice the light incident on the GC650, which should be kept in mind in all comparisons. The distances from the beam splitter are approximately the same for both cameras, but some more accurate positioning might be useful.

Its very easy to get the GC650 camera into a bad state where you need to go out and cycle the power (simply unplug and re-plug in the power supply either at the camera or outlet). If the ListCamera program doesn't see it, this is probably necessary.

Andrey added at 6.30PM: Actually the 650 camera keeps crashing constantly. Every time I attempt to capture an image, the camera fails.
  506   Fri May 30 12:03:08 2008 josephb, AndreyConfigurationCamerasHead to head comparison of cameras
Andrey and I (Joseph B.) have examined the output of the GC650 (CCD) and GC750 (CMOS) Prosilica cameras. We did several live motion tests (i.e. rotating the turning mirror, moving and rotating the camera, etc.) and also used a microscope slide to try to eliminate back reflections and interference.

Both the GC650 and GC750 produce dark lines in the images, some of which look parallel, while others are in much stranger shapes, such as circles and arcs.

Moving the GC750 camera physically, we have the spot moving around, with the dark lines appearing to be fixed to the camera itself, remaining in the same location on the detector; i.e. coming back to the same spot keeps showing a circle. In reasonably well-behaved sections, these lines are about 10% dips in power, and could in principle be subtracted out. It's possible that the camera was damaged by too much incident light in the past, although going back to the pmc_trans images that were taken, similar lines are already visible there.

Moving the GC650 camera physically seems to change the position of the lines (if one also rotates the turning mirror to get to the same spot on the CCD). It seems as if a slight change in angle has a large effect on these dark bands, which can either be thin, or very large, bordering on the size of the spot size. My guess is (as the vendor suggested) the light is interacting with the electronics behind the surface layer rather than a surface defect producing these lines. Using a microscope slide in between the turning mirror and the GC650, we were able to produce new fringes, but didn't affect the underlying ones.

Placing a microscope slide in between the last turning mirror and the GC750 does not affect the dark lines (although it does seem to add some), nor does turning the final turning mirror, so it seems unlikely to be caused by back reflection in this case.

So it seems the CMOS may be more consistent, although we need to determine if the current line problems are due to exposure to too much light at some point in the past (i.e. I broke it) or they come that way from the factory.

Attached are the results of image processing of the images from our two cameras using Andrey's new Matlab script.
Attachment 1: Waveform_Reconstruction_May30-2008.pdf
  511   Mon Jun 2 12:20:35 2008 josephbBureaucracyCamerasBeam scan has moved
The beamscan has been moved from the Rana lab back over to the 40m, to be used to calibrate the Prosilica cameras.
  512   Tue Jun 3 02:15:29 2008 AndreySummaryCamerasFitting results

There has been a lot of work recently related to the processing of images captured by the GC-650 and GC-750 cameras.

At the end of the week of May 30, Joseph and I (Andrey) installed the two cameras capturing images of the pickoff of the main beam on the PSL optical table. The cameras are located in the picked-off beam going towards the "PSL position QPD", after the 33-66 beamsplitter (33% reflection and 66% transmission).

Initially (on May 30) the GC-650 camera was taking the images of reflected beam, while the camera GC-750 was taking images of transmitted beam. On Monday June 2 we switched the positions of the cameras, so GC-650 appeared to be on the path of the transmitted beam and GC-750 on the path of the reflected beam.

Over the weekend I (Andrey Rodionov) succeeded in writing a Matlab program that performs two-dimensional Gaussian fitting of the captured images, and I used that program to fit the images from the cameras.

The program fits the camera data by a two-dimensional Gaussian surface:

Z = A * exp[ - 2 * (X - X_Shift)^2 / (Waist_X)^2 ] * exp[ - 2 * (Y - Y_Shift)^2 / (Waist_Y)^2 ] + CONST_Shift,

where A, X_Shift, Waist_X, Y_Shift, Waist_Y, CONST_Shift are 6 parameters of the fit.

Attached are the PDF files showing the results: images taken with our cameras, the 2-dimensional Gaussian fit for these images, and the surfaces of residuals. Residuals are the differences between the measured beam profile and the result of the fit. In the normalized version of the residual graph, I normalize by the fit coefficient A, the factor in front of the exponentials.
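For completeness, here is a sketch of the same fit done with scipy.optimize.curve_fit in Python (the image below is synthesized; in practice img would come from one of the captured frames):

import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, A, x0, wx, y0, wy, const):
    # The model quoted above: A exp(-2(x-x0)^2/wx^2) exp(-2(y-y0)^2/wy^2) + const
    x, y = xy
    return (A * np.exp(-2*(x - x0)**2 / wx**2)
              * np.exp(-2*(y - y0)**2 / wy**2) + const).ravel()

ny, nx = 480, 752
yy, xx = np.mgrid[0:ny, 0:nx]
img = gauss2d((xx, yy), 1000, 380, 120, 240, 90, 20).reshape(ny, nx)   # placeholder image
img = img + np.random.normal(0, 5, img.shape)

p0 = (img.max(), nx/2, 100, ny/2, 100, img.min())
popt, _ = curve_fit(gauss2d, (xx, yy), img.ravel(), p0=p0)
residual = img - gauss2d((xx, yy), *popt).reshape(ny, nx)
print("A, X_Shift, Waist_X, Y_Shift, Waist_Y, CONST_Shift =", np.round(popt, 2))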
Attachment 1: May30-GC650.pdf
Attachment 2: May30-GC750.pdf
Attachment 3: June02-GC650.pdf
Attachment 4: June02-GC750.pdf
  515   Tue Jun 3 12:33:36 2008 AndreyUpdateCamerasAndrey, Josephb

Continuing our work with cameras,

1) we removed both cameras from their places on Monday afternoon and took beam scans with the dedicated equipment (see elog entry 511) from the Bridge building,

2) and on Tuesday morning we put the GC-750 camera back into the transmitted beam path and the GC-650 camera into the reflected beam path. We plan to compare the images from the "reflection camera" for several different tilt angles of the camera.
  517   Wed Jun 4 13:46:42 2008 josephbConfigurationCamerasChanging incident angle images
Attached are images from the GC650 and GC750 when the incident angle was varied from 0 tilt (normal incidence) to 5,10, and 20 degrees. Each time the beam was realigned via the last turning mirror to be on roughly the same spot. This light was a pickoff of the PSL table light just before it leaves the table.

Images include the raw data, fit to the data, residual normalized by peak power "w(1)", and normalized by the individual bin power.

The first pdf includes 0 degrees (normal) and ~5 degrees of tilt for the GC650 (CCD) camera.

The second pdf includes ~10 and ~20 degrees of tilt images for the GC650 (CCD) camera.

The third pdf includes 0 and ~5 degrees of tilt for the GC750 (CMOS) camera.

The fourth pdf includes ~10 and ~20 degrees of tilt for the GC750 (CMOS) camera.

Things to note:
1) The GC750 seems to have a structure on the camera itself, somewhat circular in nature. One possible explanation is that the camera was damaged at a previous juncture by too much light. Need to check earlier images for this problem.
2) GC650 has "bands" which change direction and thickness with angle. Also at higher incidence angle, the sensitivity seems to drop (unlike the GC750 where overall power level seems to stay constant with increasing angle of incidence).
3) The GC650 seems to have a higher noise floor, as seen from the last plot of each PDF (where each pixel of the residual is normalized by the power in the corresponding pixel of the fit).
Attachment 1: GC650_0dg_5dg.pdf
Attachment 2: GC650_10dg_20dg.pdf
Attachment 3: GC750_0dg_5dg.pdf
Attachment 4: GC750_10dg_20dg.pdf
  519   Wed Jun 4 16:57:12 2008 josephbConfigurationCamerasDark images from cameras (electronics noise measurement)
The attached pdfs are 1 second and 1 millisecond long integrations from the GC650 and GC750 cameras with a cap in place - i.e. no light.

They include the mean and standard deviation values.

The single bright pixel in the 1 second long exposure image for the GC650 seems to be a real effect. Multiple images taken show the same bright pixel (although with slightly varying amplitudes).

The last pdf is a zoom in on the z-axis of the first pdf (i.e. GC650 /w 1 sec exposure time).

I'm not really sure what to make of the mean remaining virtually fixed for the different integration times on both cameras. I guess the zero level is simply an offset, and it doesn't lead to any runaway integration in general, although there are certainly some stronger pixels in the long exposures compared to the short exposures.

It's interesting to note that the standard deviation actually drops from the long exposure to the short exposure, possibly influenced by certain pixels which seem to grow with time.

The one with the least variation from its "zero" was the 1 millisecond GC750 dark image.
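For anyone repeating this, a sketch of how the same per-image statistics (mean, standard deviation, location of the bright pixel) can be pulled out in Python (PIL/numpy assumed; the directory of Snap dark frames is a placeholder path):

import glob
import numpy as np
from PIL import Image

for fname in sorted(glob.glob("dark_frames/*.tiff")):       # frames taken with the cap on
    img = np.array(Image.open(fname), dtype=float)
    hot = np.unravel_index(np.argmax(img), img.shape)        # the persistent bright pixel
    print(f"{fname}: mean = {img.mean():.2f}, std = {img.std(ddof=1):.2f}, brightest at {hot}")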
Attachment 1: GC650_1sec_dark.pdf
Attachment 2: GC650_1msec_dark.pdf
Attachment 3: GC750_1sec_dark.pdf
Attachment 4: GC750_1msec_dark.pdf
Attachment 5: GC650_1sec_dark_zoom.pdf
  520   Thu Jun 5 10:46:26 2008 josephbConfigurationCamerasApproximately uniform reflected white light
In an attempt to investigate the structures seen in previous images for the GC750, I aimed it at a relatively clean section of gray table top roughly a cm or two from the surface and took images (without a lens). As I was holding this with my hand, the angle wasn't completely even with the table, and thus there's a gradient of light in the pictures. However, one should in principle be able to pick out features (such as a circular spot with less sensitivity), but these do not show up.

In my mind, these images seem to indicate the electronics are fine, and suggest that the CMOS or CCD detectors themselves are undamaged (at least in regards to white light, as opposed to 1064nm). An issue with the plastic cap (protective piece) may be the culprit, or perhaps a tiny bit of dust, which the incoherent light from all angles goes around efficiently?

Will try blowing the cameras with clean nitrogen today and see if that removes or changes the circular structure we have seen.
Attachment 1: GC650_white_light.pdf
Attachment 2: GC750_white_light.pdf
  521   Thu Jun 5 13:35:23 2008 josephbConfigurationCamerasGC750 looking at 1064nm scattered light
I've taken 200 images of the GC750 (CMOS) camera while holding it by hand up to a beam card (also held by hand) in the path of ~5mW of beam power. I then averaged the images to produce the fourth attached plot.

Rob has pointed out the image looks a lot like PCB traces. So perhaps we're seeing the electronics behind the CMOS sensor?

I repeated the same experiment with HeNe laser light (again scattered off a card). These show none of the detailed structure (just what looks to be a large reflection from the card moving around depending on how steady my hand was). These are the first 3 attached plots. So only 1064nm light so far sees these features.

As a possible solution, I did a quick and dirty calibration by dividing a previous PSL output beam by the 1064 average scatter light values. These produce the last attached pdf (with multiple images). The original uncalibrated image is on top, while the very simply calibrated image is on the bottom of each plot.

It seems the effect may be power dependent (which could still be calibrated properly, but would take a bit more effort than simply dividing), as determined by looking at the edges of the calibrated plot.
Attachment 1: GC750_HeNe_scatter_avg.pdf
Attachment 2: GC750_HeNe_scatter_avg2.pdf
Attachment 3: GC750_HeNe_scatter_avg3.pdf
Attachment 4: GC750_scatter_avg.pdf
Attachment 5: GC750_nitrogen_white.pdf
  525   Fri Jun 6 16:47:04 2008 josephbConfigurationCameras GC650 scatter images of 1064nm light
Took images similar to the scattered light images from earlier, except with the CCD GC650 camera. The first three attached plots are an average of all 200 images, an average of the first 100 and then an average of the last 100 images.

They show no definite structure. The big red blob which changes with time may be a brighter reflection, although it virtually the same type of setup as the GC750 images.

To do this properly, I should grab a short-focal-length lens, expand the beam to a size larger than the detector area, and fix both cameras so they look into the same expanded beam.

The last set of plots are mean and standard-deviation plots from a previous set of runs on 5/29/08 with the GC750 and GC650 running at the same time. The GC650 was receiving approximately 33% of the total power and the GC750 was receiving 66% (in other words, a factor of 2 more).
Attachment 1: GC650_scatter_200.pdf
Attachment 2: GC650_scatter_100a.pdf
Attachment 3: GC650_scatter_100b.pdf
  530   Wed Jun 11 15:30:55 2008   josephb   Configuration   Cameras   GC1280
The GC1280 we have on trial has arrived. This is a higher-resolution CMOS camera, similar to the GC750. Besides the higher resolution, its sensor is covered and protected by a piece of glass rather than the plastic piece used in the GC750, which may explain the reduced sensitivity to 1064 nm light the camera seems to exhibit. For example, the image averages presented here required a 60,000 microsecond exposure time, compared to 1000-3000 microseconds for similar images from the GC750. This is an inexact comparison; the actual sensitivity difference will be determined once we have identical beams on both cameras.

The attached pdfs (same image, different angles of view) are from 200 averaged images looking at 1064nm laser light scattering from a piece of paper. The important thing to note is there doesn't seem to be any definite structure, as was seen in the GC750 scatter images.

One possibility is that too much power is reaching the CMOS detector, penetrating it, and then being reflected back onto the rear side of the detector. Lower power and longer exposure times may avoid this problem, with the glass of the GC1280 simply cutting down on the amount of light passing through.

This theory will be tested either this evening or tomorrow morning, by reducing the power on the GC750 to the point at which it needs to be exposed for 60,000 microseconds to get a decent image.

The other possibility is that the GC750 was damaged at some point by too much incident power, although it's unclear what kind of failure mode would generate the images we have recently seen from the GC750.
Attachment 1: GC1280_60000E_scatter_2d.pdf
Attachment 2: GC1280_60000E_scatter_3d.pdf
  558   Tue Jun 24 17:12:10 2008   josephb, Eric   Configuration   Cameras   GC750 setup, 1X4 Hub connected, ETMX images
The GC750 camera has been set up to look at ETMX. In addition, the new 1X4 rack-mounted switch (131.215.113.200) has been connected via new Cat6 cable to the control room hub (131.215.113.1?), thanks to Eric. The camera is plugged into the 1X4 rack switch and now has a gigabit connection to the control room computers as well as to Mafalda (131.215.113.23).

After logging in with ssh -X mafalda (or ssh -X 131.215.113.23), type:

target
cd Prosilica/bin-pc/x86/
./Sampleviewer

A viewer will be brought up. Clicking on the 3rd icon from the left (it looks like an eye) brings up a live view.

Close the viewer, cd ../../40mCode, and run ./Snap --help; this explains how to use a simple program for taking .tiff images and for setting options such as the exposure length and the size (in pixels) of the image to send.

When the interferometer was set to an X-arm only configuration, we took two series of 200 images each, with two different exposure lengths.

Attached are three pdf images. The first is just a black and white single image, the second is an average of 100 images, and the third is the standard deviation of the 100 images.
Attachment 1: GC750_ETMX_E30000_single.pdf
Attachment 2: GC750_ETMX_E30000_avg.pdf
Attachment 3: GC750_ETMX_E30000_std.pdf
  566   Wed Jun 25 12:25:28 2008   Eric   Summary   Cameras   2D Gaussian Fitting Code
I initially wrote a script in MATLAB that takes pictures of the laser beam's profile and fits them to a two-dimensional Gaussian in order to determine the position and width of the beam. This code is now (mostly) ported to C so that it can be embedded in the camera software package that Joe is writing. The fitting works fairly well for pictures with the beam directly incident on the camera, and less well for pictures of scatter off the arms' end mirrors, since scatter from defects in the mirror has intensity much greater than that of the beam's Gaussian profile.

The next steps are to finish porting the fitting code to C, and then to modify it so it can better handle the images off the end mirror. Some thoughts on how to do this are to use a Fourier transform and a low-pass filter, or to simply use a center-of-mass calculation (with the defect peaks reduced in intensity), since position matters more than beam width in this calculation. The eventual goal is to include the edge of the optic in the picture and compare the fitted beam position to the optic's position to find the beam's location on the mirror.
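
As a rough illustration of the fit described above (the real code is MATLAB ported to C; this Python/scipy sketch, with made-up names, is only meant to show the form of the model):

import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    # Axis-aligned 2-D Gaussian, flattened so it can be used with curve_fit.
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2))) + offset
    return g.ravel()

def fit_beam(image):
    # Fit a beam image; returns (amp, x0, y0, sigma_x, sigma_y, offset) in pixel units.
    ny, nx = image.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    row, col = np.unravel_index(np.argmax(image), image.shape)
    p0 = [image.max(), col, row, nx / 10.0, ny / 10.0, np.median(image)]  # rough starting guess
    popt, _ = curve_fit(gauss2d, (x, y), image.ravel(), p0=p0)
    return popt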
  622   Wed Jul 2 10:35:02 2008   Eric   Summary   Cameras   General Summary
I finished up the 2D Gaussian fitting code and, along with Joe, integrated it into the Snap software so that it automatically fits every 100th image. While the fitting works, it is too slow for use in any feedback to the servos. I put together a center-of-mass calculation to use instead that is somewhat less accurate but much faster (almost instantaneous versus 5-10 seconds). This has yet to be added to the Snap software, but doing so would not be difficult.
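
The center-of-mass calculation is essentially an intensity-weighted centroid; a minimal Python sketch (illustrative only, not the code that will go into Snap):

import numpy as np

def beam_center_of_mass(image, threshold=None):
    # Intensity-weighted centroid of an image; far faster than a full 2-D Gaussian fit.
    img = np.asarray(image, dtype=float)
    if threshold is not None:
        img = np.where(img > threshold, img, 0.0)  # crude suppression of background pixels
    y, x = np.indices(img.shape)
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total  # (x0, y0) in pixels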

I put together a different fitting function for fitting the multiple Lorentzian resonance peaks in a power spectrum that would result from sweeping the length of any of the mode cleaners. This simply doesn't work. I tested it on some of Josh Weiner's data collected on the OMC last year, and the data fits poorly. Attempting to fit everything at once requires fitting 80,000 data points with 37 free parameters (12 peaks at 3 parameters per peak, plus 1 offset parameter), which cannot be done in any reasonable time. Attempting to fit one specific peak doesn't work either, because of corruption from the other nearby peaks, even though they are comparatively small.

The fit places the offset incorrectly if given the opportunity (green line in attemptedSinglePeakFitWithoutOffset.tiff and attemptedSinglePeakFitWithoutOffsetZoomed.tiff). Removing the offset as a free parameter makes the fit do a much better job (red line in those two graphs). The fit still places the peak 0.01 to the right of the actual peak, which is worse than what could be obtained by simply taking the maximum point value. Additionally, this slight shift means that attempting to subtract out the peak, so that the other peaks become accessible, doesn't work: the peaks are so steep that the 0.01 error is enough to cause significant problems (red in attemptedPeakSubtraction.tiff is the attempted subtraction).

Part of the problem is that the peaks are far from perfect Lorentzians, as seen by cropping to any particular peak (OMCSweepSinglePeak.tiff). This might be corrected in part by accounting for the conversion from PZT voltage to position, which isn't perfectly linear, though I doubt that would remove all the irregularities. At the moment, the best approach seems to be a center-of-mass calculation cropped to the particular peak, though I have yet to try this.
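
For reference, the model being fit is schematically a sum of Lorentzians plus a common offset, i.e. 3 parameters per peak plus one offset (a Python sketch with illustrative names, not Josh's or my actual code):

import numpy as np

def lorentzian(x, amp, x0, gamma):
    # Single Lorentzian peak: amplitude, center, half-width at half-maximum.
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def sweep_model(x, offset, *peak_params):
    # Sum of N Lorentzians plus a common offset: 3*N + 1 free parameters
    # (e.g. 12 peaks -> 37 parameters, as in the sweep fit described above).
    x = np.asarray(x, dtype=float)
    y = np.full_like(x, offset)
    for amp, x0, gamma in zip(*[iter(peak_params)] * 3):
        y += lorentzian(x, amp, x0, gamma)
    return y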

Changing Josh's code to work with the digital cameras and the PMC or MC shouldn't be difficult. Switching to the MC or PMC should simply involve changing the EPICS channel names for the OMC photodiodes and PZTs to those of the PMC or MC. Making the code work with the digital cameras should be as simple as redirecting the call from the framegrabber software to the Snap software.
Attachment 1: attemptedSinglePeakFitWithoutOffset.tiff
Attachment 2: attemptedSinglePeakFitWithoutOffsetZoomed.tiff
Attachment 3: attemptedPeakSubtraction.tiff
Attachment 4: OMCSweepSinglePeak.tiff
  657   Thu Jul 10 23:27:57 2008   John   Metaphysics   Cameras   Secret handshakes
Rob and I have joined the ranks of the illuminati and exercised our power.


Quote:
Osamu showed me the secret way to change the video labels for the quads and
so we fixed them. He made me swear not to divulge this art.

- Rana Adhikari
  660   Fri Jul 11 20:16:01 2008   Eric   DAQ   Cameras   Taking data from the GC 750 Camera
Mafalda has been set up with a background process that will continuously take data from the GC750 camera (at the end of the X arm) over the weekend. The camera will otherwise be unavailable until then.

In the unlikely event that this slows either Mafalda or the network to a crawl, the process to kill should have PID 26265.
  671   Tue Jul 15 10:09:42 2008   Eric   DAQ   Cameras   Did anyone kill the picture taking process on Mafalda?
Did anyone kill the process on Mafalda that was taking pictures of the end mirror of the x-arm last Friday? I need to know whether or not it crashed of its own accord.