40m Log, Page 43 of 339
  14968   Mon Oct 14 16:34:42 2019   Koji   Update   CDS   CM servo board testing

Input-referred offsets on IN1/IN2 were tested with different gain settings. Both inputs were terminated with 50 ohm terminators. The output was monitored at OUT1 (SLOW Length Output). The fast path is AC coupled and therefore has no sensitivity to the offset.

There is an EPICS monitor point for OUT1. With a multimeter it was confirmed that the EPICS monitor (C1:LSC-CM_REFL1_GAIN) reads the right value except for the opposite sign, because the output stage of OUT1 is inverting. The preceding stages have no sign inversion. Therefore, the numbers below do not compensate for the sign inversion.

Attachment 1 shows the output offset observed at C1:LSC-CM_REFL1_GAIN. There is some variation with gain, but it sits around a constant offset of ~26 mV. This suggests that most of the offset comes not from the gain stages but from the later stages (such as the boost stages). Note that the boost stages were turned off during the measurements.

Attachment 2 shows the input-referred offset naively calculated from the above output offset. Independent of which path was used, the offset at low gain was hugely enhanced.

Since the input-referred offset without subtracting the static offset seemed useless, a constant offset of -26 mV was subtracted before the calculation (Attachment 3). This shows that the input-referred offset can reach ~+/-20 mV for gains up to -16 dB. Above that, the offset is at the mV level.
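The arithmetic used here can be sketched as follows (a minimal illustration; the function name and example values are mine, not from the actual analysis):

```python
STATIC_OFFSET_V = -26e-3  # assumed constant output offset from the later (boost-side) stages

def input_referred_offset(v_out, gain_db):
    """Subtract the static output offset, then refer the remainder
    back to the input by dividing out the linear voltage gain."""
    gain_lin = 10.0 ** (gain_db / 20.0)
    return (v_out - STATIC_OFFSET_V) / gain_lin

# e.g. -30 mV measured at the output with the variable gain at -16 dB
v_in_eq = input_referred_offset(-30e-3, -16.0)
```

At low gain the division by a small linear gain is what blows the residual offset up to the tens-of-mV level seen in Attachment 2.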

I don't think this level of offset, whether from the OP27 or the AD829, becomes an issue when the input error signal is of the order of a volt.
This suggests that it is more important to properly set the internal offset cancellation and to keep the gain setting high.


Attachment 1: in12_output_offset.pdf
Attachment 2: in12_input_offset.pdf
Attachment 3: in12_input_offset2.pdf
  14953   Tue Oct 8 17:59:29 2019   Koji   Update   CDS   CM servo board testing (portal)

== Test Status ==

[done] Whitening gain switching test
[done] AA enable/disable switching
[0th order] LO Det Mon channel check
[none] PD I/F board check
[done] QPD I/F board check
[done] CM Board
[none] ALS I/F board

The photos of the latest board can be found in Attachments 2/3.

With some input signals, the functionality of the CM servo switches was tested.

  • Latch logic works, but the latch alive signal is missing.
  • IN1 enable/disable and IN2 enable/disable are working properly.
  • The OUT2 toggle switch for the REFL1/REFL2 monitor is working.
  • Boost / Super Boosts are working.
  • The EXC A enable/disable and EXC B enable/disable switches are working.
  • Option 1 and Option 2 now isolate the input when either is enabled (as there is no option board).
  • The 79 Hz - 1.6 kHz pole-zero pair works fine.
  • OUT1 works fine.
  • The Disable/Enable switch for the fast path works.
  • The polarity switch works.
  • AO Gain properly changes the gain.
  • The limiter switch works (Attachments 4/5). The limiter clips the output at 4 to 4.5 V. The limiter indicator also works.

After the tests, the LSC cables were reconnected (Attachment 6).

Attachment 1: Screen_Shot_2019-10-08_at_18.36.04.png
Attachment 2: CM_Board_asof_191007_1.jpeg
Attachment 3: CM_Board_asof_191007_2.jpeg
Attachment 4: no_limitter.jpg
Attachment 5: with_limitter.jpg
Attachment 6: P_20191008_012442_vHDR_On.jpg
  9477   Sun Dec 15 21:01:19 2013   Koji   Update   LSC   CM servo module installed

Now the module is inserted at the 2nd crate from the top of 1Y2 (LSC analog rack). It is next to the DCPD whitening module.

I found the backplane cable for the Common Mode servo module.
I traced a cable from the XY220 at the rightmost module on the crate where iscaux2 is living.
This cable was connected to the upper backplane connector.

Switching of the module was tested. All the switches and the gain control are doing their job.

It was found that the offset and slow readback are not responding.
I checked the schematic of the CM servo module (D040180).
It seems that there is another cable for the offset and readback voltages.

  9479   Mon Dec 16 20:08:43 2013   Koji   Update   LSC   CM servo module installed

I found another backplane cable for the CM servo module. It is plugged to the module now.

I can see that C1:LSC-CM_SLOW_MON is responding to C1:LSC-CM_REFL_OFFSET.
But C1:LSC-CM_SUM_MON and C1:LSC-CM_FAST_MON are not responding to the given offset.
I probably need to check the cross connects.

  9483   Tue Dec 17 21:28:36 2013   Jenne   Update   LSC   CM servo slow output digitized

Den just plugged an output from the common mode board into an LSC whitening board (the spare channel that used to be called "PD_XXX_I" in the LSC model).  I have modified the LSC model to reflect the new name of the new signal ("CM_SLOW"), and have added this channel to the LSC input matrix.  Koji is, I believe, adding this channel to the LSC screen in the auxiliary error signals section.  I am also adding the _OUT of the filter bank to the DAQ channels block.

  9492   Thu Dec 19 03:29:34 2013   Den   Update   LSC   CM servo test using yarm is complete

Koji, Den


What we did:

  • lock yarm on IR, wire POY to CM input
  • transition arm to CM length path by actuating on IMC
  • increase AO gain for a stable crossover
  • engage CM boosts


Results:

  • arm can be kept on resonance and even acquired on MC2
  • stable length / AO crossover is achieved
  • high bandwidth loop can not be engaged because POY signal is too noisy and EOM is running out of range

We spent some time tuning the CM slow servo such that the fast path would be stable in the AO gain range -32 dB to 29 dB (UGF = 20 kHz) when all boosts are turned off and the common gain is 25 dB. The current filters that we use for locking are not good enough: the AO path cannot be engaged due to oscillations around 1 kHz. This is clearly seen in the slow-path closed-loop transfer function. I will attach the servo shapes tomorrow.

Attached plot "EOM" shows EOM rms voltage while changing AO gain from -10dB to 4dB. For UGF of 20kHz we need AO gain of 29dB.

It seems we can start using the CM servo for the CARM offset, but the sensor should be at least a factor of 30 better than POY. Add another factor of 10 if we would like to use BOOST 2 and BOOST 3.

Attachment 1: EOM.png
  10580   Tue Oct 7 19:40:58 2014   ericq   Update   LSC   CM, REFL11 Wiring

I've changed the LSC rack wiring a little bit, to give us some flexibility when it comes to REFL11. 

Previously, the REFL11 demod I output was fed straight to the CM servo board, and the slow CM board output was hooked up to the REFL11 I ADC channel. Thus, it wasn't really practical to even look at sensing angles in REFL11, since the I and Q inputs were subject to different signal paths/gains. (Also, doing LSC offsets would do wonky things to REFL11 depending on the state of the switches on the CM board screen.)

Thus, I've hooked up the CM board slow output to the previously existing, aptly named, CM_SLOW channel. The REFL11 demod board I output is split between IN1 of the CM board and the REFL11 I ADC channel.

So, there is no longer hidden behavior behind the REFL11 input filters, channels are what they claim to be, and the CM board output is just as easily accessible to the LSC filters as before.

  1862   Fri Aug 7 17:51:50 2009   Zach   Update   Cameras   CMOS vs. CCD

The images that I just posted were taken with the CMOS camera.  We switched from the CCD to the CMOS because the CCD was exhibiting much higher blooming effects.  Unlike the CCD, there is a slight background structure if you look carefully in the amplitude image, but I can correct for this consistent background by taking a uniformly exposed image by placing a convex lens in front of the CMOS.  I will then divide each frame taken of the laser wavefront by the background image. 

  14760   Mon Jul 15 14:09:07 2019   Milind   Update   Cameras   CNN LSTM for beam tracking

I've set up a network with a CNN encoder (front end) feeding into a single LSTM cell followed by the output layer (see Attachment #1). The network requires significantly more memory than the previous ones; it takes around 30 s for one epoch of training. Attached are the predicted yaw motion and the FFT of the same. The FFT looks rather curious. I still haven't done any tuning, and these are only preliminary results.
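For illustration, the single LSTM cell at the heart of such a network can be sketched in numpy as below; the CNN front end is abstracted as a precomputed feature vector, and all sizes, weights, and names here are assumptions rather than the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_hid = 64, 32  # encoder output size, LSTM hidden size (assumed)

# One weight matrix per gate, acting on [h, x] concatenated, plus biases.
W = {g: rng.standard_normal((n_hid, n_hid + n_feat)) * 0.1 for g in "ifoc"}
b = {g: np.zeros(n_hid) for g in "ifoc"}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    """Standard LSTM cell update for one time step."""
    hx = np.concatenate([h, x])
    i = sigmoid(W["i"] @ hx + b["i"])   # input gate
    f = sigmoid(W["f"] @ hx + b["f"])   # forget gate
    o = sigmoid(W["o"] @ hx + b["o"])   # output gate
    g = np.tanh(W["c"] @ hx + b["c"])   # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Run a short sequence of encoder features through the cell.
h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(10):                      # e.g. 10 stacked frames
    x = rng.standard_normal(n_feat)
    h, c = lstm_step(x, h, c)

yaw_pred = h @ rng.standard_normal(n_hid)  # stand-in for the output layer
```

The gating is what lets the cell carry information across frames without the vanishing gradients of a plain RNN.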


 Rana also suggested I try LSTMs today. I'll maybe code it up tomorrow. What I have in mind: a conv layer encoder, flatten, followed by an LSTM layer (why not plain RNNs? Well, LSTMs handle vanishing gradients, so why deal with the hassle).

Well, what about the previous conv nets?

What I did:

  1. Extensive tuning - of learning rate, batch size, dropout ratio, input size using a grid search
  2. Trained each network for 75 epochs and obtained weights, predicted motion and corresponding FFT, error etc.

What I observed:

  1. Loss curves look okay, validation loss isn't going up, so I don't think overfitting is the issue
  2. Training for over (even) 75 epochs seems to be pointless.

What I think is going wrong:

  1. Input size: relatively large at 350 x 350. Here, the input image size seems to be 128 x 128.
  2. Inadequate pre-processing.
    1. I have not applied any filters/blurs etc. to the frames.
    2. I have also not tried dimensionality reduction techniques such as PCA.

What I will try now:

  1. Collect new data: with smaller amplitudes and different frequencies
  2. Tune the LSTM network for the data I have
  3. Try new CNN architectures with more aggressive max pooling and fewer parameters
  4. Ensembling the models (see this and this). Right now, I have multiple models trained either with same architecture and different hyperparameters or with different architectures. As a first pass, I intend to average the predictions of all the models and see if that improves performance.
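The prediction averaging described in item 4 can be sketched as below (the per-model predictions are synthetic stand-ins; by convexity, the averaged prediction's MSE can never exceed the mean of the individual MSEs):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
truth = np.sin(2 * np.pi * 0.2 * t)            # e.g. a 0.2 Hz dither

# Three stand-in "models": the true signal plus independent noise.
preds = [truth + 0.1 * rng.standard_normal(t.size) for _ in range(3)]
ensemble = np.mean(preds, axis=0)              # first pass: plain average of predictions

def mse(p):
    return float(np.mean((p - truth) ** 2))
```

With independent errors, averaging three models cuts the noise power by roughly a factor of three.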
Attachment 1: cnn-lstm.png
Attachment 2: fft_yaw.pdf
Attachment 3: yaw_motion.pdf
  14779   Fri Jul 19 16:47:06 2019   Milind   Update   Cameras   CNNs for beam tracking || Analysis of results

I did a whole lot of hyperparameter tuning for convolutional networks (without 3d convolution). Of the results I obtained, I am attaching the best results below.

Define "best"?

The lower the power of the error signal (the difference between the true and predicted X and Y positions), essentially the MSE, on the test data, the better the performance of the model. Of the trained models I had, I chose the one with the lowest MSE.
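The selection rule above amounts to an argmin over test MSE, sketched here with synthetic placeholder predictions:

```python
import numpy as np

truth = np.linspace(-1, 1, 100)
candidates = {
    "model_a": truth + 0.05,                      # small constant bias
    "model_b": truth * 0.8,                       # gain error
    "model_c": truth + 0.01 * np.cos(truth),      # small structured error
}

def mse(pred):
    """Power of the error signal on the test data."""
    return float(np.mean((pred - truth) ** 2))

# Keep whichever trained model has the lowest test MSE.
best = min(candidates, key=lambda k: mse(candidates[k]))
```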

Attached results:

  1. Attachment 1: Training configuration
  2. Attachment 2: Predicted motion along the Y direction for the test data
  3. Attachment 3: Predicted motion along the Y direction for the training data
  4. Attachment 4: Learning curves
  5. Attachment 5: Error in test predictions
  6. Attachment 6: Video of image histogram plots
  7. Attachment 7: Plot of percentage of pixels with intensity over 240 with time

(Note: Attachment 6 and 7 present information regarding a fraction of the data. However, the behaviour remains the same for the rest of the data.)

Observations and analysis:

  1. Data:
    1. From attachments 2, 3, and 5: maximum deviation from the true labels occurs at the peaks of the applied dither/motion. Possible reasons:
      1. Stupid cropping? I checked (by watching the video of the cropped frames, i.e. visually) to ensure that the entire motion of the beam spot is captured. Therefore, this is not the case.
      2. Intensity variation: The intensity (brightness?) of the beam spot varies (decreases) significantly at the maximum displacement. This, I think, is creating a skewed dataset with very few frames with low intensity pixels. Therefore, I think it makes sense to even this out and get more data points (frames) with similar (lower) pixel intensities. I can think of two ways of doing this:
        1. Collect more data with lower amplitude of sinusoidal dither. I used an amplitude of 80 cts to dither the optic. Perhaps something like 40 is more feasible. This will ensure the dataset isn't too skewed.
        2. Increase exposure time. I used an exposure time of 500us to capture data. Perhaps a higher exposure time will ensure that the image of the beam spot doesn't fade out at the peak of motion.
    2. From attachment 5, saturated images?: We would like to gun for a maximum deviation of 10% (0.1 in this case) from the true values in the predicted labels. (Tbh, I'm not sure why this is a good baseline; I ought to give that some thought. I think the maximum deviation of the OpenCV thing I did at the start might also be a good baseline?) Clearly, we're not meeting that. One possible reason is that the video might be saturated (too many pixels at 255, bleeding into surrounding pixels), leading to loss of information. I set the exposure time to 500us precisely to avoid this. However, I also created videos of the image histograms of the frames to make sure the frames weren't saturated (is there some better standard way of doing it?). From attachments 6 and 7, I think it's evident that saturation is not an issue. Consequently, I think increasing the exposure time and collecting data is a good idea.
  2. The network:
    1. From attachment 4: Training post 25 epochs seems to produce overfitting, though it doesn't seem too terrible (from attachments 2 and 3). The network is still learning after 75 epochs, so I'll tinker with the learning rate, dropout and maybe put in annealing.
    2. I don't think there is a need to change the architecture yet. The model seems to generalize okay (validation error is close to training error), so I think it'll be a good idea to increase dropout for the fully connected layers and train for longer / with a higher learning rate.
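The annealing mentioned above can be sketched as a schedule that drops the learning rate when the validation loss plateaus (names, factors, and patience are illustrative assumptions, roughly the ReduceLROnPlateau idea, not the actual training code):

```python
def anneal(val_losses, lr0=1e-4, factor=0.5, patience=3):
    """Return the learning rate after each epoch, scaling it by `factor`
    whenever the validation loss has not improved for `patience` epochs."""
    lr, best, wait, out = lr0, float("inf"), 0, []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr *= factor
                wait = 0
        out.append(lr)
    return out

# A plateau after epoch 3 triggers one halving at epoch 6.
rates = anneal([1.0, 0.8, 0.6, 0.6, 0.6, 0.6])
```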



P.S. I will also try the 2D convolution followed by the 1D convolution thing now. 

P.P.S. Gabriele suggested that I try average pooling instead of max pooling as this is a regression task. I'll give that a shot.


Attachment 1: readme.txt
Experiment file: train_both.py
batch_size: 32
dropout_probability: 0.5
eta: 0.0001
filter_size: 1
filter_type: median
initializer: Xavier
memory_size: 10
num_epochs: 75
activation_function: relu
... 22 more lines ...
Attachment 2: yaw_motion_test.pdf
Attachment 3: yaw_motion_train.pdf
Attachment 4: Learning_curves_replotted.pdf
Attachment 5: yaw_error_test.pdf
Attachment 6: intensity_histogram.mp4
Attachment 7: saturation_percentage.pdf
  14786   Sat Jul 20 12:16:39 2019   gautam   Update   Cameras   CNNs for beam tracking || Analysis of results
  1. Make the MSE a subplot on the same axes as the time series for easier interpretation.
  2. Describe the training dataset - what is the pk-to-pk amplitude of the beam spot motion you are using for training in physical units? What was the frequency of the dither applied? Is this using a zoomed-in view of the spot or a zoomed out one with the OSEMs in it? If the excursion is large, and you are moving the spot by dithering MC2, the WFS servos may not have time to adjust the cavity alignment to the nominal maximum value.
  3. What is the minimum detectable motion given the CCD resolution?
  4. Please upload a cartoon of the network architecture for easier visualization. What is the algorithm we are using? Is the approach the same as using the bright point scatterers to signal the beam spot motion that Gabriele demonstrated successfully?
  5. What is the significance of Attachment #6? I think the x-axis of that plot should also be log-scaled.
  6. Is the performance of the network still good if you feed it a time-shuffled test dataset? i.e. you have (pictures,Xcoord,Ycoord) tuples, which don't necessarily have to be given to the network in a time-ordered sequence in order to predict the beam spot position (unless the network is somehow using the past beam position to predict the new beam position).
  7. Is the time-sync problem Koji raised limiting this approach?
  14787   Sat Jul 20 14:43:45 2019   Milind   Update   Cameras   CNNs for beam tracking || Analysis of results

<Adding details>

See Attachment #2.


Make the MSE a subplot on the same axes as the time series for easier interpretation.

Training dataset:

  1. Peak to peak amplitude in physical units: ?
  2. Dither frequency: 0.2 Hz
  3. Video data: zoomed in video of the beam spot obtained from GigE camera at 500us exposure time. Each frame has a resolution of 640 x 480 which I have cropped to 350 x 350. Attachment #1 is one such frame.
  4. Yes, therefore I am going to obtain video at lower amplitudes. I think that should help me avoid the problem of not-nominal-maximum value?
  5. Other details of the training dataset:
    1. Dataset created from four videos of duration ~ 30, 60, 60, 60 s at 25 FPS.
    2. 4032 training data points
      1. Input (one example/ data point): 10 successive frames stacked to form a 3D volume of shape 350 x 350 x 10
      2. Output (2 dimensional vector): QPD readings (C1:IOO-MC_TRANS_PIT_ERR, C1:IOO-MC_TRANS_YAW_ERR)
    3. Pre-processing: none
    4. Shuffling: Dataset was shuffled before every epoch
    5. No thresholding: Binary images are gonna be of little use if the expectation is that the network will learn to interpret intensity variations of pixels.
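The 10-frame input volumes described above can be assembled as in this sketch (small random placeholder frames are used here; the real frames are 350 x 350, and "depth" corresponds to the memory_size of 10 in the training config):

```python
import numpy as np

# Placeholder frames: in reality, 350 x 350 cropped GigE frames.
frames = np.random.rand(30, 64, 64).astype("float32")
depth = 10

def make_volumes(frames, depth):
    """Stack each run of `depth` successive frames along a new last axis,
    giving one (H, W, depth) input volume per training example."""
    n = frames.shape[0] - depth + 1
    return np.stack([np.moveaxis(frames[i:i + depth], 0, -1) for i in range(n)])

X = make_volumes(frames, depth)
```

Each example overlaps the next by depth - 1 frames, which is why ~4000 examples can come out of only a few minutes of 25 FPS video.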

Do I need to provide any more details here?


Describe the training dataset - what is the pk-to-pk amplitude of the beam spot motion you are using for training in physical units? What was the frequency of the dither applied? Is this using a zoomed-in view of the spot or a zoomed out one with the OSEMs in it? If the excursion is large, and you are moving the spot by dithering MC2, the WFS servos may not have time to adjust the cavity alignment to the nominal maximum value.



What is the minimum detectable motion given the CCD resolution?

see attachment #4.

  1. Please upload a cartoon of the network architecture for easier visualization. What is the algorithm we are using? Is the approach the same as using the bright point scatterers to signal the beam spot motion that Gabriele demonstrated successfully


I wrote what I think is a handy script to observe whether the frames are saturated. I thought this might be handy if/when I collect data with higher exposure times. I had assumed there was no saturation in the images because I'd set the exposure value to something low; I thought it'd be useful to just verify that. Attachment #3 has a log scale on the x axis.
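A minimal version of such a saturation check might look like this (the threshold and frame sizes are illustrative; the actual script also produced histogram videos):

```python
import numpy as np

SATURATION_LEVEL = 240  # 8-bit intensity above which a pixel counts as saturated

def saturation_percentage(frame):
    """Percentage of pixels at or above the saturation threshold."""
    return 100.0 * float(np.mean(frame >= SATURATION_LEVEL))

# A synthetic frame: mostly dark with a small clipped bright spot.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[200:220, 300:320] = 255            # 400 saturated pixels
pct = saturation_percentage(frame)
```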


What is the significance of Attachment #6? I think the x-axis of that plot should also be log-scaled.


  1. Is the performance of the network still good if you feed it a time-shuffled test dataset? i.e. you have (pictures,Xcoord,Ycoord) tuples, which don't necessarily have to be given to the network in a time-ordered sequence in order to predict the beam spot position (unless the network is somehow using the past beam position to predict the new beam position).
  2. Is the time-sync problem Koji raised limiting this approach?


Attachment 1: frame0.pdf
Attachment 2: subplot_yaw_test.pdf
Attachment 3: intensity_histogram.mp4
Attachment 4: network2.pdf
  14807   Wed Jul 24 20:05:47 2019   Milind   Update   Cameras   CNNs for beam tracking || Tales of desperation

At the lab meeting today, Rana suggested that I use the Pylon app to collect more data if that's what I need. Following this, Jon helped me out by updating the pylon version and installing additional software to record video. Now I am collecting data at

  1. higher exposure time: 600 us magically gives me a saturation percentage of around 1%, see Attachment #1 (i.e. around 1% of the pixels in the region containing the beam spot are over 240 in value). This is a consequence of my discussion with Gabriele, where we concluded that I was losing information due to the low exposure time I was using.
  2. For much longer: roughly 10 minutes
    1. at an amplitude of 40 cts for 0.2 Hz
    2. at an amplitude of 20 cts for 0.2 Hz
    3. at an amplitude of 10 cts for 0.2 Hz
    4. at an amplitude of 40 cts for 0.4 Hz
    5. at an amplitude of 20 cts for 0.2 Hz
    6. Random motion

Consequently, I have been dithering the MC2 optic since around 9:00 PM.

Attachment 1: saturation_percentage.pdf
  539   Wed Jun 18 16:37:54 2008   steve, rana   Update   SAFETY   CO2 test in the east arm
The CO2 laser and table are in the east arm for characterization of the mechanics. We
will not be operating it until we have an SOP (which is being written). No worries.
Attachment 1: co2.png
  4790   Mon Jun 6 18:29:01 2011   Jamie, Joe   Update   CDS   COMPLETE FRONT-END REBUILD (WITH PROBLEMS (fixed))

Today Joe and I undertook a FULL rebuild of all front end systems with the head of the 2.1 branch of the RCG.  Here is the full report of what we did:

  1. checked out advLigoRTS/branches/branch-2.1, r2457 into core/branches/branch-2.1
  2. linked core/release to branches/branch-2.1
  3. linked in models to core/release/src/epics/simLink using Joe's new script (userapps/release/cds/c1/scripts/link_userapps)
  4. remove unused/non-up-to-date models:
     • c1dafi.mdl
  6. modified core/release/Makefile so that it can find models:
  7. the applied patch:

     --- Makefile	(revision 2451)
     +++ Makefile	(working copy)
     @@ -346,7 +346,7 @@
      #MDL_MODELS = x1cdst1 x1isiham x1isiitmx x1iss x1lsc x1omc1 x1psl x1susetmx x1susetmy x1susitmx x1susitmy x1susquad1 x1susquad2 x1susquad3 x1susquad4 x1x12 x1x13 x1x14 x1x15 x1x16 x1x20 x1x21 x1x22 x1x23

      #MDL_MODELS = $(wildcard src/epics/simLink/l1*.mdl)
     -MDL_MODELS = $(shell cd src/epics/simLink; ls m1*.mdl | sed 's/.mdl//')
     +MDL_MODELS = $(shell cd src/epics/simLink; ls c1*.mdl | sed 's/.mdl//')

      World: $(MDL_MODELS)
  8. removed channel files for models that we know will be renumbered
    • For this rebuild, we are also building modified sus models, that are now using libraries, so the channel numbering is changing.
  9. make World
    • this makes all the models
  10. make installWorld
    • this installs all the models
  11. Run activateDQ.py script to activate all the relevant channels
    • this script was modified to handle the new "_DQ" channels
  12. make/install new awgtpman:
  13. cd src/gds
    cp awgtpman /opt/rtcds/caltech/c1/target/gds/bin
  14. turn off all watchdogs
  15. test restart one front end: c1iscex

    The c1iscex models (c1x01 and c1scx) did not come back up.  c1x01 was running long on every cycle, until the model crashed and brought down the computer.  After many hours, and with Alex's help, we managed to track down the issue to a patch from Rolf at r2361.  The code included in that patch should have been wrapped in an "#ifndef RFM_DIRECT_READ".  This was fixed and committed to branches/branch-2.1 at r2460 and to trunk at r2461.

  17. update to core/branches/branch-2.1 to r2460
  18. make World && make installWorld with the new fixed code
  19. restarted all computers
  20. restart frame builder
  21. burt restored to 8am this morning
  22. turned on all watchdogs

Everything is now green, and things seem to be working.  Mode cleaner is locked.  X arm locked.


  12061   Mon Apr 4 15:04:14 2016   gautam   Update   endtable upgrade   COMPONENT REMOVAL

I'm planning to start removing components from the X endtable tomorrow morning at ~10AM. If anyone thinks I should hold off and do some further checks/planning, let me know before then so that I can do the needful.

  3125   Sat Jun 26 21:13:19 2010   rana   Summary   Computer Scripts / Programs   COMSOL 4.0 Installation

I've installed COMSOL 4.0 for 32/64 bit Linux in /cvs/cds/caltech/apps/linux64/COMSOL40/

It seems to work, sort of.


  1. It did NOT work according to the instructions. The CentOS automount had mounted /dev/scd0 on /media/COMSOL40. In this configuration, I was getting a permission denied error when trying to run the default setup script. I did a 'sudo umount /dev/scd0' to get rid of this bad mount and then remounted using 'sudo mount /dev/dvd /mnt'. After doing this, I ran the setup script '/mnt/setup' and got the GUI which started installing as usual.
  2. I also pointed it at the linux64/matlab/ installation.
  3. It seems to not work right on Rosalba because of my previous java episode. The x-forwarding from megatron also fails. It does work on allegra, however.
  3536   Tue Sep 7 20:44:54 2010   Yoichi   HowTo   COMSOL Tips   COMSOL example for calculating mechanical transfer functions

I added COMSOL example files to the 40m svn to demonstrate how to make transfer function measurements in COMSOL.


The directory also contains an (incomplete) explanation of the method in a PDF file.

  6994   Fri Jul 20 11:59:27 2012   rana   Update   Computer Scripts / Programs   CONLOG not running

We tried to use the new conlog today and discovered that:

1) No one at the 40m uses conlog because they don't know that it ever ran and don't know how to use regexp.

2) It has not been running since the last time Megatron was rebooted (probably a power outage).

3) We could not get it to run using the instructions that Syracuse left in our wiki.

Emails are flying.

  6148   Fri Dec 23 15:55:12 2011   rana   Summary   Computer Scripts / Programs   CONLOG: not working since Oct 1

Often people say "I don't use conlog because it's real slow". It's a little like not driving because your car has no gas.

I looked into what's going on with conlog. No one has fixed its channel list in ~1 year so it didn't make much sense. Also since Oct 1 of this year, it expired the leap seconds epoch and has been waiting for someone to look at the log file and update the list of leap seconds.

Some issues:

  • Don't use phrases like OUTPUT, OUTMON, OUT16, or INMON as a usual part of a channel name. These are filter-module words which are used to exclude channels from conlog. Please fix ALL of the LOCKIN screens to get rid of the OUTPUT filter banks.
  • If you use an EPICS channel in a servo so that its getting changed 16 times a second, make sure to add it to the conlog exclude list.

There are a bunch of bad channels which are screwing up various tools (DV, DTT, etc.):

Examples: C1:LSC-Subsystem_NPRO_SW1, C1:-DOF2PD_MTRX_0_0_SW1, C1:BAD-BAR_CRAZY_2_RSET, C1:C1L-DOF2PD_MTRX_3_14_SW2, C1:DUB-SEIS_GUR2_Z_LIMIT, etc.

  • There are a bunch of old, unused directories in c1/medm/. EVERYONE take a look in there and delete the OLD dead ones so that we don't keep recording those channels.

 To fix up some of these issues, I have deleted several MEDM directories which I thought were old (there are several extras left from Aidan's Green time). I also have put a bunch of exclude variables into the conlog 'scan_adls' script to prevent it from adding some of the new worthless channels. Finally, I have started this command

../bin/strip_out_channels '.*STAT.*','.*_ALIVE.*','C1:PEM.*','.*_Name.*','C1:UCT.*','C1:MCP.*','C1:SP.*','C1:DU.*','C1:RF.*','C1:NIO.*','C1:TST.*','C1:SUP.*','C1:X.*','C1:FEC.*','.*_LFSERVO.*','.*FSS_SLOWDC.*','C1:LSC-LA_MTRX_21','C1:LSC-PD.*OFFSET','C1:LSC-ETM.*OFFSET' conlog*.log

which should strip lots of the excess conlog data out of the conlog directory. The only downside is that it's setting all of the timestamps of the .log files to today instead of the historical times, but I don't think we'll care about this too much. Hopefully it will speed things up to have less than 450 GB of conlog files...

update: 12 hours later, it's still running and has removed ~100 GB so far. It will probably take the rest of the weekend to finish.

  16878   Fri May 27 12:15:30 2022   JC   Update   Electronics   CRT TV / Monitor 6

[Yehonathan, Paco, Yuta, JC]

As we were cleaning up this morning, we heard a high-pitched sound that turned into a buzz. After searching for where the sound came from, we noticed the CRT TV went out. We swapped it out with a monitor and used a BNC-to-VGA adapter to display the cameras.

  16882   Tue May 31 14:44:02 2022   JC   Update   Electronics   CRT TV / Monitor 6

[Paco, JC]

Paco and I fixed the ethernet cable which was hanging. We stopped the models c1x07 and c1su2, realigned the cable to follow the shelf from the top, and then turned the computers back on.


Note: There was not a long enough ethernet cable, so we used a female-to-female adapter and joined 2 ethernet cables.


[Yehonathan, Paco, Yuta, JC]

As we were cleaning up this morning, we heard a high-pitched sound that turned into a buzz. After searching for where the sound came from, we noticed the CRT TV went out. We swapped it out with a monitor and used a BNC-to-VGA adapter to display the cameras.


  11663   Sun Oct 4 14:23:42 2015   jamie   Configuration   CDS   CSD network test complete

I've finished, for now, the CDS network tests that I was conducting.  Everything should be back to normal.

What I did:

I wanted to see if I could make the EPICS glitches we've been seeing go away if I unplugged everything from the CDS martian switch in 1X6 except for:

  • fb
  • fb1
  • chiara
  • all the front end machines

What I unplugged were things like megatron, nodus, the slow computers, etc.  The control room workstations were still connected, so that I could monitor.

I then used StripTool to plot the output of a front end oscillator that I had set up to generate a 0.1 Hz sine wave (see elog 11662).  The slow sine wave makes it easy to see the glitches, which show up as flatlines in the trace.

More tests are needed, but there was evidence that unplugging all the extra stuff from the switch did make the EPICS glitches go away.  During the duration of the test I did not see any EPICS glitches.  Once I plugged everything back in, I started to see them again.  However, I'm currently not seeing many glitches (with everything plugged back in) so I'm not sure what that means.  I think more tests are needed.  If unplugging everything did help, we still need to figure out which machine is the culprit.
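The flatline signature of these glitches lends itself to a simple programmatic check; a sketch (sampling rate, minimum run length, and the simulated dropout are all illustrative assumptions, not the actual test setup):

```python
import numpy as np

def flatline_runs(samples, min_run=5):
    """Return (start_index, length) for every run of >= min_run
    consecutive identical samples, i.e. candidate glitches."""
    runs, start = [], 0
    for i in range(1, len(samples) + 1):
        if i == len(samples) or samples[i] != samples[i - 1]:
            if i - start >= min_run:
                runs.append((start, i - start))
            start = i
    return runs

# The 0.1 Hz test sine sampled at an assumed 16 Hz, with a simulated
# 1 s EPICS dropout where the readback freezes.
t = np.arange(0, 30, 1 / 16)
x = np.sin(2 * np.pi * 0.1 * t)
x[100:116] = x[100]                      # frozen samples: the "glitch"
glitches = flatline_runs(list(x), min_run=8)
```

A slow sine is a good probe precisely because consecutive healthy samples always differ, so any exact repeat of this length is a dropout.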

  11665   Sun Oct 4 14:32:49 2015   jamie   Configuration   CDS   CSD network test complete

Here's an example of the glitches we've been seeing, as seen in the StripTool trace of the front end oscillator:

You can clearly see the glitch at around T = -18. Obviously during non-glitch times the sine wave is nice and cleanish (there are still very small discretisation steps from the EPICS sample times).

  11661   Sun Oct 4 12:07:11 2015   jamie   Configuration   CDS   CSD network tests in progress

I'm about to start conducting some tests on the CDS network.  Things will probably be offline for a bit.  Will post when things are back to normal.

  5755   Fri Oct 28 12:47:38 2011   jamie   Update   CDS   CSS/BOY installed on pianosa

I've installed Control System Studio (CSS) on pianosa, from the version 3.0.2 Red Hat binary zip.  It should be available as "css" from the command line.

CSS is a new MEDM replacement. Its output is .opi files, instead of .adl files. It's supposed to include some sort of converter, but I didn't play with it enough to figure it out.

Please play around with it and let me know if there are any issues.


  5756   Fri Oct 28 14:56:02 2011   Jenne   Update   CDS   CSS/BOY installed on pianosa


I've installed Control System Studio (CSS) on pianosa, from the version 3.0.2 Red Hat binary zip.  It should be available as "css" from the command line.

CSS is a new MEDM replacement. Its output is .opi files, instead of .adl files. It's supposed to include some sort of converter, but I didn't play with it enough to figure it out.

Please play around with it and let me know if there are any issues.


 So far I've only given it about half an hour of my time, but it is *really* frustrating so far.  There don't seem to be any instructions on how to tell it what our channels are / how to link CSS to our EPICS databases.  Or, the instructions that are there say "do it!", but they neglect to mention how...  Also, there exists (maybe?) an ADL->BOY converter, but I can't find any buttons to click, or how to import an .adl, or what I'm supposed to do.  Also, it's not clear how to get to the editor to start making screens from scratch. 

It looks like it has lots of nifty indicators and buttons, but I would have felt better if I had been able to do anything.

Another thing that is going to be a problem:  the Shell Command button that we use all over the place in our MEDM screens is not supported by this program.  It's listed in the "limitations" of the ADL2BOY converter.  This may kill the CSS program immediately.  Jamie: did Rolf/anyone mention a game plan for this?  It's super nice to be able to run scripts from the screens.

Moral of the story:  I'm annoyed, and going to continue making my OAF screens in MEDM for now.

  5795   Thu Nov 3 15:14:22 2011 KojiUpdateCDSCSS/BOY installed on pianosa

How to run/use CSS/BOY


0) Everything runs on pianosa for now.

1) type css to launch CSS IDE.

2) You may want to create your own project folder as generally everything happens below this folder.

=== How to make a new project ===
- Right-click on tree view of Navigator pane
- You are asked to select a wizard. Select "General -> Project". Click "Next".
- Type in an appropriate project name (like KOJI). Click "Finish"
- The actual location of the project is /home/controls/CSS-Workspaces/Default/KOJI/ in the above example


=== How to use Data Browser ===

1) Select the menu "CSS -> Trends -> Data Browser". A new data browser window appears.

2) Right-click on the data browser window. Select the menu "Add PV". Type in the channel name (e.g. "C1:LSC-ASDC_OUT16")

3) Once the plot configuration is completed, it can be saved as a template. Select the menu "File -> Save" and put it in your project folder.

4) Everything else is relatively straightforward. You can add multiple channels. Log scaling is also available.
I still haven't found how to split the vertical axis to make stacked charts, but I wouldn't be surprised if that is not available.


=== How to make a BOY screen ===

0) Simply put, BOY is the alternative to MEDM. BOY screen files are named "*.opi", similar to "*.adl" for MEDM.

1) To create a new opi file, right-click on the navigator tree and select the menu "New -> Other".

2) You are asked to select a wizard. Select "BOY -> OPI file" and click "Next".

3) Type in the name of the opi file. Also select the location of the file in the project tree. Click "Finish".

4) Now you are in OPI EDITOR. Place your widgets as you like.

5) To test the OPI screen, push the green round button at the top right. The short cut key is  "CTRL-G".


=== How to edit an existing OPI file ===

1) Right-click an OPI file in the navigator tree. Select the menu "Open With -> OPI Editor". That's it.


=== How to convert an ADL file to OPI ===

1) You need to copy your ADL file into your project folder. In this example, it is /home/controls/CSS-Workspaces/Default/KOJI/

2) Once the ADL file is in the project folder, it should appear in the navigator tree. If not, right-click the navigator pane and select "Refresh".

3) Make sure the "ADL Parser" button at the top left is selected. This button just changes the window layout and has no essential function, but the ADL Tree View pane it exposes is interesting to look at.

4) If you select the ADL file by clicking it, the tree structure of the ADL file is automatically interpreted and appears in the ADL Tree View pane.
But it is just a display and does nothing.

5) Right-click the ADL file in the navigator pane. Now you can see the new menu "BOY". Select "BOY -> Convert ADL File to OPI".

6) Now you get the opi version of that file.
The conversion is not perfect, as one can imagine. It works fine for simple screens
(e.g. matrix screens), but the filter module screens come out weird, and the new LSC screen did not work properly (maybe too heavy?).
Attachments: ADL2OPI1.png, ADL2OPI2.png


=== Scripting ===

CSS has javascript/python scripting capability.
I suspect that we can make a wrapper to run external commands from a Python script, although it is not obvious yet.
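A minimal sketch of such a wrapper in plain Python (this is an assumption about how it could be done, not the CSS/BOY script API — BOY would call something like this from its script hooks):

```python
import subprocess

# Hypothetical wrapper: launch an external command in the background,
# so the screen is not blocked while the script/command runs.
def run_external(command_args):
    """Spawn a command (list of argv strings) and return the process handle."""
    return subprocess.Popen(command_args)

# Hypothetical usage, e.g. from a button widget's script:
# proc = run_external(["echo", "hello from a BOY screen"])
```

Whether CSS's embedded interpreter permits spawning subprocesses is exactly the open question above.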

  9331   Sat Nov 2 22:49:44 2013 CharlesUpdateISSCTN ISS Noise Suppression Requirement - Updated 10/27

 Previously in elog 8959, I gave a very simple method for determining the noise suppression behavior of the ISS. Recently, I recalculated this requirement in a more correct fashion and again redesigned the ISS to be used in the CTN experiment.

  • Determining the Requirement

Just as before, the data from PSL elog 1270 is necessary to infer a noise suppression requirement. The data presented there by Evan consists of two noise spectra: 1) the unstabilized RIN presently observed in the CTN experiment readout and 2) the theoretical brownian noise produced by thermal processes in the mirror coating+substrate. The statement "TF_mag = (Unstabilized RIN) / (Calculated Brownian Noise Limit)", where TF_mag refers to the required open-loop gain of the ISS, is actually a first-order approximation of the 'required' noise suppression. In fact, if we want the laser noise to be suppressed below the calculated brownian noise level, it is more correct to say

        Closed-loop ISS gain = (Calculated Brownian Noise Limit) / (Unstabilized RIN)

since this essentially gives a noise suppression spectrum, i.e. a closed-loop gain in the sense of linear control theory. Below is a very simple block diagram showing how the ISS fits into the CTN experiment. The F(f) block represents my full servo board.


Some of the relevant quantities involved:



So looking at the block diagram, our full closed-loop transfer function is given by,


So then to determine the required F(f), i.e. the required transfer function for my servo, we consider the expression 


The plant transfer function is simply Plant = (C(f) * a * P * A) ~ 0.014 V/V, where I have ignored the cavity pole around 97 kHz as our open-loop transfer function ends up crossing unity gain around 10 kHz. In the above, I have included what I call a 'safety factor' of 10. Essentially, I want to design my servo such that it suppresses noise well beyond what is actually required so that we can be sure noise contributions to experiment readouts are not significantly influenced by the laser intensity noise.
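The arithmetic above can be sketched as follows (the flat spectra here are placeholders — the real curves come from PSL elog 1270; only the plant gain of 0.014 V/V and the safety factor of 10 are taken from the text):

```python
import numpy as np

# Placeholder flat spectra standing in for the measured/calculated curves.
f = np.logspace(0, 4, 100)                # frequency [Hz]
rin_free = 1e-6 * np.ones_like(f)         # unstabilized RIN (placeholder)
rin_brownian = 1e-8 * np.ones_like(f)     # brownian-noise RIN limit (placeholder)

plant = 0.014   # C(f)*a*P*A [V/V]; cavity pole ignored (UGF ~10 kHz)
safety = 10     # 'safety factor'

# The closed-loop suppression 1/(1+G) must sit below (brownian / free RIN).
# For G = F*plant >> 1, 1/(1+G) ~ 1/(F*plant), so the required servo gain is:
F_required = safety * rin_free / (rin_brownian * plant)
```

This is the sense in which the closed-loop statement reduces to the earlier open-loop "TF_mag" approximation when the loop gain is large.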

  • Proposed Servo Design

Using the data Evan reported for the brownian noise and free-running RIN, I came up with an F(f) to meet the requirement, as shown below.


 Where the blue curve includes the safety factor mentioned before. This plot just demonstrates that using my modular ISS design, I can meet the given noise suppression requirements.

To be complete, I'll say a little more about the final design.  As usual, the servo consists of three stages. The first is the usual LP filter that is always 'on' when the ISS loop is closed. The boosts I have chosen to use consist of an integrator with a single zero and a filter that looks somewhat like a de-whitening filter. The simulated open-loop transfer functions are shown below.










  8959   Thu Aug 1 22:58:45 2013 CharlesUpdateISSCTN Servo - Explicit Requirement and Proposed Servo

 In PSL elog 1270, Evan elucidated the explicit requirements for the CTN ISS board. Essentially, the transfer function of the ISS should be something like:

     TF_mag = (Unstabilized RIN) / (Calculated RIN Requirement)

I took Evan's data and did exactly this. I then designed a servo (using the general design I proposed here) to meet this requirement with a safety factor of ~10. By safety factor, I mean that if the ISS operates exactly according to theory, it should suppress the noise by a factor of 10 more than what is necessary/set out by the requirement. Below is a plot of the loop gain obtained directly from the requirement (the above expression for TF_mag) and the transfer function of the servo I am proposing.


I don't have the actual schematics attached as I was working with a LISO file and have yet to update the corresponding Altium schematic. The LISO file is attached and I will add the schematics later, although one can reference the second link to find a simple drawing.

Attachment 2: CTNServo_v3.fil
# Stage 1
r R31 1.58k in n_inU3
op U3 ad829 p_inU3 n_inU3 outU3
r R35 1k p_inU3 gnd
c C33 1u p_inU3 gnd
c C32 10n n_inU3 outU3
r R34 158k n_inU3 outU3

# Stage 2
#r R41 15.8 outU3 n_inU4U5
... 24 more lines ...
  8964   Mon Aug 5 11:53:45 2013 EvanUpdateISSCTN Servo - Explicit Requirement and Proposed Servo

I goofed on the transfer function requirement by not giving you the plant transfer function, which looks to be about 0.014 V/V, independent of frequency (PSL:1278). This needs to be compensated for in the electronic transfer function.

  8759   Wed Jun 26 21:52:55 2013 CharlesUpdateISSCTN Servo Prototype Characterization

Following the circuit design in elog 8748, I constructed a prototype for the servo portion of the ISS (not including the differential amp) to be used in the CTN experiment. The device was built on a breadboard and its transfer function was measured with the Swept Sine measurement group of an SR785. For various excitation amplitudes, the transfer function (TF) was not consistent.


Recall the ideal transfer function for this particular servo and consider the following comparisons.

  • The unity gain frequency is consistent, and the measured TFs all exhibit some amount of 1/f behavior up to this point, but there is no zero around f~10^3 and individual low-frequency poles/zeros are not visible.
  • For each of the inputs, there is a feature that is not exhibited in the ideal TF. We see a large drop in gain a little past 10^3 Hz for a 100mV input, just past 10^2 Hz for a 10 mV input and around 10^1 Hz for a 1 mV input.
  • The ideal TF also goes as 1/f for f < 10 Hz, so I believe the low-frequency behavior of each of the above transfer functions is simply a physical limitation of the breadboard or the SR785, although I don't think it is caused by the circuit elements themselves. I used OP27 op-amps in the prototype as opposed to AD829 op-amps, which are much faster and end up amplifying noise. To ensure that the OP27s were not the source of the gain limitation, I also tried using AD829 op-amps. The resulting transfer functions are shown below.
  • Both the frequency at which we see the anomalous feature and the maximum gain increase nearly in proportion to the input excitation amplitude.

This gain limitation is problematic for characterizing prototypes as my particular servo has very large gain at low frequencies. 


At the risk of looking too deeply into the above data,

  • It appears there is a slight change in slope around f ~ 10^3 Hz which would be consistent with the ideal TF.
  • For f > 10^3 Hz, one can easily see the TF goes as 1/f. The slope for f < 10^3 Hz is not as clear, although it obviously does not show 1/f^2 behavior as we would expect from the ideal TF.
  • We see the same gain limitation around G ~ 55 as we did with OP27 op-amps.

Unfortunately, the noise was too large for lower excitation amplitudes to be used to any effect. I'll try this again tomorrow, just as a sanity check, but otherwise I will proceed with learning Altium and drawing up schematics for this servo.
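One hedged back-of-envelope check of where such a ceiling could come from — assuming (my assumption, not a measurement) that the drop occurs where the op-amp output clips:

```python
# If the output clips at ~V_sat, a servo with |G(f)| ~ G0/f saturates for
# f < f_clip = G0 * V_in / V_sat, which scales linearly with the input
# amplitude -- the same trend as the ~10, ~10^2, ~10^3 Hz features seen
# for 1, 10 and 100 mV inputs. G0 and V_sat are placeholder values.
G0 = 1.0e5      # assumed gain-frequency product [Hz] (placeholder)
V_sat = 10.0    # assumed output clipping level [V] (placeholder)

for v_in in (1e-3, 10e-3, 100e-3):
    f_clip = G0 * v_in / V_sat
    print("V_in = %5.1f mV -> f_clip ~ %6.0f Hz" % (v_in * 1e3, f_clip))
```

This is only one candidate explanation for the amplitude-dependent feature; the breadboard parasitics or the SR785 source could equally be involved.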


  8771   Thu Jun 27 18:24:25 2013 CharlesUpdateISSCTN Servo Prototype Characterization - Done Correctly

As I showed in [elog 8759], measuring the transfer function of my prototype servo was difficult due to physical limitations of either some portion of the construction or even the SR785 itself. To get around this, I tried using lower input excitation amplitudes, but ran into problems with noise.

Finding a TF consistent with theoretical predictions made by LISO was easy enough when I simply measured the TF of each of the two filter stages individually and then multiplied them to obtain the TF for the full servo. I still noticed some amount of gain limitation for 100 mV and 10 mV inputs, although I only had to lower the input to 5 mV to avoid this, and thus did not see significant amounts of noise as I did with a 1 mV input. The individual transfer functions for each stage are shown below. Note that the SR785 has an upper cutoff frequency of 100 kHz, so I could not analyze the TF beyond this frequency. Additionally, the limited gain-bandwidth product of the OP27 op-amps (used in the prototype) causes the magnitude and phase to drop off for f > ~10^5 Hz. The actual servo will use AD829 op-amps, which have a much larger GBWP.


The measured TFs above are very close to ideal and agree quite well with theoretical predictions. Based on the [circuit schematics],

  • Stage 1 should have Gain ~ 10^3 until the pole at f ~ 10 Hz  
  • Stage 2 should exhibit a DC pole, a zero at f ~ 10^3 Hz and then unity gain for f > 10^3 Hz

Indeed, this is exactly what we can see from the above two TFs. We can also multiply the magnitudes and add the phases (full_phase = phase1 + phase2 - 180) to find the TF for the full servo and compare that to the ideal TF produced by LISO,


And we find exceptionally consistent transfer functions, which speaks to the functionality of my prototype 
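The stage-combination step can be sketched as follows (the TFs here are placeholder analytic shapes matching the descriptions above — stage 1: gain ~10^3 with a pole near 10 Hz; stage 2: an integrator with a zero near 1 kHz and unity gain above it — not the measured data):

```python
import numpy as np

f = np.logspace(0, 5, 256)                     # frequency [Hz]
tf1 = 1e3 / (1 + 1j * f / 10)                  # stage 1 (placeholder shape)
tf2 = (1 + 1j * f / 1e3) / (1j * f / 1e3)      # stage 2 (placeholder shape)

# Magnitudes multiply; phases add, with -180 deg for the inverting output.
full_mag = np.abs(tf1) * np.abs(tf2)
full_phase = np.degrees(np.angle(tf1) + np.angle(tf2)) - 180.0
```

With the measured data, the same two lines combine the per-stage SR785 sweeps into the full-servo TF compared against LISO.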

As such, I'll proceed with designing this servo in Altium (most of which will be learning how to use the software).

Note that all TFs were taken using the netgpibdata python module. Measurement parameters were entered remotely using the TFSR785.py function (via control room computers) and following the examples on the 40m Wiki.

Attachment 3: TF-CTNServo_v2_Prototype-Individual_Stages.fig
Attachment 4: TF-CTNServo_v2_Prototype-Calc_vs_Meas.fig
  14594   Fri May 3 15:40:33 2019 gautamUpdateGeneralCVI 2" beamsplitters delivered

Four new 2" CVI 50/50 beamsplitters (2 for p-pol and 2 for s-pol) were delivered. They have been stored in the optics cabinet, along with the "Test Data" sheets from CVI.

  9006   Tue Aug 13 13:30:41 2013 Alex ColeConfigurationElectronicsCable Routing

 I routed cables (RG405 SMA-SMA) from several demodulator boards in rack 1Y2 to the RF Switch in rack 1Y1 using the overhead track. Our switch chassis contains two 8x1 switches. The COM of the "right" switch goes to channel 7 of the "left" switch to effectively form a 16x1 switch. The following is a table of correspondences between PD and RF Switch input.


PD        Left/Right switch    Channel number
POX11     L                    0
AS55      R                    1
REFL55    R                    7
POP22     R                    6
REFL165   R                    5
REFL33    L                    7


The POP110 demod board has not yet had a cable routed from it to the switch because I ran out of RG405.

We should also consider how important it is to include MCREFL in our setup. Doing so would require fabrication of a ~70 ft RG405 cable. 

Attachment 1: photo_(6).JPG
  3520   Fri Sep 3 11:03:41 2010 AlbertoFrogsElectronicsCable cutting tools

I found this very interesting German maker of cool cable cutting tools. It's called Jokari.

We should keep it as a reference for the future if we want to buy something like that, i.e. RF coax cable cutting knives.


  3522   Fri Sep 3 13:04:30 2010 KojiFrogsElectronicsCable cutting tools

Yeah, this looks nice.

And I would also like to have something like the one I have attached. This is a "HOZAN P-90", but we should investigate American ones
so that we can cut wires classified by AWG.


I found this very interesting German maker of cool cable cutting tools. It's called Jokari.

We should keep it as a reference for the future if we want to buy something like that, ie RF coax cable cutting knives.



Attachment 1: P90.jpg
  4346   Wed Feb 23 16:56:17 2011 Larisa ThorneUpdateVIDEOCable laying...continued

Having finished labeling the existing cables to match their new names, we (Steve, Kiwamu and Larisa) moved on to start laying new cables and labeling them according to the list.


Newly laid cables include: ETMXT (235'), ETMX (235'), POP (110') and MC2 (105').  All were checked by connecting a camera to a monitor and checking the clarity of the resulting image. Note that these cables were only laid, so they are not plugged in.


The MC2 cable needs to be ~10' longer; it won't reach where it's supposed to go. It is currently still in its place.

The three other cables were all a lot longer than necessary.

  4462   Wed Mar 30 17:01:08 2011 Larisa ThorneUpdateVIDEOCable laying...continued

[Steve, Suresh, Kiwamu, Larisa]


Only the PRM/BS cable was laid today.

In one of the previous updates on cable laying, it was noted that the MC2 cable needed an additional 10' and the MC2T needed an additional 15' to reach their destinations.  We cut and put BNC ends on 10' and 15' cables and connected them to the original cables in order to make them long enough.


This concludes the laying of new cables. Suresh is currently working on the QUADs...

  4474   Thu Mar 31 08:31:44 2011 SureshUpdateVIDEOCable laying...continued

The video work has crossed a milestone.    

Kiwamu and Steve have shifted the three quads from the control room to the Video MUX rack (1Y1) and have wired them to the MUX.

The monitors in the control room have been repositioned and renumbered.  They are now connected directly to the MUX. 

Please see the new cable list for the input and output channels on the MUX.

As of today, all cables according to the new plan are in place.  Their status as indicated on the wiki page above is not verified.  Please ignore that column for now; we will be updating it soon.

I shifted the MC1F/MC3F camera and the MC2F cameras onto the new cables.  Also connected the monitors at the BS chamber and end of the X arm to their respective cables.  I have removed the RG58B BNC (black) cables running from MC2 to BS and from ETMXF to the top of the Flow Bench. 

Some of the old video cables are still in place but are not used.   We might consider removing them to clear up the clutter. 

Some of the video cables in use are orange; if the lab's cable color code is to be enforced, these will have to be replaced with blue ones.

Some of the cables in use running from the MUX to the monitor in the control room are the white 50 Ohm variety.  There are also black RG59 Cables running the same way ( we have surplus cables in that path)  and we have to use those instead of the white ones. 

There are a number of tasks remaining:

a)  The inputs from the various existing cameras have to be verified. 

b) There are quite a few cameras which are yet to be installed.

c) The outputs may not be connected to their monitors.  A monitor may still be connected to an old cable which is not connected to the MUX; the new cable should be lying around close by.  So if you see a blank monitor, please connect it to its new cable. 

d) The status column on the wiki page has to be updated.

e) Some of the cables currently in place may need to be replaced and some need to be removed.  We need to discuss our priorities and come up with a plan for that.

Once everything has been checked, we can certify that the video cabling system is complete.

I would like Joon Ho to take care of this verification+documenting process and declaring that the job is complete. 


Steve attached these two pictures.

Attachment 1: P1070489.JPG
Attachment 2: P1070494.JPG
  4492   Wed Apr 6 16:02:07 2011 Larisa ThorneUpdateElectronicsCable laying...continued

[Steve, Kiwamu, Larisa]


Having finished laying new cable last week, we moved on to connecting those on PSL table and AP table.

Cables connected:

--RCR, RCT, PMCR (all three are blue)

--OMCR (blue cable, ***now has a camera***), PMCT, IMCR, REFL, AS (white cable), OMCT (***now has camera***)


Unless otherwise noted, the cables are black on the AP table. Also on the AP table: cables were connected directly to the power source.

The wiki has been updated accordingly.


Steve noted that MC2T and POP cameras are not there.



  16805   Fri Apr 22 12:15:08 2022 AnchalUpdateBHDCable post installation

If someone gets time, let's put in all the cable posts and clean up our cable routing on the tables.


  16810   Mon Apr 25 16:57:57 2022 AnchalUpdateBHDCable post installation

[Anchal, Tega, JC]

We installed cable posts in ITMY, BS, and ITMX chambers for all the new suspensions. Now, there is no point where the OSEM connections are hanging freely.

In BS chamber, we installed one post for LO2 near the north edge of the table and another post for PR3 on the Western edge with the blue cable running around the table on the floor.

In the ITMY chamber, we installed the cable post between AS1 and AS4, with the blue cables running around the table on the floor. This is to ensure the useful part of the table remains empty for the future and none of the OSEM cables are taut in air.


  8676   Wed Jun 5 10:46:43 2013 GautamUpdateGeneralCable re-routing at 1Y4

 There were 4 cables running over the front side of rack 1Y4 such that the front door could not be closed. I re-routed them (one at a time) through the opening on the top of the rack. The concerned channels were

  • Green refl mon
  • Err mon
  • Pzt out (temp) (has been marked "Door Damaged BNC")
  • Laser temp ctrl

Before and after pics attached.








  3369   Thu Aug 5 17:59:23 2010 KojiUpdatePSLCable removal from the control room

[Alberto, Kiwamu, and Koji]

We removed the BNC cables from the control room.
The work was as hard as the one I had when I swept a 300m tunnel with a vacuum...

If we could remove the video cables, that would be a real epoch.

We found that the cabling behind the AP table is still quite ugly....grurrrh

Attachment 1: IMG_2684.jpg
  16812   Mon Apr 25 18:00:03 2022 Ian MacMillanUpdateUpgradeCable supports update

I have designed new cable supports for the new ribbon cables running up the side of the tables in the vacuum chambers. 

The clamps that I have designed (shown in basic sketch attachment 1) will secure the cable at each of the currently used cable supports. 

The support consists of a backplate and a frontplate. The backplate is secured to the leg of the table using a threaded screw. The frontplate clamps the cable to the backplate using two screws: one on either side. Between two fastening points, the cable should have some slack. This should keep the cable from being stiff and help reduce the transfer of seismic noise to the table. 

It is possible to stack multiple cables in one of these fasteners. Either you can put two cables together and clamp them down with one faceplate, or you can stack multiple faceplates with one cable between each faceplate. In the latter case the stack would go backplate, then cable, then faceplate, then cable, then the second faceplate. This configuration would require longer screws.

The exact specifics about which size screws and which size plates to use still have not been measured by me. But it will happen.

Attachment 1: Chamber_Leg_Ribbon_Cable_Attachments.pdf
  4739   Wed May 18 16:52:23 2011 SureshUpdateRF SystemCables for AS11 PD are in place

[Larisa, Suresh]

All the cables needed for the AS11 PD are in place... the heliax cable runs from the AS table to the PSL rack.  The LO and RF cables to demod board as well as the I and Q cables into the LSC Whitening board are connected.

The cables get rather densely packed when the LSC Whitening filter sits between the PD Interface Board and the LSC AA filter board.  This makes it difficult to access the SMA connectors on the LSC whitening filter.  So we shifted the LSC Whitening and AA Filter boards one slot to the right.  The LSC rack looks like this just now.  We have also shifted the binary cables at the back of the Eurocart by one slot so the same cables are associated with the cards.




  3342   Sat Jul 31 17:37:36 2010 josephbUpdateCDSCables needed for CDS test

Last Thursday, Kiwamu and I went through the cabling necessary for a full damping test of the vertex optics controlled by the sus subsystem, i.e. BS, ITMX, ITMY, PRM, SRM.  The sus IO chassis is sitting in the middle of the 1X4 rack.  The c1sus computer is the top 1U computer in that rack.


ADC:

The hardest part is placing the 2x D-sub connectors to scsi on the lemo break out boxes connected to the 110Bs.  The breakout boxes can be seen at the very top of the picture Kiwamu took here.  These will require a minor modification to the back panel to allow the scsi cable to get out.  There are two of these boxes in the new 1X5 rack.  These would be connected by scsi to the ADC adapters in the back of the sus IO chassis in 1X4.  The connectors are currently behind the new 1X5 rack (along with some spare ADCs/DACs/BOs).

There are 3 cables going from 40 IDC to 37 D-sub (the last 3 wires are not used and do not need to be connected, i.e. 38-40).  These plug into the blue and gold ADC adapter box, the top one shown here.  There is one spare connection which will remain unused for the moment.  The 40 IDC ends plug into the Optical Lever PD boxes in the upper right of the new 1X4 rack (as seen in the top picture here - the boards on the right). At the back of the blue and gold adapter box is a scsi adapter which goes to the back of the IO chassis and plugs into an ADC.

In the back of the IO chassis is a 4th ADC which can be left unconnected at this point.  It will eventually be plugged into the BNC breakout box for PEM signals over in the new 1X7 rack, but is unneeded for a sus test.


DAC:

There are 5 cables going from 3 SOS dewhite/anti-image boards and 2 LSC anti-image boards into 3 blue and gold DAC adapter boxes.  Currently they plug into the Pentek DACs at the bottom of the new 1X4 rack.  Ideally we should be able to simply unplug these from the Pentek DACs and plug them directly into the blue and gold adapter boxes.  However at the time we checked, it was unclear if they would reach.  So its possible new cables may need to be made (or 40 pin IDC extenders made). These boxes are then connected to the back of the IO chassis by SCSI cables.  One of the DAC outputs will be left unconnected for now.

Binary Output:

The Binary output adapter boxes are plugged into the IO chassis BO cards via D-sub 37 cables.  Note one has to go past the ADC/DAC adapter board in the back of IO chassis and plug directly into the Binary Output cards in the middle of the chassis.  The 50 pin IDC cables should be unplugged from XY220s and plugged into the BO adapter boxes.  It is unclear if these will reach.


Timing:

We have a short fiber cable (sitting on the top shelf of the new 1X3 rack) which we can plug into the master timing distribution (blue box located in the new 1X6 rack) and into the front of the SUS IO chassis.  It doesn't quite make it going through all the holes at the top of the racks and through the cabling trays, so I generally only plug it in for actual tests.

The IO chassis is already plugged into the c1sus chassis with an Infiniband cable.


So in Summary to plug everything in for a SUS test requires:

  • 6x SCSI cables (3 ADC, 3 DAC) (several near bottom of new 1X3 rack)
  • 4x 37 D-sub to 37 D-sub connector (end connectors can be found behind new 1X5/1X6 area with the IO chassis stuff - Need to be made) (4 BO)
  • 3x 40 IDC to 37 D-sub connectors (end connectors can be found behind new 1X5/1X6 area - Need to be made)(ADC)
  • 5x 64 pin ribbon to 40 IDC cable (already exist, unclear if they will reach) (DAC)
  • 8x 50 pin IDC ribbon (already exist, unclear if they will reach) (BO)
  • 1x Double fiber from timing master to timing card
  • 1x Infiniband cable (already plugged in)

Tomorrow, I will finish up a channel numbering plan I started with Kiwamu on Thursday and place it in the wiki and elog.  This is for knowing which ADC/DAC/BO channel numbers correspond to which signals.  Which ADCs/DACs/BOs the cables plug into matters for the actual control model; otherwise you'll be sending signals to the wrong destinations.

WARNING: The channel numbers on the front Binary Output blue and gold adapter boxes are labeled incorrectly.  Channels 1-16 are really in the middle, and 17-32 are on the left when looking at the front of the box.  The "To Binary IO Module" is correct.

  3848   Tue Nov 2 16:49:02 2010 JenneConfigurationCamerasCabling on the PSL table

Dear whomever setup the camera on the SW corner of the PSL table,

It would be handy if, even for temporary setups, all cables went underneath the white frame around the PSL table.  As it is now, the cables are in the way of the door.  The door is pretty much closed all the way, but if someone were to open other doors, the far door can easily be pushed all the way to the end of the track, thus squishing the cables.  Squished cables are bad cables.


  5583   Fri Sep 30 06:25:20 2011 kiwamuUpdateLSCCalibration of BS, ITMs and PRM actuators
The AC responses of the BS, ITMs and PRM actuators have been calibrated.
 To perform interferometric work such as #5582, the actuator responses must be measured.
     BS    = 2.190e-08 / f^2   [m/counts]
     ITMX  = 4.913e-09 / f^2   [m/counts]
     ITMY  = 4.832e-09 / f^2   [m/counts]
     PRM   = 2.022e-08 / f^2   [m/counts]
The same technique as I reported some time ago (#4721) was used for measuring the BS and ITM actuators.
In order to measure the PRM actuator, power-recycled ITMY (PR-ITMY) was locked and the same measurement was applied.
The sensor response of PR-ITMY was calibrated by exciting the ITMY actuator since the response of the ITMY had been already measured.
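A minimal sketch of how these numbers would be used (the helper function is my illustration, not an existing script): the quoted AC responses give the displacement amplitude produced by a sinusoidal drive, valid well above the pendulum resonance where the response falls as 1/f^2.

```python
# AC actuator responses quoted above, in m*Hz^2/count.
K = {"BS": 2.190e-8, "ITMX": 4.913e-9, "ITMY": 4.832e-9, "PRM": 2.022e-8}

def counts_to_meters(optic, counts, f_hz):
    """Displacement amplitude [m] for a sinusoidal drive of `counts` at f_hz."""
    return K[optic] / f_hz**2 * counts

# e.g. 1000 counts on the BS at 100 Hz: counts_to_meters("BS", 1000, 100)
```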