ID Date Author Type Category Subject
  476   Wed May 14 13:14:19 2008  Andrey  Summary  Computers  Reflective Memory Network is restored

Reflective Memory Network is restored, all watchdogs and oplevs are returned to the "enabled" state.

In order to revive the computers, several things were done.

1) Following Mr. Adhikari's elog entry #353, I walked around the interferometer room and switched off the power keys in all crates containing computers whose names appear on the MEDM Reflective Memory screen, including the rack with the framebuilder. By the way, it was nontrivial to find the switch in the 1Y4 crate that shuts off/on the processors "c1susvme1" and "c1susvme2": it turned out to be located on the rear side of the crate, and it is not a key but a button.

2) I tried to follow the wiki-40 computer restart procedures, but every time I tried to run "startup.cmd" from the corresponding target subdirectory, I got the error message "Device or resource busy".
By the way, one more thing was learned: if you first open burtgooey in a terminal, select the snap file, then reboot the processor, and then try to burt-restore it, you will get the message "Status Not OK". In order to really burt-restore a processor that was just rebooted, you need to close that terminal and open burtgooey in a new terminal window opened after the reboot.

Since following the wiki-40 procedures was not reviving the computers, I invited Alex Ivanov.

3) Alex tried touching the memory card in "c1iovme" in rack 1Y2, because this card had failed once before, causing network problems, but this did not help.

4) We shut off and restarted (by pressing the power button) the black Linux machine "c1dcuepics" (located at the very bottom, below the framebuilder). Alex says that this machine is responsible for all of EPICS. It had not been restarted for 182 days, and probably some process on it went wrong.

After restarting "c1dcuepics" we were able to follow the wiki-40 procedures for restarting all the other computers (whose names are on the MEDM RFM network screen). We ran the corresponding "startup.cmd" files and burt-restored them without error messages.

Now all the computers work and communicate in a proper way.

Mr. Joseph Betzwiezer was helping me with all these activities (we decided that this was more important than the cameras for now), thanks to him. But our joint skills turned out to be insufficient, so Alex Ivanov's contribution was the most important.
  475   Tue May 13 10:38:28 2008  steve  Update  Computers  rfm network is down
The RFM network went down yesterday around 5pm.
Only c1susvme1 is alive, but its timing is off.
Andrey is bringing the network up.

Andrey would like to add that our situation is very similar to the one described by Rana in his elog entry #353 (March 03). All of the rectangular boxes are red, except for SUS1-c1susvme1 and AWG (only these two rectangles are green).
  474   Tue May 13 10:15:52 2008  steve  Update  SUS  restored damping of BS & PRM
I think our janitor was cleaning too heavy-handedly.
The BS and PRM sus damping were lost.
They were restored.
  473   Fri May 9 10:15:36 2008  josephb  Update  Computers  Nodus has moved
Steve and I moved Nodus from under the table in the control room to just above the Rana computer in the control room rack.
  472   Fri May 9 08:40:24 2008  steve  Update  SUS  ETMY sus damping restored
ETMY lost damping at 19:10 last night.
There was no seismic event then.
Sus damping was restored this morning.
  471   Thu May 8 16:40:36 2008  josephb  Configuration  Cameras  Gige Camera currently on PSL table
Andrey and I were working on the PSL table today, using a pickoff of a pickoff of the main beam (adding a microscope slide to pick off ~4% of the original pickoff) to feed the GC750 GigE camera.

At the time we left, we scanned the area with a beam scan and didn't see any new stray beams, and nothing in any useful beam paths should have changed. We also strung a Cat 6 cable from the control room switch out to the PSL table in the cable trays, and then above the PSL table.

Currently, it's not as well aligned as it could be, and it also requires a very low exposure setting, around -E 50, to avoid saturation.
  470   Thu May 8 02:06:13 2008  rana  Summary  COC  Thermal Lensing in the ITMs and BS may be a problem
The iLIGO interferometers start to see thermal lensing effects with ~2W into the MC, a recycling
gain of ~50, and a beam waist on the ITMs of ~3.5 cm.

At the 40m, the laser power into the MC is 1/2 as much and the recycling gain is 4-5x less, but the
beam on the ITM has a 3 mm waist. So the power in the ITM bulk is 10x less but the power density
is 100x more. It seems like the induced lens in the ITM bulk might be larger, and if there's
significant absorption on the ITM face (remember our finesse is 4-5x higher) the beam size in the
arm cavity may also change enough to measure.

Someone (like Andrey) should calculate how much the beam sizes change with absorbed power.
  469   Thu May 8 01:50:25 2008  rana  Summary  ASC  Arm Cavity HOM Resonances
Nothing new, but I calculated the frequencies of the first 22 higher order transverse modes and thought I might as well list them here.

To do this I took formula (23) from page 762 of Siegmans book and put it into this form:
         f_fsr
dfmn =   ----- * (m+n) * acos(sqrt(g1*g2))
           pi

and then calculated them for m+n = 1..22 (22 is not a magic number).

I also used the 'mod' function of matlab to calculate the frequency mod FSR so that we would know how far away
from a cavity resonance it is. I took as parameters: Larm = 38.55 m, Ritm = 1e6 m, Retm = 57.1 m. Kirk measured
the arm length some time ago; we need to measure the arm g-factor...maybe we'll put Tobin on this when he comes
by for a visit.

1.1936 (TEM01, TEM10)
2.3871
3.5807
0.8859 (TEM22, TEM13, TEM31)
2.0795
3.2730
0.5782
1.7718
2.9654
0.2706 (TEM55, ...)
1.4641
2.6577
3.8512
1.1564
2.3500
3.5436
0.8488
2.0423
3.2359
0.5411
1.7347
2.9282
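
For reference, a minimal Matlab sketch of the same calculation (my reconstruction, not the original script; it assumes the values listed above are the HOM offsets mod the FSR, in MHz):

% Reconstruction of the calculation above (not the original script).
c    = 299792458;        % speed of light [m/s]
Larm = 38.55;            % arm length [m]
Ritm = 1e6;              % ITM radius of curvature [m]
Retm = 57.1;             % ETM radius of curvature [m]

f_fsr = c/(2*Larm);                          % free spectral range [Hz]
g1 = 1 - Larm/Ritm;  g2 = 1 - Larm/Retm;     % cavity g-factors

mn   = 1:22;                                 % transverse mode order m+n
dfmn = (f_fsr/pi) .* mn * acos(sqrt(g1*g2)); % HOM offsets from the carrier [Hz]
disp(mod(dfmn, f_fsr)'/1e6)                  % offsets mod FSR in MHz; first ~1.1936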
  468   Thu May 8 01:07:24 2008  rana  Summary  LSC  Frequency Noise test: MC Trans Backscatter
There is a wandering hump in the MC_F spectrum. It can move around, on a time
scale of seconds, between 40 and 200 Hz. It has an amplitude ~5-50x above the background spectrum. This seems new; I don't remember it
from a year ago. It is there with the IFO unlocked, with the IFO locked, and with the IFO locked + CM mode.

Tapping the AS table and/or the PSL table enclosures produces a broadband increase in the MC_F spectrum but doesn't
selectively affect the hump.

We thought it might be backscatter from the MC TRANS path, so we stuck one of Steve's cool black glass V's into
this space. No effect. We should design a black glass V dump which we can replicate in large quantities for us and for
the sites. Something like the one on the LSC PDs, but with a 1 sq. inch opening area and a 2 inch depth.


We have also done this on the MC2 TRANS beam before. No noise reduction there either.

The noise hump appears in MC_F but not in CARM_IN1 (after the CM handoff), so it seems like the MC has enough gain
to squash it. This also exonerates the MC REFL path, since anything there would not be affected by the MC servo gain and
so would be visible in CARM.

My best guess is that there is something really, really scattery on the PSL table. But for now it looks like this is not a
big factor in the locking
issues.
  467   Wed May 7 15:25:41 2008  rob  Configuration  LSC  AP33 -> POX33

Quote:

I am in the process of switching the POX166 and AP33 photodetectors, so that they become POX33 and AP166. The IFO_CONFIGURE buttons won't work until I finish.


Done. We're now in the 40m CDD configuration.
  466   Tue May 6 17:28:39 2008  rob  Configuration  LSC  AP33 -> POX33

I am in the process of switching the POX166 and AP33 photodetectors, so that they become POX33 and AP166. The IFO_CONFIGURE buttons won't work until I finish.
  465   Mon May 5 15:53:14 2008  steve  Update  VAC  drypump replaced & readbacks are out
The small turbo pump of the annulus system is tp3.
Its drypump was replaced at 1.4 Torr after 8 months of operation.
The rebuilding cost of this SH100 drypump is getting ridiculously high: $1349.
I'll look for a replacement.

Joseph rebooted the c1vac2 card on Apr 24 (entry #441).
This restarted the readbacks of the turbos and ion pumps for a while.
That was nice. The readbacks are not working again.

  464   Mon May 5 11:04:30 2008  rob  Omnistructure  Computers  Network setup

Mafalda was not connected to the network, and so our DMF-based seisBLRMS has not been running for ~1 week. I traced this to a broken ethernet cable connecting mafalda to the network switch in the rack next to the B&W printer. This cable has a broken connector at the switch side, which means it can't stay connected if there's any tension. It needs to be replaced.
  463   Thu May 1 12:46:02 2008  josephb  Configuration  Computers  Nodus gateway is up
The computer Nodus is now acting as a gateway machine between the GC network and the martian network in the 40m. It has the same passwords as the rana gateway machine.

Its name on the GC side is nodus (ip: 131.215.115.52) and on the martian side is nodus113 (ip: 131.215.113.200). We will need to update the hosts file on the control room machines so you can just use the name nodus113 rather than the full ip.

Software is still being added to the computer, and it will remain in parallel with the rana gateway machine until everything has been working properly for a week or so.
  462   Thu May 1 08:31:51 2008  steve  Update  SUS  earthquake trips etmy & mc1
An earthquake at Lake Isabel, magnitude 4.4, at 1am today shook the 40m.
ETMY and MC1 watchdogs tripped.
Sus damping restored.
  461   Wed Apr 30 20:48:58 2008  Andrey  Summary  PEM  New Weather Channels

I created the new channels for the weather station; all letters are capitals. They are of the form "C1:PEM-WS_PARAMETER", where "PARAMETER" is a temperature, pressure, wind, etc. characteristic (the names are self-explanatory).

These new weather channels are shown on the "Weather Checklist" MEDM screen. Also, the units of pressure were changed from Pascals to torr and mbar.

The new weather channels are also visible in Dataviewer. I updated the template, and as an example of Dataviewer data I attach the following 5-hour trends of weather parameters from 3.30PM to 8.30PM on April 30th.
Attachment 1: April30-5hours.png
  460   Tue Apr 29 21:30:49 2008  Andrey  Update  PEM  In the process of renaming channels for Weather Station

I started renaming channels for the weather station, and I will continue this tomorrow, on Wednesday.

I have restarted 'c1pem1' several times and reconfigured "C0DCU1" on the framebuilder MEDM screen.

Framebuilder now does not work.
  459   Tue Apr 29 21:09:12 2008  rana  DAQ  CDS  FE Filters
These are new FE filters for downsampling and upsampling. We will be going from native hardware sampling rates of 64k down to 32k, 16k, and 2k.

The attached plot shows these filters. They are 3dB ripple, 40 dB stopband, 4th order elliptic filters in which I have moved the zeros around
into good places (e.g. to the Nyquist frequency).

I'm also attaching the .txt file containing the filter coefficients and the design strings. The filters are called x2, x4, and x32, for the
D2, D4, and D32 downsampling, respectively.
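
For illustration, here is a Matlab sketch of one such design (my own sketch, not the actual design strings in fefilters.txt; the corner frequency chosen here is an assumption):

% Sketch of a 4th-order elliptic filter with 3 dB ripple and 40 dB stopband,
% for 2x downsampling from 65536 Hz to 32768 Hz (the corner choice is assumed).
fs_native = 65536;                   % native hardware rate [Hz]
fs_down   = 32768;                   % downsampled rate [Hz]
fc        = 0.9*(fs_down/2);         % assumed corner, a bit below the new Nyquist

[z, p, k] = ellip(4, 3, 40, fc/(fs_native/2));   % zeros, poles, gain
% Moving the zeros into "good places" (e.g. onto the old Nyquist, z = -1) would
% be done here, before converting to coefficient form for the front end.
[b, a] = zp2tf(z, p, k);
freqz(b, a, 4096, fs_native);        % inspect the magnitude/phase response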
Attachment 1: fefilters.jpg
Attachment 2: fefilters.txt
# FILTERS FOR ONLINE SYSTEM
#
# Computer generated file: DO NOT EDIT
#
# MODULES ULYAW
#
################################################################################
### ULYAW                                                                    ###
################################################################################
# SAMPLING ULYAW 65536
... 28 more lines ...
  458   Mon Apr 28 23:44:33 2008  Andrey  Update  Computer Scripts / Programs  Weather.db

I was trying to figure out how to modify the file "Weather.db" so that the atmospheric pressure would be recalculated from Pa to bar before appearing on the EPICS screen, but so far I have not succeeded. I restarted the processor "c1pem1" several times. I will continue this tomorrow, and I will also modify the names of the weather channels.
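
For reference, the unit conversions involved are simple (a quick Matlab check of the standard factors, not the Weather.db change itself):

% Standard pressure conversion factors (just arithmetic, not EPICS code).
P_Pa   = 101325;              % example: one standard atmosphere in Pa
P_bar  = P_Pa / 1e5           % = 1.01325 bar
P_mbar = P_Pa / 100           % = 1013.25 mbar
P_torr = P_Pa / 133.322       % ~ 760 torr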
  457   Sun Apr 27 22:57:15 2008  ajw  DAQ  Computers  br40m?

Quote:

The testpoint manager (which runs on fb40m) crashed this afternoon. Upon re-starting it, I found there was a rogue dtt process on op440m and also a daqd daemon running on br40m. One or both of these caused the tpman to crash. br40m is the frame broadcaster, which is never used here as we don't run DMT. I killed the daqd process there.

The way to find if there is a rogue process is to watch the output to the console from the tpman when you start it:

Allocate new TP handle 56 by 131.215.113.203
Allocate new TP handle 57 by 131.215.113.203
Allocate new TP handle 58 by 131.215.113.203
Allocate new TP handle 59 by 131.215.113.203
Allocate new TP handle 60 by 131.215.113.203
Allocate new TP handle 61 by 131.215.113.203
Allocate new TP handle 62 by 131.215.113.203
Allocate new TP handle 63 by 131.215.113.203
Allocate new TP handle 64 by 131.215.113.203
Allocate new TP handle 65 by 131.215.113.203
Allocate new TP handle 66 by 131.215.113.203
Allocate new TP handle 67 by 131.215.113.203
Allocate new TP handle 68 by 131.215.113.203


If you see something like this, with a new TP handle being allocated every few seconds, you need to log in to the corresponding host and kill whatever process has run away.


I *think* Alex is responsible for the daqd daemon running on br40m (he set up some new stuff recently, a data concentrator and broadcaster); I'll make sure he sees this post.
  456   Sun Apr 27 18:11:58 2008  rob  DAQ  Computers  br40m?

The testpoint manager (which runs on fb40m) crashed this afternoon. Upon re-starting it, I found there was a rogue dtt process on op440m and also a daqd daemon running on br40m. One or both of these caused the tpman to crash. br40m is the frame broadcaster, which is never used here as we don't run DMT. I killed the daqd process there.

The way to find if there is a rogue process is to watch the output to the console from the tpman when you start it:

Allocate new TP handle 56 by 131.215.113.203
Allocate new TP handle 57 by 131.215.113.203
Allocate new TP handle 58 by 131.215.113.203
Allocate new TP handle 59 by 131.215.113.203
Allocate new TP handle 60 by 131.215.113.203
Allocate new TP handle 61 by 131.215.113.203
Allocate new TP handle 62 by 131.215.113.203
Allocate new TP handle 63 by 131.215.113.203
Allocate new TP handle 64 by 131.215.113.203
Allocate new TP handle 65 by 131.215.113.203
Allocate new TP handle 66 by 131.215.113.203
Allocate new TP handle 67 by 131.215.113.203
Allocate new TP handle 68 by 131.215.113.203


If you see something like this, with a new TP handle being allocated every few seconds, you need to log in to the corresponding host and kill whatever process has run away.
  455   Sun Apr 27 05:09:30 2008  rana  Configuration  IOO  MC WFS Whitening turned on
I hardwired on the MC WFS whitening filters.

The MAX333A switches which choose between whitening and bypass on that board were in the bypass position
because the Xycom220 connections are not there. So the control switch gets +15V but there is no pull
down to set it to the whitened mode.

The least invasive (easiest) change I could do was to tie all of those inputs to ground. This pulls a few mA
through the pull-down resistors but is otherwise innocuous. All of these control lines come in on the A-row
of the P1 connector, so I was able to solder a single wire across all of them to ground them all.

The WFS2 board had a blown electrolytic capacitor on the -15 V line and so there was probably some extra noise
getting in that way. I couldn't find any extra SMD to replace it so I cut the legs off of a 22 uF polarized
tantalum and stuck it in there. It's even close to being the same color. I checked out the other caps, and they were all
close to 68 uF as spec'd. This one had luckily blown open and so didn't suck down the Sorensen and destroy everything.

Plugged everything back in and switched the WFS servos back on. Looks good. Took before and after spectra.

In the plot:

GREEN: Open loop dark noise before changes
RED: Open loop bright (MC locked but MCWFS off)
BLUE: Closed loop, MC locked

BLACK: Dark noise after whitening
ORANGE:Closed loop after whitening

The cursor is at 16.25 Hz, the SOS bounce mode.

Then I ran the new setMCWFSgains script, which uses pzgain to set the UGFs of the 4 loops to 4.01 Hz.
We have in the past had problems with high WFS gains causing instabilities with the CM servo around 10-30 Hz. If this happens we should
just lower the gain by a factor of ~5.
Attachment 1: mcnoise.png
  454   Sun Apr 27 02:11:11 2008  rana  Configuration  IOO  MC WFS Notes
As noted in the elog from Friday, the WFS has been bad ever since someone switched on the digital whitening filters (FM1 & FM2)
in the MC WFS I&Q filter banks.

On Friday evening, John, Alan, and I went to the rack and verified that although the drawing shows a hookup for the whitening
filters, there is actually no such thing and so we can't have the whitening. So the anti-whitening turns on two lag filters
(2 poles at 4 Hz) and without the hardware this makes the servos unstable by adding 90 deg of phase lag at 4 Hz.

There are still several problems in this system:
- AD797 is used after the mixer. This is an unreliable, noisy part. We need to change this out
  with some OP27s so that this becomes reliable and has a more reasonable noise figure.

- Hard wire the whitening filters ON. We never want these to be off. Then we can turn on the
  anti-whitening. This will give us a factor of 100 better noise without filtering.

- The AD602 on the front of the whitening board has a 100 Ohm internal impedance and the 
  resistor between the demod board and the AD602 is 909 Ohms. This results in dividing the
  signal by 10.

- The signal at the ADC is ~100 cts peak-peak. The full ADC range is, of course, 65000 cts. So
  we could use a lot more gain. The mean quadrant signals are also ~100 cts so we could easily
  up the analog DC gain by a factor of 30 on top of the whitening filter increase.

- The AD602 at the input and the AD620 on the output are both variable gain stages but because
  of our lack of control are set to ambiguous gain levels. We should set the AD602 on the input
  to its max gain of 30 dB. With the -20 dB from the x10 voltage division, this will give us
  an overall gain of 3 for the puny demod signals.
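
Putting the numbers from the items above together (just a quick sanity check in Matlab, not a measurement):

% Rough check of the gain bookkeeping described above.
Rser = 909;  Rin = 100;            % series resistor and AD602 input impedance [Ohm]
vdiv = Rin/(Rser + Rin);           % ~0.099, i.e. the "divide by 10"
gmax = 10^(30/20);                 % AD602 at its max gain of 30 dB -> ~31.6
overall = vdiv*gmax                % ~3.1, the "overall gain of 3" quoted above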

  453   Sat Apr 26 11:21:15 2008  ajw  Omnistructure  Computers  backup of /cvs/cds restarted
The backup of /cvs/cds (which runs as a cron job on fb40m; see /cvs/cds/caltech/scripts/backup/000README.txt) 
has been down since fb40m was rebooted on March 3.
I was unable to start it because of conflicting ssh keys in /home/controls/.ssh .
With help from Dan Kozak, we got it to work with both sets of keys
( id_rsa, which allows one to ssh between computers in our 113 network without typing a password,
 and backup2PB which allows the cron job to push the backup files to the archive in Powell-Booth).

It still goes down every time one reboots fb40m, and I don't have a solution.
A simple solution is for the script to send an email whenever it can't connect via ssh keys
(requiring a restart of ssh-agent with a passphrase), but email doesn't seem to work on fb40m.
I'll see if I can get help on how to have sendmail run on fb40m.
  452   Sat Apr 26 01:45:38 2008  Andrey  Summary  PEM  Weather Station enhancement
Two more things concerning weather monitoring have been done during this week.

1) A Dataviewer template was created, so that one can see "real-time" information from the weather channels immediately, without adding many channels "manually".

If one wants to use this template,
open Dataviewer -> "File" -> "Restore Settings", /cvs/cds/caltech/users/Templates/Dataviewer_Templates/Weather.xml.

2) I wrote a couple of Matlab scripts that allow one to read data (minute trends) from the Dataviewer channels over some time in the past, save the received data in mat-files, and plot those minute trends. Thus, one can get plots very similar to what one sees in Dataviewer. These two Matlab files are located in the directory
"/cvs/cds/caltech/users/weather_station". The file "WeatherReading.m" reads from the weather channels (the paths to the mDV directory must be configured before using my script), and "WeatherTrends.m" plots those minute trends.

Unfortunately, hardware problems arise very often when we try to read over a somewhat long stretch of the past, so until now I have not succeeded in getting trends for more than 20 minutes. As an example, see the attached png-file with the 20-minute trends of data from Thursday evening.

3) So far I have not had success in learning how to recalculate pressure from Pascals to mbar in EPICS (although I tried a Google search).

4) I have been making every effort in recent weeks not to put any personal or non-scientific information into the elog, but this message could be important for all of us, so I cannot resist:
a shark in the Pacific Ocean has killed a swimmer near San Diego (I saw this in the Russian news and then did a quick Google search).
http://latimesblogs.latimes.com/lanow/2008/04/this-just-in-fa.html
Attachment 1: Matlab_Weather_Trends.png
  451   Fri Apr 25 20:53:02 2008  rana  Configuration  IOO  MC WFS with more gain
Quick update: we found that the reason for the MC WFS instability was that the digital anti-whitening was on but the analog whitening was not.

We turned off the digital filters and were able to increase the gain by a factor of ~30. It is left like this, but if it hampers IFO locking then it's best to just turn it back down to an overall gain of 0.1 or 0.05.
  450   Fri Apr 25 15:44:21 2008  steve  Update  PSL  scattering measurements
In pursuit of a low-backscatter, high-power beam dump, we looked at materials such as
polished copper, polished aluminum, diamond-cut aluminum, a variety of polished & heat-treated stainless steels, and several shades of black glass.
Black glass is ideal at low power. Superpolished SS 304 #8 is the only material that measures close to black glass.
We are still looking for a conductive, low-backscatter material that would be ideal for a good high-power beam trap.

The backscattering of black glass (shade 12), superpolished SS304 #8, and white paper (B98, 24 lbs) was compared at 1064 nm.

Atm 1: data plot numbers from front panel of SR830 display,
sensitivity: 1x1 mV for bg and ss
1x1 V for wp

Atm 2: drawing of measurement set up
Atm 3: SR830 lock.amp settings
Atm 4: view from steering mirror
Atm 5: view from black glass trap
Atm 6: white paper
Attachment 1: scatmeas20080425.xls
Attachment 2: scattring_set_up_20080425.ppt
Attachment 3: sr830settings.pdf
Attachment 4: viewfmirror.pdf
Attachment 5: viewfbgtrap.pdf
Attachment 6: wpnmount.pdf
  449   Fri Apr 25 13:53:11 2008  josephb  Summary  Computers  Network setup
This is the more detailed summary promised in Andrey's log entry 444.

We went around to each hub, one at a time, unplugged the network connection, and figured out which light on which hub went out. We then went back to the control room, confirmed that we were still able to talk to the devices connected to the hub, and if not, rebooted them. This process was repeated for each hub.

As it stands, the hubs located at the ends of the arms (in racks 1X4 and 1Y9) are connected to the really old 24-port 10Base-T hub located in 1Y7. In addition, the 5-port SMC hub is plugged into the 8-port SMC switch in 1Y5 (which actually has enough ports to simply move all the connections over to it, so I'm not sure why there are two...).

All other hubs/switches are connected back to the control room 24 port switch.

Attached is a simple diagram of the network connections for the 40m lab.
Attachment 1: 40m_network_90.pdf
  448   Fri Apr 25 13:20:04 2008  Andrey  Update  PEM  Microphone test
In response to Rana's request, I tested the microphone (if it is alive or not) by clapping my hands and speaking aloud nearby.

The microphone is alive, see the attached "Full Data" for 5 minutes from Dataviewer.
Attachment 1: Microphone.png
  447   Fri Apr 25 11:33:40 2008  Andrey  Configuration  Computers  Computer controlling vacuum equipment

The old computer (located at the south end of the interferometer room) that was almost unable to fulfill its duties of controlling the vacuum equipment has been replaced by "Linux-3". MEDM runs on "Linux-3".

Later that day, Steve Vass and I checked that the vacuum equipment (such as the vacuum valves) can really be controlled from the MEDM screen 'VacControl.adl'.

The unused flat LCD monitor, keyboard, and mouse (parts of the former LINUX-3 computer) were put on the second shelf of the computer rack in the computer room near the HP printer.
  446   Thu Apr 24 23:50:10 2008  rana  Update  General  Syringes in George the Freezer
There are some packets of syringes in the freezer which are labeled as belonging to an S. Waldman.
Thu Apr 24 23:48:55 2008

Be careful of them, don't give them out to the undergrads, and just generally leave them alone. I
will consult with the proper authorities about it.
  445   Thu Apr 24 23:27:48 2008  rana  Update  PEM  acoustic noise in MC_F
I looked at the coherence between the Microphone in the PSL (PEM-AS_MIC) and the MC_F channel.

We want to use a microphone to do Wiener/Adaptive noise cancellation on the MC and so we need to
have a coherence of more than ~0.1 in order for that to have any useful effect.

The attached plot shows the spectrum and coherence with and without the HEPA turned up. As you can
see, the HEPA noise is just barely noticeable in this microphone.

We will need to get something with at least 20 dB more sensitivity.
  444   Thu Apr 24 22:06:47 2008  Andrey  Summary  Computers  Ethernet Cables and Hubs
This morning (between 8:30AM and noon) Joe and I were working on understanding which ethernet cables connect the "processors controlling the equipment in the interferometer room" to the "internet hub in the computer room".

First, in the morning we took the blue ethernet cables off the router located near ETMX several times. We were trying to understand which port on the hub is responsible for the interaction with that processor.

Second, we were working on reviving the connection with the computer controlling the vacuum in the interferometer.

Later in the day (around 2PM) Joe continued some work with the ethernet cables without me. We plan to continue the cable work on Friday morning. A better and more detailed elog entry will appear then.
  443   Thu Apr 24 15:57:53 2008  steve  Configuration  SAFETY  Safety at AP-ISCT
I measured the output power of the PSL after the mechanical shutter.

It was 1.1 W with the Ophir power meter. I then unlocked the MC and measured
the power at the MC-REFL beam dump on the AP-ISCT: 0.9 W.
Power on the MC-REFL photodiode: 92 mW.

High-power metal beam shields were installed around the MC-REFL beam path
between the AP viewport and the MC-REFL beam dump.
HIGH POWER LASER BEAM PATH warning signs were placed on the table frame and top
covers.

Last week I placed a small monitor on top of the OOC that
shows the resonant spot of MC2. Please keep an eye on this monitor
when working on the AP-ISCT.

The AP table should NOT be left uncovered. One experienced laser operator
has to be present if the top is removed and an IR-viewer scan is required.
We need your full cooperation to keep this lab safe.
Attachment 1: P1020197.JPG
Attachment 2: mcrefl3.JPG
  442   Thu Apr 24 14:10:26 2008  rob  Update  Locking  locking work
Rob, Johnnie

We made some progress on locking last night (Wed night): we were able to hand off (briefly) the CARM-MCL path to the REFL-DC error signal. We tried this because we suspect that the reason PO-DC is not a good CARM error signal is that, at low powers, the DC light level in the recycling cavity is dominated by the +f2 RF sideband. Thus, REFL-DC should work a bit better at low powers, which it did. It wasn't super stable, though, so it will require a bit of work to make the transition reliable & stable. The next things to work on include setting the AO path gain properly and possibly going to higher arm powers before handing off (thus increasing the discriminant).

Another thing we found is that the alignment scripts are not working in an ideal fashion. Running the alignment scripts for the two arms (XARM & YARM) leaves the Michelson badly misaligned, making it impossible to get good DRM alignment. This will have to be fixed.
  441   Thu Apr 24 11:50:10 2008  josephb  Summary  Computer Scripts / Programs  Useful tidbits learned while tracking the network setup
In process of understanding the network setup I've learned several things:

1) The status lights on C0DAQ_RFMNETWORK.adl are controlled by the fiber network, as opposed to the ethernet network. However, even if everything is working properly on the VME end, you may still need to reboot it in order to be able to contact it via the ethernet (ssh or telnet).

2) After disconnecting the hub out by 1Y9, I was able to telnet into c1vac1, but not c1vac2. I was told that the Turbo pump and Ion pump readbacks on C0VACMONITOR.adl had not been working for a while (years?). So I went out and rebooted the c1vac2 card. This seemed to restore the EPICS channels, and we now have correct readbacks on the turbo pumps. The ion pumps are all reading no voltage, which is good because they're turned off. However, C1:Vac-IPSE_mon is reading "On", although Steve assures me the actual unit is currently off, so there may be a minor channel issue there.
  440   Wed Apr 23 22:39:54 2008  Andrey  DAQ  Computer Scripts / Programs  Problem with "get_data" and slow PEM channels

It turns out that I cannot read minute trends for the slow weather channels from more than about 1000 seconds in the past (roughly more than 15 minutes ago) using the "get_data" script.

For comparison, I tried the MC1 slow channels, and a similar problem did not arise there. Probably something is wrong with the memory of the slow weather channels. At the same time, I can see minute trends in Dataviewer as far back as I want.

In response to
>> get_data('C1:PEM-weather_outsideTemp', 'minute', gps('now') - 3690, 3600);
I get the error message:
"Warning: Missing C1:PEM-weather_outsideTemp M data at 893045156".
  439   Tue Apr 22 22:51:30 2008  rana  Configuration  IOO  McWFS Status
I've been working a little on the MC WFS in the last few days. I have made many
changes to the sensing matrix script and also to the MCWFSanalyze.m script.

The output matrix, as it was, was not bad at low frequencies but was making noise in
the ~1 Hz band. Turning the gain way down made it do good things at DC but did not make
things work at higher frequencies.

The output matrix generating script now works after Rob fixed the XYCOM issue. Not sure
what was up there. As Caryn mentioned, the SUS2.ini channels were all zero after Andrey's
PEM power cycle a few days ago. Rob booted c1susvme to get the SUS1 channels back, and
today we did c1susvme2 to get IOO-MC_L et al. back.

Even after doing the matrix inversion there is some bad stuff in the output matrix. I
checked that the sensing matrix measurement has good coherence and I measured and set the
MC WFS RF phases (they were off by ~20-30 deg.). Still no luck.
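
For context, the inversion step itself is simple; a hypothetical Matlab sketch (made-up numbers, not the actual MCWFSanalyze.m output):

% Hypothetical example of the output-matrix step: rows = sensed signals
% (e.g. WFS1, WFS2), columns = mirror drives.  The numbers are made up.
S = [ 1.00  0.35 ;
      0.20  0.90 ];
Mout = pinv(S);            % output matrix, so that S*Mout ~ identity
disp(S*Mout)               % large off-diagonal or imaginary terms in the measured
                           % sensing matrix (as seen for the MC1 drive) spoil this step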

My best guess now is that the RG filters I've used for POS damping, together with the movement of the
beam on the MC mirror faces, have made a POS<->YAW instability at low frequencies. My next
move is to revert to velocity damping and see if things get better. We should also try redoing
the A2L on MC1-3.
  438   Tue Apr 22 22:19:02 2008  rob  Metaphysics  lore  jiggling sliders

In the interests of tacit communication of scientific knowledge, I here reveal a nugget of knowledge which may or may not prove useful to new LIGOites: sometimes when front-end machines are rebooted, the hardware they control can wind up in a state which is not accurately represented by the EPICS values you may see. This can be easily rectified by momentarily changing the EPICS settings in question. For reference, this came up tonight in the context of the whitening gain sliders for the TransMon QPDs.
  437   Tue Apr 22 17:08:04 2008  Caryn  Update  IOO  no signal for C1:IOO-MC_L
C1:IOO-MC_L signal was at zero for the past few days
  436   Tue Apr 22 16:17:48 2008  rob  Update  SUS  end station sus front-end bug fix

Quote:
installed and started new susEtmx.o and susEtmy.o to fix a problem with ETMY optical lever variables.


What Alex means is that the EPICS values for the ETMY optical levers were being clobbered in the RFM. The calculations were being done correctly in the FE, so the DAQ/testpoints were working--it was just the EPICS/RFM communication via c1losepics that was bugged. This was a result of the recent SUS code changes to accept inputs from the ASS for adaptive feedforward.
  435   Tue Apr 22 10:59:24 2008  rob  Update  SUS  MC1 electronics busted

Quote:
I spent some time trying to fix the utter programming fiasco which was our MCWFS diagonalization script.

However, it still didn't work. Loops unstable. Using the matrix in the screen snapshot is OK, however.

Finally, I realized from looking at the imaginary part of the output matrix that there was something
wrong with the MC1 drive. The attached JPG shows TFs from pit-drives of the MC mirrors to WFS1.

MC1 & MC3 are supposed to have 28 elliptic low pass filters in hardware for dewhitening. The MC2
hardware is different and so we have given it a software 28 Hz ELP to compensate. But it looks like
MC1 doesn't have the low pass (no phase lag). I tried switching its COIL FM10 filters to make it
switch but no luck.

We'll have to engage the filters to make the McWFS work right and to get the MC noise down. This
needs someone to go check out the hardware I think.

I have turned the gain way down and this has stabilized the MC REFL signal as you can see from the StripTool screen.


This was just because the XYCOM was set to switch the "dewhites" based on FM9 rather than FM10. To check whether the hardware ellipDW filters were engaged, I drove MC1 & MC3 in position (using the MCL bank), and looked at the transfer functions MC2_MCL/MC1_MCL and MC2_MCL/MC3_MCL. This method uses the mode cleaner length servo to enable a relatively clear transfer function measurement of the ellipDW, modulo the loop gain of MCL and the fact that it's really hard to measure an ELP cascaded with a suspension. The hardware and the switching appear to be working fine.

It's now set up such that the hardware is ENGAGED when the coil FM10 filters are OFF, and I deleted all the FM10 filters from the coils of MC1 and MC3. Since we don't switch these filters on and off regularly, I see no need to waste precious SUS processor power on filters that just calculate "1".
  434   Tue Apr 22 08:34:22 2008  josephb  Configuration  Cameras  Current Network Diagram
The attached network diagram has also been added to the 40m Wiki at http://lhocds.ligo-wa.caltech.edu:8000/40m/Image_Processing_with_GigE_Cameras
Attachment 1: Network.pdf
  433   Mon Apr 21 13:12:21 2008  rob  Update  Computer Scripts / Programs  tdsread bugs

Quote:
There seems to be a problem with reading the C1:IOO-MASTER_OVERFLOW field
when it is read in as part of an array. The only way for me to describe it
is to just attach the terminal output in this entry...this is mainly for
Matt and Rob
.


I first noticed that the output of the MC-WFS sensing matrix was different than
the outputs from a year ago, namely that the excitation channel was not being
processed and outputted to the file. This made the output matrix diagonalization
scripts fail.

I noticed that there are several different copies of tdsread.cc sitting around.
Looks like they have been hacked in the last year but I am not sure if this
excitation channel readback is an intentional change; email has been sent to the
authors to find out -- they will probably post some kind of response in the log
to resolve what's up.


My guess is that the problem with the IOO channel is not related, but I'm not sure:
op440m:WFS>set ioo_head = "${ifo}:IOO-"
op440m:WFS>set sus_head = "${ifo}:SUS-"
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3
_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${ioo_head}MAS
TER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3_MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0 0 0
op440m:WFS>echo `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
0
op440m:WFS>echo "tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW"
tdsread C1:SUS-MC1_MASTER_OVERFLOW C1:IOO-MASTER_OVERFLOW C1:SUS-MC2_MASTER_OVERFLOW
op440m:WFS>



This is the same bug described in entry 180. I believe it has nothing to do with tdsread, which did not change in the time period before the bug appeared, but perhaps has something to do with other EPICS libraries somewhere (tdsread relies on these epics libraries to do its dirty work). Here is entry 180 for reference:


Quote:
tdsread has developed a strange new illness, whereby it cannot read EPICS values from two subsystems at once (e.g., getting an LSC and SUS value simultaneously). I thought this might have something to do with the fact that both losepics and iscepics are running on the same box,
but the same thing happens with IOO EPICS records, so that's not the culprit.

This is new behaviour, and it's only happening on the solaris machines. I suspect some ENV/cshrc juju has caused it, as the tdsread executable is the same one from April, and I don't think our EPICS infrastructure has changed otherwise. In the near term we can either try running the scripts on linux, or modify the IFO scripts to not do these types of calls.


The solution that's been in effect for the past few months has just been to modify the scripts to not make these kinds of calls.
  432   Mon Apr 21 12:58:42 2008  rob  Update  ASS  check adaptive

Quote:


Caryn Palatchi (a Caltech undergrad who just started working with us)
illustrated to me today that using even 1000 FIR taps is not very effective
for low frequency noise cancellation if you have a 2048 Hz sample rate. More
precisely, the asymptotic Wiener filter which our 'LMS' algorithm converges
to, can often amplify the noise at frequencies below f_sample/N_taps.

A less obvious thing that she also noticed is that there is almost no cancellation
of the 16.25 Hz bounce mode when using such a short filter. That's because that
mode is fairly high Q: the transfer function from the Z-ACC to the cavity signal
goes through the high-Q vertical suspension resonance; the FF signal we send back
goes through the low-Q horizontal pendulum response only. Therefore the filter
needs to be able to simulate ~100 cycles at 16.25 Hz in order to cancel that peak.

Duh.

The message here is: we need to find a computationally efficient way to do FIR filtering
or its not going to ever be cool enough to help us find the Crab.


This is the reason for the "RDNSAMP" parameter in the ASS code. The FIR filtering is applied at the downsampled rate, not the machine rate. So, if RDNSAMP=32, the effective sampling rate of the FIR filter is 64 Hz, and thus noise cancellation should be good down to 64 Hz/1000, or 64 mHz, and the filter has an impulse response that extends to ~15 secs. I'm not convinced the filter length is what's limiting the performance at the bounce mode, but I agree that a faster FIR implementation would be good.
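
As a quick Matlab check of the numbers being argued here (a sketch of the arithmetic only, not the ASS code):

% Frequency floor and filter memory for a 1000-tap FIR at two sample rates.
Ntaps   = 1000;
fs_fast = 2048;                       % machine rate [Hz]
fs_slow = 2048/32;                    % RDNSAMP = 32  ->  64 Hz

f_floor = [fs_fast fs_slow]/Ntaps     % ~2 Hz and 64 mHz
T_mem   = Ntaps./[fs_fast fs_slow]    % ~0.5 s and ~15.6 s of impulse response
cycles  = 16.25*T_mem                 % cycles of the 16.25 Hz bounce mode covered:
                                      % ~8 at 2048 Hz vs ~250 at 64 Hz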
  431   Sun Apr 20 23:39:57 2008  rana  Summary  SUS  MC1 electronics busted
I spent some time trying to fix the utter programming fiasco which was our MCWFS diagonalization script.

However, it still didn't work. Loops unstable. Using the matrix in the screen snapshot is OK, however.

Finally, I realized from looking at the imaginary part of the output matrix that there was something
wrong with the MC1 drive. The attached JPG shows TFs from pit-drives of the MC mirrors to WFS1.

MC1 & MC3 are supposed to have 28 elliptic low pass filters in hardware for dewhitening. The MC2
hardware is different and so we have given it a software 28 Hz ELP to compensate. But it looks like
MC1 doesn't have the low pass (no phase lag). I tried switching its COIL FM10 filters to make it
switch but no luck.

We'll have to engage the filters to make the McWFS work right and to get the MC noise down. This
needs someone to go check out the hardware I think.

I have turned the gain way down and this has stabilized the MC REFL signal as you can see from the StripTool screen.
Attachment 1: mcwfs.jpg
  430   Sun Apr 20 20:29:46 2008  rana  Update  Computer Scripts / Programs  tdsread bugs
There seems to be a problem with reading the C1:IOO-MASTER_OVERFLOW field
when it is read in as part of an array. The only way for me to describe it
is to just attach the terminal output in this entry...this is mainly for
Matt and Rob
.


I first noticed that the output of the MC-WFS sensing matrix was different than
the outputs from a year ago, namely that the excitation channel was not being
processed and outputted to the file. This made the output matrix diagonalization
scripts fail.

I noticed that there are several different copies of tdsread.cc sitting around.
Looks like they have been hacked in the last year but I am not sure if this
excitation channel readback is an intentional change; email has been sent to the
authors to find out -- they will probably post some kind of response in the log
to resolve what's up.


My guess is that the problem with the IOO channel is not related, but I'm not sure:
op440m:WFS>set ioo_head = "${ifo}:IOO-"
op440m:WFS>set sus_head = "${ifo}:SUS-"
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3
_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${ioo_head}MAS
TER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${ioo_head}MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0
op440m:WFS>set oflows = `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW ${sus_head}MC3_MASTER_OVERFLOW`
op440m:WFS>echo $oflows
0 0 0
op440m:WFS>echo `tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW`
ERROR: C1:IOO-MASTER_OVERFLOW value not read
0
op440m:WFS>echo "tdsread ${sus_head}MC1_MASTER_OVERFLOW ${ioo_head}MASTER_OVERFLOW ${sus_head}MC2_MASTER_OVERFLOW"
tdsread C1:SUS-MC1_MASTER_OVERFLOW C1:IOO-MASTER_OVERFLOW C1:SUS-MC2_MASTER_OVERFLOW
op440m:WFS>
  429   Sun Apr 20 18:23:27 2008  rana  Summary  LSC  locking attempts
I noticed that the adaptive FF for the MC had stopped doing anything; this turned out
to be that the MC lost lock and the mcdown script turned off the FF path to MC1.

Although there's no elog, it looks like there were ~60 attempts at locking the IFO
between 12:38 and 4:27 on Saturday afternoon. I'm attaching here a plot showing
lock attempt durations and a histogram of lock times.
Attachment 1: quix.png
  428   Fri Apr 18 19:46:08 2008  rana  Update  ASS  check adaptive
I restarted the adaptive code today using 'startass' and 'upass'.
I moved them into the scripts/ASS/ subdirectory.

Things seem OK. With a MU=0.03 and a TAU=0.00001, there is still
a good factor of 10 reduction of the 3 Hz stack peak from the MC2
drive by doing FF into MC1.

I edited the ASS-TOP screen so that we could see such small numbers. I
also re-aligned the MC SUS to match the input beam (mainly MC3). The
cavity was locking on a TEM10 mode mostly -- we should look in the SUS
OSEM trends to see if MC3 has moved a lot in the last month or so.

Caryn Palatchi (a Caltech undergrad who just started working with us)
illustrated to me today that using even 1000 FIR taps is not very effective
for low frequency noise cancellation if you have a 2048 Hz sample rate. More
precisely, the asymptotic Wiener filter which our 'LMS' algorithm converges
to, can often amplify the noise at frequencies below f_sample/N_taps.

A less obvious thing that she also noticed is that there is almost no cancellation
of the 16.25 Hz bounce mode when using such a short filter. That's because that
mode is fairly high Q: the transfer function from the Z-ACC to the cavity signal
goes through the high-Q vertical suspension resonance; the FF signal we send back
goes through the low-Q horizontal pendulum response only. Therefore the filter
needs to be able to simulate ~100 cycles at 16.25 Hz in order to cancel that peak.

Duh.

The message here is: we need to find a computationally efficient way to do FIR filtering
or its not going to ever be cool enough to help us find the Crab.
Attachment 1: 0052_xray_thm45.jpg
  427   Fri Apr 18 16:48:13 2008  Andrey  Update  PEM  Rain collector of weather station

Today the rain collector of our weather station was cleaned. As a result, we checked that the rain indication on the weather monitor and on the MEDM screens is alive and working properly. I am adding some details about the roof sensors to the wiki-40 page about the weather station. See especially the link "More description of the roof sensors and their interaction with UNIX computers" from the main Weather Station page in wiki-40.

Pictures of the rain collector before (dirty, the opening is fully clogged with dust and dirt) and after (clean opening in the bottom of the bowl) the cleaning are attached.
Attachment 1: DSC_0520--before.JPG
Attachment 2: DSC_0537--after.JPG