After asking Alex specifically what he did yesterday after I left, he indicated he copied a bunch of stuff from Hanford, including the latest gds, fftw, libframe, and root. We also now have the new dtt code. But those were apparently for the Gentoo build. After asking Alex about the ezca tools this morning, he discovered they weren't compiled in the gds code he brought over. We are in the process of getting the source over here and compiling the ezca tools.
Alex is indicating to me that the currently compiled new gds code may not run on Centos 5.5, since it was compiled on Gentoo (which is what our new fb is running, and apparently what they're using for the front ends at Hanford). We may need to recompile the source on our local Centos 5.5 control machines to get some working gds code. We're in the process of transferring the source code from Hanford. Apparently this latest code is not in SVN yet, because at some point he needs to merge it with other work people have been doing in parallel, and he hasn't had the time yet to do the work necessary for the merge.
For the moment, Alex is undoing the soft link changes he made pointing gds at the latest code he copied, and pointing it back at the original install we had.
I found that several linux libraries have been moved around and disabled today. In particular, I see a bunch of new stuff in apps/linux/ and ezca tools are not working.
Also found that someone has pulled the power cable to the function generator I was using to set the VCO offset. This is the one on top of the Rb clocks. Why?? Why no elog? This is again a big waste of time.
We found a small PCB defect, excess copper shorting a circuit on the daughter board.
It was removed, and the signal on the mixer monitor path is now working properly.
We were checking the new TTFSS up to test 10a in the instructions, E1000405-v1. There was no signal at the MIXER mon channel.
It turned out that the U3 OpAmp on the daughter board, D040424, was not working because the circuit path for leg 15 was shorted
by the board's defect. We can see from Fig. 1 that the contact for the OpAmp's leg (2nd from left) touches ground.
We used a knife to scrape it off, see Fig. 2, and now this part is working properly.
The vertex crane drive overheated and stopped functioning. The service man will be here tomorrow morning.
The crane was only turned on for maybe about 5 minutes. The vertical drive was fine for a while, but the horizontal drive did not work at all.
The crane is tagged out again and the controller box is cooling down.
RA: Nice screen work. The old screens had a 'slow' slider effect when ramping the bias so that we couldn't whack the optic too hard. Is the new one instantaneous?
Looking at the sliders, I apparently still need to connect them properly; there's a mismatch between the medm screen channel name and the model name. At the moment there is no "slow" slider effect implemented, so they are effectively instantaneous. Talking with Alex, he suggests writing a little c-code block and adding it to the model. I can use the c code used in the filter module ramps as a starting point.
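A minimal sketch of the ramp logic such a block could implement (Python here for illustration only; the real thing would be a small C block compiled into the model, and all names are hypothetical):

```python
def ramp_step(current, target, rate, dt):
    """Move `current` toward `target` by at most rate*dt per cycle.

    Sketch of the 'slow slider' effect: the bias never jumps, it slews
    toward the slider value at a fixed rate.
    """
    step = rate * dt
    if abs(target - current) <= step:
        return target
    return current + step if target > current else current - step

# Example: ramp a bias from 0 V toward 5 V at 1 V/s with a 10 ms cycle
v = 0.0
for _ in range(100):              # 1 second of front-end cycles
    v = ramp_step(v, 5.0, rate=1.0, dt=0.01)
```

After one second the output has moved only 1 V toward the 5 V target, which is the "can't whack the optic" behavior the old screens had.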
(Koji, Yuta, Kiwamu)
Now the pressure at P1 is 740 torr, which is close to the atmospheric pressure of 760 torr.
We changed the air cylinder twice this evening, and the last cylinder ran out at about 23:00.
We left it as it is. Steve is going to put the finishing touches on it tomorrow morning.
took two runs of the script as usual
The medm screens have been updated further, with the hidden matrices added in bright colors. An example screen shot is attached.
Megatron has been renamed c1ioo and moved to the martian network. Similarly, c1sus and c1iscex are also on the martian network. Medm screens can be run on any of the control machines and they will work.
Currently the suspension controller is running on c1sus.
The frame builder is currently running on the fb machine *however* it is not working well. Test points and daq channels on the new front ends tended to crash it when Alex started the mx_stream to the fb via our new DAQ network (192.168.114.XXX, accessible through the front ends or fb - has a dedicated 1 gigabit network with up to 10 gigabit for the fb). So for the moment, we're running without front end data. Alex will be back tomorrow to work on it.
Alex claimed to have left the frame builder in a state where it should be recording slow data; however, I can't seem to access recent trends (i.e., since we started it today). The frame builder throws an error like "Couldn't open raw minute trend file '/frames/trend/minute_raw/C1:Vac-P1_pressure'", for example. Realtime access seems to work for slow channels, however. Remember to connect to fb, not fb40m. So it seems the fb is still in a mostly non-functional state.
Alex also started a job to convert all the old trends to the correct new data format, which should finish by tomorrow.
Blocked PSL output beam into IFO
Checked: HV at IOO & OMC are off, jam nuts in position.
Closed V1 and VM3, opened VV1 to N2 regulator
We are venting at 1 Torr/min rate
We added the Thorlabs HV Driver in between the FSS and the NPRO today. The FSS is locking with it, but we haven't taken any loop gain measurements.
This box takes 0-10 V in and puts out 0-150 V. I set up the FSS SLOW loop so that it now servos the output of FAST to be at +5 V instead of 0 V. This is an OK
temporary solution. In the future, we should add an offset into the output of the FSS board so that the natural output is 0-10 V.
I am suspicious that the Thorlabs box has not got enough zip to give us a nice crossover and so we should make sure to measure its frequency response with a capacitive load.
Issues I notice on first glance:
How much current do you need for each voltage?
GE-82 was the only PNP transistor I could find in the lab. It's quite old, but we just want to confirm that the other components are still functioning.
Similarly, we can confirm the functionality of the other components by skipping those current-boost transistors,
if we don't need more than 30 mA.
Q3, a PZT2222A transistor, on D0901846 was replaced by a GE-82. However, the board is still not fully functional.
Since Q3, a PZT2222A, was broken, I went to Wilson house and got some SP3904's as replacements. But somehow I broke one during
installation, did not notice it, and resumed the test. When I got to test 8 on the list, the TTFSS did not work as specified.
Koji checked and found that the -15V, Nref, and Vref voltage outputs were not working correctly. So the SP3904 I installed was removed
and replaced with another SP3904 by Koji, and Vref is now working.
The Q4 transistor is broken as well; it was replaced by a GE-82.
Q1 might be broken too since -15V out is not working.
I'll go to Wilson house to get more transistors next week.
After the broken parts have been replaced, I have to make sure that I separate the power supply board from the rest of the circuit and
check that all voltage outputs are working, then reconnect the board and check that the input current is reasonable before resuming the test.
I hope today's wrong input voltage hasn't damaged anything else.
To startup medm screens for the new suspension front end, do the following:
1) From a control room machine, log into megatron
ssh -X megatron
2) Move to the new medm directory, and more specifically the master sub-directory
3) Run the basic sitemap in that directory
medm -x sitemap.adl
The new matrix of filters replacing the old ULPOS, URPOS, etc type filters is now on the screens. This was previously hidden. I also added the sensor input matrix entry for the side sensor.
Lastly, the C1SUS.txt filter bank was updated to place the old ULPOS type filters into the correct matrix filter bank.
The suspension controls still need all the correct values entered into the matrix entries (along with gains for the matrix of filter banks), as well as the filters turned on. I hope to have some time tomorrow morning to do this, which basically involves looking at the old screens and copying the values over. The watch dogs are still controlled by the old control screens. This will be fixed on Monday when I finish switching the front ends over from their sub-network to the main network, at which point logging into megatron will no longer be necessary.
The squeezing open frame rack was moved from the south side of the PSL enclosure to the north side of the SP table.
AC power breaker is PC-2 #1
We found that a transistor was broken by yesterday's spark, too. We partially fixed the TTFSS, and it should be enough for testing purposes.
From yesterday's test, we found that the RF amplifier for the LO signal was broken. There was no spare in the electronics shop at Downs,
so we shorted across it for now. Another broken part was a transistor, Q3 (PZT2222A), on D0901846.
It was removed, and the two connections for Q3's legs 1 and 3 were shorted together. Now the voltages out of the regulators are back to normal.
We are checking a MAX333A switch, U6A on D0901894. It seems that the voltage that controls the switch disappears.
There might be a bad connection somewhere. This will be investigated next.
Big Johnny and I hacked a function generator output into the cross-connect of the 80 MHz VCO driver so that we could modulate the
amplitude of the light going into the RefCav. The goal of this is to measure the coefficient between cavity power fluctuations and the
apparent length fluctuations. This is to see if the thermo-optic noise in coatings behaves like we expect.
To do this we disconnected the wire #2 (white wire) at the cross-connect for the 9-pin D-sub which powers the VCO driver. This is
called VCOMODLEVEL (on the schematic and the screen). In the box, this modulates the gain in the homemade high power Amp which
sends the actual VCO signal to the AOM.
This signal is filtered inside the box by 2 poles at 34 Hz. I injected a sine wave of 3 Vpp into this input. The mean value was 4.6 V. The
RCTRANSPD = 0.83 Vdc. We measure a peak there of 1.5 mVrms. To measure the frequency peak we look in
the FSS_FAST signal from the VME interface card. With a 10 mHz linewidth, there's no peak in the data above the background. This signal
is basically a direct measure of the signal going to the NPRO PZT, so the calibration is 1.1 MHz/V.
We expect a coefficient of ~20 Hz/uW (input power fluctuations). We have ~1 mW into the RC, so we might expect a ~20 Hz frequency shift.
That would be a peak-height of 20 uV. In fact, we get an upper limit of 10 uV.
Later, with more averaging, we get an upper limit of 1e-3 V/V, which translates to 1e-3 * 1.1 MHz / 1 mW ~ 1 Hz/uW. This is substantially lower
than the numbers in most of the frequency stabilization papers. Perhaps this cavity has very low absorption?
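The arithmetic in the last two paragraphs can be spelled out as a back-of-envelope check, using only the numbers quoted above (calibration and power as stated in this entry):

```python
# Quoted values: FSS_FAST calibration and power into the reference cavity
pzt_cal = 1.1e6              # Hz/V, FSS_FAST -> NPRO PZT calibration
P_rc    = 1e-3               # W into the RefCav

# An expected ~20 Hz frequency shift, expressed as a peak height in FSS_FAST
peak_V = 20.0 / pzt_cal      # ~18 uV, i.e. the "~20 uV" quoted above

# The later 1e-3 V/V upper limit, converted to a coupling coefficient
coeff = 1e-3 * pzt_cal / (P_rc / 1e-6)   # Hz per uW of power fluctuation
```

This reproduces the ~1 Hz/uW upper limit, versus the ~20 Hz/uW expectation.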
mafalda is up now.
I found that the cable for mafalda (the sole red cable) had a broken latch.
The cable was about to fall off from the switch. As first aid, I used this technique to put on a new latch, and plugged it back into the switch.
Now I can log into it. I did not reboot it.
I am guessing that the NFS file system hangup may have caused some machines to get into an awkward state. We may be best off doing a controlled power cycle of everything...
Alex came over this morning and we began work on the frame builder change over. This required fb40m be brought down and disconnected from the RAID array, so the frame builder is not available.
He brought a Netgear switch which we've installed at the top of the 1X7 rack. This will eventually be connected, via Cat 6 cable, to all the front ends. It is connected to the new fb machine via a 10G fiber.
Alex has gone back to Downs to pickup a Symmetricon (sp?) card for getting timing information into the frame builder. He will also be bringing back a harddrive with the necessary framebuilder software to be copied onto the new fb machine.
He said he'd also like to put a Gentoo boot server on the machine. This boot server will not affect anything at the moment, but it's apparently the style the sites are moving toward: a single boot server and diskless front end computers running Gentoo. For the moment we are sticking with our current Centos real time kernel (which is still compatible with the new frame builder code), but this would make a switch over to the new system possible in the future.
At the moment, the RAID array is doing a file system check, and is going slowly while it checks terabytes of data. We will continue work after lunch.
Punchline: things still don't work.
svn is back after starting apache on nodus.
Zach> Nodus seemed to be working fine again, and I was browsing the elog with no
Zach> problem. I tried making an entry, but when I started uploading a file it
Zach> became unresponsive. Tried SSHing, but I get no prompt after the welcome
Zach> blurb. ^C gives me some kind of tcsh prompt (">"), which only really
Zach> responds to ^D (logout). Don't know what else to do, but I assume someone
Zach> knows what's going on.
By gracefully rebooting nodus, the problem was solved.
It (">") actually was the tcsh prompt, but any commands using shared or dynamically linked libraries appeared non-functional.
I could use
to browse the directory tree. The main mounted file systems like /, /usr, /var, /cvs/cds/caltech looked fine.
I was afraid that the important library files were damaged.
in order to flush the file systems.
These should run even without the libraries as mount must properly work even before /usr is mounted.
They indeed did something to the system. Once I launched a new login shell, the prompt was still ">",
but now I could use most of the commands.
/, /usr, /var, /cvs/cds/caltech
I have rebooted by usual sudo-ing and now the services on nodus are back to the functional state again.
# nodus was working in the evening at around 9pm. I even made an elog entry about that.
# So I'd like to assume this is not directly related to the linux1 incident. Something else could have happened.
I tested the new table-top frequency stabilization system (TTFSS).
I haven't finished yet, and I accidentally fried one amplifier in the circuit.
We received three sets of a new TTFSS system which will replace the current FSS.
It needs to be checked that the system works as specified before we can use it.
I followed the instructions written in E1000405-v1.
The first test checked how much current was drawn from the +/- 24 V power supply.
+24 V drew 350 mA and -24 V drew 160 mA, as shown on the power supply's current monitor.
They exceeded the specified value of 200 +/- 20 mA, but nothing went wrong during the test.
Nothing got overheated and all voltage outputs were correct, so I proceeded.
I have gone down the list to 6, and everything works as specified.
- Correcting the document for the test procedure
I found a few errors in the instruction document. I'll notify the author tomorrow.
- How GVA-81 amplifier on D0901894 rev A got fried
During the test, I used a mirror on a stick that looked like a dental tool to see under the board.
Unfortunately, the steel edge touched the board and caused a spark. The voltage on -24 dropped to -16.
I think this happened because the power supply tried to limit the current from the shorted circuit;
since I shorted it only briefly (a blink of an eye), it could not reduce the voltage to zero.
When I was checking the power supply and about to adjust the voltage back to the right value
(about 4-5 seconds after the spark,) smoke came out of the circuit.
Koji investigated the circuit and found that a GVA-81 amplifier was broken.
This was checked by applying 5 V to the amp and slightly increasing the current:
the voltage dropped to zero, meaning the broken amp was acting as a short circuit.
I'll see if I can find a replacement at the EE lab at Downs.
If I cannot find a spare, I'll replace it with a resistor and resume the test procedure,
since it amplifies the LO signal, which won't be used during the test.
Net switch mumbo-jumbo:
Although Rana is going to buy a replacement for the Netgear switch for martian, I opened the lid of the Netgear since its fan had already stopped working.
The lid of the other network switch, for GC (the black one), was also opened, as it has one broken fan and one noisy, half-broken fan.
I have asked Steve to buy replacement fans. These would also be the replacement of the replacement.
During the work, it seems I accidentally toggled the power supply of linux1. This led to a lengthy fsck of the storage,
which is why all of the machines that rely on linux1 froze. linux1 is back and the machines look happy now.
If you find any machine disconnected from the network, please consult with me.
The Netgear Network Switch in the top shelf of Nodus' rack has a broken fan. It is the one interfaced to the Martian network.
The fan must have broken, and it has now started to produce a loud noise. It's like a truck is parked in the room with the engine running.
Also the other network switch, just below the Netgear, has one of its two fans broken. It is the one interfaced with the General Computer Side.
I tried to knock them to make the noise stop, but nothing happened.
We should consider trying to fix them. Although that would mean disconnecting all the computers.
[Aidan, Tara, Joe]
We pulled out what used to be the LSC/ASC fiber from the 1Y3 arm rack, and then redirected it to the 1X1 rack. This will be used as the c1ioo 1PPS timing signal. So c1ioo is using the old c1iovme fiber for RFM communications back to the bypass switch, and the old LSC fiber for 1PPS.
The c1sus machine will be using the former sosvme fiber for communications to the RFM bypass switch. It already had a 1 PPS timing fiber.
The c1iscex machine had a new timing fiber already put in, and will be using the c1iscey vme crate's RFM for communication.
We still need to pull up the extra blue fiber which was used to connect c1iscex directly to c1sus, and reuse it as the 1PPS signal to the new front end on the Y arm.
Alex has said he'll come in tomorrow morning to install the new FB code.
I've made a first pass at a rack diagram for the 1X1 and 1X2 racks, attached as png.
Gray is old existing boards, power supplies etc. Blue is new CDS computers and IO chassis, and gold is for the Alberto's new RF electronics. I still need to double check on whether some of these boards will be coming out (perhaps the 2U FSS ref board?).
John Miller has arrived from Australia with 3 bags of Wagonga Coffee. Trade bargaining has started on
250 mgs of Sumatran Mandehling, Timur, and Papua New Guinea.
The PSL out 2" OD beam guide tube was cut 1.5" shorter, to 13.5".
The 10" OD, 0.25" wall Al tube was replaced by a lighter, non-anodized, thinner-walled (0.094") tube of 15.5" length, which is 0.75" shorter.
The new position of the PSL table made these cuts necessary.
[Rana, Koji, Joe]
We pulled the phase shifters in the 1X2 rack out to make room for megatron. Megatron will be converted into c1ioo, and the 8 core, 1U computer will be used as c1lsc. A temporary ethernet cable was run from 1X2 to 1X3 to connect megatron to the same sub-network.
The c1lsc machine was worked on today, setting it up to run the real time code, along with the correct controls accounts, passwords, .cshrc files, etc. It needs to be moved from 1X1 to 1X4 tomorrow.
I talked with Alex this morning, discussing what he needed to do to have a frame builder running that is compatible with the new front ends.
1) We need a heavy duty router as a separate network dedicated to data acquisition running between the front ends and the frame builder. Alex says they have one over at downs, although a new one may need to be ordered to replace that one.
2) The frame builder is a linux machine (basically we stop using the Sun fb40m and start using the linux fb40m2 directly.).
3) He is currently working on the code today. Depending on progress today, it might be installable tomorrow.
Larry stopped by today and had to disconnect the m25 machine (the 1st GC machine on the left as you walk into the control room) because its IP was conflicting with a machine over in Downs. Do not use 126.96.36.199 as the IP on this machine, as it is already assigned to someone else. They couldn't figure out the root password to change it, which is why it is not currently plugged into the network, and it is not to be until an appropriate IP is assigned.
They've asked that whoever set the machine up to please contact them (extension 2974).
Our new 2W Mephisto has a pretty zippy "SLOW" temperature input. Tuning the perl PID servo, I found that the best response came from setting
the "P" and "D" terms to zero. This is because the internal temperature stabilization servo has a fairly high UGF. In the attached
image you can see how the open loop step response looks (the loop is open when the "KI" parameter is set to zero). The internal servo
really has too little damping. There is a 30% overshoot when it gets a temperature step. For this kind of servo Innolight would have done better
to back off on the gain until they got back some phase margin.
New SLOW parameters:
timestep = 1.9 s
KP = 0
KI = 0.035
KD = 0
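With these settings the update the perl PID script performs reduces to a pure integrator. A minimal sketch of that update (function and variable names are illustrative, not those of the actual script):

```python
# SLOW servo parameters as quoted above
KP, KI, KD = 0.0, 0.035, 0.0
timestep = 1.9   # s

def slow_servo_step(error, integral, prev_error):
    """One PID update; with KP = KD = 0 only the integral term acts."""
    integral += error * timestep
    deriv = (error - prev_error) / timestep
    ctrl = KP * error + KI * integral + KD * deriv
    return ctrl, integral
```

Zeroing P and D makes sense here: the laser's internal temperature loop already provides the fast response, so the outer loop only needs to remove the slow drift.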
I installed the blue IP camera from ZoneNet onto the PSL table. It gets its power from the overhead socket and connects via Cat5 to the Netgear switch in the PSL/IO rack.
You can connect to it on the Martian network by connecting to http://192.168.113.201:3037. Your computer must have Java working in the browser to make that work.
So far, this works on rossa, but not the other machines. It will take someone with Joe/Kiwamu level linux savvy to fix the java on there. I also don't know how to fix the host tables, so someone please add this camera to the list and give it a name.
As you can see from the image, it is illuminating the PSL with IR LEDs. I've sent an email to the tech support to find out if we can disable the LEDs.
Cleaned up cables on the top and bottom. Vacuumed both areas. We still have some remaining shedding from the MOPA umbilical and more unknown BNC cables hanging around.
I took the 5565 RFM card out of the IOVME machine so I could put it in the new IO chassis that will be replacing it. It is no longer on the RFM network. This doesn't affect the slow channels associated with the auxiliary crate.
In doing a re-inventory prior to the IOO chassis installation, I re-discovered we are missing an interface board that goes in an IO chassis. This board connects the chassis to the computer and lets them talk to each other. After going to Downs, we remembered Alex had taken a possibly broken interface board back there for testing.
The result of that testing was that it was indeed broken. This was about 2.5 months ago, and unfortunately it hadn't been sent back for repairs or a replacement ordered. It's my fault for not following up on that sooner.
I asked Rolf what the plan for the broken one was. His response was that they were planning on repairing it, and that he'd have it sent back for repair today. My guess is that the turnaround time for that is on the order of 3-4 weeks (based on conversations with Gary), though it could be longer. This will affect when the last IO chassis (LSC) can be made fully functional. I did, however, pick up the 100 foot fiber cable for going between the LSC chassis and the LSC computer (which will be located in 1X3).
As a general piece of information, according to Gary the latest part number for these cards is OSS-SHB-ELB-x4/x8-2.0 and they cost 936 dollars (latest quote).
The day before yesterday, I was cleaning a flow bench in the clean room.
I found that one SOS was standing there. It is the SRM suspension.
I thought of a nice idea:
- The installed PRM is actually the SRM (SRMU04). It is the 2nd best SRM, but not so different from the best one.
==> Use this as the final SRM
- The SRM tower at the clean room
==> Use this as the final PRM tower.
==> The mirror (SRMU03) will be stored in a cabinet.
- The two SOS towers will be baked soon
==> Use them for the ETMs
This reduces the unnecessary maneuver of the suspension towers.
Two SOS suspensions for the ETMs were disassembled and packed for cleaning and baking by Bob.
These suspensions had been stored on the X end flow bench for many years, and looked quite old.
They have some differences to the modern SOSs.
- The top suspension block is made of aluminum and had dog clamps to fix the wires.
- The side bars are not symmetric: the side OSEM can only be fixed at the right bar (left side in the picture).
- EQ stops were made of Viton.
- One of the tower bases seems to have finger prints (of Mike Zucker?).
I found that the OSEM plates had no play. We know that arranging the OSEMs gets quite difficult
in this situation, so the screw holes were drilled out with a larger drill.
We decided to replace all of the screws with new ones, as they are Ag plated and have become corroded
with silver sulfide (Ag2S). I checked our stock in the clean room; we have enough screws.
The attached plots show the PMC cavity line width measurement with 1 mW and 160 mW into the PMC. The two curves on each plot are the PMC transmitted power and the ramp of the fast input of the NPRO. The two measurements are consistent within errors (a few %). The PMC line width is 3.5 ms (FWHM) x 4 V / 20 ms (slope of the ramp) x 1.1 MHz/V (NPRO fast actuator calibration from the Innolight spec sheet) = 0.77 MHz.
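The linewidth arithmetic, spelled out step by step with the numbers above:

```python
# Convert the measured FWHM in time on the scope to a linewidth in Hz
dt_fwhm   = 3.5e-3          # s, FWHM of the transmission peak
ramp_rate = 4.0 / 20e-3     # V/s, slope of the fast-input ramp (4 V / 20 ms)
pzt_cal   = 1.1e6           # Hz/V, NPRO fast actuator (Innolight spec sheet)

fwhm_hz = dt_fwhm * ramp_rate * pzt_cal   # PMC linewidth, ~0.77 MHz
```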
Here is the output of the calculation using Malik Rakhmanov code:
modematching = 8.4121e-01
transmission1 = 2.4341e-03
transmission2 = 2.4341e-03
transmission3 = 5.1280e-05
averageLosses = 6.1963e-04
visibility = 7.7439e-01
fw = 0.77e6; % width of resonance (FWHM) in Hz
Plas = 0.164; % power into the PMC in W
% the following number refer to the in-lock cavity state
Pref = 0.037; % reflected power in W
Ptr = 0.0712; % transmitted power in W
Pleak = 0.0015; % power leaking from back of PMC in W
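One output of the Rakhmanov code can be checked by hand: the visibility is just the fraction of the incident power not reflected in lock. A sketch of that one number (the full code also extracts the mode matching, mirror transmissions, and losses, which need the linewidth and leakage as well):

```python
# In-lock powers quoted above
Plas = 0.164    # W into the PMC
Pref = 0.037    # W reflected in lock

visibility = 1 - Pref / Plas   # fraction of incident power accepted by the cavity
```

This reproduces the visibility = 7.7439e-01 line in the code output.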
- NPRO injection current 1.0 A
- PMC losses ~32%
- FSS AOM diffraction efficiency ~52%
We removed the Lightwave MOPA Controller, PA#102, NPRO206 power supply to make room for the IOO chassis at the 1X1 (south) rack.
The umbilical cord was a real pain to take out. It is shedding its plastic cover. The unused Minco was disconnected and removed.
The ref. cavity ion pump controller/power supply was also temporarily taken out.
We removed the Lightwave MOPA Controller from 1X1 (south). It was a really painful, messy job to pull out the umbilical.
Note: the umbilical is shedding its plastic cover. It is functional, but it has to be taken outside and cleaned. Do not remove it from its plastic bag except in a clean environment.
Now Joe has room for the IOO chassis in this rack.
We also removed the Minco temp controller and the ref. cavity ion pump power supply.
Steps for RFM switch over:
1) Ensure the new frame builder code is working properly:
A) Get Alex to finish compiling the frame builder and test on Megatron.
B) Test the new frame builder code on fb40m (which is running Solaris) in a reversible way. Change directory structure away from Data1, Data2, to use actual times.
C) Confirm new frame builder code still records slow channels (c1dcuepics).
2) Ensure awg, tpman, and diagnostic codes (dtt) are working with the new front end code.
3) Physically move RFM cables from old front ends to the new front ends. Remove excess connections from the network.
4) Merge the megatron/c1sus/c1iscex/c1ioo network with the main network.
A) Update all the network settings on the machines as well as Linux1
B) Remove the network switch separating the networks.
5) Start the new frame builder code on fb40m.
Brilliant! This is the VERY way things are to be conquered!
The RefCav is locked and aligned. I changed the fast gain sign by changing the jumper setting on the TTFSS board. The RefCav visibility is 70%. The FSS loop ugf is about 80 kHz (plot attached) with the FSS common gain maxed out at 30 dB. There is about 50 mW coming out of the laser and a few mW going to the RefCav out of the back of the PMC, so the ugf can be made higher at full power. I have not made any changes to account for the PMC pole (the FSS is after the PMC now). The FSS fast gain was also maxed out at 30 dB to account for the factor of 5 smaller PZT actuation coefficient - it used to be 16 dB according to the (previous) snapshot. The RefCav TRANS PD and camera are aligned. I tuned up the phase of the error signal by putting cables in the LO and PD paths. The maximum response of the mixer output to the fast actuator sweep of the fringe was with about 2 feet of extra cable in the PD leg.
I am leaving the FSS unlocked for the night in case it will start oscillating as the phase margin is not good at this ugf.
The RefCav is locked and aligned. I changed the fast gain sign by changing the jumper setting on the TTFSS board. The RefCav visibility is 70%. The FSS loop ugf is about 80 kHz (plot attached; there is 10 dB of gain in the test point path, which is why the ugf appears at 10 dB when measured using the in1 and in2 spigots on the front of the board) with the FSS common gain maxed out at 30 dB. There is about 250 mW coming out of the laser and 1 mW going to the RefCav out of the back of the PMC, so the ugf can be made higher at full power. I have not made any changes to account for the PMC pole (the FSS is after the PMC now). The FSS fast gain was also maxed out at 30 dB to account for the factor of 5 smaller PZT actuation coefficient - it used to be 16 dB according to the (previous) snapshot. The RefCav TRANS PD and camera are aligned. I tuned up the phase of the error signal by putting cables in the LO and PD paths. The maximum response of the mixer output to the fast actuator sweep of the fringe was with about 2 feet of extra cable in the PD leg.
- connected the TTFSS cables (FSS fast goes directly to NPRO PZT for now)
- measured the reference cavity 21.5 MHz EOM drive to be 17.8 dBm
- turned on the HV for the FSS phase correcting EOM (aka PC) drive
- connected and turned on the reference cavity temperature stabilization
- connected the RefCav TRANS PD
- fine tuned the RefCav REFL PD angle
I completed a LIGO document describing design, construction and characterization of the RF System for the 40m upgrade.
It is available on the SVN under https://nodus.ligo.caltech.edu:30889/svn/trunk/docs/upgrade08/RFsystem/RFsystemDocument/
It can also be found on the 40m wiki (http://lhocds.ligo-wa.caltech.edu:8000/40m/Upgrade_09/RF_System#preview), and DCC under the number T1000461.
I changed the setpoint for the HVAC control (next to Steve) from 73F to 72F. This is to handle the temperature increase in the control room with the AC unit there turned off.
We know that the control setpoint is not linear, but I hope that it settles down after several hours. Let's wait until Tuesday evening before making another change.
On Friday, Valera and I calculated the mode matching into the reference cavity from the AOM.
We scanned the beam profile where the spot should be.
The first beam waist, in the AOM, is 103 um; the lens (f = 183 mm, though I'm not sure if I have the focal length right) is 280 mm away.
The data is attached. The first column is the marking on the rail in inches,
the second column is the distance from the lens, and the third and fourth columns are the
vertical and horizontal spot radii in microns. Note that the beam is very elliptic because of the AOM.
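Taking the quoted f = 183 mm at face value (uncertain, as noted above), the expected beam downstream of the lens can be sketched with a q-parameter (ABCD) calculation, as a consistency check against the scan data:

```python
import math

# Inputs from the entry; the focal length is the uncertain one
lam = 1.064e-6        # m, Nd:YAG wavelength
w0  = 103e-6          # m, measured waist in the AOM
d1  = 0.280           # m, AOM waist to lens
f   = 0.183           # m, nominal lens focal length

zR = math.pi * w0**2 / lam       # Rayleigh range of the input beam
q  = 1j * zR + d1                # q at the lens: waist + free propagation
q  = q / (1 - q / f)             # thin-lens transformation of q

w_lens = math.sqrt(-lam / (math.pi * (1 / q).imag))  # spot radius at the lens
z_new  = -q.real                                     # lens to new waist distance
w0_new = math.sqrt(q.imag * lam / math.pi)           # new waist radius
```

This is a circular-beam idealization; since the beam out of the AOM is very elliptic, the two axes would each need their own q.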