ID   Date   Author   Type   Category   Subject
  4524   Thu Apr 14 12:57:15 2011   josephb   Update   CDS   RFM network happy again

[Joe, Alex]

Problem Symptoms:

There were red lights on the status screen indicating RFM errors for the c1scy, c1mcs and c1rfm processes.

The c1iscey, c1sus machines were receiving data sent over the RFM network from the c1ioo computer with a bad time stamp, a few cycles too late.  The c1iscex computer was receiving data from c1ioo fine.

Problem:

The c1iscex RFM card had gotten into a bad state and was somehow slowing things down/corrupting data.  It didn't affect itself, but due to the loop topology was messing everyone else up.  Basically the only one who wasn't throwing an error was the culprit.

Solution:

Hard power cycling the c1iscex computer reset the RFM card and fixed the problem.

  13643   Tue Feb 20 21:14:59 2018   gautam   Update   CDS   RFM network errors

I wanted to lock the single arm POX/POY config to do some tests on the BeatMouth. But I was unable to.

  • I tracked the problem down to the fact that the TRX and TRY triggers weren't getting piped correctly to the LSC model
  • In fact, all RFM channels from the end machines were showing error rates of 16384/sec (i.e. every sample).
  • After watchdogging ETMX, I tried restarting just the c1scx model - this promptly took down the whole c1iscex machine.
  • Then I tried the same with c1iscey - this time the models restarted successfully without the c1iscey machine crashing, but the RFM errors persisted for the c1scy channels.
  • I walked down to EX and hard rebooted c1iscex.
  • c1iscex came back online, and I ssh-ed in and did rtcds start --all.
  • This brought all the models back online, and the RFM errors on both c1iscex and c1iscey channels vanished.

Not sure what to make of all this, but I can lock the arms now.

  538   Wed Jun 18 16:07:57 2008   rob   Summary   Computers   RFM network down

The RFM network tripped off around noon today. It's still down. The problem appears to be with the EPICS interface (c1dcuepics). Trying to restart one of the end stations yields the error: No response from EPICS.

Possible causes include (but are not limited to): a busted RFM card on c1dcuepics, a busted PMC bus on c1dcuepics, or a busted fiber from c1dcuepics to the RFM switch. We need Alex.
  13436   Tue Nov 21 11:21:26 2017   gautam   Update   CDS   RFM network down

I noticed yesterday evening that I wasn't able to engage the single arm locking servos - it turned out that they weren't getting triggered, which in turn pointed me to the fact that the arm transmission channels seemed dead. Poking around a little, I found that there was a red light on the CDS overview screen for c1rfm.

  • The error seems to be in the receiving model only, i.e. c1rfm, all the sending models (e.g. c1scx) don't report any errors, at least on the CDS overview screen.
  • Judging by dataviewer trending of the c1rfm status word, seems like this happened on Sunday morning, around 11am.
  • I tried restarting both sender and receiver models, but error persists.
  • I got no useful information from the dmesg logs of either c1sus (which runs c1rfm), or c1iscex (which runs c1scx).
  • There are no physical red lights in the expansion chassis that I could see - in the past, when we have had some timing errors, this would be a signature.

Not sure how to debug further...

* Fix seems to be to restart the sender RFM models (c1scx, c1scy, c1asx, c1asy).
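
For reference, that fix could be scripted roughly as below (a sketch only: it assumes passwordless ssh from the workstation to the end machines, that c1asx/c1asy run there alongside c1scx/c1scy, and that "rtcds restart <model>" behaves as it does interactively):

#!/usr/bin/env python
# Sketch of "restart the sender RFM models"; hosts and commands are assumptions.
import subprocess

SENDERS = {
    "c1iscex": ["c1scx", "c1asx"],
    "c1iscey": ["c1scy", "c1asy"],
}

for host, models in SENDERS.items():
    cmd = ["ssh", host, "rtcds", "restart"] + models
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)   # raise if the remote restart fails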

Attachment 1: RFMerrors.png
  1200   Sun Dec 21 14:18:04 2008   Yoichi   Update   Computers   RFM network bypass box's power supply is dead
I restarted the front-end computers by power cycling them one-by-one.
After issuing startup commands, most of them started normally at least by looking
at the output from telnet/ssh.
However, the status monitors of the FE computers on the EPICS screen are still red.
I noticed that all the LEDs on the VMIC 5594 RFM network bypass box are off.
According to the labels, fb40m, c0daqctrl, c0dcu are connected to the box.
This means (I believe) c1dcuepics cannot access the RFM network. So we have no control over
the FE computers through EPICS.

I pushed the reset button on the box, power cycled it, but nothing changed.
I checked the fuse and it was OK. Then I found that the power supply was dead.
It is a small AC adapter supplying +5VDC with a 5-pin DIN like connector.
We have to find a replacement.
  1201   Mon Dec 22 13:48:22 2008   Yoichi   Update   Computers   RFM network bypass box's power supply is dead
As a temporary fix, I cut the cable of the power supply and connected it to the Sorensen power supply +5V on the rack.
Now, the RFM bypass box is powered up, but some LEDs are red, which looks like a bad sign.
I restarted all the FE computers, but this time I got errors during the execution of the startup commands in the VxWorks machines.
The errors are "General Protection Fault" or "Invalid Opcode".
The linux machines do not show errors but still the status lights in EPICS are red.
We need Alex's help. He did not answer the phone, so Alberto left a voice mail.
  1202   Tue Dec 23 10:35:40 2008   Yoichi   Update   Computers   RFM network breakdown mostly fixed
Rana, Rolf, Alberto, Yoichi

The source of the problem was the RFM bypass box, as expected.
Rana pointed out that the long cable I used to bring the 5V from the Sorensen to the box
may cause a large voltage drop considering that the box is sucking ~3A.
So we connected the cable to another power supply (5V/5A linear power supply).
Then the LEDs on the bypass box turned green from red, and everything started to work.

A weird thing is that when I connected the cable to the wrong terminals of the power supply, which
have lower current supply capabilities, the supply voltage dropped to 3V, but the LEDs on the bypass box
still turned green. This means the bypass box can live with 3V.
I noticed that there is a long cable from the Sorensen to the cross connect on the side of the rack, where I
connected my cable to the bypass box. This long cable had a somewhat large resistance (1 or 2 Ohms), which
presumably dropped the supply voltage to less than 3V.
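
A quick Ohm's-law check in python, using only the numbers above (the ~3 A draw and the 1-2 Ohm cable resistance):

I = 3.0                 # A, approximate current drawn by the bypass box
for R in (1.0, 2.0):    # Ohm, the quoted range of the cable resistance
    print("R = %.0f Ohm -> I*R drop = %.0f V out of a 5 V supply" % (R, I * R))
# Even the low end of the range eats 3 V, so the box ending up below 3 V is no surprise.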
Anyway, the bypass box is now on a temporary power supply. Alberto was assigned a task to find a replacement power
supply.

There are two remaining problems.
c1susvme1 fails to start often claiming a DMA error on a Pentek. After several attempts, you can start the machine,
but after a while (1 hour ?) it fails again.
op340m is not responding to ssh login. It responds to ping.
We hooked up a monitor and keyboard (USB because the machine does not have a PS/2 port) to it and rebooted.
At the boot, it briefly displays a message "No keyboard, try TTYa", but after that no display signal.
Steve found me a serial cable. I will try to login to the machine using the serial port.

  610   Tue Jul 1 11:53:38 2008   Yoichi   Update   Computers   RFM network back
I took a tour of the FE machines and power cycled all of them.
After executing the software restart procedures of those computers, the RFM network got back to the normal state.
For some reason, the computers requiring startup.cmd (like c1lsc) halt after running this command. Actually the computer is running ok, but the command freezes. Basically, what it does is simply to load a kernel module. I don't know what is wrong.
Anyway, I just closed the terminal after running startup.cmd and it seems fine for now.
  614   Tue Jul 1 13:34:29 2008   rob   Update   Computers   RFM network back

Quote:

For some reason, the computers requiring startup.cmd (like c1lsc) halt after running this command. Actually the computer is running ok, but the command freezes. Basically, what it does is simply to load a kernel module. I don't know what is wrong.
Anyway, I just closed the terminal after running startup.cmd and it seems fine for now.


This is normal. On the linux RTFEs (Real-Time Front Ends), the real-time code totally hijacks the kernel, disallowing any interrupts. The system thus becomes totally unresponsive while the code is running, and communicates only through the RFM and the VME backplane.
  6760   Wed Jun 6 00:32:22 2012   Jenne   Update   CDS   RFM model is way overloading the cpu

We have too much crap in the rfm model.  CPU time for the rfm model is regularly above 60us, and sometimes in the mid-70's (but sometimes jumps down briefly to ~47us, which is where I think it "used" to sit, but I don't remember when I last thought about that number)

This is potentially causing lots of asynchronous grief.
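
For context, a quick check on why numbers in the 60-75 us range are alarming, assuming the rfm model runs at the standard 16384 Hz front-end rate (an assumption; the rate isn't stated in this entry):

rate_hz = 16384                      # assumed model rate
budget_us = 1e6 / rate_hz
print("cycle budget = %.1f us" % budget_us)   # ~61 us
# CPU times of 60-75 us are at or over this budget, while ~47 us leaves some headroom.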

  2637   Wed Feb 24 12:08:31 2010   Koji   Update   Computers   RFM goes red -> recovered by the nuclear option

Most of the RFM went red this morning. I took the nuclear option and it seemed to be recovered.

  15737   Fri Dec 18 10:52:17 2020   gautam   Update   CDS   RFM errors

As I was working on the IFO re-alignment just now, the rfm errors popped up again. I don't see any useful diagnostics on the web interface.

Do we want to take this opportunity to configure jumpers and set up the rogue master as Rolf suggested? Of course there's no guarantee that will fix anything, and may possibly make it impossible to recover the current state...

Attachment 1: RFMdiag.png
  4516   Tue Apr 12 16:01:33 2011   josephb   Update   General   RFM errors

Problem:

Currently the c1scy, c1mcs, and c1rfm models are reporting an error with receiving some data sent over the GE Fanuc Reflected memory cards.

To be more exact, the C1:SUS-ETMY_ALS signal from the c1gcv FE code on the c1ioo computer going to the Y end is not being received. However, the C1:SUS-ETMY_LSC signal is.  So the physical RFM card seems to be working.

Similarly, the TRY signal is being sent correctly from the Y end computer.  The X end is working fine and receiving both LSC and ALS signals.

The c1mcs and c1rfm models also receive data from the c1ioo computer and are reporting receive errors.

Theory:

Because the RFM cards are transmitting and receiving at least some channels, I'm guessing there were changes made to the C1.ipc file, which defines the memory locations of these various channels on the RFM network, and that when one model was rebuilt, another model using the previous IPC file was not, so one of the computers is going to the wrong place to either read or write data.

Tomorrow, I'm planning on the  following:

1) Clean out the C1.ipc file (/opt/rtcds/caltech/c1/chans/ipc/)

2) Rebuild all models

3) Run activate_daq.py script

4) Restart models via script

If this doesn't clear up the problem, I'll continue  to bug hunt.
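
As a rough sketch, the plan above might be scripted like this (the model list, paths and rtcds usage are assumptions for illustration, not a tested recipe):

#!/usr/bin/env python
# Sketch of the C1.ipc cleanup plan; the model list and rtcds invocations are assumed.
import os
import subprocess

IPC_FILE = "/opt/rtcds/caltech/c1/chans/ipc/C1.ipc"
MODELS = ["c1ioo", "c1gcv", "c1rfm", "c1mcs", "c1scx", "c1scy"]   # hypothetical list

# 1) clean out the IPC file so every model gets freshly assigned addresses
if os.path.exists(IPC_FILE):
    os.rename(IPC_FILE, IPC_FILE + ".bak")

# 2) rebuild and install every model so they all agree on the new addresses
for model in MODELS:
    subprocess.run(["rtcds", "build", model], check=True)
    subprocess.run(["rtcds", "install", model], check=True)

# 3) regenerate the DAQ channel lists
subprocess.run(["activate_daq.py"], check=True)

# 4) restart the models on each front end (commands as used elsewhere in this log)
subprocess.run(["rtcds", "kill", "--all"], check=True)
subprocess.run(["rtcds", "start", "--all"], check=True)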

  14293   Tue Nov 13 21:53:19 2018   gautam   Update   CDS   RFM errors

This problem resurfaced, which I noticed when I couldn't get the single arm locks going.

The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back.

Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though.

Attachment 1: RFMerrors.png
  15609   Sat Oct 3 16:51:27 2020   gautam   Update   CDS   RFM errors

Attachment #1 shows that the c1rfm model isn't able to receive any signals from the front end machines at EX and EY. Attachment #2 shows that the problem appears to have started at ~4:30 am this morning - I certainly wasn't doing anything with the IFO at that time.

I don't know what kind of error this is - what does it mean that the receiving model shows errors but the sender shows no errors? It is not a new kind of error, and the solution in the past has been a series of model reboots, but it'd be nice if we could fix such issues because it eats up a lot of time to reboot all the vertex machines. There is no diagnostic information available in all the places I looked. I'll ask the CDS group for help, but I'm not sure if they'll have anything useful since this RFM technology has been retired at the sites (?).

In the meantime, arm cavity locking in the usual way isn't possible since we don't have the trigger signals from the arm cavity transmission. 


Update 1500 4 Oct: soft reboots of models didn't do the trick so I had to resort to hard reboots of all FEs/expansion chassis. Now the signals seem to be okay.

Attachment 1: RFMstat.png
Attachment 2: RFMerrs.png
  15646   Wed Oct 28 09:35:00 2020   Koji   Update   CDS   RFM errors

I'm starting the model restarts from remote. Then later I'll show up in the lab to do more hard resets.
==> It seems that the RFM errors are gone. Here are the steps.

  1. Shutdown all the watchdogs
  2. login to c1iscex. Shutdown all the realtime models: rtcds kill --all
  3. login to c1iscey. Shutdown all the realtime models: rtcds kill --all
  4. run scripts/cds/rebootC1LSC.sh on pianosa
  5. reboot c1iscex
  6. reboot c1iscey
  7. Wait until all the machines/models are up by the script
  8. restart c1iscex models
  9. restart c1iscey models
  10. some IPC errors are still visible on the CDS status screen. Launch c1daf and c1oaf

 

Attachment 1: Screen_Shot_2020-10-28_at_10.06.00.png
  2488   Fri Jan 8 15:40:14 2010   josephb, alex   Update   Computers   RFM and RCG

Alex added a new module to the RCG, for generating RFMIO using floats.  This has been committed to CVS.

  2195   Fri Nov 6 17:04:01 2009   josephb   Configuration   Computers   RFM and Megatron

I took the RFM 5565 card dropped off by Jay and installed it into megatron.  It is not very secure, as it was too tall for the slot and could not be locked down.  I did not connect the RFM fibers at this point, so just the card is plugged in.

Unfortunately, on power up, and immediately after the splash screen I get "NMI EVENT!" and "System halted due to fatal NMI". 

The status light on the RFM card remains a steady red as well.  There is a distinct possibility the card is broken in some way.

The card is a VMIPMC-5565 (which is the same as the card used by the ETMY front end machine).  We should get Alex to come in and look at it on Monday, but we may need to get a replacement.

  2486   Fri Jan 8 10:38:35 2010   josephb, koji   Update   Computers   RFM and Megatron

Last night, we installed the VMI 5565 RFM card into Megatron.  After turning off the watchdogs for the ETMY optic, we disconnected the RFM fiber, and connected it to megatron, then powered it up. 

We modified the RCG code to have 3 rfmio blocks, which were reading 0x11a1c0 (ascPit), 0x11a1c4 (ascYaw), and 0x11a1c8 (lscPos).  These were connected to the appropriate filter module inputs, and we also added grounds to the front of the rfmio blocks (we looked at the ass code, which was set up that way, so we just did the same thing).  When we started it, however, it didn't read properly.  If we turned off the input and set an offset, it calculated the output of the filter module correctly (i.e. just the offset value), but as soon as we turned on the input, the output went to 0 no matter the offset value, which indicated it was reading something, just not correctly.

After this test, the RFM fibers were reconnected to c1iscey, we rebooted c1iscey, and we confirmed that the system was working properly again.  We turned the watch dogs back on for ETMY.

 

 

  2487   Fri Jan 8 11:43:22 2010   josephb, alex   Update   Computers   RFM and Megatron

Alex came over with a short RFM cable this morning.  We used it to connect the rfm card in c1iscey to the rfm card in megatron.

Alex renamed startup.cmd in /cvs/cds/caltech/target/c1iscey/ to startup.cmd.sav, so it doesn't come up automatically.  At the end we moved it back.

Alex used the vxworks command d to look at memory locations on c1iscey.  Such as d 0xf0000000, which is the start of the rfm code location.  So to look at 0x11a1c8 (lscPos) in the rfm memory, he typed "d 0xf011a1c8".  After doing some poking around, we look at the raw tst front end code (in /home/controls/cds/advLigo/src/fe/tst), and realized it was trying to read doubles.  The old rts code uses floats, so the code was reading incorrectly.

As a quick fix, we changed the code to floats for that part.  They looked like:

etmy_lsc = filterModuleD(dsp_ptr,dspCoeff,ETMY_LSC,cdsPciModules.pci_rfm[0]? *(\
(double *)(((void *)cdsPciModules.pci_rfm[0]) + 0x11a1c8)) : 0.0,0);

And we simply changed the double to float in each case.  In addition we changed the RCG scripts locally as well (if we do a update at some point, it'll get overwritten).  The file we updated was /home/controls/cds/advLigo/src/epics/util/lib/RfmIO.pm

Line 57 and Line 84 were changed, with double replaced with float.

return "cdsPciModules.pci_rfm[0]? *((float *)(((void *)cdsPciModules.pci
_rfm[$card_num]) + $rfmAddressString)) : 0.0";

. "  *((float *)(((char *)cdsPciModules.pci_rfm[$card_nu
m]) + $rfmAddressString)) = $::fromExp[0];\n"

This fixed our ability to read the RFM card, which now can read the LSC POS channel, for example.
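
As a standalone illustration of the mismatch Alex found (this is not the front-end code itself): if a sender writes a 4-byte float and the reader dereferences the same address as an 8-byte double, the value comes back as garbage.

import struct

buf = bytearray(16)                    # stand-in for a chunk of RFM memory
struct.pack_into("<f", buf, 0, 3.14)   # old RTS sender: writes a 4-byte float

as_float = struct.unpack_from("<f", buf, 0)[0]    # reading it back as a float works
as_double = struct.unpack_from("<d", buf, 0)[0]   # reading the same bytes as a double does not

print(as_float)    # ~3.14
print(as_double)   # nonsense (a tiny denormal here): the upper 4 bytes were never written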

Unfortunately, when we were putting everything back the way it was with the RFM fibers and so forth, c1iscey started to get garbage (all the RFM memory locations were reading ffff).  We eventually removed the VME board, removed the RFM card, looked at it, put the RFM card back in a different slot on the board, and returned c1iscey to the rack.  After this it started working properly.  It's possible that in all the plugging and unplugging the card had somehow become loose.

The next step is to add all the channels that need to be read into the .mdl file, as well as testing and adding the channel which need to be written.

 

  9494   Thu Dec 19 14:40:42 2013   Koji   Update   CDS   RFM Time over mitigation for c1mcs

I worked on the mitigation of c1mcs time-over issue this afternoon.

The timing for the c1mcs is successfully reduced from >60us to 45us.


The previous models are svned in redoubt as follows:

MCS rev. 6696
RFM rev. 6697
IOO rev. 6698

What I changed was:

- Remove connection from ALS (on c1ioo) to MCS (on c1sus). This should be all done in LSC. (# of RFM IPC in MCS -1)

- MC2 trans QPD filters are moved from IOO to MCS to reduce the RFM channels in MCS.
  Previously the signals for the 4 segments were sent; now the processed signals (pit/yaw/sum) are sent. (# of RFM IPC in IOO -1, MCS -1)

- WFS MC3 feedback channels are moved from MCS to RFM to distribute the RFM channels (# of RFM IPC in MCS -2, in RFM +2)

model    prev. timing[us] current timing[us]  diff in time[us]  diff in ch#
c1mcs         >60                45                -15              -4
c1rfm         47                 53                + 6              +2       
c1ioo         47                 36                -11              -1

Revisions of the new models:
MCS rev. 6702
RFM rev. 6701
IOO rev. 6700

  1936   Mon Aug 24 10:43:27 2009   Alberto   Omnistructure   Computers   RFM Network Failure

This morning I found all the front end computers down. A failure of the RFM network had driven all the computers down.

I was about to restart them all, but it wasn't necessary. After I power cycled and restarted C1SOSVME, all the other computers and the RFM network came back to their green status on the MEDM screen. After that I just had to reset and then restart C1SUSVME1/2.

  7193   Wed Aug 15 13:24:12 2012   Den   Update   CDS   RFM -> OAF

Transmission of signals between RFM and OAF is bad again. Now we do not see any errors in the IPC_ERR monitors, so the models think they are getting all the data, but the data is wrong.

oaf.png

  16097   Thu Apr 29 15:11:33 2021   gautam   Update   CDS   RFM

The problem here was that the RFM errors cropped up again - it seems like they started ~4 am this morning, judging by TRX trends. Of course, without the triggering signal the arm cavity couldn't lock. I rebooted everything (since just restarting the rfm senders/receivers did not do the trick), and now arm locking works fine again. It's a bit disappointing that the Rogue Master setting did not eliminate this problem completely, but oh well...

It's kind of cool that in this trend view of the TRX signal, you can see the drift of the ETMX suspension. The days are getting hot again and the temp at EX can fluctuate by >12C between day and night (so the "air-conditioning" doesn't condition that much I guess 😂 ), and I think that's what drives the drift (I don't know what the transfer function to the inside of the vacuum chamber is, but such a large swing isn't great in any case). Not plotted here, but I hypothesize TRY levels will be more constant over the day (modulo TT drift, which affects both arms).

The IMC suspension team should double check their filters are on again. I am not familiar with the settings and I don't think they've been added to the SDF.

Attachment 1: RFM_errs.png
Attachment 2: Screenshot_2021-04-29_15-12-56.png
  16099   Thu Apr 29 17:43:16 2021   Koji   Update   CDS   RFM

The other day I felt hot at the X end. I wondered if the Xend A/C was off, but the switch right next to the SP table was ON (green light).
I could not confirm if the A/C was actually blowing or not.

  6633   Wed May 9 11:31:50 2012   Den   Update   CDS   RFM

I added PCIE memory cache flushing to c1rfm model by changing 0 to 1 in /opt/rtcds/rtscore/release/src/fe/commData2.c on line 159, recompiled and restarted c1rfm.

Jamie, do not be mad at me, Alex told me do that!

However, this did not help, C1RFM did not start. I decided to restart all models on C1SUS machine in hope that C1RFM uses some other models and can't connect to them but this suspended C1SUS machine. After the reboot I encountered the same C1SUS -> FB communication error and fixed it in the same way as in the previous case of a C1SUS reboot. This happens already the second time (out of 2) after a C1SUS machine reboot.

I changed /opt/rtcds/rtscore/release/src/fe/commData2.c back, recompiled and restarted c1rfm. Now everything is back. C1RFM -> C1OAF is still bad.

  6635   Wed May 9 15:02:50 2012   Den   Update   CDS   RFM

Quote:

However, this did not help, C1RFM did not start. I decided to restart all models on C1SUS machine in hope that C1RFM uses some other models and can't connect to them but this suspended C1SUS machine.

 This happened because of the code bug -

// If PCIE comms show errors, may want to add this cache flushing
#if 1
if(ipcInfo[ii].netType == IPCIE)
          clflush_cache_range (&(ipcInfo[ii].pIpcData->dBlock[sendBlock][ipcIndex].data), 16); // & was missing - Alex fixed this
#endif
 

After this bug was fixed and the code was recompiled, C1:OAF_MCL_IN is OK; no errors occur during the transmission (C1:OAF-MCL_ERR = 0).

So the problem was in the PCIE card, which could not send that amount of data, so the last channel (MCL is the last) was corrupted. Now that Alex has added cache flushing, the problem is fixed.

We should pay more attention to such problems. This time 2046 out of 2048 points per second were lost. But if only 10-20 points were lost, we would not notice it in dataviewer, and it would still cause problems.
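
A simple way to catch this kind of silent loss would be to fetch the channel and count suspicious samples. A rough sketch, assuming the nds2 python client is available and using a made-up channel name, GPS times and server port:

import nds2

conn = nds2.connection("fb", 8088)   # 40m framebuilder; host/port assumed
bufs = conn.fetch(1020000000, 1020000010, ["C1:OAF-MCL_IN1_DQ"])   # hypothetical channel/times
data = bufs[0].data

zeros = int((data == 0).sum())
print("%d of %d samples are exactly zero" % (zeros, len(data)))
# A handful of bad samples per second would be invisible in dataviewer,
# which is exactly the worry raised above.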

  5991   Wed Nov 23 18:28:09 2011   Koji   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

The following channels have been registered in c1iool0 database, and are now recorded by FB

C1:IOO-RFAMPD_11MHZ
C1:IOO-RFAMPD_29_5MHZ
C1:IOO-RFAMPD_55MHZ
C1:IOO-RFAMPD_DCMON
C1:IOO-EOM_TEMPMON
C1:IOO-EOM_HEATER_DRIVEMON


PROCEDURE

1) The EPICS database file has been edited to rename/add some channels

/cvs/cds/caltech/target/c1iool0/ioo.db

REMOVED
#grecord(ao,"C1:IOO-RFAMPD_VC")
#grecord(ai,"C1:IOO-RFAMPD_TEMP")
#grecord(ai,"C1:IOO-RFAMPD_DCMON")
#grecord(bo,"C1:IOO-RFAMPD_BIAS_ENABLE")
#grecord(bi,"C1:IOO-RFAMPD_BIAS_STATUS")
#grecord(calc, "C1:IOO-RFAMPD_33MHZ_CAL")
#grecord(calc, "C1:IOO-RFAMPD_133MHZ_CAL")
#grecord(calc, "C1:IOO-RFAMPD_166MHZ_CAL")
#grecord(calc, "C1:IOO-RFAMPD_199MHZ_CAL")

ADDED/EDITED
grecord(ai,"C1:IOO-RFAMPD_11MHZ")
        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S25 @")
...

grecord(ai,"C1:IOO-RFAMPD_29_5MHZ")
        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S26 @")

...
grecord(ai,"C1:IOO-RFAMPD_55MHZ")
        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S27 @")

...
grecord(ai,"C1:IOO-RFAMPD_DCMON")
        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S28 @")

...
grecord(ai,"C1:IOO-EOM_TEMPMON")
                                                
        field(DTYP,"VMIVME-3113")                                               
        field(INP,"#C1 S29 @")

...
grecord(ai,"C1:IOO-EOM_HEATER_DRIVEMON")

        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S30 @")

2) The channels have been added to the frame builder database

/cvs/cds/rtcds/caltech/c1/chans/daq/C0EDCU.ini

[C1:IOO-RFAMPD_11MHZ]
[C1:IOO-RFAMPD_29_5MHZ]
[C1:IOO-RFAMPD_55MHZ]
[C1:IOO-RFAMPD_DCMON]
[C1:IOO-EOM_TEMPMON]
[C1:IOO-EOM_HEATER_DRIVEMON]

Note that this C0EDCU.ini is the file that has been registered in

/cvs/cds/rtcds/caltech/c1/target/fb/master

3) burt restore request files were updated

RFAM related settings were removed as they don't exist anymore.

/cvs/cds/caltech/target/c1iool0/autoBurt.req
/cvs/cds/caltech/target/c1iool0/saverestore.req

4) c1iool0 were rebooted. Framebuilder restarted. c1iool0 were burtrestored.
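
A quick way to confirm the new records came up after the reboot is to poll them from a workstation; a sketch assuming the pyepics package (not part of the procedure above):

from epics import caget   # pyepics, assumed to be installed

channels = [
    "C1:IOO-RFAMPD_11MHZ",
    "C1:IOO-RFAMPD_29_5MHZ",
    "C1:IOO-RFAMPD_55MHZ",
    "C1:IOO-RFAMPD_DCMON",
    "C1:IOO-EOM_TEMPMON",
    "C1:IOO-EOM_HEATER_DRIVEMON",
]

for ch in channels:
    val = caget(ch, timeout=2.0)
    print("%-30s %s" % (ch, val if val is not None else "NO RESPONSE"))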

  5995   Thu Nov 24 05:10:00 2011   Koji   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

EOM TEMPMON and HEATER DRIVEMON have been hooked up to the following channels.

C1:IOO-EOM_TEMPMON
C1:IOO-EOM_HEATER_DRIVEMON 

What a fragile circuit...

I found some of the resistors popped up from the board because of the tension by the Pomona grabbers.
I tried to fix it based on the schematic (photo) and the board photo.

  5997   Thu Nov 24 10:27:07 2011   Jenne   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

Here is a drawing of where the monitors are coming from:

EOM_temp_sense_heater_drive_schematic_withMONs.png

 Since we can't put current into the ADC, the heater drivemon is measuring the input of the OP27, which is related to the amount of current sent to the heater.

Quote:

EOM TEMPMON and HEATER DRIVEMON have been hooked up to the following channels.

C1:IOO-EOM_TEMPMON
C1:IOO-EOM_HEATER_DRIVEMON

 

  5998   Thu Nov 24 12:45:12 2011   Zach   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

Jenne: The point you indicate for the heater monitor is a virtual ground--it will be driven to zero by the circuit if it's functioning properly; the readout should be done at the input pin (2, I think) to the BUF634.

Koji: This is odd, as I made a point of not attaching any clips directly to resistors for exactly this reason. I was also careful to trim resistor/capacitor leads so that they were not towering over the breadboard and prone to bending (with the exception of the gain-setting resistor of the AD620, which was changed at the last minute). At the end of the day, it is a breadboard circuit with Pomona "readout", so it's not going to be truly resilient until I put it on a protoboard. Another thing: I think the small Pomona clips are absolutely terrible, since they slip off with piconewtons of tension; I could not find any more regular clips, so I used them against my better judgment.

  5999   Thu Nov 24 13:54:31 2011   Koji   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

Those clips for the readouts were the ones that popped out.
When I restored the connections, I checked the schematic, and the heater drive mon is clipped on the output side of the OP27.

Quote:

Jenne: The point you indicate for the heater monitor is a virtual ground--it will be driven to zero by the circuit if it's functioning properly; the readout should be done at the input pin (2, I think) to the BUF634.

Koji: This is odd, as I made a point of not attaching any clips directly to resistors for exactly this reason. I was also careful to trim resistor/capacitor leads so that they were not towering over the breadboard and prone to bending (with the exception of the gain-setting resistor of the AD620, which was changed at the last minute). At the end of the day, it is a breadboard circuit with Pomona "readout", so it's not going to be truly resilient until I put it on a protoboard. Another thing: I think the small Pomona clips are absolutely terrible, since they slip off with piconewtons of tension; I could not find any more regular clips, so I used them against my better judgment. 

 

  5686   Tue Oct 18 15:20:03 2011   kiwamu   Summary   IOO   RFAM plan

[Suresh / Koji / Rana / Kiwamu]

Last night we had a discussion about what we do for the RFAM issue. Here is the plan.

 

(PLAN)

  1. Build and install an RFAM monitor (a.k.a StochMon ) with a combination of a power splitter, band-pass-filters and Wenzel RMS detectors.

       => Some ordering has started (#5682). The Wenzel RMS detectors are already in hand.

  2. Install a temperature sensor on the EOM. And if possible install it with a new EOM resonant box.

      => make a wheatstone bridge circuit, whose voltage is modulated with a local oscillator at 100 Hz or so.

  3. Install a broadband RFPD to monitor the RFAMs and connect it to the StochMon network.

      => Koji's broadband PD or a commercial RFPD (e.g. Newfocus 1811 or similar)

  4. Measure the response of the amount of the RFAM versus the temperature of the EO crystal.

      => to see whether stabilizing the temperature stabilizes the RFAM or not.

  5.  Measure the long-term behavior of the RFAM.

      => to estimate the worst amount of the RFAM and the time scale of its variation

  6. Decide which physical quantity we will stabilize, the temperature or the amount of the RFAM.

  7. Implement a digital servo to stabilize the RFAMs by feeding signals back to a heater

     => we need to install a heater on the EOM.

  8. In parallel to those actions, figure out how much offsets each LSC error signal will have due to the current amount of the RFAMs.

    => Optickle simulations.

  9. Set some criteria on the allowed amount of the RFAMs

    => With some given offsets in the LSC error signal, we investigate what kind of (bad) effects we will have.

  397   Sun Mar 23 10:42:54 2008   Valera   Summary   Electronics   RFAM of the RF stabilization box is measured
I reconstructed Tobin's setup to measure the RFAM after the RF stabilization box in the 166 MHz modulation path.
The setup consisted of the splitter and the mixer followed by the RF low pass filter and the SR560 (gain x100).
The RF level into the splitter was 20 dBm. The Mini-Circuits ZLW-3H (17 dBm LO) mixer was used. The LO was taken
straight out of the splitter and the RF path was attenuated by 11 dB. The DC out of the mixer was 700 mV.
The noise floor was measured with the RF input of the mixer terminated in 50 Ohm. The 45 MHz measurement
in the broadband setting looks better than the noise floor at high frequencies. I am not sure what was wrong with
one or both of those measurements. The 9 MHz measurements are above the noise floor.

The RFAM meets the AdvLIGO requirements in the detection band (f > 10 Hz).
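
For reference, the quoted levels in volts (plain dBm arithmetic into 50 Ohm, nothing beyond what is stated above):

import math

def dbm_to_vrms(p_dbm, r_ohm=50.0):
    # convert a power in dBm to RMS voltage across r_ohm
    p_watt = 1e-3 * 10 ** (p_dbm / 10.0)
    return math.sqrt(p_watt * r_ohm)

print("+20 dBm -> %.2f Vrms" % dbm_to_vrms(20))                              # level into the splitter, ~2.2 Vrms
print("11 dB of attenuation -> factor %.1f in voltage" % 10 ** (11 / 20.0))  # ~3.5x in the RF path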

The attached zipped files are:
SRS003 9 MHz DC-200 Hz
SRS004 9 MHz DC-26 kHz
SRS006 45 MHz DC-200 Hz
SRS005 45 MHz DC-26 kHz
SRS007 Noise floor DC-200 Hz
SRS008 Noise floor DC-26 kHz
Attachment 1: RFAM.zip
Attachment 2: amplitudenoise.pdf
  5964   Sun Nov 20 15:11:09 2011   kiwamu   Update   IOO   RFAM monitoring test

DO NOT CHANGE THE IFO ALIGNMENT UNTIL TOMORROW MORNING OR FURTHER NOTICE.

Plus, MC has to be kept locked with the WFS.

 

An RFAM measurement is ongoing

 

 Since the Stochmon outputs turned out to be tricky to calibrate, Koji and I decided to monitor the RFAMs using the REFL11 and REFL55 RFPDs while the beam is single-bounced from PRM.
This is, of course, not a permanent RFAM monitor, but at least it gives us long-term continuous RFAM information for the first time.
Before the measurement I ran the offset zeroing scripts, so any offsets from the electronics should be tiny in the acquired REFL signals.
The measurement has begun from approximately 3:00 pm.
 
 Also, I found that the C1LSC.ini file had again reverted to its default (no channels were being acquired).
So I replaced it with an archived ini file and then restarted fb.
  5966   Mon Nov 21 12:48:00 2011   Jenne   Update   IOO   RFAM monitoring test

I don't think I touched/adjusted/whatever anything, but I did open the PSL table ~5-10min ago to measure the size of the Kiwamu-Box, so if the RFAM stuff looks funny for a few minutes, it was probably me.  Just FYI.

  5967   Mon Nov 21 14:15:25 2011   Jenne   Update   IOO   RFAM monitoring test

Quote:

DO NOT CHANGE THE IFO ALIGNMENT UNTIL TOMORROW MORNING OR FURTHER NOTICE.

 [Mirko,  Jenne]

We're playing with the MC OAF, so we're actuating on MC2.  Again, FYI.

  5968   Mon Nov 21 14:35:28 2011   kiwamu   Update   IOO   RFAM monitoring test

REFL_RFAM.png

 This is a day-long trend showing the REFL11/55 demod signals, REFLDC (corresponding to the MC transmitted power) and the PSL booth temperature.

There are sudden jumps in the REFL55_I and REFL11_Q signals around 5:00 AM this morning, also at the same time the temperature suddenly went up.

But the quality of the signal turned out to be not so good because the fluctuation is still within 1 bit of the ADCs,

so we have to try it again with a bigger gain in the analog whitening circuit.
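
For scale, "within 1 bit of the ADCs" corresponds to a few hundred microvolts at the input, assuming the usual 16-bit, +/-10 V ADCs (an assumption about the hardware, not stated here):

full_scale_v = 20.0          # +/-10 V span, assumed
counts = 2 ** 16             # 16-bit ADC, assumed
lsb_uV = full_scale_v / counts * 1e6
print("1 ADC count ~ %.0f uV" % lsb_uV)   # ~305 uV
# Fluctuations below this are invisible without more analog whitening gain.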

Quote:

An RFAM measurement is ongoing

  5972   Mon Nov 21 17:48:36 2011   Koji   Update   IOO   RFAM monitoring test

Do we care about the AC? I thought what we care about is the DC.

  5745   Thu Oct 27 03:32:45 2011   Koji   Summary   IOO   RFAM monitor progress

[Suresh, Mirko, Koji]

A cable from the stochmon box to the cross connect for the EPICS ADCs is installed.

The power supply and the signal outputs are brought together on a single DSub 9-pin connector
that is newly attached to the box.

The connection from the stochmon side of the cable to the EPICS values was confirmed.
Their calibration looks fine.

To do:
- Once the stochmon box is completed we can immediately test it.
- The EPICS channel names are still as they were. We need to update the database file of c1iool0 and the chans file for the slow channels.


The pinout is as following

-------------
| 1 2 3 4 5 |   Female / Inside View
\  6 7 8 9  /
 \---------/

1 - 11MHz Signal
2 - 30MHz Signal
3 - 55MHz Signal
4 - NC
5 - +5V supply
6 - 11MHz Return
7 - 30MHz Return
8 - 55MHz Return
9 - Supply ground

  5760   Fri Oct 28 20:39:19 2011   Mirko   Update   LSC   RFAM monitor in place (uncalibrated), EPICS troubles

[Suresh, Jamie, Mirko]

We adapted the Stochmon box to include LP filters at 1.8Hz behind the RMS parts.
Then we measured the RMS signals for different RF signal levels at 11.065, 29.5, and 55.325 MHz provided by an RF frequency generator.
As you can see in the data below, the suppression of neighboring frequencies by the BP filters is only about 35 dB in power (see also the manufacturer specs).

We therefore want to subtract the crosstalk by calculating it out. We decided to use C code in CDS. No computer crashing this time :)

We however ran into the problem that the RMS signal channels are acquired by the slow (EPICS) machine c1iool0. The channels are (C1:IOO-RFAMPD_33MHZ, -"-133MHZ, -"-166MHZ) and we could not access those in the CDS c1ioo model. Using the EpicsIn block we got a CA.Exception stating that the variable was hosted on multiple servers. We then tried to use the EzcaRead to access the variables and got a compile error about the compiler not being able to connect all parts. It seems that the EzcaRead left behind a "ghost" part in the model (something with M1:SYS-FOO_BAR, which is the default naming of the EzcaRead block) even after we deleted that block. We toyed around with the /opt/rtcds/caltech/c1/chans/daq/C1EDU_IOO.ini  and  /cvs/cds/caltech/target/c1iool0/ioo.db  files. We tried to uncomment the "old" (33, 133, 166) channels there to get rid of the conflict, but that didn't work.

We want to write the outputs to C1:IOO-RFAMPD_11MHZ , -"-29MHZ, -"-55MHZ EPICS channels.

We had to get the model back from the svn to get it running again.
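
"Calculating it out" amounts to a small linear-algebra step: inject each frequency alone (as in the attached .m files), build the 3x3 response matrix, and invert it to un-mix the three RMS readings. A sketch with placeholder matrix entries (the real ones would come from the calibration data below):

import numpy as np

# M[i][j] = response of readback channel i to an injection at frequency j.
# The numbers below are placeholders; the real ones come from the calibrations.
M = np.array([
    [1.00, 0.02, 0.02],   # 11 MHz channel
    [0.02, 1.00, 0.02],   # 29.5 MHz channel
    [0.02, 0.02, 1.00],   # 55 MHz channel
])

measured = np.array([0.85, 0.10, 0.05])      # example RMS readings (arbitrary units)
true_levels = np.linalg.solve(M, measured)   # crosstalk-corrected levels
print(true_levels)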

Attachment 1: MC_DC11MHz.m
Pwr=[-60,-55,-50,-45,-40,-35,-30,-25,-20,-10,-5,0,5,10]';
Voltage11=[2.12,2.10,2.03,1.93,1.83,1.71,1.59,1.47,1.35,1.10,0.97,0.85,0.71,0.61]';

Voltage11=spline(Pwr,Voltage11,linspace(-60,10,15));
PwrSmooth=linspace(-60,10,15);

Voltage29=[2.14,2.14,2.14,2.14,2.14,2.14,2.13,2.12,2.09,1.94,1.84,1.73,1.61,1.49];
Voltage29=spline(Pwr,Voltage29,linspace(-60,10,15));

Voltage55=[2.16,2.16,2.16,2.16,2.16,2.16,2.16,2.15,2.14,2.13,2.10,2.04,1.94,1.83];
... 5 more lines ...
Attachment 2: MC_DC29MHz.m
Pwr=[-60,-55,-50,-45,-40,-35,-30,-25,-20,-15,-10,-5,0,5,10]';
Voltage11=[2.16,2.16,2.16,2.16,2.15,2.16,2.15,2.13,2.10,2.03,1.93,1.81,1.70,1.58,1.46]';

Voltage29=[2.12,2.10,2.03,1.93,1.83,1.70,1.59,1.47,1.34,1.22,1.09,0.97,0.84,0.71,0.61]';

Voltage55=[2.16,2.15,2.16,2.16,2.15,2.15,2.14,2.10,2.0,1.97,1.97,1.77,1.65,1.50,1.37]';

%%  Example 55MHz inj.

Voltage11=[2.00];
... 21 more lines ...
Attachment 3: MC_DC55MHz.m
Pwr=[-60,-55,-50,-45,-40,-35,-30,-25,-20,-15,-10,-5,0,5,10]';
Voltage11=[2.16,2.16,2.16,2.16,2.16,2.15,2.15,2.15,2.15,2.15,2.12,2.09,2.00,1.89,1.78]';

Voltage29=[2.14,2.14,2.14,2.13,2.14,2.13,2.11,2.06,1.98,1.88,1.76,1.64,1.52,1.40,1.27]';

Voltage55=[2.14,2.11,2.05,1.96,1.85,1.73,1.61,1.48,1.36,1.23,1.10,0.98,0.84,0.71,0.61]';

plot(Pwr,Voltage55)

%%
... 17 more lines ...
Attachment 4: RFAMPD.c
double x;
double y;
double z;

double temp1;
double temp2;

double Corrx;
double Corry;
double Corrz;
... 49 more lines ...
  5764   Sun Oct 30 14:08:35 2011   rana   Update   Electronics   RFAM monitor in place (uncalibrated), EPICS troubles

Quote:

{Suresh, Jamie, Mirko]

We adapted the Stochmon box to include LP filters at 1.8Hz behind the RMS parts.
Then measured the RMS signals for different RF signal levels at 11.0.65, 29.5, 55.325MHz provided by a RF freq. generator.
As you can see in the data below the suppression of the BP filters of neighboring frequencies is only 35-35dB in power (see also manufacturer specs).

We therefor want to substract crosstalk, by calculating it out. We decided to use C-code in CDS. No computer crashing this time :)

 This is a neat idea, but it seems like it would be easier to just add another set of rf BP filters inside the StochMon box. Luckily, Steve was thinking ahead and ordered extra filters.

  6044   Tue Nov 29 22:10:18 2011   kiwamu   Update   RF System   RFAM fluctuation reduced

Quote from #6035

I left the EOM stabilization running overnight, so we can finally see how the EOM temperature stabilization does over long periods of time.

The controller was turned on at ~8:40 UTC, and you can see that the Stochmon signals quiet down a lot right at that time. 

Indeed the fluctuation of the RFAM became quieter with the temperature control ON.

However, the absolute value of the RFAMs stayed at a relatively high value.

I guess we should be able to set the right temperature setpoint such that the absolute value of the RFAM is smaller.

Here is the calibrated RFAM data (for 5 hours around the time when Zach activated the temperature control last night):

RFAM_withEOMheater_edit.png

  6047   Tue Nov 29 23:03:34 2011   Zach   Update   RF System   RFAM fluctuation reduced

I was hesitant to claim that this is definitely true without the control data we were taking after the heater was turned off today. This is because before I replaced the malfunctioning op amp last night, the heater was actually ON and injecting temperature noise into the system that would not be there with it off. I think the best idea is to compare the data from today (heater on vs. heater off, but with functioning circuit).

Quote:

 

Indeed the fluctuation of the RFAM became quieter with the temperature control ON.

  6050   Wed Nov 30 03:01:55 2011   kiwamu   Update   RF System   RFAM fluctuation reduced

Okay I have turned ON the temperature control at 2:40 AM and will leave it ON for a while.

Quote from #6047

I was hesitant to claim that this is definitely true without the control data we were taking after the heater was turned off today. This is because before I replaced the malfunctioning op amp last night, the heater was actually ON and injecting temperature noise into the system that would not be there with it off. I think the best idea is to compare the data from today (heater on vs. heater off, but with functioning circuit).

 

  361   Wed Mar 5 17:35:24 2008   rana   Update   IOO   RFAM during MC lock
I used an ezcaservo command to adjust the offsets for Alberto's StochMon channels. They are all at +2 V with no light on the RFAM PD (MC unlocked).

Then I looked at 5 minutes of second trend around when the MC locks. Since Alberto has chosen to use +2V to indicate zero RF and a negative gain, there is a large RF signal when the StochMon channels approach zero.

From the plot one can see that the RFAM for the 133 & 199 MHz channels is much worse than for the 33 and 166. It's also clear that the turn-on of the WFS (when the RFAMPD's DC light level goes up) makes the single demod signals get better but the double demod get worse.
Attachment 1: rfam.pdf
  4330   Sat Feb 19 05:25:20 2011   Suresh   Update   Electronics   RF: Distribution box

Most of the RF cables required for the box are done.   There are two remaining and we will attend to these tonight. 

We expect to have finished the mechanical assembly by Sunday and start a quality test on Monday.

 

 

  4547   Wed Apr 20 21:53:01 2011   Suresh   Configuration   RF System   RF system: Stray heliax cable

We found a stray unused heliax cable running from the LSC rack 1Y2 to a point between the cabinets 1X3 and 1X4. This cable will need to be redirected to the AS table in the new scheme. It is labeled C1LSC-PD5. The current situation has been updated as seen in the layout below

Attachment 1: rogue_cable_1.png
  4591   Fri Apr 29 18:24:05 2011   Suresh   Update   RF System   RF system: 1X2 Rack cabling

[Joe, Jamie, Suresh]

We have installed the IDE to SCSI adaptor module into the 1X2 rack and have connected the AA filter outputs to it.

P4290070.JPG

 

We have removed the following cables running between the 1X2 and 1X3 racks.

The long twisted pair ribbon cable which previously carried the ADC signals.

1X2-ASC 6, 1X2-ASC 47, 1X2-ASC 9, 1X2-ASC 8, 1X2-ASC 10, 1X2-ASC 7,

CAB-1X2-LSC 42, CAB 1X2-LSC 56,  CAB 1X2-LSC 41, CAB 1X2-LSC 43

1X3-2 ASC 47

We have also removed the following by mistake.  We will put them back on Monday

1X2-LSC 21, 1X2-LSC-20.

We have also removed the ASC QPD cables and moved the QPD cards which were present in the middle Eurocrate (#2) to the unused Eurocrate at the bottom position (#3).

The binary input cables at the back of the cards need to be supported so that their weight does not pull them out of the sockets at the back of the crates.

Some of the slots where we plan to plug in Demod boards (the 165 MHz boards) are not currently connected to any binary output on the C1:LSC computer.  We need these binary controls for the filter modules on the cards.

When we eventually begin to use the 15PDs as planned, then we will occupy 30 ADC channels (I & Q outputs).  Currently we have just one ADC card installed on the C1:LSC providing 32 ADC channels.  Joe found another 16bit 32 channel ADC card in his stash but we need to get a timing+adaptor board for it. In general we are going to need the third Eurocrate.

A platform for the power supply of the RF Distribution box needs to be built and the power supply needs to be moved into the 1X2 rack rather than sit on top of 1X2 rack.

 

 

 

  4539   Mon Apr 18 14:11:44 2011   kiwamu   Update   LSC   RF status

 We will make them all green !!

 RF_Work_Status.png

Again, all the files are available in the svn.

https://nodus.ligo.caltech.edu:30889/svn/trunk/suresh/40m_RF_upgrade/
