ID   Date   Author   Type   Category   Subject
  5972   Mon Nov 21 17:48:36 2011   Koji   Update   IOO   RFAM monitoring test

Do we care about the AC? I thought what we care about is the DC.

  397   Sun Mar 23 10:42:54 2008   Valera   Summary   Electronics   RFAM of the RF stabilization box is measured
I reconstructed Tobin's setup to measure the RFAM after the RF stabilization box in the 166 MHz modulation path.
The setup consisted of a splitter and a mixer followed by an RF low-pass filter and an SR560 (gain x100).
The RF level into the splitter was 20 dBm. A Mini-Circuits ZLW-3H (17 dBm LO) mixer was used. The LO was taken
straight out of the splitter and the RF path was attenuated by 11 dB. The DC out of the mixer was 700 mV.
The noise floor was measured with the RF input of the mixer terminated with 50 Ohm. The 45 MHz measurement
in the broadband setting looks better than the noise floor at high frequencies; I am not sure what was wrong with
one or both of those measurements. The 9 MHz measurements are above the noise floor.
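A rough calibration sketch (my addition, not from the original measurement; the exact convention and any factors of order unity should be checked): with the LO phase set so the mixer output sits at its DC maximum, the relative RF amplitude noise is approximately the measured voltage spectrum divided by the SR560 gain and the mixer DC level.

# Hedged sketch: convert a voltage spectrum measured after the SR560 into an
# approximate relative RF amplitude noise (RFAM) spectrum.
# The file name and the two-column ASCII format are assumptions, not from this entry.
import numpy as np

sr560_gain = 100.0   # SR560 gain used in the setup
v_dc_mixer = 0.7     # DC out of the mixer [V], measured before the SR560

freq, v_asd = np.loadtxt("SRS003.txt", unpack=True)   # hypothetical export of SRS003

# dA/A ~ V_meas / (gain * V_DC)
rfam_asd = v_asd / (sr560_gain * v_dc_mixer)
for f, m in zip(freq[:5], rfam_asd[:5]):
    print(f"{f:10.2f} Hz   {m:.3e} /rtHz")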

The RFAM meets the AdvLIGO requirements in the detection band (f > 10 Hz).

The attached zipped files are:
SRS003 9 MHz DC-200 Hz
SRS004 9 MHz DC-26 kHz
SRS006 45 MHz DC-200 Hz
SRS005 45 MHz DC-26 kHz
SRS007 Noise floor DC-200 Hz
SRS008 Noise floor DC-26 kHz
Attachment 1: RFAM.zip
Attachment 2: amplitudenoise.pdf
  5686   Tue Oct 18 15:20:03 2011   kiwamu   Summary   IOO   RFAM plan

[Suresh / Koji / Rana / Kiwamu]

Last night we had a discussion about what we do for the RFAM issue. Here is the plan.

 

(PLAN)

  1. Build and install an RFAM monitor (a.k.a. StochMon) with a combination of a power splitter, band-pass filters and Wenzel RMS detectors.

       => Some ordering has started (#5682). The Wenzel RMS detectors are already in hand.

  2. Install a temperature sensor on the EOM, and if possible install it together with a new EOM resonant box.

      => make a Wheatstone bridge circuit whose excitation voltage is modulated with a local oscillator at 100 Hz or so.

  3. Install a broadband RFPD to monitor the RFAMs and connect it to the StochMon network.

      => Koji's broadband PD or a commercial RFPD (e.g. Newfocus 1811 or similar)

  4. Measure the response of the RFAM to the temperature of the EO crystal.

      => to see whether stabilizing the temperature also stabilizes the RFAM.

  5.  Measure the long-term behavior of the RFAM.

      => to estimate the worst amount of the RFAM and the time scale of its variation

  6. Decide which physical quantity we will stabilize, the temperature or the amount of the RFAM.

  7. Implement a digital servo to stabilize the RFAMs by feeding signals back to a heater

     => we need to install a heater on the EOM.

  8. In parallel to those actions, figure out how large an offset each LSC error signal will have due to the current amount of RFAM.

    => Optickle simulations.

  9. Set some criteria on the allowed amount of the RFAMs

    => With some given offsets in the LSC error signal, we investigate what kind of (bad) effects we will have.

  5991   Wed Nov 23 18:28:09 2011   Koji   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

The following channels have been registered in the c1iool0 database, and are now recorded by the FB

C1:IOO-RFAMPD_11MHZ
C1:IOO-RFAMPD_29_5MHZ
C1:IOO-RFAMPD_55MHZ
C1:IOO-RFAMPD_DCMON
C1:IOO-EOM_TEMPMON
C1:IOO-EOM_HEATER_DRIVEMON


PROCEDURE

1) The EPICS database file has been edited to rename/add some channels

/cvs/cds/caltech/target/c1iool0/ioo.db

REMOVED
#grecord(ao,"C1:IOO-RFAMPD_VC")
#grecord(ai,"C1:IOO-RFAMPD_TEMP")
#grecord(ai,"C1:IOO-RFAMPD_DCMON")
#grecord(bo,"C1:IOO-RFAMPD_BIAS_ENABLE")
#grecord(bi,"C1:IOO-RFAMPD_BIAS_STATUS")
#grecord(calc, "C1:IOO-RFAMPD_33MHZ_CAL")
#grecord(calc, "C1:IOO-RFAMPD_133MHZ_CAL")
#grecord(calc, "C1:IOO-RFAMPD_166MHZ_CAL")
#grecord(calc, "C1:IOO-RFAMPD_199MHZ_CAL")

ADDED/EDITED
grecord(ai,"C1:IOO-RFAMPD_11MHZ")
        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S25 @")
...

grecord(ai,"C1:IOO-RFAMPD_29_5MHZ")
        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S26 @")

...
grecord(ai,"C1:IOO-RFAMPD_55MHZ")
        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S27 @")

...
grecord(ai,"C1:IOO-RFAMPD_DCMON")
        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S28 @")

...
grecord(ai,"C1:IOO-EOM_TEMPMON")
                                                
        field(DTYP,"VMIVME-3113")                                               
        field(INP,"#C1 S29 @")

...
grecord(ai,"C1:IOO-EOM_HEATER_DRIVEMON")

        field(DTYP,"VMIVME-3113")                                              
        field(INP,"#C1 S30 @")

2) The channels have been added to the frame builder database

/cvs/cds/rtcds/caltech/c1/chans/daq/C0EDCU.ini

[C1:IOO-RFAMPD_11MHZ]
[C1:IOO-RFAMPD_29_5MHZ]
[C1:IOO-RFAMPD_55MHZ]
[C1:IOO-RFAMPD_DCMON]
[C1:IOO-EOM_TEMPMON]
[C1:IOO-EOM_HEATER_DRIVEMON]

Note that this C0EDCU.ini is the file that has been registered in

/cvs/cds/rtcds/caltech/c1/target/fb/master

3) burt restore request files were updated

RFAM related settings were removed as they don't exist anymore.

/cvs/cds/caltech/target/c1iool0/autoBurt.req
/cvs/cds/caltech/target/c1iool0/
saverestore.req

4) c1iool0 was rebooted. The framebuilder was restarted. c1iool0 was burt-restored.
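
As a quick sanity check after this kind of restart (a minimal sketch, assuming pyepics is available on a martian-network workstation; this is not part of the original procedure), one can spot-check that the new records are alive:

# Hedged sketch: confirm the newly added EPICS records respond after the reboot.
from epics import caget

channels = [
    "C1:IOO-RFAMPD_11MHZ",
    "C1:IOO-RFAMPD_29_5MHZ",
    "C1:IOO-RFAMPD_55MHZ",
    "C1:IOO-RFAMPD_DCMON",
    "C1:IOO-EOM_TEMPMON",
    "C1:IOO-EOM_HEATER_DRIVEMON",
]

for ch in channels:
    value = caget(ch, timeout=2.0)
    status = "OK" if value is not None else "NO RESPONSE"
    print(f"{ch:32s} {status:12s} {value}")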

  5995   Thu Nov 24 05:10:00 2011   Koji   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

EOM TEMPMON and HEATER DRIVEMON have been hooked up to the following channels.

C1:IOO-EOM_TEMPMON
C1:IOO-EOM_HEATER_DRIVEMON 

What a fragile circuit...

I found that some of the resistors had popped off the board because of the tension from the Pomona grabbers.
I tried to fix it based on the schematic (photo) and the board photo.

  5997   Thu Nov 24 10:27:07 2011   Jenne   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

Here is a drawing of where the monitors are coming from:

EOM_temp_sense_heater_drive_schematic_withMONs.png

 Since we can't put current into the ADC, the heater drivemon is measuring the input of the OP27, which is related to the amount of current sent to the heater.

Quote:

EOM TEMPMON and HEATER DRIVEMON have been hooked up to the following channels.

C1:IOO-EOM_TEMPMON
C1:IOO-EOM_HEATER_DRIVEMON

 

  5998   Thu Nov 24 12:45:12 2011   Zach   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

Jenne: The point you indicate for the heater monitor is a virtual ground--it will be driven to zero by the circuit if it's functioning properly; the readout should be done at the input pin (2, I think) to the BUF634.

Koji: This is odd, as I made a point of not attaching any clips directly to resistors for exactly this reason. I was also careful to trim resistor/capacitor leads so that they were not towering over the breadboard and prone to bending (with the exception of the gain-setting resistor of the AD620, which was changed at the last minute). At the end of the day, it is a breadboard circuit with Pomona "readout", so it's not going to be truly resilient until I put it on a protoboard. Another thing: I think the small Pomona clips are absolutely terrible, since they slip off with piconewtons of tension; I could not find any more regular clips, so I used them against my better judgment.

  5999   Thu Nov 24 13:54:31 2011   Koji   Update   IOO   RFAMPD channels / EOM monitor channels added to DAQ

Those clips for the readouts were the ones that popped out.
When I restored the connections, I checked the schematic; the heater drive mon is now clipped on the output side of the OP27.

Quote:

Jenne: The point you indicate for the heater monitor is a virtual ground--it will be driven to zero by the circuit if it's functioning properly; the readout should be done at the input pin (2, I think) to the BUF634.

Koji: This is odd, as I made a point of not attaching any clips directly to resistors for exactly this reason. I was also careful to trim resistor/capacitor leads so that they were not towering over the breadboard and prone to bending (with the exception of the gain-setting resistor of the AD620, which was changed at the last minute). At the end of the day, it is a breadboard circuit with Pomona "readout", so it's not going to be truly resilient until I put it on a protoboard. Another thing: I think the small Pomona clips are absolutely terrible, since they slip off with piconewtons of tension; I could not find any more regular clips, so I used them against my better judgment. 

 

  6633   Wed May 9 11:31:50 2012   Den   Update   CDS   RFM

I added PCIE memory cache flushing to c1rfm model by changing 0 to 1 in /opt/rtcds/rtscore/release/src/fe/commData2.c on line 159, recompiled and restarted c1rfm.

Jamie, do not be mad at me, Alex told me to do that!

However, this did not help; C1RFM did not start. I decided to restart all models on the C1SUS machine, in the hope that C1RFM depends on some other models it could not connect to, but this suspended the C1SUS machine. After the reboot I encountered the same C1SUS -> FB communication error and fixed it in the same way as in the previous case of a C1SUS reboot. This has now happened both times (out of 2) that the C1SUS machine has been rebooted.

I changed /opt/rtcds/rtscore/release/src/fe/commData2.c back, recompiled and restarted c1rfm. Now everything is back. C1RFM -> C1OAF is still bad.

  6635   Wed May 9 15:02:50 2012   Den   Update   CDS   RFM

Quote:

However, this did not help, C1RFM did not start. I decided to restart all models on C1SUS machine in hope that C1RFM uses some other models and can't connect to them but this suspended C1SUS machine.

 This happened because of a code bug:

// If PCIE comms show errors, may want to add this cache flushing
#if 1
if(ipcInfo[ii].netType == IPCIE)
          clflush_cache_range (&(ipcInfo[ii].pIpcData->dBlock[sendBlock][ipcIndex].data), 16); // & was missing - Alex fixed this
#endif
 

After this bug was fixed and the code recompiled, C1:OAF_MCL_IN is OK; no errors occur during the transmission (C1:OAF-MCL_ERR = 0).

So the problem was that the PCIe card could not send that amount of data, and the last channel (MCL is the last) got corrupted. Now that Alex has added the cache flushing, the problem is fixed.

We should pay more attention to such problems. This time 2046 out of 2048 points per second were lost. But if only 10-20 points were lost, we would not notice it in dataviewer, yet it would still cause problems.
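
As an illustration of how such a partial loss could be caught (a sketch only; the array names, blocking, and tolerance are placeholders, and it assumes both the sender and receiver copies of a channel have been exported as equal-length numpy arrays):

# Hedged sketch: count mismatched samples per second between the sender and
# receiver copies of a channel (e.g. MCL before/after the RFM/PCIe hop).
import numpy as np

fs = 2048  # sample rate [Hz]

def bad_samples_per_second(sender, receiver, tol=1e-9):
    n_sec = len(sender) // fs
    counts = []
    for k in range(n_sec):
        s = sender[k * fs:(k + 1) * fs]
        r = receiver[k * fs:(k + 1) * fs]
        counts.append(int(np.sum(np.abs(s - r) > tol)))
    return np.array(counts)

# Synthetic example: corrupt roughly 15 samples per second at the receiver
t = np.arange(10 * fs) / fs
sender = np.sin(2 * np.pi * 3.0 * t)
receiver = sender.copy()
receiver[::fs // 15] = 0.0
print(bad_samples_per_second(sender, receiver))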

  16097   Thu Apr 29 15:11:33 2021   gautam   Update   CDS   RFM

The problem here was that the RFM errors cropped up again - it seems like they started at ~4 am this morning, judging by TRX trends. Of course, without the triggering signal the arm cavity couldn't lock. I rebooted everything (since just restarting the rfm senders/receivers did not do the trick), and now arm locking works fine again. It's a bit disappointing that the Rogue Master setting did not eliminate this problem completely, but oh well...

It's kind of cool that in this trend view of the TRX signal, you can see the drift of the ETMX suspension. The days are getting hot again and the temp at EX can fluctuate by >12C between day and night (so the "air-conditioning" doesn't condition that much I guess 😂 ), and I think that's what drives the drift (I don't know what the transfer function to the inside of the vacuum chamber is, but such a large swing isn't great in any case). Not plotted here, but I hypothesize that TRY levels will be more constant over the day (modulo TT drift, which affects both arms).

The IMC suspension team should double check their filters are on again. I am not familiar with the settings and I don't think they've been added to the SDF.

Attachment 1: RFM_errs.png
Attachment 2: Screenshot_2021-04-29_15-12-56.png
  16099   Thu Apr 29 17:43:16 2021   Koji   Update   CDS   RFM

The other day I felt hot at the X end. I wondered if the Xend A/C was off, but the switch right next to the SP table was ON (green light).
I could not confirm if the A/C was actually blowing or not.

  7193   Wed Aug 15 13:24:12 2012   Den   Update   CDS   RFM -> OAF

Transmission of signals between RFM and OAF is bad again. This time we do not see any errors in the IPC_ERR monitors, so the models think they are getting all the data, but the data is wrong.

oaf.png

  1936   Mon Aug 24 10:43:27 2009   Alberto   Omnistructure   Computers   RFM Network Failure

This morning I found all the front end computers down. A failure of the RFM network had brought them all down.

I was about to restart them all, but it wasn't necessary. After I power cycled and restarted C1SOSVME, all the other computers and the RFM network came back to their green status on the MEDM screen. After that I just had to reset and restart C1SUSVME1/2.

  9494   Thu Dec 19 14:40:42 2013   Koji   Update   CDS   RFM Time over mitigation for c1mcs

I worked on the mitigation of c1mcs time-over issue this afternoon.

The cycle time for c1mcs has been successfully reduced from >60us to 45us.


The previous models are svned in redoubt as follows:

MCS rev. 6696
RFM rev. 6697
IOO rev. 6698

What I changed was:

- Remove connection from ALS (on c1ioo) to MCS (on c1sus). This should be all done in LSC. (# of RFM IPC in MCS -1)

- MC2 trans QPD filters are moved from IOO to MCS to reduce the RFM channels in MCS.
  Previously the signals for the 4 segments were sent; now the processed signals (pit/yaw/sum) are sent. (# of RFM IPC in IOO -1, MCS -1)

- WFS MC3 feedback channels are moved from MCS to RFM to distribute the RFM channels (# of RFM IPC in MCS -2, in RFM +2)

model    prev. timing [us]   current timing [us]   diff in time [us]   diff in ch#
c1mcs          >60                   45                  -15               -4
c1rfm           47                   53                  +6                +2
c1ioo           47                   36                  -11               -1

Revisions of the new models:
MCS rev. 6702
RFM rev. 6701
IOO rev. 6700

  2195   Fri Nov 6 17:04:01 2009   josephb   Configuration   Computers   RFM and Megatron

I took the RFM 5565 card dropped off by Jay and installed it into megatron.  It is not very secure, as it was too tall for the slot and could not be locked down.  I did not connect the RFM fibers at this point, so just the card is plugged in.

Unfortunately, on power up, immediately after the splash screen, I get "NMI EVENT!" and "System halted due to fatal NMI".

The status light on the RFM card remains a steady red as well.  There is a distinct possibility the card is broken in some way.

The card is a VMIPMC-5565 (which is the same as the card used by the ETMY front end machine).  We should get Alex to come in and look at it on Monday, but we may need to get a replacement.

  2486   Fri Jan 8 10:38:35 2010   josephb, koji   Update   Computers   RFM and Megatron

Last night, we installed the VMI 5565 RFM card into Megatron.  After turning off the watchdogs for the ETMY optic, we disconnected the RFM fiber, and connected it to megatron, then powered it up. 

We modified the RCG code to have 3 rfmio blocks, which were reading 0x11a1c0 (ascPit), 0x11a1c4 (ascYaw), and 0x11a1c8 (lscPos).  These were connected to the appropriate filter module inputs, and we also added grounds to the front of the rfmio blocks (we looked at the ass code, which was set up that way, so we just did the same thing).  When we started it, however, it didn't read properly.  If we turned off the input and set an offset, it calculated the output of the filter module correctly (i.e. just the offset value), but as soon as we turned on the input, the output went to 0 no matter the offset value, which indicated it was reading something, just not the correct value.

After this test, the RFM fibers were reconnected to c1iscey, we rebooted c1iscey, and we confirmed that the system was working properly again.  We turned the watch dogs back on for ETMY.

 

 

  2487   Fri Jan 8 11:43:22 2010   josephb, alex   Update   Computers   RFM and Megatron

Alex came over with a short RFM cable this morning.  We used it to connect the RFM card in c1iscey to the RFM card in megatron.

Alex renamed startup.cmd in /cvs/cds/caltech/target/c1iscey/ to startup.cmd.sav, so it doesn't come up automatically.  At the end we moved it back.

Alex used the vxworks command d to look at memory locations on c1iscey, such as d 0xf0000000, which is the start of the RFM memory mapping.  So to look at 0x11a1c8 (lscPos) in the RFM memory, he typed "d 0xf011a1c8".  After doing some poking around, we looked at the raw tst front end code (in /home/controls/cds/advLigo/src/fe/tst) and realized it was trying to read doubles.  The old rts code uses floats, so the code was reading incorrectly.

As a quick fix, we changed the code to floats for that part.  They looked like:

etmy_lsc = filterModuleD(dsp_ptr, dspCoeff, ETMY_LSC,
    cdsPciModules.pci_rfm[0] ? *((double *)(((void *)cdsPciModules.pci_rfm[0]) + 0x11a1c8)) : 0.0, 0);

We simply changed the double to float in each case.  In addition we changed the RCG scripts locally as well (if we do an update at some point, it'll get overwritten).  The file we updated was /home/controls/cds/advLigo/src/epics/util/lib/RfmIO.pm

Lines 57 and 84 were changed, with double replaced by float.

return "cdsPciModules.pci_rfm[0]? *((float *)(((void *)cdsPciModules.pci_rfm[$card_num]) + $rfmAddressString)) : 0.0";

. "  *((float *)(((char *)cdsPciModules.pci_rfm[$card_num]) + $rfmAddressString)) = $::fromExp[0];\n"

This fixed our ability to read the RFM card, which now can read the LSC POS channel, for example.

Unfortunately, when we were putting everything back the way it was with the RFM fibers and so forth, c1iscey started to get garbage (all the RFM memory locations were reading ffff).  We eventually removed the VME board, removed the RFM card, looked at it, put the RFM card back in a different slot on the board, and returned c1iscey to the rack.  After this it started working properly.  It's possible that in all the plugging and unplugging the card had somehow become loose.

The next step is to add all the channels that need to be read into the .mdl file, as well as testing and adding the channels which need to be written.

 

  2488   Fri Jan 8 15:40:14 2010   josephb, alex   Update   Computers   RFM and RCG

Alex added a new module to the RCG, for generating RFMIO using floats.  This has been commited to CVS.

  4516   Tue Apr 12 16:01:33 2011   josephb   Update   General   RFM errors

Problem:

Currently the c1scy, c1mcs, and c1rfm models are reporting an error with receiving some data sent over the GE Fanuc Reflected memory cards.

To be more exact, the C1:SUS-ETMY_ALS signal from the c1gcv FE code on the c1ioo computer going to the Y end is not being received.  However, the C1:SUS-ETMY_LSC signal is, so the physical RFM card seems to be working.

Similarly, the TRY signal is being sent correctly from the Y end computer.  The X end is working fine and receiving both LSC and ALS signals.

The c1mcs and c1rfm models also receive data from the c1ioo computer and are reporting receive errors.

Theory:

Because the RFM cards are transmitting and receiving at least some channels, I'm guessing that changes were made to the C1.ipc file, which defines the memory locations of these various channels on the RFM network, and that when one model was rebuilt, another model still using the previous IPC file was not, so one of the computers is going to the wrong place to either read or write data.

Tomorrow, I'm planning on the  following:

1) Clean out the C1.ipc file (/opt/rtcds/caltech/c1/chans/ipc/)

2) Rebuild all models

3) Run activate_daq.py script

4) Restart models via script

If this doesn't clear up the problem, I'll continue  to bug hunt.

  14293   Tue Nov 13 21:53:19 2018   gautam   Update   CDS   RFM errors

This problem resurfaced, which I noticed when I couldn't get the single arm locks going.

The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back.

Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though.

Attachment 1: RFMerrors.png
  15609   Sat Oct 3 16:51:27 2020   gautam   Update   CDS   RFM errors

Attachment #1 shows that the c1rfm model isn't able to receive any signals from the front end machines at EX and EY. Attachment #2 shows that the problem appears to have started at ~4:30 am this morning - I certainly wasn't doing anything with the IFO at that time.

I don't know what kind of error this is - what does it mean that the receiving model shows errors but the sender shows no errors? It is not a new kind of error, and the solution in the past has been a series of model reboots, but it'd be nice if we could fix such issues because it eats up a lot of time to reboot all the vertex machines. There is no diagnostic information available in all the places I looked. I'll ask the CDS group for help, but I'm not sure if they'll have anything useful since this RFM technology has been retired at the sites (?).

In the meantime, arm cavity locking in the usual way isn't possible since we don't have the trigger signals from the arm cavity transmission. 


Update 1500 4 Oct: soft reboots of models didn't do the trick so I had to resort to hard reboots of all FEs/expansion chassis. Now the signals seem to be okay.

Attachment 1: RFMstat.png
Attachment 2: RFMerrs.png
  15646   Wed Oct 28 09:35:00 2020   Koji   Update   CDS   RFM errors

I'm starting the model restarts from remote. Then later I'll show up in the lab to do more hard resets.
==> It seems that the RFM errors are gone. Here are the steps.

  1. Shutdown all the watchdogs
  2. login to c1iscex. Shutdown all the realtime models: rtcds kill --all
  3. login to c1iscey. Shutdown all the realtime models: rtcds kill --all
  4. run scripts/cds/rebootC1LSC.sh on pianosa
  5. reboot c1iscex
  6. reboot c1iscey
  7. Wait until all the machines/models are brought up by the script
  8. restart c1iscex models
  9. restart c1iscey models
  10. some IPC errors are still visible on the CDS status screen. Launch c1daf and c1oaf

 

Attachment 1: Screen_Shot_2020-10-28_at_10.06.00.png
  15737   Fri Dec 18 10:52:17 2020   gautam   Update   CDS   RFM errors

As I was working on the IFO re-alignment just now, the rfm errors popped up again. I don't see any useful diagnostics on the web interface.

Do we want to take this opportunity to configure the jumpers and set up the rogue master as Rolf suggested? Of course there's no guarantee that will fix anything, and it may even make it impossible to recover the current state...

Attachment 1: RFMdiag.png
  2637   Wed Feb 24 12:08:31 2010   Koji   Update   Computers   RFM goes red -> recovered by the nuclear option

Most of the RFM went red this morning. I took the nuclear option and it seemed to be recovered.

  6760   Wed Jun 6 00:32:22 2012   Jenne   Update   CDS   RFM model is way overloading the cpu

We have too much crap in the rfm model.  CPU time for the rfm model is regularly above 60us, and sometimes in the mid-70s (but it sometimes jumps down briefly to ~47us, which is where I think it "used" to sit, though I don't remember when I last thought about that number).

This is potentially causing lots of asynchronous grief.

  610   Tue Jul 1 11:53:38 2008   Yoichi   Update   Computers   RFM network back
I took a tour of the FE machines and power cycled all of them.
After executing the software restart procedures of those computers, the RFM network got back to the normal state.
For some reason, the computers requiring startup.cmd (like c1lsc) halt after running this command. Actually the computer is running ok, but the command freezes. Basically, what it does is simply to load a kernel module. I don't know what is wrong.
Anyway, I just closed the terminal after running startup.cmd and it seems fine for now.
  614   Tue Jul 1 13:34:29 2008   rob   Update   Computers   RFM network back

Quote:

For some reason, the computers requiring startup.cmd (like c1lsc) halt after running this command. Actually the computer is running ok, but the command freezes. Basically, what it does is simply to load a kernel module. I don't know what is wrong.
Anyway, I just closed the terminal after running startup.cmd and it seems fine for now.


This is normal. On the linux RTFEs (Real-Time Front Ends), the real-time code totally hijacks the kernel, disallowing any interrupts. The system thus becomes totally unresponsive while the code is running, and communicates only through the RFM and the VME backplane.
  1202   Tue Dec 23 10:35:40 2008   Yoichi   Update   Computers   RFM network breakdown mostly fixed
Rana, Rolf, Alberto, Yoichi

The source of the problem was the RFM bypass box, as expected.
Rana pointed out that the long cable I used to bring the 5V from the Sorensen to the box
may cause a large voltage drop considering that the box is sucking ~3A.
So we connected the cable to another power supply (5V/5A linear power supply).
Then the LEDs on the bypass box turned green from red, and everything started to work.

A weird thing is that when I connected the cable to the wrong terminals of the power supply, which
have lower current supply capabilities, the supply voltage dropped to 3V, but the LEDs on the bypass box
still turned green. This means the bypass box can live with 3V.
I noticed that there is a long cable from the Sorensen to the cross connect on the side of the rack, where I
connected my cable to the bypass box. Perhaps this long cable, which had a somewhat large resistance (1 or 2 Ohms),
dropped the supply voltage to less than 3V?
Anyway, the bypass box is now on a temporary power supply. Alberto was assigned the task of finding a replacement
power supply.

There are two remaining problems.
c1susvme1 often fails to start, claiming a DMA error on a Pentek. After several attempts, you can start the machine,
but after a while (1 hour?) it fails again.
op340m is not responding to ssh login. It responds to ping.
We hooked up a monitor and keyboard (USB, because the machine does not have a PS/2 port) to it and rebooted.
At boot, it briefly displays the message "No keyboard, try TTYa", but after that there is no display signal.
Steve found me a serial cable. I will try to log in to the machine using the serial port.

  1200   Sun Dec 21 14:18:04 2008   Yoichi   Update   Computers   RFM network bypass box's power supply is dead
I restarted the front-end computers by power cycling them one-by-one.
After issuing startup commands, most of them started normally at least by looking
at the output from telnet/ssh.
However, the status monitors of the FE computers on the EPICS screen are still red.
I noticed that all the LEDs on the VMIC 5594 RFM network bypass box are off.
According to the labels, fb40m, c0daqctrl, c0dcu are connected to the box.
This means (I believe) c1dcuepics cannot access the RFM network. So we have no control over
the FE computers through EPICS.

I pushed the reset button on the box, power cycled it, but nothing changed.
I checked the fuse and it was OK. Then I found that the power supply was dead.
It is a small AC adapter supplying +5VDC through a 5-pin DIN-like connector.
We have to find a replacement.
  1201   Mon Dec 22 13:48:22 2008   Yoichi   Update   Computers   RFM network bypass box's power supply is dead
As a temporary fix, I cut the cable of the power supply and connected it to the Sorensen power supply +5V on the rack.
Now, the RFM bypass box is powered up, but some LEDs are red, which looks like a bad sign.
I restarted all the FE computers, but this time I got errors during the execution of the startup commands in the VxWorks machines.
The errors are "General Protection Fault" or "Invalid Opcode".
The linux machines do not show errors but still the status lights in EPICS are red.
We need Alex's help. He did not answer the phone, so Alberto left a voice mail.
  538   Wed Jun 18 16:07:57 2008   rob   Summary   Computers   RFM network down

The RFM network tripped off around noon today. It's still down. The problem appears to be with the EPICS interface (c1dcuepics). Trying to restart one of the end stations yields the error: No response from EPICS.

Possible causes include (but not limited to): busted RFM card on c1dcuepics, busted PMC bus on c1dcuepics, busted fiber from c1dcuepics to the RFM switch. We need Alex.
  13436   Tue Nov 21 11:21:26 2017   gautam   Update   CDS   RFM network down

I noticed yesterday evening that I wasn't able to engage the single arm locking servos - it turned out that they weren't getting triggered, which in turn pointed me to the fact that the arm transmission channels seemed dead. Poking around a little, I found that there was a red light on the CDS overview screen for c1rfm.

  • The error seems to be in the receiving model only, i.e. c1rfm, all the sending models (e.g. c1scx) don't report any errors, at least on the CDS overview screen.
  • Judging by dataviewer trending of the c1rfm status word, seems like this happened on Sunday morning, around 11am.
  • I tried restarting both sender and receiver models, but error persists.
  • I got no useful information from the dmesg logs of either c1sus (which runs c1rfm), or c1iscex (which runs c1scx).
  • There are no physical red lights in the expansion chassis that I could see - in the past, when we have had some timing errors, this would be a signature.

Not sure how to debug further...

* Fix seems to be to restart the sender RFM models (c1scx, c1scy, c1asx, c1asy).

Attachment 1: RFMerrors.png
  13643   Tue Feb 20 21:14:59 2018   gautam   Update   CDS   RFM network errors

I wanted to lock the single arm POX/POY config to do some tests on the BeatMouth. But I was unable to.

  • I tracked the problem down to the fact that the TRX and TRY triggers weren't getting piped correctly to the LSC model
  • In fact, all RFM channels from the end machines were showing error rates of 16384/sec (i.e. every sample).
  • After watchdogging ETMX, I tried restarting just the c1scx model - this promptly took down the whole c1iscex machine.
  • Then I tried the same with c1iscey - this time the models restarted successfully without the c1iscey machine crashing, but the RFM errors persisted for the c1scy channels.
  • I walked down to EX and hard rebooted c1iscex.
  • c1iscex came back online, and I ssh-ed in and did rtcds start --all.
  • This brought all the models back online, and the RFM errors on both c1iscex and c1iscey channels vanished.

Not sure what to make of all this, but I can lock the arms now.

  4524   Thu Apr 14 12:57:15 2011   josephb   Update   CDS   RFM network happy again

[Joe, Alex]

Problem Symptoms:

There were red lights on the status screen indicating RFM errors for the c1scy, c1mcs and c1rfm processes.

The c1iscey and c1sus machines were receiving data sent from the c1ioo computer over the RFM network with a bad time stamp, a few cycles too late.  The c1iscex computer was receiving data from c1ioo fine.

Problem:

The c1iscex RFM card had gotten into a bad state and was somehow slowing things down/corrupting data.  It didn't affect itself, but due to the loop topology it was messing everyone else up.  Basically, the only one that wasn't throwing an error was the culprit.

Solution:

Hard power cycling the c1iscex computer reset the RFM card and fixed the problem.

  608   Tue Jul 1 09:26:33 2008   steve   Update   Computers   RFM network is down
  2192   Fri Nov 6 10:35:56 2009   josephb   Update   Computers   RFM reboot fest and re-enabled ITMY coil drivers

As noted by Steve, the RFM network was down this morning.  I noticed that the c1susvme1 sync counter was pegged at 16384, so I decided to start with reboots in that vicinity.

After power cycling the crates containing c1sosvme, c1susvme1, and c1susvme2 (since the reset buttons didn't work), only c1sosvme and c1susvme2 came back normally.  I hooked up a monitor and keyboard to c1susvme1, but saw nothing.  I power cycled the c1susvme crate again, and this time I watched it boot properly.  I'm not sure why it failed the first time.

The RFM network is now operating normally.  I have re-enabled the watchdogs again after having turned them off for the reboots.  Steve and I also re-enabled the ITMY coil drivers when I noticed them not damping once the watch dogs were re-enabled.  The manual switches had been set to disabled, so we re-enabled them.

  5861   Thu Nov 10 11:52:00 2011   Jenne   Update   CDS   RFM signal transferring

I am not so happy with the control signals that are coming into the OAF via the RFM/Dolphin/shmem. 

The MCL/MCF signal travels via RFM from the IOO computer to the RFM model on the SUS computer, and then via dolphin to the OAF model on the LSC computer.

The MICH and PRCL signals travel via shmem from the LSC model to the OAF model, all on the LSC computer.  They don't go through the RFM model.

The seismometer channels travel via shmem between the PEM model on the SUS computer and the RFM model on the SUS computer, and then via dolphin between the SUS computer and the OAF model on the LSC computer.

Each pdf shows the power spectrum and a time series of the signals in their "original" model and in the OAF model.  The seismometer is the only one that seems fine.  Its time series match, except for a delay, which is not surprising since the signals have to travel.  The other signals seem pretty distorted.  What is going on??? Why can we trust some, but not all, of the signals that move between models and between computers???

 (This data was all taken while the MC was locked, but MICH and PRCL were not.  I don't think this should have any effect on the signal transfer though).

The MCL isn't soooo bad, so maybe we can keep moving forward with it, but I'm concerned that we're not really going to be successful OAF-ing the other degrees of freedom if the signals are so distorted.
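
To make the comparison more quantitative (a sketch only, not the analysis used for the attached PDFs; it assumes the two copies of a signal have been exported as numpy arrays at the same rate, and the names and rate are placeholders):

# Hedged sketch: compare a signal in its "original" model with the copy that
# arrives in the OAF model, allowing for an integer-sample transport delay.
import numpy as np
from scipy import signal

fs = 2048.0  # assumed test-point rate [Hz]

def compare(original, received, fs):
    # Transport delay from the cross-correlation peak
    xcorr = signal.correlate(received, original, mode="full")
    lag = np.argmax(xcorr) - (len(original) - 1)
    # Coherence near 1 means a clean (if delayed) copy; distortion pulls it down
    f, coh = signal.coherence(original, received, fs=fs, nperseg=4096)
    return lag, f, coh

# Usage with hypothetical arrays mcl_ioo and mcl_oaf:
# lag, f, coh = compare(mcl_ioo, mcl_oaf, fs)
# print(lag, np.median(coh[f < 64]))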

Attachment 1: OAF_rfm_signals_MCL.pdf
Attachment 2: OAF_rfm_signals_MICH.pdf
Attachment 3: OAF_rfm_signals_PRCL.pdf
Attachment 4: OAF_rfm_signals_GUR1X.pdf
  3845   Tue Nov 2 13:51:40 2010   josephb, yuta   Update   CDS   RFM slowdown problem

Problem:

Each RFM memory location which needs to be read by a front end model slows the model significantly.

With no RFM memory locations to be read (replaced with grounds), the c1mcs model runs around 25 microseconds per cycle.

With 1 RFM memory location (MC_L), it runs around 29-33 microseconds.

With 3 RFM memory locations (MC_L, MC1_PIT, MC1_YAW), it runs around 45 microseconds.

With 7 RFM memory locations, the code generally doesn't run at all, going past the 62 microsecond maximum required to be able to keep up with the 16 kHz sample rate.

Last night Yuta somehow got it running with 7 RFM memory locations, but all the odd-numbered RFM channels (1, 3, 5 as counted by the ipc file) did not work.  It was running at around 55 microseconds in that configuration.

The c1ioo code which is writing the data to the RFM card is experiencing no such slow down.
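
A rough way to put a number on the per-read cost (a sketch based only on the figures quoted above, taking midpoints of the quoted ranges; not a measurement):

# Hedged sketch: fit cycle_time ~ t0 + n_reads * t_per_read to the numbers above.
import numpy as np

n_reads = np.array([0, 1, 3])             # RFM locations read by c1mcs
cycle_us = np.array([25.0, 31.0, 45.0])   # ~25 us, ~29-33 us, ~45 us

t_per_read, t0 = np.polyfit(n_reads, cycle_us, 1)
print(f"base cycle time   ~ {t0:.1f} us")
print(f"cost per RFM read ~ {t_per_read:.1f} us")
print(f"predicted with 7 reads ~ {t0 + 7 * t_per_read:.1f} us (budget is 62 us)")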

Current CDS status: (the status table for MC damp, dataviewer, diaggui, AWG, c1ioo, c1sus, c1iscex, RFM, Sim. Plant, and Frame builder is not reproduced here)
  15719   Wed Dec 9 15:37:48 2020   gautam   Update   CDS   RFM switch IP addr reset

I suspect what happened here is that the IP didn't get updated when we went from the 131.215.113.xxx system to 192.168.113.xxx system. I fixed it now and can access the web interface. This system is now ready for remote debugging (from inside the martian network obviously). The IP is 192.168.113.90.

Managed to pull this operation off without crashing the RFM network, phew.

BTW, a Windows laptop that used to be in the VEA (I last remember it being on the table near MC2, which was cleared at some point to hold the spare suspensions) is missing. Does anyone know where it is?

Attachment 1: Screenshot_2020-12-09_15-39-20.png
Attachment 2: Screenshot_2020-12-09_15-46-46.png
  3293   Mon Jul 26 14:24:46 2010   josephb, kiwamu   Update   CDS   RFM test take 1

Kiwamu and I strung a temporary RFM fiber from the c1iscex machine (in the new 1X9 rack) to the c1sus machine (in the new 1X4 rack).  This was connected into the respective RFM cards.  Once we put the fiber in correctly, the status lights came on the RFM card, which is a good sign.  This did not go through the RFM bypass, and did not interfere with any other RFM connections.

We created a simple model to test the RFM card, which basically consisted of 4 RFM memory locations being passed back and forth between 2 filters on each machine.  These models were called c1rf0 (on c1sus) and c1rf1 (on c1iscex).  We added 4 entries to the /cvs/cds/caltech/chans/ipc/C1.ipc file corresponding to the 4 RFM memory locations, set their ipcType to RFM, and set the ipcRate to 65536.  The ipcNum values were set from 0 to 3.  The models ran; however, the data we were trying to pass over the RFM card did not seem to get through.  We are currently trying to contact Alex via e-mail to get debugging advice and to confirm that the ipc file is set up correctly.

  9012   Thu Aug 15 01:51:50 2013   Koji   Summary   General   RFM<->Dolphin bridge distributed to c1rfm and c1mcs

Since the RFM-Dolphin bridges for the ASX model were added to the c1rfm model, c1rfm kept timing out, exceeding the single-sample time of 60us.

The model had 19 dolphin accesses, 21 RFM accesses, and 9 shared memory (SHM) accesses.

At the beginning, 2 RFM and 2 SHM accesses were moved to c1sus (i.e. they had mistakenly been placed on c1rfm).
But this actually made the c1sus model time out, so the change was reverted.

In the current configuration the WFS-related bridges are accommodated in the c1mcs model.
This brings the c1rfm timing to ~40us, so it is safe now.
On the other hand, the c1mcs model now consumes ~59us, which is marginal.

We need to understand why any RFM access incurs such a huge delay.

  2190   Fri Nov 6 07:55:59 2009   steve   Update   Computers   RFMnetwork is down

The RFM network is down.  MC2 sus damping restored.

  9004   Tue Aug 13 11:40:19 2013   Alex Cole   Summary   Electronics   RFPD Demod Filter Frequency Response Measurement

 For the RF PD Frequency Response Measurement project, we get each PD signal from the "PD RF Mon" output of each demodulator board corresponding to our PD under test. Therefore we can't neglect the frequency response of various filters inside the demodulator board. I used our Agilent 4395 Network Analyzer to gather frequency response data for each demodulator board being considered for the RFPD frequency response project (AS55, REFL11, REFL33, REFL55, REFL165, POX11, POP22, POP110).

The NA swept over a frequency range of 1-500 MHz. Data was collected using NWAG4395A (from the netgpibdata directory). It should be noted that the command line options -a 16 -x 15 (averaging=16 and excitation amplitude=15 dBm[the max]), in addition to the usual command line options described in the help file, were used to minimize noise. 

The data is located in /users/alex.cole. The file names are in the format [PDNAME]DemodFilt_1000000.dat (e.g. REFL11DemodFilt_1000000.dat). Results for POP110 are shown below.
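
As a quick-look sketch for these files (my addition; the exact column layout of the NWAG4395A .dat output and the example file name are assumptions):

# Hedged sketch: load and plot one demod-board response measurement.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical file following the stated naming pattern
data = np.loadtxt("/users/alex.cole/POP110DemodFilt_1000000.dat")
freq, mag_db, phase_deg = data[:, 0], data[:, 1], data[:, 2]   # assumed columns

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.semilogx(freq, mag_db)
ax1.set_ylabel("Magnitude [dB]")
ax2.semilogx(freq, phase_deg)
ax2.set_ylabel("Phase [deg]")
ax2.set_xlabel("Frequency [Hz]")
fig.suptitle("POP110 demod board PD RF Mon response")
plt.show()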

Attachment 1: photo_(3).JPG
Attachment 2: test.jpg
  15433   Fri Jun 26 16:53:38 2020   gautam   Update   Electronics   RFPD characterization

Summary:

While the vacuum system was knocked out, I measured the RF transimpedance (using the AM laser setup, didn't do the shot noise intercept current measurement for now) of all the RFPDs (except PMC REFL). At the very least, the following photodiodes are suspect:

  1. WFS heads - expected transimpedance is 50 kohm unattenuated, and 5 kohm attenuated. I measure values that are x10 lower than this, and the segments are significantly imbalanced. Moreover, the attenuators for some quadrants appear to do nothing. This could be a problem with the Acromag system I guess, but the measured transimpedance is nowhere close to the "expected" value. See Attachments #1 and #2. You can also see that the response at 55 MHz is significantly attenuated, so I'm guessing trying to measure the AS port ASC sensing response is going to be difficult.

    Note that I assumed a 1 kohm DC transimpedance, which is what I expect from the schematic and is also consistent with the DC voltage I measured, knowing the approximate optical power incident on the photodiode (a quick numerical sanity check is sketched after this list).
  2. POP 22/ POP 110 - this is a Thorlabs PDA10CF diode. It should have a flat gain profile out to ~100 MHz, but I measure some weird features. The other PDA10CF we use, at AS110, shows a more reasonable response. See Attachment #3. I don't know what kind of failure mode this is? Anyway I'll try testing another PDA10CF and if it looks more reasonable, I'll switch out this diode. FWIW, the measured AS110 gain is ~3kohms, whereas the datasheet tells us to expect 5 kohms.
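
Sanity-check sketch for the 1 kohm DC transimpedance assumption noted under item 1 (the power and responsivity numbers are placeholders, not the values measured here):

# Hedged sketch: expected DC output for an assumed DC transimpedance.
p_opt = 1e-3        # optical power on the diode [W] (placeholder)
responsivity = 0.8  # diode responsivity [A/W] (placeholder; depends on the diode)
z_dc = 1e3          # assumed DC transimpedance [ohm]

v_dc_expected = p_opt * responsivity * z_dc
print(f"expected DC output ~ {v_dc_expected:.2f} V")
# If this agrees with the measured DC voltage, the assumed Z_DC is consistent.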

For the remaining photodiodes, I measure a transimpedance that is within ~20% of what is on the wiki page. The notches may benefit from some retuning. While I have the data, I will fit this and post a more complete report on the wiki.

Update July 6 1145am: WFS response plots now have legends mapping quadrants, and I've also added the response of a spare PDA10CF (which is now the new POP22/POP110 photodiode).

Attachment 1: WFS1.pdf
Attachment 2: WFS2.pdf
Attachment 3: buildupMons.pdf
  15439   Mon Jun 29 15:56:02 2020   gautam   Update   Electronics   RFPD characterization

A more comprehensive report has been uploaded here. I'll zip the data files and add them there too. In summary:

  1. There are several problems with the WFS heads
    • Some attenuators don't seem to work. This could be a problem with the Acromag BIO, or with the relay on the head itself.
    • The measured transimpedance at 29.5 MHz is much lower than expected. We expect ~50 kohms with no attenuation, and 5 kohms with it. I measure 100 ohm - 2 kohm with the attenuation disabled, and ~200 ohms with it enabled.
    • Quadrant #3 on both WFS heads behaves differently from the others. There is also evidence of a 200 MHz oscillation for quadrant 3.
    • For some reason, there is a relative minus sign between the TFs measured for the WFS and for the RFPDs. I don't understand where this is coming from - all the OpAmps in the LSC PDs and WFS heads are configured as non-inverting, so why should there be a minus sign? Is this indicative of the polarity of the LEMO output being somehow flipped?
  2. POX 11 photodiode does not have a notch at 22 MHz.
  3. AS55 resonance appears to have shifted closer to 60 MHz, would benefit from a retuning. But the notches appear fine.
  4. PDA10CF photodiode used as the POP22/POP110 readback appears broken in some strange way. As shown in the linked document, a spare PDA10CF in the lab has a much more reasonable response, so I am going to switch out the POP22/POP110 diode with this spare.

The data, a Jupyter notebook making the plots, and the LISO fit files have been uploaded here as well.

I didn't do it this time but it'd be nice to also do the noise measurement and get an estimate for the shot-noise intercept current.

Quote:

While I have the data, I will fit this and post a more complete report on the wiki.

  11003   Wed Feb 11 17:31:11 2015   ericq   Update   LSC   RFPD spectra

For future reference, I've taken spectra of our various RFPDs while the PRMI was sideband locked on REFL33, using a 20dB RF coupler at the RF input of the demodulator boards. The 20dB coupling loss has been added back in on the plots. Data files are attached in a zip.

Exceptions: 

  • The REFL165 trace was taken at the input of the amplifier that immediately precedes the demod board.
  • The 'POPBB' trace was taken with the coupler at the input of the bias tee, which leads to an amplifier, then a splitter, then the 110 and 22 demod boards.

I also completely removed the cabling for REFLDC -> CM board, since it doesn't look like we plan on using it anytime in the immediate future. 

Attachment 1: REFL.png
Attachment 2: AS.png
Attachment 3: POP.png
Attachment 4: 2015-02-PDSpectra.zip
  11004   Wed Feb 11 18:07:42 2015   ericq   Update   LSC   RFPD spectra

After some discussion with Koji, I've asked Steve to order some SBP-30+ bandpass filters as a quick and cheap way to help out REFL33. (Also some SBP-60+ for 55MHz, since we only have 1*fmod and 2*fmod bandpasses here in the lab). 

  11008   Thu Feb 12 01:00:18 2015   rana   Update   LSC   RFPD spectra

The nonlinearity in the LSC detection chain (cf T050268) comes from the photodetector and not the demod board. The demod board has low pass or band pass filters which Suresh installed a long time ago (we should check out what's in REFL33 demod board). 

Inside the photodetector the nonlinearity comes about because of photodiode bias modulation (aka the Grote effect) and slew rate limited distortion in the MAX4107 preamp.

  16765   Thu Apr 7 20:41:15 2022   Tommy   Update   Electronics   RFSoC 2x2 Board -- Gain Plotter

In this file (under Tommy), we have a notebook which sweeps through a spectrum of frequencies and determines the gain response of the attached filter. Below we show the output for a high pass filter. We use IQ demodulation to mix the I and Q components down to DC. Then, using a Butterworth low-pass filter, we read out the DC components and determine the magnitude and phase of the gain. However, the phase seems very noisy. This is because the oscillators in the different tiles are independent, and a random phase is introduced by changing the mixer frequency in individual tiles. To resolve this we need Multi Tile Synchronization, or "MTS".

Original Pynq Support Forum Query: https://discuss.pynq.io/t/rfsoc-2x2-phase-measurement/3892

We also have code to fit a response function using IIRregular, but this is not as useful without proper phase data.
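
For reference, a minimal offline sketch of the IQ-demodulation + low-pass readout described above (this is not the notebook code; the sample rate, tone frequency, and filter corner are placeholders):

# Hedged sketch: measure magnitude and phase of a tone via IQ demodulation
# followed by a Butterworth low-pass, as in the gain-plotter approach above.
import numpy as np
from scipy import signal

fs = 1.0e6   # record sample rate [Hz] (placeholder)
f0 = 50.0e3  # probe tone frequency [Hz] (placeholder)

def iq_gain(x, fs, f0, f_lp=1.0e3):
    t = np.arange(len(x)) / fs
    i = x * np.cos(2 * np.pi * f0 * t)     # mix to DC (in-phase)
    q = -x * np.sin(2 * np.pi * f0 * t)    # mix to DC (quadrature)
    b, a = signal.butter(4, f_lp / (fs / 2))
    i_dc = np.mean(signal.filtfilt(b, a, i))
    q_dc = np.mean(signal.filtfilt(b, a, q))
    z = 2 * (i_dc + 1j * q_dc)             # factor 2 restores the tone amplitude
    return np.abs(z), np.angle(z, deg=True)

# Example: a tone with amplitude 0.5 and a 30 degree phase offset
t = np.arange(int(0.02 * fs)) / fs
x = 0.5 * np.cos(2 * np.pi * f0 * t + np.deg2rad(30))
print(iq_gain(x, fs, f0))   # ~ (0.5, 30.0)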

Attachment 1: Screen_Shot_2022-04-07_at_8.40.23_PM.png
Attachment 2: Unknown-3.png
Attachment 3: Unknown-4.png