ID   Date   Author   Type   Category   Subject
  11848   Fri Dec 4 13:12:41 2015   ericq   Update   CDS   c1ioo Timing Overruns Solved

I had noticed for a while that the c1ioo frontend model had much higher cycle-time variability than any of the other 16k models, and would run longer than 60us multiple times an hour. This struck me as odd, since all it does is control the WFS loops. (You can see this on the Nov 17 summary page, although the CDS tab seems broken since then, and I'm not sure why...)

This has happily now been solved! While poking around the model, I noticed that the MC2 transmission QPD data streams being sent over from c1mcs were using RFM blocks. This seemed weird to me, since I wasn't even aware that c1ioo had an RFM card. Since the c1sus and c1ioo frontends are both on the Dolphin network, I changed them to Dolphin blocks and voila! The cycle time now holds steady at 21 usec.


Update: I think I figured out the problem with the CDS summary pages. Looking at the .err files in /home/40m/public_html/summary/logs on the 40m LDAS account showed that C1:FEC-33_CPU_METER wasn't found in the frame files. Indeed, this channel was commented out in c1/chans/daq/C0EDCU.ini. I enabled it and restarted daqd. Hopefully the CDS tab will come back soon...

  4335   Tue Feb 22 00:18:47 2011   valera   Configuration   c1ioo and c1ass work and related fb crashes/restarts

I have been editing and reloading the c1ioo model for the last two days. I have restarted the frame builder several times. After one of the restarts on Sunday evening, the fb started having problems, which initially showed up as dtt reporting a synchronization error. This morning Kiwamu and I tried to restart the fb again and it stopped working altogether. We called Joe and he fixed the fb problem by fixing the time stamps (Joe will add details to describe the fix when he sees this elog).

The following changes were made to c1ioo model:

- The angular dither lockins were added for each optic to do the beam spot centering on the MC mirrors. The MCL signal is demodulated digitally at 3 pitch and 3 yaw frequencies (a demodulation sketch follows this list). (The MCL signal was reconnected to the first input of the ADC interface board).

- The outputs of the lockins go through the sensing matrix, DOF filters, and control matrix to the MC1,2,3 SUS-MC1(2,3)_ASCPIT(YAW) filter inputs where they sum with dither signals (CLOCK output of the oscillators).

- The MCL_TEST_FILT was removed
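
For reference, the digital lock-in demodulation described above amounts to mixing MCL with sine/cosine at each dither frequency and low-passing the products. A minimal offline sketch in Python (made-up frequency and data; the real implementation lives in the c1ioo front-end model):

import numpy as np

fs = 16384                      # Hz, assumed model rate
t = np.arange(0, 60, 1 / fs)    # 60 s of data
f_dither = 12.25                # Hz, hypothetical MC1 pitch dither frequency
mcl = np.random.randn(t.size)   # placeholder for the measured MCL signal

# Demodulate: mix with the oscillator quadratures and average (low-pass).
i_out = 2 * np.mean(mcl * np.cos(2 * np.pi * f_dither * t))
q_out = 2 * np.mean(mcl * np.sin(2 * np.pi * f_dither * t))
amplitude = np.hypot(i_out, q_out)
print(f"demodulated amplitude at {f_dither} Hz: {amplitude:.3e}")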

The arm cavity dither alignment (c1ass) status:

- The demodulated signals were minimized by moving the ETMX/ITMX optic biases and simultaneously keeping the arm buildup (TRX) high by using the BS and PZT2. The minimization of the TRX demodulated signals has not been successful for some reason.

- The next step is to close the servo loops REFL11I demodulated signals -> TMs and TRX demodulated signals -> combination of BS and PZTs.

The MC dither alignment (c1ioo) status:

- The demodulated signals were obtained and the sensing matrix (MCs -> lockin outputs) was measured for the pitch dof.

- The inversion of the matrix is in progress (a pseudo-inverse sketch follows this list).

- The additional c1ass and c1ioo medm screens and up and down scripts are being made.
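
The matrix inversion mentioned above is, in essence, a pseudo-inverse of the measured sensing matrix. A rough sketch with placeholder numbers (the actual measured matrix is not reproduced here):

import numpy as np

# Hypothetical 3x3 pitch sensing matrix: rows = lockin outputs, columns = MC1/2/3 drives.
sensing = np.array([[ 1.0,  0.2, -0.1],
                    [ 0.3,  0.9,  0.2],
                    [-0.2,  0.1,  1.1]])

# The control (output) matrix is the pseudo-inverse, so that
# drive = output_matrix @ lockin_outputs steers each spot independently.
output_matrix = np.linalg.pinv(sensing)
print(np.round(output_matrix, 3))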

  3846   Tue Nov 2 15:24:18 2010   josephb   Update   CDS   c1ioo and c1mcs only sending MC_L, MC1_PIT, MC1_YAW

In order to have the c1mcs model run, we're running with only 3 RFM channels between c1ioo and c1mcs at the moment.  This leaves the model at around 45 microseconds, and at least lets us damp.

Alex and I still need to track down why the RFM read calls are taking so much time to execute.

  6577   Fri Apr 27 08:11:19 2012   steve   Update   CDS   c1ioo condition

 

 

Attachment 1: c1ioo.png
  9915   Tue May 6 10:22:28 2014   steve   Update   CDS   c1ioo dolphin fiber

Quote:

Steve and I nicely routed the dolphin fiber from c1ioo in the 1X2 rack to the dolphin switch in the 1X4 rack.  I shut down c1ioo before removing the fiber, but all the dolphin-connected models still crashed.  After the fiber was run, I brought back c1ioo and restarted all wedged models.  Everything is green again:

green.png

 I put a label at the dolphin fiber end at 1X2 today. After this I had to reset it, but it failed.

Attachment 1: dolphin1x2.png
  9916   Tue May 6 10:31:58 2014   jamie   Update   CDS   c1ioo dolphin fiber

Quote:

I put a label at the dolphin fiber end at 1X2 today. After this I had to reset it, but it failed.

 If by "fail" you're talking about the c1oaf model being off-line, I did that yesterday (see log 9910).  That probably has nothing to do with whatever you did today, Steve.

  9890   Thu May 1 10:23:42 2014   jamie   Update   CDS   c1ioo dolphin fiber nicely routed

Steve and I nicely routed the dolphin fiber from c1ioo in the 1X2 rack to the dolphin switch in the 1X4 rack.  I shut down c1ioo before removing the fiber, but all the dolphin-connected models still crashed.  After the fiber was run, I brought back c1ioo and restarted all wedged models.  Everything is green again:

green.png

  9896   Fri May 2 01:01:28 2014   rana   Update   CDS   c1ioo dolphin fiber nicely routed

This C1IOO business seems to be wiping out the MC2_TRANS QPD servo settings each day.   What kind of BURT is being done to recover our settings after each of these activities?

(also we had to do mxstream restart on c1sus twice so far tonight -- not unusual, just keeping track)

  9903   Fri May 2 11:14:47 2014   jamie   Update   CDS   c1ioo dolphin fiber nicely routed

Quote:

This C1IOO business seems to be wiping out the MC2_TRANS QPD servo settings each day.   What kind of BURT is being done to recover our settings after each of these activities?

(also we had to do mxstream restart on c1sus twice so far tonight -- not unusual, just keeping track)

I don't see how the work I did would affect this stuff, but I'll look into it.  I didn't touch the MC2 trans QPD signals.  Also nothing I did has anything to do with BURT.  I didn't change any channels, I only swapped out the IPCs.

  6580   Fri Apr 27 12:12:14 2012   Den   Update   CDS   c1ioo is back

Rolf came to the 40m today and managed to figure out what the problem was. Reading just dmesg was not enough to solve it; the useful log was in

>> cat /opt/rtcds/caltech/c1/target/c1x03/c1x03epics/iocC1.log

Starting iocInit
The CA server's beacon address list was empty after initialization?
iocRun: All initialization complete
sh: iniChk.pl: command not found
Failed to load DAQ configuration file

iniChk.pl checks the .ini file of the model.

>> cat /opt/rtcds/rtscore/release/src/drv/param.c


int loadDaqConfigFile(DAQ_INFO_BLOCK *info, char *site, char *ifo, char *sys)
{

  strcpy(perlCommand, "iniChk.pl ");
  .........
  strcat(perlCommand, fname); // fname - name of the .ini file
  ..........
}

So the problem was not in C1X03.ini. The code could not find the perl script even though it was in the /opt/rtcds/caltech/c1/scripts directory; some environment variables were not set. Rolf added /opt/rtcds/caltech/c1/scripts/ to the $PATH variable and the c1ioo models (x03, ioo, gcv) started successfully. He is not sure whether this is the right way or not, because the other machines also do not have the "scripts" directory in their PATH variables.

>> cat /opt/rtcds/caltech/c1/target/c1x03/c1x03epics/iocC1.log

Starting iocInit
The CA server's beacon address list was empty after initialization?
iocRun: All initialization complete

Total count of 'acquire=0' is 2
Total count of 'acquire=1' is 0
Total count of 'acquire=0' and 'acquire=1' is 2

Counted 0 entries of datarate=256     for a total of 0
Counted 0 entries of datarate=512     for a total of 0
Counted 0 entries of datarate=1024     for a total of 0
Counted 0 entries of datarate=2048     for a total of 0
Counted 0 entries of datarate=4096     for a total of 0
Counted 0 entries of datarate=8192     for a total of 0
Counted 0 entries of datarate=16384     for a total of 0
Counted 0 entries of datarate=32768     for a total of 0
Counted 2 entries of datarate=65536     for a total of 131072

Total data rate is 524288 bytes - OK

Total error count is 0

Rolf mentioned an automatic setup of these environment variables - "kiis" or something like that - probably that script is not working correctly. Rolf will add this problem to his list.
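
As a side note on the missing-iniChk.pl problem above, a quick hypothetical check (not an existing 40m script) of whether a given environment can actually resolve the script would be:

import os
import shutil

script = "iniChk.pl"
found = shutil.which(script)           # None if the script is not on PATH
print(f"{script} resolves to: {found}")
print("PATH seen by this process:")
for p in os.environ.get("PATH", "").split(os.pathsep):
    print(" ", p)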

  6861   Sat Jun 23 19:57:22 2012   yuta   Summary   Computers   c1ioo is down

I tried to restart c1ioo because I can't live without him.

I couldn't ssh or ping c1ioo, so I did a hardware reboot.
c1ioo came back, but now the ADC/DAC stats are all red.

c1ioo was OK until 3am when I left the control room last night. I don't know what happened, but StripTool from zita tells me that the MC lock went off at around 4pm.

  6865   Mon Jun 25 10:35:59 2012   Jenne   Summary   Computers   c1ioo is down

Quote:

I tried to restart c1ioo because I can't live without him.

I couldn't ssh or ping c1ioo, so I did a hardware reboot.
c1ioo came back, but now the ADC/DAC stats are all red.

c1ioo was OK until 3am when I left the control room last night. I don't know what happened, but StripTool from zita tells me that the MC lock went off at around 4pm.

 c1ioo was still all red on the CDS status screen, so I tried a couple of things.

mxstreamrestart (which aliases on the front ends to sudo /etc/init.d/mx_stream restart) didn't help

sudo shutdown -r now didn't change anything either....c1ioo came back with red everywhere and 0x2bad on the IOP

Eventually, doing as Jamie did for c1sus in elog 6742 (rtcds stop all, then rtcds start all) fixed everything.  Interestingly, when I tried rtcds start iop, I got the error
"Cannot start/stop model 'iop' on host c1ioo", so I just tried rtcds start all, and that worked fine....it started with c1x03, then c1ioo, then c1gcv.

  6867   Mon Jun 25 11:27:13 2012   Jamie   Summary   Computers   c1ioo is down

Quote:

Quote:

I tried to restart c1ioo because I can't live without him.

I couldn't ssh or ping c1ioo, so I did a hardware reboot.
c1ioo came back, but now the ADC/DAC stats are all red.

c1ioo was OK until 3am when I left the control room last night. I don't know what happened, but StripTool from zita tells me that the MC lock went off at around 4pm.

 c1ioo was still all red on the CDS status screen, so I tried a couple of things.

mxstreamrestart (which aliases on the front ends to sudo /etc/init.d/mx_stream restart) didn't help

sudo shutdown -r now didn't change anything either....c1ioo came back with red everywhere and 0x2bad on the IOP

Eventually, doing as Jamie did for c1sus in elog 6742 (rtcds stop all, then rtcds start all) fixed everything.  Interestingly, when I tried rtcds start iop, I got the error
"Cannot start/stop model 'iop' on host c1ioo", so I just tried rtcds start all, and that worked fine....it started with c1x03, then c1ioo, then c1gcv.

There seems to be a problem with how models are coming up on boot since the upgrade.  I think the IOP isn't coming up correctly for some reason, which is then preventing the rest of the models from starting since they depend on the IOP.

The simple way to fix this is to run the following:

ssh c1ioo
rtcds restart all

The "restart" command does the smart thing, by stopping all the models in the correct order (IOP last) and then restarting them also in the correct order (IOP first).

  17527   Wed Mar 29 15:59:01 2023   Anchal   Update   IOO   c1ioo model updated to add sensing to optic angle matrices

I've updated the c1ioo model, adding a WFS-sensor-to-optic-angle matrix and an output filter module option. The output filter modules are named like EST_MC1_PIT to signify that these are "estimated" angles of the optic. We can change this naming convention if we don't like it. I've also started DQ-ing the outputs of these filter modules at a 512 Hz sampling rate.

No medm screens have been made for these changes yet. One can still access them through:

For SENS_TO_OPT_P Matrix

medm -x /cvs/cds/rtcds/caltech/c1/medm/c1ioo/C1IOO_SENS_TO_OPT_P.adl

For SENS_TO_OPT_Y Matrix

medm -x /cvs/cds/rtcds/caltech/c1/medm/c1ioo/C1IOO_SENS_TO_OPT_Y.adl

For filter modules:

medm -x /cvs/cds/rtcds/caltech/c1/medm/c1ioo/C1IOO_EST_MC1_PIT.adl
Attachment 1: WFSSensToOptAngMatrices.png
  3870   Fri Nov 5 16:40:22 2010   josephb, alex   Update   CDS   c1ioo now has working awgtpman

Problem:

We couldn't set testpoints on the c1ioo machine.

Cause:

Awgtpman was getting into some strange race condition.  Alex added an additional sleep statement and shifted boot order slightly in the rc.local file.  This apparently is only a problem on c1ioo, which is a Sun X4600.  It was using up 100% of a single CPU for the awgtpman process.

Status:

We now have c1ioo test points working.

Future:

Need to examine the startc1ioo script and see if it needs a similar modification, as it was tried at several points but yielded a similar state of non-functioning test points. For the moment, rebooting c1ioo is probably the best choice after making modifications to the c1ioo or c1x03 models.

Current CDS status:

MC damp | dataviewer | diaggui | AWG | c1ioo | c1sus | c1iscex | RFM | Sim.Plant | Frame builder | TDS   (status indicators in this row were not preserved in the export)
  9881   Wed Apr 30 17:07:19 2014   jamie   Update   CDS   c1ioo now on Dolphin network

The c1ioo host is now fully on the dolphin network!

After the mx stream issue from two weeks ago was resolved and determined to not be due to the introduction of dolphin on c1ioo, I went ahead and re-installed the dolphin host adapter card on c1ioo.  The Dolphin network configuration changes I made during the first attempt (see previous log in thread) were still in place.  Once I rebooted the c1ioo machine, everything came up fine:

dolphin.png

We then tested the interface by making a cdsIPCx-PCIE connection between the c1ioo/c1als model and the c1lsc/c1lsc model for the ALS-X beat note fine phase signal.  We then locked both ALS X and Y, and compared the signals against the existing ALS-Y beat note phase connection that passes through c1sus/c1rfm via an RFM IPC:

The signal is perfectly coherent and we've gained ~25 degrees of phase at 1 kHz. EricQ calculates that the delay for this signal has changed from 122 us to 61 us:

ALSXonDolphin.pdf
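
As a rough cross-check (not from the original entry): a pure transport delay tau costs 360*f*tau degrees of phase, so removing ~61 us of delay should recover about 22 degrees at 1 kHz, in the same ballpark as the ~25 degrees observed.

f = 1e3                             # Hz, frequency quoted above
tau_rfm, tau_pcie = 122e-6, 61e-6   # seconds
phase_gain = 360.0 * f * (tau_rfm - tau_pcie)
print(f"expected phase recovered at 1 kHz: {phase_gain:.1f} deg")   # ~22 deg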

I then went ahead and made the needed modifications for ALS-Y as well, and removed ALS->LSC stuff in the c1rfm model.

Next up: move the RFM card from the c1sus machine to the c1lsc machine, and eliminate c1sus/c1rfm model entirely.

  9882   Wed Apr 30 17:45:34 2014   jamie   Update   CDS   c1ioo now on Dolphin network

For reference, here are the new IPC entries that were made for the ALS X/Y phase between c1als and c1lsc:

controls@fb ~ 0$ egrep -A5 'C1:ALS-(X|Y)_PHASE' /opt/rtcds/caltech/c1/chans/ipc/C1.ipc
[C1:ALS-Y_PHASE]
ipcType=PCIE
ipcRate=16384
ipcHost=c1ioo
ipcNum=114
desc=Automatically generated by feCodeGen.pl on 2014_Apr_17_14:27:41
--
[C1:ALS-X_PHASE]
ipcType=PCIE
ipcRate=16384
ipcHost=c1ioo
ipcNum=115
desc=Automatically generated by feCodeGen.pl on 2014_Apr_17_14:28:53
controls@fb ~ 0$ 

After all this IPC cleanup is done we should go through and clean out all the defunct entries from the C1.ipc file.
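
A hypothetical helper (not an existing 40m script) for that cleanup could scan C1.ipc and flag duplicate ipcNum assignments of the same ipcType:

from collections import defaultdict

ipc_file = "/opt/rtcds/caltech/c1/chans/ipc/C1.ipc"
entries, current = {}, None
with open(ipc_file) as f:
    for line in f:
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            entries[current] = {}
        elif "=" in line and current is not None:
            key, _, val = line.partition("=")
            entries[current][key.strip()] = val.strip()

by_num = defaultdict(list)
for name, info in entries.items():
    by_num[(info.get("ipcType"), info.get("ipcNum"))].append(name)

for (ipc_type, num), names in by_num.items():
    if len(names) > 1:
        print(f"{ipc_type} ipcNum={num} used by: {', '.join(names)}")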

  3827   Fri Oct 29 16:43:25 2010   josephb   Update   CDS   c1ioo now talking to c1sus

Problem:

c1ioo was not able to talk to c1sus because of timing issues.  This prevented the mode cleaner length signal (MC_L) from getting to c1sus.

Solution:

The replacement c1ioo chassis from Downs with a more recent revision of the IO backplane works. 

The c1ioo is now talking to c1sus and transmitting a signal.

We connected the cable hanging off the DAQ interface board labeled MC OUT1 to the MC Servo board's output labeled OUT1.

During debugging I modified the c1x02, c1x03, c1mcs and c1ioo codes to print debugging messages.  This was done by modifying the /opt/rtcds/caltech/c1/advLigoRTS/src/fe/commData2.c file.  I have since reverted those changes.

Future:

We still need to check that everything is connected properly and that the correct signal is being sent to the MC2 suspension.

 

  3680   Fri Oct 8 15:57:32 2010   josephb   Update   CDS   c1ioo status

I've been trying to figure out why the c1ioo machine crashes when I try to run the c1ioo front end.

I tried removing some RFM components from the c1ioo model, and then the c1gpt model (Kiwamu's green locking model) as an easier test case.  Both cause the machine to lock up once they start running.  Lastly, I tried running the c1x02 and c1sus models on the c1ioo machine instead of the c1sus machine, after first turning off all models on c1sus.  This led to the same lockup. 

Since those models run fine on the c1sus machine, I could only conclude that a recent change in the FE code generator or the Gentoo kernel doesn't work with the Sun X4600 computer at the moment.

After talking to Alex, he got the idea to check whether monitor() and mwait() were supported on the c1ioo machine.  These function calls were added relatively recently, and are used to poll a memory location to see if something has been written there, and then do something when it has.  Apparently the Sun X4600 computers do not support these calls.  Alex has modified the code to not use these function calls, at least for now.  He'd like to add a check to the code so it does use those calls on machines that support them.

After this change, the c1ioo and c1gpt front end codes do in fact run.

  3861   Thu Nov 4 15:27:33 2010   josephb, yuta   Update   CDS   c1ioo test points not working

Problem:

We can't access any of the c1ioo computer testpoints.  Dataviewer and DTT both fail to read any data from them.

According to diag -l (when run on Rosalba in /opt/apps), the testpoints are not being set.  Also, at some point during the day while debugging, we somehow messed up all the front end connections to the framebuilder.

Errors reported by dataviewer:

Server error 13: no data found
datasrv: DataWriteRealtime failed: daq_send: Illegal seek
Server error 12: no such net-writer
Server error 13: no data found
datasrv: DataWriteRealtime failed: daq_send: Illegal seek
Server error 12: no such net-writer
read(); errno=0
Server error 6532: invalid channel name
Server error 16080: unknown error
datasrv: DataWriteRealtime failed: daq_send: Illegal seek

Error reported by daqd.log:

[Thu Nov  4 13:29:35 2010] About to request `C1:IOO-MC1_PIT_IN1' 10022
on node 34
[Thu Nov  4 13:29:35 2010] Requesting 1 testpoints; tp[0]=10022; tp[1]=32531

[Thu Nov  4 13:29:35 2010] About to request `C1:SUS-MC2_SUSPOS_IN1'
10094 on node 36
[Thu Nov  4 13:29:35 2010] Requesting 1 testpoints; tp[0]=10094; tp[1]=32531

[Thu Nov  4 13:29:38 2010] ETIMEDOUT: test point `C1:IOO-MC1_PIT_IN1'
(tp_num=10022) was not set by the test point manager; request failed
[Thu Nov  4 13:29:38 2010] About to clear `C1:IOO-MC1_PIT_IN1' 10022 on node 34

Attempted Fixes:

Remove all daq related files: /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1ioo.par and /opt/rtcds/caltech/c1/chans/daq/C1IOO.ini.  Rebuilt the front end code.

Double checked ethernet connection between c1ioo and the daq router and fb. 

Confirmed open mx was running on c1ioo. Confirmed awgtpman was running (although at one point I did find duplicate awgtpman running for c1x03, the IOP associated with c1ioo).

Rebooted the c1ioo machine. Confirmed all necessary codes came back.

Restarted the daqd process several times on the fb machine.

Current Status:

Framebuilder is running, and c1sus testpoints are available.  c1ioo test points are not.  Waiting to hear back from Alex on possible ideas.

  9910   Mon May 5 19:34:54 2014   jamie   Update   CDS   c1ioo/c1ioo control output IPCs changed to PCIE Dolphin

Now that c1ioo is on the Dolphin network, I changed the c1ioo MC{1,2,3}_{PIT,YAW} and MC{L,F} outputs to go out over the Dolphin network rather than the old RFM network.

Two models, c1mcs and c1oaf, are ultimately the consumers of these outputs.  Now they are picking up the new PCIE IPC channels directly, rather than from any sort of RFM/PCIE proxy hops.  This should improve the phase for these channels a bit, as well as reduce complexity and clutter.  More stuff was removed from c1rfm as well, moving us toward the goal of getting rid of that model entirely.

c1ioo, c1mcs, and c1rfm were all rebuilt/installed/restarted, and all came back fine.  The mode cleaner relocked once we reenabled the autolocker.

c1oaf, on the other hand, is not building.  It wasn't building even before the changes I attempted, though.  I tried reverting c1oaf back to what is in the SVN (which also corresponds to what is currently running) and it doesn't compile either:

controls@c1lsc ~ 2$ rtcds build c1oaf
buildd: /opt/rtcds/caltech/c1/rtbuild
### building c1oaf...
Cleaning c1oaf...
Done
Parsing the model c1oaf...
YARM_BLRMS_SEIS_CLASS TP
YARM_BLRMS_SEIS_CLASS_EQ TP
YARM_BLRMS_SEIS_CLASS_QUIET TP
YARM_BLRMS_SEIS_CLASS_TRUCK TP
YARM_BLRMS_S_CLASS EpicsOut
YARM_BLRMS_S_CLASS_EQ EpicsOut
YARM_BLRMS_S_CLASS_QUIET EpicsOut
YARM_BLRMS_S_CLASS_TRUCK EpicsOut
YARM_BLRMS_classify_seismic FunctionCall
Please check the model for missing links around these parts.
make[1]: *** [c1oaf] Error 1
make: *** [c1oaf] Error 1
controls@c1lsc ~ 2$ 

I've been trying to debug it but have had no success.  For the time being I'm shutting off the c1oaf model, since it's now looking for bogus signals on RFM, until we can figure out what's wrong with it. 

Attachment 1: ioo-ipc.png
  1066   Wed Oct 22 09:42:41 2008   Alberto   DAQ   Computers   c1iool0 rebooted and MC autolocker restarted
This morning I found the MC unlocked. The MC-Down script didn't work because of network problems in communicating with scipe7, a.k.a. c1iool0. Telnetting to the computer was also impossible, so I power cycled it from its key switch. The first time it failed, so I repeated it a second time and then it worked.
Yoichi then restarted c1iovme. It was also necessary to restart the MC autolocker script according to the following procedure:
- ssh into op440m
- from op440m, ssh into op340m
- restart /cvs/cds/caltech/scripts/scripts/MC/autolockMCmain40
  2227   Tue Nov 10 17:01:33 2009   Alberto   Configuration   IOO   c1ioovme and c1iool0 rebooted

This afternoon, while I was trying to add the StochMon channels to the frames, I rebooted the c1ioovme and c1iool0.

I had to do it twice because of a misspelling in the C1IOO.INI file that prevented the computer from restarting properly the first time.

Eventually I restored the old .ini file, as it was before the changes.

After rebooting I also burtrestored.

During the process the mode cleaner got unlocked. Later on the autolocker couldn't engage. I had to run the MC_down and MC_up scripts.

  586   Fri Jun 27 19:59:44 2008   John   Update   Computers   c1iovme
C1susvme2 and C1iovme crashed which sent the optics swinging and tripped the watchdogs.

Koji and I were able to restore c1susvme2 without any trouble.

We have been unable to revive c1iovme. We have tried telnetting in and running startup.cmd;
the process runs for a while, then hangs with "DAQ init failed -- exiting".

Resetting the board doesn't help. I didn't try keying the whole crate.

All optics are back to normal with damping restored.
  587   Sat Jun 28 03:10:25 2008   rob   Update   Computers   c1iovme

Quote:
C1susvme2 and C1iovme crashed which sent the optics swinging and tripped the watchdogs.

Koji and I were able to restore c1susvme2 without any trouble.

We have been unable to revive c1iovme. We have tried telnetting in and running startup.cmd;
the process runs for a while, then hangs with "DAQ init failed -- exiting".

Resetting the board doesn't help. I didn't try keying the whole crate.

All optics are back to normal with damping restored.


I tried keying the crate, then keying the DAQ controller & AWG, then powering down & restarting the framebuilder.
On coming up, the framebuilder doesn't start a daqd process, and I can't get one to start by hand (it just prints "652", and then stops).
No error messages and daqd doesn't appear in the prstat.

I then tried keying the DAQ controller again (after the fb0 reboot), which blew the watchdogs on all the suspensions. So then I went around and keyed all the crates.

Now, the suspension controllers are back online. Still no c1iovme, and now the framebuilder/DAQ/AWG are also hosed. We can try keying all the crates again, in the order that Yoichi did last week.

After some more poking around, I found the daqd log file. It's now complaining about

Jun 28 03:00:39 fb daqd[546]: [ID 355684 user.info] Fatal error: channel `C1: PSL-FSS_MIXERM_F' is duplicated 126

This is the second error message like this. It first complained about C1: PSL-FSS_FAST_F, so I commented that out of C1IOOF.ini and rebooted the framebuilder (note this is an actual reboot of the full solaris machine). Eventually I discovered that C1IOOF.ini and C1IOO.ini are essentially identical. They presumably will keep getting these duplicate channel errors until one of them is completely removed.

C1IOO.ini has a modification time of seven PM on Friday night. Who did this and didn't elog it? I've now modified C1IOOF.ini, and I don't remember when it was last modified.
  917   Wed Sep 3 19:09:56 2008   Yoichi   DAQ   Computers   c1iovme power cycled
When I tried to measure the sideband power of the FSS using the scan of the reference cavity, I noticed that the RC trans. PD signal was not
properly recorded by the frame builder.
Joe restarted c1iovme software-wise. The medm screen said c1iovme was running fine, and indeed some values were recorded by the FB.
Nonetheless, I couldn't see flashes of the RC when I scanned the laser frequency.
I ended up power cycling c1iovme and running the restart script again. Now the signals recorded by c1iovme look fine.
Probably, the DAQ boards were not properly initialized only by the software reset.
I will re-try the sideband measurement tomorrow morning.
  918   Thu Sep 4 00:38:14 2008   rana   Update   PSL   c1iovme power cycled
Entry 663 has a plot of this using the PSL/FSS/SLOWscan script. It shows that the SB's were ~8x smaller than the carrier.
P_carrier   J_0(Gamma)^2 
--------- = ------------
P_SB        J_1(Gamma)^2

Which I guess we have to solve numerically for large Gamma?
  919   Thu Sep 4 07:29:52 2008   Yoichi   Update   PSL   c1iovme power cycled

Quote:
Entry 663 has a plot of this using the PSL/FSS/SLOWscan script. It shows that the SB's were ~8x smaller than the carrier.
P_carrier   J_0(Gamma)^2 
--------- = ------------
P_SB        J_1(Gamma)^2

Which I guess we have to solve numerically for large Gamma?


P_carrier/P_SB = 8 yields gamma=0.67.
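
For completeness, that number can be reproduced numerically (a sketch assuming the quoted ratio is carrier power over single-sideband power):

import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, j1

ratio = 8.0          # measured carrier power / sideband power

def f(gamma):
    return (j0(gamma) / j1(gamma))**2 - ratio

gamma = brentq(f, 0.1, 2.0)    # bracket avoids the j1 zero at gamma = 0
print(f"gamma = {gamma:.3f}")  # ~0.67, matching the number quoted above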
  3159   Tue Jul 6 17:05:30 2010   Megan and Joe   Update   Computers   c1iovme reboot

We rebooted c1iovme because the lines stopped responding to inputs on C1:IOO-MC_DRUM1. This fixed the problem.

  3393   Tue Aug 10 16:55:38 2010   Jenna   Update   Electronics   c1iovme restarted

 Alastair and I restarted the c1iovme around the time of my last elog entry (~3:20).

  14916   Mon Sep 30 15:51:59 2019   gautam   Update   CDS   c1iscaux - some admin

I did the following:

  1. symlinked /cvs/cds/rtcds to /opt/rtcds.
  2. Added a line to /etc/systemd/system/modbusIOC.service that executes a burt-restore of the latest c1iscaux.snap file so that whitening gains etc are restored to their last saved value in the event of a service restart.
  14765   Tue Jul 16 16:00:01 2019   gautam   Update   CDS   c1iscaux Supermicro setup

I worked on preparing for the c1iscaux upgrade a bit today.

  1. Attachment #1: This shows where the 120 GB solid-state hard-drive and the 2 RAM cards (2GB each) are installed.
    • I found that it required considerable application of force to get the RAM cards into their slots.
    • Note: the 4GB RAM is broken up into two separate physical cards, each 2GB. The labeling is a bit confusing, as each card suggests it is by itself 4GB.
  2. OS install for c1iscaux:
    • I followed Jon's instructions (and added some of mine to the wiki page to hopefully make this process even less thinking-intensive).
    • To be able to use the IP address 192.168.113.83, I removed "bscteststand" from chiara's martian.hosts and rev.113.168.192.in-addr.arpa, as the last mention I could find of this machine was from 2009 (and I'm pretty sure it isn't an active unit anymore). I then restarted the bind9 process.
    • The hostname for this machine is currently "c1iscaux3" for testing purposes, I will change it once we do the actual install.
    • There was an error in the installation instructions to allow incoming ssh connections - it is openssh-server that is required, not openssh-client. This has now been fixed on the wiki page instructions.
  3. Acromag static IP assignment:
    • Assigned 2 ADCs (XT1221), 5 DACs (XT1541) and 5 sinking BIO units (XT1111) static IP addresses (and labelled them for easy reference) using the Windows laptop and the Acromag IP config utility (a rough read-back sketch follows this list).
    • I saw no reason not to use the 192.168.114.yyy scheme for the Acromag subnet on this machine, even though c1auxex and c1vac both have subnets with this addressing prefix. For reasons unknown to me, Jon opted to use 192.168.115.yyy for the c1susaux Acromag subnet.
  4. Followed the excellent step-by-step to install EPICS, Modbus and Asyn.
    • This took a while, ~1 hour, dominated by the building of EPICS. The other two took only a couple of minutes each.
    • The same combination suggested on Jon's wiki, of Modbus R2-11, EPICS base-7.0.1 and asyn4-33, is the most current at the time of installation.
    • A couple of typos that prevented straight-up copy-pasting were fixed on the wiki.
  5. Playground for testing new database files:
    • made a directory /cvs/cds/caltech/target/c1iscaux3 and copied over the .db files from /cvs/cds/caltech/target/c1iscaux and /cvs/cds/caltech/target/c1iscaux2.
    • Johannes said he did not develop any code to automate the process of translating the old .db files into the new ones for the Acromag - I won't invest the time in developing any either as I think just manually editing the files will be faster. 
    • I think I will follow the c1susaux convention of grouping .db files by the physical electronics system where possible (e.g. REFL11 channels in one file, CM channels in one file etc), as I think this makes for easier debugging.
    • There is an old "PZT_AI.db" file which I think consists completely of obsolete channels.
  6. Next steps:
    • Wire up the crate [Chub]
    • Make the database files and modbus files for talking to the Acromags on the internal subnet [Gautam], check the .db files [Koji]
    • Wiring of whitening switching from P1 to P2 connector, Issue #1 in this elog (this will also require the installation of the DIN shrouds) [Koji]
    • Soldering of P2 interface boards [Gautam]
    • Bench testing [Gautam, Koji, Chub]
    • Installation and in-situ testing [Gautam, Koji, Chub]
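
As a quick bench sanity check of the Acromag static IP assignment (item 3 above), something like the following could read raw registers from one of the XT1221 ADCs. This is only a hypothetical sketch assuming pymodbus 2.x; the IP address and register offsets are placeholders, and the XT1221 manual gives the real register map.

from pymodbus.client.sync import ModbusTcpClient   # pymodbus 2.x API (assumed)

ADC_IP = "192.168.114.21"        # placeholder: one of the XT1221 static IPs
client = ModbusTcpClient(ADC_IP, port=502)
if client.connect():
    # Placeholder register offset/count; consult the Acromag manual for the real map.
    result = client.read_input_registers(0, 8, unit=1)
    if not result.isError():
        print("raw ADC registers:", result.registers)
    client.close()
else:
    print(f"could not reach Acromag at {ADC_IP}")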

All the required additional parts should be here by the end of the week - I'd like to aim for Wednesday 7/24 for the installation in 1Y3 and in-situ testing. While talking to Rana, I realized that we should also fold the c1aux slow channels into this Acromag crate - there is no need for a separate machine to handle the shutters and illuminators. But let's not worry about that for now, those channels can simply be added later.

Attachment 1: IMG_7769.JPG
  14850   Mon Aug 19 14:36:21 2019   gautam   Update   CDS   c1iscaux remaining work

Here is what is left to do:

  1. Strain relief of all cabling. Chub will take care of this in the coming days. I have said he can connect and disconnect cables as he pleases, but after this work, we may require a hard reboot of the Acromag chassis before restoring functionality to the channels, as it is known that the Acromags can sometimes get "stuck" by a sudden connection of voltage.
  2. Installation of DB15 cable to the P2 connector of the CM board and a DB9 cable to the ALS demod unit (LO and RF power monitors). These will arrive in the next couple of days and Chub will take care of the install.
  3. Design, manufacture and install of a custom version of the backplane P1 adaptor board with only 1 D37 connector - for some of the PD DC signals, a custom adaptor board (part number D010005, for which I can't find any schematics) is already installed on the P2 connector, and makes the DC monitor signals available to 4 LEMO connectors. These signals are then digitized by the fast CDS system, presumably for PDH signal normalization. The footprint of this P2--->LEMO adaptor is such that we cannot simply install our P1---> 2xDB37 adaptor boards, because of space constraints. Fortunately, there is a simple fix to reduce the footprint of the board: remove the bottom DB37 connector, which is unused in the c1iscaux system except for the CM board. I recommend getting ~10 pcs of such boards, as they are also useful in a few other places where the power cabling to the eurocrates is a space constraint. See Attachment #1 for a picture explaining this situation. Anyone want to volunteer to take care of this?
  4. In-situ testing. This is easiest done with some light available in the interferometer. Which in turn requires IMC to be locked. Which in turn requires satellite box fixing. Anyone want to volunteer to take care of this?
  5. Modify C0EDCU.ini to trend the new slow channels we may want long-term monitoring of (e.g. LO power levels to the Demod boards). Anyone want to volunteer to take care of this?
  6. Decide what to do about the CM latch logic. There are some constraints with the way the Acromag register addressing works, such that I've had to change the way the mbboDirect bits are controlled. Unfortunately, this seems to sometimes and unpredictably cause the bits to flip in a non-robust way, which defeats the whole point of having the latch in the first place. Either the latch logic needs to be improved, or we need to implement the latch logic in the fast CDS system, not the slow.

Today I set up the autoburt.req file for the c1iscaux channels, and confirmed that the snapshots are getting recorded. There were a lot of channels in the old autoburt.req file which I thought were unnecessary (and several which no longer exist), so now the only channels that are burt-ed are the whitening gains and states of the AA filters. If someone feels we need more channels to be snapshot recorded, you can add them to the file.

In the old target directory, there were also various versions of a "saverestore.req" file - why do we need this in addition to an autoburt? I guess it is possible they are used by the IFOconfigure scripts to setup some whitening gains etc...

Attachment 1: caseForSmallerFootprint.pdf
  14855   Fri Aug 23 18:46:17 2019   Jon   Update   CDS   c1iscaux remaining work

I added the list of new c1iscaux channels to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini and restarted the framebuilder. Koji had thought some of these channels might have previously existed under slightly different names. However, after looking through C0EDCU.ini and the other _SLOW.ini files, I did not find any candidates for removal. As far as I can tell, all of these channels are being recorded for the first time.

Quote (Koji):
  1. Modify C0EDCU.ini to trend the new slow channels we may want long-term monitoring of (e.g. LO power levels to the Demod boards). Anyone want to volunteer to take care of this?
  14857   Sun Aug 25 14:18:08 2019   gautam   Update   CDS   c1iscaux remaining work

There were a bunch of useless / degenerate channels added - e.g. whitening gains, which are already burt-snapshotted. Maybe there are many more useless channels being trended, but there is no need to add more.

Copy-pasting wasn't done correctly - the first 4 added channels were duplicates. There are in fact 5 LO power mons, one for each of the frequencies 11, 33, 55, 110 and 165 MHz. 

I cleaned up. Basically only the detect-mon channels, and the ALS channels, are new in the setup now. I will review if any extra channels are required later. While checking that the daqd is happy, I noticed c1lsc FEs are in their stuck state, see Attachment #1. I guess a cable was bumped when the strain relief operation was underway. I'm not attempting a remote resuscitation.

Quote:

I added the list of new c1iscaux channels to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini and restarted the framebuilder. Koji had thought some of these channels might have previously existed under slightly different names. However, after looking through C0EDCU.ini and the other _SLOW.ini files, I did not find any candidates for removal. As far as I can tell, all of these channels are being recorded for the first time.

Attachment 1: Screen_Shot_2019-08-25_at_10.38.37_PM.png
  14903   Fri Sep 20 12:55:02 2019   gautam   Update   CDS   c1iscaux testing

I was hoping that the dark / electronics noise level on the LSC photodiodes would be sufficient for me to test the whitening gain switching on the iLIGO Pentek whitening boards. However, this does not seem to be the case. I guess to be thorough, we have to do this kind of test. It's a bit annoying to have to undo and redo the SMA connections, but I can't think of any obvious easier way to test this functionality. More annoyingly, the sensing matrix infrastructure necessary to do the kind of test described in the linked elog is only available for some PDs. I don't really want to modify the c1cal model and go through another mass reboot cycle.

While I was at it, I was also thinking about the tests we want to do. Here is a quick first pass - if you can think of other tests we ought to do, please add them to the list!

  1. Whitening gain switching on the D990694 boards.
    • Need to inject some signal to do this in a clean way. 
    • With some signal injected, we need to switch the whitening gain through the 15 available levels and confirm that we see a +3dB gain for each step.
    • An example script to do this operation and make a diagnostic plot is at /cvs/cds/caltech/target/c1iscaux/testScripts/testWhtGain.py (a rough sketch of such a stepping check is given after this list).
  2. AA enable/disable on the D000076 boards. Do we really need this functionality? Can't we permanently enable the AA, as was done for WF2?
    • Need to measure the TF with an SR785 or drive a high-freq line and confirm that the aliased peak height is attenuated as expected in DTT.
  3. LO Det Mon channel check
    • Zeroth level test can be done by turning Marconi OFF/ON, and confirming we see a change in the corresponding monitor channel, like I did here.
    • A more rigorous diagnostic would require these channels to be calibrated to dBm of LO power.
  4. PD INTF board check
    • Zeroth level check can be done by shining light onto PDs one at a time and confirming that the correct channel shows a response.
    • A more rigorous diagnostic would require these channels to be calibrated to mW of optical power incident on the PDs.
  5. QPD INTF board check
    • This is the IP-POS QPD readback.
    • Need to confirm the quadrant mapping, and that Pitch is really Pitch, Yaw is really Yaw.
    • A more rigorous diagnostic would require these channels to be calibrated to mm of position shift.
  6. CM Board
    • Need to determine what tests need to be done.
    • I have not yet implemented the fix for the MBBO gain channels for all the gains - only REFL1_GAIN is set up correctly now. Need to look at the hardware for the correct addressing of bits.
  7. ALS INTF board
    • This board isn't actually connected yet, pending strain relief of cabling at 1Y2.
    • The calibration of the board output volts to dBm is known, so we can easily check this functionality.
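
A rough sketch of the gain-stepping check from item 1 above, using pyepics. This is not the actual testWhtGain.py (which lives in the target directory quoted above); the channel names are hypothetical, and it assumes a test signal is injected as described so the readback amplitude tracks the whitening gain.

import time
import numpy as np
from epics import caget, caput

GAIN_CH = "C1:LSC-AS55_I_WhiteGain"   # hypothetical whitening-gain slider channel
MON_CH  = "C1:LSC-AS55_I_INMON"       # hypothetical downstream readback channel

levels = []
for step in range(16):                 # 0 dB plus the 15 available +3 dB steps
    caput(GAIN_CH, step, wait=True)
    time.sleep(2)                      # let the analog path and readback settle
    samples = [caget(MON_CH) for _ in range(50)]
    levels.append(np.std(samples))     # amplitude proxy for the injected signal

levels = np.asarray(levels)
steps_db = 20 * np.log10(levels[1:] / levels[:-1])
print("per-step gain (dB):", np.round(steps_db, 2))   # expect ~ +3 dB per step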
  14905   Mon Sep 23 10:49:34 2019   rana   Update   CDS   c1iscaux testing
  • I'd say permanently enable AA and AI. There's no reason to turn these off for usual channels. We can always undo one switch later if we want to use aliasing to sample a high frequency signal (ala SoCal).
  • The PD output should be ~20 nV/rtHz into the mixer, so that's ~7 nV/rtHz into the whitening filter. We need 60 dB to be above the ADC noise (a rough numeric check follows this list).
  • I've forgotten what the current config is, but in iLIGO we hacked a fixed whitening onto the LT1128 input amp of the WF board so that lock acquisition could be a little easier (better SNR). On Ch1, that's replacing R60 with an RC network. We want to make sure that the lock acq transients are not saturating the ADC, but we can maybe put in a 40:200 stage.
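
A rough numeric check of that statement, assuming a 16-bit ADC with a +/-20 V input range sampled at 64 kHz (assumed numbers; this only bounds the quantization floor, and the real ADC noise is somewhat higher):

import numpy as np

pd_noise = 7e-9                                  # V/rtHz into the whitening filter (from above)
gain_db = 60.0
signal_asd = pd_noise * 10**(gain_db / 20)       # ~7 uV/rtHz after +60 dB

# Quantization-noise floor of an ideal 16-bit, +/-20 V, 64 kHz ADC: q/sqrt(12*fs/2)
lsb = 40.0 / 2**16
fs = 64e3
adc_floor = lsb / np.sqrt(6 * fs)                # ~1 uV/rtHz

print(f"signal after 60 dB : {signal_asd*1e6:.1f} uV/rtHz")
print(f"ADC quantization   : {adc_floor*1e6:.1f} uV/rtHz")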
  14906   Wed Sep 25 20:10:13 2019   Koji   Update   CDS   c1iscaux testing

== Test Status ==
[none] Whitening gain switching test
[none] AA enable/disable switching
[0th order] LO Det Mon channel check
[none] PD I/F board check
[done] QPD I/F board check
[none] CM Board
[none] ALS I/F board


- LO Det Mon channel check

The StripTool template for the test was made:
/cvs/cds/caltech/target/c1iscaux/testScripts/testDetectMons.str
Then, the RF output of the main Marconi was toggled a few times. -> Confirmed the channels are responding. (Attachment 1)

- IPPOS channel check

(0th order check) The StripTool template for the test was made:
/cvs/cds/caltech/target/c1iscaux/testScripts/testIPPOS.str
Then, the IPPOS QPD was illuminated with a phone LED. Initially I saw no response from the QPD. It turned out that the IPPOS I/F module had no input cable connected. After the connection, all 4 segments responded to the phone LED and also to the IFO beam.

(more careful check)
I decided to do a more careful check of IPPOS. As there was an f~30mm lens on the oplev table, the beam was focused such that only one element reacted to the incident beam. The beam power (a few mW) was too strong for a single QPD element, which saturates at ~6, so an ND filter of OD0.6 was used to reduce the incident power.

Here are the results:
Segment | Arrangement (seen from the beam) | EPICS channel              | CH output            | Incident power (mW) | Polarity for X/Y_Calc
SEG1    | UPPER LEFT                       | C1:ASC-IP_POS_QPD_Seg1_Mon | 3.651+/-0.003 (N=10) | 2.35+/-0.01         | X(+) / Y(+)
SEG2    | LOWER LEFT                       | C1:ASC-IP_POS_QPD_Seg2_Mon | 3.607+/-0.002 (N=12) | 2.35+/-0.01         | X(+) / Y(-)
SEG3    | LOWER RIGHT                      | C1:ASC-IP_POS_QPD_Seg3_Mon | 3.658+/-0.002 (N=11) | 2.37+/-0.01         | X(-) / Y(-)
SEG4    | UPPER RIGHT                      | C1:ASC-IP_POS_QPD_Seg4_Mon | 3.529+/-0.004 (N=11) | 2.30+/-0.01         | X(-) / Y(+)
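
With the polarities in the table, the X/Y combinations work out as below (a sketch using the measured DC values; this is not the actual front-end X/Y_Calc code):

# Segment order: [SEG1 UL, SEG2 LL, SEG3 LR, SEG4 UR], as seen from the beam.
seg = [3.651, 3.607, 3.658, 3.529]

total = sum(seg)
x = (seg[0] + seg[1] - seg[2] - seg[3]) / total   # X(+) for SEG1/2, X(-) for SEG3/4
y = (seg[0] - seg[1] - seg[2] + seg[3]) / total   # Y(+) for SEG1/4, Y(-) for SEG2/3
print(f"normalized X = {x:+.4f}, Y = {y:+.4f}")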

After the measurement, the lens and the filter were removed and the beam was adjusted to the center of the QPD.

Attachment 1: testDetectMons_190925.png
Attachment 2: testIPPOS_190925.png
  14908   Thu Sep 26 20:09:40 2019   Koji   Update   CDS   c1iscaux testing

== Test Status ==

[done] Whitening gain switching test => Some issues found (POP110Q, Whitening3_8 not switching, ASDC overall behavior, REFL33Q needs recheck)
[done] AA enable/disable switching
[0th order] LO Det Mon channel check
[none] PD I/F board check
[done] QPD I/F board check
[none] CM Board
[none] ALS I/F board

And, the Y-arm lock was recovered! After some alignment work, the Y-arm was locked. The whitening gain for POY11 was +18dB. The servo gain was 0.015 (nominal).
Once the transmission reached 0.8, I could use ASS to align the cavity and the TTs.
The transmission reached just 1.00 at the end. Was the transmission recently normalized? (See attachment 5)


- Whitening Filter Gain Switching Test

Each whitening filter was tested individually. A +50mV DC signal was connected to the 8 inputs using an SMA octopus cable.
The existing script ( /cvs/cds/caltech/target/c1iscaux/testScripts/testWhtGain.py ) did not work because cds.getdata failed to fetch all of the data requested. By adding some sleep before starting to download the data, the problem was avoided. Some truncated data are still seen in the result, but the StripTool screenshots complement the missing part.
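
The workaround described above is essentially: after stepping the gain, wait long enough that the requested stretch of data actually exists before fetching it. A hedged sketch, assuming the cdsutils Python interface used by these scripts exposes getdata(channel, duration); if the real signature differs, adapt accordingly:

import time
import cdsutils   # assumption: the cdsutils package used by the 40m test scripts

channel = "C1:LSC-AS55_I_IN1_DQ"   # hypothetical channel name
duration = 10                      # seconds of data per gain step

# ... step the whitening gain here ...
time.sleep(duration + 5)           # extra sleep so the data is written before fetching
data = cdsutils.getdata(channel, duration)
print("fetched:", data)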

Whitening Filters #2~4 were a little tricky because the code needed modification so that the spare channels can be tested.
The modified script is stored as /cvs/cds/caltech/target/c1iscaux/testScripts/testWhtGain_190926.py 

Whitening #1: No issue found.

Whitening #2: No issue found. Some of the step plots showed truncation of the data at the end, but this is an artifact of cds.getdata. The StripTool shows nothing irregular.

Whitening #3: POP110Q and the spare channel (CH8) did not show any reaction. REFL33Q showed some systematic gain deviation. It could just be an offset problem, but it needs to be rechecked.

Whitening #4: The DC channels were found to be OK  except for ASDC. ASDC shows earlier saturation. The input was lowered to 5mVDC to avoid saturation in the second trial. The circuit needs to be checked. The spare channels look noisy, but this is because there is no way to turn off the whitening filters for them.


- AA Filter Test

Injected an 11kHz 1Vpp sine wave into the whitening filters. The whitening gains were kept at 0dB. If the AA is disabled, the alias of the 11kHz signal appears in the time series.
-> Whitening #1, #3 and #4: the enable/disable worked correctly.
-> Whitening #2: AA bypass had no effect. This is expected behavior.
 

Attachment 1: Wht1.pdf
Attachment 2: Wht2.pdf
Attachment 3: Wht3.pdf
Attachment 4: Wht4.pdf
Attachment 5: lock.png
  14909   Fri Sep 27 15:59:53 2019   gautam   Update   CDS   c1iscaux testing

I reset the normalization for both arms on Jul 9 2019.

Quote:

The transmission reached just 1.00 at the end. Was the transmission recently normalized? (See attachment 5)

  14921   Wed Oct 2 01:11:40 2019   Koji   Update   CDS   c1iscaux testing

I worked on more troubleshooting of the whitening filters on Tuesday afternoon.

== Test Status ==

[done] Whitening gain switching test => Remaining issues ASDC overall behavior
[done] AA enable/disable switching
[0th order] LO Det Mon channel check
[none] PD I/F board check
[done] QPD I/F board check
[none] CM Board
[none] ALS I/F board


Issue 1: POP110Q did not show any gain switching [Resolved]

A DB37 breakout board was connected to the Acromag front panel. I found that Ch6 (POP110Q) did not show any differential DC output. I searched around the other pins and found that the corresponding signal showed up on Pin 36 instead of Pin 35. Opening the front panel revealed that the internal wiring was wrong (Attachment 1). The wire which should have gone to Pin 35 was connected to Pin 36. By correcting the wiring, POP110Q started to show identical behavior to POP110I. (Attachment 2)

Issue 2: LSC reboot [Resolved]

Rough activity around the Acromag chassis crashed the c1lsc realtime processes (as usual). I ran the usual rebooting script /opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh. This successfully restored the status of the vertex RT processes.

Issue 3: REFL33 different behavior between I and Q [Resolved]

REFL33I and Q consistently showed a difference (Attachment 3). The whitening board was pulled out and powered with an extension card. The raw outputs were checked with a function generator and an oscilloscope connected. The outputs for 33I and Q were identical (Attachment 4). So I concluded that the observed difference was an artifact of the checking script.

Issue 4: Whitening 3_8 did not switch at all [Resolved]

To switch the gain stages, each channel of the whitening board takes a DAC output from the Acromag and converts it into 4-bit digital signals. For CH8 of WF#3, this signal did not reach the instrumentation amplifier AD620. After tracing the signal on the electronics bench, it was found that the CH8 gain input at the DIN96 connector had no electrical continuity to the input of the AD620. As there were no exposed pads between the DIN96 connector and the AD620 input (pin 2), a wire was additionally soldered (forgot to take a photo). This solved the gain switching issue, as the test result indicates (Attachment 5). The noisiness came from the whitening filter, which cannot be turned off right now. For this reason, the test of the whitening part is pending too.

The StripTool plot during the overall WF#3 test is shown in Attachment 6.

Issue 5: ASDC behavior [Unresolved]

First of all, during this test I found that WF#4 was not responding to the gain change at all. This issue was resolved by power cycling the Acromag chassis (as usual).

The whitening filter #4 was pulled, and the behavior of CH5,6,7,8 (CH8=ASDC) was compared. It was found that the analog outputs were identical and the problem lies further downstream.

Issue 6: Illegal REFL11 LO cable [Unresolved]

This is a newly found issue. The cable between the LO distributor and the REFL11 demodulator is not the legit solder-soaked RG402 coax, but a flexible coax (Attachment 7). This cable needs to be replaced in the end. But for today it was not replaced, so that we have a configuration consistent with before.

Issue 7: Signature of a damaged POPDC cable [Resolved]

The POPDC cable indicated some damage at the WF#4 side. It was not complete damage, and therefore solder coating was added (Attachment 8).

Attachment 1: WF3_wiring.png
Attachment 2: POP110.pdf
Attachment 3: REFL33.pdf
Attachment 4: P_20191001_174548_vHDR_On.jpg
Attachment 5: Whitening3_8.pdf
Attachment 6: Screenshot_WF3_191001.png
Attachment 7: P_20191001_181052_vHDR_On.jpg
Attachment 8: POPDCcable.png
  14939   Fri Oct 4 01:57:09 2019   Koji   Update   CDS   c1iscaux testing

The AA filter for ASDC was fixed.

== Test Status ==

[done] Whitening gain switching test
[done] AA enable/disable switching
[0th order] LO Det Mon channel check
[none] PD I/F board check
[done] QPD I/F board check
[none] CM Board
[none] ALS I/F board


The AA filter for the 4th section of the LSC analog electronics bank (D000076) was pulled out for the test. On the workbench, the questionable CH8 was checked. It turned out that the filter amplifier module for the 8th-order elliptic filter at 7.5kHz was not working properly and exhibited unusual attenuation. This filter module (Frequency Devices Inc D68L8E-7.50kHz) was desoldered and replaced with a module from a spare board. Note that Gautam and I had tried to use this spare board instead of the current one, but it didn't give us any signal for an unknown reason. Since the desoldering required a lot of force and had a risk of damaging the PCB, a socket was made from an IC socket (see Attachment 1). This change made CH8 function identically to the other channels.


I took this opportunity to check the performance of the AA filters. For each channel, the input signal was injected at J3 using a Pomona clip. The output was taken from pins 1, 5, 9, ... of J2. This is the + side of the differential output; the - side has equivalent performance but inverted signal polarity. The digital signals for the AA bypass switches were not connected. Fortunately, this was just fine as it left the anti-aliasing filters engaged.

Attachment 2 shows the transfer functions of all the channels. All the channels showed an identical response (at least visually). The transfer function for CH1 was fitted by LISO. The ZPK values are listed here:

pole 5.2860544577k 503.1473053928m
pole 5.9752193716k 1.0543411596
pole 8.9271953580k 3.5384364788
pole 8.2181747850k 3.4220607928
pole 182.1403534923k 1.1187869426 # This has almost no effect
zero 13.5305051680k 423.6130434049M
zero 15.5318357741k 747.6895990654k
zero 23.1746351749k 1.5412966100M


factor 989.1003181564m
delay 24.4846075283n
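
For anyone wanting to replay the fit, the LISO `pole f Q` / `zero f Q` lines above can be turned into a scipy transfer function as sketched below, under the assumption that each `f Q` line denotes a complex pair with resonant frequency f and quality factor Q, and that `factor` is the DC gain (the 24.5 ns delay is ignored here):

import numpy as np
from scipy import signal

def pair(f, q):
    # Two complex roots of s^2 + (w0/Q)s + w0^2, i.e. a LISO "f Q" pair.
    w0 = 2 * np.pi * f
    return np.roots([1.0, w0 / q, w0**2])

poles = np.concatenate([
    pair(5.2860544577e3, 0.5031473053928),
    pair(5.9752193716e3, 1.0543411596),
    pair(8.9271953580e3, 3.5384364788),
    pair(8.2181747850e3, 3.4220607928),
    pair(182.1403534923e3, 1.1187869426),
])
zeros = np.concatenate([
    pair(13.5305051680e3, 423.6130434049e6),
    pair(15.5318357741e3, 747.6895990654e3),
    pair(23.1746351749e3, 1.5412966100e6),
])

factor = 989.1003181564e-3
k = factor * np.prod(np.abs(poles)) / np.prod(np.abs(zeros))   # set DC gain = factor

f = np.logspace(2, 5, 400)
_, h = signal.freqs_zpk(zeros, poles, k, worN=2 * np.pi * f)
print(f"gain at 1 kHz: {20*np.log10(abs(h[np.argmin(abs(f - 1e3))])):.2f} dB")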

Attachment 3 shows the ASD of the output voltage noise measurement. Note the input was shorted for this measurement. The nominal output voltage noise was found to be 0.1 uV/rtHz and the 1/f noise corner frequency was about 100Hz. Only CH3 showed a deviation from the typical values. It looks like this is neither an artifact nor transient noise. Fortunately, nothing is connected to this channel right now.

Attachment 1: P_20191003_172956_vHDR_On.jpg
Attachment 2: TF.pdf
Attachment 3: PSD.pdf
Attachment 4: 191003_AA_Filter.zip
  14942   Sat Oct 5 00:03:21 2019   Koji   Update   CDS   c1iscaux testing

[Gautam, Koji]

Input gain part of the CM servo board D1500308 was tested. A couple of problems were detected. One still remains.

== Test Status ==

[done] Whitening gain switching test
[done] AA enable/disable switching
[0th order] LO Det Mon channel check
[none] PD I/F board check
[done] QPD I/F board check
[in progress] CM Board
[none] ALS I/F board


We started to test the CM servo board from the input stages. Initially, DC offsets were provided to IN1 and IN2 to check the gain on an oscilloscope or a StripTool plot. However, the results were confusing, so AC measurements with an SR785 were carried out in the end. It turned out that both IN1 and IN2 had issues. IN1 showed an increment of the gain by 2dB every two gain steps, suggesting that the 1dB gain stage had a problem. IN2 showed a sudden drop of the signal at gains +8~+15dB and +24~+31dB, suggesting that a particular 8dB stage had a problem. The board was put on the extender and we started tracing the signals.

CH1: The digital signal to switch the 1dB stage reached Pin 1A of the DIN96 connector. However, the latch logic (U47 74ALS573) does not spit out the corresponding level for this bit. Note that the next bit was properly working. We concluded that this 74ALS573 had failed and needs to be replaced. We have no spare of this wide SOIC-20 chip, but Downs seems to have some spares (see Todd's spare parts list). We will try to get the chip on Monday.

CH2: The only stage used between +8dB and +15dB and between +24dB and +31dB is the +8dB stage (U9 and U2A). I found that the amplified output signal did not reach the FET switch U2A (MAX333A). Therefore it was concluded that the opamp U9 (AD829) had an issue. In fact, the amp itself was working, but the output pin was not properly soldered to the pad. Resoldering this chip fixed the issue. Note that this particular channel has some OP27s soldered instead of AD829. Gautam mentioned that there was some action on the board a few years back to deal with the offset issue. Next time the board is pulled out, I'll take photos of it.


Using SR785, the swept sine measurements between 100 and 100kHz were taken for all the gain settings for each channel. Between -31dB and -11dB, the input signal amplitude of 300mV was used. Between -10dB and +10dB, it was reduced to 100mV. For the rest, the amplitude was 10mV. Note that the data for +11dB for CH1 and +2dB for CH2 are missing presumably due to a data transfer issue.

The results are shown in Attachments 1~4.

Attachments 1 and 3 show the gain at each slider value. The measured gain was represented by the average between 1kHz and 10kHz. The missing 1dB every two slider values is seen for CH1. The phase delay at 100kHz is shown in the lower plot. There is some delay and delay variation seen, but it is in fact less than 1deg at 10kHz (see later), so its effect on the CM servo (IMC AO path) is minimal. The gain for CH2 tracks the slider value nicely. The phase delay is larger than that of CH1, as expected because of the OP27.

Attachments 2 and 4 show the transfer functions. The slider value was subtracted from the measured gain magnitude to indicate the deviation between them. The missing 1dB is obviously visible for CH1, in addition to the overall gain offset of ~0.2dB. CH2 also shows a gain offset of 0.1dB~0.2dB. The phase delay comes into play around 20kHz, particularly at higher gains where the UGF of the AO path is.


gautam: Here is the elog thread for IN2 opAmps going AD829-->OP27. Also, I guess Attachment #1 and #3 x-axes should be "Gain [dB]" rather than "Frequency [Hz]".

Attachment 1: REFL1_GAIN1.pdf
Attachment 2: REFL1_GAIN2.pdf
Attachment 3: REFL2_GAIN1.pdf
Attachment 4: REFL2_GAIN2.pdf
  14956   Tue Oct 8 20:23:03 2019   gautam   Update   CDS   c1iscaux testing

Looking at the old latch.st code, it looks like this is just a heartbeat signal to indicate the code is alive. I'll implement this. Aesthetically, it'd also be nice to have the hex representation of the "*_SET" channels visible on the MEDM screen.
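
A minimal sketch of such a heartbeat (hypothetical channel name; the real implementation would live inside latch.py's main loop rather than its own script):

import time
from epics import caput

ALIVE_CH = "C1:LSC-CM_LATCH_ALIVE"   # hypothetical EPICS channel for the heartbeat

beat = 0
while True:
    beat ^= 1                 # toggle 0/1 so a stalled script is easy to spot on MEDM
    caput(ALIVE_CH, beat)
    time.sleep(1)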

 

Quote:

Latch logic works. But latch alive signal is missing.

  14912   Mon Sep 30 11:20:43 2019   gautam   Update   CDS   c1iscaux testing - CM board code updated

DATED, SEE ELOG14941 for the most up-to-date info on latch.py.

I modified /cvs/cds/caltech/target/c1iscaux/latch.py and /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_CM.db to set up the mbbo logic for the other three channels on the CM board, namely REFL2 Gain, AO Gain, and the Super boosts. The systemctl processes were restarted on c1iscaux. We are now ready to perform systematic checks on the CM board functionality.

Remarks:

The addressing of the Acromag BIO registers is done in a way that is kind of inconvenient for the EPICS mbboDirect protocol:

  • The control word going to the Acromag is 16 bits in length
  • However, only the 4 least significant bits actually correspond to physical channels - the remaining 12 bits are "unused".
  • Because each Acromag BIO unit has 16 BIO channels, this means that they are grouped into four "banks" of 4 bits each.
  • The mbboDirect EPICS/modbus protocol is used to control multiple physical BIO channels using a single input, which is exactly what we want for the gain sliders on the CM board. However, one caveat is that the bits need to be consecutive.
  • This means that we have to break up the 6 bits used for the gain sliders (and in fact also the 2 bits used for the super boosts) into a least-significant-bits (LSB) group and a most-significant-bits (MSB) group.
  • What's more annoying is that our physical wiring scheme means that we can't uniformly decide how this division into LSBs and MSBs works for all the channels - e.g. for REFL1 Gain, the LSB group is the 4 least significant bits and the MSB group is the 2 most significant ones, while for REFL2 Gain, the roles are reversed.
  • In hindsight, the "clever" way to do the wiring assignment would have been to factor this in - but the problem is (sort of) easily fixed in software, and so I recommend we stick with the existing wiring scheme (a bit-splitting sketch follows this list).
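
A sketch of the bit bookkeeping described above (standalone Python, not the actual latch.py; the channel-to-bank mapping here is illustrative only):

def split_gain_bits(value, lsb_width=4):
    """Split a 6-bit gain word into an LSB group and an MSB group for two
    mbboDirect records (e.g. 4 bits + 2 bits, as for REFL1 Gain)."""
    lsb = value & ((1 << lsb_width) - 1)
    msb = value >> lsb_width
    return lsb, msb

# Example: a slider value of 0b101101 (45) with a 4-bit LSB group
lsb, msb = split_gain_bits(0b101101)
print(f"LSB group = {lsb:04b}, MSB group = {msb:02b}")
# For REFL2 Gain the roles are reversed, i.e. split_gain_bits(value, lsb_width=2).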

I tested the new latch.py script by toggling the various sliders (one at a time) between two values and monitoring the states of the various soft and "*_BITS" channels, see Attachment #1. The behavior seems consistent to me, but to be sure, we have to use Koji's LED tester board and confirm that the physical bits are being toggled correctly. The StripTool templates live in /cvs/cds/caltech/target/c1iscaux/CMdiag.

Quote:

I have not yet implemented the fix for the MBBO gain channels for all the gains - only REFL1_GAIN is set up correctly now. Need to look at the hardware for the correct addressing of bits

Attachment 1: CMsoftTest.png
  15423   Mon Jun 22 17:51:50 2020   gautam   Update   CDS   c1iscaux was down

The machine needed a hard reboot as it was un-ssh-able. 

The exact time that the machine went down is unknown because the blinkys were not DQ-ed. I've now added these to the EDCU to make these channels actually useful, and we may look back on the reliability (or otherwise) of the Acromag system. To my memory, this is the ~5th time one of the new Acromag servers has needed a hard reboot. While this may be less frequent (?) than the VME machines, perhaps there is some other reason for these dropouts. Maybe something to do with the martian network?

Anyway the machine is back up and running now.

  4570   Tue Apr 26 22:56:01 2011   kiwamu   Update   LSC   c1iscaux2 and c1iscaux restarted

While checking whitening filters on the LSC rack, I found that some EPICS controls for the whitening did not seem to be working.

So I powered off two crates: the top one and the bottom one on the 1Y3 rack.

These crates contain c1iscaux and c1iscaux2. I then powered them back on, but it didn't solve the issue.

  5992   Wed Nov 23 22:58:33 2011   Koji   Update   General   c1iscaux2 is back (re: recovery from the power shutdown)

Keying again did not help to solve the issue. I turned off the power at the back of the crate, and turned it on again.
Then the key worked again.

c1iscaux2 is burtrestored and running fine now.

Quote:

 - One of the VME racks on 1X3 is not showing the +/-15V green LED lights.

   This is the one on the very upper side of the rack, which contains the old c1lsc machine and c1iscaux2. If we are still using c1iscaux2, it needs to be fixed.

 

  6598   Thu May 3 17:15:38 2012   Koji   Update   LSC   c1iscaux2 rebooted/burtrestored

[Jenne/Den/Koji]

We saw some white boxes on the LSC screens.
We found c1iscaux2 is not running.

Once the target was power-cycled, these EPICS channels came back.
Then c1iscaux2 was burt-restored using the snapshot from 5:07 on 4/16, a day before the power glitch.

  5979   Tue Nov 22 18:15:39 2011   jamie   Update   CDS   c1iscex ADC found dead. Replaced, c1iscex working again

c1iscex has not been running for a couple of days (since the power shutdown at least). I was assuming that the problem was a recurrence of the c1iscex IO chassis issue from a couple of weeks ago (5854).  However, upon investigation I found that the timing signals were all fine.  Instead, the IOP was reporting that it was finding no ADC, even though there is one in the chassis.

Since I had a spare ADC that was going to be used for the CyMAC, I decided to try swapping it out to see if that helped.  Sure enough, the system came up fine with the new ADC.  The IOP (c1x01) and c1scx are now both running fine.

I assume the issue before might have been caused by a flaky, failing ADC, which has now failed completely.  We'll need to get a new ADC for me to give back to the CyMAC.
