ID   Date   Author | Type | Category | Subject
  6171   Wed Jan 4 16:40:52 2012   Jamie | Update | Computers | front-end fb communication restored

Communication between the front end models and the framebuilder has been restored.  I'm not sure exactly what the issue was, but rebuilding the framebuilder daqd executable and restarting seems to have fixed it.

I suspect that the problem might have had to do with how I left things after the last attempt to upgrade to RCG 2.4.  Maybe the daqd that was running was linked against some library that I accidentally moved after starting the daqd process.  It would have kept running fine as it was, but once the process died and was restarted, its broken linking might have kept it from running correctly.  I don't have any other explanation.
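As a hedged aside (not part of the original entry): one way to check for this failure mode is to compare the libraries a running process has mapped against what a fresh invocation would link.  A sketch, assuming the daqd binary is in the PATH:

  # shared objects mapped by the running daqd that have since been
  # deleted or replaced on disk
  grep deleted /proc/$(pgrep -o daqd)/maps
  # what a freshly started daqd would link against
  ldd $(which daqd)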

It turns out this was not (as best I can tell) related to the new year time sync issues that were seen at the sites.

  13133   Sun Jul 23 22:16:55 2017   Jamie, gautam | Update | CDS | front-end now running with new OS, RCG

All front ends and models are (mostly) running now

All suspensions are damped.

It should be possible at this point to do more recovery, like locking the MC.

Some details on the restore process:

  • all models were recompiled with the new RCG version 3.0.3
  • the new RCG does stricter Simulink drawing checks, and was complaining about unterminated outputs in some of the SUS models.  I terminated all the outputs it complained about and saved the models.
  • RCG 3.0 requires a new directory for doing better filter module diagnostics: /opt/rtcds/caltech/c1/chans/tmp
  • had to reset the slow machines c1susaux, c1auxex, c1auxey

The daqd is not yet running.  This is the next task.

I have been taking copious notes and will fully document the restore process once complete.

c1ioo issues

c1ioo has been giving us a little bit of trouble.  The c1ioo model kept crashing and taking down the whole c1ioo host.  We found a red light on one of the ADCs (ADC1).  We pulled the card and replaced it with a spare from the CDS cabinet.  That seemed to fix the problem and c1ioo became more stable.

We've still been seeing a lot of glitching in c1ioo, though, with CPU cycle times frequently (every couple of seconds) running above threshold for all models, up to 200 us.  I tried unloading every kernel module I could and shutting down every non-critical process, but nothing seemed to help.

We eventually tried stopping the c1ioo model altogether and that seemed to help quite a bit, dropping the long cycle rate down to something like one every 30 seconds or so.  Not sure what that means.  We should look into the BIOS again, to see if there could be something interacting with the newer kernel.

So currently the c1ioo model is not running (which is why it's all white in the CDS overview snapshot above).  The fact that c1ioo is not running and the remaining models are still occasionally glitching is also causing various IPC errors on auxiliary models (see c1mcs, c1rfm, c1ass, c1asx).

RCG compile warnings

The new RCG tries to do more checks on custom C code, but it seems to be having trouble finding our custom "ccodeio.h" files that live with the C definitions in USERAPPS/*/common/src/.  Unclear why yet.  This is causing the RCG to spit out warnings like the following:

Cannot verify the number of ins/outs for C function BLRMS.
    File is /opt/rtcds/userapps/release/cds/c1/src/BLRMSFILTER.c
    Please add file and function to CDS_SRC or CDS_IFO_SRC ccodeio.h file.

These are just warnings and will not prevent the models from compiling or running.  We'll figure out what the problem is to make these go away, but they can be ignored for the time being.

model unload instability

Probably the worst problem we're facing right now is an instability that will occasionally, but not always, cause the entire front end host to freeze up upon unloading an RTS kernel module.  This is a known issue with the newer Linux kernels (we're using kernel version 3.2.35), and is being looked into.

This is particularly annoying with the machines on the dolphin network, since if one of the dolphin hosts goes down it manages to crash all the models reading from the dolphin network.  Since half the time they can't be cleanly restarted, this tends to cause a boot fest with c1sus, c1lsc, and c1ioo.  If this happens, just restart those machines, wait till they've all fully booted, then restart all the models on all hosts with "rtcds start all".
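For reference, a hedged sketch of the recovery sequence just described (host names from this entry; the rtcds wrapper ships with the RTS, though the exact invocation may differ):

  # reboot the dolphin hosts
  for host in c1sus c1lsc c1ioo; do ssh $host sudo reboot; done
  # after all three have fully booted, on each host:
  rtcds start all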

  13205   Mon Aug 14 19:41:46 2017   Jamie | Update | CDS | front-end/DAQ network down for kernel upgrade, and timing errors

I'm upgrading the linux kernel for all the front ends to one that is supposedly more stable and won't freeze when we unload RTS models (linux-image-3.2.88-csp).  Since it's a different kernel version it requires rebuilds of all kernel-related support stuff (mbuf, symmetricom, mx, open-mx, dolphin) and all the front end models.  All the support stuff has been upgraded, but we're now waiting on the front end rebuilds, which takes a while.

Initial testing indicates that the kernel is more stable; we're mostly able to unload/reload RTS modules without the kernel freezing.  However, the c1iscey host seems to be oddly problematic and has frozen twice so far on module unloads.  None of the other hosts have frozen on unload (yet), though, so it's still not clear.

We're now seeing some timing errors between the front ends and daqd, resulting in a "0x4000" status message in the 'C1:DAQ-DC0_*_STATUS' channels.  Part of the problem was an issue with the IRIG-B/GPS receiver timing unit, which I'll log in a separate post.  Another part of the problem was a bug in the symmetricom driver, which has been resolved.  That wasn't the whole problem, though, since we're still seeing timing errors.  Working with Jonathan to resolve.

  13215   Wed Aug 16 17:05:53 2017   Jamie | Update | CDS | front-end/DAQ network down for kernel upgrade, and timing errors

The CDS system has now been moved to a supposedly more stable real-time-patched Linux kernel (3.2.88-csp) and RCG r4447 (roughly the head of trunk, intended to become release 3.4).  With one major and one minor exception, everything seems to be working:

The remaining issues are:

  • RFM network down.  The IOP models on all hosts on the RFM network are not detecting their RFM cards.  Keith Thorne thinks that this is because of changes in trunk to support the new long-range PCIe that will be used at the sites, and that we just need to add a new parameter to the cdsParameters block in models that use RFM.  He and Rolf are looking into it for us.
  • The 3.2.88-csp kernel is still not totally stable.  On most hosts (c1sus, c1ioo, c1iscex) it seems totally fine and we're able to load/unload models without issue.  c1iscey is definitely problematic, frequently freezing on module unload.  There must be a hardware/BIOS issue involved here.  c1lsc has also shown some problems.  A better kernel is supposedly in the works.
  • NDS clients other than DTT are still unable to raise test points.  This appears to be an issue with the daqd_rcv component (i.e. NDS server) not properly resolving the front ends in the GDS network.  Still looking into this with Keith, Rolf, and Jonathan.

Issues that have been fixed:

  • "EDCU" channels, i.e. non-front-end EPICS channels, are now being acquired properly by the DAQ.  The front-ends now send all slow channels to the daq over the MX network stream.  This means that front end channels should no longer be specified in the EDCU ini file.  There were a couple in there that I removed, and that seemed to fix that issue.
  • Data should now be recorded in all formats: full frames, as well as second, minute, and raw_minute trends
  • All FE and DAQD diagnostics are green (other than the ones indicating the problems with the RFM network).  This was fixed by getting the front-end models, mx_stream processes, and daqd processes all compiled against the same version of the advLigoRTS, and adding the appropriate command line parameters to the mx_stream processes.
  13216   Wed Aug 16 17:14:02 2017   Koji | Update | CDS | front-end/DAQ network down for kernel upgrade, and timing errors

What's the current backup situation?

  13217   Wed Aug 16 18:01:28 2017   Jamie | Update | CDS | front-end/DAQ network down for kernel upgrade, and timing errors
Quote:

What's the current backup situation?

Good question.  We need to figure something out.  fb1 root is on a RAID1, so there is one layer of safety.  But we absolutely need a full backup of the fb1 root filesystem.  I don't have any great suggestions, other than getting an external disk, 1T or so, and copying all of root (minus NFS mounts).
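A hedged sketch of such a copy (not from the original entry; the device name and mount point are assumptions):

  # mount the external disk, then copy root while staying on one filesystem,
  # so that NFS mounts (and /proc, /sys, etc.) are skipped
  sudo mount /dev/sdX1 /mnt/backup
  sudo rsync -aHAX --one-file-system / /mnt/backup/fb1-root/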

  13218   Wed Aug 16 18:06:01 2017   Koji | Update | CDS | front-end/DAQ network down for kernel upgrade, and timing errors

We also need to copy chiara's root. What is the best way to get the full image of the root FS?
We may need to restore these root images to a different disk with a different capacity.

Is the dump command good for this?
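For reference, a hedged sketch of dump/restore usage (assuming the root filesystems are ext3/ext4, which dump understands; restore works file-by-file, so the target disk capacity does not need to match):

  # level-0 dump of the root filesystem to an image file
  sudo dump -0u -f /mnt/backup/root.dump /
  # restore onto a freshly made filesystem on a different disk
  cd /mnt/newroot && sudo restore -rf /mnt/backup/root.dump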

  13219   Wed Aug 16 18:50:58 2017   Jamie | Update | CDS | front-end/DAQ network down for kernel upgrade, and timing errors
Quote:

The remaining issues are:

  • RFM network down.  The IOP models on all hosts on the RFM network are not detecting their RFM cards.  Keith Thorne thinks that this is because of changes in trunk to support the new long-range PCIe that will be used at the sites, and that we just need to add a new parameter to the cdsParameters block in models that use RFM.  He and Rolf are looking into it for us.

RFM network is back!  Everything green again.

Use of RFM has been turned off in advLigoRTS trunk in favor of the new long-range PCIe networking being developed for the sites.  Rolf provided a single-line patch that re-enables it:

controls@c1sus:/opt/rtcds/rtscore/trunk 0$ svn diff
Index: src/epics/util/feCodeGen.pl
===================================================================
--- src/epics/util/feCodeGen.pl    (revision 4447)
+++ src/epics/util/feCodeGen.pl    (working copy)
@@ -122,7 +122,7 @@
 $diagTest = -1;
 $flipSignals = 0;
 $virtualiop = 0;
-$rfm_via_pcie = 1;
+$rfm_via_pcie = 0;
 $edcu = 0;
 $casdf = 0;
 $globalsdf = 0;
controls@c1sus:/opt/rtcds/rtscore/trunk 0$

This patch was applied to the RTS source checkout we're using for the FE builds (/opt/rtcds/rtscore/trunk, which is r4447, and is linked to /opt/rtcds/rtscore/release).  The following models that use RFM were re-compiled, re-installed, and re-started (a sketch of the rebuild loop follows the list):

  • c1x02
  • c1rfm
  • c1x03
  • c1als
  • c1x01
  • c1scx
  • c1asx
  • c1x05
  • c1scy
  • c1tst
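A hedged sketch of that rebuild cycle using the rtcds wrapper (subcommand names as provided by the RTS; treat this as illustrative rather than the exact commands used):

  for m in c1x02 c1rfm c1x03 c1als c1x01 c1scx c1asx c1x05 c1scy c1tst; do
      rtcds build $m
      rtcds install $m
      rtcds restart $m
  done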

The re-compiled models now see the RFM cards (dmesg log from c1ioo):

[24052.203469] c1x03: Total of 4 I/O modules found and mapped
[24052.203471] c1x03: ***************************************************************************
[24052.203473] c1x03: 1 RFM cards found
[24052.203474] c1x03:     RFM 0 is a VMIC_5565 module with Node ID 180
[24052.203476] c1x03: address is 0xffffc90021000000
[24052.203478] c1x03: ***************************************************************************

This cleared up all RFM transmission error messages.

CDS upstream are working to make this RFM usage switchable in a reasonable way.

  6917   Thu Jul 5 10:49:38 2012   Jamie | Update | CDS | front-end/fb communication lost, likely again due to timing offsets

All the front-ends are showing 0x4000 status and have lost communication with the frame builder.  It looks like the timing skew is back again.  The fb is ahead of real time by one second, and strangely nodus is ahead of real time by something like 5 seconds!  I'm looking into it now.

  6918   Thu Jul 5 11:12:53 2012   Jenne | Update | CDS | front-end/fb communication lost, likely again due to timing offsets

Quote:

All the front-ends are showing 0x4000 status and have lost communication with the frame builder.  It looks like the timing skew is back again.  The fb is ahead of real time by one second, and strangely nodus is ahead of real time by something like 5 seconds!  I'm looking into it now.

 I was bad and didn't read the elog before touching things, so I did a daqd restart and an mxstream restart on all the front ends, but neither of those things helped.  Then I saw the elog saying that Jamie is working on figuring it out.

  6920   Thu Jul 5 12:27:05 2012   Jamie | Update | CDS | front-end/fb communication lost, likely again due to timing offsets

Quote:

All the front-ends are showing 0x4000 status and have lost communication with the frame builder.  It looks like the timing skew is back again.  The fb is ahead of real time by one second, and strangely nodus is ahead of real time by something like 5 seconds!  I'm looking into it now.

This was indeed another leap second timing issue.  I'm guessing nodus resync'd from whatever server is posting the wrong time, and that brought everything out of sync again.  It really looks like the caltech server is off.  When I manually sync from there, the time is off by a second; when I manually sync from the global pool, it is correct.

I went ahead and updated nodus's config (/etc/inet/ntp.conf) to point to the global pool (pool.ntp.org).  I then restarted the ntp daemon:

  nodus$ sudo /etc/init.d/xntpd stop
  nodus$ sudo /etc/init.d/xntpd start

That brought nodus's time in sync.

At that point all I had to do was resync the time on fb:

  fb$ sudo /etc/init.d/ntp-client restart

When I did that daqd died, but it immediately restarted and everything was in sync.
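As a hedged follow-up (not from the original entry), the standard quick checks for this kind of skew are:

  ntpq -p                    # show peers, offsets, and which server is selected
  ntpdate -q pool.ntp.org    # query-only: report the offset without setting the clock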

  10046   Mon Jun 16 21:56:53 2014   Jenne | Update | Computer Scripts / Programs | fstab updates, MC autolocker running, FSS slow loop alive

[Rana, Jenne]

MC autolocker:

The MC autolocker is running.  The trouble was with caput, caget, and cavput on Pianosa.  Rana has switched those lines in mcup and mcdown over to ezcaread and ezcawrite, and that seems to have fixed things.  For the MC2tickleON and -OFF scripts, we left the caput/caget/cavput calls, and saw that they run successfully on Ottavia.  (We tried testing again on Pianosa, and now it seems to be okay with cavput, but we promise it was hanging up on this earlier this evening.)  Anyhow, it's all a bit confusing, but it seems to be running fine now.

The autolocker is now running on Ottavia; Rana is putting it into Ottavia's crontab, and it has been commented out on op340m.

 

fstab changes:

We have removed the option "all_squash" from /etc/exports on Chiara (both lines).  We then checked that the files have ownership "controls controls" on Chiara, Pianosa, and Rossa.  Ottavia still shows ownership "nobody nogroup", so we still need to figure that out.
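A hedged illustration of the change (the export path and remaining options are assumptions; only the removal of all_squash comes from this entry).  Note that all_squash maps every client UID to the anonymous user, which is exactly the "nobody nogroup" symptom seen on Ottavia:

  # /etc/exports on chiara -- before:
  #   /home/cds *(rw,sync,all_squash)
  # after, so that client UIDs are preserved:
  /home/cds *(rw,sync)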

 

FSS slow loop:

We confirmed that the slow loop is running.  Also, since caput and caget take a while, and the real PID integral gain is the set value multiplied by the sampling rate, the effective gain had changed.  So Rana compensated by changing C1:PSL-FSS_SLOWKI from 0.03 to 0.1.
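A hedged back-of-the-envelope version of that scaling (the size of the loop slowdown is inferred, not measured in this entry): the effective gain is K_eff = K_set \times f_s, so a loop iterating roughly 3x slower loses a factor of ~3 in effective integral gain, and raising K_set from 0.03 to 0.1 (a factor of ~3.3) restores approximately the original K_eff.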

 

Other thoughts:

Do we have an autoburt saver on another computer, in addition to op340m?  (It's in the op340m crontab list)  We really only want one of these at a time.

 

  4049   Mon Dec 13 22:21:41 2010   kiwamu | Summary | SUS | funny output matrix of ETMX: solved!

I found that a few connections in the simulink model of c1scx were incorrect, so I fixed them.

It had been a mystery why we had to put a funny matrix on ETMX (see this entry).

But now we don't have to do such voodoo magic, because the problem has been solved.

Now the damping of ETMX is happily running with an ordinary output matrix.


 --(details)

 I looked at the wiring diagram of the ETMX suspension (it's on Ben's web page) and confirmed that the coils are arranged in order of UL, LL, UR, LR.

But then I realized that in our simulink model they had been arranged in order of UL, UR, LL, LR.

So UR and LL had been incorrectly swapped!

So I just disconnected and plugged them into the right outputs in the simulink model.

   I rebooted c1iscex in order to reactivate c1scx front end code.

After rebooting it, I changed the output matrix to the usual one, then everything looked okay.

(actually it had been okay before because the wrong connections and the funny matrix compensated for each other).

  2101   Fri Oct 16 03:16:50 2009   rana, rob | Summary | LSC | funny timing setup on the LSC

While measuring the Piezo Jena noise tonight we noticed that the LSC timing is setup strangely.

Instead of using the Fiber Optic Sander Liu Timing board, we are just using a long 4-pin LEMO cable which comes from somewhere in the cable tray.  This is apparent in the rack pictures (1X3) that Kiwamu recently posted in the Electronics Wiki.  I think all of our front ends are supposed to use the fiber card for this.  I will ask Jay and Alex what the deal is here - it seems to me that this could be a cause of timing noise on the LSC.

We should be able to diagnose timing noise between the OMC and the LSC by putting a signal in at the OMC and looking at the signal on the LSC side.  This should be a MATLAB script that we can run whenever we are suspicious.  It is an excellent task for a new visiting grad student learning to debug the digital control system.

  2104   Fri Oct 16 13:25:18 2009   Koji | Summary | LSC | funny timing setup on the LSC

Could be this.

http://ilog.ligo-la.caltech.edu/ilog/pub/ilog.cgi?group=detector&task=view&date_to_view=10/02/2009&anchor_to_scroll_to=2009:10:02:13:33:49-waldman

Quote:

We should be able to diagnose timing noise between the OMC and the LSC by putting a signal in at the OMC and looking at the signal on the LSC side.  This should be a MATLAB script that we can run whenever we are suspicious.  It is an excellent task for a new visiting grad student learning to debug the digital control system.

 

  13400   Tue Oct 24 20:14:21 2017   jamie | Summary | LSC | further testing of c1dnn integration; plugged in to DAQ

In order to try to isolate CPU6 for the c1dnn neural network reconstruction model, I set CPUAffinity in /etc/systemd/system.conf to "0" for the front end machines.  This sets the CPU affinity of the init process, so that init and all child processes run on CPU0.  Unfortunately, this does not affect the kernel threads.  So after reboot all user space processes were on CPU0, but the kernel threads were still spread around.  Will continue trying to isolate the kernel threads as well...
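A hedged sketch of the settings involved (the isolcpus note is an addition, not from this entry):

  # /etc/systemd/system.conf: pin init, and hence every forked service, to CPU0
  CPUAffinity=0

  # kernel threads ignore CPUAffinity; they are normally kept off a core with a
  # kernel boot parameter instead, e.g.
  #   isolcpus=6
  # which would leave CPU6 free for the c1dnn user-space model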

In any event, this amount of isolation was still good enough to get the c1dnn user space model running fairly stably.  It's been running for the last hour without issue.

I added the c1dnn channel and testpoint files to the daqd master file, and restarted daqd_dc on fb1, so now the c1dnn channels and test points are available through dataviewer etc.  We were then able to observe the reconstructed signals:

We'll need to set the phase rotation of the demodulated RF PD signals (REFL11, REFL55, AS55, POP22) to match them with what the NN expects...

  13401   Wed Oct 25 09:32:14 2017   Gabriele | Summary | LSC | further testing of c1dnn integration; plugged in to DAQ
Quote:

 

 

We'll need to set the phase rotation of the demodulated RF PD signals (REFL11, REFL55, AS55, POP22) to match them with what the NN expects...

Here are the demodulation phases and rotation matrices tuned for the network. For the matrices, I am assuming that the input is [I, Q] and the output is [I,Q].

POP22
phi = 153 degrees
[[-0.894, 0.447],
 [-0.447, -0.894]]

REFL11
phi = 93 degrees
[[-0.058, 0.998],
 [-0.998, -0.058]]

REFL55
phi = -90 degrees
[[0.000, -1.000],
 [1.000, 0.000]]

AS55
phi = 7 degrees
[[0.993, 0.122],
 [-0.122, 0.993]]
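These matrices are consistent (to within rounding) with the standard rotation form -- an observation added here, not part of the original entry:

R(\phi) = \begin{pmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{pmatrix}

For example, REFL55 with \phi = -90 degrees gives exactly [[0.000, -1.000], [1.000, 0.000]].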

  11762   Fri Nov 13 17:33:39 2015   gautam | Update | LSC | g-factor measurements
Quote:

ROC_ETMY = 59.3 +/- 0.1 m.

Summary:

I followed a slightly different fitting approach to Yutaro's in an attempt to determine the g-factor of the Y arm cavity (details below), from which I determined the FSR to be 3.932 +/- 0.005 MHz (which would mean the cavity length is 38.12 +/- 0.05 m) and the RoC of ETMY to be 60.5 +/- 0.2 m.  This is roughly consistent (within 2 error bars) with the ATF measurement of the RoC of ETMY quoted here.

Details:

I set up the problem as follows: we have a bunch of peaks that have been identified as TEM00, TEM10, etc., and from the fitting we have a central frequency for each Lorentzian.  The equation governing the spacing of the HOMs from the TEM00 peaks is:

\Delta f_{HOM_{mn}} = \frac{FSR}{\pi} (m+n) \cos^{-1}\left(\sqrt{g_1 g_2}\right)

The main differences in my approach are the following:

  1. I attempt to simultaneously find the optimal values of FSR, g1, and g2, by leaving all of these as free parameters and defining an objective function that is the norm of the difference between the observed and expected values of \Delta f_{HOM_{mn}} (code in Attachment #1; the objective is restated after this list).  I then use fminsearch in MATLAB to obtain the optimal set of parameters.
  2. I do not assume that the "unknown" peak alluded to in my previous elog is a TEM40 resonance - so I just use the TEM10, TEM20, and TEM30 peaks.  I did so because in my calculations, the separation of these peaks from the TEM00 modes is not consistent with (m+n) = 4 in the above equation.  As an aside, if I do impose that the "unknown" peak is a TEM40 peak, I get an RoC of 59.6 +/- 0.3 m.
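A hedged restatement of the objective in item 1 (notation mine):

\min_{FSR,\, g_1,\, g_2} \left\| \Delta f^{obs}_{HOM_{mn}} - \frac{FSR}{\pi} (m+n) \cos^{-1}\left(\sqrt{g_1 g_2}\right) \right\|

with the norm taken over all identified HOM peaks.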

Notes:

  1. The error in the optimal set of parameters is just the error in the central positions of the peaks, which is in turn due to (i) error in the calibration of the frequency axis and (ii) error in the fit to each peak.  The second of these is negligible: the errors in my fits are of order Hz, while the peaks themselves are of order MHz, so the fractional uncertainty is a few ppm and (i) dominates.
  2. I am not sure if leaving the FSR as a free parameter like this is the best idea (?) - the FSR and arm length I obtain are substantially different from those reported in elog 9804 - by almost 30 cm!  However, the RoC estimate does not change appreciably if I do the fitting in a 2-step process: first find the FSR by fitting a line to the 3 TEM00 peaks (I get FSR = 3.970 +/- 0.017 MHz), and then use this value in the above equation.  The fminsearch approach then gives me an RoC of 60.7 +/- 0.3 m.

 

 

  12646   Tue Nov 29 17:46:18 2016   rana, gautam | Frogs | Computer Scripts / Programs | gateway PWD change

We found that someone had violated all rules of computer security decency and was storing our nodus password in plain text in their bash_profile.

After the flogging we have changed the pwd and put the new one in the usual secret place.

  13316   Mon Sep 18 15:00:15 2017   rana, gautam | Frogs | Computer Scripts / Programs | gateway PWD change

We implemented the post-SURF-season nodus password change today.

New password can be found at the usual location.

  13420   Wed Nov 8 17:04:21 2017   gautam | Update | CDS | gds-2.17.15 [not] installed

I wanted to use the foton.py utility for my NB tool, and I remember Chris telling me that it ships as standard with the newer versions of gds.  It wasn't available in the gds versions installed on our workstations - the default version is 2.15.1.  So I downloaded gds-2.17.15 from http://software.ligo.org/lscsoft/source/, and installed it to /ligo/apps/linux-x86_64/gds-2.17.15/gds-2.17.15.  In it, there is a file at GUI/foton/foton.py.in - this is the one I needed.


Turns out this was more complicated than I expected.  Building the newer version of gds throws up a bunch of compilation errors.  Chris had pointed me to some pre-built binaries for ubuntu12 on the LLO CDS wiki, but those versions of gds do not have foton.py.  I am dropping this for now.

  6223   Wed Jan 25 17:32:03 2012   steve | Update | Green Locking | green pointing into y arm is misaligned

I placed another Y2-LW-1-2050-UV-45P/AR steering mirror into the green beam launch path, in order to avoid using the 45-degree mirror at ~30 degrees.  The job is not finished.

  6227   Thu Jan 26 10:17:01 2012   steve | Update | Green Locking | green pointing into y arm is realigned

Quote:

I placed another Y2-LW-1-2050-UV-45P/AR steering mirror into the green beam launch path, in order to avoid using the 45-degree mirror at ~30 degrees.  The job is not finished.

 The alignment was finished after realizing that the 3rd steering mirror had to be adjusted too.

The input power increased from 1.2 to 1.4 mW.

  15851   Mon Mar 1 11:40:15 2021   Anchal, Paco | Summary | IMC | getting familiar with IMC controls

[Paco, Anchal]

tl;dr: Done no harm, no lasting change.

Learn burtgooey

- Use /cvs/cds/caltech/target/c1psl/autoBurt.req as input to test snapshot "/users/anchal/BURTsnaps/controls_1210301_101310_0.snap" on rossa after not succeeding in donatella

- Browse /opt/rtcds/caltech/c1/burt/autoburt/snapshots/TODAY just to know where the snapshots are living. Will store our morning work specific snapshots in local user directories (e.g. /users/anchal/BURTsnaps)

Identifying video monitors

- Switched channels around on video controls; changed C1:VID-MON7 to 16, back to 30, then C1:VID-QUAD2_4 to 16, to 18, then 20, back to 16, to 14 (which identified as PMCT), to 1 (IMC). Anyways, looks like IMC is locked.

[Yehonathan, Paco, Anchal]

Unlocking MC

- From IOO/LockMC, MC_Servo, FSS --> closed the PSL shutter, reopened it, and saw the lock recover almost instantly.  Tried the MCRFL shutter: no effect.  Toggled the PSL shutter one more time; the lock recovered.

- From IOO/LockMC, MC_Servo, toggled OPTION (after IP2A); lost and recovered lock in similar fashion.  MCRFL gets most of the light.

- Looked at IFO_OVERVIEW just to get familiar with the various signals.

 

  15852   Mon Mar 1 12:36:38 2021   gautam | Summary | IMC | getting familiar with IMC controls

Pretty minor thing - but PMCT and PMCR were switched on Quad 2 for whatever reason. I switched them back because I prefer the way it was. I have saved snapshots of the preferred monitor config for locking but I guess I didn't freeze the arrangement of the individual quadrants within a quad. This would be more of a problem if the ITMs and ETMs are shuffled around or something like that.

Quote:
 

- Switched channels around on video controls; changed C1:VID-MON7 to 16, back to 30, then C1:VID-QUAD2_4 to 16, to 18, then 20, back to 16, to 14 (which identified as PMCT), to 1 (IMC). Anyways, looks like IMC is locked.

  3719   Thu Oct 14 13:15:14 2010   Leo Singer | Configuration | Computers | git installed on rossa

I installed git on rossa using:

$ sudo yum install git

  16503   Mon Dec 13 15:05:47 2021   Tega | Update | PEM | git repo for temp sensor and sus medm

[temperature sensor]

git repo: https://git.ligo.org/40m/tempsensor.git

todo

- Update the temp sensor channels to fit the CDS format, i.e. "C1:PEM-TEMP_EX", "C1:PEM-TEMP_EY", "C1:PEM-TEMP_BS"

- Use the FLOAT32_LE data format for the database file (/cvs/cds/caltech/target/c1pem1/tempsensor/C1PEMaux.db) to create the new channels (a record sketch follows this list).

- Keep the old database code and channels so we can compare with the new temp channels afterwards.  Also we need a 1-month overlap before deleting the old channels.
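A hedged sketch of what one of the new records in C1PEMaux.db might look like (standard EPICS database syntax; the field choices here are illustrative assumptions, not taken from the actual file):

  record(ai, "C1:PEM-TEMP_EX")
  {
      field(DESC, "X-end temperature")
      field(EGU,  "C")
      field(PREC, "2")
  }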

 

[sus medm screen]

git repo: https://git.ligo.org/40m/susmedmscreen.git

todo (from talk with Koji)

- Link stateword display to open "C1CDS_FE_STATUS.adl"

- The Damp filter and Lock filter buttons should open a 3x1 filter screen, so that the 6 filters are opened with 2 buttons, compared to the old screen's 3 buttons each connected to a 2x1 filter screen

- Make the LOCKIN signal modulation flow diagram look more like the old 40m screen, since that is a better layout

- Move load coefficient button to top of sus medm screen (beside stateword)

- The rectangular red outline around the oplev display is confusing and needs to be modified for clarity

- The COMM tag block should not be 3D, as this suggests it is a button.  Make it flat and change the tag name to indicate individual watchdog control, as this better reflects its functionality.  Rename the current watchdog switch to "watchdog master", as it does what the 5 COMM switches do at once.

- Macro passing needs to be better documented, so that when we call the SUS screens from locations other than sitemap we know what macro variables to pass in (DCU_ID etc.).

- Edit sitemap.adl to point only to the new screens. Then create a button on the new screen that points to the old screen. This way, we can still access the old screen without clogging sitemap.

- Move the new screen location to a subfolder of where the current sus screens reside, /opt/rtcds/userapps/trunk/sus/c1/medm/templates

- Rename the overview screen (SUS_CUST_HSSS_OVERVIEW.adl) to use the SUS_SINGLE nomenclature, i.e. SUS_SINGLE_OVERVIEW.adl

- Keep an eye on the CPU usage of c1pem as we add BLRMS blocks for other optics.

 

 

  16504   Tue Dec 14 11:33:29 2021   Tega | Update | PEM | git repo for temp sensor and sus medm

[Temperature sensor]

Added new temp EPICS channels to the database file (/cvs/cds/caltech/target/c1pem1/tempsensor/C1PEMaux.db)

Added new temp EPICS channels to the slow channels ini file (/opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini)

 

[SUS medm screen]

Moved the new SUS screens to: /opt/rtcds/userapps/trunk/sus/c1/medm/templates/NEW_SUS_SCREENS

Placed a button on the new screen linking to the old screen, and replaced the old screen's link on sitemap.

Fixed Load Coefficient button location issue

Fixed LOCKIN flow diagram issue

Fixed watchdog labelling issue

Linked STATE WORD block to FrontEnd STATUS screen

Replaced the 2x1 pit/yaw filter screens for the LOCK and DAMP filters with a 3x1 LPY filter screen

*Need some more time to figure out the OPTLEV red indicator

  6231   Fri Jan 27 06:07:47 2012   kiwamu | Update | LSC | glitch hunting

I went through various IFO configurations to see if there are glitches or not.

Here is a summary table of tonight's glitch investigation.  Cells not yet checked in the table are left blank.

IFO configuration | Yarm | Xarm | MICH | Half PRMI | low finesse PRMI    | PRMI (carrier) | PRMI (sideband) | DRMI
AS55              | NO   | NO   | NO   |           | up conversion noise | glitch         | glitch          | glitch
REFL11            | NO   | NO   | NO   |           | up conversion noise | glitch         | glitch          | glitch
REFL33            | NO   | NO   | NO   |           | -                   | glitch         | glitch          | glitch
REFL55            | NO   | NO   | NO   |           | up conversion noise | glitch         | glitch          | glitch
REFL165           | NO   | NO   | NO   |           | -                   | glitch         | glitch          | glitch
POX11             | -    | NO   | NO   |           |                     | glitch         | glitch          | glitch
POY11             | NO   | -    | NO   |           |                     | glitch         | glitch          | glitch
POP55             | -    | -    |      |           |                     |                |                 |

 

Low finesse PRMI

The low finesse PRMI configuration is a power-recycled Michelson with an intentional offset in MICH, to let some of the cavity power go through MICH to the dark port.

To lock this configuration I used ASDC plus an offset for MICH and REFL33 for PRCL.

The MICH offset was chosen so that the ASDC power becomes half of the maximum.

In this configuration NO glitches (i.e. high speed signals with an amplitude of more than 4 or 5 sigma) were found when it was locked.

Is it because I didn't use AS55 ?? or because the finesse is low ??

Also, as we already knew, the up conversion noise (#6212) showed up -- the level of the high frequency noise is sensitive to the 3 Hz motion.

  6202   Tue Jan 17 01:02:07 2012   kiwamu | Update | LSC | glitch hunting in REFL RFPDs : strange

A very strange thing is going on.

The REFL11 and REFL55 demod signals show high frequency noise depending on how big signals go to the POS actuator of PRM.

This noise shows up even when the beam is single-bounced back from PRM ( the rest of the suspensions are misaligned) and it's very repeatable.

Any ideas??  Am I crazy??  Is PRM making some fringes with some other optics??

 

 

(background)

 The most annoying thing in the central part locking is glitches showing up in the LSC error signals (#6183).
The symptom is that when the motion in PRCL at 3 Hz becomes louder, somehow we get glitches in both the MICH and PRCL error signals.
In the frequency domain, those glitches mostly contribute to a frequency band of about 30 - 100 Hz.
Last Thursday Koji and I locked the half PRM (PRMI with either ITMX or ITMY misaligned) to see if we still have the glitches in this simpler configuration.
Indeed there were the same kind of glitches --a loud 3 Hz motion triggers the glitches.
It showed up particularly in the REFL11 signal, but not so much in REFL33, while AS55 didn't show any glitches.
 
 
(Still glitches even in the single bounce beam)
   We were suspecting some kind of coupling from beam jitter, such that the 3 Hz motion somehow brings the beam spot to a bad place somewhere in the REFL paths.
I misaligned all the suspensions except for PRM, such that the beam directly bounces back from PRM and goes to the REFL port.
Indeed there still were glitches in the REFL11 and REFL55 demod signals.  They showed up once per 30 sec or so and pushed up the noise floor around 30 - 100 Hz.
There might be a little bit of glitches also in the REFL33, but the ADC noise floor and the expected glitch noise level were comparable and hence it was difficult to see the glitches in REFL33.
 

(Glitch is related to the PRM POS actuation)

 In the single-bounce configuration I started shaking the PIT and YAW motions of PRM at 3 Hz using the realtime LOCKIN oscillator, to see if I could reproduce the glitches.
However no significant glitches were found in this test.
Then I started shaking POS instead of the angular DOFs, and found that this causes the glitches.
At this point it didn't look like a glitch any more; it became more like stationary noise.
 
 The attached screenshot is the noise spectrum of REFL11_I.
The red curve was taken while I injected the 3 Hz excitation in POS with the LOCKIN oscillator.
The excitation is at 3 Hz with an amplitude of 1000 counts.
As a comparison, the same spectrum with no excitation injected is plotted in pink.
 

 PRMsingle_bounce.png

It seems there is a cut-off frequency at 100 Hz.
This frequency depends on the amplitude of the excitation -- increasing the amplitude moves the cut-off frequency higher.
This noise spectrum didn't change with or without the oplevs and local damping.
 

(Possible scenario)

A possible reason that I can think of right now is : PRM is interfering with some other optics for some reason.
But if it's true, why didn't I see any fringes in the AS demod signals in the half PRM configuration?
 

Quote from #6183

 We tried to figure out what is causing spikes in the REFL33 signal, which is used to lock PRCL.

No useful information was obtained tonight and it is still under investigation.

  6208   Tue Jan 17 19:07:47 2012   rana | Update | LSC | glitch hunting in REFL RFPDs : strange

 Another possibility is that there is some beam clipping of the REFL beam before it gets to the PD. Then there could be a partial reflection from that creating a spurious interference. Then it would only show the fringe wrapping if you excite the scatterer or the PRM.

  6224   Thu Jan 26 05:40:10 2012   kiwamu | Update | LSC | glitch in the analog demodulated signals

Indeed the glitches show up in the analog demodulated signals. So it is not an issue of the digital processing.

With an oscilloscope I looked at the I/Q monitor outputs of the LSC demodulators, including REFL11, REFL33, REFL55, POY11, and AS55, while keeping the carrier-resonant PRMI locked.

I saw some glitches in REFL11, REFL55, and AS55.  But I didn't see any obvious glitches in REFL33 and POY11, because the SNR of those signals wasn't good enough.

 


(some example glitches)

The attached plot below is an example shot of the actual signals when the carrier resonant PRMI was locked.

The upper row is the spectrogram of REFL11_I, REFL55_I, REFL33_I, and AS55_Q on a linear-linear scale.

The second row shows the actual time series of those data in units of counts.

The bottom row is for some DC signals, including REFLDC, ASDC and POYDC.

 

glitch.png

You can see that there are so many glitches in the actual time series of the demod signals (actually I picked up the worst time chunk).

It seems that  most of the glitches in REFL11, REFL33 and AS55  coincide.

The typical time scale of the glitches was about 20 msec or so.

Note that the PRMI was locked by REFL33 and AS55 as usual.

  6284   Thu Feb 16 03:47:16 2012   kiwamu | Update | LSC | glitch table

I updated the table which I posted some time ago (#6231). The latest table is shown below.

It seems that the glitches show up only when multiple DOFs are locked.

The interesting thing is that when the low finesse PRMI is locked with a big MICH offset (corresponding to a very low finesse), it doesn't show the glitches.

Qualitatively speaking, the glitch rate becomes higher as the finesse increases.

I will try SRMI tomorrow, as it is the last configuration I haven't checked for the presence of glitches.

 

 

Signals used for each configuration:
  Yarm: POY11 --> ETMY
  Xarm: POX11 --> ETMX
  MICH: AS55 --> BS, or AS55 --> ITMs
  Half PRMI: REFL11 --> PRM, or REFL33 --> PRM
  low finesse PRMI: ASDC --> ITMs, REFL33 --> PRM
  PRMI (carrier): AS55 --> ITMs, REFL33 --> PRM
  PRMI (sideband): AS55 --> ITMs, REFL33 --> PRM

        | Yarm | Xarm | MICH | Half PRMI | low finesse PRMI            | PRMI (carrier) | PRMI (sideband) | DRMI
AS55    | NO   | NO   | NO   | NO        | glitch (depends on finesse) | glitch         | glitch          | glitch
REFL11  | NO   | NO   | NO   | NO        | glitch (depends on finesse) | glitch         | glitch          | glitch
REFL33  | NO   | NO   | NO   | NO        | -                           | glitch         | glitch          | glitch
REFL55  | NO   | NO   | NO   | NO        | glitch (depends on finesse) | glitch         | glitch          | glitch
REFL165 | NO   | NO   | NO   | -         | -                           | -              | -               | -
POX11   | -    | NO   | NO   | NO        | -                           | glitch         | glitch          | glitch
POY11   | NO   | -    | NO   | NO        | -                           | glitch         | glitch          | glitch
POP55   | -    | -    | -    | -         | -                           | -              | -               | -

 

 

  9445   Thu Dec 5 16:20:26 2013   Steve | Update | CDS | glitches are gone

Quote:

c1scy had frequent time-overs, which caused glitches in the OSEM damping servos.

Today Eric Q was annoyed by the glitches while he worked on the green PDH inspection at the Y-end.

In order to mitigate this issue, low priority RFM channels were moved from c1scy to c1tst.
The moved channels (see Attachment 1) are supposed to be less susceptible to the additional delay.

This modification required the following models to be modified, recompiled, reinstalled, and restarted
in the listed order:
c1als, c1sus, c1rfm, c1tst, c1scy

Now the models are running.  CDS status is all green.
The time consumption of c1scy is now ~30us (previous: ~60us)
(see Attachment 2)

I am looking at the cavity lock of TEM00 and I have witnessed no glitches any more.
In fact, the OSEM signals have no glitch. (see Attachment 3)

We still have c1mcs regularly timing over.  Can I remove the WFS->OAF connections temporarily?

 Koji cleaned up very nicely.

  6321   Sat Feb 25 14:27:26 2012   kiwamu | Update | LSC | glitches in the RFPD outputs

Last night I took a closer look at the LSC analog signals to find which components are making the glitches.

I monitored the RFPD output signals and the demodulated signals at the same time with an oscilloscope when the PRMI was kept locked.

Indeed the RFPD outputs have some corresponding fast signals, although I only looked at the REFL11 I and Q signals.

(REFL33 didn't have sufficiently a high SNR to see the glitches with the oscilloscope.)

I will check the rest of the channels.

  12674   Thu Dec 8 10:13:43 2016   Steve | Update | LSC | glitching ITMY_UL has a history

 

 

  12654   Thu Dec 1 08:02:57 2016   Steve | Update | LSC | glitching ITMY_UL_LL

 

 

  1427   Wed Mar 25 09:55:45 2009   steve | Update | IOO | glitching sensors of MC

SUS-MC1_SENSOR_SIDE and SUS-MC2_SENSOR_UL are glitching

Yesterday's 4.8-magnitude earthquake at the Salton Sea is shown on Channel 1.

  11863   Tue Dec 8 15:40:48 2015   Steve | Update | VAC | glitchy RGA scan at day 434

The noise floor of the RGA scan is glitching less today.

 

  16582   Thu Jan 13 16:08:00 2022   Yehonathan | Update | BHD | gluing magnets after AS1/4 misfortune

{Yehonathan, Anchal, Paco}

In the cleanroom, we removed AS1 and AS4 from their SOS towers.  We removed the mirrors from the adapters and put them in their boxes.  The broken magnets were collected from the towers and their surfaces were cleaned, as were the magnet sockets on the two adapters and on the side block from which the magnets were knocked off.

We prepared our last batch of glue (more glue was ordered three days ago) and glued 2 side magnets and 2 face magnets.  We also took the chance to apply glue to the counterweights on the thick-optic adapters, so there is no need to look for alternatives for now.

PEEK screws and nuts were assembled on the thick-optic SOS towers in place of the metal screws and nuts that had been used as upper back EQ stops.

  10750   Wed Dec 3 08:36:49 2014   Steve | Update | LSC | good IFO status
  5911   Wed Nov 16 12:21:33 2011   Koji | Update | elog | googlebot (Re: restarted)

- elogd itself is a sort of web server in which we have no freedom to place our own files, so we cannot put a robots.txt there.

- If we serve the elog through a proxy in the usual Apache web tree, we can put robots.txt at the root.

- For now, if we prevent Google from browsing "page0", we will be saved for a while.

- It seems that indexing is refused by the following meta tags.  But that does not prohibit googlebot from using the "page0" URL, of course.

<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
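For reference, a hedged sketch of the robots.txt that the Apache-proxy approach could serve (the path is an assumption):

  User-agent: *
  Disallow: /elog/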
  4855   Wed Jun 22 15:24:10 2011   kiwamu | Update | ABSL | got a laser controller for LightWave

Peter King came over to the 40m with a laser controller and gave it to us.

We will test it out with the LightWave NPRO, which was used for MOPA.

  3805   Thu Oct 28 03:39:58 2010   yuta | Update | IOO | got very stable MC locking

(Rana, Suresh, Jenne, Kiwamu, Kevin, Yuta)

Summary:
  Last night we locked MC by feeding back the signal to NPRO PZT.
  But it was not so stable.
  We wanted more gain to suppress the seismic motion, but we don't need high gain at high frequencies (>~1 kHz), because that may cause something bad (NPRO PZT oscillating, PMC not able to keep up with the NPRO frequency change, etc.).
  So we put in a DC gain boost today.
  It successfully made MC locking stable!

What we did:
1. Lowered the main laser temperature from 32.2° C to 31.8° C.
  When we increased the laser temperature, the PMC transmission got lower.  32.2° C was on the cliff, so we moved it to the plateau region.

2. Lowered the gain of PMC servo (2dB instead of 8dB last night), because PMC was oscillating.
  We got 5.3V OMC transmission.

3. Made a 3Hz pole, 30Hz zero filter and put it in the NPRO PZT servo loop.
  0dB at DC, -20dB at high frequency (see Kevin's elog #3802; a transfer-function sketch follows this list)

4. Put more gain into the NPRO PZT servo loop to compensate for the -20dB at high frequency.
  In other words, we put in a DC gain boost.
  Attachment #1 is the MEDM screen screenshot for MC servo.

5. Aligned the beam onto the QPD at MC2 trans.
  We put a lens in front of the QPD.
  Now we can see the actual motion of the beam, and resonance peaks (Attachment #2; not locked at the highest).
    (We added a 30Hz LPF after each of the 4 quadrant inputs to reduce noise)
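A hedged reconstruction of the filter in step 3, inferred from the stated pole/zero and gains (not written out in the original entry):

H(s) = \frac{1 + s/(2\pi \cdot 30\,\mathrm{Hz})}{1 + s/(2\pi \cdot 3\,\mathrm{Hz})}

This is unity (0dB) at DC and flattens out at 3/30 = 0.1 (-20dB) well above the zero, matching the description above.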

Plan:
 - optimize MC suspension alignments
 - activate OL DAQ channels
 - reduce RFAM
 - install tri-mod box
 - QPD signal at MC2 should be higher (currently ~7 counts, which equals ~4mV)
 - change temperature of X-end laser to get green beat

  115   Mon Nov 19 14:32:10 2007   steve | Bureaucracy | SAFETY | grad student safety training
John Miller and Alberto Stochino have received the 40m safety bible.
They still have to read the laser operation manual and sign off on it.
  12099   Fri Apr 29 00:55:46 2016   gautam | Update | endtable upgrade | green PDH locked to Xarm

Using the modulation frequency suggested here, I hooked up the PDH setup at the X-end and succeeded in locking the green beam to the X arm.  I then rotated the HWP after the green Faraday to maximize the TRX output, which after a cursory alignment optimization is ~0.2 (I believe we were used to seeing ~0.3 before the end laser went wonky).  Obviously much optimization/characterization remains to be done.  But for tonight, I am closing the PSL and EX laser shutters and applying first contact to the window once more, courtesy of more PEEK from Koji's lab in W Bridge.  Once this is taken care of, I can install the oplev tomorrow, and then set about optimizing various things in a systematic way.  The MC autolocker has also been disabled...

Side note: for the IR Transmon QPD, we'd like a post that is ~0.75" taller given the difference in beam height from the arm cavity and on the endtable. I will put together a drawing for Steve tomorrow..

  3656   Tue Oct 5 21:22:46 2010   yuta | Update | VAC | green beam reached PSL table

YES ! We got the green light coming out from the OMC chamber to the PSL table !

 (Koji, Kiwamu, Yuta)

Background

As a result of the work in the chambers, the green beam reached the OMC chamber yesterday.
Today's mission was to bring the green beam out of the chambers onto the PSL table.

What we did:
1. Installed the last three steering mirrors.
  The mirrors were 532nm HR mirrors with AR coating and a 1 deg wedge (CVI Y2-LW-1-2050-UV-45-P/AR).
  Two of them were placed in the MC chamber.  The other one went into the OMC chamber.

While putting the mirrors into the MC chamber, we found that a cable tower was sitting exactly where we wanted to put one of the mirrors.

So we moved the tower to the very south west corner. 

 

2. Installed a periscope to the MC chamber

The function of this periscope is to lower the beam height of the green light, which was raised by another periscope in the BS chamber.

We aligned it to the green beam, so that the beam hits the center of the mirrors on the periscope.  

 

3. Aligned the optics

We aligned the green mirrors so that the green beam goes out of the chamber.

Actually, the inside of the OMC chamber didn't look the same as our optical layout.

For example there is an unknown base plate, which apparently disturbs the location of our last steering mirror.

Therefore we had to change the designed position of the steering mirror.

Now the mirror is sitting near the designed position (~ 1/2 inch off), but it's fine because it doesn't clip any 1064 beam.

 
Result:

The green beam is now hitting the north wall of the PSL table.

 

Notes:

The green beam looks like it has some fringes; they may be caused by multiple reflections from a TT as the green beam goes through it.  We are going to check it.

 

Next work:

- Damping of the suspended optics

- Resurrect MC and its stable lock

- Remove MCT pickoff path

- Align optics in the main path

- Recycled Michelson lock

 IMG_3627.jpg

 

  9665   Mon Feb 24 17:21:42 2014   Steve | Update | Green Locking | green fiber status today

Quote:

Alex, Gautam and Steve,

A 50m long single mode fiber is laid out in the cable tray that is attached to the beam tube of the Y arm.

It goes from ETMY to the PSL enclosure.  It is protected at both ends with "clear PVC, slit corrugated loom tubing", 1.5" ID.

The fiber is not protected between 1Y1 and 1Y4

 The X-arm fiber is in the high cable tray and it has coupler mounts.

 The Y-arm fiber is in the low cable tray and it has no coupler mounts.

 The fibers are only protected where they enter and exit the trays.

 We have only 68 ft of spare 1.5" ID protective plastic tubing.

  9419   Thu Nov 21 09:56:15 2013   Steve | Update | SUS | green glass beam dumps

 Green welding glass is used in these Koji-designed dumps (D1102375).

We have 10 pieces of hexagonal dumps for a 5.5" high beam.  They require 1 5/8" of space.  Atm1

Atm2, Large V traps are 3" tall only, 5 pieces

Atm3, Diamond shapes come with 2" and 1" square green glass (after Koji's correction I removed the glass that was not needed) D1102445 and D1102442

 

Baked green glass pieces in stock: 30 pieces of 2" x 2", 30 pieces of 1" x 1".  David 4-17-2014

Baked diamond holders in stock: 10 pieces of 2" and 10 pieces of 1".  David 4-17-2014

PEEK shims 2" and  1"

Baked green glass pieces blank:  4 pieces of 7" x 9"

Baked green glass pieces with 40 mm hole on 7" x 9" for SUS tower:  7 pieces.

NOTE: in December 2012 we talked about needing a 50 mm aperture.  What diameter is the right one today?  51 mm aperture plates were cut 4-10-2014.

  355   Tue Mar 4 10:08:21 2008   rob | Update | Computers | green lights unreliable when c0daqctrl down

So far I've tried powering off the framebuilder, power-cycling the RAID (it was showing an error message about bad IDE channel #4), and rebooting the LSC (just for fun). When I reset the LSC, its green light on the RFM_NETWORK screen did not turn red, making all these lights suspect. The iscepics40m process is what controls these red/green lights, so maybe it's gone wonky. It appears to be running however, on c1dcuepics, and it also seems to be functioning correctly in other ways (it's communicating correctly with the LSC).

Update: Alex and Jay came by. The solution was to reset the c0daqctrl processor, which apparently was not done in Rana's rebooting spree. Or maybe it needed to be done last.