  10654   Thu Oct 30 02:54:38 2014   diego | Update | LSC | IR Resonance Script Status

[Diego, Jenne]

The script is moving forward and we feel we are close; however, we still have a couple of issues:

1) some Python misbehaviour between the system environment and the Anaconda one; for now we call shell commands from within the Python script in order to avoid using the ezca library, which is the component that complains (a sketch of this workaround follows the list);

2) the fine scan is not yet robust and needs more investigation; the main suspects are the wavelet parameters given to the peak-finding algorithm, and the Offset and Ramp parameters used to perform the scan.
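For reference, a minimal sketch of the workaround in 1), assuming the EPICS command-line tools caget/caput are in the PATH; the channel names here are illustrative, not necessarily the ones the script uses:

    import subprocess

    def caget(channel):
        # "caget -t" prints only the value, so the output parses as a float
        return float(subprocess.check_output(["caget", "-t", channel]).decode())

    def caput(channel, value):
        subprocess.check_call(["caput", channel, str(value)])

    # illustrative usage: read the arm transmission, then zero an ALS offset
    trx = caget("C1:LSC-TRX_OUT16")
    caput("C1:ALS-OFFSETTER1_OFFSET", 0.0)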

Here is an example of a best case scenario, with 20s ramp and 500 points:

 

Attachment 1: AllPython_findIRresonance_WL_X_ramp_20_500_2.png
Attachment 2: AllPython_findIRresonance_WL_Y_ramp_20_500_2.png
Attachment 3: AllPython_findIRresonance_WL_ramp_20_500_2.png
  10674   Thu Nov 6 01:48:30 2014   diego | Update | LSC | IR Resonance Script Status

Tonight I tried some more tests on the script; it seems to work better, with both performance and robustness improved, although the X arm behaved badly almost all the time. I did not perform all the tests I wanted because the ALS lock was pretty unstable tonight (not only because of the X arm), with more than a few lock losses; after the last lock loss, however, I couldn't restore the X arm. I'll do some more tests as soon as I can recover it, or post the results of the first batch of tests.

In addition, I encountered the following error multiple times, but I have no idea what it could be:

Thu Nov 06 02:00:13 PST 2014
medmCAExceptionHandlerCb: Channel Access Exception:
Channel Name: Unavailable
Native Type: Unavailable
Native Count: 0
Access: Unavailable
IOC: Unavailable
Message: Virtual circuit disconnect
Context: fb.martian.113.168.192.in-addr.arpa:5064
Requested Type: TYPENOTCONN
Requested Count: 0
Source File: ../cac.cpp
Line number: 1214
 

  10676   Thu Nov 6 03:29:00 2014   diego | Update | LSC | IR Resonance Script Status

EDIT on X arm: I found settings in C1SUS_ITMX different from ETMX, ITMY and ETMY (namely, LSC/DAMP is OFF and LSC/BIAS is ON); I don't know whether this is intentional or whether ITMX was not recovered properly after the lock loss, so I didn't change anything, but it may be worth looking into.

 

Still no luck in recovering the X arm, so I am giving up for tonight; honestly, I didn't try many things, as I don't know the system well and didn't want to mess things up.

 

Preliminary results so far:

I confirm that the best settings for the ramp of the ALS scan are 20 s and 500 points; however, this makes the script fairly slow (80 s for the scan/data collection, 7 s for the coarse peak finding, 17 s for the fine peak finding, ~2 min total); in the best cases the TR*_OUT obtained is around 0.90, as shown in the first plot (taken early in the evening; all the following plots are in chronological order, if that can help in finding the reason for the X arm misbehaviour...):

AllPython_findIRresonance_WL_ramp_20_500_0.png

 

However, after a few minutes the TR*_OUT somehow went down a bit, without any intervention; the instability of the X arm is also visible:

AllPython_findIRresonance_WL_ramp_20_500_0_1.png

 

Even when the X arm was somewhat stable, its performance and robustness were (far) worse than those of the Y arm:

AllPython_findIRresonance_WL_ramp_20_500_6.png

The following plot shows (for the Y arm only) that there is still some margin, as the maximum value of TRY_OUT is not fully retained at the end of the procedure:

AllPython_findIRresonance_WL_ramp_20_500_7_Y_rise.png

 

Finally, the last plot I managed to obtain before the X arm went completely crazy...

AllPython_findIRresonance_WL_ramp_20_500_9.png

 

The next step, after obviously figuring out the X arm situation, is to try some averaging during the fine scan; I don't know if this will improve the situation, but it shouldn't impact the execution time. Tomorrow I'll post something more detailed on the script itself and the wavelet implementation.

  10680   Thu Nov 6 12:53:09 2014   diego | Update | ASC | X arm restored

[Diego, Koji]

X arm has been restored, after modifying the two parameters mentioned in http://nodus.ligo.caltech.edu:8080/40m/10676 (C1SUS_ITMX:  LSC/DAMP and LSC/BIAS); after that, a manual re-alignment of ETMX was necessary due to heavy PIT misalignment. I will check the ALS lock once work on the Y arm is done.

  10687   Fri Nov 7 17:44:10 2014   diego | Update | LSC | IR Resonance Script Status

Yesterday I did some more tests with a modified script. The main difference concerns the wavelet: scipy's default wavelet implementation is quite rigid, and it allows only very few choices. The main issue is that our signal is a real, always-positive, symmetric signal, while wavelets are defined as zero-integral functions and can be real or complex, depending on the wavelet. I found a different wavelet implementation and combined it with some modified code from the scipy source, in order to be able to select different wavelets. The result is the wavelet_custom.py module, which lives in the same ALS script directory and is called by the script; both the script and the module contain the references I used while writing them. It is now possible to select almost any wavelet included in this custom module; "almost" because the scipy code that calls the find_peaks_cwt routine is picky about the input parameters of the wavelet function; I may dig into that later.

For the last tests, instead of a Ricker wavelet (aka Mexican hat, or Derivative of Gaussian of order 2), I used a DOG(6), as it also has two lesser positive lobes, which can help in finding the resonance; the presence of negative lobes is, as I said, unavoidable. I attach an example of the possible wavelet forms; in my opinion, excluding the asymmetric and/or complex ones, the DOG(6) seems the best choice, and it has provided slightly better results. There are other wavelets around, but they are not included in the module, so I would have to implement them myself; I will first see whether they fit our case before writing them into the module.

However, the problem of not finding the perfect working point (the "overshoot-like" plot in my previous elog) is not completely solved. Eric had a good idea about that: during the fine scan, the PO*11_ERR_DQ signals should be in their linear range, so I could also use them and check their zero crossing to find the optimal working point. I will be working on that.
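To illustrate the mechanism (this is not the actual wavelet_custom.py code), this is roughly how a custom DOG(6)-like wavelet can be handed to scipy's find_peaks_cwt: the callable must take the number of points and the width parameter and return an array; the sign here is chosen so that the central lobe is positive, and the overall normalization is left out:

    import numpy as np
    from scipy.signal import find_peaks_cwt

    def dog6(points, a):
        # 6th derivative of a Gaussian (up to sign/normalization), written via
        # the probabilists' Hermite polynomial He6; central lobe made positive
        x = (np.arange(points) - (points - 1) / 2.0) / a
        he6 = x**6 - 15 * x**4 + 45 * x**2 - 15
        return -he6 * np.exp(-x**2 / 2.0)

    # toy resonance standing in for TRX/TRY during the fine scan
    t = np.linspace(-1.0, 1.0, 500)
    signal = 1.0 / (1.0 + (t / 0.02) ** 2) + 0.05 * np.random.randn(t.size)

    # widths is the range of expected peak widths, in samples
    peaks = find_peaks_cwt(signal, np.arange(5, 40), wavelet=dog6)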

Attachment 1: wavelets.nb.zip
  10697   Tue Nov 11 19:46:35 2014   diego | Update | Computer Scripts / Programs | Status of the new nodus

The new nodus machine is being brought to life; until installation is finished and everything is fine, the old nodus will be unharmed. For future reference:

New Nodus hardware:

Case: SuperMicro SC825MTQ-R700U

M/B: SuperMicro X8DTU-F

CPU: 2x Intel Xeon X5650

RAM: 3x Kingston KVR1333D3S8R9S (2GB)
     3x Samsung M393B5673EH1-CH9 (2GB)
     Total: 12 GB

HDD: Seagate ST3400832AS (400GB)

 

Current software situation and current issues:

1) Ubuntu Server 12.04.5 is installed and updated

2) The usual 'controls' user is present, with UID=1001 and GID=1001

3) Packages installed: nfs-common, rpcbind, ntp, dokuwiki, apache2, php5, openssh-server, elog-2.8.0-2 [from source], make, gcc, libssl-dev [dependencies for elog], subversion

4) Network: interface eth0 is set up (static IP and configuration in /etc/network/interfaces); eth1 is recognized and added, but not configured yet

5) DNS: configuration is in /etc/resolvconf/resolv.conf.d/base (since /etc/resolv.conf is overwritten by the resolvconf program using the 'base' database)

6) ntp is installed and (presumably) configured, but ntpd misbehaves: all the servers are found, but a

tail /var/log/syslog

shows that no actual synchronization is performed, and the daemon keeps logging

Listening on routing socket on fd #22 for interface updates

7) dokuwiki, apache2, php, subversion and elog are installed but not configured yet (I need info about their current state, configuration and whereabouts)

8) I copied and merged the old nodus' .bashrc and .cshrc into the new nodus' .bashrc; I need to know if anything has to be added

9) backup frames, backup user dirs and 40m public_html are not set up yet, as in #7

 

Is there something missing?

If there is something missing from here (LIGO/CDS software, smartmontools/hddtemp and similar, or anything else), tell me and I'll set it up.

  10732   Fri Nov 21 18:23:01 2014   diego | Update | SUS | Anti-Jitter Telescope for OpLevs

EDIT: some images look bad on the elog, and the notebook is parsed, which is bad. Almost everything posted here is in the compressed file attachment.

 

As we've been discussing, we want to reduce the laser's jitter effect on the QPDs of the OpLevs, without losing sensitivity to angular motion of the mirror; the current setup is roughly described in this picture:

1.pdf

 

 The idea is to place an additional lens (or lenses) between the mirror and the QPD, as shown in the proposed setup in this picture:

2.pdf

 

I did some ray tracing calculations to find out how the system would change with the addition of the lens. The step-by-step calculations are done at several points shown in the pictures, but here I will just summarize. I chose to put the telescope at a variable relative distance x from the QPD, such that x=0 at the QPD and x=1 at the mirror.

 

Here are the components that I used in the calculations:

 

Propagator

propagator.png

 

Tilted Mirror

tilted_flat_mirror.png

 

Telescope

telescope.png

 

I used a 3x3 matrix formalism in order to simplify the calculations and reduce everything to matrix multiplications; that is because the tilted mirror has an annoying additive term, which I could get rid of this way:

2x2_3x3.png

 

Therefore, in the results the third row is a dummy row and has no meaning.
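To make the trick explicit with the simplest element: in the usual (r, \theta) ray formalism, a flat mirror tilted by \alpha adds a constant 2\alpha to the ray angle, which no 2x2 ABCD matrix can represent; the augmented 3x3 form absorbs it (sign conventions may differ from the notebook):

\begin{pmatrix} r' \\ \theta' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 2\alpha \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r \\ \theta \\ 1 \end{pmatrix} \hspace{.5cm}\Leftrightarrow\hspace{.5cm} r' = r \, , \quad \theta' = \theta + 2\alpha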

 

For the first case (first schematic), we have, for the final r and Theta seen at the QPD:

result_old.png

 

 

In the second case, we have a quite heavy output, which also depends on x and f:

 result_new.png

 

Now, some plots to help understand the situation.

What we want is to reduce the angular effect on the laser displacement, without sacrificing the sensitivity to the mirror signal. I defined two quantities:

beta.png

gamma.png

Beta is the laser jitter we want to reduce, while Gamma is the mirror signal we don't want to lose. I plotted both of them as a function of the position x of the new lens, for a range of focal lengths f. I used d1 = d2 = 2m, which should be a realistic value for the 40m's OpLevs.
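For anyone who wants to reproduce the curves, here is a minimal numpy sketch of the same calculation under the assumptions above (3x3 augmented ABCD matrices, thin lens at relative position x from the QPD); Beta and Gamma are extracted here as the Theta- and Alpha-coefficients of the final position at the QPD by finite differences, and the conventions are illustrative rather than a copy of the notebook:

    import numpy as np

    def prop(d):        # free-space propagation over distance d
        return np.array([[1.0, d, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

    def mirror(alpha):  # flat mirror tilted by alpha: theta -> theta + 2*alpha
        return np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 2.0 * alpha], [0.0, 0.0, 1.0]])

    def lens(f):        # thin lens of focal length f
        return np.array([[1.0, 0.0, 0.0], [-1.0 / f, 1.0, 0.0], [0.0, 0.0, 1.0]])

    def r_qpd(theta, alpha, x, f, d1=2.0, d2=2.0):
        # laser -> mirror -> lens at x*d2 from the QPD -> QPD
        M = prop(x * d2) @ lens(f) @ prop((1.0 - x) * d2) @ mirror(alpha) @ prop(d1)
        return (M @ np.array([0.0, theta, 1.0]))[0]

    eps, x, f = 1e-9, 0.5, 1.0
    beta = (r_qpd(eps, 0.0, x, f) - r_qpd(0.0, 0.0, x, f)) / eps   # jitter coupling
    gamma = (r_qpd(0.0, eps, x, f) - r_qpd(0.0, 0.0, x, f)) / eps  # mirror signal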

 

Plot of Beta

20141121_Plot_Real_Beta_f.pdf

 

Plot of Gamma

20141121_Plot_Real_Gamma_f.pdf

 

Even if it is a bit cluttered, it is useful to see both on the same plot:

 

Plot of Beta & Gamma

20141121_Plot_Real_BetaGamma_f.pdf

 

 

 Apart from any horrific mistakes I may have made in my calculations, it seems that for converging lenses our signal Gamma is always reduced more than the jitter we want to suppress. For diverging lenses the opposite happens, but we would have to put the lens very near the mirror, which is somehow not what I would expect. Negative values of Beta and Gamma should mean that the final values at the QPD level are on the opposite side of the axis/center of symmetry of the QPD with respect to their initial position.

 

I will stare at the plots and calculations a bit more, and try to figure out if I missed something obvious. The Mathematica notebook is attached.

Attachment 14: 141121_antijitter_telescope.tar.bz2
  10733   Mon Nov 24 20:24:29 2014   diego | Update | SUS | Anti-Jitter Telescope for OpLevs

I stared a bit longer at the plots and, thanks to Eric's feedback, I noticed I paid too much attention to the comparison between Beta and Gamma and not enough to the fact that Beta has some zero-crossings...

I made new plots, focusing on this fact and using some real values for the focal lengths; some of them are still a bit extreme, but I wanted to plot also the zero-crossings for high values of x, to see if they make sense.

 

Plot of Beta and Gamma

 20141124_Plot_Real_BetaGamma_f.pdf

 

 

Plot of Beta and Gamma (zoom)

 

 20141124_Plot_Real_BetaGamma_f_Zoom.pdf

 

If we are not interested in the sign of our signals/noises (apart from knowing what it is), it is perhaps clearer to see the regions of interest by plotting Beta and Gamma in absolute value:

 

Plot of Beta and Gamma (Abs)

 20141124_Plot_Real_BetaGamma_Abs_f.pdf

 

 

I don't know if putting the telescope far from the QPD and near the mirror has some disadvantage, but that is the region with the most benefit, according to these plots.

 

The plots shown so far only consider the coefficients of the various terms; this makes sense if we want to exploit the zero-crossing of Beta's coefficient and see how things work, but the real noise and signal values also depend on Alpha and Theta themselves. Therefore I made another kind of plot, where I put the ratio r'(Alpha)/r'(Theta) and called it Tau. This may be, in a very rough way, an estimate of our "S/N" ratio, as Alpha is the tilt of the mirror and Theta is the laser jitter; in order to plot this quantity, I had to introduce the laser parameters r and Theta (taken from the Edmund Optics 1103P datasheet), and also estimate a mean value for Alpha; I used Alpha = 200 urad. In these plots, the contribution of r'(r) is not considered, because it doesn't change when adding the telescope and it is small overall.

In these plots the dashed line is the No Telescope case (as there is no variable quantity), and after the general plot I made two zoomed subplots for positive and negative focal lengths.

 

Plot of Tau (may be an estimate of S/N)

20141124_Plot_Real_Tau_f.pdf

 

 

Plot of Tau (positive f)

20141124_Plot_Real_Tau_f_Pos.pdf

 

Plot of Tau (negative f)

20141124_Plot_Real_Tau_f_Neg.pdf

 

If these plots can be trusted as meaningful, they show that for negative focal lengths our tentative "S/N" ratio is always decreasing, which, given the plots shown before, makes little sense: although for these negative f Gamma never crosses zero, Beta surely does, so I would expect one singular value for each.

Attachment 2: 20141124_Plot_Real_BetaGamma_f_Zoom.pdf
Attachment 3: 20141124_Plot_Real_BetaGamma_Abs_f.pdf
  10745   Tue Dec 2 01:27:22 2014   diego | Update | ASC | ASS Scripts for arms

I updated the medm C1ASS page for the Arm scripts:

ON : same as before

FREEZE OUTPUTS: calls the new FREEZE_DITHER.py script, which sets the Common Gain and the LO Amplitudes to 0, thereby freezing the current output values

START FROM FROZEN OUTPUTS: calls the new UNFREEZE_DITHER.py script, which sets the Common Gain and the LO Amplitudes as in the DITHER_ASS_ON.py script, but no burt restore is performed (a sketch of the freeze/unfreeze idea follows this list)

OFFLOAD OFFSETS: it's the old "SAVE OFFSETS"; it calls the WRITE_ASS_OFFSET.py script

OFF: same as before

StripTool: same as before
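Not the actual scripts, but a minimal sketch of the freeze/unfreeze idea, using the caget/caput command-line tools; the channel names and nominal values are illustrative:

    import subprocess

    def caput(channel, value):
        subprocess.check_call(["caput", channel, str(value)])

    DOFS = ["ETM_PIT", "ETM_YAW", "ITM_PIT", "ITM_YAW"]  # illustrative

    def freeze_dither(arm="X"):
        # zeroing the common gain and the LO amplitudes holds the current outputs
        caput("C1:ASS-%sARM_GAIN" % arm, 0)
        for dof in DOFS:
            caput("C1:ASS-%sARM_%s_LO_AMP" % (arm, dof), 0)

    def unfreeze_dither(arm="X"):
        # restore the nominal values (as DITHER_ASS_ON.py does), no burt restore
        caput("C1:ASS-%sARM_GAIN" % arm, 1)
        for dof in DOFS:
            caput("C1:ASS-%sARM_%s_LO_AMP" % (arm, dof), 1000)  # made-up amplitude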

 

 

  10747   Wed Dec 3 01:18:15 2014   diego | Update | LSC | IR Resonance Script Status

Tonight I started testing a new method for the fine scan:

  • the idea is to use the zero crossings of the PO*11_ERR_DQ signals after (or as an alternative to) the fine scan, but such signals are quite dirty, so I need to find some good way to smooth/filter them (see the sketch after this list);
  • I didn't manage to run many tests, because:
    • once the arms were locked fine with ALS, the CARM & DARM lock wasn't very robust, both in acquiring and in maintaining lock;
    • during the night, the slow OFSs of the arms misbehaved, and at least once per arm they raised their warning box (independently of each other, and it was hastily recovered), even for values that had been perfectly fine before; I am confused about this;
    • as a result, notwithstanding many tries, the beatnotes are gone;
  • I have enough information to push the script a little further, but I'll do more testing soon;
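A minimal sketch of the smoothing/zero-crossing idea from the first bullet; moving-average smoothing is just one option, and in practice the error signals would come from DQ/NDS data:

    import numpy as np

    def zero_crossings(t, err, npts=101):
        # moving-average smoothing of the dirty error signal
        smooth = np.convolve(err, np.ones(npts) / npts, mode="same")
        # indices where the smoothed signal changes sign
        i = np.where(smooth[:-1] * smooth[1:] < 0)[0]
        # linear interpolation to estimate the crossing times
        return t[i] - smooth[i] * (t[i + 1] - t[i]) / (smooth[i + 1] - smooth[i])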

 

  10766   Mon Dec 8 20:53:51 2014   diego | Summary | General | Dec 8 - Check Frequency Counter module

Quote:

Quote:

Attached is the timeline for Frequency Offset Locking related activities. All activities will be done mostly in morning and early afternoon hours.

[Diego, Manasa]

We looked into the configuration and settings that the frequency counters (FC) and Domenica (the Raspberry Pi the FCs talk to) were left at. After poking around for a few hours, we were able to read out the FC output and see it on StripTool as well.

We have made a list of modifications that should be done on Domenica and to the readout scripts to make the FC module automated and user-friendly.

I will prepare a user manual that will go on the wiki once these changes are made.

 

 OUTDATED: see elog 10779

 

I started working on the scripts/FOL directory (I made a backup before tampering with anything!):

  • I still need to do some serious polishing in the folder, and on the Raspberry Pi itself, in order to have a clean and understandable environment;
  • as of now, I created a single armFC.c program, which takes as arguments the device (/dev/hidraw0 for the X arm, /dev/hidraw1 for the Y arm) and the value to write into the frequency counter (0x3 for initialization and 0x2 for actual use); hence, there is no more need for recompilation! (see the sketch after this list)
  • I improved the codetorun.py script (and gave it a proper name, epics_channels.py), which handles the initialization AND the availability of the channels;
  • On the Raspberry Pi, I created two init scripts, /etc/init.d/epics_server.sh and /etc/init.d/epics_channels.sh, which start at the end of the boot process with default runlevels; the former starts the softIoc process (EPICS itself), while the latter executes the constantly running epics_channels.py script; as they are services, they can be started/stopped with the usual sudo /etc/init.d/NAME start|stop|restart
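A sketch of how the pieces fit together (this is not the actual epics_channels.py; it assumes armFC prints a single number, and the beatnote channel names are illustrative):

    import subprocess
    import time

    DEVICES = {"C1:ALS-BEATX_FC": "/dev/hidraw0",   # X arm frequency counter
               "C1:ALS-BEATY_FC": "/dev/hidraw1"}   # Y arm frequency counter

    def read_fc(device, command="0x2"):
        # armFC takes the device and the value to write (0x3 init, 0x2 readout)
        return float(subprocess.check_output(["./armFC", device, command]).decode())

    for channel, device in DEVICES.items():
        read_fc(device, "0x3")                       # one-time initialization

    while True:
        for channel, device in DEVICES.items():
            subprocess.check_call(["caput", channel, str(read_fc(device))])
        time.sleep(0.1)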

 

As a result, as soon as the Raspberry Pi completes its boot process, the two beatnote channels are immediately available.

 

  10770   Tue Dec 9 16:06:46 2014   diego | Summary | General | Dec 8 - Check Frequency Counter module

Quote:

Quote:

Quote:

Attached is the timeline for Frequency Offset Locking related activities. All activities will be done mostly in morning and early afternoon hours.

[Diego, Manasa]

We looked into the configuration and settings that the frequency counters (FC) and Domenica (the Raspberry Pi the FCs talk to) were left at. After poking around for a few hours, we were able to read out the FC output and see it on StripTool as well.

We have made a list of modifications that should be done on Domenica and to the readout scripts to make the FC module automated and user-friendly.

I will prepare a user manual that will go on the wiki once these changes are made.

 

 I started working on the scripts/FOL directory (I made a backup before tampering with anything!):

  • I still need to do some serious polishing in the folder, and on the Raspberry Pi itself, in order to have a clean and understandable environment;
  • as of now, I created a single armFC.c program, which takes as arguments the device (/dev/hidraw0 for the X arm, /dev/hidraw1 for the Y arm) and the value to write into the frequency counter (0x3 for initialization and 0x2 for actual use); hence, there is no more need for recompilation!
  • I improved the codetorun.py script (and gave it a proper name, epics_channels.py), which handles the initialization AND the availability of the channels;
  • On the Raspberry Pi, I created two init scripts, /etc/init.d/epics_server.sh and /etc/init.d/epics_channels.sh, which start at the end of the boot process with default runlevels; the former starts the softIoc process (EPICS itself), while the latter executes the constantly running epics_channels.py script; as they are services, they can be started/stopped with the usual sudo /etc/init.d/NAME start|stop|restart

 

As a result, as soon as the Raspberry Pi completes its boot process, the two beatnote channels are immediately available.

 

 OUTDATED: see elog 10779

 

 Update and corrections:

 

  • I forgot to log that I added a udev rule in /etc/udev/rules.d/98-hidraw-permissions.rules in order to let the controls user access the devices without having to sudo all the time;
  • I updated the ~/.bashrc and /opt/epics/epics-user-env.sh files to fix syntax errors and add some aliases we usually use;
  • since /etc/init.d/ doesn't support automatic respawn of processes, I purged the two scripts I made yesterday and added two lines to /etc/inittab. This works just as well (I tried a couple of reboots to verify it), and the two processes now respawn automatically even if killed (and, I assume, if they die for any other reason);
  • Another thing I forgot: for the time being, during the cleanup, the Raspberry Pi works off the network-shared scripts directory. Once the cleaning is done and everything is fixed, everything will run locally on the RPi, and the scripts/FOL directory on chiara will be used as backup/repository.
  10772   Wed Dec 10 14:22:37 2014   diego | Update | Computer Scripts / Programs | Status of the new nodus

[Diego, Steve]

We ran a Cat 6+ Ethernet cable from the 1X7 rack (where the new nodus is located) to the fast GC switch in the control room rack; now I will learn how to set up the 'outside world' network, iptables, and the like.

 

A reminder: the current hardware/software status is posted in elog 10697; if additions or corrections are needed, let me know.

 

After I check a couple of things, we can use the new nodus (which is currently known on the martian network as rosalba) as a local test to see that everything is working. After that (and, mostly, after I have the network working), we will sync the data from the old nodus to the new one and make the switch.

  10779   Thu Dec 11 12:39:31 2014   diego | Update | Computer Scripts / Programs | Frequency Offset Locking scripts status

 I finished polishing the scripts/FOL directory; this is the current status, and this post replaces my two previous posts on the subject:

  • the Raspberry Pi operates locally: everything is in its /opt/FOL directory, which is a mirror of /opt/rtcds/caltech/c1/scripts/FOL/ ; some backup/sync scripts should be set up; tell me what kind is recommended (sync direction, place to call the script from, etc.) and I'll set it up;

 

  • the /opt/FOL directory contains:
    • ADC_interface: Akhil's ADC interface software and dependencies;
    • akhilTestCodes: Akhil's work directory with his programs and data;
    • backup: two zip files with a full backup of the FOL stuff for both chiara and domenica as of 2014/12/08, before my work on the directory;
    • fcreadoutApp: the EPICS app compiled on domenica. I didn't modify anything in particular here, as I don't know much about EPICS Apps; I'm not even sure it is used by now, as I launch EPICS manually by just giving it a .db file (see below);
    • armFC*: the single program that constantly fetches data for the channels; it takes as arguments the RPi device (/dev/hidraw0 for the X arm, /dev/hidraw1 for the Y arm) and the value to write into the frequency counter (0x3 for initialization, 0x2 for actual use); hence, there is no more need for recompilation!
    • epics_channels.py: the new version of the old codetorun.py script; it handles the initialization and the availability of the two beatnote channels;
    • fcreadout / freqCountIOC: the binaries of the EPICS apps I found on chiara/domenica; they are not used as of now, but could be useful;
    • fcreadout.db: the database file loaded by EPICS to handle the channels;
    • FOLPID.pl: the Perl PID controller; it is still the old version; we will work on this one later (see Manasa's schedule in elog 10760 for info)

 

  • Domenica's environment:
    • as I said, everything runs locally from /opt/FOL;
    • in particular, I added in /etc/inittab two lines that launch EPICS and the python script for the channels; respawn is supported so these processes should always be available. For this to happen, DO NOT MOVE armFC, epics_channels.py and fcreadout.db from the /opt/FOL directory on domenica!
    • I added a udev rule in /etc/udev/rules.d/98-hidraw-permissions.rules to let the controls user access the /dev/hidraw* devices without having to sudo all the time;
    • I updated the ~/.bashrc and /opt/epics/epics-user-env.sh files to fix syntax errors and add some aliases we usually use.
  10793   Fri Dec 12 19:38:49 2014   diego | Update | Computer Scripts / Programs | Status of the new nodus

Quote:

[Diego, Steve]

We ran a Cat 6+ Ethernet cable from the 1X7 rack (where the new nodus is located) to the fast GC switch in the control room rack; now I will learn how to set up the 'outside world' network, iptables, and the like.

 

A reminder: the current hardware/software status is posted in elog 10697; if additions or corrections are needed, let me know.

 

After I check a couple of things, we can use the new nodus (which is currently known on the martian network as rosalba) as a local test to see that everything is working. After that (and, mostly, after I have the network working), we will sync the data from the old nodus to the new one and make the switch.

[Diego, EricQ]

Update: work is almost completed; the old nodus is still online, as I don't feel confident making the switch and leaving it on its own for the weekend. However, the new nodus is online with the IP address 131.215.114.87, so everyone can check that everything works. From my tests, things seem to be working so far.

Once everything is in place, I will save every reasonably important nodus configuration file into the SVN.

 

A reminder: every change made while accessing the 131.215.114.87 machine will be purged during the sync & switch

 

  10798   Mon Dec 15 16:27:57 2014   diego | Update | Computer Scripts / Programs | Status of the new nodus

Quote:

 Nodus (solaris) is dead, long live Nodus (ubuntu).

Diego and I are smoothing out the kinks as they appear, but the ELOG is running smoothly on our new machine.

SVN is working, but your checkouts may complain because they expect https, and we haven't turned SSL on yet...

 [Diego, EricQ]

SSL, https and backups are now working too!

A backup of nodus's configuration (with some explanation) will be done soon.

  10802   Tue Dec 16 00:20:06 2014   diego | Update | Optical Levers | BS & PRM OL realignment

[Rana, Diego]

We manually realigned the BS and PRM optical levers on the optical table.

  10805   Tue Dec 16 20:49:25 2014   diego | Update | Computer Scripts / Programs | Status of the new nodus

Quote:

Quote:

 Nodus (solaris) is dead, long live Nodus (ubuntu).

Diego and I are smoothing out the kinks as they appear, but the ELOG is running smoothly on our new machine.

SVN is working, but your checkouts may complain because they expect https, and we haven't turned SSL on yet...

 [Diego, EricQ]

SSL, https and backups are now working too!

A backup of nodus's configuration (with some explanation) will be done soon.

 Nodus should be visible again from outside the Caltech network; I added some basic configuration for postfix and smartmontools; configuration files and instructions for everything are in the SVN, in the nodus_config folder.

  10806   Tue Dec 16 20:51:18 2014   diego | Update | LSC | PRMI loops need help

Quote:

[...]

Diego is going to give us some spectra of the MC error point at various levels of pockel's cell drive.  Is it always the same frequencies that are popping up, or is it random?

 I found out that the Spectrum Analyzer gives bogus data... Since it is locking time now, I'll go and figure out tomorrow what is not working.

  10817   Fri Dec 19 14:25:48 2014   diego | Update | Computer Scripts / Programs | elog restarted

The elog was not responding for unknown reasons, even though the elogd process on nodus was alive; anyway, I restarted it.

  10822   Fri Dec 19 19:21:04 2014   diego | Update | CDS | SOS!!! HELP!! EPICS freeze 45min+ so far!

Quote:

[Jenne, Diego]

The EPICS freeze that we had noticed a few weeks ago (and several times since) has happened again, but this time it has not come back on its own.  It has been down for almost an hour so far. 

 So far, we have reset the Martian network's switch that is in the rack by the printer.  We have also power cycled the NAT router.  We have moved the NAT router from the old GC network switch to the new faster switch, and reset the Martian network's switch again after that.

We have reset the network switch that is in 1X6.

We have reset what we think is the DAQ network switch at the very top of 1X7.

So far, nothing is working.  EPICS is still frozen, we can't ping any computers from the control room, and new terminal windows won't give you the prompt (so perhaps we aren't able to mount the nfs, which is required for the bashrc).

We need help please!

[EricQ]

 

EricQ suggested it may be some NFS-related issue: if something, maybe some computer in the control room, is asking too much of chiara, then all the other machines accessing chiara will slow down, and this could escalate and lead to the Big Bad Freeze. As a matter of fact, chiara's dmesg showed its eth0 interface being brought up constantly, as if something was making it go down repeatedly. Anyhow, after the shutdown of all the computers in the control room, a reboot of chiara, megatron and the fb was performed.

 

[Diego]

Then I rebooted pianosa, and most of the issues seem gone so far; I had to "mxstream restart" all the frontends from medm, and every one of them except c1scy seems to behave properly. I will now bring the other machines back to life and see what happens next.

  10823   Fri Dec 19 20:32:11 2014   diego | Update | CDS | SOS!!! HELP!! EPICS freeze 45min+ so far!

[Diego, Jenne]

 

Everything seems reasonably back to normal:

Notes:

  • the machines in the control room have been rebooted;
  • the c1iscey frontend now behaves;
  • I saw on nodus, which remained up and running the whole time, a bunch of "nfs: server chiara is not responding, timed out" messages from the time of the freeze; it may be that the sync option for the NFS share is too resource-demanding, or it may be some other network issue;
  • the FSS was doing strange stuff and the MC couldn't recover the lock; the MCautolocker script wasn't running because of the lock loss of the MC and the lack of communication between the machines; so we did a sudo initctl start MCautolocker on megatron and recovered the MC too.
  10826   Sun Dec 21 18:46:06 2014   diego | Update | IOO | MC Error Spectra

The error spectra I took so far are not that informative, I'm afraid. The first three posted here refer to Wed 17 in the afternoon, when things were quiet, the LSC control was off and the MC was reliably locked. The last two plots refer to Wed night, while Q and I were doing some locking work; in particular, these were taken just after one of the locklosses described in elog 10814. Sadly, they aren't much different from the "quiet" ones.

Some considerations, though: Q and I saw some weird effects that night on a live reading of such spectra, which couldn't be saved; these effects were quite fast both in appearing and disappearing, and therefore difficult to capture with the snapshot measurement, which is the only one that can save the data as of now. Moreover, these effects were certainly seen during the locklosses, but sometimes also in normal circumstances. What we saw was a broad peak in the range 5e4-1e5 Hz with peak value ~1e-5 V/rtHz, just after the main peak shown in the attached spectra.

Attachment 1: SPAG4395_17-12-2014_170951.pdf
Attachment 2: SPAG4395_17-12-2014_172846.pdf
Attachment 3: SPAG4395_17-12-2014_175147.pdf
Attachment 4: SPAG4395_18-12-2014_003414.pdf
Attachment 5: SPAG4395_18-12-2014_003506.pdf
  10840   Tue Dec 23 18:43:33 2014   diego | Update | Computer Scripts / Programs | FSS Slow servo moved to megatron

Quote:

I ssh'd in, and was able to run each script manually successfully. I ran the initctl commands, and they started up fine too. 

We've seen this kind of behavior before, generally after reboots; see ELOGS 10247 and 10572

The plot shows the behaviour of the PSL-FSS_SLOWDC signal during the last week; the blue rectangle marks an approximate estimate of when the scripts were moved to megatron. Apart from the bad things that happened on Friday during the big crash, and the work ongoing since yesterday, it seems that something is not working well. The scripts on megatron are actually running, but I'll try to have a look at it.

  10856   Tue Jan 6 03:09:17 2015   diego | Update | LSC | PRFPMI status & IFO status

 [Jenne, Rana, EricQ, Diego]

Tonight we worked on getting the IFO back into a working state after the break, and then tried some locking.

  • the MC is behaving better: it could stay in a stable condition for hours, even if it lost lock a couple of times, one of which persisted for a while;
  • we managed to get to arm powers of 20-ish before losing lock (this happened a couple of times);
  • the main thing seems to be that we have only ~ 20 degrees of phase margin at UGF for DARM, which is evidently too little;
  • one hypothesis is that DARM may change sign due to some weird length/angular interaction, and that this messes up the actuation causing the lockloss;
  • one other possibility is that maybe, when arm power rises, there are some weird flashes that go back to the MC and then cause the locklosses, but this has to be verified;
  • attached is a plot of the last lockloss (and a zoom of it), which seems to point at DARM as the culprit;

 Lockloss_20150106_074552.png

 

Lockloss_20150106_074552_zoom.png

 

We left the IFO uncontrolled and in a "flashy" state so that tomorrow we can look into the "back-flashing to the MC" hypothesis.

  10861   Wed Jan 7 02:56:15 2015   diego | Update | LSC | UGF Servo for DARM

 [Jenne, Diego]

Today we began implementing the UGF Servos. Things we did:

  • we updated the LSC model with both DARM and CARM servos, and moved them from after the control system to before it, at the level of the error signal;
  • we updated the medm screens; new buttons are located in the main LSC screen;
  • we started commissioning the DARM servo, at first using DARM for the lock of the single Y arm, then moving on to the PRFPMI lock and the usual transition from ALS to transmission;
  • although we had several lock losses during the night, we managed to tweak the parameters of the DARM UGF servo (phases, excitation, gains), which now seems to work sufficiently well;
  • the filters added to the I and Q filter banks are a single lowpass in each, while the only filter in the main servo is a standard integrator (a conceptual sketch of the servo follows this list);
  • we don't have a step response at the moment, but we can say that the settling time of the servo is in the range of 10 seconds;
  • we updated the ALSdown.py and ALSwatch.py scripts with a call to a new UGFdown.py script; this script, located in the scripts/PRFPMI folder, takes care of disabling the servos and putting the excitation to zero in case of a lock loss; re-enablement of such things must be done manually;
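For reference, a conceptual sketch of what such a servo does (this is not the RCG model itself, and the sign convention depends on where the gain multiplier sits in the loop): demodulate the signals on the two sides of the excitation injection point at the excitation frequency, estimate the loop gain magnitude there, and integrate the deviation from unity into a gain correction:

    import numpy as np

    def demod(sig, t, f_exc):
        # complex amplitude of sig at the excitation frequency (the mean acts
        # as the lowpass filter on the I and Q products)
        return np.mean(sig * np.exp(-2j * np.pi * f_exc * t))

    def ugf_servo_step(test1, test2, t, f_exc, gain_mult, k=0.01):
        # test1/test2 straddle the injection point, so their ratio estimates
        # the loop gain at f_exc; the integrator pushes its magnitude toward 1
        g = abs(demod(test2, t, f_exc) / demod(test1, t, f_exc))
        return gain_mult - k * np.log10(g)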
  10870   Wed Jan 7 14:35:44 2015   diego | Update | SUS | SUS Drift Monitor

The SUS Drift Monitor screen has been updated:

  • removed the old dead channels from the MEDM screen;
  • updated the SUS models with new 'mute' channels where the expected values should be put;
  • updated the MEDM screen with the new channels
  • values are still 0, since I don't know at this time what these expected values should be

 150107_SUS_DRIFTMON_screen.png

  10881   Thu Jan 8 23:02:30 2015   diego | Update | SUS | SUS Drift Monitor

The MEDM screen has been updated: the new buttons, one for each optic, call the scripts/general/SUS_DRIFTMON_update_reference.py script, which measures (and averages) the current values of the POS/PIT/YAW drifts for 30 s, and then sets the average as the new reference value.
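Not the actual script, but a sketch of the average-and-set idea (caget/caput via subprocess; the channel names are illustrative):

    import subprocess
    import time

    def caget(ch):
        return float(subprocess.check_output(["caget", "-t", ch]).decode())

    def caput(ch, val):
        subprocess.check_call(["caput", ch, str(val)])

    def update_reference(optic, seconds=30, dt=0.5):
        dofs = ("POS", "PIT", "YAW")
        sums = dict((dof, 0.0) for dof in dofs)
        n = int(seconds / dt)
        for _ in range(n):
            for dof in dofs:
                sums[dof] += caget("C1:SUS-%s_SUS%s_INMON" % (optic, dof))
            time.sleep(dt)
        for dof in dofs:
            caput("C1:SUS-%s_DRIFT%s_REF" % (optic, dof), sums[dof] / n)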

 

  10888   Tue Jan 13 01:11:51 2015   diego | Update | LSC | Response of error signals to MICH EXC

For several MICH offsets, I measured the response of REFL33Q, ASDC and the ratio ASDC/POPDC to a MICH EXC. It appears that there is no frequency-dependent effect. The plots for MICH_OFFSET = 0.0 and 2.0 are slightly lower in magnitude: they were the first measurements taken, and a little realignment of the BS was necessary afterwards, which is probably the reason.

 

 

Attachment 1: MICH_to_REFL33Q_ASDC_12Jan2015_1.pdf
Attachment 2: MICH_to_REFL33Q_ASDC_12Jan2015_2.pdf
Attachment 3: MICH_to_REFL33Q_ASDC_12Jan2015_3.pdf
  10896   Tue Jan 13 15:11:30 2015   diego | Update | LSC | Transitioned to ASDC MICH (PRMI and PRFPMI)

These are the parameters of the UGF servos we used last night:

DOF     Exc. Frequency (Hz)   Exc. Gain   Loop Gain
DARM    110.1                 0.025       -0.03
MICH    n/a                   n/a         n/a
PRCL    150.001               2.0         -0.03
CARM    115.1                 0.01        -0.03

 

Some tweaking of these parameters and the commissioning of the MICH servo will be done soon; an elog post about the UGF scripts/MEDM screens will also follow soon.

  10902   Thu Jan 15 03:18:11 2015   diego | Update | LSC | UGF servo now linear again

The UGF servos were recommissioned today:

  • suitable values of frequency, excitation, phases and gain were found;
  • the phases were chosen in order to maximize the I signal and suppress the Q one;
  • the servos seemed sufficiently stable when in a quiet state, but they didn't perform well in other cases;
  • I also found out that DARM & CARM and MICH & PRCL are perhaps too strongly coupled, but that could actually be due to the main loops rather than the UGF ones;
  • however, after some weird ramping with no apparent reason, and after some quite bad and glitchy step responses, I found out that the effect of the chosen phases vanished: the I and Q signals were of the same order of magnitude again, probably causing the bad performance;
  • Jenne and I tried to increase the SINGAINs and COSGAINs (keeping them equal to each other): this has the good effect of separating the I and Q signals more, but it's just a zoom effect: there are still mixing effects and, more importantly, some zero-crossings into negative values that make the signal going into the servo go crazy.

Our idea is that we need to do some thinking about these servos and, most of all, figure out this whole phase issue before we can start to trust the servos for locking.

  10911   Fri Jan 16 04:14:05 2015   diego | Update | LSC | UGF servo now linear again

UGF Servos' commissioning is still going on; updates from today:

  • on Rana's suggestion, we no longer use the Q-signal rejection at the level of Phase 1 and Phase 2; instead, a proper complex division is made between those two signals (with a check for zero); the resulting signal is then demodulated with a new Phase 3, which is the one used to select the I signal while zeroing the Q one; changes to the model and the screens have been made;
  • a new evaluation of all the parameters for the four servos has been made; aside from the new phase, and the zeroing of the other phases (because they are no longer used for the selection), the parameters are not dissimilar from the previous ones;
  • the PRCL and MICH servos seem sufficiently stable;
  • CARM and DARM are stable only for a short amount of time; what usually happens is that one of the two starts drifting in a random direction, and usually the other one follows shortly after; it is not clear whether there is a relation or whether they just stop being stable after a similar amount of time; I still noticed a few lower-limit values appearing in the input signal, which should be avoided; I'll check the model again tomorrow;
  • the weird thing about CARM and DARM is that, at the same time as one of them starts drifting, its I and Q signals become comparable; when the servo is shut off, they resume their normal state;
  • an increase in the excitation gain improves the separation of I and Q and also reduces their variations, but a high peak in the loop due to this might not be a good idea.

 

  10916   Fri Jan 16 20:37:52 2015   diego | Update | LSC | UGF servo now linear again

I found an error in the model of the UGF servos and have now corrected it; for future reference, the division between TEST2 and TEST1 is now properly done with complex math: given

TEST1 = a + ib \hspace{.5cm},\hspace{.5cm} TEST2 = c + id

we have, for TEST3:

 

TEST3 = \frac{TEST2}{TEST1} = \left(\frac{ac+bd}{a^2+b^2}\right) + \left(\frac{ad-bc}{a^2+b^2} \right)i

 

TEST3 is the actual signal that is now phase rotated to select only the I signal while rejecting the Q one.
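A quick numerical check of the algebra:

    import numpy as np

    a, b, c, d = np.random.randn(4)
    t1, t2 = a + 1j * b, c + 1j * d
    rhs = ((a * c + b * d) + 1j * (a * d - b * c)) / (a**2 + b**2)
    assert np.isclose(t2 / t1, rhs)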

 

All the updates to the model, the screens and the script have been SVNed.

  10926   Tue Jan 20 21:58:04 2015   diego | Update | LSC | Some locking, may need to modify UGF part again

 

Quote:

I have been playing with the IFO tonight.  Mostly, I wanted to make sure that all of the scripts for the carm_cm_up sequence were working, and they seem to all be fine. 

I also turned on all 4 UGF servos.  My big ah-ha moment for the night is that the excitation is multiplied by the gain multiplier.  This means that if the UGF servo is multiplying by a small number (less than 1), the excitation will get smaller, and could get small enough that it is lost in the noise.  Now the error signal for the UGF servo is very noisy, and can dip to zero.  Since we can't take the log of zero, there are limiters in the model, but they end up giving -80dB to the error point of the UGF servo.  This makes it all freak out, and often lose lock, although sometimes you just get a weird step in the UGF servo output. 

Anyhow, we need to be mindful of this, and offload the UGF servos regularly.  I think the better thing to do though will be to divide the excitation amplitude by the gain multiplier.  This will undo the fact that it is multiplied by that number, so that the number of counts that we put into the excitation amplitude box is what we expect. 

 

After some brainstorming with Jenne and Q, both the model and the medm screen have been modified: the entire "Test1 - injection of the excitation - Test2" block has been moved after the servo output. In this way we completely avoid the multiplication problem, without having to perform divisions that could lead to division-by-zero problems. Because of how the logic works now, one UnitDelay block had to be inserted before each of Test1 and Test2.

 

Since the UGF Servo has been heavily modified lately, I'll post the current status of the model (as an attachment, since in-page images lose too much quality).

 

Attachment 1: UGF_SERVO_MDL.png
  10930   Thu Jan 22 15:35:41 2015   diego | Update | SUS | Bounce/Roll Measurements

I measured the bounce/roll frequencies for all the optics, and updated the Mechanical Resonances wiki page accordingly.

I put the DTT templates I used in the /users/Templates/DTT_BounceRoll folder; I wrote a python script which takes the exported ASCII data from such templates and does all the rest; the only tricky part is to remember to export the channel data in the order "UL UR LL" for each optic; the ordering of the optics in a single template export is not important, as long as you remember it...

Anyhow, the script is documented and the only things that may need to be modified are:

  • lines 21, 22: the "starting points" FREQ_B and FREQ_R (to accommodate noisy or bad data, as ETMX was for the Roll part in both the measurements I took);
  • line 72: the parameters of the Slepian window used to average the data: the first one is the most important and indicates how much averaging will happen; more averaging means less noise but broader and lower peaks, which shouldn't be a big issue since we care only about the peak position, not its amplitude; however, if the peak is already shallow, too much averaging will make things worse instead of better;
  • lines 110, 118: the initial guess for the fit parameters;

The script is in scripts/SUS/BR_freq_finder.py and in the SVN. I attach the plots I made with this method.
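The core of the method can be sketched like this (a simplified stand-in for BR_freq_finder.py, assuming scipy's DPSS/Slepian window and a Lorentzian peak model):

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.signal import windows

    def lorentzian(f, f0, gamma, a, c):
        return a / (1.0 + ((f - f0) / gamma) ** 2) + c

    def peak_frequency(freq, asd, f_start, span=1.0, nwin=51, nw=2.5):
        # smooth with a normalized Slepian window; nwin and nw set the averaging
        w = windows.dpss(nwin, nw)
        smooth = np.convolve(asd, w / w.sum(), mode="same")
        # fit a Lorentzian around the expected frequency (FREQ_B or FREQ_R)
        sel = (freq > f_start - span) & (freq < f_start + span)
        p0 = [f_start, 0.05, smooth[sel].max(), smooth[sel].min()]
        popt, _ = curve_fit(lorentzian, freq[sel], smooth[sel], p0=p0)
        return popt[0]  # fitted bounce/roll frequency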

Attachment 1: BR_Jan2015.tar.bz2
  10931   Thu Jan 22 18:36:11 2015   diego | Update | IOO | MC Flashes

I've been looking into the data of Jan 06 and Jan 15 taken during daytime, as the night before we had left the PRC aligned in order to allow the IFO to flash; the purpose is to find out whether some flashes from the IFO could propagate back to the IMC and cause it to lose lock. I will show a sample plot here; all the others are in the attached archive.

My impression is that these locklosses of the IMC are not caused by these flashes: the MC_[F/L] signals seem quite stable until the lock loss, and I don't see any correlation with what happens to REFLDC that could cause the lockloss (apart from its drop as a consequence of the lockloss itself); in addition, on most occasions I noticed that the FSS started to go crazy just before the lock loss, and that suggests to me that the lockloss source is internal to the IMC.

I can't see anything strange happen to MC_TRANS either, as long as the IMC is locked: no fluctuations or weird behaviour. I also plotted the MC_REFL_SUM channel, but it is too slow to be useful for this kind of "hunt".

Attachment 1: 1104612646_zoom_1.png
Attachment 2: elog.tar.bz2
  10932   Thu Jan 22 22:15:30 2015   diego | Update | SUS | Centered OpLevs

[Diego, Jenne]

We centered the OpLevs for ITMX and ITMY.
 

  10944   Tue Jan 27 15:45:26 2015   diego | Update | LSC | Small tweaks to the locking

The UGF Servo medm page has been updated to reflect the last changes, namely the return of the sum of squares and the disappearance of Test3.

 

  10960   Fri Jan 30 03:12:15 2015   diego | Update | LSC | CARM on REFL11I

[Jenne, Diego]

Tonight we continued following the plan of last night: perform the transition of CARM to REFL11_I while on MICH offset at -25%:

  • we managed to do the transition several times, keeping the UGF servos on for MICH and PRCL but turning off the DARM and CARM ones, because their contribution was rather unimportant and we feared that their excitations could negatively affect the other loops (as loops tend to see each other's excitation lines);
  • we had to tweak the MICH and PRCL UGF servos:
    • the excitation frequency for MICH was lowered to ~41 Hz, while PRCL's was lowered to ~50 Hz;
    • PRCL's amplitude was lowered to 75, because it was probably too high and it affected the CARM loop, while MICH's was increased to 300, because during the reduction of the CARM offset it was sinking into the noise; after a few tries we can say they don't need to be tweaked on the fly during the procedure, but can be kept fixed from the beginning;
    • after the transition to REFL11_I for CARM, we also engaged its UGF servo, still at the highest frequency of the lot (~115 Hz) and with relatively low amplitude (2), to help keep the loop stable;
    • as DARM was still on ALS, we didn't engage its UGF servo during or after the transition, but just held its output from the initial part of the locking sequence (after we lowered its frequency to 100 Hz);
  • however, at CARM offset 0 our arm power was less than what we had yesterday: we managed to get higher than ~8, but after Koji tweaked the MC alignment we reached ~10; we still don't understand the reason for the big difference with respect to what the simulations show for MICH offset at 25% (arm power ~50);
  • after the CARM transition to REFL11_I we felt things were pretty stable, so we tried to reduce the MICH offset to get into the ~ -10% range; however, we never managed to get past ~ -15% before losing lock, at arm power around 20;
  • we lost lock several times, for several different reasons (the IMC lost lock a couple of times, PRCL noise increased/showed some ringing, MICH railed), but our main concern is with the PRCL loop:
    • we took several measurements of the PRCL loop: the first one seemed pretty good, and it had a bigger phase bubble than usual; however, the subsequent measurements showed some weird shapes we struggle to find a reason for; these measurements were taken at different UGF frequencies, so maybe it is worth looking for some kind of correlation; moreover, in the two weird measurements the UGFs are not where they are supposed to be, even though the servo was correctly following the input (or so it seemed); the last measurement was interrupted just before we lost lock because of PRCL itself;
    • we noticed a few times during the night that the PRCL loop noise in the 300-500 Hz range increased suddenly and we saw some ringing; at least a couple of times it was PRCL that threw us out of lock; this frequency range is similar to the "weird" range we found in our measurements, so we definitely need to keep an eye on PRCL at those frequencies;
  • in conclusion, the farthest we got tonight was CARM on REFL11_I at 0 offset, DARM at 0 offset still on ALS and MICH at ~ -15% offset, arm power ~20.

 

Attachment 1: PRCL_29Jan2015_Weird_Shape.pdf
Attachment 2: ArmPowers20_MICHoffsetBeingReduced_0CARMoffset_29Jan2015.pdf
  10965   Mon Feb 2 22:59:49 2015   diego | Update | LSC | CM board input switched to AS55

[Diego, Jenne]

We just changed the input to the CM board from REFL11 to AS55.

 

  10966   Tue Feb 3 04:01:55 2015   diego | Update | LSC | CM servo & AO path status

[Diego, Jenne]

Tonight we worked on the CM board and AO path:

  • at first we changed the REFL1 input to the CM board from REFL11 to AS55, as written in my previous elog; we tried following Koji's procedures from http://nodus.ligo.caltech.edu:8080/40m/9500 but we didn't get any result: we could lock using the regular digital path but no luck at all for the analog path;
  • then we decided to follow the procedure to the letter, using POY11Q as input to the CM board;
    • we still couldn't lock following the Path #2, even after adjusting the gains to match the current configuration for the Yarm filter bank;
    • we had some more success using Path #1, but we had to lower the REFL1 Gain to ~3-4 (from the original 31) because of the different configuration of the Yarm filter bank, in order to have the same sensing in both; we managed to acquire lock a few times; it's not super stable, but it can keep lock for a while;
    • when we tried to increase the gain of the MC filter bank and the AO Gain, however, we immediately had some gain peaking, and we couldn't go further than 0.15 and 9 dB respectively. We currently don't have an answer for that.
    • anyhow, we took a few measurements with the SR785:

 

The BLUE plot is at MC Gain = 0.10 and REFL1 Gain = 4 dB; the GREEN plot is for MC Gain = 0.10 and REFL1 Gain = 3 dB, which seemed a more stable configuration; after this last configuration, we increased the MC Gain to 0.15 and the AO Gain from 8 dB to 9 dB and took another measurement, the RED plot; this is as far as we got as of now. We also couldn't increase the REFL11 Gain, because it made things unstable and more prone to losing lock.

So, some little progress on the AO path procedure, but we are very low on our UGF, and we have to find a way to increase our gains without breaking the lock while avoiding the gain peaking we witnessed tonight.

 

Notes:

  • is the REFL1 Gain dB slider supposed to go to negative dBs? During the night we also tried to use negative dBs, but it didn't seem to do anything;
  • when we plugged POY11Q into the CM board, we noticed that it wasn't connected to anything at the moment; since we phase rotate POY11, we were assuming that we were using that signal somewhere. We are confused by this...
  • a reminder: REFL11 is no longer connected to the CM board input; POY11 now is.
Attachment 1: CARM_03-02-2015_031754.pdf
  10971   Wed Feb 4 04:51:14 2015   diego | Update | LSC | CARM Transition to REFL11 using CM_SLOW Path

[Diego, Jenne, Eric]

Tonight we kept on following our current strategy for locking the PRFPMI:

  • the first few tries were pretty unsuccessful: the PRMI lock wasn't very stable, and we never managed to reduce the CARM offset to zero before losing lock;
  • then we did some usual maintenance: we fixed the X arm green beatnote, fixed the phase tracker and gave much attention to ASS alignment, since the X arm wasn't doing great;
  • the last few locks were consistently better: we managed to get to CARM offset zero "easily", but the arm power was not very high (~8);
  • then we tried to transition CARM to REFL11, both with the usual configuration and using CM_SLOW, using REFL11 as input for the Common Mode Board;
    • with the usual configuration, we lost lock right after the transition, because of MICH hitting the rail;
    • we did a very smooth CARM transition directly to REFL11 on the CM_SLOW path; we managed to take a spectrum with the SR785, but we couldn't take any more measurements before losing lock because of some weird glitch, as we can see from the lockloss plot;
  • another thing that helped tonight was changing the ELP of the MICH filter bank: it went from 4th order to 6th order, and from 40 dB suppression to 60 dB suppression;

Both of the last two locks, the most stable ones (one transition to the usual REFL11 and one transition to "CM_SLOW" REFL11), were acquired actuating on MC2;

 


EDITs by JCD: At least one of the times that we lost PRMI lock (although we kept CARM and DARM lock on ALS) was due to MICH hitting the rail, even after we increased the limiter to 15,000 counts.


 Here is the transfer function between CARM ALS (CARM_IN1) and REFL11 coming through the CM board (CARM_B), just before we transitioned over. Coherence was taken simultaneously as usual; I just printed it to another sheet.

CARM_3Feb2015_CarmBwasCMslow_CarmAwasLiveALS.pdf

CARM_3Feb2015_CarmBwasCMslow_CarmAwasLiveALS_Coh.pdf


Here is the lockloss plot for the very last lockloss.  This is the time that we were sitting on REFL11 coming through the CM_SLOW path.  A DTT transfer function measurement was in progress (you can see the sine wave in the CARM input and output data), but I think we actually lost lock due to whatever this glitch was near the right side of the plots.  This isn't something that I've seen in our lockloss plots before.  I'm not sure if it's coming from REFL11, or the CM board, or something else.  We know that the CM board gives glitches when we are changing gain settings, but that was not happening at this time.


Q: Here's the SR785 TF of CARM locked through the CM board, but still only digital control; nothing exciting. Excitation amplitude was only 1 mV, which explains the noisy profile.

Attachment 1: CARM_3Feb2015_CarmBwasCMslow_CarmAwasLiveALS.pdf
Attachment 2: CARM_3Feb2015_CarmBwasCMslow_CarmAwasLiveALS_Coh.pdf
Attachment 3: Glitch_in_CARM_and_PRCL_3Feb2015.png
Attachment 4: slowCM_04-02-2015_042805.png
  10979   Thu Feb 5 04:35:14 2015   diego | Update | LSC | CARM Transition to REFL11 using CM_SLOW Path

[Diego, Eric]

Tonight was a sad night... We continued to pursue our strategy, but with very poor results:

  • before doing anything, we made sure we had a good initial configuration: we renormalized the arm powers, retuned the X arm green beatnote, did extensive ASS alignment;
  • since the beginning of the night we faced a very uncooperative PRMI, which caused a huge number of locklosses, often just by itself, before we even managed to reduce the MICH offset, let alone the CARM one;
  • we had to reduce the PRCL gain to -0.002 in order to acquire PRMI lock, but neither keeping it there nor restoring it to -0.004 once lock was acquired improved the PRMI stability at all;
  • we also tweaked the PRCL and MICH UGF servos a bit (namely, their frequencies to ~80 Hz and ~40 Hz respectively), and that seemed to help earlier in the night, but not for much longer;
  • we only managed to transition CARM to REFL11 via CM SLOW twice;
    • the first time we lost lock almost immediately, probably because of a non-optimal offset between CARM A and B;
    • the second time we managed to stay there a little longer, but then some spike in the PRCL loop and/or the MICH loop hitting the rails threw us out of lock (see the lockloss plot);
    • both times we transitioned at arm power ~18;
  • during the night we used an increased analog ASDC whitening gain, as per Eric's elog at http://nodus.ligo.caltech.edu:8080/40m/10972 ; even with this fix, though, MICH is still often hitting the rails and causing the lock losses;
  • the conclusion for tonight is that we need to figure out what is going on with the PRMI...

 

Attachment 1: 4Feb2015_Transition_CARM_REFL11_CM_SLOW_AP_18.png
  10982   Fri Feb 6 03:21:17 2015   diego | Update | LSC | CARM Transition to REFL11 using CM_SLOW Path

[Diego, Jenne]

We kept struggling with the PRMI, although it was a little better than yesterday:

  • we retuned the X Green beatnote;
  • we managed to reach lower CARM offsets than last night, but we still can't keep lock long enough to perform a smooth transition to CM SLOW/REFL11;
  • we tweaked MICH a bit:
    • the ELP in FM8 is now always on, because it seems to help;
    • we tried using a new FM1 1,1:0,0 instead of FM2 1:0 because we felt we needed a little more gain at low frequencies, but unfortunately this didn't change MICH's behaviour much;
    • now, after catching PRMI lock, the MICH limiter is raised to 30k (in the script), as a possible solution for the railing problem; the down/relock scripts take care of resetting it to 10k while not locked/locking;

So, still no exciting news, but PRMI lock seems to be improving a little.

  10990   Mon Feb 9 17:23:17 2015   diego | Update | Computer Scripts / Programs | New laptops

I forgot to elog about these ones, my bad... The new/updated laptops are giada, viviana and paola; paola is already in the lab, while giada and viviana are in the control room waiting for a new home. The Pool of Names Wiki page has already been updated to reflect the changes.

  10991   Mon Feb 9 17:47:17 2015   diego | Update | LSC | CM servo & AO path status

I wrote the script with the recipe we used, using the Yarm and AS55 on the IN2 of the CM board; however, the steps where the offset should be reduced are not completely deterministic, as we saw that the initial offset (and, therefore, the following ones) could change because of different states we were in. In the script I tried to "servo" the offset using C1:LSC-POY11_I_MON as the reference, but in the comments I wrote the actual values we used during our best test; the main points of the recipe are:

  • misalign the Xarm and the recycling mirrors;
  • setting up CARM_B for POY11 locking and enabling it;
  • setting up CARM_A for CM_SLOW;
  • setting up the CM_SLOW filter bank, with only FM1 and FM4 enabled;
  • setting up the CARM filter bank: FM1 FM2 FM6 triggered, only FM3 and FM5 on; usual CARM gain = 0.006;
  • setting up CARM actuating on MC2;
  • turn off the violin filter FM6 for MC2;
  • setting up the default configuration for the Common Mode Servo and the Mode Cleaner Servo; along with all the initial parameters, here is where the initial offset is set;
  • turn on the CARM output and, then, enable LSC mode;
  • wait until usual POY11 lock is acquired and, a bit later, transition from CARM_B to CARM_A;
  • then, the actual CM_SLOW recipe:
    • CM_AO_GAIN = 6 dB;
    • SUS-MC2_LSC FM6 on (the 300:80 filter);
    • CM_REFL2_GAIN = 18 dB;
    • servo CM_REFL_OFFSET;
    • CM_AO_GAIN = 9 dB;
    • CM_REFL2_GAIN = 21 dB;
    • servo CM_REFL_OFFSET;
    • CM_REFL2_GAIN = 24 dB;
    • servo CM_REFL_OFFSET;
    • CM_REFL2_GAIN = 27 dB;
    • servo CM_REFL_OFFSET;
    • CM_REFL2_GAIN = 31 dB;
    • servo CM_REFL_OFFSET;
    • CM_AO_GAIN = 6 dB;
    • SUS-MC2_LSC FM7 on (the :300 compensating filter);

I tried the procedure and it seems fine, as it did during the tries Q and I made; however, since it touches many things in many places, one should be careful about which state the IFO is in before trying it.

The script is in scripts/CM/CM_Servo_OneArm_CARM_ON.py and in the SVN.
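The gain-stepping core of the recipe looks roughly like this (a sketch, not the actual CM_Servo_OneArm_CARM_ON.py; apart from C1:LSC-POY11_I_MON, the channel names and the offset "servo" gain are illustrative):

    import subprocess
    import time

    def caget(ch):
        return float(subprocess.check_output(["caget", "-t", ch]).decode())

    def caput(ch, val):
        subprocess.check_call(["caput", ch, str(val)])

    def servo_refl_offset(reference, gain=-0.1, steps=10):
        # nudge CM_REFL_OFFSET until POY11_I sits back at its reference
        for _ in range(steps):
            err = caget("C1:LSC-POY11_I_MON") - reference
            caput("C1:LSC-CM_REFL_OFFSET", caget("C1:LSC-CM_REFL_OFFSET") + gain * err)
            time.sleep(0.5)

    reference = caget("C1:LSC-POY11_I_MON")
    for refl2_gain in (18, 21, 24, 27, 31):
        caput("C1:LSC-CM_REFL2_GAIN", refl2_gain)
        servo_refl_offset(reference)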

 

  150   Fri Nov 30 20:13:57 2007   dmass | Summary | General | HeNe UniPhase Laser
Data for the Uniphase 1.9 mW HeNe laser (labeled: "051507 From ISCT-BS") SN: 1284131 Model: 1103P

I used the Photon Beamscanner to obtain all the data, then fit w(z) as shown on the plot with parameters w_0, z_R, and hidden parameter delta, where z = delta + x; z is the distance from the waist and x is the distance from the laser.
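Presumably the fit used the standard Gaussian-beam spot-size relation:

w(z) = w_0 \sqrt{1+\left(\frac{z}{z_R}\right)^2} \hspace{.5cm},\hspace{.5cm} z = \delta + x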

Copies of the MATLAB code used to fit/plot are attached in the .zip below.
Attachment 1: Matlabcode.zip
Attachment 2: UniPhaseWaist.jpg
  192   Sun Dec 16 16:52:40 2007   dmass | Update | Computers | Computer on the end Fixed
I had Mike Pedraza look at the computer on the end (tag c21256). It was running funny, and it turned out to be a bad HD.

I backed up the SURF files as attachments to their wiki entries. Nothing else seemed important, so the drive was (presumably) swapped and a clean copy of XP Pro was installed. The username/login is the standard one.

Also - that small corner of desk space is now clean, and it would be lovely if it stayed that way.
  4978   Fri Jul 15 19:00:18 2011   dmass | Metaphysics | elog | Crashes

The elog crashed a couple of times; I restarted it each time.

  15387   Tue Jun 9 15:02:56 2020   eHang | Update | BHD | Astigmatism and scattering plots

Using the updated AOIs for the LO path, (4.8, 47.9, 2.9, 4.5) deg for (LO1, LO2, LO3, LO4), we obtain the following results.

The first two plots are scattering plots for the t and s planes, respectively. Note that here we have changed to 0.5% fractional RoC error and 3 mm positional error. We have also changed the meaning of the colors: pink: MM > 0.98; olive: 0.95 < MM <= 0.98; grey: MM <= 0.95. It seems that both planes would benefit statistically if we made the LO3-LO4 distance longer by a few mm.

In the last plot we also consider how much we could compensate for the MM error; we have a window of a few mm to make both planes better than 0.95.

Attachment 1: LO_MM_t_scat_stock.pdf
Attachment 2: LO_MM_s_scat_stock.pdf
Attachment 3: LO_MM_adj_stock.pdf