ID Date Author Type Category Subject
  10844   Fri Dec 26 18:20:42 2014 | rana | Update | Computer Scripts / Programs | FSS Slow servo thresh change

Quote:

 

The plot shows the behaviour of the PSL-FSS_SLOWDC signal during the last week; the blue rectangle marks an approximate estimate of when the scripts were moved to megatron. Apart from the problems during Friday's big crash, and the work ongoing since yesterday, it seems that something is not working well. The scripts on megatron are actually running, but I'll try to have a look at it.

My guess was that the SLOW servo settings were not restored to the right values after the code moves / reboots. The ON threshold for the servo was set at +6 counts, and the trigger channel is MC TRANS. Since the ADC noise on that channel is ~50 counts, the threshold is exceeded by noise alone, so the servo believes the MC is locked and keeps pushing the laser temperature off in some direction even when the MC is unlocked.

I reset the threshold to +6666 counts (the aligned MC transmission is ~16000 for the  TEM00 mode) so that it only turns on when we're in a good locked state.

  10843   Fri Dec 26 17:45:21 2014 | Steve | Update | VAC | vac pressure rose to 1.3 mTorr

We ran out of N2 for the vacuum system. The pressure peaked at 1.3 mTorr with the MC locked. V1 did not close because the N2 pressure sensor failed.

We are back to vac normal. I will be here tomorrow to check on things.

Attachment 1: noN2.png
Attachment 2: backtoVacNormal.png
  10842   Wed Dec 24 08:25:05 2014 | rana | Configuration | IOO | notes on MC locking

 I've updated the scripts for the MC auto locking. Due to some permissions issues or general SVN messiness, most of the scripts in there were not saved anywhere and so I've overwritten what we had before. 

After all of the electronics changes from Monday/Tuesday, the lock acquisition had to be changed a lot. The MC seems to catch on higher-order modes (HOMs) more often, so I lowered a bunch of the gains so that it's less likely to hold the HOM locks.

A very nice feature of the Autolocker running on megatron is that the whole 'mcup' sequence now runs very fast and as soon as it catches the TEM00, it gets to the final state in less than 2 seconds.

I've also increased the amplitude of the MC2 tickle from 100 to 300 counts to move it through more fringes and to break the HOM locks more often. Using the 2009 MC2 Calibration of 6 nm/count, this is 1.8 microns-peak @ 0.03 Hz, which seems like a reasonable excitation.
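Spelled out, the quoted calibration gives \(300\ \mathrm{counts} \times 6\ \mathrm{nm/count} = 1800\ \mathrm{nm} \approx 1.8\ \mathrm{\mu m}\) peak, at the 0.03 Hz tickle frequency.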

Using this, the MC has relocked several times, so it's a good start. We'll have to work on tuning the settings to make things a little spicier as we move ahead.

 

That directory is still in a conflicted state and I leave it to Eric/Diego to figure out what's going on in there. Seems like more fallout from the nodus upgrade:

controls@chiara|MC > svn up

svn: REPORT of '/svn/!svn/vcc/default': Could not read chunk size: Secure connection truncated (https://nodus.ligo.caltech.edu:30889)

  10841   Tue Dec 23 20:50:39 2014 | rana, koji | Update | IOO | Seven transfer functions

Today we decided to continue to modify the TTFSS board.

The modified schematic can be found here: https://dcc.ligo.org/D1400426-v1 as part of the 40m electronics DCC Tree.

What we did

1) Modify input elliptic filter (L1, C3, C4, C5) to give zero and pole at 30 kHz and 300 kHz, respectively. L1 was replaced with a 1 kOhm resistor.  C3 was replaced with 5600 pF. C4 and C5 were removed. So the expected locations of the zero and pole were at 28.4 kHz and 256 kHz, respectively (a quick check of these corner frequencies is given after item 4 below). This lead filter replaces the Pomona box, and does so without causing the terrible resonance around 1 MHz.

2) Removed the notch filters for the PC and fast path. This was done by removing L2, L3, and C52.

At this point we tested the MC locking and measured the transfer function. We successfully turned the UGF up to 170 kHz with two super-boosts on.

3) Now a peak at 1.7MHz was visible and probably causing noise. We decided to revert L2 and adjusted C50 to tune the notch filter in the PC path to suppress this possible PC resonance. Again the TF was measured. We confirmed that the peak at 1.7MHz is at -7dB and not causing an oscillation. The suppression of the peak is limited by the Q of the notch. Since it's in a weird feedback loop, we're not sure how to make it deeper at the moment.

4) The connection from the MC board output now goes in through the switchable Test1 input, rather than the fixed 'IN1'. The high frequency gain of this input is now ~4x higher than it was. I'm not sure that the AD829 in the MC board can drive such a small load (125 Ohms + the ~20 Ohms ON resistance of the MAX333A) very well, so perhaps we ought to up the output resistor to ~100-200 Ohms?
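A quick check of the corner frequencies quoted in item 1 (the pole estimate assumes the filter is loaded by the ~125 Ohm input resistance mentioned in item 4; that pairing is my inference, not something stated above):

\( f_z = \frac{1}{2\pi R C} = \frac{1}{2\pi \cdot 1\,\mathrm{k\Omega} \cdot 5600\,\mathrm{pF}} \approx 28.4\ \mathrm{kHz}, \qquad f_p \approx \frac{1}{2\pi\,(1\,\mathrm{k\Omega} \parallel 125\,\Omega) \cdot 5600\,\mathrm{pF}} \approx 256\ \mathrm{kHz} \)

Both agree with the 28.4 kHz / 256 kHz values quoted above.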


Also, we modified the MC Servo board: mainly changing the corner frequencies of the Super Boost stages, plus some general cleanup and photo taking. I lost the connecting cable from the CM to the AO input (it was unlabeled).

  1.  The first two Super Boost stages were changed from 20k:1k to 10k:500 to give us back some phase margin and keep the same low freq gain (a sketch of the stage transfer function is given after this list). I don't really know what the gain requirement is for this servo here at the 40m. The poles and zeros were chosen for iLIGO so as to have the frequency noise be 10x less than the SRD at 7 kHz.
  2. The third Super Boost (which we never used) was changed from 10k:500 to ~3k:150 (?) just in case we want a little more low freq gain.
  3. There was some purple vestigial wiring on the back side of the board with a flying resistor; I think this was a way to put a DC offset into the output of the board, but it's not needed anymore, so I removed it.
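For reference, a minimal sketch of one Super Boost stage, assuming the "zero : pole" reading of the 10k:500 shorthand (my reading, not taken from the schematic):

\( H(s) = \frac{s + 2\pi f_z}{s + 2\pi f_p}, \qquad \text{low-frequency gain} = \frac{f_z}{f_p} = \frac{10\,\mathrm{kHz}}{0.5\,\mathrm{kHz}} = \frac{20\,\mathrm{kHz}}{1\,\mathrm{kHz}} = 20 \ (\approx 26\ \mathrm{dB}) \)

so the low-frequency boost is unchanged, while moving both corners down by 2x roughly halves the residual phase lag each stage leaves at the ~170 kHz UGF (roughly 3 degrees instead of 6 per stage).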

 

Attachment 1: MC_OLTF.pdf
Attachment 2: MC_OLTF2.pdf
Attachment 3: matlab.zip
  10840   Tue Dec 23 18:43:33 2014 | diego | Update | Computer Scripts / Programs | FSS Slow servo moved to megatron

Quote:

I ssh'd in, and was able to run each script manually successfully. I ran the initctl commands, and they started up fine too. 

We've seen this kind of behavior before, generally after reboots; see ELOGS 10247 and 10572

The plot shows the behaviour of the PSL-FSS_SLOWDC signal during the last week; the blue rectangle marks an approximate estimate of when the scripts were moved to megatron. Apart from the problems during Friday's big crash, and the work ongoing since yesterday, it seems that something is not working well. The scripts on megatron are actually running, but I'll try to have a look at it.

  10839   Tue Dec 23 16:49:32 2014 | ericq | Update | elog | Strange ELOG search

 So, despite having registered users, it turns out that the "Author" field is still open for editing when making posts. I.e. we don't really need to make new accounts for everyone. 

Thus, I've made a user named "elog" with the old write password that can write to all ELOGs. 

(Also, I've added a user called "jamie")

  10838   Tue Dec 23 15:37:32 2014 | Steve | Update | VAC | TP3 drypump replaced

Quote:

Quote:

 

 TP2's fore line - dry pump replaced at performance level 600 mTorr after 10,377 hrs of continuous operation.

Where are the foreline pressure gauges? These values are not on the vac.medm screen.

The new tip seal dry pump lowered the small turbo foreline pressure 10x

TP2 foreline after 2 days of pumping: 65 mTorr

 TP2 dry pump replaced at fore pump pressure 1 Torr,  TP2 50K_rpm 0.34A

 Tip seal life: 6,362 hrs

 New seal performance at 1 hr  36 mTorr, 

 Maglev at 560 Hz, cc1 6e-6 Torr

 

 TP3 dry pump replaced at 540 mTorr, with TP3 at 50K rpm, 0.3 A, under annulus load. Its tip seal lifetime was 11,252 hrs.

 

 

  10837   Tue Dec 23 14:33:24 2014 | Steve | Update | VAC | Chiara gets UPS

Quote:

Quote:

We had an unexpected power shutdown for 5 sec at ~ 9:15 AM.

Chiara had to be powered up and am in the process of getting everything else back up again.

Steve checked the vacuum and everything looks fine with the vacuum system.

PSL Innolight laser and the 3 units of IFO air conditions turned on.

The vacuum system's reaction to losing power: V1 closed and the Maglev shut down. The Maglev runs on 220 VAC, so it is not connected to the VAC-UPS. The V1 interlock was triggered by the Maglev "failure" message.

The Maglev was reset and started. After Chiara was turned on manually, I could bring up the vac control screen through Nodus and open V1.

"Vacuum Normal" valve configuration was recovered instantly.

 

Chiara needs UPS 

It is arriving Thursday

 EricQ and Steve,

Steve preset the vacuum system for safe-reboot mode with C1vac1 and C1vac2 running normally: closed valves as shown, stopped the Maglev, and disconnected V1 plus the valves shown with "moving" labels.

(A valve's position indicator changes to "moving" when its cable is disconnected.)

Eric shut down Chiara, installed APC's UPS Pro 1000 and restarted it.

All went well. Nothing unexpected happened. So we can conclude that the vacuum system, with C1vac1 and C1vac2 running, is not affected by Chiara losing AC power.

Attachment 1: prepUP.png
  10836   Tue Dec 23 14:30:11 2014 | ericq | Update | CDS | Chiara moved to UPS

Quote:

 Steve and I switched chiara over to the UPS we bought for it, after ensuring the vacuum system was in a safe state. Everything went without a hitch. 

Also, Diego and I have been working on getting some of the new computers up and running. Zita (the striptool projecting machine) has been replaced. One ThinkPad laptop is missing an HD and battery, but the other one is fine. Diego has been working on a Dell laptop, too. I was having problems editing the MAC address rules on the martian wifi router, but the working ThinkPad's MAC was already listed.

 Turns out that, as the martian wifi router is quite old, it doesn't like Chrome; using Firefox worked like a charm, and now giada (the Dell laptop) is also on 40MARS.

  10835   Tue Dec 23 14:27:16 2014 | ericq | Update | CDS | Chiara moved to UPS

 Steve and I switched chiara over to the UPS we bought for it, after ensuring the vacuum system was in a safe state. Everything went without a hitch. 

Also, Diego and I have been working on getting some of the new computers up and running. Zita (the striptool projecting machine) has been replaced. One ThinkPad laptop is missing an HD and battery, but the other one is fine. Diego has been working on a Dell laptop, too. I was having problems editing the MAC address rules on the martian wifi router, but the working ThinkPad's MAC was already listed.

  10834   Tue Dec 23 13:18:37 2014 | manasa | Update | General | X end AUX laser fiber setup

Quote:

 

Since we will not be doing any major locking, I am taking this chance to move things on the X end table and install the fiber coupler.

The first steering mirror shown in the earlier elog will be a Y1 (HR mirror) and the second one will be a beam sampler (similar to the one installed at the Y endtable for the fiber setup). 

Configuration:

Doubler --> Y1 --> Lens (f=12.5cm) --> Beam sampler --> Fiber coupler

The fiber coupler mount will be installed in the green region to the right of the TRX camera. 

This work will involve moving the TRX camera and the optic that brings the trans image onto it.

Let me know if this work should not be done tomorrow morning for any reason.

I was working around the X endtable and PSL table today.

1. Y1 mirror, beam sampler and the fiber coupler have been installed.

2. Removed TRX camera temporarily. The camera will be put back on the table once we have the filter for 532nm that can go with it.

3. Removed an old fiber mount that was not being used from the table. 

4. Lowered the current for X end NPRO while working and put it back up at 2A before closing.

5. The fibers running from the X end to the PSL table are connected at an FC/APC connector on the PSL table. 

6. Found the HEPA left on high (probably from yesterday's work around the PSL table). I have brought it back down and left it that way.

I have not installed the coupling lens yet owing to space restrictions - there is not enough room for the footprint of the lens. I have to revisit the telescope design.

  10833   Tue Dec 23 01:55:35 2014 | rana, koji | Update | IOO | Seven transfer functions

Some TFs of the TTFSS box

Attachment 1: MC_FSS_TF.pdf
  10832   Mon Dec 22 21:53:08 2014 | rana, koji | Update | IOO | Seven transfer functions

Today we were looking at the MC TFs and pulled out the FSS box to measure it. We took photos and removed a capacitor with only one leg.

Still, we were unable to see the weird, flat TF from 0.1-1 MHz and the bump around 1 MHz. It's not in the FSS box or the IMC servo card. So we looked around for a rogue Pomona box and found one sneakily located between the IMC and FSS box, underneath some cables next to the Thorlabs HV driver for the NPRO.

It was meant to be a 14k:140k lead filter (with a high frequency gain of unity) to give us more phase margin (see elog 4366; it's been there for 3.5 years).

From the comparison below, you can see what the effect of the filter was. Neither the red nor purple TFs are what we want, but at least we've tracked down where the bump comes from. Now we have to figure out why and what to do about it.

* all of the stuff above ~1-2 MHz seems to be some kind of pickup.

** notice how the elog is able to make thumbnails of PDFs now that it's not on Solaris!

Attachment 1: MC_OLG.pdf
  10831   Mon Dec 22 17:06:14 2014 | manasa | Update | General | X end AUX laser fiber setup

Quote:

 I looked at the endtable for possible space to setup optics in order to couple the X end laser into a PM fiber.

Attached is the layout of where the setup will go and what are the existing stuff that will be moved.

ETMXtable.png

Since we will not be doing any major locking, I am taking this chance to move things on the X end table and install the fiber coupler.

The first steering mirror shown in the earlier elog will be a Y1 (HR mirror) and the second one will be a beam sampler (similar to the one installed at the Y endtable for the fiber setup). 

 

Configuration:

Doubler --> Y1 --> Lens (f=12.5cm) --> Beam sampler --> Fiber coupler

The fiber coupler mount will be installed in the green region to the right of the TRX camera. 

This work will involve moving the TRX camera and the optic that brings the trans image onto it.

Let me know if this work should not be done tomorrow morning for any reason.

  10830   Mon Dec 22 16:21:15 2014 | ericq | Update | elog | Strange ELOG search

In order to fix ELOG search, I have started running ELOG v2.9.2 on Nodus.

Sadly, due to changes in the software, we can no longer use one global write password. Instead, we must now operate with registered users.

Based on recent elog users, I'll be creating user accounts with the following names, using the same old ELOG write password. (These will be valid across all logbooks)

  • ericq
  • rana
  • koji
  • diego
  • jenne
  • manasa
  • Steve
  • Kate
  • Zach
  • Evan
  • Aidan
  • Chris
  • Dmass
  • nicolas
  • Gabriele
  • xiaoyue

All of these users will be "Admins" as well, meaning they can add new users and change settings, using the "Config" link.

Let me know if I neglected to add someone, and sorry for the inconvenience.

RXA: What Eric means to say is that "upgrading" from Solaris to Linux broke the search and made us get new elog software that's worse than what we had.

  10829   Mon Dec 22 15:46:58 2014 | Kurosawa | Summary | IOO | Seven transfer functions

The IMC OL TF has been measured from 10 kHz to 10 MHz.

Attachment 1: MC_OLTF.pdf
  10828   Mon Dec 22 15:11:08 2014 | rana | Update | IOO | MC Error Spectra

How sad.

What we want is to have the high and low noise spectra on the same plot. The high noise one should be triggered by a high PC DRIVE signal.

  10827   Mon Dec 22 13:34:34 2014 | Koji | Update | elog | Strange ELOG search

I tried to find my own entry and was faced with a strange behavior of the elog.

The search button invoked the following link, and no real search was done:

http://nodus.ligo.caltech.edu:8080/40m/?mode=summvry&reverse=0&reverse=1&npp=50&m&y&Authorthor=Koji

Summvry? Authorthor?

If I ran the following link, it returned a correct search. So something must be wrong.

http://nodus.ligo.caltech.edu:8080/40m/?mode=summary&npp=50&Author=Koji

  10826   Sun Dec 21 18:46:06 2014 | diego | Update | IOO | MC Error Spectra

The error spectra I have taken so far are not that informative, I'm afraid. The first three posted here refer to Wed 17 in the afternoon, when things were quiet, the LSC control was off, and the MC was reliably locked. The last two plots refer to Wed night, while Q and I were doing some locking work; in particular, they were taken just after one of the locklosses described in elog 10814. Sadly, they aren't much different from the "quiet" ones.

I can add some considerations though: Q and I saw some weird effects that night on a live reading of these spectra, which unfortunately could not be saved. The effects appeared and disappeared quickly, so they were difficult to capture with the snapshot measurement, which is the only mode that can save the data as of now. They were certainly seen during the locklosses, but sometimes also in normal circumstances. What we saw was a broad peak in the range 5e4-1e5 Hz with a peak value of ~1e-5 V/rtHz, just after the main peak shown in the attached spectra.

Attachment 1: SPAG4395_17-12-2014_170951.pdf
Attachment 2: SPAG4395_17-12-2014_172846.pdf
Attachment 3: SPAG4395_17-12-2014_175147.pdf
Attachment 4: SPAG4395_18-12-2014_003414.pdf
Attachment 5: SPAG4395_18-12-2014_003506.pdf
  10825   Sat Dec 20 00:00:03 2014 | ericq | Update | Computer Scripts / Programs | FSS Slow servo moved to megatron

I ssh'd in, and was able to run each script manually successfully. I ran the initctl commands, and they started up fine too. 

We've seen this kind of behavior before, generally after reboots; see ELOGS 10247 and 10572

  10824   Fri Dec 19 20:44:23 2014 | Jenne | Update | Computer Scripts / Programs | FSS Slow servo moved to megatron

Today Q moved the FSS slow servo over to some init thing on megatron, and some time ago he did the same thing to the MC auto locker script.  It isn't working though.

Even though megatron was rebooted, neither script started up automatically.  As Diego mentioned in elog 10823, we ran sudo initctl start MCautolocker and sudo initctl start FSSslow, and the blinky lights for both of the scripts started.  However, that seems to be the only thing that the scripts are doing.  The MC auto locker is not detecting locklosses, and is not resetting things to allow the MC to relock.  The MC is happy to lock if I do it by hand though.  Similarly, the blinky light for the FSS is on, but the PSL temperature is moving a lot faster than normal.  I expect that it will hit one of the rails in under an hour or so.

The MC autolocker and the FSS loop were both running earlier today, so maybe Q had some magic that he used when he started them up, that he didn't include in the elog instructions?

  10823   Fri Dec 19 20:32:11 2014 | diego | Update | CDS | SOS!!! HELP!! EPICS freeze 45min+ so far!

[Diego, Jenne]

 

Everything seems reasonably back to normal:

Notes:

  • the machines in the control room have been rebooted;
  • the c1iscey frontend now behaves;
  • I saw on nodus, which remained up and running the whole time, a bunch of "nfs: server chiara is not responding, timed out" messages from the freeze period; it may be that the sync option for the NFS share is too resource-demanding, or there may be some other network issue;
  • the FSS was doing strange stuff and the MC couldn't recover the lock; the MCautolocker script wasn't running because of the MC lock loss and the lack of communication between the machines, so we ran sudo initctl start MCautolocker on megatron and recovered the MC too.
  10822   Fri Dec 19 19:21:04 2014 | diego | Update | CDS | SOS!!! HELP!! EPICS freeze 45min+ so far!

Quote:

[Jenne, Diego]

The EPICS freeze that we had noticed a few weeks ago (and several times since) has happened again, but this time it has not come back on its own.  It has been down for almost an hour so far. 

 So far, we have reset the Martian network's switch that is in the rack by the printer.  We have also power cycled the NAT router.  We have moved the NAT router from the old GC network switch to the new faster switch, and reset the Martian network's switch again after that.

We have reset the network switch that is in 1X6.

We have reset what we think is the DAQ network switch at the very top of 1X7.

So far, nothing is working.  EPICS is still frozen, we can't ping any computers from the control room, and new terminal windows won't give you the prompt (so perhaps we aren't able to mount the nfs, which is required for the bashrc).

We need help please!

[EricQ]

 

EricQ suggested it may be some NFS-related issue: if something, maybe some computer in the control room, is asking too much of chiara, then all the other machines accessing chiara will slow down, and this could escalate and lead to the Big Bad Freeze. As a matter of fact, chiara's dmesg showed its eth0 interface being brought up constantly, as if something were making it go down repeatedly. Anyhow, after shutting down all the computers in the control room, we rebooted chiara, megatron, and the fb.

 

[Diego]

Then I rebooted pianosa, and most of the issues seem gone so far; I had to "mxstream restart" all the frontends from medm, and every one of them but c1scy seems to behave properly. I will now bring the other machines back to life and see what happens next.

  10821   Fri Dec 19 18:08:46 2014 | Jenne | Update | CDS | SOS!!! HELP!! EPICS freeze 45min+ so far!

[Jenne, Diego]

The EPICS freeze that we had noticed a few weeks ago (and several times since) has happened again, but this time it has not come back on its own.  It has been down for almost an hour so far. 

 So far, we have reset the Martian network's switch that is in the rack by the printer.  We have also power cycled the NAT router.  We have moved the NAT router from the old GC network switch to the new faster switch, and reset the Martian network's switch again after that.

We have reset the network switch that is in 1X6.

We have reset what we think is the DAQ network switch at the very top of 1X7.

So far, nothing is working.  EPICS is still frozen, we can't ping any computers from the control room, and new terminal windows won't give you the prompt (so perhaps we aren't able to mount the nfs, which is required for the bashrc).

We need help please!

  10820   Fri Dec 19 16:59:32 2014 | ericq | Update | Computer Scripts / Programs | FSS Slow servo moved to megatron

Given that op340m showed some undesired behavior, and that the FSS slow seems prone to railing lately, I've moved the FSS slow servo job over to megatron in the same way I did for the MC autolocker. 

Namely, there is an upstart configuration (megatron:/etc/init/FSSslow.conf) that invokes the slow servo. The log file is in the same old place (/cvs/cds/caltech/logs/scripts), and the servo can be (re)started by running:

controls@megatron|~ > sudo initctl start FSSslow

Maybe this won't really change the behavior. We'll see
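For reference, a minimal sketch of what an upstart job file like this might contain; the stanzas and the servo script name/path below are assumptions for illustration, not a copy of the actual FSSslow.conf on megatron:

# /etc/init/FSSslow.conf  (sketch only; actual contents may differ)
description "FSS slow servo"

start on runlevel [2345]        # start at boot
stop on runlevel [!2345]
respawn                         # restart the servo if it exits

# hypothetical script name/location; the log path is the one quoted above
exec /bin/bash /opt/rtcds/caltech/c1/scripts/PSL/FSS/FSSSlowServo >> /cvs/cds/caltech/logs/scripts/FSSslow.log 2>&1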

  10819   Fri Dec 19 16:39:25 2014 | ericq | Update | Computer Scripts / Programs | elog autostart

I've set up nodus to start the ELOG on boot, through /etc/init/elog.conf. Also, thanks to this, we don't need to use the start-elog.csh script any more. We can now just do:

controls@nodus:~  $ sudo initctl restart elog

I also tweaked some of the ELOG settings, so that image thumbnails are produced at higher resolution and quality. 

  10818   Fri Dec 19 15:59:49 2014 | Jenne | Update | LSC | Lockloss from Wed

I swapped out one of the channels on Q's lockloss plotter - we don't need POP22Q, but I do want the PC drive.  

So, we still need to look into why the PC drive goes crazy, and if it is related to the buildup in the arms or just something intrinsic in the current FSS setup, but it looks like that was the cause of the lockloss that Q and Diego had on Wednesday.

1102929772_PCdriveRailed.png

  10817   Fri Dec 19 14:25:48 2014 | diego | Update | Computer Scripts / Programs | elog restarted

The elog was not responding for unknown reasons, even though the elogd process on nodus was alive; anyway, I restarted it.

  10816   Thu Dec 18 16:21:08 2014 | ericq | Update | Computer Scripts / Programs | scripts not being backed up!

 I just stumbled upon this while poking around:

Since the great crash of June 2014, the scripts backup script has not been working on op340m. For some reason, it's only grabbing the PRFPMI folder, and nothing else.

Megatron seems to be able to run it. I've moved the job to megatron's crontab for now.

  10815   Thu Dec 18 15:41:30 2014 | ericq | Update | Computer Scripts / Programs | Offsite backups of /cvs/cds going again

Since the Nodus switch, the offsite backup scripts (scripts/backup/rsync.backup) had not been running successfully. I tracked it down to the weird NFS file ownership issues we've been seeing since making Chiara the fileserver. Since the backup script uses rsync's "archive" mode, which preserves ownership, permissions, modification dates, etc., not seeing the proper ownership made everything wacky.

Despite 99% of the searches you do about this problem saying you just need to match your user's uid and gid on the NFS client and server, it turns out NFSv4 doesn't use this mechanism at all, opting instead for some ID mapping service (idmapd), which I have no inclination to figure out at this time.

Thus, I've configured /etc/fstab on Nodus (and the control room machines) to use NFSv3 when mounting /cvs/cds. Now, all the file ownerships show up correctly, and the offsite backup of /cvs/cds is churning along happily. 
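For reference, the change amounts to pinning the NFS version in the mount options; the /etc/fstab line would look something like this (the chiara export path shown is a guess for illustration, not copied from the machine):

chiara:/home/cds    /cvs/cds    nfs    rw,bg,soft,intr,nfsvers=3    0 0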

  10814   Thu Dec 18 02:23:34 2014 | ericq | Update | LSC | Locking update

 [ericq, Diego]

Some locking efforts tonight; many locklosses due to PRC angular motion. Furthest progress was arm powers of 15, and I've stared at the corresponding lockloss plot, with little insight into what went wrong. (BTW, lastlock.sh seems to catch the lock loss reliably in the window)

15lockloss.png

CARM and DARM loops were measured not long before this lock loss, and had nominal UGFs (~120Hz, ~20deg PM). However, there was a reasonably clear 01 mode shape at the AS camera, which I did nothing to correct. Here's a spectrum from *just* before the lockloss, recovered via nds. Nothing stands out to me, other than a possible loss of DARM optical gain. (I believe the references are the error-signal spectra taken in the ALS-arms-held-away + PRMI-on-3F configuration.)

15spec.pdf

The shape in the DARM OLTF that we had previously observed and hypothesized as possible DARM optical spring was not ever observed tonight. I didn't induce a DARM offset to try and look for it either, though.

Looking into some of the times when I was measuring OLTFs, the AS55 signals do show coherence with the live DARM error signal at the excitation frequencies, but little to no coherence under 30Hz, which probably means we weren't close enough to swap DARM error signals yet. This arm power regime is where the AS55 sign flip has been modeled to be...


A fair amount of time was spent in pre-locking prep, including:

  • The usual X green beat alignment tweak, to make the X ALS control not terrible
  • Both ITM oplevs were centered
  • For some reason, I had to flip the signs of the REFL165 matrix elements for the PRMI...
  • "Restore PRMI SB" has ASC automatically enabled, which results in unexpected kicks even with LSC off, which caused a few minutes head-scratching
  10813   Wed Dec 17 19:31:55 2014 | Koji | Update | ASC | ASS retuned

I wonder what to do with the X arm.

The primary purpose of the ASS is to align the arm (=transmission), and the secondary purpose is to adjust the input pointing.

As the BS is the only steering actuator, we can't adjust two of the 8 DOFs.
In the old (my) topology, the spot position on ITMX was left unadjusted.

If my understanding of the latest configuration is correct, the alignment of the cavity (i.e. the matching of the input axis to the cavity axis)
is degraded in order to move the cavity axis to the center of the two test masses. This is not what we want, as it
degrades the power recycling gain.

  10812   Wed Dec 17 19:04:12 2014 | jenne | Update | ASC | ASS retuned

I made the Xarm follow the new (old) topology of Length -> test masses, and Trans -> input pointing.

It takes a really long time to converge (2+ min), since the input pointing loops actuate on the BS, which has an optical lever, which is slow.  So, everything has to be super duper slow for the input pointing to be fast relative to the test mass motion.

Also, between last night and this afternoon, I moved the green ASX stuff from a long list of ezca commands to a burt file, so turning it on is much faster now.  Also, I chose new frequencies to avoid intermodulation issues, set the lockin demodulation phases, and tuned all 4 loops.  So, now the green ASX should work for all 4 mirrors, no hand tuning required.  While I was working on it, I also removed the band pass filters, and made the low pass filters the same as we are using for the IR ASS.  The servos converge in about 30 seconds.

  10811   Wed Dec 17 18:14:36 2014 | ericq | Update | ASC | Transmon QPD -> ASC servos ready for commissioning

 I have completed all of the model modifications and medm screen updates to allow for feedback from the transmon QPD pitch and yaw signals to the ITMs. Now, we can design and test actual loops...

newASCscreen.png

The signals come from c1sc[x/y] to c1rfm via RFM, and then go to c1ass via dolphin. 


Out of curiosity about the RFM+dolphin delay, I took a TF of an excitation at the end SUS model (C1:SUS-ETM[X/Y]_QPD_[PIT/YAW]_EXC) to the input FM in the ASC model (C1:ASC-ETM[X/Y]_QPD_[PIT/YAW]_IN1). All four signals exhibit the same delay of 122usec. I saved the dtt file in Templates/ASC/transmonQPDdelay.xml

This is less than a degree under 20Hz, so we don't have to worry about it. 
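As a check of that statement: a pure delay of tau = 122 us gives a phase lag of \( \Delta\phi = 360^\circ \cdot f\,\tau = 360^\circ \times 20\ \mathrm{Hz} \times 122\ \mathrm{\mu s} \approx 0.9^\circ \) at 20 Hz.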

  10810   Wed Dec 17 14:42:13 2014 | Jenne | Update | LSC | PRMI loops need help

EricQ's crazy people filter has been deleted.  I'm trying to lock right now, to see if all is well in the world.

  10809   Wed Dec 17 13:14:38 2014 | ericq | Update | LSC | PRMI loops need help

Quote:

However, the PRMI would not acquire lock with the arms held off resonance. 

This is entirely my fault. 

Last week, while doing some stuff with PRY, I put this filter in SUS_PRM_LSC, to stop some saturations from high-frequency sensing noise:

oops.pdf

After the discussion at today's meeting, it struck me that I might have left it on. Turns out I did. 

A 20 degree phase lag at 200 Hz can explain the instability, and some non-flat shape at a few hundred Hz explains the non-1/f shape.

Sorry about all that...

  10808   Wed Dec 17 11:57:56 2014 | manasa | Summary | General | Y arm optical layout

I was working around the PSL table and Y endtable today.

I modified the Y arm optical layout that couples the 1064nm light leaking from the SHG crystal into the fiber for frequency offset locking.

The ND filter that was used to attenuate the power coupled into the fiber has been replaced with a beam sampler (Thorlabs BSF-10C). The reflected power after this optic is ~1.3mW, and the transmitted power has been dumped onto a razor blade beam dump (~210mW).

Since we have a spare fiber running from the Y end  to the PSL table, I installed an FC/APC fiber connector on the PSL table to connect them and monitored the output power at the Y end itself. After setting up, I have ~620uW of Y arm light on the PSL table (~48% coupling).

During the course of the alignment, I lowered the power of the Y end NPRO and disengaged the ETMY oplev. These were reset after I closed the end table.

Attached is the out of loop noise measurement of the Y arm ALS error signal before (ref plots) and after.

 

Attachment 1: 58.png
  10807   Wed Dec 17 01:51:44 2014 | rana, jenne | Update | ASC | ASS retuned

Did a big reconfig to make the Y-arm work again since it was bad again.

  1. Undid Koji's topology change. The A2L loops now feedback to the arm mirrors to adjust the cavity axis. The cavity transmission signals now feedback to the input beam.
  2. The UGF of the Trans->Input beam servos is ~5-10x higher than the A2L servos.
  3. The Trans loops have a ~10-15 s settling time.
  4. The Input Matrix has been adjusted to fit with our intuition: the ETM tilt moves the beam equally on the ITM and ETM faces.
  5. The Output Matrix has also been adjusted: we're using an intuitive matrix inverse rather than one based on measurement. It turns out to be a reasonable guess and we can tune this later.
  6. Seems stable with many kinds of steps and misalignments. Seems not reliable if the arm power is less than ~0.5.
  7. Reducing the dither amplitudes to make the power fluctuation less than 5% made it much more stable.

With the arm aligned and the A2L signals all zeroed, we centered the beam on QPDY (after freezing the ASS outputs). I saw the beam going to the QPD on an IR card, along with a host of green spots. Seems bad to have green beams hitting the QPD along with the IR, so we are asking Steve to buy a bunch of the broad, dielectric, bandpass filters from Thorlabs (FL1064-10), so that we can also be immune to the EXIT sign. I wonder if it's legal to make a baffle to block it on the bottom side?

P.S. Why is the Transmon QPD software different from the OL stuff? We should take the Kissel OL package and put it in place of our old OL junk as well as the Transmons.

Attachment 1: ASSconfig_141217_0205.png
  10806   Tue Dec 16 20:51:18 2014 | diego | Update | LSC | PRMI loops need help

Quote:

[...]

Diego is going to give us some spectra of the MC error point at various levels of Pockels cell drive.  Is it always the same frequencies that are popping up, or is it random?

 I found out that the Spectrum Analyzer gives bogus data... Since it is locking time now, I'll go tomorrow and figure out what is not working.

  10805   Tue Dec 16 20:49:25 2014 | diego | Update | Computer Scripts / Programs | Status of the new nodus

Quote:

Quote:

 Nodus (solaris) is dead, long live Nodus (ubuntu).

Diego and I are smoothing out the Kinks as they appear, but the ELOG is running smoothly on our new machine. 

SVN is working, but your checkouts may complain because they expect https, and we haven't turned SSL on yet...

 [Diego, EricQ]

SSL, https and backups are now working too!

A backup of nodus's configuration (with some explaining) will be done soon.

 Nodus should be visible again from outside the Caltech Network; I added some basic configuration for postfix and smartmontools; configuration files and instructions for everything are in the svn in the nodus_config folder

  10804   Tue Dec 16 03:43:09 2014 | Jenne | Update | LSC | PRMI loops need help

[Jenne, Rana, Diego]

After deciding that the Yend QPD situation was not significant enough to prevent us from locking tonight, we got started.  However, the PRMI would not acquire lock with the arms held off resonance. 

This started some PRMI investigations.

With no arms, we can lock the PRMI with both REFL55 I&Q or REFL165 I&Q.  We checked the demod phase for both Refl 55 and 165.  REFL55 did not need changing, but REFL165 was off significantly (which probably contributed to the difficulty in using it to acquire lock).  I didn't write down what REFL165 was, but it is now -3 degrees.  To set the phase (this is also how Rana checked the 55 phase), I put in an oscillation using the sensing matrix oscillators.  For both REFL165I and 165Q, I set the sensing matrix demod phases such that all of the signal was in the I phase (so I_I and Q_I, and basically zero in I_Q and Q_Q).  Then, I set the main PD demod phase so that the REFL165Q phase (the Q_I phase) was about zero.

Here are the recipes for PRMI-only, REFL55 and REFL165:

In both cases, the actuation was PRCL = 1*PRM and MICH = (0.5*BS - 0.2625*PRM).  Trigger thresholds for DoFs and FMs were always on POP22I: 10 up and 0.5 down.

REFL55, demod phase = 31deg.

MICH = 2*R55Q, gain = 2.4, trig FMs 2, 6, 8.

PRCL = 12*R55I, gain = -0.022, trig FMs 2,6,9.

REFL165, demod phase = -3deg.

MICH = -1*R165Q, gain = 2.4, trig FMs 2,6,8.

PRCL = 2.2*R165I, gain = -0.022, trig FMs 2,6,9.

These recipes assume Rana's new resonant gain filter for MICH's FM6, with only 2 resonant gains at 16 and 24 Hz instead of a whole mess of them (elog 10803).  Also, we have turned down the waiting time between the MICH loop locking and the filters coming on; it used to be a 5 second delay, but is now 2 sec.  We have been using various delays for the PRCL filters, between 0.2 s and 0.7 s, with no particular preference in the end.

We compared the PRCL loop with both PDs, and note that the REFL 165 error signal has slightly more phase lag, although we do not yet know why.  This means that if we only have a marginally stable PRCL loop for REFL55, we will not be stable with REFL165. Also, both loops have a non-1/f shape at a few hundred Hz.  This bump is still there even if all filters except the acquisition ones (FM4,5 for both MICH and PRCL) are turned off, and all of the violin filters are turned off.  I will try to model this to see where it comes from.

PRMI_55vs165_15Dec2014.pdf

To Do list:

Go back to the QPDY situation during the daytime, to see if tapping various parts of the board makes the noise worse.  Since it goes up to such high frequencies, it might not be just acoustic.  Also, it's got to be something common, like the power, since we see the same spectra in all 4 quadrants.

The ASS needs to be re-tuned. 

Rana was talking about perhaps opening up the ETMX chamber and wiggling the optic around in the wire.  Apparently it's not too unusual for the wire to get a bit twisted underneath, which creates a set of places that the optic likes to go to.

Diego is going to give us some spectra of the MC error point at various levels of Pockels cell drive.  Is it always the same frequencies that are popping up, or is it random?

  10803   Tue Dec 16 01:50:27 2014 | rana, diego | Frogs | LSC | MICH filter is nuts

 This is ridiculous.

How many RGs can I fit into one button???

Attachment 1: badMICHrg.pdf
  10802   Tue Dec 16 00:20:06 2014 | diego | Update | Optical Levers | BS & PRM OL realignment

[Rana, Diego]

We manually realigned the BS and PRM optical levers on the optical table.

  10801   Mon Dec 15 22:45:59 2014 | Jenne | Update | Electronics | Yend QPD modified

 

 [Jenne, Rana, Diego]

We did some tests on the modified QPD board for the Yend; we saw some weird oscillations at high frequencies, so we went and checked more closely, directly at the rack. The oscillations disappear when the cable from the QPD is disconnected, so it seems something is happening within the board itself; however, looking closely at the board with an oscilloscope in several locations, with the QPD cable connected or disconnected, there is nothing strange and definitely nothing that changes depending on whether the cable is connected or not. The plots show the usual channels we monitor, and the 64kHz original channels before they are downsampled.

Overall it doesn't seem to be a huge factor, as the RMS at high frequencies shows; maybe it is some random noise coming up, but this will be investigated further in the future.

Attachment 1: QPD_Ytrans_oscillating_15Dec2014.pdf
Attachment 2: QPD_IOPchannels_Ytrans_oscillating_15Dec2014.pdf
  10800   Mon Dec 15 22:40:09 2014 | rana | Summary | PSL | PMC restored

 Found that the PMC gain had been set to 5.3 dB instead of 10 dB since 9 AM this morning, with no elog entry.

SadToastFace.jpg

I also re-aligned the beam into the PMC to minimize the reflection. It was almost all in pitch.

  10799   Mon Dec 15 22:30:50 2014 | Jenne | Update | Electronics | Yend QPD modified

Details later - empty entry for a reply.

Short story - the Yend is now the same as the Xend, filters-wise and lack-of-gain-sliders-wise.  Both ends have 13.7k resistors around the AD620 to give them gains of ~4.5.
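That number is consistent with the AD620 datasheet gain equation (a quick check, not a measurement): \( G = 1 + \frac{49.4\ \mathrm{k\Omega}}{R_G} = 1 + \frac{49.4\ \mathrm{k\Omega}}{13.7\ \mathrm{k\Omega}} \approx 4.6 \).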

Xend seems fine.

Yend seems not fine.  Even the dark noise spectrum shows giganto peaks.  See Diego's elog 10801 for details on this investigation.

  10798   Mon Dec 15 16:27:57 2014 | diego | Update | Computer Scripts / Programs | Status of the new nodus

Quote:

 Nodus (solaris) is dead, long live Nodus (ubuntu).

Diego and I are smoothing out the Kinks as they appear, but the ELOG is running smoothly on our new machine. 

SVN is working, but your checkouts may complain because they expect https, and we haven't turned SSL on yet...

 [Diego, EricQ]

SSL, https and backups are now working too!

A backup of nodus's configuration (with some explaining) will be done soon.

  10797   Mon Dec 15 12:53:13 2014 | ericq | Update | Computer Scripts / Programs | Status of the new nodus

 Nodus (solaris) is dead, long live Nodus (ubuntu).

Diego and I are smoothing out the Kinks as they appear, but the ELOG is running smoothly on our new machine. 

SVN is working, but your checkouts may complain because they expect https, and we haven't turned SSL on yet...

  10796   Sat Dec 13 14:26:36 2014 | ericq | Update | LSC | Mismatched gains on ETMY Transmon QPD

Yesterday, we were seeing anomalously high low-frequency RIN in the y-arm (rms of 4% or so). I swung by the lab briefly to check this out. Turns out, despite TRY of 1.0, there was appreciable misalignment. Running the ASS with the excitation lowered by a factor of two and overall gain at 0.5 or so aligned things to TRY = 1.2, and the RIN is back down to ~0.5%. I reset the Thorlabs FM to make the power = 1.0.

I then went to center the transmitted beam on the transmon QPD. Looking at the quadrant counts as I moved the beam around, things looked odd, and I poked around a little... 

I strongly suspect that we have significantly mismatched gains for the different quadrants on the ETMY QPD. 

Reasoning: With the y-arm POY locked, I used a lens to focus down the TRY beam, to illuminate the quadrants individually. Quadrants 2 and 3 would go up to 3 counts, while 1 and 4 would go up to 0.3 and 0.6, respectively. (These counts are in some arbitrary units that were set by setting the sum to 1.0 when pitch and yaw claimed to be centered, but mismatched gains make that meaningless.)

I haven't looked more deeply into where the mismatch is occurring. The four individual whitening gain sliders did affect the signals, so the sliders don't seem sticky; however, I didn't check the actual change in gains. Will the latest round of whitening board modifications help this?

Hopefully, once this is resolved, the DC transmission signals will be much more reliable when locking...

  10795   Sat Dec 13 00:35:11 2014 | rana | Update | Electronics | Xend QPD whitening board modified

 

 16 bit. There aren't any 14-bit ADCs anywhere in LIGO. The aLIGO suspensions have 18-bit DACs.

The Y-End gains seem reasonable to me. I think that we only use TRX/Y as error signals once we have arm powers of >5 so we should consider if the SNR is good enough at that point; i.e. what would be the actual arm motion if we are limited only by the dark noise?

I seem to remember that the estimate for the ultimate arm power is ~200, considering that we have such high losses in the arms.
