ID   Date   Author   Type   Category   Subject
  10916   Fri Jan 16 20:37:52 2015   diego   Update   LSC   UGF servo now linear again

I found an error in the model of the UGF servos and have now corrected it. For future reference, the division of TEST2 by TEST1 is now properly done with complex math: given

TEST1 = a + ib \hspace{.5cm},\hspace{.5cm} TEST2 = c + id

we have

 

TEST3 = \frac{TEST2}{TEST1} = \left(\frac{ac+bd}{a^2+b^2}\right) + \left(\frac{ad-bc}{a^2+b^2} \right)i

 

TEST3 is the actual signal that is now phase rotated to select only the I signal while rejecting the Q one.
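For completeness, the same arithmetic in a few lines of Python (a minimal sketch for checking the math, not the actual front-end code; the numerical values of a, b, c, d are placeholders for the demodulated TEST1/TEST2 outputs defined above):

import numpy as np

a, b = 0.7, 0.2    # TEST1 = a + ib
c, d = 0.3, -0.5   # TEST2 = c + id

# Component-wise complex division, as now done in the model
den = a**2 + b**2
test3_re = (a*c + b*d) / den
test3_im = (a*d - b*c) / den

# Cross-check with native complex arithmetic
test3 = (c + 1j*d) / (a + 1j*b)
assert np.isclose(test3.real, test3_re) and np.isclose(test3.imag, test3_im)

# Phase rotation to put the signal of interest entirely in I while rejecting Q
phi = np.deg2rad(30.0)             # placeholder rotation angle
rotated = test3 * np.exp(-1j*phi)
I_sig, Q_sig = rotated.real, rotated.imag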

 

All the updates to the model, the screens and the script have been SVNed.

  5645   Mon Oct 10 16:32:18 2011   steve   Update   SUS   UL sensor of ETMY is recovered

I lost the UL OSEM voltage this morning while I was checking the actual connection at the ETMY rack.

This afternoon I disconnected the 64-pin IDE connector from the satellite amp at the rack, and the two 25-pin D-subs at this junction board.

The UL OSEM recovered after reconnecting these three connectors.

Atm3: bad connection... noisy UL

 

Attachment 1: ETMY_UL.png
Attachment 2: ETMY_OSEM_UL.png
Attachment 3: noisyETMY_UL.png
  2692   Mon Mar 22 02:03:57 2010   rana   Summary   Electronics   UPDH Box #17: Ready

It took too long to get this box ready for action. I implemented all of the changes that I made on the previous one (#1437). In addition, since this one is to be used for phase locking, I also made it have a ~flat transfer function. With the Boost ON, the TF magnitude will go up like 1/f below ~1 kHz.

The main trouble that I had was with the -12V regulator. The output noise level was ~500 nV/rHz, but there was a large oscillation at its output at ~65 kHz. This was showing up in the output noise spectrum of U1 (the first op-amp after the mixer). Since the PSRR of the OP27 is only ~40 dB at such a high frequency, it is not strange to see the power supply noise showing up (the input referred noise of the OP27 is 3.5 nV/rHz, so any PS noise above ~350 nV/rHz becomes relevant).
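For reference, the arithmetic behind that 350 nV/rHz threshold (using the quoted 40 dB PSRR and the 3.5 nV/rHz input-referred noise of the OP27):

v_{\mathrm{in,ref}} = v_{\mathrm{PS}} \cdot 10^{-\mathrm{PSRR}/20} = \frac{v_{\mathrm{PS}}}{100} \quad\Rightarrow\quad v_{\mathrm{PS}} \gtrsim 100 \times 3.5\,\mathrm{nV}/\sqrt{\mathrm{Hz}} = 350\,\mathrm{nV}/\sqrt{\mathrm{Hz}}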

I was able to tame this by putting a 10 uF tantalum cap on the output of the regulator. However, when I replaced the regulator with an LM7912 from the blue box, it showed an output noise that went up like 1/f below 50 kHz!! I replaced it a couple more times with no benefit. It seems that something on the board must now be damaged. I checked another of the UPDH boxes, and it has the same high frequency oscillation but not so much excess voltage noise. I found that removing the protection diode on the output of the regulator decreased the noise by a factor of ~2. I also tried replacing all of the 1 uF caps that are around the regulator. No luck.

Both of the +12 V regulators seem fine: normal noise levels of ~200 nV/rHz and no oscillations.

It's clear that the regulator is not functioning well, and my only guess is that it's a layout issue on the board or else there's a busted component somewhere that I can't find. In any case, it seems to be functioning now and can be used for the phase locking and PZT response measurements.

  2693   Mon Mar 22 10:07:30 2010   Koji   Summary   Electronics   UPDH Box #17: Ready

For your reference: Voltage noise of LM7815/LM7915 (with no load)

Attachment 1: 15V_power_supply.pdf
  1763   Mon Jul 20 10:35:06 2009   steve   Update   VAC   UPS batteries replaced

The APC Smart-UPS (uninterruptible power supply) batteries (RBC12) were replaced at the 1Y8 vacuum rack.

Their lifespan was 22 months.

  13234   Mon Aug 21 16:35:48 2017   gautam   Update   VAC   UPS checkup

[steve, gautam]

At Rolf/Rich Abbott's request, we performed a check of the UPS today.

Steve believed that the UPS was functioning as it should, and the recent accidental vent was because the UPS batteries were insufficiently charged when the test was performed. Today, we decided to try testing the UPS.

We first closed V1, VM1 and VA6 using the MEDM screen. We prepared to pull power on all these valves by loosening the power connections (but not detaching them). [During this process, I lost the screw holding the power cord fixed to the gate valve V1 - we are looking for a replacement right now but it seems to be an odd size. It is cable tied for now.]

The battery charge indicator LEDs on the UPS indicated that the batteries were fully charged.

Next, we hit the "Test" button on the UPS - it has to be held down for ~3 seconds for the test to be actually initiated, which seems to be a safety feature of the UPS. Once the test is underway, the LED indicators on the UPS will indicate that the load is on the UPS batteries. The test itself lasts for ~5 seconds, after which the UPS automatically reverts to the nominal configuration of supplying power from the main line (no additional user input is required).

In this test, one of the five battery charge indicator LEDs went off (5 ON LEDs indicate full charge).

So on the basis of this test, it would seem that the UPS is functioning as expected. It remains to be investigated if the various hardware/software interlocks in place will initiate the right sequence of valve closures when required.


Quote:
 

Never hit O on the Vacuum UPS!

Note: the "all off" configuration should be all valves closed! This should be fixed now.

In case of emergency you can close V1 by disconnecting its actuating power as shown in Atm3, if you have pneumatic pressure of 60 PSI.

 

  1858   Fri Aug 7 16:14:57 2009   rob   Omnistructure   VAC   UPS failed

Steve, Rana, Ben, Jenne, Alberto, Rob

 

UPS in the vacuum rack failed this afternoon, cutting off power to the vacuum control system.  After plugging all the stuff that had been plugged into the UPS into the wall, everything came back up.  It appears that V1 closed appropriately, TP1 spun down gracefully on its own battery, and the pressure did not rise much above 3e-6 torr. 

 

The UPS fizzed and smelled burnt.  Rana will order a new, bigger, better, faster one.

 

  1864   Fri Aug 7 19:34:40 2009   steve   Summary   VAC   UPS failed

The Maglev is running on single-phase 220V, and that voltage was not interrupted. TP1 was running undisturbed with V1 and V4 closed.

It is independent of the UPS 120V.

  1912   Sat Aug 15 18:57:48 2009   rana   Update   VAC   UPS failed

As Rob noted last Friday, the UPS which powers the Vacuum rack failed. When we were trying to move the plugs around to debug it, it made a sizzling sound and a pop. Bad smells came out of it.

Ben came over this week and measured the quiescent power consumption. The low power draw level was 11.9 A and during the reboot it's 12.2 A. He measured this by ??? (Rob inserts method here).

So what we want is a 120 V * 12.2 A  ~ 1.4 kVA UPS with ~30-50% margin. We look for this on the APC-UPS site:

On Monday, we will order the SUA2200 from APC. It should last for ~25 minutes during an outage. It's $1300. The next step down is $200 cheaper and gives 10 minutes less uptime.
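For the record, the sizing arithmetic behind that choice (using Ben's 12.2 A reboot figure and the 30-50% margin quoted above):

P = 120\,\mathrm{V} \times 12.2\,\mathrm{A} \approx 1.46\,\mathrm{kVA}, \qquad 1.46\,\mathrm{kVA} \times (1.3\ \mathrm{to}\ 1.5) \approx 1.9\ \mathrm{to}\ 2.2\,\mathrm{kVA}

which points at the 2.2 kVA SUA2200.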

  15721   Wed Dec 9 20:14:49 2020   gautam   Update   VAC   UPS failure

Summary:

  1. The (120V) UPS at the vacuum rack is faulty.
  2. The drypump backing TP2 is faulty.
  3. Current status of vacuum system: 
    • The old UPS is now powering the rack again. Some time ago, I noticed the "replace battery" indicator light on this unit was on. But it is no longer on. So I judged this is the best course of action. At least this UPS hasn't randomly failed before...
    • The main volume is being pumped by TP1, backed by TP3.
    • TP2 remains off.
    • The annular volumes are isolated for now while we figure out what's up with TP2.
    • The pressure went up to ~1 mtorr (cf. the nominal ~600 utorr with the stuck RV2) during the whole episode, but is coming back down now.
  4. Steve seems to have taken the reliability of the vacuum system with him.

Details:

Around 7pm, the UPS at the vacuum rack seems to have failed. Don't ask me why I decided to check the vacuum screen 10 mins after the failure happened, but the point is, this was a silent failure so the protocols need to be looked into.

Going to the rack, I saw (unsurprisingly) that the 120V UPS was off. 

  • Pushed the power on button - the LCD screen would briefly light up, say the line voltage was 120 V, and then turned itself off. Not great.
  • I traced the power connection to the UPS itself to a power strip under the rack - then I moved the plug from one port to another. Now the UPS stays on. okay...
  • but after ~3 mins while I'm hunting for a VGA cable, I hear an incessant beeping. The UPS display has the "Fault" indicator lit up. 
  • I decided to shift everything back to the old UPS. After the change was made, I was able to boot up the c1vac machine again, and began the recovery process.
  • When I tried to start TP2, the drypump was unusually noisy, and I noticed PTP2 bottomed out at ~500 torr (yes torr). So clearly something is not right here. This pump supposedly had its tip-seal replaced by Jordan just 3 months ago. This is not a normal lifetime for the tip seal - we need to investigate more in detail what's going on here...
  • Decided that an acceptable config is to pump the main volume (so that we can continue working on other parts of the IFO). The annuli are all <10mtorr and holding, so that's just fine I think.

Questions:

  1. Are the failures of TP2 drypump and UPS related? Or coincidence? Who is the chicken and who is the egg?
  2. What's up with the short tip seal lifetime?
  3. Why did all of this happen without any of our systems catching it and sending an alert??? I have left the UPS connected to the USB/ethernet interface in case anyone wants to remotely debug this.

For now, I think this is a safe state to leave the system in. Unless I hear otherwise, I will leave it so - I will be in the lab another hour tonight (~10pm).

Some photos and a screen-cap of the Vac medm screen attached.

Attachment 1: rackBeforenAfter.pdf
Attachment 2: IMG_0008.jpg
Attachment 3: IMG_0009.jpg
Attachment 4: vacStatus.png
  15724   Thu Dec 10 13:05:52 2020   Jon   Update   VAC   UPS failure

I've investigated the vacuum controls failure that occurred last night. Here's what I believe happened.

From looking at the system logs, it's clear that there was a sudden loss of power to the control computer (c1vac). Also, the system was actually down for several hours. The syslog shows normal EPICS channel writes (pressure readback updates, etc., and many of them per minute) which suddenly stop at 4:12 pm. There are no error or shutdown messages in the syslog or in the interlock log. The next activity is the normal start-up messaging at 7:39 pm. So this is all consistent with the UPS suddenly failing.

According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.

Preventing this in the future:

First, there are too many electronics on the 1 kVA UPS. The reason I asked us to buy a dual 208/120V UPS (which we did buy) is to relieve the smaller 120V UPS. I envision moving the turbo pumps, gauge controllers, etc. all to the 5 kVA unit and reserving the smaller 1 kVA unit for the c1vac computer and its peripherals. We now have the dual 208/120V UPS in hand. We should make it a priority to get that installed.

Second, there are 1 Hz "blinker" channels exposed for c1vac and all the slow controls machines, each reporting the machine's alive status. I don't think they're being monitored by any auto-notification program (running on a central machine), but they could be. Maybe there already exists code that could be co-opted for this purpose? There is an MEDM screen displaying the slow machine statuses at Sitemap > CDS > SLOW CONTROLS STATUS, pictured in Attachment 2. This is the only way I know to catch sudden failures of the control computer itself.
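As a sketch of what such a monitor could look like (the channel names below are hypothetical placeholders, and this assumes pyepics is installed on whichever central machine runs it):

import time
from epics import caget   # pyepics

# Hypothetical 1 Hz "blinker" channels, one per slow-controls machine
BLINKERS = {
    "c1vac":    "C1:VAC-BLINKER",
    "c1susaux": "C1:SUS-AUX_BLINKER",
}

def is_alive(pv, wait=3.0):
    # A machine is considered alive if its blinker value changes within `wait` seconds
    first = caget(pv)
    time.sleep(wait)
    return caget(pv) != first

while True:
    for host, pv in BLINKERS.items():
        if not is_alive(pv):
            print(f"ALERT: {host} blinker is frozen - hook up mail/SMS notification here")
    time.sleep(60)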

Attachment 1: TP2_time_history.png
Attachment 2: slow_controls_monitors.png
  15725   Thu Dec 10 14:29:26 2020   gautam   Update   VAC   UPS failure

I don't buy this story - P2 only briefly burped around GPStime 1291608000 which is around 8pm local time, which is when I was recovering the system.

Today, Jordan talked to Jon Feicht - apparently there is some kind of valve in the TP2 forepump, which only opens ~15-20 seconds after turning the pump on. So the loud sound I was hearing yesterday was just a transient phenomenon. So this morning at ~9am, we turned on TP2. Once again, the PTP2 pressure hovered around 500 torr for about 15-20 seconds. Then it started to drop, although both Jordan and I felt that the time it took for the pressure to drop in the range 5 mtorr - 1 mtorr was unusually long. Jordan suspects some "soft-start" feature of the turbo pumps, which maybe spins up the pump in a more controlled way than usual after an event like a power failure. Maybe that explains why the pressure dropped so slowly? One thing is for sure - the TP2 controller displayed "TOO HIGH LOAD" yesterday when I tried the first restart (before migrating everything to the older UPS unit). This is what led me to interpret the loud sound on startup of TP2 as indicating some issue with the forepump - as it turns out, this is just the internal valve not being opened.

Anyway, we left TP2 on for a few hours, pumping only on the little volume between it and V4, and PTP2 remained stable at 20 mtorr. So we judged it's okay to open V4. For today, we will leave the system with both TP2 and TP3 backing TP1. Given the lack of any real evidence of a failure from TP2, I have no reason to believe there is elevated risk.

As for prioritising UPS swap - my opinion is that it's better to just replace the batteries in the UPS that has worked for years. We can run a parallel reliability test of the new UPS and once it has demonstrated stability for some reasonable time (>4 months), we can do the swap.


I was able to clear the FAULT indicator on the new UPS by running a "self-test". Pressing and holding the "mute" button on the front panel initiates this test according to the manual and, if all is well, clears the FAULT indicator, which it did. I'm still not trusting this unit and have left all units powered by the old UPS.


Update 1100 Dec 11: The config remained stable overnight so today I reverted to the nominal config of TP3 pumping the annuli and TP2 backing TP1 which pumps the main volume (through the partially open RV2).

Quote:
 

According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.

Attachment 1: vacDiag1.png
  15722   Thu Dec 10 11:07:24 2020   Chub   Update   VAC   UPS fault

Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?

  15723   Thu Dec 10 11:17:50 2020   Chub   Update   VAC   UPS fault

I can't find anything in the manual that describes the nature of the FAULT message. In fact, it's not mentioned at all. If the unit detects a fault at its output, I would expect a bit more information. This unit also has a programmable level of input error protection, usually set at 100%. Still, there is no indication in the manual whether an input issue would be described as a fault; that usually means a short or lifted ground at the output.

Quote:

Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?

  15560   Sun Sep 6 13:15:44 2020   Jon   Update   DAQ   UPS for framebuilder

Now that the old APC Smart-UPS 2200 is no longer in use by the vacuum system, I looked into whether it can be repurposed for the framebuilder machine. Yes, it can. The max power consumption of the framebuilder (a SunFire X4600) is 1.137kW. With fresh batteries, I estimate this UPS can power the framebuilder for >10 min. and possibly as long as 30 min., depending on the exact load.

@Chub/Jordan, this UPS is ready to be moved to rack 1X6/1X7. It just has to be disconnected from the wall outlet. All of the equipment it was previously powering has been moved to the new UPS. I have ordered a replacement battery (APC #RBC43) which is scheduled to arrive 9/09-11.

  15537   Mon Aug 24 08:13:56 2020   Jon   Update   VAC   UPS installation

I'm in the lab this morning to interface the two new UPS units with the digital controls system. Will be out by lunchtime. The disruptions to the vac system should be very brief this time.

  15538   Mon Aug 24 11:25:07 2020   Jon   Update   VAC   UPS installation

I'm leaving the lab shortly. We're not ready to switch over the vac equipment to the new UPS units yet.

The 120V UPS is now running and interfaced to c1vac via a USB cable. The unofficial tripplite python package is able to detect and connect to the unit, but then read queries fail with "OS Error: No data received." The firmware has a different version number from what the developers say is known to be supported.

The 230V UPS is actually not correctly installed. For input power, it has a standard IEC C14 connector which is currently plugged into a 120V power strip. However, this unit has to be powered from a 230V outlet. We'll have to identify and buy the correct adapter cable.

With the 120V unit now connected, I can continue to work on interfacing it with python remotely. The next implementation I'm going to try is item #2 of this plan [ELOG 15446].

Quote:

I'm in the lab this morning to interface the two new UPS units with the digital controls system. Will be out by lunchtime. The disruptions to the vac system should be very brief this time.

  15446   Wed Jul 1 18:03:04 2020   Jon   Configuration   VAC   UPS replacements

​I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:

  • Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However the documentation warns this package does not work for all models...
  • Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager.
  • Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.

I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant.
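For what it's worth, a minimal sketch of the first option using the unofficial tripplite package (this assumes the package's Battery/get() HID interface actually works with the model we order, which the documentation warns is not guaranteed; the dictionary keys below are assumptions, not verified against our hardware):

from tripplite import Battery   # unofficial package, talks to the UPS over USB HID

with Battery() as ups:          # opens the first Tripp Lite HID device it finds
    status = ups.get()          # dict of status / health / config readings
    print(status)
    # Example interlock-style check; the exact key names may differ by model
    if status.get("status", {}).get("shutdown imminent"):
        print("UPS reports shutdown imminent - trigger a controlled valve closure here")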

  15465   Thu Jul 9 18:00:35 2020   Jon   Configuration   VAC   UPS replacements

Chub has placed the order for two new UPS units (115V for TP2/3 and a 220V version for TP1).

They will arrive within the next two weeks.

Quote:

​I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:

  • Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However the documentation warns this package does not work for all models...
  • Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager.
  • Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.

I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant.

  1496   Sun Apr 19 11:34:33 2009   josephb   HowTo   Cameras   USB Frame Grabber - How to

To use the Sensoray 2250 USB frame grabber:

Ensure you have the following packages installed: build-essential, libusb-dev

Download the Linux manual and linux SDK from the Sensoray website at:

http://www.sensoray.com/products/2250data.htm

Go to the Software and Manual tab near the bottom to find the links.  The software can also be found on the 40m computers at /cvs/cds/caltech/users/josephb/sensoray/

The files are Manual2250LinuxV120.pdf and s2250_v120.tar.gz

Run the following commands in the directory where you have the files.

tar -xvf s2250_v120.tar.gz

cd s2250_v120

make

cd ezloader

make

sudo make modules_install

cd ..

At this point plug in the 2250 frame grabber.

sudo modprobe s2250_ezloader

Now you can run the demo with

./sraydemo or ./sraydemo64

Options will show up on screen.  A simple set to start with is "encode 0", which sets the recording type, "recvid test.mpg", which starts the recording in file test.mpg, and "stop", which stops recording.  Note there is no on screen playback.  One needs an installed mpeg player to view the saved file, such as Totem (which can screen cap to .png format) or mplayer.

All these instructions are on the first few pages of the Manual2250LinuxV120 pdf.

 

 

  13277   Wed Aug 30 22:15:47 2017   rana   Omnistructure   Computers   USB flash drives moved

I have moved the USB flash drives from the electronics bench back into the middle drawer of the cabinet next to the AC which is west of the fridge. Drawer re-labeled.

  9192   Thu Oct 3 02:46:58 2013   rana   Metaphysics   PEM   USGS Furlough

ObamaCare_GOP.png

  7749   Tue Nov 27 00:26:00 2012   jamie   Omnistructure   Computers   Ubuntu update seems to have broken html input to elog on firefox

 After some system updates this evening, firefox can no longer handle the html input encoding for the elog.  I'm not sure what happened.  You can still use the "ELCode" or "plain" input encodings, but "HTML" won't work.  The problem seems to be firefox 17.  ottavia and rosalba were upgraded, while rossa and pianosa have not yet been.

I've installed chromium-browser (debranded chrome) on all the machines as a backup.  Hopefully the problem will clear itself up with the next update.  In the mean time I'll try to figure out what happened.

To use chromium: Applications -> Internet -> Chromium

  6192   Thu Jan 12 21:22:16 2012   Leo Singer   Configuration   WIKI-40M Update   Unable to create Wiki page

 I can't create a new page on the 40m wiki.  The page that I was trying to create is

http://blue.ligo-wa.caltech.edu:8000/40m/Stewart

I get this message when I try to save the new page:

Page could not get locked. Unexpected error (errno=13).

  6193   Thu Jan 12 23:13:42 2012   Koji   Configuration   WIKI-40M Update   Unable to create Wiki page

Quote:

 I can't create a new page on the 40m wiki.  The page that I was trying to create is

http://blue.ligo-wa.caltech.edu:8000/40m/Stewart

I get this message when I try to save the new page:

Page could not get locked. Unexpected error (errno=13).

This address for the wiki is obsolete. It was recently switched to https://wiki-40m.ligo.caltech.edu/
Jamie is working on automatic redirection from the old wiki to the new place.

The new one uses albert.einstein authentication.

 

  11889   Thu Dec 17 01:55:16 2015   ericq   Update   LSC   Uncooperative AUX X

[ericq, Gautam]

We were not able to fix the excess frequency noise of the AUX X laser by the usual laser diode current song and dance. Unfortunately, this level of noise is much too high to have any realistic chance of locking.

We're leaving things back in the IR beat -> phase tracker state with free running AUX lasers, on the off chance that there may be anything interesting to see in the overnight data. This may be limited by our lack of automatic beatnote frequency control. (Gautam will soon implement this via digital frequency counter). I've upped the FINE_PHASE_OUT_HZ_DQ frame rate to 16k from 2k, so we can see more of the spectrum.

For the Y beat, there is the additional weird phenomenon that the beat amplitude slowly oscillates to zero over ~10 minutes, and then back up to its maximum. This makes it hard for the phase tracker servo to stay stable... I don't have a good explanation for this. 

  11892   Fri Dec 18 17:37:04 2015   rana   Update   LSC   Uncooperative AUX X

Here's how we should diagnose the EX laser:

  1. Compare IR RIN of laser out to 100 kHz with that of another similar NPRO.
  2. Look at time series of IR beat signal with a fast scope. Are there any high frequency glitches?
  3. Disconnect all of the cables to the EX laser PZT and temperature control. Does the frequency noise change?
  4. Change the temperature by +/- 1 deg to move away from mode hop regions. Remeasure RIN and frequency noise and plot.
  10912   Fri Jan 16 04:14:38 2015   ericq   Update   CDS   Unexpected CDS behavior

EDIT: Sleepy Eric doesn't understand loops. The conditions for this observation included active oplev loops. Thus, obviously, looking at the in-loop signal after the ASC signal joins the oplev signal will produce this kind of behavior.


After some talking with Rana, I set out on making an even better-er QPD loop. I made some progress on this, but a new mystery halted my progress. 

I sought to have a more physical understanding of the plant TF I had measured. Earlier, I had assumed that the 4Hz plant features I had measured for the QPD loops were coming from the oplev-modified pendulum response, but this isn't actually consistent with the loop algebra of the oplev servos. I had seen this feature in both the oplev and qpd error signals when pushing an excitation from the ASC-XARM_PIT (and so forth) FMs.

However, when exciting via the SUS-ETMX-OLPIT FMs (and so forth), this feature would not appear in either the QPD or oplev error signals. That's weird. The outputs of these two FMs should just be summed, right before the coil matrix. 

I started looking at the TF from ASC-YARM_PIT_OUT to SUS-ETMY_TO_COIL_1_2, which should be a purely digital signal routing of unity, and saw it exhibit the phase shape at 4Hz that I had seen in earlier measurements. Here it is:

I am very puzzled by all of this. Needs more investigation.
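For the record, the loop algebra that resolves the puzzle (assuming, as stated above, that the ASC and oplev outputs are simply summed right before the coil matrix): with the oplev loop closed, an excitation x injected at the ASC point and measured after the sum sees the closed-loop suppression rather than unity,

\frac{\mathrm{TO\_COIL}}{x} = \frac{1}{1 + G_{\mathrm{olv}}}

where G_olv is the oplev open-loop gain, so the ~4 Hz feature of the oplev loop shows up even in this "purely digital" path.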

Attachment 1: digitalProblem.pdf
  2902   Mon May 10 16:59:35 2010   Alberto   Update   40m Upgrading   Unexpected oscillation in the POY11 PD

The measured transimpedance of the latest POY11 PD matches my model very well up to 100 MHz. But at ~216 MHz there is a resonance that I can't really explain.

2010-05-10_POY11_CalibratedOpticalResponse0-500MHz.png

 

 The following is a simplified illustration of the resonant circuit:

POX11.png

 

Perhaps my model misses that resonance because it doesn't include stray capacitances.

While I was tinkering with it, I noticed a couple of things:

- the frequency of that oscillation changes when I grasp the last inductor of the circuit (the 55n above) with a finger; that is, adding inductance

- the RF probe of the scope clearly shows me the oscillation only after the 0.1u series capacitor

- adding a small capacitor in parallel to the feedback resistor of the output amplifier increases the frequency of the oscillation

  2904   Mon May 10 18:56:53 2010   rana   Update   Electronics   Unexpected oscillation in the POY11 PD

Where did you get the 55nH based notch from? I don't remember anything like that from the other LSC PD schematics. This is certainly a bad idea. You should remove it and put the notch back over by the other notch.

  2905   Mon May 10 19:09:45 2010   rana   Update   Electronics   Unexpected oscillation in the POY11 PD

Quote:

Where did you get the 55nH based notch from? I don't remember anything like that from the other LSC PD schematics. This is certainly a bad idea. You should remove it and put the notch back over by the other notch.

 Why is it a bad idea?

You mean putting both the 2-omega and the 55MHz notches next to each other right after the photodiode?

  10567   Mon Oct 6 10:04:58 2014   manasa   Update   General   Unexpected power shutdown

We had an unexpected power shutdown for 5 sec at ~9:15 AM.

Chiara had to be powered up and I am in the process of getting everything else back up again.

Steve checked the vacuum and everything looks fine with the vacuum system.

  10568   Mon Oct 6 10:23:43 2014   Steve   Update   VAC   Unexpected power shutdown

Quote:

We had an unexpected power shutdown for 5 sec at ~ 9:15 AM.

Chiara had to be powered up and am in the process of getting everything else back up again.

Steve checked the vacuum and everything looks fine with the vacuum system.

The PSL Innolight laser and the 3 units of IFO air conditioning were turned on.

The vacuum system reaction to losing power: V1 closed and Maglev shut down. Maglev is running on 220VAC so it is not connected to VAC-UPS.  V1 interlock was triggered by Maglev "failure" message.

Maglev was reset and started. After Chiara was turned on manually I could bring up the vac control screen through Nodus and opened V1

"Vacuum Normal" valve configuration was recovered instantly.

 

Chiara needs UPS 

It is arriving Thursday

  10569   Mon Oct 6 10:28:18 2014   manasa   Update   General   Unexpected power shutdown

Quote:

We had an unexpected power shutdown for 5 sec at ~ 9:15 AM.

Chiara had to be powered up and am in the process of getting everything else back up again.

Steve checked the vacuum and everything looks fine with the vacuum system.

The last time we had a power failure: IFO recovery elog

  10571   Mon Oct 6 17:04:51 2014   ericq   Update   General   Unexpected power shutdown

I brought back the PMC, MC and Arms.

PMC:

  • Same as when we replaced the busted sorensen, the kepco regulators in 1X1 (which power the FSS HV amp, PMC PZT and WFS) needed to be brought back up in the proper order (the middle two are + and - for the FSS, and need to be rolled up in unison). Also, as on that occasion, sticky sliders prevented the full voltage range on the PMC PZT from being accessible. I touched every button on the PMC and FSS screens, which seemed to fix it.
  • I then realigned the PMC to ~0.80 transmission

MC:

  • Needed to do some hand alignment to get a lock
  • Measured spot positions, they were all under 2mm
  • Despite centering the beams on the WFS and setting the offsets, WFS would not turn on successfully
  • Also, the autolocker on megatron isn't doing anything but blinking
  • Also also, MC2 is exhibiting some intermittent alignment wandering. The SUSDOF traces look like flat ramps lasting a few minutes. 

Arms:

  • No green was evident anywhere, but it didn't take too much alignment tweaking to get IR flashes
  • No signals were evident on RFPDs, confirmed light on PDs and power to demod boards. 
  • Turned out the 11MHz Marconi was not doing anything, and needed to be reset to the modulation frequency in ELOG 10314 (which reminds me that I need to update the sticker on the marconi)
  • Locked arms, ASS'ed, oplev spots were acceptable. 

 

  10572   Mon Oct 6 17:36:17 2014   ericq   Update   General   Unexpected power shutdown

The autolocker is now working, but I didn't change anything to make it so. I was just putting in some echo statements, to see where it was getting hung up, and it started working... This isn't the first time I've had this experience. 

It turns out IOO had a bad BURT restore. I restored from 5 AM this morning, and the WFS are OK now.

  10573   Mon Oct 6 18:15:12 2014   manasa   Update   General   Unexpected power shutdown: end green alignment

After Q brought back the IR, I went to check the green situation.

1. The end lasers had to be turned ON. 
2. The heaters for the doubler crystals had to be enabled. The heaters are at the set values.
3. The X arm PZTs for the steering mirrors had to be powered up (Set voltage 100V and current 6.7mA)
4. I aligned the green to the already IR-aligned arms.

Green PSL alignment has to be done after Q finishes his work on the MC WFS.

  10570   Mon Oct 6 11:09:52 2014   Jenne   Update   General   Unexpected power shutdown: slow computers

As per other slow computers, which Chris figured out in elog 10189, I added all the rest of the slow computers to Chiara's /etc/hosts file, so that they would come up when Manasa went and keyed the crates. 

Computers that were already there:

  • c1auxex
  • c1psl
  • c1iscaux

Computers that I added today:

  • c1susaux
  • c1auxey
  • c1iscaux2
  • c1pem1
  • c1aux
  • c1iool0
  • c1vac1

Manasa keyed all of these crates *except* for the vac computer, since Steve said that the vacuum system is up and running fine.

  10579   Tue Oct 7 16:55:16 2014   Steve   Update   VAC   Unexpected sweaty valves

Pump spool valves V5, V4, and V3 are sweating a lot. VM3 and VC2 not so much.

They are VAT valves (F28-62887-03, -11, -14, and so on), ~15-16 years old.

I'm speculating that some plastic is aging and breaking down on the atmospheric/pneumatic side of the valves.
The vacuum side is not affected, according to the vacuum pressure readings.

Maybe some condensation from the small turbos? No.

I'm looking for an identical valve to examine, but I can not find one.

We are using industrial grade 99.96% Nitrogen to actuate these valves.

Valves that are not affected are dry: VA6, V6, V7 and all the annulus valves.

 

Attachment 1: sweatyV5.jpg
Attachment 2: sweatyV5f.jpg
  3643   Mon Oct 4 13:48:41 2010   Leo Singer   Configuration   Computers   Uninstalled gstreamer-devel and gstreamer-plugins-base-devel on rosalba

 I uninstalled gstreamer-devel and gst-plugins-base-devel on Rosalba.  Here is the command I ran:

$ sudo yum remove gstreamer-devel gstreamer-plugins-base-devel

 

Actually, I had installed these myself a few days earlier, before I knew that I should be recording such changes in the elog.  I'm sorry!

  2421   Wed Dec 16 11:21:20 2009   Alberto   Update   ABSL   Universal PDH Box Servo Filters
Yesterday I measured the shape of the servo filter of both the old and the new Universal PDH boxes.
Here they are compared.

NewandOlfFilterTF.png

The way the filter's transfer function has been measured is by a swept sine between the "SERVO INPUT" and the "PIEZO DRIVE OUTPUT" connection on the box front panel. The spectrum analyzer used for the measurement is the SR785 and the source amplitude is set at 0.1V.

The two transfer functions are clearly different. In particular the old one looks like a simple integrator, whereas the new one already includes some sort of boost.

That probably explains why the new one is unable to lock the PLL. Indeed, what the PLL needs, at least to acquire lock, is a 1/f filter.

I thought the two boxes were almost identical, at least in the filter shapes. Also the two schematics available in the DCC coincide.

Attachment 1: NewandOlfFilterTF.png
  2423   Wed Dec 16 11:55:47 2009   rana   Update   ABSL   Universal PDH Box Servo Filters

 

 To me, they both look stable. I guess that the phase has to go to -180 deg to be unstable.

Why does the magnitude go flat at high frequencies? That doesn't seem like 1/f.

How about a diagram of what inputs and outputs are being measured and what the gain knob and boost switch settings are?

  2476   Tue Jan 5 09:18:38 2010   Alberto   Omnistructure   Electronics   Universal PDH Box Stored in the RF Cabinet

FYI: I stored the Universal PDH boxes in the RF cabinet in the Y arm.

  8789   Tue Jul 2 00:25:14 2013   gautam   Update   Green Locking   Universal PDH box tuning

 [Koji, Annalisa, Gautam]

Annalisa noticed that over the weekend the Y-arm green PDH was locked to a sideband, despite not having changed anything on the PDH box (the sign switch was left as it was). On Friday, we tried turning on and off some of the filters on the slow servo (C1ALS_Y_SLOW), which may have changed something, but this warranted further investigation. We initially thought that the demodulation phase was not at the optimal value, and decided to try introducing some capacitances in the path from the function generator to the LO input on the universal PDH box. We modelled the circuit and determined that significant phase change was introduced by capacitances between 1nF and 100nF, so we picked out some capacitors (WIMA FKP) and set up a breadboard on which to try these out.

After some trial and error, Koji dropped by and felt that the loop was optimized for the old laser; the various loop parameters had not been tweaked since the new laser was installed. The following parameters had to be optimized for the new laser:

  • Servo gain
  • LO frequency
  • LO modulation depth
  • Demodulation phase

The setup was as follows: 

  • PDH box error signal to Oscilloscope CH1
  • Green PD output to Oscilloscope CH2
  • No capacitor between Function Generator and the PDH box
  • 0.1Hz triangle wave (30 counts amplitude) applied to ETMY via awggui (so as to sweep the cavity and see stronger, more regular TEM00 flashes)

The PDH error signal did not have very well-defined features, so Koji tweaked the LO frequency and the modulation depth till we got a reasonably well-defined PDH signal. Then we turned the excitation off and locked the cavity to green. The servo gain was then optimized by reducing oscillations in the error signal. Eventually, we settled on values for the Servo Gain, LO frequency and modulation depth such that the UGF was ~20kHz (determined by looking at the frequency of oscillation of the error signal on an Oscilloscope), and the PDH signal had well-defined features (while the cavity was unlocked). The current parameters are

  • LO frequency: 205.020 kHz
  • modulation depth: 0.032 Vpp

We then proceeded to find the optimal demodulation phase by simulating the circuit with various capacitances between the function generator and the PDH box (circuit diagram and plots attached). The simulation seemed to suggest that there was no need to introduce any additional capacitance in this path (introducing a 1nF capacitance added a phase lag of ~90 degrees - this was confirmed as the error-signal amplitude decreased drastically when we hooked up a 1nF capacitor on our makeshift breadboard). In the current configuration, the LO is connected directly to the PDH box.
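A minimal version of that phase calculation in Python (a sketch of our simple model, assuming a single series capacitor between a 50 ohm source and the 50 ohm LO input, evaluated at the 205 kHz LO frequency; this is not the full simulation we ran):

import numpy as np

f_lo = 205.020e3            # LO frequency [Hz]
Rs, Rl = 50.0, 50.0         # source impedance and PDH-box LO input impedance [ohm]
w = 2 * np.pi * f_lo

for C in (1e-9, 10e-9, 100e-9):      # series capacitance [F]
    Zc = 1.0 / (1j * w * C)
    H = Rl / (Rs + Zc + Rl)          # fraction of the LO voltage reaching the PDH box
    print(f"C = {C*1e9:5.0f} nF: |H| = {abs(H):.3f}, phase = {np.degrees(np.angle(H)):+5.1f} deg")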

 

Misc Points:

  • The phase shifter in the PDH box is not connected: the IC in the box, JSPHS-26, is designed for operation in the range 18-26MHz. If necessary, an all-pass-filter could be introduced, with a tuneable rheostat to adjust the phase for our frequency range. Right now, turning the knob marked "LO phase angle" on the front panel doesn't do anything. The mixer on the PDH board is also not used for the same reason.
  • PSL shutter was closed sometime earlier this evening, because we suspected some IR light was reaching the Green PD on the y-endtable, and was influencing the error signal. It's back open now.
  • Useful information about the old y-end laser relevant to selecting the right LO frequency, modulation depth, and servo gain can be found here and in elog 2746 and subsequent replies, though the details of how the measurements were made aren't entirely clear. The idea is that the piezoelectric element in the laser has some characteristics which will determine the optimal LO frequency, modulation depth and servo gain.

 

To Do:

Now that we are reasonably confident that the loop parameters are optimal, we need to stabilise the C1ALS_Y_SLOW loop to stabilise the beat note itself. Appropriate filters need to be added to this servo.

 

Circuit Diagram: 50 ohm input impedance on the source, 50 ohm output impedance seen on the PDH box, capacitance varied between 1nF and 100nF in steps.

circuit.pdf

Plots for various capacitances: Gold-green trace (largest amplitude) direct from LO, other traces at input to PDH box.

model.pdf

  8548   Wed May 8 16:10:09 2013   Jamie   Update   CDS   Unknown DAQ channels in c1sus c1x02 IOP?

Someone for some reason added full-rate DAQ specification to some ADC3 channels in the c1sus IOP model (c1x02):

#DAQ Channels

TP_CH15 65536
TP_CH16 65536
TP_CH17 65536
TP_CH18 65536
TP_CH19 65536
TP_CH20 65536
TP_CH21 65536

These appear to be associated with c1pem, so I'm guessing it was Den (particularly since he's the worst about making modifications to models and not telling anyone or logging or svn committing).

I'm removing them.

  2150   Tue Oct 27 17:58:25 2009   Jenne   Configuration   PEM   Unknown PEM channels in the PEM-ADCU?

Does anyone know what the channels plugged in to the PEM ADCU, channels 5,6,7,8 are?  They aren't listed in the C1ADCU_PEM.ini file which tells the channel list/dataviewer/everything about all the rest of the signals which are plugged into that ADCU, so I'm not sure if they are used at all, or if they're holdovers from some previous time.  The cables are not labeled in a way that makes clear what they are.  Thanks!

  13949   Tue Jun 12 14:47:37 2018   gautam   Bureaucracy   General   Unlabelled components from EX moved to SP table and labelled

Steve mentioned two unlabelled optics were found at EX, relics from the Endtable upgrade.

  • One was a 1" 45 deg p-pol optic (Y1-1025-C-45P); it looks a bit scratched.
  • The other was a Beam Sampler (BSF10-C).

These are now labelled and forked down on the SP table.

  1528   Tue Apr 28 12:55:57 2009   Caryn   DAQ   PEM   Unplugged Guralp channels

For the purpose of testing out the temperature sensors, I stole the PEM-SEIS_MC1X,Y,Z channels.

I unplugged Guralp NS1b, Guralp Vert1b, Guralp EW1b cables from the PEM ADCU(#10,#11,#12) near 1Y7 and put temp sensors in their place (temporarily).

  1571   Sun May 10 13:34:32 2009   caryn   Update   PEM   Unplugged Guralp channels

I unplugged Guralp EW1b and Guralp Vert1b and plugged in temp sensors temporarily. Guralp NS1b is still plugged in.

  1597   Mon May 18 01:54:35 2009   rana   Update   PEM   Unplugged Guralp channels
To see if Caryn's data dropouts were happening, I looked at a trend of all of our temperature channels. Looks OK now.

Although you can't see it because I zoomed in, there's a ~24 hour relaxation happening before Caryn's sensors equilibrate.
I guess that's the insulating action of the cooler? We need a picture of the cooler in the elog for posterity.
Attachment 1: Untitled.png