40m Log, Page 83 of 335

ID | Date | Author | Type | Category | Subject
  8286 | Wed Mar 13 15:30:37 2013 | Max Horton | Update | Summary Pages | Fixing Plot Limits

Jamie has informed me of numpy's numpy.savetxt() method, which is exactly what I want for this situation (human-readable text storage of an array).  So I will now be using:

      import numpy
      # outfile is the base name of the .png graph; data is the array with our desired data
      numpy.savetxt(outfile + '.dat', data)

to save the data.  I can later retrieve it with numpy.loadtxt().
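
A minimal round-trip sketch (my illustration; 'graph.dat' and the array below are stand-ins, not the actual summary-page data):

      import numpy
      data = numpy.arange(6.0).reshape(3, 2)   # stand-in for the real plot data
      numpy.savetxt('graph.dat', data)         # human-readable text on disk
      restored = numpy.loadtxt('graph.dat')    # later retrieval
      assert numpy.allclose(data, restored)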

  605 | Mon Jun 30 15:56:22 2008 | Jenne | Update | Electronics | Fixing the LO demod signal
To make the alarm handler happy, at Rana and John's suggestion I replaced R14 of the MC's demod board, changing it from 4.99 Ohms to 4.99 kOhms. This increased the gain of the LO portion of the demod board by a factor of 10. Sharon and I have re-measured the table of LO input to the demod board versus the output on the C1:IOO-MC_DEMOD_LO channel:

Input amplitude to LO input on demod board [dBm] | Value of channel C1:IOO-MC_DEMOD_LO
------------------------------------------------- | -----------------------------------
-10 | -0.00449867
-8 | 0.000384331
-6 | 0.0101503
-4 | 0.0296823
-2 | 0.0882783
0 | 0.2543
2 | 0.542397
4 | 0.962335
6 | 1.65572
8 | 2.34911
10 | 2.96925
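
As a cross-check of these drive levels, a small conversion sketch (my addition, assuming the LO input is 50 ohm) turning the dBm values above into peak voltage:

      import math
      for dbm in range(-10, 12, 2):
          p = 1e-3 * 10 ** (dbm / 10.0)     # RF power in watts
          v_pk = math.sqrt(2 * 50 * p)      # peak volts across 50 ohms
          print("%+3d dBm -> %6.1f mV peak" % (dbm, v_pk * 1e3))
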
  3617 | Tue Sep 28 21:11:52 2010 | koji, tara | Update | Electronics | Fixing the new TTFSS

We found a small PCB defect - excess copper shorting a circuit on the daughter board. It was removed, and the signal on the mixer monitor path is now working properly.

We were checking the new TTFSS up to test 10a of the instruction document E1000405-v1. There was no signal at the MIXER mon channel.

It turned out that the U3 OpAmp on the daughter board, D040424, was not working because the circuit path for leg 15 was shorted by the board defect. Fig. 1 shows that the contact for the OpAmp's leg (2nd from left) touches ground.

We used a knife to scrape it off (see Fig. 2), and now this part works properly.

 

Attachment 1: before.jpg
Attachment 2: after.jpg
  3811 | Thu Oct 28 16:38:54 2010 | josephb | Update | CDS | Flaky fb, reverted inittab changes on fb

Problem:

Yuta reported that many of the signals displayed by dataviewer looked "fuzzier" than normal, and diaggui was not working.

Running "diag -i" reported:

Diagnostics configuration:
awg 21 0 192.168.113.85 822095893 1 192.168.113.85
awg 36 0 192.168.113.85 822095908 1 192.168.113.85
awg 37 0 192.168.113.85 822095909 1 192.168.113.85
tp 21 0 192.168.113.85 822091797 1 192.168.113.85
tp 36 0 192.168.113.85 822091812 1 192.168.113.85
tp 37 0 192.168.113.85 822091813 1 192.168.113.85

This seems to be missing an nds type line between the 3 awgs and the 3 tp lines.

The daqd code (the frame builder) is being especially flaky today, and I'm starting to see new errors.

[Thu Oct 28 16:13:46 2010] Couldn't open full trend frame file
`/frames/trend/second/9723/C-T-972342780-60.gwf' for writing; errno 13
epicsThreadOnceOsd epicsMutexLock failed.
epicsThreadOnceOsd epicsMutexLock failed.
epicsThreadOnceOsd epicsMutexLock failed.
epicsThreadOnceOsd epicsMutexLock failed.
epicsThreadOnceOsd epicsMutexLock failed.
Segmentation fault (core dumped)

or

[Thu Oct 28 16:17:06 2010] Couldn't open full frame file
`/frames/full/9723/.C-R-972343024-16.gwf' for writing; errno 13
CA client library tcp receive thread terminating due to a C++ exception
FATAL: exception not rethrown
CA client library tcp receive thread terminating due to a C++ exception
CA client library tcp receive thread terminating due to a C++ exception
FATAL: exception not rethrown
cac: tcp send thread received an unexpected exception - disconnecting
Aborted (core dumped)

What was done today that might have affected it:

A new c1ioo chassis from Downs was connected to c1ioo.  I also connected c1ioo to the DAQ network (192.168.114.xxx) which talks to the frame builder.

I started downloading the necessary files to be able to follow Keith's instructions for a standard control / teststand setup in /opt/apps, /opt/rtapps, etc. However, it has not actually been installed yet.

Yuta added additional OL channels to the DAQ config to be recorded.

Attempted Fixes:

I reverted the inittab changes I made in an earlier elog.  Didn't help.

I disconnected c1ioo from the DAQ network.  Didn't help.

Rebooted the frame builder machine.  Didn't help.

I've sent an e-mail to Alex describing the problem to see if he has any idea where we went wrong.

Yuta may try restoring the old DAQ channel choices and see if that makes a difference.

Current Status:

The daqd frame builder code still won't stay up, so no channels at the moment.

  3814 | Thu Oct 28 21:20:11 2010 | yuta | Update | CDS | Flaky fb, tried DAQ re-install, but no help

Summary:
  Unfortunately, fb is flakier than normal. We can't use dataviewer or diaggui now.
  I thought it might be because editing the .ini files (the list of DAQ channels) in /cvs/cds/rtcds/caltech/c1/chans/daq/ without using the GUI had done something wrong.
  So I re-installed the DAQ, but it didn't help.

What I did:
1. ssh c1sus, went to /opt/rtcds/caltech/c1/core/advLigoRTS/ and ran
  make uninstall-daq-c1SYS
  make install-daq-c1SYS

It didn't help.
Worse, the MC suspension damping went wrong. So:

2. Rebooted the c1sus machine.
 This restored the MC suspension damping.
 (A similar thing happened Tuesday when we were trying to lock the MC.)

Conclusion:
  Editing the .ini DAQ channel list file wasn't the problem (or at least I have failed to find anything wrong so far).

Quote:

Attempted Fixes:

Yuta may try restoring the old DAQ channel choices and see if that makes a difference.

 

  16003 | Wed Apr 7 02:50:49 2021 | Koji | Update | SUS | Flange Inspections

Basically I went around all the chambers and all the DB25 flanges to check the in-vac cable configurations. I also took more time to check the coil Rs and Ls.

The exceptions are the TTs: to avoid unexpected misalignment of the TTs, I didn't try to disconnect the TT cables from the flanges.

Upon disconnection of the SOS cables, the following steps were taken to avoid large impacts on the SOSs:

  • The alignment biases were saved or recorded.
  • The biases were gradually moved to 0.
  • The watchdogs (and thus the damping) were turned off.

After the measurement, the IMC was locked and aligned. The two arms were locked and aligned with ASS, and the PRM alignment (with "misalign" disengaged) was checked on the REFL CCD.
So I believe the SOSs are functioning as before, but if you find anything, please let me know.
 

  16404 | Thu Oct 14 18:30:23 2021 | Koji | Summary | VAC | Flange/Cable Stand Configuration

Flange Configuration for BHD

We will need a total of 5 new cable stands, so Qty. 6 is the number to be ordered.


Looking at the Accu-Glass drawing, the in-vacuum cables are standard D-sub 25-pin cables with only two standard fixing threads.

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110070_3.pdf

For SOSs, the standard 40m style cable bracket works fine. https://dcc.ligo.org/D010194-x0

However, for the OMCs we need to keep the thread holes available so that we can mate DB25 male cables to these cables.
One possibility is to improvise with this cable bracket and suspend the cables using clean Cu wires or something. I think we can deal with this issue in situ.


Ha! The male side has 4-40 standoff (jack) screws. So we can hold the male side on the bracket using the standoff screws and plug in the female cables. OK! Issue solved!

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110029_3.pdf

Attachment 1: 40m_flange_layout_20211014.pdf
  6196 | Fri Jan 13 16:16:05 2012 | Leo Singer | Update | Stewart platform | Flexure type for leg joints

I had been thinking of using this flexure for the leg-joint bearings, but I finally realized that it is not the right type of bearing. The joints of the Stewart platform need to be free in both yaw and pitch, but this bearing actually constrains yaw (while leaving out-of-plane translation free).

  257 | Wed Jan 23 20:52:40 2008 | rana | Summary | Environment | Flooding from construction area
We noticed tonight around 7 PM that there was a lot of brown water in the control room and also in the interferometer area, mostly concentrated along the north wall between the LSC rack and the AP table.

The leak was mainly in the NW corner of the interferometer area.

The construction crew had set up sandbags, plastic sheet, and gravel to block the drains outside of the 40m along the north wall. The rain had produced ponds and lakes outside in the construction area. Once the level got high enough this leaked through holes in the 40m building walls (these are crappy walls).

We called the on-call facilities team (1 guy). He showed up, cut through the construction fence lock, and then unblocked the drains. This guy was pretty good (although inscrutable); he adjusted the sandbags to control the flow of the lake into the drains. He went along the wall and unblocked all 3 drains; there were mini-lakes forming there which he felt would eventually start leaks all along our north wall.

In the morning we'll need volunteers to move equipment around under Steve's direction while the floor gets mopped up. There's dirt and mud all over, underneath the chambers and racks.

Luckily Alberto spotted this early and he, Jon, Andrey and Steve kept the water from spreading and then scooped it all up with a wet-vac that the facilities guy brought over.
Extra Napoleon to them for late evening mud clearing work.

Many pictures were taken: Update and pictures will appear later.
Attachment 1: Shop-Vac_Action.MOV
Attachment 2: Flooding.pdf
  9246 | Wed Oct 16 10:32:40 2013 | Steve | Update | SUS | Flowbench effect on oplev error signals

ETMX_OPLEV_...ERROR signals are affected by turning ON the flow bench motor at the south end.

The broadband noise [~5-60 Hz] is much higher when the motor is running.

The flow bench was turned off.
Attachment 1: FlowBenchOnOff.png
  5951 | Fri Nov 18 19:07:07 2011 | Jenne | Update | RF System | Foam house on EOM

[Jenne, Zach, Frank]

Frank helped Zach and me cable up a PT-100 RTD and make sure it worked with the Newport Model 6000 Laser Diode Controller.  We're using this rather than the Newport 3040 Temperature Controller because Dmass says the output of that one isn't working.  So we're using just the temperature-control part of the laser diode controller.

The back of the controller has a 15-pin D-sub with the following useful connections; all others were left not connected.

1 & 2 (same) - Pin 2 is one side of TEC output (we have it connected to one side of a resistive heater)

3 & 4 (same) - Pin 4 is the other side of the TEC output (connected to the other side of the resistive heater)

7 - connected to one side of PT-100 temp sensor

8 - connected to other side of PT-100 temp sensor

I used aluminum tape to attach the sensor and heater to the 40m's EOM, and we plugged in the controller.  It seems to be kind of working.  Zach figured out the GPIB output stuff, so we can talk to it remotely.

 

  5954 | Sat Nov 19 00:09:02 2011 | Zach | Update | RF System | Foam house on EOM

Quote:

I used aluminum tape to attach the sensor and heater to the 40m's EOM, and we plugged in the controller.  It seems to be kind of working.  Zach figured out the GPIB output stuff, so we can talk to it remotely. 

I stole the Prologix wireless GPIB interface from the SR785 that's down the Y-arm, temporarily. The address is 192.168.113.108. (Incidentally, I think some network settings have been changed since the GPIB stuff was initially configured: all the Prologix boxes have 131.215.X.X written on them, while they are only accessible via the 192.168.X.X addresses. Also, the 40MARS wireless router is only accessible from Martian computers at 192.168.113.226, not 131.215.113.226.)

In any case, the Newport 6000 is controllable via telnet. I went through the remote RTD calibration process in the manual, measuring the exact RTD resistance with an ohmmeter and entering it in. Despite this, when the TEC output is turned on, the heating way overshoots the entered set temperature. This is probably because the controller parameters (gain, etc.) are not set right. We have left it off for the moment.

Here are a couple command examples:

1. Turning on the TEC output 

nodus:~>telnet 192.168.113.108 1234
Trying 192.168.113.108...
Connected to 192.168.113.108.
Escape character is '^]'.
TEC:OUT on
TEC:OUT
TEC:OUT?
++read eoi
1
 
2. Measuring the current temperature
 
TEC:T?
++read eoi
32.9837
 
3. Reading and then changing the set temperature
 
TEC:SET:T?
++read eoi
34.0000
TEC:T 35.0
TEC:SET:T?
++read eoi
35.0000
 
4. Figuring out that the temperature is unstable and then turning off the TEC (this is important)
 
TEC:T?
++read eoi
36.2501
TEC:OUT off
TEC:OUT?
++read eoi
0
 
(The "++read eoi" lines are the commands you give the Prologix to read the controlled device output.)
 
As I understand it, Frank has some code that will pull data in real time and put it into EPICS. That would be nice.
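
For reference, a minimal polling sketch (my own illustration, not Frank's code) that reads the temperature through the Prologix box using the same commands as above; the print would be replaced by a write to an EPICS record:

      import socket, time

      def query(sock, cmd):
          # send a device command, then ask the Prologix to read back the response
          sock.sendall((cmd + "\n").encode())
          sock.sendall(b"++read eoi\n")
          return sock.recv(1024).decode().strip()

      s = socket.create_connection(("192.168.113.108", 1234), timeout=5)
      while True:
          temp = float(query(s, "TEC:T?"))   # current temperature in deg C
          print(time.time(), temp)
          time.sleep(1)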
  6104 | Mon Dec 12 11:16:02 2011 | Jenne | Update | RF System | Foam house on EOM

The foam house was installed on the EOM a few minutes ago.  We'll leave it until ~tomorrow, then try out the heater loop.

  5242 | Mon Aug 15 17:38:07 2011 | jamie | Update | General | Foil aperture placed in front of ETMY

We have placed a foil aperture in front of ETMY to aid in aligning the Y-arm, and then the PRC.  It obviously needs to be removed before we close up.

  15022 | Wed Nov 13 19:34:45 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

Attachment #1 shows the spectra of our three available seismometers over a period of ~10ksec.

  • I don't understand why the z-axis motion reported by the T240 is ~10x lower at 10 mHz compared to the X and Y motions. Is this some electronics noise artefact?
  • The difference in the low frequency (<100mHz) shapes of the T240 compared to the Guralps is presumably due to the difference in the internal preamps / readout boxes (?). I haven't checked yet.
  • There is almost certainly some issue with the EX Guralp. IIRC this is the one that had cabling issues in the past, and is also the one that was being futzed around with for Tctrl; but it could also be that its masses need re-centering, since it is EX_X that shows the anomalous behaviour.
  • The coherence structure between the other pairs of sensors is consistent.

Attachment #2 shows the result of applying frequency-domain Wiener filter subtraction to the POP QPD (target) with the vertex seismometer signals as witness channels; a minimal sketch of the method follows the list below.

  • The dataset was PRMI locked with the carrier resonant, ETMs misaligned.
  • The dashed lines in these plots correspond to the RMS for the solid line with the same color.
  • For both PIT and YAW, I am using BS_X and BS_Y seismometer channels for the MISO filter inputs.
  • In particular for PIT, I notice that I am unable to get the same level of performance as in the past, particularly around ~2-3 Hz.
  • The BS seismometer health indicators don't signal any obvious problems with the seismometer itself - so has something changed in how the ground motion propagates to PR2/PR3? Or has the seismometer sensing truly degraded? I don't think the dataset I collected was particularly bad compared to the past, and I confirmed similar performance with a separate PRMI lock from a different time period.
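
The sketch referenced above: a single-witness, frequency-domain Wiener calculation on synthetic data (my illustration only - the actual analysis used two witness channels, i.e. a MISO filter, and real seismometer data):

      import numpy as np
      from scipy.signal import welch, csd, coherence

      fs = 256.0                                      # Hz, assumed rate for illustration
      rng = np.random.default_rng(0)
      w = rng.standard_normal(2**16)                  # witness, e.g. a seismometer channel
      t = 0.7 * w + 0.3 * rng.standard_normal(2**16)  # target with a correlated part

      f, S_ww = welch(w, fs, nperseg=4096)            # witness PSD
      _, S_tt = welch(t, fs, nperseg=4096)            # target PSD
      _, S_wt = csd(w, t, fs, nperseg=4096)           # cross spectral density
      W = S_wt / S_ww                                 # optimal (Wiener) filter vs. frequency
      _, C = coherence(w, t, fs, nperseg=4096)        # magnitude-squared coherence
      S_res = S_tt * (1 - C)                          # predicted post-subtraction target PSD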
Attachment 1: seisAll_20191111.pdf
Attachment 2: ffPotential.pdf
  15023 | Wed Nov 13 20:15:56 2019 | rana | Update | PEM | Follow-up on seismometer discussion

this is due to the Equivalence Principle: local accelerations are indistinguishable from spacetime curvature. On a spherical Earth, the local gradient of the metric points in the direction towards the center of the Earth, which is colloquially known as "down".

Quote:

I don't understand why the z-axis motion reported by the T240 is ~10x lower at 10 mHz compared to the X and Y motions. Is this some electronics noise artefact?

 

  15024 | Wed Nov 13 23:40:15 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

Here is some disturbance in the spacetime curvature: the local gradient of the metric seems to have been modulated (in the "downward" direction, as well as in the other two orthogonal Cartesian directions) at ~1 Hz. It seems real as far as I can tell - all the suspensions were being shaken about and all the seismometers witnessed it, though the peak is pretty narrow. A broader, less prominent peak also shows up around 0.5 Hz. We couldn't identify any clear source (no LN2 fill-up / obvious CES activity). This event lasted for ~45 mins and stopped around 2315 local time. Shortly (~5 min) after the ~1 Hz peak died down, however, the 3-10 Hz BLRMS channel reported an increase by a factor of ~2.

Onto trying some locking now that the suspensions have settled down somewhat.

Quote:

this is due to the Equivalence Principle: local accelerations are indistinguishable from spacetime curvature. On a spherical Earth, the local gradient of the metric points in the direction towards the center of the Earth, which is colloquially known as "down".

Attachment 1: seisAll_20191111_1Hz.pdf
  15025 | Thu Nov 14 12:11:04 2019 | rana | Update | PEM | Follow-up on seismometer discussion

At 1 Hz this effect is not large, so that's real translation. At lower frequencies a ground tilt couples into the horizontal sensors at first order, and the apparent signal is amplified by the double integral. Drawing a free-body diagram you can see that

x_apparent = (g / s^2) * theta

but for the vertical channel this is not true, because it already measures the full free fall and the tilt only shows up at second order.
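
A quick numerical check of this (my sketch): the tilt-coupling factor |x_apparent / theta| = g / (2*pi*f)^2 grows rapidly toward low frequency:

      import math
      g = 9.8                                    # m/s^2
      for f in (0.01, 0.1, 1.0):                 # Hz
          print(f, g / (2 * math.pi * f) ** 2)   # apparent meters per radian of tilt
      # ~2500 m/rad at 10 mHz vs ~0.25 m/rad at 1 Hz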

  15027 | Fri Nov 15 00:18:41 2019 | rana | Update | PEM | Follow-up on seismometer discussion

The large ground motion at 1 Hz started up again tonight at around 23:30. I walked around the lab and nearby buildings with a flashlight and couldn't find anything whumping. The noise is very sinusoidal and seems like it must be a 1 Hz motor rather than any natural disturbance, traffic, etc. I suspect it is a pump in the nearby CES building which wakes up and runs to fill up some liquid level. Will check it out in the morning.

Estimate of displacement noise, based on the observed MC_F channel showing a 25 MHz peak-peak excursion for the laser frequency:

dL = 25e6 * 13 m / (c / lambda) ≈ 1 micron
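
Checking the arithmetic (my sketch), using dL/L = df/f with f = c/lambda:

      c = 299792458.0    # m/s
      lam = 1064e-9      # m, laser wavelength
      df = 25e6          # Hz, peak-peak MC_F excursion
      L = 13.0           # m, MC length used above
      print(df * L / (c / lam))   # ~1.2e-6 m, i.e. about a micron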

So this is a lot. Our pendulum is probably amplifying the ground motion by 10x, so I suspect a ground noise of ~0.1 micron peak-peak.

(This is a native PDF export using qtgrace rather than XMgrace. Uninstall xmgrace and symlink to qtgrace.)

Attachment 1: MCshake.pdf
  15030 | Fri Nov 15 12:16:48 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

Attachment #1 is a spectrogram of the BS seismometer signals for a ~24 hour period (from Wednesday night to Thursday night local time; zipped because it's a large file). I've marked the nearly pure tones that show up for some time and then turn off. We need to get to the bottom of this and ideally stop it from happening at night, because it is eating ~1 hour of lockable time.

We considered whether we could use the phasing between the vertex and end seismometers to localize the source of the disturbance.

Attachment 1: BS_ZspecGram.pdf.zip
  15032 | Mon Nov 18 14:32:53 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

The nightly seismic activity enhancement continued over the weekend. It always shows up around 10 pm local time, persists for ~1 hour, and then goes away. This isn't a show-stopper as long as it stops at some point, but it is annoying that it eats up >1 hour of possible locking time. I walked over to CES; no one there admitted to anything. There is an "Earth Surface Dynamics Laboratory" there that runs some heavy equipment right next to us, but they claim they aren't running anything past ~5:30 pm. Rick (building manager?) also doesn't know of anything that turns on with the periodicity we see. He suggested contacting Watson, but I have no idea who to talk to there who has an overview of what goes on in the building. 😢

  15036 | Tue Nov 19 21:53:57 2019 | gautam | Update | PEM | Follow-up on seismometer discussion

The shaking started earlier today than yesterday, at ~9pm local time.

While the IFO was shaking, I thought (as Jan Harms suggested) I'd take a look at the cross-spectra between our seismometer channels at the dominant excitation frequency, ~1.135 Hz. Attachment #1 shows the phase of the cross-spectrum, taken with 10 averages (30 mHz resolution) during the period of strong shaking yesterday (~1500 seconds, 50% overlap). The logic is that we can use the relative phasing between the seismometer channels to estimate the direction of arrival and hence the source location. However, I already see some inconsistencies - for example, the relative phase between BS_Z and EX_Z suggests that the signal arrives at the EX seismometer first, but the phasing between EX_Y and BS_Y suggests the opposite. So maybe thinking about the problem as 3 co-located sensors measuring plane-wave disturbances originating from the same place is too simplistic? Moreover, Koji points out that for two sensors separated by ~40 m and a ground wave velocity of 1.5 km/s, the maximum time delay we should see between sensors is ~30 msec, which corresponds to ~10 degrees of phase at this frequency. I guess we have to undo the effects of the phasing in the electronics chain.
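
A back-of-the-envelope version of Koji's point (my sketch, with the 1.5 km/s ground wave speed assumed above):

      d = 40.0                      # m, sensor separation
      v = 1500.0                    # m/s, ground wave speed
      f = 1.135                     # Hz, dominant peak
      tau = d / v                   # ~27 ms maximum time delay
      print(tau, 360.0 * f * tau)   # at most ~11 degrees of phase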

Does anyone have some code that's already attempted something similar that I can put the data through? I'd like to not get sucked into writing fresh code.

🤞 this means that the shaking is over for today and I get a few hours of locking time later this evening.

Another observation is that even after the main 1.14 Hz peak dies out, there is elevated seismic activity reported by the 1-3 Hz BLRMS band. This unfortunately coincides with some stack resonance, so the arm cavity transmission shows greater RIN even after the main peak dies out. Today, all the BLRMS returned to their "nominal" nighttime levels ~10 mins after the main 1.14 Hz peak died out.

Attachment 1: seisxSpec.pdf
  15417 | Fri Jun 19 14:03:50 2020 | Jordan | Update | VAC | Forepump Tip Seal Replacement

Tip seals were replaced on the forepumps for TP2 and TP3, and both are ready to be installed back onto the forelines.

TP2 forepump ultimate pressure: 180 mtorr

TP3 forepump ultimate pressure: 120 mtorr

  7348 | Thu Sep 6 10:57:27 2012 | Jenne | Update | General | Forgot to turn green refl pd back on

Quote:

I couldn't understand the Y-End green setup as the PD was turned off and the sign of the servo was flipped. Once they are fixed, I could lock the cavity with the green beams.

Quote:

[EricQ, Jenne, brains of other people]

Get green spots co-located with IR spots on ETMs, ITMs, check path of leakage through the arms, make sure both greens get out to PSL table

 

I had turned the green refl PD off on Tuesday while we were doing the IPANG alignment, since the beam was not so bright and the LED on top of the PD was very annoyingly bright.  I forgot to turn it back on.  The sign flip on the servo I can't explain.

  5793 | Thu Nov 3 13:00:52 2011 | Jenne | Update | Photos | Formatting of MEDM screen names

Quote:

After lots of trial and error, and a little inspiration from Koji, I have written a new script that will run when you select "update snapshot" in the yellow ! button on any MEDM screen. 

Right now, it's only live for the OAF_OVERVIEW screen.  View snapshot and view prev snapshot also work. 

Next on the list is to make a script that will create the yellow buttons for each screen, so I don't have to type millions of things in by hand.

The script lives in:  /cvs/cds/rtcds/caltech/c1/scripts/MEDMsnapshots, and it's called....wait for it....... "updatesnap".

Currently the update snapshot script looks at the 3 letters after "C1" to determine which folder to put the snapshots in (it can also handle the case where there is no C1, e.g. OAF_OVERVIEW.adl still goes to the c1oaf folder).  If the 3 letters after C1 are SYS, then it puts the snapshot into /opt/rtcds/caltech/c1/medm/c1sys/snap/MEDM_SCREEN_NAME.adl, as in the sketch that follows.
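
Toy sketch of that rule (my illustration, not the actual updatesnap script):

      import os.path

      def snap_dir(screen, medm_root="/opt/rtcds/caltech/c1/medm"):
          base = os.path.basename(screen).replace(".adl", "")
          sys3 = base[2:5] if base.startswith("C1") else base[:3]
          return os.path.join(medm_root, "c1" + sys3.lower(), "snap")

      print(snap_dir("C1SYS_EXAMPLE.adl"))   # .../medm/c1sys/snap
      print(snap_dir("OAF_OVERVIEW.adl"))    # .../medm/c1oaf/snap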

Mostly this is totally okay, but a few subsystems have incongruous names.  For example, there are screens called "C1ALS...." in the c1gcv folder.  Is it okay if these snapshots go into a /c1als/snap folder, or do I need to figure out how to put them in the exact same folder they currently live in?  Or, perhaps, why aren't they just in a c1als folder to begin with?  It seems like we just weren't careful when organizing these screens.

Another problematic one is the C1_FE_STATUS.adl screen.  Can I create a c1gds folder and rename that screen to C1GDS_FE_STATUS.adl?  Objections?

 

  6041 | Tue Nov 29 18:31:40 2011 | Den | Update | digital noise | Foton error

In the previous elog we compared the Matlab and Foton SOS representations using a low-order filter. Now we move on to high-order filters and see that Foton is pretty bad here.

We consider a Chebyshev filter of the first type with cut-off frequency 12 Hz and ripple 1 dB. In the table below we summarize the GAINS obtained by Matlab and Foton for different digital filter orders.

Order | Matlab        | Foton
------|---------------|--------------
2     | 5.1892960e-06 | 5.1892960e-06
4     | 6.8709377e-12 | 6.8709378e-12
6     | 5.4523201e-16 | 9.0950127e-18
8     | 5.3879305e-21 | 1.2038559e-23

We can see that for high orders the gains are completely different (by ~2 orders of magnitude!). Interestingly, despite the very bad GAIN, Foton calculates the SOS matrix pretty well - I checked up to 5 digits, full agreement. Only the GAIN is very bad.

The filter considered is cheby1("LowPass",6,1,12) and is part of the bad Cheby filter where we lose coherence and see some other strange things.
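
For anyone re-checking this, a sketch of the Matlab-side numbers using scipy (my assumption being that scipy's cheby1 design should reproduce Matlab's here):

      from scipy.signal import cheby1

      fs, fc = 16384.0, 12.0
      for order in (2, 4, 6, 8):
          z, p, k = cheby1(order, 1, 2 * fc / fs, btype="low", output="zpk")
          print(order, k)   # overall gain, to compare against the table above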

  6061 | Thu Dec 1 18:30:39 2011 | Vladimir, Den | Update | digital noise | Foton error

Foton / Matlab error

We investigated the discrepancy between the Matlab and Foton numbers some more. The comparison of cheby1(k, 1, 2*12/16384) was done between the versions implemented in Matlab, R and Octave. The filters created by R and Octave agree with Foton.

Also, we found that Matlab has gross precision errors for cutoff frequencies just smaller than the one used in our filter; for example cheby1(6, 2*3/16384) fails miserably.

  6062 | Fri Dec 2 17:43:46 2011 | rana | Update | digital noise | Foton error

It would be useful to see some plots, so we could figure out exactly what magnitude and phase errors correspond to "gross" and "miserable".

  15287 | Tue Mar 31 09:39:41 2020 | gautam | Update | CDS | Foton for shaped noise injections

I'd like to re-measure the transfer function from driving MC2 position to the MC_L_DQ channel (for feedforward purposes). Swept sine would be one option, but I can't get the "Envelope" feature of DTT to work: the excitation amplitude isn't being scaled as specified in the envelope, so I'm unable to make the measurement near 1 Hz (which is where the FF is effective). I see some scattered mentions of such an issue in past elogs but no mention of a fix (I also feel like I have gotten the envelope function to work for some other loop measurement templates). So I thought I'd try a broadband noise injection, since that seems to have been the approach followed in the past. Again, the noise injection needs to be shaped around ~1 Hz to avoid knocking the IMC out of lock, but I can't get Foton to do shaped noise injections because it doesn't inherit the sample rate when launched from inside DTT/awggui. This is not a new issue - does anyone know the fix?

Note that we are using the gds-2.15 install of foton; the pre-packaged foton that comes with the SL7 installation doesn't work either.

Update:

The envelope feature for swept sine wasn't working because I specified the frequency grid in the wrong order, apparently. Erik von Reis has been notified, so that a sorting step can be included in future DTT and the grid can be in arbitrary order. Fixing that allows me to run a swept sine with an enveloped excitation amplitude and hence get the TF I want - but still no shaped noise injections via foton 😢

  15288 | Tue Mar 31 23:35:50 2020 | rana | Update | CDS | Foton for shaped noise injections

Do you really mean that awggui cannot make shaped noise injections via its foton text box? That has always worked for me in the past.

If this is broken, I'm suspicious that there have been some package installs to the shared dirs by someone.

  15289 | Tue Mar 31 23:54:57 2020 | gautam | Update | CDS | Foton for shaped noise injections

The problem is that foton does not inherit the model sample rate when launched from DTT/awggui. This is likely some shared/linked/dynamic library issue; the binaries we are running are precompiled, presumably for some other OS. I've never gotten this to work since we changed to SL7 (but I did use it successfully in 2017 with the Ubuntu 12 install).

Quote:

Do you really mean that awggui cannot make shaped noise injections via its foton text box? That has always worked for me in the past.

If this is broken, I'm suspicious that there have been some package installs to the shared dirs by someone.

  3659 | Wed Oct 6 12:00:23 2010 | josephb, yuta, kiwamu | Update | CDS | Found and fixed filter sampling rate problem with suspensions

While we were using dtt and going over the basics of its operation, we discovered that the filter sample rates for the suspensions were still set to 2048 Hz, rather than the 16384 Hz of the new front end.  This caused the filters loaded into the front ends to not behave as expected.

After correcting the sample rate, the transfer functions obtained from dtt now look like the bode plots from foton.

We fixed the C1SUS.txt and C1MCS.txt files in the /opt/rtcds/caltech/c1/chans/ directory by changing the SAMPLING lines to have 16384 rather than 2048. A sketch of the edit is below.
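
A sketch of the corresponding edit (not necessarily the procedure we used, and assuming the usual 'SAMPLING name rate' line format of the chans files):

      import re

      for path in ("/opt/rtcds/caltech/c1/chans/C1SUS.txt",
                   "/opt/rtcds/caltech/c1/chans/C1MCS.txt"):
          text = open(path).read()
          text = re.sub(r"(SAMPLING\s+\S+\s+)2048\b", r"\g<1>16384", text)
          open(path, "w").write(text)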

  14500 | Fri Mar 29 11:43:15 2019 | Jon | Update | Upgrade | Found c1susaux database bug

I found that the current bias output channels, C1:SUS-<OPTIC>_<DOF>BiasAdj, were all pointed at C1:SUS-<OPTIC>_ULBiasSet for every degree of freedom. The same issue appeared in all eight database files (one per optic), so it looks like a copy-and-paste error. I fixed them all to reference the correct degree of freedom.

  8653 | Thu May 30 01:02:41 2013 | Manasa | Update | Green Locking | Found it!

X green beat note found!

Key points
1. Near-field and far-field alignment on the PSL table. The near-field alignment was checked by looking at the camera, and the far-field alignment by letting the beams propagate after removing the DC PD.
2. Check the laser temperature and get a sense of how the offset translates into the actual laser temperature.
3. Get an idea of the expected laser temperature using the plot in the elog.

Data
PSL laser temperature = 31.45 deg C

X end laser temperature = 39.24 deg C
C1:ALS-X_SLOW_SERVO2_OFFSET = 4810
Amplitude of beat note = -40 dBm

I do not understand why
1. The amplitude of the beat note falls off linearly with frequency (peak traced using the 'hold' option of the spectrum analyzer).
2. I found the beat note at the RF output of the PD. Earlier, while searching for the beat note at the RF mon output of the beatbox, there was a strong peak at 29.6 MHz that persisted even when the green shutters were closed. Its source has to be traced.

Next
Solve the beatbox puzzle and lock the arm using ALS.

Attachment 1: IMG_0598.JPG
Attachment 2: IMG_0599.JPG

  4980 | Sun Jul 17 18:23:23 2011 | Jenne | Update | PSL | Found the PMC unlocked

It had been unlocked since ~4:30 am; no idea why.  It's relocked, so I can try round N of measuring the PRC length.

Attachment 1: PMCunlocked_17July2011.png
  3755 | Thu Oct 21 18:45:50 2010 | Koji | Update | PSL | Found the beat at 1064nm

[Koji, Suresh]

We found the beat at 1064nm. T(PSL)=26.59deg, T(X-end)=31.15deg.

The X-end laser was moved to the PSL table.

The beating setup was quickly constructed, with mode matching based on beam-profile measurements made with IR cards.
We used the 1GHz BW PD, Newfocus #1611-FS-AC.

As soon as we swept the Xtal temp of the X-end laser, we found a strong beat.

  3756 | Thu Oct 21 19:10:39 2010 | Aidan | Update | PSL | Found the beat at 1064nm

Quote:

[Koji, Suresh]

We found the beat at 1064nm. T(PSL)=26.59deg, T(X-end)=31.15deg.

The X-end laser was moved to the PSL table.

The beating setup was quickly constructed, with mode matching based on beam-profile measurements made with IR cards.
We used the 1GHz BW PD, Newfocus #1611-FS-AC.

As soon as we swept the Xtal temp of the X-end laser, we found a strong beat.

 

  3759 | Fri Oct 22 01:23:13 2010 | Koji | Update | PSL | Found the beat at 1064nm

[Koji / Suresh]

We worked on the 1064 nm beating a bit more.

- First of all, the FSS and FSS SLOW servos were disabled so as not to have a varying Xtal temp on the PSL.

- The PSL Xtal temp (T_PSL) was scanned from 22 deg to 45 deg while we searched for the Xtal temp (T_Xend) of the X-end laser that brings the beat freq well down (f < 30 MHz).
The pumping current for each laser was I_PSL = 2.101 A and I_Xend = 2.000 A.

For a given T_PSL we found multiple values of T_Xend, because the laser frequency is not a monotonic function of the Xtal temperature (see the Innolight manual).

The T_Xend values that gave us a beat fall into the three sets shown in the figure. The set on "curve2" is the steadiest one. (Use this!)
The trends were quite linear, but the slope was slightly off from unity.

- T_PSL was scanned to see the trend of the PMC output.

The PMC was sometimes locked to a mode with lower transmission (V_PMCT ~ 3.0 V).
When T_PSL ~ 31 deg we consistently locked the PMC at the higher transmission (V_PMCT ~ 5.3 V).

For now we chose the operating point T_PSL = 32.25 deg, V_PMCT = 5.34, where we found the beat at T_Xend = 38.28 deg.

- We cleaned up the PSL table beyond how it was; the tools were returned to their original places.
The X-end laser was shut down and left on the PSL table.

NEXT:
Kiwamu can move the X-end laser to the X end and realign it.
Then we should be able to see the green beat quite easily.

Attachment 1: 101021_beat.pdf
  8913 | Tue Jul 23 21:32:43 2013 | Koji | Update | IOO | Found the cause of mysterious MC motion

These days we have been continuously annoyed by unELOGGED activities on the interferometer.

The MC2 LOCKIN was left on and has continuously injected frequency noise and beam-pointing modulation
during all of the commissioning / vent preparation.

C1:SUS-MC2_LOCKIN2_OSC_FREQ was 0.075
C1:SUS-MC2_LOCKIN2_OSC_CLKGAIN was 99

More than a week ago we noticed that the trace on the MC WFS strip chart had suddenly gotten THICKER.
MC WFS, arm transmission, beam pointing... everything was modulated.
It was not a WFS instability, and it was not the cavity mirrors.

Today I investigated and finally tracked the cause of this issue down to the MC2 suspension.
It was then found that this LOCKIN was ON.

There is no direct record of this lockin in the frame files.
From the recorded channel "C1:IOO-WFS2-YAW_OUT16" (which is the trace on the StripTool chart on the wall),
it was turned on at July 10th, 2:00 UTC (July 9th, 7 PM PDT).

  8917 | Wed Jul 24 14:26:24 2013 | rana | Update | IOO | Found the cause of mysterious MC motion

Yes, this was not ELOG'd by me, unfortunately. This was the MC tickler which I described to some people in the control room when I turned it on.

As Koji points out, with the MCL path turned off this injects frequency noise and pointing fluctuations into the MC. With the MCL path back on it would have a very small effect. After the pumpdown we can turn it back on and have it disabled after lock is acquired. Unfortunately, our LOCKIN modules don't have a ramp available for the excitation, so this will produce some transients (or perhaps we can ezcastep it for now). Eventually, we will modify this CDS part so that we can ramp the sine wave.

  9046 | Wed Aug 21 19:26:19 2013 | rana | Update | IOO | Found the cause of mysterious MC motion

Quote:

Yes, this was not ELOG'd by me, unfortunately. This was the MC tickler which I described to some people in the control room when I turned it on.

As Koji points out, with the MCL path turned off this injects frequency noise and pointing fluctuations into the MC. With the MCL path back on it would have a very small effect. After the pumpdown we can turn it back on and have it disabled after lock is acquired. Unfortunately, our LOCKIN modules don't have a ramp available for the excitation, so this will produce some transients (or perhaps we can ezcastep it for now). Eventually, we will modify this CDS part so that we can ramp the sine wave.

I've written a new TICKLE script using the newly found 'cavget' and 'cavput' programs. They are in the standard EPICS distribution as extension binaries. They allow multichannel reads/writes as well as ramping, delays, incremental steps, etc.: http://www.aps.anl.gov/epics/tech-talk/2012/msg01465.php

Running them from the command line, they seem to work fine, but I've left the tickle OFF for now. I'll switch it into the MC autolocker at some point soon.

  6160 | Tue Jan 3 17:25:27 2012 | Koji | Update | PSL | Found the laser was off

I found that the PSL laser had been off for four hours. Nobody seemed to know why.

I just turned it on, and it is now providing about 10% less power than before the shutdown.
Let's keep an eye on the power to see if it recovers as the housing warms up.

  6582 | Mon Apr 30 13:00:50 2012 | Suresh | Update | CDS | Frame Builder is down

The frame builder is down.  PRM has tripped its watchdogs.  I have reset the watchdog on PRM and turned on the oplev; it has damped down.  Unable to check what happened since the FB is not responding.

There was a minor earthquake yesterday morning, which people could feel a few blocks away.  It could have caused the PRM to trip.

Jamie, Rolf: is it okay for us to restart the FB?

  6583 | Mon Apr 30 13:58:25 2012 | Jamie | Update | CDS | Frame Builder is down

Quote:

The frame builder is down.  PRM has tripped its watchdogs.  I have reset the watchdog on PRM and turned on the oplev; it has damped down.  Unable to check what happened since the FB is not responding.

There was a minor earthquake yesterday morning, which people could feel a few blocks away.  It could have caused the PRM to trip.

Jamie, Rolf: is it okay for us to restart the FB?

If it's down, it's always ok to restart it.  If it doesn't respond or immediately crashes again after a restart, then it might require some investigation, but it should always be ok to restart it.

  6584 | Mon Apr 30 16:56:05 2012 | Suresh | Update | CDS | Frame Builder is down

Quote:

Quote:

The frame builder is down.  PRM has tripped its watchdogs.  I have reset the watchdog on PRM and turned on the oplev; it has damped down.  Unable to check what happened since the FB is not responding.

There was a minor earthquake yesterday morning, which people could feel a few blocks away.  It could have caused the PRM to trip.

Jamie, Rolf: is it okay for us to restart the FB?

If it's down, it's always ok to restart it.  If it doesn't respond or immediately crashes again after a restart, then it might require some investigation, but it should always be ok to restart it.

I tried restarting the fb in two different ways.  Neither of them re-established the connection to dtt or epics.

1) I restarted the fb from the control room console with the 'shutdown' command.  No change.

2) I halted the machine with 'shutdown -h now' and restarted it with the hardware reset button on its front panel.  No change.

The console connected to the fb showed that the network file systems did not load.  Could this have resulted in a failure to start several services, since it could not find the files stored on the network file system?

The fb is otherwise healthy, since I am able to ssh into it and browse the directory structure.

  6586 | Mon Apr 30 20:43:33 2012 | Suresh | Update | CDS | Frame Builder is down

Quote:

Quote:

Quote:

The frame builder is down.  PRM has tripped its watchdogs.  I have reset the watchdog on PRM and turned on the oplev; it has damped down.  Unable to check what happened since the FB is not responding.

There was a minor earthquake yesterday morning, which people could feel a few blocks away.  It could have caused the PRM to trip.

Jamie, Rolf: is it okay for us to restart the FB?

If it's down, it's always ok to restart it.  If it doesn't respond or immediately crashes again after a restart, then it might require some investigation, but it should always be ok to restart it.

I tried restarting the fb in two different ways.  Neither of them re-established the connection to dtt or epics.

1) I restarted the fb from the control room console with the 'shutdown' command.  No change.

2) I halted the machine with 'shutdown -h now' and restarted it with the hardware reset button on its front panel.  No change.

The console connected to the fb showed that the network file systems did not load.  Could this have resulted in a failure to start several services, since it could not find the files stored on the network file system?

The fb is otherwise healthy, since I am able to ssh into it and browse the directory structure.

[Mike, Rana]

The fb is okay.  Rana found that it works on Pianosa, but not on Allegra or Rossa.  It also works on Rosalba, on which Jamie recently installed Ubuntu.

The white fields on the medm 'Status' screen for fb are an unrelated problem.

 

 

  6591 | Tue May 1 08:18:50 2012 | Jamie | Update | CDS | Frame Builder is down

Quote:

 

I tried restarting the fb in two different ways.  Neither of them re-established the connection to dtt or epics.

Please be conscious of which component is doing what.  The problem you were experiencing was not "frame builder down"; it was "dtt not able to connect to frame builder".  Those are potentially completely different things.  If the front-end status screens show that the frame builder is fine, then it's probably not the frame builder.

Also "epics" has nothing whatsoever to do with any of this.  That's a completely different set of stuff, unrelated to DTT or the frame builder.

  10600 | Mon Oct 13 16:08:49 2014 | Jenne | Update | CDS | Frame builder is mad

I think the daqd process isn't running on the frame builder. 


I tried telnetting to fb's port 8087 (telnet fb 8087) and typing "shutdown", but so far that is hanging and hasn't returned a prompt in the last few minutes.  Also, if I do a "ps -ef | grep daqd" in another terminal, it hangs.

I wasn't sure if this was an NTP problem (although that has been indicated in the past by 1 red block, not 2 red blocks and a white one), so I did "sudo /etc/init.d/ntp-client restart", but that didn't make any change.  I also did an mxstream restart just in case, but that didn't help either.

I can ssh to the frame builder, but I can't open another telnet session (the first one is still hung); I get an error: "telnet: Unable to connect to remote host: Invalid argument".

Thoughts and suggestions are welcome!

  10601 | Mon Oct 13 16:57:26 2014 | Koji | Update | CDS | Frame builder is mad

The CPU load seems extremely high (the first three numbers below are the 1-, 5-, and 15-minute load averages).  You need to reboot it, I think:

controls@fb /proc 0$ cat loadavg
36.85 30.52 22.66 1/163 19295

  10602 | Mon Oct 13 17:09:38 2014 | ericq | Update | CDS | Frame builder is mad

 

This CPU load may have been me deleting some old frame files, to see if that would allow daqd to come back to life. 

Daqd was segfaulting and behaving in a manner similar to what is described here: (stack exchange link).  However, I couldn't kill or revive daqd, so I rebooted the FB.

Things seem ok for now...

 
