ID   Date   Author | Type | Category | Subject
  2156   Wed Mar 28 15:29:42 2018   Koji | Notes | scatter | Higher quality vaccan windows: 40m stock of wedged windows

That was unfortunate. But why is BK7 incompatible with the purpose? I thought we only need UV fused silica for the high-power reason.

  2155   Wed Mar 28 11:29:19 2018   awade | Notes | scatter | Higher quality vaccan windows: 40m stock of wedged windows

I just checked the 40m's stock of wedged, AR-coated optics in the pull-out drawers.

It looks like there are about nine CVI W2-LW-1-1025-UV-1064-45P windows: these are 1° wedged and coated on both sides for 45° incidence, p-pol.  I don't think this is what we want (i.e. 45° incidence, p-polarized). Also, 1 inch might be inconveniently small to use in practice.

There is one CVI W2-LW-1-2050-C-1064-0; this is not UV-grade fused silica, so it should probably not be used. Also, we need four.

Everything else is either coated on only one side or has the wrong type of glass, wedge, or coating.

 

  2154   Mon Mar 26 16:54:29 2018   Craig | DailyProgress | North Cavity | NCAV likes to switch to TEM10 mode

A couple of observations from the lab today:

1) The TRANS and REFL for both cavities are a lot more stable ever since we landed the table, but are still not completely stationary on the order of 30 minutes.  They are now correlated (as TRANS goes up, REFL also goes up), which suggests these changes are most likely due to EAOM temperature fluctuations.  These fluctuations are much smaller than the alignment drifts we were having from the air springs, and slightly smaller than those from the floating table wobbling around.  We need to turn on the ISS.

2) The North cavity sometimes glitches and jumps from the TEM00 to the TEM10 mode (pics attached, since it did happen).  Turning on the autolocker helps here since the TEM10 mode doesn't produce a high enough TRANS voltage to clear the threshold, so it unlocks the cavity and tries to find the TEM00 mode again.  It is unclear why the NCAV would make a massive jump in frequency like this in the first place; it could result from a PZT glitch.

Attachment 1: NorthCavityLockedtoTEM10Mode.jpg
  2153   Fri Mar 23 18:22:53 2018   Craig | DailyProgress | BEAT | Floated vs Landed Table Spectrum

I landed the table today to diagnose our TRANS and REFL power fluctuations and to enable easier scattering searches.  (It seems that our TRANS and REFL power fluctuations have ceased; we need more time to be sure, but this points to cavity misalignments from slight table motion causing the power fluctuations.)

I noticed right away that the spectrum changed; in particular, the scattering shelf extended to even higher frequencies.  I made a plot comparing the beatnote from just before and just after landing the table using ~/Git/cit_ctnlab/ctn_labdata/scripts/SeismicAndScatteringStudy.ipynb
Data GPSTimes for Floated Table = 1205884032 to 1205884332
Data GPSTimes for Landed Table = 1205888438 to 1205888738

Attachment 1: CTNLabBeatnoteASD_gpsStart_1205884032.pdf
  2152   Fri Mar 23 15:12:00 2018   Craig | DailyProgress | scatter | Powerpoint Trans Table Diagram

This diagram is for quickly keeping track of scattering beams/scattering resonances on the trans table.  It needs to be improved by making a proper drawing in SolidWorks or something similar.

Attachment 1: 20180323_CTNLabTransmissionTableDrawing.pdf
Attachment 2: 20180323_CTNLabTransmissionTableDrawing.pptx
  2151   Thu Mar 22 17:41:24 2018   Craig | DailyProgress | BEAT | Beatnote Spectrogram jupyter notebook

I've created a spectrogram (Fig. 1) in an IPython notebook at ~/Git/cit_ctnlab/ctn_labdata/scripts/SeismicAndScatteringStudy.ipynb. I also attached a median beatnote ASD (Fig. 2) for reference.
This is the beginnings of a study to look for coherence between our accelerometers and beatnote to figure out the velocity of our scatterers.

It takes a little while for nds2 to acquire the data from Gabriele's cymac3, because the data are sampled at 65536 Hz.  For 300 seconds of data, the script takes 172 seconds to retrieve it. 

I'm currently thinking about ways to make this spectrogram span on the order of days as opposed to minutes, so we can get a long-term idea about our beatnote.  Even looking at this, it seems like the scattering shelf does oscillate a bit at around 10 Hz.  Our 500 Hz hump seems pretty constant on this time scale as well.
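For reference, here is a minimal sketch of the fetch-and-spectrogram step the notebook performs (the server address and beatnote channel name below are placeholders, not necessarily what the notebook actually uses):

import nds2
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

nds_host = '10.0.1.xxx'                      # placeholder: substitute the real nds2 server
beat_chan = 'C3:PSL-PRECAV_BEATNOTE'         # placeholder channel name
conn = nds2.connection(nds_host, 8088)
gps_start = 1205785966
buf = conn.fetch(gps_start, gps_start + 300, [beat_chan])[0]
fs = 65536.0                                 # sample rate quoted above
f, t, Sxx = spectrogram(buf.data, fs=fs, nperseg=int(fs), noverlap=int(fs) // 2)
plt.pcolormesh(t, f, np.sqrt(Sxx))           # roughly an ASD per one-second segment
plt.yscale('log')
plt.xlabel('Time [s]')
plt.ylabel('Frequency [Hz]')
plt.show()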

Attachment 1: CTNLabBeatnoteSpectrogram_TimeLength_300s_gpsStart_1205785966.pdf
Attachment 2: CTNLabBeatnoteASD_gpsStart_1205785966.pdf
  2150   Wed Mar 21 11:07:17 2018   awade | DailyProgress | scatter | The Sentinels: Transmission Table Black (Green) Glass Beam Dumps

Which direction are you trying to dump? Into or out of the can?

The scatter going inward is being deflected rather than trapped. It is still attenuated, which is better. Light that is reflected off the cavities/window is normally reflected back off the black (green) glass. Would we be better off with just two pieces of glass (in a vee)? Or a double trap with a vee pointing forward and backward (an X-dump)?

 

Quote:

Scattering is a huge problem in our setup, and we aren't sure where exactly the offending scattering is coming from.  The most basic thing to do is to go through our entire optics table, find all stray beams, dump them, then see what kind of spectrum we're left with.  At that point we can try more advanced techniques, like buzzing and damping resonant optics or upconverting the scatter source out of our band.

Many stray beams are coming directly from our cavities and polluting our transmission table.  Also, some beams are trying to make their way back into the can.  These beams tend to be close to the main beam, making dumping difficult.
To aid with dumping these mutant beams, I have created what I call the Sentinels.  The Sentinels stand guard at the transmission window of the vaccan, daring any puny beams to interfere with the main beam.

 

 

  2149   Wed Mar 21 00:40:01 2018   Craig | DailyProgress | scatter | The Sentinels: Transmission Table Black Glass Beam Dumps

Scattering is a huge problem in our setup, and we aren't sure where exactly the offending scattering is coming from.  The most basic thing to do is to go through our entire optics table, find all stray beams, dump them, then see what kind of spectrum we're left with.  At that point we can try more advanced techniques, like buzzing and damping resonant optics or upconverting the scatter source out of our band.

Many stray beams are coming directly from our cavities and polluting our transmission table.  Also, some beams are trying to make their way back into the can.  These beams tend to be close to the main beam, making dumping difficult.
To aid with dumping these mutant beams, I have created what I call the Sentinels.  The Sentinels stand guard at the transmission window of the vaccan, daring any puny beams to interfere with the main beam.

 

Attachment 1: TheSentinels_TransmissionTableBlackGlassBeamDumps.jpg
  2148   Mon Mar 19 14:22:57 2018   Craig | DailyProgress | Other | Cavity Power fluctuations vs Temp Fluctuations

Considering I can't even lock the North cavity today because of alignment drift, I'd upgrade this to super high priority.

Looks like you need to connect the air springs via this black tubing, and not the clear 1/4'' tubing.  There are 8 of these metal screw clamps on the North side of the table and 5 on the other side.  These are surely the cause of leaking, but I don't have any good ideas for how to eliminate them.  I'll google around for ways to eliminate this black tubing from the setup so we don't have 13 leaky connections.

At this rate, the cylinder will hit the low 100s (psi) later this week.  This is why I'm disconnecting the air springs and shimming up the vaccan for now. 

Quote:

I guess going back to the wall supply would be a stupid step backwards.  We need to fix this leak; it's either in the tubing or the air springs themselves.  

Can you check out what the connection to the air springs is and whether this can be connected to the standard clear tubing airlines?  We are using the standard 1/4" tubing (see Newport Pneumatic Isolator Accessories).  I'm not even sure if they are supposed to be permanently hooked up to air; the Newport air springs seem to have a Schrader valve (car/bike valve) on the side, and the description suggests that they only need to be pumped up and leveled once.   They do degrade. If that is the case then we need to assess whether these older rubber diaphragms need replacing.

This is moderate priority. Scattering is top of the list.

---

We need to return this cylinder when getting a new one.  It will need to reach the low 100s, then we hit reorder on N2.

 

Quote:

Ever since I attached the vaccan air springs to the cylinder, it has been rapidly losing air pressure.  It is now down to ~1100 psi, where on Thursday it was ~1800 psi.  At this rate we will have to order a new cylinder, and figure out how to make the air springs less leaky, as this is affecting our alignment over time.

For now, I'll relock the cavities, replace the shims, and turn off the air springs as they are causing more harm than good.  We will rely on the floating table until then.

 

 

Attachment 1: NorthAirSpringTubing2.jpg
Attachment 2: NorthAirSpringTubing.jpg
  2147   Mon Mar 19 13:29:24 2018   awade | DailyProgress | Other | Cavity Power fluctuations vs Temp Fluctuations

I guess going back to the wall supply would be a stupid step backwards.  We need to fix this leak; it's either in the tubing or the air springs themselves.  

Can you check out what the connection to the air springs is and whether this can be connected to the standard clear tubing airlines?  We are using the standard 1/4" tubing (see Newport Pneumatic Isolator Accessories).  I'm not even sure if they are supposed to be permanently hooked up to air; the Newport air springs seem to have a Schrader valve (car/bike valve) on the side, and the description suggests that they only need to be pumped up and leveled once.   They do degrade. If that is the case then we need to assess whether these older rubber diaphragms need replacing.

This is moderate priority. Scattering is top of the list.

---

We need to return this cylinder when getting a new one.  It will need to reach the low 100s, then we hit reorder on N2.

 

Quote:

Ever since I attached the vaccan air springs to the cylinder, it has been rapidly losing air pressure.  It is now down to ~1100 psi, where on Thursday it was ~1800 psi.  At this rate we will have to order a new cylinder, and figure out how to make the air springs less leaky, as this is affecting our alignment over time.

For now, I'll relock the cavities, replace the shims, and turn off the air springs as they are causing more harm than good.  We will rely on the floating table until then.

 

  2146   Mon Mar 19 13:09:43 2018   awade | Misc | Electronics Equipment | Phase Noise for 21.5 MHz Sine Wave

Great, it looks like you've got your setup working.

A few things about elogging, though. More information is almost always better. It would be good to add a bit more about your setup so that people know what you actually did, and so you can repeat it if you come back in the future to look at your posts.

Maybe you can add another post with a schematic of your experiment labeled with part numbers, frequencies, power levels, etc.: everything somebody else would need to build the same setup. The elog also has the ability to include LaTeX markup, which is handy for posting a few key equations.  For example, there are a few Rigol function generators around; in my elogs I find it helpful to explicitly include part numbers and to hyperlink those labels to the website/datasheet of those components.  You want to actually explain what you did in some detail; some people use dot points, others write full sentences and paragraphs.  The main thing is to include lots of context and useful information about what happened.

The plot looks OK, but you want to increase the font sizes and include units on the y-axis.  I'm not really sure what the measurement you made was: it says it's a transfer function, but it should be in units that make sense, like rad/sqrtHz or Hz/sqrtHz. Craig is good at making plots in Python; maybe have a chat with him about how to make nice plots.

Quote:

The Phase Detector method was used to measure the phase noise of the 21.5 MHz Sine Wave generated by the RIGOL Waveform Function Generator. Noise measurements were taken using the SR785 across frequencies spanning 0.25 Hz to 102.4 kHz.

 

  2145   Mon Mar 19 12:10:27 2018   Craig | DailyProgress | Other | Cavity Power fluctuations vs Temp Fluctuations

Ever since I attached the vaccan air springs to the cylinder, it has been rapidly losing air pressure.  It is now down to ~1100 psi, where on Thursday it was ~1800 psi.  At this rate we will have to order a new cylinder, and figure out how to make the air springs less leaky, as this is affecting our alignment over time.

For now, I'll relock the cavities, replace the shims, and turn off the air springs as they are causing more harm than good.  We will rely on the floating table until then.

Quote:

I added a pressurized pipe tee to the nitrogen cylinder in our lab so we could float the table and the vaccan air springs from the same cylinder, as opposed to before, when the air springs were floated from the wall.  This way we know that the air pressure is not changing every fifteen minutes.

The cylinder pressure is ~1800 psi right now, and I set the pressure regulator to 40 psi again (I took before and after pictures, pics 1 and 3).  I also took a picture of the wall gauge before I disconnected the air springs from it, it was set to ~40 psi as well.

The final plot is the same nine channels I looked at this morning, 25 minutes of data.  It seems our guess was right, because there are no longer ~15 minute TRANS and REFL waves.  Cool.

Quote:

Our initial impression of cavity power fluctuations was that temperature fluctuations in the EAOMs were the cause.  To check this, I made some REFL DC monitors yesterday.
Plotted is one hour of data from today.  Some notes:
1) TRANS DC and REFL DC for both cavities are breathing together every fifteen minutes, and are anticorrelated (one goes up, the other goes down). 
2) The temperature monitors are not fluctuating with the same regularity as the power monitors.
3) The REFL DC for the north PMC is fluctuating with temperature

This leads me to believe our power fluctuations are caused by changes of alignment into the cavities from the air springs holding up our vacuum can. 
Right now, the air springs are hooked up to the wall.  There is probably some pressure regulator which switches on and off every fifteen minutes.  To fix this, I'm going to switch the vacuum air springs over to our nitrogen cylinder we have in the lab and see if the fluctuations go away.

 

 

Attachment 1: AirSpringPressureRegulator_20180319.jpg
  2144   Sun Mar 18 10:17:16 2018   Shu Fay | Summary | Plotting program quickplot.py

quickplot.py makes quick plots of data from desired channels. See: https://github.com/shufay/LIGO-plots.

On ws1, cd to ~/Git/LIGO-plots. In IPython:  %run quickplot.py <channel 1> <channel 2> ... <(optional) gpsLength> <(optional) gpsStop>  (an example invocation is given below the argument list).

To see usage: %run quickplot.py usage

Arguments:

<channel 1> <channel 2> ... Channels that you want to make plots of

<gpsLength> Length of time to fetch data. Default is 3600s.

<gpsStop> GPS time to fetch data until. Default is now. So the default parameters would fetch data from (now-3600, now).
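For example, a hypothetical invocation using channel names that appear elsewhere in this log (any valid channels work), plotting the last 30 minutes of each:

%run quickplot.py C3:PSL-SCAV_FSS_SLOWOUT C3:PSL-NCAV_REFL_DC 1800

With gpsStop omitted, this fetches data from (now-1800 s, now) for both channels and makes one plot per channel.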

Attachment 1: Figure_2.png
Attachment 2: Figure_1.png
  2143   Sat Mar 17 14:32:05 2018   Stella | Misc | Electronics Equipment | Phase Noise for 21.5 MHz Sine Wave

The Phase Detector method was used to measure the phase noise of the 21.5 MHz Sine Wave generated by the RIGOL Waveform Function Generator. Noise measurements were taken using the SR785 across frequencies spanning 0.25 Hz to 102.4 kHz.

Attachment 1: RIGOLPhasenoise_Spectra.pdf
  2142   Fri Mar 16 13:09:35 2018   awade, Craig | DailyProgress | TempCtrl | Beatnote Stabilization: Relay tuning of PID feedback coefficients

Beat note slewing is still a problem, and it will be some time before Shruti (this year's summer SURF) is here to tackle it with some intelligent controls: e.g. neural networks/machine learning, Kalman filters, Wiener filters (?), etc.

This post is an intermediate one; I will post something in more detail about using relay tests to find PID parameters at some later time.

PID controlling beat note still not well tuned

In a previous test of controlling the beat note frequency with PID feedback to the north cavity heat shield, I guessed the P, I, and D values.  Reaching optimal values with the usual human method of driving the loop close to instability and then backing off proved to be very difficult.  For the cavity-heater system the time constants for settling to thermal equilibrium are very long.  The critical period appears to be on the order of 20 minutes (see below).  However, with the loop engaged and very close to the point of inducing instability, the oscillations can extend out to many hours. It is difficult to keep track of the various gain adjustments and their impact, especially when many other parts of the experiment are dropping lock, drifting, ringing, etc. I found it difficult to judge the goodness of the last adjustment when there are a bunch of spurious step functions induced by humans interacting in the lab and outside.

It is also really difficult to assess whether one has truly hit the optimal critically damped condition with no long-term instability causing oscillations.  Lots of people offer advice and rules of thumb on how to tune to 'good enough'.  If tolerances are relatively loose then that might be OK, but we can do better. Ideally we would actively drive the system to probe the plant's properties and come up with an optimization that sets the values in an objective way.  An active optimization maximizes the useful information for the given integration time and removes the human biases in tuning PIDs for plants with very large time lag.

You can see an example of one set of values I chose in PSL:2095.  It took some 12 hours to fully damp down; this was a bad choice of values.  After this I gave up on manual tuning to work on other things.

The Relay Test

There is a way of probing the plant under control to estimate appropriate P, I, and D values: a relay test.  There is some information on this auto-tuning method in [1]. It basically consists of switching out the PID block for a relay function.  The relay does a hard switch from +a to -a depending on the sign of the error signal: it is a step function set about some mean operating point.  This active feedback induces an oscillation in the plant and, once the system settles into regular oscillations, the relay function leads the plant by 90°.  Although the square-wave hard switch will induce Fourier components at higher harmonics of the lowest natural frequency of the plant, in almost all non-pathological plants there is a dominant pole that filters these frequency components out.  Thus, the dominant oscillations induced during a relay test give some clear information about the characteristic response of the plant in a relatively robust way.
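To make the switching rule concrete, here is a rough sketch in python of a single relay-test run (illustrative only, not the lab's RelayAutoTune implementation; read_error and write_actuator are stand-ins for whatever EPICS reads/writes the real script uses):

import time
import numpy as np

def relay_test(read_error, write_actuator, offset, amplitude, duration, dt=1.0):
    """Drive the actuator to offset +/- amplitude based on the sign of the error
    signal, recording the error so the oscillation period/amplitude can be estimated."""
    t0 = time.time()
    times, errors = [], []
    while time.time() - t0 < duration:
        err = read_error()                  # e.g. beatnote frequency minus setpoint
        write_actuator(offset + amplitude if err < 0 else offset - amplitude)
        times.append(time.time() - t0)
        errors.append(err)
        time.sleep(dt)
    times, errors = np.asarray(times), np.asarray(errors)
    # crude period estimate from positive-going zero crossings of the error signal
    crossings = np.where((errors[:-1] < 0) & (errors[1:] >= 0))[0]
    Tc = np.median(np.diff(times[crossings])) if len(crossings) > 1 else float('nan')
    A = 0.5 * (errors.max() - errors.min())  # oscillation amplitude (half peak-to-peak)
    return Tc, A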

This method is similar to the step function test that people sometimes do.  That is a much older method, from the 1940s.  You provide a step kick in the actuation and fit the impulse response to retrieve the critical period and amplitude of the plant's response.  The disadvantage of this method is that one is only getting information from a single kick; you also have to fit the plant response along with any sensor noise, etc.  It is much better to integrate for a longer period and lock in on the frequency of interest. 

From the induced relay oscillations we can extract a critical period (Tc) and a critical gain (Kc) from the ratio of the relay amplitude to the induced peak-to-peak error-signal amplitude. These values give the frequency at which the plant's Nyquist curve first cuts the real axis, i.e. the first frequency at which the plant lags the driving signal by -180°.  This is all the information we need to make a critically damped PID loop (in principle). There is a standard lookup table of Kp, Ki, and Kd gain values for given Tc and Kc in most textbooks.  It turns out these 'standard' values are well known to be bad and frequently give a loop tuning with excessive oscillations.  They get copied from textbook to textbook as part of the canon of PID tuning wisdom. Values that I found to work well in initial tests on the laser slow controls were those given in Table 1 of [2]. For clarity I've tabulated these values below (in case the link dies).

Improved Ziegler-Nichols tuning constants
Type                       Kp        Ti      Td
Original textbook values   0.6 Kc    Tc/2    Tc/8
Little overshoot           0.33 Kc   Tc/2    Tc/3
No overshoot               0.2 Kc    Tc/2    Tc/3

Here Ti and Td are the integral and derivative time constants in the 'standard' form of the PID controller, hence 

K_i = \frac{K_p}{T_i}; \qquad K_d = K_pT_d
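Putting the pieces together: with the usual describing-function approximation the critical gain is Kc ≈ 4a/(πA), where a is the relay amplitude and A is the amplitude (half peak-to-peak) of the induced oscillation. A minimal sketch of turning a measured period and amplitude into gains using the table above:

import numpy as np

def pid_gains_from_relay(a, A, Tc, rule='no overshoot'):
    """Convert relay-test results into parallel-form PID gains.
    a  : relay amplitude, A : induced oscillation amplitude, Tc : critical period.
    Uses the describing-function estimate Kc ~ 4a/(pi*A) and the table above."""
    Kc = 4.0 * a / (np.pi * A)
    table = {'original':         (0.60, 2.0, 8.0),   # Kp = x*Kc, Ti = Tc/y, Td = Tc/z
             'little overshoot': (0.33, 2.0, 3.0),
             'no overshoot':     (0.20, 2.0, 3.0)}
    x, y, z = table[rule]
    Kp = x * Kc
    Ti, Td = Tc / y, Tc / z
    Ki, Kd = Kp / Ti, Kp * Td                        # standard -> parallel form
    return Kp, Ki, Kd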

Initial tests

I did an initial test of this auto-tuning method on the laser slow controls.  With only 120 seconds of integration time it pretty much guessed, on the first try, the PID values that we had set manually. The typical characteristic impulse-response time is on the order of 4.5 seconds in that case.  That isn't a bad effort.

In my initial test on the cavity shield-to-beat-note feedback I chose a relay amplitude of 0.05 W, an average actuator offset of 0.775 W, and a setpoint of 26.5 MHz, and triggered the autotune function for 4 hours.  See attachment 1 for what happened (sorry, no proper plot; not worth it at this stage).  Basically the initial average actuator point was set a little too high for the set point (producing an asymmetric response), and the whole system stayed below the critical oscillation amplitude and simply converged on the set point. The relay amplitude needs to be turned up to induce a much larger response.  I would also guess that there should be an extremely slow feedback to the actuator mean value to keep the plant response symmetric.   

This initial test was a failure.  For reference, the suggested loop adjustment based on the median period and amplitude was kp, ki, kd = 0.76544, 0.01486, 26.28024. Bad.

 

Gradient descent on PID values

This was a failure in my initial tests on the laser slow controls.  For a cost function I integrated the error signal over the course of a step test lasting roughly 10 times the characteristic response time of the plant (laser slow frequency input). I sampled two values of the proportional term and performed a step test on each, then computed a local gradient of the cost function.  The cost function I was using was too susceptible to sensor noise and gave more or less 50:50 guesses in either direction for the next move in tuning parameters, even when it was clear the gain needed to be moved down.  So it was as good as a random walk. 
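For concreteness, the kind of cost function and finite-difference gradient meant here looks something like the sketch below (run_step_test is a stand-in for whatever performs the step test and returns time and error arrays):

import numpy as np

def step_test_cost(run_step_test, kp, duration):
    """Run a step test with proportional gain kp and integrate |error| over the test."""
    t, err = run_step_test(kp, duration)    # arrays of time [s] and error signal
    return np.trapz(np.abs(err), t)

def cost_gradient(run_step_test, kp, dkp, duration):
    """Finite-difference gradient of the cost with respect to the proportional gain."""
    c0 = step_test_cost(run_step_test, kp, duration)
    c1 = step_test_cost(run_step_test, kp + dkp, duration)
    return (c1 - c0) / dkp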

I might get back to this later.  There might be more time-efficient ways to extract the information.

 

Integration into the PIDLocker_beta.py python script

I have integrated the relay test into the beta version of our python locker.  It's called with the --autotune flag; usage is something like 

>  python PIDLocker_beta.py PIDConfig_NCAVHeater.ini --autotune -d 0.05 -t 14400

For more information run  

> python PIDLocker_beta.py --help

You can get the script from our ctn_scripts git repo here: https://git.ligo.org/cit-ctnlab/ctn_scripts; it's called PIDLocker_beta.py.  I have also attached a snapshot of the current version below for future reference. All the auto-tune functionality is contained within a function called RelayAutoTune.  It isn't truly 'auto', as you can see from the section above, but with a bit of playing around with the offset and relay amplitude you can get it to work.

More to come later. For some future reading, mostly for interesting ideas, see [3].

References

[1]  Åström, K. J. & Murray, R. M. Feedback Systems: An Introduction for Scientists and Engineers, Control And Cybernetics 36, (2008) (link). There are many versions; they are not equal.

[2]  Wilson, D. I. Relay-based PID Tuning, Autom. Control Feb/March, 10–12 (2005) (link).

[3]  Hornsey, S., A Review of Relay Auto-tuning Methods for the Tuning of PID-type Controllers, Reinvention: an International Journal of Undergraduate Research 2, issue 2 (2012) (link)

 

Attachment 1: RelayTest_BNtoCavShieldHeater_Failure2018-03-16_at_3.05.48_PM.png
Attachment 2: PIDLocker_beta_snapshot20180316.tar.gz
  2141   Thu Mar 15 22:50:58 2018   Shu Fay Ung | Summary | helping to upgrade lab data acquisition system

Hi, I'm an undergrad and I'll be helping to upgrade the lab data acquisition system. I'm starting off with getting data from fb4 and making plots of lab temperature, laser power, etc., which will lead to posting them on HTML pages.

Link to Github repo: https://github.com/shufay/LIGO-plots

- Shu Fay

  2140   Thu Mar 15 22:48:21 2018   Craig | DailyProgress | Other | Cavity Power fluctuations vs Temp Fluctuations

Quick note on all this:  It seems to take the air springs a very long time to come to a steady state, i.e. the air springs are severely over-coupled. 

Before, when I posted the plots showing the completely steady TRANS and REFL DC, I had forgotten to remove the shims I had used to steady the vaccan.  When I removed them, the air springs took over fully supporting the vaccan and began slowly drifting again, hurting the alignment into the cavities.  This time, though, there was no fifteen-minute oscillation, because our cylinder is outputting 40 psi no matter what.

So I took to making small adjustments to our regulator to get good alignment. This worked, but over a long period of time (~30 minutes) TRANS and REFL began decaying (TRANS decreasing, REFL increasing) due to poor alignment.  I'm still playing this game of making a very fine air pressure adjustment, waiting for a long time, then seeing where we're at with the vaccan alignment.  I'll keep playing until we're at a steady state with the best alignment; then we can go back to realigning the periscopes.

Quote:

I added a pressurized pipe tee to the nitrogen cylinder in our lab so we could float the table and the vaccan air springs from the same cylinder, as opposed to before, when the air springs were floated from the wall.  This way we know that the air pressure is not changing every fifteen minutes.

The cylinder pressure is ~1800 psi right now, and I set the pressure regulator to 40 psi again (I took before and after pictures, pics 1 and 3).  I also took a picture of the wall gauge before I disconnected the air springs from it, it was set to ~40 psi as well.

The final plot is the same nine channels I looked at this morning, 25 minutes of data.  It seems our guess was right, because there are no longer ~15 minute TRANS and REFL waves.  Cool.

Quote:

Our initial impression of cavity power fluctuations was that temperature fluctuations in the EAOMs were the cause.  To check this, I made some REFL DC monitors yesterday.
Plotted is one hour of data from today.  Some notes:
1) TRANS DC and REFL DC for both cavities are breathing together every fifteen minutes, and are anticorrelated (one goes up, the other goes down). 
2) The temperature monitors are not fluctuating with the same regularity as the power monitors.
3) The REFL DC for the north PMC is fluctuating with temperature

This leads me to believe our power fluctuations are caused by changes of alignment into the cavities from the air springs holding up our vacuum can. 
Right now, the air springs are hooked up to the wall.  There is probably some pressure regulator which switches on and off every fifteen minutes.  To fix this, I'm going to switch the vacuum air springs over to our nitrogen cylinder we have in the lab and see if the fluctuations go away.

 

 

  2139   Thu Mar 15 20:26:35 2018   Craig | DailyProgress | Computers | Method to get data off fb4 robustly using python

It is a well known issue that our framebuilder is running slowly relative to our "real" time.  However, we are physicists, so this is not entirely unprecedented.  I have calculated that for every second we experience, the framebuilder experiences 0.89285 seconds, which means that fb4 must be travelling at 45% the speed of light.

However, despite the inevitable logistical issues associated with fb4 being used as a high energy experiment, it still stores valuable data for us in the CTN Lab.  We would like to be able to access this data via python robustly, even if our times do not sync up.  We can just get the time that fb4 thinks it is directly off of it.

First, we need direct access between ws1 and fb4 with no password.  I followed the instructions from this site, and it worked.  V easy.

Next, we need to use python to run an ssh command "caget -t -f10 C4:DAQ-DC0_GPS", where C4:DAQ-DC0_GPS is the channel representing the gpstime fb4 thinks it is.  Followed the directions from this stackoverflow response on ws1.  Boom.  Code copied below for clarity.

In [1]: import subprocess
In [2]: import sys
In [3]: HOST="controls@10.0.1.156"
In [4]: COMMAND="caget -t -f10 C4:DAQ-DC0_GPS"
In [5]: ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
   ...:                        shell=False,
   ...:                        stdout=subprocess.PIPE,
   ...:                        stderr=subprocess.PIPE)
In [6]: result = ssh.stdout.readlines()     
In [7]: print result
['1205175551.0000000000\n']
In [11]: fb4gps = float(result[0].strip('\n'))
In [12]: fb4gps
Out[12]: 1205175551.0

Now we have the time that fb4 thinks it is.  Finally we have to make sure we can plot data from fb4 using that gpstime.  Following the directions from my old elog:

In [8]: import nds2
In [14]: from pylab import *
In [15]: ion()
In [13]: c = nds2.connection('10.0.1.156', 8088)
In [17]: chanName = 'C3:PSL-SCAV_FSS_SLOWOUT'
In [26]: fb4gps = int(fb4gps)
In [32]: data = c.fetch(fb4gps-60, fb4gps, [chanName])
In [33]: plot(data[0].data)
Out[33]: [<matplotlib.lines.Line2D at 0x7ff3c20431d0>]


This should display interactively the latest data from our south slow volt controller, no matter what time fb4 thinks it is.
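To save retyping, the two steps above can be wrapped into one helper (a sketch using the same host, port, and channel as above; error handling omitted):

import subprocess
import nds2

FB4 = "controls@10.0.1.156"

def fb4_gpstime():
    """Ask fb4 what GPS time it thinks it is, via caget over ssh."""
    out = subprocess.check_output(["ssh", FB4, "caget -t -f10 C4:DAQ-DC0_GPS"])
    return int(float(out.strip()))

def fetch_latest(channel, seconds=60):
    """Fetch the most recent `seconds` of `channel`, referenced to fb4's own clock."""
    now = fb4_gpstime()
    conn = nds2.connection('10.0.1.156', 8088)
    return conn.fetch(now - seconds, now, [channel])[0]

# example: last minute of the south slow volt controller
buf = fetch_latest('C3:PSL-SCAV_FSS_SLOWOUT', 60)
plot(buf.data)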

  2138   Thu Mar 15 18:37:06 2018   Craig | DailyProgress | Other | Cavity Power fluctuations vs Temp Fluctuations

I added a pressurized pipe tee to the nitrogen cylinder in our lab so we could float the table and the vaccan air springs from the same cylinder, as opposed to before, when the air springs were floated from the wall.  This way we know that the air pressure is not changing every fifteen minutes.

The cylinder pressure is ~1800 psi right now, and I set the pressure regulator to 40 psi again (I took before and after pictures, pics 1 and 3).  I also took a picture of the wall gauge before I disconnected the air springs from it, it was set to ~40 psi as well.

The final plot is the same nine channels I looked at this morning, 25 minutes of data.  It seems our guess was right, because there are no longer ~15 minute TRANS and REFL waves.  Cool.

Quote:

Our initial impression of cavity power fluctuations was that temperature fluctuations in the EAOMs were the cause.  To check this, I made some REFL DC monitors yesterday.
Plotted is one hour of data from today.  Some notes:
1) TRANS DC and REFL DC for both cavities are breathing together every fifteen minutes, and are anticorrelated (one goes up, the other goes down). 
2) The temperature monitors are not fluctuating with the same regularity as the power monitors.
3) The REFL DC for the north PMC is fluctuating with temperature

This leads me to believe our power fluctuations are caused by changes of alignment into the cavities from the air springs holding up our vacuum can. 
Right now, the air springs are hooked up to the wall.  There is probably some pressure regulator which switches on and off every fifteen minutes.  To fix this, I'm going to switch the vacuum air springs over to our nitrogen cylinder we have in the lab and see if the fluctuations go away.

 

Attachment 1: NitrogenCylinderRegulatorBefore.jpg
Attachment 2: WallAirGaugeBefore.jpg
Attachment 3: NitrogenCylinderRegulatorAfter.jpg
Attachment 4: Screen_Shot_2018-03-15_at_6.43.59_PM.png
  2137   Thu Mar 15 15:58:02 2018   Craig | DailyProgress | BEAT | Cavity Power fluctuations vs Temp Fluctuations

Btw when I propped up the vaccan using shims in preparation for venting the air springs, our old friend in the beatnote ASD reappeared: the broad hump from 100-10000 Hz. 
This was a problem for us in Dec-Jan, but it went away and we really didn't understand why at the time.  Turns out it's probably upconversion of seismic activity coupling into our cavities.

Quote:

Our initial impression of cavity power fluctuations was that temperature fluctuations in the EAOMs were the cause.  To check this, I made some REFL DC monitors yesterday.
Plotted is one hour of data from today.  Some notes:
1) TRANS DC and REFL DC for both cavities are breathing together every fifteen minutes, and are anticorrelated (one goes up, the other goes down). 
2) The temperature monitors are not fluctuating with the same regularity as the power monitors.
3) The REFL DC for the north PMC is fluctuating with temperature

This leads me to believe our power fluctuations are caused by changes of alignment into the cavities from the air springs holding up our vacuum can. 
Right now, the air springs are hooked up to the wall.  There is probably some pressure regulator which switches on and off every fifteen minutes.  To fix this, I'm going to switch the vacuum air springs over to our nitrogen cylinder we have in the lab and see if the fluctuations go away.

 

Attachment 1: Beatnote_ASD_gpstime_1205189250.pdf
  2136   Thu Mar 15 14:25:05 2018   Craig | DailyProgress | Other | Cavity Power fluctuations vs Temp Fluctuations

Our initial impression of cavity power fluctuations was that temperature fluctuations in the EAOMs were the cause.  To check this, I made some REFL DC monitors yesterday.
Plotted is one hour of data from today.  Some notes:
1) TRANS DC and REFL DC for both cavities are breathing together every fifteen minutes, and are anticorrelated (one goes up, the other goes down). 
2) The temperature monitors are not fluctuating with the same regularity as the power monitors.
3) The REFL DC for the north PMC is fluctuating with temperature

This leads me to believe our power fluctuations are caused by changes of alignment into the cavities from the air springs holding up our vacuum can. 
Right now, the air springs are hooked up to the wall.  There is probably some pressure regulator which switches on and off every fifteen minutes.  To fix this, I'm going to switch the vacuum air springs over to our nitrogen cylinder we have in the lab and see if the fluctuations go away.

Attachment 1: Screen_Shot_2018-03-15_at_2.53.16_PM.png
  2135   Thu Mar 15 10:29:04 2018   awade | DailyProgress | scatter | Addressing 500 Hz scatter pickup

Going back to the original issue of scattering, it appears that there is light being back-reflected from somewhere in the post-PMC path but before the reference cavities.  

Reducing number of optics after the North PMC 

I had installed a bunch of polarization optics before the north 14.75 MHz EOM in an effort to reduce RFAM (see attachment 1).  It looks like stuffing so many optics in such a small space is a bad idea.  You can see weak retro-reflected beams from the wave plates and, probably, the PBS as well.  The short propagation distance makes it difficult to angle the optics enough to separate these beams from the main beam laterally to dump them.  The EOM can't really be moved because the mode matching solution is a little tight for the available space.

After talking with rana and Craig yesterday, it seems like the Pre Mode Cleaner (PMC) should be filtering polarization well enough when locked that the PBS and quarter-wave plate (QWP) are unnecessary. I removed all but the half-wave plate (HWP) and checked the residual polarization on transmission with a diagnostic PBS in place. I found 2 µW of power out of 1.2 mW remaining when tuned all the way to s-pol.; this is a 1:600 extinction ratio, which is about what we would expect from such a beam cube.  This measurement may be biased by the lower limit of the power meter; the PBS should be giving 1:1000.  
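(For the record, the arithmetic behind that ratio: 2 µW / 1.2 mW = 1/600, hence the quoted 1:600.)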

I moved the PBS to before the PMC to clean up light out of the 21.5 MHz PMC phase modulator. The only optics in the post PMC-> EOM path are now a lens, a steering mirror and a half-wave plate (see attachment #2).  After realigning the PMC cavity and the north refcav I was able to reduce the RFAM to -55 dBm, which is good enough for now.  These slight changes in RFAM level mean that the FSS offset will need some adjustment.  I was unable to see any improvement in the beat spectrum as the beat note had drifted down to 2 MHz.  I turned the heating down a small amount and left it overnight to settle.

I didn't angle the HWP or lens by much; this shouldn't be necessary because the PMC is a traveling-wave cavity.  The elements should be pretty close to normal incidence.  The glass beam dump should be checked to ensure it is not clipping any retro-reflected beams on the rough edge of the glass.

Clamping down the PMC

I never clamped down the PMC. It is just sitting on the ball bearing points. This isn't great.

When I noticed the tapped holes on the side of the base, I went looking for clamps.  They are pictured in attachment #3, but they do not fit.  It turns out there were some issues with the choice of ball bearings on which the PMC sits.  The ball bearings sit over holes so that the PMC, when placed, will realign exactly with its previous position on the base.  Antonio had found that the holes drilled for the ball bearings were spec'ed a little too big.  For standard increments of bearing size, the closest size fits nicely over the hole, but under force the bearings actually slip down into the hole and are almost impossible to get out.  He bought the next ball bearing size up. However, this means that the clamps no longer reach over the full-height PMC assembly.  The assumed tolerances were too tight on all these components; the next edition of the drawings should allow for some wiggle room.

The drawings should be updated with at least 1-3 mm of range on the slot-cut side pieces for the clamps so there is room for changes in height due to ball bearing size.  Possibly even more, if future people want to put Viton or Sorbothane damping into the clamping. The non-tapped holes should also be changed to through-all, or at least drilled with a narrower diameter through-all. This will help future users poke out objects that get stuck in the holes.  

For that matter, the design of the clamps seems wrong.  There is a bar that goes over the top that is fixed with a slot-cut piece affixed to each side. This is intuitively wrong, as the bolts all go in horizontally when the clamping force that needs to be applied is downwards!  It means that the clamps are locking in a vertically applied force from the sides; to bolt the PMC down you need to apply force to the bar and tighten the bolts at the same time for two different clamping bars.  The screws should have at least one vertical pair on each clamp so that tension can be applied in the same direction as the clamping force.  

PMC documents on the DCC

For future reference, here is a list of all the PMC documents on the DCC:

Evan's technical note for PMC design considerations: LIGO-T1600071.

I can't find assembly procedures on the DCC.  There was a report from one of Kate Dooley's summer students, LIGO-T1600503, that shows a jig for gluing the PZT. 

 

Quote:

Today I buzzed the table and determined there was a strong 500 Hz dirty resonance on the first steering mirror after the PMC. 
This caused me to go around tightening bolts everywhere, including the offending steering mirror and the optics around it.  This did not reduce the resonance.
I tightened the PMC REFL steering mirror as well, and this caused a misalignment onto the PMC REFL PD.  It took me a little while to figure out why the North path refused to lock.  I realigned the PMC REFL steering mirror into the PD.
After I got the North PMC locking again, the North path itself was not locking anymore.  I reranged the autolocker slow volts, but this did not help. 
Turns out the North Trans PD threshold voltage was not high enough.  This is likely because of the bolt tightening, causing some slight misalignment into the North cavity, lowering the overall circulating power in the cavity.  I lowered the autolocker threshold from 1.1 volts to 1.0 volts, and aligned the North Trans PD.  We need to rescan the North cavity to get better alignment/mode matching, but I'm gonna put this off until we replace this offending 500 Hz post-PMC steering mirror.
While I was realigning the Trans PD, I noticed that even touching the trans optics tables causes large ~1Hz oscillations in the trans voltage.  This is definitely exacerbating any scattering problem we have.  Also, the Trans PD output for both paths is "breathing", going up and down with a period of about a minute.  This is bad for our autolocker's threshold.  It's possible that we should build two periscopes for the north and south paths to eliminate these elevated tables which cause coherent oscillations on all trans optics.  We could copy Tara's front periscope design.

 

Attachment 1: 2018-03-14_16.58.21.jpg
Attachment 2: 2018-03-14_20.40.18.jpg
Attachment 3: 2018-03-14_16.58.13.jpg
  2134   Wed Mar 14 15:19:52 2018   Craig | DailyProgress | DAQ | Added four new ADC channels to vader (10.0.1.50)

Last night I soldered together four BNCs and attached them to one of the acromags with some free channels.  Turns out this acromag is named vader (10.0.1.50), and was being used for temperature sensors.
I just added all four channels into /home/controls/modbus/db/LaserSlowControlsAndMonitors.db, and named the first two additions C3:PSL-SCAV_REFL_DC and C3:PSL-NCAV_REFL_DC.

All of this was done so I could tell if the "breathing" we see in the TRANS_DC is really representative of power in the cavities, or just some crap happening on our tiny ISS board in transmission.  Seems like it's real power in the cavity, but we'll see after an hour or so.
If so, we need to control the temperature of our EAOMs.  Thermal hats, peltiers, I don't care, needs to happen ASAP.
 

  2133   Tue Mar 13 18:18:04 2018   Craig | DailyProgress | Computers | Created framebuilder config file creator

In this lab we create and destroy EPICS channels all the time.  We need a quick way to gather all of the channels from our .db modbus database files and print out a .ini file configured to play nice with our framebuilder.  This is exactly what channelFramebuilderConfigFileCreator.py does.  It's located in Git/cit_ctnlab/ctn_scripts/

I copied over all the regular-expression stuff I had written for channelDumper.py, and just reconfigured the output for the framebuilder .ini syntax.
If you run it using python channelFramebuilderConfigFileCreator.py, it creates a C3CTN.ini file on acromag1 in /home/controls/CTNWS/data/C3CTN.ini.
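The core of the script is just this kind of scan (a simplified sketch, not the actual code; the record regex and the empty per-channel sections are simplifications):

import re, glob

# Sketch: pull EPICS record names out of the modbus .db files and write one
# .ini section per channel for the framebuilder.
chan_re = re.compile(r'record\s*\(\s*\w+\s*,\s*"([^"]+)"\s*\)')
channels = []
for dbfile in glob.glob('/home/controls/modbus/db/*.db'):
    with open(dbfile) as f:
        channels += chan_re.findall(f.read())

with open('/home/controls/CTNWS/data/C3CTN.ini', 'w') as ini:
    for chan in sorted(set(channels)):
        ini.write('[%s]\n' % chan)   # the real script also fills in datarate, datatype, etc.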

You have to copy this file to fb4 (10.0.1.156) /opt/rtcds/caltech/c4/chans/daq/C3CTN.ini:
scp /home/controls/CTNWS/data/C3CTN.ini controls@10.0.1.156:/opt/rtcds/caltech/c4/chans/daq/C3CTN.ini

Then follow awade's instructions from PSL ELOG 2014 to restart the framebuildin'.

  2132   Tue Mar 13 15:32:49 2018   Craig | DailyProgress | ISS | Trans DC jumpiness

When the cavities relock themselves, our Trans DC values change drastically.


I just noticed this when the North cavity lost lock while I was messing around with some optics.  The autolockers worked their magic, but upon relocking the NCAV_TRANS_DC value went from ~3 V to ~4 V.  It's been as low as 0.9 V as well; that's why I changed the thresholds in awade's NCAV autolocker .ini file.
What the heck?  We know they are locking to the same fringe, at least for the Fabry-Perot cavity.  The PMC is locked to a different fringe though; maybe this increased its power output?  But that wouldn't explain the South path jumps.
Unclear why this would happen.

First plot: Last 30 minutes.
Second plot: Last 12 hours.  EVEN CRAZIER

Attachment 1: Screen_Shot_2018-03-13_at_3.32.17_PM.png
Attachment 2: Screen_Shot_2018-03-13_at_3.43.39_PM.png
  2131   Mon Mar 12 15:22:42 2018   Craig, Jaime | DailyProgress | PEM | Notes from today's work

Note on (2) from below:
It seems that fb4's system time is fine.  I have synced up ws1 as well so that both system times are okay by installing ntp: $ sudo apt-get install ntp.  This worked automatically on ws1.
Jaime and I restarted fb4, but only ten minutes after doing so, fb4's frame time was three minutes behind.  Jaime suspects this is because the frame gpstime (C4:DAQ-DC0_GPS) is unregulated.  Jaime is currently working on preventing fb4 frame time drift.

Quote:

More notes:
1) The AOMs' temperature drift is probably responsible for the "breathing" we're seeing in the Trans DC PDs.  We should get the ISSs working again as soon as possible.  We probably need thermal hats for the AOMs to stop the thermal drift.

2) I am unable to access current CTN Lab data on fb4 using my python method from a few elogs ago.  (Now to get on our workstation you have to access through port 22: $ ssh -Y controls@131.215.115.216 -p 22)
I am also unable to access data using data viewer directly on fb4...  It says

Connecting to NDS Server localhost (TCP port 8088)
Connecting.... done
No data found

read(); err=Bad file descriptor
read(); err=Bad file descriptor
T0=18-03-10-06-36-21; Length=10800 (s)
No data output.

All of this used to work.  This needs to be resolved tomorrow.
Additionally, I think that the fb4 frame times are off by two days, because when I press "Time Now" in Data Viewer it gives me some time from March 10th (It's March 12th right now).  This is especially confusing because the $ date command on fb4 gives the correct time, but the frames are not at the correct time. 

3) I created nine new channels for autolocker tuning in real time.  awade should use these in autolocker.py in Git/cit_ctnlab/ctn_scripts/.  I already put the channel names in the three .ini files, ALConfig_NCAV.ini, ALConfig_SCAV.ini, and ALConfig_NPMC.ini.  Will need to restart acromag to activate these channels.
Channels:
On acromag1, in ~/modbus/db/PMCInterfaceControls.db
C3:PSL-NCAV_PMC_AUTOLOCKER_SLOW_SWEEP_BOTTOM
C3:PSL-NCAV_PMC_AUTOLOCKER_SLOW_SWEEP_TOP
C3:PSL-NCAV_PMC_AUTOLOCKER_SLOW_SWEEP_STEPSIZE
On acromag1, in ~/modbus/db/AutoLockerSoftChannels.db
C3:PSL-NCAV_FSS_AUTOLOCKER_SLOW_SWEEP_BOTTOM
C3:PSL-NCAV_FSS_AUTOLOCKER_SLOW_SWEEP_TOP
C3:PSL-NCAV_FSS_AUTOLOCKER_SLOW_SWEEP_STEPSIZE
C3:PSL-SCAV_FSS_AUTOLOCKER_SLOW_SWEEP_BOTTOM
C3:PSL-SCAV_FSS_AUTOLOCKER_SLOW_SWEEP_TOP
C3:PSL-SCAV_FSS_AUTOLOCKER_SLOW_SWEEP_STEPSIZE

Quote:

Today I buzzed the table and determined there was a strong 500 Hz dirty resonance on the first steering mirror after the PMC. 
This caused me to go around tightening bolts everywhere, including the offending steering mirror and the optics around it.  This did not reduce the resonance.
I tightened the PMC REFL steering mirror as well, and this caused a misalignment onto the PMC REFL PD.  It took me a little while to figure out why the North path refused to lock.  I realigned the PMC REFL steering mirror into the PD.
After I got the North PMC locking again, the North path itself was not locking anymore.  I reranged the autolocker slow volts, but this did not help. 
Turns out the North Trans PD threshold voltage was not high enough.  This is likely because of the bolt tightening, causing some slight misalignment into the North cavity, lowering the overall circulating power in the cavity.  I lowered the autolocker threshold from 1.1 volts to 1.0 volts, and aligned the North Trans PD.  We need to rescan the North cavity to get better alignment/mode matching, but I'm gonna put this off until we replace this offending 500 Hz post-PMC steering mirror.
While I was realigning the Trans PD, I noticed that even touching the trans optics tables causes large ~1Hz oscillations in the trans voltage.  This is definitely exacerbating any scattering problem we have.  Also, the Trans PD output for both paths is "breathing", going up and down with a period of about a minute.  This is bad for our autolocker's threshold.  It's possible that we should build two periscopes for the north and south paths to eliminate these elevated tables which cause coherent oscillations on all trans optics.  We could copy Tara's front periscope design.

 

 

  2130   Mon Mar 12 01:25:23 2018   Craig | DailyProgress | PEM | Notes from today's work

More notes:
1) The AOMs' temperature drift is probably responsible for the "breathing" we're seeing in the Trans DC PDs.  We should get the ISSs working again as soon as possible.  We probably need thermal hats for the AOMs to stop the thermal drift.

2) I am unable to access current CTN Lab data on fb4 using my python method from a few elogs ago.  (Now to get on our workstation you have to access through port 22: $ ssh -Y controls@131.215.115.216 -p 22)
I am also unable to access data using data viewer directly on fb4...  It says

Connecting to NDS Server localhost (TCP port 8088)
Connecting.... done
No data found

read(); err=Bad file descriptor
read(); err=Bad file descriptor
T0=18-03-10-06-36-21; Length=10800 (s)
No data output.

All of this used to work.  This needs to be resolved tomorrow.
Additionally, I think that the fb4 frame times are off by two days, because when I press "Time Now" in Data Viewer it gives me some time from March 10th (It's March 12th right now).  This is especially confusing because the $ date command on fb4 gives the correct time, but the frames are not at the correct time. 

3) I created nine new channels for autolocker tuning in real time.  awade should use these in autolocker.py in Git/cit_ctnlab/ctn_scripts/.  I already put the channel names in the three .ini files, ALConfig_NCAV.ini, ALConfig_SCAV.ini, and ALConfig_NPMC.ini.  Will need to restart acromag to activate these channels.
Channels:
On acromag1, in ~/modbus/db/PMCInterfaceControls.db
C3:PSL-NCAV_PMC_AUTOLOCKER_SLOW_SWEEP_BOTTOM
C3:PSL-NCAV_PMC_AUTOLOCKER_SLOW_SWEEP_TOP
C3:PSL-NCAV_PMC_AUTOLOCKER_SLOW_SWEEP_STEPSIZE
On acromag1, in ~/modbus/db/AutoLockerSoftChannels.db
C3:PSL-NCAV_FSS_AUTOLOCKER_SLOW_SWEEP_BOTTOM
C3:PSL-NCAV_FSS_AUTOLOCKER_SLOW_SWEEP_TOP
C3:PSL-NCAV_FSS_AUTOLOCKER_SLOW_SWEEP_STEPSIZE
C3:PSL-SCAV_FSS_AUTOLOCKER_SLOW_SWEEP_BOTTOM
C3:PSL-SCAV_FSS_AUTOLOCKER_SLOW_SWEEP_TOP
C3:PSL-SCAV_FSS_AUTOLOCKER_SLOW_SWEEP_STEPSIZE

Quote:

Today I buzzed the table and determined there was a strong 500 Hz dirty resonance on the first steering mirror after the PMC. 
This caused me to go around tightening bolts everywhere, including the offending steering mirror and the optics around it.  This did not reduce the resonance.
I tightened the PMC REFL steering mirror as well, and this caused a misalignment onto the PMC REFL PD.  It took me a little while to figure out why the North path refused to lock.  I realigned the PMC REFL steering mirror into the PD.
After I got the North PMC locking again, the North path itself was not locking anymore.  I reranged the autolocker slow volts, but this did not help. 
Turns out the North Trans PD threshold voltage was not high enough.  This is likely because of the bolt tightening, causing some slight misalignment into the North cavity, lowering the overall circulating power in the cavity.  I lowered the autolocker threshold from 1.1 volts to 1.0 volts, and aligned the North Trans PD.  We need to rescan the North cavity to get better alignment/mode matching, but I'm gonna put this off until we replace this offending 500 Hz post-PMC steering mirror.
While I was realigning the Trans PD, I noticed that even touching the trans optics tables causes large ~1Hz oscillations in the trans voltage.  This is definitely exacerbating any scattering problem we have.  Also, the Trans PD output for both paths is "breathing", going up and down with a period of about a minute.  This is bad for our autolocker's threshold.  It's possible that we should build two periscopes for the north and south paths to eliminate these elevated tables which cause coherent oscillations on all trans optics.  We could copy Tara's front periscope design.

 

  2129   Sun Mar 11 18:43:07 2018   Craig | DailyProgress | PEM | Notes from today's work

Today I buzzed the table and determined there was a strong 500 Hz dirty resonance on the first steering mirror after the PMC. 
This caused me to go around tightening bolts everywhere, including the offending steering mirror and the optics around it.  This did not reduce the resonance.
I tightened the PMC REFL steering mirror as well, and this caused a misalignment onto the PMC REFL PD.  It took me a little while to figure out why the North path refused to lock.  I realigned the PMC REFL steering mirror into the PD.
After I got the North PMC locking again, the North path itself was not locking anymore.  I reranged the autolocker slow volts, but this did not help. 
Turns out the North Trans PD threshold voltage was not high enough.  This is likely because of the bolt tightening, causing some slight misalignment into the North cavity, lowering the overall circulating power in the cavity.  I lowered the autolocker threshold from 1.1 volts to 1.0 volts, and aligned the North Trans PD.  We need to rescan the North cavity to get better alignment/mode matching, but I'm gonna put this off until we replace this offending 500 Hz post-PMC steering mirror.
While I was realigning the Trans PD, I noticed that even touching the trans optics tables causes large ~1Hz oscillations in the trans voltage.  This is definitely exacerbating any scattering problem we have.  Also, the Trans PD output for both paths is "breathing", going up and down with a period of about a minute.  This is bad for our autolocker's threshold.  It's possible that we should build two periscopes for the north and south paths to eliminate these elevated tables which cause coherent oscillations on all trans optics.  We could copy Tara's front periscope design.

  2128   Sun Mar 11 16:50:59 2018   Craig, awade | DailyProgress | scatter | 500 Hz resonant scatter hump in beatnote ASD

After buzzing the table with a probe at 500 Hz, we found that the source of the 500 Hz resonance is the first steering mirror after the PMC.  The PMC itself also exhibits a smaller 500 Hz resonance; it is unclear how much of that is actually the PMC mount vs. coupling through the PMC to the first steering mirror.

Quote:

If you look at our beatnote ASD you can see a broad dirty hump at 500 Hz.
awade played a pure 500 Hz tone through our lab speakers, and you could see the resonance peak being driven.  Blue are the driven spectra, orange are the non-driven spectra.
If we turn up the speakers it causes our North path to lose lock.

Some sort of mechanical resonance in the North path is causing this.  Buzzing is underway.
 

 

  2127   Fri Mar 9 16:10:41 2018   Craig, awade | DailyProgress | scatter | 500 Hz resonant scatter hump in beatnote ASD

If you look at our beatnote ASD you can see a broad dirty hump at 500 Hz.
awade played a pure 500 Hz tone through our lab speakers, and you could see the resonance peak being driven.  Blue are the driven spectra, orange are the non-driven spectra.
If we turn up the speakers it causes our North path to lose lock.

Some sort of mechanical resonance in the North path is causing this.  Buzzing is underway.
 

Attachment 1: 500HzDrivenScatteringResonanceSpectrum_09-03-2018_160128_Spectra.pdf
  2126   Fri Mar 9 02:15:08 2018   Craig | DailyProgress | PLL | PLL Autolocker is all

I just wrote a huge elog explaining the entire PLLautolocker.py but the ELOG ate it. 
Basically, it works.  It's on acromag1 in a tmux session.  Check out the sweet MEDM screens I made.  You can use them to control PLLautolocker.py.

  2124   Wed Mar 7 21:07:10 2018 Craig, awade, Johannes, Gautam, oh myDailyProgressDAQFrequency Counter added as Precav Beatnote Monitor

Last night we had an issue where my PLL autolocker lost the lock at some point during the night.  The PLL autolocker (located on acromag1 at ~/Git/labutils/netgpibdata/Marconi2023A_BeatnoteTrack.py) relies on the mixing of the Marconi signal and the beatnote staying within the linear range of the mixer, and most of the time this is okay.  However, sometimes FSS ringing and fast temperature slewing can cause the beatnote to move in frequency too fast for the Marconi to keep up, causing the autolocker to fail.
What's worse, sometimes the autolocker fails to realize it has failed, since the mixer error signal goes to zero as the frequencies of the local oscillator and beatnote diverge.  The autolocker sits there thinking it's doing a great job locking the Marconi to the beatnote when in fact it's doing NOTHING.
What's EVEN worse, we use the PLL autolocker's calculated beatnote frequency soft channel as the error signal for our North Cavity Shield Temperature PID.  If the PLL autolocker fails, the error signal it sends to the NCST PID is completely wrong, causing the North cavity shield heater to actuate as hard as it can to control the beatnote frequency, which it can't do because the PLL autolocker is constantly lying to it.
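As a quick illustration of why the error signal is blind to this failure (just a toy numerical sketch, not anything from the lab scripts): the low-passed mixer output is proportional to the phase error while the frequencies match, but averages to zero once they diverge, which looks identical to a perfect lock.

import numpy as np

fs = 1e6                                  # sample rate [Hz], illustrative only
t = np.arange(0, 0.1, 1/fs)               # 100 ms of fake data

def mixer_dc(f_lo, f_beat, phi=0.0):
    """DC (low-passed) output of an ideal mixer driven by two tones."""
    lo = np.cos(2*np.pi*f_lo*t)
    beat = np.sin(2*np.pi*f_beat*t + phi)
    return np.mean(lo*beat)               # crude low-pass: average over the record

print(mixer_dc(100e3, 100e3, phi=0.1))    # in lock: ~0.5*sin(0.1), proportional to phase error
print(mixer_dc(100e3, 130e3, phi=0.1))    # beatnote ran away: ~0, indistinguishable from a perfect lock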

In order to counter this issue, we scrounged up a frequency counter for use as the error signal for the North Cavity Shield Temp PID.
Gautam gave us a spare 40m UFC-6000 frequency counter.
Johannes kindly lent us his script ufc.py and associated service ufc.service, as well as some instructions for communicating with the frequency counter which I will go over.  What Johannes sent me is attached as ufc-6000.tar.gz.
awade modified the python script ufc.py to be a bit more pythonic, which is attached as ufc2.py, and increased the precav beatnote signal strength so the frequency counter could detect the beatnote.
I made the dang thing work with ws1.


Johannes sent some instructions to allow USB devices to communicate with python directly.  Apparently USB devices are normally restricted so that ordinary users can't access them, unless you define some udev rules that open up permissions for specifically identified devices.
To do this, I went to the directory /etc/udev/rules.d/, and created a file 99-usb-serial.rules.  Inside this file I typed:
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="20ce", ATTR{idProduct}=="0010", MODE="0666", GROUP="plugdev"
Then, when I ran python ufc.py, I was able to retrieve the UFC-6000's reported frequency on ws1.
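As a quick check that the udev rule actually took effect (this is just a sketch assuming pyusb is installed, not Johannes's ufc.py), you can look the device up by the same vendor/product IDs as in the rule:

import usb.core  # pyusb

# Same IDs as in 99-usb-serial.rules above
dev = usb.core.find(idVendor=0x20ce, idProduct=0x0010)
if dev is None:
    print("UFC-6000 not found; check the cable and the lsusb output")
else:
    # Reading the string descriptors only works if the udev permissions are right
    print("Found UFC-6000:", dev.manufacturer, dev.product)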

Some commands that were useful for figuring all of this out:
lsusb, which lists the USB devices connected to your computer
man udev, which tells you everything you need to know about udev on your particular Linux machine, and how to configure the .rules files.
lsusb -v -d ????:???? (where ????:???? is the idVendor:idProduct pair of your USB device), which tells you even more about a single USB device.
groups, which tells you what Unix groups your username is a member of (a group is just a named set of users that can share permissions on files and devices).  controls was already a member of plugdev, which is the group the udev rule above grants access to.


Next steps, create a soft channel for the precav beatnote, and switch out the North Cavity Shield Temp PID error signal in the .ini file.
Also, I can modify the PLL Autolocker Marconi2023A_BeatnoteTrack.py to read in the beatnote soft channel value first to get the PLL into the linear region, then start its locking process.  It can also use the new soft channel as a constant check to see if the Marconi has gone completely haywire.  Finally, the PLL Autolocker must have ANDChannels added to it to make sure that both North and South RAMP_EN channels are true.
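The haywire check could be as simple as comparing the two numbers (a sketch only, assuming pyepics; the Marconi setpoint channel name here is made up for illustration, and I'm assuming both channels report in the same units):

import epics  # pyepics

FC_CHAN      = 'C3:PSL-PRECAV_BEATNOTE_FREQ'   # frequency counter soft channel (see EDIT below)
MARCONI_CHAN = 'C3:PSL-PLL_MARCONI_CARRIER'    # hypothetical channel holding the Marconi carrier setpoint
MAX_DIFF     = 1e6                             # disagreement beyond this means the PLL has lost the beatnote

def pll_is_sane():
    """Return False if the Marconi and the frequency counter disagree badly."""
    f_counter = epics.caget(FC_CHAN)
    f_marconi = epics.caget(MARCONI_CHAN)
    return abs(f_counter - f_marconi) < MAX_DIFF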


EDIT: Added soft channel C3:PSL-PRECAV_BEATNOTE_FREQ which is monitoring the frequency counter in real time.

Attachment 2: ufc-6000.tar.gz
Attachment 3: ufc2.tar
  2123   Wed Mar 7 18:31:25 2018 CraigDailyProgressFSSFSS Fastmon RMS monitors for automatic gain cycling

Added ANDChannels to rmsMonitor.py.  So far the only ANDChannel in the list is C3:PSL-?CAV_FSS_RAMP_EN, which turns on and off the FSS.  Works pretty well. 
Can make this into a Linux service later, right now it's still running on acromag1 in a tmux session.
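Roughly, the gating just wraps the top of the monitor loop, something like this (a sketch only, not the actual rmsMonitor.py; it uses plain pyepics caget where the real script has its own EPICS wrapper, and only the north channel is shown):

import time
import epics  # pyepics

ENGAGE_CHANS = ['C3:PSL-NCAV_FSS_RAMP_EN']   # FSS on/off channel mentioned above

def and_channels(chan_list):
    """True only if every binary channel in the list reads 1."""
    return all(epics.caget(ch) for ch in chan_list)

while True:
    if and_channels(ENGAGE_CHANS):
        pass  # FSS engaged: compute the Fastmon RMS and gain cycle if it exceeds the limit
    time.sleep(2)  # matches the two-second cadence of the monitor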


awade created Linux services for these scripts, located in /etc/init/ on acromag1:
PSL-RMSMonitor-north.conf
PSL-RMSMonitor-south.conf

To restart these after changes to rmsMonitor.py, run e.g. sudo service PSL-RMSMonitor-north.conf restart.

Quote:

Maybe add an optional binary engage channel to the *.ini parser (see for example PIDLocker_beta.py).  The default when no channel is given should be 'always on', otherwise it should activate the loop on a given list of soft channels. Running a bunch of scripts in tmux is fine for a while when testing but becomes a pain in the long term.

You can use the function,

def ANDChannels(chanList):  # find boolean AND of a list of binary EPICS chans
    return all([RCPID.read(ii, log=False) for ii in chanList])

to take a list of binary IOC channels and return true if all are state 1.  You can then make rmsMonitor.py's behavior dependent on the FSS and PMC lock state as well as an MEDM binary switch.  We want our lockers and controls to have a hierarchical structure so they have a sequence of behavior built into the logic of their rules of engagement.  If the parts have a well defined mode of engagement and failure, the whole system will be much more predictable and stable.  This also avoids the need for a central manager script for all the tasks.  We can run all this stuff as daemons under services (in /etc/init) and let Linux do the heavy lifting of keeping it all alive.

On that note, the Marconi2023A_BeatnoteTrack.py script lost the PLL lock last night but failed to exit.  Its inability to sense whether it's locked at zero PLL DC error signal or just way out of range is a problem.  When this happens it fails to drop the North cavity PID loop.  This morning the north cavity heater had railed at 0 watts, which will take hours to recover back to a stable ~100 MHz offset.  We may need an out-of-loop frequency sensor to track wide variations in frequency.

 

Quote:

Very often in our lab our FSS boxes "ring", i.e. the EOM and PZT actuators fight each other for control of the laser frequency, instead of working together.  If the EOM actuator rails, the PZT comes stomping in trying to suppress high frequency laser frequency noise, but the EOM comes back in and says, "no, it's my job", but the PZT is all like "obviously you can't do your job cause you're not strong enough", which really only makes the EOM angry, causing rail-to-rail actuation jumps and high, nonlinear noise in our FSS loops.  This is bad for our PLL autolocker, as the high noise hurts the PLL control signal and eventually causes it to lose the beatnote, and obviously bad for the beatnote ASD itself, which we are monitoring at all times on our webpage.
So today I created rmsMonitor.py, a python script which monitors the RMS of the FSS Fastmon voltage, the PZT control signal.  If the Fastmon RMS ever exceeds 250 mV, rmsMonitor.py will call awade's gaincycle.py on the offending FSS box, which brings both Common and Fast Gain values to their lowest setting, then steadily ramps them back up to where they were in a nice way such that ringing won't start up again.  In this way we can automatically eliminate ringing whenever it starts.


rmsMonitor.py lives in ~/Git/cit_ctnlab/ctn_scripts/, and has two associated .ini files, RMSMonitor_North.ini and RMSMonitor_South.ini.  The .ini files define the Fastmon channel name for each path (e.g. C3:PSL-NCAV_FSS_FASTMON for the north path) and the max RMS limit, which is currently 250 mV for both paths.

To run this script on our North path, call
$ python rmsMonitor.py RMSMonitor_North.ini &
Every two seconds, the script should print out something like
C3:PSL-NCAV_FSS_FASTMON rms = 0.0647327284305 V
which is the channel name and the rms calculated for that channel in that two seconds.  Again, if rms is ever above 250 mV, it triggers gaincycle.py for that path and eliminates ringing.


These scripts are perpetually running in tmux sessions named RMSNorth and RMSSouth on acromag1.  To access the north tmux session, log onto acromag1 and run  $ tmux attach -t RMSNorth
These scripts will need to be turned off when debugging persistent FSS ringing.

 

 

  2122   Wed Mar 7 11:46:07 2018 awadeDailyProgressFSSFSS Fastmon RMS monitors for automatic gain cycling

Maybe add an optional binary engage channel to the *.ini parser (see for example PIDLocker_beta.py).  The default when no channel is given should be 'always on', otherwise it should activate the loop on a given list of soft channels. Running a bunch of scripts in tmux is fine for a while when testing but becomes a pain in the long term.

You can use the function,

def ANDChannels(chanList):  # find boolean AND of a list of binary EPICS chans
    return all([RCPID.read(ii, log=False) for ii in chanList])

to take a list of binary IOC channels and return true if all are state 1.  You can then make rmsMonitor.py's behavior dependent on the FSS and PMC lock state as well as an MEDM binary switch.  We want our lockers and controls to have a hierarchical structure so they have a sequence of behavior built into the logic of their rules of engagement.  If the parts have a well defined mode of engagement and failure, the whole system will be much more predictable and stable.  This also avoids the need for a central manager script for all the tasks.  We can run all this stuff as daemons under services (in /etc/init) and let Linux do the heavy lifting of keeping it all alive.

On that note, the Marconi2023A_BeatnoteTrack.py script lost the PLL lock last night but failed to exit.  Its inability to sense whether it's locked at zero PLL DC error signal or just way out of range is a problem.  When this happens it fails to drop the North cavity PID loop.  This morning the north cavity heater had railed at 0 watts, which will take hours to recover back to a stable ~100 MHz offset.  We may need an out-of-loop frequency sensor to track wide variations in frequency.

 

Quote:

Very often in our lab our FSS boxes "ring", i.e. the EOM and PZT actuators fight each other for control of the laser frequency, instead of working together.  If the EOM actuator rails, the PZT comes stomping in trying to suppress high frequency laser frequency noise, but the EOM comes back in and says, "no, it's my job", but the PZT is all like "obviously you can't do your job cause you're not strong enough", which really only makes the EOM angry, causing rail-to-rail actuation jumps and high, nonlinear noise in our FSS loops.  This is bad for our PLL autolocker, as the high noise hurts the PLL control signal and eventually causes it to lose the beatnote, and obviously bad for the beatnote ASD itself, which we are monitoring at all times on our webpage.
So today I created rmsMonitor.py, a python script which monitors the RMS of the FSS Fastmon voltage, the PZT control signal.  If the Fastmon RMS ever exceeds 250 mV, rmsMonitor.py will call awade's gaincycle.py on the offending FSS box, which brings both Common and Fast Gain values to their lowest setting, then steadily ramps them back up to where they were in a nice way such that ringing won't start up again.  In this way we can automatically eliminate ringing whenever it starts.


rmsMonitor.py lives in ~/Git/cit_ctnlab/ctn_scripts/, and has two associated .ini files, RMSMonitor_North.ini and RMSMonitor_South.ini.  The .ini files define the Fastmon channel name for each path (e.g. C3:PSL-NCAV_FSS_FASTMON for the north path) and the max RMS limit, which is currently 250 mV for both paths.

To run this script on our North path, call
$ python rmsMonitor.py RMSMonitor_North.ini &
Every two seconds, the script should print out something like
C3:PSL-NCAV_FSS_FASTMON rms = 0.0647327284305 V
which is the channel name and the rms calculated for that channel in that two seconds.  Again, if rms is ever above 250 mV, it triggers gaincycle.py for that path and eliminates ringing.


These scripts are perpetually running in tmux sessions named RMSNorth and RMSSouth on acromag1.  To access the north tmux session, log onto acromag1 and run  $ tmux attach -t RMSNorth
These scripts will need to be turned off when debugging persistent FSS ringing.

 

  2121   Wed Mar 7 02:58:13 2018 ranaDailyProgressComputerscymac3 ADC is spiky

I recommend Craig write a script called whatsupDAQ.py. It could be run whenever to collate some DAQ status indicators and report what's up. Something like the CDS overview MEDM screen, but command line.
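A bare-bones version could just poll a channel list over nds2 and report which channels are returning fresh data (a sketch only, using the connection parameters and channel names already in this log):

from gwpy.time import tconvert
import nds2

CHANNELS = ['X3:TST-BEAT_OUT_DQ', 'X3:TST-ACC_X_OUT_DQ',
            'X3:TST-ACC_Y_OUT_DQ', 'X3:TST-ACC_Z_OUT_DQ']

conn = nds2.connection('cymac3.ligo.caltech.edu', 8088)
gps = tconvert('now').gpsSeconds

for chan in CHANNELS:
    try:
        buf = conn.fetch(gps - 70, gps - 10, [chan])[0]   # stay a bit behind the live edge
        print('%-25s OK, %d samples' % (chan, len(buf.data)))
    except RuntimeError as err:
        print('%-25s DOWN: %s' % (chan, err))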

  2120   Tue Mar 6 22:16:15 2018 CraigDailyProgressFSSFSS Fastmon RMS monitors for automatic gain cycling

Very often in our lab our FSS boxes "ring", i.e. the EOM and PZT actuators fight each other for control of the laser frequency, instead of working together.  If the EOM actuator rails, the PZT comes stomping in trying to suppress high frequency laser frequency noise, but the EOM comes back in and says, "no, it's my job", but the PZT is all like "obviously you can't do your job cause you're not strong enough", which really only makes the EOM angry, causing rail-to-rail actuation jumps and high, nonlinear noise in our FSS loops.  This is bad for our PLL autolocker, as the high noise hurts the PLL control signal and eventually causes it to lose the beatnote, and obviously bad for the beatnote ASD itself, which we are monitoring at all times on our webpage.
So today I created rmsMonitor.py, a python script which monitors the RMS of the FSS Fastmon voltage, the PZT control signal.  If the Fastmon RMS ever exceeds 250 mV, rmsMonitor.py will call awade's gaincycle.py on the offending FSS box, which brings both Common and Fast Gain values to their lowest setting, then steadily ramps them back up to where they were in a nice way such that ringing won't start up again.  In this way we can automatically eliminate ringing whenever it starts.


rmsMonitor.py lives in ~/Git/cit_ctnlab/ctn_scripts/, and has two associated .ini files, RMSMonitor_North.ini and RMSMonitor_South.ini.  The .ini files define the Fastmon channel name for each path (e.g. C3:PSL-NCAV_FSS_FASTMON for the north path) and the max RMS limit, which is currently 250 mV for both paths.

To run this script on our North path, call
$ python rmsMonitor.py RMSMonitor_North.ini &
Every two seconds, the script should print out something like
C3:PSL-NCAV_FSS_FASTMON rms = 0.0647327284305 V
which is the channel name and the rms calculated for that channel in that two seconds.  Again, if rms is ever above 250 mV, it triggers gaincycle.py for that path and eliminates ringing.


These scripts are perpetually running in tmux sessions named RMSNorth and RMSSouth on acromag1.  To access the north tmux session, log onto acromag1 and run  $ tmux attach -t RMSNorth
These scripts will need to be turned off when debugging persistent FSS ringing.
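For reference, the core of what rmsMonitor.py does is no more complicated than the following (a stripped-down sketch, not the actual script; the real one reads its settings from the .ini files and calls awade's gaincycle.py where the placeholder is):

import time
import numpy as np
import epics  # pyepics

CHAN      = 'C3:PSL-NCAV_FSS_FASTMON'   # Fastmon channel from RMSMonitor_North.ini
RMS_LIMIT = 0.250                       # volts

while True:
    samples = []
    for _ in range(20):                 # ~2 s of readings at ~10 Hz
        samples.append(epics.caget(CHAN))
        time.sleep(0.1)
    rms = np.sqrt(np.mean(np.square(samples)))
    print('%s rms = %s V' % (CHAN, rms))
    if rms > RMS_LIMIT:
        pass                            # here the real script calls gaincycle.py on the offending FSS box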

  2119   Tue Mar 6 08:04:11 2018 GabrieleDailyProgressComputerscymac3 ADC is spiky

The real time model X3TST was not running. I likely forgot to restart it after the power outage of a week ago. 

I restarted and now I can see a sinusoid in the data:

from gwpy.time import tconvert
import nds2
from pylab import *
ion()
c2 = nds2.connection('cymac3.ligo.caltech.edu', 8088)
backTime = 10 
chanName = 'X3:TST-BEAT_OUT_DQ'
gps = tconvert('now').gpsSeconds
data = c2.fetch(gps-backTime*2, gps-backTime, [chanName])
x = data[0].data
plot(x)

 

Quote:

I've been working on our new ws1 all day today, trying to get it back to where ws3 was.  I got git working, recreated our python virtual environment, and got apache2 going again.  However, the last step is actually getting beatnote data off cymac3 and plotting it.  Unfortunately, I'm getting gigantic spikes from the cymac3 and I'm not sure why.

I opened a fresh ipython session and ran the following:
from gwpy.time import tconvert
import nds2
from pylab import *
ion()
c2 = nds2.connection('cymac3.ligo.caltech.edu', 8088)
backTime = 300 # 5 mins of time
chanName = 'X3:TST-BEAT_OUT_DQ'
data = c2.fetch(tconvert('now').gpsSeconds - 60 - backTime, tconvert('now').gpsSeconds - backTime, [chanName]) # read data from 5 minutes ago.
plot(data[0].data)

If I do this, I get the plot posted below.
I did a test for our three other channels for the accelerometers, and they all exhibit similar spikiness.
(chanNames = ['X3:TST-ACC_X_OUT_DQ', 'X3:TST-ACC_Y_OUT_DQ', 'X3:TST-ACC_Z_OUT_DQ'])

Unclear what's wrong with cymac3.  I'm worried this is happening for all its channels, but I'm not sure and don't want to mess with another lab's ADC.

 

  2118   Mon Mar 5 22:55:37 2018 Craig, awadeDailyProgressBEATNorth and South Cavities Relocked

While Craig was messing around on computers all day, awade got to work on the optics table aligning the North path.  He managed to lock the North at 60% visibility without even touching our new mode matching lens positions.  We think we can do better in the near future.

But while we had two TEM00 modes, we decided to get a beatnote measurement.
Beatnote strength: +1 dBm (This is including an ND filter on our optics)
Beatnote frequency: 103.3 MHz

Also, apparently our new vaccan temp control PID script had a sign flip in it, so we had been heating our can maximally for a while today, up to 40 degrees C.  We fixed this, which will cause the can to come back down to 30 C and the beatnote to slew violently while it settles.  This made getting a beatnote ASD difficult. 
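For the record, the failure mode is just the integrator running away when the overall sign is wrong.  A toy version of the update step (illustrative numbers only, not our actual PID script):

# Toy proportional-integral step for the can heater.  error = setpoint - measurement;
# with the correct sign, a cold can (positive error) increases heater power.
# Flipping the sign of the gains makes the loop drive the error larger until the heater rails.
K_P, K_I, DT = 0.5, 0.01, 1.0
integral = 0.0

def pid_step(setpoint, measured):
    global integral
    error = setpoint - measured
    integral += error * DT
    return K_P * error + K_I * integral   # heater drive request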

Attachment 1: NorthCavityAndSouthCavityTogetherInPerfectHarmony.jpg
NorthCavityAndSouthCavityTogetherInPerfectHarmony.jpg
Attachment 2: SlowVoltsAndPIDSettings.jpg
SlowVoltsAndPIDSettings.jpg
Attachment 3: StitchedSpectrum_TransBeatnote_FMDevn_10kHz_SR560Gain_20_Avgs_20_Span_102p4kHz_05-03-2018_231449_Spectrum.pdf
StitchedSpectrum_TransBeatnote_FMDevn_10kHz_SR560Gain_20_Avgs_20_Span_102p4kHz_05-03-2018_231449_Spectrum.pdf
Attachment 4: StitchedSpectrum_TransBeatnote_FMDevn_10kHz_SR560Gain_20_Avgs_20_Span_102p4kHz_05-03-2018_231449.tgz
  2117   Mon Mar 5 22:17:45 2018 CraigDailyProgressComputerscymac3 ADC is spiky

I've been working on our new ws1 all day today, trying to get it back to where ws3 was.  I got git working, recreated our python virtual environment, and got apache2 going again.  However, the last step is actually getting beatnote data off cymac3 and plotting it.  Unfortunately, I'm getting gigantic spikes from the cymac3 and I'm not sure why.

I opened a fresh ipython session and ran the following:
from gwpy.time import tconvert
import nds2
from pylab import *
ion()
c2 = nds2.connection('cymac3.ligo.caltech.edu', 8088)
backTime = 300 # 5 mins of time
chanName = 'X3:TST-BEAT_OUT_DQ'
data = c2.fetch(tconvert('now').gpsSeconds - 60 - backTime, tconvert('now').gpsSeconds - backTime, [chanName]) # read data from 5 minutes ago.
plot(data[0].data)

If I do this, I get the plot posted below.
I did a test for our three other channels for the accelerometers, and they all exhibit similar spikiness.
(chanNames = ['X3:TST-ACC_X_OUT_DQ', 'X3:TST-ACC_Y_OUT_DQ', 'X3:TST-ACC_Z_OUT_DQ'])

Unclear what's wrong with cymac3.  I'm worried this is happening for all its channels, but I'm not sure and don't want to mess with another lab's ADC.

Attachment 1: cymac3BeatnoteDataSpikes.png
cymac3BeatnoteDataSpikes.png
  2116   Sun Mar 4 17:33:01 2018 awadeDailyProgressComputersWS1 Up

Workstation is back.  I was able to fully restore medm screens, scripts and noisebudget from Git on WS1.  Good version control is the way to go it seems.

Can't say the same for Craig's Apache noise budget stuff.  I found a SATA to USB converter and am able to mount the WS3 HD directly onto WS1.  The original HD is intact so all the original stuff is accessible.  We just need to move it into place on the new computer.  I've left the WS3 machine HD plugged in and it is mounted in /media/controls/. 

Quote:

I just tried to ssh into ws3 only to find it unresponsive.  I was going to check the router but then found that the computer seemed to be off.

On closer inspection the computer seems to have some kind of power issue.  At the moment all I am seeing is a blinking amber power light on the box.  One LED indicator is on on the motherboard; otherwise there are no fans or HD spinning. 

I have sequentially pulled the DVD drive, HD and RAM.  Fans won't spin up.  Various forums suggest that it's either a motherboard issue or a power supply issue.  Given that we are seeing the basic amber flashing LED on the front panel, I would hazard a guess that it's not the power supply.

WS3 is a computer that Larry Wallace gave me that had been retired from desktop use.  Don't think this is worth days of diagnosis and, more importantly, days of downtime to fix it.  I had made a clone of the computer, but that isn't much use if we need to recover to a completely different computer.  For now I am going to pull ws2 from the QIL lab and attempt to get it going as the interface computer in the PSL lab.  All the important medm screens and python scripts were committed to Gitlab so we should be good.  WS2 was running an old CentOS (Red Hat) operating system that is now out of its service period.  I will switch out its HD for a fresh 250 GB drive and do a fresh install of Debian as a start.

Edit Sun Mar 4 00:13:37 2018: Scratch using WS2 as a stop-gap machine: it won't boot from USB and the CDROM drive is busted.  We're going to have to use WS1 for now; it already has Debian installed and LIGO tools (I think).

Edit Sun Mar 4 12:48:39 2018: Last night I ended up just moving WS1 into the PSL lab.  I had previously installed Debian and all the LIGO tools (see ATF:2181), which has now come in handy.  I've changed the machine's IP to 10.0.1.34 and we can now SSH in remotely using the usual gateway address and port 22.  We may want to repurpose the 'gateway' box as it is not currently in use as an SSH landing point.

 

  2115   Sun Mar 4 05:22:05 2018 ranaMiscPurchasesTorque spec for optical mounts

The case I was describing is with the BA-1, BA-2, or BA-3 from Thorlabs, using a steel (18-8) screw and a SS washer.  In this case, you want to go to ~75% of the torque at which the aluminum starts to deform plastically.  In this situation, the screw and the base will be elastically deformed.  More tightness will make the deformation plastic, and the mount will then slowly drift.  Less will just give you less stiffness against vibration.

For the case of the 1/4-20 screw with washer in a fork clamp, I expect it can go higher, but that is probably not necessary.  To test for drift, we would need an ultra-stable Mach-Zehnder and a long term visibility test, as was done for the stability tests of the Japanese Super Duralumin mounts that Koji has.

Dennis Coyne has a formula to figure out these numbers. I'm going to get to the bottom of this and make it part of this summer's course on Lab Skills.

  2114   Sat Mar 3 23:25:37 2018 awadeDailyProgressComputersWS3 Down

I just tried to ssh into ws3 only to find it unresponsive.  I was going to check the router but then found that the computer seemed to be off.

On closer inspection the computer seems to have some kind of power issue.  At the moment all I am seeing is a blinking amber power light on the box.  One LED indicator is on on the motherboard; otherwise there are no fans or HD spinning. 

I have sequentially pulled the DVD drive, HD and RAM.  Fans won't spin up.  Various forums suggest that it's either a motherboard issue or a power supply issue.  Given that we are seeing the basic amber flashing LED on the front panel, I would hazard a guess that it's not the power supply.

WS3 is a computer that Larry Wallace gave me that had been retired from desktop use.  Don't think this is worth days of diagnosis and, more importantly, days of downtime to fix it.  I had made a clone of the computer, but that isn't much use if we need to recover to a completely different computer.  For now I am going to pull ws2 from the QIL lab and attempt to get it going as the interface computer in the PSL lab.  All the important medm screens and python scripts were committed to Gitlab so we should be good.  WS2 was running an old CentOS (Red Hat) operating system that is now out of its service period.  I will switch out its HD for a fresh 250 GB drive and do a fresh install of Debian as a start.

Edit Sun Mar 4 00:13:37 2018: Scratch using WS2 as a stop-gap machine: it won't boot from USB and the CDROM drive is busted.  We're going to have to use WS1 for now; it already has Debian installed and LIGO tools (I think).

Edit Sun Mar 4 12:48:39 2018: Last night I ended up just moving WS1 into the PSL lab.  I had previously installed Debian and all the LIGO tools (see ATF:2181), which has now come in handy.  I've changed the machine's IP to 10.0.1.34 and we can now SSH in remotely using the usual gateway address and port 22.  We may want to repurpose the 'gateway' box as it is not currently in use as an SSH landing point.

  2113   Fri Mar 2 19:33:14 2018 awadeMiscPurchasesNew Wiha 2.0-7.0 Nm torque control driver

New Torque Driver

I purchased a new torque driver for use in the CTN lab.  It is the Wiha TorqueVario 2.0-7.0 Nm (model 28655), pictured below. This is the highest value driver in their variable torque driver selection with a range that is appropriate for tightening 1/4-20 bolts on the table.  We already have the much lower range Adjustable TorqueVario 15 - 80 In/Oz (model 28501), but this range is only appropriate for very low torque applications like fastening PBS and mirrors. In the future we might like to get a mid-range driver to cover the whole range.

I also purchased a selection of Phillips and flat head driver blades to go with the driver heads because they were relatively cheap. All the Wiha blades are exchangeable between their torque tools, they will likely come in handy for a range of precision applications.

So How Much Should We Torque?

So far I've only tested the new driver qualitatively. 

The range of torques applied by humans in the lab varies widely and there isn't a lot of (good) advice out there on the optimal value.  Over-torquing deforms the table; this can misalign optics in the short term, and the deformation then undergoes a very slow relaxation over a long period of time.  In the worst cases the table or opto-mechanical components go past their elastic limit.  An under-tensioned bolt is obviously bad too: without a strong rigid connection to the table the optic mount may be free to move in a number of lower frequency modes that otherwise wouldn't be allowed.  Rana has recommended an applied torque of ~5-6 foot-pounds (6.8-8.1 Nm); this is supposed to be just below the limit of most aluminum and steel plates before they go from elastic to plastic deformation.
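For the record, the conversion behind those numbers (note it is foot-pounds, not inch-pounds):

FT_LB_TO_NM = 1.3558   # one foot-pound in newton-metres
for t in (5, 6):
    print('%d ft-lb = %.1f Nm' % (t, t * FT_LB_TO_NM))
# prints: 5 ft-lb = 6.8 Nm, 6 ft-lb = 8.1 Nm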

Here is an excerpt from the LIGOX chat channel, from Rana:

Most of the time, it doesn't matter. You can use whatever seems right to you. However, in situations where precision matters, you have to consider what the requirement on the fastening is: e.g. when clamping a 1/4" thick base to an optical table, we use 1/4-20 screws because that's what the table is tapped for. The screw length should be chosen so as to use all the threads in the table.

But, how much torque should be used?

If too much is used, the aluminum base will be deformed so that it is no longer in the elastic regime. Once there is significant plastic deformation, there will be slow mis-alignment of the mount.

Washers increase the total force which can be used, since it reduces the pressure on the soft aluminum given a fixed force. For the usual set of Thorlabs hardware we have the correct torque is ~5-6 ft. lbs [6.8-8.1 Nm]. Similar numbers can be found for other cases by considering what materials are being used.

Initial Qualitative Test

I tested a few different torque values for 1/4-20 bolts (with washers) fastened directly into the table in the ATF lab.  I used the south table in the ATF lab because many of the tapped holes in the CTN lab have been damaged by over-torquing and contaminants in the threads.  

A dozen 1/4-20 bolts (with washers) were fastened with identical torque values starting at 2 Nm (see second picture).  Between each torque cycle I undid the bolt under test with a regular ball driver to get a feel for the force used.  From there I incremented the applied torque by 0.5 Nm on each tightening cycle, working up to 7 Nm.  When undoing a bolt there is a kind of a 'crack'.  This is the point at which the fastener goes from vertical contact friction to loose thread-only friction.

I found that the 'soft crack' point was 2.0 - 2.5 Nm.  The transition to 'hard crack' (an audible click) occurs at about 3.5 Nm.  However, interestingly, the variability between bolts tightened to 3.5 Nm seemed to be higher; 3 out of 12 bolts gave a softish crack.  It's likely that the particulars of the washer-table-bolt surfaces change the crack point.  I found that a torque of 4.0 Nm gave a guaranteed hard crack without seeming qualitatively excessive.  The transition between regimes was a rapid one, and above 4.0 Nm the friction hold was about the same, giving about the same 'crack'.

I found the recommended 6.8-8.1 Nm very tight.  A value of 7 Nm required a very strong grip on the driver; this is the kind of torque that might only easily be applied using a T-handle driver or a long allen key.  It also seemed unreasonably high compared to what is usually used, and is at the upper end of the range I've seen in various labs. 

My initial recommendation is 3.5-4.0 Nm for regular bolts on most mounts.  My usual personal choice is a soft crack at around 2.5 Nm.  

Procuring Fixed Value Drivers For General Lab Use

Wiha sells fixed value torque drivers in increments of 0.5 Nm (see Wiha EasyTorque), these fit the standard blades and are reasonably priced. They also sell fixed value Wing Handles that have a compact profile.  We may want to do some scientifically rigorous tests of various post-fork and base-dogclamp combinations to see what the best objective torque value is.

 

 

Attachment 1: IMG_2318.JPG
IMG_2318.JPG
Attachment 2: IMG_2319.JPG
IMG_2319.JPG
  2112   Wed Feb 28 18:58:01 2018 CraigHowToComputersGetting CTN lab data from framebuilder using python nds2

Jaime and John R have done some work enabling NDS on our framebuilder in the subbasement, fb4.  I have figured out how to get data from fb4 to our CTN lab computer ws3, using python nds2.

1) Log into ws3 from your local machine: $ ssh -Y controls@131.215.115.216 -p 2022

2) Run ipython on ws3: $ ipython

3) Enter the following code into the ipython session:

In [1]: import nds2
In [2]: c = nds2.connection('10.0.1.156', 8088)  # 10.0.1.156 is the fb4 local ip address.  8088 is the fb4 NDS broadcast port number.
In [3]: gpstime = 1201902464  # using old data from Feb 5th, because our lab has been out of commission for a couple of days.
In [4]: chanName = 'C3:PSL-SCAV_FSS_SLOWOUT'  # retrieve the SCAV slow laser control
In [5]: from pylab import *
In [6]: ion()
In [7]: data = c.fetch(gpstime, gpstime+50000, [chanName])
In [8]: plot(data[0].data)

A plot should appear on your computer.  It should be the same as the plot posted below.  It should take about 10 seconds to appear.
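If you want a quick ASD rather than a time series, the same session can be extended like this (a sketch; it assumes scipy is available in the virtual environment, and the sample rate below is an assumption, so adjust it to the channel's actual archive rate):

In [9]: from scipy.signal import welch
In [10]: fs = 16.0  # Hz, assumed archive rate for this slow channel; check and adjust
In [11]: f, psd = welch(data[0].data, fs=fs, nperseg=4096)
In [12]: loglog(f[1:], sqrt(psd[1:]))  # skip the DC bin
In [13]: xlabel('Frequency [Hz]'); ylabel('ASD [counts/rtHz]')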

Quote:

Steps to getting data from our framebuilder, since many of the steps are pretty hard to remember.  These steps were performed on a Macbook running OSX Sierra 10.12.5.  Dataviewer was opened locally using XQuartz.


1) SSH into PSL Lab computer ws3 through the public port 2022:

$ ssh -Y controls@131.215.115.216 -p 2022   (I often alias this on my PC .bashrc as some command: $ alias ws3="ssh -Y controls@131.215.115.216 -p 2022")

2) SSH into framebuilder4 fb4:

$ fb4 ("fb4" is already aliased on ws3 to be the command $ ssh -Y controls@10.0.1.156)

3) Launch dataviewer:

$ dvlaunch ("dvlaunch" is an alias to $ LIGONDSIP=localhost dataviewer.  This tells dataviewer to look locally for frames.)

4) Dataviewer will launch.  Click the "Signal" tab.  Click the "Slow" button.  Channel options "C3" and "C4" should appear.  The PSL Lab is "C3".  Choose what channels you want to plot.

5) Click the "Playback" tab.  Click "Second Trend" because other modes don't work.  Unclick "Min" and "Max".  Select X Axis Time as "GPS".  Choose your start and stop times you want plotted. Finally select your signals.  Click "Start".

Wait a while.  The terminal you ran dvlaunch from should give a progress report.  After all data is retrieved, a plots page should automatically appear with the channels and plot start/stop times you requested.


If you want to save the data from your plot:

6) Click on the plot you want to save data from.  It will have little black boxes in the corners of the plot when selected.

7) Click the "Data -> Export -> ASCII" tab.  A window called "Grace: Write sets" should open.

8) Click on the option under "Write set(s)", and change the Format from "%.8g" to "%.10g" to get all the digits of the GPS time.  Change Selection to whatever you want to name your datafile.  Click "OK". 

Your data should be saved.  Make sure it is formatted well.

 

Attachment 1: SCAV_SLOWOUT_examplePlot.png
SCAV_SLOWOUT_examplePlot.png
  2111   Wed Feb 28 13:43:15 2018 awade, CraigMiscSafetyPipe work scheduled for lab Friday March 2nd

The lab is all wrapped up and ready to go.  I'm trying to get in contact with plumbing to see if we can move the job forward to Thursday.  

Edit awade Wed Feb 28 14:58:12 2018: Got through to Raymond at ext 1252; the job is locked in for Friday morning and can't be moved forward.  Hopefully we can get to cleaning the lab by Friday afternoon and reboot for the weekend.

 

 

Quote:

The Caltech plumbing shop called to say they will have all the parts they need to start work Friday morning.  There is a bit of masonry work to be done to the down pipe hole.  So there will be some dust.

I have ordered materials from McMaster for wrapping the experiment up again.  I got extra so we will have a cling-wrap-kit® for the WB labs ready to go in the future.  I didn't get a tracking receipt, but the order went through around COB on the 26th.  Should be here today or tomorrow.

 

Attachment 1: IMG_2310.JPG
IMG_2310.JPG
Attachment 2: IMG_2311.JPG
IMG_2311.JPG
Attachment 3: IMG_2312.JPG
IMG_2312.JPG
  2110   Wed Feb 28 11:50:21 2018 awadeMiscSafetyPipe work scheduled for lab Friday March 2nd

The Caltech plumbing shop called to say they will have all the parts they need to start work Friday morning.  There is a bit of masonry work to be done to the down pipe hole.  So there will be some dust.

I have ordered materials from McMaster for wrapping the experiment up again.  I got extra so we will have a cling-wrap-kit® for the WB labs ready to go in the future.  I didn't get a tracking receipt, but the order went through around COB on the 26th.  Should be here today or tomorrow.

Quote:

I called facilities.  They say with their new purchase approval process, and lead time on parts, that they expect the repair jobs on piping would start early next week.  

There will be a bit of manual alterations to the through hole coming down into the ceiling so we need to do some dust mitigation.  I will order some more cling wrap and salvage some of the plastic sheeting from the last episode.

 

  2109   Wed Feb 28 10:20:40 2018 awadeDailyProgressTempCtrlHeater and temperaure sensors in cryo lab

You should check a few things.  Get a 200-300 MHz oscilloscope with a probe and look to see if the circuit has any oscillations.  This should be your first reaction to many problems: looking a little wider than the audio band can often reveal important problems that people miss.  We found that the heating elements had some unexpected impedance that made our feedback to the MOSFET unstable at very high frequencies.  The solution there is to put some capacitors across the heater to damp it, and maybe in some other places too.  We found that very high frequency oscillations actually coupled back into the temperature sensing circuit.  You may want to check whether you can see any pickup in your temperature sensing circuit here too.

Another thing.  From what I remember of your circuit you are transmitting the signal referenced to a common ground (rather than as a floated differential signal).  If your heater is drawing a bunch of current through a common ground at the table end, this will generate a potential drop between the power supply/rack and the 0 volt reference of the circuit at the table (see Ohm's law).  There is a good table on the American wire gauge wiki page that gives standard resistances for different gauge wires; from this you can calculate the expected potential difference generated by your heater circuit current.  Check your grounding situation.  Are you pinning one ADC pin to ground on the Acromag (maybe you shouldn't)?  Are you using appropriately chunky wire to establish ground at the table?  Are you committing the cardinal sin of having two or more separate paths to ground from the table?  Take some time early in your experiment to ensure your grounding network is topologically like a tree rather than a fungal Mycorrhizal network.  See attached figure for reference.
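To put rough numbers on the Ohm's law point (illustrative values only; look up your actual wire gauge and length in the AWG table):

# Rough voltage offset from returning the heater current through a shared ground wire.
# Wire resistances below are approximate values from the AWG tables.
I_HEATER = 1.5                 # amps, about the max vaccan heater current mentioned elsewhere in this log
LENGTH   = 2.0                 # metres of shared return path, say
for gauge, r_per_m in [(24, 0.084), (18, 0.021), (12, 0.005)]:
    offset_mV = 1e3 * I_HEATER * r_per_m * LENGTH
    print('AWG %d over %.0f m: ~%.0f mV of ground offset' % (gauge, LENGTH, offset_mV))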

A source of wisdom on grounding that rana recommends is Morrison, R. (1998), Grounding and shielding techniques (4th edition), New York: Wiley (link).  Craig and I have checked out two of the copies; any edition will do, so maybe get one that is physically on the shelf or, even better, use the electronic copy.

 

 

 

Quote:

I think I'm seeing a similar problem to what y'all were seeing when I use my heater circuit (which I believe is the same as your heater circuit; it's the one Kira and Kevin are using at the 40m). Our temperature readout circuits might be slightly different.

Basically, when I have the heater on the same board as my readout op amps, I get up to a few tenths of a volt jump in my temperature readout; however, even after moving this circuit to a different board and using a separate power supply, I'm getting about a millivolt shift. This is not good for <1 K control. I'm also well within the limits of my power supplies, have voltage regulators before the op amps, etc... I will try swapping out the op amp as you did, but thought it was a pretty weird problem.

 

Attachment 1: GroundingDoGroundingDont.jpg
GroundingDoGroundingDont.jpg
  2108   Tue Feb 27 14:17:13 2018 AaronDailyProgressTempCtrlHigh Current Draw for Vaccan Temp Control Causing Nonlinear Voltage Spikes

I think I'm seeing a similar problem to what y'all were seeing when I use my heater circuit (which I believe is the same as your heater circuit; it's the one Kira and Kevin are using at the 40m). Our temperature readout circuits might be slightly different.

Basically, when I have the heater on the same board as my readout op amps, I get up to a few tenths of a volt jump in my temperature readout; however, even after moving this circuit to a different board and using a separate power supply, I'm getting about a millivolt shift. This is not good for <1 K control. I'm also well within the limits of my power supplies, have voltage regulators before the op amps, etc... I will try swapping out the op amp as you did, but thought it was a pretty weird problem.

Quote:

After some extensive testing, the circuit appears to be working as expected.  The only exception is the effect the temperature control circuit has on all other electronics connected to the +24V Kepco power supply.  The model number on our electronics rack is ATE 36-3M; it is a Size "B" Quarter Rack model, rated for "Approx 100 watts" of power, with max DC voltage of 36V and max current of 3 amps, according to Table 1.1 of the manual.  Our current readings on the power supply show between 1 and 2.5 amps at 24 volts, with the current depending on the 0 to 1.5 amps the vaccan heater draws.  So our max power output from the power supply is 60 watts, well within the power limits.

 

  2107   Tue Feb 27 03:25:34 2018 ranaDailyProgressMode matchingemcee Hammer my mode

And Lee@MIT also has a python alamode as part of his ultimate ('simulate everything') package:

https://github.com/nicolassmith/alm/issues/15

  2106   Mon Feb 26 14:16:38 2018 awadeMiscSafetyWater leak in the lab

I called facilities.  They say with their new purchase approval process, and lead time on parts, that they expect the repair jobs on piping would start early next week.  

There will be a bit of manual alterations to the through hole coming down into the ceiling so we need to do some dust mitigation.  I will order some more cling wrap and salvage some of the plastic sheeting from the last episode.

Quote:

Checked the piping again this morning.  The water is just a slow drip now.  The facilities people put duct tape around the pipe crack and isolated the source of water on the floor above.  Don't think it's safe yet to turn the rack back on, given the proximity of the water and the quality of the patch job. 

I called facilities to find out what the status of the job was and the timelines for fixing.  They didn't have the poly pipe in stock and have reordered. The earliest they can get to starting on the job will be Monday.  The guy responsible is away today.  We should call ext 4969 (Caltech plumbing shop) early on Monday to get an update on expected completion time of the job. Until then we should redirect effort into SURF search, noise budgeting, scatter modeling, PID modeling, FSS sub-noise budget etc.

 
