ID | Date | Author | Type | Category | Subject
13139 | Mon Jul 24 19:57:54 2017 | gautam | Update | CDS | IMC locked, Autolocker re-enabled
Now that all the front end models are running, I re-aligned the IMC, locked it manually, and then tweaked the alignment some more. The IMC transmission is now hovering around 15300 counts. I re-enabled the Autolocker and FSS Slow loops on Megatron as well.
Quote: |
MX/OpenMX network running
Today I got the mx/open-mx networking working for the front ends. This required some tweaking to the network interface configuration for the diskless front ends, and recompiling mx and open-mx for the newer kernel. Again, this will all be documented.
13140 | Tue Jul 25 00:03:01 2017 | rana | Omnistructure | Treasure | coffee pot lid
I have recommissioned the Zojirushi coffee pot lid. You may, once again, align the dots in order to make the carafe pourable.
Details:
The Zojirushi lid is a two part mechanism:
- The top part of the lid must be removed for cleaning.
- When replacing the lid the two components must be aligned to < 3 mrad precision so that the "teeth" are able to land in the groove.
- There is a 4-fold degeneracy in this process. To break the degeneracy, align the dot on top with the spout gap (visible from the bottom view).
- After proper alignment and mating, the two parts should snap together and the relative alignment wiggle available should be < 2 mrad.
- After screwing the two-piece lid onto the carafe, ensure that the 2 dots are separated by < 170 deg in the closed position.
13141 | Tue Jul 25 02:03:59 2017 | gautam | Update | Optical Levers | Optical lever tuning thoughts
Summary:
Currently, I am unable to engage the coil-dewhitening filters without destroying cavity locks. One reason is that the present Oplev servos do not roll off steeply enough at high frequencies - engaging the digital whitening + analog de-whitening just causes the DAC output to saturate. Today, Rana and I discussed some ideas about how to approach this problem. This elog collects these thoughts. As I flesh out these ideas, I will update them in a more complete writeup in T1700363 (placeholder for now). Past relevant elogs: 5376, 9680.
- Why do we need optical levers?
- To stabilize the low-frequency seismic driven angular motion of the optics.
- In what frequency range can we / do we need to stabilize the angular motion of the optics? How much error signal suppression do we need in the control band? How much is achievable given the current Oplev setup?
- To answer these questions, we need to build a detailed Oplev noise budget.
- Ultimately, the Oplev error signal is sensing the differential motion between the suspended optic and the incident laser beam.
- In what frequency range does laser beam jitter dominate the actual optic motion? What about mechanical drifts of the optical tables the HeNes sit on? And for many of the vertex optics, the Oplev beam has multiple bounces on steering mirrors on the stack. What is the contribution of the stack motion to the error signal?
- The answers to the above will tell us what lower and upper UGFs we should and can pick. It will also be instructive to investigate if we can come up with a telescope design near the Oplev QPD that significantly reduces beam jitter effects (see elog 10732). Also, can we launch/extract the beam into/from the vacuum chamber in such a way that we aren't so susceptible to motion of the stack?
- What are some noises that have to be measured and quantified?
- Seismic noise
- Shot noise
- Electronics noise of the QPD readout chain
- HeNe intensity noise (does this matter since we are normalizing by QPD sum?)
- HeNe beam pointing / jitter noise (How? N-corner hat method?)
- Stack motion contribution to the Oplev error signal
- How do we design the Oplev controller?
- The main problem is to frame the right cost function for this problem. Once this cost function is made, we can use MATLAB's PSO tool (which is what was used for the PR3 coating design optimization, and also successfully for this kind of loop-shaping problem by Rana for aLIGO) to find a minimum by moving the controller poles and zeros around within bounds we define.
- What terms should enter the cost function?
- In addition to those listed in elog 5376
- We need the >10Hz roll-off to be steep enough that turning on the digital whitening will not significantly increase the DAC output RMS or drive it to saturation.
- We'd like for the controller to be insensitive to 5% (?) errors in the assumed optical plant and noise models i.e. the closed loop shouldn't become unstable if we made a small error in some assumed parameters.
- Some penalty for using excessive numbers of poles/zeros? Penalty for having too many high-frequency features.
- Other things to verify / look into
- Verify if the counts -> urad calibration is still valid for all the Oplevs. We have the arm-cavity power quadratic dependence method, and the geometry method to do this.
- Check if the Oplev error signals are normalized by the quadrant sum.
- How important is it to balance the individual quadrant gains?
- Check with Koji / Rich about new QPDs. If we can get some, perhaps we can use these in the setup that Steve is going to prepare, as part of the temperature vs HeNe noise investigations.
Before the CDS went down, I had taken error signal spectra for the ITMs. I will update this elog tomorrow with these measurements, as well as some noise estimates, to get started. |
13142 | Tue Jul 25 08:48:57 2017 | Steve | Update | VAC | RGA scan at d278
The RGA did not shut down when the turbo pump controller failed.
Quote: |
Ifo pressure was 5.5 mTorr this morning. The PSL shutter was still open. TP2 controller failed. Interlock closed V1, V4 and VM1
Turbo pump 2 is the fore pump of the Maglev. The pressure here was 3.9 Torr, so the Maglev got warm (~38 C), but it was still rotating at its normal 560 Hz with V1 closed
What I did:
Looked at the pressures on the Hornet and Super Bee gauges (InstruTech, Inc.)
Closed all annuluses and VA6, disconnected V4 and VA6, and turned on an external fan to cool the Maglev
Opened V7 to pump the Maglev fore line with TP3
V1 opened manually when foreline pressure dropped to <2mTorr at P2 and the body temp of the Maglev cooled down to 25-27 C
VM1 opened at 1e-5 Torr
Valve configuration: vacuum normal with annuluses not pumped
Ifo pressure 8.5e-6 Torr -IT at 10am, P2 foreline pressure 64 mTorr, TP3 controller 0.17A 22C 50Krpm
note: all valves are opened manually; the interlock can only close them
Quote: |
While walking down to the X end to reset c1iscex I heard what I would call a "rythmic squnching" sound coming from under the turbo pump. I would have said the sound was coming from a roughing pump, but none of them are on (as far as I can tell).
Steve maybe look into this??
PS: please call me next time you see the vacuum is not Vacuum Normal
Attachment 1: RGA278d.png
13143 | Tue Jul 25 14:04:06 2017 | Steve | Update | VAC | turbo controller installed and we are running at vac normal
Gautam and Steve,
Spare Varian turbo-V 70 controller, Model 969-9505, sn 21612 was swapped in. It is running the turbo fine @ 50Krpm, but it does not allow its V4 valve to be opened.
It turns out that TP2 @ 75Krpm will allow V4 to open and close. This must be a software issue.
So Vacuum Normal is operational if TP2 is running at 75,000 rpm.
We want to run at 50,000 rpm in the long term.
Note: the RS232 Dsub connector on the back of this controller is mounted 180 degrees opposite to that on the TP3 controller and the old failed TP2 controller
PS: controller is shipping out for repair 7-28-2017
Attachment 1: TP2@75Krpm.png
13144 | Tue Jul 25 14:27:19 2017 | Steve | Update | safety | safety training
Kira Dubrovina and Naomi Wharton received 40m specific basic safety training. |
Attachment 1: safety.jpg
13145 | Wed Jul 26 19:13:07 2017 | Jamie | Update | CDS | daqd showing same instability as before
I recompiled daqd on the updated fb1, similar to how I had before, and we're seeing the same instability: the process crashes when it tries to write out the second trend (technically it looks like it crashes while it's trying to write out the full frame while the second trend is also being written out). Jonathan Hanks and I are actively looking into it and I'll provide a further report soon. |
13146 | Thu Jul 27 22:42:24 2017 | gautam | Update | SUS | Seismic noise, DAC noise, and Coil Driver electronics noise
Summary:
Yesterday at the meeting, we talked about how the analog de-whitening filters in the coil driver path may be more aggressive than necessary. I think Attachment #1 shows that this is indeed the case.
Details:
I had done some modeling and measurement of some of these noises while I was putting together the initial DRMI noise budget, but I had never put things together in one plot. In Attachment #1, I've plotted the following:
- Quadrature sum of seismic noise (from GWINC calculations) for 3 suspended optics (I'm sticking to the case of 3 optics since I've been doing all the noise-budgeting for MICH - for DARM, it will be 4 suspended optics).
- The unfiltered DAC noise estimate. The voltage noise was measured in this elog. To convert this to displacement noise for 3 suspended optics, I've used the value of 1.55e-9/f^2 m/ct as the actuator coefficient. This number should be accurate under the assumption that the series resistance on the coil driver board output is 400 ohms (we could increase this - by how much depends on how much actuation range is needed).
- Coil driver board and de-whitening board electronics noises (added in quadrature). I've used the LISO model noises, which line up well with the measured noises in elogs 13010 and 13015.
- The DAC noise filtered by the de-whitening transfer function, separately for the cases of using one or both of the available biquad stages. This cannot be lower than the preceding trace (electronics noise of de-whitening and coil driver boards), so should be disregarded where it dips below it.
It would seem that the coil driver + de-whitening board electronic noises dominate above ~150Hz. The electronics noise is ~10nV/rtHz at the output of the coil driver board, which is only a factor of 100 below the DAC noise - so the stopband attenuation of ~70dB on the de-whitening boards seems excessive.
We can lower this noise by a factor of 2.5 if we up the series resistance on the coil driver boards from 400ohm to 1kohm, but even so, the displacement noise is ~1e-18 m/rtHz. I need to investigate the electronics noises a little more carefully - I only measured it for the case when both biquad stages were engaged, I will need to do the model for all permutations - to be updated.
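As a rough illustration of the scaling above (this is not the attached notebook, just a minimal sketch): the flat DAC noise level and the 16-bit/+-10 V counts-to-volts conversion are assumptions, while the 1.55e-9/f^2 m/ct coefficient and the inverse scaling with the series resistance follow the text.

import numpy as np

f = np.logspace(1, 3, 400)                  # Hz
dac_vnoise = 800e-9 * np.ones_like(f)       # V/rtHz, placeholder DAC output noise ASD
volts_per_ct = 20.0 / 2**16                 # assumes a 16-bit, +/-10 V DAC

act_coeff_400 = 1.55e-9 / f**2              # m/ct per optic, for the 400 ohm series resistor

def mich_displacement_noise(R_series_ohm):
    # actuation strength (and hence DAC-noise coupling) scales as 400/R_series
    per_optic = (dac_vnoise / volts_per_ct) * act_coeff_400 * (400.0 / R_series_ohm)
    return np.sqrt(3) * per_optic           # quadrature sum over 3 suspended optics

idx = np.argmin(np.abs(f - 150))            # look up the value near 150 Hz
for R in (400, 1000, 10000):
    print(f"R = {R} ohm: {mich_displacement_noise(R)[idx]:.2e} m/rtHz at 150 Hz")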
Attachment #2 has an iPython notebook used to generate this plot along with all the data.
Edit 28 Jul 2.30pm: I've added Attachment #3 with traces for different assumed values of the series resistance on the coil driver board - although I have not re-computed the Johnson noise contribution for the various resistances. If we can afford to reduce the actuation range by a factor of 25, then it looks like we get to within a factor of ~5 of the seismic noise at ~150Hz. |
Attachment 1: noiseComparison.pdf
Attachment 2: deWhiteConfigs.zip
Attachment 3: noiseComparison_resistances.pdf
13147 | Fri Jul 28 15:36:32 2017 | gautam | Update | Optical Levers | Optical lever tuning thoughts
Attachment #1 - Measured error signal spectrum with the Oplev loop disabled, measured at the IN1 input for ITMY. The y-axis calibration into urad/rtHz may not be exact (I don't know when this was last calibrated).
From this measurement, I've attempted to disentangle what is the seismic noise contribution to the measured plant output.
- To do so, I first modelled the plant as a pair of complex poles @0.95 Hz, Q=3. This gave the best agreement with the measurement by eye; I didn't try to optimize this too carefully.
- Next, I assumed all the noise between DC-10Hz comes from only seismic disturbance. So dividing the measured PSD by the plant transfer function gives the spectrum of the seismic disturbance. I further assumed this to be flat, and so I averaged it between DC-10Hz.
- This will serve as a first seismic noise model for the loop shape optimizer (see the sketch below). I can probably get a better model using the GWINC calculations but for a start, this should be good enough.
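A minimal sketch of that procedure, with a placeholder array standing in for the measured DTT spectrum; the pendulum parameters (0.95 Hz, Q = 3) are the ones quoted above.

import numpy as np
from scipy import signal

# placeholder for the measured open-loop error signal ASD [urad/rtHz]
f = np.logspace(-1, 2, 500)                    # Hz
err_asd = np.ones_like(f)                      # stand-in for the DTT measurement

# pendulum plant: complex pole pair at 0.95 Hz, Q = 3, unity DC gain
f0, Q = 0.95, 3.0
w0 = 2 * np.pi * f0
plant = signal.TransferFunction([w0**2], [1, w0 / Q, w0**2])
_, mag_dB, _ = signal.bode(plant, w=2 * np.pi * f)
P = 10**(mag_dB / 20)

# assume everything below 10 Hz is seismic; refer it to the plant input and average flat
mask = f < 10
seis_flat = np.mean(err_asd[mask] / P[mask])   # flat seismic-disturbance estimate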
It remains to characterize various other noise sources.
Quote: |
Before the CDS went down, I had taken error signal spectra for the ITMs. I will update this elog tomorrow with these measurements, as well as some noise estimates, to get started.
I have also confirmed that the "QPD" Simulink block, which is what is used for Oplevs, does indeed have the PIT and YAW outputs normalized by the SUM (see Attachment #2). This was not clear to me from the MEDM screen.
GV 30 Jul 5pm: I've included in Attachment #3 the block diagram of the general linear feedback topology, along with the specific "disturbances" and "noises" w.r.t. the Oplev loop. The measured (open loop) error signal spectrum of Attachment #1 (call it y) is given by:
[equation image: y written as a sum over the plant-filtered disturbance terms plus a sum over the sensing noise terms]
If it turns out that one (or more) term(s) in each of the summations above dominates in all frequency bands of interest, then I guess we can drop the others. An elog with a first pass at a mathematical formulation of the cost-function for controller optimization to follow shortly. |
Attachment 1: errSig.pdf
Attachment 2: QPD_simulink.png
Attachment 3: feedbackTopology.pdf
13148 | Fri Jul 28 16:47:16 2017 | gautam | Update | General | PSL StripTool flatlined
About 3.5 hours ago, all the PSL wall StripTool traces "flatlined", as happens when we had the EPICS freezes in the past - except that all these traces were flat for more than 3 hours. I checked that the c1psl slow machine responded to ping, and I could also telnet into it. I tried opening the StripTool on pianosa and all the traces were responsive. So I simply re-started the PSL StripTool on zita. All traces look responsive now. |
13149 | Fri Jul 28 20:22:41 2017 | Jamie | Update | CDS | possible stable daqd configuration with separate DC and FW
This week Jonathan Hanks and I have been trying to diagnose why the daqd has been unstable in the configuration used by the 40m, with data concentrator (dc) and frame writer (fw) in the same process (referred to generically as 'fb'). Jonathan has been digging into the core dumps and source to try to figure out what's going on, but he hasn't come up with anything concrete yet.
As an alternative, we've started experimenting with a daqd configuration with the dc and fw components running in separate processes, with communication over the local loopback interface. The separate dc/fw process model more closely matches the configuration at the sites, although the sites put the dc and fw processes on different physical machines. Our experimentation thus far seems to indicate that this configuration is stable, although we haven't yet tested it with the full configuration, which is what I'm attempting to do now.
Unfortunately I'm having trouble with the mx_stream communication between the front ends and the dc process. The dc does not appear to be receiving the streams from the front ends and is producing a '0xbad' status message for each. I'm investigating. |
13150 | Sat Jul 29 14:05:19 2017 | gautam | Update | General | PSL StripTool flatlined
The PMC was unlocked when I came in ~10mins ago. The wall StripTool traces suggest it has been this way for > 8hours. I was unable to get the PMC to re-lock by using the PMC MEDM screen. The c1psl slow machine responded to ping, and I could also telnet into it. But despite burt-restoring c1psl, I could not get the PMC to lock. So I re-started c1psl by keying the crate, and then burt-restored the EPICS values again. This seems to have done the trick. Both the PMC and IMC are now locked.
Unrelated to this work: It looks like some/all of the FE models were re-started. The x3 gain on the coil outputs of the 2 ITMs and the BS, which I had manually engaged when I re-aligned the IFO on Monday, was off, and in general, the IMC and IFO alignment seem much worse now than they did yesterday. I will do the re-alignment later as I'm not planning to use the IFO today. |
13151 | Sat Jul 29 16:24:55 2017 | jamie | Update | General | PSL StripTool flatlined
Quote: |
Unrelated to this work: It looks like some/all of the FE models were re-started. The x3 gain on the coil outputs of the 2 ITMs and BS, which I had manually engaged when I re-aligned the IFO on Monday, were off, and in general, the IMC and IFO alignment seem much worse now than it was yesterday. I will do the re-alignment later as I'm not planning to use the IFO today. |
This was me. I restarted the front ends when I was getting the MX streams working yesterday. I'll try to be more conscientious about logging front end restarts. |
13152 | Mon Jul 31 15:13:24 2017 | gautam | Update | CDS | FB ---> FB1
[jamie, gautam]
In order to test the new daqd config that Jamie has been working on, we felt it would be most convenient for the host name "fb" (martian network IP 192.168.113.202) to point to the physical machine "fb1" (martian network IP 192.168.113.201).
I made this change in /var/lib/bind/martian.hosts on chiara, and then ran sudo service bind9 restart. It seems to have done the job. So as things stand, both hostnames "fb" and "fb1" point to 192.168.113.201.
Now, when starting up DTT or dataviewer, the NDS server is automatically found.
More details to follow. |
13153 | Mon Jul 31 18:44:40 2017 | Jamie | Update | CDS | CDS system essentially fully recovered
The CDS system is essentially fully recovered at this point. The mx_streams are all flowing from all front ends, and from all models, and the daqd processes are receiving them and writing the data to frames:
[screenshot: CDS overview screen showing front end and daqd status]
Remaining unresolved issues:
- IFO needs to be fully locked to make sure ALL components of all models are working.
- The remaining red status lights are from the "FB NET" diagnostics, which are reflecting a missing status bit from the front end processes due to the fact that they were compiled with an earlier RCG version (3.0.3) than the mx_streams were (3.3+/trunk). There will be a new release of the RTS soon, at which point we'll compile everything from the same version, which should get us all green again.
- The entire system has been fully modernized, to the target CDS reference OS (Debian jessie) and more recent RCG versions. The management of the various RTS components, both on the front ends and on fb, has as much as possible been updated to use the modern management tools (e.g. systemd, udev, etc.). These changes need to be documented. In particular...
- The fb daqd process has been split into three separate components, a configuration that mirrors what is done at the sites and appears to be more stable:
- daqd_dc: data concentrator (receives data from front ends)
- daqd_fw: receives frames from dc and writes out full frames and second/minute trends
- daqd_rcv: NDS1 server (raises test points and receives archive data from frames from 'nds' process)
The "target" directory for all of these new components is:
- /opt/rtcds/caltech/c1/target/daqd
All of these processes are now managed under systemd supervision on fb, meaning the daqd restart procedure has changed. This needs to be simplified and clarified.
- Second trend frames are being written, but for some reason they're not accessible over NDS.
- Have not had a chance to verify minute trend and raw minute trend writing yet. Needs to be confirmed.
- Get wiper script working on new fb.
- Front end RTS kernel will occasionally crash when the RTS modules are unloaded. Keith Thorne apparently has a kernel version with a different set of patches from Gerrit Kuhn that does not have this problem. Keith's kernel needs to be packaged and installed in the front end diskless root.
- The models accessing the dolphin shared memory will ALL crash when one of the front end hosts on the dolphin network goes away. This results in a boot fest of all the dolphin-enabled hosts. Need to figure out what's going on there.
- The RCG settings snapshotting has changed significantly in later RCG versions. We need to make sure that all burt backup type stuff is still working correctly.
- Restoration of /frames from old fb SCSI RAID?
- Backup of entirety of fb1, including fb1 root (/) and front end diskless root (/diskless)
- Full documentation of rebuild procedure from Jamie's notes.
13154 | Mon Jul 31 20:35:42 2017 | Koji | Summary | Computers | Chiara backup situation summary
Summary
- CDS Shared files system: backed up
- Chiara system itself: not backed up
controls@chiara|~> df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/sda1 450420 11039 416501 3% /
udev 15543 1 15543 1% /dev
tmpfs 3111 1 3110 1% /run
none 5 0 5 0% /run/lock
none 15554 1 15554 1% /run/shm
/dev/sdb1 2064245 1718929 240459 88% /home/cds
/dev/sdd1 1877792 1426378 356028 81% /media/fb9bba0d-7024-41a6-9d29-b14e631a2628
/dev/sdc1 1877764 1686420 95960 95% /media/40mBackup
/dev/sda1 : System boot disk
/dev/sdb1 : Main cds file system, a 2TB partition of a 3TB disk (1TB vacant)
/dev/sdc1 : Daily backup of /dev/sdb1 via a cron job (/opt/rtcds/caltech/c1/scripts/backup/localbackup)
/dev/sdd1 : 2014 snapshot of cds. Not actively used. USB disk.
https://nodus.ligo.caltech.edu:8081/40m/11640
13155 | Mon Jul 31 23:39:02 2017 | rana | Update | COC | Cavity Scan Simulation Code
Hiro Yamamoto has updated SIS (Static Interferometer Simulation) to allow us to do the MCMC based inference of the 40m arm cavity mirror maps.
The latest version is in git.ligo.org: IFOsim/SIS/
In the examples directory I have put 3 files:
- mcmcCavityScans.m - runs many cavity scans using parfor and saves the data
- plotCavityScans.m - loads the .mat file with the data and plots it
- plotCavityScans.py - python file which also loads & plots, but nicer since python has a transparency option for the traces.
Attached are the plots and the data. The first attached plot is a low resolution one: 200 scans of 100 frequency points each. The second plot is 200 scans of 300 points each.
The run was done assuming perfect LIGO arm params with a random set of Zernike perturbations for each run. The amplitude of each Zernike was chosen from a Normal distribution with a standard deviation of 10 nm.
We need to come up with a better guess for the initial distribution from which to sample, and also to use the smarter sampling that one does using the MCMC Hammer. |
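If the "MCMC Hammer" here refers to the emcee ensemble sampler (an assumption), a minimal sketch of how the Zernike-amplitude inference might be set up; cavity_scan_model is a hypothetical stand-in for a call into SIS, and the data arrays are placeholders.

import numpy as np
import emcee

ndim, nwalkers = 10, 50                        # e.g. 10 Zernike amplitudes

freq = np.linspace(0, 1, 300)                  # placeholder scan frequency axis
scan_data = np.ones_like(freq)                 # placeholder measured transmission
scan_sigma = 0.01 * np.ones_like(freq)         # placeholder measurement uncertainty

def cavity_scan_model(freq, zernike_amps):
    # hypothetical wrapper around an SIS cavity scan with the given mirror perturbation
    return np.ones_like(freq)

def log_prob(zernike_amps, freq, data, sigma):
    lp = -0.5 * np.sum((zernike_amps / 10e-9)**2)           # Normal(0, 10 nm) prior
    model = cavity_scan_model(freq, zernike_amps)
    return lp - 0.5 * np.sum(((data - model) / sigma)**2)   # Gaussian likelihood

p0 = np.random.normal(0, 10e-9, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(freq, scan_data, scan_sigma))
sampler.run_mcmc(p0, 2000)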
Attachment 1: manyCavityScans-SIS.pdf
Attachment 2: manyCavityScans-SIS.pdf
Attachment 3: MonteCarlo_CavityScans.mat
13156 | Tue Aug 1 16:05:01 2017 | gautam | Update | Optical Levers | Optical lever tuning - cost function construction
Summary:
I've been trying to put together the cost-function that will be used to optimize the Oplev loop shape. Here is what I have so far.
Details:
All of the terms that we want to include in the cost function can be derived from:
- A measurement of the open-loop error signal [using DTT, calibrated to urad/rtHz]. We may want a breakdown of this in terms of "sensing noises" and "disturbances" (see the previous elog in this thread), but just a spectrum will suffice for the optimal controller given the current noises.
- A model of the optical plant, P(s) [validated with a DTT swept-sine measurement].
- A model of the controller, C(s). Some/all of the poles and zeros of this transfer function is what the optimization algorithm will tune to satisfy the design objectives.
From these, we can derive, for a given controller, C(s):
- Closed-loop stability (i.e. all poles should be in the left-half of the complex plane), and exactly 2 UGFs. We can use MATLAB's allmargin function for this. An unstable controller can be rejected by assigning it an extremely high cost.
- RMS error signal suppression in the frequency band (0.5Hz - 2Hz). We can require this to be >= 15dB (say).
- Minimize gain peaking and noise injection - this information will be in the sensitivity function, S(s) = 1/(1 + P(s)C(s)). We can require this to be <= 10dB (say).
- RMS of the control signal between 10 Hz and 200 Hz, multiplied by the digital suspension whitening filter, should be <10% of the DAC range (so that we don't have problems engaging the coil de-whitening).
- Smallest gain margin (there will be multiple because of the various notches we have) should be > 10dB (say). Phase margin at both UGFs should be >30 degrees.
- Terms 1-5 should not change by more than 10% for perturbations in the plant model parameters (f0 and Q of the pendulum) at the 10% (?) level.
We can add more terms to the cost function if necessary, but I want to get some minimal set working first. All the "requirements" I've quoted above are just numbers out of my head at the moment, I will refine them once I get some feeling for how feasible a solution is for these requirements.
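A minimal sketch of how a few of these terms could be folded into a single scalar cost (a weighted sum, as was done for the coating optimization). The 15 dB / 10 dB thresholds are the ones quoted above; the weights and the zpk representations are placeholders, and the stability/margin term is only indicated in a comment.

import numpy as np
from scipy import signal

def oplev_cost(C_zpk, P_zpk, f, err_asd, weights):
    """C_zpk, P_zpk: (zeros, poles, gain) in rad/s for controller and plant (placeholders);
    f: frequency vector [Hz]; err_asd: measured open-loop error ASD [urad/rtHz]."""
    w = 2 * np.pi * f
    _, C = signal.freqs_zpk(*C_zpk, worN=w)
    _, P = signal.freqs_zpk(*P_zpk, worN=w)
    S = 1.0 / (1.0 + P * C)                    # sensitivity function

    # term: in-band error suppression, 0.5-2 Hz (want >= 15 dB)
    band = (f > 0.5) & (f < 2.0)
    supp_dB = -20 * np.log10(np.max(np.abs(S[band])))
    c_supp = max(0.0, 15.0 - supp_dB)

    # term: gain peaking / noise injection (want |S| <= 10 dB everywhere)
    c_peak = max(0.0, 20 * np.log10(np.max(np.abs(S))) - 10.0)

    # term: residual in-loop RMS of the error signal
    c_rms = np.sqrt(np.trapz((np.abs(S) * err_asd)**2, f))

    # closed-loop stability and gain/phase margins (term 1) would be checked separately,
    # e.g. with MATLAB's allmargin, assigning an unstable design an extremely high cost.
    return weights[0] * c_supp + weights[1] * c_peak + weights[2] * c_rms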
Quote: |
An elog with a first pass at a mathematical formulation of the cost-function for controller optimization to follow shortly.
For a start, I attempted to model the current Oplev loop. The modeling of the plant and open-loop error signal spectrum have been described in the previous elogs in this thread.
I am, however, confused by the controller - the MEDM screen (see Attachment #2) would have me believe that the digital transfer function is FM2*FM5*FM7*FM8*gain(10). However, I get much better agreement between the measured and modelled in-loop error signal if I exclude the overall gain of 10 (see Attachments #1 for the models and #3 for measurements).
What am I missing? Getting this right will be important in specifying Term #4 in the cost function...
GV Edit 2 Aug 0030: As another sanity check, I computed the whitened Oplev control signal given the current loop shape (with sub-optimal high-frequency roll-off). In Attachment #4, I converted the y-axis from urad/rtHz to cts/rtHz using the approximate calibration of 240urad/ct (and the fact that the Oplev error signal is normalized by the QPD sum of ~13000 cts), and divided by 4 to account for the fact that the control signal is sent to 4 coils. It is clear that attempting to whiten the coil driver signals with the present Oplev loop shapes causes DAC saturation. I'm going to use this formulation for Term #4 in the cost function, and to solve a simpler optimization problem first - given the existing loop shape, what is the optimal elliptic low-pass filter to implement such that the cost function is minimized?
There is also the question of how to go about doing the optimization, given that our cost function is a vector rather than a scalar. In the coating optimization code, we converted the vector cost function to a scalar one by taking a weighted sum of the individual components. This worked adequately well.
But there are techniques for vector cost-function optimization as well, which may work better. Specifically, the question is if we can find the (infinite) solution set for which no one term in the error function can be made better without making another worse (the so-called Pareto front). Then we still have to make a choice as to which point along this curve we want to operate at. |
Attachment 1: loopPerformance.pdf
Attachment 2: OplevLoop.png
Attachment 3: OL_errSigs.pdf
Attachment 4: DAC_saturation.pdf
13157 | Tue Aug 1 19:23:06 2017 | rana | Update | ALS | X - arm alignment
Rana, Naomi
We dither locked the X arm and then aligned the green beam to it using the PZTs. Everything looks ready for us to do a mode scan tomorrow.
We got buildup for Red and Green, but saw no beat in the control room. Quick glance at the PSL seems OK, but needs more investigation. We did not try moving around the X-NPRO temperature.
Tomorrow: get the beat, scan the PhaseTracker, and get data using pyNDS. |
13158 | Wed Aug 2 09:40:55 2017 | Steve | Update | Electronics | spare ILIGO electronics
Spare iLIGO electronics temporarily stored in the east arm. We need cabinet space. |
Attachment 1: iLIGOspares.jpg
Attachment 2: spareIligo.jpg
13159 | Wed Aug 2 14:47:20 2017 | Koji | Summary | Computers | Chiara backup situation summary
Following ELOG 11640, I further compressed the burt snapshot directories. This freed up an additional ~130GB. This will eventually help to give more space to the local backup (/dev/sdc1)
controls@chiara|~> df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/sda1 450420 11039 416501 3% /
udev 15543 1 15543 1% /dev
tmpfs 3111 1 3110 1% /run
none 5 0 5 0% /run/lock
none 15554 1 15554 1% /run/shm
/dev/sdb1 2064245 1581871 377517 81% /home/cds
/dev/sdd1 1877792 1426378 356028 81% /media/fb9bba0d-7024-41a6-9d29-b14e631a2628
/dev/sdc1 1877764 1698489 83891 96% /media/40mBackup
13160 | Wed Aug 2 15:04:15 2017 | gautam | Configuration | Computers | control room workstation power distribution
The 4 control room workstation CPUs (Rossa, Pianosa, Donatella and Allegra) are now connected to the UPS.
The 5 monitors are connected to the recently acquired surge-protecting power strips.
Rack-mountable power strip + spare APC Surge Arrest power strip have been stored in the electronics cabinet.
Quote: |
this is not the right one; this Ethernet controlled strip we want in the racks for remote control.
Buy some of these for the MONITORS.
13161 | Thu Aug 3 00:59:33 2017 | gautam | Update | CDS | NDS2 server restarted, /frames mounted on megatron
[Koji, Nikhil, Gautam]
We couldn't get data using python nds2. There seems to have been many problems.
- /frames wasn't mounted on megatron, which was the nds2 server. Solution: added /frames 192.168.113.209(sync,ro,no_root_squash,no_all_squash,no_subtree_check) to /etc/exportfs on fb1, followed by sudo exportfs -ra. Using showmount -e, we confirmed that /frames was being exported.
- Edited /etc/fstab on megatron to be fb1:/frames/ /frames nfs ro,bg,soft 0 0. Tried to run mount -a, but console stalled.
- Used nfsstat -m on megatron. Found out that megatron was trying to mount /frames from old FB (192.168.113.202). Used sudo umount -f /frames to force unmount /frames/ (force was required).
- Re-ran mount -a on megatron.
- Killed nds2 using /etc/init.d/nds2 stop - didn't work, so we manually kill -9'ed it.
- Restarted nds2 server using /etc/init.d/nds2 start.
- Waited for ~10mins before everything started working again. Now usual nds2 data getting methods work.
I have yet to check about getting trend data via nds2, can't find the syntax. EDIT: As Jamie mentioned in his elog, the second trend data is being written but is inaccessible over nds (either with dataviewer, which uses fb as the ndsserver, or with python NDS, which uses megatron as the ndsserver). So as of now, we cannot read any kind of trends directly, although the full data can be downloaded from the past either with dataviewer or python nds2. On the control room workstations, this can also be done with cds.getdata. |
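For reference, a minimal python nds2 fetch of the kind that works again (the channel name and GPS times are placeholders, and the standard NDS2 port 31200 is assumed).

import nds2

conn = nds2.connection('megatron', 31200)        # megatron is the NDS2 server
bufs = conn.fetch(1185500000, 1185500010,        # placeholder GPS start/stop times
                  ['C1:LSC-TRX_OUT_DQ'])         # placeholder channel name
data = bufs[0].data                              # numpy array of samples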
13162 | Thu Aug 3 10:51:32 2017 | rana | Update | CDS | NDS2 server restarted, /frames mounted on megatron
same issue on NODUS; I edited the /etc/fstab and tried mount -a, but it gives this error:
controls@nodus|~ 1> sudo mount -a
mount.nfs: access denied by server while mounting fb1:/frames
needs more debugging - this is the machine that allows us to have backed up frames in LDAS. Permissions issues from fb1 ? |
13163 | Thu Aug 3 11:11:29 2017 | gautam | Update | CDS | NDS2 server restarted, /frames mounted on nodus
I added nodus' eth0 IP (192.168.113.200) to the list of allowed nfs clients in /etc/exportfs on fb1, and then ran sudo mount -a on nodus. Now /frames is mounted.
Quote: |
needs more debugging - this is the machine that allows us to have backed up frames in LDAS. Permissions issues from fb1 ?
13164 | Thu Aug 3 19:46:27 2017 | Jamie | Update | CDS | new daqd restart procedure
This is the daqd restart procedure:
$ ssh fb1 sudo systemctl restart daqd_*
That will restart all of the daqd services (daqd_dc, daqd_fw, daqd_rcv).
The front end mx_stream processes should all auto-restart after the daqd_dc comes back up. If they don't (models show "0x2bad" on DC0_*_STATUS) then you can execute the following to restart the mx_stream process on the front end:
$ ssh c1<host> sudo systemctl restart mx_stream
13165 | Thu Aug 3 20:15:11 2017 | Jamie | Update | CDS | dataviewer can not raise test points
For some reason dataviewer is not able to raise test points with the new daqd setup, even though dtt can. If you raise a test point with dtt then dataviewer can show the data fine.
It's unclear to me why this would be the case. It might be that all the versions of dataviewer on the workstations are too old?? I'll look into it tomorrow to see if I can figure out what's going on. |
13166 | Fri Aug 4 09:07:28 2017 | rana | Update | CDS | CDS system essentially NOT fully recovered
Tried getting trends with dataviewer just now since Jamie re-enabled the minute_raw frame writing yesterday. Unable to get trends still:
Connecting to NDS Server fb1 (TCP port 8088)
Connecting.... done
Server error 18: trend data is not available
datasrv: DataWriteTrend failed in daq_send().
unknown error returned from daq_send()
T0=17-08-04-08-02-22; Length=28800 (s)
No data output. |
13167 | Fri Aug 4 18:25:15 2017 | gautam | Update | General | Bilinear noise coupling
[Nikhil, gautam]
Today we repeated the test that EricQ detailed here. We have downloaded ~10min of data (between GPS times 1185925523 - 1185926117), and Nikhil will analyze it. |
Attachment 1: bilinearTest.pdf
13168 | Sat Aug 5 11:04:07 2017 | gautam | Update | SUS | MC1 glitches return
See Attachment #1, which is full (2048Hz) data for a 3 minute stretch around when I saw the MC1 glitch. At the time of the glitch, WFS loops were disabled, so the only actuation on MC1 was via the local damping loops. The oscillations in the MC2 channels are the autolocker turning on the MC2 length tickle.
Nikhil and I tried the usual techniques of squishing cables at the satellite box, and also at 1X4/1X5, but the glitching persists. I will try and localize the problem this weekend. This thread details investigations the last time something like this happened. In the past, I was able to fix this kind of glitching by replacing the (high speed) current buffer IC LM6321M. These are present in a two places: Satellite box (for the shadow sensor LED current drive), and on the coil driver boards. I think we can rule out the slow machine ADCs that supply the static PIT and YAW bias voltages to the optic, as that path is low-passed with a 4th order filter @1Hz, while the glitches that show up in the OSEM sensor channels do not appear to be low-passed, as seen in the zoomed in view of the glitch in Attachment #2 (but there is an LM6321 in this path as well). |
Attachment 1: MC1_glitch_Aug42017.png
Attachment 2: MC1_glitch_zoomed.png
13169 | Mon Aug 7 16:00:41 2017 | rana | Update | General | Bilinear noise coupling
These are not the angular test parameters that we're looking for:
recall that what we want is the low frequency beam spot variations and the feedback to be limited to a small high frequency band.
e.g. only inject noise at 40-50 Hz, not loud enough to find at 2x the injected frequency.
It should NOT be the case that the high frequency injected noise be dominating the RMS.
The coupling should be ~1e-3; some combination of beam spot mis-centering and beam spot motion. |
13170 | Mon Aug 7 22:50:57 2017 | Koji | Update | General | New wifi router for the GC network installed
I have replaced the old 11n wifi router (CISCO / Linksys) for the GC network with a new one with 11ac technology.
The new one is a 3-band wifi router. Thus it has one 2.4GHz (11n) SSID and two 5GHz (11ac) SSIDs. All these have been set to be hidden. Just come to the 40m and find the necessary info for the connection.
Note that the user id / password for the admin tool have been changed from the default values. |
13171 | Tue Aug 8 17:04:26 2017 | Steve | Update | VAC | unintended pump down
IFO pressure 2 Torr, PSL shutter closed. I'm pumping down with 2 roughing pumps with the ion pump gate valves open and the annuluses at atm.
The vacuum envelope was vented to 17 Torr while I was replacing the UPS battery stack. More about this later.....
Do not plan on using the interferometer tonight. I will complete the pumpdown tomorrow morning.
Attachment 1: pumping_down.png
13172 | Tue Aug 8 17:44:11 2017 | Steve | Update | VAC | unintended pump down
Pumpdown stopped overnight at ~ 1 Torr
The roughing line is disconnected. The valve condition indicator "moving" means that it is closed and its cable is disconnected so it cannot move.
The RGA is off and VM1 is stuck.
Quote: |
IFO pressure 2 Torr, PSL shutter closed. I'm pumping down with 2 roughing pumps with ion pump gate valves open and annulosses at atm.
The vacuum envelope was vented to 17 Torr while I was replacing the USP battery stack. More about this later.....
Do not plan on using the interferrometer tonight. I will complete the pumpdown tomorrow morning.
Attachment 1: stopped_pumping.png
13173 | Tue Aug 8 20:48:06 2017 | gautam | Update | SUS | ITMX stuck
Somewhere between CDS model restarts and the IFO venting, ITMX got stuck.
I shook it loose using the usual bias slider technique. It appears to be free now, I was able to lock the green beam on a TEM00 mode without touching the green input pointing. The ITMX Oplev spot has also returned to within its MEDM display bounds. |
13174 | Wed Aug 9 11:33:49 2017 | gautam | Update | Electronics | MC2 de-whitening
Summary:
The analog de-whitening filters for MC2 are different from those on the other optics (i.e. ITMs and ETMs). They have one complex pole pair @7Hz, Q~sqrt(2), one complex zero pair @50Hz, Q~sqrt(2), one real pole at 2.5kHz, and one real zero @250Hz (with a DC gain of 10dB).
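A quick way to sanity-check this shape against the LISO fits is to rebuild the same pole/zero set, e.g. in scipy (a sketch; normalizing the overall gain to the quoted 10dB at DC is an assumption about how the gain is specified).

import numpy as np
from scipy import signal

def complex_pair(f0, Q):
    # s-plane roots of a complex pair at f0 [Hz] with quality factor Q
    w0 = 2 * np.pi * f0
    re = -w0 / (2 * Q)
    im = w0 * np.sqrt(1 - 1 / (4 * Q**2))
    return [re + 1j * im, re - 1j * im]

zeros = complex_pair(50, np.sqrt(2)) + [-2 * np.pi * 250]     # zeros: 50 Hz pair + 250 Hz real
poles = complex_pair(7, np.sqrt(2)) + [-2 * np.pi * 2500]     # poles: 7 Hz pair + 2.5 kHz real

# choose k so that the DC gain |H(0)| is 10 dB
k = 10**(10 / 20) * np.abs(np.prod(poles) / np.prod(zeros))

f = np.logspace(0, 4, 500)
_, h = signal.freqs_zpk(zeros, poles, k, worN=2 * np.pi * f)
mag_dB = 20 * np.log10(np.abs(h))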
Details:
I took the opportunity last night to measure all 4 de-whitening channel TFs. Measurements and overlaid LISO fits are seen in Attachment #1.
The motivation behind this investigation was that last week, I was unable to lock the IMC to one of the arms. In the past, this has been done simply by routing the control signal of the appropriate arm filter bank (e.g. C1:LSC-YARM_OUT) to MC2 instead of ETMY via the LSC output matrix (if the matrix element to ETMY is 1, the matrix element to MC2 is -1).
Looking at the coil output filter banks on the MC2 suspension MEDM screen (see Attachment #2), the positions of the filters in the filter banks are different from those on the other optics. In general, the BIO outputs of the DAC are wired such that disengaging FM9 on the MEDM screen engages the analog de-whitening path. FM10 then has the inverse of the de-whitening filter, such that the overall TF from DAC to optic is unity. But on MC2, these filters occupy FM7 and FM8, and FM9 was originally a 28Hz Elliptic Low-pass filter.
So presumably, I was unable to lock the IMC to an arm because for either configuration of FM9 (ON or OFF), the signal to the optic was being aggressively low-passed. To test this hypothesis, I simply copied the 28Hz elliptic to FM6, put a gain of 1 on FM9, left it engaged (so that the analog path TF is just flat with gain x3), and tried locking the IMC to the arm again - I was successful. See Attachment #3 for comparison of the control signal spectra of the X-arm control signal, with the IMC locked to the Y-arm cavity.
In this test, I also confirmed that toggling FM9 in the coil output filter banks actually switches the analog path on the de-whitening boards.
Since I now have the measurements for individual channels, I am going to re-configure the filter arrangement on MC2 to mirror that on the other optics.
Unrelated to this work: the de-whitening boards used for MC1 and MC3 are D000316, as opposed to D000183 used for all other SOS optics. From the D000316 schematic, it looks like the signals from the AI board are routed to this board via the backplane. I will try squishing this backplane connector in the hope it helps with the glitching MC1 suspension.
GV Aug 13 11:45pm - I've made a DCC page for the MC2 dewhitening board. For now, it has the data from this measurement, but if/when we modify the filter shape, we can keep track of it on this page (for MC2 - for the other suspensions, there are other pages). |
Attachment 1: MC2deWhites.pdf
Attachment 2: MC2Coils.png
Attachment 3: MC2stab.pdf
13175 | Wed Aug 9 11:59:51 2017 | Steve | Update | VAC | unintended pump down at vacuum normal
pd80b has reached Vac Normal. IFO pressure 0.5 mTorr
We need our vacuum channels back in dataviewer.
Quote: |
Pumpdown stopped for over night at ~ 1 Torr
The roughing line disconnected. Valves condition indicator "moving " means that it is closed and it's cable disconnected so it can not move.
The RGA is off and VM1 is stuck.
Quote: |
IFO pressure 2 Torr, PSL shutter closed. I'm pumping down with 2 roughing pumps with ion pump gate valves open and annulosses at atm.
The vacuum envelope was vented to 17 Torr while I was replacing the USP battery stack. More about this later.....
Do not plan on using the interferrometer tonight. I will complete the pumpdown tomorrow morning.
Attachment 1: pdc.png
Attachment 2: pd80b@vacnormal.png
13176 | Wed Aug 9 12:05:57 2017 | rana | Update | Electronics | data archiving
This kind of data fitting and analysis is really useful. We should figure out a way to archive it. Perhaps the data files and fitting stuff can be put into GIT in some smart way? The fit results can be added to the 40m MC electronics DCC tree. Then the links can be added to this elog. |
13177 | Wed Aug 9 12:35:47 2017 | gautam | Update | ALS | Fiber ALS
Last week, we were talking about reviving the Fiber ALS box. Right now, it's not in great shape. Some changes to be made:
- Supply power to the PDs (Menlo FPD310) via a power regulator board. The datasheet says the current consumption per PD is 250 mA. So we need 500mA. We have the D1000217 power regulator board available in the lab. It uses the LM2941 and LM2991 power regulator ICs, both of which are rated for 1A output current, so this seems suitable for our purposes. Thoughts?
- Install power decoupling capacitors on the PDs.
- Clean up the fiber arrangement inside the box.
- Install better switches, plus LED indicators.
- Cover the box.
- Install it in a better way on the PSL table. Thoughts? e.g. can we mount the unit in some electronics rack and route the fibers to the rack? Perhaps the PSL IR and one of the arm fibers are long enough, but the other arm might be tricky.
Previous elog thread about work done on this box: elog11650 |
Attachment 1: IMG_3942.JPG
13178 | Wed Aug 9 15:15:47 2017 | gautam | Update | SUS | MC1 glitches return
Happened again just now, although the characteristics of the glitch are very different from the previous post; it's less abrupt. The only actuation on MC1 at this point was local damping. |
Attachment 1: MC1_glitch.png
13179 | Wed Aug 9 16:34:46 2017 | rana | Update | VAC | Vacuum Document recovered
Steve and I found the previous draft of the 40m Vacuum Document. Someone in 2015 had browsed into the Docs history and then saved the old 2013 version as the current one.
We restored the version from 2014 which has all of Steve's edits. I have put that version (which is now the working copy) into the DCC: https://dcc.ligo.org/E1500239.
The latest version is in our Google Docs place as usual. Steve is going to have a draft ready for us to read by Tuesday, so please take a look then and we can discuss what needs doing at next Wednesday's 40m meeting. |
13180 | Wed Aug 9 19:21:18 2017 | gautam | Update | ALS | ALS recovery
Summary:
Between frequent MC1 excursions, I worked on ALS recovery today. Attachment #1 shows the out-of-loop ALS noise as of this evening (taken with the arms locked to IR) - I have yet to check the loop shapes of the ALS servos; it looks like there is some tuning to be done.
On the PSL table:
- First, I locked the arms to IR, ran the dither alignment servos to maximize transmission.
- I used the IR beat PDs to make sure a beat existed, at approximately.
- Then I used a scope to monitor the green beat, and tweaked steering mirror alignment until the beat amplitude was maximized. I was able to improve the X arm beat amplitude, which Koji and Naomi had tweaked last week, by ~factor of 2, and Y arm by ~factor of 10.
- I used the DC outputs of the BBPDs to center the beam onto the PD.
- Currently, the beat notes have amplitudes of ~-40dBm on the scopes in the control room (there are various couplers/amplifiers in the path so I am not sure what beatnote amplitude this translates to at the BBPD output). I have yet to do a thorough power budget, but I have in my mind that they used to be ~-30dBm. To be investigated.
- Removed the fiber beat PD 1U chassis unit from the PSL table for further work. The fibers have been capped and remain on the PSL table. Cleaned the NW corner of the PSL table up a bit.
To do:
- Optimization of the input pointing of the green beam for X (with PZTs) and Y (manual) arms.
- ALS PDH servo loop measurement. Attachment #1 suggests some loop gain adjustment is required for both arms (although the hump centered around ~70Hz seems to be coming from the IR lock).
- Power budgeting on the PSL table to compare to previous such efforts.
Note: Some of the ALS scripts are suffering from the recent inability of cdsutils to pull up testpoints (e.g. the script that is used to set the UGFs of the phase tracker servo). The workaround is to use DTT to open the test points first (just grab 0.1s time series for all channels of interest). Then the cdsutils scripts can read the required channels (but you have to keep the DTT open). |
Attachment 1: ALS_oolSpec.pdf
13181 | Thu Aug 10 09:10:55 2017 | steve | Update | General | dataviewer is recovering
It can now look back at 7 days of trends. There are still no vacuum channels. I can bring back the channels through the restore directory, but there is no data. |
Attachment 1: 7dm.png
13182 | Thu Aug 10 09:31:57 2017 | Steve | Update | SUS | ITMX sensor voltage
There must be some bad connection
Quote: |
Somewhere between CDS model restarts and the IFO venting, ITMX got stuck.
I shook it loose using the usual bias slider technique. It appears to be free now, I was able to lock the green beam on a TEM00 mode without touching the green input pointing. The ITMX Oplev spot has also returned to within its MEDM display bounds.
Attachment 1: 9daysITMX.png
Attachment 2: vacGlitchITMX.png
13183 | Thu Aug 10 14:13:23 2017 | Kira | Summary | PEM | temperature sensor
The goal is to build a temperature sensor accurate to 1 mK. The schematic is shown below. This does not take into account the DC gain that occurs.
Parts that would be used for this: LM317 regulator, AD592 temperature transducer, OP amp (low input noise and high impedance), 100K (or maybe 10k) resistor. This is what is currently proposed, but the exact parts we use could be changed to better fit the sensor. The resistor and the OP amp will be decided depending on the output of the AD592.
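For a feel for the numbers, a back-of-envelope sketch using the AD592's nominal 1 uA/K output and the two candidate resistor values:

# expected transimpedance output for the AD592 (nominal 1 uA/K) into a readout resistor
i_coeff = 1e-6            # A/K
T_room = 298.15           # K, nominal room temperature

for R in (10e3, 100e3):
    v_dc = i_coeff * T_room * R        # DC output at room temperature
    dv_per_mK = i_coeff * 1e-3 * R     # output change for a 1 mK temperature step
    print(f"R = {R/1e3:.0f} kOhm: V_DC ~ {v_dc:.1f} V, sensitivity ~ {dv_per_mK*1e6:.0f} uV/mK")

With 100k the room-temperature term alone is already ~30 V, which illustrates why the resistor choice and the DC-cancellation circuit mentioned below matter.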
Once this is built, I would like to create a few copies of it and put them into an insulated container and measure the output from each one. This would allow us to calculate the temperature noise of the circuit, as we can take out the variations due to temperature changes inside the container by comparing the outputs.
I can also model the noise in the circuit to see how much noise there is before building it. There are three terms to the noise that we have, and we need to decide which one dominates at low frequencies.
Our final goal is to create an additional circuit that could cancel out the DC gain. I have attached an additional schematic proposed by Rana that would help with this issue. I will leave this second half for when the first part works. |
Attachment 1: IMG_20170810_121637~2.jpg
Attachment 2: IMG_20170810_134422~2.jpg
13184 | Thu Aug 10 14:14:17 2017 | Kira | Update | PEM | previously built temp sensor
I decided to see what was inside the sensor that had been previously made. According to elog 1102, the temperature sensor is LM34, the specs of which can be found here:
http://www.ti.com/lit/ds/symlink/lm34.pdf
The wiring of this sensor confused me, as it appears that the +Vs end (white) connects to the input, but both the ground (left) and the Vout (middle) pins are connected to the box itself. I don't see how the signal can be read. |
Attachment 1: IMG_20170810_112315.jpg
13185 | Thu Aug 10 14:25:52 2017 | gautam | Update | CDS | Slow EPICS channels -> Frames re-enabled
I went into /opt/rtcds/caltech/c1/target/daqd, opened the master file, and uncommented the line with C0EDCU.ini (this is the file in which all the slow machine channels are defined). So now I am able to access, for example, the c1vac1 channels.
The location of the master file is no longer in /opt/rtcds/caltech/c1/target/fb, but is in the above mentioned directory instead. This is part of the new daqd paradigm in which separate processes are handling the data transfer between FEs and FB, and the actual frame-writing. Jamie will explain this more when he summarizes the CDS revamp.
It looks like trend data is also available for these newly enabled channels, but thus far, I've only checked second trends. I will update with a more exhaustive check later in the evening.
So, the two major pending problems (that I can think of) are:
- Inability to unload models cleanly
- Inability of dataviewer (and cdsutils) to open testpoints.
Apart from this, dataviewer frequently hangs on Donatella at startup. I used ipcs -a | grep 0x | awk '{printf( "-Q %s ", $1 )}' | xargs ipcrm to remove all the extra messages in the dataviewer queue.
Restarting the daqd processes on fb1 using Jamie's instructions from earlier in this thread works - but the mx_stream processes do not seem to come back automatically on c1lsc, c1sus and c1ioo (reasons unknown). I've made a copy of the mxstreamrestart.sh script with the new mxstream restart commands, called mxstreamrestart_debian.sh, which lives in /opt/rtcds/caltech/c1/scripts/cds. I've also modified the CDS overview MEDM screen such that the "mxstream restart" calls this modified script. For now, this requires you to enter the controls password for each machine. I don't know what is a secure way to do it otherwise, but I recall not having to do this in the past with the old mxstreamrestart.sh script. |
13186 | Thu Aug 10 15:45:34 2017 | Steve | Update | VAC | accidental vent to 17 Torr
Finally we got the cold cathode gauge working. IFO pressure 7e-6 Torr-it, vacuum normal valve configuration with all 4 ion pump gate valves closed at ~ 9e-6 Torr. The cryo pump was also pumped yesterday to remove the accumulated outgassing build up.
The accidental vent was my mistake, made when I was replacing the battery pack of the UPS. The installed pack measured 51 V without any load. The "replace battery" warning light did not go out after the batteries were replaced.
I then mistakenly and repeatedly pushed the "test" button to reset this, but I did not wait long enough for the batteries to get fully charged. The test put the full load on the new batteries and their charging condition got worse. I made the mistake when trying to switch the load from the battery to online and pushed "O", so the power was cut and the computer rebooted to the all off condition. On top of this, I disconnected the wrong V1 cable to close V1. As the computer rebooted, its interlock closed V1 at 17 Torr.
Never hit O on the Vacuum UPS !
Note: the " all off " configuration should be all valves closed ! This should be fixed now.
In case of emergency you can close V1 by disconnecting its actuating power as shown in Attachment 3, provided you have pneumatic pressure of 60 PSI |
Attachment 1: Mag_UPS.jpg
Attachment 2: UPSbatteryPack.jpg
Attachment 3: closing_V1.jpg
13187 | Thu Aug 10 21:01:43 2017 | gautam | Update | SUS | MC1 glitches debugging
I have squished cables in all the places I can think of - but MC1 has been glitching regularly today. Before starting to pull electronics out, I am going to attempt a more systematic debugging in the hope I can localize the cause.
To this end, I've disabled the MC autolocker, and have shutdown the MC1 watchdog. I plan to leave it in this state overnight. From this, I hope to look at the free-swinging optic spectra to see that this isn't a symptom of something funky with the suspension itself.
Some possible scenarios (assuming the free swinging spectra look alright and the various resonances are where we expect them to be):
- With the watchdog shutdown, the PIT/YAW bias voltages still goto the coil (low-passed by 4 poles @1Hz). So if the glitching happens in this path, we should see it in both the shadow sensors and the DC spot positions on the WFS.
- If the glitching happens in the shadow sensor readout electronics/cabling, we should see it in the shadow sensor channels, but NOT in the DC spot positions on the WFS (as the watchdog is shutdown, so there should be no actuation to the coils based on OSEM signals).
- If we don't see any glitches in WFS spot positions or shadow sensors, then it is indicative of the problem being in the coil driver board / dewhitening board/anti-aliasing board.
- I am discounting the problem being in the Satellite box, as we have switched around the MC1 satellite box multiple times - the glitches remain on MC1 and don't follow a Satellite Box. Of course there is the possibility that the cabling from 1X5/1X6 to the Satellite box is bad.
MC1 has been in a glitchy mood today, with large (MC-REFL spot shifts by ~1 beam diameter on the CCD monitor) glitches happening ~every 2-3 hours. Hopefully it hasn't gone into an extended quiet period. For reference, I've attached the screen-grab of the MC-QUAD and MC-REFL as they are now.
GV 9.20PM: Just to make sure of good SNR in measuring the pendulum eigenfreqs, I ran /opt/rtcds/caltech/c1/scripts/SUS/freeswing MC1 in a terminal . The result looked rather violent on the camera but its already settling down. The terminal output:
The following optics were kicked:
MC1
Thu Aug 10 21:21:24 PDT 2017
1186460502
Quote: |
Happened again just now, although the characteristics of the glitch are very different from the previous post, its less abrupt. Only actuation on MC1 at this point was local damping.
Attachment 1: MC_QUAD_10AUG2017.jpg
Attachment 2: MCR_10AUG2017.jpg
13188 | Thu Aug 10 21:22:06 2017 | rana | Summary | PEM | temperature sensor
- Should we use the AD590 or the AD592? They're sort of the same, but have slightly different packages and specs.
- Given the sensitivity of the AD590 to power supply drift, I would recommend using a precision voltage reference like the AD581 or AD587. Take a look at the datasheets and order a few different varieties so we can see what works best for us. I believe that the voltage regulators have too much drift to use for precision temperature control.
- The AD590 datasheet has a few example circuits showing how we can subtract off the offset which comes the 1 uA/K coefficient of the AD590 (i.e. we would have 295 uA at room temperature, trying to stabilize to +/- 0.1 K)
- For the first prototypes its fine to use some solderable protoboard assembly, but we will eventually have to figure out how to package this thing and interface it with our Acromag slow controls system.
Also, I've attached some temperature noise spectra from the LISA group at the AEI in Hannover. It will be interesting to see if we get the same results. |
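A back-of-envelope sketch of the offset subtraction mentioned above, assuming a 10 V reference (AD581/AD587 are both nominally 10 V) feeding a resistor into the op-amp summing node; the actual topology should follow the AD590 datasheet example circuits.

# cancel the ~295 uA room-temperature current from the AD590/AD592 (1 uA/K)
i_coeff = 1e-6           # A/K
T_zero = 295.0           # K, temperature at which the subtracted output should read zero
v_ref = 10.0             # V, precision reference output (AD581/AD587), an assumption

i_offset = i_coeff * T_zero          # current to cancel: 295 uA
r_offset = v_ref / i_offset          # resistor from the reference into the summing node
print(f"R_offset ~ {r_offset/1e3:.1f} kOhm")   # ~33.9 kOhm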
Attachment 1: tempsensnoise2.pdf
Attachment 2: tempnoise_final.pdf