I have attached the graph for the seismometer temperature fluctuations over 3 days. As we can see, there is a noticeable fluctuation in daily temperature as well as a difference between days in the maximum and minimum temperatures. I will repeat this test but take the can off to see if there's any difference between having the can on or off.
I guess it's fine for now while we are still finalizing the setup at EX, but we should eventually line up the seismometer axes with the IFO axes. Is there a photo of the seismometer's orientation from before the heater can tests? If not, it would probably be good to make some markings on the granite slab / seismometer to allow easy alignment of these axes...
In the IMC actuation chain, it looks like the MC1/MC3 de-whitening boards, and also all three MC optics' coil driver boards, are showing higher noise than expected from LISO modeling. One possible candidate is thick film resistors on the coil driver boards. The plan is to debug these further by pulling the board out of the Eurocrate and investigating on the electronics bench.
Why bother? Mainly because I want to see how good the IR ALS noise is, and currently, the PSL frequency noise is causing the measurement to be worse than references taken from previous known good times.
Some time ago, Rana suggested to me that I should do this measurement more systematically.
I've now restored all the wiring at 1X6 to their state before this work.
Article from EE Times, describing why metal foil (NOT metal film) resistors are really better than wirewound when it comes to everything except high power dissipation.
Need to do some digging to see if we can find ~1k metal foil resistors which can handle ~1W of heat.
Steve: here it is
Aim: To find telescopic lens solution to image test mass onto the sensor of GigE camera.
I wrote a Python script to find an appropriate combination of lenses to focus the optic onto the camera, keeping in mind practical constraints: the distance of the GigE camera from the optic is ~1 m, and the lens separations need to be compatible with the Thorlabs lens tubes available. We have to image both the entire optic (3" diameter) and the beam spot (1") using this combination of lenses. The image size that efficiently utilizes the entire sensor array is 1/4". Therefore the magnification required for imaging the entire optic is 1/12, and that for the beam spot is 1/4.
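For reference, here is a minimal sketch of the kind of search the script performs, applying the thin-lens equation twice; the 2.5 cm lens separation, scan range, and tolerance here are illustrative assumptions, not the actual script parameters.

    import numpy as np

    def two_lens_image(so1, d12, f1, f2):
        """Thin-lens imaging through two lenses.
        so1: object distance to lens 1, d12: lens separation, f1/f2: focal lengths (m).
        Returns (total magnification, image distance after lens 2)."""
        si1 = 1.0 / (1.0 / f1 - 1.0 / so1)   # image formed by lens 1
        so2 = d12 - si1                      # that image is the object for lens 2
        si2 = 1.0 / (1.0 / f2 - 1.0 / so2)
        m = (si1 / so1) * (si2 / so2)        # product of the two magnifications
        return m, si2

    # look for object distances giving |m| = 1/12 (full optic) with two 150 mm lenses
    for so1 in np.arange(0.4, 1.2, 0.005):
        m, si2 = two_lens_image(so1, d12=0.025, f1=0.15, f2=0.15)
        if si2 > 0 and abs(abs(m) - 1.0 / 12) < 2e-3:
            print("so1 = %.3f m, si2 = %.3f m, m = %.4f" % (so1, si2, m))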
I checked the Thorlabs website for the available focal lengths of 2" lenses (rather than 1" lenses, so as to collect sufficient power). I tried several combinations of lenses, and the ones I found close enough to what is required are listed below along with their colorbar plots.
a) 150mm-150mm (Attachment 2 & 3)
With this combination, the object distance varies from 50 cm for the 1" beam spot to 105 cm for the 3" spot. This poses a difficulty: there is a difference of ~48 cm in the optic-to-camera distance between the two cases (imaging the entire optic vs. the beam spot).
b) 125mm-150mm (Attachment 4 & 5)
With this combination, the object distance varies from 45 cm for the 1" beam spot to 95 cm for the 3" spot. There is a difference of ~43 cm in the optic-to-camera distance between the two cases.
c) 125mm-125mm (Attachment 6 & 7)
The object distance varies from 45 cm for the 1" beam spot to 90 cm for the 3" spot. There is a difference of ~39 cm in the optic-to-camera distance between the two cases.
A sensitivity check was also done for these combinations of lenses. An error of 1 cm in the object distance and 5 mm in the lens separation gives a magnification error of <2%.
The schematic of the telescopic lens system has been given in Attachment 8.
Below is an analysis of measurements I took of the AUX-PSL PLL using an SR560 as the servo controller (1 Hz single-pole low-pass, gain varied from 100 to 500). The resulting transfer function is in good agreement with that found by Gautam and Koji (#13848). The optimal gain is found to be 200, which places the UGF at 15 kHz with a 45 deg phase margin.
For now I have reverted the PLL to use the SR560 instead of the LB1005. The issue with the LB1005 is that the TTL input for remote control only "freezes" the integrator, but does not actually reset it. This is fine if the lock is disabled in a controlled way (i.e., via the medm interface). However, if the lock is lost uncontrollably, the integrator is stuck in a garbage state that prevents re-locking. The only way to reset this integrator is to manually flip a switch on the controller box (no remote reset). Rana suggests we might be able to find a workaround using a remote-controlled relay before the controller.
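As a sanity check on those numbers, here is a minimal open-loop-gain sketch. The controller shape includes the stray ~5 kHz zero of the SR560's 1 Hz LPF setting discussed later in this log, the PZT coefficient is the measured 4.89 MHz/V, and the mixer gain is an assumed value; unmodeled delays and PD poles will eat further into the phase, which is presumably why the measured 45 deg margin is lower than this idealized model predicts.

    import numpy as np

    f = np.logspace(1, 5, 4001)                 # Hz
    G0, f_p, f_z = 200.0, 1.0, 5e3              # SR560 gain, 1 Hz pole, stray ~5 kHz zero
    K_pzt = 4.89e6                              # NPRO PZT actuation [Hz/V], measured
    K_phi = 0.5                                 # mixer/PD gain [V/rad] -- assumed

    H = G0 * (1 + 1j * f / f_z) / (1 + 1j * f / f_p)   # controller TF
    olg = K_phi * H * K_pzt / (1j * f)                  # 1/(i f): Hz -> rad conversion

    i_ugf = np.argmin(np.abs(np.abs(olg) - 1.0))
    print("UGF ~ %.1f kHz, phase margin ~ %.0f deg"
          % (f[i_ugf] / 1e3, 180 + np.degrees(np.angle(olg[i_ugf]))))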
Target: Phase locking can be achieved by scanning the oscillator frequency. This frequency is currently controlled using the knob on the AM/FM signal generator (2023B), but we need to control it remotely, specifying a start frequency, an end frequency, and a step size.
The oscillator is connected to the computer through a GPIB-to-Ethernet converter. The IP address of the converter I used is 192.168.113.109 and its GPIB address is 10.
I was able to change the oscillator frequency by changing the input frequency with the code I wrote (in order to check the code, I changed the oscillator frequency multiple times; I hope this didn't cause anyone trouble). I am now improving the code by adding features like numpy support and argument parsing, which I should be able to complete by next week. I am also considering developing the code to have a slider for controlling the oscillator frequency.
For the record: the maximum frequency I changed it to is 100 MHz.
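Here is a minimal sketch of the remote sweep, assuming the converter speaks VXI-11 (python-vxi11) and using the IP/GPIB addresses above; the 2023B command string is an assumption to be checked against its programming manual.

    import time
    import vxi11

    instr = vxi11.Instrument("192.168.113.109", "gpib0,10")

    def sweep(f_start, f_stop, f_step, dwell=0.5):
        """Step the 2023B carrier from f_start to f_stop (Hz) in f_step increments."""
        f = f_start
        while f <= f_stop:
            instr.write("CFRQ:VALUE %d HZ" % int(f))   # assumed command syntax
            time.sleep(dwell)
            f += f_step

    sweep(20e6, 30e6, 1e6)   # example: 20 MHz to 30 MHz in 1 MHz steps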
Tip-Tilt Suspension Design:
Designed a new ECD plate and changed dimensions of the side arms after discussing with Koji. After getting feedback on the changes, I will finish the assembly and send it to him to get approved for manufacturing.
Here is the result of my test. I think I'll leave the can on over the weekend because there's a long period of time where the seismometer heated up by 0.8 degrees so I can't fully see the fluctuations over a full 24 hour period.
I setup a basic MEDM screen for remote control of the PLL.
The Slow control voltage slider allows the frequency of the laser to be moved around via the front panel slow control BNC.
The TTL signal slider provides 0/5V to allow triggering of the servo. Eventually this functionality will be transferred to the buttons (which do not work for now).
The screen can be accessed from the PSL dropdown menu in sitemap. We can make this better eventually, but this should suffice for initial setup.
Aim: To obtain a colored version, with good contrast, of the grayscale image of light scattered by dust particles on the surface of the test mass, taken using the GigE camera. The original and colored images are attached here.
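For reference, a false-color map can be applied to the grayscale frame in a few lines; the filename and choice of colormap here are illustrative assumptions.

    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg

    img = mpimg.imread("gige_scatter.png")   # grayscale GigE frame -- placeholder name
    plt.imshow(img, cmap="inferno")          # perceptually uniform map, good contrast
    plt.colorbar(label="pixel value")
    plt.savefig("gige_scatter_colored.png", dpi=200)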
The ITMX oplev is still clipping
The ITMX oplev beam is clipping. It will be corrected with the arm locked.
I'm working near 1X5 and there is an SR785 adjacent to the electronics rack with some cabling running along the floor. I plan to continue in the evening so please leave the setup as is.
During the course of this work, I noticed the +15V Sorensen in 1X6 has 6.8 A of current draw, while Steve's February 2018 label says the current draw is 8.6 A. Is this just a typo?
Steve: It was most likely my mistake. Tag is corrected to 6.8A
I'm still in the process of electronics characterization, so the SR785 is still hooked up. The MC3 coil driver signal is broken out to measure the output voltage going to the coil (via a gain x100 SR560 preamp), but the MC is locked.
I've moved my setup to the actual seismometer. I attached the temperature sensor to the seismometer (attachment 1) with duct tape, though this is temporary. I will be monitoring the temperature fluctuations of the seismometer for a whole day, then take the can off and repeat the test. The can isn't clamped down, so the insulation isn't perfect, and I'd expect to see some noticeable fluctuations even with the can on. I've also labeled the long cable for the temperature sensor readout (attachments 2 and 3). There will also be an out-of-loop sensor added in later, but for this test, since I am not running the loop, it doesn't matter which sensor I monitor. Attachment 4 is a picture of the current setup.
Attached is supporting documentation for the AUX-PSL PLL electronics installed in the lower PSL shelf, as referenced in #13845.
Some initial loop measurements by Gautam and Koji (#13848) compare the performance of the LB1005 vs. an SR560 as the controller, and find the LB1005 to be advantageous (a higher UGF and phase margin). I have some additional measurements which I'll post separately.
Pickoffs of the AUX and PSL beams are routed onto a broadband-sensitive New Focus 1811 PD. The AUX laser temperature is tuned to place the optical beat note of the two fields near 50 MHz. The RF beat note is sensed by the AC-coupled PD channel, amplified, and mixed down with a 50 MHz RF source to obtain a DC error signal. The down-converted term is isolated via a 1.9 MHz low-pass filter in parallel with a 50 Ohm resistor and fed into a Newport LB1005 proportional-integral (PI) servo controller. Controller settings are documented in the schematic below. The resulting control signal is fed back into the fast PZT actuator input of the AUX laser.
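As a numeric illustration of the mixdown stage (not a model of the actual electronics): mixing the 50 MHz beat against the 50 MHz LO yields a sum term at 100 MHz and a DC term that depends on the relative phase, and the low-pass keeps only the latter as the error signal. All values below are for the demo only.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 1e9                                   # 1 GS/s sampling, for the demo only
    t = np.arange(0, 20e-6, 1 / fs)
    dphi = 0.1                                 # beat-LO phase offset [rad]
    beat = np.cos(2 * np.pi * 50e6 * t + dphi)
    lo = np.cos(2 * np.pi * 50e6 * t)

    mixed = beat * lo                          # 100 MHz term + DC term
    b, a = butter(2, 1.9e6 / (fs / 2))         # stand-in for the 1.9 MHz low-pass
    err = filtfilt(b, a, mixed)

    print(err[len(err) // 2], 0.5 * np.cos(dphi))   # both ~ (1/2) cos(dphi)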
The EPICS process on the c1ioo front end had died mysteriously. As a result, MC autolocker wasn't working, since the autolocker control variables are EPICS channels defined in the c1ioo model. I restarted the model, and now MCautolocker works.
The final set-up of the stack measurement, with 3 load cells and 4 leveling wedge mounts, is shown in Atm 1.
Sensor voltages BEFORE and AFTER this attempt.
Earlier today Kira and I reconnected the EX seismometer. I just took some spectra of all three seismometers, shown in the attachments, to compare with past data and to do a rough check of the calibration.
This elog has spectra from 2010 (GUR1 is now EY) and this elog has one for the BS at lower frequencies from 2017. Note that the EX seismometer now has strong peaks that are not at 60 Hz harmonics. Other than these peaks, the old spectra roughly match up with the ones taken today, so the calibration is still roughly the same. I couldn't find any old data for EX (GUR2) though, so I don't know for sure that these peaks weren't there before.
gautam 20180517 0930: In 2017, Gur2 (now EX) looked like this. Still peaky, but the peaks seem shifted in frequency. Steve also informed me that the Gur1 and Gur2 cables were swapped n times, so perhaps we shouldn't read too much into that.
As described in this elog, the seismometer signals are now wired directly to the ADC instead of going through an AA board or other circuit to remove any common mode noise. This elog describes one test of the common mode rejection of this setup. Gautam suggested comparing directly with a recent spectrum taken a few months before the new setup described in this elog.
Today I took a spectrum (attachment 1) of C1:PEM-MIC_2 (Ch17) and C1:PEM-MIC_3 (Ch18) with input to the ADC terminated with 50 Ohms. These are two of the channels plotted in the previous spectrum, though I don't know how that plot was normalized. It's clear that there are now strong 60 Hz harmonic peaks that were not there before, so this new setup does have worse common mode rejection.
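For completeness, this is roughly how each terminated-input spectrum can be computed offline; the data file, sampling rate, and FFT parameters here are placeholders.

    import numpy as np
    from scipy.signal import welch

    data = np.loadtxt("mic2_terminated.txt")      # C1:PEM-MIC_2 timeseries -- placeholder
    fs = 2048                                     # assumed DAQ rate [Hz]
    f, psd = welch(data, fs=fs, nperseg=8 * fs)   # 0.125 Hz resolution
    print("ASD at 60 Hz: %.3e cts/rtHz" % np.sqrt(psd[np.searchsorted(f, 60.0)]))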
As I suspected, when the SR560 is operated in 1 Hz, first order LPF mode, the (electronic) transfer function has a zero at ~5kHz (!!!).
This is what allowed the PLL to be locked with this setting with UGF of ~30kHz. On the evidence of Attachment #3, there is also some flattening of the electrical TF at low frequencies when the SR560 is driving the NPRO PZT. I'm pretty sure the flattening is not a data download error but since this issue needs further investigation anyway, I'm not reading too much into it. I fit the model with LISO but since we don't have low frequency (~1Hz) data, the fit isn't great, so I'm excluding it from the plots.
We also did some PLL loop characterization. We decided that the higher output range of the LB1005 controller (10 V-pk, vs. 10 V-pp for the SR560) means it is a better option for the PLL. The lock state can also be triggered remotely. It was locked with UGF ~ 60 kHz, PM ~ 45 deg.
We also measured the actuation coefficient of the NPRO laser PZT to be 4.89 +/- 0.02 MHz/V. Quoted error is (1-sigma) from the fit of the linear part of the measured transfer function to a single pole at DC with unknown gain. I used the "clean" part of the measurement that extends to lower frequencies for the fit, as can be seen from the residuals plot. Good to know that even though the LDs are dying, the PZT is still going strong :D.
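The fit itself amounts to a one-parameter least-squares problem, |H(f)| = K/f over the clean band; a sketch (the data file and band cut below are placeholders):

    import numpy as np
    from scipy.optimize import curve_fit

    f_meas, mag = np.loadtxt("pzt_tf.txt", unpack=True)   # measured TF -- placeholder
    band = f_meas < 1e3                                   # "clean" low-frequency part

    pole_at_dc = lambda f, K: K / f                       # single pole at DC, gain K
    popt, pcov = curve_fit(pole_at_dc, f_meas[band], mag[band])
    print("K = %.3g +/- %.2g" % (popt[0], np.sqrt(pcov[0, 0])))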
Remaining loop characterization (i.e. verification of correct scaling of in loop suppression with loop gain etc.) is left to Jon.
Some other remarks:
Since there has been various software/hardware activity going on (stack weighing, AUX laser PLL, computer timing errors, etc. etc.), I decided to do a check on the state of the IFO.
Since we think we already know the stack mass to ~25% (i.e. 5000 +/- 1000 lbs), we decided to restore the ETMX stack. Procedure followed was:
I will upload the photos to the PICASA page and post the link here later.
In this case, we only need a mass estimate of the end chamber contents with an accuracy of ~25%. If we think we have that already, we don't need to keep doing the jacks-strain gauge adventure.
[jon, steve, gautam]
Some points which Jon will elaborate upon (and put photos of) in his detailed elog about this setup:
We are now in a state where the PLL can be locked remotely from the control room by tweaking the AUX laser temperature. Tomorrow, Keerthana will work on getting Craig's/Johannes' digital frequency counter script working here. I think we can easily implement a PLL autolocker if we have some diagnostic that tells us whether the PLL is locked or not.
Steve informed me that there is an acoustic hum inside the PSL enclosure which wasn't there before. Indeed, it is at ~295Hz, and is from the Bench power supply used to power the ZFL500HLN amplifier. This will have to go...
I tried calibrating the other channels today, but they still fluctuate. Sometimes they do stabilize at +/- 10V, but then suddenly drop to 5 or 6 V before climbing back up to 10. Turning the legacy off made it go only up to 6.67V. This happens for all the channels, even after doing a factory reset and recalibrating. Not sure what's happening here.
Pooja and Keerthana received 40m-specific basic safety training.
The Marconi RF output was turned on, and thus the RF generator was restored to its nominal state on Friday the 11th.
Note that for Signal Recycling, which is what Kevin tells us we need to do, there is a DARM pole at ~150 Hz.
To be quantitative, since we are looking at smaller squeezing levels and considering the possibility of using 5 W input power, it is possible to see a small amount of squeezing below vacuum with no SRM.
Attachment 1 shows the amount of squeezing below vacuum obtainable as a function of homodyne angle with no SRM and 5 W incident on the back of PRM. The optimum homodyne angle at 210 Hz is 89.2 deg, which gives -0.38 dBvac of squeezing. Attachment 2 is the displacement noise at this optimal homodyne angle, and attachment 3 is the same noise budget shown as the ratio of the various noise sources to the unsqueezed vacuum.
The other parameters used for these calculations are:
So maybe it's worth considering going for less squeezing with no SRM if that makes it technically more feasible.
Follow-up email from Dennis, 5-13-2018. The last line agrees with the numbers in elog 13821.
Hi Steve & Gautam,
I've made some measurements of the spare (damaged) 40m bellows. Unfortunately neither of our coordinate measurement arms are currently set up (and I couldn't find an appropriate micrometer or caliper), so I could not (yet) directly measure the thickness. However, from the other dimensional measurements, a measurement of the axial stiffness (100 lb/in), and calculations (from the Standards of the Expansion Joint Manufacturers Association (EJMA), 6th ed., 1993), I infer a thickness of 0.010 in. This is close to the value of 0.012 in used by MDC Vacuum for bellows of about this size.
I calculate that the maximum allowable torsional rotation is 1.3 mrad. This corresponds to a differential height, across the 32 in span between support points, of 0.041 in (32 in x 1.3 mrad).
In addition, using the EJMA formulas, I find that one can laterally displace the bellows by 0.50 inch (assuming a simultaneous axial displacement of 0.25 inch, but no torsion), but no more than ~200 times. It might be good to stay well below this limit, say no more than ~0.25 inch (6 mm).
If interested I've uploaded my calculations as a file associated with the bellows drawing at D990577-A/v1.
BTW in some notes that I was given (by either Larry Jones or Alan Weinstein) related to the 40m Stacis units, I see a sketch from Steve dated 3/2000 faxed to TMC which indicates 1200 lbs on each of two Stacis units and 2400 on the third Stacis.
I think the root of the problem is that the /opt/rtapps/ and /cvs/cds/rtapps/ mounting locations point to the same directory on the NFS server. Gautam and I were cleaning up the /cvs/cds/caltech/target/ directory: we placed the previous contents of /cvs/cds/caltech/target/c1auxex/, including database files and startup instructions, in /cvs/cds/caltech/target/c1auxex_oldVME/, and then moved /cvs/cds/caltech/target/c1auxex2/, which has the channel database and initialization files for the Acromag DAQ, to /cvs/cds/caltech/target/c1auxex/.
This also required updating the systemd entries on c1auxex to point to the changed directory. While confirming that everything worked as before, we noticed that upon startup the EPICS IOC complains about not being able to find the caRepeater binary. This was not new and has not limited DAQ functionality in the past, but we wanted to fix it, as it seemed to be a simple PATH issue. While the paths are all correctly defined in the user login shell, systemd runs on a lower level and doesn't know about them. One thing we tried was to let systemd execute /cvs/cds/rtapps/epics-3.14.12.2_long/etc/epics-user-env.sh to initialize EPICS. It was strange that the content of that file was pointing to /opt/rtapps/epics-3.14.12.2_long/base, which is not mounted on the slow machines, so we changed /opt/ to /cvs/cds/, not realizing that the frontends read from the same directory (as Gautam said, /cvs/cds does not exist as a mount point on the frontends). It ended up not working this way, and apparently I forgot to change it back during clean-up. But worse, I never elogged it!
In the end, we managed to give systemd the correct path definitions by explicitly calling them out in /cvs/cds/caltech/target/c1auxex/ETMXenv, to which a reference was added in the systemd service file. The caRepeater warning no longer appears.
As suspected, this was indeed a path problem. Johannes will elog about it later, but in short, it is related to some path variables being changed in order to try and streamline the EPICS processes on the new c1auxex machine (Acromag Era). It is confusing that futzing around with the slow computing system messes with the realtime system as well - aren't these supposed to be decoupled? Once the paths were restored by Johannes, everything compiled and restarted fine. We even have a beam on the AS camera, which was what triggered this whole thing.
Anyways, Attachment #1 shows the current status. I am puzzled by the red TIMING indicators on the c1x04 and c1x02 processes, it is absent from any other processes. How can this be debugged further?
I suspect this is some kind of path problem - the EPICS_BASE bash variable is set to /cvs/cds/rtapps/epics-3.14.12.2_long/base on the FEs, while /cvs isn't even mounted on the FEs (nor do I think it should be). I think the correct path should be /opt/rtapps/epics-3.14.12.2_long/base. Why should this have changed?
I found the c1lsc machine to be completely unresponsive today. Looking at the trend of the state word, it happened sometime yesterday (Saturday). The usual reboot procedure did not work - I am not able to bring back any of the models on any of the machines, during the restart procedure, they all fail. The logfile reads (for the c1ioo front end, but they all behave the same):
Not sure what is going on here, or what "Corrutped EPICS data" is supposed to mean. Thinking that something was messed up the last time the model was compiled, I tried recompiling the IOP model. But I'm not able to even compile the model; it fails, giving the error message
I've shutdown all watchdogs until this is resolved.
Good question! I've never calculated what the resonance frequency would be if we had an inductive resistor with our cable capacitance (~50 pF/m, I guess).
I was a bit hasty in posting the earlier plots. In the earlier plot, the "OLG" trace was actually OLG * anti-dewhitening, as Rana pointed out.
Here are the updated ones, and a cartoon (Attachment #5) of the loop topology I assumed. I've excluded things like violin filters, AA/AI etc. The overall gain scaling I mentioned in the previous elog amounts to changing the optical sensing response in this cartoon. I now also show the DARM suppression (Attachment #4) for this OLG and the DARM linewidths for RSE. I don't think the conclusions change.
Note that for Signal Recycling, which is what Kevin tells us we need to do, there is a DARM pole at ~150 Hz. I assume we will cancel this in the digital controller and so can achieve a similar OLG shape. This would modify the control signal spectrum a little around 150Hz. But for a UGF on the loop of ~150 Hz, we should still be able to roll-off the control signal at high frequencies and so the RMS shouldn't be dramatically affected.
Steve is looking into acquiring 4.5 kohm Vishay wirewound resistors with 1% tolerance. The plan is to install two in parallel (so that we get ~2.25 kohm effective resistance) and then snip one off once we are convinced we won't have any actuation range issues. Do these look okay? They're ~$1.50 ea on Mouser, assuming we get 100. Do we need the non-inductive winding?
I think "OLG" trace is not labeled right; it would be good to see the actual OLG in addition to whatever that trace actually is.
As discussed at the meeting earlier this week, we will use some old *MOPA* channels for interfacing with the PLL system Jon is setting up. He is going to put a sketch+photos up here shortly, but in the meantime, Koji helped me identify a channel that can be used to tune the temperature of the Lightwave NPRO crystal via front panel BNC input. It is C1:PSL-126MOPA_126CURADJ, and is configured to output between +/-10V, which is exactly what the controller can accept. The conversion factor from EPICS value to volts is currently set to 1 (i.e. EPICS value of +1 corresponds to +1V output from the DAC). With the help of the wiring diagram, we identified pins 3 and 4 on cross-connect #J7 as the differential outputs corresponding to this channel. Not sure if we need to also setup a TTL channel for servo ENABLE/DISABLE, but if so, the wiring diagram should help us identify this as well.
The cable from the DAC to the cross-connect was wrongly labelled. I fixed this now.
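With the channel in place, the temperature can be tuned from any workstation with pyepics (EPICS value 1 = 1 V at the DAC output, as noted above); a minimal sketch:

    import epics

    epics.caput("C1:PSL-126MOPA_126CURADJ", 0.5)   # +0.5 V at the cross-connect output
    print(epics.caget("C1:PSL-126MOPA_126CURADJ"))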
Based on the first plot, however, my conclusion is that:
The replacement Acromag we scooped from the West Bridge E-Shop does actually seem to work, although we thought it was broken - at first it was just outputting zeros, but after I did the calibration procedure (applying +10 V and -10 V), it was reporting voltages correctly over the full range. I don't know why the factory settings would be messed up, but it had been out of the box before. I did this only with channel 7, so you need to calibrate channels 0-6 and confirm that they indeed also work properly.
See Attachment #1 for the projected control signal ASDs. The main assumption in the above is that all other control loops can be low-passed sufficiently such that even with anti-dewhitening, we won't run into saturation issues.
DARM control loop:
De-whitening and Anti-De-whitening:
It remains to add the control signals for Oplev, local damping, and ASC to make sure we have sufficient headroom, but given that current projections are predicting using up only ~1000cts of the ~23000cts (RMS) available from the DAC, I think it is likely we won't run into saturations. Need to also figure out what the implication of the reduced actuation range will be on handling the locking transient.
Looking at Steve's plot, I was reminded of the ITMY UL OSEM issue. The numbers don't make sense to me though - 300um of DC shift in UL with negligible shifts in the other coils should have made a much bigger DC shift in the Oplev spot position.
20180508 4:49am: Cabazon earthquake, M4.5, 79 miles away. ETMX is in the load cell measurement configuration.
There was an earthquake, all watchdogs were tripped, ITMX was stuck, and c1psl was dead so MCautolocker was stuck.
Watchdogs were reset (except ETMX which remains shutdown until we finish with the stack weight measurement), ITMX was unstuck using the usual jiggling technique, and the c1psl crate was keyed.
As suspected - the problem was with the TTs. I tested the TT signal chain by driving a low frequency sine wave using AWG and looking at the signal on an o'scope. But I saw nothing, neither at the AI board monitor point, nor at the actual coil driver mon point. I decided to look at the IOP testpoints for the DAC channels, to see if the signals were going through okay on the digital side. But the IOP channels were flatlined, as viewed on dataviewer (see Attachment #1). This despite the fact that the DAC output monitor screen in the model itself was showing some sensible numbers, again see Attachment #1.
Looking at the CDS overview screen, there were no red flags. But there was a red indicator sneakily hidden away in the IOP model's CDS status screen, the "DAC" field in the state word is red. As Attachment #2 shows, a change in the state word is correlated with the time ASDC went to 0.
Note that there are also no errors on the c1lsc frontend itself, judging by dmesg. I want to do a model restart, but (i) this will likely require reboots of all vertex FEs, and (ii) I want to know if any CDS experts want to sniff for clues to what's going on before a model restart wipes out some secret logfiles. I'm a little confused that the rtcds isn't throwing up any errors and causing models to crash if the values are not being written to the registers of the DAC. It may also be that the DAC card itself is dead. To re-iterate, all the EPICS readbacks were suggesting that I am injecting a signal right up to the IOP.
Quoting from the runtime diagnostics note:
There is no beam going into the IFO at the moment. There was definitely a spot on the AS camera after I restored the suspensions yesterday, as you can see from the ASDC level in Attachment #1. But at around 2pm Pacific yesterday, the ASDC level has gone to 0. I suspect the TTs. There is no beam on the REFL camera either when PRM is aligned, and PRM's DC alignment is good as judged by Oplev.
Normally, I am able to recover the beam by scanning the TTs around with some low frequency sine waves, but not today. We don't have any readback (Oplev/OSEM) of the TT alignment, and the DC bias values haven't jumped abnormally around the time this happened, judging by the OUT16 monitor points (see Attachment #2). The IMC was also locked at the time when this abrupt drop in the ASDC level happened. Unfortunately, we don't have a camera on the Faraday, so I don't know where the misalignment is happening, but the beam is certainly not making it to the BS. All the SOS optics (e.g. BS, ITMX and ITMY) are well aligned as judged by Oplev.
Being debugged now...
Here are a few things I will be working on:
I did some more BeatMouth characterization. My primary objective was to do a power budget. Specifically, to measure the insertion loss of the mating sleeves and of the fiber beam splitters. All power numbers quoted in this elog are measured with the fiber power meter. Main takeaways:
Remarks / Questions:
example of plots illustrating DAC range / saturation
Using the Wiener filter estimate of the DARM disturbance we will have to cancel, I computed what the control signal would look like for a few scenarios. Our DACs are 16-bit, +/-10 V (i.e. +/-32,768 cts-pk, or ~23,000 cts RMS). We need to consider the shape of the de-whitening filter to conclude whether it is feasible to increase the series resistance by x10 or not.
Note that in this first computation, I have not considered
While doing this calculation, I have accounted for the fact that right now, the analog de-whitening filters in the ETM drive chain have a x3 gain, which we will remove. Actually, this is an assumption - I have not yet measured a transfer function; maybe I'll do one channel at EY to confirm. Also, the actuator gains themselves need to be confirmed.
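The bookkeeping for these projections is simple: divide the disturbance ASD by the actuation chain to get DAC counts/rtHz, then integrate for the RMS. A sketch with placeholder inputs (only the DAC scale is real):

    import numpy as np

    f, asd_m = np.loadtxt("darm_wiener_asd.txt", unpack=True)   # [Hz, m/rtHz] placeholder

    K_act = 5e-9 / f**2       # actuator gain [m/ct] with f^-2 pendulum -- assumed value
    asd_ct = asd_m / K_act    # required control signal [cts/rtHz]

    rms_ct = np.sqrt(np.trapz(asd_ct**2, f))
    print("control RMS ~ %.0f cts (DAC: 32768 cts-pk, ~23170 cts RMS)" % rms_ct)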
As I was looking at the coil driver schematic more closely, I realized that there are actually two separate series resistances, one for the fast controls path, and another for the DC bias voltage from the slow ADCs. So I think we have been underestimating the Johnson noise of the coil drivers by sqrt(2). I've also attached screenshots of the IFOalign and MCalign screens. The two ITMs and ETMX have pitch DC bias values that are compatible with a x10 increase of the series resistance. But even so, we will have ~3pA/rtHz per coil from the two resistances.
gautam 8pm May8: Seems like I had confirmed the x3 gain in the EX de-whitening board when Johannes and I were investigating the AI board offset.
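A quick check of the ~3 pA/rtHz figure quoted above, assuming both the fast-path and bias-path series resistors end up around 4 kohm after the x10 increase (an assumption on my part):

    import numpy as np

    kB, T, R = 1.38e-23, 295.0, 4e3              # Boltzmann, room temp [K], per-path R
    i_one = np.sqrt(4 * kB * T / R)              # Johnson current noise of one path
    i_tot = np.sqrt(2) * i_one                   # fast + bias paths add in quadrature
    print("%.1f pA/rtHz per coil" % (i_tot / 1e-12))   # ~2.9 pA/rtHz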
We tried to estimate what the load cell measurement should yield. Here is the weight breakdown (fudge factor for Al table is to try and account for tapped holes):
As part of investigation into this issue, Jonathan Hanks pointed out that the "minute trends" being recorded by our system were actually only being recorded every 120 seconds (a.k.a. 2 minutes). He had fixed the appropriate line in the parameter file, but had not restarted the FW processes. I had restarted it on Friday. (but failed to elog it !)
To check if this made any difference, I pulled 1 hour of "minute trend" data for the PSL table temperature channel from ~1 hour ago, and compared the number of datapoints against a 1-hour minute-trend time series from 2 May. I've put the display with the # of datapoints (for an identical length of time) from before [left] and after [right] the restart next to the plots in Attachment #1. It seems like we are getting minute trends written every 60 seconds now, as it should be.
The unavailability of trends from nodus is a separate issue for which JH has suggested another fix, to be elogged separately.
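The datapoint count check itself is a short script with the nds2 client; the server, port, channel name, and GPS times below are all placeholder assumptions:

    import nds2

    conn = nds2.connection("fb", 8088)                    # assumed NDS server/port
    chan = "C1:PSL-FSS_RMTEMP.mean,m-trend"               # assumed trend channel name
    buf = conn.fetch(1210000000, 1210003600, [chan])[0]   # one hour of minute trend
    print(len(buf.data))                                  # 60 points if written every 60 s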
For whatever reason, I am unable to get minute or second trends from nodus for any channels (IMC, PEM, etc.) since the reboot. Has there been some more recent FB failure, or is this still a bug from last year's FB catastrophe?
The 3IFO EOM was formerly tuned as the H2 EOM, so the resonant frequencies are different from the nominal aLIGO ones.
PORT1: 8.628MHz / 101 +/- 6 mrad_pk/V_pk
PORT2: 24.082MHz / 41.2 +/- 0.7 mrad_pk/V_pk
PORT3: 43.332MHz / 62.2 +/- 4 mrad_pk/V_pk
9MHz modulation is about x2.4 better than the one installed at LHO.
24MHz modulation is about x14 better. (This is OK as the new 24MHz is not configured to be resonant.)
45MHz modulation is about x1.4 better.
Caution: Because of this work and my negligence, the RF output of the main Marconi for the IFO modulation is probably off. The amplifier (freq. gen. box) was turned on. Therefore, we need to turn the Marconi on for IFO locking.
I worked on my EOM measurement using the beat setup. As the aux injection electronics were in place, I performed my measurement while trying not to disturb the aux setup. The aux Marconi, the split PD output, and an open channel of the oscilloscope were used for my purpose. I have brought the RF spectrum analyzer from the control room. I think I have restored all the electronics back as before. I re-aligned the beat setup after the EOM was removed. Note that the aux NPRO, which had been on, was turned off to save the remaining life of the laser diode.