Using the Wiener Filter estimate of the DARM disturbance we will have to cancel, I computed what the control signal would look like for a few scenarios. Our DACs are 16-bit, +/-10V (i.e. +/-32,768 cts-pk, or ~23,000 cts RMS). We need to consider the shape of the de-whitening filter to conclude whether it is feasible to increase the series resistance by x10 or not.
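For reference, here is the back-of-envelope arithmetic behind the DAC numbers quoted above (a minimal sketch; only the 16-bit, +/-10 V figures from this entry go in):

```python
import numpy as np

n_bits = 16
v_range = 10.0                            # +/-10 V output range
cts_pk = 2**(n_bits - 1)                  # +/-32,768 counts peak
volts_per_ct = 2 * v_range / 2**n_bits    # ~305 uV per count
cts_rms = cts_pk / np.sqrt(2)             # full-scale sine, ~23,170 cts RMS

print(f"{cts_pk} cts-pk, {volts_per_ct*1e6:.0f} uV/ct, ~{cts_rms:.0f} cts RMS")
```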
Note that in this first computation, I have not considered the control signals from the other loops (Oplev, local damping, ASC); those still need to be added, as noted below.
While doing this calculation, I have accounted for the fact that right now, the analog de-whitening filters in the ETM drive chain have a x3 gain, which we will remove. This is actually an assumption; I have not yet measured a transfer function, so maybe I'll do one channel at EY to confirm. The actuator gains themselves also need to be confirmed.
As I was looking at the coil driver schematic more closely, I realized that there are actually two separate series resistances: one for the fast controls path, and another for the DC bias voltage from the slow DACs. So I think we have been underestimating the Johnson noise of the coil drivers by sqrt(2). I've also attached screenshots of the IFOalign and MCalign screens. The two ITMs and ETMX have pitch DC bias values that are compatible with a x10 increase of the series resistance. But even so, we will have ~3pA/rtHz per coil from the two resistances.
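To make the sqrt(2) and the ~3 pA/rtHz concrete, here's the arithmetic (a sketch, assuming both paths end up at ~4.5 kohm after the x10 increase, the value being considered below; that value is not yet confirmed):

```python
# Johnson current noise from the two series resistances per coil.
# Assumes both the fast-path and bias-path resistors end up at ~4.5 kOhm
# after the x10 increase (illustrative, not measured values).
import numpy as np

kB = 1.38e-23   # Boltzmann constant [J/K]
T = 295.0       # room temperature [K]
R = 4.5e3       # assumed series resistance per path [Ohm]

i_single = np.sqrt(4 * kB * T / R)   # one resistor, ~1.9 pA/rtHz
i_total = np.sqrt(2) * i_single      # two independent paths add in quadrature

print(f"{i_single*1e12:.1f} pA/rtHz per path, {i_total*1e12:.1f} pA/rtHz total")
# -> ~1.9 pA/rtHz per path, ~2.7 pA/rtHz total, i.e. the ~3 pA/rtHz quoted above
```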
gautam 8pm May8: Seems like I had confirmed the x3 gain in the EX de-whitening board when Johannes and I were investigating the AI board offset.
Example plots illustrating DAC range / saturation.
There was an earthquake, all watchdogs were tripped, ITMX was stuck, and c1psl was dead so MCautolocker was stuck.
Watchdogs were reset (except ETMX which remains shutdown until we finish with the stack weight measurement), ITMX was unstuck using the usual jiggling technique, and the c1psl crate was keyed.
There is no beam going into the IFO at the moment. There was definitely a spot on the AS camera after I restored the suspensions yesterday, as you can see from the ASDC level in Attachment #1. But at around 2pm Pacific yesterday, the ASDC level went to 0. I suspect the TTs. There is no beam on the REFL camera either when PRM is aligned, and PRM's DC alignment is good as judged by the Oplev.
Normally, I am able to recover the beam by scanning the TTs around with some low frequency sine waves, but not today. We don't have any readback (Oplev/OSEM) of the TT alignment, and the DC bias values haven't jumped abnormally around the time this happened, judging by the OUT16 monitor points (see Attachment #2). The IMC was also locked at the time when this abrupt drop in ASDC level happened. Unfortunately, we don't have a camera on the Faraday so I don't know where the misalignment is happening, but the beam is certainly not making it to the BS. All the SOS optics (e.g. BS, ITMX and ITMY) are well aligned as judged by Oplev.
Being debugged now...
As suspected - the problem was with the TTs. I tested the TT signal chain by driving a low frequency sine wave using AWG and looking at the signal on an o'scope. But I saw nothing, neither at the AI board monitor point, nor at the actual coil driver mon point. I decided to look at the IOP testpoints for the DAC channels, to see if the signals were going through okay on the digital side. But the IOP channels were flatlined, as viewed on dataviewer (see Attachment #1). This despite the fact that the DAC output monitor screen in the model itself was showing some sensible numbers, again see Attachment #1.
Looking at the CDS overview screen, there were no red flags. But there was a red indicator sneakily hidden away in the IOP model's CDS status screen, the "DAC" field in the state word is red. As Attachment #2 shows, a change in the state word is correlated with the time ASDC went to 0.
Note that there are also no errors on the c1lsc frontend itself, judging by dmesg. I want to do a model restart, but (i) this will likely require reboots of all vertex FEs and (ii) I want to know if any CDS experts want to sniff for clues as to what's going on before a model restart wipes out some secret logfiles. I'm a little confused that the rtcds isn't throwing up any errors and causing models to crash if the values are not being written to the registers of the DAC. It may also be that the DAC card itself is dead. To re-iterate, all the EPICS readbacks were suggesting that I am injecting a signal right up to the IOP.
Quoting from the runtime diagnostics note:
20180508 4:49am: Cabazon earthquake, M4.5, 79 miles away. ETMX is in the load cell measurement condition.
Looking at Steve's plot, I was reminded of the ITMY UL OSEM issue. The numbers don't make sense to me though - 300um of DC shift in UL with negligible shifts in the other coils should have made a much bigger DC shift in the Oplev spot position.
See Attachment #1 for the projected control signal ASDs. The main assumption in the above is that all other control loops can be low-passed sufficiently such that even with anti-dewhitening, we won't run into saturation issues.
DARM control loop:
De-whitening and Anti-De-whitening:
It remains to add the control signals for Oplev, local damping, and ASC to make sure we have sufficient headroom, but given that current projections are predicting using up only ~1000cts of the ~23000cts (RMS) available from the DAC, I think it is likely we won't run into saturations. Need to also figure out what the implication of the reduced actuation range will be on handling the locking transient.
I think "OLG" trace is not labeled right; it would be good to see the actual OLG in addition to whatever that trace actually is.
Based on the first plot, however, my conclusion is that:
As discussed at the meeting earlier this week, we will use some old *MOPA* channels for interfacing with the PLL system Jon is setting up. He is going to put a sketch+photos up here shortly, but in the meantime, Koji helped me identify a channel that can be used to tune the temperature of the Lightwave NPRO crystal via a front panel BNC input. It is C1:PSL-126MOPA_126CURADJ, and is configured to output between +/-10V, which is exactly what the controller can accept. The conversion factor from EPICS value to volts is currently set to 1 (i.e. an EPICS value of +1 corresponds to +1V output from the DAC). With the help of the wiring diagram, we identified pins 3 and 4 on cross-connect #J7 as the differential outputs corresponding to this channel. Not sure if we need to also set up a TTL channel for servo ENABLE/DISABLE, but if so, the wiring diagram should help us identify this as well.
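For the record, a minimal sketch of how this channel could be driven with pyepics (assuming pyepics is available on the workstations; the scan range and timing below are placeholders, not a tested procedure):

```python
import time
import numpy as np
from epics import caput, caget

CHAN = 'C1:PSL-126MOPA_126CURADJ'   # EPICS value of +1 <-> +1 V from the DAC

# Slowly sweep the NPRO crystal temperature input, staying well inside +/-10 V
for v in np.linspace(-1.0, 1.0, 21):
    caput(CHAN, v)
    time.sleep(1.0)                 # let the crystal temperature settle
    print(v, caget(CHAN))
```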
The cable from the DAC to the cross-connect was wrongly labelled. I fixed this now.
I was a bit hasty in posting the earlier plots. In the earlier plot, the "OLG" trace was OLG * anti-dewhitening, as Rana pointed out.
Here are the updated ones, and a cartoon (Attachment #5) of the loop topology I assumed. I've excluded things like violin filters, AA/AI etc. The overall gain scaling I mentioned in the previous elog amounts to changing the optical sensing response in this cartoon. I now also show the DARM suppression (Attachment #4) for this OLG and the DARM linewidths for RSE. I don't think the conclusions change.
Note that for Signal Recycling, which is what Kevin tells us we need to do, there is a DARM pole at ~150 Hz. I assume we will cancel this in the digital controller and so can achieve a similar OLG shape. This would modify the control signal spectrum a little around 150Hz. But for a UGF on the loop of ~150 Hz, we should still be able to roll-off the control signal at high frequencies and so the RMS shouldn't be dramatically affected.
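To illustrate the pole cancellation (a sketch only, with the nominal ~150 Hz value assumed for both the plant pole and the compensating zero; in reality the pole frequency would have to be measured):

```python
import numpy as np

f = np.logspace(0, 4, 500)          # 1 Hz - 10 kHz
s = 2j * np.pi * f
w_p = 2 * np.pi * 150.0             # ~150 Hz DARM pole for signal recycling

plant = 1 / (1 + s / w_p)           # extra pole in the optical response
comp = 1 + s / w_p                  # compensating zero in the digital controller

residual_dB = 20 * np.log10(np.abs(plant * comp))
print(np.max(np.abs(residual_dB)))  # ~0 dB everywhere: the loop shape is unchanged
# (Exact here because the zero matches the pole; any mismatch shows up near 150 Hz.)
```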
Steve is looking into acquiring 4.5kohm Vishay wirewound resistors with 1% tolerance. The plan is to install two in parallel (so that we get ~2.25kohm effective resistance) and then snip off one once we are convinced we won't have any actuation range issues. Do these look okay? They're ~$1.50ea on Mouser assuming we get 100. Do we need the non-inductive winding?
Good question! I've never calculated what the resonance frequency would be if we had an inductive resistor with our cable capacitance (~50 pF/m I guess).
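A back-of-envelope version of that calculation (the winding inductance is a guess — a few to tens of uH is typical for wirewounds — and the ~15 m cable run at ~50 pF/m is also assumed):

```python
import numpy as np

L = 10e-6          # assumed winding inductance [H]
C = 15 * 50e-12    # ~15 m of cable at ~50 pF/m [F]

f_res = 1 / (2 * np.pi * np.sqrt(L * C))   # series LC resonance
print(f"f_res ~ {f_res/1e6:.1f} MHz")      # ~1.8 MHz, far above the control band
```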
I found the c1lsc machine to be completely unresponsive today. Looking at the trend of the state word, it happened sometime yesterday (Saturday). The usual reboot procedure did not work - I am not able to bring back any of the models on any of the machines; during the restart procedure, they all fail. The logfile reads (for the c1ioo front end, but they all behave the same):
Not sure what is going on here, or what "Corrutped EPICS data" is supposed to mean. Thinking that something was messed up the last time the model was compiled, I tried recompiling the IOP model. But I'm not able to even compile the model, it fails giving the error message
I suspect this is some kind of path problem - the EPICS_BASE bash variable is set to /cvs/cds/rtapps/epics-3.14.12.2_long/base on the FEs, while /cvs isn't even mounted on the FEs (nor do I think it should be). I think the correct path should be /opt/rtapps/epics-3.14.12.2_long/base. Why should this have changed?
I've shutdown all watchdogs until this is resolved.
As suspected, this was indeed a path problem. Johannes will elog about it later, but in short, it is related to some path variables being changed in order to try and streamline the EPICS processes on the new c1auxex machine (Acromag Era). It is confusing that futzing around with the slow computing system messes with the realtime system as well - aren't these supposed to be decoupled? Once the paths were restored by Johannes, everything compiled and restarted fine. We even have a beam on the AS camera, which was what triggered this whole thing.
Anyways, Attachment #1 shows the current status. I am puzzled by the red TIMING indicators on the c1x04 and c1x02 processes, it is absent from any other processes. How can this be debugged further?
I think the root of the problem is that the /opt/rtapps/ and /cvs/cds/rtapps/ mounting locations point to the same directory on the nfs server. Gautam and I were cleaning up the /cvs/cds/caltech/target/ directory, placing the previous contents of /cvs/cds/caltech/target/c1auxex/, including database files and startup instructions, in /cvs/cds/caltech/target/c1auxex_oldVME/, and then moved /cvs/cds/caltech/target/c1auxex2/, which has the channel database and initialization files for the Acromag DAQ, to /cvs/cds/caltech/target/c1auxex/.
This also required updating the systemd entries on c1auxex to point to the changed directory. While confirming that everything worked as before, we noticed that upon startup the EPICS IOC complains about not being able to find the caRepeater binary. This was not new and has not limited DAQ functionality in the past, but we wanted to fix it, as it seemed to be a simple PATH issue. While the paths are all correctly defined in the user login shell, systemd runs on a lower level and doesn't know about them. One thing we tried was to let systemd execute /cvs/cds/rtapps/epics-3.14.12.2_long/etc/epics-user-env.sh to initialize EPICS. It was strange that the content of that file was pointing to /opt/rtapps/epics-3.14.12.2_long/base, which is not mounted on the slow machines, so we changed the /opt/ in it to /cvs/cds/, not realizing that the frontends read from the same directory (as Gautam said, /cvs/cds does not exist as a mount point on the frontends). It ended up not working this way, and apparently I forgot to change it back during clean-up. But worse, I never elogged it!
In the end, we managed to give systemd the correct path definitions by explicitly calling them out in /cvs/cds/caltech/target/c1auxex/ETMXenv, to which a reference was added in the systemd service file. The caRepeater warning no longer appears.
Note that for Signal Recycling, which is what Kevin tells us we need to do, there is a DARM pole at ~150 Hz.
To be quantitative, since we are looking at smaller squeezing levels and considering the possibility of using 5 W input power, it is possible to see a small amount of squeezing below vacuum with no SRM.
Attachment 1 shows the amount of squeezing below vacuum obtainable as a function of homodyne angle with no SRM and 5 W incident on the back of PRM. The optimum homodyne angle at 210 Hz is 89.2 deg, which gives -0.38 dBvac of squeezing. Attachment 2 is the displacement noise at this optimal homodyne angle, and Attachment 3 is the same noise budget shown as the ratio of the various noise sources to the unsqueezed vacuum.
The other parameters used for these calculations are:
So maybe it's worth considering going for less squeezing with no SRM if that makes it technically more feasible.
The Marconi RF output was turned on, and thus the RF generator was restored to its nominal state, on Friday the 11th.
Pooja and Keerthana received 40m-specific basic safety training.
Since we think we already know the stack mass to ~25% (i.e. 5000 +/- 1000 lbs), we decided to restore the ETMX stack. Procedure followed was:
I will upload the photos to the PICASA page and post the link here later.
In this case, we only need a mass estimate of the end chamber contents with an accuracy of ~25%. If we think we have that already, we don't need to keep doing the jacks-strain gauge adventure.
Since there have been various software/hardware activity going on (stack weighing, AUX laser PLL, computing timing errors etc etc), I decided to do a check on the state of the IFO.
As described in this elog, the ADC for the seismometers now has the signals wired directly to the ADC, instead of going through an AA board or other circuit that would remove any common mode noise. This elog describes one test of the common mode rejection of this setup. Gautam suggested comparing directly with a recent spectrum taken a few months before the new setup described in this elog.
Today I took a spectrum (attachment 1) of C1:PEM-MIC_2 (Ch17) and C1:PEM-MIC_3 (Ch18) with input to the ADC terminated with 50 Ohms. These are two of the channels plotted in the previous spectrum, though I don't know how that plot was normalized. It's clear that there are now strong 60 Hz harmonic peaks that were not there before, so this new setup does have worse common mode rejection.
Earlier today Kira and I reconnected the EX seismometer. I just took some spectra of all three seismometers, shown in the attachments, to compare with past data and to do a rough check of the calibration.
This elog has spectra from 2010 (GUR1 is now EY) and this elog has one for the BS at lower frequencies from 2017. Note that the EX seismometer now has strong peaks that are not at 60 Hz harmonics. Other than these peaks, the old spectra roughly match up with the ones taken today, so the calibration is still roughly the same. I couldn't find any old data for EX (GUR2) though, so I don't know for sure that these peaks weren't there before.
gautam 20180517 0930: In 2017, Gur2 (now EX) looked like this. Still peaky, but the peaks seem shifted in frequency. Steve also informed me that the Gur1 and Gur2 cables were swapped n times, so perhaps we shouldn't read too much into that.
The final set-up of the stack measurement, with 3 load cells and 4 leveling wedge mounts, is shown in Atm 1.
Sensor voltages BEFORE and AFTER this attempt.
The EPICS process on the c1ioo front end had died mysteriously. As a result, MC autolocker wasn't working, since the autolocker control variables are EPICS channels defined in the c1ioo model. I restarted the model, and now MCautolocker works.
I've moved my setup to the actual seismometer. I attached the temperature sensor to the seismometer (attachment 1) with duct tape, though this is temporary. I will be monitoring the temperature fluctuations of the seismometer for a whole day, then take the can off and repeat the test. The can isn't clamped down, so the insulation isn't perfect, and I'd expect to see some noticeable fluctuations even with the can on. I've also labeled the long cable for the temperature sensor readout (attachments 2 and 3). There will also be an out-of-loop sensor added in later, but since I am not running the loop for this test, it doesn't matter which sensor I monitor. Attachment 4 is a picture of the current setup.
I'm working near 1X5 and there is an SR785 adjacent to the electronics rack with some cabling running along the floor. I plan to continue in the evening so please leave the setup as is.
During the course of this work, I noticed the +15V Sorensen in 1X6 has 6.8 A of current draw, while Steve's February 2018 label says the current draw is 8.6A. Is this just a typo?
Steve: It was most likely my mistake. Tag is corrected to 6.8A
I'm still in the process of electronics characterization, so the SR785 is still hooked up. MC3 coil driver signal is broken out to measure the output voltage going to the coil (via Gainx100 SR560 Preamp), but MC is locked.
The ITMX oplev still clipping
The ITMX oplev beam is clipping. It will be corrected with the arm locked.
The aim is to obtain a colored version, with good contrast, of the grayscale image of scattering of light by dust particles on the surface of the test mass, acquired using the GigE camera. The original and colored images are attached here.
Here is the result of my test. I think I'll leave the can on over the weekend, because there was a long period of time where the seismometer heated up by 0.8 degrees, so I can't fully see the fluctuations over a full 24 hour period.
Target: Phase locking can be achieved by scanning the oscillator frequency. This frequency is currently controlled using the knob on the AM/FM signal generator (2023B), but we need to control it remotely by specifying a start frequency, an end frequency, and a step size.
The signal generator and the computer are connected with the help of a GPIB-Ethernet converter. The IP address of the converter I used is '192.168.113.109' and its GPIB address is 10.
I could change the oscillator frequency by changing the input frequency with the help of the code I wrote (in order to check this code, I changed the oscillator frequency multiple times; I hope it didn't cause trouble for anyone). Now I am trying to improve this code by adding features like numpy support, argument parsing, etc., which I should be able to complete by next week. I am also considering developing the code to have a sliding control for the oscillator frequency. A sketch of the frequency-setting code is below.
For the record: the maximum frequency I set it to was 100 MHz.
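A minimal sketch of this kind of frequency-setting code. It assumes a Prologix-style GPIB-Ethernet converter listening on port 1234, and the 'CFRQ:VALUE' carrier-frequency command from the 2023B remote-programming manual; both assumptions should be checked against the actual hardware before relying on this:

```python
import socket

HOST = '192.168.113.109'   # GPIB-Ethernet converter
PORT = 1234                # assumed Prologix-style command port
GPIB_ADDR = 10             # GPIB address of the 2023B

def set_frequency(freq_hz):
    """Set the 2023B carrier frequency (CFRQ syntax assumed from the manual)."""
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.sendall(f'++addr {GPIB_ADDR}\n'.encode())          # select instrument
        s.sendall(f'CFRQ:VALUE {freq_hz:.0f}HZ\n'.encode())  # set carrier freq

# Scan from a start to an end frequency in fixed steps:
for f in range(int(99e6), int(100e6) + 1, int(100e3)):
    set_frequency(f)
```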
Aim: To find a telescopic lens solution to image the test mass onto the sensor of the GigE camera.
I wrote a python code to find an appropriate combination of lenses to image the optic onto the camera, keeping in mind practical constraints: the distance of the GigE camera from the optic should be ~1m, and the distance between the lenses needs to be compatible with the Thorlabs lens tubes available. We have to image both the entire optic (3" diameter) and the beam spot (1") using this combination of lenses. The image size that efficiently utilizes the entire sensor array is 1/4". Therefore, the magnification required is 1/12 for imaging the entire optic and 1/4 for the beam spot. (A sketch of the underlying two-lens calculation is at the end of this entry.)
I checked the Thorlabs website for the available focal lengths of 2" lenses (instead of 1" lenses, to collect sufficient power). I tried several combinations of lenses, and the ones I found close enough to what is required are listed below along with their colorbar plots.
a) 150mm-150mm (Attachment 2 & 3)
With this combination, the object distance varies from 50cm for the 1" beam spot to 105cm for the 3" optic. This poses a difficulty: there is a difference of ~48cm in the distance between the optic and camera in the two cases, imaging the entire optic and imaging the beam spot.
b) 125mm-150mm (Attachment 4 & 5)
With this combination, the object distance varies from 45cm for the 1" beam spot to 95cm for the 3" optic. There is a difference of ~43cm in the distance between the optic and camera in the two cases.
c) 125mm-125mm (Attachment 6 & 7)
The object distance varies from 45cm for the 1" beam spot to 90cm for the 3" optic. There is a difference of ~39cm in the distance between the optic and camera in the two cases.
A sensitivity check was also done for these combinations of lenses. An error of 1cm in the object distance and 5mm in the distance between the lenses gives an error in magnification of <2%.
The schematic of the telescopic lens system has been given in Attachment 8.
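For reference, the two-lens calculation referred to above boils down to applying the thin-lens equation sequentially (the numbers in the example are illustrative only, chosen to land on the 1/12 magnification, and are not one of the solutions above):

```python
def two_lens_image(u1, f1, f2, d12):
    """Thin-lens imaging through two lenses.
    u1: object distance before lens 1; d12: lens separation (meters)."""
    v1 = 1 / (1 / f1 - 1 / u1)      # image distance after lens 1
    m1 = -v1 / u1
    u2 = d12 - v1                   # object distance for lens 2
    v2 = 1 / (1 / f2 - 1 / u2)
    m2 = -v2 / u2
    return v2, m1 * m2              # final image distance, net magnification

# Two 150 mm lenses, optic 105 cm away, separation chosen to hit |m| = 1/12:
v2, m = two_lens_image(1.05, 0.150, 0.150, 0.625)
print(f"image {v2*100:.1f} cm after lens 2, magnification {m:.4f}")  # ~1/12
```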
Article from EE Times, describing why metal foil (NOT metal film) resistors are really better than wirewound when it comes to everything except high power dissipation.
Need to do some digging to see if we can find ~1k metal foil resistors which can handle ~1W of heat.
Steve: here it is
In the IMC actuation chain, it looks like the MC1/MC3 de-whitening boards, and also all three MC optics' coil driver boards, are showing higher noise than expected from LISO modeling. One possible candidate is thick film resistors on the coil driver boards. The plan is to debug these further by pulling the board out of the Eurocrate and investigating on the electronics bench.
Why bother? Mainly because I want to see how good the IR ALS noise is, and currently, the PSL frequency noise is causing the measurement to be worse than references taken from previous known good times.
Sometime ago, rana suggested to me that I should do this measurement more systematically.
I've now restored all the wiring at 1X6 to their state before this work.
I guess it's fine for now while we are still finalizing the setup at EX, but we should eventually line up the seismometer axes with the IFO axes. Is there a photo of the orientation of the seismometer from before the heater can tests? If not, it's probably good to make some sort of markings on the granite slab / seismometer to allow easy lining up of these axes...
I have attached the graph for the seismometer temperature fluctuations over 3 days. As we can see, there is a noticeable fluctuation in daily temperature as well as a difference between days in the maximum and minimum temperatures. I will repeat this test but take the can off to see if there's any difference between having the can on or off.
Today Steve and I tried to capture the image of scattering of light by dust particles on the surface of ETMX using the GigE camera. The image obtained (at gain = 100, exposure time = 125000) has been attached. Unlike the previous images, a creepy shape of bright spots was seen. Gautam helped us lock with infrared light and see the image; a similar, less intense shape was seen. This may be because of dust on the lens.
Today, I tested the new Mini-Circuits frequency counter by connecting it to the beat signal output. The frequency counter works fine. Now I am trying to get a display of the frequency on the computer screen using python. I have written the code for remotely changing the oscillator frequency, and it is saved in the folder 'ksnair'. A picture of the new Mini-Circuits frequency counter is attached below. Part no: UFC-6000, S/N: 11501040012, Run: M075270.
It appears that one of the wires was disconnected overnight or this morning so I wasn't able to gather data over a full 24 hour period. Perhaps someone accidentally kicked it. I placed some cones in that area so hopefully the wires won't be in the way as much and I can get the data tomorrow. From the data I do have it seems that the seismometer is at a colder temperature when the can is not on, though it is difficult to see by how many degrees the temperature fluctuates. I've included the data from 5 days back to see the comparison.
I have pulled out MC1 coil driver board from its Eurocrate, so IMC is unavailable until further notice. Plans:
If there are no objections, I will execute Step #5 in the next couple of hours. I'm going to start with Steps 1-4.
(keerthana, gautam, jon)
In the morning, Jon gave me an overview of the auxiliary laser system which we are planning to set up. Based on the diagram he uploaded in the elog, I have made the MEDM screen for controlling and displaying the parameters. The parameters we will be controlling are the temperature (in terms of voltage), the oscillator frequency (with the help of the IFR 2023B), the frequency offset, and the PID controls. The display includes the beat frequency, error signal voltage, control voltage, and a switch to give feedback to the AUX laser. As the frequency counter is not connected at the moment, I haven't included its channel number. A screenshot of the screen is attached. I am also considering giving PID feedback to the slow control from the AUX feedback signal. The screen can be accessed from the PSL dropdown menu in sitemap.
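The PID part of the screen will eventually drive something like the loop sketched below (a generic textbook sketch; the gains, sign, and channel mapping are placeholders until the real servo is configured):

```python
class PID:
    """Textbook discrete PID; all gains below are placeholders."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# error = (beat frequency - setpoint); output -> slow temperature control
pid = PID(kp=1e-4, ki=1e-3, kd=0.0, dt=1.0)
```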
This work is now complete. MC1 coil driver board has been reinstalled, local damping of MC1 restored, and IMC has been locked. Detailed report + photos to follow, but measurement of the noise (for one channel) on the electronics workbench shows a broadband noise level of 5nV/rtHz around 100Hz, which is lower than what was measured here and consistent with what we expect from LISO modeling (with the fast input terminated with 50ohm, slow input grounded).
This time the test went without issue. The first attachment is the data for the past 24 hours and the second attachment is the full data over 6 days. The average temperature fluctuation (from highest point to lowest point) was 0.43 C with the can on and 0.55 C with the can off. In addition, the seismometer with the can off is about 1 C cooler than with the can on. I'd like to leave the can off until the end of the week so we can get comparable data sets for both the can on and off. Eventually I'll need to figure out a way to clamp the can down to the block in order to get better insulation and hopefully even smaller temperature fluctuations.
In any case, if it is indeed true that the optic sees this current noise, the place to make the measurement is probably the Sat. Box. Who knows what the pickup is over the ~15m of cable from 1X6 to the optic.
All models on the c1lsc front end were dead. Looking at slow trend data, looks like this happened ~6hours ago. I rebooted c1lsc and now all models are back up and running to their "nominal state".
Rana said that it wasn't necessary to gather more data on the temperature fluctuations so I have reconnected the heater circuit and restarted the PID loop with the can on the seismometer.
We will need to order a few things for our final setup.
There is an effort to switch to an all-digital system for the GigE camera feeds similar to the one running at LLO, which uses Joe Betzwieser's custom SnapPy package to interface with the cameras in Python and aggregate their feeds into a fancy GUI. Joe's code is a SWIG-wrapping of the commercial camera-driver API, Pylon, from Basler. The wrapping allows the low-level camera driver methods to be called from within Python, and their feeds are forwarded to a gstreamer stream also initiated from within Python. The problem is that his wrapping (and the underlying Pylon software itself) is only runnable on an older version of Ubuntu. Efforts to run his software on newer distributions at the 40m have failed.
I'm working on a fix to essentially rewrite his high-level SnapPy code (generators of GUIs, etc.) to use the newest version of Pylon (pylon5) to interface at a low level with the cameras. I discovered that since the last attempt to digitize the camera system, Basler has released their own official version of a Python wrapping for Pylon on github (PyPylon).
Progress so far:
The next and final step is to modify Joe's SnapPy package to import pypylon instead of his custom wrapping of an older version of the camera software, and update all of the Pylon calls to use the new methods. I'll hopefully get back to this early next week.
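As a first sanity check that the new wrapping talks to hardware, a single-frame grab along these lines (adapted from the PyPylon sample code) should work on a machine that can see the camera subnet:

```python
from pypylon import pylon

# Connect to the first camera Pylon can find and grab one frame
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()
camera.StartGrabbingMax(1)
grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if grab.GrabSucceeded():
    frame = grab.Array              # image as a numpy array
    print(frame.shape, frame.dtype)
grab.Release()
camera.Close()
```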
I've updated the parts list to be an excel document and included every single part we will need. This is only a first draft, so it will probably be updated in the future. I also made a mistake in the hole sizing for the front panel, so I've updated it and attached it as well (second attachment).
Edit: re-attached the EX can panel fpd file so that everything is in one place
Chris replaced some air conditioning filters and ordered some replacement filters today.
Yesterday morning was dusty. I wonder why?
The PRM sus damping was restored this morning.
Yesterday afternoon at 4 the dust count peaked at 70,000 counts.
Manasa's allergy was bad at the X-end yesterday. What is going on?
There was no wind and the CES neighbors did not do anything.
Air conditioning filters were checked by Chris. The 400 day plot shows 3 bad peaks, at 1-20, 2-5 & 2-19.
Last night, Rana fact-checked my story about the coil driver noise measurement. Conclusions:
Note: All measurements were made with the fast input of the coil driver board terminated with 50ohms and bias input shorted to ground with a crocodile clip cable.
The first goal is to figure out where this pickup is happening, and if it is actually going to the optic. To this end, I will put a passive 100 kHz filter between the coil driver output and the preamp (Busby Box instead of SR560). By getting a clean measurement of the noise floor with the coil driver board in the Eurocrate (with the bias input driven), we can confirm that the optic isn't being buffeted by the excess coil driver noise. If we confirm that the excess noise is not a measurement artefact, we need to think about where the pickup is actually happening and come up with mitigation strategies.
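For the passive filter, a single RC pole at ~100 kHz would do; for example (component values illustrative only, and the Busby box may already provide something equivalent):

```python
import numpy as np

R = 1.6e3    # [Ohm]
C = 1e-9     # [F]
f_c = 1 / (2 * np.pi * R * C)            # single-pole RC corner
print(f"corner frequency ~ {f_c/1e3:.0f} kHz")   # ~99 kHz
```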
RXA: good section on EMI/RFI in the Op Amp Applications handbook (2006) by Walt Jung. Also this page: http://www.electronicdesign.com/analog/what-was-noise