ID |
Date |
Author |
Type |
Category |
Subject |
13641
|
Mon Feb 19 14:27:25 2018 |
gautam | Update | General | Fiber ALS input polarization tuning |
Summary:
Current configuration of PSL free-space to fiber coupling is:
- 3.25 mW / 4.55 mW (~71%) coupling efficiency, both numbers measured with the Ophir power meter, Filter OFF
- Polarization extinction ratio (PER), defined as the ratio P_p / P_s (I choose to define it in this way as opposed to the reciprocal), of 75 (~19dB). The uncertainty in this number is large (see discussion), but I am confident that we have >10dB, which, while not as good as it could be, is sufficient for the main motivation behind this work.
Motivation:
I had noticed that the RF beat amplitude was fluctuating by up to 20 dB as viewed on the control room analyzer. As detailed in my earlier elog, I suspected this to be because of random polarization drift between the PSL and EX fields incident on the fiber-coupled PDs. Since I am confident the problem is optical (as opposed to something funny in the electronics), we'd like to be able to isolate which of the many fiber segments dominates the contribution to this random polarization drift.
Some useful references:
- General writeup about how PM fibers work and PER. Gives maximum achievable PERs for a given misalignment of incident beam relative to one of the two birefringent axes.
- Another similar writeup. This one put me onto the usefulness of the alignment keys on the fibers.
- Thorlabs PM980 specs - this tells us about the orientation of the two axes for the kind of fibers we use.
Procedure and details:
- The principle of operation behind polarization maintaining (PM) fibers is that intentional birefringence is introduced along two perpendicular axes in the fiber.
- As a result, light propagates with different phase velocities along these two axes.
- For an arbitrary incident field with E-field components along both axes, it is almost impossible to predict the output polarization as we do not know the length of propagation along each axis to sufficient precision (it is also uncontrolled w.r.t. environmental fluctuations). So even if you launch linear polarization into the fiber, it is most likely that the output polarization state will be elliptical.
- But if we align the incident, linear polarization along one of the two axes, then we can accurately predict the polarization at the output, to the extent that the fiber doesn't couple power between the two axes during propagation. I can't find a spec for the isolation between axes for the fiber we use, but the specs I could find from other fiber manufacturers suggest that this number is >30dB, so I think the assumption is a fair one.
- A useful piece of information is that the alignment key on the fibers gives us information about the orientation of the birefringent axes inside the fiber. For the Thorlabs fibers, it seems that the alignment key lines up with the stress-inducing rods inside the fiber (i.e. the slow axis). I confirmed this by looking at the fiber with the fiber scope.
- The PSL pickoff beam I am using for this setup is from the transmission of the PBS after the Faraday. So this field should have relatively pure P-polarization.
- The way I have set up the fiber on the PSL table, the fast axis of the fiber corresponds to P-polarization (i.e. E field oscillates parallel to the plane of the optical table). Actually, it was this alignment that I tweaked in this work.
- Using the information about the alignment key defining the polarization axes of the fiber, I also set up the output fiber coupler such that the fast axis lined up as near parallel to the plane of the optical table as possible. In this way, the beam incident on the PBS at the output of my setup should be pure P-polarization if I set up my input alignment into the fiber well.
- I tweaked the rotation of the Fiber mount at the input coupler to maximize the ratio of P_p / P_s, as measured by the pair of PDs at the output.
- As #1 of my listed references details, you need to align the incident linear polarization to one of the two birefringent axes to better than 6 degrees to achieve a PER of >20dB (see the short sanity-check sketch at the end of this entry). While this sounds like a pretty relaxed requirement, in practice, it is about as good as we can hope to achieve with the mounts we have, as there is no feature that allows us to lock the rotational degree of freedom once we have optimized the alignment. Any kind of makeshift arrangement like taping the rotating part to the mount is also flaky, as during the taping, we may ruin the alignment.
- Attachment #1 shows the result of my alignment optimization - the ratio P_p / P_s is about 75.
- The uncertainty on the above number is large. Possible sources of error:
- Output coupler is not really aligned such that the fast axis corresponds to P-polarization for the output PBS.
- The two photodiodes' gain balance was not checked.
- The polarization content of the input beam was not checked.
- The PBS at the output could be slightly misaligned relative to the S/P polarization directions defined by the tabletop.
- The PBS extinction ratio was not checked.
- But anyways, this is a definite improvement on the situation before. And despite the large uncertainty, I am confident that P_p / P_s is better than 10dB.
- Moreover, Steve and I installed protective tubing on the lengths of fiber that were unprotected on the PSL table; this should help in reducing stress-induced polarization drifts, at least in these sections of fiber.
- So I think the next step is to monitor the stability of the RF beatnote amplitude after these improvements. At some point, we need to repeat this procedure for the EX and EY fibers as well.
- If the large drifts are still seen, the only thing we can exclude as a result of this work is the section of fiber from the PSL light coupler to the beat mouth.
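As a quick sanity check on the 6 degree / 20dB number above, here is a minimal sketch (mine, not taken from the references) of the ideal-fiber limit, where the achievable PER is set only by the rotational misalignment of the input linear polarization from the fiber axis:
```python
import numpy as np

def max_per_db(theta_deg):
    """Best-case PER [dB] for input polarization misaligned by theta_deg
    from the fiber's birefringent axis (ideal fiber, no internal cross-coupling)."""
    theta = np.radians(theta_deg)
    return -10 * np.log10(np.tan(theta) ** 2)

for theta in [1, 3, 6, 10]:
    print(f"{theta:2d} deg misalignment -> max PER ~ {max_per_db(theta):.1f} dB")
# 6 deg gives ~19.6 dB, consistent with the ~20 dB requirement quoted above.
```
So the measured ~19dB is roughly what one would expect if the rotational alignment is good to ~5-6 degrees.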
|
Attachment 1: IMG_6900.JPG
|
|
13642
|
Tue Feb 20 13:59:30 2018 |
Koji | Update | General | Modulation depth measurement for an aLIGO EOM |
Last night I worked at the PSL table for the modulation depth measurement for an aLIGO EOM. Let me know if the IFO behavior is unusual.
What I did was:
- Cranked up the HEPA speed to 100
- Placed an aLIGO EOM in the AUX beat path (south side of the PSL laser). (It is still on the PSL table as of Feb 20, 2018)
- Closed the PSL shutter
- Turned off the main Marconi for 11MHz. The freq and output power of this Marconi have not been touched.
- Turned off the freq generation unit
- Worked on my measurement with the spectrum and network analyzers + aux marconi.
- Turned down the HEPA speed to 30
- Turned on the freq generation unit
- Turned on the main Marconi
- Opened the PSL shutter => IMC locked
|
13650
|
Thu Feb 22 16:11:14 2018 |
Koji | Update | General | aLIGO EOM crystal replacement |
aLIGO EOM crystal replacement
- The entire operation has been performed at the south flow bench @40m.
- We knew that the original crystal in the aLIGO EOM we are testing has some problem. This was replaced with a spare RTP crystal.
- Once the housing was removed, it was obvious that the crystal had a crack (Attachment 1).
It seemed that it was produced by either mechanical stress or thermally induced stress (e.g. soldering).
- I wanted to make sure the new crystal is properly aligned in terms of the crystal axis.
The original crystal has the pencil marking at the top saying "Z" "12". The new (spare) crystal has "Z" and "11".
So the new crystal was aligned in the same way as the original one. (Attachment 2)
- I took an opportunity to measure the distribution of the electrode lengths (Attachment 3). The lengths are 14, 5, and 14mm, respectively.
|
Attachment 1: IMG_3421.JPG
|
|
Attachment 2: IMG_3426.JPG
|
|
Attachment 3: IMG_3427.JPG
|
|
13652
|
Thu Feb 22 17:19:47 2018 |
Koji | Update | General | Modulation depth measurement for an aLIGO EOM |
aLIGO EOM test: Setup
- The modulation signal was supplied from an aux Marconi.
- Between the Marconi and the EOM, a 20dB coupler (ZFDC-20-5) was inserted. There the Marconi was connected to the output port, while the EOM was to the input port. This way, we can observe how much of the RF power is reflected back to Marconi.
- The beat setup (40m ELOG 13567) was used for the measurement. The EOM was placed in the beam path of the beat setup in the PSL side.
- To eliminate the modulation sidebands of 11MHz and 55MHz, the 40m Marconi and the freq generator were turned off (in this order).
- The nominal amplitude of the carrier beat note was -15dBm ~ -16dBm.
- The cable from the source to the EOM was ~3m. And the loss of this cable was ~0.4dB.
Measurement
- The EOM had three input ports.
- 9MHz input - In reality, there was no matching circuit.
- Center port - matched at 24.1MHz and 118.3MHz. 24.1MHz port has no amplification (just matching), and 118.3MHz is resonant.
- 45.5MHz port - resonantly matched at 45.5MHz
- The Marconi output power was set to be +13dBm. For the 45MHz measurement, a 20dB attenuator was inserted right next to the Marconi so that the VSWR seen from the Marconi was improved. (The Marconi did not like the full reflection from the unmatched circuit and shut down due to its protection function.)
- The amplitude ratios between the sidebands and the carrier were multiplied by a factor of 2 to obtain the modulation depths ( BesselJ(1,m)/BesselJ(0,m) ~ m/2 ); a small conversion sketch is appended at the end of this entry.
- The result is found in Attachment 2.
- The center port showed the modulation response of 0.7mrad/V and 15mrad/V for 24.1MHz and 118.3MHz, respectively. This suggests that the amplification factor for 118.3MHz is ~x21.
- The VSWR of the center port is below 1.5 at the target frequencies. That's as tuned in Downs and has not been changed by the crystal replacement.
- The 45MHz port has a modulation response of 0.034mrad/V. It later turned out that this corresponds to an amplification of ~x19. The circuit is well matched at the resonant frequency.
- The linearity was checked with the 45MHz port (Attachment 3). The input power (directly connected to the EOM without the 20dB attn) was varied between -17dBm and +13dBm. There was no sign of nonlinearity.
- The modulation response at 24MHz was compared at various input ports. (Attachment 4)
- The input signal was amplified to 23dBm by a ZHL-3A for better sideband visibility. The actual amplifier output was ~30dBm, and a 6dB ATTN was used to improve the VSWR to protect the amplifier.
- The 9MHz port showed 3.6mrad/V and 1.8mrad/V with the port unterminated and terminated, respectively. This factor of two difference is as expected.
This 1.8mrad/V is roughly x2.6 higher than that of the matched 24/118MHz port. This is close to the ratio of the plate sizes (14mm/5mm = 2.8).
- In the current condition, the 9MHz (unterminated), 9MHz (terminated), 24/118MHz, and 45MHz ports require 22dBm, 27dBm, 36dBm, and 21dBm, respectively, to realize the current modulation depth of 0.014 at 24MHz.
- Comparing with this matched 9MHz performance, the amplification of the 45MHz port at 45MHz was determined to be ~x19.
- Considering these results, the modulation response of the center port at 24MHz seems too low. We don't want to supply 36dBm for the 0.014rad modulation (nominal number for H1).
Here are some thoughts:
- Use the 45MHz or 9MHz port for the 24MHz modulation. The port would probably be unmatched there, but perhaps we can come up with a way to improve the VSWR at 24MHz somehow?
- Redistribute the plate lengths to have better modulation at 24MHz. Can we achieve sufficient modulation capability with the frequencies of the long and short ports swapped? We hope that we don't need to redo the matching of the 24/118MHz port, because the capacitances of the ports are almost the same.
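For reference, a minimal sketch of the sideband-ratio-to-modulation-depth conversion mentioned in the Measurement section (this assumes the ratio is read off the beat spectrum as a power ratio in dB; it is not the actual analysis script):
```python
import numpy as np

def mod_depth_from_db(sb_minus_carrier_db):
    """Modulation depth m [rad] from the sideband/carrier power ratio [dB]
    seen on the beat spectrum, using J1(m)/J0(m) ~ m/2 (small m)."""
    amp_ratio = 10 ** (sb_minus_carrier_db / 20.0)   # convert dB to amplitude ratio
    return 2.0 * amp_ratio

# e.g. a sideband 40 dB below the carrier corresponds to m ~ 0.02 rad
print(mod_depth_from_db(-40))
```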
|
Attachment 1: IMG_3436.JPG
|
|
Attachment 2: modulation_depth.pdf
|
|
Attachment 3: modulation_linearity.pdf
|
|
Attachment 4: modulation_24MHz.pdf
|
|
13654
|
Fri Feb 23 20:46:04 2018 |
Udit Khandelwal | Summary | General | CAD Summary 2018/02/23 |
I have more or less finished cadding the test mass chamber by referring to the drawings Steve gave me. Finer details like lugs and bolts and window flaps can be left for later. Here's a quick render:

|
13669
|
Thu Mar 8 01:10:22 2018 |
gautam | Update | General | CDS recovery after work at LSC rack |
This required multiple hard reboots, but seems like all the RT models are back for now. The only indicator I can't explain is the red DC field on c1oaf. Also, the SUS model seems to be overclocking more frequently than usual, though I can't be sure. The "timing" field of this model's state word is RED, while the other models all seem fine. Not sure what could be going on.
Will debug further tomorrow, when I probably will have to do all this again as I'll need to recompile c1lsc for the ALS electronics test with the new ADC card from the differential AA board. |
Attachment 1: CDS-recovery.png
|
|
13670
|
Thu Mar 8 14:41:25 2018 |
gautam | Update | General | CDS recovery after work at LSC rack |
As I had found before, restarting the c1oaf model fixed the DC error. There is however still a pesky red indicator light on the "ADC0" in c1oaf. Trying to open up the ADC MEDM screen to investigate this further leads to the blank screen on the bottom right of Attachment #1. Probably has something to do with the fact that the model has an ADC block (because every model needs one?) but no signals are actually being piped to the model directly from the ADC.
Another observation, though I don't have any hypothesis as to why this was happening: on the c1sus machine, the c1sus model would frequently overclock, and then eventually, crash. I observed this behaviour at least 3 times between last night and now. The other models seemed fine though, in fact, IMC stayed locked. Why should this have been the case? It remains to be seen if this was somehow connected to the red DC indicator on c1oaf, though why should this be the case? Isn't the DC just concerned with writing data to frames? Any sort of IPC should be independent? Attachment #2 shows that there's been a definite increase in the maximum time on c1sus clock-cycle since yesterday (it's a 10 day minute trend plot of the model clock cycle timing and also the maximum time). Why? Koji and I did switch off all the Sorensens at the LSC rack for about 30mins, but why should this affect anything at 1X6? There are no red lights in either the c1lsc or c1sus expansion chassis. Curiously, the PRM also seems to be glitchy - as I'm sitting in the control room, I see a spot flashing across vertically on the REFL CRT monitor sporadically. Note that nominally, with PRM misaligned, the REFL CRT should be dark. dmesg on c1sus doesn't shed any light on the issue.
Seems like some high level voodoo.
Edit 330pm: The model just crashed again. dmesg rather unhelpfully just says "ADC timeout". Unclear how to debug further. See Attachment #3.
Quote: |
This required multiple hard reboots, but seems like all the RT models are back for now. The only indicator I can't explain is the red DC field on c1oaf. Also, the SUS model seems to be overclocking more frequently than usual, though I can't be sure. The "timing" field of this model's state word is RED, while the other models all seem fine. Not sure what could be going on.
Will debug further tomorrow, when I probably will have to do all this again as I'll need to recompile c1lsc for the ALS electronics test with the new ADC card from the differential AA board.
|
|
Attachment 1: CDS-recovery.png
|
|
Attachment 2: c1sus_timing.png
|
|
Attachment 3: c1sus_crashed.png
|
|
13672
|
Thu Mar 8 18:15:42 2018 |
gautam | Update | General | CDS recovery after work at LSC rack |
I was forced into a simultaneous power-cycle reboot of the three vertex FEs just now. I took the opportunity to completely disconnect the c1sus expansion chassis from all power and then restart it.
Everything is back up right now, and the weird timing issues I noticed in the sus model seem to be gone now (I'll need a longer baseline to be sure and I'll post a trend of the CPU timing tomorrow). It's disconcerting that apparently the only way to get everything back up and running is the nuclear option of power-cycling all FE related electronics. I was considering borrowing an ADC adapter card from the Y end and measuring the calibrated IR ALS noise with the digital system, but if I'm going to have to go through this whole dance each time I do a model recompile on c1lsc (which I'm going to have to in order to get the extra ADC recognized), I'm wondering if it's just better to wait till we get the new adapter cards we ordered. I think I'm going to work on tuning the input coupling into the fiber at EX in the next couple of days instead.
Quote: |
Seems like some high level voodoo .
Edit 330pm: The model just crashed again. dmesg rather unhelpfully just says "ADC timeout". Unclear how to debug further. See Attachment #3.
|
|
13677
|
Fri Mar 9 20:35:41 2018 |
Udit Khandelwal | Summary | General | Summary 2018/03/09 |
1. Optical Table Layout
I had discussed with Koji a way to record the coordinates of optical table equipment in a text file and load them into SolidWorks. The goal is to make it easier to move things around on the table in the CAD. While I have succeeded in importing coordinates through txt files, there is still a lot of tediousness in converting these points into sketches. Furthermore, the task has to be redone every time a coordinate is added to or changed in the txt file. Koji and I think that this can all be automated through SolidWorks macros, so I will explore that option for the next two weeks.
2. Vacuum Chamber CADs
Steve will help find manufacturing drawings of the BS chamber. I have completed the ETM chambers; the ITM ones are identical to them, so I will reuse parts for the CAD. |
13678
|
Mon Mar 12 13:58:37 2018 |
gautam | Update | General | projector light bulb blown |
Bulb went out ~10am today. Looks like the lifetime of this bulb was <100 days.
Steve: bulb is arriving next week
|
13695
|
Wed Mar 21 10:00:35 2018 |
steve | Update | General | projector light bulb replaced |
Light bulb replaced.
Quote: |
Bulb went out ~10am today. Looks like the lifetime of this bulb was <100 days.
Steve: bulb is arriving next week
|
|
13707
|
Mon Mar 26 23:49:27 2018 |
gautam | Update | General | New ADC Adaptor Board installed in C1LSC expansion chassis |
Todd informed me that the ADC Timing adaptor boards we had ordered arrived today. I had to solder on the components and connectors as per the schematic, though the main labor was in soldering the high density connectors. I then proceeded to shut down all models on c1lsc (and then the FE itself). Then the classic problem of all vertex machines crashing when unloading models on c1lsc happened (actually Koji noticed that this was happening even on c1ioo). Anyways, this was nothing new, so I decided to push ahead.
I had to get a cable from Downs that connects the actual GS ADC card to this adaptor board. I powered off the expansion chassis, installed the adaptor board, connected it to the ADC card, and restarted the expansion chassis and also the FE. I also reconnected the SCSI cable from the AA board to the adaptor card. It was a bit of a struggle to get all the models back up and running again, but everything eventually came back (after a few rounds of hard rebooting). I then edited the c1x04 and c1lsc simulink models to reflect the new path for the X arm ALS error signals. Seems to work alright.
At some point in the afternoon, I noticed a burning smell concentrated near the PSL table. Koji traced the smell down to the c1lsc expansion chassis. We immediately powered the chassis off. But Steve later informed me that he had already noticed an odd burning smell in the morning, before I had done any work at the LSC rack. Looking at the newly installed adaptor card, there wasn't any visual evidence of burning. So I decided to push ahead and try to reboot all the models. Everything came back up normally eventually, see Attachment #1. The particle count in the lab seems a little higher than usual (actually, according to my midnight measurement, it is a factor of ~10 lower than Steve's 8am measurements), but Steve didn't seem to think we should read too much into this. Let's monitor the situation over the coming days; Steve should comment on the large variance seen in the particle counter output, which seems to span 2 orders of magnitude depending on the time of day the measurement is made... Also note that there is a BIO card in the C1LSC expansion chassis that is powered by a lab power supply unit. It draws 0 current, even though the label on it says otherwise. I am not sure if the observed current draw is in line with expectations.
The spare (unstuffed) adaptor cards we ordered, along with the necessary hardware to stuff them, are in the Digital FE hardware cabinet along the east arm.
Steve: particle count in the 40m follows the outside count, wind direction, weather conditions, etc. The lab particle count is NOT logged! This is bad practice. |
Attachment 1: CDS_20180326.png
|
|
13713
|
Wed Mar 28 16:44:27 2018 |
Steve | Update | General | AP table today |
MCRefl is absent; it is under investigation. I removed a bunch of hardware; note that all spare optics are along the edges.
|
Attachment 1: AP_Table_20180328.png
|
|
13717
|
Thu Mar 29 12:03:37 2018 |
Jon Richardson | Summary | General | Proof-of-Concept SRC Gouy Phase Measurement |
I've been developing an idea for making a direct measurement of the SRC Gouy phase at RF. It's a very different approach from what has been tried before. Prior to attempting this at the sites, I'm interested in making a proof-of-concept measurement demonstrating the technique on the 40m. The finesse of the 40m SRC will be slightly higher than at the sites due to its lower-transmission SRM. Thus if this technique does not work at the 40m, it almost certainly will not work at the sites.
The idea is, with the IFO locked in a signal-recycled Michelson configuration (PRM and both ETMs misaligned), to inject an auxiliary laser from the AS port and measure its reflection from the SRC using one of the pre-OMC pickoff RFPDs. At the sites, this auxiliary beam is provided by the newly-installed squeezer laser. Prior to injection, an AM sideband is imprinted on the auxiliary beam using an AOM and polarizer. The sinusoidal AOM drive signal is provided by a network analyzer, which sweeps in frequency across the MHz band and demodulates the PD signal in-phase to make an RF transfer function measurement. At the FSR, there will be an AM transmission resonance (reflection minimum). If HOMs are also present (created by either partially occluding or misaligning the injection beam), they too will generate transmission resonances, but at a frequency shift proportional to the Gouy phase. For the theoretical 19 deg one-way Gouy phase at the sites, this mode spacing is approximately 300 kHz. If the transmission resonances of two or more modes can be simultaneously measured, their frequency separation will provide a direct measurement of the SRC Gouy phase.

The above figure illustrates this measurement configuration. An attached PDF gives more detail and the expected response based on Finesse modeling of this IFO configuration. |
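A rough check of the quoted ~300 kHz mode spacing (the ~56 m SRC one-way length below is my assumption, not a number from this entry): each HOM resonance is offset from the carrier resonance by FSR times the round-trip Gouy phase over 2 pi, per mode order.
```python
import scipy.constants as const

L_src = 56.0            # [m] aLIGO SRC one-way length (assumed here)
gouy_oneway_deg = 19.0  # [deg] one-way Gouy phase quoted above

fsr = const.c / (2 * L_src)                   # SRC free spectral range [Hz]
df_hom = fsr * (2 * gouy_oneway_deg) / 360.0  # spacing per HOM order [Hz]
print(f"FSR ~ {fsr/1e6:.2f} MHz, HOM spacing ~ {df_hom/1e3:.0f} kHz")
# ~2.68 MHz FSR and ~280 kHz spacing, consistent with the ~300 kHz quoted above
```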
Attachment 1: src_gouy_phase_v3.pdf
|
|
13720
|
Fri Mar 30 03:23:50 2018 |
Koji | Update | General | aLIGO EOM work |
I have been working on the aux beat setup on the PSL table between 9PM-3AM.
This work involved:
- Turning off the main marconi
- Turning off the freq generation unit (incl IMC modulation)
- Closing the PSL shutter
After the work, these were reverted and the IMC and both arms have been locked. |
13725
|
Mon Apr 2 15:14:21 2018 |
Koji | Update | General | Modulation depth measurement for an aLIGO EOM |
The new matching circuit was tested.
Results:
f_nominal [MHz]   f_actual [MHz]   response [mrad/V]   required mod. [rad]   driving power needed [dBm]
9.1               9.1              55                  0.22                  22
118.3             118.2            16                  0.01                  6
45.5              45.4             45                  0.28                  25
24.1              N/A              2.1                 0.014                 27
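As a cross-check of the last column, the required drive follows from the measured response and the target modulation depth (a sketch, not the analysis script; it assumes a 50 Ohm load and that the response is quoted per volt peak):
```python
import numpy as np

def drive_power_dbm(response_mrad_per_V, m_required_rad, Z0=50.0):
    """Required RF drive [dBm] into Z0 for modulation depth m_required_rad,
    given the measured response [mrad per volt peak]."""
    v_pk = m_required_rad / (response_mrad_per_V * 1e-3)   # required peak voltage [V]
    p_watt = v_pk**2 / (2 * Z0)                            # average power into Z0 [W]
    return 10 * np.log10(p_watt / 1e-3)

for f, resp, m in [(9.1, 55, 0.22), (118.3, 16, 0.01), (45.5, 45, 0.28), (24.1, 2.1, 0.014)]:
    print(f"{f:6.1f} MHz: {drive_power_dbm(resp, m):5.1f} dBm")
# ~22.0, 5.9, 25.9 and 26.5 dBm, close to the tabulated values to within rounding
```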
Comments:
- 9.1MHz and 118.3MHz: They are just fine.
- 24.1MHz: Definitely better (>x3) than the previous trial to combine 118MHz & 24MHz.
We got about the same modulation as with the 50Ohm-terminated bare crystal (for port 1).
So, this is sort of the best we can do for the 24.1MHz with the current approach.
The driving power of 27dBm is required at 24.1MHz
- About the 45MHz port:
- The driving power of 25dBm is required at 45.5MHz (see the table above).
- The maximum driving power from the AM stabilized driver is nominally 23dBm.
- I wonder how we can reduce the resistance (and capacitance) of the 45MHz port further...?
- I also wonder if the IFO can be locked with reduced modulation (0.28 rad->0.2 rad)
- Can the driver max power be boosted a bit? (i.e. adding an attenuator in the RF power detection path)
|
Attachment 1: modulation_depth.pdf
|
|
Attachment 2: impedance_eom.pdf
|
|
13755
|
Mon Apr 16 22:09:53 2018 |
Kevin | Update | General | power outage - BLRM recovery |
I've been looking into recovering the seismic BLRMs for the BS Trillium seismometer. It looks like the problem is probably in the anti-aliasing board. There's some heavy stuff sitting on top of it in the rack, so I'll take a look at it later when someone can give me a hand getting it out.
In detail, after verifying that there are signals coming directly out of the seismometer, I tried to inject a signal into the AA board and see it appear in one of the seismometer channels.
- I looked specifically at C1:PEM-SEIS_BS_Z_IN1 (Ch9), C1:PEM-SEIS_BS_X_IN1 (Ch7), and C1:PEM-ACC_MC2_Y_IN1 (Ch27). All of these channels have between 2000--3000 cts.
- I tried injecting a 200 mVpp signal at 1.7862 Hz into each of these channels, but the output did not change.
- All channels have 0 cts when the power to the AA board is off.
- I then tried to inject the same signal into the AA board and see it at the output. The setup is shown in the first attachment. The second BNC coming out of the function generator is going to one of the AA board inputs; the 32 pin cable is coming directly from the output. All channels give 4.6 V when the board is powered on, regardless of whether any signal is being injected.
- To verify that the AA board is likely the culprit, I also injected the same signals directly into the ADC. The setup is shown in the second attachment. The 32 pin cable is going directly to the ADC. When injecting the same signals into the appropriate channels the above channels show between 200--300 cts, and 0 cts when no signal is injected.
|
Attachment 1: AA.jpg
|
|
Attachment 2: ADC.jpg
|
|
13756
|
Tue Apr 17 09:57:09 2018 |
Steve | Update | General | seismometer interfaces |
Quote: |
I've been looking into recovering the seismic BLRMs for the BS Trillium seismometer. It looks like the problem is probably in the anti-aliasing board. There's some heavy stuff sitting on top of it in the rack, so I'll take a look at it later when someone can give me a hand getting it out.
In detail, after verifying that there are signals coming directly out of the seismometer, I tried to inject a signal into the AA board and see it appear in one of the seismometer channels.
- I looked specifically at C1:PEM-SEIS_BS_Z_IN1 (Ch9), C1:PEM-SEIS_BS_X_IN1 (Ch7), and C1:PEM-ACC_MC2_Y_IN1 (Ch27). All of these channels have between 2000--3000 cts.
- I tried injecting a 200 mVpp signal at 1.7862 Hz into each of these channels, but the output did not change.
- All channels have 0 cts when the power to the AA board is off.
- I then tried to inject the same signal into the AA board and see it at the output. The setup is shown in the first attachment. The second BNC coming out of the function generator is going to one of the AA board inputs; the 32 pin cable is coming directly from the output. All channels give 4.6 V when the board is powered on, regardless of whether any signal is being injected.
- To verify that the AA board is likely the culprit, I also injected the same signals directly into the ADC. The setup is shown in the second attachment. The 32 pin cable is going directly to the ADC. When injecting the same signals into the appropriate channels the above channels show between 200--300 cts, and 0 cts when no signal is injected.
|
|
Attachment 1: BS_Tril_Intrf-1X5.jpg
|
|
Attachment 2: Gurs_Intf-1X1.jpg
|
|
13763
|
Wed Apr 18 20:33:19 2018 |
Kevin | Update | General | seismometer interfaces |
Steve, the pictures you posted are not the AA board I was referring to. The attached pictures show the board which is sitting beneath the GPS time server. |
Attachment 1: front.jpg
|
|
Attachment 2: back.jpg
|
|
Attachment 3: connectors.jpg
|
|
13764
|
Wed Apr 18 22:46:23 2018 |
johannes | Configuration | General | AS port laser injection |
Using Gautam's Finesse file and the CAD files for the 40m optical setup, I propagated the arm mode out of the AS port. For the location of the 3.04 mm waist I used the average distance to the ITMs, which is 11.321 m from the beam spot on the 2 inch mirror on the AS table close to the viewport. The 2 inch lens focuses the IFO mode to an 82.6 μm waist at a distance of 81 cm, which is what we have to match the aux laser fiber output to.
I profiled the fiber output and obtained a waist of 289.4 μm at a distance of 93.3 cm from the front edge of the base of the fiber mount. Next step is to figure out the lens placement and how to merge the beam paths. We could use a simple mirror if we don't need AS110 and AS55, we could use a polarizing BS and work with s polarization, or we find a Faraday Isolator.
While doing a beam scan with the razor blade method I noticed that the aux laser has significant intensity noise. This is seen on the New Focus 1611 that is used for the beat signal between PSL and aux laser, as well as on the fiber output PD. There is a strong oscillation around 210 kHz. The oscillation frequency decreases when the output power is turned down, the noise eater has no effect. Koji suggested it could be light scattering back into the laser because I couldn't find a usable Faraday Isolator back when I installed the aux laser in the PSL enclosure. I'll have to investigate this a little further, look at the spectrum, etc. This intensity noise will appear as amplitude noise of the beat note, which worries me a little.
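For playing with the lens placement, here is a minimal complex-q (ABCD) sketch, assuming 1064 nm; the focal length and spacing in the example are made-up numbers, not the actual mode-matching solution:
```python
import numpy as np

lam = 1064e-9  # [m] wavelength (assumed 1064 nm)

def q_at_waist(w0):
    """Complex beam parameter at a waist of radius w0 [m]."""
    return 1j * np.pi * w0**2 / lam

def propagate(q, d):
    """Free-space propagation by distance d [m]."""
    return q + d

def lens(q, f):
    """Thin lens of focal length f [m]."""
    return q / (1 - q / f)

def waist_of(q):
    """Return (waist radius [m], distance from this plane to the waist [m])."""
    w0 = np.sqrt(lam * q.imag / np.pi)
    return w0, -q.real

# Example (hypothetical f = 150 mm lens placed 20 cm from the fiber-output waist):
q = q_at_waist(289.4e-6)
q = propagate(q, 0.20)
q = lens(q, 0.150)
w0, d = waist_of(q)
print(f"new waist {w0*1e6:.1f} um, {d*100:.1f} cm downstream of the lens")
```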

|
Attachment 1: ASpath.svg.png
|
|
13766
|
Thu Apr 19 01:04:00 2018 |
gautam | Configuration | General | AS port laser injection |
For the arm cavity ringdowns, I guess we don't need AS55/AS110 (although I think the camera will still be useful for alignment). But for something like RC Gouy phase characterization, I'd imagine we need the AS detectors to lock various cavities. So I think we should go for a solution that doesn't disturb the AS PD beams.
It's hard to tell from the plot in the manual (pg 52) what exactly the relaxation oscillation frequency is, but I think it's closer to 600 kHz (is this characteristic of Nd:YAG NPROs?). Is the high RIN present on the light straight out of the NPRO?
Quote: |
We could use a simple mirror if we don't need AS110 and AS55, we could use a polarizing BS and work with s polarization, or we find a Faraday Isolator.
There is a strong oscillation around 210 kHz. The oscillation frequency decreases when the output power is turned down, the noise eater has no effect.
|
|
13772
|
Thu Apr 19 20:41:09 2018 |
Koji | Configuration | General | Aux Laser LD dying? (AS port laser injection) |
I suspect that the LD of the aux laser is dying.
- The max power we obtain from this laser (700mW NPRO) is 33mW. Yes, 33mW. (See attachment 1)
- The intensity noise is likely to be relaxation oscillation, and the frequency is so low because the pump power is low. When the ADJ is adjusted to 0, the peak moved even lower. (Attachment 2, compare purple and red)
- What is the NE (noise eater) doing? Almost nothing. I suspect the ISS gain is too low because of the low output power. (Attachment 2, compare green and red) |
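This is consistent with the usual relaxation-oscillation scaling f_RO = (1/2pi) sqrt((r-1)/(tau_c tau_f)), which drops as the pump ratio r falls toward threshold. A sketch with guessed parameters (tau_f ~ 230 us is the Nd:YAG upper-state lifetime; the NPRO cavity photon lifetime tau_c below is a made-up number):
```python
import numpy as np

tau_f = 230e-6   # [s] Nd:YAG upper-state lifetime
tau_c = 6e-9     # [s] NPRO cavity photon lifetime (guessed value)

def f_relax(r):
    """Relaxation oscillation frequency [Hz] for pump ratio r = P_pump / P_threshold."""
    return np.sqrt((r - 1) / (tau_c * tau_f)) / (2 * np.pi)

for r in [1.2, 2, 5]:
    print(f"r = {r}: f_RO ~ {f_relax(r)/1e3:.0f} kHz")
# ~60, 135 and 270 kHz for these guessed parameters; a weak pump pushes f_RO down
```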
Attachment 1: Aux_laser_adj_Pout.pdf
|
|
Attachment 2: Aux_laser_RIN.pdf
|
|
13775
|
Fri Apr 20 16:22:32 2018 |
gautam | Update | General | Nodus hard-rebooted |
Aidan called saying nodus was down at ~345pm. I was able to access it at ~330pm. I couldn't ssh in from my machine or the control room ones. So I went to 1X7 and plugged in a monitor to nodus. It was totally unresponsive. Since the machine wasn't responding to ping either, I decided to hard-reboot it. Machine seemed to come back up smoothly. I had trouble getting the elog started - it wasn't clear to me that the web ports were closed by default, so even though the startELOGD.sh script was running fine, the 8080 port wasn't open to the outside world. Anyways, once I figured this out, I was able to start the elog. DokuWiki also seems to be up and running now... |
13778
|
Sat Apr 21 20:19:05 2018 |
gautam | Update | General | Megatron hard-rebooted |
I found megatron in a similar state to that which nodus was in yesterday. Clued by the fact that MCautolocker wasn't executing the mc scripts (as was evident from looking at the wall StripTool trace), I tried ssh-ing into megatron, but was unable to (despite it being responsive to ping requests). So I went into the VEA and plugged in a monitor to megatron - saw nothing on it. With no soft reboot options available, I power cycled the machine via the front panel button. It came back up smoothly. I manually restarted the autolocker, FSSslow and EX thermal control processes (the former two with initctl, while the latter runs in a tmux session). Everything seems alright for now. Not sure how long megatron has been dead for. |
13781
|
Tue Apr 24 08:36:47 2018 |
johannes | Configuration | General | Aux Laser LD dying? (AS port laser injection) |
In September 2017 I measured ~150mW output power, which was already kind of low. What are the chances of getting this one repaired? Steve, can you please check the serial number? It's probably too old like the other ones.
Quote: |
I suspect that the LD of the aux laser is dying.
- The max power we obtain from this laser (700mW NPRO) is 33mW. Yes, 33mW. (See attachment 1)
|
|
13797
|
Fri Apr 27 16:55:31 2018 |
gautam | Update | General | EY area access blocked |
Steve was calibrating the load cells at the EY table with the crane - we didn't get through the full procedure today, so the area near the EY table is kind of obstructed. The 100kg donut is resting on the floor on the North side of the EY table, with stopper plates underneath it, and it is still connected to the crane. Steve has placed cones around the area too. The crane has been turned off. |
13799
|
Sun Apr 29 22:53:06 2018 |
gautam | Update | General | DARM actuation estimate |
Motivation:
We'd like to know how much actuation is required on the ETMs to lock the DARM degree of freedom. The "disturbance" we are trying to cancel is the seismic driven length fluctuation of the arm cavity. In order to try and estimate what the actuation required will be, we can use data from POX/POY locks. I'd collected some data on Friday which I looked at today. Here are the results.
Method:
- I collected the error and control signals for both arm cavities while they were locked to the PSL.
- Knowing the POX/POY sensing response and the actuator transfer functions, we can back out the free running displacements of the two arm cavities.
- I used numbers from the cal filters, which may not be accurate (although the POX sensing response was measured recently).
- But the spectra computed using this method seem reasonable, and the X and Y arm ASDs line up around 1 Hz (albeit on a log scale).
- In this context, the inferred L_X is really a proxy for the arm length fluctuation relative to the PSL frequency (and similarly for L_Y), so when taking the difference the common laser frequency noise drops out and I think the algebra works out correctly.
- I didn't include any of the violin mode/AA/AI filters in this calculation.
- Having calculated the arm cavity displacements, I computed "DARM" as L_Y - L_X and then plotted its ASD.
- For good measure, I also added the quadrature sum of 4 optics' displacement noise as per the 40m GWINC model - there seems to be a pretty large discrepancy, not sure why.
If this approach looks legit, I will compute the control signal that is required to stabilize this level of disturbance using the DARM control loop, and see what is the maximum permissible series resistance we can use in order to realize this stabilization. We can then compare various scenarios like different whitening schemes, with/without Barry puck etc, and look at coil driver noise levels for each of them. |
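For concreteness, a sketch of the reconstruction described under Method above (the quadrature sum is an approximation; strictly the error and control terms are correlated within the loop bandwidth and should be combined coherently):
```python
import numpy as np
from scipy import signal

def free_running_asd(err, ctrl, fs, f_cal, H_sens, A_act, nperseg=2**16):
    """Estimate the free-running arm displacement ASD [m/rtHz] from the in-loop
    error (err [cts]) and control (ctrl [cts]) time series, given the sensing
    response H_sens [cts/m] and actuator TF A_act [m/cts] (magnitudes, sampled
    on the frequency vector f_cal)."""
    f, P_err = signal.welch(err, fs=fs, nperseg=nperseg)
    _, P_ctrl = signal.welch(ctrl, fs=fs, nperseg=nperseg)
    H = np.interp(f, f_cal, np.abs(H_sens))
    A = np.interp(f, f_cal, np.abs(A_act))
    # error term dominates above the UGF, control term below it
    return f, np.sqrt(P_err / H**2 + P_ctrl * A**2)

# "DARM" is then estimated from the difference of the two reconstructed arm signals.
```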
Attachment 1: darmEst.pdf
|
|
13805
|
Tue May 1 19:37:50 2018 |
gautam | Update | General | DARM actuation estimate |
Here is an updated plot - the main difference is that I have added a trace that is the frequency-domain Wiener filter subtraction of the coherent power between the L_X and L_Y time series. I tried reproducing the calculation with the time-domain Wiener filter subtraction as well, using half of the time series (i.e. 5 mins) to train the Wiener filter (with L_X as target and L_Y as witness), but I don't get any subtraction above 5 Hz on the half of the data that is the test data set. Probably I am not doing the pre-filtering correctly - I downsampled the signal to 1 kHz, weighted it by low-passing the signal above 40 Hz, and trained the Wiener filter on the resulting time series. But this frequency-domain Wiener filter subtraction should be at least a lower bound on DARM - indeed, it is slightly lower everywhere than simply taking the time-domain subtraction of the two data streams.
To do:
- Re-measure calibration numbers used.
- Redo calculation once the numbers have been verified.
Putting a slightly cleaned up version of this plot in now - I'm only including the coherence-inferred DARM estimate now instead of the straight up time domain subtraction. So this is likely to be an underestimate. At low (<10 Hz) frequencies, the time domain computation lines up fairly well, but I suspect that I am getting huge amounts of spectral leakage (see Attachment #2) in the way I compute the spectrum using scipy's filtering routine (once the Wiener filter has been computed). Note that Attachment #2, I didn't break up the data into a training/testing set as in this case, we just care about the one-off offline performance in order to get an estimate of DARM.
The python version of the Wiener filter generating code only supports [b, a] output of the digital filter; an SOS filter might give better results. Need to figure out the least painful way of implementing the low-noise digital filtering in python; one possible route is sketched below. |
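A possible low-pain route via scipy's second-order-section routines: convert the existing [b, a] Wiener fit to SOS and do the 40 Hz pre-weighting in SOS form as well (a sketch under those assumptions, not tested on the actual data; the filter order of the pre-weighting is my choice):
```python
from scipy import signal

def apply_wiener_as_sos(b, a, witness, fs=1024.0):
    """Apply the fitted [b, a] Wiener filter as second-order sections for
    better numerical behaviour, after a 40 Hz low-pass pre-weighting that
    mirrors the training step described above."""
    sos_lp = signal.butter(4, 40.0 / (fs / 2.0), btype='low', output='sos')
    sos_wiener = signal.tf2sos(b, a)
    w = signal.sosfiltfilt(sos_lp, witness)   # zero-phase pre-filter
    return signal.sosfilt(sos_wiener, w)      # Wiener prediction of the target
```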
Attachment 1: darmEst.pdf
|
|
Attachment 2: darmEst_time.pdf
|
|
13810
|
Thu May 3 10:40:43 2018 |
johannes | Configuration | General | AS port laser injection |
Instead of trying to couple the fiber output into the interferometer, I'm doing the reverse and maximizing the amount of interferometer light going into the fiber. I set up the mode-matching solution shown in attachment #1 and started tweaking the lens positions. Attachment #2 shows the setup on the AS table. After the initial placement I kept moving the lenses in the green arrow directions and got more and more light into the fiber.
When I stopped this work yesterday I measured 86% of the AS port light coming out the other fiber end, and I have not yet reached a turning point with moving the lenses, so it's possible I can tickle out a little more than that.
It occurred to me though that I may have been a little hasty with the placement of the mirror that, in attachment #2, redirects the beam which would ordinarily go to AS55. For my arm ringdown measurements this doesn't matter; I could actually place it even before the 50/50 beamsplitter that sends light onto AS110 and double the amount of light going into the IFO. What signals are needed for the Gouy phase measurement? Is AS110 sufficient, or do we need AS55? |
Attachment 1: mm_solution_AStable.png
|
|
Attachment 2: AStable_beampath.pdf
|
|
13811
|
Thu May 3 12:10:12 2018 |
gautam | Configuration | General | AS port laser injection |
I think we need AS55 for locking the configuration Jon suggested - AS55 I and Q were used to lock the SRMI previously, and so I'd like to start from those settings but perhaps there is a way to do this with AS110 I and Q as well.
Quote: |
What signals are needed for the Gouy phase measurement? Is AS110 sufficient, or do we need AS55?
|
|
13822
|
Mon May 7 16:23:06 2018 |
gautam | Update | General | DARM actuation estimate |
Summary:
Using the Wiener filter estimate of the DARM disturbance we will have to cancel, I computed what the control signal would look like for a few scenarios. Our DACs are 16-bit, +/-10V (i.e. +/-32,768 cts-pk, or ~23,000 cts RMS). We need to consider the shape of the de-whitening filter to conclude whether it is feasible to increase the series resistance by x10 or not.
Some details:
Note that in this first computation, I have not considered
- Actuation range required by other loops (e.g. local damping, Oplev etc).
- At some point, I need to add the 2P/c radiation pressure disturbance as well.
- The control signal is calculated assuming we are actuating equally on both ETMs (but with opposite phase).
- RMS computation is done from 30 Hz downwards, as above 30 Hz, I think the estimate from the previous elog is not true seismic displacement.
- De-whitening filters (or digital whitening), which will be required to suppress DAC noise at 100Hz.
- DARM loop shape, specifically low-pass to avoid sensing noise injection. In this calculation, I just used the pendulum transfer function.
While doing this calculation, I have accounted for the fact that right now, the analog de-whitening filters in the ETM drive chain have a x3 gain which we will remove. Actually this is an assumption, I have not yet measured a transfer function, maybe I'll do one channel at EY to confirm. Also, the actuator gains themselves need to be confirmed.
As I was looking at the coil driver schematic more closely, I realized that there are actually two separate series resistances, one for the fast controls path, and another for the DC bias voltage from the slow ADCs. So I think we have been underestimating the Johnson noise of the coil drivers by sqrt(2). I've also attached screenshots of the IFOalign and MCalign screens. The two ITMs and ETMX have pitch DC bias values that are compatible with a x10 increase of the series resistance. But even so, we will have ~3pA/rtHz per coil from the two resistances.
gautam 8pm May8: Seems like I had confirmed the x3 gain in the EX de-whitening board when Johannes and I were investigating the AI board offset. |
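In numbers, the two-resistance point above (a sketch; it assumes both the fast path and the DC bias path end up with the same ~4.3 kohm, which is my assumption here):
```python
import numpy as np
from scipy import constants as const

def johnson_current(R, T=298.0):
    """Johnson current noise [A/rtHz] of a resistance R [ohm] at temperature T [K]."""
    return np.sqrt(4 * const.k * T / R)

i_fast = johnson_current(4.3e3)   # fast-path series resistor
i_bias = johnson_current(4.3e3)   # DC bias path resistor (assumed equal)
i_tot = np.sqrt(i_fast**2 + i_bias**2)
print(f"{i_tot*1e12:.1f} pA/rtHz per coil")   # ~2.8 pA/rtHz, i.e. the ~3 pA/rtHz quoted above
```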
Attachment 1: darmProj.pdf
|
|
Attachment 2: 37.png
|
|
Attachment 3: MCalign_20180507.png
|
|
13823
|
Mon May 7 20:01:14 2018 |
Rorpheus | Update | General | Use anti-dewhitening + show CARMA/DARMA |

example of plots illustrating DAC range / saturation |
13826
|
Tue May 8 11:41:16 2018 |
gautam | Update | General | IFO maintenance |
There was an earthquake, all watchdogs were tripped, ITMX was stuck, and c1psl was dead so MCautolocker was stuck.
Watchdogs were reset (except ETMX which remains shutdown until we finish with the stack weight measurement), ITMX was unstuck using the usual jiggling technique, and the c1psl crate was keyed. |
Attachment 1: ITMX_stuck.png
|
|
13827
|
Wed May 9 17:30:04 2018 |
gautam | Update | General | Input beam misaligned |
There is no beam going into the IFO at the moment. There was definitely a spot on the AS camera after I restored the suspensions yesterday, as you can see from the ASDC level in Attachment #1. But at around 2pm Pacific yesterday, the ASDC level has gone to 0. I suspect the TTs. There is no beam on the REFL camera either when PRM is aligned, and PRM's DC alignment is good as judged by Oplev.
Normally, I am able to recover the beam by scanning the TTs around with some low frequency sine waves, but not today. We don't have any readback (Oplev/OSEM) of the TT alignment, and the DC bias values haven't jumped abnormally around the time this happened, judging by the OUT16 monitor points (see Attachment #2). The IMC was also locked at the time when this abrupt drop in the ASDC level happened. Unfortunately, we don't have a camera on the Faraday, so I don't know where the misalignment is happening, but the beam is certainly not making it to the BS. All the SOS optics (e.g. BS, ITMX and ITMY) are well aligned as judged by Oplev.
Being debugged now... |
Attachment 1: InputBeamGone.png
|
|
Attachment 2: TTpointing.png
|
|
13828
|
Wed May 9 19:51:07 2018 |
gautam | Update | General | Input beam misaligned |
As suspected - the problem was with the TTs. I tested the TT signal chain by driving a low frequency sine wave using AWG and looking at the signal on an o'scope. But I saw nothing, neither at the AI board monitor point, nor at the actual coil driver mon point. I decided to look at the IOP testpoints for the DAC channels, to see if the signals were going through okay on the digital side. But the IOP channels were flatlined, as viewed on dataviewer (see Attachment #1). This despite the fact that the DAC output monitor screen in the model itself was showing some sensible numbers, again see Attachment #1.
Looking at the CDS overview screen, there were no red flags. But there was a red indicator sneakily hidden away in the IOP model's CDS status screen, the "DAC" field in the state word is red. As Attachment #2 shows, a change in the state word is correlated with the time ASDC went to 0.
Note that there are also no errors on the c1lsc frontend itself, judging by dmesg. I want to do a model restart, but (i) this will likely require reboots of all vertex FEs and (ii) I want to know if any CDS experts want to sniff for clues to what's going on before a model restart wipes out some secret logfiles. I'm a little confused that the rtcds isn't throwing up any errors and causing models to crash if the values are not being written to the registers of the DAC. It may also be that the DAC card itself is dead. To re-iterate, all the EPICS readbacks were suggesting that I am injecting a signal right up to the IOP.
Quoting from the runtime diagnostics note:
NOTE: As V2.7, if this error is detected, the IOP will output zero values to all DAC modules, as a protective measure. Only method to clear this is to restart the IOP and all applications on that computer
|
Attachment 1: DACweirdness.png
|
|
Attachment 2: DACerror.png
|
|
13829
|
Thu May 10 08:45:16 2018 |
Steve | Update | General | 4.5M eq. Cabazon, CA |
20180508 4:49am Cabazon earthquake, 4.5M, at 79 miles away. ETMX is in the load cell measurement condition.
Quote: |
There was an earthquake, all watchdogs were tripped, ITMX was stuck, and c1psl was dead so MCautolocker was stuck.
Watchdogs were reset (except ETMX which remains shutdown until we finish with the stack weight measurement), ITMX was unstuck using the usual jiggling technique, and the c1psl crate was keyed.
|
|
Attachment 1: Cabazon4.5m79m.png
|
|
Attachment 2: 4.5Meq.png
|
|
13830
|
Thu May 10 11:38:19 2018 |
gautam | Update | General | ITMY UL |
Looking at Steve's plot, I was reminded of the ITMY UL OSEM issue. The numbers don't make sense to me though - 300um of DC shift in UL with negligible shifts in the other coils should have made a much bigger DC shift in the Oplev spot position. |
Attachment 1: ITMY_UL.pdf
|
|
13831
|
Thu May 10 14:13:22 2018 |
gautam | Update | General | More refinement of DARM control signal projection |
Summary:
- It seems that after a x10 increase in the coil driver resistance, we will have enough actuation range to control (anti de-whitened) DARM without saturating the DAC.
- The Barry puck doesn't seem to help us much in reducing the required RMS for DARM control. If this calculation is to be believed, it actually makes the RMS actuation a little bit higher.
See Attachment #1 for the projected control signal ASDs. The main assumption in the above is that all other control loops can be low-passed sufficiently such that even with anti-dewhitening, we won't run into saturation issues.
DARM control loop:
- I'm now calculating the DARM control signal in counts after factoring into account a digital DARM control loop.
- The loop shape is what we used when the DRFPMI was locked in Oct 2015.
- I scaled the overall OLTF gain to have a UGF around 200Hz.
- The breakdown of how the DARM loop is constructed is shown in Attachment #2.
De-whitening and Anti-De-whitening:
- The existing DW shape in the ITM and ETM signal chains has ~80dB attenuation around 100 Hz.
- Assuming ~5uV/rtHz noise from the DAC, 60dB of low-passing gets us to 5nV/rtHz. With 4.3kohm series resistance, this amounts to ~1pA/rtHz current noise (compared to ~3pA/rtHz from the Johnson noise of the series resistance). Actually, I measured the DAC noise to be more like ~700nV/rtHz at 100 Hz, so the current noise contribution is only 0.16pA/rtHz.
- This amounts to getting rid of the passive filter at the end of the chain in the de-whitening board.
- Attachment #3 shows the existing and proposed filter shapes.
It remains to add the control signals for Oplev, local damping, and ASC to make sure we have sufficient headroom, but given that current projections are predicting using up only ~1000cts of the ~23000cts (RMS) available from the DAC, I think it is likely we won't run into saturations. Need to also figure out what the implication of the reduced actuation range will be on handling the locking transient. |
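The DAC noise arithmetic above in one place (a sketch restating the numbers quoted in the de-whitening bullet: 60 dB of low-passing at 100 Hz and the proposed 4.3 kohm series resistance):
```python
import numpy as np

R_series = 4.3e3          # [ohm] proposed series resistance
atten_db = 60.0           # [dB] de-whitening attenuation at 100 Hz

for dac_noise in [5e-6, 0.7e-6]:          # assumed spec vs. measured DAC noise [V/rtHz]
    v_coil = dac_noise * 10 ** (-atten_db / 20.0)
    print(f"{dac_noise*1e6:.1f} uV/rtHz -> {v_coil/R_series*1e12:.2f} pA/rtHz")
# 5 uV/rtHz -> ~1.16 pA/rtHz, 0.7 uV/rtHz -> ~0.16 pA/rtHz, as quoted above
```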
Attachment 1: darmProj.pdf
|
|
Attachment 2: darmOLTF.pdf
|
|
Attachment 3: DWcomparison.pdf
|
|
13833
|
Fri May 11 13:58:42 2018 |
rana | Update | General | More refinement of DARM control signal projection |
I think "OLG" trace is not labeled right; it would be good to see the actual OLG in addition to whatever that trace actually is.
Based on the first plot, however, my conclusion is that:
- we don't need the passive isolator to reduce the control signal; the control signal is dominated by f < 10 Hz.
- we should still look into isolators for the reduction of the f > 50 Hz stuff, just to make the overall DARM sensitivity better. But this does not have to be pneumatic since we no longer need 10 Hz isolation. It can instead be a solid piece of rubber to give us a ~20-30 Hz resonance. That would still give us a factor of 5-10 improvement above 100 Hz.
- In this case, we only need a mass estimate of the end chamber contents with an accuracy of ~25%. If we think we have that already, we don't need to keep doing the jacks-strain gauge adventure.
|
13835
|
Fri May 11 19:02:52 2018 |
gautam | Update | General | More refinement of DARM control signal projection |
I was a bit hasty in posting the earlier plots. In the earlier plot, the "OLG" trace was OLG * anti dewhitening as Rana pointed out.
Here are the updated ones, and a cartoon (Attachment #5) of the loop topology I assumed. I've excluded things like violin filters, AA/AI etc. The overall gain scaling I mentioned in the previous elog amounts to changing the optical sensing response in this cartoon. I now also show the DARM suppression (Attachment #4) for this OLG and the DARM linewidths for RSE. I don't think the conclusions change.
Note that for Signal Recycling, which is what Kevin tells us we need to do, there is a DARM pole at ~150 Hz. I assume we will cancel this in the digital controller and so can achieve a similar OLG shape. This would modify the control signal spectrum a little around 150Hz. But for a UGF on the loop of ~150 Hz, we should still be able to roll-off the control signal at high frequencies and so the RMS shouldn't be dramatically affected.
Steve is looking into acquiring 4.5kohm Vishay Wirewound resistors with 1% tolerance. Plan is to install two in parallel (so that we get 2kohm effective resistance) and then snip off one once we are convinced we won't have any actuation range issues. Do these look okay? They're ~$1.50ea on mouser assuming we get 100. Do we need the non-inductive winding?
Quote: |
I think "OLG" trace is not labeled right; it would be good to see the actual OLG in addition to whatever that trace actually is.
|
|
Attachment 1: darmProj.pdf
|
|
Attachment 2: darmOLTF.pdf
|
|
Attachment 3: DWcomparison.pdf
|
|
Attachment 4: DARMsuppression.pdf
|
|
Attachment 5: ControlLoop.pdf
|
|
13836
|
Sat May 12 10:02:03 2018 |
rana | Update | General | More refinement of DARM control signal projection |
Good question! I've never calculated what the resonance frequency would be if had an inductive resistor with our cable capacitance (~50 pF/m I guess). |
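A two-line version of that estimate (the inductance below is a pure guess for a small wirewound part, and the cable length is also made up):
```python
import numpy as np

L_wirewound = 10e-6        # [H] guessed inductance of a small wirewound resistor
C_cable = 50e-12 * 3.0     # [F] ~50 pF/m for an assumed 3 m of cable

f_res = 1 / (2 * np.pi * np.sqrt(L_wirewound * C_cable))
print(f"{f_res/1e6:.1f} MHz")   # ~4 MHz for these guessed values
```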
13837
|
Sun May 13 15:15:18 2018 |
gautam | Update | General | CDS crash |
I found the c1lsc machine to be completely unresponsive today. Looking at the trend of the state word, it happened sometime yesterday (Saturday). The usual reboot procedure did not work - I am not able to bring back any of the models on any of the machines, during the restart procedure, they all fail. The logfile reads (for the c1ioo front end, but they all behave the same):
[ 309.783460] c1x03: Initializing space for daqLib buffers
[ 309.887357] CPU 2 is now offline
[ 309.887422] c1x03: Sync source = 4
[ 309.887425] c1x03: Waiting for EPICS BURT Restore = 2
[ 309.946320] c1x03: Waiting for EPICS BURT 0
[ 309.946320] c1x03: BURT Restore Complete
[ 309.946320] c1x03: Corrupted Epics data: module=0 filter=1 filterType=0 filtSections=134610112
[ 309.946320] c1x03: Filter module init failed, exiting
[ 363.229086] c1x03: Setting stop_working_threads to 1
[ 364.232148] DXH Adapter 0 : BROADCAST - dx_user_mcast_unbind - mcgroupid=0x3
[ 364.233689] Will bring back CPU 2
[ 365.236674] Booting Node 1 Processor 2 APIC 0x2
[ 365.236771] smpboot cpu 2: start_ip = 9a000
[ 309.946320] Calibrating delay loop (skipped) already calibrated this CPU
[ 365.251060] NMI watchdog enabled, takes one hw-pmu counter.
[ 365.252135] Brought the CPU back up
[ 365.252138] c1x03: Just before returning from cleanup_module for c1x03
Not sure what is going on here, or what "Corrupted Epics data" is supposed to mean. Thinking that something was messed up the last time the model was compiled, I tried recompiling the IOP model. But I'm not able to even compile the model; it fails giving the error message
make[1]: Leaving directory '/opt/rtcds/caltech/c1/rtbuild/3.4'
make[1]: /cvs/cds/rtapps/epics-3.14.12.2_long/modules/seq/bin/linux-x86_64/snc: Command not found
make[1]: *** [build/c1x03epics/c1x03.c] Error 127
Makefile:28: recipe for target 'c1x03' failed
make: *** [c1x03] Error 1
I suspect this is some kind of path problem - the EPICS_BASE bash variable is set to /cvs/cds/rtapps/epics-3.14.12.2_long/base on the FEs, while /cvs isn't even mounted on the FEs (nor do I think it should be). I think the correct path should be /opt/rtapps/epics-3.14.12.2_long/base. Why should this have changed?
I've shutdown all watchdogs until this is resolved. |
Attachment 1: vertexFEs_crashed.png
|
|
13838
|
Sun May 13 17:31:51 2018 |
gautam | Update | General | CDS crash |
As suspected, this was indeed a path problem. Johannes will elog about it later, but in short, it is related to some path variables being changed in order to try and streamline the EPICS processes on the new c1auxex machine (Acromag Era). It is confusing that futzing around with the slow computing system messes with the realtime system as well - aren't these supposed to be decoupled? Once the paths were restored by Johannes, everything compiled and restarted fine. We even have a beam on the AS camera, which was what triggered this whole thing.
Anyways, Attachment #1 shows the current status. I am puzzled by the red TIMING indicators on the c1x04 and c1x02 processes; they are absent from all other processes. How can this be debugged further?
Quote: |
I suspect this is some kind of path problem - the EPICS_BASE bash variable is set to /cvs/cds/rtapps/epics-3.14.12.2_long/base on the FEs, while /cvs isn't even mounted on the FEs (nor do I think it should be). I think the correct path should be /opt/rtapps/epics-3.14.12.2_long/base. Why should this have changed?
|
|
Attachment 1: CDS_overview_20180513.png
|
|
Attachment 2: AS_1210293643.jpeg
|
|
13839
|
Sun May 13 20:48:38 2018 |
johannes | Update | General | CDS crash |
I think the root of the problem is that the /opt/rtapps/ and /cvs/cds/rtapps/ mounting locations point to the same directory on the nfs server. Gautam and I were cleaning up the /cvs/cds/caltech/target/ directory, placing the previous contents of /cvs/cds/caltech/target/c1auxex/, including database files and startup instructions, in /cvs/cds/caltech/target/c1auxex_oldVME/, and then moved /cvs/cds/caltech/target/c1auxex2/, which has the channel database and initialization files for the Acromag DAQ, to /cvs/cds/caltech/target/c1auxex/.
This also required updating the systemd entries on c1auxex to point to the changed directory. While confirming that everything worked as before, we noticed that upon startup the EPICS IOC complains about not being able to find the caRepeater binary. This was not new and has not limited DAQ functionality in the past, but we wanted to fix this, as it seemed to be some simple PATH issue. While the paths are all correctly defined in the user login shell, systemd runs on a lower level and doesn't know about them. One thing we tried was to let systemd execute /cvs/cds/rtapps/epics-3.14.12.2_long/etc/epics-user-env.sh initializing EPICS. It was strange that the content of that file was pointing to /opt/rtapps/epics-3.14.12.2_long/base, which is not mounted on the slow machines, so we changed the /opt/ to /cvs/cds/, not realizing that the frontends read from the same directory (as Gautam said, /cvs/cds does not exist as a mount point on the frontend). It ended up not working this way, and apparently I forgot to change it back during clean up. But worse, I never elogged it!
In the end, we managed to give systemd the correct path definitions by explicitly calling them out in /cvs/cds/caltech/target/c1auxex/ETMXenv, to which a reference was added in the systemd service file. The caRepeater warning no longer appears. A sketch of the general arrangement is below for future reference.
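The pattern is roughly the following. This is only a sketch: the variable values and the PATH entries shown are illustrative placeholders, not the exact contents of ETMXenv or of the service file on c1auxex.

# Environment file (e.g. /cvs/cds/caltech/target/c1auxex/ETMXenv).
# systemd's EnvironmentFile expects plain KEY=VALUE lines (no "export").
EPICS_BASE=/cvs/cds/rtapps/epics-3.14.12.2_long/base
PATH=/cvs/cds/rtapps/epics-3.14.12.2_long/base/bin/linux-x86_64:/usr/bin:/bin

# And in the [Service] section of the systemd unit file, a reference like:
# EnvironmentFile=/cvs/cds/caltech/target/c1auxex/ETMXenv
|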
13846
|
Tue May 15 21:56:57 2018 |
gautam | Update | General | Stack measurement setup decommissioned |
[steve,koji,gautam]
Since we think we already know the stack mass to ~25% (i.e. 5000 +/- 1000 lbs), we decided to restore the ETMX stack. The procedure followed was:
- Took photos of all dial indicators and the spirit level. We were at ~ -22 mils on all 3 indicators, with 0 being the level before we touched the stack two Fridays ago, i.e. May 4.
- Raised all four jacks installed underneath the blue crossbeams in 5 mil increments until we were at +25 mils on all of them. At this point, there was negligible load on the load cells on top of the STACIS legs, and we could easily slide the load cells out.
- Rotated all jack screws clockwise (i.e. moving the jack screws downwards) by 270 degrees. The southeast jack screw was rotated by an additional 360 degrees. This was to undo all the jack-screw raising we did on Friday, May 4.
- Re-installed the jacks which were originally present on the STACIS legs, taking care to center each jack on its STACIS leg as best as we could by eye, per Dennis Coyne's suggestion not to impose shear strain on the STACIS legs. These jacks were supposedly never carrying any load and, according to Steve, are there more for safety purposes.
- Lowered all four jacks in 5 mil steps until the dial indicators read ~0. The northwest jack resting on the STACIS leg was somehow ~0.5 cm (!!) below the blue crossbeam even though the corresponding dial gauge read 0, so we raised that jack until it was barely grazing the bottom of the blue crossbeam (confirmed by looking at the point where the dial indicator started going up again). We are not sure why this should have been; the best hypothesis we have is that someone (one of us) changed the level of this jack while it was removed from the setup.
- Checked that jack screws could not be turned by hand. At this point, all the load has to be resting on the jack screws, as the jacks we had installed to raise the blue crossbeams could be slid out from underneath the blue beams and hence were carrying no load.
- Took photographs of all dial indicators and the spirit level. We were satisfied that we had recovered the "nominal" stack alignment as best as we could judge with the available indicators.
- The ETMX Oplev spot had returned to the PD. The ETMX watchdog was re-engaged, and the optic was re-aligned using the SLOW bias sliders to center the Oplev spot.
- EX NPRO was turned back on, and the green beam was readily locked to a cavity TEM00 mode.
I will upload the photos to the PICASA page and post the link here later.
Quote: |
In this case, we only need a mass estimate of the end chamber contents with an accuracy of ~25%. If we think we have that already, we don't need to keep doing the jacks-strain gauge adventure.
|
|
13847
|
Tue May 15 22:11:38 2018 |
gautam | Update | General | IFO maintenance |
Since there has been various software/hardware activity going on (stack weighing, AUX laser PLL, computing timing errors, etc.), I decided to do a check on the state of the IFO.
- c1susaux, c1aux and c1iscaux crates were keyed as they were un-telnet-able.
- Single arm locking worked fine; TT alignment was tweaked (as the TTs had drifted due to the ADC failure in c1lsc) to maximize the Y arm transmission using the dither servos.
- Arms weren't staying locked for extended periods of time. I particularly suspected ITMX, as I saw what I judged to be excess motion on the Oplev.
- @Steve - the ITMX and BS HeNes look like they are in need of replacement judging by the RIN (although the trend data doesn't show any precipitous drop in power). If we are replacing the BS/PRM Oplev HeNe, it might be a good time to plan the injection path a bit better on that table.
- The RIN in Attachment #1 has been normalized by the mean value of the OL sum channel. There is now a script in the scripts directory to make this kind of plot from NDS data (I found it confusing to apply different calibrations to individual traces in DTT). A rough sketch of the normalization is below.
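In essence, the normalization amounts to the following. This is only a minimal sketch, not the actual script: the NDS server name, channel name, and GPS times are placeholders.

import nds2                      # NDS2 client python bindings
import numpy as np
import scipy.signal as sig

# Placeholder server/port and channel name - check the actual script for the real ones.
conn = nds2.connection('nds40.ligo.caltech.edu', 31200)
chan = 'C1:SUS-ITMX_OL_SUM_OUT_DQ'
gps_start, gps_stop = 1210000000, 1210000600   # arbitrary 10 minute stretch

buf = conn.fetch(gps_start, gps_stop, [chan])[0]
x = buf.data
fs = buf.channel.sample_rate

# RIN = fluctuations of the OL sum normalized by its mean value,
# so the resulting spectrum is in 1/rtHz regardless of the PD calibration.
f, Pxx = sig.welch(x / np.mean(x), fs=fs, nperseg=int(16 * fs))
rin_asd = np.sqrt(Pxx)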
|
Attachment 1: OL_RIN_2018_05_15.pdf
|
|
Attachment 2: OLsums.png
|
|
13851
|
Thu May 17 09:14:38 2018 |
Steve | Update | General | Stack measurement setup decommissioned |
The final set-up of the stack measurement with 3 load cells and 4 leveling wedge mounts is shown in Atm 1.
Sensor voltages BEFORE and AFTER this attempt. |
Attachment 1: Load_Cell_Measurement_Set_Up.jpg
|
|
Attachment 2: ETMX_stack_up_down.png
|
|
13852
|
Thu May 17 11:56:37 2018 |
gautam | Update | General | EPICS process died on c1ioo |
The EPICS process on the c1ioo front end had died mysteriously. As a result, the MC autolocker wasn't working, since the autolocker control variables are EPICS channels defined in the c1ioo model. I restarted the model, and now the MC autolocker works. |
13865
|
Fri May 18 18:14:18 2018 |
Udit Khandelwal | Summary | General | Summary 05/18/2018 |
Tip-Tilt Suspension Design:
Designed a new ECD plate and changed the dimensions of the side arms after discussing with Koji. After getting feedback on the changes, I will finish the assembly and send it to him to get it approved for manufacturing.
|
13866
|
Fri May 18 19:10:48 2018 |
keerthana | Update | General | Code for adjusting the oscillator frequency remotly |
Target: Phase locking can be achieved by scanning the oscillator frequency. This frequency is currently controlled using the knob on the AM/FM signal generator (2023B), but we need to control it remotely by giving as inputs the start frequency, the end frequency, and the step size.
The signal generator and the computer are connected via a GPIB-to-Ethernet converter. The IP address of the converter I used is '192.168.113.109' and its GPIB address is 10.
I could change the oscillator frequency by changing the input frequency with the help of the code I made (in order to check this code, I changed the oscillator frequency multiple times - I hope it didn't cause trouble for anyone). Now I am trying to make this code better by adding features like numpy, argparse, etc., which I should be able to complete by next week. I am also considering developing the code to have a slider to control the oscillator frequency.
For the record: the maximum frequency I changed it to was 100 MHz. A rough sketch of the kind of scan script I have in mind is below.
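Something along these lines. This is only a sketch under assumptions: it assumes a Prologix-style GPIB-Ethernet converter listening on TCP port 1234, and that the 2023B accepts a 'CFRQ:VALUE <Hz>HZ' carrier-frequency command - both of these, and the dwell time, should be checked against the actual converter and the instrument manual before use.

import argparse
import socket
import time

import numpy as np

def set_frequency(sock, freq_hz):
    # Carrier-frequency command for the 2023B; the syntax is an assumption, check the manual.
    sock.sendall(('CFRQ:VALUE %.1fHZ\n' % freq_hz).encode())

def main():
    parser = argparse.ArgumentParser(description='Scan the 2023B frequency over GPIB-Ethernet.')
    parser.add_argument('--start', type=float, required=True, help='start frequency [Hz]')
    parser.add_argument('--stop', type=float, required=True, help='end frequency [Hz]')
    parser.add_argument('--steps', type=int, default=100, help='number of frequency steps')
    parser.add_argument('--dwell', type=float, default=0.5, help='dwell time per step [s]')
    args = parser.parse_args()

    # Converter IP and GPIB address are from this elog; port 1234 assumes a Prologix-style converter.
    sock = socket.create_connection(('192.168.113.109', 1234), timeout=5)
    sock.sendall(b'++mode 1\n')   # converter acts as GPIB controller (Prologix syntax, assumed)
    sock.sendall(b'++addr 10\n')  # talk to the instrument at GPIB address 10

    for f in np.linspace(args.start, args.stop, args.steps):
        set_frequency(sock, f)
        time.sleep(args.dwell)

    sock.close()

if __name__ == '__main__':
    main()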
|
Attachment 1: frequency_set.jpg
|
|