ID | Date | Author | Type | Category | Subject
14885
|
Mon Sep 16 20:22:19 2019 |
gautam | Summary | CDS | Update on the Acromag status |
- Jordan (new Engineer) and Chub neatened up the cabling at 1Y2/1Y3 today. After their work, I plugged in all the Dsubs to the rear Eurocrate DB37->DIN96 adaptors. Jordan also reinforced the cable labels with some extra sellotape so they are more durable.
- As part of the war on cross-connects, Chub removed some cables that were piping BIO signals from the fast CDS system to the whitening boards.
- There is a SCSI to DB37 custom ribbon cable going from the BIO card in the expansion chassis to a 1U chassis box at the very bottom of 1Y2.
- This 1U box, with DCC number D080478 (but no schematic exists on the DCC or any of the usual secret hidey-holes), breaks out the 32 BIO channels to 16+16.
- Each set of 16 channels was supposed to get broken out to 8+8 via some cross connects and then go to the whitening boards. This is the part that got disturbed.
- Koji and I discussed options - if Chub cannot restore this easily, we will make a D37 --> 4*D15 breakout board, and pipe the signals via the backplane P2 connectors. This will mean ~10 more days before the LSC system can be tested.
- Some cabling to the TT DACs and an ADC were also disturbed, but these are easily restored.
- From the hardware standpoint, some cross-struts for strain relief on the back of 1Y2 need to be installed --> Chub.
|
Attachment 1: acromagChecklist.pdf
|
|
14884
|
Mon Sep 16 19:29:24 2019 |
Koji | Update | Cameras | MC2 trans camera (?) rotated | The left one is analog and 90deg rotated.
See also: This issue tracker |
14883
|
Mon Sep 16 17:53:16 2019 |
aaron | Update | Cameras | MC2 trans camera (?) rotated | We noticed last week that the MC2 trans camera has pitch and yaw swapped; I rotated what I thought was the correct camera by 90 degrees clockwise (as viewed from above, like in the attachment), but I now have doubts. It's the camera on the right in the attachment. |
Attachment 1: 47D6ED9C-BF21-4D6E-9947-284FE4A336F4.jpeg
|
|
14882
|
Mon Sep 16 12:38:59 2019 |
aaron | Update | IOO | WFS measurements | I wanted to make a zero model of this circuit to get a handle on the results. I couldn't import zero on pianosa, and I tried pip installing zero, but the install failed because pip could not find version 3.0.3 of matplotlib. I finally got it to install using
pip3 install zero --user
Oddly, even though I can now import zero when I open a python3 session from the command line, when I open a jupyter notebook and switch to a python3 kernel, the zero module is still unavailable. I think I recall that conda manages the jupyter environment -- is pip managing an entirely separate environment (annoying)?
edit: Yeah, it was something like that. I reminded myself how this works with this article. |
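A minimal sketch of the usual workaround, assuming the mismatch is just that the notebook kernel runs a different python3 than the shell (only the package name is taken from above):
import subprocess
import sys

# sys.executable is the interpreter backing the current kernel (e.g. the conda one),
# so the install lands in the environment the notebook actually imports from.
subprocess.check_call([sys.executable, "-m", "pip", "install", "--user", "zero"])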
14881
|
Mon Sep 16 12:00:16 2019 |
aaron | HowTo | General | Moved some immovable optics | When I put away the lenses we had used for measuring the RF transfer functions of the QPD heads, I saw that I'd removed them from the cabinet containing green endtable optics, but hadn't noticed the sign forbidding their removal. I'll talk with Koji/Gautam about what happened and what should be done. |
14880
|
Mon Sep 16 11:55:58 2019 |
rika | Update | IOO | WFS loop measurements | [rika, aaron]
We realigned the WFS optics as they were. Now the autolocker is working to lock the MC.
But it still doesn't lock. We noticed that the c1lsc machine doesn't work, so we ran rebootC1LSC.sh.
Now we reset the hardware!
17:11
After the reset, auto locking didn't work well. Gautam and Aaron rebooted the slow c1ioo machine. Then it worked, and Gautam returned the MC to a good alignment.
We found the beam was not in the center of the QPD, so we (turned off the MC autolocker and MC loop, then) realigned to get the beam into the QPD center. Afterwards we started auto locking.
With the WFS on, the maximum MC transmission we observe is 14,700 counts; after the transmission level stabilizes (MC_TRANS pit and yaw brought to 0), the MC transmission is only 14,200 counts. Perhaps the MC_TRANS QPD offsets need adjustment. We relieve the WFS servo of its DC offsets. This is the configuration we'll use for WFS loop measurements this week. |
14879
|
Mon Sep 16 09:11:37 2019 |
gautam | Summary | CDS | DIN 96pin to DSUB37 adapter (single) ready for use | I installed 6 of these in 1Y2. Three were for PD INTF #1-3, and I used three more for the AS110, REFL11, and REFL33 Demod board FEs, where the strain-relief of the DC power cables to the Eurocrate was becoming a problem. So now there are only 4 units available as spares.
Once the strain-relieving of the Dsub cabling to 1Y3 is done, we can move ahead with testing. I'd like to put this to bed this week if possible. |
14878
|
Mon Sep 16 05:08:04 2019 |
rana | Update | IOO | WFS loop measurements | No need to use DTT. I'm attaching some half-finished notebooks that give the gist.
- Download the data with NDS2
- Downsample the data for ease of use.
- Save the data as hdf5 for easy loading later.
- Demodulate the data at the specified frequencies.
That's it! Now you have the complex, single frequency TFs. Next you invert the matrix. |
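To make the recipe concrete, here is a rough python sketch of those steps (a guess at the flavor of the attached notebooks, not a copy of them - the NDS server, channel names, GPS times, and drive frequency below are all placeholders):
import h5py
import nds2                     # NDS2 client python bindings
import numpy as np
import scipy.signal as sig

server, port = 'nds40.ligo.caltech.edu', 31200        # placeholder NDS server
chans = ['C1:IOO-WFS1_PIT_OUT_DQ',                    # hypothetical channel names
         'C1:IOO-WFS2_PIT_OUT_DQ',
         'C1:IOO-MC2_TRANS_PIT_OUT_DQ']
gps_start, gps_stop = 1250000000, 1250000060
f_drive = 20.0                                        # excitation frequency [Hz]
fs_new = 256.0                                        # downsampled rate [Hz]

conn = nds2.connection(server, port)
bufs = conn.fetch(gps_start, gps_stop, chans)

column = []
with h5py.File('wfs_sensing_data.h5', 'w') as f:
    for buf in bufs:
        # downsample for ease of use, then save for easy loading later
        q = int(buf.channel.sample_rate // fs_new)
        x = sig.decimate(buf.data, q, zero_phase=True)
        f.create_dataset(buf.channel.name, data=x)
        # demodulate at the drive frequency -> complex single-frequency response
        t = np.arange(len(x)) / fs_new
        column.append(2 * np.mean(x * np.exp(-2j * np.pi * f_drive * t)))

# one such column per excited optic; stack into a 3x3 matrix and invert it, e.g.
# output_matrix = np.linalg.inv(np.column_stack([col_MC1, col_MC2, col_MC3]))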
Attachment 1: LSCsensingMatrix.ipynb
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Get some ASC data - Calculate Sensing Matrix \n",
"### also make the radar plots"
]
},
... 327 more lines ...
|
Attachment 2: ASCsensingMatrix.ipynb
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Get some ASC data - Calculate Sensing Matrix \n",
"### also make the radar plots"
]
},
... 325 more lines ...
|
14877
|
Fri Sep 13 13:03:35 2019 |
Koji | Summary | CDS | DIN 96pin to DSUB37 adapter (single) ready for use | The PCB board of the adapter for DIN 96pin to DSUB37 conversion (single DSUB version) was delivered yesterday and I quickly soldered the connectors.
They are ready for use and stored in a JLCPCB cardboard box on a pile of acromag stuff. (Note that the label is written on the box with Sharpie) |
Attachment 1: P_20190912_192109.jpg
|
|
14876
|
Fri Sep 13 10:53:40 2019 |
aaron | Update | IOO | WFS loop measurements | I'm scripting the WFS sensing matrix measurements. I haven't really scripted DTT before, so I'm trying to find documentation or existing scripts. I came across this elog where Gautam measured a sensing matrix during DRMI lock, and he pointed me to some .xml files used for these measurements.
|
14875
|
Fri Sep 13 10:36:03 2019 |
aaron | Update | IOO | WFS measurements | [rika, aaron]
We are at it again. Rika is setting up the TF measurement, I'm looking into scripting the WFS sensing matrix measurement we made earlier in the week so we can return to it next week.
Measurement | file | parameters
WFS2_SEG1 / RFPD |  | 100 MHz - 500 MHz
WFS2_SEG1 / RFPD |  | 10 MHz - 100 MHz
WFS2_SEG1 / RFPD |  | 100 kHz - 10 MHz
WFS2_SEG2 / RFPD | TFAG4395A_13-09-2019_181415.txt | 100 MHz - 500 MHz
WFS2_SEG2 / RFPD | TFAG4395A_13-09-2019_180955.txt | 10 MHz - 100 MHz
WFS2_SEG2 / RFPD | TFAG4395A_13-09-2019_182918.txt | 100 kHz - 10 MHz
WFS2_SEG3 / RFPD | TFAG4395A_13-09-2019_121533.txt | 100 MHz - 500 MHz
WFS2_SEG3 / RFPD | TFAG4395A_13-09-2019_123820.txt | 10 MHz - 100 MHz
WFS2_SEG3 / RFPD | TFAG4395A_13-09-2019_123243.txt | 100 kHz - 10 MHz
WFS2_SEG4 / RFPD | TFAG4395A_13-09-2019_161834.txt | 100 MHz - 500 MHz
WFS2_SEG4 / RFPD | TFAG4395A_13-09-2019_170007.txt | 10 MHz - 100 MHz
WFS2_SEG4 / RFPD | TFAG4395A_13-09-2019_172001.txt | 100 kHz - 10 MHz
While measuring the TF of SEG4, about 1% of the beam was leaking onto SEG1.
We finished the measurements of SEG2-4 and got the figure by running PDH_calibrate.ipynb.
edit: We observed during segment 2 measurements that blocking the beam reduced the DC level of segment 1 by less than 1%, but still clearly observable. As you can see in the plots, something is suspicious about the normalization of these TFs. We took segment 1 data a few days before the other segments, so perhaps we weren't getting the full beam on the reference PD during the later measurements? When I make this measurement for WFS1, I will try to fix some of these problems by choosing different telescoping optics, and I will consider whether removing the QPD heads from their table will improve the measurement. |
Attachment 1: TF-.png
|
|
Attachment 2: WFS2_TFs.pdf
|
|
14874
|
Thu Sep 12 12:42:31 2019 |
aaron | Update | IOO | WFS measurements | [rika, aaron]
At Seiji and Gautam's suggestion, we added an additional RF photodiode (NewFocus 1611) to the system so we can calibrate our transfer functions. The configuration is now laser -> BS --> lenses -> QPD and BS --> lenses -> RFPD. We added lenses to get the beams focused on the RFPD and QPD heads, and are again set up for TF measurement.
We took the following data. These parameters were consistent across all measurements:
- 1kHz IF BW
- log sweep with 801 points
- 32 averages
- auto attenuation
- -10 dBm excitation amplitude
- 19.2 mA DC current to the laser
- The DC level of the reference PD is -, and with the beam blocked (dark current) it is
Measurement | file | parameters
WFS2_SEG1 / RFPD | TFAG4395A_12-09-2019_155901.txt | 100 MHz - 500 MHz
WFS2_SEG1 / RFPD | TFAG4395A_12-09-2019_160811.txt | 10 MHz - 100 MHz
WFS2_SEG1 / RFPD | TFAG4395A_12-09-2019_170234.txt | 100 kHz - 10 MHz
WFS2_SEG2 / RFPD | AG4395A_12-09-2019_183125.txt | 100 MHz - 500 MHz
WFS2_SEG2 / RFPD | TFAG4395A_12-09-2019_183614.txt | 10 MHz - 100 MHz
WFS2_SEG2 / RFPD | TFAG4395A_12-09-2019_183930.txt | 100 kHz - 10 MHz
WFS2_SEG3 / RFPD | TFAG4395A_12-09-2019_225243.txt | 100 MHz - 500 MHz
WFS2_SEG3 / RFPD | TFAG4395A_12-09-2019_225601.txt | 10 MHz - 100 MHz
WFS2_SEG3 / RFPD | TFAG4395A_12-09-2019_225922.txt | 100 kHz - 10 MHz
WFS2_SEG4 / RFPD | TFAG4395A_12-09-2019_230758.txt | 100 MHz - 500 MHz
WFS2_SEG4 / RFPD | TFAG4395A_12-09-2019_232058.txt | 10 MHz - 100 MHz
WFS2_SEG4 / RFPD | TFAG4395A_12-09-2019_234447.txt | 100 kHz - 10 MHz
After taking the data for segment 1, I moved the beam to segment 2. The beam didn't fit on segment 2 without partially illuminating segment 1 (tested by maximizing the signal on segment 2, then blocking the beam; if the beam is entirely on one segment, only that segment should be affected, but in this case we found that segment 1's DC signal also changed when the beam was blocked). We readjusted the telescoping lenses to get the beam a bit smaller, and now the beam fits on segment 2. We know it is entirely on segment 2 because small beam movements do not change the signal on segment 2.
We are trying to take the remaining data, but AGmeasure keeps hanging while sending the data (after taking the measurement, over 10 min). We tried restarting the network analyzer to no avail. I was able to grab the data by cancelling the measurement and running
AGmeasure --getdata -i vanna
I've uploaded the spectrum for segment 1 in the meantime. Zero model is on the way.
When I finished up the measurements on WFS2, I removed the cables from the AP table and closed the cover.
EDIT: I forgot to switch the LEMO connector to measure the other segments, so we measured the RF signal from segment 1 even when the beam was on segments 2-4. We'll have to try again tomorrow. |
Attachment 1: WFS2_TFs.pdf
|
|
Attachment 2: D755499D-9FDF-4E2B-BFC1-016B459DD35D.jpeg
|
|
14873
|
Thu Sep 12 09:49:07 2019 |
gautam | Update | Computers | control rm wkstns shutdown | Chub wanted to get the correct part number for the replacement UPS batteries which necessitated opening up the UPS. To be cautious, all the workstations were shutdown at ~9:30am while the unit is pulled out and inspected. While looking at the UPS, we found that the insulation on the main power cord is damaged at both ends. Chub will post photos.
However, despite these precautions, rossa reports some error on boot up (not the same xdisp junk that happened before). pianosa and donatella came back up just fine. rossa is still remotely accessible (ssh-able), though, so maybe we can recover it...
Quote: |
please no one touch the UPS: last time it destroyed ROSSA. Please ask Chub to order the replacement batteries so we can do this in a controlled way (fully shutting down ALL workstations first). Last time we wasted 8 hours on ROSSA rebuilding
|
|
Attachment 1: IMG_7943.JPG
|
|
14872
|
Wed Sep 11 14:37:43 2019 |
aaron | Update | IOO | WFS measurements | [aaron, rika]
We identified the Jenne laser and found a long optical fiber that might be able to transport our beam to the AP table.
Now we're searching for documentation on using this laser. Kevin and John measured a TF last year. Koji advised that we needn't worry too much: the current limit is already set correctly and we need only power on the laser.
We moved the breadboard (including a couple PDs, collimating lenses, laser, steering mirrors, etc) over to the AP table, and set it on top of the panel next to the WFS. We mounted the laser on the AP table, and added one lens with f~68 mm after the laser to fit the beam on a single quadrant; the beam was about 1mm diameter (measured by eye) when it entered the QPD. We turned the laser driver on at ~19.4 mA, and directed it to WFS2 via the last two steering mirrors before WFS2.
We monitored the QPD segments' DC level with ndscope on a laptop, and were able to send the beam to each of the four quadrants in turn. We set up the Agilent network analyzer to drive the laser's amplitude modulation and sent the RF signal from the LEMO output on the QPD head directly to the network analyzer. We will take the measurements tomorrow morning. |
Attachment 1: 20190911_WFS.jpg
|
|
Attachment 2: 20190911_WFS_2.jpg
|
|
14871
|
Wed Sep 11 10:26:56 2019 |
aaron | Update | IOO | WFS measurements | Gameplan
We should also have a plan for the next couple weeks so we are organized; heavily adapted from. Here's what I'm thinking this morning:
- Construct the input/output matrix for the WFS. (basically, what we did yesterday)
- Measure a transfer function of MC[1, 2, 3]_[PIT, YAW] to [WFS1, WFS2, MC2_TRANS]_[PIT, YAW]. The transfer function above the loop bandwidth (few seconds BW, so we will excite >~ 10 Hz) characterizes the response of the sensor to the excitation.
- Invert the resulting 3x3 matrix and populate the inverted matrix at WFS_OUTMATRIX. This will map the WFS basis to the MC optics' pit/yaw basis.
- Script this process. If we make changes (for example, moving the telescoping lenses) to make this matrix more diagonal, we'll want to do these steps many times.
- Characterizing the loop
- Optimize the demodulation phase -- we want to minimize the signal in Q. This should also be automated. I found documentation in the white Wave Front Sensing binder
- Misalign a mirror in pitch or yaw, and rotate the phase to minimize the magnitude of Q (maximize I); this angle is 'R' on the WFSx_SETTINGS screen.
- We should measure a step response applied to each angular dof of the MC optics.
- Gouy Phase Calibration
- Characterizing / Calibrating the WFS heads
- The DCC has LIGO test procedures for their WFS RFPD, as does the white binder; the following checks are relevant for our WFS, and this is how I think we should carry them out (not identical to the procedure as written in the document). For many of these, we'll want to set up the JenneAM laser with a network analyzer for RF modulation.
- DC path transimpedance
- Measure the DC power of JenneAM with a power meter, and direct the beam to each of the QPD quadrants. Make sure the beam fits on a single quadrant.
- This will give us the product of the PD efficiency and DC transimpedance gain
- Last time this was measured (white WFS binder)
- notch tuning -- we are going to measure the TF, but I won't tune it without someone as ancient as the electronics
- Using the network analyzer, measure a transfer function from the laser AM to the QPD head's RF output
- Is there a pickoff available? The LIGO testing procedures recommend a FET probe
- We should do this while measuring the DC transimpedance for each quadrant
- notch rejection ratios
- While taking the RF transfer function, use the delta marker to record the difference between the notch and the RF operating frequency.
- RF transimpedance
- Illuminate the PD with white light from an incandescent bulb (a shot-noise limited source)
- 6-10 mA of photocurrent should be generated
- Use an RF spectrum analyzer and low noise RF pre-amplifier (gain ~20dB) to measure the shot noise limited spectrum
- A piece of scotch tape can be used to make the light uniformly illuminate the QPD
- Convert this RF PSD to an rms amplitude (voltage) spectral density, and also note the DC photocurrent. This can be used to calculate the RF transimpedance as Z_RF = v_meas / sqrt(2 * e * I_DC), i.e. the measured shot-noise voltage ASD divided by the shot-noise current ASD set by the DC photocurrent (see the sketch after this list).
- Shot noise limited input sensitivity
- Measure the RF PSD with the beam blocked and light off; this is the dark photocurrent, and can be used to calculate the shot noise limited sensitivity.
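A back-of-envelope sketch of those last two conversions (my reading of the procedure, not the formulas verbatim from T1200347; every number below is made up, and the RF pre-amp gain is assumed to already be divided out of the measured spectra):
import numpy as np

e = 1.602e-19                 # electron charge [C]
I_dc = 8e-3                   # DC photocurrent with white light [A] (made up)
v_shot = 1.0e-7               # measured bright ASD at the RF frequency [V/rtHz] (made up)
v_dark = 3.0e-8               # dark-noise ASD with the beam blocked [V/rtHz] (made up)

i_shot = np.sqrt(2 * e * I_dc)          # shot-noise current ASD [A/rtHz]
Z_rf = v_shot / i_shot                  # RF transimpedance [V/A]

# shot-noise-limited input sensitivity: the dark noise referred to an
# equivalent photocurrent at the PD input
i_equiv = v_dark / Z_rf                 # [A/rtHz]
print(f"Z_RF ~ {Z_rf:.0f} ohm, dark-noise-equivalent current {i_equiv:.2e} A/rtHz")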
References:
- Binders of documents about the 40m WFS
- LIGO ISC WFS RFPD test procedure (T1200347 is dual frequency, T1200380 is single frequency)
- The associated datasheet template is in T1200381
- Wavefront Sensor (T960111). This document even has a calibration protocol with forms to fill in during testing, so I've printed an extra copy of that appendix.
Automation
It would be good to script some of what we did yesterday. I'm checking out some scripts I'd used for Qryo and armloss measurements to remember the best way to do this.
- Existing WFS scripts (I didn't try these)
- WFS_DC_offsets -- sets the WFS QPD dark offsets (a rough sketch of what I understand this to do is below, after this list)
- block beam, then run script
- MC2_TRANS_offsets -- sets the MC2 transmission offset (why isn't this in the same script as WFS_DC_offsets?)
- MC should be aligned, beams centered on WFS, WFS servo off
- mcWFSallowOn(Off) -- turns on (off) the ASC filter module outputs
- mcwfshold -- turns off the input to WFS servos, but holds the current values of MC optic biases
- mcwfsoff -- turns off the mc wfs loop
- First, turns off the WFS outputs (eg WFS1_PIT OUTPUT)
- Turns off the MC WFS input gains
- Holds the WFS loop outputs
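For reference, a sketch of roughly what I understand the WFS_DC_offsets step to do, written with pyepics; the channel names are guesses (check the real script before trusting them), and the beam should be blocked before running it:
import time
import numpy as np
from epics import caget, caput

def set_dark_offsets(n_avg=30, dt=0.1):
    """Average each quadrant's dark DC level and write the negated mean as its offset."""
    for wfs in (1, 2):
        for seg in (1, 2, 3, 4):
            readback = f'C1:IOO-WFS{wfs}_SEG{seg}_DC_INMON'    # hypothetical channel name
            offset   = f'C1:IOO-WFS{wfs}_SEG{seg}_DC_OFFSET'   # hypothetical channel name
            vals = []
            for _ in range(n_avg):
                vals.append(caget(readback))
                time.sleep(dt)
            caput(offset, -np.mean(vals))   # cancel the dark level

if __name__ == '__main__':
    set_dark_offsets()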
Miscellany
I noticed yesterday that the PSL_shutterqst box is white, and I've seen timeout requests when e.g. the reboot script tries to open/close the PSL shutter. It seems like a shutter that should open, so I should find the aux machine and restart it. |
14870
|
Tue Sep 10 17:26:49 2019 |
Koji | Update | CDS | D1900068 SR785 accessory box | I picked up a unit of D1900068 SR785 accessory box from Dean's office at Downs. |
Attachment 1: P_20190910_171859_1.jpg
|
|
14869
|
Tue Sep 10 16:10:40 2019 |
Chub | Update | | Rack Update | Still removing old cable, terminal blocks and hardware. Once new strain reliefs and cable guides are in place, I will need to disconnect cables and reroute them. Please let me know dates and times when that is not going to interrupt your work! |
Attachment 1: 20190910_154018.jpg
|
|
Attachment 2: 20190910_154006.jpg
|
|
14868
|
Tue Sep 10 15:41:37 2019 |
aaron | Update | IOO | WFS measurements | [rika, aaron, rana]
We are getting the MC locked in anticipation of making some WFS transfer function measurements.
The PSL screen was all white boxes, so I keyed the PSL crate and burt restored the settings from 11:19am Sep 5 (somewhat earlier than we started rebooting computers). Following this, I ran Milind's unstick.py and then the PSL autolocker script; both worked on the first go, great work Milind!
The modecleaner autolocking script is having substantially more trouble. Rana found that pitch and yaw sliders for all MC optics have been swapped--we think it's because the camera at MC2 has been rotated. Note that for now, sliding pitch gives a change in yaw, and sliding yaw changes pitch.
Improving MC alignment
We noticed that with the WFS servo on, the modecleaner would be well aligned for a while (MC trans ~ 14000), only to lose lock after several minutes. We held the MC2_TRANS_PIT/YAW outputs at 0, so the MC2 QPD does not affect the WFS loop; the beam is well centered on WFS1/2, but not on the MC2 QPD, and with this signal out of the loop MC TRANS recovers to ~15000 counts (consistent with the quiet times over the last 90 days, see attachment 2). Attachment 1 shows the MC lock degrading, followed by some noise where we lost lock, and finally a visible increase in MC trans when we remove the MC2 QPD from the WFS loop.
Mode cleaner alignment setting
MC1 Pitch 4.4762 Yaw 4.4669
MC2 Pitch 3.7652 Yaw -1.5482
MC3 Pitch -0.4159 Yaw 1.1477
After automatically locking the MC, we stopped the autolocker and brought the alignment to the center of the QPD.
Then we locked the MC automatically again. Finally Rana moved to the best alignment.
Mode cleaner alignment setting
MC1 Pitch 4.4942 Yaw 4.6956
MC2 Pitch 3.7652 Yaw -1.5600
MC3 Pitch -0.3789 Yaw 1.1477
Measured sine response
We used diaggui to measure the response of WFS1/WFS2/MC2 pitch (yaw) to excitations in MC1/MC2/MC3 pitch (yaw). Seeing fluctuations of amplitude ~1 on the MCX_PIT/YAW_OUT channels, we used an amplitude 0.01 excitation at 20 Hz. We will work on scripting some of this tomorrow.
|
Attachment 1: Screenshot_from_2019-09-10_18-51-28.png
|
|
Attachment 2: mctrend_190910.png
|
|
14867
|
Mon Sep 9 11:36:48 2019 |
aaron | HowTo | CDS | How to save c1ioo | One pair of DIMM cards from the Sunstone box had the same Sun part number as those in c1ioo, so I swapped them in and reinstalled c1ioo's CPU0. c1ioo now boots up and seems ready to go; I'm able to log on from nodus. I also reinstalled optimus' CPU0, and optimus boots up with no problems.
- old C1OMC RT
- Megatron
- I also found that megatron will require a CPU filler board if we remove one of its DIMMs (it cannot operate with empty CPU module slots)
- optimus
- Rana says I can also consider using two of optimus' DIMM cards. Optimus appears to not be running any scripts currently, and I don't find any recent elog entries or wiki pages mentioning optimus with critical use.
- I shutdown optimus (from the command line Mon Sep 9 13:17:58 2019).
While opening up optimus, I noticed a box labelled 'SUNSTONE' sitting below the rack--it contains two CPU modules of a similar type to those in c1ioo! I'm going to try swapping in the DIMM cards from this SUNSTONE box; I didn't find any elogs about sunstone--where are these modules from?
I reset c1lsc and c1sus, then ran rebootC1LSC.sh as before. All models started by the script are running with minimal red lights; c1oaf, c1cal, c1dnn, c1daf, and c1omc are not started by the script. I manually started these in the order c1cal->c1oaf->c1daf->c1dnn. Starting c1dnn crashed the other FE on c1ioo, so I reset all three FE again, and ran the script again (this time, including the startup for c1cal, c1oaf, and c1daf, but excluding c1dnn).
Except for c1dnn and c1omc, all models are started. The status lights are attached. |
Attachment 1: reboot.png
|
|
14866
|
Fri Sep 6 22:03:30 2019 |
aaron | HowTo | CDS | How to save c1ioo | Saw these slightly delayed.
Q1: Not sure--is it a safe operation for me to remove the DIMM on CPU0, replace CPU0 (with no DIMM), and boot up to try this?
Q2: Specifically, it's this DIMM. The CPU core is compatible with DDR2, clock rate up to 333 MHz (DDR2-667) and 1, 2, or 4 GB of memory.
Q3: Hmm checking on that.
I see a message on megatron that it's currently running the MC autolocker and the FSS slow servo, with nothing else listed. It's currently using 30-70% of its available memory on all 8 cores, so it seems to have some to spare. I still need to locate the old c1omc RT machine myself, but I'm becoming inefficient so I'm signing off.
Quote: |
Q1 Can we run the machine with the reduced # of cores?
Q2 We might be able to order them quickly. What's the spec and configuration of the DIMMs (like DDR2-667MHz ECC 4GBx4, and even more specs (like Samsung 2GB DDR2 RAM PC2-6400 240-Pin DIMM M378T5663EH3) so that we are to identify the exact spec).
Q3 Can we scavenge the old OMC RT machine or even megatron to extract the memories?
|
|
14865
|
Fri Sep 6 21:22:06 2019 |
Koji | HowTo | CDS | How to save c1ioo | Q1 Can we run the machine with the reduced # of cores?
Q2 We might be able to order them quickly. What's the spec and configuration of the DIMMs (like DDR2-667MHz ECC 4GBx4, and even more specs (like Samsung 2GB DDR2 RAM PC2-6400 240-Pin DIMM M378T5663EH3) so that we are to identify the exact spec).
Q3 Can we scavenge the old OMC RT machine or even megatron to extract the memories? |
14864
|
Fri Sep 6 18:08:29 2019 |
rana | Update | Computers | Alarm noise from smart-ups machine under workstation? | please no one touch the UPS: last time it destroyed ROSSA. Please ask Chub to order the replacement batteries so we can do this in a controlled way (fully shutting down ALL workstations first). Last time we wasted 8 hours on ROSSA rebuilding.
Quote: |
There was an alarm sound from the Smart-UPS 2200 sitting under the workstation. I see that the 'replace battery' light is red, and this elog tells me that these batteries are replaced every ~1-4 years; the last replacement was March 2016. Holding down the 'test' button for 2-3 seconds results in the alarm sound and does not clear the replace battery indicator.
|
|
14863
|
Fri Sep 6 16:38:24 2019 |
aaron | Update | ALARM | Alarm noise from smart-ups machine under workstation? | There was an alarm sound from the Smart-UPS 2200 sitting under the workstation. I see that the 'replace battery' light is red, and this elog tells me that these batteries are replaced every ~1-4 years; the last replacement was March 2016. Holding down the 'test' button for 2-3 seconds results in the alarm sound and does not clear the replace battery indicator. |
14862
|
Fri Sep 6 15:12:49 2019 |
Koji | HowTo | CDS | WFS discussion, restarting CDS | Assuming you are at pianosa, /etc/resolv.conf is like
# Generated by NetworkManager
nameserver 192.168.113.104
nameserver 8.8.8.8
But this should be like
nameserver 192.168.113.104
nameserver 131.215.125.1
nameserver 8.8.8.8
search martian
as indicated in https://nodus.ligo.caltech.edu:8081/40m/14767
I made this change for now, but it might get overridden by Network Manager. |
14861
|
Fri Sep 6 11:56:44 2019 |
aaron | HowTo | CDS | WFS discussion, restarting CDS | Rebooting
I reset c1lsc, c1sus, and c1ioo.
I noticed that the script gives the command 'ssh c1XXX', but we have been getting no route to host using this command. Instead, the machines are currently only reachable as c1XXX.martian. I'm not sure why this is, so I just appended .martian in rebootC1LSC.sh
This time, the script does run. I did get 'no route to host' on c1ioo, so I think I need to reset that machine again. After the reset, the script failed to log in to c1ioo and c1lsc.
Fri Sep 6 13:09:05 2019
After lunch, I reset the computers again, and try the script again. There is again no route to host for c1ioo. I'm going inside to shut off the power to c1ioo, since the reset button seems to not be working. I still can't log in from nodus, so I'm bringing a keyboard and monitor over to plug in directly.
On reset, c1ioo repeatedly reaches the screen in attachment 1, before going black. Holding down shift or ctrl+alt+f1 doesn't get me a command prompt. After waiting/searching the elog for >>3 min, we decided to follow these instructions to cycle the power of c1ioo. The same problem recurred following power up. I found online some instructions that the SunSystems 4600 can hang during reboot if it has become too hot ("reboot during a thermal shutdown"); I did notice that the temperature light was on earlier in this procedure, so perhaps that is the problem. I followed the wiki instructions to shut down the computer again (pressed power button, unplugged 4 power supplies from back of machine), and left it unplugged for 10-30 min (Fri Sep 6 14:46:18 2019 ).
Fri Sep 6 15:03:31 2019
Rana plugged in the power supplies and reset the machine again.
Fri Sep 6 16:30:37 2019
c1ioo is still unreachable! I pressed reset once, and the reset button flashes white. The yellow warning light is still on.
Fri Sep 6 16:54:21 2019
The reset light has stopped flashing, but I still can't access c1ioo. I reset once more, this time watching c1ioo on a monitor directly. I'm still seeing the same boot screen repeatedly. I do see that CPU0 is not clocking, which seems weird.
Troubleshooting CPU module
Following gautam's elog here, I found the Sun Fire X4600 manual for locating faulty CPUs. After the white reset light stopped flashing, I held down the power button to turn off the system. Before shutdown, all of the CPUs displayed amber lights; after shutdown, only the leftmost CPU (as viewed from the back, presumably CPU0) displays an amber light. The manual says this is evidence that the CPU or DIMM is faulty. Following the manual, I removed the standby power, then checked out these Instructions for replacing the CPU to remove the CPU; Gautam also has done this before.
Fri Sep 6 20:09:01 2019 Fri Sep 6 20:09:02 2019
I pulled the leftmost CPU module out, following the instructions above. The CPU module matches the physical layout and part number of the Sun Fire X4600 M2 8-DIMM CPU module; pressing the fault reminder light gives amber indicators at the DIMM ejectors, indicating faulty DIMMs (see). The other indicator LEDs did not illuminate.
I located several spare DIMMs in the digital cabinet along Y arm (and a couple with misc computer components in the control room), but didn't find the correct one for this CPU module. The DIMM is Sun PN 371-1764-01; I found it online and ordered eight. Please let me know if this is incorrect.
To protect the CPU module, I've put it in an ESD safe bag with some bubble wrap and a note. It's on the E shop bench.
Conclusion: Need new DIMM, didn't find the correct part but ordered it. |
Attachment 1: B26CECF8-FC0D-4348-80DC-574B1E3A4514.jpeg
|
|
14860
|
Fri Sep 6 09:40:56 2019 |
aaron | HowTo | CDS | WFS discussion, restarting CDS | As suggested, I ran the script cds/rebootC1LSC.sh
I got a timeout error when the script tried closing the PSL shutter ('C1:AUX-PSL_ShutterRqst' not found), but Rana and I closed the shutter before leaving last night. c1sus is down, so the script found no route to host c1sus; I'm thinking I need to reset c1sus for the script to run completely. Nonetheless, c1lsc was rebooted, which crashed c1ioo and left the c1lsc FE all red (probably because c1sus wasn't restarted).
|
14859
|
Thu Sep 5 20:30:43 2019 |
rana | HowTo | CDS | WFS discussion, restarting CDS | via Polish chat, GV tells us to RTFE |
14858
|
Thu Sep 5 18:42:19 2019 |
aaron | HowTo | CDS | WFS discussion, restarting CDS | [aaron, rana]
While going to take some transfer functions of the MC WFS loop, LSC was down. When we tried to restart the FE using 'rtcds restart --all', c1lsc crashed and froze. We manually reset c1lsc, then laboriously determined the correct order of machines to reboot. Here's what works best:
on c1lsc:
rtcds start c1x04 c1lsc c1ass c1oaf c1cal c1daf
Starting c1dnn crashes the other FE
on c1ioo
rtcds restart --all
on c1sus
rtcds restart c1rfm c1sus c1mcs
restarting c1pem crashes the other FE on c1sus
We're seeing a lot of red IPC indicators--perhaps it's an issue with the order we're restarting? |
14857
|
Sun Aug 25 14:18:08 2019 |
gautam | Update | CDS | c1iscaux remaining work | There were a bunch of useless / degenerate channels added - e.g. whitening gains which are already burt-snapshot. Maybe there are many more useless channels being trended, but no need to add more.
Copy-pasting wasn't done correctly - the first 4 added channels were duplicates. There are in fact 5 LO power mons, one for each of the frequencies 11, 33, 55, 110 and 165 MHz.
I cleaned up. Basically only the detect-mon channels, and the ALS channels, are new in the setup now. I will review if any extra channels are required later. While checking that the daqd is happy, I noticed c1lsc FEs are in their stuck state, see Attachment #1. I guess a cable was bumped when the strain relief operation was underway. I'm not attempting a remote resuscitation.
Quote: |
I added the list of new c1iscaux channels to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini and restarted the framebuilder. Koji had thought some of these channels might have previously existed under slightly different names. However, after looking through C0EDCU.ini and the other _SLOW.ini files, I did not find any candidates for removal. As far as I can tell, all of these channels are being recorded for the first time.
|
|
Attachment 1: Screen_Shot_2019-08-25_at_10.38.37_PM.png
|
|
14856
|
Fri Aug 23 19:10:02 2019 |
Jon | Update | Cameras | GigE camera server is online | Following the death of rossa, which was hosting the only working environment for the GigE camera software, I've set up a new dedicated rackmount camera server: c1cam (details here). The Python server script is now configured as a persistent systemd service, which automatically starts on boot and respawns after a crash. The server depends on a set of EPICS channels being available to control the camera settings, so c1cam is also running a softIOC service hosting these channels. At the moment only the ETMX camera is set up, but we can now easily add more cameras.
Usage
Instructions for connecting to a live video feed are posted here. Any machine on the martian network can stream the feed(s). The only requirement is that the client machine have GStreamer 0.10 installed (all the control room workstations satisfy this).
Code Locations
As much as possible, the code and dependencies are hosted on the /cvs/cds network drive instead of installed locally. The client/server code and the Pylon5, PyPylon, and PyEpics dependencies are all installed at /cvs/cds/rtcds/caltech/c1/scripts/GigE . The configuration files for the soft IOC are located at /cvs/cds/caltech/target/c1cam .
Upgrade Goals
The 40m GigE camera code is a slightly-updated version of the 10+ year-old camera code in use at the sites. Consequently every one of its dependencies is now deprecated. Ultimately, we'd like to upgrade to the following:
- Python 2.7 --> 3.7
- Basler Pylon 5.0.12 --> 5.2.0
- PyPylon 1.1.1 --> 1.4.0
- GStreamer 0.10 --> 1.2
This is a long-term project, however, as many of these APIs are very different between Python 2 and 3. |
14855
|
Fri Aug 23 18:46:17 2019 |
Jon | Update | CDS | c1iscaux remaining work | I added the list of new c1iscaux channels to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini and restarted the framebuilder. Koji had thought some of these channels might have previously existed under slightly different names. However, after looking through C0EDCU.ini and the other _SLOW.ini files, I did not find any candidates for removal. As far as I can tell, all of these channels are being recorded for the first time.
Quote:Koji |
- Modify C0EDCU.ini to trend the new slow channels we may want long-term monitoring of (e.g. LO power levels to the Demod boards). Anyone want to volunteer to take care of this?
|
|
14854
|
Fri Aug 23 10:01:14 2019 |
gautam | Update | BHD | OMC cavity geometry - some more modeling | Summary:
I did some more investigation of what the appropriate cavity geometry would be for the OMC. Unsurprisingly, depending on the incident mode content, the preferred operating point changes. So how do we choose what the "correct" model is? Is it accurate to model the output beam HOM content from NPROs (is this purely determined by the geometry of the lasing cavity?), which we can then propagate through the PMC, IMC, and CARM cavities? This modeling will be written up in the design document shortly.
*Colorbar label errata - instead of 1 W on BS, it should read 1 W on PRM. The heatmaps take a while to generate, so I'll fix that in a bit.
Update 230pm PDT: I realize there are some problems with these plots. The critically coupled f2 sideband getting transmitted through the T=10% SRM should have significantly more power than the transmission through a T=100ppm optic. For similar modulation depth (which we have), I think it is indeed true that there will be x1000 more f2 power than f1 power for both the IFO AS beam and the LO pickoff through the PRC. But if the LO is picked off elsewhere, we have to do the numbers again.
Details:
Attachment #1: Two candidate models. The first follows the power law assumption of G1201111, while in the second, I preserved the same scaling, but for the f1 sideband, I set the DC level by assuming a PRG of 45, modulation depth of 0.18, and 100 ppm pickoff from the PRC such that we get 50 mW of carrier light (to act as a local oscillator) for 10 W incident on the back of PRM (a quick arithmetic check of these numbers is sketched at the end of this entry). Is this a reasonable assumption?
Attachment #2: Heatmaps of the OMC transmission, assuming (i) 0 contrast defect light in the carrier TEM00 mode, (ii) PRG=45 and (iii) 1 W incident on the back of PRM. The color bar limits are preserved for both plots, so the "dark" areas of the plot, which indicate candidate operating points, are darker in the left-hand plot. Obviously, when there is more f1 power incident on the OMC, more of it is transmitted. But my point is that the "best operating point(s)" in both plots are different.
Why is this model refinement necessary? In the aLIGO OMC design, an assumption was made that the light level of the f1 sideband is 1/1000th that of the f2 sideband in the interferometer AS beam. This is justified as the RC lengths are chosen such that the f2 sideband is critically coupled to the AS port, but the f1 is not (it is not quite anti-resonant either). For the BHD application, this assumption is no longer true, as long as the LO beam is picked off after the RF sidebands are applied. There will be significant f1 content as well, and so the mode content of the f1 field is critical in determining the OMC filtering performance. |
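For reference, the quoted LO level follows from a one-line estimate using only the numbers stated above (a sanity check, not part of the heatmap code):
P_prm = 10.0        # W incident on the back of PRM
PRG = 45.0          # assumed power recycling gain
T_pickoff = 100e-6  # assumed pickoff transmission from the PRC

P_lo = P_prm * PRG * T_pickoff
print(f"carrier LO power ~ {1e3 * P_lo:.0f} mW")   # ~45 mW, consistent with the ~50 mW quoted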
Attachment 1: modeContentComparison.pdf
|
|
Attachment 2: OMCtransComparison.pdf
|
|
14853
|
Thu Aug 22 20:56:51 2019 |
Koji | Update | CDS | MC1 glitch removed (for now) and IMC locking recovered | The internal ribbon cable for the MC1 satellite box was replaced with the one in the spare box. The MC1 box was closed and reinstalled as before. The IMC is locking well.
Now the burnt cable was disassembled and reassembled with a new cable. It is now in the spare box.
The case closed (literally) |
14852
|
Thu Aug 22 12:54:06 2019 |
Koji | Update | CDS | MC1 glitch removed (for now) and IMC locking recovered | I have checked the MC1 satellite box and made a bunch of changes. For now, the glitches coming from the satellite box are gone. I quickly tested the MC1 damping and the IMC locking. The IMC was locked as usual. I still have some cleaning up to do, but will work on it today and tomorrow.
Attachment 1: Result
The noise level of the satellite box was tested with the suspension simulator (i.e., five pairs of LEDs and PDs in a plastic box).
Each plot shows the ASD of the sensor outputs 1) before the modification, 2) after the change, and 3) with the satellite box disconnected (i.e., the noise from the PD whitening filter in the SUS rack).
Before the modification, these five signals showed significant (~0.9) correlation with each other, indicating that the noise source is common. After the modification, the spectra are lowered down to the noise level of the whitening filters, and there is no correlation observed anymore. EXCEPT FOR the LR sensor: It seems that the LR has an additional noise issue somewhere downstream. This is a separate issue.
Attachment 2: Photo of the satellite box before the modification
The thermal environment in the box is terrible; the components are too hot to touch. You can see that the flat ribbon cable was burned. The amps, buffers, and regulators generate a lot of heat.
Attachment 3: Where the board was modified
- (upper left corner) Every time I touched C51, the diode output went to zero. So C51 was replaced with a WIMA 10uF (50V) cap.
- (lower left area) I found a clear indication of the glitch coming from the PD bias path (U3C). So I first replaced another 10uF (C50) with a WIMA 10uF (50V). This did not change the glitch. So I replaced U3 (LT1125). This U3 had an unused opamp which had railed to the supply voltage. Pins 14 and 15 of U3 were shorted to ground.
- (lower right corner) Similarly to U3, U6 also had two opamps which were railed due to no termination. U6 was replaced, and Pins 11, 12, 14, and 15 were shorted to ground.
- (middle right) During the course of the search, I suspected that the LR glitch comes from U5. So U5 was replaced with a new chip, but this had no effect.
Attachment 4: Thermal degradation of the internal ribbon cable
Because of the heat, the internal ribbon cable lost its flexibility. The cable is cracked and brittle. It now exposes some wires. This needs to be replaced. I'll work on this later this week.
Attachment 5: Thermal degradation of the board
Because of the excessive heat over those 20 years, the bond between the board and the pattern has degraded. In conjunction with the extremely thin wire pattern, desoldering of the components (particularly the LT1125s) was very difficult. I'd want to throw away this board right now if it were possible...
Attachment 6: Shorting the unused opamps
This shows how the pieces of wires were soldered to ground vias to short the unused opamps.
Attachment 7: Comparison of the noise level with the sus simulator and the actual MC1 motion
After the satellite box fix, the sensor outputs were measured with the suspension connected. This shows that the suspension is moving much more than the noise level around 1 Hz. However, at the microseismic frequency there is almost no margin. Considering the use of the adaptive feedforward, we need to lower the noise of the satellite box as well as the noise of the whitening filters.
=> Use better chips (no LT1125, no current buffers), use low noise resistors, better thermal environment.
|
Attachment 1: satellite_box.pdf
|
|
Attachment 2: before.jpg
|
|
Attachment 3: after.jpg
|
|
Attachment 4: P_20190821_194035.jpg
|
|
Attachment 5: P_20190821_174240.jpg
|
|
Attachment 6: P_20190821_194013.jpg
|
|
Attachment 7: comparison_satellite_box.pdf
|
|
14851
|
Tue Aug 20 19:05:24 2019 |
Koji | Update | CDS | MC1 (and MC3) troubleshoot | Started the troubleshoot from the MC1 issue. Gautam showed me how to use the fake PD/LED pair to diagnose the satellite box without involving the suspension mechanics.
This revealed that MC1 has frequent light level glitches which are common to all five sensors. This feature does not exist in the test with the MC3 satellite box. I will open and check the MC1 satellite box to find the cause of these common glitches tomorrow. MC1 is currently shutdown and undamped.
BTW, at the MC3 test, I found that J2 of the satellite box (male Dsub) has all the pins too low (or too short?). I brought the box outside and found that the housing of this connector was half broken down. The connector was reassembled and the metal parts of the housing were bent back so that the housing can hold the connector body tightly.
The MC3 satellite box was restored and connected to the cables. Since I touched this box, it is still under probation. |
Attachment 1: Screenshot_from_2019-08-20_17-26-01.png
|
|
Attachment 2: Screenshot_from_2019-08-20_17-43-03.png
|
|
14850
|
Mon Aug 19 14:36:21 2019 |
gautam | Update | CDS | c1iscaux remaining work | Here is what is left to do:
- Strain relief of all cabling. Chub will take care of this in the coming days. I have said he can connect and disconnect cables as he pleases, but after this work, we may require a hard reboot of the Acromag chassis before restoring functionality to the channels, as it is known that the Acromags can sometimes get "stuck" by a sudden connection of voltage.
- Installation of DB15 cable to the P2 connector of the CM board and a DB9 cable to the ALS demod unit (LO and RF power monitors). These will arrive in the next couple of days and Chub will take care of the install.
- Design, manufacture and install of a custom version of the backplane P1 adaptor board with only 1 D37 connector - for some of the PD DC signals, a custom adaptor board, part number D010005 (for which I can't find any schematics), is already installed on the P2 connector, and makes the DC monitor signals available to 4 LEMO connectors. These signals are then digitized by the fast CDS system, presumably for PDH signal normalization. The footprint of this P2--->LEMO adaptor is such that we cannot simply install our P1---> 2xDB37 adaptor boards, because of space constraints. Fortunately, there is a simple fix to reduce the footprint of the board: remove the bottom DB37 connector, which is unused in the c1iscaux system except for the CM board. I recommend getting ~10 pcs of such boards, as they are also useful in a few other places, where the power cabling to the eurocrates is a space constraint. See Attachment #1 for a picture explaining this situation. Anyone want to volunteer to take care of this?
- In-situ testing. This is easiest done with some light available in the interferometer. Which in turn requires IMC to be locked. Which in turn requires satellite box fixing. Anyone want to volunteer to take care of this?
- Modify C0EDCU.ini to trend the new slow channels we may want long-term monitoring of (e.g. LO power levels to the Demod boards). Anyone want to volunteer to take care of this?
- Decide what to do about the CM latch logic. There are some constraints with the way the acromag register addressing works, such that I've had to change the way the mbboDirect bits are controlled. Unfortunately, this seems to sometimes and unpredictably cause the bits to flip in a non-robust way, which defeats the whole point of having the latch in the first place. Either the latch logic needs to be improved, or we need to implement the latch logic in the fast CDS system, not the slow.
Today I set up the autoburt.req file for the c1iscaux channels, and confirmed that the snapshots are getting recorded. There were a lot of channels in the old autoburt.req file which I thought were unnecessary (and several which no longer exist), so now the only channels that are burt-ed are the whitening gains and states of the AA filters. If someone feels we need more channels to be snapshot recorded, you can add them to the file.
In the old target directory, there were also various versions of a "saverestore.req" file - why do we need this in addition to an autoburt? I guess it is possible they are used by the IFOconfigure scripts to set up some whitening gains etc... |
Attachment 1: caseForSmallerFootprint.pdf
|
|
14849
|
Sat Aug 17 16:49:23 2019 |
gautam | Update | CDS | More 1Y3 work | Work done today:
- All ribbon cable connections to the backplane of the 1Y2 Eurocrates were removed. The cables themselves were cleared for more space to work with.
- 20x 15ft DB37 Cables were run between 1Y2 and 1Y3 via overhead cable tray.
- Backplane interface boards were installed for 1Y2 Eurocrate boards.
- Connections were made between the Acromag chassis and the eurocrate electronics modules.
Testing of functionality:
- Fast BIO switching was verified to work for the following photodiodes:
- AS55, AS110, REFL11, REFL33, REFL55, REFL165, POX11, POY11, POP22, POP110.
- No light was incident on the PDs.
- Test was done by increasing the whitening gain to +45 dB, and then looking at the ASD of the electronics noise between 50 Hz and 500 Hz with the whitening enabled/disabled. We expect x10 difference between the two states. This was seen.
- "DetMon" channels were verified to work - see Attachment #1
- Y-axis units is volts
- Test was done by toggling the output of the 11 MHz Marconi, and looking for a change.
- As seen in the attachment, all 5 monitor channels show a change.
- This needs to be calibrated into some sensible units - I don't know why the different modulation frequencies have such different readbacks from supposedly identical Demod Board monitor points.
- Not sure if the ~10 V reported by the REFL165 monitor point is real or saturated.
- These channels are installed to signal/help debug the infamous ERA-5 decay problem, but maybe some have already decayed?
- QPD interface channels were verified to work - see Attachment #2.
- Test was done by shining a green laser pointer on QPD quadrants.
Much testing remains to be done, but I defer further testing till Monday - the main functionality to be verified in the short run is the whitening gain stepping. The strain-relief of cables and general cleanup will be undertaken by Chub. The current state of affairs is in Attachment #3; it leaves much to be desired in terms of cleanliness.
I will also setup the autoburt for the new machine on Monday. We will also need to add some channels to C0EDCU.ini if we want to trend them over some years (e.g. RF signal powers for monitoring ERA-5 health).
* c1lsc FE was rebooted using the usual script, and everything seems to be healthy in CDS-land again, see Attachment #4.
Quote: |
Next steps:
- We did not get around to running the DB37 cables between the Acromag chassis and the 1Y2 Eurocrates today - this operation itself took the whole day as we also needed to lay out some support struts etc on the rack to support the Sorensens and the Acromag chassis.
- Once the Acromags are connected to the Eurocrates, we have to run in-situ tests to make sure the appropriate functionality has been restored.
- We must have bumped something in the c1lsc expansion chassis - the CDS FE overview screen is reporting some errors (see Attachment #3). I will fix this.
- General tidiness, strain-relief etc.
|
|
Attachment 1: Screen_Shot_2019-08-17_at_3.00.57_PM.png
|
|
Attachment 2: Screen_Shot_2019-08-17_at_3.12.23_PM.png
|
|
Attachment 3: IMG_7804.JPG
|
|
Attachment 4: Screenshot_from_2019-08-17_17-04-47.png
|
|
14848
|
Fri Aug 16 16:40:04 2019 |
gautam | Update | CDS | 1Y3 work | [chub, gautam]
Installation: The following equipment were installed in 1Y3, see Attachment #1:
- Supermicro server, which is the new c1iscaux machine, with IP Address 192.168.113.83.
- 6U Acromag chassis which contains all the ADCs, DACs and BIO units.
- 2 Sorensen DC power supplies to provide +24 V DC and +15 V DC to the Acromags.
- Fusible DIN rail power blocks were installed on the North side of the 1Y3 rack - I placed 2 banks of 5 connectors each for +15 V DC and +24 V DC.
Removal: The following equipment was removed from 1Y3:
- VME crates that were the old c1iscaux and c1iscaux2 machines.
- Spare VME crate that used to be c1susaux, which Chub and I brought over to 1Y3 in an attempt to revive the broken c1iscaux2.
- Approximately 30 twisted ribbon cables that were going to the cross connects. For now, we have not done a full cleanup and they are just piled along the east arm (see Attachment #2) - beware if you are walking there!
Software:
- I connected the c1iscaux machine to the martian network.
- Then I edited the relevant files on chiara to free up the IP addresses previously used by c1iscaux (192.168.113.81) and c1iscaux2 (192.168.113.82), and re-assigned the IP address used for c1iscaux to be 192.168.113.83.
- I also changed the hostname of the c1iscaux machine (it was temporarily called c1iscaux3 to allow bench testing).
- I moved the old /cvs/cds/caltech/target/c1iscaux and /cvs/cds/caltech/target/c1iscaux2 directories to /cvs/cds/caltech/target/preAcromag_oldVME/c1iscaux and /cvs/cds/caltech/target/preAcromag_oldVME/c1iscaux2 respectively.
- I moved the temporarily named /cvs/cds/caltech/target/c1iscaux3 directory, from which I was running all the tests, to /cvs/cds/caltech/target/c1iscaux.
- I edited all references to c1iscaux3 in the systemd files so that we can run the appropriate systemd services.
Next steps:
- We did not get around to running the DB37 cables between the Acromag chassis and the 1Y2 Eurocrates today - this operation itself took the whole day as we also needed to lay out some support struts etc on the rack to support the Sorensens and the Acromag chassis.
- Once the Acromags are connected to the Eurocrates, we have to run in-situ tests to make sure the appropriate functionality has been restored.
- We must have bumped something in the c1lsc expansion chassis - the CDS FE overview screen is reporting some errors (see Attachment #3). I will fix this.
- General tidiness, strain-relief etc.
Quote: |
I judge that we are good to go ahead with an install tomorrow.
|
|
Attachment 1: newLook1Y3.JPG
|
|
Attachment 2: IMG_7803.JPG
|
|
Attachment 3: c1lsc_crashed.png
|
|
14847
|
Fri Aug 16 04:24:03 2019 |
rana | Update | ALS | ALS sensing noise due to IMC | What about just using high gain feedback to MC2 below 20 Hz for the IMC lock? That would reduce the excess if this theory is correct. |
14846
|
Thu Aug 15 18:54:54 2019 |
gautam | Update | ALS | ALS sensing noise due to IMC | Summary:
I came across an interesting suggestion by Yutaro that KAGRA's low-frequency ALS noise could be limited by the fact that the IMC comes between the point where the frequencies of the PSL and AUX lasers are sensed (i.e. the ALS beat note), and the point where we want them to be equal (i.e. the input of the arm cavity). I wanted to see if the same effect could be at play in the 40m ALS system. A first estimate suggests to me that the numbers are definitely in the ballpark. If this is true, we may benefit from lower noise ALS by picking off the PSL beam for the ALS beat note after the IMC.
Details:
Even though the KAGRA phase lock scheme is different from the 40m scheme, the algebra holds. I needed an estimate of how much the arm cavity moves, so I used data from a POX lock to estimate this. The estimate is probably not very accurate (since the arm cavity length is more stable than the IMC length, and the measured ALS noise, e.g. this elog, is actually better than what this calculation would have me believe), but should be the right order of magnitude. From this crude estimate, it does look like for f<10 Hz, this effect could be significant. I assumed an IMC pole of 3.8 kHz for this calculation.
I've indicated a "target" ALS performance where the ALS noise would be less than the CARM linewidth, which would hopefully make the locking much easier. Seems like realizing this target will be touch-and-go. But if we can implement length feedforward control for the arm cavities using seismometers, the low frequency motion of the optics should go down. It would be interesting to see if the ALS noise gets better at low frequencies with length feedforward engaged.
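For the record, a minimal sketch of the conversion I have in mind (my reading of the effect, not the code that generated the attachment; the IMC length and the toy length-noise level are assumptions):
import numpy as np

c = 299792458.0        # m/s
lam = 1064e-9          # PSL wavelength [m]
L_imc = 13.5           # IMC length [m], assumed from the ~11 MHz FSR
f_pole = 3.8e3         # IMC cavity pole [Hz]

f = np.logspace(-1, 3, 400)                          # Fourier frequency [Hz]
dL = 1e-9 / f**2                                     # toy IMC length-noise ASD [m/rtHz], made up
dnu = (c / lam) * dL / L_imc                         # frequency noise imprinted on the PSL [Hz/rtHz]
dnu_after_pole = dnu / np.sqrt(1 + (f / f_pole)**2)  # the IMC pole barely matters below 10 Hz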
* Some updates were made to the plot:
- Took data from Kiwamu's paper for the seismic noise
- Overlaid measured ALS noise
|
Attachment 1: ALSsensingNoise.pdf
|
|
14845
|
Tue Aug 13 14:36:17 2019 |
gautam | Update | CDS | P1--->P2 | As it turns out, only one extra shroud needed to be installed - I did this and migrated the cables for the 4 whitening boards from the P1 to P2 connectors. So until the new Acromag box is installed, we have no control over the whitening gains (slow channels), but do still have control over the whitening filter enable/disable (controlled by fast BIO). I am thinking about the easiest way to test the latter - I think the ambient PD dark noise level is too low to be seen above ADC noise even with the whitening enabled, and setting up drive signals to individual channels is too painful - maybe with +45dB of whitening gain, the (z,p) whitening filter shape can be seen with just PD/demod chain electronics noise.
Quote: |
This morning, I wanted to move the existing cables going to the P1 connectors of the iLIGO whitening boards to the P2 connector, to test the modifications made to allow whitening stage switching. Unfortunately, I found that the shrouds weren't installed. Where can I find these?
|
|
14844
|
Tue Aug 13 08:07:09 2019 |
gautam | Update | CDS | P1--->P2 | This morning, I wanted to move the existing cables going to the P1 connectors of the iLIGO whitening boards to the P2 connector, to test the modifications made to allow whitening stage switching. Unfortunately, I found that the shrouds weren't installed. Where can I find these? |
14843
|
Mon Aug 12 21:25:19 2019 |
Koji | Update | CDS | More bench test of c1iscaux | 1.
> Looking through the manual, I found a recommendation (pg10) that the "IN-" terminal of the Acromag ADC units be tied to the "RTN" pins on the same units. I don't know if this preserves the differential receiving capability of the Acromag ADCs
I suppose we lose the differential capability of an input if the -IN is connected to some defined potential. We should check if the channels are still working as true differential inputs or not.
2. If the multi-bit operation is too complicated to solve, we can use EPICS Calc channels to break out a value into bits and send the individual bits the same way as the other individual binary channels.
|
14842
|
Mon Aug 12 19:58:23 2019 |
gautam | Update | IOO | MC1 suspension oddness | Repair plan:
- Get "spare" satellite box working --- Chub
- According to elog14441, this box has flaky connectors which probably need to be remade
- Re-make the 64-pin IDC crimped connection on the cable from the coil driver board to sat. box, at the Satellite box end --- Chub and gautam
Any other ideas? The problem persists and it's annoying that the IMC cannot be locked. |
14841
|
Mon Aug 12 17:36:04 2019 |
gautam | Update | CDS | More bench test of c1iscaux | [chub, gautam]
With Chub's help, most of the problems have been resolved. Summary: I judge that we are good to go ahead with an install tomorrow.
- The problem with the BIO channels was a mis-wiring internal to the chassis - Chub fixed this and now all 32 AA enable/disable switches seem to work as advertised. Of course we will need to do the in-situ test to make sure.
- The problems with the ADC channels were multiple:
- On the software end, I had gotten some addressing wrong - this was fixed.
- On the hardware side - even though the inputs of the Acromag are "differential", I found that the readback was extremely noisy (~0.5 V RMS for a 3 V DC signal from the handheld calibrator unit 😲 ). Looking through the manual, I found a recommendation (pg10) that the "IN-" terminal of the Acromag ADC units be tied to the "RTN" pins on the same units. I don't know if this preserves the differential receiving capability of the Acromag ADCs - anyways, after Chub implemented this change, all the Analog Input channels behave as expected (I tested with a DC voltage and also a 200 mHz sine wave from a function generator).
- Note that most of the Eurocard electronics we use have single-ended outputs anyway.
- What does this mean for the other Acromag ADCs (e.g. OSEM Shadow Sensor monitors) we have installed????? I saw no documentation in the elog/wiki.
- Binary input channel:
- This is used by the "CM LIMIT" channel.
- I found that I had to initialize a separate alias for the BIO3 unit, which acquires this signal, to use modbus function "4", corresponding to "Read Input Registers" - cf. the binary output channels, which use modbus function 6, "Write Single Register". A short sketch illustrating this function-code distinction is included further down in this entry.
- The fix for the mbbo channels is also likely to be along these lines - but I don't have the energy for that endeavor right now.
- Testing of the physical mbboDirect bit channels using the Acromag Windows utility
- I can't get the mbboDirect EPICS record to work as expected, so I decided to use the native Acromag utility to test the functionality
- First I released control of the acromags from the supermicro (stopped modbus)
- There were several wiring errors - Chub had left for the day so I just fixed them myself.
- The LED tester kit was used to check that the correct bits were flipped - they were.
- At the time of writing, the non-functional channels (in EPICS) are all related to the CM board:
- C1:LSC-CM_LIMIT (binary input) - tested later in the day, works okay...
- C1:LSC-CM_REFL1_BITS (mbboDirect)
- C1:LSC-CM_REFL2_BITS (mbboDirect)
- C1:LSC-CM_AO_BITS (mbboDirect)
- C1:LSC-CM_BOOST2_BITS (mbboDirect)
Since we don't immediately need the CM board, I say we push ahead with the install - at least that will restore the ability to lock PRMI / DRMI. Then we can debug these issues in situ - I'm certain the issue is related to the EPICS/Modbus setup and not the hardware because I verified the physical channel map using the Acromag windows utility.
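For reference, a minimal pymodbus sketch (2.x-style API; not the EPICS/modbus ioc configuration actually used here) of the function-code distinction noted above - the IP address, register offsets and unit number are made up for illustration:

from pymodbus.client.sync import ModbusTcpClient

client = ModbusTcpClient("192.168.113.121")   # hypothetical Acromag BIO3 address
client.connect()

# function code 4, "Read Input Registers" - what a binary *input* like
# C1:LSC-CM_LIMIT needs
rr = client.read_input_registers(address=0, count=1, unit=1)
print("input register:", rr.registers[0])

# function code 6, "Write Single Register" - what the binary *outputs* use
client.write_register(address=0, value=1, unit=1)

client.close()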
Remaining Tasks:
- Install power supply cables at 1Y3
- Install supermicro and Acromag crates in 1Y3
- Migrate existing P1 connectors to P2 where applicable (Whitening boards)
- Connect Dsub-->P1 / P2 adaptors
- Run in-situ tests
Quote: |
I bench tested the functionality of all the c1iscaux Acromag crate channels. Summary: we are not ready for a Monday install, much debugging remains.
|
|
Attachment 1: iscauxCheclist.pdf
|
|
14840
|
Sun Aug 11 11:47:42 2019 |
gautam | Update | CDS | Bench test of c1iscaux | I bench tested the functionality of all the c1iscaux Acromag crate channels. Summary: we are not ready for a Monday install, much debugging remains.
- DAC channels were tested using 4 ch oscilloscope and stepping the whitening gain sliders through their 15 gain settings
- Response was satisfactory - the output changes between 0 - 5 V DC in 15 steps.
- This analog voltage is converted to a binary representation by an on-board ADC on the whitening boards. So we may have to tune the offset voltage and range to avoid accidental bit flipping due to the analog voltage of a particular step falling close to the bit-flipping edge of the on-board ADC. This will require an in-situ test (see the margin-check sketch after this list).
- Test passed
- BIO output channels were tested using a DMM, and monitoring the resistance between the BIO pin and the RTN pin. In the "ON" state, the expected resistance is ~5 Mohm, and in the off state, it is ~3 ohms.
- The AA filter switches on the BIO1 unit do not show the expected behavior - @ Chub, please check the wiring.
- All others (except the mbboDirect bits, see next bullet) were okay, including those for the CM board that are NOT part of the mbboDirect groups.
- Test failed
- ADC channels were tested by driving a ~2Vpp 300mHz sine wave with a function generator, and looking at the corresponding EPICS channel with StripTool.
- I found that none of the ADC channels function as expected.
- Part of the problem is due to incorrect formatting of the EPICS records in the db files, but I think the ADCs also need to be calibrated with the precision voltage source.
- Why do only the ADCs require calibration and not the DACs????
- Test failed
- mbboDirect BIO output test - I made a little LED breadboard tester kit to simultaneously monitor the status of these groups of binary outputs.
- The LSB is toggled as expected when moving the gain slider along.
- However, the other bits in the group are not toggled correctly.
- I believe this is a problem with either (i) the way the EPICS record is configured to address the bits or (ii) the incorrect modbus datatype is used to initialize the ioc.
- It will be helpful if someone can look into this and get the mbboDirect bits working; I don't really want to spend more time on this.
- Test failed
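Regarding the bit-flipping concern in the DAC test above, here is a minimal sketch of the margin check (assumed 4-bit code, linear 0-5 V mapping, and decision edges midway between ideal codes - all assumptions to be replaced by the in-situ measurement):

import numpy as np

vref, nbits = 5.0, 4
codes  = np.arange(2**nbits)                                      # 0..15 gain settings
v_step = codes / (2**nbits - 1) * vref                            # assumed DAC level per setting
edges  = (np.arange(1, 2**nbits) - 0.5) / (2**nbits - 1) * vref   # assumed on-board ADC decision edges

for code, v in zip(codes, v_step):
    margin = np.min(np.abs(edges - v)) * 1e3
    print(f"gain code {code:2d}: {v:5.3f} V, {margin:5.1f} mV to nearest edge")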
I am leaving the crate powered (by bench supplies) in the office area so I have the option to work remotely on this. |
14839
|
Fri Aug 9 20:58:33 2019 |
Jon | Update | Electronics | Borrowed Variac transformer | I borrowed an old-looking Variac variable transformer from the power supplies cabinet along the y-arm. It is currently in the TCS lab. |
14838
|
Fri Aug 9 16:37:39 2019 |
gautam | Update | ALS | More EY table work | Summary:
- 220 uW / 600 uW (~36 % mode-matching) of IR light coupled into fiber at EY.
- Re-connected the RF chain from the beat mouth output on the PSL table to the DFD setup at 1Y2.
- A beat note was found between the PSL and EY beams using the BeatMouth.
Motivation:
We want to know that we can lock the interferometer with the ALS beat note being generated by beating IR pickoffs (rather than the vertex green transmission). The hope is also to make the ALS system good enough that we can transition the CARM offset directly to 0 after the DRMI is locked with arms held off resonance.
Details:
Attachment #1: Shows the layout. The realized MM is ~36%, cf. the 85% predicted by a la mode. It is difficult to optimize much further given the tight layout, and the fact that these fast lenses require the beam to be well centered on them. They are reasonably well aligned, but I don't want to futz around with the pointing into the doubling crystal. Consequently, I don't have much control over the pointing.
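For a rough feel for how much waist mismatch it takes to land at ~36%, here is a minimal sketch of the power coupling between two coaxial Gaussian beams with mismatched waist sizes and waist positions (all numbers are illustrative, not the actual EY beam or fiber parameters):

import numpy as np

def coupling(w1, w2, dz, lam=1064e-9):
    """Power coupling of two coaxial Gaussian beams with waist radii w1, w2 [m]
    and waist positions separated by dz [m]."""
    return 4.0 / ((w1 / w2 + w2 / w1)**2 + (lam * dz / (np.pi * w1 * w2))**2)

w_fiber = 3.3e-6   # assumed fiber mode-field radius [m]
for w_in, dz in [(3.3e-6, 0), (2.5e-6, 0), (3.3e-6, 30e-6), (2.0e-6, 60e-6)]:
    print(f"w_in = {w_in*1e6:3.1f} um, dz = {dz*1e6:5.1f} um -> eta = {coupling(w_in, w_fiber, dz):.2f}")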
Attachment #2: Shows pictures of the fiber tips at both ends before/after cleaning. The tips are now much cleaner.
The BeatMouth NF1611 DC monitor reports ~580 mV with only the EY light incident on it. This corresponds to ~60 uW of light making it to the photodiode, which is only 25% of what we send in. This is commensurate with the BS loss + mating sleeve losses.
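A quick back-of-the-envelope check of the DC-voltage-to-power conversion (the transimpedance and responsivity values below are assumptions and should be checked against the NF1611 datasheet):

v_dc = 0.580     # V, DC monitor reading with only EY light on the PD
z_dc = 1.0e4     # V/A, assumed DC transimpedance (~10 V/mA)
resp = 0.95      # A/W, assumed InGaAs responsivity at 1064 nm

i_pd = v_dc / z_dc        # photocurrent [A]
p_in = i_pd / resp        # incident power [W]
print(f"~{p_in * 1e6:.0f} uW on the photodiode")   # ~61 uW, consistent with the ~60 uW quoted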
To find the beat between PSL and EY beams, I had to change the temperature control MEDM slider for the EY laser to -8355 cts (it was 225 cts). Need to check where this lies in the mode-hop scan by actually looking at the X-tal temperature on the front panel of the EY NPRO controller - we want to be at ~39.3 C on the EY X-tal, given the PSL X-tal temp of ~30.61 C. Just checked it, front panel reports 39.2C, so I think we're good.
Next steps:
- Fix the IMC suspension
- Measure the ALS noise for the Y arm
- Determine if improvements need to be made to the IR beat setup (e.g. more power? better MM? etc etc).
EY enclosure was closed up and ETMY Oplev was re-enabled after my work. Some cleanup/stray beam dumping remains to be done, I will enlist Chub's help on Monday. |
Attachment 1: IMG_7791.JPG
|
|
Attachment 2: fiberCleaning.pdf
|
|
14837
|
Fri Aug 9 08:59:04 2019 |
gautam | Update | CDS | Prep for install of c1iscaux | [chub, gautam]
We scoped out the 1Y3 rack this morning to figure out what needs to be done hardware-wise. We had not thought about how to power the Acromag crate - the LSC rack electronics are all powered by linear supplies and not Sorensens, and the linear supplies are operating pretty close to their maximum current drive. The Acromag box draws ~3 A of current from the 20 V supply; not sure what the current draw will be from the 15 V supply. Options:
- Since there are sorensens in 1Y2 and 1Y1, do we really care about installing another pair of switching supplies (+20 V DC and +15 V DC) in 1Y3?
- Contingent on us having two spare Sorensens available in the lab. Chub has already located one.
- Use the Sorensens installed already in 1Y1.
- Probably the easiest and fastest option.
- +15 V already available, we'd have to install a +20 V one (or if the +/-5 V or +12 V is unused, reconfigure for +20 V DC).
- Can argue that "this doesn't make the situation any worse than it already is"
- Will require running some long (~3 m) cabling to bring the DC power to 1Y3 where it is required.
- Get new linear supplies, and hook them up in parallel with the existing.
- Need to wait for new linear supply to arrive
- Probably expensive
- Questionable benefit to electronics noise given the uncharacterized RF pickup situation at 1Y2
I'm going with option #2 unless anyone has strong objections. |
14836
|
Thu Aug 8 12:01:12 2019 |
gautam | Update | IOO | MC1 suspension oddness | At ~1am PDT today, all the MC1 shadow sensor readbacks (fast CDS channels and Slow Acromag channels, latter not shown here) went to negative values. Of course a negative value makes no sense. After ~3 hours, they came back to positive values again. But since then, the shadow sensor RMS noise has been significantly higher in the >20 Hz band, and there are frequent glitches which kick the suspension. The IMC has been having trouble staying locked. I claim that this has to do with the Satellite box.
No action being taken now while I work on the ALS. In the past the problem has fixed itself. |
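For reference, a minimal sketch of the band-limited RMS figure of merit referred to above (synthetic white-noise data standing in for the actual OSEM channel, and an assumed sample rate):

import numpy as np
from scipy import signal

fs = 2048                                   # Hz, assumed channel sample rate
t  = np.arange(0, 64, 1 / fs)
x  = np.random.randn(t.size)                # stand-in for an MC1 shadow sensor time series

f, pxx = signal.welch(x, fs=fs, nperseg=8 * fs)   # one-sided PSD [counts^2/Hz]
band   = f > 20.0
rms_20 = np.sqrt(np.sum(pxx[band]) * (f[1] - f[0]))
print(f"RMS above 20 Hz: {rms_20:.3f} (channel units)")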
Attachment 1: MC1_suspension.png
|
|
Attachment 2: MC1_suspension.pdf
|
|
|