The Auxiliary DAQ Chassis, or Acromag box, is now wired and ready for testing. I will be sorting the cables at the vacuum rack to make connection to the box easier.
We did work in the OMC chamber today to get the OMC aligned. Aaron was in the clean suit while Gautam steered in-air optics. We modified the aux input steering optics and the final two OMC steering optics (between OMMT and OMC), but did not modify any of the AS path optics.
I had already aligned AUX approximately into the AS port from the AP table. With the OMC N door open, we aligned the aux beam first to OM6, then to OMPO, then OM5. OM5 was the last optic in the OMC chamber that we could align to.
From there, Gautam found the aux beam clipping on a few optics on its way to SR4 using the IR viewer. Once we were approximately hitting SR4, we got a return beam in the OMC chamber, which we were able to coalign with the input aux beam.
We had already done the alignment of SR5 into the OMC during the last vent, so we immediately had a refl off of the OMC, which we aligned onto a PDA520 from the PSL table (larger aperture than the previous PD, which anyway needed a macroscopic adjustment to catch the refl beam).
Next, we removed the OMC cover, wrapped it in foil, and placed it in the makeshift clean room near the Y end. The screws remain in a foil bucket in the OMC chamber. With the cover off, Aaron moved the OMC input steering mirrors to align the beam in the OMC. We measured ~2.4mW in the OMC refl beam, which means about 240uW is transmitted into the OMC. Aaron thinks the beam overlaps itself after one round trip in the cavity, but that the entire plane may be too low in pitch, so more alignment may be needed here.
With the beam approximately aligned into the OMC, we energized the OMC-L piezo driver with 200V, and applied a ~0.03Hz triangle wave on the OMC diff input (pins 2-7). We monitored the REFL PD, piezo mon, function generator signal, and one of the trans PDs. We noticed that the PZT mon shows the driver saturating before the function generator reaches its full +/-10V, which is something to investigate.
We saw what could have been regular dips in the REFL PD signal, but realized that with an unknown level of mode matching, it will be hard to tell from the DC signal alone whether the light is becoming resonant. Gautam has suggested coaligning the aux and PSL beams, then observing the PDH signal from the PSL beam as the OMC sweeps through resonance, and turning aux back on whenever we adjust the alignment of the OMC (so I can see the beam in the cavity).
I'll think through the plan in some more detail and we will try to have the OMC locked tomorrow.
I'm running a script that moves TT1 and TT2 randomly in some restricted P/Y space to try and find an alignment that gets some light onto the TRY PD. Test started at gpstime 1228967990, should be done in a few hours. The IMC has to remain locked for the duration of this test. I will close the PSL shutter once the test is done. Not sure if the light level transmitted through the ITM, which I estimate to be ~30uW, will be enough to show up on the TRY PD, but worth a shot I figure.
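For the record, the logic of the script is roughly the following sketch (not the actual script: the channel names, excursion limit, and settling time are hypothetical placeholders):

import random
import time
import epics  # pyepics

# Hypothetical channel names and limits for illustration
TT_CHANS = ["C1:IOO-TT1_PIT", "C1:IOO-TT1_YAW", "C1:IOO-TT2_PIT", "C1:IOO-TT2_YAW"]
TRY_PD = "C1:LSC-TRY_OUT"
LIMIT = 0.5  # allowed offset range, in whatever units the TT sliders use

best = (-1.0, None)
for _ in range(1000):
    settings = [random.uniform(-LIMIT, LIMIT) for _ in TT_CHANS]
    for chan, val in zip(TT_CHANS, settings):
        epics.caput(chan, val)
    time.sleep(5)  # let the TTs settle before reading the PD
    level = epics.caget(TRY_PD)
    if level > best[0]:
        best = (level, settings)
print(best)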
Test was completed and PSL shutter was closed at 1228977122.
The local backup was done at 18:18 after 11h18m of running.
2018-12-15 07:00:01,699 INFO Updating backup image of /cvs/cds
2018-12-15 18:17:56,378 INFO Backup rsync job ran successfully, transferred 5717707 files.
Edit: It was not a 4TB disk but a 6TB disk, in fact. (We actually ordered a 4TB disk...)
I think the problem with the backup disk was the flaky power supply for the external drive.
I swapped the drive to a new HGST 4TB one, but it was neither recognized nor spun up with the external power supply we had. So I decided to put both the new and old drives in the PC chassis to power them from the internal power supply. I tested the old disk via a USB-SATA cable; however, it was not recognized. I noticed that the disk was not an HGST 4TB but a Seagate 3TB. Is that possible? I thought it was 4TB... Did I miss something?
Once the new 4TB was connected via the USB-SATA, getting it mounted was very smooth. The disk is now mounted as /media/40mBackup as before. /etc/fstab was also modified with the new UUID. All the command logs are below.
Let's see how the morning backup goes. It will take a while to copy everything onto the new disk, so it was actually very nice to have it set up by Friday midnight.
controls@chiara|~> sudo mkfs -t ext4 /dev/sdd1
controls@chiara|~> sudo emacs -nw /etc/fstab
controls@chiara|~> cat /etc/fstab
controls@chiara|~> sudo mount -a
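For reference, the new fstab entry has this form (the UUID below is a placeholder, not the actual one):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/40mBackup  ext4  defaults  0  2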
fsck of the chiara backup disk (UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000") was done, but it required many files to be fixed. So the backed-up files are not reliable now.
On top of that, the disk was no longer recognized by the machine.
I went to the disk and disconnected the USB and then the power supply, which was/is connected to the UPS.
Then I reconnected them. This made the disk come back as /media/90a5c98a-22fb-4685-9c17-77ed07a5e000. (*)
After unmounting this disk, I ran "sudo mount -a" so that it is mounted the way fstab specifies.
Now I am running the backup script manually, so that we can at least pretend to maintain a snapshot of the day.
(*) This is the same situation we found during the recovery from the power shutdown. So my hypothesis is that on Oct 16 at 7 AM, during the backup, there was a USB failure or disk failure or something that unmounted the disk. This caused some files to get damaged, and also caused the disk to be remounted as /media/90a5c98a-22fb-4685-9c17-77ed07a5e000. So since then, we have had no backup.
Update (20:00): The disk connection failed again. I think this disk is no longer reliable.
sudo fsck -yV UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000"
[sudo] password for controls:
fsck from util-linux 2.20.1
[/sbin/fsck.ext4 (1) -- /media/40mBackup] fsck.ext4 -y /dev/sde1
e2fsck 1.42 (29-Nov-2011)
/dev/sde1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Error reading block 527433852 (Attempt to read block from filesystem resulted in
short read) while getting next inode from scan. Ignore error? yes
I replaced the 2'' AUX-AS combining BS with a freshly mounted 2'' HR mirror for 1064. The mirror is labelled 'Y1-2037-45-P', and had a comment on its case: 'V'. I aligned the AUX beam from the new HR mirror into the next iris, so AUX passes through irises both before and after the new optic. Now, AS does not go out to the AS PDs.
I mounted the old BS on the SP table in a random orientation.
I also dumped the beam transmitted through one of the AUX steering mirrors before the new HR mirror.
We looked into the /frames situation a bit tonight. Here is a summary:
Plan of action:
BTW - the last chiara (shared drive) backup was October 16, 6 am. dmesg showed a bunch of errors; Koji is now running fsck in a tmux session on chiara, so let's see if that repairs the errors. We missed the opportunity to swap in the 4TB backup disk, so we will do this at the next opportunity.
I turned on AUX, and aligned the aux beam to be centered on the first optic the AS beam sees on the AP table. I then turned off the AUX laser.
I completed testing of the AI board mentioned above. In addition to the blown fuse, there were two problems:
After this, I tested the TF of all channels. For the most part, I found the expected 3rd-order ~7500Hz Chebyshev with notches at ~16kHz and 32kHz. However, some of the channels had shallower or deeper notches. By ~32kHz, I was below the resolution of the spectrum analyzer. Perhaps I just have nonideal settings? I'll attach a few representative examples.
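For comparison, the expected magnitude response can be sketched numerically; this is only a guess at the topology (the 1 dB ripple and notch Q of 10 are assumptions, not the board's actual specs):

import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# 3rd-order Chebyshev low-pass at ~7.5 kHz (1 dB ripple assumed)
b, a = signal.cheby1(3, 1, 2 * np.pi * 7500, btype='low', analog=True)
f = np.logspace(2, 5, 1000)
w = 2 * np.pi * f
_, h = signal.freqs(b, a, worN=w)

# Cascade ideal analog notches at ~16 kHz and ~32 kHz (Q assumed)
for f0 in (16e3, 32e3):
    w0, Q = 2 * np.pi * f0, 10
    _, hn = signal.freqs([1, 0, w0**2], [1, w0 / Q, w0**2], worN=w)
    h *= hn

plt.semilogx(f, 20 * np.log10(np.abs(h)))
plt.xlabel('Frequency [Hz]')
plt.ylabel('Magnitude [dB]')
plt.show()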
I reinstalled the chassis at 1X2, but haven't connected power.
[Gautam, Aaron, Koji]
The PSL interlock system was fixed and now the 40m lab is laser hazard as usual.
- The schematic diagram of the interlock system is D1200192.
- We have opened the interlock box. Immediately we found that the DC switching supply (OMRON S82K-00712) is not functioning anymore. (Attachment #1)
- We could not remove the module, as the power supply was attached to the DIN rail. We decided to leave the broken supply there (it is still AC powered, with no DC output).
- Instead, we brought a DC supply adapter from somewhere and chopped off its connector so that we could hook it up to the crimp-type quick connects. In Attachment #1, the gray line is +12V, and the orange and black lines are GND.
- Upon inspection, the wires of the "door interlock reset button" fell off and the momentary switch (GRAYHILL 30-05-01-502-03) was broken. So it was replaced with another momentary switch, which unfortunately is much smaller than the original. (Attachments 2 and 3)
- Once the DC supply adapter was plugged into an AC tap, we heard the relays working, and we recovered the laser hazard lamps and the PSL door alarm lamps. It was also confirmed that the PSL Innolight is operable now.
- BTW, there is a big switch box on the wall close to the PSL enclosure. Some of the green lamps were out. We found plenty of spare lamps and relays inside the box, so we replaced the bulbs, and now the AC lights are functioning. (Attachments 4 & 5)
In order to see the AS beam a bit more clearly in our low-power config, I swapped out the ND=1.0 filter on the AS camera for ND=0.5.
I did a walkaround and checked the status of all the interlock switches I could find based on the SOP and interlock wiring diagram, but the PSL remains interlocked. I don't want to futz around with AC power lines, so I will wait for Koji before debugging further. None of the "Danger" signs at the VEA entry points are on, suggesting to me that the problem lies pretty far upstream in the wiring, possibly at the AC line input. The red lights around the PSL enclosure, which are supposed to signal when the enclosure doors are not properly closed, also do not turn on, supporting this hypothesis...
I confirmed that there is nothing wrong with the laser itself - I manually shorted the interlock pins on the rear of the controller and the laser turned on fine. But I am not comfortable operating in this hacky way, so I have restored the interlock connections until we decide the next course of action...
The PSL (Edwin) remains in an interlock-triggered state. We are not sure what is causing this, but the laser cannot be powered on until this is resolved.
Bob, Aaron, and I removed the door from the OMC chamber this morning. Everything went well.
After several combinations of soft/hard reboots for FB, FEs and expansion chassis, we managed to recover the nominal RTCDS status post power outage. The final reboots were undertaken by the rebootC1LSC.sh script while we went to Hotel Constance. Upon returning, Koji found all the lights to be green. Some remarks:
sudo systemctl start open-mx.service   # Open-MX driver for the DAQ network
sudo systemctl start mx.service        # Myricom MX driver
sudo systemctl start daqd_*            # all of the daqd services
There turned out to be a few analog signals for the vacuum system after all. The TP2/3 foreline pressure gauges were never part of the digital system, but we wanted to add them, as some of the interlock conditions should be predicated on their readings. Each gauge connects to an old Granville-Phillips 375 controller which only has an analog output. Interfacing these signals with the new system required installing an Acromag XT1221 8-channel A/D unit. Taking advantage of the extra channels, I also moved the N2 delivery line pressure transducer to the XT1221, eliminating the need for its separate Omega DPiS32 controller. When the new high-pressure transducers are added to the two N2 tanks, their signals can also be connected.
The XT1221 is mounted on the DIN rail inside the chassis and I have wired a DB-9 feedthrough for each of its three input signals. It is assigned the IP 192.168.114.27 on the vacuum subnet. Testing the channels in situ revealed a subtlety in calibrating them to physical units. It was first encountered by Johannes in a series of older posts, but I repeat it here in one place.
An analog-input EPICS channel can be calibrated from raw ADC counts to physical units (e.g., sensor voltage) in two ways:
From the documentation, under the engineering-units method EPICS internally computes:

calibrated val = (measured A/D counts) x (EGUF - EGUL) / (full scale A/D counts) + EGUL
where EGUF="eng units full scale", EGUL="eng units low", and "full scale A/D counts" is the full range of ADC counts. EPICS automatically infers the range of ADC counts based on the data type returned by the ADC. For a 16-bit ADC like the XT1221, this number is 2^16 = 65,536.
The problem is that, for unknown reasons, the XT1221 rescales its values post-digitization to lie within the range +/-30,000 counts. This corresponds to an actual "full scale A/D counts" = 60,001. If a multiplicative correction factor of 65,536/60,000 is absorbed into the values of EGUF and EGUL, then the first term in the above summation can be corrected. However, the second term (the offset) has no dependence on "full scale A/D counts" and should NOT absorb a correction factor. Thus adjusting the EGUF and EGUL values from, e.g., 10V to 10.92V is only correct when EGUL=0V. Otherwise there is a bias introduced from the offset term also being rescaled.
The generally correct way to handle this correction is to use the manual "NO CONVERSION" method. It constructs calibrated values by simply applying a specified gain and offset to the raw ADC counts:
calibrated val = (measured A/D counts) x ASLO + AOFF
The gain is ASLO = (V_max_adc - V_min_adc) / 60,001 and the offset is AOFF = 0. I have tested this on the three vacuum channels and confirmed it works. Note that if the XT1221 input voltage range is restricted from its widest +/-10V setting, the number of counts is not necessarily 60,001. Page 42 of the manual gives the correct counts for each voltage setting.
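As a concrete sketch, an input record using this method could look like the following. The record name, DTYP, and INP link are hypothetical placeholders (the real database uses whatever device support the modbus IOC provides); only the LINR/ASLO/AOFF usage is the point here:

record(ai, "C1:Vac-TP2_FORELINE_VOLTS")  # hypothetical channel name
{
    field(DTYP, "asynInt32")             # placeholder device support
    field(INP,  "@asyn(XT1221_PORT 0)")  # placeholder link to the XT1221
    field(LINR, "NO CONVERSION")
    field(ASLO, "3.3333e-4")             # (10 - (-10)) V / 60,001 counts
    field(AOFF, "0")
    field(EGU,  "V")
}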
At 11:13 am there was a ~2-3 second interruption of all power at the 40m.
I checked that nobody was in any of the lab areas at the time of the outage.
I walked along both arms of the 40m and looked for any indicator lights or unusual activity. I took photos of the power supplies that I encountered, attached. I tried to be somewhat complete, but didn't have a list of things in mind to check, so I may have missed something.
I noticed an electrical buzzing that seemed to emanate from one of the AC adapters on the vacuum rack. I've attached a photo of which one; the buzzing changes when I touch the case of the adapter. I did not modify anything on the vacuum rack. There is also
Most of the cds channels are still down. I am going through the wiki for procedures on what to log when the power goes off, and will follow the procedures here to get some useful channels.
I did the following:
I found that the BS/PRM OL SUM channels were reading close to 0. So I went to the optical table and found that there was no beam from the HeNe. I tried power-cycling the controller; there was no effect. From the trend data, it looks like there was a slow decay over ~400000 seconds (~5 days) and then an abrupt shutoff. This is not ideal, because we would have liked to use the oplevs as a DC alignment reference during the vent. I plan to use the AS camera to recover some sort of good Michelson alignment, and then if we want to, we can switch out the HeNe.
*How can I export PDF from NDscope?
NDscope is now running on pianosa. To be really useful, we need the templates, so I've made /users/Templates/NDScope_templates where these will be stored. Perhaps someone can write a parser to convert dataviewer .xml to something ndscope can understand. To get it installed, I had to run:
sudo yum install ndscope
sudo yum install python34-gpstime
sudo yum install python34-dateutil
sudo yum install python34-requests
I also changed the PYTHONPATH variable in .bashrc to include the python3.4 site-packages directory.
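i.e., something like the following line in .bashrc (the exact site-packages path is an assumption and may differ on pianosa):

export PYTHONPATH=/usr/lib/python3.4/site-packages:$PYTHONPATH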
Let's install Jamie's new Data Viewer
I set up a function generator to drive OMC-L, and have the two DCPD mons and the OMC REFL PD sent to an oscilloscope. I need to select a cds channel over which to read the REFL signal.
The two DCPD mon channels have very different behaviors on the PD mons at the sat box (see attachment). PD1 has an obvious periodicity, PD2 has less noise overall and looks more white. I don't yet understand this, and whether it is caused by real light, something at the PDs, or something at the sat box.
I've again gone through the operations that will happen with the OMC chamber vented. Here's how it'll go, with some of the open questions that I'm discussing with Gautam or whoever is around the 40m:
Talked with Gautam for a good while about the above plan. In trying to figure out why the DCPD sat box appears to have a different TF for the two PDs (seems to be some loose cabling problem at the mons, because wiggling the cables changed this), we determined that the AA chassis also wasn't behaving as expected--driving the expected channels (28-31) with a sine wave yields some signal at the 100Hz driving frequency, but all save ch31 were noisy. We also still saw the 100Hz when the chassis was unplugged. I will continue pursuing this, but in the meantime I'm making an IDE40 to DB37 connector so I can drive the ADC channels directly with the DAC channels I've defined (need to match pinouts for D080303 to D080302). I also will make a new SCSI to DB37 adapter that is more robust than mentioned here. I also need to replace the cable carrying HV to the OMC-L driver, so that it doesn't have a wire-to-wire solder joint.
We moved a razor blade on the AP table so it is no longer blocking the aux beam. We checked the alignment of aux into the AS port. AUX and AS are not collinear anywhere on the AP table, and despite confirming that the main AS beam is still being reflected off of the OMC input mirror, the returning AUX beam does not reach the AP table (and probably is not reaching the OMC). AUX needs to be realigned such that it is collinear with the AS beam. It would be good if, in this configuration, the SRM were held close to its position when the interferometer is locked, but the TTs should provide us some (~2.5mrad) actuation. Gautam will do this alignment and I will calculate whether the TTs will be able to compensate for any misalignment of the SRM.
Here is the new plan and minimal things to do for the door opening tomorrow:
That is the first, minimal sequence of steps, which I plan to complete tomorrow. Once the beam is aligned into the OMC, the alignment into the DCPDs shouldn't need modification. Barring work needed to align from the OMC to the DCPDs, I think most other work with the OMC can be done in-air.
Koji gave me some tips on testing this board that I wanted to write down, notes probably a bit intermingled with my thoughts. Thanks Koji, also for the DCC and equipment logging!
D050368 Anti-Imaging Chassis
D050368 Adl SUS/SEI Anti-Image filter board
S/N 100-102 Assembled by screaming circuits. Begin testing 4/3/06
S/N xxx Mohana returned it to the shop. No S/N or traveler. Put in shop inventory 4/24/06
S/N 103 Rev 01. Returned from Screaming circuits 7/10/06. complete except for C28, C29
S/N 104-106 Rev 01. Returned from Screaming circuits 7/10/06. complete except for C28, C29 Needs DRV-135’s installed
S/N 107-111 Rev 02 (32768 Hz) Back from assembly 7/14/06
S/N 112-113 Rev 03 (65536 Hz) assembled into chassis and waiting for test 1/29/07
S/N 114 Rev 03 (65536 Hz) assembled and ready for test 020507
D050512 RBS Interface Chassis Power Supply Board (Just an entry. There is no file)
RBS Interface Chassis Power Board D050512-00
Taking another look at the datasheet, I don't think the LM7812 is an appropriate replacement, and I think the LM2940CT-12 is supposed to supply 1A, so it's possible the problem actually is on the power board, not on the dewhitening board. The board takes +/-15V, not +/-24...
Disclaimer: This is almost certainly some user error on my part.
I've been trying to get this running for a couple of days, but am struggling to understand some behavior I've been seeing with DTT.
I wanted to measure some transfer functions in the simulated model I set up.
To see if this is just a feature of the simulated model, I tried measuring the "plant" filter in the C1:LSC-PRCL filter bank (which is also just a pendulum TF), and ran into the same error. I also tried running the DTT template on donatella (Ubuntu12) and pianosa (SL7), and got the same error, so this must be something I'm doing wrong with the way the measurement is being set up / run. I couldn't find any mention of similar problems in the SimPlant elogs I looked through; does anyone have an idea as to what's going on here?
* I can't get the "import" feature of DTT to work - I go through the GUI prompts to import an ASCII txt file exported from FOTON, but nothing selectable shows up in DTT once the import dialog closes (which I presume means that the import was successful). Are we using an outdated version of DTT (GDS-2.15.1)? Attachment #1 shows the measured part of the pendulum TF, and it is consistent with what is expected until the measurement terminates with a synchronization error.
The import problem is fixed - when importing, you have to give names to the two channels that define the TF you're importing (these can be arbitrary, since the ASCII file doesn't have any channel name information). Once I did that, the import works. You can see that, while the measurement ran, the foton TF matches the DTT-measured counterpart.
11 Dec 2pm: After discussing with Jamie and Gabriele, I also tried changing the # of points, start frequency, etc., but ran into the same error (though admittedly I only tried 4 combinations of these, so not exhaustive).
I kept having trouble keeping the power LEDs on the dewhitening board 'on'. I did the following:
1. I noticed that the dewhitening board was drawing a lot of current (>500mA), so I initially thought that the indicators were just turning on, until I blew the fuse. I couldn't find the electronics diagrams for this board, so I was using analogous boards' diagrams and wasn't sure how much current to expect it to draw. I swapped in 1A fuses (only for the electronics I was adding to the system).
2. Now the +24V indicator on the dewhitening board wasn't turning on, and the -24V supply was alternately drawing ~500mA and 0mA in a ~1Hz square wave. Thinking I could be dropping voltage along the path to the board, I swapped out the cables leading to the whitening/dewhitening boards with 16AWG (was 18AWG). This didn't seem to help.
3. Since the whitening board seemed to be consistently powered on, I removed the dewhitening board to see if there was a problem with it. Indeed, I'd burned out the +24V supply electronics - two resistors were broken entirely, and the breadboard near the voltage regulator had been visibly heated.
I noticed that the +/-15V currents are slightly higher than the labels, but didn't notice whether they were already different before I began this work.
I also noticed one pair of wires in the area of 1X1 where I was working that wasn't attached to power (or anything). I didn't know what it was for, so I've attached a picture.
I did some ray tracing and determined that the aux beam will enter the OMC after losing some power in reflection off OMPO (couldn't find this spec on the wiki; I remember something like 90-10 or 50-50) and the SRM (R~0.9), and then in transmission back through OMPO. This gives us something like 8%-23% of the aux light going to the OMC, depending on the OMPO splitting ratio. This elog tells me the aux power before the recombination BS is ~37mW, so ~3.7mW onto the SRM, which is consistent with the OMPO being 90-10, and would mean the aux power onto the OMC is ~3mW, plenty for aligning into the OMC.
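A quick numeric check of that range (splitting ratios assumed as above):

# Fraction of aux power reaching the OMC: reflect off OMPO, reflect off SRM,
# then transmit back through OMPO. The OMPO values are the two guesses above.
R_srm = 0.9
for r_ompo, t_ompo in [(0.1, 0.9), (0.5, 0.5)]:  # 90-10 and 50-50 cases
    frac = r_ompo * R_srm * t_ompo
    print(f"OMPO {r_ompo:.0%}/{t_ompo:.0%}: {frac:.1%} of aux light reaches the OMC")
# -> 8.1% and 22.5%, i.e. the quoted 8%-23% range.
# With ~37mW before the BS: 37 * 0.1 = 3.7mW onto SRM, and 3.7 * 0.9 * 0.9 ~ 3mW at the OMC.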
Since the dewhitening board I'd intended to use isn't working (see elog), I'm going to scan the OMC length with a function generator while adjusting the alignment by hand, as was briefly attempted during the last vent.
I couldn't identify which PD on the AP table was the one I had used during the last vent; I suspect I co-opted that very PD for the arm loss measurements. It is a PDA520, which has a large (100mm^2) area, so I've repurposed it again to catch the OMC prompt reflection during the mode scans. I've mounted it approximately where I expect the refl beam to exit the AS chamber.
I brought over the cart that usually lives at 1X1 to help me organize materials near the OMC chamber for opening.
I replaced the banana connectors we'd been using to send HV to the HV driver with soldered wires going to the final locking connector only, so now the 150V is on a safe cable.
I powered up the DCPD sat box and again confirmed that it's working. I sent a 500Hz sine wave through the sat box and confirmed that I can see the signal in the DCPD channels I've defined in cds. I gave the TT and OMC-L PZT channels bad assignments on the ADC, because the way the signals are grouped on the cables means I can't attach all of them at once (right now, what reads as 'OMC_PZT_MON' is actually the unfiltered output from the sat box, while the DCPD channels are the filtered outputs of the box). For this vent, I'll only really need the DCPD outputs, and since I have confirmed that I can read out both of those, I'll fix up the HV driver mon channels later.
All of our wikis (except the 40m one, which unfortunately got turned into a ligo.org mess) use DokuWiki. This now has an auto-upgrade feature through the Admin web interface.
I tried this recently and it fails with this message:
DokuWiki 2018-04-22a "Greebo" is available for download.
You're currently running DokuWiki Release 2017-02-19e "Frusterick Manners".
New DokuWiki releases need at least PHP 5.6, but you're running 5.4.16. You should upgrade your PHP version before upgrading!
So we'll have to wait, since PHP 5.6+ isn't available on SL7 (which is what NODUS is running).
I DID do a 'yum upgrade' which updated all the packages. I also installed yum-cron so that the RPM listings get updated daily. But sadly, SL7 only has PHP 5.4.16 (which is a June 2013 release):
> Package php-5.4.16-43.el7_4.1.x86_64 already installed and latest version
Both were measured using the FieldMate power meter. I was hesitant to use the Ophir power meter, as there is a label on it that warns against exceeding 100 mW. I can't find anything in the elog/wiki about the measured insertion loss / isolation of the input Faraday, but this seems like a pretty low amount of light to get back from PRM. The IMC visibility using the MC_REFL DC values is ~87%. Assuming perfect transmission of the 87% of the 97mW that's coupled into the IMC, and assuming a further 5% loss between the Faraday rejected port and the AP table, the Faraday insertion loss would be ~30%. Realistically, the IMC transmission is lower. There is also some part of the light picked off for IPPOS. Judging by the shape of the REFL spot on the camera, it doesn't look clipped to me.
Either way, it seems we are only getting ~half of the 1W we send in back off of PRM. So maybe it's worth investigating the situation in the IOO chamber during this vent.
The c1psl, c1susaux, c1iool0, and caux crates were keyed. Also, the physical shutter on the PSL NPRO, which was closed last Monday for the Sundance crew filming, was opened, and the PMC was locked. The PMC remains locked, but there is no light going into the IMC.
I started putting together some code to implement some ideas we discussed at the Tuesday meeting here. The pipeline isn't set up yet, but I think it's commented okay, so if people want to play around with it, the code lives on the 40m gitlab.
Initial results and conclusions:
There still seem to be some data quality issues with the ringdown data I have, so I don't think we really gain anything from running this analysis on the data already collected. But in the future, we can do the ringdown with complete extinguishing of the input light and repeat the analysis.
As for whether we should clean the IMC mirrors - I'm going to see how much power comes out at the REFL port (with PRM aligned) this afternoon, and compare to the input power. This technique suffers from uncertainty in the Faraday insertion loss, isolation and IMC parameters, but I am hoping we can at least set a bound on what the IMC loss is.
I believe I finally have the N2 gauge working correctly. The wiring is unchanged from its original state and the controller has been recalibrated.
After letting the line pressure drop to 0 PSI, as indicated by the analog gauge in the drill-press room, I recorded the number of counts read by the Omega controller. Then I pressurized the line to 80 PSI, again as indicated by the analog gauge, and recorded the Omega counts again. I entered these two reference points into the controller (it automatically determines the gain and offset from them), then confirmed that the readings agreed with the analog gauge as I varied the line pressure.
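What the controller does internally is just a linear two-point calibration; a minimal sketch (the counts values below are made up for illustration):

def two_point_cal(c0, p0, c1, p1):
    # Gain and offset mapping raw counts -> PSI from two (counts, PSI) references
    gain = (p1 - p0) / (c1 - c0)
    offset = p0 - gain * c0
    return gain, offset

gain, offset = two_point_cal(c0=400, p0=0.0, c1=2800, p1=80.0)  # hypothetical counts
psi = gain * 1600 + offset  # converting an arbitrary raw reading of 1600 counts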
The two reference points are:
In the latest installment in this puzzler: turns out that maybe the trend of the "N2 pressure" channel increasing over the ~3 day timescale it takes a cylinder of N2 to run out is real, and is a feature of the way our two N2 cylinder lines/regulators are setup (for the automatic switching between cylinders when one runs out). In order to test this hypothesis, we'd like to have the line pressure be 0 initially, and then just have 1 cylinder hooked up.
I need to hookup +/- 24 V supplies to the OMC whitening/dewhitening boxes that have been added to 1X2.
There are trailing +24V fuse slots, so I will extend that row to leave the same number of slots open.
While removing one +24V wire to add to the daisy chain, I let the wire brush an exposed conductor on the ground side, causing a spark. FSS_PCDRIVE and FSS_FAST are at different levels than before this spark. The 24V sorensens have the same currents as before according to the labels. Gautam advised me to remove the final fuse in the daisy chain before adding additional links.
gautam: we peeled off some outdated labels from the Sorensens in 1X1 such that each unit now has only 1 label visible reflecting the voltage and current. Aaron will post a photo after his work.
In the latest installment in this puzzler (the hypothesis quoted above): when we went into the drill-press area to set this up, we heard a hiss; it turns out that one of the cylinders is leaking (to be fair, this was labelled, but I thought it isn't great to have a higher N2 concentration in an enclosed space). Since we don't need any actuation ability, I valved off the leaky cylinder and disconnected the other, properly functioning one. Attachment #1 shows the current state.
Based on new input from Chub, attached is the revised list of signal cable feedthroughs needed on the vacuum system Acromag crate. I believe this list is now complete.
need to vary start/stop times in fit to test for systematics
Recently we wondered at the meeting what the IMC round-trip loss was. I had done several ringdowns in the winter of 2017, but because the incident light on the cavity wasn't being extinguished completely (the AOM 0th-order beam is used), the full Isogai et al. analysis could not be applied (there were FSS-induced features in the reflection ringdown signal). Nevertheless, I fitted the transmission ringdowns. They looked like clean exponentials, and judging by the reflection signals (see previous elogs in this thread), the first ~20us of data is a clean exponential, so I figured we may get some rough value of the loss by just fitting the transmission data.
The fitted storage time is ~60 usec. However, this number isn't commensurate with the 40m IMC spec of a critically coupled cavity with 2000ppm transmissivity for the input and output couplers.
Attachment #1: Expected storage time for a lossless cavity with round-trip length ~27m. MC2 is assumed to be perfectly reflecting. The IMC length is known to better than 100 Hz uncertainty, because the Marconi RF modulation signal is set accordingly. For the 40m spec, I would expect storage times of ~40 usec, but I measure almost 30% longer, at ~60 usec.
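A back-of-the-envelope version of that calculation (this assumes the fitted quantity is the field amplitude 1/e decay time, and that coupler transmission is the only loss):

c = 299792458.0
L_rt = 27.0      # round-trip length [m]
T = 2000e-6      # transmissivity of each coupler
t_rt = L_rt / c  # round-trip time, ~90 ns
tau = t_rt / T   # per-round-trip amplitude loss ~ (T_in + T_out)/2 = T
print(tau * 1e6) # ~45 usec, in the ballpark of the ~40 usec quoted above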
Attachment #2: Fits and residuals from the 10 datasets I had collected. This isn't a super informative plot because there are 10 datasets and fits, but to the eye the fits are good, and the diagonal elements of the covariance matrix output by scipy's curve_fit back this up. The function used to fit the t > 0 portions of these signals (because the light was extinguished at t=0 by actuating on the AOM) is A*exp(-t/tau), where A and tau are the fitted parameters. In the residuals, the same artefacts visible in the reflection signal are seen.
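The fit itself is just a two-parameter exponential; a minimal sketch with synthetic data standing in for a ringdown trace (the actual analysis code may differ):

import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, A, tau):
    return A * np.exp(-t / tau)

# Synthetic stand-in for one transmission ringdown dataset
t = np.linspace(0, 200e-6, 1000)
data = ringdown(t, 1.0, 60e-6) + 0.005 * np.random.randn(t.size)

popt, pcov = curve_fit(ringdown, t, data, p0=(1.0, 50e-6))
tau_err = np.sqrt(pcov[1, 1])  # 1-sigma error from the covariance diagonal
print(f"tau = {popt[1]*1e6:.1f} +/- {tau_err*1e6:.1f} us")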
Attachment #3: Scatter plot of the data. The widths of the circles are proportional to the fit error on the individual measurements (I just scaled the marker size arbitrarily to be able to visually see the difference in uncertainty; the width doesn't exactly indicate the error), while the dashed lines are the global mean and +/- 1 sigma levels.
Attachment #4: Cavity pole measurement. Using this, I get an estimate of the loss that is a much more believable .
Below is an inventory of the signal feedthroughs that need to be installed on the vacuum Acromag crate this week.
**The original documentation lists five satellite boxes (one for each test mass chamber and one for the beamsplitter chamber), but Chub reports not all of them are in use. We may remove the ones not used.
I wanted to set up an RTCDS model to understand this problem better. Attachment #1 is the simulink diagram of the signal flow. The idea is to put the appropriate filter shapes into the various filter blocks denoting the DARM and auxiliary DoF plants, controllers, and actuators, and then use awggui / diaggui to inject some noises and see if, in this idealized model, I can achieve good subtraction. Then we can build up to applying a time-varying cross coupling between DARM and the vertex DoF, and see how well the adaptive FF works. I still need to set up some MEDM screens to make working with the test system easier.
I figured c1omc would be the least invasive model to set this up on without risking losing any of our IR/green alignment references. Compile and install went smoothly; see Attachment #2. The c1omc model was clocking 4us before; now it's using 7us.
Attachment #3 shows the top level of the OMC model, while Attachment #4 shows the MEDM screen.
* Note to self: when closing a loop inside the realtime model, there has to be a delay block somewhere in the loop, else a compilation error is thrown.
I've made a repair to the N2 pressure monitor. I don't believe the polarity of the analog signal into the controller actually was reversed. I found the data sheet (attached) for the transducer model we have installed. Its voltage should read ~0 mV at 0 PSI and 100 mV at 100 PSI. As wired, the input voltage reads +80 mV (at the current ~80 PSI line pressure), as it should.
The controller calibrates the sensor voltage to PSI (i.e., applies a scale and offset) based on two settable reference points which appeared to be incorrect. I changed them to:
After the change, the pressure reads 80 PSI. Let's see if the time history now shows a sensible trend.
[koji, gautam, jon, steve]
Exceptions: cryo pump and 4 ion pumps
Vac Status: The vac rack power was cycled yesterday, and power to the TP1, TP2, and TP3 controllers was restored. atm3
The VME is OFF. Power to all other instruments is ON. 23.9Vdc, 0.2A
The ETMY sus tower, with the optic locked, is standing by for action in the HEPA tent at the east end.
Chub & Steve,
We swapped in our factory-clean replacement for the Varian V70D "bear-can" turbo.
The new Agilent TwisTorr 84 FS turbo pump [model x3502-64002, sn IT17346059] was installed with its intake screen, fan, and vent valve, along with the controller [model 3508-64001, sn IT1737C383] and a larger IDP-7 dry pump [model x3807-64010, sn MY17170019].
Next things to do:
All the serial vacuum signals are now interfaced to the new digital controls system. A set of persistent Python scripts will query each device at regular intervals (up to ~10 Hz) and push the readings to soft channels hosted by the modbus IOC. Similar scripts will push on/off state commands to the serial turbo pumps.
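Conceptually, each poller does something like the following sketch; the device URL, query string, and channel name are hypothetical placeholders, and the real scripts handle the device-specific protocols and error cases:

import time
import epics   # pyepics
import serial  # pyserial

# Hypothetical serial device exposed by the IOLAN terminal server
dev = serial.serial_for_url("socket://192.168.114.100:4001", timeout=1)
pv = epics.PV("C1:Vac-TP3_FORELINE_PRES")  # hypothetical soft channel on the IOC

while True:
    dev.write(b"#RD\r")                        # made-up query; protocol is device-specific
    reading = dev.readline().decode().strip()  # parsing is device-specific too
    if reading:
        pv.put(float(reading))
    time.sleep(0.1)                            # ~10 Hz polling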
Each serial device is assigned an IP address on the local subnet as follows. Its serial communication parameters as configured in the terminal server are also listed.
[steve, rana, gautam]
Rana pointed out that the OSEM cabling, because it lacks plastic shielding, is grounded directly to the table on which it is resting. A glass baking dish at the base of the seismic stack prevents electrical shorting to the chamber. However, there are also some LEMO/BNC cables on the east side of the stack whose BNC ends are just lying on the base of the stack. We should use this opportunity to think about whether anything needs to be done / what the influence of this kind of grounding is (if any) on actuator noise.
Steve also pointed out that we should replace the rubber pads which the vacuum chamber is resting on (Attachment #1, not from this vent, but just to indicate what's what). These serve the purpose of relieving small amounts of strain the chamber may experience relative to the beam tube, thus helping preserve the vacuum joints b/w chamber and tube. But after (~20?) years of being under compression, Steve thinks that the rubber no longer has any elasticity, and so should be replaced.
Gautam, Aaron, Chub & Steve,
ETMY heavy door replaced by light one.
We did the following: measured 950 particles/cf·min of 0.5 micron at the SP table; wiped the crane and its cable; wiped the chamber; placed the heavy door on a clean, merostate-covered stand; dry-wiped the o-rings; and isopropanol-wiped the aluminum light cover.
Gautam, Aaron, Chub and Steve,
Vent 80 is nearly complete; the instrument is almost at atmosphere. All four ion pump gate valves have been disconnected, though the position sensors are still connected, and all annulus valves are open. The controllers of TP1 and TP3 have been disconnected from AC power. VC1 and VC2 have been disconnected and must remain closed. Currently, the RGA is being vented through the needle valve; the RGA had been shut off at the beginning of the vent preparations. VM1 and VM3 could not be actuated. The condition status is still listed as Unidentified because of the disconnected valves.
Vent 81 is complete.
The 4 ion pumps and the cryo pump are at ~1-4 Torr (estimated, as we have no gauges there); all other parts of the vacuum envelope are at atmosphere. The P2 & P3 gauges are out of order.
V1 and VM1 are in a locked state. We suspect this is because of some interlock logic.
TP1 and TP3 controllers are turned off.
Valve conditions are as shown: ready to be opened, closed, moved, or rewired. To re-iterate: VC1, VC2, and the ion pump valves shouldn't be re-connected during the vac upgrade.
Thanks for all of your help.
I've started testing the OMC channels I'll use.
I needed to update the model because I was getting "Unable to setup testpoint" errors for the DAC channels I had created earlier, and didn't have any ADC channels defined yet. I attach a screenshot of the new model. I ran
I replaced the projector bulb. Previous bulb was shattered.
New hardware has been installed in the vacuum controls rack. It is shown in the post-install photo below.
Below is a high-level summary of where things stand, and what remains to be done.
✔ Set up of replacement controls server (c1vac).
✔ Set up of Acromag terminals.
✔ EPICS database migration.
✔ Set up of 16-port IOLAN terminal server (for multiplexing/Ethernetizing the serial devices).