The IMC has been misbehaving for the last 5 hours. Why? I turned the WFS servos off. AFAIK, Aaron was the last person to work on the IFO, so I'm not taking any further debugging steps so as not to disturb his setup.
I tried to plot a long trend of MC Transmitted today. I could not get further back than 2017 Aug 4.
The mode cleaner was misaligned, probably due to the earthquake (see the drop in the MC transmitted value slightly after 07:38:52 UTC in the second plot). The plots show PMC transmitted and MC sum signals from 10th June 07:10:08 UTC over a duration of 17 hrs. The PMC was realigned at about 4-4:15 pm today by Rana. This can be seen in the first plot.
That was likely me. I had recentered the beam on the PD I'm using for the armloss measurements, and I probably moved the wrong steering mirror. The transmission from MC2 is sent to a steering mirror that directs it to the MC2 transmission QPD; the transmission through this steering mirror I direct to the armloss MC QPD (the second is what I was trying to adjust).
Note: The MC2 trans QPD goes out to a cable that is labelled MC2 op lev. This confusion should be fixed.
I realigned the MC and recentered the beam on the QPD. Indeed the beam on MC2 QPD was up and left, and the lock was lost pretty quickly, possibly because the beam wasn't centered. Lock was unstable for a while, and I rebooted C1PSL once during this process because the slow machine was unresponsive.
When tweaking the alignment near MC2, take care not to bump the table, as this also changes the MC2 alignment.
Once the MC was stably locked, I was able to maximize MC transmission at ~15,400 counts. I then centered the spot on the MC2 trans QPD, and transmission dropped to ~14,800 counts. After tweaking the alignment again, it was recovered to ~15,000 counts. Gautam then engaged the WFS servo and the beam was centered on the MC2 trans QPD; the transmission level dropped to ~14,900 counts.
Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see whether the box was getting +24V DC from the Sorensen or not. Upon sticking the card in, the FAIL LEDs on all the VME cards came on. We immediately removed the extender card. Without any intervention from us, the FAIL LEDs went off again after ~1 minute. Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues, and the c1vac1 computer is still responsive.
But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything.
Is there a reason why extender cards shouldn't be stuck into eurocrates?
Please check your data file and compare with those Johannes made last year. I think the power values in your data file may have only three digits and fluctuate by about 2%, which introduces a large error. (see elog: 40m/14254)
On running the script again, I'm getting negative values for the loss.
The vacuum and MC are OK
The VEA vertex laptop, paola, has a flashing orange indicator which I take to mean some kind of battery issue. When the laptop is disconnected from its AC power adaptor, it immediately shuts down. So this machine is kind of useless for its intended purpose of being a portable computer we can work at optical tables with. The actual battery diagnostics (using upower) don't report any errors.
Earlier today, I rebooted a few unresponsive VME crates (susaux, auxey).
The IMC has been unhappy for a couple of days - the glitches in the MC suspensions are more frequent. I reset the dark offsets, minimized MCREFL by hand, and then re-centered the beam on the MC2 Trans QPD. In this config, the IMC has been relatively stable today, although judging by the control room StripTool WFS control signal traces, the suspension glitches are still happening. Since we have to fix the attenuator issue soon anyway, we can do a touch-up on the IMC WFS then.
I removed the DC PD used for loss measurements. I found that the AS beam path was disturbed - the alignment needs to be changed, which just makes it more work to get back to IFO locking, as I have to check the alignment onto the AS55 and AS110 PDs.
Single arm locking worked with minimal effort - although the X arm dither alignment doesn't do the intended job of maximizing the transmission. Needs a checkup.
PRMI locking (carrier resonant) was also pretty easy. Stability of the lock is good, locks hold for ~20 minutes at a time and only broke because I was mucking around. However, when the carrier is resonant, I notice a smeared scatter pattern on the ITMX camera that I don't remember from before. I wonder if the FF idea can be tested in the simpler PRMI config.
After recovering these two simpler IFO configurations, I improved the cavity alignment by hand and with the ASS servos that work. Then I re-centered all the Oplev beams onto their respective QPDs and saved the alignment offsets. I briefly attempted DRMI locking, but had little success; I'm going to try a little later in the evening, so I'm leaving the IFO with the DRMI flashing, LSC mode off.
I had some success today. I hope that the tweaks I made will allow working with the DRMI during the day as well, though it looks like the main limiting factor in lock duty cycle is angular stability of the PRC.
[Attachment #1]: Repeatable and reliable DRMI locks tonight, stability is mainly limited by angular glitches - I'm not sure yet if these are due to a suspect Oplev servo on the PRM, or if they're because of the tip-tilt PR2/PR3/SR2/SR3.
[Attachment #2]: A pass at measuring the TF from SRCL error point to MICH error point via control noise re-injection. I was trying to measure down to 40 Hz, but lost the lock, and am calling it for the night.
[Attachment #3]: Coherence between PRM oplev error point and beam spot motion on POP QPD.
Note that the MICH actuation is not necessarily optimally de-coupled by actuating on the PRM and SRM yet (i.e. the latter two elements of the LSC output matrix are not precisely tuned yet).
What is the correct way to make feedforward filters for this application? Swept-sine transfer function measurement? Or drive broadband noise at the SRCL error point and then do time-domain Wiener filter construction using SRCL error as the witness and MICH error as the target? Or some other technique? Does this even count as "feedforward" since the sensor is not truly "outside" the loop?
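One way to realize the time-domain option, as a minimal sketch (the channel pairing, sample rate, and tap count are my own placeholder assumptions, not a tested pipeline):

import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_fir(witness, target, ntaps=256):
    """Return causal FIR taps w minimizing |target - w * witness|^2."""
    n = len(witness)
    # autocorrelation of the witness (first ntaps lags)
    r = np.array([np.dot(witness[:n - k], witness[k:]) for k in range(ntaps)]) / n
    # cross-correlation between witness and target
    p = np.array([np.dot(witness[:n - k], target[k:]) for k in range(ntaps)]) / n
    return solve_toeplitz(r, p)  # solve the Toeplitz normal equations R w = p

# hypothetical usage, with both error signals sampled at the same rate:
# w = wiener_fir(srcl_err, mich_err, ntaps=512)
# mich_pred = np.convolve(srcl_err, w)[:len(mich_err)]

The same taps could equally be fit in the frequency domain; this sketch is just the time-domain construction asked about above.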
This problem resurfaced. I'm doing the debugging.
6:30pm - "Solved" using the same procedure of stepping through the whitening gains with a small (10 DAC cts pk) signal applied. Simply stepping through the gains with input grounded doesn't seem to do the trick.
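For reference, the stepping procedure is just cycling the whitening-gain EPICS record while the small excitation runs - a sketch using pyepics, with a hypothetical channel name:

import time
import epics  # pyepics channel access

chan = 'C1:LSC-ASDC_WhiteGain'   # hypothetical whitening-gain channel
for gain in range(0, 46, 3):     # step through 0-45 dB in 3 dB increments
    epics.caput(chan, gain)
    time.sleep(2)                # let things settle between steps
epics.caput(chan, 15)            # restore a nominal gain at the end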
With the DRMI locked, I drove a line in MICH using the sensing matrix infrastructure. Then I looked at the error points of MICH, PRCL and SRCL. Initially, the sensing line oscillator output matrix for MICH was set to drive only the BS. Subsequently, I changed the --> PRM and --> SRM matrix elements until the line height in the PRCL and SRCL error signals was minimized (i.e. the change to PRCL and SRCL due to the BS moving, which is a geometric effect, is cancelled by applying the opposite actuation to the PRM/SRM respectively). Then I transferred these to the LSC output matrix (old numbers in brackets):
MICH--> PRM = -0.335 (-0.2655)
MICH--> SRM = -0.35 (+0.25)
I then measured the loop TFs - all 3 loops had UGFs around 100 Hz, coinciding with the peaks of the phase bubbles. I also ran some sensing lines and did a sensing matrix measurement, Attachment #1 - it looks similar to what I have obtained in the past, although the relative angles between the DoFs make no sense to me. I guess the AS55 demod phase can be tuned up a bit.
The demodulation was done offline - I mixed the time series of the actuator and sensor signals with a "local oscillator" cosine wave - but instead of using the entire 5 minute time series and low-passing the mixer output, I divvied up the data into 5 second chunks, windowed with a Tukey window, and have plotted the mean value of the resulting mixer output.
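A sketch of that chunked demodulation (the sample rate, line frequency, and chunk length below are placeholders):

import numpy as np
from scipy.signal import windows

def demod_chunks(x, fs=2048.0, f_line=311.1, t_chunk=5.0, alpha=0.25):
    """Mix x against a cosine LO at f_line; average in t_chunk windows."""
    n = int(t_chunk * fs)
    t = np.arange(n) / fs
    lo = np.cos(2 * np.pi * f_line * t)
    w = windows.tukey(n, alpha)
    nchunks = len(x) // n
    out = np.empty(nchunks)
    for i in range(nchunks):
        seg = x[i * n:(i + 1) * n] * w
        out[i] = np.mean(seg * lo)  # mean of the mixer output per chunk
    return out

Mixing against a sine LO as well would give the quadrature component, i.e. the relative angles plotted in Attachment #1.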
Unrelated to this work: I re-aligned the PMC on the PSL table, mostly in Pitch.
Gautam was doing some DRMI locking, so I replaced the photodiode at the AS port to begin loss measurements again.
I increased the resolution on the scope by selecting Average (512) mode. I was a bit confused by this, since Yuki was correct that I had only 4 digits recorded over ethernet, which made me think this was an I/O setting. However, the sample acquisition setting was the only thing I could find on the Tektronix scope or in its manual about improving vertical resolution. This didn't change the saved file, but I found the more extensive programming manual for the scope, which confirms that using average mode does increase the resolution... from 9 to 14 bits! I'm not even getting that many.
There's another setting, DATa:WIDth, which is the number of bytes per data point transferred from the scope.
I tried using the *.25 scope instead, with no better results. Changing the vertical resolution directly doesn't change this either. I've also tried changing most of the ethernet settings. I don't think it's something on the scripts side, because I'm using the same scripts that apparently generated the most recent of Johannes' and Yuki's files; I did look through, e.g., tds3014b.py, and didn't see the resolution explicitly set. Indeed, I get the 7 bits of resolution that function specifies, but most of them aren't filled by the scope. This makes me think the problem is in the scope settings.
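For bookkeeping, the settings in question as a raw SCPI exchange - the commands are from the TDS3000-series programming manual, but whether the scope accepts them on a bare TCP socket, and on which port, is an assumption (the actual transport is whatever tds3014b.py uses):

import socket

HOST = 'scope-ip-goes-here'  # placeholder; use the scope's actual address
s = socket.create_connection((HOST, 23), timeout=5)  # port is an assumption
for cmd in [b'ACQuire:MODe AVErage',  # average mode: 9 -> 14 bit vertical res
            b'ACQuire:NUMAVg 512',
            b'DATa:WIDth 2',          # 2 bytes per transferred data point
            b'DATa:ENCdg RIBinary']:
    s.sendall(cmd + b'\n')
s.sendall(b'CURVe?\n')                # then read back the waveform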
sstop using the ssscope, and just put the ssssignal into the DAQ with sssssome whitening. You'll get 16 bits.
I've been looking into the cross-coupling from the SRCL loop control point to the Michelson error point.
[Attachment #1] - Swept sine measurement of transfer function from SRCL_OUT_DQ to MICH_IN1_DQ. Details below.
[Attachment #2] - Attempt to measure time variation of coupling from SRCL control point to MICH error point. Details below.
[Attachment #3] - Histogram of the data in Attachment #2.
[Attachment #4] - Spectrogram of the duration in which the data in #2 and #3 were collected, to investigate the occurrence of fast glitches.
Hypothesis: (so that people can correct me where I'm wrong - 40m tests are on DRMI so "MICH" in this discussion would be "DARM" when considering the sites)
Measurement details and next steps:
Attachments #2 and #3
This problem resurfaced, which I noticed when I couldn't get the single arm locks going.
The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back.
Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though.
It is posted on the 40m wiki with Gautam's help. Printed copies have also been posted near the doors.
I began moving the AA and AI chassis over to 1X1/1X2 as outlined in the elog.
The chassis were mostly filled with unused cables. There was one cable attached to the output of a QPD interface board, but there was nothing attached to the input, so it was clearly not in use and I disconnected it.
I also attach a picture of some of the SMA connectors I had to rotate to accommodate the chassis in their new locations.
The chassis are installed, and the anti-imaging chassis can be seen second from the top; the anti-aliasing chassis can be seen 7th from the top.
I need to break out the SCSI on the back of the AA chassis, because the ADC breakout board only has a DB36 adapter available; the other cables are occupied by the signals from the WFS dewhitening outputs.
I ran a BNC from the PD on the AS table along the cable rack to a free ADC channel on the LSC whitening board. I laid the BNC on top of the other cables in the rack, so as not to disturb anything. I was also careful not to touch the other cables on the LSC whitening board when I plugged in my BNC. The PD now reads out to... a mystery channel. The mystery channel then goes to c1lsc ADC0 channels 9-16 (since the BNC goes to input 8, it should be #16). To find the channel, I opened the c1lsc model and found that adc0 channel 15 (0-indexed in the model) goes to a terminator.
Rather than mess with the LSC model, Gautam freed up C1:ALS-BEATY_FINE_I, and I'm reading out the AS signal there.
I misaligned the x-arm, then re-installed the AS PO PD, using the scope to center the beam before connecting it to the BNC (first to the mystery channel, then to BEATY). I turned off all the lights.
I went to misalign the x-arm, but some of the control channels are white-boxed. The only working screen is on pianosa.
The noise on the AS signal is much larger than that on the MC trans signal, and the DC difference between the misaligned and locked states is much less than the RMS (spectrum attached); the coherence between MC trans and AS is low. However, after estimating that for ~30ppm loss the locked vs misaligned states should differ by only ~0.3-0.4%, and double-checking that we are well above ADC and dark noise (blocked the beam, took another spectrum) and not saturating the PD, these observations started to make more sense.
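That estimate, written out (the ITM/ETM transmissivities are assumed nominal values; the exact number moves around at the few-tenths-of-a-percent level depending on the assumptions):

import numpy as np

T1, T2 = 1.4e-2, 14e-6      # ITM, ETM power transmissivity (assumed)
L_rt = 30e-6                # trial round-trip arm loss

r1 = np.sqrt(1 - T1)
r2 = np.sqrt(1 - T2) * (1 - L_rt / 2)   # lump the loss in with the ETM
r_res = (r1 - r2) / (1 - r1 * r2)       # on-resonance amplitude reflectivity
ratio = r_res**2 / r1**2                # locked / misaligned reflected power
print('fractional change: %.2f%%' % (100 * abs(1 - ratio)))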
To make the measurement in cds, I also made the following changes to a copy of Johannes' assess_armloss_refl.py that I placed in /opt/rtcds/caltech/c1/scripts/lossmap_scripts/armloss_cds/ :
I started taking a measurement, but quickly realized that the mode cleaner had been locked to a higher order mode for about an hour, so I spent some time moving the MC. It would repeatedly lock on the 00 mode, but the alignment must be bad, because the transmission fluctuates between 300 and 1400 and the lock only lasts about 5 minutes.
Prep for this work:
I was trying to get some pics of the optics as a zeroth-level reference for the pre-vent loss with the single arms locked, but since our SL7 upgrade, the sensoray won't work anymore. I'll try fixing this during the daytime.
The 40m vacuum envelope has one large single O-ring on the OOC west side. All other doors have double O-rings with annuli.
There are 3 spacers to protect the O-ring. They should not be removed!
The Cryo-pump static seal to VC1 is also Viton. All gate valves and right-angle valve plates have single Viton O-ring seals.
Small single Viton O-rings are on all optical-quality viewports.
Helium will permeate through these fast, so leak-checking time is limited to 5-10 minutes.
All other seals are copper gaskets. We have 2 manual right-angle valves with metal dynamic seals [VATRING] as VV1 & RV1.
Back to loss measurements.
I replaced the PD I've been using for the AS beam.
I misaligned the x arm.
I tried to lock the y arm, but the PRC was locked so I was unable to. Gautam reminded me where the config scripts are.
The armloss measurement script needed two additional modifications:
I successfully ran the loss measurement script for the x and y arms. I'm getting losses of ~100ppm from the first estimates.
I made the following changes to the lossmap script:
When the optic does not align itself at the ideal position, I'm noticing that it often locks on a 01 mode. When the cavity is then misaligned and restored, it can no longer obtain lock. To fix this, I've moved my 'save' commands to just before the loop begins. This means the script may take longer to run, but as long as the cavity is initially locked and well aligned, this should make it more robust against wandering off and never reacquiring lock.
I left the lossmap script running for the x-arm. Next would be to run it for the y-arm, but I see that after stepping to a few positions the lock is again lost. It's still trying to run, but if you want to stop it, no data already taken will be lost. To stop it, go to the remaining terminal open on rossa and press Ctrl+C.
the analysis needs:
I made additional measurements on the x and y arms, at 5 offset positions for each arm (along with 6 measurements at the "zeroed" position).
I've begun prepping the IFO for the vent, and completed most of the IFO-related items on the checklist. The power into the MC has been cut, but the low-power autolocker has not been checked. I will finish up tomorrow and post the go-ahead. The PSL shutter is closed for tonight.
Following the checklist, I did these:
@Steve & Chub, we are ready to vent tomorrow (Monday Nov 19).
Vent 80 is nearly complete; the instrument is almost at atmosphere. All four ion pump gate valves have been disconnected, though the position sensors are still connected, and all annulus valves are open. The controllers of TP1 and TP3 have been disconnected from AC power. VC1 and VC2 have been disconnected and must remain closed. Currently, the RGA is being vented through the needle valve; the RGA had been shut off at the beginning of the vent preparations. VM1 and VM3 could not be actuated. The condition status is still listed as Unidentified because of the disconnected valves.
Gautam, Aaron, Chub and Steve,
Vent 81 is complete.
The 4 ion pumps and the cryo pump are at ~1-4 Torr (estimated, as we have no gauges there); all other parts of the vacuum envelope are at atmosphere. The P2 & P3 gauges are out of order.
V1 and VM1 are in a locked state. We suspect this is because of some interlock logic.
TP1 and TP3 controllers are turned off.
Valve conditions are as shown: ready to be opened, closed, moved, or rewired. To reiterate: VC1, VC2, and the ion pump valves shouldn't be re-connected during the vac upgrade.
Thanks for all of your help.
As I was turning off the lights in the VEA, I heard a rattling sound from near the PSL enclosure. I followed it to a valve - I couldn't see a label on this valve in my brief effort to find one, but it is on the south-west corner of the IMC table, so maybe VABSSCI or VABSSCO? The power cable is somehow spliced with an attachment that looks to be bringing gas in/out of the valve (see Attachment #1), and the nut on the bottom was loose; the whole power cable + metal attachment was responsible for the rattling. I finger-tightened the nut and the sound went away.
I checked the IMC alignment following the vent, for which the manual beam block placed on the PSL table was removed. The alignment is okay; after a minor touch-up, the MC Trans was ~1200 cts, which is roughly what it was pre-vent. I've closed the PSL shutter again.
Rana, Aaron, Gautam
The old Zojirushi has died. We received and commissioned our new Technivorm Moccamaster today. It is good.
I finished running the cabling for the OMC, which involved running 7x 50ft DB9 cables from the OMC_NORTH rack to the 1X2 rack, laying cables over others on the tray. I tried not to move other cables to the extent I could, and I didn't run the new cables under any old cables. I attach a sketch diagram of where these cables are going, not inclusive of the entire DAC/ADC signal path.
I also had to open up the AA board (D050387, D050374), because it had an IPC connector rather than the DB37 that I needed to connect. The DAC sends signals to a breakout board that is in use (D080302) and had a DB37 output free (though note this carries only 4 DAC channels). Inside, the AA board had two IPC 40s connected via an adapter to the final IPC 70 output. I replaced the IPC 40 connectors with DB37 breakouts, and made a new slot (I couldn't find a DB37 punch, so this is not great...) on the front panel for one of them, so I can attach it to the breakout board.
I noticed there were many unused wires, so I had to confirm that I had the wiring correct (I still haven't confirmed by driving the channels, but will do). There was no DCC entry for D080302, but I grabbed the diagrams for the whitening boards it was connected to (D020432) and for the AA board I was opening up, as well as checking out elog 8814, and I think I got it. I'll confirm this manually and make a diagram if it's not fake news.
Attachment #1 is a block diagram depicting the pathway by which the vertex DOF control signals can couple into DARM (adapted from a similar diagram in Gabriele's Virgo note on the subject). I've also indicated some points where noise can couple into either loop. In general, there are sensing noises that couple in at the error point of the loop, and actuation noises that couple in at the control point. In this linear picture, each block represents a (possibly time varying) transfer function. So we can write out the node-to-node transfer functions and evaluate the various couplings.
The motivation is to see if we can first simulate with some realistic noise and time-varying couplings (and then possibly test on the realtime system) the effectiveness of the filter denoted by "FF" in canceling out the shot noise from the auxiliary loop being re-injected into the DARM loop via the DARM sensor. Does this look correct?
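To make the cancellation condition explicit (symbols are my own, following the diagram): with $u_s$ the SRCL control signal, $K$ the coupling from the SRCL control point into the DARM error point, and $\mathrm{FF}$ the feedforward path fed from a pickoff of $u_s$,

\begin{equation}
  d_{\mathrm{err}} \supset (K + \mathrm{FF})\, u_s
  \quad \Longrightarrow \quad \mathrm{FF} = -K .
\end{equation}

With SRCL dominated by its sensing (shot) noise $n_s$, $u_s \approx -\tfrac{C_s}{1+G_s}\, n_s$, so $\mathrm{FF} = -K$ removes exactly the re-injected shot-noise term.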
With Chub's help, I've setup a mini cleanroom at EY - Attachment #1. The HEPA unit is running on high now. All surfaces were wiped with isopropanol, we can wipe everything down again on Monday and replace the foil.
I replaced the projector bulb. The previous bulb was shattered.
I've started testing the OMC channels I'll use.
I needed to update the model, because I was getting "Unable to setup testpoint" errors for the DAC channels that I had created earlier, and didn't have any ADC channels yet defined. I attach a screenshot of the new model. I ran
Gautam, Aaron, Chub & Steve,
ETMY heavy door replaced by light one.
We did the following: measured 950 particles/cf min of 0.5 micron at the SP table, wiped the crane and its cable, wiped the chamber,
placed the heavy door on a clean Ameristat-covered stand, dry-wiped the o-rings, and wiped the aluminum light cover with isopropanol.
[steve, rana, gautam]
Rana pointed out that the OSEM cabling, because of lack of a plastic shielding, is grounded directly to the table on which it is resting. A glass baking dish at the base of the seismic stack prevents electrical shorting to the chamber. However, there are some LEMO/BNC cables as well on the east side of the stack, whose BNC ends are just lying on the base of the stack. We should use this opportunity to think about whether anything needs to be done / what the influence of this kind of grounding is (if any) on actuator noise.
Steve also pointed out that we should replace the rubber pads which the vacuum chamber is resting on (Attachment #1, not from this vent, but just to indicate what's what). These serve the purpose of relieving small amounts of strain the chamber may experience relative to the beam tube, thus helping preserve the vacuum joints b/w chamber and tube. But after (~20?) years of being under compression, Steve thinks that the rubber no longer has any elasticity, and so should be replaced.
Exceptions: cryo pump and 4 ion pumps
Vac Status: The vac rack power was cycled yesterday, and power to the TP1, TP2 and TP3 controllers was restored (Attachment #3).
The VME is OFF. Power to all other instruments is ON. 23.9 VDC, 0.2 A.
The ETMY sus tower, with the optic locked, is standing by for action in the HEPA tent at the east end.
[koji, gautam, jon, steve]
I wanted to set up an RTCDS model to understand this problem better. Attachment #1 is the simulink diagram of the signal flow. The idea will be to put the appropriate filter shapes into the various filter blocks denoting the DARM and auxiliary DoF plants, controllers and actuators, and then use awggui / diaggui to inject some noises and see if, in this idealized model, I can achieve good subtraction. Then we can build up to applying a time-varying cross coupling between DARM and the vertex DoF, and see how well the adaptive FF works. I still need to set up some MEDM screens to make working with the test system easier.
I figured c1omc would be the least invasive model to set this up on without risking losing any of our IR/green alignment references. Compile and install went smoothly, see Attachment #2. The c1omc model was clocking 4us before; now it's using 7us.
Attachment #3 shows the top level of the OMC model, while Attachment #4 shows the MEDM screen.
* Note to self: when closing a loop inside the realtime model, there has to be a delay block somewhere in the loop, else a compilation error is thrown.
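As an offline sanity check of the same subtraction idea before doing it in the realtime system (all transfer functions here are toy stand-ins, not the real plants):

import numpy as np
from scipy import signal

fs, T = 2048, 64
rng = np.random.default_rng(0)
n_aux = rng.standard_normal(fs * T)          # auxiliary-loop noise (white)

b, a = signal.butter(2, 50 / (fs / 2))       # toy SRCL->DARM coupling TF
contam = signal.lfilter(b, a, n_aux)         # what leaks into DARM

# FF path: same shape with a small gain/frequency error, mimicking an
# imperfectly measured (or slowly drifting) coupling
b_ff, a_ff = signal.butter(2, 52 / (fs / 2))
ff = 0.97 * signal.lfilter(b_ff, a_ff, n_aux)

residual = contam - ff
print('rms suppression: %.1fx' % (np.std(contam) / np.std(residual)))

Even a few-percent mismatch caps the achievable suppression, which is the motivation for making the FF adaptive.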
Recently we wondered at the meeting what the IMC round trip loss was. I had done several ringdowns in the winter of 2017, but because the incident light on the cavity wasn't being extinguished completely (the AOM 0th order beam is used), the full Isogai et al. analysis could not be applied (there were FSS-induced features in the reflection ringdown signal). Nevertheless, I fitted the transmission ringdowns. They looked like clean exponentials, and judging by the reflection signals (see previous elogs in this thread), the first ~20us of data is a clean exponential, so I figured we may get some rough value of the loss by just fitting the transmission data.
The fitted storage time is ~60 usec. However, this number isn't commensurate with the 40m IMC spec of a critically coupled cavity with 2000ppm transmissivity for the input and output couplers.
Attachment #1: Expected storage time for a lossless cavity, with round-trip length ~27m. MC2 is assumed to be perfectly reflecting. The IMC length is known to better than 100 Hz uncertainty, because the Marconi RF modulation signal is set accordingly. For the 40m spec, I would expect storage times of ~40 usec, but I measure almost 30% longer, at ~60 usec.
Attachment #2: Fits and residuals from the 10 datasets I had collected. This isn't a super informative plot because there are 10 datasets and fits, but to the eye, the fits are good, and the diagonal elements of the covariance matrix output by scipy's curve_fit back this up. The function used to fit the t > 0 portions of these signals (because the light was extinguished at t=0 by actuating on the AOM) is A*exp(-t/tau), where A and tau are the fitted parameters. In the residuals, the same artefacts visible in the reflection signal are seen.
Attachment #3: Scatter plot of the data. The widths of the circles are proportional to the fit error on the individual measurements (I just scaled the marker size arbitrarily to be able to visually see the difference in uncertainty; the width doesn't exactly indicate the error), while the dashed lines are the global mean and +/- 1 sigma levels.
Attachment #4: Cavity pole measurement. Using this, I get an estimate of the loss that is a much more believable .
Need to vary the start/stop times in the fit to test for systematics.
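That scan can be wrapped around the same fit; a minimal sketch (the data loading is a placeholder, and the stand-in trace below just mimics a ~60 us decay):

import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, A, tau):
    return A * np.exp(-t / tau)

# t, P = load_ringdown(...)        # placeholder for the saved scope trace
t = np.linspace(0, 200e-6, 2000)   # stand-in axis, seconds after extinction
P = ringdown(t, 1.0, 60e-6) + 0.005 * np.random.randn(t.size)

for t_start in (0, 5e-6, 10e-6, 20e-6):       # vary the fit start time
    m = t >= t_start
    popt, pcov = curve_fit(ringdown, t[m], P[m], p0=[1.0, 50e-6])
    print('t_start = %2.0f us: tau = %.1f +/- %.1f us'
          % (t_start * 1e6, popt[1] * 1e6, np.sqrt(pcov[1, 1]) * 1e6))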
I need to hook up +/- 24 V supplies to the OMC whitening/dewhitening boxes that have been added to 1X2.
There are trailing +24V fuse slots, so I will extend that row to leave the same number of slots open.
While removing one +24V wire to add to the daisy chain, I let the wire brush an exposed conductor on the ground side, causing a spark. FSS_PCDRIVE and FSS_FAST are at different levels than before this spark. The 24V Sorensens are drawing the same currents as before, according to the labels. Gautam advised me to remove the final fuse in the daisy chain before adding additional links.
gautam: we peeled off some outdated labels from the Sorensens in 1X1 such that each unit now has only 1 label visible reflecting the voltage and current. Aaron will post a photo after his work.
I started putting together some code to implement some ideas we discussed at the Tuesday meeting here. The pipeline isn't set up yet, but I think it's commented okay, so if people want to play around with it, the code lives on the 40m gitlab.
Initial results and conclusions:
There still seem to be some data quality issues with the ringdown data I have, so I don't think we really gain anything from running this analysis on the data I have already collected - but in the future, we can do the ringdown with complete extinguishing of the input light, and repeat the analysis.
As for whether we should clean the IMC mirrors - I'm going to see how much power comes out at the REFL port (with PRM aligned) this afternoon, and compare to the input power. This technique suffers from uncertainty in the Faraday insertion loss, isolation and IMC parameters, but I am hoping we can at least set a bound on what the IMC loss is.
Both were measured using the FieldMate power meter. I was hesitant to use the Ophir power meter, as there is a label on it that warns against exceeding 100 mW. I can't find anything in the elog/wiki about the measured insertion loss / isolation of the input Faraday, but this seems like a pretty low amount of light to get back from the PRM. The IMC visibility using the MC_REFL DC values is ~87%. Assuming perfect transmission of the 87% of the 97mW that's coupled into the IMC, and assuming a further 5% loss between the Faraday rejected port and the AP table, the Faraday insertion loss would be ~30%. Realistically, the IMC transmission is lower. There is also some part of the light picked off for IPPOS. Judging by the shape of the REFL spot on the camera, it doesn't look clipped to me.
Either way, it seems like we are only getting ~half of the 1W we send in on the back of the PRM. So maybe it's worth investigating the situation in the IOO chamber during this vent.
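For bookkeeping, the insertion-loss algebra above, solved for the single-pass Faraday loss f (the AP-table reading below is a hypothetical placeholder - substitute the measured value):

import numpy as np

P_in, vis, path_loss = 97e-3, 0.87, 0.05   # W, IMC visibility, AP path loss
P_AP = 40e-3                               # W, hypothetical AP-table reading
# P_AP = P_in * vis * (1 - f)**2 * (1 - path_loss), assuming R_PRM ~ 1
f = 1 - np.sqrt(P_AP / (P_in * vis * (1 - path_loss)))
print('implied Faraday insertion loss: %.0f%%' % (100 * f))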
The c1psl, c1susaux, c1iool0 and c1aux crates were keyed. Also, the physical shutter on the PSL NPRO, which was closed last Monday for the Sundance crew filming, was opened, and the PMC was locked. The PMC remains locked, but there is no light going into the IMC.
I did some ray tracing and determined that the aux beam will enter the OMC after losing some power in reflection off the OMPO (I couldn't find this spec on the wiki; I remember something like 90-10 or 50-50) and the SRM (R~0.9), and then in transmission through the OMPO. This gives us something like 8%-23% of the aux light going to the OMC, depending on the OMPO transmission. This elog tells me the aux power before the recombination BS is ~37mW, so ~3.7mW onto the SRM, which is consistent with the OMPO being 90-10, and would mean the aux power onto the OMC is ~3mW, plenty for aligning into the OMC.
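The same numbers, written out (the 90-10 split is the assumption being tested):

P_aux = 37e-3              # W, before the recombination BS (from the elog)
R_ompo, T_ompo = 0.1, 0.9  # assumed 90-10 pickoff
R_srm = 0.9
P_omc = P_aux * R_ompo * R_srm * T_ompo
print('%.1f mW onto the OMC' % (P_omc * 1e3))   # ~3.0 mW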
Since the dewhitening board I'd intended to use isn't working (see elog), I'm going to scan the OMC length with a function generator while adjusting the alignment by hand, as was briefly attempted during the last vent.
I couldn't identify a PD on the AP table as the one I had used during the last vent; I suspect I co-opted that very same PD for the arm loss measurements. It is a PDA520, which has a large (100mm^2) area, so I've repurposed it again to catch the OMC prompt reflection during the mode scans. I've mounted it approximately where I expect the refl beam to exit the AS chamber.
I brought over the cart that usually lives at 1X1 to help me organize materials near the OMC chamber for opening.
I replaced the banana connectors we'd been using to send HV to the HV driver with soldered wires going to the final locking connector only, so now the 150V is on a safe cable.
I powered up the DCPD sat box and again confirmed that it's working. I sent a 500Hz sine wave through the sat box and confirmed that I can see the signal in the DCPD channels I've defined in cds. I gave the TT and OMC-L PZT channels bad assignments on the ADC (right now, what reads as 'OMC_PZT_MON' is actually the unfiltered output from the sat box, while the DCPD channels are for the filtered outputs of the box), because, given the way the signals are grouped on the cables, I can't attach all of them at once. For this vent, I'll only really need the DCPD outputs, and since I have confirmed that I can read out both of those, I'll fix up the HV driver mon channels later.
I kept having trouble keeping the power LEDs on the dewhitening board 'on'. I did the following:
1. I noticed that the dewhitening board was drawing a lot of current (>500mA), so I initially thought that the indicators were just turning on until I blew the fuse. I couldn't find the electronics diagrams for this board, so I was using analogous boards' diagrams and wasn't sure how much current to expect it to draw. I swapped in 1A fuses (only for the electronics I was adding to the system).
2. Now the +24V indicator on the dewhitening board wasn't turning on, and the -24V supply was alternately drawing ~500mA and 0mA in a ~1Hz square wave. Thinking I could be dropping voltage along the path to the board, I swapped out the cables leading to the whitening/dewhitening boards with 16AWG (was 18AWG). This didn't seem to help.
3. Since the whitening board seemed to be consistently powered on, I removed the dewhitening board to see if there was a problem with it. Indeed, I'd burned out the +24V supply electronics - two resistors were broken entirely, and the breadboard near the voltage regulator had been visibly heated.
I noticed that the +/-15V currents are slightly higher than the labels, but didn't notice whether they were already different before I began this work.
I also noticed one pair of wires in the area of 1X1 where I was working that wasn't attached to power (or anything). I didn't know what it was for, so I've attached a picture.
Disclaimer: This is almost certainly some user error on my part.
I've been trying to get this running for a couple of days, but am struggling to understand some behavior I've been seeing with DTT.
I wanted to measure some transfer functions in the simulated model I set up.
To see if this is just a feature of the simulated model, I tried measuring the "plant" filter in the C1:LSC-PRCL filter bank (which is also just a pendulum TF), and ran into the same error. I also tried running the DTT template on donatella (Ubuntu12) and pianosa (SL7), and got the same error, so this must be something I'm doing wrong with the way the measurement is being set up / run. I couldn't find any mention of similar problems in the SimPlant elogs I looked through; does anyone have an idea as to what's going on here?
* I can't get the "import" feature of DTT to work - I go through the GUI prompts to import an ASCII txt file exported from FOTON but nothing selectable shows up in DTT once the import dialog closes (which I presume means that the import was successful). Are we using an outdated version of DTT (GDS-2.15.1)? But Attachment #1 shows the measured part of the pendulum TF, and is consistent with what is expected until the measurement terminates with a synchronization error.
The import problem is fixed - when importing, you have to give names to the two channels that define the TF you're importing (these can be arbitrary, since the ASCII file doesn't have any channel name information). Once I did that, the import works. You can see that, while the measurement ran, the foton TF matches the DTT-measured counterpart.
11 Dec 2pm: After discussing with Jamie and Gabriele, I also tried changing the # of points, start frequency, etc., but ran into the same error (though admittedly I only tried 4 combinations of these, so not exhaustive).
Taking another look at the datasheet, I don't think LM7812 is an appropriate replacement and I think the LM2940CT-12 is supposed to supply 1A, so it's possible the problem actually is on the power board, not on the dewhitening board. The board takes +/- 15V, not +/- 24...