  14149   Thu Aug 9 12:31:13 2018   gautam | Update | CDS | CDS status update

The model seems to have run without issues overnight. Not completely related, but the MC1 shadow sensor signals also don't show any abnormal excursions to negative values in the last 48 hours. I'm thinking about re-connecting the satellite box (but preserving the breakout setup at 1X6 for a while longer) and re-locking the IMC. I'll also start c1ass on the c1lsc frontend. I would say that the other models on c1lsc (i.e. c1oaf, c1cal, c1daf) aren't really necessary for basic IFO operation.

Quote:

As part of this slow but systematic debugging, I am turning on the c1lsc model overnight to see if the model crashes return.

  14148   Thu Aug 9 02:12:13 2018   gautam | Update | COC | South East or West?

Summary:

For operating the SRC in the "Signal-Recycled" tuning, the SRC macroscopic length needs to be ~4.04m (compared to the current value of ~5.399m), assuming we don't do anything fancy like changing the modulation frequencies and not transmitting through the IMC. We're putting together a notebook with all the calculations, but today I was thinking about what the signal extraction path should be, specifically which chamber the SRM should be in. I'm just noting down the thoughts here while they're fresh in my head; all this has to be fleshed out, and maybe I'm making this out to be more of a problem than it actually is.

Details:

  • For the current modulation frequencies, if we want the resonance conditions such that the f2 sideband is resonant in the SRC (but not f1, i.e. the small Schnupp asymmetry regime) while the carrier is resonant in the arms (required for good sensing of the SRC length), the macroscopic length of the SRC needs to be changed to ~4.04m (a rough length check is sketched after the table below).
  • Practically, this means that the folded SRC would only have one folding mirror (SR2).
  • There is a shorter SRC length of ~1.something metres which would work, but that would involve changing the relative position between ITMs and BS (currently ~2.3m) so I reject that option for now.
  • So the SR2 would be roughly where it is right now, ~20cm from the BS.
  • The question then becomes, where do we direct the reflection from the SR2? We need an optical path length of ~1.5m from SR2. So options are 
    • ITMY table (East)
    • ITMX table (South)
    • IMC table (West)
  • Moreover, after the SRM, we have to accommodate:
    • Some kind of pickoff for in-air PDs.
    • OFI.
    • OMC MMT.
    • OMC.
  • Some kind of CBA (as of now I think going to the ITMY table is the best option):
Option: ITMY
  Advantages:
    • Easy to direct beam from the BS/PRM chamber to the ITMY table (i.e. we don't have to worry too much about avoiding other optics in the path etc).
    • Ease of access to the chamber, ease of working in there.
    • ITMY table probably has the most room to work out an OFI + OMC MMT + OMC solution.
  Disadvantages:
    • AS beam extraction to air will be more complicated; we possibly have to do it on the ITMY optical table.
    • Not sure if the ITMY table can accommodate all of the output optics subsystems I listed above.
    • Routing the LO beam to this table would be tricky I guess.
Option: ITMX
  Advantages:
    • Routing the LO beam for homodyne detection is probably easiest in this chamber.
    • Allows for a small AoI on the folding mirror, reducing the impact of astigmatism.
  Disadvantages:
    • Pain to work in this chamber because of the IMC tube.
    • Steering the beam from SR2 to the ITMX table possibly means threading the needle between PRM and PR3.
Option: IMC
  Advantages:
    • Probably allows the use of (almost) the entire existing OMC chamber for the output optics (OFI, OMC MMT, OMC).
  Disadvantages:
    • IMC table is crowded (2 SOS towers, several steering optics for the input beam, input Faraday).
    • Not sure how the performance of the seismic isolation stacks on these smaller tables compares to the larger optical tables.
    • Painful to work in these smaller chambers.
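
For reference, here is a very rough numerical check of the quoted macroscopic lengths (a minimal sketch, assuming the condition for the f2 sideband reduces to L_SRC = (N + 1/2) * c / (2 * f2) and taking f2 ~ 5 x 11.066 MHz; the real calculation in the notebook will include the Schnupp asymmetry and the exact resonance conditions, so treat these numbers as approximate):

c = 299792458.0                # speed of light [m/s]
f2 = 5 * 11.066e6              # approximate second modulation frequency [Hz]
for N in range(3):
    L = (N + 0.5) * c / (2 * f2)
    print(f"N = {N}: L_SRC ~ {L:.2f} m")
# N = 0 gives ~1.35 m (the 'shorter' option mentioned above),
# N = 1 gives ~4.06 m, in the same ballpark as the ~4.04 m quoted in the summary.
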
  14147   Wed Aug 8 23:06:59 2018   gautam | Update | SUS | Another low noise bias path idea

Today while Rich Abbott was here, Koji and I had a brief discussion with him about the HV amplifier idea for the coil driver bias path. He gave us some useful tips, perhaps most useful being a topology that he used and tested for an aLIGO ITM ESD driver which we can adapt to our application. It uses a PA95 high voltage amplifier, which differs from the PA91 mainly in the output voltage range (up to 900V for the former, "only" 400V for the latter). He agrees with the overall design idea of 

  • Having a LN opamp with the HV amp inside the feedback loop for better voltage noise at low frequencies.
  • Having a passive RC network at the output of the HV amp to filter out noise at high frequencies.

He also gave some useful suggestions like 

  • Using the front panel of the box as a heatsink for the HV amps.
  • Testing the stability of the nested opamp loop by "pinging" the output of the opamp with some pulses from a function generator and monitoring the response to this perturbation on a scope.

I am going to work on making a prototype version of this box for 5 channels that we can test with ETMX. I have been told that the coupling from side coil to longitudinal motion is of the order of 1/30, in which case maybe we only need 4 channels.

  14146   Wed Aug 8 23:03:42 2018   gautam | Update | CDS | c1lsc model started

As part of this slow but systematic debugging, I am turning on the c1lsc model overnight to see if the model crashes return.

  14145   Wed Aug 8 20:56:11 2018   Koji | Update | PSL | EOM measurement preparation

Rich and I worked on the EOM measurement. After the measurement, the setup was reverted to the nominal state:

  • AUX PLL mixer was restored to ZAD-6
  • The PLL gain was restored to 3.10
  • The main PSL marconi is connected to the freq generator again. Using the beat note, I've confirmed that the modulations are applied on the beam.
  • The PSL HEPA was reduced from 100 to 30.
  14144   Tue Aug 7 23:06:30 2018   Koji | Update | PSL | EOM measurement preparation

I was preparing for the aLIGO EOM measurement to be carried out tomorrow afternoon.

I did a few modifications to the PLL setup.

  • The freq mixer in the PLL setup was replaced with a ZP3 (level 7), in place of the ZAD-6
  • The PLL gain was reduced from 3.10 to 2.80 to prevent servo oscillation
  • The main PSL Marconi is connected to the PLL mixer and provides a fixed 200MHz, 8dBm signal.
  • The main PSL modulation is off.

Tomorrow I am going to modulate the EOM with the AUX Marconi via an amplifier (probably)

Automated scripts (AGinit.py and AGmeas.py) are in /users/koji/scripts

I will revert the setup once the measurement is done tomorrow.

  14143   Tue Aug 7 22:28:23 2018   gautam | Update | CDS | More CDS woes

I am starting the c1x04 model (IOP) on c1lsc to see how it behaves overnight.

Well, there was apparently an immediate reaction - all the models on c1sus and c1ioo reported an ADC timeout and crashed. I'm going to reboot them and still have c1x04 IOP running, to see what happens.

[97544.431561] c1pem: ADC TIMEOUT 3 8703 63 8767
[97544.431574] c1mcs: ADC TIMEOUT 1 8703 63 8767
[97544.431576] c1sus: ADC TIMEOUT 1 8703 63 8767
[97544.454746] c1rfm: ADC TIMEOUT 0 9033 9 8841
Quote:

Overnight, all models on c1sus and c1ioo seem to have had no stability issues, supporting the hypothesis that timing issues stem from c1lsc. Moreover, the MC1 shadow sensor readouts showed no negative values over a ~12hour period. I think we should just observe this for another day, in any case I don't think there is any urgent IFO related activity scheduled.

  14142   Tue Aug 7 11:30:46 2018   gautam | Update | CDS | More CDS woes

Overnight, all models on c1sus and c1ioo seem to have had no stability issues, supporting the hypothesis that timing issues stem from c1lsc. Moreover, the MC1 shadow sensor readouts showed no negative values over a ~12hour period. I think we should just observe this for another day, in any case I don't think there is any urgent IFO related activity scheduled.

  14141   Mon Aug 6 20:41:10 2018   aaron | Update | DAQ | New DAC for the OMC

Gautam and I tested out the DAC that he installed in the latter half of last week. We confirmed that at least one of the channels can successfully drive a sine wave (ch10, 1-indexed). We had to measure the output directly on the SCSI connector (breakout in the FE hard drive cabinet along the Y arm), since the SCSI breakout box (D080303) seems not to be working (wiring diagram in Gautam's elog from his SURF years).

I added some DAC channels to our c1omc model:
PZT1_PIT
PZT1_YAW
PZT2_PIT
PZT2_YAW
 
And determined that when we go to use the ADC, we will initially want the following channels (even these are probably unnecessary for the very first scans):
TRANS_PD1
TRANS_PD2
REFL_PD
DVMDC (drive voltage monitor, DC level)
DVMAC ("", AC level, only needed if we dither the length)
 
I attach a screenshot of the model, and a picture of where the whitening/dewhitening boards should go in the rack.
Attachment 1: OMCDACmdl.png
OMCDACmdl.png
  14140   Mon Aug 6 19:49:09 2018   gautam | Update | CDS | More CDS woes

I've left the c1lsc frontend shut down for now, to see if c1sus and c1ioo can survive without any problems overnight. In parallel, we are going to try and debug the MC1 OSEM sensor problem - the idea will be to disable the bias voltage to the OSEM LEDs and see if the readback channels still go below zero; that would be a clear indication that the problem is in the readback transimpedance stage and not the LED. Per the schematic, this can be done by simply disconnecting the two D-sub connectors going to the vacuum flange (this is the configuration in which we usually use the sat box tester kit, for example). Attachment #1 shows the current setup at the PD readout board end. The dark DC count (i.e. with the OSEM LEDs off) is ~150 cts, while the nominal level is ~1000 cts, so perhaps this is already indicative of something being broken, but let's observe overnight.

Attachment 1: IMG_7106.JPG
IMG_7106.JPG
  14139   Mon Aug 6 14:38:38 2018   gautam | Update | CDS | More CDS woes

Stability was short-lived it seems. When I came in this morning, all models on c1lsc were dead already, and now c1sus is also dead (Attachment #1). Moreover, MC1 shadow sensors failed for a brief period again this afternoon (Attachment #2). I'm going to wait for some CDS experts to take a look at this since any fix I effect seems to be short-lived. For the MC1 shadow sensors, I wonder if the Trillium box (and associated Sorensen) failure somehow damaged the MC1 shadow sensor/coil driver electronics.

Quote:
 

Let's see how stable this configuration is. Onto some locking now...

Attachment 1: CDScrash.png
CDScrash.png
Attachment 2: MC1failures.png
MC1failures.png
  14138   Mon Aug 6 09:42:10 2018   Koji | Summary | Computers | Transition of the main NFS disk on chiara

Follow up:

- At least it was confirmed that the local backup (4TB->2TB) is regularly running every morning.

- The 2TB disk was filled up to 95%. To free up some of the remaining space, I have further compressed the burt snapshot folders (up to ~2016). This released another 150GB. The 2TB disk is currently filled up to 87%.

Prev

Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sdc1      3845709644 1731391748 1918967020  48% /home/cds
/dev/sdd1      2113786796 1886162780  120249888  95% /media/40mBackup

Now

Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sdc1      3845709644 1731706744 1918652024  48% /home/cds
/dev/sdd1      2113786796 1728124828  278287840  87% /media/40mBackup

 

  14137   Mon Aug 6 09:34:02 2018   Steve | Update | VAC | RGA scan at day 20

 

 

Attachment 1: pd81d20.png
pd81d20.png
  14136   Mon Aug 6 00:26:21 2018   gautam | Update | CDS | More CDS woes

I spent most of today fighting various CDS errors.

  • I rebooted c1lsc around 3pm; my goal was to try and do some vertex locking and figure out the implications of having only ~30% of the power we used to have at the AS port.
  • Shortly afterwards (~4pm), c1lsc crashed.
  • Using the reboot script, I was able to bring everything back up. But the DC lights on c1sus models were all red, and a 0x4000 error was being reported.
  • This error is indicative of some timing issue, but all the usual tricks (reboot vertex FEs in various order, restart the mx_streams etc) didn't clear this error.
  • I checked the Tempus GPS unit, that didn't report any obvious problems (i.e. front display was showing the correct UTC time).
  • Finally, I decided to shut down all watchdogs, soft reboot all the FEs, soft reboot FB, power cycle all expansion chassis.
  • This seems to have done the trick - I'm leaving c1oaf disabled for now.
  • The remaining red indicators are due to c1dnn and c1oaf being disabled.

Let's see how stable this configuration is. Onto some locking now...

Attachment 1: CDSoverview.png
CDSoverview.png
  14135   Sun Aug 5 15:43:50 2018   gautam | Update | SUS | Another low noise bias path idea

OK, how about this:

  • Attachment #1 shows the proposed schematic.
    • It consists of a second order section with Gain x10 to map the +/-10V DC range of the DAC to +/- 100V DC such that we preserve roughly the same amount of DC actuation range.
    • Corner frequency of the SOS is set to ~0.7 Hz. In hindsight, maybe this is more aggressive than necessary, we can tune this.
    • DC gain is 20 dB (typo in the text where I say the DC gain is x15, though we could go with this option as well I think if we want a larger series resistance).
    • A first order passive low-pass stage is added to filter out the voltage noise of the PA91, which dominates the output voltage noise (next bullet).
  • Attachment #2 shows the transfer function from input to output
    • The two traces compare having just a single SOS filtering stage vs the current topology of having two SOS stages.
    • The passive output RC network is necessary in either case to filter the voltage noise of the PA91 OpAmp.
    • For the DAC noise, I just assumed a flat noise level of 5 \mu V / \sqrt{\mathrm{Hz}}, I don't actually know what this is for the Acromag DACs.
  • Attachments #3 shows a breakdown of the top 5 noise contributions.
    • The PA91 datasheet doesn't give current noise information so I just assumed 1 fA / \sqrt{\mathrm{Hz}}, which was what was used for the PA85 in the existing opamp.lib file.
    • The voltage noise is modelled as 4.5 \sqrt{1+\frac{80}{f}} nV / \sqrt{\mathrm{Hz}}, which seems to line up okay with the plot on Pg4 of the datasheet.
    • So the model suggests we will be dominated by the voltage noise of the PA91.
  • Attachment #4 translates the noise into current noise seen by the actuator.
    • I add the Johnson noise contribution of the series resistance for this path, which is assumed to be 10 k \Omega.
    • For comparison, I add the filtered DAC noise contribution, and Johnson noise of the proposed series resistance in the fast path.
    • For the bias path, we are dominated by the Johnson noise of the series resistor from ~60 Hz upwards.
    • It's not quite fair to say that the Johnson noise of the resistance in the fast path dominates; the quadrature sum of the fast and bias paths will be ~1.2 times the former alone (quick numerical check after this list). 
    • Bottom line: we will be in the regime of total current noise of ~2.2 pA/rtHz, where I think Kevin's modeling suggests we can see some squeezing.
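
As a quick numerical cross-check of the last two bullets (a minimal sketch: it only evaluates the Johnson current noise sqrt(4*kB*T/R) of the two proposed series resistors at 300 K, not the full noise model in the attachments):

import numpy as np

kB, T = 1.380649e-23, 300.0            # Boltzmann constant [J/K], temperature [K]
R_fast, R_bias = 4.5e3, 10e3           # proposed series resistances [Ohm]

i_fast = np.sqrt(4 * kB * T / R_fast)  # Johnson current noise of the fast path [A/rtHz]
i_bias = np.sqrt(4 * kB * T / R_bias)  # Johnson current noise of the bias path [A/rtHz]
i_tot = np.hypot(i_fast, i_bias)       # quadrature sum

print(f"fast path: {i_fast*1e12:.2f} pA/rtHz")   # ~1.9 pA/rtHz
print(f"bias path: {i_bias*1e12:.2f} pA/rtHz")   # ~1.3 pA/rtHz
print(f"total    : {i_tot*1e12:.2f} pA/rtHz ({i_tot/i_fast:.2f}x fast path alone)")  # ~2.3 pA/rtHz, ~1.2x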

The question still remains of how to combine the fast and bias paths in this proposed scheme. I think the following approach works for prototyping at least:

  • Remove the series resistance on the existing coil driver boards' bias path, hence isolating this from the coil.
  • Route the DB15 output connector from the coil driver board (which is now just the fast actuation signals) into a sub-satellite box housing the bias path electronics.
  • Sum the two signals as it is done now, by simply having a conductor (PCB trace) merge the two paths after their respective series resistances.

In the longer term, perhaps the Satellite Box revamp can accommodate a bias voltage summation connector.

Quote:

Bah! Too complex.


I have neglected many practical concerns. Some things that come to mind:

  1. Is it necessary to protect the upstream DAC from some potential failure of the PA91 in which the high voltage appears at the input?
  2. What is the correct OpAmp for this purpose? This chart on Apex's page suggests that PA15, PA85, PA91 and PA98 are all comparable in terms of drive capability, and the spec sheets don't suggest any dramatic differences. Some LIGO circuits use PA85, some use PA90, but I can't find any that use PA91. Perhaps Rana/Koji can comment about this.
  3. What kind of protection is necessary for the PA91 power?
  4. What is the correct way to do heat management? Presumably we need heatsinks, and in fact, there is a variant of the packaging style that has "formed" legs, which from what I can figure out, allow the heat sink plane on the PA91 to be parallel to the PCB surface. But I think the heat-sink wisdom suggests vertical fins are the most efficient (not sure if this holds if the PCB is inside a box though). What about the PCB itself? Are some kind of special traces needed?
  5. Can we use the current-limiting resistor feature on the PA91? The datasheet seems to advise against it for G>10 configurations, which is what we need, although our requirement is only at DC so I don't know if that table is applicable to this circuit.
  6. Are 3W resistors sufficient? I think we require only 10mA maximum current to preserve the current actuation range, so 100 V * 10mA = 1W, so 3W leaves some safety margin.
  7. All capacitors should be rated for 500 V per the datasheet.  
Attachment 1: HV_Bias_schematic.pdf
HV_Bias_schematic.pdf
Attachment 2: TF.pdf
TF.pdf
Attachment 3: bias.pdf
bias.pdf
Attachment 4: HVbias_currentNoise.pdf
HVbias_currentNoise.pdf
  14134   Sun Aug 5 13:45:00 2018   gautam | Update | SUS | ETMX tripped

Independent from the problems the vertex machine has been having (I think, unless it's something happening over the shared memory network), I noticed on Friday that the ETMX watchdog was tripped. Today, once again, the ETMX watchdog was tripped. There is no evidence of any abnormal seismic activity around that time, and anyways, none of the other watchdogs tripped. Attachment #1 shows that this happened at ~8:38am PT this morning. Attachment #2 shows the 2k sensor data around the time of the trip. If the latter is to be believed, there was a big impulse in the UL shadow sensor signal which may have triggered the trip. I'll squish cables and see if that helps - Steve and I did work at the EX electronics rack (1X9) on Friday, but this problem precedes our working there...

Attachment 1: ETMX_tripped.png
ETMX_tripped.png
Attachment 2: ETMX_tripped_zoom.png
ETMX_tripped_zoom.png
  14133   Sun Aug 5 13:28:43 2018   gautam | Update | CDS | c1lsc flaky

Since the lab-wide computer shutdown last Wednesday, all the realtime models running on c1lsc have been flaky. The error is always the same:

[58477.149254] c1cal: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1daf: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1ass: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1oaf: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1lsc: ADC TIMEOUT 0 10963 19 11027
[58478.148001] c1x04: timeout 0 1000000 
[58479.148017] c1x04: timeout 1 1000000 
[58479.148017] c1x04: exiting from fe_code()

This has happened at least 4 times since Wednesday. The reboot script makes recovery easier, but doing it once in 2 days is getting annoying, especially since we are running many things (e.g. ASS) in custom configurations which have to be reloaded each time. I wonder why the problem persists even though I've power-cycled the expansion chassis? I want to try and do some IFO characterization today so I'm going to run the reboot script again but I'll get in touch with J Hanks to see if he has any insight (I don't think there are any logfiles on the FEs anyways that I'll wipe out by doing a reboot). I wonder if this problem is connected to DuoTone? But if so, why is c1lsc the only FE with this problem? c1sus also does not have the DuoTone system set up correctly...

The last time this happened, the problem apparently fixed itself, so I still don't have any insight as to what is causing it in the first place. Maybe I'll try disabling c1oaf since that's the configuration we've been running in for a few weeks.

  14132   Fri Aug 3 19:02:11 2018   gautam | Update | ASS | X arm ASS recovery

[koji, gautam]

After I effected the series resistance change for ETMX, the X arm ASS didn't work (i.e. IR transmission would degrade if the servo was run). Today, we succeeded in recovering a functional ASS servo.

So both arms have working dither alignment servos now. But remember that the Y arm ASS gains have been set for locking the Y arm with MC2 as the actuator, not ETMY.

Details:

  • Koji pointed out that the demodulated signals from the ETM dither are only used to center the spot on the ETM, and that we should first run the servo with existing settings with the ETM pitch and yaw spot centering loops disabled.
    • This improved TRX level from ~0.8 to 1.1
  • Next, we tried increasing the LO amplitudes by x5 to account for the reduced actuation of the dither on ETMX
    • We then re-enabled the two loops that were earlier disabled.
    • This resulted in TRX degrading very quickly.
  • So we decided to try going back to the nominal LO gains, and reducing the gain of the two ETM spot centering loops.
    • This did the trick, TRX went from 1.1 --> ~1.23, which is the nominal maximum pre-vent value.
  • The snap file used to recover the correct settings for running the dither alignment servos has been updated; the old one has been backed up with today's datestamp.

We then tried to maximize GTRX using the PZT mirrors, but were only successful in reaching a maximum of 0.41. The value I remember from before the vent was 0.5, and indeed, with the IR alignment not quite optimized before we began this work, I saw GTRX of 0.48. But the IR dither servo signals indicate that the cavity axis may have shifted (the spot position on the ITM, which is uncontrolled, seems to have drifted significantly; the pitch signal doesn't stay on the StripTool scale anymore). So we may have to double check that the transmitted beam isn't falling off the GTRX DC PD.

  14131   Fri Aug 3 18:54:58 2018   gautam | Update | SUS | Glitchy MC1

The wall StripTool indicated that the IMC wasn't too happy when I came in today. Specifically:

  • MC1 watchdog was tripped.
  • Even in the tripped state, MC REFL spot on the camera showed spot motion that was too large to be explained as normal seismic driven motion (i.e. with local damping supposedly disabled).
  • Strange excursions were observed in the MC1 shadow sensor signal levels as well, see Attachment #1 - negative values don't make any sense for this readout.

The last time this happened, it was due to the Sorensens not spitting out the correct voltages. This time, there were no indications on the Sorensens that anything was funky. So I just disabled the MCautolocker and figured I'd debug later in the evening.

However, around 5pm, the shadow sensor values looked nominal again, and when I re-enabled the local damping, the MC REFL spot suggested that the local damping was working just fine. I re-enabled the MCautolocker, MC re-locked almost immediately. To re-iterate, I did nothing to the electronics inside the VEA. Anyways, this enabled us to work on the X arm ASS (next elog).

Attachment 1: MC1_sensorAnomaly.png
MC1_sensorAnomaly.png
  14130   Fri Aug 3 16:27:40 2018   rana | Update | SUS | Low noise bias path idea

Bah! Too complex.

  14129   Fri Aug 3 15:53:25 2018   gautam | Update | SUS | Low noise bias path idea

Summary:

The idea we are going with to push the coil driver noise contribution down is to simply increase the series resistance between the coil driver board output and the OSEM coil. But there are two paths, one for fast actuation and one that provides a DC current for global alignment. I think the simplest way to reduce the noise contribution of the latter, while preserving reasonable actuation range, is to implement a precision DC high-voltage source. A candidate that I pulled off an LT application note is shown in Attachment #1.

Requirements:

  • The series resistance in the bias path should be 10 k\Omega, such that the noise from this stage is dominated by the Johnson noise of said resistor, and hence, the current noise contribution is negligible compared to that of the series resistance in the fast actuation path (4.5 k\Omega).
  • Since we only really need this for the test masses, what actuation range do we want?
    • Currently, ETMY has a series resistance of 400\Omega and has a pitch DC bias voltage of -4 V. 
    • This corresponds to 10 mA of DC current.
    • To drive this current through 10 k\Omega, we need 100 V. 
    • I'm assuming we can manually correct for yaw misalignments such that 10mA of DC current will be sufficient for any sort of corrective alignment.
    • So +/- 120 V DC should be sufficient.
  • The current noise of this stage should be negligible at 100 Hz. 
    • The noise of the transistors and the HV supply should be suppressed by the feedback loop and so shouldn't be a significant contribution (I'll model to confirm).
    • The input noise of the LT1055 is ~20nV/rtHz at 100 Hz, while the Johnson noise of 10 k\Omega is ~13nV/rtHz (quick numbers after this list), so maybe the low-passing needs to be tuned; but I think if it comes to it, we can implement a passive RC network at the output to achieve additional filtering.
  • To implement this circuit, we need +/- 125V DC. 
    • At EX and EY, we have a KEPCO HV supply meant to be used for the Green Steering PZTs. 
    • I'm not sure if these can do bipolar outputs, if not, for temporary testing, we can transport the unit at EY to EX.
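
Quick numbers behind the requirements above (a minimal sketch: just Ohm's law and the Johnson voltage noise sqrt(4*kB*T*R) at 300 K):

import numpy as np

kB, T = 1.380649e-23, 300.0    # Boltzmann constant [J/K], temperature [K]
R_bias = 10e3                  # proposed bias-path series resistance [Ohm]
I_dc = 10e-3                   # DC current assumed sufficient for alignment [A]

print(f"drive voltage needed  : {I_dc * R_bias:.0f} V")                      # 100 V
print(f"Johnson voltage noise : {np.sqrt(4*kB*T*R_bias)*1e9:.1f} nV/rtHz")   # ~12.9 nV/rtHz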

If all this seems reasonable, I'd like to prototype this circuit and test it with ETMX, which already has the high series resistance for the fast path. So I will ask Steve to order the OpAmp and transistors.

Attachment 1: LT1055_precOpAmp.pdf
LT1055_precOpAmp.pdf
  14128   Fri Aug 3 14:35:56 2018   gautam | Summary | Electronics | EX AUX electronics power restored

Steve and I restored the power to the EX AUX electronics rack. The power strip on the lowest shelf of the AUX rack now goes to another power strip laid out vertically along the NW corner of 1X9. The EX green locks to the arm just fine now.

  14127   Thu Aug 2 23:09:25 2018   rana | Summary | Computers | X Green "Mystery" solved

I'm going to guess that this was me: I was disconnecting some octopus power strip nonsense down there (in particular illuminators and cameras), so I might have turned off the AUX rack by mistake.

Quote:

I walked down to the X end and found that the entire AUX laser electronics rack isn't getting any power. There was no elog about this.

I couldn't find any free points in the power strip where I think all this stuff was plugged in so I'm going to hold off on resurrecting this until tomorrow when I'll work with Steve.

Quote:

The X arm green does not stay locked to the cavity - the alignment looks fine, and the green flashes are strong, but the lock does not hold. This shouldn't be directly connected to anything we did today since the Green PDH servo is entirely analog.

  14126   Thu Aug 2 20:54:18 2018   gautam | Summary | Computers | c1omc model looks stable

Actually, c1lsc had crashed again sometime last night so I had to reboot everything this morning. I used the reboot script again, but I increased the sleep time between trying to start up the models again so that I could walk into the VEA and power cycle the c1lsc expansion chassis, as this kind of frequent model crash has been fixed by doing so in the past. Sure enough, there have been no issues since I rebooted everything at ~1030 in the morning. 

The c1omc model itself has been stable as well, though of course, there is nothing in there at the moment. I may do a check of the newly installed DAC tomorrow just to see that we can put out a sine wave.

Steve has ordered the D-sub cabling that will allow us to route signals between the AA/AI boards in 1X1/1X2 and the HV PZT electronics in the OMC rack. Things look set up for a measurement next week. Aaron will post a block diagram + photos of what box goes where in the electronics racks.

  14125   Thu Aug 2 20:47:29 2018   gautam | Summary | Electronics | X Green "Mystery" solved

I walked down to the X end and found that the entire AUX laser electronics rack isn't getting any power. There was no elog about this.

I couldn't find any free points in the power strip where I think all this stuff was plugged in so I'm going to hold off on resurrecting this until tomorrow when I'll work with Steve.

Quote:

The X arm green does not stay locked to the cavity - the alignment looks fine, and the green flashes are strong, but the lock does not hold. This shouldn't be directly connected to anything we did today since the Green PDH servo is entirely analog.

  14124   Thu Aug 2 16:30:08 2018   Steve | Update | Treasure | time capsule location

I've just found this time capsule note from Nov. 26, 2000 by Kip Thorne: LIGO will discover gravitational waves by Dec. 31, 2007.

Quote:

   Beautifully Done

   Chirp

  what is next?

Atm 3, Ron Drever could not celebrate with us because of health issues.

 

 

Attachment 1: time_capsule.JPG
time_capsule.JPG
  14123   Wed Aug 1 20:44:57 2018   gautam | Summary | Computers | c1omc model (re?)created

The main motivation behind adding a DAC card in c1ioo was to setup an RTCDS model for the OMC. Attachment #1 shows the new look CDS overview screen. Here is what I did.

Mostly, I followed instructions from when I setup the model for the EX green PZTs.


Simulink model:

The model is just a toy for now (CDS parameters, ADC block and 2 CDS filter modules). I leave it to Aaron to actually populate it, check functionality etc. The path to the model is /opt/rtcds/caltech/c1/userapps/release/isc/c1/models/c1omc.mdl. I am listing the parameters set on the CDS_PARAMETERS block:

  • host = c1ioo
  • site = c1
  • rate = 16k
  • dcuid = 27 (which I chose after making sure that this dcuid was not used on this list which I also updated by adding c1omc and moving c1imc to "old")
  • specific_cpu = 6 (again chosen after checking the available CPUs in the above list and confirming using the cset utility).
  • adc_Slave = 1
  • shmem_daq = 1
  • no_rfm_dma = 1
  • biquad = 1

Building and installing model:

Once the model file was in place, I logged into c1ioo and built and installed it using the usual rtcds make and rtcds install instructions. Before starting the model, I edited /diskless/root.jessie/etc/rtsystab to allow c1omc to be run on c1ioo. Using sudo cset set, I verified that CPU #6 is no longer listed (if I understand correctly, the RTCDS system takes over the core).


MEDM:

To reflect all this on the MEDM CDS OVERVIEW screen, I just edited the screen.

  • Moved the orange explanation of bits over to the c1iscey panel to make space in the c1ioo panel.
  • Edited the macros to reflect the c1omc parameters.

DAQD:

Finally, I followed the instructions here to get the channels into frames and make all the indicators green. Went into fb and restarted the daqd processes. All looks good. I'm going to leave the model running overnight to investigate stability. I forgot to svn commit the model tonight; will do it tomorrow.


The testing plan (at least initially) is to install the AA and AI boards from the OMC rack in 1X1/1X2. Then we will have short SCSI cables running from the ADC/DAC to these. The actual HV driving stages will remain in the OMC rack (NE corner of AS table).

@Steve, can we get 10 Male-Female D9 cables so that we can run them from 1X1/1X2 to the OMC rack?


Unrelated to this work: there were 2 crashes of the models on c1lsc, one ~6pm and one right now ~10:30pm. The restart script brought everything back gracefully...

Attachment 1: CDS_OVERVIEW_withOMC.png
CDS_OVERVIEW_withOMC.png
  14122   Wed Aug 1 19:41:15 2018   gautam | Summary | Computers | RTCDS recovery, c1ioo changes

[Gautam Koji]

After this work, we recovered the nominal RTCDS state. The main points were:

  1. We needed to restart the bind9 service on chiara such that the FEs knew their IP addresses upon reboot and hence, could get their root filesystems over NFS.
  2. We recovered suspension local damping, IMC locking and POX/POY locking with nominal arm transmission.

Some stuff that is not working as usual:

  1. The EX QPD is reporting strange transmission values - even with the PRM completely misaligned, it reports a transmission of ~30. But we were able to lock the X arm with the Thorlabs PD and recover a transmission of ~1.15.
  2. The X arm green does not stay locked to the cavity - the alignment looks fine, and the green flashes are strong, but the lock does not hold. This shouldn't be directly connected to anything we did today since the Green PDH servo is entirely analog.

I made a model change in c1x03 (the IOP model on c1ioo) to add a DAC part. The model compiled, installed and started correctly, and looking at dmesg on c1ioo, it recognises the DAC card as what it is. Next step is to use a core on c1ioo for a c1omc model, and actually try driving some signals.

Note that the only change made to the c1ioo expansion chassis was that a DAC card was installed into the PCIe bus. The adaptor card which allows interfacing the DAC card to an AI board was already in the expansion chassis, presumably from whenever the DAC was removed from this machine.

*I think I forgot to restart optimus after this work...

Attachment 1: CDS_overview.png
CDS_overview.png
  14121   Wed Aug 1 16:23:48 2018   Koji | Summary | Computers | Transition of the main NFS disk on chiara

[Gautam Koji]

Taking the opportunity of shutting down c1ioo to add a DAC card, we shut down chiara and worked on moving the main disk to its bigger home.

We shut down most of the martian machines, including the control machines, megatron, optimus, and nodus.

- Before shutting down chiara, we ran rsync to get the 4TB disk (which used to be the backup) and /cvs/cds synced:

sudo rsync -a --progress /home/cds/ /media/40mBackup

- Modified /etc/fstab

proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/sda1 during installation
UUID=972db769-4020-4b74-b943-9b868c26043a /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=a3f5d977-72d7-47c9-a059-38633d16413e none            swap    sw              0       0
UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000"    /media/40mBackup       ext4      defaults,relatime,commit=60       0         0
#fb:/frames      /frames nfs     ro,bg

UUID=92dc7073-bf4d-4c58-8052-63129ff5755b   /home/cds    ext4    defaults,relatime,commit=60    0   0

- Shutdown chiara. Put the 4TB disk in the chassis. We also installed a new disk (but later it turned out that it only has 2TB...)

- Restart the machine. This already made the 4TB disk mount as /cvs/cds.

- Restart bind9 with DHCP for the diskless clients (cf. https://wiki-40m.ligo.caltech.edu/CDS/How_to_join_martian)

sudo service bind9 restart
sudo service isc-dhcp-server restart

- Looks like /etc/resolv.conf is automatically overwritten by a tool or something every time we restart the machine!? I still don't know how to avoid this (cf. https://www.ctrl.blog/entry/resolvconf-tutorial). But at least for today we manually wrote /etc/resolv.conf:

controls@chiara|backup> cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.113.104
nameserver 131.215.125.1
nameserver 8.8.8.8

search martian

  14120   Tue Jul 31 22:50:18 2018   aaron | Update | OMC | OMC Expected Refl Signal

I learned a lot about lasers this week from Siegman. Here are some plots that show the expected reflectivity off of the OMC for various mode matching cases.

The main equation to know is 11.29 in Siegman, the total reflection coefficient going into the cavity:

R=r-\frac{t^2}{r}\frac{g(\omega)}{1-g(\omega)}

Where r is the mirror reflectivity (assumed all mirrors have the same reflectivity), t is the transmissivity, and g is the complex round-trip gain, eq 11.18

g(\omega)=r_1r_2(r_3...)e^{-i\phi}e^{-\alpha_0p}

The second exponential is the loss; in Siegman the \alpha_0 is some absorption coefficient and p is the total round trip length, so the product is just the total loss in a round trip, which I take to be 4x the loss on a single optic (50ppm each). \phi is the total round trip phase accumulation, which is 2\pi*detuning(Hz)/FSR. The parameters for the cavity can be found on the wiki.

I've added the ipynb to my personal git, but I can put it elsewhere if there is somewhere more appropriate. I think this is all OK, but let me know if something is not quite right.
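
For reference, a minimal numerical sketch of eq. 11.29 as described above (the mirror transmission, mirror count and FSR below are placeholder values, not the actual OMC parameters from the wiki):

import numpy as np

def omc_refl(detuning_hz, fsr_hz, T_mirror=0.01, loss_per_optic=50e-6, n_mirrors=4):
    # Total reflection coefficient R(w), Siegman eq. 11.29, with g from eq. 11.18.
    r = np.sqrt(1.0 - T_mirror)               # amplitude reflectivity (same for all mirrors)
    t = np.sqrt(T_mirror)                     # amplitude transmissivity
    phi = 2 * np.pi * detuning_hz / fsr_hz    # round-trip phase
    g = r**n_mirrors * np.exp(-1j * phi) * np.exp(-n_mirrors * loss_per_optic)
    return r - (t**2 / r) * g / (1.0 - g)

fsr = 264e6                                   # placeholder FSR [Hz]
detuning = np.linspace(-0.5, 0.5, 1000) * fsr
refl_power = np.abs(omc_refl(detuning, fsr))**2   # |R|^2 vs detuning over one FSR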

Attachment 1: omcRefl.pdf
omcRefl.pdf
  14119   Tue Jul 31 08:17:55 2018   Steve | Update | SUS | Trillium interface box was fixed, reinstalled & working

 

 

Attachment 1: all_OK.png
all_OK.png
  14118   Mon Jul 30 18:19:03 2018   Koji | Update | SUS | Trillium interface box was fixed and reinstalled

The trillium interface box was removed from the rack.

The problem was the use of under-spec TVS (Transient Voltage Suppression) diodes (~ semiconductor fuses) in the protection circuit.
The TVS diodes we had had breakdown voltages lower than the supplied voltages of +/-20V. This over-voltage eventually caused the catastrophic breakdown of one of the diodes.

I don't find any particular reason to have these diodes during the laboratory use of the interface. Therefore, I've removed the TVS diodes and left them unreplaced. The circuit was tested on the bench and returned to the rack. All the cables are hooked up, and now the BRLMs look as usual.


Details

- The board version was found to be D1000749-v2

- There was an obvious sign of burning or thermal history around the components D17 and D14. The solder of the D17 was so brittle that just a finger touch was enough to remove the component.

- These D components are TVS diodes (Transient Voltage Suppression diodes) manufactured by Littelfuse Inc. They are a sort of surge/overvoltage protector that keeps the rest of the circuit from being exposed to excess voltage. The specified component for D17/D14 was 5.0SMMDJ20A, with a reverse standoff voltage (~operating voltage) of 20V and a breakdown voltage of 22.20V (min) ~ 24.50V (max). However, the spec sheet says that the marking of the proper component should be "5BEW" rather than "DEM," which is what is visible on the component. Some searching revealed that the installed component was an SMDJ15A, which has a breakdown voltage of 16.70V~18.50V. This spec is way too low compared to the supplied voltage of +/-20V.

Attachment 1: P_20180730_173134.jpg
P_20180730_173134.jpg
Attachment 2: P_20180730_180151.jpg
P_20180730_180151.jpg
  14117   Mon Jul 30 16:11:54 2018   gautam | Update | SUS | Trillium interface box is broken

[koji, steve, gautam]

We debugged this in the following way:

  1. Disconnect all fuses in the terminal blocks coming from the +/- 20 VDC Sorensens.
  2. Check that they are indeed isolated using DMM.
  3. Test blocks of fuses in order to identify where the problem is happening (i.e. plug fuses in, turn up Sorensen voltage knobs, look for current overload). We did things in the following order:
    • MC suspensions
    • BS, PRM and SRM
    • ITMY
    • ITMX
    • Trillium interface box.
  4. Turns out that the Trillium box is the culprit.
  5. Confirmed that the problem is in the trillium interface box and not in the seismometer itself by unplugging all cables leading out of the interface box, and checking that the problem persists when the box is powered on.

So for now, the power cable to the box is disconnected on the back end. We have to pull it out and debug it at some point.

Apart from this, megatron was un-sshable so I had to hard reboot it, and restart the MCautolocker, FSSslowPy and nds2 processes on it. I also restarted the modbusIOC processes for the PSL channels on c1auxex (for which the physical Acromag units sit in 1X5 and hence were affected by our work), mainly so that the FSS_RMTEMP channel worked again. Now, IMC autolocker is working fine, arms are locked (we can recover TRX and TRY~1.0), and everything seems to be back to a nominal state. Phew.

  14115   Mon Jul 30 11:05:44 2018   gautam | Update | SUS | IFO SUS wonky

When I came in this morning:

  • PMC was unlocked.
  • Seis BLRMS were off scale.
  • ITMX OSEM LEDs were dark on the CRT monitor even though Sat Box was plugged in.

Checking the status of the slow machines, it looked like c1sus, c1aux, and c1iscaux needed reboots, which I did. Still the PMC would not lock. So I did a burtrestore, and then the PMC was locked. But there seemed to be waaaaay too much motion of MCREFL, so I checked the suspension. The shadow sensor EPICS channels are reporting ~10,000 cts, while they used to be ~1000 cts. No unusual red flags on the CDS side. Everything looked nominal when I briefly came in at 6:30pm PT yesterday; not sure if anything was done with the IFO last night.

Pending further investigation, I'm leaving all watchdogs shutdown and the PSL shutter closed.

A quick look at the Sorensens in 1X6 revealed that the +/- 20V DC power supplies were current overloaded (see Attachment #1). So I set those two units to zero until we figure out what's going on. Possibly something is shorted inside the ITMX satellite box and a fuse is blown somewhere. I'll look into it more once Steve is back.

Attachment 1: IMG_7102.JPG
IMG_7102.JPG
  14114   Sun Jul 29 23:15:34 2018   pooja | Update | Cameras | Developing CNN

Aim: To develop a convolutional neural network that resolves mirror motion from video.

Input : Previous simulated video of beam spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz and amplitude ratios (relative to the frame size) of 0.1, 0.04, 0.05, 0.08, where random uniform noise of up to 0.05 has been added to the amplitudes and frequencies. This is divided into train (0.4), validation (0.1) and test (0.5) sets.

Model topology:

  • Convolutional layer: 2 filters, kernel size = 2
  • Pooling layer: pooling window size = 2
  • Dense layer: 4 nodes, selu activation
  • Output layer: 1 node, linear activation

Batch size = 32, Number of epochs = 128, loss function = mean squared error

Optimizer: Nadam ( learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)
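
For reference, a minimal Keras sketch of the topology described above (the frame size and input channel count are placeholder assumptions; only the layer sizes, activations, loss and Nadam parameters come from the description):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(filters=2, kernel_size=2, input_shape=(64, 64, 1)),  # 64x64 frames assumed
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(4, activation='selu'),
    layers.Dense(1, activation='linear'),
])
model.compile(
    optimizer=keras.optimizers.Nadam(learning_rate=1e-5, beta_1=0.8, beta_2=0.85),
    loss='mean_squared_error',
)
# frames: (N, 64, 64, 1) array of video frames, target: (N,) beam-spot displacement
# history = model.fit(frames, target, batch_size=32, epochs=128)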

Plots of CNN output & applied signal given in Attachment 1. The variation in loss value with epochs given in Attachment 2.

This needs to be further analysed with increasing random uniform noise over the pixels and by training the CNN on simulated data with varying amplitudes and frequencies for the sine waves.

Attachment 1: conv_nn_varying_freq_amp_1.pdf
conv_nn_varying_freq_amp_1.pdf
Attachment 2: conv_nn_varying_freq_amp_2.pdf
conv_nn_varying_freq_amp_2.pdf
  14113   Sun Jul 29 20:03:02 2018   rana | Update | PEM | Seismometer temp control

While Shruti is re-building Kira's heater circuit, I looked up how to do one of these (i.e. what does a real EE say about how to build a current source?):

It turns out that there is an Analog Devices application note (AN-968) about this (as there usually is once we get tired of playing around and try to look up the right answer).

I've linked to the note and attached the recommended schematic for high current applications. We'll go ahead as is, but we'll make a PCB according to this App Note for the v3 circuit.

 

Attachment 1: Screen_Shot_2018-07-29_at_8.00.27_PM.png
Screen_Shot_2018-07-29_at_8.00.27_PM.png
  14112   Sun Jul 29 00:59:54 2018   Koji | Update | Electronics | Characterization of Transimpedance Amplifier

You have this measurement problem when the IF bandwidth is larger than the measurement frequency. I suspect the IF bandwidth is 30kHz.

  14111   Sat Jul 28 22:16:49 2018   John Martyn | Update | Characterization of Transimpedance Amplifier

Kevin and I measured the transfer function of the photodiode circuit using the Jenne laser and the Agilent in the 40m lab. The attached figures depict our measured transfer function over the modulation frequency ranges of 30kHz-30MHz and 1kHz-30MHz when the power of the laser was set to 69 and 95 μW. These plots indicate a clear roll-off frequency around 300 kHz. In addition, the plots beginning at 1kHz display unstable behavior at frequencies below 30kHz. I am not sure why there is such a sharp change in the transfer function around 30kHz, but I suspect this is due to an issue with the Agilent or the photodiode. 

Attachment 1: PD_TF1.pdf
PD_TF1.pdf
Attachment 2: PD_TF2.pdf
PD_TF2.pdf
Attachment 3: PD_and_TIA_Transfer_Function_Measurements.zip
  14110   Sat Jul 28 00:45:11 2018   terra, sandrine | Summary | Thermal Compensation | Heater measurements overview

[Sandrine, Koji, Terra]

Summary: We completed multiple scans at different heating powers for the reflector set-up, observing unique HOM peak shifts of tens of kHz. We also observed HOM5 shifts with the cylinder set-up. Initial Lorentzian fits of the magnitude give tens-of-Hz resolution. I summarize the week's main work below. 

Set-up

Heater set-up is described in several previous elogs, but attachments #1 and #2 show the full heater set-up and wiring/pinouts in and out of vacuum, since we're all intimately aware of how confusing in-vacuum pinouts can be. We are not using the Sorenson power supply (as described in 14071); we just have the BKPrecision power supply 1735 sitting next to the ETMY rack and are manually going out to turn on/off. 

We've continued to use the scan setup described in elog 14086, which is run using /users/annalisa/postVent/AGfast.py. Step by step notes for setting up the scan, running the scans, and processing the scans are attached in notes.txt.

Inducing/witnessing HOMs

The aux input beam was already clipped, and on Wednesday (after Trans was centered, 14093) we also clipped the output aux beam with a razor blade (angled vertically and horizontally, elog 14103) before the PDA255; we clipped ~1/3 of the output beam. Attachment #3 shows before and after clipping the output, where orange 'cold' == unclipped, black 'mean' == clipped (all in the cold state). Up to HOM5 is visible. 

Measurements

Below is a summary of the available scan data. We also have cold (0A) scans CAR-HOM5 and full FSR scans for most configurations. 

Elliptic Reflector
current [A] | voltage [V] | power [W] | scans
0.4         | 2           | 0.8       | CAR-HOM3 (x1)
0.5         | 3.4         | 1.7       | CAR-HOM3 (x1)
0.6         | 5           | 3.0       | CAR-HOM3 (x1)
0.8         | 9.4 (9.7)   | 7.5 (7.8) | CAR-HOM5 (>x5)
0.9         | 12          | 10.8      | CAR-HOM5 (x4)
1.09        | 17          | 18.5      | CAR-HOM3

Cylinder + Lenses
current [A] | voltage [V] | power [W] | scans
0.9         | 15          | 13.5      | CAR-HOM5 (odds x4)

We tried the cylinder set-up again tonight for the first time since the initial try and can see shifts of HOM5 - see Attachment #5. We haven't looked in detail yet, but it looks like odd modes are more affected, suggesting the ring heat pattern is off-center from the beam axis. 

Scan data is saved in the following format: users/annalisa/postVent/scandata/{reflector,cylinder}/{parsed,unparsed}/{CAR,HOM1,HOM2,HOM3,HOM4,HOM5}{_datetime}{_parsed,_unparsed}.{txt,pdf}

Minimum heating

On 7/26 we increased the power to the elliptical reflector heater in steps to find the minimum heater power required to see frequency shifts with our measurement setup. Lowest we can resolve is a shift in HOM3 with 1.7W (0.5A/3.4V). According to Annalisa's measurements in elog 14050, this would be something like 30-60 mW radiated power hitting the test mass. We only looked at CAR - HOM3 for this investigation; data for scans at 0.4A, 0.5A, 0.6A is available as indicated above.

Lorentizian Fitting

The Lorentzian fitting was done using the equation a + b / sqrt(1 + ((x - c)/(d/2))^2), where a = constant background, b = peak height above background, c = peak frequency, and d = full width at half max. 

The fitting is still being edited and optimized. We will crop the data to zoom in around the peak more.

The Lorentzian fit of the magnitude shows ~10Hz of resolution. (See Attachment 6 for the carrier at 0.8A and Attachment 7 for HOM1 at 0.9A.)

We're working on fitting the full complex data.

 

 

Attachment 1: heater_setup.jpg
heater_setup.jpg
Attachment 2: heater_wiring.jpg
heater_wiring.jpg
Attachment 3: notes.txt
Notes for running scans:
1. when first turning on Agilent, set initial stuff
    > cd /users/annalisa/postVent/20180718
    > AGmeasure TFAG4395Atemplate.yml
2. tweak arm alignment and offset PLL
    > sitemap (then IFO --> ALIGN and also PSL --> AUX)
    > to increase 
3. make sure X-arm is misagligned (hit '! Misalign' button for ITMX, ETMX) 
3. run scan
    > python AGfast.py startfreq stopfreq points
... 36 more lines ...
Attachment 4: FSR_clipped.pdf
FSR_clipped.pdf
Attachment 5: cylinderHOM5.pdf
cylinderHOM5.pdf
Attachment 6: pt8A_CAR.pdf
pt8A_CAR.pdf
Attachment 7: pt9A_HOM1.pdf
pt9A_HOM1.pdf
  14109   Fri Jul 27 17:16:14 2018   Sandrine | Update | Thermal Compensation | Copied working scripts for mode spectroscopy into new directory (modeSpec)

The scripts: AGfast.py, make HDF5.py, plotSpec_marconi.py, and SandrineFitv3.py were copied into the new directory modeSpec.

The path is: /opt/rtcds/caltech/c1/scripts/modeSpec

These scripts can still be found under Annalisa's directory under postVent.

  14108   Fri Jul 27 10:48:57 2018   Steve | Update | SUS | BS oplev window

Yesterday I inspected this BS oplev viewport. The heavy connector tube was shorting to the table, so it was moved back towards the chamber. The connection is temporarily made air-tight with Kapton tape.

 The beam paths are well centered. The viewport is dusty on the inside.

The motivation was to improve the oplev noise.

Attachment 1: BSOw_.jpg
BSOw_.jpg
Attachment 2: dustInsideBSO.jpg
dustInsideBSO.jpg
  14107   Fri Jul 27 02:30:51 2018   gautam | Update | General | Glitchy MC

Kevin and I saw some weird IMC / PEM BLRMS behaviour today - see the attached screenshot. Not sure what was happening with the IMC, but MC trans was oscillating at ~3Hz for a good 20 minutes or so. I just killed the lock and restarted the MCautolocker on megatron. There was a strange feature in the 3-10Hz BLRMS around that time as well. All seems back to normal now...

Attachment 1: 38.png
38.png
  14106   Thu Jul 26 15:11:18 2018   Steve | Update | General | Viewports & coating of 2001

New optical-quality BK-7 windows in 2001 [4 substrates], AR coated R<0.75% for 630-1064nm ("Azure Blue" broadband): TRX, TRY, ITMY-Oplev & ITMX-Oplev viewports.

The BS-Oplev and PRM-Oplev 10" CF with 5.38" diameter view was coated the same way. The window here is Corning 7056 borosilicate.

5 more BK-7 substrates were coated R<0.1% at 1064 nm ("Golden Orange"). Their locations: IMC-IN, IFO-REF and OMC. At the next vent we have to confirm the optical-quality window locations.

All other conflat flange viewports are 7056 Kovar-sealed.

Technical notes on the 2001 40m upgrade can be seen in LIGO-T010115-00-R, page 14.

Attachment 1: BK7window_Coatings.PDF
BK7window_Coatings.PDF BK7window_Coatings.PDF
  14105   Thu Jul 26 01:52:01 2018   terra | Update | Thermal Compensation | heater work update

Just a quick update: over the past few days we've taken (at least) 5 scans around each peak [carrier - HOM3] at 9.4V/0.8A, and 4 scans around [carrier - HOM5] at 12V/0.9A, in the hot state with the reflector setup. We also have (at least) 5 scans of carrier - HOM5 in the cold state. I attach a rough overview of the peak magnitude shifts in the first attachment. Analysis ongoing. All data is stored in annalisa/postVent/{date}

Initial shifts, based just on rough peak placement, in the meantime:

        | 9.4V/0.8A | 12V/0.9A
HOM1    | 10 kHz    | 20 kHz
HOM2    | 18 kHz    | 28 kHz
HOM3    | 30 kHz    | 40 kHz
HOM4    | N/A       | 26 kHz
HOM5    | N/A       | 35 kHz

I also attach the heating thermal transient from today (12V/0.9A) as seen by the opLevs. We see a shorter time constant for pitch, a longer one for yaw, preceded by a dip in yaw. Similar behavior yesterday for slightly less heating, though with a less pronounced pre-dip. The heater is off-center on the optic horizontally; likely this is part of the induced yaw. The spikey stuff I removed is from people walking around inside during the transient.

I've left the heater and LSC off for the night. Heater off at 2:07 am local time.

Please don't touch the oplevs; we're taking a cool down measurement.

Attachment 1: OpLev_thermal_drift.pdf
OpLev_thermal_drift.pdf
Attachment 2: hotColdAll.pdf
hotColdAll.pdf
  14104   Wed Jul 25 22:46:15 2018   gautam | Configuration | Computers | NDS access from outside

After this work, I've been having some trouble getting data with Python NDS. Eventually, I figured out that the nds connection request has to be pointed at '131.215.115.200' (the address of the NAT router which faces the outside world), port 31200 (it used to work with 'nds40.ligo.caltech.edu' or '131.215.115.189'). So the following snippet in python allows a connection to be opened. Offline access of frame data via NDS2 now seems possible.

import nds2
conn = nds2.connection('131.215.115.200',31200)
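
For completeness, a minimal example of actually pulling data once the connection is open (the channel name and GPS times below are placeholders, not from a real measurement):

# continuing from the connection above
bufs = conn.fetch(1217500000, 1217500060, ['C1:LSC-TRX_OUT_DQ'])  # ~1 min of data
data = bufs[0].data                       # numpy array of samples
fs = bufs[0].channel.sample_rate          # sampling rate of the channel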
Quote:
 

So far, ssh (22), web services (30889), and elog (8081, 8080) were tested. We also need to test megatron NDS port forwarding and rsync for nodus, too.Finally I turned off the firewall rules of shorewall on nodus as it is no longer necessary.

  14103   Wed Jul 25 14:45:59 2018   Sandrine | Summary | Thermal Compensation | ETM Y Table AUX read out

Attached is a photo of the set up of the ETM Y table showing the AUX read out set up. 

Currently, the flip mount sends the AUX to the PDA255. Terra inserted a razor blade so the PDA255 will witness more HOMs. The laser is also sent to the regular PD and the CCD.

Attachment 1: EY_table_.JPG
EY_table_.JPG
  14101   Tue Jul 24 09:47:51 2018   gautam | Update | Cameras | Developing neural networks on simulated video

I was thinking a little more about the way we are training the network for the current topology - because the network has no recurrent layers, I guess it has no memory of past samples, and so it doesn't have any sense of the temporal axis. In fact, Keras by default shuffles the training data you give it randomly so the time ordering is lost. So the training amounts to requiring the network to identify the center of the Gaussian beam and output that. So in the training dataset, all we need is good (spatial) coverage of the area in which the spot is most likely to move? Or is the idea to develop some tools to generate video with spot motion close to that on the ETM in lock, so that we can use it with a network topology that has memory? 
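
For what it's worth, a one-line illustration of the shuffling point (reusing the placeholder model/frames/target names from the CNN sketch in elog 14114 above): Keras' fit() shuffles the training data by default, so a topology with memory would need something like

history = model.fit(frames, target, batch_size=32, epochs=128, shuffle=False)  # preserve temporal ordering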

Quote:

This looks like good progress. Instead of fixed sines or random noise, you should generate now a time series for the motion which is random noise but with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use inverse FFT to get the time series from the open loop OL spectra (being careful about edge effects)

  14100   Tue Jul 24 06:11:50 2018   rana | Update | Cameras | Developing neural networks on simulated video

This looks like good progress. Instead of fixed sines or random noise, you should generate now a time series for the motion which is random noise but with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use inverse FFT to get the time series from the open loop OL spectra (being careful about edge effects).
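
A minimal sketch of the inverse-FFT trick described above (the ASD model here is a made-up stand-in, not the measured ETM pitch OL spectrum):

import numpy as np

fs, N = 64.0, 2**16                       # sample rate [Hz], number of samples (placeholders)
f = np.fft.rfftfreq(N, d=1/fs)

# Placeholder ASD: flat above 1 Hz, rising ~1/f^2 below
asd = 1e-6 * (1 + (1.0 / np.maximum(f, f[1]))**2)

amp = asd * np.sqrt(fs * N / 2.0)         # one-sided ASD -> rFFT amplitude
phase = np.exp(2j * np.pi * np.random.rand(f.size))
spec = amp * phase
spec[0] = 0.0                             # no DC offset
timeseries = np.fft.irfft(spec, n=N)      # window / discard the edges before using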

Quote:

Aim: To develop a neural network that resolves mirror motion from video.

  14098   Mon Jul 23 09:58:52 2018   Steve | Summary | VAC | RGA scan at day 6

 

 

Attachment 1: pd81-560Hz-d6.png
pd81-560Hz-d6.png
  14097   Sun Jul 22 14:01:07 2018   pooja | Update | Cameras | Developing neural networks on simulated video

Aim: To develop a neural network that resolves mirror motion from video.

Since the error was high for the same input as in my previous elog (http://nodus.ligo.caltech.edu:8080/40m/14089), I modified the network topology by tuning the number of nodes, layers and learning rate so that the model fitted the sum of 4 sine waves efficiently, saved the weights of the final epoch, and then, in a different program, loaded the saved weights & tested on a simulated video produced by moving the beam spot from the centre of the image by a sum of 4 sine waves whose frequencies and amplitudes change with time.

Input : Simulated video of beam spot motion in pitch, generated by applying 4 sine waves of frequencies 0.2, 0.4, 0.1, 0.3 Hz and amplitude ratios (relative to the frame size) of 0.1, 0.04, 0.05, 0.08. This is divided into train (0.4), validation (0.1) and test (0.5) sets.
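
For reference, a minimal sketch of how such a simulated video can be generated (the frame rate, frame size and beam radius are placeholder assumptions; only the frequencies and amplitude ratios come from the description above):

import numpy as np

fs, T = 10.0, 100.0                        # frame rate [Hz], duration [s] (placeholders)
npix = 64                                  # frames are npix x npix (placeholder)
t = np.arange(0, T, 1/fs)

freqs = np.array([0.2, 0.4, 0.1, 0.3])     # Hz
amps = np.array([0.1, 0.04, 0.05, 0.08]) * npix   # amplitude ratio * frame size [pixels]

# beam-spot position in pitch (this is also the target signal for the network)
y_c = npix/2 + sum(a * np.sin(2*np.pi*f0*t) for a, f0 in zip(amps, freqs))

xx, yy = np.meshgrid(np.arange(npix), np.arange(npix))
w = npix / 8                               # Gaussian beam radius in pixels (placeholder)
frames = np.array([np.exp(-((xx - npix/2)**2 + (yy - yc)**2) / w**2) for yc in y_c])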

Model topology:

  Input  -->  Hidden layer (8 nodes, selu activation)  -->  Output layer (1 node, linear activation)

Batch size = 32, Number of epochs = 128, loss function = mean squared error

Optimizer: Nadam ( learning rate = 0.00001, beta_1 = 0.8, beta_2 = 0.85)

Normalized the target sine signal of NN by dividing by its maximum value.

A plot of the output predicted by the neural network, the applied input signal & the residual error is given in the 1st attachment. The weights of the model at the final epoch were saved to an h5 file and then loaded & tested with simulated data of 4 sine waves whose amplitudes and frequencies change with time from their initial values by random uniform noise ranging from 0 to 0.05. A plot of the predicted output, the target sine-wave signal & the residual error is given in the 2nd attachment. The actual signal can be recovered from the predicted output of the NN by multiplying by the normalization constant used before. However, even though the network fits the training & validation sets efficiently, it gives a comparatively large error on the test data with varying amplitudes & frequencies.

Gautam suggested trying to train on this noisy data of varying amplitudes and frequencies. The results using the same NN model are given in Attachment 3. It was found that tuning the number of nodes, layers or learning rate didn't improve the fit much in this case.

 

 

Attachment 1: nn_simulation_2_normalized_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
nn_simulation_2_normalized_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
Attachment 2: nn_simulation_normalizedtarget_128epochs_mult_sin_load_wt_varyingtest_nodes8_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
nn_simulation_normalizedtarget_128epochs_mult_sin_load_wt_varyingtest_nodes8_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
Attachment 3: nn_simulation_2_normalized_varying_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf
nn_simulation_2_normalized_varying_mult_sin_nodes8_128epochs_lr0p00001_beta1_0p8_beta2_0p85_0p4train_0p1valid_marked.pdf