40m Log
ID   Date   Author   Type   Category   Subject
  13189   Fri Aug 11 00:10:03 2017   gautam   Update   CDS   Slow EPICS channels -> Frames re-enabled

Seems like something has failed after I did this - full frames are no longer being written since ~2.30pm PDT on Aug 10. I found out when I tried to download some of the free-swinging MC1 data.

To clarify, I logged into fb1, and ran sudo systemctl restart daqd_*. The only change I made was to uncomment the line quoted below in the master file.

Looking at the log using systemctl, I see the following (I just tried restarting the daqd processes again):

Aug 11 00:00:31 fb1 daqd_fw[16149]: LDASUnexpected::unexpected: Caught unexpected exception      "This is a bug. Please log an LDAS problem report including this message.
Aug 11 00:00:31 fb1 daqd_fw[16149]: daqd_fw: LDASUnexpected.cc:131: static void LDASTools::Error::LDASUnexpected::unexpected(): Assertion `false' failed.
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service: main process exited, code=killed, status=6/ABRT
Aug 11 00:00:32 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service holdoff time over, scheduling restart.
Aug 11 00:00:32 fb1 systemd[1]: Stopping Advanced LIGO RTS daqd frame writer...
Aug 11 00:00:32 fb1 systemd[1]: Starting Advanced LIGO RTS daqd frame writer...
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service start request repeated too quickly, refusing to start.
Aug 11 00:00:32 fb1 systemd[1]: Failed to start Advanced LIGO RTS daqd frame writer.
Aug 11 00:00:32 fb1 systemd[1]: Unit daqd_fw.service entered failed state.

Oddly, I am able to access second trends for the same channels from the past (which will be useful for the MC1 debugging). Not sure what's going on.


The live data grabbing using cdsutils still seems to be working though - so I've kicked MC1 again, and am grabbing 2 hours of data live on Pianosa.
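For the record, the live grab amounts to something like the following (a sketch only: the channel names are illustrative, and this assumes cdsutils.getdata(channels, duration) streams online data when no start time is given):

# Sketch of a live data grab with cdsutils; channel names are illustrative only
import cdsutils

chans = ['C1:SUS-MC1_SENSOR_UL', 'C1:SUS-MC1_SENSOR_UR',
         'C1:SUS-MC1_SENSOR_LL', 'C1:SUS-MC1_SENSOR_LR']

# with no start time this streams data from "now"; it blocks for the full 2 hours
bufs = cdsutils.getdata(chans, 2 * 60 * 60)
# each returned buffer holds the samples and sample rate for one channel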

Quote:

I went into /opt/rtcds/caltech/c1/target/daqd, opened the master file, and uncommented the line with C0EDCU.ini (this is the file in which all the slow machine channels are defined). So now I am able to access, for example, the c1vac1 channels.

The location of the master file is no longer in /opt/rtcds/caltech/c1/target/fb, but is in the above mentioned directory instead. This is part of the new daqd paradigm in which separate processes are handling the data transfer between FEs and FB, and the actual frame-writing. Jamie will explain this more when he summarizes the CDS revamp.

It looks like trend data is also available for these newly enabled channels, but thus far, I've only checked second trends. I will update with a more exhaustive check later in the evening.

So, the two major pending problems (that I can think of) are:

  1. Inability to unload models cleanly
  2. Inability of dataviewer (and cdsutils) to open testpoints.

Apart from this, dataviewer frequently hangs on Donatella at startup. I used ipcs -a | grep 0x | awk '{printf( "-Q %s ", $1 )}' | xargs ipcrm to remove all the extra messages in the dataviewer queue.


Restarting the daqd processes on fb1 using Jamie's instructions from earlier in this thread works - but the mx_stream processes do not seem to come back automatically on c1lsc, c1sus and c1ioo (reasons unknown). I've made a copy of the mxstreamrestart.sh script with the new mxstream restart commands, called mxstreamrestart_debian.sh, which lives in /opt/rtcds/caltech/c1/scripts/cds. I've also modified the CDS overview MEDM screen such that the "mxstream restart" calls this modified script. For now, this requires you to enter the controls password for each machine. I don't know what is a secure way to do it otherwise, but I recall not having to do this in the past with the old mxstreamrestart.sh script.
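For reference, the per-host restart boils down to something like this (sketched here in Python rather than the actual shell script; the 'mx_stream' systemd unit name is an assumption - check the real unit name on the front ends - and each ssh will prompt for the controls password as noted above):

# Hypothetical sketch of restarting mx_stream on each front end
import subprocess

FE_HOSTS = ['c1lsc', 'c1sus', 'c1ioo']
for host in FE_HOSTS:
    # '-t' allocates a tty so the password prompt works; the unit name 'mx_stream' is assumed
    subprocess.call(['ssh', '-t', 'controls@{}'.format(host),
                     'sudo systemctl restart mx_stream'])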

 

  13190   Fri Aug 11 10:27:49 2017   Kira   Update   PEM   temperature sensor

Since there seems to be little difference between the AD590 and AD592, I guess we could just go with the AD590. The temperature noise spectrum in the first graph is for the AD590, so if we want to reproduce those results, we should use the AD590.

For the AD581/AD587, we could go with a few varieties that have the least output voltage drift, although I am not sure what precision we will need. So maybe we could try AD587U and AD581L. We could also try AD587K and AD581K and see if those work as well.

We will also need to calibrate the sensor, as it takes an input of 5V, but the AD581/AD587 provides 10V, which will give about a 1 degree error according to the datasheet. It does state that this is only a calibration error, so it shouldn't be too much of an issue.

I will figure out the packaging once I construct the sensor and verify that it works. Maybe we could use a box similar to the existing sensor, but it depends on the size of the finished circuit.

  13191   Fri Aug 11 10:48:39 2017   Kira   Update   PEM   temperature sensor

Quick update: we actually have AD587KRZ and AD592, so we could start by using that and seeing how it works.

  13192   Fri Aug 11 11:14:24 2017   gautam   Update   CDS   Slow EPICS channels -> Frames re-enabled

I commented out the line pertaining to C0EDCU again; full frames are now being written again.

But we no longer have access to the slow EPICS records.

I am not sure what the failure mode is here - In the master file, there is a line that says the EDCU list "*MUST* COME *AFTER* ALL OTHER FAST INI DEFINITIONS" which it does. But there are a bunch of lines that are testpoint lists after this EDCU line. I wonder if that is the problem?

Quote:

Seems like something has failed after I did this - full frames are no longer being written since ~2.30pm PDT on Aug 10. I found out when I tried to download some of the free-swinging MC1 data.

 

  13193   Fri Aug 11 11:56:02 2017   rana   Update   PEM   temperature sensor

Might as well order several of a few different varieties today. It's good to have some extra in stock; we don't always want to have to wait days for parts to show up. If you give Steve a list of parts to buy, he can order them today or Monday.

There should also be some precision 5V sources (e.g. AD586) that you can try.

  13194   Fri Aug 11 12:27:25 2017   Kira   Update   PEM   temperature sensor

Used an AD592CNZ and an AD586 (5V output) to create a circuit that works and is responsive to temperature changes. At room temp, using a ~1k resistor, it showed ~0.3V across it, as expected. The voltage went up when we heated it with a heat gun. The next step will be to add in an OP amp and design some experiments to check how accurate it is. Thanks to Gautam for helping me with it!
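As a sanity check of that number (assuming the AD592's nominal 1 uA/K scale factor):

# Expected voltage across the ~1k sense resistor at room temperature
T_room = 295.0            # K
I_out  = 1e-6 * T_room    # AD592 output, ~295 uA at 1 uA/K
R      = 1.0e3            # ~1k resistor, ohms
print(I_out * R)          # ~0.295 V, consistent with the ~0.3 V observed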

I have attached the working circuit and a close up of the connections.

  13195   Fri Aug 11 12:32:46 2017   gautam   Update   SUS   MC1 glitches debugging

Attachment #1: Free-swinging sensor spectra. I haven't done any peak fitting, but the locations of the resonances seem consistent with where we expect them to be.

The MC_REFL spot appears to not have shifted significantly (so slow bias voltages are probably not to blame). Now I have to look at trend data to see if there is any evidence of glitching.

I'm not sure I understand the input matrix though - the matrix elements would have me believe that the sensing of POS in UL is ~5x stronger than in UR and LL, but the peak heights don't back that up.

Attachment #3: Second trend over 5 hours (since frame writing was re-enabled this morning). Note that MC1 is still free-swinging, but there is no evidence of steps of ~30cts, which were observed some days ago. Also, from my observations yesterday, MC1 glitched multiple times over a few-hours timescale. More data will have to be looked at, but as things stand, Hypothesis #3 below looks the best.

Quote:
 

Some possible scenarios (assuming the free swinging spectra look alright and the various resonances are where we expect them to be):

  1. With the watchdog shutdown, the PIT/YAW bias voltages still goto the coil (low-passed by 4 poles @1Hz). So if the glitching happens in this path, we should see it in both the shadow sensors and the DC spot positions on the WFS.
  2. If the glitching happens in the shadow sensor readout electronics/cabling, we should see it in the shadow sensor channels, but NOT in the DC spot positions on the WFS (as the watchdog is shutdown, so there should be no actuation to the coils based on OSEM signals).
  3. If we don't see any glitches in WFS spot positions or shadow sensors, then it is indicative of the problem being in the coil driver board / dewhitening board/anti-aliasing board.
  4. I am discounting the problem being in the Satellite box, as we have switched around the MC1 satellite box multiple times - the glitches remain on MC1 and don't follow a Satellite Box. Of course there is the possibility that the cabling from 1X5/1X6 to the Satellite box is bad.

 

  13196   Fri Aug 11 17:36:47 2017   gautam   Update   SUS   MC1 <--> MC3

About 30mins ago, I saw another glitch on MC1 - this happened while the Watchdog was shutdown.

In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.

 

  13197   Fri Aug 11 18:53:35 2017   gautam   Update   CDS   Slow EPICS channels -> Frames re-enabled
Quote:

Seems like something has failed after I did this - full frames are no longer being written since ~2.30pm PDT on Aug 10. I found out when I tried to download some of the free-swinging MC1 data.

To clarify, I logged into fb1, and ran sudo systemctl restart daqd_*. The only change I made was to uncomment the line quoted below in the master file.

Looking at the log using systemctl, I see the following (I just tried restarting the daqd processes again):

Aug 11 00:00:31 fb1 daqd_fw[16149]: LDASUnexpected::unexpected: Caught unexpected exception      "This is a bug. Please log an LDAS problem report including this message.
Aug 11 00:00:31 fb1 daqd_fw[16149]: daqd_fw: LDASUnexpected.cc:131: static void LDASTools::Error::LDASUnexpected::unexpected(): Assertion `false' failed.
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service: main process exited, code=killed, status=6/ABRT
Aug 11 00:00:32 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service holdoff time over, scheduling restart.
Aug 11 00:00:32 fb1 systemd[1]: Stopping Advanced LIGO RTS daqd frame writer...
Aug 11 00:00:32 fb1 systemd[1]: Starting Advanced LIGO RTS daqd frame writer...
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service start request repeated too quickly, refusing to start.
Aug 11 00:00:32 fb1 systemd[1]: Failed to start Advanced LIGO RTS daqd frame writer.
Aug 11 00:00:32 fb1 systemd[1]: Unit daqd_fw.service entered failed state.

Oddly, I am able to access second trends for the same channels from the past (which will be useful for the MC1 debugging). Not sure what's going on.


The live data grabbing using cdsutils still seems to be working though - so I've kicked MC1 again, and am grabbing 2 hours of data live on Pianosa.

So we tried this again with a fresh build of daqd_fw, and it still fails.  The error message is pointing to an underlying bug in the framecpp library ("LDASTools"), which may be tricky to solve.  I'm rustling the appropriate bushes...

  13198   Fri Aug 11 19:34:49 2017   Jamie   Update   CDS   CDS final bits status update

So it appears we now have full frames and second, minute, and minute_raw trends.

We are still not able to raise test points with daqd_rcv (e.g. the NDS1 server), which is why dataviewer and nds2-client can't get test points on their own.

We were not able to add the EDCU (EPICS client) channels without daqd_fw crashing.

We have a new kernel image that's supposed to solve the module unload instability issue.  In order to try it we'll need to restart the entire system, though, so I'll do that on Monday morning.

I've got the CDS guys investigating the test point and EDCU issues, but we won't get any action on that until next week.

Quote:

Remaining unresolved issues:

  • IFO needs to be fully locked to make sure ALL components of all models are working.
  • The remaining red status lights are from the "FB NET" diagnostics, which are reflecting a missing status bit from the front end processes due to the fact that they were compiled with an earlier RCG version (3.0.3) than the mx_streams were (3.3+/trunk).  There will be a new release of the RTS soon, at which point we'll compile everything from the same version, which should get us all green again.
  • The entire system has been fully modernized, to the target CDS reference OS (Debian jessie) and more recent RCG versions.  The management of the various RTS components, both on the front ends and on fb, have as much as possible been updated to use the modern management tools (e.g. systemd, udev, etc.).  These changes need to be documented.  In particular...
  • The fb daqd process has been split into three separate components, a configuration that mirrors what is done at the sites and appears to be more stable:
    • daqd_dc: data concentrator (receives data from front ends)
    • daqd_fw: receives frames from dc and writes out full frames and second/minute trends
    • daqd_rcv: NDS1 server (raises test points and receives archive data from frames from 'nds' process)
    The "target" directory for all of these new components is:
    • /opt/rtcds/caltech/c1/target/daqd
    All of these processes are now managed under systemd supervision on fb, meaning the daqd restart procedure has changed.  This needs to be simplified and clarified.
  • Second trend frames are being written, but for some reason they're not accessible over NDS.
  • Have not had a chance to verify minute trend and raw minute trend writing yet.  Needs to be confirmed.
  • Get wiper script working on new fb.
  • Front end RTS kernel will occasionally crash when the RTS modules are unloaded.  Keith Thorne apparently has a kernel version with a different set of patches from Gerrit Kuhn that does not have this problem.  Keith's kernel needs to be packaged and installed in the front end diskless root.
  • The models accessing the dolphin shared memory will ALL crash when one of the front end hosts on the dolphin network goes away.  This results in a boot fest of all the dolphin-enabled hosts.  Need to figure out what's going on there.
  • The RCG settings snapshotting has changed significantly in later RCG versions.  We need to make sure that all burt backup type stuff is still working correctly.
  • Restoration of /frames from old fb SCSI RAID?
  • Backup of entirety of fb1, including fb1 root (/) and front end diskless root (/diskless)
  • Full documentation of rebuild procedure from Jamie's notes.
  13199   Sat Aug 12 14:09:36 2017   gautam   Update   SUS   Glitches stay on MC1

Even in the switched state, the glitches stayed on MC1.

The coil driver electronics for MC1, upstream of the Satellite box, are what were previously the MC3 electronics.

Attachment #1 shows that there were no glitches in MC3 sensor channels (which are now physically connected to what was previously MC1 coil driver electronics).

Attachment #2 shows the second trends for a 12 hour period for MC1 and MC3 sensor channels. The MC3 channels look well behaved, but there are frequent glitches (at least 9 in the last 12 hours) visible in the MC1 channels.

So to recap:

  • We switched MC1 satellite box - but glitch stayed on MC1, so it would seem the Satellite box is not to blame.
  • We shut down the watchdog and the glitches persisted.
  • We switched the coil driver electronics for MC1, but glitches remained on MC1, and MC3 doesn't show any evidence of glitching. This and the previous bullet point suggest the coil driver electronics are not to blame.
  • For the glitch posted in Attachment #1, I could see the MC-REFL spot moving around on the CCD monitors, so the glitches aren't just a feature in the shadow sensor readout. 

I need to confirm that the output of the coil driver board goes straight to the Sat. Box, but if there are no intermediate elements, the problem is either in the cable from coil driver to sat. box, or downstream of the Satellite box - i.e. vacuum feedthroughs or the suspension itself? The size of the glitches is roughly the same in all 4 face channels (~60-80cts pp).

Quote:

About 30mins ago, I saw another glitch on MC1 - this happened while the Watchdog was shutdown.

In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.

 


GV addendum 14 Aug 2017, 10.30am: Attachment #3 shows the second trend for the MC sensor channels over the weekend. While there were many glitches on Saturday, it seems that Sunday was quieter.

  13200   Sat Aug 12 22:35:22 2017   rana   Update   SUS   Glitches stay on MC1

To add to Gautam's entry: we swapped the cables at the coil driver side (these are the ones that go from coil driver to sat box). In this state, damping is not usable since the MC1 servos would drive MC3.

~70 counts in the sensor means ~70 microns of motion. Since the watchdogs are off and the coil drivers are swapped, this can't be laser beam getting in to the sensors.

We have to consider that these are real strain-release type events happening in the suspension wire or wire standoff, which may require a vent to inspect and possibly repair MC1.

  13201   Sun Aug 13 01:35:09 2017   Koji   Update   SUS   Glitches stay on MC1

We used to have similar suspension excursions at ETMX. This was the motivation to replace the stand-offs from Al ones to ruby ones. Did the replacement solve the issue at ETMX?

  13202   Mon Aug 14 09:49:18 2017   Kira   Update   PEM   temperature sensor

Decided to try adding in an OP amp just to see if it would work. Added an LT1012 and a 100k resistor to the circuit (I originally wanted to use an AD743 as it seems to be the best choice according to Zach's elog here, but it said that they are very precious, so I went with the LT1012 for testing purposes). When heating it with a heat gun, the output voltage went down by a few hundredths of a volt. The maximum voltage was 0.686V. A similar thing happened when I switched to a 10k resistor, where the maximum was 0.705V and it also went down by a few hundredths of a volt upon heating.

I've attached a few pictures showing the circuit.

  13203   Mon Aug 14 12:52:33 2017   Kira   Update   PEM   temperature sensor

I didn't realize that the LT1012 needed power supply connections to function. I added +15V and -15V to pins 7 and 4, respectively, and placed a 10k resistor, and the numbers make more sense now. The voltage showed a negative value, and it became more negative as I heated it up (it's negative due to how a transimpedance amplifier works).

I have attached the new setup and the value it shows (~-3V). It became more negative by about 0.4V, which translates to about a 40K increase in temperature, which makes sense.
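For reference, that conversion is just the transimpedance gain times the sensor's nominal 1 uA/K scale factor:

# Rough check of the transimpedance output (assumes 1 uA/K and the 10k feedback resistor)
R_f = 10e3                   # feedback resistor, ohms
T   = 300.0                  # approximate ambient temperature, K
print(-1e-6 * T * R_f)       # ~ -3 V at the output, as observed
dV  = 0.4                    # observed change in output, V
print(dV / (1e-6 * R_f))     # ~ 40 K apparent temperature change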

In addition, I have attached an updated sketch of the circuit. I will need to do more testing to determine how accurate this is. The next step would be to calculate how much noise there is currently and figure out how to remove this circuit from the breadboard and use a PCB or something like that for final testing in an insulated container.

The reason I chose AD743 initially for the OP amp is because at low frequencies (which is what we are working with), a FET amp such as AD743 will have a low current noise at high impedance, which is what we have in this case. While a FET amp has high voltage noise compared to other OP amps, the current noise becomes more important at high impedance, so it will work better. According to Zach's graphs, the AD743 is best at high impedances, followed by LT1012.

Quote:

Decided to try adding in an OP amp just to see if it would work. Added an LT1012 and a 100k resistor to the circuit (I originally wanted to use an AD743 as it seems to be the best choice according to Zach's elog here, but it said that they are very precious, so I went with the LT1012 for testing purposes). When heating it with a heat gun, the output voltage went down by a few hundredths of a volt. The maximum voltage was 0.686V. A similar thing happened when I switched to a 10k resistor, where the maximum was 0.705V and it also went down by a few hundredths of a volt upon heating.

I've attached a few pictures showing the circuit.

 

  13204   Mon Aug 14 16:24:09 2017   gautam   Update   ALS   Fiber ALS

Today, I borrowed the fiber microscope from Johannes and took a look at the fibers coupled to the PDs. The PD labelled "BEAT PD AUX Y" has an end that seems scratched (Attachments #1 and #2). The scratch seems to be on (or at least very close to) the core. The other PD (Attachments #3 and #4) doesn't look very clean either, but at least the area near the core seems undamaged. The two attachments for each PD corresponds to the two available lighting settings on the fiber microscope.

I have not attempted to clean them yet, though I have also borrowed the cleaning supplies to facilitate this from Johannes. I also plan to inspect the ends of all other fiber connections before re-installing them.

Quote:

Last week, we were talking about reviving the Fiber ALS box. Right now, it's not in great shape. Some changes to be made:

  1. Supply power to the PDs (Menlo FPD310) via a power regulator board. The datasheet says the current consumption per PD is 250 mA. So we need 500mA. We have the D1000217 power regulator board available in the lab. It uses the LM2941 and LM2991 power regulator ICs, both of which are rated for 1A output current, so this seems suitable for our purposes. Thoughts?
  2. Install power decoupling capacitors on the PDs.
  3. Clean up the fiber arrangement inside the box.
  4. Install better switches, plus LED indicators.
  5. Cover the box.
  6. Install it in a better way on the PSL table. Thoughts? e.g. can we mount the unit in some electronics rack and route the fibers to the rack? Perhaps the PSL IR and one of the arm fibers are long enough, but the other arm might be tricky.

Previous elog thread about work done on this box: elog11650

 

  13205   Mon Aug 14 19:41:46 2017   Jamie   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors

I'm upgrading the linux kernel for all the front ends to one that is supposedly more stable and won't freeze when we unload RTS models (linux-image-3.2.88-csp).  Since it's a different kernel version it requires rebuilds of all kernel-related support stuff (mbuf, symmetricom, mx, open-mx, dolphin) and all the front end models.  All the support stuff has been upgraded, but we're now waiting on the front end rebuilds, which takes a while.

Initial testing indicates that the kernel is more stable; we're mostly able to unload/reload RTS modules without the kernel freezing.  However, the c1iscey host seems to be oddly problematic and has frozen twice so far on module unloads.  None of the other hosts have frozen on unload (yet), though, so still not clear.

We're now seeing some timing errors between the front ends and daqd, resulting in a "0x4000" status message in the 'C1:DAQ-DC0_*_STATUS' channels.  Part of the problem was an issue with the IRIG-B/GPS receiver timing unit, which I'll log in a separate post.  Another part of the problem was a bug in the symmetricom driver, which has been resolved.  That wasn't the whole problem, though, since we're still seeing timing errors.  Working with Jonathan to resolve.

  13206   Mon Aug 14 20:01:38 2017   gautam   Update   SUS   Glitches stay on MC1

I don't think we can say for sure. I was just talking to EricQ about this, he said the glitches were often seen when changing the alignment offsets when aligning the arm. I am pretty sure I have seen the ETMX alignment change abruptly since the Ruby Standoff replacement (the Oplev spot just slides across the MEDM display rapidly), but I can't find an elog where I've put in details. I also haven't done a whole lot of work with the arm cavities where I would have noticed this problem. There is this test that Eric did, and it didn't throw up any red flags. But the suspension can be well behaved for weeks at a time before this problem pops up again.

There was also the flaky power connection to the timing card on the ETMX expansion chassis which was fixed only recently, after which there has been no systematic investigation of the status of ETMX.

If it is true that these events are caused by strain building up in the suspension wire, I wonder how we can take systematic steps to avoid it. From what I remember of the SOS assembly procedure, the (unglued) standoff is slid along the optic with the wire under slight tension until the wire slips into the groove on the standoff. Then the tension in the wire is adjusted till the optic is pitch balanced and at the desired height. But it is easy to imagine imprinting some torsional stresses in the (40 um?) wire during this process of looping it around under the optic and placing it in the groove. But perhaps this mechanism makes a negligible contribution to the effect we are seeing, and some other mechanism is responsible in this case.

Quote:

We used to have similar suspension excursion at ETMX. This was the motivation to replace the stand-offs from Al ones to ruby ones. Did the replacement solve the issue at ETMX?

 

  13207   Mon Aug 14 20:12:09 2017   Jamie, Gautam   Update   CDS   Weird problem with GPS receiver

Today we saw a weird issue with the GPS receiver (EndRun Technologies Tempus LX).  GPS timing on fb1 (which is handled via IRIG-B connection to the receiver with a spectracom card) was off by +18 seconds.  We tried resetting the GPS receiver and it still came up with the +18 second offset.  To be clear, the GPS receiver unit itself was showing a time on its front panel that looked close enough to 24-hour UTC, but was off by +18s.  The time also said "GPS" vertically to the right of the time.

We started exploring the settings on the GPS receiver and found this menu item:

Clock -> "Time Mode" -> "UTC"/"GPS"/"Local"

The setting when we found it was "GPS", which seems logical enough.  However, when we switched it to "UTC" the time as shown on the front panel was correct, now with "UTC" vertically to the right of the time, and fb1 was then showing the correct GPS time.

From the manual:

Time Mode
Time mode defines the time format used for the front-panel time display and, if installed, the optional
time code or Serial Time output. The time mode does not affect the NTP output, which is always
UTC. Possible values for the time mode are GPS, UTC, and local time. GPS time is derived from
the GPS satellite system. UTC is GPS time minus the current leap second correction. Local time is
UTC plus local offset and Daylight Savings Time. The local offset and daylight savings time displays
are described below.

The fact that moving to "UTC" fixed the problem, even though that is supposed to remove the leap second correction, might indicate that there's another bug in the symmetricom driver...

  13208   Tue Aug 15 08:22:16 2017   Steve   Update   General   projector light bulb replaced

The BL-FS300C-PH-LE was replaced after 2,904 hrs. It did not explode this time. The 4-month life period is actually pretty good because this is a cheap $73 bulb. The best high-priced warranty is 5 months.

 

PS: future option - a bulbless laser projector

HITACHI LP-WU3500 PROJECTOR	$2,549.00	
DLP, WUXGA, 3500 LUMENS, HDMI, 20K HR LASER, 5YR WARRANTY, 1.36-2.34:1 LENS, 24/7 DUTY CYCLE
45x72 IMAGE FROM 8' TO 13'10" LENS TO SCREEN, AND AT 10' APPROX. 154FL OF BRIGHTNESS ON A 1.0 GAIN SCREEN
http://www.projectorcentral.com/pdf/projector_spec_9722.pdf

This would give you everything you are requesting, plus a lamp-less design and 5yr warranty.  Ground shipping would be free anywhere in the lower 48, and we would not charge sales tax on orders billing/shipping outside of AZ.  If you have any questions or if you would like to order... just let me know!
  13209   Tue Aug 15 11:50:21 2017   Kira   Summary   PEM   temp sensor packaging/mount

For the final packaging/mounting of the sensor to the seismometer, I have thought of two options.

1. Attach the circuit to a PCB and place it inside the can, while leaving the AD590 open to the air inside the can.

  • This makes sure that the sensor gets a direct measurement of the temperature of the air in the can, as it is exposed to the air. 
  • But it samples only a limited area, so it could be the case that the area we place it in happens to be a hot or cold pocket, and the measurement would be inaccurate.
  • This can be solved by placing multiple copies of the circuit in various places of the can and averaging the values.

2. Attach the AD590 to a copper plate with thermal paste and put it into a pomona box.

  • This solves the limited-sample-area problem the first option had, as the copper plate should have a uniform temperature distribution, so we are effectively sampling the temperature of that whole area.
  • Need to make sure that the copper's thermal response time is shorter than the period of the temperature variations we are trying to measure.
  • This can be calculated using equations for heat transfer (listed below).

If anyone has input on which method is preferred or any additional options that we may have, I would appreciate it.

Heat transfer:

q = k A dT / s

  • k = thermal conductivity
  • A = area
  • dT = temperature difference
  • s = thickness

For copper, k = 401 W/mK, s = 1.27 mm, A = 2.66x10^-3 m^2 (for the particular copper plate I measured), and dT = 1K (assumed). Thus the heat transfer will be 839 J/s.

I'm not completely sure what to do with this yet, but it could help us decide whether the copper plate option will be useful for us.
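For what it's worth, plugging the numbers above back into the formula (with the assumed 1K temperature difference):

# Conductive heat transfer through the copper plate, q = k * A * dT / s
k  = 401.0      # W/(m K), copper
A  = 2.66e-3    # m^2, measured plate area
dT = 1.0        # K, assumed temperature difference
s  = 1.27e-3    # m, plate thickness
print(k * A * dT / s)   # ~ 840 W, matching the ~839 J/s quoted above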

 

  13210   Tue Aug 15 13:32:38 2017   Kira   Update   PEM   temperature sensor

Tested to make sure that even when only the AD586 was heated, there was no change in the reading. I did so by placing the AD586 away from the rest of the circuit and blowing hot air only on it. There was, in fact, no change.

Quote:

I didn't realize that the LT1012 needed power supply connections to function. I added +15V and -15V to pins 7 and 4, respectively, and placed a 10k resistor, and the numbers make more sense now. The voltage showed a negative value, and it became more negative as I heated it up (it's negative due to how a transimpedance amplifier works).

I have attached the new setup and the value it shows (~-3V). It became more negative by about 0.4V, which translates to about a 40K increase in temperature, which makes sense.

In addition, I have attached an updated sketch of the circuit. I will need to do more testing to determine how accurate this is. The next step would be to calculate how much noise there is currently and figure out how to remove this circuit from the breadboard and use a PCB or something like that for final testing in an insulated container.

The reason I chose AD743 initially for the OP amp is because at low frequencies (which is what we are working with), a FET amp such as AD743 will have a low current noise at high impedance, which is what we have in this case. While a FET amp has high voltage noise compared to other OP amps, the current noise becomes more important at high impedance, so it will work better. According to Zach's graphs, the AD743 is best at high impedances, followed by LT1012.

Quote:

Decided to try adding in an OP amp just to see if it would work. Added an LT1012 and a 100k resistor to the circuit (I originally wanted to use an AD743 as it seems to be the best choice according to Zach's elog here, but it said that they are very precious, so I went with the LT1012 for testing purposes). When heating it with a heat gun, the output voltage went down by a few hundredths of a volt. The maximum voltage was 0.686V. A similar thing happened when I switched to a 10k resistor, where the maximum was 0.705V and it also went down by a few hundredths of a volt upon heating.

I've attached a few pictures showing the circuit.

 

 

  13211   Tue Aug 15 16:32:42 2017   Jamie, Gautam   Update   CDS   GPS receiver apparently set to correct mode as "UTC"
Quote:

The setting when we found it was "GPS", which seems logical enough.  However, when we switched it to "UTC" the time as shown on the front panel was correct, now with "UTC" vertically to the right of the time, and fb1 was then showing the correct GPS time.

From Keith Thorne:

In the GPS receiver, you are trying to match the IRIG-B output format that is created by the aLIGO IRIG-B Fanout.  Since we have to prep the aLIGO IRIG-B Fanout every time there is a leap second coming, I would suspect that we are sending UTC to the IRIG-B receivers.  Thus, the GPS receiver needs to be set to that mode.

Soooo, "UTC" is the correct mode for the GPS receiver.

  13212   Wed Aug 16 14:54:13 2017   gautam   Update   CDS   PSL monitoring Acromag EPICS server restarted

[johannes, gautam, jamie]

  • Made a directory /opt/rtcds/caltech/c1/scripts/Acromag/PSL where I copied over the files needed by modbusApp to start the server, from Lydia's user directory
  • Edited /ligo/apps/ubuntu12/ligoapps-user-env.sh to export a couple of EPICS variables to facilitate easy startup of the EPICS server
  • Started a tmux session on (soon to be re-christened?) megatron called "acroEPICS"
  • Ran the following command to start up the EPICS server:
${EPICS_MODULES}/modbus/bin/${EPICS_HOST_ARCH}/modbusApp npro_config.cmd

To do:

  1. Make a startup script that runs the above command - eventually this can contain the initialization instructions for all the Acromags
  2. Figure out the initctl/systemctl stuff to make the server automatically restart if it drops for some reason (e.g. power failure)
  13213   Wed Aug 16 14:57:01 2017   Steve   Update   PSL   ref Cavity heating blanket power supply

The last entry I found relating to ref cavity was 2011 Aug 19

  13214   Wed Aug 16 16:05:53 2017   Kira   Update   PEM   temp sensor PCB

Tried taking the circuit from the breadboard to the PCB. I attached all the components to adapters that would allow them to be connected to the PCB. From the first picture, the first component is the AD586, the second is the AD590, and the third is the LT1012, along with a resistor across it. I then soldered the connections between the components, as can be seen in the second picture. When I tested out this version of the circuit by hooking it up to the DC source, I got a reading of ~-15V. I will have to check all the connections to make sure there is contact where there should be one, and no contact where there shouldn't be. I had issues attaching the tiny AD590 and LT1012 to their adapters, so the issue may lie there as well. I'll also check that each component is in working order.

Once I figure out where my error is, my plan is to build two more of these and place a metal object such that it contacts only the surface of the AD590s. This would allow me to compare the three values to the actual temperature of the metal, which would then tell me how accurate this setup is.

Note on the resistor: I measured all the resistors and chose three that had exactly 10.00k Ohm. The voltage detected is dependent on the resistor, so if we are to take three identical copies, I ensured that there would be no error due to the resistors being a little different.

  13215   Wed Aug 16 17:05:53 2017   Jamie   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors

The CDS system has now been moved to a supposedly more stable real-time-patched linux kernel (3.2.88-csp) and RCG r4447 (roughly the head of trunk, intended to be release 3.4).  With one major and one minor exception, everything seems to be working:

The remaining issues are:

  • RFM network down.  The IOP models on all hosts on the RFM network are not detecting their RFM cards.  Keith Thorne thinks that this is because of changes in trunk to support the new long-range PCIe that will be used at the sites, and that we just need to add a new parameter to the cdsParameters block in models that use RFM.  He and Rolf are looking into it for us.
  • The 3.2.88-csp kernel is still not totally stable.  On most hosts (c1sus, c1ioo, c1iscex) it seems totally fine and we're able to load/unload models without issue.  c1iscey is definitely problematic, frequently freezing on module unload.  There must be a hardware/bios issue involved here.  c1lsc has also shown some problems.  A better kernel is supposedly in the works.
  • NDS clients other than DTT are still unable to raise test points.  This appears to be an issue with the daqd_rcv component (i.e. NDS server) not properly resolving the front ends in the GDS network.  Still looking into this with Keith, Rolf, and Jonathan.

Issues that have been fixed:

  • "EDCU" channels, i.e. non-front-end EPICS channels, are now being acquired properly by the DAQ.  The front-ends now send all slow channels to the daq over the MX network stream.  This means that front end channels should no longer be specified in the EDCU ini file.  There were a couple in there that I removed, and that seemed to fix that issue.
  • Data should now be recorded in all formats: full frames, as well as second, minute, and raw_minute trends
  • All FE and DAQD diagnostics are green (other than the ones indicating the problems with the RFM network).  This was fixed by getting the front ends models, mx_stream processes, and daqd processes all compiled against the same version of the advLigoRTS, and adding the appropriate command line parameters to the mx_stream processes.
  13216   Wed Aug 16 17:14:02 2017   Koji   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors

What's the current backup situation?

  13217   Wed Aug 16 18:01:28 2017   Jamie   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors
Quote:

What's the current backup situation?

Good question.  We need to figure something out.  fb1 root is on a RAID1, so there is one layer of safety.  But we absolutely need a full backup of the fb1 root filesystem.  I don't have any great suggestions, other than just getting an external disk, 1T or so, and just copying all of root (minus NFS mounts).

  13218   Wed Aug 16 18:06:01 2017   Koji   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors

We also need to copy chiara's root. What is the best way to get the full image of the root FS?
We may need to restore these root images to a different disk with a different capacity.

Is the dump command good for this?

  13219   Wed Aug 16 18:50:58 2017   Jamie   Update   CDS   front-end/DAQ network down for kernel upgrade, and timing errors
Quote:

The remaining issues are:

  • RFM network down.  The IOP models on all hosts on the RFM network are not detecting their RFM cards.  Keith Thorne thinks that this is because of changes in trunk to support the new long-range PCIe that will be used at the sites, and that we just need to add a new parameter to the cdsParameters block in models that use RFM.  He and Rolf are looking into it for us.

RFM network is back!  Everything green again.

Use of RFM has been turned off in advLigoRTS trunk in favor of the new long-range PCIe networking being developed for the sites.  Rolf provided a single-line patch that re-enables it:

controls@c1sus:/opt/rtcds/rtscore/trunk 0$ svn diff
Index: src/epics/util/feCodeGen.pl
===================================================================
--- src/epics/util/feCodeGen.pl    (revision 4447)
+++ src/epics/util/feCodeGen.pl    (working copy)
@@ -122,7 +122,7 @@
 $diagTest = -1;
 $flipSignals = 0;
 $virtualiop = 0;
-$rfm_via_pcie = 1;
+$rfm_via_pcie = 0;
 $edcu = 0;
 $casdf = 0;
 $globalsdf = 0;
controls@c1sus:/opt/rtcds/rtscore/trunk 0$

This patch was applied to the RTS source checkout we're using for the FE builds (/opt/rtcds/rtscore/trunk, which is r4447, and is linked to /opt/rtcds/rtscore/release).  The following models that use RFM were re-compiled, re-installed, and re-started:

  • c1x02
  • c1rfm
  • c1x03
  • c1als
  • c1x01
  • c1scx
  • c1asx
  • c1x05
  • c1scy
  • c1tst

The re-compiled models now see the RFM cards (dmesg log from c1ioo):

[24052.203469] c1x03: Total of 4 I/O modules found and mapped
[24052.203471] c1x03: ***************************************************************************
[24052.203473] c1x03: 1 RFM cards found
[24052.203474] c1x03:     RFM 0 is a VMIC_5565 module with Node ID 180
[24052.203476] c1x03: address is 0xffffc90021000000
[24052.203478] c1x03: ***************************************************************************

This cleared up all RFM transmission error messages.

CDS upstream are working to make this RFM usage switchable in a reasonable way.

  13220   Wed Aug 16 19:50:17 2017   gautam   Update   SUS   MC1 <--> MC3 switched back

Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.

MC1 has been quiet over the last couple of days, lets see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfs off script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held, while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.

I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.

Quote:

In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.

 

 

  13221   Wed Aug 16 20:01:03 2017   gautam   Update   General   SUS model ASC input weirdness

I'm not sure if this has something to do with the model restarts / new RCG, but while I was re-enabling the MC watchdogs, I noticed the RMS sensor voltage channels on ITMX hovering around ~100mV, even though local damping was on (in which configuration I would expect <1mV if everything is working normally).  I was confused by this behaviour, and after staring at the ITMX suspension screen for a while, I noticed that the inputs to the "ASCP" and "ASCY" servos were "-nan", and the outputs were 10^20 cts (see Attachment #1).

Digging a little deeper, I found that the same problem existed on ITMY, ETMX, ETMY, PRM (but not BS or SRM) - reasons unknown for now.

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

GV edit 28 Oct 0026: Seems like this problem is seen at the sites as well. I wonder if the problem is related.

  13222   Wed Aug 16 20:24:23 2017   gautam   Update   ALS   Fiber ALS

Today, with Johannes' help, I cleaned the fiber tips of the photodiodes. The effect of the cleaning was dramatic - see Attachments #1-4, which are X Beat PD, axial illumination, X Beat PD, oblique illumination, Y beat PD, axial illumination, Y beat PD, oblique illumination. They look much cleaner now, and the feature that looked like a scratch has vanished.

The cleaning procedure followed was:

  • Blow clean air over the fiber tip
  • First, we tried cleaning with the Q-tip like tool, but the results weren't great. The way to use it is to dip the tip in the cleaning solvent for a few seconds, hold the tip to the fiber taking into account the angled cut, and apply 10 gentle quarter turns.
  • Next, we tried cleaning with the wipes. We peeled out an approximately 5" section of the wipe, and laid it out on the table. We then applied cleaning solvent liberally on the central area where we were sure we hadn't touched the wipe. Then you just drag the fiber tip along the soaked part of the wipe. If you get the angle exactly right, the fiber glides smoothly along the surface, but if you are a little misaligned, you get a scratchy sensation. 
  • Blow dry and inspect.

I will repeat this procedure for all fiber connections once I start putting the box back together - I'm almost done with the new box, just waiting on some hardware to arrive.

 

Quote:

Today, I borrowed the fiber microscope from Johannes and took a look at the fibers coupled to the PDs. The PD labelled "BEAT PD AUX Y" has an end that seems scratched (Attachments #1 and #2). The scratch seems to be on (or at least very close to) the core. The other PD (Attachments #3 and #4) doesn't look very clean either, but at least the area near the core seems undamaged. The two attachments for each PD corresponds to the two available lighting settings on the fiber microscope.

I have not attempted to clean them yet, though I have also borrowed the cleaning supplies to facilitate this from Johannes. I also plan to inspect the ends of all other fiber connections before re-installing them.

 

  13223   Thu Aug 17 08:42:27 2017   Steve   Update   PSL   PSL HEPA

The PSL HEPA was running noisy at 100V. The bearing is wearing out. I turned it down to 30V; it is quiet there.

  13224   Thu Aug 17 10:41:58 2017   Kira   Update   PEM   temp sensor PCB

Got it to work. One of the connections was faulty. I decided to check the temperature measured against a thermometer. The sensor showed 26.1 C, but the thermometer showed 25.8 C after I let them both cool down after heating them up. The temperature of the thermometer was dropping at the time of measurement, but the temperature of the sensor was not. This is still a rough version of the final sensor, so I'm not sure what exactly causes this discrepancy.

Quote:

Tried taking the circuit from the breadboard to the PCB. I attached all the components to adapters that would allow them to be connected to the PCB. From the first picture, the first component is the AD586, the second is the AD590, and the third is the LT1012, along with a resistor across it. I then soldered the connections between the components, as can be seen in the second picture. When I tested out this version of the circuit by hooking it up to the DC source, I got a reading of ~-15V. I will have to check all the connections to make sure there is contact where there should be one, and no contact where there shouldn't be. I had issues attaching the tiny AD590 and LT1012 to their adapters, so the issue may lie there as well. I'll also check that each component is in working order.

Once I figure out where my error is, my plan is to build two more of these and place a metal object such that it contacts only the surface of the AD590s. This would allow me to compare the three values to the actual temperature of the metal, which would then tell me how accurate this setup is.

Note on the resistor: I measured all the resistors and chose three that had exactly 10.00k Ohm. The voltage detected is dependent on the resistor, so if we are to take three identical copies, I ensured that there would be no error due to the resistors being a little different.

 

  13225   Thu Aug 17 11:17:49 2017   gautam   Update   SUS   MC1 <--> MC3 switched back

Seems like this modification didn't really work. There were several large MC1 glitches, and one of them misaligned MC1 so much that the IMC didn't relock for the last ~6 hours. I re-aligned MC1 manually, and now it is locked fine.

Quote:

Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.

MC1 has been quiet over the last couple of days, lets see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfs off script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held, while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.

I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.

 

 

  13226   Thu Aug 17 17:33:01 2017   gautam   Update   SUS   MC1 <--> MC3 switched back

That's why the Autolocker clears the outputs; we don't want to be holding the offsets from the last ms of lock when it was all messed up. Instead it would be best to have a slow (~mHz) relief script that takes the WFS controls and puts them onto the MC SUS sliders. This would then re-align the MC to the input beam rather than the input to the MC, which is not the best idea.
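A minimal sketch of what such a relief loop might look like (assuming pyepics is available on the workstations; the channel names, gain, and loop rate are purely illustrative):

# Hypothetical slow (~mHz) relief loop: bleed the WFS control signal onto the SUS alignment slider
import time
from epics import caget, caput

WFS_OUT = 'C1:IOO-MC1_PIT_OUTPUT'   # illustrative WFS control output channel
SLIDER  = 'C1:SUS-MC1_PIT_COMM'     # illustrative alignment slider channel

for _ in range(3600):                          # run for ~1 hour
    err = caget(WFS_OUT)
    caput(SLIDER, caget(SLIDER) + 1e-4 * err)  # small gain -> ~mHz offload bandwidth
    time.sleep(1)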

Quote:

Seems like this modification didn't really work.

 

  13227   Thu Aug 17 22:54:49 2017   ericq   Update   Computers   Trying to access JetStor RAID files

The JetStor RAID unit that we had been using for frame writing before the fb meltdown has some archived frames from DRFPMI locks that I want to get at. I spent some time today trying to mount it on optimus with no success.

The unit was connected to fb via a SCSI cable to a SCSI-to-PCI card inside of fb. I moved the card to optimus, and attached the cable. However, no mountable device corresponding to the RAID seems to show up anywhere.

The RAID unit can tell that it's hooked up to a computer, because when optimus restarts, the RAID event log says "Host Channel 0 - SCSI Bus Reset."

The computer is able to get some sort of signals from the RAID unit, because when I change the SCSI ID, the syslog will say 'detected non-optimal RAID status'.

The PCI card is ID'd fine in lspci as "06:01.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev c1)"

'lsscsi' does not list anything related to the unit

Using 'mpt-status -p', which is somehow associated with this kind of thing, returns the disheartening output:

Checking for SCSI ID:0
Checking for SCSI ID:1
Checking for SCSI ID:2
Checking for SCSI ID:3
Checking for SCSI ID:4
Checking for SCSI ID:5
Checking for SCSI ID:6
Checking for SCSI ID:7
Checking for SCSI ID:8
Checking for SCSI ID:9
Checking for SCSI ID:10
Checking for SCSI ID:11
Checking for SCSI ID:12
Checking for SCSI ID:13
Checking for SCSI ID:14
Checking for SCSI ID:15
Nothing found, contact the author
 
I don't know what to try at this point.
  13228   Fri Aug 18 21:58:35 2017   gautam   Update   General   SUS model ASC input weirdness

I spent some time today trying to debug this issue.

Jamie and I had opened up the c1sus frontend to try and replace the RFM card before we realized that the problem was in the RCG code generator. During this process, we had disconnected all of the back-panel cabling to this machine (2 ethernet cables, dolphin cable, and RFM cables/fibers). I thought I may have accidentally returned the cables to the wrong positions - but all the status indicator lights indicate that everything is working as it should, and I also confirmed that the cabling is as it is in the pictures of the rack on the wiki page.

Looking at the SimuLink model diagram (see Attachment #1 for example), it looks like (at least some of) these channels are actually on the dolphin network, and not the RFM network (with which we were experiencing problems). This suggests that the problem is something deeper. Although I did see nans in some of the ETMX ASC channels as well, for which the channels are piped over the RFM network. Even more puzzling is that the ASC MEDM screen (Attachment #3) and the SimuLink diagram (Attachment #2) suggest that there is an output matrix in between the input signals and the output angular control signals to the suspensions. As Attachment #4 shows, the rows corresponding to ITMX PIT and YAW are zero (I confirmed using z read <matrixElement>). Attachment #3 shows that the output of all the servo banks except CARM_YAW is zero, but CARM_YAW has no matrix element going to the ITMs (also confirmed with z read <servoOutputChannel>). So 0 x 0 should be 0, but for some reason the model doesn't give this output?

GV Edit: As EricQ just pointed out to me, nan x 0 is still nan, which probably explains the whole issue. Poking a little further, it seems like this is an SDF issue - the SDF table isn't able to catch differences for this hold output channel.
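A one-liner confirms the arithmetic:

# nan propagates through multiplication, even by zero
print(float('nan') * 0.0)   # -> nan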


As I was writing this elog, I noticed that, as mentioned above, the CARM_YAW output was "nan". When I restart the model (thankfully this didn't crash c1lsc!), it seems to default to this state. Opening up the filter module, I saw that the "hold output" was enabled.

Toggling that switch made the nans in all the SUS ASC channels disappear. Mysterious.

All the points above stand - CARM_YAW output shouldn't have been going anywhere as per the output matrix, but it seems to have been responsible? Seems like a bug in any case if a model restarts with a field as "nan".

Anyways the problem seems to have been resolved so I'm going to try locking and dither aligning the arms now.

Rolf mentioned that a simple update could fix several of the CDS issues we are facing (e.g. inability to open up testpoints), but he didn't seem to have any insight into this particular issue. Jamie will try and recompile all the models and then we have to see if that fixes the remaining problems.

Quote:
 

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

 

  13229   Fri Aug 18 23:59:53 2017   gautam   Update   ALS   X Arm ALS lock

[ericq, gautam]

  • I was just getting the IFO aligned, and single arm lock going, when EricQ came in and asked if we could get some ALS data.
  • ALS beats seemed fine, in particular the X-Arm. The broad hump around ~70Hz that was present in my previous ALS update was nowhere to be seen - reasons unknown.
  • Copied over /opt/rtcds/caltech/c1/scripts/YARM/Lock_ALS_YARM.py to /opt/rtcds/caltech/c1/scripts/XARM/Lock_ALS_XARM.py. Could be useful when we want to do arm cavity scans.
  • Made appropriate changes to allow ALS locking of Xarm - the testpoint inaccessibility makes things a little annoying but for tonight we just used DQ channels in place (or slow channels when DQ chans were not available)
  • Calibration of X arm error signal seemed off - so we fixed it by driving a line in ETMX and matching up the peaks in the ALS error signal and POX11. We then updated the gain of the filter in the CINV filter bank accordingly.
  • Got some decent data - the X arm stayed locked on ALS for >60mins, during which time the Y arm stayed locked on POY11, and the Y green also remained locked. There was no evidence of the X arm 00 mode randomly dropping out of lock tonight.
  • EQ will update with a sick comparison plot - today we looked at the ALS noise from the perspective of the Green Locking Izumi et al. paper.
  • Y arm ALS noise didn't look so hot tonight - to be investigated...

Leaving LSC mode OFF for now while CDS is still under investigation


Not really related to this work: We saw that the safe.snap file for c1oaf seems to have gotten overwritten at some point. I restored the EPICS values from a known good time, and over-wrote the safe.snap file.

  13230   Sat Aug 19 01:35:08 2017 ericqUpdateALSX Arm ALS lock

My motivation tonight was to get an up-to-date spectrum of a calibrated measurement of the out-of-loop displacement of an arm locked on ALS (using the PDH signal as the out-of-loop sensor), to compare the ALS control noise performance with that reported in the Izumi et al. green locking paper.

I was able to fish out the PSD from the paper from the 40m svn, but the comparison as plotted looks kind of fishy. I don't see why the noise from 10-60Hz should be so different/worse. We updated the POX counts to meters conversion by looking at the Hz-calibrated ALSX signal and a ~800Hz line injected on ETMX.
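
For reference, the arithmetic behind that calibration update goes roughly as follows (a hedged sketch: the peak heights are placeholders, the arm length is approximate, and only the ~800 Hz line frequency comes from the measurement):

# Peak heights read off the spectra at the ~800 Hz ETMX line, in each
# signal's native units (placeholder numbers, not tonight's values):
peak_alsx_hz = 42.0       # Hz-calibrated ALSX beat signal
peak_pox_cts = 1.3e3      # POX11 error signal, in counts

hz_per_count = peak_alsx_hz / peak_pox_cts

# Convert Hz of detuning to meters of arm length: dL = L * dnu * lambda / c
c, lam, L_arm = 299792458.0, 1064e-9, 37.79
m_per_count = hz_per_count * L_arm * lam / c
print("POX11 calibration ~ %.3g m/count" % m_per_count)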

  13231   Mon Aug 21 09:08:54 2017 SteveUpdateSUStiny glitches

They are synchronised tiny glitches. They are not mechanical.
 

  13232   Mon Aug 21 13:07:08 2017 KiraUpdatePEMtemp sensor PCB

On Friday, I cleaned up the circuit so that there are only three connections needed (+15V, -15V, GND) and a BNC connector for reading the output. Today, I added in bypass capacitors. The small yellow ones are 0.1 µF ceramic, and the large ones are 100 µF electrolytic. They are used to stabilize the +15V and -15V inputs to the op-amp and minimize fluctuations, since the op-amp does not have its own regulator for stability. I have also attached the circuit diagram for the op-amp only, where (1) marks the electrolytic capacitors and (2) the ceramic ones. The temperature reading is still about 2 degrees off, but if that offset is constant over our whole temperature range, we can simply calibrate it out later.

Here is a helpful link on bypass capacitors (thanks to Kevin for sending it to me).

As a note, the electrolytic capacitors do have a polarity, so it is important to place them correctly (the negative side is towards the lower voltage potential, and not always towards ground).
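
For reference, the expected readout conversion for this circuit looks roughly like the sketch below (assuming the nominal 1 uA/K AD590 scale factor and the 10.00 kOhm sense resistor from the quoted entry; the calibration offset is a placeholder to be measured):

# AD590 sources ~1 uA/K, so across a 10.00 kOhm resistor the output is ~10 mV/K.
R_SENSE = 10.00e3                       # Ohm
K_PER_VOLT = 1.0 / (1e-6 * R_SENSE)     # = 100 K/V, i.e. 10 mV/K

def temp_from_voltage(v_out, offset_c=0.0):
    """Convert the measured output voltage to deg C; offset_c would absorb
    the ~2 degC discrepancy once it has been characterized."""
    return v_out * K_PER_VOLT - 273.15 + offset_c

print(temp_from_voltage(2.99))          # ~25.9 degC for a 2.99 V reading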

Quote:

Got it to work. One of the connections was faulty. I decided to check the temperature measured against a thermometer. The sensor showed 26.1 C, but the thermometer showed 25.8 C after I let them both cool down after heating them up. The temperature of the thermometer was dropping at the time of measurement, but the temperature of the sensor was not. This is still a rough version of the final sensor, so I'm not sure what exactly causes this discrepancy.

Quote:

Tried taking the circuit from the breadboard to the PCB. I attached all the components to adapters that allow them to be connected to the PCB. In the first picture, the first component is the AD586, the second is the AD590, and the third is the LT1012, along with a resistor across it. I then soldered the connections between the components, as can be seen in the second picture. When I tested this version of the circuit by hooking it up to the DC source, I got a reading of ~-15V. I will have to check all the connections to make sure there is contact where there should be, and no contact where there shouldn't be. I had issues attaching the tiny AD590 and LT1012 to their adapters, so the problem may lie there as well. I'll also check that each component is in working order.

Once I figure out where my error is, my plan is to build two more of these and place a metal object such that it contacts only the surface of the AD590s. This would allow me to compare the three values to the actual temperature of the metal, which would then tell me how accurate this setup is.

Note on the resistor: I measured all the resistors and chose three that measured exactly 10.00 kOhm. The output voltage depends on the resistor value, so since we are building three identical copies, this ensures there is no error from the resistors differing slightly from one another.

 

 

  13233   Mon Aug 21 14:53:32 2017 gautamUpdateVACRGA reset

[gautam, steve]

In the aftermath of the accidental vent, it looks like the RGA was shut down.

We followed the instructions in this elog to restart the RGA.

Seems to be working now, Steve says we just need to wait for it to warm up before we can collect a reliable scan.

Quote:

We have good RGA scan now. There was no scan for 3 months.

 

  13234   Mon Aug 21 16:35:48 2017 gautamUpdateVACUPS checkup

[steve, gautam]

At Rolf/Rich Abbott's request, we performed a check of the UPS today.

Steve believed that the UPS was functioning as it should, and the recent accidental vent was because the UPS batteries were insufficiently charged when the test was performed. Today, we decided to try testing the UPS.

We first closed V1, VM1 and VA6 using the MEDM screen. We prepared to pull power on all these valves by loosening the power connections (but not detaching them). [During this process, I lost the screw holding the power cord fixed to the gate valve V1 - we are looking for a replacement right now but it seems to be an odd size. It is cable tied for now.]

The battery charge indicator LEDs on the UPS indicated that the batteries were fully charged.

Next, we hit the "Test" button on the UPS - it has to be held down for ~3 seconds for the test to be actually initiated, seems to be a safety feature of the UPS. Once the test is underway, the LED indicators on the UPS will indicate that the loading is on the UPS batteries. The test itself lasts for ~5seconds, after which the UPS automatically reverts to the nominal configuration of supplying power from the main line (no additional user input is required).

In this test, one of the five battery charge indicator LEDs went off (5 ON LEDs indicate full charge).

So on the basis of this test, it would seem that the UPS is functioning as expected. It remains to be investigated if the various hardware/software interlocks in place will initiate the right sequence of valve closures when required.


Quote:
 

Never hit O on the Vacuum UPS!

Note: the "all off" configuration should be all valves closed! This should be fixed now.

In case of emergency, you can close V1 by disconnecting its actuating power as shown in Atm3, provided you have ~60 PSI of pneumatic pressure.

 

  13235   Mon Aug 21 20:11:25 2017 johannesSummaryGeneralLoss measurements plan

There are three methods we (will soon) have available to evaluate the round-trip dissipative losses in the arms that do not suffer from the ITM loss dominance:

  • DC reflection method:
    • Compare reflected light levels from [ITM only] vs [arm cavity on resonance]
  • Basler CCDs:
    • Infer large (or small) angle scatter loss with calibrated CCDs
  • Reflection ringdowns:
    • Need AS port light injection, principle is similar to DC method but better (?)

DCREFL

The DC method of comparing reflectivities has been used in the past and is relatively easy to do. After the recent vacuum troubles, the first step should be to re-do these measurements as CDS permits (this needs some ASS functionality and, of course, the MC to behave). It wouldn't hurt to know the parameters this depends on - namely the mode overlap and the modulation depths - with better certainty. Maybe the SURF scripts for mode spectroscopy can be applied?
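
For concreteness, here is a rough sketch of how the round-trip loss would be backed out of the two reflected power levels (the mirror transmissions and the measured ratio are assumptions/placeholders, and mode overlap is ignored):

import numpy as np
from scipy.optimize import brentq

T1, T2 = 1.4e-2, 15e-6                    # assumed ITM / ETM power transmissions

def refl_ratio(loss_rt):
    """P_refl(arm locked on resonance) / P_refl(ITM only), lumping the
    round-trip loss into an effective ETM transmission."""
    r1 = np.sqrt(1.0 - T1)
    r2 = np.sqrt(1.0 - T2 - loss_rt)
    r_cav = (r2 - r1) / (1.0 - r1 * r2)   # cavity amplitude reflectivity on resonance
    return r_cav**2 / r1**2

measured_ratio = 0.985                    # placeholder DC reflection ratio
loss = brentq(lambda L: refl_ratio(L) - measured_ratio, 1e-7, 1e-3)
print("round-trip loss ~ %.0f ppm" % (loss * 1e6))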

CCDs

With the new CCD cameras calibrated, we can determine the magnitude of the large-angle scatter loss (assuming isotropic scatter) of ETMX, and possibly ETMY, before the vent. Can we look past ETMX/ETMY from the viewports? Then we can probably also look at the small-angle scatter of ITMX and ITMY. If not, once we open one of the chambers, there's the option of installing mirrors as close as possible to the main beam path. The easiest is probably to look at ITMX, since there is plenty of space in the XEND chamber, and the camera is already installed.
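
Roughly, the large-angle loss estimate from a calibrated camera would go like this (all numbers are placeholders, and the isotropic-scatter extrapolation is the stated assumption):

import math

P_circ  = 100.0        # W, assumed circulating arm power
P_img   = 2e-7         # W, scattered power collected by the camera (from its calibration)
d_lens  = 1.0          # m, distance from the optic to the camera lens
r_lens  = 12.5e-3      # m, lens aperture radius
dOmega  = math.pi * r_lens**2 / d_lens**2     # solid angle subtended by the lens

# Isotropic assumption: extrapolate the collected power over the half-sphere
P_scatter_total = P_img * (2.0 * math.pi / dOmega)
print("large-angle scatter loss ~ %.1f ppm" % (P_scatter_total / P_circ * 1e6))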

ASPORT

This requires a lot of up-front work. We decided to use the spare 200mW NPRO. It will be placed on the PSL table and injected into an optical fiber, which terminates on the AS table. The beam, which is back in free space there, needs to be sort-of mode-matched into the SRC ("sort-of" because mode-spectroscopy). We want to be able to phase-lock this secondary beam to the PSL with at least a couple of kHz of bandwidth, and also to completely extinguish the beam on time scales of a few microseconds. We will likely need to purchase a few components beyond what we can salvage from other labs; I'm still going through the inventory and will know more soon (more detailed post to follow). We also need to settle on the polarization we want to send in from the back.

 

Tentative Schedule (aggressive)

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

Week Aug 28 - Sep 3:

  • Re-evaluate modulation indices if necessary
  • Optical beat AS Port Auxiliary Laser (ASAL) - PSL
  • PLL setup
  • CCD large angle prep work

Week Sep 4 - Sep 10:

  • PLL CDS integration
  • Amplitude-modulation preparation
  • CCD large angles

Week Sep 11 - Sep 17:

  • Fiber-injection
  • AS table preliminary mode-matching
  • Faraday setup
  • CCD small angle prep work

Week Sep 18 - Sep 24:

  • ASAL amplitude switching
  • CCD small angles

Week Sep 25 - Oct 1:

  • AS port ringdowns

 

  13236   Mon Aug 21 21:26:41 2017 gautamSummaryGeneralLoss measurements plan

In case you want to use it, I had profiled the Lightwave NPRO some time back, and we even used it as the AUX X laser for a short period of time.

As for using the AS laser for mode spectroscopy: don't we want to match the beam into the cavity as best as possible, and then use some technique to disturb the input mode (like the dental tooth scraper technique from Chris Mueller's thesis)? 

Johannes and I did an arm scan of the X arm today (arm controlled with ALS, monitoring IR transmission) - only 2 IR FSRs were scanned, but there should be sufficient information in there to extract the modulation depth and mode matching - can we use Kaustubh's/Naomi's code? The Y arm ALS needs to be touched up, so I don't have a Y arm scan yet. Note that to get a good arm scan measurement, the high-gain Thorlabs PD should be used as the transmission PD.
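
If we do feed the scan to existing code, the basic extraction would look something like the sketch below (peak heights are placeholders; it assumes the RF sideband and higher-order-mode resonances are resolved in the scan):

from scipy.special import jv
from scipy.optimize import brentq

# Relative transmission peak heights from the scan (placeholder numbers):
P_00_carrier = 1.00     # TEM00 carrier
P_sideband   = 0.012    # one first-order RF sideband, TEM00
P_hom_total  = 0.08     # sum of the higher-order-mode carrier peaks

# Modulation depth m from the sideband/carrier power ratio J1(m)^2 / J0(m)^2:
ratio = P_sideband / P_00_carrier
m = brentq(lambda x: (jv(1, x) / jv(0, x))**2 - ratio, 1e-3, 1.0)

# Mode matching: fraction of carrier power coupled into the 00 mode
mode_matching = P_00_carrier / (P_00_carrier + P_hom_total)
print("mod depth ~ %.3f rad, mode matching ~ %.1f %%" % (m, 100 * mode_matching))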

Quote:
 

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

 

  13237   Mon Aug 21 23:38:55 2017 gautamUpdateALSALS out-of-loop noise

I worked a little bit on the Y arm ALS today. 

  • Started by locking the Y arm to IR with POY, and then ran the dither alignment script to maximize Y arm transmission.
  • Green TRY DC monitor was around 0.16, whereas I have seen ~0.45 when we were doing DRFPMI locking.
  • So I went to the Y end table and tweaked the steering mirrors a little. I was able to get GTRY to ~0.42. I think this can be tweaked a little further but I decided to push on for tonight.
  • The beat amplitude on the network analyzer in the control room is comparable to the X arm beat now.
  • Adjusted the gain of the phase tracker servos, cleared phase history.
  • Looking at the ALS beat noise with the arms locked to IR and the slow ALS temperature control loops ON (see Attachment #1), the current measurements line up quite well with the reference traces.

I am now going to measure the OLTFs of both green PDH loops to check that the overall loop gain is okay, and also check the measurement against EricQ's LISO model of the (modified) AUX green PDH servos. Results to follow.


Some weeks ago, I had moved some of the green steering optics on the PSL table around, in order to flip some mirror mounts and try to get angles of incidence closer to ~45 deg on some of the steering mirrors. As a result of this work, I can see some light on the GTRY CCD when the X green shutter is open. It is unclear if there is also some scattered light on the RFPDs. I will post pictures and a more detailed investigation of the situation on the PSL table later; there are multiple stray green beams on the PSL table which should probably be dumped.


As I was writing this elog, I saw the X green lock drop abruptly. During this time, the X arm stayed locked to the IR, and the Y arm beat on the control room network analyzer did not jump (at least not by an amount visible to the eye). After toggling the X end shutter a few times, the green TEM00 lock was re-acquired, but the beatnote had moved on the control room analyzer by ~40 MHz. On Friday evening, however, the X green lock held for >1 hour. Need to keep an eye on this.

  13238   Tue Aug 22 02:19:11 2017 gautamUpdateALSALS OLTFs

Attachment #1 shows the results of my measurements tonight (SR785 data in Attachment #2). Both loops have a UGF of ~10kHz, with ~55 degrees of phase margin.

Excitation was injected via an SR560 at the PDH error point; the amplitude was 35 mV. According to the LED indicators on these boxes, the low-frequency boost stages were ON. The gain knob of the X end PDH box was at 6.5, and that of the Y end PDH box was at 4.9. I need to check the schematics to interpret these numbers. GV Edit: According to this elog, these numbers mean that the overall gain of the X end PDH box is approx. 25 dB, while that of the Y end PDH box is approx. 15 dB. I believe the Y end Lightwave NPRO has an actuator discriminant of ~5 MHz/V, while the X end Innolight is more like 1 MHz/V.
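
As a sanity check on those numbers (a rough back-of-the-envelope sketch, not a measurement; the actuator discriminants are the assumed values quoted above):

gain_db       = {"X": 25.0, "Y": 15.0}    # overall PDH box gains from the knob settings
act_MHz_per_V = {"X": 1.0,  "Y": 5.0}     # assumed NPRO PZT discriminants

for arm in ("X", "Y"):
    g_lin = 10.0 ** (gain_db[arm] / 20.0)
    print("%s arm: box gain x actuator ~ %.1f MHz/V" % (arm, g_lin * act_MHz_per_V[arm]))
# -> ~17.8 MHz/V (X) vs ~28.1 MHz/V (Y): the products agree to within a factor
#    of ~2, consistent with both loops showing similar ~10 kHz UGFs.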

Not sure what to make of the X PDH loop measurement being so much noisier than the Y end's; I need to think about this.

More detailed analysis to follow.

Quote:

 

I am now going to measure the OLTFs of both green PDH loops to check that the overall loop gain is okay, and also check the measurement against EricQ's LISO model of the (modified) AUX green PDH servos. Results to follow.

 
