ID   Date   Author   Type   Category   Subject
  4432   Wed Mar 23 12:40:22 2011   Chief Recycling Officer   HowTo   Environment   Recycle stuff!

The following is a message from the LIGO 40m Chief Recycling Officer:

Please get up off your (Alignment Stabilization Servo)es and recycle your bottles and cans!  There is a recycling bin in the control room.  Recent weeks have seen an increase in the number of bottles/cans thrown away in the regular garbage.  This is not cool. 

Thank you,

---L4mCRO

  3161   Tue Jul 6 17:29:04 2010   Chip   Update   PEM   The Ranger is mine!
  8061   Mon Feb 11 18:39:10 2013   Chloe   Update   General   Pictures of Circuitry in Photodiode

I am going to be making measurements to find the optical mounts with the least noise. I am using a quadrant photodiode (QPD) to record the intensity of laser light. These are pictures of the circuitry inside (both sides). I will be designing and building some circuitry on a breadboard in the next few days to add and subtract the quadrant signals into pitch and yaw outputs.
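
For reference, the pitch/yaw outputs are just sums and differences of the four quadrant signals. A minimal sketch of the arithmetic (the quadrant labeling and sign conventions here are my assumptions, not necessarily what the final circuit will use):

# Quadrant photodiode pitch/yaw combination (illustrative sign convention).
# Quadrants: a = top-left, b = top-right, c = bottom-left, d = bottom-right.
def qpd_combine(a, b, c, d):
    total = a + b + c + d        # sum signal
    pitch = (a + b) - (c + d)    # top minus bottom
    yaw = (b + d) - (a + c)      # right minus left
    return pitch / total, yaw / total

The analog circuit forms the same sums and differences with op amps; normalizing by the sum (done here in software) additionally removes overall laser intensity fluctuations.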

Attachment 1: IMG_0327.JPG
Attachment 2: IMG_0329.JPG
  8067   Tue Feb 12 17:26:31 2013   Chloe   Update   QPD circuitry to test mount vibrations

I spent a while today reading about op-amps and understanding what would be necessary to design a circuit that would directly give pitch and yaw from the QPD I am using. After getting an idea of which signals would be summed or subtracted, I opened up the QPD to take better pictures than last time (sorry, the pictures were blurry last time and I didn't realize). It turns out some of the connections inside the QPD have been broken, which would explain why we saw an unchanging signal in Ch2 on the oscilloscope yesterday when trying to test the laser setup.

I found a couple of other QPDs, which I will use to help understand the circuit (and what is going on). I will try to use the same QPD box since it has banana and BNC adapters, which is helpful to have in the lab. Once I have worked out the circuitry and designed electronics to add and subtract the signals, I will build and mount all the circuits within the box (more sturdily than last time) so as to have a reliable way of measuring the mount vibrations when I get to that point.

  8090   Fri Feb 15 17:11:13 2013   Chloe   Update   QPD circuit pictures

I took better pictures of the QPD circuits and spent a couple of hours with a multimeter trying to figure out how all the connections worked. I will continue to do so and analyze the circuits over the weekend to try to understand what is going on. I also have an old SURF report, sent to me by Eric, that is similar to the design I was planning to use to sum the pitch and yaw signals. I will try to look at this over the weekend.

Attachment 1: IMG_0337.JPG
Attachment 2: IMG_0338.JPG
  8111   Tue Feb 19 17:14:56 2013   Chloe   Update   QPD Circuit

I finished working out the circuit and figured out where the broken connections were. This is diagrammed in my notebook (I will draw it up more nicely and include it in a future elog post). Within the QPD circuitry, there already seem to be op amps that regulate the circuit. I need to discuss the final diagram further with Eric.

I rewired the circuit inside the QPD box, which took a while because it was difficult to solder the wires to such small locations without having multiple wires touch. This is complete, and on Friday I will begin to make the circuit that adds/subtracts the signals to give pitch and yaw.

  8139   Fri Feb 22 17:33:38 2013   Chloe   Update   QPD circuit diagram

Today I tested the circuitry within the QPD to make sure I had put it back together correctly. With the QPD hooked up to the oscilloscope, the output detected a laser pointer shone on the different quadrants, so fixing the QPD worked!

Following this, I spent a while understanding the circuit that reads out the QPD. I concluded that the op amp on the QPD circuit acts as a low-pass filter with a corner frequency of about 17 kHz, which serves our purposes (we are only interested in frequencies below 1 kHz). I diagrammed the circuit that I plan to build to give pitch and yaw (attached). It will be necessary to make small modifications to the circuitry already built (removing some resistors), which I have started on.
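
For scale, the corner frequency of a single-pole RC stage is f_c = 1/(2*pi*R*C). A quick sanity check with hypothetical component values (I have not pulled the actual R and C off the board):

import math
R = 10e3   # feedback resistor [ohm], assumed
C = 1e-9   # feedback capacitor [F], assumed
f_c = 1 / (2 * math.pi * R * C)
print(f"corner frequency: {f_c/1e3:.1f} kHz")   # ~15.9 kHz, roughly the measured ~17 kHz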

Next time, I will construct the circuit that adds/subtracts signals on a breadboard and test it with the QPD circuit that is already built. 

Attachment 1: IMG_0377.JPG
  8173   Tue Feb 26 17:36:28 2013   Chloe   Update   QPD Circuit

I corrected the circuit diagram I constructed last time - it would have read zero output most of the time because I made a mistake with the op amps. I will attach it once I have discussed it with Eric.

I searched the 40m lab and ended up finding a breadboard in the Bridge lab. I then spent a while trying to remove the QPD from the circuit board it was on, which was difficult since it had 6 pins. After that, I soldered wires to the QPD legs and began constructing the circuit on the breadboard. I also spent time showing Annalisa the setup and explaining my project.

On Friday I will try and reconstruct all of the circuitry from before for the QPD.

  8233   Tue Mar 5 17:23:06 2013   Chloe   Update   QPD Adding/Subtracting Circuit

Today I finished building the adding/subtracting circuit for the QPD and verified that the QPD could see a laser moving across its field of view in both pitch and yaw. It didn't seem to behave strangely (saturate) at the edges, but I need to test this more carefully to be sure.

However, this circuit uses many op amps, which will make it hard to fit the final circuit into the QPD box. I am trying to figure out how to do this with fewer op amps (using a quad op amp to amplify the signals from the QPD, and summing/subtracting the signals with a single op amp instead of three).

I finally got around to asking Steve to order more breadboards! I am trying to determine what would be a good QPD to order for the final circuit, since we do not have any unmounted QPDs that aren't ancient. I'll read up on things I don't know enough about (namely op amps).

  8295   Thu Mar 14 16:56:16 2013   Chloe   Update   QPD Circuit Design

I have sketched out the circuit design for the QPD. However, even with the different op amp configuration I discussed with Eric on Friday, space will be a problem. It may be possible to squeeze everything onto a single circuit board that fits in the QPD box, but I think it is more likely that I will need two separate circuit boards mounted within the box: one that integrates the signal from the QPD, and another that adds/subtracts (this involves many resistors, which will take up a lot of space). I will continue to think about the best design for this.

I will try to have the circuit built in the next week or so, which may be difficult since I just started finals, which will take most of my time. I spent most of this week writing an ECDL proposal for a SURF with Tara. I'll make up for whatever work I miss, since I'll be here for my spring break and doing little besides working in lab.

  8338   Mon Mar 25 16:51:43 2013   Chloe   Update   Final QPD Circuit Design

This is the final version of the QPD circuit I'm going to build. After playing around with the spatial arrangement, it should fit into the box I was planning to use, although it will be a rather tight fit. The pitch, yaw, and summing circuits will be handled with a quad op amp. I am planning to meet with Eric tomorrow to figure out the logistics of building things.

In the meantime, I'm reading about designing the ECDL for my summer project with Tara. He sent me several papers to read so we can talk on Wednesday.

Attachment 1: IMG-20130325-00244.jpg
  8346   Mon Mar 25 23:27:26 2013   Chloe   Update   QPD Circuit

In order to test the mount vibrations, I will likely try to make a different circuit work (with the summing/subtracting on an external breadboard), and designing an optimal circuit will be a side project. Linked below is the circuit with the power supply Rana came up with, along with the design I had in mind for the rest of the circuit. In my free time, I will try to figure out what parts to get to reduce noise and slowly work on building this, since it would be useful to have in the lab.

 https://www.circuitlab.com/circuit/7sx995/qpd-circuit/#postsave_access_control_settings

  8358   Tue Mar 26 17:32:30 2013   Chloe   Update   QPD/ECDL Progress

I built the summing/subtracting circuit on the breadboard and hooked it up to one of the other QPDs I found (image of setup attached). After a couple of hours of troubleshooting, I wasn't able to get it to read the correct signals when testing with a laser pointer. I will hopefully get this working in the next day or two.

I'm going to read up on ECDL material for Tara tonight and hopefully figure out what sort of laser diode we should purchase, since I'm meeting with Tara tomorrow.

Attachment 1: IMG-20130326-00245.jpg
  8381   Mon Apr 1 16:13:50 2013   Chloe   Update   QPD

Because we would like to get started on testing mount vibrations as soon as possible, I've been trying to get one of the other QPDs we found to work with the summing/subtracting circuit on a breadboard. I've been using a power supply that I think Jamie built 15 years ago... which seems to be broken as of today, since I no longer read any signal from it on an oscilloscope.

I tried using a different power supply, but I still can't read any change in signal from the QPD in any of the quadrants when using a laser pointer to shine light on it. I'll be working with Eric on this later this week. In the meantime, I'll try to come up with a shopping list for the nicer QPD circuit, which will be a longer-term side project.

  8403   Wed Apr 3 16:03:59 2013   Chloe   Update   QPD Voltage Regulators

The voltage regulators on the QPD breadboard seem to be having problems... yesterday Eric helped me debug my circuit and discovered that the +12V regulator was overheating, so we replaced it. Today I found that the -12V regulator was doing the same thing, so I replaced it as well. However, it's still overheating. We checked the whole voltage regulator setup yesterday, so I'm not sure what's wrong.

I've also noticed that not all the connections on the breadboard I've been using seem to work - I may search for a new breadboard in that case. I need to check that I'm not doing something stupid there.

  8511   Tue Apr 30 12:25:23 2013   Chloe   Update   QPD

Annalisa and I met yesterday and fixed the voltage regulator on the breadboard so the QPD circuit is working. We will meet with Eric on Thursday to determine the course of action with measurements.

  1797   Mon Jul 27 14:43:34 2009   Chris   Update   Photodetectors

I found two ThorLabs PDA55 Si photodetectors, specified to detect visible light from DC to 10 MHz, that I'm going to use from now on.  I don't know how low a frequency they will actually be good to.

  1810   Wed Jul 29 19:41:58 2009   Chris   Configuration   General   EUCLID-setup configuration change

David and I were thinking about changing the non-polarizing beam splitter in the EUCLID setup from 50/50 to 33/66 (see picture).  It serves as (a) a pickoff to sample the input power and (b) a splitter to send the returning beam to photodetector 2 (the beam then hits a polarizer, where half of it is lost).  By changing the reflectivity to 66%, less of the incoming power (1/3 instead of 1/2) would be "lost" at the reference photodetector 1, and on the return trip less would be lost at the polarizer (1/6 instead of 1/4).

Attachment 1: EUCLID.png
  1845   Thu Aug 6 17:51:21 2009   Chris   Update   General   Displacement Sensor Update

For the past week Dmass and I have been ordering parts and getting ready to construct our own modified version of EUCLID (figure).  Changes to the EUCLID design could include the removal of the first lens, the replacement of the cat's eye retroreflector with a lens focusing the beam waist on a mirror in that arm of the Michelson, and the removal of the linear polarizers.  A beam dump was added above the first polarizing beam splitter and the beam at Photodetector 2 was attenuated with an additional polarizing beam splitter and beam dump.  Another proposed alteration is to change the non-polarizing beam splitter from 50/50 to 33/66.  By changing the reflectivity to 66%, less power coming into the non-polarizing beam splitter would be "lost" at the reference detector (1/3 instead of 1/2), and on the return trip less power would be lost at the polarizing beam splitter (1/6 instead of 1/4).  Also, here's a noise plot comparing a few displacement sensors that are in use to the shot noise levels for the three designs I've been looking at.

Attachment 1: Actual_Sensor.png
Attachment 2: Sensitivity.png
  1846   Thu Aug 6 18:21:03 2009   Chris   Update   General   Displacement Sensor Update

Quote:

For the past week Dmass and I have been ordering parts and getting ready to construct our own modified version of EUCLID (figure).  Changes to the EUCLID design could include the removal of the first lens, the replacement of the cat's eye retroreflector with a lens focusing the beam waist on a mirror in that arm of the Michelson, and the removal of the linear polarizers.  A beam dump was added above the first polarizing beam splitter and the beam at Photodetector 2 was attenuated with an additional polarizing beam splitter and beam dump.  Another proposed alteration is to change the non-polarizing beam splitter from 50/50 to 33/66.  By changing the reflectivity to 66%, less power coming into the non-polarizing beam splitter would be "lost" at the reference detector (1/3 instead of 1/2), and on the return trip less power would be lost at the polarizing beam splitter (1/6 instead of 1/4).  Also, here's a noise plot comparing a few displacement sensors that are in use to the shot noise levels for the three designs I've been looking at.

 I thought slightly harder and I think that the beamsplitter stays. We will lose too much power on the first PD if we do that:

33/66:  Pwr @ PD2 = 2/3 * 1/3 * 1/2 = 1/9 Pin
        Pwr @ PD3 = 2/3 * 2/3 * 1/2 = 2/9 Pin

50/50:  Pwr @ PD2 = Pwr @ PD3 = 1/8 Pin

balancing them is probably better.
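
A quick numerical check of the fractions above (my reading of the factors: transmission into the arms, then the split on the way out, then half lost at the polarizer):

for name, t in [("33/66", 2.0/3), ("50/50", 1.0/2)]:
    pd2 = t * (1 - t) * 0.5
    pd3 = t * t * 0.5
    print(f"{name}: PD2 = {pd2:.4f} Pin, PD3 = {pd3:.4f} Pin")

This reproduces 1/9 and 2/9 of Pin for the 33/66 case, and 1/8 Pin at both PDs for 50/50.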

  1894   Wed Aug 12 23:45:03 2009   Chris   Update   General   Long range michelson

Today I set up the EUCLID long-range Michelson design on the SP table. It's the same as the setup posted earlier, but without the pickoff (at PD1), which can be added later, and with a few other minor changes (moved lenses, mirrors, PDs - nothing major).  I hooked up the two PDs to the oscilloscope and got a readout indicating more power hitting PD2 than PD3.

Attachment 1: Actual_Sensor.png
  1908   Fri Aug 14 23:45:14 2009   Chris   Update   General   Long Range Readout

The EUCLID-style Michelson readout is on the SP table now and is aligned.  See image below.  I took several power spectra with the plotter attached to the HP3563 (not sure if there's another way to get the data out) and I'm still waiting to calibrate (since dP/dL isn't constant while it isn't locked, this is taking a bit longer).  When the oscilloscope is put into XY mode (plotting the voltage at PD2 on x and the voltage at PD3 on y), a Lissajous figure appears, as in the first plot below.  It's offset and elliptical due to imperfections (noise, DC offset, etc.) but can ideally be used to calculate the L_ target mirror movement.  By rotating the first quarter wave plate by ~80.5 deg counter-clockwise (fast axis was originally at pi/8, now at 103 deg), I was able to turn the Lissajous figure from an ellipse into a more circular shape, which would ideally allow us to use a (much simpler) circular approximation in our displacement calculations.
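
Once the Lissajous figure is circular, the two PD signals are quadratures of the fringe phase, and the displacement follows from the unwrapped phase angle. A sketch of that calculation (assuming DC offsets subtracted and amplitudes equalized; lam defaults to the HeNe wavelength used here, and the lam/(4*pi) scale assumes one full fringe per half wavelength of mirror motion):

import numpy as np
def displacement(x, y, lam=633e-9):
    # x, y: offset-subtracted, amplitude-matched PD2/PD3 time series
    phi = np.unwrap(np.arctan2(y, x))   # phase angle around the circle [rad]
    return phi * lam / (4 * np.pi)      # mirror displacement [m]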

Attachment 1: Table_Setup.png
Attachment 2: Ellipse.jpg
Attachment 3: Circle.jpg
  10189   Fri Jul 11 22:28:34 2014   Chris   Update   CDS   c1auxex "Unknown Host"

Quote:

c1auxex has forgotten who it is.  Slow sliders for the QPD head were not responding, so I did a soft reboot from telnet. The machine didn't come back, so I plugged the RJ45-DB9 cable into the machine and looked at it through a minicom session.  When I key the crate, it gives me an error that it can't load a file, with the error code 0x320001.  Looking that up on a List of VxWorks error codes, I see that it is:    S_hostLib_UNKNOWN_HOST (3276801 or 0x320001)

I'm not sure how this happened.  I unplugged and replugged in the ethernet cable on the computer, but that didn't help.  Rana is going in to wiggle the other end of the ethernet cable, in case that's the problem.  EDIT:  Replacing the ethernet cable did not help.

Former elogs that are useful:  10025, 10015

EDIT:  The actual error message is:

boot device          : ei                                                      
processor number     : 0                                                       
host name            : chiara                                                  
file name            : /cvs/cds/vw/mv162-262-16M/vxWorks                       
inet on ethernet (e) : 192.168.113.59:ffffff00                                 
host inet (h)        : 192.168.113.104                                         
user (u)             : controls                                                
flags (f)            : 0x0                                                     
target name (tn)     : c1auxex                                                 
startup script (s)   : /cvs/cds/caltech/target/c1auxex/startup.cmd             
                                                                               
Attaching network interface ei0... done.                                       
Attaching network interface lo0... done.                                       
Loading...                                                                     
Error loading file: errno = 0x320001.                                          
Can't load boot file!! 

 We fixed this problem (at least for now) by adding c1auxex to the /etc/hosts file on chiara (following a hint from this page). The DNS setup might be the culprit here.
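
For reference, the added line pairs the target name with its address from the boot parameters above; it would look something like the following (illustrative, check the actual file on chiara):

192.168.113.59   c1auxex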

  10309   Thu Jul 31 18:54:03 2014   Chris   Frogs   elog   MicroSoft BingBot is attacking us

Quote:

 The ELOG was frozen, with this in the .log file:   

GET /40m/?id=1279&select=1&rsort=Type HTTP/1.1

Cache-Control: no-cache

Connection: Keep-Alive

Pragma: no-cache

Accept: */*

Accept-Encoding: gzip, deflate

From: bingbot(at)microsoft.com

Host: nodus.ligo.caltech.edu

User-Agent: Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)

  (hopefully there's a way to hide from the Bing Bot like we did from the Google bot)

Yesterday elog was excruciatingly slow, and bingbot was the culprit. It was slurping down elog entries and attachments so fast that it brought nodus to its knees. So I created a robots.txt file disallowing all bots, and placed it in the elog's scripts directory (which gets served at the top level). Today the log feels a little snappier -- there's now much less bot traffic to compete with when using it.

We might be able to let selected bots back in with a crawl rate limit, if anyone misses searching the elog on bing.
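
The blanket-disallow robots.txt is just two lines; the commented block shows the kind of rate-limited exception we could add later (Crawl-delay is honored by bingbot, though not by every crawler):

User-agent: *
Disallow: /

# Possible rate-limited exception:
# User-agent: bingbot
# Crawl-delay: 30
# Disallow: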

  10632   Wed Oct 22 21:06:33 2014   Chris   Update   DAQ   40m frames onto the cluster

Quote:

 Dan Kozak is rsync transferring /frames from NODUS over to the LDAS grid. He's doing this without a BW limit, but even so its going to take a couple weeks. If nodus seems pokey or the net connection to the outside world is too tight, then please let me and him know so that he can throttle the pipe a little.

The recently observed daqd flakiness looks related to this transfer. It appears to still be ongoing:

nodus:~>ps -ef | grep rsync
controls 29089   382  5 13:39:20 pts/1   13:55 rsync -a --inplace --delete --exclude lost+found --exclude .*.gwf /frames/trend
controls 29100   382  2 13:39:43 pts/1    9:15 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10975 131.
controls 29109   382  3 13:39:43 pts/1    9:10 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10978 131.
controls 29103   382  3 13:39:43 pts/1    9:14 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10976 131.
controls 29112   382  3 13:39:43 pts/1    9:18 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10979 131.
controls 29099   382  2 13:39:43 pts/1    9:14 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10974 131.
controls 29106   382  3 13:39:43 pts/1    9:13 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10977 131.
controls 29620 29603  0 20:40:48 pts/3    0:00 grep rsync

Diagnosing the problem:

I logged into fb and ran "top". It said that fb was waiting for disk I/O ~60% of the time (according to the "%wa" number in the header). There were 8 nfsd (network file server) processes running with several of them listed in status "D" (waiting for disk). The daqd logs were ending with errors like the following suggesting that it couldn't keep up with the flow of data:

[Wed Oct 22 18:58:35 2014] main profiler warning: 1 empty blocks in the buffer
[Wed Oct 22 18:58:36 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1098064730 to 1098064731

This all pointed to the possibility that the file transfer load was too heavy.

Reducing the load:

The following configuration changes were applied on fb.

Edited /etc/conf.d/nfs to reduce the number of nfsd processes from 8 to 1:

OPTS_RPC_NFSD="1"

(was "8")

Ran "ionice" to raise the priority of the framebuilder process (daqd):

controls@fb /opt/rtcds/rtscore/trunk/src/daqd 0$ sudo ionice -c 1 -p 10964

And to reduce the priority of the nfsd process:

controls@fb /opt/rtcds/rtscore/trunk/src/daqd 0$ sudo ionice -c 2 -p 11198

I also tried punishing nfsd with an even lower priority ("-c 3"), but that was causing the workstations to lag noticeably.

After these changes the %wa value went from ~60% to ~20%, and daqd seems to die less often, but some further throttling may still be in order.

  10868   Wed Jan 7 13:39:42 2015   Chris   Update   LSC   DARM phase budget

I think the dolphin and RFM transit times are double-counted in this budget. As I understand it, all IPC transit times are already built into the cycle time of the sending model. That is, the sending model is required to finish its computational work a little early, so there's time left to transmit data to the receivers before the start of the next cycle; otherwise you get IPC errors. (This is why the LSC models at the sites can't use the last ~20 usec of their cycle without triggering IPC errors. They have to allow that much time for the RFM to get their control signals down the arms to the end stations.)

For instance, the delay measurement in elog 9881 (c1als to c1lsc via dolphin) shows only the c1lsc model's own 61 usec delay. If the dolphin transfer really took an additional cycle, you would expect 122 usec.

And in elog 10811 (c1scx to c1rfm to c1ass), the delay is 122 usec, not because the RFM itself adds delay, but because an extra model is traversed.

Bottom line: there may still be some DARM phase unaccounted for. And it would definitely help to bypass the c1rfm model, as suggested in 9881.
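
For scale, a pure time delay tau costs 360*f*tau degrees of phase at frequency f. Using the delays quoted above:

for tau in (61e-6, 122e-6):       # measured IPC path delays [s]
    for f in (100.0, 1000.0):     # example frequencies [Hz]
        print(f"tau = {tau*1e6:3.0f} usec, f = {f:6.0f} Hz: {360*f*tau:5.2f} deg")

So an extra 61 usec cycle costs about 2.2 deg at 100 Hz and 22 deg at 1 kHz, which is why the double-counting matters for the budget.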

  10897   Tue Jan 13 18:47:20 2015   Chris   Configuration   Computer Scripts / Programs   instafoton setup

To use instafoton, right click an MEDM screen, open the Execute menu, and choose "Foton".  Then click on the EPICS channel of a filter module as displayed on the screen.

Here's how it was set up:

  • Install instafoton.py in /opt/rtcds/caltech/c1/scripts; edit paths to localize for the 40m
  • Add instafoton to the MEDM_EXEC_LIST environment variable, newly defined in /ligo/cdscfg/workstationrc.sh:
    export MEDM_EXEC_LIST="Edit this screen;medm &A &:Probe;probe &P &:Foton (Pick filter PV);/opt/rtcds/caltech/c1/scripts/instafoton.py &P &"
  10898   Tue Jan 13 23:17:57 2015   Chris   Frogs   Computer Scripts / Programs   medm time machine

After recompiling medm with a patch for dumping screens (attached), I added a time machine to the right-click Execute menu.  It's installed under /cvs/cds/caltech/users/wipf/src/medm_time_machine. Dependencies include the python CA server module (pcaspy) and the latest nds2-client 0.11.2.  These were also installed under my users directory, to avoid interfering with other tools.

Attachment 1: tm.png
Attachment 2: dump_medm_screen.patch
--- /ligo/apps/epics-3.14.12_long/extensions/src/medm/medm/utils.c.orig	2015-01-13 18:56:44.867720104 -0800
+++ /ligo/apps/epics-3.14.12_long/extensions/src/medm/medm/utils.c	2015-01-13 22:49:56.636820963 -0800
@@ -4156,6 +4156,37 @@
 	timeOffset = time900101 - time700101;
 }
 
+#if((2*MAX_TRACES)+2) > MAX_PENS
+#define MAX_COUNT 2*MAX_TRACES+2
+#else
+#define MAX_COUNT MAX_PENS
... 75 more lines ...
  11052   Thu Feb 19 23:23:52 2015   Chris   Update   CDS   Bad CDS behavior

The frontends have some paths NFS-mounted from fb. fb is on the ragged edge of being I/O bound. I'd suggest moving those mounts to chiara. I tried increasing the number of NFS threads on fb (undoing the configuration change I'd previously made here) and it seems to help with EPICS smoothness -- although there are still occasional temporal anomalies in the time channels. The daqd flakiness (which was what led me to throttle NFS on fb in the first place) may now recur as well.

Quote:

At about 10AM, the C1LSC frontend stopped reporting any EPICS information. The arms were locked at the time, and remained so for some hours, until I noticed the totally whited-out MEDM screens. The machine would respond to pings, but did not respond to ssh, so we had to manually reboot.

Soon thereafter, we had a global 15 min EPICS freeze, and have been in a weird state ever since. EPICS has come back (and frozen again), but the fast frontends are still wonky, even when EPICS is not frozen. Intermittently, the status blinkers and GPS time EPICS values will freeze for multiple seconds at a time, sporadically updating. Looking at a StripTool trace of an IOP's GPS time value shows a line with smooth portions for about 30 seconds, about 2 minutes apart. In between is totally jagged step-function behavior. C1LSC needed to be power cycled again; trying to restart the models is tough, because the EPICS slowdown makes it hard to hit the BURT button, as is needed for the model to start without crashing.

The DAQ network switch, and the martian switch inside, were power cycled, to little effect. I'm not sure how to diagnose network issues with the frontends. Using iperf, I am able to show hundreds of Mbit/s bandwidth between the control room machines and the frontends, but their EPICS is still totally wonky.

What can we do???

  12487   Tue Sep 13 16:16:28 2016   Chris   Omnistructure   VAC   vacuum interlock bypassed

(Steve, Chris)

The pumpdown had stalled because of some ancient vacuum interlock code that prevented opening the valve V1 between the turbo pump and the main volume.

This interlock [0] compares the channels C1:Vac-P1_pressure and C1:Vac-PTP1_pressure, neither of which is functioning at the moment. The P1 channel apparently stopped reading sometime during the vent, and contained a value of ~700 torr, while the PTP1 channel contained 0. So the interlock code saw this huge apparent pressure difference and refused to move the valve.

To bypass this check, we used caput to enter a pressure of 0 for P1.
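
The bypass itself was a one-line EPICS channel write; from memory it was along these lines:

caput C1:Vac-P1_pressure 0

With P1 and PTP1 then both reading 0, the differential-pressure check passed and V1 could be opened.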

[0] /cvs/cds/caltech/state/from_luna/VacInterlock.st

  13318   Mon Sep 18 17:30:54 2017   Chris   Update   CDS   FB wiper script

Attached is the version of the wiper script we use on the CryoLab cymac. It works with perl v5.20.2. Is this different from what you have?

Attachment 1: wiper.pl
#!/usr/bin/perl
use File::Basename;

print "\n" .  `date` . "\n";
# Dry run, do not delete anything
$dry_run = 1;

if ($ARGV[0] eq "--delete") { $dry_run = 0; }

print "Dry run, will not remove any files!!!\n" if $dry_run;
... 184 more lines ...
  13338   Thu Sep 28 06:35:44 2017   Chris   Update   LSC   DAC noise measurement (again)

Since you're monitoring two channels simultaneously, you could try subtracting them, as an alternative to carving out bandstops.

  1. Drive both channels with the same time series
  2. Tweak the filter module gains if needed to equalize the analog outputs
  3. Apply SR785 user math to subtract the two channels (or A-B mode of a single input, if you're not using that already?)
  4. Measure the residual, which should be the incoherent part containing DAC + coil driver noise

Subtraction can conceal certain annoying effects (like numerical noise or level crossing glitches) that remain coherent for two identical outputs. It might be worth experimenting with a differential offset or sinusoid, to try to break up that kind of coherence if it exists.
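
If it's useful, here is the equivalent offline version of the subtraction (all names and parameters illustrative), with a least-squares gain match standing in for the filter-module gain tweak:

import numpy as np
from scipy.signal import welch
def residual_asd(a, b, fs):
    # a, b: digitized outputs of the two channels driven with the same signal
    g = np.dot(a, b) / np.dot(b, b)   # least-squares gain match of b to a
    f, psd = welch(a - g * b, fs=fs, nperseg=int(fs))   # ~1 Hz resolution
    return f, np.sqrt(psd)            # ASD of the incoherent residual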

  16853   Sat May 14 08:36:03 2022   Chris   Update   DAQ   DAQ troubleshooting

I heard a rumor about a DAQ problem at the 40m.

To investigate, I tried retrieving data from some channels under C1:SUS-AS1 on the c1sus2 front end. DQ channels worked fine, testpoint channels did not. This pointed to an issue involving the communication with awgtpman. However, AWG excitations did work. So the issue seemed to be specific to the communication between daqd and awgtpman.

daqd logs were complaining of an error in the tpRequest function: error code -3/couldn't create test point handle. (Confusingly, part of the error message was buffered somewhere, and would only print after a subsequent connection to daqd was made.) This message signifies some kind of failure in setting up the RPC connection to awgtpman. A further error string is available from the system to explain the cause of the failure, but daqd does not provide it. So we have to guess...

One of the reasons an RPC connection can fail is if the server name cannot be resolved. Indeed, address lookup for c1sus2 from fb1 was broken:

$ host c1sus2
Host c1sus2 not found: 3(NXDOMAIN)

In /etc/resolv.conf on fb1 there was the following line:

search martian.113.168.192.in-addr.arpa

Changing this to search martian got address lookup on fb1 working:

$ host c1sus2
c1sus2.martian has address 192.168.113.87

But testpoints still could not be retrieved from c1sus2, even after a daqd restart.

In /etc/hosts on fb1 I found the following:

192.168.113.92  c1sus2

Changing the hardcoded address to the value returned by the nameserver (192.168.113.87) fixed the problem.

It might be even better to remove the hardcoded addresses of front ends from the hosts file, letting DNS function as the sole source of truth. But a full system restart should be performed after such a change, to ensure nothing else is broken by it. I leave that for another time.

  16855   Mon May 16 12:59:27 2022   Chris   Update   DAQ   DAQ troubleshooting

It looks like the RFM problem started a little after 2am on Saturday morning (attachment 1). It’s subsequent to what I did, but during a time of no apparent activity, either by me or others.

The pattern of errors on c1rfm (attachment 2) looks very much like one previously reported by Gautam (errors on all IRFM0 IPCs). Maybe the fix described in Koji's followup (involving hard reboots) will work again.

Attachment 1: timeseries.png
Attachment 2: err.png
  17172   Tue Oct 4 21:00:49 2022   Chris   Update   Computers   Failed takeover attempt with the new front ends

[Jamie, JC, Chris]

Today we made a failed attempt to take over the 40m hardware with the new front ends on the test stand.

As an initial test, we connected the new c1iscey to its I/O chassis using the OneStop fiber link. This went smoothly, so we tried to proceed with the rest of the system, which uncovered several problems. Consequently, we’ve reverted control back to the old front ends tonight, and will regroup and make another attempt tomorrow.

Status summary:

  • c1iscey worked on the first try
  • c1lsc worked, after we sorted out which of the two OneStop cables running to its rack we needed to use
  • c1sus2 sort of worked (its models have been crashing sporadically)
  • c1ioo had a busted OneStop cable, and worked after that was replaced
  • c1sus refused to work with the fiber OneStop cables (we tried several, including the known working one from c1ioo), but we jury-rigged it to run over a copper cable, after nudging the teststand rack a bit closer to the chassis
  • c1iscex refused to work with the fiber OneStop cables, and substituting copper was not an option, so we were stuck

There are various pathologies that we've seen with the OneStop interface cards in the I/O chassis. We don't seem to have the documentation for these cards, but our interpretive guesses are as follows:

  • When working, it is supposed to illuminate all the green LEDs along the top of the card, and the four next to the connector. In this state, you can run lspci -vt on the host, and see the various PLX/Contec/etc devices that populate the chassis.
  • When the cable is unplugged or bad, only four green LEDs illuminate on the card, and none by the connector. No devices from the chassis can be seen from the host.
  • On c1iscex and c1sus, when a fiber link is plugged in, it turns on all the LEDs along the top of the card, but the four next to the connector remain dark. We’re not sure yet what this is trying to tell us, but lspci finds no devices from the chassis, same as if it is unplugged.
  • Also, sometimes on c1iscex, no LEDs would illuminate at all (possibly the card was not seated properly).

Tomorrow, we plan to swap out the c1iscex I/O chassis for the one in the test stand, and see if that lets us get the full system up and running.

  17173   Thu Oct 6 07:29:30 2022   Chris   Update   Computers   Successful takeover attempt with the new front ends

[JC, Chris]

Last night’s CDS upgrade attempt succeeded in taking over the IFO. If the IFO users are willing, let’s try to run with it today.

The new system was left in control of the IFO hardware overnight, to check its stability. All looks OK so far.

The next step will be to connect the new FEs, fb1, and chiara to the martian network, so they’re directly accessible from the control room workstations (currently the system remains quarantined on the teststand network). We’ll also need to bring over the changes to models, scripts, etc that have been made since Tega’s last sync of the filesystem on chiara.

The previous elog noted a mysterious broken state of the OneStop link between FE and IO chassis, where all green LEDs light up on the OneStop board in the IO chassis, except the four next to the fiber link connector. This was seen on c1sus and c1iscex. It was recoverable last night on c1iscex, by fully powering down both FE computer and chassis, waiting a bit, and then powering up chassis and computer again. Currently c1sus is running with a copper OneStop cable because of the fiber link troubles we had, but this procedure should be tried to see if one of the fiber links can be made to work after all.

In order to string the short copper OneStop cable for c1sus, we had to move the teststand rack closer to the IO chassis, up against the back of 1X6/1X7. This is a temporary state while we prepare to move the FEs to their new rack. It hopefully also allows sufficient clearance to the exit door to pass the upcoming fire inspection.

At first, we connected the teststand rack’s power cables to the receptacle in 1X7, but this eventually tripped 1X7’s circuit breaker in the wall panel. Now, half of the teststand rack is on the receptacle in 1X6, and the other half is on 1X7 (these are separate circuits).

After the breaker trip, daqd couldn’t start. It turned out that no data was flowing to it, because the power cycle caused the DAQ network switch to forget a setting I had applied to enable jumbo frames on the network. The configuration has now been saved so that it should apply automatically on future restarts. For future reference, the web interface of this switch is available by running firefox on fb1 and navigating to 10.0.113.254.

When the FE machines are restarted, a GPS timing offset in /sys/kernel/gpstime/offset sometimes fails to initialize. It shows up as an incorrect GPS time in /proc/gps and on the GDS_TP MEDM screens, and prevents the data from getting timestamped properly for the DAQ. This needs to be looked at and fixed soon. In the meantime, it can be worked around by setting the offset manually: look at the value on one of the FEs that got it right, and apply it using sudo sh -c "echo CORRECT_OFFSET >/sys/kernel/gpstime/offset".

In the first ~30 minutes after the system came up last night, there were transient IPC errors, caused by drifting timestamps while the GPS cards in the FEs got themselves resynced to the satellites. Since then, timing has remained stable, and no further errors occurred overnight. However, the timing status is still reported as red in the IOP state vectors. This doesn’t seem to be an operational problem and perhaps can be ignored, but we should check it out later to make sure.

Also, the DAC cards in c1ioo and c1iscey reported FIFO EMPTY errors, triggering their DACKILL watchdogs. This situation may have existed in the old system and gone undetected. To bypass the watchdog, I’ve added the optimizeIO=1 flag to the IOP models on those systems, which makes them skip the empty FIFO check. This too should be further investigated when we get a chance.

  17180   Mon Oct 10 00:05:24 2022   Chris   Update   IOO   IMC WFS / MC2 SUS glitch

Thanks for pointing out that EPICS data collection (slow channels) was not working. I started the service that collects these channels (standalone_edc, running on c1sus), and pointed it to the channel list in /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini, so this should be working now.

controls@c1sus:~$ systemctl status rts-edc_c1sus
● rts-edc_c1sus.service - Advanced LIGO RTS stand-alone EPICS data concentrator
   Loaded: loaded (/etc/systemd/system/rts-edc_c1sus.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-10-09 23:30:15 PDT; 10h ago
 Main PID: 32154 (standalone_edc)
   CGroup: /system.slice/rts-edc_c1sus.service
           ├─32154 /usr/bin/standalone_edc -i /etc/advligorts/edc.ini -l 0.0.0.0:9900
           └─32159 caRepeater

Quote:

The IMC and the IMC WFS kept running for ~2days. 👍

I wanted to look at the trend of the IMC REFL DC, but the dataviewer showed that the recorded values are zero. And this slow channel is missing from the channel list.

I checked the PSL PMC signals (slow) as an example, and many channels are missing in the channel list.

So something is not right with some part of the CDS.

Note that I also reported that the WFS plot in the previous elog referred to above has values like 1e39

  17181   Mon Oct 10 10:14:05 2022   Chris   Update   CDS   CDS Upgrade remaining issues

List of remaining tasks to iron out various wrinkles with the upgrade:

The situation with the timing system is that we have not touched it at all in the upgrade, but have added a new diagnostic: Spectracom timing boards in each FE, to compare vs the ADC clock. So I expect that what we’re seeing now is not new, but may not have been noticed previously.

What we’re seeing is:

  • Timing status in the IOP statewords is red
  • The offset between FE timing and Spectracom is large: ~1000 µsec or greater. At the sites, this is typically much smaller, like 10 µsec
  • The offset is not stable (see first attachment). There are both excursions where it wanders off by tens of µsec and then comes back, as well as slips where it runs away by hundreds of µsec before settling down at a different level.

Possible issues stemming from unstable timing:

  • One of the timing excursions apparently glitched the ADC clock on c1ioo and c1sus2. Those IOPs are currently running with cycle time >15 µsec (see second attachment). This may mean the inputs on certain ADCs are delayed by an extra sample time, relative to the other ADCs
  • If the offset drifts too far, a discrepancy in the timestamps can glitch the IPCs and daqd
Attachment 1: timingslip.png
Attachment 2: adctimingglitch.png
  17182   Tue Oct 11 10:58:36 2022   Chris   Update   CDS   CDS Upgrade remaining issues

The original fb1 now boots from its new drive, which is installed in a fixed drive bay and connected via SATA. There are no spare SATA power cables inside the chassis, so we’re temporarily powering it from an external power supply borrowed from a USB to SATA kit (see attachment).

The easiest way to eliminate the external supply would be to use a 4-pin Molex to SATA adapter, since the chassis has plenty of 4-pin Molex connectors available. Unfortunately those adapters sometimes start fires. The ones with crimped connectors are thought to be safer than those with molded connectors. However, the safest course will probably be to just migrate the OS partition onto the 2 TB device from the hardware RAID array, once we’re happy with it.

Historic data from the past frames should now be available, as should NDS2 access.

Starting to look into the timing issue:

  • The GPS receiver (EndRun Tempus LX) seems to have low signal from its antenna: it’s sometimes losing lock with the satellites. We should find a way to log the signal strength and locked state of this receiver in EPICS and frames (perhaps using its SNMP interface).
  • There was a BNC tee attached to the 1PPS output of the receiver. One cable was feeding the timing system, and another one went somewhere into the PSL (if I traced it correctly?). I removed the tee and connected the timing system directly to 1PPS.
  • I updated the firmware on the Tempus LX (but don’t expect this to make any difference)
Attachment 1: PXL_20221010_212415180.jpg
  17192   Sat Oct 15 17:22:56 2022   Chris   Update   Optimal Control   IMC alignment controller testing

[Anchal, Radhika, Jamie, Chris]

We conducted a test of three alternative controllers for the IMC pitch DOFs on Friday. These were loaded into a new RTS model c1sbr, which runs on the c1ioo front end as a user-space program at 256 Hz. It communicates with the c1ioo controller via shared memory IPCs to exchange error and control signals.

The IMC maintained lock during the handoffs, and we were able to take one minute of data for each (circa GPS 1349807926, 1349808426, 1349808751; spectra attached), which we can review to assess the performance vs the baseline. (On the first trial, lock was lost at the end when the script tried to switch back to the baseline controller, because we did not take care to clear the integrators. On subsequent trials we did that part by hand.)

The method of setting up this test was convoluted, but now that we see it working, we can start putting in the merge requests to get the changes better integrated into the system. First, modifications were required to the realtime code generator, to get controllers running at the new sample rate of 256 Hz. (This was done in a separate filesystem image on fb1, /diskless/root.buster256, which is only loaded by c1ioo, so as to isolate the changes from the other front end machines.) The generated code then needed hand-edits to insert additional header files and linker options, so that the alternative controllers could be loaded from .so shared libraries. Also, the kernel parameters had to be set as described here, to allow the user-space controller to have a CPU core all to itself. Finally, isolating the core was done following the recipe in this script (skipping the parts related to docker, since we didn’t use it).
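
For the record, the core isolation amounts to a kernel command-line parameter plus pinning the process. A minimal version (core number illustrative; on a diskless FE like c1ioo this goes in the netboot kernel arguments rather than a local grub config):

isolcpus=3

after which the scheduler leaves core 3 idle and the user-space model can be pinned to it with taskset -c 3.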

Attachment 1: 180.png
Attachment 2: 066.png
Attachment 3: 202.png
  17193   Mon Oct 17 11:57:27 2022   Chris   Omnistructure   CDS   Timing system monitoring

I started a GPS timing monitor script, /opt/rtcds/caltech/c1/Git/40m/scripts/cds/tempusTimingMon/tempusTimingMon.py, which runs inside a docker container on optimus. It accesses the GPS receiver (EndRun Tempus LX) status via pysnmp, and creates the following EPICS channels with pcaspy (a minimal sketch of this pattern follows the channel list below):

  • C1:DAQ-GPS_TFOM “The Time Figure of Merit (TFOM) value ranges from 3 to 9 and indicates the current estimate of the worst case time error. It is a logarithmic scale, with each increment indicating a tenfold increase in the worst case time error boundaries.” (nominally 3, corresponding to 100 nsec)
  • C1:DAQ-GPS_STATE “Current GPS signal processor state. One of 0, 1 or 2, with 0 being the acquisition state and 2 the fully locked on state.”
  • C1:DAQ-GPS_SATS “Current number of GPS satellites being tracked.”
  • C1:DAQ-GPS_DAC “Current 16 bit, Voltage Controlled TCXO DAC value. Typical range is 20000 to 40000, where more positive numbers have the effect of raising the TCXO frequency.”
  • C1:DAQ-GPS_SNR “The current average carrier to noise ratio of all tracked satellites, in units of dB. Values less than 35 indicate weak signal conditions.”
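
A minimal sketch of the pcaspy pattern used (the SNMP query and 10-second cadence here are placeholders; see the actual script for the real values):

#!/usr/bin/env python
import time
from pcaspy import SimpleServer, Driver

prefix = 'C1:DAQ-'
pvdb = {'GPS_SATS': {'type': 'int'}}   # one channel shown; the script serves five

class GpsDriver(Driver):
    pass

server = SimpleServer()
server.createPV(prefix, pvdb)
driver = GpsDriver()
last = 0
while True:
    server.process(0.1)            # service Channel Access clients
    if time.time() - last > 10:    # placeholder polling cadence
        last = time.time()
        sats = 0                   # placeholder for the pysnmp query of the receiver
        driver.setParam('GPS_SATS', sats)
        driver.updatePVs()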

Attached is a 1-day trend of the above channels, along with the IRIG-B timing offsets from the IOPs. No big timing excursions or slippages were seen yet, although the c1sus IOP (FEC-20) timing seems to be hopping around by one IOP sample time (15 µsec).

Attachment 1: timing.png
  17226   Thu Nov 3 17:01:47 2022   Chris   Update   CDS   SLwebview fixed

The simulink webview generation cronjob on megatron was not working, apparently due to a matlab license problem. I switched it to matlab 2019a (the current CDS standard), and added a script to generate medm screens from the webview. This can help with keeping hand-made screens up to date, or be a substitute for hand-made screens if the model is simple.

  17262   Fri Nov 11 20:59:13 2022   Chris   Frogs   Computer Scripts / Programs   FSS SLOW servo not running

The problem with trends was due to the epics data collection process (standalone_edc) that runs on c1sus. When all the FEs were rebooted earlier this week, this process was started automatically, but for some reason it hasn’t been doing its thing and sending epics data to the framebuilder. I restarted it just now, and it’s working again. Until this problem is sorted out, we need to remember to check on this process after rebooting c1sus.

Quote:

I also tried to post a trend plot, but the minute trends don't yet reach the current date (!!!). They seem to have stopped recording a few days ago, so I guess the Framebuilder still needs some help, or it's tough to figure out things like when exactly the new SLOW servo stopped working.

  1694   Wed Jun 24 10:53:34 2009   Chris Zimmerman   Update   General   Week 1/2 Update

I've spent most of the last week doing background reading: Fourier transforms, SHM, E&M, and other physics that I didn't cover at school.  I also read a few chapters in Saulson, especially the chapter on noise and shot noise.  To get a better grip on what I'm going to be doing, I read through the polarization chapter in Hobbs' "Optics" text, mostly on wave plates, since that's a large part of this readout.  Since then I've been working up to calculating the shot noise, starting with the electric field throughout the new interferometer readout.

  1710   Wed Jul 1 10:56:42 2009   Chris Zimmerman   Update   General   Week 2/3 Update

I spent the last week working through the differences between a basic Michelson readout and the new one as displacement sensors.  The new one (with wave plates) ends with two differently polarized beams and should have better sensitivity.  I've also been going through noise/sensitivity calculations for each, although that hit a roadblock when I had to start the first SURF progress report, which has taken up most of my time since Saturday.

  1720   Wed Jul 8 11:05:40 2009   Chris Zimmerman   Update   General   Week 3/4 Update

The last week I've spent mostly working on calculating shot noise and other sensitivities in three Michelson sensor setups: the standard Michelson, the "long range" Michelson (with wave plates), and the proposed EUCLID setup.  The goal is to show that there is some inherent advantage to the latter two setups as displacement sensors.  This involved looking into polarization and optics a lot more, so I've been spending a lot of time on that as well.  For example, the displacement sensitivity/shot noise of the standard Michelson is around 6.805x10^-17 m/√Hz at L_ = 1x10^-7 m, as shown in the graph (NSD_Displacement.png).

  1750   Wed Jul 15 12:44:28 2009   Chris Zimmerman   Update   General   Week 4/5 Update

I've spent most of the last week working on finishing up the UCSD calculations, comparing it to the EUCLID design, and thinking about getting started with a prototype and modelling in MATLAB.  Attached is something on EUCLID/UCSD sensors.

Attachment 1: Comparison.pdf
  1779   Wed Jul 22 16:15:52 2009   Chris Zimmerman   Update   General   Week 5/6 Update

The last week I've started setting up the HeNe laser on the PSL table and making some basic measurements (beam waist, etc.) with the beam scan, shown on the graph.  Today I moved a few steering mirrors that Steve showed me from a table on the NW corner to the PSL table.  The goal setup is shown below, based on the UCSD setup.  Also, I found something that confused me in the EUCLID setup (a pair of quarter wave plates in one arm of their interferometer), so I've been working out how they arranged that to get the results they did.  I also finished calculating the shot noise levels in the basic and UCSD models; those are also shown below (at 633 nm, 4 mW), where the two phase-shifted elements (green/red) are the UCSD outputs in quadrature (the legend is difficult to read).

Attachment 1: Beam_Scan.jpg
Attachment 2: Long_Range_Michelson_Setup_1_-_Actual.png
Attachment 3: NSD_Displacement.png
  14305   Mon Nov 19 14:59:48 2018   Chub   Update   VAC   Vent 81

Vent 80 is nearly complete; the instrument is almost at atmosphere.  All four ion pump gate valves have been disconnected, though the position sensors are still connected, and all annulus valves are open.  The controllers of TP1 and TP3 have been disconnected from AC power.  VC1 and VC2 have been disconnected and must remain closed.  Currently the RGA is being vented through the needle valve; the RGA had been shut off at the beginning of the vent preparations.  VM1 and VM3 could not be actuated.  The condition status is still listed as Unidentified because of the disconnected valves.

  14350   Thu Dec 13 10:03:07 2018   Chub   Update   General   OMC chamber

Bob, Aaron, and I removed the door from the OMC chamber this morning.  Everything went well.
