Today I finished building the adding/subtracting circuit for the QPD and tested that the QPD could see a laser moving across its visual field for both pitch and yaw. It didn't seem to behave weirdly (saturate) at the edges, but I need to test this more carefully to be sure.
However, this circuit uses many op amps, which will cause problems when building the actual circuit to fit into the QPD box. I am trying to figure out how to do this with fewer op amps (both by using a quad op amp to amplify the signals from the QPD and by summing/subtracting the signals with a single op amp instead of three).
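For my own reference, the combinations the circuit needs to produce are just the standard quadrant sums/differences. A minimal Python sketch (the quadrant labels A-D and the normalization are my own convention for illustration, not taken from the schematic):

def qpd_combine(A, B, C, D):
    # A, B = top-left, top-right quadrant signals; C, D = bottom-left, bottom-right
    total = A + B + C + D              # sum output
    pitch = (A + B) - (C + D)          # top minus bottom
    yaw   = (B + D) - (A + C)          # right minus left
    return pitch / total, yaw / total  # normalize so the result doesn't depend on laser power

print(qpd_combine(1.0, 1.0, 1.0, 1.0))  # centered spot -> (0.0, 0.0)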
I finally got around to asking Steve to order more breadboards! I'm trying to determine what would be a good QPD to order for the final circuit, since we do not have any unmounted QPDs that aren't ancient. I'll read up on things I don't know enough about (namely op amps).
I have sketched out the circuit design for the QPD. However, it seems like even with the different op amp configuration I discussed with Eric on Friday, space will be a problem. It may be possible to squeeze everything onto a single circuit board that fits in the QPD box, but I think it's more likely that I will need two separate circuit boards, both mounted within the box: one that integrates the signals from the QPD, and another that adds/subtracts (this involves many resistors, which will take up a lot of space). I will continue to think about the best design for this.
I will try to have the circuit built in the next week or so, which may be difficult since I just started finals, which will take most of my time. I spent most of this week writing up an ECDL proposal for a SURF with Tara. I'll make up for whatever work I miss, since I'll be here over spring break and doing little besides working in lab.
This is the final version of the QPD circuit I'm going to build. After playing around with the spatial arrangement, it should fit into the box I was planning to use, although it will be a rather tight fit. The pitch, yaw, and summing circuits will be handled with a quad op amp. I'm planning to meet with Eric tomorrow to figure out the logistics of building things.
In the meantime, I'm reading about designing the ECDL for my summer project with Tara, who sent me several papers to read so we can talk on Wednesday.
In order to test the mount vibrations, I will likely try to make a different circuit work (with the summing/subtracting on an external breadboard), and designing an optimal circuit will be a side project. This is the circuit with the power supply Rana came up with, plus the design I had in mind for the rest of the circuit. In my free time, I will try to figure out what parts to get to reduce noise and will slowly work on building this, since it would be useful to have in the lab.
I built the summing/subtracting circuit on the breadboard and hooked it up to one of the other QPDs I found (image of setup attached). After a couple of hours of troubleshooting, I wasn't able to get it to read the correct signals when testing with a laser pointer... I will hopefully get this working in the next day or two...
I'm going to read up on ECDL stuff for Tara tonight and hopefully figure out what sort of laser diode we should purchase, since I'm meeting with Tara tomorrow.
Because we would like to get started on testing mount vibrations as soon as possible, I've been trying to get one of the other QPDs we found to work with the summing/subtracting circuit on a breadboard. I've been using a power supply that I think Jamie built 15 years ago... which seems to be broken as of today, since I no longer read any signal from it with an oscilloscope.
I tried using a different power supply, but I still can't read any change in signal with the QPD for any of the quadrants when using a laser pointer to shine light on it. I'll be working with Eric on this later this week. In the meantime, I'll try and come up with a shopping list for the nicer QPD circuit that'll be a longer term side project.
The voltage regulator on the QPD breadboard seems to be having problems... yesterday Eric helped me debug my circuit and discovered that the +12V regulator was overheating, so we replaced it. Today, I found that the -12V regulator was also doing the same thing, so I replaced it. However, it's still overheating. We checked all of the setup for the power regulators yesterday, so I'm not sure what's wrong.
I've also noticed that not all of the connections on the breadboard I've been using seem to work; I may search for a new breadboard if so. I need to check that I'm not doing something stupid there first.
Annalisa and I met yesterday and fixed the voltage regulator on the breadboard so the QPD circuit is working. We will meet with Eric on Thursday to determine the course of action with measurements.
I found two ThorLabs PDA55 Si photodetectors, specified to detect visible light from DC to 10 MHz, that I'm going to use from now on. I don't know how low a frequency they will actually be good down to.
David and I were thinking about changing the non-polarizing beam splitter in the EUCLID setup from 50/50 to 33/66 (see the attached picture). It serves as a) a pickoff to sample the input power and b) a splitter to send the returning beam to photodetector 2 (the beam then hits a polarizer, where half of it is lost). By changing the reflectivity to 66%, less of the power coming into it would be "lost" at reference photodetector 1 (1/3 instead of 1/2), and on the return trip less would be lost at the polarizer (1/6 instead of 1/4).
For the past week Dmass and I have been ordering parts and getting ready to construct our own modified version of EUCLID (figure). Changes to the EUCLID design could include the removal of the first lens, the replacement of the cat's eye retroreflector with a lens focusing the beam waist onto a mirror in that arm of the Michelson, and the removal of the linear polarizers. A beam dump was added above the first polarizing beam splitter, and the beam at photodetector 2 was attenuated with an additional polarizing beam splitter and beam dump. Another proposed alteration is to change the non-polarizing beam splitter from 50/50 to 33/66. By changing the reflectivity to 66%, less power coming into the non-polarizing beam splitter would be "lost" at the reference detector (1/3 instead of 1/2), and on the return trip less power would be lost at the polarizing beam splitter (1/6 instead of 1/4). Also, here's a noise plot comparing a few displacement sensors in common use to the shot noise levels for the three designs I've been looking at.
I thought about it slightly harder, and I think the beamsplitter stays; we would lose too much power on the first PD if we made that change:
33/66: Pwr @ PD2 = 2/3*1/3*1/2 = 1/9 Pin
Pwr @ PD3 = 2/3*2/3*1/2 = 2/9 Pin
50/50: Pwr @ PD2 = Pwr @ PD3 = 1/8 Pin
Balancing them is probably better.
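A quick numerical check of that bookkeeping (my own sketch; it just multiplies out the three factors in the lines above, parameterized by the beamsplitter transmission T):

def power_at_pds(T):
    # T = transmission of the non-polarizing BS, R = 1 - T
    R = 1 - T
    pd2 = T * R * 0.5   # in through the BS, reflected back toward PD2, then the 1/2 split
    pd3 = T * T * 0.5   # in through the BS, transmitted again on the return, then the 1/2 split
    return pd2, pd3     # fractions of Pin

print(power_at_pds(2/3))  # 33/66 case: ~ (1/9, 2/9)
print(power_at_pds(1/2))  # 50/50 case: (1/8, 1/8)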
Today I set up the EUCLID long-range Michelson design on the SP table; it's the same as the setup posted earlier, but without the pickoff (at PD1), which can be added later, and with a few other minor changes (moved lenses, mirrors, PDs - nothing major). I hooked up the two PDs to the oscilloscope and got a readout indicating that more power hits PD2 than PD3.
The EUCLID-style Michelson readout is on the SP table now and is aligned. See image below. I took several power spectra with the plotter attached to the HP3563 (not sure if there's another way to get the data out), and I still need to calibrate them (since the Michelson isn't locked, dP/dL isn't constant, so this is taking a bit longer). When the oscilloscope is put into XY mode (voltage at PD2 on the x axis and voltage at PD3 on the y axis), a Lissajous figure appears, as in the first plot below. It's offset and elliptical due to imperfections (noise, DC offset, etc.) but can ideally be used to calculate the target mirror movement L_. By rotating the first quarter wave plate ~80.5 deg counter-clockwise (the fast axis was originally at pi/8, now at 103 deg), I was able to turn the Lissajous figure from an ellipse into a more circular shape, which would ideally allow us to use a (much simpler) circular approximation in our displacement calculations.
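For later, here is a minimal Python sketch (with synthetic data) of how I'd turn the PD2/PD3 Lissajous data into displacement under the circular approximation. It assumes the DC offsets are known (in practice from fitting the figure) and that one full turn of the figure corresponds to lambda/2 of target mirror motion; the variable names, amplitudes, and offsets are made up for illustration:

import numpy as np

lam = 633e-9                                    # HeNe wavelength [m]
t   = np.linspace(0, 1, 5000)
dL_true  = 200e-9 * np.sin(2 * np.pi * 3 * t)   # pretend target mirror motion (+/- 200 nm)
phi_true = 4 * np.pi * dL_true / lam
x0, y0 = 0.1, 0.2                               # ellipse center (from fitting, in the real case)
v2 = x0 + 0.5 * np.cos(phi_true)                # stand-in for PD2: offset + in-phase fringe
v3 = y0 + 0.5 * np.sin(phi_true)                # stand-in for PD3: offset + quadrature fringe

phi = np.unwrap(np.arctan2(v3 - y0, v2 - x0))   # angle around the (circularized) figure
dL  = phi * lam / (4 * np.pi)                   # 2*pi of phase <-> lambda/2 of mirror motion
print(np.max(np.abs(dL - dL_true)))             # ~0 for this ideal, noiseless case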
c1auxex has forgotten who it is. Slow sliders for the QPD head were not responding, so I did a soft reboot from telnet. The machine didn't come back, so I plugged the RJ45-DB9 cable into the machine and looked at it through a minicom session. When I key the crate, it gives an error saying it can't load a file, with error code 0x320001. Looking that up in a list of VxWorks error codes, I see that it is S_hostLib_UNKNOWN_HOST (3276801 or 0x320001).
I'm not sure how this happened. I unplugged and replugged in the ethernet cable on the computer, but that didn't help. Rana is going in to wiggle the other end of the ethernet cable, in case that's the problem. EDIT: Replacing the ethernet cable did not help.
Former elogs that are useful: 10025, 10015
EDIT: The actual error message is:
boot device : ei
processor number : 0
host name : chiara
file name : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.59:ffffff00
host inet (h) : 192.168.113.104
user (u) : controls
flags (f) : 0x0
target name (tn) : c1auxex
startup script (s) : /cvs/cds/caltech/target/c1auxex/startup.cmd
Attaching network interface ei0... done.
Attaching network interface lo0... done.
Error loading file: errno = 0x320001.
Can't load boot file!!
We fixed this problem (at least for now) by adding c1auxex to the /etc/hosts file on chiara (following a hint from this page). The DNS setup might be the culprit here.
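For reference, the entry added to chiara's /etc/hosts should look something like the usual "address hostname" pair, with the address taken from the boot parameters above:

192.168.113.59   c1auxex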
The ELOG was frozen, with this in the .log file:
GET /40m/?id=1279&select=1&rsort=Type HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
(hopefully there's a way to hide from the Bing Bot like we did from the Google bot)
Yesterday elog was excruciatingly slow, and bingbot was the culprit. It was slurping down elog entries and attachments so fast that it brought nodus to its knees. So I created a robots.txt file disallowing all bots, and placed it in the elog's scripts directory (which gets served at the top level). Today the log feels a little snappier -- there's now much less bot traffic to compete with when using it.
We might be able to let selected bots back in with a crawl rate limit, if anyone misses searching the elog on bing.
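For reference, the robots.txt is just the standard blanket block. If we ever want to let a bot like bingbot back in with a throttle, it might look like the commented lines below (Crawl-delay is honored by Bing but isn't part of the core robots.txt standard):

User-agent: *
Disallow: /

# hypothetical future exception:
# User-agent: bingbot
# Crawl-delay: 30
# Disallow: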
Dan Kozak is rsync-transferring /frames from NODUS over to the LDAS grid. He's doing this without a bandwidth limit, but even so it's going to take a couple of weeks. If nodus seems pokey or the net connection to the outside world is too tight, then please let me and him know so that he can throttle the pipe a little.
The recently observed daqd flakiness looks related to this transfer. It appears to still be ongoing:
nodus:~>ps -ef | grep rsync
controls 29089 382 5 13:39:20 pts/1 13:55 rsync -a --inplace --delete --exclude lost+found --exclude .*.gwf /frames/trend
controls 29100 382 2 13:39:43 pts/1 9:15 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10975 131.
controls 29109 382 3 13:39:43 pts/1 9:10 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10978 131.
controls 29103 382 3 13:39:43 pts/1 9:14 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10976 131.
controls 29112 382 3 13:39:43 pts/1 9:18 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10979 131.
controls 29099 382 2 13:39:43 pts/1 9:14 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10974 131.
controls 29106 382 3 13:39:43 pts/1 9:13 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10977 131.
controls 29620 29603 0 20:40:48 pts/3 0:00 grep rsync
Diagnosing the problem:
I logged into fb and ran "top". It said that fb was waiting for disk I/O ~60% of the time (according to the "%wa" number in the header). There were 8 nfsd (network file server) processes running, with several of them listed in status "D" (waiting for disk). The daqd logs were ending with errors like the following, suggesting that it couldn't keep up with the flow of data:
[Wed Oct 22 18:58:35 2014] main profiler warning: 1 empty blocks in the buffer
[Wed Oct 22 18:58:36 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1098064730 to 1098064731
This all pointed to the possibility that the file transfer load was too heavy.
Reducing the load:
The following configuration changes were applied on fb.
Edited /etc/conf.d/nfs to reduce the number of nfsd processes from 8 to 1:
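(The config snippet itself didn't make it into this entry. In a conf.d-style NFS config the relevant knob is the nfsd server-thread count, i.e. a line roughly like OPTS_RPC_NFSD="1" where it had been "8" -- the exact variable name here is from memory and may differ.)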
Ran "ionice" to raise the priority of the framebuilder process (daqd):
controls@fb /opt/rtcds/rtscore/trunk/src/daqd 0$ sudo ionice -c 1 -p 10964
And to reduce the priority of the nfsd process:
controls@fb /opt/rtcds/rtscore/trunk/src/daqd 0$ sudo ionice -c 2 -p 11198
I also tried punishing nfsd with an even lower priority ("-c 3"), but that was causing the workstations to lag noticeably.
After these changes the %wa value went from ~60% to ~20%, and daqd seems to die less often, but some further throttling may still be in order.
I think the dolphin and RFM transit times are double-counted in this budget. As I understand it, all IPC transit times are already built in to the cycle time of the sending model. That is, the sending model is required to finish its computational work a little bit early, so there's time left to transmit data to the receivers before the start of the next cycle. Otherwise you get IPC errors. (This is why the LSC models at the sites can't use the last ~20 usec of their cycle without triggering IPC errors. They have to allow that much time for the RFM to get their control signals down the arms to the end stations.)
For instance, the delay measurement in elog 9881 (c1als to c1lsc via dolphin) shows only the c1lsc model's own 61 usec delay. If the dolphin transfer really took an additional cycle, you would expect 122 usec.
And in elog 10811 (c1scx to c1rfm to c1ass), the delay is 122 usec, not because the RFM itself adds delay, but because an extra model is traversed.
Bottom line: there may still be some DARM phase unaccounted for. And it would definitely help to bypass the c1rfm model, as suggested in 9881.
To use instafoton, right click an MEDM screen, open the Execute menu, and choose "Foton". Then click on the EPICS channel of a filter module as displayed on the screen.
Here's how it was set up:
export MEDM_EXEC_LIST="Edit this screen;medm &A &:Probe;probe &P &:Foton (Pick filter PV);/opt/rtcds/caltech/c1/scripts/instafoton.py &P &"
After recompiling medm with a patch for dumping screens (attached), I added a time machine to the right-click Execute menu. It's installed under /cvs/cds/caltech/users/wipf/src/medm_time_machine. Dependencies include the python CA server module (pcaspy) and the latest nds2-client 0.11.2. These were also installed under my users directory, to avoid interfering with other tools.
The frontends have some paths NFS-mounted from fb. fb is on the ragged edge of being I/O bound. I'd suggest moving those mounts to chiara. I tried increasing the number of NFS threads on fb (undoing the configuration change I'd previously made here) and it seems to help with EPICS smoothness -- although there are still occasional temporal anomalies in the time channels. The daqd flakiness (which was what led me to throttle NFS on fb in the first place) may now recur as well.
At about 10AM, the C1LSC frontend stopped reporting any EPICS information. The arms were locked at the time, and remained so for some hours, until I noticed the totally whited-out MEDM screens. The machine would respond to pings, but did not respond to ssh, so we had to manually reboot.
Soon thereafter, we had a global 15-minute EPICS freeze, and things have been in a weird state ever since. EPICS has come back (and frozen again), but the fast frontends are still wonky, even when EPICS is not frozen. Intermittently, the status blinkers and GPS time EPICS values freeze for multiple seconds at a time, updating only sporadically. Looking at a StripTool trace of an IOP's GPS time value shows smooth portions for about 30 seconds, about 2 minutes apart; in between, the behavior is a totally jagged step function. C1LSC needed to be power cycled again; trying to restart the models is tough, because the EPICS slowdown makes it hard to hit the BURT button, which is needed for the models to start without crashing.
The DAQ network switch and the martian switch inside were power cycled, to little effect. I'm not sure how to diagnose network issues with the frontends. Using iperf, I can show hundreds of Mbit/s of bandwidth between the control room machines and the frontends, but their EPICS is still totally wonky.
What can we do???
The pumpdown had stalled because of some ancient vacuum interlock code that prevented opening the valve V1 between the turbo pump and the main volume.
This interlock compares the channels C1:Vac-P1_pressure and C1:Vac-PTP1_pressure, neither of which is functioning at the moment. The P1 channel apparently stopped reading sometime during the vent, and contained a value of ~700 torr, while the PTP1 channel contained 0. So the interlock code saw this huge apparent pressure difference and refused to move the valve.
To bypass this check, we used caput to enter a pressure of 0 for P1.
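With the EPICS command-line tools, the bypass would have been a one-liner along the lines of:

caput C1:Vac-P1_pressure 0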
Attached is the version of the wiper script we use on the CryoLab cymac. It works with perl v5.20.2. Is this different from what you have?
Since you're monitoring two channels simultaneously, you could try subtracting them, as an alternative to carving out bandstops.
Subtraction can conceal certain annoying effects (like numerical noise or level crossing glitches) that remain coherent for two identical outputs. It might be worth experimenting with a differential offset or sinusoid, to try to break up that kind of coherence if it exists.
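A toy illustration of the point (my own Python sketch, not taken from the wiper/cymac code): two channels carrying an identically quantized signal cancel perfectly in the difference, so their level crossing glitches are invisible there; a tiny differential sinusoid breaks that coherence so the difference channel actually exposes the quantization. The step size and dither amplitude below are made up:

import numpy as np

fs  = 2048
t   = np.arange(fs * 10) / fs
sig = np.sin(2 * np.pi * 3 * t)                 # common signal on both channels
q   = 1e-3                                      # made-up quantization step

chA = np.round(sig / q) * q                     # channel A, quantized
chB = np.round(sig / q) * q                     # channel B, identical -> glitches coherent with A
dither = 1e-4 * np.sin(2 * np.pi * 0.1 * t)     # small differential sinusoid on B only
chBd = np.round((sig + dither) / q) * q

print(np.std(chA - chB))    # exactly 0: subtraction hides the coherent quantization glitches
print(np.std(chA - chBd))   # nonzero: the difference now shows quantization-scale structure (plus the known dither)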
I heard a rumor about a DAQ problem at the 40m.
To investigate, I tried retrieving data from some channels under C1:SUS-AS1 on the c1sus2 front end. DQ channels worked fine, testpoint channels did not. This pointed to an issue involving the communication with awgtpman. However, AWG excitations did work. So the issue seemed to be specific to the communication between daqd and awgtpman.
daqd logs were complaining of an error in the tpRequest function: error code -3/couldn't create test point handle. (Confusingly, part of the error message was buffered somewhere, and would only print after a subsequent connection to daqd was made.) This message signifies some kind of failure in setting up the RPC connection to awgtpman. A further error string is available from the system to explain the cause of the failure, but daqd does not provide it. So we have to guess...
One of the reasons an RPC connection can fail is if the server name cannot be resolved. Indeed, address lookup for c1sus2 from fb1 was broken:
$ host c1sus2
Host c1sus2 not found: 3(NXDOMAIN)
In /etc/resolv.conf on fb1 there was the following line:
Changing this to search martian got address lookup on fb1 working:
$ host c1sus2
c1sus2.martian has address 192.168.113.87
But testpoints still could not be retrieved from c1sus2, even after a daqd restart.
In /etc/hosts on fb1 I found the following:
Changing the hardcoded address to the value returned by the nameserver (192.168.113.87) fixed the problem.
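So the corrected hosts entry should now read something like the usual "address hostname" pair:

192.168.113.87   c1sus2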
It might be even better to remove the hardcoded addresses of front ends from the hosts file, letting DNS function as the sole source of truth. But a full system restart should be performed after such a change, to ensure nothing else is broken by it. I leave that for another time.
It looks like the RFM problem started a little after 2am on Saturday morning (attachment 1). It’s subsequent to what I did, but during a time of no apparent activity, either by me or others.
The pattern of errors on c1rfm (attachment 2) looks very much like this one previously reported by Gautam (errors on all IRFM0 ipcs). Maybe the fix described in Koji’s followup will work again (involving hard reboots).
I've spent most of the last week doing background reading: Fourier transforms, SHM, E&M, and other physics that I didn't cover at school. I also read a few chapters in Saulson, especially the chapter on noise and shot noise. To get a better grip on what I'm going to be doing, I read through the polarization chapter in Hobbs' "Optics" text, mostly on wave plates, since those are a large part of this readout. Since then I've been working up to calculating the shot noise, starting with the electric field throughout the new interferometer readout.
I spent the last week working a lot on the differences between a basic Michelson readout and the new one as a displacement sensor. The new one (with wave plates) ends with two differently polarized beams and should have better sensitivity. I've also been going through noise/sensitivity calculations for each, although that hit a roadblock when I had to start the first SURF progress report, which has taken up most of my time since Saturday.
The last week I've spent mostly working on calculating shot noise and other sensitivities in three Michelson sensor setups: the standard Michelson, the "long range" Michelson (with wave plates), and the proposed EUCLID setup. The goal is to show that there is some inherent advantage to the latter two setups as displacement sensors. This involved looking into polarization and optics a lot more, so I've been spending a lot of time on that as well. For example, the displacement sensitivity/shot noise of the standard Michelson is around 6.805*10^-17 m/rtHz at L_ = 1*10^-7 m, as shown in the graph.
I've spent most of the last week working on finishing up the UCSD calculations, comparing it to the EUCLID design, and thinking about getting started with a prototype and modelling in MATLAB. Attached is something on EUCLID/UCSD sensors.
The last week I've started setting up the HeNe laser on the PSL table and making some basic measurements (beam waist, etc.) with the beam scan, shown in the graph. Today I moved a few steering mirrors that Steve showed me from a table at the NW corner to the PSL table. The goal setup is shown below, based on the UCSD setup. Also, I found something that confused me in the EUCLID setup: a pair of quarter wave plates in one arm of their interferometer, so I've been working out how they arranged that to get the results that they did. I also finished calculating the shot noise levels in the basic and UCSD models, and those are also shown below (at 633 nm, 4 mW), where the two phase-shifted traces (green/red) are the UCSD outputs in quadrature (the legend is difficult to read).
Vent 80 is nearly complete; the instrument is almost at atmosphere. All four ion pump gate valves have been disconnected, though the position sensors are still connected, and all annulus valves are open. The controllers of TP1 and TP3 have been disconnected from AC power. VC1 and VC2 have been disconnected and must remain closed. Currently the RGA, which was shut off at the beginning of the vent preparations, is being vented through the needle valve. VM1 and VM3 could not be actuated. The condition status is still listed as Unidentified because of the disconnected valves.
Bob, Aaron, and I removed the door from the OMC chamber this morning. Everything went well.
The Auxiliary DAQ Chassis, or Acromag box, is now wired and ready for testing. I will be sorting the cables at the vacuum rack to make connection to the box easier.
Connected the manual gate valve status indicator to the Acromag box this morning. Labeled the temporary cable (a 50' 9-pin DSUB; I will order a properly sized cable shortly) and the panel RV2.
The foam in the cable tray wall passage had been falling on the floor in little bite-sized pieces, so I investigated and found a fiber cable that had been chewed/clawed through. I didn't find any droppings anywhere in the 40m, but I decided to bait an un-set trap and see if we'd find activity around it. There has been none so far. If there is still none tomorrow, I will move the trap and keep looking for signs of rodentia. At the moment, the trap is in a box in front of the double doors at the north end of the control room. Next it will be placed in the IFO room, up in the cable tray.
gautam: the fiber that was damaged was the one from the LSC rack FiBox to the control room FiBox. So no DAFI action for a bit...
The Central Plant building will be undergoing seismic upgrades in the near future. The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant. Project manager Eugene Kim has explained the work to me and also noted our concerns. He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.
Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab. If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at email@example.com.
The air handler on the roof of the 40M that supplies the electronics shop and computer room is out of operation until next week. Adding insult to injury, there is a strong odor of Liquid Wrench oil (a creeping oil for loosening stuck bolts that has a solvent additive) in the building. If you don't truly need to be in the 40M, you may want to wait until the environment is back to being cool and "unscented". On a positive note, we should have a quieter environment soon!
The 40M jib cranes all passed inspection!
It's nice and compact, and the cost of new 15-pin DSUB cables shouldn't be a factor here. What does the 15p cable connect to?
Still removing old cable, terminal blocks and hardware. Once new strain reliefs and cable guides are in place, I will need to disconnect cables and reroute them. Please let me know dates and times when that is not going to interrupt your work!
The HVAC people replaced a valve and repaired the pneumatic plumbing on the roof air handler. Temperature has been stable during the day since Thursday. If anyone is in the control room during the evening, please make a note of the temperature.
Four nitrogen cylinders replaced the empties in the rack at the west entrance. Additionally, Airgas will now deliver only once a week. Let me know via email or text when there are four empties in the rack and I'll order the next round.
The new nitrogen cylinders were delivered to the rack at the west entrance. We only get one Airgas delivery per week during the stay-at-home order, but so far they've not let us down.
The four 4x25DSUB and single 8x25DSUB feedthrough flanges have arrived and will be picked up from the dock and brought to the 40M lab.
Ordered from CDW on 11/16, on PO# S492940: the high-voltage Tripp Lite SMART5000XFMRXL for TP-1. It should arrive in about a week.
Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?
I can't find anything in the manual that describes the nature of the FAULT message. In fact, it's not mentioned at all. If the unit detects a fault at its output, I would expect a bit more information. This unit also has a programmable level of input error protection, usually set at 100%. Still, there is no indication in the manual whether an input issue would be described as a fault; that usually means a short or lifted ground at the output.
Yikes! That's ONE filter. I'll get another from storage.
When adjusting the blower speed, give the blower at least 30 seconds to speed up or slow down to the set speed. The flywheel effect of the big motor armature and blower mass requires time to follow the control current. Note the taller Flanders HEPA filters. These and the new intake filters should keep the PSL air clean for a long time!
The new HEPA speed controllers are attached at the middle of the HEPA unit (not at the edge of the unit)... (Attachment 1)
You still need a step stool to reach the knob, and a ladder for a more precise setting.
We still don't know the optimal speed of the nominal IFO operation. For now, the HEPAs are running at the max speed (Attachment 2).
Once we know the optimal setting, we will mark the knobs so that the setting can be checked from just the step stool.