Below 100 Hz, I suppose this means that the X arm is now limited by the quadrature sum of the X and Y arm seismic noise.
Found the PMC unlocked for many hours, so I relocked it. The IMC relocked by itself, but the input switch seems to be flickering too fast. Also, the Keep Alive bit is not flashing.
I'd suggest clamping it and moving it to the flow bench so you can inspect it with a bright light. Then remove the wire and inspect the standoff, but hurry up with getting it into the soak bath so you can start cleaning the other ones.
I wonder if we're really sure that it's a mechanical problem with ETMX.
Gautam tells me that the local damping was always ON when looking for the jumps. This means that the coil driver was still hooked up and we can't rule out glitches in the DAC or the coil driver.
The UL OSEM shows the biggest movement (10 microns). The LR shows the second most (6-7 microns). The others are ~2x smaller. So it's consistent with a voltage change on UL.
Is this consistent with a slip in one of the wire standoffs? I think no.
Comments on the schematic:
Looks pretty great. However, there are two problems:
1) Some of the MEDM screens don't show the time. You can fix this by editing the screens and copy/pasting from screens where the time display works.
2) The snapshot script seems to not grab the full MEDM screen sometimes.
These are not a big deal, so you can get the microphones working first and we can take care of this afterwards.
Steve, please look into getting some plated magnets (either SmCo or NdFeB is OK) of this size so that we can install cleaner magnets by the next vent.
For the rest of this vent, at least, we need to start using the EQ stops more frequently. Whenever the suspension is being worked on, clamp the optic. When you need it to be free, back off the stops, but only by a few hundred microns - never more than a millimeter.
Best to take our time and use the stops often. With all the magnets being broken off, it's not clear how many partially cracked glue joints we have on dumbbells which didn't completely fall off.
I'd recommend replacing the wire and grinding down the clamp to prevent cutting the wire. Since we have almost never replaced clamps, many of them probably have grooves from the wires and can make unpleasant cuts. Better safe than sorry in this case.
I cleaned up the south Electronics bench today.
The other two, as well as several of the desks, are in some chaotic state of degradation. Please clean up your areas and put away projects which do not need to remain staged for several months. Try to eliminate "that's not mine" and "I don't know whose that is" from your vocabulary. Fight back against entropy!
I found the DAFI screen as a button inside the LSC screen - I think it's more logically found from the sitemap, so I'll move it into there as well.
1) I have added the status summary of the DAFI block to the main FE status overview screen in the c1lsc column. (attachment 1)
2) I have edited all the Kissel matrix buttons and given them appropriate labels. (attachment 2)
Gautam and I noticed a 60 Hz + harmonics hum which comes from the DAFI. It's the noisiest thing in the control room. It goes away when we unplug the fiber coming into the control room FiBox receiver, so it's not a ground loop on this end. Probably a ground loop at the LSC rack.
Upon further investigation, we noticed that the FiBox at the LSC rack had its gain turned all the way up to +70 dB. This seemed like too much; we reduced it to ~20 dB (?) so that we could use more of the DAC range. Also, it is powered by an AC/DC converter plugged into the LSC rack power strip. We cannot use this for a permanent install - we must power the FiBox from the same power supplies as are used for the LSC electronics. Probably we'll have to make a little box that takes the fused rack power of 15 V and turns it into +12 V with a regulator (max current of 0.15 A), making sure that the FiBox doesn't pollute the rest of the LSC electronics with its nasty internal DC-DC converters.
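The regulator's heat load is small; a quick check (the 0.15 A is the max current quoted above):

# +15 V fused rack power -> +12 V linear regulator for the FiBox
V_in, V_out, I_max = 15.0, 12.0, 0.15   # volts, volts, amps
P_diss = (V_in - V_out) * I_max         # power burned in the regulator
print(P_diss)                           # ~0.45 W: fine for a TO-220 part
                                        # like an LM7812 with a small heatsink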
We also put a high pass in the output filter banks of DAFI. For the PEM channels we put in a 60 Hz comb. We then routed the Y-end Guralp in through the boxes and out the output, mostly bypassing the frequency shifting and AGC. It seems that there is still a problem with GUR2.
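For reference, here is a minimal offline version of that kind of 60 Hz comb in Python (the real filter was made in foton; the sample rate here is just an assumed value):

import numpy as np
from scipy import signal

fs = 2048                                   # assumed channel sample rate, Hz
x = np.random.randn(fs * 60)                # placeholder for a PEM time series

# cascade of notches at 60 Hz and its harmonics up to Nyquist
sos = np.array([np.hstack(signal.iirnotch(f0, Q=30.0, fs=fs))
                for f0 in np.arange(60.0, fs / 2, 60.0)])
y = signal.sosfiltfilt(sos, x)              # zero-phase offline application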
Does anyone know which one is GUR1 and which one is GUR2? I don't remember the result of the Guralp cable switching adventures - maybe Koji or Steve does. According to the trend, it was totally dead before March, and in March it became alive enough for us to see ~30 ADC counts of action, so way smaller than GUR111 or GUR Snoopy or whatever it's called.
Usual Ubuntu apt-get upgrades; long delayed but now happening.
After some cable swapping, we now have both Guralp seismometers running, and the time series and spectra look similar to each other and mostly healthy.
Ben and I took a look at the whole situation today. Ben had nicely fixed the D-sub end of the EX cable (the EY one is still just a sad joke). After installing this newly fixed cable, we still saw no signals. There was some confusion in the control room about using the MEDM displays to diagnose seismometers: flickering MEDM values cannot be used for this. It would be like checking a pizza box temperature to determine if the pizza is any good.
Tomorrow, Lydia is going to change all of the labels and channel names. The new names will be EX & EY to prevent this kind of huge waste of time with channel name swapping. That means no more illegal names with the label maker, Steve.
From the spectrum you can see that the EX seismometer (GUR2) is still not centered, or at least it's oscillating at 245 Hz for some reason. This should go away after some power cycling or recentering using the magic wand.
I noticed some anomalies in the mechanical setups at the ends:
Seems too good to be true. Maybe you're overfitting? Please put all the traces on one plot and let us know how you do the parameter setting. You should use half the data for training the filter and the second half for doing the subtraction.
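To be concrete about the split, here is a minimal numpy sketch (the channel data and tap count are placeholders):

import numpy as np

# placeholder data; in practice fetch the witness and target channels via NDS
wit = np.random.randn(2**16)
tgt = 0.5 * np.roll(wit, 3) + 0.1 * np.random.randn(2**16)

def fir_wiener(wit, tgt, ntaps=256):
    # least-squares FIR fit: tgt[n] ~ sum_k h[k] * wit[n-k]
    X = np.column_stack([np.roll(wit, k) for k in range(ntaps)])
    X[:ntaps] = 0                            # drop wrapped-around samples
    h, *_ = np.linalg.lstsq(X, tgt, rcond=None)
    return h

half = len(wit) // 2
h = fir_wiener(wit[:half], tgt[:half])       # train on the first half only
pred = np.convolve(wit[half:], h)[:half]     # predict on the second half
resid = tgt[half:] - pred                    # quote THIS residual spectrum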
Not really true that it passed. That's just an arbitrary margin. Best to throw away all the old wire. We have no quantitative estimate of what the real torque should be. It's just feelings.
The wire will arrive in 1-2 weeks. It is a new production run. Brad Snook of California Fine Wire was surprised that we are still using the 13-year-old wire. Oxidation is an issue with steel wire that contains iron.
He would not quote a shelf life for it. He recommended checking its strength before use. It passed with a safety factor of 2 just recently.
In the future we'll store the new spool in an oxygen-free nitrogen environment.
My box has been suspended in the PSL using surgical tubing, and it has been connected to C1:PEM-MIC_1 (C17) with a BNC. I made a braided power cable as well but it turned out to be slightly too short... Once this is fixed, everything should be ready and we can see if it's working correctly. I also set up a new tab on the summary pages for this channel:
This data is back from when I had my solderless breadboard running near MC2. I'll add this tab to the real pages once the box is working (which could be a while since I'm gone for a month). Let me know if you see any issues with either the tab or the box/cables.
On the bounce roll balancing:
Recall that back in 2006, the main issue was not with the bounce mode coupling into the OSEMs but instead with too much cross-coupling between the damping loops themselves:
Old elogs from Osamu (reader / readonly). Osamu will be here in a couple weeks and can try to explain what he was doing back then.
The problem was that without a good input matrix, the low frequency motion of the suspension point was dominated by the damping noise rather than the seismic noise. The bounce mode is a nice indicator of whether the OSEM is oriented up/down, but it's not the most important thing. More important is that the magnet is in the actual LED beam, not just at the apparent center of the OSEM.
Then we should be able to fix things by running the diagonalization script and correcting the input matrix (which depends somewhat on the DC alignment).
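Schematically, the diagonalization does something like this (an illustrative sketch only, not the actual script):

import numpy as np

def input_matrix(osem_data, fs, mode_freqs):
    # osem_data: (5, N) free-swing time series from UL, UR, LR, LL, SD
    # mode_freqs: POS, PIT, YAW, SIDE eigenfrequencies read off the spectra
    t = np.arange(osem_data.shape[1]) / fs
    A = np.zeros((len(mode_freqs), osem_data.shape[0]), complex)
    for i, f0 in enumerate(mode_freqs):
        # demodulate each OSEM signal at the mode frequency
        A[i] = osem_data @ np.exp(-2j * np.pi * f0 * t) / len(t)
    # phase each mode so its OSEM coefficients are real, then invert:
    S = np.real(A * np.exp(-1j * np.angle(A[:, [0]]))).T   # sensors x modes
    return np.linalg.pinv(S)                               # modes x sensors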
In November of 2010, Valera Frolov (LLO) investigated our satellite amplifiers and made some recommendations about how to increase the SNR.
In light of the recent issues, we ought to fix up one of the spares into this state and swap it in for the ITMY's funky box.
The sat amp schematic is (D961289). It has several versions. Our spare is labeled as version D (not a choice on the DCC page).
Edit (Sep 6): The purpose of the Radd resistors is to lower the series resistance and thus increase the current through the LED. The equivalent load becomes 287 Ohms. Presumably, this in series with the LED is what gives the 25 mA stated on the schematic. This implies the LED has an effective resistance of 100 Ohms at this operating point. Why 3 resistors? To distribute the heat load. The 1206 SMD resistors are usually rated for 1/4 W. Better to replace them with 287 Ohm metal film resistors rated for 1 W, if Steve can find them online.
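For what it's worth, the per-resistor heat load comes out the same whether the three Radd's are in series or in parallel (a back-of-envelope check using the schematic's 25 mA):

I_led = 25e-3                 # A, from the schematic
R_eq  = 287.0                 # Ohm, equivalent of the 3 Radd's
P_tot = I_led**2 * R_eq       # ~0.18 W total
P_ea  = P_tot / 3             # ~60 mW per resistor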
The attached PDF shows the output noise of the satellite amp. This was calculated using 'osempd.fil' in the 40m/LISO GitLab repo.
The mean voltage output is ~1 Vdc, which corresponds to a current with a shot noise level of 100 nV/rHz on this plot. So the opamp current noise dominates below 1 Hz as long as the OSEM LED output is indeed quantum limited down to 0.1 Hz. Sounds highly implausible.
To convert into meters, we divide by the OSEM conversion factor of ~1.6 V/mm, so the shot noise equivalent would be ~1e-10 m/rHz above 1 Hz.
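Checking those numbers in Python (the transimpedance is inferred from the quoted shot noise level, not read off the schematic):

import numpy as np

e, Vdc = 1.602e-19, 1.0               # electron charge; mean output, V
# sqrt(2*e*(Vdc/R)) * R = 100 nV/rHz  =>  R = (100e-9)**2 / (2*e*Vdc)
R = (100e-9)**2 / (2 * e * Vdc)       # ~31 kOhm implied transimpedance
x_eq = 100e-9 / 1.6e3                 # (V/rHz)/(V/m): ~6e-11, i.e. ~1e-10 m/rHz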
After adding the sat amp to the 40m DCC tree (D1600348), I notice that not only is the PD readout not built for low noise, but neither is the LED drive. The noise should be dominated by the voltage noise of the LT1031 voltage reference. This has a noise of ~500 nV/rHz at 1 Hz. That corresponds to an equivalent current noise through the LED of 25 mA * (500e-9 / 10) ~ 1 nA/rHz, or ~45 nV/rHz at the sat amp output. This would be OK as long as everything behaves ideally. BUT, we have thick film (i.e. black surface mount) resistors on the LED drive, so we'll have to measure it to make sure.
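Worked through, assuming the LED current simply scales with the 10 V reference:

vn_ref, V_ref, I_led = 500e-9, 10.0, 25e-3   # V/rHz at 1 Hz; V; A
i_n   = I_led * vn_ref / V_ref    # ~1.25 nA/rHz LED current noise
v_out = 1.0 * vn_ref / V_ref      # fractional noise x 1 Vdc PD output:
                                  # ~50 nV/rHz, i.e. the ~45 nV/rHz above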
Also, why is the OSEM LED included in the feedback loop of the driver? It means that disconnecting the cable from the sat amp probably makes the driver go unstable. I think one concept is that including the device in the feedback loop makes it so that any EMI picked up in the cabling, etc. gets cancelled out by the opamp. But this then requires that we test each driver to make sure it doesn't oscillate when driving the long cable.
If we have some data with one of the optics clamped and the open light hitting the PD, or with the OSEMs removed and sitting on the table, that would be useful for evaluating the end-to-end noise of the OSEM circuit. It seems like we probably have that due to the vent work, so please post the times here if you have them.
I looked at the PRM free-swing spectra. The modes look like they're at the right frequencies, so this points more and more towards an LED or satellite box issue.
Some of the frequencies have changed between the 2011 in-vac measurement and our 2016 in-air measurement, but that seems within usual parameters.
In the morning, Steve will start opening the north BS door so that we can enter to inspect the PRM LR OSEM.
For the ITMY, I squished together the cables in the 'Cable Interface Board' which lives in the rack. This thing takes the 64-pin IDC from the satellite module and converts it into 2 D-sub connectors to go to the PD whitening board and the coil driver board. Let's see if the ITMY OSEM glitches change character overnight.
For the PRM, I aligned it until the arm flashes were maximized and the REFL camera showed a centered spot with dips happening during the arm pops. AS port was more messy since the Michelson alignment wasn't perfect, but the spots were both near the center of the cam and the SRM alignment maximized the wangy fringiness of the image as well as the angry cat meow sounds that the full IFO makes as heard through the DAFI (listening to POX).
On Monday, Osamu should be back and can help with doors and then alignment recovery and locking.
All is not lost. I've stuck and unstuck optics around a half dozen times. Can you please post the zoomed-in time series (not trend) from around the time it got stuck? Sometimes the bias sliders have to be toggled to make the bias correct. From the OSEM trend it seems like it got a large yaw bias. May also try reseating the satellite box cables and the cable from the coil driver to the cable breakout board in the back of the rack.
susaux is responsible for turning on/off the inputs to the coil driver, but not the actual damping loops. So rebooting susaux only does the same as turning the watchdogs on/off so it shouldn't be a big issue.
Both before and after would be good. We want to see how much bias and how much voltage from the front ends were applied. c1susaux could have put in a huge bias, but NOT a huge force from the damping loops. But I've never seen it put in a huge bias, and there's no way to prevent this anyway without disconnecting cables.
I think it's much more likely that it's a little stuck due to static charge on the rubber EQ stop tips, and that we can shake it loose with the damping loops.
Rana is suspicious. We had the arms locked before pumpdown with beams on the Transmon PDs. If they're off now, the beams must be far off on the mirrors. Try A2L to estimate the spot positions before walking the beams too far.
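For reference, the A2L idea in a sketch (channel names and calibration factors are illustrative): dither the optic in angle at a line frequency and demodulate the arm signal; the coherent amplitude is proportional to how far the beam sits from the optic's center of rotation.

import numpy as np

def a2l_demod(arm_sig, fs, f_dith):
    # complex amplitude of the dither line in the arm signal
    t = np.arange(len(arm_sig)) / fs
    return 2 * np.mean(arm_sig * np.exp(-2j * np.pi * f_dith * t))

# spot offset ~ |demod| / (dither amplitude x optical gain); compare the
# PIT and YAW lines on each test mass to map out the spot positions.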
With the WFS and OL, we never have figured out a good way to separate pit and yaw. Need to figure out a reference for up/down and then align everything to it: quad matrix + SUS output matrix
Been non-functional for 3 weeks. Anyone else notice this? Images missing since ~Sep 21.
I say just fix the clipping. Don't worry about the PRM OSEM filters. We can do that next time when we put in the ITM baffles. No need for them on this round.
These old specs are not so bad. But we now want to get replacements for the TRX, TRY, and PSL viewports with R < 0.1% at 532 and 1064 nm.
I don't know of any issues with keeping BK-7 as the substrate.
I don't think the loss of 25 ppm is outrageous. It's just surprisingly good. The SIS model predicted numbers more like 1 ppm / mirror, taking into account just the phase map and not the coating defects.
However, we should take into account the losses in the DRMI to be more accurate: AR coating reflectivities, scatter loss on those surfaces, as well as possible clipping around the BS or some other optics.
We don't need to record any of the AIOut channels, the OL channels (since we record them fast), or the _MEAN channels (I think they must be CALC records or just bogus).
I did apt-get update and then apt-get upgrade on optimus. All systems are nominal.
Indeed. I suggest discussing with Joe B. I believe we should use a dedicated camera network to get the camera signals from the ends and corner all into one machine. Do not use the main CDS FE network for this since it might produce weird collisions. How about making a diagram, posting it to the elog, and sending the link to Joe?
It may be a good idea to leave the gigecam interfacing up to a dedicated machine.
In the Generic Pentek interface board, which is used to take in the analog 2-pin LEMO cable from the MC Servo board, there is some analog whitening before the signal is sent into the ADC.
There are jumpers in there to set whether it is 0, 1, or 2 stages of 150:15 (z:p) whitening.
more U4 gain, less U5 gain
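For one such stage, the shape can be sketched with scipy (written here in the whitening sense, i.e. zero below pole so the gain rises 20 dB between the corners; cascade the stage once per jumpered setting):

import numpy as np
from scipy import signal

fz, fp = 15.0, 150.0
stage = signal.ZerosPolesGain([-2*np.pi*fz], [-2*np.pi*fp], fp/fz)
w, mag, ph = signal.bode(stage, np.logspace(0, 4, 500) * 2*np.pi)
# mag (dB): ~0 below 15 Hz, ~+20 above 150 Hz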
Mach Zucker on how to do ringdowns: https://dcc.ligo.org/LIGO-T900007
As it turns out, it's not as old as I thought. Jenne and I reworked these in 2014-2015. The QPD whitening is the same as the IMC WFS whitening, so we can just repeat those fixes here for the IMC.
Rana pointed out that this modification (removal of the 900 Ohm resistors) leaves the input impedance as low as 100 Ohms.
As the OP284 can drive only up to 10 mA, the input can span only +/-1 V, with some nonlinearity.
Rather than reinstalling the 900 Ohms, Rana will investigate the old fix for the whitening filter, which may involve removing the AD602s.
Until a solution is found, the IMC WFS project is suspended.
At Hanford, there is an issue with laser jitter turning into an IMC error point noise injection. I wonder if we can try taking the acoustic-band WFS signal and adding it to the MC error point as a digital FF. We could then look at the single-arm error signal to see if this makes any improvement. There might be too much digital delay in the WFS signals if the clock rate in the model is too low.
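Before building anything, an offline coherence check would tell us whether it's worth it (a sketch; the channels, rate, and delay number are guesses):

import numpy as np
from scipy import signal

fs = 2048                                  # assumed DQ rate of both channels
wfs = np.random.randn(fs * 600)            # placeholder: acoustic-band WFS
arm = np.random.randn(fs * 600)            # placeholder: single-arm error
f, Cxy = signal.coherence(wfs, arm, fs=fs, nperseg=4*fs)
# only bands with coherence near 1 can be cancelled; a total digital delay
# tau costs 360*f*tau degrees of phase, so ~1 ms of model + transport delay
# already spoils the cancellation above ~100 Hz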
The WFS gains are supposedly maximized already. If we remotely try to increase the gain, the two MAX4106 chips in the RF path will oscillate with each other.
We should insert a bi-directional coupler (if we can find some LEMO to SMA converters) and find out how much actual RF is getting into the demod board.
MC unlocked, Autolocker waiting for c1iool0 EPICS channels to respond. c1iool0 was responding to ping, but not to telnet. Keyed the crate and it's coming back now.
There are many mentions of c1iool0 in the recent past, so it seems like its demise must be imminent. Good thing we have an Acromag team on top of things!
Also, the beam on WFS2 is too high and the autolocker is tickling the Input switch on the servo board too much: this is redundant / conflicting with the MC2 tickler.
Dead again. No outputs for the past month. We really need a cron job to check this rather than waiting for someone to look at the web page.
Max tells us that some conf files were bad and that he did something and now some pages are being made. But the PEM and MEDM pages are blank. Also the ASC tab looks bogus to me.
In this video: https://youtu.be/iphcyNWFD10, the comments focus on the orange crocs, my wrinkled shirt, and the first aid kit.
Manasa pointed me to the CAD drawings in the 40m SVN and I've now uploaded them to the 40m DCC Tree so that EricG and SteveV can convert them into SolidWorks.
Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?
The computers are all done.
megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely
This is one of those unsolved door lock acquisition problems. It's been happening for years.
Please ask facilities to increase the strength of the door tensioner so that it closes with more force.
Pages still not working: PEM and MEDM blank.
The attached file is a python notebook that you can use to get data. Minimal syntax.
"## Get some 40m data using NDS"
Minute trend data seems not to be available using the NDS2 server. It's super slow using dataviewer from the control room.
Did some digging into the NDS2 config on megatron. It hasn't been updated in 2 years.
All of the stuff is run by the user 'nds2mgr'. The crontab for this user was running all the channel name updates and server restarts at 3 AM each day; I've moved it to 5:05 AM. I don't know the password for this user, so I just did 'sudo su nds2mgr' to become him.
On megatron, in /home/nds2mgr/nds2-megatron/ there is a list of channels and configs. The file for the minute trend (C-M-ChanList.txt), hasn't been updated since Nov-2015. ???
Did we turn off minute trend writing in one of the recent FrameBuilder debug sessions? Seems we only have second trends in 2016. Maybe this explains why it's so slow to get minute trends? Dataviewer has to rebuild them from the second trend.
controls@nodus|frames > l
drwx------ 2 root root 16384 Jun 8 2009 lost+found/
drwxr-xr-x 2 controls controls 4096 Jul 14 2015 tmp/
-rw-r--r-- 1 controls controls 0 Jul 14 2015 test-file
drwxr-xr-x 5 controls controls 4096 Apr 7 2016 trend/
drwxr-xr-x 4 root root 4096 Apr 11 2016 archive/
drwxr-xr-x 789 controls controls 36864 Jan 13 19:34 full/
controls@nodus|frames > cd trend
controls@nodus|trend > l
drwxr-xr-x 258 controls controls 3342336 Jul 6 2015 minute_raw/
drwxr-xr-x 387 controls controls 36864 Nov 5 2015 minute/
drwxr-xr-x 969 controls controls 36864 Jan 13 19:49 second/