I pulled the beatbox from the 1X2 rack so that I could try to hack in some output whitening filters. These are shamefully absent because of the power-wiring mistake on the board.
Right now we're just using the MON output. The MON output buffer (U10) is the only chip in the output section that's stuffed:
The power problem is that all the AD829s were drawn with their power lines reversed. We fixed this by flipping the +15 and -15 power planes and not stuffing the differential output drivers (AD8672).
It's possible to hack in some resistors/capacitors around U10 to get us some filtering there. It's also possible to just stuff U9, which is where the whitening is supposed to be, and then jumper its output over to the MON output jack. That might be the cleanest solution, with the least amount of hacking on the board.
I modified the beatbox according to this plan. I stuffed the whitening filter stage (U9) as indicated in the schematic (I left out the C26 compensation cap which, according to the AD829 datasheet, is not actually needed for our application). I also didn't have any 301 ohm resistors so I stuffed R18 with 332 ohm, which I think should be fine.
Instead of messing with the working monitor output that we have in place, I stuffed the J5 SMA connector and wired the U9 output to it in a single-ended fashion (i.e. I grounded the shield pins of J5 to the board, since we're not driving it differentially). I then connected J5 to the I/Q MON outputs on the front panel. If there's a problem, we can just rewire those back to the J4 MON outputs and recover exactly where we were last week.
It all checks out: 0 dB of gain at DC, 1 Hz zero, 10 Hz pole, with 20 dB of gain at high frequencies.
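That shape can be sanity-checked numerically (a minimal sketch; the corner frequencies are taken from the measurement above, not from the schematic component values):

```python
import numpy as np
from scipy import signal

# Whitening filter shape: zero at 1 Hz, pole at 10 Hz, unity gain at DC.
# H(s) = (1 + s/w_z) / (1 + s/w_p), with w = 2*pi*f
f_zero, f_pole = 1.0, 10.0
num = [1.0 / (2 * np.pi * f_zero), 1.0]   # s/w_z + 1
den = [1.0 / (2 * np.pi * f_pole), 1.0]   # s/w_p + 1

# Evaluate well below the zero and well above the pole
w = 2 * np.pi * np.array([0.01, 1.0, 10.0, 1000.0])  # rad/s
_, h = signal.freqs(num, den, worN=w)
mag_db = 20 * np.log10(np.abs(h))

print(mag_db)  # ~0 dB at DC, ~20 dB (= f_pole/f_zero) at high frequency
```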
I installed it back in the rack, and reconnected X/Y ARM ALS beatnote inputs and the delay lines. The I/Q outputs are now connected directly to the DAQ without going through any SR560s (so we recover four SR560s).
Keven, our janitor, accidentally pushed the laser emergency stop switch at the main entry door.
The laser was turned back on. The MC and the arms started flashing happily, as they were before.
with the script, as it was down.
The MC down script is too slow to block MC_L when the cavity goes out of lock; as a result, the loop strongly kicks MC2. We decided to add a threshold on MC TRANS inside the MCS model that blocks MC_L during lock loss. This is a lower threshold; the upper threshold can be slow and is implemented inside the MC up script.
The fast threshold can be set inside MC2 POS. I did not correct the MC2 top-level MEDM screen, as it is the same for all core optics.
Note: the fast trigger will also block the ALS signal if the MC loses lock.
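The intended trigger logic, roughly (a hedged sketch; the function name and the threshold value are illustrative, not the actual parameters in the MCS model):

```python
def mc_fast_trigger(mc_trans, mc_l, threshold=50.0):
    """Zero the MC_L feed when MC TRANS drops below the lock-loss
    threshold, so the loop cannot kick MC2 during a lock loss.
    The threshold value here is illustrative only."""
    if mc_trans < threshold:
        return 0.0   # block MC_L (and, per the note, the ALS signal)
    return mc_l

print(mc_fast_trigger(100.0, 1.5))  # locked: MC_L passes through
print(mc_fast_trigger(10.0, 1.5))   # lock loss: output blocked
```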
Since I keep asking Manasa to "measure" distances off of the CAD drawing for me, I thought I might just write them all down, and quit asking.
So, these are only valid until our next vent, but they're what we have right now. All distances are in meters, angles in degrees.
I tested the On-Track (from LLO) OT 301 amp with a PSM2-10 QPD. It responded; Jenne will calibrate it. The 12 V DC power supply input is unipolar.
The one AC to DC adapter that Jenne tried was broken.
Here's an example of the total horribleness of what's happening right now:
controls@rossa:~ 0$ ping 192.168.113.222
PING 192.168.113.222 (192.168.113.222) 56(84) bytes of data.
From 192.168.113.215 icmp_seq=2 Destination Host Unreachable
From 192.168.113.215 icmp_seq=3 Destination Host Unreachable
From 192.168.113.215 icmp_seq=4 Destination Host Unreachable
From 192.168.113.215 icmp_seq=5 Destination Host Unreachable
From 192.168.113.215 icmp_seq=6 Destination Host Unreachable
From 192.168.113.215 icmp_seq=7 Destination Host Unreachable
From 192.168.113.215 icmp_seq=9 Destination Host Unreachable
From 192.168.113.215 icmp_seq=10 Destination Host Unreachable
From 192.168.113.215 icmp_seq=11 Destination Host Unreachable
64 bytes from 192.168.113.222: icmp_seq=12 ttl=64 time=10341 ms
64 bytes from 192.168.113.222: icmp_seq=13 ttl=64 time=10335 ms
--- 192.168.113.222 ping statistics ---
35 packets transmitted, 2 received, +9 errors, 94% packet loss, time 34021ms
rtt min/avg/max/mdev = 10335.309/10338.322/10341.336/4.406 ms, pipe 11
Note the 10 SECOND round-trip time and 94% packet loss. That's just beyond stupid. I have no idea what's going on.
Temporary solution: I ssh'd to nodus from the 40m wifi network and was able to connect to the FE machines. This works, but the bandwidth is limited this way, as expected.
40m MARS network needs to be fixed.
I am getting tired of having to restart Rossa all the time. She freezes almost once per day now. Jamie has looked at it with me in the past, and we (a) don't know why exactly it's happening and (b) have determined that we can't un-freeze it by ssh-ing from another machine.
I wonder if it's because I start to have too many different windows open? Even if that's the cause, that's stupid, and we shouldn't have to deal with it.
Renaming of the c1gcv model earlier (elog 7011) had left white boxes in most of the ALS medm screens.
The channel names were corrected. No more white boxes.
The ITMx Oplev was misaligned. Switched the ITMx Oplev back on and fixed the alignment.
EDIT, JCD: This is totally my fault, sorry. I turned it off the other day when I was working on the POP layout, and forgot to turn the laser back on. Also, I moved the fork on the lens directly in front of the laser (in order to accommodate one of the G&H mirrors), and I nudged that lens a bit, in both X and Y directions (although very minimally along the beam path). Anyhow, bad Jenne for forgetting to elog this part of my work.
Update: We don't have our BIG screen
There was no light from the projector when I came in this morning. I suspected it might have to do with the lifetime of the bulb. But turning the projector OFF and ON got the projector working....but only for about 10-15 seconds. The display would go OFF after that. I will wait for some additional help to dismount it and check what the problem really is.
There were 4 cables running over the front side of rack 1Y4 such that the front door could not be closed. I re-routed them (one at a time) through the opening at the top of the rack. The channels concerned were:
Before and after pics attached.
Thank you, Ben Abbott, for forwarding this information:
QPD Amplifier D990272 https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?.submit=Number&docid=D990272&version= at the X-end. It plugs into a Generic QPD Interface, D990692, https://dcc.ligo.org/cgi-bin/private/DocDB/ShowDocument?.submit=Number&docid=D990692&version= according to my drawings, that should be in 1x4-2-2A.
Wrong: this is not an interface.
It turned out that the earlier fix was not really a fix, because there was some confusion as to which of the two lenses Jenne moved while working, and while Manasa and I were re-aligning the beam, we may have moved the other lens.
Subsequently, when we checked the quadrant sum, it was low (in the region of 20), even though OPLEV_PERROR and OPLEV_YERROR were reasonably low. We called up a 30 day trend of the quadrant sum and found that it was typically closer to 4000. This warranted a visit to the table once again. Before going to the table, we did a preliminary check from the control room so as to make sure that the beam on the QPD was indeed the right one by exciting ITMx in pitch (we tried offsets of 500 and -500 counts, and the spot responded as it should). ITMx oplev servo was then switched off.
At the table, we traced the beam path from the laser and found, first, that the iris (I have marked it in one of the photos attached) was practically shut. Having rectified this, we found that the beam was getting clipped on the first steering mirror after the laser (also marked in the same photo; a second photo showing the clipping is attached). The beam isn't very well centred on the first lens after the laser, which was the one disturbed in the first place. Nevertheless, the path of the entering beam seems alright. The proposed fix, then, is as follows:
Back in the control room we noticed that the quadrant sum had gone up to ~3500 after opening out the iris. The OPLEV_PERROR and OPLEV_YERROR counts however were rather high (~200 counts in pitch and ~100 counts in yaw). Jenne went back to the table and fixed the alignment such that these counts were sub-10, and the quadrant sum went up to ~3800, close to the trend value.
At the time of writing, the beam is still not centred on the lens immediately after the laser and is still getting clipped at the first steering mirror. Oplev servo back on.
Replaced the batteries successfully in the control room. We just had to switch the clips from the old batteries to the new ones, which we didn't know was possible until now.
-the replacement lamp arrived a while back.
-the old lamp has been switched out, it had 3392 lamp hours on it.
-new lamp installed, projector mounted back up, and lamp hours reset to zero. there is a lingering odour of something burning, not sure what it is or if it is in any way connected to the new lamp. old lamp disposed in the hazardous waste bin. the big screen is back online.
Red-green laser pointers added to the depleted stock of 2011
The two pointers' outputs measured 4.4 mW (green) and 2 mW (red).
The office work benches were cleaned up yesterday. The anti-image filter boards were moved to the north wall of the control room. Koji's PD-electronics box was placed next to the water dispenser.
The removed ETMY optical table: TMC 4' x 2' x 4" with Aluminum enclosure was placed on table in the east arm.
The 3 Panasonic ceramic kit books (1206, NPO, SMT) are well stocked. The 4th one needs to be refilled at some values.
I labeled them on the cover for fast access. See Atm1
The metalized polyester film book with through-hole mounts is in good shape also. Atm2
The AVX Ceramic 1206 (Garrett cab), range 1 pF - 22 µF, 50 V ...... 67 values
Note that the dielectric and the capacitance/voltage ratings vary:
NPO: 1 pF - 1 nF / 50 V ....... 37 values
X7R: 1 nF - 0.082 µF / 50 V, 0.1 µF / 100 V ....... 27 values
Y5V: 4.7 µF / 6.3 V, 10 µF / 10 V, 22 µF / 6.3 V ......... 3 values
I have been working for some time now on setting up a serial link with the temperature controller of the PPKTP crystal doubling oven at the Y-end. The idea was to remotely tune the PID gains of the controller and get temperature data. The device used to serially interface with the temperature controller is a Raspberry Pi Model B, which is connected to the temperature controller by means of a USB-to-serial adaptor with a PL2303 chip. I installed the interface this morning, and have managed to get it talking with the doubling oven. I am now able to collect time-series data by ssh-ing to the Raspberry Pi from the control room. I will use this data to manually tune the PID gains for now, though automatic tuning via some script is the long-term goal.
The temperature controller for the doubling oven is a Thorlabs TC200, and it supports serial communication via the RS232 protocol through a female DB9 connector on its rear panel. I hooked up the Raspberry Pi to this port with a USB-serial adaptor that was in one of the cabinets in the 40m control room. After checking the Martian host table, I assigned the Raspberry Pi the static IP 192.168.113.166 so that I could ssh into it from the control room and test the serial link. This morning, I first hooked up the Raspberry Pi to an ethernet cable running from rack 1Y4 to make sure I could ssh into it from the control room. Having established this, I moved the Raspberry Pi and its power supply to under the Y-end table, where it currently resides on top of the temperature controller. I then took down the current settings on the temperature controller so that I have something to revert to if things go wrong; these are:
Set-Point: 35.7 Celsius
Actual Temperature: 35.8
I then connected the Pi to the temperature controller using the serial-USB cable, and plugged the ethernet cable in. Rebooted the Pi and ssh-ed into it from the control room. I first checked the functionality of the serial-link by using terminal's "screen" feature, but the output to my queries was getting clipped on the command line for some reason (i.e. the entire output string wasn't printed on the terminal window, only the last few characters were). Turns out this is some issue with screen, as when I tried writing the replies to my queries to a text file, things worked fine.
At present, I have a python script which can read and set parameters (set-point temperature, actual temperature, PID gains) on the controller as well as log time-series data (temperature from the temperature sensor as a function of time) to a text file on the Pi. As of now, I have only checked the read functions and the time-series logger, and both are working (some minor changes are required in the time-series function: I need to get rid of the characters the unit spits out and only save the numbers in my text file).
For the time-being, I plan to apply a step to the controller and use the time-series data to manually tune the PID parameters using MATLAB. I am working on a bunch of shell scripts to automate the entire procedure.
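The Pi-side logger might look something like this (a hedged sketch, not the actual script: the serial port name, baud rate, and the "tact?" query string are assumptions that should be checked against the TC200 manual):

```python
import time

try:
    import serial  # pyserial; only needed when talking to the hardware
except ImportError:
    serial = None

def parse_reply(reply):
    """Strip the echoed command and prompt characters from a controller
    reply, keeping only the numeric part (the issue noted above)."""
    return float("".join(c for c in reply if c.isdigit() or c in ".-"))

def read_temperature(port="/dev/ttyUSB0"):
    """Query the actual temperature once. Port name, baud rate, and the
    'tact?' command are assumptions -- check the TC200 manual."""
    with serial.Serial(port, baudrate=115200, timeout=1) as tc:
        tc.write(b"tact?\r")
        return parse_reply(tc.read(64).decode(errors="ignore"))

def log_timeseries(path, duration_s=360, dt=1.0):
    """Append (elapsed seconds, temperature) pairs to a text file."""
    t0 = time.time()
    with open(path, "a") as f:
        while time.time() - t0 < duration_s:
            f.write("%.1f %.3f\n" % (time.time() - t0, read_temperature()))
            time.sleep(dt)
```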
Having established the serial link between the doubling oven at the Y-end and the Raspberry Pi, I wanted to use this interface to collect a time series from the oven after applying a step function, in an effort to measure the transfer function of the oven. The idea was that, knowing the transfer function of the oven, I could use some simple PID tuning rules like the Ziegler-Nichols rule, or put everything in SIMULINK and find the optimal PID gains. However, I am unable to extract the oven transfer function from the time-series data collected.
Last night, between 920pm and 940pm I applied a step function to the doubling oven by changing the setpoint of the controller from 35.7 Celsius to 39 Celsius (having checked elog 3203 to get an idea of a 'safe' step to apply). I then used the Pi to collect time series data for 6 minutes, then returned the set-point back to 35.7 Celsius, and took another time-series to make sure things were back to normal. Having gotten the time series data, I attempted to fit it using some exponentials which I derived as follows:
I couldn't think of a way to get the Laplace transform of the time-series data collected, so I approximated the oven transfer function as a single-pole system, i.e. G(s) = K/(1+Ts), where K and T are parameters that characterise the oven. I then plugged the resulting expression for Y(s) into Mathematica (knowing X(s) = constant/s, and H(s) = 250 + 60/s + 25s from the PID gains) and did an inverse Laplace transform to find a y(t) with two unknown parameters, K and T, to which I could fit the time-series data.
The time-series data collected via the Pi after applying the step was this:
The inverse Laplace transform from Mathematica yielded the following (formidable!) function (time, the independent variable, is x, and the fitting parameters are a=K and b=T, where K and T are as described earlier):
(39*(exp(x*(1/(2*(25*a - b)) - (125*a)/(25*a - b) - sqrt(1 - 500*a+ 56500*a^2 + 240*a*b)/(2*(25*a - b)))) - exp(x*(1/(2*(25*a - b)) - (125*a)/(25*a - b) + sqrt(1 - 500*a + 56500*a^2 + 240*a*b)/(2*(25*a - b)))))*a)/sqrt(1 - 500*a + 56500*a^2 + 240*a*b)
My best attempts to fit this using MATLAB's cftool have given me useless fits:
I tried changing the start-points for the fitting parameters but I didn't get any better fits.
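The fitting machinery itself can be exercised on synthetic data first (a sketch; the model here is the plain first-order step response y(t) = K(1 - e^(-t/T)), a deliberately simpler stand-in for the full closed-loop expression above, and the K, T, and noise values are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

# Simple one-pole step response as the fit model (a stand-in for the
# full closed-loop y(t) above, just to exercise the fitting machinery)
def step_response(t, K, T):
    return K * (1.0 - np.exp(-t / T))

# Synthetic "oven" data: a 3.3 C step with a 60 s time constant, plus noise
rng = np.random.default_rng(0)
t = np.linspace(0, 360, 200)
y = step_response(t, 3.3, 60.0) + 0.01 * rng.standard_normal(t.size)

# Sensible start points matter a lot for nonlinear fits like this one
popt, _ = curve_fit(step_response, t, y, p0=[1.0, 10.0])
print(popt)  # recovers roughly K = 3.3, T = 60
```

If the fit recovers the known parameters on synthetic data but still fails on the real time series, that points at the model (or the start points) rather than the fitting routine.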
Steve and I tried to fix the Oplev situation detailed in elog 8684, today afternoon. We have come up with a fix which needs to be adjusted, possibly completely overhauled depending on whether the mirror steering the return beam to the QPD is blocking the POX beam coming out.
Situation in the chamber: the black line is meant to indicate what was happening, the red is indicative of the present path.
Plan of action:
I'm not sure what's going on today but we're seeing ~80% packet loss on the 40MARS wireless network. This is obviously causing big problems for all of our wirelessly connected machines. The wired network seems to be fine.
I've tried power cycling the wireless router but it didn't seem to help. Not sure what's going on, or how it got this way. Investigating...
I'm still seeing some problems with this - some laptops are losing and not recovering any connection. What's to be done next? New router?
We had the same problem yesterday. However, the dedicated vacuum laptop worked with fewer disconnects. Christian is coming over this afternoon to look at this issue.
This happened a few weeks ago and it recovered mysteriously. Jamie did not understand it.
Jenne just aligned the X arm, and I got a chance to check the status of the POX beam coming out of the chamber. I turned the oplev servo off so that the red beam could be blocked, turned all the lights off, and had a look at the beam in the vicinity of the mirror steering the oplev-out beam to the QPD with an IR view-card. The beam is right now about half a centimeter from the pitch knob of the said mirror, so it's not getting clipped at the moment. But perhaps the offending mirror can be repositioned slightly, along with the oplev QPD, so that more clearance is given to the POX beam. I will work this out with Steve tomorrow morning.
The keyboard on Pianosa workstation has been flaky for the last several days at least. Today, it was having troubles mounting the linux1 file system and was hanging on boot.
People in the control room emailed Jamie and then grew afraid of the computer. Annalisa suggested that we put garlic on it since it was clearly possessed.
Typing 'dmesg' at the command prompt, I found that there were thousands of messages like these:
[ 3148.181956] usb 2-1.2: new high speed USB device number 68 using ehci_hcd
[ 3149.773883] usb 2-1.2: USB disconnect, device number 68
[ 3150.228900] usb 2-1.2: new high speed USB device number 69 using ehci_hcd
[ 3152.076544] usb 2-1.2: USB disconnect, device number 69
[ 3152.787391] usb 2-1.2: new high speed USB device number 70 using ehci_hcd
[ 3154.123331] usb 2-1.2: USB disconnect, device number 70
[ 3154.578459] usb 2-1.2: new high speed USB device number 71 using ehci_hcd
So I replaced the existing Dell keyboard with an older Dell keyboard and the bad messages have stopped. No garlic was used.
With Rana's input, I changed the ITMx oplev servo gains, given that the beam path had been changed. The pitch gain was changed from 36 to 30, while the yaw gain was changed from -25 to -40. Transfer function plots are attached. The UGF is ~8 Hz for pitch and ~7 Hz for yaw.
I had to change the envelope amplitudes in the templates for both pitch and yaw to improve the coherence. Above 3Hz, I multiplied the template presets by 10, and below 3Hz, I multiplied these by 25.
As mentioned in elog 8770, I wanted to give the POX beam a little more clearance from the pick-off mirror steering the outgoing oplev beam. I tweaked the position of this mirror a little this morning, re-centred the spot, and checked the loop transfer functions once again. These were really close to those I measured last night (UGF ~8 Hz for pitch, ~7 Hz for yaw), reported in elog 8777, so I did not have to change the loop gains for either pitch or yaw. Plots attached.
I found the south end emergency doors not latched completely. There was a ~ 3/8" vertical gap from top to bottom.
Please pull or push the doors harder if they do not catch fully.
There are 4 oscilloscopes left on the AP optical table top.... It's only 25 lbs... Do not leave anything on the optical table tops!
Alex and Steve,
The old halogen chamber-illuminator cabling was disconnected and the potentiometer board removed at 1Y1, in order to make room for the PD calibration fibre setup.
During the process, they had also removed the power cable to the ITMY camera. Steve and I fixed this...so the camera is back.
[Annalisa, Manasa, Jenne, Koji]
We are working on the vent preparation.
First of all, there was no light in the interferometer.
Obviously there was a lot of IFO activity over the weekend; some of it was elogged, some was not.
Annalisa took it upon herself to restore the alignment, and the arms recovered their flashes.
The odd thing was that the ASS became unstable after we turned down the TRY PD gain from +20 dB to +10 dB (0 dB originally). We increased the TRY digital gain by a factor of 10 (that's the "10 dB" of this PDA520; see the spec sheet) to compensate for this change, but this made the ASS unstable. In the end we reduced the TRY PD gain to 0 dB, which restored the ASS.
Jenne took some more data for the QPD spectrum calibration.
Link to the vent plan
The results of today's MC spot position measurements:
spot positions in mm (MC1,2,3 pit MC1,2,3 yaw):
[2.3244717046516197, -0.094366247149508087, 1.6060842142158149, -0.74616561350974353, -0.67461746482832874, -1.3301448018100492]
MC1 and MC3 both have spots that are a little high in pitch, but everything else looks okay.
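The raw list above is easier to read when paired with its channel labels (a trivial sketch; the ordering follows the convention stated in the log):

```python
# MC spot positions in mm, in the order (MC1,2,3 pit, MC1,2,3 yaw)
spots = [2.3244717046516197, -0.094366247149508087, 1.6060842142158149,
         -0.74616561350974353, -0.67461746482832874, -1.3301448018100492]
labels = ["MC1 pit", "MC2 pit", "MC3 pit", "MC1 yaw", "MC2 yaw", "MC3 yaw"]

for label, mm in zip(labels, spots):
    print("%s: %+.2f mm" % (label, mm))
```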
I have just centered IPPOS, as well as PSL POS and PSL ANG (also called IOO POS and IOO ANG on the screens). Annalisa is working on placing mirrors to get the IPANG beam to its QPD, so that one will be centered later.
Green steering mirrors have been swapped with PZT mirrors at the X end table. We aligned the green to the X arm.
X arm green transmission +PSL green ~ 0.95
That's better than before the swap...woohooo
Centering of the oplev beams: done
Recording the OSEM values: done
There seems to be an unexplained oscillation in X arm cavity transmission for IR when the cavity is locked using the POX error signal.
Its origin is not related to the oplevs, because the oscillation does not exist when the LSC is OFF and the arms are controlled only by the oplevs and OSEMs.
Low power MC locking
- Rotated HWP right after the laser
- Put a knife edge beam dump at the output of the PBS after the HWP.
- Replaced the PO mirror for the MC refl with an HR mirror.
Input offset from 0 to 0.29
Servo Gain from 10 to 30
=> Transmission 0.84 (1.2W at the MC input) to 0.069 (100mW)
VCO Gain from 25 to 31
MC REFL: Unlocked 3.6 Locked 0.38-0.40
After Koji and I lowered the power into the PMC and saw that the MC locked nicely, I remeasured the spot positions (no alignment on the PSL table, or of the MC mirrors has been done. Also, WFS are off, since there isn't any power going to them).
spot positions in mm (MC1,2,3 pit MC1,2,3 yaw):
[1.1999406656184595, 0.63492727550953243, 1.0769104750021909, -1.0260011922577466, -1.059439987970527, -1.2717741991488549]
The spot positions seem to have actually gotten a bit better in pitch (although between 2 consecutive measurements there was ~0.5mm discrepancy), and no real change in yaw. This means that Rana was right all along (surprise!), and that decreasing the power before the PMC reduces alignment pain significantly.
After everyone's work today (good teamwork everybody!!), we are a GO for the vent.
Steve, please check the jam nuts, and begin the vent when you get in. Thanks.
record of the initial state
Full alignment of the IFO was recovered. The arms were locked with the green beams first, and then locked with the IR.
In order to use the ASS with lower power, C1:LSC-OUTPUT_MTRX_9_6 and C1:LSC-OUTPUT_MTRX_10_7 were reduced to 0.05.
This compensates for the gain imbalance between the TRX/Y signals and the A2L component in the arm feedback signals.
Although the IFO was aligned, we did not touch the oplevs or the green beams of the vented IFO.
There was no progress tonight after Jenne left.
I could not find any reasonable fringes of the IFO after 3 hours of optics jiggling.
* I jiggled TT1 and TT2. The sliders have not been restored.
We should probably look at the values in the daytime and revert them.
(Still this does not ensure the recovery of the previous pointing because of the hysteresis)
* The arms are still aligned for the green.
It's not TEM00 any more because of the vent/drift, but the fringe is visible (i.e. the eigenaxis is on the mirror).
* As we touched PR3, the input pointing is totally misaligned.
To Do / Plan
* We need to find the resonance of the yarm by the input TTs. Once the resonance is found, we will align the PRM.
* Move the BS to find the xarm resonance.
* Finally align SRM
* It was not possible to find the resonance of the yarm without going into the chamber. We can definitely find the spot on ITMY with a card, but we are not sure the beam can hit ETMY, and the baffles make the work difficult.
* One possibility is to align the input beam so that the ITMY beam is retroreflected to the PRM. I tried it, but the beam was not visible from the camera.
[Jenne, Annalisa, Manasa]
After yesterday's flipping of PR3, we lost our input pointing. Koji spent a few hours last night but couldn't restore the Y arm. I did my set of trials this morning which also didn't help.
So Jenne and I went ahead and requested Steve to get the ETMY door off.
We set the tip-tilts TT1 and TT2 to the slider values from yesterday and started aligning PR3 to hit the center of ITMY.
When we were hitting close to the center of ITMY, we decided to use the tip-tilts, because the movement of PR3 was too coarse at this point.
We used TT1 to get the beam to the center of ITMY and TT2 to get the beam at the center of ETMY. We did this iteratively until we were at the center of both the ITMY and ETMY.
We then went to fix IPANG.
The IPANG steering mirror on the BS table was steered to hit the center of the steering mirrors on the ETMY table. We aligned the beam onto the IPANG QPD on the green end table. The steering mirror on the BS table was then steered to misalign the beam in pitch by an inch at the last IPANG steering mirror. This should fix the IPANG clipping we have every time we pump down.
We closed the chambers with light doors and saw IR flashing in the arm cavity. Koji is now trying to lock the cavity with IR.