1) Gravity has to be included because the inverted-pendulum effect changes the resonant frequencies. The deflection from gravity is tiny, but the change in the dynamics is not, and the results are not accurate without it. The z-direction is probably unaffected by gravity, but the tilt modes really feel it (see the toy estimate after this list).
2) You should try a better mesh. Right now COMSOL is calculating a lot of strain/stress in the steel plates; for our purposes, we can treat the steel as infinitely stiff. There are options in COMSOL to change the meshing density in the different materials - as we can see from your previous plots, all the action is in the rubber.
3) I don't think the mesh density directly limits the upper measurement frequency. When you redo the swept-sine using the matlab scripting, use a logarithmic frequency grid like we usually do for the Bode plots. The measurement axis should go from 0.1 - 30 Hz and have ~100 points.
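To expand on point 1): the reason the tilt modes feel gravity even though the static sag is tiny is that, with the center of mass sitting above the rubber, gravity acts as an anti-spring on the tilt degrees of freedom. A toy estimate along these lines (all the numbers below are placeholders for illustration, not the real stack parameters):

import numpy as np

k_theta = 500.0   # rotational stiffness of the rubber, N*m/rad (assumed)
I = 2.0           # moment of inertia about the pivot, kg*m^2 (assumed)
m = 100.0         # supported mass, kg (assumed)
g = 9.81          # m/s^2
h = 0.05          # height of the CoM above the pivot, m (assumed)

f_no_gravity   = np.sqrt(k_theta / I) / (2 * np.pi)
f_with_gravity = np.sqrt((k_theta - m * g * h) / I) / (2 * np.pi)   # inverted-pendulum softening
print(f_no_gravity, f_with_gravity)   # ~2.5 Hz vs ~2.4 Hz for these made-up numbers

The static deflection from the m*g*h term is negligible, but the shift of the tilt resonance is not, which is why the COMSOL model needs gravity switched on.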
In any case, the whole thing looks promising: we've got real solid models and we're on the verge of being able to duplicate numerically the Dugolini-Vass-Weinstein measurements.
I made some progress on a couple issues:
1) I figured out how to create log-transfer function plots directly in COMSOL, which eliminates the hassle of toggling between programs.
2) Instead of plotting maximum displacement, which could lead to inconsistencies, I've started using point displacement, standardizing to the center of the top surface.
3) I discovered that the displacement can be measured as a field vector, so the minor couplings between each translational direction (due to the asymmetry in the original designs) can be easily ignored.
All of my plots already take into account the calibration of the photosensor (V/mm ratio).
Here is a Bode plot generated for the transfer function measurements we obtained last night/this morning. This is a Bode plot for the fully-assembled TT (with flexibly-supported dampers and bottom bar). I will continue to upload Bode plots (editing this post) as I finish them, but for now I will go to sleep and come back later today.
Here is a Bode plot comparing the no-eddy-current-damper case with and without the bar that we suspected of inducing some non-uniform damping. We have limited data on the no-EDC, no-bar configuration (swept-sine data from 7 Hz to 50 Hz, and FFT data from 0 Hz to 12.5 Hz) because we did not want to induce too much movement in the mirror (didn't want to break the mirror). This plot shows that there is not much difference between the transfer functions of the TT (no EDC) with and without the bar.
From FFT measurements of the no-eddy-current-damper case without the bar (800 data points, integrated 10 times) we can identify the resonance peaks of the TT mirror (although there are still damping effects from the cantilever blades).
The largest resonance peak occurs at about 1.94 Hz. The response (magnitude) is 230.
The second-largest resonance peak occurs at about 1.67 Hz. The response (magnitude) is 153. This second resonance peak may be due to pitch motion coupling: the clamp attaching the mirror to the wires sits above the mirror's center of mass, leading to inevitable coupling between translation and pitch.
Here is a Bode plot of the EDC without the bar. It seems very similar to the Bode plot with the bar.
Here is a Bode plot of the rigidly-supported EDC, without the bar. I need to do a comparison plot of the rigid and flexibly-supported EDCs (without the bar).
Here is my Bode plot comparing the flexibly-supported and rigidly-supported EDCs (both with no bar).
It seems as if the rigidly-supported EDC has better isolation below 10 Hz (the Matlab model predicted this: for the same magnet strength, the rigid system would have a lower Q than the flexible system). Above 10 Hz (the resonance for the flexibly-supported EDCs seems to be at 9.8 Hz), the flexibly-supported EDC appears to have slightly better isolation. I may need to take additional measurements of the transfer function of the flexibly-supported EDC (20 Hz to 100 Hz?) to get a less noisy transfer function at higher frequencies. The isolation does not appear to be much better in the noisy region (above 20 Hz); this may be because of the noise (possibly from the electromagnetic field of the shaker interfering with the magnets in the TT?). There is also a third resonance peak at about 22 Hz. I'm not sure what causes this peak; I want to confirm it with an FFT measurement of the flexibly-supported EDC (20 Hz to 40 Hz?).
Since the last post, I have found from the Characterization of TT data (from Jenne) that the cantilever springs for TT #4 (the model I am using) have a resonant frequency of 22 Hz. They are in fact inducing the third resonance peak.
Here is a Bode plot (CORRECTLY SCALED) comparing the rigidly-supported EDCs (model and experimental transfer functions).
Here is a Bode plot comparing the flexibly-supported EDCs (model and experimental transfer functions). I have been working on this graph for FOREVER, and with the set parameters this is as close as I can get it (I've been mixing and matching parameters for well over an hour > <). I think that experimentally the TTs have better isolation than the model because they have additional damping mechanisms (i.e. the cantilever blades that cause the resonance peak at 22 Hz). Also, there may be a slight deviation because my model assumes that all four EDCs are a single EDC.
Goal: to estimate the transfer function and the noise of the FC (frequency counter) that is part of the FOL-PID loop.
The setup used for the measurements is described in my previous elogs.
The input modulation signal and the FC output were recorded simultaneously for a certain period of time and the phase and gain are estimated from the data.
Analysis (data and code attached):
The recordings must contain an equal number of data points (around 6000 in my measurements) for the analysis.
The steps I followed to generate these plots are:
Phase(system) = Phase(FC Signal) - Phase(Input Signal)
From the plots it can be inferred that the delay of the FC is almost zero up to a modulation frequency of 0.1 Hz. Then there are phase shifts of +/- 180 degrees, showing that the system has multiple poles and zeros (these will be estimated after I have phase plots at a few more carrier frequencies).
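For reference, here is a minimal sketch of one way to extract the gain and phase at the modulation frequency from the two recorded time series (the helper name, sampling rate, and modulation frequency below are made up for illustration; the attached code is what was actually used):

import numpy as np

def gain_phase_at(f_mod, fs, x_in, y_fc):
    # demodulate both records against a complex reference at f_mod
    t = np.arange(len(x_in)) / fs
    ref = np.exp(-2j * np.pi * f_mod * t)
    X = np.mean(x_in * ref)   # complex amplitude of the input modulation
    Y = np.mean(y_fc * ref)   # complex amplitude of the FC output
    H = Y / X
    return np.abs(H), np.degrees(np.angle(H))

# synthetic example: 6000 points at 10 S/s, 0.5 Hz modulation
fs, f_mod, n = 10.0, 0.5, 6000
t = np.arange(n) / fs
x_in = np.sin(2 * np.pi * f_mod * t)
y_fc = 0.8 * np.sin(2 * np.pi * f_mod * t - np.radians(45.0))   # pretend FC response
print(gain_phase_at(f_mod, fs, x_in, y_fc))                     # ~ (0.8, -45)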
To Do Next:
Phase plots for varying carrier frequencies and different sampling times.
Installation of FC inside the 40m.
If I assume a 1-sample delay for a 0.1 s sampling rate, the delay is Exp[-I 2 pi f T], where T is the sampling period.
This means that you expect only 36 deg of phase delay at 1 Hz. In reality, it's 90 deg. Huge!
Also, there are suspicious zeros at ~1.6 Hz and ~3 Hz. This may suggest that the freq counter is doing some internal averaging, like a moving average.
It would be interesting to apply a theoretical curve on the plot. It's an intellectual puzzle.
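Not from the original entry, but a rough sketch of one such theoretical curve, assuming the counter is a plain N-sample moving average (N is a guess, not a measured parameter):

import numpy as np

T = 0.1          # sampling period, s (from above)
N = 6            # averaging length in samples (assumption)
f = np.logspace(-2, np.log10(5.0), 500)

z = np.exp(-2j * np.pi * f * T)
H = (1 - z**N) / (N * (1 - z))       # N-sample moving average

mag_dB    = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))
# zeros fall at f = k/(N*T) = 1.67 Hz, 3.33 Hz, ... for these assumed numbers,
# and the group delay (N-1)/2*T = 0.25 s happens to give ~90 deg at 1 Hz,
# so an N~6 average is at least a plausible candidate to overlay on the data.

Comparing phase_deg and the notch positions against the measured plot would quickly confirm or rule this out.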
I hooked up Bonnie and Clyde last night and tested it today. First I tried some loud noises to make sure I could identify them on the readout. Then, Steve suggested I try to look for some periodic stuff. I set up Butch Cassidy and the Sundance Kid on the cabinets by the MC2 optic. Now for graphs!
I tapped on the microphone a few times. I also yelled a bit, but this is sampling by seconds, so perhaps they got overwhelmed by the tapping.
This time I tried some more isolated yells. I started with a tap so I'd be sure to be able to recognize what happened. Apparently, not so necessary.
Here, it looks like a pretty strong periodic pattern on the second mic (Butch Cassidy). I replaced the lines with dashed ones where the pattern was a little less clear. Possibly interference from something. Mic1 (Bonnie) seems to show a pretty regular beat pattern, which seems reasonable, as it isn't particularly close to any one instrument fan.
So, anyway. I thought those were neat. And that I wanted to share.
In her position overlooking whichever table it is that is next to the PSL, Bonnie drummed up some decent coherence with the PSL-PMC_ERR channel, but not so much with the MC_L. I moved her into the PSL itself, and now there is rather good coherence with the PMC_ERR channel, but still not so great for MC_L.
Bonnie's new home in the PSL.
[Alberto, Koji, Rana]
The RFM network failed today. We had to reboot the frame builder and restart all the front ends following the instructions for the "Nuclear Option".
Burt-restoring to May 1st at 18:07, or April 30 18:07 made c1sosvme crash. We had to reset the front ends again and restore to April 15th at 18:07 in order to make everything work.
Everything seems fine again now.
This afternoon I found the RFM network in trouble. The front ends' sync counters had railed to 16384 counts and some of the computers were not responding. I went for a bootfest, but before that I rebooted c1dcuepics. I did it twice. Eventually it worked and I could get the front ends back to green.
However, trying to burt-restore to snapshots taken at any time between last Wednesday and today would make the RFM crash again. Weird.
Also, c1iscey seems to be in a coma and doesn't want to come back. Power cycling it didn't work. I don't know how to be more persuasive with it.
During the testing of Megatron as the controller for ETMY, c1iscey had been disconnected from the ethernet hub. Apparently we forgot to reconnect it after the test. This prevented it from mounting the nfs directory from linux1, and thus prevented it from coming up after being shutdown. It has been reconnected, restarted, and is working properly now.
This afternoon, I wanted to start the nominal alignment/adjustment steps for evening time locking, but got sucked into CDS frustrations.
Primary symptom: TRX and TRY signals were not making it from C1:SUS-ETMX_TR[X,Y]_OUT to C1:LSC-TR[X,Y]_IN1. Various RFM bits were red on the CDS status page.
Secondary symptom: ITMX was randomly getting a good sized kick for no apparent reason. I still don't know what was behind this.
First fix attempt: run sudo ntpdate -b -s -u pool.ntp.org on c1sus and c1lsc front ends, to see if NTP issues were responsible. No result.
sudo ntpdate -b -s -u pool.ntp.org
Second fix attempt: Restart c1lsc, c1sus and c1rfm models. No change
Next fix attempt: Restart c1lsc and c1sus frontend machines. c1lsc models come back, c1sus models fail to sync / time out/ dmesg has some weird message about ADC channel hopping. At this point, c1ioo, c1iscey and c1iscex all have their models stop working due to sync problems.
I then ran the above ntp command on all front ends and the FB, and restarted everyone's models (except c1lsc, who stayed working from here on out) which didn't change anything. I command-line rebooted all front ends (except c1lsc) and the FB (which had some dmesg messages about daqd segfaulting, but daqd issues weren't the problem). Still nothing.
Finally, Koji came along and relieved me from my agony by hard rebooting all of the front ends; pulling out their power cables and seeing the life in their lights fade away... He did this first with the end station machines (c1iscey and c1iscex), and we saw them come back up perfectly happy, and then c1ioo and c1sus followed. At this point, all models came back; green RFM bits abounding, and TR[X,Y] signals propagating through as desired.
Then, we tried turning the damping/watchdogs back on, which for some strange reason started shaking the hell out of everyone except the ETMs and ITMX. We restarted c1sus and c1mcs, and then damping worked again. Maybe a bad BURT restore was to blame?
At this point, all models were happy, all optics were damped, mode cleaner + WFS locked happily, but no beams were to be seen in the IFO.
The Yarm green would lock fine though, so tip-tilt alignment is probably to blame. I then left the interferometer to Jenne and Koji.
Still no real luck getting the beam back aligned to the IFO.
Koji and I tried a few minutes of wiggling the input pointing tip tilts (TT1 and TT2) around, and then tried doing some thinking.
We note that the beam propagates (modulo a few pickoffs):
IMC -> Faraday -> TT1 -> MMT1 -> MMT2 -> TT2 -> PRM.
Since moving TT1 to the rails does make beam reflections in the BS chamber move (as seen by movement of the general illumination on the PRM face camera), I posit that the beam is getting through the Faraday. It is certainly getting at least mostly through the Faraday, although since the MC locked so easily, I assume that we didn't have too much movement after the ~2pm Alaskan earthquake & aftershocks, so we're at pretty much the same alignment as usual, in terms of beam pointing coming from the IMC.
The plan is then to see the position of the beam on MMT1, and steer using TT1 to get the beam to roughly the center. Then, see the beam propagate to MMT2 (if possible) and TT2 (if possible). From here, we should be able to see the spot on PRM. We should be able to use TT2 to tweak things up, and get the beam back to about the right place on POP, or REFL, or somewhere farther along. Hopefully at this point, we'd see some flashes in the Yarm.
Using a spare Watek camera, I was able to capture a shot of the face of MMT1. This is when the TTs were restored to their values that were saved last Monday. I checked, and this is also roughly the center of the actuation range of TT1, for both pitch and yaw.
I am not able to see the face of MMT2, or TT2. If I leave TT1 alone and move TT2, I am not able to see any movement of any beam or reflections in the PRM face camera.
Koji and I are checking the MC spot positions, but it may be time to leave this for the morning crew.
EDIT: The MC spots were actually pretty bad, and the WFS were working really hard. Koji realigned the MC suspensions, and now the MC spots are slightly better, although quite similar, to what Manasa measured last week. The restored TT values still don't give us any flashes in the arms.
Alberto, Kiwamu, Koji,
this morning we found the RFM network and all the front-ends down.
To fix the problem, we first tried a soft strategy, that is, we tried to restart CODAQCTRL and C1DCUEPICS alone, but it didn't work.
We then went for a big bootfest. We first powered off fb40m, C1DCUEPICS, CODAQCTRL, reset the RFM Network switch. Then we rebooted them in the same order in which we turned them off.
Then we power cycled and restarted all the front-ends.
Finally we restored all the burt snapshots to Monday Dec 7th at 20:00.
I borrowed the little red cart 🛒 to help clear the path for new optical tables in B252 West Bridge. Will return once I am done with it.
Osamu has borrowed an ADC card from the LSC IO chassis (which currently has a flaky generation 2 Host interface board). He has used it to get his temporary Dell test stand running daqd successfully as of yesterday.
This is mostly a note to myself so I remember this in the new year, assuming Osamu hasn't replaced the evidence by January 7th.
I've borrowed the Busby Box for a day or so. Location: QIL lab at Bridge West.
Edit Sat Apr 20 21:16:46 2019 (awade): returned.
Borrowed DSUB cables for Juan's SURF project
- 2x D25F-M cables (~6ft?)
- 2x D2100103 ReducerCables
I borrowed an old-looking Variac variable transformer from the power supplies cabinet along the y-arm. It is currently in the TCS lab.
Borrowed Zurich HF2LI Lock-in Amplifier to QIL lab Wed Apr 24 11:25:11 2019.
I borrowed one Marconi (2023 B) from 40 m lab to QIL lab.
ZHL-3A (2 units) -> QIL
[Nicole / Jamie / Rana / Kiwamu]
The X arm and Y arm have been locked.
The settings for the locking were stored on the usual IFO_CONFIGURE scripts, so anybody can lock the arms.
In addition to that Nicole, Jamie and Rana re-centered the beam spot on the ETMY_TRANS camera and the TRY PD.
The next step is to activate the C1ASS servo and align both arms and the beam axis.
Xarm locking notes:
* Changed TRX gain from -1 to -0.02. Without this 50x reduction the arm power was not normalized.
* Had to fix trigger matrix to use TRX for XARM and TRY for YARM. Before it was crazy and senseless.
* Lots of PZT alignment. It was off by a lot.
* Yarm trans beam was clipping on the steering mirrors. Re-aligned. Needs to be iterated again. Be careful when bumping around the ETMY table.
* YARM gain was set to -2 instead of -0.2. Because the gain was too high the alignment didn't work right.
ALWAYS HAVE an OPEN DATAVIEWER with the standard ARM channels going when doing ANY INTERFEROMETER WORK.
THIS IS THE LAW.
We succeeded in stabilizing both arms using ALS and getting IR to resonate at the same time.
At each step we measured the _PHASE_OUT_Hz calibrated error signals for Y in this configuration so as to get the in-loop noise of ALS control of YARM
1. We stabilized the YARM off IR resonance using ALS, misaligned ETMX, and closed the XARM green shutter. That means no IR flashing and no green in the XARM.
2. We aligned ETMX with the XARM green shutter closed.
3. We opened the green shutter and locked the green laser to the XARM with PDH.
4. We stabilized the XARM using ALS, off resonance for IR.
5. We brought the XARM to IR resonance with the YARM stabilized off IR resonance.
6. We brought the YARM to IR resonance.
Beat frequencies when both the arms were stabilized and had IR resonating :
X arm beat frequency = 73.2 MHz; Y arm beat frequency = 26.6 MHz.
1. The ALS in-loop noise in the X and Y arms with IR off resonance and resonating.
2. The ALS in-loop noise in the Y arm at each step from 1 to 6 (will follow soon).
The Y arm ALS in-loop noise doesn't seem to be different in any of the configurations in steps 1 to 6. This seems to mean that the ALS loops of the two arms are decoupled.
Actually, we are not sure what changed since the last few days (when we were seeing some sort of coupling between the X and Y arm ALS), except that the YARM green PDH servo gain was changed (see this entry).
[JC, Paco, Yuta]
We locked both Y and X arms with POY11 and POX11.
The RFM fix (40m/16887) enabled us to trigger using C1:LSC-TRY/X_OUT.
The IR beam is now centered on the TMs using ASS (for the Yarm, the ASS loops cannot be closed fully, so we did it manually).
What we did:
- Aligned both arms so that the beams are roughly centered at TMs using cameras.
- Yarm lock was easy, but Xarm lock required gain tuning. Somehow, the Xarm required a 3x higher gain, as follows, although the amplitude of POX11_I_ERR seems to be almost the same as POY11_I_ERR. I suspect it has something to do with the power normalization matrix (TRX flashing is almost double the TRY flashing).
C1:LSC-YARM_GAIN = 0.01
C1:LSC-XARM_GAIN = 0.03
- Run ASS for Yarm. The ASS loops cannot be closed fully using the default feedback parameters. I guess this is because ITMY ULCOIL is not working (40m/16873). The ASS demodulated signals were zeroed by manually aligning ETMY, ITMY and PR3 (and some TT1 and TT2), except for the demodulated signals related to ITMY. The beam on ITMY was centered just by eye.
- Run ASS for Xarm. It seemed to work well.
- After this, TRX and TRY were as follows and beam positions on TMs were as attached.
C1:LSC-TRY_OUT ~ 0.58
(TRX is somehow lower than what we had yesterday... 40m/16886; TRX and TRY photodiode alignment was checked and seems to be OK.)
- Centered TMs and BS oplevs.
- POX and POY demodulation phases are not fully optimized. Needs re-tuning.
- Tweak GRX and GRY injection (restore GRY PZTs?)
- Install ETMXT camera (if it is easy)
- MICH locking
- RTS model for BHD needs to be updated
The filters were already in the damping loops but missing from the MC WFS path. I checked that they accurately cover the peaks at 16.5 Hz, 23.90 Hz, and 24.06 Hz.
I measured the bounce/roll frequencies for all the optics, and updated the Mechanical Resonances wiki page accordingly.
I put the DTT templates I used in the /users/Templates/DTT_BounceRoll folder, and I wrote a Python script which takes the ASCII data exported from those templates and does all the rest. The only tricky part is to remember to export the channel data in the order "UL UR LL" for each optic; the ordering of the optics within a single template export is not important, as long as you remember it...
Anyhow, the script is documented and the only things that may need to be modified are:
The script is in scripts/SUS/BR_freq_finder.py and in the SVN. I attach the plots I made with this method.
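For anyone who wants to re-implement the idea without digging up the script, here is a rough sketch of how the exported ASCII data can be turned into bounce/roll frequencies (the file name, column layout, and band edges below are assumptions for illustration, not necessarily what BR_freq_finder.py does):

import numpy as np
from scipy.signal import find_peaks

def bounce_roll_peaks(ascii_file, fmin=15.0, fmax=26.0, n_peaks=3):
    # assumed column layout: frequency, then the UL, UR, LL sensor spectra
    data = np.loadtxt(ascii_file)
    f, spectra = data[:, 0], data[:, 1:]
    band = (f >= fmin) & (f <= fmax)
    combined = spectra[band].sum(axis=1)        # crude combination of the three sensors
    idx, props = find_peaks(combined, height=0)
    best = idx[np.argsort(props["peak_heights"])[-n_peaks:]]   # keep the largest peaks
    return np.sort(f[band][best])

# e.g. bounce_roll_peaks("ETMY_bounce_roll.txt") might return ~[16.5, 23.9, 24.1]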
Bringing back CDS took a lot of work yesterday. I'm gonna try to summarize the main points here.
For some reason, fb1 was not able to mount the mx devices automatically on system boot. This was an issue I had faced earlier on fb1 (clone) too. The fix to this problem is to run the script:
To make this persistent, I've configured a daemon (/etc/systemd/system/mx_start_stop.service) in fb1 to run once on system boot and mount the mx devices as mentioned above. We did not see this issue on later reboots yesterday.
Next was the issue of gpstime module out of date on fb1. This issue is also known in the past and requires us to do the following:
controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 1$ sudo modprobe gpstime
Again, to make this persistent, I've configured a daemon (/etc/systemd/system/re-add-gpstime.service) in fb1 to run the above commands once on system boot. This corrected gpstime automatically and we did not face these problems again.
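For reference, a oneshot unit along these lines does the trick (this is a generic sketch, not necessarily the exact contents of re-add-gpstime.service):

[Unit]
Description=Reload the gpstime kernel module once at boot

[Service]
Type=oneshot
ExecStart=/sbin/modprobe -r gpstime
ExecStart=/sbin/modprobe gpstime
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

After dropping the file in /etc/systemd/system/, it gets enabled with "sudo systemctl enable <name>.service" so it runs on every boot.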
Later we found that NTP time synchronization between fb1 and the FE computers was not working, and the main reason was that fb1 was unable to access the internet. As a rule of thumb, it is always a good idea to try pinging www.google.com on fb1 to make sure it is connected to the internet. The issue had to do with fb1 not being able to find any nameserver. We fixed this by reloading the bind9 service on chiara a couple of times. We're not really sure why it wasn't working.
After the above, we saw that the fb1 ntp server is working fine. You see the following output on fb1 when that is the case:
On the FE computers, timedatectl should show that the "NTP synchronized" field is yes. That wasn't happening even after we restarted the systemd-timesyncd service. After this, I just tried restarting all the FE computers and it started working.
We had removed all db9 enabling plugs on the new SOSs beforehand to keep coils off just in case CDS does not come back online properly.
Everything in CDS loaded properly except the c1oaf model, which kept showing the 0x2bad status. This means that some IPC flags are red on c1sus, c1mcs and c1lsc as well, but everything else is green. See attachment 1. I then burt-restored everything in the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2022/Feb/4/12:19 directory. This includes the snapshot of c1vac as well, which I added to autoburt that day. All burt restore statuses were green OK. I think we are in a good state now to start the watchdogs on the new SOSs and put back the db9 enabling plugs.
When somebody gets time, we should make the custom service files in fb1:/etc/systemd/system/ symbolic links to a repo directory and version control these important services. We should also make sure that their dependencies and startup order are correctly configured. I might have done a half-assed job there since I only recently learned how to make unit files. We should do the same on nodus and chiara too. The hope is that one glorious day the lab can be restarted without spending more than 20 min on booting up the computers and network.
Great recovery work and cleaning of the rebooting process.
I'm just curious: did you observe that the c1sus2 cards have a different numbering order than before, along with the power outage/cycling?
Modified one of the PD assemblies carrying a large Si diode (~10 mm diameter).
Removed elements used for resonant operation and changed PD readout to transimpedance
configuration. The opamp is a CLC409 with 240 Ohm feedback (i.e. transimpedance) resistor.
To prevent noise peaking at very high frequencies and get some decoupling of the PD,
I added a small series resistor in line with the PD and the inverting opamp input.
It was chosen as 13 Ohm, and still allows for operation up to ~100MHz.
Perhaps it could be smaller, but much more bandwidth seems not possible with this opamp anyway.
Changes are marked in the schematic, and I list affected components here.
(Numbers refer to version 'PD327.SCH' from 30-April-1997):
-connected L3 (now open pad) via 100 Ohm to the RF opamp output. This restores the DC signal output.
-connected pin 3 of opamp via 25 Ohm to GND
-connected the cathode of the PD via 13 Ohm to pin 2 of the opamp
-removed L6, C26, L5, C18, and C27
-shorted C27 pad to get signal to the RF output
Measured the optical TF with the test laser setup.
(Note that this is at 1064nm, although the PD is meant to work with green light at 532nm!)
Essentially it looks usable out to 100MHz, where the gain dropped only by about
6dB compared to 10MHz.
Beyond 100MHz the TF falls pretty steeply then, probably dominated by the opamp.
The maximal bias used is -150V.
If the bias is 'reduced' from -150V to -50V, the response goes down by 4dB at 10MHz and
by 9dB at 100MHz.
The average output was 30mV at the RF output, corresponding to 60mV at the opamp output (50Ohm divider chain).
With 240 Ohm transimpedance this yields 250µA photo-current used for these transfer functions.
Another Hamamatsu S3399 photodiode was tested with the electronic circuit as described in LIGO-D-1002969-v.
RF transimpedance is 1k although the DC transimpedance is 2k.
The noise level is 25 pA/sqrt(Hz), which corresponds to a dark current of 1.9 mA, or 1.7 mA in the independent measurement.
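As a sanity check on that conversion (this is just the standard shot-noise relation i_n = sqrt(2*e*I), not part of the original measurement):

e   = 1.602e-19            # elementary charge, C
i_n = 25e-12               # measured current noise, A/sqrt(Hz)
I_dark = i_n**2 / (2 * e)  # shot-noise-equivalent dark current
print(I_dark)              # ~1.95e-3 A, i.e. about 1.9 mA

So the quoted 1.9 mA is just the 25 pA/sqrt(Hz) figure converted through the shot-noise formula, to be compared with the 1.7 mA measured independently.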
At all frequencies the noise is larger compared to Koji's measurement (see labbook page 4778).
In the file idet_S3399.pdf the first point is not within its error bars on the fitted curve. This point corresponds to the dark noise measurement.
I made this measurement again, and now it is on the fitted curve. In the previous measurement I pushed the 'save' button a bit too early: the averaging process had not finished yet when I pushed it.
Dark current is 1.05mA and noise is lower than in the previous measurement.
The new files are the XXX_v2.pdf files.
The ITMX tower was shipped to Bob's clean room to put the magnet back on.
Repair work is delayed. I need the "pickle pickers" that hold the magnet+dumbbell in the gluing fixture, for gluing them to the optic. Here at the 40m we have a full set of SOS gluing supplies, except for pickle pickers. We had borrowed Betsy's from Hanford for about a year, but a few months ago I returned all of the supplies we had borrowed. Betsy said she would find them in her lab, and overnight them to us. Since the problem occurred so late in the day, they won't get shipped until tomorrow (Thursday), and won't arrive until Friday.
I also can't find our magnet-to-dumbbell gluing fixture, so I asked her to send us one of those as well.
I have 2 options for fixing ITMX. I'll write down the pros and cons for each, and we can make a decision over the next ~36 hours.
(#1) Remove dumbbell from optic. Reglue magnet to dumbbell. Reglue magnet+dumbbell to optic.
(#2) Carefully clean dumbbell and magnet, without breaking dumbbell off of optic. Glue magnet to dumbbell.
Pros:
(#1) Guarantees that the magnet and dumbbell are axially aligned.
(#2) Takes only 1 day of glue curing time.
Cons:
(#1) Takes 2 days of glue curing time (one for the magnet to the dumbbell, one for the set to the optic).
(#2) Could have a slight mismatch in the axes of the dumbbell and magnet. Could accidentally drop a bit of acetone onto the dumbbell-to-optic glue, which would force us into option 1, since this might destroy the integrity of the glue joint (this would take only the 2 days already required for option 1; it wouldn't force us to take 2+1=3 days).
Dmass just reminded me that the usual procedure is to bake the optics after the last gluing, before putting them into the chambers. Does anyone have opinions on this?
On the one hand, it's probably safer to do a vacuum bake, just to be sure. On the other hand, even if we could use one of the ovens immediately, it's a 48 hour bake, plus cool down time. But they're working on aLIGO cables, and might not have an oven for us for a while. Thoughts?
I think we should follow the established procedure in full, even though it will cost us a few more days. I don't think we should consider the vacuum bake as something "optional". If the glue has any volatile components, they could be deposited on the optic, resulting in a change in the coating and consequently optical loss in the arm cavity.
Follow full procedure for full strength, minimum risk
I ran the "off" script for the Xarm ASS, followed by the "on" script, and now the Xarm ASS doesn't work. Usually we just run the freeze/unfreeze, but I ran the off/on scripts one time.
Koji, if you have some time tomorrow, can you please look at it? I am sorry to ask, but it would be very helpful if I could keep working on other things while the ASS is taken care of.
Steve, can you please find a cable that goes from the LSC rack to the IOO rack (1Y2 to 1X2), or lay a new one? It must be one single long cable, without barrels sticking it together. This will help me actuate on the Marconi using the LSC rack's DAC.
I spent a day trying to fix the XARM ASS, but with no real result. If the input of the 6th DOF servo is turned off, the other error signals are happy to be squished to around their zeros. So this gives us some sort of alignment control, but obviously a particular combination of the misalignments is left uncontrolled.
This 6th DOF uses BS to minimize the dither in ITMX yaw. I tried to use the other actuators but failed to get a linear coupling between the actuator and the sensor.
During the investigation, I compared the TRX/TRY power spectra. TRX had a bump at 30 Hz. Further investigation revealed that POX/POY had a big bump in the error signals. The POX/POY error signals between 10-100 Hz were coherent. This means that this is coming from the frequency noise stabilized with the MC. (Is this frequency noise level reasonable?)
The mysterious discovery was that the bump in the transmission exists only in TRX. How did the residual frequency noise cause the intensity noise of the transmission? One possibility is a PDH offset.
Anyway, Rana pointed out that IMC WFS QPDs had large spot offsets. Rana went to the AS table and fixed the WFS spot centering.
This actually removed the bump in TRX although we still don't know the mechanism of this coupling.
The bump at 30Hz was removed. However, the ASS issue still remains.
While tightening the bolts on the ETMX wire clamp, the wire broke. All four face magnets broke off.
Fortunately, no pieces were lost.
For the rest of this vent, at least, we need to start using the EQ stops more frequently. Whenever the suspension is being worked on clamp the optic. When you need it to be free back off the stops, but only by a few hundred microns - never more than a millimeter.
Best to take our time and use the stops often. With all the magnets being broken off, it's not clear now how many partially cracked glue joints we have on dumbbells which didn't completely fall off.
On behalf of Steve and the rest of the non-native-English community at the 40m who would like their browser's spell checker to work while editing the Elog, I fixed the Elog feature that prevented Firefox's context menu (the one which pops up with a mouse right click) from working when using the HTML editing interface (FCKeditor).
That also allows the Firefox spell checker to be enabled.
To get the browser context menu, just hold CTRL while right-clicking.
To make sure that the feature works properly on your browser, you might have to fully clear the browser's cache.
Basically I modified the FCKeditor config file (/cvs/cds/caltech/elog/elog-2.7.5/scripts/fckeditor/fckconfig.js). I added this also to the elog section on our Wiki.
Since the repair work, the temperature is significantly cooler. Surprisingly, even at the vertex (to be more specific, inside the PSL enclosure, which for the time being is the only place where we have a logged temperature sensor; this is not attributable to any change in the HEPA speed), the temperature is a good 3 deg C cooler than it was before the HVAC work, even though Koji's wind vane suggests the vents at the vertex were working. Was the setpoint for the entire lab modified? What should the setpoint even be?
- I went to the south arm. There are two big vent ducts for the outlets and intakes. Neither is flowing air.
The current temp at 7pm was ~30degC. Max and min were 31degC and 18degC.
- Then I went to the vertex and the east arm. The outlets and intakes are flowing.
Jordan reported on Jun 18, 2021:
"HVAC tech came today, and replaced the thermostat and a coolant tube in the AC unit. It is working now and he left the thermostat set to 68F, which was what the old one was set to."
As part of preparing for the SURF projects this summer, I grabbed ~50 minutes of MCL and STS_1 data from early this morning to do a little MISO wiener filtering. It was pretty straightforward to use the misofw.m code to achieve an offline subtraction factor of ~10 from 1-3Hz. This isn't the best ever, but doesn't compare so unfavorably to older work, especially given that I did no prefiltering, and didn't use all that long of a data stretch.
Code and plot (but not data) are attached.
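For anyone who wants to reproduce the idea without the attachment, here is a rough single-witness sketch in Python of the same kind of offline subtraction (the real analysis used misofw.m with multiple witness channels; the channel names, tap count, and helper below are illustrative only):

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def wiener_subtract(witness, target, ntaps=256):
    # FIR Wiener filter: predict `target` from `witness`, return the residual
    x = witness - witness.mean()
    d = target - target.mean()
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(ntaps)]) / n   # autocorrelation
    p = np.array([np.dot(x[:n - k], d[k:]) for k in range(ntaps)]) / n   # cross-correlation
    w = solve_toeplitz((r, r), p)          # solve R w = p for the filter taps
    prediction = lfilter(w, [1.0], x)      # witness passed through the Wiener filter
    return d - prediction, w

# e.g. residual, taps = wiener_subtract(sts1_x_data, mcl_data)
# then compare the ASDs of mcl_data and residual to read off the subtraction factor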
Good to see that misofw.m is still alive and well. Todo:
Bryan Barr is visiting us from Glasgow for a month. He received 40m specific safety training on Friday.