ID | Date | Author | Type | Category | Subject
14584
|
Mon Apr 29 16:34:27 2019 |
gautam | Update | Electronics | ITMX/ITMY mis-labelling fixed at 1X4 and 1X5 | After the X and Y arm naming conventions were changed, the labelling of the electronics in the eurocrates was not changed 😞 😔 😢. This meant that when we hooked up the new Acromag crate, all the slow ITMX channels were in fact connected to the physical ITMY optic. I fixed the labelling now - Attachments #1 and #2 show the coil driver boards and SUS PD whitening boards correctly labelled. Our electronics racks are in desperate need of new photographs.
The "Y" arm runs in the EW direction, while the "X" arm runs in the NS direction, as of April 29 2019.
ITMX was freed. ITMY, which is being worked on, is also free. |
Attachment 1: IMG_7400.JPG
|
|
Attachment 2: IMG_7401.JPG
|
|
14585
|
Mon Apr 29 19:23:49 2019 |
Jon | Update | Computer Scripts / Programs | Scripted tests of suspension VMons using fast system | I've added a scripted VMon/coil-enable test to PyIFOTest following the suggestion in #15542. Basically, a DC offset is added to one fast coil output at a time, and all the VMon responses are checked.
After resolving the swapped ITMX/ITMY eurocrate slots described in #14584, I ran the new scripted VMon test on all eight optics managed by c1susaux. All of them passed: SRM, BS, MC1, MC2, MC3, PRM, ITMX, ITMY. This is not the final suspension test we plan to do, but it gives me reasonably good confidence that all channels are connected correctly. |
14586
|
Tue Apr 30 17:27:35 2019 |
Anjali | Update | Frequency noise measurement | Frequency noise measurement of 1 micron source | We repeated the homodyne measurement to check whether we are measuring the actual frequency noise of the laser. The idea was to repeat the experiment with the laser unlocked and with the laser locked to the IMC. The frequency noise of the laser is expected to be reduced at higher frequencies (the expected value is about 0.1 Hz/rtHz at 100 Hz) when it is locked to the IMC. In this measurement, the fiber beam splitter used is non-PM. Following are the observations:
1. Time domain output_laser unlocked.pdf: Time domain output when the laser is not locked. The frequency noise is estimated from data corresponding to the linear regime. The following time intervals were used to calculate the frequency noise: (a) 104-116 s (b) 164-167 s (c) 285-289 s
2. Frequency_noise_laser_unlocked.pdf: Frequency noise when the laser is not locked. The model used has the functional form 5×10⁴/f, as before. Compared to our previous results, the experimental data agree less closely with the model in this measurement. In both cases, we have uncertainty because of fiber length fluctuation. Moreover, this measurement could be affected by polarisation fluctuation as well.
3. Time domain output_laser locked.pdf: Time domain output when the laser is locked. The following time intervals were used to calculate the frequency noise: (a) 70-73 s (b) 142-145 s (c) 266-269 s.
4. Frequency_noise_laser_locked.pdf : Frequency noise when the laser is locked
5. Frequency noise_comparison.pdf: Comparison of frequency noise in the two cases. The two values are not significantly different above 10 Hz. We would expect a reduction in frequency noise at higher frequencies once the laser is locked to the IMC, so this result may indicate that we are not really measuring the actual frequency noise of the laser. |
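For reference, the conversion from a measured frequency-fluctuation time series to an ASD for comparison against the model can be sketched as follows (the sampling rate and the random array are stand-ins; the real analysis would first slice out the time intervals listed above):

import numpy as np
from scipy import signal

fs = 2048                              # assumed sampling rate [Hz]
freq_ts = np.random.randn(60 * fs)     # stand-in for the measured frequency
                                       # fluctuation time series [Hz]
f, psd = signal.welch(freq_ts, fs=fs, nperseg=16 * fs)
asd = np.sqrt(psd)                     # frequency noise ASD [Hz/rtHz]
model = 5e4 / f[1:]                    # the 5x10^4/f model used above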
Attachment 1: Homodyne_repeated_measurement.zip
|
14587
|
Thu May 2 10:41:50 2019 |
gautam | Update | SUS | SOS Magnet polarity | A concern was raised about the two ETMs and ITMX having the opposite response (relative to the other 7 SOS optics) in the OSEM PDmon channel in response to a given polarity of PIT/YAW offset being applied to the coils. Jon has taken into account all the digital gains in the actuation part of the CDS system in making this conclusion. I raised the possibility of the OSEM coil winding direction being opposite on the 15 OSEMs of the ETMs and ITMX, but I think it is more likely that the magnets are just glued on opposite to what they are "supposed" to be. See Attachment #6 of this elog (you'll have to rotate the photo either in your head or in your viewer) and note that it is opposite to what is specified in the assembly procedure, page 8. The net magnetic quadrupole moment is still 0, but the direction of actuation in response to a current of given direction in the coil would be opposite. I can't find magnet polarities for all 10 SOS optics, but this hypothesis fits all the evidence so far. |
14588
|
Thu May 2 10:59:58 2019 |
Jon | Update | SUS | c1susaux in situ wiring testing completed | Summary
Yesterday Gautam and I ran final tests of the eight suspensions controlled by c1susaux, using PyIFOTest. All of the optics pass a set of basic signal-routing tests, which are described in more detail below. The only issue found was with ITMX having an apparent DC bias polarity reversal (all four front coils) relative to the other seven susaux optics. However, further investigation found that ETMX and ETMY have the same reversal, and there is documentation pointing to the magnets being oppositely-oriented on these two optics. It seems likely that this is the case for ITMX as well.
I conclude that all the new c1susaux wiring/EPICS interfacing works correctly. There are of course other tests that can still be scripted, but at this point I'm satisfied that the new Acromag machine itself is correctly installed. PyIFOTest has morphed into a powerful general framework for automating IFO tests. Anything involving fast/slow IO can now be easily scripted. I highly encourage others to think of more applications this may have at the 40m.
Usage and Design
The code is currently located in /users/jon/pyifotest, although we should find a permanent location for it. From the root level it is executed as
$ ./IFOTest <PARAMETER_FILE>
where PARAMETER_FILE is the filepath to a YAML config file containing the test parameters. I've created a config file for each of the suspended optics; they are located in the root-level directory and follow the naming convention SUS-<OPTIC>.yaml.
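For illustration, such a config file might look like the following - the parameter names here are my guesses at what PyIFOTest defines, not necessarily the real ones:

# SUS-ETMY.yaml (illustrative sketch only)
optic: ETMY
coils: [UL, UR, LR, LL, SD]
vmon_test:
  offset_counts: 5000    # DC offset applied to the fast coil output
  settle_time_s: 5       # wait before reading back the VMons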
The code climbs a hierarchical "ladder" of actuation/readback-paired tests, with the test at each level depending on signals validated in the preceding level. At the base is the fast data system, which provides an independent reference against which the slow channels are tested. There are currently three scripted tests for the slow SUS channels, listed in order of execution (a minimal sketch of the first appears after the list):
- VMon test: Validates the low-frequency sensing of SUS actuation (VMon channels). A DC offset is applied in the final filter module of the fast coil outputs, one coil at a time. The test confirms that the VMon of the actuated coil, and only this VMon, senses the displacement, and that the response has the correct polarity. The screen output is a matrix showing the change in VMon responses with actuation of each coil. A passing test, roughly, is diagonal values >> 0 and off-diagonal values << diagonal.

- Coil Enable test: Validates the slow watchdog control of the fast coil outputs (Coil-Enable channels). Analogously to (1), this test also applies a DC offset via the fast system and analyzes the VMon responses. However, in this case the offset is applied to all five coils simultaneously, and only one coil output is enabled at a time. The screen output is again a ΔVMon matrix, interpreted in the same way as above.

- PDMon/DC Bias test: Validates slow alignment control and readback (BiasAdj and PDMon channels). A DC misalignment is introduced first in pitch, then in yaw, with the OSEM PDMon responses measured in both cases. Using the gains from the PIT/YAW -> COIL output coupling matrix, the script verifies that each coil moves in the correct direction and by a sufficiently large magnitude for the applied DC bias. The screen output shows the change in PDMon responses under a pure pitch actuation and under a pure yaw actuation. The output filter matrix coefficients have already been divided out, so a passing test is a sufficiently large, positive change under both pitch and yaw actuations.
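A minimal sketch of the VMon test logic, assuming pyepics and 40m-style channel names (the channel-name pattern and the offset/settle values are illustrative, not what PyIFOTest actually uses):

import time
import numpy as np
from epics import caget, caput

OPTIC = "ETMY"
COILS = ["UL", "UR", "LR", "LL", "SD"]
OFFSET, SETTLE = 5000, 5.0    # counts, seconds (illustrative values)

dVmon = np.zeros((len(COILS), len(COILS)))
for i, coil in enumerate(COILS):
    baseline = [caget(f"C1:SUS-{OPTIC}_{c}VMon") for c in COILS]
    offset_ch = f"C1:SUS-{OPTIC}_{coil}COIL_OFFSET"   # final filter-module offset
    caput(offset_ch, OFFSET)
    time.sleep(SETTLE)
    actuated = [caget(f"C1:SUS-{OPTIC}_{c}VMon") for c in COILS]
    caput(offset_ch, 0)                               # restore
    dVmon[i] = np.subtract(actuated, baseline)

print(dVmon)  # pass: diagonal >> 0, off-diagonal << diagonal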

|
14591
|
Fri May 3 09:12:31 2019 |
gautam | Update | SUS | All vertex SUS watchdogs were tripped | I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?
On a side note - I don't think we log the watchdog state explicitly. We can infer whether the optic is damped by looking at the OSEM sensor time series, but do we want to record the watchdog state to frames? |
Attachment 1: SUSwatchdogs.png
|
|
14592
|
Fri May 3 12:48:40 2019 |
gautam | Update | SUS | 1X4/1X5 cable admin | Chub and I crossed off some of these items this morning. The last bullet was addressed by Jon yesterday. I added a couple of new bullets.
The new power connectors will arrive next week, at which point we will install them. Note that there is no 24V Sorensen available, only 20V.
I am running a test on the 2W Mephisto for which I wanted the diagnostics connector plugged in again and Acromag channels to record the signals. So we set up the highly non-ideal but temporary arrangement shown in Attachment #1. This will be cleaned up by Monday evening at the latest.
update 1630 Monday 5/6: the sketchy PSL acromag setup has been disassembled.
Quote: |
- Take photos of the new setup, cabling.
- Remove the old c1susaux crate from the rack to free up space, possibly put the PSL monitoring acromag chassis there.
Test that the OSEM PD whitening switching is working for all 8 vertex optics. (verified as of 5/3/19 5pm)
- New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
All 24 new DB-37 signal cables need to be labeled.
New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.
|
|
Attachment 1: D38CC485-1EB6-4B34-9EB1-2CB1E809A21A.jpeg
|
|
14593
|
Fri May 3 12:51:58 2019 |
gautam | Update | PSL | PSL turned on again | Per instructions from Coherent, I made some changes to the NPRO settings. The values we were operating at are in the column labelled "Operating value", while those from the Innolight test datasheet are in the rightmost column. I changed the Xtal temp and pump current to the values Innolight tested them at (but not the diode temps, as they were close and require a screwdriver to adjust), and turned the laser on again at ~12:45pm local time. The Acromag channels are recording the diagnostic information.
update 2:30pm - Looking at the trend, I saw that the D2 TGuard channel was reporting 0V. This wasn't the case before. Suspecting a loose contact, I tightened the D-sub connectors at the controller and Acromag box ends. Now it too reports ~10V, which according to the manual signals normal operation. So if one sees an abrupt change in this channel in the long trend since 12:45pm, that's me re-seating the connector. According to the manual, an error state would be signalled by a negative voltage at this pin, up to -12V. Also, the Innolight manual says pin 13 of the diagnostics connector indicates the "Interlock" state, but doesn't say what the "expected" voltage should be. The newer manual Coherent sent me has pin 13 listed as "Do not use".
Setting | Operating value | Value Innolight tested at
Diode 1 temp [C] | 20.74 | 21.98
Diode 2 temp [C] | 21.31 | 23.01
Xtal temp [C] | 29.39 | 25.00
Pump current [A] | 2.05 | 2.10
|
|
14594
|
Fri May 3 15:40:33 2019 |
gautam | Update | General | CVI 2" beamsplitters delivered | Four new 2" CVI 50/50 beamsplitters (2 for p-pol and 2 for s-pol) were delivered. They have been stored in the optics cabinet, along with the "Test Data" sheets from CVI. |
14595
|
Mon May 6 10:51:43 2019 |
gautam | Update | PSL | PSL turned off again | As we have seen in the last few weeks, the laser turned itself off after a few hours of running. So bypassing the lab interlock system / reverting the laser crystal temperature to the value from Innolight's test datasheet did not fix the problem.
I do not understand why the "Interlock" and "TGUARD" channels revert to their lasing-state values a few minutes after the shutoff. Is this just an artefact of the way the diagnostics are set up, or is it telling us something about what is causing the shutoff? |
Attachment 1: NPROshutoff.png
|
|
14596
|
Mon May 6 11:05:23 2019 |
Jon | Update | SUS | All vertex SUS watchdogs were tripped | Yes, this was a consequence of the systemd scripting I was setting up. Unlike the old susaux system, we decided for safety NOT to allow the modbus IOC to automatically enable the coil outputs. Thus when the modbus service starts/restarts, it automatically restores all state except the watchdog channels, which are left in their default disabled state. They then have to be manually enabled by an operator, as I should have done after finishing testing.
Quote: |
I found the 8 vertex watchdogs tripped today morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?
|
|
14597
|
Wed May 8 19:04:20 2019 |
rana | Update | PSL | PSL turned on again |
- Increased PSL HEPA Variac from 30 to 100% to get more airflow.
- All of the TEC setpoints seem cold to me, so I increased the laser crystal temperature to 30.6 C
- Adjusted the diode TEC setpoints individually to optimize the PMC REFL power (unlocked). DTEC A = 22.09 C, DTEC B = 21.04 C
- locked PMC at 1900 PT; let's see how long it lasts.
My hunch is that the TECs are working too hard and can't offload the heat onto the heat sinks. As the diodes degrade, more of the electrical power is converted to heat in the diodes rather than 808 nm photons. So hopefully the increased airflow will help.
I tried to increase the DTEC setpoints, but that seems to detune them too far from the laser absorption band, so that's not very efficient for us. In any case, if we end up changing the laser temperature, we'll have to adjust the ALS lasers to match, and that will be annoying.
The office area was very cold and the HVAC air flow stronger than usual. I changed the setpoint on the thermostat near Steve's desk from 71 to 73F at 1830 today. |
14599
|
Thu May 9 19:50:04 2019 |
gautam | Update | PSL | PSL turned off again | This time, it stayed on for ~24 hours. I am not going to turn it on again today, as the crane inspection is tomorrow and we plan to keep the VEA a laser-safe area so the inspection can proceed quickly.
But what is the next step? If these diode temps maximize the power output of the NPRO, then it isn't a good idea to raise the TEC setpoints further, so should I just turn it on again with the same settings?
I did not turn the HEPA down on the PSL enclosure. I also turned off the NPROs at EX and EY, so now all four 1064 nm lasers in the VEA are OFF (for crane inspection).
Quote: |
locked PMC at 1900 PT; let's see how long it lasts.
My hunch is that the TECs are working too hard and can't offload the heat onto the heat sinks. As the diode's degrade, more of the electrical power is converted to heat in the diodes rather than 808 nm photons. So hopefully the increased airflow will help
|
|
|
Attachment 1: Screenshot_from_2019-05-09_19-49-29.png
|
|
14601
|
Fri May 10 13:00:25 2019 |
Chub | Update | General | crane inspection complete | The 40M jib cranes all passed inspection! |
Attachment 1: 20190510_110245.jpg
|
|
14602
|
Fri May 10 15:18:04 2019 |
gautam | Update | PSL | Some work on/around PSL table |
- In anticipation of installing the new fan on the PSL, I disconnected the old fan and finally removed the bench power supply from the top shelf.
- Moved said bench supply to under the south-west corner of the PSL table.
- Installed the temporary Acromag crate, now with two ADC cards, under the PSL table and hooked it up to the bench supply (+15 VDC). Also ran an ethernet cable from 1X3 to the box over the overhead cable tray and connected it.
- Brought the other end of the 25-pin D-sub cable used to monitor the NPRO diagnostics channels from 1X4/1X5 to the PSL table. Rolled the excess length up and cable-tied it; the excess is sitting on top of the PSL enclosure. Key parts of the setup are shown in Attachments #1-3. This is not an ideal setup and is only meant to get us through to the install of the new c1psl/c1ioo Acromag crate.
- Edited the modbus config file at /cvs/cds/caltech/target/c1psl2/npro_config.cmd to add Jon's new ADC card to the list.
- Edited EPICS database file at /cvs/cds/caltech/target/c1psl2/psl.db to add entries for the C1:PSL-FSS_RMTEMP and C1:PSL-PMC_PMCTRANSPD channels.
- Hooked up said channels to the physical ADC inputs via a DB15 cable and breakout board on the PSL table.
CH0 --- FSS_RMTEMP (Pins 5/18 of the DB25 connector on the interface box to pins 1/9 of the Acromag DB15 connector)
CH1 --- PMC TRANS (BNC cable from PD to pomona minigrabber to pins 2/10 of the Acromag DB15 connector)
CH2-6 are currently unused and are available via the DB15 breakout board shown in Attachment #3. CH7 is not connected at the time of writing.
The pin-out for the temperature sensor interface box may be found here. Restarted the modbus process. The channels are now being recorded, see Attachment #4, although when checking the status of the modbus process I get an error message (see below); not sure what that's about.
So now we can monitor both the temperature of the enclosure (as reported by the sensor on the PSL table) and the NPRO diagnostics channels. The new fan for the controller has not been installed yet, because we don't have a good mounting solution for the new fans, all of which have a bigger footprint than the installed one. But since the laser isn't running right now, this is probably okay.
● modbusPSL.service - ModbusIOC Service via procServ
Loaded: loaded (/etc/systemd/system/modbusPSL.service; disabled)
Active: active (running) since Fri 2019-05-10 13:17:54 PDT; 2h 13min ago
Process: 8824 ExecStop=/bin/kill -9 ` cat /run/modbusPSL.pid` (code=exited, status=1/FAILURE)
Main PID: 8841 (procServ)
CGroup: /system.slice/modbusPSL.service
├─8841 /usr/bin/procServ -f -L /home/controls/modbusPSL.log -p /run/modbusPSL.pid 8009 /cvs/cds/rtapps/epics-3.14.12.2_long/module...
├─8842 /cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/bin/linux-x86_64/modbusApp /cvs/cds/caltech/target/c1psl2/npro_config.c...
└─8870 caRepeater
May 10 13:17:54 c1auxex systemd[1]: Started ModbusIOC Service via procServ.
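For reference, the added psl.db entries would look something like this (a sketch only - the asyn port name, register offset, and calibration fields here are guesses, not the values actually used):

record(ai, "C1:PSL-PMC_PMCTRANSPD")
{
    field(DTYP, "asynInt32")
    field(INP,  "@asyn(C0,1)MODBUS_DATA")  # ADC CH1, per the mapping above
    field(LINR, "LINEAR")
    field(EGUF, "10.0")                    # assuming +/-10 V full scale
    field(EGUL, "-10.0")
    field(SCAN, "1 second")
}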
|
Attachment 1: IMG_7427.JPG
|
|
Attachment 2: IMG_7428.JPG
|
|
Attachment 3: IMG_7429.JPG
|
|
Attachment 4: newPSLAcro.png
|
|
14603
|
Fri May 10 18:24:29 2019 |
gautam | Update | NoiseBudget | aligoNB | I pulled the aligoNB git repo to /ligo/GIT/aligoNB/aligoNB. There isn't a reqs.txt file in the repo, so installing the dependencies on individual workstations to get this running is a bit of a pain. I found the easiest thing to do was to set up a virtual environment for the python3 stuff; this way we can run python2 for the cdsutils package (hopefully that gets updated soon). I'm setting up a C1 directory in there; the plan is to budget some subsystems like Oplev and ALS for now, and develop the code for the eventual IFO locking. As a test, I ran the H1 noise budget (./aligonb H1) - it works, so it looks like I got all the dependencies. |
14604
|
Sat May 11 11:48:54 2019 |
Jon | Update | PSL | Some work on/around PSL table | I took a look at the error being encountered by the modbusPSL service. The problem is that the /run/modbusPSL.pid file is not being generated by procServ, even though the -p flag controlling this is correctly set. I don't know the reason for this, but it was also a problem on c1vac and c1susaux. The solution is to remove the custom kill command (ExecStop=...) and just allow systemd to stop it via its default internal kill method; a sketch of the corrected unit file follows the status output below.
● modbusPSL.service - ModbusIOC Service via procServ
Loaded: loaded (/etc/systemd/system/modbusPSL.service; disabled)
Active: active (running) since Fri 2019-05-10 13:17:54 PDT; 2h 13min ago
Process: 8824 ExecStop=/bin/kill -9 ` cat /run/modbusPSL.pid` (code=exited, status=1/FAILURE)
Main PID: 8841 (procServ)
CGroup: /system.slice/modbusPSL.service
├─8841 /usr/bin/procServ -f -L /home/controls/modbusPSL.log -p /run/modbusPSL.pid 8009 /cvs/cds/rtapps/epics-3.14.12.2_long/module...
├─8842 /cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/bin/linux-x86_64/modbusApp /cvs/cds/caltech/target/c1psl2/npro_config.c...
└─8870 caRepeater
May 10 13:17:54 c1auxex systemd[1]: Started ModbusIOC Service via procServ.
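For reference, the fixed unit file would look roughly like this - the paths are copied from the status output above, and the truncated modbusApp command line is left as a placeholder:

[Unit]
Description=ModbusIOC Service via procServ

[Service]
ExecStart=/usr/bin/procServ -f -L /home/controls/modbusPSL.log \
          -p /run/modbusPSL.pid 8009 <modbusApp command as above>
# no ExecStop: systemd's default kill behavior stops procServ cleanly

[Install]
WantedBy=multi-user.target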
|
|
14605
|
Mon May 13 10:45:38 2019 |
gautam | Update | PSL | PSL turned ON again | I used some double-sided tape to attach a San Ace 60 9S0612H4011 to the Innolight controller (Attachment #1). This particular fan is rated to run with up to 13.8V, but I'm using a +15V Sorensen output - at best, this shortens the lifespan of the fan, but I don't have a better solution for now. Then I turned the laser on again (~1040 local time), using the same settings Rana configured earlier in this thread. PMC was locked, and the IMC also could be locked but I closed the shutter for now while the laser frequency/intensity stabilizes after startup. The purpose is to facilitate completion of the pre-vent alignment checklist in prep for the planned vent tomorrow. PMC Trans reports 0.63 after alignment was optimized, which is ~15% lower than in Oct 2016. |
Attachment 1: IMG_7431.JPG
|
|
14606
|
Mon May 13 18:48:32 2019 |
gautam | Update | General | Vent prep |
- c1auxey and c1aux VME crates were keyed.
- EX and EY NPROs were turned on.
- Y arm was aligned to the IR - best effort TRY ~0.75.
- EY green was aligned to the Y arm cavity. The spot is on the lower right quadrant on the CCD monitor, but GTRY ~0.35.
- #3 and #4 were repeated for XARM.
- All beams were centered on the Oplev and IP POS QPDs with this reference alignment - see Attachment #1. SOS optic and TT DC bias positions were saved to burt snap files.
- I've never really used it but I updated all the SUS "driftmon" values - Attachment #2.
- Power going into the IMC was cut from 945 mW to 100 mW (both numbers measured with FieldMate power meter) by rotating the HWP installed last time for this purpose from 244 degrees (OLD) to 208 degrees (NEW). There was no beam dump for the reflected port of the PBS used to cut power, so I installed one, see Attachment #4.
- The T=90% BeamSplitter in the MC REFL path was replaced with a 2" HR mirror as is the norm for the low power IMC locking. Alignment of the MC REFL beam onto the MC REFL PD was tweaked.
- The init.d file was edited and the MCautolocker initctl process was restarted on megatron to adopt the low-power settings. It was locked, MCT ~1350 counts, see Attachment #3. Also adjusted the threshold level (above which the slow PID offloading of the FSS PZT voltage engages) from 10000 to 1000.
I believe this completes the non-Chub portions of the pre-vent checklist, we will start letting air into the main volume ASAP tomorrow morning after crossing off the remaining items.
Main goal of this vent is to investigate the oddness of the YARM suspensions. I leave the PSL NPRO on overnight in the interest of data gathering, it's been running ~10 hrs now - I suspect it'll turn itself off before we are ready to vent in the AM. |
Attachment 1: ventPrep_20190514.png
|
|
Attachment 2: driftMon_20190514.png
|
|
Attachment 3: lowPowIMC.png
|
|
Attachment 4: IMG_7434.JPG
|
|
14607
|
Tue May 14 10:35:58 2019 |
gautam | Update | General | Vent underway |
- PSL had stayed on overnight. There was an EQ (M 4.6 near Costa Rica) which showed up on the Seis BLRMS, and I noticed that several optics were reporting Oplev spots off their QPDs (I had just centered these yesterday). So I did a quick alignment check:
- IMC was readily locked
- After moving test mass bias sliders to bring Oplev spots back to the center, the EX and EY green beams were readily locked to a TEM00 mode
- IR flashes could be seen in TRX and TRY (though their levels are low, since we are operating with 1/10th the nominal power).
- The IP-POS QPD channels were reporting a "segmentation fault" so I keyed the c1iscaux crate and they came back. Still the QPD was reporting a low SUM value, but this too is because of the lower power. Conveniently, there was an ND2.0 filter in the beam path on a flip mount which I just flipped out of the way for the low-power tracking.
- Then, PSL and green shutters were closed and Oplev loops were disengaged.
- Checked that we have an RGA scan from today
- During the walkthrough to check the jam nuts, Chub noticed that the outer nuts on the bellows between the OMC chamber and the IMC chamber were loose to the finger! He is tightening them now and checking the remaining jam nuts. AFAIK, Steve made it sound like this was always a formality. Should we be concerned? The other jam nuts are fine according to Chub.
- We valved off the pumpspool from the main volume and annuli, and started letting Nitrogen into the main volume at ~1045am.
- Started letting instrument grade air into the main volume at ~1130am. We are aiming for a pressure increase of 3 torr/min
- 4 cylinders of dry air were exhausted by ~3:30pm. It actually looks like we over-pressured the main volume by ~20 torr - this is bad; we should've stopped the air inletting at 700 psi and then let it equilibrate to lab air pressure.
- At some point during the vent, the main volume pressure exceeded the working range of the cold cathode gauge CC1. It reports "Current Fail" on its LED display, which I assume means it auto-shut off its HV to protect itself; Jon tells me the vacuum code isn't responsible for initiating any manual shutoff.
- A new vacuum state was added to reflect these conditions (pumpspool under vacuum, main volume at atmosphere).
- The annuli remain under vacuum for now. Tomorrow, when we remove the EY door, we will vent the EY annulus.
IMC was locked, MC2T ~ 1200 cts after some alignment touch-ups. The test mass oplevs indicate some drift, ~100 urad. I didn't realign them.
The EY door removal will only be done tomorrow. I will take some free-swinging ETMY data today (suspension was kicked at 1241919438) to see if anything has changed (it shouldn't have). I need to think up a systematic debugging plan in the meantime. |
Attachment 1: vent.png
|
|
Attachment 2: Screenshot_from_2019-05-14_16-35-16.png
|
|
14608
|
Wed May 15 00:40:19 2019 |
gautam | Update | SUS | ETMY diagnosis plan | I collected some free-swinging data earlier this evening. There are still only 3 peaks visible in the ASDs; see Attachment #1.
Plan for tomorrow:
TBH, I don't have any clear ideas as to what we are supposed to do to fix the problem (or even what the problem is). So here is my plan for now:
- Take pictures of relative position of magnet and OSEM coil for all five coils
- Inspect positions of all EQ stops - back them well out if any look suspiciously close
- Inspect suspension wire for any kinks
- Inspect position of suspension wire in standoff
I anticipate that these will throw up some more clues. |
Attachment 1: ETMY_sensorSpectra.pdf
|
|
14609
|
Wed May 15 10:56:47 2019 |
gautam | Update | PSL | PSL turned ON again | To test the hypothesis that it was the fan replacement (rather than the increased HEPA airflow) that affected the NPRO shutoff phenomenon, I turned the HEPA on the PSL table down to the nominal 30% setting at ~10am.
Tomorrow I will revert the laser crystal temperature to whatever the nominal value was. If the NPRO runs in that configuration (i.e. the only change from March 2019 are the diode TEC setpoints and the new fan on the back of the controller), then hurray. |
14610
|
Wed May 15 10:57:57 2019 |
gautam | Update | SUS | EY chamber opened | [chub, gautam]
- Vented the EY annulus.
- Took the heavy door off, put it on the wooden rack, put a light door on at ~11am.
|
14611
|
Wed May 15 17:46:24 2019 |
gautam | Update | SUS | ETMY inspection | I setup the usual mini-cleanroom setup around the ETMY chamber. Then I carried out the investigative plan outlined here.
Main finding: I saw a fiber of what looks like first contact on the bottom left (as viewed from the HR side) of ETMY, connecting the optic to the cage. See Attachment #1. I don't know that this can explain the problem with the missing eigenmode - it's not a hard constraint - but it seems like something that should be addressed in any case. How do we want to remove it? Just use tweezers and pull it off, or apply a larger FC patch and then pull it off? I'm pretty sure it's first contact and not a piece of PEEK mesh, because I can see it is adhered to the HR side of the optic, but I couldn't capture that detail in a photo.
There weren't any obvious problems with the magnet positioning inside the OSEMs, or with the suspension wire. All the EQ stop tips were >3 mm away from the optic.
I also backed out the bottom EQ stops on the far (south) side of the optic by ~2 full turns of the screw. Taking another free-swinging dataset now to see if anything has changed. I will upload all the photos I took, with annotations, to gPhotos later this evening. Light doors back on at ~1730.
Update 10pm: the photos have been uploaded. I've added a "description" to each photo which should convey the message of that particular shot; it shows up in my browser on the bottom left of the photo, but can also be accessed by clicking the "info" icon. Please have a look and comment if something sticks out as odd / requires correction.
Update 1045pm: I looked at the freeswinging data from earlier today. Still only 3 peaks around 1 Hz.
The following optics were kicked:
ETMY
Wed May 15 17:45:51 PDT 2019
1242002769 |
Attachment 1: firstContactFiber.JPG
|
|
Attachment 2: ETMY_sensorSpectra.pdf
|
|
14612
|
Wed May 15 19:36:29 2019 |
Koji | Update | SUS | ETMY inspection | A pair of tweezers is OK as long as there are no magnets around. You need to (somewhat) constrain the mirror with the EQ stops so that you can pull the fiber without dragging the mirror. |
14613
|
Thu May 16 13:07:14 2019 |
gautam | Update | SUS | First contact residue removal | I used a pair of tweezers to remove the stray fiber of first contact. As Koji predicted, this was rather dry and so it didn't have the usual elasticity, so while I was able to pull most of it off, there is a small spot remaining on the HR surface of the ETM. We will remove this with a fresh application of a small patch of FC.
In the meantime, I'm curious whether this has actually fixed the suspension woes, so yet another round of free-swinging data collection is ongoing. From the first 5 mins it looks positive - I see 4 peaks around 1 Hz!
The following optics were kicked:
ETMY
Thu May 16 13:06:39 PDT 2019
1242072418
Update 7:30pm: There are now four well-defined peaks around 1 Hz. Together with the bounce and roll modes, that makes six. The peak at 0.92 Hz, which I believe corresponds to the YAW eigenmode, is significantly lower than the other three. I want to get some info about the input matrix, but there was an NDS dropout and large segments of data aren't available using the python nds fetch method, so I am trying again; kicked ETMY at 1828 PDT. It may be that we could benefit from some adjustment of the OSEM positions - the coupling of the bounce mode to LL is high. Also, the SIDE/POS resonances aren't obviously deconvolved. The stray first contact has to be removed too. But overall I think it was a successful removal, and the suspension characteristics are more in line with what is "expected". |
Attachment 1: etmy_sensors.pdf
|
|
Attachment 2: etmy_BRmode.pdf
|
|
14614
|
Thu May 16 22:58:25 2019 |
gautam | Update | ASS | In air ASS test with green? | We were wondering yesterday whether we can somehow test the ASS system in air. Though the arm cavity can be locked with the low-power IMC transmission, I think the dither would render the POY lock unstable. But I wonder if we can use the green beam for a test. The steering PZTs installed by Yuki can serve the role of TT1/TT2, and we can dither the arm cavity mirrors while the green TEM00 mode is locked to the arm, no problem. This would at least give us confidence that the actuation of ETMY/ITMY is okay (in addition to the other suspension tests). Then on the sensing side, after pumping down, the only thing we'd be foiled by is in-vacuum clipping or some major gunk on ETMY - everything else should be de-buggable even after pumping down?
I think most of the CDS infrastructure for this is already in place. |
14615
|
Thu May 16 23:31:55 2019 |
gautam | Update | SUS | ETMY suspension characterization | Here is my analysis. I think there are still some problems with this suspension.
Attachment #1: Time domain plots of the ringdown. The LL coil has a peak response ~half that of the other face OSEMs. I checked that the signal isn't being railed; the lowest level is > 100 cts.
Attachment #2: Complex TFs from UL to the other coils. While there are four peaks now, looking at the phase information it isn't possible to clearly disentangle PIT or YAW motion - in fact, for all peaks, there are at least three face shadow sensors which report the same phase. The gains are also pretty poorly balanced - e.g. for the 0.77 Hz peak, the magnitude of UR->UL is ~0.3, while LR->UL is ~3. Is it reasonable that there is a factor of 10 imbalance?
Attachment #3: Nevertheless, I assumed the following mapping of the peaks (the quoted f0 is from a Lorentzian fit) and attempted to find the input matrix that best converts the Sensor basis into the Euler basis.
DoF | f0 [Hz]
POS | 1.004
PIT | 0.771
YAW | 0.920
SIDE | 0.967
Unsurprisingly, the elements of this matrix are very different from unity (I have to fix the normalization of the rows).
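Schematically, the diagonalization step is just a pseudo-inverse of the measured sensing matrix - the numbers below are placeholders for the signed peak responses, not the measured values:

import numpy as np

# Rows: OSEMs (UL, UR, LR, LL, SD); columns: eigenmodes (POS, PIT, YAW, SIDE).
# Entries are the signed peak responses read off the complex TFs (placeholders).
A = np.array([[ 1.0,  1.0,  1.0,  0.1],
              [ 1.0,  1.0, -1.0,  0.1],
              [ 1.0, -1.0, -1.0,  0.1],
              [ 1.0, -1.0,  1.0,  0.1],
              [ 0.2,  0.0,  0.0,  1.0]])

M = np.linalg.pinv(A)                       # input matrix: sensor -> Euler basis
M /= np.abs(M).sum(axis=1, keepdims=True)   # one possible row normalization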
Attachment #4: Pre and post diagonalization spectra. The null stream certainly looks cleaner, but then again, this is by design so I'm not sure if this matrix is useful to implement.
Next steps:
- Repeat the actuator diagonality test detailed here.
- ???
In case anyone wants to repeat the analysis, the suspension was kicked at 1828 PDT today and this analysis uses 15000 seconds of data from then onwards.
Update 18 May 3pm: Attachment #5 is a better presentation of the data shown in Attachment #2; the remark about the odd phasing of the coils is seen more clearly in this zoomed-in view. Attachment #6 shows Lorentzian fits to the peaks - the Qs are comparable to those seen for the other optics, although the Q of the 0.77 Hz peak is rather low. |
Attachment 1: ETMY_sensors_timeDomain.pdf
|
|
Attachment 2: ETMY_cplxTF.pdf
|
|
Attachment 3: matrixDiag.png
|
|
Attachment 4: ETMY_diagComp.pdf
|
|
Attachment 5: ETMY_cplxTF.pdf
|
|
Attachment 6: ETMY_pkFitNaive.pdf
|
|
14617
|
Fri May 17 10:57:01 2019 |
gautam | Update | SUS | IY chamber opened | At ~9:30am, I vented the IY annulus by opening VAEV. I checked the particle count; it seemed within the guidelines to allow door opening, so I went ahead and loosened the bolts on the ITMY chamber.
Chub and I took the heavy door off with the vertex crane at ~1015am, and put the light door on.
Diagnosis plan is mainly inspection for now: take pictures of all OSEM/magnet positionings. Once we analyze those, we can decide which OSEMs we want to adjust in the holders (if any). I shut down the ITMY and SRM watchdogs in anticipation of in-chamber work.
Not related to this work: Since the annuli aren't being pumped on, the pressure has been slowly rising over the week. The unopened annuli are still at <1 torr, and the PAN region is at ~2 mtorr. |
14620
|
Fri May 17 17:01:08 2019 |
gautam | Update | SUS | ETMY suspension characterization | To investigate my mapping of the eigenfrequencies to eigenmodes, I checked the Oplev spectra for the last few hours, when the Oplev spot has been on the QPD (but the optic is undamped).
- Based on Attachment #1, I can't figure out which peak corresponds to what motion.
- The most prominent peak (judged by peak height) is at 0.771 Hz for both PITCH and YAW
- Assuming the peak at 0.92 Hz is the other angular mode, the PIT/YAW decoupling is poor in both peaks, only ~factor of 2 in both cases.
- Why are the POS and SIDE resonances sensed so asymmetrically in the PIT and YAW channels? There's a factor of 10 difference there...
So, while I conclude that my first-contact residue removal removed a constraint from the system (hence the pendulum dynamics now show the expected 6 eigenmodes), more thought is needed in judging the appropriate course of action. |
Attachment 1: etmy_oplevs.pdf
|
|
14621
|
Sat May 18 12:19:36 2019 |
Kruthi | Update | | CCD calibration and telescope design | I went through all the elog entries related to CCD calibration. I was wondering if we can use Spectralon diffuse reflectance standards (https://www.labsphere.com/labsphere-products-solutions/materials-coatings-2/targets-standards/diffuse-reflectance-standards/diffuse-reflectance-standards/) instead of a white paper as they would be a better approximation to a Lambertian scatterer.
Telescope design:
On calculating the accessible u-v ranges and the % error in magnification (more precisely, % deviation), I got a % deviation of order 10, and in some cases of order 100 (Attachments 1 to 4), which matches Pooja's calculations. But I'm not able to reproduce Jigyasa's % error calculations, where the % error is of order 10^-1; I couldn't find the code she used for these calculations, and I have emailed her about it. We can still image with the 150-250 mm combination proposed by Jigyasa, but I don't think it ensures maximum usage of the pixel array. Also, for this combination the resulting conjugate ratio will be greater than 5, so the use of plano-convex lenses will reduce spherical aberrations. I also explored other focal length combinations, such as 250-500 mm and 500-500 mm. In these cases both lenses will have f-numbers greater than 5, but the conjugate ratios will be less than 5, so biconvex lenses will be a better choice.
Constraints: available lens tube length (max value of d) = 3" ; object distances range (u) = 70 cm to 150 cm ; available cylindrical enclosures (max value of d+v) are 52cm and 20cm long (https://nodus.ligo.caltech.edu:8081/40m/13000).
I calculated the resultant image distance (v) and the required distance between the lenses (d) for fixed magnifications (m = -0.06089 and m = -0.1826 for imaging the test masses and the beam spot respectively) and different values of u. This way we can ensure that no pixels are wasted. The focal length combinations 300-300 mm (for imaging the beam spot) and 100-125 mm (for imaging the test masses) were the only ones that gave all-positive values for d and v over the given range of u (Attachments 5-6). But here d ranges from 0 to 30 cm in the first case, which exceeds the available lens tube length. Also, in the second case the f-numbers will be less than 5 for 2" lenses, and may thus result in spherical aberration.
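A minimal sketch of that calculation for the 300-300 mm pair and the beam-spot magnification quoted above - it scans lens separations and reports the d that best matches the target m at each u:

import numpy as np

def two_lens(u, d, f1, f2):
    # thin-lens trace: object u [m] before lens 1, lens separation d [m];
    # returns image distance from lens 2 and overall magnification
    v1 = 1.0 / (1.0 / f1 - 1.0 / u)    # real-object thin-lens equation
    u2 = d - v1                        # object distance for lens 2 (can be < 0)
    v2 = 1.0 / (1.0 / f2 - 1.0 / u2)
    return v2, (v1 / u) * (v2 / u2)    # m = m1 * m2, with m_i = -v_i / u_i

f1 = f2 = 0.300                        # 300-300 mm pair
m_target = -0.1826                     # beam-spot magnification from above
for u in np.linspace(0.70, 1.50, 9):
    d_scan = np.linspace(0.0, 0.30, 3001)
    v, m = two_lens(u, d_scan, f1, f2)
    best = np.argmin(np.abs(m - m_target))
    print(f"u={u:.2f} m: d={d_scan[best]*100:5.1f} cm, "
          f"v={v[best]*100:5.1f} cm, m={m[best]:+.4f}")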
All this fuss about f-numbers, conjugate ratios, and plano-convex/biconvex lenses is to reduce spherical aberrations. But how much will spherical aberrations affect our readings?
We have two 2" biconvex lenses of 150mm focal length and one 2" biconvex lens of focal length 250mm in stock. I'll start off with these and once I have a metric to quantify spherical aberrations we can further decide upon lenses to improve the telescopic lens system. |
Attachment 1: 15-25.png
|
|
Attachment 2: 25-25.png
|
|
Attachment 3: 25-50.png
|
|
Attachment 4: 50-50.png
|
|
Attachment 5: 30-30_for_1%22.png
|
|
Attachment 6: 10-12.5_for_3%22.png
|
|
14623
|
Mon May 20 11:33:46 2019 |
gautam | Update | SUS | ITMY inspection | With Chub providing illumination via the camera viewport, I was able to take photos of ITMY this morning. All the magnets look well clear of the OSEMs, with the possible exception of UR. I will adjust the position of this OSEM slightly. To test if this fix is effective, I will then cycle the bias voltage to the ITM between 0 and the maximum allowed, and check if the optic gets stuck. |
14625
|
Mon May 20 17:12:57 2019 |
gautam | Update | SUS | ETMY LL adjustment | Following the observation that the response in the LL shadow sensor was lower than that of the others, I decided to pull it out a little to move the signal level with nominal DC bias voltage applied was closer to half the open-voltage. I also chose to rotate the SIDE OSEM by ~20 degrees CCW in its holder (viewed from the south side of the EY chamber), to match more closely its position from a photo prior to the haphazhard vent of the summer of 2018. For the SIDE OSEM, the theoretical "best" alignment in order to be insensitive to POS motion is the shadow sensor beam being horizontal - but without some shimming of the OSEM in the holder, I can't get the magnet clear of the teflon inside the OSEM.
While I was inside the chamber, I attempted to minimize the Bounce/Roll mode coupling to the LL and SIDE OSEM channels, by rotating the Coil inside the holder while keeping the shadow sensor voltage at half-light. To monitor the coupling "live", I set up DTT with 0.3 Hz bandwidth and 3 exponentially weighted averages. For the LL coil, I went through pi radians of rotation either side of the equilibrium, but saw no significant change in the coupling - I don't understand why.
In any case, this wasn't the most important objective so I pushed ahead with recovering half-light levels for all the shadow sensors and closed up with the light doors. I kicked the optic again at 1712:14 PDT, let's see what the matrix looks like now.
Before starting this work, I had to key the unresponsive c1auxey VME crate. |
14626
|
Mon May 20 21:45:20 2019 |
Milind | Update | | Traditional cv for beam spot motion | Went through all of Pooja's elog posts and her report; I am currently cleaning up her code and working on setting up the simulations of spot motion from her work last year. I've also just begun to look at some material Gautam sent on resonators.
This week, I plan to do the following:
1) Review Gabriele's CNN work for beam spot tracking and get his code running.
2) Since the relation between the angular motion of the optic and the beam spot motion can be determined theoretically, I think a neural network is not mandatory for tracking the beam spot motion. I strongly believe that a more traditional approach, such as thresholding followed by a Hough transform, ought to do the trick, as the contours of the beam spot are circles. I tried a quick and dirty implementation today using OpenCV (a minimal sketch appears after this list) and ran into the problem of either no detection or detection of spurious circles (the number of which decreased with increased application of median blur). I will defer a more careful analysis until step (1) is done, as Gautam has advised.
3) Clean up Pooja's code on beam tracking and obtain the simulated data.
4) Also, data like this (https://drive.google.com/file/d/1VbXcPTfC9GH2ttZNWM7Lg0RqD7qiCZuA/view) is incredibly noisy. I will look up some standard techniques for cleaning such data, though I'm not sure the impact can be measured until I figure out an algorithm to track the beam spot.
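For concreteness, a minimal sketch of the thresholding + Hough pipeline tried in (2) - the filename and all tuning parameters here are placeholders:

import cv2
import numpy as np

frame = cv2.imread("beam_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
frame = cv2.medianBlur(frame, 5)               # suppress speckle before Hough

circles = cv2.HoughCircles(
    frame, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
    param1=100,    # Canny upper threshold
    param2=30,     # accumulator threshold: lower -> more (spurious) circles
    minRadius=5, maxRadius=200)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)   # strongest circle
    print(f"beam spot centre ~ ({x}, {y}), radius ~ {r} px")
else:
    print("no circle found - tune param2 / blur kernel")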
A more interesting question Gautam raised was the validity of using the beam spot motion for detection of angular motion in the presence of other factors such as surface irregularities. Another question is the relevance of using the beam spot motion when the oplevs are already in place. It is not immediately obvious to me how I can ascertain this, and I will put more thought into it. |
14627
|
Mon May 20 22:06:07 2019 |
gautam | Update | SUS | ITMY also kicked | For good measure:
The following optics were kicked:
ITMY
Mon May 20 22:05:01 PDT 2019
1242450319 |
14628
|
Tue May 21 00:15:21 2019 |
gautam | Update | SUS | Main objectives of vent achieved (?) | Summary:
- ETMY now shows four suspension eigenmodes, with sensible phasing between signals for the angular DoFs. However, the eigenfrequencies have shifted by ~10% compared to 16 May 2019.
- PIT and YAW for ETMY as witnessed by the Oplev are now much better separated.
- ITMY can have its bias voltage set to zero and back to nominal alignment without it getting stuck.
- The sensing matrix for ETMY that I get doesn't make much sense to me. Nevertheless, the optic damps even with the "naive" input matrix.
So the primary vent objectives have been achieved, I think.
Details:
- ETMY free-swinging data after adjusting LL and SIDE coils such that these were closer to half-light values
- Attachment #1 - oplev witnessing the angular motion of the optic. PIT and YAW are well decoupled.
- Attachment #2 - complex TF between the suspension coils. There is still considerable imbalance between coils, but at least the phasing of the signals make sense for PIT and YAW now.
- Attachment #3 - DoFs sensed using the naive and optimized sensing matrices.
- Attachment #4 - sensing matrix that the free swinging data tells me to implement. If the local damping works with the naive input matrix but we get better diagonality in the actuation matrix, I think we may as well stick to the naive input matrix.
- BR mode coupling minimization:
- As alluded to in my previous elog, I tried to reduce the bounce mode coupling into the shadow sensor by rotating the OSEM in its holder.
- However, I saw negligible change in the coupling, even going through a full pi radians of rotation. I imagine the coupling changes smoothly, so we should have seen some change in one of the ~15 positions I sampled in between, but I saw none.
- The anomalously high coupling of the bounce mode to the shadow sensor readout is telling us something - I'm just not sure what yet.
- ITMY:
- The offender was the LL OSEM, whose rotational orientation was causing the magnet to get stuck to the teflon part of the OSEM coil when the bias voltage was changed by a sufficiently large amount.
- I rectified this (required adjustment of all 5 OSEMs to get everything back to half light again).
- After this, I was able to zero the bias voltage to the PIT/YAW DoFs and not have the optic get stuck - huzzah 😀
- While I have the chance, I'm collecting the free-swinging data to see what kind of sensing matrix this optic yields.
Tomorrow and later this week:
- Prepare ETMY for first contact cleaning to remove the residual piece.
- Drag wipe the HR surface with dehydrated acetone
- Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
- This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
- Confirm ETMY actuation makes sense.
- Use the green beam for an ASS proxy implementation?
- High quality close out pictures of OSEMs and general chamber layout.
- Anything else? Any other tests we can do to convince ourselves the suspensions are well-behaved?
While we have the chance:
- Fix the IPANG alignment? Because the TT drift/hysteresis problem is still of unknown cause.
- Check that the AS beam is centered on OMs 1-6?
- Recover the 70% AS light that is being diverted to the OMC?
Unrelated to this work: megatron is responding to ping but isn't ssh-able. I also noticed earlier today that the IMC autolocker blinky wasn't blinking, so it probably requires a hard reboot. I have left the lab for tonight, so I'll reboot it tomorrow - no nds data access in the meantime... |
Attachment 1: etmy_oplevs_20190520.pdf
|
|
Attachment 2: ETMY_cplxTF.pdf
|
|
Attachment 3: ETMY_diagComp.pdf
|
|
Attachment 4: Screen_Shot_2019-05-21_at_12.37.08_AM.png
|
|
14629
|
Tue May 21 21:33:27 2019 |
gautam | Update | SUS | ETMY HR face cleaned | [koji, gautam]
We executed this plan. Photos are here. Summary:
- Optic was EQ-stopped (face stops only), with the OSEMs in situ. We tried to do this as evenly as possible to avoid any magnets getting stuck on OSEMs.
- We used the specially procured acetone from Chub to drag wipe the HR face. This was a definite improvement; we should always get the correct grade of solvents when we attempt to clean optics.
- It was observed that drag-wiping did not really have the desired cleaning effect, so Koji went in with a hemostat / lens tissue soaked in acetone and wiped the HR face. This improved the situation.
- Applied a layer of F.C. Waited for it to dry, and then peeled it off. Under the green flashlight, the optic still looks horrific - but we decided against further drag-wiping/first-contacting. If the loss is truly 50 ppm, this is totally not a show-stopper for now.
- Suspension cage was replaced. EQ stops were released. Bias voltages were adjusted to bring the Oplev spot back to the center of the QPD. Now a free-swinging data collection is ongoing...
The following optics were kicked:
ETMY
Tue May 21 22:58:18 PDT 2019
1242539916
So if nothing else, we successfully practised this new wiping technique with the OSEMs in situ.
Quote: |
- Prepare ETMY for first contact cleaning to remove the residual piece.
- Drag wipe the HR surface with dehydrated acetone
- Apply F.C. as usual, inspect the HR face after peeling for improvement if any.
- This will give us a chance to practise the F.C.ing with the optic EQ-stopped (moving cage etc).
|
|
14630
|
Wed May 22 11:53:50 2019 |
gautam | Update | SUS | ETMY EQ stops backed out | Yesterday we noticed that the POS and SIDE eigenmodes were degenerate (with 1 mHz spectral resolution). Moreover, the YAW peak had shifted down by ~500 mHz compared to earlier this week, although there was still good separation between PIT and YAW in the Oplev error signals. Ideas were (i) check if the EQ stops were not backed out sufficiently, and (ii) look for any fibers/other constraints in the system. This morning, I inspected the optic again. I felt the EQ stop viton tips were a bit close to the optic, so I backed them out further. Apart from this, I adjusted the LR and SIDE OSEM positions in their holders to bring the sensor voltages closer to half-light. Kicked the optic again just now; let's see if there is any change.
Remaining tasks:
- Check EY table leveling.
- Check EY actuation matrix diagonality using this technique.
- Check that IR resonances are seen (and all the usual pre-pumpdown alignment checks).
- Take close out pictures.
- Heavy doors on, pump down.
If everything goes smoothly, I think we should plan for the heavy doors going back on and commencing the pumpdown tomorrow. After discussion with Koji, we came to the conclusion that it isn't necessary to investigate IPANG (high likelihood of it falling off the steering optics during the pumpdown) / AS beam clipping (no strong evidence that this is a problem) for this vent.
Update 12:35: Indeed, the eigenmodes are back at their positions from earlier this week - in fact, the POS and SIDE modes are now better separated! So the OSEM/magnet and EQ-stop/optic interactions are non-negligible in the analysis of the dynamics of the pendulum. |
Attachment 1: ETMY_eigenmodes.pdf
|
|
14631
|
Wed May 22 22:50:13 2019 |
gautam | Update | VAC | Pumpdown prep | I did the following:
- Checked the ETMY OSEM sensing matrix and OSEM actuation matrix - more on this later, but everything seems much more reasonable than it was prior to this vent.
- Checked that the IMC could be locked with the low-power beam
- Aligned the Y-arm cavity using the green beam. Then tweaked the TT1/TT2 alignment until I saw IR flashes in TRY.
- Repeated #2 for the X arm, using the BS to control the beam pointing.
- Confirmed that the AS beam makes it out of the vacuum. It is only ~30uW in a large (~1cm dia) beam, so not the clearest spot on an IR card, but looks pretty clean, no evidence of clipping. I removed an ND filter on the AS port camera in order to better see the beam on the CRT monitor, this should be re-installed prior to ramping the input power to the IMC again.
- With the PRM aligned, I confirmed that I could see resonant flashes in the POP QPD.
- With the SRM aligned, I confirmed that I could see SRC cavity flashes on the AS camera.
I think this completes the pre-pumpdown alignment checks we usually do. The detailed plan for tomorrow is here: please have a look and lmk if I missed something. |
14632
|
Thu May 23 08:51:30 2019 |
Milind | Update | Cameras | Setting up beam spot simulation | I have done the following thus far since elog #14626:
Simulation:
- Cleaned up Pooja's code for simulating the beam spot. Added extensive comments and made the code modular. Simulated the Gaussian beam spot to exhibit
- Horizontal motion
- Vertical motion
- motion along both x and y directions:
- The motion exhibited in any direction in the above videos is a combination of four sinusoids at frequencies 0.2, 0.4, 0.1 and 0.3 Hz, with amplitudes that can be found as defaults in the script ((0.1, 0.04, 0.05, 0.08)×64 for these simulations). The variation looks as shown in Attachment 1. For convenience, I created the above video files with only a hundred frames (fps = 10, total time ~10 s), which took around 2.4 s to write; longer files need much longer. Since I wish to perform image processing on these frames immediately, I don't see the need to obtain long video files right away. (A minimal sketch of the frame generation follows this list.)
- I have yet to add noise at the image level and randomness to the motion itself; I intend to do that right away. Currently, video 3 shows that even though the time variation of the coordinates of the center of the beam is sinusoidal, the motion of the beam spot itself is along a line, as the x and y motions have the same phase. I intend to add the feature of a phase difference between the x and y motions of the beam center, but it doesn't seem all too important right now. The white margins in the generated videos are annoying and make tracking the beam spot slightly difficult, as they introduce an offset (see below). I shall fix them later if simple cropping doesn't do the trick.
- I have yet to push the code to git. I will do that once I've incorporated the changes in (3).
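A minimal sketch of the frame generation, using the frequencies/amplitudes quoted above (the frame size, centre, and spot width are illustrative):

import numpy as np

def gaussian_spot(nx=64, ny=64, x0=32.0, y0=32.0, sigma=6.0):
    # one frame: a 2D Gaussian beam spot centred at (x0, y0)
    y, x = np.mgrid[0:ny, 0:nx]
    return np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

fps, T = 10, 10                                   # 100 frames, as above
t = np.arange(0, T, 1.0 / fps)
freqs = np.array([0.2, 0.4, 0.1, 0.3])            # Hz
amps = 64 * np.array([0.1, 0.04, 0.05, 0.08])     # defaults from the script

# horizontal motion: centre x-coordinate is a sum of four sinusoids
x_c = 32 + sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))
frames = np.stack([gaussian_spot(x0=xc) for xc in x_c])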
Circle detection:
- If the beam spot intensity variation is indeed Gaussian (as it definitely is in the simulation), then the contours are circular. Consequently, centroid detection of the beam spot reduces to detecting these contours and then finding their centroid (center). I tried this for a simulated video I found in elog post 14005. It was a quick implementation of the following sequence of operations: threshold (arbitrarily set to 127), contour detection (video dependent and needs to be done manually), centroid determination from the required contour. It's evident that the beam spot is being tracked (green circle in the video); check Attachment #2 for the results. However, no other quantitative claims can be made in the absence of other data.
- Following this, Gautam pointed me to a capture in elog post 13908. Again, the steps mentioned in (1) were followed and the results are presented below in Attachment #3. However, this time the contour is no longer circular but distorted. I didn't pursue this further. This test was just done to check that this approach does extend (even if not seamlessly) to real data. I'm really looking forward to trying this with this real data.
- So far, the problem has been that there is no source data to compare the tracked centroid with. That ought to be resolved with the simulated data I've generated above. As mentioned before, some matplotlib features such as saving with margins introduce offsets in the tracked beam position; however, I expect to still be able to see the same sinusoidal motion. As a quick test, I'll obtain the FFT of the centroid position time series and check if the expected frequencies are present. (A sketch of the centroid extraction follows below.)
I will wrap up the simulation code today and proceed to going through Gabriele's repo. I will also test whether the contour detection method works with the simulated data. During our meeting, it was pointed out that when working with real data, care has to be taken to synchronize the data with the video obtained. However, I wish to put off working on that until later in the pipeline, as I think it doesn't affect the algorithm being used. I hope that's alright (?).
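A sketch of the threshold -> contour -> centroid step (OpenCV >= 4 return signature assumed; the filename and threshold are placeholders):

import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # one video frame
_, binary = cv2.threshold(frame, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

spot = max(contours, key=cv2.contourArea)     # beam spot = largest contour
M = cv2.moments(spot)
cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]
print(f"centroid: ({cx:.1f}, {cy:.1f})")      # FFT this over frames to check
                                              # for the drive frequencies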
|
Attachment 1: variation.pdf
|
|
Attachment 2: contours_simulated.mp4
|
Attachment 3: contours_real.mp4
|
14633
|
Thu May 23 10:18:39 2019 |
Kruthi | Update | Cameras | CCD calibration | On Tuesday, I tried reproducing Pooja's measurements (https://nodus.ligo.caltech.edu:8081/40m/13986). The table below shows the values I got. Pictures of the LED circuit, schematic and setup are attached. The powermeter readings fluctuated quite a bit for input voltages (Vcc) > 8V, so I expect a maximum uncertainty of 50 µW to be on the safe side. Though the readings at lower input voltages didn't vary much over time (variation < 2 µW), I don't know how reliable the Ophir powermeter is at such low power levels. The optical power output of the LED was linear for input voltages of 10V to 20V. I'll proceed with the CCD calibration soon.
| Input voltage (Vcc) in volts | Optical power |
| 0 (dark reading) | 1.6 nW |
| 2 | 55.4 µW |
| 4 | 215.9 µW |
| 6 | 0.398 mW |
| 8 | 0.585 mW |
| 10 | 0.769 mW |
| 12 | 0.929 mW |
| 14 | 1.065 mW |
| 16 | 1.216 mW |
| 18 | 1.330 mW |
| 20 | 1.437 mW |
| 22 | 1.484 mW |
| 24 | 1.565 mW |
| 26 | 1.644 mW |
| 28 | 1.678 mW |
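As a quick check of the quoted 10V-20V linearity, a first-order fit to the tabulated values (a small numpy sketch; the numbers are copied from the table above):

```python
import numpy as np

vcc = np.array([10, 12, 14, 16, 18, 20])                  # input voltage, V
p = np.array([0.769, 0.929, 1.065, 1.216, 1.330, 1.437])  # optical power, mW

slope, offset = np.polyfit(vcc, p, 1)                     # linear fit
resid = p - (slope * vcc + offset)
print(f'slope = {slope * 1e3:.0f} uW/V, '
      f'max |residual| = {np.abs(resid).max() * 1e3:.1f} uW')
```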


|
Attachment 1: setup.jpeg
|
|
Attachment 2: led_circuit.jpeg
|
|
Attachment 3: led_schematic.pdf
|
|
14634
|
Thu May 23 15:30:56 2019 |
gautam | Update | VAC | Pumpdown underway - so far so good! | [chub, koji, gautam]
- We executed the pre-pumpdown tasks per the checklist - heavy doors were on by ~1030am.
- We were thwarted by the display of c1vac becoming unresponsive - the mouse cursor moves, but we could not interact with any screens. Connecting to c1vac by ssh with the -X option, we could interact with everything. Using top, we saw that the load average was ~8 - pretty high! The most demanding processes were the modbus IOC and some python processes, presumably connected with the interlocks. We tried stopping the interlock systemctl process and kill -9ing the heavy processes, but to no avail. Next, we tried killing the X display process, but this also did not fix the problem. Finally, we did a soft reboot of c1vac - the machine came back up, but still no interactivity. So we moved asia, the EY laptop, to the vacuum station for this pumpdown. We will fix the situation once the vacuum is in the nominal state.
- The actual pumpdown commenced by first evacuating the EY and IY annular volumes with the roughing pump. There is an interlock condition that prevents V6 from being opened if the PRP gauge reports < 0.25 torr (this is to protect against oil backstreaming from the roughing pumps I believe). To get around this, we gave the roughing pumps some work by exposing the annular line to the atmospheric pressure of the EY and IY annuli. In a few minutes, both of these reported < 1 torr.
- Main volume pumping started around noon - we have been going down in pressure steadily at ~3 torr/min (Koji made a nice python utility that calculates the rate from the pressure channel; a sketch of the idea follows this list).
- At the time of writing, after ~3.5 hrs of pumping, we are at 25 torr. I will keep going till ~1 torr, and then valve off the main volume until tomorrow, when Chub and I will work on getting the turbo pumps exposed to the main volume. Pausing at 355pm while I go for the colloquium. Resumed later in the evening, stopping for today at 500 mtorr.
- In preparation for the increased load on TP2 and TP3, I spun them up to the "high RPM mode" from their nominal "Standby mode".
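For reference, a minimal sketch of how such a rate monitor could be scripted (I haven't looked at Koji's utility - this just assumes pyepics is available and polls the main volume pressure channel):

```python
import time
from epics import caget  # pyepics

CHAN = 'C1:Vac-CC1_pressure'
DT = 60  # seconds between samples

p_prev, t_prev = caget(CHAN), time.time()
while True:
    time.sleep(DT)
    p, t = caget(CHAN), time.time()
    rate = (p_prev - p) / ((t - t_prev) / 60.0)  # torr per minute
    print(f'{p:.3g} torr, pumping down at {rate:.2f} torr/min')
    p_prev, t_prev = p, t
```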
Close up photos of the EY and IY chambers may be found here.
Update on the display manager of c1vac: I was able to get it working again by running sudo systemctl restart display-manager. Now I can interact with the MEDM screens on c1vac. It is a bit annoying that this machine doesn't have the users directory so I don't have access to the many convenient StripTool templates though - maybe I'll make local copies tomorrow for the pumpdown. |
Attachment 1: pumpdownPres.png
|
|
14635
|
Thu May 23 15:37:30 2019 |
Milind | Update | Cameras | Simulation enhancements and performance of contour detection |
- Implemented image level noise for simulation. Added only uniform random noise.
- Implemented addition of uniform random noise to any sinusoidal motion of beam spot.
- Implemented motion along y axis according to data in "power_spectrum" file.
- Implemented simulation of random motion of the beam spot in both x and y directions (done previously by Pooja, but a cleaner version).
- Created a 10 s video file with the beam spot moving along the y direction as given by Attachment #1. The motion was created by mixing four sinusoids (frequencies 0.1, 0.2, 0.4, 0.8 Hz; amplitudes, as fractions of N = 64: 0.1, 0.09, 0.08, 0.09). FPS = 10; total number of frames = 100 for convenience. See Attachment #5.
- Following this, I used the thresholding (threshold = 127, chosen arbitrarily), contour detection and centroid computation sequence (see Attachment #6 for results, and the sketch after this list) to obtain the plot in Attachment #2 for the predicted motion of the y coordinate. As is evident, the centering and scale of the obtained values are off, and I still haven't figured out how to convert precisely from one to the other.
- Consequently, as a workaround, I normalised each series by subtracting its mean and dividing by its maximum. This produced the plots in Attachments #3 and #4, which show, respectively, the normalised y-coordinate variation and the error between the actual and predicted values (both now between 0 and 1).
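For concreteness, a self-contained sketch of the frame generation plus the threshold -> contour -> centroid sequence described above (OpenCV >= 4 assumed; N, FPS, the frequencies/amplitudes and the threshold of 127 are the values quoted in this entry, while the spot width sigma is illustrative):

```python
import cv2
import numpy as np

N, FPS, NFRAMES = 64, 10, 100
freqs = [0.1, 0.2, 0.4, 0.8]     # Hz, injected into the y-motion
amps = [0.1, 0.09, 0.08, 0.09]   # amplitudes as fractions of N
sigma = 3.0                      # beam-spot width in pixels (illustrative)
xx, yy = np.meshgrid(np.arange(N), np.arange(N))

def frame_at(t):
    """Render one 8-bit frame with the Gaussian spot displaced vertically."""
    yc = N / 2 + sum(a * N * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
    g = np.exp(-((xx - N / 2) ** 2 + (yy - yc) ** 2) / (2 * sigma ** 2))
    return (255 * g).astype(np.uint8)

track = []
for i in range(NFRAMES):
    img = frame_at(i / FPS)
    _, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)   # largest contour = beam spot
    m = cv2.moments(c)
    track.append((m['m10'] / m['m00'], m['m01'] / m['m00']))  # centroid (x, y)

print(track[:5])  # pixel coordinates - note the offset/scale relative to yc
```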
Things yet to be done:
Simulation:
- I will implement the mean square error function to compute the relative performance as conditions change.
- I will add noise both to the image and to the motion (i.e. introduce some randomness in the motion) to see how the performance, as judged both by curves such as the ones below and by the mean square error, changes.
- Following this, I will vary the standard deviation of the beam spot along X and Y directions and try to obtain beam spot motion similar to the video in Attachment #2 of elog post 14632.
- Currently, I have made no effort to carefully tune the parameters associated with contour detection and threshold and have simply used the popular defaults. While this has worked admirably in the case of the simple simulated videos, I suspect much more tweaking will be needed before I can use this on real data.
- It is an easy step to determine the performance of the algorithm for random, circular and other motions of the beam spot. However, I will defer this till later as I do not see any immediate value in this.
- Determine noise threshold. In simulation or with real data: obtain a video where the beam spot is ideally motionless (easy to do with simulated data) and then apply the above approach to the video and study the resulting predicted motion. In simulation, I expect the predictions for a motionless beam spot video (without noise) to be constant. Therefore, I shall add some noise to the video and study the prediction of the algorithm.
- NOTE: the above approach relies on some prior knowledge of what the video data will look like. This is useful in determining which contours to ignore, if any - like the four bright regions at the corners in this video.
Real data:
- Obtain real data and evaluate whether the algorithm succeeds in determining contours that can be used to track the beam spot.
- Once the kind of video feed this will be used on is decided, use the data generated from such a feed to determine what the best settings of hyperparameters are and detect the beam spot motion.
- Synchronization of data stream regarding beam spot motion and video.
- Determine the calibration: from angular motion of the optic, to beam spot motion on the camera sensor, to the pixel mapping in the video frames being processed.
Other approaches:
- Review work done by Gabriele with CNNs, implement it and then compare performance with the above method.
|
Attachment 1: actual_motion.pdf
|
|
Attachment 2: predicted_motion.pdf
|
|
Attachment 3: normalised_comparison.pdf
|
|
Attachment 4: residue_normalised.pdf
|
|
Attachment 5: simulated_motion1.mp4
|
Attachment 6: elog_22may_contours.mp4
|
14636
|
Fri May 24 11:47:15 2019 |
gautam | Update | VAC | IFO is almost at nominal vacuum | [chub, gautam]
Overnight, the pressure of the main volume rose by only 10 mtorr, so there was no need to run the roughing pumps again, and we went straight to the turbos - hooked up the AUX drypump and set it up to back TP2. Initially, we tried having both TP2 and TP3 back TP1, but the wimpy TP3 current kept exceeding the interlock threshold. So we decided to pump down with TP3 valved off and only TP2 backing TP1. This went smoothly - we had to keep an eye on P2 to make sure it stayed below 1 torr. It took ~1 hour to go from 500 mtorr to 100 mtorr, but after that I could almost immediately open up RV2 completely. A safe setting seems to be to have RV2 open by between 0.5 and 1 turn (out of the full range of 7 turns) until the pressure drops to ~100 mtorr; then we can crank it open. We are, at the time of writing, at ~8e-5 torr and the pressure is coming down steadily.
I had to manually clear the IG error on the CC1 gauge and re-enable the High Voltage, so that we have a readback of the main volume pressure in that range. I made a script to do the HV-enable part (the IG error still has to be cleared by pushing the appropriate buttons on the Hornet); it lives at /opt/target/python/serial/turnHornetON.py. I guess it'll take a few days to hit 8e-6 torr, but I don't see any reason not to leave the turbos running over the weekend.
Remaining tasks are (i) disconnect the roughing pump line and (ii) pump down the annuli, which will be done later today. Both were done at ~2pm, now we are in the vacuum normal config. I'll turn the two small turbos to run on "Standby Mode" before I head home today. I think TP3 may be close to end-of-life - the TP3 current went up to 1A even while evacuating the small volume of the annular line (which was already at 1 torr) with the AUX drypump backing it. The interlock condition is set to trip at 1.2A, and this pump is nominally supposed to be able to back TP1 during the pumpdown of the main volume from 500 mtorr, which it wasn't able to do.
|
Attachment 1: pumpdown_20190524.png
|
|
14637
|
Fri May 24 17:50:19 2019 |
gautam | Update | IOO | IFO recovery | At ~4pm, the main volume pressure (CC1) was reported to be ~5e-5 torr. So I replaced the HR mirror in the MC REFL path with the usual 10% beamsplitter, and aligned the beam onto MCREFL photodiode. I also replaced the ND filter on the AS port camera, and in front of the IPPOS QPD.
Then I turned up the power by HWP rotation - at the input to the IMC, I now measure 960 mW with the Coherent power meter, so the NPRO power has certainly decayed by ~10% since July 2018. The normal high-power IMC autolocker script was re-enabled on megatron (and the slow servo enable threshold raised from 1000 cts to 8000 cts). The IMC locked readily; after some hand alignment, I got a maximum of 14500 cts transmission. I was then able to lock the Y-arm. The dither alignment servo did not work with the nominal settings, but by hand alignment I was able to get TRY up to 0.6 (I didn't try to optimize this systematically). The X arm was also locked.
AUX drypump valved off and shutdown at ~610pm. I also switched both TP2 and TP3 to their lower rotation "standby" mode. So overall no major mishaps this time around. I am leaving the PSL shutter open over the long weekend. For in-air vs vacuum suspension spectra comparison, I kicked the ETMY optic at Fri May 24 18:26:10 PDT 2019. |
14638
|
Sat May 25 20:29:08 2019 |
Milind | Update | Cameras | Simulation enhancements and performance of contour detection |
- I used the same motion as defined in the previous elog and gradually added noise to the images. The noise added was uniform random noise - a 2-dimensional array of random numbers between 0 and a predetermined maximum (noise_amp). The previous elog shows the variation of the y coordinate; here I am also uploading the effect of noise on the error in the prediction of the x coordinate. As a reminder, the motion of the beam spot center was purely vertical. Attachment #1 is the error for noise_amp = 0, #2 for noise_amp = 20 and #3 for noise_amp = 40. While Attachment #3 gives the impression of a large error, this is not really the case: without normalization, each peak corresponds to a deviation of one pixel about the central value - see Attachment #4 for reference.
- While the error does increase marginally, adding noise has no significant effect on the prediction of the y coordinate of the centroid, as Attachment #5 shows at noise_amp = 40.
- I am currently running an experiment to obtain the variation of mean square error with different noise amplitudes and will put up the plots soon (a sketch of this measurement follows this list). Further, I shall vary the resolution of the image frames and the standard deviation of the Gaussian beam with time, try to obtain simulations very close to the real data available, and then determine the performance of the algorithm.
- The following videos will serve as a quick reference for what the videos and detection look like at
- noise_amp = 20
- noise_amp = 40
- I also performed a quick experiment to see how small the amplitude of motion could be before the algorithm failed to detect it, and found this to occur at 2 orders of magnitude below the values used in the previous post. This is a line of thought I intend to pursue more carefully; I am looking into how opencv and python handle images with float coordinates and will provide more details about this trial soon. It should give us an idea of the smallest beam spot motion that can be resolved.
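A sketch of the planned MSE-vs-noise measurement (not the exact code I am running: a simple intensity-weighted centroid stands in for the full contour pipeline, and only the noise amplitudes used above are swept):

```python
import numpy as np

N = 64
xx, yy = np.meshgrid(np.arange(N), np.arange(N))
rng = np.random.default_rng(0)

def frame(yc, noise_amp, sigma=3.0):
    """Gaussian spot at height yc plus uniform random noise in [0, noise_amp]."""
    g = 255 * np.exp(-((xx - N / 2) ** 2 + (yy - yc) ** 2) / (2 * sigma ** 2))
    return np.clip(g + rng.uniform(0, noise_amp, (N, N)), 0, 255)

def centroid_y(img, thresh=127):
    """Intensity-weighted centroid of the above-threshold pixels."""
    w = np.where(img > thresh, img, 0.0)
    return (w * yy).sum() / w.sum()

# purely vertical sinusoidal motion, as in the entries above (0.2 Hz at 10 fps)
true_y = N / 2 + 6 * np.sin(2 * np.pi * 0.2 * np.arange(100) / 10)
for noise_amp in (0, 20, 40):
    pred = np.array([centroid_y(frame(y, noise_amp)) for y in true_y])
    print(f'noise_amp = {noise_amp:2d}: MSE = {np.mean((pred - true_y) ** 2):.4f} px^2')
```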
|
Attachment 1: residue_normalised_x.pdf
|
|
Attachment 2: residue_normalised_x.pdf
|
|
Attachment 3: residue_normalised_x.pdf
|
|
Attachment 4: predicted_motion_x.pdf
|
|
Attachment 5: normalised_comparison_y.pdf
|
|
14639
|
Sun May 26 21:47:07 2019 |
Kruthi | Update | Cameras | CCD Calibration |
On Friday, I tried calibrating the CCD with the following setup. Here, I present the expected values of scattered power (Ps) at s = 45°, where s is the scattering angle (see Attachment #1). The LED box has a hole with an aperture of 5mm, and the LED is placed approximately 7mm behind the hole, so the full aperture angle is 2*tan-1(2.5/7) ≈ 40°. Using this, the spot size of the LED light at a distance d was estimated. The width of the LED holder/stand (approx 4", i.e. a half-width w/2 ≈ 5.08 cm) puts a constraint on the lowest possible s: s_min = tan-1((w/2)/d). At this lowest possible s, the distance of the CCD/Ophir from the screen is (w/2)/sin(s_min) = sqrt(d^2 + (w/2)^2), and this was taken as the imaging distance for the other angles as well.
In the table below, Pi is taken to be 1.5mW, and Ps was calculated from the collection solid angle ΔΩ using the Lambertian-screen relation:
Ps = (Pi/π) · cos(s) · ΔΩ
| d (cm) | Estimated spot diameter (cm) | Lowest possible s (deg) | Distance of CCD/Ophir from the screen (cm) | ΔΩ (sr) | Expected Ps at s = 45° (µW) |
| 1.0 | 1.2 | 78.86 | 5.2 | 0.1036 | 34.98 |
| 2.0 | 2.0 | 68.51 | 5.5 | 0.0259 | 8.74 |
| 3.0 | 2.7 | 59.44 | 5.9 | 0.0115 | 3.88 |
| 4.0 | 3.4 | 51.78 | 6.5 | 0.0065 | 2.19 |
| 5.0 | 4.1 | 45.45 | 7.1 | 0.0041 | 1.38 |
| 6.0 | 4.9 | 40.25 | 7.9 | 0.0029 | 0.98 |
| 7.0 | 5.6 | 35.97 | 8.6 | 0.0021 | 0.71 |
| 8.0 | 6.3 | 32.42 | 9.5 | 0.0016 | 0.54 |
| 9.0 | 7.1 | 29.44 | 10.3 | 0.0013 | 0.44 |
| 10.0 | 7.8 | 26.93 | 11.2 | 0.0010 | 0.34 |
On measuring the scattered power (Ps) with the Ophir power meter, I got values of the same order as the expected values in the table above (a quick consistency check of the table is sketched below). As Gautam suggested, we could use a photodiode to detect the scattered power, since it offers better precision, or we could calibrate the power meter using the method in Johannes's post: https://nodus.ligo.caltech.edu:8081/40m/13391.
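As a sanity check, the table is internally consistent: the geometry columns follow from the 4" holder width, and the Ps column from the Lambertian-screen relation above. A small sketch reproducing a few rows (the 2" half-width and the Lambertian form are the assumptions here):

```python
import numpy as np

P_i = 1.5e-3            # W, incident LED power
w_half = 2 * 2.54       # cm, half-width of the 4" LED holder
s = np.deg2rad(45)      # scattering angle for the Ps column

for d, dOmega in [(1.0, 0.1036), (5.0, 0.0041), (10.0, 0.0010)]:
    spot = 0.5 + 2 * d * np.tan(np.deg2rad(20))   # cm, 5mm hole + 40 deg cone
    s_min = np.rad2deg(np.arctan(w_half / d))     # lowest possible s
    r = np.hypot(d, w_half)                       # cm, CCD/Ophir distance from screen
    P_s = P_i / np.pi * np.cos(s) * dOmega        # Lambertian screen
    print(f'd={d:4.1f} cm: spot={spot:.1f} cm, s_min={s_min:5.2f} deg, '
          f'r={r:4.1f} cm, Ps={P_s * 1e6:5.2f} uW')
```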
|
Attachment 1: CCD_calibration_setup.png
|
|
14640
|
Mon May 27 11:37:13 2019 |
gautam | Update | VAC | c1vac is unresponsive | I've been monitoring the status of the pumpdown remotely with ndscope lookbacks of C1:Vac-CC1_pressure. This morning, I saw that the channel was putting out a constant value (a signature of a frozen EPICS server). caget did not work either. Then I tried ssh-ing into c1vac to see if there were any issues, but I was unable to; the machine isn't responding to ping either. The EPICS value has been frozen since ~10:30pm PDT, 26 May 2019.
I will try and head to campus later today to check on it. Isn't an email alert or something supposed to be sent out in such an event?
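On that note, as far as I know nothing of the sort runs on c1vac at the moment. A minimal sketch of a frozen-channel watchdog that could run on a control room machine (assumes pyepics and a local mail relay; the addresses are placeholders):

```python
import smtplib
import time
from email.message import EmailMessage

from epics import caget  # pyepics

CHAN, PERIOD, NSTALE = 'C1:Vac-CC1_pressure', 60, 30  # alert after 30 min frozen

last, stale = caget(CHAN), 0
while True:
    time.sleep(PERIOD)
    val = caget(CHAN)
    stale = stale + 1 if val == last else 0
    last = val
    if stale == NSTALE:                       # channel unchanged for NSTALE samples
        msg = EmailMessage()
        msg['Subject'] = f'{CHAN} frozen for {NSTALE} min'
        msg['From'] = 'c1vac-watchdog@40m'    # placeholder address
        msg['To'] = 'controls@40m.example'    # placeholder address
        msg.set_content(f'Last value: {val}')
        smtplib.SMTP('localhost').send_message(msg)
```
|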
14641
|
Tue May 28 09:51:33 2019 |
gautam | Update | VAC | c1vac hard-rebooted | The vacuum itself was fine - CC1 gauge reported a pressure of 1.3e-5 torr. Note to self: the C1:Vac-CC1_HORNET_PRESSURE channel, which is the analog readback of the Hornet gauge and which is hooked up to an Acromag ADC in the c1auxex chassis, is independent of the status of the c1vac machine, and so can serve as a diagnostic.
However, I was unable to interact with c1vac in any way, the monitor hooked up directly to it was showing a frozen display. So I hard-rebooted the system. It took a few minutes to come back online - but even after 10 minutes of waiting, still no display. In the process of the reboot, several valves were closed off - when the EPICS processes restart, there are momentary instances where the readback channels get an "undefined" value, which prompts the main interlock process to transition to a "SAFE" state.
Running df -h, I saw that the /var partition was completely full - maybe this was interfering with the smooth running of the machine? Two files in particular, daemon.log and daemon.log.1, were ~1GB each; their contents seemed to be just the readbacks of caget and caput commands. So I cleared both files, and now the /var partition usage is only 26%. I also got the display back up and running on the physical monitor hooked up to the c1vac machine's VGA port. Let's see if this has improved the stability situation. The CPU load is still high (~6-7), most of it coming from the modbus process. Why is this so high? c1susaux has more Acromag units but claims a much lower load of 0.71. Is the CPU of the c1vac machine somehow inferior?
In the meantime, I ssh-ed into c1vac and restored the "Vacuum normal" valve config. During this little escapade, the main volume pressure rose to ~6e-5 torr. It's coming back down smoothly.
Unrelated to this work: we had turned the RGA off for the vent, I powered it back on and re-initialized it this morning. |
Attachment 1: Screen_Shot_2019-05-31_at_12.44.54_PM.png
|
|
14642
|
Tue May 28 17:41:13 2019 |
gautam | Update | General | IFO status | [chub, gautam]
Today, we tried to resuscitate the c1iscaux2 channels by swapping the existing, failed VME crate with the crate newly freed up from c1susaux. In summary, the crate gets power and the EPICS server gets started, but I am unable to switch the whitening gain on the whitening boards. I believe this has to do with the FAIL LEDs that are on for the XVME-220 units. We were careful to preserve the locations of the various cards in the VME crates during the swap. Rather than doing detailed debugging with custom RJ45 cables and terminal emulators, I think we should just focus our efforts on getting the Acromag system up and running.
Our work must have bumped a cable to the c1lsc expansion chassis in the same rack - the c1lsc FE had crashed. I rebooted it using the script - everything came back gracefully. |
Attachment 1: IMG_7444.JPG
|
|
|