ID | Date | Author | Type | Category | Subject
10018 | Tue Jun 10 09:25:29 2014 |
Jamie | Update | CDS | Computer status: should not be changing names | I really think it's a bad idea to be making all these name changes. You're making things much, much harder for yourselves.
Instead of repointing everything to a new host, you should have just changed the DNS to point the name "linux1" to the IP address of the new server. That way you wouldn't need to reconfigure all of the clients. That's the whole point of name service: use a name so that you don't need to point to a number.
Also, pointing to an IP address for this stuff is not a good idea. If the IP address of the server changes, everything will break again.
Just point everything to linux1, and make the DNS entries for linux1 point to the IP address of chiara. You're doing all this work for nothing!
RXA: Of course, I understand what DNS means. I wanted to make the changes to the startup to remove any misconfigurations or spaghetti mount situations (of which we found many). The way the VME162s are designed, changing the name doesn't fix it - they use the IP number directly. And, of course, the main issue was not the DNS, but simply that we had to set up RSH on the new machine. This is all detailed in the ELOG entries we've made, but it might be difficult to understand remotely if you are not familiar with the 40m CDS system. |
9662 | Mon Feb 24 13:40:13 2014 |
Jenne | Update | CDS | Computer weirdness with c1lsc machine | I noticed that the fb lights on all of the models on the c1lsc machine are red, and that even though the MC was locked, there was no light flashing in the IFO. Also, all of the EPICS values on the LSC screen were frozen.

I tried restarting the ntp server on the frame builder, as in elog 9567, but that didn't fix things. (I realized later that the symptom there was a red light on every machine, while I'm only seeing problems with c1lsc.)
I did an mxstream restart, as a harmless thing that had some small hope of helping (it didn't).
I logged on to c1lsc and restarted all of the models (rtcds restart all), which stops all of the models (IOP last) and then restarts them (IOP first). This did not change the status of the lights on the status screen, but it did change the positioning of some optics (I suspect the tip tilts) significantly, and I was again seeing flashes in the arms. The LSC master enable switch was off, so I don't think it was trying to send any signals out to the suspensions. The ASS model, which sends signals out to the input pointing tip tilts, runs on c1lsc, and it was about when the ASS model was restarted that the beam came back. Also, there are no jumps in any of the SOS OSEM sensors in the last few hours, except from me misaligning and restoring the optics. We don't have sensors on the tip tilts, so I can't show a jump in their positioning, but I suspect them.
I called Jamie, and he suggested restarting the machine, which I did. (Once again, the beam went somewhere, and I saw it scattering big-time off of something in the BS chamber, as viewed on the PRM-face camera). This made the oaf and cal models run (I think they were running before I did the restart all, but they didn't come back after that. Now, they're running again). Anyhow, that did not fix the problem. For kicks, I re-ran mxstream restart, and diag reset, to no avail. I also tried running the sudo /etc/init.d/ntp-client restart command on just the lsc machine, but it doesn't know the command 'ntp-client'.
Jamie suggested looking at the timing card in the chassis, to ensure all of the link lights are on, etc. I will do this next.
9663 | Mon Feb 24 15:25:29 2014 |
Jenne | Update | CDS | Computer weirdness with c1lsc machine | The LSC machine isn't any better, and now c1sus is showing the same symptoms. Lame.
The link lights on the c1lsc I/O chassis and on the fiber timing system are the same as all other systems. On the timing card in the chassis, the light above the fibers was solid-on, and the light below blinks at 1pps.
Koji and I power-cycled both the lsc I/O chassis, and the computer, including removing the power cables (after softly shutting down) so there was seriously no power. Upon plugging back in and turning everything on, no change to the timing status. It was after this reboot that the c1sus machine also started exhibiting symptoms. |
10717 | Fri Nov 14 15:45:34 2014 |
Jenne | Update | CDS | Computers back up after reboot | [Jenne, Q]
Everything seems to be back up and running.
The computers weren't such a big problem (or at least didn't seem to be). I turned off the watchdogs, and remotely rebooted all of the computers (except for c1lsc, which Manasa already had gotten working). After this, I also ssh-ed to c1lsc and restarted all of the models, since half of them froze or something while the other computers were being power cycled.
However, this power cycling somehow completely screwed up the vertex suspensions. The MC suspensions were fine, and SRM was fine, but the ITMs, BS and PRM were not damping. To get them to kind of damp rather than ring up, we had to flip the signs on the pos and pit gains. Also, we were a little suspicious of potential channel-hopping, since touching one optic was occasionally time-coincident with another optic ringing up. So, no hard evidence on the channel hopping, but suspicions.
Anyhow, at some point I was concerned about the suspension slow computer, since the watchdogs weren't tripping even though the osem sensor rmses were well over the thresholds, so I keyed that crate. After this, the watchdogs tripped as expected when we enabled damping but the RMS was higher than the threshold.
I eventually remotely rebooted c1sus again. This totally fixed everything. We put all of the local damping gains back to the values at which we found them (in particular, undoing our sign flips), and everything seems good again. I don't know what happened, but we're back online now.
Q notes that the bounce mode for at least ITMX (haven't checked the others) is rung up. We should check if it is starting to go down in a few hours.
Also, the FSS slow servo was not running; we restarted it on op340m. |
695 | Fri Jul 18 17:06:20 2008 |
Jenne | Update | Computers | Computers down for most of the day, but back up now | [Sharon, Alex, Rob, Alberto, Jenne]
Sharon and I have been having trouble with the C1ASS computer the past couple of days. She has been corresponding with Alex, who has been rebooting the computers for us. At some point this afternoon, as a result of this work or of something else (I'm not totally sure which), about half of the computers' status lights on the MEDM screen were red. Alberto and Sharon spoke to Alex, who then fixed all of them except C1ASC. Alberto and I couldn't telnet into C1ASC to follow the restart procedures on the Wiki, so Rob helped us hook up a monitor and keyboard to the computer and restart it the old-fashioned way.
It seems like C1ASC has some confusion as to what its IP address is, or some other computer is now using C1ASC's IP address.
As of now, all the computers are back up. |
8324 | Thu Mar 21 10:29:12 2013 |
Manasa | Update | Computers | Computers down since last night | I'm trying to figure out what went wrong last night. But the morning status...the computers are down.

Attachment 1: down.png
9130 | Mon Sep 16 13:11:15 2013 |
Evan | Update | Computer Scripts / Programs | Comsol 4.3b upgrade | Comsol 4.3b is now installed under /cvs/cds/caltech/apps/linux64/COMSOL43b. I've left the existing Comsol 4.2 installation alone; according to the Comsol installation guide [PDF], it is unaffected by the new install. On megatron I've made a symlink so that you can call comsol in bash to start Comsol 4.3b.
The first time I ran comsol server, it asked me to choose a username/password combo, so I made it the same as the combo used to log on to megatron.
Edit: I've also added a ~/.screenrc on megatron (based on this Stackoverflow answer) so that I don't constantly go nuts trying to figure out if I'm already inside a screen session. |
9770 | Tue Apr 1 17:37:57 2014 |
Evan | Update | Computer Scripts / Programs | Comsol 4.4 upgrade | Comsol 4.4 is now installed under /cvs/cds/caltech/apps/linux64/COMSOL44. I've left the other installations alone. I've changed the symlink on megatron so that comsol now starts Comsol 4.4.
The first time I ran comsol server, it asked me to choose a username/password combo, so I made it the same as the combo used to log on to megatron.
We should consider uninstalling some of the older Comsol versions; right now we have 4.0, 4.2, 4.3b, and 4.4 installed. |
15389 | Thu Jun 11 09:37:38 2020 |
Jon | Update | BHD | Conclusions on Mode-Matching Telescopes | After further astigmatism/tolerance analysis [ELOG 15380, 15387] our conclusion is that the stock-optic telescope designs [ELOG 15379] are sufficient for the first round of BHD testing. However, for the final BHD hardware we should still plan to procure the custom-curvature optics [DCC E2000296]. The optimized custom-curvature designs are much more error-tolerant and have high probability of achieving < 2% mode-matching loss. The stock-curvature designs can only guarantee about 95% mode-matching.
Below are the final distances between optics in the relay paths. The base set of distances is taken from the 2020-05-21 layout. To minimize the changes required to the CAD model, I was able to achieve near-maximum mode-matching by moving only one optic in each relay path. In the AS path, AS3 moves inwards (towards the BHDBS) by 1.06 cm. In the LO path, LO4 moves backwards (away from the BHDBS) by 3.90 cm.
AS Path
Interval | Distance (m) | Change (cm)
SRMAR-AS1 | 0.7192 | 0
AS1-AS2 | 0.5405 | 0
AS2-AS3 | 0.5955 | -1.06
AS3-AS4 | 0.7058 | -1.06
AS4-BHDBS | 0.5922 | 0
BHDBS-OMCIC | 0.1527 | 0
LO Path
Interval | Distance (m) | Change (cm)
PR2AR-LO1 | 0.4027 | 0
LO1-LO2 | 2.5808 | 0
LO2-LO3 | 1.5870 | 0
LO3-LO4 | 0.3691 | +3.90
LO4-BHDBS | 0.2573 | +3.90
BHDBS-OMCIC | 0.1527 | 0
11491 | Tue Aug 11 10:13:32 2015 |
Jessica | Update | General | Conductive SMAs seem to work best | After testing both the conductive and isolated front panels on the ALS delay line box using the actual beatbox, and comparing this to the previous setup, I found that the conductive SMAs reduced crosstalk the most. Also, since the old cables were 30m and the new ones are 50m, Eric gave me a conversion factor to apply to the new cables to normalize the comparison.
I used an amplitude of 1.41 Vpp and drove the following frequencies through each cable:
X: 30.019 MHz Y: 30.019203 MHz
which gave a difference of 203 Hz.
In the first figure, it can be seen that, for the old setup with the 30m cables, there is a spike in both cables at 203 Hz with an amplitude above 4 m/s^2/sqrt(Hz). When the 50m cables were measured in the box with the conductive front panel, the amplitude at 203 Hz drops by a factor of around 3. I also compared the isolated front panel with the old setup, and found that the isolated front panel was worse than the old setup by a factor of just over 2. Therefore, I think that using the conductive front panel for the ALS delay line box will reduce noise and crosstalk between the cables the most. |
Attachment 1: best4.png
Attachment 2: isolated4.png
8433 | Wed Apr 10 01:10:22 2013 |
Jenne | Update | Locking | Configure screen and scripts updated | I have gone through the different individual degrees of freedom on the IFO_CONFIGURE screen (I haven't done anything to the full IFO yet), and updated the burt snapshot request files to include all of the trigger thresholds (the DoF triggers were there, but the FM triggers and the FM mask - which filter modules to trigger - were not). I also made all of the restore scripts (which do the burt restore for all those settings) the same. They were widely different, rather than just differing in which optics were chosen for misaligning and restoring.
Before doing any of this work, I moved the whole folder ..../caltech/c1/burt/c1ifoconfigure to ..../caltech/c1/burt/c1ifoconfigure_OLD_but_SAVE , so we can go back and look at the past settings, if we need to.
I also changed the "C1save{DoF}" scripts to ask for keyboard input, and then added them as options to the CONFIGURE screen. The keyboard input is so that people randomly pushing the buttons don't overwrite our saved burt files. Here's the secret: It asks if you are REALLY sure you want to save the configuration. If you are, type the word "really", then hit enter (as in yes, I am really sure). Any other answer, and the script will tell you that it is quitting without saving.
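For reference, a minimal sketch of the confirmation logic described above (written in Python for illustration only; the actual save scripts are shell scripts, and the burt-save step is only indicated by a comment):

import sys

answer = input('Are you REALLY sure you want to overwrite the saved configuration? ')
if answer.strip().lower() == 'really':
    print('Saving configuration...')   # the real script performs the burt save here
else:
    sys.exit('Quitting without saving.')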
I have also removed the "PRM" option, since we really only want the "PRMsb" for almost all purposes.
Also, I removed access to the very, very old text file about how to lock from the screen. That information is now on the wiki: https://wiki-40m.ligo.caltech.edu/How_To/Lock_the_Interferometer
I have noted in the drop-down menus that the "align" functions are not yet working. I know that Den has gotten at least one of the arms' ASSes working today, so once those scripts are ready, we can call them from the configure screen.
Anyhow, the IFO_CONFIGURE screen should be back to being useful! |
8722 | Wed Jun 19 02:46:19 2013 |
Jenne | Update | CDS | Connected ADC channels from IOO model to ASS model | Following Jamie's table in elog 8654, I have connected channels 0, 1 and 2 from ADC0 on the IOO computer to RFM send blocks, which send the signals over to the rfm model, and then I use dolphin send blocks to get them over to the ass model on the lsc machine.
I'm using the 1st 3 channels on the Pentek Generic interface board, which is why I'm using channels 0,1,2.
I compiled all 3 models (ioo, rfm, ass), and restarted them. I also restarted the daqd on the fb, since I put in a temporary set of filter banks in the ass model, to use as sinks for the signal (since I haven't done anything else to the ASS model yet).
All 3 models were checked in to the svn. |
16479 | Mon Nov 22 17:42:19 2021 |
Anchal | Update | General | Connected Megatron to battery backed ports of another UPS | [Anchal, Paco]
I used the UPS that was previously providing battery backup for chiara (an APC Back-UPS Pro 1000) to provide battery backup to Megatron. This completes UPS backup for all important computers in the lab. Note that this UPS nominally runs at 36% of capacity in power delivery, but at start-up Megatron's many fans drew up to 90% of the capacity, so we should not use this UPS for any other computer or equipment.
While doing so, we found that PS3 on Megatron was malfunctioning: its green LED did not light up when connected to power, so we replaced it with the PS3 from the old FB computer in the same rack. This solved the issue.
Another thing we found was that Megatron, on restart, does not get configured with the correct nameserver resolution settings and loses the ability to resolve the names chiara and fb1. This causes the NFS mounts to fail, which in turn causes the script services to fail. We fixed this after identifying that Ubuntu's NetworkManager was not disabled and would mess up the nameserver settings, which we want to be managed by systemd-resolved instead. We corrected the symbolic link /etc/resolv.conf -> /run/systemd/resolve/resolv.conf, then stopped and disabled the NetworkManager service to make this persistent across reboots. These are the steps:
> sudo rm /etc/resolv.conf
> sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf    # creates /etc/resolv.conf -> /run/systemd/resolve/resolv.conf
> sudo systemctl stop NetworkManager.service
> sudo systemctl disable NetworkManager.service
16396 | Tue Oct 12 17:20:12 2021 |
Anchal | Summary | CDS | Connected c1sus2 to martian network | I connected c1sus2 to the martian network by splitting the c1sim connection with a 5-way switch. I also ran another ethernet cable from the second port of c1sus2 to the DAQ network switch on 1X7.
Then I logged into chiara and added the following in chiara:/etc/dhcp/dhcpd.conf :
host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.92;
}
And the following line in chiara:/var/lib/bind/martian.hosts :
c1sus2 A 192.168.113.92
Note that entries for c1bhd were already added in these files, probably during some earlier testing by Gautam or Jon. Then I ran the following to restart the DHCP server and nameserver:
~> sudo service bind9 reload
[sudo] password for controls:
* Reloading domain name service... bind9 [ OK ]
~> sudo service isc-dhcp-server restart
isc-dhcp-server stop/waiting
isc-dhcp-server start/running, process 25764
Now, as I switched on c1sus2 from the front panel, it booted over the network from fb1 like the other FE machines, and I was able to log in to it by first logging into fb1 and then sshing to c1sus2.
Next, I copied the simulink models and the MEDM screens of c1x06, c1x07, c1bhd, and c1sus2 from the paths mentioned on this wiki page. I also copied the MEDM screens from chiara(clone):/opt/rtcds/caltech/c1/medm to the martian-network chiara in the appropriate places. I have placed the file /opt/rtcds/caltech/c1/medm/teststand_sitemap.adl, which can be used to open the sitemap for the c1bhd and c1sus2 IOP and user models.
Then I logged into c1sus2 (via fb1) and did make, install, start procedure:
controls@c1sus2:~ 0$ rtcds make c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### building c1x07...
Cleaning c1x07...
Done
Parsing the model c1x07...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1x07...
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl
Successfully compiled c1x07
***********************************************
Compile Warnings, found in c1x07_warnings.log:
***********************************************
***********************************************
controls@c1sus2:~ 0$ rtcds install c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### installing c1x07...
Installing system=c1x07 site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1X07.txt
Installing /opt/rtcds/caltech/c1/target/c1x07/c1x07epics
Installing /opt/rtcds/caltech/c1/target/c1x07
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1x07
/opt/rtcds/caltech/c1/scripts/startc1x07
sudo: unable to resolve host c1sus2
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl -par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_211012_174226.par -gds_node=24 -site_letter=C -system=c1x07 -host=c1sus2
Installing GDS node 24 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1x07.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1X07.ini
Installing Epics MEDM screens
Running post-build script
safe.snap exists
controls@c1sus2:~ 0$ rtcds start c1x07
Cannot start/stop model 'c1x07' on host c1sus2.
controls@c1sus2:~ 4$ rtcds list
controls@c1sus2:~ 0$
One can see that even after making and installing, the model c1x07 is not listed among the available models in rtcds list. The same is the case for c1sus2 as well. So I could not proceed with testing.
The good news is that nothing I did affected the current CDS functioning. So we can probably do this testing safely from the main CDS setup. |
16397 | Tue Oct 12 23:42:56 2021 |
Koji | Summary | CDS | Connected c1sus2 to martian network | Don't you need to add the new hosts to /diskless/root/etc/rtsystab at fb1? --> There seem to be many elogs talking about editing "rtsystab".
controls@fb1:/diskless/root/etc 0$ cat rtsystab
#
# host list of control systems to run, starting with IOP
#
c1iscex c1x01 c1scx c1asx
c1sus c1x02 c1sus c1mcs c1rfm c1pem
c1ioo c1x03 c1ioo c1als c1omc
c1lsc c1x04 c1lsc c1ass c1oaf c1cal c1dnn c1daf
c1iscey c1x05 c1scy c1asy
#c1test c1x10 c1tst2
164 | Wed Dec 5 10:57:08 2007 |
alberto | HowTo | Computers | Connecting the GPIBto USB interface to the Dell laptop | The interface works only on one of the USB ports of the laptop (the one on the right, looking at the computer from the back). |
1972 | Tue Sep 8 12:26:16 2009 |
Alberto | Update | PSL | Connection of the RC heater's power supply replaced | I have replaced the temporary clamps that were connecting the RC heater to its power supply with a new permanent connection.
In the IY1 rack, I connected the control signal of the RC PID temperature servo - C1:PSL-FSS_TIDALSET - to the input of the RC heater's power supply.
The signal comes from a DAC in the same rack, through a pair of wires connected to the J9-4116*3-P3 cross-connector (FLKM). I joined the pair to the wires of the BNC cable coming from the power supply by twisting and screwing them into two available clamps of the breakout FLKM in the IY1 rack - the same one connected to the ribbon cable from the RC Temperature box.
Instead of opening up the BNC cable coming from the power supply, I thought it was a cleaner and more robust solution to use a BNC-to-crocodile-clamp cable from which I had cut the clamps off.
During the transition, I connected the power supply's BNC input to a voltage source that I set to the same voltage as the control signal before I disconnected it (~1.145 V).
I monitored the temperature signals and it looked like the RC Temperature wasn't significantly affected by the operation. |
290 | Fri Feb 1 10:43:05 2008 |
John | Update | Environment | Construction work | The boys next door have some bigger noisier toys. |
Attachment 1: DSC_0433.JPG
14273 | Tue Nov 6 10:03:02 2018 |
Steve | Update | Electronics | Contec board found | The Contec test board with Dsub37Fs was on the top shelf of E7 |
Attachment 1: DSC01836.JPG
823 | Mon Aug 11 12:42:04 2008 |
josephb | Configuration | Computers | Continuing saga of c1susvme1 | Coming back after lunch around 12:30pm, c1susvme1's status was again red. After switching off watchdogs, a reboot (ssh, su, reboot) and restarting startup.cmd, c1susvme1 is still reporting a max sync value (16384), occassionally dropping down to about 16377. The error light cycles between green and red as well.
At this point, I'm under the impression further reboots are not going to solve the problem.
Currently leaving the watchdogs associated with c1susvme1 off for the moment, at least until I get an idea of how to proceed. |
12564 | Fri Oct 14 19:59:09 2016 |
Yinzi | Update | Green Locking | Continuing work with the TC 200 | Oct. 15, 2016
Another attempt (following elog 8755) to extract the oven transfer function from time series data using Matlab’s system identification functionalities.
The same time series data from elog 8755 was used in Matlab’s system identification toolbox to try to find a transfer function model of the system.
From elog 8755: H(s) is known from the current PID gains, H(s) = 250 + 60/s + 25s, and from the approximation G(s) = K/(1+Ts) we can expect the transfer function of the system to have 3 poles and 2 zeros.
I tried fitting a continuous-time and a discrete time transfer function with 3 poles and 2 zeros, as well as using the "quick start" option. Trying to fit a discrete time transfer function model with 3 poles and 2 zeros gave the least inaccurate results, but it’s still really far off (13.4% fit to the data).
Ideas:
1. Obtain more time domain data with some modulation of the input signal (also gives a way to characterize nonlinearities like passive cooling). This can be done with some minor modifications to the existing code on the raspberry pi. This should hopefully lead to a better system ID.
2. Try iterative tuning approach (sample gains above and below current gains?) so that a tune can be obtained without having to characterize the exact behavior of the heater.
Oct. 16, 2016
-Found the raspberry pi but it didn’t have an SD card
-Modified code to run directly on a computer connected to the TC 200. Communication seems to be happening, but a UnicodeDecodeError is thrown saying that the received data can't be decoded (a hedged decoding sketch follows this list).
-Some troubleshooting: tried utf-8 and utf-16 but neither worked. The raw data coming in is just strings of K’s, [‘s, and ?’s
-Will investigate possible reasons (update to Mac OS or a difference in Python version?), but it might be easier to just find an SD card for the raspberry pi which is known to work. In the meantime, modify code to obtain more time series data with variable input signals. |
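A hedged sketch of how the serial read could be made robust against the UnicodeDecodeError. The port name, serial settings and example query below are assumptions - check the TC 200 manual. Garbage bytes like the strings of K's and ?'s above usually point to a baud-rate or framing mismatch rather than a text-encoding problem, so inspecting the raw bytes first is the useful step:

import serial   # pyserial

ser = serial.Serial('/dev/ttyUSB0', baudrate=115200, bytesize=8,
                    parity=serial.PARITY_NONE, stopbits=1, timeout=1)
ser.write(b'tact?\r')                          # example query; command set per the TC 200 manual
raw = ser.read(256)
print(repr(raw))                               # inspect the raw bytes before decoding
text = raw.decode('ascii', errors='replace')   # never raises UnicodeDecodeError
print(text)
ser.close()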
12190 | Thu Jun 16 15:57:46 2016 |
gautam | Update | COC | Contrast as a function of RoC of ETMX | Summary
In a previous elog, I demonstrated that the RoC mismatch between ETMX and ETMY does not result in appreciable degradation in the mode overlap of the two arm modes. Koji suggested also checking the effect on the contrast defect. I'm attaching the results of this investigation (I've plotted the contrast, rather than the contrast defect 1-C).
Details and methodology
- I used the same .kat file that I had made for the current configuration of the 40m, except that I set the reflectivities of the PRM and the SRM to 0.
- Then, I traced the Y arm cavity mode back to the node at which the laser sits in my .kat file to determine what beam coming out of the laser would be 100% matched to the Y arm (code used to do this attached)
- I then set the beam coming out of the laser for the subsequent simulations to the value thus determined, using the gauss command in Finesse.
- I then varied the RoC of ETMX (I varied the sagittal and tangential RoCs simultaneously) between 50m and 70m. As per the wiki page, the spare ETMs have an RoC between 54 and 55m, while the current ETMs have RoCs of 60.26m and 59.48m for the Y and X arms respectively (I quote the values in the "ATF" column). Simultaneously, at each value of the RoC of ETMX, I swept the microscopic position of the ETMX HR surface through 2pi radians (-180 degrees to 180 degrees) using the phi functionality of Finesse, while monitoring the power at the AS port of this configuration using a pd in Finesse. This guarantees that I sweep through all the resonances. I then calculate the contrast as C = (Pmax - Pmin)/(Pmax + Pmin) over each sweep (a short numerical sketch of this calculation follows this list). I divided the parameter space into a grid of 50 points for the RoC of ETMX and 1000 points for the microscopic position of ETMX.
- I fixed the RoC of ETMY as 57.6m in the simulations... Also, the maxtem option in the .kat file is set to 4 (i.e. higher order modes with indices m+n<=4 are accounted for...)
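A short numerical sketch of the contrast calculation over the RoC/phi grid described above. The AS-port power function here is only a toy stand-in; in the actual analysis that number came from the pd detector in the Finesse model:

import numpy as np

def as_port_power(roc_etmx, phi_deg):
    # Toy stand-in for the Finesse run (NOT physics): a fake contrast that shrinks
    # with RoC mismatch, just to make the sweep below runnable.
    mismatch = abs(roc_etmx - 60.2) / 60.2
    c = 1.0 - mismatch
    return 0.5 * (1.0 + c * np.cos(np.deg2rad(phi_deg)))

roc_grid = np.linspace(50.0, 70.0, 50)        # m, RoC of ETMX
phi_grid = np.linspace(-180.0, 180.0, 1000)   # deg, microscopic ETMX position

contrast = np.zeros_like(roc_grid)
for i, roc in enumerate(roc_grid):
    power = np.array([as_port_power(roc, phi) for phi in phi_grid])
    p_max, p_min = power.max(), power.min()   # bright and dark fringe over the sweep
    contrast[i] = (p_max - p_min) / (p_max + p_min)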
Result:
Attachment #1 shows the result of this scan (as mentioned earlier, I plot the contrast C and not the contrast defect 1-C; sorry for the wrong plot title, but it takes ~30 mins to run the simulation, which is why I didn't want to do it again). If the RoC of the spare ETMs is about 54m, the loss in contrast is about 0.5%. This is in good agreement with this technical note by Koji - it tells us to expect a contrast defect in the region of 0.5%-1% (depending on what parameter you use as the RoC of ETMY).
Conclusion:
It doesn't seem that switching out the current ETM with one of the spare ETMs will result in dramatic degradation of the contrast defect...
Misc notes:
- Regarding the phase command in Finesse: EricQ pointed out that the default value of this is 3, which as per the manual could sometimes give unphysical results. The flags "0" or "2" are guaranteed to always yield physical results according to the manual, so it is best to set this flag appropriately for all future Finesse simulations.
- I quickly poked around inside the cabinet near the EX table labelled "clean optics" to see if I could locate the spare ETMs. In my (non-exhaustive) search, I could not find it in any of the boxes labelled "2010 upgrade" or something to that effect. I did however find empty boxes for ETMU05 and ETMU07 which are the ETMs currently in the IFO... Does anyone know if I should look elsewhere for these?
EDIT 17Jun2016: I have located ETMU06 and ETMU08, they are indeed in the cabinet at the X end...
- I'm attaching a zip file with all the code used to do this simulation. The phase flag has been appropriately set in the (only) .kat file. setLaserQparam.py was used to determine what beam parameter to assign to be perfectly matched to the Y arm. modeMatchCheck_ETM.py was used to generate the contrast as a function of the RoC of ETMX.
- With regards to the remaining checks to be done - I will post results of my investigations into the HOM scans as a function of the RoC of the ETMs and also the folding mirrors shortly...
Attachment 1: contrastDefect.pdf
Attachment 2: finesseCode.zip
12193 | Thu Jun 16 18:42:12 2016 |
rana | Update | COC | Contrast as a function of RoC of ETMX | That sounds weird. If the ETMY RoC is 60 m, why would you use 57.6 m in the simulation? According to the phase map web page, it really is 60.2 m. |
12194 | Thu Jun 16 23:02:57 2016 |
gautam | Update | COC | Contrast as a function of RoC of ETMX |
Quote: |
That sounds weird. If the ETMY RoC is 60 m, why would you use 57.6 m in the simulation? According to the phase map web page, it really is 60.2 m.
|
This was an oversight on my part. I've updated the .kat file to have all the optics have the RoC as per the phase map page. I then re-did the tracing of the Y arm cavity mode to determine the appropriate beam parameters at the laser in the simulation, and repeated the sweep of RoC of ETMX while holding RoC of ETMY fixed at 60.2m. The revised contrast defect plot is attached (this time it is the contrast defect, and not the contrast, but since I was running the simulation again I thought I may as well change the plot).
As per this plot, if the ETMX RoC is ~54.8m (the closer of the two spares to 60.2m), the contrast defect is 0.9%, again in good agreement with what the note linked in the previous elog tells us to expect... |
Attachment 1: contrastDefect.pdf
12197 | Mon Jun 20 01:38:04 2016 |
rana | Update | COC | Contrast as a function of RoC of ETMX | So, it seems that changing the ETMX for one of the spares will change the contrast defect from ~0.1% to 0.9%. True? Seems like that might be a big deal. |
12204 | Mon Jun 20 18:07:15 2016 |
gautam | Update | COC | Contrast as a function of RoC of ETMX |
Quote: |
So, it seems that changing the ETMX for one of the spares will change the contrast defect from ~0.1% to 0.9%. True? Seems like that might be a big deal.
|
That is what the simulation suggests... I repeated the simulation for a PRFPMI configuration (i.e. no SRM, everything else as per the most up to date 40m numbers), and the conclusion is roughly the same - the contrast defect degrades from ~0.1% to ~1.4%. So I would say this is significant. I also attempted to see what the contribution of the asymmetry in arm losses is, by re-running the simulation with the current loss numbers of 230 ppm for the Y arm and 484 ppm for the X arm (split equally between the ITMs and ETMs in both cases), and then again with lossless arms - see Attachment #1. While this is a factor, the plot seems to suggest that the RoC mismatch effect dominates the contrast defect... |
Attachment 1: contrastDefectComparison.pdf
17020 | Tue Jul 19 18:41:42 2022 |
yuta | Update | BHD | Contrast measurements for Michelson and ITM-LO | [Paco, Yuta]
We measured the contrast of the Michelson fringe with both arms locked and with both arms misaligned. It was around 90%.
We also measured the contrast of ITM single bounce vs LO beam using BHD DC PDs. It was around 43%.
The measurement depends on the alignment and on how the maximum and minimum of the fringe are estimated. The ITM-LO fringe was also not stable because the motions of the AS/LO mirrors are large. More tuning is necessary.
Background
- As measured in elog 40m/17012, we see a lot of CARM in AS, which indicates large contrast defect.
- We want to check mode-matching of LO beam to AS beam.
BHD DC PD conditioning
- We added DCPD_A and DCPD_B to /opt/rtcds/caltech/c1/scripts/LSC/LSCoffsets3 script, which zeros the offsets when shutters are closed.
- We also set C1:LSC-DCPD_(A|B)_GAIN = -1 since they are inverted.
Contrast measurement
- Contrast was measured using channels ['C1:LSC-ASDC_OUT','C1:LSC-POPDC_OUT','C1:LSC-REFLDC_OUT','C1:LSC-DCPD_A_OUT','C1:LSC-DCPD_B_OUT']. For LO, only DCPD_(A|B) are used.
- We took the data in the top and bottom 15th percentiles (40th for the ITM-LO fringe), and took the median of each subset to estimate the maximum and minimum values (see Attachment; a sketch of this estimate follows this list).
- Contrast = (Imax - Imin) / (Imax + Imin)
- We measured three times in each configuration to estimate the standard error.
- Jupyter notebook: https://git.ligo.org/40m/scripts/-/blob/main/CAL/BHD/measureContrast.ipynb
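A sketch of the percentile/median contrast estimate described above (reconstructed from the description, not taken from the notebook):

import numpy as np

def estimate_contrast(dcpd, tail_fraction=0.15):
    # Estimate Imax/Imin as the median of the samples in the top/bottom percentile
    # tails, which rejects single-sample outliers, then form C = (Imax - Imin)/(Imax + Imin).
    data = np.asarray(dcpd, dtype=float)
    hi = np.percentile(data, 100 * (1 - tail_fraction))
    lo = np.percentile(data, 100 * tail_fraction)
    i_max = np.median(data[data >= hi])
    i_min = np.median(data[data <= lo])
    return (i_max - i_min) / (i_max + i_min)

# Example with a noisy synthetic fringe (contrast ~0.9):
t = np.linspace(0, 10, 10000)
fringe = 1.0 + 0.9 * np.sin(2 * np.pi * 0.5 * t) + 0.02 * np.random.randn(t.size)
print(estimate_contrast(fringe))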
Results
Both arms locked, MICH fringe (15% percentile)
Contrast measured by C1:LSC-ASDC_OUT is 89.75 +/- 0.17 %
Contrast measured by C1:LSC-POPDC_OUT is 79.41 +/- 0.86 %
Contrast measured by C1:LSC-REFLDC_OUT is 97.34 +/- 0.34 %
Contrast measured by C1:LSC-DCPD_A_OUT is 95.41 +/- 1.55 %
Contrast measured by C1:LSC-DCPD_B_OUT is 89.76 +/- 1.49 %
Contrast measured by all is 90.34 +/- 1.68 %
Both arms mis-aligned, MICH fringe (15% percentile)
Contrast measured by C1:LSC-ASDC_OUT is 89.32 +/- 0.57 %
Contrast measured by C1:LSC-POPDC_OUT is 94.55 +/- 0.62 %
Contrast measured by C1:LSC-REFLDC_OUT is 97.95 +/- 1.37 %
Contrast measured by C1:LSC-DCPD_A_OUT is 96.40 +/- 1.04 %
Contrast measured by C1:LSC-DCPD_B_OUT is 90.98 +/- 1.07 %
Contrast measured by all is 93.84 +/- 0.94 %
ITMY-LO fringe (40% percentile)
Contrast measured by C1:LSC-DCPD_A_OUT is 45.51 +/- 0.45 %
Contrast measured by C1:LSC-DCPD_B_OUT is 38.69 +/- 0.43 %
Contrast measured by all is 42.10 +/- 1.03 %
ITMX-LO fringe (40% percentile)
Contrast measured by C1:LSC-DCPD_A_OUT is 46.65 +/- 0.65 %
Contrast measured by C1:LSC-DCPD_B_OUT is 39.82 +/- 0.51 %
Contrast measured by all is 43.24 +/- 1.45 %
Discussion
- As you can see from the attachment, REFLDC is noisy and overestimates the contrast. ASDC is reliable. We need to tune the threshold used to estimate the maximum and minimum values. We should also use the mode instead of the median.
- Contrast depends very much on the alignment. We didn't tweak too much today.
- ITM-LO fringe was not stable, probably due to too much motion in AS1, AS4, LO1, LO2. Their damping needs to be re-tuned.
Next:
- Model FPMI sensing matrix with measured contrast defect
- Estimate AS-LO mode-mismatch using the measured contrast
- Lock ITM-LO fringe using DCPD_(A|B) as error signal, and ITM or LO1/2 as actuator
- Lock MICH with DCPD_(A|B), and with LO beam
- Get better contrast data with better alignment and better AS1, AS4, LO1, LO2 damping |
Attachment 1: ContrastMeasurements.pdf
15595 | Tue Sep 22 16:29:30 2020 |
Koji | Update | General | Control Room AC setting for continuous running | I came to the lab. The control room AC was off -> Now it is on.
Here is the setting of the AC meant for continuous running |
Attachment 1: P_20200922_161125.jpg
7866 | Thu Dec 20 19:46:20 2012 |
rana | Configuration | Environment | Control Room Projector | Needs

1927 | Wed Aug 19 02:17:52 2009 |
rana | Omnistructure | Environment | Control Room Workstation desks lowered to human height | There were no injuries...Now we need to get some new chairs. |
1931 | Thu Aug 20 09:16:32 2009 |
steve | HowTo | Photos | Control Room Workstation desks lowered to human height |
Quote: |
There were no injuries...Now we need to get some new chairs.
|
The control room desktop heights on the east side were lowered by 127 mm.
Attachment 1: P1040788.png
Attachment 2: P1040782.png
Attachment 3: P1040786.png
Attachment 4: P1040789.png
Attachment 5: P1040785.png
14778 | Fri Jul 19 15:54:47 2019 |
gautam | Update | General | Control room UPS Batteries need replacement | The control room UPS started making a beeping noise saying the batteries need replacement. I hit the "Test" button and the beeping went away. According to the label on it, the batteries were last replaced in March 2016, so it is probably time for a replacement. @Chub, please look into this. |
7508 | Mon Oct 8 23:58:57 2012 |
Jenne | Update | SAFETY | Control room emergency lights came on | [Evan, Jenne]
We were sitting trying to lock MICH (hooooorraaaayy!!!), and the emergency lights above the control room door came on, and then ~1 minute later turned off. Steve, can you see what's up? |
15541 | Wed Aug 26 15:48:31 2020 |
gautam | Update | VAC | Control screen left open on vacuum workstation | I found that the control MEDM screen was left open on the c1vac workstation. This should be closed every time you leave the workstation, to avoid accidental button pressing and such.
The network outage meant that the EPICS data from the pressure gauges wasn't recorded until I reset everything ~noon. So there isn't really a plot of the outgassing/leak rate. But the pressure rose to ~2e-4 torr, over ~4 hours. The pumpdown back to nominal pressure (9e-6 torr) took ~30 minutes. |
841 | Fri Aug 15 16:45:43 2008 |
Sharon | Update | | Converting from FIR to IIR | I have been looking into different techniques to convert from FIR to IIR. This is so we can see how effective the adaptive FIR filter is in comparison to an IIR one with fewer taps.
For now I tried 2 matlab functions: Prony and stmcb (which works on LMS method).
I used the FIR wiener code, MC1_X, (c1wino) and applied the FIR to IIR algorithm.
Seems like the stmcb one works a bit better, and that there isn't much difference between using 1000 taps and 400 taps.
Will keep updating on more results I have, and hopefully have the MC in time to actually check this live. |
Attachment 1: fir2iir50.png
Attachment 2: fir2iir400.png
Attachment 3: fir2iir1000.png
14678 | Mon Jun 17 14:36:13 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | I have begun setting up an environment (as mentioned before, on my local machine) and scripts to run experiments with convolutional networks for beam tracking. All code has been pushed to this folder in the GigEcamera repository. I am presently looking for pre-processing techniques for the video which go beyond the usual "Crop the images! Normalize pixel values! Convert to grayscale!".
Quote: |
Networks for beam tracking:
- I will use the architectures suggested in this work with a few modifications. I will use MSE loss function, Adam optimizer and my local GPU for training.
|
|
14682 | Tue Jun 18 22:54:59 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | Worked further on this. I skimmed through a few resources to look for details of what pre-processing can be done. Here are some of the useful things I found during today's reading (I am planning to convert all these resources, particularly those I come across for GANs, into either a README on the repo or a Wiki soon). The work I skimmed through today mostly pointed to the use of a median filter for pre-processing, if any is to be done. I am presently using the Sequential() API in Keras to set up the neural network. I will train it tomorrow.
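A hedged sketch of the kind of pre-processing mentioned above (crop, median filter, normalize); the crop window and kernel size are placeholders, not the values actually used:

import cv2
import numpy as np

def preprocess_frame(frame, crop=(100, 350, 150, 400), ksize=5):
    # Crop to the beam-spot region, median-filter to suppress speckle/salt-and-pepper
    # noise, and normalize to [0, 1] for the network.
    r0, r1, c0, c1 = crop
    roi = frame[r0:r1, c0:c1]
    if roi.ndim == 3:
        roi = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    roi = cv2.medianBlur(roi, ksize)
    return roi.astype(np.float32) / 255.0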
Quote: |
Begun setting up an environment (as mentioned before, on my local machine) and scripts to run experiments with Convolutional networks for beam tracking. All code has been pushed to this folder in the GigEcamera repository. I am presently looking for pre-processing techniques for the video which go beyond the usual "Crop the images! Normalize pixel values! Convert to Grayscale!".
|
|
14694 | Tue Jun 25 00:25:47 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | In the previous meeting, Koji pointed out (once again) that I should determine whether the displacement values and frames are synchronized before training a network. Pooja did the following last time. Koji also suggested that I first predict the motion (a series of x and y coordinates) and then slide the resulting plots around until I get the best match with the original motion. This is however not possible with a neural-network-based approach, as the network learns exactly what you show it; it will therefore learn any mismatch between the labels and the frames and predict exactly that. Therefore I came up with what Koji described as a "hacky" method to achieve the same thing using the OpenCV work described previously in this elog (the only addition being the application of a mask to block out the OSEMs and work only with the beam spot).
Hacky technique to sync frames and labels:
- I ran the OpenCV algorithm on the data to obtain a plot of the predicted motion, shown in Attachment #2. As is evident, the predicted motion is only an approximation of the actual motion and also displays a shift. However, a plot of the Fourier transform of the signal (see Attachment #1) shows that the components present are the same. The predominant frequency component is 0.22 Hz rather than 0.2 Hz as stated by Pooja in her elog; I wonder if this is of any consequence. Therefore, this predicted motion can be slid around until it overlaps with the applied sinusoidal dither signal "well".
- Defining "well": I computed an error signal as the differnece between the predicted signal and the actual motion with each signal being normalized by subtracting the mean and then dividing the resulting signal by the maximum value (see Attachment #3). The lower the power of the resulting signal, the better the synchronization of the predicted and actual signal. Note: To achieve this overlap of signals, datapoints are removed from either the start or end of the signals and this effectively reduces the number of data points available for training by 36 pionts (see Attachment #4, positive and negative shifts merely indicate if the predicted signal is being moved right or left).
- Attachments #5 and #6 show the results of shifting the data by 36 samples. It is evident that there is far greater overlap of the prediction and the actual values.
- Well, what now? I will use the mapping between labels and frames obtained by the above steps to train a neural network.
[Koji, Milind - 21/06/2019]
- Well, the above is fine, but why is contour detection really necessary? Why not take an intensity-weighted sum of the pixel coordinates (in a rectangular region obtained, say, after blocking out the OSEMs) to see what the centroid motion is? Black areas (0 pixel intensity) will not contribute to this sum anyway. Perhaps that can be used for the sliding instead of the above (fallible!) approach, especially for cases in which the beam "spot" is just a collection of random speckles? (A sketch of this weighted-centroid calculation is given after this list.)
- Something like this was done by Pooja where she computed the sum of pixel intensities in a rectangular region containing the beam spot. However, she did this for very noisy data and observed intensity variation at a frequency double that of the applied signal.
- Results of applying a median filter and doing the same are presented in Attachment #7. Clearly, they can't be used for this sliding task.
- Results of computing the weighted sum of all the coordinates (with pixel intensities as the weights) are presented in Attachment #8. Clearly, for this data and for this task, the contour approach seems to be the better method. Further, these results just serve to prove Rana's point that such simple, unsophisticated, naive approaches will not produce the desired results, and they will therefore be presented in this very context in the report that is due.
- The contour detection technique does not work if the beam spot is just a collection of speckles. In that case Koji suggested that we use a bounding convex hull instead of a contour. Alternately, for a bunch of speckles I can perform dilation to reduce it to the same problem.
- Using gpstime for time stamping: to determine the absolute time at which a frame is grabbed. However, the delay between the time being recorded and the frame being grabbed needs to be determined for this, which should be doable using linux/python commands.
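A sketch of the intensity-weighted centroid idea discussed above (the background threshold is an arbitrary illustrative value):

import numpy as np

def weighted_centroid(frame, threshold=10):
    # Return the (row, col) centroid of a grayscale frame, weighting pixel coordinates
    # by intensity; pixels below `threshold` are zeroed to suppress background.
    img = frame.astype(float)
    img[img < threshold] = 0.0
    total = img.sum()
    if total == 0:
        return np.nan, np.nan          # no beam spot found
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total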
Quote: |
Worked further on this. I skimmed through a few resources to look for details of what pre-processing can be done. Here (am planning to convert all these resources, particularly those I come across for GANs into either a README on the repo or a Wiki soon) are some of the useful things I found during today's reading. The work I skimmed through today mostly pointed to the use of a median filter for pre-processing, if any is to be done. I am presently using the Sequential() API in Keras to set up the neural network. I will train it tomorrow.
|
Upcoming work (in the order of priority):
- Data acquisition: With the mode cleaner being locked and Kruthi having focused on to the beam spot, I will obtain data for training both GANs and the convolutional networks. I really hope that some of the work done above can be extended to the new data. Rana suggested that I automate this by writing a script which I will do after a discussion with Gautam tomorrow.
- Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output the uncertainty in its predictions. I am not sure how this can be done, but I will look into it.
- Simulation:
- Putting the physics in: Previously, I worked on adding point scatterers. I shall add the effect of surface roughness and incorporate the BRDF next. Just as Gautam did, Rana also recommended that I go through Hiro Yamamoto's work to improve my understanding of this.
- GANs: I will put together a readme (which I will turn into a wiki later) for all the material that I am using to develop my ideas about GAN training. Currently, my understanding of GANs is that they take as input noise vectors which are fed to the generative networks which then produce the fakes. This clearly isn't the only way to do it as GANs are used for several applications such as image generation from text. I am referring to these papers to set up the necessary architecture.
- PMC autolocker: I will convert the existing autolocker script to python. Rana also suggested that it would be interesting to see what the best settings of the hyperparameters would be to lock the PMC the fastest. I will write a script to do that and plot a 3D surface plot of the average time taken to lock the PMC as a function of the PZT scan speed and the Servo gain to determine the optimal setting of these "hyperparameters".
- Cleaning up/ formalizing code: Rana pointed out that any code that messes with channel values must return them to the original settings once the script is finished running. I have overlooked this and will add code to do this to all the files I have created thus far. Further, while most of my code is well documented and frequently pushed to Github, I will make sure to push any code that I might have missed to github.
- Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for setting up a dedicated machine for running the camera server and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I talk to him about somehow figuring out the hardware to ensure that the GigE clock is the same as the rest of the system.
Attachment 1: Spectra.pdf
Attachment 2: normalised_comparison_y.pdf
Attachment 3: residue_normalised_y.pdf
Attachment 4: error_power_sliding.pdf
Attachment 5: normalised_comparison_y.pdf
Attachment 6: residue_normalised_y.pdf
Attachment 7: intensum.pdf
Attachment 8: centroid.pdf
14697 | Tue Jun 25 22:14:10 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | I discussed this with Gautam and he asked me to come up with a list of signals that I would need and then design the data acquisition task at a high level before proceeding. I'm working on that right now. We came up with a very elementary sketch of what the script will do (a rough code skeleton follows this list):
- Check the MC is locked.
- Choose an exposure value.
- Choose a frequency and amplitude value for the applied sinusoidal dither (check warning by Gabriele below).
- Apply sinusoidal dither to optic.
- Timestamping: Record gpstime, instantaneous channel values and a frame. These frames can later be put together in a sequence and a network can be trained on this. (NEED TO COME UP WITH SOMETHING CLEVERER THAN THIS!)
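A rough, hedged skeleton of the acquisition loop sketched above. The channel names, lock threshold and frame-grab function are placeholders invented for illustration, and pyepics caget is used simply as a stand-in for ezca/cdsutils; the dither excitation (awg) is assumed to be running separately:

import time
import numpy as np
from epics import caget                # pyepics, as a stand-in for ezca/cdsutils

MC_LOCK_CHANNEL = "C1:IOO-MC_TRANS_SUM"                        # placeholder channel name
QPD_CHANNELS = ["C1:IOO-MC_TRANS_PIT", "C1:IOO-MC_TRANS_YAW"]  # placeholder channel names

def grab_frame():
    # Placeholder for the GigE/pypylon frame grab used by the camera client.
    return np.zeros((480, 640), dtype=np.uint8)

records = []
for _ in range(100):
    if caget(MC_LOCK_CHANNEL) < 1e4:   # crude "is the MC locked?" check; threshold assumed
        time.sleep(1.0)
        continue
    stamp = time.time()                # replace with a GPS timestamp for real data
    values = [caget(ch) for ch in QPD_CHANNELS]
    records.append((stamp, values, grab_frame()))
    time.sleep(0.1)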
Tomorrow I will try to prepare a dummy script for this before the meeting at noon. Gautam asked me to familiarize myself with awg and cdsutils (I have already used ezca before) to write the script. This will also help me with the following two tasks-
- IFO test scripts that Rana asked me to work on a while ago
- The PMC autolocker scripts that Rana asked me to work on
Quote: |
Upcoming work (in the order of priority):
- Data acquisition: With the mode cleaner being locked and Kruthi having focused on to the beam spot, I will obtain data for training both GANs and the convolutional networks. I really hope that some of the work done above can be extended to the new data. Rana suggested that I automate this by writing a script which I will do after a discussion with Gautam tomorrow.
|
I got to speak to Gabriele about the project today and he suggested that if I am using Rana's memory-based approach, I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time, and that if I use the frame-wise approach I try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible - something that Rana and Gautam emphasized as well.
I am pushing the code that I wrote for
- Kruthi's exposure variation - ccd calibration experiment
- modified camera_client_movie.py code (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)
- interact.py (to interact with the GigE in viewing or recording mode) (currently at /opt/rtcds/caltech/c1/scripts/GigE/SnapPy_pypylon)
to the GigEcamera repository.
Gautam also asked me to look at Jigyasa's report and elog 13443 to come up with the specs of a machine that would accomodate a dedicated camera server.
Quote: |
- Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewize predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainity in the predictions. I am not sure how this can be done, but I will look into it.
- Cleaning up/ formalizing code: Rana pointed out that any code that messes with channel values must return them to the original settings once the script is finished running. I have overlooked this and will add code to do this to all the files I have created thus far. Further, while most of my code is well documented and frequently pushed to Github, I will make sure to push any code that I might have missed to github.
- Talk to Jon!: Gautam suggested that I speak to Jon about the machine requirements for setting up a dedicated machine for running the camera server and about connecting the GigE to a monitor now that we have a feed. Koji also suggested that I talk to him about somehow figuring out the hardware to ensure that the GigE clock is the same as the rest of the system.
|
14706 | Thu Jun 27 20:48:22 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | And finally, a network is trained!
Result summary (TLDR :-P) : No memory was used. Model trained. Results were garbage. Will tune hyperparameters now. Code pushed to github.
More details of the experiment:
Aim:
- To train a network to check that training occurs and get a feel for what the learning might be like.
- To set up the necessary framework to perform mulitple experiments and record results in a manner facilitating comparison.
- To track beam spot motion.
What I did:
- Set up a network that learns a framewise mapping as described here (a sketch of such a network follows this list).
- Training data: 0.9 x 1791 frames. Validation data: 0.1 x 1791 frames. Test data (only prediction): all the 1791 frames
- Hyperparameters: Attachment #1
- Did no tuning of hyperparameters.
- Compiled and fit the models and saved the results.
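A hedged sketch of the kind of framewise network described above: one pre-processed grayscale frame in, one (x, y) spot position out. The input shape and layer sizes are assumptions for illustration, while the optimizer, loss, dropout, learning rate, dense units and training settings follow the hyperparameters listed in Attachment #1:

from tensorflow.keras import layers, models, optimizers

def build_model(input_shape=(128, 128, 1), dropout=0.8, eta=1e-4):
    model = models.Sequential([
        layers.Conv2D(8, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(16, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dropout(dropout),
        layers.Dense(2)                # predicted (x, y) beam-spot position
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=eta), loss='mse')
    return model

# model.fit(frames, positions, batch_size=32, epochs=50, validation_split=0.1)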
What I saw
- Attachment #2: data fed to the network after pre-processing - median blur + crop
- Attachment #3: learning curves.
- Attachment #4: true and predicted motion. Nothing great.
What I think is going wrong-
- No hyperparameter tuning. This was only a first pass but is being reported as it will form the basis of all future experiments.
- Too little data.
- Maybe wrong architecture.
Well, what now?
- Tune hyperparameters (try to get the network to overfit on the data and then test on that; we'll then know for sure that all we probably need is more data?)
- Currently the network has around 200k parameters. Maybe reduce that.
- Set up a network that takes as input a bunch of frames (one example corresponding to one forward pass) and predicts a vector of position values that can be used as continuous data.
Quote: |
I got to speak to Gabriele about the project today and he suggested that if I am using Rana's memory based approach, then I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time and that if I use the frame wise approach I try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible. Something that Rana and Gautam emphasized as well.
Quote: |
- Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with framewize predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainity in the predictions. I am not sure how this can be done, but I will look into it.
|
|
Attachment 1: readme.txt
Experiment file: train.py
batch_size: 32
dropout_probability: 0.8
eta: 0.0001
filter_size: 19
filter_type: median
initializer: xavier
num_epochs: 50
activation_function: relu
dense_layer_units: 64
... 10 more lines ...
Attachment 2: frame0.pdf
Attachment 3: Learning_curves.png
Attachment 4: Motion.png
14726 | Thu Jul 4 18:19:08 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | The quoted elog has figures which indicate that the network did not learn (train or generalize) on the data used. This is a scary thing, as (in my experience) it indicates that something is fundamentally wrong with either the data or the model, and learning will not happen no matter how the hyperparameters are tuned. To check this, I ran the training experiment for nearly 25 hyperparameter settings (results here) with the old data and was able to successfully overfit the data. Why is this progress? Well, we know that we are on the right track and the task is now to reduce overfitting. Whether that will happen through more hyperparameter tuning, data collection or augmentation remains to be seen. See attachments for more details.
Why is the fit so perfect at the start and bad later? Well, that's because the first 90% of the test data is the training data I overfit to, and the rest is the validation data to which the network has not generalized well.
Quote: |
And finally, a network is trained!
Result summary (TLDR :-P) : No memory was used. Model trained. Results were garbage. Will tune hyperparameters now. Code pushed to github.
More details of the experiment:
Aim:
- To train a network to check that training occurs and get a feel for what the learning might be like.
- To set up the necessary framework to perform mulitple experiments and record results in a manner facilitating comparison.
- To track beam spot motion.
What I did:
- Set up a network that learns a framewise mapping as described in here.
- Training data: 0.9 x 1791 frames. Validation data: 0.1 x 1791 frames. Test data (only prediction): all the 1791 frames
- Hyperparameters: Attachment #1
- Did no tuning of hyperparameters.
- Compiled and fit the models and saved the results.
What I saw
- Attachment #2: data fed to the network after pre-processing - median blur + crop
- Attachment #3: learning curves.
- Attachment #4: true and predicted motion. Nothing great.
What I think is going wrong-
- No hyperparameter tuning. This was only a first pass but is being reported as it will form the basis of all future experiments.
- Too little data.
- Maybe wrong architecture.
Well, what now?
- Tune hyperparameters (try to get the network to overfit on the data and then test on that; we'll then know whether all we probably need is more data).
- Currently the network has around 200k parameters. Maybe reduce that.
- Set up a network that takes as input (one example corresponding to one forward pass) a bunch of frames and predicts a vector of position values that can be used as continuous data.
Quote: |
I got to speak to Gabriele about the project today. He suggested that if I am using Rana's memory-based approach, I had better be careful to ensure that the network does not falsely learn to predict a sinusoid at all points in time, and that if I use the frame-wise approach I should try to somehow incorporate the fact that certain magnitudes and frequencies of motion are simply not physically possible, something that Rana and Gautam emphasized as well.
Quote: |
- Network training for beam spot tracking: I will begin training the convolutional network with the data pre-processed as described above. I will also simultaneously prepare data acquired from the GigE and train networks on that. Note: I planned to experiment with frame-wise predictions and hence did some of the work described above. However, I will restrict the number of experiments on that and perform more of those that use 3D convolution. Rana also pointed out that it would be interesting to have the network output uncertainty in the predictions. I am not sure how this can be done, but I will look into it.
|
|
|
|
Attachment 1: Motion.pdf
|
|
Attachment 2: Error.pdf
|
|
Attachment 3: Learning_curves.pdf
|
|
14734
|
Mon Jul 8 17:52:30 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | After the two earthquakes, I collected some data by dithering the optic and recording the QPD readings. Today, I set up scripts to process the data and then train networks on this data. I have pushed all the code to github. I attempted to train a bunch of networks on the new data to test if the code was alright, but realised quickly that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.
Training networks with memory
I set up a network to handle input volumes (stacks of frames) instead of individual frames. It still uses 2D convolution and not 3D convolution. I am currently training on the new data. However, I was curious to see if it would provide any improved performance over the results I put up in the previous elog. After a bit of hyperparameter tuning, I did get some decent results, which I have attached below. These are for Pooja's old data, though, which makes them, ah, not so relevant. Also, this testing isn't truly representative because the test data isn't entirely new to the network. I am going to train this network on the new data now with the following objectives (in the following steps):
- Train on data recorded at one frequency, generalize/ test on unseen data of the same frequency, large amplitude of motion
- Train on data recorded at one frequency, generalize/ test on unseen data of a different frequency, large amplitude of motion
- Train on data recorded at one frequency, generalize/ test on unseen data of same/ different frequency, small amplitude of motion
- Train on data at different frequencies and generalize/ test on data with a mixture of frequencies at small amplitudes - Gautam pointed out that the network would truly be superb (good?) if we can just predict the QPD output from the video of the beam spot when nothing is being shaken.
I hope this looks alright. Rana also suggested I try LSTMs today; I'll maybe code it up tomorrow. What I have in mind: a conv-layer encoder, a flatten, followed by an LSTM layer (why not plain RNNs? LSTMs handle vanishing gradients better, so why the hassle).
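A rough sketch of that architecture (a per-frame conv encoder shared across timesteps via TimeDistributed, flattened, then an LSTM); the filter counts, sequence length and image size below are placeholders, not a tested configuration:
# Sketch of a CNN-encoder + LSTM for stacks of frames (placeholder sizes).
# Input: (batch, timesteps, H, W, 1) volumes; output: one (x, y) position per volume.
from tensorflow.keras import layers, models

def build_cnn_lstm(timesteps, height, width):
    inp = layers.Input(shape=(timesteps, height, width, 1))
    # Per-frame convolutional encoder, weights shared across timesteps
    x = layers.TimeDistributed(layers.Conv2D(8, 3, activation='relu', padding='same'))(inp)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
    x = layers.TimeDistributed(layers.Conv2D(16, 3, activation='relu', padding='same'))(x)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
    x = layers.TimeDistributed(layers.Flatten())(x)
    # LSTM over the sequence of encoded frames
    x = layers.LSTM(64)(x)
    out = layers.Dense(2)(x)  # predicted beam-spot (x, y)
    model = models.Model(inp, out)
    model.compile(optimizer='adam', loss='mse')
    return model

model = build_cnn_lstm(timesteps=10, height=64, width=64)
model.summary()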
Quote: |
The quoted elog has figures which indicate that the network did not learn (train or generalize) on the data used. This is a scary thing, as (in my experience) it indicates that something is fundamentally wrong with either the data or the model, and learning will not happen no matter how the hyperparameters are tuned. To check this, I ran the training experiment for nearly 25 hyperparameter settings (results here) with the old data and was able to successfully overfit the data. Why is this progress? Well, we know that we are on the right track and the task now is to reduce overfitting. Whether that will happen through more hyperparameter tuning, data collection or augmentation remains to be seen. See attachments for more details.
Why is the fit so perfect at the start and bad later? Well, that's because the first 90% of the test data is the training data I overfit to, and the rest is the validation data to which the network has not generalized well.
|
|
Attachment 1: Motion.pdf
|
|
14741
|
Tue Jul 9 22:13:26 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | I received access today. After some incredible hassle, I was able to set up my repository and code on the remote system. Following this, Gautam wrote to Gabriele to ask him about which GPUs to use and if there was a previously set up environment I could directly use. Gabriele suggested that I use pcdev2 / pcdev3 / pcdev11 as they have good gpus. He also said that I could use source ~gabriele.vajente/virtualenv/bin/activate to use a virtualenv with tensorflow, numpy etc. preinstalled. However, I could not get that working, so I created my own virtual environment with the necessary tensorflow, keras, scipy, numpy etc. libraries at suitable versions. On ssh-ing into the cluster, it can be activated using source /home/millind.vaddiraju/beamtrack/bin/activate. How do I know everything works? Well, I trained a network on it! With the new data. Attached (see attachment #1) is the prediction data for completely new test data. Yeah, it's not great, but I got to observe the time it takes for the network to train for 50 epochs:
- On pcdev5 CPU: one epoch took ~1500 s, which is roughly 25 minutes (see Attachment #2). Gautam suggested that I try to train my networks on Optimus; I think this evidence should be sufficient to decide against that idea.
- On my GTX 1060: one epoch took ~30 s, i.e. about 25 minutes to train a network for 50 epochs.
- On pcdev11 GPU (Titan X, I think): each epoch took ~16 s, which is a far more reasonable time.
Therefore, I will carry out all training only on this machine from now on.
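(The per-epoch times above were just wall-clock observations; something like the small Keras callback below, which is only a sketch and not the script actually used, makes this CPU/GPU comparison easy to log.)
# Sketch of a per-epoch wall-clock timer (not the actual training script).
import time
from tensorflow.keras.callbacks import Callback

class EpochTimer(Callback):
    """Record the wall-clock time of each epoch so different machines can be compared."""
    def on_train_begin(self, logs=None):
        self.times = []
    def on_epoch_begin(self, epoch, logs=None):
        self._t0 = time.time()
    def on_epoch_end(self, epoch, logs=None):
        dt = time.time() - self._t0
        self.times.append(dt)
        print("epoch {}: {:.1f} s".format(epoch, dt))

# usage: model.fit(x, y, epochs=50, callbacks=[EpochTimer()])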
Note to self:
Steps to repeat what you did are:
- ssh in to the cluster using ssh albert.einstein@ssh.ligo.org as described here.
- activate the virtualenv as described above
- navigate to code and run it.
Quote: |
I attempted to train a bunch of networks on the new data to test if the code was alright, but realised quickly that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.
|
|
Attachment 1: predicted_motion_first.pdf
|
|
Attachment 2: pcdev5_time.png
|
|
14746
|
Wed Jul 10 22:32:38 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | I trained a bunch (around 25 or so, to tune hyperparameters) of networks today. They were all CNNs. They all produced garbage. I also looked at LSTM networks with CNN encoders (see this very useful link) and gave some thought to what kind of architecture we want to use and how to go about programming it (in Keras; I'll use TensorFlow directly if I feel I need more control). I will code it up tomorrow after some thought and discussion. I am not sure if abandoning CNNs is the right thing to do or if I should continue probing this with more architectures and tuning attempts. Any thoughts?
Right now, after speaking to Stuart (ldas_admin), I've decided to code up the LSTM thing and then run that on one machine while probing the CNN thing on another.
Update on 10 July, 2019: I'm attaching all the results of training here in case anyone is interested in the future.
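For context, a hedged sketch of the sort of random search that this kind of hyperparameter tuning amounts to; the ranges and the train_and_evaluate helper are placeholders, not the code actually run:
# Sketch of a random hyperparameter sweep (ranges and helper are placeholders).
import json
import random

search_space = {
    'eta':         [1e-3, 1e-4, 1e-5],
    'dropout':     [0.5, 0.8],
    'dense_units': [32, 64, 128],
    'batch_size':  [16, 32],
}

def sample(space):
    return {k: random.choice(v) for k, v in space.items()}

results = []
for trial in range(25):
    params = sample(search_space)
    # train_and_evaluate stands in for the actual training code and is assumed
    # to return the final validation loss for this setting.
    val_loss = train_and_evaluate(**params)
    results.append({'params': params, 'val_loss': val_loss})

results.sort(key=lambda r: r['val_loss'])
print(json.dumps(results[:3], indent=2))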
Quote: |
I received access today. After some incredible hassle, I was able to set up my repository and code on the remote system. Following this, Gautam wrote to Gabriele to ask him about which GPUs to use and if there was a previously set up environment I could directly use. Gabriele suggested that I use pcdev2 / pcdev3 / pcdev11 as they have good gpus. He also said that I could use source ~gabriele.vajente/virtualenv/bin/activate to use a virtualenv with tensorflow, numpy etc. preinstalled. However, I could not get that working, so I created my own virtual environment with the necessary tensorflow, keras, scipy, numpy etc. libraries at suitable versions. On ssh-ing into the cluster, it can be activated using source /home/millind.vaddiraju/beamtrack/bin/activate. How do I know everything works? Well, I trained a network on it! With the new data. Attached (see attachment #1) is the prediction data for completely new test data. Yeah, it's not great, but I got to observe the time it takes for the network to train for 50 epochs:
- On pcdev5 CPU: one epoch took ~1500 s, which is roughly 25 minutes (see Attachment #2). Gautam suggested that I try to train my networks on Optimus; I think this evidence should be sufficient to decide against that idea.
- On my GTX 1060: one epoch took ~30 s, i.e. about 25 minutes to train a network for 50 epochs.
- On pcdev11 GPU (Titan X, I think): each epoch took ~16 s, which is a far more reasonable time.
Therefore, I will carry out all training only on this machine from now on.
Note to self:
Steps to repeat what you did are:
- ssh in to the cluster using ssh albert.einstein@ssh.ligo.org as described here.
- activate the virtualenv as described above
- navigate to code and run it.
Quote: |
I attempted to train a bunch of networks on the new data to test if the code was alright, but realised quickly that training on my local machine is not feasible at all, as training for 10 epochs took roughly 6 minutes. Therefore, I have placed a request for access to the cluster and am waiting for a reply. I will now set up a bunch of experiments to tune hyperparameters for this data and see what the results are.
|
|
|
14809
|
Thu Jul 25 00:26:47 2019 |
Milind | Update | Cameras | Convolutional neural networks for beam tracking | Somehow I never got around to doing the pixel-sum thing for the new real data from the GigE. Since I have to do it for the presentation, I'm putting up the results here anyway. I've normalized the pixel-sum (centroid) estimate and computed the SNR against the true readings.
SNR = (power in true readings) / (power in error signal between true and predicted values)
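In case it is useful later, a minimal numpy sketch of the pixel-sum (intensity-weighted centroid) and the SNR defined above, assuming frames is an (N, H, W) array and true the corresponding true motion (these names are just placeholders):
# Sketch of the intensity-weighted centroid ("pixel sum") and the SNR above.
# Assumes `frames` is (N, H, W) and `true` is the corresponding true motion.
import numpy as np

def centroid(frames):
    N, H, W = frames.shape
    ys, xs = np.mgrid[0:H, 0:W]
    total = frames.sum(axis=(1, 2))
    cx = (frames * xs).sum(axis=(1, 2)) / total
    cy = (frames * ys).sum(axis=(1, 2)) / total
    return np.stack([cx, cy], axis=1)

def snr(true, predicted):
    # SNR = power in true readings / power in the error signal
    err = true - predicted
    return np.sum(true ** 2) / np.sum(err ** 2)

pred = centroid(frames)[:, 0]               # e.g. the yaw (x) component
pred = (pred - pred.mean()) / pred.std()    # normalize, as described above
print("SNR:", snr(true, pred))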
Attachment #2 is the SNR of the best-performing CNN, for comparison. |
Attachment 1: centroid.pdf
|
|
Attachment 2: subplot_yaw_test.pdf
|
|
374
|
Thu Mar 13 03:07:19 2008 |
Lisa | Metaphysics | Environment | Coolness at the 40m | My first (and hopefully not last) week at the 40m lab is ending 
I found this lab really cool, the people working here really cool as well, and this e-log....
this e-log is not just cool, it is FANTASTIC!!!
LISA |
14052
|
Wed Jul 11 16:23:21 2018 |
aaron | Update | OMC | Coordination of the Output Mode-cleaner Mirror Insertion Expedition (COMMIE) | I started this document on my own with notes as I was tracing the beam path through the output optics, as well as some notes as I started digging through the elogs. Let's just put it here instead....
- Beam from AS port into OMMT
- Reflect off OM5-PJ
- TO DO: check that the PZT works
- 40/P/F/L, 1525-45-P
- Pick off from OMPO
- TO DO: determine how much power is needed for the pick off, choose an appropriate optic (for this vent probably 50-50 is fine)
- The PO beam goes to OM6
- Reflect off MMT1???
- TO DO: determine if this mirror has a PZT, get it working
- Has a PZT?
- Which PZT channel on the DAQ?
- Is there a cable going from the DAC to the PZT?
- Is the PZT functional?
- How many PZTs does this mirror actually have?
- TO DO: determine the real name of this optic, find its recent history in the elog
- TO DO: determine the correct telescope parameters to optimally couple into the mode cleaner given the following:
- TO DO: look up how the radius of curvature (RC) of the OMC has changed, and therefore what telescope parameters are necessary
- Focused by MMT2???
- TO DO: determine if this mirror has a PZT
- Has a PZT?
- Which PZT channel on the DAQ?
- Is there a cable going from the DAC to the PZT?
- Is the PZT functional?
- How many PZTs does this mirror actually have?
- TO DO: determine the real name of this optic, find its recent history in the elog
- TO DO: what about this optic is tunable? It looks bulky
- Collimated by MMT3???
- TO DO: determine if this mirror has a PZT
- Has a PZT?
- Which PZT channel on the DAQ?
- Is there a cable going from the DAC to the PZT?
- Is the PZT functional?
- How many PZTs does this mirror actually have?
- Steered by MMT4???
- TO DO: determine the real name of this optic
- TO DO: why is this optic so small? Looks different from the rest, maybe weird space constraint
- Steered by MMT5???
- TO DO: why is this optic so large compared to OMMT4?
- TO DO: is there a more space efficient way of steering this beam, or even some way that avoids having to steer with three distinct optics
- Steered by MMT6???
- TO DO: Can this optic be removed with some clever new beam path?
- Cleaned by the OMC
- TO DO: Where does the promptly reflected beam from OMC1 go after it exits the chamber?
- TO DO: check the PZTs
- Has a PZT?
- Which PZT channel on the DAQ?
- Is there a cable going from the DAC to the PZT?
- Is the PZT functional?
- How many PZTs does the OMC actually have?
- TO DO: Determine if a new OMC configuration would be more ideal for the squeezing experiment
- This is a large task, not part of this immediate vent
- TO DO: What is done with the OMC reflection? What is done with the transmission?
- TO DO: Check the logs about how the OMC had been in use; should be mostly from rob ward
- Reflected beam goes to the next chamber
- Transmitted beam is split by OM7???
- TO DO: find the actual name of this optic
- TO DO: why does this have the R/T that it does?
- Reflected beam goes to my OMPD
- TO DO: figure out what this PD is used for, and whether we even need it
- I think this might be the camera mentioned in 40m elog 21
- Elog 42 says the 4 QPDs for the OMC have MEDM screens located under C2TPT; is this a clue for channel names?
- Transmitted beam is reflected to the next chamber by OM8???
- TO DO: determine the name of this optic
- TO DO: Where does this beam go? What is it used for?
- Beam Dumps to add
- Transmission through OM5? Probably don’t need…
- OMMT1 transmission
- OMMT steering mirror transmissions
- OMC transmissions? Probably not?
- OMPD transmission?
- OM8 transmission
- Green scattering off of the window where the beam goes after GR_SM5
- Backscatter from the OMC prompt reflection to the window
- Backscatter from the OMC reflection to the window
- Backscatter from the MC beam off the window (this beam just travels through this chamber, interacts with no optics; there is also what looks like a small blue beam on this diagram, so maybe need to dump that backscatter too)
- Backscatter from the PO beam from OM6 going through the chamber window
- Backscatter from IM1 out the window
- There is a small blue beam from OMMT3 that goes through this window as well; I'm not sure exactly what it is from or for, or if it is physical (there are a few of these strange blue lines, I'm probably just misreading the diagram)
- TESTS TO DO
- Characterize the PZT control
- Lock the OMC with a PZT dither lock
- Eg elog 59
- “Tap-tap-tappy-tap test” to find resonances
- Look at combination of PDH error signal and DCPD signal???
- See elog 86 for results from initial OMC install—Nov 2007
- Check wiggling wires, etc
- TFs to check? Vertical TF?
- OMC Length check— see for eg elog 768
- ADDITIONAL TO DO
- Mode matching calculation for new radius of curvature optics—see elog 1271
- The current MMT is not the optimal configuration even for the old Rc (see 3077 and 3088)
Notes during reading elog
- Entry 590 has a labelled picture of the optics setup with OMC
- Mention of omcepics at elog 894
- Some important changes happened in elog 1823
- 1''->2'' mirror out of the vacuum--I should check whether this is still there, or if it has been moved
- [many more changes.....]
- There were at one time 2 cameras monitoring OMCT and R (see 4492, 4493)
- Some OMC PZT HV supply info is at elog 4738, 4740...
- There are some photos of the OMC table at elog 5120, and a note about moving some optics
- Not strictly about the OMC, but I really like Suresh's diagram 6756, I'll make something similar for the OMC electronics
- although it is about adding the tip tilt electronics, which I think required a new flange for the OMC chamber
- OMC stage 1 and 2 are the steering mirrors going into the OMC, and were controlled by EPICS chans (6875, 6884)
- these PZT HV supplies lived in OMC_SOUTH (or maybe 1Y3? see elog 6893), the driver in OMC_NORTH (LIGO-D060287)
- Photos of these supplies in 7696
- There are pictures of the OMC and its PZTs in 7401
- The OMC HV supply was moved to power a different set of PZTs (see 7603)
- Talk of replacing the PZTs with picomotors or tip/tilts in 7684
- More pictures of the OMC table before the OMC was 'removed' are here (8114), and in 12563/12571 Gautam links to a Picasa album with pictures from just before the beam was diverted
|
14109
|
Fri Jul 27 17:16:14 2018 |
Sandrine | Update | Thermal Compensation | Copied working scripts for mode spectroscopy into new directory (modeSpec) | The scripts: AGfast.py, make HDF5.py, plotSpec_marconi.py, and SandrineFitv3.py were copied into the new directory modeSpec.
The path is: /opt/rtcds/caltech/c1/scripts/modeSpec
These scripts can still be found under Annalisa's directory under postVent. |
3260
|
Wed Jul 21 15:43:38 2010 |
Megan | Summary | PSL | Copper Layer Thickness on the Reference Cavity | Using the equation for thermal resistance
Rthermal = L/(k*A)
where k is the thermal conductivity of a material, L is the length, and A is the surface area through which the heat passes, I could find the thermal resistance of the copper and stainless steel on the reference cavity. To reduce temperature gradients across the vacuum chamber, the thermal resistance of the copper must be the same or less than that of the stainless steel. Since the copper is directly on top of the stainless steel, the length and width will be the same for both, just the thickness will be different (for ease of calculation, I assumed flat, rectangular strips of the metal). Assuming we wish to have a thermal resistance of the copper n times less than that of the stainless steel, we have
RCu = RSS/n
or
L/(kCu*w*tCu) = L/(kSS*w*tSS*n)
so that
tCu/tSS = n*kSS/kCu
We know that kCu = 401 W/m*K and kSS = 16 W/m*K, so
tCu/tSS = 0.0399*n
By using the drawings for the short reference cavity vacuum chamber (the only one I could find drawings for online) I found a wall thickness of 0.12 in, or 0.3048 cm. So for the same thermal resistance in both metals the copper must be 0.0122 cm thick, and for a thermal resistance 10 times less it must be 0.122 cm thick. So we will have to keep wrapping the copper on the vacuum chamber! |
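A quick numerical check of the thicknesses quoted above (copper k ≈ 401 W/m*K, stainless steel k ≈ 16 W/m*K):
# Numerical check of the copper thickness estimate in the entry above.
k_cu = 401.0   # W/(m*K), copper
k_ss = 16.0    # W/(m*K), stainless steel
t_ss = 0.3048  # cm, chamber wall thickness (0.12 in)

for n in (1, 10):
    t_cu = n * (k_ss / k_cu) * t_ss
    print("n = {}: required copper thickness = {:.4f} cm".format(n, t_cu))
# n = 1: 0.0122 cm; n = 10: 0.1216 cm, matching the values quoted above.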
4930
|
Fri Jul 1 18:41:53 2011 |
Jamie | Update | SUS | Core optic sus damping controllers normalized | I found many of the core optic (ETMs, ITMs, BS, PRM, SRM) suspension DOF damping controllers (SUSPOS, SUSPIT, SUSYAW, SUSSIDE) to be in various states of disarray:
- Many of the controllers did not have their "Cheby" and "BounceRoll" filters switched on.
- Some of the controllers didn't even have the Cheby or BounceRoll filters at all, or had other filters in their place.
- ETMY was particularly screwy (I'll make a separate follow-up post about this)
- A bunch of random other unused filters lying around.
- oplev servos not on
- etc.
I went around and tried to clean things up by "normalizing" all of the DOF damping filter banks, i.e. giving them all the same filters and clearing out unused filters, and then turning on all the appropriate filters in all core optic damping filter banks ("3:0.0", "Cheby", "BounceRoll"). I also made sure that all the outputs were properly on and that the oplev servos were on.
A couple of the optics had to have their gains adjusted to compensate for filter changes, but nothing too drastic.
Everything now looks good, and all optics are nicely damped.
I didn't touch the MC sus damping controllers, but they're in a similar state of disarray and could use a once-over as well.
|
|