Wall StripTool traces showed that the IMC had not been locked for at least 8 hours when I came in this morning. Going to the IMC autolocker log, it looks like the last timestamp was at ~6pm yesterday. Megatron was responding to ping, but I couldn't ssh into it. So I went over to the machine and did a hard reboot via the front-panel power switch. The computer took ~10 mins to come back online and respond to ping. Once it did, I was able to ssh into it. However, trying the usual commands to restart the IMC autolocker and FSS Slow loops didn't work. Specifically, monitoring the logfile with tail -f Autolocker.log, I would see that the autolocker seemed to get stuck after starting the "blinky" script. Trying to restart the process using sudo initctl restart MCautolocker, init would print to the shell that the restart had worked, and reported the PID, but the logfile wouldn't update "live" as it should when tail is used with the -f option. All very strange.
Anyways, as a last resort, I kill -9'ed the PID for the init instance, and init automatically restarted the Autolocker - this did the trick, IMC is locked now and logfile seems to be getting updated normally.
I also cleared a bunch of matlab crash dump files in the home directory.
The MC autolocker and FSSslow scripts weren't running on Megatron. These were started by running the following commands on megatron:
sudo initctl start MCautolocker
sudo initctl start FSSslow
The new autoburt cronjob was failing because the .cron file was not executable (fixed by chmod +x burtnew.cron), and the new perl script didn't use the full path for ifconfig. Similarly, the simulink webview updating script was failing because the full path for matlab wasn't being given. Both of these fixes have been tested and committed to SVN.
chmod +x burtnew.cron
In general, cron scripts can be a real pain, since the cron process doesn't run our .bashrc, and so doesn't know about updates to $PATH or other environment variables that get updated through /ligo/cdscfg/workstationrc.sh, which is called by .bashrc. So something that works fine when run manually in a terminal may not play out as expected when run by cron.
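The usual fix is to make the environment explicit in the crontab itself, or to use absolute paths inside the scripts. A minimal sketch - the PATH contents and script location below are illustrative placeholders, not the actual 40m values:

```
# cron does not source ~/.bashrc or workstationrc.sh, so spell it out:
PATH=/usr/local/bin:/usr/bin:/bin:/sbin
# ...or call binaries by absolute path inside the script, e.g. /sbin/ifconfig
0 * * * * /bin/bash /path/to/scripts/burtnew.cron
```

Either way, the rule of thumb is: anything a cron script calls must resolve without the interactive login environment.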
SLOWDC servo was dead. I followed EricQ's instructions.
MC Autolocker got stuck somewhere. I had to go to megatron and kill the MC Autolocker.
init relaunched the autolocker automatically, and now it started properly.
Last night around 5pm or so, Alex had remotely logged in and made some fixes to megatron.
First, he changed the local name from scipe11 to megatron. There were no changes to the network; this was a purely local change. The name server running on Linux1 is what provides the name-to-IP conversions. Scipe11 and Megatron both resolve to distinct IPs. Given that c1auxex wasn't reported to have any problems (and I didn't see any problems with it yesterday), this was not a source of conflict. It's possible that Megatron could get confused while in that state, but it would not have affected anything outside its box.
Just to be extra secure, I've switched megatron's personal router over from a DMZ setup to only forwarding port 22. I have also disabled the dhcp server on the gateway router (184.108.40.206).
Second, he turned the mdp and mdc codes on. This should not have conflicted with c1omc.
This morning I came in and turned megatron back on around 9:30 and began trying to replicate the problems from last night between c1omc and megatron. I called Alex and we rebooted c1omc while megatron was on, but not running any code, and without any changes to the setup (routers, etc). We were able to burt restore. Then we turned the mdp, mdc and framebuilder codes on, and again rebooted c1omc, which appeared to burt restore as well (I restored from 3 am this morning, which looks reasonable to me).
Finally, I made the changes mentioned above to the router setups in the hope that this will prevent future problems but without being able to replicate the issue I'm not sure.
Megatron's top fan, rear PS, and temperature front-panel lights were all lit amber this morning. I checked the service manual, found at:
According to the manual, this means a front fan failed, a voltage event occurred, and we hit a high temperature threshold. However, there were no failure lights on any of the individual front fans (which should have been the case given the front-panel fan light). The lights remained on after I shut down megatron. After unplugging, waiting 30 seconds, and plugging the power cords back in, the lights went off and stayed off. Megatron seems to come up fine.
I unplugged the IO chassis from megatron, rebooted, and tried to start Peter's plant model. However, it still prints that it's starting, but really doesn't. One thing I forgot to mention in the previous elog on the matter is that on the local monitor it prints "shm_open(): No such file or directory" every time we try to start one of these programs.
I've changed megatron's controls account default shell to tcsh (like it was before). It now sources cshrc.40m in /cvs/cds/caltech/ correctly at login, so all the usual aliases and programs work without doing any extra work.
We are moving towards a first test of getting Kiwamu's green locking signals into the new front end at the new X end, as well as sending signal out to the green laser temperature control.
Towards that end, we borrowed the router which we were using as a firewall for megatron. At the moment, megatron is not connected to the network. The router (a linksys N wire router), was moved to the new X end, and setup to act as a firewall for the c1iscex machine.
At this point, we need to figure out which channels of the DAC correspond to which outputs of the anti-imaging board (D000186) and coil driver outputs. Ideally, we'd like to simply take a spare output from that board and bring it to the laser temperature control. The watchdogs will be disabled when testing to avoid any unfortunate mis-sent signals to the coils. It looks like it should be something like channels 6,7,8 are free, although I'm not positive if that's the correct mapping or if there's an n*8 + 6,7,8 mapping.
The ADC should be much easier to determine, since we only have a single 16 channel set coming from the lemo breakout box. Once we've determined channels, we should be all set to do a test with the green system.
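To keep the bookkeeping straight, here is a quick sketch of what the two mapping hypotheses above would predict. The board count and the set of free channels are guesses for illustration, not verified wiring:

```python
# Sketch of the two candidate DAC channel mappings mentioned above.
# 'free' channels and board count are hypothetical, from the elog text.
def candidate_channels(free=(6, 7, 8), boards=4, chans_per_board=8):
    """Return the flat DAC indices each hypothesis predicts."""
    direct = list(free)                   # hypothesis 1: channels 6,7,8 as-is
    per_board = [n * chans_per_board + c  # hypothesis 2: n*8 + {6,7,8}
                 for n in range(boards) for c in free]
    return direct, per_board

direct, per_board = candidate_channels()
print(direct)      # [6, 7, 8]
print(per_board)   # [6, 7, 8, 14, 15, 16, 22, 23, 24, 30, 31, 32]
```

Driving one channel at a time with the watchdogs disabled and watching the anti-imaging board outputs would distinguish the two cases.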
We changed the pointer on /cvs/cds/caltech/target/gds/bin/awgtpman from
Then killed the megatron framebuilder and testpoint manager (daqd, awgtpman), restarted, hit the daq reload button from the GDS_TP screen.
This did not fix everything. However, it did seem to fix the problem where it needed a rtl_epics under the root directory which did not exist. Alex continued to poke around. When next he spoke, he claimed to have found a problem in the daqdrc file. Specifically, the cvs/cds/caltech/target/fb/ daqdrc file.
set gds_server = "megatron" "megatron" 10 "megatron" 11;
He said this needed to be:
set gds_server = "megatron" "megatron" 11 "megatron" 12;
However, during this, I had looked at the file and found dataviewer working, while still with the 10 and 11. Doing a diff on a backup of daqdrc shows that Alex also changed
set controller_dcu=10 to set controller_dcu=12, and commented the previous line.
He also changed set debug=2 to set debug=0.
In a quick test, we changed the 11 and 12 back to 10 and 11, and everything seemed to work fine. So I'm not sure what that line actually does. However, the set controller_dcu line seems to be important, and probably needs to be set to the dcu id of an actually running module (it probably doesn't matter which one, but at least one that is up). Anyways, I set the gds_server line back to 11 and 12, just in case there's numerology going on.
I'll add this information to the wiki.
I did a full make clean and make uninstall-daq-tst, then rebuilt it. I copied a good version of filters to C1TST.txt in /cvs/cds/caltech/chans/ as well as a good copy of screens to /cvs/cds/caltech/medm/c1/tst/.
Test points still appear to be broken. For a single measurement in DTT I was somehow able to start, although the output in the results page didn't seem to have any actual data in the plots, so I'm not sure what happened there - after that it just said "unable to select test points". It now says that when starting up as well. The tst channels are the only ones showing up. However, the 1k channels seem to have disappeared from Data Viewer, and now only 16k channels are selectable, but they don't actually work. I'm not actually sure where the 1k channels were coming from earlier, now that I think about it. They were listed like C1:TST-ETMY-SENSOR_UL and so forth.
RA: Koji and I added the SENSOR channels by hand to the .ini file last night so that we could have data stored in the frames ala c1susvme1, etc.
Alex and I took a look at megatron this morning, and it was in the same state I left it on Friday, with file system errors. We were able to copy the advLIGO directory Peter had been working in to Linux1, so it should be simple to restore the code. We then tried just running fsck and overwriting bad sectors, but after about 5 minutes it was clear it could potentially take a long time (5-10 seconds per unreadable block, with an unknown number of blocks, possibly tens of millions). The decision was made to simply replace the hard drive.
Alex is of the opinion that the hard drive failure was a coincidence. Or rather, he can't see how the RFM card could have caused this kind of failure.
Alex went to Bridge to grab a USB-to-SATA adapter for a new hard drive, and was going to copy a duplicate install of the OS onto it, and we'll try replacing the current hard drive with it.
The "apt-get update" was failing on some machines because it couldn't find the 'Debian squeeze' repos, so I made some changes so that Megatron could be upgraded.
I think Jamie set this up for us a long time ago, but now the LSC has stopped supporting these versions of the software. We're running Ubuntu 12 and 'squeeze' is meant to support Ubuntu 10. Ubuntu 12 (which is what LLO is running) corresponds to 'Debian wheezy', Ubuntu 14 to 'Debian jessie', and Ubuntu 16 to 'Debian stretch'.
We should consider upgrading a few of our workstations to Ubuntu 14 LTS to see how painful it is to run our scripts and DTT and DV. Better to upgrade a bit before we are forced to by circumstance.
I followed the instructions from software.ligo.org (https://wiki.ligo.org/DASWG/DebianWheezy) and put the recommended lines into the /etc/apt/sources.list.d/lsc-debian.list file.
but I still got 1 error (previously there were ~7 errors):
W: Failed to fetch http://software.ligo.org/lscsoft/debian/dists/wheezy/Release Unable to find expected entry 'contrib/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)
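One possible tweak, assuming the leftover error is apt looking for i386 packages in the 'contrib' component: restrict the architecture in the sources line. The [arch=...] option is standard APT syntax, but the right component list for the LSC repo should be checked against the DASWG wiki rather than taken from this sketch:

```
# /etc/apt/sources.list.d/lsc-debian.list
# [arch=amd64] stops apt from requesting contrib/binary-i386 entries
deb [arch=amd64] http://software.ligo.org/lscsoft/debian wheezy contrib
```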
Restarting now to see if things work. If it's OK, we ought to change our squeeze lines into wheezy for all workstations so that our LSC software can be upgraded.
I would recommend upgrading the workstations to one of the reference operating systems, either SL7 or Debian squeeze, since that's what the sites are moving towards. If you do that you can just install all the control room software from the supported repos, and not worry about having to compile things from source anymore.
Now the lock with megatron is pretty easy. Really. It's very cool.
As we saw an oscillation of the YARM servo, we temporarily increased the gain of the TRY filter by a factor of 2 (0.003->0.006) and decreased the gain of the YARM servo by a factor of 2 (1->0.5). This reduces the overall servo gain by a factor of 4. The need for this change seems to have come from the change of the ADC/DAC range.
We finally fixed the hi-gain PD transmission communications from Megatron to c1lsc by tracking down the correct RFM memory location (which is unhelpfully labeled as a qpd channel in both losLinux and lsc40.m). The memory location is 0x11a1e0, and is referred to as qpdData.
I have removed the RFM card from Megatron and left it (along with all the other cables and electronics) on the trolly in front of the 1Y9 rack.
Megatron proceeded to boot normally up until it started loading CentOS 5. During the Linux boot process it checks the file systems. At this point we hit an error:
/dev/VolGroup00/LogVol00 contains a file system with errors, check forced
Error reading block 28901403 (Attempt to read block from filesystem resulted short read) while doing inode scan.
/dev/VolGroup00/LogVol00 Unexpected Inconsistency; RUN fsck MANUALLY
So I ran fsck manually, to see if I could get some more information. fsck reports back that it can't read block 28901403 (due to a short read), and asks if you want to ignore(y)?. I ignore (by hitting space), and unfortunately touch it an additional time. The next question it asks is force rewrite(y)? So I apparently forced a rewrite of that block. On further ignores (but no forced rewrites) I continue seeing short read errors at 28901404, *40, *41, *71, *512, *513, etc. So not totally contiguous. Each iteration takes about 5-10 seconds. At this point I reboot, but the same problem happens again, although it starts at 28901404 instead of 28901403. So apparently the forced rewrite fixed something, but I don't know if this is the best way of going about this. I'm just wondering if there's any other tricks I can try before I just start rewriting random blocks on the hard drive. I also don't know how widespread this problem is and how long it might take to complete (if it's a large swath of the hard drive and it takes 10 seconds for each block that's wrong, it might take a while).
So for the moment, megatron is not functional. Hopefully I can get some advice from Alex on Monday (or from anyone else who wants to chime in). It may wind up being easiest to just wipe the drive and re-install real time linux, but I'm no expert at that.
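For a sense of scale, here is the back-of-the-envelope arithmetic behind "it might take a while", assuming ~7.5 s per bad block (the middle of the 5-10 s range observed above). The block counts are hypothetical, just to bracket the problem:

```python
# Worst-case manual-fsck time at ~7.5 s per unreadable block.
# Block counts below are hypothetical brackets, not a measurement.
def fsck_hours(bad_blocks, seconds_per_block=7.5):
    return bad_blocks * seconds_per_block / 3600.0

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10d} bad blocks -> {fsck_hours(n):10.1f} hours")
```

Even a thousand bad blocks is a couple of hours of babysitting, which supports the decision to just replace the drive.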
In investigating why megatron wouldn't talk to the network, I re-discovered the fact that it had been placed on its own private network to avoid conflicts with the 40m's test point manager. So I moved the linksys router (model WRT310N V2) down to 1Y9, plugged megatron into a normal network port, and connected its internet port to the rest of the gigabit network.
Unfortunately, megatron still didn't see the rest of the network, and vice-versa. I brought out my laptop and started looking at the settings. It had been configured with the DMZ zone on for 192.168.1.2, which was Megatron's IP, so communications should flow through the router. Turns out it needs the dhcp server on the gateway router (220.127.116.11) to be on for everyone to talk to each other. However, this may not be the best practice. It'd probably be better to set the router IP to be fixed, and turn off the dhcp server on the gateway. I'll look into doing this tomorrow.
Also during this I found the DNS server running on linux1 had its IP to name and name to IP files in disagreement on what the IP of megatron should be. The IP to name claimed 18.104.22.168 while the name to IP claimed 22.214.171.124. I set it so both said 126.96.36.199. (These are in /var/named/chroot/var/ directory on linux1, the files are 113.215.131.in-addr.arpa.zone and martian.zone - I modified the 113.215.131.in-addr.arpa.zone file). This is the dhcp served IP address from the gateway, and in principle could change or be given to another machine while the dhcp server is on.
I noticed recently that Megatron was running Ubuntu 12, so I've started its OS upgrade.
Megatron and IMC autolocking will be down for awhile, so we should use a different 'script' computer this week.
Mon Dec 9 14:52:58 2019
upgrade to Ubuntu 14 complete; now upgrading to 16
Megatron is now running Ubuntu 18.04 LTS.
We should probably be able to load all the LSC software on there by adding the appropriate Debian repos.
I have re-enabled the cron jobs in the crontab.
The MC Autolocker and the PSL NPRO Slow/Temperature control are run using 'initctl', so I'll leave that up to Shruti to run/test.
upgrade was done
cronjob testing wasn't done one by one 😢
burt snapshots were gone
I brought them back home 🏠
The burt snapshotting is still not so reliable - for whatever reason, the number of snapshot files that actually get written looks random. For example, the 14:19 backup today got all the snaps, but 15:19 did not. There are no obvious red flags in either the cron job logs or the autoburt log files. I also don't see any clues when I run the script in a shell. It'll be good if someone can take a look at this.
Overall a "meh" night for locking I think. The script to all-RF worked several times earlier in the evening, although it was delicate and failed at least 50% of the time. Later in the evening, we couldn't get even ~10% of the lock attempts all the way to RF-only.
Den looked into angular things tonight. With the HEPA bench at the Xend on (which it was found to be), the ETMX oplevs were injecting almost a factor of 10 noise (around 10ish Hz?) into the cavity axis motion (as seen by the trans QPD) as compared to oplevs off. Turning off the HEPA removed this noise injection.
Den retuned the QPD trans loops so that they only push on the ETMs, so that we can turn off the ETM oplevs, and leave the ITMs and their oplevs alone.
We are worried again about REFL55. There is much more light on REFL55 than there is on REFL11 (a 90/10 beam splitter divides the light between them), and we see this in the DC output of the PDs, but there seems to be very little actual signal in REFL55. Den drove a line (in PRCL?) while we had the PRMI locked with the arms held off resonance, and REFL55 saw the line a factor of 1,000 less than REFL 11 or REFL165. The analog whitening gain for REFL11 is +18dB, and for REFL55 is +21dB, so it's not that we have significantly less analog gain (that we think). We need to look into this tomorrow. As of now, we don't think there's much hope for transitioning PRMI to REFL55 without a health checkup.
I turned on the HEPA at the south end during the LSC. Sorry, I meant to turn it off.
It happened again. Defrosting required.
The main communications data structure is RFM_FE_COMMS, from the rts/src/include/iscNetDsc40m.h file. The following comments regard sub-structures inside it. I'm looking at all the files in /rts/src/fe/40m to determine how the structures are used, or if they seem to be unnecessary.
The dsccompad structure is used in the lscextra.c file. I am assuming I don't need to add anything to the model for these. They cover from 0x00000040 to 0x00001000.
FE_COMMS_DATA is used twice, once for dataESX (0x00001000 to 0x00002000), and once for dataESY (0x00002000 to 0x00003000).
Inside FE_COMMS_DATA we have:
status and cycle which look to be initialized then never changed (although they are compared to).
ascETMoutput[P,Y], ascQPDinput are all set to 0 then never used.
qpdGain is used, and set by asc40m, but not read by anything. It is at offset 114, so in dataESX it's at 4210 (0x00001072), and in dataESY at 0x00002072.
All the other parts of this substructure seem to be unused.
daqTest, dgsSet, low1megpad,mscomms seem unused.
dscPad is referenced, but doesn't seem to be set.
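As a sanity check on the addresses quoted above, the arithmetic is just block base plus member offset. The authoritative layout lives in rts/src/include/iscNetDsc40m.h; the numbers below are only the ones quoted in this entry:

```python
# qpdGain address bookkeeping, using the offsets quoted in this entry.
DATA_ESX_BASE = 0x00001000   # start of the dataESX FE_COMMS_DATA block
DATA_ESY_BASE = 0x00002000   # start of the dataESY block
QPDGAIN_OFFSET = 114         # 0x72 within FE_COMMS_DATA

print(hex(DATA_ESX_BASE + QPDGAIN_OFFSET))  # 0x1072 (decimal 4210)
print(hex(DATA_ESY_BASE + QPDGAIN_OFFSET))  # 0x2072
```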
pCoilDriver is a structure of type ALL_CD_INFO, inside a union called suscomms, inside FE_COMMS_DATA, and is used. In this structure, we have:
extData, an array of DSC_CD_PPY structures, which is used. Inside extData we have for each optic (ETMY has an offset of 9 inside the extData array):
Pos is set in sos40m.c via the line pRfm->suscomms.pCoilDriver.extData[jj].Pos = dsp[jj].data[FLT_SUSPos].filterInput; Elsewhere, Pos seems to be set to 1.0
Similarly, Pit and Yaw are set in sos40m, except with FLT_SUSPitch and FLT_SUSYaw, and being set elsewhere to 1.1 and 1.2. However, these are never applied to the ETMX and ETMY optics (the loop only goes through offsets 0 through 7 inclusive).
Side is set to 1.3 or 1.0 only, and not used.
ascPit, ascYaw, and lscPos are read by the losLinux.c code and updated by the sos40m.c code. For ETMY, their respective addresses are 0x11a1c0, 0x11a1c4, and 0x11a1c8.
lscTpNum and lscExNum seem to be initialized, read by losLinux.c, and set by sos40m.c.
modeSwitch is read, but looks to be used for turning dewhitening on and off. Similarly dewhiteSW1R is read and used.
This ends the DSC_CD_PPY structure.
lscCycle, which is used, although it seems to be an internal check.
dum is unused.
losOpLev is a substructure that is mostly unused. Inside losOpLev, opPerror, opYerror, opYout seem to be unused, and opPout only seems ever to be set to 0.
That's the end of ALL_CD_INFO and pCoilDriver.
After that we have itmepics, itmfmdata, itmcoeffs, rmbsepics, ..., etmyepics, etmyfmdata, etmycoeffs, which I don't see in use.
We have substructure asc inside mcasc, with epics, filt, and coeff char arrays. These seem to be asc and iowfsDrv specific.
lscIpc, lscepics, and lscla seem lsc-specific.
Then there is the lscdiag struct, which contains the v struct; its members cpuClock, vmeReset, nSpob, nPtrx, and nPtry don't seem to be used by losLinux.c.
The lscfilt structure contains the FILT_MOD dspVME, which seems to be used only by lsc40m.
The lsccoeff structure contains the VME_COEF pRfmCoeff, which again seems to be used only in the lsc code.
Then we have aciscpad, ascisc, ascipc, ascinfo, and mscepics which do not seem to be used.
ascepics and asccoeff are used in asc.c, but do not seem to be referenced elsewhere.
hepiepics , hepidsp, hepicoeff, hepists do not appear to be used.
I want to lock the PRFPMI again (to commission AS WFS). Have had some success - but in doing characterization, I find that the REFL port sensing is completely messed up compared to what I had before. Specifically, MICH and PRCL DoFs have no separation in either the 1f or 3f photodiodes.
I did make considerable changes to the RF source box, and so now the relative phase between the 11 MHz and 55 MHz signals is changed compared to what it was before. But do we really expect any effect even in the 1f signal? I am not able to reproduce this effect in simulation (Finesse), though I'm using a simplified model. I attach two sensing matrices to illustrate what I mean:
I've been trying to get the simPlant model to work, and my main method of testing is switching between the real ETMX and the simulated ETMX and comparing the resulting power spectrum (the closer the two are, the more our simulation works). While the simPlant is on, ETMX is NOT BEING DAMPED. I started this ~Wednesday, and the testing will continue today, then hopefully we'll get a similar simPlant up for ITMX (at which point, testing will continue for both ITMX and ETMX).
TL;DR: ETMX is not being continuously damped, XARM will likely be exhibiting some wonky behavior next week.
Last night Yehonathan and I located the two steel PMCs in the QIL, with help from Anchal. They are currently sitting on my desk in Bridge, inside a box that also contains optics and other OMC parts. I will bring them over to the 40m the next time I come.
Jon brought over a box of parts for constructing the metal PMCs. I have stored it along the Y-arm, on top of the green optics cabinet.
I didn't do an exhaustive inventory check, but the following are the rough contents of the box:
I didn't inspect the optics but since we have so many, I am hoping we can find 3 good quality ones for one cavity at least. We should check that the geometry is suitable for our RF sideband frequencies.
I took the metal PMC box, examined its contents, and found the following items:
There seem to be enough parts to build 2 PMCs + spares.
I find several problems in the metal PMCs:
PMC1 has a broken screw in one of its flat mirror mounts (Attachment 16). We need to get it out in the machine shop.
On PMC2, one of the flat mirrors has a scratch on the AR coating and its O-ring is failing (Attachment 17). The mirror and O-ring need to be replaced.
I measure the physical dimensions of the PMC with the help of https://dcc.ligo.org/LIGO-E1400332. The roundtrip is found to be 24 cm, which gives an FSR of 1.25 GHz.
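The FSR number checks out; it is just the speed of light divided by the roundtrip length:

```python
# FSR for a 24 cm roundtrip ring cavity: FSR = c / L_roundtrip
c = 299_792_458.0        # m/s, speed of light
L_rt = 0.24              # m, measured roundtrip length from above
fsr = c / L_rt
print(f"FSR = {fsr / 1e9:.3f} GHz")   # ~1.249 GHz, i.e. the 1.25 GHz above
```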
I use Evan Hall's Python script for calculating the mode spectrum as a function of the cavity length of the metal PMC and overlay the RF sidebands (Green dashed lines) on it (Attachment 18) to check for any HOM coincidence. The width of the lines is the mode splitting due to the cavity astigmatism.
It seems like the only issue might come from the 10th-order modes (green ribbon), which are hopefully small enough in reality.
I set up a test inverting amplifier circuit using the LT1677 opamp:
The input signal was a sine wave from the function generator with peak to peak amplitude of 20 mV and a frequency of 500 Hz and I received an output with an amplitude of about 670 mV and the same 500 Hz frequency, agreeing with the expected gain of -332k/10k = -33.2:
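The measured output is consistent with the ideal inverting-amp formula, as a quick check (resistor and signal values taken from the text above):

```python
# Ideal inverting amplifier: gain = -Rf/Rin, here -332k/10k
R_f, R_in = 332e3, 10e3
gain = -R_f / R_in                  # -33.2, as quoted above
v_in_pp = 20e-3                     # 20 mV pk-pk input sine at 500 Hz
v_out_pp = abs(gain) * v_in_pp
print(f"{v_out_pp * 1e3:.0f} mV pk-pk")   # 664 mV, vs the ~670 mV measured
```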
So now I know that the LT1677 works as expected with a negative supply voltage. My issue with Den's original circuit is that I was getting some clipping on the input to pin 2, which didn't seem to be due to any of the capacitors- I switched them all out. I set up a modified version of Den's circuit using a negative voltage input to see if I could fix this clipping issue:
I might reduce the supply voltages to +5V and -5V - I couldn't get my inverting amp circuit to work with +12V and -12V. I'll start testing this new circuit next week and start setting up some amplifier boxes.
I could not get Den's circuit to work for some reason with microphone input, so I decided to try to use another circuit I found online. I made some modifications to this circuit and made a schematic:
Using this circuit, I have been able to amplify microphone input and adjust my passband. Currently, this circuit has a high-pass at about 7 Hz and a low-pass at about 23 kHz. I tested the microphone using Audacity, an audio testing program. I produced various sine waves at different frequencies using this program and confirmed that my passband was working as intended. I also used a function generator to ensure that the gain fell off at the cutoff frequencies. Finally, I measured the frequency response of my amplifier circuit:
A text file with the parameters of my frequency response and the raw data is attached as well.
These results are encouraging but I wanted to get some feedback on this new circuit before continuing. This circuit seems to do everything that Den's circuit did but in this case I have a better understanding of the functions of the circuit elements and it is slightly simpler.
I took the spectrum of an EM172 connected to my amplifier inside and outside a large box filled with foam layers:
I also made a diagram with my plan for the microphone amplifier boxes. This is a bottom view:
I got the dimensions from this box: http://www.digikey.com/product-detail/en/bud-industries/CU-4472/377-1476-ND/696705
This seemed like the size I was looking for and it has a mounting flange that could make suspending it easier. Let me know if you have any suggestions.
I'll be doing a huddle test next week to get a better idea of the noise floor, as well as starting construction of the circuits to go inside the boxes and the boxes themselves.
I set up 3 of my circuits in the interferometer near MC2 to do a huddle test. I have the signals from my microphones going into C1:PEM-MIC_1_IN1, C1:PEM-MIC_2_IN1, and C1:PEM-MIC_3_IN1. These are channels C17-C19. Here are some pictures of my setup:
I'll likely be collecting data from this for a couple of hours. Please don't touch it for now- it should be gone soon. There are some wires running along the floor near MC2 as well.
In order to help Praful do his huddle test, I have temporarily arranged for the outputs of the 3 channels he wants to monitor to be acquired as DQ channels at 2048 Hz by editing the C1PEM model. No prior DQ channels were set up for the microphones. Data collected overnight should be sufficient for Praful's analysis, so we can remove these DQ channels from C1PEM before committing the updated model to the svn. There is in fact a filter that is enabled for these microphone channels that claims to convert the amplified microphone output to Pascals, but it is just a gain of 0.0005.
In the long term, once we install microphones around the IFO, we can update C1PEM to reflect the naming conventions for the microphones as is appropriate.
The results of my first huddle test were not so good- one of the signals did not match the other two very well- so I changed the setup so that the mics would be better oriented to receive the same signal. Pictures of the new setup are attached.
I also noticed some problems with one of my microphones so I soldered a new mic to bnc and switched it out. Just judging from Dataviewer, the signals seem to be more similar now. I'll be taking data for another few hours to confirm.
I used the Wiener filtering method described by Ignacio and Jessica (https://dcc.ligo.org/DocDB/0119/T1500195/002/SURF_Final.pdf and https://dcc.ligo.org/public/0119/T1500194/001/Final_Report.pdf) and got the following results:
The channel readout has a gain of 0.0005, and the ADC is 16-bit and operates at 20 V. The channel also reads the data out in Pa. I therefore had to multiply the timeseries by 1/0.0005 = 2000 to get it in units of counts, and then by (20 Volts)/(2^16 counts) to get back to the original signal in volts. The PSDs were generated after doing this calibration. I also integrated the PSDs and took the square root to get an RMS voltage for each microphone as a sanity check:
Mic 1: 0.00036 V
Mic 2: 0.00023 V
Mic 3: 0.00028 V
These values seem reasonable given that the timeseries look like this:
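The calibration bookkeeping above, as a short sketch. The gain and ADC range are taken from the text; whether the quoted 20 V is full-scale or ±20 V should be double-checked against the actual ADC:

```python
# Convert the "Pa"-calibrated channel output back to volts at the ADC:
# divide out the 0.0005 gain (x2000), then scale counts by 20 V / 2^16.
GAIN = 0.0005          # front-end gain applied to the mic channels
ADC_RANGE_V = 20.0     # volts, as quoted above
ADC_BITS = 16

def channel_to_volts(x):
    counts = x / GAIN                      # back to raw ADC counts
    return counts * ADC_RANGE_V / 2**ADC_BITS

print(channel_to_volts(1.0))   # volts corresponding to 1 unit of readout
```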
Seems too good to be true. Maybe you're overfitting? Please put all the traces on one plot and let us know how you do the parameter setting. You should use half the data for training the filter and the second half for doing the subtraction.
I didn't have a separate training set and data set, so I think that's why the graphs came out looking too good. The units on the graphs are also incorrect, I was interpreting PSD as ASD. I haven't been able to get my Wiener filtering code working well- I get unreasonable subtractions like the noise being larger than the unfiltered signal, so Eric showed me this frequency-dependent calculation described here: https://dcc.ligo.org/LIGO-P990002
This seems to be working well so far:
Here's all the plots on one figure:
Let me know if this looks believable.
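A minimal version of the train-on-half, subtract-on-half scheme suggested above, using a plain least-squares FIR Wiener filter on synthetic data. This is a sketch of the validation split, not the frequency-dependent method from LIGO-P990002; real use would substitute the mic and target channel timeseries:

```python
import numpy as np

# Train an FIR Wiener filter on the first half, subtract on the second half.
rng = np.random.default_rng(0)
n = 8192
witness = rng.standard_normal(n)                          # stand-in mic signal
target = np.convolve(witness, [0.5, 0.3], mode="same") \
         + 0.1 * rng.standard_normal(n)                   # coupled + sensor noise

def fir_taps(wit, ntaps=8):
    """Delay matrix of the witness, one column per FIR tap."""
    return np.column_stack([np.roll(wit, k) for k in range(ntaps)])

half = n // 2
w, *_ = np.linalg.lstsq(fir_taps(witness[:half]), target[:half], rcond=None)
residual = target[half:] - fir_taps(witness[half:]) @ w

print(np.std(target[half:]), np.std(residual))  # residual should be smaller
```

Because the filter is fit on data it never subtracts from, a small residual here is real subtraction rather than overfitting.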
I'm working on locking the Michelson now in order to put an excitation on one of the input test masses and measure the resulting error signal at the anti-symmetric port. I aligned the beams from ITMX and ITMY by looking at the AS camera with the video screens, but the fringes were not destructively interfering. Jenne advised that I look at the gain on the MICH servo filter modules in the LSC screen. We flipped the sign on the gain (it was 0.120 and it is now -0.120) and the fringes destructively interfered as desired after this change.
For purposes of documentation, I locked the YARM earlier in the morning before moving on to the Michelson. The purpose of this was to put another excitation on C1:SUS-ETMY_LSC_EXC and then measure the error signal on C1:LSC-POY11_I_ERR.
Today I worked on locking the Michelson. Here's what I did:
Open Data Viewer and Restore Settings /users/Templates/JenneLockingDataviewer/MICH.xml. This opens the C1:LSC-ASDC_OUT and C1:LSC-AS55_Q_ERR plots.
Check the LSC screen to verify that the path between the Servo Filter Modules and the SUS Ctrls is outlined in green. If not, turn on the OUT button within the Filter Servo Modules, enable LSC mode, and turn on the SUS Ctrls for the BS.
Misalign all optics other than BS and one of ITMX and ITMY. The ITMY was already well-aligned from my work on locking the YARM, so I actually chose to misalign ITMY at first.
Restore BS and ITMX. Use the AS camera on the video screen as your guide when aligning ITMX.
Adjust pitch and yaw of ITMX until a bright, circular spot appears near the middle of the AS camera.
Now restore ITMY and adjust pitch and yaw until a second circular spot appears on the AS camera.
Adjust both ITMX and ITMY until both bright spots occupy the same location. If the spots remain bright when they are in the same location, you are actually locking onto a bright fringe, and need to flip the sign of the gain on the MICH servo filter modules. I had to do this today in fact, as discussed in ELOG 7145.
If the sign is correct, the two beams should interfere destructively and the formerly bright spots will form a comparatively dark spot. The shape of the spot will likely be two bright lobes separated by a dark middle.
C1:LSC-ASDC_OUT should be a roughly flat signal, and the goal now is to minimize its magnitude. The smaller this signal, the darker the AS camera image should look. Decent target values for C1:LSC-ASDC_OUT are around 0.05 to 0.10.
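The dark-fringe criterion in the last step can be sketched as a tiny check. This is a sketch only: the helper name is mine, and actually reading the channel would need pyepics on a control-room machine.

```python
# Sketch: decide whether the AS dark port is "dark enough" from a few
# samples of C1:LSC-ASDC_OUT. The function name and threshold convention
# are mine; the 0.10 target comes from the procedure above.
import numpy as np

DARK_TARGET = 0.10   # "decent" dark-port level from the procedure above

def is_dark(asdc_samples, target=DARK_TARGET):
    """True if the mean |C1:LSC-ASDC_OUT| level is below the dark-fringe target."""
    return float(np.mean(np.abs(asdc_samples))) < target

# In practice the samples would come from EPICS, e.g. with pyepics:
#   from epics import caget; sample = caget("C1:LSC-ASDC_OUT")
print(is_dark([0.06, 0.05, 0.07]))   # near a dark fringe
print(is_dark([0.8, 0.75, 0.9]))     # bright fringe: flip the MICH gain sign
```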
Once I did this, I made measurements by exciting C1:SUS-ITMY_LSC_EXC and measuring with C1:LSC-AS55_Q_ERR. I again ran a logarithmic swept sine response from 1 to 1000 Hz, with a frequency-dependent amplitude envelope, and looked at the measured transfer function and coherence. I was able to get good coherence overall, but it dipped low at several high frequencies.
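The swept-sine result can be cross-checked offline along these lines, here with a broadband-noise stand-in for the excitation and a placeholder low-pass "plant" instead of the real ITMY-to-AS55 response:

```python
# Sketch: estimate transfer function and coherence between an excitation
# (stand-in for C1:SUS-ITMY_LSC_EXC) and an error signal (stand-in for
# C1:LSC-AS55_Q_ERR). The plant and sample rate below are placeholders.
import numpy as np
from scipy import signal

fs = 2048.0
n = int(64 * fs)
rng = np.random.default_rng(0)
exc = rng.normal(size=n)                      # broadband stand-in excitation
b, a = signal.butter(2, 100 / (fs / 2))       # placeholder plant: 100 Hz low-pass
err = signal.lfilter(b, a, exc) + 0.01 * rng.normal(size=n)

f, Pxy = signal.csd(exc, err, fs=fs, nperseg=4096)
_, Pxx = signal.welch(exc, fs=fs, nperseg=4096)
tf = Pxy / Pxx                                # transfer function estimate
_, coh = signal.coherence(exc, err, fs=fs, nperseg=4096)
# coherence should be ~1 in the band where the excitation dominates the noise
```

A dip in coherence at some frequency is the same symptom seen in the swept-sine measurement: the excitation is not dominating the noise there.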
[Koji / Kiwamu]
The Michelson was locked with the new LSC realtime code.
(what we did)
-- Fine alignment of the Michelson, including PZTs, BS and ITMY.
Since the X arm had been nicely aligned, we intentionally avoided touching ITMX. The IR beam is now hitting the center of both end mirrors.
At the end we lost the X arm's resonance for the IR. This probably means the PZTs need more careful alignment.
-- Signal acquisition
We replaced the RFPD (AS55) that Aidan and Jamie had nicely installed with POY11, because we haven't yet installed a 55 MHz RF source.
The maximum DC voltage from the PD went to about 50 mV after aligning steering mirrors on the AP table.
The RF signal from the PD is transferred by a heliax cable which has been labeled 'REFL33'.
Then the RF signal is demodulated at a demodulation board 'AS11', which is one of the demodulation boards that Suresh recently modified.
Although we haven't fully characterized the demod board, the I and Q signals looked healthy.
Finally the demod signals go to ADC_0_3 and ADC_0_4, the third and fourth channels.
They finally show up in REFL33 path in the digital world.
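For reference, the I/Q demodulation the board performs can be sketched numerically; the sample rate, low-pass corner, and test signal here are illustrative only, not the board's actual parameters:

```python
# Sketch of I/Q demodulation: mix the PD's RF signal with the LO in two
# phases 90 deg apart and low-pass the products. Everything numeric here
# (fs, filter, phase) is a placeholder, not the real board.
import numpy as np
from scipy import signal

fs = 256e6                      # illustrative sample rate
f_lo = 11e6                     # demod (LO) frequency
t = np.arange(0, 200e-6, 1 / fs)
phi = 0.3                       # RF signal phase we want to recover
rf = np.cos(2 * np.pi * f_lo * t + phi)

b, a = signal.butter(4, 1e6 / (fs / 2))   # low-pass after the mixers
I = 2 * signal.filtfilt(b, a, rf * np.cos(2 * np.pi * f_lo * t))
Q = 2 * signal.filtfilt(b, a, rf * -np.sin(2 * np.pi * f_lo * t))

n4 = t.size // 4
phase_est = np.arctan2(np.mean(Q[n4:-n4]), np.mean(I[n4:-n4]))
print(phase_est)   # recovers ~phi
```

The digital phase rotation mentioned below is just a rotation of this (I, Q) pair; with correct channel mapping one can tune it so all the signal appears in one quadrature.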
With the new LSC code we fedback the signal to BS. We put anti-whitening filters in the I and Q input filter banks.
We found that dataviewer didn't show the correct channels: for example, C1LSC_NREFL33I showed just ADC noise, and C1LSC_NREFL33Q showed NREFL_33I.
Because of this we gave up on adjusting the digital phase rotation and decided to use only the I-phase signal.
Applying a 1000:10 filter gave us a moderate lock of the Michelson. The gain was -100 in C1LSC_MICH_GAIN, and this gave a UGF of about 300 Hz.
Note that during the locking both ETMs were intentionally misaligned in order not to have Fabry-Perot fringes.
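The "1000:10" filter above can be sketched as follows, assuming Foton's zero:pole shorthand (zero at 1000 Hz, pole at 10 Hz; that reading is my assumption), to see the low-frequency boost it provides:

```python
# Sketch of a 1000:10 loop-shaping filter: (1 + s/z)/(1 + s/p) with
# z = 2*pi*1000 rad/s and p = 2*pi*10 rad/s. Unity gain at DC, rolling
# off to p/z = 0.01 (-40 dB) well above 1 kHz.
import numpy as np
from scipy import signal

z, p = 2 * np.pi * 1000, 2 * np.pi * 10
sys = signal.TransferFunction([1 / z, 1], [1 / p, 1])   # (1 + s/z) / (1 + s/p)
f = np.logspace(0, 4, 500)                              # 1 Hz to 10 kHz
_, mag, _ = signal.bode(sys, w=2 * np.pi * f)
print(mag[0], mag[-1])   # ~0 dB at 1 Hz, approaching -40 dB at 10 kHz
```

This shape suppresses high-frequency gain relative to the band below a few hundred Hz, consistent with placing a UGF around 300 Hz.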
The ELOG was frozen, with this in the .log file:
GET /40m/?id=1279&select=1&rsort=Type HTTP/1.1
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
(hopefully there's a way to hide from the Bing Bot like we did from the Google bot)
Yesterday the elog was excruciatingly slow, and bingbot was the culprit. It was slurping down elog entries and attachments so fast that it brought nodus to its knees. So I created a robots.txt file disallowing all bots, and placed it in the elog's scripts directory (which gets served at the top level). Today the elog feels a little snappier -- there's now much less bot traffic to compete with when using it.
We might be able to let selected bots back in with a crawl rate limit, if anyone misses searching the elog on bing.
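Something along these lines should do it; the commented-out stanza shows the rate-limited option (Crawl-delay is a non-standard directive that bingbot honors, and Google ignores). This is a sketch rather than the exact file now on nodus:

```
# Disallow all crawlers
User-agent: *
Disallow: /

# Or: let Bing back in, but slowly
# User-agent: bingbot
# Crawl-delay: 30
```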
Oh, this is cool! Thanks!
I could not figure out where to place robots.txt, as it was not obvious how elogd handles files in the "logfile" directory.
I added an EM172 to my soldered circuit and it seems to be working so far. I have taken spectra with the EM172 in ambient noise in the control room as well as in white noise played from Audacity. My computer's speakers are not very good, so the white-noise results aren't great, but this was mainly to confirm that the microphone actually works.
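As a sanity check on such spectra, a Welch-averaged spectrum can be computed like this; the sample rate and the white-noise stand-in data are placeholders for the actual recording:

```python
# Sketch: Welch estimate of a microphone recording's spectral density.
# A white-noise stand-in replaces the real EM172 time series.
import numpy as np
from scipy import signal

fs = 16384.0
rng = np.random.default_rng(1)
mic = rng.normal(scale=1e-3, size=int(60 * fs))   # stand-in for the recording

f, psd = signal.welch(mic, fs=fs, nperseg=8192)
asd = np.sqrt(psd)          # amplitude spectral density
# For white noise the ASD is flat at ~ scale * sqrt(2 / fs)
print(np.median(asd))
```

Flatness of the measured ASD under a (good) white-noise drive is exactly the check described above; dips or peaks would indicate the speaker or circuit response.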
Thanks to Den, the power supplies for the microphone circuit have been changed.
So I measured the microphone noise again in the same way as last time.
solid lines: acoustic noise
dashed lines: incoherent noise
black line: circuit noise (microphone unconnected)
The circuit noise improved a lot, but many lines appeared.
Where do these lines (40, 80, 200 Hz, ...) come from?
They do not change when we swap the microphones...
Anyway, I have to modify the circuit (because of the low-pass filter), so I can check whether the remade circuit has any effect on these lines.
The circuit noise improved a lot, but many lines appeared.
Where do these lines (40, 80, 200 Hz, ...) come from?
They do not change when we swap the microphones...
I do not think the 1U rack power supply influenced the preamp noise level, as there is a 12 V regulator inside. The lines you see might just be acoustic noise produced by CPU fans: they usually rotate at ~2500-3000 rpm => a frequency of ~40-50 Hz plus harmonics. The microphones should be in an isolation box to minimize noise coming from the rack. This test was already done before and is described here.
I think we need to build a new box for many channels (32, for example, to match the ADC). The question is how many microphones we need around one stack to subtract the acoustic noise. Once we know this number, we can group the microphones, use one cable with many twisted pairs per group, and suspend them in an organized way.
The circuit noise improved a lot, but many lines appeared.
Where do these lines (40, 80, 200 Hz, ...) come from?
They do not change when we swap the microphones...
I do not think they are acoustic. If they were, there should be coherence between the three microphones, since I placed all three at the same spot, tied together. However, there is no coherence between them at the line frequencies.
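The coherence test described here can be sketched with simulated co-located mics: a common (acoustic) component produces coherence near 1, while independent (e.g. circuit) noise does not. The signals below are fabricated for illustration only.

```python
# Sketch: coherence between two co-located microphones. A shared "acoustic"
# component is coherent; each mic's independent noise is not.
import numpy as np
from scipy import signal

fs = 2048.0
rng = np.random.default_rng(2)
n = int(60 * fs)
b, a = signal.butter(2, 200 / (fs / 2))           # "acoustic" band below ~200 Hz
acoustic = signal.lfilter(b, a, rng.normal(size=n))

# Two mics: common acoustic field plus independent electronics noise
mic1 = acoustic + 0.1 * rng.normal(size=n)
mic2 = acoustic + 0.1 * rng.normal(size=n)

f, coh = signal.coherence(mic1, mic2, fs=fs, nperseg=4096)
# High coherence where the common signal dominates, near zero where the
# independent noise dominates -- the signature used to rule out acoustics.
```

By this logic, lines present in all three mics but with no mutual coherence point to something generated independently in each channel, e.g. in the circuits.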