(PRMI locking with slightly misaligned SRM)
First I tried locking PRC and MICH with the SRM slightly misaligned. This condition allowed me to search for a good signal port for SRC.
In this locking, REFL11_I was used to lock PRC and AS55_Q was used for MICH. This is the same scheme as the current PRMI locking.
Since the SRM was close to its well-aligned position, I expected to see fringes from SRC in some signal ports (e.g. REFL55, POY55 and so on).
Sometimes a fringe of SRC disturbed AS55_Q and broke the MICH locking, so I had to misalign the SRM carefully so that the SRC fringes were small enough to maintain the MICH lock.
(Looking for a good signal port for SRC)
After I locked the PRMI with slightly misaligned SRM, I started looking for a good signal port for SRC.
At the beginning I tried to find a good SRC port by shaking SRM at 100 Hz and looking at the power spectra of all the available LSC signals.
I was expecting to see a 100 Hz peak in the spectra, but this technique didn't work well because SRC wasn't within the linear range and hence didn't produce linear signals.
So I didn't see any strong signals at 100 Hz and finally gave up this technique.
Then I started looking for PDH-like signals in the time series and immediately found that AS55_I showed large PDH-like signals.
So I started using AS55_I for the SRC locking and eventually succeeded.
(Two tips for the DRMI locking)
During the locking of the DRMI, I found two tips that made the locking quite smooth.
- Triggered locking
Since every LSC signal port somehow showed large signals from PRC, feeding back those signals made the suspensions go crazy.
So I used triggered locking for the PRC and MICH locking to avoid unwanted kicks on BS and PRM.
If the REFL DC power goes above a certain level, the control of PRC starts. Likewise, if the AS DC power goes below a certain level, the control of MICH starts.
These triggers make the lock acquisition smoother (a rough sketch of this trigger logic is given after this list).
- Do not use resonant gain filters
This is really a stupid tip. When I was trying to lock MICH, the lock became quite difficult for some reason.
It looked like there was an oscillation at 3 Hz every time the MICH control started. It turned out that a 3 Hz resonant gain filter had been making it difficult.
All the resonant gain filters should be off while lock acquisition is taking place.
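To make the trigger logic concrete, here is a rough Python sketch using pyepics. The channel names, switch records, and thresholds below are placeholders rather than real 40m channels, and the actual triggering presumably lives in the LSC front-end model rather than in a script like this.

# Sketch of triggered locking: engage a loop only when the DC power says the
# cavity is near resonance. All channel names and thresholds are made up.
import time
import epics

REFL_DC = "C1:LSC-REFLDC_OUT"   # hypothetical DC readback for REFL
AS_DC   = "C1:LSC-ASDC_OUT"     # hypothetical DC readback for AS
PRC_SW  = "C1:LSC-PRCL_SW"      # hypothetical on/off switch for the PRC loop
MICH_SW = "C1:LSC-MICH_SW"      # hypothetical on/off switch for the MICH loop

REFL_THRESHOLD = 0.5   # engage PRC when REFL DC rises above this
AS_THRESHOLD   = 0.1   # engage MICH when AS DC drops below this

while True:
    refl = epics.caget(REFL_DC)
    asdc = epics.caget(AS_DC)
    # PRC control starts only when the REFL DC power indicates carrier buildup
    epics.caput(PRC_SW, 1 if refl > REFL_THRESHOLD else 0)
    # MICH control starts only when the AS port goes dark enough
    epics.caput(MICH_SW, 1 if asdc < AS_THRESHOLD else 0)
    time.sleep(0.1)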
Eventually the DRMI was locked.
More details will be reported in the morning.
I will try with POY55 that Koji prepared today.
I was struggling to find a good signal port for SRC over the weekend and finally found that AS55_I worked somehow. I used:
REFL11_I --> PRC
AS55_Q --> MICH
AS55_I --> SRC
A configuration script was prepared such that someone can try this configuration by clicking a button on the C1IFO_CONFIGURE.adl screen.
I don't think this signal extraction scheme is the best, but now we can find better signal ports by shaking each DOF and looking at each signal port.
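As a rough illustration of that procedure (not the actual script): drive each DOF at its own line frequency, then demodulate each recorded port at those frequencies to estimate a sensing matrix. This assumes the time series have already been fetched (e.g. via NDS); the channel names and line frequencies below are only illustrative.

import numpy as np

fs = 16384.0                                               # sample rate [Hz]
lines = {"PRCL": 311.1, "MICH": 313.3, "SRCL": 315.5}      # dither lines [Hz]
ports = {}   # e.g. {"REFL11_I": refl11_i_data, "AS55_Q": as55_q_data, "AS55_I": as55_i_data}

def demod(x, f, fs):
    """Complex amplitude of x at frequency f (single-bin demodulation)."""
    t = np.arange(len(x)) / fs
    return 2.0 * np.mean(x * np.exp(-2j * np.pi * f * t))

# sensing[port][dof] ~ how strongly that DOF's dither shows up in that port
sensing = {port: {dof: abs(demod(x, f, fs)) for dof, f in lines.items()}
           for port, x in ports.items()}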
After the RF component power supply grounding work, the POP110, POP22 and REFL165 demod phases all changed somewhat. They have all been rephased for the DRMI, as they were before.
I tweaked the 3F DRMI settings, and chose to phase REFL165I to PRCL, instead of SRCL as before, to try and minimize the PRCL->MICH coupling instead of the SRCL->MICH coupling.
With these settings, I once locked the DRMI for ~5 seconds with the arms held off on ALS, during which I could see some indications of necessary demod angle changes. Haven't yet gotten a longer lock, but we're getting there...
We are working on trying out the UGF servos, and wanted to take loop measurements with and without the servo to prove that it is working as expected. However, it seems like new DTT is not following the envelopes that we are giving it.
If we uncheck the "user" box, DTT uses the amplitude given on the excitation tab. But if we check "user" and select an envelope, the amplitude is always whatever the first amplitude requested in the envelope is. If we change the first amplitude in the envelope, DTT uses that number as the new amplitude, so it is reading the file, but it is not stepping through the whole envelope correctly.
Thoughts? Is this a bug in new DTT, or a pebkac issue?
I took the output of the OMC DAC and plugged it directly into an OMC ADC channel to see if I could isolate the OMC DAC weirdness I'd been seeing. It looks like it may have something to do with DTT specifically.
Attachment 1 is a DTT transfer function of a BNC cable and some connectors (plus of course the AI and AA filters in the OMC system). It looks like this on both linux and solaris.
Attachment 2 is a transfer function using sweepTDS (in mDV), which uses TDS tools as the driver for interfacing with testpoints and DAQ channels.
Attachment 3 is a triggered time series, taken with DTT, of the same channels as used in the transfer functions, during a transfer function. I think this shows that the problem lies not with awg or tpman, but with how DTT is computing transfer functions.
I've tried soft reboots of the c1omc, which didn't work. Since the TDS version appears to work, I suspect the problem may actually be with DTT.
I am attempting to use the DTT program to look at the coherence of the individual accelerometer signals with the MC_L signal. Rana suggested that I might break up the XYZ configuration, so I wanted to see how the coherence changed as I moved things around over the past couple of weeks, but I keep getting a synchronization error every time I try to set the start time to more than about 3 days ago. I tried restarting the program and checking the "reconnect" option in the "Input" tab, neither of which made any difference. I can access this data with no problem from Data Viewer and the Matlab scripts, so I'm not really sure what is happening. Help?
EDIT: Problem solved - Full data was not stored for the time I needed to access it for DTT.
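For what it's worth, the same coherence check can be done offline in Python once the data are fetched (via NDS, a Data Viewer export, etc.). This is only a sketch with placeholder arrays and an assumed common sample rate, not the DTT measurement itself.

import numpy as np
from scipy import signal

fs = 256.0                               # sample rate of the fetched data [Hz]
mcl = np.random.randn(int(600 * fs))     # placeholder for the MC_L time series
acc = np.random.randn(int(600 * fs))     # placeholder for one accelerometer channel

# Welch-averaged coherence between the two channels
f, coh = signal.coherence(mcl, acc, fs=fs, nperseg=4096)
print(f[np.argmax(coh)], coh.max())      # frequency bin with the strongest coupling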
Seems like DTT also works now. The trick seems to be to run sudo /usr/bin/diaggui instead of just diaggui. So this is indicative of some conflict between the yum-installed gds and the relic gds from our shared drive. I also have to manually change the NDS settings each time; there's probably a way to set all of this up more smoothly, but I don't know what it is. awggui still doesn't get the correct channels, and I'm not sure where I can change the settings to fix that.
DON"T RUN DIAGGUI AS ROOT
I visited Downs and announced that I would keep showing up until all the 40m hardware is delivered.
I brought over 4 ADC boards and 5 DAC boards which slot into the IO chassis.
The DACs are General Standards Corporation, PMC66-16AO16-16-F0-OF, PCIe4-PMC-0 adapters.
The ADCs are General Standards Corporation, PMC66-16AI6455A-64-50M, PCIe4-PMC-0 adapters.
These new ones have been placed with the blue and gold adapter boards, under the table behind the 1Y4-1Y5 racks.
With the 1 ADC and 1 DAC we already have, we now have enough to populate the two ends and the SUS IO chassis. We have sufficient Binary Output boards for the entire 40m setup. I'm going back with a full itemized list of our current equipment, and will bring back the remainder of the ADC/DAC boards we're due. Apparently the ones which were bought for us are currently sitting in a test stand, so the ones I took today were from a different project, but they'll move the test stand ones to that project eventually.
I'm attempting to push them to finish testing the IO chassis and the remainder of those delivered as well.
I'd like to try setting up the SUS IO chassis and the related computer this week since we now have sufficient parts for it. I'd also like to move megatron to 1Y3, to free up space to place the correct computer and IO chassis where it's currently residing.
Yesterday afternoon I went to Downs and acquired the following materials:
2 100 ft long blue fibers, for use with the timing system. These need to be run from the timing switch in 1Y5/1Y6 area to the ends.
3 ADCs (PMC66-16AI6455A-64-50M) and 2 DACs (PMC66-16AO16-16-F0-OF), bringing our total of each to 8.
7 ADC adapter boards which go in the backs of the IO chassis, bringing our total for those (1 for each ADC) to 8.
There were no DAC adapter boards of the new style available. Jay asked Todd to build those in the next day or two (this was on Thursday), so hopefully by Monday we will have those.
Jay pointed out there are different styles of the Blue and Gold adapter boxes (for ADCs to DB44/37), for example. I'm re-examining the drawings of the system (although some drawings were never revised to the new system, so I'm trying to interpolate from the current system in some cases) to determine what adapter styles and numbers we need. In any case, those do not appear to have been finished yet (they are basically stuffed boards in a bag in Jay's office which need to be put into the actual boxes with face plates).
When I asked Rolf if I could take my remaining IO chassis, there was some back and forth between him and Jay about numbers they have and need for their test stands, and having some more built. He needs some, Jay needs some, and the 40m still needs 3. Some more are being built. Apparently when those are finished, I'll either get those, or the ones that were built for the 40m and are currently in test stands.
Apparently Friday afternoon (when we were all at Journal Club), Todd dropped off the 7 DAC adapter boards, so we have a full set of those.
Things still needed:
1) 3 IO chassis (2 Dolphin style for the LSC and IO, and 1 more small style for the South end station (new X)). We already have the East end station (new Y) and SUS chassis.
2) 2 50+ meter Ethernet cables and a router for the DAQ system. The Ethernet cables are to go from the end stations to 1Y5-ish, where the DAQ router will be located.
3) I still need to finish understanding the old drawings to figure out what blue and gold adapter boxes are needed. At most 6 ADC and 3 DAC adapters are necessary, but it may be fewer, and the styles need to be determined.
4) 1 more computer for the South end station. If we're using Megatron as the new IO computer, then we're set on computers. If we're not using Megatron in the new CDS system, then we'll need an IO computer as well. The answer to this tends to depend on whether you ask Jay or Rolf.
I talked with Rolf, and asked if we were using Megatron for IO. The gist boiled down to we (the 40m) needed to use it for something, so yes, use it for the IO computer. In regards to the other end station computer, he said he just needed a couple of days to make sure it doesn't have anything on it they need and to free it up.
I had a chat with Jay where he explained exactly what boards and cables we need. Adapter boards are 95% of the way there. I'll be stopping by this afternoon to collect the last few I need (my error this morning, not Jay's). However, it looks like we're woefully short on cables and we'll have to make them. I also acquired 2 D080281 (Dsub 44 x2 to SCSI).
For each 2 Pentek DACs plus a 110B, you need 1 DAC adapter board (D080303 with 2 connectors for IDC40 and a SCSI). You also need a D080281 to plug onto the back of the Sander box (going to the 110Bs) to convert the D-sub 44 pins to SCSI.
LSC will need none, SUS will need 3, IO will need 1, and the ends will need 1 each. We have a total of 6, so we're set on D080303s. We have 3 110Bs, so we need one more D080281 (Dsub44 to SCSI). I'll get that this afternoon.
For each XVME220, we'll need one D080478 binary adapter. We have 8 XVME220s and 8 boards, so we're set on D080478s.
For the ends, there's a special ADC to DB44/37 adapter, which we only have one of. I need to get them to make one more of these boxes.
We have 1 ADC to DB37 adapter, and we'll need one more of those as well: one for IO and one for SUS.
However, for each Pentek ADC we need an IDC40 to DB37 cable, and for each Pentek DAC we need an IDC40 to IDC40 cable. We need a SCSI cable for each 110B. I believe the current XVME220 cables plug directly into the BIO adapter boxes, so those are set.
So we need to make or acquire 11 IDC40 to DB37 cables, 7 IDC40 to IDC40 cables, and 3 SCSI cables.
I went to talk to Rolf and Jay this morning. I asked Rolf if a chassis was available, so he went over and disconnected one of his test stand chassis and gave it to me. It comes with a Contec DIO-1616L-PE Isolated Digital IO board and an OSS-MAX-EXP-ELB-C, which is a host interface board. The OSS board means it has to go into the south end station. There's a very short maximum cable length associated with that style, and the LSC and IOO chassis will be further than that from their computers (we have dolphin connectors on optical fiber for those connections).
I also asked Jay for another 4-port 37 d-sub ADC blue and gold adapter box, and he gave me the pieces. While over there, I took 2 flat back panels and punched them with appropriate holes for the SCSI connectors that I need to put in them. I still need to drill 4 holes in two chassis to mount the boards, and then do a bit of screwing. Shouldn't take more than an hour to put them both together. At that point, we should have all the adapter boxes necessary for the base design. We still need some stuff for the green locking, as noted on Friday.
After talking with Rolf, and clarifying exactly which machine I wanted, he gave me a Sun 4600 machine (similar to our current megatron). I'm currently trying to find a good final place for it, but it's at least here at the 40m.
I also acquired 3 boards to plug our current VMIPMC 5565 RFM cards into, so they can be installed in the IO chassis. These require +/- 5V power to be connected to the top of the RFM board, which would not be possible in the 1U computers, so they have to go in the chassis. Boards of this style prevent the top of the chassis from being put on (not that Rolf or Jay have given me tops for the chassis). I'm planning on using the RFM cards from the East End FE, the LSC FE, and the ASC FE.
I talked to Jay, and offered to upgrade the old megatron IO chassis myself if that would speed things up. They have most of the parts, the only question being if Rolf has an extra timing board to put in it. Todd is putting together a set of instructions on how to put the IO chassis together and he said he'd give me a copy tomorrow or Monday. I'm currently planning on assembling it on Monday. At that point I only need 1 more IO chassis from Rolf.
When I asked about the dolphin IO chassis, he said we're not planning on using dolphin connections between the chassis and computer anymore. Apparently there was some long-distance telecon with the Dolphin people and they said the Dolphin IO chassis connection and RFM don't work well together (or something like that - it wasn't very clear from Rolf's description). Anyway, the other style apparently is now made in a fiber-connected version (it wasn't a year ago), so he's ordered one. When I asked why only 1 and what about the IOO computer and chassis, he said that would either require moving the computer/chassis closer or getting another fiber connection (not cheap).
So the current thought I hashed out with Rolf briefly was:
We use one of the thin 1U computers and place that in the 1Y2 rack, to become the IOO machine. This lets us avoid needing a fiber. Megatron becomes the LSC/OAF machine, either staying in 1Y3 or possibly moving to 1Y4 depending on the maximum length of the Dolphin connection, because the LSC and SUS machines are still supposed to be connected via the Dolphin switch, to test that topology.
I'm currently working on an update to my CDS diagram with these changes and will attach it to this post later today.
I picked up the ribbon cable connectors from Jay. It looks like we'll have to make the new cables for connecting the ADCs/DACs myself (or maybe with some help). We should be able to make enough ribbon cables for use now. However, I'm adding "Make nice shielded cables" to my long term to do list.
I pointed out the 2 missing adapter boxes we need to Jay. He has the parts (I saw them) and will try to get someone to put it together in the next day or so. I also picked up 2 more D080281 (DB44 to SCSI), giving us enough of those.
I once again asked Jay for an update on IO chassis, and expressed concern that without them the CDS effort can't really go forward, and that we really need this to come together ASAP. He said they still need to make 3 new ones for us.
So we're still waiting on a computer, 3 IO chassis, router + ethernet.
Talked with Jay briefly this morning.
We are due another 1-U 4 core (8 CPU) machine, which is one of the ones currently in the test stand. I'm hoping sometime this week I can convince Alex to help me remove it from said test stand.
The megatron machine we have is definitely going to be used in the 40m upgrade (to answer a question of Rana's from last Wednesday's meeting). That's apparently the only machine of that class we get, so moving it to the vertex for use as the LSC or SUS vertex machine may make sense. Overall we'll have the ASS, OMC, and Megatron (SUS?) machines, along with the 4 new 1-U machines for LSC, IO, End Y and End X. We are getting 4 more IO chassis, for a total of 5. The ASS and OMC machines will go without full new chassis.
Speaking of IO chassis, they are still being worked on. Still need a few cards put in and some wiring work done. I also didn't see any other adapter boards finished either.
Talked with Jay briefly today. Apparently there are 3 IO chassis currently on the test stand at Downs and undergoing testing (or at least they were when Alex and Rolf were around). They are being tested to determine which slots refer to which ADC, among other things. Apparently the numbering scheme isn't as simple as 0 on the left, and going 1,2,3,4, etc. As Rolf and Alex are away this week, it is unlikely we'll get them before their return date.
Two other chassis (which apparently is one more than the last time I talked with Jay) are still missing cards for communicating between the computer and the IO chassis, although Gary thinks I may have taken them with me in a box. I've had a look through all the CDS stuff I know of here at the 40m and have not seen the cards. I'll be checking in with him tomorrow to figure out when (and if) I have the cards needed.
Today I balanced the mirror, finished putting together the second photosensor, and finished my photosensor circuit box!
Upon Jamie's suggestion, I have used a translation stage to obtain calibration data points (voltage outputs relative to displacement) for the new photosensor and for the first photosensor.
I will plot these tomorrow morning (too hungry now > < )
Here is a photo of the inside of my circuit box! It is finally done! It is now enclosed in a nice aluminum casing ^ ^
I just wrote a short description of how to run the daily summary pages and the configuration process for making changes to the site. It can be found in /users/public_html/40m-summary and is named README.txt. If I need to clarify anything, please let me know! The configuration process should be relatively straightforward, so it will be easy to add plots or change them when there are changes at the 40 meter.
Please check the summary pages out at the link below and let me know if there are any modifications I should make! All existing pages are up to date and contain everything I have so far.
Questions, comments, and suggestions will be appreciated! Contact me at email@example.com
I found that Yarm could not be locked today. Both POY11 and POYDC were missing when Yarm was aligned, and ITMY needed to be highly misaligned to get POYDC.
The POY beam also could not be found on the ITMY table.
Koji suggested using AS55 instead to lock Yarm. We did so (AS55_I_ERR, C1:LSC-YARM_GAIN=-0.002) and manually ASS-ed to get Yarm aligned (ASS with AS55 somehow didn't work).
After that, we checked the ITMY table and found that the POY beam was clipped by an iris which had been closed!
I opened it and now Yarm locks with POY11 again. ASS works.
PMC was also aligned.
Before the final measurement of the DC values for the transmissions, I aligned the PMC. This made the PMC transmission increase from 0.67 to 0.74.
Top tab categories:
PRM watchdog tripped, but the damprestore.py script wouldn't run.
It turns out the script tries to import some ezca stuff from /users/yuta (), which had been moved to /users/OLD/yuta ().
I've moved the yuta directory back to /users/ until I fix the damprestore script.
I will move it back. We need to fix our scripts to not use any users/ libraries ever again.
Just a heads up, it looks like the damping came on at around 8:30pm. Not sure why.
I very tentatively declare that this particular daqd crapfest is "resolved" after Jenne rebooted fb and daqd has been running for about 40 minutes now without crapping itself. Wee hoo.
I spent a while yesterday trying to figure out what could have been going on. I couldn't find anything. I found an elog that said a previous daqd crapfest was finally only resolved by rebooting fb after a similar situation, i.e. there had been an issue that was resolved, daqd was still crapping itself, we couldn't figure out why so we just rebooted, daqd started working again.
So, in summary, totally unclear what the issue was, or why a reboot solved it, but there you go.
Looks like I spoke too soon. daqd seems to be crapping itself again:
controls@fb /opt/rtcds/caltech/c1/target/fb 0$ ls -ltr logs/old/ | tail -n 20
-rw-r--r-- 1 4294967294 4294967294 11244 Oct 17 11:34 daqd.log.1413570846
-rw-r--r-- 1 4294967294 4294967294 11086 Oct 17 11:36 daqd.log.1413570988
-rw-r--r-- 1 4294967294 4294967294 11244 Oct 17 11:38 daqd.log.1413571087
-rw-r--r-- 1 4294967294 4294967294 13377 Oct 17 11:43 daqd.log.1413571386
-rw-r--r-- 1 4294967294 4294967294 11481 Oct 17 11:45 daqd.log.1413571519
-rw-r--r-- 1 4294967294 4294967294 11985 Oct 17 11:47 daqd.log.1413571655
-rw-r--r-- 1 4294967294 4294967294 13219 Oct 17 13:00 daqd.log.1413576037
-rw-r--r-- 1 4294967294 4294967294 11150 Oct 17 14:00 daqd.log.1413579614
-rw-r--r-- 1 4294967294 4294967294 5127 Oct 17 14:07 daqd.log.1413580231
-rw-r--r-- 1 4294967294 4294967294 11165 Oct 17 14:13 daqd.log.1413580397
-rw-r--r-- 1 4294967294 4294967294 5440 Oct 17 14:20 daqd.log.1413580845
-rw-r--r-- 1 4294967294 4294967294 11352 Oct 17 14:25 daqd.log.1413581103
-rw-r--r-- 1 4294967294 4294967294 11359 Oct 17 14:28 daqd.log.1413581311
-rw-r--r-- 1 4294967294 4294967294 11195 Oct 17 14:31 daqd.log.1413581470
-rw-r--r-- 1 4294967294 4294967294 10852 Oct 17 15:45 daqd.log.1413585932
-rw-r--r-- 1 4294967294 4294967294 12696 Oct 17 16:00 daqd.log.1413586831
-rw-r--r-- 1 4294967294 4294967294 11086 Oct 17 16:02 daqd.log.1413586924
-rw-r--r-- 1 4294967294 4294967294 11165 Oct 17 16:05 daqd.log.1413587101
-rw-r--r-- 1 4294967294 4294967294 11086 Oct 17 16:21 daqd.log.1413588108
-rw-r--r-- 1 4294967294 4294967294 11097 Oct 17 16:25 daqd.log.1413588301
controls@fb /opt/rtcds/caltech/c1/target/fb 0$
The times all indicate when the daqd log was rotated, which happens every time the process restarts. It doesn't seem to be happening so consistently, though. It's been 30 minutes since the last one. I wonder if it is somehow correlated with actual interaction with the NDS process. Does some sort of data request cause it to crash?
Merging of threads.
ChrisW figured out that it looks like the problem with the frame builder is that it's having to wait for disk access. He has tweaked some things, and life has been soooo much better for Q and me this evening! See Chris' elog at elog 10632.
In the last few hours we've had 2 or maybe 3 times that I've had to reconnect Dataviewer to the framebuilder, which is a significant improvement over having to do it every few minutes.
Also, Rossa is having trouble with DTT today, starting sometime around dinnertime. Ottavia and Pianosa can do DTT things, but Rossa keeps getting "test timed out".
The daqd process on the frame builder looks like it is segfaulting again. It restarts itself every few minutes.
The symptoms remind me of elog 9530, but /frames is only 93% full, so the cause must be different.
Did anyone do anything to the fb today? If you did, please post an elog to help point us in a direction for diagnostics.
Q!!!! Can you please help? I looked at the log files, but they are kind of mysterious to me - I can't really tell the difference between a current (bad) log file and an old (presumably fine) log file. (I looked at 3 or 4 random, old log files, and they're all different in some ways, so I don't know which errors and warnings are real, and which are to be ignored).
I've been trying to figure out why daqd keeps crashing, but nothing is fixed yet.
I commented out the line in /etc/inittab that runs daqd automatically, so I could run it manually. Each time I run it (with ./daqd -c ./daqdrc while in c1/target/fb), it churns along fine for a little while, but eventually spits out something like:
[Thu Oct 16 11:43:54 2014] main profiler warning: 1 empty blocks in the buffer
[Thu Oct 16 11:43:55 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:56 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:57 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:58 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:59 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:00 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:01 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:02 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1097520250 to 1097520257
FATAL: exception not rethrown
I looked for time disagreements between the FB and the frontends, but they all seem fine. Running ntpdate only corrected things by 5ms. However, looking through /var/log/messages on FB, I found that ntp claims to have corrected the FB's time by ~111600 seconds (~31 hours) when I rebooted it on Monday.
Maybe this has something to do with the timing that the FB is getting? The FE IOPs seem happy with their sync status, but I'm not personally currently aware of how the FB timing is set up.
On Monday, Jamie suggested checking out the situation with FB's RAID. Searching the elog for "empty blocks in the buffer" also brought up posts that mentioned problems with the RAID.
I went to the JetStor RAID web interface at http://192.168.113.119, and it reports everything as healthy; no major errors in the log. Looking at the SMART status of a few of the drives shows nothing out of the ordinary. The RAID is not mounted in read-only mode either, as was the problem mentioned in previous elogs.
As one of the troubleshooting steps for the daqd (i.e. framebuilder), I rebuilt the daqd executable. My guess is that somewhere in the build code there is some kind of GPS offset to make the time correct, due to our lack of an IRIG-B signal.
The actual daqdrc file was left untouched when I did the new install, so the symmetricom gps offset is still the same, which confuses me.
I'll take a look at the SVN diffs tomorrow to see what changed in that code that could cause a 300000000 or so offset to the GPS time.
Bottom trace is proportional to the OMC PZT voltage - top trace is the transmitted light through the OMC. Interferometer is locked (DARM-RF) with arm powers = 80 / 100. The peaks marked by the cursors are the +(- ?) 166 MHz sidebands.
Ok, so the whole idea that mirror motion can explain the ripples is nonsense. At least, when you think of the ringdown with "pump off". The phase shifts that I tried to estimate from longitudinal and tilt mirror motion are defined against a non-existing reference. So I guess that I have to click on the link that Koji posted...
Just to mention, for the tilt phase shift (yes, there is one, but the exact expression has two more factors in the equation I posted), it does not matter which mirror tilts. So even for a lower bound on the ripple time, my equation was incorrect. It should have the sum over all three initial tilt angles, not only the two "shooting into the long arms" of the MC.
Laser frequency shift = longitudinal motion of the mirrors
Hmm. I don't know what ringing really is. Ok, let's assume it has to do with the pump... I don't see how the pump laser could produce these ripples. They have large amplitudes and so I always suspected something happening to the intracavity field. Therefore I was looking for effects that would change resonance conditions of the intracavity field during ringdown. Tilt motion seemed to be one explanation to me, but it may be a bit too slow (not sure yet). Longitudinal mirror motion is certainly too slow. What else could there be?
It is essential that we take a look at the ringdown data for all measurements made so far to figure out what must be done to track down the source of these notorious ripples. I've attached a plot showing the decay time to be the same in all cases. About the ripples: it seems unlikely to both Jan and me that they are some electronic noise, because they do not follow any common pattern or time constant. As next steps, we have discussed with Koji monitoring the frequency shift and the input power to the MC, and also trying other methods of shutting down the pump, to track down their source.
I succeeded in creating a new channel access server hosted on domenica (R Pi) for continuous data acquisition from the FC into accessible channels. For this I have written a ctypes interface between EPICS and the C interface code to write data into the channels. The channels which I created are:
The scripts I have written for this can be found in:
db script in: /users/akhil/fcreadoutIoc/fcreadoutApp/Db/fcreadout.db
Python code: /users/akhil/fcreadoutIoc/pycall
C code: /users/akhil/fcreadoutIoc/FCinterfaceCcode.c
I will give the standard channel names (similar to the names on the channel root) once the testing is completed and I confirm that the data from the FC is consistent with the C code readout. Once ready, I will run the code continuously so that both the server and the data acquisition are always in operation.
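For illustration, here is a minimal sketch of what the ctypes bridge could look like, assuming the C readout code is compiled into a shared library exposing a single read function. The library, function, and channel names below are made up; they are not the actual files under /users/akhil/fcreadoutIoc.

import time
import ctypes
import epics

fc = ctypes.CDLL("./libfcreadout.so")          # hypothetical shared library built from the C readout code
fc.fc_read_counter.restype = ctypes.c_double   # hypothetical readout function

while True:
    value = fc.fc_read_counter()               # read the frequency counter via the C code
    epics.caput("C1:FCC-COUNTER", value)       # hypothetical EPICS channel name
    time.sleep(1.0)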
Yesterday, when I set out to test the channel, I faced a few serious issues in booting the Raspberry Pi. However, I have backed up the files on the Pi and will try to debug the issue very soon (I will test with Eric Q's R Pi).
To run these codes one must be root (sudo python pycall, sudo ./FCinterfaceCcode) because the HID devices can be written to only by root (we should look into solving this issue).
Instructions for Installation of EPICS, and how to create channel server on Pi will be described in detail in 40m Wiki ( FOLL page).
controls@rossa|~ 2> ls /users/akhil/fcreadoutIoc
ls: cannot access /users/akhil/fcreadoutIoc: No such file or directory
This code should be in the 40m SVN somewhere, not just stored on the RPi.
I'm still confused why python is in the mix here at all. It doesn't make any sense at all that a C program (EPICS IOC) would be calling out to a python program (pycall) that then calls out to a C program (FCinterfaceCcode). That's bad programming. Streamline the program and get rid of python.
You also definitely need to fix whatever the issue is that requires running the program as root. We can't have programs like this run as root.
I tried making these changes, but there was a problem with the R Pi boot again. I now know how to bypass the Python code using the IOC. I will make these changes once the problem with the Pi is fixed.
I took some data tonight for a quick look at what combinations of DC signals might be good to use for DARM, as an alternative to ALS before we're ready for RF.
I had the arms locked with ALS, PRMI with REFL33, and tried to move the CARM offset between plus and minus 1. The PRMI wasn't holding lock closer than about -0.3 or +0.6, so that is also a problem. Also, I realized just now that I have left the beam dumps in front of the transmission QPDs, so I had prevented any switching of the trans PD source. This means that all of my data for C1:LSC-TR[x,y]_OUT_DQ is taken with the Thorlabs PDs, which is fine, although they saturate around arm powers of 4 ever since my analog gain increase on the whitening board. Anyhow, the IFO didn't hold lock for much beyond then anyway, so I didn't miss out on much. I need to remember to remove the dumps though!!
Self: Good stuff should be between 12:50am - 1:09am. One set of data was:
./getdata -s 1089445700 -d 30 -c C1:LSC-TRX_OUT_DQ C1:LSC-TRY_OUT_DQ C1:LSC-CARM_IN1_DQ C1:LSC-PRCL_IN1_DQ
I realized while I was looking at last night's data that I had been doing CARM sweeps, when really I wanted to be doing DARM sweeps. I took a few sets of data of DARM sweeps while locked on ALSdiff. However, Rana pointed out that comparing ALSdiff to TRX-TRY isn't exactly a fair comparison while I'm locked on ALSdiff, since it's an in-loop signal, so it looks artificially quiet.
Anyhow, I may consider transitioning DARM over to AS55 temporarily so that I can look at both as out-of-loop sensors.
Also, so that I can try locking DARM on DC transmission, I have added 2 more columns to the LSC input matrix (now we're at 32!), for TRX and TRY. We already had sqrt inverse versions of these signals, but the plain TRX and TRY were only available as normalization signals before. Since Koji put in the facility to take (or not take) the square root of the normalization signals, I can now try:
Option 1: ( TRX - TRY ) / (TRX + TRY)
Option 2: ( TRX - TRY ) / sqrt( TRX + TRY )
DARM does not yet have the facility to normalize one signal (DC transmission) and not another (ALS diff), so I may need to include that soon. For tonight, I'm going to try just changing matrix elements with ezcastep.
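For a quick numerical feel for the two options, here is a toy calculation with made-up arm transmission values (not measured numbers):

import numpy as np

trx, try_ = 2.3, 1.7   # example TRX, TRY values (arbitrary units)

option1 = (trx - try_) / (trx + try_)          # (TRX - TRY) / (TRX + TRY)
option2 = (trx - try_) / np.sqrt(trx + try_)   # (TRX - TRY) / sqrt(TRX + TRY)

print(option1, option2)   # 0.15  0.3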
Since I changed the c1lsc.mdl model, I compiled it, restarted the model, and checked the model in. I have also added these 2 columns to the AUX_ERR sub-screen for the LSC input matrix. I have not changed the LSC overview screen.
Lately I've been trying to sort out the problem of the discrepancy that I noticed between the values read on the spectrum analyzer's display and what we get with the GPIB interface.
It turns out that the discrepancy originates from the two different data vectors that the display and the GPIB interface acquire. Whereas the display shows data in "RAW" format, the GPIB interface, because of the way the netgpibdata script is written, acquires the so-called "error-corrected data". That is, the GPIB-downloaded data is postprocessed and corrected for some internal calibration factors of the instrument.
I noticed that someone who wasn't me has edited the wiki page about netgpibdata under my name, saying:
* A4395 Spectrum Units
"Independently of which units are displayed by the A4395 spectrum analyzer on the screen, the data is saved in Watts/rtHz."
We've been talking for a while about how we want to store data. I'm not in love with keeping it on the elog, although I think we should always be able to reference and go back and forth between the elogs and the data.
I have made a new folder: /data EDIT: nevermind. I want it to be on the file system just like /users, but I don't know how to do that. Right now the folder is just on Ottavia. Jamie will help me tomorrow.
In this folder, we will save all of the data which goes into the elog.
I propose that we should have a common format for the names of the data files, so that we can easily find things.
My proposal is that one begins one's elog regarding the data to be saved and submits it immediately after putting in the first sentence or so. One should then make a new folder inside the data folder with a title "elog#####_Anything_Else_You_Want". Then the data (which was originally saved in one's own users folder) should be copied into the /data/elog#####_AnythingElse/ folder. Also in that folder should be any Matlab scripts used to create the plots that you post in the elog. One should then edit the elog to continue making a regular, very thorough elog, including the path to the data. The elog should include all of the information about the measurement, the state of the IFO (or whatever you were measuring), etc.
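As a sketch of the proposed convention, something like the following would create the folder and copy the data plus the plotting script into it. The elog number and file names below are made up, purely for illustration.

import os
import shutil

elog_number = 12345
folder = "/data/elog%d_DARM_OLG_measurement" % elog_number   # hypothetical name

os.makedirs(folder, exist_ok=True)
# copy the raw data and the script that made the posted plots
for f in ["/users/jenne/darm_olg.txt", "/users/jenne/plot_darm_olg.m"]:
    shutil.copy(f, folder)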
Riju will be alpha-testing this procedure tonight. EDIT: nevermind...see previous edit.
For the past couple of days, the summary pages have shown minute trend data disappearing at 12:00 UTC (05:00 AM local time). This seems to be the case for all channels that we plot; see e.g. https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20150724/ioo/. Using Dataviewer, Koji has checked that the frames indeed seem to have disappeared from disk. The data come back at 24:00 UTC (5 pm local). Any ideas why this might be?
c1lsc had 60 full-rate (16 kS/s) channels to record. This required the LSC-to-FB connection to handle a 4 MB/s (megabyte) data rate.
This was almost at the data rate limit of the CDS, and we had frequent halts of the diagnostic systems (i.e. DTT and/or dataviewer).
Jenne and I reviewed the DAQ channel list and decided to remove some channels. We also reviewed their recording rates
and reduced the rate of some channels. The c1lsc model was rebuilt, re-installed, and restarted. The FB was also restarted. These are running as they were.
The data rate is now reduced to ~3 MB/s nominal.
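As a sanity check of the quoted numbers (assuming 4 bytes per sample):

# 60 full-rate channels at 16384 samples/s, 4 bytes per sample
channels, fs, bytes_per_sample = 60, 16384, 4
rate_MB_per_s = channels * fs * bytes_per_sample / 1e6
print(rate_MB_per_s)   # ~3.9 MB/s, i.e. the ~4 MB/s quoted above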
The following is the list of the channels removed from the DQ channels:
The following is the list of the channels with the new recording rate:
I was trying to plot trends (min, 10 min, and hour) in DataViewer and got the following error message
mjd = 58235
Open of leapsecs.dat failed
leapsecs_read() returning 0
frameMemRead - gpstimest = 1208844718
though the plots showed up fine afterwards. Do we need to fix something with the leapsecs.dat file?