4760   Mon May 23 12:27:26 2011   kiwamu | Update | LSC | DRMI trial : details

(PRMI locking with slightly misaligned SRM)

First I tried locking PRC and MICH with the SRM slightly misaligned. This condition allowed me to search for a good signal port for SRC.

In this locking, REFL11_I was used to lock PRC and AS55_Q was used for MICH. This is the same scheme as the current PRMI locking.

Since the SRM alignment was close to the good alignment, I expected to see fringes from SRC in some signal ports (e.g. REFL55, POY55 and so on).

Sometimes a fringe of SRC disturbed AS55_Q and broke the MICH lock, so I had to carefully misalign SRM so that the SRC fringes were small enough to maintain the MICH lock.

 

(Looking for a good signal port for SRC)

 After I locked the PRMI with slightly misaligned SRM, I started looking for a good signal port for SRC.

At the beginning I tried finding a good SRC port by shaking SRM at 100 Hz and looking at the power spectra of all the available LSC signals.

I was expecting to see a 100 Hz peak in the spectra, but this technique didn't work well because SRC wasn't within the linear range and hence didn't produce linear signals.

So I didn't see any strong signals at 100 Hz and finally gave up this technique.

Then I started looking for a PDH-like signal in the time series and immediately found that AS55_I showed large PDH-like signals.

So I started using AS55_I for the SRC locking and eventually succeeded.
 

 

(Two tips for the DRMI locking)

During the locking of DRMI, I found two tips that made the locking quite smooth.

 - Triggered locking

   Since every LSC signal port showed large signals from PRC, feeding the signals back drove the suspensions crazy.

   So I used triggered locking for the PRC and MICH locking to avoid unwanted kicks on BS and PRM.

   If the DC of REFL goes above a certain level, the control of PRC starts. Likewise, if the DC of AS goes below a certain level, the control of MICH starts.

  These triggers make the lock acquisition smoother (a sketch of this trigger logic is given after these tips).

 - Do not use resonant gain filters

  This is really a stupid tip. When I was trying to lock MICH, the lock became quite difficult for some reason.

  It looked like there was an oscillation at 3 Hz every time the MICH control started. It turned out that a 3 Hz resonant gain filter had been making it difficult.

  All the resonant gain filters should be off while lock acquisition is taking place.
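Below is a minimal sketch of the DC-triggered locking logic from the first tip. The channel names, thresholds, and the idea of simply toggling the servo gains are illustrative assumptions, not the actual 40m trigger implementation.

# Hedged sketch of DC-threshold triggered locking: engage each loop only when
# its DC trigger condition is satisfied. Channel names and thresholds are
# assumptions for illustration only.
import time
from epics import caget, caput  # pyepics

REFL_DC = 'C1:LSC-REFLDC_OUT'   # assumed PRC buildup monitor
AS_DC   = 'C1:LSC-ASDC_OUT'     # assumed dark-port DC monitor
REFL_ON = 0.5                   # engage PRC when REFL DC rises above this
AS_ON   = 0.1                   # engage MICH when AS DC falls below this

def run_triggers(prcl_gain=1.0, mich_gain=1.0, poll=0.01):
    """Poll the DC monitors and switch the loop gains on/off accordingly,
    so out-of-lock signals never kick BS and PRM."""
    while True:
        caput('C1:LSC-PRCL_GAIN', prcl_gain if caget(REFL_DC) > REFL_ON else 0.0)
        caput('C1:LSC-MICH_GAIN', mich_gain if caget(AS_DC) < AS_ON else 0.0)
        time.sleep(poll)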

Quote from #4759

Eventually the DRMI was locked.

More details will be reported in the morning.

 

4757   Sat May 21 06:19:46 2011   kiwamu | Update | LSC | DRMI trial : no luck

I will try with POY55 that Koji prepared today.

4759   Mon May 23 00:36:51 2011   kiwamu | Update | LSC | DRMI trial : success

Eventually the DRMI was locked.

I was struggling to find a good signal port for SRC over the weekend and finally found AS55_I worked somehow. I used :

   REFL11_I --> PRC

   AS55_Q   --> MICH

   AS55_I    --> SRC

A configuration script was prepared such that someone can try this configuration by clicking a button on the C1IFO_CONFIGURE.adl screen.

I don't think this signal extraction scheme is the best, but now we can find better signal ports by shaking each DOF and looking at each signal port.
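As a rough illustration of that DOF-shaking procedure, the sketch below drives one DOF with a line and compares the demodulated response amplitude in each candidate port; the channel names, drive frequency, and the stand-in data are assumptions.

# Hedged sketch of a sensing-matrix style port search: demodulate each candidate
# signal at the excitation frequency and compare the line amplitudes.
import numpy as np

def line_amplitude(x, fs, f_line):
    """Return the amplitude of the component of x at frequency f_line."""
    t = np.arange(len(x)) / fs
    return 2 * np.abs(np.mean(x * np.exp(-2j * np.pi * f_line * t)))

fs, f_drive = 16384.0, 100.0
ports = ['AS55_I', 'AS55_Q', 'REFL55_I', 'REFL55_Q', 'POY55_I']
# data[port] would hold the recorded error signal for that port while SRM is
# shaken at f_drive; random noise stands in here so the example runs.
data = {p: np.random.randn(int(10 * fs)) for p in ports}
for p in ports:
    print(p, line_amplitude(data[p], fs, f_drive))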

More details will be reported in the morning.

Quote:

I will try with POY55 that Koji prepared today.

 

11632   Tue Sep 22 03:48:18 2015   ericq | Update | LSC | DRMI tweaked, briefly held with ALS arms

Following the RF component power supply grounding work, POP110, POP22 and REFL165 all changed somewhat. They have all been rephased for the DRMI, as they were before.

I tweaked the 3F DRMI settings, and chose to phase REFL165I to PRCL, instead of SRCL as before, to try and minimize the PRCL->MICH coupling instead of the SRCL->MICH coupling. 

With these settings, I once locked the DRMI for ~5 seconds with the arms held off on ALS, during which I could see some indications of necessary demod angle changes. Haven't yet gotten longer locks, but we're getting there...

252   Tue Jan 22 02:33:45 2008   rob | Update | LSC | DRMI work

0) The ETMY oplev needs work/centering

1) recentered DRMI oplevs

2) Did some light DRMI locking. Looked at the loops and the DD signals. The PODD signals look flaky; the beam may not be on the diode. MICH and PRC handoffs to DD signals were spotty, but not a total disaster. Changed the PD9 phase by 115 degs. Work continues on the DD_handoff subscript.

3) John says "There are ants everywhere."

4) Andrey is now versed in the arts of decimation.
366   Mon Mar 10 02:05:08 2008   rob | Update | Locking | DRMI+2ARMs working better

Some encouraging progress on the locking front tonight. After the work on the DRM loops last week and a review of the settings for initial lock acquisition (loop gains, tickle amplitude, filter states, so on), the DRMI+2ARMS locking is working pretty well. That's to say, it takes from 5-15 minutes generally for the IFO to lock in the offset CARM state, with the arm powers at 0.5. It's then possible to raise the arm powers slightly, and handing off control of CARM to MCL works at low power, but engaging the AO path (using PO_DC as an error signal) is not working so well. Taking swept sines indicates that the PO_DC should be a good error signal. The next good thing to try might be just using PO_DC as an error signal for the length path, without using the AO path at all, to see if it's something in the hardware.
10859   Tue Jan 6 17:41:20 2015   Jenne | Configuration | CDS | DTT doesn't do envelopes??

[Jenne, Diego]

We are working on trying out the UGF servos, and wanted to take loop measurements with and without the servo to prove that it is working as expected.  However, it seems like new DTT is not following the envelopes that we are giving it. 

If we uncheck the "user" box, then it uses the amplitude that is given on the excitation tab.  But, if we check user and select envelope, the amplitude will always be whatever number is the first amplitude requested in the envelope.  If we change the first amplitude in the envelope, DTT will use that number for the new amplitude, so it is reading that file, but not doing the whole envelope thing correctly.

Thoughts?  Is this a bug in new DTT, or a pebkac issue?

2045   Fri Oct 2 18:04:45 2009   rob | Update | CDS | DTT no good for OMC channels

I took the output of the OMC DAC and plugged it directly into an OMC ADC channel to see if I could isolate the OMC DAC weirdness I'd been seeing.  It looks like it may have something to do with DTT specifically.

Attachment 1 is a DTT transfer function of a BNC cable and some connectors (plus of course the AI and AA filters in the OMC system).  It looks like this on both linux and solaris.

Attachment 2 is a transfer function using sweepTDS (in mDV), which uses TDS tools as the driver for interfacing with testpoints and DAQ channels. 

Attachment 3 is a triggered time series, taken with DTT, of the same channels as used in the transfer functions, during a transfer function.  I think this shows that the problem lies not with awg or tpman, but with how DTT is computing transfer functions. 

 

I've tried soft reboots of the c1omc, which didn't work.   Since the TDS version appears to work, I suspect the problem may actually be with DTT.

1718   Tue Jul 7 16:06:59 2009   Clara | Update | Computer Scripts / Programs | DTT synchronization errors, help would be appreciated

I am attempting to use the DTT program to look at the coherence of the individual accelerometer signals with the MC_L signal. Rana suggested that I might break up the XYZ configuration, so I wanted to see how the coherence changed when I moved things around over the past couple of weeks, but I keep getting a synchronization error every time I try to set the start time to more than about 3 days ago. I tried restarting the program and checking the "reconnect" option in the "Input" tab, neither of which made any difference. I can access this data with no problem from Data Viewer and the Matlab scripts, so I'm not really sure what is happening. Help?

EDIT: Problem solved - Full data was not stored for the time I needed to access it for DTT.
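For reference, a minimal sketch of the coherence calculation being attempted (once the data can actually be fetched); the sample rate and placeholder arrays are assumptions.

# Hedged sketch: coherence between an accelerometer channel and MC_L.
import numpy as np
from scipy.signal import coherence

fs = 256.0                              # assumed common sample rate
acc = np.random.randn(int(600 * fs))    # placeholder accelerometer time series
mcl = np.random.randn(int(600 * fs))    # placeholder MC_L time series

f, cxy = coherence(acc, mcl, fs=fs, nperseg=4096)
print(f[np.argmax(cxy)], cxy.max())     # frequency and value of peak coherence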

14007   Fri Jun 22 15:13:47 2018   gautam | Update | CDS | DTT working

Seems like DTT also works now. The trick seems to be to run sudo /usr/bin/diaggui instead of just diaggui, so this is indicative of some conflict between the yum-installed gds and the relic gds from our shared drive. I also have to manually change the NDS settings each time; there's probably a way to set all of this up more smoothly, but I don't know what it is. awggui still doesn't get the correct channels, and I'm not sure where I can change the settings to fix that.

14008   Fri Jun 22 15:22:39 2018   sudo | Update | CDS | DTT working
Quote:

Seems like DTT also works now. The trick seems to be to run sudo /usr/bin/diaggui instead of just diaggui. So this is indicative of some conflict between the yum installed gds and the relic gds from our shared drive. I also have to manually change the NDS settings each time, probably there's a way to set all of this up in a more smooth way but I don't know what it is. awggui still doesn't get the correct channels, not sure where I can change the settings to fix that.

DON'T RUN DIAGGUI AS ROOT

3107   Wed Jun 23 15:33:42 2010   josephb | Update | CDS | Daily Downs Update

I visited Downs and announced that I would be showing up again until all the 40m hardware is delivered.

I brought over 4 ADC boards and 5 DAC boards which slot into the IO chassis.

The DACs are General Standards Corporation, PMC66-16AO16-16-F0-OF, PCIe4-PMC-0 adapters.

The ADCs are General Standards Corporation, PMC66-16AI6455A-64-50M, PCIe4-PMC-0 adapters.

These new ones have been placed with the blue and gold adapter boards, under the table behind the 1Y4-1Y5 racks.

With the 1 ADC and 1 DAC we already have, we now have enough to populate the two ends and the SUS IO chassis.  We have sufficient Binary Output boards for the entire 40m setup.  I'm going back with a full itemized list of our current equipment, and will bring back the remainder of the ADC/DAC boards we're due.  Apparently the ones which were bought for us are currently sitting in a test stand, so the ones I took today were from a different project, but they'll move the test-stand ones to that project eventually.

I'm attempting to push them to finish testing the IO chassis and to get the remainder of them delivered as well.

I'd like to try setting up the SUS IO chassis and the related computer this week since we now have sufficient parts for it.  I'd also like to move megatron to 1Y3, to free up space to place the correct computer and IO chassis where it's currently residing.

3119   Fri Jun 25 08:10:23 2010   josephb | Update | CDS | Daily Downs Update

Yesterday afternoon I went to downs and acquired the following materials:

2 100 ft long blue fibers, for use with the timing system.  These need to be run from the timing switch in 1Y5/1Y6 area to the ends.

3 ADCs (PMC66-16AI6455A-64-50M) and 2 DACs (PMC66-16AO16-16-F0-OF), bringing our total of each to 8.

7 ADC adapter boards which go in the backs of the IO chassis, bringing our total for those (1 for each ADC) to 8.

There were no DAC adapter boards of the new style available.  Jay asked Todd to build those in the next day or two (this was on Thursday), so hopefully by Monday we will have those.

Jay pointed out there are different styles of the Blue and Gold adapter boxes (for ADCs to DB44/37), for example.  I'm re-examining the drawings of the system (although some drawings were never revised for the new system, so I'm trying to interpolate from the current system in some cases) to determine what adapter styles and numbers we need.  In any case, those do not appear to have been finished yet (they're basically stuffed boards in a bag in Jay's office which need to be put into the actual boxes with face plates).

When I asked Rolf if I could take my remaining IO chassis, there was some back and forth between him and Jay about numbers they have and need for their test stands, and having some more built.  He needs some, Jay needs some, and the 40m still needs 3.  Some more are being built.  Apparently when those are finished, I'll either get those, or the ones that were built for the 40m and are currently in test stands.

 

Edit:

Apparently Friday afternoon (when we were all at Journal Club), Todd dropped off the 7 DAC adapter boards, so we have a full set of those.

Things still needed:

1) 3 IO chassis (2 Dolphin style for the LSC and IO, and 1 more small style for the South end station (new X)).  We already have the East end station (new Y) and SUS chassis.

2) 2 50+ meter Ethernet cables and a router for the DAQ system.  The Ethernet cables are to go from the end stations to 1Y5-ish, where the DAQ router will be located.

3) I still need to finish going through the old drawings to figure out what Blue and Gold adapter boxes are needed.  At most 6 ADC and 3 DAC adapters are necessary, but it may be fewer, and the styles need to be determined.

4) 1 more computer for the South end station.  If we're using Megatron as the new IO computer, then we're set on computers.  If we're not using Megatron in the new CDS system, then we'll need an IO computer as well.  The answer tends to depend on whether you ask Jay or Rolf.

 

3134   Tue Jun 29 12:08:43 2010   josephb | Update | CDS | Daily Downs Update

I talked with Rolf, and asked if we were using Megatron for IO.  The gist boiled down to: we (the 40m) need to use it for something, so yes, use it for the IO computer.  In regards to the other end station computer, he said he just needed a couple of days to make sure it doesn't have anything on it they need and to free it up.

I had a chat with Jay where he explained exactly what boards and cables we need.  Adapter boards are 95% of the way there.  I'll be stopping by this afternoon to collect the last few I need (my error this morning, not Jay's).  However, it looks like we're woefully short on cables and we'll have to make them.  I also acquired 2 D080281 (Dsub 44 x2 to SCSI) boards.

For each 2 Pentek DACs plus a 110B, you need 1 DAC adapter board (D080303 with 2 connectors for IDC40 and a SCSI).  You also need a D080281 to plug onto the back of the Sander box (going to the 110Bs) to convert the D-sub 44 pins to SCSI.

LSC will need none, SUS will need 3, IO will need 1, and the ends will need 1 each.  We have a total of 6, so we're set on D080303s.  We have 3 110Bs, so we need one more D080281 (Dsub44 to SCSI).  I'll get that this afternoon.

For each XVME220, we'll need one D080478 binary adapter.  We have 8 XVME220s and we have 8 boards, so we're set on D080478s.

For the ends, there's a special ADC to DB44/37 adapter, of which we only have 1.  I need to get them to make 1 more of these boxes.

We have 1 ADC to DB37 adapter, and we'll need 1 more of those as well: one for IO and one for SUS.

However, for each Pentek ADC we need an IDC40 to DB37 cable.  For each Pentek DAC we need an IDC40 to IDC40 cable.  We need a SCSI cable for each 110B.  I believe the current XVME220 cables plug directly into the BIO adapter boxes, so those are set.

So we need to make or acquire 11 IDC40 to DB37 cables, 7 IDC40 to IDC40 cables, and 3 SCSI cables.

 

Summary Needed:

1 ADC to DB44/37 for the End (D080397)

1 ADC adapter (D080302)

1 Dsub44 to SCSI (D080281)

11 IDC40 to DB37 cables

7 IDC40 to IDC40 cables

3 SCSI cables

PLUS from before:

3 IO Chassis (2 Dolphin, 1 Small)

1 1U computer (8 core for end)

Router/2 50+m ethernet for DAQ

3158   Tue Jul 6 11:57:06 2010   josephb | Update | CDS | Daily Downs Update

I went to talk to Rolf and Jay this morning.  I asked Rolf if a chassis was available, so he went over, disconnected one of his test stand chassis, and gave it to me.  It comes with a Contec DIO-1616L-PE isolated digital IO board and an OSS-MAX-EXP-ELB-C, which is a host interface board.  The OSS board means it has to go into the south end station: there's a very short maximum cable length associated with that style, and the LSC and IOO chassis will be further than that from their computers (we have Dolphin connectors on optical fiber for those connections).

I also asked Jay for another 4-port 37 d-sub ADC Blue and Gold adapter box, and he gave me the pieces.  While over there, I took 2 flat back panels and punched them with appropriate holes for the SCSI connectors that I need to put in them.  I still need to drill 4 holes in two chassis to mount the boards, and then do a bit of screwing things together.  It shouldn't take more than an hour to put them both together.  At that point, we should have all the adapter boxes necessary for the base design.  We still need some stuff for the green locking, as noted on Friday.

Major hardware still needed:

2 Dolphin style IO chassis

1 computer for south end front end

3177   Thu Jul 8 14:32:42 2010   josephb | Update | CDS | Daily Downs Update

After talking with Rolf, and clarifying exactly which machine I wanted, he gave me a Sun 4600 machine (similar to our current megatron).  I'm currently trying to find a good final place for it, but it's at least here at the 40m.

I also acquired 3 boards to plug our current VMIPMC 5565 RFM cards into, so they can be installed in the IO chassis.  These require +/- 5V power be connected to the top of the RFM board, which would not be possible in the 1U computers, so they have to go in the chassis.  These boards prevent the top of the chassis from being put on (not that Rolf or Jay have given me tops for the chassis).  I'm planning on using the RFM cards from the East End FE, the LSC FE, and the ASC FE.

I talked to Jay, and offered to upgrade the old megatron IO chassis myself if that would speed things up.  They have most of the parts, the only question being if Rolf has an extra timing board to put in it.  Todd is putting together a set of instructions on how to put the IO chassis together and he said he'd give me a copy tomorrow or Monday.  I'm currently planning on assembling it on Monday.  At that point I only need 1 more IO chassis from Rolf.

 

NEW CDS SETUP CHANGE:

When I asked about the Dolphin IO chassis, he said we're not planning on using Dolphin connections between the chassis and computer anymore.  Apparently there was some long-distance telecon with the Dolphin people and they said the Dolphin IO-chassis connection and RFM don't work well together (or something like that - it wasn't very clear from Rolf's description).  Anyway, the other style is apparently now made in a fiber-connected version (they weren't a year ago), so he's ordered one.  When I asked why only 1 and what about the IOO computer and chassis, he said that would either require moving the computer/chassis closer or getting another fiber connection (not cheap).

So the current thought I hashed out with Rolf briefly was:

We use one of the thin 1U computers and place that in the 1Y2 rack, to become the IOO machine.  This lets us avoid needing a fiber.  Megatron becomes the LSC/OAF machine, either staying in 1Y3 or possibly moving to 1Y4 depending on the maximum length of the Dolphin connection, because LSC and the SUS machine are still supposed to be connected via the Dolphin switch, to test that topology.

I'm currently working on an update to my CDS diagram with these changes and will attach it to this post later today.

3136   Tue Jun 29 14:19:44 2010   josephb | Update | CDS | Daily Downs Update (Part 2)

I picked up the ribbon cable connectors from Jay.  It looks like I'll have to make the new cables for connecting the ADCs/DACs myself (or maybe with some help).  We should be able to make enough ribbon cables for use now.  However, I'm adding "Make nice shielded cables" to my long-term to-do list.

I pointed out the 2 missing adapter boxes we need to Jay.  He has the parts (I saw them) and will try to get someone to put them together in the next day or so.  I also picked up 2 more D080281 (DB44 to SCSI), giving us enough of those.

I once again asked Jay for an update on IO chassis, and expressed concern that without them the CDS effort can't really go forward, and that we really need this to come together ASAP.  He said they still need to make 3 new ones for us.

So we're still waiting on a computer, 3 IO chassis, router + ethernet.

2845   Mon Apr 26 12:24:58 2010   josephb | Update | General | Daily Downs update

Talked with Jay briefly this morning.

We are due another 1-U 4 core (8 CPU) machine, which is one of the ones currently in the test stand.  I'm hoping sometime this week I can convince Alex to help me remove it from said test stand.

The megatron machine we have is definitely going to be used in the 40m upgrade (to answer a question of Rana's from last Wednesday's meeting).  That's apparently the only machine of that class we get, so moving it to the vertex for use as the LSC or SUS vertex machine may make sense.  Overall we'll have the ASS, OMC, and Megatron (SUS?) machines, along with the 4 new 1-U machines for LSC, IO, End Y and End X.  We are getting 4 more IO chassis, for a total of 5.  The ASS and OMC machines will be going without full new chassis.

Speaking of IO chassis, they are still being worked on.  They still need a few cards put in and some wiring work done.  I also didn't see any other adapter boards finished.

2871   Mon May 3 15:39:39 2010   josephb | Update | CDS | Daily Downs update

Talked with Jay briefly today.  Apparently there are 3 IO chassis currently on the test stand at Downs and undergoing testing (or at least they were when Alex and Rolf were around).  They are being tested to determine which slots refer to which ADC, among other things. Apparently the numbering scheme isn't as simple as 0 on the left, and going 1,2,3,4, etc.  As Rolf and Alex are away this week, it is unlikely we'll get them before their return date.

Two other chassis (which apparently is one more than the last time I talked with Jay) are still missing cards for communicating between the computer and the IO chassis, although Gary thinks I may have taken them with me in a box.  I've looked through all the CDS stuff I know of here at the 40m and have not seen the cards.  I'll be checking in with him tomorrow to figure out when (and if) I'll have the cards needed.

5145   Mon Aug 8 22:12:58 2011   Nicole | Summary | SUS | Daily Summary

Today I balanced the mirror, finished putting together the second photosensor, and finished my photosensor circuit box! 

Upon Jamie's suggestion, I have used a translation stage to obtain calibration data points (voltage outputs relative to displacement) for the new photosensor and for the first photosensor.

I will plot these tomorrow morning (too hungry now > < )
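A minimal sketch of the calibration fit that these data points will go into; the displacement/voltage numbers below are placeholders, not the measured values.

# Hedged sketch: linear fit of photosensor output voltage vs. translation-stage
# displacement over the (assumed) linear region of the sensor.
import numpy as np

displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # stage positions
voltage_v = np.array([0.10, 0.55, 1.02, 1.48, 1.95])     # photosensor outputs

slope, offset = np.polyfit(displacement_mm, voltage_v, 1)
print(f"calibration: {slope:.3f} V/mm, offset {offset:.3f} V")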

 

Here is a photo of the inside of my circuit box! It is finally done! It is now enclosed in a nice aluminum casing ^ ^

 

frontview.jpg

7203   Thu Aug 16 13:04:36 2012   Liz | Summary | Computer Scripts / Programs | Daily Summary Details

I just wrote a short description of how to run the daily summary pages and the configuration process for making changes to the site.  It can be found in /users/public_html/40m-summary and is named README.txt.  If I need to clarify anything, please let me know!  The configuration process should be relatively straightforward, so it will be easy to add plots or change them when there are changes at the 40 meter.

7108   Tue Aug 7 18:38:50 2012   Liz | Update | Computer Scripts / Programs | Daily Summary Pages are in their final form!

Please check the summary pages out at the link below and let me know if there are any modifications I should make!  All existing pages are up to date and contain all of the pages I have.

Questions, comments, and suggestions will be appreciated! Contact me at endavison@umail.ucsb.edu

https://nodus.ligo.caltech.edu:30889/40m-summary/

16940   Wed Jun 22 18:55:31 2022   yuta | Update | LSC | Daily alignment work; POY trouble solved

[Koji, Yuta]

I found that the Yarm could not be locked today. Both POY11 and POYDC were not there when the Yarm was aligned, and ITMY needed to be highly misaligned to get POYDC.
The POY beam also could not be found on the ITMY table.
Koji suggested using AS55 instead to lock the Yarm. We did that (AS55_I_ERR, C1:LSC-YARM_GAIN=-0.002) and manually ASS-ed to get the Yarm aligned (ASS with AS55 somehow didn't work).
After that, we checked the ITMY table and found that the POY beam was clipped at an iris which was closed!
I opened it and now the Yarm locks with POY11 again. ASS works.
PMC was also aligned.

C1:PSL-PMC_PMCTRANSPD ~0.74
C1:IOO-MC_TRANS_SUM ~14000
C1:LSC-TRY_OUT ~0.7
C1:LSC-TRX_OUT ~0.8

16941   Wed Jun 22 19:41:13 2022   Koji | Update | LSC | Daily alignment work; POY trouble solved

Before the final measurement of the DC values for the transmissions, I aligned the PMC. This made the PMC trans increase from 0.67 to 0.74.

6675   Thu May 24 14:49:59 2012   Koji | Summary | General | Daily news idea

Top tab categories:

  • Summary
  • CDS
    • CDS Status
  • PEM
    • Seismic 24h trend
    • Acoustic 24h trend
    • Weather/Temp/Barometer/etc 24h trend
  • PSL/IOO
    • PSL summary trend / duty ratio
    • IOO summary (MC Health Check/IOO QPD trends / IFO QPD trends / Transmon QPD trends) duty ratio
  • SUS
    • Summary
    • OSEM PSD/trend
    • OPLEV PSD/trend
  • IFO
    • DC Mon
    • RF ports
    • OMC
  • Steve
    • Vacuum
  • Misc.

IFO

  • DC Monitors
    • Incident beam power trend (24h)
    • AS/REFL/POP/TRX/TRY beam power trend (24h)
    • AS/POP RF beam power trend (24h)
  • RF port
    • DARM sensitivity PSD (mean/min/max/reference) for an hour
    • DARM/CARM/PRCL/MICH/SRCL PSD
    • DARM/CARM/PRCL/MICH/SRCL (freq vs Gaussianity)
    • DARM/CARM/PRCL/MICH/SRCL calibration trend
  • OMC
    • TBD

 

11319   Fri May 22 11:59:54 2015   ericq | Update | SUS | DampRestore script problem

PRM watchdog tripped, but the damprestore.py script wouldn't run. 

It turns out the script tries to import some ezca stuff from /users/yuta (angry), which had been moved to /users/OLD/yuta (crying). 

I've moved the yuta directory back to /users/ until I fix the damprestore script. 

11320   Fri May 22 12:09:57 2015   rana | Update | SUS | DampRestore script problem

I will move it back. We need to fix our scripts to not use any users/ libraries ever again.

Quote:

PRM watchdog tripped, but the damprestore.py script wouldn't run. 

It turns out the script tries to import some ezca stuff from /users/yuta (angry), which had been moved to /users/OLD/yuta (crying). 

I've moved the yuta directory back to /users/ until I fix the damprestore script. 

 

16748   Tue Mar 29 17:35:54 2022   Paco | Update | SUS | Damping fix on BS, AS4, PR2, and PR3

[Ian, Paco]

  • We removed the "cheby" filters from AS4, PR2 and PR3 which had been misplaced after copying from the old SUS models. After removing them, the new SOS damped fine. Note that because of the Input matrices, the filters have to be enabled all at once for the MIMO loop to make sense.
  • We also disabled the "Cheby" filter on BS and saw it damp better. We don't understand this yet, but perhaps it's just a consequence of the many changes in the BSC that have rendered this filter obsolete.
  • We also reduced the damping gains on PR2, PR3 and AS4 to prevent overflow values. After the adjustments the optics were damping fine.
12515   Thu Sep 22 22:52:08 2016   ericq | Update | General | Damping found to be on

Just a heads up, it looks like the damping came on at around 8:30pm. Not sure why. 

10623   Fri Oct 17 15:17:31 2014   jamie | Update | CDS | Daqd "fixed"?

I very tentatively declare that this particular daqd crapfest is "resolved" after Jenne rebooted fb and daqd has been running for about 40 minutes now without crapping itself.  Wee hoo.

I spent a while yesterday trying to figure out what could have been going on.  I couldn't find anything.  I found an elog that said a previous daqd crapfest was finally only resolved by rebooting fb after a similar situation, i.e. there had been an issue that was resolved, daqd was still crapping itself, we couldn't figure out why so we just rebooted, daqd started working again.

So, in summary, totally unclear what the issue was, or why a reboot solved it, but there you go.

10624   Fri Oct 17 16:54:11 2014   jamie | Update | CDS | Daqd "fixed"?

Quote:

I very tentatively declare that this particular daqd crapfest is "resolved" after Jenne rebooted fb and daqd has been running for about 40 minutes now without crapping itself.  Wee hoo.

I spent a while yesterday trying to figure out what could have been going on.  I couldn't find anything.  I found an elog that said a previous daqd crapfest was finally only resolved by rebooting fb after a similar situation, i.e. there had been an issue that was resolved, daqd was still crapping itself, we couldn't figure out why so we just rebooted, daqd started working again.

So, in summary, totally unclear what the issue was, or why a reboot solved it, but there you go.

Looks like I spoke too soon.  daqd seems to be crapping itself again:

controls@fb /opt/rtcds/caltech/c1/target/fb 0$ ls -ltr logs/old/ | tail -n 20
-rw-r--r-- 1 4294967294 4294967294    11244 Oct 17 11:34 daqd.log.1413570846
-rw-r--r-- 1 4294967294 4294967294    11086 Oct 17 11:36 daqd.log.1413570988
-rw-r--r-- 1 4294967294 4294967294    11244 Oct 17 11:38 daqd.log.1413571087
-rw-r--r-- 1 4294967294 4294967294    13377 Oct 17 11:43 daqd.log.1413571386
-rw-r--r-- 1 4294967294 4294967294    11481 Oct 17 11:45 daqd.log.1413571519
-rw-r--r-- 1 4294967294 4294967294    11985 Oct 17 11:47 daqd.log.1413571655
-rw-r--r-- 1 4294967294 4294967294    13219 Oct 17 13:00 daqd.log.1413576037
-rw-r--r-- 1 4294967294 4294967294    11150 Oct 17 14:00 daqd.log.1413579614
-rw-r--r-- 1 4294967294 4294967294     5127 Oct 17 14:07 daqd.log.1413580231
-rw-r--r-- 1 4294967294 4294967294    11165 Oct 17 14:13 daqd.log.1413580397
-rw-r--r-- 1 4294967294 4294967294     5440 Oct 17 14:20 daqd.log.1413580845
-rw-r--r-- 1 4294967294 4294967294    11352 Oct 17 14:25 daqd.log.1413581103
-rw-r--r-- 1 4294967294 4294967294    11359 Oct 17 14:28 daqd.log.1413581311
-rw-r--r-- 1 4294967294 4294967294    11195 Oct 17 14:31 daqd.log.1413581470
-rw-r--r-- 1 4294967294 4294967294    10852 Oct 17 15:45 daqd.log.1413585932
-rw-r--r-- 1 4294967294 4294967294    12696 Oct 17 16:00 daqd.log.1413586831
-rw-r--r-- 1 4294967294 4294967294    11086 Oct 17 16:02 daqd.log.1413586924
-rw-r--r-- 1 4294967294 4294967294    11165 Oct 17 16:05 daqd.log.1413587101
-rw-r--r-- 1 4294967294 4294967294    11086 Oct 17 16:21 daqd.log.1413588108
-rw-r--r-- 1 4294967294 4294967294    11097 Oct 17 16:25 daqd.log.1413588301
controls@fb /opt/rtcds/caltech/c1/target/fb 0$

The times all indicate when the daqd log was rotated, which happens every time the process restarts.  It doesn't seem to be happening as consistently, though; it's been 30 minutes since the last one.  I wonder if it is somehow correlated with actual interaction with the NDS process.  Does some sort of data request cause it to crash?

 

10633   Thu Oct 23 01:39:34 2014   Jenne | Update | CDS | Daqd "fixed"?

Merging of threads. 

ChrisW figured out that the problem with the frame builder seems to be that it's having to wait for disk access.  He has tweaked some things, and life has been soooo much better for Q and me this evening!  See Chris' elog at elog 10632.

In the last few hours we've had 2 or maybe 3 times that I've had to reconnect Dataviewer to the framebuilder, which is a significant improvement over having to do it every few minutes.

Also, Rossa is having trouble with DTT today, starting sometime around dinnertime.  Ottavia and Pianosa can do DTT things, but Rossa keeps getting "test timed out". 

10616   Thu Oct 16 03:18:48 2014   Jenne | Update | CDS | Daqd segfaulting again

 The daqd process on the frame builder looks like it is segfaulting again.  It restarts itself every few minutes.  

The symptoms remind me of elog 9530, but /frames is only 93% full, so the cause must be different.  

Did anyone do anything to the fb today?  If you did, please post an elog to help point us in a direction for diagnostics.

Q!!!!  Can you please help?  I looked at the log files, but they are kind of mysterious to me - I can't really tell the difference between a current (bad) log file and an old (presumably fine) log file.  (I looked at 3 or 4 random, old log files, and they're all different in some ways, so I don't know which errors and warnings are real, and which are to be ignored).

10617   Thu Oct 16 12:22:43 2014   ericq | Update | CDS | Daqd segfaulting again

I've been trying to figure out why daqd keeps crashing, but nothing is fixed yet. 

I commented out the line in /etc/inittab that runs daqd automatically, so I could run it manually. Each time I run it (with ./daqd -c ./daqdrc while in c1/target/fb), it churns along fine for a little while, but eventually spits out something like:

[Thu Oct 16 12:07:23 2014] main profiler warning: 1 empty blocks in the buffer
[Thu Oct 16 12:07:24 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 12:07:25 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1097521658 to 1097521660
Segmentation fault
 
Or:
 
[Thu Oct 16 11:43:54 2014] main profiler warning: 1 empty blocks in the buffer
[Thu Oct 16 11:43:55 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:56 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:57 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:58 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:43:59 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:00 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:01 2014] main profiler warning: 0 empty blocks in the buffer
[Thu Oct 16 11:44:02 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1097520250 to 1097520257
FATAL: exception not rethrown
Aborted

I looked for time disagreements between the FB and the frontends, but they all seem fine. Running ntpdate only corrected things by 5ms. However, looking through /var/log/messages on FB, I found that ntp claims to have corrected the FB's time by ~111600 seconds (~31 hours) when I rebooted it on Monday.

Maybe this has something to do with the timing that the FB is getting? The FE IOPs seem happy with their sync status, but I'm not personally currently aware of how the FB timing is set up. 


Addendum:

On Monday, Jamie suggested checking out the situation with FB's RAID. Searching the elog for "empty blocks in the buffer" also brought up posts that mentioned problems with the RAID. 

I went to the JetStor RAID web interface at http://192.168.113.119, and it reports everything as healthy; no major errors in the log. Looking at the SMART status of a few of the drives shows nothing out of the ordinary. The RAID is not mounted in read-only mode either, as was the problem mentioned in previous elogs. 

4320   Thu Feb 17 23:56:53 2011   josephb | Update | CDS | Daqd was rebuilt, now reverted.

As one of the troubleshooting steps for the daqd (i.e. framebuilder) I rebuilt the daqd executable.  My guess is that somewhere in the build code there is some kind of GPS offset to make the time correct, due to our lack of an IRIG-B signal.

The actual daqdrc file was left untouched when I did the new install, so the symmetricom gps offset is still the same, which confuses me.

I'll take a look at the SVN diffs tomorrow to see what changed in that code that could cause a 300000000 or so offset to the GPS time.

 

 

2095   Thu Oct 15 02:38:10 2009   rana, rob | Update | OMC | Dark Port Mode Scan using the OMC

Bottom trace is proportional to the OMC PZT voltage - top trace is the transmitted light through the OMC. Interferometer is locked (DARM- RF) with arm powers = 80 / 100. The peaks marked by the cursors are the +(- ?) 166 MHz sidebands.

519   Wed Jun 4 16:57:12 2008   josephb | Configuration | Cameras | Dark images from cameras (electronics noise measurement)
The attached pdfs are 1 second and 1 millisecond long integrations from the GC650 and GC750 cameras with a cap in place - i.e. no light.

They include the mean and standard deviation values.

The single bright pixel in the 1 second long exposure image for the GC650 seems to be a real effect. Multiple images taken show the same bright pixel (although with slightly varying amplitudes).

The last pdf is a zoom in on the z-axis of the first pdf (i.e. GC650 /w 1 sec exposure time).

I'm not really sure what to make of the mean remaining virtually fixed for the different integration times for both cameras.  I guess the zero level is simply an offset, and it doesn't result in any runaway integrations in general, although there are certainly some stronger pixels in the long exposures compared to the short exposures.

It's interesting to note that the standard deviation actually drops from the long exposure to the short exposure, possibly influenced by certain pixels which seem to grow with time.

The one with the least variation from its "zero" was the 1 millisecond GC750 dark image.
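A minimal sketch of the dark-frame statistics discussed above (mean, standard deviation, and hot-pixel count); the frame sizes and random stand-in data are assumptions, since the real frames would be loaded from the saved camera images.

# Hedged sketch: dark-frame statistics and hot-pixel identification.
import numpy as np

def dark_stats(frame):
    """Return (mean, std) of a dark frame given as a 2-D pixel array."""
    return float(frame.mean()), float(frame.std())

# Placeholder frames; in practice these would be read from the camera images.
gc650_1s  = np.random.poisson(5.0, size=(494, 659)).astype(float)
gc650_1ms = np.random.poisson(4.8, size=(494, 659)).astype(float)

for label, frame in [('GC650 1 s', gc650_1s), ('GC650 1 ms', gc650_1ms)]:
    mean, std = dark_stats(frame)
    hot = np.argwhere(frame > mean + 10 * std)   # candidate bright pixels
    print(f"{label}: mean={mean:.2f}, std={std:.2f}, hot pixels={len(hot)}")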
7330   Fri Aug 31 17:44:21 2012   Manasa | Update | Ringdown | Data

Quote:

Ok, so the whole idea that mirror motion can explain the ripples is nonsense. At least, when you think off the ringdown with "pump off". The phase shifts that I tried to estimate from longitudinal and tilt mirror motion are defined against a non-existing reference. So I guess that I have to click on the link that Koji posted...

Just to mention, for the tilt phase shift (yes, there is one, but the exact expression has two more factors in the equation I posted), it does not matter, which mirror tilts. So even for a lower bound on the ripple time, my equation was incorrect. It should have the sum over all three initial tilt angles not only the two "shooting into the long arms" of the MC.

Quote:

Laser frequency shift = longitudinal motion of the mirrors

Ringing: http://www.opticsinfobase.org/ol/abstract.cfm?uri=ol-20-24-2463

Quote:

Hmm. I don't know what ringing really is. Ok, let's assume it has to do with the pump... I don't see how the pump laser could produce these ripples. They have large amplitudes and so I always suspected something happening to the intracavity field. Therefore I was looking for effects that would change resonance conditions of the intracavity field during ringdown. Tilt motion seemed to be one explanation to me, but it may be a bit too slow (not sure yet). Longitudinal mirror motion is certainly too slow. What else could there be?

 

 

It is essential that we take a look at the ringdown data for all measurements made so far to figure out what must be done to track down the source of these notorious ripples. I've attached a plot showing that the decay time is the same in all cases. About the ripples: it seems unlikely to both Jan and me that the ripples are some electronic noise, because they do not follow any common pattern or time constant. We have discussed with Koji monitoring the frequency shift and the input power to the MC, and also trying other methods of shutting down the pump, as the next steps to track down their source.

 

cum_plot.png 

10274   Sat Jul 26 10:12:19 2014   Akhil | Update | General | Data Acquisition from FC into EPICS Channels

I succeeded in creating a new channel access server hosted on domenica (R Pi) for continuous data acquisition from the FC into accessible channels. For this I have written a ctypes interface between EPICS and the C interface code to write data into the channels. The channels which I created are:

C1:ALS-X-BEAT-NOTE-FREQ

C1:ALS-Y-BEAT-NOTE-FREQ

 

The scripts I have written for this can be found in:

db script in:     /users/akhil/fcreadoutIoc/fcreadoutApp/Db/fcreadout.db

 Python code:  /users/akhil/fcreadoutIoc/pycall

C code:          /users/akhil/fcreadoutIoc/FCinterfaceCcode.c

I will give the channels standard names (similar to the names on the channel root) once the testing is completed and I confirm that the data from the FC is consistent with the C code readout. Once ready I will run the code continuously so that both the server and the data acquisition are always in process.

Yesterday, when I set out to test the channel, I faced a few serious issues in booting the Raspberry Pi. However, I have backed up the files on the Pi and will try to debug the issue very soon (I will test with Eric Q's R Pi).

To run these codes one must be root (sudo python pycall, sudo ./FCinterfaceCcode) because the HID devices can be written to only by root (should look into solving this issue).

Instructions for the installation of EPICS, and for creating a channel server on the Pi, will be described in detail in the 40m Wiki (FOLL page).
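For illustration only, here is a minimal sketch of what such a soft channel-access server could look like using pcaspy plus a ctypes call into a C readout library. The library name, function name, and the simplified channel name are assumptions and do not reproduce the actual code in /users/akhil.

# Hedged sketch: EPICS soft IOC on the R Pi publishing the frequency counter
# readout obtained through a ctypes call into a (hypothetical) C library.
import ctypes
from pcaspy import SimpleServer, Driver

fc = ctypes.CDLL('./libfcreadout.so')      # hypothetical wrapper around the HID readout
fc.fc_read_frequency.restype = ctypes.c_double

prefix = 'C1:ALS-'
pvdb = {'X_BEAT_NOTE_FREQ': {'prec': 3}}   # simplified channel name

class FCDriver(Driver):
    def read(self, reason):
        # Poll the counter through the C interface on every CA read.
        if reason == 'X_BEAT_NOTE_FREQ':
            self.setParam(reason, fc.fc_read_frequency())
        return self.getParam(reason)

if __name__ == '__main__':
    server = SimpleServer()
    server.createPV(prefix, pvdb)
    driver = FCDriver()
    while True:
        server.process(0.1)                # serve channel access requests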

 

10276   Sat Jul 26 13:38:34 2014   Jamie | Update | General | Data Acquisition from FC into EPICS Channels

Quote:

 I succeeded in creating a new channel access server hosted on domenica ( R Pi) for continuous data acquisition from the FC into  accessible channels. For this I have written a ctypes interface between EPICS and the C interface code to write data into the channels. The channels which I created are:

C1:ALS-X-BEAT-NOTE-FREQ

C1:ALS-Y-BEAT-NOTE-FREQ

 

The scripts I have written for this can be found in:

db script in:     /users/akhil/fcreadoutIoc/fcreadoutApp/Db/fcreadout.db

 Python code:  /users/akhil/fcreadoutIoc/pycall

C code:          /users/akhil/fcreadoutIoc/FCinterfaceCcode.c

I will give the standard channel names(similar to the names on the channel root)once the testing is completed and confirm that data from FC is consistent with the C code readout. Once ready I will run the code forever so that both the server and data acquisition are in process always.

Yesterday, when I set out to test the channel, I faced few serious issues in booting the raspberry pi. However, I have backed up the files on the Pi and will try to debug the issue very soon( I will test with Eric Q's R Pi).

To run these codes one must be root ( sudo python pycall, sudo ./FCinterfaceCcode)  because the HID- devices can be written to only by the root(should look into solving this issue). 

Instructions for Installation of EPICS, and how to create channel server on Pi will be described in detail in 40m Wiki ( FOLL page).

 

controls@rossa|~ 2> ls /users/akhil/fcreadoutIoc
ls: cannot access /users/akhil/fcreadoutIoc: No such file or directory
controls@rossa|~ 2> 

This code should be in the 40m SVN somewhere, not just stored on the RPi.

I'm still confused why python is in the mix here at all.  It doesn't make any sense at all that a C program (EPICS IOC) would be calling out to a python program (pycall) that then calls out to a C program (FCinterfaceCcode).  That's bad programming.  Streamline the program and get rid of python.

You also definitely need to fix whatever the issue is that requires running the program as root.  We can't have programs like this run as root.

10277   Sat Jul 26 14:35:28 2014   Akhil | Update | General | Data Acquisition from FC into EPICS Channels

Quote:

Quote:

 I succeeded in creating a new channel access server hosted on domenica ( R Pi) for continuous data acquisition from the FC into  accessible channels. For this I have written a ctypes interface between EPICS and the C interface code to write data into the channels. The channels which I created are:

C1:ALS-X-BEAT-NOTE-FREQ

C1:ALS-Y-BEAT-NOTE-FREQ

 

The scripts I have written for this can be found in:

db script in:     /users/akhil/fcreadoutIoc/fcreadoutApp/Db/fcreadout.db

 Python code:  /users/akhil/fcreadoutIoc/pycall

C code:          /users/akhil/fcreadoutIoc/FCinterfaceCcode.c

I will give the standard channel names(similar to the names on the channel root)once the testing is completed and confirm that data from FC is consistent with the C code readout. Once ready I will run the code forever so that both the server and data acquisition are in process always.

Yesterday, when I set out to test the channel, I faced few serious issues in booting the raspberry pi. However, I have backed up the files on the Pi and will try to debug the issue very soon( I will test with Eric Q's R Pi).

To run these codes one must be root ( sudo python pycall, sudo ./FCinterfaceCcode)  because the HID- devices can be written to only by the root(should look into solving this issue). 

Instructions for Installation of EPICS, and how to create channel server on Pi will be described in detail in 40m Wiki ( FOLL page).

 

controls@rossa|~ 2> ls /users/akhil/fcreadoutIoc
ls: cannot access /users/akhil/fcreadoutIoc: No such file or directory
controls@rossa|~ 2> 

This code should be in the 40m SVN somewhere, not just stored on the RPi.

I'm still confused why python is in the mix here at all.  It doesn't make any sense at all that a C program (EPICS IOC) would be calling out to a python program (pycall) that then calls out to a C program (FCinterfaceCcode).  That's bad programming.  Streamline the program and get rid of python.

You also definitely need to fix whatever the issue is that requires running the program as root.  We can't have programs like this run as root.

I tried making these changes but there was a problem with the R Pi boot again. I now know how to bypass the python code using the IOC. I will make these changes once the problem with the Pi is fixed.

10200   Tue Jul 15 01:41:43 2014   Jenne | Update | LSC | Data for DARM on sqrtInv investigation

I took some data tonight for a quick look at what combinations of DC signals might be good to use for DARM, as an alternative to ALS before we're ready for RF.

I had the arms locked with ALS, PRMI with REFL33, and tried to move the CARM offset between plus and minus 1.  The PRMI wasn't holding lock closer than about -0.3 or +0.6, so that is also a problem.  Also, I realized just now that I have left the beam dumps in front of the transmission QPDs, so I had prevented any switching of the trans PD source.  This means that all of my data for C1:LSC-TR[x,y]_OUT_DQ is taken with the Thorlabs PDs, which is fine, although they saturate around arm powers of 4 ever since my analog gain increase on the whitening board.  Anyhow, the IFO didn't hold lock for much beyond then anyway, so I didn't miss out on much.  I need to remember to remove the dumps though!!

Self:  Good stuff should be between 12:50am - 1:09am.  One set of data was ./getdata -s 1089445700 -d 30 -c C1:LSC-TRX_OUT_DQ C1:LSC-TRY_OUT_DQ C1:LSC-CARM_IN1_DQ C1:LSC-PRCL_IN1_DQ
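The same stretch of data can also be pulled with the python nds2 client; a minimal sketch follows (the framebuilder hostname and port are assumptions).

# Hedged sketch: fetch the channels listed above for the 30 s segment with nds2.
import nds2

conn = nds2.connection('fb', 8088)         # assumed 40m framebuilder NDS server
channels = ['C1:LSC-TRX_OUT_DQ', 'C1:LSC-TRY_OUT_DQ',
            'C1:LSC-CARM_IN1_DQ', 'C1:LSC-PRCL_IN1_DQ']
bufs = conn.fetch(1089445700, 1089445730, channels)
trx = bufs[0].data                         # numpy array of TRX samples
print(len(trx))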

10208   Wed Jul 16 01:04:09 2014   Jenne | Update | LSC | Data for DARM on sqrtInv investigation

I realized while I was looking at last night's data that I had been doing CARM sweeps, when really I wanted to be doing DARM sweeps.  I took a few sets of data of DARM sweeps while locked on ALSdiff.  However, Rana pointed out that comparing ALSdiff to TRX-TRY isn't exactly a fair comparison while I'm locked on ALSdiff, since it's an in-loop signal, so it looks artificially quiet. 

Anyhow, I may consider transitioning DARM over to AS55 temporarily so that I can look at both as out-of-loop sensors. 

Also, so that I can try locking DARM on DC transmission, I have added 2 more columns to the LSC input matrix (now we're at 32!), for TRX and TRY.  We already had sqrt inverse versions of these signals, but the plain TRX and TRY were only available as normalization signals before.  Since Koji put in the facility to sqrt or not the normalization signals, I can now try:

Option 1:  ( TRX - TRY ) / (TRX + TRY)

Option 2:  ( TRX - TRY ) / sqrt( TRX + TRY )

DARM does not yet have the facility to normalize one signal (DC transmission) and not another (ALS diff), so I may need to include that soon.  For tonight, I'm going to try just changing matrix elements with ezcastep.
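A minimal sketch of the two candidate DC-transmission DARM signals above, assuming trx and try_ are arrays of the TRX/TRY transmitted powers (all channel handling omitted):

# Hedged sketch: the two normalization options for a DC-transmission DARM signal.
import numpy as np

def darm_dc_option1(trx, try_):
    """Option 1: (TRX - TRY) / (TRX + TRY), normalized by the total power."""
    return (trx - try_) / (trx + try_)

def darm_dc_option2(trx, try_):
    """Option 2: (TRX - TRY) / sqrt(TRX + TRY), normalized by the square root."""
    return (trx - try_) / np.sqrt(trx + try_)

trx = np.array([0.9, 1.0, 1.1])
try_ = np.array([1.1, 1.0, 0.9])
print(darm_dc_option1(trx, try_), darm_dc_option2(trx, try_))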

Since I changed the c1lsc.mdl model, I compiled it, restarted the model, and checked the model in.  I have also added these 2 columns to the AUX_ERR sub-screen for the LSC input matrix.  I have not changed the LSC overview screen.

2775   Tue Apr 6 11:27:11 2010   Alberto | Update | Computer Scripts / Programs | Data formats in the Agilent AG4395a Spectrum Analyzer

Lately I've been trying to sort out the problem of the discrepancy that I noticed between the values read on the spectrum analyzer's display and what we get with the GPIB interface.

It turns out that the discrepancy originates from the two different data vectors that the display and the GPIB interface acquire. Whereas the display shows data in "RAW" format, the GPIB interface, because of the way the netgpibdata script is written, acquires the so-called "error-corrected data". That is, the GPIB-downloaded data is postprocessed and corrected for some internal calibration factors of the instrument.

Another problem that I noticed in the GPIB-downloaded data when I was measuring noise spectra is an unwanted factor of 2 in the amplitude spectral density.
For example, measuring the amplitude spectral density of the FSS RF PD's dark noise at its resonant frequency (~21.5 MHz), I would expect ~15nV/rtHz from the thermal noise - as Rana pointed out in elog entry 2759. However, the spectrum analyzer reads 30nV/rtHz, in both the display and the GPIB-downloaded data, except for the above-mentioned small discrepancy between the two. (The discrepancy is about 0.5dBm/Hz in the power spectral density.)
 
My measurement, as I showed in elog entry 2760, is ~15nV/rtHz, but only because I divided by 2. Now I realize that that division was unjustified.
 
I'm trying to figure out the reason for that. For now I'm not sure we can trust the netgpib package for spectrum measurements with the AG4395.
2776   Tue Apr 6 16:55:28 2010   Alberto | Update | Computer Scripts / Programs | Data formats in the Agilent AG4395a Spectrum Analyzer

Quote:

Lately I've been trying to sort out the problem of the discrepancy that I noticed between the values read on the spectrum analyzer's display and what we get with the GPIB interface.

It turns out that the discrepancy originates from the two data vector that the display and the GPIB interface acquire. Whereas the display shows data in "RAW" format, the GPIB interface, for the way the netgpibdata script is written, acquires the so called "error-corrected data". That is the GPIB downloaded data is postprocessed and corrected for some internal calibration factors of the instrument.

Another problem that I noticed in the GPIB downloaded data when I was measuring noise spectrum, is an unwanted factor of 2 in the amplitude spectral density.
For example, measuring the amplitude spectral density of the FSS RF PD's dark noise at its resonant frequency (~21.5 MHz), I would expect ~15nV/rtHz from the thermal noise - as Rana pointed out in the elog entry 2759). However, the spectrum analyzer reads 30nV/rtHz, in both the display and the GPIB downloaded data, except for the above mentioned little discrepancy between the two. (The discrepancy is about 0.5dBm/Hz in the power spectrum density).
 
My measurement, as I showed it in the elog entry 2760) is of ~15nV/rtHz, but only becasue I divided by 2. Now I realize that that division was unjustified.
 
I'm trying to figure out the reason for that. By now I'm not sure we can trust the netgpib package for spectrum measurements with the AG4395.

I noticed that someone who wasn't me has edited the wiki page about netgpibdata under my name, saying:

 " [...]

* A4395 Spectrum Units
Independetly by which unites are displayed by the A4395 spectrum analyzer on the screen, the data is saved in Watts/rtHz
"

That is not correct. The spectrum is just in Watts, since it gives the power over the bandwidth. The corresponding power spectral density is shown under the "Noise" measurement format and it's in Watts/Hz.
Watts/rtHz is not a correct unit.
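As a hedged numerical illustration of the units issue (the numbers are examples, and the analyzer's noise-bandwidth correction factor is ignored): a power reading in dBm taken with a given resolution bandwidth can be converted to a power spectral density and then to an amplitude spectral density into 50 ohms.

# Hedged sketch: dBm measured in a resolution bandwidth -> W/Hz -> V/rtHz (50 ohm).
import math

def dbm_to_asd(p_dbm, rbw_hz, r_ohm=50.0):
    p_watts = 1e-3 * 10 ** (p_dbm / 10.0)   # power in the RBW, in W
    psd = p_watts / rbw_hz                   # power spectral density, W/Hz
    return math.sqrt(psd * r_ohm)            # amplitude spectral density, V/rtHz

print(dbm_to_asd(-120.0, 100.0))   # e.g. -120 dBm in a 100 Hz RBW -> ~2.2e-8 V/rtHz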
7746   Mon Nov 26 18:56:34 2012   Jenne | HowTo | Computers | Data logging suggestions

We've been talking for a while about how we want to store data.  I'm not in love with keeping it on the elog, although I think we should always be able to reference and go back and forth between the elogs and the data.

I have made a new folder: /data    EDIT: nevermind.  I want it to be on the file system just like /users, but I don't know how to do that.  Right now the folder is just on Ottavia. Jamie will help me tomorrow.

In this folder, we will save all of the data which goes into the elog. 

I propose that we should have a common format for the names of the data files, so that we can easily find things.

My proposal is that one begins one's elog regarding the data to be saved, and submits it immediately after putting in the first ~sentence or so. One should then make a new folder inside the data folder with a title "elog#####_Anything_Else_You_Want". Then, data (which was originally saved in one's own users folder) should be copied into the /data/elog#####_AnythingElse/ folder. Also in that folder should be any Matlab scripts used to create the plots that you post in the elog.  One should then edit the elog to continue making a regular, very thorough elog, including the path to the data.  The elog should include all of the information about the measurement, state of the IFO (or whatever you were measuring), etc.

Riju will be alpha-testing this procedure tonight.  EDIT: nevermind...see previous edit.

11444   Fri Jul 24 18:12:52 2015   Max Isi | Update | General | Data missing

For the past couple of days, the summary pages have shown minute-trend data disappearing at 12:00 UTC (05:00 AM local time). This seems to be the case for all channels that we plot; see e.g. https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20150724/ioo/. Using Dataviewer, Koji has checked that the frames do indeed seem to have disappeared from disk. The data come back at 24:00 UTC (5 pm local). Any ideas why this might be?

11455   Tue Jul 28 17:07:45 2015   Jamie | Update | General | Data missing
Quote:

For the past couple of days, the summary pages have shown minute trend data disappear at 12:00 UTC (05:00 AM local time). This seems to be the case for all channels that we plot, see e.g. https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20150724/ioo/. Using Dataviewer, Koji has checked that indeed the frames seem to have disappeared from disk. The data come back at 24 UTC (5pm local). Any ideas why this might be?

Possible explanations:

  • The data transfers to LDAS had been shut off while we were doing the DAQ debugging. I don't know if they have been turned back on.  Unlikely this is the problem since you would probably see no data at all if this were the case.
  • wiper script parameters might have been changed to store less of the trend data for some reason.
  • Frame size is different and therefore wiper script parameters need to be adjusted.
  • Steve deleted it all.
  • ...
10973   Wed Feb 4 18:16:44 2015   Koji | Update | LSC | Data transfer rate of c1lsc reduced from ~4MB/s to ~3MB/s

c1lsc had 60 full-rate (16kS/s) channels to record. This required the LSC-to-FB connection to handle a 4MB/s (megabyte) data rate.
This was almost at the data rate limit of the CDS, and we had frequent halts of the diagnostic tools (i.e. DTT and/or dataviewer).

Jenne and I reviewed the DAQ channel list and decided to remove some channels.  We also reviewed their recording rates
and reduced the rate of some channels. The c1lsc model was rebuilt, re-installed, and restarted. FB was also restarted. These are running as they were.
The data rate is now reduced to ~3MB/s nominal.


The following is the list of the channels removed from the DQ channels:

AS11_I_ERR
AS11_Q_ERR
AS165_I_ERR
AS165_Q_ERR
POP55_I_ERR
POP55_Q_ERR

The following is the list of the channels with the new recording rate:

TRX_SQRTINV_OUT 2048
TRY_SQRTINV_OUT 2048
DARM_A_ERR 2048
DARM_B_ERR 2048
MICH_A_ERR 2048
MICH_B_ERR 2048
PRCL_A_ERR 2048
PRCL_B_ERR 2048
CARM_A_ERR 2048
CARM_B_ERR 2048
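A back-of-the-envelope check of the quoted data rate, assuming 4 bytes per sample and ignoring frame overhead:

# Hedged sketch: 60 full-rate channels at 16 kS/s, 4 bytes/sample.
full_rate_channels = 60
fs = 16384                 # samples per second
bytes_per_sample = 4
rate = full_rate_channels * fs * bytes_per_sample / 1e6
print(f"~{rate:.1f} MB/s")   # ~3.9 MB/s, consistent with the ~4 MB/s quoted above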

13801   Mon Apr 30 23:13:12 2018   Kevin | Update | Computer Scripts / Programs | DataViewer leapseconds

I was trying to plot trends (min, 10 min, and hour) in DataViewer and got the following error message

Connecting.... done
 mjd = 58235
leapsecs_read()
  Opening leapsecs.dat
  Open of leapsecs.dat failed
leapsecs_read() returning 0
frameMemRead - gpstimest = 1208844718

 

though the plots showed up fine afterwards. Do we need to fix something with the leapsecs.dat file?
