  40m Log, Page 162 of 344
  15838   Wed Feb 24 10:23:03 2021   Yehonathan | Update | SUS | OSEM testing for SOSs

Continuing with the new rig, I measured the resistance of the cable leading to the coil to be 0.08 + (0.52 - 0.08) + (0.48 - 0.08) ≈ 0.9 ohm.
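As a quick check of the arithmetic (the grouping of the three terms is copied from the line above; reading 0.08 ohm as a lead/contact offset included in the two raw readings is my guess, not stated in the entry):

```python
# Reproduce the cable-resistance arithmetic quoted above.
# 0.08 ohm is assumed to be a lead/contact offset that the 0.52 and
# 0.48 ohm readings each include (an assumption, not from the entry).
r_cable = 0.08 + (0.52 - 0.08) + (0.48 - 0.08)
print(round(r_cable, 2))  # 0.92, quoted as ~0.9 ohm in the entry
```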

S/N | Coil Resistance (ohm) | Coil Inductance (mH) | PD Voltage (V) | LED spot image (Attachment #) | LED perfectly centered | Ready for C&B and install | Short/Long | Notes
078 13.0 2.8 1.86 1 N Y L Reengraved
280 13.3 2.8 1.92 2 Y Y L  
117 13.1 2.8 2.12 3 Y Y L Reengraved
140 inf       N N L  
146 12.8 2.8 1.83   Y Y L Reengraved
093 13.1 2.8 2.19   N Y L Reengraved
296 13.1 2.8 2.19   N Y L  
256 13.1 2.8 2.0   N Y L  
060 12.9 2.8 2.0   Y Y L Reengraved
098 13 2.8 1.95   N Y L Reengraved
269 13.2 2.8 1.92   Y Y L  
260 13.2 2.8 2.03   Y Y L  
243 13.1 2.8 1.94   N Y L  
080 12.9 2.8 2.38   Y Y L Reengraved
292 13.3 2.8 2.06   N Y L  
113 13 2.8 2.08   Y Y L Reengraved
251 13.1 2.8 2.04   Y Y L  
231 13.3 2.8 1.89   Y Y L filter not covering the entire PD area
230 13.3 2.8 1.92   Y Y L  
218 13.3 2.8 2.13   Y Y L  
091 13.2 2.8 1.98   Y N L No pigtail. Reengraved
118 13.3 2.8 2.15   Y N L No pigtail. Reengraved
302 13.2 2.8 2.06   Y Y L  
159 13 2.8 2.15   Y N S No pigtail. One cap screw too long. Reengraved.
016 13 2.8 2.54   Y N S No pigtail. Reengraved.
122 13.1 2.8 2.04   N N L No pigtail. Reengraved.
084 13 2.8 1.94   N N L No pigtail. Reengraved.
171 13.1 2.8 2.20   Y N L No pigtail. Reengraved.
052 12.9 2.8 1.75   Y Y S Reengraved.
106 13.1 2.8 1.62   Y Y S Reengraved.
096 13 2.8 2.05   Y Y S Reengraved. The OSEM fell on the floor. I rechecked it. Everything seems fine except the PD voltage has changed. It was previously 1.76
024 13 2.8 1.81   Y Y S Reengraved.
134 12.9 2.8 1.82   N Y S Reengraved.
081 12.9 2.7 1.85   Y Y S Reengraved.
076 12.9 2.8 1.91   N Y S Reengraved.
108 12.9 2.8 1.83   Y Y S Reengraved.
020 12.9 2.8 1.98   N Y S Reengraved.
031 12.9 2.8 1.74   Y Y S Reengraved.
133 13.1 2.8 1.65   Y Y S Reengraved.
007 13 2.8 1.74   Y Y S Reengraved.
088 12.8 2.8 1.77   N Y S Reengraved.
015 12.9 2.7 1.81   Y Y S  
115 13 2.8 1.89   Y Y S Reengraved.
009 12.9 2.8 1.78   Y Y S Reengraved.
099 13.1 2.8 2.00   Y Y S Reengraved.
103 12.9 2.8 1.82   N Y S Reengraved.
143 13.1 2.8 1.80   Y Y S Reengraved.
114 12.8 2.8 2.04   Y Y S  
155 13.1 2.8 1.90   N Y S Reengraved.
121 12.9 2.8 1.86   Y Y S Reengraved.
130 13 2.7 1.78   N Y S Reengraved.
022 13 2.8 1.92   N Y S Reengraved.
150 12.8 2.8 1.90   N Y S Reengraved.
144 12.7 2.7 1.86   N Y S  
040 12.9 2.8 1.70   N Y S Reengraved. way off-center
125 12.8 2.8 1.75   N Y S Reengraved.
097 12.9 2.8 1.81   N Y S Reengraved.
089 12.9 2.8 1.51   Y Y S Reengraved.
095 13 2.8 1.96   Y Y L Reengraved.
054 13.1 2.8 1.86   Y N L Have a long screw going through it. Reengraved.
127 13.1 2.9 1.82   N N L Have a long screw going through it. Reengraved.
135 12.7 2.8 1.75   N N L Have a long screw going through it. Reengraved.
046 13.1 2.8 2.08   Y N L Have a long screw going through it. Reengraved.
000 13.1 3.1 6.6 The LED light looks totally scattered. No clear spot N N S Made out of Teflon? Looks super old. Didn't engrave

Total: 63 OSEMs. Centered working OSEMs: 42. Will upload a more detailed summary to the wiki soon.

Note: The Olympus camera is draining its AA batteries very quickly (they need replacing every ~1.5 days). I'm guessing this is because of the corrosion in the battery housing.

 

  15856   Wed Mar 3 11:51:07 2021   Yehonathan | Update | SUS | OSEM testing for SOSs

I finished testing the OSEMs. I put all the OSEMs back in the box. The OSEMS were divided into several bags. I put the OSEM box next to the south flow bench on the floor.

I have uploaded the OSEM catalog to the wiki. I will upload the LED spot images later.

In summary:

Total 64 OSEMS, 31 long, 33 short.

Perfectly centered LED spots, ready for C&B OSEMS: 30, 12 long, 18 short.

Perfectly centered LED spots, need some work (missing pigtails, weird screws) OSEMS: 7, 5 long, 2 short.

Slightly off-centered (subjective) LED spots, ready for C&B OSEMS: 20, 7 long, 13 short.

Slightly off-centered (subjective) LED spots, need some work (missing pigtails, weird screws) OSEMS: 4 long

Defective OSEMS or LED spot way off-center: 3.
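The per-category counts above can be cross-checked against the stated total (all numbers are copied from this entry):

```python
# Cross-check: the OSEM category counts listed above should sum to the
# stated total of 64 (31 long + 33 short).
categories = {
    "centered, ready for C&B": 30,       # 12 long, 18 short
    "centered, needs work": 7,           # 5 long, 2 short
    "off-centered, ready for C&B": 20,   # 7 long, 13 short
    "off-centered, needs work": 4,       # 4 long
    "defective or way off-center": 3,
}
print(sum(categories.values()))  # 64
```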

  15886   Tue Mar 9 14:30:22 2021   Yehonathan | Update | SUS | OSEM testing for SOSs

I finished ranking the OSEMS on the OSEM wiki page.

I also moved the OSEM data folder from /home/export/home to /users/public_html and created a soft link instead. I have done the same for the 40m_TIS folder that I uploaded there a while ago.

  15888   Tue Mar 9 15:19:03 2021   Koji | Update | SUS | OSEM testing for SOSs

What were the statistics for them? i.e. # of Good OSEMs, # of OK OSEMs, etc.

  15891   Tue Mar 9 18:49:28 2021   Yehonathan | Update | SUS | OSEM testing for SOSs

29 Good OSEMs, of which 1 is questionable (089) with PD voltage of 1.5V, 5 need some work (pigtailing, replace/remove/add screws). We have 4 pigtails. Schematics.

20 OK OSEMs (Slightly off-centered LED spot), of which 3 need some work (pigtailing, replace/remove/add screws).

13 Bad OSEMS (Way off-centered LED spot)

2 Defunct OSEMs

-------

Ed: KA
Good: 23 complete OSEMs + 5 good ones which need soldering work (there are 4 pigtails; take one from a defunct OSEM).
OK: Use 7 good OSEMs for the sides, and keep some functional OSEMs as spares.

 

  760   Tue Jul 29 21:04:55 2008   Sharon | Update | OSEM's Power Spectrum
From 16:30 this afternoon
  11760   Fri Nov 13 15:20:24 2015   Steve | Update | SUS | OSEMs

Are they oscillating or not?

Quote:

The ITMX OSEMS are oscillating.

ETMX, ETMY and MC2 POS biases are off. Why?

The EPICS MEDM screens keep going blank after ~3-5 minutes.

  16119   Tue May 4 19:14:43 2021   Yehonathan | Update | General | OSEMs from KAGRA

I put the box containing the untested OSEMs from KAGRA near the south flow bench on the floor.

  16944   Fri Jun 24 13:29:37 2022   Yehonathan | Update | General | OSEMs from KAGRA

The box was given to Juan Gamez (SURF)

Quote:

I put the box containing the untested OSEMs from KAGRA near the south flow bench on the floor.

 

  3294   Mon Jul 26 20:12:18 2010   kiwamu | Update | SUS | OSEMs on PRM

 [Alberto and Kiwamu]

We installed the OSEMs to the new PRM.

As I wrote down on the elog (see here)  today's mission was to install the OSEMs to the PRM.

After putting them on the tower we adjusted the readout offsets by sliding the OSEMs so that they can stay in the linear sensing ranges. 

Now all of the OSEMs have almost good separation distances from the PRM.

In the attached picture you can see the OSEMs installed on the PRM tower ( middle: PRM tower, left: BS tower)


(what we did)

 1. moved the PRM tower close to the door so that we could easily access the PRM.

 2. leveled the table by putting some weights and confirmed the level by a  bubble level tool.

     - We must level the table every time we set / adjust any OSEMs; otherwise the readout voltages of the OSEMs vary because of the tilted table.

 3. released the PRM by loosening the earthquake stops

 4. put the OSEMs with approximately right separation distances from the PRM.

      -  At this stage we could see the readouts of the OSEMs, which were oscillating freely because we hadn't yet enabled the damping.

        -  The OSEM positions were checked by looking at useful notes on the wiki (see here).

 5. turned on the damping servo of the OSEMs

       - Without changing any gains, it worked well. 

      - Then we could see stable readouts of the OSEMs, which no longer showed any oscillations because of the damping.

 6. checked the level of the table again

 7. set each of the OSEM readouts to half of its maximum value by sliding their positions slightly.

      - The readout offsets were at almost the half value to within +/- 100 mV (the best accuracy we could achieve by hand).
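A minimal sketch of the half-max criterion in step 7 (the function name and the open-light value here are illustrative, not from the entry):

```python
# Check whether an OSEM readout sits at half of its open-light maximum,
# to within the +/- 100 mV accuracy quoted above.
def at_half_max(readout_v, open_light_v, tol_v=0.100):
    return abs(readout_v - open_light_v / 2.0) <= tol_v

print(at_half_max(0.95, 2.0))  # True: 50 mV from half max
print(at_half_max(1.25, 2.0))  # False: 250 mV from half max
```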

 8. screwed down the earthquake stops to lock the PRM.

      - Now the damping is off.

 9. closed the door

 


(to be done)

 *  Putting the PRM tower back to the designed place

 *  Installation of the pick off mirror

 *  Arrangement of the PZT mirror

  3688   Mon Oct 11 10:51:36 2010   steve | Update | SUS | OSEMs, OSEMs, OSEMs...those lovely little OSEMs
  10480   Tue Sep 9 23:05:01 2014   Koji | Update | CDS | OTTAVIA lost network connection

Today the network connection of OTTAVIA was sporadic.

Then in the evening OTTAVIA lost it completely. I tried jiggling the cables to recover it, but in vain.

We wonder if the network card (on-board one) has an issue.

  10483   Wed Sep 10 02:35:55 2014   rana | Update | CDS | OTTAVIA lost network connection

Quote:

Today the network connection of OTTAVIA was sporadic.

Then in the evening OTTAVIA lost it completely. I tried jiggling the cables to recover it, but in vain.

We wonder if the network card (on-board one) has an issue.

 I would also suspect IP conflicts; I had temporarily put the iMac on the Ottavia IP wire a few weeks ago. Hopefully it's not back on there.

  10484   Wed Sep 10 02:52:05 2014   ericq | Update | CDS | OTTAVIA lost network connection

I checked chiara's tables; all seemed fine. I swapped the ethernet cable from the black one labelled "allegra", which seemed a bit fragile, to the teal one that may have been chiara's old ethernet cable. It's back on the network now; hopefully it lasts.

  10109   Fri Jun 27 20:52:30 2014   Koji | Update | CDS | OTTAVIA was not on network

I came into the lab and found a bunch of white EPICS boxes on ottavia.
It turned out that only ottavia had been kicked off the network.

After some struggle, I figured out that ottavia's ethernet cable needs to be unplugged / replugged
for it to reconnect to the network.

For some unknown reason, ottavia was isolated from the martian network and couldn't come back.
This caused the MC autolocker to freeze.

I logged in to megatron from ottavia, and in .../scripts/MC ran

nohup ./AutoLockMC.csh &

Now the MC is happy.

  17027   Fri Jul 22 17:43:19 2022   Koji | Update | General | Obtained a functional CRT

[Koji Paco]

I went to Downs and found a CRT labeled "for 40m Rana?". So I decided to salvage it for the 40m after getting approval from Rich/Todd.

Paco and I tried this unit with the control room CCD signal and it worked just fine. So we can use this as a spare for any purpose in the lab.

  4298   Tue Feb 15 11:43:53 2011   Jenne | Update | Computers | Occasional error with NDS2

Just in case anyone has encountered this / knows how to fix it....

I'm running NDS2 on Rossa, trying to get a bunch of raw data from S5. I get 10 min of data at a time, and it goes through ~200 iterations successfully, and then throws the following error:

Getting new data
Connecting.... authenticate ... done
daq_recv_id: Wrong length read (0)
Error reading writerID in daq_recv_block
Warning: daq_request_data failed
 
??? Error using ==> NDS2_GetData
Fatal Error getting channel data.

Error in ==> getDARMdataTS at 37
oot = NDS2_GetData({...

Error in ==> SaveRawData_H1_DARM at 40
    oot = getDARMdataTS(t0s(ii), strideDuration, srate);
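A generic retry wrapper is one way to paper over this kind of intermittent fetch failure. This is a sketch only, with a stand-in fetch function rather than the actual NDS2/MATLAB call:

```python
import time

# Retry an intermittent data fetch a few times before giving up.
def fetch_with_retry(fetch, retries=3, wait_s=5.0):
    for attempt in range(retries):
        try:
            return fetch()
        except RuntimeError:
            if attempt == retries - 1:
                raise
            time.sleep(wait_s)

# Stand-in for the real fetch: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("daq_recv_id: Wrong length read (0)")
    return "data"

print(fetch_with_retry(flaky_fetch, wait_s=0.0))  # data
```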

  12132   Wed May 25 02:54:09 2016   ericq | Update | General | Odds and ends

WFS locking point seemed degraded; I hand aligned and reset the WFS offsets as usual.

ITMX oplev recentered. While doing so, I noticed an ETMX excursion rear its head for the first time in a long while.

There was no active length control on ETMX, only OSEM damping + oplevs. Afterwards, it's still moving around with only local damping on. I'm leaving the oplevs off for now.

  743   Sun Jul 27 20:25:49 2008   rana | Configuration | Environment | Office Temperature increased to 75 F
Since we have the chiller for the PSL now, I've just increased the office area temperature set point by 2 F to 75 F to see if the laser will still behave.
  3571   Tue Sep 14 00:21:51 2010   rana | Omnistructure | Environment | Office area temperature change

I changed the setpoint for the HVAC control (next to Steve) from 73F to 72F. This is to handle the temperature increase in the control room with the AC unit there turned off.

We know that the control setpoint is not linear, but I hope that it settles down after several hours. Let's wait until Tuesday evening before making another change.

  6078   Wed Dec 7 00:11:58 2011   Den | Update | Adaptive Filtering | OfflineAF

 I did offline adaptive filtering with yesterday's 3 hours of MC-F and GUR1X data. It turns out that normalized LMS can strongly outperform static Wiener filtering!

[Attachment: offlineaf_psd.png]

This is interesting. There might be something inside MC_F that the static Wiener filter does not see. I think the problem is either seismometer noise or tilt.

  11846   Fri Dec 4 10:18:33 2015   yutaro | Update | ASS | Offset in the dither loop of XARM vs beam spot shift on ETMX

As I did for YARM (elog 11779), I measured the relation between offsets added just after the demodulation in the XARM dithering loop and the beam spot shift on ETMX. Different from YARM, the beam spot on ITMX DOES change because only BS is used as a steering mirror (TT1&2 are used for the dithering of YARM). Instead, the beam spot on BS DOES NOT change.

This time, I measured by oplevs the angles of both ETMX and ITMX for each value of offset, and using these angles I calculated the shift of the beam spot on ETMX so that I got two independent estimations (one from ETMX oplev, and the other from ITMX oplev) as shown below. The calibration of the oplevs reported in elog 11831 is taken into account. 

The difference between the two estimations comes from the calibration error of the oplevs and/or imperfect alignment, I think.

  5747   Thu Oct 27 18:00:38 2011   kiwamu | Summary | LSC | Offsets in LSC signals due to the RFAMs : Optickle simulation

The offsets in the LSC signals due to the RFAMs have been estimated with an Optickle simulation.

The next step is to think about what kind of effects we get from the RFAMs and estimate how much they will degrade the performance.

(Motivation)

  We have been having relatively big RFAM sidebands (#5616), which generally introduce unwanted offsets in any of the LSC demodulated signals.
We wanted to estimate how much offset we have been getting due to the RFAMs.
The ultimate goal is to answer the question: 'How big an RFAM can we allow for operation of the interferometer?'
Depending on the answer we may need to actively control the RFAMs as already planned (#5686).
Since the response of the interferometer is too complicated for analytic work, a numerical simulation is used.
 

(Results : Offsets in LSC error signals)

[Attachments: PRCL_200.png, MICH_200.png, SRCL_200.png]

  Figure: Offsets in units of meters for all the LSC demodulated signals. The Y-axis is the amount of offset and the X-axis represents each signal port.
In each signal port, the signals are classified by color.
(1) Offsets in the PRCL signal. (2) Offsets in the MICH signal. (3) Offsets in the SRCL signal.
 
 
Roughly, the signals showed offsets at the 0.1 nm level.
The numerical error was found to be about 10^-10 nm by running the same simulation without the AM sidebands.
Here is a summary of the amount of the offsets:
 
         1f signal port        3f signal port        biggest offset (port)
PRCL     0.3 nm (REFL11)       0.2 nm (REFL33)       1 nm (REFL55)
MICH     0.00009 nm (AS55)     0.8 nm (REFL33)       7 nm (POP11)
SRCL     0.1 nm (REFL55)       0.1 nm (REFL165)      40 nm (POX11)
In the SRCL simulation, REFL11I, REFL11Q, POP11I, POP11Q and POX11I didn't show any zero crossing point within a 100 nm range around the resonance.
This is because the SRCL doesn't do anything to the 11 MHz sidebands, so it is the right behavior.
However, POX11 was somewhat sensitive to the SRCL motion and showed a funny signal with a big offset.
 

(Simulation setup)

I applied the current PM/AM ratio according to the measurements (#5616, #5519).
The modulation indices used in the simulation are :
    + PM index in 11MHz = 0.17
    + PM index in 55MHz = 0.14
    + AM index in 11MHz = 0.17 / 200 = 8.5x10^-4
    + AM index in 55MHz = 0.14 / 200 = 7.0x10^-4
Note that the phases of the AM and PM sidebands are the same.
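The AM indices in the list above follow directly from the PM indices and the measured PM/AM ratio of ~200:

```python
# AM modulation indices from the PM indices and the PM/AM ratio of 200.
pm_ratio = 200.0
pm_11, pm_55 = 0.17, 0.14          # PM indices at 11 MHz and 55 MHz

am_11 = pm_11 / pm_ratio           # ~8.5e-4
am_55 = pm_55 / pm_ratio           # ~7.0e-4
print(am_11, am_55)
```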

For clarity, I also note the definition of PM/AM ratio as well as how the first order upper sideband looks like.

[Attachments: ratio.png (PM/AM ratio definition), upper.png (first-order upper sideband)]
 

The optical parameters are all at their ideal lengths, although we may want to check the results with more realistic parameters:
    + No arm cavities
    + PRCL length = 6.75380 m
    + SRCL length = 5.39915 m
    + Schnupp asymmetry = 3.42 cm
    + loss in each optic = 50 ppm
    + PRCL = resonant for 11 and 55MHz
    + MICH = dark fringe
    + SRCL = resonant for 55 MHz
The matlab script will be uploaded to the cvs server.

Quote from #5686
  8. In parallel to those actions, figure out how much offsets each LSC error signal will have due to the current amount of the RFAMs.
    => Optickle simulations.

  10815   Thu Dec 18 15:41:30 2014   ericq | Update | Computer Scripts / Programs | Offsite backups of /cvs/cds going again

Since the Nodus switch, the offsite backup scripts (scripts/backup/rsync.backup) had not been running successfully. I tracked it down to the weird NFS file ownership issues we've been seeing since making Chiara the fileserver. Since the backup script uses rsync's "archive" mode, which preserves ownership, permissions, modification dates, etc, not seeing the proper ownership made everything wacky. 

Despite 99% of the searches you do about this problem saying you just need to match your user's uid and gid on the NFS client and server, it turns out NFSv4 doesn't use this mechanism at all, opting instead for an ID-mapping service (idmapd), which I have no inclination to figure out at this time. 

Thus, I've configured /etc/fstab on Nodus (and the control room machines) to use NFSv3 when mounting /cvs/cds. Now, all the file ownerships show up correctly, and the offsite backup of /cvs/cds is churning along happily. 
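For reference, a client-side fstab entry of the rough shape used to pin a mount to NFSv3 (the server name and export path here are placeholders, not the actual chiara export):

```
# /etc/fstab entry: force NFSv3 so file ownership maps by uid/gid
fileserver:/export/cds   /cvs/cds   nfs   vers=3,rw,bg   0   0
```

The key option is vers=3; the others are typical and should be matched to the existing entry.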

  10519   Thu Sep 18 17:44:55 2014   Jenne | Update | LSC | Old AO cable pulled

[Q, Jenne]

We pulled the old 2-pin lemo cable after I had a look at the connectors.  When I unscrewed the connector on the MC side, one of the wires came off.  I suspect that it was still hanging on a bit, but my torquing it finally killed it. 

We pulled the cable with the idea of resoldering the connectors, but there are at least 2 places where the cable has been squished enough that the shielding or the inner wires are exposed. These places aren't near enough to the ends to just cut the cable short.

Downs doesn't have a spool of shielded twisted single-pair cable, so Todd is going to get me the part number for the cable they use, and I've asked Steve to order it tomorrow. 

For now, we will continue using the BNC cable that we installed last night - I don't think it's worth resoldering and putting in a crappy 2-pin lemo cable that we'll just throw out in a week.

  7851   Tue Dec 18 15:51:33 2012   Jenne | Update | IOO | Old G&H TT mirrors' phase maps measured

I took the 2 G&H mirrors that we de-installed from PR3 and SR3 over to GariLynn to measure their phase maps. Data is in the same place as before, http://www.ligo.caltech.edu/~coreopt/40MCOC/Oct24-2012/ .  Optic "A" is SN 0864, and optic "B" is SN 0884, however I'm not sure which one came from which tip tilt.  It's hard to tell from what photos we have on picasa.

Both are astigmatic, although not lined up with the axes defined by where the arrow marks the HR side.  Both have RoCs of -600 or -700m. RMS of ~10nm.

  3826   Fri Oct 29 16:39:01 2010   Jenne | Update | Treasure | Old Green suspension towers disassembled

[Jenne, Joonho]

At Koji's request, we disassembled 2 of the old Green suspension towers that have been sitting along the X-arm forever (read that last word in a 'Sandlot' voice.  Then you'll know how long the suspensions have been sitting there).

They are now hanging out in plastic trays, covered with foil.  They will now be much easier to store.

We should remember that we have these, particularly because the tables at the top are really nice, and have lots of degrees of freedom of fine adjustment.

 

Steve:

Atm1, there is one more of these old suspension towers

Atm2, disassembled

  1455   Mon Apr 6 19:09:15 2009   Jenne | Update | PEM | Old Guralp is hooked back up to the ADC

Old Guralp is hooked back up, the new one is sitting next to it, disconnected for now.

  3736   Mon Oct 18 17:16:30 2010   Jenne | Update | SUS | Old PRM, SRM stored, new PRM drag wiped

[Jenne, Suresh]

We've put the old PRM and SRM (which were living in a foil house on the cleanroom optical table) into Steve's nifty storage containers.  Also, we removed the SRM which was suspended, and stored it in a nifty container.  All 3 of these optics are currently sitting on one of the cleanroom optical tables.  This is fine for temporary storage, but we will need to find another place for them to live permanently.  The etched names of the 3 optics are facing out, so that you can read them without picking them up.  I forgot to note the serial numbers of the optics we've got stored, but the old optics are labeled XRM ###, whereas the new optics are labeled XRMU ###. 

Koji chose for us PRMU 002, out of the set which we recently received from ATF, to be the new PRM.  Suresh and I drag wiped both sides with Acetone and Iso, and it is currently sitting on one of the rings, in the foil house on the cleanroom optical table.

We are now ready to begin the guiderod gluing process (later tonight or tomorrow).

  3737   Mon Oct 18 18:00:36 2010   Koji | Update | SUS | Old PRM, SRM stored, new PRM drag wiped

- Steve is working on the storage shelf for those optics.

- PRMU002 was chosen as it has the best RoC among the three.

Quote:

[Jenne, Suresh]

We've put the old PRM and SRM (which were living in a foil house on the cleanroom optical table) into Steve's nifty storage containers.  Also, we removed the SRM which was suspended, and stored it in a nifty container.  All 3 of these optics are currently sitting on one of the cleanroom optical tables.  This is fine for temporary storage, but we will need to find another place for them to live permanently.  The etched names of the 3 optics are facing out, so that you can read them without picking them up.  I forgot to note the serial numbers of the optics we've got stored, but the old optics are labeled XRM ###, whereas the new optics are labeled XRMU ###. 

Koji chose for us PRMU 002, out of the set which we recently received from ATF, to be the new PRM.  Suresh and I drag wiped both sides with Acetone and Iso, and it is currently sitting on one of the rings, in the foil house on the cleanroom optical table.

We are now ready to begin the guiderod gluing process (later tonight or tomorrow).

 

  11419   Thu Jul 16 03:01:57 2015   ericq | Update | LSC | Old beatbox hooked back up

I was having issues trying to get reasonable noise performance out of the aLIGO demod board as an ALS DFD. Terminating the inputs to the LSC whitening inputs did not show much 60Hz noise, and an RMS in the single Hz range. 

A 60Hz line of hundreds of uV was visible in the power spectrum of the single ended BNC and double-ended DB25 outputs of the board no matter how I drove or terminated.

So, I tried out hooking up the ALS beatbox. It turns out to work better for the time being; not only is the 60Hz line in the analog outputs about ten times smaller, the broadband noise floor in the resultant beat spectrum when driven by a 55MHz LO on the LSC rack is a fair bit lower too. I wonder if this is due to not driving the aLIGO board LO at the +10dBm it expects. With the amplifiers and beat note amplitudes we have, we'd only be able to supply around 0 dBm anyways. 

Here's a comparison of the aLIGO board (black) and ALS beatbox (dark green) driven with the 55MHz LO, both going through the LSC whitening filters for a resultant magnitude of 3kCounts in the I-Q plane. The RMS sensing noise is about 30 times lower for the beatbox. (Note, this is with the old delay cables. When we switch to the 50m cables, we'll win further frequency noise sensitivity through the better degrees->Hz calibration.) I'm very interested to see what the green beat spectrum looks like with this setup. 
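The degrees-to-Hz point can be made concrete. For a delay-line discriminator the beat phase is phi = 360 deg * f * tau, so the calibration is 1/(360 * tau) Hz per degree, and a longer cable (larger tau) gives more phase per Hz, i.e. finer frequency resolution. A rough sketch, assuming a typical coax velocity factor of 0.66 (not a measured value):

```python
# Delay-line frequency discriminator calibration: Hz per degree of
# measured phase, for a given delay-cable length.
C = 299_792_458.0      # speed of light, m/s
VEL_FACTOR = 0.66      # assumed coax velocity factor (not measured)

def hz_per_degree(cable_length_m):
    tau = cable_length_m / (C * VEL_FACTOR)   # cable delay, s
    return 1.0 / (360.0 * tau)

# Longer cable -> smaller Hz/deg -> better frequency noise sensitivity.
print(round(hz_per_degree(50.0)))  # ~11 kHz/deg for 50 m cables
```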

Not only is the 60Hz line smaller, there is simply less junk in the beatbox signal. I did not expect this to be the case. 

There were some indications of funky status on the aLIGO board: channels 3 and 4 are totally nonfunctioning, so who knows what's going on in there. I've pulled it out, to take a gander at whether I can figure out how to make it suitable for our purposes. 

  13239   Tue Aug 22 15:17:19 2017   ericq | Update | Computers | Old frames accessible again

It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit.

I hooked it up to megatron, and it was automatically recognized and mounted.

I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied over the archived DRFPMI frame files to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we can have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID.

There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.

  13240   Tue Aug 22 15:40:06 2017   gautam | Update | Computers | Old frames accessible again

[jamie, gautam]

I had some trouble getting the daqd processes up and running again using Jamie's instructions.

With Jamie's help however, they are back up and running now. The problem was that the mx infrastructure didn't come back up on its own. So prior to running sudo systemctl restart daqd_*, Jamie ran sudo systemctl start mx. This seems to have done the trick.

c1iscey was still showing red fields on the CDS overview screen so Jamie did a soft reboot. The machine came back up cleanly, so I restarted all the models. But the indicator lights were still red. Apparently the mx processes weren't running on c1iscey. The way to fix this is to run sudo systemctl start mx_stream. Now everything is green.

Now we are going to work on trying the fix Rolf suggested on c1iscex.

Quote:

It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit.

I hooked it up to megatron, and it was automatically recognized and mounted.

I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied over the archived DRFPMI frame files to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we can have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID.

There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.

 

  3939   Wed Nov 17 15:49:53 2010   rana | Update | DAQ | Ole Channel Names

The following channels should be named as below, to keep in line with their pre-upgrade names rather than using _DAQ in the name.

Pre-upgrade style name                      Current name
C1:SUS-{OPT}_{POS,PIT,YAW}                  SUS{POS,PIT,YAW}_IN1
C1:SUS-{OPT}_OPLEV_{P,Y}ERROR               OL{PIT,YAW}_IN1
C1:SUS-{OPT}_SENSOR_{UL,UR,LL,LR,SIDE}      {UL,UR,LL,LR,SD}SEN_OUT
C1:SUS-{OPT}_OPLEV_{P,Y}OUT                 OL{PIT,YAW}_OUT
C1:IOO-MC_TRANSPD                           MC2_OLSUM_IN1

 

  15940   Thu Mar 18 13:12:39 2021   gautam | Update | Computer Scripts / Programs | Omnigraffle vs draw.io

What is the advantage of Omnigraffle compared to draw.io? The latter also has a desktop app and, for creating drawings, seems to have all the functionality that Omnigraffle has; see for example here. draw.io doesn't require a license, and I feel it is a much better tool for collaborative artwork. I really hate that I can't even open my old Omnigraffle diagrams now that I no longer have a license.

Just curious if there's some major drawback(s), not like I'm making any money off draw.io.

Quote:

After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle.

  8579   Wed May 15 15:33:49 2013   Steve | Update | General | On-Track QPD

I tested the On-Track OT 301 amp (from LLO) with a PSM2-10 QPD. It was responding; Jenne will calibrate it. The 12V DC power input is unipolar.

The one AC-to-DC adapter that Jenne tried was broken.

  14174   Tue Aug 21 17:32:51 2018   awade | Bureaucracy | Equipment loan | One P-810.10 Piezo Actuator element removed

I've taken a PI Piezo Actuator (P-810.10) from the 40m collection. I forgot to note it on the equipment checklist by the door, will do so when I next drop by.

  3201   Mon Jul 12 22:01:13 2010   Koji | Update | SUS | One TT suspended. Still need fine alignment

Jenne and Koji

We tweaked the alignment of the TT mirror.

First we put on a G&H mirror, but the mirror was misaligned and touching the ECD because the magnet was too heavy. We tried moving the wires towards the magnet by 1 mm.
It was not enough, but once we moved the clamps towards the magnet, we got the range to adjust the pitch back and forth.
We tried to align it with a feather touch on the clamp, but we could not get close to a precision of 10 mrad because the final tightening of the clamp screws changed the alignment.

We will try to adjust the fine alignment tomorrow again.

The damping in pitch, yaw and longitudinal looks quite good. We will also try to characterize the damping of the suspension using a simple oplev setup.

  3786   Tue Oct 26 15:57:10 2010   Jenne | Update | SUS | One magnet broken, reglued

[Jenne, Suresh, Thanh (Bram's Grad Student)]

When we removed the grippers from the magnets on the PRM, one of the face magnets broke off.  This time, the dumbbell remained glued to the optic, while the magnet came off.  (Usually the magnet and dumbbell will stay attached, and both come off together).  I had 3 spare magnet-dumbbells, but only one of them was the correct polarization.  The strength of the spare magnet was ~128 Gauss, while the other magnets glued to the PRM are all ~180 Gauss.  We considered this too large a discrepancy, and so elected to reuse the same magnet as before. 

We removed the dumbbell from the optic using acetone.  After the epoxy was gently removed, we drag wiped the AR face of the optic (Acetone followed by Iso, as usual), being careful to keep all the solvent away from all the other glue joints.  We cleaned off the magnet with acetone (it didn't really have any glue stuck on it...most of the glue was stuck on the dumbbell), and epoxied it to a new dumbbell. 

The PRM, as well as the magnet-dumbbell gluing fixture are in the little foil house, waiting for tomorrow's activities.  Tomorrow we will re-glue this magnet to the optic, and Thursday we will balance the optic.  

This still leaves us right on schedule for giving the PRM to Bob on Friday at lunchtime, so it can bake over the weekend.

  4348   Thu Feb 24 10:56:04 2011   Jenne | Update | Wiener Filtering | One month of H1 S5 data is now on Rossa

Just in case anyone else wants to access it, we now have 30 days of H1 S5 DARM data sitting on Rossa's harddrive.  It's in 10min segments.  This is handy because if you want to try anything, particularly Wiener Filtering, now we don't have to wait around for the data to be fetched from elsewhere.

  3516   Thu Sep 2 17:43:30 2010 josephbUpdateCDSOne working BO output module, others not so much

 Joe and Kiwamu:

We found one bug in the RCG code, where the second input for the CDO32 part (32 binary output) was simply a repeat of the first input, and totally ignored the second input.  This was fixed in the /advLigoRTS/src/epics/util/lib/CDO32.pm file by changing 

$calcExp .= $::fromExp[0];

to

$calcExp .= $::fromExp[1];

This fix has been added to the svn.  Unfortunately, while we now have a single working binary output module, the 2nd and later modules do not seem to be responding at all.  We've done the usual swapping of parts of the path in both software and hardware and can't find any bad pieces in our model files or the actual hardware.  That leaves me wondering about the C code, specifically whether the CDO32Output[1], CDO32Output[2], and so forth array entries in the code are being handled properly.  I'll try to get some thoughts on it from Alex tomorrow.

  16682   Sat Feb 26 01:01:40 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

I will post a detailed elog later today outlining the connection from the Agilent gauge controller to the vacuum subnet and the work I have been doing over the past two days to get data from the unit into EPICS channels. For now, I just want to mention that I have plugged the XGS-600 gauge controller into the serial server on the vacuum subnet. I checked the vacuum MEDM screen and can confirm that the other sensors did not experience any issues as a result of this. I also currently have two of the FRG-700 gauges connected to the controller, but I have powered the unit down after the checks.

  16683   Sat Feb 26 15:45:14 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

I have attached a flow diagram of my understanding of how the gauges are connected to the network.

Earlier today, I connected the XGS-600 gauge controller to the IOLAN Serial Device Server on port 192.168.114.22 .

The plan is as follows:

1. Update the serial device yaml file to include this new ip entry for the XGS-600 gauge controller

2. Create a serial gauge class "serial_gauge_xgs.py" for the XGS-600 gauge controller that inherits from the serial gauge parent class for EPICS communication with a serial device via TCP sockets.

  • It might be better to initially use the current channels of the devices that are being replaced, i.e.
  • C1:Vac-FRG1_pressure ↔ C1:Vac-CC1_pressure
    C1:Vac-FRG2_pressure ↔ C1:Vac-CCMC_pressure
    C1:Vac-FRG3_pressure ↔ C1:Vac-PTP1_pressure
    C1:Vac-FRG4_pressure ↔ C1:Vac-CC4_pressure
    C1:Vac-FRG5_pressure ↔ C1:Vac-IG1_pressure

3. Modify the launcher file to include the XGS gauge controller. Following the same pattern used  to start the service for the other serial gauges, we can start the communication between the XGS-600 gauge controller and the IOLAN serial server and write data to EPICS channels using

controls@c1vac> python launcher.py XGS600

If we are able to establish communication with the XGS-600 gauge controller and write its gauge data to EPICS channels, go on to step 4.

4. Create a serial service file "serial_XGS600.service" and place it in the service folder

5. Add the new EPICS channels to the database file

6. Add the "serial_XGS600.service" to lines 10 and 11 of modbusIOC.service

7. Later on, when we are ready, we can restart the updated modbusIOC service
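As a rough illustration of step 2, a serial gauge class talking to the controller through a TCP socket on the IOLAN might look like the sketch below. The class and method names, the TCP port number, and the "#0002" read-all-pressures command string are assumptions for illustration, not the actual 40m vac code:

```python
import socket

class SerialGaugeXGS600:
    """Hypothetical sketch: talk to an XGS-600 controller through the
    IOLAN serial device server via a raw TCP socket."""

    def __init__(self, ip="192.168.114.22", port=4001, timeout=2.0):
        # The port number is an assumption; the IOLAN maps each serial
        # line to its own TCP port.
        self.addr = (ip, port)
        self.timeout = timeout

    def query(self, command):
        # Open a TCP connection, send the command, read one reply.
        with socket.create_connection(self.addr, timeout=self.timeout) as s:
            s.sendall(command.encode("ascii"))
            return s.recv(1024).decode("ascii").strip()

    def read_pressures(self):
        # "#0002" as a read-all-pressures command and ">" as the reply
        # prefix are assumptions about the XGS-600 protocol framing.
        reply = self.query("#0002")
        # Parse a comma-separated list of pressures in Torr.
        return [float(x) for x in reply.lstrip(">").split(",") if x]
```

In the real code this would inherit from the existing serial gauge parent class mentioned in step 2 rather than carry its own socket handling.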

 

For vacuum signal flow and Acromag channel assignments see [1]  and [2] respectively. For the 16 port IOLAN SDS (Serial Device Server) ethernet connections, see [3]. 

[1] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=40m_Vacuum_System_Signal_Flow.pdf

[2] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=AcromagChannelAssignment.pdf

[3] https://git.ligo.org/40m/vac/-/blob/master/python/serial/serial_devices.yaml

  16688   Mon Feb 28 19:15:10 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

I decided to create an independent service for the XGS data readout so we can get this to work first before trying to integrate it into the current system. After starting the service, I noticed that the EPICS channels were not updating as expected. I started to debug the problem and managed to track it down to an IP socket connect() error, i.e. we get a connection error for the IP address assigned to the LAN port to which the XGS box was connected. After trying a few things and searching the internet, I think the error indicates that this particular LAN port is not yet configured. I reached this conclusion after noting that only a select number of LAN ports connected without issues, and these are the ports that already had devices connected. So it must be the case that those LAN ports were configured at some point. The next step is to look at the IOLAN manual to figure out how to configure the IP port for the XGS controller. Fingers crossed.
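To tell configured LAN ports from unconfigured ones, a quick TCP connect() probe of the candidate ports reproduces the error directly. This is a generic sketch; the IOLAN TCP port numbering in the commented example is an assumption:

```python
import socket

def port_open(ip, port, timeout=1.0):
    """Return True if a TCP connect() to (ip, port) succeeds.
    An unconfigured serial line on the device server typically
    refuses the connection or times out."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical port range for the 16-port IOLAN SDS):
# for port in range(4001, 4017):
#     print(port, port_open("192.168.114.22", port))
```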

  16691   Tue Mar 1 20:38:49 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

During my investigation, I inadvertently overwrote the serial port configuration for the connected devices, so I am now working to get it all back. I have attached screenshots of the config settings that restored un-garbled communication. There is no physical connection to port 6, which I guess was initially used for the UPS serial communication but is no longer. Also, ports 9 and 10 are connected to Hornet and SuperBee, both of which have not been communicating for a while and are due to be replaced, so there is no way to confirm communication with them. Otherwise, the remaining devices seem to be communicating as before.

I still could not establish communication with the XGS-600 controller using the serial port settings given in the manual, which also happen to work via a Serial-to-USB adapter, so I will revisit the problem later. My immediate plan is to chain a Serial-to-Ethernet, then Ethernet-to-Serial, and then Serial-to-USB connection to see if the USB code still works. If it does, then at least I know the problem is not coming from the Serial-to-Ethernet adapters. Then I guess I will replace the controller with my laptop and see what signal comes through when I send a message to the controller via the IOLAN serial device server. Hopefully, I can discover what's wrong by that point.

 

Note to self: Before doing anything, do a sanity check by comparing the settings on the IOLAN SDS and the config settings that worked for the Serial to USB communication and post an elog for this for reference.

  16692   Wed Mar 2 11:50:39 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

Here is the IOLAN SDS TCP socket setting and the USBserial setting for comparison.

I have also included the python script and output from the USBserial test from earlier.
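For this kind of comparison it can help to transcribe both sets of settings into small dicts and diff them mechanically, so no field gets overlooked. The values below are placeholder assumptions (typical 9600-8-N-1 defaults), not read from the actual 40m configuration:

```python
# Hypothetical reference settings for the XGS-600 serial link
# (assumed typical defaults, not the actual 40m values).
usb_serial_settings = {
    "baudrate": 9600,
    "bytesize": 8,
    "parity": "N",
    "stopbits": 1,
    "flow_control": None,
}

def diff_settings(a, b):
    """Return the keys on which two settings dicts disagree,
    mapped to the pair of conflicting values."""
    return {k: (a.get(k), b.get(k)) for k in set(a) | set(b)
            if a.get(k) != b.get(k)}

# Compare against settings transcribed from the IOLAN SDS screen;
# an empty diff means the two profiles match field by field.
iolan_settings = dict(usb_serial_settings)  # placeholder copy
assert diff_settings(usb_serial_settings, iolan_settings) == {}
```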

  16693   Wed Mar 2 12:40:08 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

Connector Test:

A quick test to rule out any issue with the Ethernet to Serial adapter was done using the setup shown in Attachment 1. The results rule out any connector problem.

 

IOLAN COMM test (as per Koji's suggestion):

The next step is to swap the controller with a laptop set up to receive serial commands using the same settings as the XGS600 controller: basically, run a slightly modified version of the python script in which we go into listening mode, then send commands to the TCP socket on the IOLAN SDS unit from c1vac and check what data makes its way to the laptop's USB-serial terminal. After working on this for a bit, I realized that we do not need to do anything special on the c1vac machine; we only need to start the service as it would normally run. So I wrote a small python script for a basic XGS-600 controller emulator, see Attachment 4. The outputs from the laptop and c1vac terminals are in Attachments 5 and 6 respectively. 

These results show that we can communicate via the assigned IP address "192.168.114.22" and that the commands sent from c1vac reach the laptop in the correct format. Furthermore, the serial_XGS service, part of the modbusIOC_XGS service, which usually exits with an error, seems fine now after successfully communicating with the laptop. I don't know why it did not die after the tests. I also found a bug in my code as a result of the test: the status field for the fourth gauge didn't get written to. 
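The actual emulator is in Attachment 4; as a rough sketch of the idea, the reply logic can be kept as a pure function and wired to the serial port separately. The "#0002" command string and the ">" reply prefix here are assumptions about the protocol framing, not taken from the manual:

```python
def xgs600_emulator_reply(command, pressures=(760.0, 760.0, 760.0, 760.0)):
    """Given a raw command string received from c1vac, return the reply a
    real XGS-600 might send. The '#0002' read-all-pressures command and
    the '>' reply prefix are assumed framing, for illustration only."""
    if command.strip() == "#0002":
        # Reply with a comma-separated list of pressures in Torr.
        return ">" + ",".join("%.2E" % p for p in pressures)
    return "?"  # unrecognized command

# On the laptop this would be wired to the USB-serial port, e.g. with
# pyserial:  reply = xgs600_emulator_reply(ser.readline().decode())
```

Keeping the reply logic free of any I/O makes it easy to check that each command from c1vac produces a well-formed response before involving the serial hardware.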

 

Pressure reading issue:

I noticed that the pressure reading was not giving the atmospheric value of ~760 Torr as expected. Looking through my previous readouts, it seems the unit showed the atm value of ~761 Torr when the first gauge was attached. However, a closer look revealed a transient behavior: when the unit is turned on, the reading dips to the atm value but eventually rises to 1000 Torr. I don't think this is a calibration problem because 1000 Torr is the maximum value of the gauge range. I also found that when the XGS controller has been running for a while, a power cycle does not show this transient behavior. So maybe a faulty capacitor somewhere? I have attached a short video clip that shows what happens when the XGS-controller unit is turned on.

  16704   Sun Mar 6 18:14:45 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

Following repeated failures to establish communication between c1vac and the XGS600 controller via the Perle IOLAN serial device server, I decided to monitor the signal voltages on the communication channels (pin #2, pin #3 and pin #5) using an oscilloscope. The result of this investigation is presented in the attached pdf document. In summary, it seems I have used a cross-wired RS232 serial cable instead of a normal RS232 serial cable, so the c1vac read request command is being relayed on the wrong comm channel (pin #2 instead of pin #3). I will swap out the cable to see if this resolves the problem.  

  16706   Mon Mar 7 13:53:40 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

So it appears that my deduction from the pictures, that a cable swap was needed, was correct; however, it turns out that the installed cable was actually a normal (straight-through) RS232 cable, and what we need instead is an RS232 null-modem cable. After the swap, communication between c1vac and the XGS600 controller became active. Although the data makes it all the way to c1vac without any issues, the scope view shows that it mainly utilizes the upper half of the voltage range, which is just over 50% of the available range. I don't know what to make of this.

 

I guess the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torr. 

 

Quote:

Following repeated failures to establish communication between c1vac and the XGS600 controller via the Perle IOLAN serial device server, I decided to monitor the signal voltages on the communication channels (pin #2, pin #3 and pin #5) using an oscilloscope. The result of this investigation is presented in the attached pdf document. In summary, it seems I have used a cross-wired RS232 serial cable instead of a normal RS232 serial cable, so the c1vac read request command is being relayed on the wrong comm channel (pin #2 instead of pin #3). I will swap out the cable to see if this resolves the problem.  

 

  16707   Mon Mar 7 14:52:34 2022 KojiUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

Great trouble shoot!

> I guess the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torr. 

This is just a calibration issue. The controller should have the calibration function.
(The other Pirani showing 850 Torr was also a calibration issue, although I didn't bother to correct it. I think Pirani gauges typically have a large distribution of calibration values and require individual calibration.)
