ID   Date   Author   Type   Category   Subject
  12132   Wed May 25 02:54:09 2016   ericq   Update   General   Odds and ends

WFS locking point seemed degraded; I hand aligned and reset the WFS offsets as usual.

ITMX oplev recentered. While doing so, I noticed an ETMX excursion rearing its head for the first time in a long while.

There was no active length control on ETMX, only OSEM damping + oplevs. Afterwards, it's still moving around with only local damping on. I'm leaving the oplevs off for now.

  743   Sun Jul 27 20:25:49 2008   rana   Configuration   Environment   Office Temperature increased to 75 F
Since we have the chiller for the PSL now, I've just increased the office area temperature set point by 2 F, to 75 F, to see if the laser will still behave.
  3571   Tue Sep 14 00:21:51 2010   rana   Omnistructure   Environment   Office area temperature change

I changed the setpoint for the HVAC control (next to Steve) from 73F to 72F. This is to handle the temperature increase in the control room with the AC unit there turned off.

We know that the control setpoint is not linear, but I hope that it settles down after several hours. Let's wait until Tuesday evening before making another change.

  6078   Wed Dec 7 00:11:58 2011   Den   Update   Adaptive Filtering   OfflineAF

 I did offline adaptive filtering with yesterday's 3 hours of MC-F and GUR1X data. It turns out that normalized LMS can strongly outperform static Wiener filtering!

offlineaf_psd.png 

This is interesting. There might be something inside MC_F that the static Wiener filter does not see. I think the problem is either seismometer noise or tilt.
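For context, here is a minimal offline sketch of the normalized-LMS update being compared against the Wiener filter (the FIR length, mu, and data handling are placeholders, not the actual analysis code):

import numpy as np

def nlms(witness, target, ntaps=256, mu=0.5, eps=1e-6):
    """Offline normalized LMS: predict the target (e.g. MC_F) from a
    witness channel (e.g. GUR1X) and return the residual."""
    w = np.zeros(ntaps)
    resid = np.zeros(len(target))
    for n in range(ntaps, len(target)):
        x = witness[n - ntaps:n][::-1]      # most recent sample first
        e = target[n] - w @ x               # prediction error
        w += mu * e * x / (x @ x + eps)     # normalized gradient step
        resid[n] = e
    return resid, w

Unlike the static Wiener filter, the coefficients keep adapting through the data, which is what lets NLMS track a non-stationary coupling.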

Attachment 2: offlineaf_coh.png
  11846   Fri Dec 4 10:18:33 2015   yutaro   Update   ASS   Offset in the dither loop of XARM vs beam spot shift on ETMX

As I did for YARM (elog 11779), I measured the relation between offsets added just after the demodulation in the XARM dither loop and the beam spot shift on ETMX. Different from YARM, the beam spot on ITMX DOES change, because only BS is used as a steering mirror (TT1 & TT2 are used for the dithering of YARM). Instead, the beam spot on BS DOES NOT change.

This time, I measured with the oplevs the angles of both ETMX and ITMX for each value of the offset, and using these angles I calculated the shift of the beam spot on ETMX, so that I got two independent estimates (one from the ETMX oplev, and the other from the ITMX oplev), as shown below. The calibration of the oplevs reported in elog 11831 is taken into account.

The difference between the two estimates comes from oplev calibration error and/or imperfect alignment, I think.

Attachment 1: offset-angleETMXPIT.png
Attachment 2: offset-angleETMXYAW.png
  5747   Thu Oct 27 18:00:38 2011   kiwamu   Summary   LSC   Offsets in LSC signals due to the RFAMs : Optickle simulation

The amount of offset in the LSC signals due to the RFAMs has been estimated with an Optickle simulation.

The next step is to think about what kind of effects we get from the RFAMs and estimate how much they will degrade the performance.

(Motivation)

  We have been having relatively big RFAM sidebands (#5616), which generally introduce unwanted offsets in all of the LSC demodulated signals.
The motivation was to estimate how much offset we've been getting due to the RFAMs.
The ultimate goal is to answer the question: 'How big an RFAM can we allow for operation of the interferometer?'
Depending on the answer, we may need to actively control the RFAMs as already planned (#5686).
Since the response of the interferometer is too complicated for analytic work, a numerical simulation is used.
 

(Results : Offsets in LSC error signals)

PRCL_200.png

 

MICH_200.png

 SRCL_200.png

  Figure: Offsets, in units of meters, in all the LSC demodulated signals. The Y-axis is the amount of offset and the X-axis represents each signal port.
In each signal port, the signals are distinguished by color.
(1) Offsets in the PRCL signal. (2) Offsets in the MICH signal. (3) Offsets in the SRCL signal.
 
 
Roughly, the signals showed offsets at the 0.1 nm level.
The numerical error was found to be about 10^-10 nm by running the same simulation without the AM sidebands.
Here is a summary of the amount of the offsets:
 
         offsets [nm] (1f signal port)   offsets [nm] (3f signal port)   biggest offsets [nm] (signal port)
PRCL     0.3     (REFL11)                0.2 (REFL33)                    1  (REFL55)
MICH     0.00009 (AS55)                  0.8 (REFL33)                    7  (POP11)
SRCL     0.1     (REFL55)                0.1 (REFL165)                   40 (POX11)
In the SRCL simulation, REFL11I, REFL11Q, POP11I, POP11Q and POX11I didn't show any zero-crossing points within a 100 nm range around the resonance.
This is because SRCL doesn't do anything to the 11 MHz sidebands, so this is the right behavior.
However, POX11 was somewhat sensitive to the SRCL motion and showed a funny signal with a big offset.
 

(Simulation setup)

I applied the current PM/AM ratio according to the measurements (#5616, #5519).
The modulation indices used in the simulation are :
    + PM index in 11MHz = 0.17
    + PM index in 55MHz = 0.14
    + AM index in 11MHz = 0.17 / 200 = 8.5x10^-4
    + AM index in 55MHz = 0.14 / 200 = 7.0x10^-4
Note that the phases of the AM and PM sidebands are the same.

For clarity, I also note the definition of the PM/AM ratio as well as what the first-order upper sideband looks like.

ratio.png

upper.png
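Since those definitions live in the attached images, here is a standard reconstruction of what they presumably show, assuming cosine modulation with the AM and PM at a common phase (AM index $m_a$, PM index $m_p$):

$$E(t) = E_0\,[1 + m_a\cos\omega_m t]\;e^{i(\omega_0 t + m_p\cos\omega_m t)} \approx E_0\,e^{i\omega_0 t}\left[1 + \frac{m_a + i\,m_p}{2}\,e^{i\omega_m t} + \frac{m_a + i\,m_p}{2}\,e^{-i\omega_m t}\right]$$

so the PM/AM ratio is $m_p/m_a$ (= 200 here), and the first-order upper sideband has amplitude $E_0\,(m_a + i\,m_p)/2$.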
 

The optical parameters are all at their ideal lengths, although we may want to check the results with more realistic parameters:
    + No arm cavities
    + PRCL length = 6.75380 m
    + SRCL length = 5.39915 m
    + Schnupp asymmetry = 3.42 cm
    + loss in each optic = 50 ppm
    + PRCL = resonant for 11 and 55MHz
    + MICH = dark fringe
    + SRCL = resonant for 55 MHz
The matlab script will be uploaded to the cvs server.

Quote from #5686
  8. In parallel to those actions, figure out how much offsets each LSC error signal will have due to the current amount of the RFAMs.
    => Optickle simulations.

  10815   Thu Dec 18 15:41:30 2014   ericq   Update   Computer Scripts / Programs   Offsite backups of /cvs/cds going again

Since the Nodus switch, the offsite backup scripts (scripts/backup/rsync.backup) had not been running successfully. I tracked it down to the weird NFS file ownership issues we've been seeing since making Chiara the fileserver. Since the backup script uses rsync's "archive" mode, which preserves ownership, permissions, modification dates, etc., not seeing the proper ownership made everything wacky.

Despite 99% of the searches you do about this problem saying you just need to match your user's uid and gid on the NFS client and server, it turns out NFSv4 doesn't use this mechanism at all, opting instead for some ID mapping service (idmapd), which I have no inclination to figure out at this time.

Thus, I've configured /etc/fstab on Nodus (and the control room machines) to use NFSv3 when mounting /cvs/cds. Now, all the file ownerships show up correctly, and the offsite backup of /cvs/cds is churning along happily. 
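For reference, the change amounts to forcing version 3 in the mount options; a minimal sketch of the relevant /etc/fstab line (the server name and export path here are placeholders, not the actual Chiara export):

# force NFSv3 so file ownership is reported correctly and rsync -a behaves
chiara:/home/cds   /cvs/cds   nfs   vers=3,rw,bg   0   0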

  10519   Thu Sep 18 17:44:55 2014   Jenne   Update   LSC   Old AO cable pulled

[Q, Jenne]

We pulled the old 2-pin lemo cable after I had a look at the connectors.  When I unscrewed the connector on the MC side, one of the wires came off.  I suspect that it was still hanging on a bit, but my torquing it finally killed it. 

We pulled the cable with the idea of resoldering the connectors, but there are at least 2 places where the cable has been squished enough that the shielding or the inner wires are exposed.  These places aren't near enough to the ends to just cut the cable short.

Downs doesn't have a spool of shielded twisted single-pair cable, so Todd is going to get me the part number for the cable they use, and I've asked Steve to order it tomorrow. 

For now, we will continue using the BNC cable that we installed last night - I don't think it's worth resoldering and putting in a crappy 2-pin lemo cable that we'll just throw out in a week.

  7851   Tue Dec 18 15:51:33 2012   Jenne   Update   IOO   Old G&H TT mirrors' phase maps measured

I took the 2 G&H mirrors that we de-installed from PR3 and SR3 over to GariLynn to measure their phase maps. Data is in the same place as before, http://www.ligo.caltech.edu/~coreopt/40MCOC/Oct24-2012/ .  Optic "A" is SN 0864, and optic "B" is SN 0884, however I'm not sure which one came from which tip tilt.  It's hard to tell from what photos we have on picasa.

Both are astigmatic, although not lined up with the axes defined by where the arrow marks the HR side. Both have RoCs of -600 or -700 m, with ~10 nm RMS.

  3826   Fri Oct 29 16:39:01 2010   Jenne   Update   Treasure   Old Green suspension towers disassembled

[Jenne, Joonho]

At Koji's request, we disassembled 2 of the old Green suspension towers that have been sitting along the X-arm forever (read that last word in a 'Sandlot' voice.  Then you'll know how long the suspensions have been sitting there).

They are now hanging out in plastic trays, covered with foil.  They will now be much easier to store.

We should remember that we have these, particularly because the tables at the top are really nice, and have lots of degrees of freedom of fine adjustment.

 

Steve:

Atm1, there is one more of these old suspension towers

Atm2, disassembled

Attachment 1: P1070014.JPG
Attachment 2: P1070015.JPG
  1455   Mon Apr 6 19:09:15 2009   Jenne   Update   PEM   Old Guralp is hooked back up to the ADC

Old Guralp is hooked back up, the new one is sitting next to it, disconnected for now.

  3736   Mon Oct 18 17:16:30 2010   Jenne   Update   SUS   Old PRM, SRM stored, new PRM drag wiped

[Jenne, Suresh]

We've put the old PRM and SRM (which were living in a foil house on the cleanroom optical table) into Steve's nifty storage containers.  Also, we removed the SRM which was suspended, and stored it in a nifty container.  All 3 of these optics are currently sitting on one of the cleanroom optical tables.  This is fine for temporary storage, but we will need to find another place for them to live permanently.  The etched names of the 3 optics are facing out, so that you can read them without picking them up.  I forgot to note the serial numbers of the optics we've got stored, but the old optics are labeled XRM ###, whereas the new optics are labeled XRMU ###. 

Koji chose for us PRMU 002, out of the set which we recently received from ATF, to be the new PRM.  Suresh and I drag wiped both sides with Acetone and Iso, and it is currently sitting on one of the rings, in the foil house on the cleanroom optical table.

We are now ready to begin the guiderod gluing process (later tonight or tomorrow).

  3737   Mon Oct 18 18:00:36 2010   Koji   Update   SUS   Old PRM, SRM stored, new PRM drag wiped

- Steve is working on the storage shelf for those optics.

- PRMU002 was chosen as it has the best RoC among the three.

Quote:

[Jenne, Suresh]

We've put the old PRM and SRM (which were living in a foil house on the cleanroom optical table) into Steve's nifty storage containers.  Also, we removed the SRM which was suspended, and stored it in a nifty container.  All 3 of these optics are currently sitting on one of the cleanroom optical tables.  This is fine for temporary storage, but we will need to find another place for them to live permanently.  The etched names of the 3 optics are facing out, so that you can read them without picking them up.  I forgot to note the serial numbers of the optics we've got stored, but the old optics are labeled XRM ###, whereas the new optics are labeled XRMU ###. 

Koji chose for us PRMU 002, out of the set which we recently received from ATF, to be the new PRM.  Suresh and I drag wiped both sides with Acetone and Iso, and it is currently sitting on one of the rings, in the foil house on the cleanroom optical table.

We are now ready to begin the guiderod gluing process (later tonight or tomorrow).

 

  11419   Thu Jul 16 03:01:57 2015   ericq   Update   LSC   Old beatbox hooked back up

I was having issues trying to get reasonable noise performance out of the aLIGO demod board as an ALS DFD. With the inputs to the LSC whitening filters terminated, there was not much 60 Hz noise, and the RMS was in the single-Hz range.

A 60Hz line of hundreds of uV was visible in the power spectrum of the single ended BNC and double-ended DB25 outputs of the board no matter how I drove or terminated.

So, I tried out hooking up the ALS beatbox. It turns out to work better for the time being; not only is the 60Hz line in the analog outputs about ten times smaller, the broadband noise floor in the resultant beat spectrum when driven by a 55MHz LO on the LSC rack is a fair bit lower too. I wonder if this is due to not driving the aLIGO board LO at the +10dBm it expects. With the amplifiers and beat note amplitudes we have, we'd only be able to supply around 0 dBm anyway.

Here's a comparison of the aLIGO board (black) and ALS beatbox (dark green) driven with the 55MHz LO, both going through the LSC whitening filters for a resultant magnitude of 3kCounts in the I-Q plane. The RMS sensing noise is about 30 times lower for the beatbox. (Note, this is with the old delay cables. When we switch to the 50m cables, we'll win further frequency noise sensitivity through the better degrees->Hz calibration.) I'm very interested to see what the green beat spectrum looks like with this setup. 

Not only is the 60Hz line smaller, there is simply less junk in the beatbox signal. I did not expect this to be the case. 

There were some indications of the funky status of the aLIGO board: channels 3 and 4 are totally nonfunctional, so who knows what's going on in there. I've pulled it out, to take a gander and see if I can figure out how to make it suitable for our purposes.

Attachment 1: beat_comparison.png
Attachment 2: aLIGO_vs_beatbox.xml.zip
  13239   Tue Aug 22 15:17:19 2017   ericq   Update   Computers   Old frames accessible again

It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit.

I hooked it up to megatron, and it was automatically recognized and mounted.

I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied the archived DRFPMI frame files over to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we can have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID.

There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.

  13240   Tue Aug 22 15:40:06 2017   gautam   Update   Computers   Old frames accessible again

[jamie, gautam]

I had some trouble getting the daqd processes up and running again using Jamie's instructions.

With Jamie's help however, they are back up and running now. The problem was that the mx infrastructure didn't come back up on its own. So prior to running sudo systemctl restart daqd_*, Jamie ran sudo systemctl start mx. This seems to have done the trick.

c1iscey was still showing red fields on the CDS overview screen so Jamie did a soft reboot. The machine came back up cleanly, so I restarted all the models. But the indicator lights were still red. Apparently the mx processes weren't running on c1iscey. The way to fix this is to run sudo systemctl start mx_stream. Now everything is green.

Now we are going to work on trying the fix Rolf suggested on c1iscex.

Quote:

It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit.

I hooked it up to megatron, and it was automatically recognized and mounted.

I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied the archived DRFPMI frame files over to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we can have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID.

There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.

 

  3939   Wed Nov 17 15:49:53 2010   rana   Update   DAQ   Ole Channel Names

The following channels should be named as below to keep in line with their names pre-upgrade rather than use _DAQ in the name.

DAQ channel name                           Model signal
C1:SUS-{OPT}_{POS,PIT,YAW}                 SUS{POS,PIT,YAW}_IN1
C1:SUS-{OPT}_OPLEV_{P,Y}ERROR              OL{PIT,YAW}_IN1
C1:SUS-{OPT}_SENSOR_{UL,UR,LL,LR,SIDE}     {UL,UR,LL,LR,SD}SEN_OUT
C1:SUS-{OPT}_OPLEV_{P,Y}OUT                OL{PIT,YAW}_OUT
C1:IOO-MC_TRANSPD                          MC2_OLSUM_IN1

 

  15940   Thu Mar 18 13:12:39 2021   gautam   Update   Computer Scripts / Programs   Omnigraffle vs draw.io

What is the advantage of Omnigraffle compared to draw.io? The latter also has a desktop app, and for creating drawings, seems to have all the functionality that Omnigraffle has, see for example here. draw.io doesn't require a license, and I feel this is a much better tool for collaborative artwork. I really hate that I can't even open my old omnigraffle diagrams now that I no longer have a license.

Just curious if there's some major drawback(s), not like I'm making any money off draw.io.

Quote:

After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle.

  8579   Wed May 15 15:33:49 2013   Steve   Update   General   On-Track QPD

I tested the On-Track OT-301 amplifier (from LLO) with a PSM2-10 QPD. It was responding; Jenne will calibrate it. The 12 V DC power supply input is unipolar.

The one AC to DC adapter that Jenne tried was broken.

  14174   Tue Aug 21 17:32:51 2018   awade   Bureaucracy   Equipment loan   One P-810.10 Piezo Actuators element removed

I've taken a PI piezo actuator (P-810.10) from the 40m collection. I forgot to note it on the equipment checklist by the door; I will do so when I next drop by.

  3201   Mon Jul 12 22:01:13 2010   Koji   Update   SUS   One TT suspended. Still need fine alignment

Jenne and Koji

We tweaked the alignment of the TT mirror.

First we put in a G&H mirror, but the mirror was misaligned and touching the ECD, as the magnet was too heavy. We tried to move the wires towards the magnet by 1 mm.
It was not enough, but once we moved the clamps towards the magnet, we got the range to adjust the pitching back and forth.
We tried to align it by a feather touch to the clamp, but we could not get close to a precision of 10 mrad, as the final tightening of the clamp screws changed the alignment.

We will try to adjust the fine alignment tomorrow again.

The damping in pitch, yaw and longitudinal looks quite good. We will also try to characterize the damping of the suspension using a simple oplev setup.

Attachment 1: IMG_2634.jpg
  3786   Tue Oct 26 15:57:10 2010   Jenne   Update   SUS   One magnet broken, reglued

[Jenne, Suresh, Thanh (Bram's Grad Student)]

When we removed the grippers from the magnets on the PRM, one of the face magnets broke off.  This time, the dumbbell remained glued to the optic, while the magnet came off.  (Usually the magnet and dumbbell stay attached, and both come off together.)  I had 3 spare magnet-dumbbells, but only one of them had the correct polarization.  The strength of the spare magnet was ~128 Gauss, while the other magnets glued to the PRM are all ~180 Gauss.  We considered this too large a discrepancy, and so elected to reuse the same magnet as before.

We removed the dumbbell from the optic using acetone.  After the epoxy was gently removed, we drag wiped the AR face of the optic (Acetone followed by Iso, as usual), being careful to keep all the solvent away from all the other glue joints.  We cleaned off the magnet with acetone (it didn't really have any glue stuck on it...most of the glue was stuck on the dumbbell), and epoxied it to a new dumbbell. 

The PRM, as well as the magnet-dumbbell gluing fixture are in the little foil house, waiting for tomorrow's activities.  Tomorrow we will re-glue this magnet to the optic, and Thursday we will balance the optic.  

This still leaves us right on schedule for giving the PRM to Bob on Friday at lunchtime, so it can bake over the weekend.

  4348   Thu Feb 24 10:56:04 2011   Jenne   Update   WienerFiltering   One month of H1 S5 data is now on Rossa

Just in case anyone else wants to access it, we now have 30 days of H1 S5 DARM data sitting on Rossa's hard drive.  It's in 10 min segments.  This is handy because if you want to try anything, particularly Wiener filtering, we no longer have to wait around for the data to be fetched from elsewhere.

  3516   Thu Sep 2 17:43:30 2010   josephb   Update   CDS   One working BO output module, others not so much

 Joe and Kiwamu:

We found one bug in the RCG code, where the second input for the CDO32 part (32-channel binary output) was simply a repeat of the first input, and the actual second input was totally ignored.  This was fixed in the /advLigoRTS/src/epics/util/lib/CDO32.pm file by changing

$calcExp .= $::fromExp[0];

to

$calcExp .= $::fromExp[1];

This fix has been added to the svn.  Unfortunately, while we have a single working binary output module, the 2nd and later modules do not seem to be responding at all.  We've done the usual swapping of parts of the path, in both software and hardware, and can't find any bad pieces in our model files or the actual hardware.  That leaves me wondering about the C code, specifically whether the CDO32Output[1], CDO32Output[2], and so forth array entries are being handled properly.  I'll try to get some thoughts on it from Alex tomorrow.

  16682   Sat Feb 26 01:01:40 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

I will make a detailed elog later today giving an outline of the connection from the Agilent gauge controller to the vacuum subnet and the work I have been doing over the past two days to get data from the unit to EPICS channels. I just want to mention that I have plugged the XGS-600 gauge controller into the serial server on the vacuum subnet. I checked the vacuum MEDM screen and I can confirm that the other sensors did not experience any issues as a result of this. I also currently have two of the FRG-700 gauges connected to the controller, but I have powered the unit down after the checks.

  16683   Sat Feb 26 15:45:14 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

I have attached a flow diagram of my understanding of how the gauges are connected to the network.

Earlier today, I connected the XGS-600 gauge controller to the IOLAN Serial Device Server at 192.168.114.22.

The plan is as follows:

1. Update the serial device yaml file to include this new ip entry for the XGS-600 gauge controller

2. Create a serial gauge class "serial_gauge_xgs.py" for the XGS-600 gauge controller that inherits from the serial gauge parent class for EPICS communication with a serial device via TCP sockets.

  • It might be better to initially use the current channels of the devices that are being replaced, i.e.

      New channel             Existing channel
      C1:Vac-FRG1_pressure    C1:Vac-CC1_pressure
      C1:Vac-FRG2_pressure    C1:Vac-CCMC_pressure
      C1:Vac-FRG3_pressure    C1:Vac-PTP1_pressure
      C1:Vac-FRG4_pressure    C1:Vac-CC4_pressure
      C1:Vac-FRG5_pressure    C1:Vac-IG1_pressure

3. Modify the launcher file to include the XGS gauge controller. Following the same pattern used  to start the service for the other serial gauges, we can start the communication between the XGS-600 gauge controller and the IOLAN serial server and write data to EPICS channels using

controls@c1vac> python launcher.py XGS600

If we are able to establish communication with the XGS-600 gauge controller and write its gauge data to EPICS channels, go on to step 4.

4. Create a serial service file "serial_XGS600.service" and place it in the service folder

5. Add the new EPICS channels to the database file

6. Add the "serial_XGS600.service" to lines 10 and 11 of modbusIOC.service

7. Later on, when we are ready, we can restart the updated modbusIOC service
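As a sanity check for steps 2 and 3, here is a minimal sketch of a query through the IOLAN TCP socket (the TCP port number and the CR-terminated reply format are assumptions; #0005 = read software revision is one of the XGS-600 commands tested with the Serial to USB setup):

import socket

def xgs_query(cmd=b"#0005\r", host="192.168.114.22", port=10001, timeout=2.0):
    """Send one XGS-600 command via the IOLAN serial device server and
    return the reply string (e.g. '>0206,0200,0200,0200')."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(cmd)
        reply = b""
        while not reply.endswith(b"\r"):   # assumed CR-terminated replies
            chunk = s.recv(64)
            if not chunk:
                break
            reply += chunk
    return reply.decode().strip()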

 

For vacuum signal flow and Acromag channel assignments see [1]  and [2] respectively. For the 16 port IOLAN SDS (Serial Device Server) ethernet connections, see [3]. 

[1] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=40m_Vacuum_System_Signal_Flow.pdf

[2] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=AcromagChannelAssignment.pdf

[3] https://git.ligo.org/40m/vac/-/blob/master/python/serial/serial_devices.yaml

Attachment 1: Vac-gauges-flow-diagram.png
  16688   Mon Feb 28 19:15:10 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

I decided to create an independent service for the XGS data readout so we can get this to work first, before trying to integrate it into the current system. After starting the service, I noticed that the EPICS channels were not updating as expected. So I started to debug the problem and managed to track it down to an IP socket connect() error, i.e. we get a connection error for the IP address assigned to the LAN port to which the XGS box was connected. After trying a few things and searching the internet, I think the error indicates that this particular LAN port is not yet configured. I reached this conclusion after noting that only a select number of LAN ports connected without issues, and these are the ports that already had devices connected, so it must be the case that those LAN ports had somehow been configured. The next step is to look at the IOLAN manual to figure out how to configure the IP port for the XGS controller. Fingers crossed.

  16691   Tue Mar 1 20:38:49 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

During my investigation, I inadvertently overwrote the serial port configuration for the connected devices, so I am now working to get it all back. I have attached screenshots of the config settings that brought back communication that is not garbled. There is no physical connection to port 6, which I guess was initially used for the UPS serial communication but is not anymore. Also, ports 9 and 10 are connected to Hornet and SuperBee, both of which have not been communicating for a while and are to be replaced, so there is no way to confirm communication with them. Otherwise, the remaining devices seem to be communicating as before.

I still could not establish communication with the XGS-600 controller using the serial port settings given in the manual, which also happen to work via a Serial to USB adapter, so I will revisit the problem later. My immediate plan is to do a Serial to Ethernet, then Ethernet to Serial, and then Serial to USB connection to see if the USB code still works. If it does, then at least I know the problem is not coming from the Serial to Ethernet adapters. Then I guess I will replace the controller with my laptop and see what signal comes through when I send a message to the controller via the IOLAN serial device server. Hopefully, I can discover what's wrong by this point.

 

Note to self: Before doing anything, do a sanity check by comparing the settings on the IOLAN SDS with the config settings that worked for the Serial to USB communication, and post an elog about this for reference.

Attachment 1: Working_Serial_Port_List_1.png
Attachment 2: Working_Serial_Port_List_2.png
Attachment 3: Working_Config_Ports#1-5.png
Attachment 4: Working_Config_Ports#7-8.png
  16692   Wed Mar 2 11:50:39 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

Here are the IOLAN SDS TCP socket settings and the USBserial settings, for comparison.

I have also included the python script and output from the USBserial test from earlier.

Attachment 1: XGS600_IOLAN_settings_1.png
Attachment 2: XGS600_IOLAN_settings_2.png
Attachment 3: XGS600_USBserial_settings.png
Attachment 4: XGS600_comm_test.py
#!/usr/bin/env python

#Created 2/24/22 by Tega Edo
'''Script to read/write to the XGS-600 Gauge Controller'''

import serial
import sys,os,math,time

ser = serial.Serial('/dev/cu.usbserial-1410') # open serial port 

... 74 more lines ...
Attachment 5: XGS600_comm_test_result.txt
----- Multiple Sensor Read Commands -----

Sent to XGS-600 -> #0001\r : Read XGS contents
response : >FE4CFE4CFE4C

Sent to XGS-600 -> #0003\r : Read Setpoint States
response : >0000

Sent to XGS-600 -> #0005\r : Read software revision
response : >0206,0200,0200,0200
... 69 more lines ...
  16693   Wed Mar 2 12:40:08 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

Connector Test:

A quick test to rule out any issue with the Ethernet to Serial adapter was done using the setup shown in Attachment 1. The results rule out any connector problem.

 

IOLAN COMM test (as per Koji's suggestion):

The next step is to swap the controller with a laptop set up to receive serial commands using the same settings as the XGS600 controller. Basically, run a slightly modified version of the python script where we go into listening mode, then send commands to the TCP socket on the IOLAN SDS unit using c1vac and check what data makes its way to the laptop USBserial terminal. After working on this for a bit, I realized that we do not need to do anything on the c1vac machine; we only need to start the service as it would normally run. So I wrote a small python script for a basic XGS-600 controller emulator, see Attachment 4. The outputs from the laptop and c1vac terminals are Attachments 5 and 6 respectively.

These results show that we can communicate via the assigned IP address "192.168.114.22" and that the commands sent from c1vac reach the laptop in the correct format. Furthermore, the serial_XGS service, part of the modbusIOC_XGS service, which usually exits with an error, seems fine now after successfully communicating with the laptop. I don't know why it did not die after the tests. I also found a bug in my code as a result of the test, where the status field for the fourth gauge didn't get written to.

 

Pressure reading issue:

I noticed that the pressure reading was not giving the atmospheric value of ~760 Torr as expected. Looking through my previous readouts, it seems the unit showed this atm value of ~761 Torr when the first gauge was attached. However, a closer look at the issue revealed a transient behavior, i.e. when the unit is turned on, the reading dips to the atm value but eventually rises up to 1000 Torr. I don't think this is a calibration problem because the value of 1000 Torr is the maximum value of the gauge range. I also found out that when the XGS controller has been running for a while, a power cycle does not show this transient behavior. So maybe a faulty capacitor somewhere? I have attached a short video clip that shows what happens when the XGS controller unit is turned on.

Attachment 1: IMG_20220302_123529382.jpg
Attachment 2: XGS600_Serial2Ethernet2Serial2USB_comm_test_result.txt
$ python3 XGS600_comm_test.py  

----- Multiple Sensor Read Commands -----

Sent to XGS-600 -> #0001\r : Read XGS contents
response : >FE4CFE4CFE4C

Sent to XGS-600 -> #0003\r : Read Setpoint States
response : >0000

... 73 more lines ...
Attachment 3: VID-20220302-WA0001.mp4
Attachment 4: comm_test_c1vac_to_laptop_via_iolansds.py
#!/usr/bin/env python

#Created 3/2/22 by Tega Edo
'''Script to emulate XGS-600 controller using laptop USBserial port'''

import serial
import sys,os,math,time

ser = serial.Serial('/dev/cu.usbserial-1410') # open serial port 

... 19 more lines ...
Attachment 5: laptop_terminal.txt
(base) tega.edo@Tegas-MBP serial % python3 comm_test_c1vac_to_laptop_via_iolansds.py

----- Listen for USBserial command and asynchronously send data in XGS600 format -----

Command received from c1vac [1] : 

Data sent to c1vac [1] : >1.000E+00,NOCBL    ,NOCBL    ,NOCBL    ,2.00E+00,NOCBL\r
Command received from c1vac [2] : 

Data sent to c1vac [2] : >2.000E+00,NOCBL    ,NOCBL    ,NOCBL    ,3.00E+00,NOCBL\r
... 54 more lines ...
Attachment 6: c1vac_terminal.txt
controls@c1vac:/opt/target/python/serial$ caget C1:Vac-FRG1_status && caget C1:Vac-FRG2_status && caget C1:Vac-FRG3_status && caget C1:Vac-FRG4_status && caget C1:Vac-FRG5_status
C1:Vac-FRG1_status             1.530E+02
C1:Vac-FRG2_status             OFF
C1:Vac-FRG3_status             OFF
C1:Vac-FRG4_status             NO COMM
C1:Vac-FRG5_status             1.55E+02
controls@c1vac:/opt/target/python/serial$ caget C1:Vac-FRG1_status && caget C1:Vac-FRG2_status && caget C1:Vac-FRG3_status && caget C1:Vac-FRG4_status && caget C1:Vac-FRG5_status
C1:Vac-FRG1_status             1.630E+02
C1:Vac-FRG2_status             OFF
C1:Vac-FRG3_status             OFF
... 70 more lines ...
  16704   Sun Mar 6 18:14:45 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

Following repeated failure to establish communication between c1vac and the XGS600 controller via the Perle IOLAN serial device server, I decided to monitor the signal voltage of the communication channels (pin #2, pin #3 and pin #5) using an oscilloscope. The result of this investigation is presented in the attached pdf document. In summary, it seems I used a cross-wired RS232 serial cable instead of a normal RS232 serial cable, so the c1vac read request command is being relayed on the wrong comm channel (pin #2 instead of pin #3). I will swap out the cable to see if this resolves the problem.

Attachment 1: iolan_xgs_comm_investigation.pdf
  16706   Mon Mar 7 13:53:40 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

So it appears that my deduction from the pictures, that a cable swap was needed, was correct; however, it turns out that the installed cable was actually a normal RS232 cable, and what we need instead is an RS232 null cable. After the swap was done, the communication between c1vac and the XGS600 controller became active. Although the data makes it all the way to c1vac without any issues, the scope view of it shows that it mainly utilizes the upper half of the voltage range, which is just over 50% of the available range. I don't know what to make of this.

 

I guess the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torr.

 

Quote:

Following repeated failure to establish communication between c1vac and the XGS600 controller via the Perle IOLAN serial device server, I decided to monitor the signal voltage of the communication channels (pin #2, pin #3 and pin #5) using an oscilloscope. The result of this investigation is presented in the attached pdf document. In summary, it seems I used a cross-wired RS232 serial cable instead of a normal RS232 serial cable, so the c1vac read request command is being relayed on the wrong comm channel (pin #2 instead of pin #3). I will swap out the cable to see if this resolves the problem.

 

Attachment 1: iolan_xgs_comm_live.pdf
  16707   Mon Mar 7 14:52:34 2022   Koji   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

Great troubleshooting!

> I guess the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torr.

This is just a calibration issue. The controller should have the calibration function.
(The other Pirani showing 850 Torr was also a calibration issue, although I didn't bother to correct it. I think Piranis typically have a large distribution of calibration values and require individual calibration.)

  16713   Tue Mar 8 12:08:47 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

OMG, it worked! It was indeed a calibration issue and all I had to do was press the "OK" button after selecting the "CAL" tab beside the pressure reading. Wow.

Quote:

Great troubleshooting!

> I guess the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torr.

This is just a calibration issue. The controller should have the calibration function.
(The other Pirani showing 850 Torr was also a calibration issue, although I didn't bother to correct it. I think Piranis typically have a large distribution of calibration values and require individual calibration.)

 

Attachment 1: XGS600_calibration.pdf
  395   Sun Mar 23 00:43:08 2008   mevans   HowTo   General   Online Adaptive Filtering
I wrote a short document about the OAF running on the ASS. Since there is no BURT setup, I put a script in /cvs/cds/caltech/scripts to help with setting initial parameters: upass.
Attachment 1: OnlineAdaptiveFilter.pdf
  7220   Fri Aug 17 16:58:06 2012   Masha   Update   PEM   Online Seismic Noise Classification - Part 1

Den and I decided to try to classify seismic signals in the frequency domain rather than the time domain. We looked at amplitude spectral density plots of all of the data in our set, and noted that there were noticeable differences in the frequency domain for midnight quiet, trucks, and earthquakes.

For example, here is the time series of quiet, midnight seismic noise as compared to the seismic noise at the peak of an earthquake - the earthquake signal is noticeably higher in the 1 - 3 Hz region. Likewise, for the truck signal, there are noticeable bumps that arise at 10 and 30 Hz during the peak of the truck's motion due to the resonant frequency of the truck bouncing on its wheels.

noises.png

We investigated this potential means of classification further by considering the linear separability of the power of our signals in various frequency bands. Below is a plot of the power of a normalized signal in the 0.1 - 3.0 Hz region vs. the power of the normalized signal in the 3.0 - 30.0 Hz region - calculated by means of fft and separation of the discrete resulting frequencies (in short, an ideal filter).

Seismic_Signal_Linear_Separability.png

There is rather clear linear separability of the normalized signals in this case, as two lines could potentially be drawn to separate trucks from quiet and earthquake (with a few misclassified points for quiet - since the lab isn't actually empty and quiet in the middle of the night, and man-made seismic disturbances do occur). The reason we have to normalize our signals lies in the fact that the data set had different gains for various seismometers at different times. Normalization not only allows us to use our data set for training effectively, but it also assures that the online classification, if the online signals are also normalized, will accommodate variable seismometer gains in the future and still be able to classify signals.

I looked at the linear separability of our training set using various combinations of frequency bands, and deduced that the current separation in the BLRMS performs best (coincidentally, since the BLRMS separations are just decades), which meant that we could use the current BLRMS system for online classification of seismic noise.

Thus, I built a neural network which performed classification with the following parameters:

- One hidden layer of 20 neurons

- Gradient descent backpropagation with learning parameter mu = 0.175

- Sigmoidal activation functions for each neuron (computationally achieved by a parametrized hyperbola rather than an actual hyperbolic tangent, in order to save on computation time).

- 5 inputs - the normalized fft^2 of the signal (since the root of a signal doesn't add linearly to 1) in the following frequency regions: 0.1 - 0.3, 0.3 - 1.0, 1.0 - 3.0, 3.0 - 10.0 and 10.0 - 30.0 Hz. Since this division was done through the (frequency, fft value) return in Matlab, the signal was essentially filtered ideally into these frequency bands.

- 3 output neurons representing an output vector, with desired output vectors of [1, 0, 0] for earthquake, [0, 1, 0] for truck, and [0, 0, 1] for quiet.

- 1,600,000 training epochs (batch backpropagation on all of the data)

Below is the best learning curve for this network, showing the total number of inputs misclassified out of 224. The best result achieved was 30 misclassified signals out of 224. Obviously this is not ideal, but our data is not totally linearly separable. This could, however, be reduced with further iterations, but given the close-to-zero slope of the learning curve between iteration 1,000,000 and iteration 1,500,000, this could take a very long time.

 

3_Output_Learning_Curve.png

Thus, I trained the network, generated the weight vectors and optimal activation function parameters, and was ready to implement a feed-forward neural network (with no online training). My next e-log (Part 2) will be about this system and will be posted shortly.
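As a reference for the C implementation, here is a minimal Python sketch of the feed-forward pass described above (the x/(1+|x|) form of the "parametrized hyperbola" activation is my assumption, and the weights shown are random placeholders with the right shapes):

import numpy as np

def act(x):
    # cheap sigmoid-like activation standing in for tanh (assumed form)
    return x / (1.0 + np.abs(x))

def classify(band_power, W1, b1, W2, b2):
    """band_power: normalized fft^2 of the signal in the 0.1-0.3,
    0.3-1.0, 1.0-3.0, 3.0-10.0 and 10.0-30.0 Hz bands (5 inputs)."""
    h = act(W1 @ band_power + b1)   # 20 hidden neurons
    y = act(W2 @ h + b2)            # outputs: [EQ, truck, quiet]
    return ["earthquake", "truck", "quiet"][int(np.argmax(y))], y

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(20, 5)), np.zeros(20)   # placeholder weights
W2, b2 = rng.normal(size=(3, 20)), np.zeros(3)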

Attachment 1: Earthquake_Quiet_PSD.png
Attachment 2: Truck_Signal_Progression.png
Attachment 3: Seismic_Signal_Linear_Separability.png
Attachment 4: 3_Output_Learning_Curve.png
Attachment 5: Earthquake_Quiet_PSD.png
Attachment 6: Earthquake_Quiet_PSD.png
Attachment 7: Truck_Signal_Progression.png
  7221   Fri Aug 17 18:17:16 2012   Masha   Configuration   PEM   Online Seismic Noise Classification - Part 2

As promised in the previous e-log, this log is all about the current online seismic noise classification system.

While we had the BLRMS system already in place (which I helped make), Den realized that we would need better filters for the BLRMS channels: we wanted a strong cut-off, but also a short step response so that we could quickly classify seismic signals. Likewise, a step response which oscillates is undesirable, as it could lead to false classification of post-truck signals as trucks while the filter adjusts and then dips back down. Thus, after experimenting with many different filters, Den chose to use a combination of

cheby1("LowPass", 1, 1, 0.03)*cheby1("LowPass", 1, 1, 0.03)

as our low-pass filter. The step response and bode plot are below.

LP_RMS_Filter
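For reference, a rough scipy equivalent of that filter (assuming the foton arguments are (type, order, ripple [dB], corner [Hz]) and a 2048 Hz model rate; both are my reading, not confirmed):

import numpy as np
from scipy import signal

fs = 2048.0                          # assumed c1pem model rate
# 1st-order Chebyshev type I low-pass, 1 dB ripple, 0.03 Hz corner
sos1 = signal.cheby1(1, 1, 0.03, btype="low", output="sos", fs=fs)
sos = np.vstack([sos1, sos1])        # the product: same section applied twice

t = np.arange(0, 120, 1 / fs)
step = signal.sosfilt(sos, np.ones_like(t))      # step response
f, h = signal.sosfreqz(sos, worN=2**15, fs=fs)   # bode magnitude
mag_db = 20 * np.log10(np.abs(h) + 1e-300)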

 The next step was to write C code that would implement the feedforward neural network with my newly generated weights.

Next, I had to implement the code in the c1pem model, and normalize the inputs. Below is an overview of the model, and a close up of the C block section.

GUR1X_Model.png

 GUR1X_Model_Closeup.png

The above close-up includes the process of normalization (dividing by the square of the incoming signal), feeding through the neural network, and classifying.

Each seismometer channel set (GUR1X, GUR1Y, GUR1Z, GUR2X, GUR2Y, GUR2Z, STS1X, STS1Y, STS1Z) now has channels (and corresponding DQ channels) of the following form:

SEIS_CLASS: The class of seismic noise. 1.0 means Earthquake, 0.5 means Quiet, and 0.0 means Truck. (There are only these 3 discrete values.)

SEIS_CLASS_EQ, SEIS_CLASS_TRUCK, SEIS_CLASS_QUIET: These channels represent the confidence of the neural network's classification. The channel for the current signal's class will read 1, while the other two channels will read a value between 0 and 1 representing the ratio of the neural network's output for that class to the output for the winning class. To simplify: suppose the neural network classified an earthquake. Ideally, the output neurons would read [1, 0, 0], and SEIS_CLASS would equal 1.0 for earthquake. However, the output neurons probably read something along the lines of [0.9, 0.3, 0.5] - SEIS_CLASS is still 1.0, but SEIS_CLASS_EQ would be 1.0, SEIS_CLASS_TRUCK would be 0.5 / 0.9, and SEIS_CLASS_QUIET would be 0.3 / 0.9. The lower the other two signals are, the better - it means we are more confident in our classification.
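In other words, a sketch of the channel logic as I read it (assuming the [EQ, truck, quiet] output ordering from Part 1):

def seis_channels(y_eq, y_truck, y_quiet):
    """Map the three network outputs to SEIS_CLASS and the three
    confidence channels (the winner reads 1.0, the others read their
    ratio to the winner)."""
    outputs = {"EQ": y_eq, "TRUCK": y_truck, "QUIET": y_quiet}
    winner = max(outputs, key=outputs.get)
    seis_class = {"EQ": 1.0, "TRUCK": 0.0, "QUIET": 0.5}[winner]
    confidence = {k: v / outputs[winner] for k, v in outputs.items()}
    return seis_class, confidence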

The MEDM screen for this system (in the RMS system) has the following form for all seismometer channels (this one is GUR1X):

GUR1X_MEDM.png

These are the screens I edited earlier in the summer, with modifications. The bottom filter banks represent the norm of the seismometer signal, which we use to normalize the inputs to the neural network.

Here a close-up of the most important part:

GUR1X_MEDM_CLOSE_UP.png

The orange meter on the right points to the current signal type. Here it reads truck - this is ok because it's the middle of the day, and there are a lot of trucks around. The left side represents our confidence in the signal - the signal is classified as a truck, so the "Truck" bar is saturated. The quiet signal bar is very low, which is good since it means that the neural network thinks that it's definitely not quiet. The earthquake bar has some magnitude, since earthquake signals and trucks have some degree of linear non-separability.

How has this been performing? Firstly, all of the seismometer channels have the same classification readout, which is good. Last night, all of the classes were "quiet", with an "earthquake" which occurred when Den jumped around GUR1 to simulate an EQ. This morning it was on "truck" as expected. The filters are still not fine enough to detect individual trucks, but I will continue to monitor the performance over the coming days.

If anyone has ideas on how better to represent this information, please let me know. This was the first thing that came into my head that would work with my MEDM monitor options, and I'm open to suggestions!

  7223   Sat Aug 18 01:40:09 2012   Masha   Configuration   PEM   Online Seismic Noise Classification Widget

I added a widget to the C1PEM_OVERVIEW MEDM screen. The screen shows the nine seismometer channels (GUR1, GUR2, and STS1 X, Y, and Z), the current signal class in dark red, and the overall confidence in the classification, as Rana suggested. The confidence indication thresholds range from 0.1 to 0.9, in intervals of 0.1. Basically, if a signal class is completely dark red, and the other two classes show only white, or, better yet, nothing at all, this means that we have a clear classification. If, however, the other regions have some yellow, or even red indicators, this means that we are not very confident in our signal classification.

Classification_Widget.png

This is a screenshot of the widget. The nine seismometer channels are classifying the signal as quiet, which is good both because it's the middle of the night, and because the nine seismometer signals somehow agree (I'd use the word correspond with one another, but that implies a strong level of coherence...). The confidence is high, seeing as there's little indication in the truck and earthquake regions: none whatsoever for truck, meaning that the signal, given our classification method, could not possibly be a truck, and only some in the earthquake region (below 0.1 of the quiet signal classification strength, however), possibly due to low seismic disturbance.

  8515   Tue Apr 30 23:04:23 2013   Jenne   Configuration   RF System   Only 4 25m cables ordered

I have found in the depths of the elog the (original?) list of fibers and lengths that were decided upon:  elog 6535.

In Suresh's elog, we were assuming that POP22 & POP110 would be served by a single PD.  This is still the nominal plan, although we (Rana is maybe still thinking about this in the back of his head?) think that it might not be feasible.  Riju and I were hoping to put a 4th fiber in the tubing so that we wouldn't have to add it later if POP22 & POP110 are eventually 2 separate PDs.  Anyhow, for now, all we have available are 3 fibers for the POX table, so that is what was installed this afternoon.

  1422   Tue Mar 24 13:54:49 2009   Jenne   Update   SUS   Op Levs Centered

ITMX, ITMY, BS, SRM, PRM op levs were all recentered.  ETM's looked okay enough to leave as-is. 

  6601   Thu May 3 22:37:44 2012   Jenne   Update   General   OpLev 90-day trend

After fixing the PRM tonight, all of the oplevs look okay. 

.....except ITMX, whose power looks like it dropped significantly after the CDS upgrade.  To be investigated tomorrow.

Attachment 1: OpLevTrends_90days_Ending3May2012
  6602   Fri May 4 17:44:42 2012   Jenne   Update   General   OpLev 90-day trend

Quote:

After fixing the PRM tonight, all of the oplevs look okay. 

.....except ITMX, whose power looks like it dropped significantly after the CDS upgrade.  To be investigated tomorrow.

 I had a look-see at ITMX's oplev.  I can't see any clipping, so maybe the power is just low?  One thing that was funny, though, is the beam coming directly from the laser.  There is the main, regular beam, and then there is a thin horizontal line of red light also coming straight out of the laser.  I don't know what to do about that, except perhaps put an iris right after the HeNe to block the horizontal part?  I'm not sure that it's doing anything bad to the optic though, since the horizontal part gets clipped by other optics before the beam enters the vacuum, so there is no real irregularity in the beam incident on the QPD.

I realigned the oplev on the QPD, using last night's ITMX alignment + whatever drift it picked up over night, so it may need re-recentering after Xarm is nicely aligned.

  10735   Tue Nov 25 14:52:14 2014   ericq   Update   Optical Levers   OpLev RINs

 At Rana's request, I've made an in-situ measurement of the RIN of all of our OpLevs. PSL shutter closed, 10 mHz BW. The OpLevs are not necessarily centered, but the counts on the darkest quadrant of each QPD are not more than a factor of a few lower than on the brightest quadrant; i.e. I'm confident that the beam is not falling off.

I have not attached that raw data, as it is ~90MB. Instead, the DTT template can be found in /users/Templates/OL/ALL-SUM_141125.xml

Here are the mean and std of the channels as reported by z avg 30 -s (in parentheses, std/mean as an estimate of the RMS RIN):

Channel               Mean [cts]      Std [cts]       Std/Mean
SUS-BS_OLSUM_IN1      1957.02440999   1.09957708641   (5.62e-4)
SUS-ETMX_OLSUM_IN1    16226.5940104   2.25084766713   (1.39e-4)
SUS-ETMY_OLSUM_IN1    6755.87203776   8.07100449176   (1.19e-3)
SUS-ITMX_OLSUM_IN1    6920.07502441   1.4903816992    (2.15e-4)
SUS-ITMY_OLSUM_IN1    13680.9810547   4.71903560692   (3.45e-4)
SUS-PRM_OLSUM_IN1     2333.40523682   1.28749988092   (5.52e-4)
SUS-SRM_OLSUM_IN1     26436.5919596   4.26549117459   (1.61e-4)
 

Dividing each spectrum from DTT by these mean values gives me this plot:

 RIN.pdf

ETMY is the worst offender here...

  10488   Wed Sep 10 14:58:58 2014   Jenne   Update   SUS   OpLev test: New channels

Steve and EricG are moving their oplev test for aLIGO over to the SP table, so that we can have the SRM optical lever back.  

I have pulled out an Ontrak PSM2-10 position sensor and accompanying driver for the sensor.  This, like the POP QPD, has BNC outputs that we can take straight to the ADC.

In the c1pem model I have created 3 new filter modules:  C1:PEM-OLTEST_X, C1:PEM-OLTEST_Y, and C1:PEM-OLTEST_SUM.  I built, installed and restarted the model, and also restarted the daqd process on the frame builder.  On the AA breakout board on the 1X7 rack, these correspond to:

BNC # 29 = OLTEST_X

BNC # 30 = OLTEST_Y

BNC # 31 = OLTEST_SUM

By putting 1Vpp, 0.1Hz into each of these channels one at a time, I see on StripTool that they correspond as I expect.

Everything should be plug-and-play at this point, as soon as Steve is ready with the hardware.

  10496   Thu Sep 11 17:12:42 2014   Steve   Update   SUS   OpLev test: old SP qpd connected

IP POS cable was swapped with old SP-QPD sn222 at the LSC rack.  So there is NO IP POS temporarily.

This QPD (sn222) will be used for the HeNe oplev test for aLIGO.

 

  9349   Tue Nov 5 19:39:27 2013   Jenne   Update   LSC   OpLev time series

[Rana, Jenne]

We looked at the time series for all the oplevs except the BS, from last Tuesday night, during a time when we were building up the power in the arms.  We conclude from a 400 second stretch of data that there is no discernible difference in the amount of motion of any optic between when the cavities are at medium power and when they're at low power.  Note, however, that we don't have such a nice stretch of data for the really high powers, so the maximum arm power in these plots is around 5.  Both the TRX and TRY signals look fairly stationary up to powers of 1 or 2, but once you get to 4 or 5, the power fluctuations are much more significant.  So, since this isn't caused by any optic moving more, perhaps it's just that we're more sensitive to optic motion when we're closer to resonance in the arms.

However, from this plot, it looks like ETMY is moving much more than any other optic.  On the other hand, ETMY has not ever been calibrated (there's an arbitrary 300 in there for the calibration numbers on the ETMY oplev screen).  So perhaps it's not actually moving any more than the other optics.  We should calibrate the ETM oplevs nicely, so we have some real numbers in there.  ETMX is also only roughly calibrated, relative to the OSEMs.  We should either do the move-the-QPD calibration, or Kakeru-style pitch and yaw some mirrors and look at transmitted power.

Traces on this xml file have been filtered with DTT, using zpk([0],[0.03],1,"n").

OpLevs_during_PRMI_2arms.pdf

 

  7058   Tue Jul 31 15:24:53 2012   Yaakov   Update   STACIS   Open loop gains and block diagram

First, a quick note on the PZT I thought I killed- it was most likely something in the high voltage amplifier that broke, since I put the amplifier in another STACIS with a working y-axis PZT and it still didn't work properly. Conclusion: something in the y-axis amplifier circuitry is broken, not the PZT itself.

Today I retook the open loop gains in the X and Z axes (the Y axis is out of commission for now, see above). With the loop open, I input a swept sine signal from 0.1 to 100 Hz, and measure the output of the geophones. This way, all the transfer functions that are present in the closed loop are present here as well: the transfer functions of the physical STACIS, the geophone pre-amplifier circuit, the high-voltage amplifier, and the PZT actuators.

Here is a block diagram showing what I am measuring, with the various transfer functions in blue boxes (the measurement is their product):

 stacis_block.bmp

x_OL.bmp z_OL.bmp

z_OL.fig

x_OL.fig

These open loop gains show there is gain of at least 10x from 2 to 80 Hz in the z-axis and from 2 to 60 Hz in the x-axis. This is the region I was seeing isolation in when I switched to closed loop, which is consistent. These measurements were taken with all the pots in the geophone preamplifier set very low, so more gain (and thus isolation) is hypothetically possible if I find a way to stop the horizontal axes from becoming unstable at higher gains. There is unity gain at around 0.5 Hz and 100 Hz for the z-axis, but the phase is nowhere near 180 deg. at these points, so there shouldn't be instability due to this. The peak at around 15 Hz is consistent with old records of the STACIS open loop gain.

  7061   Tue Jul 31 19:34:55 2012   Koji   Update   STACIS   Open loop gains and block diagram

With your definition of the open loop gain, G = +1 is the condition for a singularity in the closed-loop transfer function 1/(1-G).

But this is not the sole criterion for loop stability.
Basically, the closed-loop transfer function should not have "unphysical" poles.
For more about loop instability, you should refer to stability criteria in the literature, such as Nyquist's stability criterion.

Both of the X and Z loops look unstable with the current gain.

  2203   Sat Nov 7 23:50:45 2009   Haixing   Update   General   Open-loop transfer function of the magnetic levitation system

I measured the open-loop transfer function of the magnetic levitation system.

The schematic block diagram for this measurement is the following:

transfer_function_meas_bd.PNG

I injected a signal at a level of 20 mV between the two preamplifiers, and the corresponding open-loop transfer function is given by B/A. I took a picture of the resulting measurement, because I encountered some difficulties saving the data to the computer via the wireless network.

The Bode plots of the transfer function shown on the screen are the following:

Transfer_function_meas.jpg

 

I am puzzled by the zero near 10 Hz. I think it should come from the mechanical response function, because there is no zero in the transfer functions of the preamplifier or the coil itself. I am not sure at the moment.

The corresponding configuration of the levitated magnet is:

magnetic_levitation.jpg

  7662   Fri Nov 2 14:37:36 2012   Jenne   Update   General   Open-sided mount - why

Quote:

Quote:

Suprema SS clear-edge mirror mount, 2" diameter, is modified for 40m vacuum use. One left-handed and one right-handed one. Its adjustment screw housing is bronze! It is not ideal for outgassing.

It will be baked and scanned. If it passes we should use it.

We may need these to bring out some pick-off beams.

 I vote against it. We don't know about the grease inside the screw bushings - scans are not everything if adjusting the screw loosens up the grease. If we need more pick-off mirrors, let's just make some of the kind that we already use inside for the 2" optics.

 I think Steve had these prepared in response to my question a few days ago of how badly we need adjustability for the POX/POY mirrors.  We already have cleaned open-sided mounts that have no adjustment screws.  So as long as the beam reflects off the ITMs horizontally (which it should), we can do yaw adjustment by twisting the whole mount.  We don't need super fine yaw adjustment, we just need to get the beam out, so this is probably good enough.

We should put the POY mirror on this open-sided mount (the one without screws) some time.  Perhaps even today.
