ID   Date   Author   Type   Category   Subject
  4885   Sun Jun 26 16:02:12 2011   kiwamu   Update   IOO   Friday MC activity

[Rana / Kiwamu]

 Last Friday we did several things for MC :

   - aligned the incident beam to MC

   - increased the locking gain by 6 dB and modified the auto-locker script accordingly

   - improved the alignment of the beam on the MC_REFLPD photo diode

 


(Motivation)

 In the beginning of the work we wanted to know which RF frequency components are prominent in the reflection from MC.

Since the WFS circuits can accommodate two RF notches, we wanted to determine which frequencies are appropriate for those notches.

For that purpose we searched for unwanted RF components in the reflection.

However, during the work we found several things that needed to be fixed, so we ended up spending most of the time improving the MC locking.

 

(Some notes)

 - Alignment of the incident beam

At the beginning, the reflection from MC was about 2.2 in C1:IOO-REFLDC and the MC was frequently losing lock.

This high reflection seemed to be related to work done by Suresh (#4880).

Rana went to the PSL table and tweaked two input steering mirrors in the zig-zag path, and finally the reflection went down to ~ 0.8 in C1:IOO-REFLDC.

This work made the lock more robust.

 

 - Change of the locking gain

 After the alignment of the incident beam, we started by looking at the time series of the MC_REFLPD signal with an oscilloscope.

We found a significant 30 kHz component. This 30 kHz oscillation was suspected to be a loop oscillation, and indeed it was.

We increased the loop gain by 6 dB and the 30 kHz component disappeared.

The nominal MC locking gain is now 11 dB in C1:IOO-MC_REFL_GAIN. The auto-locker script was modified accordingly.

 

- RF components in the MCREFL signal

After those improvements mentioned above, we started looking at the spectrum of the MCREFL PD using the spectrum analyzer HP8590.

The 29.5 MHz component was the biggest one in the spectrum. Ideally this 29.5 MHz signal should go to zero when MC is locked.

One possible reason for this large 29.5 MHz signal is that the lock point was offset from the resonant point.

We tweaked the offset in the MC lock path using a digital offset, C1:IOO-MC-REFL_OFFSET.

We found an offset at which the 29.5 MHz signal reached a minimum, but it did not go to zero.

 

(works to be done)

More work is needed to investigate the cause of the nonzero 29.5 MHz signal, as well as to determine which RF components should be notched out.

A good starting point would be writing a GPIB interface script so that we can grab spectra from the HP8590 without any pain.
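For reference, here is a minimal sketch of what such an acquisition script could look like in Python with PyVISA. The GPIB address and the HP8590 remote commands (TDF P, TRA?, FA?, FB?) are assumptions to be checked against the analyzer manual, not a tested implementation.

import numpy as np
import pyvisa

# Minimal sketch of an HP8590 spectrum grab over GPIB.
# Assumptions: a PyVISA-visible GPIB adapter, analyzer at GPIB address 18,
# and the standard HP 8590-series remote commands.
rm = pyvisa.ResourceManager()
sa = rm.open_resource("GPIB0::18::INSTR")          # assumed address
sa.timeout = 10000                                 # ms; trace reads can be slow

sa.write("TDF P")                                  # ASCII trace data format (check manual)
trace = np.array(sa.query_ascii_values("TRA?"))    # trace A amplitude points

# Reconstruct the frequency axis from the analyzer's start/stop frequencies
f_start = float(sa.query("FA?"))
f_stop = float(sa.query("FB?"))
freq = np.linspace(f_start, f_stop, len(trace))

np.savetxt("hp8590_spectrum.txt", np.column_stack([freq, trace]),
           header="freq_Hz amplitude_dBm")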

  9641   Sun Feb 16 17:40:11 2014   Koji   Update   LSC   Friday Night ALS

I wanted to try common/differential ALS on Friday evening. I tried ALS using the LSC servo, but this was not successful.
The usual ALS servo in the ALS model works without problems, so the trouble might be coming from the shape of the servo filter.
The ALS one has a 1:1000 filter while the LSC one has 10:3000. Or is there some problem in the signal transfer between
ALS and LSC???


- MC:
Slow offset -0.302V

- IR:
TRX=1.18 / TRY=1.14, XARM Servo gain = 0.25 / YARM Servo gain = 0.10

- Green Xarm:
GTRX without PSL green 0.562 / with PSL green 0.652 -> improved up to 0.78 by ASX and tweaking of the PZTs
Beat note found at SLOW OFFSET +15525
Set the beat note as +SLOW OFFSET gives +BEAT FREQ

- Green Yarm:
GTRY without PSL green 0.717 / with PSL green 1.340
Beat note found at SLOW OFFSET -10415
Set the beat note as +SLOW OFFSET gives -BEAT FREQ

- BEAT X -10dBm on the RF analyzer@42.5MHz / Phase tracker Qout = 2300 => Phase tracking loop gain 80 (Theoretical UGF = 2300/180*Pi*80 = 3.2kHz)
- BEAT Y -22dBm on the RF analyzer@69.0MHz / Phase tracker Qout = 400 => Phase tracking loop gain 300 (Theoretical UGF = 2.1kHz)

Transfer function between ALSX/Y and POX/Y11I @560Hz excitation of ETMX
POX11I/ALSX = 54.7dB (~0deg)
POY11I/ALSY = 64.5dB (~180deg)

POX11I calibration:

ALSX[cnt]*19230[Hz/cnt] = POX11I[cnt]/10^(54.7/20)*19230[Hz/cnt]
= 35.4 [Hz/cnt] POX11I [cnt] (Hz in green frequency)

35.4 [Hz/cnt]/(2.99792458e8/532e-9 [Hz]) * 37.8 [m] = 2.37e-12 [m/cnt] => 4.2e11 [cnt/m] (c.f. Ayaka's number in ELOG #7738 6.7e11 cnt/m)

POY11I calibration:

ALSY[cnt]*19230[Hz/cnt] = POY11I[cnt]/10^(64.5/20)*19230[Hz/cnt]
= 11.5 [Hz/cnt] POY11I [cnt] (Hz in green frequency)

11.5 [Hz/cnt]/(2.99792458e8/532e-9 [Hz]) * 37.8 [m] = 7.71e-13 [m/cnt] => 1.3e12 [cnt/m] (c.f. Ayaka's number in ELOG #7738 9.5e11 cnt/m)
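The count-to-meter conversions above can be reproduced with a few lines of Python; this just repeats the arithmetic from this entry (measured transfer functions in dB, 19230 Hz/cnt phase-tracker calibration, 37.8 m arm length, 532 nm green light):

# Reproduces the POX11I / POY11I calibrations above (all numbers from this entry).
c = 2.99792458e8           # speed of light [m/s]
nu_green = c / 532e-9      # green light frequency [Hz]
L_arm = 37.8               # arm length [m]
als_cal = 19230            # ALS phase-tracker calibration [Hz/cnt]

for name, tf_db in [("POX11I", 54.7), ("POY11I", 64.5)]:
    hz_per_cnt = als_cal / 10**(tf_db / 20)    # green Hz per POX/POY count
    m_per_cnt = hz_per_cnt / nu_green * L_arm  # d(nu)/nu = dL/L
    print(f"{name}: {hz_per_cnt:.1f} Hz/cnt -> {m_per_cnt:.2e} m/cnt"
          f" ({1.0 / m_per_cnt:.1e} cnt/m)")

# Expected: POX11I ~ 35.4 Hz/cnt -> 2.37e-12 m/cnt (4.2e+11 cnt/m)
#           POY11I ~ 11.5 Hz/cnt -> 7.7e-13 m/cnt (1.3e+12 cnt/m)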

  7334   Tue Sep 4 11:32:58 2012   Jenne   Update   Locking   Friday in-vac work

Elog re: Friday's work

Adjusted PZT2 so we're hitting the center of PR2. 

Noticed that the beam centering target is too low by a few mm, since the OSEM set screw holes that it mounts to are lower than the center line of the optic.  This meant that while we were hitting the center of PR2, the beam was half clipped by PRM's centering target.  We removed the target to confirm that the beam is really centered on PR2.

Checked the beam on PR3 - it looked fine.  There had been concern last week that PR2 was severely pitched forward, but this turned out to be an effect of the PRM centering target being too low - we had to shoot the beam downward to get through the hole, the beam then continued downward to hit the bottom of PR2, and so the beam was falling off the bottom of PR3.  But when we actually centered the beam on PR2, things looked fine on PR3. 

Checked that the beam approximately goes through the beam splitter.  Again, the targets are too low, and these 45 deg targets' holes are smaller than the 0 deg targets, so we don't see any beam going through the target, since the beam is hitting the target higher than the hole.  The beam looked left/right like it was pretty close to the hole, but it was hard to tell since the angle is bad, and I'm not infinitely tall.  We should check again to make sure that the beam is going through properly, and we're not clipping anywhere.  I'll need help from a height-advantaged person for this.

Checked that the beam is hitting the center of the ITMY, as best we can see by using an IR card at the back of the optic.  We didn't try reaching around to put a target on the front side. 

We were debating whether it would be worth it to open ETMY this week, to check that the beam transmitted through the BS hits the center of ETMY.

We also took a quick look around the AS optics, but since that depends on BS/ITMX alignment, we weren't sure how to proceed.  We need a plan for this part.  All suspended optics were restored to their last good alignment, but we haven't tried locking MICH or anything to confirm the alignment. 

To-do list: check that the beam between the BS and ITMY is not clipping anywhere on the ITMY table, and check for clipping on the POY optics.  Also, the oplev is clipping on the cable holder thing on the table - this needs to be moved.  .....other?

  4884   Sat Jun 25 06:09:38 2011   kiwamu   Update   LSC   Friday locking

I was able to measure the sensing matrix in the PRMI configuration.

The results will be posted later.

  1909   Sat Aug 15 05:08:55 2009   Yoichi   Update   Locking   Friday night locking
Summary: DD hand off fails for DRFPMI.

Tonight, I did a lot of house keeping work.

(1) I noticed that the reference cavity (RC) was locked to TEM10.
This was probably the reason why we had to increase the FSS common gain.
I re-locked the RC to TEM00. Now the common gain value is back to the original.

(2) The MC WFS did not engage. I found that c1dcuepics had the /cvs/cds mounting problem.
I rebooted it. Then MC WFS started working.

(3) After checking that the MC WFS QPDs are centered for direct reflection (the MZ half fringe method),
I locked the MC and tweaked the mirror alignment (mainly MC3) to offload the WFS feedback signals.
Now the MC locks to TEM00 robustly.

(4) Since the MC mirror alignment has been touchy recently, I did not like the idea of mis-aligning MC2
when you do the LSC PD offset adjustment. So I modified the LSCoffset script so that it will close
the PSL shutter instead of mis-aligning MC2.

(5) I changed the PD11_Q criteria for success in the alignment scripts because PD11_Q is now lower
than before due to the lower laser power.

(6) After today's bootfest, some EPICS values were not properly restored. Some of the PD gains were
mismatched between I and Q. I corrected these with the help of conlog.

(7) By checking the open loop TFs, I found that the short DOFs have significantly lower UGFs than before,
probably due to the lower laser power. I increased the gains of MICH, PRCL and SRCL by a factor of 2 for
the full configuration.
For the DRM configuration the changes I made were:

PRC -0.15 -> -0.3
SRC 0.2 -> 0.6
MICH 0.5 -> 0.5

(8) I locked the DRFPMI with arm offsets, then adjusted the demodulation phases of PD6, PD7, PD8 and PD9 (DD PDs)
to minimize the offsets in the error signals while locked with the single-demodulation signals.

Change log:
PD6_PHASE 201 -> 270
PD7_PHASE 120 -> 105
PD8_PHASE 131 -> 145
PD9_PHASE -45 -> -65


(9) I ran senseDRM to get the sensing Matrix for the short DOFs using DD signals in DRM configuration.

(10) Still the DD hand off fails for DRFPMI. It succeeds for DRM.
  61   Sun Nov 4 23:55:24 2007   rana   Update   IOO   Friday's In-Vac work
On Friday morning when closing up, we noticed that we could not get the MC to flash any modes.
We tracked this down to a misalignment of MC3. Rob went in and noticed that the stops were
still touching. Even after backing those off, the beam from MC3 was hitting the east edge of
the MC tube within 12" of MC3.

This implied a misalignment of MC of ~5 mrad, which is quite
large. In the end, our best guess is that either I didn't put the indicator blocks in the
right place or the MC3 tower was not slid all the way back into place. Since there
is such strong stickiness between the table and the base of the tower, it's easy to
imagine the tower was misplaced.

So we looked at the beam on MC2 and twisted the MC3 tower. This got the beam back onto the
MC2 cage and required ~1/3 of the MC3 bias range to get the beam onto the center. We used
a good technique for finding that accurately: put an IR card in front of MC2 and then look
in from the south viewport of the MC2 chamber to eyeball the spot relative to the OSEMs.

Hitting MC2 in the middle instantly got us multiple round trips of the beam so we decided
to close up. First thing Monday we will put on the MC1/MC3 access connector and then
pump down.


It's possible that the MC length has changed by ~1-2 mm, so we should remeasure the length
and see if we need to reset frequencies and rephase stuff.
  62   Mon Nov 5 07:29:35 2007   rana   Update   IOO   Friday's In-Vac work
Liyuan recently made some of his pencil-beam scatterometer measurements, measuring not the
BRDF but the total integrated power radiated from each surface point
of some of the spare small optics (e.g. MMT, MC1, etc.).

The results are here on the iLIGO Wiki.

So some of our loss might just be part of the coating.
  4203   Tue Jan 25 22:49:13 2011   Koji   Update   CDS   Front End multiple crash

STATUS:

  • Rebooted c1lsc and c1sus. Restarted fb many times.
  • c1sus seems working.
  • All of the suspensions are damped / Xarm is locked by the green
  • Thermal control for the green is working
  • c1lsc is frozen
  • FB status: c1lsc 0x4000, c1scx/c1scy 0x2bad
  • dataviewer not working

1. DataViewer did not work for the LSC channels (like TRX).

2. Rebooted LSC. There were no instructions for the reboot on the wiki, but somehow the reboot automatically launched the processes.

3. However, rebooting LSC stopped the C1SUS processes from working.

4. Rebooted C1SUS. Despite the rebooting description on the wiki, none of the FE processes came up.

5. Probably I was not patient enough to wait for dorphine_wait to complete? Rebooted C1SUS again.

6. Yes, that was it. This time I waited for everything to come up automatically. Now all of c1pemfe, c1rfmfe, c1mcsfe, c1susfe, and c1x02fe are running.
FB status for c1sus processes all green.

7. burtrestored c1pemfe, c1rfmfe, c1mcsfe, c1susfe, and c1x02fe with the snapshot from Jan 25 12:07, 2010.

8. All of the OSEM filters are off, and the servo switches are incorrectly on. Pushing many buttons to restore the suspensions.

9. I asked Suresh to restore half of the suspensions.

10. The suspensions were restored and damped. However, c1lsc was still frozen.

11. Rebooting c1lsc froze the front ends on c1sus. We redid steps 5 to 10.

12. c1x04 seems to be working. c1lsc, however, is still frozen. We decided to leave C1LSC in this state.

 

  4206   Wed Jan 26 10:58:48 2011   josephb   Update   CDS   Front End multiple crash

Looking at dmesg on c1lsc, it looks like the model is starting, but then eventually times out due to a long ADC wait. 

[  114.778001] c1lsc: cycle 45 time 23368; adcWait 14; write1 0; write2 0; longest write2 0
[  114.779001] c1lsc: ADC TIMEOUT 0 1717 53 181

I'm not sure what caused the timeout, although there were about 20 messages indicating a failed time stamp read from c1sus (it's sending TRX information to c1lsc via the Dolphin connection) before the timeout.

Not seeing any other obvious error messages, I killed the dead c1lsc model by typing:

sudo rmmod c1lscfe

I then tried starting just the front end model again by going to the /opt/rtcds/caltech/c1/target/c1lsc/bin/ directory and typing:

sudo insmod c1lscfe.ko

This started up just the FE again (I didn't use the restart script because the EPICS processes were running fine, since we had non-white channels).  At the moment, c1lsc is running and I see green lights and 0x0 for the FB0 status on the C1LSC_GDS_TP screen.

At this point I'm not sure what caused the timeout.  I'll be adding some more troubleshooting steps to the wiki though.  Also, c1scx and c1scy are probably in need of a restart to get them properly sync'd to the framebuilder.

I did a quick test on dataviewer and can see LSC channels such as C1:LSC-TRX_IN1, as well other channels on C1SUS such as BS sensors channels.

Quote:

STATUS:

  • Rebooted c1lsc and c1sus. Restarted fb many times.
  • c1sus seems working.
  • All of the suspensions are damped / Xarm is locked by the green
  • Thermal control for the green is working
  • c1lsc is frozen
  • FB status: c1lsc 0x4000, c1scx/c1scy 0x2bad
  • dataviewer not working 

 

  4207   Wed Jan 26 12:03:45 2011   Koji   Update   CDS   Front End multiple crash

This is definitely nice magic to know, as rebooting causes too much hassle.

Also, you and I should spend an hour in the afternoon adding the suspension switches to the burt requests.

Quote:

I killed the dead c1lsc model by typing:

sudo rmmod c1lscfe

I then tried starting just the front end model again by going to the /opt/rtcds/caltech/c1/target/c1lsc/bin/ directory and typing:

sudo insmod c1lscfe.ko

This started up just the FE again

 

  3837   Mon Nov 1 15:35:41 2010   josephb   Update   CDS   Front end USR and CPU times now recorded by DAQ

Problem:

We have no record of how long the CPUs are taking to perform a cycle's worth of computation

Solution:

I added the following channels to the various slow DAQ configuration files in /opt/rtcds/caltech/c1/chans/daq/

IOO_SLOW.ini:[C1:FEC-34_USR_TIME]
IOO_SLOW.ini:[C1:FEC-34_CPU_METER]
IOP_SLOW.ini:[C1:FEC-20_USR_TIME]
IOP_SLOW.ini:[C1:FEC-20_CPU_METER]
IOP_SLOW.ini:[C1:FEC-33_USR_TIME]
IOP_SLOW.ini:[C1:FEC-33_CPU_METER]
MCS_SLOW.ini:[C1:FEC-36_USR_TIME]
MCS_SLOW.ini:[C1:FEC-36_CPU_METER]
RMS_SLOW.ini:[C1:FEC-37_USR_TIME]
RMS_SLOW.ini:[C1:FEC-37_CPU_METER]
SUS_SLOW.ini:[C1:FEC-21_USR_TIME]
SUS_SLOW.ini:[C1:FEC-21_CPU_METER]

 

Notes:

To restart the daqd code, simply kill the running process.  It should restart automatically.  If it appears not to have started, check the /opt/rtcds/caltech/c1/target/fb/restart.log file and the /opt/rtcds/caltech/c1/target/fb/logs/daqd.log.xxxx files.  If you made a mistake in the DAQ channels and it's complaining, fix the error and then restart init on the fb machine by running "sudo /sbin/init q"

  4312   Thu Feb 17 11:49:48 2011   josephb   Update   CDS   Front end start/stop scripts go to /scripts/FE again

I modified the core/advLigoRTS/Makefile to once again place the startc1SYS and killc1SYS scripts in the scripts/FE/ directory.

It had been reverted in the SVN update.

  8858   Tue Jul 16 15:22:27 2013   manasa   Update   CDS   Front ends back up

c1sus, c1ioo, c1iscex and c1iscey were down. Why? I was trying to lock the arm and I found that around this time, several computers stopped working mysteriously. Who was working near the computer racks at this time???

I did an ssh into each of these machines and rebooted them with sudo shutdown -r now

But then I forgot / neglected/ didn't know to bring back some of the SLOW Controls computers because I am new here and these computers are OLD. Then Rana told me to bring them back and then I ignored him to my great regret. As it turns out he is very wise indeed as the legends say.

So after a while I did a Burtgooey restore of c1susaux (one of the OLD & SLOW ones) from 03:07 this morning. This brought back the IMC pointing and it locked right away as Rana foretold.

Then, I again ignored his wise and very precious advice much to my future regret and the dismay of us all and the detriment of the scientific enterprise overall.

Later however, I was forced to return to the burtgooey / SLOW controls adventure. But what to restore? Where are these procedures? If only we had some kind of electronics record keeping system or software. Sort of like a book. Made of electronics. Where we could LOG things....

But then, again, Rana came to my rescue by pointing out this wonderful ELOG thing. Huzzah! We all danced around in happiness at this awesome discovery!!

But wait, there was more....not only do we have an ELOG. But we also have a thing called WIKI. It has been copied from the 40m and developed into a thing called Wikipedia for the public domain. Apparently a company called Google is also thinking about copying the ELOG's 'find' button.

When we went to the Wiki, we found a "Computer Restart Procedures" page which was full of all kinds of wondrous advice, but unfortunately none of it helped me in my SLOW Controls quest.

 

Then I went to the /cvs/cds/caltech/target/ area and started to (one by one) inspect all of the targets to see if they were alive. And then I burtgooey'd some of them (c1susaux) ?? And then I thought that I should update our 'Computer Restart Procedures' wiki page and so I am going to do so right now ??

And then I wrote this elog.

 

  12827   Mon Feb 13 19:44:55 2017   Lydia   Update   IMC   Front panel for 29.5 MHz amplifier box

I made a tentative front panel design for the newly installed amplifier box. I used this chassis diagram to place the holes for attaching it. I just made the dimensions match the front of the chassis rather than extending out to the sides since the front panel doesn't need to screw into the rack; the chassis is mounted already with separate brackets. For the connector holes I used a caliper to measure the feedthroughs I'm planning to use and added ~.2 mm to every dimension for clearance, because the front panel designer didn't have their dimensions built in. Please let me know if I should do something else. 

The input and coupled output will be SMA connectors since they are only going to the units directly above and below this one. The main output to the EOM is the larger connector with better shielded cables. I also included a hole for a power indicator LED. 

EDIT: I added countersinks for 4-40 screws on all the screw clearance holes. 

Johannes, if you're going to be putting a front panel order in soon, please include this one. 

Also, Steve, I found a caliper in the drawer with a dead battery and the screws to access it were in bad shape- can this be fixed? 

 

Attachment 1: rfAmp.pdf
  12861   Wed Mar 1 21:15:40 2017   Lydia   Update   IMC   Front panel for 29.5 MHz amplifier box


I installed the front panel today. While I had the box out I also replaced the fast decoupling capacitor with a 0.1 uF ceramic one. I made SMA cables to connect to the feedthroughs and amplifier, trying to keep the total lengths as close as possible to the cables that were there before, to avoid destroying the demod phases Gautam had found. I didn't put in indicator lights in the interest of getting the mode cleaner operational again ASAP. 

I turned the RF sources back on and opened the PSL shutter. MC REFL was dark on the camera; people were taking pictures of the PD face today so I assume it just needs to be realigned before the mode cleaner can be locked again. 

I've attached a schematic for what's in the box, and labeled the box with a reference to this elog. 

Attachment 1: RF_amp_(1).pdf
  12862   Wed Mar 1 23:56:09 2017   gautam   Update   IMC   Front panel for 29.5 MHz amplifier box

The alignment wasn't disturbed for the photo-taking - I just re-checked that the spot is indeed incident on the MC REFL PD. MC REFL appeared dark because I had placed a physical beam block in the path to avoid accidental PSL shutter opening to send a high power beam during the photo-taking. I removed this beam block, but MC wouldn't lock. I double checked the alignment onto the MC REFL PD, and verified that it was ok.

Walking over to 1X1, I noticed that the +24V Sorensen, which should be pushing 2.9A of current when our new 29.5MHz amplifier is running, was displaying 2.4A. This suggests the amplifier is not being powered. I toggled the power switch at the back and noticed no difference in either the MC locking behaviour or the current draw from the Sorensen.

To avoid driving a possibly un-powered RF amplifier, I turned off the Marconi and the 29.5MHz source. I can't debug this anymore tonight so I'm leaving things in this state so that Lydia can check that her box works fine...

Quote:

I turned the RF sources back on and opened the PSL shutter. MC REFL was dark on the camera; people were taking pictures of the PD face today so I assume it just needs to be realigned before the mode cleaner can be locked again. 

 

  12865   Thu Mar 2 20:32:18 2017   Lydia   Update   IMC   Front panel for 29.5 MHz amplifier box

[gautam, lydia]

I pulled out the box and found the problem: the +24 V input to the amplifier was soldered messily and shorted to ground. So I resoldered it and tested the box on the bench (drove it with the Marconi and checked on a scope that the gain was correct). The short had also blown the fuse where the +24 V power is distributed, so I replaced it. The box is reinstalled and the mode cleaner is locking again with the WFS turned on.

Since I tried to keep the cable lengths the same, the demod phases shouldn't have changed significantly since the amplifier was first installed. Gautam and I checked this on a scope and made sure the PDH signals were all in the I quadrature. In the I vs. Q plot, we did also see large loops presumably corresponding to higher order mode flashes.

Quote:

Walking over to the 1X1, I noticed that the +24V Sorensen that should be pushing 2.9A of current when our new 29.5MHz amplifier is running, was displaying 2.4A. This suggests the amplifier is not being powered. I toggled the power switch at the back and noticed no difference in either the MC locking behaviour or the current draw from the Sorensen.

 

 

  16167   Fri May 28 11:16:21 2021   Jon   Update   CDS   Front-End Assembly and Testing

An update on recent progress in the lab towards building and testing the new FEs.

1. Timing problems resolved / FE BIOS changes

The previously reported problem with the IOPs losing sync after a few minutes (16130) was resolved through a change in BIOS settings. However, there are many required settings and it is not trivial to get these right, so I document the procedure here for future reference.

The CDS group has a document (T1300430) listing the correct settings for each type of motherboard used in aLIGO. All of the machines received from LLO contain the oldest motherboards: the Supermicro X8DTU. Quoting from the document, the BIOS must be configured to enforce the following:

• Remove hyper-threading so the CPU doesn't try to run stuff on the idle core, as hyperthreading simulates two cores for every physical core.
• Minimize any system interrupts from hardware, such as USB and Serial Ports, that might get through to the ‘idled’ core. This is needed on the older machines.
• Prevent the computer from reducing the clock speed on any cores to ‘save power’, etc. We need to have a constant clock speed on every ‘idled’ CPU core.

I generally followed the T1300430 instructions but found a few adjustments were necessary for diskless and deterministic operation, as noted below. The procedure for configuring the FE BIOS is as follows:

  1. At boot-up, hit the delete key to enter the BIOS setup screen.
  2. Before changing anything, I recommend photographing or otherwise documenting the current working settings on all the subscreens, in case for some reason it is necessary to revert.
  3. T1300430 assumes the process is started from a known state and lists only the non-default settings that must be changed. To put the BIOS into this known state, first navigate to Exit > Load Failsafe Defaults > Enter.
  4. Configure the non-default settings following T1300430 (Sec. 5 for the X8DTU motherboard). On the IPMI screen, set the static IP address and netmask to their specific assigned values, but do set the gateway address to all zeros as the document indicates. This is to prevent the IPMI from trying to initiate outgoing connections.
  5. For diskless booting to continue to work, it is also necessary to set Advanced > PCI/PnP Configuration > Load Onboard LAN 1 Option Rom > Enabled.
  6. I also found it was necessary to re-enable IDE direct memory access and WHEA (Windows Hardware Error Architecture) support. Since these machines have neither hard disks nor Windows, I have no idea why these are needed, but I found that without them, one of the FEs would hang during boot about 50% of the time.
    • Advanced > PCI/PnP configuration > PCI IDE BusMaster  > Enabled.
    • Advanced > ACPI Configuration > WHEA Support > Enabled.

After completing the BIOS setup, I rebooted the new FEs about six times each to make sure the configuration was stable (i.e., would never hang during boot).

2. User models created for FE testing

With the timing issue resolved, I proceeded to build basic user models for c1bhd and c1sus2 for testing purposes. Each one has a simple structure where M ADC inputs are routed through IIR filters to an output matrix, which forms linear signal combinations that are routed to N DAC outputs. This is shown in Attachment 1 for the c1bhd case, where the signals from a single ADC are conditioned and routed to a single 18-bit DAC. The c1sus2 case is similar; however the Contec BO modules still needed to be added to this model.
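To make the data flow concrete, here is an offline Python illustration of that routing (ADC inputs -> per-channel IIR filters -> output matrix -> DAC outputs). The filter coefficients, matrix values, channel counts, and model rate are placeholders for illustration, not the settings in the actual c1bhd/c1sus2 models.

import numpy as np
from scipy.signal import lfilter

# Placeholder sizes: M ADC inputs routed to N DAC outputs (not the real counts).
fs = 16384                       # assumed user-model rate [Hz]
M, N = 32, 16

adc = np.random.randn(M, fs)     # one second of fake ADC data
b, a = [0.1, 0.1], [1.0, -0.8]   # placeholder IIR filter (simple low-pass)

filtered = lfilter(b, a, adc, axis=1)   # per-channel conditioning filters
out_matrix = np.zeros((N, M))
out_matrix[:, :N] = np.eye(N)           # example routing: pass the first N channels through

dac = out_matrix @ filtered             # linear combinations -> DAC outputs
print(dac.shape)                        # (16, 16384)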

The FEs are now running two models each: the IOP model and one user model. The assigned parameters of each model are documented below.

Model Host CPU DCUID Path
c1x06 c1bhd 1 23 /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl
c1x07 c1sus2 1 24 /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl
c1bhd c1bhd 2 25 /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl
c1sus2 c1sus2 2 26 /opt/rtcds/userapps/release/sus/c1/models/c1sus2.mdl

The user models were compiled and installed following the previously documented procedure (15979). As shown in Attachment 2, all the RTS processes are now working, with the exception of the DAQ server (for which we're still awaiting hardware). Note that these models currently exist only on the cloned copy of the /opt/rtcds disk running on the test stand. The plan is to copy these models to the main 40m disk later, once the new FEs are ready to be installed.

3. AA and AI chassis installed

I installed several new AA and AI chassis in the test stand to interface with the ADC and DAC cards. This includes three 16-bit AA chassis, one 16-bit AI chassis, and one 18-bit AI chassis, as pictured in Attachment 3. All of the AA/AI chassis are powered by one of the new 15V DC power strips connected to a bench supply, which is housed underneath the computers as pictured in Attachment 4.

These chassis have not yet been tested, beyond verifying that the LEDs all illuminate to indicate that power is present.

Attachment 1: c1bhd.png
c1bhd.png
Attachment 2: gds_tp.png
gds_tp.png
Attachment 3: teststand.jpeg
teststand.jpeg
Attachment 4: bench_supply.jpeg
bench_supply.jpeg
  16185   Sun Jun 6 08:42:05 2021   Jon   Update   CDS   Front-End Assembly and Testing

Here is an update and status report on the new BHD front-ends (FEs).

Timing

The changes to the FE BIOS settings documented in [16167] do seem to have solved the timing issues. The RTS models ran for one week with no more timing failures. The IOP model on c1sus2 did die due to an unrelated "Channel hopping detected" error. This was traced back to a bug in the Simulink model, where two identical CDS parts were both mapped to ADC_0 instead of ADC_0/1. I made this correction and recompiled the model following the procedure in [15979].

Model naming standardization

For lack of a better name, I had originally set up the user model on c1sus2 as "c1sus2.mdl". This week I standardized the name to follow the three-letter subsystem convention, as four letters lead to some inconsistency in the naming of the auto-generated MEDM screens. I renamed the model c1sus2.mdl -> c1su2.mdl. The updated table of models is below.

Model Host CPU DCUID Path
c1x06 c1bhd 1 23 /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl
c1x07 c1sus2 1 24 /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl
c1bhd c1bhd 2 25 /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl
c1su2 c1su2 2 26 /opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl

Renaming an RTS model requires several steps to fully propagate the change, so I've documented the procedure below for future reference.

On the target FE, first stop the model to be renamed:

controls@c1sus2$ rtcds stop c1sus2

Then, navigate to the build directory and run the uninstall and cleanup scripts:

controls@c1sus2$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1sus2$ make uninstall-c1sus2
controls@c1sus2$ make clean-c1sus2

Unfortunately, the uninstall script does not remove every vestige of the old model, so some manual cleanup is required. First, open the file /opt/rtcds/caltech/c1/target/gds/param/testpoint.par and manually delete the three-line entry corresponding to the old model:

hostname=c1sus2
system=c1sus2
[C-node26]

If this is not removed, reinstallation of the renamed model will fail because its assigned DCUID will appear to already be in use. Next, find all relics of the old model using:

controls@c1sus2$ find /opt/rtcds/caltech/c1 -iname "*sus2*"

and manually delete each file and subdirectory containing the "sus2" name. Finally, rename, recompile, reinstall, and relaunch the model:

controls@c1sus2$ cd /opt/rtcds/userapps/release/sus/c1/models
controls@c1sus2$ mv c1sus2.mdl c1su2.mdl
controls@c1sus2$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1sus2$ make c1su2
controls@c1sus2$ make install-c1su2
controls@c1sus2$ rtcds start c1su2

Sitemap screens

I used a tool developed by Chris, mdl2adl, to auto-generate a set of temporary sitemap/model MEDM screens. This package parses each Simulink file and generates an MEDM screen whose background is an .svg image of the Simulink model. Each object in the image is overlaid with a clickable button linked to the auto-generated RTS screens. An example of the screen for the C1BHD model is shown in Attachment 1. Having these screens will make the testing much faster and less user-error prone.

I generated these screens following the instructions in Chris' README. However, I ran this script on the c1sim machine, where all the dependencies including Matlab 2021 are already set up. I simply copied the target .mdl files to the root level of the mdl2adl repo, ran the script (./mdl2adl.sh c1x06 c1x07 c1bhd c1su2), and then copied the output to /opt/rtcds/caltech/c1/medm/medm_teststand. Then I redefined the "sitemap" environment variable on the chiara clone to point to this new location, so that they can be launched in the teststand via the usual "sitemap" command.

Current status and plans

Is it possible to convert 18-bit AO channels to 16-bit?

Currently, we are missing five 18-bit DACs needed to complete the c1sus2 system (the c1bhd system is complete). Since the first shipment, we have had no luck getting additional 18-bit DACs from the sites, and I don't know when more will become available. So, this week I took an inventory of all the 16-bit DACs available at the 40m. I located four 16-bit DACs, pictured in Attachment 2. Their operational states are unknown, but none were labeled as known not to work.

The original CDS design would call for 40 more 18-bit DAC channels. Between the four 16-bit DACs there are 64 channels, so if only 3/4 of these DACs work we would have enough AO channels. However, my search turned up zero additional 16-bit DAC adapter boards. We could first check whether Rolf or Todd have any spares. If not, I think it would be relatively cheap and fast to have four new adapters fabricated.

DAQ network limitations and plan

To get deeper into the signal-integrity aspect of the testing, it is going to be critical to get the secondary DAQ network running in the teststand. Of all the CDS tools (Ndscope, Diaggui, DataViewer, StripTool), only StripTool can be used without a functioning NDS server (which, in turn, requires a functioning DAQ server). StripTool connects directly to the EPICS server run by the RTS process. As such, StripTool is useful for basic DC tests of the fast channels, but it can only access the downsampled monitor channels. Ian and Anchal are going to carry out some simple DAC-to-ADC loopback tests to the furthest extent possible using StripTool (using DC signals) and will document their findings separately.

We don't yet have a working DAQ network because we are still missing one piece of critical hardware: a 10G switch compatible with the older Myricom network cards. In the older RCG version 3.x used by the 40m, the DAQ code is hardwired to interface with a Myricom 10G PCIe card. I was able to locate a spare Myricom card, pictured in Attachment 3, in the old fb machine. Since it looks like it is going to take some time to get an old 10G switch from the sites, I went ahead and ordered one this week. I have not been able to find documentation on our particular Myricom card, so it might be compatible with the latest 10G switches but I just don't know. So instead I bought exactly the same older (discontinued) model as is used in the 40m DAQ network, the Netgear GSM7352S. This way we'll also have a spare. The unit I bought is in "like-new" condition and will unfortunately take about a week to arrive.

Attachment 1: c1bhd.png
Attachment 2: 16bit_dacs.png
Attachment 3: myricom.png
  16220   Tue Jun 22 16:53:01 2021   Ian MacMillan   Update   CDS   Front-End Assembly and Testing

The channels on both C1BHD and C1SUS2 seem to be frozen: they aren't updating and are holding one value. To fix this, Anchal and I tried:

  • restarting the computers 
    • restarting basically everything including the models
  • Changing the matrix values
  • adding filters
  • messing with the offset 
  • restarting the network ports (Paco suggested this; apparently it worked for him at some point)
  • Checking to make sure everything was still connected inside the case (DAC, ADC, etc..)

I wonder if Jon has any ideas. 

  16224   Thu Jun 24 17:32:52 2021   Ian MacMillan   Update   CDS   Front-End Assembly and Testing

Anchal and I ran tests on the two systems (C1-SUS2 and C1-BHD). Attached are the results and the code and data to recreate them.

We connected one DAC channel to one ADC channel and thus all of the results represent a DAC/ADC pair. We then set the offset to different values from -3000 to 3000 and recorded the measured signal. I then plotted the response curve of every DAC/ADC pair so each was tested at least once.

There are two types of plots included in the attachments

1) a summary plot found on the last pages of the pdf files. This is a quick and dirty way to see if all of the channels are working. It is NOT a replacement for the other plots. It shows all the data quickly but sacrifices precision.

2) An in-depth look at an ADC/DAC pair. Here I show the measured value for a given DC offset. The gain of the system should be 0.5 (put in an offset of 100 and measure 50). I included a line to show where this should be. I also plotted the difference between the 0.5-gain line and the measured data. 

As seen in the plots, the channels saturate beyond about the -2000 to 2000 mark, which is why the difference graph is concentrated on the -2000 to 2000 range. 

Summary: all the channels look to be working; they all show very little deviation from the theoretical gain. 

Note: ADC channel 31 is the timing signal, so it is the only channel that is wildly off. It is not a measurement channel; we just measured it by mistake.
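For reference, a minimal sketch of how one DAC/ADC pair's response data could be checked offline. It assumes the commanded offsets and measured counts are saved as two columns in a text file (the file name is a placeholder); the expected gain of 0.5 and the ±2000-count linear range are taken from the description above.

import numpy as np

# Check one DAC->ADC pair: fit the gain in the linear region and report the
# deviation from the expected 0.5. Two-column input: commanded offset, measured counts.
offset, measured = np.loadtxt("dac_adc_pair.txt", unpack=True)

lin = np.abs(offset) < 2000                      # stay away from saturation
gain, intercept = np.polyfit(offset[lin], measured[lin], 1)
residual = measured[lin] - (gain * offset[lin] + intercept)

print(f"fitted gain = {gain:.4f} (expected 0.5), offset = {intercept:.1f} cnt")
print(f"max deviation from the fit = {np.max(np.abs(residual)):.2f} cnt")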

Attachment 1: C1-SU2_Channel_Responses.pdf
Attachment 2: C1-BHD_Channel_Responses.pdf
Attachment 3: CDS_Channel_Test.zip
  16225   Fri Jun 25 14:06:10 2021   Jon   Update   CDS   Front-End Assembly and Testing

Summary

Here is the final summary (from me) of where things stand with the new front-end systems. With Anchal and Ian's recent scripted loopback testing [16224], all the testing that can be performed in isolation with the hardware on hand has been completed. We currently have no indication of any problem with the new hardware. However, the high-frequency signal integrity and noise testing remains to be done.

I detail those tests and link some DTT templates for performing them below. We have not yet received the Myricom 10G network card being sent from LHO, which is required to complete the standalone DAQ network. Thus we do not have a working NDS server in the test stand, so we cannot yet run any of the usual CDS tools such as Diaggui. Another option would be to just connect the new front-ends to the 40m Martian/DAQ networks and test them there.

Final Hardware Configuration

Due to the unavailability of the 18-bit DACs that were expected from the sites, we elected to convert all the new 18-bit AO channels to 16-bit. I was able to locate four unused 16-bit DACs around the 40m [16185], with three of the four found to be working. I was also able to obtain three spare 16-bit DAC adapter boards from Todd Etzel. With the addition of the three working DACs, we ended up with just enough hardware to complete both systems.

The final configuration of each I/O chassis is as follows. The full setup is pictured in Attachment 1.

Component            C1BHD (Qty Installed)   C1SUS2 (Qty Installed)
16-bit ADC                    1                       2
16-bit ADC adapter            1                       2
16-bit DAC                    1                       3
16-bit DAC adapter            1                       3
16-channel BIO                1                       1
32-channel BO                 0                       6

This hardware provides the following breakdown of channels available to user models:

Channel Type     C1BHD (Channel Count)   C1SUS2 (Channel Count)
16-bit AI*                31                      63
16-bit AO                 16                      48
BO                         0                     192

*The last channel of the first ADC is reserved for timing diagnostics.

The chassis have been closed up and their permanent signal cabling installed. They do not need to be reopened, unless future testing finds a problem.

RCG Model Configuration

An IOP model has been created for each system reflecting its final hardware configuration. The IOP models are permanent and system-specific. When ready to install the new systems, the IOP models should be copied to the 40m network drive and installed following the RCG-compilation procedure in [15979]. Each system also has one temporary user model which was set up for testing purposes. These user models will be replaced with the actual SUS, OMC, and BHD models when the new systems are installed.

The current RCG models and the action to take with each one are listed below:

Model Name Host CPU DCUID Path (all paths local to chiara clone machine) Action
c1x06 c1bhd 1 23 /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl Copy to same location on 40m network drive; compile and install
c1x07 c1sus2 1 24 /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl Copy to same location on 40m network drive; compile and install
c1bhd c1bhd 2 25 /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl Do not copy; replace with permanent OMC/BHD model(s)
c1su2 c1su2 2 26 /opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl Do not copy; replace with permanent SUS model(s)

Each front-end can support up to four user models.

Future Signal-Integrity Testing

Recently, the CDS group has released a well-documented procedure for testing General Standards ADCs and DACs: T2000188. They've also automated the tests using a related set of shell scripts (T2000203). Unfortunately, I don't believe these scripts will work at the 40m, as they require the latest v4.x RCG.

However, there is an accompanying set of DTT templates that could be very useful for accelerating the testing. They are available from the LIGO SVN (log in with username: "first.last@LIGO.ORG"). I believe these can be used almost directly, with only minor updates to channel names, etc. There are two classes of DTT-templated tests:

  1. DAC -> ADC loopback transfer functions
  2. Voltage noise floor PSD measurements of individual cards

The T2000188 document contains images of normal/passing DTT measurements, as well as known abnormalities and failure modes. More sophisticated tests could also be configured, using these templates as a guiding example.

Hardware Reordering

Due to the unexpected change from 18- to 16-bit AO, we are now short on several pieces of hardware:

  • 16-bit AI chassis. We originally ordered five of these chassis, and all are obligated as replacements within the existing system. Four of them are now (temporarily) in use in the front-end test stand. Thus four of the new 18-bit AI chassis will need to be retrofitted with 16-bit hardware.
  • 16-bit DACs. We currently have exactly enough DACs. I have requested a quote from General Standards for two additional units to have as spares.
  • 16-bit DAC adapters. I have asked Todd Etzel for two additional adapter boards to also have as spares. If no more are available, a few more should be fabricated.
Attachment 1: test_stand.JPG
  15872   Fri Mar 5 17:48:25 2021   Jon   Update   CDS   Front-end testing

Today I moved the c1bhd machine from the control room to a new test area set up behind (west of) the 1X6 rack. The test stand is pictured in Attachment 1. I assembled one of the new IO chassis and connected it to the host.

I/O Chassis Assembly

  • LIGO-style 24V feedthrough replaced with an ATX 650W switching power supply
  • Timing slave installed
  • Contec DO-1616L-PE card installed for timing control
  • One 16-bit ADC and one 32-channel DO module were installed for testing

The chassis was then powered on and LED lights illuminated indicating that all the components have power. The assembled chassis is pictured in Attachment 2.

Chassis-Host Communications Testing

Following the procedure outlined in T1900700, the system failed the very first test of the communications link between chassis and host, which is to check that all PCIe cards installed in both the host and the expansion chassis are detected. The Dolphin host adapter card is detected:

07:06.0 PCI bridge: Stargen Inc. Device 0102 (rev 02) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=07, secondary=0e, subordinate=0e, sec-latency=0
    I/O behind bridge: 00002000-00002fff
    Prefetchable memory behind bridge: 00000000c0200000-00000000c03fffff
    Capabilities: [40] Power Management version 2
    Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [60] Express Downstream Port (Slot+), MSI 00
    Capabilities: [80] Subsystem: Device 0000:0000
    Kernel driver in use: pcieport

However the OSS PCIe adapter card linking the host to the IO chassis was not detected, nor were any of the cards in the expansion chassis. Gautam previously reported that the OSS card was not detected by the host (though it was not connected to the chassis then). Even now connected to the IO chassis, the card is still not detected. On the chassis-side OSS card, there is a red LED illuminated indicating "HOST CARD RESET" as pictured in Attachment 3. This may indicate a problem with the card on the host side. Still more debugging to be done.

Attachment 1: image_67203585.JPG
Attachment 2: image_67216641.JPG
Attachment 3: image_17185537.JPG
  15890   Tue Mar 9 16:52:47 2021   Jon   Update   CDS   Front-end testing

Today I continued with assembly and testing of the new front-ends. The main progress is that the IO chassis is now communicating with the host, resolving the previously reported issue.

Hardware Issues to be Resolved

Unfortunately, though, it turns out one of the two (host-side) One Stop Systems PCIe cards sent from Hanford is bad. After some investigation, I ultimately resolved the problem by swapping in the second card, with no other changes. I'll try to procure another from Keith Thorne, along with some spares.

Also, two of the three switching power supplies sent from Livingston (250W Channel Well PSG400P-89) appear to be incompatible with the Trenton BPX6806 PCIe backplanes in these chassis. The power supply cable has 20 conductors and the connector on the board has 24. The third supply, a 650W Antec EA-650, does have the correct cable and is currently powering one of the IO chassis. I'll confirm this situation with Keith and see whether they have any more Antecs. If not, I think these supplies can still be bought (not obsolete).

I've gone through all the hardware we've received, checked against the procurement spreadsheet. There are still some missing items:

  • 18-bit DACs (Qty 14; but 7 are spares)
  • ADC adapter boards (Qty 5)
  • DAC adapter boards (Qty 9)
  • 32-channel DO modules (Qty 2/10 in hand)

Testing Progress

Once the PCIe communications link between host and IO chassis was working, I carried out the testing procedure outlined in T1900700. This performs a series of checks to confirm basic operation/compatibility of the hardware and PCIe drivers. All of the cards installed in both the host and the expansion chassis are detected and appear correctly configured, according to T1900700. In the tree below, there is one ADC, one 16-ch DIO, one 32-ch DO, and one DolphinDX card:

+-05.0-[05-20]----00.0-[06-20]--+-00.0-[07-08]----00.0-[08]----00.0  Contec Co., Ltd Device 86e2
|                               +-01.0-[09]--
|                               +-03.0-[0a]--
|                               +-08.0-[0b-15]----00.0-[0c-15]--+-02.0-[0d]--
|                               |                               +-03.0-[0e]--
|                               |                               +-04.0-[0f]--
|                               |                               +-06.0-[10-11]----00.0-[11]----04.0  PLX Technology, Inc. PCI9056 32-bit 66MHz PCI <-> IOBus Bridge
|                               |                               +-07.0-[12]--
|                               |                               +-08.0-[13]--
|                               |                               +-0a.0-[14]--
|                               |                               \-0b.0-[15]--
|                               \-09.0-[16-20]----00.0-[17-20]--+-02.0-[18]--
|                                                               +-03.0-[19]--
|                                                               +-04.0-[1a]--
|                                                               +-06.0-[1b]--
|                                                               +-07.0-[1c]--
|                                                               +-08.0-[1d]--
|                                                               +-0a.0-[1e-1f]----00.0-[1f]----00.0  Contec Co., Ltd Device 8632
|                                                               \-0b.0-[20]--
\-08.0-[21-2a]--+-00.0  Stargen Inc. Device 0101
                \-00.1-[22-2a]--+-00.0-[23]--
                                +-01.0-[24]--
                                +-02.0-[25]--
                                +-03.0-[26]--
                                +-04.0-[27]--
                                +-05.0-[28]--
                                +-06.0-[29]--
                                \-07.0-[2a]--

Standalone Subnet

Before I start building/testing RTCDS models, I'd like to move the new front ends to an isolated subnet. This is guaranteed to prevent any contention with the current system, or inadvertent changes to it.

Today I set up another of the Supermicro servers sent by Livingston in the 1X6 test stand area. The intention is for this machine to run a cloned, bootable image of the current fb1 system, allowing it to function as a bootserver and DAQ server for the FEs on the subnet.

However, this hard disk containing the fb1 image appears to be corrupted and will not boot. It seems to have been sitting disconnected in a box since ~2018, which is not a stable way to store data long term. I wasn't immediately able to recover the disk using fsck. I could spend some more time trying, but it might be most time-effective to just make a new clone of the fb1 system as it is now.

Attachment 1: image_72192707.JPG
  15924   Tue Mar 16 16:27:22 2021   Jon   Update   CDS   Front-end testing

Some progress today towards setting up an isolated subnet for testing the new front-ends. I was able to recover the fb1 backup disk using the Rescatux disk-rescue utility and successfully booted an fb1 clone on the subnet. This machine will function as the boot server and DAQ server for the front-ends under test. (None of these machines are connected to the Martian network or, currently, even the outside Internet.)

Despite the success with the framebuilder, front-ends cannot yet be booted locally because we are still missing the DHCP and FTP servers required for network booting. On the Martian net, these processes are running not on fb1 but on chiara. And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.

For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success. The repair tool I used to recover the fb1 disk does not find a problem with the chiara disk. However, the chiara disk is an external USB drive, so I suspect there could be a compatibility problem with these old (~2010) machines. Some of them don't even recognize USB keyboards pre-boot-up. I may try booting the USB drive from a newer computer.

Edit: I removed one of the new, unused Supermicros from the 1Y2 rack and set it up in the test stand. This newer machine is able to boot the chiara USB disk without issue. Next time I'll continue with the networking setup.

  15925   Tue Mar 16 19:04:20 2021   gautam   Update   CDS   Front-end testing

Now that I think about it, I may only have backed up the root file system of chiara, and not /home/cds/ (symlinked to /opt/ over NFS). I think we never revived the rsync backup to LDAS after the FB fiasco of 2017, else that'd have been the most convenient way to get files. So you may have to resort to some other technique (e.g. configure the second network interface of the chiara clone to be on the martian network and copy over files to the local disk, and then disconnect the chiara clone from the martian network (if we really want to keep this test stand completely isolated from the existing CDS network) - the /home/cds/ directory is rather large IIRC, but with 2TB on the FB clone, you may be able to get everything needed to get the rtcds system working). It may then be necessary to hook up a separate disk to write frames to if you want to test that part of the system out.

Good to hear the backup disk was able to boot though!

Quote:

And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.

For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success.

  15947   Fri Mar 19 18:14:56 2021   Jon   Update   CDS   Front-end testing

Summary

Today I finished setting up the subnet for new FE testing. There are clones of both fb1 and chiara running on this subnet (pictured in Attachment 2), which are able to boot FEs completely independently of the Martian network. I then assembled a second FE system (Supermicro host and IO chassis) to serve as c1sus2, using a new OSS host adapter card received yesterday from LLO. I ran the same set of PCIe hardware/driver tests as was done on the c1bhd system in 15890. All the PCIe tests pass.

Subnet setup

For future reference, below is the procedure used to configure the bootserver subnet.

  • Select "Network" as highest boot priority in FE BIOS settings
  • Connect all machines to subnet switch. Verify fb1 and chiara eth0 interfaces are enabled and assigned correct IP address.
  • Add c1bhd and c1sus2 entries to chiara:/etc/dhcp/dhcpd.conf:
host c1bhd {
  hardware ethernet 00:25:90:05:AB:46;
  fixed-address 192.168.113.91;
}
host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.92;
}
  • Restart DHCP server to pick up changes:
$ sudo service isc-dhcp-server restart
  • Add c1bhd and c1sus2 entries to fb1:/etc/hosts:
192.168.113.91    c1bhd
192.168.113.92    c1sus2
  • Power on the FEs. If all was configured correctly, the machines will boot.

C1SUS2 I/O chassis assembly

  • Installed in host:
    • DolphinDX host adapter
    • One Stop Systems PCIe x4 host adapter (new card sent from LLO)
  • Installed in chassis:
    • Channel Well 250 W power supply (replaces aLIGO-style 24 V feedthrough)
    • Timing slave
    • Contec DIO-1616L-PE module for timing control

Next time, on to RTCDS model compilation and testing. This will require first obtaining a clone of the /opt/rtcds disk hosted on chiara.

Attachment 1: image_72192707_(1).JPG
Attachment 2: image_50412545.JPG
  15959   Wed Mar 24 19:02:21 2021   Jon   Update   CDS   Front-end testing

This evening I prepared a new 2 TB 3.5" disk to hold a copy of /opt/rtcds and /opt/rtapps from chiara. This is the final piece of setup before model compilation can be tested on the new front-ends. However chiara does not appear to support hot-swapping of disks, as the disk is not recognized when connected to the live machine. I will await confirmation before rebooting it. The new disk is not currently connected.

  15976   Mon Mar 29 17:55:50 2021   Jon   Update   CDS   Front-end testing

Cloning of chiara:/home/cvs underway

I returned today with a beefier USB-SATA adapter, which has an integrated 12 V supply for powering 3.5" disks. I used this to interface a new 6 TB 3.5" disk found in the FE supplies cabinet.

I decided to go with a larger disk and copy the full contents of chiara:/home/cds. Strictly, the FEs only need the RTS executables in /home/cvs/rtcds and /home/cvs/rtapps. However, to independently develop models, the shared matlab binaries in /home/cvs/caltech/... also need to be exposed. And there may be others I've missed.

I began the clone around 12:30 pm today. To preserve bandwidth to the main disk, I am copying not the /home/cds disk directly, but rather its backup image at /media/40mBackup.

Set up of dedicated SimPlant host

Although not directly related to the FE testing, today I added a new machine to the test stand which will be dedicated to running sim models. Chris has developed a virtual cymac which we plan to run on this machine. It will provide a dedicated testbed for SimPlant and other development, and can host up to 10 user models.

I used one of the spare 12-core Supermicro servers from LLO, which I have named c1sim. I assigned it the IP address 192.168.113.93 on the Martian network. This machine will run in a self-contained way that will not depend on any 40m CDS services and also should not interfere with them. However, if there are concerns about having it present on the network, it can be moved to the outside-facing switch in the office area. It is not currently running any RTCDS processes.

Set-up was carried out via the following procedure:

  • Installed Debian 10.9 on an internal 480 GB SSD.
  • Installed cdssoft repos following Jamie's instructions.
  • Installed RTS and Docker dependencies:
    $ sudo apt install cpuset advligorts-mbuf-dkms advligorts-gpstime-dkms docker.io docker-compose
  • Configured scheduler for real-time operation:
    $ sudo /sbin/sysctl kernel.sched_rt_runtime_us=-1
  • Reserved 10 cores for RTS user models (plus one for the IOP model) by adding the following line to /etc/default/grub (a post-reboot check is sketched after this list):
    GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=nohz,domain,1-11 nohz_full=1-11 tsc=reliable mce=off"
    followed by the commands:
    $ sudo update-grub
    $ sudo reboot now
  • Downloaded virtual cymac repo to /home/controls/docker-cymac.
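After the reboot in the grub step above, the core isolation can be confirmed (a sketch, not part of the original setup notes):

$ cat /proc/cmdline                      # should show the isolcpus/nohz_full options
$ cat /sys/devices/system/cpu/isolated   # should list 1-11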

I need to talk to Chris before I can take the setup further.

  15979   Tue Mar 30 18:21:34 2021 JonUpdateCDSFront-end testing

Progress today:

Outside Internet access for FE test stand

This morning Jordan and I ran an 85-foot Cat 6 Ethernet cable from the campus network switch in the office area (on the ligo.caltech.edu domain) to the FE test stand near 1X6. This is to allow the test-stand subnet to be accessed for remote testing, while keeping it invisible to the parallel Martian subnet.

Successful RTCDS model compilation on new FEs

The clone of the chiara:/home/cds disk completed overnight. Today I installed the disk in the chiara clone. The NFS mounts (/opt/rtcds, /opt/rtapps) shared with the other test-stand machines mounted without issue.

Next, I attempted to open the shared Matlab executable (/cvs/cds/caltech/apps/linux64/matlab/bin/matlab) and launch Simulink. The existing Matlab license (/cvs/cds/caltech/apps/linux64/matlab/licenses/license_chiara_865865_R2015b.lic) did not work on this new machine, as these licenses are machine-specific, so I updated the license file. I linked this license to my personal license, so that the machine license for the real chiara would not get replaced. The original license file is saved in the same directory with a *.bak postfix. If this disk is ever used in the real chiara machine, this file should be restored. After the machine license was updated, Matlab and Simulink loaded and allowed model editing.

Finally, I tested RTCDS model compilation on the new FEs using the c1lsc model as a trial case. It encountered one path issue due to the model being located at /opt/rtcds/userapps/release/isc/c1/models/isc/ instead of /opt/rtcds/userapps/release/isc/c1/models/. This seems to be a relic of the migration of the 40m models from the SVN to a standalone git repo. This was resolved by simply symlinking to the expected location:

$ sudo ln -s /opt/rtcds/userapps/release/isc/c1/models/isc/c1lsc.mdl /opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl

The model compilation then succeeded:

controls@c1bhd$ cd /opt/rtcds/caltech/c1/rtbuild/release

controls@c1bhd$ make clean-c1lsc
Cleaning c1lsc...
Done

controls@c1bhd$ make c1lsc
Cleaning c1lsc...
Done
Parsing the model c1lsc...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 28830 s in the
future
make[1]: warning:  Clock skew detected.  Your build may be incomplete.
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/caltech/c1/userapps/release/cds/common/src/cdsToggle.c
/opt/rtcds/userapps/release/cds/c1/src/inmtrxparse.c
/opt/rtcds/userapps/release/cds/common/models/FILTBANK_MASK.mdl
/opt/rtcds/userapps/release/cds/common/models/rtbitget.mdl
/opt/rtcds/userapps/release/cds/common/models/SCHMITTTRIGGER.mdl
/opt/rtcds/userapps/release/cds/common/models/SQRT_SWITCH.mdl
/opt/rtcds/userapps/release/cds/common/src/DB2MAG.c
/opt/rtcds/userapps/release/cds/common/src/OSC_WITH_CONTROL.c
/opt/rtcds/userapps/release/cds/common/src/wait.c
/opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl
/opt/rtcds/userapps/release/isc/c1/models/IQLOCK_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/PHASEROT.mdl
/opt/rtcds/userapps/release/isc/c1/models/RF_PD_WITH_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/UGF_SERVO_40m.mdl
/opt/rtcds/userapps/release/isc/common/models/FILTBANK_TRIGGER.mdl
/opt/rtcds/userapps/release/isc/common/models/LSC_TRIGGER.mdl

Successfully compiled c1lsc
***********************************************
Compile Warnings, found in c1lsc_warnings.log:
***********************************************
[warnings suppressed]

As did the installation:

controls@c1bhd$ make install-c1lsc
Installing system=c1lsc site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1LSC.txt
Installing /opt/rtcds/caltech/c1/target/c1lsc/c1lscepics
Installing /opt/rtcds/caltech/c1/target/c1lsc
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1lsc
/opt/rtcds/caltech/c1/scripts/startc1lsc
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl
-par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_210330_170634.par
-gds_node=42 -site_letter=C -system=c1lsc -host=c1lsc
Installing GDS node 42 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1lsc.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1LSC.ini
Installing Epics MEDM screens
Running post-build script

safe.snap exists

We are ready to start building and testing models.

  2772   Mon Apr 5 13:52:45 2010 AlbertoUpdateComputersFront-ends down. Rebooted

This morning, at about 12, Koji found all the front-ends down.

At 1:45pm rebooted ISCEX, ISCEY, SOSVME, SUSVME1, SUSVME2, LSC, ASC, ISCAUX

Then I burtrestored ISCEX, ISCEY, ISCAUX to April 2nd, 23:07.

The front-ends are now up and running again.

  2376   Thu Dec 10 08:40:12 2009 AlbertoUpdateComputersFront-ends down

I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen but it is in a "bad" status.

I'll go for a big boot fest.

  2378   Thu Dec 10 08:50:33 2009 AlbertoUpdateComputersFront-ends down

Quote:

I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen but it is in a "bad" status.

I'll go for a big boot fest.

Since I wanted to single out the faulting system when these situations occur, I tried to reboot the computers one by one.

1) I reset the RFM Network by pushing the reset button on the bypass switch on the 1Y7 rack. Then I tried to bring C1SOSVME up by power-cycling and restarting it as in the procedure in the wiki. I repeated the procedure a second time but it didn't work. At some point in the restarting process I got the error message "No response from EPICS".
2) I also tried rebooting only C1DCUEPICS but it didn't work: I kept having the same response when restarting C1SOSVME
3) I tried to reboot C0DAQCTRL and C1DCU1 by power cycling their crate; power-cycled and restarted C1SOSVME. Nada. Same response from C1SOSVME.
4) I restarted the framebuilder;  power-cycled and restarted C1SOSVME. Nothing. Same response from C1SOSVME.
5) I restarted the framebuilder, then rebooted C0DAQCTRL and C1DCU, then power-cycled and restarted C1SOSVME. Niente. Same response from C1SOSVME.
 
Then I did the so-called "Nuclear Option", the only solution that so far has proven to work in these circumstances. I executed the steps in the order they are listed, waiting for each step to be completed before moving on to the next one.
0) Switch off: the frame builder, the C0DAQCTRL and C1DCU crate, C1DCUEPICS
1) turn on the frame builder
2) reset the RFM Network switch on 1Y7 (although it's not clear whether this step is really necessary, it costs nothing)
3) turn on C1DCUEPICS
4) turn on the C0DAQCTRL and C1DCU crate
5) power-cycle and restart the single front-ends
6) burt-restore all the snapshots
 
When I tried to restart C1SOSVME by power-cycling it, I still got the same response: "No response from EPICS". But after I reset C1SUSVME1 and C1SUSVME2, I was able to restart C1SOSVME.
 
It turned out that while I was checking the efficacy of the steps of the Grand Reboot to single out the crucial one, I was getting fooled by C1SOSVME's status. C1SOSVME was stuck, hanging on C1SUSVME1 and C1SUSVME2.
 
So the Nuclear Option is still unproven as the only working procedure. It might not be necessary.
 
Maybe restarting BOTH RFM switches, the one in 1Y7 and the one in 1Y6, would be sufficient. Or maybe just power-cycling C0DAQCTRL and C1DCU1 is sufficient. This has to be confirmed the next time we run into the same problem.
  2382   Thu Dec 10 10:01:16 2009 JenneUpdateComputersFront-ends down

All the front ends are back up.  

Quote:

Quote:

I found all the front-ends, except for C1SUSVME1 and C0DCU1, down this morning. DAQAWG shows up green on the C0DAQ_DETAIL screen but it is in a "bad" status.

I'll go for a big boot fest.

Since I wanted to understand once and for all what the faulting system is when these situations occur, I tried to reboot the computers one by one.

1) I reset the RFM Network by pushing the reset button on the bypass switch on the 1Y7 rack. Then I tried to bring C1SOSVME up by power-cycling and restarting it as in the procedure in the wiki. I repeated the procedure a second time but it didn't work. At some point in the restarting process I got the error message "No response from EPICS".
2) I also tried rebooting only C1DCUEPICS but it didn't work: I kept having the same response when restarting C1SOSVME
3) I tried to reboot C0DAQCTRL and C1DCU1 by power cycling their crate; power-cycled and restarted C1SOSVME. Nada. Same response from C1SOSVME.
4) I restarted the framebuilder;  power-cycled and restarted C1SOSVME. Nothing. Same response from C1SOSVME.
5) I restarted the framebuilder, then rebooted C0DAQCTRL and C1DCU, then power-cycled and restarted C1SOSVME. Niente. Same response from C1SOSVME.
 
The following is the so-called "Nuclear Option", the only solution that so far has proven to work in these circumstances. Execute the following steps in the order they are listed, waiting for each step to be completed before moving on to the next one.
0) Switch off: the frame builder, the C0DAQCTRL and C1DCU crate, C1DCUEPICS
1) turn on the frame builder
2) reset the RFM Network switch on 1Y7 (although it's not clear whether this step is really necessary, it costs nothing)
3) turn on C1DCUEPICS
4) turn on the C0DAQCTRL and C1DCU crate
 
One other possibility remains to be explored to avoid the Nuclear Option. And that is to just try to reset both RFM Network switches: the one in 1Y7 and the one in 1Y6.

 

  2383   Thu Dec 10 10:31:18 2009 JenneUpdateComputersFront-ends down

 I burtrestored all the snapshots to Dec 9 2009 at 18:00.

  16336   Thu Sep 16 01:16:48 2021 KojiUpdateGeneralFrozen 2

It happened again. Defrosting required.

Attachment 1: P_20210916_003406_1.jpg
P_20210916_003406_1.jpg
  10756   Thu Dec 4 23:45:30 2014 JenneUpdateCDSFrozen?

[Jenne, Q, Diego]

I don't know why, but everything in EPICS-land froze for a few minutes just now.  I saw it happen yesterday too, but I was bad and didn't elog it.

Anyhow, the arms stayed locked (on IR) for the whole time it was frozen, so the fast things must have still been working.  We didn't see anything funny going on on the frame builder, although that shouldn't have much to do with the EPICS service.  The seismic rainbow on the wall went to zero during the freeze, although the MC and PSL strip charts were still fine.

After a few minutes, while we were still trying to think of things to check, things went back to normal.  We're going to just keep locking for now....
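A quick way to distinguish a frozen EPICS layer from a frame-builder problem next time (a sketch; we did not run this during the freeze) is to time a request for a slow channel from a workstation, e.g.:

$ time caget C1:LSC-OUTPUT_MTRX_9_6   # any slow EPICS channel; a multi-second hang or CA timeout points at the EPICS/CA layer rather than the fast front ends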

  3782   Tue Oct 26 01:53:21 2010 Joonho LeeUpdateElectronicsFunction Generator removed.

Today I worked on how to measure cable impedance directly.

In order to measure the impedance in the RF range, I used a function generator which can generate a 50 MHz signal and was initially connected to the table on the right of the decks.

I am checking the cables so that those with an impedance of 50 or 52 ohm can be replaced by cables with an impedance of 75 ohm.

After I figure out which cables do not have the proper impedance, I will make new cables and substitute them in order to match the impedance, which should lead to a better VIDEO signal.

 

To test the VIDEO cables, I need a function generator that can generate a 50 MHz signal.

In the deck on the right of the PSL table, there was only one such generator, which was connected to the table on the right of the deck.

Therefore, I disconnected it from the cable and took it to the control room to use, since Rana said it was not in use.

Then I tried to work out how to measure the cable impedance directly, but I have not finished yet.

When I finished today's work, I put the generator back on the deck, but I did not reconnect it to the cable it was initially attached to.

 

Next time, I will finalize the practical method for measuring cable impedance, and then I will measure the cables with unknown impedance.

Any suggestion would be appreciated.
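Since suggestions were invited, here is one standard approach (a sketch, not part of the original plan): drive the cable through a known source resistance at 50 MHz and terminate the far end with a variable resistor R_L. The reflected amplitude is governed by the reflection coefficient

    Gamma = (R_L - Z_0) / (R_L + Z_0),

so adjusting R_L until the reflection (or the standing-wave ripple seen at the cable input) vanishes gives R_L = Z_0 directly, which is enough to tell 50/52 ohm cable from 75 ohm cable.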

  8908   Tue Jul 23 16:39:31 2013 KojiUpdateGeneralFull IFO alignment recovered

[Annalisa, Koji]

Full alignment of the IFO was recovered. The arms were locked with the green beams first, and then locked with the IR.

In order to use the ASS with lower power, C1:LSC-OUTPUT_MTRX_9_6 and C1:LSC-OUTPUT_MTRX_10_7 were reduced to 0.05.
This compensates for the gain imbalance between the TRX/Y signals and the A2L component in the arm feedback signals.
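For reference, such a change is typically made either from the LSC output matrix MEDM screen or with the EPICS command-line tools (a sketch, not necessarily the exact commands used):

$ caput C1:LSC-OUTPUT_MTRX_9_6 0.05
$ caput C1:LSC-OUTPUT_MTRX_10_7 0.05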

Although the IFO was aligned, we are not touching the OPLEVs or the green beams to match the vented IFO.

Attachment 1: alignment.png
alignment.png
  8912   Tue Jul 23 20:41:40 2013 gautamConfigurationendtable upgradeFull range calibration and installation of PZT-mounted mirrors

 Given that the green beam is to be used as the reference during the vent, it was decided to first test the PZT mounted mirrors at the X-endtable rather than the Y-endtable as originally planned. Yesterday, I prepared a second PZT mounted mirror, completed the full range calibration, and with Manasa, installed the mirrors on the X-endtable as mentioned in this elog. The calibration constants have been determined to be (see attached plots for the approximate range of actuation):

M1-pitch: 0.1106 mrad/V

M1-yaw: 0.143 mrad/V

M2-pitch: 0.197 mrad/V

M2-yaw: 0.27 mrad/V


Second 2-inch mirror glued to tip-tilt and mounted:

  • The spot sizes on the steering mirrors at the X-end are fairly large, and so two 2-inch steering mirrors were required.
  • The mirrors already glued to the PZTs were a CVI 2-inch and a Laseroptik 1-inch mirror.
  • I prepared another Laseroptik 2-inch mirror (45 degree with HR and AR coatings for 532 nm) and glued it to a PZT mounted in a modified mount as before.
  • Another important point regarding mounting the PZTs: there are two perforated rings (see attached picture) that run around the PZT about 1 cm below the surface on which the mirror is to be glued. The PZT has to be pushed in through the mount till these are clear of the mount, or the actuation will not be as desired. In the first CVI 2-inch mirror, this was not the case, which probably explains the unexpectedly large pitch-yaw coupling that was observed during the calibration [Thanks Manasa for pointing this out].

Full range calibration of PZT:

Having prepared the two steering mirrors, I calibrated them for the full range of input voltages, to get a rough idea of whether the tilt varied linearly and also the range of actuation. 

Methodology:

  • The QPD setup described in my previous elogs was used for this calibration. 
  • The linear range of the QPD was gauged to be where the output voltage lay between -0.5 V and 0.5 V. The calibration constants are as determined during the QPD calibration, details of which are here.
  • In order to keep the spot always in the linear range of the QPD, I started with an input signal of -10 V or +10 V (i.e., one extreme), and moved both the X and Y micrometers on the translational stage till both these coordinates were at one end of the linear range (i.e., -0.5 V or 0.5 V). I then increased the input voltage in steps of ~1 V through the full range from -10 V to +10 V DC. The signal was applied using an SR function generator with the signal amplitude kept at 0, and a DC offset in the range -5 V to 5 V DC, which gave the desired input voltages to the PZT driver board (between -10 V DC and 10 V DC).
  • When the output of the QPD amp reached the end of the linear regime (i.e., 0.5 V or -0.5 V), I moved the appropriate micrometer dial on the translational stage to take it to the other end of the linear range, before continuing with the measurements. The distance moved was noted.
  • Both the X and Y coordinates were noted in order to investigate pitch-yaw coupling.

Analysis and remarks:

  • The results of the calibration are presented in the plots below. 
  • Though the measurement technique was crude (and maybe flawed because of a possible z-displacement while moving the translational stage), the calibration was meant to be rough, and I think the results obtained are satisfactory. 
  • Fitting the data linearly is only an approximation, as there is evidence of hysteresis. Also, the PZTs appear to have some drift, though I have not been able to quantify this (I did observe that the output of the QPD amp shifted by an amount equivalent to ~0.05 mm after I left the setup standing for an hour or so).
  • The range of actuation seems to be different for the two PZTs, and also for each degree of freedom, though the measured data is consistent with the minimum range given in the datasheet (3.5 mrad for input voltages in the range -20V to 120V DC). 

 

PZT Calibration Plots

The circles are datapoints for the degree of freedom to which the input is applied, while the 'x's are for the other degree of freedom. Different colours correspond to data measured at different positions of the translational stage.

M1 Pitch: M1_Pitch_calib.pdf          M1 Yaw: M1_Yaw_calib.pdf

M2 Pitch: M2_Pitch_calib.pdf          M2 Yaw: M2_Yaw_calib.pdf

 



Installation of the mirrors at the X-endtable:

The calibrated mirrors were taken to the X-endtable for installation. The steering mirrors in place were swapped out for the PZT mounted pair. Manasa managed (after considerable tweaking) to mode-match the green beam to the cavity with the new steering mirror configuration. In order to fine tune the alignment, Koji moved ITMx and ETMx in pitch and yaw so as to maximise green TRX. We then got an idea of which way the input pointing had to be moved in order to maximise the green transmission.

 

Attachment 5: PI_S330.20L.pdf
PI_S330.20L.pdf
  8967   Mon Aug 5 18:48:44 2013 gautamConfigurationendtable upgradeFull range calibration of PZT mounted mirrors for Y-endtable

 I had prepared two more PZT mounted mirrors for the Y-end some time back. These are:

  • A 2-inch CVI mirror (45 degree, HR and AR for 532 nm; it was originally one of the steering mirrors at the X-endtable and was removed while switching those out for the PZT mounted mirrors).
  • A 1-inch Laseroptik mirror (45 degree, HR and AR for 532nm).

I used the same QPD set-up and the methodology described here to do a full-range calibration of these PZTs. Plots attached. The calibration constants have been determined to be:

CVI-pitch: 0.316 mrad/V

CVI-yaw:  0.4018 mrad/V

Laseroptik pitch: 0.2447 mrad/V

Laseroptik yaw:  0.2822 mrad/V

Remarks:

  • These PZTs, like their X-end counterparts, showed evidence of drift and hysteresis. We just have to deal with this.
  • One of the PZTs (the one on which the CVI mirror is mounted) is a used one. While testing it, I thought that its behaviour was a little anomalous, but the plots do not seem to suggest that anything is amiss.

Plots:

CVI Yaw: 2-inch-CVI-Yawcalib.pdf          CVI Pitch: 2-inch-CVI-Pitchcalib.pdf

Laseroptik Yaw: 1-inch-Laseroptik-Yawcalib.pdf          Laseroptik Pitch: 1-inch-Laseroptik-Pitchcalib.pdf

 

  2279   Tue Nov 17 10:09:57 2009 josephbUpdateEnvironmentFumes

The smell of diesel is particularly bad this morning.  It's concentrated enough to be causing me a headache.  I'm heading off to Millikan and will be working remotely on Megatron.

  6191   Thu Jan 12 11:08:23 2012 Leo SingerUpdatePEMFunky spectrum from STS-2

I am trying to stitch together spectra from seismometers and accelerometers to produce a ground motion spectrum from Hz to 100's of Hz.  I was able to retrieve data from two seismometers, GUR1 and STS_1, but not from any of the accelerometers.  The GUR1 spectrum is qualitatively similar to other plots that I have seen, but the STS_1 spectrum looks strange: the X axis spectrum is falling off as ~1/f, but the Y and Z spectra are pretty flat.  All three axes have a few lines that they may share in common and that they may share with GUR1.

See attached plot.

Attachment 1: spectrum.jpg
spectrum.jpg
  932   Fri Sep 5 09:56:14 2008 josephb, EricConfigurationComputersFunny channels, reboots, and ethernet connections
1) Apparently the IOO-ICS type channels had gotten into a funny state last night, where they were showing just noise, exactly when Rana changed the accelerometer gains and did major reboots. A power cycle of the c1ioo crate and appropriate restarts fixed this.

2) c1asc looks like it was down all night. When I walked out to look at the terminal, it claimed to be unable to read the input file from the command line I had entered the previous night ( < /cvs/cds/caltech/target/c1asc/startup.cmd). In addition we were unable to telnet in, suggesting an ethernet breakdown and inability to mount the appropriate files. So we have temporarily run a new cat6 cable from the c1asc board to the ITMX prosafe switch (since there's a nice knee high cable tray right there). One last power cycle and we were able to telnet in and get it running.
  687   Thu Jul 17 00:59:18 2008 JenneSummaryGeneralFunny signal coming out of VCO
While working on calibrating the MC_F signal, Rana and I noticed a funny signal coming out of the VCO. We expect the output to be a nice sine wave at about 80MHz. What we see is the 80MHz signal plus higher harmonics. The reason behind the craziness is to be determined. For now, here's what the signal looks like, in both time and frequency domains.

The first plot is a regular screen capture of a 'scope. The second is the output of the SR spectrum analyzer, as seen on a 'scope screen. The leftmost tall peak is the 80MHz peak, and the others are the harmonics.
Attachment 1: VCOout_time.PNG
VCOout_time.PNG
Attachment 2: VCOout_freq.PNG
VCOout_freq.PNG
  9583   Tue Jan 28 22:24:46 2014 ericq UpdateGeneralFurther Alignment

[Manasa, ericq]

Having no luck doing things remotely, we went into the ITMX chamber and roughly aligned the IR beam. Using the little sliding alignment target, we moved the BS to get the IR beam centered on ITMX, then moved ITMX to get good Michelson fringes with ITMY. Using an IR card, we found the retroreflection and moved ETMX to make it overlap with the beam transmitted through the ITM. With the PRM flashing, X-arm cavity flashes could be seen. So, at that point, both the Y-arm and X-arm were flashing low-order modes.

  12107   Thu May 5 14:03:52 2016 ericqUpdateLSCFurther Aux X PDH tweaks

This morning I poked around with the green layout a bit. I found that the iris immediately preceding the viewport was clipping the ingoing green beam too much; opening it up allowed for better coupling to the arm. I also tweaked the positions of the mode matching lenses and did some alignment, and have since been able to achieve GTRX values of around 0.5.

I also removed the 20 dB attenuator after the mixer, turned the servo gain way down, and was able to lock easily. I then adjusted the gain while measuring the CLG, and set it where the maximum gain peaking was 6 dB, which worked out to a UGF of around 8 kHz. On the input monitor, the PDH horn-to-horn voltage going into the VGA is 2.44 V, which shouldn't saturate the G=4 preamp stage of the AD8336, so that seems OK.
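For reference (not computed in the original entry): if the 6 dB of peaking is in the closed-loop suppression 1/(1+L), then |1+L| = 0.5 at the peak. Assuming |L| is close to 1 there, |1+L| = 2 sin(PM/2), so the implied phase margin is roughly

    PM = 2 arcsin(0.25), i.e. about 29 degrees

at the ~8 kHz UGF quoted above.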

The ALS sensitivity is now approaching the good nominal state:

There remain some things to be done, including comprehensive dumping of all beams at the end table (especially the reflections off the viewport) and new filters to replace the current post-mixer LPF, but things look pretty good.

Attachment 1: 2016-05-05_newals.pdf
2016-05-05_newals.pdf
  4581   Thu Apr 28 12:25:11 2011 josephbUpdateCDSFurther adventures in Hyper-threading

First, I disabled front-end starts on boot up, and brought c1sus up.  I rebuilt the models for the c1sus computer so they had new specific_cpu numbers, making the assumption that 0-1 were one real core, 2-3 were another, etc.
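That pairing assumption can be checked directly on the running machine (a sketch, not something recorded in the original entry):

$ lscpu --extended   # the CORE column shows which logical CPUs share a physical core
$ cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list   # e.g. "2-3" if cpu2 and cpu3 are hyper-threading siblings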

Then I ran the startc1SYS scripts one by one to bring up the models.  Upon just loading the c1x02 on "core 2" (the IOP), I saw it fluctuate from about 5 to 12 microseconds.  After bringing up c1sus on "core 3", I saw the IOP settle down to about 7 microseconds consistently.  Prior to hyper-threading it was generally 5.

Unfortunately, the c1sus model was between 60 and 70 microseconds, and was producing error messages a few times a second

[ 1052.876368] c1sus: cycle 14432 time 65; adcWait 0; write1 0; write2 0; longest write2 0
[ 1052.936698] c1sus: cycle 15421 time 74; adcWait 0; write1 0; write2 0; longest write2 0

Bringing up the rest of the models (c1mcs on 4, c1rfm on 5, and c1pem on 6), I saw c1mcs occasionally jumping above the 60 microsecond line, perhaps once a minute.   It was generally hovering around 45 microseconds.  Prior to hyper-threading it was around 25-28 microseconds.

c1rfm was rock solid at 38 microseconds, the same as prior to hyper-threading.  This is most likely because it does almost no calculation, with only RFM reads slowing it down.

c1pem continued to use negligible time, 3 microseconds out of its 480.

I tried moving c1sus to core 8 from core 3, which seemed to bring it to the 58 to 65 microsecond range, with long cycles every few seconds.

 

I built 5 dummy models (dua on 7, dub on 9, duc on 10, dud on 11, due on 1) to ensure that each virtual core had a model on it, to see if it helped with stabilizing things.  The models were basically copies of the c1pem model.

Interestingly, c1mcs seemed to get somewhat better, only taking 30-32 microseconds, although still not as good as its pre-hyper-threading 25-28 microseconds.  Over the course of several minutes it no longer had any long cycles.

c1sus got worse again, and was running long cycles 4-5 times a second.

 

At this point, without surgery on which models are controlling which optics (i.e. splitting the c1sus model up), I am not able to have hyper-threading on and have things working.  I am proceeding to revert the control models and the c1sus computer to the non-hyper-threaded state.

 

 

  13741   Mon Apr 9 18:46:03 2018 gautamUpdateIOOFurther debugging
  1. I analyzed the data from the free-swinging MC test conducted over the weekend. Attachment #1 shows the spectra. The color scheme is the same for all panels.
    • I am suspicious of MC3: why does the LR coil see almost no Yaw motion?
    • The "equilibrium" values of all the sensor signals (at the IN1 of the coil input filters) are within 20% of each other (for MC3, but also MC1 and MC2).
    • The position resonance is also sensed more by the side coil than by the LR coil.
    • To rule out satellite box shenanigans, I just switched the SRM and MC3 satellite boxes. But the coherence between the frequency noise as sensed by the arms remains.
  2. I decided to clean up my IMC noise budget a bit more.
    • Attachment #2 shows the NB as of today. I'll choose a better color palette for the next update.
    • "Seismic" trace is estimated using the 40m gwinc file - the MC2 stack is probably different from the others and so it's contribution is probably more, but I think this will suffice for a first estimate.
    • "RAM" trace is measured at the CM board input, with MC2 misaligned.
    • The unaccounted-for noise is evident above ~8 Hz.
    • More noises will be added as they are measured.
    • I am going to spend some time working on modeling the CM board noise and TF in LTspice. I tried getting a measurement of the transfer function from IN1 to the FAST output of the CM board with the SR785 (the motivation being to add the contribution of the input-referred CM board noise to the NB plot), but I suspect I screwed up something w.r.t. the excitation amplitude, as I am getting a totally nonsensical shape, which also seems to depend on my input excitation amplitude. I don't think the output is saturated (viewed during measurement on a scope), but perhaps there are some subtle effects going on.
Attachment 1: MC_Freeswinging.pdf
MC_Freeswinging.pdf
Attachment 2: IMC_NB_20180409.pdf
IMC_NB_20180409.pdf
  13744   Tue Apr 10 14:28:44 2018 gautamUpdateIOOFurther debugging

I am working on IMC electronics. IMC is misaligned until further notice.
