I've completed bench testing of all seven vacuum Acromags installed in a custom rackmount chassis. The system contains five XT1111 modules (sinking digital I/O) used to read back the states of the valves, TP1, CP1, and the RPs. It also contains two XT1121 modules (sourcing digital I/O) used to pass 24V DC control signals to the AC relays actuating the valves and RPs. The list of Acromag channel assignments is attached.
I tested each input channel using a manual flip-switch wired between the signal pin and return, verifying that the EPICS channel readout changed appropriately when the switch was flipped open vs. closed. I tested each output channel using a voltmeter placed between the signal pin and return, toggling the EPICS channel on/off state and verifying that the output voltage changed appropriately. These tests confirm that the Acromag units all work and that all the EPICS channels are correctly addressed.
I've set up a closed subnetwork for interfacing the vacuum hardware (Acromags and serial devices) with the new controls machine (c1vac; 192.168.113.72). The controls machine has two Ethernet interfaces, one which faces outward into the martian network and another which faces the internal subnetwork, 192.168.114.xxx. The second network interface was configured via the following procedure.
1. Add the following lines to /etc/network/interfaces:
iface eth1 inet static
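(The address and netmask lines belong in the same stanza; the values below are illustrative — substitute the actual LAN address assigned to eth1:)

    address 192.168.114.1
    netmask 255.255.255.0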
2. Restart the networking services:
$ sudo /etc/init.d/networking restart
3. Enable DNS lookup on the martian network by adding the following lines to /etc/resolv.conf:
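(The lines point DNS at the martian network's nameserver, chiara; illustrative values, with the actual address substituted:)

search martian
nameserver 192.168.113.104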
4. Enable IP forwarding from eth1 to eth0:
$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

(Note: a plain "sudo echo 1 > ..." fails here, because the redirection is performed by the unprivileged shell, not by sudo.)
5. Configure IP tables to allow outgoing connections, while keeping the LAN invisible from outside the gateway (c1vac):
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
$ sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
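Note that neither the ip_forward flag nor the iptables rules set this way survive a reboot; to make them persistent (a sketch, assuming the Debian iptables-persistent package):

Add to /etc/sysctl.conf:
net.ipv4.ip_forward = 1

Then save the current rules:
$ sudo iptables-save > /etc/iptables/rules.v4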
6. Finally, because the EPICS 3.14 server binds to all network interfaces, client applications running on c1vac now see two instances of the EPICS server---one at the outward-facing address and one at the LAN address. To resolve this ambiguity, two additional environment variables must be set that tell local clients which server address to use. Add the following lines to /home/controls/.bashrc:
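(These would presumably be the standard EPICS Channel Access client variables; illustrative values:)

export EPICS_CA_AUTO_ADDR_LIST=NO
export EPICS_CA_ADDR_LIST=192.168.114.1

with EPICS_CA_ADDR_LIST set to whichever of the two server addresses local clients should use.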
A list of IP addresses so far assigned on the subnetwork follows.
As I was turning off the lights in the VEA, I heard a rattling sound from near the PSL enclosure. I followed it to a valve - I couldn't see a label on this valve in my brief effort to find one, but it is on the south-west corner of the IMC table, so maybe VABSSCI or VABSSCO? The power cable is somehow spliced with an attachment that looks to be bringing gas in/out of the valve (see Attachment #1), and the nut on the bottom was loose; the whole power cable + metal attachment was responsible for the rattling. I finger-tightened the nut and the sound went away.
Gautam, Aaron, Chub and Steve,
Vent 80 is nearly complete; the instrument is almost at atmosphere. All four ion pump gate valves have been disconnected, though the position sensors are still connected, and all annulus valves are open. The controllers of TP1 and TP3 have been disconnected from AC power. VC1 and VC2 have been disconnected and must remain closed. Currently the RGA is being vented through the needle valve; the RGA itself had been shut off at the beginning of the vent preparations. VM1 and VM3 could not be actuated. The condition status is still listed as Unidentified because of the disconnected valves.
Vent 81 is complete.
The 4 ion pumps and the cryo pump are at ~1-4 Torr (estimated, as we have no gauges there); all other parts of the vacuum envelope are at atmosphere. The P2 & P3 gauges are out of order.
V1 and VM1 are in a locked state. We suspect this is because of some interlock logic.
TP1 and TP3 controllers are turned off.
Valve conditions are as shown: ready to be opened, closed, moved, or rewired. To reiterate: VC1, VC2, and the ion pump valves shouldn't be re-connected during the vac upgrade.
Thanks for all of your help.
Working through the checklist, I did the following:
@Steve & Chub, we are ready to vent tomorrow (Monday Nov 19).
I've begun prepping the IFO for the vent, and completed most of the IFO related items on the checklist. The power into the MC has been cut, but the low-power autolocker has not been checked. I will finish up tomorrow and post the go ahead. PSL shutter is closed for tonight.
I made additional measurements on the x and y arms, at 5 offset positions for each arm (along with 6 measurements at the "zeroed" position).
Our 4 ion pumps were closed off for a long time. I estimated their pressure to be around ~1 Torr. After talking with Koji, we decided not to vent them.
It would still be useful to wire their position sensors. But make sure we do not actuate the valves.
The cryo pump was regenerated to 1e-4 Torr about 2 years ago. Its pressure can be ~2 Torr with the charcoal powder. It is a dirty system at room temperature.
Do not actuate VC1 and VC2, and keep the manual valve closed.
If someone feels we should vent them for some reason, let us know here in the elog before Monday morning.
Wiring of the power, Ethernet, and indicator lights for the vacuum Acromag chassis is complete. Even though this crate will only use +24V DC, I wired the +/-15V connector and indicator lights as well to conform to the LIGO standard. There was no wiring diagram available, so I had to reverse-engineer the wiring from the partially complete c1susaux crate. Attached is a diagram for future use. The crate is ready for software development to begin on Monday.
Back to loss measurements.
I replaced the PD I've been using for the AS beam.
I misaligned the x arm.
I tried to lock the y arm, but the PRC was locked, so I was unable to. Gautam reminded me where the config scripts are.
The armloss measurement script needed two additional modifications:
I successfully ran the loss measurement script for the x and y arms. I'm getting losses of ~100 ppm from the first estimates.
I made the following changes to the lossmap script:
When the optic doesn't align itself to the ideal position, I notice that it often locks on a 01 mode. When the cavity is then misaligned and restored, it can no longer acquire lock. To fix this, I've moved my 'save' commands to just before the loop begins. This means the script may take longer to run, but as long as the cavity is initially locked and well aligned, this should make it more robust against wandering off and never reacquiring lock.
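The flow now looks something like this (a minimal sketch with pyepics; the channel names, step sizes, and thresholds are illustrative stand-ins, not the script's actual ones):

# Sketch of the reordering: save the known-good alignment ONCE, before
# the loop, so every misalign/restore cycle returns to a locked state.
import time
from epics import caget, caput

PIT = 'C1:SUS-ETMY_PIT_COMM'     # hypothetical alignment channels
YAW = 'C1:SUS-ETMY_YAW_COMM'
TRY = 'C1:LSC-TRY_OUT16'         # arm transmission, for a lock check

offsets = [(-10, 0), (10, 0), (0, -10), (0, 10)]   # example spot steps

def locked(thresh=0.8):
    return caget(TRY) > thresh

def measure():
    pass                          # stand-in for the actual data taking

# the fix: 'save' happens here, before the loop begins
saved = {ch: caget(ch) for ch in (PIT, YAW)}

for dp, dy in offsets:
    caput(PIT, saved[PIT] + dp)
    caput(YAW, saved[YAW] + dy)
    time.sleep(5)
    if not locked():              # wandered off: restore and move on
        for ch, v in saved.items():
            caput(ch, v)
        time.sleep(5)
        continue
    measure()

# always end back at the saved, well-aligned state
for ch, v in saved.items():
    caput(ch, v)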
I left the lossmap script running for the x-arm. Next would be to run it for the y arm, but I see that after stepping to a few positions the lock is again lost. It's still trying to run, but if you want to stop it, no data already taken will be lost. To stop it, go to the remaining terminal open on rossa and Ctrl+C.
The analysis needs:
The 40m vacuum envelope has one large single O-ring on the OOC west side. All other doors have double O-rings with annuli.
There are 3 spacers to protect the O-ring. They should not be removed!
The cryo pump's static seal to VC1 is also Viton. All gate valves and right-angle valve plates have single Viton O-ring seals.
There are small single Viton O-rings on all optical-quality viewports.
Helium will permeate through these quickly, so leak-checking time is limited to 5-10 minutes.
All other seals are copper gaskets. We have 2 manual right-angle valves with metal dynamic seals [VATRING], VV1 & RV1.
Prep for this work:
I was trying to get some pics of the optics as a zeroth-level reference for the pre-vent loss with the single arms locked, but since our SL7 upgrade, the sensoray won't work anymore. I'll try fixing this during the daytime.
I ran a BNC from the PD on the AS table along the cable rack to a free ADC channel on the LSC whitening board. I laid the BNC on top of the other cables in the rack, so as not to disturb anything. I was also careful not to touch the other cables on the LSC whitening board when I plugged in my BNC. The PD now reads out to... a mystery channel. The mystery channel then goes to c1lsc ADC0 channels 9-16 (since the BNC goes to input 8, it should be #16). To find the channel, I opened the c1lsc model and found that adc0 channel 15 (0-indexed in the model) goes to a terminator.
Rather than mess with the LSC model, Gautam freed up C1:ALS-BEATY_FINE_I, and I'm reading out the AS signal there.
I misaligned the x-arm and then re-installed the AS PO PD, using the scope to center the beam before connecting it to the BNC (first to the mystery channel, then to BEATY). I turned off all the lights.
I went to misalign the x-arm, but some of the control channels are white-boxed. The only working screen is on pianosa.
The noise on the AS signal is much larger than that on the MC trans signal, and the DC difference between the misaligned and locked states is much less than the RMS (spectrum attached); the coherence between MC trans and AS is low. However, after estimating that for ~30 ppm loss the locked vs. misaligned states should differ by only ~0.3-0.4%, and double-checking that we are well above the ADC and dark noise (blocked the beam, took another spectrum) and not saturating the PD, these observations started to make more sense.
To make the measurement in CDS, I also made the following changes to a copy of Johannes' assess_armloss_refl.py that I placed in /opt/rtcds/caltech/c1/scripts/lossmap_scripts/armloss_cds/ :
I started taking a measurement, but quickly realized that the mode cleaner had been locked to a higher-order mode for about an hour, so I spent some time moving the MC. It would repeatedly lock on the 00 mode, but the alignment must be bad because the transmission fluctuates between 300 and 1400, and the lock only lasts about 5 minutes.
All 7 Acromag units are now installed in the vacuum chassis. They are connected to 24V DC power and Ethernet.
I have merged and migrated the two EPICS databases from c1vac1 and c1vac2 onto the new machine, with appropriate modifications to address the Acromags rather than the VME crate.
I have tested all the digital output channels with a voltmeter, and some of the inputs. Still more channels to be tested.
I’ll follow up with a wiring diagram for channel assignments.
I began moving the AA and AI chassis over to 1X1/1X2 as outlined in the elog.
The chassis were mostly filled with unused cables. There was one cable attached to the output of a QPD interface board, but there was nothing attached to the input, so it was clearly not in use and I disconnected it.
I also attach a picture of some of the SMA connectors I had to rotate to accommodate the chassis in their new locations.
The chassis are installed, and the anti-imaging chassis can be seen second from the top; the anti-aliasing chassis can be seen 7th from the top.
I need to break out the SCSI on the back of the AA chassis, because the ADC breakout board only has a DB36 adapter available; the other cables are occupied by the signals from the WFS dewhitening outputs.
It is posted on the 40m wiki, with Gautam's help. Printed copies are also posted around the doors.
This problem resurfaced, which I noticed when I couldn't get the single arm locks going.
The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back.
Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though.
I've been looking into the cross-coupling from the SRCL loop control point to the Michelson error point.
[Attachment #1] - Swept sine measurement of transfer function from SRCL_OUT_DQ to MICH_IN1_DQ. Details below.
[Attachment #2] - Attempt to measure time variation of coupling from SRCL control point to MICH error point. Details below.
[Attachment #3] - Histogram of the data in Attachment #2.
[Attachment #4] - Spectrogram of the duration in which the data in #2 and #3 were collected, to investigate the occurrence of fast glitches.
Hypothesis: (so that people can correct me where I'm wrong - 40m tests are on DRMI so "MICH" in this discussion would be "DARM" when considering the sites)
Measurement details and next steps:
Attachments #2 and #3
Stop using the scope, and just put the signal into the DAQ with some whitening. You'll get 16 bits.
I increased the resolution on the scope by selecting Average (512) mode. I was a bit confused by this, since Yuki was correct that I had only 4 digits recorded over ethernet, which made me think this was an I/O setting. However, the sample acquisition setting was the only thing I could find on the Tektronix scope or in its manual about improving vertical resolution. This didn't change the saved file, but I found the more extensive programming manual for the scope, which confirms that using average mode does increase the resolution... from 9 to 14 bits! I'm not even getting that many.
Gautam was doing some DRMI locking, so I replaced the photodiode at the AS port to begin loss measurements again.
There's another setting, DATa:WIDth, which is the number of bytes per data point transferred from the scope.
I tried using the *.25 scope instead; no better results. Changing the vertical resolution directly doesn't change this either. I've also tried changing most of the ethernet settings. I don't think it's something on the scripts side, because I'm using the same scripts that apparently generated the most recent of Johannes' and Yuki's files; I did look through e.g. tds3014b.py, and didn't see the resolution explicitly set. Indeed, I get 7 bits of resolution as that function specifies, but most of them aren't filled by the scope. This makes me think the problem is in the scope settings.
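For reference, these are the settings in play, in TDS3000-series programming syntax (a sketch only; 'send' stands in for whatever write routine the script uses to talk to the scope, and I'm not claiming these exact lines are in tds3014b.py):

# TDS3000-series commands relevant to the resolution question.
# 'send' is a stand-in for the script's scope-write routine; call
# configure(print) just to see the command strings.
def configure(send):
    send('ACQuire:MODe AVErage')  # average mode: spec'd to raise 9 -> 14 bits
    send('ACQuire:NUMAVg 512')
    send('DATa:WIDth 2')          # 2 bytes per transferred point; with
                                  # WIDth 1, anything beyond 8 bits is
                                  # truncated regardless of acquisition mode
    send('DATa:ENCdg RIBinary')   # signed-integer binary transfer
    send('CURVe?')                # then read back the waveform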
With the DRMI locked, I drove a line in MICH using the sensing matrix infrastructure. Then I looked at the error points of MICH, PRCL and SRCL. Initially, the sensing line oscillator output matrix for MICH was set to drive only the BS. Subsequently, I changed the --> PRM and --> SRM matrix elements until the line height in the PRCL and SRCL error signals was minimized (i.e. the change to PRCL and SRCL due to the BS moving, which is a geometric effect, is cancelled by applying the opposite actuation to the PRM/SRM respectively). Then I transferred these to the LSC output matrix (old numbers in brackets):
MICH--> PRM = -0.335 (-0.2655)
MICH--> SRM = -0.35 (+0.25)
I then measured the loop TFs - all 3 loops had UGFs around 100 Hz, coinciding with the peaks of the phase bubbles. I also ran some sensing lines and did a sensing matrix measurement, Attachment #1 - looks similar to what I have obtained in the past, although the relative angles between the DoFs make no sense to me. I guess the AS55 demod phase can be tuned up a bit.
The demodulation was done offline - I mixed the time series of the actuator and sensor signals with a "local oscillator" cosine wave - but instead of using the entire 5 minute time series and low-passing the mixer output, I divvied up the data into 5 second chunks, windowed with a Tukey window, and have plotted the mean value of the resulting mixer output.
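In code, the chunked demod looks roughly like this (numpy/scipy sketch; the sampling rate, chunk length, and window shape parameter are illustrative):

# Offline demodulation as described above: mix the time series against
# cosine/sine LOs at the line frequency, chunk into 5 s Tukey-windowed
# segments, and keep each segment's mean mixer output.
import numpy as np
from scipy.signal.windows import tukey

def demod_chunks(x, f_line, fs=2048, t_chunk=5.0, alpha=0.25):
    n = int(t_chunk * fs)
    w = tukey(n, alpha)
    w /= w.mean()                       # preserve the DC scale of the mixer output
    t = np.arange(n) / fs
    lo_c = np.cos(2 * np.pi * f_line * t)
    lo_s = np.sin(2 * np.pi * f_line * t)
    out = []
    for k in range(len(x) // n):
        seg = x[k * n:(k + 1) * n] * w
        # complex demod: mean of the mixer output over the chunk
        out.append(np.mean(seg * lo_c) + 1j * np.mean(seg * lo_s))
    return np.array(out)                # one complex amplitude per chunk

# the ratio of sensor to actuator demod outputs tracks the coupling at f_line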
Unrelated to this work: I re-aligned the PMC on the PSL table, mostly in Pitch.
This problem resurfaced. I'm doing the debugging.
6:30pm - "Solved" using the same procedure of stepping through the whitening gains with a small (10 DAC cts pk) signal applied. Simply stepping through the gains with input grounded doesn't seem to do the trick.
I had some success today. I hope that the tweaks I made will allow working with the DRMI during the day as well, though it looks like the main limiting factor in lock duty cycle is angular stability of the PRC.
[Attachment #1]: Repeatable and reliable DRMI locks tonight, stability is mainly limited by angular glitches - I'm not sure yet if these are due to a suspect Oplev servo on the PRM, or if they're because of the tip-tilt PR2/PR3/SR2/SR3.
[Attachment #2]: A pass at measuring the TF from SRCL error point to MICH error point via control noise re-injection. I was trying to measure down to 40 Hz, but lost the lock, and am calling it for the night.
[Attachment #3]: Coherence between PRM oplev error point and beam spot motion on POP QPD.
Note that the MICH actuation is not necessarily optimally de-coupled by actuating on the PRM and SRM yet (i.e. the latter two elements of the LSC output matrix are not precisely tuned yet).
What is the correct way to make feedforward filters for this application? Swept-sine transfer function measurement? Or drive broadband noise at the SRCL error point and then do time-domain Wiener filter construction using SRCL error as the witness and MICH error as the target? Or some other technique? Does this even count as "feedforward" since the sensor is not truly "outside" the loop?
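For the Wiener option, the time-domain construction would look something like the following sketch (FIR filter from the normal equations, with SRCL error as witness and MICH error as target; the tap count and correlation estimator are illustrative choices, not a worked-out design):

# FIR Wiener filter sketch: solve R w = p, where R is the (Toeplitz)
# witness autocorrelation matrix and p the witness-target cross-correlation.
import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_fir(witness, target, ntaps=512):
    n = len(witness)
    # biased correlation estimates
    r = np.array([np.dot(witness[:n - k], witness[k:]) / n for k in range(ntaps)])
    p = np.array([np.dot(witness[:n - k], target[k:]) / n for k in range(ntaps)])
    return solve_toeplitz((r, r), p)    # symmetric Toeplitz system

# prediction of MICH noise from SRCL; subtracting it is the feedforward:
# mich_hat = np.convolve(srcl, w)[:len(srcl)]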
Earlier today, I rebooted a few unresponsive VME crates (susaux, auxey).
The IMC has been unhappy for a couple of days - the glitches in the MC suspensions are more frequent. I reset the dark offsets, minimized MCREFL by hand, and then re-centered the beam on the MC2 Trans QPD. In this config, the IMC has been relatively stable today, although judging by the control room StripTool WFS control signal traces, the suspension glitches are still happening. Since we have to fix the attenuator issue soon anyway, we can do a touch-up on the IMC WFS.
I removed the DC PD used for loss measurements. I found that the AS beam path had been disturbed - the alignment will need to be redone, which just makes it more work to get back to IFO locking, as I have to check the alignment onto the AS55 and AS110 PDs.
Single arm locking worked with minimal effort - although the X arm dither alignment doesn't do the intended job of maximizing the transmission. Needs a checkup.
PRMI locking (carrier resonant) was also pretty easy. Stability of the lock is good; locks held for ~20 minutes at a time and only broke because I was mucking around. However, when the carrier is resonant, I notice a smeared scatter pattern on the ITMX camera that I don't remember from before. I wonder if the FF idea can be tested in the simpler PRMI config.
After recovering these two simpler IFO configurations, I improved the cavity alignment by hand and with the ASS servos that work. Then I re-centered all the Oplev beams onto their respective QPDs and saved the alignment offsets. I briefly attempted DRMI locking, but had little success; I'm going to try a little later in the evening, so I'm leaving the IFO with the DRMI flashing about, LSC mode off.
The VEA vertex laptop, paola, has a flashing orange indicator which I take to mean some kind of battery issue. When the laptop is disconnected from its AC power adaptor, it immediately shuts down. So this machine is kind of useless for its intended purpose of being a portable computer we can work at optical tables with. The actual battery diagnostics (using upower) don't report any errors.
Today I finished setting up the server that will replace the c1vac1/2 machines. I put it on the martian network at the unassigned IP 192.168.113.72. I assigned it the hostname c1vac and added it to the DNS lookup tables on chiara.
I created a new targets directory on the network drive for the new machine: /cvs/cds/caltech/target/c1vac. After setting EPICS environment variables according to 13681 and copying over (and modifying) the files from /cvs/cds/caltech/target/c1auxex as templates, I was able to start a modbusIOC server on the new machine. I was able to read and write (soft) channel values to the EPICS IOC from other machines on the martian network.
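As an example of the kind of soft-channel check this was (the record and channel names below are illustrative, not entries from the migrated databases): a minimal record in the database,

record(ao, "C1:Vac-TEST_SOFT") {
    field(DTYP, "Soft Channel")
    field(PREC, "2")
}

can be exercised from any other martian machine with the standard CA command-line tools:

$ caput C1:Vac-TEST_SOFT 3.14
$ caget C1:Vac-TEST_SOFT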
I scripted it as a systemd-managed process which automatically starts on boot and restarts after failure, just as it is set up on c1auxex.
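The unit file follows the usual pattern (a sketch; the file name, port, and paths are illustrative, modeled on the c1auxex arrangement):

# e.g. /etc/systemd/system/modbusIOC.service (hypothetical)
[Unit]
Description=modbusIOC for c1vac
After=network.target

[Service]
User=controls
WorkingDirectory=/cvs/cds/caltech/target/c1vac
ExecStart=/usr/bin/procServ -f 8008 /cvs/cds/caltech/target/c1vac/startup.cmd
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

It is enabled with $ sudo systemctl enable modbusIOC.service so that it starts on boot.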
The vacuum and MC are OK
Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see if the box was getting +24V DC from the Sorensen or not. Upon sticking the card in, the FAIL LEDs on all the VME cards came on. We immediately removed the extender card. Without any intervention from us, after ~1 minute, the FAIL LEDs went off again. Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues, and the c1vac1 computer is still responsive.
But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything.
Is there a reason why extender cards shouldn't be stuck into eurocrates?
Please check your data file and compare with those Johannes made last year. I think the power values in your data file may have only three digits and fluctuate by about 2%, which introduces a huge error - comparable to, or larger than, the few-tenths-of-a-percent locked/misaligned contrast the loss estimate relies on. (See elog: 40m/14254.)
On running the script again, I'm getting negative values for the loss.
This afternoon I started setting up the Supermicro 5017A-EP that will replace c1vac1/2. Following Johannes's procedure in 13681, I installed Debian 8.11 (jessie). There is a more recent stable release, 9.5, available since the first Acromag machine was assembled, but I stuck with version 8 for consistency; we already know that version works. The setup is sitting on the left side of the electronics bench for now.
That was likely me. I had recentered the beam on the PD I'm using for the armloss measurements, and I probably moved the wrong steering mirror. The transmission from MC2 is sent to a steering mirror that directs it to the MC2 transmission QPD; the transmission through this steering mirror I direct to the armloss MC QPD (the second of these is what I was trying to adjust).
Note: The MC2 trans QPD goes out to a cable that is labelled MC2 op lev. This confusion should be fixed.
I realigned the MC and recentered the beam on the QPD. Indeed the beam on the MC2 QPD was up and to the left, and the lock was lost pretty quickly, possibly because the beam wasn't centered. Lock was unstable for a while, and I rebooted c1psl once during this process because the slow machine was unresponsive.
When tweaking the alignment near MC2, take care not to bump the table, as this also changes the MC2 alignment.
Once the MC was stably locked, I was able to maximize MC transmission at ~15,400 counts. I then centered the spot on the MC2 trans QPD, and transmission dropped to ~14,800 counts. After tweaking the alignment again, it was recovered to ~15,000 counts. Gautam then engaged the WFS servo and the beam was centered on the MC2 trans QPD; the transmission level dropped to ~14,900.
I tried to plot a long trend of the MC transmission today. I could not get farther back than 2017 Aug 4.
The mode cleaner was misaligned, probably due to the earthquake (see the drop in the MC transmitted value slightly after UTC 7:38:52 in the second plot). The plots show the PMC transmitted and MC sum signals from June 10, 07:10:08 UTC over a duration of 17 hrs. The PMC was realigned at about 4-4:15 pm today by Rana, as can be seen in the first plot.
The IMC has been misbehaving for the last 5 hours. Why? I turned the WFS servos off. AFAIK, Aaron was the last person to work on the IFO, so I'm not taking any further debugging steps so as not to disturb his setup.
I'm checking out the data this morning, running armloss_AS_calc.py using the parameters Yuki used here.
I made the following changes to scripts (measurement script and calculator script)
I repeated the 'dark' measurements, because I need 20 files to run the script, and the earlier measurements had the scope window set larger than the integration time in the script, so the data was padded with bad values that were skewing the calculation.
On running the script again, I'm getting negative values for the loss. I removed the beamstops from the PDs, and re-centered the beams on the PDs to repeat the YARM measurements.
The Contec test board with Dsub37Fs was on the top shelf of E7
New all organic machine.
We have no coffee machine.
We are dreaming about it.
We still do not have it.
After running this script Friday night, I noticed Saturday that the data hadn't saved. Scrolling up in the terminal, I couldn't see where I'd run the script, so I thought I'd forgotten to run it, as I was making last-minute changes to the scope settings Friday before leaving.
On Monday it turned out I hadn't forgotten to run the script; rather, the script itself was getting hung up as it waited for ASS to settle, due to the offset on the ETM PIT or YAW setpoints. The script was waiting until both pitch and yaw settled to below 0.7, but yaw was reading ~15; I think this is normal, and it looks like Yuki had solved this problem by waiting for the DEMOD-OFFSET to become small, rather than just the DEMOD signal. Since this is a solved problem, I think I might be using an old script, but I'm pretty sure I'm running the one in Johannes' folder that Yuki references, for example here. The scripts in /yutaro_scripts/ have this DEMOD-OFFSET functionality commented out, and anyway those scripts seem to do the 2D loss maps rather than the 1D loss measurements.
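The difference between the two wait conditions, roughly (pyepics sketch; the channel names are illustrative guesses at the ASS demod channels, not verified):

# Stuck version waits on the demod signal itself (yaw idles at ~15);
# the fix waits on its distance from the demod offset instead.
import time
from epics import caget

PIT     = 'C1:ASS-YARM_ETM_PIT_L_DEMOD_I_OUT'      # hypothetical channels
PIT_OFS = 'C1:ASS-YARM_ETM_PIT_L_DEMOD_I_OFFSET'
YAW     = 'C1:ASS-YARM_ETM_YAW_L_DEMOD_I_OUT'
YAW_OFS = 'C1:ASS-YARM_ETM_YAW_L_DEMOD_I_OFFSET'

def settled(thresh=0.7):
    # compare the demod output against its offset, not against zero
    return (abs(caget(PIT) - caget(PIT_OFS)) < thresh and
            abs(caget(YAW) - caget(YAW_OFS)) < thresh)

while not settled():
    time.sleep(1)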
In the meantime I blocked the beams and ran the script in DARK mode. The script is saving data in /armloss/data/run_20181105/, and runs with no exceptions thrown.
However, when I try to dither-align the YARM, I get an error that "this is not a degree of freedom that has an ASS". I'm also getting some exceptions from MEDM about unavailable channels. It must have been something about donatella not initializing, because it's working on pianosa. I turned on the YARM ADS from pianosa. Monitoring from dataviewer, I see that LSC-TRY_OUT has some spikes to 0.5, but it's mostly staying near 0. I tried returning to the previous frozen outputs, and also stepping ETMY-[PIT/YAW] around from the IFO_ALIGN screen, but didn't see much change in the behavior of LSC-TRY. I missed the other controls Gautam was using to lock before, and I'm also unclear on whether ASS acts only on the angular DoFs, or also on length.
I unblocked the beams after the DARK run was done.
Some facts which should be considered when doing this measurement and the associated uncertainty:
This result has about 40% uncertainty in XARM and 33% in YARM (so big...).
I'm continuing the arm loss measurements Yuki was making. I'm first familiarizing myself with the procedures for the measurement Johannes describes.
I'm not very familiar with the MEDM screens, so I'm just kind of poking around and checking with Gautam. I do the following:
I've left the script running.
Let's install Jamie's new Data Viewer
Physical plant is cleaning our roof and gutters today.
It took at least ten years to rust away.
Steve reported to me that the CC1 Hornet gauge was not reporting the IFO pressure after some cable tracing at EX. I found that the power to the unit had been accidentally disconnected. I re-connected the power and manually turned on the HV on the CC gauge (perhaps this can be automated in the new vacuum paradigm). IFO pressure of 8e-6 torr is being reported now.
Chub Osthelder received 40m-specific basic safety training today.
Gautam & Steve,
Our controller is back with the Osaka maintenance completed. We swapped it in this morning.
The TP-1 Osaka maglev controller [model TCO10M, ser V3F04J07] needs maintenance. The alarm LED is on, indicating that we need Lv2 service.
The turbo and the controller are in good working order.
Our maintenance level 2 service price is $...... It consists of a complete disassembly of the controller for internal cleaning of all ICBs, replacement of all main board capacitors, replacement of all internal cooling units, ROM battery replacement, re-assembly, and mandatory final testing to make sure it meets our factory specifications. Turnaround time is approximately 3 weeks.
RMA 5686 has been assigned to Caltech's returning TC010M controller. Attached please find our RMA forms. Complete and return them to us via email, along with your PO, prior to shipping the controller.
Osaka Vacuum USA, Inc.
510-770-0100 x 109
Our TP-1 TG390MCAB is 9 years old. What is the life expectancy of this turbo?
The Osaka maglev turbopumps are designed with a 100,000-hour (~10 operating years) life span, but as you know, most of our end-users run their Osaka maglev turbopumps in excess of 10-15+ years continuously. The 100,000-hour design value is based upon the Al material being rotated at the given speed, but the design fudge factor has somehow elongated the practical life span.
We should have the cost of a new maglev & controller in next year's budget. I put the quote on the wiki.
As part of the preparation for the replacement of c1susaux with Acromag, I inspected the coil-OSEM transfer function measurements for the vertex SUSs.
The TFs showed the typical f^-2 shape with the whitening on, except for ITMY UL (Attachment 1). Gautam told me that this has been a known issue for ~5 years.
We made a thorough inspection/replacement of the components and identified the mechanism of the problem.
It turned out that the inputs to the MAX333s are as listed below.
The switching voltage for UL is obviously incorrect. We thought this came from a broken BIO board, and thus swapped the corresponding board, but the issue remained. There are 4 BIO boards in total on c1sus, so maybe we replaced the wrong board?
Initially we thought that the BIO couldn't drive the 5 kOhm pull-up resistor from 15V to 0V (= 3 mA of current), so I replaced the pull-up resistors with 30 kOhm ones. But this did not help. These 30Ks are left on the board.
To do for Green Locking in YARM:
The auto-alignment servo should be completed. This servo requires many parameters to be optimized: the demodulation frequency, demodulation phase, servo gains (for each of M1/M2 PIT and YAW), and the matrix elements which remove the PIT-YAW coupling.