Anchal and I ran tests on the two systems (C1-SUS2 and C1-BHD). Attached are the results and the code and data to recreate them.
We connected each DAC channel to an ADC channel, so every result represents a DAC/ADC pair. We then set the offset to different values from -3000 to 3000 and recorded the measured signal. I then plotted the response curve of every DAC/ADC pair, so each one was tested at least once.
There are two types of plots included in the attachments:
1) A summary plot, found on the last pages of the pdf files. This is a quick and dirty way to see whether all of the channels are working. It is NOT a replacement for the other plots; it shows all the data quickly but sacrifices precision.
2) An in-depth look at each ADC/DAC pair. Here I show the measured value for a defined DC offset. The gain of the system should be 0.5 (put in an offset of 100 and measure 50). I included a line showing where this should be, and also plotted the difference between the 0.5 gain line and the measured data.
As seen in the provided plots, the channels saturate beyond roughly the -2000 to 2000 mark, which is why the difference graph is restricted to the -2000 to 2000 range.
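As a minimal sketch of how the deviation from the ideal 0.5 gain line can be computed over the unsaturated range (assuming the offset/measured pairs have been loaded into numpy arrays; the real file layout is in the attached code and data):

```python
import numpy as np

def gain_residual(offsets, measured, lim=2000, ideal_gain=0.5):
    """Deviation of a DAC/ADC pair from the ideal 0.5 gain line.

    Only points with |offset| < lim are used, since the channels
    saturate beyond roughly +/-2000 counts.
    """
    mask = np.abs(offsets) < lim
    fitted_gain, fitted_intercept = np.polyfit(offsets[mask], measured[mask], 1)
    residual = measured[mask] - ideal_gain * offsets[mask]
    return fitted_gain, fitted_intercept, residual

# Toy example for a single channel pair (the real data are in the attachments):
offsets = np.arange(-3000, 3001, 100, dtype=float)
measured = np.clip(0.5 * offsets, -1000, 1000)   # fake response with saturation
print(gain_residual(offsets, measured)[0])        # ~0.5
```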
Summary: all the channels appear to be working; they all show very little deviation from the theoretical gain.
Note: ADC channel 31 is the timing signal, so it is the only channel that is wildly off. It is not a measurement channel; we measured it by mistake.
The issue with the PD output was that the whitened PD outputs of the sat amp (D080276) are differential, while the downstream circuit (D000210 PD whitening unit) has single-ended inputs. This means the negative outputs (D080276 U2) have always been shorted to GND with no output resistor, forcing the AD8672 to run at its output current limit. There may have been a heat problem from this current saturation, since Anchal reported that the unit came back sane after some power cycling or after opening the lid. I believe the heat and the forced differential voltage at the input stage of the chip eventually caused it to fail.
Anchal came up with a brilliant idea to bypass this issue: the sat amp box has PD mon channels, which are single-ended. We simply shifted the output cables to the mon connectors. The MC1 sus was nicely damped and the IMC locked as usual. Anchal will keep checking over the next few days whether the circuit keeps working.
MC was unable to acquire lock because the WFS offsets were cleared to zero at some point, and because of that MC was too misaligned to catch lock again. In such cases, one wants the WFS to start accumulating offsets as soon as minimal lock is attained so that the mode cleaner can be automatically aligned. So I did the following, which worked:
According to the schematics, the distance between the original EQ tap holes is 0.5". Given that the original tap holes' diameter is 0.13" there is enough room for a 1/4" drill.
Then, can we replace the four small EQ stops at the bottom (barrel surface) with two 1/4-20 EQ stops? This will require drilling the bottom EQ stop holders (two per SOS).
The channels on both C1BHD and C1SUS2 seem to be frozen: they aren't updating and are holding one value. To fix this, Anchal and I tried:
I wonder if Jon has any ideas.
After sliding the alignment bias around and browsing through the elog searching for "stuck", we concluded the ITMX OSEMs needed to be freed. To do this, the procedure is to slide the alignment bias back and forth ("shaking") and then, as the OSEMs start to vary, enable the damping. We did just this, and then slowly restored the alignment bias sliders to their original positions. Attachment 1 shows the ITMX OSEM sensor input monitors throughout this procedure.
In the end, since MC had trouble catching lock after opening the PSL shutter, I tried burt restoring the ioo to 2021/Jun/17/06:19/c1iooepics.snap, but the problem persists.
We checked back in time to see when the BS and PRM OSEM slow channels went to zero. It was clear that they became zero when we worked on this issue on Thursday, June 17th. So we simply went back and power cycled the c1susaux acromag chassis. After that, we had to log in to the c1susaux computer and run
sudo /sbin/ifdown eth1
sudo /sbin/ifup eth1
This restarted the ethernet port the acromag chassis is connected to. That solved the issue and we were able to see all the slow channels in BS and PRM.
But then we noticed that the ITMX OPLEV was unable to read the position of the beam on the QPD at all; no light was reaching the QPD. We went in, opened the ITMX table cover, and confirmed that the returning OPLEV beam is way off and is not even hitting one of the steering mirrors that brings it to the QPD. We switched off the OPLEV contribution to the damping.
We did burt restore to 16th June morning using
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Jun/16/06:19/c1susaux.snap -l /tmp/controls_1210622_095432_0.write.log -o /tmp/controls_1210622_095432_0.nowrite.snap -v
This did not solve the issue.
Then we noticed that the OSEM signals from ITMX were saturated in opposite directions for Left and Right OSEMs. The Left OSEM fast channels are saturated to 1.918 um for UL and 1.399 um for LL, while both right OSEM channels are bottomed to 0 um. On the other hand, the acromag slow PD monitors are showing 0 on the right channels but 1097 cts on UL PDMon and 802 cts in LL PD Mon. We actually went in and checked the DC voltages from the PD input monitor LEMO ports on the ITMX dewhitening board D000210-A1 and measured non-zero voltages across all the channels. Following is a summary:
We even took out the 4-pin LEMO outputs from the dewhitening boards that go to the anti-aliasing chassis and checked the voltages. They are the same as the input voltages, as expected. So the dewhitening board is doing its job fine and the OSEMs are doing their job fine.
It is weird that both the ADC and the acromags are reading these values wrong. We believe this is causing a big yaw offset in the ITMX control signal, turning ITMX enough to push the OPLEV out of range. We checked the CDS FE status (attachment 1). Other than c1rfm showing a yellow bar (bit 2 = GE FANUC RFM card 0) in RT Net Status, nothing else seems wrong with the c1sus computer. The c1sus FE model is running fine. c1x02 (the lower-level model) does show a red bar in TIM, which suggests some timing issue. This is present in c1x04 too.
Currently, the ITMX coil outputs are disabled as we can't trust the OSEM channels. We're investigating more why any of this is happening. Any input is welcome.
Anchal and I wrote a script (Attachment 1) that will test the ADC and DAC connections with inputs on the INMON from -3000 to 3000. We could not run it because some of the channels seemed to be frozen.
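For context, a minimal sketch of what such a sweep could look like (assuming pyepics is available; the channel names below are placeholders, and the actual script is in Attachment 1):

```python
import time
import numpy as np
from epics import caput, caget  # pyepics

# Placeholder channel names -- the real ones are defined in the attached script.
EXC_CHAN = 'C1:SU2-TEST_DAC_OFFSET'   # offset written to the INMON test point
MON_CHAN = 'C1:SU2-TEST_ADC_OUTMON'   # ADC readback

offsets = np.arange(-3000, 3001, 500)
readings = []
for off in offsets:
    caput(EXC_CHAN, float(off))
    time.sleep(0.5)               # let the value settle before reading back
    readings.append(caget(MON_CHAN))

np.savetxt('dac_adc_sweep.txt', np.column_stack([offsets, readings]))
```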
Today I glued some magnets to dumbbells.
First, I took 6 magnets (the maximum I can glue in one go) and divided them into 3 north and 3 south, each triplet on a different razor (attachment 1).
I put the gluing fixture I found on top of these magnets so that each magnet sits in a hole in the fixture. I closed the fixture, but not all the way, so that the dumbbells could get in easily (attachment 2).
I prepared EP-30 glue according to this DCC. I tested the mixture by putting some of it in the small toaster oven in the cleanroom for 15 min at 200 degrees F.
The first two batches came out sticky and soft. I discarded the glue cartridge and opened a new one. The oven test results with the new cartridge were much better: a smooth and hard surface. I picked up some glue with a needle and applied it to the surface of 6 dumbbells I had prepared in advance. I dropped the dumbbells glue-side down into the magnet holes in the fixtures (attachment 3). I tightened the fixture, put some weight on it, and let it cure over the weekend.
I also pushed the cut Viton tips that Jordan cleaned into the vented screws. While screwing small EQ stops into the lower clamps I found some problems: 4 of the lower clamps need rethreading. This is quite urgent because without those 4 clamps we don't have enough SOS towers. Moreover, I found that the screws we bought from UC Components to hold the lower clamps on the SOS towers are silver plated. This is a mistake in the SOS schematics (part 23); they should be SS.
I propose we set up a temperature sensor network as described in attachment 1.
Here there are two types of units:
These sensors can be configured over the network by going to their assigned IP addresses. I'm not sure at the moment how to configure the db files to get them to write to slow EPICS channels.
We will have an unused port on the BASE-GATEWAY (#B) which can be used to run another temperature sensor, maybe at an important rack, the PSL table, or somewhere else.
In the future, if more sensors are required, there are expansion (network-switch-like) options for the BASE-GATEWAY or daisy-chain options for the probes.
Edit Fri Jun 18 16:28:13 2021 :
See this [wiki page](https://wiki-40m.ligo.caltech.edu/Physical_Environment_Monitoring/Thermometers) for updated plan and final quote.
We measured the OLTF of the Xend green laser PDH loop with the excitation added at the control point using an SR560, as shown in attachment 1 of 16202. We also measured the coherence of our measurement; see attachment 1.
This is a belated report: we received 5 more sat amps on June 4th. (I said 7 more, but it was 6 more.) So we still have one more sat amp coming from Todd.
- 1 already delivered long ago
- 8 received from Todd -> DeLeone -> Chub. They are in the lab.
- 11 units on May 21st
- 5 units on Jun 4th
Total 1+8+11+5 = 25
1 more unit is coming
11 new Satellite Amps were picked up from Downs. 7 more are coming from there. I have one spare unit I made. 1 sat amp has already been used at MC1.
We had 8 HAM-A coil drivers delivered from the assembly company. We also have two coil drivers delivered from Downs (tested by Anchal).
25 HAM-A coil driver units were fabricated by Todd and I've transported them to the 40m.
2 units we had already received earlier.
The last (1) unit has been completed, but Luis wants to use it for some A+ testing. So 1 more unit is coming.
Jon suggested rebooting the acromag chassis and then the computer, and we did this without success. Then Koji suggested we try running ifup eth0, so we ran `sudo /sbin/ifup eth0` and it worked to put c1susaux back on the martian network, but the modbus service was still down. We switched off the chassis, rebooted the computer, and had to do `sudo /sbin/ifup eth0` again (why do we need to do this manually every time?). We switched on the chassis but still had no channels. `sudo systemctl status modbusioc.service` gave us an inactive (dead) status, so we ran `sudo systemctl restart modbusioc.service`.
The status became:
● modbusIOC.service - ModbusIOC Service via procServ
Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
Active: inactive (dead)
start condition failed at Thu 2021-06-17 16:10:42 PDT; 12min ago
ConditionPathExists=/opt/rtcds/caltech/c1/burt/autoburt/latest/c1susaux.snap was not met
After another iteration we finally got a modbusIOC.service OK status, and we then repeated Jon's reboot procedure. This time, the acromags were on but reading 0.0, so we just needed to run `sudo /sbin/ifup eth1` and finally some sweet slow channels were read. As a final step we burt restored to today's 05:19 AM c1susaux.snap file and managed to relock the IMC >> will keep an eye on it.... Finally, in the process of damping all the suspended optics, we noticed some OSEM channels on BS and PRM are reading 0.0 (they are red as we browse them)... We succeeded in locking both arms, but this remains an unknown for us.
The MC1 LL sensor showed signs of large fluctuating offsets. We tried to find the issue in the box but couldn't find any. On power cycling, the sensor went back to normal. But while putting the box back, we bumped something and the c1susaux slow channels froze. We tried to reboot it, but that didn't work and the channels no longer exist.
This morning we found that the IMC had struggled to lock all night (see attachment 1). We had an indication yesterday evening that the MC1 LL sensor PD had a higher variance than usual, and Paco had to reset the WFS offsets because they had integrated the noise from this sensor. Something similar happened last night: a false offset and its fluctuation overwhelmed the WFS and MC1 got misaligned, making it impossible for the IMC to lock.
In the morning, Paco again reset the WFS offsets, but now we were sure that the PD variance of the MC1 LL OSEM was very high. Attachment 2 shows how only 1 OSEM has higher noise in comparison to the other 4 OSEMs. This behavior is similar to what we saw earlier in 16138, but for the UL sensor. Koji and I fixed that in 16139 and we tested all the other channels too.
So Paco and I went ahead and took out the MC1 satellite amplifier box S2100029 D1002812, opened the top, and checked all the PD channel testpoints with no input current. We didn't find anything odd. Next we checked the LED driver circuit testpoints with LED OUT and GND shorted. We got 4.997 V on all LED MON testpoints, which indicates normal functioning.
We hooked everything back up on the MC1 satellite box and checked the sensor channels again on the MEDM screens. To our surprise, it started functioning normally. So maybe just a power cycle was required, but we still don't know what caused this issue.
BUT when I (Anchal) was plugging the power cables and DB25 connectors back in on the rear side of 1X4 after moving the box back into the rack, we found that the slow channels stopped updating. They just froze!
We got worried for some time as the negative power supply indicator LEDs on the acromag chassis (which is just below the MC1 satellite box) were not ON. We checked the power cables and had to open the side panel of the 1X4 rack to check how the power cables are connected. We found that there is no third wire in the power cables and the acromag chassis takes only a single-rail supply. We confirmed this by looking at another acromag chassis at the Xend. We pasted a note on the acromag chassis for future reference that it uses only positive rails and the negative LED monitors are not usually ON.
Back to solving the frozen acromag issue: we conjectured that maybe the ethernet connection was broken. The DB25 cables for the satellite box are a bit short and pull other cables around with them when connected. We checked all the ethernet cabling; it looked fine. On the c1susaux computer, we saw that the monitor LED for ethernet port 2, which is connected to the acromag chassis, was solid ON while the other one (which is probably the connection to the switch) was blinking.
We tried telnetting to the computer; it didn't work. The host refused the connection from the pianosa workstation. We tried pinging the c1susaux computer, and that worked. So we concluded that most probably the EPICS modbus server hosting the slow channels on c1susaux is unable to communicate with the acromag chassis, hence the solid LED on that ethernet port instead of a blinking one. We checked the computer restart procedure page for SLOW computers on the wiki and found that it says if telnet is not working, we can hard reboot the computer.
We hard rebooted the computer by long-pressing the power button and then pressing it again to turn it back on. We did this 3 times with the same result: the ethernet port 2 LED (acromag chassis) would blink, but the ethernet port 1 LED (connected to the switch) would not turn ON. We now cannot even ping the machine, let alone telnet into it. All SUS slow monitor channels are absent now, of course. We also tried pressing the reset button once (which the manual says reboots the machine), but we got the same outcome.
Now, we decided to stop poking around until someone with more experience can help us on this.
Bottom line: we don't know what caused the LL sensor issue, so it has not been fixed and can happen again. We lost all C1SUSAUX slow channels, which are the OSEM and COIL slow monitor channels for PRM, BS, ITMX, ITMY, MC1, MC2 and MC3.
Jon and I tested the ADC and DAC cards in both of the systems on the test stand. We had to swap out an 18-bit DAC for a 16-bit one that worked, but now both machines have at least one working ADC and DAC.
[Still working on this post. I need to look at what is in the machines to say everything ]
I installed 2 additional isolators in the Acromag chassis. I set all the input channels to PNP. I ran the digital inputs (EnableMon channels) through these isolators according to the previous post.
I tested the digital inputs in the following way:
I connected an 18 V voltage source to the signal wire under test through a 1 kOhm resistor. I connected the GND of the voltage source to the RTN wire of the feedthrough. When the voltage source was connected, the LED on the isolator turned on and the EPICS channel under test showed Enabled. When I disconnected the voltage source or shorted the signal wire to GND, the LED on the isolator turned off and the EPICS channel showed a Disabled state.
I made a flow sensor with a stick and tissue paper to check the airflow.
- The HVAC indicator was not lit, but it was just a bulb problem. The replacement bulb is inside the gray box.
- I went to the south arm. There are two big vent ducts for the outlets and intakes. Neither is flowing air.
The current temperature at 7 pm was ~30 degC. The max and min were 31 degC and 18 degC.
- Then I went to the vertex and the east arm. The outlets and intakes are flowing.
I updated the wiring diagram according to Koji's suggestion. According to the isolator manual, this configuration requires that the isolator input be configured as PNP.
Additionally, when the switch in the coil driver is open, the LED in the isolator signals an on state. Therefore, we might need to configure the Acromag to invert the input.
There are also the Run/Acquire channels that we might need to add to the wiring diagram. If we do need to read them using slow channels, we will have to pull them up, like the EnableMon channels, to use them as in the wiring diagram.
We replaced Mon 7 with an LCD monitor from the back bench. It is fed the analog BNC signal converted into VGA with a converter box that Paco bought. We can replace this monitor with another one if it is needed back on the bench. For now, we definitely need a monitor up there to show the IMC camera.
If my understanding is correct, the (photo-receiving) NPN transistor of the optocoupler is energized through the acromag. The LED side should be driven by the coil driver circuit. This is properly done for the "enable mon" through 750 Ohm and +V. However, "Run/Acquire" is a relay switch and there is nothing to drive the line. I propose adding a pull-up network to the Run/Acquire outputs. This way all 8 outputs become identical and symmetric.
We should test whether this configuration works properly. This can be done with just a manual switch, R = 750 Ohm, and a +V supply (+18 V, I guess).
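As a rough sanity check of the drive level in such a test (assuming a typical optocoupler LED forward drop of about 1.2 V; the exact value depends on the part):

\[ I_{\rm LED} \approx \frac{18\,{\rm V} - 1.2\,{\rm V}}{750\,\Omega} \approx 22\,{\rm mA}, \]

which is a reasonable operating current for this kind of optocoupler LED.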
Attachment 1 shows the case where the excitation is sent at the control point, i.e. the PZT output. As before, free-running laser noise in units of Hz/rtHz is added after the actuator, and I've also shown shot noise being added just before the detector.
Again, we have access to three output points for measurement: right at the output of the mixer (the PDH error signal), the feedback signal applied by the uPDH box (PZT Mon), and the output of the summing box SR560.
Doing loop algebra as before, we get:
So measurement of can be done by
This means that at 100 Hz, with 10 integration cycles, we should have needed only 3 mV of excitation to get an SNR of 100. However, we have been unable to get good measurements even with 25 mV of excitation. We tried increasing the number of cycles; that did not work either.
This post is to summarize this analysis. We need more tests to get any conclusions.
I have added more degrees of freedom. The model includes x, y, z, pitch, yaw, roll and is controlled by a matrix of transfer functions (See Attachment 2). I have added 5 control filters to individually control UL, UR, LL, LR, and side. Eventually, this should become a matrix too but for the moment this is fine.
Note the Unit delay blocks in the control in Attachment 1. The model will not compile without these blocks.
Working at the Xend with a mask on has become unbearable. It is very hot there and I would really like it if we fixed this issue.
Today, the Xend green laser was simply unable to hold lock for longer than tens of seconds. The longest I saw it hold lock was about 2 minutes. I couldn't find anything obviously wrong with it. Attached are noise spectra of the error and control points. The control point spectrum matches typical free-running laser noise well.
Are the few peaks above 10 kHz in the error point spectrum worrisome? I need to think more about it, in a cooler place, to make sure.
I wanted to take a high-frequency spectrum of the error point to make sure that higher harmonics of the 250 kHz modulation frequency are not leaking into the PDH box after demodulation. However, the lock could not be maintained long enough to take this final measurement. I'll try again tomorrow morning; it is generally cooler in the mornings.
This post is just an update on what's happening. I need to work more to get some meaningful inferences about this loop.
I checked the BI situation on the HAM-A coil driver. It seems like these are sinking BIs and indeed need to be isolated from the Acromag unit GND to avoid contamination.
The BIs will have to be isolated on a different isolator. Now, the wires coming from the field (red) are connected to the second isolator's input and the outputs are connected to the Acromag BI module and the Acromag's RTN.
I updated the wiring diagram (attached) and the wiring spreadsheet.
In the diagram, you can see that the BI isolator (the right one) is powered from the Acromag's +15 V and switched when the coil driver's GND is supplied. I am not sure whether this makes sense or not. In this configuration, there is a path between the coil driver's GND and the Acromag's GND, but its resistance is at least 10 kOhm. The extra-careful option is to power the isolator from the coil driver's +V, but there is no +V on any of the connectors coming out of the coil driver.
I installed an additional isolator on the DIN rail and wired the remaining BOs (C1:SUS-ETMY_SD_ENABLE, C1:SUS-ETMY_LR_ENABLE) through it to the DB9F-4 feedthrough. I also added DB9F-3 for incoming wires from the RTS and made the required connection from it to DB9F-4.
I tested the new isolated BOs using the Windows machine (after stopping Modbus). As before, I measured the resistance between pin 5 (coil driver GND) and the channel under test. When I turn on the BO, the resistance drops from infinity to 166 Ohm, and it returns to infinity when I turn it off. Both channels passed the test.
Stephen and I discussed the in-vacuum OMC wiring.
- One of the OMCs has already been completed. (Blue)
- The other OMC is still being built. This means that these cables need to be built. (Pink)
- However, the cables for the former OMC should also be replaced, because the cable harness needs to be changed from the metal one to the PEEK one.
- The replacement of the harness can be done by releasing the Glenair Mighty Mouse connectors from the harness. (This probably requires a special tool)
- The link to the harness photo is here: https://photos.app.goo.gl/3XsUKaDePbxbmWdY7
- We want to combine the signals for the two OMCs into three DB25s. (Green)
- These cables are custom and need to be designed.
- The three standard aLIGO-style cables are going to be used. (Yellow)
- The cable stand here should be the aLIGO style.
Attachment 1 shows the closed loop of the Xend green laser arm PDH lock loop. Free-running laser noise gets injected at the laser head after the PZT actuation as . The PDH error signal at the output of the mixer is fed to a gain-1 SR560, used as a summing junction here. Used in 'A-B' mode, the B port is used for sending in the excitation where .
We have access to three ports for measurement, marked at the output of the mixer, at the output of the SR560, and at the PZT output monitor port on the uPDH box. From loop algebra, we get the following:
, where is the open loop transfer function of the loop.
So measurement of can be done in the following two ways (not a complete set):
We did a quick check to make sure there is no saturation in the C1:SUS-ITMX_LSC_EXC analog path. For this, we looked at the SOS driver output monitors from the 1X4 chassis near MC2 on a scope. We found that even with 600 x 10 = 6000 counts of our 619 Hz excitation, these outputs in particular are not saturating (the highest mon signal was the UL coil at 5.2 Vpp). In comparison, the calibration trials we have done before used 600 counts of amplitude, so we can safely increase our oscillator strength by that much.
Things that remain to be investigated -->
I have attached an updated transfer function graph with the residual made easier to see. I thought I would include here a better explanation of what this transfer function is measuring.
This transfer function was mainly about learning how to use DTT and Foton to make and measure transfer functions. Therefore it just measures across a single CDS filter block, the X1SUP_ISOLATED_C1_SUS_SINGLE_PLANT_Plant_POS_Mod block to be specific. The measurement shows that the block is doing what I programmed it to do with Foton. The residual is probably just because the measured TF had fewer points than the calculated one.
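One quick way to check that explanation is to interpolate the calculated TF onto the measured frequency points before taking the residual. A minimal sketch, assuming the measured and calculated TFs are complex numpy arrays on their own frequency vectors:

```python
import numpy as np

def tf_residual(f_meas, tf_meas, f_calc, tf_calc):
    """Interpolate the calculated TF onto the measured frequencies
    and return the complex residual, so the two are compared
    point-by-point instead of on different frequency grids."""
    tf_calc_interp = np.interp(f_meas, f_calc, tf_calc.real) \
                     + 1j * np.interp(f_meas, f_calc, tf_calc.imag)
    return tf_meas - tf_calc_interp
```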
The next step is to take a closed-loop TF of the system and the control module.
After that, I want to add more degrees of freedom to the model, both in the plant and in the controls.
We measured the Xend green laser PDH open loop transfer function by the following method:
I tested the digital inputs the following way: I connected a DB9 breakout to DB9M-5 and DB9M-6 where digital inputs are hosted. I shorted the channel under test to GND to turn it on.
Using caget, I observed the channels turn from Disabled to Enabled when I shorted the channel to GND, and from Enabled to Disabled when I disconnected it.
I did this for all the digital inputs and they all passed the test.
I am still waiting for the other isolator to wire the rest of the digital outputs.
Next, I believe we should take some noise spectra of the Y end before we do the installation.
Tomorrow I will test the BIs using EPICS.
We attempted to simulate "oscillator-based realtime calibration noise monitoring" in offline analysis with python. This helped us find a factor of sqrt(2) that we were missing earlier in 16171. We measured C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ when the X arm was locked to the main laser and the Xend green laser was locked to the X arm. An excitation signal of amplitude 600 was sent at 619 Hz at C1:ITMX_LSC_EXC.
We got a value of . The calibration factor in use is from nm/cts from 13984.
Next steps could be to budget this noise while we set up some way of generating this calibration factor in realtime using oscillators on an FE model. Calibrating the actuation of a single optic in a single arm is easy, so this is a good test setup for getting a noise budget of this calibration method.
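For reference, a minimal sketch of how a single calibration line can be demodulated offline to estimate its amplitude (the sample rate and data here are placeholders; this is not necessarily exactly what our analysis script does). The peak-amplitude versus RMS convention in the last step is a typical place for a stray factor of sqrt(2) to hide:

```python
import numpy as np

def demod_line(data, fs, f_line=619.0):
    """Estimate the amplitude of a single calibration line by
    digital lock-in demodulation (I/Q mixing and averaging)."""
    t = np.arange(len(data)) / fs
    i_comp = np.mean(data * np.cos(2 * np.pi * f_line * t))
    q_comp = np.mean(data * np.sin(2 * np.pi * f_line * t))
    # Factor of 2 recovers the peak amplitude of A*cos(2*pi*f*t + phi)
    return 2 * np.hypot(i_comp, q_comp)

# Example with a synthetic signal:
fs = 16384.0
t = np.arange(0, 10, 1/fs)
x = 3.5 * np.cos(2*np.pi*619.0*t + 0.3) + np.random.randn(len(t))
print(demod_line(x, fs))   # ~3.5
```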
Added the difference to the graph. I included the code so that others can see what it looks like and use it easily.
We found Mon7 in the control room dead this afternoon. Its front power-on green light is not lighting up. All other monitors are working as normal.
This monitor was used for looking at IMC camera analog feed. It is one of the most important monitors for us, so we should replace it with a different monitor.
Yehonathan and Paco disconnected the monitor and brought it down. We put it under the back table in case anyone wants to fix it. Paco has ordered a BNC to VGA/HDMI converter so we can put any normal monitor up there. That will happen this Wednesday. Meanwhile, I have changed the MON4 assignment from POP to Quad2 to be used for the IMC.
I added a new XT1111 Acromag module to the c1auxey chassis. I sanitized and configured it according to the slow machines wiki instructions.
Since all the spare BIOs fit on one DB37 connector, I didn't add another feedthrough and combined them all on the same DB37 connector. This was possible because all the RTNs of the BIOs are tied to the chassis ground and therefore need only one connection. I changed the wiring spreadsheet accordingly.
I did a lot of rewiring and also cut short several long wires that were protruding from the chassis. I tested all the wires from the feedthroughs to the Acromag channels and fixed some wiring mistakes.
There is still an open issue with the BI channels not being read by EPICS. They can still be read by the Windows machine, though.
I looked into the issue that Yehonathan reported with the BI channels. I found the problem was with the .cmd file which sets up the Modbus interfacing of the Acromags to EPICS (/cvs/cds/caltech/target/c1auxey1/ETMYaux.cmd).
The problem is that all the channels on the XT1111 unit are being configured in Modbus as output channels. While it is possible to break up the address space of a single unit, so that some subset of channels are configured as inputs and another as outputs, I think this is likely to lead to mass confusion if the setup ever has to be modified. A simpler solution (and the convention we adopted for previous systems) is just to use separate Acromag units for BI and BO signals.
Accordingly, I updated the wiring plan to include the following changes:
So, one more Acromag XT1111 needs to be added to the c1auxey chassis, with the wiring changes as noted above. I have already updated the .cmd and EPICS database files in /cvs/cds/caltech/target/c1auxey1 to reflect these changes.
According to the BO interface circuit board https://dcc.ligo.org/D1001266, PCIN wires are connected to the coil driver and they are not pulled either way.
That means that they're either grounded or floating. I updated the drawing.
This RTS also uses the BO interface with an opto-isolator: https://dcc.ligo.org/LIGO-D1002593
Could you also include the pull up/pull down situations?
Since this Ocean Controls optoisolator has been shown to be compatible, I've gone ahead and ordered 10 more:
They are expected to arrive by Wednesday.
Here is an update and status report on the new BHD front-ends (FEs).
The changes to the FE BIOS settings documented in  do seem to have solved the timing issues. The RTS models ran for one week with no more timing failures. The IOP model on c1sus2 did die due to an unrelated "Channel hopping detected" error. This was traced back to a bug in the Simulink model, where two identical CDS parts were both mapped to ADC_0 instead of ADC_0/1. I made this correction and recompiled the model following the procedure in .
For lack of a better name, I had originally set up the user model on c1sus2 as "c1sus2.mdl". This week I standardized the name to follow the three-letter subsystem convention, as four letters led to some inconsistency in the naming of the auto-generated MEDM screens. I renamed the model c1sus2.mdl -> c1su2.mdl. The updated table of models is below.
Renaming an RTS model requires several steps to fully propagate the change, so I've documented the procedure below for future reference.
On the target FE, first stop the model to be renamed:
controls@c1sus2$ rtcds stop c1sus2
Then, navigate to the build directory and run the uninstall and cleanup scripts:
controls@c1sus2$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1sus2$ make uninstall-c1sus2
controls@c1sus2$ make clean-c1sus2
Unfortunately, the uninstall script does not remove every vestige of the old model, so some manual cleanup is required. First, open the file /opt/rtcds/caltech/c1/target/gds/param/testpoint.par and manually delete the three-line entry corresponding to the old model:
If this is not removed, reinstallation of the renamed model will fail because its assigned DCUID will appear to already be in use. Next, find all relics of the old model using:
and manually delete each file and subdirectory containing the "sus2" name. Finally, rename, recompile, reinstall, and relaunch the model:
I used a tool developed by Chris, mdl2adl, to auto-generate a set of temporary sitemap/model MEDM screens. This package parses each Simulink file and generates an MEDM screen whose background is an .svg image of the Simulink model. Each object in the image is overlaid with a clickable button linked to the auto-generated RTS screens. An example of the screen for the C1BHD model is shown in Attachment 1. Having these screens will make the testing much faster and less user-error prone.
I generated these screens following the instructions in Chris' README. However, I ran this script on the c1sim machine, where all the dependencies including Matlab 2021 are already set up. I simply copied the target .mdl files to the root level of the mdl2adl repo, ran the script (./mdl2adl.sh c1x06 c1x07 c1bhd c1su2), and then copied the output to /opt/rtcds/caltech/c1/medm/medm_teststand. Then I redefined the "sitemap" environment variable on the chiara clone to point to this new location, so that they can be launched in the teststand via the usual "sitemap" command.
Currently, we are missing five 18-bit DACs needed to complete the c1sus2 system (the c1bhd system is complete). Since the first shipment, we have had no luck getting additional 18-bit DACs from the sites, and I don't know when more will become available. So, this week I took an inventory of all the 16-bit DACs available at the 40m. I located four 16-bit DACs, pictured in Attachment 2. Their operational states are unknown, but none were labeled as known not to work.
The original CDS design would call for 40 more 18-bit DAC channels. Between the four 16-bit DACs there are 64 channels, so if only 3/4 of these DACs work we would have enough AO channels. However, my search turned up zero additional 16-bit DAC adapter boards. We could first check whether Rolf or Todd have any spares. If not, I think it would be relatively cheap and fast to have four new adapters fabricated.
DAQ network limitations and plan
To get deeper into the signal-integrity aspect of the testing, it is going to be critical to get the secondary DAQ network running in the teststand. Of all the CDS tools (Ndscope, Diaggui, DataViewer, StripTool), only StripTool can be used without a functioning NDS server (which, in turn, requires a functioning DAQ server). StripTool connects directly to the EPICS server run by the RTS process. As such, StripTool is useful for basic DC tests of the fast channels, but it can only access the downsampled monitor channels. Ian and Anchal are going to carry out some simple DAC-to-ADC loopback tests to the furthest extent possible using StripTool (using DC signals) and will document their findings separately.
We don't yet have a working DAQ network because we are still missing one piece of critical hardware: a 10G switch compatible with the older Myricom network cards. In the older RCG version 3.x used by the 40m, the DAQ code is hardwired to interface with a Myricom 10G PCIe card. I was able to locate a spare Myricom card, pictured in Attachment 3, in the old fb machine. Since it looks like it is going to take some time to get an old 10G switch from the sites, I went ahead and ordered one this week. I have not been able to find documentation on our particular Myricom card, so it might be compatible with the latest 10G switches but I just don't know. So instead I bought exactly the same older (discontinued) model as is used in the 40m DAQ network, the Netgear GSM7352S. This way we'll also have a spare. The unit I bought is in "like-new" condition and will unfortunately take about a week to arrive.
I mounted the optoisolator on the DIN rail and connected the first 3 channels to optoisolator inputs 1, 3, and 4 respectively. I connected the +15V input voltage to the input(+) of the optoisolator.
The outputs were connected to DB9F-2 where those channels were connected before.
I added DB9F-1 to the front panel to accept channels from the RTS. I connected the fast channels to connectors 1,2,3 from DB9F-1 to DB9F-2 according to the wiring diagram. The GND from DB9F-1 was connected to both connector 5 of DB9F-2 and the output (-).
I tested the channels: I connected a DB9 breakout board to DB9F-2 and measured the resistance between the RTS GND and the isolated channels while switching them on and off. In the beginning, when I turned on the binary channels, the resistance behaved weirdly, oscillating between low resistance and open circuit. I pulled the channels up through a 100 kOhm resistor to observe whether the voltage behavior was reasonable. Indeed, I observed that in the LOW state the voltage between the isolated channel and the slow GND is 15 V, and 0.03 V in the HIGH state. Then I disconnected the pull-up from the channels and measured the resistance again. It showed a stable ~170 Ohm in the HIGH state and an open circuit in the LOW state. I was not able to reproduce the weird initial behavior. Maybe the optoisolator needs some kind of warmup.
We still need to wire the rest of the fast channels to DB9F-3 and isolate the channels in DB9F-4. For that, we need another optoisolator.
I made a diagram (Attached). I think it explains the blue thing in the previous post.
I don't know what the grounding situation in the RTS is, so I put a ground on both the coil driver and the RTS. Hopefully, only one of them is connected in reality.
- Could you explain what is the blue thing in Attachment 1?
- To check the validity of the signal chain, can you make a diagram summarizing the path from the fast BO - BO I/F - Acromag - This opto-isolator - the coil driver relay? (Cut-and-paste of the existing schematics is fine)
I borrowed the little red cart 🛒 to help clear the path for new optical tables in B252 West Bridge. Will return once I am done with it.
I fixed the PSL shutter button on the Shutters summary page C1IOO_Mech_Shutter.adl. Now the PSL switch changes the C1:PSL-PSL_ShutterRqst channel. Earlier it was C1:AUX-PSL_ShutterRqst, which doesn't do anything.
As Jon wrote, we need to use the NPN configuration (see attachments). I tested the isolator channels in the following way:
1. I connected +15V from the power supply to the input(+) contact.
2. Signal wire from one of the digital outputs was connected to I1-4
3. When I set the digital output to HIGH, the LED on the isolator turns on.
4. I measured the resistance between O1-4 and output(-) and found it to be ~100 Ohm in the HIGH state and an open circuit in the LOW state, as expected from an open-collector output.
Unlike the Acromag output, the isolator output is not pulled up in the LOW state. To do so, we would need to connect +15 V to the output channel through a pull-up resistor. For now, I am leaving it with no pull-up. According to the schematics of the HAM-A coil driver, the digital output channels drive an electromagnetic relay (I think), so it might not need to be pulled up to switch back. I'm not sure. We will need to check the operation of these outputs at installation.
During the testing of the isolator output pull-up, I accidentally ran a high current through O2, frying it. It is now permanently shorted to the + and - outputs, rendering it unusable. In any case, we need another isolator, since we have 5 channels that need to be isolated.
I mounted the isolator on the DIN rail and started wiring the digital outputs into it. I connected the GND from the RTS to output(-) so that when the digital outputs are HIGH, the channels in the coil driver are sunk to the RTS GND and not the slow one, avoiding GND contamination.
I was able to measure the transfer function of the plant filter module from the channel X1:SUP-C1_SUS_SINGLE_PLANT_Plant_POS_Mod_EXC to X1:SUP-C1_SUS_SINGLE_PLANT_Plant_POS_Mod_OUT. The resulting transfer function is shown below. I have also attached the raw data for making the graph.
Next, I will make a script that generates the Foton filters for all the degrees of freedom, and start working on the matrix version of the filter module so that there can be multiple degrees of freedom.
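As a sketch of what that script could do (the resonance frequencies and Qs below are placeholders, not the real suspension parameters), each single-DOF plant can be written as a zpk triple: no zeros, a complex pole pair at the resonance, and unity DC gain. The values would still need to be converted to Foton's conventions before being loaded.

```python
import numpy as np

def pendulum_zpk(f0, Q):
    """zpk description of a single-DOF damped pendulum plant:
    no zeros, a complex pole pair at f0 with quality factor Q,
    and unity DC gain."""
    w0 = 2 * np.pi * f0
    poles = np.roots([1.0, w0 / Q, w0**2])   # roots of s^2 + (w0/Q)s + w0^2
    gain = w0**2                             # normalizes the DC gain to 1
    return [], poles, gain

# Placeholder resonance parameters, one entry per degree of freedom:
dofs = {'POS': (0.98, 5.0), 'PIT': (0.71, 5.0), 'YAW': (0.80, 5.0), 'SIDE': (0.99, 5.0)}
for name, (f0, Q) in dofs.items():
    zeros, poles, gain = pendulum_zpk(f0, Q)
    print(name, poles, gain)
```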
Following the conclusion, we are reverting the suspension gains to old values, i.e.
Meanwhile, the F2A filters, AC coil gains, and input matrices are changed to those mentioned in 16066 and 16072.
The changes can be reverted all the way back to old settings (before Paco and I changed anything in the IMC suspensions) by running python scripts/SUS/general/20210602_NewIMCOldGains/restoreOldConfigIMC.py on allegra. The new settings can be uploaded back by running python scripts/SUS/general/20210602_NewIMCOldGains/uploadNewConfigIMC.py on allegra.
Unix Time = 1622676038
GPS Time = 1306711256
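For reference, the two timestamps are related by the GPS-Unix epoch offset (315964800 s) plus the leap seconds accumulated since 1980 (18 s at this date); a quick sketch:

```python
GPS_UNIX_OFFSET = 315964800   # seconds between the 1970-01-01 and 1980-01-06 epochs
LEAP_SECONDS = 18             # GPS-UTC leap-second count as of 2021

def unix_to_gps(unix_time):
    """Convert a Unix timestamp to GPS time (valid for dates after 2017,
    when the leap-second count reached 18)."""
    return unix_time - GPS_UNIX_OFFSET + LEAP_SECONDS

print(unix_to_gps(1622676038))   # 1306711256, matching the values above
```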