ID | Date | Author | Type | Category | Subject |
379
|
Fri Mar 14 14:59:51 2008 |
josephb | Configuration | Cameras | Comparison between GC650 (CCD) and GC750 (CMOS) looking at ETMX | Attached are images taken of ETMX while locked.
The first two are 300,000 microsecond exposures, with approximately the same focusing/zoom (the 750 is slightly more zoomed in than the 650 in these images). The second two are 30,000 microsecond exposures.
The CMOS appears to be more sensitive to the 1064 nm reflected light, resulting in brighter images for the same exposure time. This may make a difference in applications where images need to be taken quickly and repeatedly.
Both seem to be resolving individual specks on the optic reasonably well.
Next test is to place both cameras on a Gaussian beam (in a couple of different modes, say 00, 11, and so forth), probably using the PMC. |
Attachment 1: ETMX_z2_exp_300000_650.tiff
|
Attachment 2: ETMX_z2_exp_300000_750.tiff
|
Attachment 3: ETMX_z2_exp_30000_650.tiff
|
Attachment 4: ETMX_z2_exp_30000_750.tiff
|
4632
|
Thu May 5 04:38:20 2011 |
Koji | Summary | LSC | Comparison between S3399 and FFD-100 | Comparison between Hamamatsu S3399 and Perkin Elmer FFD-100
These are the candidates for the BB PD for the green beat detection as well as aLIGO BB PD for 532nm/1064nm.
FFD-100 seems to be the better candidate.
Basic difference between S3399 and FFD-100
- S3399 Si PIN diode: 3mm dia., max bias = 30V, Cd=20pF
- FFD-100 Si PIN diode: 2.5mm dia., max bias = 100V, Cd=7pF
The circuit on page 1 was used for the amplifier.
- FFD-100 showed 5dB (= x1.8) larger responsivity for 1064nm compared with S3399. (Plot not shown. Confirmed on the analyzer.)
- -3dB BW: S3399 180MHz, FFD-100 250MHz for 100V_bias. For 30V bias, they are similar.
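As a rough sanity check (a sketch, assuming the diode capacitance against a 50 Ohm load sets the bandwidth; the real circuit also has amplifier poles, so this is only illustrative):
import numpy as np

R_load = 50.0                                   # Ohm, assumed load
for name, Cd in [("S3399", 20e-12), ("FFD-100", 7e-12)]:
    f_3dB = 1 / (2 * np.pi * R_load * Cd)       # single-pole RC bandwidth
    print(f"{name}: RC-limited f_3dB ~ {f_3dB / 1e6:.0f} MHz")
This gives ~159 MHz for the S3399 (measured: 180 MHz) and ~455 MHz for the FFD-100, so the measured 250 MHz suggests the amplifier, not the diode capacitance, is the limit for the FFD-100.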
|
Attachment 1: PD_response.pdf
|
|
16923
|
Thu Jun 16 17:40:15 2022 |
Yehonathan | Update | BHD | Comparison of MICH OLTF model with measurement | I made some progress in modeling the MICH loop.
Putting all the LSC and SUS filters together with the MICH Finesse model, I constructed an OLTF model and plotted it against the measurement done by Paco and Yuta in this elog (attachment 1).
There are 2 unknown numbers that I had to adjust in order to fit the model with the measurement:
1. The SUS damping loop gain (found to be ~ 2.22), which seems to vary wildly from SUS to SUS.
2. The coil driver gain (found to be 45), which I should measure.
I find coil_driver_gain*SUS_damping_filter_gain by increasing it until the SUS damping loop becomes unstable.
The coil driver gain I find by making the measurement and model overlap.
However, there is one outstanding discrepancy between the measurement and the model: Paco and Yuta measure the MICH calibration to be 1.3e9 cts/m while my model shows it to be 1.3e10 cts/m, an order of magnitude larger.
Details
The model can be summarized with these lines of code (I assume that the product of the ADCs (DACs) and the whitening (dewhitening) filters is unity):
BS2AS55 = TFs["AS_f2"]["BS"]
PD_responsivity = 1e3*0.8 #V/W
ADC_TF = 3276.8 #cts/V
demod_gain = 6.77 #According to https://wiki-40m.ligo.caltech.edu/Electronics/LSC_demoddulators
whitening_gain = 10**(24/20) #24 dB
BS2MICH = BS2AS55*PD_responsivity*demod_gain*whitening_gain*ADC_TF
DAC_TF = 6.285e-4 #V/cts, elog 16161
coil_TF = 0.016 #Newton/Ampere per coil, elog 15846
coil_R = 20e3 #Ohm
actuation_TF = DAC_TF*coil_TF/coil_R
OLTF = (BS2MICH*MICH_ctrl_cmplx*-6*0.5 + OSEM_filters_cmplx*OSEM_TF*2.22)*coil_filters_cmplx*actuation_TF*SUS_cmplx*45
- BS2AS55 is the optical plant, calculated with Finesse
- MICH_ctrl_cmplx is the MICH control filter with gain of -6
- 0.5 factor comes from the LSC output matrix
- OSEM_TF is the product of the OSEMs input filters and damping loop filters.
- coil_filters_cmplx are the coil filters
- SUS_cmplx is the suspension transfer function (w0 = 1Hz, Q=200); a sketch of this is below
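For reference, a minimal sketch of what SUS_cmplx could look like, assuming a simple damped pendulum with the quoted w0 = 1 Hz and Q = 200 (overall mass/gain scale omitted; the function name is illustrative):
import numpy as np

def sus_pendulum_tf(f, f0=1.0, Q=200.0):
    # force-to-displacement response of a damped harmonic oscillator
    w, w0 = 2 * np.pi * f, 2 * np.pi * f0
    return 1.0 / (w0**2 - w**2 + 1j * w0 * w / Q)

freqs = np.logspace(-1, 3, 1000)    # 0.1 Hz to 1 kHz
SUS_cmplx = sus_pendulum_tf(freqs)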
|
Attachment 1: MICH_Model_Measurement_Comparison.pdf
|
|
16927
|
Fri Jun 17 12:05:32 2022 |
Yehonathan | Update | BHD | Comparison of MICH OLTF model with measurement | I should write down what I didn't include for completeness:
1. AA filters
2. AS55 input 60Hz comb filter
3. Violin filters
After discussing with Paco, we agreed that the discrepancy in the MICH calibration might come from the IQ mixing angle, which is not optimized for the IFO, while in Finesse it is set such that all the amplitude is in one quadrature.
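As a quick illustration of the size of the effect (numbers hypothetical, not a measurement): if the model puts all the signal in one quadrature but the IFO demod phase is off by phi, the I-quadrature calibration shrinks by cos(phi).
import numpy as np

A_model = 1.3e10                    # model calibration, cts/m
for phi_deg in [0, 60, 84.3]:       # example demod-angle offsets
    A_meas = A_model * np.cos(np.radians(phi_deg))
    print(f"phi = {phi_deg:5.1f} deg -> {A_meas:.2e} cts/m")
A full factor of 10 would require phi ~ 84 deg, i.e. nearly all of the signal sitting in the other quadrature.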
Quote: |
I made some progress in modeling the MICH loop.
Putting all the LSC and SUS filters together with the MICH Finesse model, I constructed an OLTF model and plotted it against the measurement done by Paco and Yuta in this elog (attachment 1).
There are 2 unknown numbers that I had to adjust in order to fit the model with the measurement:
1. The SUS damping loop gain (found to be ~ 2.22), which seems to vary wildly from SUS to SUS.
2. The coil driver gain (found to be 45), which I should measure.
I find coil_driver_gain*SUS_damping_filter_gain by increasing it until the SUS damping loop becomes unstable.
The coil driver gain I find by making the measurement and model overlap.
However, there is one outstanding discrepancy between the measurement and the model: Paco and Yuta measure the MICH calibration to be 1.3e9 cts/m while my model shows it to be 1.3e10 cts/m, an order of magnitude larger.
|
|
16928
|
Fri Jun 17 13:07:08 2022 |
Koji | Update | BHD | Comparison of MICH OLTF model with measurement | I'm curious why the actual OLTF included the 60Hz comb...? It is undesirable to have such a structure in the OLTF around the UGF, as it can cause servo instability.
And if you remove them, you don't need to model them :-) |
1999
|
Thu Sep 24 20:17:05 2009 |
rana | Summary | LSC | Comparison of Material Properties for the new RFPD Mounts |
|
Property | Steel | Brass | Aluminum | Delrin
Density (kg/m^3) | 7850 | 8500 | 2700 | 1420
CTE (ppm/C) | 12 | 19 | 23 | 100
Young's Modulus (GPa) | 200 | 110 | 69 | 2
Hardness | | | |
Color | grey | gold | light silver | any
|
3855
|
Wed Nov 3 17:01:01 2010 |
josephb | Summary | CDS | Comparison of RFM read times | Problem:
RFM reads are slow. Rolf has said it should take 2-3 microseconds per read.
c1sus is taking about 7 microseconds per read, twice as slow as Rolf's claim.
Hypothesis:
The RFM card is in the IO chassis, and is sharing the PCIe bus with 4 ADC cards, 3 DAC cards, 4 BO cards, and a BIO card. It's possible this crowded bus is causing the reads to take even longer.
Test Results:
Compare read times between the c1sus computer, which has its RFM card in the IO chassis, and c1ioo, which has its RFM card in the computer.
c1ioo:
No RFM reads: 8 microseconds
3 RFM reads: 20 microseconds (~4 per read)
6 RFM reads: 32 microseconds (~4 per read)
c1sus:
No RFM read: 25 microseconds (bigger model)
1 RFM read: 33 microseconds (~8 per read)
3 RFM read: 45 microseconds (~7 per read)
6 RFM read: Over 62 microseconds, doesn't run.
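A quick check of the per-read times quoted above (just the arithmetic):
base_c1ioo, base_c1sus = 8, 25            # cycle times with no RFM reads, microseconds
for n, total in [(3, 20), (6, 32)]:       # c1ioo
    print(f"c1ioo: {n} reads -> {(total - base_c1ioo) / n:.1f} us/read")
for n, total in [(1, 33), (3, 45)]:       # c1sus
    print(f"c1sus: {n} reads -> {(total - base_c1sus) / n:.1f} us/read")
which gives 4.0 us/read for c1ioo and 8.0 / 6.7 us/read for c1sus, consistent with the numbers in parentheses above.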
Conclusion:
It looks like moving the RFM card may help by about a factor of 2 in read speed, although it's still not quite what Alex and Rolf claim it should be.
The c1mcs and c1ioo models have been reverted to their normal operations.
|
13938
|
Mon Jun 11 11:45:13 2018 |
keerthana | Update | elog | Comparison of the analytical and finesse values of TMS and FSR. |
Quantity | Analytical Value | Finesse Value | Percentage Error
Free Spectral Range (FSR) | 3.893408 MHz | 3.8863685 MHz | 0.180 %
Transverse Mode Spacing (TMS) | 1.195503 MHz | 1.1762885 MHz | 1.607 %
The values obtained from both the analytical and Finesse solutions are given in the above table, along with the corresponding percentage errors.
The parameters used for this calculation are listed below.
Parameter | Value
Length of the cavity (L) | 38.5 m
Wavelength of the laser beam (λ) | 1064 nm
Radius of curvature of ITM (R1) |
Radius of curvature of ETM (R2) | 58 m
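For reference, the analytic values follow from the standard cavity formulas; a sketch (R1 is blank in the table above, so a flat ITM is assumed here purely for illustration):
import numpy as np

c = 299792458.0      # m/s
L = 38.5             # m
R1 = np.inf          # ITM RoC: missing from the table, flat mirror assumed
R2 = 58.0            # m

FSR = c / (2 * L)                                  # free spectral range
g1, g2 = 1 - L / R1, 1 - L / R2                    # cavity g-factors
TMS = FSR * np.arccos(np.sqrt(g1 * g2)) / np.pi    # transverse mode spacing
print(f"FSR = {FSR / 1e6:.6f} MHz")                # 3.893408 MHz, matching the table
print(f"TMS = {TMS / 1e6:.6f} MHz")                # depends on the assumed R1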
The cavity scan data obtained from Finesse is also attached here. |
Attachment 1: finesse1.pdf
|
|
13941
|
Mon Jun 11 18:10:51 2018 |
Koji | Update | elog | Comparison of the analytical and finesse values of TMS and FSR. | Hmm? What is the definition of the percentage error? I don't obtain these numbers from the given values.
And how was the finesse value obtained from the simulation result? What is the frequency resolution used in the Finesse simulation? |
13943
|
Mon Jun 11 19:16:49 2018 |
keerthana | Update | elog | Comparison of the analytical and finesse values of TMS and FSR. | The percentage error I calculated is [(analytical value - finesse value)/analytical value]*100.
But in order to find the Finesse value, I just used the cursor to get the central frequency of each peak, and by subtracting one from the other I found the TMS and FSR.
The resolution was 6500 Hz. Thus, it seems that this method is not actually reliable. I am trying to find the central frequency of each mode with the help of Lorentzian fits (a sketch of the idea is after the script list below). I am attaching a fit which I did today. I have plotted its residuals as well.
I am uploading 4 python scripts to the github.
1. Analytical Solution
2. Finesse model- cavity scan
3. Finesse model- fitting
4. Finesse model- residual
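A minimal sketch of the kind of Lorentzian fit mentioned above (parameter names and starting values are illustrative, not taken from the actual scripts):
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, gamma, A, offset):
    # Lorentzian line shape centered at f0 with half-width gamma
    return A * gamma**2 / ((f - f0)**2 + gamma**2) + offset

# freqs, power = the cavity-scan data from Finesse (synthetic data here for the demo)
freqs = np.linspace(-50e3, 50e3, 1000)
power = lorentzian(freqs, 0.0, 6.5e3, 1.0, 0.0)
popt, pcov = curve_fit(lorentzian, freqs, power, p0=[0.0, 6.5e3, 1.0, 0.0])
f0_fit, f0_err = popt[0], np.sqrt(pcov[0, 0])    # fitted peak center and its 1-sigma error
Subtracting the fitted centers of two adjacent peaks then gives the FSR (or TMS) with a resolution much better than the 6500 Hz scan spacing.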
Quote: |
Hmm? What is the definition of the percentage error? I don't obtain these numbers from the given values.
And how was the finesse value obtained from the simulation result? What is the frequency resolution used in the Finesse simulation?
|
|
Attachment 1: fitting_1.pdf
|
|
13944
|
Mon Jun 11 22:05:03 2018 |
Koji | Update | elog | Comparison of the analytical and finesse values of TMS and FSR. | > The percentage error I calculated is [(analytical value - finesse value)/analytical value]*100
Yes, but this does not give us 0.70%:
(3.893408 - 3.8863685)/3.893408 *100 = 0.18%
But anyway, go for the fitting. |
13945
|
Mon Jun 11 22:18:18 2018 |
keerthana | Update | elog | Comparison of the analytical and finesse values of TMS and FSR. | Oops!! I made a mistake while taking the values from my notes. Sorry.
Quote: |
> The percentage error I calculated is [(analytical value - finesse value)/analytical value]*100
Yes, but this does not give us 0.70%:
(3.893408 - 3.8863685)/3.893408 *100 = 0.18%
But anyway, go for the fitting.
|
|
3339
|
Sat Jul 31 04:03:11 2010 |
Gopal | Summary | Optic Stacks | Complete Displacement Translational Transfer Function Matrix | Over the past 36 hours, I've run full-fledged FDAs on KALLO.
The transfer functions for translational drives and responses are neatly described by the attached transfer function "matrix."
Next steps:
1) Extend the 3x3 analysis to include tilts and rotations in a 6x6 analysis.
2) Figure out how to do the same types of tests for phase instead of displacement. |
16647
|
Fri Feb 4 10:21:39 2022 |
Anchal | Summary | General | Complete lab shutdown | Please edit this same entry throughout the day for the shutdown elogging.
I took a screenshot of C0VAC_MONITOR.adl to ensure that all pneumatic valves are in closed positions.
The status message says "All pneumatic valves closed" and the latest error message is about "V7 closed, N2 < 6.50e+01".
I found out that there was no autoburt happening for the c1vac channels. I created an autoBurt.req file for the vac.db file and saved one snapshot. I also added the path of this file to autoburt/.requestfilelist . Let's see if autoburting starts for this file as well.
With this, I think we can safely shut down the acromag chassis. Hopefully, the relays are configured such that the valves are nominally closed in the absence of a control signal. After the chassis is shut down, we can shut down C1VAC by:
sudo shutdown
[Chub, Jordan]
At the 1x8 rack, the following were switched off on their respective front panels:
PTP2 & PTP3 Controller
MKS Gauge controller
PRP Gauge Controller
G2P316a & b Controllers
Sorenson
Serial Device Server
Both UPS's
Powered off from back of unit:
TP1 Controller
Acromag chassis
TP2 and 3 controllers were unplugged from respective power strips (labeled C2 and C3)
C1vac and the laptop at the workstation were shut down
Manual Gate valve was closed |
17080
|
Mon Aug 15 15:43:49 2022 |
Anchal | Update | General | Complete power shutdown and startup documentation | All steps taken have been recorded here:
https://wiki-40m.ligo.caltech.edu/Complete_power_shutdown_2022_08 |
15204
|
Mon Feb 10 15:54:47 2020 |
Jordan | Update | PSL | Completed Acromag Wiring | All spare channels on the PSL acromag chassis are connected with ~12in of spare wiring for future use. |
4913
|
Wed Jun 29 22:35:06 2011 |
Nicole | Summary | SUS | Completed Quad photodiode Box Circuit Diagrams | I have finished drawing the circuit diagrams for the quad photodiode boxes. Here are copies of the circuit diagram.
There are three main operation circuits in the quad photodiode box: a summing circuit (summing the contributions from the four inputs),
a Y output circuit (taking the difference between the input sums 3+2 and 1+4), and an X output circuit (taking the difference between the
input sums 3+4 and 1+2). I will complete a mini report on my examination and conclusions of the QPD circuit for the suspension wiki tomorrow.
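In equation form, the three outputs are as follows (a sketch; quadrant numbering and signs as described in the text):
def qpd_outputs(q1, q2, q3, q4):
    total = q1 + q2 + q3 + q4       # summing circuit
    y = (q3 + q2) - (q1 + q4)       # Y output
    x = (q3 + q4) - (q1 + q2)       # X output
    return total, x, y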
|
3180
|
Thu Jul 8 16:24:30 2010 |
Gopal | Update | Optic Stacks | Completion of single stack layer | A single layer of the stack was successfully modeled in COMSOL. I'm now working on adding Viton springs and extending it to a full stack. Having some difficulty finding consistent parameters to work with. |
9708
|
Mon Mar 10 21:12:30 2014 |
Koji | Summary | LSC | Composite Error Signal for ARms (1) | The ALS error (i.e. phase tracker output) is linear everywhere, but noisy.
The 1/sqrt(TR) is linear and less noisy, but is not linear around the resonance and has no sign.
The PDH signal is linear and even less noisy, but the linear range is limited.
Why don't we combine all of these to produce a composite error signal that is linear everywhere and less noisy at the resonance?
This concept was confirmed by a simple mathematica calculation:
The following plot shows the raw signals with arbitrary normalizations:
1) ALS: (Blue)
2) 1/SQRT(TR): (Purple)
3) PDH: (Yellow)
4) Transmission (Green)
The following plot shows the preprocessed signals for composition
1) ALS: no preprocess (Blue)
2) 1/SQRT(TR): multiply sign(PDH) (Purple)
3) PDH: linearization with the transmission (if TR<0.1, use 0.1 for the normalization). (Yellow)
4) Transmission (Green)
The composite error signal
1) Use ALS at TR<0.03. Use 1/SQRT(TR)*sign(PDH)*(1-TR) + PDH*TR at TR>0.03
2) Transmission (Purple)
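A minimal sketch of the composition rule described above (thresholds and normalizations are the ad-hoc values quoted in the text):
import numpy as np

def composite_error(als, tr, pdh, thresh=0.03):
    # blend ALS, sign(PDH)/sqrt(TR), and linearized PDH into one error signal
    sqrt_inv = np.sign(pdh) / np.sqrt(np.clip(tr, 1e-6, None))
    pdh_lin = pdh / np.clip(tr, 0.1, None)        # if TR<0.1, use 0.1 for the normalization
    blended = sqrt_inv * (1 - tr) + pdh_lin * tr
    return np.where(tr < thresh, als, blended)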
|
Attachment 1: composite_linear_signal.nb.zip
|
9715
|
Tue Mar 11 15:14:34 2014 |
den | Summary | LSC | Composite Error Signal for ARms (1) |
Quote: |
The composite error signal
|
Very nice error signal. Still, I think we need to take into account the frequency shape of the transfer function TR -> CARM. |
9717
|
Tue Mar 11 15:21:08 2014 |
Koji | Summary | LSC | Composite Error Signal for ARms (1) | True. But we first want to realize this for a single arm, then move on to the two-arm case.
At this point we'll need to incorporate frequency dependence. |
9710
|
Mon Mar 10 21:14:58 2014 |
ericq | Summary | LSC | Composite Error Signal for ARms (3) | Using Koji's mathematica notebook, and Nic's python work, I set out to run a time domain simulation of the error signal, with band-limited white noise added in.
Basically, I sweep the displacement of the cavity (with no noise) and pass it to the analytical formulae with the coefficients Koji used, with some noise added in. I also included some 1/0 protection for the linearized PDH signal. I ran a sweep and then compared it to an ALS sweep that Jenne ran on Monday, reconstructing what the CESAR signal would have looked like in the sweep.
The noise amounts were totally made up.
They matched up very well, qualitatively! [Since the real sweep was done by a (relatively) noisy ALS, the lower noise of the real pdh signal was obscured.]
Given this good match, we were motivated to start trying to implement it on Monday.
At this point, since we've gotten it working on the actual IFO, I don't plan on doing much more with this simulation right now, but it may come in handy in the future... |
9751
|
Wed Mar 26 11:16:59 2014 |
ericq | Summary | LSC | Composite Error Signal for ARms (3) | Extending the previous model, I've closed a rudimentary CESAR loop in simulink. Error signals with varying noise levels are combined to bring a "cavity" to lock.
There are many things that are flat out arbitrary at this point, but it qualitatively works. The main components of this model are:
- The "Plant": A pendulum with f0 = 2Hz, Q = 10
- Some white force noise, low passed at 1Hz before input to the plant.
- The Controller: A very rough servo design that is stable...
- ALS signal: Infinite range Linear signal, with a bunch of noise
- Transmission and PDH signals are computed with some compiled C code containing analytic functions (which can be a total pain to get working); these have less noise than ALS
- Some logic for computing linearized PDH and SqrtInv signals
- A C code block for doing the CESAR mixing, and feeding to the servo
And it can lock!
Right now, all of the functions and noise levels are similar to the previous simulation, and therefore don't tell us anything about anything real...
However, at this point, I can tune the parameters and noise levels to make it more like our interferometer, and thus maybe actually useful. |
9711
|
Mon Mar 10 21:16:13 2014 |
Koji | Summary | LSC | Composite Error Signal for ARms (4) | The LSC model was modified for CESAR.
A block called ALSX_COMBINE was made in the LSC block. This block receives the signals for ALS (Phase Tracker output), TRX_SQRTINV, TRX, POX11 (Unnormalized POX11I).
It spits out the composite ALS signal.
Inside of the block we have several components:
1) a group of components for sign(x) function. We use the PDH signal to produce the sign for the transmission signal.
2) Hard triggering between ALS and TR/PDH signals. An epics channel "THRESH" is used to determine how much transmission
we should have to turn on the TR/PDH signals.
3) Blending of the TR and PDH. Currently we are using a confined TR between 0 and 1 using a saturation module. When the TR is 0, we use the 1/SQRT(TR) signal for the control,
When the TR is 1, we use the PDH signal for the control.
4) Finally the three processed signals are combined into a single signal by an adder.
It is important to give some consideration to the offsets. We want all of ALS, 1/SQRT(TR), and PDH to have their zero crossing at the resonance.
ALS tends to have an arbitrary offset. So we decided to use two offsets. One is before the CESAR block, in the ALS path.
The other is after the CESAR block. Right now we are using the XARM servo offset for the latter purpose.
We run the resonance search script to find the first offset. Once this is set, we never touch this offset until the lock is lost.
Then, for further scanning of the arm length, we used the offset in the XARM servo filter module. |
Attachment 1: ss1.png
|
|
Attachment 2: ss2.png
|
|
Attachment 3: CESAR_OFFSETS.pdf
|
|
9712
|
Mon Mar 10 21:16:56 2014 |
Koji | Summary | LSC | Composite Error Signal for ARms (5) | After making the CDS modification, CESAR was tested with ALS.
First of all, we ran CESAR with a threshold of 10. This means that the error signal always used ALS.
The ALS was scanned over the resonance. The plot of the scan can be found in EricQ's elog.
At each point of the scan, the arm stability is limited by the ALS.
Using this scan data, we could adjust the gains for the TR and PDH signals. Once the gains were adjusted,
the threshold was lowered to 0.25. This activates dynamic signal blending.
ALS was stabilized with XARM FM1/2/3/5/6/7/9. The resonance was scanned. No glitch was observed.
This is some level of success already.
The next step was to fully hand off the control to PDH. But this was not successful. Every time the gain for the TR was
reduced to zero, the lock was lost. When the TR is removed from the control, the raw PDH signal is used for the control
without normalization. Without turning on FM4, we lose 60dB of DC gain. Therefore the residual motion may have been
too big for the linear range of the PDH signal. This could be mitigated by the normalization of the PDH signal by the TR. |
9718
|
Tue Mar 11 18:33:21 2014 |
Koji | Update | LSC | Composite Error Signal for ARms (6) | Today we modified the CESAR block.
- Now the sign(X) function is in a block.
- We decided to use the linearization of the PDH.
- By adding the offset to the TR signal used for the switching between TR and PDH, we can force pure 1/sqrt(TR) or pure PDH to control the cavity. |
Attachment 1: 14.png
|
|
9719
|
Tue Mar 11 18:34:11 2014 |
Jenne | Update | LSC | Composite Error Signal for ARms (7) | [Nic, Jenne, EricQ, and Koji]
We have used CESAR successfully to bring the Xarm into resonance. We start with the ALS signal, then as we approach resonance, the error signal is automatically transitioned to 1/sqrt(TRX), and ramped from there to POX, which we use as the PDH signal.
In the first plot, we have several spectra of the CESAR output signal (which is the error signal for the Xarm), at different arm resonance conditions. Dark blue is the signal when we are locked with the ALS beatnote, far from IR resonance. Gold is when we are starting to see IR resonance (arm buildup of about 0.03 or more), and we are using the 1/sqrt(TRX) signal for locking. Cyan is after we have achieved resonance, and are using only the POX PDH signal. Purple is the same condition as cyan, except that we have also engaged the low frequency boosts (FM 2, 3, 4) in the locking servo. FM4 is only usable once you are at IR resonance, and locked using the PDH signal. We see in the plot that our high frequency noise (and total RMS) decreases with each stage of CESAR (ALS, 1/sqrt(TR) and PDH).
To actually achieve the gold noise level of 1/sqrt(TR), we first had to increase the analog gain by swapping out a resistor on the whitening board.
The other plots attached are time series data. For the python plots (last 2), the error signals are calibrated to nanometers, but the dark blue, which is the transmitted power of the cavity, is left in normalized power units (where 1 is full IR resonance).
In the scan from off resonance to on resonance, around the 58 second mark, we see a glitch when we engage FM4, the strong low frequency boosts. Around the 75 second mark we turned off any contribution from 1/sqrt(TR), so the noise decreases once we are on pure PDH signal.
In the scan through the resonance, we see a little more clearly the glitch that happens when we switch from ALS to IR signals, around the 7 and 12 second marks.
We want to make some changes, so that the transition from ALS to IR signals is more smooth, and not a discrete switch.
|
Attachment 2: Screenshot-1.png
|
|
Attachment 3: ScanFromOffToOnResonance.pdf
|
|
Attachment 4: ScanThroughResonance.pdf
|
|
9724
|
Thu Mar 13 01:18:00 2014 |
Jenne | Update | LSC | Composite Error Signal for ARms (8) | [Jenne, EricQ]
As Koji suggested in his email this afternoon, we looked at how much actuator range is required for various forms of arm locking: (1) "regular" PDH lock acquisition, (2) ALS lock acquisition, (3) CESAR cooling.
To start, I looked at the spectra and time series data of the control signal (XARM_OUT) for several locking situations. Happily, when the arm is at the half fringe, where we expect the 1/sqrt(TRX) signal to be the most sensitive (versus the same signal at other arm powers), we see that it is in fact quieter than even the PDH signal. Of course, we can't ever use this signal once the arm is at resonance, so we haven't discovered anything new here.
EricQ then made some violin plots with the time series data from these situations, and we determined that a limit of ~400 counts encompasses most of the steady-state peak-to-peak output from locking on the PDH signal.
[ericq: What's being plotted here are "kernel density estimates" of the time series data of XARM_OUT when locked on these signals. The extent of the line goes to the furthest outlier, while the dashed and dotted lines indicate the median and quartiles, respectively]
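For reference, the violin outline is just a kernel density estimate of the control signal; a sketch with synthetic data standing in for XARM_OUT:
import numpy as np
from scipy.stats import gaussian_kde

samples = np.random.default_rng(0).normal(0, 130, 16384)    # stand-in for XARM_OUT counts
kde = gaussian_kde(samples)
grid = np.linspace(samples.min(), samples.max(), 400)
density = kde(grid)                                   # the violin outline is this density, mirrored
q1, med, q3 = np.percentile(samples, [25, 50, 75])    # the dotted/dashed guide lines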
I tried acquiring ALS and transitioning to the final PDH signals with different limiters set in the Xarm servo. I discovered that it's not too hard to do with a limit of 400 counts, but below ~350 counts I can't keep the ALS locked long enough to find the IR resonance. Here's a plot of acquiring ALS lock, scanning for the resonance, and then using CESAR to transition to PDH, with the limit of 400 counts in place, and then a lockloss at the end. Even though I'm hitting the rails pretty consistently until I transition to the quieter signals, I don't ever lose lock (until, at the end, I started pushing other buttons...).
After that, I tried acquiring lock using our "regular" PDH method, and found that it wasn't too hard to capture lock with a limit of 400, but with limits below that I can't hold the lock through the boosts turning on.
Finally, I took spectra of the XARM_OUT control signal while locked using ALS only, but with different limiter values. Interestingly, I see much higher noise between 30-300 Hz with the limiter engaged, but the high frequency noise goes down. Since the high frequency is dominating the RMS, we see that the RMS value is actually decreasing a bit (although not much).
We have not made any changes to the LSC model, so there is still no smoothing between the ALS and IR signals. That is still on the to-do list. I started modifying things to be compatible with CARM rather than a single arm, but that's more of a daytime-y task, so that version of the c1lsc model is saved under a different name, and the model that is currently compiled and running is reverted to the "c1lsc.mdl" file. |
9728
|
Fri Mar 14 12:18:55 2014 |
Koji | Update | LSC | Composite Error Signal for ARms (9) | Asymptotic cooling of the mirror motion with CESAR was tested.
With ALS and the full control bandwidth (UGF 80-100Hz), an actuator amplitude of 8000cnt_pp is required.
By varying the control bandwidth depending on the noise level of the signal, the arm was brought to the final configuration with an actuator amplitude of 800cnt_pp. |
Attachment 1: asymptotic_cooling.pdf
|
|
477
|
Wed May 14 14:05:40 2008 |
Andrey | Update | Computers | Computer Linux-2, MEDM screen "Watchdogs" |
Computer "Linux-2", MEDM screen "C1SUS_Watchdogs.adl": there is no indication for ETMY watchdogs, everything is white. There is information on that screen "C1SUS_Watchdogs.adl" about all other systems (MC, ETMX,...), but something is wrong with indicators for ETMY on that particular control computer. |
447
|
Fri Apr 25 11:33:40 2008 |
Andrey | Configuration | Computers | Computer controlling vaccum equipment |
The old computer (located at the south end of the interferometer room) that was almost unable to fulfill its duties of controlling vacuum equipment has been replaced by "Linux-3". MEDM runs on "Linux-3".
We checked later that day, together with Steve Vass, that the vacuum equipment (like the vacuum valves) can really be controlled from the MEDM screen 'VacControl.adl'.
Unused flat LCD monitor, keyboard and mouse (parts of the former LINUX-3 computer) were put on the second shelf of the computer rack in the computer room near the HP printer. |
192
|
Sun Dec 16 16:52:40 2007 |
dmass | Update | Computers | Computer on the end Fixed | I had Mike Pedraza look at the computer on the end (tag c21256). It was running funny, and it turns out it had a bad HD.
I backed up the SURF files as attachments to their wiki entries. Nothing else seemed important, so the drive was (presumably) swapped, and a clean copy of XP Pro was installed. The username/login is the standard one.
Also - that small corner of desk space is now clean, and it would be lovely if it stayed that way. |
10010
|
Mon Jun 9 11:42:00 2014 |
Jenne | Update | CDS | Computer status | Current computer status:
All fast machines except c1iscey are up and running. I can't ssh to c1iscey, so I'll need to go down to the end station and have a look-see. On the c1lsc machine, neither the c1oaf nor the c1cal models are running (but for the oaf model, we know that this is because we need to revert the blrms block changes to some earlier version, see Jamie's elog 9911).
The daqd process is running on the framebuilder. However, when I try to open dataviewer, I get a popup error saying "Can't connect to rb", as well as an error in the terminal window that said something like "Error getting chan info".
Slow machines c1psl, c1auxex and c1auxey are not running (can't telnet to them, and white boxes on related medm screens for slow channels). All other slow machines seem to be running, however nothing has been done to them to point them at the new location of the shared hard drive, so their status isn't ready to green-light yet.
Things that we did on Friday for the fast machines:
The shared hard drive is "physically" on Chiara, at /home/cds/. Links are in place so that it looks like it's at the same place that it used to be: /opt/rtcds/......
The first nameserver on all of the workstation machines inside of the file /etc/resolv.conf has been changed to be 192.168.113.104, which is Chiara's IP address (it used to be 192.168.113.20, which was linux1). This change has also been made on the framebuilder, and in the framebuilder's /diskless/root/etc/resolv.conf file, which is what all of the fast front ends look to.
On the framebuilder, and in the /diskless place for the fast front ends, presumably we must have changed something to point at the new location for the shared drive, but I don't remember how we did that [ERIC, what did we do???]
The slow front ends that we have tried changing have not worked out.
First, we tried plugging a keyboard and monitor into c1auxey. When we key the crate to reboot the machine, we get an error message about a "disk A drive error", but then it goes on to prompt pressing F1 for something and F2 for entering setup. No matter what we press, nothing happens. c1auxey is still not running.
We were able to telnet into c1auxex, c1psl, and c1iool0. On each of those machines, at the prompt, we used the command "bootChange". This initially gives us a series of prompts like this:
$ telnet c1susaux
Trying 192.168.113.55...
Connected to c1susaux.
Escape character is '^]'.
c1susaux > bootChange
'.' = clear field; '-' = go to previous field; ^D = quit
boot device : ei
processor number : 0
host name : linux1
file name : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.55:ffffff00
inet on backplane (b):
host inet (h) : 192.168.113.20
gateway inet (g) :
user (u) : controls
ftp password (pw) (blank = use rsh):
flags (f) : 0x0
target name (tn) : c1susaux
startup script (s) : /cvs/cds/caltech/target/c1susaux/startup.cmd
other (o) :
value = 0 = 0x0
c1susaux >
If we go through that again (it comes up line-by-line, and you must press Enter to go to the next line) and put a period at the end of the Host Name line and the Host Inet (h) line, they will come up blank the next time around. So, the next time you run bootChange, you can type "chiara" for the host name and "192.168.113.104" for the host inet (h). If you run bootChange one more time, you'll see that the new values are in there, so that's good.
However, when we then tried to reboot the computers, I think the machines weren't coming back after this point. (Unfortunately, this is one of those things that I should have elogged back on Friday, since I don't remember precisely.) Certainly whatever the effect was, it wasn't what I wanted, and I left with the machines that I had tried rebooting not running. |
10011
|
Mon Jun 9 12:19:17 2014 |
ericq | Update | CDS | Computer status |
Quote: |
The first nameserver on all of the workstation machines inside of the file /etc/resolv.conf has been changed to be 192.168.113.104, which is Chiara's IP address (it used to be 192.168.113.20, which was linux1). This change has also been made on the framebuilder, and in the framebuilder's /diskless/root/etc/resolv.conf file, which is what all of the fast front ends look to.
On the framebuilder, and in the /diskless place for the fast front ends, presumably we must have changed something to point at the new location for the shared drive, but I don't remember how we did that [ERIC, what did we do???]
|
In all of the fstabs, we're using chiara's IP instead of name, so that if the nameserver part isn't working, we can still get the NFS mounts.
On control room computers, we mount the NFS through /etc/fstab having lines like:
192.168.113.104:/home/cds /cvs/cds nfs rw,bg 0 0
fb:/frames /frames nfs ro,bg 0 0
Then, things like /cvs/cds/foo are locally symlinked to /opt/foo
For the diskless machines, we edited the files in /diskless/root. On FB, /diskless/root/etc/fstab becomes
master:/diskless/root / nfs sync,hard,intr,rw,nolock,rsize=8192,wsize=8192 0 0
master:/usr /usr nfs sync,hard,intr,ro,nolock,rsize=8192,wsize=8192 0 0
master:/home /home nfs sync,hard,intr,rw,nolock,rsize=8192,wsize=8192 0 0
none /proc proc defaults 0 0
none /var/log tmpfs size=100m,rw 0 0
none /var/lib/init.d tmpfs size=100m,rw 0 0
none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
none /sys sysfs defaults 0 0
master:/opt /opt nfs async,hard,intr,rw,nolock 0 0
192.168.113.104:/home/cds/rtcds /opt/rtcds nfs nolock 0 0
192.168.113.104:/home/cds/rtapps /opt/rtapps nfs nolock 0 0
("master" is defined in /diskless/root/etc/hosts to be 192.168.113.202, which is fb's IP)
and /diskless/root/etc/resolv.conf becomes:
search martian
nameserver 192.168.113.104 #Chiara
|
10585
|
Wed Oct 8 15:31:31 2014 |
Jenne | Update | CDS | Computer status | After the Great Computer Meltdown of 2014, we forgot about poor c0rga, which is why the RGA hasn't been recording scans for the past several months (as Steve noted in elog 10548).
Q helped me remember how to fix it. We added 3 lines to its /etc/fstab file, so that it knows to mount from Chiara and not Linux1. We changed the resolv.conf file, and Q made some symlinks.
Steve and I ran ..../scripts/RGA/RGAset.py on c0rga to set up the RGA's settings after the power outage, and we're checking to make sure that the RGA will run right now; then we'll set it back to the usual daily 4am run via cron.
EDIT, JCD: Ran ..../scripts/RGA/RGAlogger.py, saw that it works and logs data again. Also, c0rga had a slightly off time, so I ran sudo ntpdate -b -s -u pool.ntp.org , and that fixed it.
Quote:
|
In all of the fstabs, we're using chiara's IP instead of name, so that if the nameserver part isn't working, we can still get the NFS mounts.
On control room computers, we mount the NFS through /etc/fstab having lines like:
192.168.113.104:/home/cds /cvs/cds nfs rw,bg 0 0
fb:/frames /frames nfs ro,bg 0 0
Then, things like /cvs/cds/foo are locally symlinked to /opt/foo
For the diskless machines, we edited the files in /diskless/root. On FB, /diskless/root/etc/fstab becomes
master:/diskless/root / nfs sync,hard,intr,rw,nolock,rsize=8192,wsize=8192 0 0
master:/usr /usr nfs sync,hard,intr,ro,nolock,rsize=8192,wsize=8192 0 0
master:/home /home nfs sync,hard,intr,rw,nolock,rsize=8192,wsize=8192 0 0
none /proc proc defaults 0 0
none /var/log tmpfs size=100m,rw 0 0
none /var/lib/init.d tmpfs size=100m,rw 0 0
none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
none /sys sysfs defaults 0 0
master:/opt /opt nfs async,hard,intr,rw,nolock 0 0
192.168.113.104:/home/cds/rtcds /opt/rtcds nfs nolock 0 0
192.168.113.104:/home/cds/rtapps /opt/rtapps nfs nolock 0 0
("master" is defined in /diskless/root/etc/hosts to be 192.168.113.202, which is fb's IP)
and /diskless/root/etc/resolv.conf becomes:
search martian
nameserver 192.168.113.104 #Chiara
|
|
10018
|
Tue Jun 10 09:25:29 2014 |
Jamie | Update | CDS | Computer status: should not be changing names | I really think it's a bad idea to be making all these names changes. You're making things much much harder for yourselves.
Instead of repointing everything to a new host, you should have just changed the DNS to point the name "linux1" to the IP address of the new server. That way you wouldn't need to reconfigure all of the clients. That's the whole point of name service: use a name so that you don't need to point to a number.
Also, pointing to an IP address for this stuff is not a good idea. If the IP address of the server changes, everything will break again.
Just point everything to linux1, and make the DNS entries for linux1 point to the IP address of chiara. You're doing all this work for nothing!
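In zone-file terms, that would be a single line in the martian zone file, something like (illustrative; the format matches what is shown in the c1sus2 entry below):
linux1    A    192.168.113.104    ; point the name linux1 at chiara's IP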
RXA: Of course, I understand what DNS means. I wanted to make the changes to the startup to remove any misconfigurations or spaghetti mount situations (of which we found many). The way the VME162s are designed, changing the name doesn't make the fix - it uses the number instead. And, of course, the main issue was not the DNS, but just that we had to set up RSH on the new machine. This is all detailed in the ELOG entries we've made, but it might be difficult to understand remotely if you are not familiar with the 40m CDS system. |
9662
|
Mon Feb 24 13:40:13 2014 |
Jenne | Update | CDS | Computer weirdness with c1lsc machine | I noticed that the fb lights on all of the models on the c1lsc machine are red, and that even though the MC was locked, there was no light flashing in the IFO. Also, all of the EPICS values on the LSC screen were frozen.
I tried restarting the ntp server on the frame builder, as in elog 9567, but that didn't fix things. (I realized later that the symptom there was a red light on every machine, while I'm just seeing problems with c1lsc.)
I did an mxstream restart, as a harmless thing that had some small hope of helping (it didn't).
I logged on to c1lsc and restarted all of the models (rtcds restart all), which stops all of the models (IOP last) and then restarts them (IOP first). This did not change the status of the lights on the status screen, but it did change the positioning of some optics (I suspect the tip tilts) significantly, and I was again seeing flashes in the arms. The LSC master enable switch was off, so I don't think that it was trying to send any signals out to the suspensions. The ASS model, which sends signals out to the input pointing tip tilts, runs on c1lsc, and it was about when the ASS model was restarted that the beam came back. Also, there are no jumps in any of the SOS OSEM sensors in the last few hours, except from me misaligning and restoring the optics. We don't have sensors on the tip tilts, so I can't show a jump in their positioning, but I suspect them.
I called Jamie, and he suggested restarting the machine, which I did. (Once again, the beam went somewhere, and I saw it scattering big-time off of something in the BS chamber, as viewed on the PRM-face camera.) This made the oaf and cal models run (I think they were running before I did the restart all, but they didn't come back after that; now they're running again). Anyhow, that did not fix the problem. For kicks, I re-ran mxstream restart and diag reset, to no avail. I also tried running the sudo /etc/init.d/ntp-client restart command on just the lsc machine, but it doesn't know the command 'ntp-client'.
Jamie suggested looking at the timing card in the chassis, to ensure all of the link lights are on, etc. I will do this next.
|
9663
|
Mon Feb 24 15:25:29 2014 |
Jenne | Update | CDS | Computer weirdness with c1lsc machine | The LSC machine isn't any better, and now c1sus is showing the same symptoms. Lame.
The link lights on the c1lsc I/O chassis and on the fiber timing system are the same as all other systems. On the timing card in the chassis, the light above the fibers was solid-on, and the light below blinks at 1pps.
Koji and I power-cycled both the lsc I/O chassis, and the computer, including removing the power cables (after softly shutting down) so there was seriously no power. Upon plugging back in and turning everything on, no change to the timing status. It was after this reboot that the c1sus machine also started exhibiting symptoms. |
10717
|
Fri Nov 14 15:45:34 2014 |
Jenne | Update | CDS | Computers back up after reboot | [Jenne, Q]
Everything seems to be back up and running.
The computers weren't such a big problem (or at least didn't seem to be). I turned off the watchdogs and remotely rebooted all of the computers (except for c1lsc, which Manasa had already gotten working). After this, I also ssh-ed to c1lsc and restarted all of the models, since half of them froze or something while the other computers were being power cycled.
However, this power cycling somehow completely screwed up the vertex suspensions. The MC suspensions were fine, and SRM was fine, but the ITMs, BS and PRM were not damping. To get them to kind of damp rather than ring up, we had to flip the signs on the pos and pit gains. Also, we were a little suspicious of potential channel-hopping, since touching one optic was occasionally time-coincident with another optic ringing up. So, no hard evidence on the channel hopping, but suspicions.
Anyhow, at some point I was concerned about the suspension slow computer, since the watchdogs weren't tripping even though the OSEM sensor RMSes were well over the thresholds, so I keyed that crate. After this, the watchdogs tripped as expected when we enabled damping while the RMS was higher than the threshold.
I eventually remotely rebooted c1sus again. This totally fixed everything. We put all of the local damping gains back to the values that we found them (in particular, undoing our sign flips), and everything seems good again. I don't know what happened, but we're back online now.
Q notes that the bounce mode for at least ITMX (haven't checked the others) is rung up. We should check if it is starting to go down in a few hours.
Also, the FSS slow servo was not running, we restarted it on op340m. |
695
|
Fri Jul 18 17:06:20 2008 |
Jenne | Update | Computers | Computers down for most of the day, but back up now | [Sharon, Alex, Rob, Alberto, Jenne]
Sharon and I have been having trouble with the C1ASS computer the past couple of days. She has been corresponding with Alex, who has been rebooting the computers for us. At some point this afternoon, as a result of this work, or other stuff (I'm not totally sure which) about half of the computers' status lights on the MEDM screen were red. Alberto and Sharon spoke to Alex, who then fixed all of them except C1ASC. Alberto and I couldn't telnet into C1ASC to follow the restart procedures on the Wiki, so Rob helped us hook up a monitor and keyboard to the computer and restart it the old fashioned way.
It seems like C1ASC has some confusion as to what its IP address is, or some other computer is now using C1ASC's IP address.
As of now, all the computers are back up. |
8324
|
Thu Mar 21 10:29:12 2013 |
Manasa | Update | Computers | Computers down since last night | I'm trying to figure out what went wrong last night. But the morning status...the computers are down.
|
Attachment 1: down.png
|
|
9130
|
Mon Sep 16 13:11:15 2013 |
Evan | Update | Computer Scripts / Programs | Comsol 4.3b upgrade | Comsol 4.3b is now installed under /cvs/cds/caltech/apps/linux64/COMSOL43b. I've left the existing Comsol 4.2 installation alone; according to the Comsol installation guide [PDF], it is unaffected by the new install. On megatron I've made a symlink so that you can call comsol in bash to start Comsol 4.3b.
The first time I ran comsol server, it asked me to choose a username/password combo, so I made it the same as the combo used to log on to megatron.
Edit: I've also added a ~/.screenrc on megatron (based on this Stackoverflow answer) so that I don't constantly go nuts trying to figure out if I'm already inside a screen session.
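For anyone wanting the same trick, the relevant bit is a caption line in ~/.screenrc, e.g. (illustrative, not the exact line from megatron):
caption always "%{= kw}screen %n (%t) on %H  %Y-%m-%d %c"
which keeps a status bar visible whenever you are inside a session. |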
9770
|
Tue Apr 1 17:37:57 2014 |
Evan | Update | Computer Scripts / Programs | Comsol 4.4 upgrade | Comsol 4.4 is now installed under /cvs/cds/caltech/apps/linux64/COMSOL44. I've left the other installations alone. I've changed the symlink on megatron so that comsol now starts Comsol 4.4.
The first time I ran comsol server, it asked me to choose a username/password combo, so I made it the same as the combo used to log on to megatron.
We should consider uninstalling some of the older Comsol versions; right now we have 4.0, 4.2, 4.3b, and 4.4 installed. |
15389
|
Thu Jun 11 09:37:38 2020 |
Jon | Update | BHD | Conclusions on Mode-Matching Telescopes | After further astigmatism/tolerance analysis [ELOG 15380, 15387] our conclusion is that the stock-optic telescope designs [ELOG 15379] are sufficient for the first round of BHD testing. However, for the final BHD hardware we should still plan to procure the custom-curvature optics [DCC E2000296]. The optimized custom-curvature designs are much more error-tolerant and have high probability of achieving < 2% mode-matching loss. The stock-curvature designs can only guarantee about 95% mode-matching.
Below are the final distances between optics in the relay paths. The base set of distances is taken from the 2020-05-21 layout. To minimize the changes required to the CAD model, I was able to achieve near-maximum mode-matching by moving only one optic in each relay path. In the AS path, AS3 moves inwards (towards the BHDBS) by 1.06 cm. In the LO path, LO4 moves backwards (away from the BHDBS) by 3.90 cm.
AS Path
Interval | Distance (m) | Change (cm)
SRMAR-AS1 | 0.7192 | 0
AS1-AS2 | 0.5405 | 0
AS2-AS3 | 0.5955 | -1.06
AS3-AS4 | 0.7058 | -1.06
AS4-BHDBS | 0.5922 | 0
BHDBS-OMCIC | 0.1527 | 0
LO Path
Interval | Distance (m) | Change (cm)
PR2AR-LO1 | 0.4027 | 0
LO1-LO2 | 2.5808 | 0
LO2-LO3 | 1.5870 | 0
LO3-LO4 | 0.3691 | +3.90
LO4-BHDBS | 0.2573 | +3.90
BHDBS-OMCIC | 0.1527 | 0
|
11491
|
Tue Aug 11 10:13:32 2015 |
Jessica | Update | General | Conductive SMAs seem to work best | After testing both the conductive and isolated front panels on the ALS delay line box using the actual beatbox, and comparing this to the previous setup, I found that the conductive SMAs reduced crosstalk the most. Also, as the old cables were 30m and the new ones are 50m, Eric gave me a conversion factor to apply to the new cables to normalize the comparison.
I used an amplitude of 1.41 Vpp and drove the following frequencies through each cable:
X: 30.019 MHz Y: 30.019203 MHz
which gave a difference of 203 Hz.
In the first figure, it can be seen that, for the old setup with the 30m cables, there is a spike in both cables at 203 Hz with an amplitude above 4 m/s^2/sqrt(Hz). When the 50m cables were measured in the box with the conductive front panel, the amplitude at 203 Hz drops by a factor of around 3. I also compared the isolated front panel with the old setup, and found that the isolated front panel performed worse than the old setup by a factor of just over 2. Therefore, I think that using the conductive front panel for the ALS delay line box will reduce noise and crosstalk between the cables the most. |
Attachment 1: best4.png
|
|
Attachment 2: isolated4.png
|
|
8433
|
Wed Apr 10 01:10:22 2013 |
Jenne | Update | Locking | Configure screen and scripts updated | I have gone through the different individual degrees of freedom on the IFO_CONFIGURE screen (I haven't done anything to the full IFO yet), and updated the burt snapshot request files to include all of the trigger thresholds (the DoF triggers were there, but the FM triggers and the FM mask - which filter modules to trigger - were not). I also made all of the restore scripts (which do the burt restore for all those settings) the same. They were widely different, rather than just differing in which optics are chosen for misaligning and restoring.
Before doing any of this work, I moved the whole folder ..../caltech/c1/burt/c1ifoconfigure to ..../caltech/c1/burt/c1ifoconfigure_OLD_but_SAVE , so we can go back and look at the past settings, if we need to.
I also changed the "C1save{DoF}" scripts to ask for keyboard input, and then added them as options to the CONFIGURE screen. The keyboard input is so that people randomly pushing the buttons don't overwrite our saved burt files. Here's the secret: It asks if you are REALLY sure you want to save the configuration. If you are, type the word "really", then hit enter (as in yes, I am really sure). Any other answer, and the script will tell you that it is quitting without saving.
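The guard is just a read-and-compare at the top of the script; a sketch of the idea (hypothetical script body and paths, not the actual C1save{DoF} code):
read -p "Are you REALLY sure you want to save the configuration? " answer
if [ "$answer" = "really" ]; then
    burtrb -f /path/to/configure.req -o /path/to/configure.snap    # paths illustrative
else
    echo "Quitting without saving."
    exit 1
fi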
I have also removed the "PRM" option, since we really only want the "PRMsb" for almost all purposes.
Also, I removed access to the very, very old text file about how to lock from the screen. That information is now on the wiki: https://wiki-40m.ligo.caltech.edu/How_To/Lock_the_Interferometer
I have noted in the drop-down menus that the "align" functions are not yet working. I know that Den has gotten at least one of the arms' ASSes working today, so once those scripts are ready, we can call them from the configure screen.
Anyhow, the IFO_CONFIGURE screen should be back to being useful! |
8722
|
Wed Jun 19 02:46:19 2013 |
Jenne | Update | CDS | Connected ADC channels from IOO model to ASS model | Following Jamie's table in elog 8654, I have connected up the channels 0, 1 and 2 from ADC0 on the IOO computer to rfm send blocks, which send the signals over to the rfm model, and then I use dolphin send blocks to get over to the ass model on the lsc machine.
I'm using the 1st 3 channels on the Pentek Generic interface board, which is why I'm using channels 0,1,2.
I compiled all 3 models (ioo, rfm, ass), and restarted them. I also restarted the daqd on the fb, since I put in a temporary set of filter banks in the ass model, to use as sinks for the signal (since I haven't done anything else to the ASS model yet).
All 3 models were checked in to the svn. |
16479
|
Mon Nov 22 17:42:19 2021 |
Anchal | Update | General | Connected Megatron to battery backed ports of another UPS | [Anchal, Paco]
I used the UPS that was previously providing battery backup for chiara (an APC Back-UPS Pro 1000) to provide battery backup to Megatron. This completes UPS backup for all important computers in the lab. Note that this UPS nominally consumes 36% of UPS capacity in power delivery, but at start-up Megatron has many fans that use up to 90% of the capacity. So we should not use this UPS for any other computer or equipment.
While doing so, we found that PS3 on Megatron was malfunctioning. Its green LED was not lighting up on connecting to power, so we replaced it with the PS3 from the old FB computer in the same rack. This solved the issue.
Another thing we found was that Megatron on restart does not get configured with the correct nameserver resolution settings and loses the ability to resolve the names chiara and fb1. This results in the nfs mounts failing, which in turn results in the script services failing. We fixed this by identifying that Ubuntu's NetworkManager was not disabled and would mess up the nameserver settings, which we want to be managed by systemd-resolved instead. We corrected the symbolic link /etc/resolv.conf -> /run/systemd/resolve/resolv.conf, then stopped and disabled the NetworkManager service to keep this persistent on reboot. The following steps did this:
> sudo rm /etc/resolv.conf
> sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
> sudo systemctl stop NetworkManager.service
> sudo systemctl disable NetworkManager.service
|
16396
|
Tue Oct 12 17:20:12 2021 |
Anchal | Summary | CDS | Connected c1sus2 to martian network | I connected c1sus2 to the martian network by splitting the c1sim connection with a 5-way switch. I also ran another ethernet cable from the second port of c1sus2 to the DAQ network switch on 1X7.
Then I logged into chiara and added the following in chiara:/etc/dhcp/dhcpd.conf :
host c1sus2 {
hardware ethernet 00:25:90:06:69:C2;
fixed-address 192.168.113.92;
}
And following line in chiara:/var/lib/bind/martian.hosts :
c1sus2 A 192.168.113.92
Note that entries for c1bhd were already added in these files, probably during some earlier testing by Gautam or Jon. Then I ran the following to restart the dhcp server and nameserver:
~> sudo service bind9 reload
[sudo] password for controls:
* Reloading domain name service... bind9 [ OK ]
~> sudo service isc-dhcp-server restart
isc-dhcp-server stop/waiting
isc-dhcp-server start/running, process 25764
Now, as I switched on c1sus2 from the front panel, it booted over the network from fb1 like the other FE machines, and I was able to log in to it by first logging into fb1 and then sshing to c1sus2.
Next, I copied the simulink models and the medm screens of c1x06, c1x07, c1bhd, c1sus2 from the paths mentioned on this wiki page. I also copied the medm screens from chiara(clone):/opt/rtcds/caltech/c1/medm to the martian network chiara in the appropriate places. I have placed the file /opt/rtcds/caltech/c1/medm/teststand_sitemap.adl which can be used to open the sitemap for the c1bhd and c1sus2 IOP and user models.
Then I logged into c1sus2 (via fb1) and did make, install, start procedure:
controls@c1sus2:~ 0$ rtcds make c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### building c1x07...
Cleaning c1x07...
Done
Parsing the model c1x07...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1x07...
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl
Successfully compiled c1x07
***********************************************
Compile Warnings, found in c1x07_warnings.log:
***********************************************
***********************************************
controls@c1sus2:~ 0$ rtcds install c1x07
buildd: /opt/rtcds/caltech/c1/rtbuild/release
### installing c1x07...
Installing system=c1x07 site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1X07.txt
Installing /opt/rtcds/caltech/c1/target/c1x07/c1x07epics
Installing /opt/rtcds/caltech/c1/target/c1x07
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1x07
/opt/rtcds/caltech/c1/scripts/startc1x07
sudo: unable to resolve host c1sus2
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl -par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_211012_174226.par -gds_node=24 -site_letter=C -system=c1x07 -host=c1sus2
Installing GDS node 24 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1x07.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1X07.ini
Installing Epics MEDM screens
Running post-build script
safe.snap exists
controls@c1sus2:~ 0$ rtcds start c1x07
Cannot start/stop model 'c1x07' on host c1sus2.
controls@c1sus2:~ 4$ rtcds list
controls@c1sus2:~ 0$
One can see that even after making and installing, the model c1x07 is not listed among the available models in rtcds list. The same is the case for c1sus2. So I could not proceed with testing.
The good news is that nothing I did affected the current CDS functioning. So we can probably do this testing safely from the main CDS setup. |
16397
|
Tue Oct 12 23:42:56 2021 |
Koji | Summary | CDS | Connected c1sus2 to martian network | Don't you need to add the new hosts to /diskless/root/etc/rtsystab on fb1? --> There seem to be many elogs talking about editing "rtsystab".
controls@fb1:/diskless/root/etc 0$ cat rtsystab
#
# host list of control systems to run, starting with IOP
#
c1iscex c1x01 c1scx c1asx
c1sus c1x02 c1sus c1mcs c1rfm c1pem
c1ioo c1x03 c1ioo c1als c1omc
c1lsc c1x04 c1lsc c1ass c1oaf c1cal c1dnn c1daf
c1iscey c1x05 c1scy c1asy
#c1test c1x10 c1tst2
|
|