ID | Date | Author | Type | Category | Subject
16226 | Fri Jun 25 19:14:45 2021 | Jon | Update | Equipment loan | Zurich Instruments analyzer
I returned the Zurich Instruments analyzer I borrowed some time ago to test out at home. It is sitting on the first table across from Steve's old desk.
Attachment 1: ZI.JPG

16227 | Mon Jun 28 12:35:19 2021 | Yehonathan | Update | BHD | SOS assembly
On Thursday, I glued another set of 6 dumbbells+magnets using the same method as before. I made sure that the dumbbells are pressed onto the magnets.
I came in today to check the gluing situation. It looks much better than before. The glue seems stable against small forces (magnetic, etc.). I checked the assemblies under a microscope.
It seems like I used excessive amounts of glue (attachments 1, 2). The surfaces of the dumbbells were also contaminated (attachment 3). I cleaned the dumbbell surfaces using acetone and IPA (attachment 4) and scraped some of the glue residue from the sides of the assemblies.
Next time, I will make a shallow bath of glue to obtain precise amounts using a needle.
I glued a sample assembly onto a metal bracket using epoxy. Once it cures, I will hang a weight on the dumbbell to test the bond strength.
Attachment 1: toomuchglue1.png
Attachment 2: toomuchglue2.png
Attachment 3: dirtydumbell.png
Attachment 4: cleandumbell.png
Attachment 5: assembly_on_metalbracket.png

16229 | Tue Jun 29 20:45:52 2021 | Yehonathan | Update | BHD | SOS assembly
I glued another batch of 6 magnet+dumbbell assemblies. I will take a look at them under the microscope once they are cured.
I also hung a weight of ~150 g from a sample dumbbell made in the previous batch (attachments) to test the magnet+dumbbell bonding strength.
Attachment 1: 20210629_135736.jpg
Attachment 2: 20210629_135746.jpg

16230 | Wed Jun 30 14:09:26 2021 | Ian MacMillan | Update | CDS | SUS simPlant model
I have looked at my code from the previous transfer function plot and realized that there is a slight error that must be fixed before we can analyze the difference between the theoretical transfer function and the measured transfer function.
The theoretical transfer function, which was generated from Photon, has approximately 1000 data points while the measured one has about 120. No points in the two datasets share the same frequency values, so they are not directly comparable. To compare them, I must interpolate between the points. In the previous post [16195] I expanded the measured dataset; in other words, I filled in the space between points by spline interpolation so that I could compare the two data sets, using this code:
# make values for the comparison
from scipy.interpolate import splrep, splev  # import needed for this snippet

tck_mag = splrep(tst_f, tst_mag)  # get bspline representation given (x,y) values
gen_mag = splev(sim_f, tck_mag)   # generate intermediate values
dif_mag = []
for x in range(len(gen_mag)):
    dif_mag.append(gen_mag[x] - sim_mag[x])  # measured minus predicted

tck_ph = splrep(tst_f, tst_ph)    # get bspline representation given (x,y) values
gen_ph = splev(sim_f, tck_ph)     # generate intermediate values
dif_ph = []
for x in range(len(gen_ph)):
    dif_ph.append(gen_ph[x] - sim_ph[x])
At features like a sharp peak, where the measured data set was sparse compared to the peak, this approach compared the interpolated "measured" values against the theoretical ones, which made the difference look much larger than it really was.
To fix this, I changed the code to generate the intermediate values from the theoretical data set instead, using the code here:
tck_mag = splrep(sim_f, sim_mag)  # get bspline representation given (x,y) values
gen_mag = splev(tst_f, tck_mag)   # generate intermediate values
dif_mag = []
for x in range(len(tst_mag)):
    dif_mag.append(tst_mag[x] - gen_mag[x])  # measured minus predicted

tck_ph = splrep(sim_f, sim_ph)    # get bspline representation given (x,y) values
gen_ph = splev(tst_f, tck_ph)     # generate intermediate values
dif_ph = []
for x in range(len(tst_ph)):
    dif_ph.append(tst_ph[x] - gen_ph[x])
Because the theoretical dataset has far more values (about 10 times more), the previous problem is much less of an issue. In addition, no interpolated "measured" value is ever used, which makes the comparison more representative of the true accuracy of the measured transfer function.
This is an update to a previous plot, so I am still using the same data and only changing the way it is processed. This plot/data does not have a Q of 1000; that plot will come in a later post along with the error estimation that we talked about in this week's meeting.
The new plot is shown in attachment 1. Data and code are contained in attachment 2.
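A minimal sketch of how the resulting differences could be plotted against the measured frequencies (assuming tst_f, dif_mag, and dif_ph from the snippets above; the actual plotting code is in attachment 2):

import matplotlib.pyplot as plt

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(tst_f, dif_mag)   # measured minus predicted magnitude
ax_mag.set_ylabel('magnitude difference')
ax_ph.semilogx(tst_f, dif_ph)     # measured minus predicted phase
ax_ph.set_ylabel('phase difference')
ax_ph.set_xlabel('frequency [Hz]')
plt.show()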
Attachment 1: SingleSusPlantTF.pdf
Attachment 2: Plant_TF_Test.zip

16234 | Thu Jul 1 11:37:50 2021 | Paco | Update | General | restarted c0rga
Physically rebooted the c0rga workstation after failing to ssh into it (even though it could still be pinged). The RGA seems to be off, though. The last log with data in it dates back to 2020 Nov 10, but reasonable spectra only appear in logs from before 11-05. Gautam verified that the RGA was intentionally turned off then.
16235 | Thu Jul 1 16:45:25 2021 | Yehonathan | Update | BHD | SOS assembly
The bonding test passed - the weight still hangs from the dumbbell. Unfortunately, I broke the bond while trying to release the assembly from the bracket. I made another batch of 6 dumbbell+magnet assemblies.
I used some of the leftover epoxy to bond an assembly from the previous batch to a bracket so I can test it.
16238 | Tue Jul 6 10:47:07 2021 | Paco, Anchal | Update | IOO | Restored MC
MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we
- Cleared the bogus WFS offsets
- Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
- With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
- Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.
The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script. (A sketch of the channel writes used in these steps is given below.)
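A minimal sketch of those channel writes using pyepics (an assumption; we actually used the MEDM screens, and the offset-clearing channels are not spelled out here):

from epics import caput, caget
import time

# Lower the WFS trigger threshold and remove the trigger delay so the
# integrators can act at the reduced MC Trans level (nominal values in comments).
caput('C1:IOO-WFS_TRIGGER_THRESH_ON', 500)   # nominally 5000
caput('C1:IOO-WFS_TRIGGER_MON', 0.0)         # delayed trigger, nominally 3.0 s

# ... tweak MC1 YAW with the alignment sliders until the lowest order mode transmits ...

time.sleep(600)                              # let the integrators settle
caput('C1:IOO-WFS_TRIGGER_MON', 3.0)         # restore the delayed loop trigger
print(caget('C1:IOO-MC_TRANS_SUMFILT_OUT'))  # should approach ~14000 counts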
16239 | Tue Jul 6 16:35:04 2021 | Anchal, Paco, Gautam | Update | IOO | Restored MC
We found that megatron is unable to properly run the scripts/MC/WFS/mcwfsoff and scripts/MC/WFS/mcwfson scripts. It fails on cdsutils commands due to a library conflict. This meant that the WFS loops were not turned off when the IMC lost lock, so they would keep integrating noise into the offsets. The mcwfsoff script is also supposed to clear the WFS loop offsets, but that wasn't happening either. The mcwfson script was likewise failing to turn the WFS loops back on.
Gautam temporarily fixed these scripts for running on megatron by using ezcawrite and ezcaswitch commands instead of cdsutils commands. Now these scripts run normally. This could be the reason for the wildly fluctuating WFS offsets that we have seen in the past few months.
gautam: the problem here is that megatron is running Ubuntu 18 - I'm not sure if there is any dedicated CDS group packaging for Ubuntu, so we're using a shared install of cdsutils (hosted on the shared chiara NFS drive), which is complaining about missing linked lib files. Depending on people's mood, it may be worth biting the bullet and making megatron run Debian 10, for which the CDS group maintains packages.
Quote:
MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we
- Cleared the bogus WFS offsets
- Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
- With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
- Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.
The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.
16243 | Fri Jul 9 18:35:32 2021 | Yehonathan | Update | CDS | Opto-isolator for c1auxey
Following Koji's channel list review, we made changes to the wiring spreadsheet.
Today, I implemented those changes in the Acromag chassis. I went through the channel list one by one and made sure each channel is wired correctly. Additionally, since we now need all the channels the existing isolators provide, I replaced the isolator with the defective channel with a new one.
The things to do next:
1. Create entries for the spare coil driver and satellite box channels in the EPICS DB.
2. Test the spare channels.
16244 | Mon Jul 12 18:06:25 2021 | Yehonathan | Update | CDS | Opto-isolator for c1auxey
I edited /cvs/cds/caltech/target/c1auxey1/ETMYaux.db (after creating a backup) and added the spare coil driver channels.
I tested those channels using caget while fixing wiring issues. The tests were all successful. The digital output channels were tested using the Windows machine, since they are locked by some EPICS mechanism I don't yet understand.
One worrying point: I found the differential analog inputs to be unstable unless I connected the reference to some stable voltage source, unlike what previous tests showed. It was unstable (though less so) even when I connected the ref to the ground connectors on the power supplies on the workbench. This is really puzzling.
When I say unstable, I mean that most of the time the voltage reading shows the right value, but occasionally there is a transient sharp voltage drop of the order of 0.5 V. I will do a more quantitative analysis tomorrow (see the sketch below for one possible approach).
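A minimal sketch of such an analysis, polling the channel with pyepics (an assumption; the channel name is guessed from the SparePDMon attachment names later in this thread, and the 0.5 V threshold comes from the observation above):

from epics import caget
import time

CHAN = 'C1:SUS-ETMY_SparePDMon0'  # assumed channel name
THRESH = 0.5                      # V, size of the transient drops seen

nominal = caget(CHAN)
drops, n = 0, 3600                # poll at ~1 Hz for about an hour
for _ in range(n):
    val = caget(CHAN)
    if val is not None and (nominal - val) > THRESH:
        drops += 1
    time.sleep(1)
print('%d transient drops of > %.1f V in %d samples' % (drops, THRESH, n))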
16245 | Wed Jul 14 16:19:44 2021 | gautam | Update | General | Brrr
Since the repair work, the temperature is significantly cooler. Surprisingly, even at the vertex (more specifically, inside the PSL enclosure, which for the time being is the only place where we have a logged temperature sensor; the change is not attributable to any change in the HEPA speed), the temperature is a good 3 deg C cooler than it was before the HVAC work (even though Koji's wind vane suggested the vents at the vertex were working). Was the setpoint for the entire lab modified? What should the setpoint even be?
Quote:
- I went to the south arm. There are two big vent ducts for the outlets and intakes. Both are not flowing the air.
The current temp at 7pm was ~30degC. Max and min were 31degC and 18degC.
- Then I went to the vertex and the east arm. The outlets and intakes are flowing.
Attachment 1: rmTemp.pdf

16246 | Wed Jul 14 19:21:44 2021 | Koji | Update | General | Brrr
Jordan reported on Jun 18, 2021:
"HVAC tech came today, and replaced the thermostat and a coolant tube in the AC unit. It is working now and he left the thermostat set to 68F, which was what the old one was set to." |
16247 | Wed Jul 14 20:42:04 2021 | gautam | Update | LSC | Locking
[paco, gautam]
We decided to give the PRFPMI lock a go early-ish. Summary of findings this evening:
- Arms under ALS control display normal noise and loop UGFs.
- PRMI took longer than usual to lock (when the arms are held off resonance) - this could be elevated seismic activity, but it warrants measuring the PRMI loop TFs to rule out any funkiness. The MICH loop also displayed some saturation on acquisition, but after the boosts and other filters were turned on, the lock seemed robust and the in-loop noise was at the usual levels.
- We are gonna do the high bandwidth single arm locking experiments during daytime to rule out any issues with the CM board.
The ALS --> IR CARM handoff is the problematic step. In the past, getting over this hump has just required some systematic loop TF measurements / gain slider readjustments. We will do this in the next few days. I don't think the ALS noise is any higher than it used to be, and I could do the direct handoff as recently as March, so probably something minor has changed.
16248 | Thu Jul 15 14:25:48 2021 | Paco | Update | LSC | CM board
[gautam, paco]
We tested the CM board by implementing the high bandwidth IR lock (single arm). In preparation for this test we temporarily connected the POY11_Q_MON output to the CM board IN1 input and checked the YARM POY transfer function by running the AA_YARM_TEMPLATE under users/Templates/LSC/LSC_loops/YARM_POY/ . We made sure the YARM dither optimized TRY so as to maximize the optical gain stage. Then we proceeded as follows:
- From the LSC --> CM Servo screen, we controlled the REFL 1 Gain (dB) slider (nominal +25) and MC Servo IN2 Gain (dB) slider (nominal -32 dB) to transfer the low bandwidth (digital) control to the high bandwidth (analog) control of the YARM.
- During this game, we monitored the C1:LSC-POY11_I_ERR_DQ & C1:LSC-CM_SLOW_OUT_DQ error signal channels for saturation, oscillations, or stability.
- Once a set of gains was successful in maintaining a stable lock, we measured the OLTF using SR 785 to track the UGF as we mix the two paths.
- Once the gains had been increased, the boost and super-boost stages could be enabled as well.
Ultimately, our ability to progressively increase the control bandwidth of the YARM is a proxy that the CM board is working properly. Attachment 1 shows the OLTF progression as we increased the loop's UGF. Note how as we approached the maximum measured UGF of ~ 22 kHz, our phase margin decreased signifying poor stability.
At the end of this measurement, at about ~ 15:45, I restored the CM board IN1 input and disconnected the POY11_Q_MON.
gautam: the conclusion here is that the CM board seems to work as advertised, and it's not solely responsible for our not being able to achieve the IR handoff.
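A minimal sketch of how the UGF and phase margin could be read off each measured OLTF, assuming the SR785 data is loaded as arrays of frequency (Hz), magnitude (dB), and unwrapped phase (degrees); this is not the actual analysis code:

import numpy as np

def ugf_and_phase_margin(freq, mag_db, phase_deg):
    # First index where the open-loop magnitude falls below 0 dB (|G| = 1).
    below = np.where(mag_db < 0.0)[0]
    if len(below) == 0 or below[0] == 0:
        return None, None
    i = below[0]
    # Linear interpolation between the bracketing points for the 0 dB crossing.
    f_ugf = freq[i - 1] + (freq[i] - freq[i - 1]) * (0.0 - mag_db[i - 1]) / (mag_db[i] - mag_db[i - 1])
    ph_ugf = np.interp(f_ugf, freq, phase_deg)
    return f_ugf, 180.0 + ph_ugf  # phase margin relative to -180 deg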
Attachment 1: high_BW_TFs.pdf

16249 | Fri Jul 16 16:26:50 2021 | gautam | Update | Computers | Docker installed on nodus
I wanted to try hosting some docker images on a "private" server, so I installed Docker on nodus following the instructions here. The install seems to have succeeded, and as far as I can tell, none of the functionality of nodus has been disturbed (I can ssh in, access the shared drive, the elog seems to work fine, etc.). But if you find a problem, maybe this action is responsible. Note that nodus is running Scientific Linux 7.3 (Nitrogen).
16250 | Sat Jul 17 00:52:33 2021 | Koji | Update | General | Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL
Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL
Attachment 1: P_20210716_213850.jpg

16251 | Mon Jul 19 22:16:08 2021 | paco | Update | LSC | PRFPMI locking
[gautam, paco]
Gautam managed to lock PRFPMI a little before ~ 22:00 local time. The ALS to RF handoff logic was found to be repeatable, which enabled us to lock a total of 4 times this evening. Under this nominal state, we can work on PRFPMI to narrow down less known issues and carry out systematic optimization. The second time we achieved lock, we ran sensing lines before entering the ASC stage (which we knew would destroy the lock), and offline analysis of the sensing matrix is pending (gpstime = 1310792709 + 5 min).
Things to note:
(a) there is an unexpected offset suggesting that the ALS and RF disagreed on what the lock setpoint should be, and it is still unclear where the offset is coming from.
(b) the first time the lock was reached, the ASC up stage destroyed it, suggesting these loops need some care (we were able to engage the ASC loops at low gains (0.2 instead of 1), but as soon as we enabled some integrators this consistently destroyed the lock)
(c) gautam had (burt) restored to the settings from back in March when the PRFPMI was last locked, suggesting there was a small but somehow significant difference in the IFO that helped today relative to last week
Take-home message --> the mere fact that we were able to lock the PRFPMI rules out the more serious problems with the signal chain electronics or processing. This should also be a good starting point for further debugging and optimization.
gautam: the circulating power, when the ASC was tweaked, hit 400 (normalized to a single arm locked with a misaligned PRM), suggesting a recycling gain of 22.5 and an average arm loss of ~30 ppm round trip (assuming 2% loss in the PRC).
16252 | Wed Jul 21 14:50:23 2021 | Koji | Update | SUS | New electronics
Received:
Jun 29, 2021: BIO I/F, 6 units
Jul 19, 2021: PZT Drivers x2 / QPD Transimpedance amp x2
Attachment 1: P_20210629_183950.jpeg
Attachment 2: P_20210719_135938.jpeg

16253 | Wed Jul 21 18:08:35 2021 | yehonathan | Update | Loss Measurement | Loss measurement
{Gautam, Yehonathan, Anchal, Paco}
We prepared for the loss measurement using the DC reflection method. We made the following changes:
1. REFL55_Q was disconnected and replaced with MC_T cable coming from the PD on the MC2 table. The cable has a red tag on it. Consequently we lost the AS beam. We realigned the optics and regained arm locks. The spot on the AS QPD had to be corrected.
2. We tried using AS55 as the PD for the DC measurement but we got ratios of ~ 0.97 which implies losses of more than 100 ppm. We decided to go with the traditional PD520 used for these measurements in the past.
3. We placed the PD520 used for loss measurements in front of the AS55 PD and optimized its position.
4. AS110 cable was disconnected from the PD and connected to PD520 to be used as the loss measurement cable.
5. In 1Y2 rack, AS110 PD cable was disconnected, REFL55_I was disconnected and AS110 cable was connected to REFL55_I channel.
So for the test, the MC transmission was measured at REFL55_Q and the AS DC was measured at REFL55_I.
We used the scripts/lossmap_scripts/armLoss/measArmLoss.py script. Note that this script assumes that you begin with the arm locked.
We are leaving the IFO in the configuration described above overnight and we plan to measure the XARM loss early AM. After which we shall restore the affected electrical and optical paths.
We ran the /scripts/lossmap_scripts/armLoss/measureArmLoss.py script on pianosa with 25 repetitions and a 30 s "duty cycle" (wait time) for the Y arm. Preliminary results give an estimated individual arm loss of ~ 30 ppm (on both the X and Y arms), but we will provide a better estimate with this measurement.
16254 | Thu Jul 22 16:06:10 2021 | Paco | Update | Loss Measurement | Loss measurement
[yehonathan, anchal, paco, gautam]
We concluded estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals representing the PD520 and MC_TRANS were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat that we encountered today was the need to add a "macroscopic" misalignment to the ITMs when doing the measurement to avoid any accidental resonances.
The final measurements were done with 16 repetitions, 30 second duration, and the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt
Finally, the estimated YARM loss is 39 ± 7 ppm, while the estimated XARM loss is 38 ± 8 ppm. This is consistent with the inferred PRC gain from Monday and a PRM loss of ~ 2%.
Future measurements may want to look into slow drift of the locked vs. misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty (e.g. by splitting the raw time traces into short segments).
16255 | Sun Jul 25 18:21:10 2021 | Koji | Update | General | Canon camera / small silver tripod / macro zoom lens / LED ring light returned / Electronics borrowed
Camera and accessories returned.
One HAM-A coil driver and one sat amp borrowed -> QIL
https://nodus.ligo.caltech.edu:8081/QIL/2616
16256 | Sun Jul 25 20:41:47 2021 | rana | Update | Loss Measurement | Loss measurement
What are the quantitative root causes for why the statistical uncertainty is so large? It's larger than 1/sqrt(N).
16257 | Mon Jul 26 17:34:23 2021 | Paco | Update | Loss Measurement | Loss measurement
[gautam, yehonathan, paco]
We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.
Before, we simply stitched all N=16 repetitions into a single time series and computed the loss: e.g., see Attachment 1 for such YARM loss data. The mean and stdev of this long time series gave the loss quoted last time. We knew that the uncertainty was almost certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g. beam angular motion, unnormalizable power fluctuations, etc...).
Today we analyzed the individual locked/misaligned cycles separately. From each cycle it is possible to obtain a mean value of the loss as well as a std dev *across the duration of the trace*, but because we have a measurement ensemble, it is also possible to obtain an ensemble-averaged mean and a statistical uncertainty estimate *across the independent cycle realizations*. While the mean values don't change much, the latter estimate gives a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 ± 2.6 ppm and a YARM loss of 38.9 ± 0.6 ppm. To make the distinction more clear, Attachment 2 and Attachment 3 show the YARM and XARM loss measurement ensembles respectively, with single-realization (time-series) standard deviations as vertical error bars and the 1 sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across different realizations (which happen to be ordered in time), which we think arises from inconsistent ASS dither alignment convergence. This is yet to be tested.
For budgeting the excessive uncertainties from a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences in the paths of both reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc., and we might be able to correlate recorded oplev signals with the reflection data to identify angular drift. We have not done this yet.
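A minimal sketch of the two uncertainty estimates compared above, assuming the per-cycle loss traces have already been extracted into a list of arrays (this is not the actual analysis script):

import numpy as np

def loss_statistics(cycle_losses):
    # cycle_losses: list of 1-D arrays, one per locked/misaligned cycle,
    # each holding the inferred loss (ppm) over the duration of that cycle.

    # Old approach: stitch all cycles into one long series and take mean/std.
    pooled = np.concatenate(cycle_losses)
    pooled_mean, pooled_std = pooled.mean(), pooled.std()

    # New approach: one mean per cycle, then the ensemble mean and the spread
    # across the independent realizations (one could also quote the standard
    # error, spread / sqrt(N), depending on the convention chosen).
    cycle_means = np.array([c.mean() for c in cycle_losses])
    ens_mean = cycle_means.mean()
    ens_spread = cycle_means.std(ddof=1)
    return (pooled_mean, pooled_std), (ens_mean, ens_spread)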
Attachment 1: LossMeasurement_RawData.pdf
Attachment 2: YARM_loss_stats.pdf
Attachment 3: XARM_loss_stats.pdf

16259 | Tue Jul 27 17:14:18 2021 | Yehonathan | Update | BHD | SOS assembly
Jordan has made 1/4" tap holes in the lower EQ stop holders (attachment). The 1/4" stops (schematics) fit nicely in them. Also, they are about the same length as the small EQ stops, so they can be used.
However, counting all the 1/4"-3/4" vented screws we have shows that we are missing 2 screws to cover all the 7 SOSs. We can either:
1. Order new vented screws.
2. Use 2 old (stained but clean) EQ stops.
3. Screw holes into existing 1/4"-3/4" screws and clean them.
4. Use small EQ stops for one SOS.
etc.
Also, I found a mistake in the schematics of the SOS tower. The 4-40 screws used to hold the lower EQ stop holders should be SS and not silver plated as noted. I'll have to find some (28) spares in the cleanroom or order new ones.
Attachment 1: 20210727_154506.png

16260 | Tue Jul 27 20:12:53 2021 | Koji | Update | BHD | SOS assembly
1 or 2. The stained ones are just fine. If you find the vented 1/4-20 screws in the clean room, you can use them.
For the 28 screws, yeah, find some spares in the clean room (faster); otherwise just order them.
16261 | Tue Jul 27 23:04:37 2021 | Anchal | Update | LSC | 40 meter party
[ian, anchal, paco]
After our second attempt at locking the PRFPMI tonight, we tried to restore the XARM and YARM locks to IR by clicking IFO_CONFIGURE > Restore XARM (POX) and IFO_CONFIGURE > Restore YARM (POY), but the arms did not lock. The green lasers were locked to the arms at maximum power, so the relative alignment of each cavity was OK. We were also able to lock PRMI using IFO_CONFIGURE > Restore PRMI carrier.
This was very weird to us. We were pretty sure that the alignment was correct, so we decided to check the POX/POY signal chain. There was essentially no signal at POX11, and there was a -100 offset on it. We could see some PDH signal on POY11, but not enough to catch the lock.
We tried running IFO_CONFIGURE > LSC OFFSETS to cancel out any dark-current DC offsets. The changes made by the script are shown in attachment 1.
We went to check the tables and found no light visible on the beam finder cards at POX11 or POY11. We found that ITMX was stuck on one of the coils. We unstuck it using the shaking method. The OPLEVs on ITMX could not be switched on after this, as the OPLEV servos were railing at their limits. But when we ran Restore XARM (POX) again, they started working fine. Something is done by this script that we are not aware of.
We're stopping here. We still cannot lock either of the single arms.
Wed Jul 28 11:19:00 2021 Update:
[gautam, paco]
Gautam found that the restoring of POX/POY failed to restore the whitening filter gains in POX11 / POY11. These are meant to be restored to 30 dB and 18 dB for POX11 and POY11 respectively, but were set to 0 dB, to the detriment of any POX/POY triggering/locking. The reason these are lowered is to avoid saturating the speakers during lock acquisition. Yesterday, burt-restore didn't work because we restored c1lscepics.snap, but said gains are actually in c1lscaux.snap. After manually restoring the POX11 and POY11 whitening filter gains, gautam ran the LSCOffsets script. The XARM and YARM were able to quickly lock after we restored these settings.
The root of our issue may be that we didn't run the CARM & DARM watch script (which can be accessed from ALS/Watch Scripts in MEDM). Gautam added a line to the Transition_IR_ALS.py script to run the watch script instead.
Attachment 1: Screenshot_2021-07-27_22-19-58.png

16262 | Wed Jul 28 12:00:35 2021 | Yehonathan | Update | BHD | SOS assembly
After receiving two new tubes of EP-30 I resumed the gluing activities. I made a spreadsheet to track the assemblies that have been made, their position on the metal sheet in the cleanroom, their magnetic field, and the batch number.
I made another batch of 6 magnets yesterday (4th batch); the assembly from the 2nd batch is currently being tested for bonding strength.
One thing that we overlooked in calculating the amount of glue needed is that, in addition to the minimum 8 g of EP-30 needed for every gluing session, there are also 4 g of EP-30 wasted in the mixing tube. So that means 12 g of EP-30 are used in every gluing session. We need 5 more batches, so at least 60 g of EP-30 is needed. Luckily, we bought two tubes of 50 g each.
16263 | Wed Jul 28 12:47:52 2021 | Yehonathan | Update | CDS | Opto-isolator for c1auxey
To simulate a differential output I used two power supplies connected in series. The outer connectors were used as the outputs and the common connector was connected to the ground and used as a reference. I hooked these outputs to one of the differential analog channels and measured it over time using Striptool. The setup is shown in attachment 3.
I tested two cases: with the reference disconnected (attachment 1) and connected (attachment 2). Clearly, the unreferenced case is way too noisy.
Attachment 1: SUS-ETMY_SparePDMon0_NoRef.png
Attachment 2: SUS-ETMY_SparePDMon0_Ref_WithGND.png
Attachment 3: DifferentialOutputTest.png

16264 | Wed Jul 28 17:10:24 2021 | Anchal | Update | LSC | Schnupp asymmetry
[Anchal, Paco]
I redid the measurement of the Schnupp asymmetry today and found it to be 3.8 cm ± 0.9 cm.
Method
- One of the arms is misaligned at both the ITM and the ETM.
- The other arm is locked and aligned using ASS.
- The SRCL oscillator's output is changed to the ETM of the chosen arm.
- The AS55_Q channel in demodulation of SRCL oscillator is configured (phase corrected) so that all signal comes in C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT.
- The rotation angle of the AS55 RFPD is scanned, and C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT is averaged over 10 s after waiting 5 s to let the transients pass.
- This data is used to find the zero crossing of AS55_Q signal when light is coming from one particular arm only.
- The same is repeated for the other arm.
- The difference in the zero-crossing phase angles is twice the phase accumulated by a 55 MHz signal in travelling the length difference between the arm cavities, i.e. the Schnupp asymmetry.
I measured a phase difference of 5 ± 1 degrees between the two paths.
The uncertainty in this measurement is much larger than in gautam's measurement in 15956. I'm not sure why yet, but I will look into it.
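As a sanity check, a minimal sketch of the conversion from the measured zero-crossing phase difference to the Schnupp asymmetry, using the factor of two described in the method above (this is not the analysis code that produced the quoted numbers):

import numpy as np

c = 299792458.0   # speed of light, m/s
f_mod = 55e6      # Hz, the 55 MHz modulation referenced above
dphi_deg = 5.0    # measured zero-crossing phase difference, degrees

# The measured phase difference is twice the one-way phase picked up by the
# 55 MHz signal over the arm length difference, so:
L_schnupp = np.deg2rad(dphi_deg) / 2.0 * c / (2 * np.pi * f_mod)
print('Schnupp asymmetry ~ %.1f cm' % (L_schnupp * 100))  # ~3.8 cm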
Quote:
I used the Valera technique to measure the Schnupp asymmetry to be , see Attachment #1. The data points are points, and the zero crossing is estimated using a linear fit. I repeated the measurement 3 times for each arm to see if I get consistent results - seems like I do. Subtle effects like possible differential detuning of each arm cavity (since the measurement is done one arm at a time) are not included in the error analysis, but I think it's not controversial to say that our Schnupp asymmetry has not changed by a huge amount from past measurements. Jamie set a pretty high bar with his plot which I've tried to live up to.
Attachment 1: Lsch.pdf

16265 | Wed Jul 28 20:20:09 2021 | Yehonathan | Update | General | The temperature sensors and function generator have arrived in the lab
I put the temperature sensors box on Anchal's table (attachment 1) and the function generator on the table in front of the c1auxey Acromag chassis (attachment 2).
Attachment 1: 20210728_201313.jpg
Attachment 2: 20210728_201607.jpg

16266 | Thu Jul 29 14:51:39 2021 | Paco | Update | Optical Levers | Recenter OpLevs
[yehonathan, anchal, paco]
Yesterday around 9:30 pm, we centered the BS, ITMY, ETMY, ITMX, and ETMX oplevs (in that order) on their respective QPDs by turning the last mirror before each QPD. We did this after running the ASS dither for the XARM/YARM configurations to use as the alignment reference, in preparation for PRFPMI lock acquisition, which we had to stop due to an earthquake around midnight.
16267 | Mon Aug 2 16:18:23 2021 | Paco | Update | ASC | AS WFS MICH commissioning
[anchal, paco]
We picked up the AS WFS commissioning for daytime work, as suggested by gautam. In the end we want to commission this for the PRFPMI, but also for PRMI and MICH for completeness. MICH is the simplest, so we are starting there.
We started by restoring the MICH configuration and aligning the AS DC QPD (on the AS table) by zeroing C1:ASC-AS_DC_YAW_OUT and C1:ASC-AS_DC_PIT_OUT. Since the AS WFS gets the AS beam in transmission through a beamsplitter, we had to correct that beamsplitter's alignment to recenter the AS beam onto the AS110 PD (for this we looked at the signal on a scope).
We then checked the rotation (R) angles C1:ASC-AS_RF55_SEGX_PHASE_R and delay (D) angles C1:ASC-AS_RF55_SEGX_PHASE_D (where X = 1, 2, 3, 4 for the segment) to rotate all the signal into the I quadrature. We found that this optimized the PIT content on C1:ASC-AS_RF55_I_PIT_OUT and the YAW content on C1:ASC-AS_RF55_I_YAW_OUTMON, which is what we want anyway.
Finally, we set up some simple integrators for these WFS on the C1ASC-DHARD_PIT and C1ASC-DHARD_YAW filter banks, with a pole at 0 Hz, a zero at 0.8 Hz, and a gain of -60 dB (similar to the MC WFS). Nevertheless, when we closed the loop by actuating on the BS ASC PIT and ASC YAW inputs, it seemed like the ASC model outputs were not connected to the BS SUS model ASC inputs, so we might need to edit the model accordingly and restart it.
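A minimal sketch of the integrator shape described above, assuming the -60 dB is the overall (high-frequency) gain; the actual filters live in the C1ASC-DHARD_PIT/YAW filter banks:

import numpy as np
from scipy import signal

# H(s) = g * (s + 2*pi*0.8) / s : pole at 0 Hz, zero at 0.8 Hz, gain g
g = 10 ** (-60.0 / 20.0)
sys = signal.ZerosPolesGain([-2 * np.pi * 0.8], [0.0], g)

f = np.logspace(-2, 2, 500)  # Hz
w, mag, phase = signal.bode(sys, w=2 * np.pi * f)
print('gain at 10 Hz: %.1f dB' % np.interp(10.0, f, mag))  # ~ -60 dB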
16268 | Tue Aug 3 20:20:08 2021 | Anchal | Update | Optical Levers | Recentered ETMX, ITMX and ETMY oplevs at good state
Late elog. Original time 08/02/2021 21:00.
I locked both arms and ran ASS to reach the optimum alignment. ETMY PIT was > 10 urad, ITMX PIT was > 10 urad, and ETMX PIT was < -10 urad. Everything else was OK (absolute value less than 10 urad). I recentered these three.
Then I locked PRMI, ran ASS on PRCL and MICH, and checked the BS and PRM alignment. They were also below 10 urad in absolute value.
16269 | Wed Aug 4 18:19:26 2021 | paco | Update | General | Added infrasensing temperature unit to martian network
[ian, anchal, paco]
We hooked up the infrasensing unit to power and changed its default IP address from 192.168.11.160 (factory default) to 192.168.113.240 on the martian network. The sensor is online at that IP address, with user controls behind the usual password for most workstations.
16270 | Thu Aug 5 14:59:31 2021 | Anchal | Update | General | Added temperature sensors at Yend and Vertex too
I've added the other two temperature sensor modules, one at the Y end (on 1Y4, IP: 192.168.113.241) and one in the vertex (on 1X2, IP: 192.168.113.242). I've updated the martian host table accordingly. From inside the martian network, one can point a browser at the IP address to see the temperature sensor status. These sensors can be set to trigger alarms and send emails/SMS etc. if the temperature goes out of a defined range.
I feel something is off, though. The vertex sensor shows a temperature of ~28 degrees C, the X end says 20 degrees C, and the Y end says 26 degrees C. I believe these sensors might need calibration.
The remaining tasks are the following:
- Modbus TCP solution:
- If we get it right, this will be the easiest solution.
- We just need to add these sensors as streaming devices in some slow EPICS machine's .cmd file and add the temperature sensing channels to a corresponding database file.
- Python workaround:
- Might be faster but dirty.
- We run a python script on megatron which requests temperature values every second or so from the IP addresses and writes them to soft EPICS channels.
- We would still need to create a soft EPICS channel for this and add it to the frame builder data acquisition list.
- An even shorter workaround for the near future could be to just write the temperature every 30 min to a log file in some location.
[anchal, paco]
We made a script under scripts/PEM/temp_logger.py and ran it on megatron. The script uses the requests package to query the latest sensor data from the three sensors every 10 minutes as JSON and logs it accordingly. This is not a permanent solution. (A minimal sketch of this kind of polling loop is below.)
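The sketch (the sensor endpoint path, JSON layout, and output file are assumptions; the real script is scripts/PEM/temp_logger.py):

import json
import time
import requests

SENSORS = {
    'Xend': 'http://192.168.113.240',
    'Yend': 'http://192.168.113.241',
    'Vertex': 'http://192.168.113.242',
}

while True:
    stamp = time.strftime('%Y-%m-%d %H:%M:%S')
    readings = {}
    for name, url in SENSORS.items():
        try:
            # hypothetical JSON endpoint; the real unit's API may differ
            r = requests.get(url + '/data.json', timeout=5)
            readings[name] = r.json()
        except requests.RequestException:
            readings[name] = None
    with open('temp_log.txt', 'a') as fh:   # placeholder output location
        fh.write('%s %s\n' % (stamp, json.dumps(readings)))
    time.sleep(600)  # every 10 minutes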
16271 | Fri Aug 6 13:13:28 2021 | Anchal | Update | BHD | c1teststand subnetwork now accessible remotely
The c1teststand subnetwork is now accessible remotely. To log into this network, one needs to do the following:
- Log into nodus or pianosa. (This will only work from these two computers)
- ssh -CY controls@192.168.113.245
- Password is our usual workstation password.
- This will log you into c1teststand network.
- From here, you can log into fb1, chiara, c1bhd and c1sus2 which are all part of the teststand subnetwork.
Just to document the IT work I did: setting up this connection was a bit less trivial than usual.
- The martian subnetwork is created by a NAT router which connects only nodus to the outside GC network; all computers within the network have IP addresses 192.168.113.xxx with a subnet mask of 255.255.255.0.
- The cloned teststand network was also running on the same IP address scheme, mostly because fb1 and chiara are clones in this network. So every computer in this network also had IP addresses 192.168.113.xxx.
- I set up a NAT router to connect to the martian network, forwarding ssh requests to the c1teststand computer. My NAT router creates a separate subnet with IP addresses 10.0.1.xxx and subnet mask 255.255.255.0, gated through 10.0.1.1.
- However, the issue is that for c1teststand there are now two accessible networks which have the same IP addresses 192.168.113.xxx. So when you try to ssh, it always searches its local c1teststand subnetwork instead of routing through the NAT router to the martian network.
- To work around this, I had to manually provide an IP route on c1teststand for connecting to two of the computers (nodus and pianosa) in the martian network. This is done by:
ip route add 192.168.113.200 via 10.0.1.1 dev eno1
ip route add 192.168.113.216 via 10.0.1.1 dev eno1
- This gives c1teststand a specific path for ssh requests to/from these computers in the martian network.
16272 | Fri Aug 6 17:10:19 2021 | Paco | Update | IMC | MC rollercoaster
[anchal, yehonatan, paco]
For whatever reason (i.e. we don't really know), the MC unlocked into a weird state at ~ 10:40 AM today. We first tried to find a likely cause, as we saw it couldn't recover itself after ~ 40 min, so we decided to try a few things. First we verified that no suspensions were acting weird by looking at the OSEMs on MC1, MC2, and MC3. After validating that the sensors were acting normally, we moved on to the WFS. The WFS loops were disabled the moment the IMC unlocked, as they should be. We then proceeded to the last resort of tweaking the MC alignment a bit, first with MC2 and then MC1 and MC3 in that order, to see if we could help the MC catch its lock. This didn't help much initially and we paused at about noon.
At about 5 pm, we resumed, since the IMC had remained locked to some higher-order mode (TEM-01 by the looks of it). While looking at C1:IOO-MC_TRANS_SUMFILT_OUT on ndscope, we kept shifting the MC2 yaw alignment slider (steps = +/- 0.01 counts) slowly to help the right mode "hop". Once the right mode caught on, the WFS loops triggered and the IMC was restored. The transmission during this last stage is shown in Attachment #1.
Attachment 1: MC2_trans_sum_2021-08-06_17-18-54.png

16273 | Mon Aug 9 10:38:48 2021 | Anchal | Update | BHD | c1teststand subnetwork now accessible remotely
I had to add the following two lines to the /etc/network/interface file to make the special IP routes persist even after a reboot:
post-up ip route add 192.168.113.200 via 10.0.1.1 dev eno1
post-up ip route add 192.168.113.216 via 10.0.1.1 dev eno1
16274 | Tue Aug 10 17:24:26 2021 | paco | Update | General | Five day trend
Attachment 1 shows a five-and-a-half-day minute trend of the three temperature sensors. Logging started last Thursday ~ 2 pm, when all sensors were finally deployed. While it appears that there is a 7 degree gradient along the XARM, it seems like the "vertex" (more like ITMX) sensor was just placed on top of a network switch (which feels lukewarm to the touch), so this needs to be fixed. A similar situation is observed with the ETMY sensor. I shall do this later today.
Done. The temperature readings should now be more independent of nearby instruments.
Wed Aug 11 09:34:10 2021: I updated the plot with the full trend before and after rearranging the sensors.
Attachment 1: six_day_minute_trend.png

16275 | Wed Aug 11 11:35:36 2021 | Paco | Update | LSC | PRMI MICH orthogonality plan
[yehonathan, paco]
Yesterday we discussed a bit about working on the PRMI sensing matrix.
In particular we will start with the "issue" of non-orthogonality in the MICH actuated by BS + PRM. Yesterday afternoon we played a little with the oscillators and ran sensing lines in MICH and PRCL (gains of 50 and 5 respectively) in the times spanning [1312671582 -> 1312672300], [1312673242 -> 1312677350] for PRMI carrier and [1312673832 -> 1312674104] for PRMI sideband. Today we realized that we could have enabled the notchSensMat filter, which is a notch filter exactly at the oscillator's frequency, in FM10 and run a lower gain to get a similar SNR. We anyway want to investigate this in more depth, so here is our tentative plan of action, which implies redoing these measurements:
Task: investigate orthogonality (or lack thereof) in the MICH when actuated by BS & PRM.
1) Run sensing MICH and PRCL oscillators with PRMI Carrier locked (remember to turn NotchSensMat filter on).
2) Analyze data and establish the reference sensing matrix.
3) Write a script that performs steps 1 and 2 in a robust and safe way.
4) Scan the C1:LSC-LOCKIN_OUTMTRX, MICH to BS and PRM elements around their nominal values.
5) Scan the MICH and PRCL RFPD rotation angles around their nominal values.
We also talked about the possibility that the sensing matrix is strongly frequency dependent, such that measuring it at 311 Hz doesn't give us an accurate estimate of it. Is it worthwhile to try and measure it at lower frequencies using an appropriate notch filter?
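For the analysis in steps 1 and 2, a minimal sketch of the usual digital lock-in demodulation of a sensing line (assumptions: the error-signal data has already been fetched as a numpy array at sample rate fs, and 311 Hz is the line frequency mentioned above):

import numpy as np
from scipy import signal

def demod_line(data, fs, f_line=311.0, f_lp=1.0):
    # Mix the time series down so the sensing line lands at DC, then low-pass
    # and average; returns the complex line amplitude I + 1j*Q.
    t = np.arange(len(data)) / fs
    b, a = signal.butter(4, f_lp / (fs / 2.0))
    i_part = 2 * np.mean(signal.filtfilt(b, a, data * np.cos(2 * np.pi * f_line * t)))
    q_part = 2 * np.mean(signal.filtfilt(b, a, data * np.sin(2 * np.pi * f_line * t)))
    return i_part + 1j * q_part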
Wed Aug 11 15:28:32 2021 Updated plan after group meeting
- The problem may be in the actuators, since the orthogonality seems fine when actuating on ITMX/ITMY, so we should instead focus on measuring the actuator transfer functions, using OpLevs for example (same high-frequency excitation, since no OSEM will work above 10 Hz).
16276 | Wed Aug 11 12:06:40 2021 | Yehonathan | Update | CDS | Opto-isolator for c1auxey
I redid the differential input experiment using the DS360 function generator we recently got. I generated a low-frequency (0.1 Hz) sine wave signal with an amplitude of 0.5 V and connected the + and - outputs to a differential input on the new c1auxey Acromag chassis. I recorded a time series of the corresponding EPICS channel with and without the common on the DS360 connected to the Ref connector on the Acromag unit. The common connector on the DS360 is not normally grounded (there are a few tens of kohms between the ground and common connectors). The attachment shows that, indeed, the analog input readout is extremely noisy with the Ref disconnected. The point where the Ref was connected to common is marked in the picture.
Conclusion: the Ref connector on the analog input Acromag units must be connected to some stable voltage source for normal operation.
Attachment 1: SUS-ETMY_SparePDMon0_2.png

16277 | Thu Aug 12 11:04:27 2021 | Paco | Update | General | PSL shutter was closed this morning
Thu Aug 12 11:04:42 2021: Arrived to find the PSL shutter closed. Why? Who? When? How? No elog, no fun. I opened it, the IMC is now locked, and the arms were restored and aligned.
16278 | Thu Aug 12 14:59:25 2021 | Koji | Update | General | PSL shutter was closed this morning
What I was afraid of was the vacuum interlock. And indeed there was a pressure surge this morning. Is this real? Why didn't we receive the alert?
Attachment 1: Screen_Shot_2021-08-12_at_14.58.59.png

16279 | Thu Aug 12 20:52:04 2021 | Koji | Update | General | PSL shutter was closed this morning
I did a bit more investigation on this.
- I checked P1~P4, PTP2/3, N2, TP2, TP3, but found that only P1a and P2 were affected.
- Looking at the min/mean/max of P1a and P2 (Attachment 1), the signal had a large fluctuation. It is impossible for P1a to go from 0.004 to 0 instantaneously.
- Looking at the raw data of P1a and P2 (Attachment 2), the value was not steadily large. Instead, it looks like fluctuating noise.
So my conclusion is that, for an unknown reason, an unknown noise coupled only into P1a and P2 and tripped the PSL shutter. I still don't know the status of the mail alert.
Attachment 1: Screen_Shot_2021-08-12_at_20.51.19.png
Attachment 2: Screen_Shot_2021-08-12_at_20.51.34.png

16280 | Mon Aug 16 23:30:34 2021 | Paco | Update | CDS | AS WFS commissioning; restarting models
[koji, ian, tega, paco]
With the remote/local assistance of Tega/Ian last Friday, I made changes to the c1sus model by connecting the C1:ASC model outputs (found within a block in c1ioo) to the BS and PRM suspension inputs (pitch and yaw). Then Koji reviewed these changes today and pointed out that no changes were actually needed, since the blocks were already in place and connected to the right ports; the model probably just hadn't been rebuilt...
So, today we ran "rtcds make" and "rtcds install" on the c1ioo and c1sus models (in that order), but the whole system crashed. We spent a great deal of time restarting the machines and their processes, and struggled quite a lot with setting the dates to match the GPS times. What seemed to work in the end was to follow the format of the date on the fb1 machine and try to match the timing at the sub-second level. This is especially tricky when done by hand, so the whole task is tedious. We anyway completed the reboot for almost all the models except c1oaf (which tends to make things crashy), since we won't need it right away for the tasks ahead. One potentially annoying issue we found while manually rebooting c1iscey is that one of its network ports is loose (the ethernet cable won't click in place) and it appears to use this link to boot (!!), so for a while this machine just wasn't coming back up.
Finally, as we restored the suspension controls and reopened the shutters, we noticed a great deal of misalignment to the point no reflected beam was coming back to the RFPD table. So we spent some time verifying the PRM alignment and TT1 and TT2 (tip tilts) and it turned out to be mostly the latter pair that were responsible for it. We used the green beams to help optimize the XARM and YARM transmissions and were able to relock the arms. We ran ASS on them, and then aligned the PRM OpLevs which also seemed off. This was done by giving a pitch offset to the input PRM oplev beam path and then correcting for it downstream (before the qpd). We also adjusted the BS OpLev in the end.
Summary: the ASC BS and PRM outputs are now built into the SUS models. Let the AS WFS loops be closed soon!
Addenda by KA
- Upon restarting the RTS:
Date/Time adjustment
sudo date --set='xxxxxx'
- If the time on the CDS status MEDM screen for each IOP matched the FB local time, we ran
rtcds start c1x01
(or c1x02, etc)
- Every time we restarted the IOPs, fb was restarted by
telnet fb1 8083
> shutdown
and mx_stream was restarted from the CDS screen, because these actions change the "DC" status.
- Today we once succeeded in restarting the vertex machines. However, the RFM signal transmission failed. So the two end machines were power cycled, as well as c1rfm, but this turned all the machines RED again. Hell...
- We checked the PRM oplev. The spot was around the center but was clipped. This confused us quite a bit. Our conclusion was that the oplev was like that before the RTS reboot.
16281 | Tue Aug 17 04:30:35 2021 | Koji | Update | SUS | New electronics
Received:
Aug 17, 2021: 2x ISC Whitening
Delivered 2x Sat Amp boards to Todd
Attachment 1: P_20210816_234136.jpg
Attachment 2: P_20210816_235106.jpg
Attachment 3: P_20210816_234220.jpg

16282 | Wed Aug 18 20:30:12 2021 | Anchal | Update | ASS | Fixed runASS scripts
Late elog: Original time of work Tue Aug 17 20:30 2021
I locked the arms remotely yesterday and tried running the runASS.py scripts (generally run by clicking the Run ASS buttons on the IFO OVERVIEW or ASC screens). We have known for a few weeks that this script stopped working for some reason. It would start the dithering and would optimize the alignment, but then would fail to freeze the state and save the alignment.
I found that caget('C1:LSC-TRX_OUT') and caget('C1:LSC-TRY_OUT') were not working on any of the workstations. This is weird, since caget was able to acquire these fast channel values earlier, and we have seen this script work for about a month without any issue.
Anyways, to fix this, I just changed the channel name to 'C1:LSC-TRY_OUT16' in the check the script does at the end to confirm the arm has indeed been aligned. It was only this step that was failing. Now the script is working fine, and I tested it on both arms. On the Y arm, I misaligned the arm by adding a bias in yaw, changing C1:SUS-ITMY_YAW_OFFSET from -8 to 22. The script was able to align the arm back.
16283 | Thu Aug 19 03:23:00 2021 | Anchal | Update | CDS | Time synchronization not running
I tried to read a bit and understand the NTP synchronization implementation on the FE computers. I'm quite sure that 'NTP synchronized' should read 'yes' in the output of timedatectl on these computers if timesyncd is running correctly. As Koji reported in 15791, this is not the case. I logged into c1lsc, c1sus and c1ioo and saw that the RTC has drifted from the software clocks too, which does not happen if NTP synchronization is active. This almost certainly means that if the computers are rebooted, the synchronization will be lost and the models will fail to come online.
My current findings are the following (this should be documented in the wiki once we set everything up):
- nodus is running an NTP server using chronyd. One can check the configuration of this NTP server in /etc/chronyd.conf
- fb1 is running an NTP server using ntpd that follows nodus and an IP address 131.215.239.14. This can be seen in /etc/ntp.conf.
- There are no comments to describe what this other server (131.215.239.14) is. Does the GC network have an NTP server too?
- c1lsc, c1sus and c1ioo all have systemd-timesyncd.service running with configuration file in /etc/systemd/timesyncd.conf.
- The configuration file sets Servers=ntpserver, but echo $ntpserver produces nothing (blank) on these computers, and I've been unable to find any place where ntpserver is defined.
- In chiara (our name server), the name server file /etc/hosts does not have any entry for ntpserver either.
- I think the problem might be that these computers are unable to find the ntpserver as it is not defined anywhere.
The solution to this issue could be as simple as just defining ntpserver in the name server list. But I'm not sure if my understanding of this issue is correct. Comments/suggestions are welcome for future steps.
16284 | Thu Aug 19 14:14:49 2021 | Koji | Update | CDS | Time synchronization not running
131.215.239.14 looks like Caltech's NTP server (ntp-02.caltech.edu)
https://webmagellan.com/explore/caltech.edu/28415b58-837f-4b46-a134-54f4b81bee53
I can't say whether it is correct or not, as I did not survey it at your level. I think you need a few tests of reconfiguring and restarting the NTP clients to see if time synchronization starts. Because the local time is not regulated right now anyway, this operation is safe, I think.
16285 | Fri Aug 20 00:28:55 2021 | Anchal | Update | CDS | Time synchronization not running
I added ntpserver as a known host name for address 192.168.113.201 (fb1's address, where the NTP server is running) in the martian host list in the following files on chiara:
/var/lib/bind/martian.hosts
/var/lib/bind/rev.113.168.192.in-addr.arpa
Note: a host name called ntp was already defined at 192.168.113.11 but I don't know what computer this is.
Then, I restarted the DNS on chiara by doing:
sudo service bind9 restart
Then I logged into c1lsc and c1ioo and ran the following:
controls@c1ioo:~ 0$ sudo systemctl restart systemd-timesyncd.service
controls@c1ioo:~ 0$ sudo systemctl status systemd-timesyncd.service -l
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
Active: active (running) since Fri 2021-08-20 07:24:03 UTC; 53s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 23965 (systemd-timesyn)
Status: "Idle."
CGroup: /system.slice/systemd-timesyncd.service
└─23965 /lib/systemd/systemd-timesyncd
Aug 20 07:24:03 c1ioo systemd[1]: Starting Network Time Synchronization...
Aug 20 07:24:03 c1ioo systemd[1]: Started Network Time Synchronization.
Aug 20 07:24:03 c1ioo systemd-timesyncd[23965]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 07:24:35 c1ioo systemd-timesyncd[23965]: Using NTP server 192.168.113.201:123 (ntpserver).
controls@c1ioo:~ 0$ timedatectl
Local time: Fri 2021-08-20 07:25:28 UTC
Universal time: Fri 2021-08-20 07:25:28 UTC
RTC time: Fri 2021-08-20 07:25:31
Time zone: Etc/UTC (UTC, +0000)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
The same output is shown on c1lsc too. The 'NTP synchronized' flag in the output of the timedatectl command did not change to yes, and the RTC is still 3 seconds ahead of the local clock.
Then I went to c1sus to see what the status output was before restarting the timesyncd service. I got the following output:
controls@c1sus:~ 0$ sudo systemctl status systemd-timesyncd.service -l
● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
Active: active (running) since Tue 2021-08-17 04:38:03 UTC; 3 days ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 243 (systemd-timesyn)
Status: "Idle."
CGroup: /system.slice/systemd-timesyncd.service
└─243 /lib/systemd/systemd-timesyncd
Aug 20 02:02:18 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 02:36:27 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 03:10:35 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 03:44:43 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 04:18:51 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 04:53:00 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 05:27:08 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 06:01:16 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 06:35:24 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 07:09:33 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
This actually shows that the service was able to find ntpserver correctly at 192.168.113.201 even before I changed the name server file on chiara. So I'm retracting the changes made to the name server; they are probably not required.
The configuration files for timesyncd.conf are read-only even with sudo. I tried changing permissions, but that did not work either. Maybe these files are not correctly configured. The man page of timesyncd says to use the field 'NTP' to give the NTP servers; our files are using the field 'Servers'. But since we are not getting any error message, I don't think this is the issue here.
I'll look more into this problem.