ID | Date | Author | Type | Category | Subject
14731 | Sun Jul 7 17:54:34 2019 | Milind | Update | Computer Scripts / Programs | PMC autolocker
I modified the autolocker code I wrote to read from a .yaml configuration file instead of command-line arguments (that option still exists if one wishes to override what the .yaml file contains). I have pushed the code to GitHub. I started reading about MCMC and will put up details of the remaining part of the work ASAP.
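A minimal sketch of the configuration pattern described above (the file name, keys, and default values below are placeholders, not the actual autolocker parameters): the .yaml file supplies the parameters, and a command-line flag overrides them if given.
import argparse
import yaml

parser = argparse.ArgumentParser(description='PMC autolocker (sketch)')
parser.add_argument('--config', default='autolock_pmc.yaml')
parser.add_argument('--threshold', type=float, help='override the value from the .yaml file')
args = parser.parse_args()

with open(args.config) as f:
    params = yaml.safe_load(f)        # e.g. {'threshold': 0.7, ...}

if args.threshold is not None:        # command-line argument wins if supplied
    params['threshold'] = args.threshold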
Quote:
P.P.S. He also said that it would not do to have command line arguments as the main source from which parameters are procured and that .yml files ought to be used instead. I will make that change asap.

15007 | Mon Nov 4 11:41:28 2019 | shruti | Update | Computer Scripts / Programs | Epics installed on donatella
I've installed pyepics on Donatella by running
sudo yum install pyepics
Pip and ipython did not seem to be installed yet.
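A quick sanity check of the install would be something like the following (a sketch; the channel name is just an example PV, not necessarily one served from this machine):
from epics import caget
print(caget('C1:PSL-FSS_RMTEMP'))    # prints None if the PV is unreachable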
15021 | Thu Nov 7 17:55:37 2019 | shruti | Update | Computer Scripts / Programs | Python packages on donatella
Today I realized that pip and other python2/3 packages were installed in the conda base environment, so after running
conda activate
I could run the python-GPIB scripts to interface with the Agilent.
I did, however, have to add a python2 kernel to jupyter/ipython, which I did in a separate conda environment:
conda create -n ipykernel_py2 python=2 ipykernel
source activate ipykernel_py2
python -m ipykernel install --user
Quote:
I've installed pyepics on Donatella by running
sudo yum install pyepics
Pip and ipython did not seem to be installed yet.

15117 | Mon Jan 13 15:47:37 2020 | shruti | Configuration | Computer Scripts / Programs | c1psl burt restore
[Yehonathan, Jon, Shruti]
Since the PMC would not lock, we initially burt-restored the c1psl machine to the last available snapshot (Dec 10th, 2019), but it still would not lock.
Then it was burt-restored to midnight of Dec 1st, 2019, after which it could be locked.
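For reference, a restore of this kind is done with the standard BURT write-back tool pointed at the chosen snapshot file; the path below is only a placeholder, not the exact snapshot used here:
burtwb -f /path/to/autoburt/snapshots/c1pslepics.snap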
15171 | Wed Jan 29 00:27:13 2020 | gautam | Update | Computer Scripts / Programs | mcup / mcdown modified
To fix the apparent slowness of execution of the caput commands on megatron, I changed the "ewrite" macro in the mcup and mcdown scripts to use ezcawrite instead of caput. The old lines are simply commented out, and can be reverted to at any point if we so desire. After these changes, we saw that both scripts complete execution much faster.
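A sketch of what such a macro swap looks like; the actual definition inside mcup/mcdown may differ, and the channel in the example call is only illustrative. Both caput and ezcawrite take a channel name and a value.
# old definition, kept commented out so it can be reverted
# ewrite() { caput "$1" "$2"; }
ewrite() { ezcawrite "$1" "$2"; }
ewrite C1:IOO-MC_VCO_GAIN 15   # example call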
15207 | Tue Feb 11 19:11:35 2020 | shruti | Update | Computer Scripts / Programs | MATLAB on donatella
Tried to open MATLAB on Donatella and found the error:
MATLAB is selecting SOFTWARE OPENGL rendering.
License checkout failed.
License Manager Error -9
This error may occur when:
-The hostid of this computer does not match the hostid in the license file.
-A Designated Computer installation is in use by another user.
If no other user is currently running MATLAB, you may need to activate.
Troubleshoot this issue by visiting:
http://www.mathworks.com/support/lme/R2015b/9
Diagnostic Information:
Feature: MATLAB
License path: /home/controls/.matlab/R2015b_licenses/license_donatella_865865_R2015b.lic:/cvs/cds/caltech/apps/linux64/matlab15b/licenses/license.dat:/cvs/cds/caltech/apps/linux64/matlab15b/licenses/license_chiara_865865_R2015b.lic:/cvs/cds/caltech/apps/linux64/matlab15b/licenses/license_pianosa_865865_R2015b.lic
Licensing error: -9,57.
So I used my Caltech credentials to get an activation key for the computer. I could not find the option for a campus license, so I used the individual single-machine license.
Now it can be run by going to the location:
/cvs/cds/caltech/apps/matlab17b/bin
and running
./matlab
On opening MATLAB, there were a whole bunch of other errors, including a low-level graphics error when we tried to plot something.
15325 | Tue May 12 17:51:25 2020 | rana | Summary | Computer Scripts / Programs | updated LESS syntax highlight on nodus
apt install source-highlight
then modified bashrc to point to /usr/share instead of /usr/bin
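The relevant bashrc lines typically look like this (a sketch; the exact location of src-hilite-lesspipe.sh depends on the distribution and package):
export LESSOPEN="| /usr/share/source-highlight/src-hilite-lesspipe.sh %s"
export LESS=' -R '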
15331 | Thu May 14 00:47:55 2020 | gautam | Summary | Computer Scripts / Programs | pcdev1 added to authorized keys on nodus
This is to facilitate the summary page config files being pulled from nodus in a scripted way, without being asked for authentication. If someone knows of a better/more secure way for this to be done, please let me know. The site summary pages seem to pull the config files from a git repo; maybe that's better?
15341 | Wed May 20 20:10:34 2020 | rana, John Z | Update | Computer Scripts / Programs | NDS2 server / conf updated - seems OK now
We noticed about a week ago that the NDS2 channel lists were not getting updated on megatron. JZ and I investigated; he was able to fix it all up this afternoon by logging in and snooping around Megatron.
Please try it out and tell me about any problems in getting fresh data.
- The NDS2 server is what we connect to through our python NDS2 client software to download some data.
- It has been working for years, but it looks like the channel lists that it generates suffered a file corruption back in 2017.
- Since the NDS2 server code tries to make incremental changes, it was failing to make a new channel list: it could not parse the corrupted file.
- There was a controls crontab entry to restart the server every morning, but the file name in that entry had a typo, so that wasn't working. I commented it out, since it shouldn't be necessary (let's see how it goes...)
- the nds2mgr account also has a crontab, but that was failing since it didn't have sudo permission. JZ added nds2mgr to the sudoers list so that should work now.
- I was able to get new channels as of 4 PM today, so it seems to be working.
* We should remember to rebuild the NDS2 server code for Ubuntu. The binary running there is built for CentOS / SL7, but we moved to Ubuntu recently since SL7 support is going away.
** The nds2 code & conf files are not backed up anywhere since they're not on /cvs/cds. It has 52 GB(!!) of txt channel lists & archives which we don't need to back up.
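To try it out, something like the following against the server should fetch fresh data (a sketch: the port, GPS times, and channel name are illustrative, not a tested recipe):
import nds2
conn = nds2.connection('megatron', 31200)
bufs = conn.fetch(1274000000, 1274000016, ['C1:IOO-MC_TRANS_SUM_OUT_DQ'])
print(bufs[0].data.mean())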
15342 | Thu May 21 15:31:26 2020 | gautam | Update | Computer Scripts / Programs | NDS2 service restarted
The service had failed at 16:09 yesterday. I just restarted it and am now able to fetch data again.
Unrelated to this work: I restarted the httpd service on nodus a couple of times this afternoon while experimenting with the summary pages.
Quote:
Please try it out and tell me about any problems in getting fresh data.

15345 | Fri May 22 10:37:41 2020 | rana | Update | Computer Scripts / Programs | NDS2 service restarted
It was dead again this morning - JZ notified.
current restart instructions (after ssh to megatron):
cd /home/nds2mgr/nds2-megatron
sudo su nds2mgr
make -f test_restart
15346 | Mon May 25 10:54:41 2020 | rana | Update | Computer Scripts / Programs | NDS2 service restarted
So far it has run through the weekend with no problems (except that there are huge log files, as usual).
I have started to set up monit to run on megatron to watch this process. In principle this would send us alerts when things break and also give a web interface to watch monit. I'm not sure how to do web port forwarding between megatron and nodus, so for now it's just on the terminal, e.g.:
sudo monit status
Monit 5.25.1 uptime: 4m
System 'megatron'
status OK
monitoring status Monitored
monitoring mode active
on reboot start
load average [0.15] [0.22] [0.25]
cpu 0.6%us 1.0%sy 0.2%wa
memory usage 1001.4 MB [25.0%]
swap usage 107.2 MB [1.9%]
uptime 40d 17h 55m
boot time Tue, 14 Apr 2020 17:47:49
data collected Mon, 25 May 2020 11:43:03
Process 'nds2'
status OK
monitoring status Monitored
monitoring mode active
on reboot start
pid 25007
parent pid 1
uid 4666
effective uid 4666
gid 4666
uptime 3d 1h 22m
threads 53
children 0
cpu 0.0%
cpu total 0.0%
memory 19.4% [776.1 MB]
memory total 19.4% [776.1 MB]
security attribute unconfined
disk read 0 B/s [2.3 GB total]
disk write 0 B/s [17.9 MB total]
data collected Mon, 25 May 2020 11:43:03
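For the record, the monit stanza for watching the nds2 process looks roughly like this (a sketch only; the pidfile path and start/stop commands on megatron are assumptions):
check process nds2 with pidfile /var/run/nds2.pid
    start program = "/bin/systemctl start nds2"
    stop program  = "/bin/systemctl stop nds2"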
15618 | Thu Oct 8 08:37:15 2020 | gautam | Update | Computer Scripts / Programs | Finesse GUI
This looks cool, we should have something similar; it could be really useful.
15621 | Thu Oct 8 18:40:42 2020 | Koji | Update | Computer Scripts / Programs | Finesse GUI
Is it better than Luxor? https://labcit.ligo.caltech.edu/~jharms/luxor.html
15693 | Wed Dec 2 12:35:31 2020 | Paco | Summary | Computer Scripts / Programs | TC200 python driver
Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here.
*Warning*: this first version of the driver remains untested.
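A minimal pyserial sketch of the kind of driver described above (the port name, baud rate, and command string are assumptions based on the TC200's ASCII serial protocol, not a tested interface):
import serial

tc = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)
tc.write(b'tact?\r')                     # query the actual temperature (assumed command)
print(tc.readline().decode().strip())
tc.close()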
15694 | Wed Dec 2 15:27:06 2020 | gautam | Summary | Computer Scripts / Programs | TC200 python driver
FYI, there is this. It seems pretty well maintained, and so might be more useful in the long run. The available catalog of instruments is quite impressive - the TC200 temp controller and SRS345 func gen are included and are things we use in the lab. Maybe you can make a pull request to add the MDT694B (there is some nice API already built, I think). We should also put our netgpibdata stuff and the vacuum gauge control (basically everything that isn't rtcds) on there (unless there are some intellectual property rights issues that the Caltech lawyers have to sort out).
Quote:
Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here.
*Warning*: this first version of the driver remains untested.

15716 | Tue Dec 8 15:07:13 2020 | gautam | Update | Computer Scripts / Programs | ndscope updated
I updated ndscope on rossa to a bleeding-edge version (0.7.9+dev0) which has many of the fixes I've requested in recent times (e.g. direct PDF export, see Attachment #1). As usual, if you find issues, report them on the issue tracker. The basic functionality for looking at signals seems to be okay, so this shouldn't adversely impact locking efforts.
In hindsight, I decided to roll back to 0.7.9 and have the bleeding edge as a separate binary. So if you call ndscope from the command line, you should still get 0.7.9 and not the bleeding edge.
Attachment 1: test.pdf
15882 | Mon Mar 8 20:11:51 2021 | rana | Frogs | Computer Scripts / Programs | activate_matlab out of control on Megatron
There were a zillion processes trying to activate (this is the initial activation after the initial installation) matlab 2015b on megatron, so I killed them all. Was someone logged in to megatron and trying to run matlab sometime in 2020? If so, speak now, or I will send the out-of-control process brute squad after you!
15916 | Fri Mar 12 18:10:01 2021 | Anchal | Summary | Computer Scripts / Programs | Installed cds-workstation on allegra
allegra already had a fresh Debian 10 installed on it. I installed the cds-workstation packages (with the help of Erik von Reis). I checked that command-line caget, caput, etc. were working. I'll see if medm and other things are working next time we visit the lab.
15940 | Thu Mar 18 13:12:39 2021 | gautam | Update | Computer Scripts / Programs | Omnigraffle vs draw.io
What is the advantage of Omnigraffle cf. draw.io? The latter also has a desktop app, and for creating drawings seems to have all the functionality that Omnigraffle has; see for example here. draw.io doesn't require a license, and I feel this is a much better tool for collaborative artwork. I really hate that I can't even open my old Omnigraffle diagrams now that I no longer have a license.
Just curious if there are some major drawback(s); it's not like I'm making any money off draw.io.
Quote:
After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle.

15967 | Thu Mar 25 17:39:28 2021 | gautam | Update | Computer Scripts / Programs | Spot position measurement scripts "modernized"
I want to measure the spot positions on the IMC mirrors. We know that they can't be too far off center. Basically, I did the bare minimum to get the scripts in /opt/rtcds/caltech/c1/scripts/ASS/MC/ running on rossa (python3 mainly). I confirmed that I get some kind of spot measurement from this, but I'm not sure of the data quality / calibration to convert the demodulated response into mm of decentering on the MC mirrors. Perhaps it's something the MC suspension team can look into - it seems implausible to me that we are off by 5 mm in PIT and YAW on MC2. The spot positions I get are (in mm from the center):
MC1P       MC2P       MC3P       MC1Y       MC2Y       MC3Y
0.640515   -5.149050  0.476649   -0.279035  5.715120   -2.901459
A future iteration of the script should also truncate the number of significant figures per a reasonable statistical error estimate.
Attachment 1: MCdecenter202103251735_mcmirror0.pdf
Attachment 2: MCdecenter202103251735_mcdecenter0.pdf
16075 | Thu Apr 22 14:49:08 2021 | gautam | Update | Computer Scripts / Programs | rossa added to RTS authorized keys
This is to facilitate running scripts like the CDS reboot script, mx_stream restart, etc., from rossa, without being password-prompted every time; previously this was only possible from pianosa. I added the public key of rossa to FB and the RT FE servers. I suppose I could add it to the Acromag servers too, but I haven't yet.
16085 | Mon Apr 26 18:52:52 2021 | Anchal, Paco | HowTo | Computer Scripts / Programs | awg free slot
Today we had some trouble launching an excitation on C1:IOO-MC_LSC_EXC from awggui. The error read:
awgSetChannel: failed getIndexAWG C1:SUS-MC2_LSC_EXC ret=-3
What solved this was the following:
- launch the dtt command line interface
- Anchal remembered the slot number 37008
- issue: awg free 37008
- slot freed; launch a new instance of awggui
16324 | Mon Sep 13 18:19:25 2021 | Tega | Update | Computer Scripts / Programs | Moved modbus service from chiara to c1susaux
[Tega, Anchal, Paco]
After talking to Anchal, it was made clear that chiara is not the place to host the modbus service for the temperature sensors. The obvious machine is c1pem, but its startup cmd script loads C object files and it is not clear how easily the modbus functionality would integrate, since we can only log in via telnet, so we decided to instead host the service on c1susaux. We also modified the /etc/motd file on c1susaux, which displays the welcome message during login, to inform the user that this machine hosts the modbus service for the temperature sensor. Anchal plans to also document this information on the temperature sensor wiki at some point in the future, when the page is updated to include what has been learnt so far.
We might also consider updating the database file to a more modern way of reading the temperature sensor data using FLOAT32_LE, which is available in EPICS version 3.14 and above, instead of the current method, which works but leaves the reader bemused by the bitwise operations that convert the two 16-bit words (A and B) to an IEEE-754 32-bit float, via
field(CALC, "(A&D?(A&C?-1:1):0)*((G|A&E)*J+B)*2^((A&D)/G-F)")
where
field(INPA, "$HiWord")
field(INPB, "$LoWord")
field(INPC, "0x8000") # Hi word, sign bit
field(INPD, "0x7F80") # Hi word, exponent mask
field(INPE, "0x00FF") # Hi word, mantissa mask (incl hidden bit)
field(INPF, "150") # Exponent offset plus 23-bit mantissa shift
field(INPG, "0x0080") # Mantissa hidden bit
field(INPJ, "65536") # Hi/Lo mantissa ratio
field(CALC, "(A&D?(A&C?-1:1):0)*((G|A&E)*J+B)*2^((A&D)/G-F)")
field(PREC, "4")
as opposed to the more modern form
field(INP,"@asyn($(PORT) $(OFFSET))FLOAT32_LE")
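As a cross-check of what that CALC expression computes, here is the same two-word-to-float conversion in Python (the register values in the example are arbitrary):
import struct

def words_to_float(hi, lo):
    # pack the two 16-bit registers and reinterpret the 32 bits as an IEEE-754 single
    return struct.unpack('>f', struct.pack('>HH', hi, lo))[0]

print(words_to_float(0x41C8, 0x0000))    # -> 25.0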
16338 | Thu Sep 16 12:06:17 2021 | Tega | Update | Computer Scripts / Programs | Temperature sensors added to the summary pages
We can now view the minute trend of the temperature sensors under the PEM tab of the summary pages. See Attachment 1 for an example of today's temperature readings.
Attachment 1: TempPlot_2021-09-16_12.04.19PM.png
16460 | Tue Nov 9 13:40:02 2021 | Ian MacMillan | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
[Ian, Tega]
After talking with Rana we have an updated plan. We will be working on this plan step by step in this order.
- Remove c1sim from the test stand rack and move it to the rack in the office next to the printer. When connecting it we will NOT connect it to the Martian network! This is to make sure that nothing is connected to the 40m system and we can't mess anything up.
- Once we have moved the computer over physically, we will need to update anyone who uses it on how to connect to it. The way we connect to it will have changed.
- Now that we have the computer moved and everyone can connect to it, we will work on the model. Currently, we have the empty models connected.
- recompile the model since we moved the computer.
- verify that nothing has changed in the move and the model can still operate and compile properly
- The model has the proper structure but we need to fill it with the proper filters and such
- For the Plant model
- To get it up and running quickly, we will use the premade plant filters for the plant model. These filters were made for c1sup.mdl and should work in our modified plant model. This will allow us to verify that everything is working and allow us to run tests on the system.
- We need to update the model and add the state-space block. (We are skipping this step for now because we are fast-tracking the testing.)
- Check with Chris to make sure that this is the right way to do it. I am pretty sure it is, but I don't know anything.
- Make the 6-DOF state-space matrix. We only have a three-DOF one; the SURF never made a 6-DOF version.
- Make the block to input into the model
- make a switch that will allow us to switch between the state-space model and the filter block
- For the controller
- Load filter coefficients for the controller model from one of the current optics and use this as a starting point.
- Add medm screens for the controller and plant. We are skipping this for now because we want results and we don't care if the screens look nice and are usable at the moment.
- Test the model
- We will take open-loop transfer functions from all six DOFs to all other DOFs, which will leave us with 36 TFs; many will be zero.
- If you are looking at this post, then we are measuring transfer functions from the blue flags to the green flags across the plant model.
- We will also want to look at the TFs across the controller.
16461 | Tue Nov 9 16:55:52 2021 | Ian MacMillan | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
[Ian, Tega]
We have moved the c1sim computer from the test stand to the server rack in the office area (see picture).
It is connected to the general campus network through the network switch at the top of the rack. This switch seeds the entire Martian network.
Test to show that I am not lying:
- you can ping it or ssh into it at
controls@131.215.114.116
using the same password as before. Notice this is not going through the nodus network.
- Its IP address also begins differently: Martian network IP addresses start with 192.168.113.
c1sim is now as connected to the 40m network as my mom's 10-year-old laptop.
Unfortunately, I have not been able to get the x2go client to connect to it. I will have to investigate further. It is nice to have access to the GUI of c1sim occasionally.
Attachment 1: IMG_8107.JPG
16462 | Tue Nov 9 18:05:03 2021 | Ian MacMillan | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
[Ian, Tega]
Now that the computer is in its new rack, I have copied over the two filter files that I will use in the plant and the controller, from pianosa:/opt/rtcds/caltech/c1/chans to the docker system in c1sim:/home/controls/docker-cymac/chans. That is to say, C1SUP.txt -> X1SUP.txt and C1SUS.txt -> X1SUS_CP.txt, where we have updated the names of the plant and controller inside the txt files to match our testing system (e.g. ITMX -> OPT_PLANT in the plant model and ITMX -> OPT_CTRL in the controller), and the remaining optics (BS, ITMY, PRM, SRM) are stripped out of C1SUS.txt in order to make X1SUS_CP.txt.
Once the filter files were copied over, I needed to add them to the filters in my models. To do this I ran the commands:
$ cd docker-cymac
$ eval $(./env_cymac)
$ ./login_cymac
# cd /opt/rtcds/tst/x1/medm/x1sus_cp
# medm -x X1SUS_OPT_PLANT_TM_RESP.adl
See this post for more detail.
Unfortunately, the graphics forwarding from the docker is not working and is giving the errors:
arg: X1SUS_OPT_PLANT_TM_RESP.adl
locateResource 'X1SUS_OPT_PLANT_TM_RESP.adl'
isNetworkRequest X1SUS_OPT_PLANT_TM_RESP.adl
canAccess('X1SUS_OPT_PLANT_TM_RESP.adl', 4) = 0
can directly access 'X1SUS_OPT_PLANT_TM_RESP.adl'
isNetworkRequest X1SUS_OPT_PLANT_TM_RESP.adl
locateResource(X1SUS_OPT_PLANT_TM_RESP.adl...) returning 1
Error: Can't open display:
This means that the easiest way to add the filters to the model is through the GUI that can be opened through the X2go client. It is probably easiest to get that working; graphics forwarding from inside the docker is most likely very hard.
Unfortunately, again, the x2go client won't connect even with the updated IP and routing. It gives me the error: unable to execute: startkde. Going into the files on c1sim:/usr/bin and trying to start startkde by myself also did not work, telling me that there was no such thing even though it was right in front of me.
16466 | Mon Nov 15 15:12:28 2021 | Ian MacMillan | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
[Ian, Tega]
We are working on three fronts for the suspension plant model:
- Filters
- We now have the state-space matrices as given at the end of this post. From these matrices, we can derive transfer functions that can be used as filter inputs. For a procedure see HERE. We accomplish this using Matlab's built-in ss(A,B,C,D) function. Then we make it discrete using c2d(sys, 1/f), which gives us our discrete system running at the right frequency. We can get the transfer functions of either of these systems using tf(sys). (See the sketch after this list for the same procedure in Python.)
- From there we can copy the transfer functions into our foton filters. Tega is working on this right now.
- State-Space
- We have our matrices as listed at the end of this post. With those compiled into a discrete system in MatLab, we can use the code Chris made, called rtss.m, to convert this system into a .c file and a .h file.
- From there we have moved those files under the userapps folder in the docker system. Then we added a c-code block to our .mdl model for the plant and pointed it at the custom c file we made. See section 7.2 of T080135-v10.
- We have done all this, and this should implement a custom state-space function in our .mdl file. The downside is that to change our SS model we have to edit the matrices; we can't edit this from an medm screen, and we have to recompile every time.
- Python Check
- This python check is run by Raj; it will take in the given state-space matrices, compute transfer functions along all inputs and outputs, and compare them to what we have from the CDS model.
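A rough Python equivalent of the Matlab procedure above, using scipy (the matrices here are a stand-in single-DOF oscillator, not the actual suspension model, and the 16384 Hz model rate is an assumption):
import numpy as np
from scipy import signal

f_model = 16384                       # assumed model rate, Hz
w0, Q = 2 * np.pi * 1.0, 10.0         # stand-in 1 Hz mode with Q = 10

A = np.array([[0.0, 1.0], [-w0**2, -w0 / Q]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

sys_c = signal.StateSpace(A, B, C, D)         # continuous-time system (Matlab ss)
sys_d = sys_c.to_discrete(1.0 / f_model)      # discretize (Matlab c2d)
num, den = signal.ss2tf(sys_d.A, sys_d.B, sys_d.C, sys_d.D)   # transfer function (Matlab tf)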
Here are the State-space matrices:
[image: state-space matrices A, B, C, D]
A few notes: if you want the values for these parameters, see the .yml file or the state-space model file. I also haven't been able to find out what exactly this s is in the matrices.
UPDATE [11/16/21 4:26pm]: I updated the matrices to make them more general and eliminate the "s" that I couldn't identify.
The input vector will take the form:
[image: input vector]
where x is the position, theta is the pitch, phi is the yaw, and y is the y-direction displacement.
16469 | Tue Nov 16 17:29:49 2021 | Ian MacMillan | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
[Ian, Tega]
Updated the A, B, C, D matrices for the state-space model to remove bugs in the previous estimate of the system dynamics. Updated the last post to show the current matrices.
We used MatLab to get the correct time-series filter coefficients in ZPK format and added them to the filters running in the TM_RESP filter matrix.
Get the pos-pos transfer function from the CDS model. Strangely, this seems to take a lot longer than anticipated to generate the transfer function, even though we are mainly probing the low-frequency behavior of the system.
For example, a test that should take approximately 6 minutes is taking well over an hour to complete. This swept sine (results below) was on low settings to get a fast answer, and it looks bad. This is a VERY basic system; it shouldn't be taking this long to complete a swept-sine TF.
Noticed that we need to run eval $(./env_cymac) every time we open a new terminal, otherwise CDS doesn't work as expected. Since this has been the source of quite a few errors already, we have decided to put it in the startup .bashrc script:
loc=$(pwd)
cd ${HOME}/docker-cymac/
eval $(./env_cymac)
cd ${loc}
Attachment 1: x_x_TF1.pdf
16477 | Thu Nov 18 20:00:43 2021 | Ian MacMillan | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
[Ian, Raj, Tega]
Here is the comparison between the results of Raj's python model and the transfer function measurement done on the plant model by Tega and me.
As you can see in the graphs, there are a few small spots of disagreement, but it doesn't look too serious. Next we will measure the signals flowing through the entire plant and controller.
For a nicer (and printable) version of these plots, look in the zipped folder under Plots/Plant_TF_Individuals.pdf.
Attachment 1: Final_Plant_Testing.zip
16478 | Mon Nov 22 16:38:26 2021 | Tega | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
[Tega, Ian]
TODO
1. Investigate cross-coupling between the various degrees of freedom (DoF) - turn on noise for each DoF in the plant model and measure the transfer functions of the other DoFs.
2. Get a closed-loop transfer function using noise injection and give a detailed outline of the procedure in the elog - IN1/IN2 for each TM_RESP filter while the others are turned off.
3. Derive an analytic model of the closed-loop transfer functions for comparison.
4. Adapt control filters to fit optimized analytical solutions.
16615 | Mon Jan 24 17:10:25 2022 | Tega | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
[Ian, Tega]
Connected the new SUS screens to the controller for the simplant model. Because of hard-coded paths in the medm screen links, it was necessary to create the following path on the c1sim computer, where the new medm screen files are located:
/opt/rtcds/userapps/trunk/sus/c1/medm/templates/NEW_SUS_SCREENS
We noticed a few problems:
1. Some of the medm files still had C1 hard coded, so we need to replace them with $IFO instead, in order for the custom damping filter screen to be useful.
2. The "Load coefficient" button was initially blank on the new sus screen, but we were able to figure out that the problem came from setting the top-level DCU_ID to 63.
medm -x -macro "IFO=X1,OPTIC=OPT_CTRL,DCU_ID=63" SUS_SINGLE_OVERVIEW.adl
[TODO]
Get the data showing the controller damping the pendulum. This will involve tweaking some gains and such to fine-tune the settings in the controller medm screen. Then we will be able to post some data of the working controller.
[Useful aside]
We should have a single place with all the instructions that are currently spread over multiple elogs so that we can better navigate the simplant computer.
Attachment 1: Screen_Shot_2022-01-24_at_5.33.15_PM.png
16626 | Thu Jan 27 16:40:57 2022 | Tega | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
[Ian, Paco, Tega]
Last night we set up the four main matrices that handle the conversion between the degrees-of-freedom bases and the sensor bases. We also wrote a bash script to automatically set up the system. The script sets the four change-of-basis matrices and activates the filters that control the plant. This script should fully set up the plant in its most basic form. The script also turns off all of the built-in noise generators.
After this, we tried damping the optic. The easiest part of the system to damp is the side or y motion of the optic, because it is separate from the other degrees of freedom in both of the bases. We were able to damp that easily; in Attachment 1 you can see, in the last graph of the ndscope screen, that the side motion of the optic is damped. Today we decided to revisit the problem.
Anyways, looking at the problem with fresh eyes today, I noticed that the pit2pit coupling has the largest swing of all the plant filters and thought this might be the reason why the inputs (UL, UR, LR, LL) to the controller were hitting the rails for the pit DoF. I reduced the gain of the pit2pit filter, then slowly increased it back to one. I also reduced the gain in the OSEM input filter from 1 to 1/100. The attached image (Attachment 2) is the output from this trial. This did not solve the problem. The output when all OSEM input filter gains are set to one is shown in Attachment 2.
We will try to continue to tweak the coefficients. We are probably going to ask Anchal and Paco to sit down with us and really hone in on the right coefficients. They have more experience and should be able to really get the right values.
Attachment 1: simplant_control_1.png
Attachment 2: simplant_control_0.png
16645 | Thu Feb 3 17:15:23 2022 | Tega | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
Finally got the SIMPLANT damping to work following Rana's suggestion to try damping one DoF at a time, woo-hoo!
At first, things didn't look good even when we only focused on the POS DoF. I then noticed that the input value (X1:SUS-OPT_PLANT_TM_RESP_1_1_IN1) to the plant was always zero. This was odd because it meant the control signal was not making its way to the plant. So I decided to look at the sensor data
(X1:SUS-OPT_PLANT_COIL_IN_UL_OUTPUT, X1:SUS-OPT_PLANT_COIL_IN_UR_OUTPUT, X1:SUS-OPT_PLANT_COIL_IN_LR_OUTPUT, X1:SUS-OPT_PLANT_COIL_IN_LL_OUTPUT)
that add up via the C2DOF matrix to give the POS DoF, and I noticed that these interior nodes can take on large values but always sum to zero, because the pair (UL, LL) was always the negative of (UR, LR). These should have the same sign, at least in our case where only the POS DoF is excited, so I tracked the issue back to the alternating (-,+,-,+,-) convention for the gains
(X1:SUS-OPT_CTRL_ULCOIL_GAIN, X1:SUS-OPT_CTRL_URCOIL_GAIN, X1:SUS-OPT_CTRL_LRCOIL_GAIN, X1:SUS-OPT_CTRL_LLCOIL_GAIN, X1:SUS-OPT_CTRL_SDCOIL_GAIN)
of the Coil Output filters used in the real system, which we had adopted in the hope that all was well. Anyways, I changed them all back to +1. This also means that we need to change the sign of the gain for the SIDE filter, which I have also done (and checked that it damps OK). I decided to reduce the magnitude of the SIDE damping gain from 1 to 0.1 so that we can see the residuals, since the value of -1 quickly sends the error to zero. I also increased the gain magnitude for the other DoFs to 4.
When looking at the plot, remember that the values actually represent counts with a scaling of 2^15 (or 32768) from the ADC. I switched back to the original filters on FM1 (e.g. pit_pit) without the damping coefficients present in the FM2 filter (e.g. pit_pit_damp).
FYI, Rana used the ETMY suspension MEDM screen to illustrate the working of the single suspension to me and changed maybe the POS and PITCH gains while doing so.
Also, the Medify purifier 'replace filter' indicator issue occurred because the moonlight button should have been pressed for 3 seconds to reset the 'replace filter' indicator after filter replacement.
Attachment 1: Screen_Shot_2022-02-03_at_8.23.07_PM.png
16654 | Wed Feb 9 14:34:27 2022 | Ian | Summary | Computer Scripts / Programs | SUS Plant Plan for New Optics
Restarted the c1sim machine at about 12:30 to help diagnose a network problem. Everything is back up and running.
Attachment 1: SummaryMdemScreen.png
17036 | Tue Jul 26 19:50:25 2022 | Deeksha | Update | Computer Scripts / Programs | Vector fitting
Trying to vectfit to the data taken from the DFD previously, but failing horribly. I will update this post as soon as I get anything semi-decent. For now, here is this fit.
Attachment 1: data.png
Attachment 2: fit_attempt.png
17038 | Tue Jul 26 21:16:41 2022 | Koji | Update | Computer Scripts / Programs | Vector fitting
I think the fit fails as the measurement quality is not good enough.
17075 | Thu Aug 11 16:48:59 2022 | rana | Update | Computer Scripts / Programs | NDS2 updates
We had several problems with our NDS2 server configuration. It runs on megatron, but I think it may have had issues since perhaps not everyone was aware of it running there.
- channel lists were supposed to be updated regularly, but the nds2_nightly script did not exist in the specified directory. I have moved it from Joe Areeda's personal directory (/home/nds2mgr/joework/server/src/utils/) to nds2mgr/channel-tracker/.
- The channel history files (/home/nds2mgr/channel-tracker/channel_history/) are stored on the local megatron disk. These files had grown to ~50 GB over the past several years. I backed these up to /users/rana/, and then wiped them out so that NDS could regenerate them. Now that the megatron local disk is not full, it seems to work in serving raw data.
- Need to confirm that this serves up trend data (second and minute)
- I think there is an nds2-server package for Debian, so we should update megatron's OS to the preferred flavour of Debian and use that. Who can we get to help with this install?
Since Megatron is currently running the "Shanghai" quad-core Opteron processor from ~2009, it's about time to replace it with something more up to date. I'll check with Neo to see if he has any old LDAS leftovers that are better.
17261 | Fri Nov 11 20:01:56 2022 | rana | Frogs | Computer Scripts / Programs | FSS SLOW servo not running
I was trying to debug why the NPRO PZT is all over the place, and it turns out that the new FSS SLOW script is not actually running.
The BLINKY is blinking, but the script is not running. I wasn't able to figure out how to kill the broken Docker thing, but if the code reports that it's running when it actually isn't, we should probably just put back the old perl or python script that ran before. I don't know how to debug this current issue, but the IMC locks will be limited in length due to this servo being broken. Whoever knows about this, please stop that Docker PID and we can just run the old python script on megatron.
I also tried to post a trend plot, but the minute trends don't yet reach the current date (!!!). They seem to have stopped recording a few days ago, so I guess the Framebuilder still needs some help, or it's tough to figure out things like when exactly the new SLOW servo stopped working.
17262 | Fri Nov 11 20:59:13 2022 | Chris | Frogs | Computer Scripts / Programs | FSS SLOW servo not running
The problem with trends was due to the epics data collection process (standalone_edc) that runs on c1sus. When all the FEs were rebooted earlier this week, this process was started automatically, but for some reason it wasn't doing its job of sending epics data to the framebuilder. I restarted it just now, and it's working again. Until this problem is sorted out, we need to remember to check on this process after rebooting c1sus.
Quote:
I also tried to post a trend plot, but the minute trends don't yet reach the current date (!!!). They seem to have stopped recording a few days ago, so I guess the Framebuilder still needs some help, or it's tough to figure out things like when exactly the new SLOW servo stopped working.

17263 | Sat Nov 12 21:59:24 2022 | Anchal | Frogs | Computer Scripts / Programs | FSS SLOW servo not running
I stopped the Docker PID script and started the old python script on megatron. Instructions on how to do this are here.
On optimus I ran:
sudo docker stop scripts_PID_FSS_Slow_1
On megatron I ran:
sudo systemctl enable FSSSlow
sudo systemctl restart FSSSlow
However, the daemon service keeps failing and restarting, so currently FSSSlow is not running. I do not know how to debug this script.
On a side note, I tested the docker service by restarting it, and it was working. From the logs, it seems like it got stuck because it could not find the C1:IOO-MC_LOCK channel, which happens when the c1psl epics servers fail or get stuck. The blinker on this script runs while the script is running, but it does not stop if the script gets stuck somewhere. If someone decides to use this script in the future, they would need to fix the error handling so that no reply from caget is treated as an error and the script restarts, rather than it continuing to try to get the channel value. Alternatively, the blinker implementation in the script should change so that it indicates a stuck state.
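A sketch of the error handling suggested above (the channel name is from this entry; the timeout value and exit-and-let-the-service-manager-restart behaviour are assumptions): pyepics caget returns None on timeout, so the script can treat that as an error and exit.
from epics import caget

value = caget('C1:IOO-MC_LOCK', timeout=5.0)
if value is None:
    # no response from the EPICS server: exit so systemd / Docker restarts the script
    raise SystemExit('caget timed out on C1:IOO-MC_LOCK')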
Quote:
Whoever knows about this, please stop that Docker PID and we can just run the old python script on megatron.

17281 | Thu Nov 17 16:48:07 2022 | Anchal | Frogs | Computer Scripts / Programs | FSS SLOW servo running Now
I've moved the FSS Slow PID script to run on megatron through a systemd daemon. The script is working as expected right now. I've updated the megatron motd and the always-running-scripts page here.
32 | Tue Oct 30 19:32:13 2007 | tobin | Problem Fixed | Computers | conlogger restarted
I noticed that the conlogger wasn't running. It looks like it hasn't been running since October 11th. I modified the restart_conlogger script to insist that it run on op340m instead of op440m, and then ran it on op340m.
46 | Thu Nov 1 16:34:47 2007 | Andrey Rodionov | Summary | Computers | Limitation on attachment size of E-LOG
I discovered yesterday when I was attaching photos that it is NOT possible to attach files whose size is 10 MB or more. Therefore, 10 MB or something very close to that value is the limit.
71 | Tue Nov 6 16:48:54 2007 | tobin | Configuration | Computers | scopes on the net
I configured our two 100 MHz Tektronix 3014B scopes with IP addresses: 131.215.113.24 (scope0) and 131.215.113.25 (scope1). Let the scripting commence!
There appears to be a Matlab Instrument Control Toolbox driver for this scope.
72 | Tue Nov 6 18:18:15 2007 | tobin | Configuration | Computers | I broke (and fixed) conlogger
It turns out that not only restart_conlogger, but also conlogger itself checks to see that it is running on the right machine. I had changed the restart_conlogger script to run on op340, but it would actually silently fail (because we cleverly redirect conlogger's output to /dev/null). Anyway, it's fixed now: I edited the conlogger source code where the hostname is hardcoded (blech!) and recompiled.
On another note, Andrey fixed the "su" command on op440m. It turns out that the GNU version, in /usr/local/bin, doesn't work, and was masking the (working) Sun version in /bin. Andrey renamed the offending version as "su.backup".
73 | Tue Nov 6 23:45:38 2007 | tobin | Configuration | Computers | tektronix scripts!
I cooked up a little script to fetch the data from the networked Tektronix scope. Example usage:
linux2:scripts>tektronix/tek-dump scope0 ch1 foo.csv
"scope0" is the hostname of the scope, "ch1" is the channel you want to dump, and "foo.csv" is the file you want to dump it to. The script is written in Python since Python's libhttp gave me less trouble than Perl's HTTP::Lite. |
77 | Wed Nov 7 10:55:21 2007 | ajw | Configuration | Computers | backup script restarted
Following the reboot of computers on 10/31/07, the backup script required a restart (which unfortunately "can't" be automated because a password needs to be typed in). I restarted it, following the instructions in /cvs/cds/caltech/scripts/backup/000README.txt, and verified that it more-or-less worked last night (the rsync sometimes times out; it gets through after a couple of days of trying).
92 | Sun Nov 11 21:21:04 2007 | rana | HowTo | Computers | New DV
To use the new ligoDV (previously GEO DV) to look at 40m data, open up a matlab session, set up for mDV as usual,
and then from the /cvs/cds/caltech/apps/ligoDV/ directory, type 'ligoDV'.
Then select which NDS server you want to look at, and start clicking to get some plots.
Attachment 1: Screenshot-1.png