ID | Date | Author | Type | Category | Subject
13580 | Wed Jan 24 23:13:30 2018 | johannes | Update | DAQ | c1auxex2 startup behavior
Quote:
The result is a smooth transition from idling to the controlled state with no sudden or large offset changes.
[Gautam, Johannes]
While checking how smooth the transition is, we still noticed significant motion of ETMX by looking at the locked green laser and OpLevs. We found that this motion was not caused by interruption of the slow offset adjust, but rather by the watchdog being re-initialized to its OFF state, which cuts the fast channels OFF. This is observed on other optics too, but not as severely. The cause is a rather large offset on the LR coil coming from the fast DAQ, which was reported as 50mV by the slow readback channel (while other readback values are <10mV). It is present even when the output of the CDS model is turned OFF, but vanishes when the watchdog is triggered. This helped us trace it to an offset of the DAC output itself: it is present at the output of the AI board but vanishes when the DAC is disconnected. The actual offset is ~40mV, as opposed to other channels on the same board, which have offsets in the range 3-7mV.
While we can compensate for this offset in software, it made us wonder if the DAC channel is somehow busted and if that's what's causing the 'wandering' of ETMX that we have been observing recently. There are two free DAC channels on the AI chassis that carries the side coil and the green temperature control signals. We could re-route the LR signal through a different DAC channel to fix this.
gautam: 40mV offset at the AI board output gets multiplied by 3 in the dewhitening board, so there is a 120mV DC offset going to the coil (measured at dewhite board output with DMM). The offset itself isn't hurting us, but the fact that it is several times larger than other channels led us to wonder if it could be drifting around as well. From my SOS pitch balancing forays, in my head I have the number 30mrad as being the full range of the OSEM actuation - so if the offset swings by 120mV, that's ~150urad of motion, which is quite large, and is of the order of magnitude I'm used to seeing ETMX move around by. |
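As a sanity check of the number above, here is a back-of-envelope version of the estimate (a sketch only; mapping the 30 mrad full-range figure onto the +/-10 V DAC span is an assumption, not something stated in this entry):

dac_offset_V = 0.040        # offset measured at the AI board output
dewhite_gain = 3.0          # DC gain of the dewhitening board
coil_offset_V = dac_offset_V * dewhite_gain           # ~0.12 V at the coil

full_range_rad = 30e-3      # assumed full OSEM actuation range (the SOS pitch balancing number)
full_range_V = 20.0         # assumed corresponding drive span (+/-10 V DAC)

angle_urad = coil_offset_V * (full_range_rad / full_range_V) * 1e6
print("%.0f mV at the coil -> ~%.0f urad" % (coil_offset_V * 1e3, angle_urad))
# gives ~180 urad, the same order of magnitude as the ~150 urad quoted above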
13590 | Wed Jan 31 15:29:44 2018 | johannes | Update | DAQ | PSL acromag server moved from megatron to c1auxex2
I moved the epics IOC server process for the single Acromag ADC that monitors the PSL signals from megatron to c1auxex2.
First, I disabled the legacy support on all channels as explained in elog 13565. Then I copied the files npro_config.cmd and NPRO.db from /opt/rtcds/caltech/c1/scripts/Acromag to /cvs/cds/caltech/target/c1psl2/ following the pattern of the old Motorola machines and the new c1auxex2. I had to make some edits for correct paths and expanded the epics records to the standard we're using for ETMX.
I then added a service to systemd on c1auxex2 that runs the epics IOC for the Acromag PSL channels: /etc/systemd/system/modbusPSL.service. No more tmux on megatron.
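A quick way to confirm the move took effect is to ask channel access which host is serving one of the PSL records (a sketch using pyepics; the channel name below is a placeholder, not an actual record from NPRO.db):

import epics
epics.cainfo('C1:PSL-NPRO_EXAMPLE')   # hypothetical PV name; the 'host' field should now report c1auxex2, not megatron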
Running two IOCs on a single machine at the same time did not produce any errors and seems fine so far. |
13742 | Mon Apr 9 23:28:49 2018 | johannes | Configuration | DAQ | c1psl channel list
I made a list of all the physical c1psl channels to get a better idea of how many Acromags we need to replace it eventually. The 3123 unit is the one whose failure had prevented c1psl from booting, which is why it was unplugged (elog post 12852), and its channels have been inactive since. Are the 126MOPA channels used for the current Mephisto? 126 tells me they're for an old Lightwave laser, but I checked a few and found that they have non-zero, changing values, so they may have been rewired.
It also hosts some virtual channels for the ISS with root C1:PSL-ISS_ defined in iss.db and dc.db, the PSL particle counter with root C1:PEM- defined in PCount.db, and a whole lot of PSL status channels defined in pslstatus.db. Transferring these virtual channels to a different machine is almost trivial, but the serial readout of the particle counter would have to find a new home.
Long story short - we need:
Function | Type | # Channels | # Channels (no MOPA) | # Units | # Units (no MOPA)
ADC | XT1221 | 34 | 21 | 5 | 3
DAC | XT1541 | 17 | 14 | 3 | 2
BIO | XT1111 | 19 | 10 | 2 | 1
3113 - ADC
C1:PSL-126MOPA_126PWR
C1:PSL-126MOPA_DTMP
C1:PSL-126MOPA_LTMP
C1:PSL-126MOPA_DMON
C1:PSL-126MOPA_LMON
C1:PSL-126MOPA_CURMON
C1:PSL-126MOPA_DTEC
C1:PSL-126MOPA_LTEC
C1:PSL-126MOPA_CURMON2
C1:PSL-126MOPA_HTEMP
C1:PSL-126MOPA_HTEMPSET
C1:PSL-FSS_RFPDDC
C1:PSL-FSS_LODET
C1:PSL-FSS_FAST
C1:PSL-FSS_PCDRIVE
C1:PSL-FSS_MODET
C1:PSL-FSS_VCODETPWR
C1:PSL-FSS_TIDALOUT
C1:PSL-PMC_RFPDDC
C1:PSL-PMC_LODET
C1:PSL-PMC_PZT
C1:PSL-PMC_MODET
3123 - ADC (failed)
C1:PSL-126MOPA_AMPMON
C1:PSL-126MOPA_126MON
C1:PSL-FSS_RCTRANSPD
C1:PSL-FSS_MINCOMEAS
C1:PSL-FSS_RMTEMP
C1:PSL-FSS_RCTEMP
C1:PSL-FSS_MIXERM
C1:PSL-FSS_SLOWM
C1:PSL-FSS_TIDALINPUT
C1:PSL-PMC_PMCTRANSPD
C1:PSL-PMC_PMCERR
C1:PSL-PPKTP_TEMP
4116 - DAC
C1:PSL-126MOPA_126CURADJ
C1:PSL-126MOPA_DCAMP
C1:PSL-126MOPA_DCAMP-
C1:PSL-FSS_INOFFSET
C1:PSL-FSS_MGAIN
C1:PSL-FSS_FASTGAIN
C1:PSL-FSS_PHCON
C1:PSL-FSS_RFADJ
C1:PSL-FSS_SLOWDC
C1:PSL-FSS_VCOMODLEVEL
C1:PSL-FSS_TIDAL
C1:PSL-FSS_TIDALSET
C1:PSL-PMC_GAIN
C1:PSL-PMC_INOFFSET
C1:PSL-PMC_PHCON
C1:PSL-PMC_RFADJ
C1:PSL-PMC_RAMP
XVME-210 - Binary Input
C1:PSL-126MOPA_FAULT
C1:PSL-126MOPA_INTERLOCK
C1:PSL-126MOPA_SHUTTER
C1:PSL-126MOPA_126LASE
C1:PSL-126MOPA_AMPON
XVME-220 - Binary Output
C1:PSL-126MOPA_126NE
C1:PSL-126MOPA_126STANDBY
C1:PSL-126MOPA_SHUTOPENEX
C1:PSL-126MOPA_STANDBY
C1:PSL-FSS_SW1
C1:PSL-FSS_SW2
C1:PSL-FSS_FASTSWEEP
C1:PSL-FSS_PHFLIP
C1:PSL-FSS_VCOTESTSW
C1:PSL-FSS_VCOWIDESW
C1:PSL-PMC_SW1
C1:PSL-PMC_SW2
C1:PSL-PMC_PHFLIP
C1:PSL-PMC_BLANK |
14141 | Mon Aug 6 20:41:10 2018 | aaron | Update | DAQ | New DAC for the OMC
Gautam and I tested out the DAC that he installed in the latter half of last week. We confirmed that at least one of the channels can successfully drive a sine wave (ch10, 1-indexed). We had to measure the output directly on the SCSI connector (breakout in the FE hard drive cabinet along the Y arm), since the SCSI breakout box (D080303) seems not to be working (wiring diagram in Gautam's elog from his SURF years).
I added some DAC channels to our c1omc model:
PZT1_PIT
PZT1_YAW
PZT2_PIT
PZT2_YAW
And determined that when we go to use the ADC, we will initially want the following channels (even these are probably unnecessary for the very first scans):
TRANS_PD1
TRANS_PD2
REFL_PD
DVMDC (drive voltage monitor, DC level)
DVMAC ("", AC level, only needed if we dither the length)
I attach a screenshot of the model, and a picture of where the whitening/dewhitening boards should go in the rack. |
Attachment 1: OMCDACmdl.png
14172 | Tue Aug 21 03:09:59 2018 | johannes | Omnistructure | DAQ | Panels for Acromag DAQ chassis
I expanded the previous panels to 6U height for the new DAQ chassis we're buying for the upgrade. I figure it's best if we stick to the modular design, so I'm showing a panel for 8 BNC connectors as an example. The front panel has 12 slots, the back has 10 plus power connectors, switches, and the ethernet plug.
I moved the power switch to the rear because it's a waste of space to put it in the front, and it's not like we're power cycling this thing all the time. Note that the unit only requires +24V (for general operation, +20V also does the trick, as is the situation for ETMX) and +15V (excitation field for the binary I/O modules). While these could fit into a single CONEC power connector, it's probably for the better if we don't make a version that supplies a large positive voltage where negative is expected, so I put in two CONEC plugs for +/- 15 and +/- 24.
I want to order 5-6 of these as soon as possible, so if anyone wants anything changed or sees a problem, please do tell! |
Attachment 1: auxdaq_40m_6U_front.pdf
Attachment 2: auxdaq_40m_6U_rear.pdf
Attachment 3: auxdaq_40m_6U_BNC.pdf
14295 | Wed Nov 14 18:58:35 2018 | aaron | Update | DAQ | New DAC for the OMC
I began moving the AA and AI chassis over to 1X1/1X2 as outlined in the elog.
The chassis were mostly filled with unused cables. There was one cable attached to the output of a QPD interface board, but nothing was attached to the input, so it was clearly not in use and I disconnected it.
I also attach a picture of some of the SMA connectors I had to rotate to accommodate the chassis in their new locations.
Update:
The chassis are installed, and the anti-imaging chassis can be seen second from the top; the anti-aliasing chassis can be seen 7th from the top.
I need to break out the SCSI on the back of the AA chassis, because the ADC breakout board only has a DB36 adapter available; the other cables are occupied by the signals from the WFS dewhitening outputs. |
Attachment 1: 6D079592-1350-4099-864B-1F61539A623F.jpeg
Attachment 2: 5868D030-0B97-43A1-BF70-B6A7F4569DFA.jpeg
15067 | Tue Dec 3 20:32:37 2019 | rana | Omnistructure | DAQ | NDS2 situation
Recently, according to Gautam, the NDS2 server has been dying on Megatron ~daily or weekly. The prescription is to restart the server.
- I could find no instructions (that work) in the elog or wiki. We must remove the misleading entries from the wiki and update it with whatever works as of today.
- There is a line (which has been commented out) in the Megatron crontab which is close to the right command, but it has the wrong path.
- Running the command from the CRON (/home/nds2mgr/nds2-megatron/test_restart) gives several errors.
- When I run the init.d command which is in the script, it seems to run fine.
- The server then takes several minutes to get itself together; i.e. just because it is running doesn't mean that you can get data. I recommend waiting 5-10 minutes.
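A minimal client-side check that the restarted server is actually returning data (a sketch; the hostname and port assume megatron's NDS2 server on the default port 31200, and the channel is one from the attached chanlist.txt with the C1: prefix added):

import nds2
conn = nds2.connection('megatron', 31200)
bufs = conn.fetch(1259000000, 1259000060, ['C1:PEM-SEIS_BS_X_OUT_DQ'])   # any recent 60 s stretch
print(bufs[0].channel.name, len(bufs[0].data), 'samples')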
Also, megatron is running Ubuntu 12!! Let's decide on a day to upgrade it to a Debian 18-ish... word from Rolf is that Scientific Linux is fading out everywhere, so Debian is the new operating system for all conformists. |
Attachment 1: getData.py
#!/usr/bin/env python
# this function gets some data (from the 40m) and saves it as
# a .mat file for the matlabs
# Ex. python -O getData.py
from scipy.io import savemat,loadmat
import scipy.signal as sig
from astropy.time import Time
import nds2
... 116 more lines ...
Attachment 2: chanlist.txt
PEM-SEIS_BS_X_OUT_DQ
PEM-SEIS_BS_Y_OUT_DQ
PEM-SEIS_BS_Z_OUT_DQ
PEM-SEIS_EX_X_OUT_DQ
PEM-SEIS_EX_Y_OUT_DQ
PEM-SEIS_EX_Z_OUT_DQ
PEM-SEIS_EY_X_OUT_DQ
PEM-SEIS_EY_Y_OUT_DQ
PEM-SEIS_EY_Z_OUT_DQ
15302 | Mon Apr 13 16:51:49 2020 | rana | Summary | DAQ | NODUS: rsyncd daemon / service set up
I just now modified the /etc/rsyncd.conf file as per Dan Kozak's instructions. The old conf file is still there with the file name appended with today's date.
I then enabled the rsync daemon to run on boot using 'enable'. I'll ask Dan to start the file transfers again and see if this works.
controls@nodus|etc> sudo systemctl start rsyncd.service
controls@nodus|etc> sudo systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.
controls@nodus|etc> sudo systemctl status rsyncd.service
● rsyncd.service - fast remote file copy program daemon
Loaded: loaded (/usr/lib/systemd/system/rsyncd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-04-13 16:49:12 PDT; 1min 28s ago
Main PID: 4950 (rsync)
CGroup: /system.slice/rsyncd.service
└─4950 /usr/bin/rsync --daemon --no-detach
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Started fast remote file copy program daemon.
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Starting fast remote file copy program daemon...
|
15560 | Sun Sep 6 13:15:44 2020 | Jon | Update | DAQ | UPS for framebuilder
Now that the old APC Smart-UPS 2200 is no longer in use by the vacuum system, I looked into whether it can be repurposed for the framebuilder machine. Yes, it can. The max power consumption of the framebuilder (a SunFire X4600) is 1.137kW. With fresh batteries, I estimate this UPS can power the framebuilder for >10 min. and possibly as long as 30 min., depending on the exact load.
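For what it's worth, a rough version of that runtime estimate (a sketch; the stored-energy and efficiency numbers are assumptions, not vendor specs):

load_W = 1137.0        # max power draw of the SunFire X4600, from above
battery_Wh = 480.0     # assumed nominal stored energy of a fresh battery set
inverter_eff = 0.85    # assumed inverter efficiency
runtime_min = battery_Wh * inverter_eff / load_W * 60
print("~%.0f min at full load" % runtime_min)   # ~20 min, consistent with the 10-30 min range above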
@Chub/Jordan, this UPS is ready to be moved to rack 1X6/1X7. It just has to be disconnected from the wall outlet. All of the equipment it was previously powering has been moved to the new UPS. I have ordered a replacement battery (APC #RBC43) which is scheduled to arrive 9/09-11. |
16853 | Sat May 14 08:36:03 2022 | Chris | Update | DAQ | DAQ troubleshooting
I heard a rumor about a DAQ problem at the 40m.
To investigate, I tried retrieving data from some channels under C1:SUS-AS1 on the c1sus2 front end. DQ channels worked fine, testpoint channels did not. This pointed to an issue involving the communication with awgtpman. However, AWG excitations did work. So the issue seemed to be specific to the communication between daqd and awgtpman.
daqd logs were complaining of an error in the tpRequest function: error code -3/couldn't create test point handle. (Confusingly, part of the error message was buffered somewhere, and would only print after a subsequent connection to daqd was made.) This message signifies some kind of failure in setting up the RPC connection to awgtpman. A further error string is available from the system to explain the cause of the failure, but daqd does not provide it. So we have to guess...
One of the reasons an RPC connection can fail is if the server name cannot be resolved. Indeed, address lookup for c1sus2 from fb1 was broken:
$ host c1sus2
Host c1sus2 not found: 3(NXDOMAIN)
In /etc/resolv.conf on fb1 there was the following line:
search martian.113.168.192.in-addr.arpa
Changing this to search martian got address lookup on fb1 working:
$ host c1sus2
c1sus2.martian has address 192.168.113.87
But testpoints still could not be retrieved from c1sus2, even after a daqd restart.
In /etc/hosts on fb1 I found the following:
192.168.113.92 c1sus2
Changing the hardcoded address to the value returned by the nameserver (192.168.113.87) fixed the problem.
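For future reference, a small consistency check between the nameserver and /etc/hosts for the front ends would catch this kind of stale entry (a sketch, run on fb1; the front-end list is illustrative):

import re
import subprocess

frontends = ['c1sus2', 'c1sus', 'c1lsc', 'c1ioo', 'c1iscex', 'c1iscey']

# static entries from /etc/hosts
static = {}
with open('/etc/hosts') as f:
    for line in f:
        fields = line.split('#')[0].split()
        for name in fields[1:]:
            static[name] = fields[0]

for fe in frontends:
    try:
        # 'host' queries the nameserver directly, bypassing /etc/hosts
        out = subprocess.check_output(['host', fe]).decode()
        m = re.search(r'has address ([\d.]+)', out)
        dns = m.group(1) if m else 'unresolved'
    except subprocess.CalledProcessError:
        dns = 'unresolved'
    hosts = static.get(fe, '(absent)')
    note = '' if dns == hosts or hosts == '(absent)' else '  <-- mismatch'
    print('%s: DNS=%s  /etc/hosts=%s%s' % (fe, dns, hosts, note))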
It might be even better to remove the hardcoded addresses of front ends from the hosts file, letting DNS function as the sole source of truth. But a full system restart should be performed after such a change, to ensure nothing else is broken by it. I leave that for another time. |
16854 | Mon May 16 10:49:01 2022 | Anchal | Update | DAQ | DAQ troubleshooting
[Anchal, Paco, JC]
Thanks Chris for the fix. We are able to access the testpoints now but we started facing another issue this morning, not sure how it is related to what you did.
- The C1:LSC-TRX_OUT and C1:LSC-TRY_OUT channels are stuck to zero value.
- These were the channels we used until last friday to align the interferometer.
- These channels are routed through the c1rfm FE model (Reflected Memory model is the name, I think). They carry the IR transmission photodiode monitors at the two ends of the interferometer, where they are first logged into the local FEs as C1:SUS-ETMX_TRX and C1:SUS-ETMY_TRY.
- These channels are then fed to C1:SCX-RFM_TRX -> C1:RFM_TRX -> C1:RFM-LSC_TRX -> C1:LSC-TRX, and similarly for the Y side.
- We are able to see the signals in the end FE filter module testpoints (C1:SUS-ETMX_TRX_OUT & C1:SUS-ETMY_TRY_OUT).
- However, we are unable to see the same signal in the c1rfm filter module testpoints like C1:RFM_TRX_IN1, C1:RFM_TRY_IN1, etc. (a sketch for stepping through this chain is below this list).
- There is an IPC error shown in CDS FE status screen for c1rfm in c1sus. But we remember seeing this red for a long time and have been ignoring it so far as everything was working regardless.
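A quick way to localize where the chain breaks is to average each point in turn (a sketch, assuming cdsutils is available on a workstation; the _OUT suffixes on the intermediate channels are guesses):

import cdsutils

chain = ['C1:SUS-ETMX_TRX_OUT', 'C1:SCX-RFM_TRX_OUT', 'C1:RFM-LSC_TRX_OUT', 'C1:LSC-TRX_OUT']
for ch in chain:
    try:
        val = cdsutils.avg(2, ch)    # 2-second average via NDS
        print('%s: %g' % (ch, val))
    except Exception as err:
        print('%s: could not read (%s)' % (ch, err))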
The steps we have tried to fix this are:
- Restart all the FE models in c1lsc, c1sus, and c1ioo (without restarting the computers themselves) , and then burt restore.
- Restart all the FE models in c1iscex, and c1iscey (only c1iscey computer was restarted) , and then burt restore.
The above steps did not fix the issue. Since we have the testpoints (C1:SUS-ETMX_TRX_OUT & C1:SUS-ETMY_TRY_OUT) to monitor the transmission levels for now, we are going ahead with our upgrade work without resolving this issue. Please let us know if you have any insights. |
16855 | Mon May 16 12:59:27 2022 | Chris | Update | DAQ | DAQ troubleshooting
It looks like the RFM problem started a little after 2am on Saturday morning (attachment 1). It’s subsequent to what I did, but during a time of no apparent activity, either by me or others.
The pattern of errors on c1rfm (attachment 2) looks very much like this one previously reported by Gautam (errors on all IRFM0 ipcs). Maybe the fix described in Koji’s followup will work again (involving hard reboots). |
Attachment 1: timeseries.png
Attachment 2: err.png
17387 | Thu Jan 5 14:30:32 2023 | Paco | HowTo | DAQ | nds2 server restart
After being unable to fetch data offsite using the nds40 server, I found enlightenment here. Our nds2 server is running on megatron, and the linked instructions should be sufficient to restore it after any hiccups. |
280 | Mon Jan 28 15:11:38 2008 | rob | HowTo | DMF | running compiled matlab DMF tools
I compiled Rana's seisBLRMS monitor, and it's now running on mafalda. To start your own DMF tools, here is a procedure:
1) build your tool in mDV, get it working the way you'd like.
2) Make a new directory /cvs/cds/caltech/apps/DMF/compiled_matlab/{your_new_directory} and copy the *.m file there.
3) Make the *.m in your new directory into a function with no args (just add a function line at the top)
4) compile it (from within a fully mDV-configured matlab) with mcc -m -R -nojvm {yourfile}.m at the matlab command line.
5) add a line corresponding to your new tool to the script /cvs/cds/caltech/apps/DMF/scripts/start_all
6) Run the start_all script referenced in part (5).
NB: Steps (4) and (6) must be carried out on mafalda. |
282 | Mon Jan 28 18:56:47 2008 | rana | Update | DMF | seisBLRMS 1.0
I made all of the updates I alluded to before:
- Expanded the dmf.db file on c1aux to include all accelerometers and the seismometer.
- Added the channels to the C0EDCU.ini file and restarted the framebuilder daqd.
- Modified seisBLRMS.m to use .conf files for the channels. The .conf files are now residing in the compiled matlab directory that Rob made.
I still have yet to compile and test the new version. It's running on linux3 right now, but feel free to kill it and compile it to run on Mafalda.
It should be making trends overnight and so we can finally see what the undergrads are really up to. |
284 | Tue Jan 29 14:56:39 2008 | rob | Update | DMF | seisBLRMS 1.0
The seisBLRMS 1.0 program crashed at ~7:20 pm last night, so we didn't get data from overnight. It crashed when framecaching failed. I added
a try-catch-end statement around the call to dttfft2 to let the program survive this, then compiled it and started it on mafalda. After ~45 minutes, the compiled version encountered the same error, and while it didn't crash per se, after 20 minutes it still wasn't able to read data. We may have to dig deeper into the guts of mDV to make this stuff run more robustly. |
287 | Wed Jan 30 20:39:31 2008 | rob | Update | DMF | seisBLRMS
In order to reduce the probability of seisBLRMS crashing due to unavailability of data, I edited seisBLRMS.m so that it displays data from 6 minutes in the past, rather than 3. After compiling, this version ran for ~8 hours without crashing. I've killed the process now because it seems to interfere with alignment scripts that use ezcademod, causing "DATA RECEIVING ERROR 4608" messages. These don't cause ezcademod to crash, but so many of them are spit out that the scripts don't work very well. I guess running DMF constantly is just making the framebuilder work too hard with disk access. In the near term, we can maybe work around this by having DMF programs check the AutoDithering bit of the IFO state vector, and just not try to get data when we're running these sorts of scripts. |
291 | Fri Feb 1 12:37:39 2008 | rob | Update | DMF | seisBLRMS trends
Here are DV trends of the output of seisBLRMS over the last ~36 hours (which is how long it's been running), and another of the last 2 hours (which shows the construction crew taking what appears to be a lunch break). |
Attachment 1: seis36hours.png
Attachment 2: seis2hours.png
312 | Tue Feb 12 16:34:07 2008 | rob | DAQ | DMF | seisBLRMS 1.1
The compiled version of seisBLRMS had been running ~2 weeks without crashing as of last night, when I killed it
so it wouldn't interfere with alignment scripts. I added an EPICS channel C1:DMF-ENABLE, and updated the DMF
executables to check this channel while running. So far it seems to work. When you're running alignment scripts,
simply click the DISABLE button on the C1DMF_MASTER.adl screen, and then re-ENABLE when the scripts finish.
It's not clear why this is necessary, though. Theories include: the constant disk access keeps the framebuilder
busy, reducing its ability to deal with ezcademod commands; or the DMF programs simply flood the
network with so much traffic that ezcademod-related packets run late and get ignored.
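The enable check itself is simple; a minimal sketch of the idea (shown here in python with pyepics, although the DMF tools themselves are compiled matlab):

import time
import epics

while True:
    if epics.caget('C1:DMF-ENABLE'):
        pass    # fetch the next stretch of data and update the BLRMS channels here
    time.sleep(10)    # poll period is arbitrary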
Also, for reasons of aesthetics, I changed the data delay from 6 minutes to 5 minutes. We'll see if that's enough. |
316 | Thu Feb 14 15:04:53 2008 | rob | DAQ | DMF | DMF delay
Some time ago I edited seisBLRMS to keep track of how long it was taking to write RMS data (that is, the delay between the accelerometer data and the write of the EPICS rms data). Here's a plot of that info, showing how the delay increases over time. I think this indicates a logical flaw in the timing of the seisBLRMS program, which sort of relies on everything running well consistently; this should not be difficult to fix. I'll maybe try increasing the delay to ~10 minutes, and making it relatively inflexible. |
Attachment 1: DMFdelay.png
317 | Thu Feb 14 15:05:18 2008 | rob | DAQ | DMF | seisBLRMS 1.1
> Also, for reasons of aesthetics, I changed the data delay from 6 minutes to 5 minutes. We'll see if that's enough.
5 minutes didn't work. |
390 | Fri Mar 21 17:01:21 2008 | rana | Configuration | DMF | Locale change on Mafalda & seisBLRMS restart
Ever since we moved the accelerometers to be around the MC and changed their names, the seisBLRMS
has not been working. I tried to restart it today after fixing the channel names in the par file
but I ran into a PERL / UBUNTU bug.
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_TIME = "en_US.ISO8859-1",
LC_MONETARY = "en_US.ISO8859-1",
LC_CTYPE = "en_US.ISO8859-1",
LC_COLLATE = "en_US.ISO8859-1",
LC_MESSAGES = "C",
LC_NUMERIC = "en_US.ISO8859-1",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
I don't know how this crept up or when it started. There were a bunch of fixes on the Ubuntu
forums which didn't work.
In the end I just set the 'unset' environment variables via our cshrc.40m file and this seemed
to make ligotools/perl happy. Let's hope this lasts... I love Linux. |
392 | Fri Mar 21 23:17:47 2008 | rana | Configuration | DMF | seisBLRMS restarted
I updated the seisBLRMS par file with the new channel names of the accelerometers and the seismometer and then
recompiled the code and restarted it according to Rob's elog entry. It went fine and the seisBLRMS is now back in
action. |
1230 | Thu Jan 15 22:30:32 2009 | rana | Configuration | DMF | DMF start script
I tried to restart the DMF using the start_all script: http://dziban.ligo.caltech.edu:40/40m/280
it didn't work  |
1232 | Fri Jan 16 11:33:59 2009 | rob | Configuration | DMF | DMF start script
It should work soon. The PATH on mafalda does not include ".", so I added a line to the start_DMF subscript, which sets up the DMF ENV, to prepend this to the path before starting the tools. I didn't put it in the primary login path (such as in the .cshrc file) because Steve objects on philosophical grounds.
Also, the epics tools in general (such as tdsread) on mafalda were not working, due to PATH shenanigans and missing caRepeaters. Yoichi is harmonizing it. |
1359 | Thu Mar 5 01:09:29 2009 | rana, alberto | Update | DMF | still not working
We tried to run DMF on mafalda, but it didn't work. I tried to compile it using Rob's elog instructions.
On mafalda, I started matlab and ran the mdv_config.m to set up mDV. I tested that the seisBLRMS.m
script ran and correctly produced changes in the seisBLRMS strip tool, but when I tried to compile it I got:
>> mcc -v -m -R -nojvm seisBLRMS.m
Warning: Duplicate directory name: /cvs/cds/caltech/apps/linux/matlab/toolbox/local.
Compiler version: 4.6 (R2007a)
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/matlab/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/signal/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/control/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/filterdesign/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/shared/controllib/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/ident/mcc.enc
Warning: an error occurred while parsing class FilterDesignDialog.AbstractEditor:
Undefined function or variable 'DAStudio.Object'.
> In /cvs/cds/caltech/apps/linux/matlab/toolbox/shared/filterdesignlib/@FilterDesignDialog/@CoeffEditor/schema.p>schema at 9
Warning: an error occurred while parsing class FilterDesignDialog.CoeffEditor:
Invalid superclass handle.
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/fixedpoint/mcc.enc
terminate called after throwing an instance of 'ApplicationRedefinedException*'
Abort (core dumped)
"/cvs/cds/caltech/apps/linux/matlab/bin/mcc" -E "/tmp/fileRnU5Qj_31324": Aborted
??? Error executing mcc, return status = 134.
In the meantime, I've started up a green terminal on allegra which is ssh'd into megatron.
On megatron there is a regular matlab session which is running seisBLRMS.m as a matlab script
and the seis DMF channels are getting updated. |
1379 | Mon Mar 9 19:33:10 2009 | rana | Update | DMF | seisBLRMS in temp condition
The seisBLRMS has been running on megatron via an open terminal ssh'd into there from allegra with matlab running. This
is because I couldn't get the compiled matlab functionality to work.
Even so, this running script has been dying lately because of some bogus 'NDS' error. So for today I
have set the NDS server for mDV on megatron to be fb40m:8088 instead of nodus.ligo.caltech.edu. If this seems to fix the problem
I will make this permanent by putting in a case statement to check whether or not the mDV'ing machine is a 40m-martian or not. |
Attachment 1: Untitled.png
1392 | Thu Mar 12 00:29:39 2009 | Jenne | Omnistructure | DMF | DMF being whiny again
Quote:
The seisBLRMS has been running on megatron via an open terminal ssh'd into there from allegra with matlab running.
[Yoichi, Jenne]
seisBLRMS was down again. I assumed it was just because the DMF Master Enable was in the 'Disabled' state, but enabling it didn't do the trick. Rana's green terminal window was complaining about not being able to find nodus.ligo.caltech.edu. Yoichi and I stopped it, closed and restarted Matlab, ran mdv_config, then ran seisBLRMS again, and it seems happy now.
On the todo list still is making the DMF / seisBLRMS stuff happy all the time. |
1400 | Fri Mar 13 19:26:09 2009 | Yoichi | Update | DMF | seisBLRMS compiled
I compiled seisBLRMS.
The tricks were the following:
(1) Don't add path in a deployed command.
It does not make sense to add paths in a compiled command because it may be moved anywhere. Moreover, it can cause some weird side effects. Therefore, I enclosed the addpath part of mdv_config.m in an "if ~isdeployed ... end" clause to avoid adding paths when deployed. Instead of adding paths in the code, we have to add paths to the necessary files with -I options at compilation time. This way, mcc will add all the necessary files into the CTF archive.
(2) Add mex files to the CTF archive by -a options.
For some reason, mcc does not add necessary mex files into the CTF archive even though those files are called in the m-file which is being compiled. We have to add those files by -a options.
(3) NDS_GetData() is slow for nodus when compiled.
NDS_GetData(), which is called by get_data(), stops for a few minutes when using nodus as an NDS server.
This problem does not happen when not compiled. I don't know the reason. To avoid this, I modified seisBLRMS.m so that when an environment variable $NDS is defined, it will use the NDS server defined in that variable.
I wrote a Makefile to compile seisBLRMS. You can read the file to see the details of the tricks.
I also wrote a script start_seisBLRMS, which can be found in /cvs/cds/caltech/apps/DMF/compiled_matlab/seisblrms/. To start seisBLRMS, you can just call this script.
At this moment, seisBLRMS is running on megatron. Let's see if it continues to run without crashing.
Quote:
The seisBLRMS has been running on megatron via an open terminal ssh'd into there from allegra with matlab running. This
is because I couldn't get the compiled matlab functionality to work.
Even so, this running script has been dying lately because of some bogus 'NDS' error. So for today I
have set the NDS server for mDV on megatron to be fb40m:8088 instead of nodus.ligo.caltech.edu. If this seems to fix the problem
I will make this permanent by putting in a case statement to check whether or not the mDV'ing machine is a 40m-martian or not. |
1416 | Sun Mar 22 22:47:58 2009 | rana | Update | DMF | seisBLRMS compiled but still dying
Looks like seisBLRMS was restarted ~1 AM Friday morning but only lasted for 5 hours. I just restarted it on megatron;
let's see how it does. I'm not optimistic. |
1484 | Wed Apr 15 02:20:46 2009 | rana, yoichi | Update | DMF | DMF now working copy
We found that DMF/ was not an SVN working copy, so I wiped out the SVN version, imported the on-disk copy, moved it to DMFold/ and then checked out the SVN version.
We can delete DMFold/ whenever we are happy with the SVN copy. |
1485 | Wed Apr 15 03:52:27 2009 | rana | Update | DMF | NDS client32 updated for DMF
Since our seisBLRMS.m complains about 'can't find hostname' after a few hours, even though matlab is able to ping fb40m,
I have recompiled the NDS mex client for 32-bit linux on mafalda and stuck it into the nds_mexs/ directory. This time I
compiled using the 'gcc' compiler instead of the 'ANSI C' compiler that is recommended in the README (which, I notice,
is now missing from Ben Johnson's web page!). Let's see how long this runs.
|
1977 | Tue Sep 8 19:36:52 2009 | Jenne | Omnistructure | DMF | DMF restarted
I (think I) restarted DMF. It's on Mafalda, running in matlab (not the compiled version which Rana was having trouble with back in the day). To start Matlab, I did "nohup matlab", ran mdv_config, then started seisBLRMS.m running. Since I used nohup, I then closed the terminal window, and am crossing my fingers in hopes that it continues to work. I would have used Screen, but that doesn't seem to work on Mafalda. |
1979 | Tue Sep 8 20:25:03 2009 | Jenne | Omnistructure | DMF | DMF restarted
Quote:
I (think I) restarted DMF. It's on Mafalda, running in matlab (not the compiled version which Rana was having trouble with back in the day). To start Matlab, I did "nohup matlab", ran mdv_config, then started seisBLRMS.m running. Since I used nohup, I then closed the terminal window, and am crossing my fingers in hopes that it continues to work. I would have used Screen, but that doesn't seem to work on Mafalda.
Just kidding. That plan didn't work. The new plan: I started a terminal window on Op540, which is ssh-ed into Mafalda, and started up matlab to run seisBLRMS. That window is still open.
Because Unix was being finicky, I had to open an xterm window (xterm -bg green -fg black), and then ssh to mafalda and run matlab there. The symptoms which led to this were that even though, in a regular terminal window on Op540 ssh-ed to mafalda, I could access tconvert, I could not make gps.m work in matlab. When Rana ssh-ed from Allegra to Op540 to Mafalda and ran matlab, he could get gps.m to work. So it seems like it was some Unix terminal craziness. Anyhow, starting an xterm window on Op540 and ssh-ing to mafalda from there seemed to work.
Hopefully this having a terminal window open and running DMF will be a temporary solution, and we can get the compiled version to work again soon. |
2072 | Thu Oct 8 22:17:15 2009 | rana | Configuration | DMF | input channels changed
I changed the input channels of the DMF recently so that it now uses 3 Guralp channels in addition to the 3 ACC and 1 Ranger.
op440m:seisblrms>diff seisBLRMS-datachans.txt~ seisBLRMS-datachans.txt
4,7c4,7
< C1:PEM-ACC_MC2_X
< C1:PEM-ACC_MC2_Y
< C1:PEM-ACC_MC2_Z
< C1:PEM-SEIS_MC1_Y
---
> C1:PEM-SEIS_GUR1_X
> C1:PEM-SEIS_GUR1_Y
> C1:PEM-SEIS_GUR1_Z
> C1:PEM-SEIS_RANGER_Y
op440m:seisblrms>pwd
/cvs/cds/caltech/apps/DMF/compiled_matlab/seisblrms
The seisBLRMS channels still have the wrong names of IX and EX, but I have chosen to keep them like this so that we have a long trend. When looking at the historical seisBLRMS trend, we just have to remember that all of the sensors have been around the MC since last summer. |
3385 | Sat Aug 7 21:57:56 2010 | rana | Summary | DMF | seisBLRMS restarted
The green xterm on op540m which is running the seisBLRMS DMF got stuck somehow ~3 days ago and lost its NDS connection. I closed the matlab session and restarted it. Seismic trends are now back online. |
Attachment 1: Untitled.png
3563 | Mon Sep 13 02:45:59 2010 | rana | Configuration | DMF | seisBLRMS restarts
I restarted the seisBLRMS DMF monitor by ssh'ing into mafalda and starting up a matlab session. I also have started a StripTool session on rossa by forwarding the process from op440m.
We need to get the modern EPICS installation onto these linux machines by copying what K. Thorne has done at LLO. |
12543 | Mon Oct 10 17:27:29 2016 | rana | Update | DMF | summary pages dead again
Been non-functional for 3 weeks. Anyone else notice this? images missing since ~Sep 21. |
12544 | Mon Oct 10 17:42:47 2016 | Max Isi | Update | DMF | summary pages dead again
I've re-submitted the Condor job; pages should be back within the hour.
Quote:
Been non-functional for 3 weeks. Anyone else notice this? images missing since ~Sep 21. |
12548 | Tue Oct 11 08:09:46 2016 | Max Isi | Update | DMF | summary pages dead again
Summary pages will be unavailable today due to LDAS server maintenance. This is unrelated to the issue that Rana reported.
Quote:
I've re-submitted the Condor job; pages should be back within the hour.
Quote:
Been non-functional for 3 weeks. Anyone else notice this? images missing since ~Sep 21. |
17126 | Thu Sep 1 09:00:02 2022 | JC | Configuration | Daily Progress | Locked both arms and aligned Op Levs
Each morning now, I am going to try to align both arms and lock. Along with that, sometime towards the end of each week, we should align the OpLevs. This is a good habit that should be practiced more often, not only by me. As for the Y Arm, Yehonathan and I had to adjust the gain to 0.15 in order to stabilize the lock. |
Attachment 1: Daily.pdf
Attachment 2: Daily.pdf
17360 | Thu Dec 15 08:37:52 2022 | JC | Update | Daily Progress | IMC Misalignment
PMC seems to have gotten very misaligned over the last 12 hours. I'm going in to align now. |
Attachment 1: Screenshot_2022-12-15_08-37-16.png
17471 | Thu Feb 16 23:54:11 2023 | Alex | Update | Daily Progress | Yaw and Pitch Calibration constants for ETMY op-lev
This work was done by Anchal and me.
To recalibrate the op-lev for ETMY, a python script was first written to calculate the change in distance in x or y that the photodiode array sees when the mirror incurs a change in yaw or pitch. The python script approximates d by integrating, using a Riemann sum, the area under a Gaussian curve, given by I(r) = I0 exp(-2r²/w(z)²), where r is the radial position and w(z) is the waist (radius) of the Gaussian beam, i.e. the radius at which the intensity falls to 1/e² of its maximum. The distance d is measured from the center of the Gaussian to the point at which the normalized area under the curve equals the fraction of the beam profile falling on one half of the circular photodiode array.
 
Above, the Gaussian is related to the translation of the beam profile on the photodiode: the area calculated under the curve of the Gaussian is equivalent to the ratio of the beam power falling in two adjacent quadrants of the photodiode array.
The Gaussian is directly related to the waist size of the laser beam profile, so a beam profiler was used to measure the waist size, averaged over 100 takes. Due to the thickness of the beam profiler, we were unable to get a direct measurement of the beam size at the exact location of the photodiode. Instead, we took two separate measurements, moving the profiler 1 inch further from the photodiode, and back-calculated the beam size at the photodiode, assuming that at this distance from the source the beam's width expands linearly. This provided a 2×waist size of 1625 ± 40 um.

The image above displays the laser beam profiler used to approximate the waist size of the op-lev laser.
The calculated translation of the beam profile, d, can then be used to determine the angle, theta, by which the mirror has moved to create this offset. The relation between distance and angle is theta = d/(2R), where R is the length from the mirror surface to the photodiode. R was measured by hand over the optics table and estimated to the best of our ability using the AutoCAD drawings of ETMY. This gave an R of 1.76 ± 0.02 meters.
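The actual script isn't attached, so here is a minimal sketch of the calculation as described (the 55/45 power split is just an example input):

import numpy as np

w = 1625e-6 / 2      # beam radius at the photodiode [m], from the 2*waist = 1625 um measurement
R = 1.76             # lever arm from the mirror surface to the photodiode [m]

def half_power_fraction(d, w, n=4001):
    # fraction of the total power landing on the x > 0 half of the photodiode when the
    # beam centroid sits at x = d (1-D Riemann sum; the y integral factors out)
    x = np.linspace(-5 * w, 5 * w, n)
    I = np.exp(-2 * (x - d)**2 / w**2)    # 1/e^2-radius convention
    return I[x > 0].sum() / I.sum()

split = 0.55          # example measured power split between the two halves
ds = np.linspace(0, w, 2000)
d = ds[np.argmin([abs(half_power_fraction(di, w) - split) for di in ds])]
theta_urad = d / (2 * R) * 1e6            # theta = d/(2R), small-angle, in urad
print("d = %.1f um  ->  theta = %.2f urad" % (d * 1e6, theta_urad))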

The image above shows the current system in place for converting the photodiode counts into microradians. The calibration constant is implemented at the last green filter boxes for pitch and yaw.
Lastly, to calculate the calibration constants, a series of tests was run on the suspended ETMY mirror. First, a time-averaged value of the photodiode counts was taken with the mirror locked in place. Next, pitch and yaw were each adjusted by 10 counts, and the photodiode outputs recorded. This was repeated, moving the mirror by 50 counts in pitch and yaw (separately). The difference of the calculated theta values over the difference in pitch or yaw counts gave the following calibration constants:
Pitch moved +10 counts: 131 ± 5 cts/urad
Pitch moved +50 counts: 155 ± 5 cts/urad
Yaw moved +10 counts: 237 ± 5 cts/urad
Yaw moved +50 counts: 241 ± 5 cts/urad
Given our results, we believe the values found for the 50-count translation are the best approximation of the calibration constant, since that movement is more significant than the change seen when adjusting yaw or pitch by only 10 counts.
Next steps will be to update the values in the controls system and improve the python script to be more autonomous rather than a step-by-step calculation.
|
12692 | Fri Dec 30 10:27:46 2016 | rana | Update | DetChar | summary pages dead again
Dead again. No outputs for the past month. We really need a cron job to check this out rather than wait for someone to look at the web page. |
12693 | Thu Jan 5 21:43:16 2017 | rana | Update | DetChar | summary pages dead again
Max tells us that some conf files were bad and that he did something, and now some pages are being made. But the PEM and MEDM pages are blank. Also, the ASC tab looks bogus to me.
12910 | Mon Mar 27 20:29:05 2017 | rana | Summary | DetChar | Summary pages broken again
Going to the summary pages and looking at 'Today' seems to break it and crash the browser. Other tabs are OK, but 'summary' is our default page.
I've noticed this happening for a couple of days now. Today, I moved the .ini files which define the config for the pages from the old chans/ location into the /users/public_html/detcharsummary/ConfigFiles/ dir. Somehow, we should be maintaining version control of detcharsummary, but I think right now it's loose and free. |
13834 | Fri May 11 18:17:07 2018 | gautam | Update | DetChar | AUX laser PLL setup
[koji, gautam]
As discussed at the meeting earlier this week, we will use some old *MOPA* channels for interfacing with the PLL system Jon is setting up. He is going to put a sketch+photos up here shortly, but in the meantime, Koji helped me identify a channel that can be used to tune the temperature of the Lightwave NPRO crystal via the front panel BNC input. It is C1:PSL-126MOPA_126CURADJ, and is configured to output between +/-10V, which is exactly what the controller can accept. The conversion factor from EPICS value to volts is currently set to 1 (i.e. an EPICS value of +1 corresponds to +1V output from the DAC). With the help of the wiring diagram, we identified pins 3 and 4 on cross-connect #J7 as the differential outputs corresponding to this channel. Not sure if we need to also set up a TTL channel for servo ENABLE/DISABLE, but if so, the wiring diagram should help us identify this as well.
The cable from the DAC to the cross-connect was wrongly labelled. I fixed this now. |
15309 | Wed Apr 22 13:52:05 2020 | gautam | Update | DetChar | Summary page revival
COVID-19 motivated me to revive the summary pages. With Alex Urban's help, the infrastructure was modernized and the wiki is now up to date. I ran a test job for 2020 March 17th, just for the IOO tab, and it works, see here. The LDAS rsync of our frames is still catching up, so once that is done, we can start the old condor jobs and have these updated on a more regular basis. |
15370 | Wed Jun 3 11:20:19 2020 | gautam | Update | DetChar | Summary pages
Summary:
The 40m summary pages have been revived. I've not had to make any manual interventions in the last 5 days, so things seem somewhat stable, but I'm sure there will need to be multiple tweaks made. The primary use of the pages right now are for vacuum, seismic and PSL diagnostics.
Resources:
Caveats:
- Intermittent failures of cron jobs
- The status page relies on the condor_q command executing successfully on the cluster end. I have seen this fail a few times, so the status page may say the pages are dead whereas they're actually still running.
- Similarly, the rsync of the pages to nodus where they're hosted can sometimes fail.
- Usually, these problems are fixed on the next iteration of the respective cron jobs, so check back in ~half hour.
- I haven't really looked into it in detail, but I think our existing C1:IFO-STATE word format is not compatible with what gwsumm wants - I think it expects single bits that are either 0 or 1 to indicate a particular state (e.g. MC locked, POX and POY locked, etc.). So if we want to take advantage of that infrastructure, we may need to add a few soft EPICS channels that take care of some logic checking (several such bits could also be AND-ed together) and then assume either a 0 or 1 value; a sketch of such a bit-derived flag follows this list. Then we can have the nice duty cycle plots for the IMC (for example).
- I commented out the obsolete channels (e.g. PEM MIC channels). We can add them back later if we so desire.
- For some reason, the jobs that download the data are pretty memory-heavy: I have to request machines on the cluster with >100 GB (yes, 💯 GB!) of memory for the jobs to run to completion. The frame data certainly isn't so large, so I wonder what's going on here - is GWPy/GWsumm so heavy? The site summary pages run on a dedicated cluster, so probably the code isn't built for efficiency...
- Weather tab in PEM is still in there but none of those channels mean anything right now.
- The MEDM screenshot stuff is commented out for now too. This should be redone in a better way with some modern screen grab utilities, I'm sure there are plenty of python based ones.
- There seems to be a problem with the condor .dag lockfile / rescue file not being cleared correctly sometimes - I am looking into this.
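A sketch of the kind of logic-checking channel mentioned in the caveat above: derive a 0/1 flag by masking one bit out of the C1:IFO-STATE word and writing it to a soft channel (both the bit position and the soft channel name are made up for illustration):

import time
import epics

IMC_LOCKED_BIT = 1                            # assumed bit position in C1:IFO-STATE
SOFT_CHANNEL = 'C1:IFO-STATE_IMC_LOCKED'      # hypothetical soft EPICS channel

while True:
    word = int(epics.caget('C1:IFO-STATE'))
    flag = (word >> IMC_LOCKED_BIT) & 1
    epics.caput(SOFT_CHANNEL, flag)           # gwsumm can then treat this as a 0/1 state flag
    time.sleep(1)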
|
15696 | Wed Dec 2 18:35:31 2020 | gautam | Update | DetChar | Summary page revival
The summary pages were in a sad state of disrepair - the daily jobs haven't been running for > 1 month. I only noticed today because Jordan wanted to look at some vacuum trends and I thought summary pages is nice for long term lookback. I rebooted it just now, seems to be running. @Tega, maybe you want to set up some kind of scripted health check that also sends an alert. |