ID | Date | Author | Type | Category | Subject |
784 | Sat Aug 2 16:05:38 2008 | rana | Configuration | Computer Scripts / Programs | mDV update |
I did an svn update on our mDV directory. Justin has improved it so that the NDS client binaries
are included for solaris, mac, linux32, and linux64. Now you can just use this version without
having to worry about any path definitions. |
988 | Wed Sep 24 19:13:06 2008 | rana | Configuration | Computer Scripts / Programs | updatedb & locate: megatron & rosalba |
I ran updatedb as root today on megatron and rosalba just before the meeting.
It finished at ~14:10 on both machines so that's ~20 minutes total.
The default updatedb.conf for these guys also seems to be set up right, so it is indexing the NFS mount (/cvs/cds/), which is good. Next, someone needs to
add the updatedb command to the daily cron for each of these guys (5 AM) and
add this to the wiki page on how we set up new computers.
I also found that the root passwd on Megatron was different from all of the other
machines, indicating that perhaps megatron was trying to free himself. I have put
down that rebellion viciously:D and he's now toeing the line. |
1056 | Fri Oct 17 21:41:09 2008 | Yoichi | Update | Computer Scripts / Programs | burtwb missing on Solaris but installed on linux64 |
c1lsc stalled this evening, so I power-cycled it.
After that, I tried to lock the arms to confirm that the computer was working.
Then I realized that the restore-alignment buttons do not work from any control room computer.
I found that it was because the burtwb command was missing. For Solaris, it looks like there used to be /cvs/cds/epics/extensions/burtwb, but now
there is no /cvs/cds/epics directory. I thought there were directories other than "caltech" in /cvs/cds/, weren't there?
Right now, there is only /cvs/cds/caltech.
Anyway, I installed burt for the 64-bit Linux computers (under /cvs/cds/caltech/apps/linux64/epics/extensions/).
At the moment, alignment save/restore works on allegra (and probably on rosalba), but not yet on op440m. |
1058 | Mon Oct 20 12:18:38 2008 | Alan | Update | Computer Scripts / Programs | burtwb missing on Solaris but installed on linux64 |
Quote: | c1lsc stalled this evening, so I powercycled it.
After that, I tried to lock arms to confirm the computer is working.
Then I realized that the restore alignment buttons do not work from any control room computer.
I found that it was because burtwb command was missing. For Solaris, looks like there used to be /cvs/cds/epics/extensions/burtwb but now
there is no /cvs/cds/epics directory. I thought there were directories other than "caltech" in /cvs/cds/, weren't there ?
Right now, there is only /cvs/cds/caltech.
Anyway, I installed burt for 64bit linux computer (under /cvs/cds/caltech/apps/linux64/epics/extensions/).
At this moment the alignment save/restore works on allegra (and probably on rosalba), but not on op440m yet. |
The automatic backup of /cvs/cds (and /frames/minute-trends) to the LIGO archive in Powell-Booth, which runs from fb40m using the scripts in /cvs/cds/caltech/scripts/backup, stopped when fb40m was rebooted on June 28, 2008, and the check_backup script I run to send an email when this happens also failed due to a scripting error. But we still have a backup of /cvs/cds from June 27.
The backup of /cvs/cds (excluding /cvs/cds/caltech and /cvs/cds/tmp) circa June 27, 2008 has been restored to /cvs/cds/recover_20081020. Please check to see that it has what we need.
Before moving it over to where it belongs, it would be really nice to figure out what happened...
Meanwhile, I have fixed the check_backup script and restarted the backup, which will run this evening... but maybe I should wait till the dust settles?
Now is also a good time to think about whether there is anything else besides /cvs/cds and /frames/minute-trends that should be backed up regularly.
- Alan |
1075 | Thu Oct 23 18:45:18 2008 | Alberto | Omnistructure | Computer Scripts / Programs | Python code for GPIB devices developed for the Absl length experiment |
I wrote two Python scripts for my measurement that can also be used or imitated by others: sweepfrequency.py and HP8590.py. The first is the one that we run with a Python interpreter (just typing "python <script name> <parameters>" from the terminal). It manages the parameters that we have to pass it for the measurement and calls the second one, HP8590.py, which actually does most of the job.
Here is what it does. It scans the frequency of the Marconi and, for each step, searches for the highest peak in the spectrum analyzer (which is centered 50 kHz around the frequency of the Marconi). It then associates the amplitude of the peak with the frequency of the Marconi and writes the two numbers in two columns of a file.
The file name, the GPIB-to-LAN interface IP address, the frequency range, the frequency step size, and the number of measurements we want it to average for each step are all set by the parameters when we call sweepfrequency.py.
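For orientation only, here is a minimal sketch of the sweep logic described above. This is not the actual sweepfrequency.py/HP8590.py code; the helper functions are placeholders standing in for the real GPIB calls made through the GPIB-to-LAN interface.

# Minimal sketch (not the real scripts): sweep the source and record the SA peak.
def set_marconi_freq(freq_hz):
    """Placeholder: would send the frequency-set command to the Marconi."""
    raise NotImplementedError

def read_peak_amplitude(n_avg):
    """Placeholder: would do a peak search on the HP8590 and average n_avg readings."""
    raise NotImplementedError

def sweep(fstart_hz, fstop_hz, fstep_hz, n_avg, outfile):
    """Step the Marconi frequency and write (frequency, peak amplitude) pairs to a file."""
    with open(outfile, "w") as out:
        f = fstart_hz
        while f <= fstop_hz:
            set_marconi_freq(f)
            amp = read_peak_amplitude(n_avg)
            out.write("%.3f %.6e\n" % (f, amp))  # two columns: frequency, peak amplitude
            f += fstep_hz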
More details are in the help of the function, or just look at the header of the code.
I guess that one can perform other similar measurements with just small changes to the code, so I think it could turn out to be useful to others. |
1091 | Sun Oct 26 21:02:18 2008 | rana | Update | Computer Scripts / Programs | SVN medm problem |
As we've seen a few times in the past, there's something wrong with the files in the trunk/medm area.
I get the following error message when doing a fresh checkout:
A    c1/lsc/help/C1LSC_LA_SET.txt
svn: In directory 'c1/lsc/help'
svn: Can't copy 'c1/lsc/help/.svn/tmp/text-base/C1LSC_RFadjust.txt.svn-base' to 'c1/lsc/help/.svn/tmp/C1LSC_RFadjust.txt.tmp.tmp': No such file or directory
It looks like there are some .svn files which have been checked in as if they're some kind of source code instead of just maintenance files.
We probably have to go through and clean this out and then remove these excess files somehow. |
1092 | Mon Oct 27 10:02:16 2008 | Yoichi | Update | Computer Scripts / Programs | SVN medm problem |
I tried to check out the medm directory both from my laptop and from nodus.
I did not get the error.
Have you already fixed it? Or maybe it has to do with the version of svn used for the checkout?
Quote: | As we've seen in the past a few times, there's something wrong with the files in the trunk/medm area.
I get the following error message when doing a fresh checkout:A c1/lsc/help/C1LSC_LA_SET.txt
svn: In directory 'c1/lsc/help'
svn: Can't copy 'c1/lsc/help/.svn/tmp/text-base/C1LSC_RFadjust.txt.svn-base' to 'c1/lsc/help/.svn/tmp/C1LSC_RFadjust.txt.tmp.tmp': No such file or directory It looks like that there are some .svn files which have been checked in as if they're some kind of source code instead of just maintenance files.
We probably have to go through and clean this out and then remove these excess files somehow. |
|
1213 | Fri Jan 2 17:20:44 2009 | rana | Summary | Computer Scripts / Programs | 40m GWINC |
I have made a '40m' directory in the iscmodeling CVS tree which allows one to run a 40m version of GWINC. As does the previous one, it takes the default advLIGO config file and modifies some of the struct parameters to make it appropriate for the 40m.
To make it run, I have added susp1.m to the GWINC directory. This calculates suspension thermal noise using the Gonzalez-Saulson method that was later extended to mirrors by Y. Levin. This is also the code used in the LIGO Noise Budget at the sites.
The previous code was giving a much larger value for the thermal noise (probably because I didn't understand how to use it right). It was based on a SURF report from '99.
Since we will have a mixture of MOSs and SOSs in the arms, I have just used SOSs in the model. So the suspension thermal noise is overestimated by ~sqrt(2) (and realistically it's uncertain by a much larger factor).
Since the new code now uses GWINC, the mirror and coating thermal noise are now more correct and use the coherent thermo-optic noise picture.
The 2-page PDF file shows the noise for 0 deg and 90 deg tuning of the SRC.
Although it looks, from this plot, like we could measure coating thermal noise at the 40m, in reality we would have to fix all of the technical noise sources first. Just the coil driver noise is probably at the level of 3 x 10^-17 m/rtHz at 100 Hz. |
1215 | Sun Jan 4 13:17:23 2009 | Alan | Omnistructure | Computer Scripts / Programs | New 40mWebStatus |
I have set up some code in /cvs/cds/caltech/scripts/webStatus, along with a cronjob on controls@nodus, to generate a webStatus every half hour, at 40mWebStatus.
You are welcome to add/delete lines corresponding to interesting EPICS channels in the template /cvs/cds/caltech/scripts/webStatus/webStatus_template.html. The 2nd number is the "golden" value of the EPICS channel; it can be edited by hand, or one could copy a "golden" webStatus.html to webStatus_template.html. I think it's probably premature to automate this...
I noticed that Yoichi also has a cron job posting 40m medm screen snapshots. Very nice.
controls@nodus also runs a third cronjob, which checks whether the nightly backup has failed and, if so, sends an email to me.
I guess we need some kind of "official" crontab file for controls@nodus so that we know how/where to add things. So I put one in /cvs/cds/caltech/crontab/controls@nodus.crontab |
1216 | Mon Jan 5 11:21:05 2009 | Alan | Omnistructure | Computer Scripts / Programs | New 40mWebStatus |
Quote: |
I guess we need some kind of "official" crontab file for controls@nodus so that we know how/where to add things. So, I put one in /cvs/cds/caltech/crontab/controls@nodus.crontab |
Alan and I agreed that we should edit the crontab by "crontab -e" command rather than editing the "official" crontab in /cvs/cds/caltech/crontab/.
After confirming that the new crontab works as expected, you are encouraged to make a copy of the new crontab into /cvs/cds/caltech/crontab/ as a backup.
Then do "svn ci" in the directory. |
1328 | Fri Feb 20 01:54:18 2009 | Kakeru | Update | Computer Scripts / Programs | tdsdata might have a bug |
I found a strange jump in the values of data taken with tdsdata.
I couldn't find the same jump in a playback with DataViewer, so I think this is a problem with tdsdata.
Be careful when you use tdsdata!
The attached file is an example of the jumped data.
I tried to get data with allegra and op440m, and both had the same kind of jump.
(A downsampling or interpolation step may be wrong.)
Rana said there is a fixed version of tdsdata on some PC, but 64-bit Linux may not have it.
I'll try it tomorrow. |
1354 | Wed Mar 4 12:38:07 2009 | Alberto | Update | Computer Scripts / Programs | 3f locking simulations |
I simulated the REFL signals demodulated at the difference frequency of the sidebands (f2-f1) and at their sum frequency (f2+f1). I also simulated their combination as in the double demodulation.
I repeated the simulation for:
- Old (current) 40m
- 40m Upgrade
- AdvLIGO
I'm attaching the results to this elog entry.
The plots show how the signal varies while exploring the two-dimensional space of the demodulation frequencies (difference and sum).
Both the Upgrade and the Old (current) 40m signals look anomalous, since the zero-crossing point does not change with the demodulation phases.
I suspect there is a problem with the optickle model of the 40m. |
1371 | Sun Mar 8 23:14:52 2009 | rana | Update | Computer Scripts / Programs | tdsdata doesn't work |
Matt logged in and rebuilt the TDS stuff for us on Mafalda in /cvs/cds/caltech/apps/linux/tds_090304.
He says that he can't build his stuff on 64-bit because there's not a sanctioned 64-bit build of GDS yet.
This should have all the latest fixes in it. I tried using both the old and new code from allegra and they both are fine:
./tdsdata 16384 2 C1:IOO-MC_F > /users/rana/test.txt
I loaded the data I got with the above command and there were no data dropouts. Possibly the dropout problem is only
associated with testpoints and so we have to wait for the TP fix. |
1375 | Mon Mar 9 14:57:30 2009 | Kakeru | Update | Computer Scripts / Programs | tdsdata doesn't work |
I tested the new tdsdata and found it was working well.
I excited C1:SUS-ITMY_SUSPIT_EXC with tdssine, and got data from C1:LSC-TRY_OUT (a testpoint) and C1:SUS-ITMY_OPLEV_PERROR (a recorded channel) with both the new and the old tdsdata.
With the old tdsdata (/cvs/cds/caltech/apps/linux/tds/bin/tdsdata), I found some jumps in the data points, which is the same problem as before (Attachment 1).
With the new tdsdata (/cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata), there appear to be no jumps (Attachment 2; taken about 10 minutes after Attachment 1).
The problem with the old tdsdata appears to remain even for recorded channels.
You should use /cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata.
Quote: |
Matt logged in and rebuilt the TDS stuff for us on Mafalda in /cvs/cds/caltech/apps/linux/tds_090304.
He says that he can't build his stuff on 64-bit because there's not a sanctioned 64-bit build of GDS yet.
This should have all the latest fixes in it. I tried using both the old and new code from allegra and they both are fine:
./tdsdata 16384 2 C1:IOO-MC_F > /users/rana/test.txt
I loaded the data I got with the above command and there were no data dropouts. Possibly the dropout problem is only
associated with testpoints and so we have to wait for the TP fix.
|
|
1376 | Mon Mar 9 16:54:52 2009 | Kakeru, Rana | Update | Computer Scripts / Programs | tdsdata doesn't work |
We confirmed that the new tds (/cvs/cds/caltech/apps/linux/tds_090304/) works well on linux64, and moved it to /cvs/cds/caltech/apps/linux/tds/.
The old /cvs/cds/caltech/apps/linux/tds is now at /cvs/cds/caltech/apps/linux/tds.bak |
1380 | Mon Mar 9 23:13:22 2009 | Yoichi | Update | Computer Scripts / Programs | tdsdata doesn't work |
Quote: |
We confirmed that new tds(/cvs/cds/caltech/apps/linux/tds_090304/) works well on linux 64, and replaced it to /cvs/cds/caltech/apps/linux/tds/
The old /cvs/cds/caltech/apps/linux/tds is put in /cvs/cds/caltech/apps/linux/tds.bak
|
The tdscntr.pl in the new tds was probably the one from LLO, which is actually the version I sent to Tobin. It had paths and channel names defined for the LLO. So I copied back my original 40m version. |
1384 | Wed Mar 11 04:33:57 2009 | rana | Configuration | Computer Scripts / Programs | wild ndsproxy tclshexe |
The ndsproxy tcl task on nodus was eating up all the CPU and making the elog slow. I killed it and restarted it.
It looks like it hasn't been making a log file since January. Someone who has some skill in decoding the cryptic csh stdout-redirection syntax should look into this (it's in target/ndsproxy/). |
1417 | Sun Mar 22 23:16:41 2009 | rana | DAQ | Computer Scripts / Programs | tpman restart |
Could get testpoints but couldn't start excitations. Restarted tpman on daqawg. Works now.
Still no log file.  |
1482 | Tue Apr 14 17:20:36 2009 | Yoichi | Update | Computer Scripts / Programs | Parallel Optickle |
I wrote a parallel version of the tickle() function for Optickle.
The attached ptickle.m, which provides the ptickle() command, can be used as a drop-in replacement for the tickle() command.
Just download it and place it in the @Optickle directory.
This command will run multiple instances of Matlab to calculate the frequency responses in parallel.
The speed gain is roughly proportional to the number of CPU cores you use.
To use multiple cores, you have to run the matlabpool() command first. See the comment at the beginning of ptickle.m for more detail.
The progress bar is disabled at the moment because it is not clear to me how to make a single progress bar from multiple instances of Matlab.
I sent the code to Matt, so this may be included in the next release of Optickle. |
1492 | Thu Apr 16 17:48:00 2009 | Yoichi | Configuration | Computer Scripts / Programs | AutoDTT |
As Peter mentioned in his entry on last night's locking, I imported AutoDTT from Hanford.
It resides in /cvs/cds/caltech/scripts/AutoDTT/.
The main script is restoreRunSave, which takes three arguments: template_file_name, result_file_name, and log_file_name.
This script opens a template xml file and executes it, then saves the result in the result file.
You can open the result file in a normal DTT.
You can call restoreRunSave from watch scripts, such as c1_watch_dr_bang.
watchLockLoss is a standalone watch script that detects a lock loss and calls restoreRunSave.
It runs on both Linux and Solaris. However, on Linux, diag fails 50% of the time with some glibc error.
So it is probably better to run it on op440m. |
1494 | Fri Apr 17 11:37:32 2009 | Yoichi | Configuration | Computer Scripts / Programs | AutoDTT |
In order to get test point data with AutoDTT, you have to pre-trigger the test points you want to use.
This is done by starting a DTT measurement with the necessary test points for a few seconds, then stopping it but keeping DTT open.
I made a prepTP script which does this job.
It takes the file name of an XML file, which should contain a DTT measurement setup with the test point channels you want to open and the trigger time set to "now".
The script will open an xterm and run diag with the XML file. Unlike the restoreRunSave script, it does not save the result nor quit diag. Therefore, you can keep the test points as long as you keep the xterm open. You can manually exit diag (Ctrl-D) when you no longer need the test points.
The watchLockLoss script now calls prepTP at the beginning. Therefore, you have to be able to open an xterm. If you run the script through SSH, make sure that you give the -X option to ssh. |
1510 | Thu Apr 23 16:35:23 2009 | Yoichi | Summary | Computer Scripts / Programs | restoreWatchdog script |
When the IFO loses lock during the lock acquisition steps, it often kicks the MC2 (through the CM servo) and trips the watchdog.
I wrote a script to restore the tripped watchdog (/cvs/cds/caltech/scripts/SUS/restoreWatchdog).
The script takes the name of a mirror (such as MC2) as an argument.
It will enable the coils and temporarily increase the watchdog threshold to a value higher than the current OSEM RMS signals.
Then it will bring the watchdog back to the normal state and wait for the mirror to be damped. After the mirror is damped enough, the
watchdog threshold will be restored to the original value.
The script will do nothing if the watchdog is not tripped.
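For illustration only, here is a rough sketch of that restore sequence written with pyepics-style caget/caput calls. This is a guess at the logic, not the actual script, and the channel names and the 1.5x threshold factor are hypothetical placeholders.
# Hypothetical sketch of the restoreWatchdog logic (not the actual script).
# All channel names below are illustrative placeholders.
import time
from epics import caget, caput

def restore_watchdog(mirror):                      # e.g. restore_watchdog('MC2')
    if not caget('C1:SUS-%s_WD_TRIPPED' % mirror): # placeholder "tripped?" readback
        return                                     # do nothing if the watchdog is not tripped
    rms = caget('C1:SUS-%s_OSEM_RMS' % mirror)     # placeholder OSEM RMS readback
    thresh_chan = 'C1:SUS-%s_WD_MAX' % mirror      # placeholder threshold channel
    nominal = caget(thresh_chan)
    caput('C1:SUS-%s_COIL_ENABLE' % mirror, 1)     # re-enable the coils
    caput(thresh_chan, 1.5 * rms)                  # raise the threshold above the current RMS
    caput('C1:SUS-%s_WD_RESET' % mirror, 1)        # bring the watchdog back to the normal state
    while caget('C1:SUS-%s_OSEM_RMS' % mirror) > nominal:
        time.sleep(10)                             # wait for the mirror to damp down
    caput(thresh_chan, nominal)                    # restore the original threshold value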
I put this script in the drdown_bang so that the MC2 watchdog will be automatically restored when a lock loss kicks out the MC2. |
1562 | Fri May 8 04:31:35 2009 | rana | Update | Computer Scripts / Programs | elog and NDS |
In the middle of searching through the elog, it stopped responding. So I followed the Wiki instructions
and restarted it (BTW, don't use the start-elog-nodus script that's in that directory). Seems OK now,
but I am suspicious of how it sometimes does the PDF preview correctly and sometimes not. I found a
'gs' process on there running and taking up > 85% of the CPU.
I also got an email from Chris Wipf at MIT to try out this trick from LASTI to maybe fix the
problems I've been having with the DMF processes failing after a couple hours. I had compiled but
not tested the stuff a couple weeks ago.
Today after it failed, I tried running other stuff in matlab and got some "too many files open" error messages.
So I have now copied the 32-bit linux NDS mex files into the mDV/nds_mexs/ directory. Restarted the
seisBLRMS.m about an hour ago. |
1567 | Fri May 8 16:29:53 2009 | rana | Update | Computer Scripts / Programs | elog and NDS |
Looks like the new NDS client worked. Attached is 12 hours of BLRMS. |
1619 | Fri May 22 00:43:24 2009 | rob | Configuration | Computer Scripts / Programs | IFO configure scripts for XARM and YARM |
I edited the configure scripts (those called from the C1IFO_CONFIGURE screen) for restoring the XARM and YARM. These used to misalign the ITM of the unused arm, which is totally unnecessary here, as we have both POX and POY. They also used to turn off the drive to the unused ETM. I've commented out these lines, so now running the two restores in series will leave a state where both arms can be locked. This also means that the ITMs will never be deliberately misaligned by the restore scripts. |
1637 | Mon Jun 1 14:33:42 2009 | rob | Configuration | Computer Scripts / Programs | op540m Monitor added to web status |
I added op540m's display 0 (the northern-most monitor in the control room) to the MEDM screens webpage: https://nodus.ligo.caltech.edu:30889/medm/screenshot.html
Now we can see the StripTool displays that are usually parked on that screen.
|
1670 | Fri Jun 12 02:01:03 2009 | rob | Update | Computer Scripts / Programs | DRM matrix diagonalization |
I started two scripts, senseDRM and loadDRMImatrixData.m, which Peter will bang on until they're correct. They're in the $SCRIPTS/LSC directory. The first is a perl script which uses TDS tools to drive the DRM optics and measure the response at the double demod photo-detectors, and write these results to a series of files loadable by matlab. The second loads the output from the first script, inverts the resulting sensing matrix to get an input matrix, and spits out a tdswrite command which can be copied and pasted into a terminal to load the new input matrix values.
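As a rough illustration of the matrix-inversion step discussed in the next paragraph (not the actual loadDRMImatrixData.m logic, which also has to fold in the output matrix and the loop gains), here is a minimal numpy sketch that turns a measured sensing matrix into an input matrix via the pseudo-inverse:
# Minimal sketch: invert a measured sensing matrix to get an input matrix.
# The numbers are made up; the real script loads them from the files written by senseDRM.
import numpy as np

# rows: sensors (double-demod PDs), columns: driven DRM degrees of freedom
sensing = np.array([[1.00, 0.10, 0.02],
                    [0.05, 0.80, 0.15],
                    [0.01, 0.20, 0.60]])

# the pseudo-inverse is safer than inv() if the matrix is non-square or ill-conditioned
input_matrix = np.linalg.pinv(sensing)

# sanity check: input_matrix @ sensing should be close to the identity
print(np.round(input_matrix @ sensing, 3))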
What's left is mainly figuring out how to do the matrix inversion properly. Right now the script does not account for the output matrix, the gains in the feedback filters at the measurement frequency, or the fact that we'll likely want the UGF of our loops to be less than the measurement frequency. Peter's going to hash out these details. |
1718 | Tue Jul 7 16:06:59 2009 | Clara | Update | Computer Scripts / Programs | DTT synchronization errors, help would be appreciated |
I am attempting to use the DTT program to look at the coherence of the individual accelerometer signals with the MC_L signal. Rana suggested that I might break up the XYZ configuration, so I wanted to see how the coherence changed when I moved things around over the past couple of weeks, but I keep getting a synchronization error every time I try to set the start time to more than about 3 days ago. I tried restarting the program and checking the "reconnect" option in the "Input" tab, neither of which made any kind of difference. I can access this data with no problem from Data Viewer and the Matlab scripts, so I'm not really sure what is happening. Help?
EDIT: Problem solved - Full data was not stored for the time I needed to access it for DTT. |
1940 | Tue Aug 25 02:37:53 2009 | rana | Configuration | Computer Scripts / Programs | Firefox 3.5 installed for 64 bit linux in apps/ |
|
2634 | Tue Feb 23 16:42:02 2010 | rana | Configuration | Computer Scripts / Programs | SVN restarted on NODUS |
I ran the start Apache script as described by Yoichi in the Wiki. SVN is back up. |
2775 | Tue Apr 6 11:27:11 2010 | Alberto | Update | Computer Scripts / Programs | Data formats in the Agilent AG4395a Spectrum Analyzer |
Lately I've been trying to sort out the discrepancy that I noticed between the values read on the spectrum analyzer's display and what we get through the GPIB interface.
It turns out that the discrepancy originates from the two data vectors that the display and the GPIB interface acquire. Whereas the display shows data in "RAW" format, the GPIB interface, because of the way the netgpibdata script is written, acquires the so-called "error-corrected data". That is, the GPIB-downloaded data is post-processed and corrected for some internal calibration factors of the instrument.
Another problem that I noticed in the GPIB-downloaded data when measuring noise spectra is an unwanted factor of 2 in the amplitude spectral density.
For example, measuring the amplitude spectral density of the FSS RF PD's dark noise at its resonant frequency (~21.5 MHz), I would expect ~15 nV/rtHz from the thermal noise, as Rana pointed out in elog entry 2759. However, the spectrum analyzer reads 30 nV/rtHz, in both the display and the GPIB-downloaded data, except for the above-mentioned little discrepancy between the two. (The discrepancy is about 0.5 dBm/Hz in the power spectral density.)
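For reference (my note, not part of the original entry), the usual way to compare the analyzer's spectrum trace with an amplitude spectral density, assuming the 50 Ohm input, is

$$ P[\mathrm{W}] = 10^{(P[\mathrm{dBm}]-30)/10}, \qquad S_V = \sqrt{\frac{P[\mathrm{W}]}{\mathrm{RBW}}\cdot 50\,\Omega} \quad \left[\mathrm{V}/\sqrt{\mathrm{Hz}}\right], $$

where P is the power measured in the resolution bandwidth RBW. Note that a factor of 2 in S_V corresponds to 20*log10(2) ~ 6 dB in power, so the ~0.5 dB display/GPIB discrepancy is a separate, much smaller effect.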
My measurement, as I showed in elog entry 2760, is ~15 nV/rtHz, but only because I divided by 2. Now I realize that that division was unjustified.
I'm trying to figure out the reason for that. For now, I'm not sure we can trust the netgpib package for spectrum measurements with the AG4395. |
2776 | Tue Apr 6 16:55:28 2010 | Alberto | Update | Computer Scripts / Programs | Data formats in the Agilent AG4395a Spectrum Analyzer |
Quote: |
Lately I've been trying to sort out the problem of the discrepancy that I noticed between the values read on the spectrum analyzer's display and what we get with the GPIB interface.
It turns out that the discrepancy originates from the two data vector that the display and the GPIB interface acquire. Whereas the display shows data in "RAW" format, the GPIB interface, for the way the netgpibdata script is written, acquires the so called "error-corrected data". That is the GPIB downloaded data is postprocessed and corrected for some internal calibration factors of the instrument.
Another problem that I noticed in the GPIB downloaded data when I was measuring noise spectrum, is an unwanted factor of 2 in the amplitude spectral density.
For example, measuring the amplitude spectral density of the FSS RF PD's dark noise at its resonant frequency (~21.5 MHz), I would expect ~15nV/rtHz from the thermal noise - as Rana pointed out in the elog entry 2759). However, the spectrum analyzer reads 30nV/rtHz, in both the display and the GPIB downloaded data, except for the above mentioned little discrepancy between the two. (The discrepancy is about 0.5dBm/Hz in the power spectrum density).
My measurement, as I showed it in the elog entry 2760) is of ~15nV/rtHz, but only becasue I divided by 2. Now I realize that that division was unjustified.
I'm trying to figure out the reason for that. By now I'm not sure we can trust the netgpib package for spectrum measurements with the AG4395.
|
I noticed that someone, who wasn't me, has edited the wiki page about netgpibdata under my name, saying:
" [...]
* A4395 Spectrum Units
Independetly by which unites are displayed by the A4395 spectrum analyzer on the screen, the data is saved in Watts/rtHz"
That is not correct. The spectrum is just in Watts, since it gives the power over the bandwidth. The corresponding power spectral density is shown under the "Noise" measurement format and is in Watts/Hz.
Watts/rtHz is not a correct unit. |
2792 | Mon Apr 12 17:48:32 2010 | Aidan | Update | Computer Scripts / Programs | elog restarted |
The elog crashed when I was uploading a photo just now. I logged into nodus and restarted it. |
3125 | Sat Jun 26 21:13:19 2010 | rana | Summary | Computer Scripts / Programs | COMSOL 4.0 Installation |
I've installed COMSOL 4.0 for 32/64 bit Linux in /cvs/cds/caltech/apps/linux64/COMSOL40/
It seems to work, sort of.
Notes:
- It did NOT work according to the instructions. The CentOS automount had mounted /dev/scd0 on /media/COMSOL40. In this configuration, I was getting a permission denied error when trying to run the default setup script. I did a 'sudo umount /dev/scd0' to get rid of this bad mount and then remounted using 'sudo mount /dev/dvd /mnt'. After doing this, I ran the setup script '/mnt/setup' and got the GUI which started installing as usual.
- I also pointed it at the linux64/matlab/ installation.
- It seems to not work right on Rosalba because of my previous java episode. The x-forwarding from megatron also fails. It does work on allegra, however.
|
3296 | Tue Jul 27 11:24:53 2010 | josephb | HowTo | Computer Scripts / Programs | killdataviewer script |
I placed a script for killing all instances of the dataviewer program on the current computer in /cvs/cds/caltech/scripts/general/. It's called killdataviewer. This is intended to get rid of a bunch of zombie dataviewer processes quickly. These processes get into this bad state when the dataviewer program is closed in any way other than the graphical menu File -> Exit option.
Its contents are very simple:
#!/bin/bash
# kill every process whose command line mentions dataviewer (excluding grep itself and this script)
kill `ps -ef | grep dataviewer | grep -v grep | grep -v killdataviewer | awk '{print $2}'`
|
3345 | Sun Aug 1 21:04:45 2010 | rana | Summary | Computer Scripts / Programs | MC Autolocker fixed |
Someone had left a typo in the MC autolocker script recently while trying to set the lock threshold to 0.09. As a result, the autolocker wouldn't run.
I repaired it, made a few readability improvements, and checked the new version in to the SVN. If you make script changes, check them in. If you think it's too minor of a change for an SVN checkin, don't do it at all. |
3384 | Sat Aug 7 21:09:59 2010 | Dmass | Configuration | Computer Scripts / Programs | eLog changes |
I made some changes to the elog on nodus:
- Made a backup of /cvs/cds/caltech/elog/elog-2.7.5/elogd.cfg called elogd.cfg.bk.20100407 in the same directory
- Added a folder: /cvs/cds/caltech/elog/elog-2.7.5/logbooks/EAGER_Lab
- Restarted the elog daemon via the start-elog-nodus script in the elog-2.7.5 directory
I saw that the current version of the elog seems to be in the svn, so I tried to svn the changes from nodus via ssh, but got this message:
"svn: This client is too old to work with working copy '/cvs/cds/caltech/elog/elog-2.7.5'; please get a newer Subversion client."
I feel I should svn this but don't want to *&#@ the svn/elog up.
For now I will leave it alone and ask a question: Is the folder /cvs/cds/caltech/elog/elog-2.7.5/ under SVN control? Is it also under CVS control?
TL;DR: New tab added to elog.
|
3831 | Sun Oct 31 00:19:35 2010 | rana | Summary | Computer Scripts / Programs | HP3563A netGPIB function |
I've wheeled the old HP audio-frequency signal analyzer into the control room to debug the GPIB/python interface. The wireless setup was getting more than 80% packet loss in the office area.
I also noticed that we have multiple and competing copies of the netgpib package installed. Kiwamu is going to merge them soon. Please only use the official location:
scripts/general/netgpibdata/
which is also the SVN working copy. Commit all changes periodically so that we can share the updated versions between sites. |
4245 | Thu Feb 3 16:08:06 2011 | Aidan | Update | Computer Scripts / Programs | RCG VCO frequency error |
Joe and I were looking at the RCG VCO algorithm to determine if we could adapt it to run at a faster rate (you can currently change its frequency at 1 Hz). I noticed that the algorithm used to calculate the values of sine and cosine at time T1 is a truncated Taylor series which uses the values of sine and cosine calculated at time T1 - Delta t. I was concerned that there would be an accumulating phase error, so I tested the algorithm in MATLAB and compared it to a proper calculation of sine and cosine. It turns out that at a given 'requested' frequency there is a constantly accumulating phase error, which means that the 'actual' frequency of the RCG VCO is incorrect. So I have plotted the frequency error vs. requested VCO frequency. It gets pretty bad!
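A back-of-the-envelope estimate (mine, not from the original entry): writing the update in complex form with per-sample phase step delta = 2*pi*f*dt, each iteration advances the phase by arctan(delta/(1 - delta^2/2)) ~ delta + delta^3/6 instead of exactly delta, so for delta << 1 the frequency error should scale roughly as

$$ \Delta f \;\approx\; \frac{\delta^3/6}{2\pi\,\Delta t} \;=\; \frac{2\pi^2}{3}\, f^3\, \Delta t^2 , $$

i.e. cubic in the requested frequency (roughly 25 Hz already at f = 1 kHz for dt = 1/16384 s).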
Here's the code I used:
dt = 1/16384;
diffList = [];
% set the requested frequencies
flist = 1:5:8192;
for f = flist
    % get the 'accurate' values of sine and cosine
    tmax = 0.05;
    time1 = dt:dt:tmax;
    sineT   = sin(2.0*pi*f*time1);
    cosineT = cos(2.0*pi*f*time1);
    % phase change per sample
    dphi = f*dt*2*pi;
    cosT1 = ones(1, numel(time1));
    sinT1 = zeros(1, numel(time1));
    % use the RCG VCO algorithm to determine the values of sine and cosine
    for ii = 1:numel(time1) - 1
        cosNew = cosT1(ii)*(1 - 0.5*dphi^2) - dphi*sinT1(ii);
        sinNew = sinT1(ii)*(1 - 0.5*dphi^2) + dphi*cosT1(ii);
        cosT1(ii+1) = cosNew;
        sinT1(ii+1) = sinNew;
    end
    % extract the phase from the VCO values of sine and cosine
    phaseT  = unwrap(angle(cosineT + 1i*sineT));
    phaseT1 = unwrap(angle(cosT1   + 1i*sinT1));
    % phase error vs. time
    phaseErr = phaseT1 - phaseT;
    % frequency error = slope of the phase error (rad/s)
    slope = (phaseErr(2) - phaseErr(1))/dt;
    diffList = [diffList, slope];
    disp(f)
    pause(0.001)
end
% plot the results
close all
figure
orient landscape
loglog(flist, abs(diffList/(2.0*pi)))
xlabel('Requested VCO Frequency (Hz)')
ylabel('Frequency error (Hz)')
grid on
print('-dpdf', '/users/abrooks/VCO_error.pdf')
|
4460 | Wed Mar 30 16:32:29 2011 | Aidan | Configuration | Computer Scripts / Programs | Added a sitemap alias |
I added an alias to the sitemap MEDM screen in /cvs/cds/caltech/target/cshrc.40m
Now you can enjoy launching sitemap from a terminal.
alias sitemap 'medm -x /cvs/cds/rtcds/caltech/c1/medm/sitemap.adl'
|
4463 | Wed Mar 30 18:50:57 2011 | Koji | Configuration | Computer Scripts / Programs | Added a sitemap alias |
I thought that "m40m" was the traditional alias for the sitemap...
rossa:~>alias
...
m40m ${medm_base} ${medm_newtail} &
...
sitemap medm -x /cvs/cds/rtcds/caltech/c1/medm/sitemap.adl
rossa:~>set|grep medm
medm_base medm
medm_newtail -x /opt/rtcds/caltech/c1/medm/sitemap.adl
medm_tail -x /cvs/cds/caltech/medm/sitemap.adl
Quote: |
I added an alias to the sitemap MEDM screen in /cvs/cds/caltech/target/cshrc.40m
Now you can enjoy launching sitemap from a terminal.
alias sitemap 'medm -x /cvs/cds/rtcds/caltech/c1/medm/sitemap.adl'
|
|
4813 | Tue Jun 14 03:15:29 2011 | Koji | HowTo | Computer Scripts / Programs | Kissel Button Generator |
I have made a python script to generate the button designed by Jeff Kissel for his ISI screens.
It is currently located at:
/cvs/cds/rtcds/caltech/c1/medm/c1lsc_tst/master/KisselButtonGenerator/generate_KisselButton.py
but it should be relocated to somewhere more appropriate.
It also uses fragmented medm files named "MATRIX*.adl_parts".
# Jamie, could you suggest the right place?
The parameters are assigned at the beginning of the script.
This script prints the result to stdout, so you need to redirect the output into a file, e.g.
> ./generate_KisselButton.py > tmp.adl
The script should be modified so that it accepts command line options.
I need to learn more Python for that.
# Number of columns
mat_h = 20;
# Number of rows
mat_v = 10;
# horizontal pixel size of the rectangular display for each matrix element
button_width = 8;
# vertical pixel size of the rectangular display for each matrix element
button_height = 8;
replace_dict = {
    # Title
    '${DISPLAY_LABEL}':'ITMX_INMATRIX',
    # Path of the MEDM file to be opened by clicking the button
    '${DISPLAY_NAME}':'/cvs/cds/rtcds/caltech/c1/medm/c1sus/master/C1SUS_ITMX_INMATRIX_MASTER.adl',
    # The channel name of the matrix element
    # ($V and $H are replaced by the element numbers, i.e. "_3_4")
    '${MATRIX_CHAN}':'C1:SUS-ITMX_INMATRIX_$V_$H'
};
|
4820 | Wed Jun 15 00:50:11 2011 | Koji | HowTo | Computer Scripts / Programs | Kissel Button Generator |
Now the Kissel-button generator takes command line arguments and options.
The script is fully documented by its own usage message.
It still needs the external supporting files "MATRIX*.adl_parts".
The LSC screen now has these buttons for the input and output matrices.
The command lines used to generate those buttons are listed at the end of this entry as examples.
>pwd
/opt/rtcds/caltech/c1/medm/c1lsc_tst/master/KisselButtonGenerator
>./generate_KisselButton.py -h
usage:
generate_KisselButton.py [options] end_row end_column matrix_ch_name
This generates an MEDM screen of a button with the style designed by
Jeff Kissel for his ISI screens. This button has a display of a matrix
elements. If the matrix element is non-zero it glows in green. Otherwise
its color is dark. Usually the button created by this script
is to be copy-pasted to other screens.
Three arguments have to be given:
end_row the number of the row at the end
end_column the number of the column at the end
matrix_ch_name the channel name of the matrix to be monitored
e.g. give C1:LSC-OUTPUT_MTRX for C1:LSC-OUTPUT_MTRX_1_1, ...
There are options prepared in order to control the parameters of the button.
example:
generate_KisselButton.py 6 6 C1:LSC-OUTPUT_MTRX
6x6 matrix for C1:LSC-OUTPUT_MTRX
options:
-h, --help show this help message and exit
--sr=START_ROW specify the starting row number for the button array.
[default: 1]
--sc=START_COLUMN specify the starting column number for the button array.
[default: 1]
--bw=BUTTON_WIDTH specify the pixel width of the small button. [default:
8]
--bh=BUTTON_HEIGHT specify the pixel height of the small button. [default:
8]
--dl=DISPLAY_LABEL specify the button label. [default: channel name]
--sn=SCREEN_NAME specify the file name of the screen opened when one
click the button. The relative or absolute path can be
included. [default: a name guessed from the channel
name. e.g. C1LSC_OUTPUT_MTRX.adl for C1:LSC-OUTPUT_MTRX]
>./generate_KisselButton.py --bw=3 --bh=4 --dl="RFPD InMTRX" 16 8 C1:LSC-PD_DOF_MTRX > rfpd_mtrx.adl
>./generate_KisselButton.py --sc=21 --bw=6 --bh=4 --dl="DCPD InMTRX" 27 8 C1:LSC-PD_DOF_MTRX > dcpd_mtrx.adl
>./generate_KisselButton.py --bw=4 --bh=4 --dl="Trig MTRX" 11 8 C1:LSC-TRIG_MTRX > trig_mtrx.adl
>./generate_KisselButton.py --bw=4 --bh=4 --dl="Out MTRX" 9 10 C1:LSC-OUTPUT_MTRX > output_mtrx.adl
|
4935 | Sun Jul 3 21:18:06 2011 | rana | Update | Computer Scripts / Programs | statScreen scripts dead since Feb 4 / now revived |
This CSHRC mangling on Feb 4 did more than re-arrange FB binaries.
It broke the path to MEDM for the 32-bit machines in the lab (e.g. mafalda) and stopped the MEDM snapshots from being posted onto our MEDM Status Web Page.
This is because, in addition to the paths mentioned in the above elog, the paths to the EPICS directories were also commented out. I've re-inserted them into our
.cshrc file in the 32-bit section; the statScreen CRON that Yoichi set up is now back in business.
* for some reason, the 'cronjob.sh' script is wiping out its own log file. It would be great if someone who understands stderr output re-direction can fix it so that the log-file from each run is retained until the next time cron runs. |
5036 | Tue Jul 26 09:01:53 2011 | Jenny | Update | Computer Scripts / Programs | Mode matching |
I found a mode matching solution to match the beam coming to the PSL table from the AP table, so that I can lock the laser beam coming onto the PSL table to the reference cavity on that table. I determined that at the polarizing beam splitter I want a beam with q = (147+25.1i) mm (w0=58mm). This came from applying the ABCD matrices for three distances,
- d1 = 693 mm,
- d12 = 660.4 mm, and
- d2 = 393.7 mm,
separated by
- an f = 229.1 mm planoconvex lens and
- an R = 300 mm curved mirror,
to a beam with q0 = 406.4i mm (w0 = 0.371 mm at the PMC).
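As a sanity-check sketch (mine, not part of the original entry), this is how one would propagate the Gaussian-beam q parameter through that sequence with ABCD matrices; the 1064 nm wavelength and the f = R/2 treatment of the curved mirror (normal incidence) are assumptions.
# Sketch: propagate q through free space -> lens -> free space -> curved mirror -> free space.
# Distances and focal lengths are taken from the entry above; the wavelength is assumed.
import numpy as np

def free_space(d):          # ABCD matrix for propagation over a distance d [m]
    return np.array([[1.0, d], [0.0, 1.0]])

def lens(f):                # thin lens of focal length f [m]
    return np.array([[1.0, 0.0], [-1.0/f, 1.0]])

def mirror(R):              # curved mirror of radius R, acting like a lens with f = R/2
    return lens(R/2.0)

def propagate(q, M):        # q' = (A*q + B) / (C*q + D)
    (A, B), (C, D) = M
    return (A*q + B) / (C*q + D)

q0 = 406.4e-3j              # q at the PMC waist [m]
elements = [free_space(0.693), lens(0.2291),
            free_space(0.6604), mirror(0.300),
            free_space(0.3937)]

q = q0
for M in elements:
    q = propagate(q, M)

lam = 1064e-9
w = np.sqrt(-lam/(np.pi*np.imag(1.0/q)))   # beam radius from Im(1/q) = -lambda/(pi*w^2)
print("q at the PBS: %.4g %+.4gi m, w = %.3g mm" % (q.real, q.imag, 1e3*w))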
I obtained the following mode matching solution, which I will try to implement on the PSL table:
The beam I have has waist 0.281 mm at -2.74 m (I set my origin at the polarizing beam splitter--the spot where I want my beam to match the beam coming from the PMC, so all waists are behind that point). These numbers come from the beam-profiling and MATLAB-fitting I did (see 5015).
The solution I chose was: f = 1145.6 mm at -0.95 m and f = 572.7 mm at -0.62 m. This may need to be changed, however, if I need to add in some beam steering, which would increase the path length traveled by the beam.

|
5053 | Thu Jul 28 16:00:28 2011 | kiwamu | Update | Computer Scripts / Programs | another offset script : offset2 |
A new offset-zeroing script has been developed and is ready to run.
The motivation is to replace the old zeroing script, called offset, with a better one, because the old script somehow failed to revert the gain settings on a given filter bank.
The new script, named offset2, does the same job but uses tdsavg instead of ezcaservo, so it doesn't screw up the gain settings.
Additionally, the structure of the script is much simpler than the old offset script, and it uses fewer ezca functions.
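To make the idea concrete, here is a rough sketch of what such an offset-zeroing step looks like, written with pyepics-style caget/caput for illustration. This is a guess at the logic, not the actual offset2 code (which uses tdsavg), and the channel suffixes and averaging time are placeholders.
# Hypothetical sketch of zeroing a filter-module offset by averaging its input.
import time
from epics import caget, caput

def zero_offset(filterbank, t_avg=10.0, dt=0.1):
    """Set <filterbank>_OFFSET so that the averaged input is cancelled."""
    caput(filterbank + '_OFFSET', 0.0)                 # start from zero offset
    samples = []
    t_end = time.time() + t_avg
    while time.time() < t_end:
        samples.append(caget(filterbank + '_INMON'))   # placeholder input-monitor channel
        time.sleep(dt)
    avg = sum(samples)/len(samples)
    caput(filterbank + '_OFFSET', -avg)                # cancel the measured mean
    caput(filterbank + '_OFFSET_ON', 1)                # placeholder: switch the offset on

zero_offset('C1:LSC-XARM')                             # hypothetical filter bank name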
I will modify some scripts which use the old offset script so that all the offset-zeroing is done by offset2.
P.S.
Useful scripts are listed on the 40m wiki
http://blue.ligo-wa.caltech.edu:8000/40m/Computers_and_Scripts/All_Scripts |
5070 | Sat Jul 30 10:03:32 2011 | Jenny | Update | Computer Scripts / Programs | Mode matching |
I ended up having to switch to a different mode-matching solution, because I was unable to find the f = 572.7 mm lens. See my next elog entry (5069). |
5254 | Wed Aug 17 12:14:27 2011 | Josh Smith | Omnistructure | Computer Scripts / Programs | 40m summary page plans |
Josh Smith, Fabian Magana-Sandoval, Jackie Lee (Fullerton)
Thanks to Jamie and Jenne for the tour and the input on the pages.
We had a look at the GEO summary pages and thought about how best to make a 40m summary page that would eventually become an aLIGO summary page. Here's a rough plan:
- First we'll check that we can access the 40m NDS2 server to get data from the 40m lab in Fullerton.
- We'll make a first draft of a 40m summary page in python, using pynds, and base the layout on the current geo summary pages.
- When this takes shape we'll iterate with Jamie, Jenne, Rana to get more ideas for measurements, layout.
Other suggestions: Jenne is working on an automated noise budget and suggests having a placeholder for it on the page. We can also incorporate some of the features of Aidan's 40m overview medm screen that's in progress, possibly with different plots corresponding to different parts of the drawing, etc. Jenne will also email us the link to the once-per-hour medm screenshots.
|
5304 | Thu Aug 25 17:40:07 2011 | Dmass | Update | Computer Scripts / Programs | elog broke, fixed |
elog died b/c someone somewhere did something which may or may not have been innocuous. I ran the script in /cvs/cds/caltech/elog to restart the elog (thrice).
I have now banned Warren from clicking on the elog from home |
5340 | Mon Sep 5 21:12:05 2011 | Dmass | Update | Computer Scripts / Programs | elog broke, fixed |
Restarted elog 9:11PM 9/5/11 |