ID | Date | Author | Type | Category | Subject
12726 | Tue Jan 17 20:39:30 2017 | rana | Update | Computer Scripts / Programs | nodus web apache simlinks too soft
I tried to follow these instructions today to make the Simulink Webview accessible:
controls@nodus|public_html > ln -sfn /users/public_html/FE /export/home/
Quote: |
The story is: we currently don't expose the whole /users/public_html folder. Instead, we are symlinking the folders from public_html to /export/home/ on nodus, which is where apache looks for things
So, I fixed the links on the Core Optics page by running:
controls@nodus|~ > ln -sfn /users/public_html/40m_phasemap /export/home/
|
But...I got a "403 Forbidden" message. What is the secret handshake to get this to work? And why have we added this extra step of security? |
12733 | Wed Jan 18 12:46:47 2017 | ericq | Update | Computer Scripts / Programs | nodus web apache simlinks too soft
Quote: |
I tried to follow these instructions today to make the Simulink Webview accessible:
controls@nodus|public_html > ln -sfn /users/public_html/FE /export/home/
But...I got a "403 Forbidden" message. What is the secret handshake to get this to work? And why have we added this extra step of security?
|
This link works for me: https://nodus.ligo.caltech.edu:30889/FE/c1als_slwebview.html. The problem with just /FE/ is that there is no index.html, and we have turned off automatic directory listings.
IIRC, this arrangement was due to the fact that authentication of some of the folders (maybe the wikis) was broken during the nodus upgrade, so there was sensitive information being publicly displayed. This setup gives us discretion over what gets exposed. |
12735 | Wed Jan 18 15:17:38 2017 | rana | Update | Computer Scripts / Programs | nodus web apache simlinks too soft
I suppose before directory listings were turned off we should have fixed the script to make an index.html, but that's how it goes with "up"-grades. How about re-allowing directory listings so that our old links for webview work again?
EQ: https://nodus.ligo.caltech.edu:30889/FE is live |
12740 | Thu Jan 19 16:36:35 2017 | ericq | Update | Computer Scripts / Programs | nodus web apache simlinks too soft
This was done by adding "Options +Indexes" to /etc/apache/sites-available/nodus
I've added a little more info about the apache configuration on the wiki: ApacheOnNodus |
12859 | Wed Mar 1 16:00:41 2017 | gautam | Update | Computer Scripts / Programs | Matlab R2016b installed
Since it would be nice to have the latest version of Matlab, with all its swanky new features (?), available on the control room computers and Optimus, I downloaded Matlab R2016b and activated it with the Caltech Campus license. I installed it into /cvs/cds/caltech/apps/linux64/matlab16b. Specifically, I would like to run the coating optimization code on Optimus, where I can try giving it a more stringent convergence criterion to see if it converges to a better spot.
I trust that this way, we don't interfere with any of the rtcds stuff.
If I've done something illegal license-wise or if this is likely to cause havoc, please point me to what is the correct way to do this.
GV 18 Mar 2017: Though I installed this using the campus network license key, this seems to only work on Rossa. If I run it on the other control room machines/Optimus, it throws up a licensing error. I will check with Larry W. as to how to resolve this...
|
12874 | Wed Mar 8 18:18:51 2017 | johannes | Update | Computer Scripts / Programs | loss script
I started a loss script on Donatella that will scan the beam spot across ETMY, recording the reflected power from the arm via the networked scope at the AS port until later tonight (should be done by 9 pm). ITMX is currently strongly misaligned for this, but can be restored with the saved values. I mostly adapted the mapping scripts for the scope readout but still have to iron out a few kinks, which is why I'm running this test. In particular, I still need to calibrate how much the spot actually moves on the optic and control the ASS demodulation offsets to keep the beam stationary on ITMY. |
12879 | Thu Mar 9 22:28:11 2017 | johannes | Update | Computer Scripts / Programs | loss script
A loss map script is running on Rossa that moves the beam on ETMX. The Y arm was misaligned for this; the most recent PIT and YAW settings were saved beforehand. This will take until late at night; I estimate 2-3 am.
Quote: |
I started a loss script on Donatella that will scan the beam spot across ETMY, recording the reflected power from the arm via the networked scope at the AS port until later tonight (should be done by 9 pm). ITMX is currently strongly misaligned for this, but can be restored with the saved values. I mostly adapted the mapping scripts for the scope readout but still have to iron out a few kinks, which is why I'm running this test. In particular, I still need to calibrate how much the spot actually moves on the optic and control the ASS demodulation offsets to keep the beam stationary on ITMY.
|
|
12880 | Fri Mar 10 11:37:25 2017 | gautam | Update | Computer Scripts / Programs | loss script
This was still running at ~9.30am this morning, at which point I manually terminated it after confirming with Johannes that it was okay to do so. Judging by the StripTool traces in the control room, the mode cleaner remained locked for most of the night, so there should be plenty of usable data...
Note that I re-aligned the Y-arm (to experiment further with photo-taking) at about 9.30am, so the data after this time should be disregarded...
Quote: |
A loss map script is running on Rossa that moves the beam on ETMX. The Y arm was misaligned for this; the most recent PIT and YAW settings were saved beforehand. This will take until late at night; I estimate 2-3 am.
|
|
12882 | Fri Mar 10 19:48:56 2017 | johannes | Update | Computer Scripts / Programs | loss script
Loss script running again, on Pianosa this time. Due to an oversight in the code the beam wasn't actually moved across ETMY last night. This time I confirmed that the correct offset value is written as a demodulation parameter to the correct mirror degree of freedom. Script will probably run through the night. Yarm is currently misaligned but previous alignment was saved. |
12883 | Sat Mar 11 20:11:58 2017 | johannes | Update | Computer Scripts / Programs | loss script
Yarm script running on Pianosa. Still working on visualization of the ETMX lossmap.
Quote: |
Loss script running again, on Pianosa this time. Due to an oversight in the code the beam wasn't actually moved across ETMY last night. This time I confirmed that the correct offset value is written as a demodulation parameter to the correct mirror degree of freedom. Script will probably run through the night. Yarm is currently misaligned but previous alignment was saved.
|
|
12923 | Sun Apr 2 23:14:30 2017 | rana | Update | Computer Scripts / Programs | nodus update/upgrade/reboot
I just did remote apt-get update, apt-get upgrade, and then reboot on nodus. ELOG started up by itself. |
12992 | Mon May 15 19:21:04 2017 | Koji | Update | Computer Scripts / Programs | FSSslow / MCautolocker restarted
It seems that the FSS slow servo stopped working.
I found that megatron was restarted (by Rana, to finish an apt-get upgrade) at ~18:47 PDT today.
controls@megatron|~> last -5
controls pts/0 192.168.113.216 Mon May 15 19:15 still logged in
controls pts/0 192.168.113.216 Mon May 15 19:14 - 19:15 (00:01)
reboot system boot 3.2.0-126-generi Mon May 15 18:50 - 19:19 (00:29)
controls pts/0 192.168.113.200 Mon May 15 18:43 - down (00:04)
controls pts/0 192.168.113.200 Mon May 15 15:25 - 17:38 (02:12)
FSSslow / MCautolocker were restarted on megatron.
|
13013 | Thu May 25 16:42:41 2017 | jigyasa | Update | Computer Scripts / Programs | Making pylon installation on shared directory
I have been working on interfacing with the GigE’s. I went through Joe Be’s paper and the previous elogs and verified that the code files are installed.
I then downloaded and extracted a copy of the Pylon software onto my home directory on Allegra. Gautam helped me find installation instructions on Johannes’ directory so that I could make the installation on the shared directory.
So far, according to the instructions, these commands need to be executed so that the installation takes place and the rules for camera permissions are set up:
sudo tar -C /opt/rtcds/caltech/c1/scripts/GigE -xzf pylonSDK*.tar.gz
followed by ./setup-usb.sh
The Pylon viewer can then be accessed with /scripts/GigE/pylon5/bin/PylonViewerApp
Should I go ahead with the installation in the shared directory? |
13014 | Thu May 25 18:37:11 2017 | jigyasa | Update | Computer Scripts / Programs | Making pylon installation on shared directory
Gautam helped me execute the commands mentioned above and Pylon has now been installed in the shared directory. We extracted the pylon installation from Johannes's directory to the shared drive, and executing the command tar -C /opt/rtcds/caltech/c1/scripts/GigE -xzf pylonSDK*.tar.gz created an unzipped pylon5 folder in /scripts. Running ./setup-usb.sh set up the udev rules for the GigE.
The installation took place without any errors.
The Pylon viewer app can now be accessed at /opt/rtcds/caltech/c1/scripts/GigE/pylon5/bin followed by ./PylonViewerApp
Quote: |
Should I go ahead with the installation in the shared directory?
|
|
13023 | Wed May 31 14:23:42 2017 | jigyasa | Update | Computer Scripts / Programs | Establishing the EPICS channels for the GigE
To set up the EPICS channels for the GigE, Gautam and I followed the steps in his elog 8957.
We copied the 11 required channels from scripts/GigE/SnapPy/example_camera.db to a c1cam.db file that we created; however, due to conflicts with the existing CAM-AS_PORT channels, the channels could not be accessed.
We later changed the database file to Video.db and on restarting the slow machine, it was verified that the channels indeed could be written to and read from.
11 channels were added:
C1:CAM-MC1_X (X centroid position)
C1:CAM-MC1_Y (Y centroid position)
C1:CAM-MC1_WX (Gaussian width in the X direction)
C1:CAM-MC1_WY (Gaussian width in the Y direction)
C1:CAM-MC1_XY (Gaussian width along the XY line)
C1:CAM-MC1_SUM (Pixel sum)
C1:CAM-MC1_EXP (Exposure time in microseconds)
C1:CAM-MC1_SNAP (Control signal for taking snapshots)
C1:CAM-MC1_FILE (File name for the image to be saved to - time stamp automatically appended)
C1:CAM-MC1_RELOAD (Reloads the configuration file)
C1:CAM-MC1_AUTO (1 means autoexposure on, 0 means autoexposure off)
The procedure followed:
- Add the channel names to the file C0EDCU.ini (path = /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini).
- Make a database (.db) file so that these channels are actually recorded (path = /cvs/cds/caltech/target/c1aux/Video.db).
- Restart the slow machine and FB.
- Verify that the channels indeed exist and can be read and written to using ezcaread and ezcawrite (see the sketch below).
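A minimal sketch of that last verification step, using the pyepics caget/caput functions in place of the ezcaread/ezcawrite command-line tools (the channel and test value here are just examples):

# Minimal check that a new camera channel exists and is writable.
# Uses pyepics caget/caput; the elog used the ezcaread/ezcawrite CLI,
# so this is just an equivalent sketch.
from epics import caget, caput

chan = "C1:CAM-MC1_EXP"          # exposure time channel from the list above
val = caget(chan)                # returns None if the channel is unreachable
if val is None:
    raise RuntimeError("channel %s does not exist" % chan)
print("%s = %s" % (chan, val))
caput(chan, 2000, wait=True)     # arbitrary test value (microseconds)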
GV: Initially, I made a new directory called c1cam in /cvs/cds/caltech/target/ and made a .db file in there. However, the channels were not accessible after re-starting FB (attempting to read these channels threw up the "Channel does not exist" error). On digging a little further, I saw that there were already some "C1:CAM-AS_PORT" channels in C0EDCU.ini. The corresponding database records were defined inside /cvs/cds/caltech/target/c1aux/Video.db. So I just added the new records there. I also had to uncomment the dummy channel in C0EDCU.ini to keep an even number of channels. Restarting FB still did not allow read/write access to the channels. Looking through the files in /cvs/cds/caltech/target/c1aux, I suspected that the EPICS database records are loaded when the machine is first booted up - so on a hunch I re-started c1aux by keying the crate, and this did the trick. The channels can now be read / written to (tested using Python cdsutils). |
13056 | Fri Jun 9 16:37:29 2017 | jigyasa | Update | Computer Scripts / Programs | OpenCV installation
OpenCV 3.1.0 has been installed locally on Donatella by running the following commands:
git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 3.1.0
git clone https://github.com/Itseez/opencv_contrib.git
cd opencv_contrib
git checkout 3.0.0
cd ~/opencv
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=/~/opencv_contrib/modules/ ~/opencv/
In ~/opencv/release, make and sudo make install were executed.
This completed the installation. The version of the installation was verified with pkg-config --modversion opencv, which showed 3.1.0. I also verified the import of the cv2 module in Python, and it seems to work fine.
|
13066 | Thu Jun 15 18:56:31 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.
The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa
Apologies for any inconvenience.
Data analysis will follow. |
13072 | Mon Jun 19 18:32:18 2017 | jigyasa | Update | Computer Scripts / Programs | Software Installation for image analysis
The IRAF software from the National Optical Astronomy Observatory has been installed locally on Donatella (for testing), following the instructions listed at http://www.astronomy.ohio-state.edu/~khan/iraf/iraf_step_by_step_installation_64bit
This is a step towards "aperture photometry" and would help identify point scatterers in the images of the test masses.
I will be testing this software, in particular, the use of DAOPHOT and if it seems to work out, we may install it on the shared directory.
Hope this isn't an inconvenience.
|
13073 | Mon Jun 19 18:41:12 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
The previous run of the script had produced some dubious results!
The script has been modified and now scans the transmission sum for a longer duration to provide a better estimate on the average transmission. The pitch and yaw offsets have been set to the values that were randomly generated in the previous run as this would enable comparison with the current data.
I am starting it on Donatella and it should run for a couple of hours.
Apologies for the inconvenience.
Quote: |
A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.
The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa
|
|
13076 | Tue Jun 20 17:44:12 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
The script didn't run properly last night, due to a mix-up in variable names! It's been started again and has been running for half an hour now.
Quote: |
I am starting it on Donatella and it should run for a couple of hours.
Apologies for the inconvenience.
Quote: |
A python script to randomly vary the MC2 pitch and yaw offset and correspondingly record the value of MC transmission has been started on Donatella in the control room and should run for a couple of hours overnight.
The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa
|
|
|
13077 | Fri Jun 23 02:43:43 2017 | Kaustubh | HowTo | Computer Scripts / Programs | Taking Measurements From AG4395A
Summary:
I have written a script (a basic one which needs a lot of improvement, but still does the job) for taking multiple measurements with the AG4395A. I have also written a separate script for plotting the data taken by the previous script, along with error bars up to 1 standard deviation.
Details on How To Operate AG4395A:
- Under the 'Measurement' tab, press the 'Meas' button and select the Analyzer Type (Network Analyzer or Spectrum Analyzer).
- Then under the same options select which 'ratio' needs to be measured (A/R, B/R or A/B).
- Then press the 'Format' button to select what needs to be measured (Eg - Log|Mag|, Phase, etc.).
- In order to measure and see two channels at the same time (Eg - Log|Mag| and Phase), press the 'Display' button and select 'Dual Channel'.
- Using the 'Scale' button we can set the scale/div or use autoscale and also set the attenuator values of the different channels.
- The 'Bw/Avg' option gives us an averaging option which averages a few sets of data to produce the result. In doing this we lose quite a lot of data, and the resulting plot isn't able to give us the information on the statistical errors.
- This option also allows us to set the 'Intermediate Frequency' Bandwidth. This basically dictates the sampling rate of the Analyzer. The lower the IF BW, the lower the noise (due to less uncertainty in frequency).
- The 'Cal' button helps us calibrate the Analyzer to the current connections and signals. This is done because there is usually a difference in the 'cable lengths' for the two channels which introduces an extra phase term depending upon the RF frequency. The calibration can be simply done by removing the Device Under Test (DUT) and directly connecting the coaxial cables to the channels. After this the 'Calibrate Menu' allows us to calibrate the response using the short, open and thru methods.
- Now, under the 'Sweep' tab, the 'Sweep' button allows us to select various sweep options such as 'Sweep Time' (Auto, or set a time), 'Number of Points' (b/w 201-801) and 'Sweep Type' (Linear, Log, List Freq. etc.).
- Using the 'Source' button we can set the source power in dBm units (Usually kept as -20 to -10 dBm).
- The Scan Range can be set in a few ways such as using the start and end points or using the center and span range/width.
- After setting up all of the above, we can take the measurement either from the analyzer itself or using one of the control PCs. The command to download the data from AG4395A is netgpibdata -i 192.168.113.105 -d AG4395A -a 10 -f [filename].
Brief Details on How the 'AGmeasure' command works:
AGmeasure is a python script developed by some of the people who work at 40m. It is set as a global command and can be used from within any directory. The source code is in the scripts folder on the network, or else it can also be found in Eric Quintero's git repository. This command accepts at the very least a parameter file. This is supposed to be a .yml file. A template (TFAG4395Atemplate.yml) can be found in the scripts folder or in Eric's repo. There are some other options that can be passed to this command, see the help for more details.
The Multi_Measurement Script:
This script calls the 'AGmeasure' command repetitively and keeps storing the data files in a folder. Right now, the script needs to be fed the template file manually at the prompt.
The Test_Plotting Script:
This script plots a set of data files obtained from the above-mentioned script and produces a plot along with error bands up to 1 standard deviation of the data. The format (names) and total number of text files need to be explicitly known, for now at least. A rough sketch of this workflow follows.
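As a rough sketch of what these two scripts do together, assuming AGmeasure writes one text file per sweep with (frequency, magnitude) columns; the file pattern and column layout here are placeholders to adjust:

# Sketch: call AGmeasure N times, then average the sweeps and plot a
# +/- 1 sigma band. File naming and columns are assumptions; adjust to
# what your template actually writes out.
import glob
import subprocess
import numpy as np
import matplotlib.pyplot as plt

for i in range(10):
    subprocess.call(["AGmeasure", "TFAG4395Atemplate.yml"])

files = sorted(glob.glob("TFAG4395A_*.txt"))      # placeholder pattern
data = np.array([np.loadtxt(f) for f in files])   # shape (N, points, cols)
freq, mag = data[0, :, 0], data[:, :, 1]

mean, std = mag.mean(axis=0), mag.std(axis=0)
plt.semilogx(freq, mean, label="mean of %d sweeps" % len(files))
plt.fill_between(freq, mean - std, mean + std, alpha=0.3, label="+/- 1 std")
plt.xlabel("Frequency [Hz]")
plt.ylabel("Magnitude [dB]")
plt.legend()
plt.savefig("tf_error_bands.pdf")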
Attachments:
- The output test files and the two scripts.
- This is the 'Bode Plot' for a data set made using the above two scripts.
To Do:
- Improve upon the two scripts to be as compatible as the AGmeasure function itself.
- Try and incorporate the whole script into AGmeasure itself along with improving upon the templates.
- The above details, with some edits perhaps, can go into the 40m wiki too(?).
Update: Increased the font size in the plot. Added a few comments to the two scripts
To Do: Need to consider the transfer function as a single physical quantity (both the magnitude and phase) and then take the averages and calculate the standard deviation and then plot these results.
EDIT:
The attachment with the test files and the code now also contains a pdf with all the relations/equations I have used to calculate the averages and errors. |
Attachment 1: Test_Files_and_Code.zip
|
Attachment 2: Bode_Plot_with_Error_Bands.pdf
|
|
13078 | Fri Jun 23 02:55:18 2017 | Kaustubh | Update | Computer Scripts / Programs | Script Running
I am leaving a script running on Pianosa for the night. For this purpose, the AG4395A is also kept on. I'll see the result of the script in the morning (it should be complete by then). Please check before fiddling with the Analyzer.
Thank you. |
13084 | Tue Jun 27 18:47:49 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
The values generated from the script were analyzed and a 3D scatter plot in addition to a 2D map were plotted.
Yesterday, Rana pointed me to another method of collecting and analyzing the data. So I worked on the code today and have left a script (MC2rerun.py) running on Ottavia which should run overnight.
Quote: |
The script didn't run properly last night, due to a mix-up in variable names! It's been started again and has been running for half an hour now.
|
|
13086 | Thu Jun 29 00:13:08 2017 | Kaustubh | Update | Computer Scripts / Programs | Transfer Function Testing
In continuation of my previous posts, I have been working on evaluating the transfer function data. Recently, I calculated the correlation values between the real and imaginary parts of the transfer function. I have also written code for plotting the transfer function data stream at each frequency in the Argand plane, just for reference. In addition, I have done a few calculations and found the errors in magnitude and phase using those in the real and imaginary parts of the transfer function. More details on the process are in this git repository.
The following attachments have been added:
- The correlation plot at different frequencies. This data is for 100 data files.
- The test files used to produce the above plot, along with the code for plotting it, as well as the text file containing the correlation values. (Most of the code is commented out, as that part wasn't needed for the recent changes.)
Conclusion:
Seeing the correlation values, it sounds reasonable that the approximation of Gaussian-distributed real and imaginary parts actually holds. This is because the correlation values are mostly quite small. This can be seen by studying the distribution of the transfer function in the Argand plane. The entire distribution can be seen to be somewhat, if not entirely, circular. Even when the ellipticity of the curve seems to be high, the curve still appears to be elliptical along the real and imaginary axes, i.e., the correlation between them is still low.
To Do:
- Use a better way to estimate the errors in magnitude and phase, as the method used right now is only valid in the linear approximation and gives insane, out-of-bounds values when the magnitude is extremely small and the phase is varying wildly.
- Use the errors in the transfer function to estimate the coherence of the data at each frequency point. That is, basically plot a coherence vs. frequency plot showing how the coherence of the measurements varies as the frequency is varied.
In order to test the above again, with an even larger data set, I am leaving a script running on Ottavia. It should take more than just the night (I estimate around 10-11 hours) if there are no problems. |
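For reference, a minimal sketch of the per-frequency correlation computation described above, with a synthetic stand-in for the stack of measured sweeps:

# Per-frequency correlation between Re and Im of the transfer function,
# computed across repeated sweeps (tf: complex array, shape (N_sweeps, N_freq)).
import numpy as np

def reim_correlation(tf):
    re, im = tf.real, tf.imag
    cov = ((re - re.mean(0)) * (im - im.mean(0))).mean(0)
    return cov / (re.std(0) * im.std(0))

# synthetic stand-in for 100 measured sweeps at 801 frequency points
tf = np.ones((100, 801)) * np.exp(1j) + 0.01 * (
    np.random.randn(100, 801) + 1j * np.random.randn(100, 801))
rho = reim_correlation(tf)
print(np.abs(rho).max())   # small |rho| supports the uncorrelated-Gaussian picture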
Attachment 1: Correlation_Plot.pdf
|
|
Attachment 2: 2x100_Test_Files_and_Code_and_Correlation_Files.zip
|
13087 | Thu Jun 29 10:04:18 2017 | jigyasa | Update | Computer Scripts / Programs | MC2 Pitch-Yaw offset
The script is being executed again, now.
Quote: |
I worked on the code today and have left a script (MC2rerun.py) running on Ottavia which should run overnight.
|
|
13105 | Mon Jul 10 17:13:21 2017 | jigyasa | Update | Computer Scripts / Programs | Capture image without pylon GUI
Over the day, I have been working on a C++ program to interface with Pylon to capture images and reduce dependence on the Pylon GUI. The program uses the Pylon header files along with the opencv headers. While ultimately a Python wrapper may be developed for the program, the current C++ program is at
/users/jigyasa/GigEcode/Grab/Grab.cpp, which when compiled as
g++ -Wl,--enable-new-dtags -Wl,-rpath,/opt/pylon5/lib64 -o Grab Grab.o -L/opt/pylon5/lib64 -Wl,-E -lpylonbase -lpylonutility -lGenApi_gcc_v3_0_Basler_pylon_v5_0 -lGCBase_gcc_v3_0_Basler_pylon_v5_0 `pkg-config opencv --cflags --libs`
returns an executable file named Grab which can be executed as ./Grab
This captures one image from the camera and displays it; additionally, it also displays the gray value of the first pixel.
I am working on adding more utility to the program, such as manually adjusting exposure and gain, and also on the Python wrapper (Cython has been installed locally on Ottavia for this purpose)! |
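For what it's worth, Basler also ships an official Python binding, pypylon (https://github.com/basler/pypylon), which might save writing a Cython wrapper by hand. A minimal single-frame grab might look like the sketch below; this is an untested outline based on the pypylon examples, and attribute names like ExposureTimeAbs vary between camera models:

# Grab a single frame from the first camera found -- roughly what Grab.cpp
# does. The exposure attribute name is a model-dependent assumption.
from pypylon import pylon

camera = pylon.InstantCamera(
    pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()
camera.ExposureTimeAbs.SetValue(2000)   # manual exposure [us], placeholder
result = camera.GrabOne(1000)           # 1000 ms timeout
if result.GrabSucceeded():
    img = result.Array                  # image as a numpy array
    print("gray value of first pixel:", img[0, 0])
camera.Close()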
13109 | Mon Jul 10 21:31:15 2017 | Kaustubh | HowTo | Computer Scripts / Programs | Details on Cavity Scan Analysis
Summary:
The following elog describes the procedure followed for generating a sample simulation of a cavity scan, fitting an actual cavity scan, and calculating the relevant parameters using the cavity scan and fit data.
1. Cavity Scan Simulation:
- First, we define the sample cavity parameters, i.e., the reflectivities and transmissivities of the mirrors, the RoCs of the mirrors, and the absolute cavity length.
- We then define a frequency range using numpy.linspace function for which we want to take a scan.
- We then define a function that returns the transmission power output of a Fabry-Perot cavity using the cavity equations (a code sketch of such a function follows this list):
Pt = (t1*t2)^2 / (1 + (r1*r2)^2 - 2*r1*r2*cos(4*pi*f*L/c + 2*(n+m+1)*eta))
where Pt is the transmission power ratio of the output power to the input power, t1, t2, r1, r2 are the transmissivities and reflectivities of the two mirrors, L is the absolute cavity length, f is the frequency of the input laser, c is the speed of light, and eta = arccos(±sqrt(g1*g2)) is the Gouy phase shift, with g1, g2 being the g-factors for the two cavity mirrors (g = 1 - L/R). 'n' and 'm' correspond to the TEMnm higher order mode.
- We now obtain a cavity scan by giving the above defined function the cavity parameters and by adding the outputs for different higher order modes ('n', 'm' values). Appropriate factors for the HOMs need to be chosen. The above function with appropriate coefficients can also be used to add the modulated sidebands to the total transmission power.
- To this obtained total power we can add some random noise using the numpy module's random.normal function. We need to normalise the data with respect to the maximum power transmission ratio.
- We can now perform fitting on the above data using the procedure stated in the next section and then plot the two data sets using the matplotlib module.
- A similar code to do the above is given here.
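As referenced above, a minimal sketch of such a transmission function in Python, with placeholder optical parameters (not the 40m values):

# Transmission of a Fabry-Perot cavity for the TEM_nm mode, following the
# equation above. Parameter defaults are placeholders, not the 40m numbers.
import numpy as np

c = 299792458.0

def cavity_transmission(f, n=0, m=0, L=37.8, r1=0.985, r2=0.998,
                        t1=0.17, t2=0.06, g1=1.0, g2=0.68):
    eta = np.arccos(np.sqrt(g1 * g2))     # one-way Gouy phase
    phi = 4 * np.pi * f * L / c + 2 * (n + m + 1) * eta
    return (t1 * t2) ** 2 / (1 + (r1 * r2) ** 2 - 2 * r1 * r2 * np.cos(phi))

f = np.linspace(0, 8e6, 100001)           # a bit over two FSRs for L = 37.8 m
scan = sum(cavity_transmission(f, n, 0) / (2 * n + 1) ** 2 for n in range(5))
scan += np.random.normal(0, 1e-3, f.size) # add some noise
scan /= scan.max()                        # normalise to max transmission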
2. Fitting a Cavity Scan:
- The actual data for a cavity scan can be found in this elog entry or attached below in the zip folder.
- We read this data and separate the frequency data and the transmission data.
- Using the peakutils module's indexes function, we find the indices of the various peaks in the data set.
- These peaks are from the fundamental resonances, the sideband resonances (both 11 MHz and 55 MHz), as well as a few HOMs.
- Each of these resonances follows the cavity equations and hence can be modelled as a Lorentzian within a small interval around the peak frequency. A detailed description of how this is possible is given here and in the attached zip folder ('Functionsused.pdf').
- We define a Lorentzian function which returns the following: L(f) = a / (1 + ((f - f0)/b)^2), where 'a' is the peak transmission value, 'b' is the 'linewidth' of the Lorentzian, and f0 is the peak frequency about which the cavity equations behave like a Lorentzian.
- We now, using the Lorentzian function, fit the various identified peaks using the curve_fit function of the scipy module (see the sketch after this list). Remember to set the 'absolute_sigma' parameter to 'True'.
- The parameters now obtained can be evaluated using the procedure given in the next section.
- The total transmission power is evaluated by feeding the above-obtained parameters back into the Lorentzian function and summing the contributions of all peaks.
- We can plot the actual data set and the fit data for the different peaks together using the matplotlib module. We can also plot the residuals for a better depiction of the fit quality.
- The code to analyse the above mentioned cavity scan data is given here and attached below in the zip folder.
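A compact sketch of the peak-finding and Lorentzian-fitting steps above; the file name, threshold, and window width are placeholders:

# Find peaks in a measured scan and fit a Lorentzian around each one.
import numpy as np
import peakutils
from scipy.optimize import curve_fit

def lorentzian(f, a, b, f0):
    return a / (1.0 + ((f - f0) / b) ** 2)

freq, trans = np.loadtxt("cavityscan.txt", unpack=True)   # placeholder file
idx = peakutils.indexes(trans, thres=0.05, min_dist=100)

fits = []
for i in idx:
    win = slice(max(i - 50, 0), i + 50)     # small window around the peak
    p0 = [trans[i], 1e3, freq[i]]           # [height, linewidth, f0] guesses
    popt, pcov = curve_fit(lorentzian, freq[win], trans[win],
                           p0=p0, absolute_sigma=True)
    fits.append((popt, np.sqrt(np.diag(pcov))))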
3. Calculating Physically Relevant Parameters:
- The data obtained from fitting the peaks in the previous section now needs to be analysed in order to obtain some physically relevant information, such as the FSR value, the TMS value, the modulation depths of the sidebands, and perhaps even the linear calibration of the frequency axis.
- First we need to identify the fundamental, TEM00 resonances among all the peaks. This we do by using the numpy.where function. We find the peaks with transmission values of more than 0.9 (or any suitable value).
- Using these indices we will now calculate the FSR and the Finesse of the peaks. A description of the correlation between the Fit Parameters and the FSR and Finesse is given here.
- We define a linear fitting function for fitting the frequency values of the fundamental resonances against the index i of the resonance. The slope of this line gives us the value of the FSR and the error in it.
- The Finesse can be calculated by fitting the linewidth with a constant function.
- The cavity length can be calculated from the FSR as L = c / (2*FSR).
- Now, the approximate positions of the sideband resonances are given by (11*10^6 % FSR) and (55*10^6 % FSR) away from the fundamental, carrier resonances (frequencies in Hz).
- The modulation depth, 'm', is given by the relation Ps/Pc = (J1(m)/J0(m))^2, where Pc is the carrier transmission power, Ps is the transmission power of the sideband, and Jv is the Bessel function of order 'v'. (A sketch of solving this numerically for 'm' follows this list.)
- We define a function 'Bessel Ratio' using which we'll fit the transmission power ratio of the carrier to the sideband for the multiple sideband resonances.
- We also check for linearity in the frequency data by fitting the frequencies corresponding to peaks in the actual data against those obtained from the fit.
- After this we attempt to identify the other HOMs. For this we first determine a rough estimate for the value of TMS using the already known parameters of the mirrors, i.e., the RoCs. We then look in small intervals (0.5 MHz) around frequencies where we would expect the HOMs to be, i.e., 1*TMS, 2*TMS, 3*TMS... away from the fundamental resonances. These positions are all modulo FSR.
- After identifying the HOMs, we take the difference from the fundamental resonance and then study these modulo the FSR.
- We perform a Linear Fit between these obtained values and (n+m). As 'n','m' are degenerate, we can simply perform the fit against some variable 'k' and obtain the value of TMS as the slope of the linear fit.
- The code to do the above stated analysis is given here.
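As referenced above, a sketch of inverting the Bessel ratio numerically for the modulation depth (the measured power ratio here is a placeholder):

# Solve Ps/Pc = (J1(m)/J0(m))**2 numerically for the modulation depth m.
from scipy.special import jv
from scipy.optimize import brentq

def mod_depth(ratio):
    f = lambda m: (jv(1, m) / jv(0, m)) ** 2 - ratio
    return brentq(f, 1e-6, 2.0)    # stay below the first zero of J0 (~2.405)

print(mod_depth(0.01))             # placeholder measured sideband/carrier ratio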
Most of the above info and some smaller details can be found in the markdown readme file in this git repo. |
Attachment 1: Attachments.zip
|
13116 | Thu Jul 13 16:10:34 2017 | Kaustubh | Summary | Computer Scripts / Programs | Cavity Scan Simulation Code
The code to produce a cavity scan simulation and then fit the data and re-evaluate the initially set parameters can be found in this git repo.
The 'CavitScanSimulation' python script now produces a cavity scan with custom parameters which can be easily modified. It also introduces the first TEMs(n+m=0,1,2,3,4) to the laser with power going as (1/(2(n+m)+1))^2 {Selected carefully}. The only care that needs to be taken is that the frequency span should be somewhere near an integral multiple of the FSR so that there are equal number of resonances for all modes and sidebands. This code, as of now also calls the 'FitCavityScan' script which performs the fitting procedure on the data generated above{This data is actually written in a '.mat' file} and generates the Fit parameter data files. The Simulation code then calls the 'CalculatingPhysicalParameters' script which evaluates the data based on the Fit parameters and outputs some physically relevant results like the FSR, Finesse, Modulation Depths, TMS{Current Output is the Estimated RoCs of the two mirrors which isn't something we want directly, so it can be modified a bit to output TMS based on the HOMs}. The scripts do some 'Linearity' checks which might not really be of much significance but can be seen as a reference. Also, the ipython notebook will show all intermediate plots for the actual data and data with custom noise, fit data, FSR fitting, linearity checks, Bessel Ratio plot with mod_depths.
Note: The scripts should be run using either an IDE like 'spyder'{for .py files}{Comes with Anaconda} or using an ipython notebook{for .ipynb files}.
|
13131 | Fri Jul 21 19:44:58 2017 | Naomi | Summary | Computer Scripts / Programs | Using PyKat to run Finesse
I have been working on using PyKat to run Finesse. There appear to be several ways to run an equivalent simulation using Finesse:
1: .kat only
Run a .kat file directly from the terminal. For example, if in the directory containing the Finesse kat.ini file, run the command ‘./kat file.kat ’. This method does not use PyKat.
To edit the simulation using this method, one must directly edit the .kat file. This is not ideal, as all parameters must be hard-coded, and there is no looping method for duplicate commands.
Both of the following methods use PyKat in some manner. To run Finesse using PyKat from a .py file, the command ‘from pykat import finesse’ should be included. In addition, two environment variables must be defined:
- ‘FINESSE_DIR’: directory containing the ‘kat’ executable
- ‘KATINI’: location and name of the kat.ini file
Within a .py file running PyKat, the kat object contains all of the optical components and their states. To create a kat object, we use the command:
kat = finesse.kat()
2: .kat + .py
To load Finesse commands from a .kat file, we can use the command loadKatFile() . For example, using the kat object as defined above:
kat.loadKatFile(‘file.kat’)
The kat object now contains any components defined in the .kat file. The states of these components can be altered using PyKat. For example, if in the .kat file, we defined a mirror named ‘ITM’, with R = 0.9, T = 0.1, phi = 0, and with nodes 1 and 2 to its left and right, respectively, using the Finesse command
m ITM 0.9 0.1 0 n1 n2
we can now alter the state of the mirror using a PyKat command such as
kat.ITM.phi = 30
which changes the ‘phi’ property of the mirror to 30 degrees. Once all alterations to objects are made, we can run Finesse using the command
out = kat.run()
which stores the output of the Finesse simulation in the variable out .
3: .py only
We can also run a Finesse simulation without any .kat file. There are two ways to define Finesse objects within a .py file.
- Parse a string containing Finesse commands, as would be found in a .kat file, using the command parseCommands() . For example,
kat.parseCommands(‘m ITM 0.9 0.1 0 n1 n2’)
defines the same mirror as above. This object can now be altered using pyKat in the same manner as above.
- Define an object using the classes defined in PyKat. For example, to define the same ITM mirror, we can use:
ITM = mirror(‘ITM’, ‘n1’, ‘n2’, 0.9, 0.1, 0)
kat.add(ITM)
The syntax for these classes can be found in the files included in the PyKat package named ‘commands.py’, ‘detectors.py’, and ‘components.py’.
We can also run Finesse commands (rather than just defining components) using their respective classes. These must also be added to the kat object. For example:
x = xaxis(‘lin’, [‘-4M’, ‘4M’], ‘f’, 1000, ‘laser’)
kat.add(x)
This runs the command ‘xaxis ’, which sets the x-axis of the output data to run from freq = -4 MHz to 4 MHz, in 1000 steps. This is equivalent to the following Finesse command:
xaxis laser f lin -4M 4M 1000
In theory, we should be able to use PyKat to run any Finesse command. However, not all Finesse commands appear to be defined in PyKat; one example is the Finesse command ‘yaxis ’, which I cannot locate in PyKat. In addition, I have had difficulty running some commands such as ‘cav ’ and ‘pd ’, despite following their class definitions in the PyKat files. However, these commands can still be easily run in PyKat using parseCommands() . |
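Putting method 3 together, a minimal end-to-end sketch (an arbitrary two-mirror cavity with placeholder values, not one of the 40m cavities):

# Minimal PyKat run: a two-mirror cavity built from command strings,
# swept in laser frequency. Component values are arbitrary.
from pykat import finesse

kat = finesse.kat()
kat.parseCommands("""
l laser 1 0 n0
s s1 1 n0 n1
m ITM 0.9 0.1 0 n1 n2
s scav 1 n2 n3
m ETM 0.99 0.01 0 n3 n4
pd trans n4
xaxis laser f lin -4M 4M 1000
""")

out = kat.run()
print(out.x[:5])          # swept frequency axis
print(out["trans"][:5])   # transmitted power from the pd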
13299 | Wed Sep 6 01:09:11 2017 | johannes | Update | Computer Scripts / Programs | New set of loss measurements
I stumbled upon a faster way to stream data from the TDS3014 oscilloscopes to disk, which speeds the loss measurements up by a lot: ftp://sprite.ssl.berkeley.edu/pub/sharris/MAVEN_LPW_Preamp/109_TDS3014B_control/tds3014b.py
This convenient(!) set of scripts contains a function that parses the scope's native binary format into readable data, so that the acquisition of 1 screenful of data takes <1s as opposed to ~20s. I tested it for a bit and concluded that it does what it claims to do, but there's one weirdness: it gets the channel offset wrong. However, this doesn't matter in our measurement because we're subtracting the dark level, which sees the same (wrong) offset. Other than that it seems okay.
So I started a new set of armloss measurements, and since the data acquisition is now much faster, I was able to squeeze a set of 20 individual measurements for each arm into ~30 minutes. This is the procedure I follow when I take these measurements for the XARM (symmetric under XARM <-> YARM); a rough code sketch follows the list:
- Dither-align the interferometer with both arms locked. Freeze outputs when done.
- Misalign ETMY + ITMY.
- ITMY needs to be misaligned further. Moving the slider by at least +0.2 is plentiful to not have the other beam interfere with the measurement.
- Start the script, which does the following:
- Resume dithering of the XARM
- Check XARM dither error signal rms with CDS. If they're calm enough, proceed.
- Freeze dithering
- Start a new set of averages on the scope, wait T_WAIT (5 seconds)
- Read data (= ASDC power and MC2 trans) from scope and save
- Misalign ETMX and wait 5s
- Read data from scope and save
- Repeat desired amount of times
- Close the PSL shutter and measure the PD dark levels
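A heavily simplified skeleton of that loop; the helper functions here are hypothetical stand-ins, not the real script's API:

# Skeleton of the armloss measurement loop. The helpers below are
# hypothetical placeholders for the real script's EPICS/scope calls.
import time

T_WAIT = 5           # settling time [s]
N_REPEAT = 20

def read_scope():            # stand-in: returns (ASDC, MC2 trans) averages
    return (0.0, 0.0)

def set_dither(on):          # stand-in: resume/freeze the ASS dither
    pass

def misalign(optic, out):    # stand-in: move/restore the optic's alignment
    pass

data = []
for i in range(N_REPEAT):
    set_dither(True)         # resume dither, let error signals calm down
    set_dither(False)        # freeze dither
    time.sleep(T_WAIT)
    locked = read_scope()    # arm locked: ASDC power + MC2 trans
    misalign("ETMX", True)
    time.sleep(T_WAIT)
    unlocked = read_scope()  # arm misaligned
    misalign("ETMX", False)
    data.append((locked, unlocked))
# finally: close the PSL shutter and read_scope() once more for dark levels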
I will write a more comprehensive post describing the data acquisition and processing, let's just look at the results for now: The "uncertainties" reported by the individual measurements are on the order of 1-2 ppm (~1.9 for the XARM, ~1.3 for the YARM). This accounts for fluctuations of the data read from the scope and uncertainties in mode-matching and modulation depths in the EOM. I made histograms for the 20 datapoints taken for each arm: the standard deviation of the spread is a little over 2ppm. We end up with something like:
XARM: 49.3 +/- 2.1 ppm
YARM: 20.3 +/- 2.3 ppm

|
Attachment 1: XARM_20170905.pdf
|
|
Attachment 2: YARM_20170905.pdf
|
|
13307 | Mon Sep 11 12:56:40 2017 | johannes | Update | Computer Scripts / Programs | lossmap attempts
I was trying to get a lossmap measurement over the weekend but had some trouble first with the IMC and then with the PMC.
For the IMC: It was a bit too misaligned to catch and maintain lock, but I had a hard time improving the alignment by hand. Fortunately, turning on the WFS quickly once it was locked restored the transmission to nominal levels and made it maintain the lock for longer, but only for several minutes, not enough for a lossmap scan (can take up to an hour). Using the WFS information I manually realigned the IMC, which made locking easier but wouldn't help with staying locked.
For the PMC: The PZT feedback signal had railed and the PMC had been unlocked for 8+ hours. The PMC medm screen controls were generally responsive (I could see the modes on the CCDs changing) but I just couldn't get it locked. c1psl was responding to ping but refusing telnet so I keyed the crate, followed by a burt restore and finally it worked.
After the PMC came back the IMC has already maintained lock for more than an hour, so I'm now running the first lossmap measurements. |
13316 | Mon Sep 18 15:00:15 2017 | rana, gautam | Frogs | Computer Scripts / Programs | gateway PWD change
We implemented the post-SURF-season nodus password change today.
New password can be found at the usual location. |
13329 | Sun Sep 24 20:47:15 2017 | rana | Update | Computer Scripts / Programs | RF TF Uncertainties
I have made several changes to Craig's script for better pythonism. It's more robust with different libraries and syntaxes and makes a tarball by default (w/o a command line flag). These kinds of general util scripts will be going into a general-use folder in the git.ligo.org/40m/ team area so that they can be used throughout the LSC.
I don't think we need/want a coherence calculation, so I have not included it. Usually, we use coherence to estimate the uncertainty, and here we are just plotting it directly from the distribution of the sweeps, so coherence seems superfluous. |
Attachment 1: TFAG4395A_21-09-2017_115547_FourSquare.pdf
|
|
Attachment 2: TFAG4395A_21-09-2017_115547.tgz
|
13330 | Mon Sep 25 17:56:33 2017 | johannes | Update | Computer Scripts / Programs | transmitted power during lossmap
I had to do a reboot + burt restore of c1psl today. It was unresponsive and I couldn't get the PMC to lock. I also had to slightly realign the PMC, and the IMC was too misaligned for the autolocker to catch lock. Adjusting it manually, it was predominantly MC1 PIT that was off. The YARM locked on a TEM10 mode and had to be aligned manually as well.
I left a script running on Donatella that tilts ETMX and thus moves the beam on ITMX. I'm monitoring the transmitted power to evaluate sane thresholds for the demodulation offsets in a lossmap measurement. The script will return the IFO to normal after it is done and will take <2 hours to complete (no real clue, but there's no way it takes longer than that for ~50 datapoints). |
13464 | Thu Dec 7 11:14:37 2017 | johannes | HowTo | Computer Scripts / Programs | Lots of red on the FE status screen
Since we're getting ready to put the replacement slow DAQ for c1auxex in, I wanted to bring the IFO back to operating condition after the PMC hadn't been locked for days. Something seems wrong with the CDS system though; many of the frontend models have a red background and don't seem to be responsive. I followed the instructions laid out in https://wiki-40m.ligo.caltech.edu/Computer_Restart_Procedures.
In the attached screenshot, initially all c1ioo models were red, and on c1iscex only c1x01 was blue, the others red. I was able to ssh into both machines and tried to restart individual models, which didn't work and instead turned their background white. Still following the wiki page, I restarted both machines, but they don't respond to pinging anymore and thus I cannot use ssh to reach them. Not sure what to do; I also rebooted fb over telnet.
So far I couldn't find any records of how to fix this situation. |
Attachment 1: 22.png
|
|
13465 | Thu Dec 7 15:02:37 2017 | Koji | HowTo | Computer Scripts / Programs | Lots of red on the FE status screen
Once a realtime machine was rebooted, it did not come back. I suspect that the diskless hosts have difficulty booting up. |
Attachment 1: DSC_0552.JPG
|
|
13466 | Thu Dec 7 15:46:31 2017 | johannes | HowTo | Computer Scripts / Programs | Lots of red on the FE status screen
[Koji, Johannes]
The issue was partially fixed and the interferometer is in workable condition now.
What -probably- fixed it was restarting the dhcp server on chiara
sudo service isc-dhcp-server restart
Afterwards the frontends were restarted one by one. SSH access was possible and the essential models for IFO operation were started.
c1iscex reported initially that no DAQ card was found, and inside the IO chassis the LED indicator strip was red. Turning off the machine, checking the cables and rebooting fixed this. |
Attachment 1: 04.png
|
|
13524 | Wed Jan 10 14:17:57 2018 | johannes | Configuration | Computer Scripts / Programs | autoburt no longer making backups
I was looking into setting up autoburt for the new c1auxex2 and found that it stopped making automatic backups for all machines after the beginning of the new year. There is no 2018 folder (it was the only one missing) in /opt/rtcds/caltech/c1/burt/autoburt/snapshots and the /latest/ link in /opt/rtcds/caltech/c1/burt/autoburt/ leads to the last backup of 2017 on 12/31/17 at 23:19.
The autoburt log file shows that the backup script last ran today, 01/10/18 at 14:19, as it should have, but doesn't show any errors and ends with "You are at the 40m".
I'm not familiar with the autoburt scheduling using cronjobs. If I'm not mistaken the relevant cronjob file is /cvs/cds/rtcds/caltech/c1/scripts/autoburt/autoburt.cron which executes /cvs/cds/rtcds/caltech/c1/scripts/autoburt/autoburt.pl
I've never used perl but there's the following statement when establishing the directory for the new backup:
$yearpath = $autoburtpath."/snapshots/".$thisyear;
# print "scanning for path $yearpath\n";
if (!-e $yearpath) {
    die "ERROR: Year directory $yearpath does not exist\n";
}
I manually created the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2018/ directory. Maybe this fixes the hiccup? Gotta wait about 30 minutes. |
13525 | Wed Jan 10 15:25:43 2018 | johannes | Configuration | Computer Scripts / Programs | autoburt making backups again
Quote: |
I manually created the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2018/ directory. Maybe this fixes the hiccup? Gotta wait about 30 minutes.
|
It worked. The first backup of the year is now from Wednesday, 01/10/18 at 15:19. Ten days of automatic backups are missing. Up until 2204 the year folders had been pre-emptively created, so why was 2018 missing?
gautam: This is a bit suspect still - the snapshot file for c1auxex, at least, seemed to be too light on channels recorded. This was before any c1auxex switching. To be investigated. |
13603 | Fri Feb 2 23:28:13 2018 | Koji | Update | Computer Scripts / Programs | netgpib data missing / PROLOGIX yellow box (crocetta) not working
I could not understand why the 'netgpibdata' scripts are missing from the "scripts/general" folder on pianosa... Where did they go???
Also, I found that the PROLOGIX GPIB-LAN controller for crocetta (192.168.113.108) is no longer working. I need to reconfigure it with "telnet"... |
13604 | Sat Feb 3 13:03:45 2018 | gautam | Update | Computer Scripts / Programs | netgpib data missing / PROLOGIX yellow box (crocetta) not working
The netgpibdata scripts are now under git version control at /opt/rtcds/caltech/c1/scripts/general/labutils/netgpibdata. I think the idea was to make this directory a collection of useful utilities that we could then pull at various labs / at the sites.
Quote: |
I could not understand why the 'netgpibdata' scripts are missing from the "scripts/general" folder on pianosa... Where did they go???
|
|
13607 | Mon Feb 5 18:04:35 2018 | Koji | Update | Computer Scripts / Programs | netgpib data missing / PROLOGIX yellow box (crocetta) not working
crocetta was reconfigured to have 192.168.113.108. It was confirmed that it can be used with netgpibdata.py.
Configuration: I connected my Mac to the unit using an Apple USB-Ethernet adapter. The adapter was configured to have a manual IP of 192.168.113.222/255.255.255.0. "netfinder.exe" was run to assign the IP addr to the unit. It seemed that the NVRAM of the unit had evaporated, as it had an IP of 0.0.0.0. Once it was configured, it could be run with netgpibdata as usual. |
13783 | Tue Apr 24 10:10:43 2018 | gautam | Update | Computer Scripts / Programs | Particle swarm hyper parameter optimization
I'm copying and pasting Nikhil's email here as he was unable to login to the elog (but should now be able to in order to reply to any comments, and add more details about this test, motivation, methodology etc).
I did some post-processing after running the grid search. The following steps were carried out:
1) Selected those sets whose cost function was less than a specific threshold (here 10000)
2) Next task was to see if the parameters of these good solutions had some pattern
3) I used a dimensionality reduction technique called t-SNE to project the 6 dimensional parameter space to 2 dim (for better visualization )
4) Made a scatter plot of these (see figure)
5) Used K-Means to find the clusters in this data
6) Marker size & color reflect the cost function; the bigger the marker, the better the solution.
7) Visual inspection implied cluster 5 had the best ranking points & more than any other cluster
8) These points had the following Parameter set: Workers {20,40}, SwarmSize {500}, MaxIter {500}, Self Adjustment {1}, Social Adjustment {1}, Tolerance {1e-3,1e-8}
See figure for the box plot.
9) It looks like it is a particular set of values, rather than individual values, that gives the best results. (A sketch of the clustering pipeline follows this list.)
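A sketch of that pipeline in Python with scikit-learn (the original analysis appears to have been done in MATLAB; the input arrays here are synthetic placeholders):

# Threshold -> t-SNE to 2D -> K-means -> scatter sized by cost.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

# synthetic stand-ins for the grid-search results: 6 hyperparameters + cost
params = np.random.rand(500, 6)
cost = np.random.lognormal(8, 2, 500)

good = cost < 1e4                       # step 1: keep decent solutions only
X, c = params[good], cost[good]

X2 = TSNE(n_components=2).fit_transform(X)      # step 3: 6D -> 2D
labels = KMeans(n_clusters=6).fit_predict(X2)   # step 5: find clusters

plt.scatter(X2[:, 0], X2[:, 1], s=50 * c.min() / c, c=labels, cmap="tab10")
plt.savefig("cluster_scatter.png")      # bigger marker = lower cost = better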
|
Attachment 1: ClusterFminScaled.png
|
|
Attachment 2: ClusterID_5.png
|
|
13801 | Mon Apr 30 23:13:12 2018 | Kevin | Update | Computer Scripts / Programs | DataViewer leapseconds
I was trying to plot trends (min, 10 min, and hour) in DataViewer and got the following error message
Connecting.... done
mjd = 58235
leapsecs_read()
Opening leapsecs.dat
Open of leapsecs.dat failed
leapsecs_read() returning 0
frameMemRead - gpstimest = 1208844718
though the plots showed up fine afterwards. Do we need to fix something with the leapsecs.dat file? |
13929 | Thu Jun 7 20:21:15 2018 | Koji | Update | Computer Scripts / Programs | /cvs/cds Backup in danger
Local backup on chiara seems not working since Nov 19, 2017.
/opt/rtcds/caltech/c1/scripts/backup/localbackup.log
2017-11-18 07:00:01,504 INFO Updating backup image of /cvs/cds
2017-11-18 07:03:00,113 INFO Backup rsync job ran successfully, transferred 1954 files.
2017-11-19 07:00:02,564 INFO Updating backup image of /cvs/cds
2017-11-19 07:00:02,592 ERROR External drive not mounted!!!
|
13963 | Thu Jun 14 15:21:58 2018 | gautam | Update | Computer Scripts / Programs | /cvs/cds Backup in danger
I think this is because /cvs/cds is getting too big. lsblk reveals:
controls@chiara|~> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 446.9G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 18.9G 0 part [SWAP]
sdb 8:16 0 2.7T 0 disk
└─sdb1 8:17 0 2T 0 part /home/cds
sr0 11:0 1 1024M 0 rom
sdc 8:32 0 1.8T 0 disk
└─sdc1 8:33 0 1.8T 0 part /media/40mBackup
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
I believe one of sdc or sdd is connected via SATA while the other is an external USB drive. Maybe we have to get bigger backup disks, but this may be a huge pain to set up as it will involve taking chiara down. Actually, now that I check the backup log, it seems like the backup is executing successfully - not sure if this is due to my unelogged mounting of sdc (using sudo mount /dev/sdc1 /media/40mBackup) last week, or if this is some LDAS backup. But in any case, it seems undesirable that sdb1 is larger than sdc1 or sdd1.
2018-06-06 07:00:01,086 INFO Updating backup image of /cvs/cds
2018-06-06 07:00:01,086 ERROR External drive not mounted!!!
2018-06-07 07:00:01,147 INFO Updating backup image of /cvs/cds
2018-06-07 07:00:01,147 ERROR External drive not mounted!!!
2018-06-08 07:00:01,244 INFO Updating backup image of /cvs/cds
2018-06-08 08:23:32,939 INFO Backup rsync job ran successfully, transferred 316870 files.
2018-06-09 07:00:01,465 INFO Updating backup image of /cvs/cds
2018-06-09 07:12:11,865 INFO Backup rsync job ran successfully, transferred 1926 files.
2018-06-10 07:00:01,842 INFO Updating backup image of /cvs/cds
2018-06-10 07:12:28,931 INFO Backup rsync job ran successfully, transferred 1656 files.
2018-06-11 07:00:01,294 INFO Updating backup image of /cvs/cds
2018-06-11 07:06:14,748 INFO Backup rsync job ran successfully, transferred 1664 files.
2018-06-12 07:00:02,081 INFO Updating backup image of /cvs/cds
2018-06-12 07:07:36,775 INFO Backup rsync job ran successfully, transferred 1870 files.
2018-06-13 07:00:02,194 INFO Updating backup image of /cvs/cds
2018-06-13 07:08:37,356 INFO Backup rsync job ran successfully, transferred 1818 files.
2018-06-14 07:00:01,753 INFO Updating backup image of /cvs/cds
2018-06-14 07:01:43,270 INFO Backup rsync job ran successfully, transferred 1744 files.
Quote: |
Local backup on chiara seems not working since Nov 19, 2017.
/opt/rtcds/caltech/c1/scripts/backup/localbackup.log
2017-11-18 07:00:01,504 INFO Updating backup image of /cvs/cds
2017-11-18 07:03:00,113 INFO Backup rsync job ran successfully, transferred 1954 files.
2017-11-19 07:00:02,564 INFO Updating backup image of /cvs/cds
2017-11-19 07:00:02,592 ERROR External drive not mounted!!!
|
|
13978 | Mon Jun 18 10:34:45 2018 | johannes | Update | Computer Scripts / Programs | running comsol job on optimus
I'm running a comsol job on optimus in a tmux session named cryocavs. Should be done in less than 24 hours, judging by past durations. |
14049 | Tue Jul 10 16:59:12 2018 | Izabella Pastrana | HowTo | Computer Scripts / Programs | Taking Remote TF Measurements with the Agilent 4395A
I copied the netgpibdata folder onto rossa (under the directory ~/Agilent/), which contains all the necessary scripts and templates you'll need to remotely set up, run, and download the results of measurements taken on the AG4395A network analyzer. The computer will communicate with the network analyzer through the GPIB device (plugged into the back of the Agilent, and whose communication protocol is found in the AG4395A.py file in the directory ~/Agilent/netgpibdata/).
The parameter template file you'll be concerned with is TFAG4395Atemplate.yml (again, under ~/Agilent/netgpibdata/), which you can edit to fit your measurement needs. (The parameters you can change are all helpfully commented, so it's pretty straightforward to use! Note: this template file should remain in the same directory as AGmeasure, which is the executable python script you'll be using). Then, to actually set up, run, and download your measurement, you'll want to navigate to the ~/Agilent/netgpibdata/ directory, where you can run on the command line the following: python AGmeasure TFAG4395Atemplate.yml
The above command will run the measurement defined in your template file and then save a .txt file of your measured data points to the directory specified in your parameters. If you set up the template file such that the data is also plotted and saved after the measurement, a .pdf of the plot will be saved along with your .txt file.
Now if you want to just download the data currently on the instrument display, you can run: python AGmeasure -i 192.168.113.105 -a 10 --getdata
Those are the big points, but you can also run python AGmeasure --help to learn about all the other functions of AGmeasure (alternatively, you can read through the actual python script).
Happy remote measuring! :)
|
14157 | Mon Aug 13 11:44:32 2018 | gautam | Update | Computer Scripts / Programs | Patch updates on nodus
Larry W said that some security issues were flagged on nodus. So I ran
sudo yum upgrade --exclude=elog-3.1.3-2.el7.x86_64
on nodus. The exclude flag is because there were some conflicts related to that particular package. Hopefully this has fixed the problem. It's been a while since the last update, which was in January of this year.
controls@nodus|~> sudo yum history
Loaded plugins: langpacks
ID | Command line | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
29 | upgrade --exclude=elog-3 | 2018-08-13 11:36 | E, I, U | 136 EE
28 | install yum-utils | 2018-08-13 11:31 | Update | 1
27 | install nmap | 2018-06-29 01:57 | Install | 2
26 | install grace | 2018-05-31 16:52 | Install | 11
25 | install https://dl.fedor | 2018-05-31 16:51 | Install | 1
24 | install perl-Digest-SHA1 | 2018-05-31 15:34 | Install | 1
23 | install python-devel | 2018-01-13 15:33 | Install | 1
22 | install gcc | 2018-01-13 15:32 | Install | 6
21 | install git | 2018-01-12 18:11 | Install | 4
20 | update | 2018-01-12 18:01 | I, U | 39
19 | install motif | 2018-01-05 17:35 | Install | 3
18 | install sendmail sendmai | 2017-12-03 05:11 | Install | 6
17 | install vim | 2017-11-21 18:12 | Install | 3
16 | reinstall mod_dav_svn | 2017-11-21 17:40 | Reinstall | 1
15 | install mod_dav_svn | 2017-11-21 17:39 | Install | 1
14 | install subversion | 2017-11-21 15:36 | Install | 2
13 | -y install php | 2017-11-20 22:15 | Install | 4
12 | install links | 2017-11-20 19:10 | Install | 2
11 | install openssl098e.i686 | 2017-11-18 18:28 | Install | 1
10 | install openssl-libs.i68 | 2017-11-18 18:26 | Install | 11
history list |
14243 | Thu Oct 11 13:40:51 2018 | yuki | Update | Computer Scripts / Programs | loss measurements
Quote: |
This is the procedure I follow when I take these measurements for the XARM (symmetric under XARM <-> YARM):
- Dither-align the interferometer with both arms locked. Freeze outputs when done.
- Misalign ETMY + ITMY.
- ITMY needs to be misaligned further. Moving the slider by at least +0.2 is plentiful to not have the other beam interfere with the measurement.
- Start the script, which does the following:
- Resume dithering of the XARM
- Check XARM dither error signal rms with CDS. If they're calm enough, proceed.
- Freeze dithering
- Start a new set of averages on the scope, wait T_WAIT (5 seconds)
- Read data (= ASDC power and MC2 trans) from scope and save
- Misalign ETMX and wait 5s
- Read data from scope and save
- Repeat desired amount of times
- Close the PSL shutter and measure the PD dark levels
|
Information for the armloss measurement:
- Script which gets the data: /users/johannes/40m/armloss/scripts/armloss_scope/armloss_dcrefl_asdcpd_scope.py
- Script which calculates the loss: /users/johannes/40m/armloss/scripts/misc/armloss_AS_calc.py
- Before doing the procedure Johannes wrote, you have to prepare as follows:
- put a PD in the anti-symmetric beam path to get the ASDC signal.
- put a PD in the MC2 box to get the transmitted light of the IMC. It is used to normalize the beam power.
- connect those 2 PDs to the oscilloscope and plug an Ethernet cable into it.
- Usage: python2 armloss_dcrefl_asdcpd_scope.py [IP address of Scope] [ScopeCH for AS] [ScopeCH for MC] [Num of iteration] [ArmMode]
Note: The script uses the httplib2 module. You have to install it if you don't have it.
The locked arms are needed to calculate the armloss, but the alignment of the PMC is very bad now. So at first I will get it aligned. (Gautam aligned it and the PMC is locked now.)
gautam: The PMC alignment was fine, the problem was that the c1psl slow machine had become unresponsive, which prevented the PMC length servo from functioning correctly. I rebooted the machine and undid the alignment changes Yuki had made on the PSL table. |