ID | Date | Author | Type | Category | Subject |
17980 | Wed Nov 15 17:37:17 2023 |
Koji | Update | SUS | PRM SUS UL PD not responding | Radhika reported that the PRM UL OSEM PD is not responding. This PD has been identified to have a shorting problem, but the short existed only at the bias pin of the PD. We disconnected the bias voltages and the PD was working with no (=0V) bias.
It seems that it lost the signal about 8 days ago and the signal intermittently appeared and disappeared.
I suggested that Radhika remove the Al foil, since we suspect the other pin of the PD is not shorting. |
3118 | Fri Jun 25 01:28:33 2010 |
Dmass | HowTo | SVN | SVN woes | I am trying to get an actual complete install of the 40m svn on my machine. It keeps stopping at the same point:
I do a
svn checkout --username svn40m https://nodus.ligo.caltech.edu:30889/svn /Users/dmass/svn
A blah blah blah many files
...
A /Users/dmass/svn/trunk/medm/c1/lsc/C1LSC_ComMode.adl.28oct06
svn: In directory '/Users/dmass/svn/trunk/medm/c1/lsc'
svn: Can't copy '/Users/dmass/svn/trunk/medm/c1/lsc/.svn/tmp/text-base/C1LSC_MENU.adl.svn-base' to '/Users/dmass/svn/trunk/medm/c1/lsc/.svn/tmp/C1LSC_MENU.adl.tmp.tmp': No such file or directory
I believe I have always had this error come up when trying to do a full svn install. Any illumination is welcome.
|
3123 | Sat Jun 26 05:02:04 2010 |
rana | HowTo | SVN | SVN woes |
Quote: |
I am trying to get an actual complete install of the 40m svn on my machine. It keeps stopping at the same point:
|
I have always seen this when checking out the 40m medm SVN on a non-Linux box. I don't know what it is, but Yoichi and I investigated it at some point and couldn't reproduce it on CentOS. I think it's some weirdness in the permissions of tmp files. It can probably be fixed by doing some clever checkin from the control room.
Even worse is that it looks like the whole 'SVN' mantra has been violated in the medm directory by the 'newCDS' team. It could be that Joe has decided to make the 40m a part of the official CDS SVN, which is OK, but will take some retraining on our part. |
3287 | Sun Jul 25 18:47:23 2010 |
Alberto | Update | SVN | Optickle 40mUpgrade model updated to include short cavity length corrections | I uploaded an updated optickle model of the upgrade to the SVN directory with the optickle models (here). |
7948 | Mon Jan 28 19:15:14 2013 |
Manasa | Update | Scattering | Scattering setup | [Jan, Manasa]
We are trying to get some scattering measurements in the Y-arm cavity. We have removed one of the viewport window covers of the ETMY chamber and have installed cameras on a ring that clamps to the window. The window along with the ring attachment is covered with aluminium foil when not in use. |
7962 | Wed Jan 30 11:18:31 2013 |
Manasa | Update | Scattering | Scattering setup |
Quote: |
[Jan, Manasa]
We are trying to get some scattering measurements in the Y-arm cavity. We have removed one of the viewport window covers of the ETMY chamber and have installed cameras on a ring that clamps to the window. The window along with the ring attachment is covered with aluminium foil when not in use.
|
[Jan, Manasa]
To align the camera to see small angle scattering from the ITMY, we tried shooting a green laser pointer at the pickoff mirror that was installed in the ETMY chamber such that we hit the face of ITMY. But we concluded that to be a very bad way to align the camera because we have no means to reconfirm that the camera was exactly looking at the scattering from ITMY.
Since we are in air, we came up with a plan B. The plan is to temporarily install a mirror in the ITMY chamber to steer the beam from the laser pointer (installed on the POY table) through ITMY to the pickoff mirror at the ETMY end. This way, we can install the camera at the ETMY window and be sure we are looking at ITMY scattered light. |
7971 | Thu Jan 31 11:53:31 2013 |
Manasa | Update | Scattering | Scattering setup |
Since we are in air, we came up with a plan B. The plan is to temporarily install a mirror in the ITMY chamber to steer the beam from the laser pointer (installed on the POY table) through ITMY to the pickoff mirror at the ETMY end. This way, we can install the camera at the ETMY window and be sure we are looking at ITMY scattered light.
|
[Jan,Manasa]
We executed plan B. We installed the green laser pointer on POY table and steered the beam through ITMY to hit the pick off mirror at the ETM end by installing *temporary mirrors. The pick off mirror was adjusted in pitch and yaw to center the reflected beam on the viewport window. We have installed irides on the ring attached to the viewport window to direct the beam to the camera.
*Temporary mirrors were removed from the ITMY chamber after this alignment. |
8072 | Tue Feb 12 23:22:14 2013 |
Manasa | Update | Scattering | Scattering setup |
[Jan, Manasa]
We installed a camera at the ETMY end to look at the scattering pickoff from the ITMY. We were able to see the whole of the beam tube. We need to meditate on where to assemble the camera and use appropriate lenses to narrow the field of view such that we avoid looking at scattering from other sources inside the chamber. |
6957 | Wed Jul 11 10:17:18 2012 |
Sasha | Summary | Simulations | SURF - Week 2 and 3 - Summary | These past two weeks, I've been working on simulating a basic Fabry-Perot cavity. I finished up a simulation involving static, non-suspension mirrors last week. It was supposed to output the electric field in the cavities given a certain shaking (of the mirrors), and the interesting thing was that it outputted the real and imaginary components separately, so I ended up with six different bode plots. Since we're only interested in the real part, bodes 2, 4, and 6 can be discarded (see attachment 1). There was a LOT of split-peak behavior, and I think it has to do either with matlab overloading or with the modes of the cavity being very close together (I actually think the first is more likely since a smaller value of T_1 resulted in actual peaks instead of split ones).
At any rate, there really wasn't much I could improve on that simulation (neither was there any point), but I attach the subsystem governing the electric field in the cavity as a matter of academic interest (see attachment 2). So I moved onto simulations where the mirrors are actually suspended pendulums as they are in reality.
A basic simulation of the suspended mirrors gave me fairly good results (see attachment 3). A negative Q resulted in a phase flip, detuning the resonance from the wrong side resulted in a complete loss of the resonance peak, and the peak looked fairly consistent with what it should be. The simulation itself is pretty bare bones, and relies on the two transfer functions P(s) and K(s); P(s) is the transfer function for translating the force of the shaking of the two test masses (lumped together into one transfer function) into actual displacement. Note that s = i*w, where w is the frequency of the force being applied. K(s), on the other hand, is the transfer function that feeds displacement back into the original applied force-based shaking. Like I said, pretty bare bones, but working (see attachment 4 for a bode plot of a standard detuning value and positive Q). Tweaking the restoring (or anti-restoring, depending on the sign of the detuning) force constant (K_0 for short) results in some interesting behavior. The most realistic results are produced for K_0 = 1e4, when the gain is much lower overall but the peak in resonance gets you a gain of 100 in dB. For those curious as to where I got P(s) and K(s), see "Measurement of radiation-pressure-induced optomechanical dynamics in a suspended Fabry-Perot cavity" by Thomas Corbitt et al.
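As a rough illustration of a force-to-displacement P(s) of the kind described above (this is a generic damped-pendulum response with made-up mass, resonance, and Q, not the actual transfer function from Corbitt et al.):

```python
import numpy as np

# Hypothetical pendulum parameters -- illustrative only
m = 0.25        # mirror mass [kg]
f0 = 1.0        # pendulum resonance [Hz]
Q = 100.0       # quality factor

w0 = 2 * np.pi * f0
f = np.logspace(-1, 2, 400)          # 0.1 Hz to 100 Hz
s = 1j * 2 * np.pi * f               # s = i*w, as in the text

# Force-to-displacement transfer function of a damped pendulum:
# P(s) = 1 / (m * (s^2 + (w0/Q)*s + w0^2))
P = 1.0 / (m * (s**2 + (w0 / Q) * s + w0**2))

mag_db = 20 * np.log10(np.abs(P))
phase_deg = np.degrees(np.angle(P))
# The resonance shows up as a sharp peak at f0 with a 180-degree phase flip;
# flipping the sign of the damping term (negative Q) flips the phase.
```

Plotting `mag_db` and `phase_deg` against `f` gives the kind of bode plot the entry refers to.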
I'm currently working on a more realistic simulation, with frequency and force noise as well as electronic feedback (via transfer functions, see attachment 5). The biggest thing so far has been trying to get the electronic transfer functions right. Corbitt's group gave some really interesting transfer functions (H_f(s) and H_l(s) for short; H_f(s) gives the frequency-based electronic transfer function, while H_l(s) gives the length-based electronic transfer function), which I've been trying to copy so that I can reproduce their results (see attachment 6). It looks like H_l(s) is a lowpass Butterworth filter, while H_f(s) is a Bessel filter (order TBD). Once that is successful, I'll figure out what H_f(s) and H_l(s) are for us (they might be the same!), add in degrees of freedom, and take my first shot at the OSEM system for figuring out where the mirror's position is.
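A minimal sketch of the two filter families named above, using scipy rather than the paper's actual designs; the corner frequency and orders here are placeholders, not Corbitt et al.'s values:

```python
import numpy as np
from scipy import signal

# Placeholder corner frequency and orders -- NOT the values from the
# Corbitt et al. paper; this only sketches the two filter shapes.
f_corner = 1e3                         # [Hz], hypothetical
wc = 2 * np.pi * f_corner

# H_l(s): low-pass Butterworth (maximally flat magnitude)
b_butter, a_butter = signal.butter(4, wc, btype='low', analog=True)

# H_f(s): Bessel (maximally flat group delay; order still to be determined)
b_bessel, a_bessel = signal.bessel(4, wc, btype='low', analog=True)

w = 2 * np.pi * np.logspace(1, 5, 200)
_, h_butter = signal.freqs(b_butter, a_butter, w)
_, h_bessel = signal.freqs(b_bessel, a_bessel, w)
# Both are ~0 dB well below the corner; a 4th-order lowpass rolls off
# at -80 dB/decade above it.
```

Comparing the phase of `h_bessel` and `h_butter` near the corner is the quickest way to tell the two apart when matching a plot from a paper.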
|
6985 | Wed Jul 18 09:53:20 2012 |
Sasha | Summary | Simulations | SURF - Week 4 - Summary | This past week, I've been working on moving forward with the basic cavity model I developed last week (for future reference, that model was FP_3, and I am now working on FP_4) and refining the suspensions. I added three degrees of freedom to my simulation (such that it now consists of yaw, pitch, displacement, and side-to-side motion) and am attempting to integrate them with the OSEMS. I have also added mechanical damping for all degrees of freedom, and am adding electric damping and feedback. Concerning that, are all of the degrees of freedom locally damped in addition to being actuated on by the control system? Or does the control system do all of the damping itself? The first is the way I'm working on setting it up, but can easily change this if needed.
The next iteration of FP (FP_5) will replace my complicated OSEM --> Degrees of Freedom and vice versa system with the matrix system (see the poster Jenne and Jamie made, "Advanced Suspension Diagnostic Procedure"), as well as adding bounce/roll, yaw/y coupling, various non-damping filters as needed (i.e. the a2f filters), and noise sources. However, I'll only move on to that once I'm sure I have FP_4 working reasonably well. For now at least, the inputs/outputs look fine, and some of the DOF show resonance peaks. I'll become more concerned about where these resonance peaks actually are once I add damping.
Attached is a screenshot of my work in progress. Only one of the suspensions has a basic feedback/damping loop going (as a prototype). It looks complicated now, but will simplify dramatically once I have damping worked out. Pink inputs are noises (will probably replace those with noise generators in FP_5) and green inputs are the OSEMS. The red output is the displacement of the cavity from resonance. The blue boxes are suspensions. |
6990 | Wed Jul 18 15:38:05 2012 |
Eric | Summary | Simulations | SURF Update | Most of my work has been on continuing to develop the Simulink model of the differential arm length control loop.
I have filled in transfer functions for the digital components after looking up the configuration of filters and
gains on the control screens. Filters that were active at the time included 1:50 and 1000:10 on C1LSC_YARM and
C1LSC_POY11 with a gain of 0.1. Jamie also introduced me to foton so that I could obtain the transfer functions
for the necessary filters. I have also continued to work on obtaining the open loop gain and length response
function from the model. The majority of the work now is to refine what I've accomplished so far. Adding details
to the arm cavity and the optics is one potential area for improvement.
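If I'm reading the filter shorthand right (the usual zero:pole naming in Hz -- worth confirming against foton), "1:50" is a zero at 1 Hz with a pole at 50 Hz, and "1000:10" a zero at 1 kHz with a pole at 10 Hz. A sketch of those stages as analog zpk systems, normalized to unity DC gain:

```python
import numpy as np
from scipy import signal

def zp_filter(zeros_hz, poles_hz):
    """Analog filter from zero/pole frequencies in Hz, unity gain at DC.
    Assumes the zero:pole reading of the shorthand -- check foton."""
    z = -2 * np.pi * np.asarray(zeros_hz, dtype=float)
    p = -2 * np.pi * np.asarray(poles_hz, dtype=float)
    k = np.prod(np.abs(p)) / np.prod(np.abs(z))   # makes |H(0)| = 1
    return signal.ZerosPolesGain(z, p, k)

h_1_50 = zp_filter([1.0], [50.0])        # "1:50"
h_1000_10 = zp_filter([1000.0], [10.0])  # "1000:10"

w = 2 * np.pi * np.logspace(-1, 4, 300)
_, mag_db, phase_deg = signal.bode(h_1_50, w)
# "1:50" rises from 0 dB above 1 Hz and flattens at 20*log10(50) ~ 34 dB
# above 50 Hz -- a lead/boost stage under this reading of the name.
```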
I have also spent some time looking at real-time calibration methods from GEO and a proposal for a similar system
on LIGO in P040057-x0 from the DCC. While the work for this project may follow a different path for a real-time
calibration, having a sense for what's been accomplished so far should be helpful in working on a new system. |
7022 | Wed Jul 25 10:31:33 2012 |
Sasha | Summary | Simulations | SURF - Week 5 - Summary | This week I've been working on refining my simulation and getting it ready to be plugged into the control system. In particular, I've added a first attempt at a PDH control system, matrix conversion from OSEMs to DOF and back, and all necessary DAC/ADC/AA/AI/whitening/dewhitening filters. Most of these work well, but the whitening filters have been giving me trouble. At one point, they were amplifying the signal instead of flattening it out, such that my simulation started outputting NaN (again).
This was wholeheartedly depressing, but switching out the whitening filters for flat ones seemed to make the problem go away, but brought another problem to light. The output to input ratio is minuscule (as in 10^-300/10^243, see Attachment 3 for the resulting bode plot between a force on the suspension pt in the x-direction and my two outputs - error signal and length signal, which is pretty much what you would expect it to be). I suspect that it's related to the whitening filter problem (perhaps the dewhitening filter is flattening the signal instead of amplifying?). If that is the case, then switching the whitening/dewhitening filters ought to work. I'll try today and see what happens. The white/dewhite filters together result in a total gain of 1, which is a good fundamental test, but could mean absolutely nothing (i.e. they could both be wrong!). Judging from the fact that we want to flatten out low frequency signal when it goes through the whitening filter, the filters don't look switched (see Attachment 4 for a bode plot of white and dewhite).
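The unity-gain sanity check described above can be scripted. Here is a minimal sketch with a hypothetical whitening stage (two zeros at 3 Hz, two poles at 30 Hz -- illustrative values only) and its exact inverse as the dewhitening stage:

```python
import numpy as np
from scipy import signal

# Hypothetical whitening filter: two zeros at 3 Hz, two poles at 30 Hz
z = -2 * np.pi * np.array([3.0, 3.0])
p = -2 * np.pi * np.array([30.0, 30.0])
k = np.prod(np.abs(p)) / np.prod(np.abs(z))     # unity gain at DC

white = signal.ZerosPolesGain(z, p, k)
dewhite = signal.ZerosPolesGain(p, z, 1.0 / k)  # inverse: swap zeros/poles

w = 2 * np.pi * np.logspace(-1, 3, 200)
_, h_w = signal.freqresp(white, w)
_, h_d = signal.freqresp(dewhite, w)
cascade = h_w * h_d   # flat (gain 1, zero phase) at every frequency

# Caveat from the text: a flat cascade only proves the two filters are
# inverses of each other, not that either one is individually correct.
```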
The only other source of problems (given that the suspensions/local damping have been debugged extensively throughout this process - though they could bug out in conjunction with the cavity controls?) is the PDH system. However, separating each of the components showed that the error signal generated is not absurd (I haven't tested whether it makes sense or not, but at any rate it doesn't result in an output on the order of 10^-300).
In summary, I've made progress this week, but there is still far to go. Attachment 1 is my simulation from last week, Attachment 2 is my simulation from this week. A talk with Jamie about the "big picture" behind my project helped tremendously. |
7025 | Wed Jul 25 11:34:31 2012 |
Eric | Summary | Simulations | SURF Update | I am continuing work on simulating the DARM control loop. There is now a block for the length response
function that allows one to recover the h(t) GW input to the model. However, in order to add this
block I had to add some artificial poles to the length response function because Simulink gave me errors
when the transfer function had more zeros than poles. The artificial poles are at 10^6 Hz and higher, so
that they should not affect the response function at the lower frequencies of interest. This approach
appears a bit computationally unstable though because without changing any parameters and re-running
the simulation, a different magnitude for h(t) would be calculated sometimes. A different method may be
necessary to get this working more accurately.
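The pole-padding trick above is easy to check numerically. This is a toy version with made-up zeros/poles (not the actual length response function): pad an improper response with poles at 1e6 Hz, as in the text, and verify the in-band response is unchanged.

```python
import numpy as np
from scipy import signal

# Made-up improper response: three zeros, one pole (more zeros than poles)
z = -2 * np.pi * np.array([10.0, 100.0, 1000.0])
p = -2 * np.pi * np.array([1.0])

w_pad = 2 * np.pi * 1e6                  # artificial poles at 1 MHz
n_extra = len(z) - len(p)                # number of poles to add
p_padded = np.concatenate([p, -w_pad * np.ones(n_extra)])

# Rescale the gain by w_pad^n_extra so the DC gain is unchanged
sys_padded = signal.ZerosPolesGain(z, p_padded, w_pad**n_extra)

# Compare against the unpadded (improper) response in the band of interest
w = 2 * np.pi * np.logspace(0, 3, 100)   # 1 Hz to 1 kHz
_, h_pad = signal.freqresp(sys_padded, w)
h_exact = np.array([np.prod(1j * wi - z) / np.prod(1j * wi - p) for wi in w])
# Within 1 kHz, the padded response matches the exact one to ~0.2%,
# the extra poles only matter approaching 1 MHz.
```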
By looking through the C1LSC Simulink model and the C1LSC control screens, Jenne helped me determine
which digital filters are active while the interferometer is locked. To do this, open the C1LSC control
screen, then open the trigger matrix. Inside the trigger matrix window there is a button titled Filter
Module Triggers which opens another window that indicates which filters are triggered for a given channel,
and what values trigger them. For the y arm servo filters FM2, 3, 6, 7, 8 are triggered while in lock and
FM4 and 5 are controlled manually; I am including all of these in the model now.
I have changed the way I manipulate the output from the model for analysis, using Rana's advice. I also
improved the plotting code, now using a custom Bode plot instead.
Attached is a screenshot of the Simulink model as it currently stands, and an older implementation of the
open loop gain. I am in the process of updating the servo filters now, and what is shown in the plot does
not include all the filter modules for the servo filter.
|
7028 | Wed Jul 25 14:35:45 2012 |
Sasha | Summary | Simulations | SURF - Week 5 - Summary |
Quote: |
This week I've been working on refining my simulation and getting it ready to be plugged into the control system. In particular, I've added a first attempt at a PDH control system, matrix conversion from OSEMs to DOF and back, and all necessary DAC/ADC/AA/AI/whitening/dewhitening filters. Most of these work well, but the whitening filters have been giving me trouble. At one point, they were amplifying the signal instead of flattening it out, such that my simulation started outputting NaN (again).
This was wholeheartedly depressing, but switching out the whitening filters for flat ones seemed to make the problem go away, but brought another problem to light. The output to input ratio is minuscule (as in 10^-300/10^243, see Attachment 3 for the resulting bode plot between a force on the suspension pt in the x-direction and my two outputs - error signal and length signal, which is pretty much what you would expect it to be). I suspect that it's related to the whitening filter problem (perhaps the dewhitening filter is flattening the signal instead of amplifying?). If that is the case, then switching the whitening/dewhitening filters ought to work. I'll try today and see what happens. The white/dewhite filters together result in a total gain of 1, which is a good fundamental test, but could mean absolutely nothing (i.e. they could both be wrong!). Judging from the fact that we want to flatten out low frequency signal when it goes through the whitening filter, the filters don't look switched (see Attachment 4 for a bode plot of white and dewhite).
The only other source of problems (given that the suspensions/local damping have been debugged extensively throughout this process - though they could bug out in conjunction with the cavity controls?) is the PDH system. However, separating each of the components showed that the error signal generated is not absurd (I haven't tested whether it makes sense or not, but at any rate it doesn't result in an output on the order of 10^-300).
In summary, I've made progress this week, but there is still far to go. Attachment 1 is my simulation from last week, Attachment 2 is my simulation from this week. A talk with Jamie about the "big picture" behind my project helped tremendously.
|
Here's a screenshot of what's going on inside the cavity (Attachment 1). The PDH/mixer system outputs 0 for pretty much everything except really high numbers, which is the problem I'm trying to solve now. I assumed that the sidebands were anti-resonant, calculated reflection coefficient F(dL) = Z * 4pi * i/(lambda), where Z = (-r_1 + r_2*(r_1^2 + t_1^2)/(1 - r_1*r_3)), then calculated P_ref = 2*P_s - 4sqrt(P_c*P_s) * Im(F(dL)) * sin(12.5 MHz * t) (this is pictured in Attachment 2), then mixed it with a sin(12.5MHz * t) and low-passed it to get rid of everything but the DC term (this is pictured in Attachment 3), which is the term that then gets whitened/anti-aliased/passed through the loop.
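The mixer/low-pass step described above can be sketched in the time domain. The carrier and sideband powers and the value of Im(F) here are made-up numbers; only the demodulation arithmetic is being illustrated.

```python
import numpy as np

f_mod = 12.5e6                 # modulation frequency [Hz]
fs = 500e6                     # sample rate [Hz] (40 samples per cycle)
t = np.arange(100000) / fs     # exactly 2500 modulation cycles

P_c, P_s = 1.0, 0.01           # hypothetical carrier / sideband powers [W]
imF = 0.3                      # hypothetical value of Im(F(dL))

# Reflected power as written in the text (sidebands anti-resonant)
P_ref = 2 * P_s - 4 * np.sqrt(P_c * P_s) * imF * np.sin(2 * np.pi * f_mod * t)

# Mix with the local oscillator; averaging over whole cycles plays the
# role of the low-pass and keeps only the DC term
mixed = P_ref * np.sin(2 * np.pi * f_mod * t)
err = np.mean(mixed)

# sin*sin averages to 1/2, so the demodulated error signal is
# -2*sqrt(P_c*P_s)*Im(F(dL)) -- linear in dL near resonance
expected = -2 * np.sqrt(P_c * P_s) * imF
```

In the actual model the averaging is of course done by a low-pass filter rather than a mean over the whole record, but the surviving DC term is the same.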
|
7065 | Wed Aug 1 10:45:38 2012 |
Sasha | Summary | Simulations | SURF - Week 6 - Summary | This week, I worked on transferring my Simulink simulation to the RCG. I made all relevant library parts, now under "SASHA library" in the main Simulink library browser. My main concern has been noises - I've added some rudimentary shot noise, amplitude noise, phase noise, and intensity noise. I have yet to add local oscillator noise, and plan to upgrade the existing noises to actually have the PSD they should (using equations from Rana's and Robb Ward's theses). I'm fairly certain this can be achieved by applying the correct transfer function to white noise (a technique I learned from Masha this week!), so the RCG should be able to handle it (theoretically).
I've also been tweaking my main simulation. After a brief (but scary) attempt at adding optical levers, I decided to shelve it in order to focus on noises/RCG simulating. This is not permanent, and I plan to return to them at some point this week or next. My main problem with them was that I knew how to get from optical lever input to pitch/yaw, but had no idea how to get from pitch/yaw to optical lever input. If I had a complete basis for one in terms of the other, I'd be able to, but I don't think this is the way to go. I'm sure there is a good way to do it (it was done SOMEHOW in the original simulation of the suspensions), I just don't know it yet.
In the aftermath of the optical lever semi-disaster, my simulation is once again not really outputting anything, but since it actually worked before adding the optical levers, I'm pretty sure I can get it to work again (this is why it's important to use either git or BACKUPS, >.< (or both)).
We also wrote our progress reports this week. Mine includes some discussion on the basics of cavities/the mechanics of the suspensions/brief intro to PDH, and I'll add a section on noises in the next draft. Maybe this'll be of some use to someone someday? One can only hope.
|
7076 | Thu Aug 2 03:06:57 2012 |
Sasha | Update | Simulations | LS Plant (LSP) is officially ONLINE | My ls plant compiled!! The RCG code can now be found in /opt/rtcds/rtscore/tags/advLigoRTS-2.5. I uploaded a copy of c1lsp.mdl onto the svn.
The weird "failed to connect" error was due to the fact that I named my inputs the same thing as my goto/from tags, so the RCG got confused. Once I renamed my inputs, it worked! I'm not sure what happened to the original "not enough parts" error; it didn't appear a single time during the rebuilding process. Anyway, I made the PDH block much neater, though the lines between PDH and ADC are looking wonky (this is purely an aesthetic problem, not an "oh god my simulation will DIE right now if I don't fix it" problem). I'll fix it in the morning; screenshot attached!
The original c1lsp was kind of sad. I updated it extensively and brought it into the modern era with color. The original c1lsp.mdl should also be on the svn. Tomorrow, I'll get started on figuring out how to get LIGO specific noises from white noise. |
7079 | Thu Aug 2 21:40:55 2012 |
Jamie | Update | Simulations | ETMX simplant model revived | I revived the ETMX simplant model, c1spx. It's running on cpu4 on c1iscex, and interfaces with C1scx via SHMEM.
The channel names for the simplant suspensions will be SUP, so the channels from this model will be C1:SUP-ETMX_.
Next I'll try to get the ITMX and LSC ("LSP") simplant models running so we can run a "full" cavity simulation.
Sasha has been working on LSP, so we should be ready to do something with that soon. In the meantime she's going to fix up the SPX MEDM screens, since some channel names have been changed since it was last run. |
7097 | Mon Aug 6 20:27:59 2012 |
Jamie | Update | Simulations | More work on getting simplant models running: c1lsp and c1sup | I'm trying to get more of the simplant models running so that we (me and Sasha Surf) can get a full real-time cavity simplant running. As I reported last week, c1spx is running again on c1iscex.
The two new simplant models are c1sup, which holds the simplant for ITMX, and c1lsp, which holds the IFO simplant, specifically the one we're working on for XARM.
Here's the relevant info:
model   host      dcuid   cpu
c1spx   c1iscex   61      4
c1sup   c1sus     62      6
c1lsp   c1lsc     60      6
c1spx and c1sup will be running the sus_single_plant parts for ETMX and ITMX simplant. All the simplant suspension channels will be named "SUP" (as opposed to "SUS" for control).
c1lsp is now running, but c1sup won't run for unknown reasons. The c1sup model is not very complicated, and in fact is more-or-less identical to c1spx. It compiles and installs and even loads, but it is completely unresponsive after loading. Unfortunately I've had enough CDS bullshit for today, so I'll try to figure out what's going on tomorrow. |
7116 | Wed Aug 8 11:16:06 2012 |
Sasha | Summary | Simulations | SURF - Week 7 - Summary | This week, I brought my c1lsp model online and fixed up some of the medm screens for c1spx. Along the way, I ran into a few interesting problems (and learned a bit about bash scripting and emacs! :D). The screens for the main TM_RESP matrix are not generating automatically, and the medm screens don't want to link to the channels, for some reason. I don't have this problem with the other matrices (i.e. C2DOF and SEN_OUT), so I think it has something to do with TM_RESP being a filter matrix (which the others are not). In addition, the noise overview medm screens for c1spx are practically nonexistent - someone just copied the file for the SUS-ETMX screens into the master directory for c1spx, so they need a complete overhaul. I am willing to do this, but Jamie told me to focus my attentions elsewhere.
So I went back to noise generation. I've been using Matlab to figure out how to recreate the various noise sources (laser amplitude noise, local oscillator phase/amplitude noise, and 60 Hz/ADC noise. Frequency noise will be added some time this week and seismic noise should be already covered in Jamie's suspension model) in my c1lsp model. I'm doing it the way the RCG does it - by applying a filter to white noise. I'm generating white noise by just using a random number generator and pwelch-ing it to get the power spectral density.
For the filters themselves, I picked z, p, k such that it shaped the white noise PSD to look like the PSD of the noise in question. This was fairly straightforward once I figured out how zeroes and poles affected PSD. Once I'd picked zpk, I applied a bilinear transform to get a discrete zpk out, then converted to a second order section to make computation faster. I then applied that to the white noise (matlab has a convenient "sosfilt" function) and pwelch-ed/graphed it to get the result.
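The zpk-to-bilinear-to-sos pipeline above translates directly to Python, which is handy for cross-checking the Matlab work. This sketch shapes white noise into a 60 Hz line using a complex-conjugate pole pair, as in the text; the Q, sample rate, and seed are illustrative choices, not the real ADC numbers.

```python
import numpy as np
from scipy import signal

fs = 2048.0                               # sample rate [Hz], illustrative
f_line, Q = 60.0, 50.0                    # line frequency and hypothetical Q
w0 = 2 * np.pi * f_line

# Analog zpk: complex-conjugate poles at 60 Hz; a zero at DC keeps the
# response bandpass-like. k = w0/Q gives unity gain at the peak.
z = [0.0]
p = [-w0 / (2 * Q) + 1j * w0 * np.sqrt(1 - 1 / (4 * Q**2)),
     -w0 / (2 * Q) - 1j * w0 * np.sqrt(1 - 1 / (4 * Q**2))]
k = w0 / Q

zd, pd, kd = signal.bilinear_zpk(z, p, k, fs)   # bilinear transform
sos = signal.zpk2sos(zd, pd, kd)                # second-order sections

rng = np.random.default_rng(0)
white = rng.standard_normal(int(60 * fs))       # 60 s of white noise
shaped = signal.sosfilt(sos, white)             # Matlab sosfilt analogue

f, psd = signal.welch(shaped, fs=fs, nperseg=4096)  # Matlab pwelch analogue
# The PSD of `shaped` shows a peak at ~60 Hz whose width is set by Q.
```

Note the bilinear transform here is applied without frequency prewarping, so the peak lands slightly below 60 Hz; at fs = 2048 Hz the shift is well under 1 Hz.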
Attached is my attempt at filtering white noise to look like 60 Hz noise. I can't seem to find a way to pick z and p such that the peak is narrower (i.e. other than having two complex conjugated poles at 60 Hz). I took the spectrum of 60 Hz noise from a terminated ADC channel (Masha kindly let me borrow one of her GURALP channels).
EDIT: I also remembered that I've been looking for how to get a good power spectrum for the rest of the noises. Jenne referred me to Kiwamu's work on this, and I'm mostly going off elog #6133. If you have any other good elogs/data on noises, please feel free to send them my way.
I then measured the PSD of the sensors on the real suspended optics and a PSD of the suspension model. It looks like the OSEMs on the suspension model are only reading white noise, which probably means a lost connection somewhere (Attachment 2 is what the model SHOULD produce, Attachment 3 is what the model ACTUALLY produces). I perused Jamie's model, but couldn't find anything. Not sure where else to check, but I'll continue thinking about it/trying to fix it. |
7132 | Thu Aug 9 04:26:51 2012 |
Sasha | Update | Simulations | All c1spx screens working | As the subject states, all screens are working (including the noise screens), so we can keep track of everything in our model! :D I figured out that I was just getting nonsense (i.e. white noise) out of the sim plant because the filter matrix (TM_RESP) that controlled the response of the optics to a force (i.e. outputted the position of the optic DOF given a force on that DOF and a force on the suspension point) was empty, so it was just passing on whatever values it got based on the coefficients of the matrix without DOING anything to them. In effect, all we had was a feedback loop without any mechanics.
I've been working on getting the mechanics of the suspensions into a filter/transfer function form; I added something resembling that into foton and turned the resulting filter on using the shiny new MEDM screens. However, the transfer functions are a tad wonky (particularly the one for pitch), so I shall continue working on them. It had a dramatic effect on the power spectrum (i.e. it looks a lot more like it should), but it still looks weird.
Still haven't found the e-log Jamie and Rana referred me to, concerning the injection of seismic noise into the simulation. I'm not terribly worried though, and will continue looking in the morning. Worst case scenario, I'll use the filters Masha made at the beginning of the summer.
Masha and I ate some of Jamie's popcorn. It was good. |
7133 | Thu Aug 9 07:24:58 2012 |
Sasha | Update | Simulations | All c1spx screens working |
Quote: |
As the subject states, all screens are working (including the noise screens), so we can keep track of everything in our model! :D I figured out that I was just getting nonsense (i.e. white noise) out of the sim plant because the filter matrix (TM_RESP) that controlled the response of the optics to a force (i.e. outputted the position of the optic DOF given a force on that DOF and a force on the suspension point) was empty, so it was just passing on whatever values it got based on the coefficients of the matrix without DOING anything to them. In effect, all we had was a feedback loop without any mechanics.
I've been working on getting the mechanics of the suspensions into a filter/transfer function form; I added something resembling that into foton and turned the resulting filter on using the shiny new MEDM screens. However, the transfer functions are a tad wonky (particularly the one for pitch), so I shall continue working on them. It had a dramatic effect on the power spectrum (i.e. it looks a lot more like it should), but it still looks weird.
Still haven't found the e-log Jamie and Rana referred me to, concerning the injection of seismic noise into the simulation. I'm not terribly worried though, and will continue looking in the morning. Worst case scenario, I'll use the filters Masha made at the beginning of the summer.
Masha and I ate some of Jamie's popcorn. It was good.
|
Okay! Attached are two power spectra. The first is a power spectrum of reality, the second is a power spectrum of the simPlant. It's looking much better (as in, no longer obviously white noise!), but there seems to be a gain problem somewhere (and it doesn't have seismic noise). I'll see if I can fix the first problem then move on to trying to find the seismic noise filters. |
7141 | Fri Aug 10 11:00:52 2012 |
Sasha | Update | Simulations | Messing with ETMX | I've been trying to get the simPlant model to work, and my main method of testing is switching between the real ETMX and the simulated ETMX and comparing the resulting power spectrum (the closer the two are, the better our simulation works). While the simPlant is on, ETMX is NOT BEING DAMPED. I started this ~Wednesday, and the testing will continue today, then hopefully we'll get a similar simPlant up for ITMX (at which point, testing will continue for both ITMX and ETMX).
TL;DR: ETMX is not being continuously damped, XARM will likely be exhibiting some wonky behavior next week. |
7151 | Sat Aug 11 01:10:26 2012 |
Sasha | Update | Simulations | Sim_Plant Working! | Sim_Plant going okay. Adding seismic noise tomorrow - we'll see what happens. The gain is still semi-off, but I know how to fix it - it's just nice to have it gained up while I play with noise.
P.S. JAMIE DO YOU NOTICE HOW PRETTY MY GRAPH IS? |
7152 | Sat Aug 11 18:05:49 2012 |
Sasha | Update | Simulations | Sim_Plant Working! |
Quote: |
Sim_Plant going okay. Adding seismic noise tomorrow - we'll see what happens. The gain is still semi-off, but I know how to fix it - it's just nice to have it gained up while I play with noise.
P.S. JAMIE DO YOU NOTICE HOW PRETTY MY GRAPH IS?
|
Developed some seismic noise. I adapted the seismic noise filters we used for the MC model way back when. They looked questionable to begin with, but I added some poles/zeroes to make it more accurate (see Attached). |
7212 | Fri Aug 17 04:13:45 2012 |
Sasha | Update | Simulations | The SimPlant Saga CONTINUES | THE GOOD: SimPlants ITMX and ETMX are officially ONLINE. Damping has been instituted in both, with varying degrees of success (see Attachment 1). An overview screen for the SimPlants is up (under XARM_Overview in the sitemap - you can ignore the separate screens for ETMX and ITMX for now, I'll remove them later), C1LSP will be online/functional by Monday.
The very high low-frequency noise in my simPlant comes from seismic noise plus a DC response of 1, so that the seismic noise at low frequencies is just passed through as-is and then amplified along with everything else in the m --> counts conversion. Not quite sure how to deal with this except by NOT having a DC response of 1 (which it technically doesn't have when you do the algebra - Rana said that "it made sense" for the optic to have unity gain at low frequencies, but the behavior is not matching up with reality).
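For intuition, here is a toy plant sketch - the resonance f0 and quality factor Q are invented for illustration only, not taken from the actual simPlant:

```python
import numpy as np

# Toy pendulum-like transfer function with unity DC gain; f0 and Q are
# made-up illustrative values (not the real simPlant parameters).
f0, Q = 1.0, 5.0                    # Hz, dimensionless
f = np.array([0.01, 0.1, 10.0])     # Hz
H = 1.0 / (1.0 - (f / f0)**2 + 1j * f / (f0 * Q))
# Below f0 the magnitude is ~1: ground motion passes straight through,
# and the m --> counts conversion then amplifies it unchanged.
print(np.abs(H))                    # roughly [1.0, 1.01, 0.01]
```

Any plant with |H| = 1 at DC will hand the full low-frequency seismic spectrum to whatever gain follows it.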
THE BAD: It looks like the ITMX Switch from Reality to simPlant doesn't work (or some of the signals aren't getting switched). When switching from reality to simulation, it looks like the control system is receiving signals from the SimPlant, but is transmitting them to the real thing. As a result, when you flip the switch from reality to sim, ITMX goes seriously crazy and starts slamming back and forth against the stop. REALLY NOT GOOD. As soon as I saw what was going on, I turned back to reality and flipped the watch dogs on (YES THEY WERE OFF). I'll investigate the connections between the plant and control system some more in the morning (i.e. later today) (this is also probably what is causing the "lost connections" in c1sup/sus we can see in the machine status screen). |
7218
|
Fri Aug 17 12:47:30 2012 |
Sasha | Update | Simulations | The SimPlant Saga CONTINUES |
Quote: |
THE GOOD: SimPlants ITMX and ETMX are officially ONLINE. Damping has been instituted in both, with varying degrees of success (see Attachment 1). An overview screen for the SimPlants is up (under XARM_Overview in the sitemap - you can ignore the separate screens for ETMX and ITMX for now, I'll remove them later), and C1LSP will be online/functional by Monday.
The very high low-frequency noise in my simPlant comes from seismic noise plus a DC response of 1, so that the seismic noise at low frequencies is just passed through as-is and then amplified along with everything else in the m --> counts conversion. Not quite sure how to deal with this except by NOT having a DC response of 1 (which it technically doesn't have when you do the algebra - Rana said that "it made sense" for the optic to have unity gain at low frequencies, but the behavior is not matching up with reality).
THE BAD: It looks like the ITMX Switch from Reality to simPlant doesn't work (or some of the signals aren't getting switched). When switching from reality to simulation, it looks like the control system is receiving signals from the SimPlant, but is transmitting them to the real thing. As a result, when you flip the switch from reality to sim, ITMX goes seriously crazy and starts slamming back and forth against the stop. REALLY NOT GOOD. As soon as I saw what was going on, I turned back to reality and flipped the watch dogs on (YES THEY WERE OFF). I'll investigate the connections between the plant and control system some more in the morning (i.e. later today) (this is also probably what is causing the "lost connections" in c1sup/sus we can see in the machine status screen).
|
Problem with ITMX solved! The ITMX block in c1sup hadn't been tagged as "top_names". I rebuilt and installed the model, and there are no longer lost connections. :D |
7225
|
Sat Aug 18 17:09:01 2012 |
Sasha | Update | Simulations | C1LSP MEDM Screens Added | C1LSP has been added to the site map. I'll work on filling in the structure some more today and tomorrow (as well as putting up PDH and REFL/AS MEDM screens).
NOTE: Does anyone know how to access channels (or if they're even there) for straight Simulink inputs and outputs (i.e. I have some sort of input, do something to it in the simulink model, then get some output)? I've been trying to add ADC MEDM screens to c1lsp, but channels along the lines of C1LSP-ADC0_0_Analog_Input or C1LSP-ADC0_A0 don't seem to exist. |
7227
|
Sat Aug 18 19:40:47 2012 |
Sasha | Update | Simulations | C1LSP MEDM Screens Added |
Quote: |
C1LSP has been added to the site map. I'll work on filling in the structure some more today and tomorrow (as well as putting up PDH and REFL/AS MEDM screens).
NOTE: Does anyone know how to access channels (or if they're even there) for straight Simulink inputs and outputs (i.e. I have some sort of input, do something to it in the simulink model, then get some output)? I've been trying to add ADC MEDM screens to c1lsp, but channels along the lines of C1LSP-ADC0_0_Analog_Input or C1LSP-ADC0_A0 don't seem to exist.
|
NVM. Figured out that I can just look in dataviewer for the channels. It looks like there aren't any channels for ADC0...I'll try reinstalling the model and restarting the framebuilder. |
6195
|
Fri Jan 13 00:51:40 2012 |
Leo Singer | Update | Stewart platform | Frequency-dependent requirements for Stewart platform | Below are revised design parameters for the Stewart platform based on ground motion measurements.
The goal is that the actuators should be able to exceed ground motion by a healthy factor (say, two decades in amplitude) across a range from about 0.1 Hz to 500 Hz. I would like to stitch together data from at least two seismometers, an accelerometer, and (if one is available) a microphone, but since this week I was only able to retrieve data from one of the Guralps, I will use just that for now.
The spectra below, spanning GPS times 1010311450--1010321450, show the x, y, and z axes of one of the Guralps. Since the Guralp's sensitivity cuts off at 50 Hz or so, I assumed that the ground velocity continues to fall as f^-1, but eventually flattens at acoustic frequencies. The black line shows a very coarse, visual, piecewise linear fit to these spectra. The corner frequencies are at 0.1, 0.4, 10, 100, and 500 Hz. From 0.1 to 0.4 Hz, the dependence is f^-2, covering the upper edge of what I presume is the microseismic peak. From 0.4 to 10 Hz, the fit is flat at 2x10^-7 m/s/sqrt(Hz). Then, the fit is f^-1 up to 100 Hz. Finally, the fit remains flat out to 500 Hz.

Outside this band of interest, I chose the velocity requirement based on practical considerations. At high frequencies, the force requirement should go to zero, so the velocity requirement should go as f^-2 or faster at high frequencies. At low frequencies, the displacement requirement should be finite, so the velocity requirement should go as f or faster.
The figure below shows the velocity spectrum extended to DC and infinite frequency using these considerations, and the derived acceleration and displacement requirements.

As a starting point for the design of the platform and the selection of the actuators, let's assume a payload of ~12 kg. Let's multiply this by 1.5 as a guess for the additional mass of the top platform itself, to make 18 kg. For the acceleration, let's take the maximum value at any frequency for the acceleration requirement, ~6x10^-5 m/s^2, which occurs at 500 Hz. From my previous investigations, I know that for the optimal Stewart platform geometry the actuator force requirement is (2+sqrt(3))/(3 sqrt(2)) ~ 0.88 of the net force requirement. Finally, let's throw in a factor of 100 so that the platform beats ground motion by a factor of 100. Altogether, the actuator force requirement, which is always of the same order of magnitude as the net force requirement, is
(12)(1.5)(6x10^-5)(0.88)(100) ~ 10 mN.
Next, the torque requirement. According to <http://www.iris.edu/hq/instrumentation_meeting/files/pdfs/rotation_iris_igel.pdf>, for a plane shear wave traveling in a medium with phase velocity c, the acceleration a(x, t) is related to the angular rate W'(x, t) through
a(x, t) / W'(x, t) = -2 c.
This implies that |W''(f)| = |a(f)| pi f / c,
where W''(f) is the amplitude spectral density of the angular acceleration and a(f) that of the transverse linear acceleration. I assume that the medium is cement, which according to Wolfram Alpha has a shear modulus of mu = 2.2 GPa and about the density of water: rho ~ 1000 kg/m^3. The shear wave speed in concrete is then c = sqrt(mu / rho) ~ 1500 m/s.
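The quoted wave speed checks out numerically (values copied from the material constants above):

```python
# Shear-wave speed from the quoted material constants:
mu = 2.2e9     # Pa, shear modulus of cement (per Wolfram Alpha, as above)
rho = 1000.0   # kg/m^3, roughly the density of water
c = (mu / rho) ** 0.5
print(c)       # ~1483 m/s, i.e. the ~1500 m/s used in the torque estimate
```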
The maximum of the acceleration requirement graph is, again, 6x10^-5 m/s^2 at 500 Hz. According to Janeen's SolidWorks drawing, the largest principal moment of inertia of the SOS is about 0.26 kg m^2. Including the same fudge factor of (1.5)(100), the net torque requirement is
(0.26)(1.5)(6x10^-5)(100) pi (500) / (1500) N m ~ 2.5x10^-3 N m.
The quotient of the torque and force requirements is about 0.25 m, so, using some of my previous results, the dimensions of the platform should be as follows:
radius of top plate = 0.25 m,
radius of bottom plate = 2 * 0.25 m = 0.5 m, and
plate separation in home position = sqrt(3) * 0.25 m = 0.43 m.
One last thing: perhaps the static load should be taken up directly by the piezos. If this is the case, then we might rather take the force requirement as being
(10 m/s^2)(1.5)(12 kg) = 180 N.
An actuator that can exert a dynamic force of 180 N would easily meet the ground motion requirements by a huge margin. The dimensions of the platform could also be reduced. The alternative, I suppose, would be for each piezo to be mechanically in parallel with some sort of passive component to take up some of the static load. |
6196
|
Fri Jan 13 16:16:05 2012 |
Leo Singer | Update | Stewart platform | Flexure type for leg joints | I had been thinking of using this flexure for the bearings for the leg joints, but I finally realized that it was not the right type of bearing. The joints for the Stewart platform need to be free to both yaw and pitch, but this bearing actually constrains yaw (while leaving out-of-plane translation free). |
7695
|
Fri Nov 9 18:28:23 2012 |
Charles | Update | Summary Pages | Calendar | The calendar tab now displays calendars with weeks that run from Sunday to Saturday (as opposed to Monday to Sunday). However, the frame on the left hand side of the main page still has 'incorrect' calendars.
|
8003
|
Tue Feb 5 12:08:43 2013 |
Max Horton | Update | Summary Pages | Updating summary pages | Getting started: Worked on understanding the functionality of summary_pages.py. The problem with the code is that it was written as one 8000-line Python script, with sparse documentation. This makes it difficult to understand and tedious to edit, because it's hard to tell what the precise order of execution is without tracing through the code line by line. In other words, it's difficult to get an overview of what the code generally does without literally reading all of it. I commented several functions / added docstrings to improve clarity and start fixing this problem.
Crontab: I believe I may have discovered the cause of the 6PM stop on data processing. I am told that the script that runs summary_pages.py is called every 6 hours. I believe that at midnight, the script is processing the next day's data (which is essentially empty) and thus not updating the data from 6PM to midnight for any of the days.
Git: Finally, created git repository called public_html/__max_40m-summary_testing to use for testing the functionality of my changes to the code (without risking crashing the summary_pages). |
8055
|
Mon Feb 11 13:07:17 2013 |
Max Horton | Update | Summary Pages | Fixed A Calendar Bug | Understanding the Code: Documented more functions in summary_pages.py. Since it is very difficult and slow to understand what is going on, it might be best to just start trying to factor out the code into multiple files, and understand how the code works from there.
Crontab: Started learning how the program is called by cron / what cron is, so that I can fix the problem that forces data to only be displayed up until 6PM.
Calendars: One of the problems with the page is that the calendars on the left column didn't have any of the months of 2013 in them.
I identified the incorrect block of code as:
Original Code:
# loop over months
while t < e:
    if t.month < startday.month or t >= endday:
        ptable[t.year].append(str)
    else:
        ptable[t.year].append(calendar_link(t, firstweekday, tab=tab, run=run))
    # increment by month
    # Move forward day by day, until a new month is reached.
    m = t.month
    while t.month == m:
        t = t + d
    # Ensure that y still represents the current year.
    if t.year > y:
        y = t.year
        ptable[y] = []
The problem is that the months between the startday and endday aren't being treated properly.
Modified Code:
# loop over months
while t < e:
    if (t.month < startday.month and t.year <= startday.year) or t >= endday:
        ptable[t.year].append(str)
    else:
        ptable[t.year].append(calendar_link(t, firstweekday, tab=tab, run=run))
    # increment by month
    # Move forward day by day, until a new month is reached.
    m = t.month
    while t.month == m:
        t = t + d
    # Ensure that y still represents the current year.
    if t.year > y:
        y = t.year
        ptable[y] = []
After this change, the calendars display the year of 2013, as desired.
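The effect of the added year check can be seen in isolation (the start date below is hypothetical, chosen only so that a 2013 month has a smaller month number than the start month):

```python
import datetime

startday = datetime.date(2012, 11, 1)  # hypothetical start of the pages
t = datetime.date(2013, 1, 1)          # a 2013 month that used to be blanked

# Original predicate: January (1) < November (11), so months of 2013 were
# wrongly treated as falling before the start of the archive.
assert t.month < startday.month
# Fixed predicate: the year comparison rescues months of later years.
assert not (t.month < startday.month and t.year <= startday.year)
```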
|
8098
|
Mon Feb 18 11:54:15 2013 |
Max Horton | Update | Summary Pages | Timing Issues and Calendars | Crontab: The bug of data only plotting until 5PM is being investigated. The crontab's final call to the summary page generator was at 5PM. This means that the data plots were not being generated after 5PM, so clearly they never contained data from after 5PM. In fact, the original crontab reads:
0 11,5,11,17 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh 2>&1
I'm not exactly sure what inspired these entries. The 11,5,11,17 entries are supposed to be the hours at which the program is run. Why is it run twice at 11? I assume it was just a typo or something.
The final call time was changed to 11:59PM in an attempt to plot the entire day's data, but this method didn't appear to work because the program would still be running past midnight, which was apparently inhibiting its functionality (most likely, the day change was affecting how the data is fetched). The best solution is probably to just wait until the next day, then call the summary page generator on the previous day's data. This will be implemented soon.
Calendars: Although the calendar tabs on the left side of the page were fixed, the calendars displayed at: https://nodus.ligo.caltech.edu:30889/40m-summary/calendar/ appear to still have squished together text. The calendar is being fetched from https://nodus.ligo.caltech.edu:30889/40m-summary/calendar/calendar.html and displayed in the page. This error is peculiar because the URL from which the calendar is being fetched does NOT have squished together text, but the resulting calendar at 40m-summary/calendar/ will not display spaces between the text. This issue is still being investigated. |
8148
|
Sat Feb 23 16:16:11 2013 |
Max Horton | Update | Summary Pages | Multiprocessing | Calendars: The cause of the calendar issue discussed previously (http://nodus.ligo.caltech.edu:8080/40m/8098), where the numbers are squished together, is very difficult for me to find. I am not going to worry about it for the time being.
Multiprocessing: Reviewed the implementation of multiprocessing in Python (using the multiprocessing package). Wrote a simple test function and ran it on megatron to verify that multiprocessing could successfully take advantage of megatron's multiple cores - it could. Now, I will work on implementing multiprocessing in the program.
I began testing at a section in the program where a for loop calls process_data() (which has a long runtime) multiple times. The megatron terminals I had open began to run very slowly. Why? I believe that the process_data() function loads data into global variables to accomplish its task. The global variables in the original implementation were cleared before the subsequent calls to process_data(), but in the multiprocessing version the data is not cleared, meaning the memory fills quickly, which drastically reduces performance.
In the short term, I could try generating fewer processes at a time, waiting for them to finish, clearing the data, then generating more processes, etc. This will probably give a nominal performance boost. In the long term, restructuring the way the program handles data may help (but not for sure). In the coming week I will experiment with these techniques and try to decrease the run time of the program. |
8194
|
Wed Feb 27 22:46:53 2013 |
Max Horton | Update | Summary Pages | Multiprocessing Implementation | Overview: In order to make the code more maintainable, I need to factor it into different well-documented classes. To do this carefully and rigorously, I need to run tests every time I make changes to the code. The runtime of the code is currently quite high, so I will work on improving the runtime of the program before factoring it into classes. This will be more efficient (minimize testing time) and allow me to factor more quickly. So, my current goal is to improve runtime as much as possible.
Multiprocessing Implementation:
I invented a simple way to implement multiprocessing in the summary_pages.py file. Here is an example: in the code, there is a process_data() function, which is run 75 times and takes rather long to run. I created multiple processes to run these calls concurrently, as follows:
Original Code: (around line 7840)
for sec in datasections:
    for run in run_opts:
        run_opt = 'run_%s_time' % run
        if hasattr(tabs[sec], run_opt) and getattr(tabs[sec], run_opt):
            process_data(cp, ifo, start, end, tabs[sec],\
                         cache=datacache, segcache=segcache, run=run,\
                         veto_def_table=veto_table[run], plots=do['plots'],\
                         subplots=do['subplots'], html_only=do['html_only'])
        #
        # free data memory
        #
        keys = globvar.data.keys()
        for ch in keys:
            del globvar.data[ch]
The weakness in this code is that process_data() is called many times, and doesn't take advantage of megatron's multiple threads. I changed the code to:
Modified Code: (around line 7840)
import multiprocessing
if do['dataplot']:
    ... etc... (same as before)
    if hasattr(tabs[sec], run_opt) and getattr(tabs[sec], run_opt):
        # Create the process
        p = multiprocessing.Process(target=process_data, args=(cp, ifo, start, end, tabs[sec], datacache, segcache, run, veto_table[run], do['plots'], do['subplots'], do['html_only']))
        # Add the process to the list of processes
        plist += [p]
Then, I run the processes in groups of size "numconcur", as follows:
numconcur = 8
curlist = []
for i in range(len(plist)):
    curlist += [plist[i]]
    if (i % numconcur == (numconcur - 1)):
        for item in curlist:
            item.start()
        for item in curlist:
            item.join()
            item.terminate()
        keys = globvar.data.keys()
        for ch in keys:
            del globvar.data[ch]
        curlist = []
The value of numconcur (which defines how many threads megatron will use concurrently to run the program) greatly affects the speed of the program! With numconcur = 8, the program runs in ~45% of the time of the original code! This is the optimal value -- megatron has 8 threads. Several other values were tested - numconcur = 4 and numconcur = 6 had almost the same performance as numconcur = 8, but numconcur = 1 (which is essentially the same as the unmodified code) performed much worse.
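Stripped of the summary-page specifics, the batching pattern looks like this; work() is a stand-in for process_data(). (Note that batching by slicing also runs a final partial batch, which the modulo test in the loop quoted above skips whenever len(plist) isn't a multiple of numconcur.)

```python
import multiprocessing

def work(n):
    # stand-in for process_data(): any CPU-bound job
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    plist = [multiprocessing.Process(target=work, args=(10**4,))
             for _ in range(16)]
    numconcur = 8  # one process per hardware thread
    for i in range(0, len(plist), numconcur):
        batch = plist[i:i + numconcur]
        for p in batch:
            p.start()
        for p in batch:
            p.join()  # wait for the whole batch before launching the next
    assert all(p.exitcode == 0 for p in plist)
```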
Improvement Cap:
Why does numconcur = 4 have almost the same performance as numconcur = 8? I monitored the available memory of megatron, and it is quickly consumed during these runs. I believe that once 4 or more cores are being used, the fact that the data can't all fit in megatron's memory (which was entirely filled during these trials) counteracts the usefulness of additional threads.
Summary of Improvements:
Original Runtime of all process_data() statements: (approximate): 8400 sec
Runtime with 8 processes (approximate): 3842 sec
This is about a 55% improvement in speed, in this particular sector (not in the overall performance of the entire program). It saves about 4600 seconds (~1.3 hours) per run of the program. Note that these values are approximate: since other processes are running on megatron during my tests, they might be inflated or deflated by some margin of error.
Next Time:
This same optimization method will be applied to all repetitive processes with reasonably large runtimes. |
8195
|
Wed Feb 27 23:19:54 2013 |
rana | Update | Summary Pages | Multiprocessing Implementation | At first I thought that this was goofy, but then I logged in and saw that Megatron only has 8GB of RAM. I guess that used to be cool in the old days, but now is tiny (my laptop has 8 GB of RAM). I'll see if someone around has some free RAM for a 4600; in the meantime, I've killed a MEDM that was running on there and using up a few hundred MB.
Run your ssh-MEDMs elsewhere or else I'll make a cronjob to kill them periodically. |
8201
|
Thu Feb 28 14:19:20 2013 |
Max Horton | Update | Summary Pages | Multiprocessing Implementation | Okay, more memory would definitely be good. I don't think I have been using MEDM (which Jamie tells me is the controls interface) so making a cronjob would probably be a good idea. |
8218
|
Mon Mar 4 10:41:18 2013 |
Max Horton | Update | Summary Pages | Multiprocessing Implementation | Update:
Upon investigation, the other methods besides process_data() take almost no time at all to run, by comparison. The process_data() method takes roughly 2521 seconds to run using Multiprocessing with eight threads. After its execution, the rest of the program only takes 120 seconds to run. So, since I still need to restructure the code, I won't bother adding multiprocessing to these other methods yet, since it won't significantly improve speed (and any improvements might not be compatible with how I restructure the code). For now, the code is just about as efficient as it can be (in terms of multiprocessing). Further improvements may or may not be gained when the code is restructured. |
8227
|
Mon Mar 4 21:05:49 2013 |
Max Horton | Update | Summary Pages | Multiprocessing and Crontab | Multiprocessing: In its current form, the code uses multiprocessing to the maximal extent possible. It takes roughly 2600 seconds to run (times may vary depending on what else megatron is running, etc.). Multiprocessing is only used on the process_data() function calls, because this by far takes the longest. The other function calls after the process_data() calls take a combined ~120 seconds. See http://nodus.ligo.caltech.edu:8080/40m/8218 for details on the use of Multiprocessing to call process_data().
Crontab: I also updated the crontab in an attempt to fix the problem where data is only displayed until 5PM. Recall that previously (http://nodus.ligo.caltech.edu:8080/40m/8098) I found that the crontab wasn't even calling the summary_pages.py script after 5PM. I changed it then to be called at 11:59PM, which also didn't work because of the day change after midnight.
I decided it would be easiest to just call the function on the previous day's data at 12:01AM the next day. So, I changed the crontab.
Previous Crontab:
59 5,11,17,23 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh 2>&1
New Crontab:
0 6,12,18 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh 2>&1
1 0 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh $(date "+%Y/%m/%d" --date="1 days ago") 2>&1
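The date substitution in the second entry expands to the previous day in the YYYY/MM/DD form the script expects (GNU date syntax, as on megatron):

```shell
# Prints yesterday's date, e.g. 2013/03/03 when run just after midnight
# on March 4, 2013:
date "+%Y/%m/%d" --date="1 days ago"
```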
For some reason, as of 9:00PM today (March 4, 2013) I still don't see any data up, even though the change to the crontab was made on February 28. Even more bizarre is the fact that data is present for March 1-3. Perhaps some error was introduced into the code somehow, or I don't understand how crontab does its job. I will look into this now.
Next:
Once I fix the above problem, I will begin refactoring the code into different well-documented classes. |
8273
|
Mon Mar 11 22:28:30 2013 |
Max Horton | Update | Summary Pages | Fixing Plot Limits | Quick Note on Multiprocessing: The multiprocessing was plugged into the codebase on March 4. Since then, the various pages that appear when you click on certain tabs (such as the page found here: https://nodus.ligo.caltech.edu:30889/40m-summary/archive_daily/20130304/ifo/dc_mon/ from clicking the 'IFO' tab) don't display graphs. But, the graphs are being generated (if you click here or here, you will find the two graphs that are supposed to be displayed). So, for some reason, the multiprocessing is preventing these graphs from appearing, even though they are being generated. I rolled back the multiprocessing changes temporarily, so that the newly generated pages look correct until I find the cause of this.
Fixing Plot Limits: The plots generated by the summary_pages.py script have a few problems, one of which is: the graphs don't choose their boundaries in a very useful way. For example, in these pressure plots, the dropout 0 values 'ruin' the graph in the sense that they cause the plot to be scaled from 0 to 760, instead of a more useful range like 740 to 760 (which would allow us to see details better).
The call to the plotting functions begins in process_data() of summary_pages.py, around line 972, with a call to plot_data(). This function takes in a data list (which represents the x-y data values, as well as a few other fields such as axes labels). The easiest way to fix the plots would be to "cleanse" the data list before calling plot_data(). In doing so, we would remove dropout values and obtain a more meaningful plot.
To observe the data list that is passed to plot_data(), I added the following code:
# outfile is a string that represents the name of the .png file that will be generated by the code.
print_verbose("Saving data into a file.")
print_verbose(outfile)
outfile_mch = open(outfile + '.dat', 'w')
# at this point in process_data(), data is an array that should contain the desired data values.
if (data == []):
    print_verbose("Empty data!")
print >> outfile_mch, data
outfile_mch.close()
When I ran this in the code midday, it gave a human-readable array of values that appeared to match the plots of pressure (i.e. values between 740 and 760, with a few dropout 0 values). However, when I let the code run overnight, instead of observing a nice list in 'outfile.dat', I observed:
[('Pressure', array([ 1.04667840e+09, 1.04667846e+09, 1.04667852e+09, ...,
1.04674284e+09, 1.04674290e+09, 1.04674296e+09]), masked_array(data = [ 744.11076965 744.14254761 744.14889221 ..., 742.01931356 742.05930208
742.03433228],
mask = False,
fill_value = 1e+20)
)]
I.e. there was an ellipsis (...) instead of actual data, for some reason. Python does this when printing lists in a few specific situations, the most common of which is that the list is recursively defined. For example:
INPUT:
a = [5]
a.append(a)
print a
OUTPUT:
[5, [...]]
It doesn't seem possible that the definitions for the data array become recursive (especially since the test worked midday). Perhaps the list becomes too long, and Python doesn't want to print it all because of some setting.
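One plausible culprit for that setting: numpy abbreviates the printed form of arrays longer than its print threshold (1000 elements by default), which reproduces exactly the "..." seen in the file. A minimal sketch:

```python
import sys
import numpy as np

a = np.arange(10000)
summarized = repr(a)          # array([   0,    1,    2, ..., 9997, 9998, 9999])
assert '...' in summarized    # long arrays are elided by default

# Raising the threshold would force the full contents into the output:
np.set_printoptions(threshold=sys.maxsize)
assert '...' not in repr(a)
```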
Instead, I will use cPickle to save the data. The disadvantage is that the output is not human readable. But cPickle is very simple to use. I added the lines:
import cPickle
cPickle.dump(data, open(outfile + 'pickle.dat', 'w'))
This should save the 'data' array into a file, from which it can be later retrieved by cPickle.load().
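A round-trip sketch (cPickle is Python 2's name for what Python 3 calls pickle; the data value here is a mocked-up stand-in for the real channel data, and the file should really be opened in binary mode):

```python
import os
import pickle     # cPickle in Python 2
import tempfile

data = [('Pressure', [744.11, 744.14, 744.15])]   # stand-in channel data
path = os.path.join(tempfile.mkdtemp(), 'outfilepickle.dat')
with open(path, 'wb') as f:       # binary mode, unlike the 'w' above
    pickle.dump(data, f)
with open(path, 'rb') as f:
    restored = pickle.load(f)
assert restored == data
```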
Next:
There are other modules I can use that will produce human-readable output, but I'll stick with cPickle for now since it's well supported. Once I verify this works, I will be able to do two things:
1) Cut out the dropout data values to make better plots.
2) When the process_data() function is run in its current form, it reprocesses all the data every time. Instead, I will be able to draw the existing data out of the cPickle file I create. So, I can load the existing data, and only add new values. This will help the program run faster. |
8286
|
Wed Mar 13 15:30:37 2013 |
Max Horton | Update | Summary Pages | Fixing Plot Limits | Jamie has informed me of numpy's numpy.savetxt() method, which is exactly what I want for this situation (human-readable text storage of an array). So, I will now be using:
# outfile is the name of the .png graph. data is the array with our desired data.
numpy.savetxt(outfile + '.dat', data)
to save the data. I can later retrieve it with numpy.loadtxt() |
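A quick round trip with a stand-in two-column (GPS time, pressure) array; note that savetxt only accepts plain numeric arrays, so the numeric columns of the real data list would need extracting first:

```python
import os
import tempfile
import numpy as np

data = np.array([[1.04667840e9, 744.11],
                 [1.04667846e9, 744.14]])          # stand-in values
path = os.path.join(tempfile.mkdtemp(), 'graph.dat')
np.savetxt(path, data)       # human-readable: one whitespace-separated row per line
restored = np.loadtxt(path)
assert np.allclose(restored, data)
```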
8414
|
Thu Apr 4 13:39:12 2013 |
Max Horton | Update | Summary Pages | Graph Limits | Graph Limits: The limits on graphs have been problematic. They often reflect too large of a range of values, usually because of dropouts in data collection. Thus, they do not provide useful information because the important information is washed out by the large limits on the graph. For example, the graph below shows data over an unnecessarily large range, because of the dropout in the 300-1000Hz pressure values.

The limits on the graphs can be modified using the config file found in /40m-summary/share/c1_summary_page.ini. At the entry for the appropriate graph, change the amplitude-lim=y1,y2 line by setting y1 to the desired lower limit and y2 to the desired upper limit. For example, I changed the amplitude limits on the above graph to amplitude-lim=.001,1, and achieved the following graph.

The limits could be tightened further to improve clarity - this is easily done by modifying the config file. I modified the config file for all the 2D plots to improve the bounds. However, on some plots, I wasn't sure what bounds were appropriate or what range of values we were interested in, so I will have to ask someone to find out.
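For reference, a sketch of how such a limit entry parses; the section and option names are illustrative, and Python 3's configparser stands in for the ConfigParser module a Python 2 script would use:

```python
import configparser

# Hypothetical fragment in the style of c1_summary_page.ini
ini = """\
[plot-pressure]
amplitude-lim = .001,1
"""
cp = configparser.ConfigParser()
cp.read_string(ini)
lo, hi = (float(v) for v in cp.get('plot-pressure', 'amplitude-lim').split(','))
assert (lo, hi) == (0.001, 1.0)
```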
Next: I now want to fix all the funny little problems with the site, such as scroll bars appearing where they should not appear, and graphs only plotting until 6PM. In order to do this most effectively, I need to restructure the code and factor it into several files. Otherwise, the code will not only be much harder to edit, but will become more and more confusing as I add on to it, compounding the problems that we currently have (i.e. that this code isn't very well documented and nobody knows how it works). We need lots of specific documentation on what exactly is happening before too many changes are made. Take the config files, for example. Someone put a lot of work into them, but we need a README specifying which options are supported for which types of graphs, etc. So we are slowed down because I have to figure out what is going on before I make small changes.
To fix this, I will divide the code into three main sectors. The division of labor will be:
- Sector 1: Figure out what the user wants (i.e. read config files, create a ConfigParser, etc...)
- Sector 2: Process the data and generate the plots based on what the user wants
- Sector 3: Generate the HTML |
8476
|
Tue Apr 23 15:02:19 2013 |
Max Horton | Update | Summary Pages | Importing New Code | Duncan Macleod (original author of summary pages) has an updated version that I would like to import and work on. The code and installation instructions are found below.
I am not sure where we want to host this. I could put it in a new folder in /users/public_html/ on megatron, for example. Duncan appears to have just included the summary page code in the pylal repository. Should I reimport the whole repository? I'm not sure if this will mess up other things on megatron that use pylal. I am working on talking to Rana and Jamie to see what is best.
http://www.lsc-group.phys.uwm.edu/cgit/lalsuite/tree/pylal/bin/pylal_summary_page
https://www.lsc-group.phys.uwm.edu/daswg/docs/howto/lal-install.html
|
8496
|
Fri Apr 26 15:50:48 2013 |
Max Horton | Update | Summary Pages | Importing New Code |
I am following the instructions here:
https://www.lsc-group.phys.uwm.edu/daswg/docs/howto/lal-install.html#test
But there was an error when I ran the ./00boot command near the beginning. I have asked Duncan Macleod about this and am waiting to hear back.
For now, I am putting things into /home/controls on allegra. My understanding is that this is not shared, so I don't have a chance of messing up anyone else's work. I have been moving slow and being extra cautious about what I do because I don't want to accidentally nuke anything. |
8504
|
Mon Apr 29 15:35:31 2013 |
Max Horton | Update | Summary Pages | Importing New Code |
I installed the new version of LAL on allegra. I don't think it has interfered with the existing version, but if anyone has problems, let me know. The old version on allegra was 6.9.1, but the new code uses 6.10.0.1. To use it, add . /opt/lscsoft/lal/etc/lal-user-env.sh to the end of the .bashrc file (this is the simplest way, since it will automatically pull in the new version).
I am having a little trouble getting some other unmet dependencies for the summary pages such as the new lalframe, etc. But I am working on it.
Once I get it working on allegra and know that I can get it without messing up current versions of lal, I will do this again on megatron so I can test and edit the new version of the summary pages. |
8523
|
Thu May 2 14:14:10 2013 |
Max Horton | Update | Summary Pages | Importing New Code |
LALFrame was successfully installed. Allegra had unmet dependencies in some of the library tools. I tried to install LALMetaIO, but there were unmet dependencies on other LSC software; even after updating the LSC software, the problem persisted. I will try some more, and ask Duncan if I'm not successful.
Installing these packages is rather time consuming; it would be nice if there were a way to do it all at once. |
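Doing it all at once could look like the following hypothetical helper. The package list and order are my assumption based on the entries above (lal first, then lalframe and lalmetaio), and the per-package prefix is illustrative:

```shell
# Hypothetical batch build: each package in dependency order, stop on the
# first failure so a broken dependency is not silently skipped.
build_all() {
    for pkg in lal lalframe lalmetaio; do
        ( cd "$pkg" &&
          ./00boot &&
          ./configure --prefix=/opt/lscsoft/"$pkg" &&
          make &&
          make install
        ) || { echo "build failed on $pkg" >&2; return 1; }
    done
}
```

Each package builds in a subshell, so a failed `cd` or build cannot leave later iterations in the wrong directory.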
8536
|
Tue May 7 15:09:38 2013 |
Max Horton | Update | Summary Pages | Importing New Code |
I am now working on megatron, installing in /home/controls/lal. I am having some unmet dependency issues that I have asked Duncan about. |
8572
|
Tue May 14 16:14:47 2013 |
Max Horton | Update | Summary Pages | Importing New Code | I have figured out all the issues, and successfully installed the new versions of the LAL software. I am now going to get the summary pages set up using the new code. |
11411
|
Tue Jul 14 16:47:18 2015 |
Eve | Update | Summary Pages | Summary page updates continue during upgrade | I've continued to make changes to the summary pages on my own environment, which I plan on implementing on the main summary pages when they are back online.
Motivation:
I created my own summary page environment and manipulated data from June 30 to make additional plots and change already existing plots. The main summary pages (https://nodus.ligo.caltech.edu:30889/detcharsummary/ or https://ldas-jobs.ligo.caltech.edu/~max.isi/summary/) are currently down due to the CDS upgrade, so my own summary page environment acts as a temporary playground to continue working on my SURF project. My summary pages can be found here (https://ldas-jobs.ligo.caltech.edu/~eve.chase/summary/day/20150630/); they contain identical plots to the main summary pages, except for the Summary tab. I'm open to suggestions, so I can make the summary pages as useful as possible.
What I did:
- SUS OpLev: For every existing optical lever timeseries, I created a corresponding spectrum showing all channels present in the original timeseries. The spectra are now placed to the right of their corresponding timeseries. I'm still playing with the axes to make sure I set the best ranges.
- SUSdrift: I added two new timeseries, DRMI SUS Pitch and DRMI SUS Yaw, to the four already-existing timeseries in this tab. These plots represent channels not previously displayed on the summary pages.
- Minor changes
- Added axis labels on IOO plot 6
- Changed axis ranges of IOO: MC2 Trans QPD and IOO: IMC REFL RFPD DC
- Changed axis label on PSL plot 6
Results:
So far, all of these changes have been properly implemented into my personal summary page environment. I would like some feedback as to how I can improve the summary pages.
|
|