ID | Date | Author | Type | Category | Subject
7227 | Sat Aug 18 19:40:47 2012 | Sasha | Update | Simulations | C1LSP MEDM Screens Added
Quote:
C1LSP has been added to the site map. I'll work on filling in the structure some more today and tomorrow (as well as putting up PDH and REFL/AS MEDM screens).
NOTE: Does anyone know how to access channels (or if they're even there) for straight Simulink inputs and outputs (i.e. I have some sort of input, do something to it in the simulink model, then get some output)? I've been trying to add ADC MEDM screens to c1lsp, but channels along the lines of C1LSP-ADC0_0_Analog_Input or C1LSP-ADC0_A0 don't seem to exist.
NVM. Figured out that I can just look in dataviewer for the channels. It looks like there aren't any channels for ADC0... I'll try reinstalling the model and restarting the framebuilder.
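For reference, one way to check which channels the framebuilder actually knows about is to query it with the nds2 Python client. A minimal sketch, assuming the client is installed and that the host name, port, and channel glob below are right for this setup (they are assumptions, not from the original entry):

import nds2

# Assumed framebuilder host/port; adjust to the local setup.
conn = nds2.connection('fb', 8088)
for chan in conn.find_channels('C1:LSP-ADC0*'):
    print(chan.name)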
6195 | Fri Jan 13 00:51:40 2012 | Leo Singer | Update | Stewart platform | Frequency-dependent requirements for Stewart platform
Below are revised design parameters for the Stewart platform based on ground motion measurements.
The goal is that the actuators should be able to exceed ground motion by a healthy factor (say, two decades in amplitude) across a range from about 0.1 Hz to 500 Hz. I would like to stitch together data from at least two seismometers, an accelerometer, and (if one is available) a microphone, but since this week I was only able to retrieve data from one of the Guralps, I will use just that for now.
The spectra below, spanning GPS times 1010311450--1010321450, show the x, y, and z axes of one of the Guralps. Since the Guralp's sensitivity cuts off at 50 Hz or so, I assumed that the ground velocity continues to fall as f⁻¹ but eventually flattens at acoustic frequencies. The black line shows a very coarse, visual, piecewise-linear fit to these spectra. The corner frequencies are at 0.1, 0.4, 10, 100, and 500 Hz. From 0.1 to 0.4 Hz, the dependence is f⁻², covering the upper edge of what I presume is the microseismic peak. From 0.4 to 10 Hz, the fit is flat at 2×10⁻⁷ m/s/√Hz. Then the fit goes as f⁻¹ up to 100 Hz. Finally, the fit remains flat out to 500 Hz.

Outside this band of interest, I chose the velocity requirement based on practical considerations. At high frequencies, the force requirement should go to zero, so the velocity requirement should fall as f⁻² or faster. At low frequencies, the displacement requirement should be finite, so the velocity requirement should go as f or faster.
The figure below shows the velocity spectrum extended to DC and infinite frequency using these considerations, and the derived acceleration and displacement requirements.

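As a sketch of how this stitched velocity spec and the derived curves can be written down (Python with numpy assumed; the corner frequencies and levels are the ones from the fit above):

import numpy as np

f = np.logspace(-2, 3, 1000)   # Hz

# Piecewise fit to the ground velocity ASD (m/s/rtHz): f^-2 below 0.4 Hz,
# flat at 2e-7 from 0.4 to 10 Hz, f^-1 from 10 to 100 Hz, flat out to 500 Hz.
v = np.piecewise(
    f,
    [f < 0.4, (f >= 0.4) & (f < 10), (f >= 10) & (f < 100), f >= 100],
    [lambda f: 2e-7 * (f / 0.4) ** -2,
     2e-7,
     lambda f: 2e-7 * (10.0 / f),
     2e-8])

a = 2 * np.pi * f * v      # acceleration ASD, m/s^2/rtHz
x = v / (2 * np.pi * f)    # displacement ASD, m/rtHz
band = (f >= 0.1) & (f <= 500)
print('peak acceleration in band: %.1e m/s^2' % a[band].max())  # ~6e-5 at 500 Hz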
As a starting point for the design of the platform and the selection of the actuators, let's assume a payload of ~12 kg. Let's multiply this by 1.5 as a guess for the additional mass of the top platform itself, making 18 kg. For the acceleration, let's take the maximum value of the acceleration requirement at any frequency, ~6×10⁻⁵ m/s², which occurs at 500 Hz. From my previous investigations, I know that for the optimal Stewart platform geometry the actuator force requirement is (2+sqrt(3))/(3 sqrt(2)) ~ 0.88 of the net force requirement. Finally, let's throw in a factor of 100 so that the platform beats ground motion by a factor of 100. Altogether, the actuator force requirement, which is always of the same order of magnitude as the net force requirement, is
(12)(1.5)(6×10⁻⁵)(0.88)(100) ~ 10 mN.
Next, the torque requirement. According to <http://www.iris.edu/hq/instrumentation_meeting/files/pdfs/rotation_iris_igel.pdf>, for a plane shear wave traveling in a medium with phase velocity c, the acceleration a(x, t) is related to the angular rate W'(x, t) through
a(x, t) / W'(x, t) = -2 c.
This implies that |W''(f)| = |a(f)| pi f / c,
where W''(f) is the amplitude spectral density of the angular acceleration and a(f) that of the transverse linear acceleration. I assume that the medium is concrete, which according to Wolfram Alpha has a shear modulus of mu = 2.2 GPa and about the density of water: rho ~ 1000 kg/m³. The shear wave speed in concrete is then c = sqrt(mu / rho) ~ 1500 m/s.
The maximum of the acceleration requirement graph is, again, 6×10⁻⁵ m/s² at 500 Hz. According to Janeen's SolidWorks drawing, the largest principal moment of inertia of the SOS is about 0.26 kg·m². Including the same fudge factor of (1.5)(100), the net torque requirement is
(0.26)(1.5)(6×10⁻⁵)(100) π (500) / (1500) N·m ~ 2.5×10⁻³ N·m.
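A quick numerical cross-check of this estimate (a Python sketch; all inputs are the values quoted above):

import numpy as np

mu, rho = 2.2e9, 1000.0        # assumed shear modulus (Pa) and density (kg/m^3)
c = np.sqrt(mu / rho)          # shear wave speed, ~1500 m/s
a_max, f = 6e-5, 500.0         # peak acceleration requirement (m/s^2) at 500 Hz
I, fudge = 0.26, 1.5 * 100     # SOS moment of inertia (kg m^2) and margin factors
torque = I * fudge * a_max * np.pi * f / c
print('c = %.0f m/s, torque requirement = %.1e N m' % (c, torque))  # ~2.5e-3 N m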
The quotient of the torque and force requirements is about 0.25 m, so, using some of my previous results, the dimensions of the platform should be as follows:
radius of top plate = 0.25 m,
radius of bottom plate = 2 * 0.25 m = 0.5 m, and
plate separation in home position = sqrt(3) * 0.25 m = 0.43 m.
One last thing: perhaps the static load should be taken up directly by the piezos. If this is the case, then we might rather take the force requirement as being
(10 m/s²)(1.5)(12 kg) = 180 N.
An actuator that can exert a dynamic force of 180 N would easily meet the ground motion requirements by a huge margin. The dimensions of the platform could also be reduced. The alternative, I suppose, would be for each piezo to be mechanically in parallel with some sort of passive component to take up some of the static load.
6196 | Fri Jan 13 16:16:05 2012 | Leo Singer | Update | Stewart platform | Flexure type for leg joints
I had been thinking of using this flexure for the bearings for the leg joints, but I finally realized that it was not the right type of bearing. The joints for the Stewart platform need to be free to both yaw and pitch, but this bearing actually constrains yaw (while leaving out-of-plane translation free).
7695 | Fri Nov 9 18:28:23 2012 | Charles | Update | Summary Pages | Calendar
The calendar tab now displays calendars with weeks that run from Sunday to Saturday (as opposed to Monday to Sunday). However, the frame on the left-hand side of the main page still has 'incorrect' calendars.
8003 | Tue Feb 5 12:08:43 2013 | Max Horton | Update | Summary Pages | Updating summary pages
Getting started: Worked on understanding the functionality of summary_pages.py. The problem with the code is that it was written as a single 8000-line Python script with sparse documentation. This makes it difficult to understand and tedious to edit, because it's hard to tell what the precise order of execution is without tracing through the code line by line. In other words, it's difficult to get an overview of what the code generally does without literally reading all of it. I commented several functions and added docstrings to improve clarity and start fixing this problem.
Crontab: I believe I may have discovered the cause of the 6PM stop on data processing. I am told that the script that runs summary_pages.py is called every 6 hours. I believe that at midnight, the script is processing the next day's data (which is essentially empty) and thus not updating the data from 6PM to midnight for any of the days.
Git: Finally, created a git repository called public_html/__max_40m-summary_testing to use for testing the functionality of my changes to the code (without risking crashing the summary pages).
8055 | Mon Feb 11 13:07:17 2013 | Max Horton | Update | Summary Pages | Fixed A Calendar Bug
Understanding the Code: Documented more functions in summary_pages.py. Since it is very difficult and slow to understand what is going on, it might be best to just start factoring the code into multiple files, and understand how the code works from there.
Crontab: Started learning how the program is called by cron / what cron is, so that I can fix the problem that forces data to only be displayed up until 6PM.
Calendars: One of the problems with the page is that the calendars on the left column didn't have any of the months of 2013 in them.
I identified the incorrect block of code as:
Original Code:
# loop over months
while t < e:
    if t.month < startday.month or t >= endday:
        ptable[t.year].append(str)
    else:
        ptable[t.year].append(calendar_link(t, firstweekday, tab=tab, run=run))
    # increment by month:
    # move forward day by day until a new month is reached
    m = t.month
    while t.month == m:
        t = t + d
    # ensure that y still represents the current year
    if t.year > y:
        y = t.year
        ptable[y] = []
The problem is that the months between the startday and endday aren't being treated properly.
Modified Code:
# loop over months
while t < e:
    if (t.month < startday.month and t.year <= startday.year) or t >= endday:
        ptable[t.year].append(str)
    else:
        ptable[t.year].append(calendar_link(t, firstweekday, tab=tab, run=run))
    # increment by month:
    # move forward day by day until a new month is reached
    m = t.month
    while t.month == m:
        t = t + d
    # ensure that y still represents the current year
    if t.year > y:
        y = t.year
        ptable[y] = []
After this change, the calendars display the year of 2013, as desired.
8098 | Mon Feb 18 11:54:15 2013 | Max Horton | Update | Summary Pages | Timing Issues and Calendars
Crontab: The bug of data only being plotted until 5PM is being investigated. The crontab's final call to the summary page generator was at 5PM, so the plots were never regenerated after 5PM and thus never contained data from after 5PM. In fact, the original crontab reads:
0 11,5,11,17 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh 2>&1
I'm not exactly sure what inspired these entries. The 11,5,11,17 entries are supposed to be the hours at which the program is run. Why is it run twice at 11? I assume it was just a typo or something.
The final call time was changed to 11:59PM in an attempt to plot the entire day's data, but this method didn't appear to work because the program would still be running past midnight, which was apparently inhibiting its functionality (most likely, the day change was affecting how the data is fetched). The best solution is probably to just wait until the next day, then call the summary page generator on the previous day's data. This will be implemented soon.
Calendars: Although the calendar tabs on the left side of the page were fixed, the calendars displayed at https://nodus.ligo.caltech.edu:30889/40m-summary/calendar/ appear to still have squished-together text. The calendar is being fetched from https://nodus.ligo.caltech.edu:30889/40m-summary/calendar/calendar.html and displayed in the page. This error is peculiar because the URL from which the calendar is being fetched does NOT have squished-together text, but the resulting calendar at 40m-summary/calendar/ will not display spaces between the text. This issue is still being investigated.
8148 | Sat Feb 23 16:16:11 2013 | Max Horton | Update | Summary Pages | Multiprocessing
Calendars: The cause of the calendar issue discussed previously (http://nodus.ligo.caltech.edu:8080/40m/8098), where the numbers are squished together, is very difficult for me to find. I am not going to worry about it for the time being.
Multiprocessing: Reviewed the implementation of multiprocessing in Python (using the multiprocessing package). Wrote a simple test function and ran it on megatron to verify that multiprocessing could successfully take advantage of megatron's multiple cores; it could. Now I will work on implementing multiprocessing in the program. I began testing at a section in the program where a for loop calls process_data() (which has a long runtime) multiple times. The megatron terminals I had open began to run very slowly. Why? I believe that process_data() loads data into global variables to accomplish its task. In the original implementation, the global variables were cleared before each subsequent call to process_data(), but in the multiprocessing version the data is not cleared, so memory fills quickly, which drastically reduces performance. In the short term, I could generate fewer processes at a time: wait for them to finish, clear the data, then generate more processes, and so on. This will probably give a modest performance boost. In the long term, restructuring the way the program handles data may help (but not for sure). In the coming week I will experiment with these techniques and try to decrease the run time of the program.
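For reference, a minimal sketch of the kind of core-count test described here (the 5-second busy loop and pool size are arbitrary choices, not the actual test code):

import multiprocessing
import os
import time

def burn(n):
    # Spin the CPU for a few seconds so each worker pins one core.
    t0 = time.time()
    while time.time() - t0 < 5:
        n = (n * n + 1) % 1000003
    return os.getpid()

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=8)
    # With 8 independent cores this finishes in ~5 s instead of ~40 s.
    print(pool.map(burn, range(8)))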
8194 | Wed Feb 27 22:46:53 2013 | Max Horton | Update | Summary Pages | Multiprocessing Implementation
Overview: In order to make the code more maintainable, I need to factor it into different well-documented classes. To do this carefully and rigorously, I need to run tests every time I change the code. The runtime of the code is currently quite high, so I will work on improving it before factoring the code into classes. This will be more efficient (it minimizes testing time) and will let me refactor more quickly. So, my current goal is to improve runtime as much as possible.
Multiprocessing Implementation:
I found a simple way to implement multiprocessing in the summary_pages.py file. Here is an example: in the code, there is a process_data() function, which is called 75 times and takes rather a long time to run. I created multiple processes to run these calls concurrently, as follows:
Original Code: (around line 7840)
for sec in datasections:
    for run in run_opts:
        run_opt = 'run_%s_time' % run
        if hasattr(tabs[sec], run_opt) and getattr(tabs[sec], run_opt):
            process_data(cp, ifo, start, end, tabs[sec],
                         cache=datacache, segcache=segcache, run=run,
                         veto_def_table=veto_table[run], plots=do['plots'],
                         subplots=do['subplots'], html_only=do['html_only'])
            #
            # free data memory
            #
            keys = globvar.data.keys()
            for ch in keys:
                del globvar.data[ch]
The weakness in this code is that process_data() is called many times, and doesn't take advantage of megatron's multiple threads. I changed the code to:
Modified Code: (around line 7840)
import multiprocessing
plist = []  # list of pending processes, one per process_data() call
if do['dataplot']:
    ... etc... (same as before)
        if hasattr(tabs[sec], run_opt) and getattr(tabs[sec], run_opt):
            # Create the process
            p = multiprocessing.Process(target=process_data,
                    args=(cp, ifo, start, end, tabs[sec], datacache, segcache,
                          run, veto_table[run], do['plots'], do['subplots'],
                          do['html_only']))
            # Add the process to the list of processes
            plist += [p]
Then, I run the process in groups of size "numconcur", as follows:
numconcur = 8
curlist = []
for i in range(len(plist)):
    curlist += [plist[i]]
    if (i % numconcur == (numconcur - 1)):
        for item in curlist:
            item.start()
        for item in curlist:
            item.join()
            item.terminate()
        keys = globvar.data.keys()
        for ch in keys:
            del globvar.data[ch]
        curlist = []
The value of numconcur (which defines how many threads megatron will use concurrently to run the program) greatly affects the speed of the program! With numconcur = 8, the program runs in ~45% of the time of the original code! This is the optimal value, since megatron has 8 threads. Several other values were tested: numconcur = 4 and numconcur = 6 had almost the same performance as numconcur = 8, but numconcur = 1 (which is essentially the same as the unmodified code) had much worse performance.
Improvement Cap:
Why does numconcur = 4 have almost the same performance as numconcur = 8? I monitored the available memory of megatron, and it is quickly consumed during these runs. I believe that once 4 or more cores are being used, the fact that the data can't all fit in megatron's memory (which was entirely filled during these trials) counteracts the usefulness of additional threads.
Summary of Improvements:
Original Runtime of all process_data() statements: (approximate): 8400 sec
Runtime with 8 processes (approximate): 3842 sec
This is about a 55% improvement in speed for this particular sector (not the overall performance of the entire program). It saves about 4600 seconds (~1.3 hours) per run of the program. Note that these values are approximate (since other processes are running on megatron during my tests, they might be inflated or deflated by some margin of error).
Next Time:
This same optimization method will be applied to all repetitive processes with reasonably large runtimes.
8195 | Wed Feb 27 23:19:54 2013 | rana | Update | Summary Pages | Multiprocessing Implementation
At first I thought that this was goofy, but then I logged in and saw that megatron only has 8 GB of RAM. I guess that used to be cool in the old days, but now it's tiny (my laptop has 8 GB of RAM). I'll see if someone around has some free RAM for a 4600; in the meantime, I've killed an MEDM that was running on there and using up a few hundred MB.
Run your ssh-MEDMs elsewhere or else I'll make a cronjob to kill them periodically.
8201 | Thu Feb 28 14:19:20 2013 | Max Horton | Update | Summary Pages | Multiprocessing Implementation
Okay, more memory would definitely be good. I don't think I have been using MEDM (which Jamie tells me is the controls interface), so making a cronjob would probably be a good idea.
8218 | Mon Mar 4 10:41:18 2013 | Max Horton | Update | Summary Pages | Multiprocessing Implementation
Update:
Upon investigation, the other methods besides process_data() take almost no time at all to run by comparison. The process_data() method takes roughly 2521 seconds to run using multiprocessing with eight threads. After it executes, the rest of the program takes only 120 seconds to run. So, since I still need to restructure the code, I won't bother adding multiprocessing to these other methods yet, since it won't significantly improve speed (and any improvements might not be compatible with how I restructure the code). For now, the code is just about as efficient as it can be (in terms of multiprocessing). Further improvements may or may not be gained when the code is restructured.
8227 | Mon Mar 4 21:05:49 2013 | Max Horton | Update | Summary Pages | Multiprocessing and Crontab
Multiprocessing: In its current form, the code uses multiprocessing to the maximal extent possible. It takes roughly 2600 seconds to run (times may vary depending on what else megatron is running, etc.). Multiprocessing is only used on the process_data() function calls, because these take by far the longest; the other function calls after process_data() take a combined ~120 seconds. See http://nodus.ligo.caltech.edu:8080/40m/8218 for details on the use of multiprocessing to call process_data().
Crontab: I also updated the crontab in an attempt to fix the problem where data is only displayed until 5PM. Recall that previously (http://nodus.ligo.caltech.edu:8080/40m/8098) I found that the crontab wasn't even calling the summary_pages.py script after 5PM. I changed it then to be called at 11:59PM, which also didn't work because of the day change after midnight.
I decided it would be easiest to just call the function on the previous day's data at 12:01AM the next day. So, I changed the crontab.
Previous Crontab:
59 5,11,17,23 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh 2>&1
New Crontab:
0 6,12,18 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh 2>&1
1 0 * * * /users/public_html/40m-summary/bin/c1_summary_page.sh $(date "+%Y/%m/%d" --date="1 days ago") 2>&1
For some reason, as of 9:00PM today (March 4, 2013) I still don't see any data up, even though the change to the crontab was made on February 28. Even more bizarre is the fact that data is present for March 1-3. Perhaps some error was introduced into the code somehow, or I don't understand how crontab does its job. I will look into this now.
Next:
Once I fix the above problem, I will begin refactoring the code into different well-documented classes.
8273 | Mon Mar 11 22:28:30 2013 | Max Horton | Update | Summary Pages | Fixing Plot Limits
Quick Note on Multiprocessing: The multiprocessing was plugged into the codebase on March 4. Since then, the various pages that appear when you click on certain tabs (such as the page found here: https://nodus.ligo.caltech.edu:30889/40m-summary/archive_daily/20130304/ifo/dc_mon/ from clicking the 'IFO' tab) don't display graphs. But the graphs are being generated (if you click here or here, you will find the two graphs that are supposed to be displayed). So, for some reason, the multiprocessing is preventing these graphs from appearing, even though they are being generated. I rolled back the multiprocessing changes temporarily, so that the newly generated pages look correct until I find the cause of this.
Fixing Plot Limits: The plots generated by the summary_pages.py script have a few problems, one of which is: the graphs don't choose their boundaries in a very useful way. For example, in these pressure plots, the dropout 0 values 'ruin' the graph in the sense that they cause the plot to be scaled from 0 to 760, instead of a more useful range like 740 to 760 (which would allow us to see details better).
The call to the plotting functions begins in process_data() of summary_pages.py, around line 972, with a call to plot_data(). This function takes in a data list (which represents the x-y data values, as well as a few other fields such as axes labels). The easiest way to fix the plots would be to "cleanse" the data list before calling plot_data(). In doing so, we would remove dropout values and obtain a more meaningful plot.
To observe the data list that is passed to plot_data(), I added the following code:
# outfile is a string that represents the name of the .png file that will be generated by the code.
print_verbose("Saving data into a file.")
print_verbose(outfile)
outfile_mch = open(outfile + '.dat', 'w')
# at this point in process_data(), data is an array that should contain the desired data values.
if (data == []):
    print_verbose("Empty data!")
print >> outfile_mch, data
outfile_mch.close()
When I ran this in the code midday, it gave a human-readable array of values that appeared to match the plots of pressure (i.e. values between 740 and 760, with a few dropout 0 values). However, when I let the code run overnight, instead of observing a nice list in 'outfile.dat', I observed:
[('Pressure', array([ 1.04667840e+09, 1.04667846e+09, 1.04667852e+09, ...,
1.04674284e+09, 1.04674290e+09, 1.04674296e+09]), masked_array(data = [ 744.11076965 744.14254761 744.14889221 ..., 742.01931356 742.05930208
742.03433228],
mask = False,
fill_value = 1e+20)
)]
I.e., there was an ellipsis (...) instead of actual data, for some reason. Python does this when printing lists in a few specific situations, the most common of which is that the list is recursively defined. For example:
INPUT:
a = [5]
a.append(a)
print a
OUTPUT:
[5, [...]]
It doesn't seem possible that the definitions for the data array have become recursive (especially since the test worked midday). Perhaps the list becomes too long, and Python doesn't want to print it all because of some setting.
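For what it's worth, the setting in question is numpy's, not Python's: numpy summarizes any array longer than a threshold (1000 elements by default) with an ellipsis when converting it to a string, which is exactly what the masked arrays inside the data list would do. A quick demonstration:

import numpy as np

a = np.arange(2000)
print(a)                                # [   0    1    2 ... 1997 1998 1999]
np.set_printoptions(threshold=1000000)  # raise the summarization threshold
print(a)                                # now prints every element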
Instead, I will use cPickle to save the data. The disadvantage is that the output is not human readable. But cPickle is very simple to use. I added the lines:
import cPickle
cPickle.dump(data, open(outfile + 'pickle.dat', 'w'))
This should save the 'data' array into a file, from which it can be later retrieved by cPickle.load().
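For completeness, reading the array back later would look like this (Python 2, to match cPickle; the file name stands in for the one built above):

import cPickle

outfile = 'some_plot'  # stands in for the outfile variable used above
with open(outfile + 'pickle.dat') as f:
    data = cPickle.load(f)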
Next:
There are other modules I can use that will produce human-readable output, but I'll stick with cPickle for now since it's well supported. Once I verify this works, I will be able to do two things:
1) Cut out the dropout data values to make better plots.
2) When the process_data() function is run in its current form, it reprocesses all the data every time. Instead, I will be able to draw the existing data out of the cPickle file I create. So, I can load the existing data and only add new values. This will help the program run faster.
8286 | Wed Mar 13 15:30:37 2013 | Max Horton | Update | Summary Pages | Fixing Plot Limits
Jamie has informed me of numpy's numpy.savetxt() method, which is exactly what I want for this situation (human-readable text storage of an array). So, I will now be using:
# outfile is the name of the .png graph. data is the array with our desired data.
numpy.savetxt(outfile + '.dat', data)
to save the data. I can later retrieve it with numpy.loadtxt().
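One caveat: savetxt expects a 2-D numeric array, so the ('Pressure', times, values) tuples shown two entries up would need unpacking first. A possible sketch, assuming that structure (the variable names continue those used above):

import numpy as np

label, times, values = data[0]   # one (name, GPS times, masked values) tuple
np.savetxt(outfile + '.dat', np.column_stack((times, np.asarray(values))))

# ...and reading it back later:
times, values = np.loadtxt(outfile + '.dat', unpack=True)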
8414 | Thu Apr 4 13:39:12 2013 | Max Horton | Update | Summary Pages | Graph Limits
Graph Limits: The limits on the graphs have been problematic. They often span too large a range of values, usually because of dropouts in data collection, so the important information is washed out by the large limits on the graph. For example, the graph below shows data over an unnecessarily large range, because of the dropout in the 300-1000Hz pressure values.

The limits on the graphs can be modified using the config file found in /40m-summary/share/c1_summary_page.ini. At the entry for the appropriate graph, change the amplitude-lim=y1,y2 line by setting y1 to the desired lower limit and y2 to the desired upper limit. For example, I changed the amplitude limits on the above graph to amplitude-lim=.001,1, and achieved the following graph.

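For reference, such an entry might look like the following (the section name here is hypothetical; only the amplitude-lim key is taken from the actual file):

[hypothetical-pressure-plot]
amplitude-lim = 740,760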
The limits could be tightened further to improve clarity - this is easily done by modifying the config file. I modified the config file for all the 2D plots to improve the bounds. However, on some plots, I wasn't sure what bounds were appropriate or what range of values we were interested in, so I will have to ask someone to find out.
Next: I now want to fix all the small problems with the site, such as scroll bars appearing where they should not and graphs only plotting until 6PM. To do this most effectively, I need to restructure the code and factor it into several files. Otherwise, the code will not only be much harder to edit, but will become more and more confusing as I add to it, compounding the problems we currently have (namely, that this code isn't well documented and nobody knows how it works). We need specific documentation of what exactly is happening before too many changes are made. Take the config files, for example: someone put a lot of work into them, but we need a README specifying which options are supported for which types of graphs, etc. So we are slowed down, because I have to figure out what is going on before I can make even small changes.
To fix this, I will divide the code into three main sectors. The division of labor will be:
- Sector 1: Figure out what the user wants (i.e. read config files, create a ConfigParser, etc...)
- Sector 2: Process the data and generate the plots based on what the user wants
- Sector 3: Generate the HTML
8476 | Tue Apr 23 15:02:19 2013 | Max Horton | Update | Summary Pages | Importing New Code
Duncan Macleod (the original author of the summary pages) has an updated version that I would like to import and work on. The code and installation instructions are linked below.
I am not sure where we want to host this. I could put it in a new folder in /users/public_html/ on megatron, for example. Duncan appears to have just included the summary page code in the pylal repository. Should I reimport the whole repository? I'm not sure if this will mess up other things on megatron that use pylal. I am working on talking to Rana and Jamie to see what is best.
http://www.lsc-group.phys.uwm.edu/cgit/lalsuite/tree/pylal/bin/pylal_summary_page
https://www.lsc-group.phys.uwm.edu/daswg/docs/howto/lal-install.html
8496 | Fri Apr 26 15:50:48 2013 | Max Horton | Update | Summary Pages | Importing New Code
I am following the instructions here:
https://www.lsc-group.phys.uwm.edu/daswg/docs/howto/lal-install.html#test
But there was an error when I ran the ./00boot command near the beginning. I have asked Duncan Macleod about this and am waiting to hear back.
For now, I am putting things into /home/controls on allegra. My understanding is that this is not shared, so I don't have a chance of messing up anyone else's work. I have been moving slowly and being extra cautious about what I do, because I don't want to accidentally nuke anything.
8504 | Mon Apr 29 15:35:31 2013 | Max Horton | Update | Summary Pages | Importing New Code
I installed the new version of LAL on allegra. I don't think it has interfered with the existing version, but if anyone has problems, let me know. The old version on allegra was 6.9.1, but the new code uses 6.10.0.1. To use it, add . /opt/lscsoft/lal/etc/lal-user-env.sh to the end of the .bashrc file (this is the simplest way, since it will automatically pull in the new version).
I am having a little trouble resolving some other unmet dependencies for the summary pages, such as the new lalframe, etc., but I am working on it.
Once I get it working on allegra and know that I can do it without messing up the current versions of LAL, I will do this again on megatron so I can test and edit the new version of the summary pages.
8523 | Thu May 2 14:14:10 2013 | Max Horton | Update | Summary Pages | Importing New Code
LALFrame was successfully installed. Allegra had unmet dependencies with some of the library tools. I tried to install LALMetaIO, but there were unmet dependencies with other LSC software. After updating the LSC software, the problem has persisted. I will try some more, and ask Duncan if I'm not successful.
Installing these packages is rather time consuming; it would be nice if there were a way to do it all at once.
8536 | Tue May 7 15:09:38 2013 | Max Horton | Update | Summary Pages | Importing New Code
I am now working on megatron, installing in /home/controls/lal. I am having some unmet dependency issues that I have asked Duncan about.
8572 | Tue May 14 16:14:47 2013 | Max Horton | Update | Summary Pages | Importing New Code
I have figured out all the issues and successfully installed the new versions of the LAL software. I am now going to get the summary pages set up using the new code.
11411 | Tue Jul 14 16:47:18 2015 | Eve | Update | Summary Pages | Summary page updates continue during upgrade
I've continued to make changes to the summary pages in my own environment, which I plan to implement on the main summary pages when they are back online.
Motivation:
I created my own summary page environment and manipulated data from June 30 to make additional plots and change already-existing plots. The main summary pages (https://nodus.ligo.caltech.edu:30889/detcharsummary/ or https://ldas-jobs.ligo.caltech.edu/~max.isi/summary/) are currently down due to the CDS upgrade, so my own summary page environment acts as a temporary playground for continuing my SURF project. My summary pages can be found here (https://ldas-jobs.ligo.caltech.edu/~eve.chase/summary/day/20150630/); they contain identical plots to the main summary pages, except for the Summary tab. I'm open to suggestions, so I can make the summary pages as useful as possible.
What I did:
- SUS OpLev: For every already existing optical lever timeseries, I created a corresponding spectrum, showing all channels present in the original timeseries. The spectra are now placed to the right of their corresponding timeseries. I'm still playing with the axes to make sure I set the best ranges.
- SUSdrift: I added two new timeseries, DRMI SUS Pitch and DRMI SUS Yaw, to the four already-existing timeseries in this tab. These plots represent channels not previously displayed on the summary pages.
- Minor changes
- Added axis labels on IOO plot 6
- Changed axis ranges of IOO: MC2 Trans QPD and IOO: IMC REFL RFPD DC
- Changed axis label on PSL plot 6
Results:
So far, all of these changes have been properly implemented into my personal summary page environment. I would like some feedback as to how I can improve the summary pages.
11414 | Tue Jul 14 17:14:23 2015 | Eve | Summary | Summary Pages | Future summary pages improvements
Here is a list of suggested improvements to the summary pages. Let me know if there's something you'd like me to add to this list!
- A lot of plots are missing axis labels and titles, and I often don't know what to call these labels. I could use some help with this.
- Check the weather and vacuum tabs to make sure that we're getting the expected output. Set the axis labels accordingly.
- Investigate past periods of missing data on DataViewer to see if the problem was with the data requisition process, the summary page production process, or something else.
- Based on trends in data over the past three months, set axis ranges accordingly to encapsulate the full data range.
- Create a CDS tab to store statistics of our digital systems. We will use the CDS signals to determine when the digital system is running and when the minute trend is missing. This will allow us to exclude irrelevant parts of the data.
- Provide duty ratio statistics for the IMC.
- Set triggers for certain plots. For example, for the channels C1:LSC-XARM_OUT_DQ and C1:LSC-YARM_OUT_DQ to be plotted in the Arm LSC Control signals figures, C1:LSC-TRX_OUT_DQ and C1:LSC-TRY_OUT_DQ must be higher than 0.5, thus acting as triggers.
- Include some flag or other marking indicating when data is not being represented at a certain time for specific plots.
- Maybe include some cool features like interactive plots.
11437 | Wed Jul 22 22:06:42 2015 | Eve | Summary | Summary Pages | Future summary pages improvements
- CDS Tab
We want to monitor the status of the digital control system.
1st plot
Title: EPICS DAQ Status
I wonder if we can plot the binary numbers as statuses of the data acquisition for the realtime codes.
We want to use the status indicators. Like this:
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20150722/plots/H1-MULTI_A8CE50_SEGMENTS-1121558417-86400.png
channels:
C1:DAQ-DC0_C1X04_STATUS
C1:DAQ-DC0_C1LSC_STATUS
C1:DAQ-DC0_C1ASS_STATUS
C1:DAQ-DC0_C1OAF_STATUS
C1:DAQ-DC0_C1CAL_STATUS
C1:DAQ-DC0_C1X02_STATUS
C1:DAQ-DC0_C1SUS_STATUS
C1:DAQ-DC0_C1MCS_STATUS
C1:DAQ-DC0_C1RFM_STATUS
C1:DAQ-DC0_C1PEM_STATUS
C1:DAQ-DC0_C1X03_STATUS
C1:DAQ-DC0_C1IOO_STATUS
C1:DAQ-DC0_C1ALS_STATUS
C1:DAQ-DC0_C1X01_STATUS
C1:DAQ-DC0_C1SCX_STATUS
C1:DAQ-DC0_C1ASX_STATUS
C1:DAQ-DC0_C1X05_STATUS
C1:DAQ-DC0_C1SCY_STATUS
C1:DAQ-DC0_C1TST_STATUS
2nd plot
Title: IOP Fast Channel DAQ Status
These have two bits each. How can we handle that?
If we need to shrink it to a single bit, take the AND of the two bits (see the sketch after the channel list).
C1:FEC-40_FB_NET_STATUS (legend: c1x04, if a legend placable)
C1:FEC-20_FB_NET_STATUS (legend: c1x02)
C1:FEC-33_FB_NET_STATUS (legend: c1x03)
C1:FEC-19_FB_NET_STATUS (legend: c1x01)
C1:FEC-46_FB_NET_STATUS (legend: c1x05)
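One possible answer, as a sketch (assuming the two status bits are bits 0 and 1 of the integer channel value):

import numpy as np

status = np.array([3, 3, 1, 2, 3])               # example two-bit status samples
single_bit = (status & 1) & ((status >> 1) & 1)  # AND of the two bits
print(single_bit)                                # [1 1 0 0 1]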
3rd plot
Title C1LSC CPU Meters
channels:
C1:FEC-40_CPU_METER (legend: c1x04)
C1:FEC-42_CPU_METER (legend: c1lsc)
C1:FEC-48_CPU_METER (legend: c1ass)
C1:FEC-22_CPU_METER (legend: c1oaf)
C1:FEC-50_CPU_METER (legend: c1cal)
The range is from 0 to 75, except for c1oaf, which could go to 500.
Can we plot c1oaf with the value divided by 8? (Then the legend should be c1oaf/8.)
4th plot
Title C1SUS CPU Meters
channels:
C1:FEC-20_CPU_METER (legend: c1x02)
C1:FEC-21_CPU_METER (legend: c1sus)
C1:FEC-36_CPU_METER (legend: c1mcs)
C1:FEC-38_CPU_METER (legend: c1rfm)
C1:FEC-39_CPU_METER (legend: c1pem)
The range should be from 0 to 75, except for c1pem, which could go to 500.
Can we plot c1pem with the value divided by 8? (Then the legend should be c1pem/8.)
5th plot
Title C1IOO CPU Meters
channels:
C1:FEC-33_CPU_METER (legend: c1x03)
C1:FEC-34_CPU_METER (legend: c1ioo)
C1:FEC-28_CPU_METER (legend: c1als)
The range should be from 0 to 75.
6th plot
Title C1ISCEX CPU Meters
channels:
C1:FEC-19_CPU_METER (legend: c1x01)
C1:FEC-45_CPU_METER (legend: c1scx)
C1:FEC-44_CPU_METER (legend: c1asx)
The range should be from 0 to 75.
7th plot
Title C1ISCEY CPU Meters
channels:
C1:FEC-46_CPU_METER (legend: c1x05)
C1:FEC-47_CPU_METER (legend: c1scy)
C1:FEC-91_CPU_METER (legend: c1tst)
The range should be from 0 to 75.
=====================
IOO
We want a duty ratio plot for the IMC. C1:IOO-MC_TRANS_SUM >1e4 is the good period.
Duty ratio plot looks like the right plot of the following link
https://ldas-jobs.ligo-wa.caltech.edu/~detchar/summary/day/20150722/lock/segments/
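The statistic itself is simple; a sketch with synthetic data standing in for one day of C1:IOO-MC_TRANS_SUM samples:

import numpy as np

trans = np.random.uniform(0, 2e4, size=86400)  # placeholder for real channel data
locked = trans > 1e4                           # "good" periods, per the threshold above
print('IMC duty ratio: %.1f%%' % (100 * locked.mean()))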
=====================
SUS: OPLEV
OL_PIT_INMON and OL_YAW_INMON are good for the slow drift monitor.
But their sampling rate is too slow for the PSDs.
Can you use
C1:SUS-ETM_OPLEV_PERROR
C1:SUS-ETM_OPLEV_YERROR
etc...
For the PSDs? They are 2kHz sampling DQ channels. You would be able to plot
it up to ~1kHz. In fact, we want to monitor the PSD from 100mHz to 1kHz.
How can you set up the resolution (=FFT length)?
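One way to answer the resolution question: the FFT bin spacing is fs/NFFT, so resolving 100 mHz with the 2k channels needs segments of at least 10 s:

fs = 2048.0             # Hz: sampling rate of the 2k DQ channels
f_res = 0.1             # Hz: lowest frequency we want to resolve
nfft = int(fs / f_res)  # FFT length: 20480 samples, i.e. 10 s per segment
print('FFT length: %d samples (%.0f s)' % (nfft, nfft / fs))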
=====================
LSC / ASC / ALS tabs
Let's make new tabs LSC, ASC, and ALS
LSC:
We should have a plot for
C1:LSC-TRX_OUT_DQ
C1:LSC-TRY_OUT_DQ
C1:LSC-POPDC_OUT_DQ
It's OK to use the minute trend for now.
You can check the range using dataviewer.
ASC:
Let's use
C1:SUS_MC1_ASCPIT_OUT16 (legend: IMC WFS)
C1:ASS-XARM_ITM_YAW_OSC_CLKGAIN (legend: XARM ASS)
C1:ASS-YARM_ITM_YAW_OSC_CLKGAIN (legend: YARM ASS)
C1:ASX-XARM_M1_PIT_OSC_CLKGAIN (legend: XARM Green ASS)
as the status indicators. There is no YARM Green ASS yet.
ALS:
Title: ALS Green transmission
We want a time series of
ALS-TRX_OUT16
ALS-TRY_OUT16
Title: ALS Green beatnote
Another time series
ALS-BEATX_FINE_Q_MON
ALS-BEATY_FINE_Q_MON
Title: Frequency monitor
We have frequency counter outputs, but I have to talk to Eric to get the channel names.
11448 | Mon Jul 27 17:51:06 2015 | Eve | Update | Summary Pages | New summary page tabs and other improvements
The summary pages can still be found at https://nodus.ligo.caltech.edu:30889/detcharsummary/ (EDIT: in an older version of this post I listed an incorrect URL). They are operational and include data from some channels for intermittent periods of time.
Motivation: to make the summary pages more informative and useful for all
What I did:
I have added tabs for ALS, ASC, and LSC subsystems. While there is currently no data on the plots, I plan on checking all channels with DataViewer to set appropriate axis ranges so that we can actually see the data.
I altered which channels are used to represent spectra for OpLev systems to more appropriately provide PSDs.
I've changed the check code status page to include "warning" labels. Previously, when the summary pages ran with a warning message, the check code status page would list this as an "error", implying that the summary pages were not properly produced.
Results:
All features were implemented, but I need to investigate some of these channels to understand why we aren't seeing many channels in the plots. I am working on some other changes to the summary pages, including providing a Locked status which will only show data in a timeseries for a selected period of time.
11467 | Thu Jul 30 14:27:18 2015 | Eve | Update | Summary Pages | ALS, ASC, LSC Summary Pages
I've switched the ALS, ASC, and LSC plots on the summary pages from plotting raw frames to plotting minute trends. Now the plots contain information instead of being completely blank, but no data is recorded on the plots after 12 UTC.
Typically, I make changes to the summary pages on my own version of the pages, found at https://ldas-jobs.ligo.caltech.edu/~eve.chase/summary/day/, where I change the summary pages for June 30 and then import such changes into the main summary pages.
11474 | Sat Aug 1 17:04:29 2015 | Eve | Update | Summary Pages | States and Triggers in SPs
I've added states to the summary pages to only show data for times at which a certain channel is above a specified threshold. So far, I've incorporated states for the IOO tab to show when the mode cleaner is locked.
You can see these changes implemented in the IOO tab of my personal summary pages for June 30: https://ldas-jobs.ligo.caltech.edu/~eve.chase/summary/day/20150630/ioo/.
I've written a description of how to add states to summary pages here: https://wiki-40m.ligo.caltech.edu/DailySummaryHelp#How_to_Define_and_Implement_States.
11480 | Wed Aug 5 17:15:08 2015 | Eve | Update | Summary Pages | Fixed ASC Tab
I've fixed the ASC tab on the summary pages to populate the graphs with data without causing an error.
Motivation: The ASC tab was showing no data. It resulted in a name error when generated.
What I did:
A name error indicates a bad channel name in the plot definition. I identified two errors in the code:
- I said C1:SUS_MC1_ASCPIT_OUT16.mean instead of C1:SUS-MC1_ASCPIT_OUT16.mean (underscore should be dash)
- The channel C1:ASX-XARM_M1_PUT_OSC_CLKGAIN was resulting in a name error. I removed it.
Results:
The plots are now processing without error. However, no titles or axis labels are present on the plots; I'll work on adding these.
11585 | Wed Sep 9 11:33:58 2015 | rana | Update | Summary Pages | Summary Page updates
- Made most plots in IOO tab only plot when MC_TRANS > 10000 using Eve's MC_LOCK state definition.
- added the 0.03 - 0.1 Hz and 10-30 Hz bands to the PEM SEIS BLRMS tab and set the y-scales to the same as SeismicRainbowSTS.stp
- set state PMC_LOCK in PSL tab and made some of those plots only plot when PMC trans > 0.6.
- SUS-OL page showed me that the ETM yaw spectrum was wacky, which led me to find that it was completely uncentered. Stop leaving the room lights ON Steve!!
I also set the quadrant offsets by blocking the QPD with a piece of metal (teflon doesn't work).
- set c1summary to only plot some when X or Y arms are locked
12703 | Wed Jan 11 19:20:23 2017 | Max Isi | Update | Summary Pages | December outage
The summary pages were not successfully generated for a long period of time at the end of 2016 due to syntax errors in the PEM and Weather configuration files.
These errors caused the INI parser to crash and brought down the whole gwsumm system. It seems that changes in the configuration of the Condor daemon at the CIT clusters may have made our infrastructure less robust against these kinds of problems (which would explain why there wasn't a better error message/alert), but this requires further investigation.
In any case, the solution was as simple as correcting the typos in the config files (on the nodus side) and restarting the cron jobs (on the cluster side, by doing `condor_rm 40m && condor_submit DetectorChar/condor/gw_daily_summary.sub`). Producing pages for the missing days will take some time (how to do so for a particular day is explained in the wiki: https://wiki-40m.ligo.caltech.edu/DailySummaryHelp).
RXA: later, Max sent us this secret note:
However, I realize it might not be clear from the page which are the key steps. These are just running:
1) ./DetectorChar/bin/gw_daily_summary --day YYYYMMDD --file-tag some_custom_tag, to create pages for day YYYYMMDD (the file-tag option is not strictly necessary, but will prevent conflicts with other instances of the code running simultaneously).
2) sync those days back to nodus by doing, e.g.: ./DetectorChar/bin/pushnodus 20160701 20160702
This must all be done from the cluster using the 40m shared account.
12709
|
Thu Jan 12 23:22:34 2017 |
rana | Update | Summary Pages | December outage | Pages still not working: PEM and MEDM blank.
- Committed existing MEDM grabbing scripts to SVN. Ran the cron job on megatron by hand. It grabs PNG files, but somehow they're not getting into the summary pages.
- Changed the MEDM grabbing scripts to use '/usr/bin/env'.
- GW summary log files were numbering in the many thousands, so I moved everything over 320 days old into the OLD/ sub-directory using 'find . -type f -mtime +320 -exec mv {} OLD/ \;' (the semi-colon is needed)
- Did apt-get upgrade on Megatron.
- pinged Max
- Stared at GWsumm docs to see if there's a clue about what (if anything) is wrong with the .ini file.
12713 | Fri Jan 13 14:33:00 2017 | MAX (not Rana) | Update | Summary Pages | December outage
The PEM config file was also lacking a section named "summary", which is necessary for all parent tabs; this has now been fixed. I have deactivated the MEDM pages because Praful's screencap script seemed to be broken (I should have logged this, I apologize).
Quote:
Pages still not working: PEM and MEDM blank.
- Committed existing MEDM grabbing scripts to SVN. Ran the cron job on megatron by hand. It grabs PNG files, but somehow its not getting into the summary pages.
- Changed the MEDM grabbing scripts to use '/usr/bin/env'.
- GW summary log files were numbering in the many thousands, so I moved everything over 320 days old into the OLD/ sub-directory using 'find . -type f -mtime +320 -exec mv {} OLD/ \;' (the semi-colon is needed)
- Did apt-get upgrade on Megatron.
- pinged Max
- Stared at GWsumm docs to see if there's a clue about what (if anything) is wrong with the .ini file.
12749 | Tue Jan 24 07:36:56 2017 | Max Isi | Update | Summary Pages | Cluster maintenance
System-wide CIT LDAS cluster maintenance may cause disruptions to summary pages today.
12752 | Wed Jan 25 09:00:39 2017 | Max Isi | Update | Summary Pages | Cluster maintenance
LDAS has not recovered from maintenance, causing the pages to remain unavailable until further notice.
> System-wide CIT LDAS cluster maintenance may cause disruptions to summary pages today.
12787 | Thu Feb 2 11:25:45 2017 | Max Isi | Update | Summary Pages | Cluster maintenance
FYI, this issue has still not been solved, but the pages are working because I got the software running on an alternative headnode (pcdev2). This may cause unexpected behavior (or not).
> LDAS has not recovered from maintenance causing the pages to remain unavailable until further notice.
>
> > System-wide CIT LDAS cluster maintenance may cause disruptions to summary pages today.
12831 | Wed Feb 15 22:16:05 2017 | Max Isi | Update | Summary Pages | New condor_q format
There has been a change in the default format of the output of the condor_q command at CIT clusters. This could be problematic for the summary page status monitor, so I have disabled the new default in favor of the old behavior. Specifically, I ran the following commands from the 40m shared account:
mkdir -p ~/.condor
echo "CONDOR_Q_DASH_BATCH_IS_DEFAULT=False" >> ~/.condor/user_config
This should have no effect on the pages themselves.
12870 | Mon Mar 6 14:47:49 2017 | gautam | Update | Summary Pages | Code status check script modified
For a few days now, the "code status" page has been telling us that the summary pages are DEAD, even though the pages themselves seemed to be generating plots. I logged into the 40m shared account on the cluster and checked the status of the condor job (with condor_q), and did not find anything odd there. I decided to consult Max, who pointed out that the script that checks the code status (/home/40m/DetectorChar/bin/checkstatus) was looking for a particular string in the log files ("gw_daily_summary"), while the recent change in the default output of condor_q meant that the string actually being written to the log files was "gw_daily_summa". The script has now been modified to look for instances of "gw_daily" instead, and the code status indicator seems to be working again.
The execution of the summary page scripts has also been moved back to pcdev1 (from pcdev2, where it was moved temporarily because of some technical problems with pcdev1).
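A hypothetical sketch of the kind of check involved (the real script lives at /home/40m/DetectorChar/bin/checkstatus; the file name and logic here are illustrative only):

def code_is_alive(logfile, marker='gw_daily'):
    # Scan the log for the marker string that condor_q now truncates.
    with open(logfile) as f:
        return any(marker in line for line in f)

print('RUNNING' if code_is_alive('gw_daily_summary.log') else 'DEAD')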
13049 | Wed Jun 7 14:27:23 2017 | Steve | Update | Summary Pages | Summary pages not working
Last good page: May 18, 2017
Not found / error message: May 19 - June 4, 2017
Blank plots: June 5, 2017
224 | Thu Jan 3 12:38:49 2008 | rob | Bureaucracy | TMI | Sore throat
Quote:
I did not feel anything wrong yesterday, but unfortunately I have a very sore throat today. I need to drink warm milk with honey and rinse my throat often today. So far I do not have other illness symptoms (no fever), so I hope that this small disease will not last long, but I feel that it is better for me to cure my sore throat at home today (and probably it is safer for others in the 40m).
Yesterday I took the book "Digital Signal Processing", so I have it to read at home.
Hope to see you tomorrow.
I've added a new category--TMI--for entries along these lines.
229 | Wed Jan 9 20:29:47 2008 | Dmass | AoG | TMI | Coffee Carafe
If you have been using the coffee machine in the 40m, you may have noticed small brown flecks in your coffee mug. The carafe in the 40m has accumulated a layer of what is presumed to be old dried-up coffee. When a small amount of water is swirled around in the bottom, flecks of the brown layer come off. Pictures below are of the inside of the carafe.
But does it provide adequate protection from 1064 nm light?
279 | Mon Jan 28 12:42:48 2008 | Dmass | Bureaucracy | TMI | Coffee
There is tea in the coffee carafe @ the 40m. It is sitting as though it were fresh coffee. There is also nothing on the post-it.
341 | Tue Feb 26 20:24:04 2008 | Andrey | Summary | TMI | Sorrow
As for that plot of the three-dimensional surface, I was indeed wrong with the axis "Q_ETMX-Q_ITMX" (I put the wrong string, "Q_ITMX-Q_ETMX", there). On the Friday plot there were values of 10⁻¹² on the z-axis, and those should really be meters; but, as I realized on Monday, I had never calibrated the experimental measurement results from counts to meters, which is why there is this difference between 10⁻⁶ and 10⁻¹². I still have not found a way to compare the experimental and theoretical plots: even if I leave "counts" on both plots, so that I have a scale of 10⁻⁶ on both, the change in the theoretical plot is just 0.02×10⁻⁶ over the range of Q-factor change, while the change in the experimental measurements is an order of magnitude more, 0.4×10⁻⁶, so the surface of the theoretical plot would be almost flat in the same axes as the experimental results.
2887 | Thu May 6 17:47:01 2010 | Alberto, kiwamu, Jc The 3rd (aka The Drigg) | Omnistructure | TMI | Minutes from the Lab Organization Committee meeting
Today we met and we finally came up with a lot of cool, clever, brilliant, outstanding ideas for organizing the lab.
You can find them on the Wiki page created for the occasion.
http://lhocds.ligo-wa.caltech.edu:8000/40m/40m_Internals/Lab_Organization
Enjoy!
2888 | Thu May 6 17:54:44 2010 | Zach Korth -- Committee Oversight (Fun Division) | Omnistructure | TMI | Minutes from the Lab Organization Committee meeting
Where are we going to put the tiki bar? The ice cream machine? I am disappointed in the details that appear to have been glossed over.
Quote:
Today we met and we finally came up with a lot of cool, clever, brilliant, outstanding ideas for organizing the lab.
You can find them on the Wiki page created for the occasion.
http://lhocds.ligo-wa.caltech.edu:8000/40m/40m_Internals/Lab_Organization
Enjoy!
14041 | Fri Jul 6 12:12:09 2018 | Annalisa | Configuration | Thermal Compensation | Thermal compensation setup
I tried to put together a rudimentary heater setup.
As a heating element, I used the soldering iron tip heated up to ~800°C.
To make a reflector, I used the small basket that holds the cork of champagne bottles (see figure 1), and I covered it with aluminum foil. Of course, it cannot really be considered a parabolic reflector, but it's something close (see figure 2).
Then, I put a ZnSe 1-inch lens, 3.5-inch FL (borrowed from the TCS lab), right after the reflector, in order to collect as much of the radiation as possible and focus it into an image (figure 3). In principle, if the heat is collimated by the reflector, the lens should focus it into a pretty small image. Finally, in order to see the image, I put up a screen made from a small piece of packaging sponge (because it shouldn't diffuse too much), and I tried to see the projected pattern with a thermal camera (also borrowed from Aidan). However, putting the screen in the lens focal plane didn't really give a sharp image, maybe because the reflector is not exactly parabolic and the heater is not at its focus. Light is still concentrated in the focal plane, although the image appears blurred. Perhaps I should find a better material (with less dispersion) to project the thermal image onto. (figure 4)
Finally, I measured the transmitted power with a broadband power meter; it turned out to be around 10 mW in the focal plane.
14043 | Sat Jul 7 19:50:38 2018 | Annalisa | Configuration | Thermal Compensation | Study of the thermal projection setup and its effect on the cavity
I ran some simulations to study the change that the heater setup can induce in the radius of curvature of the ETM.
Heat pattern
First, I used a non-sequential ray-tracing software package (Zemax) to calculate the heat pattern. I made a CAD model of the elliptical reflector and put a radiative element inside it (similar to the rod heater, 30 mm long, 3.8 mm diameter, that we ordered), placing it in such a way that the heater tip is as close as possible to the ellipse's first focus (figure 1).
Then, by putting a screen at the second focus of the ellipse (where we intend to place the mirror HR surface), I could find the projected heat pattern, as shown in figures 2 and 3 (section). Notice that the scale is in inches, even though the label says mm. As you can see, the heat pattern is pretty broad, but still enough to induce an RoC change.
Mirror deformation
In order to compute the mirror deformation induced by this kind of pattern, I used the map produced with Zemax as an absorption map in COMSOL. I considered ~1 W of total power absorbed by the mirror (just to have a unit number).
The mirror temperature and deformation maps induced by this heat pattern are shown in figures 4 and 5.
RoC change evaluation
Then I had to evaluate the RoC change. In particular, I did it by fitting the radius of curvature over a circle whose radius is set by the beam size and mode order (formula shown in the attached image), where w is the waist of the Gaussian mode on the ETMY (5 mm) and n is the mode order. This is a way of approximately knowing which radius of curvature is "seen" by each HOM; the result is shown in figure 6 (the RoC of the cold mirror is set to 57.37 m). Of course, besides being very tiny, the difference in RoC strongly depends on the heat pattern.
Gouy phase variation
Considering this absorbed power, the cavity Gouy phase variation between the hot and cold states is roughly 15 kHz (I leave the details of the calculation to the SURFs).
Unanswered points
So the still-unanswered questions are:
- what is the minimum variation we are able to resolve with our measurement
- how much heating power we expect to be projected onto the mirror surface (I'll make another entry on that)
14050 | Tue Jul 10 23:44:23 2018 | Annalisa | Configuration | Thermal Compensation | Heater setup assembly
[Annalisa, Koji]
Today both the heater and the reflector were delivered, and we put the setup together to make some first tests.
The schematic is the usual one: the rod heater (30 mm long, 3.8 mm diameter) sits inside the elliptical reflector, as close as possible to the first focus. At the second focus we put the power meter, in order to measure the radiated power. The broadband power meter wavelength calibration was set to 4 µm: the heater emits across the whole black-body spectrum and the broadband power meter measures all of it, but only wavelengths from about 4 µm on will actually be absorbed by the mirror, which is why that calibration was chosen.
We measured the cold resistance of the heater; it was about 3.5 Ohm. The heater was powered with the BK Precision DC power supply 1735, and we took measurements at different input currents:
Current [A] | Voltage [V] | Measured radiated power [mW] | Resistance [Ohm]
0.5 | 2.2 | 20 | 4.4
0.8 | 6 | 120 | 7.5
1 | 11 | 400 | 11
1.2 | 18 | 970 | 15
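A quick consistency check of the table (resistance R = V/I, and the radiated fraction compares the measured radiated power against the electrical input power I*V):

rows = [(0.5, 2.2, 0.020), (0.8, 6.0, 0.120), (1.0, 11.0, 0.400), (1.2, 18.0, 0.970)]
for I, V, P_rad in rows:  # current (A), voltage (V), radiated power (W)
    print('I = %.1f A: R = %.1f Ohm, radiated fraction = %.1f%%'
          % (I, V / I, 100 * P_rad / (I * V)))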
We also aimed to measure the heater temperature at each step, but the Fluke thermal camera is only sensitive up to 300°C, and the FLIR also seems to have a very limited temperature range (150°C?). We thought about using a thermocouple, but we tested its response and it seems definitely too slow.
Some pictures of the setup are shown in figures 1 and 6.
Then we put an absorbing screen in the suspension mount to see the heat pattern, so as to get an idea of the heat spot position and size on the ETMY (figure 2).
The projected pattern is shown in figures 3, 4, and 5.
The optimal position of the heater, i.e. the one that minimizes the heat beam spot, seems to be with the heater inserted 2/3 of the way into the reflector (1/3 out). However, this is just a qualitative evaluation.
Finally, two more pictures showing the DB connector on the flange and the in-vacuum cables.
Some more considerations about in-vacuum cabling to come.
Steve: how are you going to protect the magnets?
14071 | Fri Jul 13 23:39:46 2018 | Annalisa | Configuration | Thermal Compensation | Thermal compensation setup - power supply
[Annalisa, Rana]
In order to power the heater setup to be installed in the ETMY chamber, we took the Sorensen DSC33-33E power supply from the X-end rack, which was supposed to power the heater for the seismometer setup.
We modified the J3 connector on the back in such a way as to allow remote control (unsoldered pins 9 and 8).
Now pins 9 and 12 need to be connected to a BNC cable running to the EPICS.
RXA update: the Sorensens have the capability to be controlled by an external current source, voltage source, or resistive load. We have configured it so that 0-5 V moves the output from 0-33 V. There is also the possibility of making it a current source and having the output current (rather than voltage) follow the control voltage. This might be useful, since our heater resistance changes with temperature.
14078 | Tue Jul 17 17:37:46 2018 | Annalisa, Terra | Configuration | Thermal Compensation | Heaters installation
Summary
We installed two heater setups on the ETMY bench in order to try to induce some radius of curvature change and therefore an HOM frequency shift.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
We installed two heater setups:
Elliptical reflector setup (H1): the heater sits at the focus of the elliptical reflector; this produces a heat pattern as described in elogs #14043 and #14050.
Lens setup (H2): the heater sits in a cylindrical reflector (made with aluminum foil), 1'' in diameter, followed by a two-lens ZnSe telescope composed of a 1.5''-diameter and a 1''-diameter lens, both with 3.5'' focal length. The telescope is designed to focus the heat map onto the mirror HR surface. For this setup the schematic was supposed to be the following:

This setup will project on the mirror a heat pattern like this:

which is very convenient if we want to see a different radius of curvature for different HOMs. However, the power we expect to be absorbed by the mirror with this setup is very low (of order 40 mW at 18 V, 1.2 A), which is probably not enough to see an effect. Moreover, mostly for space reasons (the post base is too big), the design distances were not fully kept, and we ended up with the following setup:

In this configuration we probably won't have a perfect focusing of the heat pattern on the mirror.

In vacuum connections
See Koji's elog #14077 for the final pin connection details. In summary, in vacuum the pins are:
13 to 8 --> cable bunch 0
7 to 2 --> cable bunch 2
25 to 20 --> cable bunch 1
19 to 14 --> cable bunch 3
where the elliptical reflector setup (H1) is connected to cable bunches 0 and 1, and the lens setup (H2) is connected to cable bunches 2 and 3.
Installed setup
This is the installed setup as seen from above:

