40m Log
ID   Date   Author   Type   Category   Subject
  1720   Wed Jul 8 11:05:40 2009   Chris Zimmerman   Update   General   Week 3/4 Update

The last week I've spent mostly working on calculating shot noise and other sensitivities for three Michelson sensor setups: the standard Michelson, the "long range" Michelson (with wave plates), and the proposed EUCLID setup.  The goal is to show that there is some inherent advantage to the latter two setups as displacement sensors.  This involved looking into polarization and optics a lot more, so I've been spending a lot of time on that also.  For example, the shot-noise-limited displacement sensitivity of the standard Michelson is around 6.805×10^-17 m/√Hz at L_- = 1×10^-7 m, as shown in the attached graph (NSD_Displacement.png).

  1750   Wed Jul 15 12:44:28 2009   Chris Zimmerman   Update   General   Week 4/5 Update

I've spent most of the last week working on finishing up the UCSD calculations, comparing it to the EUCLID design, and thinking about getting started with a prototype and modelling in MATLAB.  Attached is something on EUCLID/UCSD sensors.

Attachment 1: Comparison.pdf
  6986   Wed Jul 18 10:08:01 2012   Liz   Update   Computer Scripts / Programs   Week 5 update/progress

Over the past week, I have been focusing on the issues I brought up in my last ELOG, 6956.  I spent quite a while attempting to modify the script and create my own spectrogram function within the existing code.  I also checked out the channels on the PSL table for the PSL health page and produced a spectrogram plot of the PMC reflected, transmitted, and input powers, the PZT voltage, and the laser output power.  When I was entering these channels into the configuration script, I came across an issue with the way the python script parses the channel list.  If there were spaces between the channel names (for example: C1:PSL-PMC_INPUT_DC, C1:PSL-PMC_RFPDDC... etc.), the program would not recognize the channels.  I altered the parsing script so that all white space at the beginning and end of each channel name is stripped, and now the program can find them.
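The whitespace fix can be sketched as follows (a minimal illustration only; `parse_channels` is a hypothetical name and the actual summary-page parsing code differs):

```python
def parse_channels(raw):
    """Split a comma-separated channel list, stripping stray whitespace
    around each name and dropping empty entries."""
    return [name.strip() for name in raw.split(",") if name.strip()]

# Names with spaces after the commas now parse cleanly:
raw = "C1:PSL-PMC_INPUT_DC, C1:PSL-PMC_RFPDDC,  C1:PSL-PMC_PZT_VOLT"
print(parse_channels(raw))
```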

 The next thing that I worked on was attempting to see if the microphone channels were actually stopping the program or just taking an extraordinarily long time.  I tried running the program with shorter time samples and that seemed to work quite well!  However, I had to leave it running overnight in order to finish.  I am sure that this difference comes from the fact that the microphone channels are fast channels.  I would like to somehow make it run more quickly, and am thinking about how best to do this.

I finally got my spectrogram function to work after quite a bit of trouble.  There were issues with mismatched data and limit sets, which I discovered came from times when only a few frames (one or two) were in one block.  I added some code to ignore small data blocks like those, and the program works very well now!  It seems like the best way to get the right limits is to let the program set them automatically (they are nicely log-scaled and everything), but there are some issues that produce questionable results.  I spent a while adding a colormap option to the script so that the spectrogram colors can be adjusted!  This mostly took so long because, on Monday night, some strange things were happening with the PMC that made the program fail (zeros were being output, which caused an uproar in the logarithmic data limits).  I was incredibly worried about this and thought that I had somehow messed up the script (it happened in the middle of my tinkering with the cmap option), so I undid all of my work!  Only when I realized it was still going on, and Masha and Jenne were talking about the PMC issues, did I figure out that it was an external problem.  I then went in and set manual limits so that a blank spectrogram would be produced instead, and redid everything.

The spectrogram is now operational and the colormap can be customized.  I still need to fix the problem with the autoscaled axes (perhaps by adding a lower bound?) so that the program does not crash when there is an issue.
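The lower-bound idea can be sketched like this (a hypothetical illustration, not the actual summary-page code): clipping the data to a small positive floor before log scaling keeps a stretch of zeros from wrecking the autoscaled limits.

```python
def safe_log_data(data, floor=1e-20):
    """Clip to a positive floor so zeros don't break log-scaled limits."""
    return [max(x, floor) for x in data]

data = [0.0, 1e-7, 3e-5, 0.0]   # zeros like the ones the PMC glitch produced
clipped = safe_log_data(data)
print(min(clipped), max(clipped))  # the floor and the largest real value
```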

Yesterday, I spoke with Rana about what my next step should be.  He advised me to look at ELOGs from Steve (6678) and Koji (6675) about what they wanted to see on the site.  These gave me a good map of what is needed on the site and where I will go next.

I need to find out what is going on with the weather channels and figure out how to calibrate the microphones.  I will also be making sure there are correct units on all of the plots and figure out how to take only a short section of data for the microphone channels.  I have already modified the tab template so that it is similar to Koji's ELOG idea and will be making further changes to the layout of the summary pages themselves.  I will also be working on having the right plots up consistently on the site.

 

  1779   Wed Jul 22 16:15:52 2009   Chris Zimmerman   Update   General   Week 5/6 Update

The last week I've started setting up the HeNe laser on the PSL table and doing some basic measurements (beam waist, etc.) with the beam scan, shown on the graph.  Today I moved a few steering mirrors that Steve showed me from a table on the NW corner to the PSL table.  The goal setup is shown below, based on the UCSD setup.  Also, I found something that confused me in the EUCLID setup, a pair of quarter wave plates in the arm of their interferometer, so I've been working out how they arranged that to get the results that they did.  I also finished calculating the shot noise levels in the basic and UCSD models, and those are also shown below (at 633 nm, 4 mW), where the two phase-shifted elements (green/red) are the UCSD outputs, in quadrature (the legend is difficult to read).

 

 

Attachment 1: Beam_Scan.jpg
Attachment 2: Long_Range_Michelson_Setup_1_-_Actual.png
Attachment 3: NSD_Displacement.png
  1789   Sat Jul 25 13:34:58 2009   Koji   Update   General   Week 5/6 Update

Quote:

The last week I've started setting up the HeNe laser on the PSL table and doing some basic measurements (beam waist, etc.) with the beam scan, shown on the graph.  Today I moved a few steering mirrors that Steve showed me from a table on the NW corner to the PSL table.  The goal setup is shown below, based on the UCSD setup.  Also, I found something that confused me in the EUCLID setup, a pair of quarter wave plates in the arm of their interferometer, so I've been working out how they arranged that to get the results that they did.  I also finished calculating the shot noise levels in the basic and UCSD models, and those are also shown below (at 633 nm, 4 mW), where the two phase-shifted elements (green/red) are the UCSD outputs, in quadrature (the legend is difficult to read).

 

 

Chris,

Some comments:

0. Probably, you are working on the SP table, not on the PSL table.

1. The profile measurement looks very nice.

2. You can simplify the optical layout if you consider the following issues
  A. The matching lenses just after the laser:
      You can make a collimated beam with a single lens, instead of two.
      Just put a lens of focal length f0 at a distance f0 from the waist (just like geometrical optics for making a parallel-going beam).

      Or you may not need any lens at all. In this case, the whole optical setup should be smaller so that your beam
      can be accommodated by the aperture of your optics. But that's quite possible.

  B. The steering mirrors after the laser:
      If you have a beam well elevated from the table (3~4 inches), you can omit the two steering mirrors.
      If the laser beam has a tilt that cannot be corrected by the laser mount, you can add a mirror to fix it.

  C. The steering mirrors in the arms:
      You don't need the steering mirrors in the arms, as all d.o.f. of the Michelson alignment can be adjusted
      by the beamsplitter and the mirror in the reflected arm. Also, the arms can be much shorter (5~6 inches?).

  D. The lenses and the mirrors after the PBS:
      You can put one of the lenses before the PBS, instead of two after the PBS.
      You can omit the mirror at the reflection side of the PBS as the PBS mount should have alignment adjustment.

The simpler, the faster and the easier to work with!
Cheers.

  7023   Wed Jul 25 11:22:39 2012   Liz   Update   Computer Scripts / Programs   Week 6 update

This week, I made several modifications to the Summary page scripts, made preliminary Microphone BLRMS channels and, with Rana's help, got the Weather Station working again. 

I changed the spectrogram and spectrum options in the Summary Pages so that, given the sampling frequency (which is gathered by the program), the NFFT and overlap are calculated internally.  This is an improvement over user-entered values because it saves having to know the sampling frequency for each desired plot.  In addition, I set up another .sh file that can generate summary pages for any given day.  Although this will probably not be useful for the final site, it is quite helpful now because I can go back and populate the pages.  The current summary pages file is called "c1_summary_page.sh" and the one that is set up to do a specific day is called "liz_c1_summary_page.sh".  I also made a few adjustments to the .css file for the webpage so that plots show up completely (they were getting cut off at the edges before) and are easier to see.  I also figured out that the minute and second trend options weren't working because the channel names have to be modified to CHANNEL.mean, CHANNEL.min and CHANNEL.max.  That is all in working order now, although I'm not sure whether I should just use the mean trends or look at all of them (the plots could get crowded if I do).  Another modification I made to the python summary page script was adding an option to include an image on one of the pages.  This is useful because I can now put requested MEDM screens up on the site.  The image option is selected if, in the configuration file, the section header starts with "image-" instead of "data-".
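The internally calculated FFT parameters might look something like this (a hypothetical sketch; `fft_params` and its defaults are my own names, and the actual summary-page code may choose differently). The idea is to derive NFFT and the overlap from the sampling frequency so the user never has to supply them:

```python
import math

def fft_params(fs, seconds_per_fft=1.0, overlap_frac=0.5):
    """Derive NFFT (the power of two nearest to one second of data)
    and the overlap (a fixed fraction of NFFT) from the sampling rate fs."""
    nfft = 2 ** round(math.log2(fs * seconds_per_fft))
    overlap = int(nfft * overlap_frac)
    return nfft, overlap

print(fft_params(2048))   # (2048, 1024)  -- a 2 kHz PEM channel
print(fft_params(16384))  # (16384, 8192) -- a fast channel
```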

I also added a link to the final summary page website on the 40 meter wiki page (my summary pages are currently located in the summary-test pages, but they will be moved over once they are more finalized).  I fleshed out the graphs on the summary pages as well, and have useful plots for the OSEM and OPLEV channels.  Instead of using the STS BLRMS channels, I have decided to use the GUR BLRMS channels that Masha made.  I ELOGged about my progress and asked for advice or recommendations a few days ago (7012), and it would still be great if everyone could take a look at what I currently have up on the website and tell me what they think!  July 22 and 23 are the most finalized pages thus far, so they are probably the best to look at.

https://nodus.ligo.caltech.edu:30889/40m-summary-test/archive_daily/20120723/

 

This week, I also tried to fix the problems with the Weather Station, which had not been operational since 2010.  All of the channels on the weather station monitor seemed to be producing accurate data except the rain gauge, so I went onto the roof of the Machine Shop to see if anything was blatantly wrong with it.  Other than a lot of dust and spiders, it was in working condition.  I plan on going up again to clean it because the manual recommends that the rain collector be cleaned every one to two years...  I also cleared the "daily rain" option on the monitor and set all rain-related readings to zero.  Rana and I then traced the cables from c1pem1 to the weather station monitor and found that they were disconnected.  In fact, the connector was broken apart and the pins were bent.  After we reconnected them, the weather station was once again operational!  In order to prevent accidental disconnection in the future, it may be wise to secure this connection with cable ties.  It went out of order again briefly on Tuesday, but I reconnected it and now it is in much sturdier shape!

 

The most recent thing that I have been doing for my project has been making BLRMS channels for the MIC channels.  With Jenne's assistance, I made the channels, compiled and ran the model on c1sus, made filters, and included the channels on the PEM MEDM screen.  I still have a few modifications to make.  One issue that I have come across is that the sampling rate for the PEM system is 2 kHz, while the audio frequencies range all the way up to 20 kHz.  Because of this, I am only taking BLRMS data in the 1-1000 Hz range.  This may be problematic because some of these channels may only show noise (for example, 1-3 and 3-10 Hz may be completely useless).
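A band-limited RMS of the kind described can be sketched offline like this (a simplified illustration assuming the 2 kHz sampling rate; the real-time system implements the bands as filter modules in the front end, not with FFTs):

```python
import numpy as np

def band_rms(x, fs, f_lo, f_hi):
    """RMS of the part of x lying in [f_lo, f_hi) Hz, via an FFT band mask."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    band = np.fft.irfft(np.where(mask, spec, 0), n=len(x))
    return np.sqrt(np.mean(band ** 2))

fs = 2048                          # PEM sampling rate
t = np.arange(fs) / fs             # one second of data
x = np.sin(2 * np.pi * 50 * t)     # a pure 50 Hz tone
print(band_rms(x, fs, 30, 100))    # ~0.707, the tone's RMS
print(band_rms(x, fs, 300, 1000))  # ~0, no energy in this band
```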

 

The pictures below are of the main connections in the Weather Station.  This first is the one that Rana and I connected (it is now better connected and looks like a small beige box), located near the beam-splitter chamber, and the second is the c1pem1 rack.  For more information on the subject, there is a convenient wiki page: https://wiki-40m.ligo.caltech.edu/Weather_Station

Attachment 1: P7230026.JPG
P7230026.JPG
Attachment 2: P7230031.JPG
P7230031.JPG
  7063   Wed Aug 1 10:07:16 2012   Liz   Update   Computer Scripts / Programs   Week 7 Update

Over the past week, I have continued refining the summary pages.  They are now online in their final home and can be easily reached from the 40 meter wiki page (via the Daily Summary link under "LOGS").  I have one final section to add plots to (the IFO section currently still has only "dummy" plots), but the rest are showing correct data!  I have many edits to make so that they are more intelligible, but they are available for browsing if anyone feels so inclined.

I also spent quite a while formatting the pages so that the days are in PDT time instead of UTC time.  This process was quite time consuming and required modifications in several files, but I tracked my changes with git so they are easy to pinpoint.  I also did a bit of css editing and rewriting of a few html generation functions so that the website is more appealing.  (One example of this is that the graphs on each individual summary page are now full sized instead of a third of the size.)

This week, I also worked with the BLRMS mic channels I made.  I edited the band pass and low pass filters that I had created last week and made coherence plots of the channels.  I encountered two major issues while doing this.  Firstly, the coherence of the channels decreases dramatically above 40 Hz.  I will look at this more today, but am wondering why it is the case.  If nothing could be done about this, it would render three of my channels ineffective.  The other issue is that the Nyquist frequency is at 1000 Hz, which is the upper limit of my highest frequency channel (300-1000 Hz).  I am not sure if this really affects the channel, but it looks very different from all of the other channels.  I am also wondering whether the channels below 20 Hz are useful at all, or whether they are just showing noise.

The microphone calibration is something I have been trying to figure out for quite some time, but I recently found a value on the website of the company that makes the EM172 microphones, which lists their sensitivity.  I determined the transfer factor from this sensitivity to be 39.8107 mV/Pa, although I am not sure whether all of the mics will be consistent with this.
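The quoted transfer factor works out exactly if the spec sensitivity is -28 dB re 1 V/Pa (my assumption about the spec, not stated above; the conversion itself is standard):

```python
def sensitivity_mv_per_pa(db_re_1v_pa):
    """Convert a mic sensitivity in dB re 1 V/Pa to a transfer factor in mV/Pa."""
    return 10 ** (db_re_1v_pa / 20.0) * 1000.0

# Assumed EM172 spec of -28 dB re 1 V/Pa gives the value quoted above:
print(round(sensitivity_mv_per_pa(-28.0), 4))  # 39.8107
```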

  7115   Wed Aug 8 10:38:43 2012   Liz   Update   Computer Scripts / Programs   Week 8/Summary Pages update

Over the past week, I have been working on my progress report and finalizing the summary pages.  I have a few more things to address in the pages (such as starting the day at 6 AM, including spectrograms where necessary, and generating plots for days more than about a week in the past), but they are mostly finalized.  I added all of the existing acoustic and seismic channels, so the PEM page is up to date.  The microphone plots include information about the transfer factor that I found on their information sheet (http://www.primomic.com/).  If there are any plots that are missing or need editing, please let me know!

I also modified the c1_summary_page.sh script to run either the daily plots or the continuously updating plots by taking an argument on the command line.  It can be run as ./c1_summary_page.sh 2012/07/27 to generate pages for a specific day, or as ./c1_summary_page.sh now to generate the current day's pages.  (Essentially, I combined the two scripts I had been running separately.)  I have been commenting my code so it is more easily understandable, and have been working on writing a file that explains how to run the code and the main alterations I made.  The most exciting thing that has taken place this week is that the script went from taking ~6 hours to run to taking less than 5 minutes.  This was done by using minute trends for all of the channels and limiting the spectrum plot data.
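The argument dispatch might look roughly like this (a hypothetical sketch of the logic, with `day_arg` as an invented helper name; the actual script differs):

```shell
# Map "now" to today's date, pass explicit YYYY/MM/DD dates through,
# and reject an empty argument with a usage message.
day_arg() {
    if [ "$1" = "now" ]; then
        date +%Y/%m/%d
    elif [ -n "$1" ]; then
        printf '%s\n' "$1"
    else
        echo "usage: c1_summary_page.sh now|YYYY/MM/DD" >&2
        return 1
    fi
}

day_arg 2012/07/27   # prints 2012/07/27
day_arg now          # prints today's date
```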

The summary pages for each day now contain only the most essential plots that give a good overview of the state of the interferometer and its environment instead of every plot that is created for that day.

I am waiting for Duncan to send me some spectrogram updates he has made that downsample the timeseries data before plotting the spectrogram.  This will make it run much more quickly and introduce a more viable spectrogram option.

 

Today's Summary Pages can be accessed by the link on the wiki page or at:

https://nodus.ligo.caltech.edu:30889/40m-summary/archive_daily/20120808/

  7120   Wed Aug 8 13:37:46 2012   Koji   Update   Computer Scripts / Programs   Week 8/Summary Pages update

Hey, the pages got significantly nicer than before. I will continue to give you comments if I find anything.

So far: there are many 10^-100 values in the logarithmic plots. Once they are removed, we should be able to see the seismic excitation during these recent earthquakes?

Incidentally, where is the script located? "./" isn't an absolute path description.

Quote:

Over the past week, I have been working on my progress report and finalizing the summary pages.  I have a few more things to address in the pages (such as starting at 6 AM, including spectrograms where necessary and generating plots for the days more than ~a week ago) but they are mostly finalized.  I added all of the existing acoustic and seismic channels so the PEM page is up to date.  The microphone plots include information about the transfer factor that I found on their information sheet (http://www.primomic.com/).  If there are any plots that are missing or need editing, please let me know!

I also modified the c1_summary_page.sh script to run either the daily plots or the continuously updating plots by taking an argument on the command line.  It can be run as ./c1_summary_page.sh 2012/07/27 to generate pages for a specific day, or as ./c1_summary_page.sh now to generate the current day's pages.  (Essentially, I combined the two scripts I had been running separately.)  I have been commenting my code so it is more easily understandable, and have been working on writing a file that explains how to run the code and the main alterations I made.  The most exciting thing that has taken place this week is that the script went from taking ~6 hours to run to taking less than 5 minutes.  This was done by using minute trends for all of the channels and limiting the spectrum plot data.

The summary pages for each day now contain only the most essential plots that give a good overview of the state of the interferometer and its environment instead of every plot that is created for that day.

I am waiting for Duncan to send me some spectrogram updates he has made that downsample the timeseries data before plotting the spectrogram.  This will make it run much more quickly and introduce a more viable spectrogram option.

 

Today's Summary Pages can be accessed by the link on the wiki page or at:

https://nodus.ligo.caltech.edu:30889/40m-summary/archive_daily/20120808/

 

  6958   Wed Jul 11 11:00:45 2012   Masha   Summary   General   Week Summary

This week, my work fell into two categories: Artificial Neural Networks and lab-related projects.

Artificial Neural Networks

- I played around with radial basis functions and k-means classification algorithms for a bit in order to develop an algorithm to pick out various features of seismic signals. However, I soon realized that k-means is an extremely slow algorithm in practice, and that radial basis functions are thus difficult to implement, since their centers are usually chosen by the k-means algorithm.

- Thus, I moved on to artificial neural networks. Specifically, I chose to implement a sigmoidal neural network, where the activation function of each neuron is f(u) = 1/(1 + e^(-u/T)), T constant, which is nice because it's bounded in [0, 1]. Classification, then, is achieved by generating a final output vector from the output layer of the form [c_1, c_2, c_3, ..., c_N], where N is the number of classes, c_i = 1 (ideally) if the input is of class i, and c_k = 0 otherwise.
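The activation function and the per-class output vector described above can be sketched as follows (a minimal illustration; the actual MATLAB network is of course more involved):

```python
import math

def sigmoid(u, T=1.0):
    """Sigmoidal activation f(u) = 1/(1 + e^(-u/T)), bounded in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-u / T))

def class_vector(i, n_classes):
    """Ideal output for class i: 1 at position i, 0 elsewhere."""
    return [1.0 if k == i else 0.0 for k in range(n_classes)]

print(sigmoid(0.0))        # 0.5, the midpoint of the activation
print(class_vector(0, 2))  # [1.0, 0.0] -- "earthquake" in the 2-class case
```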

- First, I built a network with randomly generated weights, ten neurons in the one hidden layer, and two output neurons - to simply classify [1, 0] (earthquake) and [0, 1] (not an earthquake). I ran this on fake input I generated myself, and it quickly converged to zero error. Thus, I decided to build a network for real data.

- My current network is a 2-layer, 10 neuron / 2 neuron sigmoidal network that also classifies earthquake / not an earthquake. It trains in roughly 80-100 iterations (its learning curve on training data is attached). It decimates the full data from DataViewer by a factor of 256 in order to run faster.
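The decimation step can be sketched as below (a deliberately naive illustration: proper decimation would low-pass filter first to avoid aliasing, as MATLAB's decimate does; this only shows the data reduction):

```python
def decimate_naive(x, factor=256):
    """Keep every factor-th sample. NOTE: no anti-alias filter here;
    this is only a sketch of the 256x data reduction."""
    return x[::factor]

x = list(range(1024))            # stand-in for a DataViewer time series
y = decimate_naive(x)
print(len(y))                    # 4 samples survive from 1024
```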

- Next steps: currently, my greatest limitation is data - I can use US Geological Survey statistics to classify each earthquake (so that N = 10, rather than 2, for example), but I would like definite training data on people, cars, trucks, etc. for supervised learning, in order to develop those classes. Currently, however, the seismometers are being used for Yaakov's and my triangulation project, so this may have to wait a few days.

Lab-Related Projects

- I apologize for all of the E-logs, but I changed the filters in the RMS system (to elliptic and butterworth filters) and changed the seismic.strip display file.

- I repositioned the seismometers so that Yaakov and I can triangulate signals and determine seismic noise hot-spots (as a side-project).

Right now I'm going to try for more classes based on USGS statistics, and I will also explore other data sources Den suggested.

 

Thanks for your help, everybody in 40m!

 

Attachment 1: Error.fig
  6984   Wed Jul 18 09:44:13 2012   Masha   Summary   General   Week Summary

This week, I continued to work with my artificial neural network. Specifically, I implemented a 3-hidden-layer sigmoidal, gradient-descent supervised network with 3 neurons in the final output layer, since I have introduced a new class: trucks. I have overcome my past data limitation: I noticed that a multitude of trucks comes by between 9 and 10 am, so I can collect truck patterns after the fact (their seismic signatures are rather distinct, so there could prove to be a very large supply of this data - I have gone through the past 50 days so far and gathered 60+ truck patterns).

With 3 classes, the two-layer network converges in ~200 epochs, while the 3-layer network takes around ~1200 (and more time per iteration). Since the error gradients in the stochastic gradient descent are calculated recursively, the only real time limitation in the algorithm is lots of multiplication of weight / input vectors, lots of evaluations of sigmoidal functions, and lots of data I/O (actually, since the sigmoidal function is technically an exponentiation to a decimal power and a division, I would be curious to know whether theory or MATLAB has any clever, easily implemented ways of computing this faster - I will look into this today). Thus, the networks take a long time to train. I'm currently looking at optimizing the number of layers / number of neurons, but this will be a background process for at least several days, if not the next week. In the greater scope of things, training time isn't really a problem, since the actual running of the algorithm requires only one pass through the network, and the network should be as well-trained as possible. However, since I am only here until the end of August, it would be nice to speed things up.

As far as other classifications go, I can simulate signals either by dropping the copper block from the Stacis experiment or by applying transfer functions to general seismic noise. However, I would like more real data on noise sources, and the only other source relevant to LIGO that I can currently think of (cars don't show up very well) is the LA Metro. Perhaps I will take a day to clock trains as they come in (since the schedule is imprecise) and see if there is any visible seismic pattern.

I also, with the help of Yaakov, Jenne, and Den, now have three working, triangulated seismometers, which can now begin taking triangulation data (the rock tumblers are still working, so there should be opportunities to do this), both to find hot-spots as Rana suggested, and to measure the velocity and test out my algorithm, as Den suggested.

 

  7066   Wed Aug 1 11:46:16 2012   Masha   Summary   General   Week Summary

A lot of my time this week was spent struggling to implement my neural network code in Simulink in order to experiment with neural network control systems, as Rana suggested. Perhaps I'm inept at S-Functions, but I decided to try the Model Reference Controller block in Simulink instead of my own code, and to experiment with using that to control a driven, damped harmonic oscillator with noise. The block consists of two neural networks: one trained on a model of the plant to simulate the plant, and one trained on a reference model (which, in my case, is just input -> gain 0 -> output, since my desired signal for now is 0). So far, I have managed to adjust the parameters enough to stop the neural network controller from outputting too much force (which was causing the amplitude of the oscillator to increase with each iteration), while still outputting enough to keep the plant oscillating with a maximal displacement of 2 m/s (with 30 neurons, 100 reference time delays, and 100 plant output time delays). I will continue to work on this, especially with added noise sources, and see how feasible it is to train such a controller to perform well.

control.png

20-100-100-Perform.png

As far as classification, I took up that project again, and realized that I had been approaching it the wrong way the whole time - instead of using a neural network to classify an entire time series, I can classify it online using a recurrent neural network. This, however, meant that my data (which used to be in packets of 900 seconds) had to be parsed through to generate time-dependent desired output vectors. I did this last night, and have been trying various combinations of neuron numbers, time delays, and learning parameters this morning. Below is my current best result for mean square error over time, obtained with 1 neuron in the hidden layer, 50 time delays (so that there are actually 51 neurons feeding into the hidden layer, and a subnetwork of 50 neurons connected to 50 neurons), and learning parameter 0.7. The peak is due to the fact that a large number of sharp earthquakes occur around that time, essentially giving the neural network a surprise and causing it to learn rapidly. However, I suspect this sharp rise would decrease if I were to stop decimating my data by a factor of 256 and instead use all of the inputs as they come in (this, however, would be drastically slower). Currently, I have a massive loop running which tries different combinations of neurons and time delays.

1-50-0.7.png

 

In terms of other lab stuff, Jenne and I ordered parts to make a cable for Guralp-1, and I updated the pin map for Guralp-1. Also, I wrote my progress report.

 

  7117   Wed Aug 8 11:46:09 2012   Masha   Summary   General   Week Summary

The main thing that I did this week was write a C block that, given static weights, classifies seismic signals into one of three categories - earthquake, truck, or quiet. I have successfully debugged the C block so that it works without segmentation faults, etc., and have made various versions - one that uses a recurrent neural network, and one that uses a time-delayed input vector (rather than keeping recurrent weights). I've timed my code, and it runs very fast (to the point where clock() differences from <time.h> read 0.000000 seconds per iteration). This is good news, because it means that the operations can be performed in real time, whether we are sampling at 2048 Hz or, as Rana suggested, at 256 Hz (currently, my weights are for 256 Hz, and I can decimate the incoming signal, which is at 2048 Hz right now).

In order to optimize my code, since at its core it involves a lot of matrix multiplications, I considered how the data is actually stored in the computer and attempted to minimize pointer movement. Suppose you have an array in C of the form A[num_row][num_col] - the way this array is actually stored on the stack or heap is row_1 / row_2 / row_3 / ... / row_num_row, so it makes sense to move across a matrix from left to right and then down (as though reading a page). Likewise, there's no efficient algorithm for matrix multiplication that is less than O(N^2) (I think), so it's essentially impossible to avoid double for loops (however, processing the matrices in the order mentioned above minimizes this time).
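The row-major storage rule described above can be illustrated directly (a small Python sketch of the C layout, where element (i, j) of A[num_row][num_col] lives at flat index i*num_col + j):

```python
num_row, num_col = 3, 4
A = [[i * num_col + j for j in range(num_col)] for i in range(num_row)]

# Flatten row by row, exactly as C lays out A[num_row][num_col] in memory.
flat = [A[i][j] for i in range(num_row) for j in range(num_col)]

# The C addressing rule: element (i, j) sits at flat[i*num_col + j],
# so walking left-to-right then down visits memory sequentially.
i, j = 2, 1
print(flat[i * num_col + j] == A[i][j])  # True
```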

The code is also fast because, rather than using an actual e^-u operation for the sigmoidal activation function, it uses a parametrized hyperbola - these arithmetic operations are the only ones that occur, and this is much faster than exponentiation (which I believe is just computed by a Taylor series in the C math library).
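A hyperbola-based stand-in for the sigmoid might look like the following (my guess at the kind of approximation meant, not the actual code): f(u) ≈ 0.5*(1 + u/(1 + |u|)) needs only add, abs, and divide, stays in (0, 1), and agrees with the true sigmoid at u = 0.

```python
import math

def sigmoid(u):
    """Exact sigmoid, for comparison."""
    return 1.0 / (1.0 + math.exp(-u))

def fast_sigmoid(u):
    """Hyperbola-based approximation: only add, abs, and divide."""
    return 0.5 * (1.0 + u / (1.0 + abs(u)))

# Same midpoint and monotone, saturating shape as the true sigmoid:
for u in (-4.0, 0.0, 4.0):
    print(u, round(sigmoid(u), 3), round(fast_sigmoid(u), 3))
```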

The weight vectors for this block are static (since they're made from training data where the signal type is already known). I am not currently satisfied with the performance of the block on data read from a file, so I am retraining my network. I realized that what is currently happening is that, given a time-dependent desired output vector, the network first trains to output a "quiet" vector, then a "disturbance" vector, and then retrains again to output a "quiet" vector and completely forgets how to classify disturbances. Currently, I am trying to get around this problem by shifting my earthquake data time series, so that when I train in batch (on all of my data) there is an earthquake at almost every time point, and the network does not train only on "quiet" at certain iterations. Likewise, I realized that I should perform several epochs (not just one) on the data - I tried this last night, and the training MSE decreased by roughly 1 per epoch (when on average it's about 40, and 20 at best).

After I input the static weight vectors (which shouldn't take long since my code is very generalized), the C block can be added to the c1pem frame, and a channel can be made for the seismic disturbance class. I've made sure to keep with all of the C block rules when writing my code (both in terms of function input/output, and in terms of not using any C libraries).

As for neural networks for control, I talked to Denis about the controller block, and he suggested that, instead of adding noises, we should first try using a reference plant with a lower-Q pendulum and a real plant with a higher-Q pendulum (since we want the pendulum motion to be damped). I've tried training the controller block several times, but each time the plant pendulum has started oscillating wildly. My current guess at fixing this is more training.

Also, Jenne and I made a cable for Guralp 1 (I soldered, she explained how to do it), and it seems to work well, as mentioned in my previous E-log. Hopefully it can be used to permanently keep the seismometer all the way down the arm.

  10205   Tue Jul 15 18:39:04 2014 HarryUpdateGeneralWeekly Plan (7.16.14)

 The Past Week

 

Attempted to design coupling telescope, turned out waist measurement was still off. Took another waist measurement, this time more reasonable.

Used recent waist measurement to actually design a coupling system to couple NPRO light into Panda PM980 fibers (see recent elog)

The Next Week

Assemble fiber coupling system

Measure coupling efficiency, ensure it's at least 60%

Begin measuring Polarization Extinction ratio

Materials

 

PLCX lens with f = 0.25m ------> status: here

Fiber Coupled Powermeter//PD ------> status: unknown (are there any lying around?)

Quarter Wave Plate, Polarizing Beamsplitter, Photodiodes ------> status: here

other components from original razorblade measurement  setup

  10289   Tue Jul 29 19:00:40 2014 HarryUpdateGeneralWeekly Plan (7.29.14)

 The Past Week

In the past week, I have improved the coupling in the fiber testing setup on the SP table to ~45%.

I also measured the input/output modes of the fiber with collimators.

Manasa, Q and I have designed, and redesigned a setup to measure Polarization Extinction Ratio introduced by fibers.

I have also partially assembled the box that will hold the frequency counters and RPi for FOL.

Today (Tuesday) I measured the waists of the PSL and AUX beams, using the dumped light from the SHGs, for use in designing coupling telescopes for FOL.

Next Week

In the next week, I will design and couple light from PSL and AUX (Y arm) into fibers for use in testing FOL.

Once that's done, I will continue testing fiber characteristics, starting with Polarization Extinction Ratio.

Items Needed

Power cord for Raspberry Pi (ordered)

AD9.5F collimator adapter (ordered)

 

  10158   Tue Jul 8 23:59:49 2014 HarryUpdateGeneralWeekly Plan (7.8.14)

 Last Week:

-I continued to struggle with the razorblade beam analysis, though after a sixth round of measurements, and a lot of fiddling around with fit parameters in matlab, there seems to be a light at the end of the tunnel.

 

Next Week:

-I plan to check my work with the beamscan tomorrow (Wednesday) morning

-Further characterize the light from the fibers, and set up the collimator

-Design and hopefully construct the telescope that will focus the beam into the collimator

 

Materials:

- Razorblade setup or beamscan (preferably beamscan)

- Fiber Illuminator

- Collimator (soon to be ordered)

- Lenses for telescope (TBD)

 

  10336   Wed Aug 6 10:10:45 2014 HarryUpdateGeneralWeekly Plan 8.6.14

Last Week

 

Took first round of PER measurements after a long setup.

Started setting up to take measurements of the other polarization--ran into issues with mounts again. (Spinning of their own free will again.)

Devised a new scheme for taking more robust measurements of PER--still in progress.

Next Week

Finish data analysis of these latest PER measurements

Hopefully finally move on to frequency noise characterization

Materials Needed

None for PER

Unknown for frequency noise

 

  721   Wed Jul 23 10:49:37 2008 MaxUpdateComputer Scripts / ProgramsWeekly Progress Report
This week I installed the magnetometer. The channels seem to be reading correctly. I'm back to working on the noise budget; I have added the MICH source and will soon add the PRC source. The various source-specific scripts still need to be adjusted and the transfer functions remeasured, since they do not match, in any reasonable manner, the SRD Rana put out in the elog yesterday.
  10097   Wed Jun 25 02:01:21 2014 NichinSummaryGeneralWeekly Report

 Attached is the weekly work plan / equipment requirement / lab expert's presence needed for the upcoming week.

Attachment 1: Nichin_Week4_update.pdf
Nichin_Week4_update.pdf Nichin_Week4_update.pdf
  678   Wed Jul 16 10:50:55 2008 EricSummaryCamerasWeekly Summary
Finished unwrapping, cleaning, baking, wrapping, wrapping again, packing, and shipping the baffles.

Attempted to set up the Snap software so that it could talk directly to EPICS channels. This is not currently working due to a series of very strange bugs in compiling and linking the channel access libraries. Alex Ivanov directed Joe and me to a script and makefile that are similar to what we're trying to do and it may solve our problem, but at the moment this still doesn't work. We're currently using a workaround that involves making unix system calls to ezca command line tools, but this is too hacky to leave in the final program.

Attempted to fit Josh's PZT voltage vs power plot of the OMC (from about a year ago) to Lorentzians, in order to develop fitting tools for more recent data. This isn't working, due to systematic error distorting the shapes of the peaks. Good fits can be obtained by cutting the number of points down to a very small number around the peak of resonance, but this leads to such a small percentage of the peak being used that I don't trust these results either. (In the graph, which shows the very top of the tallest peak: blue is Josh's original data; green is a fit to this peak using the top 66% of the peak and arbitrary, equal values for the error on each point; red is Josh's data averaged over bins of size 0.005; teal is a fit to these bins where the error on each point is the standard deviation of each bin; and magenta is a fit to these same bins, cropped to the top ~10% of the peak. The x-axis is voltage, the y-axis is transmitted power.) Rana suggested that I take my own sweeps of the PMC using scripts that are already written: I'm currently figuring out where these scripts are and how to use them without accidentally breaking something.
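As a sketch of the kind of fitting tool being developed (not the actual analysis code), here is a numpy-only linearized Lorentzian fit on synthetic data: for y = a*g^2/((x - x0)^2 + g^2) with negligible baseline, 1/y is a quadratic in x, so polyfit recovers the peak parameters. The baseline assumption is exactly what the real, offset-contaminated data violates:

```python
import numpy as np

def fit_lorentzian(x, y):
    # Fit 1/y = (1/(a*g^2)) * ((x - x0)^2 + g^2) as a quadratic in x.
    p2, p1, p0 = np.polyfit(x, 1.0 / y, 2)
    x0 = -p1 / (2.0 * p2)       # peak center
    g2 = p0 / p2 - x0 ** 2      # half-width squared
    a = 1.0 / (p2 * g2)         # peak height
    return a, x0, np.sqrt(g2)

# Synthetic sweep (stand-in for PZT voltage vs transmitted power data).
x = np.linspace(-1.0, 1.0, 401)
y = 2.0 * 0.05 ** 2 / ((x - 0.1) ** 2 + 0.05 ** 2)
a_fit, x0_fit, g_fit = fit_lorentzian(x, y)
```

On clean synthetic data this recovers the parameters exactly; with a baseline offset or sparse sampling near the peak it degrades, which mirrors the problems described above.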

We've begun running the Snap software for long periods of time to see how stable it is. Currently, its only problem appears to be that it leaks memory somewhat: it was up to 78% memory usage after a little over an hour. It doesn't put much strain on the computer, using only ~20% CPU. The stress put on the network by the constant transfer of images from the camera to the computer is not yet known.
Attachment 1: AttemptedPeakFit3.tiff
  722   Wed Jul 23 12:42:23 2008 EricSummaryCamerasWeekly Summary
I finally got the ezcaPut command working. The camera code can now talk directly to the EPICS channels. However, after repeated calls, the ezcaPut function begins claiming to time out, even though it continues to write values to the channel successfully (EPICS is successfully getting the new value for the channel, but failing to reply back to the program in time, I think). It has seg-faulted once as well, so its stability cannot yet be trusted for long-term running. For now, however, it works well enough to test a servo in the short term. The current approach simply uses a terminal running ezcaservo with the pitch and yaw offset channels of ETMX, as well as the channels that the camera code outputs to. This hasn't actually been tested, since we haven't had enough time with the x-arm locked.

Tested various fixed zoom lens on the camera, since the one we were previously using was too heavy for its mount and likely more expensive than necessary. The 16mm lens gets a good picture of the beam and the optic together, though the beam is a little too small in the picture to reliably fit a gaussian to. The 24mm lens zooms too much to see the whole optic, but the beam profile itself is much clearer. The 24mm lens is currently on the camera.

Scanned the PZT voltage of the PMC across its full offset range to obtain a plot of voltage vs intensity. I used DTT's triggered time series response system to measure the outputs of the slow PZT voltage and transmission intensity channels, and used the trianglewave script to drive the PZT ramp channel slowly over its full range (I couldn't get DTT to output to the channel). Clear resonances did appear (PMCScanWide.tif), but the number of data points per peak was far too small to reliably fit a Lorentzian (PMCScanSinglePeak.tif). When I decreased the scanning range and increased the time in order to collect a large number of points on a few peaks, the resulting data was too messy to fit to a Lorentzian (PMCSlowSinglePeak.tif).
Attachment 1: PMCScanSinglePeak.tif
PMCScanSinglePeak.tif
Attachment 2: PMCScanWide.tif
PMCScanWide.tif
Attachment 3: PMCSlowSinglePeak.tif
PMCSlowSinglePeak.tif
  766   Wed Jul 30 13:08:44 2008 Max JonesUpdateComputer Scripts / ProgramsWeekly Summary
This week I've been working on the noise budget script. The goal is to add seismic, DARM, MICH, PRC and magnetometer noise. I believe I've added seismic noise in a reasonable and 40m-specific manner (please see the attached graph). The seismic noise in the noise budget at 100 Hz was 10 times higher than that predicted by Rana in elog #718. This could be because the data were taken today, while the interferometer is unlocked and construction workers are busy next door. I am currently trying to fix the getDarm.m file to add the DARM source to the noise budget. I have run into several problems, the most pressing of which is that the C1:LSC-DARM_ERR channel is zero except when the interferometer is locked. According to Rob, we only save full data for approximately a day (we save trends for much longer, but these are insufficient for the noise budget script), and sometimes we are not locked the night before. Rob showed me how I may introduce an artificial noise in the DARM_ERR signal, but I'm having trouble making the script output a graphic. I'm still unsure how to make the getDarm function 40m-specific.

Today I will start working on my second progress report and abstract.
Attachment 1: C1_NoiseBudgetPlot.pdf
C1_NoiseBudgetPlot.pdf
  769   Wed Jul 30 13:52:41 2008 EricSummaryCamerasWeekly Summary
I tracked the tendency for ezcaPut to fail and sometimes seg-fault in the camera code to a conflict between the camera API and ezca, either on the network level or the thread level. Since neither is sophisticated enough to provide control over how it handles these two things, I instead separated the call to ezcaPut out into a small, separate script (a stripped-down ezcawrite), which the camera code calls at the system level. This is a bit hacky of a solution, but it's the only thing that seems to work.

I've developed a transformation based on Euler angles that should be able to take the 4 OSEMs in a picture of the end mirror and use their relative positions to determine the angle of the camera to the optic. This would allow the position data determined by the fitting software to be converted from pixels to meaningful lengths, and should aid any servo-ing done on the beam's position. I've yet to actually test whether the equations work, though.

The servo code needs to have slew rate limiters and maximums/minimums written into it to protect the mirrors before it can be tested again, but I have no idea what reasonable values for these limits are.

Joe and I recently scanned the PMC by driving C1:PSL-PMC_RAMP with the trianglewave script over a range of -3.5 to -1.25 (around 50 to 150 volts to the PZT) and read out C1:PSL-ISS_INMONPD to measure the transmission intensity. This included slightly under 2 FSRs. For slow scans (covering the range in 150 to 300 s), the peaks were very messy (even with the laser power at 1/6 its normal value), and it was difficult to place where the actual peak center occurred. For faster scans (covering the range in 30 seconds or so), the peaks were very clean and nearly symmetric, but were not placed logically (the same peak showed up at two very different values of the PZT voltage in two separate runs). I don't have time to put together graphs of the scans at the moment; I'll have that up sometime this afternoon.
  861   Wed Aug 20 12:39:11 2008 EricSummaryCamerasWeekly Summary
I attempted to model the noise produced by the mirror defects in the ETMX images, in order to better assure that the fit to the beam Gaussian in these images is actually accurate. My first attempt involved treating the defects as random Gaussians which were scaled by the power of the beam's Gaussian. This didn't work at all (it didn't really look like the noise on the ETMX), and resulted in very different behavior from the fitting software (it fit to one of the noise peaks, instead of the beam Gaussian). I'll try some other models another time.

I made a copy of the ezcaservo source code and added options that allow minimum value, maximum value, and slew rate limits. This should allow the camera code to servo on ITMX without accidentally driving the mirror too far or too fast. In order to get the code to recompile, I had to strip out the part of the servo that changed the step value based on the amount of time that had elapsed (it relied on some GDS libraries and header files). Since the amount of time that passes is reasonably constant (about 2-3 steps per second) and the required accuracy for this particular purpose isn't extremely high, I don't think this matters very much.
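The limits described above amount to a per-step clamp; a minimal sketch (illustrative Python, whereas the real change was made in the ezcaservo C source):

```python
def limit_step(current, target, slew_per_step, vmin, vmax):
    # Clamp a requested servo move to a maximum per-step change (slew rate)
    # and to absolute min/max bounds, so the optic is never driven too far
    # or too fast.
    step = max(-slew_per_step, min(slew_per_step, target - current))
    return max(vmin, min(vmax, current + step))
```

For example, a servo asking to jump from 0 to 10 with a slew limit of 1 per step only moves to 1 on the first step, and can never leave the [vmin, vmax] window.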

I put together two MATLAB functions that attempt to convert pixel position in an image to actual position in real space. The first function takes four points that have known locations in real space (with respect to some origin which the camera is pointing at) and compares them to where those 4 points fall in the image. From the distortion of the four points, it calculates the three rotational angles of the camera, as well as a scaling factor that converts pixels to real spatial dimensions. The second function takes these 4 parameters and 'unrotates' the image, yielding the positions of other features in the image (though they must be on the same flat plane) in real space. The purpose of this is to allow the cameras to provide positions in terms of physically meaningful units. It should also decouple the x and y axes so that the two dimensions can be servo'd on independently. Some results are attached; the 'original' image is the image as it came out of the camera (units in pixels), while the 'modified' image is the result of running the two functions in succession. The four points were the corners of the 'restricted access' sign and of the TV screen, while the origin was taken as the center of the sign or the TV. The accuracy of the transformation is reasonably good, but seems to depend considerably on ensuring that the origin chosen in real space matches the origin in the image. To make these the same, they will be calculated by taking the intersections of the 2 lines defined by 2 sets of diagonal points in each image. The first function will remain in MATLAB, since it only needs to be run once each time the camera is moved. The second function must be ported to C since the transformation must be done in real time during the servo.
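An equivalent way to phrase the two functions, sketched in Python rather than MATLAB (this is a standard direct-linear-transform sketch, not the code described above): for points on a flat plane, the camera rotation plus scaling amounts to a planar homography, which four known point correspondences determine completely.

```python
import numpy as np

def homography_from_points(src, dst):
    # Direct linear transform: solve for the 3x3 planar homography H that
    # maps known real-space points (src) to their pixel locations (dst).
    # src, dst: (N, 2) arrays, N >= 4. H is returned up to overall scale.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    # Map (N, 2) points through H, dividing out the projective scale.
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]
```

Inverting the fitted H plays the role of the 'unrotate' step: it carries any other pixel coordinate on the same plane back to real-space units.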

Joe and I attempted another scan of the PMC this morning. We turned the laser power down by a factor of ~50 (reflection off of the unlocked PMC went from ~118 to ~2.2) and blocked one beam in the MZ. We scanned from 40 V to 185 V ( -1 to -4.25 on the PZT ramp channel) with periods of 60 seconds and 10 seconds. In both cases, thermal effects were still clearly visible. We turned the laser power down by another factor of 2 (~1 on the PMC reflection channel), and did a long scan of 300 seconds and a short scan of 10 seconds. The 10 second scan produced what may be clean peaks, although there was clear digitization noise, while the peaks in the 300 second scan showed thermal effects. I've yet to actually analyze the data closely, however.
Attachment 1: OriginalSignImage.png
OriginalSignImage.png
Attachment 2: ModifiedSignImage.png
ModifiedSignImage.png
Attachment 3: OriginalTVImage.png
OriginalTVImage.png
Attachment 4: ModifiedTVImage.png
ModifiedTVImage.png
  891   Wed Aug 27 12:09:10 2008 EricSummaryCamerasWeekly Summary
I added a configuration file parser to the Snap code. This allows all command line parameters (like exposure time, etc.) to be saved in a file and loaded automatically. It also provides a method of loading parameters to transform a point from its location on the image to its location in actual space (loading these parameters on the command line would substantially clutter it). The code is now fully set-up to test servo-ing one of the mirrors again, and I will test this as soon as the PMC board stops being broken and I can lock the X-arm.

I also took an image of the OSEMs on ETMX and applied the rotation transform code to determine the parameters to pass to Snap. The results were alpha = 2.9505, beta = 0.0800, gamma = -2.4282, c = 0.4790. These results are reasonable but far from perfect. One of the biggest sources of error was in locating the OSEMs: it is difficult to determine where in the spot of light the OSEM actually is, and in one case the center was hidden behind another piece of equipment. Nevertheless, the parameters are good enough to use in a test of the ability to servo, though it would probably be worth trying to improve them before using them for other purposes. The original and rotated images are attached.

I've begun working on calculations to figure out how much power loss can occur due to a given cavity misalignment or change in a mirror's radius of curvature from heating. The goal is to determine how well a camera can indirectly detect these power losses, since a misalignment produces a change in beam position and a change in radius of curvature produces a change in beam waist, both of which can be measured by the camera.

Joe and I hunted down the requisite equipment to amplify the photodiode at the output of the PMC, allowing us to turn the laser power down even more during a scan of the PMC, hopefully avoiding thermal effects. This measurement can be done once the PMC works again.
Attachment 1: originalETMX.png
originalETMX.png
Attachment 2: rotatedETMX.png
rotatedETMX.png
  914   Wed Sep 3 12:26:49 2008 EricSummaryCamerasWeekly Summary
Finished up simulating the end mirror error in order to test whether the fitting code still provides reasonable answers despite the noise caused by the defects on the end mirror. The model I used to simulate the defects is far from perfect, but it's good enough given the time I have remaining, and I have no reason to believe the differences between it and the real noise would cause any radical changes in how the fit operates. A comparison between a modeled image and a real image is attached. The average error (difference between the estimated value and the real value) for each of the parameters is:

For the fit:
Max Intensity: 2767.4 (Max intensities ranged from 8000 to 11000)
X-Position: 0.9401 pixels
X Beam Waist: 1.3406 pixels (beam waists ranged from 35 to 45)
Y-Position: 0.9997 pixels
Y Beam Waist: 1.3059 pixels (beam waists ranged from 35 to 45)
Intensity Offset: 12.7705 (Offsets ranged from 1000 to 4000)

For the center of mass calculation (with a threshold that cut off everything above 13000)
X-Position: 0.0087 pixels
Y-Position: 0.0286 pixels

Thus, the fit is generally trustworthy for all parameters except for maximum intensity, for which it is very inaccurate. Additionally, this shows that the center of mass calculation actually does a much better job than the fit when this much noise is in the image. For the end mirrors, the fit is really only useful for finding the beam waist, and even this is not extremely accurate (~3% error). All the parameters for the modeling are on the svn in /trunk/docs/emintun/MatLabFiles/EndMirrorErrorSimulation.txt.
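For reference, a thresholded center-of-mass calculation of the kind compared above can be sketched as follows (illustrative Python; clipping at the threshold is one plausible reading of "cut off everything above 13000"):

```python
import numpy as np

def center_of_mass(img, threshold):
    # Intensity-weighted centroid; pixel values above the threshold are
    # clipped so that bright defect/saturated pixels don't dominate.
    w = np.clip(img.astype(float), 0.0, threshold)
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return (w * xs).sum() / total, (w * ys).sum() / total
```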

Finished working on the calculations that convert a beam misalignment, measured as a change in the beam position on the two mirrors, to a power loss in the cavity. Joe calculated the minimum measurable change in beam position to be around a tenth of a pixel, which corresponds to half a micron when the beam is directly incident on the camera. This gives the ability to measure fractional power losses as low as 2*10^-10 for the 40m main arm cavities. To me, this seems unusually low, though it scales with beam position squared, so if anything else limited the ability to measure changes in the beam position, it would have a large effect on the sensitivity to power losses. Additionally, it scales inversely with length, so shorter cavities provide less sensitivity.
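The quadratic scaling quoted above is consistent with the standard TEM00 overlap result: the power coupled between two identical fundamental modes displaced laterally by dx is exp(-(dx/w0)^2), so the fractional loss is ~(dx/w0)^2 for small offsets. A sketch of just that piece (not the actual calculation, which also involves the cavity geometry and length):

```python
import math

def fractional_power_loss(dx, w0):
    # Overlap of two identical TEM00 modes displaced laterally by dx:
    # coupled power = exp(-(dx/w0)**2), so the lost fraction is
    # 1 - exp(-(dx/w0)**2) ~ (dx/w0)**2 for dx << w0.
    return 1.0 - math.exp(-(dx / w0) ** 2)
```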

This morning Joe and I tested the ability for the camera code to servo the ITMX in order to change the beam's position on the ETMX. Two major things have been changed since the last time we tried this. First, the calculated beam center that gets output to the EPICS channels now first goes through a transform that converts it from pixels into physical units, and should account for the oblique angle of the camera. The output to the EPICS channels should now be in the form of 'mm from the center of the optic', although this is not very precise at the moment. The second thing that was changed was that the servo was run with a modified servo script that included options to set a minimum, maximum, and slew rate in order to protect the mirrors from being swung around too much. The servo was generally successful: for a given x-position, it was capable of changing the yaw of ITMX so that the position seen on the camera moved to this new location. The biggest problem is that the x and y dimensions do not appear to be decoupled (the transform converting it to physical units should have done this), so that modifying the yaw of the mirror changed both the x and y positions (the y about half as much) as output by the camera. This could cause a problem when trying to servo in both dimensions at once, since one servo could end up opposing the other. I don't know the cause of this problem yet, since the transform that is currently in use appears to be correctly orienting the image.
Attachment 1: SimulatedErrorComparison.png
SimulatedErrorComparison.png
  5000   Wed Jul 20 12:05:08 2011 NicoleSummarySUSWeekly Summary

Since last week Wednesday, I have found a Pomona Electronics box (thanks to Jenne) to use for my photosensor head circuit (to house the LED and 2 photodiodes). Suresh has shown me how to use the 9-pin D-sub connector punch, and I have punched a hole in this box to attach the D-sub connector.

Since this past entry regarding my mechanical design for the photosensor head (Photosensor Head Lessons), I have modified the design to use a Teflon sheet instead of a copper PCB, and I have moved the LED and photodiodes closer together upon the suggestions of Jamie and Koji. The distance between components is now 0.112" instead of the initial 0.28". Last night, I cut the PCB board for the LED and photodiodes, and I drilled holes into the PCB board and Teflon sheet so that the two may be mounted to the metal plate face of the Pomona box. I still need to cut the viewer hole and drill screws into the face plate.

P7200054.JPG

I have also been attempting to debug my photosensor circuit (box and LED/photodiode combination). Since this last entry (Painful Votlage Regulator and Circuit Lessons), Suresh has helped me get the parts that I need from the Downs electronics lab (15-wire ribbon cable, two 9-pin D-sub connectors (M), one 15-pin D-sub connector (M), one 16-pin IDC connector). Upon the suggestion of Jamie, I have also made additional safety changes to the circuit by fixing some of the soldering connections so that all connections are made with wires (I had a few immediate lines connected with solder). I believe the photosensor circuit box is finally ready for testing. I may just need some help attaching the IDC connector to the ribbon cable. After this, I would like to resume SAFELY testing my circuit.

P7200055.JPG

I have also been exploring SimMechanics. Unfortunately, I haven't been able to run the inverted pendulum model by Sekiguchi Takanori. Every time I attempt to run it, it reports an error and shuts down Matlab. In the meanwhile, I have been watching SimMechanics demos and trying to understand how to build a model. I'm thinking that once I figure out how SimMechanics works, I can use the image of his model (I can see the model but it will not run) to construct a similar one that will hopefully work.

I have also been attempting to figure out the circuitry of the pre-assembled accelerometer (made with the LIS3106AL chip). I have been trying to use a multimeter to figure out what the components are (beyond the accelerometer chip, which I have printed out the datasheet for), but have been unsuccessful. I have figured out that the small 5-pin chip says LAMR and is a voltage regulator. I am hoping that if I can find the datasheet for this voltage regulator, I can figure out the circuitry. Unfortunately, I cannot find any datasheets for a LAMR voltage regulator. There is one by LAMAR, but the ones I have seen are all much larger. Does anyone know what the miniature voltage regulator below is called, and if "LAMR" is short for "LAMAR"?

P7200056.JPG

 

  5003   Wed Jul 20 18:44:54 2011 KojiSummarySUSWeekly Summary

Find Frank and ask him about those components.

  5039   Wed Jul 27 01:57:28 2011 SonaliUpdateGreen LockingWeekly Summary

1. I have used the PMC  trans beam in my set-up as the required PSL beam.

2. I have superposed the ETMX-Fibre output with the PSL beam on the PSL table.

3. I have used suitable beam splitters and lenses to match the powers and sizes of the overlapping beams, and have aligned them to the optimum.

4. A lens with f = 7.6 cm is used to focus the beam onto the PD.

5. Initially, I used the broadband 1611 NewFocus PD to find the IR beat signal by scanning the oven temperature. (using the digital sitemap controls.)

6. I checked the previous elog entries by Suresh and Koji on the green beat signal they had worked on and used their data to get an idea of the temperature range of the oven where I could obtain a beat.

7. I obtained peaks at three different temperatures as had been noted previously and set the temperature so that I am now sitting in the middle stable regime.

8. Then I switched to the 1811 100 MHz PD as it has a larger gain. It has a saturation power of 100 microWatts. The input power at the PD is measured to be 80 microWatts.

9. I was having trouble getting a clean peak due to the presence of many harmonics, as seen on the spectrum analyser. This happened because too much power was incident on the PD, which led to non-linearity, giving rise to the harmonics.

10. To reduce the power entering the PD, I put an ND 1.0 filter just before the PD and obtained a clean signal.

11. I will use  the frequency counter tomorrow to check the resonant frequency and try to connect the output to acquire a digital signal.

12. Otherwise I will proceed to build a Mixer Frequency Discriminator.

13. After the feed-back loop is completed, I will proceed to compare the frequency-noises of the green-beat lock and the IR-beat lock.
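As a sanity check on points 8-10 above: an ND filter attenuates by a factor of 10^OD, so the ND 1.0 filter passes 10% of the light, bringing the 80 microwatts at the PD comfortably below the 1811's 100 microwatt saturation level (illustrative Python using the quoted numbers):

```python
def power_after_nd(p_in_uW, optical_density):
    # An ND filter attenuates by 10**(-OD); an ND 1.0 filter passes 10%.
    return p_in_uW * 10 ** (-optical_density)

p_pd = power_after_nd(80.0, 1.0)  # 80 uW measured at the PD, ND 1.0 filter
```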

  5044   Wed Jul 27 12:19:19 2011 NicoleSummarySUSWeekly Summary

Since last week, I've been working on building the photosensor head and have been making adjustments to my photosensor circuit box.

Changes to photosensor circuit (for box):

1) Last week, I was reading in the two signals from the two heads through a single input. Now there are two separate inputs for the two separate photosensors

2)During one of my many voltage regulator replacements, I apparently used a 7915 voltage regulator instead of a 7805 (thanks, Koji, for pointing that out! I never would have caught that mistake X___X)

3)I was powering my 5V voltage regulator with 10V...Now I'm using 15 V (now I only need 1 power supply and 3 voltage input plugs)

I have also begun assembling my first photosensor head. Here is what I have so far:

sensorhead.JPG

 

Here is what needs to be done still for the photosensor head

I need to find four Teflon washers and nuts to rigidly attach the isolated PCB (PCB, Teflon sheet combination) to the box. I already have the plastic screws in (I want to use plastic and Teflon for electrical isolation purposes, so as to not short my circuit).

I need to attach the sheath of my signal cable to the box of the photosensor head for noise reduction (plan: drill screw into photosensor head box to wrap sheath wires around)

I need to attach the D-sub to the other end of my signal cable so that it can connect to the circuit box. So far, I only have the D-sub to connect the cable to my photosensor head

Yesterday, Suresh helped to walk me through the photosensor box circuit so that I now understand what voltages to expect for my circuit box trouble-shooting. After this lesson, we figured out that the problem with my photosensor box was that the two op-amps were saturated (so I fixed the feedback!). After replacing the resistor, I got the LED to light up! I still had problems reading the voltage signals from the photodiodes. I was reading 13.5V from the op amp output, but Koji explained to me that this meant that I was too close to saturation (the photodiodes were perhaps producing too much photocurrent, bringing the output close to saturation). I switched the 150 K resistor in the feedback loop to a 3.4K resistor and have thus successfully gotten displacement-dependent voltage outputs (i.e. the voltage output fluctuates as I move my hand closer and farther from the photosensor head). 
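The resistor swap makes sense numerically: 13.5 V across the 150 K feedback resistor corresponds to roughly 90 microamps of photocurrent (a figure inferred here from the quoted voltage, not measured directly), and the same current through 3.4 K gives only ~0.3 V, well away from saturation. A sketch of the transimpedance relation:

```python
def tia_output_V(photocurrent_A, feedback_ohms):
    # Magnitude of a transimpedance amplifier's output: |V| = I * Rf.
    return photocurrent_A * feedback_ohms

i_photo = 90e-6  # inferred photocurrent that gave ~13.5 V with Rf = 150K
```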

Now that I have a successful circuit to power and read outputs from one photosensor, I can begin working on the other half of the circuit to power the other photosensor! 

sensorcircuit.JPG

  5107   Wed Aug 3 12:27:01 2011 NicoleSummarySUSWeekly Summary

This week I determined the linear region of my photosensor: the response is linear with a slope of -14.32 V/cm in the region from 0.4 cm to 0.75 cm.

In order to obtain this voltage plot, I used a 287K resistor to set the max voltage output for the photodiodes. This calibration was obtained using a small rectangular standing mirror (not the TT testing mirror that Steve has ordered for me).

calibrationplot.jpg
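With the measured slope, voltage changes in the linear region convert directly to displacement (illustrative Python; only valid between 0.4 cm and 0.75 cm):

```python
SLOPE_V_PER_CM = -14.32  # measured slope of the photosensor's linear region

def displacement_cm(delta_v):
    # Convert a change in photosensor output voltage to a change in
    # mirror-to-sensor distance, within the calibrated linear region.
    return delta_v / SLOPE_V_PER_CM
```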

I have also been working on the second half of the photosensor circuit (to power the LED and read out voltages for the second photosensor head). I have assembled the constant-current section of the circuit and need to do the voltage-output section of the circuit. I also need to finish assembling the second photosensor head and cables.

 

I submitted my Second Progress Report on Tuesday.

 

I have attached the mirror to the TT suspension. We are using 0.006 diameter tungsten wire to suspend the mirror. I am currently working on balancing the mirror.

 

This morning, I realized that the current set-up of the horizontal shaker does not allow for the TT to be securely mounted. I was going to change the drill holes in the horizontal slider base (1 inch pitch). Jamie has suggested that it is better to make a pair of holes in the base larger. The circled holes are the ones that will be expanded to a 0.26" diameter so that I can mount the mirror securely to the horizontal slider base. There is a concern that a bit of the TT suspension base will hang over the edge of the horizontal sliding plate. We are not sure if this will cause problems with shaking the mirror evenly. Suggestions/advice are appreciated.

newholestobe.JPG

Attachment 1: calibrationplot.jpg
calibrationplot.jpg
  5108   Wed Aug 3 12:37:57 2011 KojiSummarySUSWeekly Summary

I vote for making an adapter plate between the sliding plate and the bottom base.

Quote:

This morning, I realized that the current set-up of the horizontal shaker does not allow for the TT to be securely mounted. I was going to change the drill holes in the horizontal slider base (1 inch pitch). Jamie has suggested that it is better to make a pair of holes in the base larger. The circled holes are the ones that will be expanded to a 0.26" diameter so that I can mount the mirror securely to the horizontal slider base. There is a concern that a bit of the TT suspension base will hang over the edge of the horizontal sliding plate. We are not sure if this will cause problems with shaking the mirror evenly. Suggestions/advice are appreciated.

 

  5160   Tue Aug 9 19:53:56 2011 NicoleSummarySUSWeekly Summary

This week, I have finished assembling everything I need to begin shaking. I built an intermediary mounting stage to mount the TT suspension base to the horizontal sliding platform, finished assembling the second photodiode, finished assembling the photosensor circuit box, and calibrated the two photosensors. Today I built a platform/stage to mount the photodiodes so that they are located close enough to the mirror/suspension that they can operate in the linear range.  Below is an image of the set-up.

entiresetup.jpg

The amplifier that Koji fixed is acting a bit strange again... it sometimes shuts off. (Apparently it can only manage short runs of ~1 minute? That should be enough time?)

The set-up is ready to begin taking measurements.

  7190   Wed Aug 15 11:40:15 2012 YaakovSummarySTACISWeekly Summary

This week I've been focusing mainly on two things: 1) Designing a port for the STACIS that will allow external actuation and/or local feedback and 2) Investigating the seismic differential motion along the interferometer arms.

The circuit for the port is just a signal summing junction (in case we want to do feedforward and feedback at the same time) with BNC inputs for the external signal and switches that allow you to turn the external signal or feedback signal on/off. I'll test this on a breadboard and post the schematic if it works. I looked at the noise of the geophone pre-amp and DAC, which would be the feedback and external signal sources, respectively. According to Rolf Bork, the DAC noise is 700 nV/rtHz, and I measured the pre-amp board's minimum noise level at 20*10^-6 V/rtHz (which seems quite high). Both these noises are higher than the op-amp noise for my circuit (I'm considering the op-amp LT1012), which according to the specs is 30 nV/rtHz. This confirms that my circuit will not be the limiting noise source.
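To double-check that claim, the three uncorrelated sources can be added in quadrature; a minimal sketch using the numbers quoted above:

```python
import math

# Noise densities in V/rtHz, from the measurements quoted above.
dac_noise = 700e-9    # DAC, per Rolf Bork
preamp_noise = 20e-6  # geophone pre-amp board, measured
opamp_noise = 30e-9   # LT1012, from the datasheet

# Uncorrelated sources add in quadrature.
total = math.sqrt(dac_noise**2 + preamp_noise**2 + opamp_noise**2)
opamp_fraction = opamp_noise**2 / total**2

print(total)           # ~2.0e-05 V/rtHz -- dominated by the pre-amp
print(opamp_fraction)  # ~2e-06 -- op-amp contributes a negligible fraction
```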

Along with Den, I calibrated the seismometers in the lab and measured the displacement differential arm motion (see eLog 7186: http://nodus.ligo.caltech.edu:8080/40m/7186). I'm trying to find a transfer function for the seismic stacks (and pendulum, but that's simpler) so I can calculate the differential motion in the chamber. After doing this offline, I'll make new channels in the PEM to look at the ground and chamber differential motion along the arms online.
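As a toy version of that offline calculation, ground motion can be propagated through a simple pendulum transfer function; a minimal sketch with a hypothetical 1 Hz resonance and Q of 10 (stand-in numbers, not the measured stack/pendulum parameters):

```python
import numpy as np

def pendulum_tf(f, f0=1.0, q=10.0):
    """Complex ground-to-payload transfer function of a damped pendulum."""
    return f0**2 / (f0**2 - f**2 + 1j * f * f0 / q)

f = np.array([0.1, 1.0, 10.0])
ground_asd = 1e-6 / f**2                       # hypothetical ground ASD, m/rtHz
chamber_asd = np.abs(pendulum_tf(f)) * ground_asd

# Well below resonance the motion passes through (|H| ~ 1); at resonance it is
# amplified by Q; well above resonance it is suppressed as ~ (f0/f)^2.
print(np.abs(pendulum_tf(f)))
```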

I also am looking at the noise of the geophones with their shunt resistor (4k resistor across the coil) removed, to see if it improves the noise at low frequencies. My motivation for this was that the geophone specs show a better V/m/s sensitivity at low frequencies when the shunt resistor is removed, so the actual signal may become larger than the internal noise at these frequencies.
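The expected effect can be sketched with a second-order geophone velocity response: the shunt both divides the coil signal resistively and adds electromagnetic damping, so removing it should raise the output everywhere, most strongly near the resonance. All numbers below (coil resistance, damping ratios, open-circuit sensitivity) are hypothetical stand-ins, not this geophone's specs:

```python
import numpy as np

# Second-order geophone velocity response magnitude, in V per (m/s).
def geophone_response(f, g0, f0, zeta):
    r = f / f0
    return g0 * r**2 / np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

f = np.array([0.3, 1.0, 4.5])   # Hz
f0 = 4.5                        # hypothetical natural frequency

# With the shunt: the coil resistance and the 4k shunt form a divider, and the
# shunt current adds damping; open circuit removes both effects.
r_coil, r_shunt = 380.0, 4000.0             # r_coil is hypothetical
divider = r_shunt / (r_shunt + r_coil)
shunted = geophone_response(f, 100.0 * divider, f0, zeta=0.7)
open_ckt = geophone_response(f, 100.0, f0, zeta=0.3)  # 100 V/(m/s) hypothetical

print(open_ckt / shunted)  # > 1 everywhere: open circuit is more sensitive
```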

  5169   Wed Aug 10 12:32:09 2011 NicoleSummarySUSWeekly Summary Update

Last night, I attached a metal plate to the Vout faceplate of my photosensor circuit box because the BNC connection terminals were loose. This was Jamie's suggestion to establish a more secure connection (I had originally drilled holes for the BNCs that were much too large).

 

I have also fixed the mechanical set-up of my shaking experiment so that the horizontal sliding platform does not interfere with the photodiode mounting stage. Koji pointed out last night that the photodiode mounting stage obstructs the sliding platform at the extremes of its range of motion.

 

I have begun shaking. I am running into a problem: my voltage outputs appear to be just high-frequency noise.

  4908   Wed Jun 29 11:25:07 2011 NicoleSummarySUSWeekly Summary of Work

Update of Week 3 Work:

-I've finished reading The Art of Electronics Ch 1, 2, and 4.

-The mechanical stage for the horizontal displacement measurements is set up.

-I've opened up the circuit box for the quad photodiode and am currently working on the circuit diagram for the box and for the quad photodiode sensors.

 

Later this week, I plan to finish the circuit diagrams and figure out how the circuits work with the four inputs. I also plan to start working on my first progress report.

 

  3101   Wed Jun 23 11:31:12 2010 nancyUpdateWIKI-40M UpdateWeekly Update

This week I attended a whole lot of orientations, lectures, and meetings related to SURF. Done with general and laser safety training.

Read Nergis's thesis and other material on WFS.

Got confused about how the sideband and shifted carrier frequencies are chosen for the interferometer; read the initial chapters of Regehr's thesis for the same.

Made a plan for proceeding with the WFS work through discussions with Koji.

Understood the MC cavity and drew a diagram for it and the sensors.

Did calculations for the electric field amplitudes inside and outside the MC cavity.

Saw the hardware of the WFS and the QPD inside, and their routes to computers. Figured out which computer displays the conditioned data from the sensors.

Tried calculating the cavity axis for MC using geometry and ray tracing. Too complicated to be done manually.

Read some material (mainly Siegman) on the physics of calculating the eigen-axis of the MC cavity with misaligned mirrors. Will calculate it in simulation using the ABCD matrix approach.
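As a warm-up for that simulation, the aligned-cavity eigenmode follows from the round-trip ABCD matrix via the self-consistency condition q = (Aq + B)/(Cq + D); a minimal sketch for a hypothetical flat/curved two-mirror cavity (not the MC's actual three-mirror geometry — the misaligned case extends this with augmented ray vectors):

```python
import numpy as np

def round_trip_abcd(length, roc):
    """Round-trip ABCD matrix: flat mirror -> L -> curved mirror (ROC) -> L."""
    prop = np.array([[1.0, length], [0.0, 1.0]])
    curved = np.array([[1.0, 0.0], [-2.0 / roc, 1.0]])
    return prop @ curved @ prop  # flat mirror reflection is the identity

def eigenmode_q(m):
    """Self-consistent Gaussian q at the starting plane (stable cavity)."""
    (a, b), (c, d) = m
    # q = (a*q + b)/(c*q + d)  =>  c*q^2 + (d - a)*q - b = 0; take Im(q) > 0.
    disc = np.sqrt(complex((d - a)**2 + 4 * b * c))
    for q in [((a - d) + disc) / (2 * c), ((a - d) - disc) / (2 * c)]:
        if q.imag > 0:
            return q
    raise ValueError("cavity is unstable")

q = eigenmode_q(round_trip_abcd(length=1.0, roc=4.0))
print(q)  # purely imaginary at the flat mirror (waist): q = i * z_R = i*sqrt(3)
```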

Made a simple feedback Simulink model yesterday to learn Simulink; made it run/compile and watched the behavior through time signals at different points.

In the night, made a Simulink model of the sensor-mirror system, with dummy transfer functions for everything. It compiles and shows time-domain signals. The remaining part is to put real or near-real TFs into the model.

  3143   Wed Jun 30 11:39:20 2010 nancyUpdateWIKI-40M UpdateWeekly Update

Wednesday Morning E-log :

 

Most of the time this week, I was working toward making the Simulink model work.

It involved learning Simulink functions better and improving my knowledge of control theory, both in general and for our system.

1. Thursday: found TFs for the feedback loop and tried many different filters and gains to stabilize the system (using its transient response) - not through.

2. Friday: decided to use the error response and null the steady-state error instead of looking at convergence of the output; tried many other filter functions for that.

Rana then showed me his files for WFS.

3. Sunday: played with Rana's files; learned how to combine Simulink with MATLAB, and how to plot TFs as Bode plots in MATLAB.

4. Monday: read about state-space models and how to linearize in MATLAB; done with the latter, but the former still needs deeper understanding.

read ray-optics theory to calculate the geometric sensing matrix.

This first requires calculating the eigenmode of the cavity with tilted mirrors, which must be found using ray-optics transfer matrices for the optics involved. I have figured out the matrices for the tilted plane mirrors and am working on computing the same for MC2.
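The tilted-mirror calculation can be sketched with augmented ray matrices: a mirror tilted by alpha adds an angular kick of 2*alpha, and the cavity axis is the fixed point of the resulting affine round-trip map. A minimal sketch for a hypothetical flat/curved linear cavity (not the MC's geometry), tilting the curved end mirror:

```python
import numpy as np

L, R = 1.0, 4.0          # hypothetical cavity length and end-mirror ROC (m)
alpha = 1e-6             # tilt of the curved end mirror (rad)

prop = np.array([[1.0, L], [0.0, 1.0]])
curved = np.array([[1.0, 0.0], [-2.0 / R, 1.0]])

# Round trip starting at the flat mirror; the tilted curved mirror adds an
# angular kick of 2*alpha, which propagates back as the offset vector P @ k.
m_rt = prop @ curved @ prop
kick = prop @ np.array([0.0, 2.0 * alpha])

# Cavity axis = fixed point of the affine round-trip map: (I - M) v = kick.
axis = np.linalg.solve(np.eye(2) - m_rt, kick)
print(axis)  # [displacement, angle] at the flat mirror
# Geometric check: the new axis passes through the shifted center of
# curvature, giving displacement R*alpha = 4e-6 m and zero angle here.
```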

5. Tuesday: went to Universal Studios, Hollywood :P

6. Wednesday (today): writing the report to be submitted to SFP.

  3168   Wed Jul 7 12:45:00 2010 nancyUpdate Weekly Update

Wednesday after the meeting - Started the report, learned mode cleaner locking from Kiwamu and Rana, and saw how to move optics on the tables with Rana and Kiwamu.

Thursday - Made the report

Tuesday - report.

Today - trying to lock the MC with Kiwamu's help, to see the WFS signals and to start characterizing the QPD.

  3218   Wed Jul 14 12:31:11 2010 nancyUpdateGeneralWeekly Update

Summary of this week's work:

Wednesday - Aligned the mode cleaner with Koji, then measured the beam characteristics at the MC2 end. Koji then taught me how to read the WFS signals.

Thursday - Wrote a script to measure the signals and calculated the coefficients relating mirror movement to the DC signals of the WFS. To assess the possibility of control, found the SVD of the coefficient matrix and its condition number.

Friday - Set up the measurement of the QPD linear response using a laser outside the cavity. Took data.

Monday - Did the calculations and plotting for the above experiment. Then played around with the MEDM screens, and also tried to see what happens to the power spectrum of the WFS signals when changing the coefficients in the matrix. (failed)

Tuesday - Played around with the WFS: tried seeing what it does when switched on at different points, and what it does when I disturb the system while the WFS has kept it locked.

Today - I had switched off the WFS sensors last night after locking the MC, as I wanted to know how the MC behaves when no WFS gain is applied. I checked in the morning: the MC stayed locked all night. I am now proceeding with my calculations for the sensing matrix.
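The SVD/condition-number step from Thursday can be sketched as follows, with a made-up 2x2 sensing matrix standing in for the measured WFS coefficients:

```python
import numpy as np

# Hypothetical sensing matrix: rows = WFS DC signals, columns = mirror moves.
# (Stand-in numbers, not the measured MC coefficients.)
sensing = np.array([[1.0, 0.8],
                    [0.9, 1.0]])

u, s, vt = np.linalg.svd(sensing)
cond = s[0] / s[-1]   # ratio of largest to smallest singular value

# A small condition number means the mirror degrees of freedom are well
# separated by the sensors; a huge one means the inversion amplifies noise.
print(s, cond)
```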
  4964   Wed Jul 13 12:24:46 2011 NicoleUpdateSUSWeekly Update
This week, I have been working on the photosensor circuit box.  This photosensor box will contain the current-stabilizing power supply and
voltage readout for the two photosensors I plan to build.
 
Suresh helped to walk me through the design of the photosensor circuit (image below) so I now understand how the circuit works.
PHOTOSENSORPLAN.JPG
 
 
Jamie helped me reorganize my original circuit layout to make it easier to follow. I have now redone half of the circuit (enough for one LED/photodiode pair). I still need to put in the voltage regulators that provide the +/-15 V needed to power the op-amps, but I will do that after testing the circuit.
prelimcircuit.JPG

In order to test this preliminary circuit, I need to build the photosensor heads.  Yesterday, Suresh helped me open one of the professionally built photosensors in the lab to understand how to arrange my photosensor heads. I now understand that I need to rigidly mount the PCB to the photosensor head box. I plan to use the PCB below. It will be sufficient for the lower-frequency range (below 10 Hz) that I am interested in.

PCBforphotosensor.JPG

 I would like to use a metal box like the one below to make each photosensor head. I looked in the lab last night for similar boxes but could not find one. Does anyone know where I can find a similar metal box?

lookingforbox.JPG

 

I am now working on the accelerometer: attaching metal wires to its pins so that I can use clip leads to power it and extract voltage measurements from my circuit.

 accelerometer.JPG

  10096   Wed Jun 25 01:18:24 2014 AkhilUpdateelogWeekly Update

 Plans for the Week:

  • Phase and noise characterization of the UFC RF Frequency Counter.
  •  Characterization of the temperature actuator.

Progress and Problems Faced:

  • For the past two days, I have been trying to measure the phase difference between the input and output signals of the FC using a 16-bit ADC (ADS1115, for the input phase measurement) on a Raspberry Pi (RPi). For that I have assembled a circuit on a breadboard (details will be in my next elog).
  • The interfacing and the code seem to be alright, but the RPi is not able to detect the address of the ADC chip. I will try to debug the issue as soon as possible and take data through the Pi, so that I have both the delay and the noise introduced by the FC.
  • Now that the minimum sampling time of the FC has been brought down to 0.1 s, I will test how accurately the FC writes values every 0.1 s for a modulating input.
  • The output data of the FC will be fit against the input and the order of accuracy will be presented. Also, gain plots will be made at higher frequencies (50 MHz, 100 MHz, 500 MHz, and 1000 MHz) using the network analyzer.
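A likely first check for the undetected ADS1115: its I2C address is set by the ADDR pin, and running `i2cdetect -y 1` on the RPi should show the chip on the bus. A small sketch of the address map (from the TI ADS1115 datasheet; the wiring assumed here is the usual ADDR-to-ground default):

```python
# The ADS1115's I2C address is set by its ADDR pin (per the TI datasheet);
# if the Pi can't see the chip, `i2cdetect -y 1` should show one of these.
ADS1115_ADDR = {
    "GND": 0x48,
    "VDD": 0x49,
    "SDA": 0x4A,
    "SCL": 0x4B,
}

# With ADDR tied to ground (the usual default), expect address 0x48.
print(hex(ADS1115_ADDR["GND"]))  # 0x48
```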

Work Inside the Lab:

  • On Friday morning I will go inside the lab with Manasa to make measurements from the NPRO to characterize its response to temperature.
  • I will be in the lab in the morning session (9 am - 1 pm) and make the required measurements. I will then analyze the data, and by the end of the week will finish characterization of the FC and the temperature actuator.
  • For the rest of this week, I'll be at my desk and will not be entering the lab.

Electronics Required:

  • I will require the network analyzer on Wednesday and Thursday to make measurements at higher frequencies (30 MHz < F < 1000 MHz) from the RPi.

Goal- By the end of the week:

  • To characterize both the FC and the NPRO that would go into the FOL-PID loop.
  • The FC will be ready to replace the network analyzers that are currently being used in the 40m.

  10100   Wed Jun 25 09:30:44 2014 HarryUpdateGeneralWeekly Update

See attached weekly update

Attachment 1: Weekly_Update—June_25_thru_July_1.pdf
Weekly_Update—June_25_thru_July_1.pdf
  10161   Wed Jul 9 08:50:30 2014 AkhilUpdateGeneralWeekly Update

 

Last Week's Work:

  • Worked on the setup to mitigate timing issues arising from the non-synchronization of the clocks of the frequency counter and the Raspberry Pi.
  • Took measurements with the mentioned setup and generated PSD plots of the FC.
  • Completed the setup for phase measurements by using an external ADC.

 

Work Plan for this Week:

  • Complete the installation of the Mini Circuits frequency counter on EPICS. This involves installing EPICS base on the Raspberry Pi, creating an IOC server on the RPi, and writing the data from the FC into a specific IOC channel.
  • Complete phase measurements and obtain the delay in the FC thus completing the characterization of the FC.
  • Install the FC at a suitable place inside the 40m so that the beat note system can be remotely managed from any of the control computers, effectively replacing the spectrum analyzer. (This will be done with proper supervision once the recently ordered FC is shipped.)
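The IOC step above can be sketched as a softIoc database record; the channel name, description, and precision here are hypothetical placeholders, not the channel actually created:

```
# fc.db -- hypothetical record for the frequency counter readback,
# loaded with: softIoc -d fc.db
record(ai, "C1:ALS-FC_FREQ_MHZ")
{
    field(DESC, "Beat note frequency from Mini Circuits FC")
    field(EGU,  "MHz")
    field(PREC, "3")
}
```

The RPi-side script would then push each FC reading into this record with a channel-access write (e.g. caput).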

 

Inside the 40m :

  • I will be going inside the lab today around 9 am with Manasa to make a plan for where the FC must be placed and the routing of the RF cables and the cables which run into the computers from the FC.
  • Once the channel is created and tested, we will install the FC inside the lab possibly by the end of this week or by next week.
  10215   Wed Jul 16 07:51:52 2014 AkhilUpdateGeneralWeekly Update

Work Done:

  • Solved all the timing issues pertaining to the RPi and the FC.
  • Took all the measurements for complete characterization of the frequency counter (phase plots to follow shortly).
  • Finished installation of the FC on the martian network and created a channel for the FC frequencies (to be tested this week).

Plans for this Week:

  • Testing of the EPICS soft IOC created for the FC as a channel access server and hence completing the installation of the FC.
  • Placing the FC inside the lab (plan discussed in this elog: http://nodus.ligo.caltech.edu:8080/40m/10163) with proper supervision.
  • Characterization of the temperature actuator.

Inside the 40m Lab:        

I will need to be inside the lab to place the FC. This will be done in the morning session (on Thursday) with supervision from Manasa and Steve (if required).

  10256   Tue Jul 22 17:45:11 2014 HarryUpdateGeneralWeekly Update

 The Past Week

 

I spent the past week coupling NPRO light into the fibers, and subsequently measuring the fiber mode profile using the beam profiler.
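The mode-profile measurement reduces to fitting the Gaussian beam-radius formula w(z) = w0*sqrt(1 + ((z - z0)/zR)^2) to the profiler data; a minimal sketch with made-up numbers (not the actual fiber-output measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

LAM = 1064e-9  # Nd:YAG NPRO wavelength, m

def beam_radius(z, w0, z0):
    """Gaussian beam radius vs. distance, with zR = pi * w0^2 / lambda."""
    zr = np.pi * w0**2 / LAM
    return w0 * np.sqrt(1.0 + ((z - z0) / zr) ** 2)

# Synthetic profiler data: 100 um waist at z = 0.2 m (made-up numbers).
z = np.linspace(0.0, 1.0, 20)
w = beam_radius(z, 100e-6, 0.2)

popt, _ = curve_fit(beam_radius, z, w, p0=[80e-6, 0.1])
print(popt)  # recovered [w0, z0], close to [100e-6, 0.2]
```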

The Next Week

In the next week, I plan to at least do measurements of the Polarization Extinction Ratio of the fibers.

Materials

My current optical setup, plus an additional polarizing beam splitter (have it).

  10257   Tue Jul 22 23:10:12 2014 AkhilUpdateGeneralWeekly Update

 Work Done:

  • Created a Channel Access Server on the Raspberry Pi to write data from the FC into an EPICS channel.
  • Completed characterization and noise estimation of the FC with improved timing.
  • Started installation of FC inside the 40m.

Plans for this Week:

  • Testing how well the FC can replace the spectrum analyzer in the control room. For this I have asked Steve to order an RF adder/combiner to see how the frequency counter responds to two RF signals at different frequencies (much like the RF signal fed to the spectrum analyzer).
  • Complete the installation of the FC inside the 40m and start initial testing.
  • Characterization of the Temperature Actuator and initial PID loop design.

Inside the 40m Lab:

  • I will have to go inside the 40m lab this week to route the RF mon cables to the FC box (in detail: http://nodus.ligo.caltech.edu:8080/40m/10163).
  • Also, to set up for characterization of the temperature actuator, I will need to go inside the lab this week.
  10260   Wed Jul 23 10:40:23 2014 NichinUpdateGeneralWeekly Update

To do:

  1. Measure and calibrate out attenuation and phase changes due to RF cables in the PDFR system.
  2. Create a database of canonical plots for comparison each time new data is acquired.
  3. Vector fitting or LISO fitting of transimpedance curves.
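As a lightweight stand-in for full vector fitting, a single-pole transimpedance magnitude model can be least-squares fit to the data; a minimal sketch with synthetic numbers (2 kΩ DC transimpedance and a 30 MHz pole are hypothetical, not measured PDFR values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-pole transimpedance magnitude: |Z(f)| = z0 / sqrt(1 + (f/fp)^2).
def model(f, z0, fp):
    return z0 / np.sqrt(1.0 + (f / fp) ** 2)

# Synthetic "measured" data: 2 kOhm DC transimpedance, 30 MHz pole, 2% noise.
rng = np.random.default_rng(0)
f = np.logspace(5, 8, 50)                      # 100 kHz to 100 MHz
data = model(f, 2e3, 30e6) * (1 + 0.02 * rng.standard_normal(f.size))

popt, _ = curve_fit(model, f, data, p0=[1e3, 1e7])
print(popt)  # recovered [z0, fp], close to [2e3, 3e7]
```

Vector fitting generalizes this to many poles and complex data; LISO instead fits a component-level circuit model.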

Does not require time from a lab expert.

  10297   Wed Jul 30 11:15:44 2014 AkhilUpdateGeneralWeekly Update

 Plan for the week:

  • PID loop design and testing with the Green laser beat note by actuating the arm cavity length.
  • Beat note readout on MEDM screens and Strip tool.
  • Calibration of the laser frequency response to the PZT signal in MHz/V using a test DC input (Koji assigned me this task because this calibration has not been done and is very useful).
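That calibration reduces to a straight-line fit of beat-note frequency versus applied DC voltage; a sketch with made-up data (not actual NPRO measurements):

```python
import numpy as np

# Hypothetical calibration data: applied PZT DC voltage vs. measured beat note.
volts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])        # V
beat_mhz = np.array([39.1, 44.4, 50.2, 55.8, 61.0])  # MHz

# Linear fit: the slope is the actuator coefficient in MHz/V.
slope, offset = np.polyfit(volts, beat_mhz, 1)
print(slope)  # ~5.5 MHz/V for this made-up data
```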

Inside the Lab:

  • Placing the FOL box sometime in the afternoon today (with supervision of Manasa / EricQ).
  • Calibration of the PZT (today or tomorrow).

 

  10373   Wed Aug 13 10:49:39 2014 HarryUpdateGeneralWeekly Update

 In the past week, I designed and assembled coupling telescopes for the PSL and Y Arm Lasers

The Y arm was coupled to ~5 mV, and the PSL remains uncoupled.

 

For the next week, I'm planning on working on things like my presentation and/or final report.

Though as of last night, my computer refuses to turn on, so there may be some further "troubleshooting" involved in that whole process.
