ID | Date | Author | Type | Category | Subject
530 | Wed Jun 11 15:30:55 2008 | josephb | Configuration | Cameras | GC1280

The trial-use GC1280 has arrived. This is a higher resolution CMOS camera (similar to the GC750). Other than the higher resolution, it has a piece of glass covering and protecting the sensor, as opposed to the plastic piece used in the GC750. This may explain the reduced sensitivity to 1064nm light that the camera seems to exhibit. For example, the image averages presented here required a 60,000 microsecond exposure time, compared to 1000-3000 microseconds for similar images from the GC750. This is an inexact comparison, and the actual sensitivity difference will be determined once we have identical beams on both cameras.
The attached pdfs (same image, different angles of view) are from 200 averaged images looking at 1064nm laser light scattering from a piece of paper. The important thing to note is that there doesn't seem to be any definite structure, as was seen in the GC750 scatter images.
One possibility is that too much power is reaching the CMOS detector, penetrating, and then reflecting back to the back side of the detector. Lower power and higher exposure times may avoid this problem, and the glass of the GC1280 may simply be cutting down on the amount of light passing through.
This theory will be tested either this evening or tomorrow morning, by reducing the power on the GC750 to the point at which it needs to be exposed for 60,000 microseconds to get a decent image.
The other possibility is that the GC750 was damaged at some point by too much incident power, although it's unclear what kind of failure mode would generate the images we have seen recently from the GC750.
Attachment 1: GC1280_60000E_scatter_2d.pdf
Attachment 2: GC1280_60000E_scatter_3d.pdf
558 | Tue Jun 24 17:12:10 2008 | josephb, Eric | Configuration | Cameras | GC750 setup, 1X4 Hub connected, ETMX images

The GC750 camera has been set up to look at ETMX. In addition, the new 1X4 rack mounted switch (131.215.113.200) has been connected via new Cat6 cable to the control room hub (131.215.113.1?), thanks to Eric. The camera is now plugged into the 1X4 rack switch and has a gigabit connection to the control room computers as well as Mafalda (131.215.113.23).
After ssh -X mafalda or ssh -X 131.215.113.23, type:
target
cd Prosilica/bin-pc/x86/
./Sampleviewer
A viewer will be brought up. Clicking on the 3rd icon from the left (it looks like an eye) brings up a live view.
Close the viewer, cd ../../40mCode, and run ./Snap --help to see how to use a simple program for taking .tiff images, as well as for setting options such as exposure length and the size of the image (in pixels) to send.
When the interferometer was set to an X-arm only configuration, we took two series of 200 images each, with two different exposure lengths.
Attached are three pdf images. The first is a black and white single image, the second is an average of 100 images, and the third is the standard deviation of the 100 images.
Attachment 1: GC750_ETMX_E30000_single.pdf
Attachment 2: GC750_ETMX_E30000_avg.pdf
Attachment 3: GC750_ETMX_E30000_std.pdf
566 | Wed Jun 25 12:25:28 2008 | Eric | Summary | Cameras | 2D Gaussian Fitting Code

I initially wrote a script in MATLAB that takes pictures of the laser beam's profile and fits them to a two-dimensional Gaussian in order to determine the position and width of the beam. This code is now (mostly) ported to C so that it can be embedded in the camera software package that Joe is writing. The fitting works fairly well for pictures with the beam directly incident on the camera, and less well for pictures of scatter off the end mirrors of the arms, since scatter from defects in the mirror has intensities much greater than the intensity of the beam's Gaussian profile.
The next steps are to finish porting the fitting code to C, and then modify it so it can better handle the images off the end mirror. Some thoughts on how to do this are to use a Fourier transform and a low-pass filter, or to simply use a center-of-mass calculation (with the defect peaks reduced in intensity), since position is more important than beam width in this calculation. The eventual goal is to include the edge of the optic in the picture and use the fit of the beam position, in comparison to the optic's position, to find the beam's location on the mirror.
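For illustration, a minimal Python sketch of the two approaches described above (the actual code is MATLAB/C and is not reproduced here; the 1/e^2 waist convention, initial guesses, and all names are my assumptions):

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(xy, amp, x0, y0, wx, wy, offset):
        # 2D Gaussian with 1/e^2 waists wx, wy plus a constant background.
        x, y = xy
        return (amp * np.exp(-2*((x - x0)/wx)**2 - 2*((y - y0)/wy)**2)
                + offset).ravel()

    def fit_beam(img):
        # Least-squares fit of the whole image to the 2D Gaussian model.
        y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        p0 = [img.max(), img.shape[1]/2, img.shape[0]/2, 40, 40, img.min()]
        popt, _ = curve_fit(gauss2d, (x, y), img.ravel().astype(float), p0=p0)
        return popt  # amp, x0, y0, wx, wy, offset

    def center_of_mass(img, clip=None):
        # Fast position-only estimate; clipping tames bright defect peaks.
        z = img.astype(float) if clip is None else np.clip(img, None, clip).astype(float)
        y, x = np.mgrid[0:z.shape[0], 0:z.shape[1]]
        s = z.sum()
        return (x*z).sum()/s, (y*z).sum()/s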
622 | Wed Jul 2 10:35:02 2008 | Eric | Summary | Cameras | General Summary

I finished up the 2D Gaussian fitting code and, along with Joe, integrated it into the Snap software so that it automatically does a fit to every 100th image. While the fitting works, it is too slow for use in any feedback to the servos. I put together a center-of-mass calculation to use instead that is somewhat less accurate but much faster (almost instantaneous, versus 5-10 seconds). This has yet to be added to the Snap software, but doing so would not be difficult.
I put together a different fitting function for fitting the multiple Lorentzian resonance peaks in a power spectrum that would result from sweeping the length of any of the mode cleaners. This simply doesn't work. I tested it on some of Josh Weiner's data collected on the OMC last year, and the data fits poorly. Attempting to fit it all at once requires fitting 80000 data points with 37 free parameters (12 peaks at 3 parameters per peak and 1 offset parameter), which cannot be done in any reasonable time. Attempting to fit one specific peak doesn't work either, due to corruption from the other nearby peaks, even though they are comparatively small. The fit places the offset incorrectly if given the opportunity (green line in attemptedSinglePeakFitWithoutOffset.tiff and attemptedSinglePeakFitWithoutOffsetZoomed.tiff). Removing the offset as a parameter causes the fit to do a much better job (red line in these two graphs). The fit still places the peak 0.01 to the right of the actual peak, which is worse than what could be obtained by simply taking the maximum point value. Additionally, this slight shift means that attempting to subtract out the peak so that the other peaks are accessible doesn't work -- the peaks are so steep that the error of 0.01 is enough to cause significant problems (red in attemptedPeakSubtraction.tiff is the attempted subtraction). Part of the problem is that the peaks are far from perfect Lorentzians, as seen by cropping to any particular peak (OMCSweepSinglePeak.tiff). This might be corrected in part by accounting for the conversion from PZT voltage to position, which isn't perfectly linear, though I doubt that would remove all the irregularities. At the moment, the best approach seems to be simply using a center-of-mass calculation cropped to the particular peak, though I have yet to try this.
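As a sketch of the model being fit here (assuming the usual Lorentzian form; the flat parameter layout is mine, not Josh's or Eric's), 12 peaks at 3 parameters each plus one shared offset gives the 37 free parameters mentioned above:

    import numpy as np

    def lorentzians(x, params, offset):
        # params is a flat array [x0, gamma, amp, x0, gamma, amp, ...]:
        # 3 parameters per peak, plus the single shared offset.
        y = np.full_like(np.asarray(x, dtype=float), offset)
        for x0, gamma, amp in np.asarray(params).reshape(-1, 3):
            y += amp * gamma**2 / ((x - x0)**2 + gamma**2)
        return y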
Changing Josh's code to work for the digital cameras and the PMC or MC shouldn't be difficult. Changing to the MC or PMC should simply involve changing the EPICS tags for the OMC photodiodes and PZTs to those of the PMC or MC. Making the code work for the digital cameras should be as simple as redirecting the call to the framegrabber software to the Snap software instead.
Attachment 1: attemptedSinglePeakFitWithoutOffset.tiff
Attachment 2: attemptedSinglePeakFitWithoutOffsetZoomed.tiff
Attachment 3: attemptedPeakSubtraction.tiff
Attachment 4: OMCSweepSinglePeak.tiff
657 | Thu Jul 10 23:27:57 2008 | John | Metaphysics | Cameras | Secret handshakes

Rob and I have joined the ranks of the illuminati and exercised our power.

Quote: Osamu showed me the secret way to change the video labels for the quads and so we fixed them. He made me swear not to divulge this art. - Rana Adhikari
660 | Fri Jul 11 20:16:01 2008 | Eric | DAQ | Cameras | Taking data from the GC750 Camera

Mafalda has been set up with a background process to constantly take data from the GC750 camera (at the end of the x-arm) for the weekend. This camera will otherwise be inoperable until then.
On the small chance that this slows either Mafalda or the network to a crawl, the process to kill has PID 26265.
671 | Tue Jul 15 10:09:42 2008 | Eric | DAQ | Cameras | Did anyone kill the picture taking process on Mafalda?

Did anyone kill the process on Mafalda that was taking pictures of the end mirror of the x-arm last Friday? I need to know whether or not it crashed of its own accord.
678 | Wed Jul 16 10:50:55 2008 | Eric | Summary | Cameras | Weekly Summary

Finished unwrapping, cleaning, baking, wrapping, wrapping again, packing, and shipping the baffles.
Attempted to set up the Snap software so that it can talk directly to EPICS channels. This is not currently working, due to a series of very strange bugs in compiling and linking the channel access libraries. Alex Ivanov directed Joe and me to a script and makefile that do something similar to what we're trying and may solve our problem, but at the moment this still doesn't work. We're currently using a workaround that makes unix system calls to the ezca command line tools, but this is too hacky to leave in the final program.
Attempted to fit Josh's PZT voltage vs. power plot of the OMC (from about a year ago) to Lorentzians, in order to develop fitting tools for more recent data. This isn't working, due to systematic error distorting the shapes of the peaks. Good fits can be obtained by cutting the number of points to a very small number around the peak of resonance, but this leads to such a small percentage of the peak being used that I don't trust those results either. (In the graph, which shows the very top of the tallest peak: blue is Josh's original data; green is a fit to this peak using the top 66% of the peak and arbitrary, equal values for the error on each point; red is Josh's data averaged over bins of size 0.005; teal is a fit to these bins where the error on each point is the standard deviation of each bin; and magenta is a fit to these same bins, cropped to the top ~10% of the peak. The x-axis is voltage and the y-axis is transmitted power.) Rana suggested that I take my own sweeps of the PMC using scripts that are already written: I'm currently figuring out where these scripts are and how to use them without accidentally breaking something.
We've begun running the Snap software for long periods of time to see how stable it is. Currently, its only problem appears to be a memory leak: it was up to 78% memory usage after a little over an hour. It doesn't put much strain on the computer, using only ~20% CPU. The stress put on the network by the constant transfer of images from the camera to the computer is not yet known.
Attachment 1: AttemptedPeakFit3.tiff
681 | Wed Jul 16 15:59:04 2008 | josephb, Eric | Configuration | Cameras | PMC trans camera path

In order to reduce saturation, we placed a Y1 plate (a spare from the SP table) in transmission just before the GC650 camera looking at the PMC transmission. The reflection (most of the light) was dumped to a convenient razor blade dump. We also removed the 0.3 and 0.5 ND filters and placed them in the 24 hour loan ND filter box.

Good exposure values for viewing are now around 3000 for that camera.
693 | Fri Jul 18 12:24:15 2008 | josephb, Eric | Configuration | Cameras | Changed Lenses on GC750 at ETMX

We removed the giant TV zoom lens and replaced it with a much smaller fixed zoom lens. Currently it views the entire optic. We have another (also small) zoom lens which focuses much better on the spot itself. With how far back the camera is currently placed, neither of these fixed zoom lenses can touch or hit the view port or the chamber while still attached to the camera and mount, even using all of the mount's motion range, so this should be less of a safety issue.

Ideally, we'd like to get some images of the full optic (including OSEMs and so forth) with the X-arm locked, and then use the higher zoom lens while still locked, to get images we can use to calibrate the x and y length scales.
722 | Wed Jul 23 12:42:23 2008 | Eric | Summary | Cameras | Weekly Summary

I finally got the ezcaPut command working. The camera code can now talk directly to the EPICS channels. However, after repeated calls of the ezcaPut function, the function begins to claim it has timed out, even though it continues to write values to the channel successfully (EPICS is successfully getting the new value for the channel, but failing to reply back to the program in time, I think). It has seg-faulted once as well, so its stability cannot yet be trusted for long-term running. For now, however, it works well enough to test a servo in the short term. The current approach simply uses a terminal running ezcaservo with the pitch and yaw offset channels of the ETMX, as well as the channels that the camera code outputs to. This hasn't actually been tested, since we haven't had enough time with the x-arm locked.
Tested various fixed zoom lenses on the camera, since the one we were previously using was too heavy for its mount and likely more expensive than necessary. The 16mm lens gets a good picture of the beam and the optic together, though the beam is a little too small in the picture to reliably fit a Gaussian to. The 24mm lens zooms too much to see the whole optic, but the beam profile itself is much clearer. The 24mm lens is currently on the camera.
Scanned the PZT voltage of the PMC across its full offset range to obtain a plot of voltage vs. intensity. I used DTT's triggered time series response system to measure the outputs of the slow PZT voltage and transmission intensity channels, and used the trianglewave script to drive the PZT ramp channel slowly over its full range (I couldn't get DTT to output to the channel). Clear resonances did appear (PMCScanWide.tif), but the number of data points per peak was far too small to reliably fit a Lorentzian to (PMCScanSinglePeak.tif). When I decreased the scanning range and increased the time in order to collect a large number of points on a few peaks, the resulting data was too messy to fit to a Lorentzian (PMCSlowSinglePeak.tif).
Attachment 1: PMCScanSinglePeak.tif
Attachment 2: PMCScanWide.tif
Attachment 3: PMCSlowSinglePeak.tif
769 | Wed Jul 30 13:52:41 2008 | Eric | Summary | Cameras | Weekly Summary

I tracked the tendency for ezcaPut to fail and sometimes seg-fault in the camera code to a conflict between the camera API and ezca, either on the network level or the thread level. Since neither is sophisticated enough to provide control over how they handle these two things, I instead separated the call to ezcaPut out into a small, separate script (a stripped-down ezcawrite), which the camera code calls at the system level. This is a bit hacky of a solution, but it's the only thing that seems to work.
I've developed a transformation based on Euler angles that should be able to take the 4 OSEMs in a picture of the end mirror and use their relative positions to determine the angle of the camera to the optic. This would allow the position data determined by the fitting software to be converted from pixels to meaningful lengths, and should aid any servo-ing done on the beam's position. I've yet to actually test whether the equations work, though.
The servo code needs slew rate limiters and maximums/minimums written into it to protect the mirrors before it can be tested again, but I have no idea what reasonable values for these limits are.
Joe and I recently scanned the PMC by driving C1:PSL-PMC_RAMP with the trianglewave script over a range of -3.5 to -1.25 (around 50 to 150 volts to the PZT) and read out C1:PSL-ISS_INMONPD to measure the transmission intensity. This covered slightly under 2 FSRs. For slow scans (covering the range in 150 to 300 s), the peaks were very messy (even with the laser power at 1/6 its normal value), and it was difficult to place where the actual peak center occurred. For faster scans (covering the range in 30 seconds or so), the peaks were very clean and nearly symmetric, but were not placed logically (the same peak showed up at two very different values of the PZT voltage in two separate runs). I don't have time to put together graphs of the scans at the moment; I'll have that up sometime this afternoon.
809 | Thu Aug 7 11:54:26 2008 | josephb | Configuration | Cameras | New code + gstreamer allows for easy saving and compression of images

Modified the CamSnap code to output the image data stream to standard out. This can then be piped into a gstreamer plugin and used to save, encode, transmit, receive, slice, dice, and/or mangle video (or virtually any type of data stream).
The gstreamer webpage can be found at: http://www.gstreamer.net/
Under documentation you can find a list of all available plug-ins. Some good, some bad, some ugly.
Run the following command on Mafalda (via ssh -X mafalda) or Rosalba while in /cvs/cds/caltech/target/Prosilica/40mCode/SnapCode/:
CamSnap -F 'Mono8' -c 44058 -E 15000 -X 0 -Y 0 -H 480 -W 752 -l 0 -m 1000 | gst-launch-0.10 fdsrc fd=0 blocksize=360960 ! video/x-raw-gray, height=480, width=752, bpp=8,depth=8,framerate=1/1 ! ffmpegcolorspace ! ximagesink
This command will create a window which displays what the camera with UID 44058 is looking at. It will display 1000 images, then quit. (You can switch the -m 1000 to -i to just have it continue until the process is stopped.)
You can also encode the data into compressed format and save it in a media file. The following command line will encode the images into an ogg media file (.ogm), which can be played with the totem viewer (available on Rosalba or almost any machine running Ubuntu or Centos) or any other viewer capable of handling ogm files. By switching the plugins you can generate other formats as well.
The compression is good, putting 300 images, normally about 500K each uncompressed, into a single file of about 580K.
The following command line was used to generate the attached video file:
CamSnap -F 'Mono8' -c 44058 -E 5000 -X 0 -Y 0 -H 480 -W 752 -l 0 -m 300 | gst-launch-0.10 fdsrc fd=0 blocksize=360960 ! video/x-raw-gray, height=480, width=752, bpp=8,depth=8,framerate=30/1 ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location="./testVideo.ogm"
Currently looking into plugins which allow you to pull individual frames out of a video file and display or save them in a variety of formats. This would allow us to save long term images in compressed video format, and then pull out individual frames as needed.
Also need to look into how to "T" the streams, so one can be displaying while another encodes and saves.
Attachment 1: testVideo.ogm
812 | Fri Aug 8 09:54:10 2008 | rana | Update | Cameras | New code + gstreamer allows for easy saving and compression of images

Quote: Modified the CamSnap code to output the image data stream to standard out. This can then be piped into a gstreamer plugin and then be used ...

Didn't work; Prosilica has only 1 "l". Even so, sshing from op440m to mafalda, I got this:
mafalda:SnapCode>CamSnap -F 'Mono8' -c 44058 -E 5000 -X 0 -Y 0 -H 480 -W 752 -l 0 -m 300 | gst-launch-0.10 fdsrc fd=0 blocksize=360960 ! video/x-raw
-gray, height=480, width=752, bpp=8,depth=8,framerate=30/1 ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location="./testVideo.ogm"
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
** (gst-launch-0.10:27121): WARNING **: Size 60 is not a multiple of unit size 360960
Caught SIGSEGV accessing address 0x487c
ERROR: from element /pipeline0/ffmpegcsp0: subclass did not specify output size
Additional debug info:
gstbasetransform.c(1495): gst_base_transform_handle_buffer (): /pipeline0/ffmpegcsp0:
subclass did not specify output size
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
#0 0xffffe410 in __kernel_vsyscall ()
#1 0xb7deddae in __lll_mutex_lock_wait ()
#2 0xb7de9aac in _L_mutex_lock_51 () from /lib/tls/i686/cmov/libpthread.so.0
#3 0xb7de949d in pthread_mutex_lock ()
#4 0xb7e452e0 in g_static_rec_mutex_lock () from /usr/lib/libglib-2.0.so.0
#5 0xb7f1fa08 in ?? () from /usr/lib/libgstreamer-0.10.so.0
#6 0x080c1220 in ?? ()
#7 0x00000001 in ?? ()
#8 0x0809586c in ?? ()
#9 0x00000001 in ?? ()
#10 0x08095868 in ?? ()
#11 0xb7f7a2a8 in ?? () from /usr/lib/libgstreamer-0.10.so.0
#12 0xb7e8da80 in ?? () from /usr/lib/libglib-2.0.so.0
#13 0xb7f7a2a8 in ?? () from /usr/lib/libgstreamer-0.10.so.0
#14 0xb7f7a2a8 in ?? () from /usr/lib/libgstreamer-0.10.so.0
#15 0x00000000 in ?? ()
Spinning. Please run 'gdb gst-launch 27121' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.
Caught interrupt --
813 | Fri Aug 8 10:58:05 2008 | josephb | Configuration | Cameras | Cameras and gstreamer

In regards to the camera failure:

1) I forgot to reconnect that particular camera to the network (my fault), so that's why it was failing.

2) Even with the correct camera connected, I've realized that at full frame rate op440m is going to get a few frames and then fail, as I don't think it has a fast enough ethernet card. It will work on Rosalba, and will also work ssh-ing from Rosalba, because it is using a new ethernet card. It also works on my laptop, which is where I originally tested the command. One way to get around this is to increase the time between pictures by changing -l 0 to -l 1 (or higher), where the number after the "ell" is the number of seconds to wait between frame captures.
3) What I should do is figure out the UDP transmission plugins for gstreamer, compress first (using theoraenc, since it gets compression ratios of better than 100:1), and transmit that over the network.
I have since reconnected the camera, so it should work on Rosalba and any sufficiently well connected computer. For other machines like linux2 or op440m, try the following line (run on Mafalda via ssh -X mafalda, or on Rosalba, while in /cvs/cds/caltech/target/Prosilica/40mCode/SnapCode/):
CamSnap -F 'Mono8' -c 44058 -E 10000 -X 0 -Y 0 -H 480 -W 752 -l 1 -m 100 | gst-launch-0.10 fdsrc fd=0 blocksize=360960 ! video/x-raw-gray, height=480, width=752, bpp=8,depth=8,framerate=1/1 ! ffmpegcolorspace ! ximagesink
This will be at a much slower frame rate (1 per second) but should work on any of the machines. (Tested on linux2.)
828 | Tue Aug 12 12:21:13 2008 | josephb | Configuration | Cameras | Variation in fit over 140 images for GC650 and GC750

Used MATLAB to calculate Gaussian fits on 145 GC650 images and 142 GC750 images. These were individual images (no averaging) looking at the PSL output from May 29th, 2008. The GC650 and GC750 were looking at a split beam, but had different exposure values, slightly different distances to the nominal waist of the beam, and were not centered identically on the beam. Mostly this is a test of the fluctuations in the fit from image to image.

Note: mm refers to the size or position on the CCD or CMOS detector itself.
GC650:

                  Amplitude   X center (mm)   Y center (mm)   X waist (mm)   Y waist (mm)   Background offset from zero
Mean              0.3743      1.7378          2.6220          0.7901         0.8650         0.0047
Std deviation     0.0024      0.0006          0.0005          0.0005         0.0003         0.00001
Std/Mean (%)      0.6%        0.03%           0.02%           0.06%          0.04%          0.29%

GC750:

                  Amplitude   X center (mm)   Y center (mm)   X waist (mm)   Y waist (mm)   Background offset from zero
Mean              0.2024      2.5967          1.4458          0.8245         0.9194         0.0418
Std deviation     0.0011      0.0005          0.0005          0.0003         0.0005         0.00003
Std/Mean (%)      0.6%        0.02%           0.04%           0.04%          0.05%          0.07%
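A minimal sketch of how such a table can be tabulated from the per-image fit results (the column names and array layout are my assumptions):

    import numpy as np

    COLS = ["amplitude", "x center (mm)", "y center (mm)",
            "x waist (mm)", "y waist (mm)", "background offset"]

    def fit_statistics(fits):
        # fits: (N_images, 6) array, one row of fit parameters per image.
        fits = np.asarray(fits, dtype=float)
        mean, std = fits.mean(axis=0), fits.std(axis=0)
        for name, m, s in zip(COLS, mean, std):
            print(f"{name:>18s}  mean={m:.4f}  std={s:.5f}  "
                  f"std/mean={100*s/m:.2f}%")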
835 | Thu Aug 14 15:51:35 2008 | josephb | Summary | Cameras | FOUND! The Missing Standoff!

We used a zoom lens on the GC750 to take this picture of the standoff while inside a plastic rubber-glove bag. The standoff with bag is currently scotch-taped to the periodic table of the elements.
Attachment 1: standoff.png
839 | Fri Aug 15 11:52:32 2008 | josephb | Configuration | Cameras | Multi-computer display and recording of digital camera output

Through the magic of gstreamer, I've been able to play video live on one machine, compress the images, send them to another machine via UDP, and display them there as well. The "tee" function also allows one to save the images at the same time.
The command line used on the "server" (say Rosalba or Mafalda) is:
CamServe -F 'Mono8' -c 44058 -E 20000 -X 0 -Y 0 -H 480 -W 752 -l 0 -m 100 | gst-launch-0.10 fdsrc fd=0 blocksize=360960 ! video/x-raw-gray, height=480, width=752, bpp=8,depth=8,framerate=60/1 ! tee name=t1 t1. ! video/x-raw-gray, height=480, width=752, bpp=8,depth=8,framerate=60/1 ! ffmpegcolorspace ! ximagesink t1. ! video/x-raw-gray, height=480, width=752, bpp=8,depth=8,framerate=60/1 ! ffmpegcolorspace ! queue ! smokeenc keyframe=8 qmax=40 ! udpsink host="131.215.113.103" port=5000
This both displays the image and sends it to the host 131.215.113.103 in this case.
I've written a primitive shell script that does most of this.
It requires at minimum an IP address. You can also give it a number of images (the -m value) and an exposure value (-E 20000).
Currently in /cvs/cds/caltech/target/Prosilica/40mCode/SnapCode/ there is a script called CameraServerScript.
Typing in "CameraServerScript 131.215.113.107" would send it to that IP address.
Typing in "CameraServerScript 131.215.113.107 500 40000" would run for 500 images at an exposure value of 40000.
To actually receive, you need gstreamer installed and run the following command:
gst-launch udpsrc port=5000 ! smokedec ! queue ! ffmpegcolorspace ! ximagesink sync=false
Make sure you have the right IP address to send to.
Still working on multicasting (basically a server constantly sends out images, and clients subscribe to the multicast).
847 | Mon Aug 18 15:32:18 2008 | josephb | Configuration | Cameras | How to multicast with gstreamer and GigE Cameras

In order to get multicasting to work, one simply needs to understand the address scheme.
In general, the address range 224.0.0.0 - 239.255.255.255 is reserved for multicasting. Within this address space, there are some base-level operations in the 224.0.0.x range which shouldn't be interfered with.
For a single site, the address range between 239.252.0.0 and 239.255.255.255 is probably best.
Gstreamer and the current 40m network hubs are designed to handle this kind of communication already, so one merely needs to point them at the correct addresses.
While in /cvs/cds/caltech/target/Prosilica/40mCode/SnapCode type:
CamServe -F 'Mono8' -c 44058 -E 20000 -X 0 -Y 0 -H 480 -W 752 -l 0 -m 300 | gst-launch-0.10 fdsrc fd=0 blocksize=360960 ! video/x-raw-gray, height=480, width=752, bpp=8,depth=8,framerate=60/1 ! ffmpegcolorspace ! queue ! smokeenc keyframe=8 qmax=40 ! udpsink host=239.255.1.1 port=5000
This will multicast to the 239.255.1.1 address, using port 5000.
On the machine you wish to subscribe type:
gst-launch udpsrc multicast-group=239.255.1.1 port=5000 ! smokedec ! ffmpegcolorspace ! ximagesink sync=false
861 | Wed Aug 20 12:39:11 2008 | Eric | Summary | Cameras | Weekly Summary

I attempted to model the noise produced by the mirror defects in the ETMX images, in order to better verify that the fit to the beam Gaussian in these images is actually accurate. My first attempt treated the defects as random Gaussians scaled by the power of the beam's Gaussian. This didn't work at all (it didn't really look like the noise on the ETMX), and resulted in very different behavior from the fitting software (it fit to one of the noise peaks instead of the beam Gaussian). I'll try some other models another time.
I made a copy of the ezcaservo source code and added options for minimum value, maximum value, and slew rate limits. This should allow the camera code to servo on ITMX without accidentally driving the mirror too far or too fast. In order to get the code to recompile, I had to strip out the part of the servo that changed the step value based on the amount of time that had elapsed (it relied on some GDS libraries and header files). Since the amount of time that passes is reasonably constant (about 2-3 steps per second) and the required accuracy for this particular purpose isn't extremely high, I don't think it will matter very much.
I put together two MATLAB functions that attempt to convert pixel position in an image to actual position in real space. The first function takes four points that have known locations in real space (with respect to some origin which the camera is pointing at) and compares them to where those 4 points fall in the image. From the distortion of the four points, it calculates the three rotational angles of the camera, as well as a scaling factor that converts pixels to real spatial dimensions. The second function takes these 4 parameters and 'unrotates' the image, yielding the positions of other features in the image (though they must be in the same flat plane) in real space. The purpose of this is to allow the cameras to provide positions in physically meaningful units. It should also decouple the x and y axes so that the two dimensions can be servo'd on independently. Some results are attached; the 'original' image is the image as it came out of the camera (units in pixels), while the 'modified' image is the result of running the two functions in succession. The four points were the corners of the 'restricted access' sign and of the TV screen, while the origin was taken as the center of the sign or the TV. The accuracy of the transformation is reasonably good, but seems to depend considerably on making sure that the origin chosen in real space matches the origin in the image. To make these the same, they will be calculated by taking the intersection of the 2 lines defined by the 2 sets of diagonal points in each image. The first function will remain in MATLAB, since it only needs to be run once each time the camera is moved. The second function must be ported to C, since the transformation must be done in realtime during the servo.
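The MATLAB functions themselves aren't reproduced here, but an equivalent pixel-to-plane mapping can be sketched with a planar homography estimated from the same four known points (a direct linear transform; this formulation is mine, not necessarily the Euler-angle one described above):

    import numpy as np

    def estimate_homography(img_pts, real_pts):
        # img_pts, real_pts: 4x2 arrays of corresponding (x, y) points.
        rows = []
        for (u, v), (x, y) in zip(img_pts, real_pts):
            rows.append([u, v, 1, 0, 0, 0, -x*u, -x*v, -x])
            rows.append([0, 0, 0, u, v, 1, -y*u, -y*v, -y])
        # The homography is the null vector of this 8x9 system.
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 3)

    def pixel_to_real(H, u, v):
        # Map an image pixel (u, v) to coordinates on the real-space plane.
        x, y, w = H @ np.array([u, v, 1.0])
        return x / w, y / w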
Joe and I attempted another scan of the PMC this morning. We turned the laser power down by a factor of ~50 (reflection off of the unlocked PMC went from ~118 to ~2.2) and blocked one beam in the MZ. We scanned from 40 V to 185 V (-1 to -4.25 on the PZT ramp channel) with periods of 60 seconds and 10 seconds. In both cases, thermal effects were still clearly visible. We turned the laser power down by another factor of 2 (~1 on the PMC reflection channel), and did a long scan of 300 seconds and a short scan of 10 seconds. The 10 second scan produced what may be clean peaks, although there was clear digitization noise, while the peaks in the 300 second scan showed thermal effects. I've yet to actually analyze the data closely, however.
Attachment 1: OriginalSignImage.png
Attachment 2: ModifiedSignImage.png
Attachment 3: OriginalTVImage.png
Attachment 4: ModifiedTVImage.png
880 | Mon Aug 25 14:42:09 2008 | Eric | Configuration | Cameras | ETMX Digital Camera

I changed the lens on the camera looking at the ETMX to a 16mm, 1:1.4 zoom lens. This is in preparation for measuring a couple of parameters that depend on the camera's position and angle, so please avoid repositioning it for a couple of days.
891 | Wed Aug 27 12:09:10 2008 | Eric | Summary | Cameras | Weekly Summary

I added a configuration file parser to the Snap code. This allows all command line parameters (exposure time, etc.) to be saved in a file and loaded automatically. It also provides a method of loading the parameters that transform a point from its location on the image to its location in actual space (loading these parameters on the command line would substantially clutter it). The code is now fully set up to test servo-ing one of the mirrors again, and I will test this as soon as the PMC board stops being broken and I can lock the X-arm.
I also took an image of the OSEMs on ETMX and applied the rotation transform code to determine the parameters to pass to Snap. The results were alpha = 2.9505, beta = 0.0800, gamma = -2.4282, c = 0.4790. These results are reasonable but far from perfect. One of the biggest sources of error was in locating the OSEMs: it is difficult to determine where in the spot of light the OSEM actually is, and in one case the center was hidden behind another piece of equipment. Nevertheless, the parameters are good enough to use in a test of the ability to servo, though it would probably be worth trying to improve them before using them for other purposes. The original and rotated images are attached.
I've begun working on calculations to figure out how much power loss can occur due to a given cavity misalignment or change in a mirror's radius of curvature from heating. The goal is to determine how well a camera can indirectly detect these power losses, since a misalignment produces a change in beam position and a change in radius of curvature produces a change in beam waist, both of which can be measured by the camera.
Joe and I hunted down the requisite equipment to amplify the photodiode signal at the output of the PMC, allowing us to turn the laser power down even more during a scan of the PMC, hopefully avoiding thermal effects. This measurement can be done once the PMC works again.
Attachment 1: originalETMX.png
Attachment 2: rotatedETMX.png
914 | Wed Sep 3 12:26:49 2008 | Eric | Summary | Cameras | Weekly Summary

Finished simulating the end mirror error in order to test whether the fitting code still provides reasonable answers despite the noise caused by the defects on the end mirror. The model I used to simulate the defects is far from perfect, but it's good enough given the time I have remaining, and I have no reason to believe the differences between it and the real noise would cause any radical changes in how the fit operates. A comparison between a modeled image and a real image is attached. Average error (difference between the estimated value and the real value) for each of the parameters is:
For the fit:
Max Intensity: 2767.4 (Max intensities ranged from 8000 to 11000)
X-Position: 0.9401 pixels
X Beam Waist: 1.3406 pixels (beam waists ranged from 35 to 45)
Y-Position: 0.9997 pixels
Y Beam Waist: 1.3059 pixels (beam waists ranged from 35 to 45)
Intensity Offset: 12.7705 (Offsets ranged from 1000 to 4000)
For the center of mass calculation (with a threshold that cut off everything above 13000)
X-Position: 0.0087 pixels
Y-Position: 0.0286 pixels
Thus, the fit is generally trustworthy for all parameters except maximum intensity, for which it is very inaccurate. Additionally, this shows that the center-of-mass calculation actually does a much better job than the fit when this much noise is in the image. For the end mirrors, the fit is really only useful for finding the beam waist, and even this is not extremely accurate (~3% error). All the parameters for the modeling are on the svn in /trunk/docs/emintun/MatLabFiles/EndMirrorErrorSimulation.txt.
Finished the calculations that convert a beam misalignment, measured as a change in the beam position on the two mirrors, to a power loss in the cavity. Joe calculated the minimum measurable change in beam position to be around a tenth of a pixel, which corresponds to half a micron when the beam is directly incident on the camera. This gives the ability to measure fractional power losses as low as 2*10^-10 for the 40m main arm cavities. To me, this seems unusually low, though it scales with beam position squared, so if anything else limited the ability to measure changes in the beam position, it would have a large effect on the sensitivity to power losses. Additionally, it scales inversely with length, so shorter cavities provide less sensitivity.
This morning Joe and I tested the ability of the camera code to servo ITMX in order to change the beam's position on the ETMX. Two major things have changed since the last time we tried this. First, the calculated beam center that gets output to the EPICS channels now first goes through a transform that converts it from pixels into physical units, and should account for the oblique angle of the camera. The output to the EPICS channels should now be in the form of 'mm from the center of the optic', although this is not very precise at the moment. Second, the servo was run with a modified servo script that includes options to set a minimum, maximum, and slew rate in order to protect the mirrors from being swung around too much. The servo was generally successful: for a given x-position, it was capable of changing the yaw of ITMX so that the position seen on the camera moved to the new location. The biggest problem is that the x and y dimensions do not appear to be decoupled (the transform converting to physical units should have done this), so that modifying the yaw of the mirror changed both the x and y positions (y about half as much) as output by the camera. This could cause a problem when trying to servo in both dimensions at once, since one servo could end up opposing the other. I don't know the cause of this problem yet, since the transform currently in use appears to be correctly orienting the image.
Attachment 1: SimulatedErrorComparison.png
1321 | Wed Feb 18 21:03:22 2009 | rana | Update | Cameras | ETMY Camera work not elogged!

The control room video is showing us a false ETMY image. Who worked on the ETMY camera or video today??!!
1333 | Mon Feb 23 16:42:08 2009 | josephb | Configuration | Cameras | Camera Beta Testing

I've set up the GC650 camera (ID 32223) to look at the mode cleaner transmission. I've also added an alias to the camera server and client for this camera.

To use, type "pserv1 &" on the machine you want to run the server on and "pcam1 &" on the machine you want to actually view the video on. At the moment, this only works for the 64-bit CentOS 5 machines: Rosalba, Allegra, and Ottavia.
Note, you will generally want to start the client first (pcam1 or pcam2) to see if a server is already running somewhere. The server will complain that it can't connect to a camera if it already is in use.
I've set up the GC750 camera (ID 44026) to look at the right-most analog quad TV. This can be run by using "pserv2 &" and "pcam2 &".
If the image stops playing, you can try stopping and restarting the server to see if it will start back up.
You can also try increasing or decreasing the exposure, to see if that helps. The increase and decrease buttons change the exposure by a factor of 2 for each press.
Lastly, the "Read Epic Channel" button reads the current value from the channel C1:PEM-stacis_EEEX_geo and uses it as the exposure value, in microseconds (in principle 10 to 1000000 should work). For example, to expose for 10000 microseconds, use "ezcawrite C1:PEM-stacis_EEEX_geo 10000" and press the "Read Epic Channel" button.
1355 | Wed Mar 4 17:20:04 2009 | josephb | Update | Cameras | Camera code upgrades

I've updated the digital camera python code and changed the network topology.

At the moment, both cameras are connected to a small gigabit switch which only talks to Ottavia. This means all camera servers must be run on Ottavia, although camera output is still UDP multicast, so any machine capable of running gstreamer can pick up the images.
The server and client programs now have the ability to read a configuration file for the setup of the cameras. They default to pcameraSettings.ini, but this default can be changed with the -c or --config option.
For example, "serverV3.py --config pcam1.ini" will run the server using the pcam1.ini settings file. Similarly, "client.py --config pcam1.ini" will also take the IP settings from the config file so that it knows at which port and IP to listen.
These programs and .ini files have been placed in /cvs/cds/caltech/apps/linux64/python/pcamera/
I've updated the cshrc.40m aliases to use the new configuration file options, so now pcam1 calls "client.py -c pcam1.ini" in the above directory.
So to start a client, use pcam1 or pcam2 (for the 32223 camera in the PSL looking at MC trans, or 44026 looking at an analog monitor in the control room, respectively). These can be run on Allegra, Rosalba, or Ottavia at the moment.
To start a server, use pserv1 or pserv2. These *must* be run on Ottavia.
I've also added a -n or --no-gui option at Yoichi's request, which just starts up and plays with no graphical GUI.
Lastly, I've made some changes to the base pcamerasrc.py file which should make the display more robust. After a failed transmission of an image from the camera to Ottavia, it re-attempts up to 10 times before giving up, as sketched below. I'm hoping this will make it more robust against packet loss. The change in network topology has also helped, allowing 640x480 video to be transmitted from both cameras for tens of minutes before a packet loss causes a stop.
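The retry behaviour amounts to something like this sketch (grab_frame is a hypothetical stand-in for the pcamerasrc capture call, not the actual method name):

    MAX_RETRIES = 10

    def robust_grab(camera):
        # Re-request a frame up to 10 times before giving up.
        for _ in range(MAX_RETRIES):
            frame = camera.grab_frame()   # hypothetical capture call
            if frame is not None:
                return frame
        raise RuntimeError("no frame after %d attempts" % MAX_RETRIES)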
1385 | Wed Mar 11 11:30:15 2009 | josephb | Configuration | Cameras |

I modified the Video.db file used by c1aux, located in /cvs/cds/caltech/target/c1aux.

I added the following channels to the db file, intended for either read-in or read-out by the digital camera scripts:
C1:VID-ETMY_X_COM
C1:VID-ETMY_Y_COM
C1:VID-ETMY_X_STDEV
C1:VID-ETMY_Y_STDEV
C1:VID-ETMY_XY_COVAR
C1:VID-ETMY_EXPOSURE
C1:VID-ETMY_GAIN
C1:VID-ETMY_X_UL
C1:VID-ETMY_Y_UL
C1:VID_ETMY_X_SIZE
C1:VID_ETMY_Y_SIZE
A better naming scheme can probably be devised, but these will do for now.
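For reference, entries in an EPICS .db file like Video.db look roughly like the following (the record types and fields here are my assumptions, not copied from the actual file):

    # Hypothetical Video.db entries; actual record types/fields may differ.
    record(ai, "C1:VID-ETMY_X_COM")
    {
        field(DESC, "Beam X centroid from camera fit")
        field(PREC, "3")
    }
    record(ao, "C1:VID-ETMY_EXPOSURE")
    {
        field(DESC, "Camera exposure time (us)")
        field(DRVL, "10")
        field(DRVH, "1000000")
    }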
1496 | Sun Apr 19 11:34:33 2009 | josephb | HowTo | Cameras | USB Frame Grabber - How to

To use the Sensoray 2250 USB frame grabber:
Ensure you have the following packages installed: build-essential, libusb-dev
Download the Linux manual and linux SDK from the Sensoray website at:
http://www.sensoray.com/products/2250data.htm
Go to the Software and Manual tab near the bottom to find the links. The software can also be found on the 40m computers at /cvs/cds/caltech/users/josephb/sensoray/
The files are Manual2250LinuxV120.pdf and s2250_v120.tar.gz
Run the following commands in the directory where you have the files.
tar -xvf s2250_v120.tar.gz
cd s2250_v120
make
cd ezloader
make
sudo make modules_install
cd ..
At this point plug in the 2250 frame grabber.
sudo modprobe s2250_ezloader
Now you can run the demo with
./sraydemo or ./sraydemo64
Options will show up on screen. A simple set to start with is "encode 0", which sets the recording type; "recvid test.mpg", which starts recording to the file test.mpg; and "stop", which stops recording. Note there is no on-screen playback. One needs an installed mpeg player to view the saved file, such as Totem (which can screen-cap to .png format) or mplayer.
All these instructions are on the first few pages of the Manual2250LinuxV120 pdf.
1497 | Sun Apr 19 11:51:05 2009 | josephb | Update | Cameras | Mafalda may need an update

I tried installing libusb-dev on Mafalda in order to try getting the USB frame grabber to work on it, but could not, as it could not download the package.

I then tried to do a sudo apt-get update, which failed completely, as the repository seems to have ceased existing. Basically, all I got were 404 Not Found errors.
It turns out Mafalda is still running Ubuntu 7.04, whose support ended in late 2008. So there are a couple of things that can be done:
1) Ignore it, and simply not update Mafalda anymore. This also means some newer software and hardware simply won't work with it (like the USB frame grabber).
2) Try to find another, unofficial repository which still has all of the Ubuntu 7.04 packages.
3) Upgrade to a newer, still supported Ubuntu, such as 7.10, 8.04, or 8.10.
I'd personally lean towards the 3rd option and go to the 8.04 long-term support version. If people agree, I could do the upgrade sometime Monday or Tuesday.
1499 | Mon Apr 20 11:57:27 2009 | rob | Update | Cameras | Mafalda may need an update

Quote:
I tried installing libusb-dev on mafalda in order to try getting the usb frame grabber to work on it, but could not as it could not download the package.
I then tried to do a sudo apt-get update, which failed completely, as the repository seems to have ceased existing. Basically I had all 404 Not Found errors.
Turns out Mafalda is still running Ubuntu 7.04, whose support ended late 2008. So there's a couple things that can be done:
1) Ignore it, and simply not update Mafalda anymore. This also means some newer software and hardware simply won't work with it (like the usb frame grabber)
2) Try to find another, unofficial repository which still has all of the Ubuntu 7.04 packages.
3) Upgrade to a newer, still supported Ubuntu, such as 7.10, 8.04, or 8.10.
I'd personally lean towards the 3rd option, and go to the 8.04 long term support version. If people agree with it, I could do the upgrade sometime Monday or Tuesday.
I don't see a reason to proliferate operating systems. Is there any reason we actually need Ubuntu? Can we put CentOS on it?
1501 | Mon Apr 20 18:36:37 2009 | rana | Update | Cameras | Mafalda may need an update

Sadly, the sensoray crap doesn't seem to build on CentOS. I too would prefer a homogeneous solution, but I don't know how to make this happen without punishing Joe with sensoray driver development on CentOS.
1581 | Wed May 13 12:41:14 2009 | josephb | Update | Cameras | Timing and stability tests of GigE Camera code

At the request of people down at LLO, I've been trying to work on the reliability and speed of the GigE camera code. In my testing, after several hours the code would tend to lock up on the camera end. It was also reported at LLO that after several minutes the camera display would slow down, but I haven't been able to replicate that problem.

I've recently added some additional error checking and have updated to a more recent SDK, which seems to help. Attached are two plots of the frames per second of the code. In this case, the frames per second are measured from the time between calls to the C camera code for a new frame for gstreamer to encode and transmit. The data points in the first graph are actually the averaged times for sets of 1000 frames. The camera was sending 640x480 pixel frames with an exposure time of 0.01 seconds. Since the FPS was mostly between 45 and 55, the code is taking roughly another 0.01 seconds to process, encode, and transmit each frame on top of the exposure.
During the test, the memory usage of the server code was roughly 1% (or 40 megabytes out of 4 gigabytes), and it used about 50% of one CPU.
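The measurement described amounts to something like this sketch (get_frame stands in for the call into the C camera code; the function name is mine):

    import time

    def measure_fps(get_frame, n_blocks=10, block=1000):
        # Average the frame-to-frame time over blocks of 1000 frames
        # and report each block as a frames-per-second value.
        rates = []
        for _ in range(n_blocks):
            t0 = time.time()
            for _ in range(block):
                get_frame()
            rates.append(block / (time.time() - t0))
        return rates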
Attachment 1: newCodeFPS.png
Attachment 2: newCodeFPS_hist.png
1590 | Fri May 15 16:47:44 2009 | josephb | Update | Cameras | Improved camera code

At Rob's request, I've added the following features to the camera code.
The camera server, which can be started on Ottavia by just typing pserv1 (for camera 1) or pserv2 (for camera 2), now has the ability to save individual jpeg snapshots, as well as to take a jpeg image every X seconds, as defined by the user.
The first text box is for the file name (i.e. ./default.jpg will save the file to the local directory and call it default.jpg). If the camera is running (i.e. you've pressed start), pressing "Take Snapshot to" will take an image immediately and save it. If the camera is not running, it will take an image as soon as you do start it.
If you press "Start image capture every X seconds", it will do exactly that. The file name is the same as for the first button, but it appends a time stamp to the end of the file.
There is also a video recording client now. This is accessed by typing "pcam1-mov" or "pcam2-mov". The text box sets the file name. It currently uses the open source Theora encoder and the Ogg format (.ogm). Totem is capable of reading this format (and I believe vlc is as well). This can be run on any of the Linux machines.
The viewing client is still accessed by "pcam1" or "pcam2".
I'll try rolling out these updates to the sites on Monday.
The configuration files for camera 1 and camera 2 can be found by typing in camera (which is aliased to cd /cvs/cds/caltech/apps/linux64/python/pcamera) and are called pcam1.ini, pcam2.ini, etc.
1697 | Wed Jun 24 12:04:22 2009 | Zach | Update | Cameras | SURF entry

This week, I've been reading some literature concerning PLLs and familiarizing myself with Linux, MATLAB, and high-pass filter circuits. In MATLAB, I started constructing matrices to be used for a beam path analysis from the laser output to the CCD camera. I also built a simple high-pass filter on a breadboard that Joe and I are currently testing with the spectrum analyzer.
1712 | Wed Jul 1 11:04:27 2009 | Zach | Update | Cameras | GigE Phase Camera

This past week, I have been building a sine wave rectifier and trying to write a simple program that displays a CCD image, to familiarize myself with the code. I also wrote a progress report in which I included the following images of the sine wave rectifier circuit as well as the optical chain including the phase-locked loop. The Hirose connector arrived, so I can begin soldering the electronics together and testing the trigger box with the CCD. I am waiting on the universal PDH box as well as another fiber coupler to begin setting up the optics. In order to avoid the frustrations associated with sending a laser beam down a long pipe to an optical bench across the room, I will be transmitting laser 1 to the CCD by means of a fiber optic cable and dealing with the alternative new and exciting frustrations.
Attachment 1: trigger.jpg
Attachment 2: fig1b.pdf
1721 | Wed Jul 8 11:08:43 2009 | Zach | Update | Cameras | GigE Phase Camera

The plan for the optical setup has been corrected after we realized that it would be impossible to isolate a 29.501 MHz frequency from a 29.499 MHz one, because they are so close in value. Instead, we decided to adopt the setup pictured below. This way, the low-pass filter should have no trouble isolating 29.501 - 29.5 MHz from 29.501 + 29.5 MHz. Also, we decided to scrap the idea of sending Alberto's laser through a fiber optic cable after hearing rumors of extra lasers. Since I shouldn't have to share a beam when the second laser comes in, I plan on setting up both lasers on the same optics bench. I've been working on the software while waiting for supplies, but I should be able to start building the trigger box today (assuming the four-pair cable is delivered).
Attachment 1: fig1.pdf
Attachment 2: fig2.pdf
1751 | Wed Jul 15 14:42:31 2009 | Zach | Update | Cameras | GigE Phase Camera

Lately, I have been able to externally trigger the camera using a signal generator passing through the op-amp circuit that I built. The op-amp circuit stabilizes the jitter in the sine wave from the signal generator and rectifies the wave. I wrote the calculations into the code, allowing me to find the phase and amplitude from the images I take. I still need to develop code to plot these arrays of phase and amplitude.
The mysterious dark band at the top of the CCD images continues to defy explanation. However, I have found that it only appears for short exposure times, even when the lens is completely covered. During the next couple of days, I will try to write a routine to correct for this structure in the dark field.
Koji recommended that we use the optical setup pictured below. This configuration would require fewer optics, and we would rely on slight misalignments between the carrier and reference beams, instead of a wavefront-deforming lens, to test the effectiveness of the phase camera.
Attachment 1: fig1koji.pdf
1753 | Wed Jul 15 18:22:15 2009 | Koji | Update | Cameras | Re: GigE Phase Camera

Quote: Koji recommended that we use the optical setup pictured below. Although it uses fewer optics, I can't think of a way to test the phase camera using this configuration because any modulation of the wavefront with a lens or whatever would be automatically corrected for in the PLL so I think I'll have to stick with the old configuration.
I talked with Zach, so this is just a note for the others.

The setup I suggested was totally equivalent to the setup proposed in entry http://131.215.115.52:8080/40m/1721, except that the PLL PD sees not only 29.501 MHz, but also 1 kHz and 59.001 MHz. These additional beat notes are excluded by the PD and the PLL servo. In any case, the beat at 1 kHz is present at the camera. So if you play with the beamsplitter alignment, you will see not only the perfect Gaussian picture, but also a distorted picture resulting from mismatch of the two wavefronts. That's the fun part!

The point is that you can get an equivalent type of test with fewer optics and less effort. In particular, I guess this setup would not be the final goal, so these features would be nice for you.
1778 | Wed Jul 22 14:44:57 2009 | Zach | Update | Cameras | GigE Phase Camera

This past week, I have mostly been debugging my software. I have tried to use the fluorescent lights to test the camera, but I can't tell for sure whether my code is finding the correct amplitude and phase. I am currently using Mathematica to double-check my calculations in solving for the phase and amplitude.
Also, I have taken dark field images using a lens with a closed shutter. I have found that the dark band across the top of the images only appears after the camera heats up. Also, there is an average electronic noise of 14 with a maximum of 40. However, this electronic noise, as well as any consistent ambient noise, will be automatically corrected for in the calculations I'm using, because I'm taking differences between the CCD images to calculate relative phases and amplitudes.
I should be able to start setting up optics and performing better tests of my software this week.
1807 | Wed Jul 29 14:22:33 2009 | Zach | Update | Cameras | GigE Phase Camera

This week, Joe and I have been setting up the laser and optics. The Mephisto laser is emitting a very ugly beam that we can hopefully remedy using an iris and a lens. After scanning the beam width at a few different distances from the laser, I am currently trying to determine the appropriate lenses to use.
1822 | Mon Aug 3 18:56:59 2009 | Zach | Update | Cameras | GigE Phase Camera

While aligning the optics, we tried to start up the CCD. Although nothing should have changed since the last time I used it, the code claimed it could not find the camera. All the right LEDs are lit up. The only indication that something is awry is that the yellow LED on the camera isn't blinking as it does when there is ethernet activity.
1824 | Tue Aug 4 11:45:29 2009 | Zach | Update | Cameras | GigE Phase Camera

The camera wasn't working because the router has no built-in DHCP server. We had to manually start the server after rebooting the computer.
1861 | Fri Aug 7 17:46:21 2009 | Zach | Update | Cameras | The phase camera is sort of working

Shown below are plots of the amplitude and phase of the Mephisto laser light, modulated with a chopper as a square wave at about 1 kHz. The color bar for the phase should run from -pi to pi, and it does when I don't accidentally comment out the color bar function. Anyway, the phase is consistently pi/4, or pi/4 plus or minus pi. Usually all three of these phases occur within the same image, as shown below. Also, the amplitude is a factor of two or so higher than it should be where this phase jump occurs. I think these problems are associated with the nature of the square wave. However, there is a software bug that appears to be independent of the input data: a rounding error causes the amplitude to jump to infinity at certain points. This happened for only a dozen or so pixels, so I deleted them from the amplitude plot shown below. I am currently working on more robust code that will use the Newton-Raphson method for nonlinear systems of equations.
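For context, a linear least-squares alternative to the Newton-Raphson approach (a sketch of my own, assuming frames triggered at known modulation phases): with I_k = c + A*cos(theta_k + phi), the model is linear in (c, A*cos(phi), A*sin(phi)), so per-pixel amplitude and phase follow from one matrix solve.

    import numpy as np

    def demodulate(frames, thetas):
        # frames: (K, H, W) image stack; thetas: (K,) trigger phases [rad].
        K, H, W = frames.shape
        M = np.column_stack([np.ones(K), np.cos(thetas), -np.sin(thetas)])
        coeffs, *_ = np.linalg.lstsq(M, frames.reshape(K, -1), rcond=None)
        c, p, q = coeffs                 # p = A cos(phi), q = A sin(phi)
        amp = np.hypot(p, q).reshape(H, W)
        phase = np.arctan2(q, p).reshape(H, W)   # runs from -pi to pi
        return amp, phase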
Attachment 1: ampAv.png
Attachment 2: phaseAv.png
1862 | Fri Aug 7 17:51:50 2009 | Zach | Update | Cameras | CMOS vs. CCD

The images that I just posted were taken with the CMOS camera. We switched from the CCD to the CMOS because the CCD was exhibiting much stronger blooming effects. Unlike with the CCD, there is a slight background structure if you look carefully at the amplitude image, but I can correct for this consistent background by taking a uniformly exposed image (placing a convex lens in front of the CMOS) and then dividing each frame of the laser wavefront by this background image.
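The planned correction is an ordinary flat-field division, roughly as sketched here (the mean normalization and epsilon guard are my additions):

    import numpy as np

    def flat_field(frame, background, eps=1e-6):
        # Normalize the background to its mean so the corrected frame
        # keeps its overall scale; eps guards against dead pixels.
        bg = background.astype(float)
        bg /= bg.mean()
        return frame.astype(float) / np.maximum(bg, eps)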
2120 | Mon Oct 19 18:14:28 2009 | rob | Update | Cameras | video switch broken

The Chameleon HB (by Knox) video switch that we use for routing video signals to the control room monitors is broken. Well, either it's broken, or something is wrong with the mv162 EPICS IOC which communicates with it via RS-232. Multiple reboots/resets of both machines have not yet worked. The CHHB has two RS-232 inputs; I switched to the second one, and there is now one signal coming through to a monitor, but no switching yet. I've been unable to debug further because we don't have anything in the lab (other than the omega iserver formerly used for the RGA logger) which can communicate with RS-232 ports. I've been trying to get this thing (the iserver) working again, but can't communicate with it yet. For now I'm just going to bypass the video switch entirely and use up all the BNC barrel connectors in the lab, so we can at least have the useful video displays back.
2304 | Fri Nov 20 00:18:45 2009 | rana | Summary | Cameras | Video MUX Selection Wiki page

Steve is summarizing the Video Matrix choices on this Wiki page:
http://lhocds.ligo-wa.caltech.edu:8000/40m/Electronics/VideoMUX
Requirements:
Price: < 5k$
Control: RS-232 and Ethernet
Interface: BNC (Composite Video)
Please check the page on Monday for a final list of choices, and add comments to the wiki page.
2314 | Mon Nov 23 16:28:12 2009 | steve | Summary | Cameras | Video switcher options

Quote:
Steve is summarizing the Video Matrix choices into this Wiki page:
http://lhocds.ligo-wa.caltech.edu:8000/40m/Electronics/VideoMUX
Requirements:
Price: < 5k$
Control: RS-232 and Ethernet
Interface: BNC (Composite Video)
Please check into the page on Monday for a final list of choices and add comments to the wiki page.
Composite video matrix switchers with 32 BNC inputs and 32 BNC outputs are listed.
2371 | Wed Dec 9 10:53:41 2009 | josephb | Update | Cameras | Camera client wasn't able to talk to server on port 5010; reboot fixed it

I finally got around to taking a look at the digital camera setup today. Rob had complained that the client had stopped working on Rosalba.

The code started up without complaint yet produced no window output, so it looked like a network problem. I tried rebooting Rosalba, but that didn't fix anything.

Using netstat -an, I looked for port 5010 on both Rosalba and Ottavia, since that is the port being used by the camera. Ottavia reported 6 established connections even after Rosalba had rebooted (Rosalba is 131.215.113.103). I can only presume 6 instances of the camera code had somehow shut down in such a way that they had not closed the connection.
[root@ottavia controls]#netstat -an | grep 5010
tcp 0 0 0.0.0.0:5010 0.0.0.0:* LISTEN
tcp 0 0 131.215.113.97:5010 131.215.113.103:57366 ESTABLISHED
tcp 0 0 131.215.113.97:5010 131.215.113.103:58417 ESTABLISHED
tcp 1 0 131.215.113.97:46459 131.215.113.97:5010 CLOSE_WAIT
tcp 0 0 131.215.113.97:5010 131.215.113.103:57211 ESTABLISHED
tcp 0 0 131.215.113.97:5010 131.215.113.103:57300 ESTABLISHED
tcp 0 0 131.215.113.97:5010 131.215.113.103:57299 ESTABLISHED
tcp 0 0 131.215.113.97:5010 131.215.113.103:57315 ESTABLISHED
I switched the code to use port 5022, which worked fine. However, I'm not sure what would have caused the original connection closure failures, as I tested several close methods (including the kill command on the server end used by the medm screen), and none seemed to generate this broken connection state. I rebooted Ottavia, which seemed to fix the connections and allowed port 5010 to work. I also tried creating 10 connections, which all seemed to run fine simultaneously. So it's not someone overloading that port with too many connections that caused the problem. It's as if the port stopped working somehow, which froze the connection status, but how or why I don't know at this point.
2464 | Tue Dec 29 04:28:27 2009 | kiwamu, rana, haixing | Update | Cameras | New Video Switch Installed

We have installed the new Video Matrix.

It's still in an intermediate state, so don't try to "fix" anything before Kiwamu and I get back onto it in the afternoon.
The status so far is that we have removed the old switch (it was a 256 input x 128 output !! mux) and installed the new one in the same rack. We have hooked it up to the CDS network and have set up its matrix by using the web interface (i.e. NOT EPICS).
Along the way, we discovered that there is a lack of impedance matching in the video wiring all over the 40m. Video signals are RF and need to be treated that way. The PSL signals are T'd around and sent on 50 Ohm cables to high-impedance monitor inputs.
We should eliminate any switches besides the new one (called Luciana) and control the PSL's Video Monitor from the main MUX interface. No more Rogue Video Switches.
We also found a couple of things concerning the RCR camera:

(1) The long cable which connects the RCR camera box and the video matrix doesn't work, although the signal is alive and we can see it on the local TV monitor near the PSL.

(2) The reflected beam going to the camera is too weak to see on the monitor. We found a strange polarizing cube splitter in front of the camera. We should modify it sooner or later.
2469 | Wed Dec 30 20:33:36 2009 | rana, alberto | Configuration | Cameras | ITMY & MC2 Camera work

We restored the good state of the ITMY camera and neatened both the MC2 and ITMY cameras.
The MC2 camera was driving a triple T jungle into some random cables and spoiling the image. We removed all T's and the MC2 camera now drives only The Matrix.
The ITMY camera was completely unmounted and T'd, so it was misaligned just by the force of gravity acting on its BNC cable. We swapped the lens for a reasonably sized one and remounted it in its can. We then used orange cable ties to secure the power and BNC cables for the MC2 and ITMY cameras, so that tugging on the cables doesn't misalign the cameras. This is part of the cameras' SOP.

No more driving 50 Ohm cables and T's with video cables, Steve! If you need a portable video, just use a spigot of the Matrix and then you can control it with a web browser.

I also wiped out the D40's memory card after uploading all of the semi-useful files to the Picasa page.