Looks like there was a power glitch at around 10am today.
All frontends, FB, Megatron, and Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. The PSL was tripped, probably the end lasers too (yet to check). The slow machines seem alright (they respond to ping, and I can also telnet into them).
Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some CDS issues, like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for some time.
GV Jun 5 6pm: From my discussion with Jamie, I gather that the dmesg output is not written to file because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically).
[Koji, Rana, Gautam]
The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:
Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error?
Koji then restarted the IMC autolocker and FSS slow processes on Megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and the PMC transmission is also pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14,500 counts, while we are used to more like 16,500 counts nowadays.
Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params.
Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.
Attachment #4 - Warning lights on C1IOO
Now IFO work like fixing ASS can continue...
Atm. 1 & 5: the ruby standoff (R ~10 mm) as it is seated on the Al SOS test mass
Atm. 2, 3 & 4: the chipped long edges, with SOS suspension wire (OD 43 micron) as a scale calibration
Ruby wire standoffs were received from China. I looked at one of them with our small USB camera. They did a good job. The long edges of the prism are chipped.
The v-groove cutter must avoid them. Pictures will follow.
While attempting to execute the Python/Pylon code for the camera server, camera_server.py, the interpreter couldn't locate the pylon-5.0.5.so file. So I included the path for the required .so file as
So with the file linked, the Python program runs, but then throws an error at self.text = gst.element_factory_make("textoverlay", "text0").
The code reads-
Not sure what I am missing here.
I think there might be a problem with the fact that the installation of the various components, such as the .ini file and the Pylon software, is in directories different from the ones Joe B. specifies in his paper.
Instead of modifying the paths in the code itself, I tried creating the paths to match the code-
Update in /ligo directory
/cds/caltech/c1/camera/L1-CAM-MC1.ini was created, and then I ran camera_server.py from scripts/GigE/SnapPy as
./camera_server.py -c /ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini
This printed the following on the terminal-
finished loading settings from /ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini, followed by a list of the settings in the configuration file.
However, the gst.ElementNotFoundError: textoverlay still persists.
Probably I could try putting all files in exactly the same directories as specified in the document.
So with the file linked, the Python program runs, but then throws an error at self.text = gst.element_factory_make("textoverlay", "text0").
Right - we want to be compatible with new versions of the code, so instead of moving the files to where the code wants them, you should make symlinks. The symlinks go in the place that the code wants and point back to the place where we have the files now.
For the textoverlay, you can just comment it out for now. We can add it back in later once we decide on how to label the video.
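A minimal sketch of the symlink arrangement described above (the two paths are placeholders for "where the code looks" and "where the file actually lives"):

import os

# Placeholder paths for illustration only
actual = '/ligo/cds/caltech/c1/camera/L1-CAM-MC1.ini'   # where the file really is
expected = '/cds/caltech/c1/camera/L1-CAM-MC1.ini'      # where the code looks for it

if not os.path.isdir(os.path.dirname(expected)):
    os.makedirs(os.path.dirname(expected))
if not os.path.lexists(expected):
    os.symlink(actual, expected)   # the link points back to the real file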
This evening, Gautam helped me resolve the error I had been encountering. I had been trying to run the code on Allegra, and that threw up the gst.element_factory_make("textoverlay", "text0"); gst.ElementNotFoundError: textoverlay error.
As an attempt to resolve the error, I had set up the paths to match those mentioned in the document.
However as it turns out, it wasn't really needed.
When Gautam ran the code from Pianosa, the following error showed up
gst.element_factory_make("x264enc", "en"); gst.ElementNotFoundError: x264.
We found that x264 (the encoder library) and x264enc (the GStreamer element) are different entities.
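For reference, a quick way to check which elements the GStreamer registry actually knows about (a sketch using the pygst 0.10 bindings that the camera code appears to use):

import pygst
pygst.require('0.10')
import gst

# element_factory_find returns a factory if the plugin is installed, None otherwise
for name in ('textoverlay', 'x264enc'):
    print name, gst.element_factory_find(name)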
Gautam then installed the ubuntu-restricted-extras package with the following
Eventually, on running the code, the message 'starting server' was displayed on the screen. This was then interrupted by another error: GenICAM_3_0_Basler_pylon_v5_0::RuntimeException.
So there is apparently a problem executing the code on Allegra specifically, since the camera server starts running on Donatella and Pianosa.
I will now be looking into this newly encountered error and also be setting up the symlinks for the various paths in the code.
With the network config, mounting, and symlinks set up, rossa can be used as a workstation for dataviewer and MEDM. For DTT, no luck, since there is so far no lscsoft support past Ubuntu 14.
50 mm f/1.8 lens with the Basler camera at the MC2 face, held with micro clamp 350617. Camera manuals plus
Thanks to Steve and Gautam, the IMC was locked.
I was able to capture images with the Rainbow 50 mm lens at exposure times of 100, 300, 1000, 3000, 10000, and 30 microseconds (the pictures are in the same order). These pictures were taken at a gain of 300 and a black level of 64.
Special credit to Steve, who spent a lot of time helping me set up the hardware and focus the camera on the beam spot.
I can't thank you enough Steve! :)
In the afternoon, Steve and I tried to install the camera near MC2 and get some images of the mirrors. Due to the restricted field of view of the lens on the camera, after many efforts to focus on the optic, we were able to get this image. MC2 was unlocked, so this image captures some resonating higher-order mode.
With MC2 locked, I will get some images of the mirror at different exposure times and try to get an HDR image.
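For the HDR step, one standard recipe is the Debevec merge in OpenCV; a rough sketch (the filenames and exposure list are placeholders, not the actual data):

import cv2
import numpy as np

# Placeholder exposure times (seconds) and filenames for the bracketed images
times = np.array([100e-6, 300e-6, 1e-3, 3e-3, 10e-3], dtype=np.float32)
imgs = [cv2.imread('mc2_%dus.png' % int(t * 1e6)) for t in times]

calibrate = cv2.createCalibrateDebevec()   # recover the camera response curve
response = calibrate.process(imgs, times)
merge = cv2.createMergeDebevec()           # merge the stack into a radiance map
hdr = merge.process(imgs, times, response)
cv2.imwrite('mc2_hdr.hdr', hdr)            # write out as Radiance HDR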
The Y arm AC thermostat was calibrated yesterday, after the cooling water relay replacement by Mike. The set temperature remains 70F.
The east end south wall temperature reads 22C.
Gautam and Steve,
The MEDM monitor & vac control screens had been totally blank since ~May 24, 2017. Experienced vacuum knowledge is required for this job.
IDENTIFY valve configuration:
How to confirm the valve configuration when all vac monitors are blank? Each valve has a manual-mechanical position indicator. Look at the pressure readings and the turbo pump controllers. The VAC NORMAL configuration was confirmed based on this information.
Preparation: disconnect valves (disconnect meaning: the valve closes and stays paralyzed) in this sequence: VC2, VC1 power, VA6, V5, V4 & V1 power, at ifo pressure 7.3E-6 Torr-it (it = InstruTech cold cathode gauge).
This gauge is independent of all other rack-mounted instrumentation, and it is still not logged.
Switching to this valve configuration with the valves disconnected ensures that the vacuum envelope will NOT be vented by an accidental voltage glitch or computer malfunction.
RESET v1Vac1; in 2-3 minutes (v1Vac1 - 2) the vac control screen started reading pressures & valve positions.
Connected cables to the valves (meaning: a valve will open if it was open before it was disconnected, and it will be controllable from the computer) in the following order: V4, V1 power, V5, VA6, VC2 & VC1 power, at ifo pressure 2E-5 Torr-it.
The vac configuration is now reading VAC NORMAL, ifo pressure 7.4E-6 Torr-it.
We have to hook up the InstruTech cold cathode gauge so that it is monitored and logged! This should be the substitute for the out-of-order CC1 pressure gauge.
Rana suggested taking a look at the Y-arm test mass actuator TFs (measured by driving the coils one at a time, with only local damping loops on, using the Oplev to measure the response to a given drive). Attached are the results from this measurement (I used the Oplev pitch error signal for all 8 measurements). Although the magnitude response for all coils have the expected 1/f^2 shape, there seems to be some significant (~10dB) asymmetry in both the ETM and ITM coils. The phase-response is also not well understood. If we are just measuring the TF of a pendulum with 1 Hz resonant frequency, then at and above 10Hz, I would expect the phase to be either 0 or 180 deg. Looks like there is a notch at 60 Hz somewhere, but it is unclear to me where the ~90 degree phase at ~100Hz is coming from.
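As a quick sanity check on the expected phase above resonance (a sketch, with an assumed Q for the pendulum mode):

import numpy as np

# Simple pendulum transfer function, x/F ~ 1/(f0^2 - f^2 + 1j*f*f0/Q)
f0, Q = 1.0, 5.0                        # 1 Hz resonance; the Q is a guess
for f in (10.0, 100.0):
    H = 1.0 / (f0**2 - f**2 + 1j * f * f0 / Q)
    print f, np.angle(H, deg=True)      # ~ -180 deg well above resonance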
For the ITM, the UL OSEM was replaced during the 2016 summer vent - the coil that is in there is now of the short OSEM variety, perhaps it has a different number of turns or something. I don't recall any coil balancing being done after this OSEM swap. For the ETM, it is unclear to me how long this situation has been like this.
Yesterday night, I tried to measure the ASS output matrix by stepping the ITM, ETM and TTs in PIT and YAW, and looking at the response in the various ASS error signals. During this test, I found the ETM and ITM pitch and yaw error signals to be highly coupled (the input matrix was diagonal). As Rana suggested, I think the whole coil driver signal chain from DAC output to coil driver board output has to be checked before attempting to fix ASS. Results from this investigation to follow.
Note: The OSEM calibration hasn't been done in a while (though the HeNes have been swapped out), but as Attachment #2 shows, if we believe the shadow sensor calibration, then the relative calibrations of the ITM and ETM Oplevs agree. So we can directly compare the TFs for the ITM and ETM.
Last good page: May 18, 2017
Not found (error message): May 19 - June 4, 2017
Blank plots: June 5, 2017
Randy Trudeau scanned our Windows laptop (Dell 13" Vostro) and Steve's memory stick for viruses. Nothing was found. The search continues...
Rana thinks that I'm creating these virus beasts by taking pictures with Dino Capture and/or Data Ray on the Windows machine...
I repeated the test of driving C1:SUS-<Optic>_<coil>_EXC individually and measuring the transfer function to C1:SUS-<Optic>_OPLEV_PERROR for Optic in (ITMX, ITMY, ETMX, ETMY, BS), coil in (LLCOIL, LRCOIL, ULCOIL, URCOIL).
There seems to be a few dB of imbalance between the coils on both ETMs, as well as on ITMX. ITMY and the BS seem to have pretty much identical TFs for all the coils. I will cross-check using OPLEV_YERROR, but is there any reason why we shouldn't adjust the gains in the coil output (not output matrix) filter banks to correct for this observed imbalance? The Oplev calibrations for the various optics are unknown, so it may not be fair to compare the TFs between optics (I guess the same applies to comparing TF magnitudes from coil to OPLEV_PERROR and OPLEV_YERROR; perhaps we should fix the OL calibrations before fiddling with coil gains...)
The anomalous behaviour of ITMY_UL (10dB greater than the others) was traced down to a rogue x3 gain in the filter module. This has been removed, and now the Y arm ASS works fine (with the original dither servo settings). The X arm dither still doesn't converge - I double checked the digital filters and all seems in order, so I will investigate the analog part of the drive electronics now.
I investigated the analog electronics in the coil driver chain by using awggui to drive a given channel with Uniform noise between DC and 8kHz, with an overall gain of 1000 cts. This test was done for both ITMs and the BS. The Whitening/De-Whitening was off during the test. I measured the spectra in
Attachment #1 - There is good agreement between all 3 measurements. To convert the DTT spectrum to Vrms/rtHz, I multiplied the Y-axis by 10V / ( 2*sqrt(2) * 2^15 cts). Between DC and ~1kHz, the measured spectrum everywhere is flat, as expected given the test conditions. The AI filter response is also seen.
Attachment #2 - Zoomed in view of Attachment #1 (without the AI filter part).
*The DTT plots have been coarse-grained to keep the PDF file size manageable. X (Y) axes are shared for all the plots in columns (rows).
Similar verification remains to be done for the ETMs, after which the test has to be repeated with the Whitening/DeWhitening engaged. But it's encouraging that things make sense so far (except perhaps the coil balancing can be better as suggested by the previous elog).
I've left both arms locked. The Y-arm dither alignment is working well again, but for the X arm, the loops that actuate on the BS are still weird. Nothing obvious in the tests so far though.
GV 6pm 8 Jun 2017: I realized the X arm transmission was being monitored by the high-gain PD and not the QPD (which is how we usually run the ASS). The ASC mini screen suggested the transmitted beam was reasonably well centered on the X end QPD, and so I switched to this after which the X end dither alignment too converged. Possibly the beam was falling off the other PD, which is why the BS loops, which control the beam spot position on the ETM, were acting weirdly.
will investigate the analog part of the drive electronics now.
Not related to this work:
I noticed the X-arm LSC servo was often hitting its limit - so I reduced the gain from 0.03 to 0.02. This reduced the control signal RMS, and re-acquiring lock at this lower gain wasn't a problem either. See attachment #3 (will be rotated later) for control signal spectra at this revised setting.
Updates in the He-Ne beam profiling experiment.
New and improved plots for the He-Ne profiling experiment
Font size has been increased to 30.
The plots are maximum size (following Rana's advice, I saved the plots as eps files (maximized) and converted them to pdf later).
There is a shaded region around the trendline that represents the parameter error.
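For reference, the shaded band is produced with matplotlib's fill_between around the fit curve (a sketch with placeholder arrays, not the real data):

import numpy as np
import matplotlib.pyplot as plt

# Placeholder fit curve and 1-sigma band; the real ones come from the fit
# and the covariance matrix described below
x = np.linspace(0, 25, 200)
fit = 0.5 * (1 + np.tanh(x - 12))
band = 0.05 * np.ones_like(x)

plt.rcParams['font.size'] = 30
plt.plot(x, fit)
plt.fill_between(x, fit - band, fit + band, alpha=0.3)   # shaded parameter-error region
plt.savefig('profile.eps')   # saved as eps, then converted to pdf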
Function that I fit my data to (should have mentioned this in my earlier elog entries)
Description of my error analysis -
1. For the micrometer error, I have assumed a 20% deviation from the markings.
2. Using the error in the micrometer, I have calculated the propagated error in the beam power:
I added this error to the statistical error due to the fluctuation of the oscilloscope reading to obtain the total error in the power.
3. I found the Fisher matrix by numerically differentiating the function at the different data points with respect to the fit parameters.
I then found the covariance matrix by inverting the Fisher Matrix and found the error in spot size estimation.
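A short numerical sketch of steps 2-3 above (the model function here is a generic knife-edge erf profile used purely as a placeholder; the actual fit function is the one referenced above):

import numpy as np
from scipy.special import erf

# Placeholder knife-edge style model: total power P0, edge position x0, spot size w
def model(x, P0, x0, w):
    return 0.5 * P0 * (1.0 + erf(np.sqrt(2.0) * (x - x0) / w))

def fisher_matrix(x, params, sigma, h=1e-6):
    # numerically differentiate the model w.r.t. each parameter at every data point
    grads = []
    for i in range(len(params)):
        up = np.array(params, dtype=float); up[i] += h
        dn = np.array(params, dtype=float); dn[i] -= h
        grads.append((model(x, *up) - model(x, *dn)) / (2.0 * h))
    grads = np.array(grads) / sigma      # weight by the total per-point error in power
    return np.dot(grads, grads.T)

# covariance = inverse of the Fisher matrix; the sqrt of its diagonal gives the
# parameter errors, e.g. the error on the spot size w:
# cov = np.linalg.inv(fisher_matrix(x_data, best_fit, sigma_total))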
EDIT : Residuals added to plots and all axes made equal
We should move on with getting this lens from Edmonds, #67-717, with R<3% at 1064 nm.
The Computar M5018-SWIR is another choice.
AR coatings for 500-1100 nm with R<1% are expensive.
*Another issue with the IMC autolocker I've noticed in the recent past: sometimes, the mcup script doesn't get run even though the MC catches a TEM00 mode. So the IMC servo remains in acquisition state (e.g. boosts and WFS servos don't get turned on). Looking at the autolocker log doesn't shed much light - the "saw a flash" log message gets printed, but while normally the mcup script gets run at this point, in these cases, the MC just remains in this weird state.
OpenCV 3.1.0 has been installed locally on Donatella by following these commands:
git clone https://github.com/Itseez/opencv.git
git checkout 3.1.0
git clone https://github.com/Itseez/opencv_contrib.git
git checkout 3.0.0
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules/ ~/opencv/
In ~/opencv/release, make and sudo make install were executed.
This completed the installation. The version was verified with pkg-config --modversion opencv, which showed 3.1.0. I also verified the import of the cv2 module in Python, and it seems to work fine.
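The Python-side check was simply along these lines:

import cv2
print cv2.__version__   # expect '3.1.0'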
In order to switch on the angular alignment for the IMC mirrors, we needed to center the laser onto the quad photodiodes at the IMC and on the AS table (WFS1 and WFS2).
Gautam and I went to the IMC table and did the DC centering for the quad photodiode by varying the beamsplitter angles. After this, we turned the WFS loops off and performed beam centering for the quad PDs on the AS table, WFS1 and WFS2.
Once we had the beam approximately centered on all 3 of the above PDs, we turned on the locking for the IMC, and it seems to work just fine. We are waiting another hour before switching on the angular alignment for the mirrors, to make sure the alignment holds with the WFS turned off.
It happened again. MC2 UL seems to have gotten the biggest glitch. It's a rather small jump in the signal level compared to what I have seen in the recent past in connection with suspect Satellite boxes, and LL and UR sensors barely see it.
I will squish Sat box cables and check the cabling at the coil driver board end as well, given that these are two areas where there has been some work recently. WFS loops will remain off till I figure this out. At least the (newly centered) DC spot positions on the WFS and MC2 TRANS QPD should serve as some kind of reference for good MC alignment.
GV edit 9pm: I tightened up all the cables, but it doesn't seem to have helped. There was another, larger glitch just now. UR and LL basically don't see it at all (see Attachment #2). It also seems to be a much slower process than the glitches seen on MC1, with the misalignment happening over a few seconds. I have to see if this is consistent with a glitch in the bias voltage to one of the coils, which gets low-passed by a 4xpole@1Hz filter.
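A quick check of the time scale involved (a sketch, treating the bias path as four real poles at 1 Hz):

import numpy as np
import scipy.signal as sig

w0 = 2 * np.pi * 1.0                        # 1 Hz pole frequency
sys = sig.lti([w0**4], np.poly([-w0] * 4))  # 1/(s/w0 + 1)^4 with unity DC gain
t, y = sig.step(sys, T=np.linspace(0, 5, 1000))
# y reaches ~90% of its final value in roughly a second; compare this with the
# few-second misalignment seen on the shadow sensors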
Reboots for c1susaux, c1iscaux, c1auxex today. I took this opportunity to squish the Sat. Box cabling for MC2 (both on the Sat. Box end and at the vacuum feedthrough), as some work has recently been going on there - maybe something got accidentally jiggled in the process and was causing the MC2 alignment to jump around.
Relocked PMC to offload some of the DC offset, and re-aligned IMC after c1susaux reboot. PMC and IMC transmission back to nominal levels now. Let's see if MC2 is better behaved after this sat. box. voodoo.
Interestingly, since Feb 6, there were no slow machine reboots for almost 3 months, while there have been three reboots in the last three weeks. Not sure what (if anything) to make of that.
I wonder if it's possible that the slow glitches in the MC are just glitches in the MC2 trans QPD? Steve sometimes dances on top of the MC2 chamber when he adjusts the MC2 camera.
I've re-enabled the WFS at 22:25 (I think Gautam had them off as part of the MC2 glitch investigation). WFS1 spot position seems way off in pitch & yaw.
From the turn on transient, it seems that the cross-coupled loops have a time constant of ~3 minutes for the MC2 spot, so maybe that's not consistent with the ~30 second long steps seen earlier.
Happy MC after the last glitch at 10:28, so the credit goes to Rana.
GV edit 11:30am: I think the stuff at 10:28 is not a glitch but just the WFS servos coming on - the IMC was only hand aligned before this.
I tried playing around with the Oplev loop shape on ITMY, in order to see if I could successfully engage the Coil Driver whitening. Unfortunately, I had no success tonight.
I was trying to guess a loop shape that would work - I guess this will need some more careful thought about loop shape optimization. I was basically trying to keep all the existing filters and modify the low-passing that minimizes control noise injection. Adding a 4th order elliptic low pass with a corner at 50Hz and stopband attenuation of 60dB yielded a stable loop with an upper UGF of ~6Hz and ~25deg of phase margin (which is on the low side). But I was able to successfully engage this loop, and as seen in Attachment #1, the noise performance above 50Hz is vastly improved. It also seems, though, that there is some injection of noise around 6Hz. In any case, as soon as I tried to engage the dewhitening, the DAC output quickly saturated. The whitening filter for the ITMs already has ~40dB of gain at ~40Hz, so it looks like the high frequency roll-off has to be more severe.
I am not even sure if the elliptic filter is the right choice here - it does have the steepest roll-off for a given filter order, but I need to look up how to achieve good roll-off without compromising the phase margin of the overall loop. I am going to try to do the optimization in a more systematic way, and perhaps play around with some of the other filters' poles and zeros as well, to get a stable controller that minimizes control noise injection everywhere.
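For reference, one quick way to look at this trade-off numerically (a sketch using scipy; the 1 dB passband ripple is an assumption):

import numpy as np
import scipy.signal as sig

# 4th-order elliptic low pass: 50 Hz corner, 60 dB stopband, 1 dB ripple (assumed)
b, a = sig.ellip(4, 1, 60, 2 * np.pi * 50, btype='low', analog=True)
f = np.logspace(-1, 3, 1000)
w, h = sig.freqs(b, a, worN=2 * np.pi * f)
mag_db = 20 * np.log10(np.abs(h))
phase_deg = np.unwrap(np.angle(h)) * 180 / np.pi
# compare the extra phase lag near the few-Hz UGF against the roll-off gained above 50 Hz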
Today, Jigyasa and I connected Ottavia to one of the unused monitor screens from Donatella. The Ottavia CPU had a label saying 'SMOKED'. One of the past elogs, 11091, from March 2015 by Jenne, had an update about Ottavia smelling 'burny'. It has been working fine for about 2 hours now. Once it is connected to the Martian network we can test it further. The Donatella screen we used seems to have a graphics problem - some damage to the display. It's a minor issue and does not affect the display much, but perhaps it'll be better to use another screen if we plan to use Ottavia in the future. We will power it down if there is an issue with it.
A Python script to randomly vary the MC2 pitch and yaw offsets and record the corresponding value of the MC transmission has been started on Donatella in the control room, and should run for a couple of hours overnight.
The script is named MC_TRANS_1.py and is located in my user directory at /users/jigyasa
Apologies for any inconvenience.
Data analysis will follow.
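For reference, the logic of the scan is roughly the following (a sketch only - the channel names, scan range, and wait times are assumptions, not necessarily what MC_TRANS_1.py actually uses):

import time
import random
import epics   # pyepics

# Assumed channel names - check these against the MEDM screens
PIT = 'C1:SUS-MC2_PIT_OFFSET'
YAW = 'C1:SUS-MC2_YAW_OFFSET'
TRANS = 'C1:IOO-MC_TRANS_SUM'

pit0 = epics.caget(PIT)
yaw0 = epics.caget(YAW)
data = []
try:
    for i in range(200):
        p = pit0 + random.uniform(-0.05, 0.05)   # arbitrary scan range
        y = yaw0 + random.uniform(-0.05, 0.05)
        epics.caput(PIT, p)
        epics.caput(YAW, y)
        time.sleep(10)                           # let the cavity settle
        t = sum(epics.caget(TRANS) for _ in range(10)) / 10.0
        data.append((p, y, t))
finally:
    epics.caput(PIT, pit0)                       # restore the nominal alignment
    epics.caput(YAW, yaw0)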
It has been working fine the whole day (we didn't do much testing on it though). We are leaving it on for the night.
Ottavia had been left running overnight and seems to work fine. There has been no smell or any other noticeable problem in its operation. This morning Gautam, Kaustubh and I connected Ottavia to the Martian network through the Netgear switch in the 40m lab area. We were able to SSH into Ottavia from Pianosa and access directories. On Ottavia itself we were able to run ipython and access the internet. Since it seems to work fine, Kaustubh and I are going to enable the ethernet connection to Ottavia and secure the wiring now.
Reboots for c1psl, c1iool0, c1iscaux today. MC autolocker log was complaining that the C1:IOO-MC_AUTOLOCK_BEAT EPICS channel did not exist, and running the usual slow machine check script revealed that these three machines required reboots. PMC was relocked, IMC Autolocker was restarted on Megatron and everything seems fine now.
I just connected Ottavia to the Netgear box and it's working just fine. It'll remain switched on over the weekend.
The IRAF software from the National Optical Astronomy Observatory has been installed locally on Donatella (for testing), following the instructions listed at http://www.astronomy.ohio-state.edu/~khan/iraf/iraf_step_by_step_installation_64bit
This is a step towards "aperture photometry" and would help identify point scatterers in the images of the test masses.
I will be testing this software, in particular the use of DAOPHOT, and if it seems to work out, we may install it in the shared directory.
Hope this isn't an inconvenience.
The previous run of the script had produced some dubious results!
The script has been modified and now scans the transmission sum for a longer duration, to provide a better estimate of the average transmission. The pitch and yaw offsets have been set to the values that were randomly generated in the previous run, as this enables comparison with the current data.
I am starting it on Donatella and it should run for a couple of hours.
Apologies for the inconvenience.
The GigE can be connected to ethernet. The AR-coated 1064 nm f=50 mm lens could arrive any day now.
One of the additional GigE cameras has been IP configured for use and installation.
Static IP assigned to the camera- 192.168.113.152
Subnet mask- 255.255.255.0
The script didn't run properly last night, due to an oversight with the variable names! It's been started again and has been running for half an hour now.
I am leaving a script running on Pianosa for the night. For this purpose, the AG4395A is also being kept on. I'll see the result of the script in the morning (it should be complete by then). Please check before fiddling with the analyzer.
I tried all versions of power cycling and debugging this problem known to me, including those suggested in this thread and in more recent ones. I am leaving things as they are for the night and will look into this more tomorrow. I've also shut down the ETMX watchdog for the time being. Looks like this has been down since 24 Jun, 8am UTC.
I tried a couple of things, but there was no fundamental improvement regarding the missing LED light on the timing board.
- The power supply cable to the timing board at c1iscex indicated +12.3V
- I swapped the timing fiber to the new one (orange) in the digital cabinet. It didn't help.
- I swapped the opto-electronic I/F for the timing fiber with the Y-end one. The X-end one worked at Y-end, and Y-end one didn't work at X-end.
- I suspected the timing board itself -> I brought a "spare" timing board from the digital cabinet and tried to swap the board. This didn't help.
- Bring the X-end fiber to C1SUS or C1IOO to see if the fiber is OK or not.
- We checked that the opto-electronic I/F is OK
- Try to swap the IO chassis with the Y-end one.
- If this helps, swap the timing board only, to see whether the timing board is the problem or not.
To re-cap, every time I tried to do this in the last month or so, the optic would get kicked around. I suspected that the main cause was the insufficient low-pass filtering on the Oplev loops, which was causing the DAC rms to rail when the whitening was turned on.
I had tried some loop-tweaking by hand on the OL loops without much success last week - today I had a little more success. The existing OL loops consist of the following:
The elliptic low pass was too shallow. For a first pass at loop shaping today, I checked whether the resonant gain filter had any effect on the transmitted power RMS profile - it turns out it had a negligible effect. So I disabled this filter and replaced the elliptic low pass with a 5th order ELP with 2dB passband ripple and 80dB stopband attenuation. I also adjusted the overall loop gain to put the upper UGF of the OL loops around 2Hz. Looking at the spectrum of one coil output in this configuration (ITMY UL), I determined that the DAC rms was no longer in danger of railing.
However, I was still unable to smoothly engage the de-whitening. The optic again kept getting kicked around each time I tried. So I tried engaging the de-whitening on the ITM with just the local damping loop on, but with the arm locked. This transition was successful, but not smooth. Looking at the transmon spot on the camera, every time I engage the whitening, the spot gets a sizeable kick (I will post a video shortly). In my ~10 trials this afternoon, the arm is able to stay locked when turning the whitening on, but always loses lock when turning the whitening off.
The issue here is certainly not the DAC rms railing. I had a brief discussion with Gabriele just now about this, and he suggested checking for some electronic voltage offset between the two paths (de-whitening engaged and bypassed). I also wonder if this has something to do with some latency between the actual analog switching of paths (done by a slow machine) and the fast computation by the real time model? To be investigated.
GV 170628 11pm: I guess this isn't a viable explanation, as the de-whitening switching is handled by one of the BIO cards, which is itself handled by the fast FEs, so there isn't any question of latency.
With the Oplev loops disengaged, the initial kick given to the optic when engaging the whitening settles down in about a second. Once the ITM was stable again, I was able to turn on both Oplev loops without any problems. I did not investigate the new Oplev loop shape in detail, but compared to the original loop shape, there wasn't a significant difference in the TRY spectrum in this configuration (plot to follow). This remains to be done in a systematic manner.
Plots to support all of this to follow later in the evening.
Attachment #1: Video of ETMY transmission CCD while engaging whitening. I confirmed that this "glitch" happens while engaging the whitening on the UL channel. This is reminiscent of the Satellite Box glitches seen recently. In that case, the problem was resolved by replacing the high-current buffer in the offending channel. Perhaps something similar is the problem here?
Attachment #2: Summary of the ITMY UL coil output spectra under various conditions.
The 50mm lens has arrived. (Delivered yesterday).
Also, the GigE has been wired and connected to the Martian network. Image acquisition is possible with Pylon.
The values generated from the script were analyzed, and a 3D scatter plot and a 2D map were plotted.
Yesterday, Rana pointed me to another method of collecting and analyzing the data. So I worked on the code today and have left a script (MC2rerun.py) running on Ottavia which should run overnight.
There were a few more flaky things in the expansion chassis - the IDE connectors don't have "keys" that fix the orientation they should go in, and the whole timing card assembly is kind of difficult and not exactly secure. But for now, things seem to be back to normal.
In continuation of my previous posts, I have been working on evaluating the transfer function data. Recently, I calculated the correlation values between the real and imaginary parts of the transfer function. I have also written code for plotting the transfer function data stream at each frequency in the Argand plane, just for reference. In addition, I have done a few calculations to find the errors in magnitude and phase using the errors in the real and imaginary parts of the transfer function. More details on the process are in this git repository.
The following attachments have been added:
Looking at the correlation values, it seems reasonable that the approximation of Gaussian-distributed real and imaginary parts actually holds, since the correlation values are mostly quite small. This can also be seen by studying the distribution of the transfer function on the Argand plane: the distribution looks somewhat, if not entirely, circular. Even when the ellipticity seems high, the ellipse still appears to be aligned with the real and imaginary axes, i.e., the correlation between them is still low.
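For reference, the basic per-frequency check is simple (a sketch with synthetic data standing in for the repeated sweeps):

import numpy as np

# Synthetic stand-in for repeated measurements of the TF at one frequency
n = 500
tf = (1.0 + 0.01 * np.random.randn(n)) * np.exp(1j * (0.5 + 0.01 * np.random.randn(n)))

corr = np.corrcoef(tf.real, tf.imag)[0, 1]   # small |corr| -> Re/Im roughly uncorrelated
sigma_mag = np.std(np.abs(tf))               # error in magnitude
sigma_phase = np.std(np.angle(tf))           # error in phase (radians)
print corr, sigma_mag, sigma_phase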
In order to test the above again, with an even larger data set, I am leaving a script running on Ottavia. It should take more than just the night (I estimate around 10-11 hours) if there are no problems.
The script is being executed again, now.
I worked on the code today and have left a script (MC2rerun.py) running on Ottavia which should run overnight.