Aim: To find a telescopic lens solution that images the test mass onto the sensor of a GigE camera.
I wrote a Python code to find an appropriate combination of lenses to focus the optic onto the camera, keeping in mind practical constraints: the distance of the GigE camera from the optic is ~1 m, and the distance between the lenses needs to be compatible with the Thorlabs lens tubes available. We have to image both the entire optic (3" diameter) and the beam spot (1") using this combination of lenses. The image size that efficiently utilizes the entire sensor array is 1/4". Therefore the magnification required for imaging the entire optic is 1/12, and that for the beam spot is 1/4.
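The script itself isn't attached here, but its core is just the thin-lens ray-matrix calculation sketched below (a stripped-down sketch, not the actual code; the 2 cm lens separation is an example value compatible with a lens tube, not the final solution):

import numpy as np

def two_lens_system(f1, f2, d, s_obj):
    """Image distance after lens 2 and transverse magnification for two
    thin lenses separated by d, with the object s_obj before lens 1."""
    lens = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
    prop = lambda L: np.array([[1.0, L], [0.0, 1.0]])
    M = lens(f2) @ prop(d) @ lens(f1) @ prop(s_obj)
    s_img = -M[0, 1] / M[1, 1]      # imaging condition: B element -> 0
    mag = (prop(s_img) @ M)[0, 0]   # A element at the image plane
    return s_img, mag

# e.g. the 150mm-150mm pair with the optic 1.05 m away:
s_img, mag = two_lens_system(0.150, 0.150, 0.02, 1.05)
print(s_img, mag)   # image ~7.6 cm after lens 2, magnification ~ -1/12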
I checked the Thorlabs website for the available focal lengths of 2" lenses (instead of 1" lenses, to collect sufficient power). I have tried several combinations of lenses, and the ones I found close enough to what is required are listed below along with their colorbar plots.
a) 150mm-150mm (Attachment 2 & 3)
With this combination, the object distance varies from 50 cm for the 1" beam spot to 105 cm for the full 3" optic. This poses a difficulty: the optic-to-camera distance differs by ~48 cm between the two cases, imaging the entire optic and imaging the beam spot.
b) 125mm-150mm (Attachment 4 & 5)
With this combination, the object distance varies from 45 cm for the 1" beam spot to 95 cm for the full 3" optic, giving a difference of ~43 cm in the optic-to-camera distance between the two cases.
c) 125mm-125mm (Attachment 6 & 7)
The object distance varies from 45 cm for the 1" beam spot to 90 cm for the full 3" optic, giving a difference of ~39 cm in the optic-to-camera distance between the two cases.
A sensitivity check was also done for these combinations of lenses. An error of 1 cm in the object distance and 5 mm in the distance between the lenses gives an error in magnification of <2%.
The schematic of the telescopic lens system has been given in Attachment 8.
Article from EE Times, describing why metal foil (NOT metal film) resistors are really better than wirewound when it comes to everything except high power dissipation.
Need to do some digging to see if we can find ~1k metal foil resistors which can handle ~1W of heat.
Steve: here it is
In the IMC actuation chain, it looks like the MC1/MC3 de-whitening boards, and also all three MC optics' coil driver boards, are showing higher noise than expected from LISO modeling. One possible candidate is thick film resistors on the coil driver boards. The plan is to debug these further by pulling the board out of the Eurocrate and investigating on the electronics bench.
Why bother? Mainly because I want to see how good the IR ALS noise is, and currently, the PSL frequency noise is causing the measurement to be worse than references taken from previous known good times.
Some time ago, Rana suggested to me that I should do this measurement more systematically.
I've now restored all the wiring at 1X6 to their state before this work.
I guess it's fine for now while we are still finalizing the setup at EX, but we should eventually line up the seismometer axes with the IFO axes. Is there a photo of the orientation of the seismometer prior to the heater can tests? If not, it's probably good to make some sort of markings on the granite slab / seismometer to allow easy lining up of these axes...
I have attached the graph of the seismometer temperature fluctuations over 3 days. As we can see, there is a noticeable daily temperature fluctuation, as well as day-to-day differences in the maximum and minimum temperatures. I will repeat this test with the can off to see if there's any difference between having the can on or off.
Today Steve and I tried to capture an image of light scattered by dust particles on the surface of ETMX using the GigE camera. The image (at gain = 100, exposure time = 125000) is attached. Unlike the previous images, a creepy shape of bright spots was seen. Gautam helped us lock the infrared light and see the image. A similar, less intense shape was seen. This may be because of dust on the lens.
Today, I tested the new Mini-Circuits frequency counter by connecting it to the beat signal output. The frequency counter works fine. Now I am trying to get a display of the frequency on the computer screen using Python. I have written the code for remotely changing the oscillator frequency, and it is saved in the folder 'ksnair'. A picture of the new Mini-Circuits frequency counter is attached below. Part no: UFC-6000, S/N: 11501040012, Run: M075270.
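The actual script is in 'ksnair'; purely for illustration, remote control of the oscillator over GPIB could look like the snippet below (the GPIB address is a placeholder, and the CFRQ mnemonics are from my memory of the IFR/Marconi 2023-series manual - check before use):

import pyvisa

rm = pyvisa.ResourceManager()
marconi = rm.open_resource('GPIB0::17::INSTR')  # placeholder address
marconi.write('CFRQ:VALUE 52.362MHZ')           # set carrier frequency
print(marconi.query('CFRQ?'))                   # read back the setting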
It appears that one of the wires was disconnected overnight or this morning, so I wasn't able to gather data over a full 24-hour period. Perhaps someone accidentally kicked it. I placed some cones in that area, so hopefully the wires won't be in the way as much and I can get the data tomorrow. From the data I do have, it seems that the seismometer is at a colder temperature when the can is not on, though it is difficult to see by how many degrees the temperature fluctuates. I've included the data from 5 days back for comparison.
I have pulled out MC1 coil driver board from its Eurocrate, so IMC is unavailable until further notice. Plans:
If there are no objections, I will execute Step #5 in the next couple of hours. I'm going to start with Steps 1-4.
(keerthana, gautam, jon)
In the morning, Jon gave me an overview of the auxiliary laser system which we are planning to set up. Based on the diagram he uploaded in the elog, I have made the MEDM screen for controlling and displaying the parameters. The parameters we will be controlling are the temperature (in terms of voltage), the oscillator frequency (with the help of the IFR 2023B), the frequency offset, and the PID controls. The display includes the beat frequency, error signal voltage, control voltage, and a switch to give feedback to the AUX laser. As the frequency counter is not connected at the moment, I haven't included its channel number. A screenshot of the screen is attached. I am also considering giving PID feedback to the slow control from the AUX feedback signal; a sketch of what such a loop might look like is below. The screen can be accessed from the PSL dropdown menu in sitemap.
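The PID part is still only an idea; a bare-bones sketch of such a slow loop follows (the channel names are placeholders I made up, not the real ones):

import time
from epics import caget, caput  # pyepics

ERR_CH  = 'C1:ALS-X_ERR_MON'      # placeholder: PLL control signal readback
CTRL_CH = 'C1:ALS-X_SLOW_OFFSET'  # placeholder: AUX laser slow (temperature) input
Kp, Ki = 0.1, 0.01                # loop gains, to be tuned
dt, integ = 1.0, 0.0

while True:
    err = caget(ERR_CH)
    integ += err * dt
    caput(CTRL_CH, Kp * err + Ki * integ)  # offload the DC to the slow control
    time.sleep(dt)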
This work is now complete. The MC1 coil driver board has been reinstalled, local damping of MC1 restored, and the IMC has been locked. Detailed report + photos to follow, but measurement of the noise (for one channel) on the electronics workbench shows a broadband noise level of 5nV/rtHz around 100Hz, which is lower than what was measured here and consistent with what we expect from LISO modeling (with the fast input terminated with 50ohm, slow input grounded).
> I have pulled out MC1 coil driver board from its Eurocrate, so IMC is unavailable until further notice.
This time the test went without issue. The first attachment is the data for the past 24 hours, and the second attachment is the full data over 6 days. The average temperature fluctuation (from highest point to lowest point) was 0.43 C with the can on and 0.55 C with the can off. In addition, the seismometer with the can off is about 1 C cooler than with the can on. I'd like to leave the can off until the end of the week so we can get comparable data sets for both the can on and off. Eventually I'll need to figure out a way to clamp the can down to the block in order to get better insulation and hopefully even smaller temperature fluctuations.
In any case, if it is indeed true that the optic sees this current noise, the place to make the measurement is probably the Sat. Box. Who knows what the pickup is over the ~15m of cable from 1X6 to the optic.
> Detailed report + photos to follow
All models on the c1lsc front end were dead. Looking at slow trend data, it looks like this happened ~6 hours ago. I rebooted c1lsc and now all models are back up and running in their "nominal state".
Rana said that it wasn't necessary to gather more data on the temperature fluctuations so I have reconnected the heater circuit and restarted the PID loop with the can on the seismometer.
We will need to order a few things for our final setup.
There is an effort to switch to an all-digital system for the GigE camera feeds similar to the one running at LLO, which uses Joe Betzwieser's custom SnapPy package to interface with the cameras in Python and aggregate their feeds into a fancy GUI. Joe's code is a SWIG-wrapping of the commercial camera-driver API, Pylon, from Basler. The wrapping allows the low-level camera driver methods to be called from within Python, and their feeds are forwarded to a gstreamer stream also initiated from within Python. The problem is that his wrapping (and the underlying Pylon software itself) is only runnable on an older version of Ubuntu. Efforts to run his software on newer distributions at the 40m have failed.
I'm working on a fix to essentially rewrite his high-level SnapPy code (generators of GUIs, etc.) to use the newest version of Pylon (pylon5) to interface at a low level with the cameras. I discovered that since the last attempt to digitize the camera system, Basler has released their own official version of a Python wrapping for Pylon on github (PyPylon).
Progress so far:
The next and final step is to modify Joe's SnapPy package to import pypylon instead of his custom wrapping of an older version of the camera software, and update all of the Pylon calls to use the new methods. I'll hopefully get back to this early next week.
I've updated the parts list to be an Excel document and included every single part we will need. This is only a first draft, so it will probably be updated in the future. I also made a mistake in the hole sizing for the front panel, so I've updated it and attached it as well (second attachment).
Edit: re-attached the EX can panel fpd file so that everything is in one place
Chris replaced some air conditioning filters and ordered some replacement filters today.
Yesterday morning was dusty. I wonder why?
The PRM sus damping was restored this morning.
Yesterday afternoon at 4 the dust count peaked at 70,000 counts.
Manasa's allergy was bad at the X-end yesterday. What is going on?
There was no wind and CES neighbors did not do anything.
Air cond filters checked by Chris. The 400-day plot shows 3 bad peaks at 1-20, 2-5 & 2-19.
Last night, Rana fact-checked my story about the coil driver noise measurement. Conclusions:
Note: All measurements were made with the fast input of the coil driver board terminated with 50ohms and bias input shorted to ground with a crocodile clip cable.
The first goal is to figure out where this pickup is happening, and if it is actually going to the optic. To this end, I will put a passive 100 kHz filter between the coil driver output and the preamp (Busby Box instead of SR560). By getting a clean measurement of the noise floor with the coil driver board in the Eurocrate (with the bias input driven), we can confirm that the optic isn't being buffeted by the excess coil driver noise. If we confirm that the excess noise is not a measurement artefact, we need to think about where the pickup is actually happening and come up with mitigation strategies.
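(For scale, a single-pole passive RC low-pass has its corner at f_c = 1/(2*pi*R*C); e.g. R = 1.6k and C = 1nF put the pole at ~100 kHz. These are illustrative values only, not necessarily what the actual filter uses.)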
RXA: good section EMI/RFI in Op Amp Applications handbook (2006) by Walt Jung. Also this page: http://www.electronicdesign.com/analog/what-was-noise
To get our rsync back to LDAS back up, I followed instructions from Dan Kozak:
Next need to figure out what the SL7 protocol is for running this as a daemon after boot - some kind of init.d thing probably
We noticed quite a strong burning smell in the office area and control room ~20mins ago. We did a round of the bake lab, 40m VEA and the perimeter of the CES building, and saw nothing burning. But the smell persists inside the office area/control room (although it may be getting less noticeable). There is a whining noise coming from the fan belt on top of the office area. Anyways, since nothing seems to be burning down, we are not investigating further.
Steve [ 10am 5-31 ]: we should always check the particle count in the IFO room
The AUX laser is down to 5.4 mW output power
What's worse, because we wanted those fast switching times by the AOM for ringdowns, I made the beam really small, which
When going through the labs with Koji last week I discovered a stash of modulators in the Crackle lab. Among them there's an 80 MHz AOM with a compact driver that has a modulation bandwidth of 30 MHz. The fall time with this one should be around 100 ns, and since the arm cavities have linewidths of ~10 kHz, their ringdown times are a few microseconds, so that would be sufficient. I suggest we swap this or a similar one in for the current one, make the beam larger, and redo the fiber mode-matching. That way we may get ~3 mW onto the AS table.
I think I want to use AS110 for the ringdowns, so in the next couple days I'll look into its noise to get a better idea about what power we need for the arm ringdowns.
It seems like, as a result of my recent poking around at 1X6, MC3 is more glitchy than usual (I've noticed that the IMC lock duty cycle seems degraded since Tuesday). I'll try the usual cable squishing voodoo.
gautam 8.15pm: Glitches persisted despite my usual cable squishing. I've left PSL shutter closed and MC watchdog shutdown to see if the glitches persist. I'll restore the MC a little later in the eve.
Jon informed me that there are some EPICS channels that JoeB's camera server code looks for that don't exist. I thought Jigyasa and I had added everything last year, but that turned out not to be the case. I followed my instructions from here, which did the trick. While cleaning up, I also renamed the "*MC1" channels to "*ETMX", since that's where the camera now resides. New channels are:
C1:CAM-ETMX_ARCHIVE_INTERVAL (Archival interval in minutes)
C1:CAM-ETMX_ARCHIVE_RESET (Reset archival interval in minutes)
C1:CAM-ETMX_CONFIG_FILE (Config file)
I have attached the result of running the PID script on the seismometer with the can on. The daily fluctuations are no more than 0.07 degrees off from the setpoint of 39 degrees. Not really sure what happened in the past day to cause the strange behavior. It seems to have returned back to normal today.
megatron was full of zombie MEDM processes due to some of the screenshot scripts.
I also found that apache2 was running on megatron without any configuration. I disabled it with:
sudo update-rc.d apache2 disable
The model of our Martian WiFi router (NETGEAR R6400) was found on the FBI's list of routers to be rebooted, in connection with the "VPNFilter" malware.
I checked the attached devices and found a bunch of (legit) devices blocked from accessing the WiFi router. This is not an immediate problem, as most of the packets do not go through the WiFi router, but it is potentially a problem in some cases, like the WiFi-enabled GPIB adapters. So I marked them as "allowed".
Taking this opportunity, I also updated the firmware of the WiFi router, which naturally involved rebooting the device.
I wanted to recover the DRMI locking. Among other things, Jon mentioned that his mode spectroscopy can be done in the DRMI config. But I was foiled last night by a rogue waveplate in the AS beampath, and this evening, I noticed the resurfacing of this problem. Clearly, this is indicative of some issue in the analog whitening electronics, as the DC light level on the AS55 PD is consistent with previous measurements. Moreover, last time the problem "fixed itself", so I don't know what exactly the problem was in the first place. I'll try the same test as in the linked elog tomorrow. As a quick test, I cycled through the whitening gains (0-45 dB) to see if it was some stuck ADC register, but that didn't fix the problem.
The problem seems to be with REFL55 only - I am able to lock the PRMI with the carrier resonant without any issues, and the error signal levels are consistent with what I remember them being while the PRMI is swinging around. AS55 lives on the same whitening board and doesn't seem to suffer from the same problems.
Decided to do the check tonight, but as Attachment #1 shows, no real red flags from the whitening gain side.
As happened last time, the problem apparently fixed itself - somehow the act of disconnecting and reconnecting the cables seems to solve the problem. Need to think about this.
Anyway, DRMI was locked a few times tonight. I got in a good long stretch where I ran some sensing lines and collected some data; analysis tomorrow. I am going to center the vertex oplevs as an alignment reference for now. A major source of lockloss seems to be angular instability - see for example this video grab of POP:
Could be due to noise injection from the noisy PRM Oplev HeNe, or just TT mirror angular motion (I couldn't get the PRC angular FF going tonight).
Aim: To synchronize data from the captured video and the signal applied to ETMX
In order to correlate the intensity fluctuations of the scattered light with the motion of the test mass, we are planning to use a neural network. For this, we need a video of the scattered light synchronized with the signal applied to the test mass. Gautam helped me capture a 60 sec video of infrared laser light scattering after ETMX was dithered in PITCH at ~0.2 Hz.
I developed a Python program to capture the video and convert it into a time series of the sum of pixel values in each frame using OpenCV, to see the variation. Initially we had tried the same with green laser light and a signal of approximately 11.12 Hz, but in order to see the variation clearly, we repeated with a lower frequency signal after locking the IR laser today. I have attached the plots below. The first graph gives the intensity fluctuations from the video. The third and fourth graphs are those of the transmitted light and of the signal applied to ETMX to shake it. Since the video captured using the camera was very noisy, and the intensity fluctuations in the scattered light had twice the frequency of the applied signal, we also captured a video after turning off the laser. The second plot gives the background noise, probably from the camera. Since the camera noise is very high, it may not be possible to train a neural network on this data set.
Since the captured videos consume a lot of memory, I haven't uploaded them here. I have uploaded the Python code 'sync_plots.py' to github (https://github.com/CaltechExperimentalGravity/GigEcamera/tree/master/Pooja%20Sekhar/PythonCode).
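For reference, the pixel-sum extraction in sync_plots.py boils down to something like this (a stripped-down sketch; the filename is made up):

import cv2
import numpy as np

cap = cv2.VideoCapture('etmx_scatter.avi')  # made-up filename
fps = cap.get(cv2.CAP_PROP_FPS)
sums = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sums.append(int(gray.sum()))            # total intensity of this frame
t = np.arange(len(sums)) / fps              # time axis for the series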
I brought the NPRO from the Crackle experiment over to the 40m Lab and set it up on the PSL table to replace the slowly dying AUX laser. I also brought along a Faraday isolator, broadband EOM, and an ISOMET AOM with driver electronics from the optics storage in the Crackle Lab.
This laser is a much newer model, made in 2008, and still has all its mojo, but we should probably keep up the practice of turning it off when it's not going to be used for a while. I measured 320 mW leaving the laser, and 299 mW of that going through the Faraday isolator, whose Brewster-angle polarizers I had to clean because they were a little dusty. While the laser output is going strong, the controller displays a power output of only 10 mW, which makes me think that the power monitoring PD is busted. This is a completely different failure mode from what we've seen with the other NPROs; we can hopefully get it repaired at some point, particularly because the laser is newer, but for now it's installed on the PSL table. This likely means that the noise eater isn't working on this unit either, for different reasons, but at least we have plenty of optical power.
The setup is very similar to before, with the addition of a Faraday isolator and a broadband EOM, in case we decide to get more bandwidth in the PLL. I swapped the Crystal Technologies 3200-113 200 MHz AOM for an ISOMET 80 MHz AOM with RF driver from the Crackle lab's optics storage, and sized the AUX beam to a diameter of 200 microns. I couldn't locate an appropriate heat sink for the driver, which is still in factory condition, but since the PSL AOM also runs at 80 MHz I used that one instead. The two AOMs saturate at different RF powers, so care must be taken not to drive the AUX AOM too high. At 600 mV input to the driver, the deflection into the first order was maximal at 73% of the input power, with the second-order beam and the first order on the other side clearly visible.
In order to speed things up I didn't spend too much time on mode-matching, but the advantage of the fiber setup is that we can always improve later if need be without affecting things downstream. I coupled the first order beam into the fiber to the AS table with 58% efficiency, and restored the beat with the PSL laser on the NewFocus 1611. The contrast there is only about 20%, netting a -20 dBm beat note. This is only a marginal improvement from before, so the PLL will work as usual, but if we get the visibility up a little in the future we won't need to amplify the PD signal for the PLL anymore.
Some more things I wanted to do but didn't get to today are
I'll resume this work tomorrow. I turned the aux laser and the AOM driver input off. For the PSL beat the AOM drive is not needed, and the power in the optical fiber should not exceed 100 mW, so the offset voltage to the AOM RF driver has to remain below 300 mV.
> While the laser output is going strong, the controller displays a power output of only 10 mW, which makes me think that the power monitoring PD is busted.
The NPRO internal power monitor often shows a smaller value than the actual one, due to a broken PD or misalignment. I don't think we need to fix it.
STEVE: The Aux Lightwave M126-1064-200, sn 259 [July 2009] at 1.76 A, ADJ 9, showing 9 mW on its display should not mislead you. Its output is 320 mW.
> I couldn't locate an appropriate heat sink for the driver, which is still in factory condition, but since the PSL AOM also runs at 80 MHz I used that one instead.
We have the appropriate heatsink - I'd like to minimize interference with the main beam wherever possible.
> For the PSL beat the AOM drive is not needed, and the power in the optical fiber should not exceed 100 mW, so the offset voltage to the AOM RF driver has to remain below 300 mV.
If damage to the fiber is a concern, I think it's better to use a PBS + waveplate to attenuate the power going into the fiber. When the AOM switching is hooked up to CDS, it's easy to imagine a wrong button being pressed or a wrong value being typed in.
It would probably also be good to have a pickoff monitor for the NPRO DC power so that we can confirm its health (in the short run, we can hijack a PSL Acromag channel for this purpose, as we now do for FSS_RMTEMP). I don't know that we need an EOM for the PLL, as in order to get that going, we probably need some fast electronics for the EOM path, like an FSS box.
STEVE: I ordered the right heatsink for the acousto-optic driver after Koji pointed out that the vertical fins are 20% more efficient. Why? Because hot air rises. It will be here in 3-4 days.
I spent a day trying to modify Joe B.'s LLO camera client-server code, without ultimate success. His code now runs without throwing any errors, but something inside the black-box handoff of his camera source code to gstreamer appears to be SILENTLY FAILING. Gautam suggested a call with Joe B., which I think is worth a try.
In the meantime, I've implemented a simple Python video feed streamer which does work, and which students can use as a base framework to implement more complicated things (e.g., stream multiple feeds in one window, save a video stream movie or animation).
It uses the same PyPylon API to interface with the GigE cameras as Joe's code does. However, it uses matplotlib instead of gstreamer to render the images. The matplotlib code is optimized for maximum refresh rate, and I observed it to achieve ~5 Hz for a single video feed. However, this demo does not set any custom camera settings (it just initializes a camera with its defaults), so it's quite possible that the refresh rate is actually limited by, e.g., the camera exposure time.
Location of the code (on the shared network drive):
This demo initializes a single GigE camera with its default settings and continuously streams its video feed in a pop-up window. It runs continuously until the window is closed. I installed PyPylon from source on the SL7 machine (rossa) and have only tested it on that machine. I believe it should work on all our versions of Linux, but if not, run the camera software on rossa for now.
From within the above directory, the code is executed as
$python stream_camera_to_mpl.py [Camera IP address]
with a single argument specifying the IP address of the desired camera. At the time I tested, there was only one GigE camera on our network, at 192.168.113.152.
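For anyone who wants to adapt it, the skeleton of such a streamer is roughly the following (a sketch in the same spirit as, but not a copy of, stream_camera_to_mpl.py):

import sys
import matplotlib.pyplot as plt
from pypylon import pylon

# Connect to the GigE camera at the IP address given on the command line
info = pylon.DeviceInfo()
info.SetIpAddress(sys.argv[1])
cam = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateDevice(info))
cam.Open()
cam.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)

fig, ax = plt.subplots()
im = None
while plt.fignum_exists(fig.number):
    res = cam.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if res.GrabSucceeded():
        if im is None:
            im = ax.imshow(res.Array, cmap='gray')   # first frame
        else:
            im.set_data(res.Array)                   # subsequent frames
        plt.pause(0.01)   # redraw; overall rate limited by exposure/readout
    res.Release()

cam.StopGrabbing()
cam.Close()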
The cavity scan data obtained from the Finesse simulation are attached here. Fig 1 shows the cavity scan in the absence of induced misalignment; in that case only the fundamental mode is resonating. But when a misalignment is induced, higher-order modes are also present, as seen in Fig 2. This is in the absence of surface figure error on the mirrors. Now I am trying to apply perturbations to the mirror surface in the form of Zernike polynomials and get the scan data from the simulation. These cavity scan data can be used to develop fitting models. Once we have a model, we can use it to analyse the data from the experimental cavity scan.
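For concreteness, a misaligned-cavity scan of this kind takes only a few lines with pykat (a minimal sketch with made-up cavity parameters, not the actual simulation file):

from pykat import finesse

kat = finesse.kat()
kat.parse("""
l l1 1 0 n0                 % 1 W input laser
s s0 1 n0 n1
m m1 0.99 0.01 0 n1 n2      % input coupler
s scav 1 n2 n3              % 1 m long cavity
m m2 0.99 0.01 0 n3 n4      % end mirror
attr m2 Rc 2.5              % curved end mirror -> stable cavity
cav c1 m1 n2 m2 n3
maxtem 2                    % include HOMs up to order 2
attr m2 xbeta 5u            % small tilt to excite higher-order modes
pd trans n4                 % transmitted power
xaxis m2 phi lin 0 180 400  % tune the end mirror through one FSR
""")
out = kat.run()
# out.x is the tuning [deg], out['trans'] the transmitted power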
I was wondering why the PMC modulation sidebands are showing up on the control room analyzer with ~6dB difference in amplitude. Then I realized that it is reasonable for the cabling to have 6dB higher loss at 80 MHz compared to 20 MHz.
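(For scale: if the cable attenuation is skin-effect dominated, it grows like sqrt(f), so alpha(80 MHz)/alpha(20 MHz) = sqrt(80/20) = 2. A run with ~6 dB of loss at 20 MHz would then show ~12 dB at 80 MHz, i.e. the observed ~6 dB difference.)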
Aha! Video is back!
I think it would be good to add a flag whereby the video can be saved to disk in some uncompressed video format (ogg, avi, ?) instead of being displayed in a matplotlib window. We could then use the default to just display video, but use the save-to-disk flag to grab a few minutes of video for image processing.
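OpenCV's VideoWriter would be one way to implement the save path (a sketch; the codec, frame rate, and frame size below are just example values):

import cv2
import numpy as np

h, w = 480, 640                           # example frame size
fourcc = cv2.VideoWriter_fourcc(*'XVID')  # example codec
writer = cv2.VideoWriter('feed.avi', fourcc, 20.0, (w, h), isColor=False)
for _ in range(200):                      # stand-in for the camera grab loop
    frame = np.random.randint(0, 256, (h, w), dtype=np.uint8)
    writer.write(frame)                   # frames must be (h, w) uint8
writer.release()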
We added the following channels to C0EDCU.ini and restarted the daqd processes. Channels seem to have been added successfully, we will check trend writing later today. Motivation is to have a long term record of annulus pressure (even though we are not currently pumping on the annulus).
plot next day
For some time now, I've been puzzled by the unreliability of the ASS_X dither alignment servo. Leaving the servo on, TRX often begins to decay to a lower value, and even after freezing the dither at the maximum TRX value, I can manually align the mirrors to increase TRX further. We have suspected that some kind of clipping in the TRX path is responsible for this behaviour. Today I decided to investigate a bit further. To keep the arm locked while inspecting the beam, the locking trigger has to be changed - TRX is what is normally used, but I misaligned the Y arm completely and used AS110 as the trigger instead. There is some strangeness in the triggering topology, but that deserves a separate elog.
Once the arm was locked (and relocks using the AS110 trigger in the event of an unlock), I was able to trace the beampath on the EX table with an IR card. The TRX beam is rather large and weak, so it is hard to see, but as best as I can tell, the only real danger of clipping (or perhaps the beam is already clipped) is on the final steering mirror before the beam hits the (Thorlabs) PD. Steve/Pooja are working on getting a photo of this, and will upload it here shortly. Options to mitigate this:
The EX QPD has stopped working since the Acromag install. If it were working, we wouldn't have to rely on the alternate triggering with AS110 and could instead just use the QPD as TRX while we debug the Thorlabs PD path.
I thought that the "C1LSC_TRIG_MTRX" MEDM screen completely controls the triggering of LSC signals. But today, while trying to trigger the X-arm locking servo on AS110 instead of TRX, I found some strange behaviour. Summary of important points:
All very strange; not sure what's going on here. The simulink model diagram also didn't give me any clues. Needs further investigation.
Got this 1U box from the Y arm that we could potentially use (Attachment 1). It doesn't have handles on the front, but I guess we could attach them if necessary. Attachment 2 is a switch that could be used instead of a light-up switch, but then we need to add LEDs on the front panel to indicate that the switch is functional. Attachment 3 is a terminal block that we can use to attach the 16 gauge wire to, since the wire is thick and attaching it directly to the board would be difficult. If these are alright to use, then I'll change my designs for the front panel and PCB to accommodate these parts.
(Johannes, Koji, Keerthana)
The PLL loop ensures that the frequency difference between the PSL laser and the AUX laser is equal to the frequency we provide to the Local Oscillator (LO) from a Marconi. Only a small pick-off portion of each of the AUX and PSL beams goes to the PLL; the rest of both beams goes to the interferometer. Before entering the optical fibre, the AUX beam passes through an AOM, which shifts its frequency by 80 MHz. When the PLL is locked, the AUX-PSL frequency difference equals the frequency set on the Marconi (fm); after the AOM, it becomes fdiff = fm ± 80 MHz. If this beam and the PSL beam are aligned properly, and if fdiff is equal to an integer multiple of the free spectral range of the cavity, it will resonate in the cavity. Then we expect to see a peak in the ETM transmission corresponding to the frequency we injected through the optical fibre.
Throughout the experiment we need to make sure that the PSL is locked; the signal detected by the photodetector when only the PSL is resonating inside the cavity acts as a DC signal. Then we give a narrow scan to the Marconi. When the condition fdiff = N*FSRy is satisfied, we will observe a peak in the output. Here FSRy is the free spectral range of the Y-arm cavity, approximately equal to 3.893 MHz.
Yesterday afternoon, Johannes, Koji and I tried to observe this peak. We aligned the cavity by observing the output signal from the AS110 photodetector, adjusting until its intensity output was maximized, using a spectrum analyser to see the output. After that, we set up a photodetector to collect the Y-end transmission through the ETM, using a lens to focus the beam directly onto it. We then connected this photodetector to the spectrum analyser located near the AS table; the cable we ran was not long enough, so we joined it with a second cable. We scanned the Marconi from 51 MHz to 55 MHz, and repeated with a scan from 55 MHz to 59 MHz. We tried this a few times, but were not able to see the peak.
We suspect this is due either to an alignment issue or to a problem with the photodetector we used. We would like to repeat this experiment and get the signal properly.
I am attaching a flow chart of the setup and also a picture of the mirrors and photodetector we inserted in the Y-end table.
FSS slow wasn't running, so the PSL PZT voltage was swinging around a lot. The reason was that c1psl was unresponsive. I keyed the crate; now it's okay. Now ITMX is stuck - Johannes just told me about an un-elogged c1susaux reboot. It seems that ITMX got stuck at ~4:30 pm yesterday PT. After some shaking, the optic was freed. Please follow the procedure in future, and if you do a reboot, please elog it and verify that the optic didn't get stuck.
I think this table will help us to fix the scanning range of the Marconi frequency. This will also help in predicting the position of the resonance peak corresponding to the injected frequency.
fdiff = fm ±80 MHz ; fdiff = N*FSRy ; FSRy = 3.893 MHz.
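The table is easy to regenerate; for a given Marconi scan window, the candidate frequencies come out of the short sketch below (using the numbers quoted above):

FSR  = 3.893e6            # Y-arm free spectral range [Hz]
FAOM = 80e6               # AOM frequency shift [Hz]
f_lo, f_hi = 51e6, 59e6   # Marconi scan window [Hz]
for N in range(1, 60):
    # resonance when |fm +/- 80 MHz| = N * FSR, i.e. three branches for fm
    for fm in (N * FSR - FAOM, N * FSR + FAOM, FAOM - N * FSR):
        if f_lo <= fm <= f_hi:
            print('N = %2d:  fm = %7.3f MHz' % (N, fm / 1e6))

For the 51-59 MHz window this picks out fm ~ 52.36, 52.75, 56.26 and 56.64 MHz (N = 34, 7, 35 and 6 respectively).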
I opted for the quickest fix - I raised the height of the offending steering mirror using a 0.25" shim. In the long term, we can get a taller post machined. After raising the mirror height, I then checked the DC centering of the spot on the DC PD using a scope.
Looking at the performance of the X arm ASS, I no longer see the strange oscillatory behaviour I described in my previous post. Moreover, the TRX level was ~1 before raising the steering mirror - but it is now ~1.2. So we were certainly losing some power.
Just to inform: I'm working on optimus to develop the Python code to train the neural network, since it requires a lot of memory.
The local backup on chiara seems to have stopped working since Nov 19, 2017:
2017-11-18 07:00:01,504 INFO Updating backup image of /cvs/cds
2017-11-18 07:03:00,113 INFO Backup rsync job ran successfully, transferred 1954 files.
2017-11-19 07:00:02,564 INFO Updating backup image of /cvs/cds
2017-11-19 07:00:02,592 ERROR External drive not mounted!!!
I worked a bit on the PSL table today