ID   Date   Author   Type   Category   Subject
  12963   Wed May 3 16:00:00 2017   gautam   Summary   General   Network Topology Check

[johannes, gautam]

I forgot we had done this last year already, but we updated the control room network switch labels and double checked all the connections. Here is the status of the connections and labels as of today:

There are a few minor changes w.r.t. labeling and port numbers compared to the Dec 2015 entry. But it looks like there was no IP clash between Rossa and anything else (which was one of the motivations behind embarking on this cleanup). We confirmed this by detaching the cable at the PC end of Rossa and noting the break in the ping responses; plugging the cable back in restored the pings. Because Rossa is currently un-bootable, I couldn't check the MAC address.

We also confirmed all of this by using the web browser interface for the switch (IP = 192.168.113.249).

Attachment 1: Network_topology_3May2017.pdf
  12977   Mon May 8 21:53:56 2017   rana   Summary   SEI   attempt to get seismic BLRMS minute trend

I tried to get some minute trend data today, but was unable to get it from inside or outside the control room using our matlab or python tools.

It seems the NDS2 interface will not work anywhere since it needs our minute trends to be written as frames; in the last version that Jamie left us, our minute trend frame files are not being written since they lead to periodic daqd crashes.

From inside the control room, we can get the minute trend (only with DataViewer). I've attached 30 days of BS_X just to show it's real.

We can get the numerical data from the Grace plot window using the menu option Data->Export->ASCII.

You must select all of the 'Write Sets' to get all of the traces in the plot window. The resulting ASCII file is not in a great format, but it's not terrible.

Attachment 1: BLRMS_trend.png
  12998   Thu May 18 15:20:29 2017   jigyasa   Summary   telescope design   Telescope Design for the Gig-E cameras

With the objective of designing a telescope system for the Gig-E, a system of two lenses is implemented. A rough schematic of the telescope system is attached. Variables in the system include the focal lengths of the two spherical lenses (f1, f2), the distance between the lenses (t), the distance between the test mass and the lens combination (u), and the distance between the second lens and the sensor (v). The size of the object to be imaged ranges from 3", which is the size of the test mass, down to 1", which corresponds approximately to focusing on the beam spot. This implies that the required magnification ranges from 0.06089 to 0.1826 (since the sensor image circle size is 1/4").
The lenses are selected to be 2” in diameter so as to ensure sufficient collected power.

Going through the focal lengths available, namely 50, 100, 150, 200, 250 mm, and noting that the object distance would be within the range of 1500 to 2500 mm, plots of the accessible u and v for different values of t were obtained. This optimization was done to ensure the proper selection of the lenses. Additionally, a sensitivity analysis was performed, and plots depicting the dependence of the magnification on the precision-limited measurements of u (1 mm) and t (5 mm) were obtained (these were scatter plots quantifying the deviation from the desired magnification range). The plots show the error induced on the magnification if the distance between the lenses is measured with a 5 mm error, and if the object-to-lens distance is measured with a 1 mm error.
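As a reference for the scans above, here is a minimal thin-lens sketch of how u, t and the focal lengths determine the image distance and magnification; the numbers in the example call are placeholders, not the proposed design values.

def two_lens(u, t, f1, f2):
    # Gaussian thin-lens formula applied twice; all distances in mm
    v1 = 1.0 / (1.0 / f1 - 1.0 / u)     # image formed by the first lens
    u2 = t - v1                         # object distance for the second lens (negative = virtual object)
    v2 = 1.0 / (1.0 / f2 - 1.0 / u2)    # image distance from the second lens to the sensor
    mag = (v1 / u) * (v2 / u2)          # product of the two magnifications
    return v2, mag

v, m = two_lens(u=2000.0, t=100.0, f1=150.0, f2=250.0)
print("image distance = %.1f mm, |magnification| = %.3f" % (v, abs(m)))

In this example geometry the second lens sees a virtual object (u2 < 0), which is one way short image distances of the order quoted below can be reached.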

The telescope design might be limited by spherical aberrations and coma, which might be resolved by either using aspherical lenses or by increasing the f-number (typically with an f number around 5 or 6). The use of aspherical lenses particularly parabolic lenses was considered, however this was found to be quite an expensive route. 

Analyzing the plots and taking into consideration the restrictions of the slotted lens tubes and the precision in measurement of the distances, a 150 mm - 250 mm focal length solution is proposed. With a diameter of 2", the f-numbers are computed to be 2.95 and 4.92. With this combination and object distances between 1500 and 2500 mm, the image distance to the sensor varies between 51 and 100 mm, so a slotted lens tube controlling the distance between the lenses would be required.

I also considered a combination of focal lengths 250 mm and 250 mm, as then both lenses would have an f-number of at least 4.92. The results for this combination are also attached. The image distance from the lens combination is about 100 to 140 mm. However, this would require much longer slotted lens tubes, thereby adding to the cost of the system. The number of accessible u-v points is the same as that for the 150-250 combination.

I am still trying to search for a much more concrete way of quantifying aberrations.

Attachment 1: ray.png
Attachment 2: Schematic.png
Attachment 3: 150-250uv.png
Attachment 4: 150-250error.png
Attachment 5: 250-250.png
Attachment 6: 250-250error.png
  12999   Fri May 19 19:18:53 2017   Kaustubh   Summary   General   Testing of the new Photo Detectors ET-3010 and ET-3040

Motivation:

I got some hands-on experience using RF photodetectors and the Network Analyzer from Koji. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These are InGaAs photodetectors with model nos. 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we bought the ET-3010 model PD for the 40m lab. It has an operational bandwidth >1.5 GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency to get data on the optical cavities and the resonating modes inside the cavity. We just tested the ET-3040 model today, but will test the ET-3010 next week.

Tools and Machines Used:

We worked on the optical bench right in front of the main entrance to the lab. We put the cables, power cords, etc. in their respective places. We used screws, poles, T's, I's, a multimeter, the Network/Spectrum Analyzer (along with its moving table), a lab computer, an oscilloscope, a power supply and the aforementioned PDs for our testing. We took these items from the stack of tools at the Y-arm and the various labelled boxes placed near the X-arm. We moved the Network Analyzer (along with its bench) from near the Y-arm to our workplace.

Procedure:

I will include a rough schematic of the setup later.

We aligned the reference PD (High Speed Photoreceiver model 1611) and the test PD (ET-3040 in this case) to get optimal power output. We had set the pump current for the laser to 19.5 mA, which produced a power of 1.00 mW at the output of the fiber coupler. At the reference detector the measured voltage was about 1.8 V, and at the DUT it was about 15 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity at 1064 nm is around 0.75 A/W. Using this we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance of the DUT is 50 Ohm with a responsivity of about 0.9 A/W, which amounts to a power of about 0.33 mW. After measuring the DC voltages, we connected the laser input to the Network Analyzer and drove it with an RF signal at -10 dBm, sweeping the modulation frequency from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector goes to Channel A (CHA) and the output from the DUT goes to Channel B (CHB). We got plots of the ratios between the reference detector, DUT and the coupled reference for the transfer function and the phase. We found that the cut-off frequency for the ET-3040 model was at around 55 MHz (stated as >50 MHz in the data sheet). We have stored the data using the lab PC in the directory .../scripts/general/netgpibdata/data.
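The DC power numbers above follow from P = V / (Z * R), with Z the DC transimpedance and R the responsivity; a two-line check:

def incident_power(v_dc, transimpedance, responsivity):
    # optical power [W] inferred from the DC output voltage [V]
    return v_dc / (transimpedance * responsivity)

print(incident_power(1.8, 10e3, 0.75))   # reference PD (1611): ~2.4e-4 W = 0.24 mW
print(incident_power(15e-3, 50.0, 0.9))  # DUT (ET-3040): ~3.3e-4 W = 0.33 mW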

Result:

The bandwidth of the ET-3040 PD is as stated in the data sheet, >50 MHz.

Precaution:

These PDs have an internal power supply of 3V for ET-3040 and 6V for ET-3010. Do not leave these connected to any instruments after the experiments have been performed or else the batteries will get drained if there is any photocurrent on the PDs.

To Do:

A similar procedure has to be followed in order to test the ET-3010 PD. I will be doing this tentatively on Monday.

Attachment 1: IMG_20170519_173247922.jpg
Attachment 2: IMG_20170519_173253252.jpg
Attachment 3: IMG_20170519_173300174.jpg
Attachment 4: PD_test_setup.png
  13000   Mon May 22 10:15:14 2017   jigyasa   Summary   telescope design   Lens tubes and object distances

Since the f-numbers of the lenses in the proposed design with biconvex lenses are a little less than 5, and the conjugate ratio (that is, the ratio of object to image distance) is greater than 5, I explored the use of plano-convex lenses; but with the same focal lengths, the accessible u-v range is more restricted with the plano-convex than with the biconvex lenses.
On Friday, I had a discussion with Gautam and Steve about the hardware, that is, the cylindrical enclosures for the camera and the telescope, and we examined two such aluminum cylindrical enclosures. One of them is the one currently employed for the cameras; its length was measured to be 8" and its outer diameter 26 cm, within an error of 0.5 cm.
The other enclosure is longer, with a length of 52 cm (±0.5 cm), an outer diameter of 10" (±0.1") and an inner diameter of 23.7 cm (±0.1 cm). Pictures of these enclosures are attached.
Both of these enclosures have internal optical rail to mount the camera and the telescope system. Depending on the weight of the telescope system(that includes the weight of the slotted lens tubes, the lenses), it might be more efficient to clamp the telescope system itself on the rails with the low weight camera mounted on the lens tube.
I also went around to get an idea of distance of the GigE from the test masses. This was just a step to verify if the object distances were really in the ranges being taken into consideration, that is between 1500 and 2500 mm. I also tried to cross check the measurements with the CAD drawing of the 40m. However, as I have been informed, the distances in the CAD version are not updated.

The distances from the optic to the CCD detector would range from around 75.1 cm for MC2, 94.01 cm for ITMX, 97.21 cm for ETMX, 117.19 cm for ITMY and 88.463 cm for ETMY. The illuminator for the ETMY was disconnected, so Gautam helped me access the manual lamp control to enable me to take measurements.
The values for ETMX, MC2 and ITMY are subject to an error of ±1’’. Due to a lot of obstructions, the values for ETMY and ITMX may be subject to a lot more error. Even so, these distances are clearly less than 2 meters, prompting me to run the simulations again and verify that the chosen combination is still useful.

As for the slotted lens tubes to mount the 2” lenses, the following options are available on the Thorlabs catalog. CVI and Edmunds do not seem to offer much of the stackable lens tubes.

SM2L30C is a lens tube onto which the optic can be mounted without the need for a spanner wrench. It has a length of 3". It also has a rotatable slip shield which can be rotated open as and when access to the optic is required; however, there might be a slight compromise in rigidity here.

SM2L30 is a lens tube with an internal thread depth of 3"; the optic can be mounted using a spanner wrench and a retainer ring. The optic cannot be accessed from both ends of the tube here.
SM2M30 is a lens tube with no external threads, therefore lens tube couplers would be required to stack the tubes. The optic is accessible from both ends here though.

Considering the merits and demerits of all these available options, the use of SM2L30 might be considered as it provides a quick and efficient way of stacking multiple lens tubes. As for accessing the optic from both sides, using multiple tubes helps overcome the problem and still ensures that we are able to access a number of separation distances as per requirement.
Thorlabs also offers an internal C to external SM2 adapter so that the lens tube could be fixed onto the C mount of the camera. 

I would be examining the use of 1" diameter lenses for the eyepiece as suggested by Rana, as that might give us more flexibility. 

Attachment 1: Pictures1.pdf
  13005   Mon May 22 18:20:27 2017   Kaustubh   Summary   General   Testing of the new Photo Detectors ET-3010 and ET-3040

I am adding the text files with the data readings and parameter settings, along with the Bode plot of the data. I plotted these graphs using the matplotlib module with Python 2.7.
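For reference, a minimal sketch of the plotting step (Python 2.7 / matplotlib). The file name and the three-column layout (frequency, magnitude in dB, phase in degrees) are assumptions; adjust them to the actual netgpibdata output.

import numpy as np
import matplotlib.pyplot as plt

freq, mag_db, phase_deg = np.loadtxt('ET-3040_test.txt', unpack=True)

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(freq, mag_db)
ax_mag.set_ylabel('Magnitude [dB]')
ax_ph.semilogx(freq, phase_deg)
ax_ph.set_ylabel('Phase [deg]')
ax_ph.set_xlabel('Frequency [Hz]')
plt.savefig('ET-3040_test.pdf')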

Quote:

Motivation:

I got some hands-on experience using RF photodetectors and the Network Analyzer from Koji. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These are InGaAs photodetectors with model nos. 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we bought the ET-3010 model PD for the 40m lab. It has an operational bandwidth >1.5 GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency to get data on the optical cavities and the resonating modes inside the cavity. We just tested the ET-3040 model today, but will test the ET-3010 next week...

 

Attachment 1: ET-3040_test.zip
Attachment 2: ET-3040_test.pdf
  13020   Tue May 30 17:45:35 2017   jigyasa   Summary   Cameras   GigE configuration

To verify the Pylon Installation on the shared drive, I tried connecting the Basler acA640-100gm to the PoE connector and running it through Allegra.

Each time the camera was opened, I got a message on Terminal saying ‘Failed to get the node ‘AcquisitionFrameRate’ from the device’s nodemap’.

Yet, I was able to capture images in single-shot and continuous-shot mode. I tried to emulate the analog controls (gain at 360, black level 121) as in Johannes' elog 12617 and varied the exposure time from 1 to 5 milliseconds. The camera had the Rainbow 50 mm lens, with which I was able to focus on some markings on the white board; however, the image was extremely magnified, and the lens was so sensitive that the image quickly went out of focus.
I checked the CCD cabinet in the 40m and found 12 mm lenses, which couldn't focus properly, so I couldn't quite get an image like the one Johannes had been able to obtain. I also got an image of a cable in focus, but it is very dark due to the exposure time.
With the components for the telescope design arriving (hopefully) by tomorrow, I should be able to assemble the telescope and capture some more images.

From Joe B's paper and discussions with Gautam and Johannes, I came up with three models for configuring the GigEs, each of which connects the camera to a computer network. The first model just involves connecting the camera directly to a PC with a Pylon installation using a Power-over-Ethernet adapter; it would only be suitable for basic IP configuration of the camera, without involving a complex network. The second model describes the integration of the camera into the Martian network. The third model combines the creation of a separate camera subnetwork with integrating this network into the main lab network through a switch. This model would be more efficient as the number of cameras increases. The same purpose could be achieved by using a PC with two network ports, one of which connects to the camera subnet while the other links it to the Martian, where the computers running the client script could stream the desired frames.

 

Attachment 1: GigEconfiguration.pdf
  13024   Wed May 31 18:17:28 2017   jigyasa   Summary   Cameras   GigE configuration

This evening I was able to obtain some images with the same lens on the GigE. 
The problem earlier, as Johannes pointed out, was that we were using too many adapters on the camera and so it was able to focus at really shallow distances or at really low depths of focus. 
So after removing the adapters we were able to focus on objects at much larger distances.

The mug for example was at a distance greater than 1.5 meters from the camera.

Here are some images that were captured on Allegra by plugging in the GigE to the PoE connector connected to the Martian. 

Quote:

I tried to emulate the analog controls (gain at 360, Black level 121) as in Johannes’ elog  12617 and varied the exposure rate from 1 to 5 milliseconds. The camera had the Rainbow 50mm lens with which I was able to focus on some markings on the white board, however the image was extremely magnified and this lens was extremely sensitive which meant that the image went quickly out of focus.

 

Attachment 1: PictureswithPylon.pdf
  13025   Wed May 31 19:18:53 2017   jigyasa   Summary   Cameras   GigE configuration

And here's another picture of Kaustubh, my fellow SURF, captured in all his glory by Rana! :)

 

Quote:

This evening I was able to obtain some images with the same lens on the GigE. 
The problem earlier, as Johannes pointed out, was that we were using too many adapters on the camera and so it was able to focus at really shallow distances or at really low depths of focus. 
So after removing the adapters we were able to focus on objects at much larger distances.

The mug for example was at a distance greater than 1.5 meters from the camera.

Here are some images that were captured on Allegra by plugging in the GigE to the PoE connector connected to the Martian. 

Quote:

I tried to emulate the analog controls (gain at 360, Black level 121) as in Johannes’ elog  12617 and varied the exposure rate from 1 to 5 milliseconds. The camera had the Rainbow 50mm lens with which I was able to focus on some markings on the white board, however the image was extremely magnified and this lens was extremely sensitive which meant that the image went quickly out of focus.

 

 

Attachment 1: Image__2017-05-31__18-49-37.bmp
  13080   Mon Jun 26 09:39:15 2017   Naomi   Summary   General   Measure transfer functions of Mini-Circuits filters

I have spent my first few days as a SURF getting experience working with the Network/Spectrum Analyzer (AG 4395A). After an introduction to the 40m by Koji, I was tasked with using the AG4395A to measure the transfer function of several filters (for example, Mini-Circuits Low Pass Filter SLP-30). I am now familiar with configuring the AG 4395A, taking a single set of data using a command from one of the control computers, and plotting the dataset as a Bode plot (separate plots for magnitude and phase) using Python.

To Do:

  • Use AGmeasure to take multiple datasets with a single command.
  • Plot multiple datasets for each filter on a single Bode plot and perform some statistical analysis. 

To experiment with plotting multiple datasets on a single Bode plot, I used a single dataset from the Network Analyzer using the SLP-30 filter and added random noise to create ten datasets to plot. I am attaching the resulting Bode plot, which has the ten generated sets of data plotted along with their average.

We discussed with Rana and Koji how to interpret this type of dataset from the Network Analyzer. Instead of considering the magnitude and phase as separate quantities, we should consider them together as a single complex number in the form H(f) = M exp(iπP/180), where M is the magnitude and P is the phase in degrees. We can then find the average value of the measured quantity in its complex number form (x + iy), as opposed to just taking the average of the magnitude and phase separately.  
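A minimal numpy sketch of this complex averaging (the ten magnitude/phase arrays here are synthetic placeholders standing in for the AG4395A datasets):

import numpy as np

rng = np.random.RandomState(0)
freqs = np.logspace(5, 8, 201)
mags = 1.0 + 0.01 * rng.randn(10, freqs.size)     # magnitude (linear), 10 measurements
phases = -30.0 + 0.5 * rng.randn(10, freqs.size)  # phase [deg], 10 measurements

H = mags * np.exp(1j * np.pi * phases / 180.0)    # H(f) = M exp(i*pi*P/180)
H_avg = H.mean(axis=0)                            # average as complex numbers

mag_avg = np.abs(H_avg)
phase_avg = np.degrees(np.angle(H_avg))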

Attachment 1: Bode_Plot.png
  13099   Fri Jul 7 09:03:40 2017   Steve   Summary   History   as it was in 1994


 

Attachment 1: 1994.PDF
  13116   Thu Jul 13 16:10:34 2017   Kaustubh   Summary   Computer Scripts / Programs   Cavity Scan Simulation Code

The code to produce a cavity scan simulation, fit the data, and re-evaluate the initially set parameters can be found in this git repo.


The 'CavitScanSimulation' python script now produces a cavity scan with custom parameters which can be easily modified. It also adds the first TEM modes (n+m = 0, 1, 2, 3, 4) to the laser, with power going as (1/(2(n+m)+1))^2 (selected carefully). The only care that needs to be taken is that the frequency span should be close to an integral multiple of the FSR, so that there is an equal number of resonances for all modes and sidebands. As of now, this code also calls the 'FitCavityScan' script, which performs the fitting procedure on the data generated above (the data is actually written to a '.mat' file) and generates the fit-parameter data files. The simulation code then calls the 'CalculatingPhysicalParameters' script, which evaluates the data based on the fit parameters and outputs some physically relevant results like the FSR, finesse, modulation depths and TMS (the current output is the estimated RoCs of the two mirrors, which isn't what we want directly, so it can be modified a bit to output the TMS based on the HOMs). The scripts also do some 'linearity' checks, which might not be of much significance but can be seen as a reference. Also, the ipython notebook shows all intermediate plots: the actual data and data with custom noise, the fit data, the FSR fitting, the linearity checks, and the Bessel-ratio plot with the modulation depths.
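As a rough illustration of what such a scan contains (this is a simplified stand-in, not the actual script), the transmission can be modelled as an incoherent sum of Airy peaks for the carrier HOMs and the two sidebands; all parameter values below are made-up examples.

import numpy as np

FSR   = 3.97e6     # [Hz] free spectral range (example value)
F     = 450.0      # finesse (example value)
TMS   = 1.2e6      # [Hz] transverse mode spacing (example value)
f_mod = 11.066e6   # [Hz] modulation frequency (example value)
gamma = 0.2        # modulation depth (example value)

def airy(f, f0, power):
    # transmission of a single resonance of relative power `power` centered at f0
    return power / (1.0 + (2.0 * F / np.pi)**2 * np.sin(np.pi * (f - f0) / FSR)**2)

f = np.linspace(0.0, 2.0 * FSR, 20000)   # scan ~2 FSRs so every resonance appears twice
trans = np.zeros_like(f)
for nm in range(5):                      # HOM orders n+m = 0..4
    p = (1.0 / (2.0 * nm + 1.0))**2      # HOM power scaling used above
    trans += airy(f, (nm * TMS) % FSR, p)                           # carrier HOM
    trans += airy(f, (nm * TMS + f_mod) % FSR, p * (gamma / 2)**2)  # upper sideband
    trans += airy(f, (nm * TMS - f_mod) % FSR, p * (gamma / 2)**2)  # lower sideband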

 

Note: The scripts should be run using either an IDE like 'spyder' (for .py files; comes with Anaconda) or an ipython notebook (for .ipynb files).
  13126   Wed Jul 19 12:47:43 2017   Steve   Summary   SUS   oplev laser summary updated

1103P, sn P893518, of 2013 vintage, is dead at the SUS fiber demo

Quote:

 

Quote:

                    Oct.  5, 2015              ETMY He/Ne replaced by 1103P, sr P919645,  made Dec 2014, after 2 years

                   Jan. 24, 2017              ETMY He/Ne replaced by 1103P,  sr P947049,  made Apr 2016,  after 477 hrs running hot

    Jan. 26, 2017              RIN test started with P947034, made Apr. 2016

    Apr.  10,  2017              purchased two 1103P from Edmund:  sr P964438 & sr P964431, made 02/2017

   

     July  19,  2017             1103P, sn P964438 as new installed at the south end for the glass fiber illumination. Turn laser off when you are done.

  13128   Wed Jul 19 18:24:15 2017   rana   Summary   SUS   oplev laser summary updated

The $1000 HeNe should not be used for illuminating fibers.

You should purchase these (total price per laser less than $6):

  1. 10 red laser diodes for $2.39 total
  2. 3-5 Vdc adapters
  13129   Fri Jul 21 08:05:07 2017   Steve   Summary   SUS   red, blue, green laser diodes ordered

Also ordered 1 ea.

Blue 20mW

Green 10mW

IR 780nm 3mW

Quote:

The $1000 HeNe should not be used for illuminating fibers.

You should purchase these (total price per laser less than $6):

  1. 10 red laser diodes for $2.39 total
  2. 3-5 Vdc adapters

HeNe 1103P 2mW Recertified

  13131   Fri Jul 21 19:44:58 2017   Naomi   Summary   Computer Scripts / Programs   Using PyKat to run Finesse

I have been working on using PyKat to run Finesse. There appear to be several ways to run an equivalent simulation using Finesse:

1: .kat only

Run a .kat file directly from the terminal. For example, if in the directory containing the Finesse kat.ini file, run the command ‘./kat file.kat’. This method does not use PyKat.

To edit the simulation using this method, one must directly edit the .kat file. This is not ideal, as all parameters must be hard-coded, and there is no looping method for duplicate commands.


Both of the following methods use PyKat in some manner. To run Finesse using PyKat from a .py file, the command ‘from pykat import finesse’ should be included. In addition, two environment variables must be defined:

  • 'FINESSE_DIR': directory containing the 'kat' executable
  • 'KATINI': location and name of the kat.ini file

Within a .py file running PyKat, the kat object contains all of the optical components and their states. To create a kat object, we use the command:

kat = finesse.kat()


2: .kat + .py

To load Finesse commands from a .kat file, we can use the command loadKatFile(). For example, using the kat object as defined above:

kat.loadKatFile('file.kat')

The kat object now contains any components defined in the .kat file. The states of these components can be altered using PyKat. For example, if in the .kat file, we defined a mirror named ‘ITM’, with R = 0.9, T = 0.1, phi = 0, and with nodes 1 and 2 to its left and right, respectively, using the Finesse command

m ITM 0.9 0.1 0 n1 n2

we can now alter the state of the mirror using a PyKat command such as

kat.ITM.phi = 30

which changes the ‘phi’ property of the mirror to 30 degrees. Once all alterations to objects are made, we can run Finesse using the command

out = kat.run()

which stores the output of the Finesse simulation in the variable out.


3: .py only

We can also run a Finesse simulation without any .kat file. There are two ways to define Finesse objects within a .py file.

- Parse a string containing Finesse commands, as would be found in a .kat file, using the command parseCommands(). For example,

            kat.parseCommands('m ITM 0.9 0.1 0 n1 n2')

defines the same mirror as above. This object can now be altered using pyKat in the same manner as above.

- Define an object using the classes defined in PyKat. For example, to define the same ITM mirror, we can use:

ITM = mirror('ITM', 'n1', 'n2', 0.9, 0.1, 0)

kat.add(ITM)

The syntax for these classes can be found in the files included in the PyKat package named ‘commands.py’, ‘detectors.py’, and ‘components.py’.

We can also run Finesse commands (rather than just defining components) using their respective classes. These must also be added to the kat object. For example:

x = xaxis('lin', ['-4M', '4M'], 'f', 1000, 'laser')

kat.add(x)

This runs the command ‘xaxis’, which sets the x-axis of the output data to run from freq = -4 MHz to 4 MHz, in 1000 steps. This is equivalent to the following Finesse command:

xaxis laser f lin -4M 4M 1000

In theory, we should be able to use PyKat to run any Finesse command. However, not all Finesse commands appear to be defined in PyKat; one example is the Finesse command ‘yaxis’, which I cannot locate in PyKat. In addition, I have had difficulty running some commands such as ‘cav’ and ‘pd’, despite following their class definitions in the PyKat files. However, these commands can still be easily run in PyKat using parseCommands().
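Putting the pieces together, here is a minimal end-to-end sketch of method 3 (.py only). The FINESSE_DIR/KATINI paths are placeholders, and the kat commands just reuse the example mirror from above together with a laser, a space and a DC photodiode:

import os
from pykat import finesse

os.environ['FINESSE_DIR'] = '/path/to/finesse'        # placeholder install location
os.environ['KATINI'] = '/path/to/finesse/kat.ini'     # placeholder kat.ini location

kat = finesse.kat()
kat.parseCommands("""
l laser 1 0 n0
s s1 1 n0 n1
m ITM 0.9 0.1 0 n1 n2
pd refl n1
xaxis ITM phi lin 0 180 360
""")

out = kat.run()      # sweep the ITM tuning and record the reflected DC power
print(out.x)         # x-axis values (phi in degrees)
print(out['refl'])   # detector output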

  13154   Mon Jul 31 20:35:42 2017   Koji   Summary   Computers   Chiara backup situation summary

Summary
- CDS Shared files system: backed up
- Chiara system itself: not backed up


controls@chiara|~> df -m
Filesystem     1M-blocks    Used Available Use% Mounted on
/dev/sda1         450420   11039    416501   3% /
udev               15543       1     15543   1% /dev
tmpfs               3111       1      3110   1% /run
none                   5       0         5   0% /run/lock
none               15554       1     15554   1% /run/shm
/dev/sdb1        2064245 1718929    240459  88% /home/cds
/dev/sdd1        1877792 1426378    356028  81% /media/fb9bba0d-7024-41a6-9d29-b14e631a2628
/dev/sdc1        1877764 1686420     95960  95% /media/40mBackup

/dev/sda1 : System boot disk
/dev/sdb1 : main cds disk file system 2TB partition of 3TB disk (1TB vacant)
/dev/sdc1 : Daily backup of /dev/sdb1 via a cron job (/opt/rtcds/caltech/c1/scripts/backup/localbackup)

/dev/sdd1 : 2014 snap shot of cds. Not actively used. USB

https://nodus.ligo.caltech.edu:8081/40m/11640

 

  13159   Wed Aug 2 14:47:20 2017   Koji   Summary   Computers   Chiara backup situation summary

I further compressed the burt snapshot directories, following the same approach as ELOG 11640. This freed up an additional ~130 GB, which will eventually help give more space to the local backup (/dev/sdc1).

controls@chiara|~> df -m
Filesystem     1M-blocks    Used Available Use% Mounted on
/dev/sda1         450420   11039    416501   3% /
udev               15543       1     15543   1% /dev
tmpfs               3111       1      3110   1% /run
none                   5       0         5   0% /run/lock
none               15554       1     15554   1% /run/shm
/dev/sdb1        2064245 1581871    377517  81% /home/cds
/dev/sdd1        1877792 1426378    356028  81% /media/fb9bba0d-7024-41a6-9d29-b14e631a2628
/dev/sdc1        1877764 1698489     83891  96% /media/40mBackup

 

 

  13183   Thu Aug 10 14:13:23 2017   Kira   Summary   PEM   temperature sensor

Goal is to build a temperature sensor accurate to 1 mK. Schematic is shown below. This does not take into account the DC gain that occurs.

Parts that would be used for this: LM317 regulator, AD592 temperature transducer, OP amp (low input noise and high impedance), 100K (or maybe 10k) resistor. This is what is currently proposed, but the exact parts we use could be changed to better fit the sensor. The resistor and the OP amp will be decided depending on the output of the AD592.
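For a sense of scale, here is a quick calculation of the readout sensitivity, assuming the AD592's nominal 1 uA/K scale factor and a plain transimpedance resistor R (the two R values are just the candidates mentioned above):

I_per_K = 1e-6        # [A/K] nominal AD592 scale factor
T = 295.0             # [K] room temperature
for R in (10e3, 100e3):
    V = I_per_K * T * R       # DC voltage across the resistor
    dV_dT = I_per_K * R       # sensitivity [V/K]
    print("R = %3.0f kOhm: V = %5.2f V, dV/dT = %5.1f mV/K, so 1 mK is %5.1f uV"
          % (R / 1e3, V, dV_dT * 1e3, dV_dT * 1e3))

For the 10k choice this means resolving ~10 uV on top of a ~3 V DC level, which is where the DC cancellation circuit mentioned below comes in.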

Once this is built, I would like to create a few copies of it and put them into an insulated container and measure the output from each one. This would allow us to calculate the temperature noise of the circuit, as we can take out the variations due to temperature changes inside the container by comparing the outputs.

I can also model the noise in the circuit to see how much noise there is before building it. There are three terms to the noise that we have, and we need to decide which one dominates at low frequencies.

Our final goal is to create an additional circuit that could cancel out the DC gain. I have attached an additional schematic proposed by Rana that would help with this issue. I will leave this second half for when the first part works.

Attachment 1: IMG_20170810_121637~2.jpg
Attachment 2: IMG_20170810_134422~2.jpg
  13188   Thu Aug 10 21:22:06 2017   rana   Summary   PEM   temperature sensor
  • Should we use the AD590 or the AD592? They're sort of the same, but have slightly different packages and specs.
  • Given the sensitivity of the AD590 to power supply drift, I would recommend using a precision voltage reference like the AD581 or AD587. Take a look at the datasheets and order a few different varieties so we can see what works best for us. I believe that the voltage regulators have too much drift to use for precision temperature control.
  • The AD590 datasheet has a few example circuits showing how we can subtract off the offset which comes from the 1 uA/K coefficient of the AD590 (i.e. we would have 295 uA at room temperature, trying to stabilize to +/- 0.1 K)
  • For the first prototypes its fine to use some solderable protoboard assembly, but we will eventually have to figure out how to package this thing and interface it with our Acromag slow controls system.

also, I've attached some temperature noise spectra from the LISA group at the AEI in Hannover. It will be interesting to see if we get the same results.

Attachment 1: tempsensnoise2.pdf
Attachment 2: tempnoise_final.pdf
  13209   Tue Aug 15 11:50:21 2017   Kira   Summary   PEM   temp sensor packaging/mount

For the final packaging/mounting of the sensor to the seismometer, I have thought of two options.

1. Attach circuit to a PCB board and place it inside the can, while leaving the AD590 open to the air inside the can.

  • This makes sure that the sensor gets a direct measurement of the temperature of the air in the can, as it is exposed to the air. 
  • But it samples only a limited area, so it could be the case that the spot we place it in happens to be a hot or cold pocket, and the measurement would be inaccurate.
  • This can be solved by placing multiple copies of the circuit in various places of the can and averaging the values.

2. Attach the AD590 to a copper plate with thermal paste and put it into a pomona box.

  • This solves the limited-sample-area problem of the first option, as the copper plate should have a uniform temperature distribution, so we are sampling the temperature of that whole area.
  • Need to make sure that the thermal response time of the copper is shorter than the timescale of the temperature variations we want to measure.
  • This can be calculated using equations for heat transfer (listed below).

If anyone has input on which method is preferred or any additional options that we may have, I would appreciate it.

Heat transfer:

q = k A dT / s

  • k = thermal conductivity
  • A = area
  • dT = temperature gradient
  • s = thickness

For copper, k = 401 W/mK, s = 1.27 mm, A = 2.66x10^-3 m^2 (for the particular copper plate I measured), and dT = 1 K (assumed). Thus the heat transfer will be about 839 J/s.
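A quick cross-check of that number:

k = 401.0       # [W/(m K)] thermal conductivity of copper
A = 2.66e-3     # [m^2] plate area
dT = 1.0        # [K] assumed temperature difference
s = 1.27e-3     # [m] plate thickness
print("q = %.0f W" % (k * A * dT / s))   # ~840 W, consistent with the 839 J/s above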

I'm not completely sure what to do with this yet, but it could help us decide whether the copper plate option will be useful for us.

 

  13235   Mon Aug 21 20:11:25 2017   johannes   Summary   General   Loss measurements plan

There are three methods we (will soon) have available to evaluate the round-trip dissipative losses in the arms that do not suffer from the ITM loss dominance:

  • DC reflection method:
    • Compare reflected light levels from [ITM only] vs [arm cavity on resonance]
  • Basler CCDs:
    • Infer large (or small) angle scatter loss with calibrated CCDs
  • Reflection ringdowns:
    • Need AS port light injection, principle is similar to DC method but better (?)

DCREFL

The DC method comparing reflectivities has been used in the past and is relatively easy to do. After the recent vacuum troubles, the first step should be to re-do these measurements as CDS permits (this needs some ASS functionality and, of course, the MC to behave). It wouldn't hurt to know the parameters this depends on (mode overlap and modulation depths) with better certainty. Maybe the SURF scripts for mode spectroscopy can be applied?
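For reference, here is a plane-wave sketch of how the DC reflection numbers turn into a loss estimate: solve for the round-trip loss that reproduces the measured ratio of reflected power (arm locked on resonance vs. ITM single bounce). Mode overlap and the RF sidebands are ignored, which is exactly the systematic mentioned above, and T1, T2 and the ratio are example numbers, not measured 40m values.

import numpy as np
from scipy.optimize import brentq

T1, T2 = 1.4e-2, 14e-6              # example ITM / ETM power transmissions

def refl_ratio(loss_rt):
    # P_refl(locked, on resonance) / P_refl(ITM only), plane-wave model
    r1, r2 = np.sqrt(1.0 - T1), np.sqrt(1.0 - T2)
    a = np.sqrt(1.0 - loss_rt)                    # round-trip amplitude factor for the extra loss
    r_cav = (r1 - r2 * a) / (1.0 - r1 * r2 * a)   # on-resonance amplitude reflectivity
    return r_cav**2 / r1**2

measured_ratio = 0.99               # example measurement
loss = brentq(lambda L: refl_ratio(L) - measured_ratio, 1e-9, 1e-3)
print("round-trip loss ~ %.0f ppm" % (loss * 1e6))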

CCDs

With the new CCD cameras calibrated, we can determine, before venting, the magnitude of the large-angle scatter loss (assuming isotropic scatter) of ETMX and possibly ETMY. Can we look past ETMX/ETMY from the viewports? Then we can probably also look at the small-angle scatter of ITMX and ITMY. If not, once we open one of the chambers there's the option of installing mirrors as close as possible to the main beam path. The easiest is probably to look at ITMX, since there is plenty of space in the XEND chamber, and the camera is already installed.

ASPORT

This requires a lot of up-front work. We decided to use the spare 200 mW NPRO. It will be placed on the PSL table and injected into an optical fiber, which terminates on the AS table. The free-space beam there then needs to be sort-of mode-matched into the SRC ("sort-of" because of the mode spectroscopy). We want to be able to phase-lock this secondary beam to the PSL with at least a couple of kHz of bandwidth, and also to completely extinguish the beam on time scales of a few microseconds. We will likely need to purchase a few components beyond what we can salvage from other labs; I'm still going through the inventory and will know more soon (more detailed post to follow). We also need to settle on the polarization we want to send in from the back.

 

Tentative Schedule (aggressive)

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

Week Aug 28 - Sep 3:

  • Re-evaluate modulation indices if necessary
  • Optical beat AS Port Auxiliary Laser (ASAL) - PSL
  • PLL setup
  • CCD large angle prep work

Week Sep 4 - Sep 10:

  • PLL CDS integration
  • Amplitude-modulation preparation
  • CCD large angles

Week Sep 11 - Sep 17:

  • Fiber-injection
  • AS table preliminary mode-matching
  • Faraday setup
  • CCD small angle prep work

Week Sep 18 - Sep 24:

  • ASAL amplitude switching
  • CCD small angles

Week Sep 25 - Oct 1:

  • AS port ringdowns

 

  13236   Mon Aug 21 21:26:41 2017   gautam   Summary   General   Loss measurements plan

In case you want to use it, I had profiled the Lightwave NPRO sometime back, and we were even using it as the AUX X laser for a short period of time. 

As for using the AS laser for mode spectroscopy: don't we want to match the beam into the cavity as best as possible, and then use some technique to disturb the input mode (like the dental tooth scraper technique from Chris Mueller's thesis)? 

Johannes and I did an arm scan of the X arm today (arm controlled with ALS, monitoring IR transmission). Only 2 IR FSRs were scanned, but there should be sufficient information in there to extract the modulation depth and mode matching - can we use Kaustubh's/Naomi's code? The Y arm ALS needs to be touched up, so I don't have a Y arm scan yet. Note that to get a good arm scan measurement, the High Gain Thorlabs PD should be used as the transmission PD.

Quote:
 

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

 

  13241   Tue Aug 22 16:56:54 2017   johannes   Summary   General   AS laser existing components inventory

I surveyed the lab today to see what we may need to buy for the AS laser setup.

We have:

NPRO 200 mW + Driver

Faraday Isolator from cabinet

ISOMET Model 1201E: This is a free-space AOM I found in the modulator cabinet. It needs to be driven at 40 MHz (to be confirmed) with ~6 W of electrical power. For a 500 micron beam it can allegedly achieve rise times of '93' [units not specified, could this be nanoseconds?]. I did not find a dedicated driver for it; however, there was a 5 W Mini-Circuits amplifier ZHL-5W-1 in the RF cabinet and a switch ZSDR-230, which has a typical switching time of 2 microseconds, though I'm not sure how this translates to rise/fall times of the deflected power. It seems we have everything to set this up, so we'll know by the end of the week whether we can use a combination of these things or if we need to buy additional driver electronics.

New Focus model 4004 broadband phase modulator, which is labeled as dusty and is indeed quite dirty when looking through it. We should attempt to clean it, and maybe we can use it here or at the ends.

Probably all the optics we need for the PSL table setup.

 

We need:

Beat PD: How about one of these: EOT ET-3000A? I didn't find a broadband PD for the beat with the PSL

Fiber Stuff: coupler & polarization maintaining fiber 20m & collimator. There are a couple options here, which we can discuss in the meeting.

Faraday Isolator: If we want to inject P-polarization. If S is okay we can use a polarizing plate beamsplitter instead.

Possibly some large lenses for mode-matching to IFO (TBD)

 

 

  13250   Thu Aug 24 18:02:16 2017   Gabriele   Summary   LSC   First cavity length reconstruction with a neural network

1) Introduction

In brief, I trained a deep neural network (DNN) to reconstruct the cavity length, using as input only the transmitted power and the reflection PDH signals. The training was performed with simulated data, computed along 0.25 s long trajectories sampled at 8 kHz, with random ending points in the [-lambda/4, lambda/4] unique region and with random velocities.

The goal of this work is to validate the whole approach of length reconstruction with a DNN in the Fabry-Perot case, by comparing the DNN reconstruction with the ALS cavity length measurement. The final target is to deploy a system to lock PRMI and DRMI. Actually, the Fabry-Perot cavity problem is harder for a DNN: the cavity linewidth is quite narrow, forcing me to use a very high sampling frequency (8 kHz) to be able to capture a few samples at each resonance crossing. I'm using a recurrent neural network (RNN) in the input layers of the DNN, and this is trained using truncated backpropagation in time (TBPT): during training each layer of the RNN is unrolled into as many copies as there are input time samples (8192 * 0.25 = 2048). So in practice I'm training a DNN with >2000 layers! The limit here is computational, mostly the GPU memory. That's why I'm not able to use longer data stretches.

But in brief, the DNN reconstruction is performing well for the first attempt.

2) Training simulation

In the results shown below, I'm using a pre-trained network with parameters that do not match very well the actual data, in particular for the distribution of mirror velocity and the sensing noises. I'm working on improving the training.

I used the following parameters for the Fabry-Perot cavity:

The uncertainty is assumed to be the 90% confidence level of a gaussian distribution. The DNN is trained on 100000 examples, each one a 0.25 s long trajectory sampled at 8 kHz, with random velocity between 0.1 and 5 um/s, and with ending points distributed as follows: 33% uniform on the [-lambda/4, lambda/4] region, plus 33% from a gaussian distribution peaked at the center with 5 nm width. In addition there are 33% more static examples, distributed near the center.

For each point along the trajectory, the signals TRA, POX11_I and POX11_Q are computed and used as input to the DNN.

3) Experimental data

Gautam collected about 10 minutes of data with the free swinging cavity, with ALS locked on the arm. Some more data were collected with the cavity driven, to increase the motion. I used the driven dataset in the analysis below.

3.1) ALS calibration

The ALS signal is calibrated in green Hz. After converting it to meters, I checked the calibration by measuring the distance between carrier peaks. It turned out that the ALS signal is undercalibrated by about 26%. After correcting for this, I found that there is a small non-linearity in the ALS response over multiple FSR. So I binned the ALS signal over the entire range and averaged the TRA power in each bin, to get the transmission signals as a function of ALS (in nm) below:

I used a peak detection algorithm to extract the carrier and 11 MHz sideband peaks, and compared them with the nominal positions. The difference between the expected and measured peak positions as a function of the ALS signal is shown below, with a quadratic fit that I used to improve the ALS calibration

The result is

z_initial = 1e9 * L * lambda / c * 1.26 * ALS
z_corrected = 2.1e-06 * z^2 - 1.9e-02 * z - 6.91e+02

The ALS calibrated z error from the peak position is of the order of 3 nm (one sigma)
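The correction step itself is just a polynomial fit of expected vs. measured peak positions; a minimal sketch with placeholder arrays standing in for the peaks extracted above:

import numpy as np

z_meas = np.array([-620.0, -310.0, -5.0, 300.0, 612.0])      # detected peak positions [nm] (placeholders)
z_expected = np.array([-625.0, -312.5, 0.0, 312.5, 625.0])   # nominal positions [nm] (placeholders)

coeffs = np.polyfit(z_meas, z_expected, 2)     # quadratic map: measured -> corrected
z_corrected = np.polyval(coeffs, z_meas)
print(np.std(z_corrected - z_expected))        # residual scatter of the correction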

3.2) Mirror velocity

Using the calibrated ALS signal, I computed the cavity length velocity. The histogram below shows that this is well described by a gaussian with a width of about 3 um/s. In my DNN training I used a different velocity distribution, but this shouldn't have a big impact. I'm retraining with a different distribution.

4) DNN results

The plot below shows a stretch of time-domain DNN reconstruction, compared with the ALS calibrated signal. The DNN output is limited to the [-lambda/4, lambda/4] region, so the ALS signal is also wrapped into the same region. In general the DNN reconstruction follows the real motion reasonably well, mostly failing when the velocity is small and the cavity is simultaneously out of resonance. This is a limitation that I also see in simulation, and it is due to the short training time of 0.25 s.

I did not hand-pick a good period; this is representative of the average performance. To get a better understanding of the performance, here's a histogram of the error for 100 seconds of data:

The central peak was fitted with a gaussian, just to give a rough idea of its width, although the tails are much wider. A more interesting plot is the histogram below of the reconstructed position as a function of the ALS position; ideally one would expect a perfect diagonal. The result isn't too far from the expectation:

The largest off-diagonal peak is at (-27, 125), marked with the red cross. Its origin is clearer in the plot below, which shows the mean, RMS and maximum error as a function of the cavity length. The second peak corresponds to where the 55 MHz sidebands resonate. In my training model, there were no 55 MHz sidebands nor higher order modes.

5) Conclusions and next steps

The DNN reconstruction performance is already quite good, considering that the DNN couldn't be trained optimally because of computational power limitations. This is a validation of the whole idea of training the DNN offline on a simulation and then deploying the system online.

I'm working to improve the results by

  • training on a more realistic distribution of velocity
  • adding the 55 MHz sidebands
  • maybe adding HOMs
  • tune the DNN architecture

However I won't spend too much time on this, since I think the idea has been already validated.

 

  13251   Thu Aug 24 18:51:57 2017   Koji   Summary   LSC   First cavity length reconstruction with a neural network

Phenomenal!

  13256   Sat Aug 26 09:56:34 2017   Gabriele   Summary   LSC   First cavity length reconstruction with a neural network

Update

I included the 55 MHz sideband and higher order modes in my training examples. To keep things simple, I just assumed there are higher order modes up to n+m=4 in the input beam. The power in each HOM is randomly chosen from a gaussian distribution with a width determined from experimental cavity scans. I used a value of 0.913 +/- 0.01 rad for the Gouy phase (again estimated from cavity scans, but in reasonable agreement with the nominal radius of curvature of ETMX).

Results are improved. The plots below show the performance of the neural network on 100 s of experimental data.

For reference, the plots below show the performance of the same network on simulated data (that includes sensing noise but no higher order modes)

  13258   Mon Aug 28 08:47:32 2017   Jamie   Summary   LSC   First cavity length reconstruction with a neural network
Quote:

Phenomenal!

truly.

  13269   Tue Aug 29 15:41:17 2017   Kira   Summary   PEM   heater circuit

I worked with Kevin and Gautam to create a heater circuit. The first attachment is Kevin's schematic of the circuit. The OP amp connects to the gate of the power MOSFET, and the power supply connects to the drain, while the source goes into the heater. We set the power supply voltage to 22V and varied the voltage of the input to the OP amp. At 6V to the OP amp, we got a current of 0.35A flowing through the heater and resistor. This was the peak current we got due to the OP amp being saturated (an increase in either of the power supplies did not change the current), but when we increased the voltage of the supply rails of the OP amp from 15V to 20V, we got a current of 0.5A. We would want a higher current than this, so we will need to get a different OP amp with a higher max voltage rating, and a resistor that can take more power than this one (it currently takes 5W of power, and is the best one we could find).

Kevin and I created a simulation of this circuit using CircuitLab to understand why the current was so low (second attachment). The horizontal axis is the voltage we supply to the OP amp. The blue line shows the voltage at the point between the output of the OP amp and the gate of the MOSFET. The orange line is the voltage at the point between the source of the MOSFET and the heater. The brown line is the voltage at the point between the heater and resistor. Thus, we can see that saturation occurs at about 2.1V. At that point, the gate-source voltage is the difference between the blue curve and the orange curve, which is about 4V, which is what we measured. Likewise, the voltage across the heater is the difference between the orange curve and the brown curve, which comes out to around 8V, which is also what we measured. Lastly, the voltage across the resistor is the brown curve, which is about 2V, which matches our observations. The circuit works as it should, but saturates too soon to get a high enough current out of it.

Gautam noted that it is important to measure the current correctly. We can't just use an ammeter and place it across the resistor or heater, because the internal resistance of the ammeter (~0.5 ohm) is comparable to the resistance we want to measure, so the current gets split between the circuit and the ammeter and we get an equivalent resistance of 1/R = 1/R0 + 1/Ra, where R0 is the resistance of the part we want to measure the current across, and Ra is the ammeter resistance. Thus, the new resistance will be lower and the ammeter will show a higher current value than what is actually there. So to accurately measure the current, we must place the ammeter in series with the part we want to measure. We initially got a 1A reading on the heater, which was not correct, and our setup did not heat up at all basically. When we placed the ammeter in series with the heater, we got only 0.35A.

The last two images are the setup for testing of the heater. We wrapped it around an aluminum piece and covered it with a few layers of insulating material. We can stick a thermometer in between the insulation and heater to see the temperature change. In later tests, we may insulate the whole piece so that less heat gets dissipated. In addition, we used a heat sink and thermal paste to secure the MOSFET to it, as it got very hot.

Our next steps will be to get a resistor and an OP amp that are better suited for our purposes. We will also run simulations with components that we choose to make sure that it can provide the desired current of 1A (the maximum output of the power supply is 24V, and the heater is 24 ohm, so max current is 1A). Kevin is working on that now.

Attachment 1: heater_circuit.pdf
Attachment 2: simulation.png
Attachment 3: heater_setup.jpg
Attachment 4: IMG_20170829_131126.jpg
  13272   Wed Aug 30 06:45:32 2017   Kevin   Summary   PEM   New Heater Circuit

I changed the heater circuit described in this elog to a current sink. The new and old circuits are shown in the attachment. The heater is R_h and is currently 24Ω; the sense resistor R is currently 6Ω. The op-amp is still an OP27 and the MOSFET is still an IRF630.

The current through the old circuit was saturating because the gate voltage on the MOSFET was saturating at the op-amp supply rails. This is because the source voltage is relatively high: V_S = I(R + R_h).

In the new circuit the source voltage is lower and the op-amp can thus drive a large enough V_GS to draw more current (until the power supply saturates, at 25V/30Ω = 0.8A in this case). The source and DAC voltages are equal in this case, V_DAC = V_S, and so the current is I = V_DAC/R. Since this is the same current that flows through the heater, the drain voltage is V_D = V_cc - I*R_h. I observed this behavior in this circuit until the power supply saturated at 0.8 A. Note that when this happens, V_D = V_S and the gate voltage saturates at the supply rails in an attempt to supply the necessary current.
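A small worked example of the operating point (ideal op-amp, component values from above):

V_cc = 25.0   # [V] drain supply used in the 0.8 A estimate above
R = 6.0       # [ohm] sense resistor
R_h = 24.0    # [ohm] heater

def operating_point(V_dac):
    I = V_dac / R                  # the op-amp servos V_S = V_DAC, so I = V_DAC / R
    V_D = V_cc - I * R_h           # drain voltage at that current
    supply_limited = V_D <= V_dac  # once V_D falls to V_S, the formula for I no longer holds
    return I, V_D, supply_limited

for V_dac in (1.0, 3.0, 4.8, 5.5):
    print(V_dac, operating_point(V_dac))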

Attachment 1: circuit.pdf
  13274   Wed Aug 30 11:04:08 2017   Gabriele   Summary   LSC   First look at neural network reconstruction of PRMI motion

Introduction

I trained a deep neural network (DNN) to reconstruct MICH and PRCL degrees of freedom in the PRMI configuration. For details on the DNN architecture please refer to G1701455 or G1701589. Or if you really want all the details you can look at the code. I used the following signals as input to the DNN: POPDC, POP22_Q, ASDC, REFL11_I/Q, REFL55_I/Q, AS55_I/Q.

Gautam took some PRMI data in free swinging and driven configuration:

  • 1187819331 + 10mins: Free swinging PRMI (after first locking PRMI on carrier and dither aligning).
  • 1187820070 + 5mins: PRM driven at low freq.
  • 1187820446 + 5mins: BS driven at low freq.

In contrast to the Fabry-Perot cavity case, we don't have a direct measurement of the real PRCL/MICH degrees of freedom, so it's more difficult to assess if the DNN is working well.

Results

All MICH and PRCL values are wrapped into the unique region [-lambda/4, lambda/4]^2. It's even a bit more complicated than simple wrapping. Indeed, MICH is periodic over [-lambda/2, lambda/2]. However, the Michelson interferometer reflectivity (as seen from the PRC) in the first half of the segment is the same as in the second half, except for a change in sign. This change of sign in the Michelson reflectivity can be compensated by moving PRCL by lambda/4, thus generating a pi phase shift in the PRC round-trip propagation that compensates for the MICH sign change. Therefore, the unit cell of unique values for all signals can be taken as [-lambda/4, lambda/4] x [-lambda/4, lambda/4] for MICH x PRCL. But when we hit the border of the MICH region, PRCL is also affected by the addition of lambda/4. Graphically, the square regions A, B, C below are all equivalent, as are others that are not highlighted:

This makes it a bit hard to un-wrap the reconstructed signal, especially considering that in the reconstruction the wrapping is "soft".

The plot below shows an example of the time domain reconstruction of MICH/PRCL during the free swinging period.

It's hard to tell if the positions look reasonable, with all the wrapping going on.

Two-dimensional maps of signals

Here's an attempt at validating the DNN reconstruction. Using the reconstructed MICH/PRCL signal, I can create a 2d map of the values of the optical signals. I binned the reconstructed MICH/PRCL in a 51x51 grid, and computed the mean value of all optical signals for each bin. The result is shown in the plot below, directly compared with the expectation from a simulation.
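The binning/averaging step is essentially one call to scipy; a sketch with placeholder arrays standing in for the reconstructed MICH/PRCL time series and one optical channel:

import numpy as np
from scipy.stats import binned_statistic_2d

rng = np.random.RandomState(0)
mich = rng.uniform(-130.0, 130.0, 100000)   # [nm] placeholder reconstruction
prcl = rng.uniform(-130.0, 130.0, 100000)   # [nm] placeholder reconstruction
popdc = rng.randn(100000)                   # placeholder optical signal

mean_map, mich_edges, prcl_edges, _ = binned_statistic_2d(
    mich, prcl, popdc, statistic='mean', bins=51)
# mean_map is the 51x51 array of per-bin averages to compare against the simulation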

The power signals (POP_DC, AS_DC, POP22_Q) look reasonably good. REFL11_I/Q also looks good (please note that due to an early mistake in my code, I reversed the convention for I/Q, so the PRCL signal is maximized in Q instead of in I). The 55 MHz signals look a bit less clear...

Steps forward

  • I'm quite confident in the tuning of the demodulation phases and signs for REFL11 and POP22, but less so for REFL55 and not sure at all for AS55. So it would be useful to measure a full sensing matrix of PRCL and MICH against those signals, to compare with my simulation.
  • I'm working on an idea to fine tune the DNN using the real interferometer data, more to follow when the idea crystallizes in a clear form.
  13279   Thu Aug 31 00:46:57 2017   rana   Summary   CDS   allegra -> Scientific Linux 7.3

I made a 'LiveCD' on a 16 GB USB stick using this command after the GUIs didn't work and looking at some blog posts:

sudo dd if=SL-7.3-x86_64-2017-01-20-LiveCD.iso of=/dev/sdf

Quote:

Debian doesn't like EPICS. Or our XY plots of beam spots...Sad!

Quote:
Quote:

No, not confused on that point. We just will not be testing OS versions at the 40m or running multiple OS's on our workstations. As I've said before, we will only move to so-called 'reference' systems once they've been in use for a long time.

Ubuntu16 is not to my knowledge used for any CDS system anywhere.  I'm not sure how you expect to have better support for that.  There are no pre-compiled packages of any kind available for Ubuntu16.  Good luck, you big smelly doofuses. Nyah, nyah, nyah.

K Thorne recommends that we use SL7.3 with the 'xfce' window manager instead of the Debian family of products, so we'll try it out on allegra and rossa to see how it works for us. Hopefully the LLO CDS team will be the tip of the spear on solving the usual software problems we have when we "~up" grade.

  13292   Tue Sep 5 09:47:34 2017   Kira   Summary   PEM   heater circuit calculations

I decided to calculate the fluctuation in power that we will have in the heater circuit. The resistors we ordered have 50 ppm/C and it would be useful to know what kind of fluctuation we would expect. For this, I assumed that the heater itself is an ideal resistor that has no temperature variation. The circuit diagram is found in Kevin's elog here. At saturation, the total resistance (we will have a 1\Omega resistor instead of 6\Omega for our new design) will be R_{tot}=R+R_{h}=1\Omega +24\Omega =25\Omega. Therefore, with a 24V input, the saturation current should be I=\frac{V_{in}}{R_{tot}}=\frac{24V}{25\Omega}=0.96A.  Therefore, the power in the heater should be (in the ideal case) P=I^2R{_{h}}=22.1184W

Now, in the case where the resistor is not ideal, let's assume the temperature of the resistor changes by 10C (which is about how much we would like to heat the whole thing). Therefore, the resistor will have a new value of R_{new}=R+50ppm/C\times 10C\times 10^{-6}=1.0005\Omega. The new current will then be I_{new}=\frac{V_{in}}{R_{new}}=0.95998A and the new power will be P_{new}=I_{new}^{2}R_{h}=22.1175W. So the difference in power going through the heater is about 0.00088W.

We can use this power difference to calculate how much the temperature of the metal can we wish to heat up will change. \Delta T=\Delta P\times (1/\kappa) /x where \kappa is the thermal conductivity and x is the thickness of the material. For our seismometer, I calculated it to be 0.012K.
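
As a quick cross-check, the power numbers above can be reproduced in a few lines of Python using only the values quoted in this entry:

# Cross-check of the numbers above, using only the values quoted in this entry
V_in = 24.0       # supply voltage [V]
R = 1.0           # series resistor [Ohm]
R_h = 24.0        # heater resistance [Ohm], assumed ideal
tempco = 50e-6    # resistor temperature coefficient [1/C]
dT = 10.0         # assumed temperature change of the resistor [C]

P = (V_in / (R + R_h))**2 * R_h            # ideal heater power: ~22.1184 W
R_new = R * (1 + tempco * dT)              # resistor after warming by 10 C: 1.0005 Ohm
P_new = (V_in / (R_new + R_h))**2 * R_h    # heater power with warmed resistor: ~22.1175 W
print(P, P_new, P - P_new)                 # difference ~0.0009 W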

  13294   Tue Sep 5 16:37:47 2017 GabrieleSummaryLSCImproved PRMI deep learning reconstruction

This is an update on the results already presented earlier (refer to elog 13274 for more introductory details). I significantly improved the results with the following tricks:

  • I retuned the demodulation phase of AS55, this time ensuring that the (alleged) MICH motion is visible mostly in Q when crossing a carrier resonance. Further fine tuning of the phases will be possible once we have a measurement of the length optical matrix

  • I fine tuned the network by training it again using the real data. The idea is the following. I started with the network trained on the simulated data and froze the parameters of the input recurrent layers. I fed the real signals to the network, computed the reconstructed PRCL/MICH, and fed them to my PRMI model to compute simulated signals. I allowed some of the parameters of the model to vary (especially demodulation phases). I then trained the network again by trying to match the model-predicted signals with the real input signals. I allowed only the parameters of the fully connected layers to vary (mostly for technical reasons; I'm working on re-training the recurrent layers as well)

An example of time domain reconstruction is visible below. It already looks better than the old results:

As before, to better evaluate the performance I plotted averaged values of the real signals as a function of the reconstructed MICH and PRCL positions. The results are compared with simulation below. They match quite well (left: real data, right: simulation expectation)

One thing to better understand is that MICH seems to be somewhat compressed: most of the output values are between -100 and +100 nm, instead of the expected -lambda/4 to +lambda/4 range. The reason is still unclear to me. It might be a bug that I haven't been able to track down yet.

  13304   Fri Sep 8 12:08:32 2017 GabrieleSummaryLSCGood reconstruction of PRMI degrees of freedom with deep learning

Introduction

This is an update of my previous reports on applications of deep learning to the reconstruction of PRMI degrees of freedom (MICH/PRCL) from real free swinging data. The results shown here are improved with respect to elog 13274 and 13294. The training is performed in two steps, the first one using simulated data, and the second one fine tuning the parameters on real data.

First step: training with simulation

This step is exactly the same as already described in the previous entries and in my talks at the CSWG and LVC. For details on the DNN architecture please refer to G1701455 or G1701589. Or if you really want all the details you can look at the code. I used the following signals as input to the DNN: POPDC, POP22_Q, ASDC, REFL11_I/Q, REFL55_I/Q, AS55_I/Q. The network is trained using linear trajectories in the PRCL/MICH space, and signals obtained from a model that simulates the PRMI behavior in the plane wave approximation. A total of 150000 trajectories are used. The model includes uncertainties in all the optical parameters of the 40m PRMI configuration, so that the optical signals for each trajectory are actually computed using random optical parameters, drawn from Gaussian distributions with the proper mean and width. Also, white Gaussian sensing noise is added to all signals, with levels comparable to the measured sensing noise.
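
Purely for illustration, here is a rough sketch of this kind of training-set generation. This is not the actual code: the plane-wave PRMI model is replaced by a trivial analytic stand-in, and all names and values are made up.

# Rough sketch of the simulated training-set generation (not the real code;
# the plane-wave PRMI model is replaced by a trivial analytic stand-in)
import numpy as np

rng = np.random.default_rng(1)
n_samp, lam = 512, 1064e-9

def prmi_signals(mich, prcl, params):
    # stand-in for the plane-wave PRMI model: returns (n_samp, n_signals)
    return np.stack([np.cos(4 * np.pi * prcl / lam + params[0]),
                     np.sin(4 * np.pi * mich / lam + params[1])], axis=-1)

X, Y = [], []
for _ in range(1000):                        # 150000 trajectories in the real training set
    start, stop = rng.uniform(-lam / 4, lam / 4, (2, 2))   # linear trajectory endpoints
    mich, prcl = np.linspace(start, stop, n_samp).T
    params = rng.normal([0.0, 0.0], 0.05)    # randomized optical parameters per trajectory
    sig = prmi_signals(mich, prcl, params)
    sig += rng.normal(0, 1e-3, sig.shape)    # white Gaussian sensing noise
    X.append(sig)
    Y.append(np.stack([mich, prcl], axis=-1))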

The typical performance on real data of a network pre-trained in this way was already described in elog 13274; although reasonable, it was not very good.

Second step: training with real data

Real free swinging data is used in this step. I fine tuned the demodulation phases of the real signals. Please note that due to an old mistake, my convention for phases is 90 degrees off, so for example REFL11 is tuned such that PRCL is maximized in Q instead of I. Regardless of this convention confusion, here's how I tuned the phases:

  • REFL11: PRCL is all in Q when crossing the carrier resonance
  • REFL55: PRCL is all in Q when crossing the carrier resonance
  • AS55: MICH is all in Q when crossing the PRCL carrier resonance
  • POP22: signal peaking in Q when crossing carrier or sideband resonances. Carrier resonance crossing gives positive sign

Then I built the following training architecture. The neural network takes the real signals and produces estimates of PRCL and MICH for each time sample. Those estimates are used as the input for the PRMI model, to produce the corresponding simulated optical signals. My cost function is the squared difference of the simulated versus real signals. The training data is generated from the real signals, by selecting 100000 random 0.25 s long chunks: the history of the real signals over the whole 0.25 s is used as input, and only the last sample is used for the cost function computation. The weights and biases of the neural network, as well as the model parameters, are allowed to change during the learning process. The model parameters are regularized to suppress large deviations from the nominal values.
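
For illustration only, here is a very schematic sketch of this second training step. It is not the actual code: the differentiable PRMI model is a toy stand-in, the network shape and framework (PyTorch) are my own assumptions, and the real data is replaced by random tensors.

# Schematic sketch of the fine-tuning step (toy stand-ins everywhere; not the real code)
import math
import torch
import torch.nn as nn

n_sig, lam = 9, 1064e-9

class Reconstructor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_sig, hidden, batch_first=True)   # recurrent input layers (frozen below)
        self.fc = nn.Sequential(nn.Linear(hidden, 32), nn.Tanh(), nn.Linear(32, 2))

    def forward(self, x):              # x: (batch, time, n_sig)
        h, _ = self.rnn(x)
        return self.fc(h[:, -1, :])    # estimated (PRCL, MICH) at the last sample

def prmi_model(dof, phases):
    # toy differentiable stand-in for the plane-wave PRMI signal model
    prcl, mich = dof[:, 0:1], dof[:, 1:2]
    base = torch.cat([torch.cos(4 * math.pi * prcl / lam + phases[0]),
                      torch.sin(4 * math.pi * mich / lam + phases[1])], dim=1)
    return base.repeat(1, n_sig)[:, :n_sig]

net = Reconstructor()
for p in net.rnn.parameters():                # freeze the recurrent layers
    p.requires_grad = False

phases = torch.zeros(2, requires_grad=True)   # model parameters allowed to vary (e.g. demod phases)
opt = torch.optim.Adam(list(net.fc.parameters()) + [phases], lr=1e-3)

for step in range(100):                       # 100000 random chunks in the real training
    chunk = torch.randn(64, 500, n_sig)       # stand-in for a batch of 0.25 s chunks at 2 kHz
    dof_est = net(chunk)                      # reconstructed PRCL/MICH at the last sample
    sim = prmi_model(dof_est, phases)         # model-predicted optical signals
    loss = ((sim - chunk[:, -1, :])**2).mean() + 1e-2 * (phases**2).mean()   # + regularization
    opt.zero_grad()
    loss.backward()
    opt.step()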

One side note here. At first sight it might seem weird that I'm feeding the last sample as input and at the same time using it as the reference for the loss function. However, you have to remember that there is no "direct" path from input to output: instead everything goes through the estimated MICH/PRCL degrees of freedom and the optical model. So this actually forces the network to tune the reconstruction to the model. This approach is very similar to the auto-encoder architectures used in unsupervised feature learning in image recognition.

Results

After training the network with the two previous steps, I can produce time domain plots like the one below, which show MICH and PRCL signals behaving reasonably well:

To get a feeling of how good the reconstruction is, I produced the 2d maps shown below. I divided the MICH/PRCL plane into 51x51 bins, and averaged the real optical signals with binning determined by the reconstructed MICH and PRCL degrees of freedom. For comparison, the expected simulation results are shown. I would say that the reconstructed and simulated results match quite well. It looks like the MICH reconstruction is still a bit "compressed", but this should not be a big issue, since it should still work for lock acquisition.

Next steps

There are a few things that can be done to further tune the network. Those are mostly details, and I don't expect significant improvements. However, I think the results are good enough to move on to the next step, which is the on-line implementation of the neural network in the real time system.

  13365   Fri Oct 6 12:56:40 2017 gautamSummaryLSCRTCDS NN

[gabriele, gautam]

Gabriele had prepared a C code implementation of his NN for MICH/PRCL state estimation. He had been trying to get it going on some of the machines in WB, but was unsuccessful. The version of RCG he was trying to compile and run the code on was rather dated so we decided to give it a whirl on our new RCG3.4 here at the 40m. Just noting down stuff we tried here:

  • Code has been installed at /opt/rtcds/userapps/release/cds/c1/src/nn. This is to facilitate compilation by the RCG.
  • There is also a simulink block diagram (x3tst.mdl) in there which we copied and pasted into c1pem. Changed the appropriate paths in the C-Code block to point to the location in the previous bullet point.
  • c1pem was chosen for this test as it runs at 2k which is what the network is designed for.
  • Since we were running the test on c1sus and expected the machine to crash, I shutdown all the watchdogs for the test.
  • Code compiled without any problems (i.e. rtcds make c1pem and rtcds install c1pem executed successfully). There were some warning messages related to C-Code blocks but these are not new, they show up in all models in which we have custom C-code blocks. 
  • Unfortunately, the whole c1sus FE crashed when we tried rtcds restart c1pem.
  • We tried a couple more iterations, and attempted to monitor dmesg during the restart process to see if there were any clues as to why this wasn't working, but got nothing useful.

All models have been reverted to their state prior to this test, and everything on the CDS_OVERVIEW MEDM screen is green now.

Attachment 1: post_NN_test.png
post_NN_test.png
  13383   Tue Oct 17 17:53:25 2017 jamieSummaryLSCprep for tests of Gabriele's neural network cavity length reconstruction

I've been preparing for testing Gabriele's deep neural network MICH/PRCL reconstruction.  No changes to the front end have been made yet, this is all just prep/testing work.

Background:

We have been unable to get Gabriele's nn.c code running in kernel space for reasons unknown (see tests described in previous post).  However, Rolf recently added functionality to the RCG that allows front end models to be run in user space, without needing to be loaded into the kernel.  Surprisingly, this seems to work very well, and is much more stable for the overall system (starting/stopping the user space models will not ever crash the front end machine).  The nn.c code has been running fine on a test machine in this configuration.  The RCG version that supports user space models is not that much newer than what the 40m is running now, so we should be able to run user space models on the existing system without upgrading anything at the 40m.  Again, I've tested this on a test machine and it seems to work fine.

The new RCG with user space support compiles and installs both kernel and user-space versions of the model.

Work done:

  • Create 'c1dnn' model for the nn.c code.  This will run on the c1lsc front end machine (on core 6 which is currently empty), and will communicate with the c1lsc model via SHMEM IPC.  It lives at:
    • /opt/rtcds/userapps/release/isc/c1/models/c1dnn.mdl
  • Got latest copy of nn.c code from Gabriele's git, and put it at:
    • /opt/rtcds/userapps/release/isc/c1/src/nn/
  • Checked out the latest version of the RCG (currently SVN trunk r4532):
    • /opt/rtcds/rtscore/test/nn-test
  • Set up the appropriate build area:
    • /opt/rtcds/caltech/c1/rtbuild/test/nn-test
  • Built the model in the new nn-test build directory ("make c1dnn")
  • Installed the model from the nn-test build dir ("make install-c1dnn")

Test:

I tried a manual test of the new user space model.  Since this is a user space process, running it should have no effect on the rest of the front end system (which it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

Attachment 1: c1dnn.png
c1dnn.png
  13390   Wed Oct 18 12:14:08 2017 jamieSummaryLSCprep for tests of Gabriele's neural network cavity length reconstruction
Quote:

I tried a manual test of the new user space model.  Since this is a user space process, running it should have no effect on the rest of the front end system (which it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

I tried moving the model to c1ioo, where there are plenty of free cores sitting idle, and the model seems to run fine.  I think the problem was just CPU contention on the c1lsc machine, where there were only two free cores and the kernel was using both for all the rest of the normal user space processes.

So there are two options:

  • Use cpuset on c1lsc to tell the kernel to remove all other processes from CPU6 and save it just for the c1dnn model.  This should not have any impact on the running of c1lsc, since that's exactly what would be happening if we were running the model in kernel space (e.g. isolating the core for the front end model).  The auxiliary support user space processes (epics seq/ioc, awgtpman) should all run fine on CPU0, since that's what usually happens.  Linux is only using the additional core since it's there.  We don't have much experience with cpuset yet, though, so more offline testing will be required first.
  • Run the model on c1ioo and ship the needed signals to/from c1lsc via PCIe dolphin.  This is potentially slightly more invasive of a change, and would put more work on the dolphin network, but it should be able to handle it.

I'm going to start testing cpuset offline to figure out exactly what would need to be done.

  13395   Thu Oct 19 15:42:03 2017 jamieSummaryLSCMICH/PRCL reconstruction neural network running on c1lsc

Gabriele's PRCL/MICH reconstruction neural network is now running on c1lsc.  Summary:

  • front-end model is called c1dnn, and is running as an experimental user-space process
  • c1dnn is getting most of its needed inputs from existing SHMEM IPC outputs from c1lsc
  • none of the output signals from the network are being sent anywhere yet (grounded)
  • c1dnn has not been integrated in any way into the DAQ etc.; it is being run manually by hand, and will be completely shut down after this test

Simple MEDM screen I made to monitor the input/output signals:

The RTS process seems to run fine, but there is quite a bit of jitter in the CPU_METER, at the 50% level:

It's not running over the limit, but it is jumping around more than I think it should be.  Will look into that...

cpuset for cpu isolation for user-space model

The c1dnn model is running on CPU6 on c1lsc.  CPU6 was isolated from the rest of the system using cpuset.  The "cset" utility was used to create a "system" CPU set that was assigned to CPU0, and the kernel was instructed to move all running processes to that set:

controls@c1lsc:~ 2$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y   343    0 /
controls@c1lsc:~ 0$ sudo cset set -c 0 -s system --cpu_exclusive
cset: --> created cpuset "system"
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y   342    1 /
       system          0 y       0 n     0    0 /system
controls@c1lsc:~ 0$ sudo cset proc --move -f root -t system -k
cset: moving all tasks from root to /system
cset: moving 292 userspace tasks to /system
cset: moving 0 kernel threads to: /system
cset: --> not moving 50 threads (not unbound, use --force)
[==================================================]%
cset: done
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    50    1 /
       system          0 y       0 n   292    0 /system
controls@c1lsc:~ 0$ sudo cset proc --move -f root -t system -k --force
cset: moving all tasks from root to /system
cset: moving 50 kernel threads to: /system
[==================================================]%
cset: **> 29 tasks are not movable, impossible to move
cset: done
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    29    1 /
       system          0 y       0 n   313    0 /system
controls@c1lsc:~ 0$

I then created a set for the RTS process ("rts-c1dnn") on CPU6, and executed the c1dnn model in that set:

controls@c1lsc:~ 0$ sudo cset set -c 6 -s rts-c1dnn --cpu_exclusive
cset: --> created cpuset "rts-c1dnn"
controls@c1lsc:~ 0$ sudo cset set
cset:
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root        0,6 y       0 y    24    2 /
    rts-c1dnn          6 y       0 n     0    0 /rts-c1dnn
       system          0 y       0 n   340    0 /system
controls@c1lsc:~ 0$ sudo cset proc -s rts-c1dnn --exec /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -- -m c1dnn
cset: --> last message, executed args into cpuset "/rts-c1dnn", new pid is: 27572
sysname = c1dnn
....

When done I just hit Ctrl-C.

I left the cpusets as they are, with all system processes in the "system" set.  This should not pose any problems, since it's identical to the configuration we would have if a normal kernel-level model were running on CPU6.

The c1dnn process and its EPICS sequencer were shut down after this test.

  13400   Tue Oct 24 20:14:21 2017 jamieSummaryLSCfurther testing of c1dnn integration; plugged in to DAQ

In order to try to isolate CPU6 for the c1dnn neural network reconstruction model, I set the CPUAffinity in /etc/systemd/system.conf to "0" for the front end machines.  This sets the cpu affinity for the init process, so that init and all child processes are run on CPU0.  Unfortunately, this does not affect the kernel threads.  So after reboot all user space processes were on CPU0, but the kernel threads were still spread around.  Will continue trying to isolate the kernel as well...

In any event, this amount of isolation was still good enough to get the c1dnn user space model running fairly stably.  It's been running for the last hour without issue.

I added the c1dnn channel and testpoint files to the daqd master file, and restarted daqd_dc on fb1, so now the c1dnn channels and test points are available through dataviewer etc.  We were then able to observe the reconstructed signals:

We'll need to set the phase rotation of the demodulated RF PD signals (REFL11, REFL55, AS55, POP22) to match them with what the NN expects...

  13401   Wed Oct 25 09:32:14 2017 GabrieleSummaryLSCfurther testing of c1dnn integration; plugged in to DAQ
Quote:

 

 

We'll need to set the phase rotation of the demodulated RF PD signals (REFL11, REFL55, AS55, POP22) to match them with what the NN expects...

Here are the demodulation phases and rotation matrices tuned for the network. For the matrices, I am assuming that the input is [I, Q] and the output is [I,Q].

POP22
phi = 153 degrees
[[-0.894, 0.447],
 [-0.447, -0.894]]

REFL11
phi = 93 degrees
[[-0.058, 0.998],
 [-0.998, -0.058]]

REFL55
phi = -90 degrees
[[0.000, -1.000],
 [1.000, 0.000]]

AS55
phi = 7 degrees
[[0.993, 0.122],
 [-0.122, 0.993]]
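
As a cross-check, the quoted matrices appear to follow (to within the rounding of the quoted phases) from rotating the [I, Q] input vector with the convention R(phi) = [[cos(phi), sin(phi)], [-sin(phi), cos(phi)]]. A quick numpy sketch of that check (my own, not part of any front-end code):

# Quick check of the matrices above, using the convention
# R(phi) = [[cos(phi), sin(phi)], [-sin(phi), cos(phi)]] acting on the [I, Q] input
import numpy as np

def iq_rotation(phi_deg):
    phi = np.radians(phi_deg)
    return np.array([[ np.cos(phi), np.sin(phi)],
                     [-np.sin(phi), np.cos(phi)]])

for name, phi in [("POP22", 153), ("REFL11", 93), ("REFL55", -90), ("AS55", 7)]:
    print(name, np.round(iq_rotation(phi), 3))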

  13405   Sun Oct 29 16:40:17 2017 ranaSummaryComputersdisk cleanup

Backed up all the wikis. They're in wiki_backups/*.tar.xz (because xz -9e gives better compression than gzip or bzip2)

Moved old user directories into /users/OLD/

  13411   Mon Nov 6 18:22:48 2017 jamieSummaryLSCcurrent procedure for running c1dnn code

This is the current procedure to start the c1dnn model:

$ ssh c1lsc
$ sudo systemctl start rts-epics@c1dnn
$ sudo systemctl start rts-awgtpman@c1dnn
$ sudo /usr/bin/cset proc -s rts-c1dnn --exec /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -- -m c1dnn
...

Then to shutdown:

...
Ctrl-C
$ sudo systemctl stop rts-awgtpman@c1dnn
$ sudo systemctl stop rts-epics@c1dnn

The daqd already knows about this model, so nothing should need to be done to the daqd to make the dnn channels available.

  13421   Thu Nov 9 10:51:37 2017 gautamSummaryLSCcurrent procedure for compiling and installing c1dnn code

Jamie pointed out that the compile and install instructions are different for c1dnn:

cd /opt/rtcds/caltech/c1/rtbuild/test/nn-test
make c1dnn
make install-c1dnn

See also: https://nodus.ligo.caltech.edu:8081/40m/13383.

I think these build instructions have to be run on the c1lsc frontend - in the past, I have been able to compile and install models on any computer with the shared drive mounted (including the control room workstations), but I'm guessing that something has changed since the RCG upgrade. Jamie can correct me on this if I'm wrong.

  13425   Fri Nov 10 18:57:41 2017 ranaSummaryElectronicsIthaca 1201 vs. SR560

I characterized the black Ithaca 1201 pre-amp that we had sitting in the racks. It works fine and the input-referred noise is < 10 nV/rtHz. I also checked that the filter selection switches on the front panel did what they claim and that the gain knob gives us the correct gain.

For comparison I have also included the G=100 and G=1000 input-referred noise of one of the best SR560s that we have (s/n 02763) in the lab. Above a few Hz, the SR560 is better, but for low frequency measurements it seems that the 1201 is our friend.

As with the SR560, you don't actually get low noise performance for G < 100, due to some fixed output noise level.

Steve:  sn48332 of Ithaca 1201

Attachment 1: Ithaca1201.pdf
Ithaca1201.pdf
  13427   Tue Nov 14 16:02:43 2017 KiraSummaryPEMseismometer can testing

I made a model for our seismometer can using actual data so that we know approximately what the time constant should be when we test it out. I used the appendix in Megan Kelley's report to derive a relation for the temperature as a function of time.

\frac{dQ}{dt}=mc\frac{dT(t)}{dt} so T(t)=\frac{1}{mc}\int \frac{dQ}{dt}dt=\frac{1}{mc}\int P_{net}dt and P_{net}=P_{in}-P_{out}

In our case, we will heat the can to a certain temperature and wait for it to cool on its own, so P_{in}=0

We know that P_{out}=\frac{kA\Delta T}{d} where k is the k-factor of the insulation we are using, A is the area of the surface through which heat is flowing, \Delta T is the change in temperature, d is the thickness of the insulation.

Therefore,

T(t)=\frac{1}{mc}\int_{0}^{t}\frac{kA}{d}[T_{lab}-T(t')]dt'=\frac{kA}{mcd}(T_{lab}t-\int_{0}^{t}T(t')dt')

We can take the derivative of this to get

T'(t)=\frac{kAT_{lab}}{mcd}-\frac{kA}{mcd}T(t), or T'(t)=B-CT(t) 

We can guess the solution to be

T(t)=C_{1}e^{-t/\tau}+C_{2} where tau is the time constant, which we would like to find.

The boundary conditions are T(0)=40 and T(\infty)=T_{lab}=24. I assumed we would heat up the can to 40 Celsius while the room temp is about 24. Plugging this into our equations,

C_{1}+C_{2}=40, C_{2}=24, so C_{1}=16

We can plug everything back into the derivative T'(t)

T'(t)=-\frac{16}{\tau}e^{-t/\tau}=B-C[16e^{-t/\tau}+24]

Equating the exponential terms on both sides, we can solve for tau

\frac{16}{\tau}e^{-t/\tau}=16Ce^{-t/\tau}, so \frac{1}{\tau}=C and \tau=\frac{1}{C}=\frac{mcd}{kA} (the constant terms cancel as well, since B/C=T_{lab}=24)

Plugging in the values that we have, m = 12.2 kg, c = 500 J/(kg·K) (stainless steel), d = 0.1 m, k = 0.26 W/(m·K), A = 2 m^2, we get that the time constant is 0.326 hr. I have attached the plot that I made using these values. I would expect to see something similar to this when I actually do the test.
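
As a quick cross-check of the quoted time constant and the expected cooling curve, using only the values above:

# Cross-check of the predicted time constant and cooling curve, using the values above
import numpy as np

m, c, d = 12.2, 500.0, 0.1        # mass [kg], specific heat [J/(kg K)], insulation thickness [m]
k, A = 0.26, 2.0                  # insulation k-factor [W/(m K)], surface area [m^2]
T_lab, T_0 = 24.0, 40.0           # lab and initial can temperatures [C]

tau = m * c * d / (k * A)         # time constant [s]
print(tau / 3600)                 # ~0.33 hr

t = np.linspace(0, 5 * 3600, 500)                # 5 hours of cooling
T = (T_0 - T_lab) * np.exp(-t / tau) + T_lab     # expected cooling curve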

To set up the experiment, I removed the can (with Steve's help) and will place a few heating pads on the outside and wrap the whole thing in a few layers of insulation to make the total thickness 0.1 m. Then, we will attach the heaters to a DC source and heat the can up to 40 Celsius. We will wait for it to cool on its own and monitor the temperature to create a plot and find the experimental time constant. Later, we can use the heating circuit we used for the PSL lab and modify the parts as needed to drive a few amps through the circuit. I calculated that we'd need about 6 A to get the can to 50 Celsius using the setup we used previously, but we could drive a smaller current by using a higher heater resistance.

Attachment 1: time_const.png
time_const.png
  13460   Fri Dec 1 17:09:29 2017 Udit KhandelwalSummaryGeneral 

Current objectives and statuses:

  • CAD Model of 40m lab (facility, chambers, invacuum components etc)
    • Status: On hold since I'm unable to acquire general dimensions of 
  13472   Wed Dec 13 17:46:08 2017 Udit KhandelwalSummaryGeneralSummary of Current Tasks

40m Lab CAD

1. 40m_bldg.dwg has 2D drawing of the 40m building

  • After importing the file as a 2D sketch into solidworks, make sure to retrace all the lines before performing any 3D extrusion stuff.
  • Made walls 3m high

2. 40m_VE.dwg has the Vacuum Envelope.

  • Divided the file into individual sketches for the tubes, test mass, and beam splitter chambers (so they can be individually modified later if required).

3. 40melev.dwg has the relative positioning between (1) and (2).

  • Using this file to position objects inside building cad.

4. All files can be found in Dropbox folder [40m SOS Modeling], which should be renamed to [40m CAD].

5. Next step would be to add the optical table, mirrors.

Tip-Tilt Suspension

1. Current objective: (refer to D070172) - Increase the length of the side arms (so it matches the dimensions of D960001), while keeping the test mass subassembly at the same height.
2. Future objective: Resonant frequency FEM of the frame (sans the test mass), and then change height to get the desired frequency.

Past Work

  • Completed solidworks model of SOS (D960001). I understand this is not the focus right now so this is for reference that the model is ready to be used.

Comments

  • I will be in India from 16th December until 6th January so this is my final visit for this year. I have enough material to work from home, and will correspond with Koji over email regarding Lab CAD and tip-tilt suspension.
  13484   Fri Dec 15 18:24:46 2017 ranaSummaryOptical LeversPRM

Today Angelina and I looked at the PRM OL with an eye towards installing a 2nd QPD. We want to try out using 2 QPDs for a single optic to see if there's a way to make a linear combination of them to reduce the sensitivity to jitter of the HeNe laser or acoustic noise on the table.

The power supply for the HeNe was gone, so I took one from the SP table.

There are WAY too many optics in use to get the beam from the HeNe into the vacuum and then back out. What we want is 1 steering mirror after the laser and then 1 steering mirror before the QPD. Even though there are rumors that this is impossible, I checked today and in fact it is very, very possible.

More optics = more noise = bad.

  13489   Wed Dec 20 00:43:58 2017 KevinSummaryGeneralDAC noise contribution to squeezing noise budget

Gautam and I looked into the DAC noise contribution to the noise budget for homodyne detection at the 40m. DAC noise is currently the most likely limiting source of technical noise.

Several of us have previously looked into the optimal SRC detuning and homodyne angle to observe pondermotive squeezing at the 40m. The first attachment summarizes these investigations and shows the amount of squeezing below vacuum obtainable as a function of homodyne angle for an optimal SRC detuning including fundamental classical sources of noise (seismic, CTN, and suspension thermal). These calculations are done with an Optickle model. According to the calculations, it's possible to see 6 dBvac of squeezing around 100 Hz.

The second attachment shows the amount of squeezing obtainable including DAC noise as a function of current noise in the DAC electronics. These calculations are done at the optimal -0.45 deg SRC detuning and 97 deg homodyne angle. Estimates of this noise are computed as is done in elog 13146 and include de-whitening. It is not possible to observe squeezing with the current 400 Ω series resistor, which corresponds to 30 pA/rtHz current noise at 100 Hz. We can get to 0 dBvac for current noise of around 10 pA/rtHz (1.2 kΩ series resistor), and can see 3 dBvac of squeezing with current noise of about 5 pA/rtHz at 100 Hz (2.5 kΩ series resistor). At that point, however, it will be difficult to control the optics.
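
As a side note, the quoted resistor/current-noise pairs are consistent with a roughly flat effective DAC voltage noise of about 12 nV/rtHz at 100 Hz (after de-whitening) divided by the series resistance; a quick back-of-the-envelope sketch of that scaling (my own estimate, not the Optickle calculation):

# Back-of-the-envelope scaling of coil current noise with series resistance, assuming a
# roughly flat effective DAC voltage noise of ~12 nV/rtHz at 100 Hz after de-whitening
# (a value inferred from the numbers quoted above, not an independent measurement)
e_dac = 12e-9                       # effective DAC voltage noise [V/rtHz] at 100 Hz
for R in (400.0, 1200.0, 2500.0):   # series resistance [Ohm]
    i_n = e_dac / R                 # current noise [A/rtHz]
    print("R = %4.0f Ohm -> i_n ~ %.0f pA/rtHz" % (R, i_n * 1e12))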

So it seems reasonable to reduce the DAC noise to sufficient levels to observe squeezing, but we will need to think about the controls problem more.

Attachment 1: 40m_squeezing.pdf
40m_squeezing.pdf
Attachment 2: 40mDAC_squeezing.pdf
40mDAC_squeezing.pdf