40m Log, Page 4 of 357
  14278   Tue Nov 6 19:41:46 2018   Jon   Omnistructure   c1vac1/2 replacement

This afternoon I started setting up the Supermicro 5017A-EP that will replace c1vac1/2. Following Johannes's procedure in 13681 I installed Debian 8.11 (jessie). There is a more recent stable release, 9.5, now available since the first acromag machine was assembled, but I stuck to version 8 for consistency. We already know that version to work. The setup is sitting on the left side of the electronics bench for now.

  14282   Wed Nov 7 19:17:18 2018   Jon   Omnistructure   modbusIOC is running on c1vac replacement

Today I finished setting up the server that will replace the c1vac1/2 machines. I put it on the martian network at a previously unassigned IP, assigned it the hostname c1vac, and added it to the DNS lookup tables on chiara.

I created a new targets directory on the network drive for the new machine: /cvs/cds/caltech/target/c1vac. After setting EPICS environment variables according to 13681 and copying over (and modifying) the files from /cvs/cds/caltech/target/c1auxex as templates, I was able to start a modbusIOC server on the new machine. I was able to read and write (soft) channel values to the EPICS IOC from other machines on the martian network.

I scripted it as a systemd-managed process which automatically starts on boot and restarts after failure, just as it is set up on c1auxex.
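For future reference, a unit file of the kind described might look like this minimal sketch (the unit name, script path, and options are assumptions, not a copy of the actual file on c1vac):

```ini
# /etc/systemd/system/modbusIOC.service (hypothetical name and paths)
[Unit]
Description=EPICS modbus IOC for the vacuum system
After=network.target

[Service]
User=controls
# startup script assumed to live in the new target directory
ExecStart=/cvs/cds/caltech/target/c1vac/startModbusIOC.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enabling it with systemctl enable gives the start-on-boot and restart-after-failure behavior described above.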

  14287   Fri Nov 9 22:24:22 2018   Jon   Omnistructure   Wiring of Vacuum Acromag Chassis Complete

Wiring of the power, Ethernet, and indicator lights for the vacuum Acromag chassis is complete. Even though this crate will only use +24V DC, I wired the +/-15V connector and indicator lights as well to conform to the LIGO standard. There was no wiring diagram available, so I had to reverse-engineer the wiring from the partially complete c1susaux crate. Attached is a diagram for future use. The crate is ready for software development to begin on Monday.

  14296   Wed Nov 14 21:34:44 2018   Jon   Omnistructure   Vacuum Acromags installed and tested

All 7 Acromag units are now installed in the vacuum chassis. They are connected to 24V DC power and Ethernet.

I have merged and migrated the two EPICS databases from c1vac1 and c1vac2 onto the new machine, with appropriate modifications to address the Acromags rather than VME crate.

I have tested all the digital output channels with a voltmeter, and some of the inputs. Still more channels to be tested.

I’ll follow up with a wiring diagram for channel assignments.

  14308   Mon Nov 19 22:45:23 2018   Jon   Omnistructure   Vacuum System Subnetwork

I've set up a closed subnetwork for interfacing the vacuum hardware (Acromags and serial devices) with the new controls machine (c1vac). The controls machine has two Ethernet interfaces: one faces outward into the martian network, and the other faces the internal subnetwork, 192.168.114.xxx. The second network interface was configured via the following procedure.

1. Add the following lines to /etc/network/interfaces:

allow-hotplug eth1
iface eth1 inet static
    # static address on the vacuum subnet (example value; the actual address is not recorded here)
    address 192.168.114.1
    netmask 255.255.255.0

2. Restart the networking services:

$ sudo /etc/init.d/networking restart

3. Enable DNS lookup on the martian network by adding the following line to /etc/resolv.conf:

search martian

4. Enable IP forwarding from eth1 to eth0:

$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

5. Configure IP tables to allow outgoing connections, while keeping the LAN invisible from outside the gateway (c1vac):

$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
$ sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

6. Finally, because the EPICS 3.14 server binds to all network interfaces, client applications running on c1vac now see two instances of the EPICS server: one at the outward-facing address and one at the LAN address. To resolve this ambiguity, two additional environment variables must be set that specify to local clients which server address to use. Add the following lines to /home/controls/.bashrc (the placeholder stands for c1vac's martian IP, which is not recorded here):

export EPICS_CA_AUTO_ADDR_LIST=NO
export EPICS_CA_ADDR_LIST=<c1vac martian IP>

A list of IP addresses so far assigned on the subnetwork follows.

Device IP Address
Acromag XT1111a
Acromag XT1111b
Acromag XT1111c
Acromag XT1111d
Acromag XT1111e
Acromag XT1121a
Acromag XT1121b
  14309   Mon Nov 19 23:38:41 2018   Jon   Omnistructure   Vacuum Acromag Channel Assignments

I've completed bench testing of all seven vacuum Acromags installed in a custom rackmount chassis. The system contains five XT1111 modules (sinking digital I/O) used for readbacks of the state of the valves, TP1, CP1, and the RPs. It also contains two XT1121 modules (sourcing digital I/O) used to pass 24V DC control signals to the AC relays actuating the valves and RPs. The list of Acromag channel assignments is attached.

I tested each input channel using a manual flip-switch wired between signal pin and return, verifying that the EPICS channel readout changed appropriately when the switch was flipped open vs. closed. I tested each output channel using a voltmeter placed between signal pin and return, toggling the EPICS channel on/off state and verifying that the output voltage changed appropriately. These tests confirm that the Acromag units all work and that all the EPICS channels are correctly addressed.

  14315   Sun Nov 25 17:41:43 2018   Jon   Omnistructure   Vacuum Controls Upgrade - Status and Plans

New hardware has been installed in the vacuum controls rack. It is shown in the post-install photo below.

  • Supermicro server (c1vac) which will be replacing c1vac1 and c1vac2.
  • 16-port Ethernet switch providing a closed local network for all vacuum devices.
  • 16-port IOLAN terminal server for multiplexing/Ethernetizing all RS-232 serial devices.

Below is a high-level summary of where things stand, and what remains to be done.


 Set up of replacement controls server (c1vac).

  • Supermicro 1U rackmount server, running Debian 8.11 (jessie).
  • Hosting an EPICS modbus IOC, scripted to start/restart automatically as a system service.
  • First Ethernet interface put on the martian network at its assigned address.
  • Second Ethernet interface configured to host a LAN at 192.168.114.xxx for communications with all vacuum electronics. It connects to a 16-port Ethernet switch installed in the vacuum electronics rack.
  • Server installed in vacuum electronics rack (see photo).

 Set up of Acromag terminals.

  • 6U rackmount chassis frame assembled; 15V DC, 24V DC, and Ethernet wired.
  • Acromags installed in chassis and configured for the LAN (5 XT1111 units, 2 XT1121 units).

 EPICS database migration.

  • All vacuum channels moved to the modbus IOC, with the database updated to address the new Acromags. [The new channels are running concurrently at "C1:Vac2-...." to avoid conflict with the existing system.]
  • Each hard channel was individually tested on the electronics bench to confirm correct addressing and Acromag operation.

 Set up of 16-port IOLAN terminal server (for multiplexing/Ethernetizing the serial devices).

  • Configured for operation on the LAN. Each serial device port is assigned a unique IP address, making the terminal server transparent to client TCP applications.
  • Most of the pressure gauges are now communicating with the controls server via TCP.

Ongoing this week:

  • [Jon] Continue migrating serial devices to ports on the terminal server. Still left are the turbo pumps, N2 gauge, and RGA.
  • [Jon] Continue developing Python code for communicating with gauges and pumps via TCP sockets. A beta version of gauge readout code is running now.
  • [Chub] Install feedthrough panels on the Acromag chassis. Connect the wiring from feedthrough panels to the assigned Acromag slots.
  • [Chub/Jon] Test all the hard EPICS channels on the electronics bench, prior to installing the crate in the vacuum rack.
  • [Chub/Jon] Install the crate in the vacuum rack; connect valve/pump readbacks and actuators; test each hard EPICS channel in situ.
  • [Jon] Once all the signal connections have been made, in situ testing of the Python interlock code can begin.
  14320   Mon Nov 26 21:58:08 2018   Jon   Omnistructure   Serial Vacuum Signals

All the serial vacuum signals are now interfaced to the new digital controls system. A set of persistent Python scripts will query each device at regular intervals (up to ~10 Hz) and push the readings to soft channels hosted by the modbus IOC. Similar scripts will push on/off state commands to the serial turbo pumps.
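A minimal sketch of one such query helper in Python (the host/port and the command string are placeholders, and the reply format assumed by the parser is illustrative, not the exact syntax of any of our gauges):

```python
import socket

def query_device(host, port, command, timeout=2.0):
    """Send one command to a serial device behind the terminal server and
    return its raw reply (the terminal server makes each device look like
    a plain TCP endpoint)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(command.encode() + b"\r")
        return s.recv(1024).decode(errors="replace").strip()

def parse_pressure(reply):
    """Pull the trailing numeric field out of a gauge reply,
    e.g. 'PRP 7.60E+02' -> 760.0."""
    return float(reply.strip().split()[-1])
```

A persistent version of this would loop at up to ~10 Hz and push each parsed value to the corresponding modbus IOC soft channel.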

IP Addresses/Comm Settings

Each serial device is assigned an IP address on the local subnet as follows. Its serial communication parameters as configured in the terminal server are also listed.

Device                            IP Address   Baud Rate   Data Bits   Stop Bits   Parity
MKS937a vacuum gauge controller                9600        8           1           even
MKS937b vacuum gauge controller                9600        8           1           even
GP307 vacuum gauge controller                  9600        8           1           even
GP316a vacuum gauge controller                 9600        8           1           even
GP316b vacuum gauge controller                 9600        8           1           even
N2 pressure line gauge                         9600        7           1           odd
TP2/3                                          9600        8           1           none

Hardware Modifications

  • Each of the five vacuum gauge controllers has an RJ45 adapter installed directly on its DB9/DB25 output port. Because the RJ45 cable now plugs directly into the terminal server, instead of passing through some additional adapters as it formerly did, it was necessary to reverse the wiring of the controller TXD and RXD pins to Ethernet pins. The DB9/25-to-RJ45 adapters on the back of the controllers are now wired as follows.
    • For the MKS controllers: DB2 (RXD) --> Eth4;  DB3 (TXD) --> Eth5;  DB5 (RTN) --> Eth6
    • For the Granville-Phillips controllers: DB2 (TXD) --> Eth5;  DB3 (RXD) --> Eth4;  DB7 (RTN) --> Eth6
  • I traced a communications error with the GP307 gauge controller all the way back to what I would have suspected least, the controller itself. The comm card inside each controller has a set of mechanical relay switches which set the communications parameters (baud rate, parity, etc.). Knowing that this controller was not part of the original installation, but was swapped in to replace the original in 2009, I pulled the controller from the rack and checked the internal switch settings. Sure enough, the switch settings (pictured below) were wrong. In the background of the photo is the unit removed in 2009, which has the correct settings. After setting the correct communications parameters, the controller immediately began communicating with the server. Did these readouts (PRP, PTP1) never work since 2009? I don't see how they could.
  14420   Tue Jan 29 16:12:21 2019   Chub   Update

The foam in the cable tray wall passage had been falling on the floor in little bite-sized pieces, so I investigated and found a fiber cable that had been chewed/clawed through.  I didn't find any droppings anywhere in the 40m, but I decided to bait an un-set trap and see if we'd find activity around it. There has been none so far.  If there is still none tomorrow, I will move the trap and keep looking for signs of rodentia.  At the moment, the trap is in a box in front of the double doors at the north end of the control room.  Next it will be placed in the IFO room, up in the cable tray.

gautam: the fiber that was damaged was the one from the LSC rack FiBox to the control room FiBox. So no DAFI action for a bit...

  14435   Tue Feb 5 10:22:03 2019   chub   Update   oil added to RP-1 & 3

I added lubricating oil to roughing pumps RP1 and RP3 yesterday and this morning.  Also, I found a nearly full 5 gallon jug of grade 19 oil in the lab.  This should set us up for quite a while.  If you need to add oil to the roughing pumps, use the oil in the quart bottle in the flammables cabinet.  It is labeled as Leybold HE-175 Vacuum Pump Oil.  This bottle is small enough to fill the pumps in close quarters.

  14437   Wed Feb 6 10:07:23 2019   Chub   Update   pre-construction inspection

The Central Plant building will be undergoing seismic upgrades in the near future.  The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant.  Project manager Eugene Kim has explained the work to me and also noted our concerns.  He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.

Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab.  If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at eugene.kim@caltech.edu . 

  14476   Fri Mar 8 08:40:26 2019   Anjali   Configuration   Frequency stabilization of 1 micron source

The schematic of the homodyne configuration is shown below.

The list of components follows:

Item                       Quantity            Availability   Part number               Remarks
Laser (NPRO)               1                   Yes
Couplers (50/50)           5                   3 No's         FOSC-2-64-50-L-1-H64F-2   Fiber type: Hi1060 Flex fiber
Delay fiber                two loops of 80 m   Yes            PM 980                    One set of fiber is now kept along the arm of the interferometer
InGaAs PD (BW > 100 MHz)   4                   Yes            NF1611                    Fiber coupled (3 No's); free space (2 No's)
SR560                      3                   Yes
  • The fiber mismatch between the couplers and the delay fiber could affect the coupling efficiency
  14621   Sat May 18 12:19:36 2019   Kruthi   Update   CCD calibration and telescope design

I went through all the elog entries related to CCD calibration. I was wondering if we can use Spectralon diffuse reflectance standards (https://www.labsphere.com/labsphere-products-solutions/materials-coatings-2/targets-standards/diffuse-reflectance-standards/diffuse-reflectance-standards/) instead of a white paper as they would be a better approximation to a Lambertian scatterer.

Telescope design:
On calculating the accessible u-v ranges and the % error in magnification (more precisely, % deviation), I got % deviation of order 10, and in some cases of order 100 (attachments 1 to 4), which matches Pooja's calculations. But I'm not able to reproduce Jigyasa's % error calculations, where the % error is of order 10^-1. I couldn't find the code that she used for these calculations, and I have emailed her about it. We can still image with the 150-250 mm combination proposed by Jigyasa, but I don't think it ensures maximum usage of the pixel array. Also, for this combination the resulting conjugate ratio will be greater than 5, so plano-convex lenses would reduce spherical aberrations. I also explored other focal length combinations, such as 250-500 mm and 500-500 mm. In these cases, both lenses will have f-numbers greater than 5, but the conjugate ratios will be less than 5, so biconvex lenses would be the better choice.

Constraints: available lens tube length (max value of d) = 3" ; object distances range (u) = 70 cm to 150 cm ; available cylindrical enclosures (max value of d+v) are 52cm and 20cm long (https://nodus.ligo.caltech.edu:8081/40m/13000).

I calculated the resultant image distance (v) and the required distance between lenses (d) for fixed magnifications (i.e. m = -0.06089 and m = -0.1826 for imaging the test masses and the beam spot, respectively) and different values of 'u'. This way we can ensure that no pixels are wasted. The focal length combinations 300-300 mm (for imaging the beam spot) and 100-125 mm (for imaging the test masses) were the only ones that gave all-positive values of 'd' and 'v' over the given range of 'u' (attachments 5-6). But here 'd' ranges from 0 to 30 cm in the first case, which exceeds the available lens tube length. Also, in the second case the f-numbers will be less than 5 for 2" lenses and may thus result in spherical aberration.
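The d/v calculation above can be sketched with thin-lens formulas (a sketch only; my sign convention is 1/v + 1/u = 1/f with real object and image distances positive and m = -v/u, and the numbers used in testing are illustrative, not the exact design values):

```python
def two_lens_solve(f1, f2, u, m_total):
    """Given focal lengths f1, f2, object distance u (all in mm), and a
    target total magnification, return (d, v): the lens separation and
    the image distance from the second lens.
    Thin-lens: 1/v + 1/u = 1/f, magnification m = -v/u."""
    v1 = u * f1 / (u - f1)        # intermediate image distance from lens 1
    m1 = -v1 / u                  # magnification of lens 1
    m2 = m_total / m1             # magnification required of lens 2
    u2 = f2 * (m2 - 1.0) / m2     # object distance for lens 2
    d = v1 + u2                   # lens separation
    v = -m2 * u2                  # final image distance
    return d, v
```

A negative d or v flags an unphysical combination for that u, which is how focal-length pairs can be screened over the 70-150 cm object range.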

All this fuss about f-numbers, conjugate ratios, and plano-convex/biconvex lenses is to reduce spherical aberrations. But how much will spherical aberrations affect our readings? 

We have two 2" biconvex lenses of 150mm focal length and one 2" biconvex lens of focal length 250mm in stock. I'll start off with these and once I have a metric to quantify spherical aberrations we can further decide upon lenses to improve the telescopic lens system.

  14626   Mon May 20 21:45:20 2019   Milind   Update   Traditional cv for beam spot motion

Went through all of Pooja's elog posts, her report and am currently cleaning up her code and working on setting up the simulations of spot motion from her work last year. I've also just begun to look at some material sent by Gautam on resonators.

This week, I plan to do the following:

1) Review Gabriele's CNN work for beam spot tracking and get his code running.

2) Since the relation between the angular motion of the optic and beam spot motion can be determined theoretically, I think a neural network is not mandatory for tracking the beam spot motion. I strongly believe that a more traditional approach, such as thresholding followed by a Hough transform, ought to do the trick, as the contours of the beam spot are circles. I did try a quick and dirty implementation today using OpenCV and ran into the problem of no detection, or detection of spurious circles (the number of which decreased with increased application of median blur). I will defer a more careful analysis until step (1) is done, as Gautam has advised.

3) Clean up Pooja's code on beam tracking and obtain the simulated data.

4) Also, data like this (https://drive.google.com/file/d/1VbXcPTfC9GH2ttZNWM7Lg0RqD7qiCZuA/view) is incredibly noisy. I will look up some standard techniques for cleaning such data, though I'm not sure the impact can be measured until I settle on an algorithm to track the beam spot.
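As a baseline for item (2), a thresholded intensity centroid (a simpler stand-in for the full Hough-circle approach, using only numpy) might look like:

```python
import numpy as np

def spot_centroid(img, thresh_frac=0.5):
    """Estimate the beam spot position as the intensity-weighted centroid
    of pixels above a fraction of the frame's peak value.
    Returns (row, col) in pixel coordinates."""
    mask = img >= thresh_frac * img.max()
    w = img * mask                      # keep only the bright core
    rows, cols = np.indices(img.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total
```

On a clean synthetic Gaussian spot this recovers the center to well under a pixel; on real, noisy frames a median blur before thresholding (as with the Hough attempt) would help.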


A more interesting question Gautam raised was the validity of using the beam spot motion for detection of angular motion in the presence of other factors such as surface irregularities. Another question is the relevance of using the beam spot motion when the oplevs are already in place. It is not immediately obvious to me how I can ascertain this and I will put more thought into this.

  14782   Fri Jul 19 22:48:08 2019   Kruthi   Update   Dataviewer error

I'm not able to get trends of the TM adjustment test that Rana had asked us to perform, from the dataviewer. It's throwing the following error:

Connecting to NDS Server fb (TCP port 8088)
Connecting.... done
Server error 7: connect() failed
datasrv: DataWrite failed: daq_send: Resource temporarily unavailable
T0=19-07-20-01-27-39; Length=600 (s)
No data output.

  14783   Sat Jul 20 01:03:37 2019   gautam   Update   Dataviewer error

What channels are you trying to read?


I'm not able to get trends of the TM adjustment test that Rana had asked us to perform, from the dataviewer. It's throwing the following error:

Connecting to NDS Server fb (TCP port 8088)
Connecting.... done
Server error 7: connect() failed
datasrv: DataWrite failed: daq_send: Resource temporarily unavailable
T0=19-07-20-01-27-39; Length=600 (s)
No data output.

  14869   Tue Sep 10 16:10:40 2019   Chub   Update   Rack Update

Still removing old cable, terminal blocks and hardware.  Once new strain reliefs and cable guides are in place, I will need to disconnect cables and reroute them.  Please let me know dates and times when that is not going to interrupt your work! 

  15225   Wed Feb 26 17:17:17 2020   Yehonathan   Update   Arms DC loss measurements

{Yehonathan, Gautam}

In order to measure the loss in the arm cavities in reflection, we use the DC method described in T1700117.
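The inversion at the core of the DC method can be sketched with a plane-wave Fabry-Perot model (a sketch only: the loss is lumped into the end mirror, the misaligned reflection is approximated as the ITM reflectivity, and the T1/T2 values used for testing are assumed round numbers, not our measured transmissions):

```python
import math

def refl_ratio(T1, T2, L_rt):
    """Ratio of on-resonance (locked) to misaligned reflected DC power for
    a two-mirror cavity, with round-trip loss L_rt lumped into the end
    mirror.  Misaligned reflection is approximated as r1**2."""
    r1 = math.sqrt(1.0 - T1)
    r2 = math.sqrt(1.0 - T2 - L_rt)
    r_res = (r1 - r2) / (1.0 - r1 * r2)   # on-resonance amplitude reflectivity
    return (r_res / r1) ** 2

def loss_from_ratio(ratio, T1, T2, lo=0.0, hi=1e-3):
    """Invert refl_ratio for L_rt by bisection; the ratio decreases
    monotonically with loss in this over-coupled regime."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if refl_ratio(T1, T2, mid) > ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In practice the measured ratio comes from the averaged locked/misaligned DC levels, and the statistical spread (the histograms below) propagates directly through this inversion.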

It was not trivial to find free channels on the LSC rack. The least intrusive way we found was to disconnect the ALS signals DSUB9 (Attachment 1) and connect a DSUB breakout board instead (Attachment 2).

The names of the channels are ALS_BEATY_FINE_I_IN1_DQ for AS reflection and ALS_BEATY_FINE_Q_IN1_DQ for MC transmission. Actually, the script that downloads the data uses these channels exactly...

We misalign the Y arm (both ITM and ETM) and start a 30-repetition measurement of the X arm cavity loss using /scripts/lossmap_scripts/armLoss/measureArmLoss.py, and download the data using dlData.py.

We analyze the data. Raw data is shown in attachment 3. There is some drift in the measurement, probably due to drift of the spot on the mirror. We take the data starting from t=400s, when the data seems stable (green vertical line). Attachment 4 shows the histogram of the measurement.

X Arm cavity RT loss calculated to be 69.4ppm.

We repeat the same procedure for the Y Arm cavity the day after. Raw data is shown in attachment 5, the histogram in attachment 6.

Y Arm cavity RT loss calculated to be 44.8ppm. The previous measurement of Y Arm was ~ 100ppm...

Loss map measurement is in order.

  15729   Thu Dec 10 17:12:43 2020   Jon   Update   New SMA cables on order

As requested, I placed an order for an assortment of new RF cables: SMA male-male, RG405.

  • x3 12"
  • x3 24"
  • x2 48"

They're expected to arrive mid next week.

  15739   Sat Dec 19 00:25:20 2020   Jon   Update   New SMA cables on order

I re-ordered the below cables, this time going with flexible, double-shielded RG316-DS. Jordan will pick up and return the RG-405 cables after the holidays.


As requested, I placed an order for an assortment of new RF cables: SMA male-male, RG405.

  • x3 12"
  • x3 24"
  • x2 48"
  15740   Sat Dec 19 02:42:56 2020   Koji   Update   New SMA cables on order

Our favorite (flexible) RF cable is Belden's 1671J (jacketed solder-soaked coax cable). It is compatible with RG405. I'm not sure if there are off-the-shelf SMA cables made with 1671J.


  16308   Thu Sep 2 19:28:02 2021   Koji   Update   This week's FB1 GPS Timing Issue Solved

After the disk system trouble, we could not get the RTS running in the nominal state. As part of the troubleshooting, FB1 was rebooted. But then we found that the GPS time was a year off from the current time:

controls@fb1:/diskless/root/etc 0$ cat /proc/gps 
controls@fb1:/diskless/root/etc 0$ date
Thu Sep  2 18:43:02 PDT 2021
controls@fb1:/diskless/root/etc 0$ timedatectl 
      Local time: Thu 2021-09-02 18:43:08 PDT
  Universal time: Fri 2021-09-03 01:43:08 UTC
        RTC time: Fri 2021-09-03 01:43:08
       Time zone: America/Los_Angeles (PDT, -0700)
     NTP enabled: no
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2021-03-14 01:59:59 PST
                  Sun 2021-03-14 03:00:00 PDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2021-11-07 01:59:59 PDT
                  Sun 2021-11-07 01:00:00 PST

Paco went through the process described in Jamie's elog [40m ELOG 16299] (except for the installation part), and it actually made the GPS time even stranger:

controls@fb1:~ 0$ cat /proc/gps

I decided to remove the gpstime module and then load it again. This brought the GPS time back to normal:

controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 0$ cat /proc/gps
cat: /proc/gps: No such file or directory
controls@fb1:~ 1$ sudo modprobe gpstime
controls@fb1:~ 0$ cat /proc/gps


  16339   Thu Sep 16 14:08:14 2021   Ian MacMillan   Frogs   Tour

I gave some of the data analysts a look around because they asked and nothing was currently going on in the 40m. Nothing was changed.

  16812   Mon Apr 25 18:00:03 2022   Ian MacMillan   Update   Cable supports update

I have designed new cable supports for the new ribbon cables running up the side of the tables in the vacuum chambers. This is a ribbon-cable version of the existing cable clamp (cf. D010120).

The clamps that I have designed (shown in basic sketch attachment 1) will secure the cable at each of the currently used cable supports. 

The support consists of a backplate and a frontplate. The backplate is secured to the leg of the table using a threaded screw. The frontplate clamps the cable to the backplate using two screws, one on either side. Between two fastening points, the cable should have some slack. This should keep the cable from being stiff and help reduce the transfer of seismic noise to the table.

It is possible to stack multiple cables in one of these fasteners. Either you can put two cables together and clamp them down with one faceplate, or you can stack multiple faceplates with one cable between each faceplate. In that case the stack would go backplate, then cable, then faceplate, then cable, then the second faceplate. This configuration would require longer screws.

I have not yet determined the exact screw and plate sizes, but that will follow.

  17418   Wed Jan 25 10:02:47 2023   yehonathan   Update   Earthquake, MC3 watchdog tripped

We came in this morning and noticed the FSS_FAST channel was moving very rapidly. A short inspection showed that the MC3 watchdog had tripped. After reenabling the watchdog the issue was fixed, and the MC is stable again.

There was a spike in the seismometers about 8 hours ago, probably due to the magnitude 4.2 earthquake that happened near Malibu Beach around that time.

  17595   Fri May 19 09:23:58 2023   Paco   Update   MCF Noise

The fan behind the PSL controller is injecting excess band limited noise angry


While doing noise hunting to improve the BHD lock stability, we noticed peculiar noise bumps in the BH44 error point near (but not exactly at) the even line-harmonics (for example 120 Hz, 240 Hz, ...). Other channels such as C1:HPC-BHDC_SUM, C1:LSC-PRCL_IN1, or even C1:LSC-XARM_IN1 didn't show these features, so we looked at C1:IOO-MC_F_DQ (which represents the free running laser noise above 100 Hz) and to our surprise found this excess noise!

Noise hunting around PSL

Since this noise is present upstream, we decided to hunt around using C1:IOO-MC_F_DQ. We set up a diaggui measurement to do some "live demodulation" as suggested by Koji in order to understand the nature of this noise. In order to get some "video bandwidth" we set up a power spectrum measurement from 114 Hz to 140 Hz (to monitor the usual 120 Hz line noise peak) with a bandwidth of 1 Hz. A single exponential average gave us the 1 Hz narrow spectrum in "real time", from which we noticed its nonstationary character. The band limited excess noise is the result of a peak hovering in the range of 125 to 131 Hz. With this diagnostic set up, we started hunting for its source.
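The narrowband watch can be emulated offline; here is a numpy sketch of picking the strongest line out of the 114-140 Hz band, with a synthetic 128 Hz line standing in for the real data:

```python
import numpy as np

fs, n = 1024, 1024                       # 1 s of data -> 1 Hz resolution
t = np.arange(n) / fs
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 128.0 * t) + 0.1 * rng.standard_normal(n)

psd = np.abs(np.fft.rfft(sig)) ** 2      # one-sided periodogram (arb. units)
f = np.fft.rfftfreq(n, 1.0 / fs)
band = (f >= 114.0) & (f <= 140.0)       # the band monitored in diaggui
peak = f[band][np.argmax(psd[band])]     # frequency of the strongest line
```

Repeating this on successive short stretches and watching `peak` wander (here it would hover around 125-131 Hz for the fan bump) is the single-exponential-average "real time" view described above.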

  1. We checked the fan behind the PSL controller (Attachment #1). 
    1. After disconnecting the Molex connector powering it with +15 VDC, the noise disappeared!

To show the impact on the complete noise spectrum, we took a 10-average measurement with and without the fan on. The result is in Attachment #2. The spectra are shown along with their rms, which is significantly reduced when the fan is off near the 100 Hz band (where these bumps appear). Anyway, we have left the fan on because the PSL controller needs it, so the problem remains, but we have at least identified the source.

  17630   Tue Jun 13 14:12:34 2023   Mayank   Configuration   Restarted the NDS2 script on Megatron

[Radhika, Mayank]

Restarted the NDS2 script on Megatron following the instructions here:

1) SSH to megatron: ssh megatron

2) Switch to the nds2mgr account: sudo su nds2mgr

3) Stop and restart the service:

sudo /etc/init.d/nds2 stop
sleep 20
sudo /etc/init.d/nds2 start

  17636   Fri Jun 16 23:09:31 2023   Mayank   Update   IFO working after ETMX electronics upgrade

After lots of burtrestores and alignment by Koji, the interferometer is live again.

The X/Y green beams are resonating. The X/Y arms were locked with IR. We saw the MI fringe on POP. However, the AS spot is not visible; the optical table needs to be checked.

The auto-restoration process did not bring back all the settings (incomplete restoration), and we had to restore them manually. It took us some time to realize this.

  17637   Fri Jun 16 23:23:56 2023   Koji   Update   IFO working after ETMX electronics upgrade

Remaining work:


  • The +/-15V power supply for the Eurocard crate has not yet been turned on. Check the functionality of the modules. [Done ELOG 17638]
  • Because the Eurocards are not energized, the following functions are not working:
  • The HV power for the PZTs is off now.
  • The Transmon high-gain PD has a temporary BNC cable. The original cable has a LEMO 4-pin instead of BNC. This connector should be fixed.
  • The end green beam is probably not aligned well, partly because the PZTs are not energized.
  • The network-enabled AC power strip is not yet installed.
  • The network switch should be moved to the planned slot of the rack.
  • The green shutter is not controllable right now, as there is no Acromag channel for it. One has to go to the end and switch the Uniblitz shutter unit to change the state.

Software / Control

  • High gain PD has lower gain than before (x15 given at C1:LSC-TRX_GAIN). It is quite different from the Y-end setup. This needs investigation.
  • Xarm is still locked with ITMX actuation. ETMX coil anti-dewhitening filters were not yet updated.
  • We still don't know how good the input matrix is and how good the damping gains are.
  • IMC WFS does not turn on for some reason. [Seems fixed by someone needs elog]


  • The end station is currently pretty messy. Remove all the tools / all the trash cables / salvage useful stuff [Cleaning work on Jun 21, 2023 needs elog]
  • An AA chassis was removed during the confusion of the power reversal trouble. The removed unit must be opened for the screw restoration and function test.
  • One of the Sorensens has a flaky adjustment knob. Before it is returned to the stockpile, it needs an inspection.


On top of these, the Acromag unit should be brought to the workbench, and the upgrade needs to be done. [Moved to the workbench needs elog]

  17638   Mon Jun 19 01:05:49 2023   Koji   Update   IFO working after ETMX electronics upgrade

ETMX Eurocard Crate investigation ~ Summary

- The eurocard crate was not properly grounded, which had created a weird powering condition.
- Some busted components on the QPD whitening were replaced. The channels were tested to be OK.
- The OPLEV IF board is functioning as expected
- BUT the OPLEV head seemed busted.

I came in on Sun evening to work on the Eurocard crate.

- Checked the power supply situation with an extender board while no other circuits were mounted on the crate. The +/-20V and +/-15V were supplied, and the corresponding LEDs were lit.

- Then I inserted the QPD whitening board (D990399), which made the +15V LED dim... and something on the QPD whitening board burnt off! It was a tantalum cap on the daughterboard. (The daughterboard was for the end transmon LPF.)

- I checked the OPLEV I/F board D010033, and this also made the +15V LED dim. Something was clearly wrong.

- The voltages on the extender board were measured: it turned out that the ground level of the modules (measured at the connector shells) was about 3V off from 0V, which is defined by the Sorensens' gnd terminal (which is connected to the rack).
  I immediately turned off the +/-15V system and measured the resistance between these two grounds. It was ~350 Ohm. Does this mean there was an 8~9mA current flowing through somewhere? The R between the Sorensens and the rack frame was basically 0 Ohm.
  So it looked like the Eurocard crate was not properly grounded.
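The current estimate is just Ohm's law across the measured ground offset:

```python
offset_v = 3.0            # measured offset between the two grounds (V)
r_gnd = 350.0             # measured resistance between the grounds (Ohm)
i_ma = offset_v / r_gnd * 1e3
print(round(i_ma, 1))     # 8.6 mA, i.e. the 8~9 mA quoted above
```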

- I traced the side panels and found that the ground coming from the Sorensen was disconnected, and the ground wire for the crate was isolated. I borrowed a ground slot from the vacant fuse slots (Attachment 1). This made the R between a connector shell and the rack properly ~0 Ohm, and the powering situation finally looked OK to me.

We now need to check the boards before putting them back in. The QPD Whitening Board (D990399-B) was inspected on the workbench.

- First of all, the daughterboard was isolated.
- The +/-15V supply was connected to the board via an extender board. The board was energized correctly.
- A 1kHz 60mVpp signal was injected at the input nodes. This should make the output voltages ~11Vpp.
- It seemed that CH3 had a large (~2V) offset and its final output was 0V. The two AD620s at U4 and U21 were replaced. This made CH3 work identically to the other 3 channels.

- Finally, the daughterboard was taken care of. A new Ta cap was soldered on. The daughterboard was connected to the power supply nodes, and there seemed to be no problem. A 20mVpp test input resulted in a 320mVpp output.
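The two bench gain checks above (60 mVpp in, ~11 Vpp out for the whitening board; 20 mVpp in, 320 mVpp out for the daughterboard) imply the following gains:

```python
import math

# Quick sanity check of the gains implied by the bench tests above.
main_gain = 11.0 / 0.060       # whitening board: 60 mVpp in -> ~11 Vpp out
daughter_gain = 0.320 / 0.020  # daughterboard LPF: 20 mVpp in -> 320 mVpp out
print(f"whitening board gain ~ {main_gain:.0f} ({20*math.log10(main_gain):.1f} dB)")
print(f"daughterboard gain   = {daughter_gain:.0f} ({20*math.log10(daughter_gain):.1f} dB)")
```

That is roughly 183 (~45 dB) for the whitening path and exactly 16 (~24 dB) for the daughterboard.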

As Ta caps are very sensitive to reverse voltage and overvoltage, this suggested that some reverse voltage had been applied to the system.

The board was inserted into the Eurocard crate and we now see reasonable end transmon QPD signals. (The SUM gain matched the end transmon PD signal.)

Next, I moved on to the OPLEV I/F board. The test indicated that the board was okay.

Then the board was inserted into the crate. It caused no problems. But when the QPD cable was connected, the +15V was drained. So it seems the QPD head itself has an issue. I'll check the head later this week.

  17642   Wed Jun 21 08:21:03 2023 JCUpdate IFO working after ETMX electronics upgrade

Andrei asked me yesterday for the location of the ambient temperature sensor for the X-end. He mentioned that he needed to run some tests because there was an ~80° spike over the weekend. After I walked him over, I noticed that the sensor itself was crooked. We may have hit it when we were removing all the ancient cabling from the rack. I removed the sensor and soldered it back on. The temperature monitor is now working without an issue.


  17646   Thu Jun 22 00:27:27 2023 KojiUpdate IFO working after ETMX electronics upgrade

[Koji, Hiroki]

ETMX OPLEV QPD was fixed.

- What we knew was that the QPD head drained +15V when connected to the I/F board.
- Suspected the reverse voltage / blown a tantalum cap

- The head position was marked by bases (Attachment 1).
- The head unit was opened. The PCB is D980325 (Attachment 2) and transimpedance R seemed 97.6k. The OPamp is OP497 (quad)

- Removed both tantalum caps and checked their insulation. One still had good insulation; the other was shorted. Yes, this is a typical failure mode of Ta caps.
- Replaced them with two 100uF electrolytic caps (50V rating) (Attachment 3)

- We tested the head unit with the phone flashlight on/off for all four segments. They responded well. So we declared the head was functioning (on the workbench).

- The head was brought back to the Xend table. Positioned it with the bases.

- The ETMX Oplev screen indicated that the head was functioning well with the ambient light level. (i.e. low but about equal light level on the segments)
  At the time, there was no oplev light coming into the QPD. So we shined it with phone lights to see the response.
  All the segments were responding, so we declared that the oplev signals are nicely transmitted in the CDS.

  17647   Thu Jun 22 00:47:00 2023 KojiUpdate IFO working after ETMX electronics upgrade

[Mayank, Koji]

ETMX Sat amp (D080276) CH3 PD receiver circuit fixed.

At today's meeting, Yuta reported that ETMX LL OSEM had no response.

Mayank and I opened the sat amp for the investigation. Mayank's observation was that the LEDs on the video screen were lit, indicating the LED drivers are all good. Also, he found that the CH3 output (corresponding to the LL PD) on the PDmon was saturated at +15V.

We checked the I-V amp output (TP6) and it was zero as expected. However, the whitening out (TP8) was saturated at +15V. Once the chip U3 was removed, the monitor and the DAQ out went back to zero.
The chip U3 was replaced with a spare AD822, but this didn't solve the issue. So something was wrong with the whitening stage.

We carefully inspected the board and suspected R24 was not well soldered. This R24 had been originally 19.6k and was replaced by 4.99k at the 40m to bring the whitening pole/zero frequencies to the 40m standard. (See https://dcc.ligo.org/LIGO-S2100744 as an example)

The R24 was soldered again and we replaced AD822 too. This solved the saturation issue.

Mayank returned the unit on the rack and reported that the LL OSEM was responding.

  17648   Thu Jun 22 01:04:42 2023 KojiUpdate IFO recovery

I wanted to check the oplev functionality but the IFO was not well aligned. So started the CDS recovery.

Here is the procedure

- Played CDS-restart whack-a-mole: one host suffered from DACKILL (DK), so I rebooted the machine, and this made another machine fall into DK.

- I decided to power cycle the affected machines: c1lsc/c1sus/c1sus2/c1iscex were shut down with sudo shutdown.
- Went to the racks and power cycled their IO chassis.
- Powered up these hosts.

After a while, all the RTS came up with no DK error (very good)

Then, burt fest. I wasn't sure what needed to be restored, so I burt-restored as many snapshots as possible. The snapshots from 20:19 on Jun 21, 2023 were used.

- IMC / WFS is now working well.
- Someone restored the AS/BHD beam spot in the daytime (thanks)
- Both arms are locked and aligned by ASS.
- The green beams are flashing in the arms

ETMX Oplev
- As a consequence of omitting the PENTEK amp (?), the oplev signals (C1:SUS-ETMX_OL_SEGx_IN1, etc.) are now negative. Gave -1 to C1:SUS-ETMX_OL_SEGx_GAIN to compensate for it.
- The ETMX damping seemed weak, so C1:SUS-ETMX_SUSPIT_GAIN and C1:SUS-ETMX_SUSYAW_GAIN were doubled to 4.0 and 7.0.
- The oplev calibrations are probably largely off. The servo gains were reduced to 0.01 to have a reasonable servo response. (OLTF to be inspected)
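The setting changes above can be captured in a short script. A minimal sketch, assuming pyepics for channel access and four quadrant segments (SEG1-SEG4 is my assumption for the "SEGx" channels):

```python
# Hedged sketch of the ETMX oplev/damping settings applied after the upgrade.
# Channel names and values are from this entry; the SEG1-SEG4 expansion of
# "SEGx" is an assumption.
settings = {f"C1:SUS-ETMX_OL_SEG{i}_GAIN": -1 for i in range(1, 5)}  # sign flip
settings["C1:SUS-ETMX_SUSPIT_GAIN"] = 4.0   # pitch damping gain, doubled
settings["C1:SUS-ETMX_SUSYAW_GAIN"] = 7.0   # yaw damping gain, doubled

# Applying them would look like this (requires pyepics and the martian network):
#   from epics import caput
#   for pv, val in settings.items():
#       caput(pv, val)
print(settings)
```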

ETMY Oplev
- I don't know how long it had been like that, but the servo inputs were off. Also, the oplev servo gains of -5 were way too big; they were reduced to -1.

ETMX Transmon PD/QPD setting
- The high gain PD gains were recalibrated to be ~1. All the gains were put on C1:SUS-ETMX_TRX_GAIN, and C1:LSC-TRX_GAIN was returned to 1. (previously 15)
- The transmon QPD gain (C1:SUS-ETMX_QPD_SUM_GAIN) was trimmed to be 0.075 to make it equivalent to the high gain PD output.

  17651   Thu Jun 22 22:34:43 2023 KojiUpdate IFO recovery

[Hiroki, Mayank, Koji]

The X end PZT driver for green alignment:

  • The driver module was inserted into the Eurocard crate and the 100V HV supply was turned on. Nothing dramatic seemed to happen.
  • There are two PZT steerers, M1 and M2. Each is supposed to have Pitch, Yaw, and Bias outputs, corresponding to the three cables of the PI S320.00.
  • We looked at the monitor output voltages. M1P=6.6V (corresponding to 100V), M1Y=0.0V, M1B=6.6V. M2P=~0V, M2Y=6.6V, M2B=6.6V.
  • This is strange as the circuit is supposed to bias Pitch and Yaw only by 3.3V (corresponding to 50V).
  • We inspected the PZT driver board D980323. It seems that this board was modified by Gautam in 2013 (ELOG 8932).
    According to his elog, the gain was modified to be 5.8 rather than 16, and the offset trimmers were tuned to give 11V, which corresponds to an output offset of 64V.
    The bias outputs are supposed to have 6.6V out at the monitors and they actually do. However, the other channels are not showing the correct output levels.
    Does this mean the HV transistors are not functioning? This requires a test involving the HV power supply at the workbench.
  • Meanwhile, I wonder how the PZT is supposed to be biased. Is the bias cable supposed to have 50V or 100V, assuming the other two have the nominal midrange bias of 50V?
    This probably needs a bit careful investigation of the PZT+PZT driver+HV supply.
  17819   Thu Aug 31 10:15:21 2023 RadhikaUpdate Electronic CARM to ALS CARM handoff

[Paco, Radhika]

ALS control of CARM

Yesterday evening, Paco and I aimed to:

1. lock electronic FPMI (e-CARM = POX + POY; e-DARM = AS55)
2. hand off CARM control to ALS (CARM = BEATX + BEATY)
3. add a CARM offset

Once e-FPMI was locked (POX + POY --> CARM_A), we fed the ALS beatnote error signals to CARM_B and slowly mixed CARM_A and CARM_B. ALS control of CARM was successful.

The final values used in C1LSC_AUX_ERR_MTRX were (-0.3 ALSX + 0.3 ALSY) --> CARM_B. Note that these signs depend on the sign of each beatnote. The sign of ALSY could be determined by giving an offset, but without an Acromag we had to use trial and error for the sign of ALSX. We observed that using 0.5 magnitude for each signal resulted in too high of a CARM UGF, making the loop unstable. The magnitudes were reduced to 0.3 to give us a comparable UGF to POX/POY control of CARM.
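The gradual CARM_A-to-CARM_B handoff described above can be sketched as a slow ramp of the two blend gains. This is only an illustration: the commented channel names are hypothetical, while the matrix values (-0.3 ALSX + 0.3 ALSY) are from this entry.

```python
import numpy as np

# Hedged sketch of gradually mixing CARM_A (POX+POY) into CARM_B (ALS beats).
als_matrix = {"ALSX": -0.3, "ALSY": 0.3}   # CARM_B mixing (from this entry)
ramp = np.linspace(0.0, 1.0, 21)           # blend fraction, stepped slowly
for frac in ramp:
    a_gain = 1.0 - frac                    # e-CARM (A path) contribution
    b_gain = frac                          # ALS (B path) contribution
    # caput("C1:LSC-CARM_A_GAIN", a_gain)  # hypothetical channel names
    # caput("C1:LSC-CARM_B_GAIN", b_gain)
print(f"final blend: A={1.0 - ramp[-1]:.1f}, B={ramp[-1]:.1f}")
```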

The final ALS CARM OLTF can be found in Attachment 1. Some "wobbliness" was observed in the OLTF. Attachment 2 shows the suppressed in-loop CARM_B error and the out-of-loop CARM_A error. We couldn't identify why the CARM_A error has a notch at ~325 Hz; this is also present when closing the loop with CARM_A.

We tried to add an LSC CARM offset (would push the PSL frequency away) but could not see the transmission in the arms drop.

Next steps

Increase stability of ALS CARM by tuning loop gains

Achieve a CARM offset while maintaining lock

Then proceed to lock PRMI sidebands and reduce the CARM offset for PRFPMI

  17833   Thu Sep 7 21:09:17 2023 PacoUpdate eCARM and eDARM to ALS CARM and ALS DARM

Tonight I managed to lock CARM and DARM under ALS control only


  • Arm cavities well aligned, TRY ~ 1.07, TRX ~ 0.98, GTRY ~ 1.2, GTRX ~ 0.77
  • HEPA off, WFS 60s offloaded, PD offsets removed, all lights inside the lab were off
  • BEATX and BEATY ool residual noise shown in Attachment #1.
  • Error points: the A path was the same as what is used for electronic FPMI. For the B path, I describe the tuning below.
    • CARM_A = 0.5 * POX11_I + 0.5 * POY11_I
    • DARM_A = 0.19 * POX11_I - 0.19 * POY11_I
    • CARM_B = -0.7 * ALSX + 0.4 * ALSY  
    • DARM_B = -0.25 * ALSX - 0.17 * ALSY
  • Power normalization of the error signals was 0.5 * TRX + 0.5 * TRY in both paths.
  • LSC filter banks are the same ones we use for electronic FPMI, and the gains were
    • CARM = 0.011  (UGF ~ 200 Hz) using FM1 to FM5, FM6 and FM8
    • DARM = 0.055 (UGF ~ 150 Hz) using FM1 to FM5, FM6 and FM8
  • Control points: I temporarily disabled the violin filters around 600 Hz to ease the lock acquisition ... we should really use the VIO TRIG here to avoid having to do this.
    • CARM = -0.734 * MC2
    • DARM = 0.5 * ETMX - 0.5 * ETMY
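The error-point mixing listed above can be written out as small input matrices; a sketch with the values from this entry (the [X, Y] signal ordering is my assumption for illustration):

```python
import numpy as np

# Error-point input matrices from this entry.
A = np.array([[0.5,   0.5 ],    # CARM_A from [POX11_I, POY11_I]
              [0.19, -0.19]])   # DARM_A
B = np.array([[-0.7,   0.4 ],   # CARM_B from [ALSX, ALSY]
              [-0.25, -0.17]])  # DARM_B

# A pure common-mode arm signal (equal POX and POY) should produce CARM only.
pox_poy = np.array([1.0, 1.0])
print(A @ pox_poy)   # first element: CARM_A; second: DARM_A (~0)
```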

ALS error signal tuning

To find the error signals for CARM/DARM, I turned on the oscillators (at 307.8 and 313.31 Hz respectively) with 150 counts and enabled FM10 (Notch for sensing matrix) in the CARM and DARM servo banks. I then removed the ALS offsets (C1:LSC-ALSX_OFFSET, C1:LSC-ALSY_OFFSET) and looked at the transfer functions shown in Attachment #2. I optimized the ALS blending until I maximized the CARM and DARM A to B paths and minimized CARM and DARM cross couplings. The signs were chosen to leave a phase of 0.
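The dither-line measurement above amounts to demodulating each error signal at the oscillator frequencies. A minimal sketch with synthetic data (line frequencies are from this entry; the sample rate, duration, and amplitudes are made up for illustration):

```python
import numpy as np

# Sketch of single-bin demodulation at the CARM/DARM dither-line frequencies.
fs, T = 16384, 10.0                       # sample rate (Hz) and duration (s), assumed
t = np.arange(int(fs * T)) / fs
f_carm, f_darm = 307.8, 313.31            # oscillator frequencies from this entry

# Synthetic error signal containing both lines (amplitudes are illustrative).
err = 0.8 * np.sin(2 * np.pi * f_carm * t) + 0.1 * np.sin(2 * np.pi * f_darm * t)

def line_amplitude(x, f):
    """Demodulate x at frequency f (single-bin DFT magnitude)."""
    lo = np.exp(-2j * np.pi * f * t)
    return 2 * abs(np.mean(x * lo))

print(line_amplitude(err, f_carm))   # ~0.8: content at the CARM line
print(line_amplitude(err, f_darm))   # ~0.1: content at the DARM line
```

Minimizing the off-diagonal line content while maximizing the diagonal is exactly the blending optimization described above.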


After measuring the OLTFs for eCARM and eDARM (loop closed with the A error point) and tuning the ALS error signals, I gradually blended the A and B paths and checked the OLTFs for CARM and DARM. During this I realized I needed to disable some of the notch violin filters because they sometimes made the DARM loop unstable after >50% blending. In the end the simultaneous CARM_A/DARM_A to CARM_B/DARM_B handoff was successful in 0.5 seconds. Attachment #3 shows the OLTFs under ALS control.

CARM offset

After getting nominally stable ALS control, I tried adding an offset. The LSC CARM offset range was insufficient, so I ended up directly scanning C1:LSC-ALSX_OFFSET and C1:LSC-ALSY_OFFSET. For the first couple of attempts the ramp time was set to 2.0 seconds, and a step of 0.01 was enough to break the lock. I managed to hold the control with C1:LSC_CARM_A_IN1 offset by as much as ~500 counts (rms ~200 counts). I roughly estimate this to be ~5% of the CARM pole, which is 4 kHz in this case, so overall ~200 Hz, which is not that large.
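The back-of-envelope detuning estimate above is just a fraction of the coupled-cavity pole:

```python
# Detuning estimate from the CARM offset test above: the held offset is
# estimated at ~5% of the CARM (coupled-cavity) pole frequency.
carm_pole_hz = 4000.0        # CARM pole, from this entry
offset_fraction = 0.05       # ~5% estimate
detuning_hz = offset_fraction * carm_pole_hz
print(f"CARM detuning ~ {detuning_hz:.0f} Hz")  # ~200 Hz
```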

  17873   Mon Sep 25 17:01:46 2023 MurtazaUpdate IFO ALIGNED (WITH SOME ISSUES)

[JC, Paco, Radhika, Murtaza]


1. WFS Relief
We tried to change the gain to offload the offsets in the reliefMCWFS script, but it did not work. The gains might need some tuning to get it to work.

2. WFS Error Signals Diverging
The error signal C1:IOO-WFS2_I_PIT_MON was staying at a constant offset from 0 using the existing output matrix (C1IOO_WFS_OUTMATRIX). We tried changing the matrix coefficients that may have caused this behavior but it led to divergence in other signals (C1:IOO-WFS1_I_PIT_MON, C1:IOO-WFS2_I_YAW_MON). THIS NEEDS TO BE FIXED

3. BS was aligned using OPLEV readouts and the damping filters were checked. No funny business for BS anymore.

4. The original IFO Align scripts used the suffix "COMM" for each optic. This was changed to "OFFSET" for all arguments by editing the IFO_ALIGN screen (left click-> Execute -> Edit this screen -> !Align -> Label/Cmd/Args)

5. The OPLEV gains for ITMY were unstable and needed some tuning. New gains: C1:SUS-ITMY_OL_PIT_GAIN (14->3.5) and C1:SUS-ITMY_OL_YAW_GAIN (-8->-4). (The upgrade should not have affected this so this could be revisited later).

6. YARM (transmission ~ 1) and XARM (transmission ~ 0.6) were locked successfully!



  17874   Tue Sep 26 14:07:00 2023 MurtazaUpdate IFO ALIGNED (WITH SOME ISSUES)

[Paco, JC, Murtaza]


To fix the WFS loops, went through the following steps

With the WFS loop turned off

- We manually aligned the optics MC1, MC2 and MC3 (IOO -> C1IOO_MC_Align) to maximize transmission (MC Trans Sum -> ~13300)

- We manually aligned the QPDs for WFS1 and WFS2 to center the beam by looking at the DC signals (C1IOO_WFS_QPD). The beam was clipping on WFS2.

With the WFS loop turned on

- We changed the gains of the WFS filters for all signals (1.0 -> 2.0). This led to faster convergence but clipping on C1:IOO-WFS2_YAW_OUTPUT, so the gains were restored to 1.0 and left unchanged.

- We increased the reliefMCWFS gain by a factor of 10 by changing the arguments (Execute -> Edit this screen -> Actions -> label/cmd/args -> Arguments) (0.02 -> 0.2) 

- We ran WFSoutMatBalancing.py (17334) to calculate the new output matrix


  17879   Thu Sep 28 12:43:02 2023 RadhikaUpdate IFO ALIGNED (WITH SOME ISSUES)

While aligning today I realized the cavAlign step sizes and step factors had not been updated after the upgrade.

Here are the new MEDM command arguments to launch cavAlign. Only factors not equal to 1 are listed; the updates I made are noted below the table.

Optic pair   Step size   Step factor
PR3-ETMY     1
TT1-TT2      0.001
PR2-PR3      1
TT2-PR3      0.001       1000
PR3-ITMY     1
SR2-AS1      1
SR2-AS4      1           3.6
LO1-AS4      1
PRM-PR2      1
TT2-PRM      0.001       1000

Interesting notes:

- the factor of 3.6 for AS4 relative to SR2 is interesting - I don't know where this comes from.
- LO1-AS4 step size was never updated from 0.001 to 1. I made the change.
- ITMX-ITMY step size for MICH was originally 0.0001. I've set the new step size to 0.1 to reflect this.
  Draft   Thu Sep 28 16:45:54 2023 MurtazaUpdate IFO ALIGNED (WITH SOME ISSUES)

[Rana, Radhika, Murtaza]

WFS Loop Debugging

- We turned on the WFS loops with very small gain (0.01) to see how the error signals behave. There is an existing template to look at the error signals in ndscope (users->Templates->ndscope->IOO->WFS->WFS-overview.yml). We observed C1:IOO-WFS2_IY_DQ remain at a constant offset as we increased the gain to 1.

- The output matrix for WFS (C1IOO_WFS_INMATRIX) was restored to the original value using burtgooey to mitigate the WFSoutMatBalancing.py change 17874.

- (TODO) WFS1 and WFS2 are slightly misaligned as seen on the C1IOO_LOCKMC screen. These need to be aligned when the IFO is unlocked so that the beam is centered on them.

- (TODO) With the PSL shutter closed, the WFS heads should read 0, which is not the case. This needs to be corrected to remove the offset readings.

- The electronics upgrade should ideally only affect the suspensions (everything upstream should not need any changes).

- Note: The MC_TRANS error signals look very small in PIT and YAW.

  2653   Wed Mar 3 18:32:25 2010 AlbertoUpdate40m Upgrading11 MHz RFPD electronics
** Please add LISO file w/ component values.
I designed the circuit for one of the 11 MHz photodiodes that we're going to install in the 40m Upgrade.

This is a simple representation of the schematic:

#          |
#          Cw2
#          |
#          n23
#          |
#          Lw2
#          |
#           n22
#          |
#          Rw2                
#                 |                   |\            
#           n2- - - C2 - n3 -  - -  - |  \          
#            |    |      |   |        |4106>-- n5 - Rs -- no
# iinput    Rd   L1     L2 R24    n6- |  /     |           |
#      nin - |    |      |   |    |   |/       |         Rload    
#           Cd   n7     R22 gnd   |            |           |          
#            |    |      |        | - - - R8 - -          gnd              
#           gnd  R1     gnd      R7 
#                 |               |
#         gnd               gnd

I chose the values of the components in a realistic way, that is, using parts available from Coilcraft or Digikey.

Using LISO I simulated the transfer function and the noise of the circuit.

I'm attaching the results.

I'll post the 55 MHz RFPD design later.

  2655   Thu Mar 4 08:43:35 2010 AlbertoUpdate40m Upgrading11 MHz RFPD electronics

** Please add LISO file w/ component values.

oops, forgotten the third attachment...

here it is

  2656   Thu Mar 4 19:53:56 2010 AlbertoUpdate40m Upgrading11MHz PD designed adjusted for diode's resistance; 55 MHz RFPD designed
After reading this study done at LIGO MIT in 1998 I understood why it is difficult to define an effective impedance for a photodiode.

I read a few datasheets of the C30642GH photodiode that we're going to use for the 11 and 55 MHz. Considering the values listed for the resistance and the capacitance in what they define as "typical conditions" (that is, specific values of bias voltage and DC photocurrent), I fixed Rd=25Ohms and Cd=175pF.
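Given the fixed diode capacitance above, the tuning inductance needed to resonate the diode at each modulation frequency follows from f = 1/(2*pi*sqrt(L*Cd)). A quick sketch (the exact frequencies 11.06 and 55.3 MHz, with the 55 MHz PD at the 5x harmonic, are my assumptions for illustration):

```python
import math

# Resonating the diode capacitance Cd = 175 pF at each modulation frequency.
Cd = 175e-12                          # F, from the datasheet values above
for f in (11.06e6, 55.3e6):           # Hz (assuming 55 MHz is the 5x harmonic)
    L = 1.0 / ((2 * math.pi * f) ** 2 * Cd)
    print(f"f = {f/1e6:5.2f} MHz -> L = {L*1e9:7.1f} nH")
```

This gives roughly 1.2 uH and 47 nH, which sets the scale for the Coilcraft parts to be tuned around.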

Then I picked the tunable components in the circuit so that we could adjust for the variability of those parameters.

Finally with LISO I simulated transfer functions and noise curves for both the 11 and the 55MHz photodiodes.

I'm attaching the results and the LISO source files.


  2657   Thu Mar 4 22:07:21 2010 ranaUpdate40m Upgrading11MHz PD not yet designed

Use 10 Ohms for the resistance - I have never seen a diode with 25 Ohms.

p.s. PDFs can be joined together using the joinPDF command or a few command line options of 'gs'.

  2704   Tue Mar 23 22:46:43 2010 AlbertoUpdate40m UpgradingREFL11 upgraded
I modified REFL11 according to the changes listed in this schematic (see wiki / Upgrade 09 / RF System / Upgraded RF Photodiodes).
I tuned it to be resonant at 11.06MHz and to have a notch at 22.12MHz.
These are the transfer functions that I measured compared with what I expected from the LISO model.


The electronics transfer function is measured directly between the "Test Input" and "RF Out" connectors of the box. The optical transfer function is measured by means of an AM laser (the "Jenne laser") modulated by the network analyzer.
The AM laser's current was set at 20.0mA and the DC output of the photodiode box read about 40mV.
The LISO model has a different overall gain compared to the measured one, probably because it does not include the rest of the parts of the circuit other than the RF out path.

I spent some time trying to understand how touching the metal cage inside or bending the PCB board affected the photodiode response. It turned out that there was some weak soldering of one of the inductors.

  2711   Wed Mar 24 14:57:21 2010 AlbertoUpdate40m UpgradingREFL11 upgraded


Hartmut suggested a possible explanation for the way the electronics transfer function starts picking up at ~50MHz: the 10kOhm resistance in series with the Test Input connector of the box might have some parasitic capacitance that lowers the input impedance at high frequency.

Although Hartmut also admitted that, considering the high frequency at which the effect is observed, anything could be happening with the electronics inside the box.

  2715   Thu Mar 25 17:32:42 2010 AlbertoUpdate40m UpgradingREFL55 Upgraded

I upgraded the old REFL199 to the new REFL55.

To do that I had to replace the old photodiode inside, switching to a 2mm one.

Electronics and optical transfer functions, non normalized are shown in the attached plot.


The details about the modifications are contained in this dedicated wiki page (Upgrade_09 / RF System / Upgraded RF Photodiodes)

  2761   Sat Apr 3 19:54:19 2010 AlbertoUpdate40m UpgradingREFL11 and REFL55 PDs Noise Spectrum

These are the dark noise spectra that I measured on the 11MHz and 55MHz PD prototypes I modified.

The plots take into account the 50Ohm input impedance of the spectrum analyzer (that is, the noise is divided by 2).

(Attachments: 2010-04-03_REFL11_darknoise.png, 2010-04-03_REFL55_darknoise.png)

With an estimated transimpedance of about 300Ohm, I would expect to have 2-3nV/rtHz at all frequencies except for the resonant frequencies of each PD. At those resonances I would expect to have ~15nV/rtHz (cfr elog entry 2760).
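The flat 2-3 nV/rtHz expectation above is consistent with the Johnson noise of the estimated transimpedance; a quick cross-check:

```python
import math

# Johnson (thermal) noise of the ~300 Ohm effective transimpedance quoted above.
k_B = 1.380649e-23          # J/K, Boltzmann constant
T = 298.0                   # K, room temperature
R_transimpedance = 300.0    # Ohm, estimate from this entry
v_n = math.sqrt(4 * k_B * T * R_transimpedance)
print(f"Johnson noise ~ {v_n*1e9:.1f} nV/rtHz")  # ~2.2 nV/rtHz
```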


  1. For the 55 MHz PD, the resonance peak is too small.
  2. In the 55 MHz PD, noise is present at about 7 MHz.
  3. In the 11 MHz PD, there's a lot of noise below 10 MHz.

I have to figure out the sources of these noises.


  2767   Mon Apr 5 10:23:40 2010 AlbertoUpdate40m UpgradingREFL11 Low Frequency Oscillation Reduced

After adding an inductor L=100uH and a resistor R=10Ohm in parallel after the OP547A opamp that provides the bias for the photodiode of REFL11, the low-frequency noise that I had observed was significantly reduced.

See this plot:


A closer inspection of the shoulder at 11MHz in the noise spectrum showed some harmonics on it, spaced by about 200kHz. Closing the RF cage and the box lid made them disappear. See next plot:


The full noise spectrum looks like this:


A big bump is present at ~275MHz. It could be important if it also shows up on the shot noise spectrum.

ELOG V3.1.3-