This afternoon I revisited the soldering of Polyimide (Kapton) heaters. All the tests to date have failed to form a good joint between the exposed pads and hookup wire. The standard Kester Pb63Sn37 solder usually balls up and refuses to take; any joint it does make is brittle and too unreliable for vacuum use.
I tested two theories:
I set up a metal plate with the portable vice grips, attached a heater sticker, and heated it with the soldering heat gun to just under 200 °C (the maximum limit of the Kapton). I then tried soldering with a 287 °C soldering tip, taking care to really heat up the joint. Standard Pb63Sn37 solder still didn't take, even with heat high enough to damage the Polyimide. No amount of heat is going to improve this. I repeated the test after roughing up the surface with an abrasive. This helped a little, but the wire easily detached with a light tug. Too brittle.
I found a few different types of solder in the PSL lab. I also looked in the 40m, EE workshop, and ATF lab drawers, but they only had the standard Pb/Sn combinations. I found that Sn60Cu? actually worked a bit better. The flow was about the same, but the solder wetted much closer to the pad. The joint was better than before, but with a little more force (0.1-0.2 N) it too snapped off in a brittle fashion. There was no solder residue left on the pad; it all stuck to the wire. No good.
Looking around online, some other people have had the same problem. I found a data sheet from Omega for a Kapton heater kit that gives some instructions for soldering. It suggests using a high-temperature solder (Pb97.5Ag1.5Sn1) with the solder tip at about 288 °C (550 °F). We didn't have this in stock, so I've ordered a non-rosin-core spool from McMaster. I'll try this out in the next few days.
It looks like the only feasible way to attach wires to our Kapton heaters in vacuum is to solder them on. I can't find any suitable clips/crimps. There are two issues. First, Yinzi and I both found it hard to form a reliable solder joint to the Kapton pads in our initial tests: the surface type, and the fact that the pad is thin and holds no heat, mean that the solder beads onto the wire and doesn't wet to the flat surface of the heater's electrical contact. Second, solder has rosin and other contaminants that are potentially bad for outgassing and redepositing on optics in vacuum. Furthermore, it is difficult to bake because it has such a low melting point.
We don't have a full image of either acromag1 or ws3 in the PSL lab. This could be a problem if we have a drive failure.
I'd like to rebuild acromag1 at some stage using something newer than Ubuntu 12.01, but first we need backups.
In a tmux session on acromag1, I ran
sudo dd if=/dev/sda bs=64K | bzip2 > /mnt/external/acromag1Image.bz2
This should clone the whole drive into a compressed image, but it's going to take a while. To restore, expect to run
bzcat /media/usb/acromag1Image.bz2 | dd of=/dev/sda
Will check tomorrow to see if successful.
Clone of acromag1 was successful:
It hasn't reported any errors; I'm not sure how to check the integrity.
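One way to check the integrity (an idea, not something we have done yet) would be to compare a checksum of the decompressed image stream against the source drive. The sketch below demonstrates the pipeline on a small throwaway file rather than /dev/sda:

```shell
# Demonstrate the integrity check on a scratch file; the same pipeline applies
# to the real drive (md5sum the raw device and compare with the bzcat'd image).
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=1K count=64 2>/dev/null
bzip2 -k "$tmp"                                   # produces $tmp.bz2
src=$(md5sum < "$tmp" | cut -d' ' -f1)            # hash of the "drive"
img=$(bzcat "$tmp.bz2" | md5sum | cut -d' ' -f1)  # hash of the stored image
[ "$src" = "$img" ] && echo "checksums match"
rm -f "$tmp" "$tmp.bz2"
```

Note this only works if the source drive has not been written to since imaging; for the live systems one would have to checksum at imaging time.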
I am backing up ws3 now.
Ran a backup of ws3, the computer used for medm interfaces in the lab. It completed successfully:
controls@ws3:/media/controls/CTNLABBAK$ sudo dd if=/dev/sda bs=64K | bzip2 > /media/controls/CTNLABBAK/ws3Image.bz2
3815602+1 records in
3815602+1 records out
250059350016 bytes (250 GB) copied, 38551.7 s, 6.5 MB/s
The external drive is in the hard drives drawer in the blue cabinet.
I just checked the 40m's stock of wedged, AR-coated optics in the pull-out drawers.
It looks like there are about nine CVI W2-LW-1-1025-UV-1064-45P windows: these are 1° wedged and coated on both sides for 45° incident p-pol. I don't think this is what we want (i.e. 45° p-polarized). Also, 1 inch might be inconveniently small to use in practice.
There is one CVI W2-LW-1-2050-C-1064-0; this is not UV-grade fused silica, so it should probably not be used. Also, we need four.
Everything else is either coated on only one side, or has the wrong type of glass, wedge, or coating.
That was unfortunate. But why is BK7 incompatible with the purpose? I thought we need UV fused silica only for high-power reasons.
I guess BK7 is fine; we're not going to be putting high power in. I just thought UV fused silica would be better practice in case someone wanted to repurpose the flange in a few years. It seems like a minor extra cost.
Bottom line: I don't think we have four identical pieces of anything we can use. Should we order 1.5" W2s from CVI or somewhere else?
Recently we were given the idea of sending the beam to the photodiode at Brewster angle. If we do so, ideally one particular polarization (parallel to the plane of incidence) will not reflect back. So if we send the beam polarized in this direction (or set the incidence plane such that these conditions are matched), we can minimize the reflection from PD significantly.
It sounded like a good idea, so I started reading about the InGaAs detector we have. Unfortunately, the datasheet of the C30642 detector we use mentions neither the fraction of In in the InGaAs nor its refractive index. So I went into the literature and found these two papers:
Kim et al., Applied Physics Letters, Vol. 81, 13 23 (2002). DOI: 10.1063/1.1509093
Adachi et al. Journal of Applied Physics 53, 5863 (1982); doi: 10.1063/1.331425
Using the empirical coefficients and functions from these papers, I calculated the refractive index of InGaAs for various fractions x and the corresponding Brewster angles (see attached).
However, just after doing the analysis, we realized that this is not really possible. The Brewster angle is arctan(n2/n1), where n2 is the refractive index of the medium the light is going into. This implies the Brewster angle would always be greater than 45 degrees, and the detector won't absorb much light at such a steep angle. So currently the conclusion is that this idea won't work.
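For reference, the arctan relation above in a couple of lines of python. The index value 3.5 is only a rough illustrative guess for InGaAs near 1064 nm, not a number from the cited papers:

```python
import math

def brewster_angle_deg(n1, n2):
    """Brewster angle in degrees for light going from index n1 into index n2."""
    return math.degrees(math.atan2(n2, n1))

# Any n2 > n1 gives an angle above 45 deg, which is the problem noted above.
theta_b = brewster_angle_deg(1.0, 3.5)  # roughly 74 deg for an assumed n of 3.5
```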
However, there might be some error in our assumption of InGaAs as a transparent medium, as the calculations do not take into account absorption of the photons at all. I'm attaching the python notebook too, in case someone figures this out.
See also: https://arxiv.org/abs/1710.07399 or https://doi.org/10.1364/AO.57.003372
Upgrade from PID to intelligent controls:
As seen from earlier elogs, PID control of the temperature of the cavities seems problematic - the time taken for the temperature to converge to the set-point is very large and moreover, the PID parameters may require non-trivial tuning that varies with the desired set-point. Intelligent controls, specifically neural networks, seem like an attractive upgrade to PID as such a network would be able to learn for itself the non-linearities in the model and predict the optimal actuation.
More precisely, in this system, the requirement for the intelligent control is to be able to predict the optimal amount of heat actuation to be supplied in each time-step that converges fastest to the set-point temperature. In its final form, this prediction would be implemented as a series of matrix multiplications, with optimal weights (matrix coefficients), simulating the non-linear function describing the required actuation on taking as input the current state of the system.
(Refer to the figure.) A neural network consists of layers of nodes: an input layer, followed by one or more hidden layers, and finally an output layer. Each layer (represented as an n-dimensional vector, where n is the number of nodes in the layer) can be obtained from the previous m-dimensional layer via multiplication by an m-by-n matrix. Each node also has an associated activation function, which for the hidden layers is preferably non-linear (ReLU, tanh, etc.) to capture the non-linearities in the model. An optimisation algorithm then attempts to find a `fit' for the components of all matrices.
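As a concrete toy illustration of one such layer transformation, using plain python lists so nothing depends on a particular ML library (the numbers are arbitrary):

```python
# One hidden-layer pass: an m-dim input vector times an m-by-n weight matrix,
# plus a bias, then a non-linear activation (ReLU), gives the n-dim next layer.
def relu(v):
    return [max(0.0, x) for x in v]

def layer(x, W, b):
    m, n = len(W), len(W[0])
    assert len(x) == m
    z = [sum(x[i] * W[i][j] for i in range(m)) + b[j] for j in range(n)]
    return relu(z)

x = [1.0, -2.0, 0.5]                       # 3-node input layer
W = [[0.1, 0.2], [0.3, -0.4], [0.5, 0.6]]  # 3-by-2 weight matrix
b = [0.0, 0.1]                             # bias for the 2-node next layer
hidden = layer(x, W, b)                    # 2-node hidden layer
```

Training then amounts to adjusting the entries of W and b to minimise the loss, as described below.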
In order to achieve the final form, the weights need to be optimised via some learning algorithm that learns ‘from experience’. For this, a loss or cost function is calculated as a function of the current state, roughly representing the distance of the current state from the set-point. This is fed to the optimiser, which moves over the parameter space of the weights associated with the nodes in the neural network (the coefficients of all matrices that serve as a transformation from one layer to the next) to a point closer to the optimum. The weights predict an actuation which is applied to the system, giving a new state for which the process is repeated until the minimum is reached. This learning algorithm can be implemented, in our case, using either reinforcement learning (RL) or supervised learning (SL).
Reinforcement Learning (RL): RL deals with game-like problems where observations of the state are made and the algorithm learns to find the optimal action to perform based on the state. Each action has an associated reward which provides the basis for back-propagation or feedback that is used to predict future actions. In order to implement this, neither the internal working of the game nor a set of a priori `correct actions' need be known.
Supervised Learning (SL): SL operates on a set of labelled data, which includes a training and testing data set consisting of input states and their corresponding correct outputs. The algorithm learns to predict the output by learning with the training data set without any prior knowledge of the mechanism by which the outputs are obtained. In order to use SL in our system, a labelled data set must be obtained. This can be initially done using the model for thermal dynamics of our system and later on by taking real experimental data.
In order to train and test RL algorithms, and possibly SL algorithms too, the physical system can be simulated as a `game environment' on which the neural net would learn the optimal action at each step. OpenAI, an open-source platform for Artificial Intelligence (AI) development, provides Gym and Baselines, which are, respectively, a set of game environments on which RL algorithms can be trained and tested, and a set of high-performance RL algorithms.
Our particular system as a gym environment:
An initial model of the system includes only the vacuum can and the heat conduction through the foam surrounding it. The dynamics are represented by a first-order differential equation, and therefore the evolution can be predicted knowing only the temperature of the can (assuming all system parameters are known accurately). The action or actuation corresponds to a specific value of heating power applied to the can during the next time-step.
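A minimal sketch of that first-order model in python (all parameter values below are placeholders, not measured properties of the actual can):

```python
# Euler integration of C * dT/dt = P - k * (T - T_ambient): with constant
# heating power P the can settles toward T_ambient + P / k.
def evolve(T0, power_w, steps, dt=1.0, C=5000.0, k=0.5, T_amb=20.0):
    T = T0
    for _ in range(steps):
        T += dt * (power_w - k * (T - T_amb)) / C
    return T

cold = evolve(30.0, 0.0, 100000)    # no heating: relaxes toward 20 C
warm = evolve(20.0, 10.0, 100000)   # 10 W: approaches 20 + 10/0.5 = 40 C
```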
To formulate this as a gym game environment in python on which RL algorithms (such as those on baselines) may be trained and tested, the following methods are to be defined:
step(), reset(), seed().
render() and close() may also be used to visualise the gameplay.
reset() begins a new game and returns an observation or initial state, deterministically or randomly as per choice.
step() accepts an action and returns a tuple consisting of the next state (observation), reward received after previous action (reward), boolean determining whether the game is over (done) and a dictionary for additional information, if any (info). This method is one time step of the evolution.
return (observation, reward, done, info)
seed() contains seeds for the random number generators used in the program.
In addition, the environment also has the following attributes:
action_space - space of valid actions
observation_space - space of valid states or observations
reward_range - tuple of min and max possible reward
The action is given externally and should belong to the space of valid actions. In our case a learning algorithm, with a neural network, would feed this into the game at every time-step.
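Putting the pieces together, a skeletal version of such an environment might look like the following. It mimics the Gym method signatures (reset/step/seed and the (observation, reward, done, info) tuple) without importing the gym package, and the thermal parameters and reward are placeholders, not the lab's calibrated model:

```python
import random

class CanTempEnv:
    """Toy 'can temperature' game with Gym-style reset/step/seed methods."""
    def __init__(self, setpoint=35.0, dt=10.0, C=5000.0, k=0.5, T_amb=20.0):
        self.setpoint, self.dt = setpoint, dt
        self.C, self.k, self.T_amb = C, k, T_amb
        self.rng = random.Random()
        self.T = T_amb

    def seed(self, s=None):
        self.rng.seed(s)

    def reset(self):
        # start a new game at a random temperature near ambient
        self.T = self.T_amb + self.rng.uniform(-1.0, 1.0)
        return self.T

    def step(self, power_w):
        # one Euler time-step of C * dT/dt = P - k * (T - T_amb)
        self.T += self.dt * (power_w - self.k * (self.T - self.T_amb)) / self.C
        err = abs(self.T - self.setpoint)
        # (observation, reward, done, info): closer to setpoint -> higher reward
        return (self.T, -err, err < 0.01, {})

env = CanTempEnv()
env.seed(0)
obs = env.reset()
obs, reward, done, info = env.step(10.0)  # apply 10 W for one time-step
```

A learning algorithm would then call step() in a loop, feeding each observation through the network to choose the next heating power.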
The PMC in the North path got misaligned while working on the input path. We aligned it back today. PMC locking method summary:
We were able to achieve a fringe visibility of 76.16% (measured by the VIS program in the lab's programmable calculator). Also, we were able to suppress all other modes enough that they were no longer visible on the oscilloscope.
Keywords for future search: North Cavity Pre Mode Cleaner modecleaner visibility PMC
The RF cage that goes on top of the circuit in resonant RFPDs is grounded. It helps avoid interference; however, if some component, most likely the tunable inductors, touches this cage when it is closed, it creates parasitic inductance and/or capacitance. I observed this happening in one of our RFPDs, where the resonant frequency as well as the peak magnitude would drop significantly once the cage was put on. I circumvented this problem by applying a layer of insulation tape on the inside of the RF cage cover. This helps avoid any parasitic contact with ground.
Issued in Public Interest.
For some reason, after the weekend I found that the laser power on the south path had decreased by about 300 uW. On checking the power with a laser power meter, I found that a PBS, right after the South EAOM and a half waveplate, was dumping more than half of the power because it was vertically polarized. The half waveplate is supposed to rotate the vertical polarization from the EAOM into horizontal for cleaning by the PBS. I have rotated this waveplate to get maximum output from the PBS.
Still, the loss at this stage seems much higher than in the North path.
Loss at North Path after EAOM due to horizontal polarization at output = (354 uW)/(2431 uW) x 100 = 14.56%
Loss at South Path after EAOM due to horizontal polarization at output = (545 uW)/(1957 uW) x 100 = 27.85%
I have checked that the input polarization is correctly vertical for both EAOMs (only about ~1% of the light is horizontally polarized at this point in both paths). Also, both EAOMs (New Focus 4104) are currently terminated with 50 Ohm terminators at their modulation ports.
From the configuration run (see PSL:2257), I estimated some new PID coefficients. I left the PID on for the long weekend with the new coefficients, with the initial point about 5 MHz off from the lock set point. Apparently, the system never converged (see the orange trace in the attached StripTool screenshot). It instead kept oscillating.
At this point, it seems that the PID control may not work well enough. We need to keep the beatnote frequency within 10 kHz of the setpoint to reach the desired resolution range and avoid jumps in the Marconi. We either need a new way of controlling the temperature, or, if PID works near the setpoint with no abrupt changes, we need to approach the setpoint with almost zero slope.
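For context, the loop in question has the standard discrete PID form; here is a minimal sketch driving a toy first-order plant (the gains and plant constants are illustrative, not the lab's actual coefficients):

```python
# Discrete PID: u = kp*e + ki*integral(e) + kd*de/dt, applied to a toy
# first-order plant dy/dt = (u - 0.2*y) / 5 with a 1 s time-step.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, None

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid, y = PID(kp=0.5, ki=0.1, kd=0.0, dt=1.0), 0.0
for _ in range(200):
    u = pid.update(10.0, y)
    y += (u - 0.2 * y) / 5.0  # plant response over one 1 s time-step
# with these (well-matched) gains y converges close to the setpoint of 10
```

The trouble described above is precisely that on the real plant the equivalent of these gains has to be found empirically, and a set that converges at one setpoint oscillates at another.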
This week I started working on a new method to drive the frequency to the setpoint with a near-zero first derivative. The purpose of this method is to approach the setpoint calmly enough that the PID can function nicely. If it works out well, it could be used as the primary control near the setpoint as well.
There would be three codes working together to achieve temperature driving (and/or control).
This plan is in an infant stage. We might need to change a lot of things and models depending on what data is acquired. So, for now, to reduce development time (with a possible chance of failure), I've only incorporated the DataMiner code which is on Git/cit_ctnlab/ctn_scripts/ now. I've also written a DataMinerHelper.py which spans the actuation space slowly while keeping frequency in range so that good knowledge data can be mined. These two codes are near final release.
Depending on what the knowledge data looks like, we will decide if we should go further with this plan. Until then, I'll work on noiseBudget code again.
I have reconfigured the UFC-6000 to a sampling rate of 0.1 Hz, which is the minimum possible by design. I have changed ufc2.py to make sure it is set to this sampling rate every time we read precavity beatnote frequency data.
seems a little too much like hunt and peck. Don't you have a time domain simulation of the plant? If not, why not?
We have come to the conclusion that to properly tune PID, we need to accurately model a physical system that represents the cavity thermodynamics and use it to tune PID coefficients. For this, we need good temperature sensors on the cavity (not present right now) and better heat actuation with known control. Awade also mentioned that we have new gold plated heat shields ready to be installed. So we have decided to replace the heat shield and install new temperature sensors as well. Following is the plan of action for the same:
Hopefully, by the end of this work, we get better stability on beatnote frequency locking to get good spectrum reading at higher resolution from Marconi.
ws1 is unable to read and write on some EPICS channels while I can see these channels in fb4 or acromag1. These channels are:
I'm not sure what is causing this. I have rebooted acromag1 several times, but the problem persists. Interestingly, a lot of channels are getting updated on the medm screens, so the origin of the problem is probably localized to a single .db file. But everything looks fine to me, at least after the first few debugging trials.
Well, because of this, it is almost impossible to even manually tune the beatnote frequency to the required point. I'll fix this first because it seems like an error that shouldn't be ignored. Suggestions are welcome, as I am new to EPICS-Modbus-upstart-docker things and might be missing something silly.
I noticed that we were lacking complete documentation of the PLL transfer function and noise analysis, so I made this document for it.
Edit Fri May 17 14:02:14 2019:
The above link is no longer maintained. There is now a DCC document (see LIGO-T1900263) for this analysis.
We have been thinking for a while about migrating all EPICS channels from acromag1 (10.0.1.33) to c3iocserver (10.0.1.36), which is a rack mount running the latest supported Debian.
Unfortunately, my first attempt failed. I tried to put everything back to the status quo, but the docker instance on the iocserver which was running the PMC interface is not working. Here are the steps I took:
I couldn't debug the cause of this problem further remotely, so the status is worse than when I started. The PMC channels are not running, and hence everything must be unlocked in the lab right now.
Edit Mon Mar 11 18:38:03 2019 (awade): crossed out Ubuntu added Debian
# Use the following commands for TCP/IP
# drvAsynIPPortConfigure(const char *portName,
# const char *hostInfo,
# unsigned int priority,
# int noAutoConnect,
# int noProcessEos);
# Example: drvAsynIPPortConfigure("c3test1","10.0.0.42:502",0,0,1)
[ED by KA, catalogs should not be put on ELOG. This is public.]
Did we just miss this all along?
The C30642 has a bandwidth of only 20 MHz. That means at 36 MHz and 37 MHz, the RF output would be -8.1 dB (0.3933) and -8.3 dB (0.3827) less than expected, or maybe worse. However, this bandwidth is specified for a load resistance of 50 Ohms, so I need to go into more detail. But we definitely have to look into this.
Maybe we need to replace these with faster photodiodes. Do we have any of C30619 or C30641 in stock?
(NO - this is an incorrect interpretation of the bandwidth in this case)
There is no problem. It is just a matter of noise requirement.
How much shot-noise intercept current do you require?
At the 40m, the 33 MHz PD has a shot-noise intercept current of 0.52 mA. https://wiki-40m.ligo.caltech.edu/Electronics/RFPD/REFL33
Is that enough? How much is yours?
You should be able to realize a similar value because the technology used for your resonant PD is the same as the 40m one, I suppose.
If you require super low noise like uA (= sub-pA/rtHz current noise), then we will need high gain and low junction capacitance.
Yeah, I realized I had misunderstood the quoted bandwidth. It is specified for a direct 50 Ohm load, while we load our photodiode with a different resonant circuit.
I haven't made actual measurements of the shot-noise intercept current like those in the link, but I made an LTspice simulation of the circuit, and I believe the shot-noise intercept current is about 0.15 mA. Yes, we are good with that. This topic should close here. False alarm.
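For reference, the relation between intercept current and input-referred current noise as I understand it, in a short python sketch; it is consistent with the "uA = sub-pA/rtHz" remark above:

```python
# Shot noise of a DC photocurrent I is i_n = sqrt(2*e*I) in A/rtHz; the
# shot-noise intercept is the photocurrent at which this equals the circuit's
# own input-referred current noise, i.e. I_intercept = i_n**2 / (2*e).
E = 1.602e-19  # electron charge, C

def shot_noise(I_dc):
    """Shot-noise current density (A/rtHz) for DC photocurrent I_dc (A)."""
    return (2 * E * I_dc) ** 0.5

# A 1 uA intercept corresponds to sub-pA/rtHz circuit noise, as stated above:
i_n = shot_noise(1e-6)  # about 0.57 pA/rtHz
```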
I have completed an optical layout for the CTN lab. From now on, I'll update this layout whenever I make any major changes in the path.
Please comment if you think I should represent something better.
I have finally completed the documentation for mathematical analysis of using PLL for frequency noise measurement. This document also contains noise analysis for various sources.
Please read and let me know if you have any comments.
Last week, the default GNOME Classic desktop environment of ws1 started giving hiccups: I was unable to see any medm screens. I restarted the computer assuming that would solve it, but then I was unable to log in from the graphical user interface. That was weird, as I was able to ssh into the computer and do whatever I wanted, so it seemed that some graphics engine was unable to run. On the login screen, after typing in the correct password, a black screen would appear and it would go back to the login screen. This is mentioned in various forums on the internet as a "login loop". Taking advice and directions from Jamie, I tried to troubleshoot this to the best of my ability, but in the end we were both unsure what the problem was. Our best guess is that installation of the LSCSoft package upgraded some graphics library beyond the capability of the present system. I tried all the different desktop environments, and I was able to log in only through GNOME on Wayland (supposedly bleeding edge for debian 8). Another mystery is that I found medm uninstalled from the computer. I installed it back with:
sudo apt-get install medm
And I found that GNOME on Wayland is still a usable option. After some more discussion with Jamie, I decided to upgrade the OS to the next stable version, debian 9. I followed the instructions on:
to the letter and have successfully upgraded to debian 9. However, "GNOME on Wayland" is still the only desktop environment through which we can log in without facing a login loop.
On another note, I found today that clicking the scroll button over a medm channel screen (which ideally displays the channel name) freezes the computer. The mouse pointer disappears and the computer does not respond to anything, although again I can ssh into the computer and do anything. This problem goes away on restart.
So the bottom line is that the overall system environment of ws1 is becoming messy and old, and ideally we should do a clean install of a new OS (something like Jon did in QIL). But I was heavily discouraged from doing so, as it seemed it would take a lot of time to set up a new workstation.
Due to recent cleanup and upgrades in our lab computer environment, the workstation ws1 is merely a viewing window into the experiment. There are no active scripts that it runs (except a precavity beatnote reading), and the experiment can be carried out without it. However, it is nice to keep a computer loaded with EPICS, nds2, and python environments, plus a screen, for the day-to-day functioning of the lab.
beware the Debian zealots! All the workstations at the 40m are SL7.
If you really can only use Debian, you could just clone the new workstation that Jon setup in QIL.
Due to the testing in CTN:2406, I expect the beatnote to be away from 27.34 MHz. So I have changed the beatnote detector to the wideband New Focus 1811 for the long weekend. It has a bandwidth of 125 MHz.
The plan of action has been moved to a new wiki page for better documentation.
Attached is a first attempt at tracing the rays on reflection from the wedged and tilted window together with the cavity mirror.
I used Sean Leavey's zero and created a ray-tracing module for simple purposes which is fast and easy to use. Check out the examples to see its capabilities.
To use it, git pull labutils to update, and keep labutils/traceit in your python path.
More info about each ray can be seen with statements like layout['R4'], or just print layout.rays to see info about all rays. This includes their vector positions, the phenomenon that created them, the parent ray, etc.
I know a lot can be done to make this look better, but I'm not going to dive into developing this module right now. However, suggestions on how to make the ray-trace diagram more useful are welcome, so that I can make it more informative.
It seems like most of the reflections are bunched together in two directions, where we should put the beam dumps.
Liquid Instruments' application engineer at La Jolla told me that the connection with the Moku might be more stable if it is directly connected to the computer through a USB cable. It still gets identified by name, serial number, or IP address; just the connection is more robust. So today I have connected our Moku with USB. Over the past couple of weeks I have seen that every few days the Moku data transfer gets stuck or it fails to connect through LAN, so I am trying this out.
Ian and I discussed what the transfer functions would look like. Then today, using some old calculations, I put up this notebook, which does the calculations for us. The notebook has the calculations typed up in latex too.
This is the first attempt. We have to work on making the EOM path's transfer function closer to the expected model transfer function. And we should use a faster opamp as well, I think.
Edit Fri Oct 4 14:44:33 2019 anchal:
Schematic of Plant Model
Ian and I discussed what the transfer functions would look like. Then today, using some old calculations, I put up this notebook, which does the calculations for us. The proposed circuit schematic is attached. The notebook has the calculations typed up in latex too.
The nodes at the input and output of the buffer in the PZT path are connected together; that is wrong. Also, if possible, you should name the elements the same as in the zero model in the notebook. Anyway, I think we are ready to solder a circuit board.
I have updated the plant model to also contain the cavity pole. The cavity pole is a pair of positive and negative real poles, so it is hard (or maybe impossible) to imitate exactly with an electronic circuit. Or maybe my analysis is wrong.
Nevertheless, I have for now made this circuit, which has a second-order pole, so it correctly matches the magnitude of the model transfer function up to 1 MHz for both the PZT and EOM paths. Note that the elliptic filter is not included, as we can connect the circuit to Test port 1, which injects just before the filter in LIGO-D0901894. Also, for the gains in the EOM path, I had to add some factors to make it the same as the model transfer function. All components are chosen from E12-series resistors and capacitors.
Attached is a pdf of the notebook which contains all the mathematics in latex and a zip file with all files to recreate and further work on this. Ian can use these as support to learn zero further.
On Oct 11th at 15:04:04, the south laser switched off on its own. I would like to know if anyone entered the lab around this time. Koji did mention that our laser safety sign outside was blinking, but I have no more information than that. Attached is the data from the south PMC reflection DC, the first photodiode that sees the laser. It suddenly went to zero, indicating the laser was switched off and the locks did not drive it to this point. I'm also finding that the laser intensity is reduced, as it used to saturate the south PMC reflection photodiode when unlocked but presently shows around 5 V. I'm trying to put the experiment back to the same parameters as before.
Code and Data
The south PMC error signal is showing huge, weird, capacitor-charging-type oscillations. Attached is an oscilloscope measurement. Even weirder, this peak appears at random frequencies: if I take single-sequence measurements of 1 ms length, I see the peaks occurring at intervals ranging randomly from 100 us to 500 us.
Following up on CTN:2452, the laser safety sign is not working, hence the lab has been shut down. All lasers are switched off with the keys turned to the off position. I'll fix the laser safety sign before turning the experiment back on. Possible reasons might be an interlock glitch or a blown bulb.
I have taken the bulb from the ATF lab's inside sign for now. I'm ordering a new one to replace it soon.
With the laser actuation not connected, I see that the south PMC servo board is acting up. Firstly, it is not responding to changes made on the ramp when engage is off. This suggests that maybe the engage DAC channel is faulty and the PMC lock is always on. I need to investigate more, so for now I have disconnected the PMC PZT from the servo board so that nothing further happens. The north side is completely happy and sound.
I have a weird observation. The following two combinations work:
But when I connect the south PMC to its own south PMC servo card, the PZT output voltage does not change with changes made to the ramp. The other side works.
I checked the capacitance of the south PMC PZT and it came to about 395 nF, which matches the specification within the error bars. So the PZT isn't bad.
But if I disconnect the south PMC PZT from the south PMC servo card, the output voltage at the servo card changes as expected with the ramp voltage.
This is very perplexing. I think I need a second opinion here to do sanity checks; otherwise I'll go mad in the basement.
Updated schematics for reference: South PMC Servo Card
Are you sure that all the cables involved are isolated and there is no polarity inversion? e.g. the non-functional combination provides HV directly to GND at the cable, for example.
Yeah, the cables are isolated and no inversion could happen.
An even more bizarre thing is that it works now! I'm not hallucinating here; the same thing was not working before. I even have lab notes from yesterday when it wasn't working.
This is pretty bad, as I don't want to be unaware of something in the lab that caused this. The only other clue in all this is that the laser intensity changed. We control the intensity of light going into the experiment at (24,110) through the half waveplate before a PBS. Rana told me that the polarization direction of the laser coming from the NPRO shouldn't change, but since the incident last Friday, I have had to rotate this half waveplate in both directions to ensure the same amount of light reaches the cavity as in the north. Since no alignment was changed at the south PMC, there is no reason for the mode matching there to change drastically during the day, but this is the only fishy clue I've got for now.
Both cavities are locked, with the same gains in the FSS and PMC loops in both paths as before. The can's temperature has reached the setpoint, and the beatnote frequency PID is working to take it to 27.34 MHz. I'll set a trigger tonight for a beatnote frequency noise measurement if the frequency reaches the range of the photodiode. We will then know the impact on the experiment noise.
Latest BN Spectrum: CTN_Latest_BN_Spec.pdf
Daily BN Spectrum: CTN_Daily_BN_Spec.pdf
I have designed a passive circuit that seems to match the ideal transfer functions in shape. Scaling should just be a matter of playing with the values of the resistors and capacitors. The phase still seems to be an issue: there is an unwanted phase shift from 0° to -90°.
The next step is to finalize the values for the resistors and caps, possibly model the circuit in zero if I have time, and then build and test. Also, fix the phase.
The same thing happened again. This time, not just the SPMC actuation voltage but also the south laser slow voltage control was unresponsive, though I am not very sure about the latter. This was resolved once I restarted the whole lab. This narrows down the problem to the following possibilities:
These still don't explain CTN:2456. Again, since this is an irreproducible error, I will just have to wait for it to happen again to gather more clues. Right now, everything is fine, and the beatnote is traveling towards the set frequency.