ID | Date | Author | Type | Category | Subject
16257 | Mon Jul 26 17:34:23 2021 | Paco | Update | Loss Measurement | Loss measurement | [gautam, yehonathan, paco]
We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.
Previously we simply stitched all N=16 repetitions into a single time series and computed the loss; see Attachment 1 for an example of such YARM loss data. The mean and stdev of this long time series gave the loss quoted last time. We knew the uncertainty was almost certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g. beam angular motion, unnormalizable power fluctuations, etc.).
Today we analyzed each locked/misaligned cycle individually. From each cycle we can obtain a mean loss as well as a std dev *across the duration of the trace*, but because we have a measurement ensemble, we can also obtain an ensemble-averaged mean and a statistical uncertainty estimate *across the independent cycle realizations*. While the mean values don't change much, the latter estimate gives a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 ± 2.6 ppm and a YARM loss of 38.9 ± 0.6 ppm. To make the distinction clearer, Attachment 2 and Attachment 3 show the YARM and XARM loss measurement ensembles respectively, with the single-realization (time-series) standard deviations as vertical error bars and the 1-sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across different realizations (which happen to be ordered in time), which we think arises from inconsistent ASS dither alignment convergence. This is yet to be tested.
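As a minimal sketch of the two statistics (not our actual analysis code; the cycle arrays below are made-up stand-ins), the distinction is between the spread within one trace and the spread of the per-cycle means across the ensemble; whether one quotes that spread directly or the standard error of the mean is a convention choice, and the sketch uses the standard error:

import numpy as np

rng = np.random.default_rng(0)
# hypothetical stand-ins for the N=16 locked/misaligned cycle loss traces [ppm]
cycles = [38.9 + 5.0 * rng.standard_normal(200) for _ in range(16)]

per_cycle_mean = np.array([c.mean() for c in cycles])      # mean loss of each cycle
per_cycle_std = np.array([c.std(ddof=1) for c in cycles])  # spread *within* one trace (error bars)

ensemble_mean = per_cycle_mean.mean()
ensemble_err = per_cycle_mean.std(ddof=1) / np.sqrt(len(cycles))  # spread *across* realizations (band)

print(f"loss = {ensemble_mean:.1f} +/- {ensemble_err:.1f} ppm")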
For budgeting the excess uncertainty within a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences between the paths of the two reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc., and we might be able to correlate the recorded oplev signals with the reflection data to identify angular drift. We have not done this yet. |
Attachment 1: LossMeasurement_RawData.pdf
Attachment 2: YARM_loss_stats.pdf
Attachment 3: XARM_loss_stats.pdf
335 | Fri Feb 22 14:45:06 2008 | steve | Update | MOPA | laser power levels |
The beginning of this 1000-day plot shows the laser that was running at 22C head temp;
it was later sent to LLO.
The laser from LHO, PA#102 with NPRO#206, was installed on Nov 29, 2005 @ 49,943 hrs.
Now, almost 20,000 hrs later, we have 50% less PSL-126MOPA_AMPMON power |
Attachment 1: lpower1000d.jpg
1027 | Mon Oct 6 10:00:49 2008 | steve | Update | MOPA | MOPA_HTEMP is up | Monday morning conditions:
The laser head temp is up to 20.5 C
The laser shut down on Friday without any good reason.
I was expecting the temp to come down slowly. It did not.
The control room temp is 73-74 F, with Matt Evans' air deflector in perfect position.
The laser chiller temp is 22.2 C
ISS is saturating. The alarm is on. Turning the gain down from 7 to 2 pleases the alarm handler.
c1LSC computer is down |
Attachment 1: htup.jpg
1116 | Thu Nov 6 09:45:27 2008 | steve | Update | MOPA | head temp hiccup vs power | The control room AC temp was lowered from 74F to 70F around Oct 10.
This held the head temp rock solid at 18.45C for ~30 days, as shown on this 40-day plot.
We just had our first head temp hiccup.
Note: the laser chiller did not produce any water during this period |
Attachment 1: htpr.jpg
1282 | Fri Feb 6 16:23:54 2009 | steve | Update | MOPA | MOPAs of 7 years | MOPAs and their settings and powers over 7 years in the 40m |
Attachment 1: 7ymopas.jpg
1324 | Thu Feb 19 11:51:56 2009 | steve | Update | MOPA | HTEMP variation is too much | The C1:PSL-MOPA_HTEMP variation is more than 0.5 C daily.
Normally this temp stays well within 0.1 C.
This 80-day plot shows that we entered this unstable region some days ago.
The control room temp setting is unchanged at 70 F; the actual temp at the AC monitor is 69-70 F with occasional peaks at 74 F.
Water temp at the chiller is repeatedly around 20.6 C at 8 am.
This should be rock solid at 20.00C +- 0.02C
|
Attachment 1: 80dhtemp.jpg
1387 | Wed Mar 11 16:41:22 2009 | steve | Update | MOPA | spare NPRO power | The spare M126N-1064-700 (sn 5519, rebuilt Dec 2006) NPRO's power output
measured 750mW at DC 2.06A with an Ophir meter.
Alberto's controller unit 125/126-OPN-PS, sn 516m, was disconnected from the length-measurement NPRO on the AP table.
The 5519 NPRO was clamped to the optical table without a heatsink, and it was on for 15 minutes. |
1542 | Mon May 4 10:38:52 2009 | steve | Update | MOPA | laser power is dropped | As PSL-126MOPA_DTEC went up, the power output went down yesterday |
Attachment 1: dtecup.jpg
1543 | Mon May 4 16:49:56 2009 | Alberto | Update | MOPA | laser power is dropped |
Quote: |
As PSL-126MOPA_DTEC went up, the power output went down yesterday
|
Alberto, Jenne, Rob, Steve,
later on in the afternoon, we realized that the power from the MOPA was not recovering and we decided to hack the chiller's pipe that cools the box.
Without unlocking the safety nut on the water valve inside the box, Jenne performed some voodoo and twisted the screw that opens it a bit with a screwdriver. All of a sudden some devilish bubbling was heard coming from the pipes.
The exorcism must have freed some Sumerian ghost stuck in our MOPA's chilling pipes (we have strong reasons to believe it might have looked like this) because then the NPRO's radiator started getting cooler.
I also jiggled a bit with the valve while I was trying to unlock the safety nut, but I stopped when I noticed that the nut was stuck to the plastic support it is mounted on.
We're now watching the MOPA power's monitor to see if eventually all the tinkering succeeded.
[From Jenne: When we first opened up the MOPA box, the NPRO's cooling fins were HOT. This is a clear sign of something badbadbad. They should be COLD to the touch (cooler than room temp). After jiggling the needle valve, and hearing the water-rushing sounds, the NPRO radiator fins started getting cooler. After ~10 min or so, they were once again cool to the touch. Good news. It was a little worrisome however that just after our needle-valve machinations, the DTEC was going down (good), but the HTEMP started to rise again (bad). It wasn't until after Alberto's tinkering that the HTEMP actually started to go down, and the power started to go up. This probably has a lot to do with the fact that these temperature things have a fairly long time constant.
Also, when we first went out to check on things, there was a lot more condensation on the water tubes/connections than I have seen before. On the outside of the MOPA box, at the metal connectors where the water pipes are connected to the box, there was actually a little puddle, ~1cm diameter, of water. Steve didn't seem concerned, and we dried it off. It's probably just more humid than usual today, but it might be something to check up on later.] |
1547 | Tue May 5 10:42:18 2009 | steve | Update | MOPA | laser power is back |
Quote: |
As PSL-126MOPA_DTEC went up, the power output went down yesterday
|
The NPRO cooling water was clogged at the needle valve. The heat sink temp was around ~37C.
The flow-regulator needle valve position is locked with a nut, and it is frozen. It is not adjustable. However, Jenne's tapping and pushing down on the plastic hardware cleared the way for the water flow.
We have to remember to replace this needle valve when the new NPRO is swapped in. I checked on the heat sink temp this morning. It is ~18C.
There is condensation on the south end of the NPRO body; I wish the DTEC value were just a little higher, like 0.5V.
The wavelength of the diode is temp dependent: 0.3 nm/C. The fine tuning of this diode is done by a thermo-electric cooler (TEC).
To keep the diode precisely tuned to the absorption of the laser gain material, the diode temp is held constant using electronic feedback control.
This value is zero now.
|
Attachment 1: uncloged.jpg
1646 | Wed Jun 3 03:30:52 2009 | rana | Update | MOPA | NPRO current adjust | I increased the NPRO's current to the max allowed via EPICS before the chiller shutdown. Yesterday, I did this again just to see the effect. It is minimal.
If we trust the LMON as a proportional readout of the NPRO power, the current increase from 2.3 to 2.47 A gave us a power boost from 525 to 585 mW (a factor of 1.11). The corresponding change in MOPA output is 2.4 to 2.5 W (a factor of 1.04).
Therefore, I conclude that the amplifier's pump has degraded so much that it is partially saturating on the NPRO side. So the intensity noise from the NPRO should also be suppressed by a similar factor.
We should plan to replace this old MOPA with a 2 W Innolight NPRO and give the NPRO from this MOPA back to the bridge labs. We can probably get Eric G to buy half of our new NPRO as a trade-in credit. |
Attachment 1: Untitled.png
2000 | Thu Sep 24 21:04:15 2009 | Jenne | Update | MOPA | Increasing the power from the MOPA | [Jenne, Rana, Koji]
Since the MOPA has been having a bad few weeks (and got even more significantly worse in the last day or so), we opened up the MOPA box to increase the power. This involved some adjusting of the NPRO, and some adjusting of the alignment between the NPRO and the Amplifier. Afterward, the power out of the MOPA box was increased. Hooray!
Steps taken:
0. Before we touched anything, the AMPMON was 2.26, PMC_Trans was 2.23, PSL-126MOPA_126MON was 152 (and when the photodiode was blocked, its dark reading was 23).
1. We took off the side panel of the MOPA box nearest the NPRO, to gain access to the potentiometers that control the NPRO settings. We selectively changed some of the pots while watching PSL-126MOPA_126MON on Striptool.
2. We adjusted the pot labeled "DTEMP" first. (You have to use a dental mirror to see the labels on the PCB, but they're there). We went 3.25 turns clockwise, and got the 126MON to 158.
3. To give us some elbow room, we changed the PSL-126MOPA_126CURADJ from +10.000 to 0.000 so that we have some space to move around on the slider. This changed 126MON to 142. The 126MOPA_CURMON was at 2.308.
4. We tried adjusting the "USR_CUR" pot, which is labeled "POWER" on the back panel of the NPRO (you reach this pot through a hole in the back of the NPRO, not through the side which we took off, like all the other pots today). This pot did nothing at all, so we left it in its original position. This may have been disabled since we use the slider.
5. We adjusted the CUR_SET pot, and got the 126MON up to 185. This changed the 126MOPA_CURMON to 2.772 and the AMPMON to 2.45.
We decided that that was enough fiddling with the NPRO, and moved on to adjusting the alignment into the Amplifier.
6. We teed off of the AMPMON photodiode so that we could see the DC values on a DMM. When we used a T to connect both the DMM and the regular DAQ cable, the DMM read a value a factor of 2 smaller than when the DMM was connected directly to the PD. This shouldn't happen.....it's something on the to-fix-someday list.
7. Rana adjusted the 2 steering mirrors immediately in front of the amplifier, inside the MOPA box. This changed the DMM reading from its original 0.204 to 0.210, and the AMPMON reading from 2.45 to 2.55. While this did help increase the power, the mirrors weren't really moved very much.
8. We then noticed that the beam wasn't really well aligned onto the AMPMON PD. When Rana leaned on the MOPA box, the PD's reading changed. So we moved the PD a little bit to maximize its readings. After this, the AMPMON read 2.68, and the DMM read 0.220.
9. Then Rana adjusted the 2 waveplates in the path from the NPRO to the Amplifier. The first waveplate in the path didn't really change anything. Adjusting the 2nd waveplate gave us an AMPMON of 2.72, and a DMM reading of 0.222.
10. We closed up the MOPA box, and locked the PMC. Unfortunately, the PMC_Trans was only 1.78, down from the 2.26 when we began our activities. Not so great, considering that in the end, the MOPA power went up from 2.26 to 2.72.
11. Koji and I adjusted the steering mirrors in front of the PMC, but we could not get a transmission higher than 1.78.
12. We came back to the control room, and changed the 126MOPA_126CURADJ slider to -2.263, which gives a 126MOPA_CURMON of 2.503. This increased PMC_TRANS up to 2.1.
13. Koji did a bit more steering mirror adjustment, but didn't get any more improvement.
14. Koji then did a scan of the FSS SLOW actuator, and found a better temperature place (~ -5.0) for the laser to sit in. This place (presumably with less mode hopping) lets the PMC_TRANS get up to 2.3, almost 2.4. We leave things at this place, with the 126MOPA_126CURADJ slider at -2.263.
Now that the MOPA is putting out more power, we can adjust the waveplate before the PBS to determine how much power we dump, so that we have ~constant power all the time.
Also, the PMCR view on the Quad TVs in the Control Room has been changed so it actually is PMCR, not PMCT like it has been for a long time. |
2002 | Fri Sep 25 16:45:29 2009 | Jenne | Update | MOPA | Total MOPA power is constant, but the NPRO's power has decreased after last night's activities? | [Koji, Jenne]
Steve pointed this out to me today, and Koji and I just took a look at it together: The total power coming out of the MOPA box is constant, about 2.7W. However, the NPRO power (as measured by 126MOPA_126MON) has decreased from where we left it last night. It's an exponential decay, and Koji and I aren't sure what is causing it. This may be some misalignment on the PD which actually measures 126MON or something though, because 126MOPA_LMON, which measures the NPRO power inside the NPRO box (that's how it looks on the MEDM screen at least...) has stayed constant. I'm hesitant to be sure that it's a misalignment issue since the decay is gradual, rather than a jump.
Koji and I are going to keep an eye on the 126MON value. Perhaps on Monday we'll take a look at maybe aligning the beam onto this PD, and look at the impedance of both this PD, and the AMPMON PD to see why the reading on the DMM changed last night when we had the DAQ cable T-ed in, and not T-ed in. |
Attachment 1: AMPMONconstant_126MONdown.jpg
2003 | Fri Sep 25 17:51:51 2009 | Koji | Update | MOPA | Solved (Re: Total MOPA power is constant, but the NPRO's power has decreased after last night's activities?) | Jenne, Koji
The cause of the decrease was found and the problem was solved. We found this entry, which says
Yoich> We opened the MOPA box and installed a mirror to direct a picked off NPRO beam to the outside of the box through an unused hole.
Yoich> We set up a lens and a PD outside of the MOPA box to receive this beam. The output from the PD is connected to the 126MON cable.
We went to the PSL table and found the dc power cable for 126MOPA_AMPMON was clipping the 126MON beam.
We also made a cable stay with a pole and a cable tie.
After the work, 126MON went up to 161 which was the value we saw last night.
We also found the cause of the AMPMON signal change by the DAQ connection, mentioned in this entry:
Jenne> 6. We teed off of the AMPMON photodiode so that we could see the DC values on a DMM.
Jenne> When we used a T to connect both the DMM and the regular DAQ cable, the DMM read
Jenne> a value a factor of 2 smaller than when the DMM was connected directly to the PD.
We found a 30dB attenuator connected after the PD. It explains the missing factor of 2.
Quote: |
[Koji, Jenne]
Steve pointed this out to me today, and Koji and I just took a look at it together: The total power coming out of the MOPA box is constant, about 2.7W. However, the NPRO power (as measured by 126MOPA_126MON) has decreased from where we left it last night. It's an exponential decay, and Koji and I aren't sure what is causing it. This may be some misalignment on the PD which actually measures 126MON or something though, because 126MOPA_LMON, which measures the NPRO power inside the NPRO box (that's how it looks on the MEDM screen at least...) has stayed constant. I'm hesitant to be sure that it's a misalignment issue since the decay is gradual, rather than a jump.
Koji and I are going to keep an eye on the 126MON value. Perhaps on Monday we'll take a look at maybe aligning the beam onto this PD, and look at the impedance of both this PD, and the AMPMON PD to see why the reading on the DMM changed last night when we had the DAQ cable T-ed in, and not T-ed in.
|
|
2007 | Sun Sep 27 12:52:56 2009 | rana | Update | MOPA | Increasing the power from the MOPA | This is a trend of the last 20 days. After our work with the NPRO, we have recovered only 5% in PMC trans power, although there's an apparent 15% increase in AMPMON.
The AMPMON increase is partly fake; the AMPMON PD has too much of an ND filter in front of it and it has a strong angle dependence. In the future, we should not use this filter in a permanent setup. This is not a humidity dependence.
The recovery of the refcav power mainly came from tweaking the two steering mirrors just before and just after the 21.5 MHz PC. I used those knobs because that is the part of the refcav path closest to the initial disturbance (NPRO).
BTW, the cost of a 1W Innolight NPRO is $35k and a 2W Innolight NPRO is $53k. Since Jenne is on fellowship this year, we can afford the 2W laser, but she has to be given priority in naming the laser. |
Attachment 1: Picture_3.png
2164 | Fri Oct 30 09:24:45 2009 | steve | HowTo | MOPA | how to squeeze more out of little |
Quote: |
Here are the plots for the powers. MC TRANS is still rising.
What I noticed was that C1:PSL-FSS_PCDRIVE no longer hits the yellow alert.
The mean reduced from 0.4 to 0.3. This is good, at least for now.
|
Koji did a nice job increasing light power with some joggling. |
Attachment 1: 44to34.jpg
2297 | Thu Nov 19 09:25:19 2009 | steve | Update | MOPA | water was added to the laser chiller | I added ~500 cc of distilled water to the laser chiller yesterday. |
Attachment 1: htempwtr.png
2556 | Mon Feb 1 18:33:10 2010 | steve | Update | MOPA | Ve half the lazer! | The 2W NPRO from Valera arrived today and I haf hidden it somewere in the 40m lab!
Rana was so kind to make this entry for me |
Attachment 1: inno2w.JPG
Attachment 2: inno2Wb.JPG
3033 | Wed Jun 2 07:54:55 2010 | steve | Update | MOPA | laser headtemp is up | Is the cooling line clogged? The chiller temp is 21C. See the 1 and 20 day plots |
Attachment 1: htemp.jpg
Attachment 2: htemp20d.jpg
3035 | Wed Jun 2 11:28:31 2010 | Koji | Update | MOPA | laser headtemp is up | Last night we stopped the air conditioning. It made HDTEMP increase.
Later we restored them and the temperature slowly recovered. I don't know why the recovery was so slow.
Quote: |
Is the cooling line clogged? The chiller temp is 21C. See the 1 and 20 day plots
|
|
3108 | Wed Jun 23 17:48:16 2010 | steve | Update | MOPA | laser head temp | The laser chiller temp is fluctuating and the power output is decreasing. See the 120-day plot.
Yesterday I removed ~300cc of water from the overflowing chiller tank. |
Attachment 1: htemp120d.jpg
3130 | Tue Jun 29 08:41:06 2010 | steve | Update | MOPA | MOPA is dead | I found the laser dead this morning.
The crane people are here to unjam it.
Laser hazard mode is lifted and LASER SAFE MODE is in place. No safety glasses are needed, but CRANE HAZARD is still active.
Stay out of the 40m lab !
|
Attachment 1: laserisdead.jpg
3132 | Tue Jun 29 10:20:58 2010 | rana | Update | MOPA | MOPA is NOT dead | Not dead. It just had an HT fault. You can tell by reading the front panel. Cycling the power usually fixes this. |
3137 | Tue Jun 29 16:44:12 2010 | Jenne, rana | Update | MOPA | MOPA is NOT dead, was just asleep |
Quote: |
Not dead. It just had an HT fault. You can tell by reading the front panel. Cycling the power usually fixes this.
|
MOPA is back online. Rana found that the fuse in the AC power connector had blown. This was evident by smelling all of the inputs and outputs of the MOPA controller. The power cord we were using for this was only rated for 10A and therefore was a safety hazard. The fuse should be rated to blow before the power cord catches on fire. The power cord end was slightly melted. I don't know why it hadn't failed in the last 12 years, but I guess the MOPA was drawing a lot of extra current for the DTEC or something due to the high temperature of the head.
We got some new fuses from Todd @ Downs.
The ones we got, however, were fast-blow, and that's what we want. The fuses are 10A, 250V, ~.08 inches long, and 0.2 inches in diameter. |
3202 | Tue Jul 13 10:02:30 2010 | steve | Update | MOPA | laser power is dropping slowly | I have just removed another 400 cc of water from the chiller. I have been doing this since the HTEMP started fluctuating.
The Neslab bath temp is 20.7C, control room temp 71F
|
Attachment 1: power100d.jpg
3577 | Wed Sep 15 16:00:26 2010 | koji, steve | Update | MOPA | | We removed the Lightwave MOPA Controller from 1X1 (south). It was a really painful, messy job to pull out the umbilical.
Note: the umbilical is shedding its plastic cover. It is functional, but it has to be taken outside and cleaned. Do not remove it from its plastic bag in a clean environment.
Now Joe has room for the IOO chassis in this rack.
We also removed the Minco temp controller and the ref. cavity ion pump power supply.
|
3578 | Wed Sep 15 16:12:35 2010 | koji, steve | Update | MOPA | MOPA Controller is taken out of the PSL rack | We removed the Lightwave MOPA Controller (PA#102, NPRO206 power supply) to make room for the IOO chassis at the 1X1 (south) rack.
The umbilical cord was a real pain to take out. It is shedding its plastic cover. The unused Minco was disconnected and removed.
The ref. cavity ion pump controller/power supply was temporarily taken out as well. |
Attachment 1: P1060843.JPG
1195 | Fri Dec 19 11:29:16 2008 | Alberto, Yoichi | Configuration | MZ | MZ Trans PD | Lately, it seems that the matching of the input beam to the Mode Cleaner has changed. Also, it is drifting such that it has become necessary to continuously adjust the MC cavity alignment for it to lock properly.
Looking for causes, we stopped on the Mach Zehnder. We found that the monitor channel C1:PSL-MZ_MZTRANSPD, which supposedly reads the voltage from some photodiode measuring the transmitted power from the Mach Zehnder, is totally unreliable and actually not related to any beam at all.
Blocking either the MZ input or output beam does not change the channel's readout. The reflection channel readout responds well, so it seems ok. |
2006 | Sat Sep 26 13:55:20 2009 | Jenne | Update | MZ | MZ was locked in a bad place | I found the MZ locked in a bad place earlier today. It was locked in a similarly bad spot yesterday after we fixed the cable situation for 126MOPA_126MON, with a reflection of ~0.8 rather than the nominal 0.305. It's good now though. |
2017 | Tue Sep 29 10:44:29 2009 | Koji | Update | MZ | MZ investigation | Rana, Jenne, Koji
Last night we checked the MZ. The main thing we found was that the gain slider does not work.
Although the slider actually changes the voltage at the cross connection of 1Y2 (31 pin4?), the gain does not change.
The error spectrum didn't change at all even when the slider was moved.
Rana poked the flat cable at the bottom of 1Y2; we had no improvement.
We couldn't find the VME extender board, so we just replaced the AD602 (=VGA) and the LT1125 (=buffer for the ctrl voltage).
Even after the replacement, the gain slider is still not working.
Today, I will put a lead or probe on the board to see whether the slider changes the voltage on the board or not.
Somehow the gain is sitting at an intermediate place, neither too low nor too high, so I still don't know whether the gain slider is the cause of the MZ instability or not. |
2018 | Tue Sep 29 12:47:08 2009 | Koji | Update | MZ | MZ unlocked | 12:45 I started the work on MZ. Thus the MZ was unlocked.
Found the bad connection on the FLKM 64pin cross connection board. We need a replacement.
I went to Wilson and got the replacement, two VME extender boards, three 7815, and three 7915. Thanks, Ben! |
2020 | Tue Sep 29 18:21:41 2009 | Koji | Update | MZ | MZ work done | The MZ work is complete. I replaced the bad cross connection terminal. The gain slider is working now.
I looked at the error spectrum on an FFT analyzer. I could see that the lock was tighter.
Then I proceeded to the MZ epics panel.
1) C1:PSL-MZ_MZTRANSPD has no meaning (not connected). So I put C1:PSL-ISS_INMONPD as the MZ trans monitor.
2) The EPICS setting for the MZ gain slider was totally wrong.
Today I learned from the circuit that the full scale of the gain slider C1:PSL-MZ_GAIN gives us +/-10V at the DAC.
This yields +/-1V at V_ctrl of the AD602 after the internal 1/10 attenuation stage.
This +/-1V does not correspond to -10dB~+30dB, but to -22dB~+42dB, which is beyond the spec of the chip.
The gain of the AD602 is calculated by
G [dB] = 32 V_ctrl + 10, for -0.625 [V] < V_ctrl < +0.625 [V].
In order to fix this I used the following commands, which overrode the EPICS parameters.
The trick with EGUF/EGUL is to set them to the gain that the full scale of the DAC output would (virtually) correspond to.
ezcawrite C1:PSL-MZ_GAIN.EGUF 42
ezcawrite C1:PSL-MZ_GAIN.EGUL -22
ezcawrite C1:PSL-MZ_GAIN.DRVH 30
ezcawrite C1:PSL-MZ_GAIN.DRVL -10
ezcawrite C1:PSL-MZ_GAIN.HOPR 30
ezcawrite C1:PSL-MZ_GAIN.LOPR -10
and for the permanent change I modified the db file /cvs/cds/caltech/target/c1iool0/c1iooMZservo.db
This will become active when c1iool0 is rebooted.
# This yields the output limited to -6.25V ~ +6.25V, which corresponds to -10dB ~ +30dB
# modified by Koji Arai (29-Sept-2009)
grecord(ao,"C1:PSL-MZ_GAIN")
{
field(DESC,"GAIN- overall pre-modecleaner servo loop gain")
field(SCAN,"Passive")
field(PINI,"YES")
field(DISV,"1")
field(DTYP,"VMIVME-4116")
field(OUT,"#C3 S5 @")
field(EGUF,"42")
field(EGUL,"-22")
field(PREC,"1")
field(EGU,"dB")
field(HOPR,"30")
field(LOPR,"-10")
field(DRVH,"30")
field(DRVL,"-10")
field(LINR,"LINEAR")
field(OROC,"0")
field(DOL,"0")
}
# previous code
grecord(ao,"C1:PSL-MZ_GAIN")
{
field(DESC,"GAIN- overall pre-modecleaner servo loop gain")
field(SCAN,"Passive")
field(PINI,"YES")
field(DISV,"1")
field(DTYP,"VMIVME-4116")
field(OUT,"#C3 S5 @")
field(EGUF,"30")
field(EGUL,"-10")
field(PREC,"4")
field(EGU,"Volts")
field(HOPR,"30")
field(LOPR,"-10")
field(LINR,"LINEAR")
field(OROC,"0")
field(DOL,"0")
}
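A quick numerical sanity check of this calibration (a sketch added for illustration, not code that lives anywhere in our system):

def ad602_gain_db(v_ctrl):
    # AD602 gain for -0.625 V < V_ctrl < +0.625 V
    return 32.0 * v_ctrl + 10.0

def slider_to_dac_volts(gain_db, eguf=42.0, egul=-22.0):
    # EPICS ao record: engineering units EGUL..EGUF map linearly onto -10..+10 V
    return -10.0 + 20.0 * (gain_db - egul) / (eguf - egul)

for g in (-10.0, 30.0):                 # DRVL and DRVH
    v_dac = slider_to_dac_volts(g)
    v_ctrl = v_dac / 10.0               # internal 1/10 attenuation stage
    print(f"slider {g:+.0f} dB -> DAC {v_dac:+.2f} V -> V_ctrl {v_ctrl:+.3f} V -> gain {ad602_gain_db(v_ctrl):+.1f} dB")
# expect: -10 dB -> -6.25 V -> -0.625 V -> -10.0 dB, and +30 dB -> +6.25 V -> +30.0 dB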
Quote: |
12:45 I started the work on MZ. Thus the MZ was unlocked.
Found the bad connection on the FLKM 64pin cross connection board. We need a replacement.
I went to Wilson and got the replacement, two VME extender boards, three 7815, and three 7915. Thanks, Ben!
|
|
2021 | Tue Sep 29 21:37:09 2009 | rana | Update | MZ | MZ work done : some noise checking | Since we used to run with a gain slider setting of +15 dB on the MZ, I wanted to check that the new setting of +30dB was OK. It is.
To check it I turned it up and looked for some excess noise in the ISS or in the MC. There was none. I also set the input offset slider by unlocking the PMC and zeroing the mixer monitor readback. The new slider setting is -6.5V.
I don't know why we would need more gain on the MZ loop, but we can have some if we want it by turning up the gain before the servo (optical or RF). The attached plot shows the MC_F and ISS signals with the ISS loop on and off. There was no change in either of these spectra with the MZ gain high or low. |
Attachment 1: fsm.pdf
2022 | Tue Sep 29 21:51:32 2009 | Koji | Update | MZ | MZ work done : some noise checking | The previous "+15" was Vctrl = 0.25 [V], which was +18 dB.
Quote: |
Since we used to run with a gain slider setting of +15 dB on the MZ, I wanted to check that the new setting of +30dB was OK.
|
|
2023 | Tue Sep 29 22:51:20 2009 | Koji | Update | MZ | Possible gain mis-calibration at other places (Re: MZ work done) | Probably there is the same mistake for the PMC gain slider. Possibly on the FSS slider, too???
Quote: |
2) The EPICS setting for the MZ gain slider was totally wrong.
Today I learned from the circuit that the full scale of the gain slider C1:PSL-MZ_GAIN gives us +/-10V at the DAC.
This yields +/-1V at V_ctrl of the AD602 after the internal 1/10 attenuation stage.
This +/-1V does not correspond to -10dB~+30dB, but to -22dB~+42dB, which is beyond the spec of the chip.
|
|
2032 | Thu Oct 1 09:36:09 2009 | Koji | Update | MZ | MZ relocked (Re: suspension damping restored and MZ HV stuck) | MZ stayed unlocked. It is now relocked.
Quote: |
Earthquake of magnitude 5.0 shakes ETMY loose.
MC2 lost its damping later.
|
|
2035 | Thu Oct 1 13:12:41 2009 | Koji | Update | MZ | MZ Work from 13:00- | I will investigate the MZ board. I will unlock MZ (and MC). |
2038 | Thu Oct 1 19:04:05 2009 | Koji | Update | MZ | MZ work done (Re: MZ Work from 13:00-) | MZ work has been done. I did not change anything on the circuit.
Recently we observed that the MZ PZT output was sticking at a certain voltage. I found the reason.
In short: we must return the PZT ramp offset to 0 after locking.
I am going to write an MZ auto-lock script someday, to do this automatically.
According to the resistor values used in the circuit, the PZT HV output voltage is determined by the following formula:
V_PZT = 150 - 12 V_ctrl - 24 V_ramp
Here the ramp voltage V_ramp moves from -10V to +10V, and the feedback control voltage V_ctrl moves from -13V to +13V.
The baseline offset of 150V is provided on the circuit board.
When V_ramp is 0, V_PZT runs over roughly 0 to 300V. This is just enough for the full scale of the actual V_PZT range,
which is 0V~280V.
If any V_ramp offset is given, V_PZT rails at one edge or the other. This limits the actual range of the PZT output.
This is not nice, but it is what happened recently.
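To see what the formula implies for the usable range, here is a small sketch (my own illustration; the constants are the ones quoted above):

V_CTRL_MIN, V_CTRL_MAX = -13.0, 13.0    # feedback control voltage range [V]
V_PZT_MIN, V_PZT_MAX = 0.0, 280.0       # actual PZT HV range [V]

def v_pzt(v_ctrl, v_ramp):
    return 150.0 - 12.0 * v_ctrl - 24.0 * v_ramp

for v_ramp in (0.0, 2.0, 5.0):
    lo = max(v_pzt(V_CTRL_MAX, v_ramp), V_PZT_MIN)   # most negative feedback
    hi = min(v_pzt(V_CTRL_MIN, v_ramp), V_PZT_MAX)   # most positive feedback
    print(f"V_ramp = {v_ramp:+.0f} V: usable V_PZT span {lo:.0f} to {hi:.0f} V")
# with V_ramp = 0 the feedback reaches the full 0-280 V; any ramp offset rails
# the output on one side, which is the sticking we observed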
Quote: |
I will investigate the MZ board. I will unlock MZ (and MC).
|
|
Attachment 1: MZ_PZT.pdf
2349 | Mon Nov 30 19:23:50 2009 | Jenne | Update | MZ | MZ down | Came back from dinner to find the Mach Zehnder unlocked. The poor IFO is kind of having a crappy day (computers, MZ, and I think the Mode Cleaner alignment might be bad too). |
7418 | Thu Sep 20 08:50:14 2012 | Masha | Update | MachineLearning | Machine Learning Update | Hi everyone,
I've been working a bit on neural network code for a controller, and thus far I have code that creates a reference plant neural network. This is necessary for performing a gradient-descent learning algorithm on a controller neural network (one that reads an error signal and outputs an actuation force). Because the error signal is read only after the controller's previous output has passed through the plant, calculating the gradient requires either inverting the plant or simulating the plant with a neural network; with the latter, the error signal can be back-propagated through the plant network to find the gradient with respect to the controller output (as opposed to with respect to the plant), and then back-propagated through the controller network in order to learn.
I have uploaded to my directory a directory neural_plant. The most important file is reference_plant.c, which compiles with the command
gcc reference_plant.c -o reference_plant -lfann -lm
The code runs on a file called reference_plant.data, which consists of a series of delayed inputs i_1, i_2, i_3 ... i_{n-1} of the plant signals, followed by an output that is i_n, the subsequent signal. Parallel streams may also be used if more than one signal is to be read. The top of the file must contain the total number of training packets (input-output groups), followed by the number of inputs and the number of outputs. reference_plant.c also has constant variables which specify the number of hidden neurons, which can be changed.
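For illustration, a sketch of how such a file could be generated from a recorded signal (a hypothetical stand-alone helper, not part of reference_plant.c; the layout assumed is FANN's documented header line followed by alternating input/output lines):

import numpy as np

signal = np.sin(0.05 * np.arange(2000))    # hypothetical recorded plant signal
n_delays = 3                               # length of the delayed-input history

pairs = [(signal[k:k + n_delays], signal[k + n_delays])
         for k in range(len(signal) - n_delays)]
with open("reference_plant.data", "w") as f:
    f.write(f"{len(pairs)} {n_delays} 1\n")        # packets, inputs, outputs
    for inputs, output in pairs:
        f.write(" ".join(f"{x:.6f}" for x in inputs) + "\n")
        f.write(f"{output:.6f}\n")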
All of this code runs on the FANN library. If the code doesn't seem to be compiling, the library might have to be downloaded and built from source.
Thus far, I have created my own plant in Simulink (the driven, damped harmonic oscillator, as before), and obtained a training MSE of 0.0002929292 after 5 epochs (subsequently lowered to 0.000) and 0.000 training error. This, however, is because my plant is overly simple: it seems to need only 3 time-delayed plant signals, rather than 31, to specify it (since all the motion is second-order).
It should be fairly easy to use interferometer signals as input to this plant by just reading some signals and parsing them into time-delayed groups. (I tried this over the summer with my previous code, and it seemed to work, although I haven't accessed any of the channels to obtain data lately.)
In terms of LIGO stuff this week, I'm going to be finishing up (writing) my final report, but please let me know if you have any comments or concerns.
Thanks! |
7424 | Thu Sep 20 22:52:38 2012 | Den | Update | MachineLearning | Feedback controller |
Quote: |
I have uploaded to my directory a directory neural_plant. The most important file is reference_plant.c, which compiles with the command
|
We would appreciate some plots. Learning curves of a recurrent NN working as a plant are interesting. For a harmonic oscillator your RNN should not contain any hidden layers - only 1 input and 1 output node, with 2 delays at each of them. The activation function should be linear. If your code is correct, this configuration will match the oscillator perfectly. The question is how much time it takes to adapt.
Does FANN support regularization? I think this would make your controller more stable. Try to use more advanced algorithms than gradient descent for adaptation; they will increase convergence speed. For example, look at the fminunc function in Matlab. |
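As a minimal sketch of the hidden-layer-free linear network point (my own illustration, not code from either of us): a sampled damped harmonic oscillator obeys an exact linear two-delay recursion, so a linear fit with two delayed inputs matches it to machine precision:

import numpy as np

w0, zeta, T = 2 * np.pi * 1.0, 0.05, 0.01          # made-up 1 Hz oscillator, Q = 10
wd = w0 * np.sqrt(1 - zeta**2)
a1 = 2 * np.exp(-zeta * w0 * T) * np.cos(wd * T)   # exact AR(2) coefficients
a2 = -np.exp(-2 * zeta * w0 * T)

x = np.zeros(5000)                       # free decay generated by the recursion
x[0], x[1] = 1.0, a1 / 2
for n in range(2, len(x)):
    x[n] = a1 * x[n - 1] + a2 * x[n - 2]

X = np.column_stack([x[1:-1], x[:-2]])   # "train" the linear 2-delay predictor
coeffs, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
print("true:", (a1, a2), "fitted:", coeffs)   # fitted coefficients match exactly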
7661 | Fri Nov 2 13:20:35 2012 | Masha | Update | MachineLearning | Feedback controller |
Quote: |
Quote: |
I have uploaded to my directory a directory neural_plant. The most important file is reference_plant.c, which compiles with the command
|
We would appreciate some plots. Learning curves of a recurrent NN working as a plant are interesting. For a harmonic oscillator your RNN should not contain any hidden layers - only 1 input and 1 output node, with 2 delays at each of them. The activation function should be linear. If your code is correct, this configuration will match the oscillator perfectly. The question is how much time it takes to adapt.
Does FANN support regularization? I think this would make your controller more stable. Try to use more advanced algorithms than gradient descent for adaptation; they will increase convergence speed. For example, look at the fminunc function in Matlab.
|
Hi everyone,
I've been on break this week, so in addition to working at my lab here, I've done some NN stuff. In response to Den's response to my last post, I've included learning-curve plotting capabilities.
I've explored all of the currently documented capabilities of FANN (Fast Artificial Neural Network, a C library); most likely there are additions to the library floating around in open-source communities, but I have yet to look into those. There is extensive FANN documentation on the FANN website (http://leenissen.dk/fann/html/files/fann-h.html), but I'll cut it down to the basics here:
FANN Neural Network Architectures
standard: This creates a fully connected network, useful for small networks, as in the reference plant case
sparse: This creates a sparsely connected network (not all of the connections between all neurons exist at all times), useful for large networks, but not useful in the reference plant case, since the number of neurons is relatively small
shortcut: This creates some connections in the network which skip over various hidden layers. Not useful in the harmonic oscillator case since there are no hidden layers, and probably won't be useful in a better-modeled reference plant either, since it reduces the non-linear capabilities of the model.
FANN Training
TRAIN_INCREMENTAL: updates the weights after every iteration, rather than after each epoch. This is faster than the other algorithms for the reference plant.
TRAIN_BATCH: updates the weights after training on the whole set. This should not be used on batches of data for the reference plant, seeing as the time history dependence of the plant is smaller than the size of the entire data set.
TRAIN_RPROP: batch training algorithm which updates the learning parameter.
TRAIN_QUICKPROP: updates the learning parameter, and uses second derivative information, instead of just first derivative, for backpropagation.
FANN Activation Functions
FANN offers a bunch of activation functions, including FANN_ELLIOT, which is essentially the "sigmoid-like" activation function Den and I used this summer, which runs in the order of multiplication and addition. The function parameters (steepness) can also be set.
FANN Parameters
As usual, the learning parameter can be set. While over the summer we worked with lower learning parameters, in the case of the harmonic oscillator reference plant, since the error is low after the first iteration, higher learning parameters (0.9, for example) work better. However, this is a very isolated case; in general, lower parameters, though convergence is slower, produce more optimal results.
The learning momentum is another parameter that can be set - the momentum factor is a coefficient in the weight-adjustment equation which allows differences in weights beyond the previous weight to be factored in. In the case of the reference plant, a higher learning momentum (0.9) is optimal, although in most cases a lower learning momentum is better so that the learning curve doesn't oscillate terribly.
FANN does not explicitly include regularization, but early stopping can be implemented by checking the MSE at each iteration against the MSE at the n previous iterations, where n is a patience parameter, and stopping training if there is no significant decrease (also determined by a parameter); a sketch follows below. The error bound I specified during training was 0.0001.
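A sketch of that stopping rule (a hypothetical helper of mine, not a FANN function):

def should_stop(mse_history, n=5, min_decrease=1e-6):
    # stop when the newest MSE fails to beat the best of the previous n epochs
    # by at least min_decrease
    if len(mse_history) <= n:
        return False
    recent_best = min(mse_history[-n - 1:-1])
    return recent_best - mse_history[-1] < min_decrease

# usage inside a training loop:
#     mse_history.append(train_one_epoch(net, data))
#     if mse_history[-1] < 0.0001 or should_stop(mse_history):
#         break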
The best result for the reference plant was obtained using FANN_TRAIN_INCREMENTAL, a "standard" architecture, a learning rate of 0.9 (as explained above) and a learning momentum of 0.9 (these values should NOT be used for highly non-linear and more complicated systems).
I have included plots of the learning curves - each title includes the architecture, the learning algorithm, the learning parameter, and the learning momentum if I modified it explicitly.
All of my code (and more plots!) can be found in /users/masha/neural_plant
On the whole, FANN has rather limited capabilities, especially in terms of learning algorithms, where it only has 4 (plus all of the changes one can make to parameters and rates). It is, however, much more intuitive to code with, and faster, than the Matlab NN library, although the latter has more algorithms. I'll browse around for more open-source packages.
Best,
Masha
|
7267 | Fri Aug 24 00:23:20 2012 | Den | Update | Modern Control | feedback using LQG method | I did a simulation of a linear quadratic gaussian (LQG) controller applied to local damping. The cost function was frequency-shaped to have a peak at 1 Hz. This technique prevents the controller from adding sensor noise at high and very low frequencies.
The noise was simulated to have a 1/f spectrum (seismic) multiplied by a stack transfer function with a resonance at 4 Hz and Q=5.
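For a flavor of the frequency-shaping trick, here is a minimal sketch (my own, not the simulation code behind the plots; the plant and weight parameters are made up) that augments the plant with a resonant cost filter and solves the Riccati equation with scipy:

import numpy as np
from scipy.linalg import solve_continuous_are

w0 = 2 * np.pi * 1.0                               # hypothetical 1 Hz suspension mode
A = np.array([[0.0, 1.0], [-w0**2, -w0 / 10.0]])   # plant states: [position, velocity]
B = np.array([[0.0], [1.0]])

# cost-weighting filter W(s) = w0^2 / (s^2 + (w0/5) s + w0^2), peaked at 1 Hz,
# driven by the plant position
Aw = np.array([[0.0, 1.0], [-w0**2, -w0 / 5.0]])
Bw = np.array([[0.0], [1.0]])
Cw = np.array([[w0**2, 0.0]])

Aaug = np.block([[A, np.zeros((2, 2))],
                 [Bw @ np.array([[1.0, 0.0]]), Aw]])
Baug = np.vstack([B, np.zeros((2, 1))])
Cz = np.hstack([np.zeros((1, 2)), Cw])     # penalized output = band-passed position
Qcost = Cz.T @ Cz + 1e-9 * np.eye(4)       # small ridge for numerical safety
R = np.array([[1e-4]])                     # control-effort penalty (sets noise gain)

P = solve_continuous_are(Aaug, Baug, Qcost, R)
K = np.linalg.solve(R, Baug.T @ P)         # optimal feedback u = -K @ [x; x_w]
print("state-feedback gains:", K.round(2))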
|
7270 | Fri Aug 24 13:22:19 2012 | Den | Update | Modern Control | cavity simulation | I did a simulation of a cavity; the feedback signal was calculated using an LQG controller. I assumed that there is no length -> angle coupling and that the 2 mirrors that form the cavity have the same equations of motion (the Qs and eigenfrequencies are the same). The cost functional was chosen in such a way that frequencies below 15 Hz contribute much more than other frequencies.
Gains in the controller are calculated to minimize the cost functional.
This technique works well, but it requires full information about the system states. If we do not assume that the cavity mirrors have the same equations of motion, then we need to apply a Kalman filter to estimate the position of one of the mirrors. |
7412 | Wed Sep 19 17:48:47 2012 | Den | Update | Modern Control | ETMX | Time-domain control using the LQR technique is now applied to the ETMX sus position. The plan was to do it for the oplevs; I'll do that after the vent.
The cost function for the state-space variables was determined by the TF 900 / (s + 30)^2. There was no penalty imposed on velocity, only on position. We can try that configuration as well.
|
7430 | Sun Sep 23 22:40:48 2012 | Den | Update | Modern Control | MC_L locking | I've applied the LQR approach to MC_L locking. The results show that LQR does not make the MC_F signal smaller below 0.3 Hz, in contrast with classical locking. This might indicate that in this frequency range we see sensing noise: LQR was provided with a state-space model of the MC only, so it tries to reduce displacement noise. It is also possible that the state-space model is not accurate enough.
|
Attachment 1: LQR_MCL.pdf
7497 | Sun Oct 7 23:39:10 2012 | Den | Update | Modern Control | state estimation | I've applied an online state estimation technique using a Kalman filter to the LQG controller. It helps to estimate states that we do not measure. I've considered MC2 local damping: we measure position and want to estimate the velocity that we need for control. We can either differentiate the signal or apply state estimation to avoid huge noise injection at high frequencies. In state estimation we need to know the noise covariance; I've assumed that the LID sensor noise is 0.1 nm, though the covariance could be estimated better.
In the time-domain figure, C1:SUS-MC2_SUSPOS_IN1 = MC2 position, C1:SUS-MC2_SUSPOS_OUT = MC2 velocity obtained by differentiation, and the 2 other channels are estimates of the position and velocity. |
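As a minimal sketch of the idea (my illustration, not the code running on MC2; the sample rate and covariances are assumptions to be tuned):

import numpy as np

dt = 1.0 / 2048.0                        # assumed sample rate
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model, states [pos, vel]
H = np.array([[1.0, 0.0]])               # we only measure position
Qp = 1e-16 * np.eye(2)                   # assumed process-noise covariance
Rm = np.array([[(0.1e-9) ** 2]])         # assumed 0.1 nm sensor noise

rng = np.random.default_rng(1)
t = np.arange(4096) * dt
meas = 1e-6 * np.sin(2 * np.pi * t) + 0.1e-9 * rng.standard_normal(t.size)  # fake noisy 1 Hz position readout

x = np.zeros((2, 1))                     # state estimate [pos; vel]
P = np.eye(2)                            # estimate covariance
est_vel = np.empty(t.size)
for k, z in enumerate(meas):
    x = F @ x                            # predict
    P = F @ P @ F.T + Qp
    S = H @ P @ H.T + Rm                 # innovation covariance
    K = P @ H.T / S                      # Kalman gain (S is 1x1)
    x = x + K * (z - (H @ x)[0, 0])      # update with the position measurement
    P = (np.eye(2) - K @ H) @ P
    est_vel[k] = x[1, 0]                 # smooth velocity estimate, no differentiation noise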
Attachment 1: est_time.png
Attachment 2: est_freq.pdf
7499 | Mon Oct 8 09:51:30 2012 | rana | Update | Modern Control | state estimation |
I guess that the estimated state has the same low pass filter, effectively, that we use to low pass the feedback signal in SUSPOS. I wonder if there is an advantage to the state estimation or not. Doesn't the algorithm also need to know about the expected seismic noise transmission from the ground to the optic? |
7503 | Mon Oct 8 12:34:52 2012 | Den | Update | Modern Control | state estimation |
Quote: |
I guess that the estimated state has the same low pass filter, effectively, that we use to low pass the feedback signal in SUSPOS. I wonder if there is an advantage to the state estimation or not. Doesn't the algorithm also need to know about the expected seismic noise transmission from the ground to the optic?
|
I think state estimation and optimal control are two different techniques that are often used together. Sometimes (as for a pendulum) we can use LQG without state estimation, as we need only position and velocity. But for more complex systems (like a quad suspension) the states of all 4 masses can be reconstructed in some optimal way using information from only one of them, if the dynamics are sufficiently well known. When the current system states are measured/estimated, we can apply control with all our filters hidden inside.
The algorithm needs to know about the expected seismic noise transmission from the ground to the optic, but it need not be very precise; I gave it a rough estimate, and there are better ways to do it. I think we'll understand whether we need state estimation or not when we move to more complex systems. Brett uses a similar approach for his modal control. It would be interesting to see whether these methods plus seismometer readings can tell if one of your sensors is noisier than the others.
|
7714 | Thu Nov 15 02:18:24 2012 | Den | Update | Modern Control | BS oplev | I've applied the LQR feedback technique to the BS oplev in pitch. I think the most inconvenient thing about using an LQR controller is the number of additional states created during cost-function shaping: it requires 1 filter bank for each state. To avoid this I wrote state estimation code so that all states are calculated inside one function.
The plots below show the cost function and the oplev feedback controller performance. |