ID | Date | Author | Type | Category | Subject
16006 | Wed Apr 7 22:48:48 2021 |
gautam | Update | IOO | Waveplate commissioning | Summary:
I spent an hour today evening checking out the remote waveplate operation. Basic remote operation was established 👍 . To run a test on the main beam (or any beam for that matter), we need to lay out some long cabling, and install the controller in a rack. I will work with Jordan in the coming days to do these things. Apart from the hardware, some EPICS channels will need to be added to the c1ioo.db file and a python script will need to be set up as a service to allow remote operation.
Part numbers:
- The controller is a NewFocus ESP300.
- The waveplate stage is a PR50CC. The waveplate itself that is mounted has a 1" diameter (clear aperture is more like 21mm), which I think is ~twice the size of the waveplates we have in the lab - good thing Livingston shipped us the waveplate itself too. It is labelled QWPO-1064-10-2, so it should be a half wave plate as we want, but I didn't explicitly check with a linearly polarized beam today. Before any serious high power tests, we can first clean the waveplate with First Contact to avoid burning any dirt onto it. The damage threshold is rated as 1 MW/cm^2, and I estimate that we will be well below this threshold for any power levels (<30W) we are planning to put through this waveplate. For a 100um radius beam with 30W, the peak intensity is ~0.2 MW/cm^2 (see the quick check after this list). This is 20% of the rated damage threshold, so it may be better to enforce that the beam be >200um going through this waveplate.
- The dimensions of the mount look compatible with the space we have on the PSL table (though of course once the amplifier comes into the picture, we will have to change the layout). Maybe it's better to keep everything downstream of the PMC fixed - then we just re-position the seed beam (i.e. NPRO) and amplifier, and then mode-match the output of the amplifier to the PMC.
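As a quick cross-check of the intensity numbers quoted above (a sketch only - the 30W / 100um values are the ones from the text, not measurements):

# Peak intensity of a Gaussian beam, I_peak = 2P/(pi*w^2)
import math

def peak_intensity_MW_per_cm2(power_W, waist_radius_m):
    w_cm = waist_radius_m * 100.0            # waist radius in cm
    return 2.0 * power_W / (math.pi * w_cm**2) / 1e6

print(peak_intensity_MW_per_cm2(30, 100e-6))   # ~0.19 MW/cm^2, i.e. ~20% of the 1 MW/cm^2 rating
print(peak_intensity_MW_per_cm2(30, 200e-6))   # ~0.05 MW/cm^2 if we enforce a >200um beam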
Electrical tests:
- First, I connected a power cord to the ESP300 and powered it on - the front display lit up and displayed a bunch of diagnostics, and said something to the effect of "No stage connected".
- Next, I connected the rotary mount to "Axis #1": Male DB25 on the stage to female DB25 on the rear of the ESP300. The stage was recognized.
- Used the buttons on the front panel to rotate the waveplate, and confirmed visually that rotation was happening 👍 . I didn't calibrate the actual degrees of rotation against the readback on the front panel, but 45 degrees on the panel looked like 45 degrees rotation of the physical stage so seems fine.
RS232 tests:
- This unit has only a 9-pin D-sub connector for interfacing to it remotely, via the RS232 protocol. The c1psl Supermicro host was designated as the computer from which I would attempt remote control.
- To test, I decided to use a serial-USB adapter. Since this is only a single unit, no need to get an RS232-ethernet interface like the one used in the vacuum rack, but if there are strong opinions otherwise we can adopt some other wiring/control philosophy.
- No drivers needed to be installed, the host recognized the adapter immediately. I then shifted the waveplate and controller assembly to inside the VEA - they are sitting on a cart behind 1X2. Once the controller was connected to the USB-serial adapter cable, it was registered at /dev/ttyUSB0 immediately. I had to chown this port to the controls user for accessing it using python serial.
- Initially, I was pleasantly surprised when I found not one but TWO projects on PyPi that already claimed to do what I want! Sadly, neither newportESP 1.1 nor PyMeasure 0.9.0 actually worked - the former is for python2 (and the string encoding handling changed in the python3-compatible pySerial), while the latter seems to be optimized for Labview interfacing and didn't play so nice with the serial-USB adapter. I didn't want to spend >10mins on this, and I know enough python serial to do the interfacing myself, so I pushed ahead. Good thing we have several pySerial experts in the group now, if any of you want to figure out how we can make either of these two utilities actually work for us - there is also this repo which claims to work for python 3, but I didn't try it because it isn't a managed package.
- The command list is rather intimidating - it runs to some 100 (!) pages. Nevertheless, I used some basic commands to read back the serial number of the controller, and also succeeded in moving the stage around by issuing the "PR" command appropriately 👍. BTW, I forgot to test the motor enable/disable, which is an essential channel I think.
- I think we actually only need a very minimal set of commands, so we don't need to read all 100 pages of instructions (a rough pyserial sketch of what I have in mind follows this list):
- motor enable/disable
- absolute and relative rotations
- readback of the current position
- readback of the moving status
- a stop command
- an interlock
- Note that as a part of this work, in addition to chowning /dev/ttyUSB0, I installed the two aforementioned python packages on c1psl. I saw no reason to manually restart the modbus and latch services running on it, and I don't believe this work would have impacted the correct functioning of either of those two services, but be aware that I was poking around on c1psl. I was also reminded that the system python on this machine is 2.7 - basically, only the latch service that takes care of the gains for the IMC servo board is dependent on python (and my proposed waveplate control script will be too), but we should really upgrade the default python to 3.7/3.8.
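For the record, this is roughly the kind of minimal pyserial wrapper I have in mind (a sketch only - the baud rate, flow control, and the command mnemonics (MO/MF/PA/PR/TP/MD?/ST) should be double-checked against the ESP300 manual; the axis is hard-coded to 1):

import serial

class ESP300:
    """Minimal wrapper for the Newport ESP300 motion controller, axis 1 only."""
    def __init__(self, port='/dev/ttyUSB0', baud=19200, timeout=1.0):
        self.ser = serial.Serial(port, baud, timeout=timeout, rtscts=True)

    def _write(self, cmd):
        # ESP300 commands are plain ASCII, terminated with a carriage return
        self.ser.write((cmd + '\r').encode('ascii'))

    def _ask(self, cmd):
        self._write(cmd)
        return self.ser.readline().decode('ascii').strip()

    def enable(self):         self._write('1MO')              # motor on
    def disable(self):        self._write('1MF')              # motor off
    def stop(self):           self._write('1ST')              # stop motion
    def move_rel(self, deg):  self._write('1PR%.4f' % deg)    # relative move
    def move_abs(self, deg):  self._write('1PA%.4f' % deg)    # absolute move
    def position(self):       return float(self._ask('1TP'))  # angle readback
    def is_moving(self):      return self._ask('1MD?') == '0' # 0 = motion not done

if __name__ == '__main__':
    hwp = ESP300()
    hwp.enable()
    hwp.move_rel(10)          # the "PR" command used in today's test
    print(hwp.position())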
Next steps:
Satisfied that the unit works basically as expected, I decided to stop for today. My thinking was that we can have the ESP300 installed in 1X1 or 1X2 (depending on where space is more readily available). I have uploaded a cartoon here so people can comment if they like/dislike my plan.
- We need to use a long-ish cable to run from 1X1/1X2, where the controller will be housed, to the PSL enclosure. Livingston did ship one such long cable (still on Rana's table), but I didn't check if the length is sufficient / the functionality of this long cable.
- We need to set up some EPICS channels for the rotation stage angle, motor ENABLE/DISABLE, a "move stage" button, motion status, and maybe a channel to control the rotation speed?
- We need a python script that reads from / writes to these EPICS channels in a while loop (a rough sketch follows this list). It should be straightforward to set up something to run like the latch.py service that has worked decently reliably for ~a year now. afaik, there isn't a good way to run this synchronously, and the delay in sending/completing the execution of some of the serial commands might be ~1 second, but for the purpose of slowly ramping up the power, this shouldn't be a problem.
- One question I do have is, what is the strategy to protect the IFO from the high power when the lock is lost? Surely we are not gonna rely on this waveplate for any fast actuation? With the current input power of 1W, the MCREFL photodiode sees ~100mW when the IMC loses lock. So if the final input power is 35W, do we wanna change the T=10% beamsplitter in the MCREFL path to keep this ratio?
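The sketch I have in mind for that script (the channel names below are placeholders, not the ones that will actually be created, and ESP300 is the serial wrapper sketched earlier in this entry):

import time
import epics        # pyepics

stage = ESP300('/dev/ttyUSB0')   # assumes the ESP300 wrapper class from the sketch above

while True:
    # push the requested angle to the controller when the "move" button is pressed
    if epics.caget('C1:PSL-WAVEPLATE_MOVE'):                    # placeholder channel name
        target = epics.caget('C1:PSL-WAVEPLATE_ANGLE_REQ')      # placeholder channel name
        stage.move_abs(target)
        epics.caput('C1:PSL-WAVEPLATE_MOVE', 0)
    # slow readbacks - each serial transaction can take ~1 s, which is fine here
    epics.caput('C1:PSL-WAVEPLATE_ANGLE_MON', stage.position())
    epics.caput('C1:PSL-WAVEPLATE_MOVING', int(stage.is_moving()))
    time.sleep(1)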
Once everything is installed, we can run some tests to see if the rotary motion disturbs the PSL in any meaningful way. I will upload some photos to the picasa later. Photos here. |
Attachment 1: remotePowCtrl.pdf

16022 | Tue Apr 13 17:47:07 2021 |
gautam | Update | IOO | Waveplate commissioning - software prepared | I spent some time today setting up a workable user interface to control the waveplate.
- Created some EPICS database records at /cvs/cds/caltech/target/ESP300.db. These are all soft channels. This required a couple of restarts of the modbus service on c1psl - as far as I can tell, everything has come back up without problems.
- Hacked newportESP to make it work, mainly some string encoding BS in the python2-->python3 paradigm shift.
- Made a python script at /cvs/cds/caltech/target/ESP300.py that is based on similar services I've set up for the CM servo and IMC servo boards. I have not yet set this up to run as a service on c1psl, but that is pretty trivial.
- Made a minimal MEDM screen, see Attachment #1. It is saved at /opt/rtcds/caltech/c1/medm/c1psl/C1PSL_POW_CTRL.adl and can be accessed from the "PSL" tab on sitemap. We can eventually "calibrate" the angular position to power units.
- Confirmed that I can move the waveplate using this MEDM screen.
So this system is ready to be installed once Jordan and I find some time to lay out cabling + install the ESP300 controller in a rack.
At the moment, there is no high power and there is minimal risk of damaging anything, but someone should double check my logic to make sure that we aren't gonna burn the precious IFO optics. We should also probably hook up a hardware interlock to this controller.
I went through some aLIGO documentation and believe that they are using a custom-made potentiometer-based angle sensor rather than the integrated Newport (or similar) sensor+motor. My reading of the situation was that there were several problems to do with hysteresis, the "find home" routine etc. I guess for our purposes, none of these are real problems, as long as we are careful not to randomly rotate the waveplate through a full 180 degrees and go through the full fringe in the process. Need to think of a clever way to guard against careless / accidental MEDM button presses / slider drags.
Unrelated to this work: I haven't been in the lab for ~a week so I took the opportunity today to go through the various configs (POX/POY/PRMI resonant carrier etc). I didn't make a noise budget for each config but at least they can be locked 👍 . I also re-aligned the badly misaligned PMC and offloaded the somewhat large DC WFS offsets (~100 cts, which I estimate to be ~150 nNm of torque, corresponding to ~50 urad of misalignment) to the IMC suspensions' slow bias voltages. |
Attachment 1: remoteHWP.png

16023 | Tue Apr 13 19:24:45 2021 |
gautam | Update | PSL | High power operations | We (rana, yehonathan and i) briefly talked about having high power going into the IFO. I worked on some calcs a couple of years ago, that are summarized here. There is some discussion in the linked page about how much power we even need. In summary, if we can have
- T_PMC ~85% which is what I measured it to be back in 2019
- T_IMC * T_inputFaraday ~60% which is what I estimate it to be now
- 98% mode matching into the IMC
- power recycling gain of 40-45 once we improve the folding mirror situation in the recycling cavities
- and a gain of 270-280 in the arm cavities (20-30ppm round trip loss)
then we can have an overall gain of ~2400 from laser to each arm cavity (since the BS divides the power equally between the two arms). The easiest place to get some improvement is to improve T_IMC * T_inputFaraday. If we can get that up to ~90%, then we can have an overall gain of ~4000, which I think is the limit of what is possible with what we have.
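For reference, the straight product of the numbers above (no other in-chain losses, e.g. arm mode matching, are itemized here, which is presumably why this simple product comes out somewhat above the ~2400 quoted):

T_pmc, T_imc_faraday, mm_imc = 0.85, 0.60, 0.98
prg, arm_gain = 40, 270                                       # low end of the ranges above
print(T_pmc * T_imc_faraday * mm_imc * prg * arm_gain / 2)    # ~2700 per arm
print(T_pmc * 0.90 * mm_imc * prg * arm_gain / 2)             # ~4000 with the improved T_IMC * T_inputFaraday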
We also talked about the EOM. At the same time, I had also looked into the damage threshold as well as clipping losses associated with the finite aperture of our EOM, which is a NewFocus 4064 (KTP is the Pockels medium). The results are summarized in Attachments #1 and #2 respectively. Rana thinks the EOM can handle a factor of ~3 greater power than the rated damage threshold of 20W/mm^2. |
Attachment 1: intensityDist.pdf
Attachment 2: clippingLoss.pdf

16025 | Wed Apr 14 12:27:10 2021 |
gautam | Update | General | Lab left open again | Once again, I found the door to the outside in the control room open when I came in ~1215pm. I closed it. |
16028 | Wed Apr 14 14:52:42 2021 |
gautam | Update | General | IFO State | The C1:IFO-STATE variable is actually a bunch of bits (16 to be precise), and the 2-byte word they form, converted to decimal, is what is written to the EPICS channel. It was reported on the call today that the nominal value of the variable when the IMC is locked was "8", while it has become "10" today. In fact, this has nothing to do with the IMC. You can see that the "PMC locked" bit is set in Attachment #1. This is done in the AutoLock.sh PMC autolocker script, which was run a few days ago. Nominally, I just lock the PMC by moving some sliders, and I neglect to set/unset this bit.
Basically, there is no anomalous behavior. This is not to say that the situation cannot be improved. Indeed, we should get rid of the obsolete states (e.g. FSS Locked, MZ locked), and add some other states like "PRMI locked". While there is nothing wrong with setting these bits at the end of execution of some script, a better way would be to configure the EPICS record to automatically set / unset itself based on some diagnostic channels. For example, the "PMC locked" bit should be set if (i) the PMC REFL is < 0.1 AND (ii) PMC TRANS is >0.65 (the exact thresholds are up for debate). Then we are truly recording the state of the IFO and not relying on some script to write to the bit (I haven't thought through if there are some edge cases where we need an unreasonable number of diagnostic channels to determine if we are in a certain state or not). |
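To be concrete, the kind of diagnostic-driven bit setting I have in mind would look something like this (the photodiode channel names and the bit assignment here are placeholders; the thresholds are the ones quoted above):

import epics    # pyepics

PMC_LOCKED_BIT = 1     # placeholder bit position within the 16-bit word

def pmc_is_locked():
    refl  = epics.caget('C1:PSL-PMC_RFPDDC')        # placeholder channel names
    trans = epics.caget('C1:PSL-PMC_PMCTRANSPD')
    return (refl < 0.1) and (trans > 0.65)

state = int(epics.caget('C1:IFO-STATE'))
if pmc_is_locked():
    state |= (1 << PMC_LOCKED_BIT)      # set the "PMC locked" bit
else:
    state &= ~(1 << PMC_LOCKED_BIT)     # clear it
epics.caput('C1:IFO-STATE', state)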
Attachment 1: IFOSTATE.png

16032 | Wed Apr 14 19:48:18 2021 |
gautam | Update | PSL | Laser amplifier | A couple of years ago, I got some info about the amplifier setup at the sites from Terra - sharing here in case there is some useful info in there (our setup will be rather different, but it looked to me like our Amp is a 2017 vintage and it may be that the performance is not the same as reported in the 2019 paper).
collection of docs (table layout in 'Proposed....setup') : https://dcc.ligo.org/LIGO-T1700046
LVC 70W presentation: https://dcc.ligo.org/LIGO-G1800538
I guess we should double check that the beam size everywhere (in vacuum and on the PSL table) is such that we don't exceed any damage thresholds for the mirrors used. |
16033 | Wed Apr 14 23:55:34 2021 |
gautam | Update | Electronics | HV Coil driver assembly | I've occupied the southernmost electronics bench for assembling the 4 production-version HV coil driver chassis. I estimate it will take me 3 days, and have left a sign indicating as much. Once the chassis assembly is done, I will need to occupy the northernmost bench, where the bench supplies are, to run some functionality tests / noise measurements, and so unless there are objections, I will move the Acromag box which has been sitting there. |
16036 | Thu Apr 15 15:54:46 2021 |
gautam | Update | IOO | Waveplate commissioning - hardware installed | [jordan, gautam]
We did the following this afternoon.
- Disconnected the cable from the unused (and possibly not working) RefCav heater power supply, and removed said PS from 1X1. There was insufficient space to install the ESP300 controller elsewhere. I have stored the power supply along the east arm under the beamtube, approximately directly opposite the RFPD cabinet.
- Installed the ESP 300 - conveniently, the HP DCPS was already sitting on some rails and so we didn't need to add any.
- Ran a long D25-D25 cable from the ESP300 to the NE corner area of the PSL enclosure. The ends of the cable are labelled as "ESP end" and "Waveplate end". The HEPA was turned on for the duration we had the enclosure open, and I have now turned it off.
- Connected the waveplate to this cable. Also re-connected the ESP300 to the c1psl supermicro host via the USB-RS232 adapter cable.
The IMC stayed locked throughout our work, and judging by the CDS overview screen, we don't seem to have done any lasting damage, but I will run more tests. Note that the waveplate isn't yet installed in the beam path - I may do this later today evening depending on lab activity, but for now, it is just sitting on the lower shelf inside the PSL enclosure. I will post some photos later.
Quote: |
So this system is ready to be installed once Jordan and I find some time to lay out cabling + install the ESP300 controller in a rack.
|
Update: The waveplate was installed. I gave it a couple of rounds of cleaning by first contact, and visually, it looked good to me. More photos uploaded. I also made some minor improvements to the MEDM screen, and set up the communication script with the ESP300 to run as a systemd service on c1psl. Let's see how stable things are... I think the philosophy at the sites is to calibrate the waveplate rotation angle in terms of power units, but I'm not sure how the unit we have performs in terms of backlash error. We can do a trial by requesting ~100 "random" angles, monitoring the power in s- and p-polarizations, and then quantifying the error between requested and realized angles (a rough sketch of such a trial follows below), but I haven't done this yet. I also haven't added these channels to the set recorded to frames / to the burt snapshot - do we want to record these channels long term? |
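A rough sketch of that backlash/calibration trial (the power-monitor channel name is a placeholder, and ESP300 is the serial wrapper sketched earlier in this thread):

import numpy as np
import epics

stage = ESP300('/dev/ttyUSB0')
requested = np.random.uniform(0, 45, 100)       # ~100 "random" angles, staying well inside one fringe
records = []
for theta in requested:
    stage.move_abs(theta)
    while stage.is_moving():
        pass
    realized = stage.position()
    p_trans = epics.caget('C1:PSL-POWER_MON')   # placeholder channel, power in one polarization
    records.append((theta, realized, p_trans))

# For a HWP followed by a polarizer we expect P ~ P0*sin^2(2*(theta - theta0)) + offset;
# the fit residuals and (requested - realized) then quantify the backlash error.
np.savetxt('hwp_backlash_trial.txt', np.array(records))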
16073 | Thu Apr 22 14:22:39 2021 |
gautam | Update | SUS | Settings restored | The MC / WFS stability seemed off to me. Trending some channels at random, I saw that the MC3 PIT/YAW gains were restored mixed up (PIT was restored to YAW and vice versa) in the last day sometime - I wasn't sure what other settings are off so I did a global burtrestore from the last time I had the interferometer locked since those were settings that at least allow locking (I am not claiming they are optimal).
How are these settings being restored after the suspension optimization? If the burtrestore is randomly mixing up channels, seems like something we should be worried about and look into. I guess it'd also be helpful to make sure we are recording snapshots of all the channels we are changing - I'm not sure if the .req file gets updated automatically / if it really records every EPICS record. It'd be painful to lose some setting because it isn't recorded.
Unconnected to this work - the lights in the BS/PRM chamber were ON, so I turned them OFF. Also unconnected to this work, the summary pages job that updates the "live" plots every half hour seems to be dead again. There is a separate job whose real purpose is to wait for the data from EOD to be transferred to LDAS before filling in the last couple of hours of timeseries data, but it seems to me like that is what is covering the entire day now. |
Attachment 1: MCdamping.png

16075 | Thu Apr 22 14:49:08 2021 |
gautam | Update | Computer Scripts / Programs | rossa added to RTS authorized keys | This is to facilitate running of scripts like the CDS reboot script, mx_stream restart, etc, from rossa, without being pwd prompted every time, whereas previously it was only possible from pianosa. I added the public key of rossa to FB and the RT FE servers. I suppose I could add it to the Acromag servers too, but I haven't yet. |
16076 | Thu Apr 22 15:15:26 2021 |
gautam | Update | PSL | PMC transmission | I was a bit surprised by these numbers suggesting the PMC transmission is only 50-60%. I went to the table today and confirmed that it is more like 85% (1.3 W in, 1.1 W transmitted, both numbers measured with the FieldMate power meter), as I claimed in 2019. Even being conservative with the power meter errors, I think we can be confident T_PMC will be >80% (modulo any thermal effects with higher power degrading the MM). There isn't any reliable record of what the specs of the PMC mirrors are, but assuming the IO couplers have T=4000ppm and the end mirror has T=500ppm as per Alan's plot, this is consistent with something like 300ppm of loss per mirror - seems very high given the small beam spots, but maybe these mirrors just aren't as high quality as the test masses?
It's kind of unfortunate that we will lose ~20% of the amplifier output through the first filter, but I don't see an easy way to clean these mirrors. It's also not clear to me if there is anything to be gained by attempting a cleaning - isn't the inside of the cavity supposed to be completely isolated from the outside? Maybe some epoxy vaporization events degraded the loss?
Quote: |
The transmitted power was ~50-60 mW. (Had to use power meter suspended by hand only.
|
|
16079 | Thu Apr 22 17:04:17 2021 |
gautam | Update | SUS | Settings restored | Indeed, you can make your own snapshot by specifying the channels to snap in a .req file. But what I meant was, we should confirm that all the channels that we modify are already in the existing snapshot files in the autoburt dir. If it isn't, we should consider adding it. I think the whole burt system needs some cleaning up - a single day of burt snapshots occupies ~400MB (!) of disk space, but I think we're recording a ton of channels which don't exist anymore. One day...
Quote: |
Your message suggests that we can set burt to start noticing channel changes at home point and create a .req file that can be used to restore later. We'll try to learn how to do that. Right now, we only know how to burt restore using the existing snapshots from the autoburt directory, but they touch more things than we work on, I think. Or can we just always burt restore it to morning time? If yes, what snapshot files should we use?
|
|
16082 | Fri Apr 23 18:00:02 2021 |
gautam | Update | PSL | HEPA speed lowered | I will upload some plots later - but in summary, I set the HEPA speed to ~40%. I used (i)IMC transmission RIN, (ii) Arm cavity transmission RIN and (iii) ALS beat noise as 3 diagnostics, to see how noise in various frequency bands for these signals change as a function of the HEPA speed. The MC2T RIN shows elevated noise between 1-10Hz at even the lowest speed I tried, ~20% of the max on each blower. The elevated noise extended to ~50-70 Hz for HEPA speeds >40% of the maximum, and the arm cavity RIN and ALS signals also start to become noisy for speeds >60% of the maximum. So I think 40% is a fine speed to run at - for squeezing measurement we may have to turn off the HEPA for 10mins but for the usual single arm / PRMI / DRMI locking, this should be just fine. For the elevated ALS noise - I'm not sure if the coupling is happening over the top of the enclosure where the fiber bringing light from EX comes close to the HEPA filters, or if it is happening inside the PSL enclosure itself, near the beat mouth - but anyways, at the 40% speed, I don't see any effect on the ALS noise.
I checked with a particle counter at the SW corner of the PSL table (which is the furthest away we can be on the table from the HEPA blowers) after leaving the blowers on for ~30mins and it registered 0 for both 0.3um and 0.5um sized particles (if the blowers are off, the respective numbers are 43 and 9 but I forgot what the units were, and I believe they have to be multiplied by 10).
I have not yet marked the speed control units in case there is some other HEPA science that needs to be done before deciding what is the correct setting. But I think I can get the PRFPMI lock without much issue with this lower speed, which is what I will try later today evening. |
Attachment 1: HEPAdiag.pdf

16097 | Thu Apr 29 15:11:33 2021 |
gautam | Update | CDS | RFM | The problem here was that the RFM errors cropped up again - seems like it started ~4am today morning judging by TRX trends. Of course without the triggering signal the arm cavity couldn't lock. I rebooted everything (since just restarting the rfm senders/receivers did not do the trick), now arm locking works fine again. It's a bit disappointing that the Rogue Master setting did not eliminate this problem completely, but oh well...
It's kind of cool that in this trend view of the TRX signal, you can see the drift of the ETMX suspension. The days are getting hot again and the temp at EX can fluctuate by >12C between day and night (so the "air-conditioning" doesn't condition that much I guess 😂 ), and I think that's what drives the drift (idk what the transfer function to the inside of the vacuum chamber is but such a large swing isn't great in any case). Not plotted here, but I hypothesize TRY levels will be more constant over the day (modulo TT drift which affects both arms).
The IMC suspension team should double check their filters are on again. I am not familiar with the settings and I don't think they've been added to the SDF. |
Attachment 1: RFM_errs.png
Attachment 2: Screenshot_2021-04-29_15-12-56.png

16104 | Fri Apr 30 00:18:40 2021 |
gautam | Summary | LSC | Start of measuring IMC WFS noise contribution in arm cavity length noise | This is the actuator calibration. For the error point calibration, you have to look at the filter in the calibration model. I think it's something like 8e-13m/ct for POX and similar for POY.
Quote: |
I calibrated the control arms signals by 2.44 nm/cts calibration factor directly picked up from 13984.
|
|
16105 | Fri Apr 30 00:20:30 2021 |
gautam | Update | CDS | F2A Filters double check | The SDF system is supposed to help with restoring the correct settings, complementary to burt. My personal opinion is that there is no need to commit these filters to SDF until we're convinced that they help with the locking / noise performance.
Quote: |
I double checked today and the F2A filters in the output matrices of MC1, MC2 and MC3 in the POS column are ON. I do not get what SDF means? Did we need to add these filters elsewhere
|
|
16142 | Sat May 15 12:39:54 2021 |
gautam | Update | PSL | NPRO tripped/switched off | The NPRO has been off since ~1AM this morning it looks like. Is this intentional? Can I turn it back on (or at least try to)? The interlock signal we are recording doesn't report getting tripped but I think this has been the case in the past too.
After getting the go ahead from Koji, I turned the NPRO back on, following the usual procedure of diode current ramping. PMC and IMC locked. Let's see if this was a one-off or something chronic. |
Attachment 1: NPRO.png

16143 | Sat May 15 14:54:24 2021 |
gautam | Update | SUS | IMC settings reverted | I want to work on the IFO this weekend, so I reverted the IMC suspension settings just now to what I know work (until the new settings are shown quantitatively to be superior). There isn't any instruction here on how to upload the new settings, so after my work, I will just restore from a burt-snapshot from before I changed settings.
In the process, I found something odd in the MC2 coil output filter banks. Attachment #1 shows what it is today. This weird undetermined state of FM9 isn't great - I guess this flew under the radar because there isn't really any POS actuation on MC2. Where did the gain1 filter I installed go? Some foton filter file corruption? Eventually, we should migrate FM7,FM8-->FM9,FM10 but this isn't on my scope of things to do for today, so I am just putting the gain1 filter back so as to have a clean FM9 switched on.
Quote: |
The old setting can be restored by running python3 /users/anchal/20210505_IMC_Tuned_SUS_with_Gains/restoreOldConfigIMC.py from allegra or donatella.
|
|
I wrote the values from the c1mcs burt snapshot from ~1400 Saturday May 15, at ~1600 Sunday May 16. I believe this undoes all my changes to the IMC suspension settings. |
Attachment 1: MC2coilOut.png

16162 | Wed May 26 02:00:44 2021 |
gautam | Update | Electronics | Coil driver noise | I was preparing a short write-up / test procedure for the custom HV coil driver, when I thought of something I can't resolve. I'm probably missing some really basic physics here - but why do we not account for the shot noise from DC current flowing through the series resistor? For a 4kohm resistor, the Johnson current noise is ~2pA/rtHz. This is the target we were trying to beat with our custom designed HV bias circuit. But if there is a 1 mA DC current flowing through this resistor, the shot noise of this current is 18pA/rtHz, which is ~9 times larger than the Johnson noise of the same resistor. One could question the applicability of this formula to calculate the shot noise of a DC current through a wire-wound resistor - e.g. maybe the electron transport is not really "ballistic", and so the assumption that the electrons transported through it are independent and non-interacting isn't valid. There are some modified formulae for the shot noise through a metal resistor, which evaluates to 10pA/rtHz for the same 4kohm resistor, which is still ~5x the Johnson noise.
In the case of the HV coil driver circuit, the passive filtering stage I added at the output to filter out the excess PA95 noise unwittingly helps us - the pole at ~0.7 Hz filters the shot noise (but not the Johnson noise) such that at ~10 Hz, the Johnson noise does indeed dominate the total contribution. So, for this circuit, I think we don't have to worry about some un-budgeted noise. However, I am concerned about the fast actuation path - we were all along assuming that this path would be dominated by the Johnson noise of the 4kohm series resistor. But if we need even 1mA of current to null some DC DARM drift, then we'd have the shot noise contribution become comparable, or even dominant?
I looked through the iLIGO literature, where single-stage suspensions were being used, e.g. Rana's manifesto, but I cannot find any mention of shot noise due to DC current, so probably there is a simple explanation why - but it eludes me, at least for the moment. The iLIGO coil drivers did not have a passive filter at the output of the coil driver circuit (at least, not till this work), and there isn't any feedback gain for the DARM loop at >100 Hz (where we hope to measure squeezing) to significantly squash this noise.
Attachment #1 shows schematic topologies of the iLIGO and proposed 40m configs. It may be that I have completely misunderstood the iLIGO config and what I've drawn there is wrong. Since we are mainly interested in the noise from the resistor, I've assumed everything upstream of the final op-amp is noiseless (equivalently, we assume we can sufficiently pre-filter these noises).
Attachment #2 shows the relative magnitudes of shot noise due to a DC current, and thermal noise of the series resistor, as a function of frequency, for a few representative currents, for the slow bias path assuming a 0.7Hz corner from the 4kohm/3uF RC filter at the output of the PA95.
Some lit review suggests that it's actually pretty hard to measure shot noise in a resistor - so I'm guessing that's what it is: the mean free path of electrons is short compared to the length of the resistor, such that the assumption that electrons arrive independently and randomly isn't valid. So Ohm's law dictates the conduction, and that's what sets the current noise. See, for example, pg 432 of Horowitz and Hill. |
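For completeness, the numbers behind the comparison above (values as quoted in the text):

import numpy as np

q, kB, T = 1.602e-19, 1.381e-23, 300.0
R, I_dc = 4e3, 1e-3                         # series resistor [ohm], DC current [A]

i_johnson = np.sqrt(4 * kB * T / R)         # ~2.0 pA/rtHz
i_shot    = np.sqrt(2 * q * I_dc)           # ~17.9 pA/rtHz, ~9x the Johnson noise
print(i_johnson, i_shot)

# In the slow bias path, the ~0.7 Hz pole at the PA95 output rolls off the shot noise
# (sourced upstream of the filter) but not the series resistor's own Johnson noise,
# so by ~10 Hz the Johnson noise dominates again:
f, f_c = 10.0, 0.7
print(i_shot / np.sqrt(1 + (f / f_c)**2))   # ~1.2 pA/rtHz at 10 Hz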
Attachment 1: coilDriverTopologies.pdf
Attachment 2: shotVthermal.pdf

16245 | Wed Jul 14 16:19:44 2021 |
gautam | Update | General | Brrr | Since the repair work, the temperature is significantly cooler. Surprisingly, even at the vertex (to be more specific, inside the PSL enclosure, which for the time being is the only place where we have a logged temperature sensor, but this is not attributable to any change in the HEPA speed), the temperature is a good 3 deg C cooler than it was before the HVAC work (even though Koji's wind vane suggests the vents at the vertex were working). Was the setpoint for the entire lab modified? What should the setpoint even be?
Quote: |
- I went to the south arm. There are two big vent ducts for the outlets and intakes. Both are not flowing the air.
The current temp at 7pm was ~30degC. Max and min were 31degC and 18degC.
- Then I went to the vertex and the east arm. The outlets and intakes are flowing.
|
|
Attachment 1: rmTemp.pdf

16247 | Wed Jul 14 20:42:04 2021 |
gautam | Update | LSC | Locking | [paco, gautam]
we decided to give the PRFPMI lock a go early-ish. Summary of findings today eve:
- Arms under ALS control display normal noise and loop UGFs.
- PRMI took longer than usual to lock (when arms are held off resonance) - could be elevated seismic, but warrants measuring PRMI loop TFs to rule out any funkiness. MICH loop also displayed some saturation on acquisition, but after the boosts and other filters were turned on, the lock seemed robust and the in-loop noise was at the usual levels.
- We are gonna do the high bandwidth single arm locking experiments during daytime to rule out any issues with the CM board.
The ALS--> IR CARM handoff is the problematic step. In the past, getting over this hump has just required some systematic loop TF measurements / gain slider readjustments. We will do this in the next few days. I don't think the ALS noise is any higher than it used to be, and I could do the direct handoff as recently as March, so probably something minor has changed. |
16249 | Fri Jul 16 16:26:50 2021 |
gautam | Update | Computers | Docker installed on nodus | I wanted to try hosting some docker images on a "private" server, so I installed Docker on nodus following the instructions here. The install seems to have succeeded, and as far as I can tell, none of the functionality of nodus has been disturbed (I can ssh in, access shared drive, elog seems to work fine etc). But if you find a problem, maybe this action is responsible. Note that nodus is running Scientific Linux 7.3 (Nitrogen). |
12384 | Tue Aug 9 00:44:43 2016 |
gautam | Update | SUS | ETMY patch-up | Summary:
Given that ETMX looks to be in good shape and the optic and suspension tower are ready for vacuum and air bakes respectively, I set about re-gluing the knocked off magnet of ETMY. In my previous elog, I had identified the knocked off magnet as the UL magnet. But in fact, it was the LR magnet that broke off. This is actually one of the magnets that was knocked off when Johannes was removing the optic from the vacuum chamber. I have edited the old elog accordingly.
Step 1: Removing epoxy residue
- I used the teflon+glass rig Steve put together for this purpose
- After soaking for ~2 hours in acetone, I was able to remove approximately half of the ring residue by lightly pushing with a wipe.
- The other half wouldn't budge so I let it soak for another 4 hours
- After 6 hours of soaking, I was able to get all of the epoxy residue off - it doesn't simply dissolve in the acetone, I had to push a little with one of the cotton-tipped paddles in the cleanroom
- I gave the portion exposed to acetone a quick drag wipe with isopropanol. I didn't spend too much time trying to clean the AR side given that we will be using first contact anyways.
- I have not touched the HR side for now, even though a small portion of it was exposed to acetone. While cleaning the HR face with first contact, this portion can be inspected and cleaned if necessary
Step 2: Putting the optic in the magnet gluing jig
- I transferred the optic to the magnet gluing jig
- Given that we weren't touching any side magnets, I reasoned I did not have to go through the elaborate shimming routine to account for the wedge of the optic that we had to do in the recent past
- However, I did not think to put a thicker teflon spacer on the lower side of the wedge, and as a result, I knocked off the UR magnet as well, as the jig did not have sufficient clearance
- Fortunately, the UR magnet came off cleanly, there was hardly any epoxy residue left on the optic. The UR magnet was NOT one of the magnets knocked off by Johannes while removing the optic from the vacuum chamber
- I gave the area formerly occupied by the UR magnet 3-4 wipes with acetone and then 1-2 wipes with isopropanol
- At this stage, I proceeded to re-insert the magnet-gluing jig. I used the two scribe lines on the outer side of the jig to fix the rotation of the jig, and used the remaining two attached face magnets to fix the overall position of the jig (by centering these magnets relative to the apertures on the jig). In order to center well, I had to unscrew the stuck silver plated screw on the jig by 1 turn
- Having arranged the jig satisfactorily, I proceeded to remove epoxy residue off the dumbbell of the recently knocked off UR magnet using first a razor blade, then sandpaper, and finally made some new grooves with a razor blade. I then cleaned the surface of the dumbbell to be in contact with the optic with isopropanol. All of this was done for the LR magnet two weeks ago right after it was knocked off
Step 3: Gluing the magnets
- I prepared the magnets in the pickle pickers
- I discarded 1 full squeeze of the epoxy after it reached the tip of the mixing fixture, and then extracted another full squeeze of the gun for mixing and gluing the magnets
- I mixed the epoxy in an Al foil vessel for 3-4 minutes, and then placed a few drops on a piece of Al foil for a test bake at 200F for ~15 minutes
- The test bake went well, so I proceeded to apply glue to the dumbbells and re-glue the magnets to the optic
- The gluing was done around midnight, so we should be able to have a look at this post lunch tomorrow.
Provided the gluing goes well, the plan for tomorrow is:
- Bring ETMY suspension tower from the vacuum chamber to the cleanroom along with its OSEMs
- Suspend ETMY with a new length of wire (this should be much more straightforward than our ETMX exploits as both standoffs are already glued)
- Insert OSEMs, check that all 4 face magnets are well centered w.r.t. their coils and also that at least one side magnet is well aligned relative to its coil and can be used
- If step 3 goes well, then ETMY is also ready for a vacuum bake. I guess we can also air bake the ETMY suspension tower, there's plenty of room in the oven
|
12386 | Tue Aug 9 15:27:57 2016 |
gautam | Update | SUS | ETMY patch-up | The pickle pickers came off nicely and both magnets seem to be glued on okay. The alignment of the face magnets look pretty good, but we will only really know once we suspend the mirror, check the pitch balance, and put in the OSEM coils.
I brought the ETMY suspension tower + OSEM coils out of the vacuum chamber into the cleanroom. Given that the old wire had a pretty sharp kink in it, I removed it with the intention of suspending the optic with a new length of wire. I noticed a few potential problems:
Attachment #1 - ETMY tower is different from ETMX tower:
- The ETMY suspension seems to be of an older generation - it does not have the two secondary wire clamps.
- The top piece was attached to the body of the tower using non-silver-plated screws. Steve tells me this is the wrong type, and we can switch these out when we put it back together.
- The wire clamp itself doesn't have much of a groove from the wire. But the wires have made asymmetric grooves in the tower itself (the left groove is deeper than the right as seen in Attachment #1), which are clearly visible. Should we get these grooves removed before attempting re-suspension? How do we want to remove them? Steve thinks the best option is to send it to the shop for milling, as there is hardly any room to rub sandpaper along the piece because of the pins, and these pins don't come out.
- Or do we just not care about these grooves for now, if we are planning to use new wire anyways after air-baking the towers?
- Steve thinks we should have a few spares of these top blocks handy (the latest version, with the secondary clamps), he wants to know if we should place an order for these (we already have 10 spare wire clamp pieces available for if/when we need them)
Attachment #2 - the base of the tower is significantly rusty:
- A few wipes with an acetone soaked rag yielded quite a lot of rust
- Steve thinks this is because the wrong type of stainless steel was used
- Does this have to do with the cage being of an older variety? After a few vigorous wipes, no more rust came off, but the rusting process will presumably keep generating new rust? Is this a concern? Do we want to change this piece before putting the tower back in?
I am holding off on attempting to re-suspend the optic for now, until we decide if the old wire grooves need to be removed or not. If we are okay with re-using the same piece as is, or if we are okay with using sandpaper and not the machine shop to remove the grooves, I will resume the re-suspension process.
Eric suggested another alternative, which is to use the old ETMX tower. I don't recall it being rusted, but this has to be checked again. The other problem of the wire-grooves would possibly still be an issue.
Regarding the vacuum bake of the ETMs, Bob tells us that the best case scenario we are looking at is September.
|
Attachment 1: IMG_2996.JPG
Attachment 2: IMG_2997.JPG

12390 | Wed Aug 10 03:08:03 2016 |
gautam | Update | SUS | ETMY patch-up | [lydia, gautam]
Rana felt it was alright to use the wire clamp and suspension cage in its existing condition for checking the ETMY magnet-OSEM coil alignment. So we set about trying to re-suspend ETMY. The summary of our attempts:
- Transferred optic from magnet gluing rig to the suspension cage
- Adjusted bottom EQ stops till the scribe lines on both sides were at 5.5" as verified with the microscope
- Looped cleaned length of wire around optic, attached free ends to winches, placed the wires under light tension by finger-pulling the slack out
- Lowered the bottom EQ stops
- Winched the optic to the right height
- Clamped the wire with the only wire clamp on this variant of the suspension cage. We used the same torque wrench at the same torque setting as was successful for ETMX. But after removing the winches, and releasing the face EQ stops, the optic seems to have sagged a lot - it now touches all the bottom EQ stops, and the more I lower it, the more it seems to come down. Perhaps it is the effect of the wire grooves in the cage, or that the wire-clamp itself is slightly different from the piece used on the ETMX cage, but 1.3Nm of torque doesn't seem to have tightened the wire clamp sufficiently
- We can still probably salvage the situation by re-attaching the winches to the top of the cage, setting the optic to the right height again, and clamping the wire clamp with more torque (as this is just a check to see that the reglued magnet configuration is compatible with the OSEM coil positions on the cage). Before air baking the cage, we will have the old wire grooves removed, and then suspend the optic with a fresh loop of wire after the bake
- We could not check the magnet-OSEM alignment because of the slipping of the wire through the clamp. We decided against pushing on tonight
- Optic is currently in the cage, resting on the bottom EQ stops and with all face EQ stops within 1mm of the optic. The OSEM coils have not been inserted into the holders
Regarding the vacuum bake of the optics: why do we want to do this again? Koji mentioned that the EP30-2 curing process does not require a bake, and there is also no mention of requiring a vacuum bake in the EP30-2 gluing guide. Is there any other reason for us to vacuum bake the optic? |
12397 | Wed Aug 10 23:45:03 2016 |
gautam | Update | SUS | ETMY re-suspended | Summary:
- ETMY has been re-suspended
- Reglued magnets (and also those that weren't knocked off) line up quite well with the OSEM coils (see attachments)
- Pitch balance is off by ~2.8mrad (8mm over 1.5m lever arm) after inserting and centering OSEMs
- The same damping scheme used during the ETMX re-suspension process works reasonably well with ETMY as well
Details:
- I suspected that I had not tightened the wire clamp enough yesterday, and that the wire had slipped once the winches were removed
- Steve and I looked into the torque wrench situation today, and I realised that I had not been using the torque wrench correctly. What I thought were clicks indicating that the set torque has been reached was in fact just the sound the piece makes when going the opposite way relative to the direction set by the clip on the torque wrench. Anyways, the point is that while I thought I was tightening the screws with ~1.3Nm of torque, what was actually being applied was much less (although I don't have a good way to quantify how much less)
- So today I put the winches back on top of the tower, and winched the optic back up to the correct height using the usual scribe line + microscope prescription
- I then tightened the wire clamp by hand. This is obviously not very repeatable, but it will have to do until we get a torque wrench with the correct range
- This seems to have done the trick - I did the tightening shortly after lunch, and after ~10 hours, there is no evidence of any wire sag
- I then proceeded to insert the OSEMs, first not all the way in to check the clearance available to the magnet, and once I was satisfied there was no danger of knocking anything off, went ahead and inserted the coils till the PD readouts were approximately half of the maximum (i.e. fully un-occluded) values. I used the OSEM coils originally on the ETMY tower, but all the other readout and drive electronics in the signal chain (satellite box included) belong to the ETMX setup (so as to avoid any cable routing over 80m from the Y end to the cleanroom). After some adjustment of the OSEM holding plates, I was able to center the magnets relative to the coils
- The tower only allows for a side OSEM to be inserted on one side. The other side does not have a threaded hole for a set screw. So we are forced to use the reglued magnet and not the side magnet that was not knocked off. By eye, it looks like the magnet may never completely occlude the LED, but the Striptool trace I was using to monitor the output of the PD did not yield any conclusive evidence. The optic was moving around a lot and I did not perform this check after turning the damping on
- I was able to damp the optic as well as we were able to damp ETMX on the clean bench (with the HEPA turned OFF). I had to turn the YAW gain down from 100-->75 to avoid some oscillations
- I then proceeded to check the pitch balance with the HeNe. The spot is low on an iris 1.43m away by ~8mm, which corresponds to a pitch misalignment of ~2.8mrad. I am not sure what to make of this - but perhaps it's not unreasonable that we see this? Is there any record of what fine pitch balancing was achieved when the optic was put together back in 2010? This is also very sensitive to how far in/out the OSEM coils are, and though I've tried to center the coils as best as I can, I obviously have not done a perfect job...
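For the record, the arithmetic behind the ~2.8mrad number (the factor of 2 is because the reflected beam angle changes by twice the mirror tilt):

\theta_\mathrm{pitch} \approx \frac{\delta x}{2L} = \frac{8\,\mathrm{mm}}{2 \times 1.43\,\mathrm{m}} \approx 2.8\,\mathrm{mrad}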
What's next?
- Is the observed pitch imbalance a deal breaker? If so, I guess we need to re-glue a standoff?
- Are we willing to accept the side OSEM situation? (Tomorrow, I need to do a check to see what, if any, dynamic range we lose, with the damping on)
- If both the above are not problems we need to worry about, then:
- ETMY + ETMX -------> Vacuum bake on 22nd August (? - Bob also told me earlier today that he will try and put in some old turbo pump next week, and if that works, we could possibly get in the queue even before the 22nd)
- ETMY tower -------> Steve for sanding and removing wire grooves -------> Air bake
- ETMX tower -------> Air bake (provided the latest round of wire tightening has not left any grooves in the top piece of the tower, if it has, this needs to be cleaned up too)
- Some lengths of SOS wire (for re-suspending optics after bake) -------> Air bake
Attachments:
Attachment #1: Striptool trace showing all OSEM coils have been pushed in till the PD readout is approximately half the fully open value
Attachment #2: Pitch balance is off by ~2.8mrad (the Iris center is 5.5" above the table)
Attachment #3: UR magnet
Attachment #4: UL magnet
Attachment #5: LR magnet
Attachment #6: LR magnet
Attachment #7: SD magnet |
Attachment 1: ETMY_OSEMStrip.PDF
Attachment 2: IMG_2998.JPG
Attachment 3: IMG_3000.JPG
Attachment 4: IMG_3001.JPG
Attachment 5: IMG_3002.JPG
Attachment 6: IMG_3003.JPG
Attachment 7: IMG_3004.JPG

12758 | Wed Jan 25 19:39:07 2017 |
gautam | Update | IMC | 29.5 MHz modulation depth measurement plan | Just collecting some links here from my elog searching today, for easy reference later.
- EOM datasheet: Newfocus 4064 (according to this, the input Impedance is 10pF, and can handle up to 10W max input RF power).
- An elog thread with some past measurement details: elog 5339. According to this, the modulation depth at 29.5 MHz is 4mrad. The EOM's manual says 13mrad/V @1000nm, so we expect an input signal at 29.5MHz of 0.3V(pk?) (arithmetic after this list). But presumably there is some dependence of this coefficient on the actual modulation frequency, which I could not find in the manual. Also, Kiwamu's note (see next bullet) says that the EOM was measured to have a modulation depth of 8 mrad/V
- A 2015 update from Kiwamu on the triple resonant circuit: elog 11109. In this elog, there is also a link to quite a detailed note that Kiwamu wrote, based on his analysis of how to make this circuit better. I will go through this, perhaps we want to pursue installing a better triple resonant circuit...
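For reference, the arithmetic behind the expected drive level mentioned above:

V_\mathrm{in} \approx \frac{4\,\mathrm{mrad}}{13\,\mathrm{mrad/V}} \approx 0.3\,\mathrm{V} \qquad (\mathrm{or} \approx 0.5\,\mathrm{V}\ \mathrm{using\ Kiwamu's\ measured}\ 8\,\mathrm{mrad/V})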
I couldn't find any details of the actual measurement technique, though perhaps I just didn't look for the right keywords. But Koji's suggestion of measuring powers with the bi-directional coupler before the triple resonant circuit (but after the power combiner) should be straightforward. |
14750 | Thu Jul 11 13:09:22 2019 |
gautam | Summary | CDS | P2 interface board | it will connect to a 15 pin breakout board in the Acromag chassis
Quote: |
It's nice and compact, and the cost of new 15-pin DSUB cables shouldn't be a factor here. What does the 15p cable connect to?
|
|
11599 | Tue Sep 15 15:10:48 2015 |
gautam, ericq, rana | Summary | LSC | PRFPMI lock & various to-do's | I was observing Eric while he was attempting to lock the PRFPMI last night. The handoff from ALS to LSC was not very smooth, and Rana suggested looking at some control signals while parked close to the PRFPMI resonance, to get an idea of which frequency bands the noise is concentrated in. The attached power spectrum was taken while CARM and DARM were under ALS control, and the PRMI was locked using REFL_165. The arm power was fluctuating between 15 and 50. Most of the power seems to be in the 1-5Hz band and the 10-30Hz band.
Rana made a number of suggestions, which I'm listing here. Some of these may directly help the above situation, while the others are with regards to the general state of affairs.
- Reroute both (MC and arm) FF signals to the SUS model
- For MC, bypass LSC
- Rethink the MC FF -
- Leave the arm FF on all the time?
- The positioning of the accelerometer used for MC FF has to be bettered - it should be directly below the tank
- The IOO model is over-clocking - needs to be re-examined
- Fix up the DC F2P - Rana mentioned an old (~10 yr) script called F2P ratio, we should look to integrate the Python scripts used for lock-in/demod at the sites with this
- Look to calibrate MC_F
- Implement a high BW CARM servo using ALS
- Gray code implementation for EPICS gain-stepping
|
Attachment 1: powerSpec0915.pdf

11579 | Fri Sep 4 20:42:14 2015 |
gautam, rana | Update | CDS | Checkout of the Wenzel dividers | Some years ago I bought some dividers from Wenzel. For each arm, we have a x256 and a x64 divider. Wired in series, that means we can divide each IR beat by 2^14.
The highest frequency we can read in our digital system is ~8100 Hz. This corresponds to an RF frequency of ~132 MHz, which is about as much as the BBPD can handle, but less than the fiber PDs.
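For reference, the arithmetic behind those numbers:

256 \times 64 = 2^{14} = 16384, \qquad 8100\,\mathrm{Hz} \times 16384 \approx 132.7\,\mathrm{MHz}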
Today we checked them out:
- They run on +15V power.
- For low RF frequencies (< 40 MHz) the signal level can be as low as -25 dBm.
- For frequencies up to 130 MHz, the signal should be > 0 dBm.
- In all cases, we get a square wave going from 0 ~ 2.5 V, so the limiter inside keeps the output amplitude roughly fixed at a high level.
- When the RF amplitude goes below the minimum, the output gets shaky and eventually drops to 0 V.
Since this seems promising, we're going to make a box on Monday to package both of these. There will be one SMA input and output per channel.
Each channel will have an amplifier since this need not be a low noise channel. The ZKL-1R5 seems like a good choice to me. G=40 dB and +15 dBm output.
Then Gautam will make a frequency counter module in the RCG which can do counting with square waves and not care about the wiggles in the waveform.
I think this ought to do the trick for our Coarse frequency discriminator. Then our Delay Box ought to be able to have a few MHz range and do all of the Fast ALS Carm that we need. |
Attachment 1: TEK00000.PNG
Attachment 2: TEK00001.PNG
Attachment 3: TEK00002.PNG

11940 | Wed Jan 20 23:26:10 2016 |
gautam, rana | Update | LSC | PSL and AUX-X temperatures changed | Earlier today, we did a bunch of stuff to see if we could improve the situation with the excess ALS-X noise. Long story short, here are the parameters that were changed, and their initial and final values:
X-end laser diode temperature: 28.5 degrees ---> 31.3 degrees
X-end laser diode current: 1.900 A ---> 1.942 A
X-end laser crystal temperature: 47.43 degrees ---> 42.6 degrees
PSL crystal temperature: 33.43 degrees ---> 29.41 degrees
PSL Diode A temperature: 21.52 degrees ---> 20.75 degrees
PSL Diode B temperature: 22.04 degrees ---> 21.3 degrees
The Y-end laser temperature has not yet been adjusted - this will have to be done to find the Y-beatnote.
Unfortunately, this does not seem to have fixed the problem - I was able to find the beatnote, with amplitude on the network analyzer in the control room consistent with what we've been seeing over the last few days, but as is clear from Attachment 1, the problem persists...
Details:
- PSL shutter was closed and FSS servo input was turned off.
- As I had mentioned in this elog, the beat can now only be found at 47.41 degrees +/- 1 deg, which is a shift of almost 5 degrees from the value set sometime last year, ~42.6 degrees. Rana thought it's not a good idea to keep operating the laser at such a high crystal temperature, so we decided to lower the X-end laser temperature back to 42.6 degrees, and then adjust the PSL temperature appropriately such that we found a beat. The diode temperature was also tweaked (this requires using a small screwdriver to twist the little knob inset to the front panel of the laser controller) - for the end laser, we did not have a dedicated power monitor to optimize the diode temperature by maximizing the current, and so we were just doing this by looking at the beat note amplitude on the network analyzer (which wasn't changing by much). So after playing around a little, Rana decided to leave it at 31.3 degrees.
- We then went to the PSL table and swept through the temperature till a beat was found. The PMC wouldn't stay locked throughout the sweep, so we first did a coarse scan, and saw weak (due to the PMC being locked to some weird mode) beatnotes at some temperatures. We then went back to 29.41 degrees, and ran the PMC autolocker script to lock the PMC - a nice large beatnote was found.
- Finally, Rana tweaked the temperatures of the two diodes on the PSL laser controller - here, the optimization was done more systematically, by looking at the PMC transmitted power on the oscilloscope (and also the MEDM screen) as a function of the diode temperature.
- I took a quick look at the ALS out of loop noise - and unfortunately, our efforts today do not seem to have noticeably improved anything (although the bump at ~1kHz is no longer there).
Some details not directly related to this work:
- There are long cables (routed via cable tray) suitable for RF signals that are running from the vertex to either end-table - these are labelled. We slightly re-routed the one running to the X-end, sending it to the IOO rack via the overhead cable tray so that we could send the beat signal from the frequency counter module to the X-end, where we could look at it using an analyzer while also twiddling laser parameters.
- A webcam (that also claims to have two-way audio!) has been (re?)installed on the PSL table. The ethernet connection to the webcam currently goes to the network switch on the IOO rack (though it is unlabelled at the moment)
- The X-end area is due for a clean-up, I will try and do some of this tomorrow.
|
Attachment 1: 2016_01_20_ALS_OutOfLoop_1.pdf

2246 | Thu Nov 12 01:18:34 2009 |
haixing | Update | SUS | open-loop transfer function of mag levi system (comparison between simulink and measurement) | I built a Simulink model of the magnetic levitation system and tried to explain the dip in the open-loop transfer function that was observed.
One can download the model in the svn. The corresponding block diagram is shown by the figure below.

Here "Magnet" is equal to inverse of the magnet mass. Integrator "1/s" gives the velocity of the magnet. A further integrator gives the displacement of the magnet.
Different from the free-mass response, the response of the magnet is modified due to the existence of the Eddy-current damping and negative spring in the vertical
direction, as indicated by the feedback loops after two integrals respectively. The motion of the magnet will change the magnetic field strength which in turn will pick
up by the Hall-effect sensor. Unlike the usual case, here the Hall sensor also picks up the magnetic field created by the coil as indicated by the loop below the mechanical
part. This is actually the origin of the dip in the open-loop transfer function. In the figure below, we show the open-loop transfer function and its phase contributed by both
the mechanical motion of the magnet and the Hall sensor with the black curve "Total". The contribution from the mechanical motion alone is shown by the magenta curve
"Mech" which is obtained by disconnecting the Hall sensor loop (I rescale the total gain to fit the measurement data due to uncertainties in those gains indicated in the figure).
The contribution from the Hall sensor alone is indicated by the blue curve "Hall" which is obtained by disconnecting the mechanical motion loop. Those two contributions
have the different sign as shown by the phase plot, and they destructively interfere with each other and create the dip in the open-loop transfer function.

In the following figure, we show the closed-loop response function of the mechanical motion of the magnet.

As we can see, even though the entire closed loop of the circuit is stable, the mechanical motion is unstable around 10 Hz. This simply comes from the fact that around this frequency the Hall sensor has almost no response to the mechanical motion, due to the destructive interference mentioned above.
In the future, we will replace the Hall sensor with an optical one to get rid of this undesired destructive interference.
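To make the destructive-interference picture concrete, here is a toy numerical sketch in python (not the actual Simulink model; the mass, damping, spring constant and gains below are made-up values): the open-loop transfer function is taken as the sum of the mechanical path and a frequency-independent direct coil-to-Hall coupling of opposite sign, which produces a dip where the two magnitudes cross.

import numpy as np

f = np.logspace(-1, 3, 2000)       # frequency vector [Hz]
s = 2j * np.pi * f                 # evaluate transfer functions on the imaginary axis

m      = 0.05     # magnet mass [kg]                           (assumed)
gamma  = 0.5      # eddy-current damping [N/(m/s)]             (assumed)
k      = -10.0    # negative spring constant [N/m]             (assumed)
g_mech = 1.0      # coil -> force -> displacement -> Hall gain (assumed)
g_hall = 2.0e-4   # direct coil -> Hall pickup, opposite sign  (assumed)

H_mech = g_mech / (m * s**2 + gamma * s + k)   # "Mech": mechanical path alone
H_hall = g_hall * np.ones_like(s)              # "Hall": sensor sees the coil field directly
H_tot  = H_mech + H_hall                       # "Total": the two paths interfere

i_dip = np.argmin(np.abs(H_tot))
print(f"dip near {f[i_dip]:.0f} Hz: |Mech| = {abs(H_mech[i_dip]):.1e}, "
      f"|Total| = {abs(H_tot[i_dip]):.1e}")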
|
2274
|
Mon Nov 16 15:18:10 2009 |
haixing | Update | SUS | Stable magnetic levitation without eddy-current damping |
By including a differentiator from 10 Hz to 50 Hz, we increase the phase margin and the resulting
magnetic levitation system is stable even without the help of eddy-current damping.
The new block diagram for the system is the following:

Here the eddy-current damping component is removed and we add an additional differentiator circuit built around an OP27G operational amplifier.
In addition, we place the Hall sensor below the magnet to minimize the coupling between
the coil and the Hall sensor.
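As a sanity check on the phase margin gained, here is a minimal python sketch (a plain zero-pole lead stage is assumed; the 10 Hz and 50 Hz band is taken from this entry, not from the actual OP27G component values):

import numpy as np

f = np.logspace(0, 3, 1000)                 # Hz
s = 2j * np.pi * f
f_zero, f_pole = 10.0, 50.0                 # differentiation band from the entry

H_lead = (1 + s / (2 * np.pi * f_zero)) / (1 + s / (2 * np.pi * f_pole))
phase  = np.degrees(np.angle(H_lead))

# peak phase lead ~ arcsin((f_pole - f_zero)/(f_pole + f_zero)) ~ 42 deg near 22 Hz
print(f"max phase lead: {phase.max():.1f} deg at {f[np.argmax(phase)]:.1f} Hz")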
The resulting levitation system is shown by the figure below:

|
4086
|
Wed Dec 22 11:24:23 2010 |
haixing | Update | SUS | measurement of imbalance in quadrant maglev prototype | Yesterday, a sequence of force and gain measurements was made to determine the imbalance in the quadrant magnetic-levitation prototype. This imbalance was the reason why it failed to achieve stable levitation.
The configuration is shown schematically by the figure below:

Specifically, the following measurements have been made:
(1) DC force measurement among four pairs of magnets at fixed distance with current of the coils on and off
From this measurement, the DC force between each pair of magnets is determined; it is around 1.6 N at a separation of 1 cm. This measurement also tells us the gain from voltage to force near the working point. The force in pair "2" is about 13% stronger than in the other pairs, which are nearly identical. The force from the coil is around 0.017 N per volt (levitation of 5 g per 3 V); therefore, we need around 12 V of DC compensation on pair "2" to counterbalance this imbalance (a quick numerical check is sketched after this list). Given the coil resistance of 26 Ohm, this requires almost 500 mA of DC current. Koji suggested that we need a high-current buffer, instead of what is being used now.
(2) DC force measurement among four pairs of magnets (with current of the coils off) as a function of distance
From this measurement, we can determine the stiffness of the system. In this case, the stiffness, or effective spring constant, is negative, and we need to compensate for it using feedback control. This is one of the most important parameters for designing the feedback control. The data are still being processed.
(3) Gain measurement of the OSEM from the displacement to voltage.
This measurement is a little bit tricky because of the difficulty of determining the displacement of the flag.
After several measurements, the gain came out to approximately 2 V/cm.
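A quick numerical check (python sketch) of the numbers quoted in measurement (1), using only the values stated above:

F_dc      = 1.6      # N, DC force between one pair of magnets at ~1 cm separation
imbalance = 0.13     # pair "2" is ~13% stronger than the other pairs
gain_coil = 0.017    # N per volt (levitation of 5 g per 3 V)
R_coil    = 26.0     # ohm, coil resistance

dF   = imbalance * F_dc     # extra force to cancel   ~0.21 N
V_dc = dF / gain_coil       # required DC voltage     ~12 V
I_dc = V_dc / R_coil        # required DC current     ~0.47 A, i.e. almost 500 mA
print(f"dF = {dF:.2f} N, V_dc = {V_dc:.1f} V, I_dc = {I_dc * 1e3:.0f} mA")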
Plan for the next few days:
From those measurements, all the plant and sensor parameters needed to design the feedback control are known. They should be plugged into the Simulink model to see whether the old design is appropriate or not. On the experimental side, we will first try to levitate a configuration with 2 pairs of magnets instead of 4 pairs as a first step, which is easier to control but still interesting.
|
4906
|
Wed Jun 29 01:23:21 2011 |
haixing | Update | SUS | issues in the current quad maglev system | Here I show several issues that we have encountered in the quad magnetic levitation system. It would be great if you can give
some suggestions and comments (Poor haixing is crying for help)
The current setup is shown by the figure below (I took the photo this morning):

Basically, we have one heavy load which is rigidly connected to a plane that we try to levitate. On the corners of the plane there are four push-fit permanent magnets. Those magnets are attracted by four other magnets mounted on the four control coils (the DC force is there to counteract gravity). By sensing the position of the plane with four OSEMs (there are four flags attached to the plane), we try to apply feedback control and levitate the plane.
We have made an analog circuit to realize the feedback, but it has not been successful. The following main issues need to be solved:
(1) The DC magnetic force is imbalanced: we found that one pair has a stronger DC force than the others. This should be solvable simply by replacing them with magnets of strength comparable to the others.
(2) The OSEM senses not only the vertical motion but also the translational motion. One possible quick fix is to cover the photodiode and leave only a very thin vertical slit so that small translational motions are not sensed. Maybe this is too crappy. If you have better ideas, please let me know. Koji suggested using reflective sensing instead of the OSEM, which would also solve the issue that the flags sometimes touch the edge of the OSEM aperture and screw up the sensing.
(3) Cross coupling among different degrees of freedom. Basically, even if the OSEM only senses the vertical motion, the motions of the four flags, which are rigidly connected to the plane, are not independent. In the ideal case we only need to control pitch, yaw and vertical motion, which is only three degrees of freedom, while we have four sensing outputs from the four OSEMs. This means that we need to work out the right control matrix (a toy sketch of this matrix bookkeeping is given below). Right now, we are in some kind of dilemma: in order to obtain the control matrix, we first have to get the sensing matrix, i.e. calibrate the cross coupling; however, this is impossible if the system is unstable. This is very different from the quad suspension control used in LIGO, in which the test mass is stably suspended and it is relatively easy to measure the cross coupling by driving the test mass with the coils. Rana suggested including a mechanical spring between the fixed plane and the levitated plane, so that we have a stable system to start with. I tried this method today, but I did not figure out a nice way to place the spring, as there is a hole right in the middle of the fixed plane to let the coil connectors go through. As a first trial, I plan to replace the stop rubber band (which prevents the plane from getting stuck onto the magnets), shown in the figure, with mechanical springs. In this case, the levitated plane is held by four springs instead of one. This is not as good as one spring, because of imbalance among the four, but we can at least use this setup to calibrate the cross coupling. Let me know if you come up with a better solution.
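As a toy illustration of the matrix bookkeeping in (3), here is a python sketch assuming an idealized square plate of half-width a with the four flags at its corners (this is not the real, yet-to-be-calibrated sensing matrix): once the couplings are known, the 4-sensor to 3-DOF mapping can be handled with a pseudoinverse.

import numpy as np

a = 0.1   # half side length of the plate [m] (assumed)

# Rows: OSEMs (UL, UR, LL, LR); columns: DOFs (vertical, pitch, yaw).
# For small angles each sensor reads  z + pitch * x_i + yaw * y_i.
S = np.array([[1.0, -a, -a],
              [1.0, -a, +a],
              [1.0, +a, -a],
              [1.0, +a, +a]])

C = np.linalg.pinv(S)                      # 3x4 control (input) matrix

motion  = np.array([1e-6, 2e-6, 0.0])      # fake motion: 1 um vertical, 2 urad pitch
sensors = S @ motion                       # what the four OSEMs would report
print(np.round(C @ sensors, 9))            # recovers [1e-06, 2e-06, 0.]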
After those issues are solved, we can then implement Jamie's Cymac digital control, which is now under construction,
to achieve levitation. |
4992
|
Tue Jul 19 21:05:55 2011 |
haixing | Update | DAQ | choose the right relay | Rana and I are working on the AA/AI circuit for Cymac. We need relays to bypass certain paths in the circuit, and we just found a nice website
explaining how to choose the right relay:
http://zone.ni.com/devzone/cda/tut/p/id/2774
This piece of information could be useful for others. |
5019
|
Fri Jul 22 15:39:55 2011 |
haixing | Update | SUS | matching the magnets | Yi Xie and Haixing,
We used the Gauss meter to measure the strength distribution of the purchased magnets, which follows a nice Gaussian distribution.
We picked out four pairs -- four fixed magnets and four for the levitated plate -- that are matched in strength. The force difference is anticipated to be within 0.2%, and we are going to measure the force as a function of distance to further confirm this.
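A toy python sketch of the pair-selection step described above (the field values are made up, not our Gauss-meter data): sort the magnets by measured strength and keep the best-matched neighbouring pairs.

import numpy as np

B = np.array([102.1, 98.4, 100.2, 99.7, 101.5, 100.4,
              99.9, 97.8, 100.1, 101.9, 98.9, 100.6])   # mT, fake survey data
idx = np.argsort(B)                                      # sort by strength
pairs = [(idx[i], idx[i + 1]) for i in range(0, len(idx) - 1, 2)]
pairs.sort(key=lambda p: abs(B[p[0]] - B[p[1]]) / B[p[0]])

for i, j in pairs[:4]:                                   # four best-matched pairs
    print(f"magnets {i} & {j}: {B[i]:.1f} / {B[j]:.1f} mT, "
          f"mismatch {abs(B[i] - B[j]) / B[i]:.2%}")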
In the coming week, we will measure various transfer functions in the path from the sensors to the coils (the actuators). The obtained parameters will be put into our model to determine the control scheme. The model is currently written in Mathematica, which can analyze the stability from the open-loop transfer function. |
5022
|
Sun Jul 24 20:36:03 2011 |
haixing | Summary | Electronics | AA filter tolerance analysis | Koji and Haixing,
We did a tolerance analysis to specify the corner frequency for the passive low-pass filtering in the AA filter of Cymac. The link to the wiki page for the AA filter is as follows (one can have a look at the simple schematics):
http://blue.ligo-wa.caltech.edu:8000/40m/Electronics/BNC_Whitening_AA
Basically, we want to add the following passive low-pass filter (boxed) before connecting to the instrumentation amplifier:

Suppose (i) we have a 10% error in the capacitor value and (ii) we want the common-mode rejection error to be smaller than 0.1% at low frequencies (up to the sampling frequency of 64 kHz); what should the corner frequency, or equivalently the values of the capacitor and resistor, of the low-pass filter be?
Given the transfer function of this low-pass filter, H(f) = 1 / (1 + i*f/fc), where fc is the corner frequency set by R and C, and the error propagation equation for its magnitude, d|H|/|H| ~ (f/fc)^2 * dC/C for f << fc, we found that the corner frequency needs to be around 640 kHz in order to keep the common-mode rejection error below 0.1% up to 64 kHz with the 10% capacitor tolerance.
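A quick numerical check of this requirement (python sketch), assuming the small-(f/fc) approximation written above:

import math

dC_over_C = 0.10    # capacitor tolerance
err_max   = 1e-3    # allowed common-mode rejection error (0.1%)
f_band    = 64e3    # Hz, error must stay below err_max up to this frequency

f_c = f_band * math.sqrt(dC_over_C / err_max)   # (f_band/f_c)^2 * dC/C = err_max
print(f"required corner frequency ~ {f_c / 1e3:.0f} kHz")   # ~640 kHz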
|
5024
|
Sun Jul 24 22:19:19 2011 |
haixing | Summary | Electronics | AA filter tolerance analysis |
>> This sort of OK, except the capacitor connects across the (+) terminals of the two input opamps, and does not connect to ground:

>> Also, we don't care about the CMRR at 64 kHz. We care about it at up to 10 kHz, but not above.
In this case, the corner frequency for the low-pass filter would need to be around 100 kHz in order to satisfy the requirement.
>>And doesn't the value depend on the resistors?
Yes, it does. The error in the resistor (typically 0.1%) is much smaller than that of the capacitor (10%). Since the resistor error propagates in the same way as the capacitor error, we can ignore it.
Note that the tolerance analysis only specifies the corner frequency (= 1/RC), not R and C individually; we still need to choose appropriate values for R and C with the corner frequency fixed at around 100 kHz, and for that we need to consider the output impedances of port 1 and port 2.
|
5038
|
Tue Jul 26 21:11:40 2011 |
haixing | Summary | Electronics | AA filter tolerance analysis | 
Given this new setup, we realized that the previous tolerance analysis is incorrect: the uncertainty in the capacitance value does not affect the common-mode rejection, since the two paths share the same capacitor. Now only the imbalance of the two resistors is relevant.
The error propagation formula goes as follows:

We require that the common-mode rejection error stay below the specification at low frequencies, up to 8 kHz; with the resistor tolerance, one can easily find that the corner frequency needs to be around 24 kHz.
|
2140
|
Sun Oct 25 14:29:45 2009 |
haixing, kiwamu | Configuration | General | SR785 spectrum analyzer | This morning, we disconnected the SR785 that was in front of the 1X2 rack in order to measure a Hall sensor noise.
After a while, we put the SR785 back and re-connected it as it had been.
But the display setup might have changed a little bit.
|
6412
|
Wed Mar 14 05:26:39 2012 |
interferometer task force | Update | General | daytime tasks | The following tasks need to be done in the daytime tomorrow.
- Hook up the DC output of the Y green BBPD on the PSL table to an ADC channel (Jamie / Steve)
- Install fancy suspension matrices on PRM and ITMX [#6365] (Jenne)
- Check if the REFL165 RFPD is healthy or not (Suresh / Koji)
- According to a simulation the REFL165 demod signal should show similar amount of the signal to that of REFL33.
- But right now it is showing super tiny signals [#6403]
|
6416
|
Wed Mar 14 14:09:01 2012 |
interferometer task force | Update | General | daytime tasks |
Quote: |
The following tasks need to be done in the daytime tomorrow.
- Hook up the DC output of the Y green BBPD on the PSL table to an ADC channel (Jamie / Steve)
- Install fancy suspension matrices on PRM and ITMX [#6365] (Jenne)
- Check if the REFL165 RFPD is healthy or not (Suresh / Koji)
- According to a simulation the REFL165 demod signal should show similar amount of the signal to that of REFL33.
- But right now it is showing super tiny signals [#6403]
|
For ITMX, I used the values from the conlog:
2011/08/12,20:10:12 utc 'C1:SUS[-_]ITMX[-_]INMATRIX'
These are the latest values in the conlog that aren't the basic matrices. Even though we did a round of diagonalization in Sept, and the matrices are saved in a .mat file, it doesn't look like we used the ITMX matrix from that time.
For PRM, I used the matrices that were saved in InputMatricies_16Sept2011.mat, in the peakFit folder, since I couldn't find anything in the conlog other than the basic matrices.
UPDATE: I didn't actually count the number of oscillations until the optics were damped, so I don't have an actual number for the Q, but I feel good about the damping, after having kicked POS of both ITMX and PRM and watched the sensors. |
407
|
Mon Mar 31 14:01:40 2008 |
jamie | Summary | LSC | Summary of DC readout PD non-linearity measurements | From March 21-26, I conducted some measurements of the response non-linearity of some mock-up DC readout photodetectors. The detectors are simple:
Vbias ---
         |
         PD
         |-------- output
      resistor
         |
        ---
         -
This is a description of the final measurement.
The laser current modulation input was given a 47 Hz sine wave at 20 mV. A constant small fraction of the beam was shone onto the reference detector, and a beam whose DC power level was varied was incident on the test detector. Spectra were taken from both detectors at the same time, with 0.25 Hz bandwidth, over 100 averages.
At each incident power level on the test detector, the Vpk at all multiples of the modulation frequency was measured (i.e. V[i*w]). The difference between the 2f/1f ratios of the test and reference detectors was then calculated, i.e.:
V_test[2*w]/V_test[1*w] - V_ref[2*w]/V_ref[1*w]
This is the solid black line in the plot ("t21-r21_v_power.png").
The response of a simulated non-linear detector was also calculated based on the Vpk measured at each harmonic in the reference detector, assuming that the reference detector had a purely linear response, i.e.:
V_nl[beta,2*w]/V_nl[beta,1*w] - V_l[2*w]/V_l[1*w]
These are the dashed colored lines in the plot ("t21-r21_v_power.png").
The result of the measurement seems to indicate that the non-linearity in the test detector is less than beta=-1.
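For reference, a minimal python sketch (with made-up peak voltages) of how the plotted quantity is formed from the measured V[i*w]:

def ratio_2f_1f(V):
    """V maps harmonic number -> peak voltage, e.g. {1: V(w), 2: V(2w)}."""
    return V[2] / V[1]

V_test = {1: 1.2e-3, 2: 4.0e-7}   # fake peak voltages at one incident power level
V_ref  = {1: 0.9e-3, 2: 2.7e-7}

excess = ratio_2f_1f(V_test) - ratio_2f_1f(V_ref)
print(f"excess 2f/1f ratio of the test PD: {excess:.2e}")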
The setup that was on the big optics table south of the laser, adjacent to the mode cleaner, is no longer needed. |
Attachment 1: t21-r21_v_power.png
|
|
4549
|
Wed Apr 20 23:20:49 2011 |
jamie | Summary | Computers | installation of CDS tools on pianosa | This is an overview of how I got (almost) all the CDS tools running on pianosa, the new Ubuntu 10.04 control room work station.
This machine is an experiment in minimizing the amount of custom configuration and source code compiling. I am attempting to install as many tools as possible from existing packages.
available packages
I was able to install a number of packages directly from the ubuntu archives, including fftw, grace, and ROOT:
apt-get install \
    libfftw3-dev \
    grace \
    root-system
LSCSOFT
I installed all needed LSCSOFT packages (framecpp, libframe, metaio) from the well-maintained UWM LSCSOFT repository.
$ cat /etc/apt/sources.list.d/lscsoft.list
deb http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze
deb-src http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze contrib
sudo apt-get install lscsoft-archive-keyring
sudo apt-get update
sudo apt-get install ldas-tools-framecpp-dev libframe-dev libmetaio-dev lscsoft-user-en
You then need to source /opt/lscsoft/lscsoft-user-env.sh to use these packages.
EPICS
There actually appear to be a couple of projects that are trying to provide debs of EPICS. I was able to get EPICS working from one of them, but it didn't include some of the other needed packages (such as MEDM and BURT), so I fell back to using Keith's pre-built binary tarball.
Prereqs:
apt-get install \
    libmotif-dev \
    libxt-dev \
    libxmu-dev \
    libxprintutil-dev \
    libxpm-dev \
    libz-dev \
    libxaw7-dev \
    libpng-dev \
    libgd2-xpm-dev \
    libbz2-dev \
    libssl-dev \
    liblapack-dev \
    gfortran
Pulled Keith's prebuilt binary:
cd /ligo/apps
wget https://llocds.ligo-la.caltech.edu/daq/software/binary/apps/ubuntu/epics-3.14.10-ubuntu.tar.gz
tar zxf epics-3.14.10-ubuntu.tar.gz
GDS
I built GDS from svn, after I fixed some broken stuff [0]:
cd ~controls/src/gds
svn co https://redoubt.ligo-wa.caltech.edu/svn/gds/trunk
cd trunk
# fixed broken stuff [0]
source /opt/lscsoft/lscsoft-user-env.sh
./bootstrap
export GDSBUILD=online
export ROOTSYS=/usr
./configure --prefix=/ligo/apps/gds --enable-only-dtt --with-epics=/ligo/apps/epics-3.14.10
make
make install
dataviewer
I installed dataviewer from source:
cd ~controls/src/advLigoRTS
svn co https://redoubt.ligo-wa.caltech.edu/svn/advLigoRTS/trunk
cd trunk/src/dv
# fix stupid makefile: /opt/rtapps --> /ligo/apps
make
make install
I found that the actual dataviewer wrapper script was also broken, so I made a new one:
$ cat /ligo/apps/dv/dataviewer
#!/bin/bash
export DVPATH=/ligo/apps/dv
ID=$$
DCDIR=/tmp/${ID}DC
mkdir $DCDIR
trap "rm -rf $DCDIR" EXIT
$DVPATH/dc3 -s ${NDSSERVER} -a $ID -b $DVPATH "$@"
environment
Finally, I made an environment definer file:
$ cat /ligo/apps/cds-user-env.sh
# source the lscsoft environment
. /opt/lscsoft/lscsoft-user-env.sh

# source the gds environment
. /ligo/apps/gds/etc/gds-user-env.sh

# special local epics setup
EPICS=/ligo/apps/epics
export LD_LIBRARY_PATH=${EPICS}/base/lib/linux-x86_64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=${EPICS}/extensions/lib/linux-x86_64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=${EPICS}/modules/seq/lib/linux-x86_64:$LD_LIBRARY_PATH
export PATH=${EPICS}/base/bin/linux-x86_64:$PATH
export PATH=${EPICS}/extensions/bin/linux-x86_64:$PATH
export PATH=${EPICS}/modules/seq/bin/linux-x86_64:$PATH

# dataviewer path
export PATH=/ligo/apps/dv:${PATH}

# specify the NDS server
export NDSSERVER=fb
[0] GDS was not compiling, because of what looked like bugs. I'm not sure why I'm the first person to catch these things. Stricter compiler?
To fix the following compile error:
TLGExport.cc:1337: error: ‘atoi’ was not declared in this scope
I made the following patch:
Index: /home/controls/src/gds/trunk/GUI/dttview/TLGExport.cc
===================================================================
--- /home/controls/src/gds/trunk/GUI/dttview/TLGExport.cc	(revision 6423)
+++ /home/controls/src/gds/trunk/GUI/dttview/TLGExport.cc	(working copy)
@@ -31,6 +31,7 @@
 #include <iomanip>
 #include <string.h>
 #include <strings.h>
+#include <stdlib.h>
 namespace ligogui {
    using namespace std;
To fix the following compile error:
TLGPrint.cc:264: error: call of overloaded ‘abs(Int_t&)’ is ambiguous
I made the following patch:
Index: /home/controls/src/gds/trunk/GUI/dttview/TLGPrint.cc
===================================================================
--- /home/controls/src/gds/trunk/GUI/dttview/TLGPrint.cc	(revision 6423)
+++ /home/controls/src/gds/trunk/GUI/dttview/TLGPrint.cc	(working copy)
@@ -22,6 +22,7 @@
 #include <fstream>
 #include <map> |
4732
|
Tue May 17 17:01:22 2011 |
jamie | Configuration | CDS | Update LSC channels from _DAQ to _DQ | As of RCG version 2.1, recorded channels use the suffix "_DQ", instead of "_DAQ". I just rebuilt and installed the c1lsc model, which changed the channel names, therefore hosing the old daq channel ini file. Here's what I did, and how I fixed it:
$ ssh c1lsc
$ cd /opt/rtcds/caltech/c1/core/trunk
$ make c1lsc
$ make install-c1lsc
$ cd /opt/rtcds/caltech/c1/scripts
$ ./startc1lsc
$ cd /opt/rtcds/caltech/c1/chans/daq
$ cat archive/C1LSC_110517_152411.ini | sed "s/_DAQ/_DQ/g" >C1LSC.ini
$ telnet fb 8087
daqd> shutdown
|
5049
|
Wed Jul 27 15:49:13 2011 |
jamie | Configuration | CDS | dataviewer now working on pianosa | Not exactly sure what the problem was, but I updated to the head of the SVN and rebuilt and it seems to be working fine now. |
5060
|
Fri Jul 29 12:39:26 2011 |
jamie | Update | CDS | c1iscex mysteriously crashed | c1iscex was behaving very strangely this morning. Steve earlier reported that he was having trouble pulling up some channels from the c1scx model. I went to investigate and noticed that indeed some channels were not responding.
While I was in the middle of poking around, c1iscex stopped responding altogether, and became completely unresponsive. I walked down there and did a hard reset. Once it rebooted, and I did a burt restore from early this morning, everything appeared to be working again.
The fact that problems were showing up before the machine crashed worries me. I'll try to investigate more this afternoon. |
5094
|
Tue Aug 2 16:43:23 2011 |
jamie | Update | CDS | NDS2 server on mafalda restarted for access to new channels | In order to get access to new DQ channels from the NDS2 server, the NDS2 server needs to be told about the new channels and restarted. The procedure is as follows:
ssh mafalda
cd /users/jzweizig/nds2-mafalda
./build_channel_history
./install_channel_list
pkill nds2
# wait a few seconds for the process to quit and release the server port
./start_nds2
This procedure needs to be run every time new _DQ channels are added.
We need to set this up as a proper service, so the restart procedure is more elegant.
An additional comment from John Z.:
The --end-gps parameter in ./build_channel_history seems to be causing some trouble. It should work without this parameter, but there is a directory with a GPS time of 1297900000 (evidently a test for GPS1G) that might screw up the channel list generation. So, it appears that the end time requires a time for which data already exists. This wouldn't seem to be a big deal, but it means that it has to be modified by hand before running. I haven't fixed this yet, but I think that I can probably pick out the most recent frame and use that as an end-time point. I'll see if I can make that work... |
5127
|
Fri Aug 5 20:37:34 2011 |
jamie | Summary | General | Summary of today's in-vacuum work | [Jamie, Suresh, Jenne, Koji, Kiwamu]
After this morning's hiccup with the east end crane, we decided to go ahead with work on ETMX.
Took pictures of the OSEM assemblies, we laid down rails to mark expected new position of the suspension base.
Removed two steering mirrors and a windmill that were on the table but were not being used at all.
Clamped the test mass and moved the suspension to the edge of the table so that we could more easily work on repositioning the OSEMs. Then leveled the table and released the TM.
Rotated each OSEM so that the parallel LED/PD holder plates were oriented in the vertical direction. We did this in the hopes that this orientation would minimize SD -> POS coupling.
For each OSEM, we moved it through its full range, as read out by the C1:SUS-ETMX_{UL,UR,LL,LR,SD}PDMon channels, and attempted to adjust the positions so that the readout was in the center of the range (the measured ranges, mid values, and ultimate positions will be noted in a follow-up post). Once we were satisfied that all the OSEMs were in good positions, we photographed them all (pictures also to follow).
Re-clamped the TM and moved it into its final position, using the rails as a reference and a ruler to measure as precisely as possible:
ETMX position change: -0.2056 m = -20.56 cm = -8.09 in (away from vertex)
Rebalanced the table.
Repositioned the mirror for the ETMX face camera.
Released TM clamps.
Rechecked OSEM centering.
Unblocked the green beam, only to find that it was displaced horizontally on the test mass about half an inch to the west (-y). Koji determined that this was because the green beam is incident on the TM at an angle due to the TM wedge. This presents a problem, since the green beam can no longer be used as a reference for the arm cavity. After some discussion we decided to go with the TM position as is, and to realign the green beam to the new position and relock the green beam to the new cavity. We should be able to use the spot position of the green beam exiting the vacuum at the PSL table as the new reference. If the green X beam exiting at the PSL table is severely displaced, we may decide to go back in and move ETMX to tweak the cavity alignment.
At this point we decided that we were done for the day. Before closing up, we put a piece of foil with a hole in it in front of the TM face, to use as an alignment aperture when Kiwamu does the green alignment.
Kiwamu will work on the green alignment over the weekend. Assuming everything works out, we'll try the same procedure on ETMY on Monday. |
|