ID | Date | Author | Type | Category | Subject |
11434
|
Tue Jul 21 21:33:22 2015 |
Max Isi | Update | General | Summary pages moved to 40m LDAS account | The summary pages are now generated from the new 40m LDAS account. The nodus URL (https://nodus.ligo.caltech.edu:30889/detcharsummary/) is the same and there are no changes to the way the configuration files work. However, the location on LDAS has changed to https://ldas-jobs.ligo.caltech.edu/~40m/summary/ and the config files are no longer version-controlled on the LDAS side (this was redundant, as they are under VCS in nodus).
I have posted a more detailed description of the summary page workflow, as well as instructions to run your own jobs and other technical minutiae, on the wiki: https://wiki-40m.ligo.caltech.edu/DailySummaryHelp |
12394
|
Wed Aug 10 17:30:26 2016 |
Max Isi | Update | General | Summary pages status | Summary pages are currently empty due to a problem with the code responsible for locating frame files in the cluster. This should be fixed soon and the
pages should go back to normal automatically at that point. See Dan Kozak's email below for details.
Date: Wed, 10 Aug 2016 13:28:50 -0700
From: Dan Kozak <dkozak@ligo.caltech.edu>
> Dan, maybe it's a gw_data_find problem?
Almost certainly that's the problem. The diskcache program that finds
new data died on Saturday and no one noticed. I couldn't restart it,
but fortunately its author just returned from several weeks of vacation
today. He's working on it and I'll let you know when it's back up.
--
Dan Kozak
dkozak@ligo.caltech.edu |
12399
|
Thu Aug 11 11:09:52 2016 |
Max Isi | Update | General | Summary pages status | This problem has been fixed.
> Summary pages are currently empty due to a problem with the code responsible for locating frame files in the cluster. This should be fixed soon and the
> pages should go back to normal automatically at that point. See Dan Kozak's email below for details.
>
>
> Date: Wed, 10 Aug 2016 13:28:50 -0700
> From: Dan Kozak <dkozak@ligo.caltech.edu>
>
>
> > Dan, maybe it's a gw_data_find problem?
>
> Almost certainly that's the problem. The diskcache program that finds
> new data died on Saturday and no one noticed. I couldn't restart it,
> but fortunately its author just returned from several weeks of vacation
> today. He's working on it and I'll let you know when it's back up.
>
> --
> Dan Kozak
> dkozak@ligo.caltech.edu |
5428
|
Thu Sep 15 22:31:44 2011 |
Manuel | Update | SUS | Summary screen | I changed some colors on the Summary of Suspension Sensor screen using my Italian creativity.
I wrote a script in Python to change the thresholds for the "alarm mode" of the screen.
The script takes a GPS-format start time as the 1st argument and a duration time as the second argument.
For every channel shown on the screen, it computes the mean value over this time.
The 3rd argument is the ratio between the mean and the LOW threshold. The 4th argument is the ratio between the mean and the LOLO threshold.
It then sets the HIGH and HIHI thresholds symmetrically about the mean.
It does that for all channels, skipping the Gains and the Offsets because those data are not stored.
For example, if the ratios are 0.9 and 0.7 and the mean is 10, the thresholds will be LOLO=7, LOW=9, HIGH=11, HIHI=13.
You can run the script on pianosa by typing '/opt/rtcds/caltech/c1/scripts/SUS/set_thresholds.py' in a terminal, followed by the arguments.
I already ran the script with these arguments: 1000123215 600 0.9 0.7
That time corresponds to 5:00 this morning, for a duration of 10 minutes.
This is the help text I wrote:
HELP: This program set the thresholds for the "alarm mode" of the C1SUS_SUMMARY.adl medm screen.
Written by Manuel Marchio`, visiting student from University of Pisa - INFN for the 2011 summer at LIGO-Caltech. Thursday, 15th September 2011.
The 1st argument is the time in gps format when you want to START the mean
The 2nd argument is the DURATION
The 3rd argument is the ratio of the LOW and the HIGH thresholds. It must be in the range [0,1]
The 4th argument is the ratio of the LOLO and the HIHI thresholds. It must be in the range [0,1]
Example: path/set_thresholds.py 1000123215 600 0.9 0.7
and if the mean is 10, thresholds will be set as LOLO=7, LOW=9, HIGH=11, HIHI=13
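For illustration, here is a minimal sketch of the threshold-setting logic described above. This is not the actual set_thresholds.py; it assumes pyepics is available, that the mean has already been computed from archived data, and the channel name is only an example.
```python
# Minimal sketch of the threshold logic (not the actual set_thresholds.py).
# Assumes pyepics; the channel name is an example and `mean` comes from
# averaging archived data over the requested GPS interval.
from epics import caput

def set_alarm_thresholds(channel, mean, low_ratio, lolo_ratio):
    """Set LOLO/LOW/HIGH/HIHI symmetrically about the measured mean."""
    caput(channel + ".LOLO", lolo_ratio * mean)
    caput(channel + ".LOW",  low_ratio * mean)
    caput(channel + ".HIGH", (2 - low_ratio) * mean)    # mirror of LOW about the mean
    caput(channel + ".HIHI", (2 - lolo_ratio) * mean)   # mirror of LOLO about the mean

# Example from the text: mean = 10, ratios 0.9 and 0.7 -> LOLO=7, LOW=9, HIGH=11, HIHI=13
set_alarm_thresholds("C1:SUS-ETMX_ULPD_VAR", 10.0, 0.9, 0.7)
```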
|
Attachment 1: sussum.png
|
|
5467
|
Mon Sep 19 18:05:27 2011 |
rana | Update | SUS | Summary screen |
Quote: |
I changed some colors on the Summary of Suspension Sensor using my italian creativity.
I wrote a script in Python to change the thresholds for the "alarm mode" of the screen.
|
I've started to fix up the script somewhat (as a way to teach myself some more python):
* moved all of the SUS Summary screen scripts into SUS/SUS_SUMMARY/
* removed the hardcoded channel names (a list of 190 hand-typed names !!!!!!!)
* fixed it to use NDS2, instead of trying to use the NDS2 protocol on fb:8088 (which is an NDS1-only machine)
* it was trying to set alarms for the SUS gains, WDs, Vmons, etc. using the same logic as the OSEM PD values. This is nonsensical. We'll need to make a different logic for each type of channel.
New script is called setSensors.py. There are also different scripts for each of the different kinds of fields (gains, sensors, vmons, etc.)
Some Examples:
pianosa:SUS_SUMMARY 0> ./setDogs.py 3 5
Done writing new values.
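For reference, a minimal sketch of fetching channel means over NDS2 rather than hard-coding them (this is not setSensors.py itself; it assumes the nds2-client Python bindings are installed, and the server name, port, channels and times are placeholders):
```python
# Minimal NDS2 fetch sketch (assumes the nds2-client Python bindings;
# server, port, channel names and times are placeholders).
import numpy as np
import nds2

conn = nds2.connection("nds.example.server", 31200)
channels = ["C1:SUS-ETMY_ULSEN_OUT16", "C1:SUS-ETMY_LLSEN_OUT16"]
start, duration = 1000123215, 600        # GPS start time and averaging length [s]

buffers = conn.fetch(start, start + duration, channels)
means = {buf.channel.name: np.mean(buf.data) for buf in buffers}
print(means)
```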

|
5500
|
Wed Sep 21 16:22:14 2011 |
rana | Update | SUS | Summary screen | The SUS SUMMARY screen is now fully activated. You should keep it open at all times as a diagnostic of the suspensions.
No matter how cool you think you are, you are probably doing something bad when trying to lock, measure any loop gains, set matrices, etc. Use the screen.
This is the link to the automatic snapshot of the SUS SUMMARY screen. You can use it to check the Suspensions status with your jPhone.
Auto SUS SUMMARY Snapshot
When a value goes yellow it is near the bad level. When it is red, it means the optic is misaligned, not damped, has the wrong gain, etc.
So don't ignore it Steve! If you think the thresholds are set too low then change them to the appropriate level with the scripts in SUS/ |
6833
|
Tue Jun 19 20:26:50 2012 |
Jenne | HowTo | Locking | Summer Plan | Jenne and Yuta's Summer Plan
These are the things that we'd like to accomplish, hopefully before Yuta leaves in mid-July
* Yarm mode scan
~ Measure residual motion of Yarm cavity when ALS is engaged
* Xarm mode scan
~ Align Xarm IR
~ Align Xarm green to cavity
~ Do mode scan (similar to Yarm)
~ Measure residual motion of Xarm cavity when ALS is engaged
* Hold both arms on IR resonance simultaneously (quick proof that we can)
~ Modify beatbox so we can use both X and Y at the same time (Jamie will do this Wednesday morning - we've already discussed)
* PRMI + Arms
~ Lock the PRMI (which we already know we can do) holding arms off resonance, bring both arms into resonance using ALS
* PRC mode matching - figure out what the deal is
~ Look at POP camera with video capture - use software that Eric the Tall wrote with JoeB to measure spot size
* DRMI glitches
~ Why can't we keep the DRMI locked stably?
* DRMI + Arms
~ Full lock!!
~ Make lots of useful diagnostics for aLIGO, measure sensing matrices, etc. |
4956
|
Fri Jul 8 09:53:49 2011 |
Nicole | Summary | SUS | Summer Progress Report 1 | A copy of my summer progress report 1 has been uploaded to the LIGO DCC on 7/7/11 and I have just added a copy to the TTsuspension wiki
PDF copy of Summer Progress Report |
11654
|
Wed Sep 30 15:44:06 2015 |
Steve | Update | General | Sun Fire X4600 | Gautam and Steve,
The decommissioned server from LDAS has been retired to the 40m; it has 32 cores and 128 GB of memory and is installed in rack 1X7. http://docs.oracle.com/cd/E19121-01/sf.x4600/ |
11269
|
Sun May 3 19:40:51 2015 |
rana | Update | ASC | Sunday maintenance: alignment, OL center, seismo, temp sensors | X arm was far out in yaw, so I reran the ASS for Y and then X. Ran OK; the offload from ASS outputs to SUS bias is still pretty violent - needs smoother ramping.
After this I recentered the ITMX OL - it was off by 50 microradians in pitch. Just like the BS/PRM OLs, this one has a few badly assembled & flimsy mounts. Steve, please prepare for replacing the ITMX OL mirror mounts with the proper base/post/Polaris combo. I think we need ~3 of them. Pit/yaw loop measurements attached.
Based on the PEM-SEIS summary page, it looked like GUR1 was oscillating (and thereby saturating and suppressing the Z channel). So I power cycled both Guralps by turning off the interface box for ~30 seconds and then powering back on. Still not fixed; looks like the oscillations at 110 and 520 Hz have moved, but GUR2_X/Y are suppressed above 1 Hz, and GUR1_Z is suppressed below 1 Hz. We need Jenne or Zach to come and use the Gur Paddle on these things to make them OK.
From the SUS-WatchDog summary page, it looked like the PRM tripped during the little 3.8 EQ at 4AM, so I un-tripped it.
Caryn's temperature sensors look like they're still plugged in. Does anyone know where they're connected? |
Attachment 1: itmx_ol_loops_150503.png
|
|
Attachment 2: Gur_150503.png
|
|
1344
|
Mon Mar 2 03:57:44 2009 |
Yoichi | Update | Locking | Sunday night locking | Tonight's locking started with a boot fest of the FE computers, which were all red when I came in.
It also took me some time to realize that C1:IOO-MC_F was always returning zero to tdsavg, causing the offloadMCF script to do nothing.
I fixed this by rebooting c1iovme and c1iool0.
Like Rob on Thursday night, I was only able to reach arm power around 10.
This time, I turned down the MC WFS gain to 0.02 (from 0.3).
I also checked gains of most of the loops (MICH, PRC, SRC, DARM, CARM-MCL, CARM-AO).
All the loops looked fine until the lock was lost suddenly. Also the spectrum of MC_F did not change as the arm power was ramped up.
Actually, I was able to reach arm power=10 only once because I spent a long time checking the loop gains and spectrum at fine steps of the arm power.
So it is quite possible that this loss of lock was just caused by a seismic kick. |
14820
|
Wed Jul 31 14:44:11 2019 |
gautam | Update | Computers | Supermicro inventory | Chub brought the replacement Supermicro we ordered to the 40m today. I stored it at the SW entrance to the VEA, along with the other Supermicro. At the time of writing, we have, in hand, two (unused) Supermicro machines. One is meant for EY and the other is meant for c1psl/c1iool0. DDR3 RAM and 120 GB SSD drives have also been ordered, but have not yet arrived (I think, Chub, please correct me if I'm wrong).
Update 20190802: The DDR3 RAM and 120 GB SSD drives arrived, and are stored in the FE hardware cabinet along the east arm. So at the time of writing, we have 2 sets of (Supermicro + 120GB HD + 4GB RAM).
Quote: |
We should ask Chub to reorder several more SuperMicro rackmount machines, SSD drives, and DRAM cards. Gautam has the list of parts from Johannes' last order.
|
|
3833
|
Mon Nov 1 10:28:41 2010 |
steve | Bureaucracy | SAFETY | Suresh received 40m safety training | Our new postdoc Suresh Doravari received 40m specific safety training last week. |
11377
|
Thu Jun 25 15:07:50 2015 |
Steve | Update | safety | Surfs Safety 2015 | Jessica Pena, Megan Kelly, Eve Chase and Ignacio Magana received 40m specific basic safety training today. |
Attachment 1: surfs2015.jpg
|
|
3527
|
Mon Sep 6 20:38:58 2010 |
Koji | Update | CDS | Suspension model reviewed | I have reviewed the suspension model of C1SUS and refined it.
It is compatible with the current one but has minor additions. |
Attachment 1: suspension_model.pdf
|
|
3528
|
Mon Sep 6 21:08:44 2010 |
rana | Update | CDS | Suspension model reviewed | We must remember that we are using the Rev. B SOS Coil Drivers and not the Rev. A.
The main change from A->B was the addition of the extra path for the bias inputs. These inputs were previously handled by the slow EPICS system and not a part of the front end. So we used to have a separate bias screen for these than the bias which is in the front end. The slow bias is what was used for the alignment to avoid overloading the range of the main coil driver path. |
9982
|
Wed May 21 13:18:47 2014 |
ericq | Update | CDS | Suspension MEDM Bug | I fixed a bug in the SUS_SINGLE screen, where the total YAW output was incorrectly displayed (TO_COIL_3_1 instead of TO_COIL_1_3). I noticed this by seeing that the yaw bias slider had no effect on the number that claimed to be the yaw sum. The first time I did this, I accidentally changed the screen size a bit, which smushed things together, but that's fixed now.
I committed it to the svn, along with some uncommitted changes to the oplev servo screen. |
14499
|
Thu Mar 28 23:29:00 2019 |
Koji | Update | SUS | Suspension PD whitening and I/F boards modified for susaux replacement | The sus PD whitening boards are now ready for moving the backplane connections to the lower row and plugging the Acromag interface board into the upper row.
The sus PD whitening boards on the 1X5 rack (D000210-A1) had slow and fast channels mixed in a single DIN96 connector. As we are going to use the rear-side backplane connector for Acromag access, we wanted to migrate the fast channels somewhere else. For this purpose, the boards were modified to duplicate the fast signals to the lower DIN96 connector.
The modification was done on the back layer of the board (Attachment 1).
The 28A~32A and 28C~32C of P1 are connected to the corresponding pins of P2 (Attachment 2). The connections were thoroughly checked with a multimeter.
After the modification the boards were returned to the same place of the crate. The cables, which had been identified and noted before disconnection, were returned to the connectors.
The functionality of the 40 (8 sus x 5 ch) whitening switches was confirmed one by one using DTT, by looking at the transfer functions from SUS LSC EXC to the PD input filter IN1. All the switches showed the proper whitening in the measurements.
The PD slow mon channels (like C1:SUS-XXX_xxPDMon) were also checked and they returned to the values before the modification, except for the BS UL PD. As the fast version of the signal returned to the previous value, the monitor circuit was suspect. Therefore the op-amp of the monitor channels (LT1125) was replaced, and the value came back to the previous value (Attachment 3).
|
Attachment 1: IMG_7474.JPG
|
|
Attachment 2: D000210_backplane.pdf
|
|
Attachment 3: Screenshot_from_2019-03-28_23-28-23.png
|
|
2642
|
Fri Feb 26 01:00:07 2010 |
Jenne | Update | COC | Suspension Progress | This is going to be a laundry list of the mile markers achieved so far:
* Guiderod and wire standoff glued to each ITMX and ITMY
* Magnets glued to dumbbells (4 sets done now). ITMX has 244 +- 3 Gauss, ITMY has 255 +- 3 Gauss. The 2 sets for SRM and PRM are 255 +- 3 G and 264 +- 3 G. I don't know which set will go with which optic yet.
* Magnets glued to ITMX. There were some complications removing the optic from the magnet gluing fixture. The way the optic is left with the glue to dry overnight is with "pickle picker" type grippers holding the magnets to the optic. After the epoxy had cured, Kiwamu and I took the grippers off, in preparation to remove the optic from the fixture. The side magnet (thankfully the side where we won't have an OSEM) and dumbbell assembly snapped off. Also, on the UL magnet, the magnet came off of the dumbbell (the dumbbell was still glued to the glass). We left the optic in the fixture (to maintain the original alignment), and used one of the grippers to glue the magnet back to the UL dumbbell. The gripper in the fixture has very little slop in where it places the magnet/dumbbell, so the magnet was reglued with very good axial alignment. Since after the side magnet+dumbbell came off the glass, the 2 broke apart, we did not glue them back on to the optic. They were reattached, so that we can in the future put the extra side magnet on, but I don't think that will be necessary, since we already know which side the OSEM will be on.
* Magnets glued to ITMY. This happened today, so it's drying overnight. Hopefully the grippers won't be sticky and jerky like last time when we were removing them from the fixture, so hopefully we won't lose any magnets when I take the optic out of the fixture.
* ITMX has been placed in its suspension cage. The first step, before getting out the wire, is to set the optic on the bottom EQ stops, and get the correct height and get the optic leveled, to make things easier once the wire is in place. Koji and I did this step, and then we clamped all of the EQ stops in place to leave it for the night.
* The HeNe laser has been leveled, to a beam height of 5.5inches, in preparation for the final leveling of the optics, beginning tomorrow. The QPD with the XY decoder is also in place at the 5.5 inch height for the op lev readout. The game plan is to leave this set up for the entire time that we're hanging optics. This is kind of a pain to set up, but now that it's there, it can stay out of the way huddled on the side of the flow bench table, ready for whenever we get the ETMs in, and the recoated PRM.
* Koji and Steve got the ITMX OSEMs from in the vacuum, and they're ready for the hanging and balancing of the optic tomorrow. Also, they got out the satellite box, and ran the crazy-long cable to control the OSEMs while they're on the flow bench in the clean room.
Koji and I discovered a problem with the small EQ stops, which will be used in all of the SOS suspensions for the bottom EQ stops. They're too big. :( The original document (D970312-A-D) describing the size for these screws was drawn in 1997, and it calls for 4-40 screws. The updated drawing, from 2000 (D970312-B-D) calls for 6-32 screws. I naively trusted that updated meant updated, and ordered and prepared 6-32 screws for the bottom EQ stops for all of the SOSes. Unfortunately, the suspension towers that we have are tapped for 4-40. Thumbs down to that. We have a bunch of vented 4-40 screws in the clean room cabinets, which I can drill, and have Bob rebake, so that Zach and Mott can make viton inserts for them, but that will be a future enhancement. For tonight, Koji and I put in bare vented 4-40 screws from the clean room supply of pre-baked screws. This is consistent with the optics in our chambers having bare screws for the bottom EQ stops, although it might be nicer to have cushy viton for emergencies when the wire might snap. The real moral of this story is: don't trust the drawings. They're good for guidelines, but I should have confirmed that everything fit and was the correct size. |
16597
|
Wed Jan 19 14:41:23 2022 |
Koji | Update | BHD | Suspension Status | Is this the correct status? Please directly update this entry.
LO1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
LO2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS4 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR3 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
SR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
Last updated: Fri Jan 28 10:34:19 2022 |
176
|
Thu Dec 6 19:19:47 2007 |
Andrey | Configuration | SUS | Suspension damping Gain was restored |
Suspension damping gain was disabled for some reason (all the indicators in the most right part of the screen C1SUS_ETMX.adl were red), it is now restored. |
14725
|
Thu Jul 4 10:54:21 2019 |
Koji | Summary | SUS | Suspension damping recovered, ITMX stuck | So Cal Earthquake. All suspension watchdogs tripped.
Tried to recover the OSEM damping.
=> The watchdogs for all suspensions except for ITMX were restored. ITMX seems to be stuck. No further action by me for now. |
14471
|
Wed Feb 27 21:34:21 2019 |
gautam | Update | General | Suspension diagnosis | In my effort to understand what's going on with the suspensions, I've kicked all the suspensions and shutdown the watchdogs at 1235366912. PSL shutter is closed to avoid trying to lock to the swinging cavity. The primary aims are
- To see how much the resonant peaks have shifted w.r.t. the database, if at all - I claim that the ETMY resonances have shifted by a large amount and also has lost one of the resonant peaks.
- To check the status of the existing diagonalization.
All the tests I have done so far (looking at free swinging data, resonant frequencies in the Oplev error signals etc) seem to suggest that the problem is mechanical rather than electrical. I'll do a quick check of the OSEM PD whitening unit in 1Y4 to be sure. But the fact that the same three peaks appear in the OSEM and Oplev spectra suggests to me that the problem is not electrical.
Watchdogs restored at 10 AM PST |
844
|
Mon Aug 18 08:07:10 2008 |
Yoichi | Configuration | SUS | Suspension free swinging | I've started a free swinging measurement of OSEM spectra now. Please leave the watchdogs untouched. |
15610
|
Sun Oct 4 15:32:21 2020 |
gautam | Update | SUS | Suspension health check | Summary:
After the earthquake on September 19 2020, it looks to me like the only lasting damage to suspensions in vacuum is the ETMY UR magnet being knocked off.
Suspension ringdown tests:
I did the usual suspension kicking/ringdown test:
- One difference is that I now kick the suspension "N" times where N is the number of PSD averages desired.
- After kicking the suspension, it is allowed to ring down with the damping disabled, for ~1100 seconds so that we can get spectra with 1mHz resolution.
- We may want to get more e-folding times in, but since the Qs of the modes are a few hundred, I figured this is long enough.
- I think this kind of approach gives better SNR than letting it ring down for 10,000 seconds (for 10 averages with 10 non-overlapping segments of 1000 seconds), and I wanted to test this scheme out; it seems to work well (a minimal sketch of the averaging is shown after this list).
- Attachment #1 shows a summary of the results.
- Attachment #2 has more plots (e.g. transfer function from UL to all other coils), in case anyone is interested in more forensics. The data files are large but if anyone is interested in the times that the suspension was kicked, you can extract it from here.
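Not the original analysis code, but a minimal sketch of the one-PSD-per-kick averaging scheme referenced above; it assumes numpy/scipy, and the sample rate and random stand-in segments are illustrative placeholders for the real post-kick OSEM data.
```python
# Minimal sketch of averaging one 1 mHz-resolution PSD per kick (assumptions:
# scipy/numpy available; fs and the random stand-in segments are illustrative).
import numpy as np
from scipy import signal

fs = 2048.0                                  # example OSEM sensor sample rate [Hz]
segments = [np.random.randn(int(1100 * fs)) for _ in range(4)]   # N post-kick ringdowns

psds = []
for seg in segments:
    # 1000 s FFT length -> 1 mHz frequency resolution, one average per kick
    f, pxx = signal.welch(seg, fs=fs, nperseg=int(1000 * fs))
    psds.append(pxx)
avg_psd = np.mean(psds, axis=0)              # average across the N kicks
print(f"frequency resolution = {f[1] - f[0]:.4f} Hz")
```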
Conclusions:
- My cursory scans of the analysis don't throw up any red flags (apart from the known problem of ETMY UR being dislodged) 👌 .
- The PRM data is weird
- I believe this is because the DC bias voltage to the coils was significantly off from what it normally is when the PRC is aligned.
- In any case, I am able to lock the PRC, so I think the PRM magnets are fine.
The PRC angular FF no longer works - turns out this was just a weird interaction with the Oplev loop, because the beam was significantly off-centered on the Oplev QPD. Better alignment fixed it; the FF works as it did before.
With the PRC locked and the carrier resonant (no ETMs), the old feedforward filters significantly degrade the angular stability to the point that the lock is lost.
My best hypothesis is that the earthquake caused a spot shift on PR2/PR3, which changed the TF from seismometer signal to PRC spot motion.
Anyways, we can retrain the filter.
- The fact that the PRC can be locked suggests PR2/PR3 are still suspended and okay.
- The SRM data is also questionable, because the DC bias voltage wasn't set to the values for an aligned SRC when the data was collected
- Nevertheless, the time series shows a clean ringdown, so at least all 5 OSEMs are seeing a signal.
- The fact that the beam comes out at the AS port suggests the SR3/SR2 suspensions are fine 👍
Attachment #2 also includes info about the matrix diagonalization, and the condition numbers of the resulting matrices are as large as ~30 for some suspensions, but I think this isn't a new feature. |
Attachment 1: combined.pdf
|
|
Attachment 2: allPlots.zip
|
15506
|
Thu Jul 30 16:16:43 2020 |
gautam | Update | SUS | Suspension recovery | This earthquake and friends had tripped all watchdogs. I used the scripted watchdog re-enabler, and released the stuck ITMX (this operation still requires a human and hasn't been scripted yet). IMC is locked again and all Oplevs report healthy optic alignment. |
82
|
Thu Nov 8 00:55:44 2007 |
pkp | Update | OMC | Suspension tests | [Sam , Pinkesh]
We tried to measure the transfer functions of the 6 degrees of freedom of the OMC SUS. To our chagrin, we found that it was very hard to get the OSEMs to center and get a mean value of around 6000 counts. Somehow the left and top OSEMs were coupled, and we tried to see if any of the OSEMs/suspension parts were touching each other. But there is still a significant coupling between the various OSEMs. In theory, the only OSEMs that are supposed to couple are [SIDE], [LEFT, RIGHT], [TOP1, TOP2, TOP3], since the motion along these 3 sets is orthogonal to the other sets. Thus an excitation along any one OSEM in a set should only couple with another OSEM in the same set and not with the others. The graphs below were obtained by driving all the OSEMs one by one at 7 Hz and at 500 counts (I still have to figure out how much that is in units of length). These graphs show that there is some sort of contact somewhere. I can't locate any physical contact at this point, although TOP2 is suspicious and we moved it a bit, but it seems to be hanging free now. This can also be caused by the stiff wire with the PEEK on it. This wire is very stiff and it can transmit motion from one degree of freedom to another quite easily. I also have a graph showing the transfer function of the longitudinal degree of freedom. I decided to do this first because it was simple and I had to only deal with SIDE, which seems to be decoupled from the other DOFs. This graph is similar to one Norna has for the longitudinal DOF transfer function, with the addition of a peak around 1.8 Hz. This I reckon could very well be due to the wire, although it is hard to claim for certain. I am going to stop the measurement at this time and start a fresh high-resolution spectrum and leave it running overnight.
There is an extra peak in the high res spectrum that is disturbing. |
Attachment 1: shakeleft.pdf
|
|
Attachment 2: shakeright.pdf
|
|
Attachment 3: shakeside.pdf
|
|
Attachment 4: shaketop1.pdf
|
|
Attachment 5: shaketop2.pdf
|
|
Attachment 6: shaketop3.pdf
|
|
Attachment 7: LongTransfer.pdf
|
|
Attachment 8: Shakeleft7Nov2007_2.pdf
|
|
Attachment 9: Shakeleft7Nov2007_2.png
|
|
12648
|
Wed Nov 30 01:47:56 2016 |
gautam | Update | LSC | Suspension woes | Short summary:
- Looks like Satellite boxes are not to blame for glitchy behaviour of shadow sensor PD readouts
- Problem may lie at the PD whitening boards (D000210) or with the Contec binary output cards in c1sus
- Today evening, similar glitchy behaviour was observed in all MC1 PD readout channels, leading to frequent IMC unlocking. Cause unknown, although I did work at 1X5, 1X6 today, and pulled out the PD whitening board for ITMY which sits in the same eurocrate as that for MC1. MC2/MC3 do not show any glitches.
Detailed story below...
Part 1: Satellite box swap
Yesterday, I switched the ITMY and ETMY satellite boxes, to see if the problems we have been seeing with ITMY UL move with the box to ETMY. It did not, while ITMY UL remained glitchy (based on data from approximately 10pm PDT on 28Nov - 10am PDT 29 Nov). Along with the tabletop diagnosis I did with the tester box, I concluded that the satellite box is not to blame.
Part 2: Tracing the signal chain (actually this was part 3 chronologically but this is how it should have been done...)
So if the problem isn't with the OSEMs themselves or the satellite box, what is wrong? I attempted to trace the signal chain from the satellite box into our CDS system as best as I could. The suspension wiring diagram on our wiki page is (I think) a past incarnation. Of course putting together a new diagram was a monumental task I wasn't prepared to undertake tonight, but in the long run this may be helpful. I will put up a diagram of the part I did trace out tomorrow, but the relevant links for this discussion are as follows (? indicates I am unsure):
- Sat box (?)--> D010069 via 64pin IDE connector --> D000210 via DB15 --> D990147 via 4pin LEMO connectors --> D080281 via DB25 --> ADC0 of c1sus
- D000210 backplane --> cross-connect (mis)labelled "ITMX white" via IDE connector
- c1sus CONTEC DO-32L-PE --> D080478 via DB37 --> BO0-1 --> cross-connect labelled "XY220 1Y4-33-16A" via IDE --> (?) cross-connect (mis)labelled "ITMX white" via IDE connector
I have linked to the DCC page for the various parts where available. Unfortunately I can't locate (on the new DCC or the old one, or the elog or wiki) drawings for D010069 (Satellite Amplifier Adapter Board), D080281 ("anti-aliasing interface") or D080478 (which is the binary output breakout box). I have emailed Ben Abbott who may have access to some other archive - the diagrams would be useful as it is looking likely that the problem may lie with the binary output.
So presumably the first piece of electronics after the Satellite box is the PD whitening board. After placing tags on the 3 LEMOs and 1 DB15 cable plugged into this board, I pulled out the ITMY board to do some tabletop diagnosis in the afternoon around 2pm 29Nov.
Part 3: PD whitening board debugging
This particular board has been reported as problematic in the recent past. I started by inserting a tester board into the slot occupied by this board - the LEDs on the tester board suggested that power-supply from the backplane connectors were alright, confirmed with a DMM.
Looking at the board itself, C4 and C6 are tantalum capacitors, and I have faced problems with this type of capacitor in the past. In fact, on the corresponding MC3 board (which is the only one visible; I didn't want to pull out boards unnecessarily), these have been replaced with electrolytic capacitors, which are presumably more reliable. In any case, these capacitors do not seem to be at fault; the board receives +/-15 V as advertised.
The whitening switching is handled by the MAX333 - this is what I looked at next. This IC is essentially a quad SPDT switch, and a binary input supplied via the backplane connector serves to route the PD input either through a whitening filter, or bypass it via a unity gain buffer. The logic levels that effect the switching are +15V and 0V (and not the conventional 5V and 0V), but according to the MAX333 datasheet, this is fine. I looked at the supply voltage to all ICs on the board, DC levels seemed fine (as measured with a DMM) and I also looked at it on an oscilloscope, no glitches were seen in ~30sec viewing stretch. I did notice something peculiar in that with no input supplied to the MAX333 IC (i.e. the logic level should be 15V), the NO and NC terminals appear shorted when checked with a DMM. Zach has noticed something similar in the past, but Koji pointed out that the DMM can be fooled into thinking there is a short. Anyway, the real test was to pull the logic input of the MAX333 to 0, and look at the output, this is what I did next.
The schematic says the whitening filter has poles at 30,100Hz and a zero at 3 Hz. So I supplied as "PD input" a 12Hz 1Vpp sinewave - there should be a gain of ~x4 when this signal passes through the path with the whitening filter. I then applied a low frequency (0.1Hz) square wave (0-5V) to the "bypass" input, and looked at the output, and indeed saw the signal amplitude change by ~4x when the input to the switch was pulled low. This behaviour was confirmed on all five channels, there was no problem. I took transfer functions for all 5 channels (both at the "monitor" point on the backplane connector and on the front panel LEMOs), and they came out as expected (plot to be uploaded soon).
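As a quick sanity check (not part of the original test), the expected magnitude of a 3 Hz zero / 30 Hz and 100 Hz pole whitening stage at 12 Hz can be computed as follows:
```python
# Expected whitening gain at 12 Hz for a zero at 3 Hz and poles at 30 Hz and 100 Hz.
import numpy as np

f = 12.0
gain = np.sqrt(1 + (f / 3.0) ** 2) / (
    np.sqrt(1 + (f / 30.0) ** 2) * np.sqrt(1 + (f / 100.0) ** 2)
)
print(f"|H({f} Hz)| ~ {gain:.2f}")   # ~3.8, i.e. roughly the factor of ~4 quoted above
```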
Next, I took the board back to the eurocrate. I first put in a tester box into the slot and measured the voltage levels on the backplane pins that are meant to trigger bypassing of the whitening stage, all the pins were at 0V. I am not sure if this is what is expected, I will have to look inside D080478 as there is no drawing for it. Note that these levels are set using a Contec binary output card. Then I attached the PD whitening board to the tester board, and measured the voltages at the "Input" pins of all the 5 SPDT switches used under 2 conditions - with the appropriate bit sent out via the Contec card set to 0 or 1 (using the button on the suspension MEDM screens). I confirmed using the BIO medm screen that the bit is indeed changing on the software side, but until I look at D080478, I am not sure how to verify the right voltage is being sent out, except to check at the pins on the MAX333. For this test, the UL channel was indeed anomalous - while the other 4 channels yielded 0V (whitening ON, bit=1) and 15V (whitening OFF, bit=0), the corresponding values for the UL channel were 12V and 10V.
I didn't really get any further than this tonight. But this still leaves unanswered questions - if the measured values are faithful, then the UL channel always bypasses the whitening stage. Can this explain the glitchy behaviour?
Part 4: MC1 troubles
At approximately 8pm, the IMC started losing lock far too often - see the attached StripTool trace. There was a good ~2hour stretch before that when I realigned the IMC, and it held lock, but something changed abruptly around 8pm. Looking at the IMC mirror OSEM PD signals, all 5 MC1 channels are glitching frequently. Indeed, almost every IMC lockloss in the attached StripTool is because of the MC1 PD readouts glitching, and subsequently, the damping loops applying a macroscopic drive to the optic which the FSS can't keep up with. Why has this surfaced now? The IMC satellite boxes were not touched anytime recently as far as I am aware. The MC1 PD whitening board sits in the same eurocrate I pulled the ITMY board out of, but squishing cables/pushing board in did not do anything to alleviate the situation. Moreover, MC2 and MC3 look fine, even though their PD whitening boards also sit in the same eurocrate. Because I was out of ideas, I (soft) restarted c1sus and all the models (the thinking being if something was wrong with the Contec boards, a restart may fix it), but there was no improvement. The last longish lock stretch was with the MC1 watchdog turned off, but as soon as I turned it back on the IMC lost lock shortly after.
I am leaving the autolocker off for the night, hopefully there is an easy fix for all of this... |
Attachment 1: IMCwoes.png
|
|
10345
|
Thu Aug 7 12:34:56 2014 |
Jenne | Update | LSC | Suspensions not kicking? | Yesterday, Q helped me look at the DACs for some of the suspensions, since Gabriele pointed out that the DACs may have trouble with zero crossings.
First, I looked at the oplevs of all the test masses with the oplev servos off, as well as the coil drive outputs from the suspension screen which should go straight out to the DACs. I put some biases on the suspensions in either pitch or yaw so that one or two of the coil outputs was crossing zero regularly. I didn't see any kicks.
Next, we turned off the inputs of the coil driver filter banks, unplugged the cable from the coil driver board to the satellite box, and put in sinusoidal excitations to each of the coils using awggui. We then looked with a 'scope at the monitor point of the coil driver boards, but didn't see any glitches or abnormalities. (We then put everything back to normal)
Finally, I locked and aligned the 2 arms, and just left them sitting. The oplev servos were engaged, but I didn't ever see any big kicks.
I am suspicious that there was something funny going on with the computers and RFM over the weekend, when we were not getting RFM connections between the vertex and the end stations, and that somehow weird signals were also getting sent to some of the optics. Q's nuclear reboot (all the front ends simultaneously) fixed the RFM situation, and I don't know that I've seen any kicks since then, although Eric thinks that he has, at least once. Anyhow, I think they might be gone for now. |
16873
|
Wed May 25 16:38:27 2022 |
yuta | Update | SUS | Suspensions quick health check | [JC, Yuta]
We did a quick health check of suspensions after the pump down.
Summary:
- ITMX LRSEN is too bright (~761) and not responding to any optic motions (we knew this before the pump down)
- ITMY ULCOIL is not working
- LO1 LLCOIL is not working
- Damping loops need to be retuned, especially for ETMY (too much damping), SRM, PR3 and AS4 (damping too weak)
- MC1 sensor outputs are minus instead of plus
- LO2 OSEMs got stuck during the pump down, but the optic is now free after some kicks. OSEM sensor values have almost come back (see attached)
What we did:
1. Kicked optics with C1:SUS-{optic}_{UL,LL,UR,LR,SD}COIL_OFFSET one by one with offsets of +/- 10000 (or 100000), and checked if C1:SUS-{optic}_{UL,LL,UR,LR,SD}SEN_OUT16 moves in both directions (a minimal sketch of this check is given after this list).
2. Checked that the optic damps nicely.
3. The attached photo of the note shows the result. |
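A minimal sketch of the coil-kick check in step 1, scripted with pyepics (an assumption on my part; the actual check was done by hand, and the optic/coil names and wait times below are only examples):
```python
# Minimal sketch of the coil kick / sensor readback check (assumes pyepics;
# the optic and coil names are examples, and the waits are rough).
import time
from epics import caget, caput

optic, coil = "LO1", "LL"
offset_ch = f"C1:SUS-{optic}_{coil}COIL_OFFSET"
sensor_ch = f"C1:SUS-{optic}_{coil}SEN_OUT16"

for sign in (+1, -1):
    before = caget(sensor_ch)
    caput(offset_ch, sign * 10000)   # apply the kick
    time.sleep(5)                    # let the sensor respond
    after = caget(sensor_ch)
    caput(offset_ch, 0)              # remove the offset
    print(f"{coil} kick {sign:+d}: sensor moved by {after - before:+.1f} counts")
    time.sleep(10)                   # let the damping settle before the next kick
```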
Attachment 1: Screenshot_2022-05-25_16-46-13.png
|
|
Attachment 2: OSEMcheck.JPG
|
|
107
|
Thu Nov 15 18:23:55 2007 |
John | HowTo | Computers | Swap CAPS and CTRL on a Windows 2000/XP machine | I've swapped ctrl and caps on the four control room Windows machines. Right ctrl is unchanged.
Start menu->Run "regedit"
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout
Click on the Keyboard Layout entry.
Edit->New->Binary Value, and name it Scancode Map.
Then select the new Scancode Map entry.
Edit menu->Modify Binary Data.
In the dialog box enter the following data:
0000: 00 00 00 00 00 00 00 00
0008: 03 00 00 00 3A 00 1D 00
0010: 1D 00 3A 00 00 00 00 00
Exit the Registry Editor. You need to log off and then on in XP (and restart in Windows 2000) for the changes to be made. |
3947
|
Thu Nov 18 14:19:01 2010 |
josephb | Update | CDS | Swapped c1auxex and c1auxey codes | Problem:
We had not switched the c1aux crates when we renamed the arms, thus the watchdogs labeled ETMX were really watching ETMY and vice-versa.
Solution:
I used telnet to connect to c1auxey, and then c1auxex.
I used the bootChange command to change the IP address of c1auxey to 192.168.113.59 (c1auxex's IP), and its startup script. Similarly c1auxex was changed to c1auxey and then both were rebooted.
c1auxey > bootChange
'.' = clear field; '-' = go to previous field; ^D = quit
boot device : ei
processor number : 0
host name : linux1
file name : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.60:ffffff00 192.168.113.59:ffffff00
inet on backplane (b):
host inet (h) : 192.168.113.20
gateway inet (g) :
user (u) : controls
ftp password (pw) (blank = use rsh):
flags (f) : 0x0
target name (tn) : c1auxey c1auxex
startup script (s) : /cvs/cds/caltech/target/c1auxey/startup.cmd /cvs/cds/caltech/target/c1auxex/startup.cmd
other (o) :
value = 0 = 0x0
c1auxex > bootChange
'.' = clear field; '-' = go to previous field; ^D = quit
boot device : ei
processor number : 0
host name : linux1
file name : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.59:ffffff00 192.168.113.60:ffffff00
inet on backplane (b):
host inet (h) : 192.168.113.20
gateway inet (g) :
user (u) : controls
ftp password (pw) (blank = use rsh):
flags (f) : 0x0
target name (tn) : c1auxex c1auxey
startup script (s) : /cvs/cds/caltech/target/c1auxex/startup.cmd /cvs/cds/caltech/target/c1auxey/startup.cmd
other (o) :
value = 0 = 0x0
|
11657
|
Thu Oct 1 20:26:21 2015 |
jamie | Update | DAQ | Swapping between fb and fb1 | Swapping between fb and fb1 as DAQ is very straightforward, now that they are both on the DAQ network:
- stop daqd on fb
- on fb sudoedit /diskless/root/etc/init.d/mx_stream and set: endpoint=fb1:0
- start daqd on fb1. The "new" daqd binary on fb1 is at: ~controls/rtbuild/trunk/build/mx-localtime/daqd
Once daqd starts, the front end mx_stream processes will be restarted by their monits, and be pointing to the new location.
Moving back is just reversing those steps. |
1135
|
Fri Nov 14 17:41:50 2008 |
Jenne | Omnistructure | Electronics | Sweet New Soldering Iron | The fancy new Weller Soldering Iron is now hooked up on the electronics bench.
Accessories for it are in the blue twirly cabinet (spare tips of different types, a CD, and a USB cable to connect it to a computer, should we ever decide to do so).
Rana: the soldering iron has a USB port? |
Attachment 1: newSolderingIron.JPG
|
|
689
|
Thu Jul 17 12:15:21 2008 |
Eric | Update | PSL | Swept PMC PZT voltage range | I unlocked the PMC and swept over C1:PSL-PMC_RAMP's full range a couple of times this morning. The PMC should now be relocked and returned
to normal. |
14339
|
Mon Dec 10 15:53:16 2018 |
gautam | Update | LSC | Swept-sine measurement with DTT | Disclaimer: This is almost certainly some user error on my part.
I've been trying to get this running for a couple of days, but am struggling to understand some behavior I've been seeing with DTT.
Test:
I wanted to measure some transfer functions in the simulated model I set up.
- To start with, I put a pendulum (f0 = 1Hz, Q=5) TF into one of the filter modules
- Isolated it from the other interconnections (by turning off the MEDM ON/OFF switches).
- Set up a DTT swept-sine measurement
- EXC channel was C1:OMC-TST_AUX_A_EXC
- Monitored channels were C1:OMC-TST_AUX_A_IN2 and C1:OMC-TST_AUX_A_OUT.
- Transfer function being measured was C1:OMC-TST_AUX_A_OUT/C1:OMC-TST_AUX_A_IN2.
- Coherence between the excitation and output were also monitored.
- Sweep parameters:
- Measurement band was 0.1 - 900 Hz
- Logarithmic, downward.
- Excitation amplitude = 1ct, waveform = "Sine"
Unexplained behavior:
- The transfer function measurement fails with a "Synchronization error", at ~15 Hz.
- I don't know what is special about this frequency, but it fails repeatedly at the same point in the measurement.
- Coherence is not 1 always
- Why should the coherence deviate from 1 since everything is simulated? I think numerical noise would manifest when the gain of the filter is small (i.e. high frequencies for the pendulum), but the measurement and coherence seem fine down to a few tens of Hz.
To see if this is just a feature of the simulated model, I tried measuring the "plant" filter in the C1:LSC-PRCL filter bank (which is also just a pendulum TF), and ran into the same error. I also tried running the DTT template on donatella (Ubuntu12) and pianosa (SL7), and got the same error, so this must be something I'm doing wrong with the way the measurement is being run / set up. I couldn't find any mention of similar problems in the SimPlant elogs I looked through; does anyone have an idea as to what's going on here?
* I can't get the "import" feature of DTT to work - I go through the GUI prompts to import an ASCII txt file exported from FOTON but nothing selectable shows up in DTT once the import dialog closes (which I presume means that the import was successful). Are we using an outdated version of DTT (GDS-2.15.1)? But Attachment #1 shows the measured part of the pendulum TF, and is consistent with what is expected until the measurement terminates with a synchronization error.
The import problem is fixed - when importing, you have to give names to the two channels that define the TF you're importing (these can be arbitrary since the ASCII file doesn't have any channel name information). Once I did that, the import works. You can see that, while the measurement ran, the foton TF matches the DTT-measured counterpart.
11 Dec 2pm: After discussing with Jamie and Gabriele, I also tried changing the # of points, start frequency etc, but ran into the same error (though admittedly I only tried 4 combinations of these, so not exhaustive). |
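For comparison, a minimal sketch of the pendulum plant used in the test (f0 = 1 Hz, Q = 5) evaluated with scipy rather than foton/DTT; the unity-DC-gain form is an assumption on my part, not the actual filter file.
```python
# Pendulum transfer function sketch, f0 = 1 Hz, Q = 5 (assumed unity DC gain;
# not the actual foton filter, just a reference curve for the DTT sweep).
import numpy as np
from scipy import signal

f0, Q = 1.0, 5.0
w0 = 2 * np.pi * f0
plant = signal.TransferFunction([w0 ** 2], [1.0, w0 / Q, w0 ** 2])

f = np.logspace(-1, np.log10(900), 500)            # 0.1 Hz to 900 Hz, as in the sweep
w, mag_db, phase_deg = signal.bode(plant, w=2 * np.pi * f)
print(f"gain at {f[0]:.1f} Hz = {10 ** (mag_db[0] / 20):.2f}, "
      f"at {f[-1]:.0f} Hz = {10 ** (mag_db[-1] / 20):.2e}")
```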
Attachment 1: SimTF.pdf
|
|
3589
|
Mon Sep 20 11:39:45 2010 |
josephb | Update | CDS | Switch over | I talked with Alex this morning, discussing what he needed to do to have a frame builder running that was compatible with the new front ends.
1) We need a heavy duty router as a separate network dedicated to data acquisition running between the front ends and the frame builder. Alex says they have one over at downs, although a new one may need to be ordered to replace that one.
2) The frame builder is a linux machine (basically we stop using the Sun fb40m and start using the linux fb40m2 directly.).
3) He is currently working on the code today. Depending on progress today, it might be installable tomorrow. |
815
|
Fri Aug 8 12:21:57 2008 |
josephb | Configuration | Computers | Switched X end ethernet connections over to new switch | In 1X4, I've switched the ethernet connections from c1iscex and c1auxex over to the new Prosafe 24 port switches. They also use the new cat6 cables, and are labeled.
At the moment, everything seems to be working as it was before. In addition:
I can telnet into c1auxex (and can do the same to c1auxey which I didn't touch).
I can't telnet into c1iscex (but I couldn't do that before, nor can I telnet into c1iscey either, and I think these are computers which once running don't let you in). |
852
|
Tue Aug 19 13:34:58 2008 |
josephb | Configuration | Computers | Switched c1pem1, c0daqawg, c0daqctrl over to new switches | Moved the Ethernet connections for c1pem1, c0daqawg, and c0daqctrl over to the Netgear Prosafe switch in 1Y6, using new cat6 cables. |
4791
|
Mon Jun 6 22:41:22 2011 |
rana | Update | SUS | Switching problem in SUS models | Some weeks ago, Joe, Jamie, and I reworked the ETMY controls.
Today we found that the model rebuilds and BURT restores have conspired to put the SUS damping into a bad state.
1) The FM1 files in the XXSEN modules should switch the analog shadow sensor whitening. I found today that, at least on ETMY and ETMX, they do nothing. This needs to be fixed before we can use the suspensions.
2) I found all of the 3:30 and cts2um buttons OFF AGAIN. There's something certainly wrong with the way the models are being built or BURTed. All of our suspension tuning work is being lost as a consequence. We (Joe and Jamie) need to learn to use CONLOG and check that the system is not in a nonsense state after rebuilds. Just because the monitors have lights and the MEDM values are fluctuating doesn't mean that "ITS WORKING". As a rule, when someone says "it seems to work", that basically means that they have no idea if anything is working.
3) We need a way to test that the CDS system is working... |
Attachment 1: a.pdf
|
|
15077
|
Thu Dec 5 14:54:15 2019 |
gautam | Update | General | Symlink to SRmeasure and AGmeasure | I symlinked the SRmeasure and AGmeasure commands to /usr/bin/ on donatella (as it is done on pianosa) so that these scripts are in $PATH and may be run without having to navigate to the labutils directory. |
13909
|
Fri Jun 1 19:25:11 2018 |
pooja | Update | Cameras | Synchronizing video data with the applied motion to the mirror | Aim: To synchronize data from the captured video and the signal applied to ETMX
In order to correlate the intensity fluctuations of the scattered light with the motion of the test mass, we are planning to use the technique of neural networks. For this, we need a video of the scattered light synchronised with the signal applied to the test mass. Gautam helped me capture a 60 sec video of the scattering of infrared laser light after ETMX was dithered in PITCH at ~0.2 Hz.
I developed a Python program to capture the video and convert it into a time series of the sum of pixel values in each frame using OpenCV, to see the variation. Initially we had tried the same with green laser light and a signal at approximately 11.12 Hz, but in order to see the variation clearly, we repeated it with a lower-frequency signal after locking the IR laser today. I have attached the plots that we got below. The first graph gives the intensity fluctuations from the video. The third and fourth graphs are those of the transmitted light and the signal applied to ETMX to shake it. Since the video captured using the camera was very noisy and the intensity fluctuations in the scattered light had twice the frequency of the applied signal, we also captured a video after turning off the laser. The second plot gives the background noise, probably from the camera. Since the camera noise is very high, it may not be possible to train a neural network on this data set.
Since the captured videos consume a lot of memory I haven't uploaded them here. I have uploaded the Python code 'sync_plots.py' to GitHub (https://github.com/CaltechExperimentalGravity/GigEcamera/tree/master/Pooja%20Sekhar/PythonCode).
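For reference, a minimal sketch of turning a video into a per-frame pixel-sum time series with OpenCV; this only illustrates the idea and is not the actual sync_plots.py, and the filename is a placeholder.
```python
# Minimal per-frame pixel-sum sketch with OpenCV (filename is a placeholder;
# not the actual sync_plots.py).
import cv2
import numpy as np

cap = cv2.VideoCapture("scatter_video.avi")
fps = cap.get(cv2.CAP_PROP_FPS)

sums = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sums.append(float(gray.sum()))
cap.release()

t = np.arange(len(sums)) / fps     # time axis, for syncing with the applied dither signal
print(len(sums), "frames at", fps, "fps")
```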
|
Attachment 1: camera_mirror_motion_plots.pdf
|
|
446
|
Thu Apr 24 23:50:10 2008 |
rana | Update | General | Syringes in George the Freezer | There are some packets of syringes in the freezer which are labeled as belonging to an S. Waldman.
Thu Apr 24 23:48:55 2008
Be careful of them, don't give them out to the undergrads, and just generally leave them alone. I
will consult with the proper authorities about it. |
16315
|
Tue Sep 7 18:00:54 2021 |
Tega | Summary | Calibration | System Identification via line injection | [paco]
This morning, I spent some time restoring the jupyter notebook server running on allegra. This server was first set up by Anchal to be able to use the latest nds python API tools, which is handy for the calibration stuff. The process to restore the environment was to run "source ~/bashrc.d/*" to restore some of the aliases, variables, paths, etc... that made the nds server work. I then ran ssh -N -f -L localhost:8888:localhost:8888 controls@allegra from pianosa and carried on with the experiment.
[paco, hang, tega]
We started a notebook under /users/paco/20210906_XARM_Cal/XARM_Cal.ipynb in which the first part does the following:
- Set up list of excitations for C1:LSC-XARM_EXC (for example three sine waveforms) using awg.py
- Make sure the arm is locked
- Read a reference time trace of the C1:LSC-XARM_IN2 channel for some duration
- Start excitations (one by one at the moment, ramptime ~ 3 seconds, same duration as above)
- Get data for C1:LSC-XARM_IN2 for an equal duration (raw data in Attachment #1)
- Generate the excitation sine and cosine waveforms using numpy and demodulate the raw timeseries using a 4th order lowpass filter with fc ~ 10 Hz
- Estimate the correct demod phase by computing arctan(Q / I) and rerunning the demodulation to dump the information into the I quadrature (Attachment #2).
- Plot the estimated ASD of all the quadratures (Attachment #3)
[paco, hang, tega]
Estimation of open loop gain:
- Grab data from the C1:LSC-XARM_IN1 and C1:LSC-XARM_IN2 test points
- Infer the excitation from their difference, i.e. C1:LSC-XARM_EXC = C1:LSC-XARM_IN2 - C1:LSC-XARM_IN1
- Compute the open loop gain as follows: G(f) = csd(EXC,IN1)/csd(EXC,IN2), where csd computes the cross spectral density of its input arguments
- For the uncertainty in G, dG, we repeat steps (1) to (3) with & without signal injection in the C1:LSC-XARM_EXC channel. In the absence of signal injection, the signal in C1:LSC-XARM_IN2 is of the form: Y_ref = Noise/(1-G), whereas with nonzero signal injection, the signal in C1:LSC-XARM_IN2 has the form: Y_cal = EXC/(1-G) + Noise/(1-G), so their ratio, Y_cal/Y_ref = EXC/Noise, gives the SNR, which we can then invert to give the uncertainty in our estimation of G, i.e dG = Y_ref/Y_cal.
- For the excitation at 53 Hz, our measurement of the open loop gain comes out to about 5 dB, which is consistent with previous measurements.
- We seem to have an SNR in excess of 100 with a measurement time of 35 seconds and 1 count of excitation amplitude, which gives a relative uncertainty on G of 0.1%
- The analysis details are ongoing. Feedback is welcome.
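For illustration, a minimal sketch of the single-line demodulation described above (this is not the actual notebook code; the sample rate, line frequency and synthetic data are stand-ins for C1:LSC-XARM_IN2, and it assumes numpy/scipy):
```python
# Single-line demodulation sketch (synthetic stand-in for C1:LSC-XARM_IN2;
# sample rate, line frequency and amplitudes are illustrative, not real values).
import numpy as np
from scipy import signal

fs, f_line, dur = 16384.0, 53.0, 35.0
t = np.arange(int(dur * fs)) / fs
data = 0.5 * np.sin(2 * np.pi * f_line * t + 0.3) + 0.01 * np.random.randn(t.size)

lp = signal.butter(4, 10.0, fs=fs, output="sos")        # 4th-order lowpass, fc ~ 10 Hz
I = signal.sosfilt(lp, data * np.cos(2 * np.pi * f_line * t))
Q = signal.sosfilt(lp, data * np.sin(2 * np.pi * f_line * t))

phi = np.arctan2(np.mean(Q), np.mean(I))                # demod phase estimate
I_rot = I * np.cos(phi) + Q * np.sin(phi)               # rotate signal into the I quadrature
print(f"recovered line amplitude ~ {2 * np.mean(I_rot):.3f} counts")
```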
|
Attachment 1: raw_timeseries.pdf
|
|
Attachment 2: demod_signals.pdf
|
|
Attachment 3: cal_noise_asd.pdf
|
|
17447
|
Fri Feb 3 17:41:26 2023 |
Paco | Summary | BHD | Systematic line noise hunting around BH44 | [Paco, Yuta]
We devised a plan to systematically hunt for the line noise source assuming it has something to do with BH44 and/or our recent changes in the LSC rack. Our noise estimate is the IMC_error point (TP1A) at the MC servo board. Our traces from Attachment #1 represent, in the following order:
PSL shutter open, IMC locked, C1:IOO-MC_SW1 = 1
PSL shutter closed, IMC unlocked, C1:IOO-MC_SW1 = 1
PSL shutter closed, IMC unlocked, C1:IOO-MC_SW1 = 0
PSL shutter open, IMC locked, C1:IOO-MC_SW1 = 1, rewired the delay box on LSC rack.
Same as above, plus we connected a loose "ground" wire to the delay box supply.
Same as above, plus we removed the PD interface unit connection.
Same as above plus we disconnected the 44 MHz local oscillators and terminated their outputs at the RF distribution box.
Same as above plus we disconnected the RFPD (BH44) from the IQ demod board at the LSC rack.
Where all the measurements used +4 dB input gain, 40:4000 filter enabled, and Boost = 0 settings on the MC servo board. In between measurements 3 and 4 we had to replace the SR560 (buffer) because the starting one ran out of battery... We found a good one in the YEND, used to buffer the OLTF excitation for the YAUX loop TF measurements. |
Attachment 1: 60Hzhunting.png
|
|
2474
|
Mon Jan 4 17:26:01 2010 |
Mott | Update | General | T & R plots for Y1 and Y1S mirrors | The most up-to-date T and R plots for the Y1 and Y1S mirrors, as well as a T measurement for the ETM, can be found on:
http://lhocds.ligo-wa.caltech.edu:8000/40m/Upgrade_09/Optics/RTmeasurement
|
17623
|
Wed Jun 7 10:35:42 2023 |
Radhika | Update | Daily Progress | T&R measurement setup for PR2 | [Radhika, Aaron, Mayank, Paco]
Here I'll describe the setup for T&R measurements of a PR2 replacement, using the Lightwave NPRO laser located at the S/E corner of the PSL table. Our transmissivity prior for this optic is ~20-30 ppm. Aaron and I outlined the setup for measuring the transmissivity of p- and s-polarizations using a chopper wheel for lock-in detection [Attachment 1].
JC found a Lightwave laser controller (in cabinet along YARM). Mayank and Paco helped start it up and adjust the current such that we can align with low power. I used the power meter sitting on PSL to record a quick laser calibration up to 160 mW (plot in Attachment 2 - I can go up to 500 mW in the future). Attachment 3 shows the location of the power meter when these points were collected. For alignment, I set the laser current to 0.94 A (~33 mW).
I removed most optics in the existing setup downstream of the Faraday isolator. I reused 2 PBS cubes and a HWP from the old setup (I still need a QWP). My progress as of 6/6 can be seen in Attachment 4.
Next:
- Acquire QWP
- Set up chopper with lock-in amplifier (SR830 or Moku:Go?)
- Align TRANS and REF PDs (using 2 Thorlabs PDA-100A) |
Attachment 1: IMG_4916.PNG
|
|
Attachment 2: lightwaveNPRO.pdf
|
|
Attachment 3: IMG_4908.JPG
|
|
Attachment 4: IMG_4912.PNG
|
|
17627
|
Fri Jun 9 11:12:05 2023 |
Radhika | Update | Daily Progress | T&R measurement setup for PR2 | After talking with Koji, the layout was revised as in Attachment 1. Instead of using reflection from the first PBS to obtain s-polarization, we'll use transmitted p-polarization and rotate to s with HWP2. This is because the reflection from a PBS isn't pure s-polarization. A second PBS will be used to verify that we have s-polarization (no light transmitted) and then removed for measurement.
I made changes to the setup as seen in Attachment 2. I found an aluminum block to raise the chopper wheel to the beam path. JC drilled a hole through the block so that the chopper wheel can be secured with a zip tie [Attachments 3-4].
Next, I'll mount one of the optics previously used by Anchal for T&R measurements. I'll see if this setup yields consistent results, and then proceed to the PR2 mirror. |
Attachment 1: IMG_4927.PNG
|
|
Attachment 2: IMG_4925.PNG
|
|
Attachment 3: IMG_4932.JPG
|
|
Attachment 4: IMG_4936.JPG
|
|
17640
|
Tue Jun 20 11:53:09 2023 |
Radhika | Update | Daily Progress | T&R measurement setup for PR2 | The transmissivity setup layout can be seen in Attachment 1. As in Aaron's measurements in cryo lab, the DUT beam passes through the outer spokes of the chopper wheel (frequency ftest). The reference beam passes through the inner spokes of the wheel (fref). The TRANS PD receives both beams; the REF PD only receives the reference beam. The DUT can be removed from the primary beam path. The only change to the initial design was the addition of a few folding mirrors to deal with space constraints.
The formula used for calculating the transmissivity:
T = (P_trans^DUT / P_ref^DUT) / (P_trans^0 / P_ref^0)    (1).
Here, P_trans and P_ref refer to the signals on the TRANS and REF PDs. The DUT superscript indicates the presence of the DUT; the 0 superscript indicates the DUT has been removed from the path. Note that, unlike equation (1), the naive T calculation (below) is sensitive to laser intensity fluctuations and to PD gain changes between the DUT and 0 measurements:
T_naive = P_trans^DUT / P_trans^0    (2).
I chose to test this setup with a known 90-10 beam splitter mounted at 45º incidence. However, its transmissivity calculated from equation (1) came out to be 4.5% - half of the expected value. The naive transmissivity from equation (2) came out to be less: 3.5%. It is possible that the angle of incidence was off from 45º, but I would be surprised if it were that significantly off.
Next I set up a simpler DC measurement of the same 90-10 BS [Attachment 2]. After a HWP and PBS, the beam transmitted through the optic to the PD. The transmission through the DUT was 359.1 mV. Removing the DUT, the PD measured 5.266 V. This results in a transmissivity of 6.9% - around double the naive T calculation from the full setup, but still not quite 10%. Looks like there's a mystery factor of 2 somewhere.
|
Attachment 1: IMG_4957.JPG
|
|
Attachment 2: IMG_4977.JPG
|
|
17666
|
Wed Jul 5 10:25:31 2023 |
Radhika | Update | Daily Progress | T&R measurement setup for PR2 | After recovering a transmissivity of ~7% for the 90-10 BS (in AC chopper configuration), I moved on to PR2. I mounted PR2 on a rotation stage to control the angle of incidence (AOI) [Attachment 1]. I did a quick check to see if transmitted light through the DUT was measureable on the TRANS PD on its existing gain setting (0 dB). It was, which means the dynamic range of the PD at that gain setting is large enough and the PD gain does not need to be adjusted when placing/removing the DUT. This simplified the transmissivity calculation, since the extra gain reference was no longer needed (the power reference from REF PD remains). The p-polarization setup can be seen in Attachment 2. For each AOI i:
T_i = (P_trans^DUT,i / P_ref^DUT,i) / (P_trans^0 / P_ref^0).
During the measurement, for each AOI I measured the spectra of the TRANS and REF PDs with the Moku:Go Spectrum Analyzer [Attachment 3]. I found their spectral peaks at ftest and fref respectively. Attachment 4 shows the p-polarization results for transmissivity vs. AOI. The blue points are calculated from the peak values found by the Moku:Go spectrum analyzer; the orange points are calculated from manual peak finding by taking an FFT of the time-series data. Excluding the 45 deg AOI, the transmissivities are below 1000 ppm, and are even smaller if using the manual FFT analysis. I will tweak the setup for s-polarization measurements this afternoon. |
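For reference, a minimal sketch of pulling the chopped-beam peak heights out of the PD time series with a PSD, as an alternative to reading them off the Moku (this is not the actual analysis code; the sample rate, chop frequencies and synthetic data are illustrative stand-ins, and it assumes numpy/scipy):
```python
# Sketch of extracting peak heights at the chop frequencies (synthetic stand-in
# data; sample rate and chop frequencies are illustrative, not the real values).
import numpy as np
from scipy import signal

fs, f_test, f_ref = 10e3, 400.0, 85.0
t = np.arange(int(10 * fs)) / fs
trans = 1e-3 * signal.square(2 * np.pi * f_test * t) + 1e-5 * np.random.randn(t.size)
ref = 0.5 * signal.square(2 * np.pi * f_ref * t) + 1e-5 * np.random.randn(t.size)

def peak_at(x, f0):
    """Spectral peak height in the bin nearest f0."""
    f, pxx = signal.welch(x, fs=fs, nperseg=int(fs))
    return np.sqrt(pxx[np.argmin(np.abs(f - f0))])

ratio_dut = peak_at(trans, f_test) / peak_at(ref, f_ref)   # with the DUT in place
# Repeat with the DUT removed and take the ratio of the two results, per the
# formula above, to get T_i for that angle of incidence.
print(ratio_dut)
```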
Attachment 1: IMG_5013.JPG
|
|
Attachment 2: IMG_5014.PNG
|
|
Attachment 3: PR2_5deg_trans_spectra_20230628_153213_Screenshot.png
|
|
Attachment 4: 2023-06-30_PR2_transmissivity_AOI.pdf
|
|
|