40m Log, Page 62 of 339
ID   Date   Author   Type   Category   Subject
  6025   Mon Nov 28 15:43:36 2011   steve   Update   RF System   EOM temp zoomed

Quote:

Quote:

Here is 5 days of trend of the EOM temp sensor and the heater driver monitor.  Unfortunately, it looks like we're regularly railing the heater.  Not so awesome. 

Can you zoom in on the temp mon? (V = -0.1 ~ +0.1)
The crystal was too cold, and I tried to heat the PSL table with the lighting, but it seemed to be in vain.

 It is not working

Attachment 1: eomtempmon.png
  5984   Wed Nov 23 00:30:14 2011   Zach   Update   RF System   EOM temperature controller trials

[Jenne, Zach]

We did some testing of the prototype temperature controller. When I left it late last night, it was not working in conjunction with the real heater and PT100 mounted to the EOM, but had been tested using simulated loads (a spare heater and a potentiometer for the RTD).

We measured each of the reference resistors carefully, as I should have done in the first place since they are only 1% tolerance (I am using 100-ohm ones in series with ~15-ohm ones, so they have a variation of +/- an ohm or so, which is consequential). We calculated the estimated zero-signal resistance of the RTD, then used a trimpot to verify that the AD620 output behaved as expected. We realized that I didn't tie the 620's reference to ground, so the output was floating around by a lot. Once we did that, the readout was still not working properly, but eventually magic happened and we got an appropriate signal. I did find that there was a discrepancy between the estimated zero-signal resistance and that measured across the trimpot with the readout nulled---this may be caused by a small offset in the 620, but is not important so long as the output still scales properly.
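
As a rough sanity check of the readout chain described above, here is a minimal sketch (the excitation current, AD620 gain, and exact reference-resistor value below are assumptions, not the values measured on the bench):

R0 = 100.0            # PT100 resistance at 0 deg C [ohm]
alpha = 3.85e-3       # standard PT100 temperature coefficient [1/K]
R_ref = 100.0 + 15.0  # series reference resistors, nominally 115 ohm (1% parts)
G = 100.0             # assumed AD620 gain
I_exc = 1.0e-3        # assumed RTD excitation current [A]

def pt100_resistance(T_C):
    """Linearized Callendar-Van Dusen relation: R(T) = R0 * (1 + alpha*T)."""
    return R0 * (1.0 + alpha * T_C)

def readout_voltage(T_C):
    """AD620 output for the imbalance between the RTD and the reference leg."""
    return G * I_exc * (pt100_resistance(T_C) - R_ref)

# zero-signal point: the temperature at which the RTD matches the references
T_zero = (R_ref / R0 - 1.0) / alpha
print("readout nulls near %.1f deg C" % T_zero)              # ~39 deg C here
print("output at 45 deg C: %.2f V" % readout_voltage(45.0))  # ~0.23 V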

Before trying it out again on the real McCoy, I tested the whole, closed-loop circuit with the spare EOM on Jenne's desk. The temperature oscillated at first, but a reduction of gain at the input stage of the driver allowed it to stabilize. The temperature of the EOM (sitting on the electronics bench) was kept constant with a control current that varied from ~40 - 70 mA, depending on how many people were around it, etc. This is pretty much perfect for the quiescent level, but it means that we might have to increase the baseline operating resistance of the PT100 (by changing the reference resistors) once it is sitting in a hot foam box. Otherwise, we will have no gain on the cooling side. I tested the circuit response by cupping my hands over the EOM to increase the temperature and ensuring that the current dropped so as to null the error signal. It worked pretty well, with a thermally limited bandwidth that I would estimate to be around 0.1 Hz.

I went to try it out on the PSL table, but again it didn't work. It turned out that this time I had broken one of the soldered connections from the broken-out D-sub cable to the (tiny) wires going to the PT100, so there was no temperature signal. I resoldered it, but I forgot that there is a thin insulating layer on the wire, so no connection was made. Frank tutored Jenne on how to properly strip these wires without damaging the core, but alas I didn't pay attention.

The RTD/heater/D-sub package lies in wait on Jenne's desk, where I have left an apologetic note. Once it is fixed, we should be able to finally hook it up for realz.

  4122   Fri Jan 7 00:14:36 2011   kiwamu   Update   IOO   EOM triple resonant box is working

 I checked the triple resonant box for the broadband EOM this afternoon, and found it was healthy.

So I installed it again together with a power combiner and succeeded in locking the MC.

Since the box has a non-50 Ohm input impedance at 29.5 MHz, the phase of the LO may need to be adjusted for the demodulation of the PDH signal.

A good thing is that now we are able to impose the other sidebands (i.e. 11 MHz and 55MHz) via the power combiner.

  2691   Sun Mar 21 21:02:39 2010   Koji   Update   PSL   EOM waist size
You don't need a lengthy code for this. It is obvious that the spot size at the distance L is minimum when L =
zR, where zR is the Rayleigh range. That's all.

Then compare the spot size and the aperture size whether it is enough or not.

It is not your case, but if damage is the concern, just escape to the large-zR side. If that is not possible
because of the aperture size, your EOM is not adequate for your purpose.
  2690   Sun Mar 21 20:08:20 2010   kiwamu, rana   Update   PSL   EOM waist size

We are going to set the waist size to 0.1 mm for the beam going through the triple resonant EOM on a new PSL setup.

When we were drawing a new PSL diagram, we just needed to know the waist size at the EOM in order to think about mode matching.

waist.png

This figure shows the relation between the waist size and the spot size at the aperture of the EOM.

The x-axis is the waist size, the y-axis is the spot size. It is clear that there is a big clearance at 0.1 mm waist size. This is good.

Also it is good because, at this waist size, the intensity is well below the damage threshold of the EO crystal (assuming 1 W input).

The attached file is the python code for making this plot.
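
Since the attachment isn't inlined here, a minimal sketch of the relation being plotted (and of the L = zR point from the previous entry) might look like the following; the wavelength and the waist-to-aperture distance are assumed placeholder values, not necessarily the ones used for the real plot:

import numpy as np
import matplotlib.pyplot as plt

lam = 1064e-9             # wavelength [m]
L = 0.10                  # assumed distance from the waist to the EOM aperture [m]

w0 = np.linspace(0.02e-3, 0.3e-3, 500)    # candidate waist radii [m]
zR = np.pi * w0**2 / lam                  # Rayleigh range for each waist
w_aper = w0 * np.sqrt(1.0 + (L / zR)**2)  # spot radius at the aperture

plt.plot(w0 * 1e3, w_aper * 1e3)
plt.xlabel('waist size [mm]')
plt.ylabel('spot size at the EOM aperture [mm]')

# Koji's point from the previous entry: at fixed L the spot is smallest when zR = L
w0_opt = np.sqrt(lam * L / np.pi)
print('optimum waist %.3f mm -> minimum spot %.3f mm at L = %.2f m'
      % (w0_opt * 1e3, np.sqrt(2) * w0_opt * 1e3, L))
plt.show()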

Attachment 2: waist.py.zip
  963   Thu Sep 18 12:16:01 2008   Yoichi   Update   Computers   EPICS BACK

Quote:

Somehow the EPICS system got hosed tonight. We're pretty much dead in the water till we can get it sorted.


The problem was caused by the installation of a DNS server into linux1 by Joe.
Joe removed the /etc/hosts file after getting the DNS server (bind) running. This somehow prevented proper booting of
the frontend computers.
Joe and I confirmed that putting back the /etc/hosts file resolved the problem.
Right now, the DNS server is also running on linux1.

We are not sure why the /etc/hosts file is still necessary. My guess is that the NFS server somehow reads /etc/hosts
when it decides which computers to allow to mount. We will check this later.

Anyway, now the computers are mostly running fine. The X-arm locks.
The Y-arm doesn't, because one of the digital filters for the Y-arm lock fails to be loaded to the frontend.
I'm working on it now.
  964   Thu Sep 18 13:05:05 2008   Yoichi   Update   Computers   EPICS BACK

Quote:

The Y-arm doesn't, because one of the digital filters for the Y-arm lock fails to be loaded to the frontend.
I'm working on it now.


Rob told me that the filter "3^2:20^2" is switched on/off dynamically by the front end code for the LSC.
Therefore, the failure to manually load it was not actually a problem.
The Y-arm did not lock just because the alignment was bad.
Now the Y-arm alignment is ok and the arm locks.
  961   Thu Sep 18 01:14:23 2008   rob   Summary   Computers   EPICS BAD

Somehow the EPICS system got hosed tonight. We're pretty much dead in the water till we can get it sorted.

The alignment scripts were not working: the SUS_[opt]_[dof]_COMM CA clients were having consistent network failures.
I figured it might be related to the network work going on recently--I tried rebooting the c1susaux (the EPICS VME
processor in 1Y5 which controls all the vertex angle biases and watchdogs). This machine didn't come back after
multiple attempts at keying the crate and pressing the reset button. All the other cards in the crate are displaying
red FAIL lights. The MEDM screens which show channels from this processor are white. It appears that the default
watchdog switch position is OFF, so the suspensions are not receiving any control signals. I've left the damping loops
off for now. I'm not sure what's going on, as there's no way to plug in a monitor and see why the processor is not coming up.

A bit later, the c1psl also stopped communicating with MEDM, so all the screens with PSL controls are also white. I didn't try
rebooting that one, so all the switches are still in their nominal state.
  12610   Thu Nov 10 19:02:03 2016   gautam   Update   CDS   EPICS Freezes are back

I've been noticing over the last couple of days that the EPICS freezes are occurring more frequently again. Attached is an instance of StripTool traces flatlining. Not sure what has changed recently in terms of the network to cause the return of this problem... Also, they don't occur simultaneously on multiple workstations, but they do pop up on both pianosa and rossa.

Not sure if it is related, but we have had multiple slow machine crashes today as well. Specifically, I had to power cycle C1PSL, C1SUSAUX, C1AUX, C1AUXEX, C1IOOL0 at some point today

Attachment 1: epicsFreezesBack.png
  12172   Mon Jun 13 19:30:58 2016   Aakash   Update   General   EPICS Installation | SURF 2016

About acquiring data: Initially I couldn't start with the proper Acromag setup as the Raspberry Pi had a faulty SD card slot. Then Gautam gave me a working Pi on which I tried to install EPICS. I spent quite some time today but couldn't set up the Acromag over Ethernet. But it would be great if we had a USB DAQ card. I have found a good one here: http://www.mccdaq.com/PDFs/specs/USB-200-Series-data.pdf. It costs around $106 including shipping (it comes with some free software for acquiring data). I also know of another Python-based 12-bit DAQ card (with an inbuilt constant current source) which is made by IUAC, Delhi; more information can be found here: http://www.iuac.res.in/~elab/expeyes/Documents/eyesj-progman.pdf. It costs around $60 including shipping.

About temperature sensing: The RTD which I found in Omega's catalog has a temperature resolution of 0.1 deg C. I have also asked them about one with better resolution. Also, according to their reply, they have not performed any noise characterization of those RTDs.

 

  12176   Tue Jun 14 11:52:08 2016   Johannes   Update   General   EPICS Installation | SURF 2016

We generally want to keep the configuration of the 40m close to that of the LIGO sites, which is why we chose BusWorks, and it is also being established as a standard in other labs on campus. Of course any suitable DAQ system can do the job, but to stay relevant we generally try to avoid patchwork solutions when possible. Did you follow Aidan's instructions to the letter? I haven't set up a system myself yet, so I cannot say how difficult this is. If it just won't work with the Raspberry Pi, you could still try using a traditional computer.

Alternatively, following Jamie's suggestions, I'm currently looking into using python for the modbus communications (there seem to be at least a few python packages that can do this), which would reportedly make the interfacing and integration a lot easier. I'll let you know when I make any progress on this.
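
For reference, a minimal sketch of what such a Python/Modbus readout might look like (the IP address and register map are placeholders, and the pymodbus import path and keyword names differ slightly between versions):

from pymodbus.client import ModbusTcpClient   # older releases: pymodbus.client.sync

client = ModbusTcpClient("192.168.113.100")   # assumed address of the Acromag unit
if client.connect():
    # read two 16-bit input registers starting at address 0 (assumed register map)
    result = client.read_input_registers(address=0, count=2)
    if not result.isError():
        print("raw ADC counts:", result.registers)
    client.close()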

Quote:

About acquiring data: Initially I couldn't start with proper Acromag setup as the Raspberry pi had a faulty SD card slot. Then Gautam gave me a working pi on which I tried to install EPICS. I spent quite a time today but couldn't setup acromag over ethernet.  But, it would be great if we have a USB DAQ card. I have found a good one here http://www.mccdaq.com/PDFs/specs/USB-200-Series-data.pdf It costs around 106$ including shipping (It comes with some free softwares for acquiring data) . Also, I know an another python based 12bit DAQ card (with an inbuilt constant current source) which is made by IUAC, Delhi and more information can be found here http://www.iuac.res.in/~elab/expeyes/Documents/eyesj-progman.pdf  It costs around 60$ including shipping.

About temperature sensing: The RTD which I found on Omega's list is having a temperature resolution of 0.1 deg C. I have also asked them for the one with good resolution. Also according to their reply, they have not performed any noise characteristics study for those RTDs.

 

 

  149   Fri Nov 30 19:46:58 2007   rana   Configuration   Computers   EPICS Time Bad again
The time on the EPICS screens is off by 10 minutes again. Why?

It's because the ntpd on scipe25 wasn't restarted after the last boot. If someone
knows how to put the ntpd startup into that machine's boot sequence, please do so.

This time I started it up by just sshing in as controls and then entering:

sudo /usr/sbin/ntpd -c /etc/ntp.conf

which runs it as root and points to the right file.

It takes a few minutes to get going because all of the martian machines have to first fail to
connect to the worldwide pool servers (e.g. 0.pool.ntp.org) before they move on and try linux1
which has a connection to the world. Once it gets it you'll see the time on the EPICS screens
freeze. It then waits until the ntp time catches up with its old, wrong time before updating
again.

According to Wikipedia, this time is then good to 128 ms or less.
  10937   Fri Jan 23 18:08:10 2015   Jenne   Update   CDS   EPICS freezes

So, I neglected to elog this yesterday, but yesterday we had one of those EPICS freezes that only affects slow channels that come from the fast computers.  It lasted for about 5 minutes.

Right now, we're in the middle of another - it's been about 7 minutes so far. 

Why are these happening again?  I thought we had fixed whatever was the issue.

EDIT:  And, it's over after a total of about 8 minutes.

  11428   Sat Jul 18 16:03:00 2015   jamie   Update   CDS   EPICS freezes persist

I notice that the periodic EPICS freezes persist.  They last for 5-10 seconds.  MEDM completely freezes up, but then it comes back.

The sites have been noticing similar issues on a less dramatic scale.  Maybe we can learn from whatever they figure out.

  10178   Thu Jul 10 17:53:19 2014   Akhil   Configuration   Computer Scripts / Programs   EPICS installed on Raspberry Pi

 I finished the installation of EPICS base on Raspberry Pi (domenica:  /opt/epics). I tested it by creating a test SoftIoc (controls@domenica~/freqCountIOC/simple.db) and was able to read from the channel on Chiara.

Now I am looking into how to call my C code that talks to the Raspberry Pi from a .db file and write its output into the assigned channel.
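
For reference, reading the soft IOC's channel from another machine can be done with pyepics; a minimal sketch (the record name below is a placeholder for whatever simple.db actually defines):

from epics import caget

value = caget("C1:TST-FREQ_COUNT")   # hypothetical channel served by the soft IOC
print("current value:", value)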

For installation I had to make these declarations in the environment (~/.bash_aliases):

# EPICS extensions paths appended to the login environment.  (EPICS_ROOT,
# EPICS_HOST_ARCH and EPICS_BASE_LIB are assumed to be defined earlier in the
# same file or in the EPICS base setup.)
export EPICS_EXT=${EPICS_ROOT}/extensions
export EPICS_EXT_BIN=${EPICS_EXT}/bin/${EPICS_HOST_ARCH}
export EPICS_EXT_LIB=${EPICS_EXT}/lib/${EPICS_HOST_ARCH}
# initialize LD_LIBRARY_PATH if it is empty, otherwise append to it
if [ "" = "${LD_LIBRARY_PATH}" ]; then
    export LD_LIBRARY_PATH=${EPICS_EXT_LIB}
else
    export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${EPICS_BASE_LIB}
fi
export PATH=${PATH}:${EPICS_EXT_BIN}

 

  13852   Thu May 17 11:56:37 2018   gautam   Update   General   EPICS process died on c1ioo

The EPICS process on the c1ioo front end had died mysteriously. As a result, MC autolocker wasn't working, since the autolocker control variables are EPICS channels defined in the c1ioo model. I restarted the model, and now MCautolocker works.

  13732   Thu Apr 5 19:31:17 2018   gautam   Update   CDS   EPICS processes for c1ioo dead

I found all the EPICS channels for the model c1ioo on the FE c1ioo to be blank just now. The realtime model itself seemed to be running fine, judging by the IMC alignment (the WFS loops seemed to still be running okay). I couldn't find any useful info in dmesg, but I don't know what I'm looking for. So my guess is that somehow the EPICS process for that model died. Unclear why.

  498   Sun May 25 21:14:14 2008   tobin   Configuration   Computers   EPICS proxy server
I set up an EPICS gateway server on Nodus so that we can look at 40m MEDM screens from off-site.
The gateway is set up to allow read access to all channels and write access to none of them.

The executable is /cvs/cds/epics/extensions/gateway; it was already installed. A script to start
up the gateway is in target/epics-gateway. For the time being, I haven't set it up to start itself
on boot or anything like that.

To make it work, you have to set the environment variable EPICS_CA_ADDR_LIST to the IP address of
Nodus. For instance, something like this should work:

setenv EPICS_CA_ADDR_LIST 131.215.115.52

On Windows you can set up environment variables in the "System" Control Panel. On one of the tabs
there's a button that lets you set up environment variables that will be visible to all programs.
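
Equivalently, a Python client can point channel access at the gateway through the same environment variables; a minimal sketch using pyepics (the channel name is just an example):

import os

# point channel access at the gateway on nodus before importing pyepics
os.environ["EPICS_CA_ADDR_LIST"] = "131.215.115.52"
os.environ["EPICS_CA_AUTO_ADDR_LIST"] = "NO"   # don't also search the local subnet

from epics import caget
print(caget("C1:PSL-FSS_RMTEMP"))   # read-only access through the gateway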

On Andrey's machine I installed the Windows EPICS extensions, i.e. MEDM and its friends. I also
installed the cool Tortoise SVN client which lets you interact with SVN repositories through
the windows explorer shell. (The right-click menu now contains SVN options.) I checked out
the MEDM directory from the 40m SVN onto the desktop. You should be able to just right-click in
that window and choose "SVN Update" to get all the newest screens that have been contributed to
SVN; however, there are currently some problems with the 40m SVN that make that not go smoothly.

At the moment on Andrey's (Windows) machine you can go into the MEDM folder and double-click on
any screen and it will just work, with the exception that not all the screens are installed
due to SVN difficulties.
  10769   Tue Dec 9 03:41:06 2014   Jenne   Update   CDS   EPICS running slow - network issue?

[Jamie, EricQ, Jenne, Diego]

This is something that we discussed late Friday afternoon, but none of us remembered to elog. 

We have been noticing that EPICS seems to run pretty slowly, and in fact twice last week froze for ~2 minutes or so (elog 10756). 

On Friday, we plotted several traces on StripTool, such as C1:SUS-ETMY_QPD_SUM_OUTPUT and C1:SUS-ETMY_TRY_OUTPUT and C1:LSC-TRY_OUTPUT to see if channels with (supposedly) the same signal were seeing the same sample-holding.  They were.  The issue seems to be pretty wide spread, over all of the fast front ends.  However, if we look at EPICS channels provided by the old analog computers, they do not seem to have this issue. 

So, Jamie posits that perhaps we have a network switch somewhere that is connected to all of the fast front end computers, but not the old slow machines, and that this switch is starting to fail. 

My understanding is that the boys are on top of this, and are going to figure it out and fix it.

  2067   Thu Oct 8 11:10:50 2009   josephb, jenne   Update   Computers   EPICS Computer troubles

At around 9:45 the RFM/FB network alarm went off, and I found c1asc, c1lsc, and c1iovme not responding. 

I went out to hard restart them, and also c1susvme1 and c1susvme2 after Jenne suggested that.

c1lsc seemed to have a promising comeback initially, but not really.  I was able to ssh in and run the start command.  The green light under c1asc on the RFMNETWORK status page lit, but the reset and CPU usage information is still white, as if it's not connected.   If I try to load an LSC channel, say the PD5_DC monitor, as a testpoint in DTT it works fine, but the 16 Hz monitor version for EPICS is dead.  The fact that we were able to ssh into it means the network is working at least somewhat.

I had to reboot c1asc multiple times (3 times total), waiting a full minute on the last power cycle, before being able to telnet in.  Once I was able to get in, I restarted the startup.cmd, which did set the DAQ-STATUS to green for c1asc, but it's having the same lack of communication with EPICS as c1lsc.

c1iovme was rebooted, was able to telnet in, and started the startup.cmd.  The status light went green, but still no EPICS updates.

The crate containing c1susvme1 and c1susvme2 was power cycled.  We were able to ssh into c1susvme1 and restart it, and it came back fully: status light, CPU load and channels working.  However, c1susvme2 was still having problems, so I power cycled the crate again.  This time c1susvme2 came back, the status light lit green, and its channels started updating.

At this point, lacking any better ideas, I'm going to do a full reboot, cycling c1dcuepics and proceeding through the restart procedures.

  12005   Tue Feb 23 17:46:09 2016   gautam   Update   SUS   EQ

Looks like another EQ (4.8) took out all the watchdogs. I've restored them; everything looks alright, and it doesn't look like any magnets got stuck this time...

Attachment 1: SUS_Summary_23Feb.png
  11935   Tue Jan 19 10:25:53 2016   Steve   Update   SUS   EQ 3.6M Ludlow

Just another local earthquake: 3.6 Mag at Ludlow, CA.

No obvious sign of damage

 

Attachment 1: EQ_3.6M_Ludlow.png
  14323   Thu Nov 29 08:13:33 2018   Steve   Update   PEM   EQ 3.9m So CA

The EQ did not trip anything (see Attachment 1).

Just a REMINDER: our vacuum system is at atmosphere to accommodate the vacuum controls upgrade to Acromag.

Exceptions: cryo pump and 4 ion pumps

It is our first rainy day of the season. The roof is not leaking.

Vac Status: The vac rack power was cycled yesterday, and power to controllers TP1, TP2, and TP3 was restored (see Attachment 3).

VME is OFF. Power to all other instruments is ON. 23.9 V DC, 0.2 A.

ETMY sus tower with locked optic in HEPA tent at east end is standing by for action.

 

Attachment 1: 3.9mSoCA.png
Attachment 2: Vac_as_today.png
Attachment 3: as_is.png
  12001   Mon Feb 22 08:45:46 2016   Steve   Update   SUS   EQ 4.3 grabs ITMX-UL magnet

A local EQ 4.3 kicked the ITMX-UL magnet into a stuck position.

Hopefully it is only sticking.

Attachment 1: EQ4.3LucerneV.png
Attachment 2: ITMX-UL.png
  16342   Fri Sep 17 20:22:55 2021   Koji   Update   SUS   EQ M4.3 Long beach

EQ  M4.3 @longbeach
2021-09-18 02:58:34 (UTC) / 07:58:34 (PDT)
https://earthquake.usgs.gov/earthquakes/eventpage/ci39812319/executive

  • All SUS Watchdogs tripped, but the SUSs looked OK except for the stuck ITMX.
  • Damped the SUSs (except ITMX)
  • IMC automatically locked
  • Turned off the damping of ITMX and shook it only with the pitch bias -> Easily unstuck -> damping recovered -> realignment of the ITMX probably necessary.
  • Done.
  7282   Mon Aug 27 09:24:17 2012   Steve   Update   SUS   EQ damage

  It looks like we may have lost 1 (or 3) magnets. Do not panic; it's not for sure.

 

Attachment 1: eqDamage.png
  7283   Mon Aug 27 10:49:03 2012   Koji   Update   SUS   EQ damage

After shaking ITMX by the alignment bias in yaw, it came back.

As ETMX seems to be largely misaligned in yaw (and did not come back with the alignment impact),
the condition of the magnets is not clear. Only the side OSEM is responding nicely.

Quote:

  It looks like we may lost 1 (or 3 )  magnets? Do not panic, it's not for sure

 

  7286   Mon Aug 27 15:49:46 2012   Jenne   Update   SUS   EQ damage

Quote:

After shaking ITMX by the alignment bias in yaw, it came back.

As ETMX seems to be largely misaligned yaw (and did not come back with the alignment impact),
the condition of the magnets are not clear. Only the side OSEM is responding nicely.

Quote:

  It looks like we may lost 1 (or 3 )  magnets? Do not panic, it's not for sure

 

I tried to take some photos through the window of ETMX's chamber, to see if I could see any magnets.  What we have learned is that Jenne is still not the world's best photographer.  I was holding the camera at ~max zoom inside the beam tube between the table and the window, so that's my excuse for the photos being fuzzy.  The only thing that I can really conclude is that the magnets look like they are still there, but Jamie thinks they may be stuck on the PDs/LEDs (now looking at the photos myself, I agree, especially with UL and LR). 

It looks like the best thing to do at this point, since Koji already tried jerking ETMX around in yaw a little bit, is just wait and open the door, to see what's going on in there.  I posted the photos on Picasa:

https://picasaweb.google.com/foteee/ETMX_MaybeStuck_ThroughWindow_27Aug2012

I propose that, if the magnets are broken, we pull the ETM out of the chamber and fix it up in the cleanroom while we pump back down.  This would restrict us from doing any Xarm work, but will force me to focus on DRMI, and we can put the ETM back when we vent to install the tip tilts.

  17003   Thu Jul 14 19:09:51 2022   rana   Update   General   EQ recovery

There was an EQ in Ridgecrest (approximately 200 km north of Caltech). It was around 6:20 PM local time.

All the suspensions tripped. I have recovered them (after some struggle with the weird profusion of multiple conflicting scripts/ directories that have appeared in the recent past...)

ETMY is still giving me some trouble. Maybe because of the HUGE bias on it within the fast CDS system, it had some trouble damping. Also, the 'reenable watchdog' script in one of the many scripts directories seems to do a bad job. It re-enables the optics, but doesn't make sure that the beams are on the optical lever QPDs, and so the OL servo can smash the optic around. This is not good.

Also what's up with the bashrc.d/ in some workstations and not others? Was there something wrong with the .bashrc files we had for the past 15 years? I will revert them unless someone puts in an elog with some justification for this "upgrade".

This new SUS screen is coming along well, but some of the fields are white. Are they omitted or is there something non-functional in the CDS? Also, the PD variances should not be in the line between the servo outputs and the coil. It may mislead people into thinking that the variances are of the coils. Instead, they should be placed elsewhere as we had it in the old screens.

Attachment 1: ETMY-screen.png
ETMY-screen.png
  15428   Wed Jun 24 22:33:44 2020   gautam   Update   SUS   EQ tripped all suspensions

This earthquake tripped all suspensions and ITMX got stuck. The watchdogs were restored and the stuck optic was released. The IFO was re-aligned, POX/POY and PRMI on carrier locking all work okay.

  15261   Sat Mar 7 15:18:30 2020   gautam   Update   SUS   EQ tripped some suspensions

An earthquake around 03:30 UTC (= 7:30 pm yesterday evening) tripped the ITMX, ITMY and ETMX watchdogs. ITMX got stuck. I released the stuck optic and re-enabled the local damping loops just now.

Attachment 1: EQ_6Mar.png
  12687   Thu Dec 29 10:24:56 2016   Steve   Update   PEM   EQ 5.7m Hawthorne NV

Sus damping restored.

 

Attachment 1: eq5.7HawthorneNV.png
  12673   Thu Dec 8 07:56:05 2016   Steve   Update   PEM   EQ 6.5m Northern CA

No damage. ITMY is glitching, so it has not been damped.

 

Attachment 1: eq6.5FerndaleCA.png
Attachment 2: 16d_glitching-_trend.png
Attachment 3: EQ6.5_&4.7mFerndaleCa.png
  13011   Wed May 24 18:19:15 2017   Kaustubh   Update   General   ET-3010 PD Test

Summary:

In continuation of the previous test conducted on the ET-3040 PD, I performed a similar test on the ET-3010 model. This model requires a fiber-coupled input for proper testing, but I tested it in free space without a fiber coupler, as the laser power was only 1.00 mW and there was not much danger from scattering of the laser beam. The data sheet can be found here.

Procedure:

The schematic (attached below) and the procedure are the same as the previous time. The pump current was set to 19.5 mA, giving a laser beam of power 1.00 mW at the fiber coupler output. The measured voltage for the reference detector was 1.8 V. For the DUT, the voltage is amplified using a low noise amplifier (model SR-560) with a gain of 100. Without any laser incident on the DUT, the multimeter reads 120.6 mV. After aligning the laser with the DUT, the multimeter reads 348.5 mV, i.e. the voltage due to the light on the DUT is (348.5 - 120.6)/100 = 227.9/100 ~ 2.28 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity at 1064 nm is around 0.75 A/W. Using this we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance of the DUT is 50 Ohm and its responsivity is around 0.85 A/W. Using this we calculate the power at the DUT to be 0.054 mW. After this we connect the laser input to the Network Analyzer (AG4395A) and drive it with an RF signal at -10 dBm, swept in frequency from 100 kHz to 500 MHz. The RF output from the analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector is given to Channel A (CHA) and the output from the DUT is given to Channel B (CHB). We got plots of the ratios between the reference detector, the DUT, and the coupled reference for the transfer function and the phase. I stored the data under the directory .../scripts/general/netgpibdata/data. The Bode plot is attached below; from it we observe that the cut-off frequency of the ET-3010 model is at least above 500 MHz (stated as >1.5 GHz in the data sheet).
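
For reference, the incident-power numbers quoted above follow from P = V_DC / (transimpedance x responsivity); a quick sketch of the arithmetic:

def incident_power(v_dc, transimpedance, responsivity):
    """Incident optical power from the DC voltage of a photodiode readout."""
    return v_dc / (transimpedance * responsivity)

# reference detector: 1.8 V across 10 kOhm, 0.75 A/W at 1064 nm
P_ref = incident_power(1.8, 10e3, 0.75)               # ~0.24 mW
# DUT (ET-3010): (348.5 - 120.6) mV / gain of 100, across 50 Ohm, 0.85 A/W
v_dut = (348.5e-3 - 120.6e-3) / 100.0                 # ~2.28 mV
P_dut = incident_power(v_dut, 50.0, 0.85)             # ~0.054 mW
print("P_ref = %.3f mW, P_dut = %.4f mW" % (P_ref * 1e3, P_dut * 1e3))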

Result:

The bandwidth of the ET-3010 PD is at least 500 MHz; the data sheet states >1.5 GHz.

Precaution:

The ET-3010 PD has an internal power supply of 6V. Don't leave the PD connected to any instrument after the experimentation is done or else the batteries will get drained if there is any photocurrent on the PDs.

To Do:

Calibrate the vertical axis in the Bode plot with transimpedance (Ohms) for the two PDs. Automate the procedure by writing a Python script to take multiple sets of readings from the Network Analyzer and also plot the error bands.

Attachment 1: PD_test_setup.png
Attachment 2: ET-3010_test.pdf
Attachment 3: ET-3010_test.zip
  2424   Wed Dec 16 20:29:08 2009   rana   Update   COC   ETM Coating study

This plot shows the Transmission for 532 and 1064 nm as a function of the thickness of the SiO2 layer.

i.e. the thickness is constrained so that the optical thickness of the SiO2 and Ta2O5 pair is always 1/2 of a wavelength.

The top layer of the mirror is also fixed in this plot to be 1/2 wave.

This plot shows the result for 17 pairs. For 16 pairs, we are unable to get as low as 15 ppm for the 1064 nm transmission.

Attachment 1: layerfrac.png
  2157   Wed Oct 28 17:20:21 2009   rana   Summary   COC   ETM HR reflectivity plot

This is a plot of the R and T of the existing ETM's HR coating. I have only used 1/4 wave layers (in addition to the standard 1/2 wave SiO2 cap on the top) to get the required T.

The spec is a T = 15 ppm +/- 5 ppm. The calculation gives 8 ppm which is close enough. The calculated reflectivity for 532 nm is 3%. If the ITM reflectivity is similar, the signal for the 532 nm locking of the arm would look like a Michelson using the existing optics.
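
For reference, a transfer-matrix sketch of a quarter-wave HR stack reproduces the ballpark scaling with the number of layer pairs; the indices below are generic textbook values at 1064 nm, not the actual coating design, so the exact ppm differs from the 8 ppm quoted above.

import numpy as np

n0, nL, nH, ns = 1.00, 1.45, 2.07, 1.45   # air, SiO2, Ta2O5, fused-silica substrate

def qw_layer(n):
    """Characteristic matrix of a quarter-wave layer (phase thickness pi/2)."""
    return np.array([[0.0, 1j / n], [1j * n, 0.0]])

def transmission(pairs):
    # (HL)^N with the H layer on the incident side; the half-wave SiO2 cap is
    # omitted since it is an identity layer at the design wavelength
    M = np.eye(2, dtype=complex)
    for _ in range(pairs):
        M = M @ qw_layer(nH) @ qw_layer(nL)
    B, C = M @ np.array([1.0, ns])
    r = (n0 * B - C) / (n0 * B + C)
    return 1.0 - abs(r) ** 2                # lossless stack: T = 1 - R

for N in (16, 17):
    print("%d pairs: T ~ %.1f ppm" % (N, transmission(N) * 1e6))
# -> roughly 31 ppm for 16 pairs and 15 ppm for 17 pairs with these indices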

etm_40_1998.png

  10397   Thu Aug 14 23:19:49 2014   rana   Summary   LSC   ETM Violin fundamental filters moved to LSC

 We used to do violin mode and test mass body mode notches in the SUS-LSC filter modules. Now we want them balanced in the LSC and triggered by the LSC, so they're in the filter modules which go from the LSC output matrix to the SUS.

01.png

Today, we were getting ETM violin mode ringups while doing ALS hunt and so we moved the bandstops into the LSC. I also changed the bandstop from a wide one which missed the ETMX mode to a double bandstop which gets both the ETMX and the ETMY mode. See attached image of the Bode mag.

03.png

  9469   Fri Dec 13 19:33:56 2013   Den   Update   ASC   ETM X,Y QPDs

I have modified/compiled/installed/restarted c1scx and c1scy models to include arm transmission QPDs in angular controls.

For an initial test I have wired the normalized QPD pitch and yaw outputs to the ASC inputs of the ETMs. This was done to keep the signals inside the model.

QPD signals are summed with the ASS dither lines and control, so do not forget to turn off the QPD output before turning on dither alignment.

MEDM screens were made and put in the medm/c1sc{x,y}/master directory. Access from the sitemap is QPDs -> ETM{X,Y} QPD.

  14103   Wed Jul 25 14:45:59 2018   Sandrine   Summary   Thermal Compensation   ETM Y Table AUX read out

Attached is a photo of the set up of the ETM Y table showing the AUX read out set up. 

Currently, the flip mount sends the AUX to the PDA255. Terra inserted a razor blade so the PDA255 will witness more HOMs. The laser is also sent to the regular PD and the CCD.

Attachment 1: EY_table_.JPG
  3902   Fri Nov 12 00:13:34 2010   Suresh   Update   SUS   ETM assembly started

[Jenne, Suresh]

Selection of ETMs

Of the four ETMs (5,6,7 and 8) that are with us Koji gave us two (nos. 5 and 7) for use in the current assembly.  This decision is based on the Radius of Curvature (RoC) measurements from the manufacturer (Advanced Thin Films).   As per their measurements the four ETMs are divided into two pairs such that each pair has nearly equal RoC. In the current case, RoCs are listed below:

   

Radii of Curvature of ETMs

ETM #   RoC from Coastline Optics (m)   RoC from Advanced Thin Films (m)
  5                57.6                            60.26
  6                57.4                            54.58
  7                57.1                            59.48
  8                57.9                            54.8

 

The discrepancy between the measurements from these two companies leaves us in some doubt as to the actual radius of curvature.  However we based our current decision on the measurement of Advanced Thin Films. 

 

Assembly of ETMs

We drag wiped both the ETMs (5 and 7) and placed them in the Small Optic Gluing Fixture.  The optics are positioned with the High Reflectance side facing downwards and with the arrow mark on the Wire Standoff side (big clamp).  We then used the microscope to position the Guide Rod and the Wire Standoff in the tangential direction on the ETMs (step 4 of the procedure specified in E010171-00-D).

We will continue with the rest of the assembly tomorrow.

 

  4929   Fri Jul 1 16:01:48 2011   Jamie   Update   SUS   ETM binary whitening switching fixed

I have fixed the binary whitening switching for the ETMs (ETMX and ETMY).  See below for a description of what some of the issues were.

The ETMX whitening/no-whitening response (same measurements performed in my previous post on checking vertex sus whitening switching) looks as it should.  The ETMY response seems to indicate that the switching is happening, but the measurements are very noisy.  I had to up the averaging significantly to get anything sensible.  There's something else going on with ETMY.  I'll follow up on that in another post.

response

ETMX
ul : 3.28258088774 = 10.3243087313 db
ll : 3.31203559803 = 10.4018999194 db
sd : 3.27932572306 = 10.3156911129 db
lr : 3.28189942386 = 10.3225053532 db
ur : 3.31351020008 = 10.4057662366 db

ETMY
ul : 2.9802607099  =  9.4850851468 db
ll : 1.46693103911 =  3.3281939600 db
sd : 2.19178266285 =  6.8159497462 db
lr : 2.2716636118  =  7.1268804285 db
ur : 3.42348315519 = 10.6893639064 db
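
(For reference, the "db" numbers here are the amplitude ratios converted as dB = 20*log10(ratio), e.g. 20*log10(3.28) ≈ 10.3 dB.)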

End rack cable diagrams inconsistent with binary channel mapping

One of the big problems was that the most up-to-date end rack cable diagrams (that I can find) are inconsistent with the actual binary mapping. The diagram says that:

  • BO adapter chassis output A (ch 1-16)   --> CAB_1X4_26 --> cross-connect 1X4-B7 (carrying QPD whitening switching signals)
  • BO adapter chassis output B (ch 17-32) --> CAB_1X4_27 --> cross-connect 1X4-A6 (carrying OSEM whitening switching signals)

In fact, the binary outputs are switched, such that output A carries the OSEM signals, and output B carries the QPD whitening signals.

I SWITCHED THE CABLES AT THE BINARY OUTPUT ADAPTER CHASSIS so that:

  • BO adapter chassis output A (ch 1-16)   --> CAB_1X4_27 --> cross-connect 1X4-A6 (carrying OSEM whitening switching signals)
  • BO adapter chassis output B (ch 17-32) --> CAB_1X4_26 --> cross-connect 1X4-B7 (carrying QPD whitening switching signals)

The rest of the wiring remains the same.

I made the same transformation for ETMY as well.

  15488   Wed Jul 15 21:08:43 2020   gautam   Update   Electronics   ETM coil outputs DQed

To facilitate this investigation, I've DQed the 4 face coil outputs for the two ETMs. EX is currently running with 5 times the series resistance of EY, so it'll be a nice consistency check. Compilation, installation etc. went smoothly. But when restarting the c1scx model, there was a weird issue - the foton file, C1SCX.txt, got completely wiped (all filter coefficients were empty, even though the filter module names themselves existed). I just copied the chiara backup version, restarted the model, and all was well again.

This corresponds to 8 additional channels, recorded at 16k as 32-bit floats, so in the worst case (neglecting any clever compression algorithms) we are writing an additional 8 x 16384 x 4 bytes ≈ 0.5 MB/s (about 4 Mbit/s) to disk. Seems okay, but anyway, I will remove these DQ channels in a few days, once we're happy we have enough info to inform the coil driver design.

spoke too soon - there was an RFM error for the TRX channel, and restarting that model on c1sus took down all the vertex FEs. Anyways, now, things are back to normal I think. The remaining red light in c1lsc is from the DNN model not running - I forgot to remove those channels, this would've been a good chance! Anyways, given that there is an MLTI in construction, I'm removing these channels from the c1lsc model, so the next time we restart, the changes will be propagated.

For whatever reason, my usual locking scripts aren't able to get me to the PRFPMI locked state - some EPICS channel value must not have been set correctly after the model reboot 😞. I'll debug in the coming days.

Fun times lie ahead for getting the new BHD FEs installed I guess 🤡 ....

Quote:
 

Looking at signals to the ETMs from the current lock acquisition sequence, the RMS current to a single coil is approximately _____ (to be filled in later).

So we may need a version of the fast coil driver that supports a low noise mode (with large series resistance) and a high-range mode (with lower series resistance for lock acquisition).

Attachment 1: CDS.png
Attachment 2: coilOutDQed.png
  4041   Fri Dec 10 11:04:03 2010   Osamu   Update   SUS   ETM oplev malfunctioning

20101208_ETMX_oplev_sum.png

 

This plot shows the ETM oplev and OSEM trends for 10 hours from the day before yesterday, almost the same as the plot shown in this entry. I reported that 10-30 minute fluctuations were seen, but I noticed they come not from the suspension but from oplev power fluctuations.

After Kiwamu fixed the ETM OSEM touch yesterday afternoon, the same trend was still seen, so we had thought that what we fixed was not enough. This morning I looked at yesterday's and the day before yesterday's trends and noticed a similar trend in both pitch and yaw of the ETM oplev, but not in the OSEM trend. Kiwamu suggested that I put the oplev sum on the same plot. That was it!

So, ETMX is not bad, but in fact there is still alignment fluctuation on the cavity. ITM?

 

  9290   Fri Oct 25 04:54:21 2013   Masayuki   Update   SUS   ETM violin mode

Summary

When the PRMI + two arms were locked yesterday, we heard noise from the suspension violin modes. To attenuate that noise, we should design a resonant filter at those frequencies and put it into the ALS servo. I tried to measure the violin modes of the ETM suspensions.

What I did

 1. The arms were locked by IR PDH. I used awggui to excite the suspension: I injected a Normal waveform with 10 Hz of bandwidth into C1:SUS-ETMs_ULCOIL_EXC. I put a cheby filter in the Filter field of awggui; the order of that filter was 4, it had the same bandwidth as the injection wave, and the ripple was 4 dB. I increased the injection gain with some ramp time (5 sec) and swept from 600 Hz to 700 Hz. During the injection I watched the PDH error signals (POX11I and POY11I) in order to find the resonance peaks of the violin modes.
 In ETMX the resonances were easily found, at 631 Hz and 691 Hz; the 631 Hz peak was the one seen in the ALS error signal yesterday. On the other hand, I couldn't find the ETMY violin mode - no peaks appeared at any frequency.

2. To find the ETMY violin mode, I used a dtt swept sine measurement. The excitation channel was C1:SUS-ETMs_ULCOIL_EXC, and I measured the TF from the excitation channel to the POX11I and POY11I error signals. The measurement range was from 400 Hz to 1000 Hz with 600 points. I attached that result.
In the ETMX curve the coherence becomes bad near the resonant frequencies of the violin modes, and the TF is also large there. Although the ETMX violin modes are obvious, the ETMY violin modes are not visible. At 660 Hz, 780 Hz, and 900 Hz the coherence is not good; that is because of the 60 Hz comb noise.

Discussion

 I attached the spectra of the POX and POY error signals. The black and red curves were measured at different times. I didn't inject any signal in either measurement, but the violin mode excitation differs hugely between them. There are also peaks at the beat frequencies between the violin mode and the bounce mode (16 Hz) and the yaw motion (3 Hz). In the ALS in-loop noise or the XARM in-loop measurement, this region sometimes had big spikes; that was because of this resonance. That resonance peak also couples to POY11I.

 I will measure the Q and design the resonant filter for ALS.
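
Once the Q is measured, the filter design itself is straightforward; a hedged sketch using scipy for a notch at the measured ETMX frequencies (the notch Q below is a placeholder, and a resonant-gain design may end up being preferred):

import numpy as np
from scipy import signal

fs = 16384.0                  # front-end model sample rate [Hz]
Q = 100.0                     # assumed notch Q, not the measured mode Q
sos = np.vstack([signal.tf2sos(*signal.iirnotch(f0, Q, fs=fs))
                 for f0 in (631.0, 691.0)])   # measured ETMX violin frequencies

# check the attenuation of the combined filter at the two mode frequencies
f, h = signal.sosfreqz(sos, worN=2**16, fs=fs)
for f0 in (631.0, 691.0):
    idx = np.argmin(abs(f - f0))
    print("attenuation at %.0f Hz: %.1f dB" % (f0, 20 * np.log10(abs(h[idx]))))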

Attachment 1: violin1.pdf
Attachment 2: violin2.pdf
  3991   Mon Nov 29 22:50:07 2010   Suresh   Update   SUS   ETMU05 Side Magnets glued back

[Suresh, Jenne]

ETMU05 : Gluing Side magnets back on to the optic.

The following steps taken in this process:

1) The two magnet+dumbbell units which had come loose from the optic needed to be cleaned.  A lint free wipe was placed on the table top and a few cc of acetone was poured onto it.  The free end of the dumbbell was then scrubbed on this wipe till the surface regained its shine.  The dumbbell was held at its narrow part with forceps to avoid any strain on the magnet-dumbbell joint.

2) The optic was then removed from its gluing fixture (by loosening only one of the three retaining screws) and placed in an Al ring. The glue left behind by the side magnets was scrubbed off with an optical tissue wetted with acetone.

3) The optic was returned to the gluing fixture.  The position of the optic was checked by inserting the brass portion of the gripper and making sure that the face magnets are centered in it [Jenne double-checked to be sure we got everything right].

4) The side magnets were glued on and the optic in the fixture has been placed in the foil-house.

If all goes well we will be able to balance the ETMU05 and give it to Bob for baking.

 

ETMU07: It is still in the oven and we need to ask Bob to take it out. It will be available for installation in the 40m tomorrow.

 

  4022   Tue Dec 7 18:37:15 2010   Suresh   Update   SUS   ETMU05 ready for baking

The ETMU05 has been removed from the suspension and put into the little foil house. 

Before removing it I checked the position and pitch of the optic with reference to the table top. 

The height:

     Using the traveling microscope I checked the height of the scribe lines from the table top.  They are at equal heights, centered on 5.5 inches, correct to about a quarter of the width of the scribe line.

The pitch

    The retro-reflection of the He-Ne laser beam is correct to within one diameter of the beam at a distance of about 1.5m.  This is the reflection from the rear, AR coated, surface.  The reflection from the front, HR coated, surface was down by about two diameters.

Jenne has checked with Bob and agreed on a date for baking the optic.

 

 

  4018   Mon Dec 6 23:33:15 2010   Jenne   Update   SUS   ETMU05 winched, balanced, glued!!!!!!

[Suresh, Jenne]

We Finished!!!

ETMU05 (ETMY) had its wire winched to the correct height, was balanced, and had the standoff glued.  Since it's kind of like final exam week at Caltech, Suresh had his suspension exam today, and did most of this work himself, with me hanging around and watching. 

As you can see in my almost entirely green table, all that is left to do with the whole suspensions project is bake the optic (hopefully Bob has time / space this week), and then stick it in the chamber!  Hooray!!! (Can you tell I'm excited to not spend too much more time in the cleanroom?)

The table:

StatusTable.png

  3956   Fri Nov 19 16:13:09 2010   Jenne   Update   SUS   ETMU05: magnets glued to optic

[Jenne, Suresh]

Suresh and I glued the intact-from-the-first-round magnets to ETMU05.  I accidentally got too much glue on one of the dumbbells (the glue was connecting the dumbbell to the gripper - bad news if we let that dry), and while I was cleaning it, the magnet broke off.  So I used one of the ones that Suresh had re-glued last night, and he is putting that one back together after some cleaning. 

To set the fixture, Suresh had the great idea of using small pieces of foil underneath the teflon pads to set the height of the optic in the fixture.  The optic still rests on the teflon pads, but with the foil we have finer control over how the optic sits.  Neat.  Since both ETMs are the same, we shouldn't have to do any more adjustment for the other ETM.

The updated Status Table:

StatusTable.png

  3984   Wed Nov 24 17:57:24 2010   Jenne   Update   SUS   ETMU07: Baking. ETMU05: Needs side magnets reglued

[Jenne, Koji]

We removed ETMU07 from the suspension tower, after confirming that the balance was still good.  Bob put it in the oven to bake over the weekend.  The spring plungers and our spare magnets are all in there as well. 

I tried to remove the grippers from ETMU05, and when I did, both side dumbbells came off of the optic.  Unfortunately, I was working on getting channels into the DAQ, so I did not clean and reglue ETMU05 today.  However Joe told me that we don't have any ETMY controls as yet, and we're not going to do Yarm locking (probably) in the next week or so, so this doesn't really set any schedules back. 

The cleaning of ETMU05 will be tricky.  Getting the residual glue off of the optic will be fine, but for the dumbbells, we'd like to clean the glue off of the end of the dumbbells using a lint free wipe soaked in acetone, but we don't want to get any acetone in the magnet-to-dumbbell joint, and we don't want to break the magnet-to-dumbbell joint.  So we'll have to be very careful when doing this cleaning. 

The Status Table:

StatusTable.png

  3979   Tue Nov 23 18:08:28 2010   Jenne   Update   SUS   ETMU07: Balanced, standoff glued. ETMU05: Magnets glued to optic

[Koji, Jenne]

ETMU07 had its wire winched to the correct height, was balanced, standoff glued.  Can be ready for going into the oven tomorrow, if an oven is available.  (One of Bob's ovens has a leak, so he's down an oven, which puts everything behind schedule.  We may not be able to get anything into the oven until Monday).

ETMU05 had magnets glued to the optic.  Hopefully tomorrow we will winch the wire and balance the optic, and glue the standoff, and be ready to go into the oven on Monday.

The spring plungers were sonicated, but have not yet been baked.  I told Daphen that we'd like the optics baked first, so that we can get ETMX in the chamber ASAP, and then the spring plungers as soon as possible so that we can install ETMY and put the OSEMs in.

The updated status table:

StatusTable.png
