ID | Date | Author | Type | Category | Subject
16674 | Wed Feb 16 15:19:41 2022 | Anchal | Update | General | Reconfigured MC reflection path for low power
I reconfigured the MC reflection path for low power. This meant the following changes:
- Replaced the 10% reflection BS with a 98% reflection beamsplitter.
- Realigned the BS angle to get the maximum signal on C1:IOO-MC_RFPD_DCMON when the cavity is unlocked.
- Then realigned the steering mirrors for WFS1 and WFS2.
- I tried to align the light onto the MC reflection CCD, but then I realized that the pickoff power for the camera is too low for it to be able to see anything.
Note that even the pick-off for WFS1 and WFS2 is too low, I think. The IOO WFS alignment does not work properly at such low light levels. I tried running the WFS loop for the IMC and it just took the cavity out of lock. So for the low-power scenario, we will keep the WFS loops OFF. |
16676 | Wed Feb 23 15:08:57 2022 | Anchal | Update | General | Removed extra beamsplitter in MC WFS path
As discussed in the meeting, I removed the extra beamsplitter that dumps most of the beam going towards the WFS photodiodes. This beamsplitter needs to be placed back in position before increasing the input power to the IMC to its nominal level. The removal is to get sufficient light on the WFS photodiodes so that we can keep the IMC locked for more than 3 days. Currently the IMC is unlocked and misaligned. I have marked the position of this beamsplitter on the table, so putting it back in should be easy. Right now, I'm trying to realign the mode cleaner and will start the WFS loops once we get it locked. |
16677 | Thu Feb 24 14:32:57 2022 | Anchal | Update | General | MC RFPD DCMON channel got stuck to 0
I found a peculiar issue today. The C1:IOO-MC_RFPD_DCMON remains constantly at 0. I wondered if the RFPD output was being read properly. I opened the table and used an oscilloscope to confirm that the DC output from the MC REFL photodiode is coming through consistently, but our EPICS channel is not reading it. I tried restarting the modbusIOC service but that did not affect anything. I power cycled the Acromag chassis while keeping the modbusIOC service off, and then restarted the modbusIOC service. After this, I saw more channels get stuck and become unresponsive, including the PMC channels. So then I rebooted c1psl without doing anything to the Acromag chassis, and finally things came back online. Everything looks normal to me now, but I'm not sure if one of the many channels is in a wrong state. Anyway, the problem is solved now. |
16679 | Thu Feb 24 19:26:32 2022 | Anchal | Update | General | IMC Locking
I think I have aligned the cavity, including MC1, such that we are seeing flashing of the fundamental mode and a significant transmission sum value as well. However, I'm unable to catch lock following Koji's method in 40m/16673. The autolocker could not catch lock either. Maybe I am doing something wrong; I'll pick up again tomorrow. Hopefully the cavity won't drift too much in the meantime. |
16685 | Sun Feb 27 00:37:00 2022 | Koji | Update | General | IMC Locking Recovery
Summary:
- IMC was locked.
- Some alignment change in the output optics.
- The WFS servos working fine now.
- You need to follow the proper alignment procedure to recover the good alignment condition.
Locking:
- Basically followed the previous procedure 40m/16673.
- The autolocker was turned off. Used MC2 and MC3 for the alignment.
- Once I hit the low-order modes, I increased the IN1 gain to acquire lock. This helped me bring the alignment to TEM00.
- Found the MC2 spot was way too off in pitch and yaw.
- Moved MC1/2/3 to bring the MC2 spot around the center of the mirror.
- Found reasonably good visibility (<90%) at an MC2 spot position. Decided to use this as the reference (at least for now).
SP Table Alignment Work
- Went to the SP table and aligned the WFS1/2 spots.
- I saw no spot on the camera. Found that the beam for the camera was way too weak and a PO mirror was useless to bring the spot on the CCD.
- So, instead, I decided to catch an AR reflection of the 90% mirror. (See Attachment 1)
- This made the CCD vulnerable to the stronger incident beam to the IMC. Work on the CCD path before increasing the incident power.
MC2 end table alignment work
- I knew that the focusing lens there and the end QPD had inconsistent alignment.
- The true MC2 spot needs to be optimized with A2L (and noise analysis / transmitted beam power analysis / etc)
- So, just aligned the QPD spot using today's beam as the temporary target of the MC alignment. (See Attachment 2)
Resulting CCD image on the quad display (Attachment 3)
WFS Servo
- To activate the WFS with the low transmitted power, the trigger threshold was reduced from 5000 to 500. (See Attachment 4)
- WFS offset was reset with /opt/rtcds/caltech/c1/scripts/MC/WFS/WFS_RF_offsets
- Resulting working state looks like Attachment 5 |
Attachment 1: PXL_20220226_093809056.jpg
Attachment 2: PXL_20220226_093854857.jpg
Attachment 3: PXL_20220226_100859871.jpg
Attachment 4: Screenshot_2022-02-26_01-56-31.png
Attachment 5: Screenshot_2022-02-26_01-56-47.png
16686 | Sun Feb 27 01:12:46 2022 | Koji | Update | General | IMC manual alignment procedure
We expect that the MC suspensions are susceptible to temperature changes and that the alignment drifts away with time.
Here is the proper alignment procedure.
0) Assume there is no TEM00 flash or locking, but the IMC is still flashing with higher-order modes.
1) Use the CCD camera and WFS DC spots to bring the beam to the nominal position.
2) Use only MC2 and MC3 to align the cavity to have low-order modes (TEM00,01,02 etc)
3) You should be able to lock the cavity on one of these modes. Minimize the reflection (maximize the transmission) for that mode.
4) This should allow you to jump to a better lower-order mode. Continue alignment optimization only with MC2/3 until you get TEM00.
5) Optimize the TEM00 alignment only with MC2/3
6) Look at the MC end QPD. Use one of the scripts in scripts/MC/moveMC2. Note that the spot moves opposite to the name of the script, i.e., MC2_spot_down moves the spot up, MC2_spot_right moves the spot left, etc.
These scripts move MC1/2/3 and try to keep the good MC transmission.
7) The moveMC2 scripts are not perfect. As you use them, the MC alignment gradually degrades. Use MC2 and MC3 to recover good transmission.
8) If MC2 spot is satisfactory, you are done.
-------------
Steps 6-8 can be done with the WFS on. This way you can skip step 7, as the WFS servo takes care of it. But if the spot moves too fast, the servo can't keep up with the change. If so, you have to wait for the servo to settle. Once the spot position is satisfactory, MC servo relief should be run so that the servo offset (in actuation) can be offloaded to the bias slider. |
Attachment 1: PXL_20220226_100859871.jpg
16725 | Tue Mar 15 10:45:31 2022 | Paco | Update | General | Assembled small in-vac optics
[Paco]
This morning I assembled LO3, LO4 and AS3 (all mirrors) onto Polaris K1 mounts. The mounts stand, as per this elog, on 4.5" posts with 0.5" Al spacers to match the beam height of 5.5". I also assembled ASL by adding a 0.14" Al spacer, and finally, I recycled two DLC mounts (from the XEND flow bench) and posts to mount the 2-inch-diameter beamsplitters BHDBS and AS2 (T=10%). I stored the previous 2" optics in the CVI and Lambda optic cases and labeled them appropriately. |
16775 | Wed Apr 13 16:23:54 2022 | Ian MacMillan | Update | General | Smell in 40m
[Ian, Paco, JC]
There is a strange smell in the 40m. It smells like something chemical burning, maybe like a shorted component. I went around with the IR camera to see if anything was unusually hot, but I didn't see anything. The smell seems to be concentrated at the vertex and down the Y-arm. |
16784 | Mon Apr 18 15:17:31 2022 | Jancarlo | Update | General | Tool box and Work Station Organization
I cleaned up around the 40m lab. All the laser safety glasses have been picked up and placed on the rack at the entrance.
Some miscellaneous BNC Connector cables have been arranged and organized along the wall parallel to the Y-Tunnel.
Nitrogen tanks have been swapped out. Current tank is at 1200 psi and the other is at 1850 psi.
The tool box has been organized with each tool in its specified area. |
16787 | Mon Apr 18 23:22:39 2022 | Koji | Update | General | Tool box and Work Station Organization
Whoa! Thanks! |
Attachment 1: PXL_20220419_062101907.jpg
16808 | Mon Apr 25 14:19:51 2022 | JC | Update | General | Nitrogen Tank
Coming in this morning, I checked on the nitrogen tanks' levels. One of the tanks was empty, so I went ahead and swapped it out. One tank is at 946 PSI, the other is at 2573 PSI. I checked for leaks and found none. |
16809 | Mon Apr 25 14:49:02 2022 | Koji | Update | General | Nitrogen Tank
For your (and mine) info:
N2 pressure can be monitored on the 40m summary page: https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20220425/vacuum/
(you need to hit "today" to go to the current status) |
16901 | Wed Jun 8 16:33:26 2022 | Koji | Update | General | Power Outage 220608: HVAC restored
I found the HVACs for the ends were off. They were turned back on. |
16921 | Wed Jun 15 17:12:39 2022 | Cici | Summary | General | Preparation for AUX Loop Characterization
[Deeksha, Cici]
We went to the X-end station and looked at the green laser setup and electronics. We fiddled with the SR785 and experimented with low-pass filters, and we will be exploring the Python script tomorrow. |
16926 | Thu Jun 16 19:49:48 2022 | Cici | Update | General | Using the SR785
[Deeksha, Cici]
We used a Python script to collect data from the SR785 remotely. The SR785 is now connected to the lab network via Ethernet port 7. |
16933 | Tue Jun 21 14:59:22 2022 | Cici | Summary | General | AUX Transfer Function Loop Exploration
[Deeksha, Cici]
We learned about the auxiliary laser control loop and then went into the lab to identify the components and cables represented by our transfer functions. We connected the SR785 inside the lab so that we can use it to insert noise next time and measure the output at various points in the control loop. |
16944 | Fri Jun 24 13:29:37 2022 | Yehonathan | Update | General | OSEMs from KAGRA
The box was given to Juan Gamez (SURF)
Quote: |
I put the box containing the untested OSEMs from KAGRA near the south flow bench on the floor.
|
|
16950 | Mon Jun 27 13:25:50 2022 | Cici | Update | General | Characterizing the Transfer Loop
[Deeksha, Cici]
We first took data on a simple low-pass filter and attempted to fit both the magnitude and phase in order to find the Z of the components. Once we felt confident in our ability to measure transfer functions, we took data and plotted the transfer function of the existing control loop of the AUX laser. What we found generally followed the trend of, but was lower than, 10^4/f (which is what we hoped to match), and it also had a strange unexplained notch around 1.3 kHz. The magnitude and phase data both got worse above roughly 40-50 kHz, which we believe is because the laser came out of lock near the end of the run.
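For reference, a minimal sketch of this kind of magnitude fit in Python (the single-pole model and the numbers are illustrative, not our actual script or data):

import numpy as np
from scipy.optimize import curve_fit

# Single-pole low-pass magnitude model: |H(f)| = 1 / sqrt(1 + (f/fc)^2)
def lowpass_mag(f, fc):
    return 1.0 / np.sqrt(1.0 + (f / fc) ** 2)

# f_data, mag_data would come from the SR785 files (frequency in Hz, |TF| in V/V)
f_data = np.logspace(2, 5, 30)
mag_data = lowpass_mag(f_data, 1.0e4) * (1 + 0.02 * np.random.randn(f_data.size))

popt, pcov = curve_fit(lowpass_mag, f_data, mag_data, p0=[1e4])
print(f"fc = {popt[0]:.0f} +/- {np.sqrt(pcov[0, 0]):.0f} Hz")  # fc = 1/(2*pi*R*C) then gives Z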
Edit:
Attachments 2 and 3 are the frequency response of the low-pass filter, with curves fitted using least squares in Python.
Attachments 1 and 4 are the same measurement of the OLTF of the actual AUX circuit, and the control diagram pointing out the locations of the excitation and test point. |
Attachment 1: TF_measurement_b.png
Attachment 2: transfer_function_mag_fit.png
Attachment 3: transfer_function_phase_fit.png
Attachment 4: control_flow.png
16953 | Tue Jun 28 09:03:58 2022 | JC | Update | General | Organizing and Cleaning
The plan for the tools in the 40m
As of right now, there are 4 tool boxes: X-end, Y-end, Vertex, and the main tool box along the X-arm. The plan is to give each toolbox a set of its own tools. The tools of the X-end, Y-end, and Vertex toolboxes will be very similar, containing basic tools such as pliers, screwdrivers, and Allen ball drivers. Along with this, each tool box will have a tape measure, caliper, level, and other measuring tools we find convenient.
As for the new toolbox, I have done some research and found a few good selections. The only problem I have run into is that the width of the tool box corresponds with the price. The tool cabinet we have now is 41" wide. The issue is not in finding another toolbox of the same width, but that for a similar price we can find a 54" wide tool cabinet. Would anyone object to making a bit more space for this?
How the tools will stay organized
The original idea I had was to use a specified color of electrical tape for each tool box and wrap the corresponding tools with the same color tape. But it was brought to my attention that the electrical tape would become sticky over time. So I think using the label maker would be the best idea, with the labels being 'X' for X-end, 'Y' for Y-end, 'V' for Vertex, and 'M' for the main toolbox.
An idea for the optical tables:
Anchal brought up to me that it is a hassle to go back and forth searching for the correct sizes of hex keys and Allen wrenches. The idea of a pouch on the outside of each optical table was mentioned, so I brought this up to Paco. Paco also gave me the idea of a 3D-printed stand we could make for Allen ball drivers. Does anyone have a preference or an idea of what would be the best choice and why?
A few sidenotes:
Anchal mentioned to me a while back that there are many cables lying on the racks that are not being used. Is there a way we could identify which ones are being used?
I noticed while we were vented that a few of the chamber doors were leaning up against the wall and not on a wooden stand like the others. Also, the seats for the chamber doors are pretty spacious and do not give us much clearance. For the future ones, could we make something more sleek and put the wider seats at the end chambers?
The cabinets along the Y-arm are labelled, but the labels do not correspond with all the materials inside, or the cabinets are too full to take in more items. Could I organize these? |
16955 | Tue Jun 28 16:26:58 2022 | Cici | Summary | General | Vector fitting open loop transfer function/Audio cancellation of optical table enclosure
[Deeksha, Cici]
We attempted to use vectfit to fit our earlier transfer function data and were generally unsuccessful (see vectfit_firstattempt.png), but we are much closer to understanding vectfit than before. A couple of problems to address: finding the right set of initial poles to start with has been very hard, and the way vectfit plots the phase data unwraps it, which makes it generally unreadable. Still working on how to mess with the automatically generated vectfit plots. In general, our data is very messy (this is old transfer function data from last week), so we took more data today to see if our coherence was the problem (see TFSR785_28-06-2022_161937.pdf). As is visible from the graph, our coherence is terrible, and above 1 kHz it is almost entirely below 0.5 (or 0.2) on both channels. Figuring out why this is and fixing it is our first priority.
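For the starting-pole problem, the standard vector-fitting recipe (Gustavsen & Semlyen) is complex-conjugate pole pairs log-spaced over the measurement band with small real parts; here is a sketch in Python (the function name and Q value are our own choices):

import numpy as np

def starting_poles(f_min, f_max, n_pairs, q=100.0):
    # Log-spaced complex-conjugate pairs, p = -w/q +/- j*w, per the vectfit user guide
    w = 2 * np.pi * np.logspace(np.log10(f_min), np.log10(f_max), n_pairs)
    return np.concatenate([-w / q + 1j * w, -w / q - 1j * w])

print(starting_poles(100.0, 1e5, 4))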
In the process of taking new data, we also found out that the optical table enclosure at the end of the X-arm does a decent job of sound isolation (see enclosure_open.mp4 and enclosure_closed.mp4). The clicking from the shutter is visible on a spectrogram at high frequencies when the enclosure is open, but not when it is closed. We also discovered that the script to toggle the shutter can run indefinitely, which can break the shutter, so we need to fix that problem! |
Attachment 1: vectfit_firstattempt.png
Attachment 2: TFSR785_28-06-2022_161937.pdf
Attachment 3: enclosure_open.MP4
Attachment 4: enclosure_closed.MP4
16982 | Fri Jul 8 23:10:04 2022 | Koji | Summary | General | July 9th, 2022 Power Outage Prep
The 40m team worked on the power outage preparation. The details are summarized on this wiki page. We will still be able to access the wiki page during the power outage, as it is hosted somewhere in Downs.
https://wiki-40m.ligo.caltech.edu/Complete_power_shutdown_2022_07 |
16988 | Mon Jul 11 19:29:23 2022 | Paco | Summary | General | Finalizing recovery -- timing issues, cds, MC1
[Yuta, Koji, Paco]
Restarting CDS
We were having some trouble restarting all the models on the FEs. The error was the famous 0x4000 DC error, which has to do with time desynchronization between fb1 and a given FE. We tried a combination of things haphazardly, such as reloading the gpstime module using
controls@fb1:~ 0$ sudo systemctl stop daqd_*
controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 0$ sudo modprobe gpstime
controls@fb1:~ 0$ sudo systemctl start daqd_*
controls@fb1:~ 0$ sudo systemctl restart open-mx.service
without much success, even when doing this again after hard-rebooting FE + IO chassis combinations around the lab. Koji prompted us to check the local times as reported by the gpstime module, and comparing them to network-reported times we saw the expected offset of ~3.5 s. On a given FE ("c1***") and on fb1 separately, we ran:
controls@c1***:~ 0$ timedatectl
Local time: Mon 2022-07-11 16:22:39 PDT
Universal time: Mon 2022-07-11 23:22:39 UTC
Time zone: America/Los_Angeles (PDT, -0700)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2022-03-13 01:59:59 PST
Sun 2022-03-13 03:00:00 PDT
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2022-11-06 01:59:59 PDT
Sun 2022-11-06 01:00:00 PST
controls@fb1:~ 0$ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
192.168.123.255 .BCST. 16 u - 64 0 0.000 0.000 0.000
which meant a couple of things:
- fb1 was serving its time (broadcast to local (martian) network)
- fb1 was not getting its time from the internet
- c1*** was not synchronized even though fb1 was serving the time
By looking at previous elogs with similar issues, we tried two things:
- First, from the FEs, run sudo systemctl restart systemd-timesyncd to get the FE in sync; this didn't immediately solve anything.
- Then, from fb1, we tried pinging google.com and failed! The fb1 was not connected to the internet!!!
We tried rebooting fb1 to see if it connected, but eventually what solved this was restarting the bind9 service on chiara! Now we could ping google, and saw this output
controls@fb1:~ 0$ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
+tor.viarouge.ne 85.199.214.102 2 u 244 1024 377 144.478 0.761 0.566
*ntp.exact-time. .GPS. 1 u 93 1024 377 174.450 -1.741 0.613
time.nullrouten .STEP. 16 u - 1024 0 0.000 0.000 0.000
+ntp.as43588.net 129.6.15.28 2 u 39m 1024 314 189.152 4.244 0.733
192.168.123.255 .BCST. 16 u - 64 0 0.000 0.000 0.000
meaning fb1 was getting its time served. Going back to the FEs, we still couldn't see the "NTP synchronized" flag up, but it just took time: after a few minutes we saw the FEs in sync! This also meant that we could finally restart all the FE models, which we successfully did following the script described in the wiki. Then we had to reload the modbusIOC service on all the slow machines (sometimes this required us to call sudo systemctl daemon-reload) and performed a burt restore to last Friday's snap file collection.
IMC realign and MC1 glitch?
With Koji's help, the PMC locked, and then Yuta and Paco manually increased the input power to the IFO by rotating the waveplate picomotor to 37.0 deg. After this, we noticed that the MC REFL spot was not hitting the camera, so maybe MC1 was misaligned. Paco checked the AP table and saw the spot horizontally misaligned on the camera, which gave us the initial YAW correction for MC1. After some IMC recovery, we saw that only MC1 was getting spontaneously kicked in both PIT and YAW, making our alignment futile. Though not hard to recover, we wondered why this happened.
We went into the 1X4 rack and pushed the MC1 suspension cables in to rule out loose connections, but as we came back into the control room we again saw it being kicked randomly! We even turned damping off for a little while, and the random kicking didn't stop. There was no significant seismic motion at the time, so it is still unclear what is happening. |
16993 | Tue Jul 12 18:35:31 2022 | Cici Hanna | Summary | General | Finding Zeros/Poles With Vectfit
I'm still working on using vectfit to find my zeros/poles of a transfer function. I now have a more specific project in mind, which is to have a Red Pitaya use the zero/pole data of the transfer function to find the UGF, so we can check what the UGF is at any given time and plot it as a function of time to see if it drifts (hopefully it doesn't). Wrestled with vectfit more in MATLAB, and found out I was converting from dB incorrectly (it should be 10^(dB/20)...). I intend to read a bit of the book by Bendat and Piersol to learn a bit more about how I should be weighting my vectfit. May also check out an algorithm called AAA for fitting instead. |
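For the record, the conversion that bit me, as a trivial Python sketch:

import numpy as np

db = np.array([-20.0, 0.0, 20.0])
mag = 10 ** (db / 20)    # amplitude ratio: 10**(dB/20)
power = 10 ** (db / 10)  # power ratio: 10**(dB/10) -- the wrong one for TF magnitudes
print(mag, power)        # [ 0.1  1.  10.]  [1.e-02 1.e+00 1.e+02]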
17003 | Thu Jul 14 19:09:51 2022 | rana | Update | General | EQ recovery
There was an EQ in Ridgecrest (approximately 200 km north of Caltech) at around 6:20 PM local time.
All the suspensions tripped. I have recovered them (after some struggle with the weird profusion of multiple conflicting scripts/ directories that have appeared in the recent past...)
ETMY is still giving me some trouble. Maybe because of the HUGE bias on it within the fast CDS system, it had some trouble damping. Also, the 'reenable watchdog' script in one of the many scripts directories seems to do a bad job: it re-enables optics but doesn't make sure that the beams are on the optical lever QPD, and so the OL servo can smash the optic around. This is not good.
Also what's up with the bashrc.d/ in some workstations and not others? Was there something wrong with the .bashrc files we had for the past 15 years? I will revert them unless someone puts in an elog with some justification for this "upgrade".
This new SUS screen is coming along well, but some of the fields are white. Are they omitted or is there something non-functional in the CDS? Also, the PD variances should not be in the line between the servo outputs and the coil. It may mislead people into thinking that the variances are of the coils. Instead, they should be placed elsewhere as we had it in the old screens. |
Attachment 1: ETMY-screen.png
17006 | Fri Jul 15 16:20:16 2022 | Cici Hanna | Update | General | Finding UGF
I have temporarily abandoned vectfit and AAA, since I've been pretty unsuccessful with them and I don't need poles/zeroes to find the unity gain frequency. Instead I'm just fitting the transfer function linearly (on a log-log scale). I've found the UGF at about 5.5 kHz right now, using old data; the next step is to get the Red Pitaya working so I can take data with that. I also need to move this code from MATLAB to Python. Uncertainty is propagated using the 95% confidence bounds given by the fit, so just from the standard error, with all points weighted equally. Ideally I would like to propagate uncertainty accounting for the coherence data too, but I haven't figured out how to do that correctly yet.
[UPDATE 7/22/2022: added raw data files] |
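A minimal sketch of the linear log-log fit (illustrative numbers, assuming the OLTF magnitude falls roughly as a power law near the crossover):

import numpy as np

# Fit log10|H| = m*log10(f) + b near the crossover; |H| = 1 then gives f_ugf = 10**(-b/m)
f = np.array([2e3, 3e3, 4e3, 6e3, 8e3, 1e4])
mag = 5.5e3 / f  # fake ~1/f data with a UGF at 5.5 kHz
(m, b), cov = np.polyfit(np.log10(f), np.log10(mag), 1, cov=True)
f_ugf = 10 ** (-b / m)
print(f"UGF ~ {f_ugf:.0f} Hz")  # ~5500 Hz; cov gives the standard errors on m and b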
Attachment 1: UGF_4042.png
Attachment 2: UGF_5650.png
Attachment 3: TFSR785_29-06-2022_114042.txt
# SR785 Measurement - Timestamp: Jun 29 2022 - 11:40:42
# Parameter File: TFSR785template.yml
#---------- Measurement Setup ------------
# Start frequency (Hz) = 100000.000000
# Stop frequency (Hz) = 100.000000
# Number of frequency points = 30
# Excitation amplitude (mV) = 10.000000
# Settling cycles = 5
# Integration cycles = 100
#---------- Measurement Parameters ----------
... 52 more lines ...
Attachment 4: TFSR785_29-06-2022_115650.txt
# SR785 Measurement - Timestamp: Jun 29 2022 - 11:56:50
# Parameter File: TFSR785template.yml
#---------- Measurement Setup ------------
# Start frequency (Hz) = 100000.000000
# Stop frequency (Hz) = 2000.000000
# Number of frequency points = 300
# Excitation amplitude (mV) = 5.000000
# Settling cycles = 5
# Integration cycles = 200
#---------- Measurement Parameters ----------
... 322 more lines ...
17021 | Wed Jul 20 11:58:45 2022 | Paco | Summary | General | Jenne laser kaput?
[Paco, Yehonathan, JC]
We were trying to set up the Jenne laser to characterize the response of three 1811s that Yehonathan is using for his WOPA experiment (in QIL). We hooked up a ~5 VDC power supply to the bias tee and looked to see if there was any DC response in the REF PD. We used a DB9 breakout board and a DB9 cable, and saw some current being drawn. The DC current was a bit too high (500 mA), so we turned the DC voltage off, and realized the VDC power was reversed, probably along the DB9 cable, which we hadn't checked before. As we flipped the power supply leads and turned the power back on, we could no longer see any current, even though the voltage was now right (or was it???). We would like to debug this laser and continue using it if it still works (!), but there is negligible documentation either here or in the wiki, so if there are any known places to look, it would be helpful to know them. |
17022 | Wed Jul 20 14:12:07 2022 | Paco | Summary | General | Jenne laser kaput!
[Koji, Yehonathan, Paco]
Koji pointed out that this laser was always driven with a current driver (which was not nearby), and after finding it on one of the rolling carts, we hooked up the system but found that the laser driver displayed an open circuit near the usual 20 mA operating point. We therefore have to conclude that this laser is no more. We will look for a reasonable replacement.
Quote: |
[Paco, Yehonathan, JC]
We were trying to setup the Jenne laser to characterize the response of three 1811s that Yehonathan is using for his WOPA experiment (in QIL). We hooked up a ~ 5 VDC power supply to the bias tee and looked to see if there was any DC response in the REF PD. We used a DB9 breakout board and a DB9 cable, and saw some current being drawn. The DC current was a bit too high (500 mA), so we turned the DC voltage off, and realized the VDC power was reversed, probably along the DB9 cable which we didn't check before. As we flipped the power supply leads and turned power back on, we could no longer see any current even though the voltage was now right (or was it???). We would like to debug this laser, and continue using it if it still works (!), but there is negligible documentation either here or in the wiki, so if there are any known places to look at it would be helpful to know them.
|
|
17023 | Wed Jul 20 15:58:52 2022 | Koji | Summary | General | Jenne laser kaput!
For troubleshooting, the proper laser driver (found beneath the AG network analyzer) was connected.
A current of ~1 mA was provided and the driver detected an "open circuit", which means the laser diode was busted.
https://dcc.ligo.org/LIGO-T060240
The laser diode in the parts list is: "GTRAN GaAs Strained QW Laser Diode, Part # LD-1060". |
17027 | Fri Jul 22 17:43:19 2022 | Koji | Update | General | Obtained a functional CRT
[Koji Paco]
Koji went to Downs and found a CRT labeled "for 40m Rana?". So I decided to salvage it for the 40m after getting approval from Rich/Todd.
Paco and I tried this unit with the control room CCD signal and it just worked fine. So we can use this as a spare for any purpose in the lab. |
Attachment 1: PXL_20220723_003631871.jpg
17030 | Mon Jul 25 09:05:50 2022 | Paco | Summary | General | Testing 950nm laser found in trash pile
[Paco, Yehonathan]
==== Late elog from Friday ====
Koji provided us with a QFLD-950-3S (QPHOTONICS) salvaged from Aidan's junk pile (LD is alive according to him). We tested the Jenne laser setup with this just to decide if we should order another one, and it worked.
The laser driver anode and cathode pins (8/9, 4/5 respectively) on the rear DB9 port from the ILX Lightwave LDX-3412 driver were connected to the corresponding anode and cathode pins in the laser package (5, and 9; note the numbers are reversed between driver and laser). Then, interlock pins 1 and 2 in the driver were shorted to enable operation. This is all illustrated in Attachments #1-2.
After setting a limit of 27.6 mA current in the driver, we slowly increased the actual current to ~ 19 mA until we could see light on a beam card. We can go ahead and get a 1060 nm replacement. |
Attachment 1: PXL_20220722_234600124.jpg
Attachment 2: PXL_20220722_234551918.jpg
17051 | Mon Aug 1 17:19:39 2022 | Cici | Summary | General | Red Pitaya Data on Jupyter Notebook
Have successfully plotted data from the Red Pitaya in a Jupyter notebook! Have lost years of my life fighting with PyQt. Thanks to Deeksha for her heavy contribution. Next task is to get actually good data (seeing mostly noise right now and haven't figured out how to change my input settings) and then to set up the RPi in the lab. |
17055 | Wed Aug 3 15:01:13 2022 | Koji | Update | General | Borrowed Dsub cables
Borrowed DSUB cables for Juan's SURF project
- 2x D25F-M cables (~6ft?)
- 2x D2100103 reducer cables |
Attachment 1: PXL_20220803_215819580.jpg
17064 | Fri Aug 5 17:03:31 2022 | Yehonathan | Summary | General | Testing 950nm laser found in trash pile
I set out to test the actuation bandwidth of the 950nm laser. I hooked the laser to the output of the bias tee of PD testing setup. I connected the fiber coming out of the laser to the fiber port of 1611 REF PD.
The current source was connected to the DB9 input of the PD testing setup. I turned on the current source and set the current to 20 mA. I measured ~2 V with a Fluke at the REF PD DC port.
I connected the AC port of the bias tee to the RF source of the network analyzer and the AC port of the REF PD to the B port of the network analyzer. Attachment 2 shows the setup.
I took a swept sine measurement (attachment) from 100kHz to 500MHz.
It seems like the bandwidth is ~1 MHz, which is weird considering the spec sheet says that the pulse rise time is 0.5 ns; a quick check of what that rise time implies is below. To make sure we are not limited by the bandwidth of the cables, I looped the source and the input of the network analyzer using the cables used for the previous measurement and observed that the bandwidth is a few 100s of MHz. |
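Back-of-the-envelope check (the standard first-order rule of thumb, not a number from the spec sheet):

# For a single-pole response, BW ~ 0.35 / t_rise
t_rise = 0.5e-9                    # 0.5 ns from the spec sheet
print(0.35 / t_rise / 1e6, "MHz")  # ~700 MHz expected, vs ~1 MHz measured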
Attachment 1: 20220805_164434.jpg
Attachment 2: LaserActuation_TF_Measurement.drawio.pdf
17070 | Wed Aug 10 15:33:59 2022 | Cici | Update | General | Working Red Pitaya VNA
TL;DR: I am now able to inject a swept sine and measure a transfer function with python on my Red Pitaya! Attached is a Bode plot for a swept sine from 1 - 30 MHz, going through a band pass filter of 9.5 - 11.5 MHz.
------------------------------------------------------------------------------------
- Spent too long trying to get pyRPL to work, do not recommend. The code on their website has a lot of problems (like syntax-error level problems), and is ultimately designed to open up and start a GUI, which is not what I want even if it did work.
- Found some code in the git repository of someone at Delft University of Technology; it worked better but still not great (the oscilloscope/spectrum analyzer functions were alright, but I couldn't successfully run a VNA with it, and it was overcomplicated). It helped me figure out appropriate decimation factors. I realized it was not using the FPGA to get TF data but instead just collecting a lot of time-trace data and then taking an FFT in the code to get the TF, which wasn't ideal.
- Eventually switched to using the Red Pitaya SCPI server to talk to the Red Pitaya myself, successful! I inject a swept sine with a for loop that just cycles through frequencies and takes the transfer function at each one.
- Was originally getting the transfer function by using scipy.signal.csd() and scipy.signal.welch() to get Pxy and Pxx and dividing, and then just finding the closest point in the frequency spectrum to the frequency I was inserting.
- Switched to doing the IQ demodulation myself: where x(t) is the measurement before the band-pass filter and y(t) is the measurement after, take a1 = mean(x(t)*cos(2*pi*f*t)), a2 = mean(x(t)*sin(2*pi*f*t)), b1 = mean(y(t)*cos(2*pi*f*t)), b2 = mean(y(t)*sin(2*pi*f*t)), and then TF(f) = (b1 + i*b2)/(a1 + i*a2). (See the sketch after this list.)
- Unfortunately still taking time trace data and then calculating the TF instead of using the FPGA, but I have not found anything online indicating that people are able to get VNA capabilities on the Red Pitaya without collecting and sending all the time trace data... I'm still not sure if that's actually a Red Pitaya capability yet.
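Here is a self-contained sketch of that IQ demodulation in Python (the sample rate, decimation, and test signal are illustrative; note that this sign convention returns e^{-i*phase}, so a phase lag in y shows up as a positive angle):

import numpy as np

def iq_demod_tf(x, y, f, fs):
    # a = a1 + i*a2, b = b1 + i*b2 as described above; TF(f) = b/a
    t = np.arange(len(x)) / fs
    c, s = np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)
    a = np.mean(x * c) + 1j * np.mean(x * s)
    b = np.mean(y * c) + 1j * np.mean(y * s)
    return b / a

fs = 125e6 / 64  # Red Pitaya clock with decimation 64
f = 1e4
t = np.arange(16384) / fs
x = np.sin(2 * np.pi * f * t)
y = 0.5 * np.sin(2 * np.pi * f * t - np.pi / 6)  # gain 0.5, 30 deg lag
tf = iq_demod_tf(x, y, f, fs)
print(abs(tf), np.degrees(np.angle(tf)))  # ~0.5, ~+30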
-------------------------------------------------------------------------------------
To do:
- Will go take measurements of the AUX laser loop with the RPi! I have a good diagram from when I did it with the SR785, so hopefully it shouldn't be too hard.
- Figure out how to get coherence data!!
- Figure out how to get the RPi on the wifi. Right now I'm just plugging the RPi into my computer. Paco and I were working on this before and had trouble finding old passwords... Hopefully it will not be too much of a roadblock. |
Attachment 1: rpi_vna_test.pdf
17071 | Wed Aug 10 19:24:19 2022 | rana | Update | General | Working Red Pitaya VNA
Boom! |
17072 | Wed Aug 10 19:36:45 2022 | Koji | Bureaucracy | General | Lab cleaning and discovery
During the cleaning today, we found many legacy lab items. Here are some policies on what should be kept / what should be disposed of:
Dispose
- VME crates and VME electronics as long as they are not in use
- Eurocard SUS modules that are not in use.
- Eurocard crates (until we remove the last Eurocard module from the lab)
- Giant steel plate/pallet (like a forklift pallet) along the Y arm. (Attachment 1)
- An overhead projector unit.
Keep
- Spare Eurocard crates / ISC/PZT Eurocard modules
- Boxes of old 40m logbooks behind the Y arm (see Attachment 2/3).
- Ink-plotter time-series data (paper rolls) of 1996 IFO locking (Attachment 4). Now stored in a logbook box.
- A/V type remnants: video tapes / video cameras / cassette tapes, as long as they hold some information (i.e., blank tapes/blank paper rolls can be disposed of). |
Attachment 1: steel_plate.jpg
Attachment 2: logbook1.jpg
Attachment 3: logbook2.jpg
Attachment 4: paper_plots.jpg
17076 | Thu Aug 11 17:15:33 2022 | Cici | Update | General | Measuring AUX Laser UGF with Red Pitaya
TL;DR: Have successfully measured the UGF of the AUX laser on my Red Pitaya! Attached is one of my data runs (pdf + txt file).
---------------------------------------------------------------
- Figured out how to get a rudimentary coherence (use scipy.signal.coherence to get Cxy = abs(Pxy)**2/(Pxx*Pyy), then find the point closest to the frequency I'm inserting on that iteration of the swept sine and take the coherence there; see the sketch after this list). It's not precisely the coherence at the frequency I'm inserting, so not perfect... more of a lower bound on the coherence.
- Figured out how to get the UGF from the data automatically (no error propagation yet... necessary next step)
- Put my Red Pitaya in the X-arm AUX laser control electronics (thank you to Anchal for help figuring out where to put it and for locking the X-arm). Counts dropped from 4500 to 1900 with the X-arm locked, so 58% mode matching. I lose lock at an amplitude > 0.05 or so.
- Wrote a little script to take data and return a time-stamped text file with all the parameters saved and a time-stamped pdf of the TF magnitude, UGF, phase, and coherence, so should be easy to take more data next time!
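The rudimentary coherence lookup, as a sketch (nperseg is a guess; scipy's coherence returns the magnitude-squared coherence Cxy = |Pxy|^2/(Pxx*Pyy)):

import numpy as np
from scipy.signal import coherence

def coherence_near(x, y, f_drive, fs, nperseg=4096):
    f, Cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    return Cxy[np.argmin(np.abs(f - f_drive))]  # bin nearest the injected frequency

fs = 125e6 / 64
t = np.arange(16384) / fs
x = np.sin(2 * np.pi * 1e4 * t) + 0.1 * np.random.randn(t.size)
y = 0.5 * x + 0.1 * np.random.randn(t.size)
print(coherence_near(x, y, 1e4, fs))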
----------------------------------------------------------------
- need to take more accurate coherence data
- need to propagate uncertainty on UGF (probably high...)
- take more data with higher coherence (the file attached doesn't have great coherence, and even that was one of my better runs; I will probably increase the averaging, since increasing the amplitude was a problem) |
Attachment 1: rpi_OLG_2022_08_11_16_51_53.pdf
Attachment 2: rpi_OLG_2022_08_11_16_51_53.txt
# frequency start: 500.0
# frequency stop: 50000.0
# samples: 50
# amplitude: 0.01
# cycles: 500
# max fs: 125000000.0
# N: 16384
# UGF: 9264.899326705621
# Frequency[Hz] Magnitude[V/V] Phase[rad] Coherence
4.999999999999999432e+02 5.216612299292965105e+01 -7.738468629291910261e-01 7.660920305860696722e-02
5.492705709937790743e+02 3.622076363933444298e+01 -5.897393740774580229e-01 3.183076012979469405e-01
... 49 more lines ...
17077 | Fri Aug 12 02:02:31 2022 | Koji | Update | General | Power Outage Prep: nodus /home/export backup
Took the backup (snapshot) of /home/export as of Aug 12, 2022
controls@nodus> cd /cvs/cds/caltech/nodus_backup
controls@nodus> rsync -ah --progress --delete /home/export ./export_220812 >rsync.log&
As the last backup was just a month ago (July 8), rsync finished quickly (~2min). |
17078 | Fri Aug 12 13:40:36 2022 | JC | Update | General | Preparing for Shutdown on Saturday, Aug 13
[Yehonathan, JC]
Our first step in preparing for the shutdown was to center all the OpLevs. Next is to prepare the vacuum system for the shutdown. |
17079 | Mon Aug 15 10:27:56 2022 | Koji | Update | General | Recap of the additional measures for the outage prep
[Yuta Koji]
(Report on Aug 12, 2022)
We went around the lab for the final check. Here are the additional notes.
- 1X9: The X-end frontend machine still had AC power. The power strip to which the machine is connected was disconnected from the AC at the side of the rack. (Attachment 1)
- 1X8: The vacuum rack still supplied the AC to c1vac. This was turned off at the UPS. (Attachment 2)
- 1X6: The VMI RFM hub still had power. This was turned off at the rear switch. (Attachment 3)
- PSL: The PSL door was open (reported above). Closed. (Attachment 4)
- 1Y2: The LSC rack still had DC power. The supplies were turned off at the KEPCO rack (the short rack). (Attachment 5)
Note that the top-right supply for the +15V is not used (the one in the empty slot got busted). The left-most one in the second row may need some attention: it indicated a negative current. Is this just a problem with the current meter, or is the supply broken?
- Control room: The CAD WS was turned off.
I declare that now we are ready for the power outage. |
Attachment 1: PXL_20220812_234438097.jpg
Attachment 2: PXL_20220812_234655309.jpg
Attachment 3: PXL_20220812_234748559.jpg
Attachment 4: rn_image_picker_lib_temp_b5f3e38d-796c-4816-bc0e-b11ba3316cbe.jpg
Attachment 5: PXL_20220812_235429314.jpg
17080 | Mon Aug 15 15:43:49 2022 | Anchal | Update | General | Complete power shutdown and startup documentation
All steps taken have been recorded here:
https://wiki-40m.ligo.caltech.edu/Complete_power_shutdown_2022_08 |
17081 | Mon Aug 15 18:06:07 2022 | Anchal | Update | General | c1vac issues, 1 pressure gauge died
[Anchal, Paco, Tega]
Disk full issue:
c1vac was showing the /var disk to be full. We moved all gunzipped backup logs to /home/controls/logBackUp. This freed up 36% of the space on /var. Ideally, we need not log so much. Some solution needs to be found for reducing these log sizes or monitoring them for smart handling.
Pressure sensor malfunctioning:
We were unable to open the PSL shutter due to the interlock with C1:Vac-P1a_pressure. We found that C1:Vac-P1a_pressure was not being written by the serial_MKS937a service on c1vac. The issue was that the sensor itself has gone bad and needs to be replaced. We believe that "L 0E-04" in the status message (C1:Vac-P1a_status) indicates a malfunctioning sensor.
Quick fix:
We removed the writing of C1:Vac-P1a_pressure and C1:Vac-P1a_status from MKS937a and moved them to XGS600, which is using sensor 1 from the main volume. See this commit.
Now we are able to open the PSL shutter. The sensor should be replaced ASAP, and this commit can then be reverted. |
17082 | Mon Aug 15 20:09:18 2022 | Koji | Update | General | c1vac issues, 1 pressure gauge died
- Disk Full: Just use the usual /etc/logrotate thing
- Vacuum gauge
I'd rather not replace P1a. We used to have Ps and CCs as they didn't cover the entire pressure range. However, this new FRG (= Full Range Gauge) does cover from 1 atm to 4 nTorr.
Why don't we have a couple of FRG spares, instead?
Question to Tega: How many FRGs can our XGS-600 controller handle? |
17084 | Wed Aug 17 01:18:54 2022 | Koji | Update | General | Notice: SURF SUS test setup blocking the lab way
Juan and I built an analog setup to measure some transfer functions of the MOS suspension. The setup is blocking the walkway around the PD test bench.
Excuse us for the inconvenience. It will be removed/cleared by the end of the week. |
Attachment 1: PXL_20220817_060428109.jpg
17085 | Wed Aug 17 07:35:48 2022 | yuta | Bureaucracy | General | My wish list for IFO commissioning
FPMI related
- Better suspension damping HIGH
- Investigate ITMX input matrix diagonalization (40m/16931)
- Output matrix diagonalization
* FPMI lock is not stable, and only lasts a few minutes or so. The MICH fringe is too fast; 5-10 fringes/sec in the evening.
- Noise budget HIGH
- Calibrate error signals (actually already done with sensing matrix measurement 40m/17069)
- Make a sensitivity curve using error and feedback signals (actuator calibration 40m/16978)
* See if the optical gain and actuation efficiency make sense. The REFL55 error signal amplitude is sensitive to cable connections.
- FPMI locking
- Use CARM/DARM filters, not XARM/YARM filters
- Remove FM4 belly
- Automate lock acquisition procedure
- Initial alignment scheme
- Investigate which suspension drifts much
- Scheme compatible with BHD alignment
* These days, we have to align almost from scratch every morning. Empirically, TT2 seems to recover LO alignment and PR2/3 seems to recover Yarm alignment (40m/17056). Xarm seems to be stable.
- ALS
- Install alignment PZTs for Yarm
- Restore ALS CARM and DARM
* Green seems to be useful also for initial alignment of IR to see if arms drifted or not (40m/17056).
- ASS
- Suspension output matrix diagonalization to minimize pitch-yaw coupling (current output matrix is pitch-yaw coupled 40m/16915)
- Balance ITM and ETM actuation first so that ASS loops will be understandable (40m/17014)
- Suspension calibrations
- Calibrate oplevs
- Calibrate SUSPOS/PIT/YAW/SIDE signals (40m/16898)
* We need a better understanding of the suspension motions. Also good for A2L noise budgeting.
- CARM servo with Common Mode Board
- Do it with single arm first
BHD related
- Better suspension damping HIGH
- Investigate LO2 input matrix diagonalization (40m/16931)
- Output matrix diagonalization (almost all new suspensions 40m/17073)
* BHD fringe speed is too fast (~100 fringes/sec?), LO phase locking saturates (40m/17037).
- LO phase locking
- With better suspensions
- Measure open loop transfer function
- Try dither lock with dithering LO or AS with MICH offset (single modulation)
- Modify c1hpc/c1lsc so that it can modulate BS and do double demodulation, and try double demodulation
- Noise Budget HIGH
- Calibrate MICH error signal and AS-LO fringe
- Calibrate LO1, LO2, AS1, AS4 actuation using ITM single bounce - LO fringe
- Check BHD DCPD signal chain (DCPD making negative output when fringes are too fast; 40m/17067)
- Make a sensitivity curve using error and feedback signals
- AS-LO mode-matching
- Model what could be causing funny LO shape
- Model if having low mode-matching is bad or not
* Measured mode-matching of 56% sounds too low to explain with errors in mode-matching telescope (40m/16859, 40m/17067).
IMC related
- WFS loops too fast (40m/17061)
- Noise Budget
- Investigate MC3 damping (40m/17073)
- MC2 length control path |
17086 | Wed Aug 17 10:23:05 2022 | Tega | Update | General | c1vac issues, pressure gauge replacement
- Disk full
I updated the configuration file '/etc/logrotate.d/rsyslog' to set a file size limit of 50M on 'syslog' and 'daemon.log', since these are the two log files that capture caget & caput terminal outputs. I also reduced the number of backup files to 2.
controls@c1vac:~$ cat /etc/logrotate.d/rsyslog
/var/log/syslog
{
rotate 2
daily
size 50M
missingok
notifempty
delaycompress
compress
postrotate
invoke-rc.d rsyslog rotate > /dev/null
endscript
}
/var/log/mail.info
/var/log/mail.warn
/var/log/mail.err
/var/log/mail.log
/var/log/daemon.log
{
rotate 2
missingok
notifempty
size 50M
compress
delaycompress
postrotate
invoke-rc.d rsyslog rotate > /dev/null
endscript
}
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
postrotate
invoke-rc.d rsyslog rotate > /dev/null
endscript
}
- Vacuum gauge
The XGS-600 can handle 6 FRGs and we currently have 5 of them connected. Yes, having a spare would be good. I'll see about placing an order for these then.
Quote: |
- Disk Full: Just use the usual /etc/logrotate thing
- Vacuum gauge
I rather feel not replacing P1a. We used to have Ps and CCs as they didn't cover the entire pressure range. However, this new FRG (=Full Range Gauge) does cover from 1atm to 4nTorr.
Why don't we have a couple of FRG spares, instead?
Questions to Tega: How many FRGs can our XGS-600 controller handle?
|
|
17087 | Wed Aug 17 10:27:49 2022 | Cici | Update | General | Locking X-arm AUX laser
TL;DR: Got the X-arm AUX laser locked again and took more data. My fits to the transfer functions need improvement, and my new method for finding coherence doesn't work, so I went back to the first way! See the attached files for examples of data runs with poor fits. The first one has the questionable coherence data; the second one has more logical coherence. (Ignore the dashed lines.)
------------------------------------------------------------------------------------
- The AUX laser on the X-arm was still off after the power shutdown, so Paco and I turned it back on and realigned the oplev of ETMX; the initial position was P = -0.0420, Y = -5.5391.
- Locked the X-arm and took another few runs. I had been calculating coherence by I/Q demodulating the buffers, recombining the I/Q factors, and then taking scipy.signal.coherence(), but for some reason this was giving me coherence values exclusively above 0.99, which seemed suspicious. When I calculated it the way I had before, by just taking scipy.signal.coherence() of the buffers, I got a coherence around 1 except in noisy areas of the data, where it dropped more significantly, which seemed more correlated with the data. So I'll go back to that way.
- I also think my fits are not great: the standard errors of the fits (calculated using the coherence as the weight; see Table 9.6 of Random Data by Bendat and Piersol for the formula I'm using, and the sketch after this list) are enormous. Now that I have a good idea that the UGF is between 1-15 kHz, I'm going to restrict my frequency band and try to fit just around where the UGF should be.
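For reference, that Table 9.6 random-error formula as a Python sketch (using scipy's convention that the measured coherence is the magnitude-squared coherence gamma^2; n_avg is the number of averages):

import numpy as np

def tf_mag_random_error(coh_sq, n_avg):
    # Normalized random error of |H|: eps = sqrt(1 - gamma^2) / (|gamma| * sqrt(2 * n_d))
    coh_sq = np.asarray(coh_sq, dtype=float)
    return np.sqrt(1.0 - coh_sq) / (np.sqrt(coh_sq) * np.sqrt(2.0 * n_avg))

print(tf_mag_random_error(0.5, 100))  # ~0.07, i.e. ~7% fractional error on |H|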
--------------------------------------------------------------------------------
To do:
- Reduce frequency band and take more data
- Get a fit with better standard error, and use that error to calculate the uncertainty in the UGF! |
Attachment 1: rpi_OLG_2022_08_16_17_00_41.pdf
Attachment 2: rpi_OLG_2022_08_16_17_01_21.pdf
17090 | Thu Aug 18 16:35:29 2022 | Cici | Update | General | UGF linked to optical gain!
TL;DR: When the laser has good lock, the OLTF moves up and the UGF moves over!
-----------------------------------------------------------
Figured out with Paco yesterday that when the laser is locked but kind of weakly (mirrors on the optical table sliiightly out of alignment, for example), we get a UGF around 5 kHz, but when we have a very strong lock (adjusting the mirrors until the spot is brightest) we get a UGF around 13-17 kHz. Attached are some plots of us going back and forth (you can kind of tell from the coherence/error that the one with the lower UGF is more weakly locked, too). Error on the plots is propagated using the coherence data (see Bendat and Piersol, Random Data, Table 9.6 for the formula).
-------------------------------------------------------------
Want to take data next week to quantitatively compare optical gain to UGF! |
Attachment 1: rpi_OLG_2022_08_17_18_03_52.pdf
Attachment 2: rpi_OLG_2022_08_17_18_00_50.pdf
17093 | Fri Aug 19 15:20:14 2022 | Koji | Update | General | Notice: SURF SUS test setup blocking the lab way
The setup was (at least partially) cleared. |
Attachment 1: PXL_20220819_201318044.jpg
17095 | Fri Aug 19 15:36:10 2022 | Koji | Update | General | SR785 C21593 CHA+ BNC broken
When Juan and I were working on the suspension measurement, I found that CHA didn't settle down well.
I inspected it and found that CHA's + input seemed broken and physically flaky. For Juan's measurements, I plugged the + channels (for CHA/B) and used the - channels as the input. This seemed to work, but I wasn't sure the SR785 functioned as expected in terms of the noise level.
We need to inspect the inputs a bit more carefully and send it back to SRS if necessary.
How many SR785's do we have in the lab right now? Measurement instruments like the SR785 are still the heart of our lab, so please be kind to them... |
Attachment 1: PXL_20220819_195619620.jpg
Attachment 2: PXL_20220819_195643478.jpg