Physical plant is cleaning our roof and gutters today.
Exceptions: cryo pump and 4 ion pumps
Vac Status: The vac rack power was power-cycled yesterday, and power to controllers TP1, TP2 and TP3 was restored. (Attachment #3)
VME is OFF. Power to all other instruments is ON. 23.9 VDC, 0.2 A
The ETMY sus tower, with its optic locked, is in the HEPA tent at the east end, standing by for action.
Yesterday, Koji and I noticed (from the wall StripTool traces) that the vertex seismometer RMS between 0.1-0.3 Hz in the X direction increased abruptly around 6pm PDT. This morning when I came in, I noticed that the level had settled back to normal. Trending the BLRMS channels over the last 24 hours, I see that the 0.3-1 Hz band in the Z direction shows some anomalous behaviour over almost exactly the same time span. Since it's hard to believe that any physical noise would be so well aligned to the seismometer axes, I'm inclined to think this is indicative of some electronics issue with the Trillium interface unit, which has been known to be flaky in the past.
I looked into the seismometer situation a bit more today. Here is the story so far - I think more investigation is required:
Attachment #2 has some spectrograms (they are rather large files). They suggest that the increase in noise in the 0.1-0.3 Hz band in the BS seismometer X channel is real - but there isn't a corresponding increase in the other two seismometers, so the problem could still be electronics related.
The Trillium T240 seismometer needs mass re-centering. Has anyone done this before, and do we have any hardware to do this?
I went to the Trillium interface box in 1X5. In this elog, Koji says it is D1000749-v2. But looking at the connector footprint on the back panel, it is more consistent with the v1 layout. Anyway I didn't open it to check. Main point is that none of the backplane data I/O ports are used. We are digitizing (using the fast CDS system) the front panel BNC outputs for the three axes. So of the various connectors available on the interface box, we are only using the front panel DB25, the front panel BNCs, and the rear panel power.
The cable connecting this interface box to the actual seismometer is a custom one I believe. It has a 19 pin military circular type hermetic connector on one end, and a DB25 on the other. Power is supplied to the seismometer from the interface box via this cable, so in order to run the test, I had to use a DB25 breakout board to act as a feedthrough and peek at the signals while the seismometer and interface boards were connected. I used Jenne's mapping of the DB25--> 19 pin connector (which also seems consistent with the schematic). Findings:
I am holding off on attempting any re-centering, for more experienced people to comment.
I removed the Trillium T240 DAQ interface unit from 1X4 for investigation.
It was returned to the electronics rack and all the connections were re-made. Some details:
Update 4:45pm: Seems to have done something good - the old feedforward filters reduce the YAW RMS motion by a factor of a few. Pitch performance is not as good; maybe the filter needs re-training, but I see coherence - see Attachment #2 for the frequency-domain Wiener filter.
Attachment #1 shows the spectra of our three available seismometers over a period of ~10ksec.
Attachment #2 shows the result of applying frequency domain Wiener filter subtraction to the POP QPD (target) with the vertex seismometer signals as witness channels.
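As a cross-check on the frequency-domain result, the best-achievable residual with a single witness follows directly from the coherence: P_res = P_tt * (1 - C^2). A runnable python sketch, with synthetic data standing in for the QPD (target) and seismometer (witness) channels:

import numpy as np
from scipy.signal import welch, coherence

fs = 256                                      # Hz, assumed DAQ rate
n = 600 * fs                                  # ~600 s of data
ground = np.random.randn(n)                   # common ground motion
witness = ground + 0.3 * np.random.randn(n)   # seismometer = ground + sensor noise
target = 5 * ground + np.random.randn(n)      # QPD = coupled ground + other noise
f, Ptt = welch(target, fs=fs, nperseg=4096)
f, C2 = coherence(witness, target, fs=fs, nperseg=4096)  # magnitude-squared coherence
P_res = Ptt * (1 - C2)    # residual PSD after ideal single-witness Wiener subtraction

With multiple witnesses this generalizes to the usual matrix form, but the single-witness version is a quick sanity check on any one channel.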
This is due to the Equivalence Principle: local accelerations are indistinguishable from spacetime curvature. On a spherical Earth, the local gradient of the metric points towards the center of the Earth, a direction colloquially known as "down".
I don't understand why the z-axis motion reported by the T240 is ~10x lower at 10 mHz compared to the X and Y motions. Is this some electronics noise artefact?
Here is some disturbance in the spacetime curvature, where the local gradient of the metric seems to have been modulated (in the "downward" as well as in the other two orthogonal Cartesian directions) at ~1 Hz - seems real as far as I can tell, all the suspensions were being shaken about and all the seismometers witnessed it, though the peak is pretty narrow. A broader, less prominent peak also shows up around 0.5 Hz. We couldn't identify any clear source (no LN2 fill-up / obvious CES activity). This event lasted for ~45 mins, and stopped around 2315 local time. Shortly (~5min) after the ~1 Hz peak died down, however, the 3-10 Hz BLRMS channel reports an increase by ~factor of 2.
Onto trying some locking now that the suspensions have settled down somewhat.
At 1 Hz this effect is not large, so that's real translation. At lower frequencies, ground tilt couples to the horizontal sensors at first order, and the apparent signal is amplified by the double integral. Drawing a free body diagram, you can see that

x_apparent = (g / s^2) * theta

but for the vertical this is not true, because the vertical channel already measures the full free fall, and tilt only shows up at second order.
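To put numbers on that, the tilt-to-horizontal coupling factor g/omega^2 falls off steeply with frequency (quick python check, taking g = 9.8 m/s^2):

import numpy as np

g = 9.8                       # m/s^2
for f in (0.01, 0.1, 1.0):    # Hz
    w = 2 * np.pi * f
    print(f"{f} Hz: {g / w**2:.3g} m of apparent displacement per rad of tilt")
# ~2.5e3 m/rad at 10 mHz, ~25 m/rad at 100 mHz, ~0.25 m/rad at 1 Hz

So at 1 Hz the tilt contribution really is small, while below ~0.1 Hz it can dominate the horizontal channels.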
The large ground motion at 1 Hz started up again tonight at around 23:30. I walked around the lab and nearby buildings with a flashlight and couldn't find anything whumping. The noise is very sinusoidal and seems like it must be a 1 Hz motor rather than any natural disturbance or traffic, etc. Suspect that it is a pump in the nearby CES building which is waking up and running to fill up some liquid level. Will check out in the morning.
Estimate of displacement noise based on the observed MC_F channel showing a 25 MHz peak-peak excursion for the laser:
dL = 25e6 Hz * 13 m / (c / lambda)
   ≈ 1 micron
So this is a lot. Probably our pendulum is amplifying the ground motion by 10x, so I suspect a ground noise of ~0.1 micron peak-peak.
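The same arithmetic in python, for the record:

import scipy.constants as const

df = 25e6        # Hz, peak-to-peak MC_F excursion
L = 13           # m, mode cleaner length used above
lam = 1064e-9    # m, laser wavelength
dL = df * L / (const.c / lam)    # from dL/L = df/f, with f = c/lambda
print(f"{dL * 1e6:.2f} um peak-to-peak")   # ~1.15 um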
(this is a native PDF export using qtgrace rather than XMgrace. uninstall xmgrace and symlink to qtgrace.)
Attachment #1 is a spectrogram of the BS seismometer signals for a ~24 hour period (from Wednesday night to Thursday night local time, zipped because it's a large file). I've marked the nearly pure tones that show up for some time and then turn off. We need to get to the bottom of this, and ideally stop it from happening at night, because it is eating ~1 hour of lockable time.
We considered if we could look at the phasing between the vertex and end seismometers to localize the source of the disturbance.
The nightly seismic activity enhancement continued during the weekend. It always shows up around 10pm local time, persists for ~1 hour, and then goes away. This isn't a show stopper as long as it stops at some point, but it is annoying that it is eating up >1 hour of possible locking time. I walked over to CES; no one there admitted to anything - there is an "Earth Surface Dynamics Laboratory" there that runs some heavy equipment right next to us, but they claim they aren't running anything after ~5:30pm. Rick (building manager?) also doesn't know of anything that turns on with the periodicity we see. He suggested contacting Watson, but I have no idea who to talk to there who has an overview of what goes on in the building. 😢
The shaking started earlier today than yesterday, at ~9pm local time.
While the IFO is shaking, I thought (as Jan Harms suggested) I'd take a look at the cross-spectra between our seismometer channels at the dominant excitation frequency, which is ~1.135 Hz. Attachment #1 shows the phase of the cross spectrum taken for 10 averages (with 30mHz resolution) during the time period when the shaking was strong yesterday (~1500 seconds with 50% overlap). The logic is that we can use the relative phasing between the seismometer channels to estimate the direction of arrival and hence, the source location. However, I already see some inconsistencies - for example, the relative phase between BS_Z and EX_Z suggests that the signal arrives at the EX seismometer first. But the phasing between EX_Y and BS_Y suggests the opposite. So maybe my thinking about the problem as 3 co-located sensors measuring plane-wave disturbances originating from the same place is too simplistic? Moreover, Koji points out that for two sensors separated by ~40m, for a ground wave velocity of 1.5 km/s, the maximum phase delay we should see between sensors is 30 msec, which corresponds to ~10 degrees of phase. I guess we have to undo the effects of the phasing in the electronics chain.
Does anyone have some code that's already attempted something similar that I can put the data through? I'd like to not get sucked into writing fresh code.
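Nothing polished on my end, but here is the gist of what I've been doing, as a self-contained sketch (synthetic data stands in for the real BS_Z / EX_Z time series; the built-in sign-convention check is the useful part):

import numpy as np
from scipy.signal import csd

fs = 256                          # Hz, assumed seismometer DAQ rate
t = np.arange(0, 1500, 1 / fs)    # ~1500 s stretch, like the one used above
# synthetic stand-ins: a 1.135 Hz tone, with the "EX" channel delayed by 20 ms
x = np.sin(2 * np.pi * 1.135 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 1.135 * (t - 0.020)) + 0.1 * np.random.randn(t.size)
f, Pxy = csd(x, y, fs=fs, nperseg=int(fs / 0.03))   # ~30 mHz resolution, 50% overlap
k = np.argmin(np.abs(f - 1.135))
print(np.degrees(np.angle(Pxy[k])))   # ~ -8 deg: scipy's csd is conj(X)*Y, so the delayed y gives negative phase

Swapping the real data in for x and y (and trusting the synthetic-delay check for the sign) should reproduce the phases in Attachment #1.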
🤞 this means that the shaking is over for today and I get a few hours of locking time later today evening.
Another observation is that even after the main 1.14 Hz peak dies out, there is elevated seismic activity reported by the 1-3 Hz BLRMS band. This unfortunately coincides with some stack resonance, and so the arm cavity transmission reports greater RIN even after the main peak dies out. Today, it seems that all the BLRMS returned to their "nominal" nighttime levels ~10 mins after the main 1.14 Hz peak died out.
Yehonathan, please center the EX seismometer.
The attached PDF shows the seismometer signals (I'm assuming they're already calibrated into microns/s) during the lab tour for the art students on 11/1. The big spike which I've zoomed in on shows the time when we were in the control room and we all jumped up at the same time. There were approximately 15 students, each with a mass of ~50-70 kg. I estimate that our landing times were all sync'd to within ~0.1 s.
I have re-centered the EX (and EY) seismometers. They are Guralp CMG40-T, and have no special centering procedure except cycling the power a few times. I turned off the power on their interface box, then waited 10s before turning it back on.
The first attachment shows the comparison using data from 8-9 PM Saturday night:
I checked the seismometers over the last 14 hours (attached). Seems like the coherence is restored in the X direction.
I re-connected the 3 accelerometers located near the MC1/MC3 chamber. It was a bit tedious to get the cabling sorted - I estimate the cable is ~80m long, and the excess length had to be wound around a spool (see Attachment #1), which wasn't really a 1 person job. It's neat-ish for now, but I'm not entirely satisfied. I think we should get shorter cables (~20m), and also mount the pre-amp/power units in a rack instead of leaving it on the floor. The pre-amp settings are x100 for all three channels. The MC2 channels are powered, but are unconnected to the seismometers - it was too tedious to unroll the other spool yesterday. Apart from this, the cable for the "Z" channel had to be re-seated in the strain relief clamp.
I did not enable any of the CDS filters that convert the raw signal into physical units, so for now, these channels are just recording raw counts.
Update 7pm: the spectra in the current config are here - not sure what to make of the MC2_Z channel appearing to show lower noise?
Update July 13 2020 430pm: This afternoon, I hooked up the MC2 accelerometer channels too...
Alaska M7.5 20:54UTC https://earthquake.usgs.gov/earthquakes/eventpage/us6000c9hg/executive
I looked at the suspensions. The watchdogs have not been tripped.
IMC was locked but continually shaken (with occasional unlocks).
The particle counter on the 40m PSL was removed. The package was made together with the OMC lab particle counter (see the packing list below).
Radhika picked up the kit to work on python code to read out the numbers.
=== Packing List ===
EQs seen on Summary pages
I propose we set up a temperature sensor network as described in attachment 1.
Here there are two types of units:
These sensors can be configured over the network by going to their assigned IP addresses. I'm not sure at the moment how to configure the .db files to get them to write to slow EPICS channels.
We will have an unused port on the BASE-GATEWAY (#B), which can be used to run another temperature sensor, maybe at an important rack, the PSL table, or somewhere else.
In future, if more sensors are required, there are expansion options (network-switch-like) for the BASE-GATEWAY, or daisy-chain options for the probes.
Edit Fri Jun 18 16:28:13 2021 :
See this [wiki page](https://wiki-40m.ligo.caltech.edu/Physical_Environment_Monitoring/Thermometers) for updated plan and final quote.
Anchal mentioned it would be good to put in more details about how I arrived at the values needed to configure the modbus driver for the temperature sensor, since this information is not in the manual and is hard to find on the internet. So here is a breakdown.
So the generic format is:
which in our case become:
As can be seen, the parameters of the first two functions, drvAsynIPPortConfigure and modbusInterposeConfig, are straightforward, so we restrict our discussion to the third function, drvModbusAsynConfigure. After hours of trawling the internet, I was able to piece together a coherent picture of what needs doing, and I have summarised it in the table below.
Once the asyn IP or serial port driver has been created, and the modbusInterpose driver has been configured, a modbus port driver is created with the following command:
drvModbusAsynConfigure(portName,            # used by channel definitions in .db file to reference this unit
                       tcpPortName,         # reference to the port created with the drvAsynIPPortConfigure command
                       slaveAddress,        # Modbus slave address
                       modbusFunction,      # Modbus function code (e.g. 4 = read input registers)
                       modbusStartAddress,  # start address within the 16-bit register address space
                       modbusLength,        # length in dataType units
                       dataType, pollMsec,  # register data type (e.g. FLOAT32_LE); poll period in [ms]
                       plcType)             # informational string describing the device
Modbus addresses are specified by a 16-bit integer address. The location of inputs and outputs within the 16-bit address space is not defined by the Modbus protocol, it is vendor-specific. Note that 16-bit Modbus addresses are commonly specified with an offset of 400001 (or 300001). This offset is not used by the modbus driver, it uses only the 16-bit address, not the offset.
For ServersCheck, the offset is "30001", so that
modbusStartAddress = 30200 - 30001 = 199
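Putting it all together, a sketch of what the st.cmd calls might look like for one ServersCheck reading (the port names and IP address are placeholders, and the dataType enum value should be double-checked against our driver version):

# asyn TCP port to the sensor (IP is a placeholder; Modbus/TCP uses port 502)
drvAsynIPPortConfigure("tempSensIP", "192.168.113.240:502", 0, 0, 1)
# linkType 0 = TCP, 2000 ms timeout, no write delay
modbusInterposeConfig("tempSensIP", 0, 2000, 0)
# function 4 (read input registers), start 199 (= 30200 - 30001),
# length 2 registers = one FLOAT32_LE value (dataType 7), poll at 1 Hz
drvModbusAsynConfigure("tempSens", "tempSensIP", 0, 4, 199, 2, 7, 1000, "ServersCheck")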
For the past couple of days, the 0.1 to 0.3 Hz RMS seismic noise along BS-X has increased. Attachment 1 shows the hour trend over the last ~10 days. We'll keep monitoring it, but one thing to note is how uncorrelated it seems to be with the other frequency bands. The vertical axis in the plot is in um/s.
Looks like this increase is correlated for BS/EX/EY. So it is likely to be real.
Comparison between 9/15 (UTC) (Attachment 1) and 9/10 (UTC) (Attachment 2)
I have placed a GT321 particle counter on top of the MC1/MC3 chamber next to the BS chamber. The serial cable is connected to c1psl computer on 1X2 using 2 usb extenders (blue in color) over the PSL enclosure and over the 1X1 rack.
The main serial communication script for this counter by Radhika is present in 40m/labutils/serial_com/gt321.py.
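For reference, the gist of what that script does, in generic pyserial terms (the baud rate and query command here are placeholders - see gt321.py for the actual GT321 protocol):

import serial   # pyserial

ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2)   # port name per the udev assignment on c1psl
ser.write(b"R\r\n")          # placeholder query command -- the real protocol lives in gt321.py
line = ser.readline().decode(errors="ignore").strip()
print(line)                  # raw counts string; the real script parses this into per-bin counts
ser.close()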
A 40m-specific application script is present in the new git repo for 40m scripts, in 40m/scripts/PEM/particleCounter.py. Our plan is to slowly migrate the legacy scripts directory to this repo over time. I've cloned this repo in the nfs shared directory at /opt/rtcds/caltech/c1/Git/40m/scripts, which makes the scripts available on all computers and keeps them up to date everywhere.
The particle counter script is running on c1psl through a systemd service, using the service file 40m/scripts/PEM/particleCounter.service. Locally on c1psl, /etc/systemd/system/particleCounter.service is symbolically linked to the file in the repo.
The following channels for the particle counter needed to be created, as I could not find any existing particle counter channels.
These are created from the 40m/softChansModbus/particleCountChans.db database file. Computer optimus is running a docker container to serve as the EPICS server for such soft channels. To add or edit channels, one just needs to add a new database file or edit the database files in this repo, and then on optimus do:
controls@optimus|~> sudo docker container restart softchansmodbus_SoftChans_1
I've added the above channels to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini to record them in framebuilder. Starting from 11:20 am Oct 20, 2021 PDT, the data on these channels is from the BS chamber area. Currently the script is running continuously, which means 0.3u particles are sampled every minute, 0.5u twice in 5 minutes, and 1u, 2u, and 5u particles are sampled once in 5 minutes. We can reduce the sampling rate if this seems unnecessary to us.
The particle count channel names were changed yesterday to follow the naming conventions used at the sites. Following are the new names:
The legacy count channels are kept alive with C1:PEM-count_full copying C1:PEM-BS_DUST_1000NM channel and C1:PEM-count_half copying C1:PEM-BS_DUST_500NM channel.
Attachment 1 is the particle counter trend since 8:30 am this morning, when the HVAC work started. Seems like there was some peak particle presence around 11 am. The particle counter even counted 8 counts of particles of size above 5um!
SVG doesn't work in my browser(s). Can we use PDF as our standard for all graphics other than photos (PNG/JPG) ?
rethinking what I said on Wednesday - it's not a good idea to put the particle counter on a vac chamber with optics inside. The rumble from the air pump shows up in the acoustic noise of the interferometer. Let's look for a way to mount it near the BS chamber, but attached to something other than the vacuum chambers and optical tables.
I have placed a GT321 particle counter on top of the MC1/MC3 chamber next to the BS chamber.
I have done some reading about where the best place to put the particle counter would be. The ISO standard (14644-1:2015) for cleanrooms calls for one counter per 1000 m^2, i.e. one for every 30m x 30m of space. We should have the particle counter reasonably close to the open chamber, and all the manufacturers that I read about suggest a little more than 1 per 30m x 30m. We will have it much closer than this, so it is nice to know that it should still get a good reading. They also suggest keeping it in the open and not tucked away, which is a little obvious. I think the best spot is attached to the cable tray that is right above the door to the control room. This should put it out of the way and within about 5m of where we are working. I ordered some cables to route it over there last night, so when they come in I can put it up there.
git repo: https://git.ligo.org/40m/tempsensor.git
Update the temp sensor channels to fit the CDS format, i.e. "C1:PEM-TEMP_EX", "C1:PEM-TEMP_EY", "C1:PEM-TEMP_BS"
- Use FLOAT32_LE data format for the database file (/cvs/cds/caltech/target/c1pem1/tempsensor/C1PEMaux.db) to create the new channels.
- Keep the old database code and channels so we can compare with the new temp channels afterwards. Also, we need a 1-month overlap before deleting the old channels.
[sus medm screen]
git repo: https://git.ligo.org/40m/susmedmscreen.git
todo (from talk with Koji)
- Link stateword display to open "C1CDS_FE_STATUS.adl"
- Damp filter and Lock filter buttons should open a 3x1 filter screen, so that the 6 filters are opened by 2 buttons, compared to the old screen which had 3 buttons connected to 2x1 filter screens
- Make the LOCKIN signal modulation flow diagram look more like the old 40m screen, since that is a better layout
- Move load coefficient button to top of sus medm screen (beside stateword)
- The rectangular red outline around the oplev display is confusing and needs to be modified for clarity
- The COMM tag block should not be 3D, as this suggests it is a button. Make it flat and change the tag name to indicate individual watchdog control, as this better reflects its functionality. Rename the current watchdog switch to "watchdog master", as it does what the 5 COMM switches do at once.
- Macro passing needs to be better documented, so that when we call the sus screens from locations other than sitemap, we know what macro variables to pass in, like DCU_ID etc.
- Edit sitemap.adl to point only to the new screens. Then create a button on the new screen that points to the old screen. This way, we can still access the old screen without clogging sitemap.
- Move the new screen location to a subfolder of where the current sus screens reside, /opt/rtcds/userapps/trunk/sus/c1/medm/templates
- Rename the overview screen (SUS_CUST_HSSS_OVERVIEW.adl) to use the SUS_SINGLE nomenclature, i.e. SUS_SINGLE_OVERVIEW.adl
- Keep an eye on the CPU usage of c1pem as we add BLRMS blocks for other optics.
Added new temp EPICS channels to the database file (/cvs/cds/caltech/target/c1pem1/tempsensor/C1PEMaux.db)
Added new temp EPICS channels to the slow channels ini file (/opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini)
[SUS medm screen]
Moved new SUS screen to location : /opt/rtcds/userapps/trunk/sus/c1/medm/templates/NEW_SUS_SCREENS
Placed a button on the new screen to link to the old screen, and replaced the old screen's link on sitemap.
Fixed Load Coefficient button location issue
Fixed LOCKIN flow diagram issue
Fixed watchdog labelling issue
Linked STATE WORD block to FrontEnd STATUS screen
Replaced the 2x1 pit/yaw filter screens for the LOCK and DAMP filters with a 3x1 LPY filter screen
*Need some more time to figure out the OPTLEV red indicator
I mounted the particle counter over the BS chamber, attached to the cable tray, as seen in Attachment 1. The signal cable runs through an active 30ft cable to the 1X2 rack. The wire is labeled and runs properly through the cable tray. The particle counter is plugged in at the power strip attached near the cable tray. The power cord is also labeled.
I restarted the particle counter service on the c1psl computer (service file in the /etc/systemd/system/ folder) using the commands
sudo systemctl restart particleCounter
sudo systemctl status particleCounter
I changed the USB device assigned in the service file to ttyUSB0, which is what we saw the computer had named it.
Checking the channels from this elog shows the same particle counts as when testing with the buttons and checking the screen. It seems that the channels had been down but are now restarted.
nice - please update the particle counter page in the 40m wiki. It's probably years out of date.
For the proposed construction in the NW corner of the CES building (near the 40m BS chamber), they did a simulated construction activity on Wednesday from 12-1.
In the attached image, you can see the effect as seen in our seismometers:
this image is calculated by the 40m summary pages codes that Tega has been shepherding back to life, luckily just in time for this test.
Since our local time PDT = UTC - 7 hours, 1900 UTC = noon local. So most of the disturbance happens from 1130-1200, presumably while they are setting up the heavy equipment. If you look in the summary pages for that day, you can also see the IMC lost lock. Unclear if this was due to their work or if it was a coincidence. Thoughts?
I pressed the Auto-Z(ero) button for ~ 3 seconds at ~9:55 local (pacific) time on the trillium interface on 1X5.
This nicely brought the sensing signal back to ~zero. See attachment
Some basic info:
thanks, this seems to have recentered well.
It looks like it started to act funny at 0400 UTC on 10/24, so that's 9 PM on Sunday at the 40m. What was happening then?
I've attempted to visualize the various components of the cost function as I've defined it for the current iteration of the Oplev optimal control loop design code. For each term in the cost function, the cost depends on the ratio of the abscissa value to some threshold value (set by hand for now): if this ratio is >1, the cost is the logarithm of the ratio, whereas if the ratio is <1, the cost is the square of the ratio. Continuity is enforced at the point at which this transition happens. I've plotted the cost function for some of the terms entering the code right now - indicated in dashed red lines are the approximate values of each of these costs for our current Oplev loop. The weights were chosen so that each of the costs was O(10) for the current controller, the idea being that the optimizer could hopefully drive these down to O(1), but I've not yet gotten that to happen.
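For concreteness, a minimal sketch of how one such term is computed (the thresholds and weights are hand-tuned per term; the +1 offset on the log branch is one way to enforce the continuity mentioned above, and is what I assume here):

import numpy as np

def term_cost(value, threshold, weight=1.0):
    # Piecewise cost: quadratic below the threshold, logarithmic above.
    # Both branches evaluate to 1 at ratio = 1, so the cost is continuous.
    r = np.abs(value) / threshold
    c = r**2 if r < 1 else 1.0 + np.log(r)
    return weight * c

The total cost handed to the optimizer is then just the sum of such terms.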
Based on the meeting yesterday, some possible ideas:
I've made various changes to the optimal loop design approach, but am still not having much success. A summary of changes made:
Attachment #1 shows the outcome of a typical optimization run - so while I am having some more success with this than before, where the PSO algorithm was stalling and terminating before any actual optimization was done, it seems like I need to re-think the cost function yet again...
Attachment #2 shows the current terms entering the cost function, and their "desired" values.
The current version of the code I am using is here, although I may not have included some of the data files required to run it - to be fixed...
When putting code into git.ligo.org, one way to have automated testing is to use the GitLab CI. This is an automated 'checker', much like the 'Travis' system used with GitHub. Essentially, you give it a makefile which it runs somewhere, and your git repo web page gets a little 'failed/passing' badge telling you if it's working. You can also browse the logs to see in detail what happened. This avoids the 'but it works on my computer!' thing that we usually hear.
Another cool feature is client-side pre-commit hooks. They can be used to run checks on the local version at commit time, refusing to complete the commit until the checks exit 0.
These can be the same as the GitLab CI checks or just basic code quality checks. I use them to prevent jupyter notebooks being committed with uncleared cells. A hook needs to be set up on the user's computer manually and is not automatically cloned with the repository, but a script can be included in the repo to install it, run manually after the first clone.
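For reference, a minimal sketch of such a hook (saved as .git/hooks/pre-commit and made executable; the notebook check shown is just one way to implement it):

#!/usr/bin/env python
# Refuse the commit if any staged notebook still has outputs / execution counts.
import json, os, subprocess, sys

staged = subprocess.check_output(
    ["git", "diff", "--cached", "--name-only"]).decode().splitlines()
dirty = []
for path in staged:
    if not path.endswith(".ipynb") or not os.path.exists(path):
        continue
    with open(path) as f:
        nb = json.load(f)
    if any(cell.get("outputs") or cell.get("execution_count")
           for cell in nb.get("cells", [])):
        dirty.append(path)
if dirty:
    print("Notebooks with uncleared cells:")
    print("\n".join("  " + p for p in dirty))
    sys.exit(1)   # non-zero exit aborts the commit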
After some more tweaking, I feel like I may be getting closer to a cost-function definition that works.
Some things to figure out:
[Anchal, Radhika, Jamie, Chris]
We conducted a test of three alternative controllers for the IMC pitch DOFs on Friday. These were loaded into a new RTS model c1sbr, which runs on the c1ioo front end as a user-space program at 256 Hz. It communicates with the c1ioo controller via shared memory IPCs to exchange error and control signals.
The IMC maintained lock during the handoffs, and we were able to take one minute of data for each (circa GPS 1349807926, 1349808426, 1349808751; spectra attached), which we can review to assess the performance vs the baseline. (On the first trial, lock was lost at the end when the script tried to switch back to the baseline controller, because we did not take care to clear the integrators. On subsequent trials we did that part by hand.)
The method of setting up this test was convoluted, but now that we see it working, we can start putting in the merge requests to get the changes better integrated into the system. First, modifications were required to the realtime code generator, to get controllers running at the new sample rate of 256 Hz. (This was done in a separate filesystem image on fb1, /diskless/root.buster256, which is only loaded by c1ioo, so as to isolate the changes from the other front end machines.) The generated code then needed hand-edits to insert additional header files and linker options, so that the alternative controllers could be loaded from .so shared libraries. Also, the kernel parameters had to be set as described here, to allow the user-space controller to have a CPU core all to itself. Finally, isolating the core was done following the recipe in this script (skipping the parts related to docker, since we didn’t use it).
WFS loops were running for past 2 hours when I made the overall gain slider zero at:
PDT: 2022-10-18 20:42:53.505256 PDT
UTC: 2022-10-19 03:42:53.505256 UTC
The output values are fixed to a good alignment. IMC transmission is about 14100 counts right now. I'll turn on the loop tomorrow morning. Data from tonight can be used for monitoring open-loop noise.
Turning WFS loops back on at:
PDT: 2022-10-19 09:48:16.956979 PDT
UTC: 2022-10-19 16:48:16.956979 UTC
Five more mode cleaner alignment controllers were tested this morning (remotely). These were designed to run in tandem with the standard controller, instead of supplanting it. Before the test, c1ioo was burt restored back to the settings of the previous test on Oct 28, and in MC TRANS PIT/YAW filter banks the 80 dB gain filters were disengaged and outputs were enabled. Subsequently, all settings were returned to the original values. Each test consisted of five minutes with pitch alignment uncontrolled, five minutes with the standard controller only, and twenty minutes with both controllers enabled. GPS times for each phase of testing are the following: