Following instructions from LLO-CDS for the rossa upgrade. Last time there were some issues with not being able to access the LLO EPEL repos, but this time it seems to be working fine.
After adding font aliases, need to run 'sudo xset fp rehash' to get the new aliases to take hold. Afterwards, am able to use MEDM and sitemap just fine.
But diaggui won't run because of a libsasl2 error. Try 'sudo yum install gds-all'.
diaggui: error while loading shared libraries: libsasl2.so.2: cannot open shared object file: No such file or directory (have contacted LLO CDS admins)
X-windows keeps crashing with SL7 and this big monitor. Followed instructions on the internet to remove the generic 'Nouveau' driver and install the proprietary NVIDIA drivers by dropping to run level 3 and running some command-line hoodoo to modify the X config files. Now I can even put the mouse on the left side of the screen and it doesn't crash.
I stopped the test earlier this morning, around 11:30 am. The log file is located at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/PRM_stepping.txt. It contains the times at which the PRM was aligned/misaligned for lookback, and also the number of MC unlocks during every 30 minute period that the PRM alignment was toggled. This was computed by counting MC lock-loss events in each such period.
I think this method is a pretty reliable proxy, because the MC autolocker certainly takes >3 seconds to re-acquire the lock (it has to run mcdown, wait for the next cavity flash, and run mcup in the meantime).
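For reference, a minimal sketch of the kind of counting I have in mind (this assumes the MC transmission time series has already been fetched into a numpy array, e.g. via nds2 or cdsutils; the threshold and the 3-second deadtime are tunable assumptions, not the exact values used):

    import numpy as np

    def count_mc_unlocks(mc_trans, fs, locked_thresh=0.5, min_unlock_sec=3.0):
        """Count lock-loss events: stretches where the (normalized) MC
        transmission stays below locked_thresh for longer than min_unlock_sec."""
        unlocked = mc_trans < locked_thresh        # boolean time series
        edges = np.diff(unlocked.astype(int))      # +1 = lock loss, -1 = re-acquisition
        starts = np.flatnonzero(edges == +1)
        stops = np.flatnonzero(edges == -1)
        n = 0
        for s in starts:                           # ignore partial intervals at the segment edges
            later_stops = stops[stops > s]
            if later_stops.size and (later_stops[0] - s) / fs > min_unlock_sec:
                n += 1
        return n

    # example: 30 minutes of fake data at 16 Hz with one ~6 s dropout
    fs = 16.0
    trans = np.ones(int(1800 * fs))
    trans[1000:1100] = 0.0
    print(count_mc_unlocks(trans, fs))             # -> 1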
Preliminary analysis suggests no obvious correlation between MC lock duty cycle and PRM alignment.
I leave further analysis to those who are well versed in the science/art of PRM/IMC statistical correlations.
To test the hypothesis that the IMC lock duty cycle is affected by the PRM alignment. Rana pointed out today that the input Faraday has not been tuned to maximize the output->input isolation in a while, so the idea is that perhaps when the PRM is aligned, some of the reflected light comes back towards the PSL through the Faraday and hence messes with the IMC lock.
I've made a simple script - the pseudocode is the following:
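Roughly the following (a sketch only, not the actual script; the channel name and the bias values used to misalign the PRM are placeholders, and the real script may use ezca rather than pyepics):

    import time
    from epics import caput                     # pyepics

    LOGFILE = '/opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/PRM_stepping.txt'
    PRM_BIAS = 'C1:SUS-PRM_PIT_COMM'            # placeholder channel name
    ALIGNED, MISALIGNED = 0.0, 2.0              # placeholder bias values
    DWELL = 30 * 60                             # seconds spent in each state

    def log(msg):
        with open(LOGFILE, 'a') as f:
            f.write('%d %s\n' % (time.time(), msg))

    while True:
        for state, value in [('aligned', ALIGNED), ('misaligned', MISALIGNED)]:
            caput(PRM_BIAS, value)              # toggle the PRM alignment
            log('PRM ' + state)
            time.sleep(DWELL)                   # let the IMC run for 30 minutes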
The idea is to keep looping the above over the weekend, so we can expect ~100 datapoints, 50 each for PRM misaligned/aligned. The times at which the PRM was aligned/misaligned are also being logged, so we can make some spectrograms of the PC drive RMS (for example) with the PRM aligned/misaligned. The script lives at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/FaradayIsolCheck.py. It is being run inside a tmux session on pianosa; hopefully the machine doesn't crash over the weekend and MC1/CDS stays happy.
A more direct measurement of the input Faraday isolation can be made by putting a photodiode in place of the beam dump shown in Attachment #1 (borrowed from this elog). I measured ~100uW of power leaking through this mirror with the PRM misaligned (but IMC locked). I'm not sure what kind of SNR we can expect for a DC measurement, but if we have a chopper handy, we could put one in the leaked beam (just before the PD, so as to allow the IMC to remain locked) and demodulate at that frequency for a cleaner measurement. This way, we could also measure the contribution from prompt reflections (up to the input side of the Faraday) by simply blocking the beam going into the vacuum. The window itself is wedged, so that shouldn't be a big contributor.
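To make the demodulation idea concrete, here is a sketch of what the offline lock-in would look like (the chopper frequency, sampling rate and signal levels below are made-up numbers, not a proposal for the actual setup):

    import numpy as np

    def lockin(pd_signal, fs, f_chop):
        """Demodulate the PD time series at the chopper frequency;
        returns the amplitude of the component at f_chop."""
        t = np.arange(len(pd_signal)) / fs
        i = np.mean(pd_signal * np.cos(2 * np.pi * f_chop * t))
        q = np.mean(pd_signal * np.sin(2 * np.pi * f_chop * t))
        return 2 * np.hypot(i, q)

    # example: a 10 uW-equivalent beam chopped at 317 Hz, buried in broadband noise
    fs, f_chop = 16384.0, 317.0
    t = np.arange(0, 60, 1 / fs)
    sig = 10e-6 * 0.5 * (1 + np.sign(np.sin(2 * np.pi * f_chop * t)))   # chopped beam
    noise = 1e-4 * np.random.randn(len(t))
    print(lockin(sig + noise, fs, f_chop))   # ~2*10e-6/pi, the square-wave fundamental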
Today Angelina and I looked at the PRM OL with an eye towards installing a 2nd QPD. We want to try out using 2 QPDs for a single optic to see if there's a way to make a linear combination of them to reduce the sensitivity to jitter of the HeNe laser or acoustic noise on the table.
The power supply for the HeNe was gone, so I took one from the SP table.
There are WAY too many optics in use to get the beam from the HeNe into the vacuum and then back out. What we want is 1 steering mirror after the laser and then 1 steering mirror before the QPD. Even though there are rumors that this is impossible, I checked today and in fact it is very, very possible.
More optics = more noise = bad.
UVW refers to the 3 internal, orthogonal velocity sensors which are not aligned with the vertical or horizontal directions. XYZ refers to the linear combinations of UVW which correspond to north, east, and up.
Yesterday, while we were bringing the CDS system back online, we noticed that the control room wall StripTool traces for the seismic BLRMS signals did not come back to the levels we are used to seeing even after restarting the PEM model. There are no red lights on the CDS overview screen indicative of DAQ problems. Trending the DQ-ed seismometer signals (these are the calibrated (?) seismometer signals, not the BLRMS) over the last 30 days, it looks like
I poked around at the electronics rack (1X5/1X6) which houses the 1U interface box for these signals - on its front panel, there is a switch that has selectable positions "UVW" and "XYZ". It is currently set to the latter. I am assuming the former refers to velocities in the xyz directions, and the latter is displacement in these directions. Is this the nominal state? I didn't spend too much time debugging the signal further for now.
Looking at the dmesg on c1iscex for example, at least part of the problem seems to be associated with FB1 (192.168.113.201, see Attachment #1). The "server" can be unresponsive for O(100) seconds, which is consistent with the duration for which we see the MEDM status lights go blank, and the EPICS records get frozen. Note that the error timestamped ~4000 was from last night, which means there have been at least 2 more instances of this kind of freeze-up overnight.
I don't know if this is symptomatic of some more widespread problem with the 40m networking infrastructure. In any case, all the CDS overview screen lights were green this morning, and the MC autolocker seems to have worked fine overnight.
I have also updated the wiki page with the updated daqd restart commands.
Unrelated to this work - Koji fixed up the MC overview screen such that the MC autolocker button is now visible again. The problem seems to be related to my migrating some of the c1ioo EPICS channels from the slow machine to the fast system, as a result of which the EPICS variable type changed from "ENUM" to something that was not "ENUM". In any case, the button exists now, and the MC autolocker blinky light is responsive to its state.
I don't think the problem is fb1. The fb1 NFS is mostly only used during front end boot. It's the rtcds mount that's the one that sees all the action, which is being served from chiara.
I would make a detailed post with how the problems were fixed, but unfortunately, most of what we did was not scientific/systematic/repeatable. Instead, I note here some general points (Jamie/Koji can add to / correct me):
This should still work, but the address has changed. The daqd was split up into three separate binaries to get around the issue with the monolithic build that we could never figure out. The port of the data concentrator (DC) (which is the thing that needs to be restarted) is now 8083.
Koji suggested trying to simply restart the ASS model to see if that fixes the weird errors shown in Attachment #2. This did the trick. But we are now faced with more confusion - during the restart process, the various indicators on the CDS overview MEDM screen froze up, which is usually symptomatic of the machines being unresponsive and requiring a hard reboot. But we waited for a few minutes, and everything mysteriously came back. Over repeated observations and looking at the dmesg of the frontend, the problem seems to be connected with an unresponsive NFS connection. Jamie had noted sometime ago that the NFS seems unusually slow. How can we fix this problem? Is it feasible to have a dedicated machine that is not FB1 do the NFS serving for the FEs?
The Xarm is currently in its original state, all cables are connected and c1auxex is hosting the slow channels.
Made a front and back panel and slot panels for DSub and IDC breakouts. I want to send this out soon, are there any comments? Preferences for color schemes?
[Koji, Jamie(remote), gautam]
Problems raised in elogs in the thread of 13474 and also 13436 seem to be solved.
Some other general remarks:
I don't think this is really a problem - we offload to the fast channels and not to the slow (although we really should offload to the slow channels). I think the best approach is to use the ezcaservo utility to offload the DC part of the ASS control signals to the slow channels, so as to not waste fast channel DAC counts on DC offsets. In principle, this approach should be somewhat immune to the slow channel calibration not being perfect.
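To illustrate what I mean by offloading the DC part (this is a toy relief loop, not the actual ezcaservo call; the channel names and the cross-calibration number are placeholders):

    import time
    from epics import caget, caput              # pyepics

    FAST_OUT = 'C1:ASC-ETMX_PIT_OUTPUT'         # fast control signal (placeholder)
    SLOW_BIAS = 'C1:SUS-ETMX_PIT_COMM'          # slow bias slider (placeholder)
    GAIN = 0.01                                 # fraction of the fast DC moved per step
    CTS_PER_SLOW_UNIT = 300.0                   # rough fast/slow cross-calibration (assumed)

    for _ in range(1000):
        fast_dc = caget(FAST_OUT)
        # bleed a small fraction each step so an imperfect slow-channel calibration
        # only slows convergence instead of causing overshoot; the sign depends on
        # the actual channel conventions
        caput(SLOW_BIAS, caget(SLOW_BIAS) - GAIN * fast_dc / CTS_PER_SLOW_UNIT)
        time.sleep(1.0)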
While staring at epics records all day I noticed something about the PIT/YAW offset sliders and the ASS offset offloading to slow channels scripts that I'm not sure others are aware of, so I'll briefly discuss it in this post.
The PIT and YAW sliders directly control soft channels that are hosted on the slow machine. Secondary epics records disentangle them for the individual coils:
These channels are the direct input for the physical output channels that generate the control voltage.
The fast channels for PIT and YAW have a numerical correction factor built in that accounts for differences between the OSEMs, but the slow channels don't. This means that the slow PIT/YAW controls are not entirely orthogonal but have crosstalk on the order of 10 percent. This in itself is not that dramatic; however, the offload offsets scripts for the dither alignment use the fast PIT/YAW values as inputs, which represent the necessary adjustments to the OSEMs only after the individual correction factors have been applied. The offloading to slow knows nothing of this calibration difference between the OSEMs. The result is that there is a ~10 percent error in the offset correction on the mirror alignment AFTER offloading. This will of course converge after a few iterations, but in any case it is advisable to run the dither alignment again after offloading and not offload the new offsets to the fast channels.
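A toy numerical illustration of where the ~10 percent comes from (the coil correction factors here are made up for the example):

    import numpy as np

    # per-coil gain corrections applied in the fast path only (made-up values)
    coil_gain = np.array([1.1, 0.9, 0.9, 1.1])     # UL, UR, LL, LR
    pit_vec = np.array([+1, +1, -1, -1])
    yaw_vec = np.array([+1, -1, +1, -1])

    # a pure pitch offset requested through the fast path:
    pit_offset = 1.0
    fast_coils = pit_offset * pit_vec * coil_gain

    # offloading writes the same number to the slow pitch slider, which drives
    # the coils with the uncorrected (+1,+1,-1,-1) pattern:
    slow_coils = pit_offset * pit_vec

    residual = fast_coils - slow_coils             # what the mirror is missing after offloading
    print('pitch error:', residual @ pit_vec / 4)  # -> 0 for this symmetric example
    print('yaw error  :', residual @ yaw_vec / 4)  # -> 0.1, i.e. ~10% cross-coupling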
I had to key the c1psl crate to get the PMC locking again. Without this, it would still sort of lock, but it was very hard to turn on the loop; it would push itself off the fringe. So probably it was stuck in some state with the gain wrong. Since the RF stuff is now done in a separate electronics chain, I don't think the RF phase can be changed by this. Probably the sliders are just not effective until power cycling.
Once the RT machines were back, we launched only the five IOPs. They had a bunch of red lights, but we continued to run the essential models for the IFO. Some of the lights were fixed by "global diag reset" and "mxstream restart".
The suspensions were damped. We could restore the IMC lock. The locking became OK and the IMC was aligned. The REFL spot came back.
At least, I could confirm that the WFS ASC signals were not transmitted to c1mcs. There must be some disconnected IPC links.
I then tried to get the MC WFS back, but running rtcds restart --all would make some of the computers hang. For c1ioo I had to push the reset button on the computer and then did 'rtcds start --all' after it came up. Still missing IPC connections.
I'm going to get in touch with Rolf.
This splicing in of fast binary channels we discussed at yesterday's and today's meetings is getting messy with the current chassis. Cleaning up the cable mess was a key point, so I got a 4U height DEEP chassis from Rich and drew up a front panel for a modular approach that we can use at the other 40m locations as well. The front panel will have slots for smaller slot panels to which we can mount the breakout boards as before, so all the wiring that I've done can be transferred to this design. If some new connector standard is required it will be easy to draw a new slot panel from a template; for now I'll make some with two DSub37 and IDC50. Since this chassis is so huge it will have ample space for cross-connects.
I also moved the communication of c1auxex2 with the Acromag units off the martian network, connecting them with a direct cable connection out of the second ethernet port. To test if this works I configured the second ethernet port of c1auxex2 to have the IP address 192.168.114.1 and one of the acromag units to have 192.168.114.11, and initialized an IOC with some test channels. Much to my surprise this actually worked straight out of the box, and the test channels can be accessed from the control room computers without having a direct ethernet link to the acromag modules. huzzah!
Steve: it would be nice to have all plugs/connectors lockable
40m Lab CAD
1. 40m_bldg.dwg has 2D drawing of the 40m building
2. 40m_VE.dwg has the Vacuum Envelope.
3. 40melev.dwg has the relative positioning between (1) and (2).
4. All files can be found in Dropbox folder [40m SOS Modeling], which should be renamed to [40m CAD].
5. Next step would be to add the optical tables and mirrors.
1. Current objective: (refer to D070172) - Increase the length of the side arms (so it matches the dimensions of D960001), while keeping the test mass subassembly at the same height.
2. Future objective: Resonant frequency FEM of the frame (sans the test mass), and then change height to get the desired frequency.
I attached a wiring schematic from the slow DAQ to the eurocrate modules. Of these, pins 1-32 (or 1A-16C) and pins 33-64 (17A-32C) are on separate DSub connectors. Therefore the easiest solution is to splice the slow DIO channels into the existing breakouts so we can proceed with the transition. This will still remove a lot of the current cable salad. For the YEND we can start thinking about a more elegant solution (For example a connector on the front panel of the Acromag chassis for the fast DIO) now that the problem is better defined.
The new slow machine c1auxex2 is ready to deploy. Unfortunately we don't have enough 37pin DSub cables to connect all channels. In fact, we need a total of 8, and I found only three male-male cables and one gender changer. I asked Steve to buy more.
Over the past week I have transferred all EPICS records - soft channels and physical ones - from c1auxex to c1auxex2, making changes where needed. Today I started the in-situ testing.
I copied the relevant files to start the modbus server to /cvs/cds/caltech/target/c1auxex2, although kept local copies in /home/controls/modbusIOC/ from which they're still run.
I wonder what's the best practice for this. Probably to store the database files centrally and load them over the network on server start?
Getting the chassis ready took a little longer than anticipated, mostly because I had not looked into the channel list myself before and forgot about Lydia's post which mentions that some of the switching controls have to be moved from the fast to the slow DAQ. We would need a total of 5+5+4+8=22 binary outputs. With the existing Acromag units we have 16 sinking outputs and 8 sourcing outputs. I looked through all the Eurocrate modules and confirmed that they all use the same switch topology which has sourcing inputs.
While one can in principle use a pull-down resistor to control a sourcing input with a sourcing output, pulling down the MAX333A input (datasheet says logic low is <0.8V) requires something like 100 Ohms for the pull-down resistor, which would draw ~150mA of current PER CHANNEL (~15 V across 100 Ohms), which is unreasonable. Instead, I asked Steve to buy a second XT1111 and modified the chassis to accommodate more Acromag units.
I have now finished wiring the chassis (except for 8 remaining bypass controls to the whitening board which need the second XT1111), calibrated all channels in use, confirmed all pin locations via the existing breakout boards and DCC drawings for the eurocrate modules, and today Steve and I added more fuses to the DIN rail power distribution for +20V and +15V.
There was not enough contiguous free space in the XEND rack to mount the chassis, so for now I placed it next to it.
c1auxex2 is currently hosting all original physical c1auxex channels (not yet calc records) under their original name with an _XT added at the end to avoid duplicate channel names. c1auxex is still in control of ETMX. All EPICS channels hosted by c1auxex2 are in dimensions of Volts. The plan for tomorrow is to take c1auxex off the grid, rename the c1auxex2 hosted channels and transfer ETMX controls to it, provided we can find enough 37pin DSub cables (8). I made 5 adapter boards for the 5 Eurocrate modules that need to talk to the slow DAQ through their backplane connector.
The issue was partially fixed and the interferometer is in workable condition now.
What -probably- fixed it was restarting the dhcp server on chiara
sudo service isc-dhcp-server restart
Afterwards the frontends were restarted one by one. SSH access was possible and the essential models for IFO operation were started.
c1iscex reported initially that no DAQ card was found, and inside the IO chassis the LED indicator strip was red. Turning off the machine, checking the cables and rebooting fixed this.
Once a realtime machine was rebooted, it did not come back. I suspect that the diskless hosts have difficulty booting up.
Since we're getting ready to put the replacement slow DAQ for c1auxex in, I wanted to bring the IFO back to operating condition after the PMC hasn't been locked for days. Something seems wrong with the CDS system though; many of the frontend models have a red background and don't seem to be responsive. I followed the instructions laid out in https://wiki-40m.ligo.caltech.edu/Computer_Restart_Procedures.
In the attached screenshot, initially all c1ioo models were red, and on c1iscex only c1x01 was blue, the other ones red. I was able to ssh into both machines and tried to restart individual models, which didn't work and instead turned their background white. Still following the wiki page, I restarted both machines, but they don't respond to pinging anymore and thus I cannot use ssh to reach them. Not sure what to do; I also rebooted fb over telnet.
So far I couldn't find any records of how to fix this situation.
I wired up the power distribution, and ethernet cables in the Acromag chassis today. For the time being it's all kind of loose in there but tomorrow the last parts should arrive from McMaster to put everything in its place. I had to unplug some of the wiring that Aaron had already done but labeled everything before I did so. I finalized the IP configuration via USB for all the units, which are now powered through the chassis and active on the network.
I started transcribing the database file ETMXaux.db that is loaded by c1auxex in the format required by the Acromags and made sure that the new c1auxex2 properly functions as a server, which it does.
We configured the AtomServer for the Martian network today. Hostname is c1auxex2, IP is 192.168.113.49. Remote access over SSH is enabled.
There will be 6 acromag units served by c1auxex2.
Some hardware to assemble the Acromag box and adapter PCBs is still missing, and the wiring and channel definitions have to be finalized. The port driver initialization instructions and channel definitions are currently stored locally in /home/controls/modbusIOC/ but will eventually be migrated to a shared location; we need to decide how exactly we want to set up this infrastructure.
An email came in at 5 PM on Dec 3rd.
Pizza mail didn't go out last weekend - looking at logfile, it seems like the "sendmail" service was missing. I installed sendmail following the instructions here: https://tecadmin.net/install-sendmail-server-on-centos-rhel-server/
Except that to start the sendmail service, I used systemctl and not init.d. i.e. I ran systemctl start sendmail.service (as root). Test email to myself works. Let's see if it works this weekend. Of course this isn't so critical, more important are the maintenance emails that may need to go out (e.g. disk usage alert on chiara / N2 pressure check, which looks like nodus' responsibilities).
Current objectives and statuses:
Annuli have not been pumped for 30 days, since TP2 failed. IFO pressure 7e-6 Torr-it, RGA 2.6e-6 Torr.
Valve configuration: Vacuum Normal, with TP3 as the forepump of the Maglev; the annuli are not pumped and are at 1.1 Torr.
TP3 50K rpm, 0.15A 24C, foreline pressure 16.1 mTorr
The TP3 foreline pressure was 4.8 Torr, at 50k rpm, 0.54 A and 31 C. Maglev rotation normal at 560 Hz. IFO pressure 7.2e-6 Torr-it was not affected.
V1 closed, replaced the dry pump, V1 opened.
IFO 6.9e-6 Torr-it at 19:55; TP3 foreline 18 mTorr, 50k rpm, 0.15 A, 24 C.
VM1 is still closed
PMC wasn't locking. Had to power down c1psl. Did burt restore. Still not great.
I think many of the readbacks on the PMC MEDM screen are now bogus and misleading since the PMC RF upgrade that Gautam did a while ago. We ought to fix the screen and clearly label which readbacks and actuators are no longer valid.
Also, the locking procedure is not so nice. The output V adjust doesn't work anymore with BLANK enabled. It would be good to make an autolocker script if we find a visitor wanting to do something fun.
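In case someone does want to take a stab at an autolocker, here is the kind of loop I have in mind (a sketch only; the channel names, thresholds and sweep range are placeholders, not the real PMC channels):

    import time
    from epics import caget, caput              # pyepics

    TRANS = 'C1:PSL-PMC_TRANSPD'                # transmission readback (placeholder)
    BLANK = 'C1:PSL-PMC_BLANK'                  # loop enable switch (placeholder)
    RAMP = 'C1:PSL-PMC_RAMP_OFFSET'             # PZT sweep offset (placeholder)
    LOCK_THRESH = 0.6                           # transmission level that counts as locked

    while True:
        if caget(TRANS) > LOCK_THRESH:
            time.sleep(5)                       # already locked, keep watching
            continue
        caput(BLANK, 0)                         # open the loop
        for i in range(200):                    # sweep the PZT offset slowly
            caput(RAMP, -1.0 + 0.01 * i)
            time.sleep(0.05)
            if caget(TRANS) > LOCK_THRESH:      # caught a fringe
                caput(BLANK, 1)                 # close the loop
                break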
I brought a bunch of SR560s over for repair from Bridge labs. This unit, picture attached (SN 49698), appears to still not be retaining charge. I’ve brought it back.
I've ordered 4 of these from McMaster. Should be delivered to the 40m by noon tomorrow.
For the insulation, I have decided to use this one (Buna-N/PVC Foam Insulation Sheets). We will need 3 of the 1 inch plain backing ones (9349K4) to wrap a few layers around it. I'll try two layers for now, since the insulation seems to be doing quite well according to initial testing.
Kira and I also discussed the issue. It would be good if someone can hunt around on the web and get some free samples of non-shedding foam with R~4.
Here are a couple of preliminary plots of the noise from a 20-minute stretch of data - the new curve is the orange one, labelled sensing, which is the spectrum of the PIT/YAW error signal from the HeNe beam in a single bounce off a single steering mirror onto the QPD, normalized to account for the difference in QPD sum. The peaky features that were absent in the dark noise are present here.
I am a bit confused about the total sum though - there is ~2.5mW of light incident on the PD, and the transimpedance gain is 10.7kohm. So I would expect 2.5e-3 W * 0.4A/W * 10.7 kV/A ~ 10.7V over 4 quadrants. The ADC is 16 bit and has a range +/- 10V, so 10.7 V should be ~35,000 cts. But the observed QPD sum is ~14,000 counts. The reflected power was measured to be ~250uW, so ~10% of the total input power. Not sure if this is factored into the photodiode efficiency value of 0.4A/W. I guess there is some fraction of the QPD that doesn't generate any photocurrent (i.e. the grooves defining the quadrants), but is it reasonable that when the Oplev beam is well centered, ~50% of the power is not measured? I couldn't find any sneaky digital gains between the quadrant channels to the sum channel either... But in the Oplev setup, the QPD had ~250uW of power incident on it, and was reporting a sum of ~13,000 counts with a transimpedance gain of 100kohm, so at least the scaling seems to hold...
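The back-of-the-envelope numbers, for reference (responsivity and transimpedance as quoted above; the ADC scaling of 2^15 counts per 10 V is an assumption about the 16-bit, +/-10 V ADC):

    P_inc = 2.5e-3            # W incident on the QPD
    R = 0.4                   # A/W responsivity at 632 nm
    Z = 10.7e3                # ohm transimpedance per quadrant
    cts_per_V = 2**15 / 10.0  # 16-bit ADC over +/-10 V

    V_expected = P_inc * R * Z
    print('%.1f V = %.0f cts' % (V_expected, V_expected * cts_per_V))
    # -> ~10.7 V, ~35000 cts, versus the observed ~14000 cts (a factor ~2.5 missing)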
I guess we want to monitor this over a few days, see how stationary the noise profile is, etc. I didn't look at the spectrum of the intensity noise during this time.
Here are some pics of the setup: https://photos.app.goo.gl/DHMINAV7aVgayYcf1. None of the existing Oplev input/output steering optics were touched. Steve can make modifications as necessary; perhaps we can make similar mods to the SRM Oplev QPD and the BS one to run the HeNe test for a few days...
I've setup a test setup on the ITMY Oplev table. Details + pics to follow, but for now, be aware that
too complex; just shoot straight from the HeNe to the QPD. We lower the gain of the QPD by changing the resistors; there's no sane reason to keep the existing 100k resistors for a 2 mW beam. The specular reflection off the QPD must be dumped onto a black glass V dump (not some flimsy anodized aluminum or dirty razor stack).
I've attempted to visualize the various components of the cost function in the way I've defined it for the current iteration of the Oplev optimal control loop design code. For each term in the cost function, the way the cost is computed depends on the ratio of the abscissa value to some threshold value (set by hand for now) - if this ratio is >1, the cost is the logarithm of the ratio, whereas if the ratio is <1, the cost is the square of the ratio. Continuity is enforced at the point at which this transition happens. I've plotted the cost function for some of the terms entering the code right now - indicated in dashed red lines are the approximate value of each of these costs for our current Oplev loop - the weights were chosen so that each of the costs were O(10) for the current controller, and the idea was that the optimizer could drive these down to hopefully O(1), but I've not yet gotten that to happen.
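For concreteness, this is the shape of each per-term cost (the threshold and weight in the example are placeholders; adding 1 to the log branch is just one way of enforcing continuity at the transition):

    import numpy as np

    def term_cost(value, threshold, weight=1.0):
        """Quadratic below the threshold, logarithmic above it;
        both branches equal `weight` at value == threshold."""
        r = np.abs(value) / threshold
        return weight * np.where(r < 1, r**2, 1.0 + np.log(r))

    print(term_cost(3.0, 10.0))    # well inside the threshold -> small cost
    print(term_cost(30.0, 10.0))   # 3x over the threshold -> slowly growing cost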
Based on the meeting yesterday, some possible ideas:
You may want to consult with the cryo Q people (Brittany, Aaron) for a Si QPD. If you want the same QPD architecture, I can look at my QPD circuit stock.
What is the best way to set this test up?
I think we need a QPD to monitor the spot rather than a single element PD, to answer this question about the sensor noise. Ideally, we want to shoot the HeNe beam straight at the QPD - but at the very least, we need a lens to size the beam down to the same size as we have for the return beam on the Oplevs. Then there is the power - Steve tells me we should expect ~2mW at the output of these HeNes. Assuming 100kohm transimpedance gain for each quadrant and Si responsivity of 0.4A/W at 632nm, this corresponds to 10V (ADC limit) for 250uW of power - so it would seem that we need to add some attenuating optics in the way.
Also, does anyone know of spare QPDs we can use for this test? We considered temporarily borrowing one of the vertex OL QPDs (mark out its current location on the optics table, and move it over to the SP table), but decided against it as the cabling arrangement would be too complicated. I'd like to use the same DAQ electronics to acquire the data from this test as that would give us the most direct estimate of the sensor noise for supposedly no motion of the spot, although by adding 3 optics between the HeNe and the QPD, we are introducing possible additional jitter couplings...
For the OL NB, we probably don't have to fudge any seismic noise, since that's a thing we want to suppress. More important is "what the noise would be if the suspended mirrors were not moving w.r.t. inertial space".
For that, we need to look at the data from the OL test setup that Steve is putting on the SP table.
Updated some values, most importantly, the k-factor. I had assumed that it was in the correct units already, but when converting it to 0.046 W/(m^2*K) from 0.26 BTU/(h*ft^2*F), I got the following plot. The time constant is still a bit larger than what we'd expect, but it's much better with these adjustments.
For our next steps, I will measure the time constant of the heater without any insulation and then decide how many layers of it we will need. I'll need to construct and calibrate a temperature sensor like the ones I've made before and use it to record the values more accurately.
I performed a test with the can last week with one layer of insulation to see how well it worked. First, I soldered two heaters together in series so that the total resistance was 48.6 ohms. I placed the heaters on the sides of the can and secured them. Then I wrapped the sides and top of the can in insulation and sealed the edges with tape, only leaving the handles open. I didn't insulate the bottom. I connected the two ends of the heater directly into the DC source and drove the current as high as possible (around 0.6A). I let the can heat up to a final value of 37.5C, turned off the current and manually measured the temperature, recording the time every half degree. I then plotted the results, along with a fit. The intersection of the red line with the data marks the time constant and the temperature at which we get the time constant. This came out to be about 1.6 hours, much longer than expected considering that only one layer instead of four was used. With only one layer, we would expect the time constant to be about 13 min, while for 4 layers it should be 53 min (the area A is 0.74 m^2 and not 2 m^2).
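For the record, a sketch of the fit (the arrays below are stand-ins, not the recorded data):

    import numpy as np
    from scipy.optimize import curve_fit

    def cooling(t, T_room, dT, tau):
        return T_room + dT * np.exp(-t / tau)

    # stand-in data: time in hours, temperature in C
    t_data = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
    T_data = np.array([37.5, 33.6, 30.8, 28.8, 27.3, 25.5, 24.6])

    popt, pcov = curve_fit(cooling, t_data, T_data, p0=[24.0, 13.0, 1.0])
    print('tau = %.2f hr' % popt[2])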
I made a model for our seismometer can using actual data so that we know approximately what the time constant should be when we test it out. I used the appendix in Megan Kelley's report to make a relation for the temperature in terms of time.
In our case, we will heat the can to a certain temperature and wait for it to cool on its own, so there is no heat input during the cool-down.
We know that the rate of heat flow out through the insulation is dQ/dt = -k*A*(T - T_room)/d, where k is the k-factor of the insulation we are using, A is the area of the surface through which heat is flowing, (T - T_room) is the temperature difference across the insulation, and d is the thickness of the insulation.
The heat stored in the can is Q = m*c*T, so we can take the derivative of this to get m*c*T'(t) = -k*A*(T(t) - T_room)/d.
We can guess the solution to be T(t) = C1*exp(-t/tau) + C2,
where tau is the time constant, which we would like to find.
The boundary conditions are T(0) = 40 C and T(infinity) = T_room = 24 C. I assumed we would heat up the can to 40 Celsius while the room temp is about 24. Plugging this into our equations, C2 = 24 C and C1 = 16 C, so T(t) = 24 + 16*exp(-t/tau).
We can plug everything back into the derivative T'(t) = -(16/tau)*exp(-t/tau).
Equating the exponential terms on both sides, we can solve for tau: tau = m*c*d/(k*A).
Plugging in the values that we have, m = 12.2 kg, c = 500 J/(kg*K) (stainless steel), d = 0.1 m, k = 0.26 W/(m^2*K), A = 2 m^2, we get that the time constant is 0.326 hr. I have attached the plot that I made using these values. I would expect to see something similar to this when I actually do the test.
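The same calculation in a few lines, for checking the numbers (same values as above):

    import numpy as np

    m, c = 12.2, 500.0        # kg, J/(kg*K) for the stainless steel can
    d, A = 0.1, 2.0           # insulation thickness (m) and surface area (m^2)
    k = 0.26                  # k-factor as used above
    T_room, T_0 = 24.0, 40.0  # C

    tau = m * c * d / (k * A)                             # seconds
    print('tau = %.0f s = %.3f hr' % (tau, tau / 3600))   # -> ~1173 s = 0.326 hr

    t = np.linspace(0, 5 * 3600, 500)                     # 5 hours
    T = T_room + (T_0 - T_room) * np.exp(-t / tau)
    # plotting T against t/3600 reproduces the attached curve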
To set up the experiment, I removed the can (with Steve's help) and will place a few heating pads on the outside and wrap the whole thing in a few layers of insulation to make the total thickness 0.1 m. Then, we will attach the heaters to a DC source and heat the can up to 40 Celsius. We will wait for it to cool on its own and monitor the temperature to create a plot and find the experimental time constant. Later, we can use the heating circuit we used for the PSL lab and modify the parts as needed to drive a few amps through the circuit. I calculated that we'd need about 6A to get the can to 50 Celsius using the setup we used previously, but we could drive a smaller current by using a higher heater resistance.
Confirmed that this crontab is running - the daily backup of the crontab seems to have successfully executed, and there is now a file crontab_nodus.ligo.caltech.edu.20171122080001 in the directory quoted below. The $HOSTNAME seems to be "nodus.ligo.caltech.edu" whereas it was just "nodus", so the file names are a bit longer now, but I guess that's fine...
I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001). There wasn't a crontab, so I made one using sudo crontab -e.
This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and backup the crontab itself.
I've commented out the backup of nodus' /etc and /export for now, while we get nodus back to a fully operational state (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive); they can be re-enabled by un-commenting the appropriate lines in the crontab.
I got the SuperMicro 1U server box from Larry W on Monday and set it up in the CryoLab for initial testing.
The specs: https://www.supermicro.com/products/system/1U/5015/SYS-5015A-EHF-D525.cfm
The processor is an Intel D525 dual core atom processor with 1.8 GHz (i386 architecture, no 64-bit support). The unit has a 250GB SSD and 4GB RAM.
I installed Debian Jessie on it without any problems and compiled the most recent stable versions of EPICS base (3.15.5), asyn drivers (4-32), and modbus module (2-10-1). EPICS and asyn each took about 10 minutes, and modbus about 1 minute.
I copied the database files and port driver definitions for the cryolab from cryoaux, whose modbus services I suspended, and initialized the EPICS modbus IOC on the SuperMicro machine instead. It's working flawlessly so far, but admittedly the box is not under heavy load in the cryolab, as the framebuilder there is logging only the 16 analog channels.
I have recently worked out some kinks in the port driver and channel definitions, most importantly:
Aaron and I set 12/4 as a tentative date when we will be ready to attempt a swap. Until then the cabling needs to be finished and a channel database file needs to be prepared.
The post-OS-migration admin notes for nodus (apache, elogd, svn, iptables, etc.) can be found at https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017
Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface are running. "websvn" is not installed.
Per our discussions in the meetings over the last week, I've tried to put together a simple Oplev noise budget. The only two terms in this for now are the dark noise and a model for the seismic noise, and are plotted together with the measured open-loop error signal spectra.
Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface are running, and "websvn" has also been implemented.
The numbers I have from the fitting don't agree very well with the OSEM readouts. Attachment #1 shows the Oplev pitch and yaw channels, and also the OSEM ones, while I swept the ASC_PIT offset. The output matrix is the "naive" one of (+1,+1,-1,-1). SUSPIT_IN1 reports ~30urad of motion, while SUSYAW_IN1 reports ~10urad of motion.
From the fits, the BS calibration factors were ~x8 for pitch and x12 for yaw - so according to the Oplev channels, the applied sweep was ~80urad in pitch, and ~7urad in yaw.
Seems like either (i) neither the Oplev channels nor the OSEMs are well diagonalized and that their calibration is off by a factor of ~3 or (ii) there is some significant imbalance in the actuator gains of the BS coils...
Need to double check against OSEM readout during the sweep.