The c1lsc computer has been moved over to the 1Y3 rack, just above the c1lsc IO chassis.
It will talk to the c1sus computer via a Dolphin PCIe reflected-memory card. The cards were installed into c1lsc and c1sus this morning.
It will talk to its IO chassis via the usual short IO chassis cable.
The Dolphin fiber still needs to be strung between c1sus and c1lsc.
The DAQ cable between c1lsc and the DAQ router (which lets the frame builder talk directly with the front ends) also needs to be strung.
c1lsc needs to be configured to use fb as a boot server, and the fb needs to be configured to handle the c1lsc machine.
I checked the vacuum system and judged that there is no apparent issue.
The chambers and annulus had been vented before the power failure.
So the only remaining concerns are the TMPs.
TP1 showed the "Low Input Voltage" failure. I reset the error; the turbine lifted up but was left not rotating.
TP2 and TP3 seem to be rotating at 50 kRPM, and each line shows low pressure (~1e-7),
although I could not locate the actual TP2/TP3 units themselves.
Looks like there was a power outage. The control room workstations were all off (except for op440m). Rosalba and the projector's computer came back, but rossa and allegra are not lighting up their monitors.
linux1 and nodus and fb all appear to be on and answering their pings.
I'm going to leave it like this for the morning crew.
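Checks like the one above (which machines answer their pings) are easy to script. Below is a minimal sketch, using the hostnames from this entry; the `ping` flags assume a Linux ping, and the pinger is injectable so the logic can be exercised without a live network:

```python
import subprocess

HOSTS = ["linux1", "nodus", "fb"]

def host_is_up(host, pinger=None):
    """Return True if `host` answers a single ping.

    `pinger` may be injected for testing; the default runs a real
    `ping -c 1 -W 2` (flags assume a Linux ping).
    """
    if pinger is None:
        pinger = lambda h: subprocess.call(
            ["ping", "-c", "1", "-W", "2", h],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL) == 0
    return pinger(host)

def up_hosts(hosts=HOSTS, pinger=None):
    """List the hosts that respond to ping."""
    return [h for h in hosts if host_is_up(h, pinger)]
```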
The monitors for allegra and rossa seemed to be in a weird state after the power outage. I turned allegra and rossa on but didn't see anything; however, after a while I was able to ssh in. Power cycling the monitors apparently got them talking with the computers again and displaying.
I had to power cycle the c1sus and c1iscex machines (they probably booted faster than linux1 and the fb machines, and thus didn't see their root and /cvs/cds directories). All the front ends seem to be working normally and we have damped optics.
The slow crates look to be working, such as c1psl, c1iool0, c1auxex and so forth.
Kiwamu turned the main laser back on.
Looks like there was a power outage.
Last time (Monday), Jenne and I worked on the RF distribution unit's structure.
We are making the RF distribution unit for the RF upgrade, which was designed by Alberto.
Rana, Koji, and Jenne suggested a better design for the RF distribution unit.
So Jenne and I gathered information on parts and decided which specific parts will be used.
The specific circuit is shown in the attached picture.
Any suggestions would be really appreciated.
The front ends and fb computers were unresponsive this morning.
This was due to the fb machine having its ethernet cable plugged into the wrong input. It should be plugged into the port labeled 0.
Since all the front end machines mount their root partition from fb, this caused them to also hang.
The cable has been relabeled to "fb" on both ends, and plugged into the correct jack. All the front ends were rebooted.
I tested the RFM connection between c1ioo and c1scx. Unfortunately, on the first test, it turned out the c1ioo machine had its GPS time off by 1 second compared to c1sus and c1iscex. A second reboot seems to have fixed the issue.
However, it bothers me that the code didn't come up with the correct time on the first boot.
The test was done using the c1gcv model and by modifying the c1scx model. At the moment, the MC_L channel is being passed to the MC_L input of the ETMX suspension. In the final configuration, this will be a properly shaped error signal from the green locking.
The MC_L signal is currently not actually driving the optic, as the ETMX POS MATRIX currently has a 0 for the MC_L component.
This morning (~0100) I started to redo some of the wiring in the rack with the FB in it. This was in an effort to activate the new Megatron (Sun Fire 4600), which we got from Rolf.
It's sitting right above the Frame Builder (FB). The fibers in there are a rat's nest. Someone needs to team up with Joe to neaten up the cabling in that rack - it's a mini-disaster.
While fooling around in there I most probably disturbed something, leading to the FB troubles today.
I succeeded in locking the end green laser to X arm with the new ETM.
The lock is still not as stable as the previous lock with the old ETM, though. Also, the beam centering is quite bad now.
So I will keep working on the end green lock a little bit more.
Once the lock gets improved and becomes reasonably stiff, we will move onto the corner PLL experiment.
- beam centering on ITMX
- check the mode matching
- revise the control servo
We finished the installation of ETMX into the chamber.
In order to clear the issue of the side OSEM, we put a spacer such that the OSEM can tilt itself and accommodate the magnet.
Though we still don't fully understand why the side magnet is off from the center.
Anyway we are going to proceed with this ETMX and perform the REAL green locking.
(what we did)
- took the ETM tower out from the chamber, and brought it to the clean room again.
- checked the rotation of the ETM by using a microscope. It was pretty good.
The scribe lines on both sides are at the same height, to within the diameter of the scribe line.
- checked the height of the ETM by measuring the vertical distance from the table top to the scribe line. This was also quite good.
The height is correct at 5.5 inches, to within the diameter of the scribe line.
- checked the magnet positions compared with the OSEM holder holes.
All the face magnets are a little bit off upward (approximately by 1mm or less).
The side magnet is off toward the AR surface by ~ 1-2mm.
(yesterday we thought it was off downward, but actually the height is good.)
- raised the position of the OSEM holder bar in order to correct the miscentering of the face magnets.
Now all the face magnets are well centered.
- brought the tower back to the chamber again
- installed the OSEMs
We put a folded piece of aluminum foil in between the hole and the side OSEM as a spacer.
- leveled the table and set the OSEMs to their mid positions.
- slid the tower into place
We had timing problems across the front ends.
We noticed that the 1PPS reference was not blinking on the Master Timing Distribution box. It was supposed to be getting a signal from the c0dcu1 VME crate computer, but this was not happening.
We disconnected the timing signal going into c0dcu1, coming from c0daqctrl, and connected the 1PPS directly from c0daqctrl to the Ref In for the Master Timing distribution box (blue box with lots of fibers coming out of it in 1X5).
We now have agreement in timing between front ends.
After several reboots we now have working RFM again, along with computers that agree with the frame builder on the current GPS time.
RFM is back and testpoints should be happy.
We still don't have a working binary output for the X end. I may need to get a replacement backplane with more than 4 slots if the 1st slot of this board has the same problem as the large boards.
I have burt restored the c1ioo, c1mcs, c1rms, c1sus, and c1scx processes, and optics look to be damped.
The front ends seem to have different gps timestamps on the data than the frame builder has when receiving them.
One theory is that we have been doing SVN checkouts of the code for the front ends fairly regularly (once a week or every two weeks), but the frame builder has not been rebuilt for about a month.
Alex is currently rebuilding the frame builder with the latest code changes.
This also suggests I should try rebuilding the frame builder on a semi-regular basis as updates come in.
[Kiwamu, Jenne, Koji, Suresh]
The following steps in this process were completed.
1) Secured the current ETMX (Old ETMY) with the earthquake stops.
2) Removed the OSEMs and noted the serial number of each and its position
3) Placed four clamps to mark the location of the current ETMX tower (Old ETMY's position on the table)
4) Moved the ETMX (Old ETMY) tower to the clean table flow bench. In the process the tower had to be tilted during removal because it was too tall to pass upright through the vacuum chamber port. It was scary but nothing went wrong.
5) Koji calculated the location of the new ETMX and told us that it should be placed on the north end of the table.
6) Moved the OSEM cables, counter balancing weights and the 'chopper' out of the way. Had to move some of the clamps securing the cables.
7) Moved the ETMU07 tower from the clean room to the ETMX table
8) Positioned the OSEMs as they were placed in the earlier tower and adjusted their position to the middle of the range of their shadow sensors. The four OSEMs on the face did not give us any trouble and were positioned as required. But the side OSEM could not be put in place. The magnet on the left side, which we are constrained to use since the tower is not designed to hold an OSEM on the right side, seems a little too low (by about a mm) and does not interrupt the light beam in the shadow sensor. The possible causes are
a) the optic is rotated. To check this we need to take the tower back to the clean room and check the location of the optic with the traveling microscope. If indeed it is rotated, this is easy to correct.
b) the magnet is not located at the correct place on the optic. This can also be checked on the clean room optical bench but the solution available immediately is to hold the OSEM askew to accommodate the magnet location. If time permits the magnet position can be corrected.
We have postponed the testing of the ETMU07 tower to the 1st of Dec.
Restarted the elog again, this time using .../elog/start-elog.csh, which Joe pointed out works just fine. I have amended the wiki instructions to point to this script, instead.
We put ETMX back in its tower, and confirmed its balance. It might be pointing a teensy bit upward, but it is way less than the DC pointing offset we see when we put the OSEMs in the towers (since the PDs and LEDs have some magnetic bits to them).
Discussions are ongoing as to where the ETM should sit on its table, but we'll probably toss it into the chamber later this evening.
I took ETMY out of the magnet gluing fixture, and put it in a ring, in the foil house. It is ready to have the wire winched and get balanced at our convenience.
The updated status table:
1) Turns out the /opt/rtcds/caltech/c1/target/gds/param/testpoint.par file had been emptied or deleted at one point, and the only entry in it was c1pem. This had been causing us a lack of test points for the last few days. It is unclear when or how this happened. The file has been fixed to include all the front end models again. (Fixed)
2) Alex and I worked on tracking down why there's a GPS difference between the front ends and the frame builder, which is why we see a 0x4000 error on all the front end GDS screens. This involved several rebuilds of the front end codes and reboots of the machines involved. (Broken)
3) Still working on understanding why the RFM communication is failing, which I think is related to the timing issues we're seeing. I know the data is being transferred on the card, but it seems to be rejected after being read in, suggesting a timestamp mismatch. (Broken)
4) The c1iscex binary output card still doesn't work. (Broken)
Alex and I will be working on the above issues tomorrow morning.
Currently, the c1ioo, c1sus and c1iscex computers are running with their front ends. They all still have the 0x4000 error. However, you can still look at channels on dataviewer, for example. That said, there's a possibility of inconsistent timing between computers (although all models on a single computer will be in sync).
All the front ends were burt restored to 07:07 this morning. I spot-checked several optic filter banks and they look to have been turned on.
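On the testpoint.par fix in item 1: each front-end model needs its own stanza in that file for its test points to be served. From memory, an entry looks roughly like the sketch below (the node number and field values here are illustrative, not copied from the real file):

```ini
[C-node1]
hostname=c1sus
system=c1sus
```

With only the c1pem stanza present, no other model's test points could be requested, which matches the symptoms we saw.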
I didn't realize that the script in .../elog was configured to start 2.8.0 already. I will revert the instructions on the wiki to point to that.
Last night I found that the response of ITMX to the angle offsets was strange.
Eventually I found a loose connection at the feedthrough connectors of the ITMX chamber.
So I pushed the connector in hard, and then ITMX successfully returned to normal.
It looks like someone accidentally kicked the cable during some work.
This bad connection had made unacceptable offsets in the OSEM readout, but now they seem fine.
We seem to have a broken fiber link for use between the LSC and its IO chassis. It is unclear to me when this damage occurred. The cable had been sitting in a box with styrofoam padding, and the kink is in the middle of the fiber, with no other obvious damage nearby. The cable may have previously been used by the people in Downs for testing and been damaged then, or we may have caused the kink when we were stringing it.
I talked to Alex yesterday, and he suggested completely unplugging the power on both the computer and the IO chassis, then plugging in the new fiber connector, as he had to do that once with a fiber connection at Hanford. We tried this this morning; however, still no joy. At this point I plan to toss the fiber, as I don't know of any way to rehabilitate kinked fibers.
Note this means that I rebooted c1sus and then did a burt restore from the Nov/30/07:07 directory for c1suspeics, c1rmsepics, c1mcsepics. It looks like all the filters switched on.
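For reference, burt snapshots are plain text files of channel values, restored with something like `burtwb -f <file>.snap`. A schematic entry is sketched below; the channel name and value are hypothetical, and the exact header fields may differ from what the 40m autoburt writes:

```
--- Start BURT header
Time:     Tue Nov 30 07:07:00 2010
--- End BURT header
C1:SUS-PRM_SUSPOS_GAIN 1 2.000000e+00
```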
We do, however, have a Dolphin fiber which was originally intended to go between the LSC and its IO chassis, before Rolf was told it doesn't work well that way. Instead, we were going to use it to connect the LSC machine to the rest of the network via Dolphin.
We can put the LSC machine next to its chassis in the LSC rack, and connect the chassis to the rest of the front ends by the Dolphin fiber. In that case we just need the usual copper style cable going between the chassis and the computer.
The elog seemed to be down at around 12:05pm. I waited a few minutes to see if the browser would connect, but it did not.
I used the existing script in /cvs/cds/caltech/elog/ (as opposed to Zach's new one in elog/elog-2.8.0/), which also seems to have worked fine.
I have created an updated version of the "start-elog-nodus" script and put it in .../elog/elog-2.8.0. It seems to work fine.
As a result of the vacuum work, now the IR beam is hitting ETMX.
The spot of the transmitted beam from the cavity can be found at the end table by using an IR viewer.
(today's missions for IOO)
- cabling for the PZT mirrors
- energizing the PZT mirrors and sliding them to their midpoints.
- locking and alignment of the MC
- realignment of the PZT mirrors and other optics.
- letting the beam go down to the arm cavity
ETMU05 : Gluing Side magnets back on to the optic.
The following steps taken in this process:
1) The two magnet+dumbbell units which had come loose from the optic needed to be cleaned. A lint-free wipe was placed on the table top and a few cc of acetone was poured onto it. The free end of the dumbbell was then scrubbed on this wipe until the surface regained its shine. The dumbbell was held at its narrow part with forceps to avoid any strain on the magnet-dumbbell joint.
2) The optic was then removed from its gluing fixture (by loosening only one of the three retaining screws) and placed in an Al ring. The glue left behind by the side magnets was scrubbed off with an optical tissue wetted with acetone.
3) The optic was returned to the gluing fixture. The position of the optic was checked by inserting the brass portion of the gripper and making sure that the face magnets are centered in it [Jenne double-checked to be sure we got everything right].
4) The side magnets were glued on and the optic in the fixture has been placed in the foil-house.
If all goes well we will be able to balance the ETMU05 and give it to Bob for baking.
ETMU07 : It is still in the oven and we need to ask Bob to take it out. It will be available for installation in the 40m tomorrow.
I installed the following Graphviz packages in order to support visualization of GStreamer pipeline graphs:
The elog was down so I restarted it. The instructions on the wiki do not work as the process has some complicated name (i.e. it is not just 'elogd'). I used kill and the pid number.
I will get around to updating the restart script to work with 2.8.0.
This morning I opened the chambers and started some in-vac works.
As explained in this entry, I successfully swapped PZT mirrors (A) and (C).
The chambers are still open, so don't be surprised.
I uploaded some pictures taken last week and this week. They are on the Picasa web albums.
in vac work [Nov. 18 2010]
in vac work [Nov 23 2010]
CDS work [Nov 24 2010]
We tried installing C1LSC but it's not completely done yet due to the following issues.
(1) The PCIe optical fiber which is supposed to connect C1LSC and its IO chassis is most likely broken.
(2) Two DAC boards (blue and golden board) are missing.
We will ask the CDS people at Downs and get some replacements from there.
( works we did )
- took the whole C1ASC crate out of the 1Y3 rack.
- installed an IO chassis in the place where C1ASC was.
- strung a timing optical fiber to the IO chassis.
- checked the functionality of the PCIe optical fiber and found it doesn't work.
Fig.1: c1asc taken out of the rack. Fig.2: IO chassis installed in the rack.
Fig.3: PCIe extension fiber (red arrow marks an obviously bent point).
I have made a little bit of progress on the PEM channels. I have begun writing up detailed instructions in the DAQ Wiki page on how to add a channel to the new DAQ system. I have followed those instructions thus far, and can see my channels in the .ini file (and in the daqconfig gui thing), but I don't have any channels in Dataviewer or DTT.
There are some tricky "gotchas" involved in creating new models and channels. Some examples include: No use of the characters "DUMMY" in any channel name. The makefile is specifically hardcoded to fail if that string of characters is used. Also, you must have at least 2 filter banks in every model. Why? No one knows. You just do. The model won't compile unless you have 2 or more filter banks.
My efforts today included ~3 reboots of the frame builder, and ~2 reboots of c1sus. When Kiwamu and I rebooted c1sus, we burt restored to some time in the last 24 hrs. Some of the SUS filters on some of the optics were not set correctly (things like the bounce roll filter), so we turned all of them on, and reset all of the input and output matrices to the correct combination of +1's and -1's to make Pit, Pos and Yaw. The tuning seems to happen nowadays in the gains for each DOF, and the gains were set correctly by the burt restore for every optic except PRM. We made some educated guesses for what the gains should be based on the other optics, and PRM is damping pretty well (these guesses included reducing the SIDE gain by ~10 from the BS SIDE value, since the analog gain of the PRM SIDE sensor is much higher than the others'). We'll have to fine tune these gains using some Yuta-developed method soon, or find a burt snapshot that had some non-unity values in there.
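The ±1 input/output matrix bookkeeping mentioned above can be sketched with a few lines of numpy. The sign convention below is illustrative only (the 40m's actual convention may differ); the point is just that POS, PIT and YAW are ±1 combinations of the four face OSEM signals:

```python
import numpy as np

# Rows: POS, PIT, YAW; columns: UL, UR, LL, LR face OSEMs.
# Illustrative sign convention, not necessarily the 40m's.
INMAT = np.array([
    [1,  1,  1,  1],   # POS: all sensors move together
    [1,  1, -1, -1],   # PIT: top row vs bottom row
    [1, -1,  1, -1],   # YAW: left column vs right column
])

def osems_to_dof(osem_signals):
    """Project (UL, UR, LL, LR) sensor signals onto (POS, PIT, YAW)."""
    return INMAT @ np.asarray(osem_signals)

# A pure-pitch motion (top up, bottom down) shows up only in the PIT row.
print(osems_to_dof([1, 1, -1, -1]))  # POS=0, PIT=4, YAW=0
```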
We removed ETMU07 from the suspension tower, after confirming that the balance was still good. Bob put it in the oven to bake over the weekend. The spring plungers and our spare magnets are all in there as well.
I tried to remove the grippers from ETMU05, and when I did, both side dumbbells came off of the optic. Unfortunately, I was working on getting channels into the DAQ, so I did not clean and reglue ETMU05 today. However Joe told me that we don't have any ETMY controls as yet, and we're not going to do Yarm locking (probably) in the next week or so, so this doesn't really set any schedules back.
The cleaning of ETMU05 will be tricky. Getting the residual glue off of the optic will be fine, but for the dumbbells, we'd like to clean the glue off of the end of the dumbbells using a lint free wipe soaked in acetone, but we don't want to get any acetone in the magnet-to-dumbbell joint, and we don't want to break the magnet-to-dumbbell joint. So we'll have to be very careful when doing this cleaning.
The Status Table:
Wow. I typed DTT on rossa and it actually worked! No complaints about testpoints, etc. I was also able to use its new 'NDS2' function to get data off of the CIT cluster (L1:DARM_ERR from February). You have to use the kinit/kdestroy stuff to use NDS2 as usual (look up NDS2 in DASWG if you don't know what I mean).
[Joe, Suresh, Kiwamu]
We will fully install and run the new C1LSC front end machine tomorrow.
And finally it is going to take care of the IOO PZT mirrors as well as LSC codes.
During the in-vac work today, we tried to energize and adjust the PZT mirrors to their midpoints.
However it turned out that C1ASC, which controls the voltage applied to the PZT mirrors, was not running.
We tried rebooting C1ASC by keying the crate but it didn't come back.
The error message we got in telnet was :
memory init failure !!
We discussed how to control the PZT mirrors from the point of view of both short-term and long-term operation.
We decided to quit using C1ASC and use new C1LSC instead.
A good thing about this plan is that it will bring the CDS closer to the final configuration.
(things to do)
- move C1LSC to the proper rack (1X4).
- pull out the stuff associated with C1ASC from the 1Y3 rack.
- install an IO chassis in the 1Y3 rack.
- string a fiber from C1LSC to the IO chassis.
- timing cable (?)
- configure C1LSC for Gentoo
- run a simple model to check the health
- build a model for controlling the PZT mirrors
I installed and activated Altium, a PCB design software, on the Windows machine M2.
With Altium I am going to design the triple resonant circuit for the broadband EOM.
We found that two of the three PZT mirrors are in the wrong places in the chambers.
Therefore we have to move these PZT mirrors together with their connections.
Here is a diagram for the current situation and the plan.
Basically, mirror (A) must be associated with the output beam coming out of the SRM, but it was incorrectly installed as part of the input optics.
Similarly, mirror (C) must belong to the input optics, but it is incorrectly being used as part of the OMC setup.
Therefore we have to swap the positions of mirror (A) and mirror (C) as shown in the diagram above.
In addition to the mirror migration, we have to move their cables as well in order to keep the right functions.
We took a look at the length of the cables outside of the chambers in order to check if they are long enough or not.
We found that the cables from c1asc (green line in the diagram) are not long enough, so we will put in an extension D-sub cable.
ETMU07 had its wire winched to the correct height, was balanced, and had its standoff glued. It can be ready to go into the oven tomorrow, if an oven is available. (One of Bob's ovens has a leak, so he's down an oven, which puts everything behind schedule. We may not be able to get anything into the oven until Monday.)
ETMU05 had magnets glued to the optic. Hopefully tomorrow we will winch the wire, balance the optic, and glue the standoff, and be ready to go into the oven on Monday.
The spring plungers were sonicated, but have not yet been baked. I told Daphen that we'd like the optics baked first, so that we can get ETMX in the chamber ASAP, and then the spring plungers as soon as possible so that we can install ETMY and put the OSEMs in.
I created a new setup script for the newest build of the gds tools (DTT, foton, etc), located in /opt/apps (which is a soft link from /cvs/cds/apps) called gds-env.csh.
This script is now sourced by cshrc.40m for linux 64 bit machines. In addition, the control room machines have a soft link in the /opt directory to the /cvs/cds/apps directory.
So now when you type dtt or foton, it will bring up the Centos compiled code Alex copied over from Hanford last month.
Joe showed me what was what with adding DAQ channels, and I have begun building a simulink model to acquire the PEM channels.
My model is in: /cvs/cds/rtcds/caltech/c1/core/advLigoRTS/src/epics/simLink/c1pem.mdl
Next on the to do list in this category: test which input connector goes with which channel (hopefully it's linear, exactly as one would think), and give the channels appropriate names.
Last time (Friday) I made an arrangement for the RF distribution unit.
I am making the RF distribution unit for the RF upgrade, which was designed by Alberto.
To reduce noise from loose connections,
I tried to make as many of the connections as possible hard joints, while reducing the number of connections via wire.
This is why I put the splitters right next to the front panel, so that the connections between the panel plugs and the splitters could be made with hard joints.
I have attached the arrangement that I made last Friday.
Next time, I will drill the Teflon (the supporting plate) for assembly.
I cleaned up the /cvs/cds/caltech/target/ directory of all the random models we had built over the last year, in preparation for the move of the old /cvs/cds/caltech/target slow control machine code into the new /opt/rtcds/caltech/c1/target directories.
I basically deleted all the directories generated by the RCG code that were put there, including things like c1tst, c1tstepics, c1x00, c1x00epics, and so forth. Pre-RCG era code was left untouched.
Front ends seem to be experiencing a timing issue. I can visibly see a difference in the GPS time ticks between models running on c1ioo and c1sus.
In addition, the fb is reporting a 0x2bad to all front ends. The 0x2000 means a mismatch in config files, but the 0xbad indicates an out of sync problem between the front ends and the frame builder.
As there are plans to work on the optic tables today and suspension damping is needed, we are holding off on working on the problem until this afternoon/evening, since suspensions are still damping. It does mean the RFM connections are not available.
At that point I'd like to do a reboot of the front ends and framebuilder and see if they come back up in sync or not.
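The 0x2bad status word above is just the two codes combined (0x2000 + 0x0bad). A tiny illustrative decoder, using the flag meanings from this entry (the function itself is hypothetical, not part of the CDS code):

```python
CONFIG_MISMATCH = 0x2000  # mismatch between front-end and fb config files
OUT_OF_SYNC = 0x0bad      # front end out of sync with the frame builder

def decode_fe_status(word):
    """Return human-readable flags for a front-end status word."""
    flags = []
    if word & CONFIG_MISMATCH:
        flags.append("config mismatch")
    if (word & 0x0fff) == OUT_OF_SYNC:
        flags.append("out of sync with fb")
    return flags

print(decode_fe_status(0x2bad))  # ['config mismatch', 'out of sync with fb']
```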
- The tanks are open
[done] - Remove the PZT cable currently underlying between BS and ITMY chambers
[done] - Put this PZT cable between BS and IMC chambers. Connect it on the PZT on the IMC table (SM1)
[done]- Put the two OSEM cables between BS and ITMY chambers. Connect this cable to SRM.
The connector for this cable at the BS side is coming from Bob's place on Wednesday. We left it disconnected for now.
- Energize all of four PZTs and check the functionality.
So I started fully characterizing the beat detection path.
As a part of the characterization works, I measured the spectra of the RFPD noise as well.
The noise is totally dominated by that of the RFPD (i.e. not by an RF amplifier).
I am going to check the noise curve by comparing with a LISO model (or a simple analytical model) in order to make sure the noise is reasonable.
The red curve represents the dark noise of the RFPD, which is amplified by a low noise amp, ZFL-1000LN.
The blue curve is the noise of only the ZFL-1000LN with a 50 Ohm terminator at its input.
The last curve is the noise of the network analyzer AG4395A itself.
It is clear that the noise is dominated by that of RFPD. It has a broad hill around 100MHz and a spike at 16MHz.
Gain of ZFL-1000LN = 25.5 dB (measured)
Applied voltage to ZFL-1000LN = +15.0 V
Bias voltage on PD = -150 V
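To refer the measured spectra back to the RFPD output, the measured 25.5 dB ZFL-1000LN gain has to be divided out. A minimal sketch of that arithmetic (the function names are mine, not from any analysis script):

```python
def db_to_linear(gain_db):
    """Voltage (amplitude) ratio corresponding to a gain in dB."""
    return 10 ** (gain_db / 20.0)

def refer_to_pd_output(v_measured, amp_gain_db=25.5):
    """Divide out the amplifier gain (25.5 dB measured above) to refer
    a voltage noise level [V/rtHz] back to the PD output."""
    return v_measured / db_to_linear(amp_gain_db)
```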
I measured the RF transimpedance of the POX photodiode by measuring the optical transfer function with the AM laser and by measuring the shot noise with a light bulb. The plots of these measurements are at http://lhocds.ligo-wa.caltech.edu:8000/40m/Electronics/POX.
I measured the noise of the photodiode at 11 MHz for different light intensities using an Agilent 4395a. The noise of a 50 ohm resistor as measured by this spectrum analyzer is 10.6 nV/rtHz. I fit this noise data to the shot noise formula to find the RF transimpedance at 11 MHz to be (2.42 ± 0.08) kΩ. The RF transimpedance at 11 MHz as measured by the transfer function is 6.4 kΩ.
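The shot-noise fit works because the noise power is linear in the DC photocurrent: V_n^2 = 2 e Z^2 I_dc + V_dark^2, so the slope of V_n^2 vs. I_dc gives Z. A sketch of that fit, presumably close to what was done; the synthetic data below is made up to match the quoted 2.42 kΩ result, not the actual measurement:

```python
import numpy as np

E = 1.602e-19  # electron charge [C]

def transimpedance_from_shot_noise(i_dc, v_noise):
    """Fit V_n^2 = (2 e Z^2) I_dc + V_dark^2 and return Z in ohms."""
    slope, _offset = np.polyfit(np.asarray(i_dc), np.asarray(v_noise) ** 2, 1)
    return float(np.sqrt(slope / (2.0 * E)))

# Synthetic check with a known transimpedance and dark-noise floor
z_true, v_dark = 2.42e3, 20e-9
i_dc = np.linspace(0.1e-3, 2e-3, 10)
v_noise = np.sqrt(2 * E * z_true**2 * i_dc + v_dark**2)
z_fit = transimpedance_from_shot_noise(i_dc, v_noise)  # recovers ~2.42 kOhm
```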
As I said in a past entry (see this entry), there was an unknown loss of about 20 dB in the beat detection path.
Today I measured the frequency response of the wideband RFPD using the Jenne Laser.
Since all the data were taken by using a 1064nm laser, the absolute magnitudes [V/W] for 532nm are not calibrated yet.
I will calibrate the absolute values with a green laser which has a known power.
The data were taken by changing the bias voltage from -150V to 0V.
The shape of the transfer function looks quite similar to the one Hartmut measured before (see the entry).
It has 100MHz bandwidth when the bias voltage is -150V, which is our normal operation point.
Theoretically the transfer function should stay flat at lower frequencies down to DC.
Therefore, for the calibration of this data, we can use the DC signal when a green beam of known power is illuminating the PD.
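That DC calibration is simple arithmetic: the DC transimpedance follows from the DC output voltage, the incident green power, and the PD responsivity at 532 nm. A sketch, where the 0.25 A/W responsivity is an assumed placeholder rather than a measured value:

```python
def dc_transimpedance(v_dc, p_green, responsivity=0.25):
    """Z_DC [ohm] from the DC output voltage [V], the incident green
    power [W], and the PD responsivity [A/W] (assumed value here)."""
    i_dc = responsivity * p_green  # DC photocurrent
    return v_dc / i_dc
```

With the DC-flat transfer function, this Z_DC then fixes the absolute [V/W] scale of the whole measured curve.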
I disconnected the yellow GPIB box from the backside of HP3563A (classic analyzer),
and connected it to AG4395A (network analyzer), which is the official place for it.
A story about minor disasters, and crises averted:
Once upon a time, in a cleanroom not so far away..... there lived an optic. To preserve anonymity, we shall call him "ETMU05". This optic had a rough day. When removing the grippers from the magnet-to-optic fixture, 4 out of 6 magnets broke off the dumbbells (the dumbbells were still securely glued to the optic...these had come out of the same batch that had problems last week, same problem). The remaining 2, LL and LR, were sadly of the same polarity. This is bad, because it means that the "humans" taking care of "ETMU05" didn't check the polarity of the face magnets properly, and ensure that they were laid out in an every-other pattern (LL and UR having the same polarity, and LR and UL having the opposite). So, the humans removed all magnets and dumbbells from ETMU05. All remaining glue was carefully scrubbed off the surfaces of ETMU05 using lens paper and acetone, and the magnets and dumbbells were sonicated in acetone, scrubbed with a lint-free wipe, sonicated again, and then scrubbed again to remove the glue. ETMU05 had a nice cleansing, and was drag wiped on both the AR and HR surfaces with acetone and iso. ETMU05 is now on vacation in a nice little foil hut.
His friend, (let's call him ETMU07) had a set of magnets (with polarities carefully confirmed) glued to him. The cleaned magnets and dumbbells removed from ETMU05 were reglued to their dumbbells, and should be dry by tomorrow.
.....And then they lived happily ever after. The End.
The revised schedule / status table:
c1iscex does not even see its 32 channel Binary output card. This means we have no control over the state of the analog whitening and dewhitening filters. The ADC, DAC, and the 1616 Binary Input/Output cards are recognized and working.
Tried recreating the IOP code from the known working c1x02 (from the c1sus front end), but that didn't help.
Checked seating of the card, but it seems correctly socketed and tightened down nicely with a screw.
Tomorrow will try moving cards around and see if there's an issue with the first slot, which the Binary Output card is in.
The ETMX is currently damping, including POS, PIT, YAW and SIDE degrees of freedom. However, the gds screen is showing a 0x2bad status for the c1scx front end (the IOP seems fine with a 0x0 status). So for the moment, I can't seem to bring up c1scx testpoints. I was able to do so earlier when I was testing the status of the binary outputs, so during one of the rebuilds, something broke. I may have to undo the SVN update and/or a change made by Alex today to allow for longer filter bank names beyond 19 characters.
The CDS oscillator part doesn't work inside subsystems.
Rolf checked in an older version of the CDS oscillator which includes an input (which you just connect to a ground). This makes the parser work properly so you can build with the oscillator in a subsystem.
So I did an SVN checkout and confirmed that the custom changes we have here were not overwritten.
Turns out the latest svn version requires new locations for certain codes, such as EPICS installs. I reverted back to version 2160, which is just before the new EPICS and other rtapps directory locations, but late enough to pick up the temporary fix to the CDS oscillator part.
1) Investigate ETMX SD sensor problems
2) Fully check out the ETMX suspension and get that to a "green" state.
3) Look into cleaning up target directories (merge old target directory into the current target directory) and update all the slow machines for the new code location.
4) Clean up GDS apps directory (create link to opt/apps on all front end machines).
5) Get Rana his SENSOR, PERROR, etc channels.
3) Install LSC IO chassis and necessary cabling/fibers.
4) Get LSC computer talking to its remote IO chassis
5) If time, connect and start debugging Dolphin connection between LSC and SUS machines