40m Log
ID   Date   Author | Type | Category | Subject
5281   Tue Aug 23 01:05:40 2011   Jenne | Update | Treasure | All Hands on Deck, 9am!

We will begin drag wiping and putting on doors at 9am tomorrow (Tuesday). 

We need to get started on time so that we can finish at least the 4 test masses before lunch (if possible). 

We will have a ~2 hour break for LIGOX + Valera's talk.

 

I propose the following teams:

(Team 1: 2 people, one clean, one dirty) Open light doors, clamp EQ stops, move optic close to door.  ETMX, ITMX, ITMY, ETMY

(Team 2: K&J) Drag wipe optic, and put back against rails. Follow Team 1 around.

(Team 3 = Team 1, redux: 2 people, one clean, one dirty) Put earthquake stops at correct 2mm distance. Follow Team 2 around.

(Team 4: 3 people, Steve + 2) Close doors.  Follow Team 3 around.

Later, we'll do BS door and Access Connector.  BS, SRM, PRM already have the EQ stops at proper distances.

 

4934   Fri Jul 1 20:26:29 2011   rana | Summary | SUS | All SUS Peaks have been fit

         MC1    MC2    MC3    ETMX   ETMY   ITMX   ITMY   PRM    SRM    BS     mean   std
Pitch   0.671  0.747  0.762  0.909  0.859  0.513  0.601  0.610  0.566  0.747  0.698  0.129
Yaw     0.807  0.819  0.846  0.828  0.894  0.832  0.856  0.832  0.808  0.792  0.831  0.029
Pos     0.968  0.970  0.980  1.038  0.983  0.967  0.988  0.999  0.962  0.958  0.981  0.024
Side    0.995  0.993  0.971  0.951  1.016  0.986  1.004  0.993  0.973  0.995  0.988  0.019

There is a large amount of variation in the frequencies, even though the suspensions are nominally all the same. I leave it to the suspension makers to ponder and explain.

Attachment 1: Screen_shot_2011-07-01_at_8.17.22_PM.png
7132   Thu Aug 9 04:26:51 2012   Sasha | Update | Simulations | All c1spx screens working

As the subject states, all screens are working (including the noise screens), so we can keep track of everything in our model! :D I figured out that I was just getting nonsense (i.e. white noise) out of the sim plant because the filter matrix (TM_RESP) that controlled the response of the optics to a force (i.e. outputted the position of the optic DOF given a force on that DOF and a force on the suspension point) was empty, so it was just passing on whatever values it got based on the coefficients of the matrix without DOING anything to them. In effect, all we had was a feedback loop without any mechanics.

I've been working on getting the mechanics of the suspensions into a filter/transfer function form; I added something resembling that into foton and turned the resulting filter on using the shiny new MEDM screens. However, the transfer functions are a tad wonky (particularly the one for pitch), so I shall continue working on them. It had a dramatic effect on the power spectrum (i.e. it looks a lot more like it should), but it still looks weird.
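For context, a minimal single-DOF sketch of the kind of force-to-displacement response TM_RESP should encode (this assumes a simple damped pendulum and ignores the POS/PIT/YAW/SIDE and suspension-point cross-couplings that the real matrix contains):

\[ \frac{x(s)}{F(s)} = \frac{1/m}{s^2 + (\omega_0/Q)\,s + \omega_0^2}, \qquad \omega_0 = 2\pi f_0 \]

with f_0 of roughly 1 Hz for the 40m SOS longitudinal mode (cf. the fitted SUS peaks in entry 4934 above): flat below resonance, a peak of height Q at f_0, and a 1/f^2 roll-off above it, which is roughly the shape the simPlant spectrum should inherit.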

Still haven't found the e-log Jamie and Rana referred me to, concerning the injection of seismic noise into the simulation. I'm not terribly worried though, and will continue looking in the morning. Worst case scenario, I'll use the filters Masha made at the beginning of the summer.

Masha and I ate some of Jamie's popcorn. It was good.

7133   Thu Aug 9 07:24:58 2012   Sasha | Update | Simulations | All c1spx screens working

Quote:

As the subject states, all screens are working (including the noise screens), so we can keep track of everything in our model! :D I figured out that I was just getting nonsense (i.e. white noise) out of the sim plant because the filter matrix (TM_RESP) that controlled the response of the optics to a force (i.e. outputted the position of the optic DOF given a force on that DOF and a force on the suspension point) was empty, so it was just passing on whatever values it got based on the coefficients of the matrix without DOING anything to them. In effect, all we had was a feedback loop without any mechanics.

I've been working on getting the mechanics of the suspensions into a filter/transfer function form; I added something resembling that into foton and turned the resulting filter on using the shiny new MEDM screens. However, the transfer functions are a tad wonky (particularly the one for pitch), so I shall continue working on them. It had a dramatic effect on the power spectrum (i.e. it looks a lot more like it should), but it still looks weird.

Still haven't found the e-log Jamie and Rana referred me to, concerning the injection of seismic noise into the simulation. I'm not terribly worried though, and will continue looking in the morning. Worst case scenario, I'll use the filters Masha made at the beginning of the summer.

Masha and I ate some of Jamie's popcorn. It was good.

Okay! Attached are two power spectra. The first is a power spectrum of reality; the second is a power spectrum of the simPlant. It's looking much better (as in, no longer obviously white noise!), but there seems to be a gain problem somewhere (and it doesn't have seismic noise). I'll see if I can fix the first problem, then move on to trying to find the seismic noise filters.

Attachment 1: Screenshot.png
Attachment 2: Screenshot-1.png
16527   Mon Dec 20 14:10:56 2021   Anchal | Update | BHD | All coil drivers ready to be used, modified and tested

Koji found some 68nF caps from Downs and I finished modifying the last remaining coil driver box and tested it.

SERIAL # TEST result
S2100633 PASS

With this, all coil drivers have been modified and tested and are ready to be used. This DCC tree has links to all the coil driver pages which have documentation of modifications and test data.

1733   Sun Jul 12 20:06:44 2009   Jenne | DAQ | Computers | All computers down

I popped by the 40m, and was dismayed to find that all of the front end computers are red (only framebuilder, DAQcontroler, PEMdcu, and c1susvmw1 are green....all the rest are RED).

 

I keyed the crates, and did the telnet.....startup.cmd business on them, and on c1asc I also pushed the little reset button on the physical computer and tried the telnet....startup.cmd stuff again.  Utter failure. 

 

I have to pick someone up from the airport, but I'll be back in an hour or two to see what more I can do.

1735   Mon Jul 13 00:34:37 2009   Alberto | DAQ | Computers | All computers down

Quote:

I popped by the 40m, and was dismayed to find that all of the front end computers are red (only framebuilder, DAQcontroler, PEMdcu, and c1susvmw1 are green....all the rest are RED).

 

I keyed the crates, and did the telnet.....startup.cmd business on them, and on c1asc I also pushed the little reset button on the physical computer and tried the telnet....startup.cmd stuff again.  Utter failure. 

 

I have to pick someone up from the airport, but I'll be back in an hour or two to see what more I can do.

I think the problem was caused by a failure of the RFM network: the RFM MEDM screen showed frozen values even when I was power cycling any of the FE computers. So I tried the following things:

- resetting the RFM switch
- power cycling the FE computers
- rebooting the framebuilder

but none of them worked. The FEs didn't come back. Then I reset C1DCU1 and power cycled C1DAQCTRL.

After that, I could restart the FEs by power cycling them again. They all came up again except for C1DAQADW. Neither the remote reboot nor the power cycling could bring it up.

After every attempt to restart it, its lights on the DAQ MEDM screen turned green for only a fraction of a second and then became red again.

So far, every attempt to reanimate it has failed.
1736   Mon Jul 13 00:53:50 2009   Alberto | DAQ | Computers | All computers down

Quote:

Quote:

I popped by the 40m, and was dismayed to find that all of the front end computers are red (only framebuilder, DAQcontroler, PEMdcu, and c1susvmw1 are green....all the rest are RED).

 

I keyed the crates, and did the telnet.....startup.cmd business on them, and on c1asc I also pushed the little reset button on the physical computer and tried the telnet....startup.cmd stuff again.  Utter failure. 

 

I have to pick someone up from the airport, but I'll be back in an hour or two to see what more I can do.

I think the problem was caused by a failure of the RFM network: the RFM MEDM screen showed frozen values even when I was power cycling any of the FE computers. So I tried the following things:

- resetting the RFM switch
- power cycling the FE computers
- rebooting the framebuilder

but none of them worked. The FEs didn't come back. Then I reset C1DCU1 and power cycled C1DAQCTRL.

After that, I could restart the FEs by power cycling them again. They all came up again except for C1DAQADW. Neither the remote reboot nor the power cycling could bring it up.

After every attempt to restart it, its lights on the DAQ MEDM screen turned green for only a fraction of a second and then became red again.

So far, every attempt to reanimate it has failed.

 

After Alberto's bootfest, which was more successful than mine, I tried power cycling the AWG crate one more time. No success. Just as Alberto had gotten, I got the DAQ screen's AWG lights to flash green, then go back to red. At Alberto's suggestion, I also gave the physical reset button another try. Another round of flash-green-back-red ensued.

When I was in a few hours ago, while everything was hosed, all the other computers' 'lights' on the DAQ screen were solid red, but the two AWG lights were flashing between green and red, even though I was power cycling the other computers and not touching the AWG at the time. Those are the lights which are now solid red, except for a quick flash of green right after a reboot.

I poked around in the history of the current and old elogs, and haven't found anything referring to this crazy blinking between good and bad-ness for the AWG computers. I don't know if this happens when the tpman goes funky (which is referred to a lot in the annals of the elog, in the same entries as the AWG needing rebooting) and no one mentions it, or if this is a new problem. Alberto and I have decided to get Alex/someone involved in this, because we've exhausted our ideas.

6997   Fri Jul 20 17:11:50 2012   Jamie | Update | CDS | All custom MEDM screens moved to cds_users_apps svn repo

Since there are various ongoing requests for this from the sites, I have moved all of our custom MEDM screens into the cds_user_apps SVN repository.  This is what I did:

For each model in /opt/rtcds/caltech/c1/medm, I copied its "master" directory into the repo, and then linked it back into the usual place, e.g.:

a=/opt/rtcds/caltech/c1/medm/${model}/master
b=/opt/rtcds/userapps/trunk/${system}/c1/medm/${model}
mv $a $b
ln -s $b $a
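For anyone repeating this, the same move-and-symlink step wrapped in a loop might look like the sketch below; the system and model names here are only examples, not the full list that was actually moved.

# move each model's MEDM "master" directory into the userapps working copy,
# then leave a symlink behind so the screens keep working from the usual path
system=isc                      # example userapps system name
for model in c1lsc c1ass; do    # example models; substitute the real list
  a=/opt/rtcds/caltech/c1/medm/${model}/master
  b=/opt/rtcds/userapps/trunk/${system}/c1/medm/${model}
  mv "$a" "$b" && ln -s "$b" "$a"
done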

Before committing to the repo, I did a little bit of cleanup, to remove some binary files and other known superfluous stuff.  But I left most things there, since I don't know what is relevant or not.

Then committed everything to the repo.

 

2386   Thu Dec 10 13:50:02 2009   Jenne | Update | VAC | All doors on, ready to pump

[Everybody:  Alberto, Kiwamu, Joe, Koji, Steve, Bob, Jenne]

The last heavy door was put on after lunch.  We're now ready to pump.

7920   Sat Jan 19 15:05:37 2013   Jenne | Update | Computers | All front ends but c1lsc are down

Message I get from dmesg of c1sus's IOP:

[   44.372986] c1x02: Triggered the ADC
[   68.200063] c1x02: Channel Hopping Detected on one or more ADC modules !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[   68.200064] c1x02: Check GDSTP screen ADC status bits to id affected ADC modules
[   68.200065] c1x02: Code is exiting ..............
[   68.200066] c1x02: exiting from fe_code()

Right now, c1x02's max CPU indicator reads 73,000 microseconds. c1x05 is at 4,300 usec, and c1x01 seems totally fine, except that it has the 0x2bad.

c1x02 has 0xbad (not 0x2bad). All other models on c1sus, c1ioo, c1iscex and c1iscey have 0x2bad.

Also, no models on those computers have 'heartbeats'.

C1x02 has "NO SYNC", but all other IOPs are fine.

I've tried rebooting c1sus, restarting the daqd process on fb, all to no avail.  I can ssh / ping all of the computers, but not get the models running.  Restarting the models also doesn't help.


c1iscex's IOP dmesg:

[   38.626001] c1x01: Triggered the ADC
[   39.626001] c1x01: timeout 0 1000000
[   39.626001] c1x01: exiting from fe_code()

c1ioo's IOP has the same ADC channel hopping error as c1sus'.

 

7922   Sat Jan 19 18:23:31 2013   rana | Update | Computers | All front ends but c1lsc are down

After sshing into several machines and doing 'sudo shutdown -r now', some of them came back and ran their processes.

After hitting the reset button on the RFM switch, their diagnostic lights came back. After restarting the Dolphin task on fb:

"sudo /etc/init.d/dis_networkmgr restart"

the Dolphin diagnostic lights came up green on the FE status screen.
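For the record, here is the recovery sequence above collected into a script-like sketch; the host list and the controls account are assumed, and this is not the exact procedure that was run:

# soft-reboot the front ends that were down (c1lsc stayed up, so it is skipped)
for fe in c1sus c1ioo c1iscex c1iscey; do
  ssh controls@${fe} 'sudo shutdown -r now'
done
# once they are back, restart the Dolphin network manager on fb
ssh controls@fb 'sudo /etc/init.d/dis_networkmgr restart'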

iscex still wouldn't come up. The awgtpman tasks on there keep trying to start but then stop due to not finding ADCs.

 

Then I power cycled the IO chassis for EX, and the awgtpman log files changed, but still no green lights. Then I tried a soft reboot on fb, and now it's not booting correctly.

Hardware lights are on, but I can't telnet into it. Tried power cycling it once or twice, but no luck.

Probably Jamie will have to hook up a keyboard and monitor to it, to find out why it's not booting.


P.S. The snapshot scripts in the yellow button don't work and the MEDM screen itself is missing the time/date string on the top.

14241   Wed Oct 10 12:38:27 2018   yuki | Configuration | LSC | All hardware was installed

I connected the DAC - AI board - PZT driver - PZT mirrors chain and made sure the PZT mirrors were moving when I changed the signal from the DAC. Tomorrow I will prepare the alignment servo with the green beam for the Y-arm.

13885   Thu May 24 10:16:29 2018   gautam | Update | General | All models on c1lsc frontend crashed

All models on the c1lsc front end were dead. Looking at the slow trend data, it looks like this happened ~6 hours ago. I rebooted c1lsc and now all models are back up and running in their "nominal state".

Attachment 1: c1lsc_crashed.png
12284   Sun Jul 10 15:54:23 2016   ericq | Update | CDS | All models restarted

For some reason, all of the non-IOP models on the vertex frontends had crashed.

To get all of the bits green, I ended up restarting all models on all frontends. (This means Gautam's coil tests have been interrupted.)

7724   Mon Nov 19 15:15:22 2012   Jenne | Update | SUS | All oplev gains turned on

Quote:

Oplev values that were changed to zero:

PRM P=0.15, Y=-0.3

SRM P=-2.0, Y=2.0

BS P=0.2, Y=-0.2

ITMY P=2.1, Y=-2.0

ITMX P=1.0, Y=-0.5

ETMX P=-0.2, Y=-0.2

ETMY P=0.5, Y=0.6

Also, PRCL was changed in the LSC input matrix from REFL33I to AS55I, since there is no REFL beam out of the IFO :(

 Ayaka and I restored all of the oplev gains to these values.  The exception is ETMY, which has both gains negative.  I am unsure if this is a transcription error on my part, or if something physical has changed.  The layout of the ETMY oplev was modified (since Rana took out the offending lens) but that shouldn't affect the sign of the gains.

7551   Mon Oct 15 22:16:09 2012   Jenne | Update | SUS | All oplev gains turned to 0

Steve has promised to fix up all of the oplevs, but it hasn't happened yet, so I've turned all of the oplev gains to zero, so that when the optics are restored we don't have to quickly click them off.

Oplev values that were changed to zero:

PRM P=0.15, Y=-0.3

SRM P=-2.0, Y=2.0

BS P=0.2, Y=-0.2

ITMY P=2.1, Y=-2.0

ITMX P=1.0, Y=-0.5

ETMX P=-0.2, Y=-0.2

ETMY P=0.5, Y=0.6

Also, PRCL was changed in the LSC input matrix from REFL33I to AS55I, since there is no REFL beam out of the IFO :(
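For next time, this can be scripted with the EPICS command-line tools. A sketch is below; the channel-name pattern C1:SUS-<OPTIC>_OLPIT_GAIN / _OLYAW_GAIN is assumed and should be checked against the actual database before use.

# save the current oplev gains to a file, then zero them all
for opt in PRM SRM BS ITMX ITMY ETMX ETMY; do
  for dof in OLPIT OLYAW; do
    ch=C1:SUS-${opt}_${dof}_GAIN
    echo "${ch} $(caget -t ${ch})" >> oplev_gains_backup.txt
    caput ${ch} 0
  done
done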

11054   Fri Feb 20 12:29:01 2015   ericq | Update | CDS | All optics damped

I noticed some diagnostic bits in the c1sus IOP complaining about user application timing and FIFO under/overflow (the second and fourth squares next to the DACs on the GDS screen). Over in crackle-cymac land, I've seen this correlate with large excess DAC noise. After restarting all models, all but one of these is green again, and the optics are now all damped.

It seems there were some fishy BURT restores, as I found the TT control filters had their inputs and outputs switched off. Some ASS filters were found this way too. More undesired settings may still lurk in the mists...

The interferometer is now realigned, arms locked. 

5007   Wed Jul 20 20:44:56 2011   Jamie | Update | SUS | All sus models rebuilt and restarted

There were a couple of recent improvements to the sus_single_control model that had not been propagated to all of the suspension controllers.

Rebuilt and restarted c1mcs, c1sus, c1scx, and c1scy.  Everything seems to be working fine after restart.

1975   Tue Sep 8 17:57:30 2009   Jenne | Update | PEM | All the Acc/Seis working again

All of the accelerometers and seismometers are plugged in and functional again.  The cables to the back of the accelerometer preamp board (sitting under the BS oplev table) had been unplugged, which was unexpected.  I finally figured out that that's what the problem was with half of the accelerometers, plugged them back in, and now all of the sensors are up and running.

The SEIS_GUR seismometer is under MC1, and all the others (the other Guralp, the Ranger which is oriented vertically, and all 6 accelerometers) are under MC2.

1146   Wed Nov 19 10:32:11 2008   Alberto | Configuration | Electronics | All the Marconi Set to the Rubidium Frequency Standard
I placed the SRS Rubidium FS725 over the PSL rack, next to the frequency counter. This one and the Marconi on the PSL rack have been connected to the 10 MHz output of the frequency standard. I also set the first Marconi, the one that used to drive the others, to the external, direct frequency reference. Now it reads 166981718 Hz versus 166981725 Hz measured by the frequency counter: a 7 Hz difference.
1147   Wed Nov 19 18:02:18 2008   rana | Configuration | Electronics | All the Marconi Set to the Rubidium Frequency Standard
Not sure what was going on before. I changed the frequency counter to use an AC-coupled input, and had it average for 50 seconds. The output now agrees with the Marconi front panel to less than 1 Hz. It's still not 0.000 Hz, but not bad.
4084   Tue Dec 21 16:34:42 2010   kiwamu | Summary | VAC | All the test masses have been wiped

 [Jenne and Kiwamu]

 We wiped all the test masses with isopropyl alcohol.

They became apparently much cleaner.

(how to)

 At first we prepared the following stuff:

  * syringe

  * isopropyl alcohol 

  * lens papers

  * cleaned scissors

Then we cut the lens papers in half with the scissors so that the long side remained.

This is because the SOSs have relatively narrow spaces around their optic surfaces for inserting a piece of paper.

We did vertical and horizontal wiping using the lens paper and an appropriate amount of isopropyl alcohol.

Each wipe (vertical and horizontal) required two or three passes to properly remove the dust.

Amount of isopropyl:

   * vertical 15 [ul]

   * horizontal 10 [ul]

In addition to this, we also used the ionizer gun to blow larger dust particles and fibers away from the surface.

 

 

(surface inspection)

Before we wiped them, all the test masses had small dust particles uniformly distributed over their HR surfaces.

ETMX in particular was quite dirty: many small spots (dust) were visible when we illuminated the surface with the fiber illuminator.

ETMY was not so bad, with only a couple of small specks around the center. ITMX/Y had several specks; they were not as dirty as ETMX, but not cleaner than ETMY.

After we wiped them, we confirmed that no obvious dust remained around the centers of the optics. They looked pretty good!

 

 

8962   Fri Aug 2 22:51:10 2013   Jenne | Update | General | All vent tasks complete, just need oplev check

[Manasa, Koji, Jenne]

We went into the BS and IOO chambers and aligned the green beams such that they came out of the vacuum chamber. The idea here was to get the beams at the same height, but slightly offset in yaw. This required moving the periscope on the BS table, the PBS in front of that periscope, the periscope on the IOO table, and 2 steering mirrors on the IOO table after the 2nd periscope. The tables were not releveled, but since we have aligned the full interferometer in this configuration, we do not want to touch the tables. The MC spot positions are still consistent with those measured earlier this afternoon, before this work, so I'm not concerned.

We confirmed that both green beams are hitting a good place (centered in pitch, and just left and right of center in yaw) on the mirror in the OMC chamber, and are getting to the center of the first mirror on the PSL table.  We then coarsely aligned the beams on the PSL table.

We then relocked and aligned the arms for IR, and checked that the AS beam is centered on the mirrors in the BS chamber, and that the beam is coming out, and to the AS table.  I touched the last mirror before PZT3 a small amount in yaw, and then PZT3 in pitch and yaw, until we saw the beam recentered on the first mirror on the AS table.  At that point, we were also back to the center of the AS camera (which is good, since Koji had aligned all of that the other day).  So, the AS beam is good.

We checked IPPOS, and have centered the beam on all the mirrors, and aligned the beam onto the QPD. 

We checked IPANG, by looking through the viewports at the mirrors in the ETMY chamber.  We are now centered in yaw, but clipping a bit low.  This is what we want, since we always end up drifting high during the pump-down. 

We see a nice, unclipped REFL beam on the camera.

We see a beam a little high on the POP camera, but Koji looked on the table with a card and saw the beam; we just need to do some minor alignment of the out-of-vac mirrors.

We checked again that the green TEM00 beams from both arms come to the PSL table. 

We are getting POX and POY out, since we are using them to lock and align the arms.

Manasa and Koji recovered one clean allen key from the bottom of the chambers, but one remains, as a sacrifice to the vacuum gods. 

I believe that, with the exception of checking the oplevs and taking photos of PR3 and the green steering optics, we have finished all of our vent tasks. We should do a quickie alignment on Monday, check the oplevs, take some photos, and put on the heavy doors. Pumping can start either Monday afternoon or Tuesday morning.

 

15578   Wed Sep 16 17:44:27 2020   gautam | Update | CDS | All vertex FE models restarted

I had to make a CDS change to the c1lsc model in an effort to get a few more signals into the models. Rather than risk requiring hard reboots (typically my experience if I try to restart a model), I opted for the more deterministic scripted reboot, at the expense of spending ~20 mins to get everything back up and running.


Update 2230: this was more complicated than expected - a nuclear reboot was necessary but now everything is back online and functioning as expected. While all the CDS indicators were green when I wrote this up at ~1800, the c1sus model was having frequent CPU overflows (execution time > 60 us). Not sure why this happened, or why a hard power reboot of everything fixed it, but I'm not delving into this.

The point of all this was that I can now simultaneously digitize 4 channels - 2 DCPDs, and 2 demodulated quadratures of an RF signal.
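For reference, a minimal sketch of what such a scripted restart amounts to, assuming the standard rtcds wrapper and the usual controls account; the model-to-host assignments below are only illustrative, and the actual script in use here may differ:

# restart the user models on each vertex front end (IOPs are left running)
ssh controls@c1sus 'rtcds restart c1sus; rtcds restart c1mcs'
ssh controls@c1ioo 'rtcds restart c1ioo'
ssh controls@c1lsc 'rtcds restart c1ass; rtcds restart c1lsc'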

Attachment 1: CDS.png
14591   Fri May 3 09:12:31 2019   gautam | Update | SUS | All vertex SUS watchdogs were tripped

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work. Was there a reason why they were left tripped?

On a side note - I don't think we log the watchdog state explicitly. We can infer whether the optic is damped by looking at the OSEM sensor time series, but do we want to record the watchdog state to frames?

Attachment 1: SUSwatchdogs.png
14596   Mon May 6 11:05:23 2019   Jon | Update | SUS | All vertex SUS watchdogs were tripped

Yes, this was a consequence of the systemd scripting I was setting up. Unlike the old susaux system, we decided for safety NOT to allow the modbus IOC to automatically enable the coil outputs. Thus when the modbus service starts/restarts, it automatically restores all state except the watchdog channels, which are left in their default disabled state. They then have to be manually enabled by an operator, as I should have done after finishing testing.

Quote:

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work. Was there a reason why they were left tripped?

305   Sat Feb 9 13:32:07 2008   John | Summary | SUS | All watchdogs tripped
When I arrived this afternoon the watchdogs had tripped on all optics. I reset them and enabled the coil currents.

I had to adjust the alignment of the mode cleaner to get it to lock.
306   Sun Feb 10 20:47:01 2008   Alan | Summary | SUS | All watchdogs tripped
A moderate earthquake occurred at 11:12:06 PM (PST) on Friday, February 8, 2008.
The magnitude 5.1 event occurred 21 km (13 miles) NW of Guadalupe Victoria, Baja California, Mexico.
http://quake.wr.usgs.gov/recenteqs/Quakes/ci14346868.html
15373   Wed Jun 3 19:19:11 2020   gautam | Update | SUS | All watchdogs tripped

This EQ seems to have knocked all suspensions out. ITMX was stuck. It is now released, and the IMC is locked again. It looks like there are some serious aftershocks going on so let's keep an eye on things.

2467   Wed Dec 30 10:58:48 2009   Alberto | Update | General | All watchdogs tripped this morning

This morning I found all the watchdogs had tripped during the night.

I restored them all.

I can't damp ITMX. I noticed that its driving matrix is all 1s and -1s, as the right values had been lost in some previous burt restore.

2468   Wed Dec 30 18:01:03 2009   Alberto, Rana | Update | General | All watchdogs tripped this morning

Quote:

This morning I found all the watchdogs had tripped during the night.

I restored them all.

I can't damp ITMX. I noticed that its driving matrix is all 1s and -1s, as the right values had been lost in some previous burt restore.

 

Rana fixed the problem. He found that the side damping was saturating. He lowered the gain a little for a while, waited for the damping to slow down the optic, and then brought the gain back to where it was.

He also updated the MEDM screen snapshot.

15155   Sun Jan 26 13:30:19 2020   gautam | Update | SUS | All watchdogs tripped, now restored

Looks like a M=4.6 earthquake in Barstow, CA tripped all the suspensions. ITMX got stuck. I restored the local damping on all the suspensions just now, and freed ITMX. Looks like all the suspensions damp okay, so I think we didn't suffer any lasting damage. The IMC was re-aligned and is now locked.

Attachment 1: EQ_Jan25.pdf
15335   Fri May 15 19:10:42 2020   gautam | Update | SUS | All watchdogs tripped, now restored

This EQ in Nevada seems to have tripped all watchdogs. ITMX was stuck. It was released, and all the watchdogs were restored. Now the IMC is locked.

1294   Wed Feb 11 15:01:47 2009   josephb | Configuration | Computers | Allegra

So after having broken Allegra by updating the kernel, I was able to get it running again by copying the xorg.conf.backup file over xorg.conf in /etc/X11. So at this point in time, Allegra is running with generic video drivers, as opposed to the ATI-specific proprietary drivers.
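In case this comes up again, the recovery step amounts to:

# fall back to the backed-up X config (generic driver), then restart X
sudo cp /etc/X11/xorg.conf.backup /etc/X11/xorg.conf
# log out of the X session or reboot for the change to take effect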

2509   Tue Jan 12 11:34:26 2010   josephb | Update | PEM | Allegra dataviewer

Quote:

So that we can use both Guralps for Adaptive stuff, and so that I can look at the differential ground motion spectra, I've reconnected the Guralp Seismometers to the PEM ADCU, instead of where they've been sitting for a while connected to the ASS ADC. I redid the ASS.mdl file, so that the PEM and PEMIIR matrices know where to look for the Gur2 data. I followed the 'make ass' procedure in the wiki. The spectra of the Gur1 and Gur2 seismometers look pretty much the same, so everything should be all good.

There's a problem with DataViewer though:  After selecting signals to plot, whenever I hit the "Start" button for the realtime plots, DataViewer closes abruptly. 

When I open dataviewer in terminal, I get the following output:

allegra:~>dataviewer
Warning: communication protocol revision mismatch: expected 11.3, received 11.4
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: Not all children have same parent in XtManageChildren
Warning: communication protocol revision mismatch: expected 11.3, received 11.4
msgget: No space left on device
allegra:~>framer4 msgget:msqid: No space left on device

Does anyone have any inspiration for why this is, or what the story is?  I have GR class, but I'll try to follow up later this afternoon.

This problem seems to be restricted to allegra. Dataviewer works fine on Rosalba and op440m, as well as using ssh -X into rosalba to run dataviewer remotely. DTT seems to work fine on allegra. The disk usage seems nowhere near full on allegra. Without knowing which "device" it's referring to (although it must be a device local to allegra), I'm not sure what to look at now.

I'm going to do a reboot of allegra and see if that helps.

Update:  The reboot seems to have fixed the issue.
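Note for next time: "msgget: No space left on device" normally refers to the kernel's SysV message-queue limits rather than disk space, which would also explain why a reboot cleared it. A quick check with standard Linux tools (sketch):

ipcs -q      # list existing SysV message queues and their owners
ipcs -l      # show the system-wide message queue limits
# a stale queue can be removed by id without a reboot (use with care):
# ipcrm -q <msqid>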

 

 

 

 

3017   Sun May 30 17:51:04 2010   kiwamu | HowTo | PEM | Allegra dataviewer

I found that dataviewer wasn't working, but only on Allegra. This sometimes happens, as described in the past entry.

I rebooted Allegra, and the problem was fixed.

 

15246   Wed Mar 4 11:10:47 2020   Yehonathan | Update | Computers | Allegra revival

Allegra had no network cable and no mouse. We found Allegra's network cable (black) and connected it.

I found a dirty old school mouse and connected it.

I wiped Allegra and am currently installing Debian 10 on it, following Jon's elog.

04/01 update: I forgot to mention that I tried installing the CDS software by following Jamie's instructions: I added the line in /etc/apt/sources.list.d/lscsoft.list: "deb http://software.ligo.org/lscsoft/debian/ stretch contrib". But this was the only thing I managed to do; the next command in the instructions failed.
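For reference, the repo setup amounts to the sketch below. Note that the line above targets "stretch" while Debian 10 is "buster", which may be related to why the next command failed.

# add the lscsoft repository line and refresh the package index
echo "deb http://software.ligo.org/lscsoft/debian/ stretch contrib" | \
  sudo tee /etc/apt/sources.list.d/lscsoft.list
sudo apt-get update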

5778   Tue Nov 1 18:45:48 2011   Jenne | Update | Computers | Allegra's screens

I was trying to give Allegra a second head, and it didn't quite work.  It's still in progress.  Steve, you might not like how I've 'mounted' the second monitor, so we can talk about something that might work tomorrow.

14425   Fri Feb 1 01:24:06 2019   gautam | Update | SUS | Almost ready for pumpdown tomorrow

[koji, chub, jon, rana, gautam]

Full story tomorrow, but we went through most of the required pre-close-up checks/tasks (i.e. both arms were locked, and PRC and SRC cavity flashes were observed). Tomorrow, it remains to:

  1. Confirm clearance between elliptical reflector and ETMY
  2. Confirm leveling of ETMY table
  3. Take pics of ETMY table
  4. Put heavy door on ETMY chamber
  5. Pump down

The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum.

17116   Wed Aug 31 01:22:01 2022   Koji | Update | General | Along the X arm part 1

 

Attachment 5: RF delay line was accommodated in 1X3B. (KA)

Attachment 1: PXL_20220831_015945610.jpg
Attachment 2: PXL_20220831_020024783.jpg
Attachment 3: PXL_20220831_020039366.jpg
Attachment 4: PXL_20220831_020058066.jpg
Attachment 5: PXL_20220831_020108313.jpg
Attachment 6: PXL_20220831_020131546.jpg
Attachment 7: PXL_20220831_020145029.jpg
Attachment 8: PXL_20220831_020203254.jpg
Attachment 9: PXL_20220831_020217229.jpg
17129   Fri Sep 2 15:30:10 2022   Anchal | Update | General | Along the X arm part 1

[Anchal, Radhika]

Attachment 2: The custom cables that were part of the intermediate setup between the old and new electronics architectures were found.
These include:

  • 2 DB37 cables with custom wiring at their connectors, used to connect between the vacuum flange and the new Sat Amp box, marked J4-J5 and J6-J7.
  • 2 DB15 to dual-head DB9 (hydra-like) cables used to interface between the old coil drivers and the new Sat Amp box.

A copy of these cables is in use for MC1 right now; these are the spares. We put them in a cardboard box and marked the box appropriately.
The box is under the vacuum tube along the Y arm, near the center.

 

17117   Wed Aug 31 01:24:48 2022   Koji | Update | General | Along the X arm part 2

 

 

Attachment 1: PXL_20220831_020307235.jpg
Attachment 2: PXL_20220831_020333966.jpg
Attachment 3: PXL_20220831_020349163.jpg
Attachment 4: PXL_20220831_020355496.jpg
Attachment 5: PXL_20220831_020402798.jpg
Attachment 6: PXL_20220831_020411566.jpg
Attachment 7: PXL_20220831_020419923.jpg
Attachment 8: PXL_20220831_020439160.jpg
Attachment 9: PXL_20220831_020447841.jpg
17118   Wed Aug 31 01:25:37 2022   Koji | Update | General | Along the X arm part 3

 

 

Attachment 1: PXL_20220831_020455209.jpg
Attachment 2: PXL_20220831_020534639.jpg
Attachment 3: PXL_20220831_020556512.jpg
Attachment 4: PXL_20220831_020606964.jpg
Attachment 5: PXL_20220831_020615854.jpg
Attachment 6: PXL_20220831_020623018.jpg
Attachment 7: PXL_20220831_020640973.jpg
Attachment 8: PXL_20220831_020654579.jpg
Attachment 9: PXL_20220831_020712893.jpg
17119   Wed Aug 31 01:30:53 2022   Koji | Update | General | Along the X arm part 4

Behind the X arm tube

Attachment 1: PXL_20220831_020757504.jpg
Attachment 2: PXL_20220831_020825338.jpg
Attachment 3: PXL_20220831_020856676.jpg
Attachment 4: PXL_20220831_020934968.jpg
Attachment 5: PXL_20220831_021030215.jpg
17120   Wed Aug 31 01:53:39 2022   Koji | Update | General | Along the Y arm part 1

 

 

Attachment 1: PXL_20220831_021118213.jpg
Attachment 2: PXL_20220831_021133038.jpg
Attachment 3: PXL_20220831_021228013.jpg
Attachment 4: PXL_20220831_021242520.jpg
Attachment 5: PXL_20220831_021258739.jpg
Attachment 6: PXL_20220831_021334823.jpg
Attachment 7: PXL_20220831_021351076.jpg
Attachment 8: PXL_20220831_021406223.jpg
Attachment 9: PXL_20220831_021426110.jpg
17121   Wed Aug 31 01:54:45 2022   Koji | Update | General | Along the Y arm part 2
Attachment 1: PXL_20220831_021459596.jpg
Attachment 2: PXL_20220831_021522069.jpg
Attachment 3: PXL_20220831_021536313.jpg
Attachment 4: PXL_20220831_021544477.jpg
Attachment 5: PXL_20220831_021553458.jpg
Attachment 6: PXL_20220831_021610724.jpg
Attachment 7: PXL_20220831_021618209.jpg
Attachment 8: PXL_20220831_021648175.jpg
17130   Fri Sep 2 15:35:19 2022   Anchal | Update | General | Along the Y arm part 2

[Anchal, Radhika]

The cables in the open USPS box were important cables that are part of the new electronics architecture. These are 3 ft D2100103 DB15F-to-DB9M reducer cables that go between the coil driver output (DB15M on the back) and the satellite amplifier coil driver input (DB9F on the front). They have been placed in a separate plastic box, labeled, and kept with the rest of the D-sub cable plastic boxes that are part of the upgrade wiring, behind the tube on the Y arm across from 1Y2. I believe JC will eventually store these D-sub cable boxes together somewhere.

6373   Wed Mar 7 13:59:07 2012   Ryan Fisher | Summary | Computer Scripts / Programs | Alterations to base epics install for installing aLIGO conlog:

In order to install the necessary extensions to epics to make the aLIGO conlog work, I have edited one file in the base epics install that affects makefiles:

/cvs/cds/caltech/apps/linux64/epics/base/configure/CONFIG_COMMON

Jamie said he prefers diffs, so I regenerated the original file and did a diff against the current file:

megatron:configure>diff CONFIG_COMMON.orig.reconstructedMar72012 CONFIG_COMMON.bck.Mar72012
206,207c206,210
< USR_CPPFLAGS =
< USR_DBDFLAGS =
---
> USR_CPPFLAGS = -I $(EPICS_BASE)/include
> USR_CPPFLAGS += -I $(EPICS_BASE)/include/os/Linux/
> USR_CPPFLAGS += -I $(EPICS_BASE)/../modules/archive/lib/linux-x86_64/
> USR_DBDFLAGS = -I $(EPICS_BASE)/dbd
> USR_DBDFLAGS += -I $(EPICS_BASE)/../modules/archive/dbd

This is saved in CONFIG_COMMON.diff.Mar72012_1

6377   Wed Mar 7 18:00:39 2012   Ryan Fisher | Summary | Computer Scripts / Programs | Alterations to base epics install for installing aLIGO conlog:

Quote:

In order to install the necessary extensions to epics to make the aLIGO conlog work, I have edited one file in the base epics install that affects makefiles:

/cvs/cds/caltech/apps/linux64/epics/base/configure/CONFIG_COMMON

Jamie said he prefers diffs, so I regenerated the original file and did a diff against the current file:

megatron:configure>diff CONFIG_COMMON.orig.reconstructedMar72012 CONFIG_COMMON.bck.Mar72012
206,207c206,210
< USR_CPPFLAGS =
< USR_DBDFLAGS =
---
> USR_CPPFLAGS = -I $(EPICS_BASE)/include
> USR_CPPFLAGS += -I $(EPICS_BASE)/include/os/Linux/
> USR_CPPFLAGS += -I $(EPICS_BASE)/../modules/archive/lib/linux-x86_64/
> USR_DBDFLAGS = -I $(EPICS_BASE)/dbd
> USR_DBDFLAGS += -I $(EPICS_BASE)/../modules/archive/dbd

This is saved in CONFIG_COMMON.diff.Mar72012_1

After I followed up with Patrick Thomas for a while, trying to make the extensions to epics work within the currently installed epics, he decided that we should just start over with a fresh install of epics 3.14.10.

I am installing this in /ligo/apps/linux-x86_64/epics/base-3.14.10/

Prior to all of this, I had done a lot of installation and configuration of the packages needed to make LAMP work:

sudo apt-get install lamp-server^

sudo apt-get install phpmyadmin

I then continued to follow the instructions on Patrick's wiki:

https://awiki.ligo-wa.caltech.edu/aLIGO/Conlog#EDCU_library

I installed the c_string library into /ligo/apps/linux-x86_64/ according to his instructions.  (prior to my installs, there was no /ligo/ on this machine at all, so I made the needed parent directories with the correct permissions).

That should be everything up to installing epics (working on that now).
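For reference, a minimal sketch of the base build itself, assuming the 3.14.10 source tree has already been unpacked into the directory named above:

cd /ligo/apps/linux-x86_64/epics/base-3.14.10
export EPICS_HOST_ARCH=linux-x86_64   # matches the target layout used above
make                                  # builds the libraries and tools under lib/ and bin/linux-x86_64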
