ID | Date | Author | Type | Category | Subject
  7322 | Thu Aug 30 20:20:52 2012 | Jenne | Update | SUS | Watek camera placed on SE viewport of ITMX to look at PRM

[EricQ, Jenne]

We placed the Watek camera on the SE viewport of the ITMX chamber, and focused it on the face of PRM.  We are not able to see any scattered light transmitted through the PRM, so this camera was an ineffective way to try to check spot centering on the PRM.  Jamie placed one of the new targets on the PRM cage - see his elog for details.

To get more use out of the camera, we need to mount it on something at the 5.5 inch beam height, and then cover that something with clean foil so we can place the camera on the table, in the beamline, in various places.  We also need to carefully wrap the cables in foil so they don't dirty anything inside.

  16942 | Thu Jun 23 15:05:01 2022 | Water Monitor | Update | Upgrade | Water Bottle Refill

22:05:02 UTC Jordan refilled his water bottle at the water dispenser in the control room.

  3208 | Tue Jul 13 17:36:42 2010 | nancy | Update | IOO | Wavefront Sensing Matrix Control

For yesterday - July 12th.

Yesterday, I tried understanding the MEDM and the Dataviewer screens for the WFS.

I then also decided to play around with the sensing matrix put into the WFS control system and see what happens.

I changed the sensing matrix to completely random values, and for some of the very bad values it even lost lock :P (I wanted that to happen).

Then I put in some values near to what it already had, and saw things again.

I also put in the matrix values that I had obtained from my DC calculations, which after Rana's explanation, I understand was silly.

Later I put back the original values, but the MC lock did not come back to what it was earlier. Probably my changing the values took it out of the linear region. THE MATRIX NOW HAS ITS OLD VALUES.

I was observing the power spectrum of the WFS signals after changing the matrix values, but it turned out to be a flop, because I had not removed the mean while measuring them.  I will do that again today, if we obtain the lock again (we suddenly lost MC lock badly some 20 minutes ago).
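
(For reference, a minimal sketch of the mean-removal step before taking a spectrum. The sample rate and data file below are placeholders, not the actual measurement setup:)

import numpy as np
from scipy.signal import welch

fs = 2048.0                        # sample rate in Hz (assumed)
x = np.loadtxt("wfs_signal.txt")   # placeholder: one WFS signal as a time series
x = x - np.mean(x)                 # remove the DC offset so it doesn't swamp the low-frequency bins
f, pxx = welch(x, fs=fs, nperseg=4096, detrend="constant")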

  3236 | Fri Jul 16 15:39:27 2010 | nancy | Update | IOO | Wavefront Sensors- switched off

I turned the gain of the WFS down to 0 last night at about 3am.

I turned it back on now.

  16006 | Wed Apr 7 22:48:48 2021 | gautam | Update | IOO | Waveplate commissioning

Summary:

I spent an hour this evening checking out the remote waveplate operation. Basic remote operation was established 👍 . To run a test on the main beam (or any beam for that matter), we need to lay out some long cabling, and install the controller in a rack. I will work with Jordan in the coming days to do these things. Apart from the hardware, some EPICS channels will need to be added to the c1ioo.db file and a python script will need to be set up as a service to allow remote operation.

Part numbers:

  • The controller is a NewFocus ESP300.
  • The waveplate stage is a PR50CC. The waveplate itself that is mounted has a 1" diameter (clear aperture is more like 21mm), which I think is ~twice the size of the waveplates we have in the lab; good thing Livingston shipped us the waveplate itself too. It is labelled QWPO-1064-10-2, so it should be a half wave plate as we want, but I didn't explicitly check with a linearly polarized beam today. Before any serious high power tests, we can first clean the waveplate with First Contact to avoid any burning of dirt. The damage threshold is rated as 1 MW/cm^2, and I estimate that we will be well below this threshold for any power levels (<30W) we are planning to put through this waveplate. For a 100um radius beam with 30W, the peak intensity is ~0.2 MW/cm^2. This is 20% of the rated damage threshold, so it may be better to enforce that the beam be >200um going through this waveplate.
  • The dimensions of the mount look compatible with the space we have on the PSL table (though of course once the amplifier comes into the picture, we will have to change the layout). Maybe it's better to keep everything downstream of the PMC fixed - then we just re-position the seed beam (i.e. NPRO) and amplifier, and then mode-match the output of the amplifier to the PMC.

Electrical tests:

  1. First, I connected a power cord to the ESP300 and powered it on - the front display lit up and displayed a bunch of diagnostics, and said something to the effect of "No stage connected".
  2. Next, I connected the rotary mount to "Axis #1": Male DB25 on the stage to female DB25 on the rear of the ESP300. The stage was recognized.
  3. Used the buttons on the front panel to rotate the waveplate, and confirmed visually that rotation was happening 👍 . I didn't calibrate the actual degrees of rotation against the readback on the front panel, but 45 degrees on the panel looked like 45 degrees rotation of the physical stage so seems fine.

RS232 tests:

  • This unit only has a 9-pin D-sub connector for interfacing to it remotely, via the RS232 protocol. The c1psl Supermicro host was designated as the computer with which I would attempt remote control.
  • To test, I decided to use a serial-USB adapter. Since this is only a single unit, no need to get an RS232-ethernet interface like the one used in the vacuum rack, but if there are strong opinions otherwise we can adopt some other wiring/control philosophy.
  • No drivers needed to be installed; the host recognized the adapter immediately. I then shifted the waveplate and controller assembly to inside the VEA - they are sitting on a cart behind 1X2. Once the controller was connected to the USB-serial adapter cable, it was registered at /dev/ttyUSB0 immediately. I had to chown this port to the controls user in order to access it using python serial.
  • Initially, I was pleasantly surprised when I found not one but TWO projects on PyPi that already claimed to do what I want! Sadly, neither NewportESP 1.1 nor PyMeasure 0.9.0 actually worked - the former is for python2 (and the string handling has to change for python3-compatible PySerial), while the latter seems to be optimized for Labview interfacing and didn't play so nice with the serial-USB adapter. I didn't want to spend >10mins on this and I know enough python serial to do the interfacing myself, so I pushed ahead. Good thing we have several pySerial experts in the group now, if any of you want to figure out how we can make either of these two utilities actually work for us - there is also this repo which claims to work for python 3, but I didn't try it because it isn't a managed package.
  • The command list is rather intimidating - it runs to some 100 (!) pages. Nevertheless, I used some basic commands to read back the serial number of the controller, and also succeeded in moving the stage around by issuing the "PR" command appropriately 👍 (a minimal pyserial sketch is given after this list). BTW, I forgot to test the motor enable/disable, which I think is an essential channel.
  • I think we actually only need a very minimal set of commands, so we don't need to read all 100 pages of instructions:
    • motor enable/disable
    • absolute and relative rotations
    • readback of the current position
    • readback of the moving status
    • a stop command
    • an interlock
  • Note that as a part of this work, in addition to chowning /dev/ttyUSB0, I installed the two aforementioned python packages on c1psl. I saw no reason to manually restart the modbus and latch services running on it, and I don't believe this work would have impacted the correct functioning of either of those two services, but be aware that I was poking around on c1psl. I was also reminded that the system python on this machine is 2.7 - basically, only the latch service that takes care of the gains for the IMC servo board is dependent on python (and my proposed waveplate control script will be too), but we should really upgrade the default python to 3.7/3.8.
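
For reference, a minimal pyserial sketch of the kind of interfacing described above (not the final script). The baud rate and exact query syntax are assumptions to be checked against the ESP300 manual; only the "PR" relative-move command was actually exercised today.

import serial

def esp(ser, cmd, query=False):
    # ESP300 commands are ASCII strings terminated by a carriage return
    ser.write((cmd + "\r").encode("ascii"))
    if query:
        return ser.readline().decode("ascii").strip()

with serial.Serial("/dev/ttyUSB0", baudrate=19200, timeout=1) as ser:
    esp(ser, "1MO")                     # axis 1 motor on (enable)
    esp(ser, "1PR45")                   # relative move of +45 degrees
    print(esp(ser, "1TP", query=True))  # read back the current position
    esp(ser, "1MF")                     # axis 1 motor off (disable)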

Next steps:

Satisfied that the unit works basically as expected, I decided to stop for today. My thinking was that we can have the ESP300 installed in 1X1 or 1X2 (depending on where space is more readily available). I have uploaded a cartoon here so people can comment if they like/dislike my plan.

  • We need to use a long-ish cable to run from 1X1/1X2, where the controller will be housed, to the PSL enclosure. Livingston did ship one such long cable (still on Rana's table), but I didn't check whether its length is sufficient or whether it works.
  • We need to set up some EPICS channels for the rotation stage angle, motor ENABLE/DISABLE, a "move stage" button, motion status, and maybe a channel to control the rotation speed? 
  • We need a python script that is reading from / writing to these EPICS channels in a while loop (a rough sketch is given after this list). It should be straightforward to set up something to run like the latch.py service that has worked decently reliably for ~a year now. afaik, there isn't a good way to run this synchronously, and the delay in sending/completing the execution of some of the serial commands might be ~1 second, but for the purpose of slowly ramping up the power, this shouldn't be a problem.
  • One question I do have is, what is the strategy to protect the IFO from the high power when the lock is lost? Surely we are not gonna rely on this waveplate for any fast actuation? With the current input power of 1W, the MCREFL photodiode sees ~100mW when the IMC loses lock. So if the final input power is 35W, do we wanna change the T=10% beamsplitter in the MCREFL path to keep this ratio?
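
A rough sketch of the proposed relay loop follows. The channel names are placeholders (not the ones that will eventually be created), and the "PA"/"TP" mnemonics should be checked against the ESP300 manual:

import time
import epics    # pyepics
import serial

REQ = "C1:PSL-WAVEPLATE_ANGLE_REQ"   # hypothetical soft channel: requested angle
MON = "C1:PSL-WAVEPLATE_ANGLE_MON"   # hypothetical soft channel: readback angle

ser = serial.Serial("/dev/ttyUSB0", baudrate=19200, timeout=1)

def esp(cmd, query=False):
    ser.write((cmd + "\r").encode("ascii"))
    if query:
        return ser.readline().decode("ascii").strip()

while True:
    target = epics.caget(REQ)
    if target is not None:
        esp("1PA%.3f" % target)       # absolute move to the requested angle
    pos = esp("1TP", query=True)      # current position from the controller
    if pos:
        epics.caput(MON, float(pos))
    time.sleep(1.0)                   # ~1 s serial latency is fine for slow power ramps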

Once everything is installed, we can run some tests to see if the rotary motion disturbs the PSL in any meaningful way. I will upload some photos to the picasa later. Photos here.

  16036 | Thu Apr 15 15:54:46 2021 | gautam | Update | IOO | Waveplate commissioning - hardware installed

[jordan, gautam]

We did the following this afternoon.

  1. Disconnected the cable from the unused (and possibly not working) RefCav heater power supply, and removed said PS from 1X1. There was insufficient space to install the ESP300 controller elsewhere. I have stored the power supply along the east arm under the beamtube, approximately directly opposite the RFPD cabinet.
  2. Installed the ESP 300 - conveniently, the HP DCPS was already sitting on some rails and so we didn't need to add any.
  3. Ran a long D25-D25 cable from the ESP300 to the NE corner area of the PSL enclosure. The ends of the cable are labelled as "ESP end" and "Waveplate end". The HEPA was turned on for the duration we had the enclosure open, and I have now turned it off.
  4. Connected the waveplate to this cable. Also re-connected the ESP300 to the c1psl supermicro host via the USB-RS232 adapter cable.

The IMC stayed locked throughout our work, and judging by the CDS overview screen, we don't seem to have done any lasting damage, but I will run more tests. Note that the waveplate isn't yet installed in the beam path - I may do this later today evening depending on lab activity, but for now, it is just sitting on the lower shelf inside the PSL enclosure. I will post some photos later.

Quote:
 

So this system is ready to be installed once Jordan and I find some time to lay out cabling + install the ESP300 controller in a rack.


Update: The waveplate was installed. I gave it a couple of rounds of cleaning by first contact, and visually, it looked good to me. More photos uploaded. I also made some minor improvements to the MEDM screen, and set up the communication script with the ESP300 to run as a systemd service on c1psl. Let's see how stable things are... I think the philosophy at the sites is to calibrate the waveplate rotation angle in terms of power units, but I'm not sure how the unit we have performs in terms of backlash error. We can do a trial by requesting ~100 "random" angles, monitoring the power in s- and p-polarizations, and then quantifying the error between requested and realized angles, but I haven't done this yet. I also haven't added these channels to the set recorded to frames / to the burt snapshot - do we want to record these channels long term?
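
A sketch of how that backlash trial could look once the channels are in place (channel names and the settling time below are placeholders):

import time
import numpy as np
import epics

REQ = "C1:PSL-WAVEPLATE_ANGLE_REQ"    # placeholder channel names
MON = "C1:PSL-WAVEPLATE_ANGLE_MON"

requested = np.random.uniform(0, 90, 100)   # ~100 "random" angles
realized = []
for angle in requested:
    epics.caput(REQ, angle, wait=True)
    time.sleep(5)                           # wait for the move to finish
    realized.append(epics.caget(MON))

err = np.asarray(realized) - requested
print("mean |err| = %.3f deg, max |err| = %.3f deg"
      % (np.mean(np.abs(err)), np.max(np.abs(err))))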

  16022 | Tue Apr 13 17:47:07 2021 | gautam | Update | IOO | Waveplate commissioning - software prepared

I spent some time today setting up a workable user interface to control the waveplate.

  1. Created some EPICS database records at /cvs/cds/caltech/target/ESP300.db. These are all soft channels. This required a couple of restarts of the modbus service on c1psl - as far as I can tell, everything has come back up without problems.
  2. Hacked newportESP to make it work - mainly some string encoding BS from the python2-->python3 paradigm shift (the generic flavour of fix is sketched after this list).
  3. Made a python script at /cvs/cds/caltech/target/ESP300.py that is based on similar services I've set up for the CM servo and IMC servo boards. I have not yet set this up to run as a service on c1psl, but that is pretty trivial.
  4. Made a minimal MEDM screen, see Attachment #1. It is saved at  /opt/rtcds/caltech/c1/medm/c1psl/C1PSL_POW_CTRL.adl and can be accessed from the "PSL" tab on sitemap. We can eventually "calibrate" the angular position to power units.
  5. Confirmed that I can move the waveplate using this MEDM screen.
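
For reference, the generic flavour of the python2-->python3 fix (illustrative, not the actual newportESP patch): pyserial on python3 wants bytes rather than str on write, and returns bytes on read.

import serial

ser = serial.Serial("/dev/ttyUSB0", baudrate=19200, timeout=1)

# python2-era code could write a plain str:
#     ser.write("1TP\r")
# python3 needs explicit encode/decode:
ser.write("1TP\r".encode("ascii"))
reply = ser.readline().decode("ascii").strip()
print(reply)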

So this system is ready to be installed once Jordan and I find some time to lay out cabling + install the ESP300 controller in a rack.

At the moment, there is no high power and there is minimal risk of damaging anything, but someone should double check my logic to make sure that we aren't gonna burn the precious IFO optics. We should also probably hook up a hardware interlock to this controller.

I went through some aLIGO documentation and believe that they are using a custom made potentiometer based angle sensor rather than the integrated Newport (or similar) sensor+motor. My reading of the situation was that there were several problems to do with hysteresis, the "find home" routine etc. I guess for our purposes, none of these are real problems, as long as we are careful not to randomly rotate the waveplate through a full 180 degrees and go through the full fringe in the process. We need to think of a clever way to guard against careless / accidental MEDM button presses / slider drags.


Unrelated to this work: I haven't been in the lab for ~a week so I took the opportunity today to go through the various configs (POX/POY/PRMI resonant carrier etc). I didn't make a noise budget for each config but at least they can be locked 👍 . I also re-aligned the badly misaligned PMC and offloaded the somewhat large DC WFS offsets (~100 cts, which I estimate to be ~150 nNm of torque, corresponding to ~50 urad of misalignment) to the IMC suspensions' slow bias voltages. 

  2410 | Mon Dec 14 12:13:52 2009 | Jenne | Update | Treasure | We are *ROCKSTARS* ! IFO is back up

[Jenne, Kiwamu, Koji]

We got the IFO back up and running!  After all of our aligning, we even managed to get both arms locked simultaneously.  Basically, we are awesome. 

 This morning, we did the following:

*  Turned on the PZT High voltages for both the steering mirrors and the OMC.  (For the steering mirrors, turn on the power, then hit "close loop" on each.  For the OMC, hit Output ON/OFF).

*  Looked at the PZT strain gauges, to confirm that the PZTs came back to where they had been.  (Look at the snapshot of C1ASC_PZT_Al)

*  Locked all components of the PSL (This had already been done.)

*  Removed beam dump which was blocking the PSL, and opened the PSL mechanical shutter.  Light into the IFO!

*  Locked the Mode Cleaner.  The auto-locker handled this with no problem.

*  Confirm that light is going through the Faraday.  (Look at the TV sitting on top of MC13 tank...it shows the Faraday, and we're hitting the input of the Faraday pretty much dead-on).

*  Look at IP_ANG and IP_POS.  Adjust the steering mirrors slightly to zero the X&Y readings on IP_ANG.  This did not change the PZTs by very much, so that's good.

*  Align all of the Core Optics to their OpLev positions.

*  On the IFO_Align screen, save these positions.

*  Run the IFO_Configure scripts, in the usual order.  (Xarm, Yarm, PRM, DRM).  Save the appropriate optics' positions after running the alignment scripts.  We ended up running each alignment script twice, because there was some residual misalignment after the first iteration, which we could see in the signal as viewed on DataViewer (Either TRX, TRY, or SPOB, for those respective DoFs).

*  Restore Full IFO.

*  Watch the beauty of both arms and the central cavity snapping together all by themselves!  In the attached screenshot, notice that TRX and TRY are both ~0.5, and SPOB and AS166Q are high.  Yay!

Conclusions: 

*  The wiping may have helped.  While aligning X and Y separately, TRX got as high as ~1.08, and TRY got as high as 0.98.  This seems to be a little bit higher than it was previously.

*  Since everything locked up in pretty short order, and the free swinging spectra (as measured by Kiwamu in elog 2405) look good, we didn't break anything while we were in the chambers last week.  Excellent.

*  We are now ready for a finesse measurement to tell us more quantitatively how we did with the wiping last week.

 

  2412 | Mon Dec 14 13:17:33 2009 | rob | Update | Treasure | We are *ROCKSTARS* ! IFO is back up

 

 

  7859 | Wed Dec 19 20:18:51 2012 | rana | Update | Computers | We are Changing the Passwerdz next week----

Be Prepared

http://xkcd.com/936/

  9602 | Wed Feb 5 15:39:41 2014 | manasa | Update | General | We are pumping down

[Steve, Manasa]

I checked the alignment one last time. The arms locked, PRM aligned, oplevs centered.

We went ahead and put the heavy doors ON. Steve is pumping down now!

  4873 | Thu Jun 23 23:54:29 2011 | Koji | Omnistructure | Environment | We are saved

Sonali, Ishwita, and another anonymous SURF saved us from the long-lasting water shortage of the 40m.

  2948 | Tue May 18 16:19:19 2010 | josephb | Update | CDS | We have two new IO chassis

We have 2 new IO chassis with mounting rails and the necessary boards for communicating with the computers.  We still need boards to talk to the ADCs, DACs, etc., but it's a start.  These two IO chassis are currently in the lab, but not in their racks.

They will be installed into 1X4 and 1Y5 tomorrow.  In addition to the boards, we need some cables, and the computers need the appropriate real time operating systems set up.  I'm hoping to get Alex over sometime this week to help work on that.

  2483 | Thu Jan 7 14:08:46 2010 | Jenne | Update | Computers | We haven't had a bootfest yet this week.....so today's the day

All the DAQ screens are bright red.  Thumbs down to that.

  2484 | Thu Jan 7 14:55:36 2010 | Jenne | Update | Computers | We haven't had a bootfest yet this week.....so today's the day

Quote:

All the DAQ screens are bright red.  Thumbs down to that.

 All better now. 

  1849 | Thu Aug 6 20:03:10 2009 | Koji | Update | General | We left two carts near PSL table.

Stephanie and Koji

We left two carts near the PSL table.
We are using them for characterization of the triple resonant EOM.

  5548 | Mon Sep 26 17:49:21 2011 | Jenne | Update | Computers | We now have BURT restore for slow channels

Koji and Suresh found that there have not been any autoburt snapshots taken of slow channels since ~December 13th 2010.  Not good!

We have found an elog from Joe talking about autoburt changes from that day:  elog 4046

Joe pointed all of the autoburt stuff to the new directory system, so it now takes a snapshot of every system in the *new* target directory.  This means that, since all of the aux things were left in the *old* target directory, none of them were getting snapshots taken.  I have added the old target path back to the autoburt cron file so that every hour it will search through both old and new target directories and take snapshots of everything in both.
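
(Illustrative sketch only - the real autoburt is a perl script driven by cron - of the directory scan it now performs; the new target path below is an assumption:)

import glob
import os

TARGET_DIRS = ["/cvs/cds/caltech/target",        # old target directory
               "/opt/rtcds/caltech/c1/target"]   # new target directory (assumed path)

systems = []
for top in TARGET_DIRS:
    for req in glob.glob(os.path.join(top, "*", "autoburt.req")):
        systems.append(os.path.dirname(req))     # each of these gets an hourly snapshot

print("\n".join(sorted(systems)))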

So, the systems which will now once again have autoburt snapshots taken are the following:

c1aux

c1auxex

c1auxey

c1dcuepics

c1iool0

c1iscaux

c1iscaux2

c1iscepics

c1losepics

c1omcepics

c1psl

c1susaux

c1vac1

c1vac2

 

I moved some old stuff (and especially things which would conflict with the new stuff) to the oldfe/ subdirectory of the old target directory:  c1ass, c1assepics, c1susvme1, c1susvme2, c1sosvme, c1iovme.

The following systems don't have an autoburt.req file, so don't get snapshots:  c0daqawg, c1daqctrl, c1dcu1, c1iscex, c1iscey.  If any of these need autoburts, we should create them.

All the new systems in the new target directory still have their autoburts working.

The first test of this will be in a few minutes, at 18:07:00 Pacific during the regular cron job.  Hopefully nothing crashes....

  5552 | Mon Sep 26 22:40:41 2011 | Jenne | Update | Computers | We now have BURT restore for slow channels
[Jenne, Koji]

After much Perl-learning and a few iterations, we have fixed the burt restore script, so that it actually does the slow channels. We have so far had one successful run, at 22:25, and the regular cron job should start doing the slow channels as of 23:07.
  7872 | Wed Jan 2 15:33:23 2013 | Jenne | HowTo | Locking | We should retry in-air locking

Immediate things to do include finishing installation of new TTs and re-routing of oplev paths in the BS chamber, but after all that, we should retry in-air locking.

The last time we (I) tried in-air locking, MICH wouldn't lock since there was only ~ 6uW of light on AS55 (see elog 7355).  That was before we increased the power into the MC by a factor of 10 (see elog 7410), so we should have tens of microwatts on the PD now.  At that time, we could barely see some PDH signal hidden in the noise of the PD, so with a factor of 10 optical gain, we should be able to lock MICH.

REFL should also have plenty of power - about 1.5 times the power incident on the PRM, so we should be able to lock PRCL. 

Even if we put a flat G&H mirror after the PRM to make a mini-cavity, and we lose power due to poor mode matching, we'll still have plenty of power at the REFL port to lock the mini-cavity.

For reference, I calculate that at full power, POX and POY see ~13uW when the arms are locked.

 

POX/POY power =  [  (P_inc on ITM) + (P_circ in arm)*(T_itm)  ] * (pickoff fraction of ITM ~ 100ppm)

REFL power = (P_inc on PRM) + (P_circ in PRCL)*(T_prm)     =~ 1.5*(P_inc on PRM)
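
The same estimates as a tiny calculator (all inputs are placeholders to be filled in with the real incident/circulating powers and transmissivities; this is not an attempt to re-derive the ~13uW figure):

def pox_poy_power(p_inc_itm, p_circ_arm, t_itm, pickoff=100e-6):
    # (P_inc on ITM + P_circ in arm * T_itm) * ITM pickoff fraction (~100 ppm)
    return (p_inc_itm + p_circ_arm * t_itm) * pickoff

def refl_power(p_inc_prm, p_circ_prc, t_prm):
    # P_inc on PRM + P_circ in PRCL * T_prm  (~1.5 * P_inc on PRM in practice)
    return p_inc_prm + p_circ_prc * t_prm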

  921 | Thu Sep 4 10:13:48 2008 | Jenne | Update | IOO | We unlocked the MC temporarily
[Joe, Eric, Jenne]

While trying to diagnose some DAQ/PD problems (look for Joe and Eric's entry later), we unlocked the PMC, which caused (of course) the MC to unlock. So if you're looking back in the data, the unlock at ~10:08am is caused by us, not whatever problems may have been going on with the FSS. It is now locked again, and looking good.
  3131 | Tue Jun 29 08:55:18 2010 | Jenne | Frogs | Environment | We're being attacked!

Infested_InvasionOfKillerBugs.jpg

We're going to have to reinstate the policy of No food / organic trash *anywhere* in the 40m.  Everyone has been pretty good, keeping the food trash to the one can right next to the sink, but that is no longer sufficient, since we've been invaded by an army of ants:

AntInvasion_small.jpg

We are going back to the old policy of Take your trash out to the dumpsters outside.  I'm sure there are some old wives' tales about how exercise after eating helps your digestion, or something like that, so no more laziness allowed!

  7694 | Fri Nov 9 17:15:05 2012 | Manasa, Steve, Ayaka | Update | General | We're closed! Pumping down monday morning

Quote:

After a brief look this morning, I called it and declared that we were ok to close up.  The access connector is almost all buttoned up, and both ETM doors are on.

Basically nothing moved since last night, which is good.  Jenne and I were a little bit worried about how the input pointing might have been affected by our moving of the green periscope in the MC chamber.

First thing this morning I went into the BS chamber to check out the alignment situation there.  I put the targets on the PRM and BS cages.  We were basically clear through the PRM aperture, and in retro-reflection.

The BS was not quite so clear.  There is a little bit of clipping through the exit aperture on the X arm side.  However, it didn't seem to me like it was enough to warrant retouching all the input alignment again, as that would have set us back another couple of days at least.

Both arm green beams are coming out cleanly, and are nicely overlapping with the IR beams at the BS (we even have a clean ~04 mode from the Y arm).  The AS and REFL spots look good.  IPANG and IPPOS are centered and haven't moved much since last night.  We're ready to go.

The rest of the vertex doors will go on after lunch.

Jamie and Steve got the ETM doors on this morning.

We got the other heavy doors including the ITMs, BS and the access connector in place.

If nobody raises any concerns in reply to this elog, Steve will take that as a green light and will start pumping down first thing Monday morning after the final check on the access connector bellows screws.

 

Steve! 

Ayaka and I got the ITMY and BS doors closed at 45 foot-pounds just now.

  8284 | Wed Mar 13 10:26:58 2013 | Manasa | Update | Locking | We're still good!!

We're still good with the IFO alignment after 7 hours.

I found the green still locked in the same state as last night; but no IR (so the arms are stable and the TTs should definitely take the blame).

Based on last night's observation (elog about drift in TT1), I only moved TT1 in pitch and gained back IR locking for both arms.

  420 | Wed Apr 16 09:47:35 2008 | Andrey | Summary | PEM | Weather Station
The weather station is functional again.

The long ethernet Cat5 cable connecting 'WeatherLink' and processor 'c1pem1' was repaired yesterday, namely the RJ45 connector was replaced,
and information about weather conditions is now again continuously being transferred from the 'Weather Monitor' to the control UNIX computers. We can see this information in 'c0Checklist.adl' screen and in Dataviewer.

Below are the two sets of trends for the temperature, wind speed and direction, pressure and the amount of precipitation.

The upper set of trends ("Attachment 1") is "Full Data" in Dataviewer for the 3 hours from 6.30AM till 9.30AM this morning,
and the lower set of trends ("Attachment 2") is "Minute Trend" in Dataviewer for 15 hours from 6.30PM yesterday till 9.30AM this morning.

I also updated the wiki-40 page describing the Weather Station and added there a description of the process of attaching the RJ45 connector to the end of an ethernet Cat5 cable. To access the wiki-40 page about the "weather station", go from the main page to the "PEM" section and click on "Weather Station".
  7014 | Mon Jul 23 21:17:58 2012 | Liz | Update | PEM | Weather Station Works!

Rana and I traced the cables that ran from c1pem1 to the Weather Station monitor.  We found that the flat blue cable that is plugged into c1pem1 was not connected to the black cable from the Weather Station.  We don't know why they were unplugged, but the Weather Station had been inactive since 2010.  Rana plugged them back in (they are now connected via a sketchy connector that had its pins askew) and now the channels are outputting correct data!  Everything else seems to be in good order and now I can use the data from the Weather Station for the summary pages!

  7015 | Mon Jul 23 21:54:48 2012 | rana | Update | PEM | Weather Station Works!

To get the code to run on c1pem1, we had to move the old target back into the /cvs/cds/caltech/target/ directory.  It is in /cvs/cds/caltech/target/c1pem1/.

JoeB had apparently moved it into some other area called 'oldfe', and this was why the weather station had not been running for years.    Joe is at LLO now, but he's not beyond our reach...

Once the code had been moved back I started it up. I also rebooted it from the telnet prompt to ensure that it worked on reboot. It did.

The cable issue that Liz mentions probably happened during the PSL table lifting and cable cleanup. It looks like someone yanked the ethernet cable out of its adapter and broke it...

  452 | Sat Apr 26 01:45:38 2008 | Andrey | Summary | PEM | Weather Station enhancement
Two more things concerning weather monitoring have been done during this week.

1) A Dataviewer template was created, so that one can see "real-time" information from the weather channels immediately, without adding many channels "manually".

If one wants to use this template,
open Dataviewer -> "File" -> "Restore Settings", /cvs/cds/caltech/users/Templates/Dataviewer_Templates/Weather.xml.

2) I wrote a couple of Matlab scripts that allow one to read data (minute trends) from the Dataviewer channels over some stretch of time in the past, save the received data in mat-files, and plot those minute trends. Thus, one can get plots that are very similar to what one sees in Dataviewer. These two Matlab files are located in the directory
"/cvs/cds/caltech/users/weather_station". The file "WeatherReading.m" allows reading from the weather channels (paths to the mDV directory must be configured before using my script); the file "WeatherTrends.m" allows plotting of those minute trends.

Unfortunately, hardware problems arise very often if we want to read for a somewhat long time in the past, so until now I have not succeeded in getting trends for more than 20 minutes. As an example, see the attached png-file with the 20-minutes trends of data from Thursday evening.

3) So far I have not had success in learning how to recalculate pressure from pascals to millibars in EPICS (although I tried a Google search).

4) I am making every effort in recent weeks not to put any personal or non-scientific information into the elog, but this message could be important for all of us, so I cannot resist:
a shark in the Pacific Ocean has killed a swimmer near San Diego (I saw this in the Russian news and then did a quick Google search).
http://latimesblogs.latimes.com/lanow/2008/04/this-just-in-fa.html
  414 | Fri Apr 4 16:54:06 2008 | Andrey | Summary | Environment | Weather station is fully alive

After today's trip to the roof of our building the weather station seems to be completely resurrected!

We went to the roof together with Steve Vass, and we discovered that:

(1) Sensors of wind speed, wind direction and the bowl that measures the amount of precipitation do not have any visible defects, so there is no problem with all those sensors even after being outside for seven years.

(2) We discovered that there are cable junctions located on the roof, and those junctions were located close to the rim (edge) of the roof, before the cables go inside the 40-meter lab room. The taping at the junctions had degraded with age, and the connections between the cables were disrupted (the cable ends had come out of the connectors). Therefore, no signal from the roof sensors could be transferred to the 'Weather Monitor'. It was not wise of the person who installed the weather station to leave the fragile cable connections outside on the roof, because the length of the cables would have allowed those three connectors to be located inside the building.

See the attached PDF-file with pictures.

(3) After the cables were plugged into the connectors, these cable junctions were gently pulled inside the 40-meter interferometer room. These cable junctions should not be located outside of the building!

Immediately after all the above-mentioned steps, the reasonable indications of outside temperature, humidity, pressure, wind speed and direction appeared on the 'Weather Monitor'.

In order to see if there is any problem of communication between the 'Weather Monitor' and the UNIX control computers through 'c1pem1', I rolled out two brand new black Cat5 ethernet cables on the floor of the interferometer room (they are on the floor temporarily; the ethernet cable will eventually go from the floor into the ceiling cable tray), connected the two cables together with cable connectors freshly purchased from the Caltech bookstore, and thus connected the 'Weather Monitor' to the processor 'c1pem1'.

Result: Now we can see reasonable indications of outside temperature, pressure, amount of precipitation, wind speed and direction on the EPICS screen! Moreover, these indications are changing with time.

As a reminder for everyone: standard atmospheric pressure is about 101 kPa, so a pressure indication of 99900 Pa is quite reasonable.

One thing is not clear to me yet: the wind speed on the 'Weather Monitor' is fluctuating between 2 and 4 mph, while the MEDM EPICS-screen values are fluctuating in the range between 0 and 3 mph.

Many thanks to Steve Vass and Alexander Ivanov for their help.
  458 | Mon Apr 28 23:44:33 2008 | Andrey | Update | Computer Scripts / Programs | Weather.db

I was trying to figure out how to modify the file "Weather.db" so that the atm. pressure would be recalculated from Pa to bar before appearing in the EPICS screen, but so far I have not succeeded. I restarted the processor "c1pem1" several times. I will continue this tomorrow, and I will also modify the names of the weather channels.
  1734 | Sun Jul 12 23:14:56 2009 | Jenne | Omnistructure | General | Web screenshots aren't being updated

Before heading back to the 40m to check on the computer situation, I thought I'd check the web screenshots page that Kakeru worked on, and it looks like none of the screens have been updated since June 1st.  I don't know what the story is on that one, or how to fix it, but it'd be handy if it were fixed.

  1762 | Sun Jul 19 22:38:24 2009 | rob | Omnistructure | General | Web screenshots aren't being updated

Quote:

Before heading back to the 40m to check on the computer situation, I thought I'd check the web screenshots page that Kakeru worked on, and it looks like none of the screens have been updated since June 1st.  I don't know what the story is on that one, or how to fix it, but it'd be handy if it were fixed.

 Apparently I broke this when I added op540m to the webstatus page.  It's fixed now.

  1207 | Mon Dec 29 21:51:02 2008 | Yoichi | Configuration | Computers | Web server on nodus
The apache on nodus has been serving solely for svn web access.
I changed the configuration so that all files under /cvs/cds/caltech/users/public_html/ can be seen under
https://nodus.ligo.caltech.edu:30889/

The page is not password protected, but you can add protection by putting an appropriate .htaccess
in your directory.
For the standard LVC password, put the following in your .htaccess
AuthType Basic  
AuthName "LVC password"
AuthUserFile /cvs/cds/caltech/apache/etc/LVC.auth
Require valid-user
  12372 | Thu Aug 4 14:21:21 2016 | ericq | Update | Computer Scripts / Programs | Web things mostly back online

The nodus restart caused a bit of downtime. The apache configuration files were accidentally deleted the other day, so elog/svn/wikis were just holding on in memory; this fact was unfortunately not elogged. 

Things should be up and running again, except for the 8080->8081 elog redirection which I haven't been able to figure out.

I will also set up the NFS backup to include nodus configuration files from now on.

  12373 | Thu Aug 4 15:00:40 2016 | ericq | Update | Computer Scripts / Programs | Web things mostly back online

Nodus' /export and /etc directories are now being backed up at /cvs/cds/caltech/nodus_backup

They will be rsync'd over as part of the nightly tape backups (scripts/backup/rsync.backup)

  12375 | Thu Aug 4 17:41:53 2016 | Koji | Update | Computer Scripts / Programs | Web things mostly back online

Sorry, I was writing the elog, but I had to dive into the chamber (@LHO) before completing it.

  4150 | Thu Jan 13 14:21:13 2011 | josephb | Update | CDS | Webview of front end model files automated

After Rana pointed me to Yoichi's MEDM snapshot script, I learned how to use Xvfb, which is what Yoichi used to render screens without a real display.  With this I wrote a new cron script, which I added to Mafalda's crontab to be run once a day at 6am.

The script is called webview_update.cron and is in /opt/rtcds/caltech/c1/scripts/AutoUpdate/.

#!/bin/bash
DISPLAY=:6
export DISPLAY
# Check if an Xvfb server is already running on this display
# (the [v] trick keeps grep from matching its own process)
pid=`ps -eaf | grep "[v]fb" | grep $DISPLAY | awk '{print $2}'`
if [ $pid ]; then
        echo "Xvfb already running [pid=${pid}]" >/dev/null
else
        # Start Xvfb and record its pid
        echo "Starting Xvfb on $DISPLAY"
        Xvfb $DISPLAY -screen 0 1600x1200x24 >&/dev/null &
        pid=$!
fi
echo $pid > /opt/rtcds/caltech/c1/scripts/AutoUpdate/Xvfb.pid
sleep 3

# Run the matlab process that regenerates the webview
/cvs/cds/caltech/apps/linux/matlab/bin/matlab -display :6 -logfile /opt/rtcds/caltech/c1/scripts/AutoUpdate/webview.log -r webview_simlink_update

  1489 | Thu Apr 16 16:26:57 2009 | pete | Update | Locking | Wed. night locking
yoichi, pete

We installed the watchLockLoss script in scripts/AutoDTT/.  This script monitors arm power and uses command line
DTT to save 5 s snapshot of the interferometer when it senses loss of lock.  We ran it on linux and it seemed to
save an xml file about half the time; we'll try it on solaris.  

I managed to get up to arm power of about 20 a couple of times.  IFO lost lock a couple of times after turning
off moving zero.  MC2 would often get tripped by lock loss and need resetting.  Maybe we will try to stiffen the
op levs.
  4824 | Wed Jun 15 15:18:01 2011 | kiwamu | Update | General | Wednesday cleaning

[Jenne / Kiwamu]

We spent approximately an hour on the weekly Wednesday cleaning.

This time we moved on to an area along the Y arm where a desk and an optics shelf reside.

We will continue cleaning up there next time.

  14964 | Thu Oct 10 23:36:02 2019 | Koji | Update | General | Wednesday cleaning work

[Jon, Yehonathan, Gautam, Aaron, Shruti, Koji]

We got together on Wednesday afternoon to clean the lab. In particular, we collected e-waste: VME crates, VME modules, old slow control cables, and other old/broken electronics. They are piled up in the office area and the cage outside right now (Attachments 1/2). We asked Liz to come pick them up (in coordination with either Gautam or Koji). Eventually this will free up two office desks.

Also, we made the acromag components organized in plastic boxes. (Attachment 3)

  14971 | Tue Oct 15 17:19:38 2019 | Koji | Update | General | Wednesday cleaning work

[Liz, Gautam, Chub, Jordan, Koji]

We removed a significant amount of e-waste from the lab. The garbage was moved to the e-waste station in WB SB and is waiting for disposal.

  1361 | Thu Mar 5 05:07:09 2009 | Yoichi | Update | Locking | Wednesday night locking
Tonight, I was having a problem with the PO_DC hand-off.
It fails most of the time.
I increased the averaging time for the PD1_DC offset measurement.
I also wrote a script to match the gain of the transmission DC and the PO_DC signals.
This script (/cvs/cds/caltech/scripts/CM/matchPODCGain ) measures the gains of the old (TRX+TRY) and new (PO_DC) signals at 150Hz and returns the optimal value to be put into the input matrix.
cm_step script calls matchPODCGain to determine the matrix element value for the PO_DC signal.

Even with this script, the hand-off was still unreliable.
I checked the AO path loop gain just before the hand off. It looked normal.
Then I realized that the oscilloscope I hooked up to the PO_DC signal using a T-BNC may be introducing some noise into the channel.
So I removed it. Then the PO_DC hand off went well at least once.
The IFO still loses lock at around arm power 10.

I attached time series of the latest lock loss. The second attachment is a zoom of the first one.
This time, there is a glitch in the ETM feedback signals, which is also present in the DARM and CARM error signals.
I saw this kind of glitch several times today.
  244 | Thu Jan 17 14:13:20 2008 | rob | Update | LSC | Wednesday's locking
Incremental progress on locking yet again. This time the handoff of DARM to the OMC worked, and progress halted at handing off control of the common mode to REFL166.
  3102 | Wed Jun 23 12:28:34 2010 | Razib | Summary | Phase Camera | Weekly Summary

This past week I have completed the following tasks:

 

1. Built a trigger and power box for the camera GC 750M (06058) and took some test images to see whether the trigger box really works. Result: It is doing fine!

2. Went over the setup that is already sitting on the table. Ref: Aidan's elog entry

3. Attended seminars and talks given by Alan, Jahms, Koji and Rana.

4. Attended the mandatory laser safety training by Peter.

 

Expected task for this week (could be more):

1. Work out analytical expressions of the power of the carrier and sidebands going to the camera in the setup. (As suggested by Rana and Joe)

2. Work on producing beat signal to the camera using the He-Ne laser setup.

3. Move,if possible, to the Nd:YAG setup.

4. Go over the codes and paper by the past SURFers on the phase camera experiment.

 

trigger-box_circuit.png

 


 

  1698 | Wed Jun 24 12:09:24 2009 | Clara | Update | PEM | Week 1(ish)

I spent the week reading up on filter algorithm theory, particularly Wiener filtering. I have also learned how to get data from specific channels at specific times, and I've been getting myself acquainted with Matlab (which I have not previously used). Finally, I started messing around with the positioning of the accelerometers and seismometers in order to try to find the setup that yields the best filtration.

  1694 | Wed Jun 24 10:53:34 2009 | Chris Zimmerman | Update | General | Week 1/2 Update

I've spent most of the last week doing background reading; fourier transforms, shm, e&m, and other physics that I didn't cover at school.  I also read a few chapters in Saulson, especially the chapter on noise and shot noise.  To get a better grip on what I'm going to be doing I read through the polarization chapter in Hobbs' "Optics" text, mostly on wave plates since that's a large part of this readout.  Since then I've been working up to calculating the shot noise, starting with the electric field throughout the new interferometer readout.

  1710 | Wed Jul 1 10:56:42 2009 | Chris Zimmerman | Update | General | Week 2/3 Update

I spent the last week working a lot with the differences between a basic Michelson readout and the new one as a displacement sensor.  The new one (w/ wave plates) ends with two differently polarized beams and should have better sensitivity; I've also been going through noise/sensitivity calculations for each, although that hit a road block when I had to start the 1st SURF progress report, which has taken up most of my time since Saturday.

  1720 | Wed Jul 8 11:05:40 2009 | Chris Zimmerman | Update | General | Week 3/4 Update

The last week I've spent mostly working on calculating shot noise and other sensitivities in three Michelson sensor setups: the standard Michelson, the "long range" Michelson (with wave plates), and the proposed EUCLID setup.  The goal is to show that there is some inherent advantage to the latter two setups as displacement sensors.  This involved looking into polarization and optics a lot more, so I've been spending a lot of time on that also.  For example, the displacement sensitivity/shot noise on the standard Michelson is around 6.805*10^-17 m/rHz at L_=1*10^-7 m, as shown in the graph.  NSD_Displacement.png

  1750 | Wed Jul 15 12:44:28 2009 | Chris Zimmerman | Update | General | Week 4/5 Update

I've spent most of the last week working on finishing up the UCSD calculations, comparing it to the EUCLID design, and thinking about getting started with a prototype and modelling in MATLAB.  Attached is something on EUCLID/UCSD sensors.

  6986 | Wed Jul 18 10:08:01 2012 | Liz | Update | Computer Scripts / Programs | Week 5 update/progress

Over the past week, I have been focusing on the issues I brought up in my last ELOG,  6956.  I spent quite a while attempting to modify the script and create my own spectrogram function within the existing code.  I also checked out the channels on the PSL table for the PSL health page and produced a spectrogram plot of the PMC reflected, transmitted, and input powers, the PZT Voltage and the laser output power.  When I was entering these channels into the configuration script, I came across an issue with the way the python script parses this.  If there were spaces between the channel names (for example: C1:PSL-PMC_INPUT_DC, C1:PSL-PMC_RFPDDC... etc) the program would not recognize the channels.  I made some alterations to the parsing script such that all white spaces at the beginning and end of the channels were stripped and the program could find them.
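
(A sketch of that parsing fix - not the exact patch to the configuration parser:)

def parse_channel_list(line):
    # Strip whitespace around each comma-separated channel name and drop empties
    return [name.strip() for name in line.split(",") if name.strip()]

print(parse_channel_list("C1:PSL-PMC_INPUT_DC, C1:PSL-PMC_RFPDDC "))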

 The next thing that I worked on was attempting to see if the microphone channels were actually stopping the program or just taking an extraordinarily long time.  I tried running the program with shorter time samples and that seemed to work quite well!  However, I had to leave it running overnight in order to finish.  I am sure that this difference comes from the fact that the microphone channels are fast channels.  I would like to somehow make it run more quickly, and am thinking about how best to do this.

I finally got my spectrogram function to work after quite a bit of trouble.  There were issues with mismatched data and limit sets that I discovered came from times when only a few frames (one or two) were in one block.  I added some code to ignore small data blocks like those and the program works very well now!  It seems like the best way to get the right limits is to let the program automatically set the limits (they are nicely log-scaled and everything), but there are some issues that produce questionable results.  I spent a while adding a colormap option to the script so that the spectrogram colors can be adjusted!  This mostly took so long because, on Monday night, some strange things were happening with the PMC that made the program fail (zeros were being output, which caused an uproar in the logarithmic data limits).  I was incredibly worried about this and thought that I had somehow messed up the script (this happened in the middle of when I was tinkering with the cmap option), so I undid all of my work!  It was only when I realized it was still going on and Masha and Jenne were talking about the PMC issues that I figured out that it was an external issue.  I then went in and set manual limits (so that a blank spectrogram is produced instead of a crash) and redid everything.

The spectrogram is now operational and the colormap can be customized.  I need to fix the problem with the autoscaled axes (perhaps by adding a lower bound?) so that the program does not crash when there is an issue.

Yesterday, I spoke with Rana about what my next step should be.  He advised me to look at ELOGs from Steve (6678) and Koji (6675) about what they wanted to see on the site.  These gave me a good map of what is needed on the site and where I will go next.

I need to find out what is going on with the weather channels and figure out how to calibrate the microphones.  I will also be making sure there are correct units on all of the plots and figure out how to take only a short section of data for the microphone channels.  I have already modified the tab template so that it is similar to Koji's ELOG idea and will be making further changes to the layout of the summary pages themselves.  I will also be working on having the right plots up consistently on the site.

 

  1779 | Wed Jul 22 16:15:52 2009 | Chris Zimmerman | Update | General | Week 5/6 Update

The last week I've started setting up the HeNe laser on the PSL table and doing some basic measurements (beam waist, etc.) with the beam scan, shown on the graph.  Today I moved a few steering mirrors that Steve showed me from a table in the NW corner to the PSL table.  The goal setup is shown below, based on the UCSD setup.  Also, I found something that confused me in the EUCLID setup, a pair of quarter wave plates in the arm of their interferometer, so I've been working out how they organized that to get the results that they did.  I also finished calculating the shot noise levels in the basic and UCSD models, and those are also shown below (at 633nm, 4mW), where the two phase-shifted elements (green/red) are the UCSD outputs, in quadrature (the legend is difficult to read).

 

 
