Wednesday, December 9, 2009

Allframe is now finished and I have a master list. Unfortunately, there are some problems on chip2 which are not looking good. I still haven't figured out a way to work around this; it appears as though there's a steep gradient on the outside of the chip where the number of stars drops off dramatically. I had difficulty allstar'ing a few exposures, but none of them were on chip2, and that doesn't appear to have made a difference, though it will show up during the ASTs if there is one. I'm re-allstarring and will re-allframe just in case this could change something. The only anomaly I've found with chip2 is that daomaster found a larger number of stars there than on the other chips. However, the chip2 .als files contain a proper number of stars and the astrometry appears to be just fine, so I don't know what exactly in the allstar'ing could have been wrong.

I double checked the positions of stars in the allstar file of every individual exposure on chip2 and didn't see the gradient in any of them, so it must be some weird aggregate effect, which is concerning. I found that rchip2_179_nast.als appeared to have a higher density of stars, which was strange, but no gradient. I'm stumped.

I moved forward anyway and made some cuts in chi and sharp. My master list started out with 117,137 stars and after the cuts it's left with 54,137 stars. In the absence of a solution to the chip2 problems, I'm going to use this latter master list (after cuts) to move forward with calibrations.
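For the record, the cut itself is just a filter on the chi and sharp columns of the master list. A minimal sketch of the idea in Python (the filename, column indices, and cut values here are placeholders, not necessarily the ones I actually used):

    import numpy as np

    # Sketch of a chi/sharp cut on a master list.
    # Column indices and cut values are placeholders for the real format.
    CHI_COL, SHARP_COL = 5, 6

    data = np.loadtxt("masterlist.txt")      # hypothetical filename
    chi, sharp = data[:, CHI_COL], data[:, SHARP_COL]

    keep = (chi > 0.8) & (chi < 1.5) & (np.abs(sharp) < 0.5)
    print(keep.sum(), "of", len(data), "stars survive the cuts")

    np.savetxt("masterlist_cut.txt", data[keep], fmt="%12.4f")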

Before I spend too much time on that, I imagine I should be working on simulating a perfect ellipse for my AAS poster which needs to be completed before next Friday...

Wednesday, December 2, 2009

Post Allframe reflections

Back to updating.

So I now have files with super great astrometry and also proper sky levels. Using those, I DAOphot'd and allstar'd everything, throwing out bad pixels when calculating the psfs. My allstar files usually ended up having a few thousand stars apiece. After that, I prepared everything using DAOmatch and DAOmaster and then Allframed it all.

I still don't quite understand how DAOmaster finds the number of stars that it does. A typical input allstar file for a single exposure on a single chip has a few thousand stars, yet DAOmaster finds something like 25,000 stars when looking at all of the exposures on a chip (20 in all). I'm allowing daomaster to use all stars that are found on any one frame, and these numbers indicate to me that there could be double counting, as if it makes a list (the .mag file) of every star found on every frame regardless of whether it's already been found. But according to ccdpck_man.doc (the guinea pig documentation):

"Finally you specify a match-up radius. After having transformed the stars in frame n to the coordinate system of the master frame, stars will be cross-identified only if their positions agree to within that tolerance.
Furthermore, they will agree only if that is the closest otherwise unassigned star in frame n to that star in the master list, and only if that is the closest otherwise unassigned star in the
master list to that star in frame n. Multiple cross-identifications of the same star in one list to different stars in the other are not allowed.

DAOMASTER will go through all the star lists and match as many
stars as it can to stars appearing in the master list (initially equal to
the list of stars found in frame 1); many of these provisional
cross-identifications will be spurious, but legitimate cross-identifications
will outnumber them."

After allframing, I combine the two bands from a single chip and select out only those stars found in both. The result is a catalog of around 20,000 stars. The files that Ricardo Allframed for me over the summer tend to have something like 12,000 stars in the final catalog file of both bands.
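Conceptually, the band combining is just a positional match with a small tolerance between the g and r catalogs. A rough sketch of that idea (not the actual matching code; the file names, column order, and tolerance are placeholders):

    import numpy as np
    from scipy.spatial import cKDTree

    # Hypothetical per-band catalogs with columns id, x, y, mag (placeholders).
    g = np.loadtxt("chip7_g.cat")
    r = np.loadtxt("chip7_r.cat")

    tol = 2.0  # match radius in pixels (placeholder)

    tree = cKDTree(r[:, 1:3])
    dist, idx = tree.query(g[:, 1:3], distance_upper_bound=tol)
    matched = dist <= tol                    # unmatched stars come back with dist = inf

    print(matched.sum(), "stars found in both bands")
    catalog = np.hstack([g[matched], r[idx[matched]]])   # g and r rows side by side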

While this difference in counts is a little troubling, I'm willing to accept it, particularly because the astrometry was bad before. In creating the final file, stars must match up in both bands. With bad astrometry, it's likely that many of these stars got thrown out. Not all of them did because the astrometry was only about 4 or 5 pixels off, but it's somewhat reasonable that about half were too far off and didn't match up with any other stars, so they don't end up in the final catalog. Also, the only files input to DAOmatch and DAOmaster were .als outputs which all contained a sane (few thousand) number of stars, as Ricardo confirmed. So the star numbers are coming from a sane source and I'm using the same parameters as Ricardo did (save for a correction to the transformation relevant to our data set).

2 of the chips are still Allframing. Once they're finished, I'll create the catalog for them and then I'll go about making a master list. With my master list I'll look into making cuts in chi and sharp and then move on to finishing calibrations.

Monday, September 21, 2009

In honor of my trip to Chile, I'm bringing back the blog posting, hopefully on a regular basis. More to come on the observing experience later. At the moment I have several tasks at hand in my own research that I'm behind on doing:

1. Finishing my talk and paper for the KNAC thing next weekend.
So far I have a draft of the PowerPoint for the talk, though it could use some work. I'm essentially using my Ann Arbor talk as a template, but I took out Alex's contributions since he'll have his own talk at KNAC, and I added some background info. I'm (passively) looking for more figures to make the ppt more interesting.

2. Testing out the new supermongo install.
At this very moment I'm running Allframe on the g exposures of chip3. I'm still having some trouble with formatting issues of the output of DAOmatch that I feel like others don't have--it's necessary for me to delete columns before continuing on to DAOmaster. But so far after deleting some columns I'm up and running and Allframe has yet to crash AND it's not making crazy filenames! Having learned better, I'll wait to send the official report until I'm sure that it will finish without breaking.


If it does work, I'm going to go back and see if I can figure out the discrepancy in outputs. Even if I can't, I already have code to reformat the files to what they should be, so I'm going to continue with all of the chips to create a new master list to compare with the other. I'm going to reference all of the other frames to a different exposure this time because I was having problems on some of the chips with the exposure (126) that I chose last time. I think I might go with an r exposure, since I don't remember having any difficulties with those.


3. Tracing back the RA and Dec of my star list to see if the astrometry could be off somewhere which now may be causing problems with calibration.
I've barely started on this but I'm going to focus more attention on it and hopefully have something figured out by the end of tonight.


4. Plot the magnitude offsets as a function of distance from the center of Willman 1 to check for a radial dependence which also could indicate problems with calibration.
I attempted this one but ran into some snags. I think there's definitely a lot more I can do after putting some more thought into it. Again, now that I'm settled in here I'll take some time during some of the longer exposures to do this--right after I've figured out the RA, Dec situation, which I think is more important at the moment.


All of this will definitely get checked off the list this week because it's already the end of September and I need to move on already.

Thursday, July 16, 2009

Positive Loglike Values!

Hallelujah.

Today:

I got the max likelihood code working with a fix to the KPNO input file. It's running at a snail's pace, though. The amoeba results look legit at first glance so I hope I'm not wasting time in letting it run.

Allframe finished last night, but I'm just noticing that all of my sharp values are 0.0. This is going to be a big problem when I go to make chi and sharp cuts to the CMD to pick out the stars. I'm looking into this and trying to run Allframe again to see if there's not an easy fix. I wasn't entirely sure what to change to fix this problem since it's so random, but I increased the Allframe iterations and I'll just see if my results are similar to what I got before.

In the meantime my post-Allframe analysis is so close and yet so far...I'm currently at a standstill with running DAOmaster on my Allframe output. For some reason it just won't read the input file. I tried the files that Ricardo sent from his run of Wil1 data and that won't run either. So I sent some of my files back to him to see if they'll run on his machine. I'm not sure if I'll be happy or not if they run there, but it'll be better than nothing. I compared my files to his and other than the sharp thing I don't see a difference. And I don't think the bad value would contribute to a read error. I'm going to take a fresh look at it a little later when I haven't been staring at the same files all day.


Tomorrow:

With any luck my max likelihood run will be done so I'll have some good Wil1 params by the end of the day.

With a huge amount of luck it won't take me another 2 weeks to figure out what's wrong with the Allframe analysis this time. Ricardo got good sharp values on the same data so I'm not sure I want to know what the problem is...but one way or another it would be awesome to have a CMD leading into the weekend. If only the allframe results would cooperate. The rest is really pretty straightforward.

In the meantime of other things running I'm going to get started on the absolute magnitude calculation. With everything else happening I doubt this will get done by the end of tomorrow. But I'll settle for by the end of Monday.


Results are finally close to coming together. I just hope it happens sooner rather than later so I can put this talk together.

Wednesday, July 15, 2009

Allframe works!

Today:

Got allframe working! Finally.

Compared a density plot I made of my star list with one made from the original catalog. I'm inclined to say that the one I made with the original catalog matches the figure Beth sent, which she made from that same catalog a while back. But there are still significant morphological differences between my distribution from the new catalog and these other figures. This concerns me.

I also have an idea about what could be causing the ridiculousness--I think it's most likely something wrong with the input file. I don't have enough time to implement the changes at the moment, but I'm going to come back to this tonight and try a revised input file.

Tomorrow:
1. I want to close the loop on this stellar density plot. Hopefully I'll have Allframe results to work with.
2. Allframe will be done running, so I'll need to figure out how to put together my master list and then create a CMD.
3. Go back to working on the max likelihood stuff. I got caught up with Allframe so I'm still having many of the same issues as before. I'm wondering if I didn't quite implement Dave's corrections correctly. Something is wrong and it's probably going to take a careful eye to catch it. It would be great if I could have it running by Friday so I can be sure to have some solid results to put into my talk.

Monday, July 13, 2009

Today:

I double-checked the chi and sharp cuts I made:

I ended up using:
--sharp: -0.5 to 0.5
--chi: 0.8 to 1.5
This is more or less a generous cut to clean things up a bit. More could be done to get a more stringent cut. I also remade the CMD with these cuts and without restricting to stars close to Wil1. It looks pretty good. I noticed that the isochrone wasn't doing a good job of describing the data and realized that I must have used the wrong distance to Wil1 when I calculated the distance modulus. But that's fixed and all is well. I used the same code Anna and I wrote for Segue to match stars to within 1 sigma of the M92 isochrone. I also made a Hess diagram which I've been playing with, and which also looks really good.
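The distance-modulus fix is just shifting the isochrone by m - M = 5 log10(d) - 5 (d in parsecs). A quick sanity check with a distance in the right ballpark for Wil1 (the number below is illustrative, not necessarily the exact value I used):

    import numpy as np

    # Shift an absolute-magnitude isochrone to apparent magnitudes.
    # d_pc is illustrative (Wil1 is at roughly 38 kpc), not the exact value used.
    d_pc = 38000.0
    dist_mod = 5.0 * np.log10(d_pc) - 5.0    # comes out near 17.9 mag

    iso_abs_g = np.array([3.0, 4.5, 6.0])    # toy absolute g magnitudes
    iso_app_g = iso_abs_g + dist_mod         # apparent magnitudes to overplot on the CMD
    print("distance modulus =", round(dist_mod, 2))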


I also set out to tweak the contour plot of stellar density:

Like I just mentioned, I did a better job of selecting the stars for the CMD and used only the stars that matched the chi/sharp and isochrone cuts but didn't do any ra/dec cuts so that I would have the whole field to work with. I messed with Beth's annulus idea only to realize that I was already scaling the smoothed density plot. But I wasn't ignoring the central region (where Wil1 is). So I made a mask that cut out the central square before I scaled things. It was a crude way of doing it and could be changed, but I think it'll do for now. My contours definitely turned out looking a lot better so I think I'm going to tweak the smoothing and binning tonight to see if I can't get it looking really good by tomorrow.
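The mask itself is nothing fancy, just ignoring a central box before computing the scaling. A sketch of the idea with numpy/scipy, using toy positions and placeholder bin, box, and smoothing sizes:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Toy star positions; in practice these are the coordinates of the cut catalog.
    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 1, 5000), rng.uniform(0, 1, 5000)

    # Binned, smoothed stellar density map.
    dens, xedges, yedges = np.histogram2d(x, y, bins=100)
    smooth = gaussian_filter(dens, sigma=2.0)          # smoothing width is a placeholder

    # Ignore a central square (where Wil1 sits) when estimating the background,
    # so the object itself doesn't bias the scaling.
    ny, nx = smooth.shape
    bkg = np.ones_like(smooth, dtype=bool)
    bkg[ny//2 - 10:ny//2 + 10, nx//2 - 10:nx//2 + 10] = False

    scaled = (smooth - smooth[bkg].mean()) / smooth[bkg].std()   # sigma above background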


So, in all, I now have newly updated figures from KPNO data including: hess diagram, CMD with isochrone fit, chi and sharp cuts, stellar density plot.


Tomorrow:

Max likelihood for KPNO
-still dealing with negative loglikes?
-got an email from Dave--he thinks I might have to get fancy like he did with LBT. He also pointed out some other stuff in his code that might be useful to change. I'm gonna look into this more tomorrow and probably talk to him about it.

Absolute Magnitude
Beth's been thinking a lot about how to calculate this and gave me a pretty good way of going about it. I wanted to finish what I was already working on, but this is on the plate for tomorrow.
-LF
-plots and normalization
-final calculations
-mag limit from 2006--90% completion

Friday, July 10, 2009

And then there were spatial density plots.

Tuesday, July 7, 2009

Super fast update because I have to go pick up the chilluns:


Allframe still doesn't work.

Got the max likelihood code working (thanks to Dave) on the SDSS data.

Almost ready to run the max likelihood code on the KPNO data--want to double check my input first.

In the absence of Allframe results, the next step with my stack is artificial star testing, right Beth? Will look at addstar tomorrow.

Also tomorrow: look more closely at max likelihood results, run the code on KPNO stars, commence the testing of artificial stars.

Wednesday, July 1, 2009

Allframe

I've been learning how to use Allframe through a lot of reading and a lot of doing.

Allframe:

Allframe needs 4 types of input files:
1) .als files for each individual exposure output by Allstar
2) .psf files for each individual exposure output by DAOphot
3) a single .mag file that DAOMaster outputs
4) a single .mch that DAOMaster outputs


Generating these:
1) .als
I used the batch_als.py code to iterate DAOphot and Allstar twice for each individual exposure. I'll use the second (better) .als files to input to Allframe. (These are suffixed with ".als" by batch_als, not to be confused with the first pass allstar outputs which are suffixed with ".als1". )

2) .psf
Again, I got this as an output from the batch_als.py run. Again, I'll use the second-pass files, which are denoted by ".psf2".

Interim Step: DAOMatch (preliminary .mch file)

These Stetson procedures are finding/matching up stars for me. I used the Allframe "cookbook" for this part, but got a lot of parameter advice from Ricardo and also referred to the "guinea pig" publication that Stetson sends out as well as the DAOphot manual.

Right now I'm testing with only the 20 exposures of Chip7. Ultimately, I'll have a .mch file for each chip which contains transformations for all 20 exposures on this chip. This .mch is NOT the file to input to Allframe, but rather a preliminary guess to be input to DAOMaster.

I'm using the following input parameters, which are subject to change; I got them either from the cookbook or from Ricardo, with Ricardo's advice superseding the cookbook where they differed.

DAOmatch inputs:
--Each exposure on a single chip, both bands
--output: chipN.mch

3/4) DAOMaster
The input for DAOMaster is the .mch file I generate for each chip using DAOMatch. Unfortunately, that output contains two extraneous columns (the last two) that DAOMaster doesn't like. So they have to be erased before running DAOMaster. After they're gone, the single chipN.mch file gets input into DAOMaster.
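Deleting those two columns is quick to script; a sketch of what I mean (it assumes the .mch lines are purely whitespace-separated, and the filenames are placeholders):

    # Strip the last two (extraneous) columns from a DAOmatch .mch file
    # so DAOmaster will accept it. Filenames are placeholders.
    with open("chip7.mch") as f_in, open("chip7_trim.mch", "w") as f_out:
        for line in f_in:
            parts = line.split()
            if len(parts) > 2:
                line = " ".join(parts[:-2]) + "\n"
            f_out.write(line)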

--Min, Min fraction, Enough = 1, 0.05, 1
--Maximum Sigma = 10
--Degrees of Freedom = 20 (R: 6)
--Critical Match-up Radius = 10 (decreasing by integers to 1, re-running 1 a few times til # of stars is constant)
--Assign new star IDs: y
--mean mags and scatter: y (chipN.mag)
--new transformation file: y (chipN.mch)
--all other files: n

The file that contains the mean stellar magnitudes and scatters is suffixed by .mag. That's the star list input to Allframe. The file of updated transformations between chips is suffixed by .mch and that's the final input for Allframe.


Current Status:

Beth and I took a look at Ricardo's allframe.opt file and decided to make some changes to our allframe.opt and allstar.opt files. Because of this, I have to re-run batch_als.py on all files, which is a bit time-consuming.

I'm changing the options so that inner sky=2 and outer sky=20. I changed my allstar.opt to agree with Ricardo on the profile error (=0.5 as opposed to my 0.0), since I have to re-run Allstar anyway. I also used a geometric coefficient of 20 (different from Ricardo's 6) in allframe.opt because our data was taken over the course of a few nights. Maximum iterations was lowered from the default 200 to Ricardo and the cookbook's suggestion of 50 to save time. All other parameters agree with Ricardo's allframe.opt file and nothing else was changed in the allstar.opt file from before.

I'm running the batch_als.py for the 20 exposures on Chip 7 first so that it's sure to be ready for Allframing overnight. Once batch_als.py finishes, I have to run DAOmatch and DAOMaster to get all 4 files for input into Allframe and then let her rip. As soon as I start everything running for Allframe, I'm going to run batch_als.py on all exposures of all chips. Once all of that is running, I'm going to return to my code that should automate DAOmatch since I never got that working properly.

Once all of the Allframe input files are ready, which should be by the time I get home tonight, I'll start them all Allframing.

Tomorrow:

Check out my awesome Allframe results.
Figure out how to throw out all the junk Allframe gave me.

Friday, June 26, 2009

Calibration Code:

The problem with bootstrapping was just a misunderstanding of what the code was doing, so that's working fine now. So from that code I calculated clipped mean and median variances for each band in color and zero-point. Each median was close to its respective mean which is a good sign. Mean/median values are:

g color uncert. 0.0130472/0.0130471
g zero-point unc. 0.00871615/0.00871514

r color uncert. 0.0238134/0.0238136
r zero-point unc. 0.0199431/0.0199430

More robust mean values of the zero-point offset and color term for each band are:
g zero-point: 6.84060
g color term: 0.0709389

r zero-point: 7.15033
r color term: 0.0352746
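As a reminder of what the bootstrap is doing: resample the calibration stars with replacement, refit the zero-point/color line each time, and look at the spread of the fitted coefficients. A bare-bones sketch with toy data and a plain least-squares fit standing in for the real fitting routine:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy calibration stars: g-r color and zero-point offset (true - instrumental).
    color = rng.uniform(0.2, 1.2, 300)
    zp_off = 6.84 + 0.07 * color + rng.normal(0, 0.05, 300)

    n_boot = 1000
    coeffs = np.empty((n_boot, 2))
    for i in range(n_boot):
        # Resample the stars with replacement and refit the line.
        idx = rng.integers(0, len(color), len(color))
        coeffs[i] = np.polyfit(color[idx], zp_off[idx], 1)   # [color term, zero point]

    print("color-term uncertainty ~", coeffs[:, 0].std())
    print("zero-point uncertainty ~", coeffs[:, 1].std())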


The g band is looking good--uncertainties are relatively small and the color term and the mean zero-point offset are about what they were before. I'm more concerned about the r-band. The zero-point is similar to what it was before with a relatively small uncertainty. What concerns me is the color term--the uncertainty for which is comparable to the measured value. This means we can't really trust that value for the color term. I'm not terribly sure about how to handle this.

My next step, as per Ricardo's paper, is to check these values for each exposure to see if the variances between exposures are small compared to the uncertainties described here for the stack. When Ricardo did this he compared between chips, but he also had 36 chips. We have only 8 chips and I expect that the differences between exposures are going to be where any problems lie. We have robust weight maps and distortion maps for all chips and it was the background levels between exposures that concerned us about the stacks to begin with. So I'm more worried about discrepancies between exposures than between chips.

Thursday, June 25, 2009

Today


1. Used the python code to iterate over DAOphot and Allstar. It doesn't look like it got us any deeper overall, but it certainly found more stars and so the CMD filled in pretty nicely. We're now at about 25.5 mags on a CMD of stars within 2 arcmin of Wil1, which is the same limit used in the 2006 CMD.

2. My registration and abstract for the Ann Arbor gig are finally sent! Got an email from one of the organizers about hotel reservations. Tonight I'll buy the plane tickets so that's out of the way.

3. Worked on the calibration code. The zero-point/color term code based on what Beth sent is completed. I'm working on getting the bootstrap working. I think the bootstrap itself is more or less correct, but I'm having trouble getting it to play nicely with the linear fitting.

4. Alex and I succeeded in our task to get Beth and Marla to the Physics Lounge (not an easy thing to do) for her surprise yay-you-got-money-here's-some-champagne party which Suzanne organized, of course.

5. I also read Ricardo's CB and UMaII draft to get a sense of how he's been doing things. He gave a lot of insight into the calibration methods that I'm currently working on, which I'll use as guidance as I finish up working on that code. He also ended up using AllFrame because it got deeper results. Because we're still not getting as deep as we want, I'm going to start reading up on AllFrame and figuring out how much of a hassle it'll be to try that method. I'm hoping to give it a trial run next week to see if I can get a deeper CMD. It still concerns me a bit that we're not even getting to 26th magnitude with swarp, as Beth's calculations predicted. That seems to indicate that something else is still not working properly in my current analysis.

Tomorrow:

1. Finish calibration code.
2. Look into AllFrame.
3. Get moving on the max likelihood stuff.

Wednesday, June 24, 2009

Today:

Words weren't really flowing last night so I finished the abstract this morning and emailed it to Beth for comments. Once the abstract was proofread I emailed in my application for the workshop.

Then I fixed the bug Beth found in the chi and sharp code--an indexing problem. That greatly reduced the file size and corrected the plots. So I sent those to Beth for feedback.

Next I took a look at the Python code Beth recommended for iterating DAOphot and Allstar and got that running. The thought is that after we've subtracted the brightest stars, running DAOphot and Allstar again will nab even fainter stars, hopefully giving us a fainter CMD overall. Had some trouble getting it running, though. It got stuck when choosing psf stars. I emailed Beth to see if she had ever had that problem with the code.

Also had a group meeting today. Everyone seems to be making progress. I've made a good amount of progress since last week even though last week I thought I would be stuck by now. Marla Geha is also here collaborating with Beth so she sat in on the meeting.

Tomorrow:
More along the same lines--fixing the python code, hopefully improving CMDs, working on calibration code.

Monday, June 22, 2009

Things I figured out/did today:

Today has been a hodge podge of trying things to see if there is any indication of where our methods could be going wrong or could be improved.

Took a look at many different versions of the chi versus sharp plots for the weighted stack, single exposure, and median stack in each band. I overplotted green dots that represented those stars that matched between the bands and noticed that these were spread throughout, which was to be expected. I did note that stars in the weighted stack weren't matching to the most extreme stars detected in either band, while the faintest stars in the single exposure and median stacks did match. This is keeping in mind that the latter two don't go as deep to begin with.

I also grumbled about the bad flat fields for a while. But Beth reminded me that they have already been used (the images were divided by them in the first round of image reduction) so we can't do anything about them. In fact, using them to stack the images now should account for any badness. This is reflected by the fact that our stacked images look fine. And besides, they reflect the chip they were taken from, so it's the camera that looks bad, which is important to take into account. It's just unfortunate that Wil1 is centered on the worst chip of the camera.

I thought briefly that the CMD I was creating from only stars close to Willman 1 was significantly shallower (up to 1 mag) than the CMD with all stars in the field. Beth reminded me that it could be deceiving because of the huge difference in densities, what with so few stars from the field residing within 5 arcmin of the center of Willman 1. On her suggestion, I followed up by remaking the CMD of the field, but plotting only a random sample of those stars equal in size to that of the close-up CMD. It would seem that what I was seeing was a result of the high number of stars because this last plot showed about the same depth as the close-up one. Oh well...seems I'll try anything to get to the ever elusive 26th magnitude mark.

I made a 10 exposure r stack to check and see if that could get us any deeper or a better image. I used the same technique as before with weight maps, etc. I also checked the swarped images which, thanks to the weight mapping, look pretty good. After DAOphot'ing, I discovered that the standard deviation of the image is in fact smaller for the 10 exposures than for the 7 exposure stack (9.98 as opposed to 10.908--I'd say this is a big enough difference to warrant some interest). DAOphot also found 10,000 more stars from the extra three exposures (now we're up to 46,000) so I'm hoping that they're not all crap and can contribute to the fainter mags. I checked out the chi and sharp plots and I'd say they add a few tenths of a mag to the 7 exposure weighted image. Considering all these things, I'm going to say that the 10 exposure stack is the way to go. Particularly because bad image quality is what got those three exposures thrown out to begin with and that's no longer an issue.



Still to do/consider (by no means in order of importance):

1. On the calibration code I still need to: bootstrap calibration, find mean, median, variance
2. Use python code to iterate allstar--does this get us any deeper?
3. Should we go back and use the weight maps in sextractor? Could it make a big difference in detecting faint sources from the beginning? I got this idea from reading Dave's Hercules paper this morning. It's a rather handy step-by-step guide of pretty much everything I'm doing.
4. Submit registration/abstract for workshop ASAP.
5. Talk to Dave about images.
6. Fix bug in sharp/chi comparison code.

Thursday, June 18, 2009

Today:

Today I made my first weight maps. To do so I first normalized the pixel values of the flat field by dividing all the values by the maximum value. Then I combined the flat field and bad pixel map by multiplying the values of each pixel in these two images. So the weight maps look much like the flat fields, but don't have the bad pixels and cosmic rays.
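In code it's essentially a pixel-by-pixel multiply. A sketch (astropy-style FITS I/O; the filenames are placeholders, and it assumes simple single-extension images with the bad-pixel map already 1 = good, 0 = bad):

    import numpy as np
    from astropy.io import fits

    # Build a weight map from a flat field and a bad-pixel map. Filenames are placeholders.
    flat = fits.getdata("flat_g.fits").astype(float)
    bpm = fits.getdata("bpm_g.fits").astype(float)    # 1 = good pixel, 0 = bad/cosmic ray

    flat_norm = flat / flat.max()     # the divide-by-max normalization (later dropped, see below)
    weight = flat_norm * bpm          # bad pixels and cosmic rays end up with weight 0

    fits.writeto("weight_g.fits", weight, overwrite=True)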

Then I made MEFs of these weight maps for each exposure in each band.

Finally, I swarped the previously MEF'd .fits images with their respective weight maps. I combined them using the "WEIGHTED" method in swarp, which stacks the images using a weighted average. My weight map type was "WEIGHT_MAP".

The r stack looked pretty good, although it didn't get rid of all the streaks that we had attributed to bad pixels. The g stack pretty much just looked terrible. So I manually set the suspect pixels to zero in an effort to maybe pick up some bad pixels that had been lost.

Because the r stack was looking alright I went ahead and DAOphot'd it to check out how deep it was getting. The verdict: 19th magnitude! This is about a magnitude deeper than before so we should be down to 26th magnitude with calibration.

After emailing with Dave, we decided that the divide-by-the-max-value method of normalization wasn't a good one. In fact, because the average value of our flat was about 1, we're going to assume they're already normalized.

Tomorrow:

1. Finish re-making all the weight maps using this assumption
2. re-MEF them
3. re-swarp everything
4. Check out the stacks--if good/better then DAOphot
5. if DAOphot'd then do calibration and CMD
6. if good CMD then revel in it for a while.

Wednesday, June 17, 2009

i <3 lacosmic

Today:

Several important things got done today.

1. We chose the new .opt files to finish the rest of the analysis. There wasn't much difference between the images made from the new vs. old .opts but we figured if anything the new ones are more thorough.

2. Dave suggested that using a maximum gain and read noise shouldn't matter if all values are pretty close together. I think I'll use a median just to be sure. But that means the images we're looking at to make these decisions are more or less good ones.

3. Beth and I took a look at the averaged stack I had made before and hadn't liked because it looked terrible--LOTS of rows of bad pixels. We're thinking it can be salvaged with a bad pixel map to allow us to ignore the gross stuff.

4. I successfully ran lacosmic to get rid of bad pixels and cosmic rays and it worked super well as far as I can tell. The masks it output are very pretty. One problem: lacosmic sets the bad stuff equal to 1. According to Dave, swarp needs the bad pixels and cosmic rays to be equal to 0 and everything else equal to 1. (Opposite what lacosmic did.) So I'm writing a few lines of code to reverse this (a sketch is below, after this list).

5. I made some more progress on the calibration code with Beth's help. I got her code up and running, so I now have a more robust calculation for the linear fit in zp-color space. The slope is still small so everything still looks good. I won't do much more with this yet since I'm anticipating better data to calibrate by Friday.
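The mask flip mentioned in item 4 really is only a few lines. A sketch (the filename is a placeholder, and it assumes the lacosmic mask is strictly 0/1):

    from astropy.io import fits

    # Flip a lacosmic mask (1 = bad) into the convention swarp wants (0 = bad, 1 = good).
    mask, hdr = fits.getdata("exp126_g_mask.fits", header=True)
    fits.writeto("exp126_g_mask_flipped.fits", 1 - mask, hdr, overwrite=True)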

Tonight
1. lacosmic the r-band images.
2. Make MEFs of the masks/cleaned images?

Tomorrow.

1. Explore weighted maps.
2. Swarp stacks average-style with the help of the new maps.
3. Get ready to DAOphot.


By Friday:

1. DAOphot averaged stacks.
2. Admire the beautiful CMD that goes to 26.5 mag.


Beth's thesis suggestion:

Star formation history of Wil1.

Tuesday, June 16, 2009

Today:

I spent some time talking to Beth comparing what I have to what she got a few years ago. I ended up taking out my chi and sharp cuts--these can always be added in later. According to Beth, everything seems comparable between my single exposure analyses and hers which is good news.

I also made some improvements on the plots of position versus offset for both bands for both the new .opts and old .opts. Eyeballing them, there's no real dependence on position, but I'll wait for some feedback from Beth on this and perhaps later (tomorrow?) fit a line to this data to see what kind of a relationship there really is.

Continuing on the quest for calibration, I've written some code that uses linfit to fit a line to the zero-point versus color plots for both bands. The color terms are about 0.05 and 0.04 for the g and r bands, respectively, so this is also good news. I'm now going to apply some code that Beth sent to fit this same data a little better. She used fitexy and was able to incorporate the measurement uncertainties. She also iterated the fit, throwing out data that was more than 3 sigma from the fit. I know for a fact that I have at least one outlier for each band so this will be important. So now that I've used linfit to get a general idea that things are sane, I'll use the more robust version to get a better value for the calibration.

In essence, I've plotted true_mag - inst_mag vs. inst_gmag - inst_rmag. So I've fitted a line that will follow the format true_mag = inst_mag + zero point + slope*(inst_gmag - inst_rmag). I know all of these things except the "slope" which is what all this fitting should solve for. All I know is that it should be small.
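A rough Python analogue of that iterated fit, just to make the procedure concrete (it skips the measurement-error weighting that fitexy handles and only shows the 3-sigma clipping; the numbers are toy data):

    import numpy as np

    def clipped_linfit(x, y, nsigma=3.0, max_iter=10):
        # Least-squares line fit, iteratively throwing out >nsigma outliers.
        keep = np.ones_like(x, dtype=bool)
        for _ in range(max_iter):
            slope, intercept = np.polyfit(x[keep], y[keep], 1)
            resid = y - (slope * x + intercept)
            new_keep = np.abs(resid) < nsigma * resid[keep].std()
            if np.array_equal(new_keep, keep):
                break
            keep = new_keep
        return slope, intercept, keep

    # x = instrumental g-r color, y = true_mag - inst_mag for one band (toy numbers).
    rng = np.random.default_rng(2)
    x = rng.uniform(0.2, 1.2, 200)
    y = 7.15 + 0.035 * x + rng.normal(0, 0.03, 200)
    y[:3] += 1.0                          # a few outliers for the clipping to remove
    slope, zp, keep = clipped_linfit(x, y)
    print("zero point =", round(zp, 3), " color term =", round(slope, 4), " kept", keep.sum())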


Tomorrow:

1a. Meet with Beth at 9 to talk about subtracted images, etc.

1b. Mention the issue of old .opts vs. new .opts. There's not a big difference between the two in the stack images thus far. Can I make a decision to ditch one in favor of the other? That'll cut down on time spent analyzing things. Because there's no large discrepancy between the two, I'm leaning toward the new .opt files since we know that in some ways at least they're better. And they're obviously not causing any major problems. Are there any other results we're waiting on to make this decision?

2. Talk with Dave at 10.

3. DAOphot average stack. Beth expects that we can get another half a magnitude from this. She's currently digging around for the flat-fields and bad pixel masks.

4. More calibration--get Beth's code working to include measurement uncertainties.




Monday, June 15, 2009

Today:

Beth spotted a bug in my calibration code so suddenly we're down to 25th magnitude. Now I'm comparing the calibrated CMDs made from the old .opt files and the new ones. I'm comparing between CMDs of the stack and CMDs of a single exposure on Chip 7. If only I could figure out how to attach plots to the blog, I could show them here, though I think maybe they're too big.

Anyhow, I'm noticing that the stacked images seem to get to the same depth using both new and old .opt files. The colors are slightly different between them, but no other noticeable difference. For the single exposures, a few stars show up in the old .opt CMD that are at fainter mags than in the new .opt CMD. I'm thinking the newer one probably cuts these out at some point because it's more robust and the general cutoff seems to be at ~23.5 or 23.75 for both anyway. I already know the new .opt files turn up fewer stars and the stacks get to the same depth, so I think everything is legit here.

However we're still working on getting deeper because we expected calibrated CMDs down to 26 or 26.5. While more brainstorming on this continues, I'm going to work on getting my calibration code up to snuff. And then I'm going to the sound wave panel at 4 this afternoon.


Calibrating:

I checked out the offsets as a function of position in both bands and for both the new and old .opt files. They all seem to be independent of position, which is a good thing. There does appear to be one star or so in both bands and both the new and old .opts that is a lot higher than the rest, but I don't think this is a big deal. If anything, I can cut it out later.

I then moved on to do the linear fit to the zero-pt versus color data. I'm a little confused, though--should I have two fits, one for each band? Since both have a different zero-pt, this must be the case. I guess I'm just a little confused as to what the fit is going to be used for. I understand that when I calculated the zero-point offset I had to add that back into the instrumental magnitude. But color involves both magnitudes and they have different color terms. Will I be adding the color term for each band back into the magnitude in that band to calibrate the color?



Tomorrow:

More calibrating.

Friday, June 12, 2009

Okay, here's the word.

I'm having issues with the single exposure CMD. Out of ~3000 stars in each catalog, I'm only getting ~30 that match to within .5 arcsec. I'm getting 156 to within 1 arcsec...but shouldn't it be that there aren't too many more matched when I bump this to 1 arcsec because everything's already been matched? The comforting news is that these are matching to those python found...but that's only slightly comforting.

Because...when I go to match to the SDSS catalog not a single star matches. I've thought about this for a while now and can't figure it out. Even if the 30 stars were the only ones that matched between the bands, my SDSS catalog is centered on Wil1 which is on chip 7 so I'm having a hard time believing it's not finding any matches.

I took a look at the single exposures in each band and compared the star-subtracted image from the not-subtracted image. There are some donuts associated with the bigger stuff but I would say the majority are subtracting really well. This is true for both bands. But taking a look at the stacks, there seems to be a much bigger problem with donuts in both bands. I'm wondering if the fitting radius shouldn't be increased for stacks?

I made a CMD of the stacks daophot'd with the old .opt parameters. There's a slight hint of getting deeper--one star fainter than 24 mag. But beside that they look about the same.

Like I mentioned before, I couldn't make a CMD of single exposures because I'm not finding any stars to calibrate.

Thursday, June 11, 2009

So after reading Beth's comment on yesterday's post I realized that the cut I was using DAOphot to do was round and sharp when I wanted chi and sharp. And I also didn't need DAOphot to do this because the allstar outputs contain chi and sharp.

I went back to see what limits I should put on chi. I used 'find' to calculate flux and plotted chi and sharp against flux like before. I then selected some limits. I implemented them to the un-calibrated matching code, matching only ra's and dec's that already made the cut.

The numbers:

I started with a g catalog of 27334 stars and an r catalog of 33679 (from the allstar outputs). After the magnitude cuts I had 22642 and 26525 stars, respectively.

All stars:
Matching the g and r band catalogs I was left with 16857 stars in the CMD, all of which agreed with stars matched by Python. After matching this catalog to SDSS I was left with 1359 stars for the final calibrated CMD.

Within 5 arcmin of Wil1's center:
After matching the g and r stars I was left with a catalog of 1356 stars, all of which agreed with stars that python matched. Then after matching this KPNO catalog with one of 25000 SDSS stars I was left with 165 stars for the calibrated CMD.


In either CMD I'm still not getting even as deep as the 2006 CMD. My main line of thinking at the moment has to do with the differences in read noise and gain that were input to DAOphot now and back then. My read noise is higher and gain is smaller than those used for the 2006 paper. While I stacked the exposures so I should have a deeper image, if I'm not mistaken a higher read noise will result in fewer sources detected and a low gain will just compound this effect. I'm not sure how I could quantify the effect this might have on the number of sources I'm finding, but I'm considering running DAOphot with the 2006 numbers to compare the resulting CMDs. However, I'm still confused about where the 2006 values came from and so I'm more or less convinced that mine are at least more correct. Though if I end up getting a CMD with better results I could be persuaded to change the values if Beth can explain where they came from...



Tomorrow:
In the absence of other ideas to make the CMD deeper, I think I'll forge ahead on the CMD calibration code. I still need to:

1. Weight the points in the plot by their measurement uncertainties.
2. Plot gtrue-ginst vs. gtrue-rtrue and do a linear fit on the result, solving for the zero-point and the color term in: gtrue-ginst=zp+ct(g-r).
3. Calculate the measurement error on each component of the zp. (Get the SDSS error from that catalog.)
4. Bootstrap the data 1000 times, saving each result in a structure. Then compare the variance in colors.
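For steps 1 and 2, numpy's polyfit can take per-point weights directly (w = 1/sigma for Gaussian errors). A sketch with toy numbers:

    import numpy as np

    # Toy calibration points: x = g-r color, y = gtrue - ginst, sigma = per-star uncertainty.
    rng = np.random.default_rng(3)
    x = rng.uniform(0.2, 1.2, 150)
    sigma = rng.uniform(0.02, 0.10, 150)
    y = 6.84 + 0.07 * x + rng.normal(0.0, sigma)

    # polyfit weights each residual by w; for Gaussian uncertainties use w = 1/sigma.
    ct, zp = np.polyfit(x, y, 1, w=1.0 / sigma)
    print("zero point =", round(zp, 3), " color term =", round(ct, 4))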

I doubt that all of this will get done tomorrow, but perhaps by the end of Monday.

Comparison with 2006 DAOphot parameter files

I compared the input parameter files to DAOphot that I used to the ones used for the 2006 paper.

In daophot.opt:
Most of it was similar. But the readout noise and gain were pretty significantly different. For the paper, noise=1.875 and gain=3.2 while I have noise=2.71 and gain=2.07. According to the website where I got my info, the 2006 gain is way out of whack (http://www.noao.edu/ctio/mosaic/) unless I'm looking at the wrong info. According to that website I think my noise also makes sense but I might be wrong about the conversion. I know Beth has mentally sanity-checked it several times before.

I also have a variable psf of 2 compared to variable psf of 0. This means I'm considering a quadratic fit while the 2006 paper used a gaussian fit. I think if anything, mine will be better.

I have a larger psf radius--20 as compared to 11 used in 2006. These are certainly different but mine is set to be larger than the largest star. So I'm thinking they either should be different because my data has been stacked and/or mine will be better because 11 might just be too small.

In 2006, the default percent errors were used, while I set those to 0. I consulted Dave about this who also uses 0, though no one seems to know exactly how these work.

In allstar.opt

In 2006 Beth used the default inner and outer sky of 3 and 25, respectively. But I used 30 and 50. According to talking with Beth in the past, mine are legit. But it does concern me that these cover completely different ranges.

Again, the 2006 paper used default values for error while I have 0 for both.

In photo.opt
The paper used mostly default values in this file--apertures increasing in increments of 1. It used inner and outer sky to be 15 and 25. However, I use apertures that increase in increments of 2 and an inner and outer sky of 2 and 30. I'm thinking this is okay--I'm just being more liberal in my limits of sky and aperture. Though I don't know much about this file or where it came from, so I haven't spent too long thinking about proper values--before I started running DAOphot this summer Beth and I just glanced at this and assumed everything was about right. I am confused about why the inner and outer sky limits are not the same as those used in allstar.opt. In 2006 they weren't consistent, either.

Wednesday, June 10, 2009

Today:


Meeting with Beth:

This morning I talked with Beth about the CMD issue for a while. She found something wrong with my code (I knew it had to be something!) that was throwing off all the CMDs. So we fixed that and now the python matching code agrees with the idl matching code which is comforting. All in all, the two matched 19850 stars. There were still a few features on the CMD that looked fishy, but it looked much more like a CMD--including the thin disk and what could be a turn off. But it's still not as deep as we expected. In order to really test this, though, the magnitudes have to be calibrated so we know exactly how deep the CMD actually is.


Sharp and Round cuts in DAOphot

Before doing the calibrating stuff I spent some time fiddling with the DAOphot parameters. I took a look at the objects in the previous allstar files to see if I could improve upon the sharp and round limits that I applied in DAOphot. I plotted flux vs. round and sharp to narrow the limits and changed the daophot.opt files for each of the g and r bands. I ended up doing two cuts--one stricter than the other.

Boo on not being able to insert a table. But anyway, the more lenient one was:
...............g10 ....................r7
sharp..... 0.35 to 1. ..........0.3 to 1.
round..... -1 to 0.25........... -1 to 0.2

I tried to lower the upper limit on sharp but DAOphot wouldn't let me for some reason...

Originally, there were 33679 stars in the r band catalog and 27334 in the g band catalog. This first cut lowered the number in each catalog to 17335 and 17583, respectively. It then found 9299 matched stars.

The second, more strict cut used:

......................g10................ r7
sharp .......0.35 to 1. ...........0.3 to 1.
round ........0. to 0.25 .........0. to 0.2

so it just cut off the bottom end of the round measurements. This cut reduced the catalogs to 3138 and 2645 in the r and g bands, respectively. 540 stars were matched and put in the CMD.

The weird features--like the third, reddest peak--remain in both. But the stricter cut makes the CMD shallower so it might be best to use the first, more lenient cut.


Calibration and zero-point offset plot

I matched the CMD stars (from the first sharp/round cut) to SDSS stars and plotted gtrue-ginst vs. ginst (where gtrue is SDSS and ginst is KPNO) to get a sense of what the zero-point offset is. The offset doesn't seem to be dependent on instrumental magnitude (this is good), but it's not zeroed (this is also good!). In fact it's centered around a gtrue-ginst of 7. To get a more quantitative idea of where we are I ignored SDSS stars fainter than 22 and took the median of the offset (gtrue-ginst). Offsets for the g and r bands are 6.7059405 and 7.2057229, respectively. I added these offsets to the instrumental magnitudes and made a new CMD. I'm still concerned because it only goes to about 23rd magnitude. But it looks much cleaner and more CMD-like. (CMDcalibration.ps)
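The offset step itself boils down to a median of (gtrue - ginst) for the brighter matched stars; a tiny sketch (the arrays here are stand-ins for the matched SDSS/KPNO catalogs):

    import numpy as np

    # g_sdss and g_inst stand in for the matched SDSS and instrumental g magnitudes.
    rng = np.random.default_rng(4)
    g_sdss = rng.uniform(18.0, 24.0, 1000)
    g_inst = g_sdss - 6.7 + rng.normal(0.0, 0.05, 1000)

    bright = g_sdss < 22.0                    # ignore SDSS stars fainter than 22
    zp_g = np.median((g_sdss - g_inst)[bright])
    g_cal = g_inst + zp_g                     # calibrated magnitudes for the new CMD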



Tomorrow:
I'm concerned with the fact that only 1300 or so stars are getting matched between SDSS's 25,000 stars and the 9,300 which made the sharp and round cuts on my data. To double check this I'm going to try matching before doing the sharp and round cuts to see if the number of stars that match SDSS changes. If it does, I'll investigate this further. I expected more to match.

I'm meeting with Suzanne and co. in the morning for a quick IDL intro. Then it's back to the drawing board on how to get this CMD deeper.

Tuesday, June 9, 2009

This morning:

I got the match.py program working fine. I then used its output to make a new CMD. It matched 20,207 stars--much more than what I was getting before using spherematch, but also way more than I was expecting it to find.

This afternoon:

I went back to my idl matching code to see if I could figure anything out. I wasn't able to discover a reason why it would be matching so few stars. I did notice that many of the really crazy colors were associated with at least one band having an especially low magnitude. So I did a magnitude cut at 15 in both bands to see if this would clean up the CMD a bit. It helped with the color distribution but not much else.

I also spent a bit of time working on the CMD calibration stuff, but I'm getting ready to leave because it's super stormy out.

Tomorrow:

Meeting at 9 with Beth to go over everything in detail to find out if we can discover anything useful about the crazy CMDs.

If I'm not busy working on whatever we come up with to fix the CMD I'll continue to work on the calibration code, particularly the offset plot since that will hopefully help to find anything that could be wrong in the CMD.

Monday, June 8, 2009

Today:

So I've been getting ready to run match.py which Dave sent me on Friday. It's a different way of doing the matching. That code requires that the input catalogs of star coordinates are sorted by RA. So, I wrote some code that sorts the RA's and spits out files which contain the RA and Dec coords in order.

I then tried to run the match.py code. I think my problem is that when I write the files they don't come out looking like 11 different columns. It prints out as only 5 columns across. I think this is confusing Python when I try to read in the file. I used two different techniques for writing the files...forprint and printf and both turned out files that looked the same. Not totally sure how to fix this, but I think I'll do some googling tonight and have some ideas before tomorrow morning.
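If the IDL writing keeps misbehaving, the sort-and-write step could also just be done on the Python side, where numpy writes one star per line by default. A sketch (the filenames and RA column index are placeholders):

    import numpy as np

    # Sort a whitespace-separated catalog by RA and rewrite it one star per line,
    # which is the layout match.py expects. Column index and filenames are placeholders.
    RA_COL = 0
    cat = np.loadtxt("chip7_g_catalog.txt")
    cat_sorted = cat[np.argsort(cat[:, RA_COL])]
    np.savetxt("chip7_g_sorted.txt", cat_sorted, fmt="%14.6f")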

I also started working on the CMD calibration code. I'm still confused about how to calculate the saturation limit. I have so far worked on the part of the code that will match SDSS stars to Willman 1 stars. However, I'll need a new SDSS catalog that takes into account the saturation limit on the KPNO data. My next step is to make the zero-point offset plot (g_true-g_inst vs. g_inst). The other big thing I haven't figured out yet is how to weight the points by their measurement uncertainties.

Finally, I sent Dave the image files so hopefully they got there alright and he can give some input in the near future about problems I've overlooked. Also, still brainstorming on things that could be wrong with the data, I read online (http://msowww.anu.edu.au/~jerjen/daophot.html) that the more complicated the analytical model DAOphot uses for its psfs, the more stars are needed to define it. I was picking 75 stars. Is this enough?


Tomorrow:
1. Figure out file-writing issue and run match.py.
2. Make some more headway on CMD calibration--saturation limits and weighting points.
3. Maybe talk to Dave about those images if he's figured anything out yet.

Friday, June 5, 2009

Brief Update

Today:

Talked with Dave this morning about diagnosing the CMD problems I'm having. He taught me some tricks to use with ds9--WCS matching and some region files stuff. He also sent me some python code to match stars between the two bands.

I didn't get to the match.py but that'll be first up for Monday if not before. I wrote some code to make my own region files and overplotted them onto the images in ds9 and nothing looked wrong. I also checked to be sure that the psfs were spread pretty evenly over the entire image. It looks as if they are, though there's a suspicious lack of psfs in the center of the image.

I also made a histogram of the match distances spherematch is finding when I match stars between the catalogs that way. I'm suspicious of how many stars spherematch is finding--about 600 match to within 1 arcsecond out of ~30,000 stars in each catalog. Hopefully messing with match.py will shed some light on this problem.

Other than that I WCS matched the three stacks in ds9 and discovered some *gasp* wiggling in the location of stars. Incidentally, the g7 and g10 stacks appear to be the furthest off. Talked with Beth about this briefly. Doesn't seem to be a header issue because I feel that there would be a greater discrepancy, though I'll investigate this further on Monday. Also seems like the stars would be elliptical if it was a scamp issue.

The stars in the r7 stack also look a lot bigger than the others. Beth attributed this to a different FWHM than the g-band, which makes me concerned about the fact that I used the same FWHM for all my DAOphoting. But Beth doesn't think that a FWHM that was a little off should cause a big problem.

Both Beth and I seem to be out of ideas at the moment, so the weekend will be a good time to think some more. Waiting until Dave gives me info on how to give him my images via FTP so that he can offer an opinion.

Wednesday, June 3, 2009

Today:

I started out wanting to fix the distortion maps so that it would show all 8 chips. In order to do so I had to scamp the MEF files. So I started by sextractor-ing the MEF files so that I could get the catalog files needed to input into scamp. I was concerned about inputting the gain here because before I had been doing it on the command line for each chip. But now that each chip was in a different extension of the MEF files for each exposure, that was an issue. I talked with Beth and she suggested that it didn't matter too much for a similar reason to why we can use a constant gain and read noise in DAOphot. So I took the median of the gains of each chip and used that as the value in the .sex configuration file:

sex gain= 2.75

I then used the output catalogs from sextractor to run scamp and discovered that the distortion files were in fact showing all 8 chips! Victory!

Seeing that scamp was playing nicely, I went on to swarp the MEFs together. My concern about gain issues was further alleviated by the fact that swarp was getting the proper gains out of the image headers. I then took a look at the stacked image to make sure it looked alright. There's nothing obvious that stands out--the stack looks as good as it did before. I'll check again after DAOphot star subtracts.

After that I added the constant sky into the new 10-exp stack, using the same median value that I calculated before.

Then I ran DAOphot and checked again to make sure I have the best analytic model possible.

Model # ........ Avg Chi^2
1....................0.0224
2....................0.0507
3....................0.0221
4....................0.0142
5....................0.0549
6....................0.0137
7....................0.0138

This time it looks like the close race is between 6 and 7, but 6 wins out by a little bit. I DAOphoted again and got the average chi for #6 to be 0.0139 and the average of #7 to be 0.0140. The slight changes are presumably because of minor discrepancies in how I was manually selecting the psf stars. But #6 still wins so that's what I'll stick with.

In the process of DAOphoting I discovered another parameter that's neither in the manual nor in our daophot.opt file, but apparently is in the default daophot.opt. The keyword is "USE SATURATED PSF STARS" and it's currently set to 0.0. I'm willing to bet 0 = "No" and move on with my life.

I overplotted the SDSS stars once again to make sure that was still looking consistent and see if it looked any better than before. I'm still concerned by a few places where SDSS shows a star and nothing at all appears in the stack, even before star subtraction. But other than that, things look fine.

I then re-did the 7-exp g and r stack using MEFs and re-did all of the DAOphoting and Allstaring there so that I can come up with the CMD.

Tomorrow:

Come up with a good CMD and go over CMD calibration with Beth.

Tuesday, June 2, 2009

Today:

I started with checking out the analytical models of the psfs to try to find one that fit well. I ran DAOphot using each option to check out the residuals. I compared the chi^2 DAOphot printed to the screen for each model. A good chi^2 is close to 0--it represents the percent deviation, root-mean-square, to which DAOphot's first approximation matches the observed stellar profiles, on average.


Model // Chi^2
Gaussian (#1) // 0.0224
Moffat, beta=1.5 (#2) // 0.0500
Moffat, beta=2.5 (#3) // 0.0197
Lorentz (#4) // 0.0137
Penny, 4 free (#5) // 0.0544
Penny, 5 free (#6) // 0.0136
Super Secret (#7) // 0.0134

Looks like the super secret model wins out, but the 5-free-parameter Penny and Lorentz models are close behind.


In an effort to address the distortion problems Dave brought up yesterday, I've created multi-extension fits (MEF) files similar to what he was using.

Tomorrow:

I'll scamp all 10 of the MEF files at once to see if I can get a distortion file that shows all 8 chips. If this works, I'll swarp the files to see if that looks good. If swarp looks good, I'll redo DAOphot and test the different analytical models once again just to be sure I've got the best one.

Assuming all of this runs smoothly, I'll also talk to Beth about CMD calibration, which we pushed back today when I got stuck with this mef stuff.

Monday, June 1, 2009

Today:

Mag limit issues:

Last Friday afternoon I was having trouble because my CMD was only going to about 17th magnitude. I tackled that problem for most of the day.

I discovered that I'd lied to DAOphot a bit about how I had made the stack. So, after fixing this, my plots of sharp, round, and flux versus magnitude jumped down to about 20.5 magnitudes. Beth still expected this to be deeper. I then DAOphot'd a single exposure on one chip--Exposure 126 on Chip 7 to compare. I made the same sharp, round, and flux plots to check out the instrumental magnitude limits and they were showing stars only out to about 19 magnitudes. Still deeper than the 17 I was getting at first, but not quite down to the 20.5. This might be expected because the stack should get deeper than a single exposure, but not totally sure what the numbers should be.

I made a CMD of the new stack with proper DAOphot'ing and that looks to go down to about 19 mag. A few magnitudes better than before, but still not as good as I was expecting.

Email from Dave:

Dave sent an email today bringing up the issue of the distortion in our images. My scamp distortion output shows only one chip instead of all 8, so he was worried that we were assuming each chip has the same distortion which could easily be very untrue. At his suggestion, I overplotted SDSS stars before and after star subtraction to make sure that my stars were lining up with the ones SDSS knows. It looked like they were lining up pretty well, though there were a few places where SDSS said there should be a star but my image had nothing. I'm still not clear on what was causing this, but in general things lined up very well. I'll check again tomorrow to see if there's anything else I can do to check off on the distortion problems, though things look pretty good to me.

Dave also got back to me today about my DAOphot questions. He agreed with most of our parameter values: He used a single value for gain assuming it wouldn't affect any calculations we were actually going to use. He also used a variable psf and suggested that the KPNO Mosaic imager is probably especially prone to variable psf because it was one of the first wide-field imagers. He set percent error and profile error to 0 like we did, too. But just like us he doesn't really understand what this does either.

As far as Analytical Model PSF, he uses an option that's in the code, but not in the manual. Setting the parameter equal to 7 gives:

C Penny function --- Gaussian core plus Lorentzian wings.
C The Lorentzian and Gaussian may be tilted in different
C directions.

This is the fit he used, but he said it was because his LBT data was particularly bad and suggested we might not need to use it. He said we should try out some other things first. I emailed him back to ask how I could tell if the fit I chose was a good one. He suggested checking by eye to see if the residuals look good and checking to see if the chi-squareds were legit. I'm not sure I totally understand how to do those things, but that'll be a project for tomorrow.

Tomorrow:
1. Sign off on the distortion problems--make sure everything's cool there.
2. Explore some of the analytical models of psf to make sure I have a good one. This will pretty much conclude the discussion of DAOphot parameters pending future weirdness in results.
3. Talk with Beth in the morning about CMD calibration and get moving in that direction.
4. Email Dave back.

Thursday, May 28, 2009

DAOphot, here I come.

Today:

Good news. When I look at the r-band sky levels without having swarp first do the background subtracting, I get some really crazy stuff happening. Exposure 181 drops nearly to 0 on chips 4, 5, and 6, and exposures 179 and 180 are much higher than the others. Yesterday I had only looked at the sky levels for the background subtracted frames. Now looking at the stack of 7 exposures (without the 3 weird ones) I don't see the bad patch that was there before in ds9 OR in atv, so I'll proceed using the 7 exposures instead of 10. I still have yet to decide which stack to use for the g-band--that decision probably won't be made until I see the CMDs.

I calculated the median of medians from the un-subtracted sky levels to figure out what value I should add back in. I did this for the 10-exposure and 7-exp stacks in both r- and g-bands--4 values in all:

10 exp:
g: 528.095
r: 1823.11

7-exp:
g: 493.916
r: 1817.94

It seems reasonable to me that the median levels differ slightly between the two sets of stacks, but not by too much within each band. This makes me think there's not anything really crazy going on. These are the constant values I'll add to the sky-subtracted stacks to put the sky back in.
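Adding the constant back in is a one-line image operation; a sketch (the filename is a placeholder, and the value is the 7-exp g number from above):

    from astropy.io import fits

    # Add the constant sky level back into a background-subtracted stack.
    sky_const = 493.916
    data, hdr = fits.getdata("g_stack_7exp.fits", header=True)
    fits.writeto("g_stack_7exp_sky.fits", data + sky_const, hdr, overwrite=True)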

I added the sky to both the 10-exp and 7-exp g-band stacks as well as the 7-exp r-band stack.

Tomorrow:
DAOphot ftw.

Wednesday, May 27, 2009

Update on where I am

Today:
I stacked the 7 exposures for the g-band and got them looking nice.

I then checked out the psfs of the sources on both the 10-exp and 7-exp stack of the g-band data to see if the sources looked okay. They looked decent--a little raggedy around the edges but otherwise circular. Neither looked particularly better than the other. I'll take a look at the CMDs of each to see if there are differences.

I took a look at the sky levels for the r-band data and discovered that exposures 138 and 139 are much different from the others. Even the ones with comparable values don't seem to follow any trend as far as I can tell. I will ask Beth about this tomorrow and in the meantime make a stack of the 8 exposures that seem consistent. The levels for the r-band are also much higher than those for the g-band, but I think this is okay as long as all 10 are in the same ballpark.

I also stacked the 10 exposures of the r-band and there's something funky happening with at least one exposure on Chip 3. The same shows up in the 8-exposure stack. Oddly enough, I'm seeing this in ds9, but it doesn't show up when I look at the stacks in atv. I'm not sure why this would be, but the images look fine in atv. I also looked at the psfs of both stacks using atv and they look fine.

I discovered that the problem I was having earlier with stacking the images was in fact a result of me averaging the exposures in the stack instead of medianing them. Because the averaged stacks looked so messed up, this is reason enough to use a median instead of an average, in my opinion.

Tomorrow:
Once I take a look into what's happening on Chip 3 in the r-band, I'll go about adding the sky back in. I'll do so by adding a constant background, the value of which will be the median of the median background level on each chip. Of course, this constant will be different for the g- and r-bands.

After the sky's back in, I should be set to daophot which I think I should be able to at least begin by the end of the day tomorrow.

Summer Research 2k9. This is going to be epic.

Long-term Goals:
Complete the Wil1 analysis I've been working on for nearly a year.
Create deeper CMDs of Willman 1 stars.
Help correct the paper Beth wrote in 2006 and resubmit.


Short-term Goals:
Figure out the sky subtraction and add the sky back in.
Create a CMD of Willman 1 by Friday, June 5, 2009.

Current task:
Yesterday afternoon I re-swarped all of the original files without background subtracting them. I then created a plot of the sky levels on each chip and each exposure to compare with the one I made when I used swarp to background subtract. I noticed that the two plots looked very different. Exposures 126, 127, and 130 have much higher sky levels than the other exposures, which seem to follow an expected trend assuming that the moon was rising for consecutive exposures while the data was taken. I talked about this with Beth who suggested that clouds could have interfered with those 3 exposures and could have affected the sky level in all sorts of ways. Reflection from clouds could explain a brighter sky.

Next I'm going to reswarp the 7 exposures that seem well behaved. I will then compare the stacks of 10 and 7 exposures (both created with background subtracting), respectively, to see if the 3 wacky exposures are affecting the CMD. From this, I will choose which stack is most robust to use for the remainder of the analysis.

After choosing which set of exposures to use, I'll decide whether a median or average would be best for making the stack, again by comparing the CMDs.