Friday, June 26, 2009

Calibration Code:

The problem with bootstrapping was just a misunderstanding of what the code was doing, so that's working fine now. From that code I calculated clipped mean and median variances for the color term and zero-point in each band. Each median was close to its respective mean, which is a good sign. Mean/median values are:

g color uncert. 0.0130472/0.0130471
g zero-point uncert. 0.00871615/0.00871514

r color uncert. 0.0238134/0.0238136
r zero-point uncert. 0.0199431/0.0199430

More robust mean values of the zero-point offset and color term for each band are:
g zero-point: 6.84060
g color term: 0.0709389

r zero-point: 7.15033
r color term: 0.0352746

The g band is looking good--the uncertainties are relatively small, and the color term and mean zero-point offset are about what they were before. I'm more concerned about the r band. Its zero-point is similar to what it was before, with a relatively small uncertainty. What concerns me is the color term, whose uncertainty is comparable to the measured value. This means we can't really trust that value for the color term. I'm not terribly sure how to handle this.
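For reference, the bootstrap and clipped statistics described above can be sketched roughly like this. This is only a sketch, not the actual code: the array names, the polyfit-based linear fit, and the simple one-pass clip are my assumptions.

```python
import numpy as np

def bootstrap_calibration(inst_mag, std_mag, color, n_boot=1000, clip_sigma=3.0, seed=0):
    """Bootstrap the linear fit  std_mag - inst_mag = zp + ct * color.

    Returns sigma-clipped (mean, median, variance) for the bootstrapped
    zero-point and color-term distributions.
    """
    rng = np.random.default_rng(seed)
    dmag = np.asarray(std_mag) - np.asarray(inst_mag)
    color = np.asarray(color)
    n = len(dmag)
    zps, cts = np.empty(n_boot), np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                      # resample stars with replacement
        ct, zp = np.polyfit(color[idx], dmag[idx], 1)    # slope = color term, intercept = zero-point
        zps[i], cts[i] = zp, ct

    def clipped_stats(x):
        # simple one-pass sigma clip around the median
        mask = np.abs(x - np.median(x)) < clip_sigma * np.std(x)
        return np.mean(x[mask]), np.median(x[mask]), np.var(x[mask])

    return clipped_stats(zps), clipped_stats(cts)
```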

My next step, as per Ricardo's paper, is to check these values for each exposure to see if the variances between exposures are small compared to the uncertainties described here for the stack. When Ricardo did this he compared between chips, but he also had 36 chips. We have only 8 chips and I expect that the differences between exposures are going to be where any problems lie. We have robust weight maps and distortion maps for all chips and it was the background levels between exposures that concerned us about the stacks to begin with. So I'm more worried about discrepancies between exposures than between chips.
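A minimal sketch of that consistency check, under my own assumptions (the pass criterion here--per-exposure scatter smaller than the stack's bootstrapped uncertainty--is my reading of the idea, and the real comparison would follow Ricardo's paper):

```python
import numpy as np

def exposure_consistency(per_exposure_zps, stack_zp_uncert):
    """Compare the scatter of per-exposure zero-points to the stack's
    bootstrapped zero-point uncertainty.

    Returns the rms scatter across exposures and whether it is small
    compared to the stack uncertainty.
    """
    scatter = np.std(per_exposure_zps, ddof=1)   # sample std across exposures
    return scatter, scatter < stack_zp_uncert
```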

Thursday, June 25, 2009


1. Used the python code to iterate over DAOphot and Allstar. It doesn't look like it got us any deeper overall, but it certainly found more stars, so the CMD filled in pretty nicely. We're now at about 25.5 mag on a CMD of stars within 2 arcmin of Wil1, which is the same limit used in the 2006 CMD.

2. My registration and abstract for the Ann Arbor gig are finally sent! Got an email from one of the organizers about hotel reservations. Tonight I'll buy the plane tickets so that's out of the way.

3. Worked on the calibration code. The zero-point/color term code based on what Beth sent is complete. I'm working on getting the bootstrap working. I think the bootstrap itself is more or less correct, but I'm having trouble getting it to play nicely with the linear fitting.

4. Alex and I succeeded in our task of getting Beth and Marla to the Physics Lounge (not an easy thing to do) for Beth's surprise yay-you-got-money-here's-some-champagne party, which Suzanne organized, of course.

5. I also read Ricardo's CB and UMaII draft to get a sense of how he's been doing things. It gave a lot of insight into the calibration methods I'm currently working on, which I'll use as guidance as I finish up that code. He also ended up using AllFrame because it got deeper results. Because we're still not getting as deep as we want, I'm going to start reading up on AllFrame and figuring out how much of a hassle it'll be to try that method. I'm hoping to give it a trial run next week to see if I can get a deeper CMD. It still concerns me a bit that with swarp we're not even getting to the 26th magnitude that Beth's calculations predicted. That seems to indicate that something else is still not working properly in my current analysis.


Still to do:

1. Finish calibration code.
2. Look into AllFrame.
3. Get moving on the max likelihood stuff.

Wednesday, June 24, 2009


Words weren't really flowing last night so I finished the abstract this morning and emailed it to Beth for comments. Once the abstract was proofread I emailed in my application for the workshop.

Then I fixed the bug Beth found in the chi and sharp code--an indexing problem. That greatly reduced the file size and corrected the plots. So I sent those to Beth for feedback.

Next I took a look at the Python code Beth recommended for iterating DAOphot and Allstar and got that running. The thought is that after we've subtracted the brightest stars, running DAOphot and Allstar again will nab even fainter stars, hopefully giving us a fainter CMD overall. I had some trouble getting it running, though--it got stuck when choosing PSF stars. I emailed Beth to see if she had ever had that problem with the code.
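The shape of that iteration loop is roughly the following. This is a hypothetical sketch, not Beth's code: the driver-script names (`run_daophot.sh`, `run_allstar.sh`) and the `.sub.fits` naming are placeholders for however DAOphot/ALLSTAR are wrapped locally, though ALLSTAR does produce a star-subtracted image that can be fed back into the next FIND pass.

```python
import subprocess

def iterate_daophot(image, n_iter=2, run=subprocess.run):
    """Iterate DAOphot/ALLSTAR: after each ALLSTAR pass, the
    star-subtracted frame is used as the input for the next
    detection pass, so fainter stars hidden under bright
    neighbors can be found.  `run` is injectable for testing.
    """
    current = image
    for _ in range(n_iter):
        run(["./run_daophot.sh", current], check=True)   # FIND + PHOT + PSF (placeholder wrapper)
        run(["./run_allstar.sh", current], check=True)    # PSF fitting; writes star-subtracted frame
        current = current.replace(".fits", ".sub.fits")   # next pass runs on the subtracted image
    return current
```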

Also had a group meeting today. Everyone seems to be making progress. I've made a good amount of progress since last week, even though I thought I'd be stuck by now. Marla Geha is also here collaborating with Beth, so she sat in on the meeting.

Still to do: more along the same lines--fixing the python code, hopefully improving CMDs, working on the calibration code.

Monday, June 22, 2009

Things I figured out/did today:

Today has been a hodgepodge of trying things to see if there is any indication of where our methods could be going wrong or could be improved.

Took a look at many different versions of the chi versus sharp plots for the weighted stack, single exposure, and median stack in each band. I overplotted green dots representing the stars that matched between the bands and noticed that these were spread throughout, which was to be expected. I did note that stars in the weighted stack weren't matching to the most extreme stars detected in either band, while the faintest stars in the single-exposure and median stacks did match--keeping in mind that the latter two don't go as deep to begin with.

I also grumbled about the bad flat fields for a while. But Beth reminded me that they have already been used (the images were divided by them in the first round of image reduction), so we can't do anything about them. In fact, using them to stack the images now should account for any badness, which is reflected in the fact that our stacked images look fine. Besides, the flats reflect the chips they were taken with, so it's really the camera itself that looks bad, which is important to take into account. It's just unfortunate that Wil1 is centered on the worst chip of the camera.

I thought briefly that the CMD I was creating from only stars close to Willman 1 was significantly shallower (by up to 1 mag) than the CMD with all stars in the field. Beth reminded me that it could be deceiving because of the huge difference in densities, what with so few stars from the field residing within 5 arcmin of the center of Willman 1. On her suggestion, I followed up by remaking the CMD of the field, but plotting only a random sample of those stars equal in size to that of the close-up CMD. It would seem that what I was seeing was a result of the high number of stars, because this last plot showed about the same depth as the close-up one. Oh well...seems I'll try anything to get to the ever-elusive 26th magnitude mark.
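The density-matched comparison is a one-liner with numpy's random sampling (a sketch; the array name and the choice of sampling without replacement are my assumptions):

```python
import numpy as np

def matched_depth_sample(field_mags, n_close, seed=0):
    """Draw a random field-star sample the same size as the close-up
    CMD, so the apparent faint limits of the two CMDs can be
    compared without the density difference deceiving the eye.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(field_mags), size=n_close, replace=False)
    return np.asarray(field_mags)[idx]
```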

I made a 10-exposure r stack to check whether that could get us any deeper or a better image. I used the same technique as before with weight maps, etc. I also checked the swarped images which, thanks to the weight mapping, look pretty good. After DAOphot'ing, I discovered that the standard deviation of the image is in fact smaller for the 10-exposure stack than for the 7-exposure stack (9.98 as opposed to 10.908--I'd say this is a big enough difference to warrant some interest). DAOphot also found 10,000 more stars from the extra three exposures (now we're up to 46,000), so I'm hoping they're not all crap and can contribute at the fainter mags. I checked out the chi and sharp plots and I'd say they add a few tenths of a mag over the 7-exposure weighted image. Considering all this, I'm going to say the 10-exposure stack is the way to go--particularly because bad image quality is what got those three exposures thrown out to begin with, and that's no longer an issue.

Still to do/consider (by no means in order of importance):

1. On the calibration code I still need to: bootstrap the calibration and find the mean, median, and variance.
2. Use python code to iterate allstar--does this get us any deeper?
3. Should we go back and use the weight maps in sextractor? Could it make a big difference in detecting faint sources from the beginning? I got this idea from reading Dave's Hercules paper this morning. It's a rather handy step-by-step guide of pretty much everything I'm doing.
4. Submit registration/abstract for workshop ASAP.
5. Talk to Dave about images.
6. Fix bug in sharp/chi comparison code.
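If item 3 pans out, turning on weight maps in SExtractor should be a small change to the configuration file. The parameter names below are standard SExtractor parameters; the weight-map filename is a placeholder:

```
WEIGHT_TYPE      MAP_WEIGHT           # use an external weight map during detection
WEIGHT_IMAGE     wil1_g.weight.fits   # placeholder: per-chip weight map from the stacking step
```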