Monday, June 14, 2010

Bright artificial star tests completed

I finished the Allframe follow-up analysis, and just visually checking my calibrated master list, it looked as though nearly 100% of the stars were found. That means that when I go to do the completeness calculation, I expect good agreement.

I first went back to the completeness test Beth gave me: calculating the number of stars per magnitude for the true images, the artificial input, and the fake output images. I found approximately 98% completeness at 21.5. However, as I went to fainter magnitudes, I found that I was able to calculate a completeness higher than 100%. This indicates to me that I'm likely finding all or most of the artificial stars, but I'm also picking up a few stars which are not artificial and yet counting them as such.
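To illustrate, here is a minimal sketch (not the actual pipeline code) of this kind of completeness calculation: match the recovered detections back to the input positions and bin by magnitude. The array names and the 1-pixel match radius are my assumptions, not values from the real analysis.

```python
# Illustrative sketch only: match recovered artificial stars to the input
# list and compute completeness per magnitude bin. The 1-pixel tolerance
# is an assumed value.
import numpy as np

def completeness_per_bin(input_xy, input_mag, output_xy, bins, tol=1.0):
    """Fraction of input stars recovered within `tol` pixels, per mag bin."""
    recovered = np.zeros(len(input_xy), dtype=bool)
    for i, (x, y) in enumerate(input_xy):
        d2 = (output_xy[:, 0] - x) ** 2 + (output_xy[:, 1] - y) ** 2
        recovered[i] = d2.min() <= tol ** 2
    n_in, _ = np.histogram(input_mag, bins=bins)
    n_rec, _ = np.histogram(input_mag[recovered], bins=bins)
    # Completeness can only exceed 1 in a bin if real (non-artificial)
    # detections are mistakenly counted as recovered artificial stars.
    return np.where(n_in > 0, n_rec / np.maximum(n_in, 1), np.nan)
```

Counting raw detections per magnitude instead of matching by position is exactly how a few real stars can sneak in and push a bin above 100%.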

After that initial test was looking reasonable (though not perfect), I went back to matching the artificial star output to the positions of the artificial stars I input. I matched positions for all stars with g < 21.5 (the bright stars I input all had g or r = 21.5 for simplicity) and found 95.9% completeness. I went to fainter magnitude limits to make sure that I wasn't somehow picking up more stars. (That seems like it could only happen if my calibration or positions were off, since all of my stars should be accounted for with a magnitude limit of 21.5. Neither scenario is likely, since either would cause systematic problems in the completeness and would probably reduce the completeness at 21.5 by far more than 4%.) In the 2006 paper, Beth was getting 100% completeness down to r = 22.5 mag. Mine doesn't seem to be that good for the bright tests, so I wouldn't expect it to be as good for the real artificial star tests. However, I'm zen with 96% completeness, especially if the values at fainter magnitudes are similarly reasonable.

After all that, I've determined that my completeness calculation seems to be sane and that the problem was, in fact, with the artificial star input. That's slightly comforting for my own sanity. Now that I've got the technique figured out and the code written, I'm going to apply it to the real images and re-do the artificial star tests from the beginning. I'll then re-do all of this analysis and hopefully come out with properly calculated completeness levels in the end. I'll start Allframe running this afternoon and complete the calculations tomorrow morning.

Real artificial star tests:

I'll be inputting a grid of stars as defined in my fake CMD, with a square spacing of 56 pixels. All of the stars will be located on all 18 exposures of chip 7, for a total of 1,764 stars per run. After I calculate completeness values for the first run and determine that everything is working fine, I'll need to do more runs. At this point, I'm considering 30 runs in total, each time randomly generating new artificial stars which meet the same CMD criteria. That will result in just over 52,000 stars (30 × 1,764 = 52,920) used to calculate the final completeness. Each iteration of Allframe takes about 3 hours to run, for a total of about 90 hours. However, I can run multiple instances of Allframe at once. Last semester we found that running 8 at a time really bogged down Squid, but running 4 at a time shouldn't be a problem. My current plan is to finish about 8 to 12 instances a day once I get the real tests started, which means it should all be done in less than a week.
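As a rough sketch of one run's input (assuming a 42 × 42 layout, since 42² = 1,764; the chip origin and the CMD magnitude/color cuts below are placeholders, not the real selection):

```python
# Illustrative sketch: build one run's artificial-star grid with 56-pixel
# square spacing, each star assigned a (g, r) pair. The uniform draws
# stand in for sampling the fake CMD and are NOT the real criteria.
import numpy as np

SPACING = 56
NX, NY = 42, 42          # assumed layout: 42 x 42 = 1,764 stars per run

def make_grid(rng, g_range=(20.0, 26.0), gr_range=(-0.5, 1.5)):
    xs, ys = np.meshgrid(np.arange(NX) * SPACING, np.arange(NY) * SPACING)
    g = rng.uniform(*g_range, size=xs.size)          # placeholder CMD cut
    r = g - rng.uniform(*gr_range, size=xs.size)     # placeholder color cut
    return np.column_stack([xs.ravel(), ys.ravel(), g, r])

rng = np.random.default_rng(0)
runs = [make_grid(rng) for _ in range(30)]   # 30 runs, new stars each time
total = sum(len(run) for run in runs)        # 30 x 1,764 = 52,920 stars
```

Regenerating the magnitudes each run (while keeping the grid geometry fixed) is what gets the star count up to ~52,920 without ever crowding any single image.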

Note: When I did the calibration of the bright stars, I found that the g-band zero-point offset was about 0.3 mag higher than for the calibration of the real data. The r band appeared consistent with the calibration of the real data, and both color terms were the same. I don't think the zero points should be different at all, because they are determined from stars in our images matched to those in SDSS. The artificial stars I'm inputting shouldn't influence the matching to SDSS data, so the calibration stars should be the same. In an analysis like ours, 0.3 mag is a fairly significant offset. I'm going to move forward and see if the same thing happens in the actual artificial star tests (as opposed to the bright artificial star tests). If I get the same result, I'll have to look into it further.
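For reference, the zero-point comparison amounts to something like the following sketch: a plain median offset between SDSS magnitudes and our instrumental magnitudes for the matched calibration stars. The actual calibration also includes the color term, which I'm omitting here for simplicity.

```python
# Illustrative sketch: a zero point as the median offset between catalog
# (SDSS) and instrumental magnitudes for matched calibration stars.
# Color-term fitting is deliberately omitted.
import numpy as np

def zeropoint(sdss_mag, instr_mag):
    return float(np.median(np.asarray(sdss_mag) - np.asarray(instr_mag)))
```

Since the same SDSS matches should be used whether or not artificial stars are present, the two zero points computed this way ought to agree, which is why the 0.3 mag difference is suspicious.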
