Friday, June 18, 2010

Friday

Today:

1. I found a bug in the code that creates my addstar input files, which will hopefully have a profound impact on my completeness results. Unfortunately, I accidentally corrupted my only copy of one of the Allframe output files, so I have to re-run Allframe again. :( I still have high hopes that my completeness problems will all magically disappear after this. If so, Allframing begins this weekend.

2. I spent part of the morning working on the manual that tracks all of my analysis. I had originally written it as a .doc because I was working from my computer during a break from school and the ssh connection was horrible. Today I converted it all to a latex document and made the formatting really nice. I also added a preface and have been trying to incorporate a table of contents. Some other time, I'll get to updating the content. So far, the Bertin software sections are largely complete; the Allframe and Artificial Star Test sections still have a ways to go.

3. During the afternoon I revised the first two sections of the paper. Overall, the intro reflects that of my thesis much more now, with plenty of revisions I nabbed from the first draft of the real paper. I'll proofread draft 2 and then send it on to Beth for review. Next on the paper agenda will be finishing the completeness section and moving on to results.

Monday:
1. Completeness like it's never been done before.
2. Adding bootstrapping to the distance code.
3. More work on the paper.

Priorities

Today marks the end of week 4. I thought I'd take a moment to pause and reflect. The first month has seen fewer results than I had initially hoped. The completeness work is taking much longer than expected due to all the problems I'm having, and I will be VERY glad to have it behind me. However, it finally seems to be coming together. I've also set it aside at times to get ahead on other things, so while there aren't a lot of results to show, I have made progress on a lot of fronts. A recap for posterity:

Done
1. UMD cosmology meeting
2. 1st draft of intro and data sections of the paper
3. Bright artificial star tests
4. Completeness calculation of said tests
5. Near-convergence on the real artificial star tests
6. Distance calculation (all but the distance uncertainty calculation)

To Do
1. Perhaps obviously, completeness tests are back to being my #1 priority. I already have the code to calculate the completeness (once it's working properly!). Once I have reliable results there will be a lot of Allframe action.

2. Revising the first two sections of the paper.

3. Once I've calculated completeness and photometric uncertainty, I need to include these parameters in the distance calculation and finalize that.

4. I can then use the completeness limit to properly calculate ML results. The only calculation I really need to check on is the surface brightness. I believe the ML code already does it, or at least makes it easy to do, but I'll make sure the calculation is ready to go before the ASTs are finished.

5. The resulting number of stars provided by ML will then be input into the absolute magnitude code and that can be finalized.

6. Using the completeness limit and the best matched fiducial at the appropriate distance, I can finalize the morphology.

7. The structural parameters will inform the simulation, which Beth and/or I will probably need to restart from scratch to correct.

8. Paper Paper Paper--When all is said and done it should be relatively straightforward to write up all the results. I've got my thesis to work from and also Beth's 2006 paper. I plan to write bits and pieces as I go, in the order mentioned above.

9. Analysis Manual--Along the way, I also need to finish the document that describes how I've done all this. A lot is included already, but I've indicated where I need to add detail. I also need to include a lot more information about the Allframe portion and need to add a section for the Artificial Star Tests. There's also a lot of formatting that needs to get done. This is fairly low priority at the moment because this is something that could technically be finished even after I leave here, though I hope to make some time before then.

Distance and Completeness Updates

Fun
Today started out with breakfast in Zubrow for all the KINSC students. Twas good. It's also pot luck day!

I spent part of the afternoon helping some other folks figure things out. But now for my own work:

Distance
I came back and double checked my input to the distance code, just to be sure that everything was good to go there. I checked my technique against Dave's description of his calculation and went back to Shane's Bootes II paper, which Dave had cited. I discovered that there were a few additional things that I hadn't coded yet. Primarily, after I select the best fit isochrone or fiducial and the best distance, I need to bootstrap the distance calculation with the best fit to derive the error in the distance. Everything besides this last step is set, so I'll add the bootstrap after I've got the completeness stuff figured out.

I also tracked down the info about M13 and M92 (age/metallicity/distance) which was missing from yesterday's post.

Completeness
I spent all afternoon working on the completeness calculation for the actual ASTs. I really want to get the full-fledged Allframing started before the weekend. At this point I need the results to make significant progress on other calculations. Essentially, a lot of things will come together quickly once the artificial star tests are finished and the completeness levels are calculated.

Last I left the completeness calculation, I had a debugged version for the bright star tests, but had run into trouble matching the positions for the real test. That problem is still there. I did find a bug in the code where I was creating the input addstar file, so I made new ones and am running Allframe again. Hopefully that will solve the problem. In the meantime, I'm going to double check a few other things. Fingers crossed, this will not be an issue by lunchtime. Below are some signs of problems I've discovered on my hunt for answers. These are results from the bad addstar input runs, so I'll go back after running Allframe again to see if these red flags are still there.

A few observations:
1. I'm definitely getting 96% completeness for the bright star tests at r < 22.5. I've confirmed qualitatively and quantitatively.
2. Without applying a faint magnitude limit, only 6 stars out of 1765 artificial stars are matching between my art star input and the output from Allframe.
3. There are fewer stars coming out of Allframe for the true distribution than are getting input as artificial stars. This could possibly be correct but seems unlikely. Allframe will pick up noise down to 27.5 mag, so even for stuff that's very faint I should be getting a lot of signal there. That's not to mention the true stars that Allframe should be picking out of the image, too.
4. A quick, informal survey of 22 stars in the center of the ref frame shows that 3/22 visible artstars weren't subtracted (at least not well) by DAOphot/Allstar. Real stars are getting subtracted pretty well.
5. The bright star test outputs 1426 more stars than the real artificial star test. After calibration the bright star masterlist contains 1538 more stars than the actual art star masterlist. After final chi/sharp cuts, the margin is almost twice as many stars in the bright masterlist as in the actual AST masterlist.
6. I've double checked my option files, the program I use to create the masterlist, and the calibration program and they're exactly the same for both the bright tests and the actual tests.
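For reference, the position matching itself is conceptually simple -- a brute-force nearest-neighbor check like this sketch (the 1-pixel tolerance and the names are placeholders, not my actual code):

```python
import numpy as np

def match_positions(x_in, y_in, x_out, y_out, tol=1.0):
    """For each input artificial star, check whether any recovered star
    lies within tol pixels. Brute force, but fine for ~2000 stars."""
    matched = np.zeros(len(x_in), dtype=bool)
    for i, (x, y) in enumerate(zip(x_in, y_in)):
        # squared distances from this input star to every output star
        d2 = (x_out - x) ** 2 + (y_out - y) ** 2
        matched[i] = d2.min() <= tol ** 2
    return matched
```

The completeness is then just `match_positions(...).mean()`; when that comes out at 6/1765, either the input positions or the transformation between input and output coordinates is broken.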




Tomorrow:
1. Paper revisions
2. Completeness calculation
3. Allframing

Thursday, June 17, 2010

Distance Calculation

I downloaded 4 isochrones for the distance calculation. They are:

13Gyr, [Fe/H] = -2.3
13Gyr, [Fe/H] = -2.0
10Gyr, [Fe/H] = -2.3
10Gyr, [Fe/H] = -2.0

I downloaded these from Aaron Dotter's database at http://stellar.dartmouth.edu/~models/isolf.html. For each of the four, I assumed [a/Fe] = 0.0, Y = 0.245 + 1.5Z, and used the SDSS color database in ugriz.

I'll also be comparing to the M13 and M92 fiducials. The info on them is:

M13: (Grundahl et al. 1998)
distance = 7.7 kpc (dm = 14.431)
age = 12 +/- 1 Gyr
metallicity = [Fe/H] = -1.61
[a/Fe] = 0.3


M92: (Pont et al. 1998)
distance = 8 kpc (distance modulus (dm) = 14.515)
age = 14 +/- 1.2 Gyr
metallicity = [Fe/H] = -2.2

I'll be fitting each of the fiducials and isochrones at 6 distances:

distance | dm
35.0 kpc | 17.720
36.0 kpc | 17.782
37.0 kpc | 17.841
38.0 kpc | 17.899
39.0 kpc | 17.955
40.0 kpc | 18.010
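As a sanity check, the dm column follows directly from m - M = 5 log10(d / 10 pc):

```python
import math

def dist_mod(d_kpc):
    """Distance modulus m - M = 5 log10(d / 10 pc), with d in kpc."""
    return 5 * math.log10(d_kpc * 1000.0 / 10.0)

# reproduces the table above
for d in (35.0, 36.0, 37.0, 38.0, 39.0, 40.0):
    print(f"{d:.1f} kpc | {dist_mod(d):.3f}")
```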

As of the end of the day Wednesday, the code was completed and debugged. The best match so far is M13 at a distance of 39 kpc, with a total of 210 Wil1 stars within the envelope. This is derived using a magnitude limit of r = 24.25, which will need to be compared to the results of the completeness tests. Also, I'm currently using the measurement errors from the KPNO data to define the envelope around each isochrone/fiducial within which I'm matching Wil1 stars. This will need to be adjusted to the photometric uncertainty derived from the completeness tests.

Tuesday, June 15, 2010

June 15

Completeness

I finished the completeness calculation of the actual artificial star distribution today. The problem that I was finding with calibration yesterday didn't occur with the distribution I really want. From an email I sent to Beth:

The problem could be that I didn't (and still haven't) masked out any of the artificial stars. For the calibration I set a magnitude limit of r = 21.75 to only use reliable SDSS stars. My bright stars had r = 21.5 so if they overlapped with an SDSS star, they would've been included in the calibration and possibly screwed it up. In the real tests, there are very few stars that bright, so it's unlikely for an artificial star to meet the mag limit AND match the position of an SDSS star. The calibration for the real artificial star test is consistent with that of the true data, which tells me the calibrations aren't being affected by the presence of artificial stars.

I therefore assumed that the calibrations were good for the artificial data I was actually concerned about and decided to move on. I ran into some snags when I moved onto the completeness calculation itself. I worked with it for a long time and decided I just needed to take a break from it. I'm going to look at it again later tonight. I already have the code from the bright artificial star tests, so it should be a straightforward calculation, but apparently applying the code to the real tests is buggy.

Distance

In the meantime, I've begun coding up the distance calculation. I went back to Dave's paper to read how he did the calculation, and it's similar to how I'll be doing it. The outline of my technique is as follows:

1. Download a number of isochrones and fiducials with a range of metallicities and a few ages.
2. Match stars within 1 hlr of the center of Wil1 to these color-magnitude sequences within an envelope defined by the photometric error determined by my artificial star tests.
3. Do the same thing for an annulus representing the background sky in the field of Wil1, but far from potential member stars.
4. Count the number of Wil1 stars (member candidates - contaminants) which are consistent with the main sequence.
5. Shift the distance modulus of the sequences in intervals of 0.025 mag around the approximate Wil1 distance modulus (m - M ~ 17.84).
6. Repeat steps 2-4.

Whichever distance modulus/metallicity combo matches the most member stars of Wil1 is presumed to be the best fit.
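In code terms, the steps above amount to a grid search over sequences and distance moduli. A minimal sketch (the function names, the fixed-width color envelope, and the missing area scaling between the central region and the background annulus are all simplifications, not my actual implementation):

```python
import numpy as np

def count_in_envelope(mags, colors, seq_mag, seq_color, mag_err, dm):
    """Count stars within the photometric-error envelope around a
    sequence (isochrone or fiducial) shifted to distance modulus dm."""
    n = 0
    for m, c in zip(mags, colors):
        # color of the shifted sequence at this star's apparent magnitude
        seq_c = np.interp(m, seq_mag + dm, seq_color)
        if abs(c - seq_c) < mag_err:   # simple fixed-width color envelope
            n += 1
    return n

def fit_distance(member_stars, background_stars, sequences, dm_grid, mag_err=0.1):
    """Grid search: for each sequence and each dm, count member candidates
    minus background contaminants; return the best (sequence, dm, count)."""
    best = (None, None, -np.inf)
    for name, (seq_mag, seq_color) in sequences.items():
        for dm in dm_grid:
            n_mem = count_in_envelope(*member_stars, seq_mag, seq_color, mag_err, dm)
            n_bg = count_in_envelope(*background_stars, seq_mag, seq_color, mag_err, dm)
            if n_mem - n_bg > best[2]:
                best = (name, dm, n_mem - n_bg)
    return best
```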

So far I have a detailed outline of the code and I've started to fill in the details. I'm going to spend some time tonight deciding exactly which fiducials/isochrones I want so that I can put them into the code in the morning. I'm hoping to have a full draft of the code by the end of the day tomorrow for debugging.

Paper

I have yet to get back to editing the paper, but I'll definitely make time before the week is out. Hopefully in the next couple of days I'll have a completed distance calculation code and be running Allframe so I'll have some extra time.

The Outside World
Today Jerry had a visitor, Michael Triantafyllou, aka the Director for the Center for Ocean Engineering at MIT. Apparently his son is interested in Haverford and was getting a tour. He gave a little talk for Jerry and his students, Peter and Andrew, and Mimi and me. He's doing research about the movement of fish as they swim. Apparently fish can stay behind rocks for a long period of time, getting their energy from the flow of the water (!) and using very little of their own muscle power to stay there. His group is trying to recreate the technology, essentially building a fish. They're using a lot of MEMS to gauge pressure changes around vortices caused by turbulence behind obstacles in a river environment. I could go on about it, but it was interesting.

Tomorrow afternoon Peter Love is giving a talk about his research. I'll probably stop by for an hour.

Monday, June 14, 2010

Bright artificial star tests completed

I finished the Allframe follow-up analysis, and just from visually checking my calibrated masterlist, it looked as though nearly 100% of the stars were found. That means when I go to do the completeness calculation, I expect good agreement.

I first went back to the completeness test Beth gave me--calculating the number of stars per magnitude for the true images, artificial input, and fake output images. I found an approximate completeness of 98% at 21.5. However, as I went to fainter magnitudes I found that I was calculating a completeness higher than 100%. This indicates to me that I'm likely finding all or most of the artificial stars, but I'm perhaps also finding a few other stars which are not artificial and yet counting them as such.
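That first test is essentially a ratio of two histograms -- something like this sketch (names are placeholders). A bin can come out above 100% exactly when detections that aren't artificial stars leak into the output counts:

```python
import numpy as np

def completeness_per_bin(mag_in, mag_out, bins):
    """Completeness per magnitude bin: recovered counts / input counts.
    mag_in: magnitudes of the artificial stars added to the images;
    mag_out: magnitudes of the stars recovered from the fake images."""
    n_in, _ = np.histogram(mag_in, bins=bins)
    n_out, _ = np.histogram(mag_out, bins=bins)
    # empty input bins get completeness 0 rather than a divide-by-zero
    return np.where(n_in > 0, n_out / np.maximum(n_in, 1), 0.0)
```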

After that initial test was looking reasonable (though not perfect), I went back to matching the artificial star output to the positions of the artificial stars I input. I matched positions for all stars with g < 21.5 (the bright stars I input all had g or r = 21.5 for simplicity). I found 95.9% completeness. I went to fainter magnitude limits to make sure that I wasn't somehow picking up more stars (this seems like it could only be the case if my calibration or positions were off, since all of my stars should be accounted for with a maglim of 21.5. And neither one of these scenarios is likely, since they would result in systematic problems in the completeness and would probably reduce the completeness at 21.5 by far more than 4%). In the 2006 paper, Beth was getting 100% completeness down to r = 22.5 mag. Mine doesn't seem to be this good for the bright tests, so I wouldn't expect it to be as good for the real artificial star tests. However, I'm zen with a 96% completeness, especially if at fainter magnitudes the values are similarly reasonable.

After all that, I've determined that my completeness calculation seems to be sane and that it was, in fact, a problem with the artificial star input. That's slightly comforting for my own sanity. Now that I've got the technique figured out and the code written, I'm going to apply it to the real images and re-do the artificial star tests from the beginning. I'll then re-do all of this analysis and hopefully come out with properly-calculated completeness levels in the end. I'll start Allframe running this afternoon and complete the calculations tomorrow morning.

Real artificial star tests:

I'll be inputting a grid of stars as defined in my fake CMD with a square spacing of 56 pixels. All of the stars will be located on all 18 exposures of chip 7--that's a total of 1764 stars per run. After I calculate completeness values for the first run and determine that everything is working fine, I'll need to do more runs. At this point, I'm considering 30 runs in total, each time randomly generating new artificial stars which meet the same CMD criteria. That will result in just over 52,000 stars used to calculate the final completeness. Each iteration of Allframe takes about 3 hours to run, which is a total of 90 hours. However, I can run multiple instances of Allframe at once. Last semester we found that running 8 at a time really bogged down Squid. However, running 4 at a time shouldn't be a problem. My current plan is to finish about 8 to 12 instances a day once I get the real tests started. That means it should all be done in less than a week.
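For the record, the arithmetic behind that plan:

```python
runs, stars_per_run = 30, 1764       # 1764 artificial stars per run
hours_per_run, runs_per_day = 3, 10  # ~8-12 Allframe instances per day

total_stars = runs * stars_per_run   # 52,920 -- "just over 52,000"
total_hours = runs * hours_per_run   # 90 hours of Allframe time
days_needed = runs / runs_per_day    # about 3 days at that pace
print(total_stars, total_hours, days_needed)
```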




Note: When I did the calibration of the bright stars, I found that the g-band zeropoint offset was about 0.3 higher than for the calibration of real data. The r-band appeared consistent with the calibration of the real data and both color terms were the same. I don't think the zero points should be different at all because they are determined from stars in our images matched to those in SDSS. Needless to say, the artificial stars I'm inputting shouldn't be influencing the matching to SDSS data and so the calibration stars should be the same. In an analysis like ours, 0.3 mag is a fairly significant margin. I'm going to move forward and see if the same happens in the actual artificial star testing (as opposed to the bright artificial star testing). If I get the same result, I'll have to look into it further.