Wednesday, July 14, 2010

Good news!
As of yesterday all of the Allframing was completed, so I was ready to do the post-allframe analysis and prepare the masterlist for calibration and the completeness calculation. I spent most of the afternoon doing this. I also automated all of the code that creates the masterlists and calibrates them for all of the artificial star test iterations.

Bad news!
When I calibrated the data the g-band offset was still there.  :( :( :( I had debugged this for the first test and checked that the calibration was correct, but for some reason it was now incorrect in the other 19 tests. I discovered a bug in the program that creates my input addstar files: it amounted to not applying the offset fix. Essentially, I did it correctly for the first test, and then while automating that code I introduced a bug that undid the correction. This morning I began feverishly correcting this. I'm now running two tests--one with the bug corrected and one omitting the zero-point fix altogether. The latter should replicate the current results I have; the former should correct them. I'm running both on two more iterations of the artificial star tests in order to quadruple-check that this is fixed before I go on. After those are done I should have 3 tests down and 17 to go.

17 is no biggie. I'm pretty angry with these artificial star tests at this point so I'm going to run them all night until they're finished. They'll be done in the morning even if I have to stay up all night to finish them. I'm also toying with the idea of doing 25 tests instead of 20, but I'll wait to make sure that the other problems are fixed before I go overboard on anything.

Good news!
My surface brightness calculation is fixed. It turns out the problem was a unit error--my number only looked wrong because I was comparing it to the Martin et al. 2008 value, which is in different units. The final number is still waiting on the ML code.
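For reference, here's a minimal sketch of one classic conversion that causes exactly this kind of mismatch. The specific units in my code aren't stated above, so treat the band and values here as illustrative assumptions only: surface brightness quoted in mag arcsec^-2 versus luminosity density in L_sun pc^-2 differ by a fixed logarithmic offset.

```python
import math

# Illustrative only -- the units in the actual calculation aren't stated
# above. One common mismatch: surface brightness in mag arcsec^-2
# versus luminosity density in L_sun pc^-2.
M_SUN_V = 4.83  # absolute V-band magnitude of the Sun (assumed band)

def mu_from_luminosity_density(S_Lsun_per_pc2):
    """Convert a luminosity density (L_sun pc^-2) to a V-band
    surface brightness (mag arcsec^-2)."""
    return M_SUN_V + 21.572 - 2.5 * math.log10(S_Lsun_per_pc2)
```

So a density of 1 L_sun pc^-2 corresponds to roughly 26.4 mag arcsec^-2 in V--the two numbers describe the same brightness but look nothing alike until converted.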

Bad news!
I can't get any idea of the completeness until the artificial star tests are corrected. The bug affected only the .add input files, not the output files, which means I can't properly compare the number of stars input to the number found for the completeness calculation. It also throws off the magnitude bins. The ML code with the chi/sharp cut will therefore have to wait until tomorrow morning.
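Once the corrected tests finish, the completeness calculation itself is simple: bin the input and recovered artificial stars by magnitude and take the ratio. A minimal sketch--the bin choices are placeholders, not my actual code:

```python
import numpy as np

def completeness(input_mags, recovered_mags, bin_edges):
    """Fraction of artificial stars recovered per magnitude bin.

    input_mags:     magnitudes of all stars added (from the .add files)
    recovered_mags: magnitudes of the subset recovered after Allframe
    bin_edges:      magnitude bin edges, e.g. np.arange(18.0, 26.5, 0.5)
    """
    n_in, _ = np.histogram(input_mags, bins=bin_edges)
    n_out, _ = np.histogram(recovered_mags, bins=bin_edges)
    # Guard against divide-by-zero in empty bins
    return np.divide(n_out, n_in,
                     out=np.zeros_like(n_out, dtype=float),
                     where=n_in > 0)
```

This is also why the .add bug is fatal here: if the input magnitudes carry the wrong zero point, the numerator and denominator end up binned on different magnitude scales.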

Good news!
That chi/sharp cut was fixed as of yesterday, and I wrote a function to apply it to a masterlist of my choosing. Since I've also already automated the masterlist and calibration code, the post-allframing analysis is pretty straightforward.

There's a lot of daomatching and daomastering to do after allframe finishes on all of these tests. That's going to be the most tedious thing to do and the reason why I might be up late tonight. Once that's done I will:
1. Create the masterlists and apply the chi/sharp cut.
2. Calibrate them.
3. Calculate completeness.
4. Choose a completeness limit.
5. Run the ML code using this limit and check the results.
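Steps 3 and 4 above can be sketched together. One common (though not universal) convention is to take the limit as the faintest bin before the recovered fraction drops below some threshold, e.g. 50%--a hedged sketch, not my actual criterion:

```python
import numpy as np

def completeness_limit(bin_centers, frac, threshold=0.5):
    """Faintest magnitude bin before the recovered fraction first
    drops below `threshold`. 50% is a common convention; the value
    actually used is a judgment call."""
    below = np.asarray(frac) < threshold
    if not below.any():
        return bin_centers[-1]          # never drops below threshold
    first_bad = int(below.argmax())     # index of first bin below threshold
    return bin_centers[first_bad - 1] if first_bad > 0 else None
```

The chosen limit then feeds directly into step 5 (and steps 6 and 8 below), since the ML fit should only use stars brighter than it.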

And beyond:
6. Modify/run the absolute magnitude code depending on the completeness limit I choose for this calculation and using the ML results.
7. Run the surface brightness calculation using the ML results.
8. Run the distance calculation using the completeness/photometric uncertainty results.
9. Recreate the spatial map.
10. Run the simulation based on the ML results.


  1. Sooooooo... how'd the AST'ing megasession last night go?

  2. p.s. I started reading the paper. Decided that I should be the next person to edit the intro and that the rest of the paper should be edited once these AST's are done, things move forward, and there are actual numbers/results to add in.