Friday, July 2, 2010

I haven't posted all week. Mostly because there hasn't seemed to be anything new to post. But finally--an update!

I've spent my week doing a few things. First, the distance code is done. It took me longer than expected to finish the bootstrap which will be used to calculate the uncertainty in the distance. It ended up being tricky to automate everything to give me exactly what I want with little effort. Hopefully it will be a pleasure to run later when I calculate the best fit distance.
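The bootstrap itself is standard enough to sketch. Below is a minimal Python version, where `fit_distance_modulus` is a hypothetical stand-in for the real distance fit and the sample of magnitudes is a toy (the 18.4 mag center and assumed absolute magnitude of 0.5 are illustrative numbers, not Wil1 values):

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_distance_modulus(mags, absolute_mag=0.5):
    # Hypothetical stand-in for the real CMD-based distance fit:
    # distance modulus from the median apparent magnitude of a
    # tracer population with an assumed absolute magnitude.
    return np.median(mags) - absolute_mag

def bootstrap_distance(mags, n_boot=1000):
    """Resample the star list with replacement, refit the distance
    modulus each time, and report the mean and scatter of the refits."""
    fits = np.array([fit_distance_modulus(
        rng.choice(mags, size=len(mags), replace=True))
        for _ in range(n_boot)])
    return fits.mean(), fits.std()

# Toy sample: 200 stars scattered around apparent magnitude 18.4
mags = rng.normal(18.4, 0.3, size=200)
mu, mu_err = bootstrap_distance(mags)
```

The scatter of the refit values is the uncertainty on the distance modulus; the same loop works unchanged for any statistic that can be recomputed from a resampled star list.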

Second, I've been fighting with the ML code for much of this week. At first I wasn't getting any sort of sane response from the code; it was just echoing my input values 1000 times. I tracked the bug down to a discrepancy between the field area calculations in the two scripts. The most up-to-date versions of these can now be found in /home/gail/comparedata. The other thing to keep in mind for the future is the shape of the input field. I settled on a circular field of a certain radius around Wil1, which is what I used for my thesis, so that I could compare the results directly. As such, the code calculates the area of the field assuming it's circular. Should I instead use a rectangular field (e.g., by taking out the radius cut), I'll need to change the way I'm calculating the field area to match.
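For future reference, here's a sketch of the two area conventions that have to stay consistent across the scripts (Python, with illustrative radii and dimensions, not the actual field geometry):

```python
import numpy as np

def field_area_circular(radius_deg):
    """Area of a circular field (flat-sky approximation), in deg^2."""
    return np.pi * radius_deg**2

def field_area_rectangular(width_deg, height_deg):
    """Area of a rectangular field, in deg^2."""
    return width_deg * height_deg

# Illustrative numbers only: a circular cut around Wil1 vs. a rectangle
circ = field_area_circular(0.25)
rect = field_area_rectangular(0.6, 0.6)
```

Whichever shape the field ends up being, every script that normalizes by field area needs to use the same calculation, or the background density (and hence the ML fit) comes out wrong.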

I was interested in the ML results for three data sets: the 2006 data, the 2010 data with the strict chi/sharp cut, and the 2010 data with the loose (2006) chi/sharp cut. When I ran the ML code on the 2010 data with the looser chi/sharp cut, I found some anomalies in the results: a secondary peak at e = 0 and another at PA ~ 30 degrees, plus a double peak in the half-light radius distribution. The primary peaks lined up with the SDSS data, but these double peaks were troubling. I spent some time checking whether the parameters I was using could be producing this effect, experimenting with several things I had tweaked before: the closeness of the color cut and the field size. For all of the runs I left the faint magnitude limit at 24.25--the limit used in my thesis and also probably what our completeness limit will end up being. Whatever I did, I couldn't get rid of the weird double peaks, so I concluded that the 2006 data and the 2010 data with the loose chi/sharp cut just aren't as good as the 2010 data with the strict chi/sharp cut. I talked this result over with Beth and she agreed, so I'm going to forge ahead with the 2010 data set after all.
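One quick way to flag the kind of multimodality I was seeing is to count the local maxima in a histogram of each parameter's samples. A rough sketch, with a toy bimodal ellipticity distribution standing in for the real ML output (`find_histogram_peaks` is my own hypothetical helper, not part of the ML code):

```python
import numpy as np

def find_histogram_peaks(samples, bins=40):
    """Return the bin centers of local maxima in a histogram of
    parameter samples; more than one peak flags a multimodal,
    and therefore suspect, distribution."""
    counts, edges = np.histogram(samples, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return [centers[i] for i in range(1, len(counts) - 1)
            if counts[i] >= counts[i - 1] and counts[i] > counts[i + 1]]

rng = np.random.default_rng(0)
# Toy bimodal ellipticity samples: a main peak near e = 0.5 plus a
# spurious secondary peak near e = 0, mimicking the anomaly above
e_samples = np.concatenate([rng.normal(0.50, 0.03, 800),
                            rng.normal(0.02, 0.01, 200)])
peaks = find_histogram_peaks(e_samples)
```

Running this over the e, PA, and half-light radius outputs would make the "double peak" check systematic rather than by eye.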

First up was reminding myself of the completeness numbers that I was getting to begin with. For the 2010 data set with strict chi/sharp I was getting completeness levels like:

This is compared to the completeness levels for a looser chi/sharp cut. The following is for the 2010 data with the looser, 2006 chi/sharp limits:
As expected, the latter has a higher completeness. The 2010 data with the strict chi/sharp cut turn out the best ML results but really poor completeness, with limits ~ 1 mag brighter than with the 2006 cut. I'm going to explore using only brighter stars with high completeness for the ML code, while using the same data set out to fainter magnitudes for some of the other calculations, such as absolute magnitude and distance. Beth suggests that the absolute magnitude can be corrected for the low completeness and that the distance likely won't be affected by it. High completeness is important for the ML code, but it's possible to use only the brighter stars where completeness is high. I'm in the process of getting results for such a data set for comparison. I'll also consult Ricardo and Dave on this.
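The bright-star selection could look something like the sketch below; `bright_subsample` and the completeness curve are hypothetical, with the cut placed at the faintest magnitude bin that still reaches 90% completeness:

```python
import numpy as np

def bright_subsample(mags, comp_mag, comp_frac, min_completeness=0.9):
    """Keep only stars brighter than the faintest magnitude bin whose
    artificial-star completeness still meets min_completeness."""
    comp_mag = np.asarray(comp_mag)
    comp_frac = np.asarray(comp_frac)
    good = comp_frac >= min_completeness
    if not good.any():
        raise ValueError("no bin reaches the requested completeness")
    mag_limit = comp_mag[good].max()
    return mags[mags <= mag_limit], mag_limit

# Hypothetical completeness curve: magnitude bin vs. recovered fraction
comp_mag = np.arange(20.0, 24.5, 0.5)
comp_frac = np.array([0.99, 0.98, 0.97, 0.95, 0.93, 0.90,
                      0.80, 0.60, 0.40])
mags = np.linspace(20.0, 24.25, 50)
sub, mag_limit = bright_subsample(mags, comp_mag, comp_frac)
```

The idea is that the ML fit would then see only the high-completeness subsample, while the full catalog down to 24.25 stays available for the absolute magnitude and distance work.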

Surface brightness
In other news, I've drafted the surface brightness code. It should be relatively straightforward, but something still isn't quite right. I think I'm close, though, and hope to fix this in the near future.
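For reference, one common definition I may end up using is the mean surface brightness within the half-light radius: half the total flux spread over an area of pi * r_h^2, i.e. mu_eff = m + 2.5 log10(2 pi r_h^2) with r_h in arcseconds. A quick sanity-check version (the inputs are illustrative, not Wil1's measured values):

```python
import numpy as np

def mu_eff(app_mag, r_half_arcsec):
    """Mean surface brightness (mag/arcsec^2) inside the half-light
    radius: half the total flux over an area pi * r_h^2, so
    mu_eff = m + 2.5*log10(2 * pi * r_h^2)."""
    return app_mag + 2.5 * np.log10(2.0 * np.pi * r_half_arcsec**2)

# Illustrative inputs only (not Wil1's actual m and r_h)
sb = mu_eff(15.0, 100.0)
```

A closed-form check like this is handy for debugging the drafted code: if the code's answer for a simple test galaxy disagrees with the formula, the bug is in the code.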
