Thursday, July 22, 2010

Double Rainbow

Things are moving smoothly here in Wil1 land. I made some important progress today.

Photometric Uncertainty: After a bit of debugging I have the photometric uncertainty as a function of magnitude from the artificial star tests. I've been using it all over the place--to calculate the distance and the morphology of Wil1. 

Distance: The official new distance calculation has been finalized. The distance to Wil1 is 36 kpc, which corresponds to a distance modulus of 17.781. The prize for best fit main sequence goes to M92. My tentative distance uncertainty is 1 kpc, but as Beth pointed out, we need to adjust this based on the best fit distances of the runner-up main sequence candidates.
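
For the record, the modulus follows directly from m - M = 5 log10(d / 10 pc); a quick IDL sanity check of the arithmetic (not the actual distance code):

    d_kpc = 36.0
    dist_mod = 5.0 * ALOG10(d_kpc * 1000.0 / 10.0)   ; 5 log10(d / 10 pc)
    PRINT, dist_mod                                  ; ~17.78, as quoted above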

Morphology: With the new distance modulus and fiducial in hand, I updated the spatial density plot of Wil1. I also updated the code to include a color cut based on an envelope derived from the photometric error, which I calculated yesterday and finalized this morning. The plot already looks better with the 15-sigma contour back in action. I'm using a larger field now than I was in May (14'x14' as opposed to 10'x10'). I compared several smoothing lengths between 0.3' and 0.6'. At this point the choice among them is rather arbitrary: I think 0.5' or 0.6' give the least noise, but all of the smoothing lengths show essentially the same features and irregularity with varying degrees of noise. I'm going to run with the 0.6' smoothing for the sake of a paper draft.
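
For reference, the density map boils down to binning the star positions and convolving with a Gaussian kernel. A minimal IDL sketch, assuming x and y are offsets from the Wil1 center in arcmin and treating the smoothing length as the kernel FWHM (PSF_GAUSSIAN is from the IDL Astronomy Library; the bin size and all variable names here are hypothetical):

    binsize = 0.1                                ; arcmin per pixel (assumed)
    im = HIST_2D(x, y, BIN1=binsize, BIN2=binsize, $
                 MIN1=-7.0, MAX1=7.0, MIN2=-7.0, MAX2=7.0)   ; 14'x14' field
    kernel = PSF_GAUSSIAN(NPIXEL=31, FWHM=0.6/binsize, /NORMALIZE)
    smooth_im = CONVOL(FLOAT(im), kernel, /EDGE_TRUNCATE)

Swapping the FWHM value is all it takes to compare the 0.3'-0.6' smoothing lengths.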

Simulation: Beth is going to send me the debugged simulation code she worked so hard on so I can compare the real results to a simulation of the same number of stars. That means digging out the number of stars as output by the ML code. Once the simulated positions are made it's just a matter of putting them through the plotting routine to create the same spatial plots for comparison.

Tomorrow:

Things have been going well as far as progress is concerned. Results are coming together quickly, as expected. Tomorrow I need to calculate uncertainties on the structural params from the ML results and the distance. I'll also spend a fair amount of time adding the results I have to the paper which I haven't touched in a while. My main goal for tomorrow is to tie up loose ends and get real numbers into the paper. When that's all done I'll move on to wrapping up the simulation analysis and completing the last few calculations--absolute magnitude and surface brightness and the tidal radius what-ifs. By this time next week there will definitely be a paper draft.

Aside from wrapping up calculations and paper writing, next week will be all about documentation and cleaning up my directories.

Wednesday, July 21, 2010

Another belated update. (I haven't been much in the mood for blogging lately for some reason.)

So--ARTIFICIAL STAR TESTS ARE DONE. Yay. They're finally looking up to snuff, so I'm pretty pleased. Definitely excited to be sleeping more this week than last. All of the artificial star data has been compiled into masterlists and calibrated. I've also calculated the completeness limit from all of these runs put together. Check it out! I was a little disappointed at first by the completeness levels at the bright end; I'd like them all to be 100% across the board. Beth pointed out that at the bright end there's some shot noise due to low number statistics (there are only a handful of stars in some of the brightest bins). Additionally, my chi/sharp cut begins to get a lot looser around r = 22.5, which explains the bump in completeness there. In general, I'm happy with the completeness.

[Figure: completeness as a function of magnitude from the combined artificial star tests]
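
For reference, the completeness calculation itself boils down to the fraction of injected stars recovered per magnitude bin. A minimal IDL sketch, assuming mag_in holds the injected (calibrated) magnitudes and recovered is 1 where a star was matched in the output masterlist and 0 otherwise (both names hypothetical):

    binsz = 0.25                       ; bin width in mag (assumed)
    mmin  = MIN(mag_in)
    nbins = CEIL((MAX(mag_in) - mmin) / binsz)
    comp  = FLTARR(nbins)
    FOR i = 0, nbins-1 DO BEGIN
      w = WHERE(mag_in GE mmin + i*binsz AND $
                mag_in LT mmin + (i+1)*binsz, nin)
      IF nin GT 0 THEN comp[i] = TOTAL(recovered[w]) / FLOAT(nin)
    ENDFOR
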
Next on the agenda were the ML results. I started out with the idea that I would compare the results from data down to the 75% (r ~ 25) completeness level to the results from data down to the 90% (r ~ 24.75) completeness level. Unfortunately, both of these looked terrible. (Also unfortunately, I can't seem to upload more than one image right now.)

So the first thing I did was repeat the ML calculation using a shallower set--I went back to the trusty r < 24.25 that I had used for my thesis. This data was still not great, but it was better, as expected. At this point I had to go back to the drawing board and was getting worried that something was seriously wrong. Then I remembered the conversations with Ricardo about the initial conditions for the ML calculation. Small numbers play a role in screwing with ML results, but Ricardo had previously mentioned that initial conditions likely play an even bigger part in changing results. I have more stars in my masterlist than ever before, so I put my chips on the initial conditions. I went back and repeated the ML calculation iteratively for several magnitude limits, each time using the result of the last run as the input for the next one. At r < 24.25 the results converged after only 3 iterations, and at r < 24.75 the results converged after 4 iterations and were looking great. The two were also consistent with one another, so things are looking good.
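
For the record, the iteration loop is simple to write down. A sketch, with RUN_ML as a hypothetical stand-in for a single pass of the ML code (the real entry point is struct_ML_grid_kpno.pro, whose actual call signature differs):

    tol    = 1e-3                      ; convergence tolerance (arbitrary)
    params = params0                   ; initial guess for the structural params
    REPEAT BEGIN
      old    = params
      params = RUN_ML(starlist, INIT=params)   ; feed the last result back in
    ENDREP UNTIL MAX(ABS(params - old)) LT tol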


I spoke with Beth about which magnitude limit to choose, and she suggested I next go to the morphology to see how it was looking at each of the magnitude limits. First, I checked that the results were sane. One concern was that the calculated position angle is now much lower than previous calculations (~62 degrees as opposed to Martin et al. 2008's 77 degrees). An overplotted ellipse showed that the PA looks good, and the shape of Wil1 was recognizable even with many more stars than we'd previously had. The choice of magnitude limit has thus come down to signal to noise. A visual inspection of the morphology at r < 24.25 and r < 24.75 suggests a higher S/N in the shallower data. In fact, my signal-to-noise calculation revealed that S/N is better by a factor of 10 with a maglim of r = 24.25. The reason is that the number of galaxies increases so rapidly at fainter magnitudes that the signal gets washed out at the fainter levels.

I've now essentially decided on a magnitude limit of r =24.25 for my further calculations. My next steps are:

1. Calculate the photometric uncertainty as a function of magnitude from the artificial star data.

2. Incorporate this photometric uncertainty and the isochrone/fiducial uncertainty into the distance code. Beth suggested that I use either the minimum photometric error or 0.05, whichever is larger, as the expected uncertainty in our model isochrones and empirical fiducials (see the sketch after this list).

3. I'll then calculate the distance to Wil1 and identify the best fit main sequence model.

4. Using the results of the distance calculation, I'll more carefully choose the stars that are being used to describe the shape of Wil1. I'll also experiment with different smoothing lengths in an attempt to maximize the signal-to-noise. At the moment I'm getting a maximum signal of 15 sigma over the background level.

5. After the morphology is finished, I'll move on to things like the final absolute magnitude and surface brightness calculations. We'll also want to simulate fake Wil1s to compare the results against and to calculate the asymmetry parameter. Then I should be home free for putting results and conclusions in the paper and cleaning up my code next week.
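
The error floor from step 2 is a one-liner; a sketch, with photerr as a hypothetical array of the photometric errors from the artificial star tests:

    sigma_model = MAX([MIN(photerr), 0.05])   ; larger of min phot. error and 0.05 mag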

Thursday, July 15, 2010

Belated Wed update

-Confirmed the bug from before.

-Found two more bugs that popped up in the post-allframing analysis.
--one of which was an incorrect application of the chi/sharp function. I was previously performing the cut before the data was calibrated, but my chi/sharp function is manually set to calibrated magnitudes! This was killing the completeness.

-Automated DAOmaster for the first time. It's relatively easy to do by hand, and I'd previously had trouble writing a shell script to automate it--but now I have one! Life just got that much simpler.
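
For the record, the whole trick to automating it is that DAOmaster reads its interactive prompts from stdin, so you can pipe in a canned answer file. The same idea works from inside IDL (dm.inp is a hypothetical file name, and its contents have to match the exact prompt sequence of your DAOmaster build):

    SPAWN, 'daomaster < dm.inp > dm.log'   ; answers supplied by dm.inp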

-Calculated the completeness from all the artificial star tests together--a total of XXXXX stars. Much happier about the completeness limit, not so much about the values at the bright end--I'm getting more like 97% instead of 100% brighter than r = 22.5.

-More than halfway through the corrected ASTs.

Wednesday, July 14, 2010

Good news!
As of yesterday all of the Allframing was completed, so I was ready to do the post-allframe analysis and prepare the masterlist for calibration and the completeness calculation. I spent most of the afternoon doing this. I also automated all of the code to create the masterlists and calibrate them for all of the artificial star test iterations.

Bad news!
When I calibrated the data, the g-band offset was still there.  :( :( :( I had debugged this for the first test and checked that the calibration was correct, but for some reason it was now incorrect in the other 19 tests. I discovered a bug in the program that creates my input addstar files, which amounted to not applying the offset fix. Essentially, I did it correctly for the first test, and then in the process of automating that code I introduced a bug that undid the correction. This morning I began feverishly correcting this. I'm now running two tests--one with the bug corrected and one omitting the zero-point fix altogether. The latter should replicate the current results I have; the former should correct them. I'm doing both on two more iterations of the artificial star tests in order to quadruple-check that this is fixed before I go on. After those are done I should have 3 tests down and 17 to go.

17 is no biggie. I'm pretty angry with these artificial star tests at this point so I'm going to run them all night until they're finished. They'll be done in the morning even if I have to stay up all night to finish them. I'm also toying with the idea of doing 25 tests instead of 20, but I'll wait to make sure that the other problems are fixed before I go overboard on anything.

Good news!
My surface brightness calculation is fixed. Turns out the problem was a unit error--my number looked wrong because I was comparing it to the Martin et al. 2008 value, which was in different units. The calculation itself is now just waiting on the ML results.
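
To illustrate where the units can bite (a hedged sketch, not my actual code): surface brightnesses like the Martin et al. 2008 values are conventionally quoted in mag per square arcsec, so any area has to be converted to square arcsec before taking the log. For example, the mean surface brightness within the half-light radius under one standard convention:

    ; rh_arcmin and m_half (apparent mag of half the light) are hypothetical
    area_arcsec2 = !PI * (rh_arcmin * 60.0)^2      ; arcmin^2 -> arcsec^2
    mu_eff = m_half + 2.5 * ALOG10(area_arcsec2)   ; mag arcsec^-2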

Bad news!
I can't get any idea of the completeness until the artificial star tests are corrected. The bug affected only the .add input files and not the effective ones. This means that I can't properly compare the number input to the number found for the completeness calculation. It's also affecting the magnitude bins. That means the ML code with the chi/sharp cut will have to wait until tomorrow morning.

Good news!
That chi/sharp cut is fixed as of yesterday and I wrote a function to apply it to a masterlist of my choosing. I've also automated the masterlist and calibration code already, so post-allframing analysis is pretty straightforward.

Tomorrow:
There's a lot of daomatching and daomastering to do after allframe finishes on all of these tests. That's going to be the most tedious thing to do and the reason why I might be up late tonight. Once that's done I will:
1. Create the masterlists and apply the chi/sharp cut.
2. Calibrate them.
3. Calculate completeness.
4. Choose a completeness limit.
5. Run the ML code using this limit and check the results.

And beyond:
6. Modify/run the absolute magnitude code depending on the completeness limit I choose for this calculation and using the ML results.
7. Run the surface brightness calculation using the ML results.
8. Run the distance calculation using the completeness/photometric uncertainty results.
9. Recreate the spatial map.
10. Run the simulation based on the ML results.

Wednesday, July 7, 2010

 Today:
Yesterday I screwed something up with the completeness calculation when I went to apply Dave's suggestion. For some reason the positions of the artificial stars weren't matching properly, and so I was getting screwy completeness numbers. I ended up remaking the masterlist, and that fixed the problem. I think I had inadvertently changed the addstar files, so they weren't matching properly with the masterlist.

In lieu of a better fix, I manually changed the addstar input magnitudes in the g-band by 0.35. This is the average value of the offset in the g-band brighter than r = 24.5. Unfortunately, when I was doing the post-allframe analysis I screwed up one of the allframe files, so I have to re-run it. (In case you ever wondered why I keep the same files in so many different places, that'd be why: in one iteration of daomatch, you can corrupt an allframe file and have to re-run everything. C'est la vie.) Once allframe finishes again, the rest should be quick and the offset should be gone.

I was thinking about the nature of fixing the problem this way, and it made me concerned about the way I was planning to calculate the photometric uncertainties. At first I thought I would need a different approach: in tweaking the offsets, I could easily tweak them so much that the photometric uncertainties would turn out to be artificially low. However, the output magnitudes that I use to calculate the offsets follow from the arbitrarily chosen magnitudes I input, so the tweaking isn't a problem as long as it's done pre-daophot/allstar/allframe. This is something I've been working on justifying to myself, and I think I finally have.


While allframe runs, I've been working on Dave's approach to a chi and sharp cut. He suggested that we base our cuts on the envelopes defined by the artificial stars. These are very well behaved so it makes total sense. I empirically determined the envelopes and then interpolated functions to define the upper and lower limits on chi and sharp. My functions look good, but I'm trying to use a where statement to define the areas of chi and sharp space to allow. When I plot the results, my stars aren't within those bounds, so there's still a bug in the code. I think I need to take a break from it and come back later tonight. It'll be a good chance to take solace in the a/c in the lab instead of my sweltering apartment.
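
For reference, the cut I'm after looks roughly like this (all names hypothetical): interpolate the artificial-star envelopes onto each star's magnitude and keep stars inside both bounds. One classic bug with this kind of where cut is evaluating the envelopes on their own magnitude grid instead of at each star's magnitude, so that's worth checking.

    chi_lim = INTERPOL(env_chi_hi,   env_mag, mag)    ; upper chi envelope at each star
    sh_lo   = INTERPOL(env_sharp_lo, env_mag, mag)    ; lower sharp envelope
    sh_hi   = INTERPOL(env_sharp_hi, env_mag, mag)    ; upper sharp envelope
    good = WHERE(chi LT chi_lim AND sharp GT sh_lo AND sharp LT sh_hi, ngood)
    IF ngood GT 0 THEN keep = masterlist[good]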

Tomorrow (not in this order):
1. With my new masterlist defined by the chi and sharp functions,
  • calculate the completeness and see how awesome it looks
  • run the ML code to see if the results are better behaved; if they're not, I'll need to make a choice about a brighter completeness limit to use for the ML calculation.
2. Edit the code for the artificial star tests so that I can run multiple tests. The positions of the stars need to be offset by a small amount each time. I also need to iteratively change the seed for the random selection of my stars to change things up a bit. Then start allframing like crazy!

Friday, July 2, 2010

I haven't posted all week. Mostly because there hasn't seemed to be anything new to post. But finally--an update!

Distance
I've spent my week doing a few things. First, the distance code is done. It took me longer than expected to finish the bootstrap which will be used to calculate the uncertainty in the distance. It ended up being tricky to automate everything to give me exactly what I want with little effort. Hopefully it will be a pleasure to run later when I calculate the best fit distance.
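
In sketch form, the bootstrap amounts to: resample the stars with replacement, refit the distance, and take the spread of the refit moduli as the uncertainty. FIT_DISTANCE is a hypothetical stand-in for one pass of the fit:

    nboot = 1000                       ; number of bootstrap resamples (assumed)
    nstar = N_ELEMENTS(mags)
    dms   = FLTARR(nboot)
    FOR i = 0, nboot-1 DO BEGIN
      idx    = FLOOR(RANDOMU(seed, nstar) * nstar)   ; draw with replacement
      dms[i] = FIT_DISTANCE(mags[idx], colors[idx])
    ENDFOR
    dm_err = STDDEV(dms)               ; distance modulus uncertainty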

ML
Second, I've been fighting with the ML code for much of this week. At first I wasn't getting any sort of sane response from the code--it was just outputting my input values 1000 times. I found the bug there: a discrepancy between the field area calculation in LIKE_exp_kpno.pro and that in struct_ML_grid_kpno.pro. The most updated versions of these can now be found in /home/gail/comparedata. The other thing to keep in mind in the future is the shape of the input field. I settled on using a circular field of a certain radius around Wil1. This is what I used for my thesis, and I wanted to be able to directly compare. As such, the code is set up to calculate the area of the field assuming it's circular. Should I instead use a rectangular field (e.g., by taking out the radius cut in preparelist.pro), I'll need to change the way I'm calculating the field area in struct_ML_grid_kpno.pro.
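
The note-to-self in code form--the field area convention has to stay in sync between preparelist.pro and struct_ML_grid_kpno.pro (variable names hypothetical):

    area_circ = !PI * rfield^2                   ; circular field of radius rfield
    area_rect = (xmax - xmin) * (ymax - ymin)    ; rectangle, if the radius cut goes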

I was interested in the ML results for three data sets: the 2006 data, the 2010 data with strict chi/sharp cut, and the 2010 data with the loose (2006) chi/sharp cut. When I ran the ML code on the 2010 data using the looser chi/sharp cut, I found that there were some anomalies in the results. There was a secondary peak at e = 0 and also at PA ~ 30 degrees. There was also a double peak in the half-light radius distribution. The primary peaks lined up with the SDSS data, but these double peaks were troubling. I spent some time seeing if the parameters I was using could be having this effect. I experimented with several things that I had tweaked before: the closeness of the color cut and the field size. For all of the runs I left the faint magnitude limit at 24.25--the limit used in my thesis and also probably what our completeness limit will end up being. Whatever I did, I couldn't get rid of the weird double peaks and so I concluded that the 2006 data and 2010 data with loose chi/sharp cuts just weren't as good as the 2010 data with strict chi/sharp cut. I talked over this result with Beth and she agreed, so I'm going to forge ahead with the 2010 data set after all.

Completeness
First up was reminding myself of the completeness numbers that I was getting to begin with. For the 2010 data set with strict chi/sharp I was getting completeness levels like:

[Figure: completeness vs. magnitude for the 2010 data with the strict chi/sharp cut]

This is compared to the completeness levels for a looser chi/sharp cut. The following is for the 2010 data with the looser, 2006 chi/sharp limits:

[Figure: completeness vs. magnitude for the 2010 data with the looser, 2006 chi/sharp cut]

As expected, the latter has a higher completeness. The 2010 data with the strict chi/sharp cut turns out the best ML results but a really bad completeness, with limits ~1 mag brighter than when using the 2006 cut. I'm going to explore using only brighter stars with high completeness for the ML code, and perhaps using the same data set but including fainter magnitudes for some of the other calculations, such as absolute magnitude and distance. Beth suggests that the absolute magnitude can be corrected for the low completeness and that the distance likely won't be affected by it. High completeness can be important for the ML code, but it's possible to just use the brighter stars where completeness is high. I'm in the process of getting results for such a data set for comparison. I'll also consult Ricardo and Dave on this.

Surface brightness
In other news, I've drafted the surface brightness code. It should be relatively straightforward, but something still isn't quite right. I think I'm close, though, and hope to fix this in the near future.

Friday, June 25, 2010

Things I did today

I haven't posted in a while, but here's a quick update.

Today I discovered:
1. The calibration stars are fine. I wasn't comparing the same area of observations before.
2. There's still a problem in the zero point that can't be located.
3. Is it a problem with the star injection?

Tomorrow:
1. Finish the distance code by adding the uncertainty bootstrap to the end a la Walsh et al. 2008.
2. Code up the surface brightness calculation from the ML results a la Maya's handy guide.