Thursday, July 22, 2010

Double Rainbow

Things are moving smoothly here in Wil1 land. I made some important progress today.

Photometric Uncertainty: After a bit of debugging I have the photometric uncertainty as a function of magnitude from the artificial star tests. I've been using it all over the place--to calculate the distance and the morphology of Wil1. 
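
For posterity, the recipe is simple: the uncertainty at a given magnitude is the spread of (recovered minus input) magnitudes for the artificial stars in that bin. Here's a minimal Python sketch of the idea (the real code lives in IDL, so the names here are placeholders):

```python
import numpy as np

def phot_uncertainty(m_in, m_out, bins):
    """Photometric uncertainty vs. magnitude from artificial star tests:
    the spread of (recovered - input) magnitudes per input-magnitude bin."""
    dm = m_out - m_in
    idx = np.digitize(m_in, bins)
    # std of the residuals in each bin; empty bins come back as nan
    return np.array([dm[idx == i].std() if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])

# e.g. sigma_r = phot_uncertainty(r_input, r_recovered, np.arange(20, 26, 0.25))
```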

Distance: The official new distance calculation has been finalized: the distance to Wil1 is 36 kpc, which corresponds to a distance modulus of 17.781. The prize for best-fit main sequence goes to M92. My tentative distance uncertainty is 1 kpc, but as Beth pointed out, we need to adjust this based on the best-fit distances of the runner-up main sequence candidates.
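
(For anyone checking the arithmetic, the conversion is just the standard distance modulus:)

```python
import numpy as np
# distance modulus: m - M = 5 log10(d / 10 pc)
print(5 * np.log10(36.0e3 / 10.0))   # 36 kpc -> 17.78
```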

Morphology: With the new distance modulus and fiducial in hand, I updated the spatial density plot of Wil1. I also updated the code to include a color cut based on an envelope derived from the photometric error, which I calculated yesterday and finalized this morning. It already looked better with the 15-sigma contour back in action. I'm using a larger field now than I was in May (14'x14' as opposed to 10'x10'). I compared several smoothing lengths between 0.3' and 0.6'. The choice of which to use is rather arbitrary at this point: I think 0.5' or 0.6' give the least noise, but all of the smoothing lengths show essentially the same features and irregularity, with varying degrees of noise. I'm going to run with the 0.6' smoothing for the sake of a paper draft.
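
For reference, the spatial density plot boils down to a binned star-count map convolved with a Gaussian. A stripped-down Python version of the idea (the actual plotting routine is IDL, and the pixel scale here is an arbitrary choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(x_arcmin, y_arcmin, fov=14.0, pix=0.1, smooth=0.6):
    """Star-count map over a fov x fov arcmin field (coordinates measured
    from the field corner), smoothed with a Gaussian of sigma `smooth`
    arcmin; `pix` is the map pixel scale in arcmin."""
    nbin = int(fov / pix)
    H, _, _ = np.histogram2d(x_arcmin, y_arcmin,
                             bins=nbin, range=[[0, fov], [0, fov]])
    return gaussian_filter(H, sigma=smooth / pix)

# comparing smoothing lengths:
# maps = [density_map(x, y, smooth=s) for s in (0.3, 0.4, 0.5, 0.6)]
```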

Simulation: Beth is going to send me the debugged simulation code she worked so hard on so I can compare the real results to a simulation of the same number of stars. That means digging out the number of stars as output by the ML code. Once the simulated positions are made it's just a matter of putting them through the plotting routine to create the same spatial plots for comparison.

Tomorrow:

Things have been going well as far as progress is concerned. Results are coming together quickly, as expected. Tomorrow I need to calculate uncertainties on the structural params from the ML results and the distance. I'll also spend a fair amount of time adding the results I have to the paper which I haven't touched in a while. My main goal for tomorrow is to tie up loose ends and get real numbers into the paper. When that's all done I'll move on to wrapping up the simulation analysis and completing the last few calculations--absolute magnitude and surface brightness and the tidal radius what-ifs. By this time next week there will definitely be a paper draft.

Aside from wrapping up calculations and paper writing, next week will be all about documentation and cleaning up my directories.

Wednesday, July 21, 2010

Another belated update. (I haven't been much in the mood for blogging lately for some reason.)

So--ARTIFICIAL STAR TESTS ARE DONE. Yay. They're finally looking up to snuff, so I'm pretty pleased. Definitely excited to be sleeping more this week than last. All of the artificial star data has been compiled into masterlists and calibrated. I've also calculated the completeness limit from all of these runs put together. Check it out! I was a little disappointed at first by the completeness levels at the bright end. I'd like them all to be 100% across the board. Beth pointed out that at the bright end there's some shot noise due to low number statistics (there are only a handful of stars in some of the brightest bins). Additionally, my chi/sharp cut begins to get a lot looser around r = 22.5, which explains the bump in completeness there. In general, I'm happy with the completeness.

Next on the agenda were the ML results. I started out with the idea of comparing the results from data down to the 75% completeness level (r ~ 25.0) to results as deep as the 90% completeness level (r ~ 24.75). Unfortunately, both of these looked terrible. (Also unfortunately, I can't seem to upload more than one image right now.)

So the first thing I did was to repeat the ML calculation using a shallower set--I went back to the trusty r < 24.25, which I had used for my thesis. This data was still not great, but it was better, as expected. At this point I had to go back to the drawing board and was getting worried that something was seriously wrong. Then I remembered the conversations with Ricardo about the initial conditions for the ML calculation. Small numbers play a role in screwing with ML results, but Ricardo had previously mentioned that initial conditions likely play an even bigger part in changing results. I have more stars in my masterlist than ever before, so I put my chips on the initial conditions. I went back and repeated the ML calculation iteratively for several magnitude limits, each time using the result of the last run as the input for the next. At r < 24.25 the results converged after only 3 iterations, and at r < 24.75 they converged after 4 iterations and were looking great. The two were also consistent with one another, so things are looking good.
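
The iteration scheme itself is dead simple. A sketch, where fit_ml stands in for whatever actually runs the ML calculation (struct_ML_grid_kpno.pro in my case):

```python
import numpy as np

def iterate_ml(fit_ml, params0, tol=1e-3, max_iter=20):
    """Re-run the ML fit, feeding each result back in as the next set of
    initial conditions, until the parameters stop changing."""
    params = np.asarray(params0, dtype=float)
    for i in range(max_iter):
        new = np.asarray(fit_ml(params))
        if np.all(np.abs(new - params) < tol):
            return new, i + 1        # converged after i+1 iterations
        params = new
    return params, max_iter          # did not converge within max_iter
```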


I spoke with Beth about which magnitude limit to choose, and she suggested I next go to the morphology to see how it was looking at each of the magnitude limits. First, I checked that the results were sane. One concern was that the calculated position angle is now much lower than previous calculations (~62 degrees as opposed to Martin et al. 2008's 77 degrees). An overplotted ellipse showed that the PA looks good, and the shape of Wil1 was recognizable even with many more stars than we'd previously had. The choice of magnitude limit has thus come down to signal-to-noise. A visual inspection of the morphology at r < 24.25 and r < 24.75 suggests a higher S/N in the shallower data. In fact, my signal-to-noise calculation revealed that S/N is better by a factor of 10 with a maglim of r = 24.25. The reason is that the number of galaxies increases so rapidly at fainter magnitudes that the signal gets washed out at the fainter levels.

I've now essentially decided on a magnitude limit of r = 24.25 for my further calculations. My next steps are:

1. Calculate the photometric uncertainty as a function of magnitude from the artificial star data.

2. Incorporate this photometric uncertainty and the isochrone/fiducial uncertainty into the distance code. Beth suggested that I use either the minimum photometric error or 0.05, whichever is larger, as the expected uncertainty in our model isochrones and empirical fiducials. (A rough sketch of this selection follows the list.)

3. I'll then calculate the distance to Wil1 and the best fit main sequence model.

4. Using the results of the distance calculation, I'll more carefully choose the stars that are being used to describe the shape of Wil1. I'll also experiment with different smoothing lengths in an attempt to maximize the signal-to-noise. At the moment I'm getting a maximum signal of 15 sigma over the background level.

5. After the morphology is finished, I'll move on to things like the final absolute magnitude and surface brightness calculations. We'll also want to simulate fake Wil1s to compare the results and calculate the asymmetry parameter. Then I should be home free for putting results and conclusions in the paper and cleaning up my code next week.
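
And the sketch promised in step 2: one plausible way to wire up the envelope selection, in Python. How the fiducial/isochrone uncertainty folds into the envelope width (here, added in quadrature) is my assumption, not gospel:

```python
import numpy as np

def in_envelope(g, r, fid_r, fid_gr, sigma_phot, fid_err=0.05):
    """Select stars whose g-r color falls inside an error envelope around a
    fiducial/isochrone sequence (fid_r must be sorted). sigma_phot(r) gives
    the photometric error at each star's magnitude; fid_err is the assumed
    uncertainty of the sequence itself (max of min phot error and 0.05)."""
    fid_color = np.interp(r, fid_r, fid_gr)         # sequence color at each r
    width = np.sqrt(sigma_phot(r)**2 + fid_err**2)  # half-width, in quadrature
    return np.abs((g - r) - fid_color) <= width
```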

Thursday, July 15, 2010

Belated Wed update

-Confirmed the bug from before.

-Found two more bugs that popped up in the post-allframing analysis.
--one of which was an incorrect application of the chi/sharp function. I was previously performing the cut before the data was calibrated, but my chi/sharp function is manually set to calibrated magnitudes! This was killing the completeness.

-Automated DAOmaster for the first time. It's relatively easy to do by hand, and I'd previously had problems writing a shell script to automate it. But now I have one! Life just got that much simpler.
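
For the record, automating it amounts to piping the answers you'd normally type into DAOMASTER's interactive prompts. Roughly, in Python (the exact prompt sequence depends on the setup, so the answers below are placeholders):

```python
import subprocess

def run_daomaster(mch_file, answers):
    """Drive DAOMASTER by feeding the responses you'd type at its prompts
    into stdin. 'answers' is the exact sequence of responses (match radius,
    minimum frames, output-file choices, ...) for your own setup."""
    stdin = "\n".join([mch_file] + list(answers)) + "\n"
    subprocess.run(["daomaster"], input=stdin, text=True, check=True)

# e.g. run_daomaster("chip7.mch", ["2, 0.5, 2", "99", "2.5"])  # placeholders
```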

-Calculated the completeness from all the artificial star tests together--a total of XXXXX stars (a naive sketch of the calculation follows this list). Much happier about the completeness limit, not so much about the values at the bright end--I'm getting more like 97% instead of 100% brighter than r = 22.5.

-More than halfway through the corrected ASTs.
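
The completeness calculation promised above is just positional matching plus a ratio of histograms. A naive Python sketch (not my actual code):

```python
import numpy as np

def completeness(m_in, x_in, y_in, x_out, y_out, bins, match_rad=2.0):
    """Fraction of input artificial stars recovered per input-magnitude bin.
    A star counts as recovered if an output detection lies within match_rad
    pixels of its input position (brute-force O(N*M) match for clarity)."""
    d2 = (x_in[:, None] - x_out)**2 + (y_in[:, None] - y_out)**2
    found = d2.min(axis=1) <= match_rad**2
    n_in, _ = np.histogram(m_in, bins)
    n_rec, _ = np.histogram(m_in[found], bins)
    return np.where(n_in > 0, n_rec / np.maximum(n_in, 1), np.nan)
```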

Wednesday, July 14, 2010

Good news!
As of yesterday all of the Allframing was completed, so I was ready to do the post-allframe analysis and prepare the masterlist for calibration and the completeness calculation. I spent most of the afternoon doing this. I also automated all of the code to create the masterlists and calibrate them for all of the artificial star test iterations.

Bad news!
When I calibrated the data the g-band offset was still there.  :( :( :( I had debugged this for the first test and checked that the calibration was correct, but for some reason it was now incorrect in the other 19 tests. I discovered a bug in the program that creates my input addstar files that amounted to not applying the offset fix. Essentially, I did it correctly for the first test and then during the process of automating that code, I introduced a bug that undid the correction. This morning I began feverishly correcting this. I'm now running two tests--one with the bug corrected and one omitting the zero-point fix altogether. The latter should replicate the current results I have. The former should correct them. I'm doing both on two more iterations of the artificial star tests in order to quadruple check that this is fixed before I go on. After those are done I should have 3 tests down and 17 to go.

17 is no biggie. I'm pretty angry with these artificial star tests at this point so I'm going to run them all night until they're finished. They'll be done in the morning even if I have to stay up all night to finish them. I'm also toying with the idea of doing 25 tests instead of 20, but I'll wait to make sure that the other problems are fixed before I go overboard on anything.

Good news!
My surface brightness calculation is fixed. Turns out the problem was a unit error--my number looked wrong because I was comparing it to the Martin et al. 2008 value, which was in different units. The final number is now just waiting on the ML results.

Bad news!
I can't get any idea of the completeness until the artificial star tests are corrected. The bug affected only the .add input files and not the effective ones. This means that I can't properly compare the number input to the number found for the completeness calculation. It's also affecting the magnitude bins. That means the ML code with the chi/sharp cut will have to wait until tomorrow morning.

Good news!
That chi/sharp cut is fixed as of yesterday and I wrote a function to apply it to a masterlist of my choosing. I've also automated the masterlist and calibration code already, so post-allframing analysis is pretty straightforward.

Tomorrow:
There's a lot of daomatching and daomastering to do after allframe finishes on all of these tests. That's going to be the most tedious thing to do and the reason why I might be up late tonight. Once that's done I will:
1. Create the masterlists and apply the chi/sharp cut.
2. Calibrate them.
3. Calculate completeness.
4. Choose a completeness limit.
5. Run the ML code using this limit and check the results.

And beyond:
6. Modify/run the absolute magnitude code depending on the completeness limit I choose for this calculation and using the ML results.
7. Run the surface brightness calculation using the ML results.
8. Run the distance calculation using the completeness/photometric uncertainty results.
9. Recreate the spatial map.
10. Run the simulation based on the ML results.

Wednesday, July 7, 2010

 Today:
Yesterday I screwed something up with the completeness calculation when I went to apply Dave's suggestion. For some reason the positions of the artificial stars weren't matching properly, so I was getting screwy completeness numbers. I ended up remaking the masterlist and that fixed the problem. I think I had inadvertently changed the addstar files, so they weren't matching properly with the masterlist.

In lieu of a better fix, I manually changed the addstar input magnitudes in the g-band by 0.35. This is the average value of the offset in the g-band brighter than r = 24.5. Unfortunately, when I was doing the post-allframe analysis I screwed up one of the allframe files, so I have to re-run it. (In case you ever wondered why I keep the same files in so many different places, that'd be why; in one iteration of daomatch, you can corrupt an allframe file and have to re-run everything. C'est la vie.) Once allframe finishes again, the rest should be quick and the offset should be gone.

I was thinking about the nature of fixing the problem this way, and it made me concerned about the way I was planning to calculate the photometric uncertainties. At first I thought I would need a different approach: in tweaking the offsets, I could easily tweak them so much that the photometric uncertainties would turn out to be artificially low. However, the output magnitudes which I use to calculate the offsets follow directly from the (arbitrarily chosen) magnitudes I input, so the tweaking isn't a problem as long as it's done pre-daophot/allstar/allframe. This is something I've been working on justifying to myself, and I think I finally have.


While allframe runs, I've been working on Dave's approach to a chi and sharp cut. He suggested that we base our cuts on the envelopes defined by the artificial stars. These are very well behaved so it makes total sense. I empirically determined the envelopes and then interpolated functions to define the upper and lower limits on chi and sharp. My functions look good, but I'm trying to use a where statement to define the areas of chi and sharp space to allow. When I plot the results, my stars aren't within those bounds, so there's still a bug in the code. I think I need to take a break from it and come back later tonight. It'll be a good chance to take solace in the a/c in the lab instead of my sweltering apartment.
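
For when I come back to it tonight, here's the cut I'm aiming for, sketched in Python (numpy boolean masks in place of IDL's where; I'm assuming the envelopes are tabulated as functions of magnitude). One thing to double check in my version: the envelopes should be interpolated at each star's own magnitude, not at the bin centers:

```python
import numpy as np

def chi_sharp_mask(mag, chi, sharp, env_mag, chi_hi, sharp_lo, sharp_hi):
    """Boolean mask of stars inside the chi/sharp envelopes, where each
    envelope is tabulated vs. magnitude (env_mag sorted) and interpolated
    to each star's own magnitude."""
    chi_max = np.interp(mag, env_mag, chi_hi)
    sh_min = np.interp(mag, env_mag, sharp_lo)
    sh_max = np.interp(mag, env_mag, sharp_hi)
    return (chi <= chi_max) & (sharp >= sh_min) & (sharp <= sh_max)

# e.g. good = masterlist[chi_sharp_mask(r, chi, sharp, env_m, chi_env, slo, shi)]
```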

Tomorrow (not in this order):
1. With my new masterlist defined by the chi and sharp functions,
  • calculate the completeness and see how awesome it looks
  • run the ML code to see if the results are better behaved; if they're not, I'll need to make a choice about a brighter completeness limit to use for the ML calculation.
2. Edit the code for the artificial star tests so that I can run multiple tests. The positions of the stars need to be offset by a small amount each time. I also need to iteratively change the seed for the random selection of my stars to change things up a bit. Then start allframing like crazy!

Friday, July 2, 2010

I haven't posted all week. Mostly because there hasn't seemed to be anything new to post. But finally--an update!

Distance
I've spent my week doing a few things. First, the distance code is done. It took me longer than expected to finish the bootstrap which will be used to calculate the uncertainty in the distance. It ended up being tricky to automate everything to give me exactly what I want with little effort. Hopefully it will be a pleasure to run later when I calculate the best fit distance.

ML
Second, I've been fighting with the ML code for much of this week. At first, I wasn't getting any sort of sane response from the code, it was just outputting my input values 1000 times. I found the bug there--a discrepancy between the field area calculation in LIKE_exp_kpno.pro and that in struct_ML_grid_kpno.pro. The most updated versions of these can now be found in /home/gail/comparedata. The other thing to keep in mind in the future is the shape of the input field. I settled on using a circular field of a certain radius around Wil1. This is what I used for my thesis and I wanted to be able to directly compare. As such, the code is set up to calculate the area of the field assuming it's circular. Should I instead use a rectangular field (e.g. by taking out the radius cut in preparelist.pro), I'll need to change the way that I'm calculating field area in struct_ML_grid_kpno.pro.
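
Schematically, the note-to-self is just to keep the star selection and the area normalization consistent:

```python
import numpy as np

def field_area(shape, radius=None, xr=None, yr=None):
    """Area of the field used to normalize the background density in the
    ML code. This must match how the star list was actually selected
    (circular radius cut vs. rectangular cuts in preparelist.pro)."""
    if shape == "circle":
        return np.pi * radius**2                   # radius cut around Wil1
    return (xr[1] - xr[0]) * (yr[1] - yr[0])       # rectangular field

# e.g. area = field_area("circle", radius=10.0)    # 10' field, in arcmin^2
```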

I was interested in the ML results for three data sets: the 2006 data, the 2010 data with strict chi/sharp cut, and the 2010 data with the loose (2006) chi/sharp cut. When I ran the ML code on the 2010 data using the looser chi/sharp cut, I found that there were some anomalies in the results. There was a secondary peak at e = 0 and also at PA ~ 30 degrees. There was also a double peak in the half-light radius distribution. The primary peaks lined up with the SDSS data, but these double peaks were troubling. I spent some time seeing if the parameters I was using could be having this effect. I experimented with several things that I had tweaked before: the closeness of the color cut and the field size. For all of the runs I left the faint magnitude limit at 24.25--the limit used in my thesis and also probably what our completeness limit will end up being. Whatever I did, I couldn't get rid of the weird double peaks and so I concluded that the 2006 data and 2010 data with loose chi/sharp cuts just weren't as good as the 2010 data with strict chi/sharp cut. I talked over this result with Beth and she agreed, so I'm going to forge ahead with the 2010 data set after all.

Completeness
First up was reminding myself of the completeness numbers that I was getting to begin with. For the 2010 data set with strict chi/sharp I was getting completeness levels like:

This is compared to the completeness levels for a looser chi/sharp cut. The following is for the 2010 data with the looser, 2006 chi/sharp limits:
As expected, the latter has a higher completeness. The 2010 data with the strict chi/sharp cut turns out the best ML results but a really bad completeness, with limits ~1 mag brighter than when using the 2006 cut. I'm going to explore using only brighter stars with high completeness for the ML code, and perhaps using the same data set but including fainter magnitudes for some of the other calculations, such as absolute magnitude and distance. Beth suggests that the absolute magnitude can be corrected for the low completeness, and that the distance likely won't be affected by it. High completeness can be important for the ML code, but it's possible to just use the brighter stars where completeness is high. I'm in the process of getting results for such a data set for comparison. I'll also consult Ricardo and Dave on this.

Surface brightness
In other news, I've drafted the surface brightness code. It should be relatively straightforward, but something still isn't quite right. I think I'm close, though, and hope to fix this in the near future.

Friday, June 25, 2010

Things I did today

I haven't posted in a while, but here's a quick update.

Today I discovered:
1. The calibration stars are fine. I wasn't comparing the same area of observations before.
2. There's still a problem in the zero point that can't be located.
3. Is it a problem with the star injection?

Tomorrow:
1. Finish the distance code by adding the uncertainty bootstrap to the end a la Walsh et al. 2008.
2. Code up the surface brightness calculation from the ML results a la Maya's handy guide.

Friday, June 18, 2010

Friday

Today:

1. I found a bug in the addstar file code that will hopefully have a profound impact on my completeness results. Unfortunately, I accidentally corrupted my only copy of one of the allframe output files, so I have to re-run Allframe all over again. :( I still have high hopes that my completeness problems will all magically disappear after this. If so, Allframing begins this weekend.

2. I spent part of the morning working on the manual that tracks all of my analysis. I had originally written it as a .doc because I was working from my computer during a break from school and the ssh connection was horrible. Today I converted it all to a latex document and made the formatting really nice. I also added a preface and have been trying to incorporate a table of contents. Some other time, I'll get to updating the content. So far, the Bertin software sections are largely complete; the Allframe and Artificial Star Test sections still have a ways to go.

3. During the afternoon I revised the first two sections of the paper. Overall, the intro reflects that of my thesis much more now, with plenty of revisions I nabbed from the first draft of the real paper. I'll proofread draft 2 and then send it on to Beth for review. Next on the paper agenda will be finishing the completeness section and moving on to results.

Monday:
1. Completeness like it's never been done before.
2. Adding bootstrapping to the distance code.
3. More work on the paper.

Priorities

Today marks the end of week 4. I thought I'd take a moment to pause and reflect. The first month has seen fewer results than I had initially hoped. The completeness stuff is taking much longer than expected due to all the problems I'm having. I will be VERY glad to have all of that behind me. However, it seems to finally be almost together. I've also set it aside at times to get ahead on other things, so while there aren't a lot of results to show, I have gotten around to a lot. A recap for posterity:

Done
1. UMD cosmology meeting
2. 1st draft of intro and data sections of the paper
3. Bright artificial star tests
4. Completeness calculation of said tests
5. Near-convergence on the real artificial star tests
6. Distance calculation (all but the distance uncertainty calculation)

To Do
1. Perhaps obviously, completeness tests are back to being my #1 priority. I already have the code to calculate the completeness (once it's working properly!). Once I have reliable results there will be a lot of Allframe action.

2. Revising the first two sections of the paper.

3. Once I've calculated completeness and photometric uncertainty, I need to include these parameters in the distance calculation and finalize that.

4. I can then use the completeness limit to properly calculate ML results. The only calculation I really need to check on is the surface brightness. I believe the ML code already does it, or at least makes it easy to do, but I'll make sure the calculation is ready to go before the ASTs are finished.

5. The resulting number of stars provided by ML will then be input into the absolute magnitude code and that can be finalized.

6. Using the completeness limit and the best matched fiducial at the appropriate distance, I can finalize the morphology.

7. The structural parameters will inform the simulation, which Beth and/or I probably just need to start over on to get right.

8. Paper Paper Paper--When all is said and done it should be relatively straightforward to write up all the results. I've got my thesis to work from and also Beth's 2006 paper. I plan to write bits and pieces as I go, in the order mentioned above.

9. Analysis Manual--Along the way, I also need to finish the document that describes how I've done all this. A lot is included already, but I've indicated where I need to add detail. I also need to include a lot more information about the Allframe portion and need to add a section for the Artificial Star Tests. There's also a lot of formatting that needs to get done. This is fairly low priority at the moment because this is something that could technically be finished even after I leave here, though I hope to make some time before then.

Distance and Completeness Updates

Fun
Today started out with breakfast in Zubrow for all the KINSC students. Twas good. It's also pot luck day!

I spent part of the afternoon helping some other folks figure things out. But now for my own work:

Distance
I came back and double checked my input to the distance code, just to be sure that everything was good to go there. I checked my technique against Dave's description of his calculation and went back to Shane's Bootes II paper, which Dave had cited. I discovered that there were a few additional things that I hadn't coded yet. Primarily, after I select the best fit isochrone or fiducial and the best distance, I need to bootstrap the distance calculation with the best fit to derive the error in the distance. Everything besides this last step is set, so I'll add the bootstrap after I've got the completeness stuff figured out.

I also tracked down the info about M13 and M92 (age/metallicity/distance) which was missing from yesterday's post.

Completeness
I spent all afternoon working on the completeness calculation for the actual ASTs. I really want to get the full-fledged Allframing started before the weekend. At this point I need the results to make significant progress on other calculations. Essentially, a lot of things will come together quickly once the artificial star tests are finished and the completeness levels are calculated.

Last I left the completeness calculation, I had a debugged version for the bright star tests, but had run into trouble matching the positions for the real test. That problem is still there. I did find a bug in the code where I was creating the input addstar file, so I made new ones and am running Allframe again. Hopefully that will solve the problem. In the meantime, I'm going to double check a few other things. Fingers crossed, this will not be an issue by lunchtime. Below are some signs of problems I've discovered on my hunt for answers. These are results from the bad addstar input runs, so I'll go back after running Allframe again to see if these red flags are still there.

A few observations:
1. I'm definitely getting 96% completeness for the bright star tests at r < 22.5. I've confirmed qualitatively and quantitatively.
2. Without applying a faint magnitude limit, only 6 stars out of 1765 artificial stars are matching between my art star input and the output from Allframe.
3. There are fewer stars coming out of allframe for the true distribution than are getting input as artificial. This could possibly be correct but seems unlikely. Allframe will pick up noise down to 27.5 mags, so even for stuff that's very faint I should be getting a lot of signal there. This is not to mention the true stars that allframe should be picking out of the image, too.
4. A quick, informal survey of 22 stars in the center of the ref frame shows that 3/22 visible artstars weren't subtracted (at least not well) by DAOphot/Allstar. Real stars are getting subtracted pretty well.
5. The bright star test outputs 1426 more stars than the real artificial star test. After calibration the bright star masterlist contains 1538 more stars than the actual art star masterlist. After final chi/sharp cuts, the margin is almost twice as many stars in the bright masterlist as in the actual AST masterlist.
6. I've double checked my option files, the program I use to create the masterlist, and the calibration program and they're exactly the same for both the bright tests and the actual tests.

Tomorrow:
1. Paper revisions
2. Completeness calculation
3. Allframing

Thursday, June 17, 2010

Distance Calculation

I downloaded 4 isochrones for the distance calculation. They are:

13Gyr, [Fe/H] = -2.3
13Gyr, [Fe/H] = -2.0
10Gyr, [Fe/H] = -2.3
10Gyr, [Fe/H] = -2.0

I downloaded these from Aaron Dotter's database at http://stellar.dartmouth.edu/~models/isolf.html. For each of the four, I assumed [a/Fe] = 0.0, Y = 0.245 + 1.5Z, and used the SDSS color database in ugriz.

I'll also be comparing to the M13 and M92 fiducials. The info on them is:

M13: (Grundahl et al. 1998)
distance = 7.7 kpc (dm = 14.431)
age = 12 +/- 1 Gyr
metallicity = [Fe/H] = -1.61
[a/Fe] = 0.3


M92: (Pont et al. 1998)
distance = 8 kpc (distance modulus (dm) = 14.515)
age = 14 +/- 1.2 Gyr
metallicity = [Fe/H] = -2.2

I'll be fitting each of the fiducials and isochrones at 6 distances:

distance | dm
35.0 kpc | 17.720
36.0 kpc | 17.782
37.0 kpc | 17.841
38.0 kpc | 17.899
39.0 kpc | 17.955
40.0 kpc | 18.010

As of the end of the day Wednesday, the code was completed and de-bugged. The best match so far is M13 at a distance of 39 kpc with a total of 210 Wil1 stars within the envelope. This is derived using a magnitude limit of r = 24.25, which will need to be compared to the results of the completeness tests. Also, I'm currently using the measurement errors from the KPNO data to define the envelope around each isochrone/fiducial within which I'm matching Wil1 stars. This will need to be adjusted to the photometric uncertainty derived from the completeness tests.

Tuesday, June 15, 2010

June 15

Completeness

I finished the completeness calculation of the actual artificial star distribution today. The problem that I was finding with calibration yesterday didn't occur with the distribution I really want. From an email I sent to Beth:

The problem could be that I didn't (and still haven't) masked out any of the artificial stars. For the calibration I set a magnitude limit of r = 21.75 to only use reliable SDSS stars. My bright stars had r = 21.5 so if they overlapped with an SDSS star, they would've been included in the calibration and possibly screwed it up. In the real tests, there are very few stars that bright, so it's unlikely for an artificial star to meet the mag limit AND match the position of an SDSS star. The calibration for the real artificial star test is consistent with that of the true data, which tells me the calibrations aren't being affected by the presence of artificial stars.

I therefore assumed that the calibrations were good for the artificial data I was actually concerned about and decided to move on. I ran into some snags when I moved on to the completeness calculation itself. I worked on it for a long time and decided I just needed to take a break. I'm going to look at it again later tonight. I already have the code from the bright artificial star tests, so it should be a straightforward calculation, but apparently applying the code to the real tests is buggy.

Distance

In the meantime, I've begun coding up the distance calculation. I went back to Dave's paper to read how he did the calculation, and it's similar to how I'll be doing it. The outline of my technique is as follows:

1. Download a number of isochrones and fiducials with a range of metallicities and a few ages.
2. Match stars within 1 hlr of the center of Wil1 to these color-magnitude sequences within an envelope defined by the photometric error determined by my artificial star tests.
3. Do the same thing for an annulus representing the background sky in the field of Wil1, but far from potential member stars.
4. Count the number of Wil1 stars (member candidates - contaminants) which are consistent with the main sequence
5. Shift the distance modulus of the sequences by intervals of 0.025 mag around the approximate Wil1 distance modulus (m - M ~ 17.84).
6. Repeat steps 2-4.

Whichever distance modulus/metallicity combo matches the most member stars of Wil1 is presumed to be the best fit.
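
Schematically, the whole fit is a grid search; count_members below is a stand-in for steps 2-4 of the outline (envelope matching and background subtraction):

```python
import numpy as np

def best_distance(dm_grid, sequences, count_members):
    """Grid search over distance modulus and fiducial/isochrone: shift each
    sequence by dm, count envelope-matched Wil1 stars (member candidates
    minus scaled background), and keep whichever combination matches the
    most stars."""
    best = (None, None, -np.inf)
    for name, (seq_r, seq_gr) in sequences.items():
        for dm in dm_grid:
            n = count_members(seq_r + dm, seq_gr)  # background-subtracted count
            if n > best[2]:
                best = (name, dm, n)
    return best

# dm_grid = np.arange(17.84 - 0.5, 17.84 + 0.5, 0.025)   # 0.025-mag steps
```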

So far I have a detailed outline of the code and I've started to fill in the details. I'm going to spend some time tonight deciding exactly which fiducials/isochrones I want so that I can put them into the code in the morning. I'm hoping to have a full draft of the code by the end of the day tomorrow for debugging.

Paper

I have yet to get back to editing the paper, but I'll definitely make time before the week is out. Hopefully in the next couple of days I'll have a completed distance calculation code and be running Allframe so I'll have some extra time.

The Outside World
Today Jerry had a visitor, Michael Triantafyllou, aka the Director of the Center for Ocean Engineering at MIT. Apparently his son is interested in Haverford and was getting a tour. He gave a little talk for Jerry and his students, Peter and Andrew, and Mimi and me. He's doing research on the movement of fish as they swim. Apparently fish can stay behind rocks for a long period of time, getting their energy from the flow of the water (!) and using very little of their own muscle power to stay there. His group is trying to recreate the technology, essentially building a fish. They're using a lot of MEMS to gauge pressure changes around vortices caused by turbulence behind obstacles in a river environment. I could go on about it, but it was interesting.

Tomorrow afternoon Peter Love is giving a talk about his research. I'll probably stop by for an hour.

Monday, June 14, 2010

Bright artificial star tests completed

I finished the allframe follow-up analysis and just visually checking my calibrated masterlist, it looked as though nearly 100% of the stars were found. That means when I go to do the completeness calculation, I expect good agreement.

I first went back to the completeness test Beth gave me--calculating the number of stars per magnitude for the true images, artificial input, and fake output images. I found approximately 98% completeness at 21.5. However, as I went to fainter magnitudes I found that I was calculating higher than 100% completeness. This indicates to me that I'm likely finding all or most of the artificial stars, but perhaps also finding a few other stars which are not artificial and yet are being counted as such.

After that initial test was looking reasonable (though not perfect), I went back to matching the artificial star output to the positions of the artificial stars I input. I matched positions for all stars with g < 21.5 (the bright stars I input all had g or r = 21.5 for simplicity). I found 95.9% completeness. I went to fainter magnitude limits to make sure that I wasn't somehow picking up more stars. (This seems like it could only happen if my calibration or positions were off, since all of my stars should be accounted for with a maglim of 21.5. Neither scenario is likely, since either would cause systematic problems in the completeness and would probably reduce the completeness at 21.5 by far more than 4%.) In the 2006 paper, Beth was getting 100% completeness down to r = 22.5 mag. Mine doesn't seem to be this good for the bright tests, so I wouldn't expect it to be as good for the real artificial star tests. However, I'm zen with 96% completeness, especially if at fainter magnitudes the values are similarly reasonable. After all that, I've determined that my completeness calculation seems to be sane and that it was, in fact, a problem with the artificial star input. That's slightly comforting for my own sanity. Now that I've got the technique figured out and the code written, I'm going to apply it to the real images and re-do the artificial star tests from the beginning. I'll then re-do all of this analysis and hopefully come out with properly-calculated completeness levels in the end. I'll start Allframe running this afternoon and complete the calculations tomorrow morning.

Real artificial star tests:

I'll be inputting a grid of stars as defined in my fake CMD with a square spacing of 56 pixels. All of the stars will be located on all 18 exposures of chip 7--that's a total of 1764 stars per run. After I calculate completeness values for the first run and determine that everything is working fine, I'll need to do more runs. At this point, I'm considering 30 runs in total, each time randomly generating new artificial stars which meet the same CMD criteria. That will result in just over 52,000 stars used to calculate the final completeness. Each iteration of Allframe takes about 3 hours to run, which is a total of 90 hours. However, I can run multiple instances of Allframe at once. Last semester we found that running 8 at a time really bogged down Squid, but running 4 at a time shouldn't be a problem. My current plan is to finish about 8 to 12 instances a day once I get the real tests started. That means it should all be done in less than a week.

Note: When I did the calibration of the bright stars, I found that the g-band zeropoint offset was about 0.3 higher than for the calibration of real data. The r-band appeared consistent with the calibration of the real data and both color terms were the same. I don't think the zero points should be different at all because they are determined from stars in our images matched to those in SDSS. Needless to say, the artificial stars I'm inputting shouldn't be influencing the matching to SDSS data and so the calibration stars should be the same. In an analysis like ours, 0.3 mag is a fairly significant margin. I'm going to move forward and see if the same happens in the actual artificial star testing (as opposed to the bright artificial star testing). If I get the same result, I'll have to look into it further.

Thursday, June 10, 2010

Bright star tests

Today I spent some time trying to understand how Addstar is working and where things are going wrong. In a word, that was a fail. Addstar seems to be skipping over some artificial stars that I've indicated in my input file and I couldn't find a reason for it to do so. As I mentioned yesterday, Addstar was inserting only those stars in every other row. I found that when I decreased the spacing of the artificial stars it was then inputting alternating columns and rows to give a diamond grid effect.

While I couldn't solve the mystery of Addstar, I decided to go ahead and do more testing using only bright artificial stars in order to find a way to troubleshoot the problem. I found that Addstar added the same pattern of artificial stars for a given spacing of stars. So as long as I am careful to keep my spacing the same or double check the resulting artificial star pattern in a bright star test for a new spacing, I figure I can reliably make a new file with only those stars addstar is actually inputting for comparison later when I calculate the completeness levels.

In addition to the spacing tests, I also went back to the original artificial star input to make sure that I was only inputting stars that appear on all exposures of the same chip. When I do my Allframe analysis, I require stars to have both g and r information (all of my input artificial stars do) and to be found in at least 2 exposures. (Because the exposures are dithered from one another, there could be stars near the edges that appear on only one exposure. My method for choosing my artificial stars before today would've meant this would only be the case near the edges of my reference frame, so it should not have had a huge effect, but I did this correction to eliminate the edge factor altogether.) It ended up just being easier to ensure that all of my artificial stars appear on the overlap of all exposures. This means I'm inputting fewer stars per Allframe run than I had previously. This is fine because I can always run more than 10 artificial star tests or further decrease the artificial star spacing per run to get more artificial stars for the end statistics.

My next project will be completing the effective Addstar files (including the positions and mags of only the artificial stars that actually appear in the fake images). I've been thinking about whether it will be useful to run Allframe on the bright artificial star tests. I'm not sure it will provide much more insight than running the actual tests will. Instead, I'll practice making the effective Addstar files and double-checking the art star positions. Then I'll apply that technique to the real files and Allframe the real files again. I'll then go back to Beth's method of looking at the number of stars per magnitude on the true and artificial frames as compared to the input. If there are still major problems, it might be worthwhile to do the full Allframe analysis on the bright artificial star images.


Notes:


For now, I've settled on a 50 pix spacing in both the x and y directions. I've determined that the area that overlaps all exposures has coords
xmin = 439.4917
xmax = 3989.424
ymin = 251.866
ymax = 1840.46

I rounded the min values up and the max values down for good measure. In all, I'm inserting 70 stars in the x direction and 31 stars in the y direction. r181 still has significant astrometry problems, so I've ignored it when calculating these values. (It's one of the 2 exposures that's not used in the analysis, so I haven't even bothered to run it through DAOphot and Allstar. r183 is the other, but it will at least generate an allstar output in the usual amount of time, so I run it with the others for simplicity, though it is not used at all in the Allframe analysis.)
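
Generating the grid itself is trivial; roughly (exact star counts depend on where the grid starts and how the edges round, hence the "roughly" in the comment):

```python
import numpy as np

xs = np.arange(440, 3989, 50)    # 50-pix spacing over the x overlap region
ys = np.arange(252, 1840, 50)    # 50-pix spacing over the y overlap region
xx, yy = np.meshgrid(xs, ys)
positions = np.column_stack([xx.ravel(), yy.ravel()])
print(len(xs), len(ys), len(positions))   # roughly 70 x 31 stars
```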

Ideas about Addstar:

Just talked to Beth about a few other things to consider when figuring out the best way to work with Addstar. First, I'll generate a fraction of the stars I currently have to see if we're somehow exceeding a limit Addstar has on the number of stars that can be input at one time. The second idea is to just cut the spacing in half to see if I can get the spacing I had wanted in the first place. I mentioned this idea yesterday, and it's the desired way to do things because we want to fit a reasonable max number of stars into each AST to cut down on Allframe computing time. Additionally, it'll be easiest to generate the effective Addstar file using this approach. Right now, the easiest spacing for that seems to be 56 pix in either direction, but tomorrow morning I'll try smaller pixel spacing, perhaps 28 pix (half the original), to see what the pattern looks like.

Tuesday, June 8, 2010

Completeness problem found!

I was still having completeness problems up until group meeting this afternoon, so Beth gave me some suggestions for sanity checking things in more detail. I went back to the image files of the original data, artificial star data, and art star-subtracted data. Turns out half of the stars I was inputting weren't showing up. I looked for stars brighter than 21.5 mag and wasn't finding most of them. So, again at Beth's suggestion, I created a new .add file of artificial stars that all had a magnitude of 20 (clearly bright enough to see by eye). When I ran this file through DAOphot, Addstar, and Allstar, every other row of artificial stars was missing. That would explain, at least in part, why I wasn't finding many artificial stars.

This is more or less taking me back to square one but that's a good thing in this case. I need to take a look at how I'm running addstar and see if there's an obvious solution. Otherwise I think I could just troubleshoot the problem by putting in twice as many stars with half the spacing in the vertical direction (to get the originally intended effect). Then before calculating completeness I'll just need to create a .add file with the artificial stars that actually appear in the output of addstar.

Making the addstar fix is next on the agenda and will hopefully alleviate all of my problems, but we'll see. I'm also still going to check the aperture photometry on a few of the stars in the artificial images to be sure that the photometry is looking sane. I'll check both the bright grid and the one generated from the fake CMD.

Tomorrow is AstroPhilly! So I won't be doing much of this tomorrow, but I'll be back to it on Thursday.

Monday, June 7, 2010

June 7 Update

Today I worked on finishing up the data section of the paper and proofreading what I have. I sent the draft of the first 2 sections to Beth for comments.

I also went back to looking at the completeness stuff and going through it very carefully. Beth gave me an alternate calculation to try. Instead of matching positions, she recommended I match based on CMDs to see what the completeness levels are looking like. That way I'll know if it's still a problem with the position transformations or if Allframe simply isn't finding the stars. There are a few things that are complicating this process. The first was the error in positions. The second is that I can't be sure that the magnitudes coming out of the pipeline are matching those I put in. I'm getting the sense that this isn't a huge concern, but I'll need to test this hypothesis thoroughly tomorrow.

Tomorrow is finishing the coding on this new completeness calculation. There's also group meeting and Bruce's talk in the afternoon.

Thursday, June 3, 2010

Paper Outline

I met with Beth earlier today to discuss the outline of my paper. Here it is:

1. Intro
2. Observations and Data Reduction
-Astrometry
-Photometry
--Allframe
--Calibration
--star selection
-Completeness
3. Results
-Age, metallicity, distance
-Absolute Magnitude
-Structural Properties (incl surface brightness)
-Morphology
--KPNO
--simulated
((mass seg))
4. Implications of tidal features
-Assuming circular orbit --> instantaneous tidal radius
-assuming GC M/L (= a few) --> instantaneous tidal radius
-assuming Wolf et al. mass --> tidal radius
-assuming tidal radius = visible size --> M/L
-assuming 10:1 orbit and currently at apocenter --> tidal radius
5. Conclusions
6. Discussion

I've decided to take a break on the completeness stuff to get moving on the paper. Today and tomorrow I'll work on the first two sections. Then I'll get back to the completeness thing so that I can calculate the magnitude uncertainties. From there I'll work on determining the distance using fiducials and isochrones. When I've decided on a distance I'll use it to inform me about Wil1's age and metallicity. Then I'll go back to the structural properties to finalize them given the results of the completeness testing and also to calculate the Wil1 surface brightness. After that we've got to get back to working on the simulated galaxy to fix problems with the morphology of the simulation.

After all that gets done I'll just have the final calculations for section 4 and interpreting all the data for the conclusions and discussion section. No big deal. (If time permits, we'll then check out the possibility of mass segregation, at least in a preliminary sense.)

Completeness woes

I've been working on the artificial star testing this week and having trouble calculating the completeness. Last Thursday I ran Allframe for the first time on the fake frames. Then on Friday I completed the post-Allframe analysis and dug into calculating the completeness but I was getting crazy values that were nowhere near what they should have been.

After digging through it on Monday and Tuesday, I had fixed a few bugs. But it wasn't until yesterday that I found a conversion error between pixel values and RA and Dec. The coordinates in the reference frame were correct, but all other frames appeared to have some distortion in the placement of artificial stars. In fact, many of the stars were offset by 2", which explains why I wasn't finding them within 0.5" of where I thought I had input them.

Unfortunately, even when I increased the match length to 2.5", I still wasn't matching all of the stars, as I would have expected if that were the only problem. I chose to re-run Allframe using the fix, but I don't think that will completely solve the problem. One of the things I do in the process of running Allframe is match between all of the frames to make sure I have at least 2 detections of each star. It is possible that I was losing stars in this step because the inserted stars weren't necessarily coincident between frames (though they were meant to be). However, it looked to me as though all frames other than the reference were using the same RA and Dec values for the artificial stars, so there should have been only a very few stars lost, namely those appearing on only the reference frame. Still, this problem needed to be fixed, and it's very possible that additional errors were introduced when the transformation between frames was calculated using DAOmaster.

I've corrected several bugs in my calculation of the completeness, including how I was selecting stars. Today, I'll finish analyzing the latest Allframe run and incorporate that into my completeness calculation to see if I'm matching things any better. I also plan to meet with Beth to discuss an outline for the paper. Then I'll jump into writing the intro.

Monday, May 24, 2010

First day back on the job

It's been a fairly productive day. I met with Beth this morning to discuss directions for the summer and we seemed to be on the same page. I spent this morning going through my files on squid and getting rid of a lot of the old stuff. There were a lot of files that were unnecessarily taking up space simply because I had found better ways to do things and never had the time to go through and delete the old stuff. Along the way I would occasionally use the old files for reference so I had to go through and see exactly what needed to be kept and what could go. It was interesting seeing how far I've come over the past year and a half or so.

After cleaning out my corner of squid, the next priority is getting moving on the artificial star tests. Those are necessary to double check the completeness and the magnitude limits we set on the data for the ML code, etc. Last I had messed with that stuff, I had allstar and daophot automatically running on all the files, but hadn't checked the result because I was focusing on my thesis. It's now time to go back and revisit that work and come up with a real completeness cut-off. That will be this week's project.

Tomorrow Beth and I leave for a workshop at the University of Maryland: Advances in Theoretical and Observational Cosmology. It looks like it will be interesting and I'll be staying both Wednesday and Thursday. I'm looking forward to the observational stuff which I'm familiar with, but also the theoretical side. I know I just got back on campus, but I suppose it will also be nice to break up the not-quite-yet-established monotony.

It will also be good to return to maintaining the blog. It seems to keep me on a productive path and makes it especially easy for me to reference notes to myself. I've virtually given up my written notebook in favor of this online version.

Here's to a good summer!

Summer 2010: Goals

Back at Haverford for another exciting summer. (And it's already raining.)

This summer will be my last at Haverford and will really focus on getting the paper together for publication. I'll also wrap up some loose ends from all the work I've done.

Long-term goals:
-Write a draft of the paper

Shorter-term goals:
-Complete artificial star testing
-Fix Wil1 simulations
-Re-examine the luminosity function as a function of distance from the center of Wil1
-Quantify asymmetry in Wil1

Artificial star testing is the first priority. Then I'll probably write up something to quantify the asymmetry. Next week Beth and I will talk about the luminosity function results from the 2006 paper, my thesis, and the structure of the new paper. In a few weeks we'll look more in depth into the Wil1 simulation problem.

Monday, March 15, 2010

Artificial Star Tests and Max Likelihood

Artificial Star testing is about to get underway in earnest. My fake CMD is finally complete and I'm happy with the result. Now I'll choose how many stars to run per chip, based on a reasonable grid density. Then I'll begin running DAOphot and Allstar.

The max Likelihood results are looking decent down to a magnitude of 24.25. I'm working with a field with radius 10' about Martin et al's "center" of Willman 1. Going to 24.5th or 25th magnitude results in outrageous values for PA and, at 25th mag, even the ellipticity and rhalf are out of whack. These are with the strictest chi/sharp cuts I've made yet. They are:

-0.21 < sharp < 0.1
0.8 < chi < 1.21

The number of stars I'm inputting into the ML code with these chi/sharp cuts and a 10' radius field are:
23 mag: 502 (looks great!)
24 mag: 1095 (looks great!)
25 mag: 2405 (looks awful!)

Monday, February 8, 2010

Calibrated Masterlist and Allframe Update

Allframing is finally concluded. I ended up ditching two of my 20 exposures on each chip, both in the r band, to work around the problems with Chip2 and also some Allstar problems from Chips 4 and 6. Everything has finally been completed and the initial master list is looking good.

I then moved on to calibrating the data, using techniques similar to before. I'm using /home/gail/Wil1_goodheaders/allframe/masterlist/calibrate_allframe.pro to do this. I'm calibrating my list to SDSS data at mags fainter than 21.5 in r. I also did a cut in chi and sharp as a function of magnitude. To calibrate, I calculated an offset value between the SDSS and KPNO data in g and r. I then looked for dependences on color, magnitude, and position based on this offset and found none.

I then calculated a color term by bootstrapping my data for 1000 iterations. I had some problems throwing out outliers that emerged during this bootstrapping: there were some iterations where I wouldn't find any data within my desired error range. It's still possible that this is a bug.
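
The bootstrap looks roughly like this; the resampling and clipping details below are a Python stand-in for what the IDL code does, not a faithful copy:

```python
import numpy as np

def bootstrap_colorterm(dmag, color, n_iter=1000, clip=4.0):
    """Bootstrap the zero point and color term: resample the calibration
    stars, sigma-clip outliers, and fit dmag = zp + ct * color each time."""
    zps, cts = [], []
    rng = np.random.default_rng()
    for _ in range(n_iter):
        i = rng.integers(0, len(dmag), len(dmag))   # resample with replacement
        d, c = dmag[i], color[i]
        keep = np.abs(d - np.median(d)) < clip * d.std()
        if keep.sum() < 2:
            continue                 # no usable stars in this iteration
        ct, zp = np.polyfit(c[keep], d[keep], 1)
        zps.append(zp); cts.append(ct)
    return np.mean(zps), np.mean(cts), np.std(zps), np.std(cts)
```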

I moved ahead to plot the data with the fits, and they look really nice. Even with outliers, the calculated calibration fits the data very nicely. I briefly checked that the mean and median of the zp and ct in all of the bootstrapped iterations were similar, and they were. I also checked the variance in these parameters, and they looked sane, though the r-band has much larger uncertainties all around. I attribute this to the fact that I can't get rid of outliers in the r band; I was able to throw them out in the g band after relaxing to a 4-sigma interval (up from 3-sigma).

I went ahead and made a new, calibrated masterlist using two versions of my chi and sharp cut, one slightly stricter than the other. I also plotted a new calibrated CMD. We definitely have a lot more sources now, though the main sequence is harder to pick out in my opinion. I overplotted the M13 & M92 fiducials for reference in ~/Wil1_goodheaders/allframe/masterlist/calibCMD_allframe.ps (the dashed line is M92). It does look like we're going a little deeper, though.