Friday, June 12, 2009

Okay, here's the word.

I'm having issues with the single-exposure CMD. Out of ~3000 stars in each catalog, only ~30 match to within 0.5 arcsec, and 156 match to within 1 arcsec. But shouldn't bumping the tolerance to 1 arcsec add only a handful of new matches, since everything real should already have matched at 0.5 arcsec? The comforting news is that these agree with the matches Python found...but that's only slightly comforting.

Because...when I go to match to the SDSS catalog, not a single star matches. I've thought about this for a while now and can't figure it out. Even if those 30 stars were the only ones that matched between the bands, my SDSS catalog is centered on Wil1, which is on chip 7, so I'm having a hard time believing it's not finding any matches.
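For reference, the matching itself is just a nearest-neighbor search on the sky within a tolerance. This isn't my actual IDL code, just a minimal Python sketch of the same check (brute force, small-angle approximation, made-up array names):

    import numpy as np

    def match_radec(ra1, dec1, ra2, dec2, tol_arcsec=0.5):
        # For each star in catalog 1, find the nearest star in catalog 2
        # and keep the pair if the separation is within tol_arcsec.
        # All coordinates in degrees; brute force is fine for a few thousand stars.
        tol_deg = tol_arcsec / 3600.0
        idx1, idx2 = [], []
        for i in range(len(ra1)):
            dra = (ra2 - ra1[i]) * np.cos(np.radians(dec1[i]))  # shrink RA by cos(dec)
            sep = np.sqrt(dra**2 + (dec2 - dec1[i])**2)
            j = np.argmin(sep)
            if sep[j] < tol_deg:
                idx1.append(i)
                idx2.append(j)
        return np.array(idx1), np.array(idx2)

If the SDSS matching really returns zero pairs even with a generous tolerance, the first thing something like this makes easy to check is whether the two sets of coordinates even overlap (min/max RA and Dec of each catalog).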

I took a look at the single exposures in each band and compared the star-subtracted image to the unsubtracted image. There are some donuts associated with the bigger stuff, but I would say the majority are subtracting really well. This is true for both bands. Taking a look at the stacks, though, there seems to be a much bigger problem with donuts in both bands. I'm wondering whether the fitting radius should be increased for the stacks?

I made a CMD of the stacks daophot'd with the old .opt parameters. There's a slight hint of it getting deeper--one star fainter than 24 mag--but besides that they look about the same.

As I mentioned before, I couldn't make a calibrated CMD of the single exposures because I'm not finding any stars to calibrate with.

Thursday, June 11, 2009

So after reading Beth's comment on yesterday's post I realized that the cut I was having DAOphot make was on round and sharp, when what I wanted was chi and sharp. I also didn't need DAOphot to do this at all, because the allstar outputs already contain chi and sharp.

I went back to see what limits I should put on chi. I used 'find' to calculate flux and plotted chi and sharp against flux like before, then selected some limits. I applied them in the uncalibrated matching code, matching only the RAs and Decs of stars that had already made the cut.
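The cut itself is just a row filter on the allstar output. A rough Python sketch of the bookkeeping (the file name, column order, header length, and the limits themselves are placeholders, since neither the exact .als layout nor my chosen limits are spelled out here):

    import numpy as np

    # columns assumed: id, x, y, mag, magerr, sky, niter, chi, sharp
    data = np.loadtxt('wil1_g.als', skiprows=3)
    chi, sharp = data[:, 7], data[:, 8]

    # placeholder limits -- the real ones come from the chi/sharp vs. flux plots
    good = (chi < 2.0) & (-0.5 < sharp) & (sharp < 0.5)
    print(len(data), '->', good.sum(), 'stars survive the chi/sharp cut')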

The numbers:

I started with a g catalog of 27334 stars and an r catalog of 33679 (from the allstar outputs). After the magnitude cuts I had 22642 and 26525 stars, respectively.

All stars:
Matching the g and r band catalogs I was left with 16857 stars in the CMD, all of which agreed with stars matched by Python. After matching this catalog to SDSS I was left with 1359 stars for the final calibrated CMD.

Within 5 arcsec of Wil1's center:
After matching the g and r stars I was left with a catalog of 1356 stars, all of which agreed with stars that python matched. Then after matching this KPNO catalog with one of 25000 SDSS stars I was left with 165 stars for the calibrated CMD.


In either CMD I'm still not getting even as deep as the 2006 CMD. My main line of thinking at the moment has to do with the differences between the read noise and gain that were input to DAOphot now and back then. My read noise is higher and my gain is smaller than the values used for the 2006 paper. I stacked the exposures, so I should have a deeper image, but if I'm not mistaken a higher read noise will result in fewer sources detected, and a lower gain will just compound this effect. I'm not sure how to quantify the effect this might have on the number of sources I'm finding, but I'm considering running DAOphot with the 2006 numbers to compare the resulting CMDs. However, I'm still confused about where the 2006 values came from, so I'm more or less convinced that mine are at least more correct. Though if I end up getting a CMD with better results, I could be persuaded to change the values if Beth can explain where they came from...
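To get a rough feel for how much the read noise and gain could matter, here's a back-of-the-envelope version of the standard CCD signal-to-noise equation. The source and sky counts are made-up numbers, and I'm assuming the read noise in the .opt file is in ADU (so it gets converted to electrons with the gain); if that unit assumption is wrong, the comparison shifts:

    import numpy as np

    def snr(src_adu, sky_adu_per_pix, npix, gain, rn_adu):
        # Standard CCD equation, everything converted to electrons.
        src_e = src_adu * gain
        sky_e = sky_adu_per_pix * gain
        rn_e = rn_adu * gain
        noise = np.sqrt(src_e + npix * (sky_e + rn_e**2))
        return src_e / noise

    # hypothetical faint star: 200 ADU spread over 50 pixels, 30 ADU/pix of sky
    print(snr(200.0, 30.0, 50, 3.2, 1.875))   # 2006 read noise and gain
    print(snr(200.0, 30.0, 50, 2.07, 2.71))   # my read noise and gain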



Tomorrow:
In lieu of other ideas to make the CMD deeper, I think I'll forge ahead on the CMD calibration code. I still need to:

1. Weight the points in the plot by their measurement uncertainties.
2. Plot gtrue-ginst vs. gtrue-rtrue and do a linear fit on the result, solving for the zero-point and the color term in gtrue-ginst = zp + ct*(gtrue-rtrue). (See the sketch after this list.)
3. Calculate the measurement error on each component of the zp. (Get the SDSS error from that catalog.)
4. Bootstrap the data 1000 times, saving each result in a structure. Then compare the variance in colors.
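A rough sketch of how I'm picturing steps 2 through 4 (numpy only; the variable names are placeholders and this is just the shape of the calculation, not the final code):

    import numpy as np

    def fit_zeropoint(g_true, r_true, g_inst, n_boot=1000, seed=0):
        # Fit g_true - g_inst = zp + ct*(g_true - r_true), then bootstrap the fit.
        color = g_true - r_true
        offset = g_true - g_inst
        ct, zp = np.polyfit(color, offset, 1)        # slope = color term, intercept = zp

        rng = np.random.default_rng(seed)
        boot = np.empty((n_boot, 2))
        for k in range(n_boot):
            idx = rng.integers(0, len(color), len(color))   # resample with replacement
            boot[k] = np.polyfit(color[idx], offset[idx], 1)
        ct_err, zp_err = boot.std(axis=0)
        return zp, ct, zp_err, ct_err

For step 1, np.polyfit also takes a w= keyword, so passing w = 1/sigma for each star would be one way to fold the measurement uncertainties into the fit.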

I doubt that all of this will get done tomorrow, but perhaps by the end of Monday.

Comparison with 2006 DAOphot parameter files

I compared the input parameter files to DAOphot that I used to the ones used for the 2006 paper.

In daophot.opt:
Most of it was similar, but the readout noise and gain were pretty significantly different. For the paper, noise = 1.875 and gain = 3.2, while I have noise = 2.71 and gain = 2.07. According to the website where I got my values (http://www.noao.edu/ctio/mosaic/), the 2006 gain is way out of whack, unless I'm looking at the wrong info. Based on that website I think my noise also makes sense, but I might be wrong about the conversion. I know Beth has mentally sanity-checked it several times before.

I also have a variable PSF of 2, compared to 0 in 2006. This means I'm letting the PSF vary quadratically across the chip, while the 2006 setup used a single PSF that's constant over the frame. I think if anything, mine will be better.

I have a larger PSF radius--20, compared to the 11 used in 2006. These are certainly different, but mine is set to be larger than the largest star. So I'm thinking either they should be different because my data has been stacked, and/or mine will be better because 11 might just be too small.

In 2006, the percent errors were left at their default values, while I set those to 0. I consulted Dave about this, who also uses 0, though no one seems to know exactly how these work.
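Just to keep things straight, the relevant lines of my daophot.opt with the values above would look something like the following (RE = read noise in ADU, GA = gain, VA = variable PSF, PS = PSF radius, PE/PR = percent and profile errors). The two-letter keywords are my recollection of the standard DAOPHOT abbreviations, and the rest of the file (FWHM, threshold, fitting radius, etc.) is omitted, so treat this as a sketch rather than my actual file:

    RE = 2.71
    GA = 2.07
    VA = 2.00
    PS = 20.00
    PE = 0.00
    PR = 0.00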

In allstar.opt

In 2006 Beth used the default inner and outer sky radii of 3 and 25, respectively, but I used 30 and 50. From talking with Beth in the past, mine are legit. It does concern me, though, that the two sets cover completely different ranges.

Again, the 2006 paper used default values for error while I have 0 for both.

In photo.opt
The paper used mostly default values in this file--apertures increasing in increments of 1, and inner and outer sky radii of 15 and 25. I use apertures that increase in increments of 2 and an inner and outer sky of 2 and 30. I'm thinking this is okay--I'm just being more liberal with my sky and aperture limits. I don't know much about this file or where it came from, though, so I haven't spent long thinking about proper values--before I started running DAOphot this summer, Beth and I just glanced at it and assumed everything was about right. I am confused about why the inner and outer sky limits are not the same as those used in allstar.opt, but they weren't consistent in 2006, either.

Wednesday, June 10, 2009

Today:


Meeting with Beth:

This morning I talked with Beth about the CMD issue for a while. She found something wrong with my code (I knew it had to be something!) that was throwing off all the CMDs. We fixed that, and now the Python matching code agrees with the IDL matching code, which is comforting. All in all, the two matched 19850 stars. There were still a few features on the CMD that looked fishy, but it looked much more like a CMD--including the thin disk and what could be a turnoff. But it's still not as deep as we expected. In order to really test this, though, the magnitudes have to be calibrated so we know exactly how deep the CMD actually is.


Sharp and Round cuts in DAOphot

Before doing the calibrating stuff I spent some time fiddling with the DAOphot parameters. I took a look at the objects in the previous allstar files to see if I could improve upon the sharp and round limits that I applied in DAOphot. I plotted flux vs. round and sharp to narrow the limits and changed the daophot.opt files for each of the g and r bands. I ended up doing two cuts--one stricter than the other.

Boo on not being able to insert a table. But anyway, the more lenient one was:
                 g10              r7
    sharp    0.35 to 1.0      0.3 to 1.0
    round    -1.0 to 0.25     -1.0 to 0.2

I tried to lower the upper limit on sharp but DAOphot wouldn't let me for some reason...

Originally, there were 33679 stars in the r band catalog and 27334 in the g band catalog. This first cut lowered the numbers to 17335 and 17583, respectively. The matching then found 9299 stars.

The second, more strict cut used:

                 g10              r7
    sharp    0.35 to 1.0      0.3 to 1.0
    round    0.0 to 0.25      0.0 to 0.2

so it just cut off the bottom end of the round measurements. This cut reduced the catalogs to 3138 and 2645 in the r and g bands, respectively. 540 stars were matched and put in the CMD.

The weird features--like the third, reddest peak--remain in both. But the stricter cut makes the CMD shallower so it might be best to use the first, more lenient cut.


Calibration and zero-point offset plot

I matched the CMD stars (from the first sharp/round cut) to SDSS stars and plotted gtrue-ginst vs. ginst (where gtrue is SDSS and ginst is KPNO) to get a sense of what the zero-point offset is. The offset doesn't seem to be dependent on instrumental magnitude (this is good), but it's not zeroed (this is also good!). In fact it's centered around a gtrue-ginst of 7. To get a more quantitative idea of where we are I ignored SDSS stars fainter than 22 and took the median of the offset (gtrue-ginst). Offsets for the g and r bands are 6.7059405 and 7.2057229, respectively. I added these offsets to the instrumental magnitudes and made a new CMD. I'm still concerned because it only goes to about 23rd magnitude. But it looks much cleaner and more CMD-like. (CMDcalibration.ps)
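The offsets above come from something like the following (a simplified sketch; the array names are placeholders and the 22nd-magnitude cut is applied to the SDSS magnitude):

    import numpy as np

    def zeropoint_offset(m_true, m_inst, faint_limit=22.0):
        # Median of (SDSS - instrumental) magnitude for stars brighter
        # than faint_limit in SDSS; a first-pass zero-point.
        bright = m_true < faint_limit
        return np.median(m_true[bright] - m_inst[bright])

    # e.g. g_cal = g_inst + zeropoint_offset(g_sdss, g_inst)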



Tomorrow:
I'm concerned with the fact that only 1300 or so stars are getting matched between SDSS's 25,000 stars and the 9,300 which made the sharp and round cuts on my data. To double check this I'm going to try matching before doing the sharp and round cuts to see if the number of stars that match SDSS changes. If it does, I'll investigate this further. I expected more to match.

I'm meeting with Suzanne and co. in the morning for a quick IDL intro. Then it's back to the drawing board on how to get this CMD deeper.

Tuesday, June 9, 2009

This morning:

I got the match.py program working fine. I then used its output to make a new CMD. It matched 20,207 stars--far more than I was getting before with spherematch, and also way more than I was expecting it to find.

This afternoon:

I went back to my IDL matching code to see if I could figure anything out. I wasn't able to discover a reason why it would be matching so few stars. I did notice that many of the really crazy colors were associated with at least one band having an especially low (i.e. bright) magnitude. So I did a magnitude cut at 15 in both bands to see if this would clean up the CMD a bit. It helped with the color distribution but not much else.

I also spent a bit of time working on the CMD calibration stuff, but I'm getting ready to leave because it's super stormy out.

Tomorrow:

Meeting at 9 with Beth to go over everything in detail to find out if we can discover anything useful about the crazy CMDs.

If I'm not busy working on whatever we come up with to fix the CMD I'll continue to work on the calibration code, particularly the offset plot since that will hopefully help to find anything that could be wrong in the CMD.

Monday, June 8, 2009

Today:

So I've been getting ready to run match.py, which Dave sent me on Friday. It's a different way of doing the matching. That code requires that the input catalogs of star coordinates be sorted by RA, so I wrote some code that sorts by RA and spits out files containing the RA and Dec coordinates in order.
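The sorting step itself is simple. My sorting code is in IDL, but a minimal Python sketch of the same operation (the file name and column layout are hypothetical):

    import numpy as np

    data = np.loadtxt('wil1_g_radec.txt')      # assume RA in column 0, Dec in column 1
    order = np.argsort(data[:, 0])             # sort rows by RA
    np.savetxt('wil1_g_sorted.txt', data[order], fmt='%.6f')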

I then tried to run the match.py code. I think my problem is that when I write the files they don't come out looking like 11 different columns--each line prints with only 5 columns across, which presumably means a single star's record wraps onto more than one line. I think this is confusing Python when I try to read in the file. I used two different techniques for writing the files, forprint and printf, and both turned out files that looked the same. I'm not totally sure how to fix this, but I think I'll do some googling tonight and have some ideas before tomorrow morning.
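If the problem really is that each record wraps onto more than one line, one workaround on the Python side would be to read all the numbers as one flat stream and reshape, instead of reading line by line. A sketch, assuming every star really does have exactly 11 values (the file name is hypothetical):

    import numpy as np

    ncols = 11
    values = np.array(open('wil1_g_sorted.txt').read().split(), dtype=float)
    table = values.reshape(-1, ncols)   # fails loudly if the count isn't a multiple of 11
    print(table.shape)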

I also started working on the CMD calibration code. I'm still confused about how to calculate the saturation limit. I have so far worked on the part of the code that will match SDSS stars to Willman 1 stars. However, I'll need a new SDSS catalog that takes into account the saturation limit on the KPNO data. My next step is to make the zero-point offset plot (g_true-g_inst vs. g_inst). The other big thing I haven't figured out yet is how to weight the points by their measurement uncertainties.

Finally, I sent Dave the image files, so hopefully they got there alright and he can give some input in the near future about problems I've overlooked. Also, still brainstorming about things that could be wrong with the data, I read online at http://msowww.anu.edu.au/~jerjen/daophot.html that the more complicated the analytic model DAOphot uses for its PSF, the more stars are needed to define it. I was picking 75 stars. Is this enough?


Tomorrow:
1. Figure out file-writing issue and run match.py.
2. Make some more headway on CMD calibration--saturation limits and weighting points.
3. Maybe talk to Dave about those images if he's figured anything out yet.