Monday, 17 October 2016

JWST Wide Survey (Discussion with M. Franx @ Berkeley, Oct 2016)

From today's discussion the following proposal emerged:

What if we did a ~700 sq arcmin, R=100-only survey of the entire CANDELS
area, at 1h per setting?



This yields 2000 galaxies at z>2 down to m_AB=24.8, for which we get continuum S/N > 10/pixel;
good enough to measure quite good ages, [Fe/H] and [a/Fe].
We also get Halpha good enough for 2 Msun/yr.

Wednesday, 31 August 2016

AS4 stellar astrophysics concept

Science scope:
  broad spectrum of ground-breaking stellar astrophysics,
 telling us how stars work, live and die...

With Gaia & TESS there are transformational space missions 
that focus on imaging/photometry/astrometry (some spectra) of bright stars.
The proposed program is tailored towards a spectroscopic survey that
in many ways ideally complements this:
 -- it is all sky
 -- it is matched in apparent magnitude
 -- it is time-domain
 -- it is dust-penetrating (disk/young stars)

Natural operational meshing of stellar astrophysics & DISCO

It is focussed on
 -- age-chemo-orbital mapping of the entire galaxy 
 -- binary stars across the HR diagram (as calibrators, stellar evolution, SF, SN)
 -- asteroseismology to understand stellar evolution (need spectra)
 -- planet hosts 

It is a full-sky program, with APOGEE and BOSS (1 of each) in both hemispheres;
2/3 is multi-epoch observations; 2/3 APOGEE, 1/3 BOSS; still based on a 10-min minimal cadence.

"Surprise" element:
Instead of a ROBOT, a 4-5-fold set of 300 APOGEE fibers, feeding 4-5 gang connectors,
still using plug plates, would enable:
 -- simultaneous use of 300 APOGEE and 500 BOSS fibers 
     --> better use of instrument/detectors
 -- more flexibility in fiber placement

This would mean:
-- minimal new hardware development 
   (RV improvement; fiber cart upgrade with BOSS and more APOGEE fibers)
-- move 1 BOSS spectrograph to LCO
-- plug-plate costs continue in both hemispheres
-- could start 2019

All science can be done in minimally 3 years, comfortably in 4 years;
minimum 25 million x 10 min x spectrum,
plus a few million fiber x 10 min open exposures (for BOSS)

Much of the same science would work within fiber-robot framework,
too, in comparable survey duration.



Monday, 1 August 2016

Exercises for the cosmology block course

Thoughts on exercises for the Cosmology block course:

Day 1: Newtonian cosmology:
   Exercise: things to do with the collapsing top hat:
   -- explain that (in three days) we'll see that overdense parts of the Universe
      can be treated as Newtonian Universes
  -- the collapsing sphere:
     -- toy-problem
     -- "virializing"
     -- properties
     -- application to the Milky Way:
        we take v_circ --> v_virial --> mass, size, etc..
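The Milky Way application can be sketched numerically; a toy estimate assuming v_vir ~ v_circ = 220 km/s and a virial overdensity of 200 rho_crit (both assumptions of this sketch, not fixed choices for the course):

```python
# Toy Milky Way numbers from the virialized top hat.
# With mean overdensity 200 * rho_crit, M = 100 H0^2 R^3 / G and v^2 = G M / R
# combine to R_vir = v / (10 H0).
G = 4.301e-6   # gravitational constant [kpc (km/s)^2 / Msun]
H0 = 70.0      # Hubble constant [km/s/Mpc]

v = 220.0                         # km/s, taking v_vir ~ v_circ
R_vir = v / (10.0 * H0) * 1e3     # kpc (factor 1e3: Mpc -> kpc)
M_vir = v**2 * R_vir / G          # Msun

print(f"R_vir ~ {R_vir:.0f} kpc, M_vir ~ {M_vir:.1e} Msun")
```

This lands at R_vir ~ 300 kpc and a few 10^12 Msun, a sensible back-of-the-envelope answer for students to compare against the literature.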


Day 2: Friedmann-Robertson-Walker Universe
    -- code up, and plot, the angular diameter distances; luminosity distances
       as a function of cosmological parameters;
    -- what are the uncertainties implied by current cosmological parameter uncertainties
       (which we quote ad hoc at this point)
    -- application: what's the angular size of a 1 kpc galaxy at z=0.5, 2, 7?
    [TOO Early; move all of this to day 3?]
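A minimal sketch of the Day 2 coding exercise, assuming a flat LambdaCDM universe (Omega_m = 0.3 and H0 = 70 km/s/Mpc are illustrative parameter choices, not prescriptions):

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def distances(z, Om=0.3, H0=70.0):
    """Angular-diameter and luminosity distances [Mpc] for flat LambdaCDM."""
    zz = np.linspace(0.0, z, 4000)
    f = 1.0 / np.sqrt(Om * (1 + zz)**3 + (1 - Om))
    # comoving distance: trapezoidal integration of c dz / H(z)
    D_C = (C_KMS / H0) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz))
    return D_C / (1 + z), D_C * (1 + z)   # D_A, D_L

for z in (0.5, 2.0, 7.0):
    D_A, D_L = distances(z)
    theta = 1e-3 / D_A * 206265.0   # angular size of 1 kpc, in arcsec
    print(f"z={z}: D_A={D_A:.0f} Mpc, D_L={D_L:.0f} Mpc, 1 kpc -> {theta:.2f} arcsec")
```

Students can then vary Om/H0 to see how the distances (and the implied angular sizes) respond to the current parameter uncertainties.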

Day 3: Linear growth of structure etc...
    -- code up the solution of contrast growth for non-trivial (i.e. realistic) cosmological parameters
    -- compare the (linear) growth of structure between toy cases of cosmological parameters
    -- do the top hat now in a cosmological context
    -- or code up and plot Press-Schechter
    -- have students walk themselves through the argument why CMB+LSS basically rules out
       baryonic dark matter
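The contrast-growth part can be sketched via the standard integral form of the linear growth factor (a sketch assuming flat LambdaCDM, normalized so D(a) -> a deep in matter domination):

```python
import numpy as np

def growth(a, Om=0.3):
    """Linear growth factor D(a) for flat LambdaCDM (Heath 1977 integral form),
    normalized so that D(a) -> a in the matter-dominated era."""
    aa = np.linspace(1e-4, a, 20000)
    E = np.sqrt(Om / aa**3 + (1.0 - Om))
    f = 1.0 / (aa * E)**3
    integ = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(aa))
    return 2.5 * Om * np.sqrt(Om / a**3 + (1.0 - Om)) * integ

print(growth(1.0))           # growth suppression vs. EdS; ~0.78 for Om=0.3
print(growth(1.0, Om=1.0))   # sanity check: EdS gives D(a) = a, so ~1.0
```

Comparing growth(1.0) across toy parameter choices makes the "toy-case comparison" bullet a one-liner.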


Saturday, 9 July 2016

Dear All,
   after the group meeting on Wednesday, I played around with GUMS
(= Gaia Universe Model Snapshot; http://arxiv.org/pdf/1202.0132v2.pdf )
to get a more quantitative sense of what kind of stars (at what distances,
with what velocities, [Fe/H], etc.) we should expect.

This e-mail contains
a) a pointer to a roughly-DR1-TGAS-like mock catalog:
    https://www.dropbox.com/s/a1hiqrrdg1euqfs/GUMS_Gmag_11.5.fits?dl=0
    [provided by Jan; see http://www2.mpia.de/GC-NEW/wiki/GC/MWGroupMeeting ]

b) some plots and thoughts on the expected astrophysical
    properties of the sample members
c) the observation that, if you take a however-simplistic model for the errors
    (e.g. 0.4 mas and 0.4 mas/yr for TGAS at G<10.5),
   I see no reason why you could not proceed, tune the code and make the plots
   for whatever Gaia-day-1 paper you have in mind.

Details below.
HW


For technical reasons (travel web connectivity), I only downloaded the 1.5M
stars with G<10.5, not the G<11.5-ish sample that may be most appropriate;
Jan R. will put the full catalog on the MW@MPIA Wiki soon.
Let's look first at "all" sample members, and then at the
10^5 nearby ones, within 200 pc. The catalog includes kinematics and binarity,
but is (largely) mute on white dwarfs, etc.

If we look at all 1.5M stars, we get the following distributions:




.. counts are dominated by stars in the 3700 K - 5000 K range.




.. giants and MS stars have approximately equal portions; the red clump
(at logg 2.3) sticks out a bit.




most stars are within 1kpc, but there's a LONG tail of distant stars (see below)...



at ~0.4 mas/year error, the DR1 measurement is precise and accurate for most stars
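The "however-simplistic" error model from point c) can be sketched directly (the 0.4 mas / 0.4 mas/yr values are the assumed TGAS-like errors quoted above; the example star and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def add_tgas_errors(parallax_mas, pm_masyr, sig_plx=0.4, sig_pm=0.4):
    """Perturb mock parallaxes [mas] and proper motions [mas/yr]
    with simple Gaussian, TGAS-like errors."""
    plx_obs = parallax_mas + rng.normal(0.0, sig_plx, size=np.shape(parallax_mas))
    pm_obs = pm_masyr + rng.normal(0.0, sig_pm, size=np.shape(pm_masyr))
    return plx_obs, pm_obs

# e.g. stars at 200 pc (5 mas parallax) keep a ~8% distance error:
plx, pm = add_tgas_errors(np.full(10000, 5.0), np.zeros(10000))
print(plx.std())   # ~0.4 mas scatter, as put in
```

Applying this to the GUMS columns gives a roughly-DR1-like mock to tune code against.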




Now, let's look at the sky distribution, as a function of distance:

the most distant ones (>5 kpc) are supergiants in the Galactic plane


as we consider closer samples, they (of course) become increasingly
more homogeneously distributed on the sky (yeah, a Mollweide projection would show that more nicely)







Just as a specific example of what to do with this,
here's a Galactic-top-down view of
the spatial distribution of some low-latitude tracers:
G-dwarfs & red clump stars in TGAS:



If you can pick out those two types of stars, you can make an instantaneous
map of the vertical motions of the Galactic disk (ideas on how to do this in a
subsequent communication).


Now, let's look at the very nearby stars; those may be good to calibrate
stellar physics:
There are 150,000 TGAS stars within 200 pc.



their distribution is more dominated by warm/hot MS stars:







Thursday, 7 July 2016

A vertical motion map of the Galactic disk with TGAS (Gaia DR1)

In a separate post I have sketched out what "Gaia DR1 (TGAS)" contains in astrophysical terms;
here is a zoom-in on a possible Gaia-day-1 project:

Idea: can we make a map of the vertical motions of the Galactic disk, using only
proper motions and parallaxes, i.e. without spectroscopic information to get spectrophotometric distances?

1) no spectroscopy --> take low-latitude stars (|b|<5deg),
    where all the vertical motion is in the plane of the sky.

2) we need stars with good photometric distances:
       these could be
      a) RC stars (can we pick them photometrically? I think we can);
          there should be 60,000 RC stars at |b|<5 deg in TGAS, with <D>~800 pc
      b) MS stars (can we pick MS stars?); the plot below shows that we can eliminate
          non-MS stars by their parallax measurements (if we have a parallax accuracy of ~0.5 mas)

 

Note:  there are only 100 G-MS stars at |b|<5deg in TGAS, with <D>~100pc; that pins down the solar velocity?

3) that would lead to the following spatial distribution of the two tracers


which should be good enough to make a vertical disk corrugation map (by simple binning of the vertical proper motions).
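The binning step needs only the standard proper-motion-to-velocity conversion; a minimal sketch (the example numbers are illustrative, not drawn from the catalog):

```python
# Vertical velocity from the latitude proper motion of low-|b| stars:
# v_z ~ 4.74 * mu_b[mas/yr] * d[kpc] km/s (usual proper-motion conversion).
K = 4.74047  # km/s per (mas/yr * kpc)

def v_z(mu_b_masyr, parallax_mas):
    d_kpc = 1.0 / parallax_mas   # naive inversion; OK for good parallaxes
    return K * mu_b_masyr * d_kpc

# a red-clump star at ~800 pc with mu_b = 2 mas/yr:
print(v_z(2.0, 1.25))   # ~7.6 km/s

# per-star precision from a 0.4 mas/yr proper-motion error at 800 pc:
print(K * 0.4 / 1.25)   # ~1.5 km/s
```

At ~1.5 km/s per star and 60,000 RC tracers, binned corrugation amplitudes of a few km/s should indeed be detectable.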

Thursday, 26 May 2016

Densely targeting emission line dominated objects with NIRSPEC MSA

Notes on an obvious/crazy idea that has fermented in various people's brains,
and recently re-emerged from conversations involving Marijn Franx,
Daniel Eisenstein & HWR; these thoughts were floated at the
May 2016 NIRCAM-NIRSPEC meeting, with a basically positive reception from the group there.


Conjecture: for emission-line dominated objects it is possible/sensible to open
many more shutters than the conservative "no overlapping spectra" targeting might suggest.
[For the time being, this pertains to NIRSPEC R=1000 or higher resolution.]

Starting facts/assumption:
-- the spectra of most faint, high-z (z>~5-6) galaxies are emission line dominated.
-- the potentially most interesting ("PopIII") galaxies are emission line dominated
-- the # of targets that have potentially detectable emission lines in 10^(4-5) sec NIRSPEC
   exposures is far larger than the slit real estate budget, assuming no-spectral-overlap.
   [galaxies with broad-band magnitudes ~30 may well have strong, hence detectable lines...]
-- for emission-line dominated spectra of (very) faint objects, only a tiny portion of the spectral
   range therefore contains "significant" pixels.

Consequences:
-- the vast majority of detector pixels contain no "interesting" signal
-- many faint high-z emission-line galaxy candidates will go untargeted in
   any one deep NIRSPEC MSA setting.

Proposed remedy (to be taken as a thought experiment, first):
open ALL shutters at focal-plane locations that plausibly (according to NIRCAM photometry)
have high-z, presumably strong emission-line targets.
[spectral overlap and chip gaps be damned for now.]
Let's take N~5 as a mental strawman plan.

[Nomenclature: dispersion runs along 'columns', slit runs along 'rows']
Let's presume that means we would have N shutters open in any one column (on N galaxies).
Advantages: N times more targets
Disadvantages:
  a)  N x higher sky background
  b)  "confusion of N overlapping spectra"

Addressing the disadvantages:
 on  a) How does the monochromatic surface brightness of emission lines (say, 10Msun/yr at z~6)
  compare to the background? I.e. are the cores of strong emission lines above/below the background.
  NB 1: compared to 'slitless' spectroscopy, the background
  is still lower by a factor of N/365. [365 == # of shutters in the dispersion direction]
  NB 2: the ensemble of open slits will inform us about the background

 on b) lines are narrow and sparse; if the continuum is negligible, the spectral signatures
 don't overlap. In principle there is some wavelength degeneracy
 (which line came through which slit); but this should be manageable, as long as there are
 no shutters open in the same column and adjacent/nearby rows.
on b): information on the continuum will be severely degraded;
 --> for emission-line dominated objects, there is little information in the continuum anyway;
 photometry will help

One specific approach is to target all objects that were done in R=100 mode also in R=1000, disregarding the overlap problems. The R=100 mode should break many of the degeneracy issues.
[Thanks to Chris W. for suggesting this.]

Basis for this: could one get Brant's and Christina's mock-data catalog, including their photo-z estimates?











Friday, 20 May 2016

Target assignment priorities for JWST MSA (NIRSPEC) Part I

after discussion in Victoria, May 2016, some notes on my thoughts
on the slit assignment from NIRSPEC MSA:

Let's presume, the plate solution is perfectly known and there are no failed shutters,
and we are interested in a 3-dither (no nods) mask design.

Let's presume we have a set of targets, with foremost attributes:
alpha,delta, size,flux,"scientific value", wavelength-range of highest interest (presuming we know z).

Let's presume, we want no spectral overlap; and we can neglect the issue of spectra
falling off the chip

The question is: what is the best combination of
a) telescope pointing (field center & orient)
b) target list

For a), in practice we have only 2 DOF, presuming the orient is given;
presumably these are fixed by the positions of very few "high-value" targets.
So a) and b) basically decouple.

Conjecture: in the limit that spectra cover the entire detector, the matter is simple:
in each slit position, one finds the "best available" object.
Complications foremost arise in defining which object is "best":
we need to define a merit function between
-- how intrinsically valuable is the target
-- how off-center (w.r.t. the micro shutter) should its centroid be;
   let's quantify this by a single number  log(S/N)-log(S/N_best),
  where S/N_best is defined as the S/N (given t_exp) we could get for
  a perfectly centered source with best sky subtraction.
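A hypothetical sketch of such a merit function (the Gaussian throughput model, its width, and the weight w are illustrative assumptions, not anything decided in Victoria):

```python
import math

def throughput(offset, sigma=0.08):
    """Toy model: S/N relative to a perfectly centered source, for a given
    centroid offset [arcsec] within the micro-shutter (Gaussian assumed)."""
    return math.exp(-0.5 * (offset / sigma)**2)

def merit(value, offset, w=1.0):
    """Intrinsic target value plus the centering penalty
    log10(S/N) - log10(S/N_best) from the note above, weighted by w."""
    penalty = math.log10(throughput(offset))   # <= 0; 0 if perfectly centered
    return value + w * penalty

# a high-value target slightly off-center vs. a lesser, well-centered one:
print(merit(3.0, 0.05), merit(1.0, 0.0))
```

With this form, "best available object per slit position" becomes a simple argmax over merit() for the candidates reachable by that shutter.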

Things to pre-compute:


Wednesday, 11 May 2016

Jan Rybizki's Chempy projects

Just to commit to memory here is a discussion draft of Jan's papers
[from HWR - JR conversations; and DWH input]

Paper II or I :
  -- write-up of the basic model chempy (with thesis advisor Andreas Just).
  -- science bit: given a set of yields, how much can the abundances of
      a single star constrain the chempy parameters: the SFR, high-mass IMF slope,
      the SN-delay, the feed-back-mass-loading, the fraction of WD's that go SN Ia, and
     the gas inflow rate.
    [corollary: is having the age of a star helpful, if the star is not very old]
 -- implementation: take the Sun and Arcturus abundances and ages, and
    construct a chempy parameter pdf triangle plot.
-- consider taking the 'cosmic abundance' instead of Arcturus
-- consider different yield tables
-- discussion:
    explain why this fails..


Paper I or II:
   -- are the APOGEE data good enough to tell us which (published) yields tables are "best"
  -- Melissa will canonize the Hawkins et al accurate APOGEE abundances/ages to the
     RC sample of APOGEE; we then presume that abundance zero-point systematics
     are a sub-dominant error source
  -- Jan will try all 9 yield-table combinations (3x3) to match the APOGEE RC sample
     (effectively marginalizing over the chempy params) and ask which fits best --> make Hogg happy
    [How much of this could be in paper I]
CHANGE of scope: use the ~30 abundance standards of Jofre et al 2016 to fix the yield tables...
  that stays Paper III

Aside: can we ask what set of [X/H] zero-point shifts in APOGEE would make
Arcturus, the Sun, and the cosmic standard likely?


Paper III:
   -- goal: the (varied?) chemical prehistories of all stars in the APOGEE sample
  -- use ensemble fit to tweak yield tables (see Paper II); we then assume both the
      yields and the abundance zero points to be "correct" (i.e. we won't marginalize)
  -- chempy has four parameters that may plausibly vary from star-to-star:
     the SFR, high-mass IMF slope (?), the feed-back-mass-loading, and
     the gas inflow rate.
  -- construct the pdf of these parameters for every single star in APOGEE;
     also exploit the ages for the stars we have..
  -- this enables:
      ** did the IMF vary as a function of time, of FeH?
      ** does the inferred mass-loading, or the inflow correlate with other properties
        (age, FeH, etc..)

CHANGE of scope: apply all of this to the ~30 abundance standards of Jofre et al 2016

Paper IV (Hogg)


Thursday, 21 April 2016

Binaries/rotation/dredge-up ages of giants

Had a conversation with Selma de Mink.
Her claim: on the MS, stars more massive than 2 Msun rotate a lot
faster than lower-mass stars (because the lower-mass stars are spun down by magnetic fields).
The question is whether some of that rotation survives recognizably
into the giant phase.

Does Apogee have a rotation parameter? If so, does that correlate with mass?

Sunday, 7 February 2016

Exploring Apogee's abundances space with chemical evolution models

Starting point

Jan Rybizki arrived Feb 1, 2016 and brought with him his one-zone chemical evolution model. The overall plan is to explore what we can learn about a) the Milky Way (its chemical evolution and the yields of the stars), b) the Apogee abundances, and c) Jan's one-zone model and its limitations.
.. and then learn from it. Here's a set of HWR notes after talking with Jan.

Jan overplotted his (fiducial) model predictions on the Apogee abundances as-is. The result looks like the plots below: basically a mismatch... which is an OK starting point.





Towards understanding the data-model (mis-)match

The reasons why data and model may disagree are manifold... disagreement is good, it teaches us something new.

Model parameters

The most immediate advantage of Jan's model is that it can 'fit data' via MCMC,
which makes varying many, even all, model parameters feasible. The model parameters fall into two categories:

Galaxy History: This entails the SFR, the inflow and outflow terms from the box, and the IMF; this is a 'handful of parameters'.

Stellar physics: this entails essentially the yields; these are in some sense parameters, but presumably the space of all possible yields is not spanned by a set of continuous parameters. Hence, see below.

Data Calibration

It is well known that the Apogee ASPCAP pipeline, at least for some elements, has systematic offsets that are far in excess of the typical error bars. One simple way to explore the role of possible offsets is to make a basic abundance offset [X/Fe]_0 a fitting parameter, presumably with some prior on it (to avoid complete 'runaway').
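A minimal sketch of this idea (the Gaussian prior width of 0.05 dex and the function shape are illustrative assumptions, not a decided design):

```python
import numpy as np

def lnprob(offset, data, model, err, prior_sigma=0.05):
    """Log-posterior for a single per-element zero-point offset [dex].
    `offset` shifts the data; the Gaussian prior keeps it from running away."""
    resid = (data - offset) - model
    lnlike = -0.5 * np.sum((resid / err)**2)
    lnprior = -0.5 * (offset / prior_sigma)**2
    return lnlike + lnprior

# demo: data shifted by +0.1 dex is far better explained with offset = 0.1
model = np.zeros(5)
data = model + 0.1
err = np.full(5, 0.02)
print(lnprob(0.1, data, model, err) > lnprob(0.0, data, model, err))  # True
```

In a full fit, one such offset per element would simply be appended to the chempy parameter vector handed to emcee.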

Yields

It would be good to think about how one can turn the options on yields into something that is parameterizable, and therefore fit-able.

Applicability of a 1-zone Model

The model as is, makes a 'unique prediction' for [X/H] or [X/Fe] = f(age). Clearly, the data show a 'spread' in their abundance patterns, well beyond their errors. In a galaxy where radial migration must be prevalent, a 1-zone model cannot be correct.

In the longer run, we should explore in which regime it is a useful approximation: this could be done by restricting the observations to a limited range in [Fe/H], if [Fe/H] has been a good birth-radius predictor (for stars younger than 8Gyrs). Or we could fit a superposition of 1-zone models.


Next Steps


What portion of [X/H]-age space can be reached by varying the parameters of a one-zone model of a given yield-table?


How to go about it? Perhaps by simply defining the full range of plausible model parameters (galaxy evolution and IMF, not yields), and then sampling model predictions uniformly.

It would be good to have an approach, to figure out how degenerate (or not), variations in the IMF and of the SFH are (if one has Apogee-type data).


What is the best one-zone fit to the Apogee data under the (untenable) assumption that all yields are right, and the data have no systematic offsets?

Apparently for historical reasons it is still not 'easy' in Jan's model to hold an arbitrary number of parameters (including none) fixed, fitting the rest via emcee. That piece of infrastructure should probably be put in place as one of the first steps.

What is the best one-zone fit to the Apogee data under the (untenable) assumption that all yields are right, and the data may have systematic offsets?

The same infrastructure point as above applies.

Can we learn about the yields?

Conceptually the next step would be to have a few yield-knobs to tune, and let them loose in the data fitting.

Sample selection:

We need to decide what good sample cuts are, to make the one-zone model a sensible framework. Options are:
-- spatial cut (solar radius)
-- abundance cut (FeH==birth radius) at solar FeH +- 0.x dex
-- no cuts