Using π1 − π2 as a measure of how different two groups are isn't always a good choice. For example, a difference of 0.05 means something quite different when the probabilities are 0.50 and 0.55 than when they are 0.01 and 0.06. Another measure that people find useful, particularly for small π1 and π2, is the relative risk RR = π2/π1. This often matches better with the way people think about small proportions.
Example: Incidence of Rhabdomyolysis and Lipid-Lowering Drugs (JAMA, December 1, 2004, Vol 292, No. 21, pages 2585-2590)

π̂2 − π̂1 = 0.000788 − 0.000061 = 0.000727

π̂2/π̂1 = 0.000788/0.000061 ≈ 12.9

While the absolute difference between the two probabilities is small, it is not really the correct scale for describing the difference.
The ratio of almost 13 is a better description of the increased risk of problems with Cerivastatin.
Note that Cerivastatin was voluntarily removed from the market by Bayer in August 2001 due to reports of fatal cases of Rhabdomyolysis (a severe muscle reaction to the drug).
The sampling distribution of the usual estimate RR̂ = π̂2/π̂1 is usually not well approximated by a normal distribution, so RR̂ ± z* SE(RR̂) will not work well as a confidence interval for RR.
However, log RR̂ is approximately normally distributed, so we can base a confidence interval on this quantity instead.
The standard error of log RR̂, as an estimate of log RR, is

SE(log RR̂) = sqrt( (1 − π̂1)/(n1 π̂1) + (1 − π̂2)/(n2 π̂2) )

giving a 100(1 − α)% CI for log RR of

CI(log RR) = log RR̂ ± z*_{α/2} SE(log RR̂)

From this we can get the confidence interval for RR of

CI(RR) = ( e^{log RR̂ − z* SE(log RR̂)}, e^{log RR̂ + z* SE(log RR̂)} )

For the lipid drug example,

RR̂ = 12.89, log RR̂ = log 12.89 = 2.556

CI(log RR) = 2.556 ± 1.96 × 0.4742 = (1.627, 3.486)

CI(RR) = (e^1.627, e^3.486) = (5.086, 32.643)

Note that this interval is not symmetric around RR̂. This is reasonable in this case, as RR̂ must have a skewed distribution. For statistics with skewed distributions, confidence intervals based on them will usually not be symmetric about the observed statistic.
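These numbers can be reproduced with a short calculation. This is a sketch: the event counts 10 out of 12695 (cerivastatin) and 8 out of 130865 (other statins) are inferred here from the reported rates, not copied from the paper's table, so treat them as an assumption.

```python
import math

# Counts inferred from the reported rates (an assumption, not from the paper)
s2, n2 = 10, 12695      # cerivastatin: rhabdomyolysis cases / prescriptions
s1, n1 = 8, 130865      # other statins

p1, p2 = s1 / n1, s2 / n2
rr = p2 / p1                      # relative risk estimate
log_rr = math.log(rr)

# SE(log RR) = sqrt((1-p2)/(n2 p2) + (1-p1)/(n1 p1)) = sqrt(1/s2 - 1/n2 + 1/s1 - 1/n1)
se = math.sqrt(1/s2 - 1/n2 + 1/s1 - 1/n1)

z = 1.96                          # 95% confidence
lo, hi = math.exp(log_rr - z*se), math.exp(log_rr + z*se)
print(round(rr, 2), round(se, 4), (round(lo, 2), round(hi, 2)))
# roughly: 12.89, 0.4743, (5.09, 32.64)
```

With these counts the output matches the values quoted above up to rounding.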
Question: Is the transformation trick used above to get the CI for RR valid?

Suppose we have a valid confidence interval procedure for a parameter θ and we want a confidence interval procedure for f(θ), where f(·) is a strictly monotonic function (i.e. increasing or decreasing). For simplicity let's assume that f(·) is an increasing function.
Now suppose that the true parameter value for a particular problem is θ0, and let's consider all data sets whose interval (L, U) includes this value, i.e. L ≤ θ0 ≤ U. Since f is an increasing function, f(L) ≤ f(θ0) ≤ f(U), i.e. f(θ0) is in the interval (f(L), f(U)).
So if the procedure that generates intervals (L, U) for θ0 has confidence level (1 − α), the intervals (f(L), f(U)) for f(θ0) must have the same confidence level.
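This argument can be checked by simulation. The sketch below is not tied to the RR setting: it uses a standard normal-mean interval with known σ = 1, with exp(·) playing the role of f, and verifies that the two intervals cover in exactly the same replications.

```python
import math
import random

random.seed(1)
theta0 = 0.7                     # true parameter (arbitrary choice)
n, z = 50, 1.96
cover_theta = cover_f = 0
for _ in range(2000):
    xs = [random.gauss(theta0, 1) for _ in range(n)]
    xbar = sum(xs) / n
    L, U = xbar - z / math.sqrt(n), xbar + z / math.sqrt(n)
    cover_theta += (L <= theta0 <= U)
    # transformed interval for f(theta) = exp(theta)
    cover_f += (math.exp(L) <= math.exp(theta0) <= math.exp(U))

# identical coverage, replication by replication
assert cover_theta == cover_f
print(cover_theta / 2000)        # close to the nominal 0.95
```

Because exp is monotone, each replication covers θ0 if and only if it covers exp(θ0), so the two empirical coverages agree exactly, not just on average.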
The proposed interval for RR satisfies this as exp is an increasing function.
So how well this interval works depends on the approximate normality of log RR̂, since the normality of RR̂ itself isn't that good.
In the examples discussed so far, we could have looked at the failure rates (the ϕ's) instead of the success rates (the π's). Let's do that for a new example.

Example: Infant mortality in New York City in 1974

So the risk of death for low birthweight babies is almost 11 times higher than the risk for normal birthweight babies.
What if we look at the chance of being alive instead? Note that the RR based on the ϕ's is not a simple function of the RR based on the π's.
i.e. there is no function g(·) such that g(RRs) = RRf.
In fact, it can be shown that if RRs = c, then

• c < 1: RRf ∈ (1, ∞)
• c > 1: RRf ∈ (0, 1)

The difference in proportions works much more nicely. It is easy to show that

ϕ1 − ϕ2 = (1 − π1) − (1 − π2) = −(π1 − π2)

In addition, all the inference procedures discussed last time transform the same way:

• SE(ϕ̂1 − ϕ̂2) = SE(π̂1 − π̂2)
• CI(ϕ1 − ϕ2) = −CI(π1 − π2)
• z(ϕ1 − ϕ2) = −z(π1 − π2)

Because of this problem with the relative risk, plus the poor distributional results, the relative risk isn't the most popular measure.

Another measure to describe probability that is commonly used is the odds of an event:

1. If the probability of a success is π, the odds of a success are ω = π/(1 − π)
2. ω can take any value in [0, ∞)
3. If the odds of a success are ω, then the odds of a failure are 1/ω
4. If the odds of a success are ω, the probability of a success is π = ω/(1 + ω)

Since there is a 1-1 relationship between odds and probabilities, instead of making statements about probabilities, we can make statements about odds.
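The failure-scale facts above can be checked numerically. The sketch below uses two made-up pairs of success probabilities that share the same RR on the success scale: their failure-scale RRs differ (so RRf is not a function of RRs), while the differences in proportions just change sign.

```python
# Two pairs of success probabilities with the same RR_s but different RR_f
# (made-up values for illustration)
pairs = [(0.01, 0.02), (0.30, 0.60)]      # both have RR_s = pi2/pi1 = 2
results = []
for pi1, pi2 in pairs:
    rr_s = pi2 / pi1
    rr_f = (1 - pi2) / (1 - pi1)          # RR on the failure scale
    diff_s = pi2 - pi1
    diff_f = (1 - pi2) - (1 - pi1)
    # phi1 - phi2 = -(pi1 - pi2): differences only change sign
    assert abs(diff_f + diff_s) < 1e-12
    results.append((rr_s, round(rr_f, 3)))
print(results)
```

Both pairs give RRs = 2, but the failure-scale ratios come out near 0.99 and 0.57 respectively, so knowing RRs alone tells you essentially nothing about RRf.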
Consider the situation where two populations of interest have success probabilities π1 and π2. The odds ratio is defined as

φ = ω2/ω1 = [π2/(1 − π2)] / [π1/(1 − π1)] = [π2/(1 − π2)] × [(1 − π1)/π1]

The odds ratio acts like a relative risk, e.g.
RR = 10 ⇔ π2 = 10π1

φ = 10 ⇔ ω2 = 10ω1

In addition, if π1 and π2 are small, then φ ≈ RR. This is one reason why people look at the odds ratio: it is similar to RR, and the distributional properties of its estimator are nicer.
There is another motivation for looking at odds. Consider the binomial density function:

P[S = s] = C(n, s) π^s (1 − π)^{n−s} = C(n, s) (1 − π)^n ω^s

So the odds ω is a natural parameter of the binomial distribution. Actually log ω is the canonical parameter of the binomial (which we will talk about when we get to the exponential family).
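The factorization behind this remark, π^s (1 − π)^{n−s} = (1 − π)^n ω^s, can be verified numerically for every value of s (a quick sketch with arbitrary n and π):

```python
from math import comb

n, pi = 10, 0.3
omega = pi / (1 - pi)      # the odds
for s in range(n + 1):
    standard  = comb(n, s) * pi**s * (1 - pi)**(n - s)
    odds_form = comb(n, s) * (1 - pi)**n * omega**s
    # the two expressions for the binomial pmf agree
    assert abs(standard - odds_form) < 1e-12
```

Pulling (1 − π)^n out front leaves ω^s as the only term depending on the data, which is what makes ω (and log ω) the natural parameter.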
One important relationship between odds and probabilities is that ω is a strictly increasing function of π, so π1 < π2 ⇔ ω1 < ω2. (You're asked to justify this in question 3 of the assignment.) As a consequence, the hypotheses

H0: π2 − π1 = 0 vs HA: π2 − π1 ≠ 0

are equivalent to

H0: φ = 1 vs HA: φ ≠ 1

Similarly we get for the one-sided hypotheses

H0: π2 − π1 = 0 vs HA: π2 − π1 < 0 ⇔ H0: φ = 1 vs HA: φ < 1

H0: π2 − π1 = 0 vs HA: π2 − π1 > 0 ⇔ H0: φ = 1 vs HA: φ > 1

There are four other reasons why the odds ratio is a useful measure for comparing population probabilities:

1. In practice, the odds ratio tends to remain more nearly constant over different conditions than the difference or ratio of proportions. Note that this is an empirical result and will not hold in some examples.
2. The odds ratio is the only parameter that can be used to compare two groups of binary responses from retrospective studies.
3. The comparison of odds extends nicely to regression analysis (e.g. logistic regression).
4. It really doesn't make a difference whether we count successes or failures, since 1/ωi is the odds of failure in group i.
We will use this in a slightly different form sometimes:

log φf = log(1/ω2) − log(1/ω1) = −(log ω2 − log ω1) = −log φs

The sample odds ratio, based on Si successes out of ni in group i, is

φ̂ = [S2 (n1 − S1)] / [S1 (n2 − S2)]

So the odds of death for low birth weight babies is almost 12 times the odds for normal birth weight babies.
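The success/failure symmetry can be checked on a 2×2 table. The counts below are made up for illustration (the actual NYC table is not reproduced in these notes).

```python
import math

# Made-up 2x2 counts: S_i = deaths out of n_i births
# group 1 = normal birth weight, group 2 = low birth weight (hypothetical)
s1, n1 = 50, 5000
s2, n2 = 30, 500

phi_s = (s2 * (n1 - s1)) / (s1 * (n2 - s2))   # odds ratio for death
phi_f = (s1 * (n2 - s2)) / (s2 * (n1 - s1))   # odds ratio for survival

# log phi_f = -log phi_s: counting failures just flips the sign on the log scale
assert abs(math.log(phi_f) + math.log(phi_s)) < 1e-12
print(round(phi_s, 3))
```

Unlike the relative risk, switching from deaths to survivals simply inverts the odds ratio, so no information is lost either way.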
For comparison, recall that RRdeath = 10.78.
As with the sampling distribution of RR̂, the sampling distribution of φ̂ is not well approximated by a normal distribution, unless n1 and n2 are extremely large.
However, the sampling distribution of log φ̂ is better behaved:

1. E[log φ̂] ≈ log φ
2. Var(log φ̂) ≈ 1/(n1 π1(1 − π1)) + 1/(n2 π2(1 − π2))
3. If n1 and n2 are large, log φ̂ is approximately normally distributed.

As noted in the text, there aren't good rules for what a large n means, but usually if things are ok for π̂, they are ok here. These statements can be justified by Taylor series methods (e.g. the variance statement can be derived by the delta rule).
Similarly to the RR case, we will start by getting a confidence interval for log φ.
To do this, we need the standard error. We can get it by plugging the estimated π's into the variance formula and taking the square root, giving

SE(log φ̂) = sqrt( 1/(n1 π̂1(1 − π̂1)) + 1/(n2 π̂2(1 − π̂2)) )

Then the confidence interval for log φ is

CI(log φ) = log φ̂ ± z*_{α/2} SE(log φ̂)

Following the same approach as for RR, a confidence interval for φ is

CI(φ) = ( e^{log φ̂ − z* SE(log φ̂)}, e^{log φ̂ + z* SE(log φ̂)} ) = ( φ̂ × e^{−z* SE(log φ̂)}, φ̂ × e^{z* SE(log φ̂)} )

Again this is not a symmetric interval around φ̂.

To exhibit the construction of this interval, let's look at the lipid drug example:

CI(log φ) = 2.557 ± 1.96 × 0.474 = (1.627, 3.487)

CI(φ) = (e^1.627, e^3.487) = (5.088, 32.678)

So there is strong evidence that φ > 1, consistent with Baycol leading to more problems with Rhabdomyolysis.
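These odds-ratio numbers can be reproduced from the counts. As before, the counts 10/12695 and 8/130865 are inferred from the reported rates rather than taken from the original table, so this is a sketch under that assumption. In terms of counts, the SE formula simplifies to sqrt(1/S1 + 1/(n1 − S1) + 1/S2 + 1/(n2 − S2)).

```python
import math

# Event counts inferred from the reported rates (an assumption)
s2, n2 = 10, 12695      # cerivastatin (Baycol)
s1, n1 = 8, 130865      # other statins

phi_hat = (s2 / (n2 - s2)) / (s1 / (n1 - s1))     # sample odds ratio
log_phi = math.log(phi_hat)

# SE(log phi_hat) in count form
se = math.sqrt(1/s1 + 1/(n1 - s1) + 1/s2 + 1/(n2 - s2))

z = 1.96
lo, hi = math.exp(log_phi - z*se), math.exp(log_phi + z*se)
print(round(log_phi, 3), round(se, 3), (round(lo, 2), round(hi, 2)))
# roughly: 2.557, 0.474, (5.09, 32.68)
```

The interval is very close to the one obtained for RR, as expected when the event probabilities are this small.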
Now consider testing

H0: φ = 1 vs HA: φ ≠ 1

which is equivalent to

H0: log φ = 0 vs HA: log φ ≠ 0

We can base a test on log φ̂. To do this we need to calculate the standard error of log φ̂ under the null hypothesis. The usual estimate, based on the pooled estimate π̂c of the common success probability, is

SE0(log φ̂) = sqrt( 1/(n1 π̂c(1 − π̂c)) + 1/(n2 π̂c(1 − π̂c)) )

giving the test statistic

z = log φ̂ / SE0(log φ̂)

which is approximately distributed N(0, 1) under H0.
For the lipid drug example, π̂c = 0.000125 and

z = 2.557 / sqrt( 1/(12695 × 0.000125(1 − 0.000125)) + 1/(130865 × 0.000125(1 − 0.000125)) ) ≈ 3.08

Again supporting that Baycol has more adverse events.
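The test statistic can be computed directly from the counts (again inferred from the reported rates, so an assumption), with π̂c obtained by pooling the two groups:

```python
import math

# Counts inferred from the reported rates (an assumption)
s2, n2 = 10, 12695      # cerivastatin (Baycol)
s1, n1 = 8, 130865      # other statins

pc = (s1 + s2) / (n1 + n2)          # pooled probability estimate under H0
log_phi = math.log((s2 / (n2 - s2)) / (s1 / (n1 - s1)))

se0 = math.sqrt(1/(n1*pc*(1 - pc)) + 1/(n2*pc*(1 - pc)))
z = log_phi / se0
print(round(pc, 6), round(z, 2))    # roughly: 0.000125 and 3.08
```

A z value near 3.1 corresponds to a two-sided p-value of about 0.002, so the evidence against H0: φ = 1 is strong.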
In the examples looked at so far we are looking at the relationship between 2 binary variables. In each case there is a response/predictor relationship of interest:

• Rhabdomyolysis (yes/no) vs drug (Cerivastatin/other statins)
• Death in first year (yes/no) vs birth weight (> / < 2500g)

In each of these cases, the sampling is prospective (at least effectively).
Subjects are assigned/observed for the predictor variable and then the response variable is observed.
In some cases, this sampling method is not feasible. One example in the text looks at the relationship between smoking and cancer.
Another example would be the relationship between genetics and breast cancer. Mutations in the gene BRCA1 have been shown to increase the risk of breast cancer in women. Let's think of a prospective study to examine the risk for both forms (wild type/mutant) of the gene.
Sample n young women and classify them by wild type (n1) and mutant (n2). Observe them over a period of time (say 30 years) and count the number who are diagnosed with cancer. This allows us to estimate

π1 = P[Cancer|Wild Type]
π2 = P[Cancer|Mutant]

Not particularly feasible for a quick answer.
Another approach to studying this relationship is a retrospective case-control study. The form of the study is:

• Sample m1 subjects without breast cancer. Count the number with the mutant allele.
• Sample m2 subjects with breast cancer (often matched for important covariates - age, smoking status, etc). Count the number with the mutant allele.
This sampling allows us to estimate

p1 = P[Mutant|Cancer]
p2 = P[Mutant|No Cancer]

but not the information on the other set of conditionals (the π's).
With this design it is not usually possible to estimate π1 and π2. However, we can estimate φπ, the odds ratio based on the π's. It can be shown that φπ equals the odds ratio based on the retrospective conditionals. So while we can't get π1 and π2 (and equivalently ω1 and ω2), at least we can get estimates of the relationship. In fact, φπ is the only parameter from the prospective study that can be estimated in a retrospective study.
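This invariance can be checked numerically: start from a joint distribution over (exposure, disease), compute the odds ratio from the prospective conditionals P[D|E] and from the retrospective conditionals P[E|D], and see that they agree. The joint probabilities below are made up for illustration.

```python
# Joint probabilities for (exposure E, disease D) -- made-up numbers
p = {('E', 'D'): 0.010, ('E', 'noD'): 0.090,
     ('noE', 'D'): 0.005, ('noE', 'noD'): 0.895}

def odds(prob):
    return prob / (1 - prob)

# prospective: condition on exposure status
pi1 = p[('noE', 'D')] / (p[('noE', 'D')] + p[('noE', 'noD')])
pi2 = p[('E', 'D')]   / (p[('E', 'D')]   + p[('E', 'noD')])
phi_pi = odds(pi2) / odds(pi1)

# retrospective: condition on disease status
p1 = p[('E', 'D')]   / (p[('E', 'D')]   + p[('noE', 'D')])
p2 = p[('E', 'noD')] / (p[('E', 'noD')] + p[('noE', 'noD')])
phi_p = odds(p1) / odds(p2)

# the two odds ratios agree (both equal the cross-product ratio of the joint)
assert abs(phi_pi - phi_p) < 1e-9
print(round(phi_pi, 3))
```

Both odds ratios reduce to the same cross-product ratio of the joint probabilities, which is why case-control sampling can still estimate φπ even though it cannot estimate π1 or π2.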

