
TNTP maintains two versions of leader and teacher high expectations questions. The current version contains four questions, all of which are positively coded. The old version, meanwhile, contains six questions, only two of which are positively coded. Below are the required column names for each version.

Teacher or Leader Expectations - CURRENT Version (4 questions)

  • Metric name to use in package: metric = 'expectations'
  • Items:
    • exp_fairtomaster (“It’s fair to expect students in this class to master these standards by the end of the year.”)
    • exp_oneyearenough (“One year is enough time for students in this class to master these standards.”)
    • exp_allstudents (“All students in my class can master the grade-level standards by the end of the year.”)
    • exp_appropriate (“The standards are appropriate for the students in this class.”)
  • Scale: 0 (Strongly Disagree), 1 (Disagree), 2 (Somewhat Disagree), 3 (Somewhat Agree), 4 (Agree), and 5 (Strongly Agree).
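The package expects these items as the numeric codes above. If a raw survey export stores responses as text labels instead, a minimal recoding sketch (the label text and helper name here are illustrative, not part of the package) looks like:

likert_levels <- c("Strongly Disagree", "Disagree", "Somewhat Disagree",
                   "Somewhat Agree", "Agree", "Strongly Agree")

# Map a text response to its 0-5 numeric code
to_code <- function(x) as.integer(factor(x, levels = likert_levels)) - 1L

to_code("Somewhat Agree")
#> [1] 3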

Teacher or Leader Expectations - OLD Version (6 questions)

  • Metric name to use in package: metric = 'expectation_old'
  • Items:
    • exp_allstudents (“All students in my class can master the grade-level standards by the end of the year.”)
    • exp_toochallenging (“The standards are too challenging for students in my class.”)
    • exp_oneyear (“One year is enough time for students in my class to master the standards.”)
    • exp_different (“Students in my class need something different than what is outlined in the standards.”)
    • exp_overburden (“Students in my class are overburdened by the demands of the standards.”)
    • exp_began (“Because of where students began the year, I spend nearly all of my time on standards from earlier grades.”)

This vignette focuses on the current version, but the process works the same for the old version.
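For the old version, the only changes are the metric name and the required columns. Assuming a data frame (here called old_expectations, purely illustrative) that contains the six old-version columns listed above, the call would look like:

old_expectations %>%
  make_metric(metric = "expectation_old") %>%
  head()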

Let’s create a fake dataset to work with.
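The examples that follow use the pipe, kable(), and the make_metric()/metric_mean() functions. If you are starting from this section, a setup along these lines is assumed (the metrics package name below is an assumption based on the function names; your output will also differ slightly from what is printed here because the data are simulated without a fixed seed):

library(dplyr)        # pipe (%>%) and data wrangling
library(knitr)        # kable() for printing tables
library(tntpmetrics)  # assumed home of make_metric() and metric_mean()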

n <- 300

teacher_expectations <- data.frame(
  id = 1:n,
  exp_fairtomaster = sample(0:5, size = n, replace = TRUE, prob = c(.10, .15, .20, .20, .20, .15)),
  exp_oneyearenough = sample(0:5, size = n, replace = TRUE, prob = rev(c(.10, .15, .20, .20, .20, .15))),
  exp_allstudents = sample(0:5, size = n, replace = TRUE),
  exp_appropriate = sample(0:5, size = n, replace = TRUE, prob = c(.05, .10, .25, .30, .20, .10)),
  # assign each teacher to a group (used later for group comparisons)
  teacher_group = sample(c('A', 'B', 'C'), size = n, replace = TRUE)
)

Expectations Per Teacher

We’ll first calculate whether each teacher has high expectations of students.

teacher_expectations %>%
  make_metric(metric = "expectations") %>%
  head() %>%
  kable()
#> [1] "0 Row(s) in data were NOT used because missing at least one value needed to create common measure."
id  exp_fairtomaster  exp_oneyearenough  exp_allstudents  exp_appropriate  teacher_group  cm_expectations  cm_binary_expectations
 1                 3                  4                1                0              C                8                   FALSE
 2                 4                  2                1                3              A               10                   FALSE
 3                 3                  4                0                5              A               12                    TRUE
 4                 3                  4                4                1              A               12                    TRUE
 5                 4                  4                4                3              C               15                    TRUE
 6                 4                  5                1                1              C               11                   FALSE

The column cm_expectations is the teacher’s expectations score: the sum of the four expectations items. cm_binary_expectations is a logical (TRUE/FALSE) value indicating whether the teacher has high expectations. Teachers are counted as having high expectations only if their score exceeds the cutoff, which is 11 for the current expectations version: scores of 11 or below are FALSE, while scores above 11 are TRUE.
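Under the hood this is just a row sum plus a cutoff check. A rough dplyr sketch of the same calculation for this metric (the real make_metric() also validates inputs and supports the other common measures):

teacher_expectations %>%
  mutate(
    cm_expectations = exp_fairtomaster + exp_oneyearenough +
      exp_allstudents + exp_appropriate,
    cm_binary_expectations = cm_expectations > 11  # high expectations only when the score exceeds 11
  ) %>%
  head()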

Percentage of Teachers with High Expectations

We can use metric_mean to calculate the percentage of teachers with high expectations, with standard errors included. Note that use_binary is set to TRUE so that the result is the percentage of teachers with high expectations; if we simply wanted the average expectations score, we would set this parameter to FALSE.

expectations_mean <- metric_mean(teacher_expectations, metric = "expectations", use_binary = TRUE)
#> [1] "0 Row(s) in data were NOT used because missing at least one value needed to create common measure."

expectations_mean
#> $`Overall mean`
#>  1       emmean    SE  df lower.CL upper.CL
#>  overall  0.373 0.028 299    0.318    0.428
#> 
#> Confidence level used: 0.95 
#> 
#> $`Number of data points`
#> [1] 300

The code below saves the mean value as an R object.

expectations_mean_value <- summary(expectations_mean[['Overall mean']])$emmean

round(expectations_mean_value, 2)
#> [1] 0.37
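Because use_binary = TRUE, this mean is a proportion. To report it as a percentage string, one option is:

# Convert the proportion to a percentage for reporting
paste0(round(expectations_mean_value * 100), "%")
#> [1] "37%"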

Percentage of Teachers with High Expectations by Group

metric_mean can also be used to calculate percentages by group, along with standard errors and group comparisons.

group_expectations_mean <- metric_mean(teacher_expectations, metric = "expectations", equity_group = "teacher_group", by_class = FALSE, use_binary = TRUE)
#> [1] "0 Row(s) in data were NOT used because missing at least one value needed to create common measure."

group_expectations_mean
#> $`Group means`
#>  equity_group emmean     SE  df lower.CL upper.CL
#>  A             0.327 0.0475 297    0.233    0.420
#>  B             0.409 0.0503 297    0.310    0.508
#>  C             0.388 0.0478 297    0.294    0.482
#> 
#> Confidence level used: 0.95 
#> 
#> $`Difference(s) between groups`
#>  contrast estimate     SE  df t.ratio p.value
#>  A - B     -0.0817 0.0692 297  -1.180  0.4658
#>  A - C     -0.0614 0.0674 297  -0.911  0.6336
#>  B - C      0.0203 0.0694 297   0.292  0.9541
#> 
#> P value adjustment: tukey method for comparing a family of 3 estimates 
#> 
#> $`Number of data points`
#> [1] 300

Now, let’s tidy up these results by placing them in a data frame.

summary(group_expectations_mean[['Group means']]) %>%
  as_tibble() %>%
  kable()
equity_group     emmean         SE   df   lower.CL   upper.CL
A             0.3269231  0.0475452  297  0.2333549  0.4204913
B             0.4086022  0.0502785  297  0.3096550  0.5075493
C             0.3883495  0.0477754  297  0.2943282  0.4823708
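The tidied group means are also convenient for plotting. A minimal sketch with ggplot2 (not used elsewhere in this vignette) that shows each group’s estimate with its confidence interval:

library(ggplot2)

summary(group_expectations_mean[['Group means']]) %>%
  as_tibble() %>%
  ggplot(aes(x = equity_group, y = emmean)) +
  geom_point() +
  geom_errorbar(aes(ymin = lower.CL, ymax = upper.CL), width = 0.2) +
  labs(x = "Teacher group", y = "Share of teachers with high expectations")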

And let’s do the same for the comparisons.

summary(group_expectations_mean[['Difference(s) between groups']]) %>%
  as_tibble() %>%
  kable()
contrast    estimate         SE   df     t.ratio    p.value
A - B     -0.0816791  0.0691988  297  -1.1803545  0.4658284
A - C     -0.0614264  0.0674021  297  -0.9113434  0.6335577
B - C      0.0202526  0.0693572  297   0.2920050  0.9540930
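If you need to flag which pairwise differences clear a significance threshold, you can filter the tidied comparisons (0.05 is just an illustrative cutoff; with this simulated data no contrast passes it):

summary(group_expectations_mean[['Difference(s) between groups']]) %>%
  as_tibble() %>%
  filter(p.value < 0.05)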