Well, good morning. Welcome to today’s live webinar to release the results of The Nation’s Report Card: Trends in Academic Progress from 2012. I’m Cornelia Orr, Executive Director of the National Assessment Governing Board and moderator for today’s event. The National Assessment Governing Board is a bipartisan organization created by Congress to set policy for the National Assessment of Educational Progress, or NAEP, which is also known as The Nation’s Report Card. The Governing Board is committed to making NAEP informative and useful for the public and for policymakers. Today’s Report Card provides an unparalleled view of the progress of U.S. education over four decades. You’ll hear from speakers today about this, but you can also participate using #NAEPtalk. We encourage you to use that throughout the webinar to carry on a conversation about what you’re hearing and seeing in the data. Today we’ll be looking back at the past 40 years, tracking students’ long-term progress in reading and mathematics at ages 9, 13 and 17. The data will reveal developments in achievement gaps by racial and ethnic group and by gender, in the context of societal shifts and great changes in the demographics of our schools. This unique assessment is traditionally given every four years, and it differs from the main NAEP assessments with which you’re probably more familiar. Main NAEP is administered every two years and given at grades 4, 8 and 12. These two NAEP assessments differ in both the questions that are given on the tests and the scores that are reported. Jack Buckley, when he speaks, will tell us more about the differences between these two assessments. Today’s report will provide a broad perspective on the evolution of learning over the past 40 years. To discuss the report today, we have a distinguished panel of experts who will share their thoughts and reactions to the Report Card. 
I’ll briefly introduce each of our speakers; and then I’ll turn the mic to our webinar producer, who will review event logistics with you. Our first speaker, as I mentioned, will be Jack Buckley, Commissioner of the National Center for Education Statistics. He will present the Report Card findings to us. Our next speaker will be Brent Houston, Principal of Shawnee Middle School in Shawnee, Oklahoma, and a Member of the National Assessment Governing Board. Finally, we will hear from Kati Haycock, who is President of The Education Trust. She has been an advocate for better serving the underserved for many years. Following Kati’s remarks, we will have a brief online Question and Answer Session with all attendees and speakers. Before we begin, Connie, our webinar producer, will review the logistics for using the WebEx system. Connie? Thank you, Cornelia. If you have technical difficulties during today’s webinar, please refer to your confirmation email or call 1-866-229-3239 for assistance. Our speakers will be answering questions during a Q&A Session later in the event, but attendees are welcome to submit questions about today’s Report Card results or speaker comments throughout the presentation. Simply type your question into the Q&A panel on the lower right side of your WebEx screen. When you submit your question, choose the option in the dropdown menu that says All Panelists. Please include your name and organization with all questions. Please note that live closed captioning is also available in the bottom right corner of your screen in the Media Viewer panel. Click the “X” at the top of the Media Viewer if you’d like to close the captioning panel. Back to you, Cornelia. Thank you very much, Connie. It’s now our pleasure to welcome our first speaker; but before I do that, I want to remind you that you can engage in our conversation today using #NAEPtalk, and we encourage you to do that and provide your comments. 
As Connie mentioned, you can also submit your questions to us using the webinar tool. Our presenter of the Report Card information today is Dr. Jack Buckley. He is Commissioner of the National Center for Education Statistics, on leave from his position as a Professor of Applied Statistics at NYU. Dr. Buckley is well-known for his research on school choice, particularly related to charter schools, and on statistical methods for public policy. Dr. Buckley served as Deputy Commissioner of NCES from 2006 to 2008. He also spent five years in the U.S. Navy as a Surface Warfare Officer and Nuclear Reactor Engineer and worked in the intelligence community as an Analytic Methodologist. Jack, we thank you for being here. Thank you, Cornelia; I’m very pleased to be here as always. Good morning, everyone. I’m here today to present results from the 2012 NAEP Long-Term Trend Reading and Mathematics assessments, the first long-term trend assessments we’ve had since 2008. These assessments, as you might guess from the name, go back a ways, to the first assessments administered in the early 1970s, so we have about 40 years of trend data. With NAEP we have two basic families of assessment: what we call main NAEP, and today’s topic, long-term trend. The long-term trend results, as shown on the left here, provide national numbers for reading and mathematics for both public and private school students. These students, following the practice established back in the ’70s, are assessed by age, at ages 9, 13 and 17, rather than by grade, which is what you’re probably used to in main NAEP. Main NAEP, shown on the right here, also of course provides national results in reading and math, as well as results for the states and for the TUDAs, selected large urban districts participating in the Trial Urban District Assessment. There we report results at Grades 4 and 8; but as you know, these data don’t go back nearly as far as long-term trend. 
And we also collect and report data at the twelfth grade in main NAEP, but only at the national level. So here’s a quick overview of the differences in content between long-term trend and main NAEP. The key here, of course, is that main NAEP has changed periodically to reflect new developments in curriculum, instruction and assessment, while the long-term trend assessments have remained largely unchanged. Both assess reading and mathematics, but they focus on different skills. For example, in mathematics the topics assessed are the same for both assessments; but long-term trend focuses on basic skills and recall of definitions, while main NAEP goes beyond basic skills and also assesses problem solving and reasoning. In reading, both assessments include a variety of text types, and both require students to, for example, locate information and make inferences. But main NAEP tasks tend to be a bit more complex, so the long-term trend reading assessments generally use shorter passages than the main NAEP assessment. Both assessments report scale scores on 0 to 500 point scales; but the two scales differ in important respects and cannot be compared across the two families of studies. Both assessments also have a second method for reporting student achievement: long-term trend uses performance levels, while main NAEP uses the familiar achievement levels. The performance levels, which are set at 50-point intervals on the 0 to 500 point scale, offer descriptions of what students scoring at or near these levels know and can do, and they have the same meanings across all three ages. 
The achievement levels, on the other hand, the Basic, Proficient and Advanced, are set by the National Assessment Governing Board and are prescriptive rather than descriptive, reflecting judgments about what students should know and be able to do. All right, so students participating in long-term trend were assessed in either reading or mathematics. In 2012, we had representative samples of about 8,000 to 9,000 students for each age group, for a total of more than 26,000 students per subject. The testing time was about an hour per student; and we assessed 13-year-olds in the fall, 9-year-olds in the winter and 17-year-olds in the spring, which again differs from main NAEP, which uses an approximately January-to-March testing window. Of course, the purpose of the long-term trend assessment is to preserve comparability over the long run, so for many years we made very few changes, and those changes did not affect the comparability of results. In 2004, however, more significant changes were necessary to bring even long-term trend up to date in format and manner of administration. In particular, in 2004 we began allowing accommodations for students with disabilities and English language learners, as we do in our other assessments. Because of these changes, in 2004 we administered both the original and the revised assessments to different but equivalent samples of students. Our analysis back then determined that the changes in format and administration had no effect on student performance; but by increasing participation of special needs students through accommodations, we did see an apparent decrease in average scores. In most cases, these decreases were not statistically significant. Since 2008, we have administered only the revised assessment; but when we show you some of the longer-term trend lines, you’ll see two data points for 2004, and this is why. 
As you know, the student population in the United States has changed in a variety of ways since we first began measuring student performance with long-term trend. We can use our data going back as far as 1978 to look at these demographic changes and provide some context for overall changes in student performance over the years. In 1978, for example, the 13-year-old student population, as shown here, was 80% White; 13% Black; 6% Hispanic; and 1% Asian or Pacific Islander. By 2012, the proportion of White students had fallen to 56%, while the proportions of Hispanic and Asian or Pacific Islander students increased. The change in the proportion of Black students was not statistically significant over this period. Because long-term trend uses an age-based sample, it also lets us examine changes in the proportion of students at a given age attending a given grade. In 1978, for example, 28% of 13-year-old students were in the seventh or a lower grade; by 2012, that proportion had risen to 39%. At the same time, the proportion attending eighth grade fell from 72% to 60%. The proportion in the ninth grade or higher was only about 1% in 1978 and less than 0.5% in 2012. So why has this happened? NAEP can’t tell us that; but we do know that some states have increased the age at which children may start attending kindergarten and that some parents are using this to delay their child’s entry to school, sometimes called “academic redshirting.” It’s also true that retention policies have changed in many states. So let’s now take a look at the results for reading. Students participating in the assessment, as always for NAEP, took only a portion of the complete assessment. In this case, they read passages and responded to questions in three 15-minute sections. Each section contained three or four short passages and about ten questions. Some questions and their corresponding materials were administered to more than one age group. 
We report the scores for all three age groups on a single 0 to 500 point scale; and as always, because NAEP results are based on samples, there’s a margin of error associated with every score. When we compare NAEP scores today, I’m only going to cite differences that are larger than this margin of error; that is, those differences that are statistically significant. So here we see reading scores for 9-, 13- and 17-year-old students from 1971 through 2012, shown on the 0 to 500 point scale. An asterisk is placed next to a score for an assessment year if that score was statistically different from 2012. A dotted line connects data points from years when the original assessment was administered, while a solid line connects the points from 2004 onward to reflect the revised assessment. If we look at the bottom line, 9-year-olds, we see that they had an average score of 221 in 2012, shown on the lower right. This is an increase compared to their score of 208 in 1971, as indicated by the asterisk next to that score. Moving up to the middle line, the 2012 average of 263 for 13-year-olds was also higher than in 1971. In addition, it was higher than the score for the last previous assessment in 2008. The top line shows average results for 17-year-old students. At age 17, the score for 2012 was not significantly different from that of 1971 or 2008. We can also take a closer look across the ability distribution for 9-year-olds; here we’re looking at their percentile scores. Comparing 2012 to 1971, you can see that 9-year-old students at all five of these percentiles, the 90th, the 75th, the median, the 25th and the 10th, scored higher in 2012; but none of them showed increases in comparison with 2008. So again, from 1971 to 2012, you see gains across all these percentiles; but compared to four years ago, there are no statistically significant differences. 
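For readers who want to reproduce this kind of comparison themselves, the "larger than the margin of error" rule amounts to a two-sided test on the difference between two sample means. The sketch below is only illustrative: the standard errors are invented, not actual NAEP values, and NAEP's published analyses use its own jackknife standard errors and testing procedures.

```python
import math

def significant_difference(mean_a, se_a, mean_b, se_b, z_crit=1.96):
    """Two-sided z-test on the difference between two independent
    sample means, given each mean's standard error (alpha = 0.05)."""
    diff = mean_a - mean_b
    # Standard error of a difference of independent estimates.
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
    return diff, abs(diff) > z_crit * se_diff

# Hypothetical comparison: 2012 vs. 1971 age-9 reading means.
# The standard errors (0.7, 1.0) are made up for illustration.
diff, is_sig = significant_difference(221, 0.7, 208, 1.0)
print(diff, is_sig)  # 13-point gain, significant under these illustrative SEs
```

Note that a difference can look large on the 0 to 500 scale yet still fall inside the margin of error when the standard errors are big, which is why the report only flags asterisked differences.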
Another way to look at the information on the previous slide is to show each of the five percentiles from the first administration to the most recent. It’s a little easier in this format to compare the amount by which students at the different percentiles grew over the long run. As you can see here, all five groups showed increases; but the lower-performing students, at the 10th, the 25th and even the 50th percentiles, had larger gains than the higher-performing percentiles. This is an item map giving descriptions of the kinds of questions that we gave 9-year-old students, ranked according to degree of difficulty. The question descriptions provide specific examples of the skills displayed by students at various performance levels. The number associated with each question is the average score of students likely to answer a question like that correctly. So, for example, students scoring at or above 177 would be likely to recognize explicit information in an expository passage. They would also be likely to answer correctly all the questions below that 177. Similarly, students performing at or above 266 would be likely to recognize the main purpose of an expository passage. These higher-level questions tend to ask students to make judgments and generalizations based on information drawn from the entire passage rather than simply picking out a single explicit detail. Here’s a look at the percentile distribution over time for 13-year-olds. Just as with the 9-year-olds, there are long-term gains at all five percentiles; these gains range from about seven to nine points, using unrounded numbers. Here, though, as we saw in the mean, there is also a short-term gain compared with 2008. These gains, however, were limited to students at the 25th, 50th and 75th percentiles; and at each of those percentiles, the growth was about three points. 
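For readers exploring score distributions on their own, percentile cut points like the ones discussed here can be computed with the Python standard library. The scores below are invented for illustration; real analyses would use the NAEP Data Explorer or the data files with their sampling weights, which simple unweighted quantiles do not account for.

```python
import statistics

# Invented sample of 0-500 scale scores (not actual NAEP data).
scores = [150, 170, 185, 200, 205, 210, 215, 220, 230, 240, 255, 270]

# quantiles(n=100) returns the 1st through 99th percentile cut points;
# the "inclusive" method interpolates between observed data points.
cuts = statistics.quantiles(scores, n=100, method="inclusive")
for p in (10, 25, 50, 75, 90):
    print(f"{p}th percentile: {cuts[p - 1]:.2f}")
```

Tracking these five cut points across assessment years is exactly the comparison shown on the slide: lower percentiles rising faster than upper ones indicates the distribution compressing from below.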
At age 17, there were long-term gains for students at the 10th and 25th percentiles. Remember that, as the earlier slide showed, there was no long-term difference at the mean for 17-year-olds; that is reflected here in the absence of statistically significant differences at the median. There has, however, been growth over time among the lowest-performing 17-year-olds. There was also a short-term gain compared with 2008, but only for students at the 10th percentile. This slide shows the percentages of students at selected performance levels in 1971, four years ago in 2008, and in 2012 for all three of the age groups. On the left, at ages 9 and 13, you can see increases for the three levels shown when we compare 2012 back to 1971. The percentage at or above level 250, those students able to interrelate ideas and make generalizations about what they read, increased for all three age groups. In 2012 at age 9, 96% of students scored at level 150 or higher, while 74% scored at level 200 or higher, and 22% scored at level 250 or higher. This graph compares scores for 13-year-old White and Hispanic students. As always in NAEP, we collect a variety of additional demographic data about the students, and here we’re able to disaggregate by various subgroups. The first year for which we can break out Hispanic students separately is 1975. Here you can see that the 21-point gap between the two groups in 2012 is statistically narrower than the 30-point gap in 1975 and the 26-point gap in 2008. Scores for Hispanic students were also higher in 2012 than in either comparison year. So here’s a case where the gap is closing over the long run, with both groups improving but Hispanic students improving at a somewhat faster rate. This figure shows the score increases for White, Black and Hispanic students, comparing 2012 with the first assessment for which we can disaggregate, either 1971 or 1975. 
For each vertical bar, the lower line represents the score for the first assessment for which these data are available, while the top line shows the score for 2012. Overall, you can see that scores increased for all three groups of students; but Black and Hispanic students, again, had larger gains than White students. Looking at the left, you can see that 9-year-old scores increased by 15, 36 and 25 points for White, Black and Hispanic students, respectively. At ages 13 and 17, the pattern was the same: larger increases for Black and Hispanic students. White students still had higher reading scores than Black or Hispanic students at all ages, however. We can also look at the gender gap in reading. Here, of course, as we see across all our assessments, female students have historically had higher scores than males. In long-term trend reading, the gap narrowed at age 9 only, where it fell from 13 points in 1971 to about 5 points in 2012. Again, scores for both boys and girls were higher in 2012; but the 17-point increase for male students was large enough to reduce the gap by 7 points. Here we look at the performance of 17-year-old students over time according to their grade. In 1971, 17-year-olds who were in the 10th grade or below, the red line at the bottom, had an average score of 238. In 2012, their score was 266, an increase of 28 points. The proportion of these students, however, increased as well, from 14% to 26% of all 17-year-olds. So just to pause on that for a moment: in 2012, 26% of 17-year-olds were in the 10th grade or below. The good news, though, is that the scores of these lowest-performing 17-year-olds have increased markedly over time. So now let’s turn to math. At each age level, students were assessed on their knowledge of basic math facts and formulas, their ability to carry out computations, and their ability to apply mathematics in the context of daily living. 
The assessment included numbers, measurement, geometry, probability and statistics, and algebra. Participating students again responded to questions in three 15-minute sections, each with about 21 to 37 items. The majority of those items were in a multiple-choice format. Students do not use calculators or manipulatives, such as rulers or protractors, during this assessment, despite the picture of a ruler and protractor on the slide. So here we see the math scores for 9-, 13- and 17-year-olds from 1973 to 2012. In this case, in addition to the thick dotted line and the solid line, which have the same interpretation as before, there is a thin dotted line between 1973 and 1978. It represents the fact that for 1973 we report only what we call “extrapolated” data to compare the results of the first assessment with the later assessments. The reason is that the first assessment back in ’73 had only a limited number of questions in common with later assessments; so in 1978, a statistical model was used to project backward to what the scores would have been in 1973, based on the assessments going forward. The scores for both 9- and 13-year-olds were higher in 2012 than in 1973, as indicated by the asterisks next to the first-assessment scores of 219 and 266. Looking at age 13 only, we see that the 2012 score is also higher than the score for the last previous assessment four years ago, in 2008. In fact, if you look closely at the line for the 13-year-olds, the most recent score in 2012 is statistically higher than every previous assessment. Again, looking at the means for age 17, the score for 2012 was not statistically significantly different from either four years ago or 1973. Again, we can look at the achievement gains by percentile. 
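The backward "extrapolation" described here can be illustrated with a toy projection. NAEP's actual 1973 estimates came from a statistical linking model built on the items the assessments had in common; the least-squares sketch below, with invented numbers, shows only the general idea of projecting an observed trend back to an earlier year.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented later-assessment means (not actual NAEP values).
years = [1978, 1982, 1986]
means = [219.0, 219.0, 222.0]

slope, intercept = linear_fit(years, means)
projected_1973 = intercept + slope * 1973  # backward extrapolation
print(round(projected_1973, 2))
```

The key caveat, which applies to NAEP's model as well as to this toy, is that an extrapolated point carries extra model uncertainty beyond ordinary sampling error, which is why the 1973 point is drawn with its own thin dotted line.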
Here, for 9-year-olds, you can see that average mathematics scores, like reading scores, increased at all five percentiles, comparing 2012 back to 1978; we don’t have the extrapolated early data available for the percentiles. The score increases over the long run were at least 22 points at all five percentiles, although none of the percentiles showed an increase in comparison with 2008. At age 13, we again see long-term gains for students at all five percentiles, of at least 15 points each. Here, though, as you might expect from the mean slide, there are gains in the short run as well. In this case, the gains were limited to the higher-performing students: we see a statistically significant improvement for our 13-year-olds in math between 2008 and 2012, but only for students at the 90th and 75th percentiles. This is another look at the age 13 performance at the five percentiles, again comparing 2012 to the first year for which data are available, 1978. And you can see, as the previous slide showed, increases for all the groups. Here we have something different; it’s the only place where we see this in today’s results. The red line on the left, between the two left bars, shows that students at the 10th percentile scored 240 on average in 2012, while students at the 25th percentile had scored 238 back in 1978. The increases for students at the 10th and 25th percentiles were large; and the gain at the 10th percentile was large enough that those students now score above where the 25th percentile stood in 1978. For the 13-year-olds, here’s a quick look at the item map showing the kinds of math items that students at various points on the scale would likely answer correctly. Students scoring 240, for example, would be likely to be able to compute the perimeter of a square, while those at 310 would be likely to be able to correctly rewrite an algebraic expression. 
Let’s take a look now at the percentile trends for 17-year-olds. At age 17, there were long-term gains for students at the 10th, 25th and 50th percentiles, with gains ranging from about 6 to 12 points when we compare 2012 back to 1978. So there are some long-term improvements in mathematics for 17-year-olds; however, across all five percentiles, there are no statistically significant changes when we look back to 2008. Again, this slide shows the percentages of students at selected performance levels. To take an example: at age 17, 96% of students scored at or above level 250; 60% scored at level 300 or higher; and 7% scored at level 350 or higher. Increases occurred in the percentages at or above level 250 and at or above level 300 when we compare 2012 back to 1978. For the other age groups, you read the figures the same way. Again, if we disaggregate by race or ethnicity, we see that scores for White, Black and Hispanic students increased in mathematics for all three age groups when we compare 2012 back to the first assessment. And here we see that Black students showed larger score gains than White students in all three age groups. Let’s take a look at the gender gap. Historically, across all our assessments, there tends to be somewhat of a gender gap favoring boys at the older ages. I don’t show you the data for ages 9 and 13, where there is no statistically significant gender gap; but at age 17, there is still a gap favoring males, although it has closed a bit, from eight points in 1973 down to four points in 2012. The reason is that, over the long run, scores for boys have been relatively stable while scores for girls have increased a bit more. There is a lot of information in long-term trend, so here are a couple of quick summary slides. 
So first, when we compare scores in 2012 back to the earliest available assessment year for reading and math, we see that scores are higher on average for 9- and 13-year-olds, these are the means again, but show no significant change for students at age 17. Compared with 2008, we see an increase at the means only for age 13, in both subjects. And if we disaggregate, looking at scores since the first assessment for White, Black and Hispanic students, you can see increases at all three ages in both subjects. When we compare 2012 back to 2008, however, we see only one increase, for Hispanic students in age 13 reading, a seven-point gain. The good news is that at least there are no declines. Okay, as I mentioned, there’s an awful lot of information here. More complete information for all three age groups is available in the long-term trend Report Card, which is our printed and online report. But you can also go to the NAEP website to obtain additional information, take a look at released items through the NAEP Question Center, and conduct your own extensive analyses using NAEP’s Data Explorer for long-term trend; there is a separate Data Explorer web tool for the long-term trend data. We also encourage you to follow #NAEP for an up-to-date continuous conversation about these results and more. In conclusion, as always, I’d like to offer my sincere thanks to the students, teachers and schools who participated in this assessment and made it possible for us to compile this report. Thank you. Thank you, Jack. Our next speaker, Brent Houston, is Principal at Shawnee Middle School in Shawnee, Oklahoma; and I understand it’s quiet there today, with no students around because of summer break. Brent has twice been named Teacher of the Year and was appointed to the Oklahoma Educational Television Authority board of directors in 2002 by Governor Brad Henry. Mr. 
Houston is also a member of the National Assessment Governing Board and serves on the Governing Board’s Assessment Development Committee; that’s the committee that reviews all of the NAEP items that go on any test. So a couple of years ago, he was reviewing the items for long-term trend. Thank you for your time today, Brent; we look forward to hearing your perspective. Thank you, Cornelia. It is quiet, and quite hot already, here in Oklahoma. When Jack thanked all the students for participating in this long-term trend study, I was struck that many of us attending this webinar were probably among those students; I know I was likely one of them back in the early ’70s, when the long-term trend assessment was first given. As principal of Shawnee Middle School in Shawnee, Oklahoma, I’m encouraged by the progress in so many areas as I reviewed this report, The Nation’s Report Card: Trends in Academic Progress 2012. This report, which assesses how student achievement in the basic subjects of reading and mathematics has progressed over the past 40 years, gives us a sweeping look at how America’s students are doing overall compared with previous generations. Contextual factors are assessed too, such as how much time students spend reading for fun. In some ways, the findings are full of hope. Today’s children, ages 9 and 13, are scoring better overall than students at those ages in the early ’70s. Yet within such promising upward trends, there are what I call hidden challenges. One of the most striking findings has as much to do with my role on the National Assessment Governing Board as it does with my roles as principal of the middle school here in Shawnee and as an educator in our country: the disturbing lack of improvement among 17-year-olds. Since the early 1970s, the average scores of 17-year-olds in both reading and math have, as Jack pointed out to us, remained stagnant. 
Among the contextual factors this report assesses is parents’ level of education. Since 1978, an increasing number of parents recognize the value of higher levels of education, at least for themselves. Look, for example, at the 17-year-old students assessed in mathematics. In 1978, 32% of the parents of those children had graduated from college. Last year, 51% of the parents of 17-year-olds had graduated from college. The growing emphasis that parents put on educating themselves is gratifying to see, and you would think the value parents place on higher education would translate into better performance for their children. Yet despite some gains among lower performers, the average scores of 17-year-olds over 40 years have stayed flat. If parents are achieving more, you would think that their older students in particular would be achieving at higher levels. Now, at my school here at Shawnee Middle School, we are working to get our parents involved; our Parent Teacher Organization re-established itself this past school year with very encouraging results. But even before this, our parents stepped up when they were needed to support their kids. I know that in Shawnee, parents who engage in educational activities with their families, things like reading to their children at home; taking them to museums, concerts and other cultural events; and participating with them in science fairs and science activities, have children who do markedly better than the other students in school. So why is it that some parents who think enough of education to go to college themselves have students who are not scoring higher? Why isn’t that interest in education translating to their children’s achievement in school? That is, to me, one of the most provocative and challenging long-term trends within this report. In reading, boys at age 9 are closing the gap, with average scores increasing from 201 in 1971 to 218 in 2012. 
In mathematics, the gender gap at age 17 narrowed from eight points in 1973 to just four points in 2012, due to an increase in the average score for girls. This report also reveals that children who more frequently read for fun, that is, who read outside of class, score higher in reading than students who do so less frequently. Here at my school, there are echoes of that gender gap in reading. We have, for instance, an afterschool book club with attendance that ranges from 15 to 20 students; but generally, when I check attendance at that club, there are just two or three boys present. We also have an accelerated reader program, and most of the top performers and prize winners in that program are girls. It is also encouraging that the racial and ethnic gaps have narrowed for most age groups in both reading and mathematics since the early ’70s. This report shows double-digit gains in the average scores of Black and Hispanic students at all three ages across the country. The hidden challenge, however, is this: since the last Long-Term Trend NAEP Assessment in 2008, only the White/Hispanic gap among 13-year-olds in reading has narrowed. Though the national numbers in this report show the percentage of Hispanic 13-year-old students more than tripling since 1978, here in our community of Shawnee, Oklahoma, the numbers are a bit lower; the number of students in that racial and ethnic group has doubled at my school over the decades instead of tripling. I might add that I often find Hispanic students’ parents here in our community to be very involved in their children’s education, taking days off from work if they have to in order to be at school when their child’s teacher calls. That may be the case across the country, and it may explain some of the narrowing gaps between Hispanic and White children. In the years since this trend assessment began, several efforts to strengthen our education system have been introduced. 
There was the Nation at Risk report in 1983 and No Child Left Behind in 2001; and looking ahead ten years from now, we will certainly point back to the Common Core State Standards movement as a new attempt to increase the rigor of our coursework for all students. The issues and questions persist, the common thread throughout every effort perhaps being the drive to improve schooling and to measure that improvement. There has always been progress, and there have always been, and probably always will be, hidden challenges. The great value of this report, though, is in allowing us to see both the progress and the problems. Our 9- and 13-year-olds are making gains, but that pace isn’t as evident among 17-year-olds. The achievement gaps are closing, but they still exist. Parental education levels are trending up, but that doesn’t always translate to better scores for students. This report gives us answers over the long term, but it invites us to keep asking questions about what we do right, what we do wrong and how we can improve America’s education. Thank you, Cornelia. Thank you, Brent. Our final speaker today is Kati Haycock. She is President of The Education Trust. Ms. Haycock has tremendous experience in the world of education advocacy. She was formerly the Executive Vice President of the Children’s Defense Fund and was founder and President of The Achievement Council. Kati was also the Director of Outreach and Student Affirmative Action for the University of California system. Kati, we look forward to your insights on these results. Thank you, Cornelia. Good morning to all of you. 
You know, one of the best things about the National Assessment of Educational Progress long-term trend assessment is that it really allows us as a country to get beyond finger pointing and competing claims about whether or not our schools are in crisis, and to answer the question, “How are we doing?” against a set of benchmarks that have remained essentially unchanged for 40 years. And for The Education Trust as an organization, this has been an essential tool in our efforts to help both educators and the public understand where we are making progress and for whom. What’s important to know is that when you break the data out over the long term and you ask who’s improving, the answer, as you can see in this first slide, and certainly as you saw in Jack’s more thorough presentation, is everyone. At the elementary and middle school levels, all groups of children are performing higher today than they were when this assessment got underway. And the particularly good news, given where they started, is that Black and Hispanic children have racked up some of the biggest gains of all. It’s important to understand that these gains, by the way, aren’t just minor, statistically significant but kind of meaningless gains. When you look, for example, at mathematics achievement, what you learn is that African American and Latino 9-year-olds today are actually performing about where their 13-year-old counterparts were in the early 1970s. Moreover, while it might have seemed impossible 25 years ago for Black and Latino 9- and 13-year-olds to ever reach the proficiency levels that White students then held, they have indeed reached those levels in mathematics, as you can see in this slide. And as a result, even though the performance of White students has improved too, we’ve made significant reductions in long-standing gaps between groups. 
As you heard earlier, in reading the gaps between groups are down, depending on grade level and group, by between 30% and 51%. In mathematics, they’re down by between 26% and 42%. Moreover, there has been progress across the achievement spectrum, from those at the low end of the performance distribution to those at the high end. In math, for example, the lowest-performing 13-year-olds in 2012, those at the 10th percentile, scored 27 points higher than did the lowest-performing 13-year-olds back in 1978. And the highest performers in 2012, those at the 90th percentile, scored 16 points above the highest performers at the beginning of the trend. When you think about it, these results very clearly put to rest any notion that our schools are getting worse. In fact, our schools are actually getting better for every group of children they serve. If we have a crisis in American education, it is this: that we simply aren’t yet moving fast enough to educate the so-called minorities who will soon comprise a new majority of our children nearly as well as we educate the old majority. At best, students of color are just now performing at the level of White students a generation ago. Now, it is absolutely a fool’s errand to make claims about causality, especially when surrounded, as I am right now, by statisticians from the National Center for Education Statistics; so I won’t do that here. But as we ask the question that’s so important for us to ask as educators and as Americans, “How can we accelerate progress for the very students who will be our new majority?”, it is instructive to look at rates of progress over time. It is clear, for example, that some of the biggest gains for Black and Hispanic students took place in the ’70s and early ’80s, a time when policymakers were beginning to confront problems inside of the educational system, including segregation and deep funding inequities, but also problems outside of it. 
Results since the late 1990s, those that coincide with efforts to raise achievement and close gaps through standards, accountability and public reporting, also show gains; but like the gains of the late 1970s, they also suggest a worrisome slowing down in the most recent years. Take African American 9-year-olds, for example: from 1994 to 1999, math scores actually declined by one point, or about 0.2 points per year. But between 1999 and 2004, just as accountability and public reporting efforts took hold nationwide, scores increased by 13 points, or roughly 2.6 points per year. Since then, however, the rate of improvement has slowed; between 2008 and 2012, math scores for Black 9-year-olds increased by only 2 points, or about 0.5 points annually. This pattern of steep gains between 1999 and 2004, followed by slower rates of improvement, is consistent across all groups of 9-year-olds; but the trend is not uniform, as we heard earlier, across both subjects and all three ages, especially age 17. So what lessons should we take from all of this? First, I think it’s very clear that improvement and gap closing are not just theoretical possibilities; they are happening. Long-standing gaps between groups are getting ever smaller, though not nearly fast enough for either the kids or our collective future. Second, as we seek to pick up the pace and close these gaps once and for all, it behooves us, and that means all of us, to mine every bit of data we have, including both this assessment and its main NAEP counterpart, in an effort to learn from the times and, in the case of main NAEP, from the states where progress has been the greatest. The kind of insight we can get from the fastest gainers will help all of us pick up our game. Thanks, Cornelia. Thank you, Kati. Now we want to engage in a Question and Answer Session with some of your questions. 
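[Editor’s note: the per-year rates cited above are simple interval arithmetic, dividing a score change by the length of the assessment interval. A minimal sketch follows; the score figures come from the remarks themselves, and the function name is an invention for illustration.]

```python
def annual_rate(score_change, start_year, end_year):
    """Average NAEP score-point change per year over an assessment interval."""
    return score_change / (end_year - start_year)

# Black 9-year-olds in mathematics, per the figures quoted in the talk:
print(annual_rate(-1, 1994, 1999))  # 1-point decline over 5 years: -0.2 points/year
print(annual_rate(13, 1999, 2004))  # 13-point gain over 5 years: 2.6 points/year
print(annual_rate(2, 2008, 2012))   # 2-point gain over 4 years: 0.5 points/year
```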
During this brief session, our facilitator will be Valerie Marrapodi; and she will direct the questions to the appropriate speaker. Valerie? Thank you so much, Cornelia. Those of you who have questions about today’s Report Card results or our speakers’ comments, please submit them now. As Connie mentioned earlier today, we ask that when you submit your question, you choose the option in the dropdown menu that says All Panelists. Please note that we also have NCES Associate Commissioner Peggy Carr on hand to answer questions today. Please remember to include your name and your organization when typing in your question. Questions we cannot get to during the event today will be answered later via e-mail. Our first question is from the Center for Public Education. Jim Hull is wondering: “To examine how much improvement our schools have made over the past 20 years, would long-term or main NAEP be the best measure of whether more students are learning what they need to know?” Jack, would you care to share your thoughts on that question? Sure, I’d be happy to. So the answer is, yes. The trick with that question, of course, is that only together do the two assessments really provide a complete picture of achievement in either math or reading. We’ve maintained them both because there’s an old saw in psychometrics, or educational measurement, which is, “if you want to measure change, don’t change the measure.” And so long-term trend reflects that perspective, where you want to keep your assessment essentially the same in order to get the most accurate measures of change possible. However, as we all know, over a 40-year period an awful lot changes in our education system, in our pedagogy, in curriculum and instruction, in what we think math and reading are, and in how different sequences of course material and other subject matter are presented to students at different age levels. 
And so main NAEP really reflects the acknowledgement that in the reality of education, you can’t always avoid changing the measure; sometimes you do actually need to modernize your framework. Of course this can present some confusion, because of the man-with-two-watches problem. So we’ve done more than one validity study to look at, for example, average rates of change in scores. One study looked at the period between 1990 and 2009; in that case, we found that both the long-term trend and the main NAEP showed positive progress at the elementary and middle school levels in math, and the elementary school level in reading. But the main NAEP trend had a higher average rate of change in the scores. And we’ll continue to do that type of detailed analysis to compare; but really, we need to leave it to the public and to policymakers to decide which family of studies is the best to answer a particular question. Excellent, thank you so much. Our next question is in regard to our 17-year-olds. It’s from Central Connecticut State University; Jesse Turner is asking: “How do our NAEP scores for 17-year-olds look compared to 2008?” And, Jack, I know you touched on this in your slides previously; but can you go ahead and elaborate on that? Sure, I’d be happy to. So let’s break it down separately, I guess, by math and reading. If we’re just comparing our 17-year-olds in 2012 to 2008, I think the best things to look at are the percentile changes. So if we compare all five of the percentiles between 2008 and 2012, in mathematics there were no statistically significant changes. So across all the different ability groups that we measured here via percentiles, there’s no statistically significant change. When we look at reading, though, we actually do see some statistically significant improvement for 17-year-olds, but only among the lowest-performing ones. So between 2008 and 2012, the 10th percentile increased from 227 to 232. 
So it’s not a case of absolutely no improvement in the last four years for 17-year-olds, but it’s very much concentrated in reading and only at the lowest performers. Great, thank you so much. Our next question is from Peace Bransberger; she is wondering: “Are any of the results analyzed available for a metro area, or are comparisons available by metro versus non-metro?” And, Jack, sorry to monopolize the conversation here; but I think this is another one I want to pass your way. The short answer unfortunately is, No. Unlike main NAEP, where we have a robust and hopefully growing program of providing results at the urban district level on a trial basis, in the long-term trend NAEP we don’t have sufficient sample size in any particular metropolitan area to disaggregate that way. We can, however, at least going back to 2008, disaggregate in the very general sense of urban versus, say, rural school districts. So it’s possible to provide some information; but it’s, again, very general. It’s nationally representative for urban areas as opposed to a particular metro area. Great, thank you. Another question that we have today deals with instruction and learning; we have several questions coming in on that topic. This one is from Addison Northwest Supervisory Union, from Carol Spencer; and she is wondering: “What instructional strategies or programs had the most significant impact on these changes?” Brent, do you have any thoughts on that question? I can react to that based on what we do here in our school. Our school leans pretty heavily on making very deliberate efforts to practice research-based instructional strategies that increase student achievement. We focus especially on three specific ones: first, we make sure we give students opportunities to identify similarities and differences in their classes, in their lessons. Second, we are heavily into asking kids to summarize and to learn how to take notes. 
Third, we’ve spent a lot of time and effort reinforcing student effort and recognizing that effort. When it comes to recognizing students for a job well done, a lot of times we’ve changed the conversation here from, “You did a good job,” to just tweaking it a little bit by saying, “I’ll bet that was hard, and I’ll bet you’re proud of the effort you gave to do that.” And that makes an amazing difference; we see kids try a little harder whenever we use those strategies. Now, speaking again just from my experience here at Shawnee Middle School, one of the practices that has had a significant impact in terms of raising student achievement over the years is that teachers have become better at analyzing the data. Because they’re better at analyzing the data, they’re more prescriptive in the way they teach. We spend a great deal of time poring over test data and benchmark results; and then we meet with students to go over those results, and we set goals with the students. The purpose is, of course, to set a goal to increase their score the next time around. Great, thank you so much. Our next question is from Jim Kohlmoos, and I apologize if I’m not saying your last name correctly. But he is complimentary toward Kati, saying, “Great analysis.” And, Kati: “How would you suggest we accelerate the gap-closing process?” Thanks, Jim, for the compliment. We’ve spent a lot of time as an organization trying to understand what we can learn from the schools, school districts and, in fact, states that are making the most progress in accelerating performance for students of color and low-income students and closing the gaps between groups. Obviously, the long-term trend exam doesn’t lend itself to that; but certainly, there are lots of other sources of data. 
And in the end, our impression is that you don’t make gains through some special, weird voodoo formula; in other words, what Black children or Latino children or poor children need isn’t some magical thing that’s different from what other kids need. They really need the same high-quality instruction that all kids need. And that’s sort of the good news: the schools and school districts that are making gains are really focusing on getting the basics right, which means very clear expectations for what students should learn; lots of supports for teachers in teaching to those standards; not leaving teachers on their own to kind of make up how they’re going to get there; common curriculum and assignments that teachers can give together, so that they can essentially have a common understanding of what good-enough work looks like; and lots of help for both kids and teachers as they work toward higher levels of achievement. So it’s not really so much about some weird set of extras; it’s mostly about focusing on the basics, not ever giving up, and really demanding quality work from all kids. Excellent insight. Our next question is from Kathy McKean, and she is wondering: “Do you know of any studies comparing changes in the curriculum for 9-, 13-, and 17-year-olds since 1971?” Jack, could you start us out with your thoughts on that? I’ll take a shot at it, but other panelists might have better knowledge here. Certainly there are none in NAEP’s long-term trend, and the types of curriculum studies that I’ve seen also generally focus on grade levels, not age cohorts, which is sort of a peculiarity of the long-term trend. I would say in main NAEP, we do have a long-running series of high school transcript studies that have periodically looked at course taking, so you can look at how high school graduates’ course taking has changed over time, at least for a little over the last decade or so. 
And there, for example, we find in more recent years increases in advanced course taking; so more high school graduates are taking advanced science and more AP and IB courses. That’s one curriculum study that NAEP has done. I would say also that this speaks to the curriculum issues we face, particularly in math. At the middle school, we have more and more eighth grade students, for example, taking Algebra I, which used to be reserved for freshmen at the high school. And then as a result, we have an advanced seventh grade math class to prepare them for Algebra I in eighth grade. So I think those kinds of things, curriculum-wise, may help answer that question. Excellent, thank you so much. Unfortunately, we only have time for one last question today. Please note that for any questions you have submitted, we will be following up with you directly over e-mail. The last question we’re going to take today is from Posie Boggs with Literate Nation: “When describing testing results from NAEP, how should one respond when an educator says the results do not apply to their state, city or town because NAEP does not test every child in that state, city or town?” Cornelia, would you like to go ahead and address that question? I will take the first shot at it, but others are certainly welcome to comment on this. So the assessment is nationally representative, and it is intended to represent how progress is going across the nation. I would not say that it does not apply to a state, because I think if you were to take the main NAEP and look at the state results, you’d see very similar patterns to those you see in the national results. So I wouldn’t just immediately dismiss this. I would encourage states to look at it carefully and use it in conjunction with the main NAEP. And we will have a report later this year of reading and math scores from 2013. So this is very timely. 
These are occurring in very close proximity, and I think they can be used together at the state level to make improvements. This is Jack; I would completely support that answer. I think I would just emphasize again that the point of a nationally representative sample for an assessment like NAEP is that every single child in each of the age groups, in this case, has some probability of being selected. And we spend a lot of time trying to develop and then draw a careful, nationally representative sample so that, even though we don’t test every kid, which would be an undue burden on our nation, we are in fact able to generalize our results to every single child in that age group in every state, in every city or town. Let me, if I could, just add one thought on this; and perhaps it’s easier for Brent and me, as not staff members at NCES. I will tell you how I respond when people ask me that same question; and that is, “You should trust these results much more than you trust the results from your state and local assessment.” Why? Because these are not exams that teachers are teaching to. You don’t have to worry about whether numbers went up or down not because kids learned more but because teachers taught to this test. Nobody teaches to the NAEP exam, which is one of the reasons why it is such a useful measure of what our kids actually can do. That is an excellent, excellent comment, Kati; I echo that. Great, thank you so much. I wish we had more time, but we do have to close out our session today. Cornelia, back to you. Thank you so much, Valerie. And thank you to all of you who have submitted questions and those of you who have been making comments about these results and the assessment on Twitter. I hope that today’s event will inspire you to learn even more about the results and the implications for the education of our nation’s students. 
I encourage you to continue participating in the chat using the Twitter hashtag and to follow the conversation about today’s results in the media. On the Governing Board’s event website, you’ll find links to the full Report Card, the speakers, and the speakers’ comments. And in a few days, you will also be able to find an archive of today’s webinar in case you want to recommend it to a colleague. In addition, on The Nation’s Report Card website you can take a deeper look at the data we released today by using the NAEP Data Explorer tool, as well as review the long-term trend questions and the background information on students, teachers, classrooms and schools that was collected along with the assessment questions. Be sure to stay up-to-date on information and events by following the Governing Board and NAEP on our social media sites. Today I want to thank Jack, Peggy, Brent and Kati for being with us. And of course thank you to all of you for participating. Thank you, and now back to our webinar. Ladies and gentlemen, this concludes today’s webinar. You may now disconnect your lines. Thank you and have a great day.