Marshall Memo 728

A Weekly Round-up of Important Ideas and Research in K-12 Education

March 19, 2018



In This Issue:

1. Three levels of learning

2. The persistence of tracking as an example of the research/practice gap

3. Pushing back on test prep in elementary literacy classrooms

4. Designing a high-quality system of assessments

5. Why students should not use a checklist to assess websites

6. Effective mentoring of new teachers


Quotes of the Week

“Why has education research done so little to blunt inequality and exclusion?”

Jeannie Oakes (see item #2)


“When you learn about successful principals, you keep coming back to the character traits they embody and spread: energy, trustworthiness, honesty, optimism, determination… In other words, they are high-energy types constantly circulating through the building, offering feedback, setting standards, applying social glue… We went through a period when we believed you could change institutions without first changing the character of the people in them. But we were wrong. Social transformation follows personal transformation.”

            David Brooks in “Good Leaders Make Good Schools” in The New York Times, March 13, 2018.


“We have to acknowledge that kids are going to challenge authority; that’s sort of their job. We have to respond with developmentally appropriate, purposeful actions – so [kids] are held accountable in a way that helps them learn and grow, as opposed to just learning that they should continually buck authority, or that compliance is the only way to be in good graces with adults.”

            Cami Anderson in “Sparking a School Discipline Revolution” by Amanda Kocon in a TNTP interview, excerpted in Education Digest, March 2018 (Vol. 83, #7, p. 16-21); Anderson leads the Discipline Revolution Project.


“Physically placing master teachers, highly effective teachers, or coaches in central locations where they are closer to – and more likely to cross paths with – their colleagues would increase the probability that these individuals interact with and influence others.”

            James Spillane and Matthew Shirrell in “The Schoolhouse Network” in Education Next, Spring 2018 (Vol. 18, #2, p. 68-73); Spillane can be reached at [email protected].


“It’s not enough to read books about it; you need to feel it.”

            Jakob Hetzelein, a history teacher in Germany, quoted in “Tours of Nazi Camps to Curb Creeping Hate” by Katrin Bennhold in The New York Times, March 12, 2018.









1. Three Levels of Learning

            In this article in Phi Delta Kappan, Guy Claxton (King’s College/London) uses the metaphor of a river to describe three levels of learning:

-   Knowledge and information on the surface, usually easy to see and describe (the Boston Tea Party, the periodic table);

-   Skills and literacies just below the surface, the expertise that enables students to use knowledge and information (reading, writing, solving a tricky new math problem, thinking in Spanish);

-   Attitudes and habits of mind deeper still; harder to see, these are the gradually developing processes that influence how students respond to difficulty, complexity, and frustration: Are they interested or threatened? Do they engage or wait to be directed? Are they willing to admit mistakes or do they try to cover up their fallibility?

“Like the banks and bed of the river,” says Claxton, “every classroom channels and shapes students’ development, not just as knowers of facts but as learners. The attitudes and habits shaped at school have a powerful impact on students’ long-term success in life.”

            Thinking back on when he taught high-school chemistry, Claxton realizes with chagrin that he was “unconsciously and unintentionally steering my students to becoming passive, dependent, cautious grade addicts rather than imaginative, independent, risk-taking explorers. I made things too neat for them and rescued them from difficulty too quickly, so they didn’t learn how to struggle for themselves. I was in such a hurry to make sure they were catching the knowledge I was sending down the river that I paid no heed to the deeper forms of learning.”

            How could he have done better? For one thing, he realizes, rather than providing the exact equipment necessary for a particular experiment, he should have laid out a variety of resources, including some that were not appropriate, and let students choose. “As teachers,” he says, “we make dozens of these choices in every lesson, every day. Making these choices requires an awareness of the different layers of learning and how each layer draws upon different aspects of good teaching.”

            Thinking about the second layer of skill development, Claxton realizes that he should have been more of a personal trainer/coach, designing good “exercise machines” for the target skills appropriate to students’ current level of expertise and giving constant feedback, so they would stretch to achieve their personal best.

            The deepest level is different, he says: “Learning attitudes are not taught, or even trained, so much as incubated, and we teachers have to design the incubator. To build curiosity, resilience, and independence, we need to design the whole culture of the classroom to welcome and strengthen these dispositions. Everything we do – how we design our lessons, how we mark students’ work, the language we use, how we arrange the furniture, how eager we are to demonstrate our knowledge – all of these elements slowly shape students’ attitudes.” This layer is constantly interacting with the other two, but teachers need to be aware of all three “lest we inadvertently teach in a way that keeps our students floating on the surface.”

            Claxton has four practical suggestions for building strong, positive attitudes about learning while not neglecting knowledge and skills:

            • Not letting students use erasers – The way erasers are used in some classrooms, he believes, “weakens children’s learning by encouraging them to hide their mistakes. Some teachers act as if being ‘bright’ means getting all your answers right the first time.” In the real world, there’s lots of trial and error, and getting the idea that being smart means being instantly successful makes students feel stupid when they make mistakes, flounder, or need more time. “This bad feeling,” says Claxton, “makes you want to avoid effort and the risk of error, so your resilience and determination are undermined and you become a weaker learner.”

            • A poster with steps to take when you’re stuck – He suggests having students brainstorm self-help suggestions on steps to take when they don’t know what to do – for example, Reread the question. Look at the illustrations. Ask a friend.

            • Three things I’ve already tried – When a teacher arrives to help a student, ask what the student has already tried. Maybe the student needs another minute, or a slight hint, but not too much, says Claxton: “Just enough to get their learning going again.”

            • Chili challenges – Similar to restaurant menus that give a choice of how spicy a dish should be, students might be given the choice of different degrees of difficulty – aiming for just the right level to get them into the learning zone. “If students choose one that is too easy or too hard, no problem,” says Claxton, “they can always choose a different one. Once they get used to being in the learning zone, they become more resilient.”

            Claxton mentions several programs and school designs that he believes are effectively addressing the challenge of teaching to all three levels:

-   EL Education (previously Expeditionary Learning);

-   Studio Thinking schools (inspired by Project Zero);

-   High Tech High schools;

-   The Cultures of Thinking and Visible Thinking methods;

-   The Habits of Mind network;

-   Character Lab;

-   International Baccalaureate’s Learner Profile;

-   The Tools of the Mind program;

-   Building Learning Power (from Claxton’s own team).

“The philosophical, theoretical, experimental, and practical synergy between these and many other kindred approaches,” he says, “is driving the redesign of schools and classrooms around the world.”


“Deep Rivers of Learning” by Guy Claxton in Phi Delta Kappan, March 2018 (Vol. 99, #6, p. 45-48); Claxton can be reached at [email protected].

Back to page one


2. The Persistence of Tracking as an Example of the Research/Practice Gap

            “Why has education research done so little to blunt inequality and exclusion?” asks Jeannie Oakes (University of California/Los Angeles) in this article in Educational Researcher (drawn from her 2016 AERA presidential address). “Three decades of struggling with this problem have opened my eyes to the fact that moving from research to equitable, inclusive policy and practice faces particular challenges that limit the impact of our research.” Oakes points to solid research on the efficacy of adequate funding, access to high-quality teaching, challenging curriculum, student-centered and culturally relevant instruction, increased learning time, bilingual instruction, health and social supports, integrated schools, and doing away with tracking. But implementing these key findings has been uneven at best. Detracking is a case in point.

            Earlier in her career, Oakes and others did groundbreaking research showing that tracking was an ineffective strategy, especially for students in the lower tracks, and for children of color, who disproportionately wind up in the lower tracks. Oakes and her colleagues found that in schools that did away with tracking:

-   Many teachers successfully worked with heterogeneous groups, boosting the achievement of struggling students above what it would have been in tracked classes;

-   Higher-achieving students did just as well in heterogeneous classes;

-   Racial gaps in enrollment in challenging courses were reduced, significantly in some cases.

However, says Oakes, “neither prior research nor on-the-ground success in these schools was enough to change practice at scale or sustainably. Evidence about the harms of tracking and educators’ own innovations ran smack into three entrenched and pervasive obstacles:

-   Deeply held beliefs on the nature of the ability to learn as fixed, genetically determined, distributed on a bell curve, and racially linked;

-   Fear and hostility about racial and social-class mixing;

-   [The] politics of comparative advantage in which powerful families use school status and success as a means to help ensure the intergenerational transmission of advantage to their children.”

In short, Oakes concludes, “fixing inequality is not an engineering problem… Technical solutions are surely needed, but they are not enough. Inequity and exclusion reflect and are sustained by cultural norms and power relations. Those forces make it seem ‘normal’ for young people from materially and culturally advantaged groups to succeed at higher rates than others. Persistent fears and doubts about the ‘Other’ blind us to their potential contributions and help drive policies that limit access to materials, social opportunities, and resources for those who are already the most disadvantaged.”

            Oakes closes with a call for “public scholarship” in educational research, based on the hope and determination that scientific inquiry will improve public schools, the nation’s core democratic institution. “As public scholars,” she says, “let’s work with others to produce and use knowledge that supports the cultural and political shifts that inclusive and equitable education requires. Such work is itself democratic and empowering of those who suffer most from inequity, exclusion, and inadequate education. It can contribute to the most important work of the 21st century – making our increasingly diverse society a vibrant, creative, and intelligent democracy.”

            Oakes mentions AERA’s Knowledge Forum, a portal with 31 short (7-minute) talks by scholars on practical applications of research.


“Public Scholarship: Education Research for a Diverse Democracy” by Jeannie Oakes in Educational Researcher, March 2018 (Vol. 47, #2, p. 91-104); Oakes can be reached at [email protected].

Back to page one


3. Pushing Back on Test Prep in Elementary Literacy Classrooms

            In this article in The Reading Teacher, Dennis Davis and Nermin Vehabovic (North Carolina State University) acknowledge the pressure many teachers are under to spend a lot of classroom time on test preparation to boost their students’ scores on high-stakes reading tests. “Resist!” say Davis and Vehabovic. “Test preparation is not the solution to raising students’ test scores in reading, and it may have long-lasting negative effects on students’ literacy lives… Further, research cautions that instruction centered on testing might be more likely to occur in schools that serve students of color and those affected by poverty than in schools in more affluent communities. This disparity in educational opportunity makes resisting test-centric practices an issue of social justice.”

If administrative mandates make test prep absolutely unavoidable, the authors recommend spending no more than five percent of the year’s literacy time on it (that’s about eight hours total). “After that threshold is crossed,” they say, “(and maybe before), students might learn unproductive messages about reading comprehension. This unproductive learning is hard to undo once it takes hold.”

            But Davis and Vehabovic aren’t absolutists. They agree that “testwiseness” is a useful skill – familiarity and confidence with how test items are worded and presented on the page, timing constraints, and “suspension of interpretive authority” (in other words, kids can’t talk back to the test). “Test writers use language and formats that might not be familiar to all students,” say the authors. “There might be some benefit in helping students feel comfortable with the language of the test so they can demonstrate their knowledge rather than struggling to break the code of the test’s author.” But Davis and Vehabovic believe it should be strictly time-limited and taught in separate units – the test genre – from regular literacy instruction.

The heart of Davis and Vehabovic’s article is devoted to sensitizing teachers and administrators to the ways test prep can creep into the literacy curriculum on cat’s feet, conveying unintended messages to students and undermining good teaching and learning. They believe schools should resist these five practices:


            • Prioritizing tested standards – This happens when local curriculum guidelines require teachers to deemphasize untested standards or defer them until after accountability tests have been given. It also happens when students are told that a particular lesson or topic is especially important because it will be tested. These two practices suggest to students that the test is the goal of instruction. They might also infer that the messy, difficult-to-assess aspects of reading comprehension (critical thinking, online inquiry, social affiliation, knowledge building, and writing) are less important.

            • Using test-formatted passages – This happens when teachers use passages and questions from test-prep workbooks or hand out photocopied excerpts of a book asking test-like questions with stems derived from previous tests. Students who get a lot of this kind of test prep might infer that it’s not important for readers to choose their own texts, analyze and discuss authors and their motives, read longer texts over several days, or read multiple texts on the same topic for deeper meaning. “Instruction centered on test passages is especially detrimental for readers who have had difficulty with school literacy,” say Davis and Vehabovic.

            • Strategies for annotating passages – This happens when students follow a rote set of procedures with each passage: begin by writing the genre at the top, then circle the title and headings, then write a prediction, then write a main idea statement after each paragraph. A related practice is mini-lessons that focus more on completing written evidence of strategies than on higher-level thinking, metacognition, and problem solving. “When annotation becomes a required way of showing understanding of a passage,” say Davis and Vehabovic, “students might form misconceptions about how, why, and when readers use strategies (and make notes) while reading. Comprehension strategies lose their value when they are implemented at arbitrary stopping points in a text and serve mostly to prove that test answers have been traced back to specific sentences or paragraphs in a passage.”

            • Item teaching – For example, assigning examples of previous test items and discussing how to eliminate answer choices and hunt down and annotate evidence for the best answer. This can also involve incorporating test-like questions into read-alouds or small-group reading. “Item teaching is counterproductive because it does not help students learn the content that the items are supposed to measure, and may misrepresent or distort the intended content,” say Davis and Vehabovic. “Time spent discussing how to answer a question is time removed from discussing the content of the text, responding and critiquing, or talking about how a text informs the readers’ curiosities about the world around them.” Students might very well conclude that comprehension is mostly about categorizing small chunks of text and agreeing with an authoritative interpretation of the author’s message.

            • Over-interpreting item data – This usually involves item-by-item analysis of students’ performance on a benchmark test and zeroing in on specific areas (for example, making inferences) for re-teaching. While this practice is well-intentioned and seems logical, Davis and Vehabovic advise teachers not to go down this rabbit hole. “There are many reasons a reader might answer items incorrectly,” they argue. “Maybe the question was an inference about a character with whom the student could not identify or empathize; maybe the text was so challenging for the student that her answers provide no valid information about her meaning-making at all; maybe the wording of that question was confusing. The conclusion that a reader needs help with a particular standard based on analysis of one or a few questions is usually unfounded. Most comprehension tests are not designed to provide fine-tuned diagnostic information about discrete components of comprehension. We urge teachers and school personnel to consider that most comprehension tests are useful for giving us information about one large standard (i.e., students can accurately answer questions about the texts and topics that were included on the assessment). They should not be used to draw conclusions about smaller subcomponents of reading comprehension.”

The authors conclude with a helpful Venn diagram showing the intersection of lifelong literacy practices, classroom instruction, and accountability testing. Some conclusions:

-   Where classroom instruction and accountability testing intersect, one segment is labeled “knowledge and skills important to classroom literacy that are not (or cannot be) prioritized on accountability tests.” These include independent reading, self-selection of texts, and participation in vibrant discussions.

-   The other segment is labeled “knowledge and skills students need for accountability testing that are part of classroom literacy instruction.” This overlap is important, because effective literacy instruction is the best way to prepare for tests.

-   Lifelong literacy practices intersect with classroom instruction and to a lesser degree with accountability testing, but there is a segment outside both of those: the deeper beliefs and habits of mind that we hope students will carry with them to future grades and literacy-rich lives.

-   The very slim test prep segment is outside all three areas and includes test-specific skills – important but very limited – that are not part of the “real” literacy curriculum.


“The Dangers of Test Preparation: What Students Learn (and Don’t Learn) About Reading Comprehension from Test-Centric Literacy Instruction” by Dennis Davis and Nermin Vehabovic in The Reading Teacher, March/April 2018 (Vol. 71, #5, p. 579-588); the authors can be reached at [email protected] and [email protected].

Back to page one


4. Designing a High-Quality System of Assessments

            “No single assessment or piece of student work can provide educators, students, parents, and the public with a complete picture of what students know and can do,” say a group of 19 education organizations and assessment experts in this paper, crafted by Kathryn Young (Education Counsel), Lexi Barrett, and Rebecca E. Wolfe (both of Jobs for the Future). The monograph synthesizes research from numerous sources into ten qualities that a system of assessments should ideally contain:

            • Comprehensiveness – Taken together, assessments should measure the full array of knowledge, skills, and behaviors needed for college, career, and civic readiness. This includes core academic content, critical thinking, problem solving, collaboration skills, metacognition, and academic mindsets. Measures should include performance assessments and projects as well as conventional tests.

            • Balance – The system should include assessments of learning and real-time assessments for learning, with timely information that improves teaching and students’ learning strategies, continuously fine-tunes the assessments themselves, and provides public accountability.

            • Equity and inclusiveness – The system of assessments should include accommodations for English language learners and students with disabilities and make good use of Universal Design for Learning. It should also include information on college and career readiness that is comparable, valid, and reliable statewide so stakeholders have a sense of student learning across schools and districts and across groups of students. “Additionally,” say the authors, “knowledge, skills, and behaviors assessed should be those that can be taught and mastered in the classroom so that assessments do not presuppose students coming to school with prior knowledge from particular contexts.”

            • Capacity – Teachers and administrators must have the tools, knowledge, and skills to administer the full range of assessments and use the results to support students and improve pedagogy. Many students will need to be brought up to speed on assessments of deeper learning so they are engaged and increasingly comfortable with them.

            • Efficiency – Too much testing has been a frequent complaint among educators. However, say the authors, “The goal should not be to minimize time on testing solely for the sake of time – which could lead to eliminating some of the most important indicators of student learning, such as performance assessments.” Rather, all assessments should be designed for high-quality measurement of the most important objectives, while eliminating any possible duplication.

            • Coherence – Assessments should be designed to ensure horizontal alignment (with what students are being taught) and vertical alignment (with grade-to-grade expectations and summative tests). Teachers, students, administrators, and parents should have a clear picture of student learning in real time and over time, all geared to standards that lead to college and career success.

            • Engagement – There should be ongoing input from and collaboration with key stakeholders (teachers, administrators, students, parents, employers, postsecondary institutions, and community leaders) so people understand the various assessments and how they are being used, can contribute to continuously improving them when necessary, and can inform the design of the assessment system.

            • Fine-tuning – There should be cycles of piloting, review, calibration, and continuous improvement to ensure that assessments are accurate, up-to-date, and serve their fundamental purpose of improving teaching and learning.

            • Quality – Assessments, individually and collectively, should meet high standards of validity, reliability, and fairness. Following the Standards for Educational and Psychological Testing, test scores should never be the sole determinant of consequential decisions; multiple measures are far more reliable.

            • Privacy – Data on students’ work and achievement must be transparent and as meaningful as possible, and must also be protected and secure. The authors endorse the Student Data Principles promulgated by the Data Quality Campaign and 33 other organizations.


“10 Principles for Building a High-Quality System of Assessments” by Kathryn Young, Lexi Barrett, and Rebecca E. Wolfe in a Jobs for the Future publication, February 2018; the authors can be reached at [email protected], [email protected], and [email protected].

Back to page one


5. Why Students Should Not Use a Checklist to Assess Websites

            In this article in Phi Delta Kappan, Joel Breakstone, Sarah McGrew, Mark Smith, Teresa Ortega, and Sam Wineburg (Stanford University) critique the widely used checklist approach to assessing the reliability of an unknown website. They applied the popular CRAAP Test (from the Meriam Library at California State University) to check out the Employment Policies Institute’s website:

-   Currency – Is the information timely?

-   Relevance – Is the information important to your needs?

-   Authority – Is there a reputable source of the information?

-   Accuracy – How reliable, truthful, and correct is the content?

-   Purpose – What is the motivation behind the website?

Using this checklist, they say, the EPI website checked out fine, but it missed several rather important facts: the Employment Policies Institute is an industry public relations firm posing as a think tank, and is funded by Berman and Company, a Washington firm that works on behalf of the food and beverage industry – which opposes raising the minimum wage. According to a New York Times article, Richard Berman, the company’s owner, has a track record of creating “official-sounding nonprofit groups” to disseminate information on behalf of corporate clients.

            Where did this information come from? The Stanford researchers found it in a matter of minutes by doing what professional fact-checkers do: immediately leaving the website in question and searching the Internet for relevant information about it. But the fact-checker approach is not used by most undergraduates, K-12 students, and teachers. Asked to assess the EPI website, 95 percent of college students and 90 percent of eleventh graders failed to see through the façade.

The problem, say the authors, is that checklists don’t equip people to deal with “an Internet populated by websites that cunningly obscure their true backers: corporate-funded sites posing as grassroots initiatives (a practice commonly known as astroturfing); supposedly nonpartisan think tanks created by lobbying firms; and extremist groups mimicking established professional organizations. By focusing on features of websites that are easy to manipulate, checklists are not just ineffective but misleading. The Internet teems with individuals and organizations cloaking their true intentions. At their worst, checklists provide cover to such sites.” One other problem with the CRAAP Test is that its five elements have sub-questions – 25 in all – making it a cumbersome and impractical tool for students and educators in a hurry.

            “The consequences of failing to prepare students to evaluate online material are real and dire,” conclude the authors. “The health of our democracy depends on our access to reliable information and increasingly, the Internet is where we go for it… For every important social and political issue, there are countless groups seeking to gain influence, often obscuring their true backers. If students are unable to identify who is behind the information they encounter, they are easy marks for those who seek to deceive them… Skilled teachers already use modeling to show students how to analyze Shakespearean sonnets or evaluate scientific claims. We must do the same with strategies for evaluating information online.”


“Why We Need a New Approach to Teaching Digital Literacy” by Joel Breakstone, Sarah McGrew, Mark Smith, Teresa Ortega, and Sam Wineburg in Phi Delta Kappan, March 2018 (Vol. 99, #6, p. 27-31); the authors can be reached at [email protected], [email protected], [email protected], [email protected], and [email protected]. See a related article from Stanford researchers in Marshall Memo 660.

Back to page one


6. Effective Mentoring of New Teachers

            In this article in Phi Delta Kappan, Nina Weisling (Cardinal Stritch University) and Wendy Gardiner (Pacific Lutheran University) observe that not all mentoring programs for new teachers are working well. They offer these suggestions for supports, structures, and resources that the most effective mentoring programs have in place:

            • Set clear expectations. Principals need to make several key decisions up front: Will mentors and mentees have a say in choosing each other? Will matches be made with an eye to content and/or grade level? Will mentors have an evaluative role? When and where will mentors and mentees meet, and how frequently? Clarifying these and other issues is important to setting up mentoring relationships for success.

            • Maximize co-teaching and close support. Weisling and Gardiner believe the conventional mentoring model – creating lesson plans, gathering data during a lesson, analyzing student work, viewing and discussing a classroom video, on-the-fly conversations about challenges and successes – doesn’t take full advantage of a veteran teacher’s potential. More powerful, they believe, are:

-   Co-teaching – The mentor and mentee plan and teach a lesson together, providing modeling and immediate support and a lot to talk about afterward;

-   Demonstration teaching – It’s especially helpful for novice teachers to see how a particular technique or strategy is executed with their own students;

-   Stepping in – Real-time suggestions or interventions can be effective if the mentee is aware that this might happen and there is a trusting relationship; these can be as subtle as a hand signal or as direct as rescuing the mentee from a tricky teacher-student dynamic.

There’s no one right way to handle a mentoring relationship, say Weisling and Gardiner, and mentors should be flexible and adaptable to each situation.

            • Choose well and mentor the mentor. Naturally, mentors need to be excellent teachers, but they also need something more: the interpersonal skills to take new teachers under their wing and the wisdom to help identify and analyze critical problems of practice. Principals should orchestrate good professional development for beginning mentors, organize co-mentoring partnerships in which veteran mentors collaborate with newbies, and encourage all mentors to reflect on their practice and results.

            • Tend to relationships. Mentors need to have enough released time to get to know their mentees’ students, which improves the quality of advice and support given. Principals also need to keep an eye on mentor/mentee relationships and treat mentors as part of their instructional cabinet. “When mentor teachers are seen as trusted members of a support team – and not as tattletales, playing ‘gotcha’ with their mentees – then they can serve effectively as liaisons. On the one hand, they can help new teachers understand and implement school policies; on the other hand, they can help administrators understand and respond to new teachers’ needs.”


“Making Mentoring Work” by Nina Weisling and Wendy Gardiner in Phi Delta Kappan, March 2018 (Vol. 99, #6, p. 64-69); the authors can be reached at [email protected] and [email protected].

Back to page one








© Copyright 2018 Marshall Memo LLC


If you have feedback or suggestions,

please e-mail [email protected]



About the Marshall Memo


Mission and focus:

This weekly memo is designed to keep principals, teachers, superintendents, and other educators very well-informed on current research and effective practices in K-12 education. Kim Marshall, drawing on 48 years’ experience as a teacher, principal, central office administrator, writer, and consultant, lightens the load of busy educators by serving as their “designated reader.”


To produce the Marshall Memo, Kim subscribes to 60 carefully chosen publications (see list to the right), sifts through more than a hundred articles each week, and selects 5-10 that have the greatest potential to improve teaching, leadership, and learning. He then writes a brief summary of each article, pulls out several striking quotes, provides e-links to full articles when available, and e-mails the Memo to subscribers every Monday evening (with occasional breaks; there are 50 issues a year). Every week there’s a podcast and HTML version as well.



Individual subscriptions are $50 for a year. Rates decline steeply for multiple readers within the same organization. See the website for these rates and how to pay by check, credit card, or purchase order.



On the Marshall Memo website you will find detailed information on:

• How to subscribe or renew

• A detailed rationale for the Marshall Memo

• Publications (with a count of articles from each)

• Article selection criteria

• Topics (with a count of articles from each)

• Headlines for all issues

• Reader opinions

• About Kim Marshall (including links to articles)

• A free sample issue


Subscribers have access to the Members’ Area of the website, which has:

• The current issue (in Word or PDF)

• All back issues (Word and PDF) and podcasts

• An easily searchable archive of all articles so far

• The “classic” articles from all 14 years

Core list of publications covered

Those read this week are underlined.

All Things PLC

American Educational Research Journal

American Educator

American Journal of Education

American School Board Journal

AMLE Magazine

ASCA School Counselor

ASCD SmartBrief

District Management Journal

Ed. Magazine

Education Digest

Education Next

Education Update

Education Week

Educational Evaluation and Policy Analysis

Educational Horizons

Educational Leadership

Educational Researcher

Elementary School Journal

English Journal

Essential Teacher

Exceptional Children

Go Teach

Harvard Business Review

Harvard Educational Review

Independent School

Journal of Adolescent and Adult Literacy

Journal of Education for Students Placed At Risk (JESPAR)

Kappa Delta Pi Record

Knowledge Quest

Literacy Today

Mathematics Teaching in the Middle School

Middle School Journal

Peabody Journal of Education

Phi Delta Kappan


Principal Leadership

Reading Research Quarterly

Responsive Classroom Newsletter

Rethinking Schools

Review of Educational Research

School Administrator

School Library Journal

Social Education

Social Studies and the Young Learner

Teachers College Record

Teaching Children Mathematics

Teaching Exceptional Children

The Atlantic

The Chronicle of Higher Education

The Education Gadfly

The Journal of the Learning Sciences

The Language Educator

The Learning Professional (formerly Journal of Staff Development)

The New York Times

The New Yorker

The Reading Teacher

Theory Into Practice

Time Magazine