Our Curriculum Review Landscape is ‘Frankly Bananas’
By popular demand, I'm summarizing the issues with ELA curriculum reviews. It's a mess.
The good news: the country is waking up to the fact that curriculum matters. Weak curriculum has ushered “cueing” into classrooms instead of systematic phonics and popularized book-starved programs nationwide.
If curriculum can bring weak instruction into schools, it stands to reason that it can bring better teaching into schools, too. Louisiana and Tennessee brought strong curricula into use statewide, and reading score improvements followed. It’s possible!
Unfortunately, most states aren’t following suit, partly because they lack clear signals on curriculum quality.
The curriculum review landscape is “frankly bananas,” in the memorable words of journalist Holly Korbey. She’s spot on.
The organizations reviewing materials don’t see eye to eye on what matters:
EdReports reviews against the Common Core standards, and it doesn’t even get that right.
The Reading League focuses primarily on foundational skills, and its reports are both hard to comprehend and weak on other aspects of literacy.
The Knowledge Matters Campaign highlights the strongest programs for comprehension, but it neglects foundational skills nuances and fails to note programs’ shortcomings.
The result is a mishmash of signals for states and districts to navigate.
Here’s the lay of the review landscape.
EdReports
EdReports, the most influential curriculum review organization, was founded in 2014 to review curricula against the Common Core standards. For its first few years, EdReports earned kudos for calling out major publishers for poorly aligned programs. It helped raise awareness of the importance of curriculum, although other influencers played a larger role1.
By 2018, red flags began to appear in EdReports’ work: a poor review for the excellent Bookworms curriculum and a strong review for the weak Wonders 2020 program rang alarm bells, especially in the standards community2.
The EdReports failures stem from a design flaw: each review is conducted by only 4-5 teachers who receive as little as 25 hours of training. No literacy or math experts are involved, and reviewer turnover is high.
Over time, major publishers figured out how to game the review system with overstuffed programs that ticked all the review boxes but included fluff and garbage. EdReports failed to evolve its processes as swiftly as the publishers evolved their tactics.
Also, curricula with creative approaches have been penalized for designing their materials differently from the standard-bearers, a fate suffered by Bookworms and Fishtank.
More broadly, the field has evolved a lot since 2014. EdReports has been slow to produce next-generation review criteria that respond to the limitations of the standards, as well as Science of Reading-era insights.
Further, EdReports only reviews programs that publishers submit for review, so it isn’t comprehensive. The authors of emerging curricula are hesitant to submit their materials after watching EdReports botch the Bookworms review multiple times.
EdReports recently announced a leadership change, and it’s subtly acknowledging its brand issues. But the organization has a history of moving slowly. It’s searching for a new leader to start this spring, and the inevitable rebuilding will take time. I’m betting we’re 3-5 years away from a sizable body of trustworthy reviews.
Unfortunately, the field is still waking up to the issues with EdReports—after it has sown a mess nationally. Many states partnered with or followed EdReports for their state lists, so its flaws have flowed downstream. Also, EdReports has a massive head start on brand awareness, thanks to $65M+ in philanthropic funding.
You don’t have to take it from me. Holly Korbey, Natalie Wexler, and Emily Hanford have illuminated the issues with EdReports, and their work is collected in the footnotes3.
Reading League
The Reading League (RL) reviews focus overwhelmingly on foundational skills, befitting the organization’s focus on reading foundations. Its careful scrutiny of the fine points of foundational skills offers a helpful resource.
However, the reports aren’t very usable. Even fans acknowledge that RL reviews are “wonky.”
The RL review tool is complicated. It sets out to show that problematic practices are not present in a program, so reviewers score curricula based on the absence of negative attributes. This arduous approach to scoring makes the reviews hard to digest.
And the reviews have notable shortcomings. RL reviews give shallow attention to knowledge-building, text-rich instruction, writing, and usability. The RL gives surprisingly positive reviews for content quality to curricula with few-to-no books (!!). I wish the Reading League had limited its reviews to foundational skills only, which would have been a more authentic reflection of its broader work.
The Reading League fails to categorize curricula by quality (weak / good / better / best). Reading across its reports, one cannot tell better from weaker curricula. When Pennsylvania incorporated RL reviews into its state list, its (mis)read of the Reading League reviews raised eyebrows in both the initial, widely panned draft and the final list.
Like EdReports, RL only reviews programs with a publisher’s blessing, so it fails to be comprehensive.
I’d say RL reviews are most useful during curriculum implementation, to help districts understand foundational skills pitfalls in already-selected programs. They are all but useless for understanding a program’s effectiveness for reading comprehension.
Knowledge Matters Campaign
The Knowledge Matters Campaign (KMC) has screened curricula for their knowledge-building virtues and published a list of programs which earn high marks. As its Review Tool explains, KMC puts many aspects of instruction under the heading of knowledge-building (work with rich texts, connectedness of writing to reading instruction, and more), so its reviews reflect a broad survey.
I tend to point people to KMC’s list, because it is the best list for comprehensive reading and writing programs. Its website offers helpful insight into the topics of study and books used in each program.
However, KMC shares only the high points of each program, not the shortcomings. Its list is a good starting point, but it won’t tell you where the bodies are buried, making it a weak tool for “fit and match” considerations and for understanding supplementation needs.
Also, KMC’s decision not to focus on foundational skills has downsides. It certainly hasn’t won over the Reading League camp, and it obscures a notable trend: districts are increasingly abandoning the foundational skills components of knowledge-building programs in favor of UFLI and other phonics programs.
Alternatives
Some presume that the What Works Clearinghouse or Evidence for ESSA are decent guideposts, but they have their own issues4 and offer little.
I set out to create better sources of intel with the Curriculum Insight Project, alongside a volunteer army of professionals in the field (mostly educators working in districts and using the programs in question).
We have brought much-needed national attention5 to the lack of books in some popular curricula, raised awareness of the issues with basal programs, and played an essential role in speaking out about EdReports when most would not6. But, we haven’t published as much intel as we’d hoped. Our shoestring funding has slowed progress, as folks in our volunteer army have day jobs. (I have come to understand why EdReports carries a 60-person staff.)
The Curriculum Insight Project has punched above its weight on impact, but I would be the first to say that we aren’t moving quickly enough to meet the needs of the field. Not even close.
Overall
Alas, the field lacks a clear guidepost on ELA curriculum quality.
I’m also concerned by the lagging nature of reviews by all parties, given a swiftly-evolving landscape:
Multiple providers released new programs this year (Arts & Letters, Emerge) or are putting out updated materials (EL Education, Into Reading, Units of Study).
Nell Duke’s Great First Eight is in pilots; in the current landscape, it’s poised to get little airtime.
Most states don’t have the capacity to do strong curriculum reviews; they are a TON of work, best done by savvy, classroom-experienced literacy minds.
Ideally, we would have one national guidepost for reference by states, but we aren’t even close.
Also, I don’t foresee alternatives emerging any time soon. The major philanthropic funders, especially those that backed the Common Core standards, have lined up around EdReports; they have invested $6-9M per year since 2014 in its work, and they seem protective of that investment.
The same funders have given grants to states to carbon-copy the EdReports approach locally. They have invested in a broad ecosystem around EdReports: RAND reports on the curriculum landscape use EdReports as a key input. Professional learning providers like TNTP and Instruction Partners are expected to treat “all-green” on EdReports as gospel. When the CCSSO convenes states around instructional materials, EdReports is in the room. All that work shares the same funding sources as EdReports. Those funders will not welcome new entrants lightly.
It’s a mess, y’all.
And don’t shoot the messenger, but it’s at least as bad in math.
I wish I were closing with a list of ready solutions. This thorny issue won’t be resolved easily. But solving problems begins with understanding them, so consider this my humble contribution.
1. EdReports defenders give it credit for helping the field to understand the potential of curriculum, and they have a point. But as a close watcher of this landscape, I think others deserve a lot more credit.
In 2016-18, when EdReports had painfully-little brand awareness, I was the Chief Marketing Officer at Open Up Resources (OUR), a nonprofit publisher. OUR was enjoying strong growth (in fact, we were K-12’s fastest-growing startup during that era), and I was asked to coach the EdReports team on improving its marketing, because the organization was so very unknown. I don’t believe communications has ever been its strong suit.
While I know EdReports helped to move the ball forward, I believe that other influences have played larger roles: work by Student Achievement Partners; Natalie Wexler’s book The Knowledge Gap and related writing; the Knowledge Matters Campaign; professional learning providers like TNTP and UnboundEd; the many providers of high-quality products (some of whom put out strong Science of Reading podcasts); and Science of Reading-era grassroots momentum.
2. I sometimes hear people say that EdReports at least got its reviews against the Common Core Standards (CCSS) right, and that its issues lie elsewhere. Friends, this simply isn’t the case.
Years ago, the Standards authoring community began publishing content to counter EdReports signals:
In 2018, Student Achievement Partners – an organization founded by the Standards authors – published “We’re Bullish on Bookworms” to show their support for Sharon Walpole’s program in the wake of its first (of two) flawed EdReports reviews.
In 2020, SAP published a widely-circulated report on Teachers College Reading Workshop Units of Study, a program that EdReports had dragged its feet in reviewing.
In 2021, SAP put out a report on issues with Wonders, after it received a surprising all-green review. In subsequent webinars and articles, Sue Pimentel (lead author of the CCSS in ELA) and her fellow panelists made clear that the concerns were not unique to Wonders, and coined the term “basal bloat” to reflect the category-level issue.
Really, these EdReports issues have been hiding in plain sight for years.
3. Holly Korbey recently reported on EdReports and the curriculum review landscape generally. “The piecemeal system of rating curriculum is frankly bananas,” Korbey writes, nailing it.
Natalie Wexler detailed the issues with EdReports brilliantly.
Emily Hanford’s Sold a Story puts a spotlight on Ohio to understand why two high-performing programs aren’t on the EdReports list.
Before that media wave, I summarized the concerns about EdReports and reported on the slow pivot at the organization.
If you want to see how EdReports made a mess of one state’s curriculum list, read about the Ohio curriculum list. It’s a cautionary tale. Pennsylvania is busy snowballing the Ohio problem with its early list. (Ugh.)
4. The issues with What Works Clearinghouse and Evidence for ESSA have been detailed by Holly Korbey:
“Federally funded What Works Clearinghouse does efficacy reviews of curricula—something that’s desperately needed—but makes them really hard to understand. “They’re great at explaining something to other researchers, but not great at breaking it down in a way that a practitioner can understand,” Lane said…
To make things more complicated, Johns Hopkins’ Evidence for ESSA, which came about to support districts implementing the Every Student Succeeds Act, rates the quality of the evidence being used to test out different curriculum to see if they work. The quality of the studies is really important, especially since education research standards can be wishy-washy and downright unreliable—but also somehow makes trying to find an evidence-based curricula harder, because the quality of the study is quite different from the quality of the material itself, i.e. whether or not a lot of children learned when using it.
Lane makes this point with an example: here is the Evidence for ESSA review of Leveled Literacy Intervention (LLI), a small-group tutoring curricula made by Fountas & Pinnell, which has been called out for recommending teaching practices that aren’t supported by evidence.”
Holly Lane captured one funny detail in an interview with Melissa & Lori Love Literacy: Fountas & Pinnell exploited Evidence for ESSA’s shortcomings in its marketing emails. “They’re sending a promotional email that says there’s very strong evidence that our program doesn’t really work.” It’s worth four minutes to listen to the two videos in this thread for the nuances.
5. Since I published my piece on book-starved curricula in US schools, I have been pleased to see its themes picked up in New York Times reporting (“Kids rarely read whole books anymore. Even in English class.”). In Education Week, Sarah Schwartz penned two excellent pieces with similar themes: “Are Books Really Disappearing From American Classrooms?” and “What Is a Basal Reader, And Why Are They Controversial?”
The issue has also drawn attention from Carl Hendrick, Natalie Wexler, the Science of Reading Classroom initiative, and more.
6. A long oral history should be written about the number of education leaders who know EdReports is badly broken but won’t say it out loud, mostly because they share funders or friendships with the EdReports folks.



