Our Curriculum Review Landscape is ‘Frankly Bananas’
Read up on the issues with curriculum reviews in America.
If you want states and districts to pursue curriculum reform, you must take the time to understand the issues with curriculum reviews in America.
Here’s an overview of the ELA review landscape. It’s grim.
EdReports
EdReports, the curriculum review heavyweight, has lost the trust of the field.
EdReports was founded in 2014 to review curricula against the Common Core standards. For its first few years, EdReports earned kudos from much of the field, including loud praise from me, for calling major publishers on weak programs. It played a valuable role in raising awareness of the potential of strong curriculum, in parallel with efforts in pioneering states and new entrant product marketers.
By 2018, worrisome signs began to appear: a poor review for the excellent Bookworms curriculum and a strong review for the poor Wonders 2020 program rang alarm bells, especially in the standards community.
Its failures stem from a design flaw: each review is conducted by only 4-5 teachers who receive as little as 25 hours of training. No literacy or math experts are involved, and reviewer turnover is high.
Over time, major publishers figured out how to game the review system with overstuffed programs that ticked all the review boxes but included fluff and garbage. EdReports did not evolve its processes as swiftly as the publishers evolved their tactics. Also, curricula that take more creative approaches have been penalized for designing their materials differently than the standard-bearers, a fate suffered by Bookworms and Fishtank.
More broadly, the field has evolved a lot since 2014. EdReports has been slow to produce next-generation review criteria that respond to the limitations of the standards, as well as Science of Reading-era insights.
Further, EdReports only reviews programs that are submitted for review, so it fails to be comprehensive. Some new curriculum developers are hesitant to submit their materials after the Bookworms review was botched twice in a row.
EdReports recently announced a leadership change, and it’s subtly acknowledging its brand problems. But the organization has a history of moving slowly. It’s searching for a new leader, and the inevitable rebuilding will take time. My bet: we are 3 years away from a sizable body of trustworthy reviews.
Unfortunately, the field is still waking up to the issues with EdReports – after it has sown a mess nationally. Many states partnered with or looked to EdReports for their state lists, so its flaws flowed downstream. Also, EdReports has received more than $65M in philanthropic funding to date, supporting a 60-person staff plus marketing agencies that give it a massive head start on brand awareness.
You don’t have to take it from me. Here’s some additional reading:
Holly Korbey recently reported on EdReports and the curriculum review landscape generally. “The piecemeal system of rating curriculum is frankly bananas,” Korbey writes, and she’s not wrong.
Natalie Wexler detailed the issues with EdReports brilliantly.
Emily Hanford’s Sold a Story puts a spotlight on Ohio to understand why two high-performing programs aren’t on the EdReports list.
Before that media wave, I summarized the concerns about EdReports and reported on the slow pivot at the organization.
If you want to see how EdReports made a mess of one state’s curriculum list, read about the Ohio curriculum list. It’s a cautionary tale. Pennsylvania is busy snowballing the Ohio problem with its early list. (Ugh.)
Reading League
The Reading League (RL) reviews are popular among RL devotees. They focus overwhelmingly on foundational skills, befitting the organization’s focus on reading foundations. It’s helpful to have the Reading League’s careful scrutiny of the foundational skills fine points.
However, the reports aren’t very usable. Even fans acknowledge that RL reviews are “wonky.”
The review tool is complicated. It is built to verify that problematic practices are absent from a program, so reviewers score curricula on the lack of negative attributes rather than the presence of positive ones. This arduous approach to scoring makes the reviews hard to digest.
And they have notable shortcomings. The RL reviews give surprisingly positive marks for content quality to curricula with few-to-no books, and their attention to knowledge-building, text-rich instruction, writing, and usability is shallow.
The Reading League does not categorize curricula by quality (weak/good/better/best). Reading across the reports, it’s hard to tell better from weaker curricula, and the reports feel interchangeable. The field needs clearer signaling, and the Reading League reviews feel like noise.
Like EdReports, RL only reviews programs with a publisher’s blessing, so it fails to be comprehensive.
I would say RL reviews are most useful in curriculum implementation, to help districts understand foundational skills-only pitfalls in already-selected programs.
Knowledge Matters Campaign
The Knowledge Matters Campaign (KMC) has screened curricula for their knowledge-building properties and published a list of programs which earn high marks. As its Review Tool explains, KMC puts many aspects of instruction (work with rich texts, connecting writing to reading instruction, and more) under the heading of knowledge-building, so its reviews reflect a broad survey.
I tend to point people to this list, because it is the best list for comprehensive reading and writing programs. The website offers helpful insight into the topics of study and books used in each program.
However, KMC shares only the high points of each program, not the shortcomings. Its list is a good starting point, but it won’t tell you where the bodies are buried, making it a weak tool for “fit and match” considerations and for understanding needs to supplement.
Notably, districts are increasingly abandoning the foundational skills components of knowledge-building programs in favor of UFLI and other phonics programs, and there is no sign of this shift in the KMC reviews.
Alternatives
I set out to create better sources of intel with the Curriculum Insight Project, alongside a volunteer army of professionals in the field (mostly educators working in districts and using the programs in question).
We have brought much-needed attention to the lack of books in some popular curricula, raised awareness of the issues with basal programs, and played an essential role in speaking out about EdReports when most would not (mostly because they share funders or friendships with the EdReports folks). But, we haven’t published as much intel as we’d hoped.
Our shoestring funding has slowed progress. Everyone in our mostly-volunteer effort has a day job. To date, my own work on the project has been entirely pro bono, to allocate precious resources to the educators involved, but this hasn’t been enough for us to overcome resource constraints in cranking out alternative curriculum reports.
We are punching above our weight on impact, but I would be the first person to say that we aren’t moving quickly enough to meet the needs of the field. Not even close.
Overall
The failures at EdReports leave the field in a rough place. The alternatives (RL reports, Knowledge Matters list, and Curriculum Insight Project) command far less attention.
I’m also concerned by the lagging nature of reviews in a swiftly-evolving landscape. Multiple providers released new programs this year (Arts & Letters) or are putting out updated materials (EL Education, Units of Study). Intriguing programs like Nell Duke’s Great First Eight are going into pilots. In the current landscape, these new offerings are poised to get little airtime.
Most states don’t have the capacity to do strong curriculum reviews; they are a TON of work, and best done by savvy and classroom-experienced literacy minds. Ideally, we would have one national guidepost for reference by states. Sadly, we lost that opportunity when EdReports – which once did stronger work – went sideways around 2018-19.
Also, I don’t foresee alternatives emerging any time soon. The major philanthropic funders, especially those that backed the Common Core standards, have lined up around EdReports; they have invested $6-9M per year since 2014 in building its brand, and they are clearly protective of that brand. The same funders have given grants to states to carbon-copy the EdReports approach locally, in partnership with the national organization. They have also invested in a broad ecosystem of reviewers (including RAND), professional learning providers, and support for the CCSSO’s work around instructional materials. They will not welcome new entrants lightly.
It’s a mess, y’all.
And don’t shoot the messenger, but it’s at least as bad in math.
EdReports defenders give it credit for helping the field to understand the potential of curriculum, and they have a point. Still, I have watched this landscape closely. In 2016-18, when EdReports had painfully-little brand awareness, I was the Chief Marketing Officer at Open Up Resources, a nonprofit publisher that was bringing EL Education, Illustrative Math 6-8, and Bookworms to market. Open Up was enjoying strong growth (in fact, we were K-12’s fastest-growing startup during that era), and I was asked to coach the EdReports team on improving its marketing, because it was so unknown. I don’t believe communications has ever been its strong suit.
While I know EdReports helped to move the ball forward, I believe that efforts by SAP, Natalie Wexler’s book The Knowledge Gap and her column, the Knowledge Matters Campaign, professional learning providers like TNTP and UnboundEd, the many providers of high-quality products (some of whom put out strong Science of Reading podcasts), and Science of Reading era grassroots momentum have played larger roles.
I sometimes hear people say that EdReports at least got its reviews against the Common Core Standards (CCSS) right, and that its issues lie elsewhere. Friends, close watchers have long seen otherwise.
Years ago, the Standards authoring community began publishing content to counter EdReports signals.
In 2018, Student Achievement Partners – an organization founded by the Standards authors – published “We’re Bullish on Bookworms” to show their support for Sharon Walpole’s program in the wake of its first (of two) flawed EdReports reviews.
In 2020, SAP published a widely-circulated report on Teachers College Reading Workshop Units of Study, a program that EdReports had dragged its feet in reviewing.
In 2021, SAP put out a report on issues with Wonders, after it received a surprising all-green review. In subsequent webinars and articles, Sue Pimentel (lead author of the CCSS in ELA) and her fellow panelists made clear that the concerns were not unique to Wonders, and coined the term “basal bloat” to reflect the category-level issue.
Really, these EdReports issues have been hiding in plain sight for years.