enVisionmath2.0 Response to EdReports

enVisionmath2.0 is based on years of research on how students learn and on what teachers want and need to help their students effectively. The authors of enVisionmath2.0 wrote the program to reflect the latest high-quality standards in the United States, including the Common Core State Standards for Mathematics (CCSS-M). The authorship team includes members of the Common Core State Standards for Mathematics K-12 development work and review teams and a member of the mathematical progressions writer/reviewer team. All are widely published in books, professional journals, and articles; conduct extensive field research; and are recognized thought leaders in how children learn, teacher education, and professional development. Please see an open letter from the enVisionmath2.0 author team on this page.

After an in-depth analysis of the EdReports review of enVisionmath2.0, the author team and their editors determined that the review included

  • Factual errors,
  • Misunderstandings of the instructional model,
  • Misinterpretations of the CCSS-M and the Publisher’s Criteria, and a lack of understanding of effective curriculum development and pedagogy.

The enVisionmath2.0 author team and their editors provided EdReports with over 60 pages of comments and concerns regarding these three areas; however, EdReports made only minor corrections to its review and determined that our concerns were not errors but "philosophical differences about our indicators and their interpretation, rather than actual errors or accuracy issues."

The enVisionmath2.0 author and editor concerns focus primarily on the following:

  • The review violates the role of standards in developing curriculum. In the introduction to the Common Core State Standards for Mathematics, the writers definitively state that the standards are not intended to “dictate curriculum or teaching methods.” Rather, standards should define what students should understand and be able to do, but they should not define how those goals are accomplished. The tool used for this review violates the intent of the standards by defining for teachers how to achieve the goals of the standards and by asking reviewers to consider elements of curriculum development in the evaluation of instructional materials.
  • Attempts to provide feedback were effectively ignored. Written by the authors of enVisionmath2.0, the feedback identified only those evidence statements that compromised the integrity and objectivity of the review: specifically, factual errors, misunderstandings of the CCSS-M, and a lack of understanding of effective curriculum development and pedagogy. A few of the most egregious errors and inaccuracies were corrected, but most of the substantive issues and factual errors raised by the authors were dismissed as “philosophical differences about our indicators and their interpretation.” Reviews of this nature cannot be grounded in philosophy and interpretation; they must be grounded in objective criteria that stand up to accepted standards of validity and reliability. This feedback, which was provided on the draft version of the review, can be found on this site.
  • The Evidence Guides and Tool are flawed. Nearly all of the criteria and indicators in the Tool are subjective, negating EdReports’ claim of providing unbiased, impartial feedback on instructional materials. Subjective language permeates and dominates both the Tool and the Evidence Guides for each Gateway; in fact, there is very little quantifiable language in any of the criteria or indicators. Criterion 2, for example, states, “Each grade’s instructional materials are coherent and consistent with the Standards.” Determining whether instructional materials are “coherent and consistent” with standards relies solely on personal opinion, not on a reproducible and quantifiable set of criteria. Further, according to EdReports’ Methodology statement, the tool “guides the entire evaluation process to provide educators with credible, reliable and useful information about instructional materials and their alignment with the standards.” Yet in the Evidence Guides, reviewers are asked to identify missed opportunities for these indicators and then base their scores on those missed opportunities, rather than on how well the instructional materials as a whole align with the standards, the purported purpose of the review. This sanctioned focus on perceived missed opportunities (once again a personal opinion rather than a quantifiable criterion) neglects the question of whether the opportunities that are documented are sufficient to achieve the instructional goal, and it allows bias and subjectivity to enter the review.
  • The Evidence Guides and Tool are not reflective of the Common Core State Standards. The Evidence Guides and Tool reflect the views of other entities with fixed ideas about curriculum and pedagogy. For example, in Gateway 1 of the Tool, reviewers are asked to focus on major, supporting, and additional content, a categorization developed by a small group of individuals and not vetted and reviewed with the same scrutiny as the CCSS-M. The categorization at times conflicts with the CCSS-M Critical Areas, conveying inconsistent messages about what teachers are to focus on in a given grade. In their open letter to the education community dated May 20, 2015, the NCTM and NCSM leadership highlighted concerns about the designation of “major work” in the Tool, noting that these designations conflict with the Critical Areas articulated in the CCSS-M.
  • The Evidence Guides and Tool are applied unevenly across the different programs evaluated. It appears that there was a level of bias in the way the subjective Tool was applied, as well as a predetermined agenda and philosophy about how mathematics should be taught and who should be able to develop curriculum.
  • The Tool does not hold up to accepted and expected standards of validity or reliability. No documentation is provided on measures of validity or reliability. EdReports’ methodology does not include precise definitions of the terms used in the Tool, definitions that are prerequisites for establishing both validity and reliability. For example, terms such as “coherence,” “consistent with standards,” “conceptual understanding,” and “full meaning of each standard,” to name a few, are not explicitly defined in the context of this review, thereby failing to meet the norms for construct validity and inter-rater reliability. The lack of transparency into the training that EdReports provides for its reviewers further adds to the concerns about validity and reliability.
