Showing posts with label Value Added Measures.

Thursday, July 31, 2014

Schools Matter: Value-Added Modeling (VAM) is pseudoscience, but profitable pseudosciences persist

First published on Schools Matter on July 26, 2014


"While value-added models are intended to estimate teacher effects on student achievement growth, they fail to do so in any accurate or precise way." — Professor Bruce Baker

Back in early April I penned a piece for K12NN on The American Statistical Association's (ASA) paper on Value Added Methodologies [1]. In it I asserted that the "document provides strong support to those who oppose this wrongheaded use of statistics to make high-stakes decisions affecting the lives of students, educators, and our school communities." This week I noticed a trackback to a reprint of the article. What caught my eye was the title, which was seemingly entirely out of keeping with the spirit of the ASA's stance: Advocating for a robust value-added implementation.

I read through the VAM cheerleading piece and was gobsmacked by the deliberate manipulation of the ASA document's tenor and tone. This VAM apologetic read less like a legitimate blog posting and more like a corporate press release. Without doing much more research, I typed the following comments:

This quotation from the ASA document sums up the entire issue best: "The majority of the variation in test scores is attributable to factors outside of the teacher’s control". For the author above to try to frame the ASA's position as supportive of VAM phrenology takes mendaciousness to breathtaking heights. Rather than considering students as empty receptacles for "knowledge" deposited by a method that can be "measured," perhaps we can start talking about students as agents in their own pedagogical experiences—something that doesn't exist in the current regime of the profitable testing-industrial-complex.

The author's response displayed the same blatant avoidance of issues as the original piece. In fact, it stated some of the long discredited claims of the VAM camp, including Professor Bruce Baker's favorite trope about how "more sophisticated" VAMs address the issues people have with VAMs.

Thanks for your comment. The ASA statement seems to discuss primary drivers of student test scores, not student growth. It is well known that there is a strong relationship between students’ achievement (or test scores) and their socioeconomic/demographic background. However, there is typically little or no relationship between students’ growth and their socioeconomic/demographic background.

Another way to see this is that the most important factor of “current” test scores is prior test scores and, once enough prior test scores are included in the model, the socioeconomic/demographic factors become relatively small or even non-significant, despite enormous sample sizes.

That said and to your concern about considering students in the context of their own experiences, more sophisticated value-added/growth models, like EVAAS, can follow the progress of individual students over time, so that each student serves as his or her own control.

Aside from being patently wrong, the whole thing smacked of being boilerplate text written in the bowels of a corporate public relations department. That's when I decided to look into who this dubious Jennifer Facciolini was and whom she wrote for. I should have done that in the first place.

Statistical Analysis System (SAS) Institute Inc. is one of the largest privately owned software companies in the world, with revenue of $3.02 billion USD in 2013. They are the developers of the wildly inaccurate, but highly profitable, Education Value-Added Assessment System (AKA SAS EVAAS) — a VAM implementation used in many districts. The money-chasing SAS is all about big-fish government contracts; to wit, an excerpt from a recent Businessweek piece:

SAS Institute Inc. won a $6,479,583.96 federal contract from the Defense Information Systems Agency, Scott Air Force Base, Illinois, for Statistical Analysis System software licenses and support renewal.

When a huge firm like SAS, which pulls down big-dollar defense contracts, makes the education "market" its priority, you can believe it won't let things like facts and evidence discrediting VAM get in the way of selling and supporting its EVAAS phrenology kit to any and all districts infected by the neoliberal corporate reform virus.

Hiring several white, well-educated former teachers like Nadja Young and Jennifer Facciolini to shill for your product is a smart public relations investment for the VAM behemoth. Given that they were teachers in the South, chances are they are making a great deal more money at SAS by simply selling out their former profession to corporate interests. Whether the boilerplate prose in their gushing blog posts is written by them or not isn't all that important. What's important is the appearance that professional teachers might actually think that phrenology and VAM are legitimate sciences. I can only hope that the prose in these mindless corporate posts isn't written by these former teachers. Students should never be exposed to such nonsensical drivel. A selection of some of their titles should serve to numb the mind of any sentient being:

  • Data-driven education books make great holiday gifts for educators. Yes, really.
  • "March madness" of student course enrollment gets assist from value-added assessment
  • Beyond value-added: Teachers need diagnostic data to improve their practice
  • Student growth measures can be the bridge to new assessments

I'll spare readers further torment. As the preponderance of evidence against VAM pseudoscience, like the watershed ASA paper, grows, expect profit-hungry firms like SAS to keep doubling down on the duplicity and deception. Having an army of former teachers shilling for your defective product is a small expense compared to losing those highly profitable contracts with districts.


NOTES

[1] In its various permutations we've seen the "M" in VAM stand for Modeling, Measures, Methodologies, and others. The only honest word for the last member of the acronym would be "Mendaciousness," since phrenology by any other name is…




Wednesday, April 09, 2014

K12NN: American Statistical Association has just released a very important document on Value Added Methodologies

First published April 9, 2014 on K-12 News Network


"The President of the United States and his Secretary of Education are violating one of the most fundamental principles concerning test use: Tests should be used only for the purpose for which they were developed. If they are to be used for some other purpose, then careful attention must be paid to whether or not this purpose is appropriate" — Gerald Bracey, PhD

VAM/AGT and other neoliberal corporate reforms have all the scientific validity of phrenology. They're just as racist, too. The American Statistical Association (ASA) released its ASA Statement on Using Value-Added Models for Educational Assessment today. While its spokesperson explicitly said the association neither supports nor opposes the use of so-called "Value Added" methodologies, the actual document provides strong support to those who oppose this wrongheaded use of statistics to make high-stakes decisions affecting the lives of students, educators, and our school communities. Too bad the amateur statisticians at the Los Angeles Times were able to commit their egregious acts several years before this document was released. It's also too bad that LAUSD recently implemented one of these seriously flawed models, one that will abjectly harm students' education and further undermine the morale of our professional educators for years to come.

Some important excerpts from the document (all emphasis mine):

Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model. These limitations are particularly relevant if VAMs are used for high-stakes purposes. (1)

VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools. Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality. (2)

In practice, no test meets this stringent standard, and it needs to be recognized that, at best, most VAMs predict only performance on the test and not necessarily long-range learning outcomes. Other student outcomes are predicted only to the extent that they are correlated with test scores. A teacher’s efforts to encourage students’ creativity or help colleagues improve their instruction, for example, are not explicitly recognized in VAMs. (4)

Attaching too much importance to a single item of quantitative information is counter-productive—in fact, it can be detrimental to the goal of improving quality. In particular, making changes in response to aspects of quantitative information that are actually random variation can increase the overall variability of the system. (5)

The quality of education is not one event but a system of many interacting components. (6)

A decision to use VAMs for teacher evaluations might change the way the tests are viewed and lead to changes in the school environment. For example, more classroom time might be spent on test preparation and on specific content from the test at the exclusion of content that may lead to better long-term learning gains or motivation for students. (6)

Overreliance on VAM scores may foster a competitive environment, discouraging collaboration and efforts to improve the educational system as a whole. (6)

The majority of the variation in test scores is attributable to factors outside of the teacher’s control such as student and family background, poverty, curriculum, and unmeasured influences. (7)

The VAM scores themselves have large standard errors, even when calculated using several years of data. These large standard errors make rankings unstable, even under the best scenarios for modeling. (7)

A VAM score may provide teachers and administrators with information on their students’ performance and identify areas where improvement is needed, but it does not provide information on how to improve the teaching. (7)
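The quantitative claims in the excerpts above (teachers accounting for roughly 1% to 14% of score variance, and large standard errors making rankings unstable) can be illustrated with a toy simulation. This is a sketch with hypothetical parameters, not an actual VAM implementation: it gives each teacher a true effect worth about 10% of a student's score variance, estimates a "value-added" score from 25 noisy student gains in each of two years, and counts how many teachers jump more than a quintile in the rankings from one year to the next.

```python
import random

random.seed(0)

N_TEACHERS = 100
N_STUDENTS = 25          # students per teacher per year (hypothetical)

# Teacher effects explain ~10% of a single student's score variance,
# within the ASA's 1%-14% range; the rest is noise from factors
# outside the teacher's control.
TEACHER_SD = 1.0
NOISE_SD = 3.0           # noise variance is 9x teacher variance

true_effects = [random.gauss(0, TEACHER_SD) for _ in range(N_TEACHERS)]

def vam_scores():
    """Estimate each teacher's effect as the mean of noisy student gains."""
    return [
        t + sum(random.gauss(0, NOISE_SD) for _ in range(N_STUDENTS)) / N_STUDENTS
        for t in true_effects
    ]

def ranks(xs):
    """Rank teachers from lowest (0) to highest estimated score."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

year1 = ranks(vam_scores())
year2 = ranks(vam_scores())

# How many teachers moved more than a quintile (20 rank places)?
jumps = sum(abs(a - b) > 20 for a, b in zip(year1, year2))
print(f"{jumps} of {N_TEACHERS} teachers moved more than a quintile year-to-year")
```

With noise swamping the true teacher signal, a substantial share of these simulated teachers cross quintile boundaries between years despite having identical "true" effectiveness in both, which is exactly the ranking instability the ASA statement warns about.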

All in all, the document is an academic condemnation of the VAM/AGT pseudosciences that have been ushered in by the neoliberal corporate education reform project. While the ASA is populated with actual scientists and statisticians, we can be sure that the corporate reform crowd will be quick to try to refute the document. Here, the tagline of a recent article in Salon by Paul Rosenberg is apropos: 'Like global warming deniers, "education reformers" have nothing to lose and everything to gain by sowing confusion'.

For a copy of the ASA Statement on Using Value-Added Models for Educational Assessment see http://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf. For additional information, please visit the ASA website at www.amstat.org.


