> "This paper has shown that the same VA measures are also an informative proxy for teachers’ long-term impacts."
Ah, as I figured: you are promoting VAM. I already mentioned that it's a difficult tool to use, and there are well-known problems with VAM, not mentioned in that paper, which you don't seem to be aware of.
For example, a Texas court threw out EVAAS as a way to evaluate Houston teachers because of due-process concerns, such as teachers being unable to have their scores independently re-evaluated. The judge also pointed out the "house-of-cards" nature of VAM and the ongoing academic debate about its applicability. https://www.courthousenews.com/wp-content/uploads/2017/05/Ho...
The expert witness opposing VAM presented these main arguments. Quoting http://vamboozled.com/houston-lawsuit-update-with-summary-of...
1) Large-scale standardized tests have never been validated for this use.
2) When tested against another VAM system, EVAAS produced wildly different results.
3) EVAAS scores are highly volatile from one year to the next.
4) EVAAS overstates the precision of teachers' estimated impacts on growth. (See the sketch after this list for why points 3 and 4 go hand in hand.)
5) Teachers of English Language Learners (ELLs) and “highly mobile” students are substantially less likely to demonstrate added value.
6) The number of students each teacher teaches (i.e., class size) also biases teachers’ value-added scores.
7) Ceiling effects are certainly an issue.
8) There are major validity issues with “artificial conflation.” (This is the phenomenon in which administrators feel forced to make their observation scores "align" with VAAS scores.)
9) Teaching-to-the-test is of perpetual concern.
10) HISD is not adequately monitoring the EVAAS system. HISD was not even allowed to see or test the secret VAM sauce.
11) EVAAS lacks transparency.
12) Related, teachers lack opportunities to verify their own scores.
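To make points 3 and 4 concrete, here's a minimal simulation sketch (my own toy illustration, not part of the expert testimony; every number in it - effect spread, noise level, class size, quintile cutoff - is an assumption):

    import numpy as np

    rng = np.random.default_rng(0)
    n_teachers = 1000

    # Assumed spread of true teacher effects, in student-SD units.
    true_effect = rng.normal(0.0, 0.10, n_teachers)

    # Sampling noise of a class mean: student noise SD ~0.5, 25 students.
    noise_sd = 0.5 / np.sqrt(25)
    year1 = true_effect + rng.normal(0.0, noise_sd, n_teachers)
    year2 = true_effect + rng.normal(0.0, noise_sd, n_teachers)

    # A typical firing rule: flag the bottom quintile each year.
    flagged1 = year1 < np.quantile(year1, 0.2)
    flagged2 = year2 < np.quantile(year2, 0.2)

    print(f"'ineffective' label flipped: {np.mean(flagged1 != flagged2):.1%}")
    print(f"year-to-year correlation: {np.corrcoef(year1, year2)[0, 1]:.2f}")

When the noise is about the same size as the true spread of teacher effects - roughly what one year of one class buys you - the score is half signal, half noise: the year-to-year correlation comes out near 0.5 and a sizeable share of the "ineffective" labels flip. That's the volatility in point 3, and reporting such scores as if they were precise is point 4.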
Here's one paper analyzing the specific details of the EVAAS numbers SAS generated for Houston - https://www.researchgate.net/publication/341532272_Methodolo... - with citations of its own about various issues with VAM. More can be found via Google Scholar ('EVAAS houston effective').
> consistent as he or she moves across schools
Here's another paper: https://www.redalyc.org/pdf/2750/275022797012.pdf . "Almost half (46%) of a sample of HISD teachers who moved to different grade levels reported switching value-added ranks after the move, from “ineffective” to “effective” or vice versa, also across grade levels that were adjacent."
If it's not consistent when moving grade levels, why do you think it's consistent moving across schools?
Is it because "Dr. William L. Sanders, the developer of the SAS® EVAAS®, claims that teachers who move from one environment to another, even if radically different, continue to do just as well (LeClaire, 2011)"?
> GPAs are highly subjective, and more importantly, harder to compare across schools and even across classes.
And yet GPAs are a better predictor of future academic success than test scores, as I highlighted.
It appears you prefer to use a worse predictor, one which requires an artificially imposed "high-stakes" testing environment, because it lets you do fancier types of data science that appeal to your sense that numbers are objective.
> strong research exist to show that SGPs are a valid and useful measurement
Remember earlier how you implied these methods were objective?
Odd that the paper you linked to says the other VAM methods didn't account for a "drift in teacher quality".
Almost as if there's no agreement on what the model should be.
Almost as if the choice of model to use was also "highly subjective."
If they aren't subjective, then different VAM models should make the same predictions for the same population, right?
Points #2 and #3 above should be very rare, right?
And if they are not rare, they should not be used to determine who to fire, right?
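Here's a second toy sketch of that last point (mine, with every parameter assumed): run two defensible specifications - a plain gain score versus a regression adjustment on the prior-year score - over the same simulated students, where the only wrinkle is that some teachers systematically get stronger students.

    import numpy as np

    rng = np.random.default_rng(1)
    n_teachers, class_size = 200, 25
    n = n_teachers * class_size
    groups = np.arange(n_teachers).repeat(class_size)

    teacher_effect = rng.normal(0.0, 0.10, n_teachers)
    # Non-random assignment: some teachers get stronger students.
    sorting = rng.normal(0.0, 0.30, n_teachers)
    prior = sorting[groups] + rng.normal(0.0, 1.0, n)

    # Assume prior achievement persists at 0.7, plus teacher effect and noise.
    post = 0.7 * prior + teacher_effect[groups] + rng.normal(0.0, 0.5, n)

    # Model A: mean gain score per teacher (implicitly assumes persistence = 1).
    gain = np.bincount(groups, post - prior) / class_size

    # Model B: regress post on prior, then average residuals per teacher.
    slope, intercept = np.polyfit(prior, post, 1)
    adjusted = np.bincount(groups, post - (intercept + slope * prior)) / class_size

    bottom_a = gain < np.quantile(gain, 0.2)
    bottom_b = adjusted < np.quantile(adjusted, 0.2)
    print(f"agreement between models: r = {np.corrcoef(gain, adjusted)[0, 1]:.2f}")
    print(f"bottom-quintile disagreement: {np.mean(bottom_a != bottom_b):.1%}")

Both specifications are "reasonable"; only one bakes in the assumption that last year's score carries forward one-for-one. Which teachers get flagged as bottom-quintile depends on that modeling choice - which is exactly the subjectivity I'm pointing at.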
> Remember, this was about measuring teacher performance, not student performance.
And VAM has not proved useful at measuring teacher performance, because of the flaws I quoted above.
I believe you approve of the idea of firing teachers with low VAM scores, which Houston and other school districts have done. Yet, quoting now from "All sizzle and no steak: Value-added model doesn’t add value in Houston" at https://journals.sagepub.com/doi/full/10.1177/00317217177341...
] while EVAAS was in use for educational reform purposes in Houston (i.e. to increase student achievement), Houston students saw no improvements of the sort that had been promised in grades 3-8 in reading, grades 4 and 7 in writing, grades 5 and 8 in science, and grade 8 in social studies (Figure 1, blue trend lines). In those subject areas and grades, test scores declined overall from 2012 to 2015, as compared to other similar students throughout the state (black trend lines).
Almost as if VAM-based firing isn't a useful tool.
> and amenable to standardized testing
Yes, that's exactly my point. You highlighted the areas which are easy to test.
Composition is not easy to test, and it's also important. Being able to write an essay on Populism in the late-1800s US is not easy to test (not impossible - the AP US History exams do this, but it's expensive). But this is also a skill taught in school. My school required students to take a practical art course. Yet testing for drafting skills, or woodworking, or auto repair, isn't included in the high-stakes testing.
Why is it that the things which are easy and cheap to test just happen to be the right topics to test?
> Everyone is biased
Film at 11. I don't listen only to Philip Morris scientists to judge whether smoking tobacco causes health problems.
> I trust the bias that says standardized tests are useful
So far it doesn't seem like you are aware of the evidence that VAM is not an effective method for deciding if a teacher should be fired. That would easily explain your comments.