At a high school level, you have students who are either engaged and will likely learn both subjects eventually, or students who aren't as engaged and are just looking to take their "last math class". In my opinion, it makes more sense to offer a choice, or at least to focus on the curriculum that keeps students at that age the most engaged.
As someone who has studied both subjects, I’d go with linear algebra 9 times out of 10, the exception being if someone wanted to also study physics.
IMHO, since linear algebra is largely a tool for solving differential equations, I think calculus should be taught first, as the fundamental knowledge.
But calculus is also very useful, and probably easier to understand: derivatives and integrals are quite intuitive concepts compared to eigenvectors...
When worded this way, it sure doesn't sound like something worth doing. Matrix algebra (computational rules for matrix-vector and matrix-matrix products) is just an "implementation detail" of the general idea of a linear transformation.
The notion of a linear transformation T(x) = y where x is an input vector and y is an output vector is a really good thing to know ASAP so I'm all for the LA before CALC... or rather, if I had to choose between one XOR the other, I'd go for LA for sure!
1/ For practical considerations, the notion of a linear transformation is super useful if you'll be studying any complex process: as soon as you have multiple input variables, you'll want to put some coefficients in front of them, and what is the simplest math model you can use? In high school math we learn about proportionality relations, i.e. y = mx, where the output y depends on the input x multiplied by a coefficient m (the slope, if you think geometrically in the xy-plane).
Extending the notion of proportionality to transformations with n inputs and k outputs, instead of the single slope m you need k*n coefficients to describe the proportionality relations between input component j and output component i.
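As a concrete sketch of this idea (the matrix values here are made up for illustration), here is a linear model with n = 3 inputs and k = 2 outputs, where the single slope m is replaced by k*n = 6 coefficients:

```python
import numpy as np

# Made-up linear model with n = 3 inputs and k = 2 outputs:
# instead of a single slope m, we need k*n = 6 coefficients M[i][j],
# one per (output component i, input component j) pair.
M = np.array([[2.0, 0.5, -1.0],
              [0.0, 3.0,  4.0]])   # shape (k, n) = (2, 3)

x = np.array([1.0, 2.0, 3.0])      # input vector in R^3
y = M @ x                          # output vector in R^2

# Each output component is a proportionality combination of the inputs:
# y[i] = sum over j of M[i][j] * x[j]
print(y)  # [ 0. 18.]
```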
Linear models are pretty good on the bang-for-your-buck metric for math models since: (1) k*n parameters for an R^n --> R^k transformation is a reasonable number of parameters, and (2) using "tomography*" you can easily estimate each of the coefficients. This is why linear transformations are used in many fields of science and computing (Biology, Chemistry, Economics, Statistics, neural networks, etc.).
tomography*: feed n "probing inputs" e1, e2, ..., en into T and record the outputs T(e1), T(e2), ..., T(en) (each of the outputs is a k-dimensional vector) --- if T is a linear transformation, then the info you've collected is enough to know all the k*n coefficients of the linear transformation (<=> entries of the matrix).
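This "tomography" procedure can be sketched in a few lines of numpy (the hidden map here is a made-up example, just so the black box has something to recover):

```python
import numpy as np

# Hidden linear map R^3 -> R^2 that we pretend we can only call,
# not inspect. (Made-up coefficients, for illustration only.)
M_secret = np.array([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])

def T(x):
    return M_secret @ x

# Probe T with the standard basis vectors e1, ..., en:
# the j-th column of the recovered matrix is exactly T(ej).
n = 3
columns = [T(np.eye(n)[:, j]) for j in range(n)]
M_recovered = np.column_stack(columns)

print(np.allclose(M_recovered, M_secret))  # True
```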
2/ From a theoretical point of view, I think teaching linear transformations and matrix-vector products (the boring row-times-column arithmetic rules) is a really good thing since it introduces learners to representation theory. You have one thing in math land y = T(x) and another thing in math land y = Mx and you know their behaviour and properties are identical (isomorphic?). This means you can understand the properties of one of the things by studying the properties of the other thing, e.g., Ker(T) <=> Nullspace(M).
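To make the "two things in math land" concrete, here is a toy example (my own choice of map: projection onto the xy-plane) where the abstract function T and the matrix M behave identically, including on their kernel/nullspace:

```python
import numpy as np

# The "spec": an abstract map defined in words, with no matrices.
def T(v):
    x, y, z = v
    return np.array([x, y])  # project (x, y, z) onto the xy-plane

# The "implementation": the matrix representing the same map.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

v = np.array([2.0, -1.0, 7.0])
print(np.allclose(T(v), M @ v))  # True: identical behaviour

# Ker(T) <=> Nullspace(M): vectors along the z-axis die in both.
w = np.array([0.0, 0.0, 5.0])
print(T(w), M @ w)  # both are the zero vector in R^2
```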
For me, this first contact with representation theory concepts feels like a really valuable thing to have. (A good knowledge buzz moment to get learners more interested in learning math). And it's not just linear transformations and matrices that have a "is a representation of" relationship between them. There are lots of representations in LA:
- vectors <-> coordinates
- system of equations <-> matrix equation
- row ops <-> elementary matrices
- linear transformations <-> matrix-vector products
- composition of linear transformations <-> matrix-matrix products
- graph <-> adjacency matrix
- conditional prob p(y|x) <-> matrix whose columns are p(y|x_i)
- function in time <-> Fourier coefficients
- quantum state <-> vectors with complex coefficients
- quantum operation <-> unitary matrices
- quantum measurement <-> list of projection matrices that sum to the identity
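One entry from the list above, made concrete with a made-up three-node graph: the adjacency-matrix representation pays off immediately, because powers of the matrix count walks in the graph.

```python
import numpy as np

# A graph on nodes {0, 1, 2} with edges 0-1, 1-2, and 0-2
# (i.e. a triangle), represented by its adjacency matrix.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# Representation payoff: (A^k)[i][j] counts walks of length k
# from node i to node j.
A2 = A @ A
print(A2[0][0])  # 2: two walks of length 2 from node 0 back to itself
```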
In particular for devs, it's an easily transferable analogy. The linear transformation T is the "spec" while the matrix M together with the matrix-vector product rules represents a particular implementation of the spec. See `T` and `T_impl` near the bottom of this notebook https://github.com/minireference/noBSLAnotebooks/blob/master... (binder https://mybinder.org/v2/gh/minireference/noBSLAnotebooks/049... or colab https://colab.research.google.com/github/minireference/noBSL... )

3/ From a pedagogical point of view, if we can deliver the "representation theory buzz" from linear algebra in high school, then this will be a good chance to review some important high school ideas:
Integer representations:
- integers in terms of decimal digits with place value
a = dn...d3d2d1d0 = dn*10^n + ... + d3*1000 + d2*100 + d1*10 + d0
- integers in terms of prime factorization
a = 2^a2*3^a3*5^a5*...
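Both integer representations can be checked in a few lines; the number 360 here is my own example:

```python
# Two representations of the same integer.
a = 360

# Decimal digits with place value: 360 = 3*100 + 6*10 + 0*1
digits = [int(d) for d in str(a)]
assert sum(d * 10**i for i, d in enumerate(reversed(digits))) == a

# Prime factorization by trial division: 360 = 2^3 * 3^2 * 5^1
def factorize(n):
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:  # leftover prime factor
        factors[n] = factors.get(n, 0) + 1
    return factors

print(factorize(a))  # {2: 3, 3: 2, 5: 1}
```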
Rational representations:
- fractions as m/n where m in Z and n in N*
- reduced fractions as m/n where m in Z and n in N* where GCD(m,n)=1
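The reduced-fraction representation is one line of code: divide out GCD(m, n) to get the canonical representative (12/18 here is my own example):

```python
from math import gcd

# Reduce m/n to lowest terms by dividing out GCD(m, n),
# giving the unique reduced representative of the fraction.
def reduce_fraction(m, n):
    g = gcd(m, n)
    return m // g, n // g

print(reduce_fraction(12, 18))  # (2, 3)
```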
So all in all, if we were to define LA as "representation theory knowledge buzz," and someone asks me if I recommend learning LA before CALC, I'd say yes.

I remember when we were introduced to abstract vector spaces in high school, and we were all pretty confused, even though this was a high school dedicated to mathematics, the foremost in the country.
Even complex vectors had us scratching our heads, which in retrospect seems absurdly trivial. It's just that we were used to thinking in very concrete terms; anything purely abstract is 10 times harder to grasp, so you probably can't teach things like linear transformations without matrices.
Derivatives and integrals on the other hand are very easy to visualise.