In this case, the "experts" could literally just be the routing/weighting over the LoRAs, so it could be a 1x100k (or 1M, or whatever) vector. Binary, maybe, for simplicity and size. Or trained as floats but capped to 0/1 at inference time.
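Roughly what I mean, as a toy sketch (numpy; all names and sizes here are hypothetical, and the "train as floats, binarize at inference" step is just a threshold for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

num_experts = 100_000  # hypothetical pool of LoRA adapters

# Gate weights would be trained as continuous floats via the usual
# gradient updates; random values here are just stand-ins.
gate_logits = rng.normal(size=num_experts)

# At inference, cap to a hard 0/1 mask, e.g. by thresholding at zero.
binary_mask = (gate_logits > 0).astype(np.float32)

# Each selected expert's LoRA delta is then either fully applied or
# skipped. Tiny toy version: 4 experts, each with a 3x3 weight delta,
# masked and summed into one combined update.
toy_deltas = rng.normal(size=(4, 3, 3))
toy_mask = np.array([1.0, 0.0, 1.0, 0.0])
combined_delta = np.einsum("e,eij->ij", toy_mask, toy_deltas)
```

So the router's output is just that mask (or the float weights, if you skip the capping), and the base model's weights get the masked sum of LoRA deltas added on top.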
The thing that’s a little weird to me is that you’d need to keep retraining the router whenever experts are added. But I guess that may just be part of the pipeline for adding custom knowledge to the system.