A card is blue (resp. red, etc.) because it has a blue mana symbol in its casting cost, not because it is found in the company of other blue cards. That is the concept of colour a model must represent before you can say with any conviction that it "understands" the concept of colour. In terms of "hard lines": that's the hard line you must cross.
The kind of model you're talking about, then, would be a classifier able to label individual cards with their colours, or an end-to-end model with an internal representation of cards' characteristics. That is not what was shown here.
A blue card is found in the company of other blue cards because humans picked them, because of the blue mana symbols in their casting costs.
With proper training, you end up with exactly the "end-to-end model with an internal representation of cards' characteristics".
Since it can't see the cards, it can't say anything useful about a card it hasn't seen during training, but if you added some new cards and started training again, a pre-trained net might learn the new cards faster than one you train from scratch. That would be evidence that the network has learnt a meaningful embedding.
There is no proof that this network has done so, but I think word2vec shows that it's a feasible approach.
You're assuming far more capability than is present. Just because a human can make this inference, it doesn't mean that a neural net can. Neural networks are notoriously incapable of inference, or of anything that requires reasoning.
>> There is no proof that this network has done so, but I think word2vec shows that it's a feasible approach.
Word2vec (and word embeddings in general) is actually a good example of why this kind of thing doesn't work the way you think it does. A word embedding model represents information about the context in which tokens (words, sentences, etc.) are found, but it does not, in and of itself, represent the meaning of words. The only reason we know that words it places in the general vicinity of each other have similar meaning is that we already understand meaning and can interpret the results. The model itself does not have anything like "understanding". It only models collocations.
Same thing here. You seem pretty certain that with more data (and perhaps a deeper model) you can represent something that the model doesn't have an internal representation for. But just because the model's behaviour partially matches that of a system that does have such an internal representation (in other words, a human), it doesn't follow that the model behaves that way because it models the world in the same way the human does.
And you can see that very clearly if you try to use a model like the one in the article, or one trained on all the Magic drafts ever, to draft a set of cards it hasn't seen before. It should be obvious that such a model would be entirely incapable of doing so. That's because it represents nothing about the characteristics of cards it hasn't seen, and so can't handle new cards. A human understands what the cards' characteristics mean, and so can just pick up and play a new card with little trouble.
As to what I mean by "internal representation": machine learning models that are trained end-to-end, and that are claimed to learn constituent concepts in the process of learning a target concept, actually have concrete representations of those constituent concepts as part of their structure. For example, CNNs have internal representations of each layer of features they learn in the process of classifying an image. Without such an internal representation, all you have is some observed behaviour and some vague claims about understanding this or learning that, at which point you can claim anything you like.
This is a mostly meaningless semantic distinction. I can ask you to give a synonym for "king" and you might suggest ruler, lord, or monarch. I can ask a word2vec model for a synonym for "king" and it will provide similar suggestions. What "understanding" of the words' meanings do you have that the model lacks? Be specific!
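To make the comparison concrete, here's a minimal sketch of what "ask a word2vec model for a synonym" amounts to. The vectors below are hand-picked 3-d stand-ins (a real model would learn hundreds of dimensions from text); the lookup itself is just nearest-neighbour by cosine similarity:

```python
import math

# Hand-made toy vectors standing in for learned word2vec embeddings.
vectors = {
    "king":    (0.9, 0.8, 0.1),
    "monarch": (0.85, 0.75, 0.15),
    "ruler":   (0.8, 0.7, 0.2),
    "banana":  (0.1, 0.0, 0.9),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(word):
    # Rank every other word by cosine similarity to `word`.
    return max((w for w in vectors if w != word),
               key=lambda w: cosine(vectors[word], vectors[w]))

print(most_similar("king"))  # → monarch
```

Whether that lookup counts as "understanding" is exactly what's in dispute, but behaviourally it matches the human answer.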
Definitions are abstract concepts, so your ability to pick similar words and the model's ability to do the same are equivalent. To put it differently:
>The only reason why we know that words it places in the general vicinity of each other have similar meaning is because we already understand meaning and we can interpret the results.
Is not correct. The only reason why we know that the words it places in the general vicinity of each other have similar meanings is because our mental models put the same words in the same vicinities.
>Same thing here. You seem pretty certain that with more data (perhaps with a deeper model) you can represent something that the model doesn't have an internal representation for. But just because the behaviour of the model partially matches the behaviour of a system that does have an internal representation for such a thing, in other words, a human, that doesn't mean that the model also behaves the way it behaves because it models the world in the same way that the human does.
This doesn't matter. Just because the model's internal representation of a concept doesn't map obviously to the way you understand it doesn't mean the model has no representation of that concept. Word2vec models do represent concepts. We can interpolate along conceptual axes in word2vec spaces. That's as close to an internal representation of an isolated concept as you're gonna get. Like, I can ask a word2vec model how "male" or "female" a particular term is, and get a (meaningful!) answer. We never explicitly told the word2vec model to track gender, but it can still provide answers because that information is encoded.
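A sketch of that "conceptual axis" idea, again with hand-made toy vectors (in a real word2vec space the (woman - man) direction behaves similarly even though gender was never an explicit training target):

```python
import math

# Toy stand-in embeddings; the third dimension loosely encodes "royalty".
vectors = {
    "man":   (1.0, 0.0, 0.3),
    "woman": (0.0, 1.0, 0.3),
    "king":  (0.9, 0.1, 0.8),
    "queen": (0.1, 0.9, 0.8),
    "table": (0.5, 0.5, 0.1),
}

def gender_score(word):
    """Project a word onto the (woman - man) axis; positive = 'female' end."""
    axis = tuple(w - m for w, m in zip(vectors["woman"], vectors["man"]))
    dot = sum(a * b for a, b in zip(vectors[word], axis))
    return dot / math.sqrt(sum(a * a for a in axis))

print(gender_score("queen") > 0)   # queen sits on the 'female' end: True
print(gender_score("king") < 0)    # king sits on the 'male' end: True
print(gender_score("table"))       # table projects to ~0, near the midpoint
```

The axis was never a labelled training signal; it falls out of where the vectors sit, which is the sense in which the information is "encoded".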
>Without such an internal representation all you have is some observed behaviour and some vague claims about understanding this or learning that, at which point you can claim anything you like.
Again, who cares? If it passes a relevant "Turing test", why does your quibble about the internal representation not being meaningful enough to you matter? Clearly there's an internal representation that's powerful enough to be useful. Just because you can't understand it at first glance doesn't make it not real.
To address another one of your comments:
> Hi. From your write-up and a quick look at your notebook that's what your model is doing. And you measure its accuracy as its ability to do so. Is that incorrect?
Neither I nor the person you responded to is the author. But yes, this understanding is incorrect. The model is indeed trained on historic picks, but that is not the same thing as reproducing a deck it has seen before. To illustrate, imagine that the training set of ~2000 datapoints had 1999 identical situations and 1 unique one.
The unique one is "given options A and history A', pick card a". The other 1999 identical ones are "given options A and history B', pick b" (yes, this is intended). A model trained to exactly reproduce a deck it had seen previously would pick "a". The model in question would (likely, depending on the exact tunings and choices) pick "b".
This bias towards the mean is intentional, and is completely different from "trying to recreate an exact deck it's seen before", which isn't a thing you normally do outside of autoencoders and, as others have mentioned, doesn't make much sense.
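The thought experiment above can be sketched with a crude frequency model standing in for the network (the data and the history-ignoring predictor are both hypothetical simplifications):

```python
from collections import Counter

# Hypothetical training set: 1999 records of "options A, history B' -> pick b",
# plus one unique record "options A, history A' -> pick a".
training = [(("A", "B'"), "b")] * 1999 + [(("A", "A'"), "a")]

# Stand-in for the network's bias toward the mean: count how often each
# card was picked from a given option set, ignoring the draft history.
counts = Counter()
for (options, _history), pick in training:
    counts[(options, pick)] += 1

def predict(options):
    # Choose the card most often taken from this option set overall.
    picks = {pick: n for (opts, pick), n in counts.items() if opts == options}
    return max(picks, key=picks.get)

print(predict("A"))  # → "b": the mean-biased model picks b, not a
```

An exact-reproduction model would memorise the lone (A, A') → a pair; the frequency-biased one overrides it with the majority pick, which is the behaviour the comment describes.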