In general, in AI, when we talk about "concepts" we're talking about the things machine learning models are trained to, well, model.
In PAC-Learning terms, specifically, a "concept" is a subset of some instance space (the instances may be vectors or whatever), or equivalently a function labeling each instance as in or out of that subset.
Note that a "concept" is not the same as a "class", as in classification. Instead, a concept belongs to a class of similar concepts, and a learner is trained on labeled instances of concepts in that class. The learner is then said to be capable of learning the class if, for any concept in it, it can output a hypothesis that labels instances of that concept with error at most ε, with probability at least 1 − δ over the training sample (for whatever ε and δ you pick, given enough examples).
For a more concrete example, one "class" of concepts is the class of objects represented as subsets of pixels in digital images. A "concept" in that class is, for example, the concept "dog". An image classifier can be said to have learned the class if it can correctly label subsets of the pixels in an image as "dog" (or "not dog").
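To pin the vocabulary down, here's a toy sketch in Python. The "intervals on a line" concept class and the tightest-fit learner are my own textbook-style illustration, not anything from the article:

```python
# Instance space X: points on the integer line.
X = range(100)

# A concept CLASS: "intervals [a, b] on the line".
# Each concept is a subset of X, written here as an indicator function.
def interval_concept(a, b):
    return lambda x: a <= x <= b

# One concept in that class (the "target" the learner must recover):
c = interval_concept(20, 60)

# Labeled instances of the concept:
examples = [(x, c(x)) for x in X]

# A simple consistent learner for this class: return the tightest
# interval around the positive examples.
def learn_interval(examples):
    positives = [x for x, label in examples if label]
    return interval_concept(min(positives), max(positives))

h = learn_interval(examples)

# The learned hypothesis agrees with the target concept on the sample;
# PAC learning asks that it also have error <= epsilon on fresh
# instances, with probability >= 1 - delta over the sample.
assert all(h(x) == c(x) for x, _ in examples)
```

The point is just the separation of roles: the class is the family of intervals, the concept is one particular interval, and "learning the class" means the same learner works for any interval you hand it.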
Since the article above is coming from Josh Tenenbaum's group, that's the kind of terminology you should have in mind, when you're talking about "concepts". These guys are old-school (and I say that as a compliment).