It was originally going to be in XML but they recently switched to JSON, which is a good move, I think. I can't wait for it to be adopted as it will give so much more richness to the data set.
Certainly less, by several orders of magnitude, than in the depressingly ubiquitous JSON-as-configuration use case...
Eventually it'll work both ways. I'm hoping this is a big help for adoption, as it gives you two formats for the price of one (just write an importer for MNX, and you can get MusicXML import support "for free" if you use the library).
Will MNX allow for inline comments?
I don’t see any comments on the examples page:
https://w3c.github.io/mnx/docs/mnx-reference/examples/
I know JSON doesn’t have comments, but JS and JSON5 allow for comments. It would be super nice to allow for comments because you can hand annotate sections of the MNX file for the purposes of teaching.
Given the choice between supporting comments and supporting a wider variety of implementations/libraries ("plain" JSON as opposed to a comments-supporting variant), I think the latter is a more practical priority.
With that said, we'd like to add a standard way to add vendor-specific information to an MNX document — which is definitely a must-have, for applications that will use MNX as a native format — and I could see a comments-ish thing appearing in that form.
Regarding that examples page, I'm actually planning to do something along those lines anyway. The MusicXML docs and the MNX docs use the same system (a Django app), and the MusicXML part uses a custom XML tag to define "this part of the XML example should be highlighted in blue" (example: https://w3c.github.io/musicxml/musicxml-reference/examples/a...). It's on my to-do list to implement the same thing for the JSON version — which is essentially like inline comments(ish), if you squint.
MusicXML is a great effort to tackle a very difficult problem, but some of the details can get rather hairy (e.g. having to represent many concepts twice, once for the visual aspect, and once for the performance aspect; or how exactly to express incomplete slurs). Interoperability in practice seems to be fairly limited (Possibly because for many music programs, MusicXML import/export is an afterthought).
One of the biggest contributions a new standard could make is to provide as complete a test suite as possible of various musical concepts (and their corner cases!) and their canonical representation. It looks like MNX has already made good efforts in this direction.
Considering this data is machine-generated and machine-ingested, moving away from XML seems like a big step down.
I like JSON for data transfer but for describing documents XML is decent.
https://music-encoding.org/about/
This is what MuseScore 4 will soon start using.
Related: Can it handle non-Western notations?
SMuFL is a font layout specification. It solves the longtime problem of "I'm making a music font. Which Unicode code point should I use for a treble clef?" For many years, this was a Wild West situation, and it wasn't possible to swap music fonts because they defined their glyphs in inconsistent ways. This problem is basically solved now, thanks to SMuFL.
MNX is a way of encoding the music itself. It solves the problem of "I have some music notation I want to encode in a semantic format, so it can be analyzed/displayed/exported/imported/etc."
EDIT: I'd like to clarify that I posted this comment, as a joke, before the below comment went on to clarify that there was, in fact, a JSON-based rewrite of the music standard in progress:
https://news.ycombinator.com/item?id=38460827
Never change, tech world!
Does anyone have any success stories?
Music notation is incredibly complex, and there are many places things can go wrong. There's a wide spectrum of error situations, such as:
* The exporting application "thinks" about notation in a different way than the importing application (i.e., it has a different mental model).
* MusicXML provides multiple ways of encoding the same musical concept, and some applications don't take the effort to check for all possible scenarios.
* Some applications support a certain type of notation while others don't.
* MusicXML doesn't have a semantic way of encoding certain musical concepts, leading applications to encode them as plain text (via the words element), if at all.
* Good ol' fashioned bugs in MusicXML import or export. (Music notation is complex, so it's easy to introduce bugs!)
This sounded interesting, so I went to the webpage, and found this point specifically called out:
> It prioritizes interchange, meaning: it can be generated unambiguously, it can be parsed unambiguously, it favors one-and-only-one way to express concepts, and multiple programs reading the same MNX file will interpret it the same way.
But I'm curious to see some examples of this. https://w3c.github.io/mnx/docs/comparisons/musicxml/ provides an interesting comparison (and calls out how the same MusicXML can be interpreted in different ways for things like octave shifts), but it would be nice if the page also included alternate ways that MusicXML can represent the same composition and talk about how certain programs end up misinterpreting/misrepresenting them. The Parts comparison, for instance, mentions how you can represent the same thing in two different ways in MusicXML (score-timewise and score-partwise), but only provides an example for one (score-partwise), and doesn't go into much more detail about if this leads to ambiguity in interpretation or if it's just making things needlessly complex.
Just to give you a quick response: look into MusicXML's concept of a "cursor". Parsing a MusicXML document requires you to keep an internal state of a "position", which increments for every note (well, careful, not every note -- not the ones that contain a "chord" subelement!) and can be explicitly moved via the "backup" and "forward" elements: https://w3c.github.io/musicxml/musicxml-reference/elements/f...
For music with multiple voices, this gets easy to mess up. It's also prone to fractional errors in music with tuplets, because sometimes software chooses to use MusicXML position numbers that aren't evenly divisible into the rhythms used in a particular piece of music. That can result in a situation where the MusicXML cursor gets to a state that doesn't actually align with any of the music.
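To make the cursor mechanics concrete, here's a minimal sketch (not a full MusicXML parser) of walking one `<measure>`, assuming a made-up fragment with two voices. It tracks the cursor in "divisions" units, advances on ordinary notes, skips advancing on `<chord>` notes, and rewinds on `<backup>`; using `Fraction` sidesteps the rounding problems mentioned above.

```python
import xml.etree.ElementTree as ET
from fractions import Fraction

# Invented two-voice measure: a quarter-note chord in voice 1,
# then <backup> rewinds so voice 2 can write two eighth notes.
MEASURE_XML = """
<measure number="1">
  <note><duration>2</duration></note>
  <note><chord/><duration>2</duration></note>
  <backup><duration>2</duration></backup>
  <note><duration>1</duration></note>
  <note><duration>1</duration></note>
</measure>
"""

def walk_measure(measure, divisions=2):
    """Return (onset, kind) events and the final cursor position,
    measured in quarter notes."""
    cursor = Fraction(0)
    last_onset = Fraction(0)
    events = []
    for el in measure:
        dur_text = el.findtext("duration")
        dur = Fraction(int(dur_text), divisions) if dur_text else Fraction(0)
        if el.tag == "note":
            if el.find("chord") is not None:
                # A <chord> note sounds at the previous note's onset;
                # the cursor does NOT advance.
                events.append((last_onset, "chord note"))
            else:
                events.append((cursor, "note"))
                last_onset = cursor
                cursor += dur
        elif el.tag == "backup":
            cursor -= dur
        elif el.tag == "forward":
            cursor += dur
    return events, cursor

measure = ET.fromstring(MEASURE_XML)
events, final = walk_measure(measure)
```

Both voices end up starting at position 0, and the final cursor lands back at the end of the quarter note, but only because every `<backup>`/`<forward>` duration was accounted for correctly; drop one, and every subsequent onset in the measure is silently wrong.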
Now as this song had nonsense lyrics and many repetitions and almost-repetitions, the structure of the song didn't quite pop out to me, so what I did was export a MIDI from Vocaloid that I opened in MuseScore. From MuseScore I then exported it as MusicXML. I opened that in Notepad++ for the sole purpose of pretty-printing the XML to normalize the textual representation and saved it right back. I took that and opened it in a Jupyter notebook, where I scraped it for <measure> elements with regular expressions and then searched for repeating ones, which I assembled into repeating segments and sub-segments.
This helped me memorize the song.
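The measure-scraping step above could be sketched roughly like this, using a tiny invented fragment in place of a real export. It assumes the XML has been pretty-printed consistently, so that identical measures serialize identically; the measure-number attribute is normalized away before comparing.

```python
import re
from collections import Counter

# Invented stand-in for a pretty-printed MusicXML export:
# measures 1 and 3 have identical content.
SCORE = """
<measure number="1"><note><pitch><step>C</step></pitch></note></measure>
<measure number="2"><note><pitch><step>D</step></pitch></note></measure>
<measure number="3"><note><pitch><step>C</step></pitch></note></measure>
"""

# Grab each <measure>...</measure> block (non-greedy, across lines).
measures = re.findall(r"<measure.*?</measure>", SCORE, flags=re.DOTALL)

# Strip the measure number so repeated content compares equal.
normalized = [re.sub(r'number="\d+"', "", m) for m in measures]

# Any measure body appearing more than once is a candidate repeat.
counts = Counter(normalized)
repeats = [m for m, n in counts.items() if n > 1]
```

Regex-on-XML is fragile in general, but for a one-off analysis of a single normalized file it's a perfectly reasonable shortcut.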
What I liked about MusicXML was that it was self-documenting enough that I didn't need to reference documentation, and I could find candidates for normalization quite easily (for instance, I didn't care about stem directions or inferred dynamics).
A gotcha is that MuseScore 4 has a bug where it doesn't show the "MIDI import" panel where you can adjust the duration quantization. This didn't matter to me for this song, but it did bite me once in the past when opening a MIDI from Vocaloid. MuseScore 3 works. Without adjusting that, it can infer 16th notes as staccato 8th notes and similar.
And there are actually a lot of alternatives, e.g. ABC notation, Alda, Music Macro Language, LilyPond, to name a few. Difficult to decide which one to prefer.
Music generation from notation is pretty much the MNIST toy-scale equivalent for sequence/language learning models; it's surprising that there's so little attention being paid to it despite how easy it is to get started with.
I'm a hobbyist in this space (I'm a composer myself as well as a software engineer) and currently all the tools are very poor. MusicXML is better than MIDI. MEI [1] is better than MusicXML, etc.
The problem is that a minuscule amount of effort and money is spent on this field, because music overall makes peanuts. It really doesn't justify training expensive ML models, which is unfortunate.