For example, if I have three packages:
- uno depends on json 1.3.6
- dos depends on json 1.4.12
- tres depends on json 2.1.0
Cargo will use json 1.4.12 for uno and dos, and json 2.1.0 for tres.

Hopefully Rust builds a culture that respects semantic versioning better than the Ruby & Node cultures do. That has to start at the top. There were several Rails 2.3.x releases with minor API incompatibilities. Truly respecting semver would have required those patch-level updates to get a new major number.
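A sketch of the manifests involved (the file layout and section names are my assumption; only the version numbers come from the example above):

```toml
# uno/Cargo.toml -- 1.3.6 and 1.4.12 are semver-compatible,
# so Cargo can unify uno and dos on json 1.4.12
[dependencies]
json = "1.3.6"

# tres/Cargo.toml -- 2.1.0 is a different major version,
# so tres gets its own, separate copy of json
[dependencies]
json = "2.1.0"
```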
We hypothesize that this difference works better for AOT compiled languages. And since it's pre-1.0, it's all good. This is the 'we'll watch this closely and adjust' from the email: it might not be okay.
When you declare an x.y dependency, tools that use SemVer assume x.(>y). So, a 1.4 dependency says "anything greater than or equal to 1.4.anything, but less than 2.anything.anything".
This assumes the ~> operator; if it were =, it would ONLY be 1.14.2, which is even more restrictive.
The project _should_ work, which is why Cargo is using this modified version of ~>.
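For the curious, Ruby's own requirement classes (they ship with RubyGems) demonstrate the difference; the version numbers here are purely illustrative:

```ruby
require 'rubygems' # provides Gem::Requirement and Gem::Version

# '~>' is the "pessimistic" operator: it fixes every component except
# the last one given, which may only grow.
loose = Gem::Requirement.new('~> 1.4')   # >= 1.4, < 2.0
tight = Gem::Requirement.new('~> 1.4.2') # >= 1.4.2, < 1.5.0
exact = Gem::Requirement.new('= 1.4.2')  # exactly 1.4.2

puts loose.satisfied_by?(Gem::Version.new('1.9.9')) # true
puts loose.satisfied_by?(Gem::Version.new('2.0.0')) # false
puts tight.satisfied_by?(Gem::Version.new('1.4.9')) # true
puts tight.satisfied_by?(Gem::Version.new('1.5.0')) # false
```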
Rails follows a shifted semver version, as documented here: https://github.com/rails/rails/blob/master/guides/source/mai...
Bumps from 4.x to 4.y might contain smaller breaking changes.
For a large project like Rails, I find that reasonable; otherwise we'd be at Rails 20 by now, which also wouldn't give a good sense of how the project has evolved.
I would much prefer Rails 20 to the current situation. If you want to make a major marketing-level move, introduce a codename or something. Separate marketing from semantics.
Ruby as an ecosystem doesn't really care for semver. Some projects follow it anyways, which I can respect, but they aren't the norm.
The Ruby system cares for semver in general and it is propagated there a lot, but I agree: it certainly isn't uniform.
Not sure why you think the culture there doesn't respect semver.
$ cargo cult
would build a new project/module from a template.
https://mail.mozilla.org/pipermail/rust-dev/2014-June/010569...
The best solution (as taken by npm, virtualenv and others) is to install libraries locally to the project that is building them.
That way, package management becomes the sole concern of the build system.
"Accumulation" is a good thing, it means each project has the exact version of a package that it was tested with, not some later version that a sysadmin decides it "probably should work with".
- that packages follow semver
- that the OS packagers are in a better position to test package combinations.
If the author releases a new version of libfoo, and A, B and C in an OS repo depend on libfoo, then the OS packagers do not release a new version of libfoo until the tests for A, B & C pass.
These are two good assumptions, and the language package world would be in much better shape if they followed those assumptions too.
Either you rely on system libraries OR every binary / library pulls in its own copy of its dependencies (the npm model).
The latter lets you have multiple versions of the same library for different dependencies, without confusing your system package manager.
The former might mean less build time, but it's pretty much what's wrong with the C/C++ ecosystem; you can't track the system libraries from the package tool, and you get out of sync.
Pillow and freetype specifically pop to mind as an irritatingly broken recent example; when freetype upgrades, perfectly working python projects suddenly stop working because the base system freetype library has been upgraded; and the pinned python libraries that depended on the previous freetype version no longer work, because they rely on the system package manager not to do stupid things.
It would be nice if you could point cargo at a 'local binary cache' to speed things up, and make builds work even if the central repository goes down; that could be package-manager friendly, I imagine.
Flipping your question around a bit: will package manager creators and maintainers ever develop better solutions to the use case of development rather than system administration so that we don't have to keep creating these language-specific tools?
You wouldn't even require upstream NixOS packages, just place built cargo packages in the nix store, using it like a cache. Then upstream NixOS channels could start accumulating cargo packages, making cargo dependency "builds" faster.
Hopefully the cargo team can come up with a solution that works a little better here, but I wouldn't hold my breath.
Understand that enterprise OSes are not built on developer MacBooks. Enterprise distros have reproducible builds on automated systems, with chroots that contain only what the package needs, no network access, and sometimes the build happens inside a virtual machine. It is almost ridiculous how, after Maven forgot about the need to work offline, almost every tool released afterward has made the same mistake.
Understand that Linux distributions sell support, and that means being able to rebuild a patched version of a failed package. So whatever dependencies are pinned in Cargo.toml or a Gemfile are irrelevant. The OS maker will override them as a result of testing, patching, or simply to share one dependency across multiple packages. Distros can't afford to ship one package per git revision used on a popular distro and then fix the same security issue in all of them.
So having "cargo build" be able to override dependencies and look at the installed libraries instead, via a command-line switch or environment variable, would help the packager avoid rewriting the Cargo.toml in the package recipe.
Maven was probably the first tool that made packaging almost impossible and completely ignored the use case of patching one component of the chain and being able to rebuild everything that depends on it.
Semantic versioning is great news here, because it allows the packager to substitute a dependency with any version that is semver-compatible (not exactly the same one).
For integrating with the C libraries not much needs to be done. If you support pkg-config then you cover 85% of the cases.
While the cache directory is technically a place where files can accumulate outside the view of well-documented and well-designed administrative tools, this is a common problem shared by many tools, including your favourite browser.
The quiet nature of the development process of Cargo was actually a response to the previous package management failures. The idea was not to publicise heavily before it was ready for dog fooding. This seems to have paid off.
The next features we're planning on working on are:
- Supporting refs other than `master` from git packages
You can keep it on your local filesystem and reference it w/ `file:///path/to/repo.git` -- you can also use SSH and HTTPS URIs from any other repository host, not just github!
So you could just clone the repo to some `vendor` directory and move `master` to whatever version you want!
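A sketch of what that might look like in a manifest (the path and package name here are made up):

```toml
# Cargo.toml -- depending on a local clone instead of a hosted repo
[dependencies.hello-world]
git = "file:///path/to/vendor/hello-world.git"
```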
I'm pretty excited to see Teepee and Rust come together so I can really give it a spin doing what I'm currently doing daily for a job.
I hope it will be more stable and work better than the Haskell package manager, Cabal. I literally never got that to work on any machine. It would typically destroy itself while attempting to update itself...
It doesn't ship with Rust and the docs on GitHub and crates.io are not very enlightening.
1) Install latest version of Rust found here: http://www.rust-lang.org
2) git clone --recursive git@github.com:rust-lang/cargo.git
3) make
4) make install (could be that sudo is needed for you; DESTDIR=$HOME make install)

aroch:~/staging/|⇒ brew info rust
rust: stable 0.10 (bottled), HEAD
/usr/local/Cellar/rust/0.10 (74 files, 174M) *
Poured from bottle
From: https://github.com/Homebrew/homebrew/commits/master/Library/...

aroch:~/staging/|⇒ brew info cargo
Error: No available formula for cargo
The new tutorial will be based around 'real' Rust development, and so will assume Cargo.
That said, http://crates.io/ should have install instructions on the site. I'll open a ticket and get on that.
1. Wycats (Yehuda Katz) is on Rust apparently :)
2. `.toml` -- some crossbreed YAML/INI file format that I like
That said, it's not that big of a deal. At least it's not an in-house markup like Haskell's cabal...
About 14 months ago, it caused some of the most serious vulnerabilities in the Ruby on Rails world ever: http://tenderlovemaking.com/2013/02/06/yaml-f7u12.html
> why not just use JSON?
JSON is not really human-editable. Those quotes and commas, ugh! Also, JSON lacks comments.
The vulnerabilities in YAML (which is a superset of JSON, by the way) point at why YAML and JSON both aren't appropriate for configuration: they are _serialization_ formats. Configuration isn't what they're built for.
And you're right, it's really just not a huge deal in any way. Especially once we have `cargo project` to autogenerate the basics.
In fact, why not just use npm's package.json?
You cannot write comments in JSON, for instance.
{
  "package": {
    "name": "hello-world",
    "version": "0.1.0",
    "authors": ["wycats@example.com"]
  },
  "bin": {
    "name": "hello-world",
    "comment": "the name of the executable to generate"
  }
}
So where is the problem? And you have the advantage that other tools can use the complete file, including the comment. In TOML you need an extra parser to grab the comment.
Your solution works fine for docstrings, but comments and docstrings are not the same thing (although many languages that don't support docstrings in the syntax hack them together using comments, admittedly).
But beyond that, what's the argument for switching to json? Is there some kind of intercompatibility with npm/Node.js to be gained?
Also, having to quote everything makes writing JSON by hand a pain. Why would you want to use it over a nicer format?
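For comparison, a TOML rendering of the JSON manifest above (I'm mirroring Cargo's manifest layout as best I recall it; treat the exact section names as an assumption):

```toml
[package]
name = "hello-world"
version = "0.1.0"
authors = ["wycats@example.com"]

# the name of the executable to generate -- a real comment,
# not a "comment" key every consumer has to know to skip
[[bin]]
name = "hello-world"
```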
What I wouldn't do for a cut-down YAML standard with most of the serialization crap cut out.
From: https://github.com/toml-lang/toml
"Latest tagged version: v0.2.0.
Be warned, this spec is still changing a lot. Until it's marked as 1.0, you should assume that it is unstable and act accordingly."
cargo read-manifest --manifest-path .

(I admit I have a bias against toml, but still...)