Interesting paper from what I could glean, though.
EDIT: "Jinx dynamically builds a set of potential interleavings (i.e., alternate eventualities, or execution scenarios, that will occur under some future set of conditions) that are most likely to result in concurrency faults, and quickly tests those execution paths to surface concurrency problems including deadlocks, race conditions, and atomicity violations." So it's more selective than I first thought.
We avoid exponential search-space problems by sampling and by curtailing exploration. We choose what to explore based on research into where bugs are likely to lie. Exhaustive examination of anything but fairly trivial programs is impossible for exactly this reason, and this is why we sample: rather than force users to change the way they write code, we deal with the way they have written it.
Thanks, Pete
Basically, you use the same general strategies that are used to write Go-playing programs.
can we imagine an abstraction layer now which would solve all our (concurrency) problems, but which would simply be too slow on current hardware to actually use?
The difficult part is how the threads actually coordinate. The problem is extremely application-specific (what exactly do threads need to share? When do they need to share it? These questions cannot be answered in a general way). It's generally accepted that concurrency bugs (examples: data races (colloquially "race conditions"), deadlock, atomicity violations, locking discipline violations) are extremely difficult bugs. This is probably because either (1) programmers are not accustomed to thinking about coordination between parallel activities or (2) people are just worse at thinking concurrently than thinking sequentially.
So new libraries/methods for accomplishing communication between threads are always welcome and can help reduce the complexity of parallel programming. However, nobody has yet found an abstraction that both works for most kinds of parallel programs (MapReduce is very simple to work with but also very restrictive) and is simple enough for people to program in without fear of hard-to-solve concurrency bugs (message passing and shared memory are both quite general but considered somewhat unsafe).
So, the problem is not that a good abstraction layer would be too computationally expensive -- it's that no one even knows what the abstraction should be! Hope this makes the issue clearer.
It seems to me you and I are very comfortably coordinating the sharing of resources right now. In parallel, a browser is running on my computer, a browser is running on your computer, and an HTTP server is running on the HN server.
But between us we're doing something collaborative. We are both contributing text out of which a single document is synthesized, and we might both have upvoted this story, etc.
Our collaboration here is structured in terms of http requests/responses. Does this in itself address issues of "race conditions", "deadlocks", etc?
Can we imagine a future in which computation and memory are so abundant, we can virtualize this client/server paradigm for any collaborating parallel programs?
Or can we imagine a future in which there is no need to parallelize a large class of programs, because they will execute satisfactorily fast in a single thread?
It solves all your concurrency problems in the same sense that garbage collection solves all your memory problems. I.e. you can still make a mess, but it's harder to do so, and it's less effort to build something that doesn't break.
EDIT: Whoops, beaten to the punch. Methinks refreshing the page before posting might be a good idea :)
I'd be looking for something that would run on Linux. Open source would of course be nice, but free is probably more important to me in this case. The finished product will be open source, designed to replace pthreads' rwlocks.
I sure wish they'd provide an idea of the eventual cost, though. I can understand their not wanting to right now, but I at least can't afford to invest time into their beta program without knowing if I'll be able to afford it when they start charging for it.
I just posted the pricing information on the site. We haven't finalized the actual price yet, but we expect to price this product in line with typical quality and load testing tools (think low four figures, USD). We will definitely offer substantial discounts to those customers who help us during our beta by submitting bugs or issues they find with Jinx itself, or by submitting bugs they've found in other software (their own or open source) using Jinx. Visit our "Report a Bug" page to give us feedback.
--Prashant