However, it is in no sense "new or unique" as the authors suggest. There's an extensive research literature (20+ years) on data sonification out there, so...
http://www.icad.org/knowledgebase
Note also the (very many) art-led sonification projects, turning everything from live IP traffic to gene-sequence or x-ray astronomy datasets into sound, carried out since the early 90s. Prix Ars Electronica may be a good place to look for these.
My summary of the field in general, FWIW, is this: it's trivial to turn a real-time data stream into sound. It's slightly harder to turn the stream into either a) music or b) non-dissonant sound streams. And it's very hard indeed to create a legible (i.e. useful, reversible) general-purpose sonification framework, because auditory discrimination abilities vary widely from individual to individual and are highly context-dependent.
Of course, because sound exists in time rather than space, there's no simple way to compare data against itself, as there is when one looks at a visual graph. Listeners rely on shaky old human memory: did I hear that before? Was it lower, louder? And so on.
That said, I remain fascinated by the area and propose that a sonic markup language for the web would be interesting.
Sneaky plug: My current project (http://chirp.io) began by looking at 'ambient alerts' until we reached the point above, and decided to put machine-readable data into sound, instead of attaching sound to data for humans.
Good luck, and I very much look forward to hearing more!
That said, what we're trying to do specifically - which is sonification as a service, and trying to adequately cover a very wide range of different use cases and sound sources at once - is probably new. I don't think that matters much, though, and "newness" is the least interesting aspect of the project.
1) If a graphical plot turns data into something visual, an audio "plot" turns data into something audible. Your output is an audio file rather than an image or video file. The typical application is turning a boolean flag into a chime (e.g. text message received). Your important insight is that this can be extended to longer-form audio outputs.
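The boolean-flag-to-chime idea above reduces to a lookup from event names to sounds. A minimal sketch in Python; all event and sound names here are invented for illustration, not Choir.io's actual vocabulary:

```python
# Map discrete events to sound identifiers, the audio analogue of
# plotting a boolean flag. Names are hypothetical examples.
EVENT_SOUNDS = {
    "message.received": "chime",
    "build.failed": "thud",
    "user.signup": "bell",
}

def sound_for(event: str, default: str = "click") -> str:
    """Return the sound to play for an event, with a quiet fallback
    for events that have no explicit mapping."""
    return EVENT_SOUNDS.get(event, default)
```

Longer-form outputs then become a matter of sequencing these lookups over an event stream rather than playing one chime in isolation.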
2) When is audio more advantageous than image or video?
- When you cannot look at a screen (driving, working out)
- When there are too many screens (control room)
- In a very dark environment where visibility is impeded
- If you are blind or vision-impaired
This could find real application in cockpits/control rooms, to ensure that a pilot is perceiving data even if they aren't looking at a particular dial. It could also be useful for various fitness and health apps that don't need you to look at the screen all the time.

Perhaps the most interesting application would be in a car, which is where people spend a great deal of time and have their ears and brains (but not their eyes) free. Some ideas:
a) Could you generate different sounds based on the importance of a text message (doing something like Gmail's importance filtering) signaling that you don't really need to respond to this particular message right now while driving?
b) Could you have audio feedback for important things along the road? For example, the problem with the Trapster app (trapster.com) is that I need to look at the phone to see where the speedtraps are. You can imagine an integrated audio feed that could give information like this and also tell you your constantly updated ETA (via Google Maps API call). Or you could listen to the pulse of your company on the road to do something semi-useful, and drill down into notable events via voice.
c) The really interesting thing is if you could pair this with a set of defined voice control commands. As motivation: an audible plot can't be backtracked like a visual plot. With a visual plot your eyes can just scan back to the left. To scan back and re-hear the sound you just heard requires rewinding and replaying. But it could be interesting to set up a small set of voice commands that allow not just rewinding, but rewinding and zooming. So you hear an important "BEEP" and you want to say something like "STOP. ZOOM", and set up the heuristics such that this identifies the right BEEP and then gives an audio drill-down of exactly what that BEEP represented.
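The "STOP. ZOOM" idea above needs a short history of what was just sonified, so a drill-down can be attached to the most recent beep. A minimal sketch, assuming a fixed-size ring buffer of recent events (no real speech stack is wired up here; the `Beep` fields are invented for illustration):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Beep:
    t: float      # timestamp in seconds when the sound played
    label: str    # what the sound encoded
    detail: str   # drill-down text to speak on "ZOOM"

class AudioHistory:
    """Keep the last N sonified events so a voice command like
    "STOP. ZOOM" can replay and explain the most recent beep."""

    def __init__(self, maxlen: int = 100):
        self.events = deque(maxlen=maxlen)

    def record(self, beep: Beep) -> None:
        self.events.append(beep)

    def zoom(self, now: float, window: float = 5.0) -> str:
        """Explain the most recent beep within `window` seconds."""
        for beep in reversed(self.events):
            if now - beep.t <= window:
                return beep.detail
        return "No recent event to zoom into."
```

The `window` heuristic stands in for the harder problem of guessing which beep the driver meant; a real system would likely weight by salience as well as recency.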
d) Done right, you might be able to turn a subset of webservices into a sort of voice-controlled data radio for the road. People spend thousands of hours in their cars so it's a real opportunity.
What I think would be a useful addition would be transforming 'levels' (as opposed to events) to ambient, continuously-playing audio. This is pretty much the "dynamic audio" of computer games.
For example, you could have strings playing according to CPU activity: softly and slowly (think double basses) when activity is low, but more loudly and urgently (cellos) when activity is high. That would create a sense of how busy the server is (if you enable the CPU activity 'channel').
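The strings-for-CPU idea above can be sketched as a mapping from a continuous level to musical parameters. The thresholds, instrument names, and ranges below are assumptions for illustration, not anything Choir.io actually does:

```python
def strings_for_load(load: float) -> dict:
    """Map CPU load (0.0 to 1.0) to dynamic-audio parameters:
    soft, slow double basses when idle; louder, faster cellos when busy."""
    load = max(0.0, min(1.0, load))          # clamp to [0, 1]
    return {
        "instrument": "double bass" if load < 0.5 else "cello",
        "volume": 0.2 + 0.8 * load,          # never fully silent
        "tempo_bpm": 60 + int(80 * load),    # 60 bpm idle, 140 bpm flat out
    }
```

Enabling a "channel" would then just mean sampling the metric every few seconds and feeding the result to a synthesizer.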
edit: I see cortesi has already mentioned they're working on transforming continuous data now - good job.
http://alexdong.com/choir-dot-io-explained/
We definitely see Choir fitting in where you can't look at or interact with a screen. Cars and wearable computing are areas we're excited about. First, though, we want to experiment on the desktop, find out what makes a good audio interface, and solve our own burning needs regarding more mundane monitoring situations.
Either way, it looks like your signup form has some peculiar ideas about what constitutes an email address; it keeps asking me to input an email address when I type in:
choir.io@s.hypertekst.net

Watching log files scroll by, I have noticed that once you have stared at them for long enough you start recognizing the patterns. There's not enough time to read everything that scrolls by, so quite often you just know that something is out of place.
Maybe these soundscapes could provide something similar in a non-obtrusive way. Just by listening, your brain would be wired to expect certain sounds as a consequence of certain actions. If something goes wrong, you would just know it.
I think one challenge is how to put something like this into use. Setting up the triggers and configuring the sounds feels like too much trouble ("What is the correct sound for this event?"). It might be better to take a ready-made set and learn the sounds.
It seems like the big trick when implementing an app on top of this is appropriately assigning the "level" of the event. Every time the Alarm or Horn goes off it’s fairly intrusive.
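One way to keep the intrusive sounds rare is to rate-limit the top level and demote repeats. A hypothetical heuristic sketch; the level names echo the comment above, but the cooldown and thresholds are made up:

```python
import time
from typing import Optional

class LevelAssigner:
    """Assign an event a sound level, demoting back-to-back criticals
    so the Alarm stays rare enough to mean something."""

    LEVELS = ["drip", "knock", "horn", "alarm"]

    def __init__(self, alarm_cooldown: float = 300.0):
        self.alarm_cooldown = alarm_cooldown   # seconds between alarms
        self.last_alarm = float("-inf")

    def assign(self, severity: int, now: Optional[float] = None) -> str:
        """severity: 0 (routine) .. 3 (critical)."""
        now = time.time() if now is None else now
        level = self.LEVELS[max(0, min(severity, 3))]
        # Demote a critical event to a horn if an alarm fired recently.
        if level == "alarm":
            if now - self.last_alarm < self.alarm_cooldown:
                return "horn"
            self.last_alarm = now
        return level
```

The point of the sketch is only the shape of the trick: severity maps to a level, but the mapping carries state so repeated criticals don't become noise.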
Regardless an awesome, uniqueº and useful service.
--
º In my experience.
Instead of simply generating a fixed sound for each event, have you considered synthesizing a continuous multi-track score? Like a baseline piece of orchestra music being modulated by the events. Or something like Brian Eno's http://en.wikipedia.org/wiki/Generative_music
Also, perhaps consider streams of data other than discrete events: perhaps continuous metrics like CPU utilization, or stack traces from profiles, or percentiles of latency, or ...
Continuous data is one of the very next things we are implementing, partly because a sufficiently dense discrete set of events becomes a frequency, partly to cater for measurements like load. We plan to indicate magnitude with pitch and volume, but there are some complexities in the API and representation that we're working through first.
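The magnitude-to-pitch-and-volume plan above can be sketched as a simple normalization. The MIDI range and scaling below are assumptions for illustration, not Choir.io's actual parameters:

```python
def magnitude_to_sound(value: float, lo: float, hi: float) -> dict:
    """Normalize a measurement into [0, 1], then spread it across pitch
    (MIDI notes 48 to 84, i.e. three octaves) and volume (0.1 to 1.0)."""
    if hi <= lo:
        raise ValueError("hi must exceed lo")
    x = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return {
        "midi_note": 48 + round(36 * x),
        "volume": 0.1 + 0.9 * x,
    }
```

Clamping out-of-range values rather than raising keeps a noisy metric from ever silencing (or screeching through) the stream, which seems like the right default for ambient audio.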
The problem I see from the GitHub demo and the discussion here is that you zone out of the "background noise" and focus on the important/out-of-bound/etc. sounds. Great, so why not just remove the background sounds and alert the user only to urgent notifications? There's nothing new to that, though; it's just audible notification alerts.
If you are going to run the sounds in the background, your brain is going to process out the on-going "normal" sound anyway.
E.g. if you know a co-worker is on holidays then a dearth of checkin sounds will seem normal (ideally this feeling of normalcy will be subconscious). On a different day, a dearth of checkin sounds might alert you that your startup isn't making much coding progress (depending on context not necessarily a bad thing!).
I had the pleasure of meeting the Mailbox app crew at Dropbox's offices a few months ago. They had a really cool light show on what looked like a table tennis net strung up with networked LEDs and pasted to the wall. When a user signed up, it would create a blue pattern across the net. When a message was sent, the screen flashed red. You can imagine the screen was a dancing symphony of visually encoded events. It was really remarkable and quite beautiful to watch. Chaotic at first, but once you memorized the patterns you could glance at the screen and immediately feel the pulse of the application. After a few hours I think you'd almost be so in touch with the application that you could recognize errors without even having to check your logs / analytics / etc...
So @cortesi, definitely build in a hook for the Mixpanel API. It'd be great to get a sound every time a user signs up, signs in, or triggers certain events.
I can imagine all the SF startup folks walking around the mission with boomboxes on their shoulders networked to pick up their audio feed from Choir.io, broadcasting their own encoded analytics melody to the world. Or PMs with headphones on at their spin class, keeping up with their engineers' progress on the new sprint. Ok yes, I'm mocking the movement now, but it's still pretty cool, congrats =)
We're collecting integration ideas over here:
http://choir.uservoice.com/forums/217059-general/category/70...
If there's anything else on your wishlist, just chuck it in there.
No doubt a super cool and out-of-the-box idea, but personally I would go crazy if I had to hear water droplet sounds for any longer than an hour.
Also, whether any particular sound is annoying or pleasant is complicated (we're just figuring out the parameters now) and subjective. So, we're working on letting users create, edit and share sound packs to see what smarter and more talented folks than us come up with.
Wetter reverbs: in particular, the late reflections are pretty strong with far-away background noises, maybe even stronger than the original sound itself (though I'm not sure if that makes physical sense, it's easy to do with a regular reverb effect, and it really muffles the sound into the background).
Also try something with the stereo image.
If funds allow, ask a professional sound-mastering studio, maybe? There are people who might know just the tricks.
Oh, and if you want to place the sound in the room and bury it in the other ambient sounds, tell the users they really need somewhat decent speakers: not plastic desktop speakers, and definitely not headphones (even if they're really good headphones).
By which I mean, I've heard/experienced a couple of other data sonification / generative art installations with a similar concept, and they didn't sound "right", in some sense. Maybe part of it is that the github events seem more "useful" or "natural" than whatever it was (I forgot) those other installations were um sonificating.
But an important part of it is, I think, that you already chose a couple of really nice and varied sounds. Ones that stand out in the spectrum, and sound good both on their own and when repeated in quick succession (though for that last situation, some kind of real-time synthesis using oscillators would maybe provide a smoother sound).
anyway, well done!
Error log:
Blocked loading mixed active content "http://api.choir.io/stream/f9c750f2bedb0c0f" @ https://choir.io/static/media/lib.967f1395.js:8671

This looks awesome - I've been wanting to set up something similar in our office that makes a sound every time a sale is made, so this could be pretty handy.
You mentioned there will be Windows and OSX standalone clients coming soon. Will there be an API for writing clients?
https://choir.io/player/f9c750f2bedb0c0f
Been listening to this for a while now. Love it. Can't wait for a standalone client. Do you have a mailing list? I'd love to keep track of an ongoing feature list of sorts.