It was based around 3 arcade cabinets facing in towards each other, so the players couldn't see what was on each other's screens.
This was achieved by modifying the ball speed/direction slightly so that it arrived at the bat/wall at a musically relevant point and triggered the correct sound.
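That trick can be sketched roughly like this (a sketch of my own, with my own names, not the original game's code): given the ball's distance to the surface it will hit and a list of upcoming beat times, nudge the speed so the impact lands on the nearest beat, clamped so the change stays subtle.

```typescript
// Sketch of beat-quantised ball timing (hypothetical names, not the original code).
// Given the ball's distance to the next bat/wall and the timestamps of upcoming
// beats, adjust the speed slightly so the impact lands exactly on a beat.
export function quantiseBallSpeed(
  distance: number,    // px from ball to the surface it will hit
  speed: number,       // current speed, px/s
  now: number,         // current time, seconds
  beatTimes: number[], // sorted upcoming beat timestamps, seconds
  maxAdjust = 0.2      // allow at most ±20% speed change
): number {
  if (beatTimes.length === 0) return speed;
  const eta = now + distance / speed; // unadjusted arrival time
  // Pick the beat closest to the natural arrival time.
  let best = beatTimes[0];
  for (const t of beatTimes) {
    if (Math.abs(t - eta) < Math.abs(best - eta)) best = t;
  }
  if (best <= now) return speed; // that beat has already passed
  const needed = distance / (best - now); // speed that hits on the beat
  const lo = speed * (1 - maxAdjust);
  const hi = speed * (1 + maxAdjust);
  return Math.min(hi, Math.max(lo, needed)); // clamp so play still feels natural
}
```

The clamp is the interesting design choice: without it the ball would visibly lurch to hit distant beats, so off-grid impacts are sometimes the lesser evil.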
Ah, here you go, Josh has a reference to it on his site: https://www.autogena.org/work/ping
The audio engine (the bit I worked on) was in effect a stem-based mixer, but with a state transition diagram per stem and multiple loops available. Depending on the route through the transitions (triggered by events from the gameplay, for example how many balls the player was keeping in play), it was possible to reach more complex or simpler performances of the same musical piece. So the players' decisions and ability would affect the performance of the music, not the music itself, if that makes sense?
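A minimal sketch of that per-stem state machine idea (my own naming and states, not the original engine): each state selects one loop for the stem, and gameplay events walk the stem between sparser and denser performances of the same part.

```typescript
// Toy per-stem transition diagram (hypothetical states and names).
// Each state points at one loop; gameplay events move the stem between
// simpler and more complex performances of the same musical part.
export type StemState = "sparse" | "groove" | "full";
export type GameEvent = "ballGained" | "ballLost";

export interface Stem {
  state: StemState;
  loops: Record<StemState, string>; // loop file per state
}

// Transition table: current state × event → next state.
const transitions: Record<StemState, Record<GameEvent, StemState>> = {
  sparse: { ballGained: "groove", ballLost: "sparse" },
  groove: { ballGained: "full",   ballLost: "sparse" },
  full:   { ballGained: "full",   ballLost: "groove" },
};

// Advance the stem and return the loop the mixer should cross-fade to.
export function step(stem: Stem, event: GameEvent): string {
  stem.state = transitions[stem.state][event];
  return stem.loops[stem.state];
}
```

Keeping more balls in play walks each stem towards "full" and its densest loop; losing them thins the performance back out, so the piece itself never changes, only how it is performed.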
https://github.com/Farama-Foundation/Gymnasium
There's also SethBling's excellent video on YouTube about machine learning, specifically with Super Mario World:
https://www.youtube.com/watch?v=qv6UVOQ0F44
I encourage you to give it a try! I feel that video games are a bit underrated in the current AI buzz; I think there's lots of potential for a machine to learn a skill through playing a game, and lots of potential for a game being selected or even created with the goal of teaching a specific skill. However, at that point maybe it's better to forgo the audio and visuals and speak to the machine in text or with pure data.
On the other hand, I have seen a video about convolutional neural networks that feed on each pixel of an image. So perhaps training with sound data, or with the pixels of a spectrogram, could have some positive results. And I would certainly be amused to see a game played in time with music, or possibly even dancing with a song's melody, harmony, and story as well.
Anything that's ever been created by humans, existed first in the imagination of a human brain. And you've got one of those. A mental vision pursued and brought from the mind into physical reality is a beautiful thing, a gift for all of humanity in my eyes. I think it's quite worthwhile. But that's just my perspective. Thank you for sharing your imagination. Have a nice day
The original game’s sound was tied to the frame rate so this vaguely happened by default. Later ports to PAL broke this because it ran at a slower frame rate.
https://www.youtube.com/@LucidRhythms
Probably almost impossible to adapt written works 'backwards' into a visualization, but it might be fun to have different bars represent different notes and have the balls split for chords.
The submission, by contrast, has notes that aren't at a basic 1/4 tempo and is automatically "animated" based on the constrained optimization, which also leads to a much more interesting visualization :)
https://www.matrixsynth.com/2014/07/roland-mt-80s-midi-playe...
That said, I find it oddly satisfying to watch. Curious whether experience playing different instruments has anything to do with it; to me, something like a xylophone or a steelpan feels pretty analogous to this.
https://en.wikipedia.org/wiki/Atari_Video_Music
If you've seen the movie Over the Edge, Claude and Johnny have one at their house.
How is the beat used to sync the pong chosen? In Bad Apple!, for example, especially around 1m55 (https://www.youtube.com/watch?v=bvxc6m-Yr0E), it seems off.
Good suggestion from a YouTube commenter, pasting it here:
> This is pretty cool.. it would be cooler if there were multiple pongs and paddles for each type of beat (like high beats and low beats)
I do kind of wish that the last note corresponded to a game over, though, and I wonder if a smaller screen or faster ball would widen the playing field a little. Maybe I'll fork the code and try some of those out myself.
> "We obtain these times from MIDI files, though in the future I’d like to explore more automated ways of extracting them from audio."
Same here. In case it helps: I suspect a suitable option is (Python libs) Spleeter (https://github.com/deezer/spleeter) to split stems and Librosa (https://github.com/librosa/librosa) for beat times. I haven't ventured into this yet though, so I may be off. My ultimate goal is to be able to do it 'on the fly', i.e. in a live music setting, generating visualisations a couple of seconds ahead to be played along with the track.
Not sure if this is unsavory self promotion (it's not for commercial purposes, just experimenting), but I am in the middle of documenting something similar at the moment.
Experiments #1 - A Mutating Maurer Rose | Syncing Scripted Geometric Patterns to Music: https://www.youtube.com/watch?v=bfU58rBInpw
It generates a mutating Maurer Rose using react-native-svg on my RN stack, synced to a music track I created in Suno AI *. Manually scripted to sync up at the moment (not automatic until I investigate the above python libs).
Not yet optimised, proof of concept. The Geometric pattern (left) is the only component intended to be 'user facing' in the live version - But the manual controls (middle) and the svg+path html tags (right) are included in this demo in order to show some of the 'behind the scenes'.
Code not yet available, app not yet available to play with. Other geometric patterns in the app that I have implemented:
- Modified Maurer
- Cosine Rose Curve
- Modified Rose Curve
- Cochleoid Spiral
- Lissajous Curve
- Hypotrochoid Spirograph
- Epitrochoid Spirograph
- Lorenz Attractor
- Dragon Curve
- Two Pendulum Harmonograph
- Three Pendulum Harmonograph
- Four Pendulum Harmonograph
This is the TypeScript Maurer Rose function (used with setInterval plus an object array of beat times that determines when to advance the 'n' variable):
// Path segment list for react-native-svg (equivalent to string[])
type TypeSvgPathArray = string[];

export const generateGeometricsSimplemaurer = (n: number, d: number, scale: number = 1) => {
  const pathArray: TypeSvgPathArray = [];
  for (let i = 0; i <= 360; i += 1) {
    const k = i * d;
    const r = Math.sin(n * k * (Math.PI / 180));
    const x =
      r *
        Math.cos(k * (Math.PI / 180)) *
        40 * // base scale
        scale +
      50; // to center the image
    const y =
      r *
        Math.sin(k * (Math.PI / 180)) *
        40 * // base scale
        scale +
      50; // to center the image
    pathArray.push(`${i === 0 ? "M" : "L"} ${x} ${y}`);
  }
  const pathString: string = pathArray.join(" ");
  return pathString;
};
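For context, the setInterval-plus-beat-array driver described above might look something like this (a sketch of my own, not the actual app code): each tick compares elapsed playback time against the beat array and bumps 'n' for every beat that has passed.

```typescript
// Sketch of a beat-driven stepper (hypothetical, not the app's code).
// beats holds timestamps in seconds from track start; each time one passes,
// advance n and notify the caller so it can redraw the rose.
interface Beat { time: number }

export function makeBeatStepper(beats: Beat[], onBeat: (n: number) => void) {
  let index = 0; // next beat we are waiting for
  let n = 1;     // the Maurer rose's 'n' parameter
  // Call this from setInterval with the current playback time in seconds.
  return (elapsed: number) => {
    while (index < beats.length && beats[index].time <= elapsed) {
      index += 1;
      n += 1; // advance 'n' on every beat
      onBeat(n);
    }
  };
}

// Usage (in the app this would re-render the react-native-svg path):
// const tick = makeBeatStepper(beatArray, n => setPath(generateGeometricsSimplemaurer(n, 71)));
// const start = Date.now();
// setInterval(() => tick((Date.now() - start) / 1000), 16);
```

The while loop matters: if a tick arrives late, several beats may have elapsed, and catching up keeps the pattern in step with the track rather than drifting behind it.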
setInterval isn't an appropriate solution for the long term. The geometric patterns (with their controls) will have a playground app that you can use to adjust variables... As for the music sync side, it will probably take me a long time.
*Edit: I just noticed that the author (Victor Tao) actually works at Suno
Now try synchronising the music to the game.
You could use our Bungee library for the audio processing.
Nothing new. Apparently there are references to people doing this in ancient and medieval times.