Here's our film from last year, Sunspring, which was written entirely by an LSTM trained on science fiction screenplays: https://www.youtube.com/watch?v=LY7x2Ihqjmc
It's worth noting that this year I used subtitle files rather than screenplays to train our LSTMs, so we only had dialogue rather than dialogue + action descriptions. The Ars Technica article explains everything.
Here's some information about context-free grammars: http://www.decontextualize.com/teaching/rwet/recursion-and-c...
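For anyone who doesn't want to click through: a context-free grammar is just a set of rewrite rules that you expand recursively until you hit terminal words. A minimal sketch (the grammar rules here are made up for illustration, not taken from the linked tutorial):

```python
import random

# A toy context-free grammar: each symbol maps to a list of possible
# productions, and each production is a list of symbols/words.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["robot"], ["screenplay"], ["actor"]],
    "V":  [["writes"], ["watches"]],
}

def expand(symbol):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:            # terminal word: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

sentence = " ".join(expand("S"))
print(sentence)  # e.g. "the robot writes the screenplay"
```

That recursion is the whole trick; add a rule like `NP → NP "and" NP` and you get arbitrarily long output.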
Shame on the Ars staff for passing off this parlor trick as "artificial intelligence," and for failing to explain how this rather simple process works.
For fuck's sake: they find the raw screenplay of a show like Knight Rider, regex out the crap, download a Markov chain generator off GitHub, and train their model on the text. And of course by "training", I mean they run a single command to process the text document, and wait. Why this gets described as a "...long short-term-memory recursive machine-learning algorithm" probably has to do with the fact that Ars has a hand in promoting these short films.
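To be concrete about how simple the alleged trick is: a word-level Markov chain generator of the kind you'd find on GitHub fits in a couple of functions, and "training" really is just counting which word follows which. A minimal sketch (the corpus line below is invented, not actual Knight Rider dialogue):

```python
import random
from collections import defaultdict

def train(text, order=1):
    """'Training' a Markov chain: record, for each state of `order`
    consecutive words, every word observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        model[state].append(words[i + order])
    return model

def generate(model, length=20):
    """Walk the chain: pick a random start state, then repeatedly
    sample a follower of the current state."""
    state = random.choice(list(model))
    out = list(state)
    for _ in range(length - len(state)):
        choices = model.get(tuple(out[-len(state):]))
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "I am Michael Knight. I am a machine. I am the voice of KITT."
model = train(corpus)
print(generate(model, length=10))
```

Duplicated followers in the lists give you the transition probabilities for free: sampling uniformly from `model[("I",)]` picks "am" in proportion to how often "am" followed "I" in the corpus.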
First, how exactly were you able to infer, from a few snippets of dialogue, that they were lying and using a Markov chain instead?
Second, the concept of AI is so fragmented, varied, and overloaded, that who's really to say Markov text generation isn't a simple AI?
And TBH, there's not much difference between Markov chains and most AI and machine learning algorithms. The math's a little different, but they're almost all based on probability and statistics (and sometimes searching) and none of them really understand anything.
It's actually pretty standard and not technically innovative, in that you can Google existing tutorials on exactly how to do this (even in Keras, never mind lower-level libraries).
At any rate, this typically doesn't work well. I'm sure they generated a bunch of sample sequences and cherry-picked the ones that made even a little sense for inclusion in the script.
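Cherry-picking aside, how coherent the sampled sequences look also depends heavily on the sampling temperature used when drawing each next word from the model's distribution. A toy sketch of the effect (the words and probabilities here are invented, not from any real model):

```python
import random

def sample(probs, temperature=1.0):
    """Sample a word from a next-word distribution, re-weighted by
    1/temperature: low temperature concentrates on likely words,
    high temperature flattens the distribution."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    total = sum(weights)
    r = random.random() * total
    for word, w in zip(probs, weights):
        r -= w
        if r <= 0:
            return word
    return word  # guard against float round-off

probs = {"the": 0.7, "a": 0.2, "stapler": 0.1}
cold = [sample(probs, temperature=0.2) for _ in range(1000)]
hot = [sample(probs, temperature=5.0) for _ in range(1000)]
print(cold.count("stapler"), hot.count("stapler"))
```

Low temperature gives repetitive but safe text; high temperature gives wilder output, which is where the "even a little sense" filtering comes in.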
A lot of "AI-generated text" work just swaps in a source dataset, generates some text, calls it AI, and racks up blog posts about "artificial creativity."
https://medium.com/artists-and-machine-intelligence/adventur...
https://medium.com/artists-and-machine-intelligence/adventur...
But setting aside the article's lack of technical rigor, and its scientism, the film itself is interesting. I was impressed by how the actors could convey feeling, even with a randomly generated script. I imagine that, in acting school, actors have to do this sort of thing as an exercise: learn how to convey feeling using random words or grunts (is this done?).
For me, it's interesting that there's another side to human communication that has nothing to do with verbal meaning. It sounds hard to do THAT using software.
I guess that's why there's an Oscar for editing! The temporal context of a shot matters as much as the shot itself.
But even if my reaction to the film is conditioned by editing, and not just Hoff's acting, I still find it weird that I'm feeling things that don't depend much on the content of the words he's speaking.
That sounds like a very interesting movie I would like to watch very much. But a search for Death Wish only turns up tons and tons of results for the one with Charles Bronson going postal.
If you watch the credits, they show all the corpora the various bot-generated lines were trained on. Hasselhoff's lines were trained on a whole bunch of Baywatch episodes.
But if we end up with androids dreaming of electric sheep, this is where it started.