Why? With responses generated according to what? Are you really just suggesting using neural networks in the compiler's optimiser?
> Then try using a smaller network until something like your registration flow, or a simple content management system was just a bunch of floating point numbers in some matrices saved off to disk.
Why? What's the advantage over just building software?
Why would I want to do this? I'm not 100% sure ... I think it would be super fast once you got it working. I think it would avoid many security bugs. You wouldn't have to read "oh, Drupal 3.x has 20 new security bugs" and then go patch your code. When I first had this idea I was thinking about it in terms of a parallel system that could catch hacking by noting when the actual HTTP responses diverged too much from the predicted responses. The main idea being that for a given input the output really is 100% predictable, assuming your app doesn't use random numbers like a game would.
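Just to make the parallel-system idea concrete, here's a rough sketch of the divergence check. Everything here is hypothetical (the metric, the threshold, the function names); a real system would use something smarter than byte-level comparison, but the shape of it would be the same: predict the response, compare it to the live one, and flag large differences.

```python
# Hypothetical sketch: flag a live HTTP response as suspicious when it
# diverges too much from the response a model predicted for that request.

def divergence(predicted: bytes, actual: bytes) -> float:
    """Fraction of differing bytes (a crude stand-in for a real metric)."""
    n = max(len(predicted), len(actual))
    if n == 0:
        return 0.0
    diffs = sum(a != b for a, b in zip(predicted.ljust(n, b"\0"),
                                       actual.ljust(n, b"\0")))
    return diffs / n

THRESHOLD = 0.2  # made-up value; you'd tune this on known-good traffic

def looks_suspicious(predicted: bytes, actual: bytes) -> bool:
    """True when the actual response strays too far from the prediction."""
    return divergence(predicted, actual) > THRESHOLD
```

The point is that the predictor doesn't need to be perfect to be useful as an alarm; it only needs to be close enough on normal traffic that a hacked response stands out.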
To link this idea to the article, I think things like XML parsers could be written this way ... I can't prove it, but I suspect they would be very fast and wouldn't come with all the baggage the article complains about.
I started thinking along these lines after reading stuff like this https://medium.com/@karpathy/software-2-0-a64152b37c35
If you consider yourself a world-leading expert on neural networks and have some secret sauce in mind, by all means, good luck... otherwise it sounds like a fool's errand.
I do want to point out that I'm thinking of doing this on a very limited website, not a general-purpose thing that replicates any possible website. When I imagine the complexity of a modest CMS or an online mortgage calculator, I think it is much less complex than translating human languages. The fact that web code has to be so much more precise than human language actually makes the task easier. But to be fair, I'm all talk at this point with no code to show for it. So I will keep these comments in mind; this thread has helped me think through some of this stuff.
Also I'll bet you that your neural net is > 100x slower than straight line code.
For session-based variables? Not sure, either it all becomes stateless and the code has to read everything from storage for each request ... or maybe the LSTM is able to model something like an entire user session and remember the stuff that the original app would have put in the session.
That Andrej Karpathy article that I linked to two comments above ... he pointed out, in a different blog post, that regular neural networks can approximate any pure function. Recurrent neural networks like the LSTM can approximate any computer program. It's their ability to propagate state from step to step that allows them to do this.
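For anyone who hasn't looked inside an LSTM: the state propagation is literally a pair of vectors (hidden state and cell state) that each step updates and hands to the next step. Here's a minimal single-cell version in numpy, just to show the mechanism; it's a textbook formulation with one stacked weight matrix, not anything tuned or trained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step: W maps the concatenated [x; h] to four gate
    pre-activations; (h, c) is the state carried between steps."""
    z = W @ np.concatenate([x, h]) + b
    n = h.size
    i, f, g, o = z[:n], z[n:2*n], z[2*n:3*n], z[3*n:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # update cell state
    h_new = sigmoid(o) * np.tanh(c_new)               # new hidden state
    return h_new, c_new

# Toy run: the state after each input depends on all earlier inputs,
# which is what lets the network model a computation over a sequence.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x, h, c, W, b)
```

That (h, c) pair is the "session memory" I was hand-waving about above: the model's only way to remember what happened earlier in the sequence.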
As far as it being 100X slower, well at a certain point I will be willing to take your money :)
I imagine it would be fast, then you realise you've made a static content caching layer out of a neural network and replace it with Varnish cache and it would be hyper fast.
I just don't know how you can achieve that with a static cache ... only if somebody else requested that exact mortgage calculation before and it's still in the cache.
Also, my idea of the "given input" from the earlier comment would have to include results of sql queries that would form the entire input to the LSTM.
But honestly I think overtrained autoencoders can be used as hash maps. That would be an application more in line with what I think you are saying.
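The "network as hash map" idea is easy to demo in miniature. This isn't a real autoencoder, just the smallest thing that shows the mechanism: deliberately overfit a single linear layer until it memorizes a fixed key -> value table, then "look up" every key by running it through the network.

```python
import numpy as np

# Deliberately overfit a linear layer so it memorizes a tiny lookup table.
keys = np.eye(4)                                 # four one-hot "keys"
values = np.array([[3.0], [1.0], [4.0], [1.5]])  # arbitrary stored values

W = np.zeros((4, 1))
lr = 0.5
for _ in range(200):                 # plain gradient descent on squared error
    pred = keys @ W
    W -= lr * keys.T @ (pred - values) / len(keys)

recalled = keys @ W                  # "hash map" reads: one per key
```

With one-hot keys this converges to exact recall, which is the point: memorization, the thing you normally fight against, is the feature here.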
I'm not sure I'd like an HTTP web server to silently fail, or to be undebuggable when strange inputs expose security vulnerabilities.