https://en.wikipedia.org/wiki/Wright_Flyer
https://www.reddit.com/r/AskHistorians/comments/1ocua1b/in_1903_the_nyt_published_an_editorial_declaring/
* inclusion of episodic long-term memory in the SNN, snapshotted every N tokens behind the current position (my own idea...);
* implementations in C and C# ports without torch/tensorflow (SpikeGPT is in Python with torch);
* several types of 'attention', training modes and memory modes;
* training/learning without backpropagation;
* CPU-friendly in the sense that while it's still somewhat slow (unfortunately), at least a GPU isn't mandatory.
Here are screenshots of both the C# Windows Forms implementation and the C/Cygwin port... and 2 random screenshots of Claude Sonnet 4.6 and Gemini Pro 3.1 commenting on the program:
https://imgur.com/a/SAQqKmm
Why is the text generated from a seed still far from perfect? Two reasons: the corpus is very small, and the C# version has <100% accuracy.
However, the big nice surprise: it seems that both grammar and semantics are learned. Coupled with my idea of long-term episodic memory, the long context outside the tiny 'ctx' window can easily be extended to thousands of tokens behind without any decrease in speed, which could make it a practical program. Generation is also very fast.
Future work:
* BPE: right now it's just a word tokenizer... not good for code;
* did I say "code"? It may actually be a total failure for coding... or maybe not: completely untested;
* the program actually has 2 versions; the other one deviates noticeably from this one and has a C and even an F# port, but the F# one just doesn't work... it always produces complete gibberish... major bug;
* never tested on an actual neuromorphic CPU, just good ol' universal Intel laptop ones;
* a Python port should be possible;
* finally the big test: a large text corpus (megabytes) and accuracy over 95% <- the ultimate test.
https://i.imgur.com/p6AmBrq.png
Another way to explain this: "like Mamba but with good long-term memory and fixed inference/generation speed". At the moment it's in C only, but I bet it can be ported to Python.
1. It's free; 2. it solves the privacy issue, so you communicate with your chatbot offline (lack of privacy is a huge issue amongst end users); 3. crazy, but even in 2026 people don't always have internet everywhere.
This turns the LLM industry from a SaaS subscription into a "download GTA 4" kind of business model, like a Steam video game business.
I know there are many open-source models, but they're designed as a 'build it yourself and train it on your corpus' kind of solution... they're for professionals, not a ready-to-use working solution.
Or even better... you can follow the money (aka Bill Gates' playbook back in the day): contact big companies like Intel, HP, Lenovo etc. to embed your AI into their hardware, so they market it as a "free offline AI assistant" while your contract with the company earns you millions. The vast majority of people have no idea what "45% on Humanity's Last Exam" means, so even if your model isn't Gemini 3.1 Pro it will be considered a plus; and if it is, even better, since it means people get something better without the hassle of paying online to sites they didn't know existed yesterday.
It's also an option for IoT devices like watches, scooters, cars, even the fridge. Again, I'm sure this is already a thing, but almost everyone these days relies on SaaS: even if you download the ChatGPT app or use Copilot on Windows, an internet connection is needed and the model is server-side, not on your machine.