I make no great claims for the system; being prompt-based, it has major issues. It is a prototype to explore the feasibility of giving a chatbot arete, a code of conduct. There are few tests and no evals, so all the usual caveats apply! An intellectual exercise in possibilities not currently being explored anywhere else. Does it work? Hmm, almost :)
It extracts normative propositions from incoming user requests, then compares them to its own internal ethical normative propositions using the Normative Calculus. The system also uses the Decision Paradigm algorithm from Lee Roy Beach [3] to forecast whether to take up the user's task or not.
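To make the shape of that pipeline concrete, here is a minimal Raku sketch. Every name in it is hypothetical, invented for illustration rather than taken from TallMountain, and the two subs are stubs standing in for the LLM extraction prompt and the Normative Calculus comparison:

    # All names hypothetical; stubs stand in for the LLM-driven steps.
    my @agent-norms = 'respect user privacy', 'do no harm';

    sub extract-norms(Str $request) {
        # Stand-in for the prompt that extracts normative propositions.
        $request.match(:g, / 'should' \s+ (<-[.]>+) /).map({ ~$_[0] });
    }

    sub norm-compatible(Str $candidate, @norms --> Bool) {
        # Stand-in for the Normative Calculus comparison.
        so @norms.first({ $candidate.contains($_) });
    }

    my @candidates = extract-norms('The bot should respect user privacy.');
    my $take-task  = ?@candidates
        && so @candidates.map({ norm-compatible($_, @agent-norms) }).all;
    say $take-task ?? 'accept task' !! 'decline task';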
[1] https://link.springer.com/article/10.1023/A:1013805017161
[2] https://www.jstor.org/stable/j.ctt1pd2k82
[3] https://books.google.ie/books/about/The_Psychology_of_Narrat...
- You don't have to fight concurrency or multithreading the way you do in Python. There is no GIL to worry about, and built-in support for things like Supply and hyper-operators is right there in the language. It is really easy to hook up disparate parts of a distributed agent without reaching for async or actor libraries as you would in Python (see the first sketch after this list).
- Something I prefer is the OOP abstractions in Raku; they are much richer than Python's. YMMV, depending on what you prefer (second sketch below).
- Better out-of-the-box support for gradual typing and constraints in Raku (third sketch below).
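On the concurrency point, this is the kind of wiring I mean. A minimal sketch, not code from TallMountain: a Supplier feeds tasks to a reactive worker, and a hyper method call fans work across items, with no extra library in sight.

    # Preserving buffers values emitted before the worker taps the Supply.
    my $requests = Supplier::Preserving.new;

    my $worker = start {
        react {
            whenever $requests.Supply -> $task {
                say "processing: $task";
            }
        }
    }

    $requests.emit($_) for <summarise translate classify>;
    $requests.done;    # closes the Supply, so the react block exits
    await $worker;

    # Hyper method call: a candidate for parallel evaluation, no setup needed.
    say <alpha beta gamma>».uc;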
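On the OOP point, a small taste of what I mean by richer, again a hedged sketch with invented names: roles compose behaviour and attributes into classes, and `is required` is enforced for you at construction time.

    # A role composed into a class with `does`.
    role Norm {
        has Str $.proposition is required;
        method render { "ought: " ~ $!proposition }
    }

    class EthicalNorm does Norm {
        has Rat $.weight = 1.0;
    }

    my $n = EthicalNorm.new(proposition => 'do no harm', weight => 0.8);
    say $n.render;    # ought: do no harm
    say $n.weight;    # 0.8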
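And for gradual typing and constraints, a sketch of what "out of the box" buys you, names invented for illustration: a subset type with a where-clause is checked at every bind, and it sits happily next to untyped code.

    # A subset narrows a type with an arbitrary predicate.
    subset Weight of Numeric where 0 <= * <= 1;

    sub endorse(Str $claim, Weight $confidence --> Bool) {
        $confidence > 0.5;
    }

    say endorse('keep user data private', 0.9);   # True
    # endorse('keep user data private', 1.5)      # dies: constraint check fails

    my $loose = 42;          # untyped, as in Python
    my Int $strict = 42;     # typed exactly where you want it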
Python wins on the AI ecosystem though :)
I started messing around with this code several years ago, when the LLM libs in Raku were not as rich as they are today. I thought I needed a specific kind of LLM message-handling structure that could be extended for tool handling and some Letta-style memory management (which I never got around to!). I had some Python libs of my own, so I ported them. I suspect that if I were starting now, I would use what is available in the community. This version of TallMountain is the last of a long series of prototypes, so I never rewrote those parts.