One thought is that Obsidian can execute WebAssembly, so a parser / sema checker written in something that compiles to wasm can be run on the source files. Can probably tie that to a syntax-highlighter-style thing for in-IDE feedback.
The other is that markdown is a tempting format for literate programming. I do have some notes in Obsidian that are fed to cmark to produce HTML. With some conventions, splitting a literate program into executable code embedded in an HTML document is probably doable as an XML pipeline.
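As a rough sketch of how that split could work, assuming the convention is simply "fenced code blocks are the program" (awk stands in here for the real cmark/XML pipeline, and every path is made up):

```shell
# Build a tiny literate markdown file. printf is used so the backtick
# fences don't collide with this example's own fencing.
mkdir -p /tmp/lit_demo
{
  printf 'Some prose explaining the program.\n\n'
  printf '```c\n'
  printf 'int main(void) { return 0; }\n'
  printf '```\n\n'
  printf 'More prose.\n'
} > /tmp/lit_demo/program.md

# Tangle: keep only the lines inside fenced code blocks.
awk '/^```/ { inblk = !inblk; next } inblk' /tmp/lit_demo/program.md > /tmp/lit_demo/program.c

# Weave: hand the same file to cmark for the HTML side, e.g.
# cmark /tmp/lit_demo/program.md > /tmp/lit_demo/program.html
```

The real pipeline would presumably use cmark's XML output and a proper XML tool instead of awk, but the tangle/weave shape is the same.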
In a much simpler vein, I'm experimenting with machine configuration from within obsidian. The local DNS server sets itself up using a markdown file so editing an IP or adding a new machine can be done by changing that markdown.
I hope the author continues down this path and writes more about the experience.
I appreciate it=) I definitely want to write some more stuff up, in particular how code organization changes when you can tag and add attributes to definitions.
This sounds really interesting, any code to look at anywhere or write up about it?
There's a file called DNS.md which contains lines like `192.168.1.15 milan` in an Obsidian vault. Obsidian Sync copies it around. DNS is served by Pi-hole, which uses a plain-text file in that sort of format for its entries.
Then, in the superuser's `crontab -l`:

`0 * * * * cmp /home/jon/Documents/Obsidian/SystemControl/DNS.md /etc/pihole/custom.list >/dev/null 2>&1 || cat /home/jon/Documents/Obsidian/SystemControl/DNS.md > /etc/pihole/custom.list && sudo -u jon pihole restartdns`
Cron has rules about relative paths that I don't remember, so it's literally written as above. It seems likely that the idea generalises; I'm considering managing public keys for ssh / wireguard in a similar fashion but haven't done so yet.
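A sketch of the same cmp-then-copy pattern applied to ssh keys (all paths here are placeholders, not a real setup):

```shell
# Hypothetical SSH.md note holding one public key per line.
SRC=/tmp/vault/SSH.md           # stand-in for the Obsidian vault path
DST=/tmp/authorized_keys_demo   # stand-in for ~/.ssh/authorized_keys
mkdir -p /tmp/vault
printf 'ssh-ed25519 AAAAC3Nz_example jon@milan\n' > "$SRC"

# cmp -s is silent; only copy (and kick any services) when the note changed.
if ! cmp -s "$SRC" "$DST"; then
  cp "$SRC" "$DST"
fi
```

An explicit `if` also sidesteps the grouping question in the crontab one-liner, where `a || b && c` parses as `(a || b) && c`.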
Obsidian Digital Garden[1] is FOSS, so it might be modifiable to parse and output the code pages correctly.
However this gives you two things that Smalltalk doesn't:
1. It's language agnostic (boring I know)
2. It promotes keeping your code and written texts in the same system, where they're both first class. That way they can link to each other, transclude each other, be published together, be organized the same way, etc. I really think this is the most interesting thing about the project; it feels important to me.
Caveat: right now my written documents can link to/transclude code, but it doesn't work the other way yet. This is because the linearizer will see a link from code to documents as another definition and try to jam it into the source file. This would be an interesting use case for typed links, but Obsidian doesn't have them AFAIK. Kind of cool, since I haven't seen many other use cases for typed links in the wild.
EDIT: It occurs to me that I've never used a Smalltalk notetaking or word processing program. Are there any that are integrated with the System Browser, so that they can link to (or even better embed) code? If anyone has more info please let me know!
https://lepiter.io/feenk/introducing-lepiter--knowledge-mana...
That said, there's a lot to be said for revisiting old ideas. There was so much interesting research done in the 60s and 70s in all sorts of random directions, maybe because at that time there were no precedents or expectations for how things should be done. There are so many untapped resources here, it's crazy. Every now and then I re-watch "The Mother of All Demos" [1] from 1968, where Douglas Engelbart demonstrates some of the research at SRI, or the Sketchpad demo [2] from 1963, where Ivan Sutherland presents a GUI-based CAD system.
Fortunately, these ideas have now been picked up again, but to me it's interesting to note just how long it took for them to become mainstream. Some of it is obviously cost, as the state-of-the-art research machines were massively more powerful than home computers even two decades later, but I'm sure there were a lot of great ideas that have simply been forgotten.
Part of the problem, I think, is that we have found solutions to some of the easy problems and optimised them to such a degree that it's then hard to ever go back and revisit the alternative approaches, because you'd need to regress so far from the current level of expectations.
[1] https://www.youtube.com/watch?v=yJDv-zdhzMY [2] https://www.youtube.com/watch?v=6orsmFndx_o
In terms of over-optimisation forcing certain technologies to be developed and others to be ignored, one example I'm very familiar with is computer graphics. I'd written a TON of stuff here, but decided to simplify it as it was labouring a specific point.
But our computer graphics state of the art progressed roughly along these lines: drawing all edges of polygons, hidden-line removal (Sutherland), clipping intersecting polygons (Hodgman), filling polygons with a single colour, *, Gouraud shading, Gouraud shading with smaller triangles, Phong shading with bigger triangles, texturing, fixed texture and lighting pipelines, pixel shaders, vertex shaders. I'll also add compute shaders, but those were more a generalisation of what people were already doing with pixel shaders operating on data that wasn't really pixel data.
Now, you'll notice my * around the time of single-colour filled polygons... this might not be the correct place to put it, but around this point some people started experimenting with ray tracing and got amazing results, just incredibly slowly. These were seen as the "gold standard", but because drawing triangles was much faster, that's where the money continued to be poured: optimising and optimising this special case, discovering more techniques to "approximate" the right image while avoiding the hard work of actually rendering it. Over time, things have got closer and closer to ray tracing, except transparent and shiny objects have always been the Achilles' heel.
Fortunately the industry's interest in ray tracing has resumed, and compute shaders are now general enough to be used for it, but they're still orders of magnitude slower, because the renderer needs to consider the entire scene rather than one triangle at a time: you need to store the scene in some kind of tree that's paged in on demand, and different latencies for different pixels cause problems for the SIMD architectures. We're starting to see more and more consumer-level hardware with decent ray-tracing performance, but it's been a decade of lost time in terms of optimisation compared with where it could have been if the entire market hadn't been competing only on rasterising triangles more quickly.
In the ray-tracing space, we still see that it's too slow to create perfect images (for very complicated scenes with lots of shiny surfaces and few lights, you might need thousands of rays per pixel just to get a handful that actually reach a light source), so we've invented all sorts of approaches to cover it up, whether it's training an ML model to guess the real colour of black pixels from neighbouring ones, or re-projecting pixels from a previous frame to fill in the gaps, etc.
Personally, I can't help but think the real breakthrough in performant raytracing will come from tracing light from the light sources instead. This wasn't done traditionally because potentially it's even more expensive than tracing backwards from the pixel, but should be more accurate when there are multiple light sources.
But even the latest batch of hardware is all focused on raytracing, which I think is missing the biggest trick of all - they could be using cone-tracing as a first approximation and then subdividing the cone into smaller and smaller chunks until they're approximately pixel sized. None of this is new, it's just not what the larger industry is doing right now, because it's cheaper and easier for them to do rays instead.
that being said, the thing i haven't been able to convince myself of yet is why these are different from just normal (in-line) functions. as in, why should i have to write [[foo]]? would it not be better to have all identifiers automatically linked?
I like the idea (also a fan of Unison's approach to code-in-the-db), but I worry about the potential issues that come from effectively having a single global namespace. Could be that I just don't have the discipline for it, though.
Exactly. But zacgarby's right that you would want some auto-linking, so this is where language-specific plugins come in.
The difference from today's world would be that those plugins would leave their results explicitly serialized in the source medium, so they wouldn't have to keep being reconstructed by every other tool.
> I like the idea (also a fan of Unison's approach to code-in-the-db), but I worry about the potential issues that come from effectively having a single global namespace. Could be that I just don't have the discipline for it, though.
I have lots of thoughts on this. I was initially disappointed that Unison kept a unique hierarchy to organize their code -- that seems so filesystem-ey and 1990s.
However, I'm now a convert. The result of combining a unique hierarchy with explicit links between nodes is a 'compound graph' (or a 'cluster graph', depending, getting the language from https://rtsys.informatik.uni-kiel.de/~biblio/downloads/these...). These are very respectable data structures! One thing they're good for is being able to always give a canonical title to a node, but varying what that title is depending on the situation.
I think that for serious work the linearizer would want to copy this strategy as well. Right now it's flat because that's all I need for my website, but if you were doing big projects in it you'd want to follow Unison and have a hierarchy. In the `HashMap` folder you'd display `HashMap.get` with a link alias that shows plain `get` (in Obsidian syntax, `[[HashMap.get|get]]`), but if that function is being called from some other folder it would appear as the full `HashMap.get`.
You could still do all the other cool stuff like organize by tags and attributes using frontmatter, but for the particular purpose of display names having a global hierarchy is useful.
EDIT: What matters more than what the linearizer does is what Obsidian displays, so it's there that the "take relative hierarchical position into account when showing links" logic would have to occur. That could be a plugin or maybe Obsidian's relative link feature, I haven't used the latter.
Thank you! I think [[links]] will work out of the box with Logseq, since they're the same as in Obsidian. Transclusions will be in the wrong format, since Obsidian transclusions look like `![[this]]`, but it would be quick to modify the linearizer to handle them.
You may not want transclusions though since transcluding code into other code is... very weird. I'm curious what use cases people come up with for it though.
What are the projects you're especially bullish on?
I know less about Hazel, my understanding is that it's source-code-in-CRDTs, which is definitely structured source code though may not technically be in a database.
Unrelated: what has your experience been using igneous-linearizer to help understand other people's code?