I see the author is spring cleaning:
> I've turned over a new leaf (no more Openbox, Tridactyl, Xorg, xterm), and so some of these things I no longer use. On Linux I now use KDE on Wayland with a minimally-configured browser. I miss the power user features, but I do not miss the friction and constant maintenance.
https://github.com/skeeto/dotfiles/commit/df275005769b654618...
> I am no longer using Mutt nor running my own mail server. In general less terminal stuff for me.
https://github.com/skeeto/dotfiles/commit/e331e367c75f66aaa9...
LLMs have inspired a similar change in me: with a big change in how I work, I feel I can and should be more flexible about adopting new tech, which involves freeing myself of previous choices.
FWIW, the age of LLMs made me build a deeper, more intimate relationship with Emacs, because it's a Lisp REPL with a built-in editor, not the other way around. When you give an LLM a closed-loop system where it can evaluate code in a live REPL and observe the results, it stops guessing and starts reasoning empirically.
The LLM that I run inside Emacs can fully control the active Emacs instance. I can make it change virtually any aspect of it. To load-test things, I even made it play Tetris in Emacs. And not just simply run it, but actually play it without losing. It was insane.
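For anyone wondering how that closed loop can be wired up: here is a minimal sketch using gptel's tool interface. The tool name, descriptions, and error handling are my own illustration, not the commenter's actual setup.

(require 'gptel)

;; Expose an "evaluate Emacs Lisp" tool to the model, so it can act on the
;; live Emacs instance and observe the results.
(gptel-make-tool
 :name "eval_elisp"
 :description "Evaluate an Emacs Lisp form in the running Emacs and return the printed result."
 :category "emacs"
 :args (list '(:name "form"
               :type string
               :description "A single Emacs Lisp expression to evaluate"))
 :function (lambda (form)
             ;; Return either the printed value or the error text, so the
             ;; model can correct itself on the next turn.
             (condition-case err
                 (format "%S" (eval (read form) t))
               (error (format "error: %S" err)))))

With something like this registered, "change the theme", "resize this window", or even "play Tetris" all become the same loop: evaluate, observe, retry.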
Also, Emacs is all about plain text - you can easily extract text from anything - from the browser, terminal, CLI apps, Slack, Jira, etc., and you can do that on your own terms - context can appear in a buffer, in your clipboard, become a file or series of API requests. That is really hard to beat.
In case anyone else wondered about using gptel to edit thinking (e.g. vis-à-vis Qwen3.6's `preserve thinking`), [1] explains:
> In a multi-turn request, from the time you run `gptel-send`, everything the LLM sends is passed back to it [...during tool calls...] includes multiple reasoning blocks. [...But...] subsequent gptel-send calls read their input from the buffer contents (or active region, etc), so the reasoning blocks in the buffer will not [] be sent as "reasoning_content".
But in org mode, those are apparently `#+begin_reasoning` blocks (`gptel-include-reasoning`?), so editable thought might be an easy addition?
A caution, FWIW: any LLMs which respond with interleaved content and reasoning blocks currently only work when not streaming, and fixing that is non-trivial. [also 1]
My .emacs config has improved, and I wrote my own Emacs-based coding agent: https://github.com/mark-watson/coding-agent
Let's be honest: Lisp probably won't ever get bigger than Python, unless Python for whatever reason starts dying on its own. But if AI ever gets serious about interpretability, formal reasoning, program synthesis - all the stuff Lisp was built for - it just might quietly become relevant again in research contexts, without ever reclaiming mainstream status.
Scicloj has been building out a serious ML stack in Clojure - noj, metamorph.ml, scicloj.ml.tribuo, libpython-clj for Python interop. Besides that, people have been showing that 'code is data' is exactly what makes it a better target for LLMs. Clojure is the most token-efficient PL - it's been proven. There are some interesting recent Clojure projects relevant here:
—
I've tried different AI packages and currently gptel and ECA remain the main ingredients. This is a quickly changing landscape, and things may change, but for now it feels very good.
I like gptel because it's enormously extendable and exploitable - it allows me to send LLM requests from just about anywhere - I could be typing a message (like this very one) and suddenly in need of ideas for how to phrase something better, or explain simply, or fact-check my assumptions, whatever. Quick & dirty interaction that gets discarded in the same buffer. For longer investigations and research I would use a dedicated gptel buffer. Those get automatically saved.
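If it helps to picture the "send from just about anywhere" part, this is roughly all it takes, since gptel acts on the active region or the buffer up to point. The keybindings here are my own choice, purely illustrative:

;; Send the region (or buffer up to point) to the LLM from any buffer,
;; and open gptel's transient menu when the request needs tweaking.
(global-set-key (kbd "C-c RET") #'gptel-send)
(global-set-key (kbd "C-c C-<return>") #'gptel-menu)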
I don't use gptel as a coding assistant; even though you can do that, it's not really optimized for that kind of work. I use ECA. It works much better for me than every other alternative I tried, and I tried more than a few. What's crazy is that I sometimes type a prompt in ECA, then ask gptel (with a different model) to make it more "AI-friendly", changing the prompt in place, and then send it.
All my MCPs are coded in Clojure (mostly babashka)¹ - because (like I said) giving an AI a Lisp REPL makes much more sense (maybe even more than using a statically typed language). I had to employ a few tricks so all the tools, skills and instructions can be shared between gptel, eca-emacs, ECA Desktop, Claude Code CLI, Claude Desktop App, and Copilot CLI. Even though I mostly use gptel and ECA, it's good to keep other options around, just in case. All the AI-related Emacs settings are in my config².
Is this helpful, or you want some more concrete examples?
—
¹ https://github.com/agzam/death-contraptions
² https://github.com/agzam/.doom.d/tree/main/modules/custom/ai
Very true. There's an enormous tacit knowledge gap. Check this out:
I have to use Mac for work. My WM is Yabai, which is controlled via Hammerspoon (great tool on its own), which means I can use Fennel, which means I can have a Lisp REPL. MCP connected to that REPL can query and inspect every single window I have on my screen. It can move them around, it can resize them, it can extract some properties of them. It's figuring out stuff like: "pick a selected Slack thread from the app and send it into an Emacs buffer", or "make my app windows work like Emacs buffers" - pick from the list and swap it in place. Or "find the HN thread about retiring from Emacs among my browser tabs and summarize the content"...
Never in my life have I been more grateful to my younger self for grokking the philosophy of Lisp. Recent months have only reinforced my firm belief that this 70-year-old tech is truly everlasting. Thank you, John McCarthy, for the great gift to humanity, even though so weirdly underappreciated.
I'm still exploring all the ways the agent and I can collaborate using Emacs as a shared medium, but at the moment am super optimistic about it.
This is what gives me the most pause.
For me the friction always comes when I try to use the internet without it.
One example: it disables the default Ctrl-F search function but its own search function is subpar (no match counts/hlsearch, e.g.) and often clashes with website's built-in search (on Github, e.g.).
It doesn't work on the default newtab either, and changing the default newtab somehow makes opening a new tab slower (that's FF's fault, I guess)…
solid extension, big fan
Since I needed to keep a Chromium around anyway, and I'm already forced to use one for work, it became simpler to just use Chromium exclusively.
In the process I dropped some extensions.
It's been great.
This time around I'm using Chromium for personal stuff, and Firefox for work-stuff. I do more work-related browsing, so having the vertical tabs in firefox meant that was the better browser to use for official stuff.
(In my previous job I used safari for work, and firefox for personal.)
I'm actually excited about the potential for a future where local agents help improve the operating system experience as I go by making changes based on my use case. All local, of course. I do not want to trust a cloud provider with my use cases/behavior on my computer so they can sell me more ads...
When I graduated college I used Dvorak and Emacs on Linux. Six months of having to use shared Windows lab computers extensively beat me down to surrender all of those points - my brain just couldn't handle switching, so I conformed my desktop to match. Then later I switched jobs to a group that was all Unix, but of many varieties most of which only had vi, not Emacs. And so I learned vi. Sometimes minimizing friction means going with the flow.
It's so comfortable that it acts as an impediment to change, since some types of change are uncomfortable.
This can feel like friction to me.
When I remove customisation, I am more "open to experience", and often find preferable tooling.
Edit: Or do you mean "declaratively" in the sense of using something like straight.el?
this exactly. most people can’t set it up that well.
Yet the author ended up doing the latter and it's not really made clear why. Why?
Because deep down they are incomparable, categorically. Separate the tools from the foundational ideas and you see the very different value. The Vim model of text navigation is a fantastic, practical, brilliant idea. Once you grok it, you can take it anywhere. You can use it in your editor, browser, terminal, WM. Emacs is rooted in another, even more brilliant idea: a practical notation for lambda calculus. These ideas have no overlap. But understanding the philosophy of each (ideally both) could open so many different possibilities.
Doom emacs and Spacemacs are both good starter kits to give you an idea of what you could do.
There is also an ecosystem of applications for Emacs that are really good. They don't require you to use Emacs as your editor (you can run, say, Magit as a standalone instance) but if you do, they integrate really well with each other.
Anyone know of something like this?
(setq user-emacs-directory (format "~/.emacs.d/magit/%s/" emacs-version))
(setq custom-file (concat user-emacs-directory "custom.el"))
(setq make-backup-files nil)
(setq auto-save-default nil)
(setq create-lockfiles nil)
(setq inhibit-startup-screen t)
(setq initial-scratch-message nil)

;; `emacs -q' skips package activation, so activate packages installed under
;; the dedicated user-emacs-directory before requiring magit.
(package-initialize)
(require 'magit)

(defun setup-standalone-magit ()
  (magit-project-status)
  (delete-other-windows))

(add-hook 'after-init-hook 'setup-standalone-magit)
And a small wrapper (`~/.local/bin/magit`):

#!/bin/sh
if [ "$(git rev-parse --is-inside-work-tree)" = "true" ]; then
  exec emacs -nw -q --no-splash -l "/path/to/magit-init.el"
fi
It worked well for me because I can reuse all my keybindings (evil + leader keys with `general`) and my workflow is fully in the terminal. (I have since moved on to Jujutsu, and `jjui` is filling this gap for me right now, but it's not quite a magit-for-jj.)

The IntelliJ git client is my favorite by far; I'm curious, what do you not like about it?
I have long struggled to learn Emacs and use it effectively. Just for the fun of it, if I were to use Claude as my teacher, how could I ask it to teach me to use Emacs? I don't like to ask questions and then go back to try things. I want it to be a driver that will assist me with the usage. Has anyone tried such an approach to learn Emacs?
Yesterday I typed "Set the default YAML indentation to 2 spaces." It came up with
(use-package yaml-mode
  :defer t
  :config
  (setq yaml-indent-offset 2))

(add-hook 'yaml-ts-mode-hook
          (lambda ()
            (require 'yaml-mode)
            (setq-local indent-line-function #'yaml-indent-line)))
Now I can hit tab to indent YAML by 2 spaces, and I learned a little in the process. I'm delighted with this setup.

What does "(inside Droid)" mean? Do you use any package to integrate Claude Code into Emacs?
I've started using Droid inside Emacs via the agent-shell package I learned about here a few days ago (https://news.ycombinator.com/item?id=45561672). It handles quite a few other agents, too.
You can start with vanilla Emacs with zero config and Claude/Copilot/Codex/etc, running separately. Your first goal is to have the LLM running inside Emacs - ask the LLM how. It probably will recommend gptel - as one of the most popular and robust choices, go with it.
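For reference, a bare-bones gptel setup looks roughly like the sketch below; the backend and environment variable are placeholders for whichever provider you end up using, so treat it as an assumption to adapt rather than a recipe:

;; Minimal gptel setup via use-package; the Anthropic backend is only an
;; example, gptel supports many providers.
(use-package gptel
  :ensure t
  :config
  (setq gptel-backend (gptel-make-anthropic "Claude"
                        :stream t
                        :key (getenv "ANTHROPIC_API_KEY"))))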
Once you get LLM tools to modify Emacs state from within, you can just go crazy. You can tell it to change colors, fonts, ask any stupid questions, whatever. It will do it without missing a beat - no restarts, no waiting, no copy-pasting - just flow.
Isn't that kinda expected with a new software release, that it doesn't have a 100% feature parity?
My experience with LLM technologies is that they make generating the code the really quick part. It may be reasonable to spend much more time specifying things up front (rather than emergently, as you would by hand). -- I mean, if you've got a well-crafted description of what you want, you'll be able to get a working program MUCH quicker with an LLM, today, compared to writing it out by hand.
Would it really be surprising/shocking if an LLM was able to rewrite (most) features from an existing software, to a new software?
It seems like the reality today is, we've gone from a maintained software in a niche ecosystem with happy users, to a more fragmented one where everyone has an LLM write their own half-baked one.
Try doing that with Elfeed2.
Vi/Nvi2 users can almost do the same with Unix pipes and apertium/translate-shell/some lingva CLI translating tools for the whole document/regex selection/lines, a la Emacs. So can sfeed users, where depending on the feed they can pipe the plumbers' output (or just hack the scripts) to any other translating tool:
git://codemadness.org/sfeed
Heck, a few years ago I could reuse Telega.el's (the Telegram client's) translating functions for non-Telega buffers, translating some text guide on the spot. So, did the blogger actually win something?
I also suspect it allows easier consolidation. Moving from a deprecated lib to a new (and better) one for example.
Implementations will likely homogenize a bit as well, but on the other hand boy am I glad not to see an increasing amount of bizarre naïve hand-rolled implementations for some things.
You're right, Spacemacs is essentially a batteries-included version of Emacs.
Hence my question: what could Wellons (who's a seasoned Emacs veteran) ever say about Spacemacs (or Doom - which in this context makes no difference)? What kind of views would one be interested to hear? About using the Space key as the leader key, or something about the local-leader key; or vim-navigation/Evil in general; or the modules/layers architecture of the Emacs config? He said in the post you shared that he believed he'd eventually end up using Evil - he doesn't need Spacemacs for that.
Spacemacs is great for beginners, for people who don't want to deal with learning Emacs's native bindings - they are legit confusing. For someone like Chris, it makes little sense; they'd probably just add modal editing packages to their existing config. Even though Spacemacs and Doom are still valuable - one can find many interesting gems there.
Also, these projects may give you a good discipline for structuring your keys mnemonically - everything files-related would be at "SPC f", search stuff on "SPC s", etc.
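For what it's worth, that mnemonic-leader discipline is easy to carry into a plain config with evil plus general.el; the prefix and the handful of bindings below are just an illustration of the "f for files, s for search" idea, not a recommendation:

;; A Spacemacs-style SPC leader in a vanilla config, using general.el.
(use-package general
  :ensure t
  :config
  (general-create-definer my-leader-def
    :states '(normal visual)
    :prefix "SPC")
  (my-leader-def
    "f f" #'find-file
    "f s" #'save-buffer
    "s s" #'isearch-forward))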
I've been using it since 1994.
Whoa, shit, I'm old.
It was my first experience with emacs as well, but in MS-DOS, ca 1990. Did not know there was a CP/M version.
For you, perhaps.
- The impact on computer science seems almost entirely negative so far: mostly the burden of academic wordslop, though an additional negative impact is AI sucking all the air out of the room. What's worse is how little interesting computer science has come out of the biggest technological development with computers in many years: in fact there has been a terrible and very sudden regression of scientific methodology and integrity, people rationalizing unscientific thinking and unprofessional behavior by pointing to economic success. I think it'll take decades to undo the damage, it's ideological.
- The impact on software development actually does seem a bit positive. I am not really a software developer at all. It always felt too frustrating :) However the easing of frustration might be offset by widespread devastation of new FOSS projects. I don't want to put my code online, even though I'm not monetizing it. I'm certainly not alone. That makes me really sad. But I watched ChatGPT copy-paste about 200 lines of F# straight from my own GitHub, without attribution. I'm not letting OpenAI steal my code again.
- Software engineering... it does not seem like any of these systems are actually capable of real software engineering, but we are also being adversely affected by an epidemic of unscientific thinking. Speaking of: I would like to see Mythos autonomously attempt a task as complex and serious as a C compiler. Opus 4.6 totally failed (even if popular coverage didn't portray it as such):
> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
"Future of software engineering" folks should stuff like this in mind. What model is going to undo Mythos's mess? What if that mess is your company's product? Hope you know some very patient humans![1] They should have different educational tracks. There is no reason why a big fancy school like MIT can't have computer scientists do something like SICP and software engineers do the applied Python class. Forcing every computer professional into "computer science" is just silly; half the students gripe about how useless this theory is, the other half gripe about how grubby the practice is. What really sucks here is that I think Big Tech would support the idea, we're just stuck in a weird social rut.
I feel like LLMs[1] are going to cause a kind of "divorce" between those who love making software and those who love selling software. It was difficult for these two groups to communicate and coordinate before, and now it is _excruciating_. What little mutual tolerance and slack there was, is practically gone.
Open source was always[2] a fragile arrangement based on the kind of trust that involves looking at things through one's fingers (turning a blind eye may be more idiomatic in English), and we are at the point where you just have to either shut your eyes, or otherwise stop pretending that the situation can be salvaged at all.
Just a thought I had: some people think that LLM-shaming is declasse, and maybe it is, but I think that perhaps we _should_ LLM-shame, until the AI-companies train their LLMs to actually give attribution, if nothing else (I mean if it can memorize entire blocks of code, why can't it memorize where it saw that code? Would this not, potentially, _improve_ the attribution-situation, to levels better than even the pre-LLM era? Oh right, because plagiarism might actually be the product).
[1]: Not blaming the tech itself, but rather the people who choose to use it recklessly, and an industry that is based almost entirely on getting mega-corporations to buy startups that, against the odds, have acquired a decent number of happy-ish customers, that can now be relentlessly locked-in and up-sold to.
[2]: I mentioned a specific example of good old fashioned, pre-LLM, human plagiarism here: https://news.ycombinator.com/item?id=46540608
What we need is better code analyzers, lexers, and the like. And LLMs are practically the opposite, because by design they can never, ever give a concise answer. Worse, they rot over time.
https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-...
Well, you have to separate "future of" from "ensuing damage". This is similar to the fishing industry. Fishermen in the past used spears, rods, and small nets; nowadays annual national catch statistics are reported in kilotonnes. They are destroying the ocean floor, causing massive extinction of species, causing irreversible damage. Yet you can't argue, looking 100-150 years back, that industrial fishing was not "the future of the fishing industry". That is also why programmers won't ever disappear because of AI progress. Just like we still need fishermen, we'd need programmers. The sad truth about this is that soon we truly may have no need for fishermen, because there will be no fish left in the ocean.
This sounds like unsubstantiated hyperbole - can we keep HN grounded in reality, please?
My alternative hypothesis - you don't like agentic coding or maybe LLMs in general. Not helpful for the group.
From the linked post:[0]
> I left an employer that is years behind adopting AI to one actively supporting and encouraging it. As of March, in my professional capacity I no longer write code myself. My current situation was unimaginable to me only a year ago. Like it or not, this is the future of software engineering. Turns out I like it, and having tasted the future I don’t want to go back to the old ways.
It's deeply distressing to watch people fall into AI psychosis. Being smart, accomplished, or experienced is no defence.
After the bubble pops and the industry realises the damage these tools can do to people, folks like the author will have to confront that they were taken in by a lie. Many won't be able to confront that.
Anyone who has actual corporate team-lead or management experience understands AI as effectively a junior dev who doesn't have great persistent memory. These devs using AI are reviewing, guiding, and validating the work given to them by AI just as they would from a junior dev.
The inverse of your statement is more apt; it's distressing to see people so angsty about AI usage. There are going to be skilless vibecoders and then there are going to be experienced devs (like OP) who figured out their AI workflow to multiply their productivity 2-5x.
What the future holds for AI model pricing-- that is a valid concern. But I don't think that's what you intended.
Are you sure OP belongs in the second group? He explicitly said he doesn't read all the code generated by his AI:
> I have not read most of the code, and instead focused on results, so you might say this was “vibe-coded.”
> Models you can run yourself are toys.
Now I may be old, but whenever we put a lot of faith in unaccountable megacorps it sure seems to have backfired a lot (remember when Amazon removed 1984 from people's libraries?). As long as a model running locally on a regular laptop bought from the supermarket isn't good enough I'm gonna remain sceptical about current AI.
There's also ethical and environmental considerations, but let's see if we can walk before we try to run.
this is just like being promoted from developer to manager. some people like it, some don't. with AI there is another dimension: some people like managing machines instead of people, some don't.
it's not for me. i don't want to stop writing code. i don't mind managing people but i don't want to manage machines (at least not with as imprecise an interface/outcome as AI provides). consequently AI may be fine for this person, but it is not for me.
It's unclear what you're saying here... Yes, AI-induced psychosis is a real problem and the frontier labs' mitigations are ineffective, to put it mildly. But using AI as a coding tool doesn't have anything to do with psychosis.
Perhaps you're confusing "not using AI" with "not being dependent on AI", those are very different things.
The edge isn't from avoidance, it's from using AI as leverage on top of real skill. A strong developer + AI beats a strong developer alone, and massively beats a weak developer + AI. The edge doesn't come from avoiding a tool - it comes from being the kind of person who doesn't need it but uses it anyway. That's leverage. Refusing to use it is just leaving leverage on the table to make a philosophical point.
> After the bubble pops
People like Chris (who is an enormously capable engineer) would just move on to different tools, different techniques and paradigms. That is the essence of being a software developer - many of us choose this path specifically because it forces you to learn something new, every single day. That is (I suspect) also another reason why Wellons decided to migrate away from Emacs - he has just learned it so deeply that perhaps it's no longer giving him the satisfaction of learning. Which, to be honest, is hard to believe - Emacs is a boundless playground; there's always something new to learn there.
Saying they have psychosis is a rude exaggeration.
Maybe a lot of people who are doing that aren't admitting that they've stopped writing code, but when all you're ever doing is manually fixing a few lines, or moving blocks of code to more sensible places, fixing jumbled parameters in a call and such, you're not really writing code anymore. You're now a chef in a kitchen yelling at assistants and just touching things when dealing with communicating a correction to one of those dimwits is more frustrating than just doing it yourself.
You still have to be a cook to be a chef, though. But the reason I say that AI is dumb is because I tell it to do things, it does them in a dumb way, and I complain at it and tell it to write it in a sensible way. It screws that up, and I tell it to do it again, and not to screw it up. I'm still not coding. If it goes into a loop of nonsense, I touch things with the intention of doing just enough to knock it out of that loop (or rather keep the new context from falling into it.)
"After the bubble pops" we might see that a lot of new chefs can't actually afford assistants. But just as likely, the overbuilt (government-subsidized directly and through policy) capacity might end up getting written off, and at the cost of electricity and maintenance costs could stay reasonably good. Or algos improve. Or training methods improve.
it is inconceivable to me how anyone could ever enjoy working like that. but whatever floats their boat.
Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI, so if you're still not, that's a PEBKAC. You are not using the tools correctly. Figure it out before you complain.
Absolutist rubbish.
> "But even those open source projects with significant user bases that forbid the use of AI [...] will be eclipsed by those that support it in terms of features, capability, and security."
As is this. Whether a language model is relevant to a project, open source or otherwise, is of course heavily dependent on its nature (ethics, use case, deployment, working environment/culture, et cetera).
So the issue isn’t LLM productivity but unrealistic expectations that skyrocketed in the last years? Makes sense.
> Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI
I don’t see any major business impact.
???
>You are not using the tools correctly.
Stop being deluded, man.
When this crap collapses in on itself, you will be in tears, coming back asking for the knowledge you failed to get without the fancy Clippys.
Now, drop that fancy Megahal chatbot and learn to do things by hand.
Yes, model collapse is gonna suck. But LLMs are not just left to self-train, they are guided by human researchers who are going to find ways to groom and direct the models to avoid collapse. They can make billions by shipping better models, so why wouldn't they invest a lot of effort in that?
I know a lot of people become comfortable with the default editing tools in Emacs, and many of them are good, but on the whole, vanilla Emacs does not ship with a great editor.
The Vim family is made up of amazingly well-designed editors.
Evil is a Vim implementation in Emacs. It is the best of both worlds, and not just on paper. It actually works.
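If you want to try it, the setup really is small; this is a minimal sketch (Evil has plenty of knobs beyond this, see its documentation):

;; Vim-style modal editing in Emacs via Evil.
(use-package evil
  :ensure t
  :init
  ;; Optional: let companion packages (e.g. evil-collection) manage
  ;; mode-specific keybindings instead of Evil's defaults.
  (setq evil-want-keybinding nil)
  :config
  (evil-mode 1))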