Anyone know of a database of generic automations to train on?
We might take it one step further and ask the user if they want to add a rule that certain rooms have a certain level of light.
Although light level would tie it to a specific sensor. A smart enough system might also be able to infer this from the position of the sun + weather (i.e. cloudy) + the direction of the room's windows + whether the curtains are open/closed.
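To make the inference idea concrete, here's a rough sketch of estimating daylight in a room from sun position, cloud cover, and window orientation instead of a dedicated sensor. Everything here (function name, the crude sine/cosine model, the constants) is an illustrative assumption, not an HA API:

```python
import math

def estimate_indoor_light(sun_elevation_deg, sun_azimuth_deg,
                          window_azimuth_deg, cloud_cover, curtains_open):
    """Return a rough 0..1 estimate of daylight reaching the room.

    cloud_cover is 0.0 (clear) to 1.0 (overcast); azimuths are compass degrees.
    """
    if not curtains_open or sun_elevation_deg <= 0:
        return 0.0
    # Daylight scales with how high the sun is (crude sine model).
    elevation_factor = math.sin(math.radians(sun_elevation_deg))
    # Windows facing the sun admit more direct light; keep a floor
    # for diffuse light on windows facing away from it.
    angle_diff = abs((sun_azimuth_deg - window_azimuth_deg + 180) % 360 - 180)
    facing_factor = max(0.2, math.cos(math.radians(angle_diff)))
    # Heavy cloud attenuates everything.
    cloud_factor = 1.0 - 0.8 * cloud_cover
    return round(elevation_factor * facing_factor * cloud_factor, 3)

# A south-facing window (180°) with the sun due south at 45° elevation, clear sky:
print(estimate_indoor_light(45, 180, 180, 0.0, True))  # → 0.707
```

HA's sun integration already exposes elevation and azimuth, so the missing inputs are really just window orientation per room and curtain state.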
What's not a trivial amount of work is figuring out how to integrate that into HA.
I can guarantee that there is an uncountably infinite number of people like me, and very few people like you. You don't need to do my work for me; you just need to enable me to do it easily. What's really needed are decent APIs. If I go into Settings->Automation, I get a frustrating trigger/condition/action system.
This should instead be:
1) Allow me to write (maximally declarative) Python / JavaScript, in-line, to script HA. For what I mean by "maximally declarative," see React / Redux and how they run code in response to state changes.
2) Allow my kid(s) to do the same with Blockly
3) Ideally, start to extend this to edge computing, where I can push some of the code into devices (e.g. integrating with ESPHome and standard tools like CircuitPython and MakeCode).
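A minimal sketch of the declarative, trigger-driven style point 1 is asking for (similar in spirit to the pyscript custom component linked below, though this toy dispatcher and its names are my own illustration, not pyscript's actual API):

```python
# Rules are plain functions tagged with the state change that fires them.
RULES = []

def state_trigger(entity, to_state):
    """Decorator: register a function to run when `entity` changes to `to_state`."""
    def decorator(func):
        RULES.append((entity, to_state, func))
        return func
    return decorator

def fire(entity, new_state):
    """Dispatch a state change to any matching rules; return the actions taken."""
    actions = []
    for ent, state, func in RULES:
        if ent == entity and state == new_state:
            actions.append(func())
    return actions

@state_trigger("binary_sensor.hallway_motion", "on")
def hallway_lights_on():
    # In a real system this would call a light.turn_on service.
    return "light.turn_on: light.hallway"

print(fire("binary_sensor.hallway_motion", "on"))  # → ['light.turn_on: light.hallway']
```

The point is that the rule reads as "when X, do Y" rather than being assembled from trigger/condition/action dropdowns, and the same registration model maps naturally onto Blockly blocks for point 2.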
This would have the upside of also turning HA into an educational tool for families with kids, much like Logo, Microsoft BASIC, HyperCard, HTML 2.0, and other technologies of yesteryear.
Specifically controlling my lights to give constant light was one of the first things I wanted to do with HA, but the learning curve meant there was never enough time. I'm also a big fan of edge code, since a lot of this could happen much more gradually and discreetly. That's especially true for things with motors, like blinds, where a very slow stepper could make it silent.
I feel like you've not read the HA docs [1] or taken the time to understand the architecture [2]. And, for someone who has more than enough self-proclaimed skills, this should be a very understandable system.
[0] https://github.com/custom-components/pyscript [1] https://www.home-assistant.io/docs/ [2] https://developers.home-assistant.io/
E.g. I just tested with ChatGPT: I gave it a selection of instructions about playing music, the time and location, and a series of hypothetical responses, then asked it to deduce what went right and wrong about the response. It correctly inferred the user intent I had implied: a user who, given the time (10pm), the place (the bedroom), and the rejection of loud music, possibly just preferred calmer music, but who at least wanted something calmer for bedtime.
I also asked it to propose a set of constrained rules, and it proposed rules that'd certainly make me a lot happier, e.g. starting with calmer music when given an unconstrained "play music" in the evening, and transitioning artists or genres more aggressively the more the user skips, to try to find something the user will stick with.
In other words, you absolutely can get an LLM to look at even very constrained history and get it to apply logic to try to deduce a better set of rules, and you can get it to produce rules in a constrained grammar to inject into the decision making process without having to run everything past the LLM.
While given enough data you can train a model to try to produce the same result, one possible advantage of the above is that it's far easier to introspect. E.g. my ChatGPT session suggested an "IF <user requests to play music> AND <it is late evening> THEN <start with a calming genre>" rule. If it got it wrong (maybe I just disliked the specific artists I used in my example, or loved what I asked for instead), then correcting its mistake is far easier if it produces a set of readable rules, and if it's told to e.g. produce something that stays consistent with user-provided rules.
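The "constrained grammar" part is the key to keeping the LLM out of the per-request loop: the LLM emits human-readable IF/AND/THEN rules once, and a plain deterministic matcher applies them at decision time. A toy sketch (the grammar and the set-of-facts condition model are my assumptions for illustration):

```python
import re

def parse_rule(text):
    """Parse 'IF <a> AND <b> THEN <c>' into (conditions, action)."""
    head, action = text.split(" THEN ")
    conditions = re.findall(r"<(.+?)>", head)
    return conditions, action.strip("<> ")

def apply_rules(rules, facts):
    """Return actions for every rule whose conditions are all present in `facts`."""
    actions = []
    for text in rules:
        conditions, action = parse_rule(text)
        if all(c in facts for c in conditions):
            actions.append(action)
    return actions

rules = ["IF <user requests to play music> AND <it is late evening> "
         "THEN <start with a calming genre>"]
facts = {"user requests to play music", "it is late evening"}
print(apply_rules(rules, facts))  # → ['start with a calming genre']
```

Because the rules are plain strings, the user can read, veto, or edit them directly, which is exactly the introspection advantage over an opaque trained model.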
(The scenario I gave it, btw., is based on my very real annoyance with current music recommendation, which all too often fails to take into account things like avoiding abrupt transitions, paying attention to the time of day and volume settings, and changing tack or e.g. asking questions if the user skips multiple tracks in quick succession.)