It can be an alternative to running your own systems like Sidekiq, Celery, or Quartz, along with the operational overhead that comes with them. Cron jobs and cloud-provider tools like CloudWatch Events are also used for job scheduling, but they lack observability and may force you to make frequent, expensive queries against your data store just to see if there is any work to do.
Questions and feedback are greatly appreciated :)
Unlike AWS SQS, Azure SB queues can have items scheduled to be enqueued at an arbitrary time; see https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure..... SQS can only delay up to 15 minutes, so you have to implement your own hack to schedule anything later than that.
And also unlike SQS, they can trigger a function, so you don't have to deal with the listening logic yourself - or even the transactional message handling: if the function fails, the message is automatically re-queued.
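The SQS workaround usually means chaining delays: since `DelaySeconds` maxes out at 900 seconds, you re-enqueue the message with the remaining delay until it fits in one hop. A minimal sketch of just the delay math (the `delay_hops` helper name is mine, not an SQS API):

```python
# Sketch of the "chained delay" workaround for SQS's 15-minute cap.
# SQS's DelaySeconds parameter maxes out at 900 seconds, so to schedule
# a message further out you re-enqueue it, carrying the remaining delay
# along as a message attribute, 900 seconds at a time.

SQS_MAX_DELAY = 900  # hard SQS limit, in seconds

def delay_hops(total_delay: int) -> list[int]:
    """Split a total delay into a series of DelaySeconds values,
    each within the SQS limit. Each hop would be one re-enqueue."""
    hops = []
    remaining = total_delay
    while remaining > 0:
        hop = min(remaining, SQS_MAX_DELAY)
        hops.append(hop)
        remaining -= hop
    return hops

# A consumer receiving the message checks the remaining delay it carries:
# if it's > 0, it re-enqueues with the next hop instead of processing.
```

So a one-hour delay becomes four re-enqueues, which is exactly the kind of bookkeeping the Azure scheduled-enqueue feature makes unnecessary.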
But we have added a couple of extra features in send.rest:
1) Calling external APIs (think: pull data from Facebook while running this task in the future, then post data to my API or send an SMS)
2) Reports and retries (try 3 times, or send me an SMS that it's failing)
3) Recurrence (call this task every Monday at 9:00 pm)
4) SMS and email come along with them (send SMS or email; pick any service as your backend)
I suspect that without some extra features it will be difficult to gain market acceptance.
I've found that when scheduling tasks for the future it's important to do a final check before fulfilling the action, whether that's sending an email, a push notification, etc. You don't want to send a reminder for an event that has been deleted, for example. That is why I have decided to keep the scope small and let developers make the final decision there.
Reports and retries are definitely things that I think add value though, and I have plans to expand on that.
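That final-check pattern is easy to sketch: when the hook fires, the handler re-loads the underlying record from the source of truth and bails out if it's gone or no longer relevant. A minimal illustration (the `events` store and handler name are hypothetical, not part of any service mentioned here):

```python
# Hypothetical handler for a scheduled-hook callback: re-check the
# source of truth before performing the side effect, because the
# event may have been deleted or cancelled between scheduling and firing.

events = {}  # stand-in for a real datastore: event_id -> event dict

def handle_reminder_hook(event_id: str) -> str:
    event = events.get(event_id)
    if event is None:
        return "skipped: event deleted"    # never remind about a deleted event
    if event.get("cancelled"):
        return "skipped: event cancelled"
    # ... send the email / push notification here ...
    return "sent"
```

Keeping this check on the application side is what lets the scheduling service stay dumb and small: it only promises to call you back at the right time, not to know whether the work is still needed.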
Also, is there retry logic? E.g. 3 retries, with a 60-second delay between retries?
Right now the retry logic is just one retry 5 seconds after the first failure, at which point the hook gets set to a failed status and failure notifications go out. Retries are tricky because, depending on how the job is implemented, they can cause more harm than good, so I plan to refine that based on customer feedback.
Ah, yes, there's nothing like bringing capacity back online only to have it crushed by all your customers retrying at the same time.
AWS got bit hard by this[3], but there's a blog post[1] about it, which is linked from the docs for their client software[2].
[1] https://aws.amazon.com/blogs/architecture/exponential-backof...
[2] https://docs.aws.amazon.com/general/latest/gr/api-retries.ht...
[3] https://aws.amazon.com/message/5467D2/ ... basically DynamoDB is a fundamental service for AWS and had implemented some new streams features. It all appeared to be working, but the service was running closer to capacity than intended, and when a cluster went down it caused a cascading failure.
And see this which linked to that RCA: https://blog.scalyr.com/2015/09/irreversible-failures-lesson...
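The pattern that AWS post recommends is exponential backoff with jitter: the retry ceiling doubles each attempt, and the actual sleep is drawn uniformly from that range so clients don't retry in lockstep and crush recovering capacity. A rough sketch of the "full jitter" variant (the base and cap constants are illustrative):

```python
import random

BASE = 0.5   # seconds; illustrative starting delay
CAP = 60.0   # never wait longer than this

def backoff_with_full_jitter(attempt: int) -> float:
    """'Full jitter' from the AWS architecture blog: pick a random
    sleep in [0, min(cap, base * 2**attempt)] so that a thundering
    herd of retrying clients spreads out instead of synchronizing."""
    ceiling = min(CAP, BASE * (2 ** attempt))
    return random.uniform(0.0, ceiling)

# A retry loop would time.sleep(backoff_with_full_jitter(attempt))
# between attempts instead of using a fixed delay.
```

The randomization is the important part here: plain exponential backoff still synchronizes clients that all failed at the same moment.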
That said, I've wanted something akin to a distributed cron without the complexities of workflow engines like Airflow or Azkaban, so I started writing a little HTTP API for scheduling jobs [1]. It's mostly complete except for configurable retry logic and failure notifications. To schedule a periodic job that runs every 30 seconds:
curl http://$HOSTNAME/jobs \
-d '{ "title" : "Healthcheck"
, "code" : "curl example.com"
, "frequency" : "30 seconds" }'
[1] https://github.com/finix-payments/jobs

curl https://api.pocketwatch.xyz/v1/timer/new \
-H "Content-type: application/json" \
-d '{ "apiKey" : "i_am_hn_and_proud"
, "url" : "https://host.tld/?my_param=my_val"
, "method" : "POST"
, "interval_unit_type" : "second"
, "interval_unit_count": 1
, "duration_unit_type" : "minute"
, "duration_unit_count": 6 }'
You can use curl, but the Node.js client library[1] is probably easier. I've made a free demo API key for HN readers that should be enough to let everyone try it out a bit[2].
[1] https://www.npmjs.com/package/%40dosy%2Fpocketwatch
[2] https://news.ycombinator.com/item?id=17353486
Sure, but what about avoiding liabilities?
Almost all popular job-processing libraries in all major frameworks support executing one-off tasks with robust retry policies, and many even support recurring tasks.
The operational complexity is really just running the Sidekiq or Celery command in another Docker container, or, if you don't use Docker, setting up one extra systemd unit file.
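Concretely, the systemd route is one small unit file; a sketch along these lines (paths, user, and app name are placeholders for your own setup):

```ini
# /etc/systemd/system/celery-worker.service (illustrative)
[Unit]
Description=Celery worker for myapp
After=network.target

[Service]
User=myapp
WorkingDirectory=/srv/myapp
ExecStart=/srv/myapp/venv/bin/celery -A myapp worker --loglevel=info
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now celery-worker` and the worker is supervised alongside the web process, which is the entirety of the extra ops burden being described.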
I once had a Celery / Flask app running on a $20/month DigitalOcean server. It handled over 3 million background jobs per month, and it also hosted my DB server, cache server, and web front end.
On your pricing page, your highest tier supports up to 100,000 requests per month at $129/month. How much would it cost to do 3 million requests per month?
Also how would you set up custom retry policies?
> How much would it cost to do 3 million requests per month?
If you want to use Posthook to schedule 3 million requests a month, please email support [at] posthook.io and we can work out a special plan and support contract. Do keep in mind that background jobs and scheduled jobs are two different things, and what I aim for Posthook to solve is scheduled jobs. I suspect a big part of those 3 million were background jobs.
> Also how would you set up custom retry policies?
The retry policy is fixed at the moment, but it seems like this is a common request, so I want to roll that feature out soon.
Yep, almost all of them were tasks that should be executed as soon as possible, but Celery also lets you schedule tasks at a future date and run tasks on a recurring schedule (every Sunday at 3am, etc.).
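In Celery terms that's `apply_async(eta=...)` for the one-off future case and a beat `crontab` entry for the recurring case. The underlying scheduling math is just "next occurrence of Sunday 03:00"; a stdlib sketch of that computation, independent of Celery:

```python
# Computing the next "every Sunday at 3am" fire time, the kind of
# calculation a beat-style scheduler does internally.
from datetime import datetime, timedelta

def next_sunday_3am(now: datetime) -> datetime:
    """Next occurrence of Sunday 03:00 strictly after `now`."""
    # datetime.weekday(): Monday is 0 ... Sunday is 6
    candidate = now.replace(hour=3, minute=0, second=0, microsecond=0)
    candidate += timedelta(days=(6 - now.weekday()) % 7)
    if candidate <= now:          # already past this week's slot
        candidate += timedelta(days=7)
    return candidate
```

A scheduler then just sleeps until (or polls past) that timestamp and re-computes the next one after firing.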
As for the price, I was just curious, for the sake of comparing it to the $20/month DO setup.
Thanks for the answers.
Hey! Coincidentally, at the same time as the OP, I built a recurring task hook as a service, and I can do 1 request every second (2.6 million per month) for USD $26.
You can see that pricing if you plug your requirement (or if you're just curious) into the console[1]. That console is for buying once-off timers.
But I've also made a free API key for people on HN to try out the API[2]. (Those free demo jobs will be capped at 2 weeks no matter what duration is requested, though, and I might end up capping individual timers at a certain amount as well, but not yet.)
There's also a console for trying out this free API key (which doesn't include pricing/cost info)[3], and I just put this free demo on Show HN[4].
[1] https://pocketwatch.xyz
[2] https://dosyago-corp.github.io/pocketwatch-api/
[3] https://api.pocketwatch.xyz/fancy.html
[4] https://news.ycombinator.com/item?id=17353486
I've used Tidal too; it's utterly horrible and universally hated by everyone I know who has used it. What took months to set up and debug in Tidal took us hours in Rundeck.
If you already have something like Jenkins you could probably bend that to your will, too.
I guess the big point is, I'm looking to use this for regularly scheduled jobs, not just as a queuing service to spread load.
My email is in my profile if you want to get in touch.
The Azure service also supports logging (60 days), retry options, and advanced recurrence schedules (every other Tue, etc.).
That Azure pricing is very competitive / cheap! That's 22 million requests per month for USD $14.
One timer on my service can make at most 1 request per second (2.6 million per month), and that one timer would cost $26.
On my biggest plan you can get 4000 timers and 100 million requests per month for $495.
Still, Azure is beating my prices by about 10x. But they can't do one-second resolution (yet?), so they're still only a "distributed cron".
Lately, we've had great experience using Hangfire (https://www.hangfire.io) for async job processing in .NET. It's an awesome piece of software.
I haven't seen this mentioned yet, but you could leverage Zapier.com for this type of work. They have a "Schedule" app you can pipe into an outgoing webhook to achieve the same result.
It's pretty powerful. For example, at work we wanted to run a specific task at different times: nightly, after deploys, and via a Slack command. This was all achieved using 3 Zaps (Zapier apps), and it took no more than 30 minutes for the whole thing.
Keep it up!
Step 1 - Schedule
Step 2 - Outgoing webhook
It takes all of 5 minutes to set things up, but we have a pro account, so I haven't tried the free tier yet.