background job processing for delivering ActivityPub pushes #19
Reference: schmittlauch/Hash2Pub#19
The relay node role requires nodes to push new posts to subscribing servers. Such actions can be queued and processed asynchronously in the background.

Mastodon uses Sidekiq as a job executor and scheduler. Do similar tools exist in the Haskell world, are they necessary, and should this project use them? Or are `forkIO`'d threads, channels, and maybe a persistence layer in between enough?

The DHT layer also needs to schedule regular maintenance jobs, but as those won't pile up, such an asynchronous background-processing worker queue isn't necessary there.
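For the "`forkIO`'d threads and channels" option, a minimal sketch using only `base` primitives might look like the following. All names (`DeliveryJob`, `targetInbox`, the pool size) are hypothetical illustrations, and the actual HTTP POST to the subscriber's inbox is stubbed out with a print:

```haskell
-- Minimal background worker-pool sketch using Chan + forkIO.
-- Assumption: jobs need no persistence; a crash loses queued jobs,
-- which is exactly the trade-off a library like odd-jobs addresses.
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever, replicateM_)

-- Hypothetical job type: where to deliver and what to deliver.
data DeliveryJob = DeliveryJob
  { targetInbox :: String  -- subscriber's inbox URL
  , payload     :: String  -- serialised ActivityPub activity
  }

-- Worker loop: block on the shared channel and process jobs as they arrive.
worker :: Chan DeliveryJob -> IO ()
worker queue = forever $ do
  job <- readChan queue
  -- a real implementation would HTTP POST `payload job` here
  putStrLn ("delivering to " ++ targetInbox job)

main :: IO ()
main = do
  queue <- newChan
  replicateM_ 4 (forkIO (worker queue))  -- small worker pool
  writeChan queue (DeliveryJob "https://example.social/inbox" "{}")
  threadDelay 100000  -- crude: let workers drain before the process exits
```

This covers fan-out and backpressure-free queueing, but not retries, scheduling, or surviving restarts; those are where a dedicated job library earns its keep.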
@schmittlauch If you wish to use one of the best and most full-featured libraries on Hackage, I can only recommend odd-jobs.
This one pulls in Postgres, though. I still have to decide how to persist data; I will probably ask around in the SocialHub thread quite soon.
Note to self for further reading: https://www.haskelltutorials.com/odd-jobs/haskell-job-queues-ultimate-guide.html
I personally think Postgres is far from being "overkill" if you are going to build some kind of highly connected network service, especially if you wish to do some retention. Of course it's not as lightweight as `sqlite3`, but it's a fairly standard tool on the "engineering" side of things. :)

In the mock post service, this is done by separate worker threads, in memory.
Thus removing it from the milestone.