comment-2008-12-12-2
by earl, 5849 days ago
Imagine a web-crawler system where a master submits crawl jobs into a queue and bots take those jobs and do the crawl work. Now, if one of those crawl-job batches gets lost, who cares? Do you think Google does?
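For illustration only, here is a minimal beanstalkc sketch of that master/bot split. The tube name, the URL, and the crawl() helper are made-up placeholders, and the connection details assume a local beanstalkd on the default port 11300:

import beanstalkc

# Master side: submit crawl jobs into a tube (queue).
master = beanstalkc.Connection(host='localhost', port=11300)
master.use('crawl-jobs')                    # hypothetical tube name
master.put('http://example.org/some-page')  # a job body is just a string

# Bot side: reserve a job, do the crawl work, then delete it.
bot = beanstalkc.Connection(host='localhost', port=11300)
bot.watch('crawl-jobs')
job = bot.reserve()    # blocks until a job is available
crawl(job.body)        # hypothetical crawl function
job.delete()           # if the bot dies before deleting, the job is
                       # handed out again once its time-to-run expires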

Does this answer your immediate question?

Two remarks. Firstly, I assert that in many realistic scenarios job durability is a more immediate concern than queue durability: workers die often, a stable message broker should not. Secondly, recent (dev) versions of beanstalkc have support for persistent queues. This is also convenient in non-failure scenarios, like migrating queues between machines. If worst-case failure recovery is your only concern ... well, once you go down that road, you have to decide where to stop; it may lead you all the way to battery-backed solid-state drives.
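As a sketch of what that persistence buys you (not necessarily the exact setup meant above): durable queueing ultimately relies on the beanstalkd daemon journaling jobs to disk via its binlog, enabled with the -b option. A client-side view, with placeholder names:

# Assumption: a beanstalkd daemon started with a binlog directory, e.g.
#   beanstalkd -p 11300 -b /var/lib/beanstalkd
# so that queued jobs are written to disk and survive a broker restart.
import beanstalkc

conn = beanstalkc.Connection(host='localhost', port=11300)
conn.use('crawl-jobs')                     # same hypothetical tube as above
job_id = conn.put('http://example.org/')
print('queued job %d' % job_id)            # this job now outlives a daemon restart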