James's Ramblings

EfficientSQS

Sending lots of small writes to your database via AWS’s SQS is bad for both your AWS bill and your database’s performance.

You are billed for each SQS request, so you want to make as few requests as possible to reduce your bill. Even once you have achieved that, you are (at the time of writing) still billed for each 64 KiB chunk of payload, but there is still a substantial cost saving to be had.
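As a rough illustration of why batching pays off, here is a small sketch of the arithmetic, assuming the pricing model described above (one billable unit per request, plus one per additional 64 KiB chunk of payload). The message counts, sizes, and helper names are illustrative, not EfficientSQS's own:

```go
package main

import "fmt"

const chunkSize = 64 * 1024 // SQS bills payload in 64 KiB chunks (at the time of writing).

// billableUnits returns how many billable request units a single SQS request
// with the given payload size incurs: at least one, plus one per extra 64 KiB chunk.
func billableUnits(payloadBytes int) int {
	if payloadBytes <= 0 {
		return 1
	}
	return (payloadBytes + chunkSize - 1) / chunkSize
}

func main() {
	const messages = 1_000_000
	const messageSize = 1024 // 1 KiB per message

	// One request per message: a million requests, each well under 64 KiB.
	unbatched := messages * billableUnits(messageSize)

	// Ten messages per SendMessageBatch call: 100,000 requests, each carrying
	// about 10 KiB, which still fits in a single 64 KiB chunk.
	batched := (messages / 10) * billableUnits(10*messageSize)

	fmt.Println("unbatched billable units:", unbatched) // 1000000
	fmt.Println("batched billable units:", batched)     // 100000
}
```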

Even more importantly, lots of small writes are very bad for database performance, and you will find that you have to vertically scale your database and/or buy more performant disks. Meanwhile, your application may be performing horribly from an end-user perspective. Upgrading a database is also quite difficult to do without incurring dreaded downtime for maintenance windows.

Database engines have algorithms and data structures to ensure that data is efficiently written to disk when large commits are sent. That can mean a more performant (and likely cheaper) database, happy users with a zippy application, and less downtime.
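To make that concrete, here is a minimal sketch of collapsing many small writes into one larger commit with Go's database/sql package. It is not EfficientSQS code, and the connection string, table, and column names are hypothetical:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver; any database/sql driver works the same way.
)

// insertBatch writes every payload inside a single transaction, so the database
// performs one commit (and one flush to disk) instead of one per row.
func insertBatch(db *sql.DB, payloads [][]byte) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit has succeeded

	// Hypothetical table: events(payload BYTEA).
	stmt, err := tx.Prepare("INSERT INTO events (payload) VALUES ($1)")
	if err != nil {
		return err
	}
	defer stmt.Close()

	for _, p := range payloads {
		if _, err := stmt.Exec(p); err != nil {
			return err
		}
	}
	return tx.Commit()
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := insertBatch(db, [][]byte{[]byte("a"), []byte("b"), []byte("c")}); err != nil {
		log.Fatal(err)
	}
}
```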

I have written EfficientSQS, a small Go microservice intended to address both problems.

EfficientSQS operates on the server side. The service batches and bin-packs the messages sent to it so that as few requests as possible are sent to SQS, saving on your AWS bill.
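The real implementation lives in the repository, but the core idea looks roughly like the sketch below, assuming the usual SQS batch limits of 10 entries and (at the time of writing) 256 KiB of payload per SendMessageBatch call; the function and constant names are illustrative rather than EfficientSQS's own:

```go
package main

import "fmt"

const (
	maxBatchEntries = 10         // SQS allows at most 10 entries per SendMessageBatch call...
	maxBatchBytes   = 256 * 1024 // ...and (at the time of writing) 256 KiB of payload per call.
)

// packBatches greedily packs message bodies into as few batches as the entry-count
// and payload-size limits above allow. Each returned slice corresponds to one
// SendMessageBatch request.
func packBatches(messages []string) [][]string {
	var batches [][]string
	var current []string
	currentBytes := 0

	for _, m := range messages {
		if len(current) > 0 && (len(current) == maxBatchEntries || currentBytes+len(m) > maxBatchBytes) {
			batches = append(batches, current)
			current, currentBytes = nil, 0
		}
		current = append(current, m)
		currentBytes += len(m)
	}
	if len(current) > 0 {
		batches = append(batches, current)
	}
	return batches
}

func main() {
	msgs := make([]string, 25)
	for i := range msgs {
		msgs[i] = fmt.Sprintf(`{"event":%d}`, i)
	}
	// 25 small messages fit in 3 SendMessageBatch calls instead of 25 SendMessage calls.
	fmt.Println("batches needed:", len(packBatches(msgs)))
}
```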

Furthermore, as data is aggregated, individual messages become larger, meaning you can start sending larger commits to your database layer; that, in turn, can bring you the happier users and the cheaper, more performant database described above.

EfficientSQS is not suitable for every workload; please read the README.md thoroughly.