Scribd has absolutely fascinating data-at-scale problems, all the way down to the fundamentals of how we use AWS S3. In my previous post I wrote about the design of Content Crush and how Scribd is consolidating objects in S3 to minimize our costs. Related to that work, I was fortunate enough to join the (in)famous Corey Quinn to talk about engineering around extreme S3 scale:

Checking if files are damaged? $100K. Using newer S3 tools? Way too expensive. Normal solutions don't work anymore. Tyler shares how, with this much data, you can't just throw money at the problem; you have to engineer your way out.

You can also listen on Everand or watch via the Last Week in AWS YouTube channel.