littlelogs

Keep a social journal of your work progress as you make and learn things.

larouxn

Last night, @rhitakorrr and I jumped on a voice chat so we could discuss his monstrous build system changes for the frontend of #midnightmurderparty and then deploy it. The discussion went well. Our build system is now super powerful and will prevent browser caching, which is great considering how JS heavy the site is.
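
For the curious, “prevent browser caching” here just means fingerprinting built asset filenames with a content hash, so a changed file gets a new URL and browsers can’t serve a stale copy. A rough Ruby sketch of the idea — not our actual build code, and the paths are made up:

  # Rename each built asset to include a digest of its contents,
  # e.g. app.js -> app-5f3c9a1b.js, forcing browsers to refetch on change.
  require 'digest'
  require 'fileutils'

  Dir.glob('public/assets/*.{js,css}').each do |path|
    digest = Digest::MD5.file(path).hexdigest[0, 8]
    ext    = File.extname(path)
    FileUtils.mv(path, path.sub(/#{Regexp.escape(ext)}\z/, "-#{digest}#{ext}"))
  end

(In practice the HTML referencing these files needs the new names too, but that’s the gist.)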

We decided to profile the RAM usage as we’ve run into our asset build process being killed before and suspected we might be running OOM, since we only have a 512M box and we’re running hot deploys (server still running). Well, suspicions confirmed: we are indeed running OOM during hot deploys. Total usable RAM is 489M. The numbers:

  • Idle server - 88M
  • Deploy building - 250M
  • Fully running - 355M

Simple math: 489M (total) - 355M (running) - 250M (deploy) = -116M. We come up 116M short during a hot deploy. Thus, we’ll be upgrading our droplet on DigitalOcean to the $10/month box, which has a full 1GB of RAM. Oh, also, #Ruby Sinatra routing is garbage. Give me abstract named routes!
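
By “abstract named routes” I mean defining each path once under a name and referencing it everywhere, instead of sprinkling literal path strings around the app. A rough sketch of the kind of helper I’d want — entirely hypothetical, not code from our app:

  require 'sinatra'

  # Hypothetical named-route table: define each path once, look it up by name.
  ROUTES = { home: '/', chapter: '/chapters/:id' }.freeze

  def path_for(name, params = {})
    params.reduce(ROUTES.fetch(name)) { |path, (key, value)| path.sub(":#{key}", value.to_s) }
  end

  get ROUTES[:chapter] do
    "Chapter #{params[:id]}"
  end

  get ROUTES[:home] do
    redirect path_for(:chapter, id: 1)  # "/chapters/1"
  end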

rhitakorrr

“Oh, also, #Ruby Sinatra routing is garbage.”

Four hours of debugging later, we realize send_file has some weird limitations… Still curious why it worked fine locally.
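
For reference, the culprit is Sinatra’s send_file helper, which reads a file off disk and streams it back through the Ruby process itself. Roughly this shape, with made-up paths:

  require 'sinatra'

  get '/manuscript' do
    # The file is served by the app process rather than handed off to the web
    # server, so a proxy like NGINX in front can change how the response behaves.
    send_file 'public/downloads/manuscript.pdf',
              type: 'application/pdf',
              disposition: :attachment
  end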

24 Aug 2016

larouxn

Logically, it’s gotta be something with NGINX… which wouldn’t surprise me as I’m near convinced it’s taking over the server. It has ulterior motives, I swear. 😨

24 Aug 2016