Keep a social journal of your work progress as you make and learn things.

Tagged #Ruby

September 2016


Yesterday evening and this evening I spent some time researching, testing, and coding up a solution in the form of a POST endpoint that will allow @rhitakorrr and me to automagically deploy #midnightmurderparty whenever master is merged into via pull request or pushed to directly. Spoiler: the magic bit is GitHub webhooks. Surprise, surprise.

As of now, we have webhooks posting JSON blobs to our prod server whenever a PR or push occurs; for the moment, the payloads are simply logged to our app.log. Eventually the system will shell out to a #Ruby Capistrano deploy, and the POST destination will be moved off of prod, to staging mayhap.
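The core check the endpoint needs is tiny. A minimal sketch, assuming a GitHub push payload (the helper name and branch default are mine, not our actual code; GitHub does send the ref as `refs/heads/<branch>` in push payloads):

```ruby
require 'json'

# Hypothetical helper: given a GitHub push-webhook body, decide whether
# the pushed ref is the branch we deploy from.
def deploy_ref?(json_body, branch: 'master')
  payload = JSON.parse(json_body)
  payload['ref'] == "refs/heads/#{branch}"
end

deploy_ref?('{"ref":"refs/heads/master"}')   # => true
deploy_ref?('{"ref":"refs/heads/feature"}')  # => false
```

Everything else is plumbing: parse, decide, and (eventually) kick off the deploy command.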

August 2016


RSS Part 2: Glorious Refactor Heaven

  • Follow-up to yesterday’s 'key' versus :key accessor debacle: I recursively deep_symbolized all keys. I prefer using symbols anyway. #Ruby
  • I rewrote the majority of the RSS release building and formatting code. Simplification level went through the roof. So, so, so much better. Borderline beautiful versus my RSS Feed version 1.0 and 1.1.
  • Refactor Pull Request
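The deep-symbolize step looks roughly like this — a sketch assuming string-keyed hashes; our real helper may differ:

```ruby
# Recursively convert String hash keys to Symbols, descending into
# nested Hashes and Arrays. Assumes keys respond to #to_sym.
def deep_symbolize(obj)
  case obj
  when Hash  then obj.each_with_object({}) { |(k, v), h| h[k.to_sym] = deep_symbolize(v) }
  when Array then obj.map { |v| deep_symbolize(v) }
  else obj
  end
end

deep_symbolize('release' => [{ 'title' => 'Ch. 1' }])
# => {release: [{title: "Ch. 1"}]}
```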

Overall, super happy with the RSS refactor from last night and tonight. Planning to merge and deploy shortly. #midnightmurderparty


RSS Part 1: Takeaways from a Trip to Dynamic Mutable Hell and Back w/ @rhitakorrr

  • With Active Record you can use as_json to turn your database records into JSON, wahoo! 😊
  • If you .merge!({key: val}) some of your JSON blobs, the merged ones become Hashes (expectedly)… which I forgot and then had some records with values only accessible via ['key'] and some via [:key].
  • Whilst iterating through an array of objects, if you insert an object into an array you initialized outside the iteration block, you’re actually only inserting a reference to that object. If the inserted object changes later but still during the loop, the one you inserted a while back has now also changed. 😒
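Both gotchas are easy to reproduce (toy data, not our real records):

```ruby
# Gotcha 1: merge! with a String key leaves you with mixed accessors.
record = { title: 'Ch. 1' }
record.merge!('slug' => 'ch-1')
record[:slug]   # => nil
record['slug']  # => "ch-1"

# Gotcha 2: pushing an object into an array stores a reference, so
# mutating it later in the loop changes what you already pushed.
acc = []
row = { n: 0 }
[1, 2].each do |i|
  row[:n] = i
  acc << row           # acc << row.dup would avoid the aliasing
end
acc  # => [{n: 2}, {n: 2}], not [{n: 1}, {n: 2}]
```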

I love #Ruby and this may not happen often… but I wish it couldn’t happen in the first place. Shoulda used #Elm. #midnightmurderparty


Last night, @rhitakorrr and I jumped on a voice chat so we could discuss his monstrous build system changes for the frontend of #midnightmurderparty and then deploy it. The discussion went well. Our build system is now super powerful and will keep browsers from serving stale cached assets, which is great considering how JS-heavy the site is.

We decided to profile RAM usage: we’ve seen our asset build process get killed before, and suspected we might be running OOM, since we only have a 512MB box and we run hot deploys (server still running). Well, suspicions confirmed, we are indeed running OOM during hot deploys. Total usable RAM is 489MB. The numbers:

  • Idle server: 88MB
  • Deploy building: 250MB
  • Fully running: 355MB

Simple math: 489MB (total) - 355MB (running) - 250MB (deploy) = -116MB. We come up 116MB short during a hot deploy. Thus, we’ll be upgrading our droplet on DigitalOcean to the $10/month box, which has a full 1GB of RAM. Oh, also, #Ruby Sinatra routing is garbage. Give me abstract named routes!


Yesterday evening I made one small change and formulated one small hypothesis regarding deploys for #midnightmurderparty. First off, the small change.

The issue was that on deploy, @rhitakorrr’s #Elm files weren’t being recompiled. Turns out that, due to the nature of our deploy system, other files were being updated but the build was running against stale base Elm files. Threw a tasty little git pull step into deploys before asset builds. Success! 😁

Second, the deploy hypothesis. Background: our deploys run whilst the server is running and perform a hot restart on our #Ruby Puma workers. Also, our server is a $5/month Ubuntu 16.04 x64 droplet on DigitalOcean with 512MB of RAM. Sometimes deploys would reach the asset build step and the process would be killed. My current hypothesis is that we are running OOM (out of memory) by running the deploy while the server is up. Tomorrow I’m going to profile this RAM issue with some htop and redundant deploys.
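Alongside htop, a quick way to sample memory from a script is reading /proc/meminfo. A Linux-only sketch; the sampling loop and method name are mine:

```ruby
# Report used memory in MiB by parsing /proc/meminfo (Linux only).
def used_mib
  info = File.read('/proc/meminfo')
             .scan(/^(\w+):\s+(\d+) kB/)
             .to_h { |key, kb| [key, kb.to_i] }
  (info['MemTotal'] - info['MemAvailable']) / 1024
end

# Take a few one-second samples while a deploy runs.
3.times do
  puts "#{Time.now.strftime('%T')} #{used_mib} MiB used"
  sleep 1
end
```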


Yesterday morning, whilst running a routine check on both our staging server and production server, I came across a bug: I could not log into the editor portion of our production server. I reset the login credentials, but still got just a nice, red HTTP status code 418, our default error code, ’cause why not (RFC 2324).

Because we support a myriad of operating systems and Rack environments, and recently rolled out deployments which, compared to traditional SSH, don’t actually load your entire login environment (leaving a lot of config unreachable), this auth bug proved quite nefarious.

Downfall: we used to auth based on an environment variable, but since deploys start the #Ruby (Puma) web servers, those servers were started in an environment without the variable needed for auth. This led me to a lot of “how can I load my .bashrc in deploys!?” and other fruitless questions. In the end, good ol’ YAML saved the day for #midnightmurderparty. 😅
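The fix amounts to a fallback along these lines (the variable name, key, and method are invented for illustration; the real config differs):

```ruby
require 'yaml'

# Prefer the environment variable when the shell provides it (normal SSH
# sessions); otherwise fall back to a YAML file shipped with the app,
# which deploy-started Puma workers can always read.
def auth_token(env, yaml_text)
  env['MMP_AUTH_TOKEN'] || YAML.safe_load(yaml_text)['token']
end

auth_token({}, "token: s3cret\n")                           # => "s3cret"
auth_token({ 'MMP_AUTH_TOKEN' => 'from-env' }, "token: x")  # => "from-env"
```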


Finally decided to merge and deploy my #Ruby RSS backend functionality for #midnightmurderparty so we can support RSS readers, emails, and social media distribution.

  • Initially I tried rewriting all the RSS feed generation using Ruby’s RSS standard library so we could move off of a gem called Builder, which has been stagnant since 2014 (R.I.P. Jim Weirich 😢), but I couldn’t seem to get the standard lib to work properly. Jim’s gem is simpler and better, with much more control.
  • After scrapping the rewrite, I tested the original RSS code, deployed to our staging server, and ran the feed by @rhitakorrr. With his feedback, I updated two fields and added one.
  • Lastly, I deployed the fixed up RSS feed code to our staging server for future pre-prod testing.
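For the curious, the feed’s shape is roughly this — sketched with stdlib REXML rather than our actual Builder code, with made-up titles and links:

```ruby
require 'rexml/document'
require 'time'

# Build a minimal RSS 2.0 document: one channel, one item.
doc = REXML::Document.new
doc << REXML::XMLDecl.new('1.0', 'UTF-8')
rss     = doc.add_element('rss', 'version' => '2.0')
channel = rss.add_element('channel')
channel.add_element('title').text = 'Midnight Murder Party'
channel.add_element('link').text  = 'https://example.com/feed'

item = channel.add_element('item')
item.add_element('title').text   = 'Episode 1'
item.add_element('pubDate').text = Time.utc(2016, 8, 1).rfc2822

out = +''
doc.write(out)
```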

RSS had been languishing in its feature branch for a week or two because as of late I’ve been busy with deployment and error emails, likely future log subjects. All things considered, I’m very satisfied I finally got around to shipping RSS.