littlelogs

Keep a social journal of your work progress as you make and learn things.

Tagged #midnightmurderparty

May 2017

larouxn

おはよう、心配しない、これは短いログだけ。

(Good morning, don’t worry, this is only a short log.)

Just wanted to stop by to mention that @rhitakorrr and I merged the new PureScript editor into master and deployed to both #MidnightMurderParty staging and production last night. Ran into a strange bug caused by a Postgres vs. SQLite difference and had to drop databases and whatnot due to some model refactoring, but overall it was a pretty straightforward deploy considering the change was +2000 lines. Anyway, this is our new language breakdown: 52.2% Elm, 20.3% PureScript, 5.3% Ruby.

じゃね (See ya later) ✌️

larouxn

こんばんは (Good evening), just a quick update here from the #MidnightMurderParty backend team, aka me. We (@rhitakorrr and I) have successfully set up frontend-to-backend Google auth validation, including a built-in check that it's either @rhitakorrr or me logging in, plus session storage to hold said auth status, which we then use for the authenticated endpoints we hit after login. No more server-side YAML key and (unencrypted) browser cookie garbage for us! 🙃

The backend is done, for now, and thus I will move on to writing MMP animation soundtrack v1.1 next, taking into account @rhitakorrr’s awesome feedback. 🎹

larouxn

👋 こんにちは (Hello) littlelogs 🌊. As @rhitakorrr stated in his last log, I gave a talk on #MidnightMurderParty this past week. Last Thursday I broke down, to the best of my ability and memory, the history and learnings of MMP for a sizeable group of Shopify employees, both in-house and over the livestream.

Also, we’re planning on finishing up MMP, everything except story, within the next month or so, woo!

April 2017

larouxn

Small-ish #MidnightMurderParty update here. We have left our previous VPS host, DigitalOcean, in favour of Linode. DigitalOcean may be shiny and nice looking and all but when it comes down to it, we just need a 1GB RAM, 1 CPU, SSD box with reasonable data caps, and some tasty Ubuntu 16.04 LTS. At Linode, we’re able to do this for $5 a month. At DigitalOcean, it was costing us $10 a month. Super small servers and small prices but hey, 50% cheaper is 50% cheaper. 😊 The whole setup on Linode took me about 2 hours last night. The box provisioning steps could’ve been a tad clearer but after that it was just SSH for days, so, natural as can be.

Other than that, I’ve begun attempting to score some ambient horror tunes for the upcoming MMP promo animation. Should be interesting.

March 2017

larouxn

👋 Hey littlelogs, it’s been a while. Almost two months actually! Well, all that #MidnightMurderParty talk of automated deploys and such that @rhitakorrr and I have taken part in over the past many months… year now even… well, deploys are gone. Automated deploys that is. It took me about a year to realize that this project is small enough that I could simply write a 5 line bash script to accomplish everything a humongous, frustrating Capistrano process was doing.

Automated deploys are dead, and the repo and servers have subsequently been cleaned up quite a bit. Furthermore, now we’re able to update! Ruby 2.4.1 was released yesterday… we’re running it in both staging and production. Also, we upgraded to Puma 3.7.1, which provides some nice DDoS-protection-esque functionality. We’re lean, not mean, up to date, and moving forward once again. 😅

Other stuff, I wrote a short six song EP in about a day and a half earlier this month. If you’re interested, you can check it out here: http://bit.ly/ottawa-ost

January 2017

larouxn

In the never-ending story that is the development of #MidnightMurderParty (not really, it’s getting close), @rhitakorrr pinged me yesterday afternoon with a double request. Firstly, he wanted an endpoint from which he could fetch the release date of the next unreleased segment of the story. Thirty minutes later, give or take, /api/next was born. Simple enough.
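The endpoint logic amounts to finding the earliest release date after today. A minimal sketch in plain Ruby (the `/api/next` route itself and the real data model are omitted; the `segments` shape here is a hypothetical stand-in):

```ruby
require 'date'

# Return the next unreleased segment: the one with the earliest
# release date strictly after today.
def next_release(segments, today: Date.today)
  segments
    .select { |s| s[:release_date] > today }
    .min_by { |s| s[:release_date] }
end

segments = [
  { title: 'Segment 1', release_date: Date.new(2017, 1, 1) },
  { title: 'Segment 2', release_date: Date.new(2017, 3, 15) }
]

next_release(segments, today: Date.new(2017, 2, 1))
# => the Segment 2 hash
```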

Unfortunately, the second request was to figure out why the hell unreleased segments (release date falls after today) were appearing in the reader. That was a fun little bug to figure out. Turns out I was only checking the release dates for chapters, not every entry within a chapter. Whoops. 😛 The release bug was somewhat simple to fix, but the resulting refactoring took quite a bit more time and brainpower.
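The shape of the fix: filter on every entry’s release date, not just the chapter’s. A toy before/after sketch (hypothetical data shape, not the actual MMP models):

```ruby
require 'date'

# Buggy: a released chapter leaked all of its entries,
# even ones dated in the future.
def visible_buggy(chapters, today: Date.today)
  chapters.select { |c| c[:release_date] <= today }
end

# Fixed: also check the release date of each entry within a chapter.
def visible(chapters, today: Date.today)
  chapters
    .select { |c| c[:release_date] <= today }
    .map { |c| c.merge(entries: c[:entries].select { |e| e[:release_date] <= today }) }
end

chapters = [
  { title: 'Chapter 1', release_date: Date.new(2017, 1, 1),
    entries: [
      { title: 'Entry 1', release_date: Date.new(2017, 1, 1) },
      { title: 'Entry 2', release_date: Date.new(2017, 2, 1) }
    ] }
]

visible(chapters, today: Date.new(2017, 1, 15)).first[:entries].size        # => 1
visible_buggy(chapters, today: Date.new(2017, 1, 15)).first[:entries].size  # => 2
```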

Also, I locked down the Gemfile after witnessing Puma bump to 3.7.0 and break deploys… no thank you. 😅 🔒 Locked down. 🔒
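Locking down amounts to exact version pins in the Gemfile (the versions here are illustrative, not necessarily the exact ones MMP pinned), plus committing the resulting Gemfile.lock:

```ruby
# Gemfile — exact pins so a deploy can't silently pull in a breaking bump
gem 'sinatra', '1.4.8'
gem 'puma',    '3.6.2'
```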

larouxn

Just a quick little update regarding some fun #midnightmurderparty stuff I’ve been up to this past week. Chronologically, I suppose I worked on some cross-browser testing first and my god… there are a lot of browsers these days. https://goo.gl/xXJ35W

I also learned a tad about image optimization and compression. The main difference between JPEG and PNG compression is lossiness. JPEG, a lossy format, throws away fine detail to reduce file size, which is why heavily compressed JPEGs turn into chunky pixel mosaics. PNG, since it’s lossless, retains exact visual fidelity; to shrink PNGs further, optimizers instead reduce the number of colours in the palette. Pretty neat, different approaches.

Lastly, one rather unique thing happened during our (@rhitakorrr and I) weekly meeting this week. All of a sudden the redirect to our staging server started returning a 403 error. I was home, where the staging server is, so I checked it via the internal IP. Server was up and fine. Apparently, my public IP changed mid-meeting. Thank you Bell Canada. 😂

larouxn

Completing my triumvirate of new year, new #midnightmurderparty dev and subsequent logs, I’m back to claim that though the reader is still completely borked in Chromium-based browsers for every page after page 2… we have finally reached “no image curtain fall on load” nirvana… with browser cache. First load is still janky, but better! By this I mean you can’t see the images loading vertically down the page when you access the reader, provided you have some cache. This was primarily achieved by aggressively optimizing our image payload. CloudFlare is probably helping a bit too. 😄

Originally our image payload for the reader was 3.1 MB. After my second round of optimization the image payload is down to just over 800 KB. A ~75% size reduction! I couldn’t be happier with how well our image optimization went. @rhitakorrr and I also discussed Chromium fixes, merged in his new Google Analytics stack, and performed a bunch of routine maintenance and upgrades. The #RoadToBeta is real.

larouxn

Last evening @rhitakorrr and I had our weekly Tuesday #midnightmurderparty meeting to plan out the upcoming week as well as our roadmap to beta and launch.

Work-wise, I have decided to set aside auto deploys (probably forever) and upgrading our Ruby version (for now) to focus on other, more feasible tasks. One such task is optimizing our load times, specifically our images. I noticed we were serving up some pretty unnecessarily large and uncompressed image files, multiple megabytes in fact. Subsequently, I ran all the images through this awesome, free compressor: http://optimizilla.com, and was able to cut the image payload in half.

To test this optimization I uploaded the compressed images to just our staging server. The result: our staging server, a Raspberry Pi 2 run out of my apartment (http://imgur.com/a/khE8L), was just as fast or even slightly faster than our DigitalOcean VPS. Incredible. 😲

Lastly, I added CloudFlare to my personal site. Full HTTPS for no reason, woo! 😄

larouxn

Over the holidays, as I wrote in my last log, I tried to upgrade #midnightmurderparty to Ruby 2.3.2 and then 2.3.3 unsuccessfully. Well, 2.4.0 was released on Christmas and I failed to upgrade to that too.

Unfortunately, it seems the combination of one of our unmaintained gems, rvm/rvm1-capistrano3 (which handles installing our gems and selecting our Ruby version during deploy), and our one GitHub-hosted (not RubyGems) gem, seuros/capistrano-puma, does not want to work with any Ruby above version 2.3.1. It seems either I switch off RVM (originally I used ruby-install and chruby but couldn’t get deploys working) or I fork the GitHub gem and push it to RubyGems. Neither is guaranteed to work and both would kinda suck.

Aside from that, I’ve been fine tuning our error emails and finding out firsthand that a lot of random bots on the interwebs crawl your site and request nonexistent, usually PHP, files. Trying to keep from waking up to 10 ERROR 404! emails. 😅

December 2016

larouxn

Been diving head first into technical #midnightmurderparty stuff for the past two days. Yesterday I attempted to upgrade our stack to Ruby 2.3.3, from 2.3.1, but ran into some arcane gem errors. Might have to fork a gem or two if this keeps up. For now, we’re staying on 2.3.1.

Aside from the Ruby version update shenanigans, I implemented some nice error handling on the backend. Now if the server throws an HTTP error code, I get emailed. Nice for when we go live, as I’ll know if anything breaks, when it broke, and what the error was without having to do anything more than check my email. Also, I spiced up (improved) our logging so we know everything that goes on, from regular functionality to errors and beyond. Getting close to beta time! 🙂

larouxn

Since my last log, about a month and a half ago, I’ve

  • completed my two week #song_a_day challenge | listen here
  • written a few more tunes since the challenge | listen here
  • made it through the Black Friday/Cyber Monday hustle here at Shopify
  • read Pyongyang: A Journey in North Korea by Guy Delisle, pretty interesting
  • traveled to and from New Jersey for 🇺🇸 Thanksgiving, heading back Monday for 🎄
  • added caching to our NGINX configs to improve load times… should we CDN? Hmmm 🤔 #midnightmurderparty

Tonight I’m going to

  • meet with @rhitakorrr to discuss the Elm 0.18 upgrade, fine tune caching, and plan the coming week of dev
  • clean my apartment, gotta be spotless before I check out for three weeks
  • catch up with On A Sunbeam, one of the best web comics I’ve ever read

September 2016

larouxn

Small update. Over the last five, ten minutes or so, I set up the staging server to work double duty as the deploy server while we’re short a third physical server. As a security precaution I have disabled password-based login and only allow public-key authentication, initially only on prod. Thus, as part of setting up staging to deploy to prod, I ssh-copy-id’ed staging’s key over to prod and then locked down staging to public-key auth only as well. Next step: test staging -> prod deploys. Small steps. @rhitakorrr #midnightmurderparty

larouxn

Yesterday evening and this evening I spent some time researching, testing, and coding up a solution in the form of a POST endpoint that will allow @rhitakorrr and I to automagically deploy #midnightmurderparty whenever master is merged into via pull request or pushed to directly. Spoiler: the magic bit is GitHub webhooks. Surprise, surprise.

As of now, we have webhooks posting JSON blobs to our prod server whenever a PR or push occurs. Currently, the JSON payloads are simply logged to our app.log. Eventually the system will delegate out a bash command to #Ruby Capistrano deploy and the POST destination will be moved off of prod, to staging mayhap.
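The eventual flow is just: parse the JSON payload, check that the push landed on master, then shell out to the deploy. A hedged sketch in plain Ruby (the route handling, webhook-secret verification, and the actual deploy command are omitted or hypothetical):

```ruby
require 'json'

# GitHub push payloads carry the updated ref; we only want to deploy
# when master itself was pushed to (or a PR was merged into it).
def deploy_worthy?(payload_json)
  JSON.parse(payload_json)['ref'] == 'refs/heads/master'
end

deploy_worthy?({ ref: 'refs/heads/master' }.to_json)       # => true
deploy_worthy?({ ref: 'refs/heads/feature/rss' }.to_json)  # => false

# Eventually, something like:
#   system('cap production deploy') if deploy_worthy?(payload)
```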

August 2016

larouxn

RSS Part 2: Glorious Refactor Heaven

  • Follow up to yesterday’s 'key' versus :key accessor debacle: I recursively deep_symbolized all keys. I prefer using symbols anyway. #Ruby
  • I rewrote the majority of the RSS release building and formatting code. Simplification level went through the roof. So, so, so much better. Borderline beautiful versus my RSS Feed version 1.0 and 1.1.
  • Refactor Pull Request
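The recursive deep-symbolize step fits in a few lines of plain Ruby (a sketch of the idea; ActiveSupport’s `deep_symbolize_keys` offers the same for Hashes):

```ruby
# Recursively convert every Hash key to a Symbol, descending into
# nested Hashes and Arrays (as produced by JSON parsing / as_json).
def deep_symbolize(value)
  case value
  when Hash
    value.each_with_object({}) { |(k, v), out| out[k.to_sym] = deep_symbolize(v) }
  when Array
    value.map { |v| deep_symbolize(v) }
  else
    value
  end
end

deep_symbolize('a' => { 'b' => [{ 'c' => 1 }] })
# => { a: { b: [{ c: 1 }] } }
```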

Overall, super happy with the RSS refactor from last night and tonight. Planning to merge and deploy shortly. #midnightmurderparty

larouxn

RSS Part 1: Takeaways from a Trip to Dynamic Mutable Hell and Back w/ @rhitakorrr

  • With Active Record you can use as_json to turn your database records into JSON, wahoo! 😊
  • If you .merge!({key: val}) some of your JSON blobs, the merged ones become Hashes (expectedly)… which I forgot and then had some records with values only accessible via ['key'] and some via [:key].
  • Whilst iterating through an array of objects, if you insert an object into an array you initialized outside the iteration block, you’re actually only inserting a reference to that object. If the inserted object changes later but still during the loop, the one you inserted a while back has now also changed. 😒
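Both gotchas are easy to reproduce in a few lines (a toy demonstration, not the actual MMP code):

```ruby
require 'json'

# Gotcha 1: JSON-derived hashes have string keys; merge! with symbol
# keys leaves you with a mixed-key hash.
record = JSON.parse('{"title": "Chapter 1"}')
record.merge!(read: true)
record['title']  # => "Chapter 1" (string key)
record[:read]    # => true       (symbol key)
record[:title]   # => nil — the mixed keys that bit us

# Gotcha 2: pushing an object into an array stores a reference, not a
# copy, so mutating it later also changes the "inserted" element.
collected = []
item = { count: 1 }
collected << item
item[:count] = 99
collected.first[:count]  # => 99, not 1
```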

I love #Ruby and this may not happen often… but I wish it couldn’t happen in the first place. Shoulda used #Elm. #midnightmurderparty

larouxn

Tonight was not as crazy as last night, as described in my last post, but it was still quite a crazy #midnightmurderparty dev night. Events worth noting:

  • Upgraded my internal network by turning my Bell router into a bridge and hooking up a new router I picked up yesterday. Down with 2.4GHz WiFi congestion, 5GHz AC or bust!
  • Fixed the Raspberry Pi staging server, which was completely out of commission after the aforementioned network changes.
  • Stripped out all environment variable usage in favour of a secrets YAML file.
  • Upgraded our DigitalOcean droplet to 1GB of RAM. Hot deploys work!!!

Basically: physical infrastructure, virtualized infrastructure, and code infrastructure.

larouxn

Last night, @rhitakorrr and I jumped on a voice chat so we could discuss his monstrous build system changes for the frontend of #midnightmurderparty and then deploy it. The discussion went well. Our build system is now super powerful and will prevent stale browser caching, which is great considering how JS-heavy the site is.

We decided to profile the RAM usage, as we’ve run into our asset build process being killed before and assumed we might be running OOM, since we only have a 512M box and we’re running hot deploys (server still running). Well, suspicions confirmed: we are indeed running OOM during hot deploys. Total usable RAM is 489M. Numbers:

  • Idle server - 88M
  • Deploy building - 250M
  • Fully running - 355M

Simple math: 489M (total) - 355M (running) - 250M (deploy) = -116M. We end up with -116M of RAM during a hot deploy. Thus, we’ll be upgrading our droplet on DigitalOcean to the $10/month box, which has a full 1GB of RAM. Oh, also, #Ruby Sinatra routing is garbage. Give me abstract named routes!

larouxn

Yesterday evening I made one small change and formulated one small hypothesis regarding deploys for #midnightmurderparty. First off, the small change.

Issue was that on deploy, @rhitakorrr’s #Elm files weren’t being recompiled. Turns out that due to the nature of our deploy system, though other files were being updated, the build process was being run on non-updated base Elm files. Threw a tasty little git pull step into deploys before asset builds. Success! 😁

Second, the deploy hypothesis. Background: our deploys run while the server is running and perform a hot restart on our #Ruby Puma workers. Also, our server is a $5/month Ubuntu 16.04 x64 droplet on DigitalOcean with 512MB of RAM. Sometimes deploys would reach the asset build step and the process would be killed. My current hypothesis is that we are running OOM (out of memory) by deploying while the server is running. Tomorrow I’m going to profile this RAM issue with some htop and repeated deploys.

larouxn

Yesterday morning, whilst running a routine check on both our staging and production servers, I came across a bug: I could not log into the editor portion of our production server. I reset the login credentials but still got just a nice, red HTTP status code 418, our default error code, ’cause why not (RFC 2324).

Due to supporting a myriad of operating systems and Rack environments, and recently rolling out deployments which, compared to traditional SSH, don’t actually load your entire login environment, leaving a lot of config unreachable… this auth bug proved quite nefarious.

Downfall: we used to auth based on an environment variable, but since deploys start the #Ruby (Puma) web servers, they were started in an environment without the variable needed for auth. This led me to a lot of “how can I load my .bashrc in deploys!?” and other fruitless questions. In the end, good old YAML saved the day for #midnightmurderparty. 😅
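The YAML approach sidesteps the login-environment problem because the file is read from a known path at boot, so Puma workers spawned by a deploy need no shell environment at all. A minimal sketch (the file name and key are hypothetical):

```ruby
require 'yaml'

# In the real app this would be something like:
#   SECRETS = YAML.safe_load(File.read('config/secrets.yml'))
# Inlined here for illustration.
secrets = YAML.safe_load(<<~YAML)
  auth_token: "not-a-real-token"
YAML

secrets['auth_token']  # => "not-a-real-token"
```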

larouxn

Finally decided to merge and deploy my #Ruby RSS backend functionality for #midnightmurderparty so we can support RSS readers, emails, and social media distribution.

  • Initially tried rewriting all the RSS feed generation using Ruby’s RSS standard library so we can move off of a gem called Builder, which has been stagnant since 2014 (R.I.P. Jim Weirich 😢), but couldn’t seem to get the standard lib to work properly. Jim’s gem is simpler and better. Much more control.
  • After scrapping the rewrite, I tested the original RSS code, deployed to our staging server, and ran the feed by @rhitakorrr. With his feedback, I updated two fields and added one.
  • Lastly, I deployed the fixed up RSS feed code to our staging server for future pre-prod testing.

RSS had been languishing in its feature branch for a week or two because as of late I’ve been busy with deployment and error emails, likely future log subjects. All things considered, I’m very satisfied I finally got around to shipping RSS.