
Purging cached items from NGINX with Lua

Anyone at all familiar with scaling up a service will know NGINX as a strong workhorse when it comes to proxying HTTP requests between multiple backend servers. There's one thing the open source version of NGINX doesn't do, however: you can't issue a PURGE request against a cached resource to delete it from the cache. That is, until we bring in Lua.
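Since PURGE isn't a standard HTTP method, here's a minimal Go sketch of what issuing one from a client might look like; the host and path are placeholders, and the request only does something useful against an NGINX instance with a Lua purge handler installed.

```go
package main

import (
	"fmt"
	"net/http"
)

// newPurgeRequest builds an HTTP request with the non-standard
// PURGE method. net/http passes custom method names through verbatim.
func newPurgeRequest(url string) (*http.Request, error) {
	return http.NewRequest("PURGE", url, nil)
}

func main() {
	// Placeholder URL; point this at your own NGINX cache.
	req, err := newPurgeRequest("http://cache.example.com/some/resource")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL)
	// To actually send it:
	//   resp, err := http.DefaultClient.Do(req)
}
```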

Stateful locks in Go

A lock is a construct that coordinates writes from several threads or goroutines to the same data structure, avoiding concurrency issues. Imagine every thread reading a value, adding something to it, and writing the result back to the shared data structure: without coordination, updates made by other routines get overwritten and lost, so we use a lock to avoid this.
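The read-modify-write race described above can be sketched in Go with a `sync.Mutex` guarding a shared counter; without the lock, concurrent increments could be lost.

```go
package main

import (
	"fmt"
	"sync"
)

// increment launches n goroutines that each add 1 to a shared
// counter, guarded by a mutex, and returns the final value.
func increment(n int) int {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // take the lock before touching counter
			counter++ // the read-modify-write is now safe from other goroutines
			mu.Unlock()
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	// With the mutex this is always 1000; without it, some
	// increments could overwrite each other and be lost.
	fmt.Println(increment(1000))
}
```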

The 12 Factors of Go

The 12 Factor App methodology is a set of best practices to follow when building modern software applications. It was created by Heroku as a guide for developers to optimize application development for their platform as a service. At its core, the 12FA methodology is a marketing document, but the concepts behind the individual practices are worth following regardless of the platform where you publish your apps.

Docker, you lied to me

I know, I know. I'm dramatic. The title is basically clickbait, but the subject is just as true as the title. You should read this article because it will save your life! Okay, it might not save your life, but it will most likely solidify some very important information about Docker containers and save you some drama down the line.

Setting up your own Docker swarm

Scaling a service has traditionally been the domain of system operators installing servers and developers tweaking software once the load got high enough to warrant it. Soon enough you'd be looking at tens or even hundreds of instances, which took a lot of time to manage. With the release of Docker 1.12, orchestration is built in: you can scale to as many instances as your hosts allow. And setting up a Docker swarm is easy-peasy.
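As a rough sketch (assuming Docker 1.12 or newer, and with the IP address, service name, and image as placeholders), initializing a swarm and scaling a service out looks something like this:

```shell
# Initialize a swarm on the manager node.
docker swarm init --advertise-addr 192.168.1.10

# On each worker node, join using the token that `swarm init` prints:
#   docker swarm join --token <worker-token> 192.168.1.10:2377

# Create a service and scale it across the swarm.
docker service create --name web --publish 8080:80 nginx
docker service scale web=5
```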