→ JavaScript as assembly language for the web?

Scott Hanselman quotes Erik Meijer:

JavaScript is an assembly language. The JavaScript + HTML generate is like a .NET assembly. The browser can execute it, but no human should really care what’s there.

Yeah, this feels right to me. I know many folks had problems with this observation and this post from Scott, but I think it’s dead on.

Honestly, who likes plain JavaScript? Actually, since the advent of Prototype and jQuery, who even writes vanilla JavaScript? I bet the number is lower than you’d think. And there is a simple reason: like assembly was for early computers, JavaScript is an efficient and effective language for the web, but it is hard to read and hard to write.

I know, I know, hard is relative. Sure, it might be easy for you because you’ve been writing it since 1996, but it’s not that easy for everyone. And honestly, wouldn’t you rather write in something like CoffeeScript instead?

The beauty of new tools and languages like CoffeeScript and its ilk is in the layers of abstraction and personal choice they offer developers who still have to work with the universal scripting language of the web. Just like languages that compile to machine code, we are now compiling other languages into JavaScript. And I think that’s a great thing.
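
To make that concrete, here’s a small example of my own (not from Scott’s post). A CoffeeScript one-liner like squares = (x * x for x in [1..5]) compiles to JavaScript along these lines, and while the output runs fine in any browser, it’s not something a human needs to read or maintain:

    // CoffeeScript source: squares = (x * x for x in [1..5])
    // Roughly the JavaScript a CoffeeScript-style compiler emits:
    var squares, x;

    squares = (function() {
      var i, results;
      results = [];
      for (x = i = 1; i <= 5; x = ++i) {
        results.push(x * x);
      }
      return results;
    })();

    // squares is now [1, 4, 9, 16, 25]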


→ Microservices architecture

Martin Fowler on the evolving practice of developing microservices:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

He goes on to describe this evolving approach at length, and I strongly recommend reading it.

The most interesting part of this piece may be the most basic. His definition of a ‘component’, the building block of a microservices architecture, is “a unit of software that is independently replaceable and upgradeable.” Why is this so interesting? Because in my experience, most enterprise software teams do not think in units this small. In fact, many cannot think this small. Which is why so many are behind the curve in so many ways.
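
To ground that definition, here’s a minimal sketch of what one such component can look like: a single-purpose service exposing an HTTP resource API, written with nothing but Node’s built-in http module. The orders resource, the port, and the in-memory data are my own placeholders, not anything from Fowler’s piece; the point is that the whole unit is small enough to replace, rewrite, or redeploy on its own.

    // A minimal, self-contained "orders" service (illustrative only).
    var http = require('http');

    // Stand-in for the service's own datastore.
    var orders = {
      '1': { id: '1', status: 'shipped' },
      '2': { id: '2', status: 'pending' }
    };

    var server = http.createServer(function (req, res) {
      // One business capability: look up an order by id.
      var match = req.url.match(/^\/orders\/(\w+)$/);
      if (req.method === 'GET' && match) {
        var order = orders[match[1]];
        res.writeHead(order ? 200 : 404, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(order || { error: 'not found' }));
      } else {
        res.writeHead(404, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ error: 'unknown resource' }));
      }
    });

    // Runs in its own process; deployable and replaceable on its own.
    server.listen(3000, function () {
      console.log('orders service listening on port 3000');
    });

Rewrite it in another language or swap its datastore, and the rest of the system only ever sees the same HTTP resource.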


Why I chose a static site for my blog

When I decided to launch code.brianlundin.com I had some firm requirements in mind for the site and blog engine I’d use. Things like:

  • It must be responsive
  • It must be fast
  • It must be cheap
  • It must have an easy-to-use workflow for writing and publishing posts in Markdown
  • I must have full control over the layout, design and code of the site, and use the responsive framework of my choice
  • It must be fun to develop and maintain

I don’t think this list is the same for most bloggers on the internet, but I bet other developers’ lists would look pretty similar. Of course, the real question is how I decided on a static site, and Jekyll as the engine.

Choosing a static site was easy. I’ve run several WordPress blogs in the past, both self-hosted and on WordPress.com, and overall I was happy with them. The amount and quality of features you get for a low price is a great deal. Plus, there are so many plugins and community tools for the platform that if you are choosing a blogging platform from scratch, it’s the easy choice.

But I didn’t want to take the easy choice with this site. The first reason is that it would be overkill. This site is simple. It’s a blog, a few pages, and an RSS feed. All of the features of the WordPress platform are great, but they would go unused for this site. That’s too much unhelpful complexity.

In fact, the only thing that the WordPress option had going for it in my mind was the tremendous SEO capabilities of StudioPress’s Genesis Framework, which I have used and recommend. But, after some time playing with Google’s Webmaster Tools and reading the specs at Schema.org, it was apparent that I could easily optimize my own HTML.

The second reason I did not want to go with WordPress, or its competitors, is a reason that I suspect only other developers will understand. Simply put, I am disconnected from the code that runs my other current site. While I am not familiar with PHP, it’s not that hard to pick up. But sorting through the WordPress code, and then the Genesis Framework theme code, is just too much for a side project. I could learn it, but it’s so far outside of my wheelhouse that even if I developed a good child theme, I wouldn’t be happy. For this project I want to be more directly involved. I want the site my readers see to be my site, my code.

Which led me to my first effort, creating my own CMS in Rails. A Rails app would satisfy most, if not all, of my requirements. Ruby and Rails are fun, and I know I can make a performant app. Obviously the code and design would all be mine. I started working on the project, and had a good time with it.

For a few hours.

Then I realized the downside of this plan. I have a job, and it’s not building a blog engine. Sure, it’s fun and a great way to learn and find some cool ways to solve new (to me) problems, but it was clear I would never finish on a reasonable timeline, much less when I wanted to.

Additionally, I did not want to run a server somewhere in the cloud. Cloud VPS providers make life easy, and I’ve had good experiences, but that’s not how I want to spend my time just to run a blog. Heroku could make that much easier of course, but that still felt like a bit too much. So in the end I passed on building my own.

All of this left me with Jekyll. After a bit of time googling for hosting options, I saw how easy it was to host a static site on Amazon S3 and use CloudFront. With this solution in mind, Jekyll had several key things going for it:

  • Jekyll is written in Ruby, so if I need to hack away at it, I can
  • The code runs on my machine, not the server. If I want to make a bunch of changes, tear apart the site and rebuild it, there is no adverse effect on my site. Most blogging engines cannot say the same thing without a whole lot of work and configuration
  • Backing up both my site code and my content would be a snap with GitHub
  • Integrating responsive frameworks, JavaScript libraries, and anything else would be easy
  • I control the published site code 100%

This list of benefits won the day. The fact that I can host it on S3 and cache my content globally with CloudFront for next to nothing was just a bonus.
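
For the curious, here’s a rough sketch of what publishing the built site to S3 can look like, using the AWS SDK for JavaScript (v2). The bucket name, region, and content-type list are placeholders, and this isn’t necessarily how this site gets deployed; in practice the AWS CLI’s aws s3 sync command does the same job in one line. The script just walks Jekyll’s _site output and uploads each file:

    // Illustrative only: push Jekyll's _site output to an S3 bucket.
    // Assumes the aws-sdk (v2) npm package and the placeholder names below.
    var fs = require('fs');
    var path = require('path');
    var AWS = require('aws-sdk');

    var s3 = new AWS.S3({ region: 'us-east-1' });   // placeholder region
    var BUCKET = 'example-blog-bucket';             // placeholder bucket name
    var SITE_DIR = '_site';                         // Jekyll's default build output

    // A few content types so browsers render files instead of downloading them.
    var TYPES = {
      '.html': 'text/html',
      '.css': 'text/css',
      '.js': 'application/javascript',
      '.xml': 'application/xml',
      '.png': 'image/png',
      '.jpg': 'image/jpeg'
    };

    // Recursively collect every file under the build directory.
    function walk(dir) {
      return fs.readdirSync(dir).reduce(function (files, name) {
        var full = path.join(dir, name);
        return fs.statSync(full).isDirectory()
          ? files.concat(walk(full))
          : files.concat([full]);
      }, []);
    }

    walk(SITE_DIR).forEach(function (file) {
      var key = path.relative(SITE_DIR, file).split(path.sep).join('/');
      s3.putObject({
        Bucket: BUCKET,
        Key: key,
        Body: fs.readFileSync(file),
        ContentType: TYPES[path.extname(file)] || 'application/octet-stream'
      }, function (err) {
        console.log(err ? 'failed: ' + key : 'uploaded: ' + key);
      });
    });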

This choice proved to be very good for me. Setup and configuration time was minimal, with the majority of work going into the design and development of the site—which is exactly how I wanted it. Honestly, it’s been a breeze, and fun.


Twenty years on the internet

Update: 1/21/19 - I have migrated all of the posts from code.brianlundin.com to this site. You can find them in the Technology category. Though this post was the introduction to that old site, I’m archiving it here.

Twenty years ago I fell in love with the internet. I was a sophomore in high school, and only just beginning to figure out the social niceties it takes to survive in that environment. The first PC in our home was a Packard Bell with a Pentium 75 MHz processor and some minuscule amount of RAM. It was not the fastest, even for the time, but it ran Castle Wolfenstein, Doom, Warcraft and Descent, so I was happy.

But when AOL opened up the internet to their users, I was fascinated by it. When AOL then opened up the internet to other browsers on your system, I found Netscape Navigator, and then I really fell in love. It didn’t take long until I realized that all those webpages listed in the Yahoo! web directory were run by people. Not companies, not governments, but other people. And if they could have web pages, why couldn’t I?

This realization led me to GeoCities, where I set up my first website. I think the free account had 2 MB of storage, which seemed massive to me. I mean, that was bigger than a floppy disk, and only games filled those up. I don’t have any remnants of that first page, but I remember crafting it by hand. I learned HTML from library books, and I downloaded all the free web development and image tools I could find on the web. My first page, a Grateful Dead fan page, was the first “software” I wrote, and I loved it.

Since then I’ve lived on the web. I’ve had multiple blogs covering several different topics. I discovered the power of publishing. I discovered little corners and niches to explore. I stopped watching cable news in favor of my RSS feeds. I built my own sites. I built apps for my company. I learned almost every fashionable web development language and framework, and I became a web developer.

This blog is my newest contribution to the internet, founded with the goal of getting back to my first tech love, writing code. I will cover more ground than that of course, but my focus is software, and other things software developers think about. My other blog covers my other loves, writing and storytelling, and I found that I couldn’t touch on everything I wanted there, so I did what I love… fired up a text editor and started writing some HTML.


Writers: Beware of the ephemeral web

The internet has been a boon for writers, particularly in terms of exposure. But, there are many downsides—just ask print journalists. In my mind, one of the biggest questions we face as writers is a new framing of an old question: How can we preserve our work?

Over the last few weeks I kicked off a GTD reboot. (Getting Things Done is a great productivity system created by David Allen that's not really a system; it's more a way of thinking that drives your own system. I love it and it's the only thing that works for me, but my lack of discipline in pretty much all things means that I have to reboot my system a couple of times a year. Despite this, I HIGHLY recommend it.) I usually do this a few times a year, but this one was especially needed as both work and personal projects were out of control. I wanted to tweak my system, so I went digging around on the web.

After a while I ended up on 43folders.com, the now mothballed productivity site from the inestimable Merlin Mann, checking out his classics. A lot of the really good content on the site is old, downright ancient in web terms, and dates back to 2004-2008. Half the links I clicked seemed to go 404 on me. Then, one more link led me to a site where I saw this:

I didn't know Ms. Harpold or her work, but this notice stopped me in my tracks. She apparently died in 2006, and now her website, her work, is gone. It may have been her wishes, it may not have been, but for my purposes that was not the question. After 30 minutes of following dead links and googling for long-gone articles, the ephemerality of the internet became very, very real.

In some sense, all of our writing that stays in bits and bytes and never makes it to paper is living in a future black hole. Sure, books go out of print, and many, many of the world's writings have been lost, but there is something undeniable about the pure physicality of a book. The internet may haunt some people forever, but it seems that for many writers it doesn't hang around long enough.

So, fellow wordsmiths, here is my advice. Don't forgo the physical. Write on paper, make offline backups of your blog, try to publish on paper. Save your work. Because the internet won't do it for you.