Plumfeed Origins: The uninteresting story of the creation of my simple RSS feed aggregator

February 12, 2017

I never had a chance to write in depth about Plumfeed, so I’m going to do that now!

Over the last dozen years or so, I’ve had a few methods for consuming content from blogs and sites I liked. My original method (and probably my most clever) was to save websites to a bookmarking service and tag them with “dailyblog”. Then, I added the RSS feed for that tag to a Firefox Live Bookmark, a sort of RSS feed reader built into the browser. If I tagged a new site, it would automatically show up in the Live Bookmark. Anyone born in the 1990s probably has no idea what I’m talking about. Anyway, every day I would just randomly open several of the sites in the Live Bookmark to see if they had updated.

At some point I switched to Google Chrome, which didn’t have the Live Bookmark feature. I found myself reading more links from Twitter and Hacker News and missing out on blogs and comics I read regularly. Additionally, I know I’m in the minority here, but I haven’t liked most feed readers, Google’s included. I didn’t like treating my feeds as an inbox I had to maintain, and I strongly preferred to read the content on the original website. I felt like I would miss the design, messages, comments, quirks, and easter eggs when I read the feed contents instead of the origin site. It’s like I was missing out on the character or ambiance of the website, if that’s a thing.

As a proof of concept, I wrote a PHP script that iterated over my old dailyblog-tagged RSS feeds, parsed them, and displayed a link to the most recent post from each of them. It was very slow to load and cumbersome to update, but I still used it frequently. While I was toying with rebuilding this, I considered allowing others to sign up and use it too, but that would add a lot of complexity. Around the same time I saw a talk by Phil Libin, who suggested that if you want to build something to scratch an itch, there are probably others who have the same itch. I decided to go for it and build something anyone could use. Besides, I needed a new personal project to work on, so I got started on what became Plumfeed.
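The original proof of concept was PHP, which I don’t have; as a rough illustration, here’s the same idea sketched in JavaScript. The data shape (`items`, `pubDate`, `link`) and the example feeds are made up, not the script’s actual structure.

```javascript
// Sketch of the proof-of-concept: given each feed's parsed items,
// pick the newest post so it can be rendered as a single link per feed.

// Pick the most recent item from one parsed feed.
function latestItem(items) {
  return items.reduce((newest, item) =>
    new Date(item.pubDate) > new Date(newest.pubDate) ? item : newest
  );
}

// Build the "one link per feed" list the old script displayed.
function latestLinks(feeds) {
  return feeds.map((feed) => {
    const item = latestItem(feed.items);
    return { feed: feed.title, title: item.title, url: item.link };
  });
}

// Example with made-up data:
const feeds = [
  {
    title: 'Example Blog',
    items: [
      { title: 'Old post', link: 'https://example.com/old', pubDate: '2016-01-01' },
      { title: 'New post', link: 'https://example.com/new', pubDate: '2016-06-01' },
    ],
  },
];
console.log(latestLinks(feeds)); // one latest link per feed
```

Fetching and parsing every feed on page load is what made the PHP version slow; caching the latest links is the obvious fix, which is essentially what Plumfeed does.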

I built Plumfeed to be exactly what I want in an RSS feed aggregator. Once a day it fetches the most recent link from every RSS feed in the database. When a user visits their stream, their links are displayed in chronological order. All feeds are public. That’s it! I’m the most prolific user of course, so check out my Plumfeed stream. Plumfeed is geared towards sites that update once a day or less, so it’s perfect for most blogs and comics. It’s not really built for consuming TechCrunch or another large news site. You could add one if you wanted, but you would miss a lot of content.
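The stream itself is conceptually simple: take the latest stored link for each subscribed feed and sort newest-first. A sketch, with illustrative field names rather than Plumfeed’s actual schema:

```javascript
// Assemble a user's stream: one stored "latest link" per feed,
// sorted newest-first. (Field names are hypothetical.)
function buildStream(latestLinks) {
  return [...latestLinks].sort(
    (a, b) => new Date(b.savedAt) - new Date(a.savedAt)
  );
}

const stream = buildStream([
  { feed: 'Comic A', url: 'https://a.example/1', savedAt: '2017-02-10' },
  { feed: 'Blog B', url: 'https://b.example/2', savedAt: '2017-02-11' },
]);
console.log(stream[0].feed); // 'Blog B' (most recent first)
```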

It’s not easy to carve out time for a large side project with a full-time job, but I found John Resig’s post on writing code every day to be helpful. I didn’t follow the advice exactly, but the underlying premise is simple: try to make a little progress every day, even if it’s just a single small commit. Some nights I could only carve out 10 minutes, others a couple of hours, but over time it quickly adds up. It sounds obvious, of course, but it was a helpful nudge when a project seemed too big.

I wish the name, Plumfeed, had a good story, but truthfully, I really struggled to come up with a name that I could easily acquire a domain for. I think Feedbag was my favorite idea. I wanted to stay away from the wacky, novelty TLDs because, well, I hate them. One day I was eating a particularly tasty plum, and out of name-choosing exhaustion I decided on Plumfeed. It occurred to me that “plum” can also mean prime or desired (like a plum job). If I were choosing the best feeds, maybe this would make more sense, but, eh, it’s fine.

Ok, tech spec time! Plumfeed is built on Heroku’s Hobby Dyno, which costs $7 a month. I’m sure I could move it to AWS and save a few bucks, but Heroku is so handy, and it was fun learning about it. It’s Express running on Node.js. There’s a lot of handy Express middleware I was able to use or tweak, namely csurf for CSRF protection, plus session, compression, cookie-parser, and favicon middleware. I’m using Postgres for my database. I don’t have great reasons for choosing Postgres, but here they are: I know SQL pretty well, people seem to really dig it, Heroku has built-in support, and I wanted to learn about it. Good enough for a hobby project.
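For anyone unfamiliar with Express, a stack like the one described is mostly a matter of wiring middleware in order. This is a generic sketch using the usual npm package names (express-session, serve-favicon, etc.), not Plumfeed’s actual code; paths and environment variable names are assumptions.

```javascript
// Hypothetical Express setup with the middleware mentioned above.
const express = require('express');
const compression = require('compression');
const cookieParser = require('cookie-parser');
const session = require('express-session');
const csurf = require('csurf');
const favicon = require('serve-favicon');

const app = express();

app.use(compression());                                // gzip responses
app.use(favicon(__dirname + '/public/favicon.ico'));   // cached favicon
app.use(cookieParser());
app.use(session({
  secret: process.env.SESSION_SECRET,                  // assumed env var
  resave: false,
  saveUninitialized: false,
}));
app.use(csurf());                                      // CSRF tokens, backed by the session

app.listen(process.env.PORT || 3000);
```

Middleware order matters here: csurf needs the session (or a cookie parser with a secret) registered before it so it has somewhere to keep its tokens.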

I don’t remember much that was technically very challenging, but I read a lot about authentication, hashing, salting, CSRF, sessions, cookies, and bcrypt. I knew about some of these topics, but I gained a greater appreciation for them as I was building the site. I did spend a fair amount of time handling corner cases in strange RSS feeds, and making sure that the code that finds the RSS feed within a website is reliable.
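Feed autodiscovery, the “find the RSS feed within a website” part, usually means scanning the page for a `<link rel="alternate">` tag. A minimal sketch of the idea (a real implementation would use an HTML parser and handle far more corner cases than this regex does):

```javascript
// Find an RSS/Atom feed URL advertised in a page's <head>.
// Regex-based for brevity; a real version should parse the HTML.
function discoverFeed(html) {
  const linkTags = html.match(/<link\b[^>]*>/gi) || [];
  for (const tag of linkTags) {
    if (/rel=["']alternate["']/i.test(tag) &&
        /type=["']application\/(rss|atom)\+xml["']/i.test(tag)) {
      const href = tag.match(/href=["']([^"']+)["']/i);
      if (href) return href[1]; // may be relative to the page URL
    }
  }
  return null; // no advertised feed found
}

console.log(discoverFeed(
  '<html><head><link rel="alternate" type="application/rss+xml" href="/feed.xml"></head></html>'
)); // '/feed.xml'
```

The corner cases pile up fast: relative hrefs, multiple advertised feeds, sites with no `<link>` tag at all where you end up probing `/feed` or `/rss` by convention.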

To render the front-end I’m using Dust.js templates. Honestly, I don’t remember why I chose that engine. Around that time there was a lot of excitement around logicless templates. Although I appreciate the rationale, it’s a little too draconian for my tastes. I felt like there was room for occasionally having some layout-related logic, and Dust allows this. Anyways, I decided to use just plain vanilla JavaScript in the browser. I didn’t think my site would have much client-side JS, so I didn’t want the overhead of using a large library. It ended up having more than I thought, but it’s been easy to maintain. I also made the site responsive for mobile viewing using CSS media queries.

People who have known me a while know I used to do a lot of design work in college, and I was actually ok at it. Those skills have long since atrophied, but I think I still have an eye for what’s working and not working in a design. For Plumfeed, I designed the logo and the page layouts for web and mobile. I don’t love the logo, but it gets the job done. It was fun returning to Illustrator, since it had been about 10 years since I last opened the program. As for the site design, I really wanted the site to be just about the feeds, so it has a pretty basic layout. Getting the edit functionality looking right was tricky, and I’m not sure I succeeded. I figured users would be reading and opening links more than they would be adding them, so that’s why it’s a little tucked away. I want to give a shoutout to my lovely and talented wife, Liz, who gave me a few design pointers along the way!

Plumfeed logo

Finally, I have a worker job on Heroku that goes through all the feeds that have been collected and saves the latest title and URL every day. If you see that your site was visited by PlumBot, that’s me requesting your RSS feed.
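The worker’s save-if-changed loop can be sketched like this. Everything here is hypothetical: `fetchLatest` and `save` stand in for the real feed fetcher and database write, and a real version would fetch over HTTP asynchronously; this synchronous sketch just shows the logic.

```javascript
// Daily worker sketch: walk every feed, get its newest item, and
// save the title/url only when the link has changed since last run.
function runWorker(feeds, fetchLatest, save) {
  for (const feed of feeds) {
    const item = fetchLatest(feed.url);
    if (item && item.link !== feed.lastLink) {
      save(feed.id, item.title, item.link);
    }
  }
}

// Example with in-memory stand-ins:
const saved = [];
runWorker(
  [
    { id: 1, url: 'https://a.example/rss', lastLink: 'https://a.example/old' },
    { id: 2, url: 'https://b.example/rss', lastLink: 'https://b.example/same' },
  ],
  (url) =>
    url.startsWith('https://a.example')
      ? { title: 'Fresh post', link: 'https://a.example/new' }  // feed updated
      : { title: 'Unchanged', link: 'https://b.example/same' }, // feed unchanged
  (id, title, link) => saved.push({ id, title, link })
);
console.log(saved); // only the changed feed gets written
```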

I used to have Cloudflare in front of my site for SSL termination, but I recently switched to just having Heroku handle it.

I’ve had a bunch of people sign up, but most just leave after visiting once or twice. I have a few regular users who are very nice friends of mine. I haven’t advertised or promoted it on my Plumfeed Twitter feed much. Personally, I use it everyday and still add new feeds from time to time. I don’t update the site much except for bug fixing and some package upgrades, but I’m open to feature requests.

I’d love to hear your thoughts on Plumfeed! Oh, and definitely contact me if you find a security issue.