
Raspberry Pi Alternatives

Monday, 22 January 2018 - 13:14 - (Hardware)

A look at some of the many interesting Raspberry Pi competitors.


Purism Progress Report, Spectre Mitigation for Ubuntu, Malicious Chrome Extensions and More

Thursday, 18 January 2018 - 16:20 - (Security)

News briefs for January 18, 2018.

Purism, the group behind the security- and privacy-focused Librem 5 phone, recently published a progress report highlighting the latest developments and design decisions in its crowdfunded project. Changes include an even faster processor.


Linux Filesystem Events with inotify

Monday, 08 January 2018 - 15:38 - (Security)

Triggering scripts with incron and systemd.


Banana Backups

Tuesday, 21 November 2017 - 15:58 - (Hardware)

In the September 2016 issue, I wrote an article called "Papa's Got a Brand New NAS" where I described how I replaced my rackmounted gear with a small, low-powered ARM device—the Odroid XU4.


Zentera Systems, Inc.'s CoIP Security Enclave

Wednesday, 15 November 2017 - 12:38 - (Security)

On the heels of being crowned "Cool Vendor in Cloud Security" by Gartner, Zentera Systems, Inc., announced an upgrade to its flagship CoIP Security Enclave solution.


Testing the Waters: How to Perform Internal Phishing Campaigns

Tuesday, 31 October 2017 - 13:50 - (Security)

Phishing is one of the most dangerous threats to modern computing. Phishing attacks have evolved from sloppily written mass email blasts to targeted attacks designed to fool even the most cautious users. No defense is bulletproof, and most experts agree education and common sense are the best tools to combat the problem.


The Wire

Monday, 30 October 2017 - 12:09 - (Security)

In the US, there has been recent concern over ISPs turning over logs to the government. During the past few years, the prospect of governments and others snooping on our private data has made encryption more popular than ever. One of the problems with encryption, however, is that it's generally not user-friendly to add its protection to your conversations.


Lotfi ben Othmane, Martin Gilje Jaatun and Edgar Weippl's Empirical Research for Software Security (CRC Press)

Friday, 20 October 2017 - 16:47 - (Software)


James Gray Fri, 10/20/2017 - 11:47

Developing truly secure software is no walk through the park. In an effort to apply the scientific method to the art of secure software development, a trio of authors—Lotfi ben Othmane, Martin Gilje Jaatun and Edgar Weippl—teamed up to write Empirical Research for Software Security: Foundations and Experience, which is published by CRC Press.

The book is a guide for using empirical research methods to study secure software challenges. Empirical methods, including data analytics, allow extraction of knowledge and insights from the data that organizations gather from their tools and processes, as well as from the opinions of the experts who practice those methods and processes. These methods can be used to perfect a secure software development lifecycle based on empirical data and published industry best practices.

The book also features examples that illustrate the application of data analytics in the context of secure software engineering.


iStorage diskAshur Storage Drives

Friday, 06 October 2017 - 16:50 - (Hardware)


James Gray Fri, 10/06/2017 - 11:50

With software-free setup and operation, the new iStorage diskAshur group of ultra-secure storage drives works across all operating systems, including Linux, macOS, Android, Chrome, thin and zero clients, MS Windows and embedded systems.

Available in HDD and SSD versions, these high-speed USB 3.1, PIN-authenticated, hardware-encrypted portable data storage drives feature iStorage's unique EDGE technology. iStorage calls the EDGE technology—short for Enhanced Dual Generating Encryption—super-spy-like, due to the advanced security features that make diskAshur the "most secure data storage drives available on the market". For one thing, without the PIN, there's no way in!

diskAshur's dedicated, hardware-based secure microprocessor (Common Criteria EAL4+-ready) employs built-in physical protection mechanisms designed to defend against external tamper, bypass laser attacks and fault injections. The drives feature technology that encrypts both the data and the encryption key, ensuring that private information is secure and protected. Other security features include a brute-force hack defense mechanism, self-destruct feature, unattended auto-lock and a wear-resistant epoxy-coated keypad.

The diskAshur drives are elegantly designed and available in four striking colors and in capacity options from 128GB to 5TB.


Heirloom Software: the Past as Adventure

Thursday, 07 September 2017 - 12:17 - (Software)


Eric S. Raymond Thu, 09/07/2017 - 07:17

Through the years, I've spent what might seem to some people an inordinate amount of time cleaning up and preserving ancient software. My Retrocomputing Museum page archives any number of computer languages and games that might seem utterly obsolete.

I preserve this material because I think there are very good reasons to care about it. Sometimes these old designs reveal unexpected artistry, surprising approaches that can help us break free of assumptions and limits we didn't know we were carrying.

But just as important, cultures understand themselves through their history and their artifacts, and this is no less true of programming cultures than of any other kind. If you're a computer hacker, great works of heirloom software are your heritage as surely as Old Master paintings are a visual artist's; knowing about them enriches you and helps solidify your relationship to your craft.

For exactly re-creating historical computing experiences, not much can beat running the original binary executables on a software emulator for their host hardware. There are small but flourishing groups of re-creationists who do that sort of thing for dozens of different historical computers.

But that's not what I'm here to write about today, because I don't find that kind of museumization very interesting. It doesn't typically yield deep insight into the old code, nor into the thinking of its designers. For that—to have the experience parallel to appreciating an Old Master painting fully—you need not just a running program but source code you can read.

Therefore, I've always been more interested in forward-porting heirloom source code so it can be run and studied in modern environments. I don't necessarily even consider it vital to retain the original language of implementation; the important goals, in my view, are 1) to preserve the original design in a way that makes it possible to study that design as a work of craft and art, and 2) to replicate as nearly as possible the UI of the original so casual explorers not interested in dipping into source code can at least get a feel for the experiences had by its original users.

Now I'll get specific and talk about Colossal Cave Adventure.

This game, still known as ADVENT to many of its fans because it was written on an operating system that supported only monocase filenames at most six characters long, is one of the great early classics of software. Written in 1976–77, it was the very first text adventure game. It's also the direct ancestor of every rogue-like dungeon simulation, and through those the indirect ancestor of a pretty large percentage of the games being written even today.

If you're of a certain age, the following opening sequence will bring back some fond memories:

Welcome to Adventure!!  Would you like instructions?

> n

You are standing at the end of a road before a small brick building.
Around you is a forest.  A small stream flows out of the building and
down a gully.

> in

You are inside a building, a well house for a large spring.

There are some keys on the ground here.

There is a shiny brass lamp nearby.

There is food here.

There is a bottle of water here.


From this beginning, the game develops with a wry, quirky, humorous and somewhat surrealistic style—a mode that strongly influenced the folk culture of computer hackers that would later evolve into today's Open Source movement.

For a work of art that was the first of its genre, ADVENT's style seems in retrospect startlingly mature. The authors weren't fumbling for an idiom that would be greatly improved by later artists more sure of themselves; instead, they achieved a consistent (and, at the time, unique) style that would be closely emulated by pretty much everyone who followed them in text adventures, and not much improved on as style even though the technology of the game engines improved by leaps and bounds, and the range of subjects greatly widened.

ADVENT was artistically innovative—and with an architecture ahead of its time as well. Though the possibility had been glimpsed in research languages (notably LISP) as much as a decade earlier, ADVENT is one of the earliest programs still surviving to be organized as a complex, declaratively specified data structure walked by a much simpler state machine. This is a design style that is underutilized even today.

The continuing relevance of ADVENT's actual concrete source code, on the other hand, is quite a different matter. The implementation aged much more rapidly—and badly—than the architecture, the game or its prose.

ADVENT was originally written under TOPS-10, a long-defunct operating system for the DEC PDP-10 minicomputer. The source for the original version still exists (you can find it and other related resources at the Interactive Fiction Archive), but it tends to defeat attempts to appreciate it as a work of programming art because it's written in an archaic dialect of FORTRAN with (by actual count) more than 350 gotos in its 2.4KLOC of source code.

Preserving that original FORTRAN is, therefore, good for establishing provenance (as historians think about these things) but doesn't do a whole lot for what I've suggested as the cultural purposes of keeping these artifacts around. For that, a faithful translation into a more modern language would be far more useful.

As it happens, Don Woods' 1977 version of ADVENT was translated into C less than two years after it was written. You can still play it—and read the code—as part of the BSD Games package. Alas, while that translation is serviceable for building and running the program, it's not so great for reading. It is less impenetrable than the FORTRAN, but was not moved fully to idiomatic C and reads a bit strangely to a modern eye. (To be fair to the translators, the C language was still in its childhood in 1977, and its modern idioms weren't all that well developed yet.)

Thus, there are still a forbidding number of gotos in the BSD translation. Lots of information is passed around through shared globals in a way that was typical in FORTRAN but was questionable style in C even then. The BSD C code is full of mystery constants inherited from the ancestral FORTRAN source. And there is a serious comprehensibility problem around the custom text database that both the original FORTRAN and BSD C versions used—a problem I'll return to later in this article.

Through the late 1970s and early 1980s a lot of people wrote extensions of ADVENT, adding more rooms and treasures. The history of those variants is complicated and difficult to track. Almost lost in the hubbub was that the original authors—Will Crowther and Don Woods—continued to revise their game themselves. The last mainline version—the last release by Don Woods—was Adventure 2.5 in 1995.

I found Adventure 2.5 in the Interactive Fiction Archive in late 2016. Two things caught my attention about it. First, I had not previously known that Crowther and Woods themselves had shipped a version so extended from the famous original. Second—and unlike the early BSD port—there was nothing resembling what we'd expect in a modern source release to go with the bare code and the Makefile. No manual page. No licensing statement.

Furthermore, the 2.5 code was deeply ugly. It was C, but in worse shape than the BSD port. The comments actually included an apology from Don Woods explaining that it had been mechanically lifted from FORTRAN by a homebrew translator of his own devising—and apologizing for the bad style.

Nevertheless, I saw a possibility—and I wrote Don asking his permission to ship a cleaned-up version under a true open-source license. The reply was some time in coming, but Don not only granted permission speaking for both himself and Will Crowther, he also actively encouraged me to do this thing.

Now a reminder about what I think the goals of heritage preservation ought to be: I felt it was essential that the cleaned-up version should at no point break functional compatibility with what we got from Woods and Crowther. Therefore, the very first thing I did after getting the heirloom source to build clean was add the ability for it to capture command logs for regression testing.

When you do a restoration like this, it's not enough merely to make a best effort to preserve original behavior. You ought to be able to prove you have done so. Best practice, then, is to start by building a really comprehensive set of regression tests. And that's what I did.

What we did, I should say. The project quickly attracted collaborators—most notably Jason Ninneman. The first of Jason's several good ideas was to use coverage-analysis tools to identify gaps in the test suite. Later, Petr Vorpaev, Peje Nilsson and Aaron Traas joined in. By about a month from starting, we could show more than 95% test coverage. And, of course, we ran retrospective testing with the newest version of the test suite on the earliest version we could make read the logs.
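The harness itself can be tiny. Here is a minimal Python sketch of the diff step: compare a fresh transcript against a golden one captured from the heirloom build. The function and file names are illustrative, not Open Adventure's actual test code; in the real suite, each test replays a captured command log through the game binary and checks the resulting transcript.

```python
import difflib

def check_transcript(actual: str, golden: str) -> list:
    """Unified diff between a fresh run and the golden transcript.

    An empty result means the restored build reproduced the
    heirloom behavior for this command log exactly.
    """
    return list(difflib.unified_diff(
        golden.splitlines(keepends=True),
        actual.splitlines(keepends=True),
        fromfile="golden", tofile="actual"))

# Identical transcripts produce no diff, so the test passes.
assert check_transcript("You are in a maze.\n", "You are in a maze.\n") == []
```

Anything the diff reports is either a regression to fix or a deliberate change that must be documented and reflected in a new golden file.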

That kind of really good test coverage frees your hands. It allowed us to make rapid progress on the other prime goal, which was to turn the obfuscated source we started with into a readable work of art that fully revealed the design intentions and astonishing cleverness of the original.

So, all the cryptic magic numbers had to go. The goto-laden spaghetti code had to be restructured into something Don Woods in 2017 wouldn't feel he needed to apologize for. In general, what we aimed to transform the source code into was something we could believe Crowther and Woods—two of the most brilliant hackers of their time—would have written in 1977 if they had then had the tools and best practices of 2017 at their fingertips.

Our most (ahem) adventurous move was to scrap the custom text-database format that Crowther and Woods had used to describe the vocabulary of the game and the topology of Colossal Cave.

This—the "complex, declaratively-specified data structure" I mentioned earlier—was the single cleverest feature of the design, and it went all the way back to Crowther's very first version. The dungeon's topology is expressed by a kind of pseudo-code broadly resembling the microcode found underneath a lot of processor architectures; movement consists of dispatching to the sequence of opcodes corresponding to the current room and figuring out which one to fire depending not only on the motion verb the user entered but also on conditionals in the pseudo-code that can test for the presence or absence of objects and their state.

Good luck grokking that from the 2.5 code we started with though. Here are the first two rules as they originally appeared in adventure.text, comprising ten opcodes:

1       2       2       44      29
1       3       3       12      19      43
1       4       5       13      14      46      30
1       145     6       45
1       8       63
2       1       12      43
2       5       44
2       164     45
2       157     46      6
2       580     30

Here's how those rules look, transformed to the YAML markup that our restoration, Open Adventure, now uses:

    travel: [
      {verbs: [ROAD, WEST, UPWAR], action: [goto, LOC_HILL]},
      {verbs: [ENTER, BUILD, INWAR, EAST], action: [goto, LOC_BUILDING]},
      {verbs: [DOWNS, GULLY, STREA, SOUTH, DOWN], action: [goto, LOC_VALLEY]},
      {verbs: [FORES, NORTH], action: [goto, LOC_FOREST1]},
      {verbs: [DEPRE], action: [goto, LOC_GRATE]},
    ]
    travel: [
      {verbs: [BUILD, EAST], action: [goto, LOC_START]},
      {verbs: [WEST], action: [goto, LOC_ROADEND]},
      {verbs: [NORTH], action: [goto, LOC_FOREST20]},
      {verbs: [SOUTH, FORES], action: [goto, LOC_FOREST13]},
      {verbs: [DOWN], action: [speak, WHICH_WAY]},
    ]

The concept of using a Python helper to compile a declarative markup like this to C source code to be linked to the rest of the game was maybe just barely thinkable when Adventure 2.5 was written. YAML didn't exist at all until six years later.

But...designer's intent. That's much easier to see in the YAML version than in what it replaced. Therefore, given the purpose of heirloom restoration, YAML is better. Rather like stripping darkened varnish from a Rembrandt—the bright colors beneath may startle if you're used to the obscuring overlayer and think of it as definitive, but they are the truth of the work.
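To make the dispatch idea concrete, here is a toy version in Python. The rooms, verbs and rules are invented for illustration (the real tables live in the game's YAML), and the real engine's conditional opcodes, which test object presence and state, are omitted.

```python
# Toy travel table in the spirit of the design described above:
# declarative rules walked by a much simpler dispatcher.
TRAVEL = {
    "LOC_START": [
        {"verbs": {"WEST", "ROAD"}, "action": ("goto", "LOC_HILL")},
        {"verbs": {"IN", "ENTER"},  "action": ("goto", "LOC_BUILDING")},
    ],
    "LOC_BUILDING": [
        {"verbs": {"OUT", "EXIT"},  "action": ("goto", "LOC_START")},
    ],
}

def move(room: str, verb: str) -> str:
    """Fire the first rule matching the motion verb; stay put otherwise."""
    for rule in TRAVEL.get(room, []):
        if verb in rule["verbs"]:
            op, arg = rule["action"]
            if op == "goto":
                return arg
    return room
```

The point of the style is that all the world-knowledge sits in the table, where it can be read, audited and regenerated, while the state machine walking it stays trivially small.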

With our choices about what we could change so constrained, you might think the restoration was drudge work, but it wasn't like that at all. It was more like polishing a rough diamond—gradually seeing brilliance emerge from beneath an unprepossessing surface. The grottiness was largely—though not entirely—a consequence of the limitations of the tools Crowther and Woods had at hand. When we cleaned that up, we found genius with only a tiny sprinkling of bugs.

My dev team fixed those bugs, of course. We're hackers; that means we consider heirloom software a living heritage to be improved, not an idol to be worshiped. We certainly didn't think, for example, that Don Woods intended the use of the verb "extinguish" on an oil-filled unlit urn to make the oil in it vanish.

Petr Vorpaev, reviewing a draft of this article, observed "Sometimes, we stripped bits of genius off, too. Because it was genius that was used to work around limitations that aren't there any more." He's thinking of a very odd feature of the 2.5 code—it worked around the absence of a string type in old FORTRAN by representing strings in a six-bit-per-character encoding packing five characters into a 32-bit word. That is, of course, a crazy thing to do in C, and we targeted it for removal early.
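For the curious, here is roughly what that packing trick looks like. This is a Python sketch assuming the common SIXBIT convention (character code = ASCII minus 32), not the original FORTRAN's exact table:

```python
def pack_sixbit(s: str) -> int:
    """Pack up to five characters, six bits each, into one 32-bit word.

    Assumes the SIXBIT-style convention code = ASCII - 32,
    which covers the characters from space through underscore.
    """
    assert len(s) <= 5
    word = 0
    for ch in s.upper().ljust(5):   # pad short words with spaces (code 0)
        word = (word << 6) | (ord(ch) - 32)
    return word

def unpack_sixbit(word: int) -> str:
    """Recover the five six-bit characters, dropping trailing padding."""
    chars = []
    for i in range(5):
        code = (word >> (6 * (4 - i))) & 0x3F
        chars.append(chr(code + 32))
    return "".join(chars).rstrip()
```

In FORTRAN without a string type this was an ingenious economy; in C, with real strings available, it was pure obfuscation, which is why it went early.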

We added some minor features as well. For example, Open Adventure allows some command abbreviations that are standard in text-adventure games today but weren't supported in original ADVENT. By default, our version issues the > command prompt that also has been in common use for decades. And, you can edit your command input with Emacs keystrokes.

But, and this is crucial, all the new features are suppressed by an "oldstyle" option. If you choose that, you get a user experience that even a subject-matter expert would find difficult or impossible to distinguish from the 1995 and 1976–1977 originals.

Some of you might nevertheless be furrowing your brows at this point, wondering "YAML? Emacs keystrokes? Even as options? Yikes...can this really still be Colossal Cave Adventure?"

That's a question with a long pedigree. Philosophers speak of the "Ship of Theseus" thought experiment; if Theseus leaves Athens, and on his long voyage each plank and line and spar of the ship is gradually replaced, until not a fragment of the original wood remains when he returns to Athens, is it still the same ship?

The answer is, as any student of General Semantics could tell you, "What do you mean by 'same'?" Identity is not a well defined predicate; it changes according to what kind of predictive problem you are using language to tackle. Same arrangement of bits in the source? Same UI? Same behaviors at some level deeper than UI?

There really isn't one right answer. Those of you predisposed to answer "same" might argue "Hey, it passes the same regression tests." Only, maybe it doesn't now. Remember, we fixed some bugs. On the other hand...if the ship of Theseus is still "the same" after being entirely rebuilt, does it cease to be if we learn that the replacement for one of its parts doesn't replicate a hidden flaw in the original? Or if a few improvements have been added during the voyage that weren't in the original plans?

As a matter of fact, Adventure already has come through one entire language translation—FORTRAN to C—with its "identity" (in the way hackers and other people usually think of these things) intact. I think I could translate it to, say, Go tomorrow, and it would still be the same game, even if it's nowhere near the same arrangement of bits.

Furthermore, I can show you the ship's log. If you go to the project repository, you can view each and every small transformation of the code between Adventure 2.5 and the Open Adventure tip version.

There is probably not a lot of work still to be done on this particular project, as long as our objectives are limited to performing a high-quality restoration of Colossal Cave Adventure. As they almost certainly will be; if we wanted to do something substantially new in this kind of game, the smart way to do it would not be to code custom C, but to use a language dedicated to implementing such games, such as Muddle (aka MDL) or Adventure Definition Language.

I hope some larger lessons are apparent. Although I do think Colossal Cave Adventure is interesting as an individual case in itself, I really wrote this article to suggest constructive ways to think about the general issues around restoring heirloom software—why you might want to do it, what challenges and rewards you'll find, and what the best practices are.

Here are the best practices I can identify:

  • The goals to hold in mind are 1) making the design intent of the original code available for study, and 2) preserving the oldstyle-mode UI well enough to fool an original user.

  • Build your regression-test suite first. You want to be able to demonstrate that your restoration is faithful, not just assert it.

  • Use coverage tools to verify that your regression tests are good enough to constitute a demonstration.

  • Once you have your tests, don't sweat changing tools, languages, implementation tactics or documentation formats. Those are ephemera; good design is what endures.

  • Always have an oldstyle option. Gain the freedom to improve by making, and keeping, the promise of fidelity to the original behavior in oldstyle mode.

  • Do fix bugs. This may conflict with the objective of perfect regression testing, but you're an engineer, not an embalmer. Work around that conflict as you need to.

  • Show your work. Your product is not just the restored software but the repository from which it ships. The history in that repository needs to be a continuing demonstration of good judgment and sensitivity to the original design intent of the code.

  • Document what you change, including the bug fixes. It is good practice to include maintainer's notes describing your restoration process in detail.

  • When in doubt about whether to add a feature, be neither over-eager to put your mark on the code nor a slave to its past. Instead, ask "What's in good taste?"

And while you're doing all this, don't forget to have fun. The greatest heirloom works, like Colossal Cave Adventure, were more often than not written in a spirit of high-level playfulness. You'll be truer to their intent if you approach restoring them with the same spirit.


PSSC Labs' Eco Blade 1U

Monday, 31 July 2017 - 13:26 - (Hardware)


James Gray Mon, 07/31/2017 - 08:26

Arguably "the greenest blade server on the market", PSSC Labs' new Eco Blade 1U rack server offers power and performance with energy savings of up to 46% over competing servers, says the company. Engineered specifically for high-performance, high-density computing environments, the Eco Blade is a unique server platform that simultaneously increases compute density while decreasing power use.

The solution offers two complete and independent servers contained in 1U of rack space. Each independent server supports up to 64 Intel Xeon processor cores and 1.0TB of enterprise memory, for a total of up to 128 cores and 2TB of memory per 1U. A unique design feature—the lack of a shared power supply or backplane—provides the bulk of Eco Blade's power savings and thus lower long-term TCO.

PSSC Labs calls on the IT industry to contribute its share to reducing its environmental footprint. The Eco Blade enables organizations to obtain the performance needed to fuel cutting-edge research and groundbreaking enterprises while significantly reducing the power used and, thanks to the 55% recyclable material content, waste generated via the data center.

The Eco Blade 1U server is certified compatible with Red Hat CentOS, Ubuntu and Microsoft operating systems.


My Love Affair with Synology

Thursday, 22 June 2017 - 13:23 - (Hardware)


Shawn Powers Thu, 06/22/2017 - 08:23

In my "Hodge Podge" article in the October 2016 issue, I mentioned how much I love the Synology NAS I have in my server closet (Figure 1). I got quite a few email messages from people—some wanting more information, some scolding me for not rolling my own NAS, and some asking me what on earth I need with that much storage. Oddly, the Linux-running Synology NAS has become one of my main server machines, and it does far more than just store data. Because so many people wanted more information, I figured I'd share some of the cool things I do with my Synology.

Figure 1. The Synology DS1815+ is what I use, but the entire line of Synology NAS devices shares a common interface.

Why So Much Storage?!

I guess I should address the reason I have 48TB (36 usable) of storage (Figure 2). I store a lot of data (har har har). Seriously though, I have a local copy of close to 100,000 photos, thousands of hours of home videos and several complete Linux distribution repositories. That takes a lot of storage! The bulk of my needs, however, comes from entertainment media. Ever since my kids first used DVDs to skate across the kitchen floor, I've been backing up my movies digitally to my server. Through the years, that has migrated from DVD ripping to Blu-ray ripping, but years of movies really add up. Even those aren't the bulk of my data, however.

Figure 2. The dashboard shows you information on your NAS at a glance. I'm slowly building my collection after the horrible data loss I suffered a few years ago.

I collect television series. Sometimes those collections are ripped from my TiVo, manually edited and converted to MKV. If I'm being honest, however, most of my television shows are just downloaded from torrent sites. Yes, I know it's not kosher to download torrents of television shows. But I also know that I pay more than $200/month to the cable company for every channel available, and if I wanted to take the time, I could do the TiVo rip/edit/convert dance. I just don't have the time. Because I pay for cable access, it doesn't bother me to download television shows. (We actually do buy all our Blu-ray movies though. I'm not a proponent of pirating things you don't have rights to.) It's okay if you disagree with my choice to download television shows via torrents, I get it. Really, I do. Just ignore those parts of this article!

What Kind of Drives?

Don't skimp on hard drives. That's generally good advice regardless of the situation, but with NAS devices, please spend the extra money to get drives rated for NAS. I have eight 6TB Western Digital Red NAS drives. When I bought them, the WD Red Pro drives weren't available. Still, the standard Red drives are rated for up to eight drive bays, so I'm still within spec.

I haven't always been so picky about drives. In fact, I just used to get the biggest, cheapest drives I could. Since I use RAID6, a drive or two failing isn't a big deal—except that I actually had three drives fail at exactly the same time, and I lost all my data, including family home movies that I didn't have backed up anywhere. It still hurts. So really, don't skimp on drives, it's just not worth it. (Also remember to back up, even large files. RAID isn't a backup, trust me.)

Why Synology?

I've had Drobos, QNAPs and multiple Netgear devices. They all sucked. No, really. The performance on every single device I've had in the past has been horrible (even with good drives), and I've never been able to determine exactly why. Once more than one simultaneous read happens over the network, they all just crap out. With the Synology, I can have four 1080p video streams going at once without any slowdown at all.

The other thing I like about the Synology is its software. Most other NAS devices have apps that you can install on the Linux system, but the Synology apps seem to be more elegant and work reliably (Figure 3). In fact, there are some incredible things I do with the NAS device that I'm sure weren't exactly what it was designed to do (more about that in a bit).

Figure 3. The apps are plentiful, and there are community-supported unofficial apps as well.

Ultimately, the biggest draw for me is how well Synology keeps itself updated and maintains its drives. It automatically does scans and integrity checks, plus it does system updates without disrupting the servers I have connected to it via NFS. Every other NAS I've used stays at whatever software version it comes with, because upgrading the firmware almost always means drive failures and server lockups. I'm sure there are procedures for QNAP and such that make upgrading possible, but the Synology does it automatically—and I like that a lot.

TV and Torrents

I like the SickRage program not only because it automatically searches and downloads new episodes of my television shows, but also because it organizes my existing collection. I have every episode of Star Trek that ever has been produced (including the animated series from the 1970s), and SickRage does an incredible job of naming and organizing those files. Given how long I spent ripping the Star Trek: The Next Generation DVDs, I never want to have to figure out which episode is which again!

In order to install SickRage, you actually need to install "Sick Beard Custom" and then paste in the SickRage Git URL. The short version of the story is that Sick Beard was the original program, but the developer stopped developing it, so folks forked it, and SickRage is the best fork out there, by far. Even if you're not using Synology, you should be running SickRage. Head here for the repo or here for the home page.

SickRage supports lots of torrent clients, and it supports NZB too. I've found NZB to be less reliable than it used to be, so I've moved back to 100% torrents. I like the Transmission web interface, so that's what I use on Synology. It's another maintained app, so just search for "transmission" in the package installer application. Integrating Transmission and SickRage is beyond the scope of this article, but rest assured, it's not difficult. SickRage is designed to work with Transmission, so setting it up is easy. Warning: if you use SickRage and Transmission to download television shows, you will get DMCA take-down notices from your ISP. Apparently the production companies disagree with my rationale for downloading TV episodes. Thankfully, I have a solution for that.

Networking and Traffic Routing

My Synology device has four Gigabit Ethernet ports. I think that's overkill, but since the software allows me to bond the four ports together (even with a switch that doesn't support 802.3ad), I'm happy to have more bandwidth than I need. I never have an issue with throughput, even when streaming those multiple video files mentioned above.

Since Synology supports VPN connections, the first thing I did was set up my privateinternetaccess.com account so my torrents would be directed through the VPN. I haven't gotten port forwarding to work through the VPN, but even without a redirected port, my torrents download fine. The problem is my VPN connection occasionally goes down. When it does, the torrents go through my gateway, and even when the VPN comes back up, the tracker connects me via the non-VPN connection. And, I get DMCA notices. This is very frustrating. So I decided to remove the gateway device from the Synology altogether! Bear with me.

I have a network address assigned on my local network so LAN computers can connect. That works fine. Without a gateway specified, however, the NAS can't connect to the internet for torrents, SickRage or even system updates. But when the VPN is connected, it sets the gateway address automatically to an address on the other side of the VPN (Figure 4). As long as my VPN is connected, the system has a gateway assigned, and it can access everything through the VPN. If the VPN goes down briefly, rather than defaulting to the local network gateway, it just can't connect to the internet. Once the VPN is re-established, it reassigns a VPN gateway, and boom, the NAS is back online! The only problem is how can I connect to the VPN if I can't get on the internet? The answer: static routes.

Figure 4. Notice the gateway is in the 10.x.x.x range, which is not what I use on my local network. That is assigned by the VPN.

If you look at Figure 5, you'll see that I have a static route set up so that traffic going to the IP address of my VPN goes through my LAN's gateway. Since it's only a static route for that network, the rest of the internet is still inaccessible. I also could do fancy firewall work and allow the NAS to access only the VPN and drop all other packets, but I like the solution to be self-contained. That way, if I change routers or router configs, I don't have to worry about getting DMCA notices.

Figure 5. This is the sneaky static route so I can connect to my VPN, but nothing else.
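On a generic Linux box, the same trick can be sketched with the `ip` command. The addresses below are made up for illustration (Synology's GUI does the equivalent through its static-route settings):

```shell
# Sketch only, with hypothetical addresses: VPN server 198.51.100.7,
# LAN router 192.168.1.1. With no default route configured, only the
# VPN server is reachable; everything else waits for the tunnel.
VPN_SERVER="198.51.100.7"
LAN_GATEWAY="192.168.1.1"
ROUTE_CMD="ip route add ${VPN_SERVER}/32 via ${LAN_GATEWAY} dev eth0"
echo "$ROUTE_CMD"    # on a real system, run this command with sudo
```

The `/32` mask is the important part: it scopes the route to the single VPN endpoint, so no other internet traffic can leak out the LAN gateway.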

The Synology also will act as a router, forwarding traffic. That means I can point my Roku to the Synology as its gateway device, and I'm able to watch local blackout games on the MLB.tv app, because all the traffic goes through the VPN. The only change I have to make is on my DHCP server, which gives the Synology's IP address as the Roku's gateway address. It works perfectly and saves me setting up another VPN to get around MLB's regional restrictions. (Honestly, I usually watch baseball games on TiVo, but occasionally the game is on only via streaming, and I like having that option.)
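On stock Linux, the gateway behavior described above boils down to IP forwarding plus NAT out the tunnel interface. This is a hedged sketch with assumed interface names, not the Synology implementation; the commands are printed rather than run:

```shell
# Sketch (assumed tun0 tunnel interface): forward LAN clients' traffic
# and NAT it out the VPN, so devices pointing at this box as their
# gateway reach the internet through the tunnel. Run with sudo for real.
FWD_CMD="sysctl -w net.ipv4.ip_forward=1"
NAT_CMD="iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE"
echo "$FWD_CMD"
echo "$NAT_CMD"
```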


Backups

Remember when I said RAID wasn't a backup? Yeah, I meant that. I've lost too much valuable data through the years to depend on RAID to protect my files—even when the drives and NAS device seem more solid than any I've had in the past. Thankfully, Synology has a few different backup options (Figure 6). The most practical one for large amounts of data is the Hyper Backup app, which can copy your entire NAS to a variety of destinations. Whether you choose to buy another Synology NAS and store it in your shed or back up your data to Amazon Glacier, the same Hyper Backup program can handle the regular updates.

Figure 6. Backup solutions are in great supply.

I don't want to pay for Amazon storage, even though Amazon Drive Unlimited is decently priced at $60/year. I worry that my 30TB would cause Amazon to invent a reason to suspend my account. Plus, it would take so long to back up my entire data store to the cloud that it might never get done. Right now, I just back up my irreplaceable files (home movies, photos and so on). Someday I hope to get a second Synology NAS and set up that "mirror in the shed". Still, Synology has so many backup options, it's hard to find a reason to delay setting up a backup solution!

Things I Don't Do

The Synology has a decent processor, and the RAM is even upgradeable. Still, it's not a beefy server when it comes to resource-hungry applications. For example, even though the Plex Media Server is available in the package management system, I'd never install it. Plex uses way too much CPU to transcode video streams. I'm thankful the Synology is powerful enough to stream the actual video files over fileshares, but transcoding 1080p MKV streams in real time? That's a bad idea. I have a standalone server I use for Plex Media Server, and while it can transcode at least four full-resolution video streams, it also has a huge i7 CPU and a boatload of RAM. Unless you're doing minimal streaming with low-resolution video, I encourage you to avoid running Plex Media Server on any NAS device.

I also can't run the really amazing reverse proxy server on Synology. The setup is easy, and the configuration is very intuitive, but my VPN/no-gateway setup means that the reverse proxy doesn't work outside my network. Even if I forward a port to the NAS from my router, it tries to send responses out the VPN connection and fails. Reverse proxies are easy enough to configure on any other machine in my network, so it's not a huge loss, but it's worth noting that it's something my crazy VPN system breaks.

Not the Only Option

Before you think I was paid by the folks at Synology to brag about their product, I will freely admit that a big tower server with a bunch of hard drives and software RAID makes for an incredible NAS. It means you can beef up the hardware too and do things like run Plex Media Server. I simply like the efficiency of the Synology devices. They're fast, cool running and just sip electricity. I'm sure there are other brands of NASes that do a decent job too, and Synology isn't perfect. In all honesty, however, it's the best product I've been able to find, and I have literal piles of junk NAS devices that just couldn't do the job. If you're looking for a NAS device, in my opinion, you can't go wrong with Synology.


Minifree Ltd.'s GNU+Linux Computers

Monday, 13 March 2017 - 12:19 PM - (Hardware)

James Gray Mon, 03/13/2017 - 07:19

Minifree Ltd.—doing business as "Ministry of Freedom"—exists mainly for reasons Linuxers will like: to make it easier for people to get computers that respect their freedom and privacy, and to provide funding for a meaningful project, called Libreboot.

Minifree describes Libreboot as a free (libre) and open-source BIOS/UEFI replacement that offers faster boot speeds, better security and many advanced features compared to most proprietary boot firmware.

Minifree recently announced availability of three computers: the Libreboot C201 laptop, the Libreboot D16 Desktop and Libreboot D16 Server. All come with the Libreboot firmware and Debian GNU+Linux operating system preinstalled and are free of unwanted bloatware, DRM, spyware or restrictions on computer usage rights. The Libreboot C201 laptop is a configurable, lightweight and portable laptop ideal for anyone needing a small, lightweight computer for travel, work or general entertainment purposes. The Libreboot D16 Desktop is a configurable, high-end, business-grade, secure owner-controlled workstation free of backdoors implanted by the NSA and other agencies. Finally, the Libreboot D16 Server is a configurable, high-end, business-grade, secure owner-controlled server, also free of the aforementioned backdoors.

Minifree ships its machines worldwide from the United Kingdom.


Flash ROMs with a Raspberry Pi

Monday, 06 March 2017 - 10:18 AM - (Hardware)

Kyle Rankin Mon, 03/06/2017 - 04:18

I previously wrote a series of articles about my experience flashing a ThinkPad X60 laptop with Libreboot. After that, the Libreboot project expanded its hardware support to include the ThinkPad X200 series, so I decided to upgrade. The main challenge with switching over to the X200 was that unlike the X60, you can't perform the initial Libreboot flash in software. Instead, you actually need to disassemble the laptop to expose the BIOS chip, attach a special clip called a Pomona clip that's wired to some device that can flash chips, cross your fingers and flash.

I'm not generally a hardware hacker, so I didn't have any of the special-purpose hardware-flashing tools that you typically would use to do this right. I did, however, have a Raspberry Pi (well, many Raspberry Pis if I'm being honest), and it turns out that both it and the Beaglebone Black are platforms that have been used with flashrom successfully. So in this article, I describe the steps I performed to turn a regular Raspberry Pi running Raspbian into a BIOS-flashing machine.

The Hardware

To hardware-flash a BIOS chip, you need two main pieces of hardware: a Raspberry Pi and the appropriate Pomona clip for your chip. The Pomona clip actually clips over the top of your chip and has little teeth that make connections with each of the chip's pins. You then can wire up the other end of the clip to your hardware-flashing device, and it allows you to reprogram the chip without having to remove it. In my case, my BIOS chip had 16 pins (although some X200s use 8-pin BIOS chips), so I ordered a 16-pin Pomona clip on-line at almost the same price as a Raspberry Pi!

There is actually a really good guide on-line for flashing a number of different ThinkPads using a Raspberry Pi and the NOOBS distribution; see Resources if you want more details. Unfortunately, that guide didn't exist when I first wanted to do this, so instead I had to piece together what to do (specifically which GPIO pins to connect to which pins on the clip) by combining a general-purpose article on using flashrom on a Raspberry Pi with an article on flashing an X200 with a Beaglebone Black. So although the guide I link to at the end of this article goes into more depth and looks correct, I can't directly vouch for it since I haven't followed its steps. The steps I list here are what worked for me.

Pomona Clip Pinouts

The guide I link to in the Resources section has a great graphic detailing the various pinouts you may need for various chips. Not all pins on the clip actually need to be connected for the X200. The simplified pinout for my 16-pin Pomona clip is shown in Table 1.

Table 1. Pomona Clip Pinouts

SPI Pin Name              3.3V     CS#  S0/SIO1  GND  S1/SIO0  SCLK
Pomona Clip Pin #         2        7    8        10   15       16
Raspberry Pi GPIO Pin #   1 (17*)  24   21       25   19       23

So when I wired things up, I connected pin 2 of the Pomona clip to GPIO pin 17, but in other guides, they use GPIO pin 1 for 3.3V. I list both because pin 17 worked for me (and I imagine any 3.3V power source might work), but in case you want an alternative pin, there it is.

Build Flashrom

There are two main ways to build flashrom. If you intend to build and flash a Libreboot image from source, you can use the version of flashrom that comes with the Libreboot source. You also can just build flashrom directly from its git repository. Either way, you first will need to pull down all the build dependencies:

$ sudo apt-get install build-essential pciutils \
    usbutils libpci-dev libusb-dev libftdi1 \
    libftdi-dev zlib1g-dev subversion

If you want to build flashrom directly from its source, do this:

$ svn co svn://flashrom.org/flashrom/trunk flashrom
$ cd flashrom
$ make

Otherwise, if you want to build from the flashrom source included with Libreboot, do this:

$ git clone http://libreboot.org/libreboot.git
$ cd libreboot
$ ./download flashrom
$ ./build module flashrom

In either circumstance, at the end of the process, you should have a flashrom binary compiled for the Raspberry Pi ready to use.

Enable SPI

The next step is to load two SPI modules so you can use the GPIO pins to flash. In my case, the Raspbian image I used did not default to enabling that device at boot, so I had to edit /boot/config.txt as root and make sure that the file contained dtparam=spi=on and then reboot.
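The config.txt change can be scripted. This sketch works on a scratch copy of the file so it can run anywhere; on the Pi itself, you'd target the real /boot/config.txt as root and then reboot:

```shell
# Sketch: append dtparam=spi=on only if it isn't already present.
# Operates on a local scratch file; on a Pi, edit /boot/config.txt as root.
CONFIG="config.txt"
touch "$CONFIG"
grep -q '^dtparam=spi=on' "$CONFIG" || echo 'dtparam=spi=on' >> "$CONFIG"
grep -c '^dtparam=spi=on' "$CONFIG"    # prints 1
```

Because the append is guarded by the `grep -q` check, running the script twice won't duplicate the line.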

Once I rebooted, I then could load the two spi modules:

$ sudo modprobe spi_bcm2708
$ sudo modprobe spidev

Now that the modules loaded successfully, I was ready to power down the Raspberry Pi and wire everything up.

Wire Everything Up

To wire everything up, I opened up my X200 (unplugged and with the battery removed, of course), found the BIOS chip (it is right under the front wrist rest) and attached the clip. If you attach the clip while the Raspberry Pi is still on, note that it will reboot. It's better to make all of the connections while everything is turned off. Once I was done, it looked like what you see in Figure 1.

Figure 1. Laptop Surgery

Then I booted the Raspberry Pi, loaded the two SPI modules and was able to use flashrom to read off a copy of my existing BIOS:

$ sudo ./flashrom -p linux_spi:dev=/dev/spidev0.0 \
    -r factory1.rom

Now, the thing about using these clips to flash hardware is that the connections aren't always perfect, and in some instances, I've had to attempt a read many times before it succeeded. In the above case, I recommend that once a read succeeds, you perform it a few more times and save several copies of your existing BIOS (at least three), and then use a tool like sha256sum to compare them all. You may find that one or more of your copies don't match the rest. Once you get a few consistent copies that agree, you can be assured that you have a good copy.
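The comparison step can be scripted. This sketch uses stand-in files so it runs anywhere; on real hardware, each file would come from a separate flashrom read:

```shell
# Sketch: trust a BIOS read only when repeated reads hash identically.
# Stand-in files simulate three reads; on hardware each would come from
#   sudo ./flashrom -p linux_spi:dev=/dev/spidev0.0 -r factoryN.rom
printf 'bios-image' > factory1.rom
printf 'bios-image' > factory2.rom
printf 'bios-image' > factory3.rom
UNIQUE=$(sha256sum factory1.rom factory2.rom factory3.rom |
         awk '{print $1}' | sort -u | wc -l)
if [ "$UNIQUE" -eq 1 ]; then
    echo "reads agree"
else
    echo "mismatch: check the clip connection and read again"
fi
```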

After you have a good backup copy of your existing BIOS, you can attempt a flash. It turns out that quite a bit has changed with the Libreboot-flashing process since the last time I wrote about it, so in a future column, I will revisit the topic with the more up-to-date method to flash Libreboot.


Resources

Hardware Flashing with Raspberry Pi: https://github.com/bibanon/Coreboot-ThinkPads/wiki/Hardware-Flashing-with-Raspberry-Pi


SoftMaker FreeOffice

Monday, 20 June 2016 - 15:20 PM - (Software)

James Gray Mon, 06/20/2016 - 10:20

The bottom line on SoftMaker FreeOffice 2016—the updated, free, full-featured Office alternative to the expensive Microsoft Office suite—is this: no other free office suite offers as high a level of file compatibility with Word, Excel and PowerPoint. This maxim applies to both Windows and Linux operating systems, says the suite's maker, SoftMaker Software GmbH. SoftMaker asserts that the myriad competing free alternatives often harbor problems opening the Excel, Word and PowerPoint file formats loss-free. Sometimes the layout and formatting get lost, and on other occasions, files cannot even be opened. SoftMaker sees itself as the positive exception to this rule, especially with the newly overhauled FreeOffice 2016. Benefiting greatly from SoftMaker's commercial offering, SoftMaker Office 2016, FreeOffice 2016 adds features such as improved graphics rendering, compatibility with all current Linux distributions and Windows flavors (XP to Windows 10), new EPUB export and improved PDF export and many other MS-Office interoperability enhancements.


1 2 3 4 5