LearnAboutLinux.com

Linux Gets Loud

Wednesday, 13 June 2018 - 1:08 PM - (Hardware)

Exploring the current state of musical Linux with interviews of developers of popular packages.

Linux is ready for prime time when it comes to music production. New offerings from Linux audio developers are pushing creative and technical boundaries. And, with the maturity of the Linux desktop and growth of standards-based hardware setups, making music with Linux has never been easier.

Linux always has had a place for musicians looking for inexpensive rigs to record and create music, but historically, it's been a pain to maintain. Digging through arcane documentation and deciphering man pages is not something that interests many musicians.

Loading up Linux is not as intimidating as it once was, and a helpful community is going strong. Beyond tinkering types looking for cheap beats, users range widely in experience and skill. Linux still carries an underdog reputation for a thin selection of creative applications, though.

Recently, musically inclined Linux developers have turned out a variety of new and updated software packages for both production and creative uses. From full-fledged DAWs (Digital Audio Workstations), to robust soft-synths and versatile effects platforms, the OSS audio ecosystem is healthy.

A surge in technology-focused academic music programs has brought a fresh crop of software-savvy musicians into the fold. The modular synth movement also has nurtured an interest in how sound is made and encouraged curiosity about the technology behind it.

One of the biggest hurdles in the past was the lack of core drivers for the wide variety of outboard gear used by music producers. With USB 2.0 and improvements in ALSA and JACK, more hardware became available for use. Companies slowly have opened their systems to third-party developers, allowing more low-level drivers to be built.

Hardware

In terms of raw horsepower, the ubiquity of multicore processors and cheap RAM has enabled Linux to take advantage of powerful machines. Specifically, the multithreading support the Linux kernel makes available to developers lets audio packages offload DSP and UI work to separate cores. Beyond OS-level multithreading, music software developers have taken advantage of this in a variety of ways.

A well-known API, the JACK Audio Connection Kit (JACK), handles multiple inter-application connections as well as communication with audio hardware using a multithreaded approach, enabling low latency for both audio DSP and MIDI connections.
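
To see what that routing looks like in practice, JACK ships command-line tools for inspecting and patching ports. This is only an illustrative sketch; the port names below are examples and will differ on your setup:

$ jack_lsp                                            # list every port JACK currently exposes
$ jack_connect system:capture_1 ardour:audio_in_1     # patch a hardware input into Ardour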

Ardour has leveraged multithreaded processing for some time. In early versions, it was used to split audio processing from the main interface and OS interaction onto separate cores. Now it offers powerful parallel rendering across a multitude of tracks with complex effects.

Read More...

Data Privacy: Why It Matters and How to Protect Yourself

Tuesday, 05 June 2018 - 12:00 PM - (Security)

When it comes to privacy on the internet, the safest approach is to cut your Ethernet cable or power down your device. But, because you can't really do that and remain somewhat productive, you need other options. This article provides a general overview of the situation, steps you can take to mitigate risks and finishes with a tutorial on setting up a virtual private network.

Sometimes when you're not too careful, you increase your risk of exposing more information than you should, and often to the wrong recipients—Facebook is a prime example. The company providing the social-media product of the same name has been under scrutiny recently and for good reason. The point wasn't that Facebook directly committed the atrocity, but more that a company linked to the previous US presidential election was able to access and inappropriately store a large trove of user data from the social-media site. This data then was used to target specific individuals. How did it happen though? And what does that mean for Facebook (and other social-media) users?

In the case of Facebook, a data analysis firm called Cambridge Analytica was given permission by the social-media site to collect user data from a downloaded application. This data included users' locations, friends and even the content the users "liked". The application supposedly was developed to act as a personality test, although the data it mined from users was used for far more than that, and in ways of questionable legality.

At a high level, what does this all mean? Users allowed a third party to access their data without fully comprehending the implications. That data, in turn, was sold to other agencies or campaigns, where it was used to target those same users and their peer networks. Through ignorance, it becomes increasingly easy to "share" data and do so without fully understanding the consequences.

Getting to the Root of the Problem

For some, deleting your social-media account may not be an option. Think about it. By deleting your Facebook account, for example, you may essentially be deleting the platform on which your family and friends choose to share some of the greatest events in their lives. And although I continue to throw Facebook in the spotlight, it isn't the real problem. Facebook merely is taking advantage of a system with little to no regulation on how user privacy should be handled. Honestly, we, as a society, are making up these rules as we go along.

Read More...

The Fight for Control: Andrew Lee on Open-Sourcing PIA

Wednesday, 30 May 2018 - 1:08 PM - (Security)

When I learned that our new sister company, Private Internet Access (PIA), was opening its source code, I immediately wanted to know the backstory, especially since privacy is the theme of this month's Linux Journal. So I contacted Andrew Lee, who founded PIA, and an interview ensued. Here it is.

DS: What made you start PIA in the first place? Did you have a particular population or use case—or set of use cases—in mind?

AL: Primarily PIA was rooted in my humble beginnings on IRC where it had quickly become important to protect one's IP from exposure using an IRC bouncer. However, due to jumping around in various industries thereafter, I learned a lot and came to an understanding that it was time for privacy to go mainstream, not in the "hide yourself" type of sense, but simply in the "don't watch me" sense.

DS: Had you wanted to open-source the code base all along? If not, why now?

AL: We always wanted to open-source the code base, and we finally got around to it. It's late, but late is better than never. We were incredibly busy, and we didn't prioritize it enough, but by analyzing our philosophies deeply, we've been able to re-prioritize things internally. Along with open-sourcing our software, there are a lot of great things to come.

DS: People always wonder if open-sourcing a code base affects a business model. Our readers have long known that it doesn't, and that open-sourcing in fact opens more possibilities than leaving code closed. But it would be good to hear your position on the topic, since I'm sure you've thought about it.

AL: Since Private Internet Access is a service, having open-source code does not affect the business' ability to generate revenue as a company aiming for sustainable activism. Instead, I do believe we're going to end up with better and stronger software as an outcome.

DS: Speaking of activism, back in March, you made a very strong statement, directly to President Trump and Congress, with a two-page ad in The New York Times, urging them to kill off SESTA-FOSTA. I'm curious to know if we'll be seeing more of that and to hear what the response was at the time.

AL: Absolutely! We ran a few newspaper campaigns, including one for the Internet Defense League. It's a very strong place to mobilize people for important issues for society. As a result of the campaign, many tweets from concerned Americans were received by President Trump. I would say it was a success, but from here it's up to our President. Let's hope he does the right thing and vetoes it. That said, if the bill is signed in its current form [which it was after this interview was conducted], the internet is routing, and the cypherpunks have the power of the crypto. We will decentralize and route around bad policy.

Read More...

Generating Good Passwords, Part II

Tuesday, 29 May 2018 - 1:08 PM - (Security)

Passwords. They're the bane of computer users and a necessary evil, but they have risks and challenges associated with them. None of the choices are great. If it's up to your memory, you'll end up using the same password again and again. Use a password manager like 1Password, and you're reliant on its database security and portability. Two-factor? Um, can I borrow your phone for a minute?

Still, having complex and random passwords is definitely more secure than having a favorite phrase or variation you've been using for years. You know what I mean, just own it; you've been using the same PIN and password forever, right?

Last time, I built a script that could produce a random character from one of a set of character sets. For example, a random uppercase letter can be produced like this:


uppers="ABCDEFGHIJKLMNOPQRSTUVWXYZ"

letter=${uppers:$(( $RANDOM % ${#uppers} )):1}

Add lowercase and a constrained set of punctuation and some rules on how many of each you want, and you can make some pretty complicated passwords. To start, let's just focus on a random sequence of n uppercase letters.

That's easily done:


length=10      # target length; 10 matches the sample runs below
password=""

while [ ${#password} -lt $length ] ; do
   letter=${uppers:$(( $RANDOM % ${#uppers} )):1}
   password="${password}$letter"
done

Remember that the ${#var} notation produces the length of the current value of that variable, so this is an easy way to build up the $password variable until it's equal to the target length as specified in $length.
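
For example, checking the length of one of the finished passwords from the runs below takes only a moment in any Bash shell:

$ password="HDBYPMVETY"
$ echo ${#password}
10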

Here's a quick test run or two:


$ sh makepw.sh
password generated = HDBYPMVETY
password generated = EQKIQRCCZT
password generated = DNCJMMXNHM

Looks great! Now the bigger challenge is to pick randomly from a set of choices. There are a couple ways to do it, but let's use a case statement, like this:


# uppers, lowers, punct and digits are the character-class strings defined earlier
while [ ${#password} -lt $length ] ; do
  case $(( $RANDOM % 4 )) in
     0 ) letter=${uppers:$(( $RANDOM % ${#uppers} )):1}  ;;
     1 ) letter=${lowers:$(( $RANDOM % ${#lowers} )):1}  ;;
     2 ) letter=${punct:$((  $RANDOM % ${#punct}  )):1}  ;;
     3 ) letter=${digits:$(( $RANDOM % ${#digits} )):1}  ;;
  esac
  password="${password}$letter"
done

Since you're basically weighting upper, lower, digits and punctuation the same, it's not a huge surprise that the resultant passwords are rather punctuation-heavy:


$ sh makepw.sh
password generated = 8t&4n=&b(B
password generated = 5=B]9?CEqQ
password generated = |1O|*;%&A

These are all great passwords, impossible to guess algorithmically (and, yeah, hard to remember too, but that's an inevitable side effect of this kind of password algorithm).
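
If that bothers you, one way to tilt the mix back toward letters (a sketch of mine, not part of Dave's script) is to widen the modulus and give the letter cases more than one slot each:

case $(( $RANDOM % 6 )) in
   0|1 ) letter=${uppers:$(( $RANDOM % ${#uppers} )):1} ;;   # letters now win 4 slots out of 6
   2|3 ) letter=${lowers:$(( $RANDOM % ${#lowers} )):1} ;;
   4   ) letter=${punct:$((  $RANDOM % ${#punct}  )):1} ;;
   5   ) letter=${digits:$(( $RANDOM % ${#digits} )):1} ;;
esac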

Read More...

Privacy Plugins

Monday, 28 May 2018 - 11:30 AM - (Security)

Protect yourself from privacy-defeating ad trackers and malicious JavaScript with these privacy-protecting plugins.

Although your phone is probably the biggest threat to your privacy, your web browser is a close second. In the interest of providing you targeted ads, the web is littered with technology that attempts to track each site you go to via a combination of cookies and JavaScript snippets. These trackers aren't just a privacy threat; they are also a security threat. Because of how ubiquitous these ad networks are, attackers have figured out ways to infiltrate some of them and make them serve up even more malicious code.

The good news is that a series of privacy plugins work well with Firefox under Linux. They show up as part of the standard list of approved add-ons and will help protect you against these kinds of threats. Many different privacy plugins exist, but instead of covering them all, in this article, I highlight some of my personal favorites—the ones I install on all of my browsers. Although I discuss these plugins in the context of Firefox, many of them also are available for other Linux browsers. Because all of these plugins are standard Firefox add-ons, you can install them through your regular Firefox add-on search panel.

Privacy Badger

The EFF has done a lot of work recently to improve privacy and security for average users online, and its Privacy Badger plugin is the first one I want to cover here. The idea behind Privacy Badger is to apply some of the tools from different plugins like AdBlock Plus, Ghostery and others that inspect third-party JavaScript on a page. When that JavaScript comes from a known tracking network or attempts to install a tracking cookie on your computer, Privacy Badger steps in and blocks it.

If so many other plugins do something similar, why re-invent the wheel with Privacy Badger? Well, the downside to many of the other tools is that they often require user intervention to tweak and tune. Although it's great for people who want to spend their time doing that, average users probably would rather spend their time actually browsing the web. Privacy Badger has focused on providing similar protection without requiring any special tweaking or tuning. As you browse the web, it keeps track of these different sites, and by observing their behavior, decides whether they are tracking you.

Read More...

Tor Hidden Services

Wednesday, 23 May 2018 - 12:00 PM - (Security)

Why should clients get all the privacy? Give your servers some privacy too!

When people write privacy guides, for the most part they are written from the perspective of the client. Whether you are using HTTPS, blocking tracking cookies or going so far as to browse the internet over Tor, those privacy guides focus on helping end users protect themselves from the potentially malicious and spying web. Since many people who read Linux Journal sit on the other side of that equation—they run the servers that host those privacy-defeating services—system administrators also should step up and do their part to help user privacy. Although part of that just means making sure your services support TLS, in this article, I describe how to go one step further and make it possible for your users to use your services completely anonymously via Tor hidden services.

How It Works

I'm not going to dive into the details of how Tor itself works so you can use the web anonymously—for those details, check out https://tor.eff.org. Tor hidden services work within the Tor network and allow you to register an internal, Tor-only service that gets its own .onion hostname. When visitors connect to the Tor network, Tor resolves those .onion addresses and directs them to the anonymous service sitting behind that name. Unlike with other services though, hidden services provide two-way anonymity. The server doesn't know the IP of the client, like with any service you access over Tor, but the client also doesn't know the IP of the server. This provides the ultimate in privacy, since both sides are protected.

Warnings and Planning

As with setting up a Tor node itself, some planning is involved if you want to set up a Tor hidden service so you don't defeat Tor's anonymity via some operational mistake. There are a lot of rules both from an operational and security standpoint, so I recommend you read this excellent guide to find the latest best practices all in one place.

Without diving into all of those steps, I do want to list a few general-purpose guidelines here. First, you'll want to make sure that whatever service you are hosting is listening only on localhost (127.0.0.1) and isn't viewable via the regular internet. Otherwise, someone may be able to correlate your hidden service with the public one. Next, go through whatever service you are running and try to scrub specific identifying information from it. That means if you are hosting a web service, modify your web server so it doesn't report its software type or version, and if you are running a dynamic site, make sure whatever web applications you use don't report their versions either.
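
As a minimal sketch of those two guidelines (the directory, port and web-server directive here are illustrative, not taken from the article), the torrc entry maps the public .onion port to a localhost-only listener, and the web server is told not to advertise its version:

# /etc/tor/torrc -- assumes the site itself listens only on 127.0.0.1:8080
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080

# nginx example: hide the server version string in headers and error pages
server_tokens off;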

Read More...

Generating Good Passwords, Part I

Thursday, 17 May 2018 - 11:45 AM - (Security)

Dave starts a new method for generating secure passwords with the help of 1Password.

A while back I shared a script concept that would let you enter a proposed password for an account and evaluate whether it was very good (well, maybe "secure" would be a better word to describe the set of tests to ensure that the proposed password included uppercase, lowercase, a digit and a punctuation symbol to make it more unguessable).

Since then, however, I've really been trying personally to move beyond mnemonic passwords of any sort to those that look more like gobbledygook. You know what I mean—passwords like fRz3li,4qDP? that turn out to be essentially random and, therefore, impossible to crack using any sort of dictionary attack.

Aiding me with this is the terrific password manager 1Password. You can learn more about it here, but the key feature I'm using is a combination of having it securely store my passwords for hundreds of websites and having a simple and straightforward password generator feature (Figure 1).

Figure 1. 1Password Password Generation System

If I'm working on the command line, however, why pop out to the program to get a good password? Instead, a script can do the same thing, particularly if I again tap into the useful $RANDOM shortcut for generating random numbers.

Generating Secure Passwords

The easiest way to fulfill this task is to have a general-purpose approach to generating a random element from a specific set of possibilities. So, a random uppercase letter might be generated like this:


uppers="ABCDEFGHIJKLMNOPQRSTUVWXYZ"

letter=${uppers:$(( $RANDOM % 26 )):1}

The basic notational convention used here is the super handy Bash shell variable slicing syntax of:


${variable:startpoint:charcount}

To get just the first character of a variable, for example, you can reference it as (note that the offset is zero-based):


${variable:0:1}

That's easy enough. Instead of a fixed reference number, however, I'm using $(( $RANDOM % 26 )) as a way to generate a value between 0 and 25 that's different each time.

Add strings that contain all the major character classes you seek and you've got a good start:


lowers="abcdefghijklmnopqrstuvwxyz"
digits="0123456789"
punct='()./?;:[{]}|=+-_*&^%$#@!~'  # quote characters omitted; single quotes keep $ and # literal

To get even fancier, there's another notation, ${#variable}, that returns the number of characters in a variable, so ${#punct}, for example, reports the length of the punctuation string defined above:
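
A quick check (using single quotes, as above, so the shell doesn't expand $# inside the string):

$ punct='()./?;:[{]}|=+-_*&^%$#@!~'
$ echo ${#punct}
25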

Read More...

Review: the Librem 13v2

Thursday, 03 May 2018 - 12:00 PM - (Security)

By Shawn Powers

The Librem 13—"the first 13-inch ultraportable designed to protect your digital life"—ticks all the boxes, but is it as good in real life as it is on paper?

I don't think we're supposed to call portable computers "laptops" anymore. There's something about them getting too hot to use safely on your lap, so now they're officially called "notebooks" instead. I must be a thrill-seeker though, because I'm writing this review with the Librem 13v2 directly on my lap. I'm wearing pants, but apart from that, I'm risking it all for the collective. The first thing I noticed about the Librem 13? The company refers to it as a laptop. Way to be brave, Purism!

Why the Librem?

I have always been a fan of companies who sell laptops (er, notebooks) pre-installed with Linux, and I've been considering buying a Purism laptop for years. When our very own Kyle Rankin started working for the company, I figured a company smart enough to hire Kyle deserved my business, so I ordered the Librem 13 (Figure 1). And when I ordered it, I discovered I could pay with Bitcoin, which made me even happier!

Figure 1. The 13" Librem 13v2 is the perfect size for taking on the road (photo from Purism)

There are other reasons to choose Purism computers too. The company is extremely focused on privacy, and it goes so far as to have hardware switches that turn off the webcam and WiFi/Bluetooth radios. And because they're designed for open-source operating systems, there's no "Windows" key; instead there's a meta key with a big white rectangle on it, which is called the Purism Key (Figure 2). On top of all those things, the computer itself is rumored to be extremely well built, with all the bells and whistles usually available only on high-end top-tier brands.

Figure 2. No Windows key here! This beats a sticker-covered Windows logo any day (photo from Purism).

My Test Unit

Normally when I review a product, I get whatever standard model the company sends around to reviewers. Since this was going to be my actual daily driver, I ordered what I wanted on it. That meant the following:

  • i7-6500U processor, which was standard and not upgradable, and doesn't need to be!
  • 16GB DDR4 RAM (default is 4GB).
  • 500GB M.2 NVMe (default is 120GB SATA SSD).
  • Intel HD 520 graphics (standard, not upgradable).
  • 1080p matte IPS display.
  • 720p 1-megapixel webcam.
  • Elantech multitouch trackpad.
  • Backlit keyboard.

The ports and connectors on the laptop are plentiful and well laid out. Figure 3 shows an "all sides" image from the Purism website. There are ample USB ports, full-size HDMI, and the power connector is on the side, which is my preference on laptops. In this configuration, the laptop cost slightly more than $2000.

Figure 3. There are lots of ports, but not in awkward places (photo from Purism).

The Physical Stuff and Things

The Case

The shell of the Librem 13 is anodized aluminum with a black matte texture. The screen's exterior is perfectly plain, without any logos or markings. It might seem like that would feel generic or overly bland, but it's surprisingly elegant. Plus, if you're the sort of person who likes to put stickers on the lid, the Librem 13 is a blank canvas. The underside is nearly as spartan with the company name and little else. It has a sturdy hinge, and it doesn't feel "cheap" in any way. It's hard not to compare an aluminum case to a MacBook, so I'll say the Librem 13 feels less "chunky" but almost as solid.

The Screen

Once open, the screen has a matte finish, which is easy to see and doesn't have the annoying reflection so prevalent on laptops that have a glossy finish. I'm sure there's a benefit to a glossy screen, but whatever it might be, the annoying glare nullifies the benefit for me. The Librem 13's screen is bright, has a sufficient 1080p resolution, and it's pleasant to stare at for hours. A few years back, I'd be frustrated with the limitation of a 1080p (1920x1080) resolution, but as my eyes get older, I actually prefer this pixel density on a laptop. With a higher-res screen, it's hard to read the letters without jacking up the font size, eliminating the benefit of the extra pixels!

The Keyboard

I'm a writer. I'm not quite as old-school as Kyle Rankin with his mechanical PS/2 keyboard, but I am very picky when it comes to what sort of keys are on my laptop. Back in the days of netbooks, I thought a 93%-sized keyboard would be perfectly acceptable for lengthy writing. I was horribly wrong. I didn't realize a person could get cramps in their hands, but after an hour of typing, I could barely pick my nose much less type at speed.

The Librem 13's keyboard is awesome. I won't say it's the best keyboard I've ever used, but as far as laptops go, it's right near the top of the pile. Like most (good) laptops, the Librem 13 has chiclet-style keys, but the subtleties of click pressure, key travel, springiness factor and the like are very adequate. The Librem 13v2 has a new feature, in that the keys are backlit (Figure 4). Like most geeks, I'm a touch typist, but in a dark room, it's still incredibly nice to have the backlight. Honestly, I'm not sure why I appreciate the backlight so much, but I've tried both on and off, and I really hate when the keyboard is completely dark. That might just be a personal preference, but having the choice means everyone is happy.

Figure 4. I don't notice the keyboard after hours of typing, which is what you want in a keyboard (photo from Purism).

The Trackpad

The Librem 13 has a huge (Figure 5), glorious trackpad. Since Apple is known for having quality hardware, it's only natural to compare the Librem 13 to the MacBook Pro (again). For more than a decade, Apple has dominated the trackpad scene. Using a combination of incredible hardware and silky smooth software, the Apple trackpad has been the gold standard. Even if you hate Apple, it's impossible to deny its trackpads have been better than any other—until recently. The Librem 13v2 has a trackpad that is 100% as nice as MacBook trackpads. It is large, supports "click anywhere" and has multipoint support with gestures. What does all that mean? The things that have made Apple King of Trackpad Land are available not only on another company's hardware, but also with Linux. My favorite combination is two-finger scrolling with two-finger clicking for "right-click". The trackpad is solid, stable and just works. I'd buy the Librem 13 for the trackpad alone, but that's just a throwaway feature on the website.

Figure 5. This trackpad is incredible. It's worth buying the laptop for this feature alone (photo from Purism).

The Power Adapter

It might seem like a silly thing to point out, but the Librem 13 uses a standard 19-volt power adapter with a 5.5mm/2.5mm barrel connector. Why is that significant? Because I accidentally threw my power supply away with the box, and I was worried I'd have to special-order a new one. Thankfully, the dozen or so power supplies I have in my office from netbooks, NUCs and so on fit the Librem 13 perfectly. Although I don't recommend throwing your power supply away, it's nice to know replacements are easy to find online and probably in the back of your tech junk drawer.

Hardware Switches

I'm not as security-minded as perhaps I should be. I'm definitely not as security-minded as many Linux Journal readers. I like that the Librem 13 has physical switches that disconnect the webcam and WiFi/Bluetooth. For many of my peers, the hardware switches are the single biggest selling point. There's not much to say other than that they work. They physically switch right to left as opposed to a toggle, and it's clear when the physical connection to the devices has been turned off (Figure 6). With the Librem 13, there's no need for electrical tape over the webcam. Plus, using your computer while at DEFCON isn't like wearing a meat belt at the dog pound. Until nanobots become mainstream, it's hard to beat the privacy of a physical switch.

Figure 6. It's not possible to accidentally turn these switches on or off, which is awesome (photo from Purism).

I worried a bit about how the operating systems would handle hardware being physically disconnected. I thought perhaps you'd need special drivers or custom software to handle the disconnect/reconnect. I'm happy to report all the distributions I've tried have handled the process flawlessly. Some give a pop-up about devices being connected, and some quietly handle it. There aren't any reboots required, however, which was a concern I had.

Audio/Video

I don't usually watch videos on my laptop, but like most people, I will show others around me funny YouTube videos. The audio on the Librem 13 is sufficiently loud and clear. The video subsystem (I mention more about that later) plays video just fine, even full screen. There is also an HDMI port that works like an HDMI connection should. Modern Linux distributions are really good at handling external displays, but every time I plug in a projector and it just works, my heart sings!

PureOS

The Librem 13 comes with Purism's "PureOS" installed out of the box. The OS is Debian-based, which I'm most comfortable using. PureOS uses its own repository, hosted and maintained by Purism. One of the main reasons PureOS exists is so that Purism can make sure there is no closed-source code or proprietary drivers installed on its computers. Although the distro includes tons of packages, the really impressive thing is how well the laptop works without any proprietary code. The "purity" of the distribution is comforting, but the standout feature is how well Purism chose the hardware. Anyone who has used Linux laptops knows there's usually a compromise regarding proprietary drivers and wrappers in order to take full advantage of the system. Not so with the Librem 13 and PureOS. Everything works, and works well.

PureOS works well, but the most impressive aspect of it is what it does while it's working. The pre-installed system walks you through encrypting the hard drive on the first boot. The Firefox-based browser (called "Purebrowser") uses HTTPS Everywhere, defaults to DuckDuckGo as the search engine, and if that's not sufficient for your privacy needs, it includes the Tor browser as well. The biggest highlight for me was that since Purebrowser is based on Firefox, the browsing experience wasn't lacking. It didn't "feel" like I was running a specialized browser to protect my identity, which makes doing actual work a lot easier.

Other Distributions

Although I appreciate PureOS, I also wanted to try other options. Not only was I curious, but honestly, I'm stuck in my ways, and I prefer Ubuntu MATE as my desktop interface. The good news is that although I'm not certain the drivers are completely open source, I am sure that Ubuntu installs and works very well. There are a few glitches, but nothing serious and nothing specific to Ubuntu (more on those later).

I tried a handful of other distributions, and they all worked equally well. That makes sense, since the hardware is 100% Linux-compatible. There was an issue with most distributions, which isn't the fault of the Librem 13. Since my system has the M.2 NVMe as opposed to a SATA SSD, most installers have a difficult time determining where to install the bootloader. Frustratingly, several versions of the Ubuntu installer don't allow manual selection of the correct partition either. The workaround seems to be setting up hard drive partitions manually, which allows the bootloader partition to be selected. (For the record, it's /dev/nvme0n1.) Again, this isn't Purism's fault; rather, it's the Linux community getting up to speed with NVMe drives and EFI boot systems.
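
For what it's worth, here's a hedged sketch of that manual check (device and partition names vary; /dev/nvme0n1 is simply the first NVMe drive on the system):

$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/nvme0n1
# in the installer's manual partitioning step, point the bootloader at
# /dev/nvme0n1 (not a SATA-style /dev/sdX device)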

Quirks

There are a few oddities with a freshly installed Librem 13. Most of the quirks are ironed out if you use the default PureOS, but it's worth knowing about the issues in case you ever switch.

NVMe Thing

As I mentioned, the bootloader problem with an NVMe system is frustrating enough that it's worth noting again in this list. It's not impossible to deal with, but it can be annoying.

Backslash Key

The strangest quirk with the Librem 13 is the backslash key. It doesn't map to backslash. On every installation of Linux, when you try to type backslash, you get the "less than" symbol. Thankfully, fixing things like keyboard scancodes is simple in Linux, but it's so strange. I have no idea how the non-standard scancode slipped through QA, but nonetheless, it's something you'll need to deal with. There's a detailed thread on the Purism forum that makes fixing the problem simple and permanent.
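
As an illustrative sketch only (the exact scancode/keycode pair comes from that forum thread, so treat the numbers below as placeholders), the quick, non-persistent test from a text console looks like this:

$ sudo showkey --scancodes        # press the misbehaving key and note its scancode
$ sudo setkeycodes 56 43          # 43 is the kernel keycode for backslash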

Trackpad Stuff

As I mentioned before, the trackpad on the Librem 13 is the nicest I've ever used on a non-Apple laptop. The oddities come with various distributions and their trackpad configuration software. If your distribution doesn't support the gestures and/or multipoint settings you expect, rest assured that the trackpad supports every feature you are likely to desire. If you can't find the configuration in your distro's setup utility, you might need to dig deeper.

The Experience and Summary

The Librem 13 is the fastest laptop I've ever used. Period. The system boots up from a cold start faster than most laptops wake from sleep. Seriously, it's insanely fast. I ran multiple VMs without any significant slowdowns, and I was able to run multiple video-intensive applications without thinking "laptops are so slow" or anything like that.

The only struggle I had was when I tried to use the laptop for live streaming to Facebook using OBS (Open Broadcast Studio). The live transcoding really taxed the CPU. It was able to keep up, but normally on high-end computers, it's easier to offload the transcoding to a discrete video card. Unfortunately, there aren't any non-Intel video systems that work well without proprietary drivers. That means even though the laptop is as high-end as they get, the video system works well, but it can't compare to a system with a discrete NVIDIA video card.

Don't let the live streaming situation sour your view of the Librem 13 though. I had to try really hard to come up with something that the Librem 13 didn't chew through like the desktop replacement it is. And even with my live streaming situation, I was able to transcode the video using the absurdly fast i7 CPU. This computer is lightning fast, and it's easily the best laptop I've ever owned. More than anything, I'm glad this is a system I purchased and not a "review copy", so I don't have to send it back!

Read More...

May 2018 Issue: Privacy

Tuesday, 01 May 2018 - 6:57 PM - (Security)

By Carlie Fairchild

Most people simply are unaware of how much personal data they leak on a daily basis as they use their computers. Enter our latest issue with a deep dive into privacy.

After working on this issue, a few of us on the Linux Journal team walked away implementing some new privacy practices, and we suspect you may too after you give it a read.

In This Issue:

  • Data Privacy: How to Protect Yourself
  • Effective Privacy Plugins
  • Using Tor Hidden Services
  • Interview: Andrew Lee on Open-Sourcing PIA
  • Review: Purism's Librem 13v2
  • Generating Good Passwords with a Shell Script
  • The GDPR and Open Source
  • Getting Started with Nextcloud 13
  • Examining Data with Pandas
  • FOSS Project Spotlights: Sawmill and CloudMapper
  • GitStorage Review
  • Visualizing Molecules with EasyChem

Subscribers, you can download your May issue now.

Not a subscriber? It’s not too late. Subscribe today and receive instant access to this and ALL back issues since 1994!

Want to buy a single issue? Buy the May magazine or other single back issues in the LJ store.

Read More...

Working around Intel Hardware Flaws

Monday, 30 April 2018 - 12:07 PM - (Hardware)

By Zack Brown

Efforts to work around serious hardware flaws in Intel chips are ongoing. Nadav Amit posted a patch to improve compatibility mode with respect to Intel's Meltdown flaw. Compatibility mode is when the system emulates an older CPU in order to provide a runtime environment that supports an older piece of software that relies on the features of that CPU. The thing to avoid is emulating the massive security holes created by hardware flaws in that older chip as well.

In this case, Linux is already protected from Meltdown by use of PTI (page table isolation), a patch that went into Linux 4.15 and that was subsequently backported all over the place. However, like the BKL (big kernel lock) in the old days, PTI is a heavyweight solution, with a big impact on system speed. Any chance to disable it without reintroducing security holes is a chance worth exploring.

Nadav's patch was an attempt to do this. The goal was "to disable PTI selectively as long as x86-32 processes are running and to enable global pages throughout this time."

One problem that Nadav acknowledged was that since so many developers were actively working on anti-Meltdown and anti-Spectre patches, there was plenty of opportunity for one patch to step all over what another was trying to do. As a result, he said, "the patches are marked as an RFC since they (specifically the last one) do not coexist with Dave Hansen's enabling of global pages, and might have conflicts with Joerg's work on 32-bit."

Andrew Cooper remarked, chillingly:

Being 32bit is itself sufficient protection against Meltdown (as long as there is nothing interesting of the kernel's mapped below the 4G boundary). However, a 32bit compatibility process may try to attack with Spectre/SP2 to redirect speculation back into userspace, at which point (if successful) the pipeline will be speculating in 64bit mode, and Meltdown is back on the table. SMEP will block this attack vector, irrespective of other SP2 defenses the kernel may employ, but a fully SP2-defended kernel doesn't require SMEP to be safe in this case.

And Dave, nearby, remarked, "regardless of Meltdown/Spectre, SMEP is valuable. It's valuable to everything, compatibility-mode or not."

SMEP (Supervisor Mode Execution Protection) is a hardware feature whereby the OS can set a bit in a control register on compatible CPUs so that the processor refuses to execute code residing in userspace pages while it is running in kernel (supervisor) mode, blocking a common class of privilege-escalation attacks.
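
You can check whether a given CPU advertises the feature from userspace; this one-liner just looks for the flag in /proc/cpuinfo:

$ grep -q smep /proc/cpuinfo && echo "SMEP supported"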

Andy Lutomirski said that he didn't like Nadav's patch because it drew a distinction between "compatibility mode" tasks and "non-compatibility mode" tasks. Andy said no such distinction should be made, especially since it's not really clear how to make that distinction, and because the ramifications of getting it wrong might be to expose significant security holes.

Andy felt that a better solution would be to enable and disable 32-bit mode and 64-bit mode explicitly as needed, rather than guessing at what might or might not be compatibility mode.

The drawback to this approach, Andy said, was that old software would need to be upgraded to take advantage of it, whereas with Nadav's approach, the judgment would be made automatically and would not require old code to be updated.

Linus Torvalds was not optimistic about any of these ideas. He said, "I just feel this all is a nightmare. I can see how you would want to think that compatibility mode doesn't need PTI, but at the same time it feels like a really risky move to do this." He added, "I'm not seeing how you keep user mode from going from compatibility mode to L mode with just a far jump."

In other words, the whole patch, and any alternative, may just simply be a bad idea.

Nadav replied that with his patch, he tried to cover every conceivable case where someone might try to break out of compatibility mode and to re-enable PTI protections if that were to happen. Though he did acknowledge, "There is one corner case I did not cover (LAR) and Andy felt this scheme is too complicated. Unfortunately, I don't have a better scheme in mind."

Linus remarked:

Sure, I can see it working, but it's some really shady stuff, and now the scheduler needs to save/restore/check one more subtle bit.

And if you get it wrong, things will happily work, except you've now defeated PTI. But you'll never notice, because you won't be testing for it, and the only people who will are the black hats.

This is exactly the "security depends on it being in sync" thing that makes me go "eww" about the whole model. Get one thing wrong, and you'll blow all the PTI code out of the water.

So now you tried to optimize one small case that most people won't use, but the downside is that you may make all our PTI work (and all the overhead for all the _normal_ cases) pointless.

And Andy also remarked, "There's also the fact that, if this stuff goes in, we'll be encouraging people to deploy 32-bit binaries. Then they'll buy Meltdown-fixed CPUs (or AMD CPUs!) and they may well continue running 32-bit binaries. Sigh. I'm not totally a fan of this."

The whole thread ended inconclusively, with Nadav unsure whether folks wanted a new version of his patch.

The bottom line seems to be that Linux has currently protected itself from Intel's hardware flaws, but at a cost of perhaps 5% to 30% efficiency (the real numbers depend on how you use your system). And although it will be complex and painful, there is a very strong incentive to improve efficiency by adding subtler and more complicated workarounds that avoid the heavy-handed approach of the PTI patch. Ultimately, Linux will certainly develop a smooth, near-optimal approach to Meltdown and Spectre, and probably do away with PTI entirely, just as it did away with the BKL in the past. Until then, we're in for some very ugly and controversial patches.

Note: If you're mentioned above and want to post a response above the comment section, send a message with your response text to ljeditor@linuxjournal.com.

Read More...

Weekend Reading: Privacy

Saturday, 28 April 2018 - 1:59 PM - (Security)

By Carlie Fairchild

Most people simply are unaware of how much personal data they leak on a daily basis as they use their computers. Enter this weekend's reading topic: Privacy.

The Wire by Shawn Powers

In the US, there has been recent concern over ISPs turning over logs to the government. During the past few years, the idea of people snooping on our private data (by governments and others) really has made encryption more popular than ever before. One of the problems with encryption, however, is that it's generally not user-friendly to add its protection to your conversations. Thankfully, messaging services are starting to take notice of the demand. For me, I need a messaging service that works across multiple platforms, encrypts automatically, supports group messaging and ideally can handle audio/video as well. Thankfully, I found an incredible open-source package that ticks all my boxes: Wire.

Facebook Compartmentalization by Kyle Rankin

Whenever people talk about protecting privacy on the internet, social-media sites like Facebook inevitably come up—especially right now. It makes sense—social networks (like Facebook) provide a platform where you can share your personal data with your friends, and it doesn't come as much of a surprise to people to find out they also share that data with advertisers (it's how they pay the bills after all). It makes sense that Facebook uses data you provide when you visit that site. What some people might be surprised to know, however, is just how much Facebook tracks them when they aren't using Facebook itself but just browsing around the web.

Some readers may solve the problem of Facebook tracking by saying "just don't use Facebook"; however, for many people, that site may be the only way they can keep in touch with some of their friends and family members. Although I don't post on Facebook much myself, I do have an account and use it to keep in touch with certain friends. So in this article, I explain how I employ compartmentalization principles to use Facebook without leaking too much other information about myself.

Protection, Privacy and Playoffs by Shawn Powers

I'm not generally a privacy nut when it comes to my digital life. That's not really a good thing, as I think privacy is important, but it often can be very inconvenient. For example, if you strolled into my home office, you'd find I don't password-protect my screensaver. Again, it's not because I want to invite snoops, but rather it's just a pain to type in my password every time I come back from going to get a cup of tea. (Note: when I worked in a traditional office environment, I did lock my screen. I'm sure working from a home office is why I'm comfortable with lax security.)

A Machine for Keeping Secrets? by Vinay Gupta

The most important thing that the British War Office learned about cryptography was how to keep a secret: Enigma was broken at Bletchley Park early enough in World War II to change the course of the war—and of history. Now here's the thing: only if the breakthrough (called Ultra, which gives you a sense of its importance) was secret could Enigma's compromise be used to defeat the Nazis. Breaking Enigma was literally the "zero-day" that brought down an empire. A zero-day is a bug known only to an attacker. Defenders (those creating/protecting the software) have never seen the exploit and are, therefore, largely powerless to respond until they have done analysis. The longer the zero-day is kept secret, and its use undiscovered, the longer it represents absolute power.

Own Your DNS Data by Kyle Rankin

I honestly think most people simply are unaware of how much personal data they leak on a daily basis as they use their computers. Even if they have some inkling along those lines, I still imagine many think of the data they leak only in terms of individual facts, such as their name or where they ate lunch. What many people don't realize is how revealing all of those individual, innocent facts are when they are combined, filtered and analyzed.

Cell-phone metadata (who you called, who called you, the length of the call and what time the call happened) falls under this category, as do all of the search queries you enter on the Internet.

For this article, I discuss a common but often overlooked source of data that is far too revealing: your DNS data.

Tor Security for Android and Desktop Linux by Charles Fisher

The Tor Project presents an effective countermeasure against hostile and disingenuous carriers and ISPs that, on a properly rooted and capable Android device or Linux system, can force all network traffic through Tor encrypted entry points (guard nodes) with custom rules for iptables. This action renders all device network activity opaque to the upstream carrier—barring exceptional intervention, all efforts to track a user are afterward futile.
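
The shape of those iptables rules is roughly the following sketch; the ports and the tor user name are illustrative and assume Tor is configured with TransPort 9040 and DNSPort 5353 (see the full article for the real rule set):

# let the Tor daemon's own traffic out untouched, then transparently
# redirect DNS and all new TCP connections into Tor
iptables -t nat -A OUTPUT -m owner --uid-owner debian-tor -j RETURN
iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5353
iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports 9040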

Read More...

Facebook Compartmentalization

Thursday, 12 April 2018 - 3:06 PM - (Security)

By Kyle Rankin

I don't always use Facebook, but when I do, it's in a compartmentalized browser over Tor.

Whenever people talk about protecting privacy on the internet, social-media sites like Facebook inevitably come up—especially right now. It makes sense—social networks (like Facebook) provide a platform where you can share your personal data with your friends, and it doesn't come as much of a surprise to people to find out they also share that data with advertisers (it's how they pay the bills after all). It makes sense that Facebook uses data you provide when you visit that site. What some people might be surprised to know, however, is just how much Facebook tracks them when they aren't using Facebook itself but just browsing around the web.

Some readers may solve the problem of Facebook tracking by saying "just don't use Facebook"; however, for many people, that site may be the only way they can keep in touch with some of their friends and family members. Although I don't post on Facebook much myself, I do have an account and use it to keep in touch with certain friends. So in this article, I explain how I employ compartmentalization principles to use Facebook without leaking too much other information about myself.

1. Post Only Public Information

The first rule for Facebook is that, regardless of what you think your privacy settings are, you are much better off if you treat any content you provide there as being fully public. For one, all of those different privacy and permission settings can become complicated, so it's easy to make a mistake that ends up making some of your data more public than you'd like. Second, even with privacy settings in place, you don't have a strong guarantee that the data won't be shared with people willing to pay for it. If you treat it like a public posting ground and share only data you want the world to know, you won't get any surprises.

2. Give Facebook Its Own Browser

I mentioned before that Facebook also can track what you do when you browse other sites. Have you ever noticed little Facebook "Like" icons on other sites? Often websites will include those icons to help increase engagement on their sites. What it also does, however, is link the fact that you visited that site with your specific Facebook account—even if you didn't click "Like" or otherwise engage with the site. If you want to reduce how much you are tracked, I recommend selecting a separate browser that you use only for Facebook. So if you are a Firefox user, load Facebook in Chrome. If you are a Chrome user, view Facebook in Firefox. If you don't want to go to the trouble of managing two different browsers, at the very least, set up a separate Firefox profile (run firefox -P from a terminal) that you use only for Facebook.
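
Creating and launching a dedicated profile is a one-time step; the profile name here is just an example:

$ firefox -CreateProfile facebook
$ firefox -P facebook --no-remote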

3. View Facebook over Tor

Many people don't know that Facebook itself offers a .onion service that allows you to view Facebook over Tor. It may seem counterintuitive that a site that wants so much of your data would also want to use an anonymizing service, but it makes sense if you think it through. Sure, if you access Facebook over Tor, Facebook will know it's you that's accessing it, but it won't know from where. More important, no other sites on the internet will know you are accessing Facebook from that account, even if they try to track via IP.

To use Facebook's private .onion service, install the Tor Browser Bundle, or otherwise install Tor locally, and follow the Tor documentation to route your Facebook-only browser to its SOCKS proxy service. Then visit https://facebookcorewwwi.onion, and only you and Facebook will know you are hitting the site. By the way, one advantage to setting up a separate browser that uses a SOCKS proxy instead of the Tor Browser Bundle is that the Tor Browser Bundle attempts to be stateless, so you will have a tougher time making the Facebook .onion address your home page.
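
As a rough sketch of that setup on a Debian-based system (the package name and default SOCKS port are the usual ones, but defer to the Tor documentation):

$ sudo apt install tor              # the local Tor daemon listens on 127.0.0.1:9050
$ sudo systemctl enable --now tor
# then, in the Facebook-only browser's connection settings, set a SOCKS5
# proxy of 127.0.0.1 port 9050 and enable remote DNS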

Conclusion

So sure, you could decide to opt out of Facebook altogether, but if you don't have that luxury, I hope a few of these compartmentalization steps will help you use Facebook in a way that doesn't completely remove your privacy.

Read More...

Simple Cloud Hardening

Tuesday, 10 April 2018 - 3:30 PM - (Security)

By Kyle Rankin

Apply a few basic hardening principles to secure your cloud environment.

I've written about simple server-hardening techniques in the past. Those articles were inspired in part by the Linux Hardening in Hostile Networks book I was writing at the time, and the idea was to distill the many different hardening steps you might want to perform on a server into a few simple steps that everyone should do. In this article, I take the same approach only with a specific focus on hardening cloud infrastructure. I'm most familiar with AWS, so my hardening steps are geared toward that platform and use AWS terminology (such as Security Groups and VPC), but as I'm not a fan of vendor lock-in, I try to include steps that are general enough that you should be able to adapt them to other providers.

New Accounts Are (Relatively) Free; Use Them

One of the big advantages of cloud infrastructure is the ability to compartmentalize your infrastructure. If you have a bunch of servers racked in the same rack, that kind of isolation might be difficult, but on cloud infrastructures, you can take advantage of the same technology that isolates one customer from another to isolate one of your infrastructure types from the others. Although this doesn't come completely for free (it adds some extra overhead when you set things up), it's worth it for the strong isolation it provides between environments.

One of the first security measures you should put in place is separating each of your environments into its own high-level account. AWS allows you to generate a number of different accounts and connect them to a central billing account. This means you can isolate your development, staging and production environments (plus any others you may create) completely into their own individual accounts that have their own networks, their own credentials and their own roles totally isolated from the others. With each environment separated into its own account, you limit the damage attackers can do if they compromise one infrastructure to just that account. You also make it easier to see how much each environment costs by itself.

In a traditional infrastructure where dev and production are together, it is much easier to create accidental dependencies between those two environments and have a mistake in one affect the other. Splitting environments into separate accounts protects them from each other, and that independence helps you identify any legitimate links that environments need to have with each other. Once you have identified those links, it's much easier to set up firewall rules or other restrictions between those accounts, just like you would if you wanted your infrastructure to talk to a third party.

Lock Down Security Groups

One advantage to cloud infrastructure is that you have a lot tighter control over firewall rules. AWS Security Groups let you define both ingress and egress firewall rules, both with the internet at large and between Security Groups. Since you can assign multiple Security Groups to a host, you have a lot of flexibility in how you define network access between hosts.

My first recommendation is to deny all ingress and egress traffic by default and add specific rules to a Security Group as you need them. This is a fundamental best practice for network security, and it applies to Security Groups as much as to traditional firewalls. This is particularly important if you use the Default security group, as it allows unrestricted internet egress traffic by default, so that should be one of the first things to disable. Although disabling egress traffic to the internet by default can make things a bit trickier to start with, it's still a lot easier than trying to add that kind of restriction after the fact.
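
A hedged AWS CLI sketch of that baseline (the Security Group ID and CIDR below are placeholders): a freshly created group comes with an allow-all egress rule, so strip it, then add only the ingress you actually need:

# remove the default allow-all egress rule, then open SSH from a known range only
aws ec2 revoke-security-group-egress --group-id sg-0123456789abcdef0 \
    --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.0/24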

You can make things very complicated with Security Groups; however, my recommendation is to try to keep them simple. Give each server role (for instance web, application, database and so on) its own Security Group that applies to each server in that role. This makes it easy to know how your firewall rules are being applied and to which servers they apply. If one server in a particular role needs different network permissions from the others, it's a good sign that it probably should have its own role.

The role-based Security Group model works pretty well but can be inconvenient when you want a firewall rule to apply to all your hosts. For instance, if you use centralized configuration management, you probably want every host to be allowed to talk to it. For rules like this, I take advantage of the Default Security Group and make sure that every host is a member of it. I then use it (in a very limited way) as a central place to define any firewall rules I want to apply to all hosts. One rule I define in particular is to allow egress traffic to any host in the Default Security Group—that way I don't have to write duplicate ingress rules in one group and egress rules in another whenever I want hosts in one Security Group to talk to another.

Use Private Subnets

On cloud infrastructure, you are able to define hosts that have an internet-routable IP and hosts that only have internal IPs. In AWS Virtual Private Cloud (VPC), you define these hosts by setting up a second set of private subnets and spawning hosts within those subnets instead of the default public subnets.

Treat the default public subnet like a DMZ and put hosts there only if they truly need access to the internet. Put all other hosts into the private subnet. With this practice in place, even if hosts in the private subnet were compromised, they couldn't talk directly to the internet even if an attacker wanted them to, which makes it much more difficult to download rootkits or other persistence tools without setting up elaborate tunnels.

These days it seems like just about every service wants unrestricted access to web ports on some other host on the internet, but an advantage to the private subnet approach is that instead of working out egress firewall rules to specific external IPs, you can set up a web proxy service in your DMZ that has more broad internet access and then restrict the hosts in the private subnet by hostname instead of IP. This has an added benefit of giving you a nice auditing trail on the proxy host of all the external hosts your infrastructure is accessing.

Use Account Access Control Lists Minimally

AWS provides a rich set of access control list tools by way of IAM. This lets you set up very precise rules about which AWS resources an account or role can access, using a very complicated syntax. While IAM provides you with some pre-defined rules to get you started, it still suffers from the problem all rich access control lists have—the complexity makes it easy to make mistakes that grant people more access than they should have.

My recommendation is to use IAM only as much as is necessary to lock down basic AWS account access (like sysadmin accounts or orchestration tools for instance), and even then, to keep the IAM rules as simple as you can. If you need to restrict access to resources further, use access control at another level to achieve it. Although it may seem like giving somewhat broad IAM permissions to an AWS account isn't as secure as drilling down and embracing the principle of least privilege, in practice, the more complicated your rules, the more likely you will make a mistake.

Conclusion

Cloud environments provide a lot of complex options for security; however, it's more important to set a good baseline of simple security practices that everyone on the team can understand. This article provides a few basic, common-sense practices that should make your cloud environments safer while not making them too complex.

Read More...

Subutai Blockchain Router v2.0, NixOS New Release, Slimbook Curve and More

Thursday, 05 April 2018 - 1:59 PM - (Hardware)

News briefs for April 5, 2018.

Read More...

Happy 20th Anniversary to Mozilla, New pfSense Version, Android HiddenMiner Malware and More

Friday, 30 March 2018 - 1:28 PM - (Security)

News briefs for March 30, 2018.

Read More...
