LearnAboutLinux.com

Astronomy on the Desktop

Wednesday, 15 February 2012 - 16:53 - (Software)

By Joey Bernard

Many people's initial exposure to science is through astronomy, and they are inspired by that first look through a telescope or their first glimpse of a Hubble image. Several software packages are available for the Linux desktop that allow users to enjoy their love of the stars. In this article, I look at several packages that should be available for most distributions.

The first is Stellarium, my personal favorite for day-to-day stargazing. When you install it, you get a thorough star catalog. By default, Stellarium starts up in full-screen mode. The layout makes for a very attractive display of the sky above you, and almost all the details of the display are customizable.

Figure 1. Opening Stellarium gives you a look at the local sky.

If you hover your mouse pointer over the bottom edge or the lower part of the left edge of the screen, one of two configuration panels appears. From here, you can set visual options, such as constellation outlines and names, whether galaxies and nebulae are visible, and a coordinate grid. You also can set location and time values. This means you not only can see what the sky looked like in the past or what it will look like in the future, but you also can see what it looks like on the other side of the planet. Additionally, you can add even more stars to the catalog that Stellarium uses.

Figure 2. You can set the time so it's later, letting you check out what you might want to look for that evening.

Figure 3. The configuration window lets you download even more star catalogs.

Stellarium includes a script capability. With it, you can script views of starfields and share them with others. When you install Stellarium, you get several demo scripts to use as examples. As of version 0.10.1, there is a new scripting engine based on the Qt scripting engine, with a full API that lets you interact with all of the functions Stellarium provides. The scripting language is ECMAScript, which you may know better as JavaScript. You can define your own functions to encapsulate larger chunks of work, and the for statement provides a loop structure that will look familiar to C and Java programmers.

To access and run scripts in Stellarium, you need to open the configuration window and click on the scripts tab. Once you've written your own scripts and want to run them, you can place them in the scripts subdirectory of the user data directory. On Linux machines, the user data directory is $HOME/.stellarium. Once you put your script files there, along with any textures they may require, they will show up within the list of scripts in the configuration window. A plugin architecture also is available, but it is much harder to use, and the API varies from version to version.
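For example, installing a script of your own comes down to copying it, along with any textures it needs, into that directory. The file names below are placeholders; Stellarium scripts conventionally use the .ssc extension, but check the documentation for your version:

# Create the per-user scripts directory if it doesn't exist yet
mkdir -p ~/.stellarium/scripts

# Copy the script and any textures it uses (file names are hypothetical)
cp my_tour.ssc ~/.stellarium/scripts/
cp my_texture.png ~/.stellarium/scripts/

The next time you open the configuration window, the script should appear in the list on the scripts tab.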

The nice thing about Stellarium is that it isn't limited to your computer. It can interact with the real world in a couple ways. The first is through telescope control. Stellarium provides two different mechanisms for controlling your telescope. The older mechanism is a client-server model. The server runs as a standalone application that connects to and controls one telescope. It then can listen to one or more clients, which can include Stellarium. Several options are available for the server portion, and they provide control for many telescopes from Meade, Celestron and others. The second mechanism is a plugin for Stellarium, which first was available in version 0.10.3. This mechanism can send only slew instructions to the telescope, which essentially are "go to" instructions.

One major warning is that Stellarium will not stop you from slewing to the sun. This could damage both eyes and equipment if you don't have proper filters on your telescope, so always be careful if you are working during the day.

The plugin can interact with pretty much any telescope that understands either the Meade LX200 interface or the Celestron NexStar interface.

The other way Stellarium can interact with the real world is as a planetarium. Stellarium can handle the calculations involved in projecting onto a sphere, so you can build a DIY planetarium. You need a dome whose inside surface you can project onto, a video projector and a spherical security mirror. Enable the spherical distortion feature in Stellarium, and then project the results through the video projector onto the mirror. Then, you can lie back under the dome and see the sky above you. The Stellarium Web site has links to groups on the Internet where you can find help and hints when building your own planetarium.

The other popular astronomy program is Celestia. Celestia is a three-dimensional simulation of the universe. Where most astronomy software shows you what the sky looks like from the surface of the Earth, Celestia can show you what the sky looks like from anywhere in the solar system.

Figure 4. When you first open Celestia, you get a satellite-eye view of the Earth.

Celestia has a powerful scripting engine that allows you to produce tours of the universe. When you install Celestia, you get a script called demo.cel that gives you an idea of its capabilities. The add-on section of the Celestia Web site includes a full repository of available scripts.

Because so much work has been done to make it as scientifically accurate as possible, it also is being used in educational environments. Currently, 12 journeys are available that provide information for students and the general public on the wonders of the universe. As opposed to scripts, journeys give you more control over your speed and pace, allowing you to take more time at the areas that are of most interest to you.

When you install Celestia, you get the core part of the program and a few extra add-ons. Currently, more than 500 add-ons are available, and if you install them all, you will need more than 18GB of drive space. The main repository you should check out first is located at http://www.celestiamotherlode.net.

If you want to travel to another planet in the solar system, you can click on Navigation→Go to Object. Here you can enter the name of the object and how far away you want to be. Then, click on Go To, and you'll be taken there directly. Once you're there, you can rotate your camera view with the arrow keys. In this way, you can go to Mars and turn around and see what the sky looks like from there.

Figure 5. When you want to go to an object, you can set what object you want to go to and how far away you are.

Figure 6. You can zoom in to see the Great Red Spot on Jupiter.

Figure 7. You can look out and see the night sky on Mars.

If you want to move around the orbit of the body you're currently at, you can use the Shift and arrow keys to slide around and see the whole surface. What you see when you are in orbit around another planet is a texture mapped onto the body.

Celestia's core installation includes a minimal set of textures that strive to be as accurate as possible. You can change the textures being used by including add-ons from the repository. Some of these include textures that allow you to see what the Earth may have looked like during the last Ice Age or even four billion years ago.

In 2007, Vincent Giangiulio created an add-on called Lua Edu Tools. This add-on provides all kinds of extra functionality to Celestia. A toolkit is displayed on the right side of the screen that provides sliders for controlling many of Celestia's parameters. It also provides a "cockpit" overlay, making it feel even more like you're flying through space. The default texture is the space shuttle, but you can use other ones too. Celestia also lets you use a joystick to control movement, so you can immerse yourself completely into your dream of flying through space.

You can share your experiences with others by saving still images or movies. If you click on File→Capture Image, Celestia lets you save a PNG or JPEG image file. Clicking on File→Capture Movie lets you save a movie of your travels. You can set the aspect ratio, the frame rate and the video quality. Once you click Save, Celestia will be ready to start recording. When you are ready, press the F11 key to start recording. When you're done, you can stop recording by pressing F12.

This article is only an introduction to what you can do. Hopefully, it inspires you to go explore the universe on your desktop. From there, bundle up and go spend the night out under the skies. You won't regret it.


Fade In Pro

Tuesday, 21 February 2012 - 17:27 - (Software)

By Charles Olsen

When I switched from Windows to Linux, I found software to replace almost everything I had been doing in Windows. Most of the software I needed was in the repos, although I did pay for a couple commercial programs.

The most difficult program to replace was Final Draft, a commercial program for writing screenplays. Final Draft is available for Windows and Macs, but not for Linux. It also does not run in Wine or CrossOver Office.

I understand that software for writing screenplays is a small niche, but it's not limited only to writers in Hollywood. Any company that prepares videos for training or other purposes would benefit from a program that helps write scripts.

You can write scripts with a word processor, of course. But, the formatting is tricky and goes beyond what you can accomplish just by using styles. A dedicated script-writing tool ensures that all your formatting is correct, and it also can help in other ways.

At first, I was able to get by with Celtx, a free screenplay program that is available for Windows, Mac and Linux. But a nasty bug crept into the Linux version, making it painful to enter character names for dialogue. Although the developer acknowledged the issue two years ago, and several new versions have been released since then, the bug is still there.

A new solution now is available. Fade In Professional Screenwriting Software is a powerful application for writing screenplays, and it includes tools for organizing and navigating the script, as well as tools for managing revisions and rewrites.

Fade In intelligently handles the various formatting elements of a screenplay. You can format the elements manually using key combinations or menus, or you can format everything just by using the Enter and Tab keys. Type a Scene Heading and press Enter, and the next element automatically is formatted as Action. Press Tab to change the formatting to Character, which automatically is followed by Dialogue. Press Tab to change from Dialogue to Parenthetical, which formats properly and inserts the parentheses.

Fade In builds autocomplete lists of your characters and locations. Once you've written a character or location, you can re-enter it with a couple keystrokes.

When it's time to produce a screenplay, Fade In can help by generating standard production reports including scenes, cast, locations and so on. You then can print these reports or save them to HTML or CSV.

Fade In can import and export files in these formats: Final Draft, Formatted Text, Screenplay Markdown, Unformatted Text and XML. It also can import files in Celtx or Rich Text Format and export to PDF and HTML. The Final Draft format is particularly important if you want to sell your script or submit it to certain screenplay-writing contests.

Fade In is not free. According to the Web site, the regular price is $99.95, although at the time of this writing, you can get it for $49.95. Either way, it's much cheaper than $249 for Final Draft. You can download the demo and try it first, then buy it if the software works for you.

There also are versions of Fade In for mobile devices: Android, iPhone and iPad.

You can download the Linux version as a DEB, RPM or tar.gz file in either 32-bit or 64-bit versions.
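On a Debian-based distribution, for instance, installing the downloaded package is a one-liner; the file names here are placeholders and will vary with the version and architecture you download:

# Install the downloaded DEB package (file name is hypothetical)
sudo dpkg -i fadein-linux-amd64.deb

# Or, on an RPM-based distribution:
sudo rpm -i fadein-linux-x86_64.rpm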

Check it out at http://www.fadeinpro.com.


Interfacing Disparate Systems

Tuesday, 04 September 2012 - 20:37 - (Software)

By James Litton

When hearing the word interface, most people probably think of a Graphical User Interface or a physical hardware interface (serial, USB). If you dabble in scripting or are a serious developer, you, no doubt, are familiar with the concept of software interfaces as well. Occasionally, the need arises to integrate disparate systems where an interface doesn't already exist, but with a little ingenuity, an interface can be created to bridge the disparity and help you meet your specific needs.

I have an extensive home automation implementation I developed over the years. As I knocked out the "easy" integrations, I eventually came to a point of wanting to integrate systems that are not home automation-friendly. An example of this is my alarm system. Excellent alarm panels exist on the market that make integration a cinch, but I already had a fully functional alarm system and was determined to integrate it into my home automation setup rather than replace it.

My first inclination was to hack a keypad or build my own hardware interface that would allow me to capture status information. Both of those approaches are viable, but as I thought about other options, I realized I could integrate my proprietary alarm system into my home automation system without even cracking open the alarm panel.

Before I reveal the details of how I achieved the outcome I wanted, let me first lay out my integration goals. Although it would be nice to capture sensor data from the alarm system, in my case, it was totally unnecessary as the only data that might be helpful was motion sensor data or specific zone faults. Because I already have numerous motion sensors installed that are native to my home automation build, and because fault data wasn't a factor in my immediate integration requirements, I concluded that I needed to know only if my alarm was "armed" or "unarmed". Knowing the state of the alarm system helps me make my home automation system smarter. An example of this added intelligence might be to change the thermostat setting and turn off all lights if the alarm state changes to armed. Another example might be to turn on all of the lights in the house when the garage door opens after dark and the alarm is armed.

As I thought through the scenarios a bit further, I quickly realized I needed a bit more data. Depending on how an alarm system is installed and the practices of its users, there may or may not be multiple armed states that need to be considered. In my case, I have two separate armed states. One state is "armed away" (nobody home) and the other is "armed stay" (people are in the house). It wouldn't make sense to turn off all of the lights in the house, for example, if the system was set to armed stay, but that would make perfect sense if it were set to armed away. As I continued to think through my needs, I concluded that knowing whether the system was armed away, armed stay or unarmed was all I needed to add significantly greater intelligence to my home automation scenes.

Once I had a firm grasp of my needs, I realized my alarm-monitoring company already was providing me with some of the data I was looking for in the form of e-mail messages. Every time the alarm was armed or disarmed, I would get an e-mail message indicating the state change. I had been using this feature for a while, as it was helpful to know when my kids arrived home or when they left for school in the morning. Because I had been using this notification mechanism for some time, I also knew it to be extremely timely and reliable.

Because I was getting most of the data I needed, I started thinking about ways I might be able to leverage my e-mail system as the basis for an interface to my proprietary alarm panel. In days gone by, I had used procmail to process incoming e-mail, so I knew it would be fairly easy to inject a script into the inbound mail-processing chain to scan content and take action.

Before I started down the path of writing a script and figuring out how to make my e-mail system run inbound mail through it, I needed to deal with the shortcoming I had with status notifications. You may have noticed that I said my alarm monitoring company was sending me two status notifications: one for armed and one for unarmed. I was fairly certain that an additional relay could be configured so the folks at the company could notify me with the two variations of "armed" that I needed to proceed, so I called them to discuss the matter, and sure enough, they were able to make the change I requested. In fairly short order, I was receiving the three notifications that I wanted.

With the notifications in place, I could start the task of creating a script to scan incoming mail.

To keep things as simple as possible, I decided to write the script in Bash.

To follow this example, the first thing you need to do is capture all of the data being piped into the script and save it for processing:


#!/bin/bash
# Read the inbound e-mail message from standard input, one line
# at a time, and append it to a temporary file for processing.
while read a
do
  echo "$a" >>~/tmp/results.tmp
done

This block of code redirects inbound e-mail messages to a temporary file that you now can perform search operations against. Because e-mail messages are simple text files, there are ample methods you can leverage to search for the strings that tell you the state of the alarm system. In this case, you could receive three possible messages which are "Your alarm has been Armed Stay", "Your alarm has been Armed Away" or "Your alarm has been Disarmed".

Now that you know exactly what you are looking for, use grep to perform the search operations. When you combine your grep searches with if statements, you can execute a specific command when one of your searches evaluates to true.

The heart of my home automation system is a software package that has an extensive REST API that I can leverage to change device states, set variables, control access groups and control device links. This makes it extremely easy to set a variable for the alarm state that I then can use to trigger various actions and control scenes in my home. To interact with the REST API, let's use curl.

In my case, my home automation software expects data to be sent as a PUT instead of curl's default of GET. To accomplish this, use the -X parameter to tell curl to use PUT. The -d parameter identifies the data you want to send to the server. Finally, you need to tell curl what URL to connect to:


# Push the detected alarm state to the home automation REST API
url="http://ha.example.com/vars/alarmstate"
if grep -q 'Armed Stay' ~/tmp/results.tmp; then
  curl -X PUT -d value="ARMED Stay" $url
elif grep -q 'Armed Away' ~/tmp/results.tmp; then
  curl -X PUT -d value="ARMED Away" $url
elif grep -q 'Disarmed' ~/tmp/results.tmp; then
  curl -X PUT -d value=DISARMED $url
fi

When you put all of this together, the result is a block of code like the one above that scans your file for the three possible strings and tells you the current alarm state. Because the evaluation is set up as an if/elif chain, the block stops evaluating as soon as one of the expressions is true.

In order for this interface to work correctly and consistently, it is imperative that you clean up after each execution of the script. You may have noticed in the first block of code that data is appended to the temporary file using the >> I/O redirection operator. You append data, because the data streams into the file one line at a time. If you failed to use the append operator, the resulting file would contain only the last line of data from your message. Fortunately, cleaning up after ourselves is as easy as deleting our temporary file:


rm ~/tmp/results.tmp

Now that you have a script, it needs to be injected into the e-mail processing chain in such a way that inbound mail is forced through the script. How this is accomplished will vary from system to system, so I won't go into great detail here, but I have implemented this very easily on both the open-source edition of Zimbra and the e-mail platform provided by a very large and well-known US hosting provider. My current implementation resides with a hosting provider where I have an e-mail account called ha@example.com. I configured this account to "forward" all inbound mail to my script and then throw the message away.
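As one illustration only (not the exact setup described above), on a traditional UNIX mail host that honors ~/.forward files, piping each inbound message into the script can be as simple as the line below. The script path is hypothetical, and hosted platforms generally provide their own forwarding or filtering mechanism instead:

# Pipe every message delivered to this account into the filter script
# (the path is a placeholder; adjust it to wherever the script lives)
echo '"|/home/ha/bin/alarm-filter.sh"' > ~/.forward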

With everything now in place, I configured my monitoring service to send alarm state change messages to my ha@example.com address and to my personal e-mail address. Having the messages go to both locations is helpful if you need to troubleshoot. Testing whether messages are coming from the monitoring company is as easy as checking my personal e-mail to see if the state change message is present. If the message is present in my personal mail, but state change data isn't flowing to my home automation system, I then know that I need to troubleshoot my connectivity or my interface script itself.

After having my interface script in place for a number of months, I had the opportunity to use this troubleshooting method. I noticed that I was getting notifications in my personal e-mail, but state changes were not flowing to my home automation system. In order to verify that data was flowing to my script and being saved to my temporary file, I commented out the line that deletes the temporary file and then forced an alarm state change. Sure enough, the file was created, but there was no change to the variable in my home automation system. In order to rule out a connectivity or firewall issue, I then ran the curl command from the command line manually to see if my REST API call could reach my home automation system and change the variable. That worked fine. This conclusively proved that there was an issue with my interface script. At this point, I inspected the contents of the temporary file more closely, and I saw a new line in the headers that said "Content-Transfer-Encoding: base64". Apparently, my monitoring company had made some changes to its e-mail system that I needed to account for. To do this, I would need to add a new block of code to check whether the content of the newly arrived e-mail message was base64-encoded. If the message is encoded, use Perl and the decode_base64 function in the MIME::Base64 module to decode it:


if grep -q 'Content-Transfer-Encoding: base64' ~/tmp/results.tmp; then
  perl -MMIME::Base64 -ne 'print decode_base64($_)' \
      <~/tmp/results.tmp >~/tmp/results2.tmp
  rm ~/tmp/results.tmp
  mv ~/tmp/results2.tmp ~/tmp/results.tmp
fi

Adding this block of code just before the block of code that performs the grep evaluations fixed the problem, and I was off to the races once again.
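For reference, the fragments above assemble into one short filter script along these lines. This is only a sketch using the example host name, variable and paths from earlier; adjust them to your own environment:

#!/bin/bash
# Sketch of the complete filter assembled from the fragments above.
# ha.example.com, the alarmstate variable and ~/tmp are examples only.

tmp=~/tmp/results.tmp
url="http://ha.example.com/vars/alarmstate"

# Save the inbound message piped to this script by the mail system.
while read a
do
  echo "$a" >>"$tmp"
done

# Decode the body if the monitoring company sends it base64-encoded.
if grep -q 'Content-Transfer-Encoding: base64' "$tmp"; then
  perl -MMIME::Base64 -ne 'print decode_base64($_)' <"$tmp" >"$tmp.dec"
  mv "$tmp.dec" "$tmp"
fi

# Push the matching state to the home automation system's REST API.
if grep -q 'Armed Stay' "$tmp"; then
  curl -X PUT -d value="ARMED Stay" "$url"
elif grep -q 'Armed Away' "$tmp"; then
  curl -X PUT -d value="ARMED Away" "$url"
elif grep -q 'Disarmed' "$tmp"; then
  curl -X PUT -d value=DISARMED "$url"
fi

# Clean up so the next message starts with an empty file.
rm -f "$tmp"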

I have had this homebrew script interface in place for more than a year now. Other than the encoding issue, its performance has been absolutely rock-solid. This allowed me to achieve the integration I wanted, even if it is a bit of a loose integration. Not only did it let me keep using the system I already had, but the implementation was also completely free, and after all, who doesn't like free?



Trying to Tame the Tablet

Wednesday, 08 May 2013 - 18:50 - (Software)

By Shawn Powers

Like many folks, I received a shiny new Nexus 7 tablet for Christmas. This brought me great joy and excitement as I began to plot my future paperless life. For most of the evening and an hour or so the next day, I was sure the new Android tablet would change my life forever. Sadly, it wasn't that easy. This month, I want to dive head first into the tablet lifestyle, but I'm not sure if it's really the lifestyle for me. I'll try to keep everyone posted during the next few months (most likely in the Upfront section of LJ). And please, please don't hesitate to send me messages about the ways you find your Android tablet useful at work/home/play.

At Work

The main reason I decided on the Nexus 7 was because with the leather case I bought for it (Figure 1), it was small enough to carry to meetings easily, yet big enough to view full-size documents. I figured with a tablet computer, I might be able to do away with most of the paper in my life. I have cabinets full of filed papers that I never use. I do, however, search my e-mail on a regular basis for communications sent or received years ago. I want that same accessibility for items that exist only in paper form now.

Figure 1. My case doubles as a stand.

Paperless: Evernote or Dropbox

I've been trying to go paperless since long before I got a tablet computer. There seem to be two schools of thought in the paperless department. There are the Evernote people, and there are the "every-other-kind" of people. I have Evernote on every electronic device I own (which is a significant number), and I have to admit, for raw information, Evernote is amazing. The problem comes with documents. Granted, documents can be added to an Evernote note, but they are like e-mail attachments, and they can't be modified once attached. This means, at least for me, that the only documents I ever attach are "complete" documents that are printed as PDF files.

I don't have a good solution for how to handle Word/LibreOffice documents in Evernote. So, that means I have an inconvenient combination of Evernote for unformatted information and Dropbox for documents. Thankfully, both applications run very well on Android, so although I don't have a central repository for all my information, at least I can access all the information from my tablet.

Getting Data In

Evernote includes a really nice mechanism for using a device's camera for importing digital snapshots of documents, notes, whiteboards and so forth. Unfortunately, the Nexus 7 doesn't have a rear camera. Thankfully, my cell phone has a really nice camera, and it also has Evernote installed. Because I never intended my tablet to replace my cell phone, this isn't a big issue for me. I just whip out my phone if I need to import something optically into Evernote.

My biggest hope with the Nexus 7 was that I could avoid toting around legal pads and pens to meetings. I tend to take "doodle" notes, so a laptop really isn't ideal for me at a meeting. (Plus, I tend to become distracted with a laptop and multitask my way into trouble quite often.) I researched capacitive styli and found the New Trent IMP62B to be just about the best option (Figure 2). It's less than $10, and it's remarkably precise for a stylus with a rather bulbous tip.

Figure 2. This stylus is remarkably precise given the size of its tip.

After buying a stylus, coming up with a note-taking application proved to be difficult. I almost can get there with a couple apps, but nothing has been the ideal option for me. The closest I've come to perfection is Lecture Notes, which has some critical features:

  • Importing PDF files from Dropbox for annotation during a meeting (for example, an agenda).
  • Exporting directly to Evernote.
  • Very fine lines when writing.
  • Simple interface for changing pens, erasing and so on.

I'll admit, it's still not as quick as writing on paper, but for some quick doodles on a PDF agenda, Lecture Notes does a nice job (Figure 3).

Figure 3. Lecture Notes is a great application if you want to take notes with a stylus.

My wife actually likes to type on her tablet (an iPad Mini) with the onboard keyboard. If she's taking notes, she'll just open up Google Docs and type on the screen. For me, typing on any screen is awkward and slow. If I have to do any real typing on my tablet, I'll use a Bluetooth keyboard. At that point, however, I might as well just use a laptop. In a pinch, it's certainly possible to type a few notes with the on-screen keyboard, and if you don't have a laptop, a Bluetooth keyboard will help manage some serious typing. Still, I don't recommend it. Any Nexus-size keyboards are too small to type well with, and any full-size Bluetooth keyboards are cumbersome to carry around.

Printing and Viewing

Just a couple years ago, it was absurd to think about printing from a phone or tablet. Now, it's easy to set up network printing for Android devices, and Linux users easily can share printers with iOS devices as well. So printing, interestingly enough, is fairly ubiquitous. Figure 4 shows an example of printing from Google Drive.

Figure 4. My printer has native Google Print support, but it's possible to set up a traditional printer.
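On the Linux side, sharing a CUPS printer so that iOS devices can discover it usually amounts to enabling sharing and making sure Avahi is running for Bonjour discovery. A rough sketch, assuming you already have a working CUPS queue; package names are Debian/Ubuntu-style and the printer name is a placeholder:

# Make sure CUPS and Avahi (Bonjour/mDNS) are installed and running
sudo apt-get install cups avahi-daemon

# Turn on printer sharing in CUPS
sudo cupsctl --share-printers

# Mark a specific queue as shared ("myprinter" is a placeholder name)
sudo lpadmin -p myprinter -o printer-is-shared=true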

Speaking of Google Drive, the native Google application does a decent job of creating Microsoft-compatible Office files. The newest version of Drive even allows editing and creating spreadsheet files! When combined with Android's built-in file viewer, it's difficult to find a document Android can't read. I've never been stuck in a meeting unable to view an e-mail attachment, which would be a real showstopper for me at work.

Geeking It Up

If you're stuck wearing a tie and attending meetings all day, the above information might be all you're interested in. For me, although I attend more meetings than I care for, I also have the opportunity to be a geek. A tablet computer offers some really great apps for system administrators or just geeks on their lunch break. Here are some of my favorites:

  • ConnectBot: this is the de facto standard SSH client for accessing remote servers. As with typing long documents, the on-screen keyboard can be frustrating for more than a few quick server tweaks, but the program itself is awesome. If you've ever SSH'd into a server on a cell phone screen, the 7" of real estate on the tablet will be a godsend. No geek is complete without a command-line interface, and ConnectBot provides remote access to one.
  • WiFi Analyzer: I've mentioned this app before in Linux Journal, and rightly so. It does exactly what it says on the tin: it analyzes the Wi-Fi networks in your area. Whether you want to find an open channel or check signal strength in different areas of your building, WiFi Analyzer is amazing.
  • WiFi Map Maker: I had never heard of this application, but a reader (Roman, I won't mention his last name out of respect for his privacy) sent me information on it. If you need to make a quick-and-easy map of Wi-Fi hotspots, this is hard to beat. It uses the built-in GPS on your tablet to create a thermal map of Wi-Fi coverage in real time.
  • SplashTop: now that SplashTop supports controlling Linux workstations along with Windows and OS X, it's become a whole lot more usable for me. Using its custom application installed on your computer, SplashTop allows remote control of workstations with incredible responsiveness. It's a bit like VNC simplified and on steroids. Heck, it's even possible to play PC games over the connection! (Not that you'd ever do that at work.)

At Home: a Boy and His Recliner—and Tablet

I don't think I've watched a television show or movie at my house in the past decade without a notebook computer sitting on my lap. Whether it's to look up an actor on IMDB or to catch up on RSS feeds during the boring scenes, an on-line connection has become a requirement for me in my recliner. In this case, I've found the Nexus 7 to be a decent replacement for a full-blown laptop. Not only can I do all the things I normally do with my laptop, but I also can use an XBMC remote application to control the TV. If I happen across a cool on-line video, I can send it to my XBMC unit quickly with iMediaShare, which uses Apple's AirPlay technology to stream video directly to the TV. It gives me a certain level of satisfaction to stream video from an Android device to my Linux nettop running XBMC using an Apple protocol, yet having no Apple hardware or software in the mix. Truth be told, it works a lot more consistently than the Apple TV and actual AirPlay does at work too. iMediaShare has both a free and paid version, available on the Google Play store.

One thing I never do on my laptop is read books. Even though I can read countless Web articles on the computer, for some reason I can't bring myself to read actual book-length material. With the tablet on my lap instead of a laptop, flipping open the Kindle app allows me to read a few pages of a book if there's nothing interesting on TV. Why the Kindle app? I'm glad you asked. As it turns out, even though it has the absolute worst interface for finding a book in your collection, it has some features that I find indispensable:

  • With the "Personal Documents" feature Amazon offers, any DRM-free ebook can be e-mailed to your account and stored on Amazon Cloud. It then can be retrieved from any Kindle device or app (excluding the Cloud Reader, but I don't read books on my computer screen anyway).
  • WhisperSync used to work only on Amazon-purchased materials, but now it works on Personal Documents too. This means I can pick up my cell phone to read a few pages at the doctor's office, and then pick up my tablet later and automatically be right where I left off. Because this works across platforms, it makes the Kindle reader my go-to app.
  • I keep my DRM-free e-book collection at home on Calibre. With Calibre's export feature, sending a book to a specific Kindle device's e-mail address is a single click away.

I really do wish Amazon would improve the browsing interface for Android devices. I suspect Amazon is trying to push people into buying a Kindle Fire, however, since it also won't release the Amazon Prime streaming app. Oh well, the WhisperSync feature makes all the difference for me, and I'm willing to suffer a cruddy interface when opening a book.

Pure, Down-Home Entertainment

The tablet size and touchscreen really does make it a perfect device for simple gaming. Whether you want to sling Angry Birds at a bunch of pigs or use the tablet like a steering wheel to drive your 4x4 across rough terrain, the Nexus 7 is awesome. I'm not much of a gamer, but as it happens, that's exactly the type of person tablet games are made for! If I want to play a quick game of Solitaire or even shoot a couple zombies, the tablet interface is perfect.

Entertainment doesn't stop with games, however. I've mentioned Plex in recent issues of Linux Journal, but it bears mentioning again. If you have a collection of videos on your home server, Plex will transcode and stream them to you anywhere. It works at least as well as the AirVideo application on iOS, and the server component works excellently on a headless Linux server. When you add Netflix, Hulu Plus, Smart Audiobook Player, Pandora, Google Music, Amazon MP3 and the ability to store local media, it's hard to beat the Nexus 7 for media consumption.

And in between Work and Home

One place I never expected to use my tablet was in my car. No, I don't read books or watch videos during the daily commute, but I certainly enjoy listening to audiobooks. With its built-in Bluetooth connection, I happily can stream a book through my car's audio system. I find traffic jams much more palatable now that it means more time for "reading".

I've also found Google Maps' ability to download maps for off-line use to be awesome. I opted to get the Wi-Fi-only model of the Nexus 7, so when I'm in the car, I don't have Internet connectivity. My car doesn't have a navigation system, so the 7" screen and off-line maps make for an incredible GPS system. Google's turn-by-turn navigation is amazing, and the nice big screen means it's more useful than my phone's GPS. I don't have a great way to mount the tablet in my car yet, but I suspect with a bit of Velcro it won't be a big problem.

Where to Go from Here?

I've given you a glimpse at how I use my tablet on a day-to-day basis. I hesitated to do this though, because I don't feel I'm really using the Nexus 7 to its fullest potential. Based on a few conversations I've had with fellow readers, however, I don't think I'm alone. I don't think tablet computers will replace desktops or even laptops any time soon, but I do think they have a place in our daily lives. Hopefully, this article gets you started with integrating a tablet computer into your everyday life. I look forward to hearing about and sharing your experiences, so please write me at shawn@linuxjournal.com.

Resources

Dropbox: http://www.dropbox.com

Evernote: http://www.evernote.com

New Trent IMP62B Stylus: http://www.newtrent.com/stylus-pen-imp62b.html

Google Drive: http://drive.google.com

ConnectBot: http://code.google.com/p/connectbot

WiFi Analyzer: https://sites.google.com/site/farproc/wifi-analyzer

Dave's Apps: http://www.davekb.com/apps

SplashTop: http://www.splashtop.com

iMediaShare: http://www.imediashare.tv

XBMC: http://www.xbmc.org


Designing Electronics with Linux

Wednesday, 22 May 2013 - 20:28 - (Software)

By Joey Bernard

In many scientific disciplines, the research you may be doing is completely new. It may be so new that there isn't even any instrumentation available to make your experimental measurements. In those cases, you have no choice but to design and build your own measuring devices. Although you could build them by trial and error, having a way to model them first to see how they will behave is a much better choice. This is where oregano comes in. With oregano, you can design your circuitry ahead of time and run simulations on it to iron out any problems you may encounter.

The first step, as always, is installing the software. Most distributions should have a package for oregano available. If you want to follow the source version, it is available at GitHub. Oregano also needs another software package to handle the actual simulation. The two packages it currently can work with are Gnucap and ngspice. Either of these two packages needs to be installed in order to do the calculations for the simulation. While this is handled automagically by your distribution's package manager, you will need to install this dependency yourself if you are building from source.
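On a Debian or Ubuntu system, for example, pulling in oregano along with the simulation back ends might look like the following; package names can differ between distributions, so check your package manager:

# Install oregano plus the Gnucap and ngspice simulation engines
# (package names are typical for Debian/Ubuntu but may vary)
sudo apt-get install oregano gnucap ngspice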

Once it's installed, you will get a blank new project when you first start up oregano (Figure 1). On the right-hand side, you should see a list of elements you can use to build your circuits. It starts up with the default library selected. This library provides all the standard electronic components you likely will want to use. But, this isn't the only library included. You can select from other libraries, such as TTL, Linear, CPU or Power Devices, among others.

Figure 1. On startup, you get a blank canvas and a parts list.

Each of these libraries contains a list of associated elements you can use in your circuits. Selecting one of the elements shows a preview of the schematic drawing of that element in the bottom window. You then can drag and drop the element onto your canvas and start building your circuit. Once you have an element on the canvas, you can double-click the element to edit its properties (Figure 2). You need to click on the "Draw wires" icon at the top of the window in order to connect the elements together into a proper circuit.

Figure 2. The property window depends on which properties are available for that element.

All circuits need a few components in order to actually function. The first of these is the ground, the element labeled GND in the default library. Along with ground, you need some sort of power source. In most cases, you will want some form of DC current, which is provided by the element labeled VDC in the default library. With those two important elements in your circuit, you can go ahead and wire up the rest of it.

Once you have a circuit made up, you will want to run a simulation to see how it behaves. Because of the nature of electrical circuits, you need to put sensors into the circuit to see its behavior. You can click on the "Add voltage clamp" icon at the top of the window to select the sensor object. Then, you can click on the areas of your circuit where you want to take measurements during the simulation. At each point you click, you will see a new icon on your circuit marking a sensor location. Double-clicking on the clamp will pop up a window where you can set the parameters of what is being measured (Figure 3). You need at least one clamp in your circuit before you can run a simulation; otherwise, you won't have any measurements to study.

Figure 3. Here you can select the properties for the clamp.

Once you have all of your clamp points selected, you can run the simulation and see what happens by clicking on the "Run a simulation" icon at the top of the window (Figure 4). When you do so, oregano opens a new window where you can see a plot of the data registered by the clamp (usually voltage or current).

Figure 4. Plotting the results of a circuit clamp.

When you do analysis, you have the choice of two different circuit analysis programs: Gnucap and ngspice. On Ubuntu, the default analysis program installed as a dependency is Gnucap. This means you need to install ngspice explicitly if you want to use it instead.

To select the analysis engine, click on Edit→Preferences. In this dialog, you also can set whether the log window will open automatically when needed, and you can set the data paths for the models and libraries that will be available for your circuits. In most cases, you will want to leave those as is.

To help you get started, oregano comes with several examples. Again, on Ubuntu (since that is my current desktop), these examples are located in /usr/share/doc/oregano/examples. You might want to load one of these examples first.

Once you have a completed circuit and want to run a simulation, you will want to set parameters to control that simulation. Click on the menu item Edit→Simulation Settings to bring up the dialog window. The first tab lets you set analysis parameters, such as transient options, Fourier options, DC sweep options and AC options. Clicking on any of the check boxes will open up a subset of further options for each of those sections. The second tab lets you set a series of analysis options. You also can set parameters that may affect your circuit, such as the ambient temperature.

Once you have all of the options and parameters set, you can start the simulation by selecting the menu item Tools→Simulate or by pressing F11. Don't forget to attach some test clamps first; otherwise, you will get an error. Your simulation will run and pop up a new plot window where you can look at the values generated within your circuit. You can select whether to look at the transient analysis or AC analysis.

On the left, you will see a list of available plotting options. On the right-hand side, you will find the actual plot of your data. Only the items you select from the list will be plotted, which means that when this window first opens, nothing actually is plotted.

You also can plot functions of the available values. For example, you could plot the difference in voltage between two separate test clamps. These functions will be available in the list on the left side, so you can select them and have them plotted on the graph.

Additionally, you can include even more complicated elements, like full CPUs, in your circuit. The problem with these is that the way they respond to electrical signals can be very complicated. These elements need a separate model file that describes their response to signals. Unfortunately, the licensing for model files means that many cannot be included with oregano. You can search the Internet and download the model files for the elements that interest you, or you can create your own model file. In either case, you place the model files into the directory set in the preferences.

When you actually want to build your circuit, you can export the associated diagram by clicking on the menu item File→Export. You then can export the circuit diagram as either an SVG file, a PDF, a PostScript file or a PNG. Now, you can go ahead and build your new test equipment, secure in the knowledge that you did some initial testing and should get the behavior you need.


The Usability of GNOME

Monday, 16 February 2015 - 20:49 - (Software)

By Jim Hall

I work at a university, and one of our faculty members often repeats to me, "Software needs to be like a rock; it needs to be that easy to use." And, she's right. Because if software is too hard to use, no one will want to use it.

I recently spoke at GUADEC, the GNOME Users And Developers European Conference, and I opened my presentation with a reminder that GNOME is competing for mind share with other systems that are fairly easy for most people to use: Mac, iPad, Windows and Chromebook. So for GNOME to continue to be successful, it needs to be easy for everyone to use—experts and newcomers alike. And, that's where usability comes in.

So, what is usability? Usability is about the users. Users often are busy people who are trying to get things done. They use programs to be productive, so the users decide when a program is easy to use. Generally, a program has good usability if it is easy for new users to learn, easy for them to use, and easy for them to remember when they use the program again.

In a more practical view, average users with typical knowledge should be able to use the software to perform real tasks. The purpose of usability testing, therefore, is to uncover issues that prevent general users from employing the software successfully. As such, usability testing differs from quality assurance testing or unit testing, the purpose of which is to uncover errors in the program. Usability testing is not a functional evaluation of the program's features, but rather a practical determination of the program's operability.

Usability testing does not rely on a single method. There are multiple approaches to implement usability practices, from interviews and focus groups to formal usability testing. Whatever method you use, the value of usability testing lies in performing the evaluation during development, not after the program enters functional testing when user interface changes become more difficult to implement. In open-source software development, the community of developers must apply usability testing iteratively throughout development. Developers do not require extensive usability experience to apply these usability practices in open-source software.

I prefer the formal usability test, and it's not hard. You can gain significant insight just by gathering a few testers and watching them use the software. With each iteration, usability testing identifies a number of issues to resolve and uncovers additional issues that, when addressed, will further improve the program's ease of use. Usability cannot be addressed only at the end of a software development lifecycle. If you wait until the end, it is usually too late to make changes.

How to Run a Usability Test—Usability Testing in GNOME

I recently worked with the GNOME Design Team to examine the usability of GNOME. This was an opportune moment to do usability testing in GNOME. The project was in the process of updating the GNOME design patterns: the gear menu, the application menu, selection mode and search, just to list a few. I assembled a usability test to understand how well users understand and navigate the new design patterns in GNOME. Here is how I did that.

The first step in planning a usability test is to understand the users. Who are they, and what tasks do they typically want to perform? With GNOME, that answer was easy. The GNOME Web site explains that "GNOME 3 is an easy and elegant way to use your computer. It is designed to put you in control and bring freedom to everybody." GNOME is for everyone, of all ages, for casual users and software developers.

From there, I worked with the GNOME Design Team to build a usability test for these users. The test also needed to exercise the GNOME design patterns. I compared the design patterns used by each GNOME program and decided five GNOME applications provided a reasonable representation of the design patterns:

  1. gedit (text editor)

  2. Web (Web browser)

  3. Nautilus (file manager)

  4. Software (similar to an app store)

  5. Notes (a simple note-taking program)

Having decided on the programs, I wove a set of test scenarios that exercised the design patterns around tasks that real people would likely do. Designing the test scenarios is an important part of any usability test. You need to be very careful in the wording. It is too easy to "give away" accidentally how to do something just by using a particular turn of phrase. For example, one of my scenarios for Web asked testers to "Please make the text bigger" on a Web site—not to "Increase the font size", which would have hinted at the menu action they would need to use to accomplish the task.

Because I work on a university campus, I invited students, faculty and staff to participate in a usability study. Volunteers were selected without preference for gender, age group or level of experience. This reflects GNOME's preference to target a broad range of users. Each tester was given a $5 gift card to the campus coffee shop as a "thank you" for participating in the usability test.

In total, 12 testers participated in the usability test, representing a mix of genders and an age range spanning 18–74. Participants self-identified their level of computer expertise on a scale from 1 to 5, where 1 indicated "No knowledge" and 5 meant "Computer expert". No testers self-identified as either 1 or 5. Instead, they fell within a range between "2: I know some things, but not a lot" and "4: I am better than most", with an average rating of 3.25 and a mode of "3: I am pretty average."

Before each usability test session, every participant received a brief description of the usability study, explaining that this was a usability test of the software, not of them. This introduction also encouraged testers to communicate their thought process. If searching for a print action, for example, the participant should state "I am looking for a 'Print' button." Testers were provided a laptop running a "liveUSB" image of GNOME 3.12 containing a set of example files, including the text file used in the gedit scenario tasks. However, the USB image proved unstable for most programs in the usability test. To mitigate the stability issues, I rebooted the laptop into Fedora 20 and GNOME 3.10 to complete the scenario tasks for Web, Nautilus, Software and Notes. In testing GNOME, each participant used a separate guest account that had been pre-loaded with the same example files. These example files also included items unrelated to the usability test, similar to how real users keep a variety of documents, photos and other files on their computers, allowing participants to navigate these contents to locate files and folders required for the scenario tasks.

At the end of each usability test, I briefly interviewed the testers to probe their responses to the tasks and programs, to better understand what they were thinking during certain tasks that seemed particularly easy or difficult. Overall, the usability test included 23 scenario tasks, which most testers completed in 50–55 minutes.

My Usability Test Results

The value of a usability test is finding areas where the program is difficult to use, so developers can improve the interface in the next version. To do that, you need to identify the tasks that were difficult for most users to perform.

You easily can see the usability test results by using a heat map. This kind of visualization neatly summarizes the results of the usability test. In the heat map, each scenario task is represented in a separate row, and each block in the row represents a tester's experience with that task (Figure 1). Green blocks indicate tasks that testers completed with little or no difficulty, and yellow blocks signify tasks that presented moderate difficulty. Red boxes denote tasks where testers experienced extreme difficulty or where testers completed tasks incorrectly. Black blocks indicate tasks the tester was unable to complete, while white boxes indicate tasks omitted from the test, usually for lack of time.

Figure 1. Usability Heat Map

The "hot spots" in the heat map show tasks that were difficult for testers.

1) Interestingly, all participants experienced significant issues with changing the default font in gedit (GNOME 3.12). A significant number of testers were unable to accomplish a similar task in Notes. In observing the tests, the testers typically looked for a "font" or "text" action under the gear menu. Many participants referred to the gear menu as the "options" or "settings" menu because of a previous affiliation with the gear icon and "settings" in other Mac OS X and Windows applications (Figure 2).

Figure 2. Header Bar for gedit, Showing the Gear Menu

These participants expected that changing the font was an option in the program, and therefore searched for a "font" action under the gear or "options" menu. Part of this confusion stemmed from thinking of the text editor as though it were a word processor, such as Microsoft Word, which uses items in menus or on a toolbar to set the document font. This behavior often was exhibited by first highlighting all the text in the document before searching for a "font" action.

2) In the Nautilus file manager, testers also experienced serious difficulty in creating a bookmark to a folder. In GNOME, this task is usually achieved by clicking into the target folder then selecting "Bookmark this Location" from the gear menu, or by clicking and dragging the intended folder onto "Bookmarks" in the left pane (Figure 3).

Figure 3. Dragging a Folder to Create a Bookmark in Nautilus

However, testers initially addressed this task by attempting to drag the folder onto the GNOME desktop. When interviewed about this response, almost all participants indicated that they prefer to keep frequently accessed folders on the desktop for easier access. Most testers eventually moved the target folder into the Desktop folder in their Home directory, and believed they successfully completed the task even though the target folder did not appear on the desktop.

3) Testers also experienced difficulty when attempting "find and replace text" in gedit. In this task, testers employed the "Find" feature in gedit to search for text in the document. Experimenting with "Find", testers said they expected to replace at the same time they searched for text. After several failed attempts, testers usually were able to invoke the "Find and Replace" action successfully under the gear menu.

Although the overall GNOME desktop was not part of the usability test, many testers experienced difficulty with the GNOME "Activities" hot corner. In the GNOME desktop environment, the Activities menu reveals a view of currently running programs and a selection of available programs. Users can trigger the Activities menu either by clicking the "Activities" word button in the upper-left corner of the screen or by moving the mouse into that corner (the "hot corner"). Although testers generally recovered quickly from the unexpected "hot corner" action, this feature caused significant issues during the usability test.

General Issues

The open-source software usability challenge is cultural. To date, usability has been antithetical to open-source software philosophy, where most projects start by solving a problem that is interesting to the developer, and new features are incorporated based on need. Crafting new functionality takes priority, and open-source developers rarely consider how end users will access those features.

However, a key strength of open-source software development is the developer-user relationship. In open-source software projects, the user community plays a strong role in testing new releases. Unfortunately, developers cannot rely on the typical user-testing cycle to provide sufficient user interface feedback. Developers can learn a great deal simply by observing a few users interacting with the program and making note of where testers have problems. This is the essence of a usability test.

Usability tests need not be performed in a formal laboratory environment, and developers do not need to be experts in order to apply usability testing methodology to their projects. Developers need only observe users interacting with the program for usability issues to become clear. A handful of usability testers operating against a prototype provides sufficient feedback to make informed usability improvements. And with good usability, everyone wins.

Read More...

SoftMaker FreeOffice

Monday, 20 June 2016 - 15:20 PM - (Software)

SoftMaker FreeOffice

Image
James Gray Mon, 06/20/2016 - 10:20

The bottom line on SoftMaker FreeOffice 2016—the updated, free, full-featured office alternative to the expensive Microsoft Office suite—is this: no other free office suite offers as high a level of file compatibility with Word, Excel and PowerPoint. This claim applies to both Windows and Linux, says the suite's maker, SoftMaker Software GmbH. SoftMaker asserts that the myriad competing free alternatives often have trouble opening Excel, Word and PowerPoint files without loss: sometimes the layout and formatting are mangled, and on other occasions, files cannot be opened at all. SoftMaker sees itself as the positive exception to this rule, especially with the newly overhauled FreeOffice 2016. Benefiting greatly from SoftMaker's commercial offering, SoftMaker Office 2016, FreeOffice 2016 adds features such as improved graphics rendering, compatibility with all current Linux distributions and Windows versions (XP to Windows 10), new EPUB export, improved PDF export and many other MS Office interoperability enhancements.

Read More...

Heirloom Software: the Past as Adventure

Thursday, 07 September 2017 - 12:17 PM - (Software)

Heirloom Software: the Past as Adventure

Image
Eric S. Raymond Thu, 09/07/2017 - 07:17

Through the years, I've spent what might seem to some people an inordinate amount of time cleaning up and preserving ancient software. My Retrocomputing Museum page archives any number of computer languages and games that might seem utterly obsolete.

I preserve this material because I think there are very good reasons to care about it. Sometimes these old designs reveal unexpected artistry, surprising approaches that can help us break free of assumptions and limits we didn't know we were carrying.

But just as important, cultures understand themselves through their history and their artifacts, and this is no less true of programming cultures than of any other kind. If you're a computer hacker, great works of heirloom software are your heritage as surely as Old Master paintings are a visual artist's; knowing about them enriches you and helps solidify your relationship to your craft.

For exactly re-creating historical computing experiences, not much can beat running the original binary executables on a software emulator for their host hardware. There are small but flourishing groups of re-creationists who do that sort of thing for dozens of different historical computers.

But that's not what I'm here to write about today, because I don't find that kind of museumization very interesting. It doesn't typically yield deep insight into the old code, nor into the thinking of its designers. For that—to have the experience parallel to appreciating an Old Master painting fully—you need not just a running program but source code you can read.

Therefore, I've always been more interested in forward-porting heirloom source code so it can be run and studied in modern environments. I don't necessarily even consider it vital to retain the original language of implementation; the important goals, in my view, are 1) to preserve the original design in a way that makes it possible to study that design as a work of craft and art, and 2) to replicate as nearly as possible the UI of the original so casual explorers not interested in dipping into source code can at least get a feel for the experiences had by its original users.

Now I'll get specific and talk about Colossal Cave Adventure.

This game, still known as ADVENT to many of its fans because it was written on an operating system that supported only monocase filenames at most six characters long, is one of the great early classics of software. Written in 1976–77, it was the very first text adventure game. It's also the direct ancestor of every rogue-like dungeon simulation, and through those the indirect ancestor of a pretty large percentage of the games being written even today.

If you're of a certain age, the following opening sequence will bring back some fond memories:


Welcome to Adventure!!  Would you like instructions?

> n

You are standing at the end of a road before a small brick building.
Around you is a forest.  A small stream flows out of the building and
down a gully.

> in


You are inside a building, a well house for a large spring.

There are some keys on the ground here.

There is a shiny brass lamp nearby.

There is food here.

There is a bottle of water here.

>

From this beginning, the game develops with a wry, quirky, humorous and somewhat surrealistic style—a mode that strongly influenced the folk culture of computer hackers that would later evolve into today's Open Source movement.

For a work of art that was the first of its genre, ADVENT's style seems in retrospect startlingly mature. The authors weren't fumbling for an idiom that would later be greatly improved by later artists more sure of themselves; instead, they achieved a consistent (and, at the time, unique) style that would be closely emulated by pretty much everyone who followed them in text adventures, and not much improved on as style even though the technology of the game engines improved by leaps and bounds, and the range of subjects greatly widened.

ADVENT was artistically innovative—and with an architecture ahead of its time as well. Though the possibility had been glimpsed in research languages (notably LISP) as much as a decade earlier, ADVENT is one of the earliest programs still surviving to be organized as a complex, declaratively specified data structure walked by a much simpler state machine. This is a design style that is underutilized even today.

The continuing relevance of ADVENT's actual concrete source code, on the other hand, is quite a different matter. The implementation aged much more rapidly—and badly—than the architecture, the game or its prose.

ADVENT was originally written under TOPS-10, a long-defunct operating system for the DEC PDP-10 minicomputer. The source for the original version still exists (you can find it and other related resources at the Interactive Fiction Archive), but it tends to defeat attempts to appreciate it as a work of programming art because it's written in an archaic dialect of FORTRAN with (by actual count) more than 350 gotos in its 2.4KLOC of source code.

Preserving that original FORTRAN is, therefore, good for establishing provenance (as historians think about these things) but doesn't do a whole lot for what I've suggested are the cultural purposes of keeping these artifacts around. For that, a faithful translation into a more modern language would be far more useful.

As it happens, Don Woods' 1977 version of ADVENT was translated into C less than two years after it was written. You can still play it—and read the code—as part of the BSD Games package. Alas, while that translation is serviceable for building and running the program, it's not so great for reading. It is less impenetrable than the FORTRAN, but was not moved fully to idiomatic C and reads a bit strangely to a modern eye. (To be fair to the translators, the C language was still in its childhood in 1977, and its modern idioms weren't all that well developed yet.)

Thus, there are still a forbidding number of gotos in the BSD translation. Lots of information is passed around through shared globals in a way that was typical in FORTRAN but was questionable style in C even then. The BSD C code is full of mystery constants inherited from the ancestral FORTRAN source. And there is a serious comprehensibility problem around the custom text database that both the original FORTRAN and BSD C versions used—a problem I'll return to later in this article.

Through the late 1970s and early 1980s a lot of people wrote extensions of ADVENT, adding more rooms and treasures. The history of those variants is complicated and difficult to track. Almost lost in the hubbub was that the original authors—Will Crowther and Don Woods—continued to revise their game themselves. The last mainline version—the last release by Don Woods—was Adventure 2.5 in 1995.

I found Adventure 2.5 in the Interactive Fiction Archive in late 2016. Two things caught my attention about it. First, I had not previously known that Crowther and Woods themselves had shipped a version so extended from the famous original. Second—and unlike the early BSD port—there was nothing resembling what we'd expect in a modern source release to go with the bare code and the Makefile. No manual page. No licensing statement.

Furthermore, the 2.5 code was deeply ugly. It was C, but in worse shape than the BSD port. The comments actually included an apology from Don Woods explaining that it had been mechanically lifted from FORTRAN by a homebrew translator of his own devising—and apologizing for the bad style.

Nevertheless, I saw a possibility—and I wrote Don asking his permission to ship a cleaned-up version under a true open-source license. The reply was some time in coming, but Don not only granted permission, speaking for both himself and Will Crowther, he also actively encouraged me to do this thing.

Now a reminder about what I think the goals of heritage preservation ought to be: I felt it was essential that the cleaned-up version should at no point break functional compatibility with what we got from Woods and Crowther. Therefore, the very first thing I did after getting the heirloom source to build clean was add the ability for it to capture command logs for regression testing.

When you do a restoration like this, it's not enough merely to make a best effort to preserve original behavior. You ought to be able to prove you have done so. Best practice, then, is to start by building a really comprehensive set of regression tests. And that's what I did.
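
A harness for that kind of testing can be very small. Here is a minimal sketch in Python; the binary name ./advent and the tests/*.log and tests/*.chk file names are hypothetical, not necessarily what the project uses. It replays each recorded command log through the game and diffs the output against a previously captured golden transcript.

#!/usr/bin/env python3
# Minimal regression-replay sketch: feed recorded command logs to the game
# binary and compare its output against golden transcripts captured earlier.
# File names (./advent, tests/*.log, tests/*.chk) are illustrative only.
import glob
import subprocess
import sys

failures = 0
for logfile in sorted(glob.glob("tests/*.log")):
    golden = logfile[:-4] + ".chk"
    with open(logfile) as f:
        commands = f.read()
    # Run the game with the recorded commands on stdin and capture stdout.
    result = subprocess.run(["./advent"], input=commands,
                            capture_output=True, text=True)
    with open(golden) as f:
        expected = f.read()
    if result.stdout != expected:
        print(f"FAIL: {logfile}")
        failures += 1
    else:
        print(f"ok:   {logfile}")

sys.exit(1 if failures else 0)

In practice, you also have to pin down any sources of nondeterminism, such as the random-number seed, before transcripts can be compared byte for byte.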

What we did, I should say. The project quickly attracted collaborators—most notably Jason Ninneman. The first of Jason's several good ideas was to use coverage-analysis tools to identify gaps in the test suite. Later, Petr Vorpaev, Peje Nilsson and Aaron Traas joined in. By about a month from starting, we could show more than 95% test coverage. And, of course, we ran retrospective testing with the newest version of the test suite on the earliest version we could make read the logs.

That kind of really good test coverage frees your hands. It allowed us to make rapid progress on the other prime goal, which was to turn the obfuscated source we started with into a readable work of art that fully revealed the design intentions and astonishing cleverness of the original.

So, all the cryptic magic numbers had to go. The goto-laden spaghetti code had to be restructured into something Don Woods in 2017 wouldn't feel he needed to apologize for. In general, what we aimed to transform the source code into was something we could believe Crowther and Woods—two of the most brilliant hackers of their time—would have written in 1977 if they had then had the tools and best practices of 2017 at their fingertips.

Our most (ahem) adventurous move was to scrap the custom text-database format that Crowther and Woods had used to describe the vocabulary of the game and the topology of Colossal Cave.

This—the "complex, declaratively-specified data structure" I mentioned earlier—was the single cleverest feature of the design, and it went all the way back to Crowther's very first version. The dungeon's topology is expressed by a kind of pseudo-code broadly resembling the microcode found underneath a lot of processor architectures; movement consists of dispatching to the sequence of opcodes corresponding to the current room and figuring out which one to fire depending not only on the motion verb the user entered but also on conditionals in the pseudo-code that can test for the presence or absence of objects and their state.

Good luck grokking that from the 2.5 code we started with, though. Here are the first two rules as they originally appeared in adventure.text, comprising ten opcodes:


3
1       2       2       44      29
1       3       3       12      19      43
1       4       5       13      14      46      30
1       145     6       45
1       8       63
2       1       12      43
2       5       44
2       164     45
2       157     46      6
2       580     30

Here's how those rules look, transformed to the YAML markup that our restoration, Open Adventure, now uses:


- LOC_START:
    travel: [
      {verbs: [ROAD, WEST, UPWAR], action: [goto, LOC_HILL]},
      {verbs: [ENTER, BUILD, INWAR, EAST], action: [goto, LOC_BUILDING]},
      {verbs: [DOWNS, GULLY, STREA, SOUTH, DOWN], action: [goto, LOC_VALLEY]},
      {verbs: [FORES, NORTH], action: [goto, LOC_FOREST1]},
      {verbs: [DEPRE], action: [goto, LOC_GRATE]},
    ]
- LOC_HILL:
    travel: [
      {verbs: [BUILD, EAST], action: [goto, LOC_START]},
      {verbs: [WEST], action: [goto, LOC_ROADEND]},
      {verbs: [NORTH], action: [goto, LOC_FOREST20]},
      {verbs: [SOUTH, FORES], action: [goto, LOC_FOREST13]},
      {verbs: [DOWN], action: [speak, WHICH_WAY]},
    ]

The concept of using a Python helper to compile a declarative markup like this to C source code to be linked to the rest of the game was maybe just barely thinkable when Adventure 2.5 was written. YAML didn't exist at all until six years later.
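
For a sense of what such a helper can look like, here is a hypothetical sketch; it is not the project's actual generator, it assumes the PyYAML library, and the struct name and output layout are invented. It reads travel rules in the format shown above and prints C initializers that the rest of the game could be compiled against.

# Hypothetical sketch of a YAML-to-C helper: read travel rules like the ones
# shown above and emit C table initializers for the game to link against.
# The struct name and output layout are invented; a real generator would be
# considerably more elaborate.
import sys
import yaml  # PyYAML

def main(path):
    with open(path) as f:
        locations = yaml.safe_load(f)  # a list of {location: {travel: [...]}} maps
    print("/* Generated from %s -- do not edit by hand. */" % path)
    print("static const struel travel_rule travel[] = {".replace("struel", "struct"))
    for entry in locations:
        for loc, body in entry.items():
            for rule in body.get("travel", []):
                verbs = ", ".join('"%s"' % v for v in rule["verbs"])
                opcode, operand = rule["action"]
                print('    {%s, {%s}, %s, %s},' % (loc, verbs,
                                                   opcode.upper(), operand))
    print("};")

if __name__ == "__main__":
    main(sys.argv[1])

Generating the tables at build time keeps the hand-edited master copy in the readable declarative form, while the C the compiler actually sees stays mechanical and boring.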

But...designer's intent. That's much easier to see in the YAML version than in what it replaced. Therefore, given the purpose of heirloom restoration, YAML is better. Rather like stripping darkened varnish from a Rembrandt—the bright colors beneath may startle if you're used to the obscuring overlayer and think of it as definitive, but they are the truth of the work.

With our choices about what we could change so constrained, you might think the restoration was drudge work, but it wasn't like that at all. It was more like polishing a rough diamond—gradually seeing brilliance emerge from beneath an unprepossessing surface. The grottiness was largely—though not entirely—a consequence of the limitations of the tools Crowther and Woods had at hand. When we cleaned that up, we found genius with only a tiny sprinkling of bugs.

My dev team fixed those bugs, of course. We're hackers; that means we consider heirloom software a living heritage to be improved, not an idol to be worshiped. We certainly didn't think, for example, that Don Woods intended the use of the verb "extinguish" on an oil-filled but unlit urn to make the oil in it vanish.

Petr Vorpaev, reviewing a draft of this article, observed "Sometimes, we stripped bits of genius off, too. Because it was genius that was used to work around limitations that aren't there any more." He's thinking of a very odd feature of the 2.5 code—it worked around the absence of a string type in old FORTRAN by representing strings in a six-bit-per-character encoding packing five characters into a 32-bit word. That is, of course, a crazy thing to do in C, and we targeted it for removal early.
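
For readers who have never met that trick, here is roughly what the packing amounted to, sketched in Python; the alphabet below is invented for the example and is not a transcription of the original encoding. Five characters, six bits apiece, are shifted into a single integer.

# Rough illustration of packing five 6-bit character codes into one word,
# in the spirit of the old FORTRAN workaround described above. The alphabet
# here is invented for the example, not copied from the original code.
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'\"!-.,?/"  # at most 64 symbols

def pack(word):
    value = 0
    for ch in word[:5].upper().ljust(5):
        value = (value << 6) | ALPHABET.index(ch)
    return value

def unpack(value):
    chars = []
    for _ in range(5):
        chars.append(ALPHABET[value & 0x3F])
        value >>= 6
    return "".join(reversed(chars)).rstrip()

packed = pack("KEYS")
print(hex(packed), unpack(packed))   # round-trips back to "KEYS"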

We added some minor features as well. For example, Open Adventure allows some command abbreviations that are standard in text-adventure games today but weren't supported in original ADVENT. By default, our version issues the > command prompt that also has been in common use for decades. And, you can edit your command input with Emacs keystrokes.
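
As a small illustration of those conveniences (this is not the project's code, which is C, and the abbreviation table is just a sample), here is what a prompt loop with a ">" prompt, Emacs-style line editing and a few common abbreviations looks like in Python:

# Tiny illustrative prompt loop: "> " prompt, Emacs-style editing courtesy
# of the readline module, and a few common text-adventure abbreviations.
# Purely a sketch; Open Adventure itself is written in C.
import readline  # importing it enables line editing in input()

ABBREVIATIONS = {"n": "north", "s": "south", "e": "east", "w": "west",
                 "x": "examine", "i": "inventory"}

while True:
    try:
        command = input("> ").strip().lower()
    except EOFError:
        break
    words = command.split()
    if not words:
        continue
    words[0] = ABBREVIATIONS.get(words[0], words[0])
    if words[0] == "quit":
        break
    print("You typed:", " ".join(words))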

But, and this is crucial, all the new features are suppressed by an "oldstyle" option. If you choose that, you get a user experience that even a subject-matter expert would find difficult or impossible to distinguish from the 1995 and 1976–1977 originals.

Some of you might nevertheless be furrowing your brows at this point, wondering "YAML? Emacs keystrokes? Even as options? Yikes...can this really still be Colossal Cave Adventure?"

That's a question with a long pedigree. Philosophers speak of the "Ship of Theseus" thought experiment; if Theseus leaves Athens, and on his long voyage each plank and line and spar of the ship is gradually replaced, until not a fragment of the original wood remains when he returns to Athens, is it still the same ship?

The answer is, as any student of General Semantics could tell you, "What do you mean by 'same'?" Identity is not a well-defined predicate; it changes according to what kind of predictive problem you are using language to tackle. Same arrangement of bits in the source? Same UI? Same behaviors at some level deeper than UI?

There really isn't one right answer. Those of you predisposed to answer "same" might argue "Hey, it passes the same regression tests." Only, maybe it doesn't now. Remember, we fixed some bugs. On the other hand...if the ship of Theseus is still "the same" after being entirely rebuilt, does it cease to be if we learn that the replacement for one of its parts doesn't replicate a hidden flaw in the original? Or if a few improvements have been added during the voyage that weren't in the original plans?

As a matter of fact, Adventure already has come through one entire language translation—FORTRAN to C—with its "identity" (in the way hackers and other people usually think of these things) intact. I think I could translate it to, say, Go tomorrow, and it would still be the same game, even if it's nowhere near the same arrangement of bits.

Furthermore, I can show you the ship's log. If you go to the project repository, you can view each and every small transformation of the code between Adventure 2.5 and the Open Adventure tip version.

There is probably not a lot of work still to be done on this particular project, as long as our objectives are limited to performing a high-quality restoration of Colossal Cave Adventure. As they almost certainly will be; if we wanted to do something substantially new in this kind of game, the smart way to do it would not be to code custom C, but to use a language dedicated to implementing such games, such as Muddle (aka MDL) or Adventure Definition Language.

I hope some larger lessons are apparent. Although I do think Colossal Cave Adventure is interesting as an individual case in itself, I really wrote this article to suggest constructive ways to think about the general issues around restoring heirloom software—why you might want to do it, what challenges and rewards you'll find, and what the best practices are.

Here are the best practices I can identify:

  • The goals to hold in mind are 1) making the design intent of the original code available for study, and 2) preserving the oldstyle-mode UI well enough to fool an original user.

  • Build your regression-test suite first. You want to be able to demonstrate that your restoration is faithful, not just assert it.

  • Use coverage tools to verify that your regression tests are good enough to constitute a demonstration.

  • Once you have your tests, don't sweat changing tools, languages, implementation tactics or documentation formats. Those are ephemera; good design is what endures.

  • Always have an oldstyle option. Gain the freedom to improve by making, and keeping, the promise of fidelity to the original behavior in oldstyle mode.

  • Do fix bugs. This may conflict with the objective of perfect regression testing, but you're an engineer, not an embalmer. Work around that conflict as you need to.

  • Show your work. Your product is not just the restored software but the repository from which it ships. The history in that repository needs to be a continuing demonstration of good judgment and sensitivity to the original design intent of the code.

  • Document what you change, including the bug fixes. It is good practice to include maintainer's notes describing your restoration process in detail.

  • When in doubt about whether to add a feature, be neither over-eager to put your mark on the code nor a slave to its past. Instead, ask "What's in good taste?"

And while you're doing all this, don't forget to have fun. The greatest heirloom works, like Colossal Cave Adventure, were more often than not written in a spirit of high-level playfulness. You'll be truer to their intent if you approach restoring them with the same spirit.

Read More...

Lotfi ben Othmane, Martin Gilje Jaatun and Edgar Weippl's Empirical Research for Software Security (CRC Press)

Friday, 20 October 2017 - 16:47 PM - (Software)

Lotfi ben Othmane, Martin Gilje Jaatun and Edgar Weippl's Empirical Research for Software Security (CRC Press)

Image
James Gray Fri, 10/20/2017 - 11:47

Developing truly secure software is no walk through the park. In an effort to apply the scientific method to the art of secure software development, a trio of authors—Lotfi ben Othmane, Martin Gilje Jaatun and Edgar Weippl—teamed up to write Empirical Research for Software Security: Foundations and Experience, which is published by CRC Press.

The book is a guide for using empirical research methods to study secure software challenges. Empirical methods, including data analytics, allow extraction of knowledge and insights from the data that organizations gather from their tools and processes, as well as from the opinions of the experts who practice those methods and processes. These methods can be used to perfect a secure software development lifecycle based on empirical data and published industry best practices.

The book also features examples that illustrate the application of data analytics in the context of secure software engineering.

Read More...