NoSQL: HOW TO GET STARTED WITH RIAK
GET STARTED
WITH LINUX!
Beginner’s guide
Try Linux instantly
Dual-boot Windows
FREE full software!
PLUS The top three
distros explored!
Get into Linux today!
RASPBERRY Pi 2! 500% FASTER
EXCLUSIVE! Hands-on guide and in-depth review of the all-new Pi 2
Made of Wynn – Wynn Netherland on looking after GitHub p44
Python 3! Start programming today and easily port your code
Blogging: Get into Ghost – How to install the next-gen blog platform
Video streaming: Linux Motion – Build a secure home CCTV system
TuxRadar.com
‘Optimising for happiness’
is kind of key to the
Ruby community
Welcome
Get into Linux today!
What we do
We support the open source community
by providing a resource of information, and
a forum for debate.
We help all readers get more from Linux with our
tutorials section – we’ve something for everyone!
We license all the source code we print in our
tutorials section under the GNU GPLv3.
We give you the most accurate, unbiased and
up-to-date information on all things Linux.
Who we are
This issue we're asking our experts: how can we get the next gen of Linux users excited and involved with free and open source software?
Jonni Bidwell
Well, we've got Justin Bieber Linux, unfortunately. But we've also got an exciting movement of kids learning to code in schools. Working together, or fighting among themselves, is pretty much the spirit of open source. They get to improve on the work of their peers, or lambast their idiotic methods. Just like the LKML.
Neil Bothwick
Android is crucial to this. People exposed to the iOS ecosystem from an early age take the closed nature of software and services for granted, and become harder to convince of the usefulness of free and open software. Show them that they can own and change their devices and they will always want that.
Les Pounder
The next generation of FLOSS users are
now learning with the best bit of kit: the
Raspberry Pi. A low-cost Linux computer
that uses open source software to do so
much for such a small price. The mix of
cheap hardware, free software and a host
of varied applications is catnip for parents
and kids alike.
New Model Pi Army
The global phenomenon that is the Raspberry Pi enters
its next phase and is taking Linux with it, not so much
kicking and screaming, but more skipping merrily
through classrooms, workshops and coding clubs around the
world. A charitable venture that was originally envisioned to
produce a few thousand Pis has become an educational
gateway to Linux for literally millions of new users. Crucially not
any old users but a new generation of programmers,
developers, system admins, kernel contributors, bug hunters
and security experts. All open-eyed and open-minded to the
open source way.
Boys and girls as young as seven are coding with Linux. I can’t
emphasise enough how vital this is, as it has been noted that the developers maintaining crucial elements of the Linux world are ageing. We jokingly call them greybeards, but unless a new
generation comes along to maintain those distros the open
source world is going to find itself in trouble.
That's why it's amazing news that an all-new Raspberry Pi is on its
way. We’ve got complete in-depth coverage on the Pi starting
on page 26, with a report on the launch event on page 6. We’ll of
course have more coverage over the year, so stick with us or
even save yourself some money and subscribe on page 34.
If you’re new to Linux or looking for a change, join us on page
36 where we’re looking at the new generation of Linux distros
and how you can enjoy a fresh install of a new operating system
that’ll be faster, more stable and offer you all-new features.
It’s a cool aspect of Linux that new developments come along
when they are ready, and not when a company thinks it’s time
to sell you something else with a big marketing campaign.
Mayank Sharma
To get the next-gen of users we need to
pipe FLOSS through next-gen devices.
So more open source apps in the Steam
app store, more Linux distros for the
Raspberry Pi, and more Android in
portables. And yeah, a little sprinkle of
Snowden-like revelations to get people to
distrust proprietary software.
Alexander Tolstoy
I think we should think about developing an IDE for kids – that would help greatly. Also, how about writing C++ code by dragging colourful blocks and balloons, and without any keyboard input? I'm sure that would create the right amount of joy about Linux among younger people, even more so than with the excellent Tuxpaint!
Neil Mohr Editor
[email protected]
Subscribe today
See p34
Contents
“If more of us valued food above hoarded gold, it would be a merrier world.” J.R.R. Tolkien
Reviews
ThinkPad Yoga 11e .......... 15
A must-buy Chromebook for those who
want to be able to run it over with a tank.
OpenPi ................................16
A new box with a Raspberry Pi Compute at its heart and open source schematics. The OpenPi will suit entrepreneurs with ideas and industrial aspirations.
The new Raspberry Pi 2 has landed! The launch event, p6. In-depth review, p26. Hands-on guide, p28.
Firefox 35 ........................... 17
Mayank Sharma tests an even more refined
browser and its global comms feature.
Roundup:
Scripting languages p20
TigerVNC 1.4.0..................18
Tiger, tiger, burning bright: has your VNC got the features or is it out of the fight?
Don’t call it a comeback. TigerVNC
gets a new major release.
Civ: Beyond Earth.............19
Just Civilization 5 in space or something else
a little bit different and a little special?
Raspberry Pi 2 .................. 26
A new Raspberry Pi pops out of the silicon
oven for Les Pounder to devour.
The Raspberry Pi 2 is six times
faster you say? Let’s find out.
Interview
We’ve traded an open web
in these last few years
for more walled gardens
Wynn Netherland on how the web has changed p44
On your FREE DVD
Fedora 21, Ubuntu
14.10, PCLinuxOS,
ArchBang 2015 and more
Killer distros from our cover feature. PLUS: HotPicks and free eBook p96
Treat yourself or a loved one to an LXF subscription! p34
Don’t miss...
Get into Linux ........................ 36
Join the growing ranks of Linux users.
Explore and instantly try three top distros.
Riak NoSQL ................................ 50
NoSQL, Sir? Suit you, Sir? Discover why
admins love a bit of NoSQL.
Coding Academy
Python 3 ................................ 84
Jonni Bidwell places a gentle arm around probably the least loved sequel in programming history and humbly asks that you embrace the parentheses.
Julia ........................................ 88
Mihalis Tsoukalos explains how to start programming in Julia. This functional language may be great for high-performance numerical and scientific computing, but it's also a capable general-purpose programming language.
Tutorials
Minecraft/Pi: Turtle graphics ................. 70
Discover exciting graphical things you can do with a turtle and Minecraft on your Raspberry Pi. You will never look at trees the same way again… Turtles are very artistic as well as being obsessed with pizza.
ImageMagick: Convert images ................74
Neil Bothwick says, step away from your mouse, but not so far that you can't reach the keyboard to manipulate images using the power of your command line.
Motion: Detect and record ............. 76
Build a livestreaming system for your home using a Pi and webcam, and save motion-detected video.
Ghost: Embedded controllers .... 80
Don't panic, it's not that critical vulnerability in glibc that had Jonni "mildly perturbed", but a nice and friendly tutorial on how to create custom themes for your blog site.
Regulars at a glance
News ............................. 6
Faster Raspberry, Pi! Pi! Also Ubuntu Core's got its beady eye on IoT and Canonical teases details of its phone.
Mailserver ................... 11
In this month's news bag, some question the importance of Stallman while others offer other ways to erase a hard disk (other than shooting it).
User groups ................ 14
Les Pounder celebrates three(-ish) years of the Raspberry Pi.
Roundup .................... 20
Is there life after Bash? We corral other scripting languages that meet the needs of a busy sysadmin.
Subscriptions ........... 34
Our digital editions and print/digital deals are now available on Android! Our subscriptions team is waiting for your call.
Sysadmin ................... 58
Dr Chris dons his poetry cap and then batch processes his holiday snaps. He also concludes his look at OpenLDAP, which is nice.
HotPicks .................... 62
Alexander Tolstoy is a man of many picks that are just too hot to handle and he's throwing them your way now! Get ready to catch: Yarock, FFmpeg, Dropbox Uploader, Gloobus-preview, MPS-Youtube, KWave, Linuxbrew, SuperTuxKart, Aualé, Hollywood Technical Melodrama and Boomaga.
Back issues ............... 68
Don't look back in anger, hit the past with a sturdy back issue. Why not buy LXF193 – it's very orange.
Next month ............... 98
An anonymity themed issue so private that even we don't know what's in it!
THIS ISSUE: Ubuntu Core, Ubuntu Phone, MintBox Mini, Bodhi Linux lives
RASPBERRY PI
New Raspberry Pi
Model B unleashed!
Raspberry Pi 2 Model B is six times faster than the previous model.
The Raspberry Pi 2 Model B has
been released much to many
people’s surprise and a lot more
people’s delight. The new model is
apparently six times faster than the
previous model [see p26 for loads more
results] and comes with 1GB of RAM.
The processor in the newest
Raspberry Pi is a Broadcom BCM2836
ARM 7 quad-core processor running at
900MHz, a substantial upgrade over
the previous models. Last year’s
Raspberry Pi Model B+, for example,
has a single-core 700MHz CPU.
In a blog post (www.raspberrypi.org/raspberry-pi-2-on-sale) to
announce the new model, the
Raspberry Pi Foundation explained the
steps it has taken to bring increased
performance and add extra features,
without deviating from its original
mission, or alienating people who own
older Raspberry Pis. And the
relationship between the Raspberry Pi
Foundation and Broadcom was critical
to this: “Our challenge was to figure out
how to get this without throwing away
our investment in the platform or
spoiling all those projects… Fortunately
for us, Broadcom were willing to step up
with a new SoC, BCM2836. This retains
all the features of BCM2835, but replaces the single 700MHz ARM11 with a 900MHz quad-core ARM Cortex-A7 complex: everything else remains the same, so there is no painful transition or reduction in stability."
Eben Upton launches the Raspberry Pi 2 Model B at The Shard in London.
The increase in
specs means that the
Raspberry Pi 2 Model B
can boot up in less than half
the time of its predecessor.
It comes with a 40-pin GPIO
enabling even more complex projects
that include multiple sensors,
connectors and expansion boards.
Thankfully for those of us who have created projects on the earlier Raspberry Pi, the first 26 pins of the new Pi 2's GPIO are identical to the Model A and B boards. According to the Foundation, porting existing Raspberry Pi projects to the new model is as simple as updating the OS.
"The new Pi can boot up in less than half the time of its predecessor."
At the unveiling of the new Pi,
Raspberry Pi Founder, Eben Upton said
that while the previous version of the
titular PC was a “great computer”, users
“had to be forgiving” when undertaking
certain tasks due to its comparatively
low processor power, but that the Pi 2
will be able to be used as a fully-capable
PC without compromise.
The Raspberry Pi 2: faster, more GPIO pins, backwards compatible and the same low price.
Upton has also blogged that the Raspberry Pi 2 will not only run ARM GNU/Linux distros, including Snappy Ubuntu Core (see right), but also
Windows 10. The Foundation has been
working closely with Microsoft for the
past six months to bring a Raspberry Pi
2-compatible version of Windows 10,
which will be free of charge to
developers. This isn’t news that
everyone will celebrate, but it does
reflect the success of the Pi, generally.
The best news of the Raspberry Pi 2
is that not only has it gone on sale
immediately – you’ll be able to buy one
by the time you read this issue – but it
also keeps the same low price as the
previous models of around £26.42.
You'll need an updated NOOBS or Raspbian image including an ARMv7 kernel and modules, which can be downloaded from www.raspberrypi.org/downloads.
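The Foundation says porting projects across is as simple as updating the OS; on an existing Raspbian install that boils down to the standard upgrade dance. A minimal sketch, assuming a working network connection (how far an older image can be brought up this way varies, so the fresh download above is the safe route):

$ sudo apt-get update          # refresh the package lists
$ sudo apt-get dist-upgrade    # pull in the ARMv7-enabled kernel and firmware
$ sudo reboot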
Sadly, this new version doesn’t
mean there’s been an official price drop
for the Raspberry Pi 1 Model B and
Model B+, which will continue to sell for
the same price, though we should see
price drops occurring once the Pi 2
picks up steam.
Newsdesk
INTERNET OF THINGS
Ubuntu Core wants
to power the future
A small, lean distro that will make internet-connected devices even smarter.
Canonical has just launched its Snappy
Ubuntu Core partner ecosystem, working
with 22 partners to help power some of
the most innovative and exciting projects which
cover a wide range of upcoming devices such as
robotics, drones and other smart devices.
“We are inspired to support entrepreneurs and
inventors focused on life-changing projects,” says
Mark Shuttleworth, Founder of Ubuntu and
Canonical. “From scientific breakthroughs by
autonomous robotic explorers to everyday
miracles like home safety and energy efficiency,
our world is being transformed by smart
machines that can see, hear, move, communicate
and sense in unprecedented ways.”
Ubuntu Core is a pared back version of the
popular Linux distro, and only requires a 600MHz
processor with 128MB RAM, enabling it to bring a
robust and open ecosystem to upcoming smart
devices. It will also provide much needed security
to devices that cannot easily connect to the
internet for updates and security patches.
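That security story rests on Snappy's transactional, image-based updates, which can be applied and rolled back as a unit. A hedged sketch of what that looks like from a shell, based on the snappy tool as documented for early builds (treat the exact subcommand names as assumptions and check Canonical's developer site above):

$ sudo snappy update                 # apply the latest transactional system update
$ sudo snappy rollback ubuntu-core   # revert to the previous image if it misbehaves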
We're already seeing some pretty cool devices
use Ubuntu Core, including the Erle-Copter.
Snappy Ubuntu Core is set to
power some really exciting projects.
According to Victor Mayoral Vilches, CTO of Erle
Robotics, the benefits of Ubuntu Core are clear:
“We are delighted to reveal the Erle-Copter as the
world’s first Ubuntu Core-powered drone that will
stay secure automatically and can be upgraded
with additional capabilities from the app store …
An open platform attracts innovators and experts
to collaborate and compete, we are excited to lead
the way with open drones for education, research
and invention.” We can't wait to see what future
Ubuntu Core-powered devices emerge, and it's
great to once again see Linux at the forefront of
innovation, where it belongs.
http://developer.ubuntu.com/en/snappy
MOBILE
Ubuntu Phone
Not another 'me too' Android clone, says Canonical.
Canonical's much anticipated début into
the world of mobile operating systems is
getting nearer to release, and as the
launch date approaches we're getting ever more
details about what we can expect when we fire up
an Ubuntu Phone-powered device. Pleasantly,
what we've heard so far shows that Canonical is
looking to differentiate its offering compared with
Android and iOS, especially in terms of apps.
Unlike in Android and iOS where apps are
tightly restricted and controlled by the platform
owners (especially in Apple's case), and placed in
a grid for users to dispassionately poke at,
Canonical is trying something rather different
with Ubuntu Phone. Rather than displaying
separate apps in a standard grid, Ubuntu Phone
is looking to integrate content and services via
Scopes, which will integrate various apps and
services into an easy to use interface, so users
don't have to scroll through screens and screens
of icons. For example, the NearBy scope will
aggregate local services centred around where
you are and what you're currently doing. Going up
against Android and iOS will prove to be a difficult
task, however, but it's good to know that
Canonical has a plan to make Ubuntu Phone
stand out. http://www.ubuntu.com/phone
Canonical's Ubuntu Phone is shaping up to be much more than a fancy-looking mobile OS.
Newsbytes
A new small-form desktop computer has been announced: the MintBox Mini. As you've probably
guessed from its name, it comes with
Linux Mint already installed. It will
feature an AMD A4 Micro-6400T processor, Radeon R3 graphics
processor, 4GB RAM, two USB 3.0
ports, two USB 2.0 ports, dual HDMI
ports (for up to two displays), a
microSD card reader, and a headset
jack. It will also have a 64GB solid
state drive and comes in a mint green
case that makes us feel fresh and
clean. It'll cost $295 in the States and
€295 in Euroland. www.fit-pc.com
After years of promises and
stalled attempts, Samsung has
finally launched a Tizen-powered
smartphone, the Samsung Z1. It won't
challenge high-end smartphones like
the Nexus 6, with its 4-inch WVGA
screen, 1.2GHz dual-core processor,
768MB of RAM, a 3.1-megapixel
camera, 4GB of storage and 3G data
limitation, but it’s aimed at
emerging markets.
The Z1 has launched in India, setting Tizen up as an operating system for developing countries to rival Google's own Android One platform.
The Z1 is small, attractive and runs Tizen.
Cast out your MacBook Pros,
there's a new high-end laptop in
town, and one that claims it is the first
that respects your freedom and
privacy. The Purism Librem 15 comes
with a kernel, operating system and
software applications that are all free
and open source. Along with some
pretty neat specifications, including a
15.6-inch display in either 1080p or
4K resolutions, it comes with a
Trisquel-based 64-bit Linux operating
system. According to the
manufacturers, the hardware used in
the Librem 15 laptop was specifically
selected so that no binary blobs are
needed in the Linux kernel that ships
with the laptop. It's not available to
buy from stores at the moment, but
you can pledge support to get an early version from www.crowdsupply.com/purism/librem-laptop.
Bodhi Linux is alive! There had
been concerns that the project
was shutting down, due to some
pretty big changes happening. The
good news is that Bodhi Linux 3.0 is
still being worked on, though there's a
new timeline so we're not sure when
we can expect the new release.
Comment
Cabinet Office
plug-fest
I was at the Open Document Format (ODF) plug-fest last month hosted by
the Cabinet Office, in partnership with the
OpenDoc Society. The UK Government’s
policy mandating ODF for editing and sharing
documents, announced in July, commits all
departments to adopting the format. After
years of single-vendor dominance, it was great
to see the blossoming of options on display
from the ODF ecosystem at the plug-fest.
There was an announcement that Google
Docs will upgrade its ODF support to 1.2 and
include presentation support too, which was
very welcome. I also tried out Google’s ODF
ODF support afterwards and saw how much it has improved. Previously, I
would always choose an OOXML download,
since I knew LibreOffice’s OOXML import was
better than Google Docs' ODF output. It's great
to see those filters approaching parity.
Open doors
Microsoft re-iterated its support for ODF, but hedged unhelpfully on change tracking, a key feature request from the Government departments present, which remains unimplemented in Microsoft Office.
LibreOffice was well represented with
excellent ODF change-tracking support
built-in, with Collabora contributing a large
number of interoperability tests during the
hacking to help to improve document
exchange. Naturally there were other
participants from many smaller projects,
Apache OpenOffice, Calligra, EuroOffice and
many others; a total of over 50 delegates from
30 organisations, including 17 Government
representatives.
The choice of vendors enabled by ODF provides an exciting side-effect: it opens the platform-choice door, making a Linux OS and LibreOffice combination a viable option in Government.
Michael is a pseudo-engineer, semi-colon lover,
SUSE LibreOffice hacker and amateur pundit.
Hitting the mirrors
What’s behind the free software sofa?
EVOLVE OS BETA 1
The first ever beta of Evolve OS has been released, so if you want to check out this brand new Linux distro that's been built from scratch and don't mind a few bugs here and there, you now can.
With a custom desktop known as Budgie and its own package manager that's been forked from Pardus Linux, Evolve OS is shaping up to be a rather unique distribution and can be downloaded from https://evolve-os.com.
Evolve OS is only in beta, but it's already looking very promising.
TINY CORE LINUX 6.0
The latest version of this
minimalist Linux distro is known as
the ‘piCore’ edition, and as the
name suggests it's been designed
for the Raspberry Pi. The biggest
change with the latest release is
the use of the 3.12.36 Linux kernel,
offering more stable performance
and broader hardware support.
A new feature also allows Tiny Core
Linux 6.0 to boot into a safe
overclocking mode that shortens
boot time by 20%. Download it from
http://bit.ly/TinyCoreLinux6.
It's small, compact and is a good
alternative to Raspbian on the Pi.
PROXMOX 4.0 MAIL GATEWAY
If you're after a new email proxy,
then Proxmox 4.0 Mail Gateway
could be of interest. It's based on
Debian 7.8, and the latest 4.0
version of the anti-spam and
antivirus protection has had all its
packages updated. Proxmox
protects email servers from spam,
viruses, trojans and phishing, is managed through an easy web-based interface, and has been going
for over 10 years, so check it out at
www.proxmox.com.
Compatible with every type of mail
transfer agent, including Postfix.
4MLINUX 11.0
Another lightweight Linux distro
that has been recently updated is
4MLinux and it’s on the mirrors to
download. It's been built from
scratch and uses a customised
JWM window manager. Version 11.0
is a major release that brings GNU
C Library 2.20 and GNU Compiler
Collection 4.9.2, as well as a new
lightweight rescue live CD,
4MRescueKit. Download it from
http://4mlinux.com.
4MLinux focuses on the four Ms:
Maintenance, Multimedia, Miniserver
and Mystery (cool games, basically).
Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath, BA1 1UA or [email protected]
Wipe DBAN
Thanks for the recent article
about erasing hard drives. I was
disappointed, though, to see the
omission of nwipe. Some time
ago I forked the dwipe command
(the wiping software in DBAN),
due to the (then) poor
maintenance of DBAN. Now that
DBAN is near-enough
proprietary software, it seems
even more appropriate that
nwipe be used instead.
It has several additional
features over DBAN: much better
hardware support (as it's
OS-agnostic) and is contained in
several live distros if you prefer a
DBAN-like experience (most
notably PartedMagic). I'm no
longer the maintainer, but
Martijn van Brummelen (the
software's Debian packager)
kindly took it over recently.
Andy, via email.
Neil says: Thanks for the details here, Andy. So you know, the background to most if not all of Sean's articles is real-life application. I believe the erasing article was based on events from a while back, but he chose DBAN at the time. We'll keep an eye on nwipe and hopefully cover it in the future.
Root canal
I read the article in LXF179 about rooting an Android mobile and you mentioned two main ROM sources, being TWRP and Clockwork Mod. The device I was intending to root is a Samsung Galaxy Note (GTN7000). TWRP doesn't do a ROM for this device but CWM does. The problem was that the CWM download is a ZIP file (Superuser.zip) with four folders (armeabi, META-INF, mips, x86) and two files, install-recovery.sh and Superuser.apk. I felt that I couldn't proceed, as your instructions based on the TWRP solution put an image-specific file in the fastboot path. When I've looked at various forums on this they seem to be copying a ZIP file to the device's external micro SD card and booting from that. Is there any chance that you could add a bit more detail for the alternative CWM method please? I'm using Linux Mint 17. Sorry if I'm being over-cautious, but this is my first rooting attempt and I don't want to brick the device.
Roger Ralph, via email.

Letter of the month
Minimalist
I'm looking at my Christmas 2014 edition [LXF192] and I think Bothwick's Screen article [Tutorials, p74] is interesting because of what it misses. I'm a minimalist, but that's not my topic now. I'm running Slackware64 13.37 here. My Slackware came with a variety of screen managers, and of those, I chose TWM. I've looked at the others and for me, TWM wins by far. And looking at Screen, what it does, TWM does much better.
I can start, close, move, resize, reduce to icons and retrieve windows according to my need (generally up to three). These windows can serve as terminal windows or they can start other resources like Firefox or the Thunderbird I'm using now. In my view TWM has what I need.
I removed the gimmicks from TWM's overly ambitious startup array; mine opens with one 'login' window, which is the least I'll need. I think that one of these days, a good piece on TWM would prove helpful to many of your readers and I'd like to see any that I may have missed.
A basic but easily viewed tabbed window manager, so here's a grab from the year 2000 and issue 2.
And there very much is something I've missed. At age 83 I am getting some macular degeneration and some cataracts, so I like strong dark print on my screen. It turns out Bash offers a font menu that you find using Ctrl+right mouse button, with an option labelled 'Huge.' So I think something in your pages about those hidden options would be good; including, is there some way I could lock in that dark font option?
Martha Adams, via email.
Neil says: True to your minimalist ways my reply
is: yes. For everyone else we are planning to run an
Escape the GUI feature in a couple of issues.
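A follow-up for Martha and anyone else squinting at their terminal: that Ctrl+right-click font menu actually belongs to the xterm terminal emulator rather than Bash itself, and its choices can be locked in with X resources. A minimal sketch, assuming xterm and the xrdb utility are installed (the font name is just an example; pick any face you have):

$ cat >> ~/.Xresources << 'EOF'
! large TrueType face with strong dark-on-light colours
xterm*faceName: DejaVu Sans Mono
xterm*faceSize: 16
xterm*foreground: black
xterm*background: white
EOF
$ xrdb -merge ~/.Xresources    # reload; every new xterm picks up the settings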
Root access opens up your phone to more advanced apps and complete backup tools.
Neil says: LXF179 was back in October 2013 and you'll find many things have changed since then, usually making life far easier. The issue that always crops up is that ROMs, boot loaders and root files are model specific. As you mention, it looks like the approach for the GTN7000 is to use the standard Recovery Bootloader and use this to flash the root SuperUser app using a standard shell script, all run from an SD card, which is much easier. A guide and the files are here: http://bit.ly/1CfjTED.
For total control you can also pop on CyanogenMod. This is somewhat more involved but you can find out how to do that here: http://wiki.cyanogenmod.org/w/Install_CM_for_n7000.
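For the curious, the SD-card route described above looks roughly like this from a Linux box with adb (part of the standard Android platform tools) installed. This is a hedged sketch only: the ZIP name matches Roger's download, but menu labels and storage paths vary by device, so follow the linked guide for the GTN7000 specifics.

$ adb push Superuser.zip /sdcard/    # copy the root package to the device
$ adb reboot recovery                # restart the phone into its recovery menu
# ...then use the recovery's 'apply update' option to flash the ZIP.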
WiFiless
There's a recurring problem that I have with the disc that comes with your (excellent) magazine. Although the discs boot and load the distro, I am unable to connect to my Wi-Fi. I'm a Linux user, with Ubuntu 14.10 and Linux Mint 17 MATE in a dual-boot setup on my Samsung NP350v5c laptop.
My only connection to the Internet is via a Mobile (3G) Wi-Fi hub running on Three. I installed Ubuntu around eight months ago and Linux Mint just over a month ago, without issue. In both cases, however, the installation discs came from a source other than yourselves. I'm fully aware that the problem may be with me or my particular hardware setup.
I am new to Linux, having been away from computers for 20 years or more, other than limited use at work. When purchased, it came with Microsoft Windows 8, which lasted around five weeks, the last three of them spent finding out how to circumvent the BIOS in order to boot the Ubuntu disc. Other than this very minor problem, I am having a great time learning how Linux works. On my list of things to do is learn Python. Just because I'm 65 doesn't mean I am unable to learn new stuff, which is one of the reasons for buying your, and other, Linux magazines.
Tony Maughan, via email.
Neil says: Is it a case that the wireless isn't working at all, or that you can see the hub (so the drivers are correct) and it just won't allow you to connect? In the last case it could be a US/UK keyboard issue: if your passkey has punctuation this can change depending on the keyboard layout used. Something I've discovered to my annoyance in the past, as # and £ symbols can jump around rather annoyingly. I'd be surprised if the driver wasn't working as it'll be a bog-standard Intel unit. Otherwise good luck with the Linux learning, it's great fun to get stuck into and offers almost endless entertainment!

Xara
Is there or will there be a Linux version of Xara's Bitmap Editor?
Ian Learmonth, via email.
Neil says: Why the heck do you want Xara Bitmap Editor? We're not actually sure that Xara has released a package called that, but certainly it doesn't support Linux. You could try running Xara Designer 10 via Wine, though its rating of Silver is hardly reassuring. Honestly, give Gimp a spin, we even gave away a bookzine on it with LXF194 so go grab the ISO now, or better, order a back issue from http://bit.ly/LXF193GIMP.
Mind games
I would never want to diminish
Richard Stallman, but I do have
to take a tiny little bit of issue
with this bit of science fiction.
BSD comes from a university
that was always something of a
renegade within the Unix world,
and was a part of the general
movement that involved
Richard. Who knows what may
have come from them in an
alternate history?
It may have been years after
Richard started his efforts, and
also a commercial product, but
it was showing a different, more
capable and robust solution
than the Microsoft products.
It was also not the only product
to try to bring the Unix
environment to the huge market
being generated by the IBM PC
computer, and its ‘clones’,
indeed Xenix development had
Microsoft involved from its
earliest days. Unix was the
primary direction being taken to
bring a more capable
environment to users.
Minix was an independent
solution to a related need, and
we will never know what may
have come of this project
without the presence of Linux.
There seems to have been many
paths to the present world of
open source, and some are
almost as inspired and
generous as the efforts of
Richard Stallman.
The world would have been very different without Richard Stallman.
History would not be the same without Richard; take any person out of the story and it changes. However, in
my honest opinion, we would
have found (stumbled?) our way
to a world not unlike this world
without Richard Stallman; if only
because of the massive
dissatisfaction felt by so many
with the limited vision of the
corporate ‘machines’ that were
trying to manipulate the
computer industry to their
financial benefit.
It would very likely have been
a longer path. But the impetus
was there from way back. To
ignore all the other contributors
to what we now enjoy, and
suggest that nothing would have
happened without Richard,
seems a bit unkind to all the
other contributors.
John Paterson, via email.
Neil says: Absolutely, Jonni was
appalled at the notion open
source could possibly ever be
“wiped from the surface of the
earth” as it’s human nature to
share knowledge. As I mentioned
it was a fun academic exercise
with Stallman being the
embodiment of the open source
movement. Being more realistic
you’re right, the world probably
would have rallied around BSD,
but with Minix it always seemed
Tanenbaum wanted it to be
commercialised, though that’s
wild speculation.
I do think it’s important not to
underestimate the need for a
figurehead to rally a cause and
that’s what Stallman brought to
the world. Obviously not without a
near unquantifiable amount of
help from generous and stupidly
talented developers from around
the globe. We’d never want to
diminish the contributions made
by everyone, large or small, to the
FOSS world.
No drivers
A few months ago I decided to change my old computer with Windows XP OS for a new rebuild and a later Windows OS. Having assembled a new box with an Asus Maximus VII Hero, an i7-4770K and a Sapphire R9 280X, I then spotted your Linux Format mag and was hooked. I have loaded Ubuntu 14.10 64-bit from your CD and updated from the net. I have a problem trying to load the MB drivers. From the MB CD I get "Please update to the latest Linux Kernel for motherboard chipset and components support". I've purchased LXF187 to 'Escape Windows' but I am still stuck. I don't want to load Windows XP on to a separate partition. Info on how to proceed will be greatly appreciated.
Robert, via email.
Neil says: Ahh, you're falling for the oldest Windows user's trap of them all: driver updates. Unlike Windows, Linux doesn't really need drivers at all, as they're all built directly into the Linux kernel itself. The message is basically a default error down to you trying to run a CD built for Windows on Linux. I'll now contradict myself and say that driver updates are possible, but not often required. The biggest exception to this, 'third-party graphics drivers', is for your Radeon card. This is a complicated bug-bear for Linux and is down to closed-source elements that can only be installed via Nvidia or AMD. See the Settings > Additional Drivers control panel.
Escape Windows and leave drivers behind. Well, most of them.
Sanitised
In Sean Conway's article about
erasing drives, [Tutorials, p68,
LXF193] he says that shred is
impractical for sanitising a hard
drive due to the need for it to be
run on individual files. While this
is indeed true, what is a device if
not a file? You can therefore run
shred on either a device (eg /dev/sda) or a partition (eg /dev/
sda1). Run this with flags -n 3 -z
-v and you’ve successfully
removed all files in one quick
and easy command with no
chance of recovery. When
decommissioning a disk, boot
Knoppix or another live distro
and you can sanitise a disk
easily without need for anything
more complicated. Just ensure
you get the right device…
Bret Giddings, via email.
Neil says: Great advice, Bret! And you're right, devices are just treated as files, as Unix demands. We'd even argue that -n 1 is fine from the latest research on modern drives, as the -z option does a second write anyway with zeros. You still can't beat Sean's shooting option though, can you?
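To make that concrete, here's a minimal sketch of Bret's recipe from a live distro; /dev/sdX is a placeholder, and shred will happily destroy the wrong disk, so check the device name first:

$ lsblk                              # identify the disk you really mean
$ sudo shred -v -n 3 -z /dev/sdX     # three random passes, then a final pass of zeros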
Finally in English
Education distro Escuelas Linux has a new release. The world needs more well-supported Linux distros.
I'd like to point you to an
educational Linux distro. Today
we’ve released 3.0.22 of
Escuelas Linux: http://bit.ly/1yrJcTm.
You might not have heard
about this distro before, but we
have been around for a lot of
years. Recently we decided to go beyond our local focus and add English language support.
Are we a small distro? That
depends. In the schools we
support ourselves, and we have
reached 44,864 students and
2,541 teachers voluntarily using our distro. And since September
2014, Escuelas Linux has been
downloaded from 36 countries
via SourceForge. I hope you take
a look, that would mean a lot to
me as a subscriber to your
beautiful magazine!
Alejandro Díaz.
Neil says: Always good to hear
about well-supported Linux
distros and thanks for supporting
us English speakers! LXF
Write to us
Do you have a burning Linux-related issue you want to discuss?
Want to let us know what inventive
uses you’ve been finding for your
Raspberry Pi, or suggest future
content for the magazine? Write
to us at Linux Format, Future
Publishing, Quay House, The
Ambury, Bath, BA1 1UA, or email
us at [email protected].
Linux user groups
United Linux!
The intrepid Les Pounder brings you the latest community and LUG news.
Find and join a LUG
Blackpool Makerspace Meet every Saturday,
10am to 2pm. At PC Recycler, 29 Ripon Road FY1 4DY.
http://blackpool.lug.org.uk
Bristol and Bath LUG Meet on the 4th
Saturday of each month at the Knights Templar (near
Temple Meads Station) at 12:30pm until 4pm.
www.bristol.lug.org.uk
Hull LUG Meet at 8pm in Hartleys Bar, Newland
Ave, 1st Tuesday every month.
http://hulllug.org
Lincoln LUG Meet on the third Wednesday of
the month at 7:00pm, Lincoln Bowl, Washingborough
Road, Lincoln, LN4 1EF.
www.lincoln.lug.org.uk
Liverpool LUG Meet on the first Wednesday of
the month from 7pm onwards at the Liverpool Social
Centre on Bold Street, Liverpool.
http://liv.lug.org.uk/wiki
Manchester Hackspace Open night every
Wednesday at their space at 42 Edge St, in the
Northern Quarter of Manchester.
http://hacman.org.uk
Surrey & Hampshire Hackspace Meet
weekly each Thursday from 6:30pm at Games Galaxy
in Farnborough.
www.sh-hackspace.org.uk
Tyneside LUG Meet from 12pm, first Saturday
of the month at the Discovery Museum, Blandford
Square, Newcastle.
www.tyneside.lug.org.uk
Hap-Pi Birthday
Time to don a party hat and faceplant the cake.
We are fast approaching the third birthday of the Raspberry Pi (well, strictly speaking that won't happen until 2016 as the Raspberry Pi was released on February 29, 2012, but February 28 is a good compromise) and to celebrate the Raspberry Pi Foundation has teamed up with Michael Horne and Tim Richardson, of CamJam and Pi Wars fame respectively, to put on a big birthday extravaganza in the home of the Raspberry Pi, Cambridge. The two-day long party will celebrate the great projects and ideas that have been created thanks to this small computer and there will be panel discussions featuring prominent names in the Raspberry Pi Foundation, such as Eben and Liz Upton, and other talks hosted by community members, such as Ryan Walmsley and Charlotte Godley.
As well as great talks there will be a series of hands-on workshops for all skill levels. Have you ever wanted to hack a robot, send a Pi into space or make music? Well at this birthday party you'll have the chance to learn from the best. The team behind Pi Wars will also be on hand running a series of obstacle courses for robots. Will yours survive the trials? There will be plenty of stalls from the most popular vendors in the community too, so you're guaranteed to see lots of the latest Pi-related technology on offer.
Of course, no celebration would be complete without a proper birthday party and on the Saturday night there will be food and drink for all and a series of games and quizzes, all suitably Pi-themed, of course. So grab your tickets from http://bit.ly/PiBigBirthdayWeekend and get ready for a birthday blowout! LXF
The Raspberry Pi party is your chance to rub shoulders with a great community.
Image Credit: Richard Kenworthy
Community events news
Southend Raspberry Jam
The next Southend-On-Sea
Raspberry Jam will take place on
21 February and its schedule
looks chock full of great things.
There are three all-day
workshops covering soldering,
Python and Scratch GPIO,
alongside Sonic Pi 2 and
environmental sensor hacking
with Astro Pi.
This great event is organised
by the SOSLUG team and will
take place at the Tickfield Centre
from 10am until 5pm. This is a
great opportunity for children,
parents, adults and teachers to
mix together and talk about their
projects. You can find out more
by visiting their EventBrite
website at http://bit.ly/
SouthendRaspJam5.
Covent Garden Jam
The Jam is back for 2015 and this time it's teaming up with the
local Coder Dojo to bring a full
day of coding, hacking and
learning for all the family.
This event takes place on 28
February at the DragonHall in
Covent Garden from 2pm until
5pm, and pairing the Jam with a
Coder Dojo enables two
communities to work together
for a common goal. Head to the
website for tickets http://bit.ly/
CoventGardenJam2.
Makerfaire UK 2015
We’ve mentioned this event
recently, but we don’t want you
to miss out on this great two-day
maker event at the Life Centre,
Newcastle starting 25 April.
This is the largest makerfaire
in the UK and there are many
great projects on show. For
instance, this was where Matt
Venn showed off his Raspberry
Pi balloon camera project in
2014. Tickets for the event are
around £10 per adult.
www.makerfaireuk.com
Reviews
All the latest software and hardware reviewed and rated by our experts
ThinkPad Yoga 11e
With a cost that's in the 'proper' laptop ballpark, David Eitelbach wonders if a flexible, durable Chromebook is worth the money.
Specs
CPU: 1.83GHz Intel Celeron N2930 (quad-core)
GPU: Intel HD Graphics
RAM: 4GB DDR3 (1,333MHz)
Display: 11.6-inch, 1,366 x 768 HD LED
Storage: 16GB
SSD (eMMC)
Ports: HDMI 1.4,
USB 3.0, USB 2.0,
4-in-1 card reader
Comms: Intel
7260 802.11ac
(dual-band),
Bluetooth 4.0
Camera: 720p
HD webcam
Weight: 1.5kg
(3.3lbs)
Size: 11.81 x 8.5
x 0.87 inches
(WxDxH)
The Yoga is one of the few
Chromebooks on the market
with a truly rugged exterior.
Designed for use in rough-and-tumble
classroom environments, it boasts
rubberised edges and extra-strong
hinges. At about £360 inc VAT, however,
the Yoga 11e is more than twice as
expensive as most Chromebooks. So is
this machine's combination of flexibility
and durability worth the price?
The Yoga 11e sports the same
minimalist aesthetic as other
ThinkPads, featuring a matte black
plastic chassis and few adornments
other than a few silver ThinkPad logos.
At 300 x 216 x 22mm and 1.5kg, the
Yoga 11e is a bit heavier than the Lenovo
N20p and the 13.3-inch Toshiba
Chromebook 2. Still, the laptop is light
enough that you would barely notice it
in your bag.
As with other notebooks in the Yoga
series, the lid of the Yoga 11e can rotate
360 degrees to lie flush against the
bottom of the laptop. This flip-screen
design lets you use the device in four
modes: traditional laptop mode, tablet
mode (folded completely backward),
stand mode (placed keys-down on the
table) and tent mode. This final mode
sees the Chromebook at a 270-degree
angle, standing on the edges of its lid
and base. Rotating the screen
automatically disables the keys.
While the Yoga 11e is comfortable to
hold as a tablet, its utility in this mode is
limited. The Chrome OS interface
features dense clusters of small
buttons, particularly in the upper right
corner of the browser, that are tricky
to touch accurately.
Unlike most Chromebooks, the
ThinkPad Yoga 11e was built to
withstand both the rigours of the
classroom and the battlefield.
Rubberised edges on the lid protect the laptop from unexpected drops, an extra-thick bezel minimises damage to the LCD panel, and military-specification testing means the notebook can withstand extremes of pressure, humidity, vibration, temperature and airborne dust.
Multitasking is where this
Chromebook runs up
against its limited
hardware. Using Google
Docs while streaming music with a
dozen other tabs open, for example, the
audio occasionally stutters. Even worse, there are periodic half-second delays after pressing the arrow keys.
Solid components
Thankfully, the Yoga 11e's 11.6-inch,
1,366x768 touchscreen delivers bright
and crisp visuals. Colours popped when
watching HD video, and text looked
sharp on websites. However, the
viewing angles are shallow: moving
more than a foot to either side of the
screen, colours begin to wash out.
The touchscreen responds promptly to input, and had no problem with pinch-to-zoom. The notebook's rear-facing speakers pumped out surprisingly well-balanced and loud audio.
We've come to expect an
outstanding typing experience when
using a ThinkPad, and this doesn't
disappoint. The island-style keys offer
plenty of vertical travel and tactile
feedback, and the textured surface
makes it easy to touch type. The
touchpad is large and responsive, and
the scrolling is easy using two fingers,
while the cursor accurately tracks finger
movement across the pad. Battery life
is a letdown, however. The Yoga 11e falls short of competing Chromebooks. With the brightness at 30%, a dozen tabs open and streaming music, the Yoga lasted 6 hours and 21 minutes.
A Chromebook that can manage more gymnastics than a Russian teenager at the Olympics.
The Lenovo Chromebook is more
than twice as expensive as many
Chrome OS-powered laptops. Still, it's
one of very few Chromebooks that’s a
true 2-in-1 laptop. Plus, the Yoga's
ruggedised exterior will help it endure
the wear and tear of life. LXF
Verdict
Lenovo ThinkPad Yoga 11e
Developer: Lenovo
Web: www.lenovo.com/uk/en/
Price: £359
Features 6/10
Performance 7/10
Ease of use 9/10
Value for money 6/10
A rugged and flexible design with
an excellent keyboard. If only the
Yoga 11e didn't cost over £350.
Rating 7/10
Reviews Raspberry Pi expansion
OpenPi
Ever intrigued by little boxes and blueprints, Jonni Bidwell explores a Pi-powered Internet of Things hub that you can redesign yourself.
In brief...
A Pi Compute Module-based board with a sub-GHz wireless transceiver, an RTC and a thermal sensor. The board
and schematics
are open source
so budding
entrepreneurs and
inventors are free
to modify them.
Specs
Raspberry Pi
Compute Module:
Broadcom 2305
SoC 700MHz, 512
MB RAM (shared
with GPU), 4GB
eMMC storage
OS: Raspbian
Ports: 2x USB
2.0, HDMI, 2x
micro USB (power
& programming)
RTC: TI BQ3200
Temp Sensor: Ti
TMP1000NA
Sub-GHz
Wireless: Wireless
Things SRF
Wireless Things is the new
name for Ciseco, the
Nottingham-based Internet
of Things company responsible for the
critically acclaimed EVE Alpha device.
EVE is a compact device that fits snugly
on top of a Raspberry Pi, connecting via
the GPIO pins. There are a plethora of
modules that can be fitted to the EVE
board: SRF or RFM12B radios, real-time
clocks (RTCs), temperature sensors
and more. With the OpenPi, Wireless
Things hope to take this idea one step
further and it’s in the middle of a
crowdfunder as we write this (http://bit.ly/KickstartOpenPi).
Using the Raspberry Pi Compute
Module as a base, OpenPi provides two
USB ports, an RTC (battery is included),
868-915 MHz SRF transceiver (for long
range serial connections) and a
temperature sensor. All this is boxed up
in a tidy plastic case emblazoned with
the Wireless Things logo, and apertures
allowing HDMI and power connections.
In terms of extensibility, you get 18 (plus
two power) GPIO pins (which you'll
need to break out yourself), headers for
SRF and USB EEPROM programming,
as well as wire-by-hand pins for
connection to Wireless Things' XBee
module. The Compute Module has 4GB
eMMC storage onboard which houses a
mostly-standard Raspbian install
(PuTTY is included for easy serial
connections), but this can be reflashed.
For the hobbyist user, it could be that a humble, standard-issue Raspberry Pi will do the job: you can
Features at a glance
Starter kit
Open schematics
The kit includes a 5v
power supply, an HDMI
cable, wireless keyboard/
touchpad and dongle.
The inquisitive can
download the EagleCAD
circuit designs, the inventive
can even modify them.
The OpenPi has a footprint slightly larger than the original Pi; the cut corners
may or may not be an homage to Battlestar Galactica.
add the transceiver and RTC and any
other peripherals easily and cheaply.
It may not be a very tidy arrangement,
but you can probably MacGyver
yourself a suitable container using Lego,
gaffer tape or a cigarette packet.
A single basic OpenPi pack (comprised
of just the Pi Compute module and
OpenPi PCB mounted in a case) costs
£49 inc VAT, so it comes down to
whether having it all in a convenient box
is valuable, and whether you will miss
the line out and Ethernet connectors.
Open hardware
What is unique about OpenPi is that all
the board schematics are open source
– you can download them as EagleCAD
files. In so doing, OpenPi aims to make
it easier to make the Compute Module
a more flexible platform upon which to
develop projects and products.
Manufacturer research suggests that
form is as important as function, and
that this partly explains the hitherto
lacklustre uptake of the Compute
Module as a platform. They claim that
while no single design will suit everyone,
using the OpenPi as a reference point
will greatly simplify the design process
for any budding IoT entrepreneur.
So the product is really targeting those
with ideas and industrial aspirations.
Quantity discounts are available (10% for more than 100, 30% for five-figure orders), which speaks to this, but also makes the device appealing to educators.
We found that getting access to the
RTC on our prototype required loading
a module and sending some bytes to
the i2c bus. The thermal sensor can
easily be queried with i2cget, but the
raw sensor data needs to be re-ordered
and scaled to give a correct reading.
These are things that will be addressed
come launch day, it's just a question of
making some wrappers. The Kickstarter
ends on 4 March 2015, and a release is
planned for the end of March. LXF
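For the curious, the poking about described above used the standard i2c-tools utilities. A hedged sketch (the bus number and the 0x48 address are assumptions for a TMP100-style sensor, so verify with i2cdetect on your own board):

$ sudo modprobe i2c-dev      # expose the i2c buses to userspace
$ i2cdetect -y 1             # scan bus 1 for devices
$ i2cget -y 1 0x48 0x00 w    # read the 16-bit temperature register
# The word arrives byte-swapped; flip the bytes and scale the upper
# 12 bits at 1/16 degree C per count to get the real temperature.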
Verdict
Wireless Things OpenPi
Developer: Wireless Things
Web: http://wirelessthings.net
Price: £49/£99 (basic/starter kit)
Features 8/10
Performance 7/10
Ease of use 7/10
Value for money 7/10
A bold step conceptually, which will
excite those with big ideas. But home
users may prefer existing tools.
Rating 7/10
Web browser Reviews
Firefox 35
Mayank Sharma sings a smouldering rendition of Hello by Lionel Richie as he
plays with the voice and video calling feature in the new Firefox.
In brief...
One of the most popular, feature-rich, cross-platform open source web browsers. See also: Chromium.
There's no dearth of quality web browsers available on the Linux desktop. Despite the saturation, Firefox has managed to stay ahead of the competition with continuous improvements and innovative additions. Mozilla's latest release, Firefox 35, isn't
bursting with new features, but it does
give users a chance to soak in all of the
new features that were added in the
previous releases.
By far the biggest feature in this
release is fully functional voice and
video calling, dubbed Firefox Hello.
Mozilla calls it “the first global
communications system built directly
into a browser.” Although it’s still in beta
the feature works well and manages to
deliver everything it promises. Mozilla’s
primary intention was to enable users
to communicate with each other
without handing over their personal
information to a third-party.
Furthermore, Mozilla wanted to
democratise the convenience of a free
communication service, which is why
you can use the feature to call any
WebRTC-compatible browser, such as
Chrome, Opera and Firefox, and across
all desktop and mobile platforms.
Mozilla first introduced the
experimental WebRTC feature in Firefox
33 and it was officially launched in
Firefox 34, powered by the OpenTok
real-time communications platform
from TokBox. In the current release,
Mozilla has further refined and
simplified the process of making and
receiving calls.
Features at a glance
Refined Firefox Hello
For web developers
The process for video calls
has been simplified and
now uses a room-based
conversation model.
Over 150 devtool fixes and
new CSS features, like the
filter property and CSS
source maps as default.
Firefox’s privacy controls extend to the Hello feature as well. You can turn off
the camera or mic with a single click and turn off the ability to receive any calls.
To initiate a call, all you need to do is
click the new Firefox Hello icon in the
toolbar and then press the Start a
conversation button. This action will
generate a link to a conversation chat
room that you can pass on to anyone
you wish to converse with. When you
click on a link to start a conversation, a
pop-in window opens with your
chatroom. When a person you’ve invited
also clicks on the link they are added to
this chatroom and the Hello icon lights
up to alert you.
Call on me
The Firefox Hello also enables you to
create multiple conversation chat
rooms, each with their own unique URL.
Additionally, you can name a
conversation and keep it for quick
access to people you converse with
regularly. And did we mention that you
can do all this without setting up an
account with anyone? If, however, you
do get yourself a Firefox account, you
can add and import existing email
contacts. While adding a contact is
pretty straightforward, importing your
Google accounts requires you to equip
the browser with your Google account’s
OAuth credentials. Once you’ve added
them, contacts can be edited, blocked
or deleted by clicking on the arrow next
to their names.
If you think this feature is more of a
gimmick than a real software
replacement for making video calls with
the likes of Skype and Facetime, then
we think you'll be surprised by the
results. Firefox Hello is an earnest step
towards replacing desktop apps with a
web-based solution. It’s feature rich and
in our testing, it worked across different
browsers on different OSes, and even
across Android devices. The quality of
the calls was impressive and none of
the calls were dropped.
Besides Firefox Hello, there are other
changes in this release as well. The
release notes highlight that the browser
is now much better equipped to deal
with dynamic styling changes and has
implemented a new public key pinning
extension which should improve the
authentication of encrypted
connections. The release also has
several improvements for web
developers, such as CSS filters that are
enabled by default. LXF
Verdict
Firefox 35
Developer: Mozilla Foundation
Web: www.firefox.com
Licence: Mozilla Public Licence
Features 9/10
Performance 9/10
Ease of use 9/10
Documentation 9/10
A more stable video calling feature
and the usual lot of security updates
make this a useful update to Firefox.
Rating 9/10
Reviews Remote desktop
TigerVNC 1.4.0
Mayank Sharma discovers whether the latest release of the
once celebrated VNC server deserves another chance or not.
In brief...
A remote
desktop client and
server that uses
the VNC protocol.
See also: Vino and
Krfb/Krdc.
Half a decade ago in 2009, the
TigerVNC server was forked
from the TightVNC project with
the help of Red Hat. The goal of the
project was to make it possible to run
graphics intensive 3D and video apps in
fullscreen mode on a remote computer.
Soon after its debut TigerVNC was
adopted by the Fedora distro as the
default VNC implementation. Over the
years, the project has lost steam with
some developers jumping ship.
TigerVNC releases became rarer. But
recently the project announced a new
major release and we wanted to see
how it compared with its peers.
Some of TigerVNC’s marquee
features, such as accelerated JPEG
compression, have been borrowed from
other projects, and in addition to Linux,
the project also runs on Windows and
Mac OS X. TigerVNC includes a
barebones cross-platform viewer that
doesn’t include any of the fancy
features you get with other clients, such
as Remmina and Vinagre. There’s no list
of running VNC servers on the network
or the ability to bookmark connections.
Instead, you get a simple window with a
text box to enter the IP address of the
server you want to connect to.
But that doesn’t mean it lacks
settings. Click on the Option button and
you get plenty of settings to configure
the connection. By default, the software
is configured to automatically select the
best settings for your connection. This
is a good thing considering that most of
these settings will only make sense to a
Features at a glance
Code spring clean
Cross-platform
As well as fixing bugs, the
code has been tweaked
and several existing
features are improved.
Both the Windows and
Mac OS X clients have
also been improved for a
better user experience.
The VirtualGL project has published a detailed report on this release after it
discovered a serious performance regression.
seasoned network admin who's used to
the ways of remote desktop. You can
specify things, such as colour depth
and compression level, choose an
authentication scheme and encryption
mechanism, and more.
There’s also the Xvnc server which
creates its own virtual X session for
remote users to connect to. On the
other hand, if you wish to connect to an
existing X session, there’s also the
x0vncserver server. In the latest release,
this server now supports the XDamage
extension for tracking changes which
according to the official release notes
makes it “slightly less useless”.
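If you want to try the two servers yourself, driving them from a terminal is straightforward; a minimal sketch, assuming the TigerVNC packages are installed at both ends and 192.168.1.10 stands in for your server's address:

$ vncpasswd                          # set a VNC password on the server
$ vncserver :1 -geometry 1280x800    # Xvnc: start a new virtual desktop
$ x0vncserver -display :0 -passwordfile ~/.vnc/passwd   # or share the existing X session
$ vncviewer 192.168.1.10:1           # connect from the client machine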
Toothless tiger?
In fact, the latest release is full of similar
minor tweaks and improvements,
mostly hidden away from sight. In
addition to lots of bug fixes, the
developers have made behind-the-scenes improvements to the keyboard
handling both on the server and client,
and also improved TLS implementation
in the project’s Java client. The server
component of the new release gains
support for newer versions of the X.org
server and has dropped support for the
256 colour indexed map mode. One
feature that didn’t make it past the beta
release is IPv6 support, which was held
back by the developers from the final
release because it required more work
and polish.
www.linuxformat.com
In our tests TigerVNC worked along expected lines. While you can connect to the TigerVNC server with other clients, it works best, and doesn't throw any unexpected errors, when coupled with its own client. Surprisingly, TigerVNC was more responsive when viewing a remote Windows session than a remote Linux desktop. Furthermore, getting it to display a remote Gnome session requires more time and effort than we'd want the average desktop user to spend.
All things considered, there are no
stellar features that stand out in this
release. In fact, when you look at it,
TigerVNC doesn’t offer any unique
features that’d make us recommend it
over others, such as Vino or krfb. LXF
Verdict
TigerVNC 1.4.0
Developer: TigerVNC Developers
Web: www.tigervnc.org
Licence: GPL
Features 5/10
Performance 5/10
Ease of use 4/10
Documentation 5/10
An average release of an average
app that works but doesn’t offer any
compelling benefits over its peers.
Rating 5/10
Civ: Beyond Earth
Ruined one planet? Don't worry, says Russ Pitts: go where no human has gone before, colonise a universe of planets and poison them as well.
In brief...
A new sci-fi-themed entry into
the Civilization
series. Head an
expedition and
find a home
beyond Earth,
explore and
colonise an alien
planet and create
a new civilisation
in space.
Specs
Minimum spec:
OS: Steam OS, Ubuntu 14.04
CPU: Intel Core i3 or AMD A10
Mem: 4GB
Gfx: Nvidia Geforce 260
HDD: 8GB

Mind Flower: Secure the Transcendence victory by harnessing the disgusting power of the planet's consciousness.
Civilization: Beyond Earth begins
with the very sci-fi premise of
“What if?” What if you took
Civilization, the classic turn-based
grand strategy game, and made one of
its signature endings the beginning of a
whole new game?
Beyond Earth has a lot of new looks:
new units, new victories, a completely
new tech tree (actually, it’s a web), new
leaders, new civilizations and a handful
of things under the hood that are also
completely new. But the experience of
cracking it open, watching a colony ship
settle onto a completely dark map and
then setting foot onto this alien world
feels just like playing Civ 5 — at first.
That said, if you came to Beyond
Earth looking for a direct sequel or a
modern update to the 1999 game, you
would be wildly disappointed. Although
it does have narrative elements, and
certain signature aspects of Alpha
Centauri have crept in, Beyond Earth is
very much its own game.
This is most noticeable when dealing
with the planet’s indigenous creatures.
Instead of Civilization’s barbarians,
Beyond Earth has a variety of alien
lifeforms, some more aggressive than
others. The other big new thing is the
orbital layer. You can build and launch
satellites, and these will impart benefits
to specific tiles.
Now for the bad news: It’s easy to
feel like Beyond Earth is just an
expansion to Civ 5, albeit spacier than
those that came before. For Civ 5 fans
like myself, this is a loaded proposition.
If you like Civ 5, then more Civ 5 will
be great, but there’s no denying that
even as much as we love Civ 5, we were
expecting something more than Civ 5
with a sci-fi skin.
Beyond Earth offers five victory conditions: One is Emancipation, where you
build an Emancipation Gate to return home to Earth, and conquer it.
Beyond Earth’s many similarities to
Civ 5 mask, to its detriment, a game
that is remarkably new and different,
and once we were able to see past
those similarities, the newness and
wonder of playing in a future Civ
sandbox washed over us and we were
engrossed before we realised it.
More than Civ 5 in space
As the Brazilians, we were aiming for a
Purity affinity, but fumbled our way
through the research web willy-nilly and
eventually lost the game without ever
realising one of our enemies had been
close to victory. So we started again,
this time as the Slavic Federation. We
specialised in Supremacy and after a bit
of research, we decided to shoot for the
Contact victory, but build a strong
enough civ that, should all else fail, we
could at least take over the world.
Beyond Earth offers five victory
conditions, although two are similar,
differing only in which affinity will unlock
it. Contact involves discovering an alien
signal and unlocking the secret of your
new planet’s ‘Progenitor’ species, an
ancient alien race.
Domination is what it sounds like,
giving you the win if you capture all of
the opposing civ’s capitals. Promised
Land and Emancipation are two sides of
the same coin. You must research the
tech to open either an Emancipation or
Exodus gate back to Earth, bringing
those left behind either salvation or
www.tuxradar.com
dominance. Transcendence is the
Harmony victory. It involves researching
alien tech to create a ‘mind flower’ that
will unite your consciousness with that
of the alien planet.
For our third playthrough, we wanted
to win without firing a shot. We picked the
Franco-Iberian civ and focused on the
Harmony affinity. Ultimately, war never
came. Although forced to kill a handful
of aggressive aliens, we dominated with
science and trade, with guns silent. And
when our mind flower bloomed, we felt
like we’d finally understood everything
Beyond Earth had to offer.
This is how Beyond Earth succeeds:
It offers a game steeped in the
traditions and mechanics of Civilization
that’s nevertheless surprising and new
in often unexpected ways. LXF
Verdict
Civilisation: Beyond Earth
Developer: Aspyr
Web: GameAgent.com
Price: £30
Gameplay 9/10
Graphics 8/10
Longevity 8/10
Value for money 8/10
Although its foundation in Civ 5
makes it familiar, Beyond Earth is full
of interesting surprises.
Rating 8/10
Roundup
Every month we compare tons
of stuff so you don’t have to!
Scripting languages
Richard Smedley goes beyond Bash to see which scripting languages
measure up to the needs and wants of a Linux system administrator.
How we tested...
Comparisons, they say, are invidious.
This is certainly true for
programming languages, where
personality and local support are, at
least, of equal import to criteria such
as speed, and the level of support for
different paradigms. Given this,
we're presenting a mixture of facts,
collective opinions and our own
prejudices, but it's a basis for further
investigation. The key to a scripting
language's usefulness to the
sysadmin lies not just in how easily it
helps solve problems, but in how
many of the solutions have already
been written, and are available to
download and adapt, and preferably
well-documented.
We tried to work across the range
of versions installed on a typical
network, but insisted on Python 3.
Other than that, we've tried to stay
in the context of working with what
you're likely to find on your network.
Our selection: Bash, Perl 5, Python, Ruby, NewLISP

Every admin loves time-saving
shortcuts, and carries a
selection of scripts from job
to job, as well as inheriting
new ones when arriving in post. The
question any new admin asks is which
is the best language to learn? (Followed
by, where’s the coffee?) Veterans of
language wars should know that the
best language question rarely has a
simple or definitive answer, but we
thought it would be well worth
comparing the most useful choices to
make your Linux life easier.
Most scripting languages have been
around longer than you think. For
example, NewLISP was started on a
Sun-4 workstation in 1991. They've
borrowed from each other, and
elsewhere, and accumulated a long
legacy of obsolete libraries and
workarounds. Perl's Regular
Expressions, for instance, are now found
everywhere, and in some cases better
implemented elsewhere. So what
matters most? How fast the script runs,
or how quickly you can write it? In most
cases, the latter. Once up and running,
support is needed both from libraries or
modules to extend the language into all
areas of your work, and from a large
enough community to support the
language, help it keep up with trends,
and even to innovate it. So, which
scripting language should you learn to
improve your Linux life this year?
Scripting languages Roundup
The learning curve
Online resources, books and good people.
The key questions are: how easy is
the language to pick up? Are the
learning resources at least
adequate? Even if these two questions
are answered in the positive, they still
need to be backed up by a helpful
community to assist you in quickly
producing something useful, and help
maintain that initial enthusiasm as you
hit inevitable problems.
To produce a backup script and test
scripts in each of the languages, we
started by browsing Stack Overflow.
But downloading random code means
no consistency between Posix (pure
Bourne Shell) scripts, modern Bash,
and legacy code that occasionally fails.
Fortunately, www.shellcheck.net is a
great tool for checking the correctness
of scripts, and teaches you best
practice as it corrects them. The Linux
Documentation Project's (perhaps overly)
comprehensive Advanced Bash
Scripting Guide (www.tldp.org/LDP/
abs/html) is also excellent and will
help you quickly gain confidence.
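ShellCheck is also packaged as a command-line tool on most distros, so you can lint without the browser. A quick sketch, with a deliberately sloppy script (the file name is illustrative):
cat > backup.sh << 'EOF'
#!/bin/bash
src=$HOME/Documents
dest=/mnt/backup
cp -r $src $dest    # unquoted variables break on spaces (ShellCheck warning SC2086)
EOF
shellcheck backup.sh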
Perl's online and built-in
documentation is legendary, but we
started by running through an exercise
from the classic O'Reilly admin book,
Running Linux, then leapfrogged the
decades to No Starch's recent Perl One-Liners by Peteris Krumins. Those who
eschew the book form should try
http://perlmonks.org, a source of
cumulative community wisdom.
Recent efforts at getting youngsters
learning through Code Club (www.codingclub.co.uk) and the rest of us
through PyConUK education sprints
and open data hackdays have shown
Python to be easily picked up by
anyone. But out-of-date advice, such as
the many ways of running
subprocesses which persist for
compatibility reasons, means careful
reading is needed, and it's yet another
good reason for starting with Python 3,
not Python 2. Head to www.python.org/about/gettingstarted for a large list of free guides and resources.
Ruby is also an easy sell to learners,
and before Rails, command-line apps
were what it did best. David B.
Copeland's book, Build Awesome
Command Line Applications in Ruby will
save you hours of wading through online documentation, but we were able to get up and running on our test scripts with a couple of web tutorials.

Last, we come to NewLISP: a challenge to programmers schooled only in non-LISP family languages, but you'll be amazed by what it manages to accomplish with just lists, functions and symbols. We dived right in with the code snippets page on http://newlisp.org, adapting to build our backup script, and were rewarded with terse, powerful code that was easier to read than its equally compact Perl counterpart.

From MOOCs to the bookshop, Python learning resources are everywhere.
Verdict
Bash +++++
NewLISP +++++
Perl 5 +++++
Python +++++
Ruby +++++
Python and Ruby are easier to learn, because of good docs and helpful users.
Version and compatibility
Beating the wrong version blues.
The question here is: have I got the right version? Let's start with Bash. Every modern Linux distro
ships with a version that will run your
scripts and anyone else's. Bash 4, with
its associative arrays, coproc (two
parallel processes communicating),
and recursive matching through
globbing (using ** to expand filenames)
appeared six years ago. Bash 4.2 added
little, and is four years old and Bash
4.3's changes were slight.
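To put those Bash 4 features in concrete terms, here's a brief sketch (assumes Bash 4.0 or later; the paths are examples):
#!/usr/bin/env bash
shopt -s globstar                 # enable ** recursive filename matching
declare -A owner                  # an associative array
owner[/etc/passwd]=root
owner[/home/pi/notes.txt]=pi
for f in "${!owner[@]}"; do       # iterate over the array's keys
    echo "$f belongs to ${owner[$f]}"
done
for script in **/*.sh; do         # recursively match every .sh file below here
    echo "found $script"
done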
As the Unix shell dates back decades, you will find that recent Bash versions
contain a few unexpected syntax changes.
Perl is still included in the core of
most distros. The latest version is 5.20
(with 5.22 soon to appear), but many
stable distros ship with 5.18. No matter,
you're only missing out on tiny
improvements, and just about every
script you'd want to write will be fine.
The switch from Python 2 to 3 still
catches out the unwary. Run Python 3 if
you can and check the documentation if
you come unstuck. Python 3.3 is our
baseline for Python 3 installs and Python
3.4 didn’t add any new syntax features.
Ruby version changes have caused enough problems that painless solutions have appeared: rvm enables you to run multiple versions of Ruby, and Bundler keeps track of the gems you need for each script.
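A rough sketch of that workflow, assuming RVM and Bundler are already installed (the version and script names are placeholders):
rvm install 2.2.0                 # fetch and build a specific Ruby
rvm use 2.2.0 --default           # make it the active interpreter
bundle install                    # install the gem versions pinned in your Gemfile
bundle exec ruby backup.rb        # run the script against exactly those gems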
NewLISP's stability and lack of third-party scripts is an advantage here. We can't, however, guarantee every script will run on the latest versions.
Verdict
Bash +++++
newLISP +++++
Perl 5 +++++
Python +++++
Ruby +++++
Ruby's version workaround is good, but Bash's lack of problems is a better result.
Web native scripts
Get your admin scripts moving over HTTP.
Much of a sysadmin's life has migrated
to the web, so you'll need a scripting
language that has kept pace. We
examined both ease of writing our own code,
and finding available solutions for doing
anything from web interfaces to system stats.
What's noticeable about these languages is
the difference in expressiveness and style to
produce similar results. However, this is, once
again, secondary to personal preference and
local support for many admins.
Ruby is quick and enjoyable; Python 'feels
right' probably due to it being more human
readable; newLISP is astonishingly powerful.
But these observations remain partisan
clichés without a supportive and maintainable
environment to use and develop the code for
your own networks.
Bash +++++
While Bash will be no one's first choice for a web programming language,
it's good to know that when your server doesn't provide for your first
choice you can fall back on it thanks to bashlib. This a shell script that
makes CGI programming in the Bash shell somewhat more tolerable.
Your script will be full of echo statements, interspersed with your
commands to produce the desired output. Security considerations
mean we wouldn't recommend running this on the open Internet, but it’s
worth bearing in mind that Bash works well as a prototyping language.
It's easy to fill a text file with comments describing the broad structure
that you want, then fill in the gaps – testing snippets interactively and
pasting into www.shellcheck.net to check your code as you go. You'll
soon be up and running with a proof of concept.
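We won't reproduce bashlib's own helpers here, but the underlying echo-driven CGI style it wraps looks something like this minimal sketch (drop it into your server's cgi-bin and mark it executable):
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "<html><body><h1>Server status</h1><pre>"
uptime                            # any command output lands in the page
df -h /
echo "</pre></body></html>"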
newLISP +++++
Code Patterns, by NewLISP creator Lutz Mueller, is available on the
www.newlisp.org website and has chapters on HTTPD and CGI, as well
as TCP/IP and UDP communications. If you add in the section on controlling applications, you'll have everything to get you started.
NewLISP's built-in networking, and simple (or lack of) syntax, makes
it surprisingly easy to generate HTML pages of results from, for instance,
your monitoring scripts. For a ready built framework, newLISP on
Rockets – which uses Bootstrap, jQuery and SQLite – combines rapid
application development with good performance.
NewLISP on Rockets provides several functions, from (convert-json-to-list) via (twitter-search) to (display-post-box), which will help you add
web functionality. We're impressed but we remain concerned by the
small size of the community and the intermittent pace of development.
Community support
Does it have a community large enough to support real work?
DevOps, cloud deployment, test-driven development and continuous integration – the
demands on a sysadmin change and
evolve, but the requirement to learn
something new is constant. Everyone
uses Bash to some extent but, you’ll
need to learn Bash plus one other.
Perl was the traditional Swiss Army
chainsaw of Unix admins through the
‘80s and ‘90s, gradually losing ground
to Python and then Ruby over the last
decade or so. Anyone who started work
in the ‘90s or earlier will be comfortable
with it, so finding someone to help with
your scripts is often not a problem.
However, the world doesn't stand
still, and many tech businesses have
standardised on Python, which is used
extensively at Google, for example.
Much of the software necessary for
modern sysadmin work is Python based
although the same can be said of Ruby.
Ruby benefits from being the basis
of Chef and Puppet, as well as Vagrant
and Travis CI, meaning a little familiarity
will be helpful anywhere that uses them
for deployment. The web frameworks
and testing tools written in Ruby have
popularised the language at many of
the younger web companies.
NewLISP has a much smaller community supporting it, and there aren't many ready-made solutions; you may know no-one who uses it. The
keenness of the online community goes
some way to ameliorate this deficiency,
but you have to ask who will maintain
your tools when you leave a company?
Verdict
Bash +++++
NewLISP +++++
Perl 5 +++++
Python +++++
Ruby +++++
Ruby has taken mindshare, thanks to some great DevOps software.
Perl 5 +++++
Perl was the first web CGI scripting language and has more or less kept
pace with the times. It certainly has the libraries, and enough examples
to learn from, but with no dominant solution you'll have to pick carefully.
Catalyst, Dancer, and Mojolicious are all good web application
frameworks. More likely you'll find everything you need in CPAN. You can
glue together a few of the libraries – many of which are already collected
together in distros – to handle a pipeline of tasks, such as retrieving XML
data, converting the data to PDF files and indexing it on a web page.
Perl's traditional CGI interface is still available, and despite better
performing alternatives abstracted through PSGI, you may find that use
CGI; is all you need to web-enable your script, and remember: 'there's
always more than one way to do it'.
Python +++++
Python's Web Server Gateway Interface (WSGI), which was defined in PEP
333, abstracts away the web server interface, while WSGI libraries deal
with session management, authentication and almost any other problem
you’d wish to be tackled by middleware. Python also has plenty of fullstack web frameworks, such as Django, TurboGears and Pylons. Like
Rails, for some purposes you may be better off coding web functionality
onto an existing script. But Python's template engines will save you from
generating a mess of mixed HTML and Python.
Python has many other advantages, from the Google App Engine
cloud with its own Python interpreter, which works with any WSGI-compatible web application framework, for testing of scalable
applications to supporting a clean style of metaprogramming.
Ruby +++++
Don't imagine for one moment that Rails is a panacea for most
sysadmin problems. It's not. And while Sinatra certainly makes it easy to
roll out anything web-based in Ruby, even this is overkill for most
purposes. That said, Rails does a good job of getting code up quickly and just doesn't drown you in all that magic, generated code.
Ruby is ideal for getting any script web-enabled, thanks to gems that
are written by thoughtful people who have made sane decisions. Putting
a web interface on our backup script, for example, was fun, but
distracting as we played with several gems, eg to export reports to
Google spreadsheets. Tools like nanoc, which generate static HTML from
HAML, and some of the reporting gems complement the language's
expressiveness, and make adding any functionality to scripts a breeze.
Programmability
Managing big scripts requires other programming paradigms.
Before reaching 1,000 lines of
code, Bash scripts become
unmanageable. Despite its
procedural nature, there are attempts
to make an object-orientated (OO)
Bash. We don't recommend it, we think
it's better to modularise. Functional
programming (FP) in Bash (http://bit.
ly/BashFunsh) is also impractical.
Perl's bolted on OO won't be to
everyone's taste, but does the job. Perl
has fully functional closures, and despite
syntactical issues, can be persuaded
into FP – just don't expect it to be pretty.
For that you should wait for Perl 6.
Python is equally happy with
imperative, OO and also manages FP.
Functions are first class objects but
other features are lacking, even if its list
comprehension is very good. Mochi, the
FP language (http://bit.ly/FPMochi),
uses an interpreter written in Python 3.
Ruby is designed as a pure OO
language, and is perhaps the best since
Smalltalk. It can also be persuaded to
support a functional style of
programming. But to get FP code out of
Ruby, you’ll have to go so far from best
practices that you should be using
another language entirely.
This brings us neatly to NewLISP, an
elegant and powerful language with all
the functional features at your
fingertips. NewLISP uses a pseudo-OO implementation in the form of functional-object-oriented programming (FOOP); this doesn't mean, however, that it can cut it for real OO programming.
Verdict
Bash +++++
NewLISP +++++
Perl 5 +++++
Python +++++
Ruby ++++
Python is a multi-paradigm language and the easiest to maintain.
Extending the language
Libraries, modules, and getting them working.
None of these scripting languages is as bloated with classes as, for example, Java, so you'll need to use non-core libraries (or modules as they are
libraries (or modules as they are
sometimes called) for writing many
scripts. How comprehensive these are,
and how easy they are to manage with
your script varies greatly.
Perl continues to impress with the
mind-boggling choice to be found on
CPAN, but its 'there's more than one
way to do it' approach can leave you
easily overwhelmed. Less obvious, is
the magnitude of Bash extensions
created to solve problems that are
perhaps not best suited to any sh
implementation.
There's more than one library for that – CPAN is a useful resource for Perl.
Python has excellent library support,
with rival choices considered very
carefully by the community before
being included in the core language.
The concern to “do the right thing” is
evident in every decision, yet alternate
solutions remain within easy reach. At
least the full adoption of the pip
package manager, with Python 3.4, has
ensured parity with Ruby.
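A quick sketch of that parity in practice (the package names are only examples):
python3 -m venv ~/admin-env       # venv ships with Python 3.3+; 3.4 bundles pip as well
source ~/admin-env/bin/activate
pip install requests              # pull a third-party library from PyPI
gem install nokogiri              # the RubyGems equivalent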
RubyGems provides the gem distribution format for Ruby libraries and programs, and Bundler manages all of the gems for dependencies and correct versions.
Your only problem will be finding the
best guide through Ruby's proliferation
of libraries. Read around carefully.
NewLISP is not a large language, but it's an expressive one, accomplishing much without the need of add-ons. What modules and libraries there are address key needs, such as
database and web connectivity. There's
enough to make NewLISP a useful
language for the admin, but not in
comparison to the other four choices.
Verdict
Bash +++++
NewLISP +++++
Perl 5 +++++
Python +++++
Ruby +++++
The CPAN's longevity and popularity marries well with good organisation.
Network security
Testing and securing the network – or fixing it afterwards.
Penetration testing and even
forensic examination after an
attack will fall under the remit of
the hard-pressed sysadmin in smaller
organisations. There are enough ready
made tools available that you can roll
everything you may need into a neat
shell script, kept handy for different
situations, but writing packet sniffers or
tools for a forensic examination of your
filesystem in Bash isn’t a serious option.
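What a shell script does do well is glue the ready-made tools together. A hedged sketch (interface, target range and packet count are placeholders, and nmap and tcpdump must be installed):
#!/bin/bash
target=${1:-192.168.1.0/24}
nmap -sn "$target"                        # quick ping-sweep for live hosts
sudo tcpdump -i eth0 -c 50 -n port 80     # sample 50 HTTP packets, skipping DNS lookups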
Perl has lost some security
community mindshare since the early
days of Metasploit, but the tools are still
there, and are actively maintained by a
large user group who aren't about to
jump ship to another language. Perl has
tools like pWeb – a collection of tools
for web application security and
vulnerability testing – which is included
in distros, such as Kali and Backbox.
Tools such as Wireshark are a
powerful aide to inspecting packets, but
sometimes you'll need to throw
together your own packet
sniffer. Python not only has
Scapy, the packet
manipulation library, but
provides a socket library for
you to easily read and write
packets directly.
Ruby's blocks (write
functions on-the-fly without
naming them) and other
features are great for writing
asynchronous network code,
and its rapid prototyping
matches (and even beats) Python. But Ruby's biggest boon is Metasploit, which is the most-used pen-testing software.

Last, NewLISP isn't well-known among penetration testers and grey hat hackers, but thanks to the networking built in to the language, a function call and a few arguments will create raw packets for pen testing. Once more, NewLISP has clear potential but suffers from its relatively tiny user base.

In terms of ready rolled tools, you can mix and match as needed, but Perl, Python and Ruby all provide everything you need to quickly examine a network for weaknesses or compromises on-the-fly. Note: Python is featured in more security-related job adverts now.

NewLISP has impressive networking features, even if it lacks the pen-testing tools of the others.
Verdict
Bash +++++
NewLISP +++++
Perl 5 +++++
Python +++++
Ruby +++++
Python edges ahead of Ruby and Perl, but all three are friends of the pen tester.
Scripting Languages
The verdict
We admit it's difficult to take
the verdict out of a practical
context and just declare the
best language. For example, Bash isn’t a
strong language, and many time-saving
bits of code can be thrown together
better with the other four languages,
but no-one with tasks to perform at the
Linux command line should avoid
learning some Bash scripting.
Perl is the traditional next step as it's
intimately associated with the *nix
command line and still found
everywhere. It may suffer in comparison
with newer languages, but Perl continues to offer not just the Swiss Army Chainsaw of the Linux CLI, but also a huge and very supportive community.
NewLISP is a pleasant surprise. Yes, it has all those lists – Lisp is about lists – but what a compact language for the embedded space as well as the command line. Sadly, the size of the community doesn't match the power of the language, so you'll need to be prepared to back its use with plans to maintain the code yourself.

Python is a powerful, multi-paradigm supporting language. The Python community is large and friendly, and supports everything from education sprints to training teachers. It also backs up community efforts, supporting young learners at Code Clubs, and many other events.

But useful as a community is to the sysadmin, it's often the quick and dirty hacks, readily downloadable and reusable examples, backed by an expressiveness that makes many programming challenges if not trivial, then far less of a headache. Rails brought wider attention to Ruby, but Chef, Puppet and Vagrant have helped remind the admin just what can be done with the expressive and eloquent scripting language that was developed by Yukihiro Matsumoto. We can't help acknowledging Ruby's power and charms.

Does Ruby edge out Python? Is Bash to be ignored? Not for the admin, as they need good knowledge of Bash to follow what's going on with the system. And in addition to Bash, every sysadmin should know a little Perl, Python and Ruby, but have in-depth knowledge of the one that they prefer. LXF

1st Ruby +++++
Web: www.ruby-lang.org Licence: GPLv2 or 2-clause Version: 2.2.0
Powerful, expressive and very quick to learn.

2nd Python +++++
Web: www.python.org Licence: PSFL Version: 3.4.2
Multi-paradigm, encourages good practices and great community.

3rd Perl 5 +++++
Web: perl.org Licence: GPL or Artistic License Version: 5.20
Still a great friend to the sysadmin.

4th newLISP +++++
Web: www.newlisp.org Licence: GPL Version: 10.6.1
So powerful, it deserves to get more use.

5th Bash +++++
Web: www.gnu.org/software/bash Licence: GPLv3+ Version: 4.3.30
Doesn't do everything, yet remains essential.

Over to you...
We don't want to start a holy language war, but we would love to hear what you use. Email your opinions to [email protected].

Next issue: Anonymising distros
Also consider...
While Bash falls down in some areas,
traditional shell scripting could also be
represented by Zsh, which has some small but
useful differences, such as better access to
positional variables, and the ability to extend
the shell through widget functions.
Nevertheless, it's not a rival to our other
choices, and nor is PHP, despite its use in
command scripts by some devotees. Instead,
our left-field alternative is Rebol (Relative
Expression Based Object Language), whose
leap towards software freedom two years ago
may have come too late to ensure universal
popularity. However, Rebol has an elegant
design and syntax, and a useful read–eval–
print loop (REPL) console.
Rebol's 'dialecting' (using small, efficient,
domain languages for code, data and
metadata) equips it for just about anything. It’s
particularly good at dealing with the exchange and interpretation of information between distributed computer systems, but it also makes for powerful, terse shell scripts. If you're looking
for a new language for 2015 give it a try.
Raspberry Pi 2
Les Pounder salivates at the prospect of a new Pi and promptly
breaks his teeth on the sweet, new raspberry-flavoured treat.
In brief...
The latest
single board PC
from the
Raspberry Pi
Foundation comes
with the spec
boost that we
were all hoping
for. The Pi 2 is the
latest in a line of
products from the
Foundation and
can run a number
of Linux distros.
Specs
SoC: Broadcom BCM2836
CPU: Quad-core ARM7 800MHz
GPU: Videocore IV 250MHz
Mem: 1GB
GPIO: 40-pin
Ports: 4x USB 2.0, 100BaseT Ethernet, HDMI, MicroSD card
Size: 85.60 × 56.5mm
When the Raspberry
Pi appeared in 2012 few
could have envisaged how
popular it would be. In the years after its
release the Raspberry Pi has become
the most popular single-board
computer on the market and spawned
many imitators, but none with the rich
community that has grown organically
around the Raspberry Pi.
Since the release of the original
Raspberry Pi there have been three
versions of the flagship B model,
starting at 256MB RAM and increasing
to 512MB with the second B and B+.
But in all of these models the system on
a chip (SoC) has remained the trusty
BCM2835 with an ARM 11 700MHz
CPU. The community have done
wonderful things with these resources
but now the specification boost that
they were waiting for has arrived.
In early February, the Raspberry Pi 2
arrived and the original ARM 11 has
been replaced with an ARM 7 CPU
running at an improved 800MHz.
But rather than stick with a single core,
the new version comes with four cores
which speeds up the Raspberry Pi by as
much as six times. To go with the new
CPU, the amount of RAM has also been
upgraded to 1GB. The rest of the
hardware, however, matches that of the
B+: a 40-pin GPIO,
four USB 2 ports and
10/100 Ethernet. Physically the
Raspberry Pi 2 also has the same
dimensions as the B+.
On the testing bench
To show the improvements made to the
Pi part deux, we wanted to run a few
real-world benchmarks to show how
powerful the new Pi actually is when
directly compared to the B+.

The first test on our list is booting both Pis from cold to login prompt. The B+ managed this in 33 seconds versus 17 seconds for the Raspberry Pi 2. We then set both Pis to boot straight to desktop and the B+ managed 42 seconds while the Pi 2 came in at 21 seconds – half the time of the B+!

Once at the desktop we tested a few common applications. Creating a new world in Minecraft took 42 seconds on the B+, and 21 seconds on the Pi 2. Loading IDLE 3 took 13 seconds on the B+ and a mere 4 seconds on the Pi 2.

Running SunSpider in the new optimised browser gave a glimpse at real-world performance. Over the suite of tests there was a 2.5 times boost in speed. Considering the complexities of multi-threading this sounds like a reasonable expectation. Even so, individual results showed a near four-fold increase on this unoptimised code.

The Raspberry Pi B+ and Pi 2 both come with the same Videocore GPU as before and in our tests there was a small improvement in FPS (Frames Per Second) for the Pi 2, largely thanks to the increased RAM present on the board. Our last test was file transfer speeds via Ethernet; for this we used scp to copy a 692MB Big Buck Bunny video file to each Pi. On the B+ we saw an average of 3.8MB/s and on the Pi 2
we saw 4.6MB/s, which is a 0.8MB/s speed increase.

The Raspberry Pi Foundation have released an updated Raspbian image which includes the ARM v7 kernel image necessary to use the new CPU. Applications written for the original Raspberry Pi are fully compatible with the Raspberry Pi 2, building upon the rich projects that have been written since the initial launch of the Raspberry Pi.

The Raspberry Pi 2 fulfills a lot of the requests made by the community and provides a stable and well-supported platform for hackers, makers and learners to carry on with excellent projects for many years to come. LXF

The form factor may be the same as the B+, but the Pi 2 packs a punch.
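If you want to repeat our Ethernet test, it's a one-liner; the file name and address below are placeholders for your own video file and Pi:
time scp big_buck_bunny_720p.mp4 pi@192.168.1.50: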
SunSpider Benchmarks
Test | Pi 2 | B+ | Times faster
Total | 2760.9 | 8178 | 2.96
3d | 550.9 | 1427.8 | 2.59
  cube | 157.3 | 473.6 | 3.01
  morph | 167 | 296 | 1.77
  raytrace | 226.6 | 658.2 | 2.90
access | 211.9 | 435.9 | 2.06
  binary-trees | 27.6 | 69.8 | 2.53
  fannkuch | 101.5 | 190.1 | 1.87
  nbody | 52.8 | 118.7 | 2.25
  nsieve | 30 | 57.3 | 1.91
bitops | 113.8 | 206.1 | 1.81
  bits-in-byte | 22 | 35.6 | 1.62
  bitwise-and | 29.1 | 48.2 | 1.66
  nsieve-bits | 52.8 | 104.1 | 1.97
controlflow | 28.3 | 64.6 | 2.28
  recursive | 28.3 | 64.6 | 2.28
crypto | 221.4 | 578.6 | 2.61
  aes | 112.4 | 287.6 | 2.56
  md5 | 60.1 | 162.2 | 2.70
  sha1 | 48.9 | 128.8 | 2.63
date | 336.3 | 1269.9 | 3.78
  format-tofte | 171.5 | 641.9 | 3.74
  format-xparb | 164.8 | 628 | 3.81
math | 158.4 | 394.5 | 2.49
  cordic | 43.3 | 99.9 | 2.31
  partial-sums | 78.7 | 215.7 | 2.74
  spectral-norm | 36.4 | 78.9 | 2.17
regexp | 101.9 | 160.6 | 1.58
string | 1038 | 3640 | 3.51
  base64 | 63.3 | 178.8 | 2.82
  fasta | 156.9 | 409.7 | 2.61
  tagcloud | 177.8 | 617.7 | 3.47
  unpack-code | 514.5 | 2021.6 | 3.93
  validate-input | 125.5 | 412.2 | 3.28
Sysbench Prime | 74.68 | 509.58 | 6.8
Features at a glance
Powerful 4-core ARM v7 processor: The new Broadcom BCM2836 ARM v7 quad-core processor with 1GB of RAM yields results (see the benchmarks, above) that are four times the performance of the old BCM2835 SoC.
A great new Raspbian UI: Available since December, the new sleek Raspbian desktop runs well on the B+, but on the Pi 2 it feels like a responsive desktop that we'd expect to see on our main computers.

Verdict
Raspberry Pi 2
Developer: Raspberry Pi Foundation
Web: www.raspberrypi.org
Price: £20
Features 9/10
Performance 10/10
Ease of use 10/10
Value for money 10/10
An almost perfect single-board computer that marries great hardware – that's backward compatible – with a lively and supportive community.
Rating 10/10
Hands-on with the
Raspberry Pi 2
Les Pounder has early access to the first batch
of Raspberry Pi 2s and takes one for a test drive.
The new Raspberry Pi comes with much more power than its predecessors. This is thanks to the
improved ARM 7 processor running
four cores at 800MHz each and the
generous 1GB of RAM. This increase in both
CPU and RAM is a massive
benefit to projects that
require pure CPU grunt, such
as OpenCV and Minecraft.
The Raspberry Pi 2 also
benefits from the design
improvements made for the B+
with more USB ports thanks to the LAN9514
providing four ports over the 9512’s two.
The B+ also introduced better power
management and this, again, is also present in
the Raspberry Pi 2. “Power consumption while
performing a given task is comparable to that
of B+,” explains Eben Upton, CEO of Raspberry
Pi Trading. “Obviously if you push Pi 2 hard it
will consume more power as it’s doing more
work. Power consumption of B+ under heavy
load is roughly the same as the old Model B.”
So now that we have the latest Raspberry Pi, let's take it for a test drive! And for this extended tutorial we will be using the latest version of Raspbian available via the Raspberry Pi website (www.raspberrypi.org/downloads) as it comes with the kernel7.img necessary to use the ARM7 CPU.

The easiest method to set up your microSD card is to use NOOBS (New Out Of The Box Software). To use it you will need at least an 8GB microSD card. Download NOOBS as an archive from the Raspberry Pi website and extract the contents to your 8GB microSD card, which should be formatted to use a FAT32 filesystem.
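If you're preparing the card from a Linux machine, something like the following works; double-check the device with lsblk first, as the /dev/sdb and archive names here are only examples:
lsblk                                      # identify your card – we'll assume /dev/sdb
sudo mkfs.vfat -F 32 -n NOOBS /dev/sdb1    # format the first partition as FAT32
sudo mount /dev/sdb1 /mnt
sudo unzip NOOBS.zip -d /mnt               # extract the NOOBS archive onto the card
sudo umount /mnt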
With NOOBS copied to your microSD card, unmount and remove the card from your computer and insert it into your Raspberry Pi 2; you should hear a gentle click once it is firmly in place. Connect up your Raspberry Pi to a monitor via the HDMI port and then attach a keyboard and mouse via the USB ports. You will also need to ensure that you have Internet access for your Pi. The easiest method is to use the
Ethernet port. Finally, attach the power supply to the micro
USB port. Your Raspberry Pi 2 will now boot for the first time.
On first boot NOOBS will ask you which OS to install, in
this case we require Raspbian. Select the OS and start the
installation, which will take around 10 minutes.
With the install complete the Pi will reboot and start
Raspbian for the first time, and the first thing that you will
notice is how quick the boot process is now at 17 seconds
versus 33 seconds for the B+! For the observant you’ll also
note that there are now four raspberries at boot up. This
denotes that the Raspberry Pi 2 has four cores, a rather nice
little Easter egg that harks back to the old boot screens of
Linux distributions. Once booted we are immediately
presented with the raspi-config menu; this is a tool to further
configure your Raspberry Pi. At this stage we will simply exit
out of the menu and login as normal. The standard login
details have not been changed and remain as:
USERNAME: pi
PASSWORD: raspberry
Once logged in you need to type
startx
to load the desktop. You’ll see that the desktop is a little
different to previous versions of Raspbian, this is due to
extensive changes that were made by the Foundation back
in December of 2014 and is largely the work of Simon Long,
who used to work for Broadcom. In fact, it was Long who
hired Eben Upton at Broadcom, but now Simon Long has
joined the Raspberry Pi Foundation to work on the user
interface and his first project was to develop a new desktop.
The Raspberry Pi Foundation have created a very
powerful single board computer, but how can we test that
power? It is somewhat fitting that we can calculate Pi to as many decimal places as we wish, so let's say 10,000. To do that we need to install some software first. Open LXTerminal
and type the following two lines:
sudo apt-get update
sudo apt-get install bc
We’ve just installed a precision calculator that we can use
from the terminal. So now to the test, calculating Pi to
10,000 places and timing the activity.
time echo "scale=10000; 4*a(1)" | bc -l
In our test it took 17 minutes 25.725s to calculate Pi to
ten thousand decimal places on our stock Raspberry Pi 2.
We repeated the same Pi test on a Raspberry Pi B+ and it
took significantly longer at 25 minutes 5.989 seconds to
perform the calculation. As you can see straight away, that's a very clear indication that the processor of the new Raspberry Pi 2 is much more powerful than those of previous models we've seen.
Computing Pi to 10,000 places is a fitting test for our Raspberry Pi.
The command to run the test is run from LXTerminal. Here we show the time it
took before we overclocked.
Our quick test gives us a good idea of how much of a serious performance beast the Raspberry Pi 2 is out of the box, but can we tweak it into more of an animal? Well, earlier we dismissed the raspi-config menu but for this next step we need it again. In LXTerminal, type the following:
sudo raspi-config
Turbo charge your Pi
Our first post install configuration is to review the memory
split. This is how the memory is divided between the GPU
(Graphical Processing Unit) and the main system. On the
Raspberry Pi a typical setup would be around 64MB of RAM
for the GPU and the remaining RAM would be allocated to
the system. This is an optional config tweak but you could
just leave it as is. You can easily tinker with these values, and
a general rule is that a terminal will not need as much RAM
as the full desktop, so for a terminal-only project you can
easily get away with 16MB allocated to the GPU. For desktop
applications such as Minecraft a minimum of 64MB is
needed. You will be prompted to reboot your Raspberry Pi,
do this and you will be returned to the login prompt.
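Behind the scenes, raspi-config stores the split as gpu_mem in /boot/config.txt, so you can also set it by hand; the values below simply mirror the examples above:
# /boot/config.txt
gpu_mem=16     # terminal-only project
#gpu_mem=64    # uncomment for desktop apps such as Minecraft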
With the changes made to the memory split now let us
return to the raspi-config main menu and head to the
Overclock menu. Your Raspberry Pi 2 already runs at
800MHz per core, which is an improvement over the original
700MHz single core ARM 11 CPU. We spoke to Eben Upton
and Gordon Hollingsworth about the new CPU and they both
confirmed that it can be overclocked to around 1.1GHz per core.
Quick tip
The default web
browser for
Raspbian, Midori
has recently been
replaced with
Epiphany which
has been optimised
for use on the
Raspberry Pi.
The new browser
is available via the
latest Raspbian
update and works
really well on
Raspberry Pi 2 and
older Pis.
Ubuntu on Pi?
The biggest surprise brought about by the new
Raspberry Pi 2 is compatibility with Ubuntu for
ARM 7 CPU. Before the original Raspberry Pi
was released in early 2012, Ubuntu was often
mentioned as a candidate for the Pi, but as
Canonical didn’t support the ARM 6 architecture,
which the ARM 11 CPU used in the original Pi,
another distro was needed. Early on we saw
Pidora, a fork of Fedora for the Pi, being used to
demonstrate the power of the Pi. But Pidora
brought a full desktop experience to hardware
that required a lighter distribution. After further
investigations Debian, in the form of Raspbian,
was seen as a suitable candidate and this
remains the official distro and is used for all
official tutorials and supporting documentation.
But compatibility with Ubuntu doesn’t mean
that the Foundation is abandoning Raspbian:
“We’re not planning an official Ubuntu image,”
says Eben Upton. “We're going to benchmark
‘regular’ armhf Debian against Raspbian, and
may switch if we see a compelling performance
advantage. Our preference is to stick with
Raspbian, perhaps with a few key libraries
swapped out dynamically, as this will allow us to
support Raspberry Pi 2 and Classic from the
same image.” At the time of writing there are no
ready made images for Ubuntu on Pi, but over
the coming months there are sure to be many
on offer for you to try, including Ubuntu Core.
Quick tip
Watching YouTube
videos is now
possible thanks
to Youtube_dl.
Normally YouTube
videos are Flash
based but there are
no Flash packages
for Raspbian. When
watching a YouTube
video in the web
browser, Youtube_dl
substitutes the
Flash element
of the web page
with an HTML5
compliant video.
core. We won’t be going quite that high, but we will overclock
our Raspberry Pi to a stable 900MHz using the Overclock
menu. While this is a relatively safe activity it should be
noted that going too far with overclocking can severely
damage the CPU due to the excessive heat generated by it
working harder.
We asked the Raspberry Pi team and it confirmed that
the core can reach temperatures of 85 degrees, and when it
does it will automatically turn off the Raspberry Pi for
protection. For ‘extreme’ tinkerers who want to push their
Raspberry Pi 2 to the limit, now might be the time to invest
in a set of heat sinks. If at any time you wish to return the CPU to its normal speed, re-enter the raspi-config menu and return the values to the stock 800MHz.
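The Overclock menu likewise just writes to /boot/config.txt, so a manual tweak looks roughly like this, using the figures from this tutorial:
# /boot/config.txt
arm_freq=900   # per-core clock in MHz; remove the line to fall back to the stock speed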
With our configuration changes made, and after a few
reboots we have successfully ‘turbo charged’ our new
Raspberry Pi. Let’s start the graphical user interface. After
logging back in use the LXTerminal to type
startx
to return to the desktop. Now, let's see how the changes have improved our Pi-to-10,000 score by repeating the test.
Open LXTerminal and repeat the test code which was
time echo "scale=10000; 4*a(1)" | bc -l
The code will run and in our test it took 15 minutes 28.519
seconds – that’s an improvement of two minutes!
The Raspberry Pi Foundation has taken great care to
build upon the legacy created by the Raspberry Pi Classic:
“Raspberry Pi 2 has been in development for a couple of
The raspi-config advanced menu contains configuration options that enable
you to really make the Pi your own.
The Hello_Pi demos are a great way to show off your
Raspberry Pi 2. You can wrap video around the teapot
using any H.264 compliant video.
years,” say Upton and Hollingsworth, and that includes the
time spent developing BCM2836. “The first silicon arrived at
the start of last May; there will be a video on the blog of me,
James and Dom in the lab at Broadcom at 1am, the day after
silicon came back, with the ‘video on a teapot’ demo running
from Linux on a Broadcom “Ray” dev board. The design of
the Raspberry Pi 2 board started last August (2014), and
we’ve had samples since October (2014). We went through
three iterations of prototypes to get exactly the right
performance,” says Upton.
Compatibility
The performance is reflected in the choice of CPU for the
Raspberry Pi 2. Rather than choose another architecture the
Foundation has stuck with an ARM-based CPU that is
compatible with the ARM11 found in the earlier Raspberry Pi.
The quad-core ARM7 can run software written for the older
Raspberry Pi: “Raspbian works out of the box, but requires a
new v7 kernel, which will be included in the downloads from
our website” said Eben.
As for hardware compatibility, the Raspberry Pi 2 shares
the same GPIO as the B+, which means that boards made
for the A+ and B+ will also work with the Raspberry Pi 2 and
this even includes HAT boards (Hardware Attached on Top),
which contain an onboard chip that communicates with the
Raspberry Pi to set up the board quickly.
There are some boards, however, that are not compatible
with the B+ and the Raspberry Pi 2 due to their size and
design. Boards such as PiFace [see LXF180] and PiFace
Control and Display – that was used to control a camera rig
for the Royal Institution Christmas Lectures – cannot be attached.
The making of Raspberry Pi 2
The Raspberry Pi Foundation are rather excited
about the new Raspberry Pi 2. We spoke to the
engineering team and Gordon Hollingsworth
about development: “The Raspberry Pi 2 is
100% awesome. It’s as close to a typical PC as
we wanted when we first started the project.”
And the amount of effort that’s gone into
development is impressive: “The team have put
the equivalent of 20 years of work into the new
Raspberry Pi and it’s processor, and that runs at
a cost of between £2-3 million.” But there’s still a
lot of enthusiasm for the older Raspberry Pi,
Eben Upton explains: “There are a lot of
industrial customers who won’t want to switch,
and, of course, we still have the Model A+.
To give you an idea of the ‘stickiness’ of an old
platform, we sold something like 80,000 Model
Bs after the Model B+ launch.”
The Foundation also has its Compute
Module, which was created to embed the
Raspberry Pi inside industrial applications.
We asked Eben if the Compute would be seeing
a similar upgrade or not: “We’ll do a Compute
Module 2 at some point, but probably not in the
first half of 2015.” And what of the A+? Would
there be a similar upgrade for it: “Nothing is
currently planned as the A+ price point is quite
challenging.” No upgrade for now then, but the
Raspberry Pi family has grown considerably
since 2012 and now features six devices.
The raspi-config overclock menu comes with the
required warning that pushing too far will break your Pi.
But the OpenLX SP team behind these boards has
released special versions for the B+ and Raspberry Pi 2.
3D graphics test
Every Raspberry Pi comes with the same VideoCore IV GPU
(Graphical Processing Unit) that enables the Raspberry Pi to
play back high-definition video at 1080p. The new Pi also
comes with this GPU, also made by Broadcom just like the
BCM2836 powering the new Pi. Did you know that there’s a
test suite for the GPU?
You can find the test suite by opening LXTerminal and
typing the following:
cd /opt/vc/src/hello_pi/
In there you will find a number of directories containing
many different demos. But before we can use them we need
to build the demos from source, and to make this easier the
Foundation have provided an automated build script that will
only need to run once. To run the script, in LXTerminal type
./rebuild.sh
This build script will run the build process for all of the
demos so it may take a few minutes, even on our new
speedy Raspberry Pi.
Once completed there are a number of demos that you
can try out and the first on the list should be hello_teapot.
To run it, in LXTerminal make sure that you are still in the
hello_pi directory and type:
cd hello_teapot
./hello_teapot.bin
You will now see a 3D render of a teapot with video that’s
been directly rendered on to its surface. To exit out of the
teapot demo hold Control+C together and you will be
returned to LXTerminal.
Another demo to try is hello_triangle2 and to get to that you will need to go back to the hello_pi directory, and we can do that by typing:
cd ..
From hello_pi we can change our directory to hello_triangle2 and run the demo by typing:
cd hello_triangle2
./hello_triangle2
This demo appears to be rather static at first, but try
moving the mouse around and you will see two fractals
superimposed one over the other moving and reacting to
your mouse movements. Apparently, you can also control
the fractals to create a perfect circle. To exit out of the hello_triangle2 demo hold Control+C together and you'll be returned to LXTerminal.
So we’ve taken a look around the new Raspberry Pi 2,
seen how it performs out of the box and taken time to
supercharge our Pi. On the next page we will interface
Minecraft with Pibrella to create a pushbutton bomb
deployment system!
Quick tip
The Raspberry Pi
2 shares the same
dimensions as the
B+ but for those
of you looking to
reuse a B+ case,
such as the Pibow,
it’s worth noting
that some surface
mount components
have been moved.
These changes
don’t affect the
overall size of the
board but as the
Pibow uses layers
to build, a new layer
will be required for
your Pibow.
Updating your Pi
The Raspberry Pi Foundation has released
many new updates to the existing Raspbian
install and keeping your Raspberry Pi up to date
is a really good practice. There are a few handy
Terminal commands to help you, such as
sudo apt-get update
which will update the list of installable software.
Using the following
sudo apt-get upgrade
will check for the latest software, while
sudo apt-get dist-upgrade
is similar to upgrade, but a more intelligent
updater that removes old kernels.
If you would like to install the new desktop,
and who wouldn’t as it is a really great piece of
work by Simon Long, type the following three
lines into a terminal:
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install raspberrypi-ui-mods
Once completed, reboot your Raspberry Pi
for the changes to fully take effect. Once logged
back in you will be greeted with the new and
improved interface. Updating is done via the
terminal using the very powerful apt package
management system that’s used by all Debianbased distributions.
Link a Pibrella to Minecraft
1 Attach Pibrella to your Raspberry Pi
We'll use the big red button on the Pibrella (Pibrella.com) to set off TNT in Minecraft. The Pibrella fits over the first 26 GPIO pins. Never attach it while the power is on! Use a blob of blu tack or modelling clay to prevent the underside shorting out the HDMI. Now connect the other cables as usual, but insert the power into the micro USB port.

2 Setup Pibrella
With our Pi booted to the desktop, open LXTerminal and type:
sudo apt-get update
sudo apt-get install python-pip
sudo pip install pibrella
This installs the software that we need to use Pibrella with Python.

3 Get the code
We've created a GitHub repository that contains the code for this tutorial; visit https://github.com/lesp/Pibrella-Minecraft-TNT and download a copy. Next open LXTerminal and type
sudo idle
This opens idle, a Python editor, with elevated privileges enabling us to use Pibrella with Python. Now open the example code.

4 Examining the code
Our code is written in Python 2 as the Minecraft module is currently only available for that version. The code is fairly easy to read. Note that lines starting with # are comments. The first few lines are imports; they import extra functionality, in the form of the Pibrella and Minecraft libraries, for our project. After that we use a variable called mc to store connection details to Minecraft.

5 In position
Minecraft uses x,y,z coordinates to know the position of objects in the world. We create a function called button_changed(), which locates the player and then creates a cube of TNT at coordinates near to the player. Lastly we set the function to be called when the button is pressed. Keep the window open, then open Minecraft and create a new world.

6 Drop the bomb
With Minecraft ready and our code open, press TAB to release the mouse from Minecraft and click on Run > Run Module in idle. The idle shell will come to life and run the code. Switch back to Minecraft and go to a nice spot. Press the red Pibrella button to drop the bomb. Hit the TNT with your sword... and then RUN! Note: You can run this on the original Pi, but it could crash Minecraft. LXF
TECHNOLOGY, TESTED
NEW TECH WEARABLES CHANNEL!
The best just got better!
NOW LIVE!
Still showcasing the best news, reviews and features, but now optimised for every screen and your satisfaction. Love tech? Then visit...
www.techradar.com
twitter.com/techradar
facebook.com/techradar
Subscribe to
Get into Linux today!
Read what matters to you when and where you want.
Whether you want Linux Format delivered to your door, your device,
or both each month, we have three great options to choose from.*
Choose your
package today!
#1 for Free Software
Print
Digital
Now on Android!
Every issue delivered to your door with
a 4GB DVD packed full of the hottest
distros, apps, games, and more.
PLUS exclusive access to the
Linux Format subscribers-only area.
Instant access to the digital editions
of the magazine on your iPad,
iPhone and Android* devices. PLUS
exclusive access to the Linux Format
subscribers-only area, featuring
complete issues & disc downloads.
ONLY £31.99
ONLY £20.49
Your subscription will then continue at £31.99 every
6 months – SAVING 17% on the shop price.
Your subscription will then continue at £20.49 every
6 months – SAVING up to 37% on the shop price.
*Only available in certain territories: http://bit.ly/LXFwhere
Get the complete
Print + Digital
Get into Linux today!
package
Now on Android!
A DVD packed with the best
new distros and free & open
source software every issue.
Exclusive access to the
Linux Format archive – with
1,000s of DRM-free tutorials,
features, and reviews.
Every new issue of the
magazine in print and on iPad,
iPhone, and Android* devices.
BEST VALUE!
Never miss an issue, with
delivery to your door and
straight to your device.
Huge savings, the best value
for money, and a money-back
guarantee.
ONLY
£38.49
Your subscription will then continue at £38.49 every
6 months – SAVING 17% on the shop price and giving
you up to a 78% discount on a digital subscription.
Two easy ways to subscribe...
Online: myfavouritemagazines.co.uk/LINsubs
Or call 0844 848 2852
(please quote PRINT15, DIGITAL15, BUNDLE15)
Prices and savings quoted are compared to buying full-priced UK print and digital issues. You will receive 13
issues in a year. If you are dissatisfied in any way you can write to us or call us to cancel your subscription
at any time and we will refund you for all undelivered issues. Prices correct at point of print and subject to
change. For full terms and conditions please visit: myfavm.ag/magterms. Offer ends 19/03/2015
Get into Linux
Get into
Linux
Curious about Linux but not sure how to traverse this
unfamiliar territory? Mayank Sharma will help you get
started so you can join the growing ranks of Linux users.
There’s never been a better time to get into Linux. It’s slicker than ever, easier to install than ever, and all the big-name distros, like Ubuntu, have just received updates. So Linux Format thinks it’s time for a fresh start with a fresh install.
If you’ve never touched Linux before, never fear: we’ve got a complete guide to get you started, while the cover disc has Fedora 21 and Ubuntu 14.10, updated and patched. These are “Live” versions, so you can boot them straight off the DVD and try them out instantly.
One of the biggest impediments to widespread Linux adoption is that you don’t get Linux on a PC from Currys – getting Linux onto a PC is a more involved process. But the advantages a Linux desktop offers over the other mainstream OSes are well worth this extra step. For starters, Linux is open source, which is to say that you can legally download a copy of Linux and install it on all your computers. It also ships with a ton of software and you can download more with a single click.
Unlike proprietary OSes, a Linux flavour (or distro) is very malleable. You can swap out any of the default apps or even its entire interface and replace it with something more suitable to your needs. Choice is another hallmark of Linux, with multiple options from simple components to complex suites. Furthermore, besides being compatible with all your newer hardware, Linux can also turbocharge hardware that’s past its glory days. To top it all, you can do everything you can on a Windows box. From streaming video to playing the latest games, a Linux desktop will work just as well for you as any other system.

“To top it all, you can do everything you can on a Windows box.”
Ubuntu
ON THE DISC
The most commonly known
Linux distribution (often abbreviated to
distro), Ubuntu pays special attention to desktop
usability. In the 10 years of its existence, Ubuntu has
galvanised the development of Linux on the desktop
and is the go-to distro for third-party developers and
vendors who want to run their wares on Linux.
Fedora
ON THE DISC
Red Hat’s open source offering
to the world, Fedora is known for adapting
and offering new technologies and software to its users.
Over the years, the distro has managed to find a clever
balance between offering new features and stability to
its users, which makes it popular with both new and
experienced Linux campaigners.
Mageia
Although it’s just had four releases to date,
Mageia has a pedigree in usability that dates
back to the 1990s. Mageia is a community project that’s
supported by a non-profit organisation, which is
managed by a board of elected contributors. The distro
is known for its customised user-friendly tools for
managing the installation.
All you need to know to anchor Linux on your computer.
The Linux installation process is involved but it isn’t
actually that cumbersome. The exact installation
steps are slightly different for every distribution, but
in general the distro’s graphical installer will guide you
through the necessary steps pretty easily.
In essence, installing Linux is very similar to installing a
piece of software, albeit with a few caveats:
Disk partitioning Unlike a piece of software, installing
Linux requires you to create a dedicated partition on your
hard disk. This isn’t an issue if Linux will be the only OS on
this computer. However, if you’re installing Linux alongside
another OS, such as Windows, you’ll have to take steps to
preserve the existing data. Many Linux distros will offer to
partition the disk for you automatically, though you can
create partitions yourself with ease from within Windows
using the Disk Management tool (see Make Room for Linux,
below). The advantage of manually partitioning your disk is
that you get to decide how much space to allocate to Linux.
When partitioning, remember to create two new partitions. The bigger one, with at least 12GB of disk space, is for the OS itself, which you’ll format as ext4. You’ll also need
to create a second partition for what’s called swap space.
In simple terms, the swap partition extends the amount of
physical RAM on your computer. A general rule of thumb for
computers with a small amount of RAM (one or two
gigabytes) is to create a swap partition that’s twice as large as
the amount of RAM on your computer. For computers with
more RAM, it’s best to create a swap partition that’s the same
size as the amount of RAM you have.
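If you’d rather prepare the partitions ahead of the installer, the same layout can be created from a live session with a tool such as parted. Treat the following as a sketch only: /dev/sda, the start and end points and the partition numbers are placeholders you must adapt to your own disk, and the example assumes a machine with 2GB of RAM, so swap gets 4GB under the rule of thumb above:
sudo parted /dev/sda mkpart primary ext4 100GiB 118GiB  # ~18GB partition for the OS
sudo parted /dev/sda mkpart primary linux-swap 118GiB 122GiB  # 4GB of swap, twice the RAM
sudo mkfs.ext4 /dev/sda3  # format the new root partition as ext4
sudo mkswap /dev/sda4  # initialise the swap partition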
Securing data During the installation process, many
distros including Fedora and Ubuntu will give you an option to
encrypt the Linux partition. This option gives you an added
layer of security by insulating your data from unauthorised
access. To enable this option you will need to supply a
passphrase which will then act as the key to unlock the data.
Another important step during installation is setting up a
root account. On most distros this step is part of the user
creation process where you define the login credentials of
your regular user account. The regular user doesn’t have permission to modify the system, while logging in as root gives you complete control over your system.
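In day-to-day use this means you stay logged in as the regular user and only escalate for individual administrative tasks. A quick sketch; the package command is only an illustration, and on distros that set a separate root password you’d use su instead:
sudo apt-get update  # run a single command with root privileges
su -  # or switch to a full root shell, using the root password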
Dual boot One piece of software you should be familiar with when installing Linux is the bootloader. It’s a small program that
tells the computer where to find the different OSes on the
disk. Most Linux distros use the Grub 2 bootloader.
In general, you shouldn’t have to do anything here, even
when installing Linux on a Windows 8 computer that uses the
UEFI BIOS with Secure Boot enabled. The latest versions of
most mainstream distros, including Ubuntu and Fedora, install
a UEFI-compatible bootloader that will work correctly out of
the box. However, since different vendors implemented UEFI
differently, you might not get to the Grub bootloader screen
and instead end up booting straight into Windows after
installing Linux. In such a case, you should consider enabling
the Legacy BIOS mode wherein the UEFI firmware functions
as a standard BIOS. The option to enable Legacy BIOS is
under the UEFI settings screen.
Testing before installation Almost every mainstream
distro, including Ubuntu, Fedora and Mageia allow you to boot
into a ‘live’ environment, which lets you experience the distro
without disturbing the contents of your hard disk. You can use
the live environment to get familiar with the distro and also
verify the compatibility of your hardware with the distro.
Also note that Linux distributions are distributed as ISO
images. You can burn them to a CD or DVD, depending on
their size, using the option to burn ISO images. You can also
transfer ISO images to a USB drive. There are tools, such as UNetbootin and Yumi, that will create bootable USB drives
with the ISO of your distro, while Mageia recommends using
the Rufus utility.
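If you’re happy in a terminal, the venerable dd command will also write an ISO to a USB stick. A cautionary sketch, using the Ubuntu image from the disc as an example; /dev/sdX is a placeholder you must replace with your stick’s device name, because dd will cheerfully overwrite whatever it’s pointed at:
lsblk  # identify the USB stick first, eg /dev/sdb
sudo dd if=ubuntu-14.10-desktop-amd64.iso of=/dev/sdX bs=4M
sync  # make sure all writes are flushed before unplugging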
Make room for Linux: Resize a Windows partition
1
Shrink Windows
Before you can partition your disk you’ll need
to squeeze your Windows partition to free up
some disk space for the new partition. Head
to the Disk Management tool, and right-click
your main partition that is typically assigned
the drive letter C. Then select the Shrink
Volume option from the pop-up menu.
2
Create new partition
The Shrink dialog box shows you the total size
of the drive and the maximum amount of
shrink space available. You cannot squeeze
more space out of a drive than the size shown
here. To create a new partition, specify the
size of the partition in the space provided in
MB and click Shrink to start the process.
3
Use the partition
After the process is complete, a new partition
showing the amount of free, or unallocated,
space appears next to the Windows C: drive
partition. You can then point your Linux
distro’s installer to this free space. Remember
to repeat the process and create another
partition for the swap space as well.
Taking baby steps with Linux.
One of the biggest differences that foxes users coming from proprietary OSes is the lack of a consistent ‘look and feel’ to the Linux desktop.
The default desktops on Ubuntu, Fedora and Mageia
distros all look and behave differently from each other. We
say default because unlike other proprietary OSes, a Linux
distro enables you to swap out the desktop and replace it
with an entirely different one to better suit your workflow.
The idea of the desktop as a separate entity from the
operating system sounds foreign to users coming from
Windows or Mac. But like all things Linux and open source,
users are spoilt for choice when it comes to putting a face on
top of their Linux distro.
Unity in diversity
The Ubuntu distribution uses its own home-brewed Unity
desktop. The most prominent component on the desktop is
the vertical Launcher which functions pretty much like a
taskbar. It houses icons for the frequently used apps for quick
access that you can modify as per your requirements. Also,
some icons have specialised right-click context menus that
give you quick access to frequently used features.
The first icon on the Launcher brings up the Dash, which is
Ubuntu’s take on the traditional menu-based navigation
system. It features a search box at the bottom and anything
you type here is used to look for matching apps, documents,
music, videos, instant messages, contacts and other content.
Furthermore, you can also use the Dash to install and
uninstall apps and preview media files. Unity also includes the
Heads Up Display (HUD), which is an innovative take on the
application menus. Using HUD helps you avoid the trouble of
looking for options embedded deep within nested menus.
To access HUD press the Alt key from inside any app and use
the Dash-like search box to perform a task.
The default Unity experience is the result of extensive
usability research by Canonical. But you’ll find some options
to tweak the desktop from under the System Settings tool
accessible via the gear & spanner icon in the Launcher. The
settings are grouped into three broad categories. The
Personal group houses settings for customising the look and
feel of the desktop by changing the wallpaper and modifying
the behaviour of the launcher. Pay attention to the Online
Accounts settings which you can use to sign into several
online services, such as Facebook and Google Docs, and
integrate their contents with the desktop apps. For example,
adding your Flickr account will integrate it with the Shotwell
photo manager.
Gnome thyself
Gnome is another popular desktop, and the Gnome 3
desktop contains more or less the same elements as
Ubuntu’s Unity but presents them in a different way. For
starters the desktop is very bare. Click on the Activities button
in the top-left corner to reveal the Overview which is very
similar to Unity’s Dash. In this view, you also get a Launcher-like Favourites bar for accessing frequently used apps.
In the centre you get a preview of all open windows. To the
right is the Workspace Switcher, which always shows the
current Workspace and an additional one. If you add windows
Users of Gnome-based distros should use the Gnome Tweak Tool to tweak the behaviour of their desktop.
Resuscitate an old workhorse
One Linux speciality is infusing life into
machines doubling up as paperweights
because they can’t keep up with the
hardware demands of modern OSes.
While there are many distros that are
designed to power older hardware, our
all-time favourite is Puppy Linux.
The distro uses one of the lightest
window managers (JWM) and though it
might not be pretty to look at, it’ll turn
that old lethargic workhorse into a
galloping stallion. But the main reason for
Puppy’s stellar performance on hardware
with limited resources is its sheer number
of lightweight custom apps. The distro
has graphics apps, productivity apps,
apps to play back, edit and even create
multimedia. Using its custom apps, you
can block website ads, grab podcasts, do
internet telephony, burn optical media,
and a lot more.
The distro is available in different
flavours. The Wary Puppy edition uses an
older kernel and includes additional
drivers to support peripherals like dial-up
modems. There are flavours based on the
recent Ubuntu releases too, such as
Tahrpup based on Ubuntu 14.04 and
Slacko Puppy based on Slackware Linux.
These editions use a newer kernel than
Wary but take advantage of Puppy’s
custom apps for limited resources.
Puppy Linux has very helpful forum boards and loads of
documentation written specifically for new users.
to the second Workspace, a third will automatically be added.
At the top is a search box that will match any text to apps and
documents on the local computer as well as online services.
Gnome includes an Online Accounts app that enables you to
sign into online services, such as Google Docs and Flickr.
In fact, Fedora will ask you to sign into these online services
when you boot into the distro for the first time.
The Gnome desktop also has several custom apps of its
own that can fetch information and data from the added
online accounts. For example, the Gnome Contacts app can
pull in contacts from various online sources, such as Gmail.
Similarly, Gnome Documents will help you find documents
from online repositories such as Google Docs.
New users should also keep an eye out for the desktop’s
peculiarities. For one, you won’t find the Minimise buttons on
any of the windows in Gnome. When you want to switch to
another app, head to the Activities Overview and launch a
new window or select an existing open one. Another esoteric aspect is the lack of desktop icons, along with any way to create shortcuts or place folders on the desktop.
However, Gnome’s redeeming aspect is its tweakability:
you can add new features literally with a single click. Gnome
supports a plethora of extensions that you can enable
without any installation. Just head to the Gnome Extensions
website (http://extensions.gnome.org), find the plugin you
wish to enable and toggle the button to activate it on your
desktop. Some of the popular extensions are designed to help
ease the transition for users moving to Gnome from
proprietary operating systems, such as Windows.
Kick off with KDE
Unlike the other two desktops, the layout and behaviour of the KDE desktop and the placement of its Kickoff app launcher will certainly feel familiar to users from non-Linux operating systems. But KDE is so malleable that many KDE distros look unlike each other. In many ways, KDE is the quintessential Linux desktop with its flexibility and myriad options. There’s literally no end to KDE’s customisation options.
One of the most useful KDE features is Activities. Using this feature, you can create several context-aware activities, each with its own set of apps and desktop furniture. For example, you can create a Social activity that signs you into all your instant messaging accounts and displays updates and feeds from various social networks. Many KDE distros ship with just the default activity, called the Desktop Activity. However, you can fetch more activities from the internet and build on them to suit your workflow.
Furthermore, KDE ships with multiple interfaces, or Views, designed to make the best of the available desktop real estate. There are different Views for regular screens and netbooks, though you can use any View on any type of computer. To switch Views, right-click on the desktop and from the context menu select the Default Desktop Settings option. In the window that opens up, select the View tab and check out the different Views in the Layout pull-down list. Many KDE distros place the Folder View widget on the desktop, which displays the contents of a folder in a neat little box that you can place anywhere on your screen. Then there’s the Folder View, which lets you place files and folders anywhere on the desktop. The Search and launch View is designed for devices with a small screen or a touchscreen. Each View has additional configurable elements.
In addition to bundling the configuration options along with the individual elements, KDE also houses them all under the System Settings panel, alongside other system-wide configuration options to administer the underlying Linux distro. It might seem daunting, but you don’t need to set up or review each and every option before using the desktop. Customising KDE is an on-going process and not a one-time affair. The desktop is designed to grow and mutate as per your usage requirements.

“Ubuntu, Fedora and Mageia are available in multiple editions with different desktops.”
31 flavours
In addition to these three chief desktop environments, there
are a lot more that you can put atop your distro. There are
fully fledged environments, such as Cinnamon, as well as
lightweight ones, such as Xfce, LXDE and Mate. In fact, most
mainstream distros, including Ubuntu, Fedora and Mageia are
available in multiple editions with different desktops.
For example, the Ubuntu distro has a number of officially supported spins. There’s Kubuntu, which dresses Ubuntu with KDE, as well as a Gnome spin, an Xfce spin and another that uses the Mate desktop. The Fedora distro, which was once known as the premier Gnome distro, now also has a wonderful KDE flavour. Similarly, you can use Mageia with the Gnome desktop. In fact, Mageia and Fedora also have install-only DVD images that give the user the option to install multiple desktops.
Switch desktop environments
You can also install multiple desktops on
top of your distribution, such as the
popular Cinnamon desktop. It’s available
in the official repositories of Fedora and
Mageia and you can install it via their
respective graphical package managers.
On Ubuntu, it’s available via the ppa:gwendal-lebihan-dev/cinnamon-stable PPA. Add this PPA to your system
[as explained in the main text, over on
page 38] and then install the desktop
environment from the Software Center.
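If you’re comfortable in a terminal, the same job is three commands on Ubuntu, using the PPA mentioned above. A sketch; cinnamon is the package name the PPA ships, but check your package manager if it differs:
sudo add-apt-repository ppa:gwendal-lebihan-dev/cinnamon-stable
sudo apt-get update
sudo apt-get install cinnamon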
Once you’ve installed multiple desktop
environments you can easily switch to
another one. To do this you just log out of
the current desktop environment, and
use the login manager and enter your
login credentials. Before logging into the
desktop, explore the buttons on the login
manager. One of the buttons will reveal a
drop-down list of all the installed
desktops. Select the desktop
environment you want to use and the
login manager will log you into that
desktop. This way you can try them out
and choose one you like the best. Choice!
There are several desktop environments that you can
install using your distro’s package manager.
Fleshing out your chosen distribution.
Unlike Windows, a typical Linux desktop distribution is ready to use right out of the box. Instead
of shipping with basic apps, such as a vanilla text
editor or a barebones drawing tool, your average Linux
distro will include a fully fledged office suite and a
comprehensive graphics editor. This is in addition to the
basic set of apps for common tasks, such as browsing the
Internet, checking email, instant messaging with your
friends across various networks, organising photos,
listening to music and watching videos.
That said, since we all use our computers differently, you’ll
likely want a piece of software that isn’t included by default.
Your distro has specialised tools that’ll help you install
gazillions of quality open source software without clicking
through complex setup wizards.
Linux distros use a collection of software tools, both
graphical and command-line based, that are together referred
to as a package management system. These tools help you
install, remove, and upgrade software (also called packages)
with ease. Individual pieces of software are grouped inside
packages. In addition to the software itself, packages also
include other information, such as a list of other packages or
dependencies which are required for the app to function
properly. Furthermore, the package management system
relies on a database known as the repository to keep track of
all the available packages.
Package Management 101
The Linux world is divided broadly into two different package
formats – RPM and Deb. These are precompiled binary
packages that are designed to simplify the installation
process for desktop users. RPM was created by Red Hat
Linux, and is used by distros such as Fedora, and Mageia
while Deb is used on Debian-based systems, such as Ubuntu.
Additionally, almost every major distro maintains its own set
of graphical tools to enable desktop users to install, upgrade and remove apps. You must also be familiar with the distro’s
repository structure and how and where it houses software.
In addition to enabling repositories you can also select a different mirror for downloading software.
Ubuntu uses the Advanced Packaging Tool, or APT, package management system. You can use the Software & Updates
tool for manipulating Ubuntu’s repositories (or repos). The
tool lists repos in four different tabs. By default, the four official
repos under the Ubuntu Software tab are enabled. The Main
repo includes officially supported software, and the Restricted
repo includes software that isn’t available under a completely
free license. The two interesting repos are Universe and
Multiverse repos, which include software maintained by the
community and software that isn’t free, respectively.
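The same APT system can also be driven from the command line, which is worth knowing even if you stick to the graphical tools. A minimal sketch, with gimp standing in for any package:
sudo apt-get update  # refresh the package lists from all enabled repos
apt-cache search image editor  # search the repos by keyword
sudo apt-get install gimp  # install a package plus its dependencies
sudo apt-get remove gimp  # remove it again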
Unlike Ubuntu, the Fedora distro uses the RPM package
management system. The distro houses repositories under
the /etc/yum.repos.d directory and the main repository is
named fedora.repo.
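Fedora’s command-line equivalent is yum, which reads its repo definitions from that directory. A minimal sketch, again with gimp as a stand-in package:
yum repolist  # list the enabled repositories
yum search gimp  # search the repos by keyword
sudo yum install gimp  # install a package and its dependencies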
Mageia uses urpmi, which is a wrapper for the
RPM package management system. The distro has three
official repos. The core repository contains open source
packages, the non-free repository contains closed-source
apps, and the tainted repository has packages that might
infringe on patents and copyright laws in some countries.
Each of these repos is further divided into four sub-repos.
The release repo includes stable packages, the updates repo
includes packages that have been updated since the release,
A backup primer
Your distribution will include a tool to help
you backup your data and you should
take some time out to get familiar with it.
Ubuntu, for instance, ships with the Deja
Dup backup app which is designed for
new users. You can also install it on top of
Fedora and Mageia.
No matter what backup tool you use,
you should take a moment to consider
what you should backup and where.
Backing up the entire home directory
might be convenient but is usually overkill. Instead you should just include
the directories under your home directory
such as Downloads and Documents.
Also check with important apps, such as email clients, which keep downloaded emails, attachments and address books under hidden directories (prefixed with a dot) beneath the home folder.
Also, keeping the backed up data on
another partition of the same disk isn’t
going to be of much use, since the whole
disk might fail and render the backup
copy useless. One solution is to keep the
backup on another separate disk or
external drive. Or, if you have good
Internet bandwidth, the backup app
might also help you store the backups on
a cloud storage service.
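If you outgrow the graphical tools, even a single rsync command makes a serviceable backup. A minimal sketch, assuming an external drive mounted at /media/backup:
rsync -av --delete ~/Documents/ /media/backup/Documents/  # mirror Documents to the drive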
Déjà Dup has a simple interface that shouldn’t intimidate even first-time users.
and the backports repo contains packages of new versions
backported from the Cauldron repository, which will
eventually become the stable repo for the next release.
There’s also the testing repo, which contains software primarily for QA purposes.
To configure repos, launch the Mageia Control Center and
head to Software management > Configure media sources
for install and update. To install the official online repositories,
click on Add and then either on Update sources only or Full
set of sources. The first choice is the minimum to keep the
distro updated, while the second allows you to install new
software. These options will populate the window with a list of
repos. Then toggle the Enabled checkbox next to the repo you
want to fetch software from.
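With the media sources enabled, urpmi can also be used from a terminal. A minimal sketch; gimp is just an example package:
sudo urpmi.update -a  # refresh all enabled media
sudo urpmi gimp  # install a package
sudo urpme gimp  # remove it again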
FedEx packages
All major desktop distros include a graphical tool for
managing packages. Ubuntu’s Software Center is one of the
best tools for the job. You can find software by clicking on the
category reflecting the type of software that you’re looking
for. When you select a category, you will be shown a list of
apps. There’s also a search box in the upper-right corner of
the window which will look for software matching any entered
keywords. Once you’ve found the software you want, click the
Install button to its right. This will fetch the software as well as
any required dependencies and automatically install it.
All newly installed software is added to the Launcher and you
can also find it from under the Dash.
Fedora uses the PackageKit graphical tool that’s listed as
Software in Gnome’s Activities menu. It too lists software
categories as well as a keyword-matching search box at the
top to help you find software. Once you’ve found the software
that you’re looking for, click on the Install button and the app
will add it to your installation.
The graphical package management tool for Mageia is
named Drakrpm. The tool isn’t as pretty as the software
centres in Ubuntu and Fedora, but is very functional and
intuitive enough to get the job done. You can filter its list of
available apps to show only packages with GUI, security
updates, bug fix updates, and more. Application groups are
listed in the sidebar and there’s also a search box to hunt for
packages based on keywords. When you find a package you
wish to install, simply toggle its corresponding checkbox and
click on Apply.
The Repo men
The larger open source community offers a lot more
packages than the ones listed in your distro’s official repos
and almost every distro has a mechanism to add and install
software from these third-party repos.
External repos in Ubuntu are known as Personal Package Archives, or PPAs. You can add a PPA repo to your distro using
the Software & Updates tool. But first you need the address
of the PPA. This is listed on the PPA’s Launchpad site and will
be something like ppa:example-ppa/example. Now fire up the
tool and switch to the Other Software tab. Then click on the
Add button and paste the address of the PPA in the window
that opens. Ubuntu will then ask you to refresh the repos to
enable the PPA.
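From the terminal, the add-apt-repository command does the same job in one line, using the placeholder address from above:
sudo add-apt-repository ppa:example-ppa/example
sudo apt-get update  # refresh so the PPA's packages become visible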
Most desktop distros have an easy-to-use graphical package manager.
“Unlike Windows, a typical Linux desktop distro is ready to use right out of the box.”
Similarly, Mageia has a number of third-party repos as well
and you’ll need their URL to add them to your distro. Once
you have the URL, fire up the Mageia Control Center and head
to Software management > Configure media sources for
install and update and click on the Add a medium option.
Enter the address of the repo in the window that pops up
along with its type such as HTTP or FTP.
Fedora also has a number of third-party software repos but the most popular is RPM Fusion. The repo is further subdivided into two independent repos that house free and non-free software. You can install both of these repos from within the browser itself by following the instructions on the repo’s website (www.rpmfusion.org/configuration).
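The project also documents a command-line route. A sketch based on RPM Fusion’s published URL pattern, which may change over time; the rpm -E %fedora fragment expands to your release number:
su -c 'yum localinstall --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm'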
Grabbing Google
All popular Google software, such as Chrome,
Earth, the audio and video plugin for Hangouts
and others can be installed on Linux. But you
won’t find them in the official repos of major
distros because of their licensing. However, you
now have all the know-how to install them with
ease if we point you in the right direction.
The downloads page of each supported
Google app contains links to both 32-bit and
64-bit versions of the software in both RPM and
Deb formats. Download the package for your
particular distro and double-click on the file to
install it with the distro’s package manager. All
official packages for Google apps will also install
the appropriate external Google repository in
your distribution to keep the software updated.
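If you prefer the terminal to double-clicking, the downloaded package can be handed straight to the lower-level package tools. A sketch using Chrome’s standard download filenames; adjust them for your architecture:
sudo dpkg -i google-chrome-stable_current_amd64.deb  # Ubuntu and other Deb-based distros
sudo apt-get -f install  # pull in any missing dependencies
sudo yum localinstall google-chrome-stable_current_x86_64.rpm  # Fedora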
Another piece of popular proprietary software that you may want to install is Skype. Ubuntu users can
simply enable Partner Repositories by visiting
Software & Updates > Other Software. This will
add the official Skype repos and you can then
install the software from the Software Center as
for any other software package.
Mageia, on the other hand, includes the Skype
package in its non-free repo. If you’ve enabled
this repo, then simply search for the get-skype
package which will download Skype from its
website. You can also head to the Linux download
page on Skype’s website and choose your distro
and architecture from the pull-down list which
will download either a Deb file or an RPM file.
Double-click on the file to install it with the
distro’s package manager.
Turn your distro into the ultimate media centre.
Most Linux distros designed for desktop users are
capable of handling all types of content you
throw at them. But some content, especially
most audio and video, is distributed in closed formats that
are encumbered by patents. The distros can’t play these
files straight out of the box, however, most have clearly
outlined procedures to allow users to install components
to play these popular non-free media formats.
Ubuntu gives you the option to install the components
that can play files in restricted formats, such as MP3s, during
the installation itself. If you’ve already installed the distro, you
should then use the package manager to install the ubuntu-restricted-extras package, which includes popular proprietary codecs and plugins.
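From the terminal that’s a one-liner; expect to be asked to accept a licence for the bundled Microsoft fonts along the way:
sudo apt-get install ubuntu-restricted-extras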
On Fedora these codecs are bundled in the third-party
RPM Fusion repository. You’ll have to first enable the repo as
mentioned earlier and then fire up a terminal and enter the
following commands to fetch the codecs:
su -
yum install gstreamer{1,}-{plugin-crystalhd,ffmpeg,plugins-{good,ugly,bad{,-free,-nonfree,-freeworld,-extras}{,-extras}}} ffmpeg libmpg123 lame-libs
If you’re using Mageia, you’ll find the multimedia codecs
under the Tainted repository, so make sure you enable it
following the procedure mentioned earlier. Then launch the
Welcome app from under the Tools menu and switch to the Applications tab. From here you can install several useful
and popular packages including multimedia codecs.
Where’s the bling?
Mageia’s Welcome app is a wonderful utility to set up the distro for all kinds of users.
Despite the rise of the open source WebM format, many
websites still require Adobe’s Flash plugin to properly stream
multimedia content. Getting the Flash plugin on Linux is
tricky since Adobe is no longer developing Flash for Firefox on
Linux. The only way to use the latest Adobe Flash plugin on
Linux is to use Google’s Chrome browser which includes the
Pepper-based Flash plug-in.
That said, if you don’t want to switch to Chrome you can still install the out-of-date Flash plugin and continue using the Firefox browser. Or, you can extract the newer Pepper-based Flash plugin from the Chrome browser and use it on Chrome’s open source cousin, the Chromium browser.
You can tweak Linux’s Grub bootloader with the Grub Customizer tool (ppa:danielrichter2007/grub-customizer).
Ubuntu users can install Flash for Firefox with the
sudo apt-get install flashplugin-installer
command. If you’re using Chromium, you can use the latest Pepper Flash plugin by installing the pepperflashplugin-nonfree package.
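A sketch of that route; the package provides a helper script that fetches the Pepper Flash files from Google’s Chrome package, though the exact mechanics may vary between releases:
sudo apt-get install pepperflashplugin-nonfree
sudo update-pepperflashplugin-nonfree --install  # download or refresh the plugin files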
Fedora users can download the Firefox plugin from
Adobe’s website by adding its repo. If you are running a 64-bit
installation, this command:
yum -y install http://linuxdownload.adobe.com/adobe-release/adobe-release-x86_64-1.0-1.noarch.rpm
will download and install the correct repo. You can then
import the key for the repo with:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-adobe-linux
before installing the plugin with:
yum -y install flash-plugin
Mageia users can simply enable the nonfree repository
which houses the Flash plugin and then install it from the
Applications tab in the Welcome app.
Best multimedia apps
Most distros ship with audio and video players. The popular
ones are Rhythmbox which is the default on Gnome-based
distros and is well integrated in Ubuntu, and KDE’s default
Amarok. In addition to local tracks, both players can also
stream Internet radio and podcasts. If you want more
attractive looking players, fire up the package manager and
look for Banshee and Clementine.
Similarly, the default video player on most Gnome-based
distros is Totem (now simply called Videos). If you want something with more features you can grab MPlayer. This is essentially a command-line media player but it has a number of frontends. Gnome users can use Gnome-Mplayer and KDE users can use KMPlayer. There’s also the popular cross-platform VLC that can handle pretty much any file format.
Full steam ahead.
Gaming has long been considered Linux’s Achilles’
heel. Over the years we’ve had several quality open
source games, but they’ve lacked the mass-appeal
of popular gaming titles available on proprietary desktops.
All that changed in early 2013 with Valve’s announcement of the Steam for Linux client for its hugely popular game distribution service.
You can install the proprietary client in your distro with
ease. Ubuntu users should enable the Partner repo and then
install the client from the Software Center. Fedora users
should install the RPM Fusion repos and then install Steam
using the package manager. Similarly, Mageia users should
enable the non-free repo and then use the Mageia Welcome
app to install the Steam client.
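Assuming the relevant repos are enabled as described, installing the client itself boils down to one command per distro. A sketch; the package is simply called steam in each case, but verify the name in your package manager:
sudo apt-get install steam  # Ubuntu, with the Partner repo enabled
sudo yum install steam  # Fedora, with RPM Fusion enabled
sudo urpmi steam  # Mageia, with the non-free repo enabled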
But before you fire up the Steam client make sure you’ve
got the proper drivers for your graphics hardware. This tends
to be the major sticking point for users, as card
manufacturers restrict distribution of their closed-source
driver elements. Start by getting details about the make and
model of your graphics card using the
lspci | grep VGA
command. You can get more details, such as its clock speed
and capabilities, with:
sudo lshw -C video
Your distro will ensure you are using the most suitable
open source driver as soon as you boot into the distro. The
Oibaf PPA (ppa:oibaf/graphics-drivers) is popular with
Ubuntu users for getting the latest bleeding edge open source
drivers. However, users of Nvidia and ATI/AMD hardware
should use proprietary drivers from the respective vendor for
the best gaming performance.
Ubuntu users should use the X-Swat PPA (ppa:ubuntu-x-swat/x-updates) for the latest stable Nvidia drivers. Once enabled, fetch the drivers with:
sudo apt-get install nvidia-current
Fedora users will find the latest GeForce drivers in the
RPM Fusion repo. After adding the repo, install the driver with
yum install kmod-nvidia xorg-x11-drv-nvidia-libs kernel-devel acpid
Mageia users should first enable the non-free repo and
then launch the Mageia Control Center and head to Hardware
> Set up the graphical server. In the window that opens, click on the toggle next to the Graphic Card label, which will display a list of graphics cards. Browse the list and select yours. If
Mageia has a proprietary driver for the card, it’ll install it.
Mageia’s list also includes AMD cards, but if you are using
Ubuntu or Fedora, the best source for the proprietary driver is
AMD’s website (http://support.amd.com/en-us/download). This page has several dropdown menus that you
can use to pinpoint the exact driver for your graphics card.
Then download the suggested driver and extract it to reveal a
.run script. Before you install the driver make sure you install
its dependencies with:
sudo apt-get install dh-make dh-modaliases execstack
libc6-i386 lib32gcc1
Once that’s done, you can execute the script with:
sh ./amd-driver-installer-13.35.1005-x86.x86_64.run
This will launch the graphical AMD Catalyst proprietary
driver installer and will also install the Catalyst Control Center
GPU management software. When the installer has finished,
head back to the terminal and enter
/usr/bin/aticonfig --initial
to configure the driver. LXF
While distros do come with open source games in their repos, Steam offers access to AAA gaming titles like Dying Light.
User admin basics
When Linux is installed, it’s automatically
configured for use by a single user, but you
can easily add separate user accounts.
The superuser, root, has complete access
to the OS and its configuration; it’s
intended for administrative use only.
Unprivileged users can use the su and
sudo programs for controlled privilege
escalation. Users are grouped together
into a group, and inherit the group’s
access privileges. Every Linux user is a
member of at least one group. Now, you
can control which files and folders are
accessible by a user or a group. By
default, a user’s files are only accessible
by that user, and system files are only
accessible by the root user. In Linux files
and folders can be set up so that only
specific users can view, modify, or run
them. This allows you, for instance, to
share a file with other users so they can
read the file but not make any changes.
Access to files is controlled via a
system of ownership permissions. You
can use the ls -l command to list the permissions.
When used without specifying a filename
it will list the permissions of all files within
the directory. The different characters in
the output represent a file’s read (r), write
(w), and execute (x) permissions for the
owner, group, and all other users. You can
alter the permissions of files and folders
with the chmod command or graphically
from the file manager.
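Here’s a short sketch of how that looks in practice; report.txt is a made-up file that ends up writable by your group but invisible to everyone else:
ls -l report.txt  # eg -rw-r--r-- 1 alice staff 4096 Mar 1 10:00 report.txt
chmod g+w report.txt  # let members of the file's group modify it
chmod o-r report.txt  # remove read access for all other users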
The file manager of almost every distro allows you to
tweak the group associated with a file or folder.
Made of Wynn
Matthew Hanson talks to
Wynn Netherland about the
importance of APIs, testing
and the joy of Ruby.
Wynn Netherland has
helped build the web
as we know it for
nearly twenty years.
A prolific creator and
maintainer of Ruby
API wrappers, he now
works on the GitHub
API, as well as authoring many books.
Linux Format: What are the biggest changes
you’ve seen during all your time spent
building the web?
Wynn Netherland: Great question. It’s got
more professional in many ways, it’s got less of
a cowboy type of atmosphere. I remember early
in my career at Compaq and later on Hewlett-Packard, we were kind of pioneering open
source for e-commerce, as and when. So
learning how to take payments online was just
something that everyone was trying to figure
out. The things we used to do even without
RoboSource Control and things like that, the
standards that are now commonplace. Even
legal requirements back then. So in many ways
the web has grown up, it has gotten a lot more…
corporate. In some ways we’ve lost a bit of the…
I want to say innocence, but a lot of the fun,
playground type atmosphere that we had early
on where everyone was figuring this out and it
wasn’t in this walled garden, except for AOL!
It seems like we’ve traded an open web in these
last few years for more and more walled
gardens. And for some reason as a community
we’re OK with that. I’m not sure why.
LXF: I suppose in the early days younger
people started off having Geocities accounts
where you could make your own website,
play around with HTML and even drop in a
checkout if you were selling anything. It was
quite easy to get started. Now people are,
rather than learning how to code their own
website, a lot of people are just thinking I’ll
have a Wordpress blog, or a Facebook page.
WN: Right, and it seems that Facebook pages
are more commonplace than Wordpress blogs.
Even if you pick a theme for a Wordpress blog,
or any other similar service, or open source
CMS tools, at some point you’ll find yourself in
HTML and CSS, tweaking it somewhat. With
Facebook, it’s what you see is what you get.
LXF: So as the internet has changed with, as
you say, more walled gardens, would you still
say that the internet has changed for the
better in some ways?
WN: As is the way when humans are involved
it’s both a positive and a negative. So
interoperability, even though data is more
segmented, we’re more interoperable in the
way that we exchange data. So back in the late
‘90s we were still doing things like Corba and
XML wasn’t really quite a thing yet. You would
send HTML to the browser and just hope that it
would get rendered! And now we’re kind of
settled on tools that make data interchange
really easy; JSON and HTTP, these were the dial
tones of the web. But who owns that data?
How can I export that data and take it with me?
These are some of the larger challenges. Who's
looking at my data is probably one of the bigger
ones now.
LXF: So creating a stable API is obviously
incredibly important – but how important
would you say it is?
WN: It’s of the utmost importance. I think it’s
still in some ways an idealistic goal. As a
consumer it’s easy to say ‘Oh you broke my
app! You shouldn’t change your API!’, right?
We’ve got a small but vocal minority in the API
community that would say if you’re versioning
an API then you’re pretty much doing it wrong.
Things like HyperMedia and media types.
Although they help insulate change, I don’t
think, short of infinite foreknowledge, that you
can always remove it … Sometimes you introduce change even when you’re fixing bugs and bringing it in line with what you published that already works. Say you have a bug out there, and you’ve published the API and it works one way. Now you’ll have a whole subset of people who have built tools around that bug. If you go and notice the bug, and fix the bug, now you’ve broken people’s tools that worked around the workaround.

LXF: Yeah, that’s always a concern. So with your self-proclaimed passion for API user experience that ties in as well, why is it so important?
WN: From a user experience standpoint, I began my career in a print design shop for a newspaper, and had been moving further down the stack as I’ve gone. And when open APIs took off a few years ago it began just by scratching my own itch and wanting to get data out of the systems I was using. I wanted to stitch together my own blog, with all the data from all the points that I was using on the web. I just found myself frustrated at common, almost deliberate attempts to make it painful for the folks that were getting [the data] out! I think it boils down to a lack of empathy. User experiences are really just about empathy. We tend to equate a user interface with a graphical user interface. We almost imply the ‘G’ now of the GUI. The way that Unix works, it’s text in, text out. That’s a user interface. It just doesn’t have a graphical element, for the most part. So APIs are the same way. Even if you adopt REST, there’s a way to architect those APIs to put yourself in the position of the consumer, so that you can think ‘what would I like to see?’, or ‘what would enable workflows around this tool?’ and not just wrap your database in REST.

HOW THE WEB HAS CHANGED
“We’ve traded an open web in these last few years for more and more walled gardens.”

LXF: So you need to have empathy for people who are using the API, but there is a certain point where you have to close off and just do your thing; sort of follow your path?
WN: Right. So the way that we approach that at the GitHub API is to first quantify it. To measure the change that we’re going to have. Whether it’s a refactoring, or adding a new endpoint or taking an endpoint away, something like that.
We’ll measure the change and quantify it. From
there we figure out how to minimise the change.
That allows us to handle the change. So if you
have an API end point that no one’s calling, you
can remove that end point and you haven’t
affected anybody. Really, it’s human impact that
is what it boils down to. So when we look at
shutting off a particular API endpoint that is
problematic to the Ops team and to the
infrastructure, or it’s putting the whole site at
risk – something like that. We’ll look at ways to
minimise that risk and if we end up having to
shut off an API method, the very first thing
we’re going to do is see who’s calling it and how
often and reach out to those folks and say ‘let’s
look at alternate workflows to get around this’.
LXF: So you mentioned GitHub. How did you
get involved?
WN: It was through the API wrapper. I was
working with the Changelog podcast a few
years ago, and one of the things we wanted to
do was just showcase a lot of the open source
that was happening on GitHub. It was
something we called ‘Tale of the Changelog’ and
it was basically a fire hose of everything that
was coming out on GitHub. I didn’t like any of
the Ruby wrappers that were around at the
time, they mapped too closely to REST – I’m a
firm believer that API wrappers should be
idiomatic to the language that they’re written in.
So it should feel like an extension of Ruby if
you’re writing in Ruby. So I wrote a wrapper that
later became octokit and then realised after
they reached out to me a couple of years later
to join GitHub, that GitHub was using it
internally. Had I known that I’d have put a little
more effort into it! Because for me it was a
hobby project. The very first time I
bootstrapped the GitHub source code and it
started installing octokit I was ‘wow!’. So now we
use it as the primary API interface for a lot of
our tools, because a lot of our tooling is written
in Ruby. So it powers things like checking
permissions of internal scripts, getting stats,
authenticating our various internal applications.
It all uses octokit internally.
LXF: It must have been flattering to see your
code there…
WN: It was, but any open source project is a
team effort, so there’s been a number of
contributors over the years. Erik Michaels-Ober
has probably been the biggest one on octokit.
He works at Soundcloud and has a number of
wrappers out there. But when the Mac team
wanted to extract their API wrapper for
Objective-C from the GitHub for Mac app, they
wanted a name. Naming things and invalidating
caches are the big computing problems, right?
So, it was obvious that the best name for the
Objective-C client would be octokit, so I told
them: ‘You guys should have this name, and we
should just make that an overall umbrella brand
– for lack of a better term – and so our project
became octokit and I moved it from my
personal GitHub namespace into the octokit
org, and so now we’ve got flavours in
Objective-C and .net. People keep asking why
there’s no Python. We do have one in Go, but
our philosophy on releasing Octokit wrappers is
that we want them to be first party, something
that we’re using and testing and maintaining.
That way we’re not trying to maintain
something that we’re not really using, as it gets
out of date.
LXF: So as part of the API team at GitHub
what are your main responsibilities?
WN: Keeping the lights on, first and foremost.
In the API team at the moment there are just
two of us, there’s Jason Rudolph and myself.
A small team embedded within the larger, what
we call ‘.com engineering team’. So, primary
responsibility is site-wide horizontal aspects to
the API. Things like authentication, rate limiting
and anything that any of the API swots would
use. Then we coach and cheerlead the feature
teams for pull requests, issues and repository
access to extend and maintain their APIs, as an
extension of what they’re doing on the site. The
API is a smaller app embedded within the larger
web app, so we share a lot of the backend code,
just the workflows and the output are isolated.
LXF: So you test a lot of the API behaviour
as well, and in previous talks you’ve given
you’ve touched upon some of the tools you
use, such as JSON. Could you explain to us
the testing process?
WN: Sure, so our primary output format for the
API is JavaScript Object Notation (JSON), so we
use JSON schema. One of the shortcomings of
JSON is, unlike XML, there’s no built-in schema
technology that comes along with it. It’s very
easy to parse and you get real primitive spec in
the language of your choice, but there’s no way
to know what shape the data should have.
So we use JSON schema, both in the input and
the output. Just recently we’ve been using it on
the input and we’re getting some big gains
there. Traditionally over the output we can,
when we run our test suite, look at the JSON
that’s coming back and validate that this object
response should have these properties, and
things of that sort. Now we’re starting to add
that back in, allowing us to validate inputs.
So when you go to create a pull request you
pass us some JSON, we can run it through
those same schema validations to say ‘Sorry
this isn’t the right format’. And what it does for
us is that it allows us to declaratively specify in
the test what a valid document looks like. Then
we don’t have to write one-off tests for every
permutation under the sun to check if they are
valid or not.
LXF: Also Rack::Test is another tool you use…
WN: So Rack::Test is for any Rack apps; the Ruby world has an HTTP pipeline called Rack. A few years ago a lot of the web frameworks got together and standardised on an input-output pipeline pattern that they called Rack.
So now many conform to this interface where
you can rack up – like with a rack of servers in a
datacentre – you can rack up these middleware
applications to do things in a pipeline format. So
if you have things that take an HTTP response
and they act on it in some way and they pass it
on down to the next app to consume. So things
like authentication, rate limiting, cookie
handling, if you need to scrub input or output
for cross site scripting vulnerabilities, things like
that. We use Rack::Test to be able to test our
apps at test time.
LXF: It’s important to catch any potential
problems at that stage, isn’t it? So can you
accurately track how your API is being used?
Going back to your empathy comments, it’s
definitely useful to see how your API is being
used and how it performs.
WN: We have a couple of metrics on a couple of
different levels. One of those is just sheer
responses and the status of those responses at
particular endpoints. We know the Events
endpoint which is our activity streaming API. It
gets a certain amount of volume and we know
that these are the types of applications that
people are building on top of GitHub. So that’s
one level just to see the sheer volume, but
volume doesn’t always tell the whole story.
There’s also, I guess because not every method
has the same value from a workflow standpoint,
the things that really excite me are the, I guess
more in-depth tasks, around creating pull
requests and merging pull requests and doing
everything that you would do in the website.
Instead of the API being simply a data store for
a mashup to use as a database, I really get
excited when I see applications building entire
workflows on top of it, their cloning a repository,
they’re adding features and then they’re using
the API to submit those pull requests. So you’ll
see a lot of tools that build code review type
applications on top of GitHub that mirror what
we’re doing with the website, but fit their
individual workflows better.
LXF: So how important are open source tools for creating and testing APIs?
WN: I think they’re indispensable. Outside of a Microsoft ecosystem I don’t know how you would do it otherwise. The very first slide of my talk [at OSCON last year] is about how I want to build a resilient API in open source, and for me that actually starts with showing them [the audience] some tea mugs and some shell tools… I only have 40 minutes for the talk so I have to fast forward a little bit! But it really involves everything from the web frameworks you choose, what servers they’re going to run on, the instrumentation stack of how you’re going to collect metrics and gather any sort of feedback on how your website performs. Your testing frameworks… every one of these aspects is a decision matrix, really, figuring out which tools you’re going to use. It’s almost an embarrassment of riches in the open source community. Because you have choices. It’s a tyranny of choice, and often it’s nice to come into a project, like I did with GitHub, and a lot of those decisions have already been made, you just get to play in somebody else’s sandbox.

LXF: Another open source project that you’ve touched on in your talk is Science. Could you explain what that is?
WN: This is from John Barnette and Rick Bradley, two GitHubbers. So we had a particular problem inside of the .com codebase where, as good as the test suite is, and it’s a phenomenal test suite just based on the experience of projects I’ve worked on, there are certain aspects of your applications, a soft underbelly, that you really don’t trust your tests 100%. It could be because your framework’s using metaprogramming techniques that are difficult, in a dynamic language, to track down at design time. So if there’s something that’s critical that can’t change but needs refactoring, Science is an application… a project, rather… that lets you test alternate code paths. If you have a method that is read only, it’s not mutating, and it returns a value, you can now, just like the scientific method, run both code paths side by side and compare the results and then publish those results into some charts and graphs. So you can see side by side, OK this is the original code path, this is the new code path. They differ about 10% of the time, and we need to figure out where that variance is coming from. Or the new code path is 100% in line, but it’s slower, or faster. Science allows us to compare those things and roll them out with confidence and make it data driven, evidence-based, instead of just kind of putting your finger in the air and saying ‘I hope it works’!

LXF: And I suppose it helps you identify patterns, then?
WN: Yes, especially when it’s things that involve security that just can’t afford to be wrong.

LXF: How long have you been using Science?
WN: Science has been around… this particular definition of it [laughs]… I think it’s about a year old. It hasn’t been open source that long. I first spoke about it back in March 2014, and it’s really catching on throughout the team as a codebase gets long in the tooth and we’re trying to figure out how to make things a little faster and a little bit more modern in certain places.

WEB VS NATIVE
“It’s a false dichotomy because even native apps are using JSON and APIs.”
LXF: So going back to the APIs, there’s now
a huge variety of devices that we can now
use to access the internet and online
services. Has that provided challenges for
creating APIs?
WN: It definitely introduced more load on APIs.
I saw a definition the other day where, when it
comes to at least the mobile space, when
people say ‘web versus native’, it’s almost a false
dichotomy because even native apps are using
JSON and APIs behind the scenes. For the most
part those are hitting the same applications,
they’re just rendering them in a different
computer-friendly format, right? So it’s
definitely exploded the amount of applications
that are wired up to the web, and it’s not just the
browsers that are having that fun anymore.
But it’s also kind of changed the way we think
about native apps. I remember one of the ways I
got into writing my second or third API wrapper
was for the now defunct [location-based social
media] service called Gowalla. At the time it was
up against Foursquare. Where with Foursquare
you would be competing with everyone else to
be mayor of a certain geographic location, with
Gowalla it was more of your personal backpack.
So every state of the United States, or every
country in the world that you visited, you got a
different badge. You could also leave virtual
objects for other people. It just had a totally
different vibe, it was really art driven and I just
loved it. I knew some friends who worked there
and I kept bugging them to come out with an
API and they kept saying they were working on
it, but keeping it really close to the vest. Well,
then their mobile app came out, so I wrote a
Gowalla API wrapper by sniffing out their mobile
API, by running their app through a proxy
server. It also turned me on to a lot of the things that I’d poked at in their API that were sound decisions that made the patterns you see in other APIs now, and a lot of those have landed in the GitHub API as well.
LXF: So when you’re trying to figure out how
different people are going to be accessing
certain services or apps, to test that is
probably quite difficult as well?
WN: So one of the things we've done – since change is expensive, and usually not welcome, as we mentioned before! – is start tinkering with the notion of beta API endpoints. Normally the way these happen is somebody will release an API and, after it gets some adoption and they figure out what they want to rev on it and stuff, they'll introduce the second version of the API – and GitHub even did this in the past, where they had the V2 and a whole different namespace of
URLs, and then V3. When we moved to V3 we
decided we were going to start versioning
endpoints based on output, not based on path.
Which means that, with a certain HTTP header that you send in, you ask for a particular version of the resource. And here lately, over the last 18 months or so, we've started using that as essentially beta passwords for new features. We'll release a beta version of an API and right there in the media type header we'll have the word 'preview', which is basically your opt-in contract to say 'I understand I'm not supposed to build anything in production on top of this API endpoint', but it allows us to get some real-world feedback. It's subject to change at any moment – we make no warranty, period, on these
beta APIs, but we’ve done this a couple of times
and gotten some really nice real-world feedback
from folks that have used some particular
feature in ways that we haven’t anticipated.
So we’ve made changes before we release the
final product and the final product has been
much better for it due to the changes.
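To make the header-based versioning concrete: GitHub's v3 API selects versions – and, as described above, preview features – through the Accept media type rather than the URL path. A quick sketch in Python; the requests library is an assumption, and the preview media type name shown is purely illustrative:
import requests  # assumption: the requests library is installed
# Ask for version 3 of a resource via the Accept header, not the URL.
headers = {'Accept': 'application/vnd.github.v3+json'}
# Opting in to a beta feature would use a preview media type instead, e.g.
# headers = {'Accept': 'application/vnd.github.some-feature-preview+json'}
r = requests.get('https://api.github.com/repos/basho/riak', headers=headers)
print(r.status_code)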
LXF: So has the take-up been encouraging?
WN: It has. Not only from a beta standpoint, but now if we discover that a particular resource doesn't work well for a particular class of users, we can give them a different media type output that does. For instance, there's fields that have to be removed when certain features are removed from an application, and in the Python or Ruby space, removing a field from a JSON, even setting it to null, is no big deal. Static clients don't like this as much – compile time issues. So in those cases we can offer a different media type on that endpoint and allow them to fetch a different version of that resource that meets their needs.
LXF: You mentioned that at GitHub Ruby is the language of choice?
WN: Yeah, I think the vast majority of our tooling is in Ruby. We're doing more and more in Go, with media uploads and things like that, just because it's well-suited for concurrent uploads, things like that. Of course, if you're doing a Mac app you'll be using Objective-C and now Swift – those teams are now evaluating Swift and moving over to that. Of course, the Windows team is going to be in .net. So we're always going to be a polyglot shop, but when it comes to web apps, the vast majority are in Ruby.
LXF: What is it about Ruby that's so useful?
WN: Well, 'optimising for happiness' is kind of key to the Ruby community. Ruby is just… I was a .net developer before I came into the Ruby community, and for me personally nothing makes me as happy as when I'm writing Ruby. There's certain times when you need more performance than Ruby can give you, because it's a dynamic language, although you can push it pretty hard. Go, I like a lot, but it's not as approachable as Ruby for the most part. Which is interesting because I've written a lot of JavaScript too, but there's just something about JavaScript's syntax, with the function keyword everywhere, that tends to read not as close to, I guess, human speech as Ruby does… You can do interesting things with its metaprogramming capabilities to write domain-specific languages… instead of writing dozens of assertions, I can roll that up into like a five-line DSL that I can write those assertions behind. It makes it very clear and very declarative about what it's going to do, without having to be as verbose as a lot of languages have been.
LXF: And that's a lot friendlier to newcomers as well. They don't just have to wait until the end to see the result…
WN: It's very easy to follow, especially if you're new to Ruby… some of its faults are that there can be three ways to do something, and you might not know which is the idiomatic way to do something in Ruby. So producing it does take a bit more time than consuming it, but it's very easy to follow.
"It's getting to a point where there's this ubiquity that's easy to underestimate."
LXF: So GitHub has recently reported that it has eight million users, which makes it the largest code host in the world. What do you think has helped make it so successful?
WN: Open source and getting influential developers and influential projects into GitHub has just been phenomenal. I remember in the early days, back in the beta period, when you had to sell someone on Git first before you could sell them on GitHub. Just doing that at lunch – there was a gentleman sitting next to me, they'd just moved from Subversion to Git, so the whole table just erupted into 'Oh you need to try this and that, and that's how it helped me'. So when you see those light bulb moments and you see entire projects moving into GitHub, it's just a tidal wave of activity and community that goes into GitHub and that goes back out. The big thing is there's no longer a gatekeeper for most projects. The canonical repository is whichever one is most active, and so if your project lies dormant, there's nothing stopping the community from forking it, and now this project is the canonical one, because that's where all the activity is. Maintainers really have to make an effort to stay on top of the communities. Or delegate. But we're still finite humans, so as activity has exploded, you'll see a new pattern emerge where someone commits a pull request and it looks good and it looks solid, and the next thing you do – which in previous years would be unheard of – you'd go ahead and give them a commit, so they can then help maintain that project, because there's just no way you're going to maintain all the open source – you can't keep up with your tooling from that standpoint.
LXF: I suppose positive word of mouth in the
open source community helps.
WN: I'm still blown away when you go out of the echo chamber of typical tech conferences – especially if you go into design circles, where, with more and more of the HTML5 and some other technologies, that whole community is now intersecting with traditional open source development communities. Or you go into enterprise-flavoured conferences – we've got far less penetration there, but you can see it's still advancing at the same rate, so it's very encouraging to get
immersed in those communities and spend a
few days with folks that really had that light bulb
moment of ‘Oh wow, this could revamp things’.
LXF: What does the future hold for GitHub?
WN: Revolutionise the way people write
software. I guess the saying inside the company
is that software is eating the world. So if you
think about how far software goes into every
facet of life. From the devices that you hold in
your hand to even the scarier parts, like
hospitals. It’s not the needles that scare me, it’s
the software! Because I know how a lot of that is
written! [Laughs] But software is everywhere,
right? And it’s getting to a point where there’s
this ubiquity that’s easy to underestimate, and
so the more software is around us, the more
tooling is going to have to evolve to keep up with
just every industry out there. I think GitHub is
going to be ground zero for that activity. LXF
Riak - NoSQL
Mihalis Tsoukalos explains everything you wanted to know about NoSQL, but were too afraid to ask, and reveals why admins love this high-speed system.
The world of databases moves slowly (but perhaps not when transacting), so when a revolution hits it can take decades for the repercussions to be felt. Coined back in 1998, NoSQL describes databases that started life not using the then-standard query language, SQL. But more revolutionary still were the database designs themselves, which moved away from the standard relational model altogether for speed and ease of design.
As the name suggests, NoSQL databases were originally designed not to use SQL (even though some now can); instead they use various different query languages. While the term might have first appeared in 1998, NoSQL didn't gain prevalence until the late Noughties, when it was adopted as a rallying Twitter hashtag for a group of non-relational distributed database projects that were after something small and unique.
If you are wondering whether or not it’s
worth considering NoSQL databases, you
should be aware that, according to DB-Engines Ranking (https://db-engines.com/en/ranking), MongoDB, a popular NoSQL database, is currently the fifth most popular
after Oracle, MySQL, Microsoft SQL Server and
PostgreSQL – and even Oracle has a NoSQL
version of its famous database.
The problem with relational databases is that in order to store complex information you have to deconstruct it into bits and fields, and store it in lots of different tables. Likewise, in order to restore the data, you have to retrieve all those bits and fields and put them back together. Neither of those two tasks is efficient, particularly if you have a big and busy website that's storing and querying data all the time.
The next logical step is to use many machines to run your database, but that also creates a problem, because relational databases were originally designed to run as single-node systems. So large companies, such as Google and Amazon, developed their own database systems, Bigtable and Dynamo respectively, that were quite different from traditional relational database systems, and which inspired the NoSQL movement.
It's quite difficult to define what a NoSQL database is, but you can identify a few common characteristics among NoSQL databases: they are non-relational; open source (although not always); schema-less; easy to distribute over many machines (again, not always); and trying to serve data from the 21st-century web culture.
So, NoSQL databases are designed for the web and don't support joins, complex transactions and other features of the SQL language. Their terminology is also a little different, but let's dive into the details.
"NoSQL DBs are designed for the web and don't support joins and complex transactions…"
The main advantage of NoSQL databases is that they are suited to and efficient for big
data and real-time web applications. They also
offer easy scalability, and enable you to
implement high availability painlessly. They
are also generally easier to administer, set up
and run, and they can store complex objects.
Additionally, it’s easier to develop applications
for and with NoSQL databases. Their schema
can change easily without any downtime
because, in reality, they have no schema.
Most of them, with the exception of Oracle
NoSQL, are open source projects.
Key disadvantages of NoSQL databases
include the fact that they require a totally new
way of thinking and that you still need a DBA
on large and/or critical projects. If your
company needs to use both SQL and NoSQL
databases, you will have two entirely different
systems to program and administer and
therefore will need even more people. Being
relatively new, they are not as mature as
relational databases; therefore choosing a
NoSQL database for a critical problem may
not always be the safest solution, but this will
not be a problem in a couple of years. The last
disadvantage is the fact that although they
look like they have no schema, you will need to
assume an implicit schema in order to do
some serious work with your data. This isn’t
unexpected because as long as you are
working with data, you cannot get away with
having a schema, even an informal one.
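The implicit schema point is easy to see in code. Nothing stops two records in the same schema-less store from disagreeing about their fields, so the expectations move into the application – a generic Python illustration, not tied to any particular database:
records = [
    {'Name': 'Mihalis', 'Surname': 'Tsoukalos'},
    {'name': 'Ada'},  # same store, different shape – nothing prevents this
]
for record in records:
    # The 'schema' lives here, in the code's expectations:
    print(record.get('Name', 'unknown'))
The second record prints 'unknown': the database accepted it happily, and only your code knows that a 'Name' field was supposed to be there.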
There are several kinds of NoSQL database, each of them good in one or more areas, but not all. You can categorise NoSQL databases according to their data model:
Document This is a very common data model. It treats the database as a big store of documents, where each document is a multipart data structure that's usually represented in the form of JSON. You can still store documents in any format you want.
MongoDB, CouchDB and RavenDB are
representative document NoSQL databases.
Key-Value This is also a common data
model that’s similar to the hash map data
structure, where you have a key and you ask
the database to return the value stored for
that particular key. The value can be anything
from a single number to a whole document.
The database knows nothing about the stored
data. Examples of key-value NoSQL databases
include Riak, Redis and Project Voldemort.
Column-family This is a rather complex data model. You have a 'row key' that enables you to store and access multiple column families. Each column family is a combination of columns that fit together. Row keys must be unique within a column family. The data model might be more complicated than the others, but it results in faster retrieval times. Examples of column-family NoSQL databases include Cassandra and Apache HBase.
Graph This model is totally different from the other three, as it is based on the graph structure. As a logical consequence, graph NoSQL databases handle hierarchies and relationships between things very well; doing similar things with a relational database would be an extremely challenging and slow task. Neo4j is a graph NoSQL database. For this article, we'll be using Riak as our NoSQL database test case.
Installing Riak
Every Riak server has a web interface. In this case, we're accessing the server statistics, using http://localhost:10018/stats/. The port number and IP are defined in riak.conf.
The first thing you should know before
installing Riak is that you need Erlang [See
Tutorials, p88, LXF194] on your system.
The best way to install Riak is by compiling it
from source because you have better control
and a totally autonomous build of Riak. Follow these steps:
$ wget http://s3.amazonaws.com/downloads.basho.com/riak/2.0/2.0.2/riak-2.0.2.tar.gz
$ tar zxvf riak-2.0.2.tar.gz
$ cd riak-2.0.2
$ make rel
Alternatively, you can get the Riak source
code from GitHub and compile it as before:
$ git clone git://github.com/basho/riak.git
$ cd riak
$ make rel
Both ways should work without any
particular problems; we used the first way to
compile Riak. After successfully compiling
Riak, you can find its main binary files inside
the ./rel/riak/bin directory. In the same directory where you built Riak, you can run make devrel to get eight ready-to-run Riak databases, which we will use as example servers.
Generating a Riak cluster with five nodes
is pretty easy, see p53 for details.
Map and Reduce
MapReduce is an advanced querying technique
and a tool for data aggregation used in NoSQL
databases. It’s an alternative technique for
querying a database that differs from the usual
declarative querying techniques. You give
instructions to the database on how to find the
data you are looking for and MapReduce tries to
find the data. (See the top of p52 for a simple
example of how MapReduce works.) Using
MapReduce can be very tricky sometimes.
Nevertheless, it enables you to create queries
that would have been extremely challenging to
create using SQL.
Once you understand the MapReduce
process and practice it, you will find it both very
reliable and handy. The MapReduce solution
takes more implementation time but it can
expand better than an SQL solution. It provides
some flexibility that’s not currently available in
the aggregation pipeline. The tricky thing is
deciding whether or not the MapReduce
technique is appropriate for the specific
problem you are trying to solve. This kind of
knowledge comes with experience!
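To make the two phases concrete, here's a toy MapReduce pass in plain Python, with no database involved: the map step emits a (key, value) pair per record, and the reduce step folds each group of values together. The records are invented for illustration.
from collections import defaultdict
# Toy records: how many words each bucket's documents contain.
records = [
    {'bucket': 'LXF', 'words': 120},
    {'bucket': 'LXF', 'words': 80},
    {'bucket': 'linuxformat', 'words': 50},
]
# Map: emit a (key, value) pair for every record.
mapped = [(r['bucket'], r['words']) for r in records]
# Group by key, then Reduce: fold each group's values into one result.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)
reduced = dict((key, sum(values)) for key, values in groups.items())
print(reduced)  # {'LXF': 200, 'linuxformat': 50}
A real MapReduce engine distributes the map and reduce steps across nodes, but the shape of the computation is exactly this.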
This is the main reason to get the Riak source
code and compile it for yourself.
Before we continue with the rest of the
article, we need to introduce you to some
terms. First, a Riak node is analogous to a physical server. A Riak cluster is a 160-bit integer space – the ring – which is divided into equally-sized partitions. Each vnode in Riak is responsible for one partition, and vnodes coordinate requests for the partitions they control. A Riak cluster can have many nodes that reside on the same or on different physical machines, and a bucket is a namespace for data stored in Riak. Internally, Riak computes a 160-bit binary hash of each bucket/key pair and maps this value to a position on an ordered ring of all such values.
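As a back-of-the-envelope illustration of that mapping – a sketch only, since the exact bytes Riak feeds to its hash are an internal detail – SHA-1 conveniently produces the 160-bit space directly:
import hashlib
def ring_position(bucket, key, partitions=64):
    # SHA-1 yields a 160-bit digest, i.e. a point on a Riak-style ring.
    digest = hashlib.sha1((bucket + '/' + key).encode('utf-8')).hexdigest()
    point = int(digest, 16)                  # 0 .. 2**160 - 1
    return point // (2**160 // partitions)   # the partition that owns the key
print(ring_position('LXF', 'test'))
Because the hash is deterministic, every node can compute which partition (and so which vnode) owns a given bucket/key pair without consulting anyone else.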
As you will see later in this article, any
client interface to Riak interacts with objects
in terms of the bucket and key in which a
value is stored, as well as the bucket type that
is used to set the properties of the bucket.
The default mode of operation for Riak is
to work as a cluster consisting of multiple
nodes. Riak nodes are not clones of one
another by default.
You can start three example Riak database
servers – you don’t have to start all eight Riak
servers – by executing the next commands:
$ ./dev/dev1/bin/riak start
$ ./dev/dev2/bin/riak start
$ ./dev/dev3/bin/riak start
If you try to start a node that's already running, Riak tells you so:
$ ./dev/dev1/bin/riak start
Node is already running!
Each Riak server offers a web interface
(see top of p50 for an example of what you will
see after connecting to a Riak server). The port
number and the server IP address are defined
inside the riak.conf file. This is a plain text file
that you can edit. The following command
reveals the IP and the port number that each
Riak server listens to:
$ grep listener.http.internal `find ./dev -name
riak.conf`
./dev/dev2/etc/riak.conf:listener.http.internal =
127.0.0.1:10028
A MapReduce example. It may look simplistic but MapReduce is a very powerful technique.
Attempting the same with SQL would be extremely difficult.
./dev/dev1/etc/riak.conf:listener.http.internal =
127.0.0.1:10018
./dev/dev3/etc/riak.conf:listener.http.internal =
127.0.0.1:10038
And so on… Every node in Riak has a name
associated with it. You can change the name
by changing the nodename variable of the
riak.conf file. The first server (dev1) uses port
number 10018, the second Riak server (dev2)
uses port number 10028 and the third (dev3)
uses port number 10038. Riak versions prior to 2.0 used a configuration file called app.config, which has been replaced by riak.conf.
The easiest way of finding out whether a Riak node is up or down is to ping it with the curl command via the node's web interface:
$ curl http://localhost:10018/ping
OK
$ curl http://localhost:10038/ping
curl: (7) Failed to connect to localhost port
10038: Connection refused
Alternatively, you can use the following:
$ ./dev/dev1/bin/riak ping
pong
$ ./dev/dev6/bin/riak ping
Node 'dev6@127.0.0.1' not responding to pings.
The advantage of the ‘curl way’ is that you
can run it from a remote machine – provided
that the Riak server also listens to an external
IP address – without having to log in to the
machine that runs the Riak node. You can stop
the dev1 Riak server by executing the ./dev/
dev1/bin/riak stop command.
Riak uses epmd – the Erlang Port Mapper Daemon – which plays a crucial part in the whole Riak operation. The epmd process is started automatically by the erl command if the node is to be distributed and there's no running instance present. The epmd process enables Riak nodes to find each other. It's an
extremely lightweight and harmless process
that can continue to run even after all Riak
nodes have stopped. You may kill it manually
after you stop all Riak nodes, but this isn’t
compulsory. The following command lists all
names registered with the currently running
epmd process:
$ epmd -names
epmd: up and running on port 4369 with
data:
name dev3 at port 49136
name dev1 at port 55224
name dev2 at port 48829
Storing and retrieving data
You can connect to Riak dev1 server and store
a document using the web interface:
$ curl -v -X PUT http://127.0.0.1:10018/riak/LXF/test -H "Content-Type: text/html" -d "<html><body><h1>This is a test.</h1></body></html>"
Riak benchmarking
Basho (http://basho.com) offers a
benchmarking tool for Riak written in Erlang.
You can get and install it with:
$ git clone git://github.com/basho/basho_bench.git
$ cd basho_bench
$ make
You should then run ./basho_bench myconfig.config to get the tool collecting data,
and either create a myconfig.config yourself or
modify an existing config. Existing files reside in
the examples directory. We used the
examples/basho_bench_ets.config file as a
starting point and added the line {riakclient_nodes, ['dev1@127.0.0.1', 'dev2@127.0.0.1']}.
Basho Bench creates one Stats process and
workers based on what’s defined in the
concurrent configuration setting in the myconfig.config file. As soon as these processes are
created and initialised, Basho Bench sends a
run command to all worker processes and this
initiates the testing. The Stats process is notified
every time an operation completes. It also gets
the elapsed time of the completed operation
and stores it in a histogram.
All the results are inside the tests directory.
The latest results can be found using the
./tests/current/ soft link. To generate a graph
against the current results, run make results.
[See the bottom of p53 for a sample output.]
What is actually stored in the /riak/LXF/test location is what follows the -d option.
When you successfully insert a new value,
Riak will return a 204 HTTP code. As you
already know, Riak is a key-value store,
therefore in order to retrieve a value you need
to provide a key to Riak. You can connect to
Riak dev1 server and ask for the previously stored document by going to the URL http://127.0.0.1:10018/riak/LXF/test. Every URL follows the http://SERVER:PORT/riak/BUCKET/KEY pattern. The following command returns
the list of available buckets:
$ curl -i 'http://127.0.0.1:10018/riak?buckets=true'
HTTP/1.1 200 OK
Vary: Accept-Encoding
Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
Date: Fri, 19 Dec 2014 21:13:37 GMT
Content-Type: application/json
Content-Length: 33
{"buckets":["LXF","linuxformat"]}
The following command returns the list of
keys in a bucket:
$ curl 'http://127.0.0.1:10018/buckets/LXF/keys?keys=true'
{"keys":["test2","test","test3"]}
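The same HTTP interface is easy to drive from any language. Here's a quick sketch using Python's requests library – an assumption, since any HTTP client will do, and the test4 key is just an example:
import requests  # assumption: pip install requests
base = 'http://127.0.0.1:10018'
# Store a value: PUT http://SERVER:PORT/riak/BUCKET/KEY
requests.put(base + '/riak/LXF/test4',
             data='hello from Python',
             headers={'Content-Type': 'text/plain'})
# Fetch it back with a GET on the same bucket/key pair.
r = requests.get(base + '/riak/LXF/test4')
print(r.status_code)
print(r.text)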
Most of the time, though, you are going to use a
script written in a programming language to
access a Riak database. The following is a
Python script that connects to a Riak
database, stores and retrieves a document:
import riak
# Connect to the cluster
client = riak.RiakClient(pb_port=10017, protocol='pbc')
# The name of the bucket
bucket = client.bucket('python')
# 'myData' is the name of the key that will be used
aRecord = bucket.new('myData', data={
    'Name': "Mihalis",
    'Surname': "Tsoukalos"
})
# Save the record
aRecord.store()
# Define the key for the record to retrieve
myRecord = bucket.get('myData')
# Retrieve the record!
dictRecord = myRecord.data
# Now print it to see if all this actually worked.
print dictRecord
$ python myRiak.py
{u'Surname': u'Tsoukalos', u'Name': u'Mihalis'}
The pb_port value of 10017 is defined in
the ./dev/dev1/etc/riak.conf file using the
listener.protobuf.internal parameter. This is
the Protocol Buffers port that is used for
connecting to the Riak Cluster.
Due to the flexibility in the way that a
NoSQL database stores data, inserting,
querying and updating a NoSQL database is
more complex than a database that uses SQL.
Generating a Riak cluster
Creating and manipulating clusters in Riak is
relatively easy with the help of the riak-admin
command. If you try to add a node that’s not
already running to a cluster, you will fail with
the following error message:
$ dev/dev2/bin/riak-admin cluster join dev1@127.0.0.1
Node is not running!
$ ./dev/dev2/bin/riak start
$ dev/dev2/bin/riak-admin cluster join dev1@127.0.0.1
Success: staged join request for 'dev2@127.0.0.1' to 'dev1@127.0.0.1'
$ dev/dev2/bin/riak-admin cluster join dev1@127.0.0.1
Failed: This node is already a member of a cluster
Riak offers a benchmarking tool called Basho Bench. The graph is produced with R.
Similarly, if you try to join a node to itself,
you will get an error message:
$ dev/dev1/bin/riak-admin cluster join dev1@127.0.0.1
Failed: This node cannot join itself in a
cluster
The following command shows the
members of an existing cluster:
$ dev/dev2/bin/riak-admin status | grep members
ring_members : ['dev1@127.0.0.1','dev2@127.0.0.1']
$ dev/dev1/bin/riak-admin status | grep members
ring_members : ['dev1@127.0.0.1','dev2@127.0.0.1']
$ dev/dev3/bin/riak-admin status | grep members
Node is not running!
Another useful command that shows the
status of the nodes is the following:
$ ./dev/dev1/bin/riak-admin member-status
The joining status is temporary, and will become valid when all the changes waiting in the queue have been applied and committed. If you want to force the changes through, you should execute the riak-admin cluster commit command.
If you run the riak-admin member-status
command again you will see the new status of
the dev3 node, and the riak-admin cluster
plan command displays the changes that are
about to be applied.
For a node to actually leave the cluster (see the bottom of p51 for what an interaction with a cluster of five nodes looks like), you must
first review the changes using the riak-admin
cluster plan command and then commit
them with riak-admin cluster commit.
So far, you won’t have seen any security
when interacting with a Riak database.
Nevertheless, Riak supports users and
passwords. You can find a lot more
information on how Riak deals with
authentication and authorisation at
http://bit.ly/RiakDocsAuthz. LXF
Data consistency
Data consistency in databases is critical. ACID
(Atomicity, Consistency, Isolation and Durability)
is a set of properties that guarantee that
database transactions perform reliably.
Atomicity means that when you do something to change a database, the change should work or fail as a whole. Consistency means that a transaction can only take the database from one valid state to another. Isolation means that if other things are taking place at the same time on the same data, they should not be able to see half-finished data. Durability refers to the guarantee
that once the user has been notified of the
success of a transaction, the transaction will
persist, and won’t be undone even if the
hardware or the software crashes afterwards.
Graph databases perform ACID transactions
by default, which is a good thing. On the other
hand, not every problem needs ‘perfect’ ACID
compliance. MongoDB is ACID-compliant at the
single document level, but it doesn’t support
multiple-document updates that can be rolled
back. Sometimes, you may be OK with losing a
transaction or having your DB in an inconsistent
state temporarily in exchange for speed.
You should carefully check the characteristics
of a NoSQL database and decide if it fits your
needs. Nevertheless, if data consistency is
absolutely critical, you can always implement it
in code if it’s not fully supported by your NoSQL
DB. Keep in mind that this might be non-trivial
especially on distributed environments.
Fedora 21
Jonni Bidwell dons his felt hat (but leaves his whip
at home) to uncover the treasures in the new distro.
We tried Fedora 20 roughly a year ago [Reviews, p19, LXF180] and now we're trying out the latest release of the venerable distribution, which is made of three distinct distributions. By the time you read this, Fedora 21 will have been released into the wild and will be roaming the internets. Additionally, at the time of writing, thanks to the release being postponed several times while the kinks were ironed out, your reviewer has had to make do with the latest release candidate (being at the mercy of deadlines, and also the bar at office parties). As such we won't mention several minor bugs that we know will be fixed when the official release makes it out the door. Breaking from tradition, this release doesn't have a funky codename like its predecessors – Heisenbug, Schrodinger's Cat, Spherical Cow, Beefy Miracle – but if all you care about is quirky names you can always count on Ubuntu.
Fedora, like Debian, has always been targeted towards the intermediate and above tiers. This doesn't mean it's hard to use, but it does mean there's a distinct absence of wizardy interfaces for doing stuff that's best done at the command line. Fedora has always enabled SELinux by default, but unless you have specific requirements it won't get in your way. It didn't always used to be this way, incidentally.
"a distinct absence of wizardy interfaces for stuff that's best done at the command line."
While we could spend the entire article detailing all the version updates and comprehensively enumerating all the packages with which a default install of this new Fedora ships, we shan't. For most users, everything will be sufficiently contemporary, excepting the usual gripes about official proprietary driver packages, for which there exist ways and means by which to ameliorate the situation. In general, if you do need up-to-the-minute releases, then use a rolling release distribution, such as Fedora's Rawhide. Fedora is pretty much the last major distro still to use the much-maligned Gnome Shell as the default desktop, but don't worry if it's not your cup of tea. There are plenty of other desktops available to install, and with a big-name distro such as this, things are sufficiently well-packaged as to make the process all but trivial. The hardest part will be making a cup of tea to enjoy while the requisite packages are downloaded and installed.
Three is the magic number
So what exactly is worth mentioning? Well for
starters this isn't really one distribution, but
three: Fedora has been split into Cloud, Server
and Workstation incarnations. Lots of work
has been done to make the Cloud image as
small as possible, in particular the kernel
package is now a meta-package, with any
modules irrelevant for a minimal cloud image
annexed off to a separate package called
kernel-modules.
The kernel-core package contains, besides the kernel image itself, just those modules required to work as, for example, an EC2 or OpenStack instance. So the Cloud image relies only on the kernel-core package,
while the Server and Workstation images
depend on the meta-package (which installs
both the -core and -modules packages).
The cloud release also features Red Hat's
Project Atomic framework, which enables you
to set up a so-called Atomic Host, being a
lightweight platform whose sole purpose is to
host Docker containers. Besides Fedora 21,
Atomic hosts are also available built from
CentOS and Red Hat Enterprise Linux. This is
an ambitious project (in keeping with Red
The weather program didn't recognise Bath, UK. We're sure Bathurst, Canada is lovely, but we're also glad to be here, rather than there.
Hat's often touted adage "to lead and not
follow") and is in direct competition with the
already established CoreOS distribution
(which incidentally is planning a container
schism with the introduction of a rival format
entitled App Container). Project Atomic does
away with traditional package management
entirely – the underlying OS is updated, using
a new technology called OSTree, in much the
same way as a Git repository, which makes it
easy to perform incremental updates, and to
roll things back when they fail.
On the server side of things, we have
several new technologies to streamline
provisioning and administration. Rolekit
provides a deployment system so that servers
can be quickly equipped to perform a certain
function, or role, if you will. Each role can be
managed from the same interface, providing
simplicity through consistency. Cockpit is
a web-based server management interface,
which will be particularly appealing to those
new to administering Linux. Much like the
Webmin control panel, Cockpit enables you to
start and stop services, inspect log files and
perform all manner of perfunctory
housekeeping without having to type anything
(but where's the fun in that?). Cockpit can
also manage remote servers as well and best
of all it doesn't force itself upon you – services
started from Cockpit can be stopped via the
command-line, and vice versa.
Perhaps the most ambitious feature of
Fedora Server 21 is the introduction of
OpenLMI, a Linux Management Infrastructure
which aims to abstract away the myriad tools,
syntaxes and idiosyncrasies which sysadmins
have hitherto had to learn and subsequently
keep up with. OpenLMI steps in with a
standardised API accessible from C/C++,
Python, Java or good old-fashioned command
line tools, allowing routine tasks to be carried
out with ease. OpenLMI provides a client-server interface, and so lends itself to remote
management and monitoring. A web interface
is available too and as such, there's a fair
amount of overlap with the aforementioned
Cockpit. However, these tools won't tread on
Upgrading
If you're already a Fedora user, and want to
upgrade your system swiftly and efficiently
without munging your carefully-curated
package selections, then good news: You can
use the FedUp tool to do precisely this. Make
sure everything is up to date, then install FedUp:
$ sudo yum update
$ sudo yum install fedup
While it's possible to use an ISO as FedUp's upgrade source (using the --iso option), it's simpler to just pull everything from the internets, since in general packages on the image will be superseded fairly quickly. Running:
$ sudo fedup-cli --network 21 --product=nonproduct
will pull all of the required packages, instigate a
reboot and then install them without any further
user intervention required. If you want your install to have everything that, for example, the Workstation release has, then you can instead supply --product=workstation. Bear in mind
that this will install Gnome in addition to any
other desktops you're running, so you may want
to tidy things up later.
However, these tools won't tread on each other's toes, and it's nice to have options.
OpenLMI has the potential to be much bigger than just Fedora: it uses Distributed
Management Task Force standards to shuffle
data back and forth, and works in tandem with
(rather than replacing) standard system
programs. Those already familiar with the
arcanum probably have no use for OpenLMI
(other than something at which to scoff
derisively) but it will certainly ease the learning
curve for those getting into the game.
And so to the Desktop, sorry Workstation,
edition, in which we discover... nothing
particularly surprising. The install process is
straightforward, Btrfs is not the default
filesystem (Ext4 retains this honour and it
probably will do so until Fedora 23) but it is
exactly two clicks away if you are hungry for
COW (Copy-on-write) [see Get The Tech Of
2015, p34, LXF194]. If you just go with the
defaults, you'll end up with three partitions:
boot, root and swap, and the latter two are
managed via LVM. While the system is
installing you can set up users and set the root
password, which is slightly more fun than
watching the progress bar, but you'll still have
time for the LXF-mandated cup of tea. The
initial install occupies about 4GB, which is
pretty standard nowadays.
Assisting developers
The default desktop is Gnome 3.14, and
comes with a Dreamworks-esque cloudy
background, in the royal blue hue that has for
many years been Fedora's colour. While
Gnome 3.15 was released at the end of
November, we wouldn't expect it to be
included in this release. There are, of course,
ways and means of upgrading, though.
Application-wise, there is LibreOffice 4.3,
Firefox 33, Shotwell, Rhythmbox, Transmission,
all of which you will probably be familiar with.
Installing the 173 packages that constitute Plasma 5.1 is easy with DNF.
Perhaps less familiar will be DevAssistant, a Fedora project which started nearly two years ago. DevAssistant aims to take the hassle out of programming projects – setting up
development environments, installing
dependencies and publishing your code –
so that the programmer can concentrate on
actually programming rather than fiddling.
There are also some of the Gnome core apps,
including the new cloudy Documents
application (as well as the trusty Evince, it's
desktop-bound counterpart), the not-exactly
Google-threatening Maps, some kind of
Instagram/Snapchat clone of dubious worth
called Cheese, the venerable Brasero discburning application and a handy weather
application. Gnome also supports DLNA
media streaming, file sharing via WebDAV and
screen sharing via the Vino VNC server.
Gone is the old PackageKit frontend, having been replaced with the application-focused Software, which is certainly more tolerable than Ubuntu's Software Center.
Around 50% of Fedora applications ship
with the required AppData files (which provide
XML-wrapped screenshots, descriptions,
translations and ratings) required to display
nicely in the application. Red Hat are
encouraging upstream developers (rather
than its own packaging team) to supply
AppData files – the newly redesigned Gnome
Software is no longer the preserve of Fedora
users, with support now included for Arch,
Debian and OpenSUSE. Further, as was mentioned in [Package Management: How Does That Work Then?, p51, LXF186], AppData is part of a movement towards a pan-desktop approach (which is being led by Fedora) to Software Centre-type applications, so it will eventually be useful on KDE and other desktops too.
Fedora also ships a (non-default) Gnome
on Wayland session, which is refreshingly
stable. Obviously not all applications support
Wayland at this time, but these gracefully fall
back to the Xwayland compatibility layer.
While it's nice to know that Wayland is making
progress, it (or at least the ecosystem it
intends to support) is still not quite there yet,
so this should be considered a preview more
than anything else.
Spinning and wrangling
Although you can customise the official release to your heart's content, you may prefer to start
from one of a number of Fedora-endorsed
'Spins'. These are releases tailored for particular
preferences or workloads. So you could try the
KDE Spin (which uses the Plasma 4 Desktop),
or any of the other desktop Spins, which include
but are not limited to: Xfce, LXDE and Mate-Compiz. Besides desktops, there are Spins
customised for scientists, robotics aficionados,
gamers and the tinfoil hat brigade (a security-focused release with forensics, intrusion-detection and network-sniffing tools).
You can even make your own remixes, and if
you feel it's worthwhile, submit it to Fedora for
official endorsement, whereupon it will become
a Spin. The process is well-documented at
https://fedoraproject.org/wiki/Spins_
Process, but the general idea is to start from a
pre-made kickstart configuration, which may
come from an official release or Spin, and add
(or subtract) whatever you see fit. Once you've
tidied everything up so that it conforms to
Fedora's guidelines, you can submit it to the
Spin Wrangler, who will decide if your Spin Page
is complete and accurate, if so then your work
travels up the river to the Spins SIG (Special
Interest Group) who will decide if your work is
worthy of passing to the Board who ultimately
hold the rubber stamp. It seems like a lot of
politics and bureaucracy, but you can't just let
any old Thom, Rickert or Harriet sully the
Fedora brand with their half-baked wares.
We did a grand old feature on package
management [Again, see p51, LXF186] but
poor ol’ Yum hardly got a mention. While it is in
many ways comparable to Debian's Apt, it
does have a few unique features. Yum
automatically updates a local SQLite database
of available packages. This means that there's
no need for a separate upgrade command,
so you should never run into the situation
where the system attempts to install a
package older than what is available on the
repositories. Another notable feature is its use
of delta-RPMs, which can save a huge amount
of bandwidth by downloading diffs, where
available, rather than complete packages. Yum
is written in Python and is extensible via a
plugin system, the diff functionality originated
in a plugin called Presto, but it was so good
that it ended up being mainlined into Yum
back in Fedora 19. You list the officially
endorsed plugins with:
$ yum search yum-plugin
This will reveal such plugin delights as
fastestmirror (which will do exactly what it
says on the tin), local (which maintains a
local repository of all installed packages –
useful if you're installing the same packages
on several machines) and merge-conf (for
dealing with changes to configuration files).
Naturally, there are many more unofficial
plugins available to use, which you are
encouraged to explore.
New package manager
Another notable feature of Yum is that it will, in
Fedora 22, be replaced by DNF (some claim
this is short for Dandified Yum, but the official
response is that it doesn't stand for anything).
Don't worry though, the syntax is much the
same, and you can even install the dnf-yum
package to provide a wrapper for DNF, so that
it is called instead of Yum. DNF has been
shipped since Fedora 18, but only since Fedora
20 has it been considered ready for everyday
use. DNF is a fork of Yum, with a much tidier
codebase thanks in part to shipping out much
of the functionality to a backend library called
Hawkey. The main reasons for another
package manager (besides the seemingly
ineluctable desire to fork) are:
Extensibility While plugins can be written
for Yum, the API is not well-documented, and
everything has to be done in Python. DNF
(more correctly Hawkey) provides a clean API
which can be accessed through C as well as
Python, and hence plays much nicer with
frontends such as Gnome Software.
Dependency resolution While everyday users will not notice this, it is possible to lead Yum into a situation of unresolvable dependencies. DNF uses OpenSUSE's advanced libsolv and is much more adept at getting out of these dependency dilemmas.
Speed From a consumer experience point of view, DNF also takes the bold step of synchronising metadata as a background service via a cron job, which means that: first, you'll see random spikes in your internet traffic and second, installing new packages and upgrading the system will feature about half as many progress bars as before.
Gnome Software: the new design is like someone took the 'in-your-face' out of the Ubuntu Software Center.
Besides being a generally slicker package manager, DNF also makes it even easier to use the new Copr repositories. These are analogous to Ubuntu's PPAs – helping to streamline the process of packaging your software into an easily accessible repository. Prior to Copr, making unofficial Fedora packages involved using either Fedora's Koji Build System or the OpenSuse Build Service. Both of these are comprehensive and as such are rather heavyweight.
Furthermore, since Koji is also how official Fedora packages get made (they don't come from storks and cornfields), anyone using it has to comply with its draconian and lengthy guidelines. So Copr is certainly not a replacement for Koji; in many ways, it is its opposite: an easy to use, lightweight and feature-full build system. For example, if you want to run Plasma 5.1, the latest KDE desktop, you first enable Daniel Vrátil's Copr repository and then let DNF take care of the rest:
$ sudo dnf copr enable dvratil/plasma-5
$ sudo dnf install plasma5
You can install the KDE 5 Application suite by adding the repository kde-applications-5 as above. While the underlying KDE 5 Frameworks libraries are already in the standard Fedora repos, Plasma and the applications are still considered unstable and need to be annexed, since they conflict with all the stable KDE4 gubbins.
To summarise, Fedora 21 is a grand old distribution triumvirate, which builds on a great legacy. The newly anointed trinity approach marks a bold step, though we tend to see it as a needed one, as each edition is tweaked for the relevant target audience: servers do not need desktop environments, and workstations do not need to run LDAP servers. Some say the idea of a distribution is dying, with the future being all clouds and containers – be that as it may, these three distros, for now at least, are very much alive and kicking. LXF
"DevAssistant aims to take the hassle out of programming projects."
Boxes makes setting up virtual machines easy. Also, we note that the default background is a crescent moon and a small boy with a fishing rod away from a lawsuit.
Dr Brown's Administeria
Dr Chris Brown
The Doctor provides Linux training, authoring and consultancy. He finds his PhD in particle physics to be of no help in this work at all.
Esoteric system administration goodness from
the impenetrable bowels of the server room.
The 3M machine
My first exposure to a graphical
desktop (and to BSD UNIX) was a
Sun-2 workstation. I had one in
my office. At that time (around 1985), the
talk was of a '3M' machine – one million
bytes of memory, one million instructions
per second, and one million pixels on the
screen. Sometimes a fourth M was
suggested – the cost should not exceed
one megacent ($10,000). My Sun-2 just
about scraped in on all four M's.
Fast forward to a current low-end laptop
and memory is up by a factor of 4,000 (to
4GB). MIPS figures are hard to compare
because today's processors have multiple
cores and more complex instruction
pipelines; however, for a humble Intel Core
i3, 2,000 MIPS is not an unrealistic claim.
And the price (around $250) is down by a
factor of forty. But when we come to pixels,
one million is still the going rate. The reason,
of course, is that this is already quite close
to the limits of visual acuity.
My trusty Stanley tape tells me I'm
sitting half a metre from my laptop screen,
which is 340mm wide. Doing the math, I
discover that my screen occupies 38
degrees (2,280 minutes) of my field of view.
Now a human eye with '20/20' vision can
discriminate two pixels separated by one
minute of arc – so I should be able to
resolve about 2,000 pixels across my
screen. Significantly more than that, and I
just won't see them. Many humans can do
better than 20/20, and there might (just)
be a case for so-called '4K' screens. But it
would take a truly inspired marketing effort
to persuade me to go any higher.
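For the curious, the arithmetic checks out – a few lines of Python reproduce it from the measurements given above (the column rounds the result up to 38 degrees and 2,280 minutes):
import math
width, distance = 0.34, 0.5   # metres, as measured with the trusty Stanley tape
angle = 2 * math.degrees(math.atan((width / 2) / distance))
print(angle)        # about 37.6 degrees of field of view...
print(angle * 60)   # ...or roughly 2,250 minutes of arc
At one pixel per arcminute of 20/20 acuity, that puts the useful limit at a couple of thousand pixels across the screen, just as the column argues.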
The Sun never sets
Although the Sun finally set beneath an Oracle-shaped horizon several years ago, its rays still light the Linux landscape.
I'm in an historical frame of mind this
month, pondering where some of the
technology in Linux has come from.
And you can't go far down that road without
coming across Sun Microsystems.
Sun is probably best known for its
hardware; in particular its high-end multi-core
servers based on its SPARC architecture and
targeted at large-scale data centres. However,
Sun has also contributed a huge amount to the
world of open source software. Bill Joy (the
only one of Sun's founders I have actually met)
wrote the vi editor and the C-shell and
contributed heavily to BSD Unix, especially the
TCP/IP stack.
Sun was the developer of NFS (Network File System), which is still the mainstay of file-sharing across Unix and Linux networks. It gave us NIS and NIS+. Sun was the employer of
James Gosling, who developed Java, and Sun
eventually released it under the GPL licence.
The company was also instrumental in
introducing PAM (Pluggable Authentication
Modules) in 1995; PAM remains central to all
The Apple Lisa
Apple’s Lisa, introduced in 1983, took the ideas
of the Xerox Alto and produced the very first
personal computer system with a graphical
desktop to be sold commercially. With just
seven applications (lisawrite, lisadraw, lisacalc,
lisagraph, lisaproject, lisalist and lisaterminal)
there was no doubt about its intended
audience, but it wasn’t a commercial success
for Apple, being (in hindsight) a stepping stone
towards the Macintosh.
Linux authentication. Through
acquisitions of other
companies, Sun bought
other important
technologies, including
MySQL, OpenOffice and
VirtualBox. Sun also gave us
ZFS, arguably the best file system
out there.
Though not lying directly on the ancestral
family line of Linux, OpenSolaris (2005)
represented a milestone in the open-sourcing
of Unix. It was a blessing for those of us who
wanted a free 'real' Unix that could run on
ordinary PCs. Richard Stallman commented:
"I think Sun has contributed more than any
other company to the free software
community in the form of software. It shows
leadership. It's an example I hope others will
follow". Unfortunately, after Oracle acquired
Sun in 2010, it decided to discontinue open
development of the core software, instead
releasing a proprietary distribution called
Solaris Express. Immediately prior to the
closure of the Solaris source, the community
forked the Illumos and OpenIndiana projects to
keep the dream alive. (Roughly speaking,
Illumos is the kernel project; OpenIndiana
builds a distro on top of it.)
Sun ceased to exist as a company after it
was bought by Oracle, leaving many of us
nervous about the future of many open-source
products of which Sun had been the custodian.
The mouse's tale
We trace the history of the computer mouse back to 1968
and the extraordinary demo by Douglas Engelbart.
Here's a question for you. Who invented windows,
mice, menus and so on? No Googling please…
What's that? Microsoft you say? Well, no. Microsoft is
a marketeer par excellence, but (at the risk of opening the
floodgates of flame) it has actually invented rather little. Well,
perhaps Apple then? That's a little closer to the truth. Apple
was certainly the first to put the technology in front of large
numbers of users, initially with a machine called the Lisa
which dates from 1983 and later with the Mac Classic in
1990. The Lisa sold for about $10,000 which put it within
reach of some professional business users, and the Classic
brought the price down to $1,000, making it truly a 'personal
computer'. But the desktop metaphor, with its windows,
icons, menus and mice, did not originate there.
It turns out that Steve Jobs and his team at Apple were
much inspired by a visit to the Palo Alto Research Center of
Xerox in 1979 where they saw a machine called the Xerox Alto,
which is generally regarded as the first computer to provide a
graphical desktop. (By the way, Xerox PARC also invented
Ethernet, which was just as important as the GUI stuff, but
not relevant to our story here.) But we can trace the lineage
of the mouse back even further than that, because the folks
at Xerox PARC were, in turn, influenced by the work of a team
headed by Douglas Engelbart.
Engelbart worked at the Stanford Research Institute, and
the name of his group is interesting – it was called the
Augmentation Research Center (ARC). Engelbart was
interested in moving the computer out of a pure number-crunching role and making it "an instrument for helping
humans to operate within the domain of complex information
structures." I wrote a short story about what might happen if
we let this go too far [See Administeria, p56 LXF192]. And
central to our theme here, he is generally regarded as the
inventor of the computer mouse.
Let me shift gear a little. It may seem strange to say this,
having grown up through a lifetime of intense technological
development, but I sometimes feel disappointed at
One of Engelbart’s
prototype mice. Apple
fans note there’s only
one button!
Video superposition of
Engelbart and
his computer
screen during
the Mother of All
Demos in 1968.
having missed out on some of the key historical moments:
to have turned on that first electric light bulb; to have
witnessed the first transatlantic telegraph in operation; to
have tuned in my crystal set and listened for the first time to
that faint voice plucked by pure magic from the ether.
And there's one other: to have attended the Mother of All
Demos (as it has been called retrospectively) which Engelbart
and his ARC colleagues presented on December 9, 1968 in
San Francisco.
The entire demo (minus the bits lost during reel changes!)
is available on YouTube. The image quality is terrible, but if you
can put up with that, you'll see demonstrated a folding editor
presenting various hierarchical views of a document, the use
of a mouse to select text, cut and paste, and a simple form of
hyperlink. We even see Engelbart using the computer to show
a hierarchical view of the presentation itself – and to put that
into context, it's 22 years before PowerPoint came along.
Engelbart has his co-worker turn the mouse upside down
to demonstrate how it works; at that time it had two rotating
wheels mounted at right angles. (It was Bill English who built
the first mice and he didn't come up with the ball mouse until
1972. Optical tracking mice appeared much later, around
1980. They were much better because you didn't have to
keep dismantling them to dislodge the hairballs and bits of
Bombay Mix.) In the demo, Engelbart explains the concept
of a cursor (he calls it a "tracking spot") that follows the
mouse on the screen. He also showed a five-point
touch device that lets you enter characters by
playing “chords” with your fingers. The
networking involved in the demo is
impressive too – Engelbart is interacting in
real-time with a computer that's actually
30 miles away in Menlo Park.
Engelbart died in July 2013, and as
when Dennis Ritchie passed away, the
press were (for the most part) entirely
unaware of a missed opportunity to write
an obituary for a genius. Nowadays, with multi-touch screens the order of the day, users have to do
little more than make an artistically-inspired hand gesture in
the general direction of the computer to make their wishes
clear. But it is worth remembering that the whole thing began
some 50 years ago when Douglas Engelbart placed two little
wheels in a box at right angles, turned it upside down, and
called it a mouse.
ClamAntiVirus
Scanning for malware on Linux file servers and mail gateways can help
prevent infected files from reaching Windows in the first place.
If you're running Windows, virus protection is something
you really can’t do without and most users, I suspect, fork
out for a decent antivirus subscription. Though not a
complete stranger to malware, Linux suffers to a much
smaller extent. Analysts will generally point to two factors:
First, the security model for Linux is stronger, making it much
harder to write viruses. Second, the low market share of Linux
makes it a less profitable target. And to these two I would add
a third: Linux users are more cautious and streetwise when it
comes to security. For example, they may choose to only
install digitally signed software from their distributor’s repos.
I don’t really know how true each of these three is; but one
thing I think is clear: Linux doesn’t represent the ‘low hanging
fruit’ that malware writers prefer to target. So is there any
case at all for running antivirus software on Linux? Well, there
probably isn’t much of a reason for performing on-access
scanning of every executable file, but if your Linux system is
carrying files that are destined to be consumed on Windows
machines (for example if it’s a mail server or a Samba file
server) then there’s certainly a case for scanning that content
for viruses before it ever reaches the Windows systems.
Clam AntiVirus is an open source (GPL 2 licensed)
antivirus toolkit available for multiple platforms including
Linux, Solaris, FreeBSD and Windows. ClamAV was owned by
Sourcefire, which was acquired by Cisco in October 2013. It’s
now actively maintained by the Talos Group, the Security
Research and Intelligence group within Cisco. The website is
http://clamav.net, and several mailing list archives are
available at http://lists.clamav.net. The heart of the
antivirus engine is a shared library; built on top of this is a
command-line scanner, and a multi-threaded scanning
daemon. One of its main uses is scanning incoming messages
on mail gateways.
I decided to take a look at ClamAV on Ubuntu 14.04.
Installation is trivially easy, because it’s in the Ubuntu repos:
$ sudo apt-get install clamav clamav-docs
Dependency resolution also brings in the packages clamav-base, clamav-freshclam and libclamav6.
The main clamav package includes three executables:
clamscan, sigtool and clambc, along with their corresponding man pages.
ClamAV can delve inside a wide variety of file formats in search of viruses.
Let’s dive in and try running a scan from the command line, using clamscan:
$ sudo clamscan -i -r Training/
Training/XXX/tmp.tar: Trojan.Linux.RST.b FOUND
Training/XXX/psybnc/whiper: Linux.RST.B-1 FOUND
Training/XXX/psybnc/pico: Linux.RST.B-1 FOUND
----------- SCAN SUMMARY -----------
Known viruses: 3720060
Engine version: 0.98.5
Scanned directories: 476
Scanned files: 3570
Infected files: 3
Data scanned: 1771.99 MB
Data read: 1168.35 MB (ratio 1.52:1)
Time: 277.079 sec (4 m 37 s)
Here, -r is the recursive option and -i says to display a
report only for infected files. I have edited the path names
down to ‘XXX’ so they fit in our printed column, but the report
itself is genuine. The three files listed here alarmed me at first,
until I realised they were actually sample files from a Linux
security course I used to teach. As you’ll notice, clamscan’s
default behaviour is just to print the names of the infected
files, but you can also ask it to ‘quarantine’ them by moving
them into a specified directory, or to delete them. Just think
about what we just did: one short command to install the tool
and initialise the virus database, another short command to
run the scan, and no suggestion of any money changing
hands. Don’t you just love Linux?
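For example, assuming a quarantine directory of /var/quarantine already exists (the directory name is just for illustration), the following would move or delete any infected files respectively:
$ sudo clamscan -r --move=/var/quarantine Training/
$ sudo clamscan -r --remove Training/
Both --move and --remove are standard clamscan options, but test them somewhere safe before letting them loose on real data.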
Agent Clamd, licence to scan
There is also a graphical front-end called clamtk. It is basically just a wrapper around clamscan and doesn’t add any functionality, but it does look a little more like a Windows-based scanner. ClamAV also makes its scanning engine
available as a daemon (clamd), which is distributed as a
separate package (clamav-daemon). By default, this daemon
listens on a Unix-domain socket, so it can only be accessed
from the local machine. However, you can configure it to listen
on a TCP port instead, which opens up the possibility of
running clamd as an ‘agent’ on all your machines, and
controlling the scans from one central location. There is a
client tool called clamdscan which will connect to clamd;
however, as far as I can tell, there’s no way to tell it to connect to a remote daemon. For this you’ll need to hunt down a Python program called clamdscan.py. The clamd daemon
logs its actions to the file specified by the LogFile directive in
its config file, /etc/clamav/clamd.conf.
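As a sketch of that central-control setup, you might comment out the Unix socket and enable TCP in /etc/clamav/clamd.conf (TCPSocket and TCPAddr are standard clamd directives; the port shown is just the conventional choice):
# LocalSocket /var/run/clamav/clamd.ctl
TCPSocket 3310
TCPAddr 0.0.0.0
Restart the daemon afterwards, and bear in mind that clamd does no authentication on that port, so firewall it down to your management host.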
If you want to write a scanning tool from scratch, there’s a
library (libclamav) which provides the actual scanning engine.
The library is well-documented in the ClamAV user manual.
The user manual includes mention of “on-access”
scanning using the dazuko module: this is a third-party kernel
module that intercepts file access calls and passes the file
information to a user-space application. The idea is to
support on-access virus scanning, file access logging or other
external security tools. However, the official website at http://dazuko.org has a forlorn look; the most recent
release is almost four years ago and the project is currently
unmaintained. There is, however, a FUSE-based user space
filesystem for Linux called clamfs which provides on-access
AV file-scanning through clamd (see Clamfs box, below).
The Virus Signature Database
The quality of a virus-scanning tool depends on two factors:
how up-to-date its virus definition database is, and the range
of file formats it can peer inside.
The virus definitions for ClamAV are primarily stored in
two files (daily.cld and main.cvd) in the directory /var/lib/clamav. The program freshclam can be run to manually
update this database:
$ sudo freshclam
ClamAV update process started at Wed Dec 31 06:30:56 2014
main.cvd is up to date (version: 55, sigs: 2424225, f-level: 60,
builder: neo)
Downloading daily-19859.cdiff [100%]
Downloading daily-19860.cdiff [100%]
daily.cld updated (version: 19860, sigs: 1299145, f-level: 63,
builder: neo)
bytecode.cvd is up to date (version: 244, sigs: 44, f-level: 63,
builder: dgoddard)
Database updated (3723414 signatures) from db.local.
clamav.net (IP: 193.1.193.64)
You’ll notice that freshclam is downloading only the diffs,
not the entire database. (That got downloaded as part of the
original installation.)
Freshclam can also be run as a daemon (freshclam -d) to
update the database automatically. In fact, installing ClamAV
onto Ubuntu automatically configures the freshclam daemon
to run once an hour, with no further configuration or action on
my part. A log of freshclam’s activity is maintained in /var/log/clamav/freshclam, and a quick examination of this file
shows that updates to the virus database are being added
(and downloaded to my machine) roughly four times a day.
The command sigtool can be used to examine virus
signatures. For example, to list all the signatures in the daily.cld file:
$ sigtool --list-sigs daily.cld
Trojan.Hupigon-9863
Trojan.IRCBot-1971
Trojan.SdBot-8227
... plus another 3.7 million ...
Or to find specific signatures based on a regex match, we might try something like this:
$ sigtool --find-sigs="Linux.*Worm" daily.cld
As you might expect for an open-source project, you can
contribute by submitting your own virus signatures if you find
something that ClamAV doesn’t already recognise.
Just email them to [email protected].
Typically, virus signatures won’t just show up ‘in the
clear’ in the files on your system.
More likely, they’ll be
lurking in files that
have been packaged in
some way (zipped
archives for example,
or MSI files built for the
Microsoft installer), or have
been compressed using
something like Gzip. Windows
PE (Portable Executable)
files – a very common
target for viruses – are
commonly compressed
with tools, such as UPX or Petite, or are deliberately
obfuscated to hide them from the eyes of virus scanners
using something like Yoda’s Crypter. To help ClamAV find
viruses in as many places as possible, it can delve inside a
truly enormous range of file formats ranging from TAR, CPIO,
GZIP and BZIP2 archives to PDFs, mailbox files, Windows
Cabinet files, and a host of compressed or obfuscated PE
formats. The package clamav-testfiles includes a total of 44
files of different types which have all been ‘infected’ with the
virus signature ClamAV-Test-File and can be used as a basis
for testing. Not surprisingly, clamscan found all of them. (I
tried copying these files across to a Windows system and
scanned them with Norton 360 premier edition. It found no
threats, presumably because the ClamAV-Test-File virus
signature is not in its database.)
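If you want to repeat that sanity check (the install path is the usual Debian/Ubuntu location – verify it on your system):
$ sudo apt-get install clamav-testfiles
$ clamscan -i /usr/share/clamav-testfiles/
Every file should be flagged as ClamAV-Test-File; if not, something is amiss with your installation or signature database.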
The library also includes a ‘data loss protection’ module
which can detect credit card numbers from the major credit
card issuers, as well as U.S. social security numbers inside
text files, though I haven’t tested this.
The original
Trojan horse,
which gave rise
to the quote
“beware of geeks
bearing GIFs”.
Oh yes, I did.
Filtering mail
As mentioned, one of the key uses of ClamAV is scanning the
messages received by a mail gateway. This is made much
easier by a technology called milter (mail filter). Basically,
milter is a set of hooks that can be used by mail transfer
agents, such as Postfix and Sendmail to interact with an
external virus scanner or other filter program at various
points during mail message delivery. The package clamav-milter provides the necessary filter; installing the package automatically configures Sendmail to use it, but for detailed instructions on setting this up under Ubuntu, see http://bit.ly/UbuntuMailFiltering. In fact, clamav-milter is just one of many available milters – see www.milter.org for a list. LXF
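For Postfix, a minimal sketch (the socket address is an assumption – match it to the MilterSocket setting in clamav-milter.conf): have clamav-milter listen on, say, inet:7357@localhost, then add to /etc/postfix/main.cf:
smtpd_milters = inet:localhost:7357
milter_default_action = accept
and reload Postfix. smtpd_milters and milter_default_action are standard Postfix parameters; the port number is purely illustrative.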
Clamfs
Clamfs is another tool built around
ClamAV. It implements a user-space
filesystem in which all your file accesses
are vetted by clamfs, which in turn
connects to the clamd daemon to
perform a virus scan on the file. I got this
to work, despite the absence of proper
documentation. Here are the steps:
1 Install the clamfs package:
$ sudo apt-get install clamfs
2 Copy the sample config file somewhere sensible, and unzip it:
$ cd /etc/clamav
$ sudo cp /usr/share/doc/clamfs/clamfs-sample.xml.gz clamfs.xml.gz
$ sudo gunzip clamfs.xml.gz
3 Tweak the config file. The only line I actually changed was the <filesystem> node, to define the pathname of the filesystem I wanted to mount, and the directory where I wanted to mount it (see the sketch below).
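For what it’s worth, the node I ended up with looked something like this (the paths are examples – substitute your own, and check the attribute names against the sample file, which is authoritative):
<filesystem root="/srv/samba/share" mountpoint="/mnt/scanned" />
Files are then accessed through /mnt/scanned rather than the real directory.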
4 Start the clamfs daemon, specifying
the path to the config file:
$ sudo clamfs /etc/clamav/clamfs.xml
At this point I have a mount in place of
type fuse.clamfs; any files I access under
this mount point will be scanned when I
access them. If a signature match is
found, the operation is not permitted,
and an email is sent to root.
The best new open source
software on the planet
Alexander Tolstoy
handpicks the finest chunks of open
source gold to be melted down and
hammered into shape for this month’s
HotPicks showcase.
Yarock FFmpeg Dropbox Uploader
KWave Linuxbrew SuperTuxKart
Gloobus-preview MPS-Youtube
Aualé Hollywood Boomaga
Music player
Yarock
Version: 1.0.0 Web: https://launchpad.net/yarock
Even though there are plenty of
Linux music players, new ones
just keep coming out. Yarock is
another new kid on the block, which recently grew to version 1.0.0 after almost five years of 0.x releases.
The player's most distinctive feature, as
mentioned on the official website, is an
"easy and pretty music collection
browser based on cover art".
Besides that, Yarock is also a Qt4-based application with a stylish exterior,
which was revamped for the 1.0.0
release with new UI icons and some
layout changes. These changes include the playback controls being relocated to the bottom, and the category pane on the left side of the player’s window now housing more items.
The player's interface has three
parts. The left pane offers subcategories (Home, Music, Playlist and
Radio browsers, and also local folders)
while the central part shows where you
are, and the right pane is used for
exploring and possibly changing track
metadata. The details may not thrill veteran users up to this point.
A smart and fast music player written in pure C++ and
bound to Qt4 and the Phonon multimedia framework.
“Sorting your music
collection in Yarock
has rich options.”
Exploring the Yarock interface
Navigation Just as you find in a web browser, Yarock’s upper panel shows where you are and houses various optional buttons that prompt you to take some decision.
Feature list The left pane always shows the various sources of music and the different browsing modes available.
Main view The current selection is usually displayed here as cover art. This is the Settings view showing off the great customisation available for lyrics, scrobbling and more.
Now playing Reserved for a track. When something is playing, you find more details here. This pane is also for arranging items into playlists.
Controls Play, pause, stop, skip back or forth here. Extra buttons on the sides stand for volume control, equaliser and shuffle modes.
Yarock’s strengths are revealed when you start using the player to enjoy your music collection. Yarock offers fast indexing of files into its SQLite3
database and enables you to manually
fetch missing covers from the web.
You don't need to drag a song into a
playlist – though you can do it – as
Yarock plays your tracks directly from the collection. Sorting your music collection
in Yarock has rich options: by albums,
artists, songs, genres, years and even
folders. Additionally, when you've
listened to a few tracks, Yarock can generate a smart playlist based on your playback history.
Could Yarock become your player of
choice? Well, it depends. The application is robust, has neither KDE nor Gnome dependencies, and provides a command-line interface for all you Bash fans. It's suitable if
you're looking for a player with simple
play queue, favourites, tag editor,
volume normaliser and other essentials.
By the way, Yarock is desktop-independent – its supported formats list is defined by your Phonon backend. In most cases this means the GStreamer backend, so make sure you have the necessary codecs installed.
Multimedia libraries and programs set
FFmpeg
Version: 2.5.2 Web: http://ffmpeg.org
The FFmpeg project is a mature
and well-recognised one in free
software circles that produces
libraries and programs for handling
multimedia data. FFmpeg is the
backend of many popular media
players, such as VLC, MPlayer,
Handbrake Converter and many more.
FFmpeg is even used by YouTube on the
server side and in Google Chrome
locally to handle HTML5 video and
audio data.
The new version brings many improvements and now supports the UDP-Lite protocol, which enables playback of partially broken network streams by letting the local FFmpeg decoder attempt to restore the missing data. FFmpeg has also started to
support animated WebP and APNG
images, as well as multithreading
(thanks to the merger of the ffmpeg-mt
fork), new demuxers and muxers,
including HEVC/H.265 RTP container
and support for fragmented MPEG-DASH streams. This feature means that
a video stream is divided into fragments
of certain size, and if network
bandwidth occasionally shrinks, an
FFmpeg-based player can automatically
change quality to a lower bitrate,
without interrupting playback. FFmpeg
also features nearly all improvements of
libav, the concurrent fork, again thanks
to solid backporting work.
Of course, it isn’t the only choice for
enabling multimedia support in Linux,
but it does boast very high code quality.
The project analyses all incoming patches using its own regression test suite called FATE (which stands for FFmpeg Automated Testing Environment), so that if any patch slows things down by more than 0.1%, it gets dismissed.
If you expect your player to use the latest FFmpeg, mind the installation prefix and define it in the ./configure command.
“FFmpeg also features nearly all improvements of libav.”
The project website offers very
detailed descriptions about every major
library that comes with the whole
bundle, and you can also download the
source code there. In order to compile
the code, you'll need some prerequisites, such as Yasm and a standard devel stack for building software on your machine. Some
FFmpeg components are optional, and
the ./configure script will report which of them will be included in the build. The new features and additions in the 2.5.x series make this a must-have.
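As a minimal build sketch (the prefix is only an example, and --enable-gpl is needed only if you want GPL-licensed components):
$ ./configure --prefix=/usr/local --enable-gpl
$ make
$ sudo make install
If you install to a non-standard prefix, point your player’s build at it, or it will quietly carry on using the old libraries.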
Dropbox CLI app
DB Uploader
Version: 0.14 Web: http://bit.ly/Dropbox-Uploader
Some time ago, the Dropbox API
was changed in a way that made
it impossible to perform basic
actions with files and folders from the
command line. For example, for the
sake of better security, public links (of
shared files in the Public folder) were
changed so they could only be created
after communicating with the cloud,
which returned the link with a unique
server-side calculated hash value.
There's also currently no service
menu for the likes of Dolphin, Thunar
and other file managers. Of course,
users could have built those menus
themselves if they could communicate
with Dropbox cloud directly. Thanks to
Andrea Fabrizi, this is now possible
(again) with the use of his gorgeous
Dropbox Uploader. This is a Bash script,
which restores the CLI interaction with
Dropbox by adding itself as a third-party
app and connecting to the service.
Download the master ZIP archive
from the project's Github page, extract
it and run ./dropbox_uploader.sh. The
first time the script is launched, it will
guide you through the setup process.
There's very good built-in
documentation with all necessary
details. The script will explain the steps
to register the script as a third-party
app, to which Dropbox grants access for
working with shared files. After you
paste the newly created app key and
secret to the script's prompt, the initial
setup is finished. The uploader doesn't
require any extra authentication and
thus doesn't store any sensitive data.
The first time you run the script, a helpful wizard will guide you through initial setup.
“This is a Bash script, which restores the CLI interaction with Dropbox.”
It works with the official Dropbox API and enables copying, moving, deleting
and renaming files within your account,
as well as uploading, downloading and
sharing files. For instance, getting a
public link for a file is as simple as:
./dropbox_uploader.sh share Public/My_file.odt
The application uses the ~/Dropbox folder as its root directory, so you don't need to provide the full path. If a file exists,
it’s synced and you'll receive a valid link
from Dropbox followed by a success
notification. The uploader script doesn't
depend on the official client either, so
it's even more cross-platform.
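A few more examples of the same pattern (the file names here are hypothetical; run the script with no arguments to see the full command list):
./dropbox_uploader.sh upload photo.jpg Photos/photo.jpg
./dropbox_uploader.sh download Documents/report.odt
./dropbox_uploader.sh list
upload, download and list are among the script's built-in commands, alongside the share command shown above.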
Quick preview tool
Gloobus-preview
Version: 0.4.5 Web: http://bit.ly/GloobusPreview
This is a well-known quick file
previewer, designed primarily
for GTK-based desktop
environments, but suitable for almost
any desktop. The application supports
images, documents (PDF, ODF and
ODS, etc), audio (MP3, WAV and OGG),
video (AVI, OGG, MKV and FLV, etc),
folders, archives, fonts, plain text files
and more.
In Gnome, Unity or Cinnamon,
Gloobus-preview integrates with the
Nautilus or Nemo file manager (you
press Space to preview), while in KDE
you can manually create a simple servicemenu that opens a file in the quick previewer with the command gloobus-preview %f (see the sketch below).
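A sketch of such a servicemenu (KDE4-era format; the file and action names are made up – save it as, say, ~/.kde4/share/kde4/services/ServiceMenus/gloobus.desktop, adjusting the path for your distro):
[Desktop Entry]
Type=Service
X-KDE-ServiceTypes=KonqPopupMenu/Plugin
MimeType=all/allfiles;
Actions=gloobusPreview;

[Desktop Action gloobusPreview]
Name=Preview with Gloobus
Exec=gloobus-preview %f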
The previous version of Gloobus-preview dates back to late 2009, and
since then the application has lost
compatibility with modern Linux
distributions. However, a new developer,
György Balló, joined the project recently
and brought some major updates to the
code. He's ported the user interface to
Pygobject3 and GTK3; the media
framework to GStreamer 1.0; and
media keys to GDBus. The list of
supported file formats is also greatly
enhanced. The old icns engine has been superseded by ImageMagick, XPS format support has been added, and the overall office plugin capabilities are improved by using ssconvert (derived from Gnumeric), while bsdtar improves archive support.
Gloobus-preview isn’t a loner in its
class. Various competing projects exist,
from Gnome Sushi and Nemo Preview
to Klook, but Gloobus-preview has the
widest list of supported file types, looks like a very polished product and works smoothly.
The preview feels very nice in any environment and works like a lightweight multi-purpose reader and player.
“Has the widest list of supported file types and looks very polished.”
Installing the new version in Ubuntu
and its derivatives is quite simple
thanks to the dedicated PPA (http://
bit.ly/WebUpd8PPA). On other
systems you may want to either convert
Ubuntu's Deb package using Alien (it
worked wonderfully for OpenSUSE) or
build the previewer from source. In the
latter case make sure you're working
with the latest branch (use the URL in
the strap), not the outdated 4.1 one.
Youtube CLI app
MPS-Youtube
Version: 0.2.1 Web: http://bit.ly/MPS-Youtube
With MPS-Youtube we continue
our series of command-line
tools that enable easy
control of your favourite apps and
services. MPS-Youtube was planned as
an audio player and track downloader
for YouTube, but it was soon equipped
with a video playback feature.
The project is based on MPS,
a terminal-based program to search,
stream and download music. As the
name suggests, MPS is bound with
YouTube as a source of music and
videos, and the app has decent support
of YouTube features, too. It can retrieve
metadata (view count, duration, rating,
author, thumbnail and keywords),
create and save local playlists, search
and import YouTube playlists, view
comments for videos and more. All this
is possible through the Pafy library
(http://bit.ly/PafyLib), which is
already included with MPS-Youtube.
64 LXF195 March 2015
The application is packaged as a
Python module (both 2.7 and 3.x series
are supported) and can be easily
installed using Python's pip catalogue.
Make sure you have python-pip (or
similarly named) package in your
system and issue the following:
sudo pip install mps-youtube
Once installed, launch it with the mpsyt command. By default MPS-Youtube doesn't have video playback enabled, so let's fix it:
set show_video true
Another prerequisite is enabling the video search feature (by default, only music is searched):
set search_music false
This CLI-based app could be a replacement for your browser, Flash plugin and YouTube extensions.
“MPS-Youtube: Search, stream and download music and video.”
Now you can search for a YouTube
video, using a dot or slash sign followed
by a search string. For example:
.beatles cover
The application will return a table of
search results, with two columns: a
number and a corresponding name.
Enter the desired number and press
Enter to start watching. You can change
the default video player, MPlayer, to
another, for instance, MPV with:
set player mpv
To return to the search results, close
the video window and press Ctrl+C in
the shell. You can easily download the best audio or the best video version of a track by using da # or dv # respectively.
Sound editor
KWave
Version: 0.8.99-2 Web: http://bit.ly/KwaveApp
KWave is a sound wave editor, built with a Qt4 interface, and thus fits nicely with KDE and
thus fits nicely with KDE and
any other desktop environment for that
matter. Handling wave sound data in
Linux is considered a niche occupation,
and there’s a lack of high-quality editors.
Audacity seems to be the most
powerful solution, but sometimes you
need a more lightweight application,
which can offer a bit more than the
legacy kde3-krecord. KWave meets
these conditions just fine. It began life
back in 2004 and nowadays is being
updated every few months.
KWave can work with Pulseaudio, ALSA and even the OSS sound system,
which can be set in Settings > Playback.
After you've done that you can start
recording by pressing the red button.
A new window with recording settings
will appear, and you can change the
sound quality, pre-define recording
timeslot, input device and more.
The new recording will appear in a
new tab in KWave. Under the FX menu
you'll also find some essential effects,
such as Normalise, Low and Band pass
filters, Pitch Shift and Notch filter.
Under the Calculate menu you can add
extra noise to your recording, or convert
the selected wave area into silence.
The wave stripe can be zoomed in
and out, plus you can select part of it
and export to a file. KWave supports
multiple track handling with all
necessary features for making
multitrack composition. You can then
export your work into OGG, FLAC, WAV
or even MP3.
The new version fixes a lot of past stability issues, and adds new translations and some new command-line arguments.
If you are looking for an Audacity equivalent for KDE, you’ve probably found the best match.
“KWave can work with Pulseaudio, ALSA and even OSS.”
For instance, it’s possible to
quickly load the desired record using
something like:
kwave --iconic --disable-splashscreen test.wav
KWave also supports remote control through command URLs, such as:
kwave:plugin%3Aexecute?normalize
kwave:save
KWave is included in the Ubuntu
apps directory, though sometimes it's
not updated there to the latest version.
The official KWave page (see above)
offers the latest version for most of
RPM-based distros, and the source code.
Package manager
Linuxbrew
Version: rolling dev Web: http://brew.sh/linuxbrew
Anyone who might have tried
Mac OS X will be familiar with
the Homebrew package
system. While package management is
a bit unusual for Apple users, it’s a
keystone system feature in Linux.
However, Homebrew greatly differs
both from Macports (BSD style) and
Apt and Yum etc for Linux. Linuxbrew is
a fork of Homebrew for Linux, and it
shares nearly all the peculiar features of
its parent, so let's cover the basics.
Linuxbrew is a Ruby script that enables access to online repositories of software, particularly system tools,
Ruby on Rails packages and related
extras. If you're not a Ruby developer,
this will open a whole new world of
software at http://braumeister.org.
To install the Linuxbrew client you'll
first need to check that you have some
basic programming stuff installed (curl,
git, m4, ruby, texinfo, bzip2-devel, curl-devel, expat-devel, ncurses-devel and
zlib-devel). When you're ready, issue the
installation command
git clone https://github.com/Homebrew/linuxbrew.git ~/.linuxbrew
and add the following to the .bashrc file:
export PATH="$HOME/.linuxbrew/bin:$PATH"
export MANPATH="$HOME/.linuxbrew/share/man:$MANPATH"
export INFOPATH="$HOME/.linuxbrew/share/info:$INFOPATH"
After that installing any package is
as simple as:
brew install package_name
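For example, to find and install wget (any other package name works the same way):
brew search wget
brew install wget
brew list
search, install and list are standard brew sub-commands inherited from Homebrew.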
Linuxbrew doesn't need root authentication or sudo because it puts all installed files in your home folder at ~/.linuxbrew.
Browse repositories online or use the brew search command to find the right package.
“If you’re not a Ruby developer, this will open a new world of software.”
The Linuxbrew solution might seem germane only for outdated or ageing LTS Linux distros, which no longer provide fresh versions of development tools. But the choice is
far richer than just development
armoury. Homebrew houses many
repositories, including games with
many popular titles (Freeciv, SuperTux
etc). The application database is also
the same for both Homebrew and
Linuxbrew, so it’s very easy to sync your
development stack between OS X and
Linux, if needed.
HotGames Entertainment apps
Race simulator
SuperTuxKart
Version: 0.8.2b Web: http://bit.ly/SuperTK
SuperTuxKart is a kart racing
game originally started in
2004 as a free and open
clone of Nintendo's Mario Kart. Since
then the game has absorbed new
features and technologies, as well as
lovely details from many proprietary
kart simulators, particularly
Moorhuhn Kart. The game features
many popular mascots from the
open source world, including Tux, Beastie, Wilber, Pidgin, Konqi and the
XFCE Rat. Each can be selected as
your driver, while others will be your
competitors. The game referee is
Thunderbird, by the way.
The last stable version of SuperTuxKart is 0.8.1, which dates back to November 2013, and there wouldn't be much sense in writing about it here in HotPicks if some important changes hadn't landed recently in the game's Git repository.
And indeed, they are ground-breaking.
The latest unstable 0.8.2b version
uses the new Antarctica graphics
engine, which supports 100% dynamic
light/shadows, volumetric fog, depth of
field, different model maps (glossmap/
specular map/normal map/emit map)
and requires an OpenGL 3.1-capable
video chip to unleash all these benefits.
There's also a new shader-based
rendering engine, physics, racing tracks,
Wii support, online accounts support
and a long-awaited multiplayer mode.
This beloved open source kart simulator has returned
with renewed vigour and lots of graphics enhancements.
“There's a new shader-based rendering engine, physics and tracks.”
The game's settings enable you
to change the graphics quality, but
keep in mind that in order to use
the top-grade settings (with
dynamic lighting etc) you've got to have the appropriate graphics hardware.
Intel's HD3000 is an official
minimum requirement, but the
best results can only be achieved
with Nvidia or AMD high-end cards
using proprietary graphics drivers.
Board game
Aualé
Version: 1.0 Web: www.joansala.com/auale
Aualé is a strategy board game
with a long history. The name
is a Catalan translation of
Mancala, which is a bigger family of
board games from Africa and the Middle East. The main principle is to collect
seeds from one hole on your side of
the board and then distribute (sow)
them into the holes counter-clockwise, putting one seed into each
hole. In certain conditions you can
grab seeds from your opponent’s
holes and store them in your barn.
Aualé has each player start with
six holes, and there’s a total of 48
game pieces, or seeds. According to
the rules, you can grab seeds for your
barn when a hole belonging to your
opponent contains two or three
seeds. The player who collects 24
seeds first wins the game – and
Aualé turns out to be quite hard to play, even against the computer, which
provides very basic artificial intelligence.
The gameplay requires you to
constantly re-evaluate risks and
possibilities and, of course, accurately
count the number of seeds in each hole.
Both players try to reach a situation
where an opponent runs out of seeds
on his side and thus doesn’t have
enough space to manoeuvre. In this
case it is quite easy to put the squeeze
on the opponent and win. Apparently
the long game tends to require well-thought-out positional moves, where
players concentrate seeds in a few
holes and redistribute them carefully.
After getting energised with racing, we spend a couple
of hours with seeds and holes. This really calms us down.
“Aualé turns out to be
quite hard to play, even
against the computer.”
Aualé lets you choose the
strength of the computer player. You
can also save a match and continue it
later, and roll back and forth recent
moves. The game’s website provides
binaries for many popular Linux
distros, both Deb and RPM-based.
But the inside of a package shows
that Aualé is a noarch game,
consisting of Python scripts and bindings and some platform-independent JAR pieces. If you like
logic games, Aualé is a strong choice.
Activity simulator
Hollywood
Version: 1.8 Web: http://bit.ly/HWTechMelodrama
There's a rare category of
applications that are
entertaining, delivering pure fun
and joy, but at the same time are nearly
useless from a practical perspective.
Dustin Kirkland and Case Cook, who
both work at Canonical, found a few
spare hours to bring us such a totally
useless package, but it's an utterly brilliant one for the Ubuntu console: it's called
Hollywood Technical Melodrama.
But what is it? This package turns
your Ubuntu console into a Hollywood
technical melodrama hacker interface,
blending the 'real-life hacking' seen in
classics like the 1995 Hackers movie,
and the kind of rolling text and graphics
that you tend to see on huge screens,
whenever a generic top secret control centre is required in an action movie.
We doubt that the NSA or GCHQ bother
sliding vast marble floors back to reveal
huge maps every time they run a
mission (the health and safety issues alone would be a nightmare). Of course, most of what's displayed on the vast screens in such movie scenes makes little or no sense, but who cares if it manages to convey the right mood of covert, and usually evil, enterprise.
The Hollywood package supports all recent Ubuntu versions starting from 12.04, as well as other compatible *buntu flavours and derivatives. If you're not on Ubuntu, you can still download the tar.gz package manually (http://bit.ly/1Aig3x2), extract it and run it.
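On Ubuntu itself the short version looks something like this (hollywood is the package and command name used by the project; check its page in case your release needs a PPA added first):
$ sudo apt-get install hollywood
$ hollywood
To escape the melodrama again, kill or detach the Byobu session it creates.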
The application contains no
binaries, but has a large number of
dependencies, some of which are
familiar command-line tools. The Hollywood Technical Melodrama relies on Byobu, a text-based window manager and terminal multiplexor, which creates a tmux session and splits the main window into parts. The output of a different command is then shown in each part.
This screenshot is reserved for the movie about Mir development.
“Delivering pure fun and joy, but at the same time nearly useless...”
For extra pizzazz, the host script
changes the Byobu layout a few times a
minute. In order to run Hollywood
smoothly, you'll need htop, MPlayer,
mlocate, ccze and some other utilities.
To guess what's missing, simply
examine the splits for errors and
complaints. Fully charged Hollywood
should flicker with a Matrix effect,
colourised text, logs and even an MP4
file, played with the ASCII video output.
Of course, CPU load will be quite high, so watch out for battery drain when showing off your leet hacking skillz to everyone in your local coffee shop.
Virtual printer
Boomaga
Version: 0.6.2 Web: www.boomaga.org
This software will be a great relief
for anyone who ever wants to
print out a booklet in Linux.
Normally, most Linux text processors and graphics applications offer standard print settings with page
selection and access to printer driver
options, and it's more than enough
unless your task is more specific.
For example, you may want to place two pages on a single sheet of paper and create a two-up saddle-stitch booklet,
where the first sheet will have the first
and the last page, the next will have the
second and the last but one page and
so on. And all this should be printed
double-sided. In the old days only
Pagemaker (and later InDesign) offered
the Booklet plug-in. Sometimes this
feature was included in the printer
driver for some models.
Boomaga (BOOklet MAnager) does
this job gracefully. If you run it as a
standalone application, you'll have to
prepare your PDF file beforehand, as it's
the only supported file format. However,
Boomaga also adds itself to the list of
your printers, so you can use it from any application that supports printing; see the example below.
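The same trick works from the command line through CUPS. A sketch, assuming the virtual printer has registered itself under the name Boomaga (run lpstat -p to check the exact name on your system):
$ lpstat -p
$ lpr -P Boomaga document.pdf
The job then pops up in the Boomaga window, ready for arranging.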
After choosing to print to Boomaga,
the program will automatically generate
a PDF and show it loaded in the
Boomaga main window. Here you can
sort pages the way you need them to be placed on paper. The layout can be one-up (plain order), two-, four- or eight-up, or booklet (for a saddle-stitch after
folding sheets in two). After choosing
your layout variant you can select the
actual physical printer and access its options by pressing the Configure button.
Rearrange pages and print out a book like a real DTP professional.
“A great relief for anyone who wants to print out a booklet in Linux.”
Options vary depending on the
printer model you have. If your printer doesn't have a duplex printing feature, you may want to enable it in the
Boomaga Layout settings. In this case
the application will ask you to turn over
the pages halfway through printing your
document. The application places all print jobs in its own queue, enabling you to export each job as a PDF (very helpful in pre-printing), rename jobs, rotate entire jobs and more.
The project's official download page
(above) contains instructions for
installing Boomaga. Luckily, packages
for most mainstream Linux
distributions are available. LXF
Back issues Missed one?
Get into Linux today!
Issue 194
February 2015
Issue 193
January 2015
Issue 192
Christmas 2014
Product code:
LXFDB0194
Product code:
LXFDB0193
Product code:
LXFDB0192
In the magazine
In the magazine
In the magazine
What’s the Next Big
Thing in Linux OS tech?
We show the hot stuff
you’ll want to try. Bored
of your default desktop?
Take your pick of our
alternatives. Plus, cake
for everyone! Firefox
celebrates 10 years.
Create a multimedia
hub for your home and
stream films, music
and photos around the
house. Try out next-gen
filesystems for a RAID
array, mod Minetest and
simplify your firewalls
and so much more!
More power! Charge
up your distro with
essential tips and tools.
Build a robot and a
monstrous 24TB NAS
box. Plus: system
recovery, Linux certs
and our pick of the most
productive desktops.
LXFDVD highlights
Fedora 21 Workstation, Manjaro,
ALT Linux, 4MLinux and more!
LXFDVD highlights
Ubuntu 14.10, OpenSUSE 13.2
and XBMCbuntu 13.2 and more.
Issue 191
December 2014
Issue 190
November 2014
Issue 189
October 2014
Product code:
LXFDB0191
Product code:
LXFDB0190
Product code:
LXFDB0189
In the magazine
In the magazine
In the magazine
Take your Raspberry
Pi mastery to the
next level with our hot
hacks. Learn how to
contain everything
with Docker and plug
in to professional audio
production using JACK.
Plus: Develop with PHP.
Origin of the distro –
LXF tracks the source of
your favourite distro and
picks the best of each
genus. Plus: we chart
Ubuntu’s bumpy history
as it celebrates 10 years.
Also, Pi alternatives and
the best web browsers.
Discover how to solve
your Linux problems
with a huge troubleshooting guide. Plus:
Chromebook roundup,
run Linux on almost
any Android device,
cryptography explained
and Minecraft hacking.
LXFDVD highlights
Hot Pi distros! Kali Linux, Jasper,
RetroPie, Pi MusicBox and more.
LXFDVD highlights
Tails 1.1 Live DVD, Deepin 2014
and 3 essential rescue distros.
LXFDVD highlights
Ubuntu 14.10 Remix (pick from
5 desktops), ROSA, Rescatux.
To order, visit www.myfavouritemagazines.co.uk
Select Computer from the all Magazines list and then select Linux Format.
Or call the back issues hotline on 0844 848 2852
or +44 1604 251045 for overseas orders.
Quote the issue code shown above and
have your credit or debit card details ready
GET OUR DIGITAL EDITION! SUBSCRIBE TODAY AND GET 2 FREE ISSUES*
Available on your device now
*Free Trial not available on Zinio.
FULLY REVISED & UPDATED FOR 2015
180 PACKED PAGES!
THE COMPLETE GUIDE TO
SETTING UP AND TRADING ONLINE
DISCOVER HOW TO:
CREATE A COMPANY • SELL PRODUCTS ONLINE
• MARKET ON SOCIAL MEDIA
Available at all good newsagents or visit
Tutorial
Minecraft/Pi
Unleash the artistic turtle
inside you to make trippy trees
Minecraft/Pi:
Turtle graphics
Jonni Bidwell once again fires up Minecraft on the Raspberry Pi, and plays
with the graphical power that resides within a humble turtle.
Our
expert
Jonni Bidwell
learned that in
North America the
word ‘turtle’ refers
to Tortoises,
Terrapins and
Turtles — the
whole chelonid
family. Whereas in
the UK, the
meaning is much
more specific.
Quick
tip
You can also use
LibreLogo to do
turtle graphics in
LibreOffice. Go to
View>Toolbars>Logo.
Yet another avenue
of unneeded
procrastination.
You may be old enough, while still possessing the necessary memorial faculties, to recall programming Turtle Graphics in a language which called itself Logo. Actually, there have been several languages calling themselves thus; the first one appeared in the late 60s. Be
that as it may, through the captivating medium of controlling
a pen-wielding turtle (robotic or screen-abstracted), users
could make shapes and patterns using diversely coloured
lines by issuing simple directional commands. The turtle had
a head (or at least a tail) and these commands (forward,
backwards, left 20 degrees etc) were all taken to be relative to
our critter's current bearing, not based on an absolute
screen-defined compass. Students were encouraged to learn
by ‘body-syntonic’ reasoning – imagining themselves as the
obedient reptile in order to better understand the effect of
any given command. Once the basics had been mastered,
one could then learn all about functions and parameter
passing, so that you could draw a regular ununpentagon
(which would be a 115-sided figure if it was a real thing) of any
given side length with a single command.
Python comes with its own turtle graphics module, which
uses the lightweight Tk toolkit to handle the graphics. If you're
70 LXF195 March 2015
www.linuxformat.com
using Raspbian you will already have it installed, and if you've
not experienced the joy of being a turtle that draws, then you
should definitely have a play with it – check the docs at
https://docs.python.org/2/library/turtle.html . However,
for this tutorial we're bringing Turtle graphics to Minecraft,
so we're going to make our own Turtle implementation. One in
which our chelonian plotter can move in not two but three
dimensions. Did I say "make our own"? That's not really true: the work has all been done by our good friend Martin O'Hanlon, so do check out his website www.stuffaboutcode.com
for all manner of Minecraft: Pi Edition projects. We've already
'borrowed' Martin's stuff before, so you might have already
seen his fantastically handy MinecraftDrawing class in
previous tutorials, [see Tutorials, p84, LXF186]. The class
provides functions for drawing arbitrary points, lines, faces,
circles and spheres all over the Minecraft world, and forms
the basis for our mcpiturtle project.
Before we begin, it's worth explaining some of the
complications that arise from letting our turtle move in three
dimensions. In the 2D case, we really have only one angle with
which to concern ourself: how many degrees to turn left or
right from the current heading. Moving to three dimensions
grants us another heading we can adjust: the elevatory angle.
Thus our turtle will have four different rotation functions, as
well as the two translation ones (forward() and backward()).
The default representation for our turtle is a diamond block – that’s because our turtle is hard.
Inspired spirals
Turtle graphics make easy work of
drawing some shapes which are quite
hard to construct objectively. Consider a
spiral in two dimensions: Each coordinate
is given by a parametric equation of the form x = a·t·cos(t), y = a·t·sin(t), where t is a
parameter that varies from 0 to infinity
(or until you get bored of drawing
spirals). But turtles don't give a damn
about trigonometry, if they want to trace
out a spiral, they need only follow the
following recipe:
for step in range(100):
    t.forward(step // 2)
    t.right(45)
Okay, so that's not quite as curvy as a spiral ought to be but it's a much less
complicated method. Here's another
approach to try, which constructs a spiral
from the outside in by rotating and
scaling squares:
def square(t, length):
    for k in range(4):
        t.forward(length)
        t.left(90)

def sqr_spiral():
    length = 50
    for j in range(100):
        square(t, length)
        t.left(10)
        length = int(0.9 * length)
Calling the second function, with
everything set up correctly, results in the
much more impressive result shown.
By adding another dimension things can
get even more impressive: extruding a
circular path (a degenerate case of a
spiral, parametrically speaking) results in
a helix, so having two oppositely sensed
circles on top of each other will result in a
double helix. Thus by pairing four
colours you could fill the Minecraft world
with DNA.
We will keep track of the two different headings via the turtle
object's heading and verticalheading properties. It can be
confusing at first, but once you've played around a little it will
become second nature.
Besides the six commands for relative rotation and
movement, our mcpiturtle object accepts commands for
setting absolute position and headings: setx() etc for setting
individual co-ordinates, setposition() for setting all three,
setheading() and setverticalheading() for changing the
angles. While these weren't part of the original language (they
don't make sense if you're a robot liberated from screen-defined co-ordinates), they can be very useful for getting to
where you need to be in Minecraft world. We also have
penup() and pendown() to control whether or not our turtle
will draw anything, and penblock() to control the block with
which the turtle will draw (the default is black wool). We can
also choose the speed of our turtle, by calling speed() with a number from 0 to 10, with 10 being very fast and 1 being very slow; setting the speed to 0 will draw lines instantly.
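Putting those pen controls together, here's a quick interactive sketch (the coordinates and wool colour are arbitrary choices):
>>> t.speed(0)               # draw instantly
>>> t.penup()                # move without drawing
>>> t.setposition(0, 10, 0)
>>> t.pendown()              # start drawing again
>>> t.penblock(35, 5)        # wool (blockType 35), colour 5
>>> t.forward(10)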
Once you've got Minecraft and mcpiturtle.py installed
[see bottom of p73] we can begin to explore the power of the
turtle. With a Minecraft world loaded, open up an LXTerminal,
cd to the directory where you copied mcpiturtle.py, and start
Python (you can also use the IDLE development environment
if you prefer). Import our module and set up the connection
to the game server with:
import mcpiturtle
import mcpi.minecraft as minecraft
import mcpi.block as block
mc = minecraft.Minecraft.create()
Turtle wrangling
Being a turtle in a 3D world full of blocky hills, lakes and
other features comes with its own challenges. Suppose for a
second you are that turtle and are told to walk forward 50
blocks, but right in front of you is a great mountain. As an
obedient creature, how ought you proceed? You could follow
the level of the ground, up and possibly over the mountain, or
you could ignore the scenery and tunnel mercilessly through
it. A similar dilemma applies if you stand before a gaping
chasm. These two alternative means of tackling a varying
ground level are chosen by setting the turtle (you can stop imagining it's you now if you want) to walk() or fly() mode. Walk will respect the ground level, so that the y-coordinate of the turtle and any tracks it lays will vary. Fly (the default) will keep the y-coordinate constant, so that the turtle will fly above valleys and tunnel through mountains.
These psychedelic trees can be yours with just a few lines of code.
Quick
tip
If your turtle is travelling in the negative y direction (what the vulgar call 'down') then the diamond block which represents it will erase its previous mark. You are free to fix this bug by caching and restoring blocks as the turtle moves.
And instantiate a new turtle object at Steve's current position:
>>> pos = mc.player.getTilePos()
>>> t = mcpiturtle.MinecraftTurtle(mc, pos)
Behold, a block of diamond appears at your current
position. This is actually our turtle, if you want to change its
appearance then you can modify the turtleblock attribute.
For example, to change it into a block of stone:
>>> t.turtleblock = block.Block(block.STONE.id)
Once you're satisfied with the appearance of your turtle,
we can begin some artistic, or otherwise, endeavour.
This code will draw the edges of a square-based pyramid:
>>> for j in range(4):
...     t.forward(20)
...     t.right(90)
...
>>> t.right(135)
>>> t.up(52)
>>> t.forward(23)
>>> t.down(104)
>>> t.forward(23)
>>> t.up(52)
>>> t.right(135)
>>> t.forward(20)
>>> t.right(135)
>>> t.up(52)
>>> t.forward(23)
>>> t.down(104)
>>> t.forward(23)
Note that our turtle draws underneath its current position.
This gives rise to an interesting bug (see the Quick tip, left).
For now, enjoy this example wherein we have created a
wireframe pyramid. If you fill in the gaps (or get several
thousand slaves to do it for you) then you would have a fitting
tribute to Pharaoh Stevehotep of Minecraftland (or something
like the illustration pictured, below). All very well, but fairly
tedious (and you didn't even have to work out the geometry),
and not really as awe-inspiring as the one at Giza (even
though the proportions are close, the Great Pyramid has a
base angle of just under 52 degrees). Here's a slightly more
colourful example, this time we'll put our drawing commands
“Steve, what do you see?”. “Things! Wonderful things!”. Or maybe ancient
curses and diseases, all made of blocks. [Image credit raspberrypi-spy.co.uk]
inside a function. You can either enter these commands into
the Python interpreter directly, or save them as a file, say
ngon.py, in the same directory as mcpiturtle.py.
import random

def ngon(t, n, length):
    angle = 360. / n
    for j in range(n):
        colour = random.randint(0, 15)
        t.penblock(35, colour)
        t.forward(length)
        t.right(angle)
Now import the ngon module, if you saved it externally,
move your turtle out of the way of your pyramid and from the
same interpreter run:
>>> import ngon
>>> ngon.ngon(t, 9, 10)
This will draw a regular 9-gon (a nonagon) of side-length
10 blocks. Through the random module we change the
turtle's draw colour. Wool has blockType 35 (we don't have
access to the block class so we can't use the more
descriptive names seen earlier) and the second line of code
sets the blockData parameter, which for wool blocks controls
the colour.
Lignum fractus
We've covered fractals before, most recently using Python to
generate von Koch snowflakes in Gimp [see Tutorials, p84,
LXF190]. There we learned all about recursion, and how a
function which calls itself can lead to pretty pictures. For this
tutorial our focus shall be more arboreal than meteorological,
in that we shall produce some glorious trees, making this a
carbon-neutral tutorial.
Making 2D fractal trees with turtle graphics is a pretty
straightforward process – start with a straight line (the
trunk), turn right through a set angle and repeat the
procedure for a smaller line, then turn left a set angle and
repeat. In Python the general shape of this code (excluding
initial setup) would be:
def tree2d(branchlen, t):
    if branchlen > 2:
        t.forward(branchlen)
        t.right(20)
        tree2d(branchlen - 2, t)
        t.left(40)
        tree2d(branchlen - 2, t)
        # then return to original heading/position
        t.right(20)
        t.backward(branchlen)
Then calling tree2d(20, t) would slowly but surely grow a tree-like object. The if statement stops the function being
an infinite recursion – each time we draw smaller and smaller
branches, but the function will stop doing anything when
called with a branch length of 2 or less. Hence our function
call produces a tree with a trunk of length 20, and up above
twigs of length 4.
Generalising this to three dimensions is quite
straightforward – instead of having two recursive calls to our
function, we shall instead have four. One for each compass
direction, and each one deviating from its parent's vertical
heading by 20 degrees. We can also add some flair by
drawing the shorter (higher up) branches in random colours,
a nod to the spring blossom that will be appearing Any
Day Now. Larger branches (and the trunk) will be made of
wood, just like in real life. Finally, since this can be quite a slow
algorithm (calling tree(20) results in 4,096 recursive calls)
we reset the turtle's original position directly using t.setposition() per function call, rather than using
t.backward() and doing all the inter-block calculations and
animations unnecessarily. And without further ado, here is
our 3D tree function:
def tree(branchLen, t):
    if branchLen > 6:
        if branchLen > 10:
            t.penblock(block.WOOD)
        else:
            t.penblock(block.WOOL.id, random.randint(0, 15))
        x, y, z = t.position.x, t.position.y, t.position.z
        t.forward(branchLen)
        t.up(20)
        tree(branchLen - 2, t)
        t.right(90)
        tree(branchLen - 2, t)
        t.left(180)
        tree(branchLen - 2, t)
        t.down(40)
        t.right(90)
        tree(branchLen - 2, t)
        t.up(20)
        t.setposition(x, y, z)
You'll find all this together with all the required setup in the
file mcpi3dfractaltree.py on this month’s LXFDVD.
Importing this file (while a Minecraft world is loaded) will
result in animation being disabled (by calling speed(0)) and a
rather funky looking tree automatically being generated in
your current position.
You can easily alter the tree if it's not to your tastes. If you
wanted to expand the 3D trees project a little, why not add
small random numbers to the angles – after all, nature isn't
all right angles and perfect symmetry. Also, trees (in the
northern hemisphere) tend to lean southwards, so your code
could favour one direction in order to recreate this
phototropism. And that concludes another exciting
instalment of our Minecraft: Pi Edition series. LXF
Helices ain’t no
thang, if you’re a
turtle. Next week:
Build your own
Minecraft DNA
sequencer [Ed –
er, no].
Quick
tip
There are all
manner of other
Minecraft Turtle
examples on
Martin's GitHub:
http://bit.ly/
MineCraftTurtle.
Installing Minecraft:Pi Edition and the turtle module
We've covered this before, but since packages
are now available for Raspbian it's a bit easier
now. First, if you downloaded Raspbian after
September 2014, then you'll already have
Minecraft installed system-wide and don't need
to do anything. Otherwise a simple
$ sudo apt-get update
$ sudo apt-get install minecraft-pi
will get the job done. Prior to this packaging, it
was necessary to install the program to your
home folder and copy the Python API files from
there to your project's folder.
This approach will still work, but it's messier
than the new way. You'll find the mcpiturtle.py
file on the LXFDVD, this will work without
modification if you've installed the package as
above. It will also work the old way if you happen
to have the API files in a directory called mcpi
and the mcpiturtle.py file one level above this
directory. Otherwise you'll need to modify the first two import lines, according to where the API
files are located. You can test this out by starting
Minecraft ($ minecraft) and entering a world. In
a separate terminal window, cd to wherever
you've copied mcpiturtle.py, then run it with
$ python ./mcpiturtle.py
This will run the tests in the
if __name__ == "__main__":
stanza, which include: drawing a pentagon,
changing pen colours, and general meandering.
Tutorial
ImageMagick
Use the convert and
mogrify commands to alter images
ImageMagick:
Convert images
Neil Bothwick says forget Gimp, put your mouse away and
learn how to manipulate images from the command line.
Our
expert
Neil Bothwick
has a great deal
of experience with
booting up, as he
has a computer in
every room, but
not as much with
rebooting since he
made the switch
from Windows
to Linux.
If you find the man pages a little terse, the ImageMagick
website has more details and command examples.
Ask a group of Linux users to name a graphics
processing package and the vast majority will
mention Gimp (GNU Image Manipulation Program)
first. You may get a few votes for other programs but you are
unlikely to get many for one of the most used graphic
packages on Linux, ImageMagick. This is a suite of command
line image processing programs, and what is the point of such
a thing, you may be asking? Surely editing graphical images is
best handled by a graphical application? If you want to
retouch or otherwise modify a single image, you would be
right, but have you ever tried converting a directory of images
to a different format,or resizing them? That's where the
ImageMagick programs come into their own, and the reason
why this is one of the most used image processors on Linux,
even if it is behind the scenes.
The most used command in the ImageMagick toolbox is
convert, and format conversion is the simplest of its
operations. Using
convert pic1.jpg pic1.png
does exactly what you would expect it to, converts an image
from JPEG to PNG. Well, it makes a new image file, in place
modification is something we will look at shortly. We didn't
need to specify the type of input and output, convert works
74 LXF195 March 2015
www.linuxformat.com
the input format out for itself (that's part of the magic) and
uses the file extension to choose the output format. You can
change this by prefixing the output file with an image type
and a colon, like this:
convert pic1.jpg bmp:pic1.gif
This would create a BMP image with a GIF extension,
which may seem rather useless until you consider that, in
common with many other programs, convert accepts a
filename of - to represent standard input or output. So if you
want to send image data to another program in PPM format,
you could do:
convert pic1.jpg ppm:- | another_program
Some output formats also accept extra options, such as
JPEG quality:
convert pic2.png -quality 75 pic2.jpg
The convert man page and online documentation give the
full range of such options.
Resizing images
After format conversion, resizing is one of the more popular
image processing tasks, and convert does this too.
convert pic3.png -resize 50% pic3_small.png
This is the simplest resize operation, a single scaling factor
is applied to the image. You can scale by different factors on
the X and Y axes with -resize 40x60%; even though there is only one % sign, it applies to both numbers. If a single
numeric value is given then that is taken to be the width in
pixels of the new image; the height is calculated to preserve
the aspect ratio of the image.
convert pic3.png -resize 1000 pic1000.png
convert pic3.png -resize x1000 pic1000.png
In the second example, the value is for the height – think
of it as widthxheight with the width value missing. If both
width and height are given, the resulting image size is the
largest that will fit into that area while preserving the aspect
ratio. If you want to set absolute dimensions and ignore the
aspect ratio, add a ! to the size:
convert pic4.png -resize 1000x600 pic5.png
convert pic4.png -resize 1000x600! pic5.png
The first line preserves aspect ratio, the second creates an
image exactly 1,000x600 pixels. If you are creating
thumbnails of larger images, you may want to avoid enlarging
any smaller images in the list, which usually just creates a
bigger but fuzzier image, so add > like so:
convert pic4.png -resize 1000x600\> pic5.png
The > tells convert to only resize the image if the original
is larger than the given size. Note that you will need to escape
or quote the > when running this in a shell or it will be
interpreted as a redirection operator. Operations can be
combined in a single call, and are applied in order, so you can
resize and convert to a different format at the same time:
convert pic6.png -resize 30% -quality 75 pic6a.jpg
Add some text
An interesting option is the ability to add a text caption to an
image, for example:
convert pic7.png -gravity SouthEast -family Times -pointsize
30 -annotate +10+10 "Some text" caption7.png
The key component here is annotate, which is followed by
the x,y position of the text and then the text itself. The
-gravity setting is also important, it describes where
coordinates are measured from. In this example the text is
drawn 10 pixels in from the bottom right-hand corner of the
image. The -family and -pointsize options control the font
and size of the text. It’s often preferable to use -family rather
than -font because -family will fall back to the closest match
if the exact font is not available, instead of failing with an error.
ImageMagick options apply from the point at which they
appear in the command line, so you must specify the gravity
and font settings before -annotate or they will not apply
(unless you have a subsequent text operation, in which case
they will apply there). This could be used in conjunction with
the date command to caption images with their timestamp in
the following way:
convert pic7.png -gravity SouthEast -annotate +10+10
"$(date -r QR.png +%d-%b-%y\ %H:%M)" new7.png
Convert creates a new file, but sometimes you may need
to modify a file in place; ImageMagick provides the mogrify
command for this. This works like convert, except mogrify
overwrites the input file with the changes, compare these two
commands:
convert pic9.png -resize 50% pic9_small.png
mogrify -resize 50% pic9.png
As you are destroying the original, you need to be sure of
the operation you are performing before running it, so it's
wise to use convert to test it out first. However, mogrify has
another trick up its sleeve. With convert, you need to specify
both the input and output file, so each invocation can only be
run on a single file. On the other hand, mogrify only ever
needs one file name, so if you feed it several, it runs the
command on each file in turn, so
mogrify -resize 50% *.png
will process all matching files in the current directory. We said
that mogrify overwrites the original file, but there is an
important exception to this; if the command is used to
change the file format, a new file is created with the relevant
extension and the original is left untouched, even if other
operations are also performed, using
mogrify -resize 25% -quality 70 -format jpg *.png
will create JPEG thumbnails of all PNG files found in the
current directory.
We have only scratched the surface of what is possible
with the ImageMagick package, the commands we have
covered have many more options and there are other
commands. For instance, identify does what it says and tells you
about the format and size of a file, while the import
command can be used to grab images from the X server, and
screenshots of individual windows or the entire screen.
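For example (the filenames here are just placeholders):
identify pic1.jpg                # report the format, dimensions and depth
import -window root screen.png   # grab the entire screen
import window.png                # click on a window to capture just that window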
The man pages are rather dry to read, but there are plenty of
explanations and examples on the official ImageMagick
website at www.imagemagick.org. LXF
Use identify to get more information about image files of any type.
Batch processing
While mogrify can work with multiple files,
convert works on a single file at a time, so how do
you process a directory full of files? The easy way
is with the shell's for operator.
for i in *.png
do
convert "$i" -resize 500x500 "thumbnails/${i%.
png}.jpg"
done
This runs the commands between do and
done once for each file that matches the pattern.
Each time, $i is replaced with the name of a file.
The ${i%.png} part is replaced with the name of
the file with the .png removed from the end (%
removes the following string from the name) and
we then add the extension .jpg, so what is
actually run is:
convert "pic1.png "-resize 500x500 "thumbnails/
pic1.jpg"
convert "pic2.png" -resize 500x500 "thumbnails/
pic2.jpg"
...
We enclose the file names in quotes in case
there are any that contain spaces, which would
otherwise confuse the shell. This works well if all
the files are in a single directory, if they are in
subdirectories it is sometimes easier to use find:
find photos -name '*.jpg' -exec convert "{}" -resize 25% "thumbnails/{}" \;
Each file that is found is passed to the
command following -exec, where {} is replaced
by the file name. Once again, use quotes to avoid
problems with spaces.
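As an alternative to the loop, mogrify can write its
output to a different directory with the -path option,
which sidesteps the overwriting problem; a sketch,
assuming the thumbnails directory already exists:
mogrify -path thumbnails -format jpg -resize 500x500 *.png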
Tutorial Motion
Detect movement, record a livestream and back it up on a server
Motion: Detect
and record
Kent Elchuk demonstrates how to build a livestreaming system, using a
Raspberry Pi and a webcam and how to save motion-detected video.
Our
expert
Kent Elchuk
is an avid web
developer, Linux
enthusiast and the
creator of Cloner
and Sitemakin
CMS. He likes to
experiment with
new ways to make
the web a greater
experience for
everyone.
We'll assume you have none of the required
packages to follow this tutorial on video
surveillance and video recording. You will use
Motion, which is the heart of this article. Aside from that, you
will require Apache (or Nginx) and PHP. Although this tutorial
is geared towards using a Raspberry Pi, you can use another
PC setup if you prefer. Do note that if you go the Apache and
PHP route, everything will work very easily without having to
make extra changes to the server and PHP.
If you do decide to go with Nginx instead of Apache you
will need to make some extra changes: such as installing
PHP-FPM; changing the root folder path for web page files;
and editing the following files: /etc/nginx/sites-available/default,
/etc/nginx/sites-enabled/default and /etc/php5/fpm/php.ini.
Use simple password-protected authentication to keep files secret.
Now, for the synopsis of each package. Motion will be used
to record video after movement is triggered. The video clips
will be written to a folder as Flash SWF files. However, Motion
still allows you to see the location even without movement,
much like a regular security camera.
Once you have those files, you may want to be able to sort
through them effectively. Here is where the web server and
PHP play their role. With the Apache or Nginx server, you can
serve these files over the web.
Realistically, many files will be accumulated and you may
want to create a loop with PHP in order to output each file
into a link that can display the video in a popup. In which case
a free video popup application, such as Shadowbox can be
used. Lucky for you, the code included on the LXFDVD
contains the files that do all that stuff.
With all that covered, you'll have a setup that can deliver
your videos. This tutorial will show you various options and
their how-to counterparts. Since a camera like this could be
used in your home as a security camera, you may want to
password protect any web pages or the folder where you
keep the videos. Now, if someone did happen to break into
your premises and decide to steal or wreck your Raspberry
Pi, we'll also guide you through a backup plan that can be
used to move your video files to a foreign web server that the
robber won't have a clue exists.
Getting things to work
Since this article is about Motion, let's install this first:
sudo apt-get update
sudo apt-get install motion
Now that one installation is out of the way, let's add the
rest, which includes Apache
sudo apt-get install apache2
and PHP:
sudo apt-get install php5 libapache2-mod-php5 php5-mcrypt
Let's move on and make some basic procedures and tests
to see everything is working as it should. The main files which
you will customise are /etc/motion/motion.conf and /etc/
default/motion. Open up motion.conf with your favourite
editor. The parameters shown below need flipping from their
default values, so that daemon off becomes daemon on:
daemon on
webcam_localhost off
control_localhost off
Save the changes and open up the /etc/default/motion
file and make the following changes:
start_motion_daemon=yes
Now, let's fine tune some options. Three changes that are
needed are: the frame rate, quality and minimum amount of
frames to trigger the motion to record:
framerate 30
quality 90
minimum_motion_frames 5
Without changing this setting, two frames per second
looks way too jerky and will miss out a lot of action, so we
change the frame rate from 2 to 30 frames per second.
The second change is obvious since it's a quality upgrade.
The third change sets the minimum amount of frames of
motion that need to be detected. By default, the value is 1.
The problem with a number this low is that you can end up
with unwanted recordings from things such as lights flickering.
Keep in mind that you have many options and can look
deeper into the features. A good place to start is on the official
website (http://bit.ly/MotionConfigFileOptions).
Some of the other features you might want to consider
are: taking a picture at a desired interval, such as every
second, every minute or every hour. This feature makes it
easy to host a live weather cam, for instance, or to determine
if someone is sitting on your couch.
Configuring Motion
Changing all parameters to suit your specific needs is very
easy and the motion.conf file will often have nice, self-explanatory
comments, while the website and man page have
more information to offer. Obviously, this service doesn't do
much without a working, compatible webcam and a list of
webcams worth trying with the Raspberry Pi can be found at
http://elinux.org/RPi_USB_Webcams.
Using a plug and play webcam makes life easy for this
task, and one cheap, readily available webcam that works is
the Logitech C170. Note: If you are using the Raspberry Pi, the
Raspberry Pi cam won't work with Motion. To tell if the USB
webcam connects OK, run the command lsusb.
At this point, you will likely have a working webcam,
a working web server and an adequate Motion configuration.
This is good, but you'll also need to create a folder for the
images and set ownership for Motion. By default, Motion
drops the images and SWF files into the /tmp/motion
folder. It won't create the folder, therefore, you will need to:
cd /tmp
mkdir motion
chown motion:motion motion
Alert! Man on sofa. Capturing live video feed.
Now, let's see how everything works. To start with, you can
get Motion up and running with the command
service motion start
and you can always restart it with the command
service motion restart
The first thing you'll need to do to test that everything
works fine is to see if you can view the default web page.
Since your Pi will have a unique network address, you can just
type it in the browser. For example, if the Pi is connected to a
router with the IP of 192.168.0.1, your device could have an IP
like 192.168.0.106. Thus, the URL would also be
http://192.168.0.106. If you have success returning the
default web page, you will see a message stating that
everything is working properly. If not, you will get a typical
browser error which makes it obvious that something is not
quite right.
Now with a working server, let's move on and get down to
viewing and recording video. You can test the video in your
browser by typing your network IP and port. By default, the
Motion webcam port will be 8081. Therefore, if you type
http://192.168.0.106:8081 in the browser, you should see
your video stream.
A simple setup like this can be beneficial and have many
uses aside from security, such as keeping an eye on a newborn
while you're working in another room. Since all should
be well at this point with the Motion service running, you can
now go in front of your webcam and jump around a bit.
Actually, hand waving will suffice but a few jumping jacks
aren't going to hurt. After that, you should be able to browse
the new motion folder and see a bunch of JPEG files and at
least one SWF file.
Quick tip
When you're logged in via SSH and need to edit files, using vim to find a string is easy. To do this, all you need is a / followed by the string name. Just type n to move on to the next one.
Using multiple webcams
Need more than one webcam? No problem.
Motion enables you to add more easily. You need
to open up /etc/motion/motion.conf and set
up the threads. If you go to the bottom of the file,
you see various lines that are commented out
followed by the word thread. As you can see, the
default location for these new files is the /usr/local/etc folder.
To keep it simple, you can change the thread
folder to the /etc/motion folder. This way, you
keep all the editing in one simple location. Now,
the first thread would resemble the line below:
thread /etc/motion/thread1.conf
Once you've set up a thread line for each of your
webcams, you can create the corresponding
thread files. Thus, your first thread would be
called thread1.conf, followed by thread2.conf,
and so on. The code that you add to these
threads just needs to be a few lines. The code
samples below display two threads. As you can
see, each thread has its own videodevice
parameter, custom text that appears on the left-
hand side of the video stream and image folder
and port number. Here's thread1.conf:
videodevice /dev/video0
text_left Camera #1
target_dir /var/www/images
webcam_port 8081
followed by thread2.conf:
videodevice /dev/video1
text_left camera #2
target_dir /var/www/images_cam2
webcam_port 8082
Displaying stored images and Flash video.
Now, that you have a motion-detecting device that you
can view from your local network, you can move on and make
adjustments so that you can see and view your webcam from
outside your local network. For some of you, this may be the
setup you desire; especially if your webcam and Raspberry Pi
are well hidden and unlikely to be tampered with.
However, in addition to storing data in your own home, we
will explain how to back up the files to another server for safe
keeping, just in case your SD card or hard drive fails, or
someone decides to steal, break or ruin your webcam (or
webcams) in your home.
With that said, we will move on and create a simple way to
organise the saved files that recorded the movement. The
first detail that will need to be changed is to save the images
and SWF files into a folder within the web directory. The root
web folder is located at /var/www/html or /var/www.
At this point, a light bulb should go on, since you have already
made several changes to the Motion setup. Reopen the
/etc/motion/motion.conf file and change the target
directory. By default, the target directory is located at /tmp/
motion. The new target is /var/www/images:
target_dir /var/www/images
Making it web friendly
Viewing recorded video and images
The whole idea here is to record, store and manage video
from a website or by using your IP address that was given to
you from your ISP. To find out your IP head to
http://whatismyipaddress.com. In order to broadcast video out,
you'll need to set up port forwarding on your router to enable
your network IP to use port 8081.
While you are at it, you may as well do the same for port
80 since this same network IP will be used to display web
pages from computers outside your local network; such as
your friend across town or a loved one overseas.
After you have made the previous changes to your router's
settings, try typing http://my_ipaddress_from_isp and
http://my_ipaddress_from_isp:8081. You should get the
same results as you did when you were checking your local IP.
The next step is to clean this up and view organised data
via web URLs, such as http://my_ipaddress_from_isp/
video.php, or to view it from another website using an iframe.
In order to show the webcam from the page video.php,
you just need to use an img tag with the network IP and port.
Have a look at the code below, which shows this in complete
detail and displays it with the default width and height as
specified in the motion.conf file:
<img src="http://192.168.0.106:8081/" width="320" height="240"/>
Now, let's imagine a scenario where you want to stream
this livecam from another website using an iframe. Well, all
you have to do is make an iframe from another page on the
different server. The simple one-liner is shown below.
<iframe style="width:320px; height:240px;" src="http://isp_ipaddress/video.php"></iframe>
The next set of code will explain how to display the files
that have been saved after motion is detected and recorded.
After making changes to motion.conf, type the command:
sudo service motion reload
so that it will now attempt to write any new files to the
/var/www/images folder. Now, you can easily access the
files created by the Motion service and display them on the
web just like any other typical web page. Although the path
has been changed in motion.conf, the images folder hasn't
been created yet. So, make it now.
The folder will be located within the www or html folder. If
it sounds like we're repeating ourselves here, it's because
you have been paying attention and are aware that the
Apache root web folder can be in one of two paths:
cd /var/www
mkdir images
By default, the www directory will be owned by the root
user and root group. You will want to change this so that
the files are owned by pi and the group is
www-data. To change this use:
cd /var
chown -R pi:www-data www
So, what we are up against now is to make this images
folder writable by the Motion service. As of right now, the
other files have adequate ownership and permissions, but
the images folder does not. Well, let's change that right now.
The code snippet below has three commands. The first
command will add the user motion to the www-data group.
In case you are wondering, www-data is an existing user and
group for the Apache server. The second command gives the
images folder permissions to the Motion user and www-data
group, and the final command makes the folder writable so
that the images and SWF files can magically appear in the
images folder:
usermod -a -G www-data motion
chown motion:www-data images
chmod 777 images
It's within the www folder where you can create a file
called shadowbox.php that will be used to display content on
the web. This file has all the code you need to display a
thumbnail from each recorded SWF video, the video itself and
the first JPEG image of motion.
The coding process to display the content goes like this:
The images directory gets scanned and an array of files is
created and sorted in descending order. These files are
multiple JPEG files and a single SWF file for each event.
All files have a name that starts with the event, followed by
the date and finally followed by the sequence.
The date sequence is year, month, day, hour, minutes and
seconds. After that, Motion just adds the sequence starting
from 01 only for the JPEG files.
In order to keep things simple and uncluttered, the
coding for the shadowbox.php file will only display a single
image and a single SWF file for each event. This simple script
also uses shadowbox to create popups of the first JPEG
image and flash SWF file for each event. Now, you can see all
the latest detected motion, down to the first recorded by
Motion. This file gives all of the results and here is where you
may want to customise the output.
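The real shadowbox.php is on the LXFDVD, but a stripped-down
sketch of the scan-and-sort idea described above might look like
this (the Shadowbox rel attribute and the paths are our
illustrative assumptions, not the LXFDVD code):
<?php
// Illustrative sketch of the approach, not the LXFDVD file itself
$dir = '/var/www/images';
$files = array_diff(scandir($dir), array('.', '..'));
rsort($files);   // names start event-date-sequence, so this is roughly newest first
foreach ($files as $file) {
    if (substr($file, -4) === '.swf') {
        // one link per event; Shadowbox turns these into popups
        echo '<a rel="shadowbox" href="/images/' . htmlspecialchars($file) . '">'
             . htmlspecialchars($file) . "</a><br />\n";
    }
}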
If you want to keep these web pages password protected
with Apache, you can open up the file
/etc/apache2/sites-available/default and make some minor changes.
If you look for the line <Directory /var/www/>, you can
add three simple lines to it. The code sample is shown below:
AuthType Basic
AuthName "Private Documentation Repository"
AuthUserFile /var/www/.htpasswd
Require valid-user
After that, you navigate to the /var/www folder and create
a blank file called .htpasswd, and create the username and
password with the simple command displayed below. You will
be prompted for a password twice. You simply add it followed
by Enter:
sudo htpasswd /var/www/.htpasswd add_username_here
Since the files can pile up pretty quickly and your disk
space can readily disappear, you may want to create a
purging system or back them up to another drive. Some
backup plans are discussed next.
Backup plans
One tip for determining your backup plan is to watch how
much space you routinely tend to use and develop a plan
based on those conditions. Simple. For example, if you go
through 1GB a week and you have an 8GB card you may want
to TAR the images folders, SCP the file to a remote server and
remove all files that are more than one-week old. Since the
files contain the year, month and date, it's a rather easy
process to delete the ones that have expired. The file called
purge.php is a cleanup file that is included on the LXFDVD
this month.
This file removes every file that's more than a couple of
days old. I will explain the code in a little more detail in a
moment. First off, the images folder is scanned and all of the
files become an array. That array of files then iterates through
a foreach loop. A few built-in PHP functions, such as strstr(),
preg_replace(), substr_replace(), substr(), date() and
unlink() are used to translate all the file names into actual
date timestamps that can be used for comparison.
Once a timestamp is made from the filename, it goes
through a simple if() statement and is compared against a
time that is set to two days ago from the current time. This
part is really easy to change since you just need to change the
number 2 to your desired amount of days in the past.
Once this criterion is met, the file is deleted with the unlink()
function. Since this system is only using files without a
database, it's rather elementary to move all of these files to
your backup location, and since this is copying and moving
files, two methods come to mind. One is using a package
such as rsync and the other is a simple method of
compressing the desired files and folders with ZIP or TAR and
shipping them to their new destination with SCP. An simple
example of SCP is shown below:
scp -P 22 /var/www/images.tar [email protected]:/home/pi/
images.tar
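To make the cleanup concrete, here's a rough sketch of the
purge logic described above (our own illustration rather than
the purge.php from the LXFDVD; the filename pattern assumes
the event-date-sequence naming Motion uses):
<?php
// Illustrative sketch of the purge approach described above
$dir = '/var/www/images';
$cutoff = strtotime('-2 days');   // change 2 to your desired number of days
foreach (array_diff(scandir($dir), array('.', '..')) as $file) {
    // Assumes names like 01-20150301120000-01.jpg (14 date digits: YmdHis)
    if (preg_match('/(\d{14})/', $file, $m)) {
        $d = $m[1];
        $stamp = mktime((int)substr($d, 8, 2), (int)substr($d, 10, 2),
                        (int)substr($d, 12, 2), (int)substr($d, 4, 2),
                        (int)substr($d, 6, 2), (int)substr($d, 0, 4));
        if ($stamp < $cutoff) {
            unlink($dir . '/' . $file);   // delete expired recordings
        }
    }
}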
So there we have it. You've just created your own video
surveillance and motion recording system that has several
options to suit your needs or that you can customise.
Although we've made a rough skeleton and files for you to
monitor your video, record files and make backups, you can
take this further if you want. Some simple suggestions would
be to add a responsive template to both the video.php
and shadowbox.php files, and polish up the content with a
little CSS magic.
On top of that, you could set up webcams at other sources
and have them viewable by the public or friends, depending
upon what you want to achieve. Have fun! LXF
All the files in the images folder are named with an event, followed by date and sequence.
Quick tip
You may want to include the time in the file name so backups will not be overwritten. In addition to that, you may want to run a cron job that does this procedure on a regular basis.
Nginx and Motion
Nginx doesn't ship ready to go out-of-the-box for
Motion as Apache does. In fact, after you install
Nginx there's a series of required steps that you
must do before you have a smooth operation.
In the case of Raspberry Pi, you can adjust
the worker_processes value from 4 to 1. You
can change this in the /etc/nginx/nginx.conf
file. This is recommended since the Pi only has a
single CPU core.
After that, you will want to change the default
web folder since the default points to /usr/
share/nginx/www. To change this, you open
the file called /etc/nginx/sites-enabled/default. The change is shown below so
the web folder is /var/www:
#root /usr/share/nginx/www;
root /var/www;
After the previous step, you can quickly install
PHP-FPM, which provides FastCGI. The command is below:
apt-get install php5-fpm
After that, you’ll need to open the file /etc/
nginx/sites-available/default and change a
few lines of code so it resembles the content
below. Basically, you’ll just need to remove a
few comments:
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-cgi alone:
# fastcgi_pass 127.0.0.1:9000;
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
We’re almost there. Now you’ll have to open
the file /etc/php5/fpm/php.ini and remove
another comment so that it looks like the line of
code below:
cgi.fix_pathinfo=1
Finally, make sure to restart Nginx after
making all of the changes. The command
/etc/init.d/nginx restart
will do the trick.
Tutorial Ghost
Build a custom theme for the blogging platform using Handlebars.js
Ghost: Create
custom themes
Steven Wu explains how to get started and build your own theme
for the popular open source blogging platform Ghost.
Our
expert
Steven Wu
is a freelance
Magento &
Wordpress web
developer. Follow
him on Twitter at
@designtodevelop
to learn about his
current projects.
Quick tip
All the files you need for this tutorial can be found on Steven's GitHub at http://bit.ly/BuildAGhostTheme.
We covered the fundamentals of Ghost previously
[See Tutorials, p74, LXF183], but in case you
missed it: Ghost is a free open source blogging
platform. Successfully funded through Kickstarter in May
2013, it surpassed its original request of only £25,000
achieving over £196,000 in funding. Started by John O’Nolan,
Ghost has a unique purpose in providing bloggers with a
simple interface that enables them to write and publish their
content without the hassle or distraction of the sheer
complexity of development on traditional platforms.
Ghost has been beautifully designed from the ground up.
Its clean and simplified UI enables you to quickly browse
through the archive so you spend less time managing your
blog and more time blogging. It has a smart writing screen
using Markdown with a real-time preview on the right-hand
screen and simple drag and drop functionality to add images
into place.
Ghost has three main core principles: First, it’s developed
for users rather than developers, unlike many blogging and
CMS platforms out there. Second, the platform has a MIT
licence so you can do what you like with this platform with
few limitations. Third, it’s made for love. Ghost is a non-profit
organisation, which means its motivations are to support
bloggers rather than satisfying investors. In this tutorial we
will show you how to install and set up Ghost locally and build
your first Ghost theme.
To begin building our Ghost theme, start within the Ghost
installation folder [see top, p81 for the installation guide].
Under content/themes, create a new theme directory called
mytheme, or something more imaginative – make sure it's in
lowercase without any spaces (hyphens are acceptable). This
will be the directory that houses our theme codebase. Within
this directory, create the following files and folders:
assets/
css/
normalize.css
screen.css
images/
js/
fonts/
partials/
header.hbs
default.hbs
index.hbs
post.hbs
index.hbs and post.hbs are the only files required
for a valid theme. Without either of these you'll receive an error.
Now in the Ghost dashboard, navigate to Settings >
General. Under Theme, select the new theme you just created
called mytheme. If it’s missing, you’ll need to go to the
terminal and restart Ghost. Click Save to activate this theme.
You won’t see anything in the frontend yet. This is because we
have yet to add any markup in our theme.
Using Handlebars
Ghost makes use of a templating language called Handlebars.
js (http://handlebarsjs.com), and its predefined
expressions make it simple to build and maintain Ghost
themes. Handlebars separates the templates from the raw
HTML for you. Bear in mind that, with Handlebars, you can’t
write functions or hold variables. Handlebars is developed
simply to display content where the expressions are
outputted. Handlebars expressions are wrapped with curly
brackets and look like this: {{author.name}}. This basically
looks up the author.name property and outputs it.
Let’s get our hands dirty and start creating our theme.
Open up the default.hbs file in your favourite text editor. This
is the base template and includes all the basic <html>,
<head> and <body> tags that will be used throughout your
Ghost website.
Installing Ghost
Ghost is a lightweight web application. It only
takes a matter of minutes to install locally and
get up and running. Ghost is a JavaScript
application built upon Node.js. You can install the
latter from your distribution's repositories – they
will almost certainly contain a version from the
required latest stable 0.10.x series. Debian/
Ubuntu/Mint users can get this with:
$ sudo apt-get install nodejs
If you want/need a later version then you can
head over to http://nodejs.org and download it,
or if you prefer a tidier install check out the
distro-specific guidelines at http://bit.ly/
InstallingNodejs. Ghost itself hasn't been
officially packaged on most distros, so go to
https://ghost.org/download to get the latest
version and uncompress it. To install Ghost in the
terminal, run:
$ cd /path/to/downloads/ghost
$ npm install --production
npm, the JavaScript package manager that comes
with Node.js, will now install all the
necessary dependencies for the production
environment. Once this is
completed you can now start Ghost in
development mode with:
$ npm start
In your web browser, navigate over to your
newly installed Ghost blog at
http://127.0.0.1:2368, and you can register your
administration login by going to
http://127.0.0.1:2368/ghost. When you visit
this URL, you'll notice Ghost notifies you of the
send email and Ghost URL being used. Configure
this by changing the base URL. Open up config.js
in your text editor and change the url
parameters in the Development or Production
stanzas. Once you've completed development,
shut down by pressing Ctrl+C in the terminal.
In this template we input our HTML doctype, basic meta
tags and head and body tags. (See default.hbs in tutorial
files on the GitHub here: http://bit.ly/BuildAGhostTheme.)
You’ll notice expression tags: {{! Responsive Meta Tags }}. Any
expressions preceded by an exclamation point within the
curly brackets are comments and will not be printed in the
final source code.
The {{ghost_head}} helper is used to output any system
scripts, styles and meta tags. The {{ghost_foot}} helper is
used to output scripts at the bottom of the document. The
Handlebars expression {{{body}}} is an important one. This
is where all your content will be displayed, which extends the
default template. The following {{body_class}} is used to
automatically generate CSS class names for targeting
specific pages:
<body class="{{body_class}}">
<div class="mytheme_page">
{{{body}}}
</div>
{{ghost_foot}}
</body>
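Assembled from the helpers described above, a minimal
default.hbs might look like the following sketch (the full
file is in the GitHub download; {{meta_title}} and {{asset}}
are standard Ghost helpers we're assuming here):
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    {{! Responsive Meta Tags }}
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>{{meta_title}}</title>
    <link rel="stylesheet" href="{{asset "css/normalize.css"}}" />
    <link rel="stylesheet" href="{{asset "css/screen.css"}}" />
    {{ghost_head}}
</head>
<body class="{{body_class}}">
    <div class="mytheme_page">
        {{{body}}}
    </div>
    {{ghost_foot}}
</body>
</html>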
Index.hbs
You can use the {{date}} helper to output the published date and use the format option to control the date format.
Now in our index.hbs (see this source code in the project
files on the GitHub), we use the Handlebars expression {{!<
default}} at the very head of this document to reference
our previous base template. This template will be used for our
homepage. We will want to style each blog post within the
foreach helper. By using the opening {{#foreach posts}} and
closing {{/foreach}} loop, anything inside this will display each
post with the markup.
To display the content of each blog post we use a
Handlebars expressions {{content}}. We can also limit the
word count by using the parameter words="100".
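Pulling the loop together, a skeletal index.hbs might read as
follows (a sketch rather than the GitHub file; {{title}}, {{url}}
and {{date}} are standard Ghost helpers we're assuming):
{{!< default}}
{{> header}}
<main>
{{#foreach posts}}
    <article class="mytheme_post">
        <h2 class="mytheme_post_title"><a href="{{url}}">{{title}}</a></h2>
        <section class="mytheme_post_content">
            {{content words="100"}}
        </section>
        <section class="mytheme_post_info">
            <span class="mytheme_date">{{date format="DD MMM YYYY"}}</span>
            <span class="button"><a href="{{url}}">Read More</a></span>
        </section>
    </article>
{{/foreach}}
</main>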
You'll notice all of the class names are prefixed with
mytheme_. This is a recommended practice when building a
Ghost theme. Ghost will automatically assign particular class
names and more specifically IDs to certain elements in your
theme. You’ll want to prevent clashes and consider your
scope of class names.
Partials
Typically we can insert our header markup just below the {{!<
default}}, but one advantage of Handlebars templates is
hierarchical support, whereby one template can extend
another. This includes the use of partials. This helps to
eliminate repetition of code and encourage reusability. We can
separate our header into a partial template.
Within the partials directory open up the header.hbs.
In Ghost dashboard Settings, you can upload your own blog
logo and blog cover image. We’ll use an if statement to check
whether a blog cover image exists. If so, we’ll output it as a
background image:
{{#if @blog.cover}}
style="background-image: url({{@blog.cover}})"
{{/if}}
This time, we’ll check if a blog logo is available.
{{#if @blog.logo}}
<a class="blog-logo" href="{{@blog.url}}">
<img src="{{@blog.logo}}" alt="Blog Logo" />
</a>
{{/if}}
The @blog global data accessor has access to global
settings in Ghost we can output in our theme.
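Combined, the partial might look like this sketch (@blog.title
and @blog.description are two more global settings we're
assuming from the same accessor):
<header class="mytheme_header" {{#if @blog.cover}}style="background-image: url({{@blog.cover}})"{{/if}}>
    {{#if @blog.logo}}
    <a class="blog-logo" href="{{@blog.url}}">
        <img src="{{@blog.logo}}" alt="Blog Logo" />
    </a>
    {{/if}}
    <h1 class="blog-title">{{@blog.title}}</h1>
    <h2 class="blog-description">{{@blog.description}}</h2>
</header>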
Now let’s dive into the fun part of styling our theme. In our
theme, we’ve linked to normalize.css for our HTML5-ready
CSS reset. Within the screen.css is where we’ll input all our
custom theme styles. We’ll then add some global styles,
followed by styling our header and setting a max-width to
prevent our layout from expanding beyond a certain pixel size:
.mytheme_page {
max-width: 980px;
margin: 0 auto;
}
.mytheme_header {
padding:20px 0;
text-shadow: 2px 2px 2px rgba(26, 26, 26, 0.1);
text-align: center;
color: #2f3727;
}
Now style each blog post within its article container:
main article {
margin: 30px 0;
border-left:1px solid #DBDBDB;
border-right:1px solid #DBDBDB;
background-color: #FFFFFF;
}
.mytheme_post_content {
padding: 0 20px;
}
.mytheme_post_title {
margin: 0 0 20px 0;
padding: 10px;
font-size: 2em;
letter-spacing: 2px;
text-align: center;
text-shadow: 2px 2px 2px rgba(26, 26, 26, 0.2);
color: #FFFFFF;
background-color: #8ACD36;
}
.mytheme_post_title a {
text-decoration: none;
color: #FFFFFF;
}
.mytheme_main_img img {
width: 100%;
max-width: 100%;
border: 0;
}
Select the newly created theme in the dashboard to activate it. You may need to restart Ghost in the terminal to pick up the theme.
After installing Ghost, you'll see your development URL and a warning about setting up an email service.
Place the date on the left-hand side and the Read More
button opposite. Give this link the presentation of a button:
.mytheme_post_info {
overflow: auto;
padding: 0 20px;
background-color: #98C148;
}
.mytheme_date {
float: left;
padding-top: 20px;
color: #FFFFFF;
}
.button {
float: right;
padding: 20px 0;
}
.button a {
padding: 5px;
transition: ease .3s;
text-decoration: none;
color: #FFFFFF;
background-color: #39A9DA;
}
.button a:hover {
background-color: #199ED9;
}
We touched upon the key aspects of developing a Ghost
theme. Hopefully this insight has opened up possibilities to
creating your own personalised theme. You’ll see that building
a Ghost theme is very simple to pick up with the pre-defined Handlebars expressions. Don't forget to download the
tutorial’s accompanying files from the GitHub: http://bit.ly/
BuildAGhostTheme. Also check out the documentation for
more information to help you build and customise your own
theme by visiting: http://docs.ghost.org/themes. LXF
Preparing post.hbs
Having only set up the homepage template, the
post.hbs controls the display of each single blog
post page. Again on this template (see the post.
hbs file in the project folder, which you can
download from GitHub here: http://bit.ly/
BuildAGhostTheme) we use the same {{!<
default}} and {{> header}} Handlebars
expressions. This time we use the opening
{{#post}} and {{/post}} expressions to display a
single blog post. While in the single post context
we have access to the author data, including the
author name and bio.
We can display the author details by simply
adding the code displayed below:
<section class="author">
<h3>Written By:</h3>
<h4>{{author.name}}</h4>
<p>{{author.bio}}</p>
</section>
We can now apply some CSS styling to our
single blog post. It’s recommended that you
actually place all your single-post styles in a
separate CSS file. It's important to do this
because, in the near future, Ghost will be
unveiling a new split screen feature, which will
load all the custom theme styles in the
admin interface.
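Pieced together, a minimal post.hbs might look like this
sketch (the real file is in the GitHub download):
{{!< default}}
{{> header}}
{{#post}}
<main>
    <article class="mytheme_post">
        <h1 class="mytheme_post_title">{{title}}</h1>
        <section class="mytheme_post_content">
            {{content}}
        </section>
        <section class="author">
            <h3>Written By:</h3>
            <h4>{{author.name}}</h4>
            <p>{{author.bio}}</p>
        </section>
    </article>
</main>
{{/post}}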
Python: Dive
into version 3
Jonni Bidwell investigates probably one of the least loved and
disregarded sequels in the whole history of programming languages.
Our
expert
Jonni Bidwell
is sad that all his
print statements
are now at least
two characters
longer, but happy
that Python 3 is
better than the
third (and second)
Matrix movie.
Way back in December 2008 Python 3.0
(alternatively Py3k or Python 3000) was released,
yet here we are, seven years later, and most
people are still not using it. For the most part, this isn't
because Python programmers and distribution maintainers
are a bunch of laggards, and the situation is very different to,
for example, people's failure/refusal to upgrade (destroy?)
Windows XP machines. For one thing Python 2.7, while
certainly the end of the 2.x line, is still regularly maintained,
and probably will continue to be until 2020. Furthermore
since many of the major Python projects (also many, many
minor ones) haven't been given the 3 treatment, anyone
relying on them is forced to stick with 2.7.
Early on, a couple of big projects – NumPy and
Django – did make the shift, and the hope was that
other projects would follow suit, leading to an
avalanche effect. Unfortunately, this didn't happen
and most Python code you find out there will fail
under Python 3. With a few exceptions, Python 2.7 is
forwards-compatible with 3.x, so in many cases it's
possible to come up with code that will work in both, but still
programmers stick to the old ways. Indeed, even in this
esteemed publication, this author, whether by habit,
ignorance or affection for the past, continues to provide code
that is entirely incompatible with Python 3. We won't do that
in this article. We promise.
So let's start with what might have been your first ever
Python program:
print 'Hello world'
Guess what? It doesn't work in Python 3 (didn't you just
promise...). The reason it doesn't work is that print in Python
2 was a statement; in Python 3 print is a function, and
functions are, without exception, called with brackets.
Remember that functions don't need to return anything
(those that don't are called void functions), so print is now a
void function which, in its simplest form, takes a string as
input, displays that string as text to stdout, and returns
nothing. In a sense you can pretend print is a function in
Python 2, since you can call it with brackets, but a decision
was made to offer its own special syntax and a bracketless
shorthand. As an aside, this is rather like the honour one
receives in mathematics when something named after its
creator is no longer capitalised, eg abelian groups. But these
kind of exceptions are not a part of the Python canon
("Special cases aren't special enough to break the rules"),
so it's brackets all the way. On a deeper level, having a
function-proper print does allow more flexibility for
programmers – as a built-in function it can be replaced,
which might be useful if you're into defying convention or
making some kind of Unicode-detecting/defying wrapper
function. In sum, your first Python program should have been:
The Greek kryptos graphia, which translates to 'hidden writing' [see Cryptography Old And New, p50, LXF189] followed by a new line using the correct script.
The Unicode revolution
Traditionally, text was encoded in ASCII, in which
each character is encoded as a 7-bit codepoint,
which gives you 128 characters to play with.
Some of these are invisible teletype codes
(ASCII originated in the 1960s) and once we've
counted the familiar alphanumeric characters
there isn't really much room left. Since we like
things to be bytes, several 256-character
extensions of the ASCII encoding emerged. The
most notorious of these is ISO-8859-1,
sometimes called Latin-1. This widely-used
character set (and the related Windows-1252)
contains almost all the accents required for the
Latin-scripted languages, as well as the
characters used in the romanisation of other
languages. As a result, it’s fairly common in the
western hemisphere, but doesn't really solve the
problem elsewhere.
The correct solution would be a standard
encoding (or maybe a couple of them) which
accounts for as many as possible of the set of
characters anyone on Earth might conceivably
wish to type. Obviously this will require many
more than 256 characters, so we'll have to do
away with one character encoding to one byte
(hence the divergence of codepoints and byte
encodings), but it's for a greater good.
Fortunately all the wrangling, tabulating and
other rigmarole has been done, and we have an
answer: Unicode. This accounts for over 100,000
characters, bidirectional display order, ligature
forms, and more. Currently there are two
encodings in use: UTF-8 which uses 1 byte for
'common' characters (making it entirely
backwards compatible with ASCII) and up to four
bytes for the more cosmopolitan ones, and UTF-16 which uses two bytes for some characters and
four bytes for others. Unicode has been widely
adopted, both as a storage encoding standard
and for internally processing tests. The main
raison d'etre of Python 3 is that its predecessor
did not do the latter.
print ('Hello world')
This is perfectly compatible with Python 2 and 3. If you
were a fan of using a comma at the end of your print
statements (to suppress the newline character), then sad
news: This no longer works – instead we use the end
parameter, which by default is a new line. For example
print ('All on', end=" ")
print ('one line')
does just that.
Print in Python 3
A significant proportion of Python programs could be made
compatible to 3 just by changing the print syntax, but there
are many other, far less trivial, things that could go wrong. To
understand them, we must first be au fait with what really
changed in Python 3.
Most of the world doesn't speak English, in fact, most of
the world doesn't even use a Latin character set, even those
regions that do tend to use different sets of accents to
decorate the characters. As a result, besides the ol' ASCII
standard, numerous diverse and incompatible other
character encodings have emerged. Each grapheme (an
abstraction of a character) is assigned a codepoint, and each
codepoint is assigned a byte encoding, sometimes identically.
In the past, if you wanted to share a document with foreign
characters in it then plain ASCII wouldn't help – you could use
one of the alternative encodings, if you knew the people you
were sharing it with could do the same, but in general you
needed to resort to a word processor with a particular
font, which just moves the problem elsewhere. Thankfully, we
now have a widely adopted standard: Unicode (see The
Unicode revolution box, above) that covers all the bases, and
is backwards compatible with ASCII and (as far as codepoints
are concerned) its Latin-1 extension. We can even have
Unicode in our domain names, although internally these are
all still encoded as ASCII, via a system called Punycode.
Python 2 is far from devoid of Unicode support, but its
handling of it is done fairly superficially (Unicode strings are
sneakily re-encoded behind the scenes) and some third-party
modules still won't play nice with it. Strings in Python 2 can
be of type str (which handles ASCII fine, but will behave
unpredictably for codepoints above 127) or they can be of
type unicode. Strings of type str are stored as bytes and,
when printed to a terminal, are converted to whichever
The PyStones benchmark will likely be slower in Python 3, but the same won’t
be true for all code. Don’t be a Py3k refusenik without first trying your code.
encoding your system's locale specified (through the LANG
and LC_* environment variables in Linux). For any modern
distro, this will probably be UTF-8, but it's definitely not
something you should take for granted.
The unicode type should be used for textual intercourse –
finding the length of, slicing or reversing a string. For example,
the Unicode codepoint for the lowercase Greek letter pi is
03c0 in hex notation. So we can define a unicode string from
the Python console like so, provided our terminal can handle
Unicode output and is using a suitable font:
>>> pi = u'\u03c0'
>>> print(pi)
π
>>> type(pi)
<type 'unicode'>
>>> len(pi)
1
However, if we were to try this on a terminal without
Unicode support, things will go wrong. You can simulate such
a scenario by starting Python with:
$ LC_ALL=C python
Now when you try and print the lowercase character pi
you will run into a UnicodeEncodeError. Essentially, Python
is trying and failing to coerce this to an ASCII character (the
Quick tip
Arch Linux is one of few distributions to use Python 3 by default, but it can live happily in tandem with its predecessor (available in the python2 package).
only type supported by the primitive C locale). Python 2 also
tries to perform this coercion (regardless of current locale
settings) when printing to a file or a pipe, so don't use the
unicode type for these operations, instead use str.
The str type in Python 2 is really just a list of bytes
corresponding to how the string is encoded on the machine.
This is what you should use if you're writing your strings to
disk or sending them over a network or to a pipe. Python 2
will try and convert strings of type unicode to ASCII (its
default encoding) in these situations, which could result in
tears. So we can also get a funky pi character by using its
UTF-8 byte representation directly. There are rules for
converting Unicode codepoints to UTF-8 (or UTF-16) bytes,
but it will suffice to simply accept that the Pi character
encodes to the two bytes CF 80 in UTF-8. We can escape
these with an \x notation in order to make Python
understand bytes:
>>> strpi = '\xCF\x80'
>>> type(strpi)
<type 'str'>
>>> len(strpi)
2
So π apparently now has two letters. The point is: if your
Python 2 code is doing stuff with Unicode characters, you'll
have to have all kinds of wrappers and checks in place to take
account of the localisation of whatever machine may run it.
You'll also have to handle your own conversions between str
and unicode types, and use the codecs module to change
encodings as required. Also if you have Unicode strings in
your code, you'll need to add the appropriate declaration at
the top of your code:
# -*- coding: utf-8 -*-
The main driving force for a new Python version was the
need to rewrite from the ground up how the language dealt
with strings and their representations in order to simplify this
process. Some argue that it fails miserably here (see for
example, Armin Ronacher's rant on his blog http://bit.ly/
UnicodeInPython3), but it really depends on your purposes.
Python 3 does away with the old unicode type entirely, since
everything about Python 3 is now Unicode-orientated. The str
type now stores Unicode codepoints, the language's default
encoding is UTF-8 (so no need for the -*- coding decorator
above) and the new bytes object stores byte arrays, like the
old str type. The new str type will need to be converted to
bytes if it's to be used for any kind of file I/O, but this is trivial
via the str.encode() method. If you're reading Unicode text
files you'll need to open them in binary 'rb' mode and convert
the bytes to a string using the converse bytes.decode()
method (see the picture for details, see bottom, p84).
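For instance, the round trip between the two types looks like this:
>>> pi = '\u03c0'              # str stores Unicode codepoints
>>> data = pi.encode('utf-8')  # bytes stores the encoded form
>>> data
b'\xcf\x80'
>>> len(pi), len(data)         # one character, two bytes
(1, 2)
>>> data.decode('utf-8') == pi
True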
Python 3, however, brings about many other changes
besides all this Unicode malarkey, some of these are just the
removal of legacy compatibility code (Python 3, unlike 2.7,
doesn't need to be compatible with 2.0), some of them
provide new features and some of them force programmers
to do things differently.
But wait, there's more: For example, functions can now be
passed keyword-only arguments even if they use the *args
syntax to take variable length argument lists and catching
exceptions via a variable now requires the as keyword. Also
you can no longer use the ugly <> comparison operator to
test for inequality, instead use the much more stylish !=
which is also available in Python 2.
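Here are those three changes in one snippet (Python 3 syntax;
only the != line also runs under 2.7):
def join_all(*args, sep=', '):       # keyword-only argument after *args
    return sep.join(str(a) for a in args)

print(join_all(1, 2, 3, sep=' | '))  # 1 | 2 | 3

try:
    1 / 0
except ZeroDivisionError as err:     # catching via a variable needs 'as'
    print('caught:', err)

print(2 != 3)                        # <> is gone; != works everywhere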
Automating conversion with 2to3
Aleph, beth, gimel… The 2to3 program shows us how to refactor our Python 2 code for Python 3. Shalom alekhem.
For reasonably small projects, and those that aren't going to
encounter any foreign characters, it’s likely that your Python
2 code can be automatically made Python 3 compatible using
the 2to3 tool. This helpful chap runs a predefined set of fixers
on your Python 2 code, with the goal of emitting bona fide
Python 3 code. Using it can be as simple as
The division bell
One change in Python 3 which has the potential
to flummox is that the behaviour of the division
operator / is different. For example, here's an
excerpt from a Python 2 session
>>> 3/2
1
>>> 3./2
1.5
>>> 3.//2
1.0
which shows the first command operating on
two ints, and returning an int, thus in this case
the operator stands for integer division. In the
second example, the numerator is a float and
the operator acts as floating point division,
returning what you'd intuitively expect half of
three to be. The third line uses the explicit floor
division operator // which will return the
rounded-down result as a float or an int
depending on the arguments. The latter
operator was actually backported to 2.7, so its
behaviour is the same in Python 3, but the
behaviour of the classic division operator has
changed: if both numerator and denominator
are ints, then an int is returned if one divides the
other, otherwise a float is returned with the
correct decimal remainder. If at least one of the
arguments is a float (or complex), then the
behaviour is unchanged.
This means the / operator is now as close to
mathematical division proper as we can hope,
and issues involving unexpected ints will no
longer cause harm.
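For comparison, the equivalent Python 3 session:
>>> 3/2     # true division now, even for two ints
1.5
>>> 4/2     # a float even when the division is exact
2.0
>>> 3//2    # floor division, unchanged from 2.7
1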
The popular matplotlib module has been Python 3 compatible since v1.2, for all your graphing and plotting requirements.
$ 2to3 mypy2code.py
which will output a diff of any changes against the original file.
You can also use the -w option to overwrite your file, don't
worry – it will be backed up first. Some fixers will only run if
you specify them explicitly, using the -f option. An example is
buffer which will replace all buffer types with memoryviews.
These two aren't entirely compatible, so some tweaking
might be necessary in order to successfully migrate.
Using 2to3 can save you a whole heap o' effort, since
searching and replacing print statements manually is a
drudgerous task. The pictured (over on the left there, page
86) example shows the changes to a simple three-line
program – the unichr() function is now chr(), since Unicode
is now implicit, and the print line is reworked, even if it uses
% to format placeholders.
Parlez-vous Python Trois?
Another option is to create 'bilingual' code that is compatible
with both Python 2 and Python 3. While you should really only
target one specific Python version, this middle ground is very
useful when you're porting and testing your code: You might
still need to rework a few things in Python 2 while still enjoying
the new features and learning the new syntax. Many Python 3
features and syntaxes have already been backported to 2.7,
and many more can be enabled using the __future__ module.
For example, you can get the new-fangled print syntax by
using the following:
>>> from __future__ import print_function
This is used with the from in this way, and __future__
doesn't behave like a standard module, instead it acts as a
compiler directive which in this case provides the modern
print syntax. We can likewise import division to get the new
style division, or unicode_literals to make strings Unicode by
default. There is another module, confusingly called future
which isn't part of the standard library, but can help ease
transition issues.
When Python 3.0 was released, it was generally regarded
as being about 10% slower than its predecessor. This was
not surprising, since speed was not really a priority for the new
version and many special-case optimisations were removed
in order to clean up the code base. Now that we're up to 3.4
(3.5 is scheduled for final release in September 2015), it
would be nice if performance had been improved.
Unfortunately, this hasn’t been the case, which you can verify
for yourself using the PyStone benchmark. We tested it [see
p85] on an aging machine in the office, which has now come
back from the dead twice and hence possesses supernatural
powers far beyond what its dusty 2.3GHz processor would
suggest. But don't be misled; PyStone tests various Python
internals of which your code may or may not make extensive
use. It’s important to test your code in both versions to get a
more accurate picture. You can always use Cython [see
Coding Academy, p84, LXF193] to speed up code that is
amenable to C translation (loops, math, array access), or the
bottleneck module.
Guido van Rossum, the author of Python, says Python 2.7
will be supported until 2020, but that's no excuse to postpone
learning the new version now. Python 3 adoption is
increasing, so you won't be alone. Half the work is retraining
your fingers to add parentheses. LXF
Julia: Dynamic
programming
Mihalis Tsoukalos explains the necessary things that you need to know to
start programming in the fast and dynamic Julia language.
Our
expert
Mihalis
Tsoukalos
is a Unix admin,
programmer,
mathematician
and DBA. You can
reach him at www.
mtsoukalos.eu
and @mactsouk.
The Julia shell is where you do most of your experimentation. You can see that integer division results in a floating point result.
Julia (http://julialang.org) is a fast dynamic language
for technical computing designed and developed by Jeff
Bezanson, Stefan Karpinski, Viral Shah and Alan
Edelman. It's also a functional language designed to address the
requirements of high-performance numerical and scientific
computing while also being capable of doing general purpose
programming. Julia uses an LLVM-based just-in-time (JIT)
compiler to run its programs. The compiler combined with
the design of the language enables it to approach and often
match the performance of C code. You can also call C
functions directly from Julia without the need for wrappers,
special APIs or other nasty tricks.
Julia's design and implementation is driven by the
following three principles: first, it has to be fast; second, it has
to be a dynamic programming language; and third, it also has
to be expressive. The core of Julia is written in C but the rest
of the language is written in Julia itself, which means you can
see how it works behind the scenes and easily make changes.
Even in 'untyped' programming languages, there's some
kind of type used by the compiler. Julia's view is that if the
compiler uses types, why not enable the programmer to
use types too, if they so choose? In Julia, it's up to the
programmer whether to mention the type of a variable. However, there
are situations where you need to talk about types, and for
those Julia provides a good, sophisticated type system.
As you can probably tell, Julia is attempting to replace R,
Matlab, Octave and the NumPy module of Python in
arithmetic computing by counting on faster performance.
The reaction from users of those competing tools has been
good so far, and we'll show you why.
Installing Julia
On Ubuntu, you can install Julia with sudo apt-get install
julia. After installation, you can check the install with
$ julia -v
julia version 0.2.1
and the Julia version of "Hello World!" is:
println("Hello World!")
You can save the program and execute it from the Linux
shell as follows:
$ julia hw.jl
Hello World!
The Julia shell is the best place to explore the language
and try to understand its features (pictured above are some
simple interactions with the Julia shell). You can exit the Julia
shell by pressing Control+d or by typing quit().
You can also run Julia code from the shell without saving it
first. This is very handy for small programs:
$ julia -e 'for x in ARGS; println(x); end' Hello World
Hello
World
The ~/.juliarc.jl file used by Julia is similar to the .bashrc
file. It executes Julia code whenever the Julia shell is executed.
$ cat ~/.juliarc.jl
println("Using the .juliarc.jl file!")
Julia is a case-sensitive programming language. Variable
names must begin with a letter (A-Z or a-z), underscore, or a
subset of Unicode code points greater than 00A0. As a result
88a is an invalid variable name whereas _99 is valid.
Julia uses many types of integer variables, both signed
(Int8, Int16, Int32, Int64 and Int128) and unsigned (Uint8,
Uint16, Uint32, Uint64 and Uint128) as well as bool variables
(Bool) and single chars (Char). Additionally, it supports three
types of floating point variables: Float16, Float32 and Float64.
The number at the end of a type denotes the number of bits
required for that particular type. Julia also defines the Int and
Uint types, which are aliases for the signed and unsigned
native integer types of the system that Julia runs on. So, on a
64-bit system Int is an alias for Int64.
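You can check what type Julia has picked with the typeof() function. A quick sketch (the outputs are what a 64-bit machine running a 0.3-era Julia would print):
julia> typeof(42)
Int64
julia> typeof(3.14)
Float64
julia> typeof(int8(42))   # int8() converts to an 8-bit integer
Int8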
Getting to know her
Julia also supports fractions with the help of the // symbol.
So 4//5 is a fraction and isn't converted to 0.8. If the
numerator and the denominator have common factors, Julia
will automatically simplify the fraction; otherwise, it will leave
it as it is. You can do calculations with fractions using the
standard operators. Julia also supports complex numbers.
The global constant im is the complex number i. So, 4 - 2im
is a complex number. You can do complex number arithmetic
using the standard operators.
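For example, a brief REPL session with fractions and complex numbers might look like this (outputs shown are what a 0.3-era Julia prints):
julia> 4//6   # the common factor of 2 is removed automatically
2//3
julia> 1//2 + 1//3
5//6
julia> (4 - 2im) * (1 + 1im)
6 + 2im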
Now, let us see some Julia code. The code for calculating
Fibonacci numbers is the following:
$ cat fibonacci.jl
fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)
The fib() function takes just one argument as its input.
As you can see, you don’t have to declare its type. Should you
wish to declare it, you can do it as follows:
fib(n::Int8) = n < 2 ? n : fib(n - 1) + fib(n - 2)
If you save your Julia code in a separate file, you can use:
julia> include("fibonacci.jl")
fib (generic function with 1 method)
julia> fib(10)
55
julia> fib("test")
ERROR: `isless` has no method matching
isless(::ASCIIString, ::Int64)
in fib at /Users/mtsouk/docs/article/working/Julia.LXF/code/
fibonacci.jl:2
julia> fib(' ')
2178309
Profile module
The Profile module provides tools that help developers to
improve their code. The module takes measurements on
running code and helps you to understand how much time is
being spent on each executed line of code.
You run the function you want to profile once in order to
compile it, then run it again with the @profile macro in
front of it. Last, you execute the Profile.print() command to
get the output.
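Putting that together, a minimal profiling session for the fib() function defined earlier might look like this (a sketch; the actual report depends on your machine):
julia> fib(30)            # first run compiles fib()
832040
julia> @profile fib(30)   # second run collects the samples
832040
julia> Profile.print()    # print the line-by-line report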
julia> fib(32)
2178309
julia> @elapsed fib(40)
0.883197884
Although you can't find the Fibonacci number for the
"test" string, you can find the Fibonacci number for a single
character; Julia automatically uses the ASCII code of the
character as the input to the fib() function! The @elapsed
macro evaluates an expression and returns the number of
seconds it took to execute as a floating-point number,
discarding the resulting value. It is very useful for
benchmarking your code.
Two other useful macros are called @linux and @unix.
They can help you identify the operating system your
program runs on and act accordingly:
julia> @linux? println("Linux!") : println("Not Linux!")
Not Linux!
julia> @unix? println("UNIX!") : println("Not UNIX")
UNIX!
Despite its simplicity, Julia is also a programming language
for the advanced programmer. The code_native() function
enables you to look at the Assembly code the JIT compiler
generates for the println() function [pictured, bottom of
p90]. If you are comfortable with Assembly code, you can
clearly understand that Julia has the potential of being almost
as fast as C code, despite the fact that it's primarily a
dynamically typed language. This kind of optimised code
can’t be generated for Python or Ruby easily, because the JIT
compiler can’t know the type of arguments being passed.
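To try this yourself, pass code_native() a function and a tuple of argument types, for example (the Assembly you get back will vary with your CPU):
julia> code_native(+, (Int64, Int64))   # Assembly for 64-bit integer addition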
Calling a C function is done using the ccall() function.
The following code calls the getpid() system call to get the
process ID of the calling process:
julia> pID = ccall((:getpid, "libc"), Int32, ())
44763
julia> typeof(pID)
Int32
Quick tip
Julia supports TCP sockets. Just running listen(1234) creates a socket that listens to port number 1234 on localhost. You can then connect to it in Julia by typing connect(1234). If you assign client = connect(1234), you should close the socket afterwards by typing close(client).
Why was Julia created?
As the creators of Julia say: "We want a language
that’s open source, with a liberal licence. We want
the speed of C with the dynamism of Ruby. We
want a language that’s homoiconic, with true
macros like Lisp, but with obvious, familiar
mathematical notation like Matlab. We want
something as usable for general programming as
Python, as easy for statistics as R, as natural for
string processing as Perl, as powerful for linear
algebra as Matlab, as good at gluing programs
together as the shell. Something that is dirt
simple to learn, yet keeps the most serious
hackers happy. We want it interactive and we
want it compiled. (Did we mention it should be as
fast as C?)" Not asking for much then. Julia also
wants to make the development of numerical
code better, faster and more efficient than with
other programming languages. The library,
largely written in Julia itself, also integrates
mature C and Fortran libraries for linear algebra,
random number generation, signal processing,
and string processing. In addition, the Julia
community provides a large number of external
packages through the built-in package manager.
Most operators in Julia are just functions; therefore 1+2+3
is equivalent to writing +(1,2,3). You can also execute
expressions, such as +(-(1,2),3). Anonymous functions are
also supported in Julia and their main usage is for passing
them as arguments to other functions. Functions in Julia can
return multiple values or other functions.
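Both ideas are easy to try in the REPL (outputs as a 0.3-era Julia prints them):
julia> +(1, 2, 3)   # the + operator called as an ordinary function
6
julia> map(x -> x^2, [1, 2, 3])   # an anonymous function passed as an argument
3-element Array{Int64,1}:
 1
 4
 9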
The following program reads an integer from the user and
returns its factors:
function factors(n)
    f = [one(n)]
    for (p,e) in factor(n)    # factor() yields prime => exponent pairs
        f = reduce(vcat, f, [f*p^j for j in 1:e])
    end
    return length(f) == 1 ? [one(n), n] : sort!(f)
end
The result of the factors() function is an array that
contains the factors of n. You can use the input()
function to get user input:
function input(prompt::String="")
    print(prompt)
    chomp(readline())
end
You can use the factors() function in combination with the
input() function to automatically process user input. The
trick can be done as follows:
julia> factors(int(input("Give me a number: ")))
So the readline() function enables you to get user input
and the print() or the println() functions enable you to print
output on screen. The difference between the two is
that println() automatically appends a newline character.
Julia has error handling capabilities. The following output
shows how to handle division by zero errors when doing
integer division:
julia> try
div(100, 0)
catch x
println(typeof(x))
end
DivideError
Julia treats arrays as first class citizens with powerful
functionality. You can easily generate a new vector with 100
random integer elements and name it myVec using the
myVec = rand(Int32, 100) command. The first element of
myVec can be accessed by using the myVec[1] notation;
therefore the first element has an index number of 1, not 0.
Similarly, its last element is accessed as myVec[100]; not
myVec[99]. The last element of a list or array can also be
accessed using the end symbol as its index. So far it may
appear that arrays in Julia are the same as in any other
programming language. Where Julia truly excels is in its
multidimensional array capabilities:
julia> anArrayA = [-1 -2 -3; 1 2 3; 10 20 30]     # command 1
julia> anArrayB = [1 2 3; -1 -2 -3; -10 -20 -30]  # command 2
julia> arrayAPlusB = anArrayA + anArrayB          # command 3
julia> productAB = anArrayA * anArrayB            # command 4
The first command creates an array named anArrayA
with two dimensions. The second command creates a second
array named anArrayB. The third command adds the two
arrays and stores the result in a variable called arrayAPlusB.
Similarly, the fourth command finds the product of the two
arrays and saves it using a new variable. As you can see, you
can do matrix calculations using the standard operators.
The anArrayA .+ 2 command adds 2 to all elements of
the array! The sum(anArrayA, 1) command calculates the
sum over the first dimension of the array: (-1 + 1 + 10), (-2 + 2
+ 20) and (-3 + 3 + 30).
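In the REPL, those two commands produce the following (using the anArrayA defined above):
julia> anArrayA .+ 2
3x3 Array{Int64,2}:
  1   0  -1
  3   4   5
 12  22  32
julia> sum(anArrayA, 1)
1x3 Array{Int64,2}:
 10  20  30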
Julia also supports sparse matrices. A sparse matrix is a
matrix most of whose values are equal to zero. To reduce
storage costs, such matrices are represented by storing only
the values and coordinates of their non-zero elements.
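Here's a minimal sketch using the sparse() and full() functions found in 0.3-era Julia:
julia> S = sparse([1, 2], [1, 2], [5.0, 7.0])   # non-zeros at (1,1) and (2,2)
2x2 sparse matrix with 2 Float64 entries:
	[1, 1]  =  5.0
	[2, 2]  =  7.0
julia> full(S)   # expand back into an ordinary dense matrix
2x2 Array{Float64,2}:
 5.0  0.0
 0.0  7.0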
Strings in Julia
Strings are defined with double quotes and can contain any
Unicode character. Single characters are defined using single
quotes. There exist the usual functions for converting a string
to uppercase or lowercase, called uppercase() and
lowercase() respectively. The length() function can be used
for getting the length of a string. String concatenation is also
supported with the help of the * operator, instead of the +
operator. Alternatively, you can concatenate strings using the
string() function.
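For example:
julia> s = "Hello" * " " * "World"   # note: * concatenates, not +
"Hello World"
julia> uppercase(s)
"HELLO WORLD"
julia> length(s)
11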
Searching a string for a char or another string can be done
with the search() function. Similarly, the replace() function,
which takes three arguments, replaces a string with another.
The search() function also supports regular expressions. Three other
handy functions are repeat(), matchall() and eachmatch()
and their use is illustrated in the following examples:
julia> thisx3 = repeat("This ", 3)
"This This This "
julia> "This" ^3 # The same as repeat("This",3)
"ThisThisThis"
Seeing the Assembly code of your Julia code is easy. Here's what Assembly makes of the println() function, for instance.
External programs
Julia also enables you to run external programs and get their
output. You can execute the ls command by typing run(`ls`).
You can assign its output to a new variable by typing
lsOutput = readall(`ls`). If you want to run the command held
in a variable in your shell, you can do it by putting a $ in front
of the variable name, for example: run(`$command`).
Please be warned that this is not a safe way to write a
program!
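Putting those pieces together in one short session (a sketch; the output depends on your current directory, and you should never interpolate untrusted input into a command):
julia> run(`ls`)                  # output goes straight to the terminal
julia> lsOutput = readall(`ls`)   # capture the output as a string instead
julia> command = "ls"
julia> run(`$command`)            # run the command held in the variable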
julia> r = matchall(r"[\w]{4,}", thisx3)
3-element Array{SubString{UTF8String},1}:
"This"
"This"
"This"
julia> r = eachmatch(r"[\w]{4,}", thisx3)
RegexMatchIterator(r"[\w]{4,}","This This This ",false)
julia> i = 1 # have to declare it first
1
julia> for word in r
println("Match ", i)
i += 1 # i++ is not supported
end
Match 1
Match 2
Match 3
The matchall() function returns a vector with
RegexMatches for each match whereas the eachmatch()
function returns an iterator over all matches.
Implementing Bubble Sort
Julia's Bubble Sort algorithm, as you can see below, has a
slightly different format from the C for loop:
function bubbleSort(inputVec)
    elements = length(inputVec)-1
    for vecIndex in elements:-1:1
        for pass in 1:vecIndex
            # Swap them if needed
            if inputVec[pass] > inputVec[pass + 1]
                tmp = inputVec[pass]
                inputVec[pass] = inputVec[pass + 1]
                inputVec[pass + 1] = tmp
            end
        end
    end
    return inputVec
end
In order to get the vecIndex numbers in reverse order, it is
necessary to use the elements:-1:1 format. Comments in
Julia begin with the hash character.
You can now use the bubbleSort() function to sort an
array called myVec by executing the following command:
print(bubbleSort(myVec))
You can also use the Pkg.status() command to show you
information about the installed packages. (The image, right,
shows how to install and use a Julia package – the Dates
package, used to construct a date.) You can add a
package with the Pkg.add() command and remove it with the
Pkg.rm() command. Last, the Pkg.update() command
enables you to update every installed package to its latest
and greatest version.
Julia also allows you to look at the source code and full
development history of all installed packages, and you can
make changes and fixes to them. So, for instance, you can
find the source code of the Dates package inside the
~/.julia/v0.3/Dates/ directory. Should you wish to generate
your own package, then you can begin by using the
command Pkg.generate().
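A typical package-management session might therefore look like this (a sketch, using the Dates package mentioned above):
julia> Pkg.add("Dates")   # fetch and install the package
julia> Pkg.status()       # list installed packages and their versions
julia> using Dates        # load the package into the current session
julia> Pkg.update()       # bring every installed package up to date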
Julia allows you to work with both text and binary files.
This section will show you how to work with text files. The
following program reads a text file line by line, converts
each line to uppercase, and saves the output into a new text
file. The code looks like this:
input = ARGS[1]              # input filename, from the command line
output = ARGS[2]             # output filename
OUTPUT = open(output, "w")
open(input, "r") do f
    for line in eachline(f)
        print(OUTPUT, uppercase(line))
    end
end
close(OUTPUT)
When you open a file with a do block, the file is closed
automatically when the block ends. Otherwise, you
should close the file yourself using the close() command.
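Assuming you save the program as toupper.jl (a name we've picked here for illustration), you would run it from the Linux shell like this:
$ julia toupper.jl input.txt output.txt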
Julia has many more capabilities than the ones presented
in this article, and as a modern programming language it has
many handy features mainly for scientific computing. LXF
Next issue: MySQL & MariaDB
Text plotting in Julia
TextPlots is an extremely simple plotting
library which generates terminal plots
using Braille characters. This enables
you to plot any continuous real-valued
function or any relatively small collection
of data points.
The image (right) shows the output of
the following commands:
julia> Pkg.add("TextPlots")
julia> using TextPlots
julia> plot(cos)
julia> plot([x -> cos(x) * sin(x), x -> x/5], -6:6)
As you can see, TextPlots allows you to plot multiple
functions simultaneously and define the range of
the values of x. The following code
changes the output of the plot()
command and produces different kinds
of plots:
julia> plot(cos, border=false)
julia> plot(cos, border=false, title=false,
labels=false)
julia> plot([1, 2, 5, 8], [10, 11, 12, 3])
The last command draws pairs of (x,y)
points on screen using plot().
You can find more information about
how to use the TextPlots Julia package at
https://github.com/sunetos/TextPlots.jl. Other useful packages for
plotting that you may like to try are
Winston, PyPlot and Gadfly.
The TextPlots package enables you to plot on your terminal using Braille characters and is very handy when you don't have a graphical interface.
Got a question about open source? Whatever your level, email it to [email protected] for a solution.
This month we answer questions on:
1 Fedora 21 package manager
2 Backing up onto DVD
3 The Case of the Disappearing Printers
4 Copying Linux Format DVDs to USB sticks
5 Dealing with misbehaving hotkeys
+ UEFI booting
1 Apper not appy
I recently downloaded and installed
from Fedora-Live-KDE-i686-21-5.iso.
The install went smoothly and I have
successfully applied all available updates
without any problems. I was impressed by
how quickly and smoothly this proceeded.
However, the system does have a problem in
that it cannot install extra packages from the
standard Fedora repo. Clicking on System
Settings and then on Software Management
takes me to the Lists and Groups section.
When I click on any of the icons within the Groups section,
the system throws up a dialog stating "A problem that we were
not expecting has occurred. Please report this bug with the
error description." Clicking on this dialog's Details button
reveals SearchGroups not supported by backend.
Fedora 21's KDE package manager falls over if you try to browse groups. For now, use a different package management frontend, such as yumex.
Enter our competition
Win!
if ($letter == winner)
get $books
Get your questions answered and exploit our generosity.
Linux Format is proud to produce the biggest
and best magazine about Linux and free
software that we can. A rough word count
of LXF193 showed it had 55,242 words.
That’s a few thousand more than Animal
Farm and Kafka’s The Metamorphosis
combined, but with way more Linux, coding
and free software (but hopefully fewer bugs).
That’s more than most of our competitors,
and as for the best, well… that’s a subjective
claim, but it’s one we’re happy to stand by.
As we like giving nice things to our
wonderful readers, the Star Question each
month will win a copy of Martin O’Hanlon’s
Adventures in Minecraft, and it’s a great
intro to fun Minecraft API projects for you
or a youngster. For a chance to win, email a question to
[email protected], or post it to
www.linuxformat.com/forums to seek help from our lively
community of readers.
See page 94 for our star question.
I'm at a loss to understand this error. Have I
downloaded the correct ISO file? The ISO file
name does mention i686 – am I correct in thinking that a
Pentium I is i586 and a Pentium II is i686?
Is there some way that SearchGroups can be
enabled in the backend?
stuarte9, From the forums
You are right in thinking that i686 is
32-bit, PII or later. 64-bit ISOs are
generally marked with amd64 or
x86_64. This is a known issue where Apper, the
KDE package manager for Fedora, tries to call a
function that‘s not implemented on the
package management backend. This is one of
those situations where it’s always someone
else's fault, depending on who you ask. Is it the
backend's fault for not implementing the
function or the frontend's fault for not
gracefully falling back to an alternative?
Whoever it is they will eventually need to fix it,
but until then the Groups functionality of Apper
is currently broken. That means you need to
use an alternative package management front
end for the time being. This can be the
command line with yum or a different graphical
interface. For those that installed from
LXFDVD194, which contains both Gnome and
KDE frontends, there’s already the choice of
gnome-software. As you installed from the
KDE-only ISO, you may choose to install
gnome-software from the command line with
yum install gnome-software
then use gnome-software to browse available
packages. As this is a Gnome program,
installing it on a KDE system will bring in a large
Terminals and
superusers
We often give a solution as commands to type in
a terminal. While it is usually possible to do the same
with a distro’s graphical tools, the differences between
these mean that such solutions are very specific.
The terminal commands are more flexible and, most
importantly, can be used with all distributions.
System configuration commands often have to
be run as the superuser, often called root. There are
two main ways of doing this, depending on your distro.
Many, especially Ubuntu and its derivatives, prefix the
command with sudo, which asks for the user password
and sets up root privileges for the duration of the
command only. Other distros use su, which requires the
root password and gives full root access until you type
logout. If your distro uses su, run this once and then run
any given commands without the preceding sudo.
number of dependencies. Alternatively, you
could use YUM Extender (yumex), the package
manager used by LXDE, which has a smaller
dependency footprint. You will see that
browsing groups in gnome-software does not
give the same problems as with Apper.
This issue only relates to the software groups
feature; updates, searching and installing
packages by name all work fine in Apper.
2 Scripts and hotkeys
I am using Xubuntu 14.10. I have
installed xclip and written a one-line
shell script to read the system date:
date "+%A %d %B %Y" | xclip -selection
clipboard
I have also written a one-line shell script that
outputs the date:
xclip -o -selection CLIPBOARD
Both of these work from the command line.
As a refinement, I made a keyboard shortcut
to each of these scripts. The script which
reads the date works when invoked by the
keyboard shortcut. The script which should
output the date appears to do nothing. I
have the feeling that I am missing something
obvious but I can't think what it is.
elenmar, from the forums
Diagnosing these problems can be
tricky because you cannot see any
messages from the command when
running it from a keyboard shortcut, while
running it from a terminal changes its
execution environment. When you open a shell
session in a terminal, it executes your profile,
usually ~/.bash_profile or ~/.bashrc, which
doesn't happen when running a script from a
keyboard shortcut (or cron task for that
matter). So how do you get to see the output
from your script when it is not connected to a
terminal window? By adding these lines near
the top of the script:
exec >>~/myscript.log 2>&1
set -x
The exec command can be used to change
redirections for a shell process; this line
redirects both standard output and standard
error to a file. The first redirection sends
standard output to the file, then 2>&1 sends
standard error to stdout. Now you can inspect
the output from the program after calling it
from a hotkey. You can view the output in real
time by running this in a terminal
tail -f ~/myscript.log
and then pressing the key. The other line set -x
tells the shell to print each command before it
runs it, adding this lets you see exactly what
the script is doing and which commands are
creating any messages.
If you suspect the problem is caused by the
different execution environments, you can
make sure your script loads the profile before
running anything with one of:
source ~/.bash_profile
source ~/.bashrc
Alternatively, you can have a single script
that simply outputs the date in the format you
need, but I assume you have a reason for
storing that date and using it later. Be aware
that any other program could change or clear
the clipboard's contents, so it may be safer to
use a temporary file rather than the clipboard.
3 Backing up DVDs
I want to rip, store and back up
some of my DVD collection onto my
NAS so that I can watch them via
my Raspberry Pi and XBMC. Ideally, I'd like
to store the files in MKV or MP4 format.
It looks like I will need to remove any copy
protection, save the copy as an ISO and then
convert the resulting file to either MKV or
MP4. So what are your favourite programs to
do this?
Noob_Computa_Ninja, from the forums
First of all, you have to decide how
much of each DVD (please note it may
be illegal in your territory to copy
DVDs or circumvent DRM protection, in the UK
it’s now legal for personal use) you want to
keep. Do you want the whole DVD, complete
with menus and features, or just the main title?
If you want the whole DVD, and have libdvdcss
installed on the player, you can simply copy the
entire DVD to an ISO file with this command:
cp /dev/dvd name.iso
This gives you the truest copy, but at the
A quick reference to...
Finding files
Linux has two main tools for finding
files: locate and find. The former works
by keeping a database of files on your
system. It is very fast, but limited to files that
were present when the database was last
updated. Most distros install a cron script to
keep the file database up to date. Locate is
also limited to searching on file names, eg:
locate somefile
Add -i for a case-insensitive search.
The alternative is find, which searches the
filesystem directly. It’s slower, and can only
search areas the users have permission to
read, but provides completely up to date
information. It also enables you to specify
which directories to search or exclude and
search on properties other than the file
name, such as the owner or creation date.
find -name '*somefile*'
find /usr -iname '*someotherfile*'
find /usr -maxdepth 2 -iname '*whatfile*'
The first line starts the search in the
current directory, and descends into all subdirectories. The second starts in /usr instead,
and performs case-insensitive searching.
The third does the same, but descends a
maximum of two directories.
With more options, find is a far more
flexible option, but locate is great for quick
searches. You can pipe locate's output
through grep to search specific directories
locate -i myfile | grep /home/
Note also that locate searches for a
substring, whereas find looks for an exact
match, hence the use of * as a wildcard.
Have you ever wondered where a
program keeps its config files? Try using
touch /tmp/now
then run the program, edit its preferences and quit. Then do
find ~ -newer /tmp/now
to find all files that have changed, which will include your
program's configuration and, probably, little else.
Kfind adds a friendly face to the powerful find command.
expense of storage space, each ISO image can
be up to 9.4GB in size. If you want a single
movie file containing the main feature of each
DVD, there are a number of programs you can
use, both command line and GUI, once again
depending on your needs. If you want a straight
copy of the title with no transcoding, you can
use MPlayer's dump options.
mplayer -dumpstream -dumpfile movie.mpg
dvd://1
This will dump the first track of the DVD to a
file instead of playing it. The only processing
done is to remove any encryption. If you want
to transcode to a different format, a good
example of a GUI program for this task is
HandBrake (which comes with a command line
tool too). HandBrake should be in most distros'
package repositories, so install it in the usual
way and start it up. While HandBrake doesn't
appear to have an option to read directly from
a DVD, by pressing the Source button and then
selecting /dev/dvd (or /dev/sr0, depending
on your system) it will read the DVD in that
drive and enable you to choose the title to
transcode, then you can set encoding
parameters. This is an important consideration
when using a Raspberry Pi for playback.
You need to choose codecs and compression
that work well with your Pi. Fortunately
HandBrake has a couple of features that help
with this. You can encode a small part of a
video for testing, enabling you to try various
options without having to transcode the whole
DVD every time, and you can save these
settings in a preset for easy recall next time.
You may as well start with the Normal preset, and leave the
video codec as H.264 but change the audio codec to AAC.
Try encoding a short clip and see how it plays back. Then you
can adjust the settings if the playback is not smooth enough
for you. Once you have the settings you want, click the Save
icon below the Presets list.
If you want to encode a number of DVDs, and have sufficient
space, you can rip each to an ISO file as above, then set up
encoding in HandBrake and add each one to your ripping
queue. Then you can select Queue > Start Queue to batch
process all the DVDs.
HandBrake makes transcoding video easy, and lets you define your own custom presets.
4 LXFDVD to USB
Why can I not make a bootable USB
from the LXF192 DVD, please?
I have tried on two different 64-bit
machines. I’ve booted from the LXF192 disc,
loaded the Ubuntu 14.10 Mate version and
I’ve used disc creator. After 20-30 minutes it
gave me an error and the process stopped.
I tried again on a 64-bit Compaq HP laptop,
erased the USB stick and again after 20-30
minutes the same thing happened again.
Why? I never seem to be able to use LXF
discs to make bootable USB from the
attached DVD that comes with my
subscription. Am I the only one who has this
kind of trouble?
el10,
From the forums
Disc creators like the one you tried,
and generic ones, such as Unetbootin,
make certain assumptions that are
not always true. In this case, it assumes that
you have booted from an Ubuntu disc and that
is what it expects to find in your optical drive.
In fact, there's a completely different disc in
there and you have booted from an ISO image
file on that disc.
Enough of why that approach doesn't work,
the good news is that not only is what you want
possible, it is far simpler than messing around
with various disc creators. Both the LXFDVDs
themselves and the various distros' ISO images
on them are what are known as hybrid images.
This means they can be used on either an
Star Question Winner! This month's winner is Peter Woolley. Get in touch with us to claim your glittering prize!
UEFI conundrum
I read Neil Bothwick’s article on
configuring Grub,[p78, LXF193]
however, it didn't answer the
problem that I’m experiencing. I bought a
Lenovo Flex 10, having been happy with
Lenovo machines before. Unlike previous
models this hasn’t been easy to get Linux
working. It has a 32-bit UEFI firmware and a
64-bit processor. This makes installation
much harder, though not impossible. I’ve got
as far as booting into Ubuntu 13.04 using a
pendrive I prepared using Rufus in Windows.
From this, I have run Gparted and set up my
drive as follows:
Disk partition table set as GPT:
Partition   Filesystem   Size    Flag   Mountpoint
/dev/sda1   FAT32        511MB   boot
/dev/sda2   EXT4         10GB    none   /
/dev/sda3   Swap         1GB     none
However, I am unable to boot using the
HD. It isn't recognised by the firmware and I
get a kernel panic. The strings I use from the
Grub console are:
linux (hd1,gpt2)/boot/vmlinuz-3.8.0-19-generic root root=dev/sda/2
initrd (hd1,gpt2)/boot/initrd.img-3.8.0-19-generic
I was wondering if I can avoid all this by
using an EFI stub loader? Right now I just
have an expensive paperweight!
Peter Woolley
Getting started with UEFI can be a
little daunting, especially if you have a
32-bit UEFI. There are a couple of
significant points to bear in mind. The first is
that it’s not enough to make your EFI System
Partition FAT format, it must also be marked as
such in the partition table, by setting the
partition type to EF00. There also seems to be
some confusion over your disk identification.
As a laptop, I would expect it to have a single
disk, /dev/sda in Linux terms, as described in
your partition table, but this is hd0 in Grub
terms. Grub 2 has taken the rather confusing
step of counting disks from zero but
partitions from one, so sda2 is hd0,gpt2.
The other important point is the chicken
and egg situation of installing Grub for a
UEFI system. In order for Grub to install to
the right place, it needs access to UEFI
information from your EFI firmware, which it
gets from /sys/firmware/efi/efivars. If this
directory is empty, or doesn’t exist, Grub will
install as if for a BIOS boot, but this
information is only available if you’re already
booted using the EFI firmware, so check the
files are there before trying to install Grub.
Another factor is your mixing of 32-bit
UEFI with 64-bit CPU and OS. The Linux
kernel supports this, but only from version
3.15 onwards. Ubuntu 13.04 comes with
version 3.8.8 of the kernel, which doesn’t
support what you want to do. Having said all
that, you may well have already read the
follow-up to that article [p74, LXF194], which
explains Gummiboot for EFI booting.
optical disk or a USB flash drive. To copy the
entire DVD to a flash drive, run the following:
sudo dd if=/dev/dvd of=/dev/sdX bs=4k
sync
Note that you must get the device name right,
as giving the wrong device will trash any data on
it. To be sure, run:
dmesg -w
and then plug in the flash drive. You will see
something like this
usb-storage 1-2:1.0: USB Mass Storage device detected
scsi host7: usb-storage 1-2:1.0
scsi 7:0:0:0: Direct-Access SanDisk Cruzer Edge 1.20 PQ: 0 ANSI: 5
sd 7:0:0:0: [sdb] 31266816 512-byte logical blocks: (16.0 GB/14.9 GiB)
so in this case you would want /dev/sdb. Use
the device name without a trailing number, as
you are writing to the whole device.
This gives you a copy of the whole DVD, so it is
a fairly slow process. The alternative is to copy
just the ISO image you want. In your example
of wanting the Ubuntu image, you would use
the same command but set if= to the path of
the ISO image, instead of the whole drive:
sudo dd if=/media/LXFDVD192/ubuntu-14.10-desktop-remix-amd64.iso of=/dev/sdX bs=4k
sync
The sync command makes sure that all data is
written to the flash drive before the command
prompt returns, to avoid unplugging it too
soon. In either case, the flash drive should not
be mounted while writing to it.
5 Invisible printer
Since upgrading from Ubuntu 14.04
to 14.10 I have had no end of trouble in
getting the computer to see my Officejet
4620 printer; there was no problem with
14.04. Strangely enough, provided that I
switch the printer on for a minute or two
prior to starting the computer, there seems
to be no problem. However, if I start the
computer first, followed by the printer then I
cannot print at all, and I have to restart the
computer to solve the problem.
Not only that, but when I tried to connect
my camera to the computer for the first
time, the computer couldn’t 'see' it either.
The camera has a straightforward USB
connection. Next, I tried it on another
computer running a Ubuntu 14.10 OS that
has nothing on it except a downloaded 14.10
fresh install. That computer could not see
the camera! Then I connected the camera to
a laptop running Ubuntu 12.04 and was able
to transfer my photos to it – no problem!
Bryan Mitchell
This looks like a problem with hotplug
detection on your computer. The first
thing to do is to check whether the
kernel even sees the appearance of the printer.
Open a terminal and run:
dmesg --follow
Then connect the printer and watch the
output. Is udev detecting the printer?
Switching it on before booting means the
printer is already there when udev and CUPS
start up. If you load the CUPS web interface at
http://localhost:631 – is the printer visible?
If it is visible but disabled, you can enable it
from the web interface or with this command:
sudo cupsenable printername
Help us to help you
We receive several questions each month that we are
unable to answer, because they give insufficient detail
about the problem. In order to give the best answers to
your questions, we need to know as much as possible.
If you get an error message, please tell us the
exact message and precisely what you did to invoke it.
If you have a hardware problem, let us know about the
hardware. If Linux is already running, you can use the
Hardinfo program (http://hardinfo.berlios.de) that
gives a full report on your hardware and system as an
HTML file you can send us.
Alternatively, the output from lshw is just as useful
(http://ezix.org/project/wiki/HardwareLiSter).
One or both of these should be in your distro’s
repositories. If you are unwilling, or unable, to install
these, run the following commands in a root terminal
and attach the system.txt file to your email. This will
still be a great help in diagnosing your problem.
uname -a >system.txt
lspci >>system.txt
lspci -vv >>system.txt
If CUPS doesn't see the printer at all, you
could try running sudo /etc/init.d/cups
restart after powering up the printer, to force
CUPS to detect it. It's a bit of a sledgehammer
to crack a nut, but a far smaller hammer than
rebooting the whole computer.
The camera problem appears to be
separate and a bug has already been raised for
it on Launchpad (http://bit.ly/LXF195bugs)
where the blkid command mis-identifies some
filesystems. This may have been fixed in the
repositories by the time you read this answer, if
not there’s a Deb file you can download from
http://packages.ubuntu.com/vivid/libblkid1
(this is referenced in that bug report's
comments) that solves the issue. LXF
Frequently asked questions…
Mobile broadband
What is this mobile broadband
I keep hearing about? Is it some
sort of ADSL over a mobile
phone connection?
Sort of. It uses the 3G mobile
phone network, but the technology
is not ADSL, it uses HSPA (High
Speed Packet Access) which is
designed for mobile use. This is
the technology used by
smartphones that require a
constant (or at least frequent)
connection to the Internet.
Does this offer a broadband
level of speed?
No, it's not broadband in the
original sense of the word (neither
are some of the slower landline
connections for that matter) but it
is fast enough for important tasks,
such as reading email and
watching YouTube videos. The
companies bandy around various
speeds, but these are all heavily
dependent on signal strength and
other related factors.
What do I need to use it?
You need a broadband modem
and an account with a mobile
telecoms company. Most of these
include the modem, but you can
use your own. The telcos all use
basically the same modem, a USB
dongle, which takes a SIM card to
authenticate the connection via
your account.
What does it cost?
As with most technology, the price
varies according to the supplier,
the length of contract and the data
allowance. Expect to pay around
£15 in the UK, which will give you
around 3GB of data transfers
per month.
How long do I generally need to
commit for?
Contracts vary in length from one
month to 18 months, or you can
get pay as you go services. The
longer term contracts come with a
free modem.
How well does this technology
work with Linux?
Very well nowadays. As all the
companies seem to provide
modems from a single
manufacturer, Huawei, and this
has drivers in the Linux kernel, it
should ‘just work’. You need to set
it up as a dialup connection, using
www.tuxradar.com
something like KPPP or Gnome-PPP, with the modem device as
/dev/ttyUSB0. Because it uses
the SIM card for authentication,
you can put whatever you like for
username and password (although
the software does expect you to
provide something for these).
Does it matter which distro
I use?
The standard PPP setup should
work with any distro. Those that
use Network Manager have an
advantage as this now supports
3G modems. With such a distro,
plugging in the modem for the first
time can pop up a requester for
you to select your provider, then it
sets it all up for you. After that,
you can take the modem on- and
offline through the Network
Manager menu.
On the disc
Distros, apps, games, books, miscellany and more…
The best of the internet, crammed into a phantom-zone like 4GB DVD.
Distros
After installing each of the distros from this month's DVD, the differences between the installers were clear. Ubuntu has a clear and simple graphical installer. Fedora, the oldest and most experienced in this field, has the rather pretty but at times confusing Anaconda installer, while ArchBang opens a terminal window, albeit a very attractive one, and throws questions at you in plain text.
At first ArchBang's approach seems rather geeky and unfriendly to those preferring the GUI, but it has some significant advantages. All the stages of the installation process are presented in the main menu, and you can jump to where you want at the press of a key – try doing that with a GUI. You can immediately see how far you are through the usual Q&A ritual of an installation, and it doesn't do anything until you've answered all the questions and told it to proceed. That's a little slower but a lot more comforting.
In our desire to make everything as user-friendly as possible with a nice pointy-clicky GUI, have we actually gone the other way? Have we made it look more friendly while making it less functional? Friendly is good, but dumbing down is not.
Important NOTICE!
Defective discs
In the unlikely event of your Linux Format coverdisc being in any way defective, please visit our support site at www.linuxformat.com/dvdsupport for further assistance. If you would prefer to talk to a member of our reader support team, email us at [email protected] or telephone +44 (0) 1225 822743.
An intermediate user’s distro
64-bit
Fedora & Ubuntu
On last month's LXFDVD, we included a remix of
Fedora 21, with several extra desktops. But as
we've gone in-depth with the latest releases of
Fedora (there are three distinct versions now:
Cloud, Server and Workstation) in one of our features
this issue (see p54), it made sense to have it again
so you can try out the new
features that Jonni describes.
Note, however, that this
time it only has its standard
Gnome 3 desktop (or 3.14 to
be precise). As is so often the
case, there was a flurry of
updates in the weeks
following the release, and so
many more people using it
resulted in more bugs being
discovered and fixed. So this
month's Fedora offering is
also a respin, this time with all
the updates from the six
weeks after the initial release.
The Ubuntu 14.10 on the
LXFDVD is also updated.
A newbie friendly distro
32-bit
PCLinuxOS 2014.1
In the dim and distant past, before Ubuntu became
the world’s most popular distro, Mandrake was top
of the heap. The merger with Connectiva and then
the fork to Mageia are fairly well known, but there
was an earlier fork of Mandrake called
PCLinuxOS. This started life as an extra
software repository for Mandrake –
probably the main source of unofficial
packages – and it gradually became a
distro in its own right.
It was, and still is, an easy to use
distro ideally suited to new Linux users.
It was also the distro that introduced
the idea of a live CD that could be used
to both demo a distro (or use a distro)
and then install the distro to a system
directly once a user was satisfied with
the experience.
Note, however, that the current
PCLinuxOS live boot doesn’t play nicely
with USB devices, either flash disks or external
DVD drives. You will need to boot from an internal
DVD drive, using either the LXFDVD or burning the
PCLinuxOS ISO image to a DVD, in order to use it.
New to Linux?
Start here
What is Linux? How do I install it?
Is there an equivalent of MS Office?
What’s this command line all about? Are you reading
this on a tablet?
How do I install software?
Open Index.html on the disc to find out
A lightweight distro
32-bit
ArchBang 2015.01
ArchBang is another lightweight distribution that
takes its inspiration from the excellent
#!CrunchBang Linux. As the other part of the
name implies, this is one of the increasing number
of distros using Arch Linux as a base.
If you are impressed by the live CD experience
and want to install ArchBang, beware. The installer
isn’t a friendly mouse-driven affair. It’s a text-based
system that asks you a series of questions, so you
need a reasonable level of experience and
knowledge about your system.
However, it doesn’t touch your disk until
you have answered all the questions,
so it’s perfectly safe to try. When we
tried a test installation, we chose Grub
as the bootloader and the installed
system hung without booting. Trying
again with the Syslinux choice resulted
in a trouble-free setup.
Lightweight desktops are often
touted as ideal for older hardware, but
they fly on a modern computer, leaving
more of your system’s resources to do
the actual work.
Download your DVD from
www.linuxformat.com
And more!
Free eBook
LibreOffice:
The Complete Guide
Get everything you need to master the
powerful productivity suite, LibreOffice
for the unbeatable price of, well, nought
at all. Bargain! This month’s LXFDVD
includes an eBook from our TechBook
series that would normally set you back
£5.99, and in it you'll find 100 pages of
guides and tips, all written in plain
English by our office experts.
The guide goes through all the
applications in the suite. For example,
you’ll learn how to make a stylish
newsletter in Writer, use master
documents and make mail merges.
In Calc & Math, we’ll show you how to
manage your money, split unwieldy
worksheets and make complex
spreadsheets more manageable with
pivot tables. In Impress, you’ll learn to
create knockout presentations and in
Draw we’ll dive into flowcharts and
logos. LibreOffice even includes a
database app, and we'll help you build a
database and use forms and queries.
A NAS distro
64-bit
OpenMediaVault
Our final distro this month is specialised, but an
excellent choice for that perfectly good computer
that has been gathering dust since you bought
yourself a new one for Christmas. OpenMediaVault
1.9 turns a computer into a NAS (Network
Attached Storage) box. This isn't a live distro; you
need to install it to your hard drive, so copy the ISO
image from the DVD to a CD or USB stick and boot
with that. Once installed, it will give you a URL that
you can use to admin and use the NAS.
System tools
Essentials
Checkinstall Install tarballs with your
package manager.
GNU Core Utils The basic utilities that
should exist on every operating system.
Hardinfo A system benchmarking tool.
Kernel Source code for the latest stable
kernel release.
Memtest86+ Check for faulty memory.
Plop A simple boot manager to start
operating systems.
RaWrite Create boot floppy disks and
other images in Windows.
SBM An OS-independent boot manager
with an easy-to-use interface.
WvDial Connect with a dial-up modem.
Get into Linux today!
Future Publishing, Quay House,
The Ambury, Bath, BA1 1UA Tel 01225 442244
Email [email protected]
20,238 January – December 2013
A member of the Audit Bureau of Circulations.
LXF 196 will be on sale Thursday 19 March 2015
EDITORIAL
Editor Neil Mohr
[email protected]
Technical editor Jonni Bidwell
[email protected]
Operations editor Chris Thornett
[email protected]
Art editor Efrain Hernandez-Mendoza
[email protected]
Editorial contributors Neil Bothwick, Chris Brown,
David Eitelbach, Kent Elchuk, Matt Hanson, Russ Pitt,
Les Pounder, Mayank Sharma, Richard Smedley,
Alexander Tolstoy, Mihalis Tsoukalos, Steven Wu
Illustrations Shane Collinge
ADVERTISING
For ad enquiries please contact:
Key Accounts - sales manager Richard Hemmings
[email protected]
MARKETING
Marketing manager Richard Stephens
[email protected]
PRODUCTION AND DISTRIBUTION
Production controller Marie Quilter
Production manager Mark Constance
Distributed by Seymour Distribution Ltd, 2 East
Poultry Avenue, London EC1A 9PT Tel 020 7429 4000
Overseas distribution by Seymour International
LICENSING
International director Regina Erak
[email protected] Tel +44 (0)1225 442244
Fax +44 (0)1225 732275
Next month:
The best 100 open
source tools ever!
Don’t miss out on the best FOSS tools, utilities and
apps known to man, well, the LXF team at least.
Build a Tor box
We (finally) explore how open source and a bit of
electronics can create an anonymising box.
CIRCULATION
Trade marketing manager Juliette Winyard
Tel 07551 150 984
SUBSCRIPTIONS & BACK ISSUES
UK reader order line & enquiries 0844 848 2852
Overseas reader order line
& enquiries +44 (0)1604 251045
Online enquiries www.myfavouritemagazines.co.uk
Email [email protected]
MANAGEMENT
Content & Marketing director Nial Ferguson
Head of Content &
Marketing, Technology Nick Merritt
Group editor-in-chief Paul Newman
Group art director Steve Gotobed
Editor-in-chief, Computing Brands Graham Barlow
LINUX is a trademark of Linus Torvalds, GNU/Linux is abbreviated to Linux
throughout for brevity. All other trademarks are the property of their respective
owners. Where applicable code printed in this magazine is licensed under the GNU
GPL v2 or later. See www.gnu.org/copyleft/gpl.html.
Copyright © 2015 Future Publishing Ltd. No part of this publication may be
reproduced without written permission from our publisher. We assume all letters
sent – by email, fax or post – are for publication unless otherwise stated, and reserve
the right to edit contributions. All contributions to Linux Format are submitted and
accepted on the basis of non-exclusive worldwide licence to publish or license others
to do so unless otherwise agreed in advance in writing. Linux Format recognises all
copyrights in this issue. Where possible, we have acknowledged the copyright holder.
Contact us if we haven’t credited your copyright and we will always correct any
oversight. We cannot be held responsible for mistakes or misprints.
All DVD demos and reader submissions are supplied to us on the assumption they
can be incorporated into a future covermounted DVD, unless stated to the contrary.
Disclaimer All tips in this magazine are used at your own risk. We accept no liability
for any loss of data or damage to your computer, peripherals or software through
the use of any tips or advice.
Printed in the UK by William Gibbons on behalf of Future.
Stay anonymous online
The ultimate showdown of anonymising distros, and
we have no idea who’s writing it!
Build a home router
Take an old system and create the ultimate home
router run by Linux, mad-genius Jonni is your guide.
Contents of future issues subject to change – hang on, I recognise that Tor box from last month…
Future is an award-winning international media
group and leading digital business. We reach more
than 49 million international consumers a month
and create world-class content and advertising
solutions for passionate consumers online, on tablet
& smartphone and in print.
Future plc is a public
company quoted
on the London
Stock Exchange
(symbol: FUTR).
www.futureplc.com
Chief executive Zillah Byng-Maddick
Non-executive chairman Peter Allen
Chief financial officer Richard Haley
Tel +44 (0)207 042 4000 (London)
Tel +44 (0)1225 442 244 (Bath)
We are committed to only using magazine paper which is derived from well-managed, certified forestry and chlorine-free manufacture. Future Publishing and its paper suppliers have been independently certified in accordance with the rules of the FSC (Forest Stewardship Council).