Category: Linux Journal – The Original Magazine of the Linux Community

An Immodest Proposal for the Music Industry:


How music listeners can fill the industry’s “value gap”.

From the 1940s to the 1960s, countless millions of people would put a dime
in a jukebox to have a single piece of music played for them, one time. If
they wanted to hear it again, or to play another song, they’d put in
another dime.

In today’s music business, companies such as Spotify, Apple and Pandora pay
fractions of a penny to stream songs to listeners. While this is a big
business that continues to become bigger, it fails to cover what the music
industry calls a “value gap”.

They have an idea for filling that gap. So do I. The difference is that
mine can make them more money, with a strong hint from the old jukebox.

For background, let’s start with this graph from the IFPI’s Global Music
Report 2018 (Figure 1).

Figure 1. Global Music Report 2018

You can see why IFPI no longer gives its full name:
International Federation of the Phonographic Industry. That phonographic
stuff is what they now call “physical”. And you see where that’s going (or
mostly gone). You also can see that what once threatened the
industry—"digital"—now accounts for most of its rebound (Figure 2).

Figure 2. Global Recorded Music Revenues by Segment (2016)

The graphic shown in Figure 2 is a call-out from the first graph. Beside it is this
text: “Before seeing a return to growth in 2015, the global recording
industry lost nearly 40% in revenues from 1999 to 2014.”

Later, the report says:

However, significant challenges need to be
overcome if the industry is going to move to sustainable growth. The whole
music sector has united in its effort to fix the fundamental flaw in
today’s music market, known as the “value gap”, where fair
revenues are not being returned to those who are creating and investing in
music.
They want to solve this by lobbying: “The value gap is now the
industry’s single highest legislative priority as it seeks to create a
level playing field for the digital market and secure the future of the
industry.” This has worked before. Revenues from streaming and performance
rights owe a lot to royalty and copyright rates and regulations guided by
the industry. (In the early 2000s, I covered this like a rug in Linux
Journal. See here.)

via Linux Journal – The Original Magazine of the Linux Community

Time for Net Giants to Pay Fairly for the Open Source on Which They Depend:


Net giants depend on open source: so where’s the gratitude?

Licensing lies at the heart of open source.
Arguably, free software began
with the
publication of the GNU GPL in 1989.
And since then, open-source projects
are defined as such by virtue of the licenses they adopt and
whether those licenses meet the Open Source
Definition. The continuing importance of licensing is shown by the
periodic flame wars that erupt in this area. Recently, there have been two
periodic flame wars that erupt in this area. Recently, there have been two
such flarings of strong feelings, both of which raise important issues.

First, we had the incident with Lerna, “a
tool for managing JavaScript projects with multiple packages”. It came about
as a result of the way the US Immigration and Customs Enforcement (ICE) has
been separating
and holding children in
cage-like cells
. The Lerna core team was appalled by this behavior and
wished to do something concrete in response. As a result, it added an extra clause to the
MIT license
, which forbade a list of companies, including Microsoft,
Palantir, Amazon, Motorola and Dell, from using the code:

For the companies that are known supporters of ICE: Lerna will no
longer be licensed as MIT for you. You will receive no licensing rights and
any use of Lerna will be considered theft. You will not be able to pay for a
license, the only way that it is going to change is by you publicly tearing
up your contracts with ICE.

Many sympathized with the feelings about the actions of ICE and the
intent of the license change. However, many also pointed out that such a
move went against the core principles of both free software and open source.
Freedom 0 of the
Free Software Definition
is “The freedom to run the program as you wish,
for any purpose.” Similarly, the Open Source Definition requires “No
Discrimination Against Persons or Groups” and “No Discrimination Against
Fields of Endeavor”. The situation is clear cut, and it didn’t take long for
the Lerna team to realize their error, and they soon reverted the change.


Weekend Reading: FOSS Projects:

FOSS Project Spotlights provide an opportunity for free and open-source project team members to show Linux Journal readers what makes their project compelling. Join us this weekend as we explore some of the latest FOSS projects in the works.

FOSS Project Spotlight: Nitrux, a Linux Distribution with a Focus on AppImages and Atomic Upgrades

by Nitrux Latinoamericana S.C.

Nitrux is a Linux distribution with a focus on portable application formats like AppImages. Nitrux uses KDE Plasma 5 and KDE Applications, and it also uses our in-house software suite, Nomad Desktop.

FOSS Project Spotlight: Tutanota, the First Encrypted Email Service with an App on F-Droid

by Matthias Pfau

Seven years ago, work began on Tutanota, an encrypted email service with a strong focus on security, privacy and open source. Long before the Snowden revelations, the Tutanota team felt there was a need for easy-to-use encryption that would allow everyone to communicate online without being snooped upon.

FOSS Project Spotlight: LinuxBoot

by David Hendricks

Linux as firmware.

The more things change, the more they stay the same. That may sound cliché, but it’s still as true for the firmware that boots your operating system as it was in 2001 when Linux Journal first published Eric Biederman’s “About LinuxBIOS”. LinuxBoot is the latest incarnation of an idea that has persisted for around two decades now: use Linux as your bootstrap.

FOSS Project Spotlight: CloudMapper, an AWS Visualization Tool

by Scott Piper

Duo Security has released CloudMapper, an open-source tool for visualizing Amazon Web Services (AWS) cloud environments.

When working with AWS, it’s common to have a number of separate accounts run by different teams for different projects. Gaining an understanding of how those accounts are configured is best accomplished by visually displaying the resources of the account and how these resources can communicate. This complements a traditional asset inventory.

FOSS Project Spotlight: Ravada

by Francesc Guasch


The Monitoring Issue:

November 2018 Cover

In 1935, Austrian physicist Erwin Schrödinger, still flying high after his
Nobel Prize win from two years earlier, created a simple thought experiment.

It ran something like this:

If you have a file server, you cannot know if that server is up or
down…until you check on it. Thus, until you use it, a file server
is—in a
sense—both up and down. At the same time.

This little brain teaser became known as Schrödinger's File Server, and it is
regarded as the first known critical research on the intersection of Systems
Administration and Quantum Superposition. (Though, why Erwin chose,
specifically, to use a “file server” as an example remains a bit of a
mystery—as the experiment works equally well with any type of server.
It’s like,
we get it, Erwin. You have a nice NAS. Get over it.)

Okay, perhaps it didn’t go exactly like that. But I’m confident it would
have…you know…had good old Erwin had a nice Network Attached Storage
server instead of a cat.

Regardless, the lessons from that experiment certainly hold true for servers.
If you haven’t checked on your server recently, how can you be truly sure
it’s running properly? Heck, it might not even be running at all!

Monitoring a server—to be notified when problems occur or, even better,
when problems look like they are about to occur—seems, at first blush, to
be a simple task. Write a script to ping a server, then email me when the
ping times out. Run that script every few minutes and, shazam, we’ve got a
server monitoring solution! Easy-peasy, time for lunch!

Whoah, there! Not so fast!

That server monitoring solution right there? It stinks. It’s fragile. It
gives you very little information (other than the results of a ping). Even
for administering your own home server, that’s barely enough information and
monitoring to keep things running smoothly.

Even if you have a more robust solution in place, odds are there are
significant shortcomings and problems with it. Luckily, Linux Journal has
your back—this issue is chock full of advice, tips and tricks for how to
keep your servers effectively monitored.

You know, so you're not just guessing if the cat is still alive in there.

Mike Julian (author of O’Reilly’s Practical Monitoring) goes into detail on a
bunch of the ways your monitoring solution needs serious work in his
adorably titled “Why Your Server Monitoring (Still) Sucks” article.

We continue “telling it like it is” with Corey Quinn’s treatise on Amazon’s
CloudWatch, “CloudWatch Is of the Devil, but I Must Use It”. Seriously,
Corey, tell us how you really feel.


Why Your Server Monitoring (Still) Sucks:

Five observations about why your server monitoring still
stinks, by a monitoring specialist-turned-consultant.

Early in my career, I was responsible for managing a large fleet of
printers across a large campus. We’re talking several hundred networked
printers. It often required a 10- or 15-minute walk to get to
some of those printers physically, and many were used only sporadically. I
never knew what was happening until I arrived, so it was anyone's
guess as to the problem. Simple paper jam? Driver issue? Printer currently
on fire? I found out only after the long walk. Making this even more
frustrating for everyone was that, thanks to the infrequent use of some of
them, a printer with a problem might go unnoticed for weeks, making itself
known only when someone tried to print with it.

Finally, it occurred to me: wouldn’t it be nice if I knew about the problem
and the cause before someone called me? I found my first monitoring tool
that day, and I was absolutely hooked.

Since then, I’ve helped numerous people overhaul their monitoring
systems. In doing so, I noticed the same challenges repeat themselves regularly. If
you’re responsible for managing the systems at your organization, read
on; I have much advice to dispense.

So, without further ado, here are my top five reasons why your monitoring
is crap and what you can do about it.

1. You’re Using Antiquated Tools

By far, the most common reason for monitoring being screwed up is a
reliance on antiquated tools. You know that’s your issue when you spend
more time working around the warts of your monitoring tools or when
you’ve got a bunch of custom code to get around some major missing
functionality. But the bottom line is that you spend more time trying to
fix the almost-working tools than just getting on with your job.

The problem with using antiquated tools and methodologies is that
you’re just making it harder for yourself. I suppose it’s certainly
possible to dig a hole with a rusty spoon, but wouldn't you prefer to use a
shovel?

Great tools are invisible. They make you more effective, and the job is
easier to accomplish. When you have great tools, you don't even notice them.

Maybe you don’t describe your monitoring tools as “easy to use”
or “invisible”. The words you might opt to use would make my editor
break out a red pen.

This checklist can help you determine if you’re screwing yourself.


GNOME 3.30.2 Released, Braiins OS Open-Source System for Cryptocurrency Embedded Devices Launched, Ubuntu 19.04 Dubbed Disco Dingo, Project OWL Wins IBM’s Call for Code Challenge and Google Announces New Security Features:

News briefs for November 1, 2018.

GNOME 3.30.2 was released yesterday. It includes several bug fixes, and
packages should arrive in your distro of choice soon, but if you want to
compile it yourself, you can get it here. The
full list of changes is available here. This is the last planned point release
of the 3.30 desktop environment. The 3.32 release is expected to be
available in spring 2019.

Braiins Systems has announced Braiins
OS, which claims to be "the first
fully open source system for cryptocurrency embedded devices". FOSSBYTES
reports that the initial release is based on OpenWrt. In addition,
Braiins OS “keeps monitoring the working conditions and hardware to create
reports of errors and performance. Braiins also claimed to reduce power
consumption by 20%”.

Ubuntu 19.04 will be called Disco Dingo, and the release is scheduled for
April 2019. Source: OMG! Ubuntu!

IBM announces Project OWL as the winner of its first Call for Code
challenge. Project OWL is "an IoT and software solution that keeps
first responders and victims connected in a natural disaster”. The team
will receive $200,000 USD and will be able to deploy the solution via the
IBM Corporate Service Corps. OWL stands for "Organization,
Whereabouts, and Logistics", and it's a hardware/software solution that
“provides an offline communication infrastructure that gives first
responders a simple interface for managing all aspects of a disaster”.

Google yesterday announced four new security features for Google accounts.
According to ZDNet,
Google won’t allow you to sign in if you have disabled JavaScript in your
browser. It plans to pull data from Google Play Protect to list all
malicious apps installed on Android phones, and it also now will notify you
whenever you share any data from your Google account. Finally, it has
implemented a new set of procedures to help users after an account has been compromised.


Papa’s Got a Brand New NAS: the Software:

Who needs a custom NAS OS or a web-based GUI when command-line
NAS software is so easy to configure?

In a recent letter to the editor, I was contacted by a reader who
enjoyed my “Papa’s
Got a Brand New NAS”
article, but wished I had
spent more time describing the software I used. When I
wrote the article, I decided not to dive into the software too much,
because it all was pretty standard for serving files under Linux.
But on second thought, if you want to re-create what I made, I
imagine it would be nice to know the software side as well, so this article
describes the software I use in my home NAS.

The OS

My NAS uses the ODROID-XU4 as the main computing platform, and so
far, I’ve found its octo-core ARM CPU and the rest of its resources
to be adequate for a home NAS. When I first set it up, I visited the
official wiki
for the computer, which provides a number of OS
images, including Ubuntu and Android images that you can copy onto a
microSD card. Those images are geared more toward desktop use,
however, and I wanted a minimal server image. After some searching,
I found a minimal image for what was the current Debian stable
release at the time (Jessie).

Although this minimal image worked okay for me, I don’t necessarily
recommend just going with whatever OS some volunteer on a forum
creates. Since I first set up the computer, the Armbian project has
been released, and it supports a number of standardized OS images for quite
a few ARM platforms including the ODROID-XU4. So if you
want to follow in my footsteps, you may want to start with the minimal Armbian
Debian image.

If you’ve ever used a Raspberry Pi before, the process of setting
up an alternative ARM board shouldn’t be too different. Use another
computer to write an OS image to a microSD card, boot the ARM board,
and at first boot, the image will expand its filesystem to fill the card.
Then reboot and connect to the network, so you can log in with the default
credentials your particular image sets up. Like with Raspbian builds,
the first step you should perform with Armbian or any other OS image
is to change the default password to something else. Even better,
you should consider setting up proper user accounts instead of
relying on the default.


Internationalizing the Kernel:

At a time when many companies are rushing to internationalize their products and
services to appeal to the broadest possible market, the Linux kernel is
actively resisting that trend, although it already has taken over the
broadest possible market—the infrastructure of the entire world.

David Howells recently created some sample code for a new kernel library,
with some complex English-language error messages that were generated from
several sources within the code. Pavel Machek objected that it would be
difficult to automate any sort of translations for those messages, and that
it would be preferable simply to output an error code and let something in
userspace interpret the error at its leisure and translate it if needed.

In this case, however, the possible number of errors was truly vast, based
on a variety of possible variables. David argued that representing each and
every one with a single error code would use a prohibitively large number of
error codes.

Ordinarily, I might expect Pavel to be on the winning side of this debate,
with Linus Torvalds or some other top developer insisting that support for
internationalization was necessary in order to give the best and most useful
possible experience to all users.

However, Linus had a very different take on the situation:

We don’t internationalize kernel strings. We never have. Yes, some people
tried to do some database of kernel messages for translation purposes, but I
absolutely refused to make that part of the development process. It's a pain.

For some GUI project, internationalization might be a big deal, and it might
be “TheRule™”. For the kernel, not so much. We care about the technology,
not the language.

So we'll continue to give error numbers for "an error happened". And if/when
people need more information about just what _triggered_ that error, they
get it as English-language strings. You can quote them and google them without
having to understand them. That's just how things work.


There are places where localization is a good idea. The kernel is *not* one
of those places.

He added later:

I really think the best option is “Ignore the problem”. The system calls
will still continue to report the basic error numbers (EINVAL etc), and the
extended error strings will be just that: extended error strings. Ignore
them if you can’t understand them.

That said, people have wanted these kinds of extended error descriptors
forever, and the reason we haven’t added them is that it generally is more
pain than it is necessarily worth.


Simulate Typing with This C Program:

Tech Tips

I recently created a video demonstration of how to do some work
at the command line, but as I tried to record my video, I kept running
into problems. I’m just not the kind of person who can type commands
at a keyboard and talk about it at the same time. I quickly realized
I needed a way to simulate typing, so I could create a
“canned” demonstration that I could narrate in my video.

After doing some searching, I couldn’t find a command on my distribution that
would simulate typing. I wasn’t surprised; that’s not a common thing
people need to do. So instead, I rolled my own program to do it.

Writing a program to simulate typing isn’t as difficult as it first
might seem. I needed my program to act like the echo command, where it displayed
output given as command-line parameters. I added command-line options so
I could set a delay between the program “typing” each letter, with an
additional delay for spaces and newlines. The program basically does
the following for each character in a given string:

  1. Insert a delay.
  2. Print the character.
  3. Flush the output buffer so it shows up on screen.

First, I needed a way to simulate a delay in typing, such as someone
typing slowly, or pausing before typing the next word or pressing
Enter. The C function to create a delay is usleep(useconds_t usec).
You use
usleep() with the number of microseconds you want your program to
pause. So if you want to wait one second, you would use usleep(1000000).

Working in microseconds means too many zeroes for me to type, so I wrote a
simple wrapper called msleep(int millisec) that does the same thing
in milliseconds:

int
msleep (int millisec)
{
  useconds_t usec;
  int ret;

  /* wrapper to usleep() but values in milliseconds instead */

  usec = (useconds_t) millisec * 1000;
  ret = usleep (usec);
  return (ret);
}

Next, I needed to push characters to the screen after each
delay. Normally, you can use putchar(int char) to send a single character
to standard output (such as the screen), but you won’t actually see the
output until you send a newline. To get around this, you need to flush the
output buffer manually. The C function fflush(FILE *stream) will flush an
output stream for you. If you put a call to msleep() before each
fflush(), it will
appear that someone is pausing slightly between typing each character.


Firefox 63 Released, Red Hat Collaborating with NVIDIA, VirtualBox 6.0 Beta Now Available, ODROID Launching a New Intel-Powered SBC and Richard Stallman Announces the GNU Kind Communication Guidelines:

News briefs for October 23, 2018.

Firefox 63.0 was released
this morning. With this new version, “users can
opt to block third-party tracking cookies or block all trackers and create
exceptions for trusted sites that don’t work correctly with content
blocking enabled”. In addition, WebExtensions now run in their own process
on Linux, and Firefox also now warns if you have multiple windows and tabs
open when you quit via the main menu. You can download it from here.

Red Hat this morning announced it is collaborating with NVIDIA
to “bring a
new wave of open innovation around emerging workloads like artificial
intelligence (AI), deep learning and data science to enterprise datacenters
around the world.” Leading this partnership is the certification of Red Hat
Enterprise Linux on NVIDIA DGX-1 systems, which will provide “a foundation
for the rest of the Red Hat portfolio, including Red Hat OpenShift Container
Platform, to be deployed and jointly supported on NVIDIA's AI
platforms".
VirtualBox 6.0 Beta 1 was released
today. Note that this is a beta release
and shouldn’t be used on production machines. Version 6.0 will be a new
major release. So far, some of the changes include Oracle Cloud
Infrastructure integration and improvements in
the GUI design. See the forum
for more information.

ODROID recently announced it is launching a new Intel-powered SBC.
According to Phoronix,
the "ODROID-H2 is powered by an Intel J4105 Geminilake 2.3GHz quad-core
processor, dual channel DDR4 memory via SO-DIMM slots, PCIe NVMe storage
slot, dual Gigabit Ethernet, dual SATA 3.0 ports, and HDMI 2.0 / DP 1.2
display outputs". It is expected to be available in late November. See the
announcement for further details.

Richard Stallman yesterday announced the "GNU Kind Communications
Guidelines". Stallman writes that in contrast to a code of conduct with
punishment for people who violate the rules, “the
idea of the GNU Kind Communication Guidelines is to start guiding
people towards kinder communication at a point well before one would
even think of saying, ‘You are breaking the rules’.” The
initial version of the GNU Kind Communications Guidelines is here.
