Thursday, 5 February 2015

It can't escape anyone's notice that while some computer applications are friendly and inviting from the word go, others are mind-blowingly complex.
Take, for example, this screengrab from Band-in-a-Box (the 2012 edition).
There are three toolbars on-screen, with a total of 73 visible buttons between them -- and a further 11 buttons are cut off for lack of screen space (at a resolution of 1366x768). There are 14 menus in the menu bar, plus several dozen other clickable bits. All the functionality of the software is laid out in front of you.
But is that a good thing? I think not.
Band-in-a-Box is, first and foremost, an accompaniment generator. The first thing any newbie will want to do is select a music style, plonk in the names of a few chords, and listen to the result. Which means, for one thing, playing Where's Wally just to find the play button.
The problem with Band-in-a-Box is that it has gradually grown over the years into a do-everything audio workstation. It's very powerful, but there is very little separation of concerns: it throws everything at the user simultaneously, overwhelming and overpowering them, and leaving the simplest tasks genuinely hard to do.
Personally, I keep giving up on the software when I can't work out how to do something very basic. Each time I come back to it, I figure out one more thing, then hit another roadblock. I installed it the other day and found out how to insert repeats... except the repeats didn't actually work. This is basic stuff. Why should I care how many technologies it can interface with if I can't make sections of my music repeat reliably? (A lot of the on-screen buttons are specific to MIDI, VSTi or DXi plugins.)
As a beginner, I want software that makes beginner tasks easy, without locking away the complexity I'll need later.
Another class of software that has been vexing me a bit recently is animation software: I want to make a cartoon, but I don't know where to start.
I know what I want to do, but the means of doing it are hidden from me. Take a look at this trailer for the game The Banner Saga:
Notice how little animation there is, with some scenes relying on camera moves and parallax effects to give any sense of life to the picture... and note how effective it is.
Most of us above a certain age will have memories of kids' TV programmes that were essentially readings of illustrated storybooks. At first, they used the illustrations from the book, simply zooming and panning a camera over them to make them look a bit more lively. Windows Movie Maker will almost do this, but it pans only, and as far as I can see it's fully automatic, meaning I don't even get to decide what's shown when.
The next step in the evolution of the TV storybook was translating the hand-drawn images into pseudo-3D using cardboard cutouts, and panning and zooming really did start to bring more life to the story.
2D cutouts on a 3D stage offer a heck of a lot of creative potential for very little work, and therefore offer a rewarding introduction into computer animation.
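To make the arithmetic concrete, here's a minimal sketch of how flat cutouts on a 3D stage might be projected to screen, assuming a simple pinhole-style camera. The Cutout class and project function are my own illustrative names, not any particular package's API:

from dataclasses import dataclass

@dataclass
class Cutout:
    name: str
    x: float      # horizontal position on the stage (world units)
    y: float      # vertical position on the stage (world units)
    depth: float  # distance in front of the camera (> 0; larger = further away)

def project(prop, cam_x, cam_y, focal=1.0):
    """Project a flat prop onto the screen for a camera at (cam_x, cam_y).

    The divide by depth is the whole trick: distant layers shift less as
    the camera pans (parallax) and are drawn smaller (perspective).
    """
    scale = focal / prop.depth
    return (prop.x - cam_x) * scale, (prop.y - cam_y) * scale, scale

# Panning the camera right: the near layer slides left faster than the far one.
stage = [Cutout("mountains", 0.0, 0.0, 10.0),
         Cutout("cottage",   0.0, 0.0, 3.0),
         Cutout("hero",      0.0, 0.0, 1.0)]

for cam_x in (0.0, 0.5, 1.0):
    print(cam_x, [(p.name, round(project(p, cam_x, 0.0)[0], 2)) for p in stage])

Every layer uses the same one-line formula and only the depth differs, which is exactly why the technique buys so much apparent life for so little work.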
Of course, they're not actually animated, but I don't think that's really a problem. Even though cel animation came to life almost fully formed in the first half of the 20th century, storybook TV (there may be a more accepted formal term for this -- feel free to tell me if you know it!) followed its own evolutionary path, one that slowly brought it closer to traditional cel animation and to stop-motion animation.
Perhaps the most distinctive TV storybook of my own childhood was Paddington Bear:
Notice the blend of 2D monochrome cardboard scenery, 2D sketchily drawn characters with minimal animation, and a fully animated stop-motion puppet as the main character with fully coloured 3D props. And it works -- we all loved it, because it took the least realistic element of the story and made it the most realistic looking. Even though the style was designed for low cost, it was an artistic decision.
This style offers a progressive introduction to animation that would leave the learner able to produce something of value at each and every stage... if the tools supported it. The problem is that 3D layering has been classed as something advanced, so no effort has gone into making it immediately available and understandable to the new user. Instead, developers assume that the first thing the new learner wants to do is make a ball bounce or a stick man walk...
Well, yes... that is the first thing new users want to do, because they turn up expecting to produce something that is recognisably a "cartoon". The problem with this is simple: cartoons are big, expensive and a lot of work. What newbies want is irrelevant, because it's out of their reach -- it is too complex a task. If a new user is going to stick at it, they need some sort of gratification early on, which means doing something creative, with an end-product that delivers satisfaction. If I spend an entire day at an animation workshop, I might end up with a character made of bones, a walk cycle, and perhaps two or three seconds of lip-sync. Am I going to upload that to YouTube? Would people watch it? Probably not, so I feel like I've done a lot of technical work without achieving anything of value.
Furthermore, even if I slog through and, after months and months of practice, get to the point where I can produce a 5-minute animated short, I'm still stymied by limited resources: I have only one voice, and cartoons typically require multiple voice actors.
A one-day workshop on storybook TV techniques, on the other hand, could see me instead leave with a complete short film.
It would start with selecting a story a few minutes in length. You would make a recording of the story, which would essentially be the entire soundtrack. Then you would start assembling the 2D props you need, either from a clip-art library or by drawing them on the computer. Build up the scenes by placing the flat props, then start playing with camera angles. You might even have time left to discuss each other's films and go back and re-edit them, and everyone gets to go home with something they feel proud of.
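As a sketch of how that pipeline might be represented internally -- a purely hypothetical structure, not any existing tool -- the recorded soundtrack fixes the timeline, and each shot is just a set of props plus a handful of camera keyframes interpolated across it:

def lerp(a, b, t):
    """Linear interpolation between a and b, t in [0, 1]."""
    return a + (b - a) * t

def camera_at(keyframes, t):
    """keyframes: time-sorted list of (time, x, y, zoom) tuples.
    Returns the interpolated (x, y, zoom) at time t (seconds)."""
    if t <= keyframes[0][0]:
        return keyframes[0][1:]
    for (t0, *a), (t1, *b) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(lerp(p, q, f) for p, q in zip(a, b))
    return keyframes[-1][1:]

# A ten-second shot: hold on the scene while the narration starts, then
# slowly pan right and push in towards the character.
shot = [(0.0,  0.0, 0.0, 1.0),
        (4.0,  0.0, 0.0, 1.0),
        (10.0, 2.0, 0.5, 1.6)]

for t in (0.0, 7.0, 10.0):
    print(t, camera_at(shot, t))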
And then you can focus on minimal animations on day 2.
You could start with environmental features such as waves and weather. Did you notice the amount of snow in the Banner Saga trailer? Quite often it's almost the only thing moving (at 1:05, for example), but because it respects the layering, it enhances the 3D effect while reinforcing our sense of time passing. Snow, rain and fog -- these could easily be handled procedurally by the animation engine.
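Here's one way engine-side procedural snow could work, reusing the same depth rule as the cutouts so that near flakes drift faster and read as being in front of near props -- again, a sketch under my own invented names, not a real engine's API:

import random

def make_snow(n, min_depth=1.0, max_depth=10.0, seed=0):
    """Scatter n flakes through the scene's depth range."""
    rng = random.Random(seed)
    return [{"x": rng.random(), "y": rng.random(),
             "depth": rng.uniform(min_depth, max_depth)} for _ in range(n)]

def step_snow(flakes, dt, fall_speed=0.2, wind=0.05):
    """Advance every flake; the perspective divide makes near snow move more."""
    for f in flakes:
        parallax = 1.0 / f["depth"]
        f["y"] = (f["y"] + fall_speed * parallax * dt) % 1.0  # wrap to recycle
        f["x"] = (f["x"] + wind * parallax * dt) % 1.0
    # Return back-to-front so nearer cutouts can occlude distant snow.
    return sorted(flakes, key=lambda f: -f["depth"])

flakes = make_snow(200)
for _ in range(25):          # one second of snowfall at 25 fps
    flakes = step_snow(flakes, dt=0.04)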
Then our characters can start changing their expressions a bit, or we could do that old anime trick of making the facial features wobble backwards and forwards a little to express tension or effort.
Individually, these things may seem insignificant, but because we started with a technique that allowed us to tell a full story, each thing we learn now has a very real effect on the quality of our storytelling. Instead of struggling through a full semester at college to produce a single solitary short, you'd reach the end of that semester with a DVD or two's worth of video -- multiple revisions and recuts of the same story. You would have learned so much more about the whole process of storytelling, which would more than make up for any specific technical skills you hadn't learned. In fact, I suspect you'd learn the technical stuff even better, because each technique would be learned in context, with a particular goal in mind. By the time you started trying to animate bones, it would be for a character you already know, in a story you know back to front. You'd know what the movement is trying to express, so the whole process would be more meaningful.
I could write my own software to do the first few steps of this, and I'm tempted to try, but in the end I don't really know if it's worth it. Why not? Because this shouldn't be separate software. If it is, it doesn't really serve as a useful introduction, because everything you learned to do in my software would have to be relearned in a "proper" animation package.
So fully-featured packages need to be designed from the ground up to support beginners: present the low-complexity options first, then gradually increase the complexity, so that the new user progresses effortlessly into a power user.
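What might that look like in practice? A crude sketch of the idea, with entirely invented names -- nothing more than a filter over a tool's existing commands, so the beginner sees three buttons rather than 73:

from dataclasses import dataclass

@dataclass
class Command:
    label: str
    level: int   # 1 = beginner ... 3 = power user

COMMANDS = [
    Command("Choose style", 1),
    Command("Enter chords", 1),
    Command("Play", 1),
    Command("Insert repeat", 2),
    Command("Mixer", 2),
    Command("MIDI routing", 3),
    Command("VSTi/DXi plugin rack", 3),
]

def visible_commands(user_level):
    """Only surface the commands at or below the user's current level."""
    return [c.label for c in COMMANDS if c.level <= user_level]

print(visible_commands(1))  # a beginner sees the accompaniment basics
print(visible_commands(3))  # a power user sees everything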
Friday, 24 January 2014
A new perspective on copyright freedoms
Just recently, the venerable GCC (GNU C Compiler), once the undisputed king of C compilation the world over, found itself with a genuine challenger in the form of Clang, built on top of the LLVM infrastructure. This has upset the Free Software Foundation's Richard Stallman, who described his objections in very adversarial terms on the FSF's mailing list.
The problem, in short, is that GCC is licensed under the GPL, a so-called "copyleft" license, which requires that any derivative works adhere to the same license. LLVM, on the other hand, is licensed under the University of Illinois/NCSA license, a permissive license derived from the MIT and BSD licenses, which lets users do pretty much anything they like with the source, including "closing" their own versions rather than feeding everything back into the community. Richard Stallman sees proprietary software companies as an "enemy" of the free software movement, but I simply don't agree.
The first question you have to ask is:
Who benefits?
Stallman asserts that it is the commercial companies that benefit, because they make money off other people's backs. But is that really true? In the final analysis, is it not the end user who benefits? After all, the software they can buy is better than would otherwise be possible, and because the companies have spent less work on it, they can afford to sell it more cheaply.

Let's look at it from a different, non-IT perspective.
Teachers have been sharing materials for years, typically without any sort of explicit license conditions. The internet has facilitated ever-easier sharing, but at the same time it has introduced the concept of licensing, with a massive amount of teaching material now shared under Creative Commons licenses. Unfortunately, the most common choice is CC BY-NC-SA (attribution, non-commercial use only, share-alike/copyleft). The share-alike part isn't a problem, but NC definitely is in certain fields, such as language teaching. A heck of a lot of language teaching takes place outside the state school system, particularly in English teaching: most of the English teachers in the world work for small private enterprises, teaching paying customers. That means most of the free material is off-limits to most of the teachers, which is madness.
After all, the end beneficiary of all teaching is the student. I'm an English teacher myself, and I don't profit by using other people's pre-prepared materials. Instead, I get time back, and that time I can use more productively in marking and individual feedback. The student gets a better lesson from me, and gets more for their money. In the end, isn't that the reason that people make free resources available?
Going back to compilers...
Every computer programmer relies on compilers, so if better compilers are available both commercially and for free, isn't everyone better off?
Now, my line of thought may seem to be heading towards "everything should be licensed BSD-style", but actually it's not. Instead, I want to suggest that in all technological advancement there are two major paradigms:
Raise the floor and raise the roof
What I mean here is fairly simple.

The floor is something that we all share, and that we can all do. In the world of physical technology, it is the accumulated portfolio of all expired patents and unpatented scientific papers.
The roof is the cutting edge; what only the best can achieve. It may be protected by patents or by trade secrets, but it presents a challenge to competitors to attempt to equal or better it with their own technology.
A BSD license establishes a new "floor" for a given software technology -- with Clang and LLVM, we now have a baseline standard, and no compiler writer has any excuse for shipping a product that is inferior. The quality of every product on the market is (in theory at least) guaranteed by a raising of the floor.
Innovation benefits from a short roof-floor distance
Innovation is hard. Not only do you have to come up with a good idea, but before you can raise the roof, you have to reach it. Imagine a newly graduated PhD student who has done his thesis on a new technique -- let's say a new way of compiling list structures for better runtime performance. His innovation alone is not a product he can sell, because it is not a full compiler. Now imagine there's no LLVM, and his only option is the GPL-licensed GCC.

There is a huge gap between the young doctor's floor and roof, and he is unable to write a whole compiler himself. He has only two choices: release the innovation for free and start his career from scratch, or sell out to one of the big software houses that already has a complete proprietary compilation package.
This means that...
The only people who profit from a large roof-floor distance are the proprietary software houses
Think about it. With the Illinois/NCSA license, our young doctor can bundle his new technique into the Clang/LLVM code stack and publish a new compiler that he can sell for a modest sum immediately. He keeps all the profits.
Without permissive licenses of this sort, he would be forced to sell to one of a vanishingly small pool of compiler makers -- and with so few buyers, they can afford to squeeze the seller.
So it's the big guy that profits.
And what does that mean for the users? Well, we know that most compiler makers have an agenda to push. Microsoft would keep the new feature Windows-only, Apple would keep it Mac-only, and Google would tie it into Go and Android's Dalvik runtime. A very small subset of the programming fraternity would benefit.
Meanwhile, the young doctor's own company would be selling it as part of a cross-platform stack that covers not only multiple OSes, but also multiple programming languages. A much larger part of the programming fraternity would benefit.
And it would almost certainly cost less. You'll need to license a full version of the development suite to get it from a large software house, whereas the independent vendor will be licensing a module that fits in with your existing software stack. (Which also means you get a free choice of IDEs, incidentally.)
Stallman's mistake
Stallman refuses to raise the floor, because he views "proprietary software" as a single entity. He fails to recognise the difference between small independents and the major corporations, and consequently isn't looking at how the two react differently to changes in the environment.

In effect, he is failing to learn from the history of his own sphere: the early days of Unix were almost entirely defined by floor-raising activities, with a proliferation of vendors improving and sharing the system.
The GPL prevents floor-raising, making it much harder for smaller players to gain any traction, and nowhere is this more evident than in the compiler market, where buyouts and consolidation around the turn of the century dramatically reduced the number of active players -- and at the same time, the only real free-software alternative was GCC.
With luck, the rise of LLVM will breathe new life into the compiler market as a whole, and perhaps even trigger experimentation in languages. And aren't we overdue a new programming paradigm anyway?
Tuesday, 14 January 2014
What is the purpose of this blog?
I've been using computers for as long as I can remember. I started by pushing keys on an Acorn Electron while sitting on my mother's lap, and I started coding in BASIC on that very same computer. When I went to university for the first time, it was to study computer science, and I spent almost ten years of my life working in IT consultancy.
In the early days, I was full of wonder. I was amazed by all the things computers could do, and yet I was still capable of reaching some of the limits of the possible. (I was very proud of myself the first time I encountered the message "Out of memory error" while typing in a program.) In those days, the buzz was split between what computers could do, and what computers would be able to do in the future.
But somewhere along the line, the train slipped off the tracks.
We very rarely reach the limits of what our computers are capable of, and we flit from one gadget to another trying to find something that works for us, rather than improving what we've got.
So what can we do differently? That's what I'd like to explore.
I want to look at how inertia and historical accidents have led us to where we are today, and how a little bit of thought would allow us to move forward. I'd also like to look at innovations that aren't working as well as we'd expected, and start trying to find the reasons why they're failing -- what didn't the designers think about?
In the end, it all comes down to thought -- a little more thinking, and computers could be so much more than they are today.