I've been using computers for as long as I can remember. I started by pushing keys on an Acorn Electron while sitting on my mother's lap, and I started coding in BASIC on that very same computer. When I went to university for the first time, it was to study computer science, and I spent almost ten years of my life working in IT consultancy.
In the early days, I was full of wonder. I was amazed by all the things computers could do, and yet I could still run up against the limits of what was possible. (I was very proud of myself the first time I hit an "Out of memory" error while typing in a program.) In those days, the buzz was split between what computers could do and what computers would be able to do in the future.
But somewhere along the line, the train slipped off the tracks.
Very rarely do we reach the limits of what our computers are capable of. Instead, we flit from one gadget to another, trying to find something that works for us, rather than improving what we've got.
So what can we do differently? That's what I'd like to explore.
I want to look at how inertia and historical accident have led us to where we are today, and how a little thought could let us move forward. I'd also like to look at innovations that aren't working as well as we expected, and start trying to find the reasons why they're failing -- what didn't the designers think about?
In the end, it all comes down to thought -- a little more thinking, and computers could be so much more than they are today.