In almost any endeavor, you can go it alone, or you can get help. You can spend all of your time researching and practicing and tweaking until you figure things out, or you can buy a book or hire a consultant and have someone tell you what they have already figured out after spending years of their life on the topic.
Leveraging the work that has been done by others is a shortcut, and it is perfectly fine to take it. If you want to learn software development, you don’t need to build your own computer architecture; you can leverage the von Neumann architecture found in most modern machines. You don’t need to start from first principles. Someone already figured it out, and you can take advantage of it.
This kind of advice is ingrained in our culture.
Don’t reinvent the wheel.
Don’t spend your time on that task when you can hire someone to do it for you faster and at a higher level of quality.
This is the way it has always been done, and it’s the best way we know.
On the other hand, sometimes we advance the arts and sciences by starting over and exploring our assumptions.
In Bret Victor’s talk The Future of Programming in which he pretends to be an IBM engineer from 1973, complete with transparencies and a projector, he talks about the problem of people who think they know what they are doing:
He starts out by explaining the resistance to assembly language from people used to coding in binary. Coding in binary WAS programming, and assembly was seen as a waste of time and just plain wrong.
He goes on to talk about exciting advances in programming models from the late 60s and early 70s, and extrapolates some tongue-in-cheek “predictions” about how computers will work 40 years in the future, predictions that lamentably did not come about. Today we still code much the same way people did back in the 60s.
Ultimately, he warns that there is a risk to teaching computer science as “this is how it is done”.
The real tragedy would be if people forgot you could have new ideas about programming models in the first place.
The most dangerous thought that you can have as a creative person is to think that you know what you’re doing, because once you think you know what you’re doing, you stop looking around for other ways of doing things. You stop being able to see other ways of doing things. You become blind.
Game design applies here, too. Video games from the 70s, 80s, and 90s were quite varied. People were figuring them out because no one knew what they were. They tried everything.
Eventually some key genres popped out of this period of experimentation, and some control schemes and interfaces became common. It’s hard to imagine real-time strategy games without Dune 2’s UI conventions.
Five years ago, Daniel Cook wrote about reinventing the match-3 genre:
It occurred to me that game design, like any evolutionary process, is sensitive to initial conditions. If you want to stand out, you need to head back in time to the very dawn of a genre, strike out in a different direction and then watch your alternate evolutionary path unfurl.
When people think of a match-3 game, they have something in mind because all match-3 games tend to be similar. Triple Town ended up being quite different, yet it was still recognizable as a match-3 game, and people loved it.
Some people merely need to leverage existing infrastructure. People are using Unity for game development because, much like Microsoft’s XNA before it, it handles all of the boilerplate for you, and it also provides a lot of the technical tools in an easily accessible way, so you can focus on the development of the game rather than the technical details of making a game.
But some people are pushing what’s been conventionally thought of as possible. Spore, for instance, had to procedurally generate animations for characters that weren’t prebuilt, which meant someone had to figure out how to do so. There was no existing 3rd-party library to leverage. The shoulders of giants here weren’t high enough.
I’m part of a book club right now involving algorithms. We’re reading Steven Skiena’s mostly-accessible book The Algorithm Design Manual, and it’s been enjoyable and challenging. I haven’t studied algorithms since college, and I kind of wish I could go back and check my notes from class.
But what bothers me when reading this book is the warning against trying to invent a completely new algorithm on your own. Skiena argues that most problems can probably be adapted, by sorting the data or otherwise reframing them, so that an existing algorithm can solve them.
And he’s right.
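To make the point concrete, here is a small sketch of the kind of adaptation he means. It isn't an example from the book; the problem (checking whether any two numbers in a list sum to a target) and the function name are my own illustration. Rather than inventing a new algorithm, you sort the data first and then reuse a standard two-pointer scan over the sorted list:

```python
def has_pair_with_sum(numbers, target):
    """Return True if any two distinct elements of numbers sum to target."""
    data = sorted(numbers)  # reframe the problem as one on sorted data
    lo, hi = 0, len(data) - 1
    while lo < hi:
        total = data[lo] + data[hi]
        if total == target:
            return True
        if total < target:
            lo += 1  # need a larger sum, so move the low end up
        else:
            hi -= 1  # need a smaller sum, so move the high end down
    return False

print(has_pair_with_sum([8, 3, 5, 1], 9))   # 8 + 1 == 9 -> True
print(has_pair_with_sum([8, 3, 5, 1], 10))  # no pair sums to 10 -> False
```

The sort costs O(n log n) and the scan is linear, so the whole thing stays efficient without any novel invention, which is exactly the kind of reuse the book encourages.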
But someone had to have figured out these algorithms in the first place, right? Someone saw a problem and had no way to solve it, so they came up with a way, optimized it, and published it.
But today I’m expected to just learn what they did and use it, and I feel like I’m being told to stay away from actually trying to figure out a better way on my own, as if all of the algorithms that can be invented have been invented.
And if I just want to solve particular existing problems, it’s probably practical advice.
But if I want to explore an entirely new kind of problem, what am I supposed to do with old assumptions and solutions? Square pegs don’t go in round holes, and I don’t think we want a future where we are taught that round holes are the only kinds of holes in existence.