Here’s the latest news from the Pragmatic Bookshelf.
The very engaging and useful Your Code as a Crime Scene: Use Forensic Techniques to Arrest Defects, Bottlenecks, and Bad Design in Your Programs is now in print and shipping. We have an article by its author, Adam Tornhill, in the upcoming April issue of PragPub.
And Ruby Performance Optimization: Why Ruby Is Slow, and How to Fix It is now out in beta. It’s by Alexander Dymo, it’s edited by, uh, me, and it’s great!
Seems to me that one big problem with the “create the social media platform, grow it, and figure out how to monetize it later” model is that you create existential uncertainty for anyone who might try to extend your service. You should want other developers to enrich your users’ experience, but if they don’t know what part of the ecosystem you’re someday going to wall off as your own and what parts you’ll open up for them, they attempt to extend your ecosystem at their peril.
This looks like a brilliant refactoring of industry, combining production and delivery, taking just-in-time to a new level, and eliminating unnecessary steps. As 3D printing gets more and more advanced, the kinds of products that could be produced this way would expand.
The plan suggests two other business models:
What happens if the trucks run out of raw material to feed to the 3D printer? Doesn’t this ultimately call for convenient filling stations for the truck/factories so they can top up as needed?
And then there’s the flip side of 3D printing: 3D scanners. You drop in an object and they output the code to reproduce it. We’ll need one in every home so you can say, “Hey, this whatzis broke, make me a new whatzis.”
Dr. Dobb’s ceased even online publication last year, having shut down the papermill a few years earlier. For those of you who feel a twinge of nostalgia over this, here’s some retro nostalgia: Dr. Dobb’s at 30.
If you’re a fan of Marvel superhero movies (and who isn’t?) or if your fandom goes back to the earliest days (you were a member of the Merry Marvel Marching Society, you weighed in on Vince Colletta’s inking of Jack Kirby’s pencils, and you remember when Roy Thomas was just another nit-picking fan writing letters to the editor), there’s a question you’ve probably been asking yourself for some time now:
Where is the API?
Wonder no longer, True Believer.
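Marvel’s developer site documents a simple auth scheme for its public API: each request carries a timestamp, your public key, and an md5 hash of timestamp + private key + public key. Here’s a minimal sketch of building those parameters — the keys shown are placeholders, not real credentials:

```python
import hashlib

# Placeholder credentials for illustration; real keys come from developer.marvel.com.
PUBLIC_KEY = "my_public_key"
PRIVATE_KEY = "my_private_key"

def marvel_auth_params(ts: str) -> dict:
    """Build the query parameters Marvel's API expects: a timestamp,
    the public key, and md5(ts + private key + public key)."""
    digest = hashlib.md5((ts + PRIVATE_KEY + PUBLIC_KEY).encode()).hexdigest()
    return {"ts": ts, "apikey": PUBLIC_KEY, "hash": digest}

params = marvel_auth_params("1")
print(params["ts"], params["apikey"], len(params["hash"]))
```

With those parameters attached, a GET to an endpoint like `https://gateway.marvel.com/v1/public/characters` returns character data. Excelsior!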
I need one of these. And I don’t even own a bike.
In fact, I think the developers of this product are thinking too small. Making a bike sound like a galloping horse is just fun. But how about equipping electric cars with a galloping-horse device? How about mandating that all electric (or hybrid) cars have galloping-horse devices factory-installed?
For whatever reasons, we are ignoring a serious threat to public safety posed by these silent vehicles.
I’m all for electric cars, but even better than an electric car would be an electric car that can’t sneak up on you. And even better than that would be if all electric cars sounded like galloping horses.
Let’s do this.
Here’s a podcast Paul Freiberger and I did about Fire in the Valley and vintage computing.
Chris Ford of ThoughtWorks has put together his list of the most important academic papers in computer science. This is becoming a thing: Michael Feathers and Fogus have also published such lists.
Chris set three criteria for inclusion in his list:
The paper must have changed the world. (I.e., it must truly be important.)
The paper must have changed his perspective. (This makes it personal.)
Only one paper is allowed from each decade. (This makes the list interesting.)
His list includes some choices that are hard to argue with, like Alan Turing’s “On Computable Numbers, with an Application to the Entscheidungsproblem” and Claude Shannon’s “A Mathematical Theory of Communication.”
I thought about putting together my own list, and maybe I will sometime, but I realized that all the papers I would choose are already on his.
Elon Musk is seriously worried about artificial intelligence. He’s donated ten million dollars to the Future of Life Institute to ensure that AI research is beneficial.
I’ll grant you that ten million is chump change to Elon Musk, but the Institute has the word “catalyze” in its mission statement, and we all know that this means they expect small nudges in the right places to have huge impacts.
I’m trying not to be cynical because I think that Musk’s concern is legitimate, and I hope humanity does take the right steps to avoid a future of killer robots and an Internet of Evil Things. And maybe the consciousness-raising that the Institute is doing will be enough. But I’m going to have to see a few practical programs before they get my ten million.