Last week, amidst the deluge of Consumer Electronics Show coverage, Farhad Manjoo of the New York Times wrote that we’re in an era of lots of exciting new ideas that aren’t quite ready for prime time: “Welcome to Prototype World...during which everything new will more or less stink.”
Nonetheless: those embryonic ideas still need to be shown to the public, to gain mindshare and traction in the press and marketplace. And that’s where the art of the demo comes in.
In the Nineties, when I was creating “new media” for Newsweek and The Washington Post, we were most definitely in Prototype World. We developers could see just how cool everything was going to be--someday. But thanks to primitive technology like pokey CD-ROM drives and 1200 baud modems, even our best products could be slow, unreliable, and hard to use. Usually, all three at the same time.
And so we learned how to demo them--at trade shows like CES, on live television, in front of advertisers or potential retailers. We knew the weaknesses of our products intimately, so we designed demonstration routines that cleverly skirted the bumpy patches.
If the digital video stuttered during fast-moving scenes, we’d show video snippets that were fairly stationary. If the program crashed when you went from viewing slideshows to reading text, then that particular feature wasn’t part of the demo.
One of my best tricks was with our CD-ROM newsmagazine. It was quite cool and far ahead of its time--but it ran on a little Sony player that took about ten seconds to start up after you clicked on the Play button. That was an unacceptably long time in the interactive world.
I quickly learned that it was possible to click Play, wait exactly nine seconds, and then hit the Pause button. When it was show-time, I’d release Pause and a second later the program--theme music, splash screen, animation--was running. But if I waited too long, the pause timed out and you sat through the ten second warmup again.
The technology was sufficiently new and sexy that we ended up on quite a few television shows. It was invariably unnerving, sitting backstage before the segment, trying to time the Pause trick so that it would be ready to go once we were on air. Just in case the trick failed and I had the ten-second delay, I also had some engaging patter that I could launch into, to distract the audience’s attention, just like a magician does during a trick.
In our minds, the demo wasn’t really dishonest--we were just emphasizing the best parts of the product. And sooner or later, when the technology caught up, it really would run like that. But other demo artists weren’t so scrupulous.
I once demonstrated an online version of Newsweek to an audience of potential advertisers, using a dial-up telephone line, just like our real customers used. It worked, but it wasn’t exactly fast--waiting for a full color picture to appear on the screen was a bit like watching paint dry. But I still thought it looked pretty good.
Then a competitor from another newsmagazine, one with a four letter title, got up to demonstrate the online version of his magazine. And it was fabulously fast! Pictures and text flew across the computer screen almost instantly!
I immediately knew that he wasn’t using the telephone line at all; he’d downloaded his entire site onto a hard drive. And thus that wasn’t a demo--that was cheating. But unfortunately, in those early days, most advertising folk didn’t really understand the difference between online and hard drives in the first place. So I lost that day.
The high point of my demo career came at a software conference, when one of our programmers introduced me to a group of friends: “Meet Michael. This guy could demo a dead dog!”
There were even demo jokes back then. My favorite was one in which a hacker dies and meets St. Peter at the pearly gates. St. Peter says “Today we have a special offer: you get to choose whether you want heaven or hell.”
The hacker asks if he can take a look before he decides.
Sure, says St. Peter, and snaps his fingers. In a moment the hacker is in heaven. It’s full of angels, playing harps, floating around peacefully on fluffy white clouds.
Another finger snap and the hacker is in hell: it’s a vast room of high powered computers, with huge flat screens, and dozens of young programmers pounding away at keyboards, with unlimited Diet Cokes and pizza and Doritos.
The hacker tells St. Peter that it may sound strange, but he thinks he’d rather go to hell. Yet another finger snap and now the hacker is standing in a pool of hot lava, with a little red demon poking him with a pitchfork.
Wait a minute, the bewildered hacker says, what happened to all the computers?
The little demon looks puzzled, and then says: “Oh--you must have seen our demo!”
So this year’s CES--whether it was self-driving cars, smart appliances, VR headgear, or humanoid robots--involved an unusually high proportion of carefully orchestrated demos.
And there’s nothing wrong with that, as long as one knows the difference between demo and real life.
Late last week I visited the CEDIA conference--a long-time gathering of “custom integrators”--the professionals who, traditionally, installed high-end whole-house audio-video systems. Think home theaters with huge screens, floor-shaking sound, custom leather seats and a popcorn cart. But over recent years, CEDIA members have increasingly found themselves also installing smart homes. And this year, their conference, previously called CEDIA Expo, was renamed Future Home Experience.
I found an audience very sensitive to the ground shifting under their feet--the entrance of giants like Google and Apple into their territory, as well as ambitious young start-ups that aim to build the voice-activated intelligences that will control everything in the home from the front door locks to the window shades.
It’s going to be an interesting few years for everyone in the industry, but it’s clear that the long-promised smart house is finally arriving and the business opportunity is enormous.
Here’s an interview I did recently, with seven predictions for the home of the future.
Anyone who flies probably cringed at the reports of American Airlines’ massive data fail yesterday--stranding passengers, canceling flights, creating general chaos in a half-dozen airports. It’s not the first time for American--a similar glitch grounded 400 flights a couple of years ago. And of course United Airlines managed a similar data faceplant in July, when a failed router grounded all its aircraft for over an hour.
It made me think of an intriguing session I’m helping with at the annual DellWorld 2015 conference next month in Austin. It’s called “Inventing the Data Center of Tomorrow,” and it takes in all the implications of real-time data analytics, cloud computing, the Internet of Things, and ubiquitous mobility.
But the element of the session that’s relevant to today’s airline story is the notion of using smart objects, sensors and software to monitor the ongoing health of the IT infrastructure itself--to predict upcoming component failures and maintenance issues before they turn into system crashes.
Continuing with the airline theme, it’s not unlike the array of smart sensors that are now built into jet engines to monitor performance. Some of those systems are so smart they can radio ahead to the next airport to order a replacement part before the plane lands.
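To make the idea concrete, here’s a minimal sketch of what that kind of self-monitoring might look like--in Python, with invented component names, sensor readings, and thresholds standing in for whatever a real monitoring stack would actually provide:

```python
# Minimal sketch of predictive health monitoring for data-center components.
# Every name and number here is hypothetical--illustration only, not any
# particular vendor's API.
from statistics import mean, stdev

def read_temperatures(component_id: str) -> list[float]:
    """Stand-in for a real sensor feed: recent hourly temperatures (deg C)."""
    return [41.0, 41.2, 40.8, 41.5, 42.1, 43.0, 44.2, 45.9]

def likely_to_fail(readings: list[float], z_threshold: float = 2.0) -> bool:
    """Flag a component whose latest reading drifts well above its recent baseline."""
    baseline, latest = readings[:-1], readings[-1]
    spread = stdev(baseline)
    if spread == 0:
        return False
    return (latest - mean(baseline)) / spread > z_threshold

def open_maintenance_ticket(component_id: str) -> None:
    """Stand-in for whatever ticketing or alerting system is in place."""
    print(f"Maintenance ticket opened for {component_id} before it fails.")

for component in ["core-router-12", "storage-node-7"]:
    if likely_to_fail(read_temperatures(component)):
        open_maintenance_ticket(component)
```

A real system would use far richer telemetry and proper anomaly-detection models, but the point is the same: the failing component gets flagged, and a ticket opened, before anything crashes.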
Should we be doing anything less with the data centers that increasingly control so much of our lives and livelihood?
Last week I gave a speech about the world in 2095. Not my usual timeframe--I’m happiest talking about the mid-Twenties. But thinking about it took me down a number of interesting paths.
One example: for perspective, I tried to imagine what it would be like for a person from 80 years ago--1935--to suddenly land in 2015. What would amaze him or her about our technology?
Actually, quite a few things would be familiar--cars, electric light bulbs, airplanes, recorded music. Even television, which our visitor would pretty quickly figure out was “radio with pictures.”
But there’s one branch of technology that we take for granted that might really surprise our visitor: materials science.
Specifically: I think Ms. 1935 would really stop in wonder when she spied her first zip-lock plastic bags. “These,” she would say, “are really amazing.” After all, back in 1935, the word “plastic” was itself only ten years old, and the nylon stocking was still five years in the future.
Flexible, waterproof, transparent bags that can seal themselves? Now that’s impressive technology.
I suspect that by 2095 there will be a raft of materials that will amaze us, from glass that generates electricity to plastics that, when damaged, heal themselves. And of course, they will all be taken completely for granted by the citizens of that era.
Today I was shopping for trash bags in an Italian supermarket. The Italians seem to make a large number of different-sized trash bags, all measured in centimeters, and for some reason, I can never remember the exact sizes that we use. So a few months ago I photographed the labels of the correct sizes and uploaded them to the extremely useful Evernote app, so I can just take out my phone, search “trashbags,” and there’s the picture.
It made me think about how wearable computers will change that simple action. In another few years I’ll have a wrist computer (not a watch--see my thoughts on that here). It will have voice recognition, so I’ll just murmur, “Hey wrist, trashbags?”, glance down, and the correct labels will be displayed.
And then a few years after that I’ll have smart glasses. I mean really smart glasses. They will know, through video, that I’m in the trash-bag aisle. When I hesitate more than a few seconds in front of the trash-bag choices, the image of the correct labels will float up in my vision. It will be, in fact, not much different than my own process of remembering--except the brain that’s doing the remembering will be somewhere in the cloud.
Although, come to think of it, some people say that's where my brain is most of the time anyway....
There’s a major public issue brewing that sooner or later will explode into common debate. You could probably trace its beginning back a few years, to when entrepreneur and technologist Martin Ford wrote a book called “The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future”. His new book, “The Rise of the Robots,” extends that thinking; it’s reviewed here in the New York Times by Barbara Ehrenreich, another sharp thinker about the nature of work.
With his first book, Ford raised the issue that we may be facing a new kind of automation. Previous bouts of automation have eliminated jobs but always created new ones, and for years most economists assumed that would be true of computers and robots as well. But the conversation has started to shift toward the notion that “this time is different.” Different in two ways: 1) this will impact white-collar workers as well as those who work with their hands, and 2) it is not at all clear where the new middle-class jobs will come from.
I see the trend everywhere among my clients--from so-called e-discovery software that is eliminating document review, once a common task for young lawyers, to programmatic buying tools in advertising agencies that replace traditional media buyers. And in her review, Ehrenreich points out that an increasing number of financial and sports stories are written by robots--and then does a pretty good job of suggesting how someday smart software could be used to replace book reviewers.
And recently I was introduced to the concept of Robotic Process Automation, which effectively automates many routine clerical tasks, without requiring fundamental changes to the company’s underlying software. A lot of clerical tasks involve, say, checking one number against another to make sure it was properly recorded, or moving data from one program to another program that isn’t fully compatible.
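To give a flavor of how simple that work can be, here’s a minimal sketch of the kind of cross-checking an RPA tool automates--the record layouts and amounts are invented purely for illustration:

```python
# Minimal sketch of the kind of cross-checking RPA automates: comparing the
# same invoice totals as recorded in two systems that don't talk to each other.
# All identifiers and amounts below are made up for illustration.
billing_system = {"INV-1001": 250.00, "INV-1002": 980.50, "INV-1003": 75.25}
accounting_system = {"INV-1001": 250.00, "INV-1002": 890.50, "INV-1004": 15.00}

def reconcile(source: dict, target: dict) -> list[str]:
    """Report invoices that are missing or mis-keyed in the target system."""
    issues = []
    for invoice_id, amount in source.items():
        if invoice_id not in target:
            issues.append(f"{invoice_id}: missing from the target system")
        elif target[invoice_id] != amount:
            issues.append(f"{invoice_id}: recorded as {target[invoice_id]}, expected {amount}")
    return issues

for issue in reconcile(billing_system, accounting_system):
    print(issue)
```

In practice, RPA tools typically do this by driving the same screens and forms a clerk would use, record after record, which is exactly why they don’t require changes to the company’s underlying software.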
That’s the kind of work that is often outsourced to India. Now, RPA advocates suggest that with these new efficiencies it may be possible to bring those jobs home to the United States. But that would still mean that, say, a thousand jobs lost in the US a decade ago might return to our shores--but this time employing fewer than one hundred people.
Ford points out that a country that consists of a wealthy elite and everyone else performing minimum wage jobs is not a healthy economy. Indeed, that’s generally agreed upon by both liberal and conservative economists. We need the kind of strong middle class that existed in the United States post-WWII, created in part by the unionization of factory workers. That’s the rationale behind the movement across the country right now to raise the wages of service workers to create a new middle class. One wage goal that is often suggested is $15 an hour.
Ironically, I recently saw a business plan for a sophisticated fast-food robot that would easily replace several workers. What struck me most was a graph in the plan that showed how the machine became a profitable investment when the wages of workers approached...$15 an hour.
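The arithmetic behind that kind of graph is straightforward. Here’s a back-of-the-envelope sketch--every figure in it is hypothetical, chosen only to show how the break-even wage falls out, not taken from the plan I saw:

```python
# Back-of-the-envelope break-even logic for a hypothetical fast-food robot.
# All figures are invented round numbers, used only to illustrate the math.
robot_cost_per_year = 90_000       # amortized purchase plus maintenance (hypothetical)
workers_replaced = 3               # hypothetical
hours_per_worker_per_year = 2_000  # roughly one full-time schedule

def breakeven_wage(annual_cost: float, workers: int, hours_per_worker: float) -> float:
    """Hourly wage at which the robot costs the same as the labor it replaces."""
    return annual_cost / (workers * hours_per_worker)

wage = breakeven_wage(robot_cost_per_year, workers_replaced, hours_per_worker_per_year)
print(f"The robot pays for itself once wages rise above ${wage:.2f} an hour.")
```

With those made-up numbers, the crossover lands right at $15 an hour: raise the wage and the robot’s payback case only gets stronger.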
In “The Rise of the Robots,” Ford makes an obvious but controversial suggestion: a guaranteed annual income. If robots and smart software are creating additional wealth, but that wealth is not being distributed beyond the owners of the machines, then the notion of income redistribution raises its head. And in a world in which jobs are created and destroyed quickly and workforce flexibility is important, a guaranteed annual income would give people the freedom to take some risks, as well as to participate in growing but insecure opportunities like Uber or TaskRabbit.
Like climate change, job loss through automation is one of those issues that creeps up very slowly, and is also highly susceptible to political manipulation. We will hear much, much more on this topic long before any solutions come into sight.
“Home Sweet Home” is going to need some new adjectives.
As I mentioned earlier, last month I spoke at CEDIA Future Home Experience--a conference for companies that design and install whole-house audio-video systems, as well as home security and home automation.
I made some predictions, and I also did a little experiment with my audience.
First, a few predictions:
--The home of the future will have facial recognition--it will know who is in the house, and recognize people as they approach the front door. When you walk into your living room, the lights, climate control and music will adjust to your preferences.
--Video screens will be so inexpensive they can be built into any object or appliance. The refrigerator door, for example, may become a true “home page”--a big video screen that shows everything from the household calendar and messages--Don’t eat the cake, it’s for company!-- to the kids’ artwork and even real-time fitness updates for dieters.
--All of this will be managed with voice commands--“House, turn off the outdoor lights at 11 tonight.” “House, start the air conditioner tonight when I’m five miles from home.” “House, activate the security system.” “House, have the children come home yet?”
Thus the house of the future, controlled through voice commands, is inevitably going to have a personality. Look at something as simple as the voice of Siri on today's iPhone; with the right questions, she’ll tell sly jokes or kid around a bit.
Hence my experiment. I asked the audience--over a thousand of the people who will create these houses--what kind of emotional experience the house of the future will create. They texted in their ideas throughout the speech, building the “word cloud” shown above.
“Sweet” doesn't appear once. But warmth, calm, relaxation, and delight all figure prominently. My favorite contribution, however, was that the home of the future will need a heart.
So the Apple Watch has received its first reviews, and they are tentatively positive. Tentative, because most of the reviews caution that unlike most Apple products, it’s a device with a steep learning curve. And it has fairly limited capabilities at present. However, like technology reviewers throughout history, they can’t help but factor in just how cool the Watch will be when it, uh, works better.
But that optimism is simply because technology reviewers also know that if the first generation of a product is promising, relentless engineering plus the acceleration of technology means that the subsequent generations will inevitably be much better.
In my brief experience with the Apple Watch, however, I had a different response. I found myself staring at the white fluoroelastomer “Sport Band” that held the Watch on my wrist, and wondered, why aren’t we using that space as well?
I suspect that the “wrist watch” form itself is a problem. When you look at a smartphone or tablet, the entire device is the screen. Smart watches inevitably give up more than 50% of their real estate to the strap, which just sits there.
Bendable LCDs, batteries and circuits are already well along in the laboratories and showing up in prototypes. By the end of this decade, the “smart bracelet” may become the preferred wrist display, in which the entire object is a curved touch screen that can display anything from a video feed to a numeric keypad to a list of emails. The bracelet form would also allow a larger battery--battery life being a key problem in today’s smart watches.
And, when you weren’t using it as a display, your wrist bracelet would be a new fashion opportunity. The entire band could display any kind of color, shape or pattern: designer screen savers for smart bracelets.
After all, even the shape of the traditional watch evolved as the technology improved. The first personal timepiece was the size of a large egg, worn around the neck. Then came the pocket watch. Finally, watches became even smaller, and the rest was, well, wristory.
We may see the same transition for the smart watch in the years to come, and someday the Apple Watch will seem as quaint as those timekeeping eggs once worn around the neck.
I’m spending a few weeks in Sicily to do some writing...and enjoy springtime in the Mediterranean. It’s a welcome relief after the East Coast winter.
Yesterday I made an appointment for a mid-afternoon conference call on the Monday after Easter, and then realized I’d made a mistake. Easter Monday in Sicily is a major holiday, a day when lots of long lunches and spontaneous invitations and impromptu visits take place.
My Sicilian friends would certainly understand if I said, no, we can’t visit because I have a business call...I am, after all, an Americano and we have strange habits. But it would be quite bad form. And I’d also probably miss some great food and companionship.
In New York, we joke that you know someone is a real friend when you can cancel lunch with them at the last minute if a business meeting comes up. It’s pretty much the opposite here.
Which leads me to ponder how much culture shapes our work habits. My audiences are often very concerned about the way that technology is blurring the lines between work and home. But much of that blurring is due to choices we ourselves make--as people, as organizations, as a society. The technology just makes it easier.
The FCC has decided to regulate the Internet as something closer to a public utility, or the federal highway system, than a cable television service. Critics of the FCC decision--which include, of course, all of the major Internet service providers and their lobbyists--say that this will spell doom for the Internet. There will be less innovation, higher prices, reduced competition, general bad news for the consumer.
When I hear these dire warnings, I think about my Internet and wireless experience in Italy, where telecommunications is distinctly more regulated than in the US.
At my isolated stone farmhouse in the midst of Sicilian cow country, I receive signals from three different cellular companies (all of whom offer high-speed Internet service), plus a choice of two high-speed DSL providers (if I ever get around to putting in a phone line), plus two operators of a sophisticated wireless technology called WiMAX (never adopted by the big ISPs in the United States).
That’s a lot of choices, in the middle of the countryside, on a Mediterranean island that is hardly the technology center of Europe. And all those services are cheaper than what one would pay in the United States. So if that’s what a bit more government regulation produces, then I say: bring it on.