“Home Sweet Home” is going to need some new adjectives.
As I mentioned earlier, last month I spoke at CEDIA Future Home Experience--a conference for companies that design and install whole-house audio-video systems, as well as home security and home automation.
I made some predictions, and I also did a little experiment with my audience.
First, a few predictions:
--The home of the future will have facial recognition--it will know who is in the house, and recognize people as they approach the front door. When you walk into your living room, the lights, climate control and music will adjust to your preferences.
--Video screens will be so inexpensive they can be built into any object or appliance. The refrigerator door, for example, may become a true “home page”--a big video screen that shows everything from the household calendar and messages--Don’t eat the cake, it’s for company!-- to the kids’ artwork and even real-time fitness updates for dieters.
--All of this will be managed with voice commands--“House, turn off the outdoor lights at 11 tonight.” “House, start the air conditioner tonight when I'm five miles from home.” “House, activate the security system.” “House, have the children come home yet?”
Thus the house of the future, controlled through voice commands, is inevitably going to have a personality. Look at something as simple as the voice of Siri on today's iPhone; with the right questions, she’ll tell sly jokes or kid around a bit.
Hence my experiment. I asked the audience--over a thousand of the people who will create these houses--what kind of emotional experience the house of the future will create. They texted in their ideas throughout the speech, building the “word cloud” shown above.
“Sweet” doesn't appear once. But warmth, calm, relaxation, and delight all figure prominently. My favorite contribution, however, was that the home of the future will need a heart.
So the Apple Watch has received its first reviews, and they are tentatively positive. Tentative, because most of the reviews caution that unlike most Apple products, it’s a device with a steep learning curve. And it has fairly limited capabilities at present. However, like technology reviewers throughout history, they can’t help but factor in just how cool the Watch will be when it, uh, works better.
But that optimism is simply because technology reviewers also know that if the first generation of a product is promising, relentless engineering plus the acceleration of technology means that the subsequent generations will inevitably be much better.
In my brief experience with the Apple Watch, however, I had a different response. I found myself staring at the white fluoroelastomer “Sport Band” that held the Watch on my wrist, and wondered, why aren’t we using that space as well?
I suspect that the “wrist watch” form itself is a problem. When you look at a smartphone or tablet, the entire device is the screen. Smart watches inevitably give up more than 50% of their real estate to the strap, which just sits there.
Bendable LCDs, batteries and circuits are already well along in the laboratories and showing up in prototypes. By the end of this decade, the “smart bracelet” may become the preferred wrist display, in which the entire object is a curved touch screen that can display anything from a video screen to a numeric keypad to a list of emails. The bracelet form would also allow a larger battery, addressing a key shortcoming of today’s smart watches.
And, when you weren’t using it as a display, your wrist bracelet would be a new fashion opportunity. The entire band could display any kind of color, shape or pattern: designer screen savers for smart bracelets.
After all, even the shape of the traditional watch evolved as the technology improved. The first personal timepiece was the size of a large egg, worn around the neck. Then came the pocket watch. Finally, watches became even smaller, and the rest was, well, wristory.
We may see the same transition for the smart watch in the years to come. And someday the Apple Watch will seem as quaint as those timekeeping eggs once worn around the neck.
I’m spending a few weeks in Sicily to do some writing...and enjoy springtime in the Mediterranean. It’s a welcome relief after the East Coast winter.
Yesterday I made an appointment for a mid-afternoon conference call on the Monday after Easter, and then realized I’d made a mistake. Easter Monday in Sicily is a major holiday, a day when lots of long lunches and spontaneous invitations and impromptu visits take place.
My Sicilian friends would certainly understand if I said, no, we can’t visit because I have a business call...I am, after all, an Americano and we have strange habits. But it would be quite bad form. And I’d also probably miss some great food and companionship.
In New York, we joke that you know someone is a real friend when you can cancel lunch with them at the last minute if a business meeting comes up. It’s pretty much the opposite here.
Which leads me to ponder how much culture shapes our work habits. My audiences are often very concerned about the way that technology is blurring the lines between work and home. But much of that blurring comes from choices we ourselves make--as people, as organizations, as a society. The technology just makes it easier.
The FCC has decided to regulate the Internet as something closer to a public utility, or the federal highway system, than a cable television service. Critics of the FCC decision--which include, of course, all of the major Internet service providers and their lobbyists--say that this will spell doom for the Internet. There will be less innovation, higher prices, reduced competition, general bad news for the consumer.
When I hear these dire warnings, I think about my Internet and wireless experience in Italy, where telecommunications is distinctly more regulated than in the US.
At my isolated stone farmhouse in the midst of Sicilian cow country, I receive signals from three different cellular companies (all of whom offer high-speed Internet service), plus a choice of two high-speed DSL providers (if I ever get around to putting in a phone line), plus two operators of a sophisticated wireless technology called WiMAX (never adopted by the big ISPs in the United States).
That’s a lot of choices, in the middle of the countryside, on a Mediterranean island that is hardly the technology center of Europe. And all those services are cheaper than what one would pay in the United States. So if that’s what a bit more government regulation produces, then I say: bring it on.
What enormous year-end event could possibly cause media ranging from CNN, the BBC, Newsweek, and NPR to The Globe and Mail and Mental Floss to call the Practical Futurist for an interview?
Try the 1989 movie “Back to the Future 2”--which happens to be set in 2015 and is thus full of predictions for our upcoming year.
The reporters were particularly interested in what the film got wrong, which includes both Doc’s flying car and Marty McFly’s hoverboard. Of course futurists have been getting the flying car wrong since at least 1957, when Popular Mechanics featured a flying car on the cover. They cautioned in the article, however, that we wouldn’t actually have them until 1967.
And the hoverboard? Entrepreneurs have lately come up with a version, but it functions magnetically and thus only floats above metal surfaces. Marty’s hoverboard, on the other hand, floats over anything and the only way I can imagine it might work is anti-gravity. Alas, in 2015, it’s unlikely we’ll even have a complete theory of gravity.
On the other hand, BTTF2 got some things right: Marty uses a thumbprint to pay for a taxi ride (shades of the iPhone 6); TV screens are flat and wall-sized; video telephone calls are increasingly common.
Of course, BTTF2 wasn’t meant to be a futurist manifesto but rather an entertaining movie. And it certainly succeeded at being memorable, considering the number of journalists who are writing about it 26 years later. (We’ll see how many articles appear at the end of 2018 about “Blade Runner”, which was set in 2019.)
But it’s also a good reminder of the difference between futurism and science fiction. New technologies can run into all sorts of financial, governmental and social problems that the fiction writer can happily ignore. For example: even if you could build a reasonably-priced flying car, you’d need new infrastructure for landing, a whole new range of driver skills and the approval of government agencies from the Department of Transportation to the FAA.
And thus a good futurist needs to understand not just technology, but the worlds of business, government, and human nature.
Human nature was one thing that BTTF2 got right. My favorite prediction was that objects from the 70s and 80s would become sought-after antiques in 2015. Sure enough, a couple of weeks ago, an Apple I from the mid-’70s sold at auction for $360,000. Don’t ditch that 1984 Mac quite yet!
I was speaking in Iowa City earlier this week and was reminded again of how vital many Midwestern cities have become. At the same time, a new research group, City Observatory, released a report about where young college graduates are moving. As we already know, they like to move to cities. But, as an excellent New York Times summary points out, what’s interesting is that cities like Nashville, Austin, Portland, Buffalo, Pittsburgh and St. Louis have had the highest percentage increase of young graduates since 2000, all significantly higher than New York City.
I’ve long thought that this is a trend that will continue. As work becomes more virtualized, and cities like New York and Los Angeles become increasingly expensive, it simply makes sense that both employers and employees will look to cities that offer more affordable lifestyles. That’s going to be especially true when the bulk of the Millennial generation begins to think about having kids.
The Internet has not only made it more possible to work at a distance, but it also enhances the smaller city lifestyle. You don’t have to drive fifty miles to see a foreign film--they’re available, streaming. The Internet takes care of just about any exotic shopping needs. There’s the Metropolitan Opera in live HD in your local theater. And given the speed at which trends now spread across the country, the latest artisanal kale shop will probably show up in your neighborhood only a few months after it debuts in Brooklyn.
Yet real estate developers in the major cities continue to build new apartments at a record pace. In New York City alone, developers like to say there are another million people on the way. But I’m not so sure. People like cities, and I don’t expect any reversal of our species’ five-thousand-year march into urbanization. But when you add in the new factor of virtual work and life, I don’t think bigger (and more crowded and more expensive) will continue to be better.
Yesterday the Sony Computer Science Laboratories--Sony’s elite corporate think-tank--gave its first symposium in New York City, at the Museum of Modern Art. As is appropriate for an independent think tank, some of the ideas were visionary to the point of dream-like, such as 3D-printable gardens. Others make perfect sense but will be tough to implement in the real world, such as a microgrid power system that uses DC rather than AC, plus wind and solar generation, to create energy-independent neighborhoods. Probably not practical for the developed world, but at the right price, ground-breaking in developing countries where large percentages of the population don’t have electricity to start with.
But the most remarkable demonstration for me was very close to Sony’s own home turf: an artificial intelligence system that is able to listen to a musical performer and extract their “style”, rather than recording the actual notes. The system can then create new pieces of music in the style of the performer, or accompany a real musician in the style of a particular accompanist. Researcher François Pachet showed examples of a John Coltrane song done in the style of Wagner, a Brazilian ballad performed in the style of the a cappella group Take 6, and an original composition in the style of jazz legend Bill Evans. A good piece in The Atlantic took a more in-depth look at this last month.
Interesting detail: while the software will take bits and pieces of a composer’s work, it is constrained from copying so much as to constitute plagiarism. It’s a fine line, of course, that hip-hop artists have struggled with in the process of sampling over the years. But Pachet took the intellectual property question an additional step. Recorded music, he pointed out, thanks to everything from illegal downloading to low-cost streaming services, is getting to be pretty low-value these days. “The real new asset of value,” said Pachet, “is style.”
I have a feeling that’s a concept that the lawyers over at Sony Music are thinking about right now. Sony co-owns the largest music library in the world, including, oh, the Beatles and Michael Jackson. If a computer is smart enough to listen to the entire Michael Jackson oeuvre, and then write “new” Michael Jackson songs, just where do those royalties go?
I saw a great presentation last week at a wearable computing conference, by the wearables group at Motorola--a team that’s really focused on building Google Glass-like equipment for industry, rather than consumers.
It was interesting that even at this small industry event, no one in the audience quite agreed on what to call these embryonic devices. Of the two most popular phrases--”head-mounted displays” or “smart glasses”--I think I’ll take the latter. Although now it looks like Google is making progress in making “glass” legally its own. (Hopefully if Apple introduces a version they can call them i-glasses.)
It made me realize that adoption of smart glasses will probably be a throwback to the patterns of the last century, when commercial applications came first and then the technology migrated to consumers. (Of course, that pattern has been turned on its head this century--employees tend to have better computers and phones in their homes than they do at work.)
It’s pretty clear that the first compelling applications of smart glasses will be in areas like public safety (firefighters, for example), equipment maintenance, and perhaps warehousing and logistics--areas where people need detailed and up-to-date information while keeping their hands free. Because it's such advanced technology, the first really usable smart glasses are going to be expensive, as well.
It’s probably going to be a bit like the adoption curve of tablet computers. Twenty years ago, Fujitsu was already making a good business out of tablet computers for specialized purposes like healthcare, inventory and sales.
Then in 2001 Microsoft tried to introduce the Tablet PC more broadly, and it was pretty much only early adopters who bought it. I was one of them. Frankly, it was a bit of a pain--you had to use a special pen, for starters--but it certainly got lots of attention from curious passengers on airplanes. All in all, not unlike today’s Google Glass.
Finally, in 2010, touch screens plus better interfaces came along and the tablet was launched--twenty years after Fujitsu started selling them.
I suspect it will be the same with smart glasses--although they will go mainstream far more quickly than the tablet did, thanks to Moore’s Law and our increasingly rapid acceptance of new technology.
Most of my speaking is for private organizations. But if you happen to be in New York on July 29, I’ll be speaking at the Adorama store at 42 W. 18th Street, not far from Union Square, at 4 PM and 6 PM.
Adorama, of course, is the 35-year-old camera store that has grown to be one of the leading online consumer technology retailers in the US. My topic is Tomorrow’s Technology--gazing out at my favorite year, 2020--so I’ll be looking at wearables, smart objects, cloud-based intelligence and more.
I confess that I get a lot of my best anecdotes from the audience, so there will be plenty of time for Q&A and discussion. I’m looking forward to hearing technology consumers talk about their thoughts on the future.
Tickets are free; Adorama suggests registration here. Hope to see you there!
The phrase "generation gap" first appeared in the Sixties, when the unprecedented social upheaval of that decade truly created a cultural chasm not just between generations, but even within families. These days "generation gap" sounds a bit old-fashioned, but I'd say that for the first time in forty years, the condition it describes is back.
Not, this time, in families--indeed, the children of the Baby Boomers are emotionally closer to their parents than any generation in history. (Also physically closer, when they move back home after college.)
Now the gap is in the workplace. No matter what kind of audience I speak to--from educators to lawyers to venture capitalists--and no matter what the topic is, during the Q&A session there's always some form of the question "What's up with these kids, anyway?"
The questions--well, more accurately, complaints--range from lack of social skills to attention span to reading ability to work ethic to that perennial favorite, "entitlement".
All of that makes for some lively discussion, but at the end I have to say: these are, in fact, your future employees and customers. And one way or another, you're going to have to learn to live with them.
That's why I'm looking forward to speaking at a conference this July at Colorado State University in Ft. Collins called "Why Hire Gen Y?". The agenda begins with the assumption that Gen Y--the Millennials--are not only an inevitable part of the workforce, but that they also will bring new strengths.
"What's up with those kids?" is a serious question that deserves a thoughtful response. And that's something that should particularly be appreciated by anyone who stood on the opposite side of the generation gap forty years ago.