Yesterday the Sony Computer Science Laboratories--Sony’s elite corporate think tank--gave its first symposium in New York City, at the Museum of Modern Art. As is appropriate for an independent think tank, some of the ideas were visionary to the point of being dream-like, such as 3D-printable gardens. Others made perfect sense but will be tough to implement in the real world, such as a microgrid power system that combines DC (rather than AC) distribution with wind and solar generation to create energy-independent neighborhoods. Probably not practical for the developed world, but at the right price, ground-breaking in developing countries where large percentages of the population don’t have electricity to begin with.
But the most remarkable demonstration for me was very close to Sony’s own home turf: an artificial intelligence system that is able to listen to a musical performer and extract their “style”, rather than recording the actual notes. The system can then create new pieces of music in the style of the performer, or accompany a real musician in the style of a particular accompanist. Researcher François Pachet showed examples of a John Coltrane song done in the style of Wagner, a Brazilian ballad performed in the style of the a cappella group Take 6, and an original composition in the style of jazz legend Bill Evans. A good piece in The Atlantic took a more in-depth look at this last month.
Interesting detail: while the software will take bits and pieces of a composer’s work, it is constrained from copying so much as to constitute plagiarism. It’s a fine line, of course--one that hip-hop artists have struggled with in the process of sampling over the years. But Pachet took the intellectual property question an additional step. Recorded music, he pointed out, thanks to everything from illegal downloading to low-cost streaming services, is getting to be pretty low-value these days. “The real new asset of value,” said Pachet, “is style.”
I have a feeling that’s a concept that the lawyers over at Sony Music are thinking about right now. Sony co-owns the largest music library in the world, including, oh, the Beatles and Michael Jackson. If a computer is smart enough to listen to the entire Michael Jackson oeuvre, and then write “new” Michael Jackson songs, just where do those royalties go?
I saw a great presentation last week at a wearable computing conference, by the wearables group at Motorola--a team that’s really focused on building Google Glass-like equipment for industry, rather than consumers.
It was interesting that even at this small industry event, no one in the audience quite agreed on what to call these embryonic devices. Of the two most popular phrases--“head-mounted displays” or “smart glasses”--I think I’ll take the latter. Although now it looks like Google is making progress in making “glass” legally its own. (Hopefully if Apple introduces a version they can call them i-glasses.)
It made me realize that adoption of smart glasses will probably be a throwback to the patterns of the last century, when commercial applications came first and then the technology migrated to consumers. (Of course, that pattern has been turned on its head this century--employees tend to have better computers and phones in their homes than they do at work.)
It’s pretty clear that the first compelling applications of smart glasses will be in areas like public safety (firefighters, for example), equipment maintenance, and perhaps warehousing and logistics--areas where people need detailed, up-to-date information while keeping their hands free. Because the technology is so advanced, the first really usable smart glasses are going to be expensive as well.
It’s probably going to be a bit like the adoption curve of tablet computers. Twenty years ago, Fujitsu was already making a good business out of tablet computers for specialized purposes like healthcare, inventory and sales.
Then in 2001 Microsoft tried to introduce the Tablet PC more broadly, and it was pretty much only early adopters who bought it. I was one of them. Frankly, it was a bit of a pain--you had to use a special pen, for starters--but it certainly got lots of attention from curious passengers on airplanes. All in all, not unlike today’s Google Glass.
Finally, in 2010, touch screens plus better interfaces came along and the tablet was launched--twenty years after Fujitsu started selling them.
I suspect it will be the same with smart glasses--although they will go mainstream far more quickly than tablets did, thanks to Moore’s Law and our increasingly rapid acceptance of new technology.
Most of my speaking is for private organizations. But if you happen to be in New York on July 29, I’ll be speaking at the Adorama store at 42 W. 18th Street, not far from Union Square, at 4 PM and 6 PM.
Adorama, of course, is the 35-year-old camera store that has grown to be one of the leading online consumer technology retailers in the US. My topic is Tomorrow’s Technology--gazing out at my favorite year, 2020--so I’ll be looking at wearables, smart objects, cloud-based intelligence and more.
I confess that I get a lot of my best anecdotes from the audience, so there will be plenty of time for Q&A and discussion. I’m looking forward to hearing technology consumers talk about their thoughts on the future.
Tickets are free; Adorama suggests registration here. Hope to see you there!
The phrase "generation gap" first appeared in the Sixties, when the unprecedented social upheaval of that decade truly created a cultural chasm not just between generations, but even within families. These days "generation gap" sounds a bit old-fashioned, but I'd say that for the first time in forty years, the condition it describes is back.
Not, this time, in families--indeed, the children of the Baby Boomers are emotionally closer to their parents than any generation in history. (Also physically closer, when they move back home after college.)
Now the gap is in the workplace. No matter what kind of audience I speak to--from educators to lawyers to venture capitalists--and no matter what the topic is, during the Q&A session there's always some form of the question "What's up with these kids, anyway?"
The questions--well, more accurately, complaints--range from lack of social skills to attention span to reading ability to work ethic to that perennial favorite, "entitlement".
All of that makes for some lively discussion, but at the end I have to say: these are, in fact, your future employees and customers. And one way or another, you're going to have to learn to live with them.
That's why I'm looking forward to speaking at a conference this July at Colorado State University in Ft. Collins called "Why Hire Gen Y?". The agenda begins with the assumption that Gen Y--the Millennials--are not only an inevitable part of the workforce, but that they also will bring new strengths.
"What's up with those kids?" is a serious question that deserves a thoughtful response. And that's something that should particularly be appreciated by anyone who stood on the opposite side of the generation gap forty years ago.
During the last year I’ve watched the conversation among economists and labor experts go from the traditional “Automation always destroys jobs but then it creates more jobs” to “Uh, maybe it’s different this time.”
It is different this time. Between robots, sophisticated artificial intelligence, and flexible global outsourcing, we’re going to eliminate a lot of jobs and it’s no longer clear where the new jobs--at least well-paying ones--will come from. A detailed Oxford University analysis last year said that nearly half of current jobs can be automated.
Coincidentally, over the last few months I’ve been working with a Minneapolis-based group called Nexstar, which helps home service providers--electricians, plumbers, heating and air conditioning contractors--manage their businesses. Their members are financially successful local companies, often with multiple locations.
One problem they share, however, is this: where is the next generation of their employees coming from?
Being a plumber or electrician is not a sexy job for kids growing up in a world of high tech billionaires. Yet these are jobs that can support a solid middle-class lifestyle. As one electrical contractor told me: “I watch some of my kids’ friends go to college, graduate, and come back to live with their parents. By then, a guy who goes to work for me has bought a house, is getting married and thinking about having children.”
My advice to these companies was that they need to tell a new story about jobs in the trades--not just to potential employees, but to teachers, school board officials and parents. The story has three parts:
--These are jobs that can’t be outsourced or automated. No matter how much Internet bandwidth you have, you’ll never be able to hire someone in Bangalore to change your kitchen faucets. And high on the list of tasks that can’t be done by robots are those requiring physical dexterity in small spaces and flexible problem solving.
--These are jobs that, increasingly, use high technology like smart sensors and automated systems to control both electricity and plumbing. Home contractors are right at the interface between the physical world and the “Internet of things.”
--Finally, these are jobs that will help save the planet. We’re on our way to 9 billion humans by the middle of the century, many with middle-class lifestyles that will increasingly tax both our energy and water resources. By the Twenties, energy and water conservation will become major concerns--and an increasing part of electrical and plumbing and HVAC work.
In the United States, high schools measure their success by how many graduates go on to higher education. Yet many of those students fail to complete their degrees. And even if they do, four years of college may not give students much more than a lot of debt and a job at Starbucks. Maybe it’s time for policy-makers to broaden the definition of “a good job” beyond the confines of the office cubicle.
Regular readers know that my favorite form of communication is public speaking, and I'm occasionally asked where people might be able to see me. Alas, almost all of my engagements are private, so I usually don't have a good answer. But this spring and summer I have a few public speeches, and I'll mention the first of them today.
BlueWater Technologies is a Michigan-based company that specializes in high-end audio and video technology for installations and meetings. They're also known for an annual full-day event in Detroit called TechExpo that showcases the latest in AV tech and related gadgetry along with educational sessions (and meals, and entertainment). This year's TechExpo is May 21st at The Fillmore - Detroit. And they're currently offering tickets at half-price, so that makes it an even better deal.
Recently, between speeches, I’ve been restoring an old stone farmhouse in Sicily. In many ways Sicily is a step back in time (a great tonic for a futurist); it’s also often a reminder of how life works in a culture where the virtual world is still just gaining a foothold.
One morning in Sicily I needed to order a bathtub. I was buying from the same store where I’d initially seen the tub; they had sold me lots of other plumbing fixtures and already had all my financial information on file. All I wanted to do was place the order.
In New York City we would have done this by email. In Sicily, however, there was another visit, coffee, nice conversation, the careful writing of the invoice by hand, a bit more conversation, then “Ciao”. (Later I would receive, by email, an electronic version of the order; it had been entered into the computer after I left.)
On the same trip I asked the kitchen designer if he could email me PDFs of the final shop drawings. Instead he set up an appointment. A coffee, nice conversation, a careful and leisurely review of the shop drawings, some additional conversation, and only then—a copy of the shop drawings.
Efficient? Certainly not. But once I let go of my American timeframe, it was pleasant and enriching. Somehow it reminded me of a morning, years ago, in Senegal, when I watched a street merchant selling kola nuts, the caffeine-rich berries that are West Africa’s morning cup of coffee. Each customer would stand, perusing the tray of nuts, a conversation would begin and after a few minutes, the deal would be done. Finally I went up and bought my own kola nut. It cost something like one-sixth of a cent—an amount that to my American mind seemed radically out of scale with the amount of time each customer took to purchase.
Of course, in both Sicily and Senegal, the point wasn’t simply the transaction, but the social event as well.
For years I’ve told retailers that their virtual stores must duplicate the social environment of their physical stores; when I go into a virtual store I need to be able to look around and see if any of my friends are shopping there also.
That’s the element that “social shopping” startups are restoring to the world of e-commerce. Sites ranging from Polyvore to Pinterest let friends and family make suggestions and comment on your shopping, mimicking the social event of a group visit to the mall. And they’re driving a lot of purchases.
But what Sicily reminded me was that there is another social element retailers need to integrate: the relationship between the seller and the customer. Sure, sometimes you’d rather just make the order and get out. But other times, a salesperson who really knows their product is a great pleasure. Beyond simple information, there is also a very old and traditional social exchange that can enrich both customer and salesperson.
Some might suggest that Americans no longer value that kind of exchange, but I suspect they’re wrong. It’s something we need to duplicate in the virtual world, in some way that’s more tangible and social than pop-up instant messaging boxes. And perhaps more importantly, that salesperson-customer relationship, properly managed, will continue to be a strong advantage for the brick-and-mortar world.
Lately I’ve had a number of requests for my speech “How to Use the Downturn to Rethink and Thrive”, which demonstrates a couple of things. First: the economy still isn’t out of the woods (no surprise). And second: more and more organizations realize that the past five years have seen fundamental shifts in business and society that will hinder their recovery even when better times return.
In short: The rising tide, when it comes, may not lift all boats—unless you’ve been upgrading your boat in the meantime.
Thus there’s a new openness among executives to learning about future tools and techniques. But then there’s usually also a question: how do we sell these new ways of working to our managers and staff? Just this month I’ve heard versions of that question from school administrators in the Midwest, convenience store operators in Atlanta and a major insurance brokerage in California.
My answer? Consider our bodies’ immune system. It’s a wonder of nature: a team of various agents, from macrophages to T-cells, highly evolved to attack bacteria, viruses, allergens--anything that looks like a foreign invader.
And just like the body, organizations have also developed immune systems--but these systems attack outside ideas. Long ago, that was probably generally a good thing. Business moved slowly, the world didn’t change much, and most new ideas were probably just going to waste time and money.
But now, too often, the corporate immune system attacks good ideas. Like the body’s immune system, the corporate version can have multiple agents. Sometimes it might be the lawyers. Corporate lawyers don’t usually get fired for saying “no.” In fact I once worked with one for whom I prefaced every idea with the plea “Please don’t say no until I finish talking.”
Or, more surprisingly, it can be the sales staff. Salespeople like to know their product, so they appreciate New and Improved! But they don’t necessarily like Altogether New. (The newspaper business learned that years ago, when it trained its print salespeople to also sell online ads. The print people were never fully comfortable with the online lingo--and thus online ads never seemed to come up in the sales calls.)
In fact, immune agents can be any job-title in your business, up to and including the board of directors. As a result, when you encounter resistance to new ideas, you first need to identify which part of the corporate immune system has switched on. Next—and here’s the hard part for true innovators--you need to make your new idea look as much as possible like something that’s already being done. And then, with some gentle urging, you can get that new idea past the corporate immune system and into practice.
Earlier this week I had an interesting question from a group of executives in central Pennsylvania. We were discussing the implications of what I call “the virtualization of America” and how much of our lives and work will take place in cyberspace by the end of the decade.
During the conversation one of the executives, in her mid-thirties, said that for a variety of reasons—privacy, human contact, security—she really wasn’t that comfortable with social networks and email and all of the other digital accoutrements steadily consuming our lives. She acknowledged that she was in the minority, but suggested that there will always be people who simply don’t want to engage with the virtual world. What, she wanted to know, are those people going to do? In the future, will you be able to have a life without the Internet?
Coincidentally, there’s also been quite a bit of chatter lately among the digerati on a similar theme. Dave Roberts, a popular blogger at the environmental site Grist, announced that he was leaving the cybersphere, cold turkey, for a year--no blogging, no email, no tweets. He was burned out on virtuality.
At the same time the author of the influential legal blog Groklaw announced that she was shutting down her blog since, given recent revelations about US government surveillance, she could no longer offer her email correspondents and sources any hope of anonymity.
The desire to get “off the grid” is of course not new. In decades past, that meant detaching from civilization—generate your own power, grow your own food, dig wells, join the barter economy, etc. Just about every generation has a group that decides civilization is soon to end for one reason or another—Peak Oil, social collapse, the Year 2000—and heads for the hills. Some percentage decide after five or ten years that it’s hard work out in the wilderness, the apocalypse may not be quite so imminent and so they return to the world of pavement and petroleum.
Getting off the “virtual grid”, however, may be much more demanding. By the end of the decade, while it will still be possible to eschew all forms of electronic communication, it’s going to be harder and harder to move through society without involving some chips and data. Everything from cars and parking spaces to your electric meter and dishwasher will be connected to the Internet. Web-connected video cameras, perhaps running facial recognition software, will be everywhere. Already, even small businesses can hook up their existing video security systems to a cloud-based service that automatically reports customer demographics and movements. All I could tell the young executive in my meeting was that getting off the virtual grid will probably require even more effort than in the past. Head for the hills or the desert, give up on the money economy, become entirely self-sufficient in food and energy, detach entirely from news of the outside world…
And then make sure that your new high-tech solar panels aren’t actually Web-enabled.
I was doing an interview this morning about the movie “Back to the Future II” and its predictions about 2015. There were some hits and some misses in that 1989 movie--and you’ll undoubtedly hear more about that as 2015 approaches. But perhaps the most glaring miss was the appearance of a phone booth as part of the 2015 plot.
The interviewer asked me: How could the writers have missed ubiquitous cellphones, such an obvious element of the future?
Good question, and one I had already considered—because I’d published a near-future science fiction novel called Forbidden Sequence back in 1987, and I’d pretty much missed cellphones as well. Yet they’d been around since 1983, and Michael Douglas’ Gordon Gekko made them talismanic in the 1987 movie “Wall Street”.
So by 1989, the trend was unmistakable. My most prized possession at the time was a Motorola MicroTAC phone--the first portable phone that wasn’t the size and shape of a man’s shoe. Well, it wasn’t exactly my possession. It cost about $3000--over $5000 in today’s dollars--and was on loan from Motorola since I was the technology writer for Newsweek. The mobile phone I could actually afford in 1989 was mounted in the armrest of my car and had a shoebox-sized transmitter in the trunk.
So I should have had cellphones on my horizon in 1987, and, Hollywood being Hollywood, the writers of Back to the Future II probably actually owned them in 1989. I suspect the reason we repressed mobile phones is that they fundamentally change narrative. Much of traditional dramatic plotting back then revolved around one character knowing something crucial and trying desperately to inform the others.
At that moment in history, introducing mobile phones to the storyline would have made plotting much more difficult. Within a few years, of course, mobile devices began to appear in stories and films and now constant and ubiquitous communication is the basic assumption.
And younger writers have learned to use mobile devices as plot devices in themselves--see the final episode of season two of HBO’s “Girls”, where the entire soap opera finale with Hannah and Adam is dramatized through FaceTime.
It’s a great moment when Hannah accidentally turns on FaceTime and realizes that Adam has an iPhone. Even in the middle of an enormous emotional crisis, she blurts: “You have an iPhone?!” It’s an accurate reflection of how devices, increasingly, define us. As is always the case, technology may have taken away one kind of plotting trick but has given us plenty of new alternatives.