Blog
There’s little doubt: artificial intelligence is coming after human jobs, for everyone from the customer service rep on the 800 line to the young lawyer with a shiny new degree.
The key to this onslaught is machine learning—software that can train itself for jobs, rather than depend on strict programming by humans. The technology, in development for decades, has accelerated rapidly in the last ten years.
An early example of the technology came in 2016, when a computer program called AlphaGo beat the world champion at the ancient game of Go. Go is an incredibly complex game, played on a board of 361 points, but with fairly simple rules—meaning it has a multitude of possible moves. Chess has about 20 plausible opening moves; Go has hundreds. Go masters teach the game through metaphors and similes, rather than firm rules.
Researchers thought it would take until 2030 to teach a computer to win at Go. Then came self-teaching AI. Researchers at DeepMind programmed two computers with the rules of Go and then had them play each other, millions of times, learning constantly. They then pitted the self-taught software against the reigning world champion, and it won 4 games out of 5—entirely self-taught. Another Go master, watching the games, said, “It’s like another intelligent species opening up a new way of looking at the world.”
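For the technically curious, here is a toy sketch of the self-play idea in Python: two copies of the same simple learner play tic-tac-toe against each other and gradually improve a shared value table from nothing but the rules. It is an illustration of the principle under simplified assumptions, not DeepMind's actual method, and every name in it is invented for the example.

```python
# Toy self-play: the program learns tic-tac-toe only by playing against itself.
# A hobby-scale illustration of the self-play idea, not DeepMind's system.
import random

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, 'draw' if the board is full, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

values = {}            # board state -> estimated value for the player who just moved
EPSILON, ALPHA = 0.1, 0.2

def choose(board, player):
    """Pick a move: usually the best-looking one, occasionally a random one."""
    moves = [i for i, s in enumerate(board) if s == " "]
    if random.random() < EPSILON:
        return random.choice(moves)                 # explore
    def score(m):
        nxt = board[:m] + player + board[m+1:]
        return values.get(nxt, 0.0)                 # value of the position we'd create
    return max(moves, key=score)                    # exploit what we've learned so far

def play_one_game():
    board, player, history = " " * 9, "X", []
    while winner(board) is None:
        m = choose(board, player)
        board = board[:m] + player + board[m+1:]
        history.append((board, player))
        player = "O" if player == "X" else "X"
    result = winner(board)
    for state, mover in history:                    # nudge visited states toward the outcome
        target = 0.0 if result == "draw" else (1.0 if result == mover else -1.0)
        old = values.get(state, 0.0)
        values[state] = old + ALPHA * (target - old)

for _ in range(20000):   # AlphaGo needed millions of games of Go; a toy game needs far fewer
    play_one_game()
print(f"learned values for {len(values)} positions")
```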
In recent years, with names like cognitive computing or deep learning, self-teaching AI has been everywhere: sorting through piles of evidence for lawyers; reading X-rays; creating new formulas for pharmaceuticals or battery electrodes; even developing a novel alloy for a cheaper US nickel. AI has also moved into the visual world—making robot vision far smarter, and also creating new images on its own, from cloning dead movie stars to creating art that actually wins prizes in competition.
The latest shock has been the success of the machine learning program ChatGPT in imitating human writing. It’s called a “large language model”—software that has learned to write like a human by reading and analyzing the vast amounts of digital content stored on the Internet. All you do is give it a “prompt”, such as: “I would like you to write an 800 word article about the future of artificial intelligence, giving examples of how it will be used and what the impact will be for humans and work.” Moments later, ChatGPT comes back with the article.
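The same request can be made from software rather than from the chat window. Here is a minimal sketch using OpenAI's Python client; the model name and settings are illustrative assumptions, and the exact interface changes over time, so treat it as a sketch rather than a recipe.

```python
# A minimal sketch of prompting a large language model from code.
# Assumes the OpenAI Python client and an OPENAI_API_KEY environment variable;
# the model name below is an illustrative assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()

prompt = ("I would like you to write an 800 word article about the future of "
          "artificial intelligence, giving examples of how it will be used and "
          "what the impact will be for humans and work.")

response = client.chat.completions.create(
    model="gpt-4o-mini",                      # any available chat model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)    # the generated article
```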
The website CNET has already started to use AI to write its news articles. Multiple companies are developing customer service representative apps that use the new technology. (McDonald’s has for several years been testing robot order-takers in its drive-thru lanes.) Some bloggers use ChatGPT to craft their posts—you simply give the program a brief overview of what you’d like to discuss, and the software turns out a complete blog post. The results aren’t perfect, but you’ve basically got a first draft that’s close to finished. (A few bloggers have used ChatGPT to write a blog post on “What is ChatGPT?”)
And finally, high schools and colleges are already banning access to ChatGPT on campus, fearing that students will use it to write their essays. And in fact, some already do: one professor in Michigan busted a student who handed in a paper that was “suspiciously coherent and well-structured.”
Teachers are quickly adapting to the new reality. Some require students to write first drafts of essays while sitting in class. Software is being designed that will detect ChatGPT-authored essays. Colleges are even considering dropping the essay requirement on their student applications.
Those are defensive reactions. Some teachers are actively adopting ChatGPT in class as a teaching device, generating text that then drives classroom discussion. And, in fact, students need to be familiar with how to use automatic writing software—because it will ultimately be common in everyday life and business.
But perhaps the biggest lesson for teachers from ChatGPT is this: education must identify the unique human skills that AI and robots can’t duplicate. That will be crucial for future workers. I call these skills the Three C’s.
Communication with Empathy
An AI customer service rep will be extremely good at telling you everything about life insurance tailored to your needs. An empathetic human will be the one who talks you into raising the policy from $500,000 to $1,000,000.
Collaboration
There is a special energy in having multiple minds in the same room, brainstorming about a problem or challenge. Work to make AIs collaborate has been slow, and it’s not clear it will ever have the same power as the human version.
Creative problem solving
These are problems where the boundaries for a useful answer are unclear. If you’re a city looking to put in a new parking lot, AI will do a brilliant job of going through traffic density, accident reports, legal issues, zoning, and construction costs, to pinpoint the most efficient place for new parking. But an AI will probably not ask: “Do we really want a new parking lot?”
These three skills are, of course, innately human abilities—but they are skills that young students, surrounded by distracting technology, may not learn or practice on their own.
Students need to be taught these skills in real life, and to practice them with one another.
Not coincidentally, educators will soon face a challenge to their own profession: AI will ultimately do part of what teachers do today, particularly in factual areas like math or chemistry or grammar. And so, for their own job futures as well, teachers should begin to focus on the skills that only they can teach: the Three C’s.
Can government keep up with the accelerating rate of technological change? The answer is, usually not. In the US, at least, government moves slowly and is captive to multiple conflicting influences. Legislators react after problems arise, and even then may not fully understand the underlying technology.
Instead, insurance companies may become the real regulators of new technology.
Consider autonomous vehicles. Some states are passing legislation to allow self-driving cars, in part hoping to attract Google or Tesla or Uber dollars. But insurance companies are taking a wait-and-see attitude. They’re already planning how they’ll insure self-driving cars (who has the liability?), but they’re in no rush to do so--there are still too many unknowns.
As one insurance executive told me: “Politicians can make self-driving cars legal. The real question will be: can you insure one?”
Or take climate change. One immediate step, given increased extreme weather and sea-level rise, might be to limit construction on low-lying coastal land in hurricane zones. But government doesn’t want to tell voters, no, you can’t build a house there. Insurance companies, on the other hand, are perfectly willing to say they won’t insure it.
And insurers aren’t just saying “no.” One company I work with, for example, has a laboratory that tests building materials in extreme weather conditions. The goal: requiring resilient materials as a prerequisite for insurance coverage.
A very recent example is computer security. Security experts will tell you that most corporate hacking cases are not the result of brilliant hackers. They’re the result of careless security. And bad cybersecurity is starting to impact the market value of companies.
Sounds like another role for insurance. But until recently, cybersecurity insurance was a tricky proposition--insurance companies weren’t sure how to assess the risk. Now, however, insurance companies are beginning to formulate security requirements for companies who seek cyberattack coverage.
In the early 20th century, insurance companies had real impact on increasing factory safety, because they required certain levels of safety before issuing workers’ compensation insurance. A full century later, they may well do the same with cybersafety.
One trend is clear: the “virtualization” of our world has greatly accelerated. Work from home, telemedicine, virtual shopping, distance learning, socializing, exercise: more activities than we might imagine will move to the virtual world during the rest of this decade. This will impact almost all sectors of business and society.
The winners will be those who choose wisely what must stay in the real world and what is best done virtually.
A few more possibilities:
- Instead of fully restaffing, businesses will invest in artificial intelligence and robotics.
- Businesses will move to either “luxury, full-service” or “everyday low prices,” with diminishing focus on the middle market.
- A new focus on personal wellness, with widespread use of apps and wearable health sensors.
- Hyper-local social networks and community organization will grow in importance. The “sharing economy” may come to mean actual sharing, rather than Uber.
- Consumers will seek a sense of control and sustainability in their personal lives: health, shopping, transportation and more.
- Society will rethink the size, influence and responsibilities of social media and Big Tech.
- The COVID crisis will transition into another crisis: disasters due to extreme weather. Are there tools and practices from COVID that can help with this new threat?
And one hopeful prediction:
- Scientists master rapid vaccination development, governments create smart global health monitoring systems, and COVID becomes the last human pandemic in history.
I spent last summer writing in the farm country of Sicily, a place that usually seems very far from the future. It’s the land where ancient Greek myths lurk in the landscape and the local language, still widely spoken in lieu of Italian, is the oldest in Europe.
The other day I was talking to a young friend, Fabio. Fabio deftly uses all the latest tools of technology to promote his agriturismo business, but keeps them in a clear perspective. “The future,” he told me over a lunch of pasta con limone, “is sometimes the past.”
Fabio offered an example: when he inherited his grandfather’s citrus orchards they had fallen into disuse. Cheaper fruit from Spain and Morocco and Egypt had flooded the European market. But then organic food became popular and Sicily proved to be the gold standard for organic; most farmers there had never used chemicals in the first place. Now Fabio’s lemons are profitable again.
In a similar way I suspect that as artificial intelligence and robotics remove the human element from more and more of what we do, we will find skills from the past become more relevant again. The resurgence of handcrafted goods and food is one example; the art of conversation is another.
When we use new technology to reshape how we work or live, we shouldn’t forget the value of what has come before. As Fabio learned: sometimes the future turns out to need the past.
Recently, an HR director told me that her company is planning a “remedial social skills” course for some of its new employees.
What exactly, I wondered, does that include?
For starters, she said, how to decide when to text, when to send email, when to make a phone call, when to show up in person for a chat.
That makes sense, I said. It’s a kind of business etiquette. After all, someone had to teach the Baby Boomers not to type in ALL CAPS.
But the real focus, she said, would be this: how to start a conversation, and how to know a conversation is over.
I found that disturbing--until I thought about it a bit.
The generation entering the workplace now is the first to grow up with texting and instant messaging as central ways to communicate. Both are “asynchronous”--you always have time to think about your reply, even if all you text back is “LOL.”
Face-to-face conversation, on the other hand, is real-time and spontaneous. Some kids, of course, are naturally social. But not all. If you’re an awkward adolescent, a bit unsure about what to say, which communication method would you choose?
This doesn’t mean the problem is with the technology--texting and IMing are here to stay. The problem is that we adults didn’t realize there may now be another skill we need to start teaching, probably as early as elementary school.
The question of what we should teach will become ever more crucial as artificial intelligence enters the workplace. Skills like empathetic communication (which includes, among other things, conversation) and creative problem solving are two of the unique human abilities that machines won’t easily replace.
But at the same time, kids who grow up with one foot in the virtual world may have less and less opportunity--or need--to practice those skills. Some of the abilities we once took for granted, like conversation, may now need a bit of extra help in the classroom.
Amazon continues to expand its enormous presence in the robotics industry, particularly in terms of warehouse automation. Now both retail and fast food companies are also pushing forward in automating all aspects of their business.
Even so, employers continue to argue that automation will simply create new jobs for humans.
As employers struggle post-COVID to hire workers, it's growing clearer and clearer that the majority who can afford to do so will invest in artificial intelligence and automation. Unlike human employees, technology gets cheaper as it gets better at the job, and as a capital expense it's handy for company finances.
Long-term, robotics and artificial intelligence will fundamentally alter the face of work in ways that our society is very poorly equipped to handle. The "new jobs" will not appear out of thin air--smart management will need to look ahead to how to use existing staff to improve their quality of service.
It's a topic that very few politicians want to address, but which at some point later in this decade will rise to the level of a potential workforce disaster.
For decades now climatologists have agreed that the first symptoms of global warming would be extreme weather events. (Here’s my 1989 Los Angeles Times article on the early researchers in the field.)
And I’ve argued that it will be extreme weather events that catalyze public opinion to demand further climate action from both governments and corporations. As humans, we really can’t perceive “climate”--it’s just too long a time frame. What we do understand is weather.
We’ve always named the traditional forms of extreme weather: cyclones and hurricanes. But in 2015 the UK also started naming severe storms (Angus, in 2016, disrupted transportation throughout the country). And severe storm names are growing more popular in the US (Jonas, the same year, set numerous East Coast records).
This summer, Europe is in the midst of a record-setting heatwave, and it’s been named as well: Lucifer. In the act of naming extreme weather events, we take them more seriously, and perhaps we will ultimately demand that our governments do the same.
I worked last week with a major credit card company, and one topic was whether cash will disappear. Will there come a day when all transactions are electronic, perhaps using your smart phone--or even, say, just your fingerprint--and cash will be kept only in museums?
Some countries are almost there--in Sweden, for example, half the banks keep no cash on hand. Many restaurants and coffee houses no longer accept cash, and churches, flea markets and even panhandlers take mobile phone payments.
For merchants, going cashless lessens the threat of robbery and eliminates daily treks to the bank. This summer in the United States, Visa International announced it will give $10,000 grants to selected restaurant and food vendors who agree to stop accepting cash. (Merchants, of course, also pay a fee for every electronic transaction, significantly more in the US than in Europe.)
So is this the end of cash? As the saying goes, it’s complicated.
Electronic payments may be more convenient, but cash is still anonymous. As a result, cash fuels criminal ventures, the underground economy, and tax evasion. Large bills, in particular: $50,000 in $100s makes a stack only a couple of inches high. The EU will stop printing €500 notes, a criminal favorite, in 2018, and there are calls to eliminate the $100 bill in the US.
Governments, in short, might be just as happy to get rid of cash entirely. But many law-abiding citizens consider the privacy of cash a valuable option--even though they may not actually take advantage of it very often. It’s comforting to know that it’s there, and they’re likely to complain loudly if it’s threatened. When India removed some large bills from circulation in late 2016, the result was a months-long national crisis that nearly brought down the government.
My guess is that governments won’t go to the trouble of eliminating cash. What they will do is make cash increasingly less attractive to use. In southern Italy, where I spend part of the year, the “black” economy is huge--work is done off the books and paid for with cash. But the Italian government has gradually made it harder to withdraw or deposit even moderate amounts of cash at the bank without paperwork and questions. (The U.S. has similar bank regulations but for much larger amounts. For now.)
On the other end of the scale, next year Italy will also stop minting 1 and 2 cent coins. Merchants will still be allowed to price merchandise at, say, €1.99--but you’ll only get that price if you pay electronically. For cash, the price will be rounded up to €2. The result is another subtle nudge toward cashless transactions.
Cash is likely to be with us much longer than many futurists predict. The real question may be: who will bother to use it?
Sometimes I joke that we’ve been talking about the Millennial generation for so long, they got old.
Old enough, at least, to start families. And thus Millennials were a central focus at the Juvenile Product Manufacturers Association conference in California, where I spoke earlier this month.
The JPMA represents companies that serve the prenatal to toddler phase of parenting--car seats, strollers, feeding, furniture and, increasingly, baby monitors. And not just baby monitors, but really smart baby monitors.
The “Connected Nursery” was a big topic at the show. Sleep trackers like the Mimo, integrated into little kimonos or body suits, now connect wirelessly with, for example, your Nest thermostat, so if baby is too warm, the nursery heat turns down. Or your video monitor will notify you when baby starts to move. Smart scales track and record baby weight precisely. And smart diaper clips will let you know when baby needs changing.
There is even a smart sound machine that detects when baby is stirring and will play soothing natural sound, or lullabies, or project animations on the ceiling...and if all else fails, puts Mom on the line to have a two-way chat. All, of course, controlled by a smartphone app.
Clearly it’s early days for these devices, and physicians warn they’re no substitute for vigilant parents. There are even suggestions that some of these devices should be approved by the FDA. But as a trend, it’s inevitable.
And there’s more in store. Besides watching and enjoying baby, of course, the other new parent activity is worrying and asking for advice. Already there are simple applications for Amazon’s voice-powered Alexa that will verbally answer a limited range of parenting advice questions.
It’s not hard to imagine a future in which an artificial intelligence--like IBM’s Watson or Google’s DeepMind--can be loaded with an encyclopedic body of baby and childcare information. Mom or Dad will be able to ask aloud, any time, day or night, their crucial questions: “Is this rash normal?” And get an immediate, authoritative answer.
Or, that intelligent baby advisor in the cloud could monitor the smart scales and other monitors in the “Connected Nursery” so it can answer questions like “Is my baby’s weight normal today?”
The only question it probably can’t answer is whether your baby is the cutest baby ever. For that, you still need grandparents.
As a science writer I always looked for stories about the future, so it’s no surprise that I started covering global warming and climate change during the late 1980s. Recently, going through my files, I ran across one of those stories, from 1989, that ran on the cover of The Los Angeles Times Sunday magazine.
What’s interesting now is how clear the science was, even back then--and how relatively uncontroversial the topic seemed. I wrote about the researchers at the Scripps Institution of Oceanography, who were then arguably the leaders in atmospheric research.
The Scripps researchers seemed confident that there was still time to slow or even stop the warming trend, as long as society acted relatively quickly. I think back then, the global community’s recent success at banning Freon and other CFCs--to prevent the destruction of atmospheric ozone--was still fresh in memory. Of course the world would rally to prevent an even bigger hazard.
And back then, there were no climate change skeptics for me to quote--something I would have done in my normal science-writing practice. Certainly there was no one from the fossil fuel industry to quote: back then, their own researchers were also concerned about global warming.
What strikes me now about the story is its calm innocence, given how politically charged and divisive the issue has become in the United States. Back then, I don’t think anyone on the science side suspected what kind of opposition waited ahead as the fossil fuel industry moved to protect its commercial interests.
Ironically, they learned quickly. Within a couple of years, two of the Scripps researchers I profiled in the article were deep in the political thickets. Roger Revelle, sometimes called “the father of global warming,” was, under dubious circumstances, made co-author of an article that questioned the need for action on the issue. His young assistant, Justin Lancaster, publicly protested that his professor hadn’t been fully aware of the content of the article and that it didn’t reflect his views.
Very quickly, an early group of global warming deniers sued Lancaster. To avoid a lawsuit he couldn’t afford, the young researcher withdrew his statement--although years later, as Revelle’s apparent skepticism was repeatedly cited, he went back on the record.
But perhaps we should have suspected back then just how powerfully the fossil fuel industry would attack the science. It was, after all, much earlier in the century that another writer, Upton Sinclair, observed that “it is difficult to get a man to understand something when his salary depends upon his not understanding it.”
For anyone interested in a bit of scientific nostalgia, a PDF of the article is here.