Driver of change – governance

From the BBC comes this highly relevant article about the challenges of modern democracy. It identifies several flaws in the nature of Western governance and then pinpoints the key issue:

The time has come to face an inconvenient reality: that modern democracy – especially in wealthy countries – has enabled us to colonise the future. We treat the future like a distant colonial outpost devoid of people, where we can freely dump ecological degradation, technological risk, nuclear waste and public debt, and that we feel at liberty to plunder as we please.

The whole piece is worth reading.

The power of serendipity

I’m often asked what my job title is, and my standard reply is that I don’t have one. If pushed, however, I use the term ‘serendipity architect.’ Mentioning this phrase is usually a good indicator of the mindset of the person I’m talking to – if they’re open and want to know what the title means, I’m more likely to enjoy working with them.

Some people are dismissive of the term. In my experience those people are likely to be highly operational, and not the type who cope well with the inherent ambiguity of long-term thinking and its connections to strategy and innovation.

The power of serendipity is acknowledged occasionally in the mainstream business media, as in this recent example from McKinsey online:

Serendipity involves stumbling over something unusual, and then having the foresight or perspective to capitalize on it. What makes that such an attractive story? It’s the juxtaposition of seemingly independent things. In a serendipitous flash, one recent winner, an engineering firm, realized that the gear it designed for scallop trawlers could also be used to recover hard-to-get-at material in nuclear-waste pools. Surprising connections such as these set off a chain of events that culminate in a commercial opportunity. So to build this story line, think about the quirky combination of ideas that got you started and remember that serendipity is not the same as chance—you were wise enough, when something surprising happened, to see its potential.

By the way, the entire article is a good read…

Why leaders should focus on long term growth (new book)

The Vice Chairman of Korn Ferry and a McKinsey partner have published a short book that studies the benefits of long-term thinking. There’s an interview with the authors on the Wharton site that gives some context, and one extract stands out:

Just beware of the trends going on in the world. Larry Fink, the CEO of BlackRock, which manages $6 trillion in assets, says that it would be key for CEOs to realize some of the changes going on in society. For example, [consider] this shift towards automation and artificial intelligence. A McKinsey study we cite in the book says that [those technologies] could displace 30% of American workers.

CEOs who want to survive in the long run, and want their companies to survive in the long run, have to be aware of what’s going on in society, and try to steer their companies to address some of these issues. If they do that, they’ll get the support of their investors, customers and employees.

Turbulence ahead (Bain and Co in the HBR)

As most of my updates now go to my clients rather than here on my blog, this post may seem out of place compared to previous writings. However, I’ve become increasingly concerned about the failure of governments to understand the implications of:

  1. the interplay of complex systems that form the framework of modern society (including the complex system that is the climate)
  2. the effects of automation
  3. the alarming rise in inequality
  4. cybersecurity threats

There are significantly more risks to consider in the years ahead, and these have severe implications for stability.  Bain and Company has completed some good work on this recently, and a summary has just appeared on the HBR site.  I don’t usually include large quotes here, but this piece of work is a concise summary that is hard to beat (the highlights are mine):

The benefits of automation, by contrast, will flow to about 20% of workers—primarily highly compensated, highly skilled workers—as well as to the owners of capital. The growing scarcity of highly-skilled workers may push their incomes even higher relative to less-skilled workers. As a result, automation has the potential to significantly increase income inequality.

The speed of change matters. A large transformation that unfolds at a slower pace allows economies the time to adjust and grow to reabsorb unemployed workers back into the labor force. However, our analysis shows that the automation of the U.S. service sector could eliminate jobs two to three times more rapidly than in previous periods of labor transformation in modern history.

Of course, the clear pattern of history is that creating more value with fewer resources has led to rising material wealth and prosperity for centuries. We see no reason to believe that this time will be different—eventually. But the time horizon for our analysis stretches only into the early 2030s. If the automation investment boom turns to bust in that time frame, as we expect, many societies will develop severe imbalances.

The coming decade will test leadership teams profoundly. There is no set formula for managing through significant economic upheaval, but companies can take many practical steps to assess how a vastly changed landscape might affect their business. Resilient organizations that can absorb shocks and change course quickly will have the best chance of thriving in the turbulent 2020s and beyond.

The full report from Bain is also well worth reading, and is available here.

Luck = success?

A university study in Italy has simulated the effect of luck on wealth creation. The simulation showed that the richest individuals were also likely to be the luckiest. While the study focused on individuals, it also looked at the wider implications, concluding that casting a wide net for insights will provide better returns than placing specific bets.

If this research can be reproduced, it will give further support to the idea that expanding an organisation’s field of view creates long-term returns.
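
To make the mechanics concrete, here is a minimal sketch of this kind of agent-based luck simulation. It is my own toy illustration, not the researchers’ actual model: every agent starts with identical capital, talent is randomly distributed, and talent only matters when a lucky event happens to come along.

```python
import random

# Toy talent-vs-luck simulation (an illustrative sketch only, not the
# Italian study's actual model). All agents start with the same capital;
# lucky events double it (if the agent is talented enough to exploit
# them), and unlucky events halve it regardless of talent.
N_AGENTS, N_STEPS, EVENT_PROB = 1000, 80, 0.25

random.seed(42)
agents = [
    {"talent": min(max(random.gauss(0.6, 0.1), 0.0), 1.0),  # clamp to [0, 1]
     "capital": 10.0,
     "lucky_hits": 0}
    for _ in range(N_AGENTS)
]

for _ in range(N_STEPS):
    for a in agents:
        if random.random() >= EVENT_PROB:
            continue                               # no event this step
        if random.random() < 0.5:                  # a lucky event...
            if random.random() < a["talent"]:      # ...exploited only with talent
                a["capital"] *= 2
                a["lucky_hits"] += 1
        else:                                      # an unlucky event
            a["capital"] /= 2

richest = max(agents, key=lambda a: a["capital"])
avg_hits = sum(a["lucky_hits"] for a in agents) / N_AGENTS
print(f"Richest agent: talent {richest['talent']:.2f}, "
      f"lucky events exploited {richest['lucky_hits']} vs population average {avg_hits:.1f}")
```

Runs of this toy version tend to show that the wealthiest agents are those with the most lucky events rather than the highest talent, which echoes the study’s finding that the richest were also the luckiest.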

Full details here

Things creep up on you…

The Financial Times has published an article on the death of retail in the USA.  In addition to being an interesting read about the impact of technology on jobs, it also contains a great quote about the risk of not having a view over the horizon, and the boiling frog effect:

Wayne Wicker, chief investment officer of ICMA-RC, a pension fund for US public sector workers, says: “These things creep up on you, and suddenly you realise there’s trouble. That’s when people panic and run for the exit.”

I’m betting that the senior teams in the companies mentioned in the article have been sitting in their comfortable paradigms for too long, and that their own biases have been filtering out the signposts that might have helped them anticipate what’s coming.

Must read article on knowledge and AI

The smart, insightful and deep-thinking David Weinberger has published a must-read article on Wired about the implications of AI on the human concept of knowledge.  Rather than paraphrase his excellent writing, I’m going to extract some of the key sections:

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.

If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?

Even if the universe is governed by rules simple enough for us to understand them, the simplest of events in that universe is not understandable except through gross acts of simplification.

As this sinks in, we are beginning to undergo a paradigm shift in our pervasive, everyday idea not only of knowledge, but of how the world works. Where once we saw simple laws operating on relatively predictable data, we are now becoming acutely aware of the overwhelming complexity of even the simplest of situations. Where once the regularity of the movement of the heavenly bodies was our paradigm, and life’s constant unpredictable events were anomalies — mere “accidents,” a fine Aristotelian concept that differentiates them from a thing’s “essential” properties — now the contingency of all that happens is becoming our paradigmatic example.

This is bringing us to locate knowledge outside of our heads. We can only know what we know because we are deeply in league with alien tools of our own devising. Our mental stuff is not enough.

The world didn’t happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represent it than how the human mind perceives it. Now that machines are acting independently, we are losing the illusion that the world just happens to be simple enough for us wee creatures to comprehend.

Additional Conference Presentation Notes

Late last week I spoke at a conference in New Zealand that had an unusual audience. It was made up of deep thinkers who deal regularly with ambiguity at the sharp end of policy. The Q&A session was fascinating, and a lot of attendees asked for more information. With this in mind, here are a few bullet points that provide more context on some of the topics:

Practical Tips for Online Privacy

  • never connect to public wifi, even in hotels – these networks are magnets for hackers, and stealing your data over them is child’s play.
  • when going online away from work or home, either use your mobile phone as a hotspot or purchase a virtual private network (VPN) service. A VPN increases security and makes it harder to steal your data when online. I use this service. (See the sketch after this list for a quick way to verify a VPN is actually routing your traffic.)
  • cover the front-facing camera on your laptop – it’s relatively easy for hackers to access the camera even when it looks like it’s not turned on
  • when you’re browsing online, it’s very easy for advertisers to track you and show targeted ads across different websites. It’s a significant privacy intrusion that you can combat with this tool.
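
On the VPN point, here is a quick sanity check I find useful: compare your apparent public IP address before and after connecting. This is a minimal sketch under one assumption – it uses a plain-text IP echo service (api.ipify.org here; any similar service would do).

```python
import urllib.request

def public_ip() -> str:
    """Return the public IP address the wider internet sees for you."""
    # api.ipify.org returns your apparent IP as plain text; any
    # similar echo service could be substituted.
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    # Run once before connecting to the VPN and once after.
    # If the two addresses match, your traffic is not being tunnelled.
    print("Apparent public IP:", public_ip())
```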

VUCA

Reading/Viewing

  • a short video on the Cynefin framework for complexity
  • an interview with Cathy O’Neil – author of the book Weapons of Math Destruction – that explains more about software biases
  • a sobering view of the future is painted in the book Homo Deus. Here’s a review of the book in The Guardian


Human predictions about AI winning games are wrong

When Kasparov challenged the IBM chess-playing computer called Deep Blue, he was absolutely certain that he would win. An article in USA Today on 2 May 1997 quoted him as saying “I’m going to beat it absolutely. We will beat machines for some time to come.”

He was beaten conclusively.

In early 2016 another landmark in game-playing computing was reached when DeepMind’s AlphaGo challenged Lee Se-dol to a match of Go. The Asian game is an order of magnitude more complex than chess, and before the match Lee observed that “AlphaGo’s level doesn’t match mine.”

Other expert players backed Lee Se-dol, saying that he would win 5-0.  In the end he only won a single game.

Now the same team that developed AlphaGo is setting its sights on the computer game StarCraft II. This is a whole new domain for artificial intelligence because, as The Guardian points out:

StarCraft II is a game full of hidden information. Each player begins on opposite sides of a map, where they are tasked with building a base, training soldiers, and taking out their opponent. But they can only see the area directly around units, since the rest of the map is hidden in a “fog of war”.

“Players must send units to scout unseen areas in order to gain information about their opponent, and then remember that information over a long period of time,” DeepMind says in a blogpost. “This makes for an even more complex challenge as the environment becomes partially observable – an interesting contrast to perfect information games such as Chess or Go. And this is a real-time strategy game – both players are playing simultaneously, so every decision needs to be computed quickly and efficiently.”
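
To make “partially observable” concrete, here is a minimal sketch of the fog-of-war mechanic. It is my own illustration, nothing to do with DeepMind’s actual code: a player sees only the cells within sight range of their units, and everything else must come from remembered (and possibly stale) scouting.

```python
from dataclasses import dataclass

SIGHT_RANGE = 2  # cells; an arbitrary value for this sketch

@dataclass
class Unit:
    x: int
    y: int

def visible_cells(units, width, height):
    """Cells currently revealed by the player's own units."""
    seen = set()
    for u in units:
        for x in range(max(0, u.x - SIGHT_RANGE), min(width, u.x + SIGHT_RANGE + 1)):
            for y in range(max(0, u.y - SIGHT_RANGE), min(height, u.y + SIGHT_RANGE + 1)):
                seen.add((x, y))
    return seen

# A player's knowledge of the map is the union of everything ever
# scouted -- a memory that can go stale. That is what makes the game
# only partially observable, unlike chess or Go, where the full board
# is always visible to both players.
units = [Unit(1, 1), Unit(7, 7)]
scouted = set()
scouted |= visible_cells(units, width=10, height=10)
print(f"currently visible: {len(visible_cells(units, 10, 10))} of 100 cells; "
      f"remembered from scouting: {len(scouted)}")
```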

Once again, humans believe that the computer cannot beat them. In the Guardian article, the executive producer for StarCraft is quoted as saying “I stand by our pros. They’re amazing to watch.”

Sound familiar?

If AI can win at a game like StarCraft, it will be both exciting and troubling.

It will mean that an AI can reference ‘memory,’ take measured risks and develop strategy in a manner that beats a human. These three things – pattern recognition (from memory), risk taking and strategy – are skills that command a premium wage in economies that value ‘knowledge workers.’

In 2015 a research team at Oxford University published a study predicting that 35% of current jobs are at “high risk of computerisation over the following 20 years.” The StarCraft challenge might cause them to revise this prediction upwards.