Posts by Roger Dennis

Innovation without the jargon, to give clear, tangible results.

Things creep up on you…

The Financial Times has published an article on the death of retail in the USA.  In addition to being an interesting read about the impact of technology on jobs, it also contains a great quote about the risk of not having a view over the horizon, and the boiling frog effect:

Wayne Wicker, chief investment officer of ICMA-RC, a pension fund for US public sector workers, says: “These things creep up on you, and suddenly you realise there’s trouble. That’s when people panic and run for the exit.”

I’m betting that the senior teams in the companies mentioned in the article have been sitting in their comfortable paradigms for too long, and their own biases have been filtering out signposts that might have helped them anticipate what’s coming.

Tools for thinking about the future

This HBR article from a couple of years ago has some good techniques for making better bets about how the future might evolve for specific outcomes.  They are most useful when you’re at the pointy end of a scenario exercise, rather than at the start.  The entire piece is a worthwhile read, and my three main takeaways can be summarised as follows (the first two are sketched in code after the list):

  1. When estimating data points that may occur in the future, make three estimates – one high, one low and a third that falls between them.  The middle estimate is much more likely to be accurate.
  2. In a similar fashion, make two estimates about future data points, then take the average.  It’s important to take a break between making the two estimates in order to avoid anchoring on the first.
  3. Create a premortem i.e. imagine a future failure and then explain the cause.
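
To make the first two techniques concrete, here’s a minimal sketch in Python. The helper names and the sales figures are my own illustrative inventions, not anything prescribed by the HBR piece:

```python
# Two of the estimation heuristics above, written as tiny helpers.
# All names and numbers are illustrative only.

def three_point_estimate(low: float, high: float) -> float:
    """Heuristic 1: make a deliberately low and a deliberately high
    estimate, then take the value between them - the middle estimate
    tends to be the most accurate."""
    return (low + high) / 2


def averaged_estimate(first: float, second: float) -> float:
    """Heuristic 2: make two independent estimates (ideally separated
    by a break, so the second isn't anchored on the first) and
    average them."""
    return (first + second) / 2


# Example: estimating next year's unit sales.
print(three_point_estimate(low=40_000, high=90_000))    # 65000.0
print(averaged_estimate(first=55_000, second=70_000))   # 62500.0
```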

Must read article on knowledge and AI

The smart, insightful and deep-thinking David Weinberger has published a must-read article in Wired on the implications of AI for the human concept of knowledge.  Rather than paraphrase his excellent writing, I’m going to extract some of the key sections:

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.

If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?

Even if the universe is governed by rules simple enough for us to understand them, the simplest of events in that universe is not understandable except through gross acts of simplification.

As this sinks in, we are beginning to undergo a paradigm shift in our pervasive, everyday idea not only of knowledge, but of how the world works. Where once we saw simple laws operating on relatively predictable data, we are now becoming acutely aware of the overwhelming complexity of even the simplest of situations. Where once the regularity of the movement of the heavenly bodies was our paradigm, and life’s constant unpredictable events were anomalies — mere “accidents,” a fine Aristotelian concept that differentiates them from a thing’s “essential” properties — now the contingency of all that happens is becoming our paradigmatic example.

This is bringing us to locate knowledge outside of our heads. We can only know what we know because we are deeply in league with alien tools of our own devising. Our mental stuff is not enough.

The world didn’t happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represent it than how the human mind perceives it. Now that machines are acting independently, we are losing the illusion that the world just happens to be simple enough for us wee creatures to comprehend.

NBR Column – Why you need to understand Facebook

Here’s the full text of my latest NBR column:

You might have seen the movie, you might already pay the company for advertising or you might simply be a user. No matter how you interact with Facebook, it’s arguably the one piece of software that everyone online today should understand in detail.

The company was started by Mark Zuckerberg in 2004 as a small business in a university dorm room in the US. The premise was simple – it was a method for people to update their social life on the internet so their friends could see what they were doing.

From this humble beginning the business has grown to the point where it is regularly used by 1.8 billion people, including almost 80% of American adults who are online.

The company now offers a range of compelling ways of keeping in touch with people, including live video, instant messaging and free calls to friends anywhere in the world. This last point is particularly relevant, as it raises the question of how Facebook can offer these services to billions of people without charging a subscription.

Facebook can offer these services free because it also shows advertisements – a lot of advertisements.  Last year the company made $US10.2 billion, primarily from advertising revenue.

Advertisers are attracted to Facebook because the average user spends almost an hour a day on the site, and the more time people spend on the site, the more advertisements Facebook can sneak in front of people. The company is showing more advertisements to users than it used to.

Checking for updates
To ensure people keep looking at Facebook, the company spends a lot of money working out how to make sure users constantly check the site for updates. The updates they’re viewing are not simply about their friends but also advertisements and information from commercial organisations including news outlets. Facebook offers people the opportunity to give their feedback on this information by clicking an icon titled ‘like.’ It’s important to note that there is no icon to ‘unlike’ something.

The updates are viewed in a user’s ‘news feed.’  Bear in mind that the news feed may contain what used to be known as news but is more likely to contain a mix of content, some of which might be from reputable media outlets. Almost any organisation can pay for updates that then appear in users’ news feeds. These updates may or may not look like advertisements.

Once users start to ‘like’ information in their news feed, detailed personal data starts to accumulate. Research has found that after a Facebook user clicks ‘like’ on 70 updates, the company knows that person better than their friends do. Past 170 likes, Facebook knows a user better than their parents do.

Knowing users at this level allows Facebook to tailor the information it delivers to each user so they spend more time on the site.  The company runs massive social experiments involving hundreds of thousands of users to understand how to manipulate information to boost time on the site and, in turn, boost advertising revenue.

One of the results of this strategy is that Facebook users only see information reflecting what they like, because viewing information that conflicts with their world view would risk them spending less time on the site.

Shaping public opinion
Another result is that Facebook is now such a compelling way to spend ‘free’ time that over 60% of millennials get their political news from their Facebook news feed. At first glance this might not seem important but it’s critical to understand the role of technology in shaping public opinion in today’s world.

To illustrate this, consider the curious example of the UK technology entrepreneur and commentator Tom Steinberg. He was against the UK leaving the EU, and his Facebook information feeds reflected his preference for this. What this meant was that the day after the result of the referendum, he could not find a single person celebrating the Brexit victory on the site.

Bear in mind that Steinberg is very internet-literate and should have been able to find at least one person in his Facebook network among the 17 million-plus people who voted to leave the EU.  However, as he supported the other side of the vote, Facebook filtered his information feed so it reflected only his own world view.

The implications of this start to get complex, so to recap:

  1. Facebook needs people to spend time using its software, so it can sell more advertising and generate larger profits.
  2. To achieve this, it uses psychological research to encourage people to return to the site many times a day.
  3. It also manipulates the information you see so it reflects your world views, which in turn makes you more likely to – you guessed it – spend more time on Facebook.
  4. The more time you spend on Facebook, the more likely you are to ‘like’ information updates, which then gives the company feedback that allows it to legitimately say that it knows billions of users better than their parents know them.

Political business model
At this point you may think this isn’t really a significant issue because, after all, it’s only Facebook.  However, the company’s influence now extends well beyond the virtual world and is having a real impact on the physical one.

Facebook recognises the influence it can now exert, and this translates into new business models.  One of these models is focused on politics, as the company points out on its own website, where it gives the example of how Facebook was a crucial tool in the election of a US senator.

On its site, there is a quote from one of the leaders of this campaign which states: “Facebook really helped us cut through the clutter and reach the right voters with the message that matters most to them. In a close race, this was crucially important.”

The key phrase here is “the message that matters most to them.” Now recall the point that over 60% of millennials get their political view of the world via Facebook. Combine these two points and Facebook makes it possible to target voters with the ‘right message’ in a way that simply hasn’t been possible before.

Granted, there’s a rich history of politicians manipulating the media, but the reach of Facebook makes the power of the software unprecedented.  To put this in a local perspective, research in 2015 revealed that more than two million New Zealanders use the software every day.

Suppressing the news
Consider a scenario where Facebook itself wants to influence an election – perhaps opposing a candidate who favours regulation that limits the influence of the company.  It would be remarkably easy for the company to suppress news and support for that candidate, without people even knowing it was doing so.

So what does this mean for the average Facebook user?

Next time you check your Facebook feed, consider what information you’re giving to Facebook, and how it might be used.  People freely give the company deeply personal information, and the power of that data gives the company both enormous profit and enormous influence. Most of the media headlines about Facebook focus on the former.

For most active Facebook users, the closest real-world analogy to the software is a casino where it’s free to play and the payout isn’t cash but information that makes you feel good about yourself.  For Facebook, the result is the same as a casino’s – a licence to print money.

Additional Conference Presentation Notes

Late last week I spoke at a conference in New Zealand with an unusual audience: deep thinkers who deal regularly with ambiguity at the sharp end of policy.  The Q&A session was fascinating, and a lot of attendees asked for more information.  With this in mind, here are a few bullet points that provide more context on some of the topics:

Practical Tips for Online Privacy

  • never connect to public wifi, even in hotels – such networks are magnets for hackers, and stealing your data on them is child’s play
  • when going online away from work or home, either use your mobile phone as a hotspot or purchase a virtual private network (VPN) service.  It increases security and makes it harder to steal your data when online. I use this service.
  • cover the front-facing camera on your laptop – it’s relatively easy for hackers to access the camera even when it looks like it’s not turned on
  • when you’re browsing online, it’s very easy for advertisers to track you and show targeted ads across different websites.  It’s a significant privacy intrusion that you can combat with this tool.

VUCA (volatility, uncertainty, complexity and ambiguity)

Reading/Viewing

  • A short video on the Cynefin framework for complexity
  • An interview with Cathy O’Neil – author of the book Weapons of Math Destruction – that explains more about software biases
  • A sobering view of the future is painted in the book Homo Deus.  Here’s a review of the book in The Guardian


NBR column – the state of AI

This is my NBR column from Feb 2017:

In June last year a fascinating aerial battle took place – not in the actual sky but in a virtual one, which was appropriate considering it was a battle of man against machine.

The man in question wasn’t an ordinary pilot but Gene Lee, a retired US Air Force pilot with combat experience in Iraq and a graduate of the US Fighter Weapons School. The machine he was battling was a simulated aircraft controlled by an artificial intelligence (AI).

What was surprising about the outcome was that the AI emerged as the victor. What was more surprising was that the computer running the software wasn’t a multimillion-dollar supercomputer but one that used about $35 worth of computing power.

Welcome to the fast-moving world of AI.

It’s an area that has attracted significant media focus, and justifiably so. Experts in the field see the deployment of AI as the dawn of a new age. Andrew Ng, chief scientist at Baidu Research, is one of the gurus in the field.

“AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”

Most of the current applications of AI focus on recognising patterns. Software is “trained” with vast amounts of information, usually with help from people who have manually tagged the data. In this way, an AI may start with images that have been labelled as cars, then, through trial and error guided by programmers, eventually recognise images of cars without any intervention.
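
To make the idea of “training” concrete, here’s a toy sketch in Python: a single perceptron that learns, by trial and error, to separate two kinds of labelled feature vectors. It’s a deliberately simplified stand-in for the deep networks described here, and every number in it is made up:

```python
# Toy illustration of supervised learning: a perceptron nudges its
# weights by trial and error until it labels the training data correctly.
# The "images" here are just hand-made two-element feature vectors.

training_data = [
    # (features, label) - 1 means "car", 0 means "not a car"
    ([0.9, 0.8], 1),
    ([0.8, 0.9], 1),
    ([0.2, 0.1], 0),
    ([0.1, 0.3], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for features, label in training_data:
        # Predict: weighted sum pushed through a step function.
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if activation > 0 else 0
        # Learn: adjust the weights in proportion to the error.
        error = label - prediction
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]
        bias += learning_rate * error

# After training, the model labels a new example with no intervention.
new_example = [0.85, 0.75]
score = sum(w * x for w, x in zip(weights, new_example)) + bias
print("car" if score > 0 else "not a car")  # car
```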

Extraordinary breakthroughs
This simple explanation of AI belies the extraordinary breakthroughs achieved with the approach, as illustrated by an experiment conducted by the English company DeepMind.

In 2015, DeepMind revealed that its AI had learned how to play 1980s-era computer games without any instruction. Once it had learned the games, it could outperform any human player by astonishing margins.

This feat stands in stark contrast to the battle waged two decades earlier, when an IBM computer beat Russian grandmaster Garry Kasparov at chess in 1997. To beat him, the computer relied on a virtual encyclopaedia of pre-programmed information about known moves. At no point did the machine learn how to play chess.

Winning simple computer games clearly wasn’t enough to prove DeepMind’s abilities, so a more challenging option was found in Go, an incredibly complex Asian board game with more possible board configurations than there are atoms in the visible universe.

To learn Go, the AI played itself more than a million times. To put this in perspective, a person playing 10 games a day, every day, for 60 years would manage only around 220,000 games (10 × 365 × 60 ≈ 219,000).

Despite the bold predictions of expert Go players, when the five-game match ended in March 2016 it was DeepMind’s AlphaGo that had beaten Lee Se-dol, one of the world’s best players.

The ability to “learn” can be leveraged in the real world. While gaming applications may excite hard-core geeks, DeepMind’s power was unleashed on a more useful challenge last year – increasing energy efficiency in data centres.

By analysing operational information – such as temperature, server demand and cooling pump speeds – the AI reduced the electricity used for cooling a Google data centre by an astonishing 40%. This may seem esoteric, but around the world data centres already use about as much electricity as the entire UK.

Potential implications
Once you start to consider the power of AI, the feeling of astonishment evaporates and is replaced with an unsettling feeling about the potential implications. For example, at the end of last year a Japanese insurance company announced plans to replace a third of one of its departments with an IBM AI.  In this example only 34 people were made redundant, but this trend is likely to accelerate.

At this stage, it’s useful to put this development in context and consider what jobs might be replaced by AI. Andrew Ng has a useful rule of thumb – “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”

What’s important about this quote is the term “near future.” Extend the timeline further and researchers theorise that the implications of AI for the workforce are significant.  One study published in 2015 estimated that, across the OECD, an average of 57% of jobs is at risk from automation.

This number has been heavily disputed since it was published, but the exact percentage doesn’t really matter. What is important is that AI will change the nature of jobs forever, and it’s highly likely that work in the future will feature people working alongside machines. This will result in a more efficient workforce, which in turn is likely to lead to job losses.

However, it’s not just the workforce that could change. The potential for this technology dwarfs anything humans have ever invented, and, just like the splitting of the atom, the jury is out on how things will develop.

One of the world’s experts on existential threats to humanity – Nick Bostrom at Oxford University – surveyed the top 100 AI researchers.  He asked them about the potential threat that AI poses to humanity, and the responses were startling. More than half responded that they believed there is a substantial chance that the development of an artificial intelligence that matches the human mind won’t end well for one of the groups involved.  You don’t need to work alongside an AI to figure out which group.

The thesis is simple: Darwinian theory applied to the biological world leads to the dominance of one species over another.  If humans create a machine intelligence, probably the first thing it would do is re-programme itself to become smarter.  In the blink of an evolutionary eye, people could become subservient to machines with intelligence levels impossible to comprehend.

The exact timeframe for this scenario is hotly debated, but the same experts polled by Bostrom thought that there was a high chance of machines having human-level intelligence this century – perhaps as early as 2050.

To paraphrase a well-worn cliché, we will live in interesting times.

Copyright NBR. Cannot be reproduced without permission.
Read more: https://www.nbr.co.nz/opinion/keeping-eye-artificial-intelligence

NBR Column – driverless cars

This is my NBR column from December 2016:

Since the invention of the first “horseless buggy” in 1891, there haven’t been many significant changes to the basic design of the car. There have been incremental improvements to the platform – such as better engines, increased safety and more comfort – but the core has remained unchanged. A driver from 1920 would be able to adapt to a modern car, and the reverse would also apply.

While a driver from the 1920s would be able to drive a car, a mechanic from the same era would no longer recognise the key components. Today’s new cars are equipped with collision avoidance sensors, traction control, ABS, air bags, reversing cameras, engine computers and media players. This technology means that new vehicles contain more software than a modern passenger aircraft and a laptop is more useful than a wrench when tinkering under the hood.

While this may be startling to some people, it pales into insignificance compared to what’s about to happen to the car when driverless vehicles become mainstream.

Since their first significant public demonstrations in 2004, driverless cars have evolved quickly. They have now been demonstrated in a range of situations, with manufacturers posting videos online showing just how well their machines work (usually in near-perfect conditions).

These advances have been enabled by developments in sensors, cameras and computing power. On their own, each of these required technologies was prohibitively expensive only a decade ago. Fast forward to now, however, and the cost has fallen to the point where it’s feasible to bundle them into a car.

For example, one of the key components is a device called a lidar, which creates a millimetre-accurate map of the world around the car. Early versions of lidar systems fitted to a car cost $75,000. Just last week one manufacturer announced a version with similar capabilities that would cost about $50.

Implications for ownership
While a lot of attention is on the technology in the car, the most astute analysts are focused on the second- and third-tier implications of driverless vehicles. This is the most interesting part of the discussion because cars are ubiquitous in most urban environments, and a change in their form and function has massive implications.

The most significant implication will concern the very notion of car ownership.

A car is one of the most expensive assets in a household, but it’s also one of the least used. Most of a car’s life is spent stationary, though the cost of ownership is justified through what it creates.

In modern society a car creates access to opportunity, and for cities without an efficient mass transit system, car ownership is the way people access opportunity.

However, the notion of car ownership is already being questioned: in some cities, people have calculated that using a car-sharing service is cheaper than owning a car. Driverless cars are the next evolution of on-demand mobility without ownership.

The most likely scenario to emerge in cities is that private car ownership will dwindle, and the demand for mobility will be met by fleets of vehicles available on demand and tailored to your requirements.

For example, a two-seater car could take you to a meeting, while a people carrier may stop past your house in the morning to collect your kids and take them to school.

Eliminating road congestion
Once you have fleets running in a city, and every car is sending data about its state, it becomes possible to optimise the road network in ways that simply aren’t feasible today. When you know exactly how many cars are on the road at any one time and where they are going, you can start to organise their routes to eliminate congestion.
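
As a toy illustration of the principle (entirely my own sketch in Python, not any real traffic system), a central controller that knows every car’s trip could assign each one to the route with the most spare capacity:

```python
# Illustrative only: a fleet controller assigns each car to the route
# with the most spare capacity, so no single road saturates.

capacity = {"motorway": 100, "arterial": 60, "backstreet": 30}
load = {route: 0 for route in capacity}

def assign_cars(n_cars: int) -> None:
    for _ in range(n_cars):
        # Pick the route with the most headroom right now.
        best = max(capacity, key=lambda r: capacity[r] - load[r])
        load[best] += 1

assign_cars(150)
print(load)  # demand spread across routes rather than jamming one road
```

Real routing would account for destinations, travel times and predicted demand, but the principle is the same: global knowledge enables global optimisation.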

Another implication of driverless cars is the remodelling of city streets to remove carparks – cars without drivers never need to be parked for hours on the kerbside.

The biggest benefit of driverless cars is likely to be the near elimination of road accidents. A car that’s operated by a computer will never get distracted by phone calls or fall asleep at the wheel. Some researchers have predicted that driverless cars have the potential to reduce road deaths by up to 90%.

Regulating for driverless cars is one of the biggest hurdles to their adoption, and for this reason uptake on private roads (which are free of regulation) has already begun.

To illustrate, some Australian mines have operated driverless trucks since 2008, and since their introduction productivity has increased and accidents have decreased. In New Zealand one of the first significant pilots of driverless vehicles will take place in 2017 when Christchurch airport will introduce a driverless shuttle bus on its private roads.

In the next few years the workforce will start to be affected by this technology, with truck drivers likely to feel it first. Already a delivery truck owned by an Uber subsidiary has driven almost two hundred kilometres on a US interstate highway in self-driving mode. This has profound implications for the three million truck drivers employed in the US and the industries that support them.

The next decade will be a transition period where driverless vehicles start to become commonplace in some situations. They’re unlikely to be widespread in cities as many experts believe that there are very hard problems that still need to be solved. For this reason it won’t be until after 2025 that we’re likely to see a dramatic change in the transportation fleet.

What makes this timeframe interesting is that, unlike many technology-driven changes that have crept up on business, this one is clear to see.  Organisations with the foresight to leverage insights about the changes created by driverless cars will do extremely well. Those that don’t will end up like the horseless buggy.

Copyright NBR. Cannot be reproduced without permission.
Read more: https://www.nbr.co.nz/opinion/fast-forward%C2%A0normalisation-driverless-cars-not-so-far


NBR – Monthly column

I’ve started writing a monthly column for a business weekly in New Zealand called The National Business Review.  The first column went online recently; it looked at the Singularity Summit in NZ and set the tone for future columns.

The column can be read on the NBR site here, or below:


Not many conferences in New Zealand attract more than 1400 people. Even fewer – perhaps none – of this size include a diverse range of professional directors, politicians, chief executives, teachers, university students, entrepreneurs and school pupils.

One that did was the three-day Singularity Summit in Christchurch. On stage were experts from Silicon Valley and New Zealand discussing how rapidly changing technologies would affect the world.

What was startling for many attendees was that many of these disruptive innovations aren’t vague predictions but are already in use – or about to be.

Science fiction author William Gibson once famously remarked that the future is already here – it’s just unevenly distributed. The truth of this was highlighted at the summit as speakers gave example after example of how entire industries are going to be upended as technology advances.

Given the audience size, this is clearly a hot topic and something that a lot of people are grappling with.

On the last day of the summit I talked to David Roberts, the opening and closing speaker, to get his insight on the level of interest.

“I think there really is something happening right now,” Roberts says. “My sense is that we’re at an inflection point.”

The international speakers were well placed to observe inflection points, as many of them are members of Singularity University – a think tank based in the heart of Silicon Valley. The name has its origin in the concept that artificial intelligence will surpass human intelligence in the next few decades, leading to a technological singularity where computers outperform people.

While the concept of the singularity is controversial, it’s clear the world our children will inherit will have a dramatically different working environment to the one we know today.

Software running on extremely fast computers can already perform better than humans in a range of intricate tasks, including driving cars, flying planes and playing complicated games.

Technology has enabled some startling developments.

University of Auckland researcher Mark Sagar began his presentation with a relatively dry discussion about creating computing “building blocks” for designing virtual avatars.

His work aims to create super-realistic computer-generated faces that respond to external stimuli just like a real person.

For example, staying within the view of a laptop camera means that the software can “see” a human face. This then triggers the software model to release virtual oxytocin, a neurochemical that is related to trust and bonding.

The end result is that the virtual face – which is controlled by the virtual brain – starts to smile.

“It’s like a Lego system for building brains,” he casually mentioned just before he showed the audience exactly what he meant.

At this point it’s fair to say Dr Sagar is a man who knows how to capture your attention.  When he demonstrated the end result on screen there was an audible gasp as the audience watched him interact with an extraordinarily life-like baby – or at least its face.

Using only his laptop, Dr Sagar’s virtual baby smiled when it was talked to and got anxious when he moved out of camera view. Although it couldn’t “see” the audience, if it could it would have seen 1400 jaws drop open.

Plenty of other jaw-dropping moments occurred during the event, and by the end of the three days it was clear that few organisations will be immune to the increasing pace of technological change.

While making predictions about the future is notoriously difficult, from a strategic standpoint it’s increasingly important to develop the capability to have an over-the-horizon view.

In a series of monthly columns I will take a closer look at some of the risks and opportunities presented by rapidly changing technology in areas such as driverless cars, artificial intelligence, employment, politics and the role of New Zealand organisations.

Copyright NBR. Cannot be reproduced without permission.
Read more: https://www.nbr.co.nz/opinion/fast-forward%C2%A0-roger-dennis-hold

Human predictions about AI winning games are wrong

When Kasparov challenged the IBM chess-playing computer called Deep Blue, he was absolutely certain that he would win.  An article in USA Today on 2 May 1997 quoted him as saying: “I’m going to beat it absolutely.  We will beat machines for some time to come.”

He was beaten conclusively.

In early 2016 another landmark was reached in game-playing computing when AlphaGo (DeepMind) challenged Lee Se-dol to a game of Go.  The Asian game is an order of magnitude more complex than chess, and before the match Lee observed that “AlphaGo’s level doesn’t match mine.”

Other expert players backed Lee Se-dol, saying that he would win 5-0.  In the end he only won a single game.

Now the same team that developed AlphaGo is setting its sights on a computer game called StarCraft II. This is a whole new domain for artificial intelligence because, as The Guardian points out:

StarCraft II is a game full of hidden information. Each player begins on opposite sides of a map, where they are tasked with building a base, training soldiers, and taking out their opponent. But they can only see the area directly around units, since the rest of the map is hidden in a “fog of war”.

“Players must send units to scout unseen areas in order to gain information about their opponent, and then remember that information over a long period of time,” DeepMind says in a blogpost. “This makes for an even more complex challenge as the environment becomes partially observable – an interesting contrast to perfect information games such as Chess or Go. And this is a real-time strategy game – both players are playing simultaneously, so every decision needs to be computed quickly and efficiently.”

Once again, humans believe that the computer cannot beat them.  In the Guardian article, the executive producer for StarCraft is quoted as saying: “I stand by our pros. They’re amazing to watch.”

Sound familiar?

If AI can win at a game like StarCraft, it’s both exciting and troubling at the same time.

It will mean that an AI can reference ‘memory,’ take measured risks and develop strategy in a manner that beats a human. These three things – pattern recognition (from memory), risk-taking and strategy – are skills that command a premium wage in economies that value ‘knowledge workers.’

In 2015 a research team at Oxford University published a study predicting that 35% of current jobs are at “high risk of computerisation over the following 20 years.”  The StarCraft challenge might cause them to revise this prediction upwards.

Making Sense of Current VUCA Levels: Carlota Perez

Among colleagues around the world at the moment there’s a definite recognition that VUCA (volatility, uncertainty, complexity and ambiguity) is increasing.  One of the more interesting theories about why comes from the work of academic Carlota Perez, who has studied long-wave change theories for three decades.  In a nutshell, she believes we’re currently transitioning from what she calls the “installation period” (when technology is developed) to the “deployment period” (when economic booms occur).  Perez believes the levels of VUCA we’re seeing now reflect that transition.

So how do you know when you’re in the gap between the two?  Here’s one metric that she uses to support her view:

During Installation, there is always strong asset inflation (both in equity and in real estate) while incomes and consumption products do not keep pace. This creates a growing imbalance in which the asset-rich get richer and the asset-poor get poorer. When salaries can buy houses again, we will be closer to the golden age.

In many countries around the world there is a profound disconnect between average income and the ability to buy a house. For example, in Canada the average home price was $480,743 in July 2016, while the average Canadian employee makes just over $49,000 a year (see the quick calculation after these examples).

In parts of the UK such as Trafford (and it’s important to note that this isn’t London) house prices are now 8.9 times average wages, and 7 times in Stockport. In Manchester the multiple reached 5.1 in 2015.

In New Zealand the average house price is now six times the annual household income.
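
As a back-of-the-envelope check, here’s a minimal sketch in Python using the Canadian figures quoted above (the UK and New Zealand numbers are already expressed as multiples, so only the Canadian one needs computing):

```python
# Price-to-income multiple implied by the Canadian figures quoted above.
average_house_price = 480_743   # CAD, July 2016
average_annual_income = 49_000  # CAD, approximate

multiple = average_house_price / average_annual_income
print(f"House prices are {multiple:.1f}x average annual income")  # ~9.8x
```

On Perez’s own metric – salaries being able to buy houses again – a multiple near 10 suggests the golden age is still some way off.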

One of the other key changes Perez points to as an indicator is the birth of new economic instruments:

…there need to be innumerable investments and business innovations to complete the fabric of the new economy. Here’s one small example: Millions of self-employed entrepreneurs work from home with uneven sources of income. Where are the financial instruments to smooth out their money flow so they can work and live without anxiety?

This sounds remarkably like the innovations surrounding the deployment of blockchain.  One of the best quotes I’ve heard about this technology:

If the Internet is a disruptive platform designed to facilitate the dissemination of information, then Blockchain technology is a disruptive platform designed to facilitate the exchange of value.

Perez notes two other indicators that can be used to spot the transition.  The first is more financial regulation at a global level.  However, the complexity here is that in a world heading away from globalisation, it’s very difficult to bring nations together to agree on these types of initiatives.  It may take another severe financial crisis to induce a global agreement.

The final indicator is increasingly stable industry structures, which I’d argue is currently harder to discern.  However, one signal may be the consolidation of internet traffic by Google, Apple, Microsoft, Facebook and Amazon.  Most of the world’s internet traffic flows through one of these organisations, and they also act as enablers – for example, creating a storefront on Amazon and promoting it via Facebook or Google.

Whichever way you look at the current macro global situation, it’s clear we’re not in what Perez calls the “Golden Age.”  Perez herself notes that the Golden Age might not even eventuate, and that patterns from the past might not foretell the future:

Historical regularities are not a blueprint; they only indicate likelihood. We are at the crossroads right now.