Seeking Perfect Goals for an Imperfect World

There is little debate that Artificial Intelligence (AI) is essential to the evolution of continually emerging technology ecosystems. Through increasingly sophisticated algorithms (the DNA of AI), computer systems can now routinely perform tasks that previously required "human" intelligence, such as visual perception, speech recognition, rule-based decision-making, and problem solving. The opportunities to leverage the potential of AI across all sectors have created the next paradigm shift in how we think about the complex relationship between humans and machines.

From TED Talks to academic journals and every form of media, AI and its component parts, Machine Learning (ML) and Deep Learning (DL), are often at the center of major economic, social, political, and cultural discussions. While the opportunities for paradigm-shifting applications of advanced machine learning appear infinite, of equal importance is the current opportunity to surface and address the building of human capacity and potential in new ways.

From Scientific Management Theory through AI 

Consider two major developments in the relationship of humans and machines. In the early 20th century, the goal was to produce better and more cost-effective products and services by having humans act more like machines. Today, the focus is on how to have machines act and think more like humans.

“Last October, the DeepMind team published details of a new Go-playing system, AlphaGo Zero, that studied no human games at all. Instead, it started with the game’s rules and played against itself. The first moves it made were completely random. After each game, it folded in new knowledge of what led to a win and what didn’t. At the end of these scrimmages, AlphaGo Zero went head to head with the already superhuman version of AlphaGo that had beaten Lee Sedol. It won 100 games to zero.

One characteristic shared by many games, chess and Go included, is that players can see all the pieces on both sides at all times. Each player always has what’s termed “perfect information” about the state of the game. However devilishly complex the game gets, all you need to do is think forward from the current situation.

Plenty of real situations aren’t like that. Imagine asking a computer to diagnose an illness or conduct a business negotiation. “Most real-world strategic interactions involve hidden information,” said Noam Brown, a doctoral student in computer science at Carnegie Mellon University. “I feel like that’s been neglected by the majority of the AI community.” Excerpted from an article in Quanta Magazine.

What the AI community is not ignoring are Evolutionary Algorithms (EAs), which power the deep learning behind diverse applications, from maps to Alexa, from self-driving cars to merchandising, and, as mentioned above, even the training of game-playing machines like AlphaGo Zero. With the help of EAs, AI is also fueling breakthroughs in healthcare, from accelerating clinical decisions for cancer diagnosis and treatment to helping stroke victims when “time is brain.”

A recent post, AI 101: Intro to Evolutionary Algorithms, from the AI firm Sentient provides a quick overview of how EAs work. “Evolutionary algorithms are inspired by biological evolution, and use mechanisms that imitate the evolutionary concepts of reproduction, mutation, recombination and selection. Suppose you have a problem you wish to find the solution for, and suppose this solution is not apparent. Or perhaps you have a solution, but you are unsure if this solution is the most optimal solution to your problem. This is precisely the kind of situation that Evolutionary Algorithms are meant to address.”
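The mechanisms named in the excerpt (reproduction, mutation, recombination, selection) can be made concrete with a minimal sketch. The problem below is a toy one of my own choosing, evolving a bit string toward all ones, and the parameter values are illustrative assumptions, not anything prescribed by the Sentient post:

```python
import random

random.seed(42)
TARGET_LEN = 20  # length of each candidate solution (a bit string)

def fitness(individual):
    # Count of 1-bits; the maximum possible score is TARGET_LEN.
    return sum(individual)

def mutate(individual, rate=0.05):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def recombine(a, b):
    # One-point crossover: splice a prefix of one parent onto
    # the suffix of the other.
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

def evolve(pop_size=30, generations=60):
    # Start from a completely random population.
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Reproduction: children come from recombination plus mutation.
        children = [mutate(recombine(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Nothing here "understands" the problem; improvement emerges purely from keeping what scored well and varying it, which is the same loop, at toy scale, that the excerpt describes.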

Are we adequately addressing the need to develop better understanding and engage in different conversations as we sort out the interdependence of human and artificial intelligence and what we can expect from both?

The bigger question to be explored is not whether hidden information can be found and optimized but whether the complex conditions of human systems in the form of unpredictable, independent and irrational behaviors can be reduced to an EA. In complex adaptive human systems the solution is not always the final goal.

There are important insights to be gained from past management practices that strove to make humans act like machines. Frederick Taylor, the father of Scientific Management, established a set of organizational practices that emphasized maximized product value, continuous improvement, operational efficiency, and high levels of certainty and predictability. The evolution of this theory led to the widely adopted methods of statistical process control (Total Quality Management, Six Sigma, Lean, etc.), which relied on a shared assumption: inefficiency could be traced to root causes, and once those were known, the system could be “fixed.” The rules-based methodology of scientific management offered a model for the development of AI and machine learning evolutionary algorithms.

If we think of machine learning as the ability to classify something and then assign numbers to what has been classified, we can understand how quantifiable knowledge can begin to approximate intelligence. “Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious approach.” Techopedia.com
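The "classify, then assign numbers" framing can be illustrated with a deliberately tiny classifier. This is a generic nearest-centroid sketch under my own invented labels and data points, not any specific system the article discusses:

```python
# Nearest-centroid classification: reduce each category to a number
# (its average position), then classify new points by distance.

def centroid(points):
    # Average each coordinate across the category's examples.
    n = len(points)
    return tuple(sum(coords) / n for coords in zip(*points))

def train(labeled_points):
    # labeled_points: {label: [(x, y), ...]}
    return {label: centroid(pts) for label, pts in labeled_points.items()}

def classify(model, point):
    # Pick the label whose centroid is closest (squared distance).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], point))

# Hypothetical training data: two clusters with made-up labels.
model = train({
    "small": [(1, 1), (2, 1), (1, 2)],
    "large": [(8, 9), (9, 8), (9, 9)],
})
```

The machine never knows what "small" or "large" means; it only compares numbers, which is exactly the gap between quantifiable knowledge and the common sense the Techopedia quote describes.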

Although Taylor and his followers were correct that, once a process was designed and optimized, humans could execute it, what they did not foresee is that reducing activities to algorithmic components opened the door for a machine to perform like a human.

Repeating Patterns

You can see a similarity between Taylor’s human workers and machine workers: neither had to be “smart” to execute their tasks. Under scientific management, workers were often capable of more than routine tasks but were not permitted to do more. Once the studies of a particular task were completed, the workers had very little opportunity for further thinking, experimenting, or suggestion-making.

With machine workers (AI), cognitive ability may be advancing exponentially, but machines are not yet able to be human. So how do we consider the complex relationship between human workers, machine workers, and existing management practices that remain committed to efficiency and continuous improvement throughout the enterprise?

The pattern that should not be repeated is ignoring the interactions of people with one another and with the systems they are part of, including AI. These interactions may not be easily reduced to simply “objects, categories, properties and relations,” but they do offer critical insight into how to leverage human smartness in addition to building smart machines.

The Plexus Network has members involved in AI across a broad spectrum of industries and fields. We hope to foster conversations that lead to better understanding and application of AI in our all too human systems.

Watch Steve Ardire’s Pop-Up Conversation on AI + Complexity = A Powerful Strategic Advantage.

Access the Presentation Slides