Learning as a Core Competence

Paul Daugherty, H. James Wilson, and Nicola Morini Bianzino wrote in 2017 about jobs that, in their opinion, artificial intelligence would create. MIT Sloan Management Review recently interviewed two of the three co-authors to find out what they have learned since that article was published two years ago. For those of us in education, the interview is a worthwhile read, as we consider how learning (itself) is a core competence.

In many ways, parsing arguments around job displacement through intelligent automation is not a worthwhile exercise; certain trends are already in motion, whilst others remain conjecture. The interview here explores what is happening in the world of work, as evidenced in the creation of new positions over the past two years. In other words, the co-authors are conducting research related to real jobs that exist. These jobs, they say, fall into three categories at the moment: trainers, explainers, and sustainers. Hmmm, sounds like teaching...

--Trainers. The people who are doing the actual building of the AI systems: they do the data science and the engineering around machine learning.

--Explainers. The people who explain (within the organisation, and to the public) how AI works and the kinds of outcomes generated.

--Sustainers. The people who ensure that AI systems work ('behave') properly at the outset of a process, as well as produce desired outcomes over time, considering any unintended consequences.

Following are some highlights from the interview, with regard to each role type.

Trainers

(1) Trainer types do include the people who are actively building the AI systems (e.g., deep learning scientists, robotics engineers), but the researchers are learning that "it's important to have functional experts as trainers, as well. You might have someone with a marketing background or an operations background on your team. They help identify problems that the technical experts will then go in and solve." 

(2) "Companies are recognising that AI become the brand." This is a statement that merits our reflection. The co-authors cite one specific non-technical job that they see in this category: the AI Personality Trainer. This is the person who trains the behaviour of chatbots (and the like), ensuring that the early-stage AI behaves "in the right way, to have the right answers, the right tone, and so on."

Explainers

(1) "Last year [...] there were about 75,000 new explainer roles related to the right to transparency mandated by GDPR.

(2) "In health care, we're seeing explainers working with physicians to help them understand why an AI system is making a particular recommendation and whether the doctor can make a medical recommendation to a patient as a result." 

Sustainers

(1) "Sustainers spend a good deal of their day thinking about unintended consequences and how they may affect the public. [For instance], how do you come up with a pricing model [if you're a company] that's algorithm driven but also workable in terms of public acceptance?" They go on to state, "The risks of bias in algorithms, discriminatory facial recognition systems--these are things that the first wave of trainers didn't necessarily given enough consideration to. Sustainers address the question of whether these unanticipated and unintended consequences can be managed and how. They might even recommend that an AI system be taken out of operation until the company figures out how to get it right."

The interviewer asks a question that is often posed by educators, or sometimes posed *at* educators: "One challenge organisations face is that many of the jobs created by AI have no established path for training and development because they're brand-new. How do they solve for that?" Some of the co-authors' responses are things that we've all heard before (e.g., more experiential learning is needed), but their comment around learning was particularly valuable: "Learning and training as organisational capabilities will be differentiating for companies that experiment effectively, find the optimal mix of approaches for themselves, and then scale up." They go on: "A lot of our findings have been surprising to us. For instance, you might think that STEM skills are the be-all and end-all for the age of AI. But our research is showing that four distinctively softer skills are becoming much more valuable as people begin collaborating with smart machines: these are complex reasoning, creativity, social/emotional intelligence, and certain forms of sensory perception."

Paul Daugherty offers the final comment of the interview, sharing that "We've also launched a research project on responsible AI. How do organisations make sure they get good, ethical outcomes? We're looking at issues like transparency and explainability [...]. We are looking at bias. We are looking at accountability and trustworthiness with AI systems."
