Executive Summary

There is no question that new technologies, coupled with the near-ubiquitous digitization of work and work-related behaviors, have the potential to help organizations monitor, predict, and understand employee behaviors (and thoughts) at a scale never possible before. At the same time, these same technologies, deployed unethically or illegally, also permit employers to control and manipulate employees, violating their trust and threatening not just their freedom and morale, but also their privacy. The only way to prevent this is strict enforcement of adequate laws and regulations that keep employees in the driver’s seat: able to authorize employers to use their data (or not), and to benefit from whatever insights and knowledge are derived from it. To be sure, there is no logical tension between what is good for the employer and what is good for the employee. But the temptation to force people into certain behaviors, or to use their personal data against them, is more real than one might think.


A century ago, Frederick Taylor’s Scientific Management laid the foundations for modern HR. His central premise was that organizations should turn their workplaces into real-world psychology labs, measuring and monitoring employees’ every move in order to boost their performance and reduce their stress levels. The paradigm was revolutionary, and led famous industrialists like Henry Ford to unprecedented innovations in human engineering, with the creation of the seminal assembly line, and a science-infused formula for optimizing roles, tasks, and job design to enhance employee productivity. Big companies, such as the Ford Motor Company, became a testing ground for applied psychology, and evidence-based HR was born.

Fast forward 100 years or so, and it is all footnotes to Taylor. Some of the largest, most successful corporations, such as Google and Microsoft, are ramping up their data science capabilities, recruiting armies of Ph.D.s in Industrial/Organizational Psychology, and accelerating their digital transformation by deploying smart technologies built on AI and big data to improve their talent management systems. The people analytics age is here to stay, and it was already well advanced before the pandemic. But in a world of work that is increasingly virtual (and perhaps, at times, only virtual), the volume of data available to understand and predict employees’ behaviors will continue to grow exponentially, creating ever more opportunities to manage through tech and data.

Broadly understood, people analytics is the HR function dedicated to the pursuit of data-driven insights about an organization’s workforce — yes, the geeky part of HR. Think of data as digital records of employees’ behaviors, and of people analytics as the science that translates those data into actionable insights that improve the organization’s effectiveness. Most organizations sit on a wealth of data. We have repeatedly heard that “data is the new oil,” but data without insight are meaningless — just 0s and 1s. You need the right framework, model, or expertise for data to acquire meaning, and the next stage is to act on those insights to create data-driven decisions, changes, and a data-oriented culture in the organization. As such, people analytics is a deliberate and systematic attempt to make organizations more evidence-based, talent-centric, and meritocratic, which, one would hope, should make them more effective.
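As a toy illustration of the data-to-insight-to-action chain described above, turning raw employee records into an actionable signal can be as simple as an aggregation. The records, department names, and flagging threshold below are all invented for illustration:

```python
# Illustrative only: hypothetical employee records, not real data.
from collections import defaultdict

records = [
    {"dept": "Sales", "left_this_year": True},
    {"dept": "Sales", "left_this_year": False},
    {"dept": "Sales", "left_this_year": True},
    {"dept": "Engineering", "left_this_year": False},
    {"dept": "Engineering", "left_this_year": False},
]

# Raw data -> insight: attrition rate per department.
counts = defaultdict(lambda: [0, 0])  # dept -> [leavers, headcount]
for r in records:
    counts[r["dept"]][0] += r["left_this_year"]
    counts[r["dept"]][1] += 1

attrition = {d: leavers / n for d, (leavers, n) in counts.items()}

# Insight -> action: flag departments above a (hypothetical) 50% threshold.
flagged = [d for d, rate in attrition.items() if rate > 0.5]
```

The same pattern scales from this toy aggregation to the evidence-based decisions the paragraph describes; what changes is the volume of data and the sophistication of the model, not the basic data-to-insight-to-action logic.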

Consider the employee experience, which has traditionally been evaluated via annual surveys focused on job satisfaction or employee engagement. Although these measures are positively linked to job performance, the correlation is typically small (implying less than 20% shared variance between engagement and productivity) and conflated with irrelevant factors, such as employees’ personality. It is also unreasonable to wait an entire year to find out whether morale has gone up or down, so why not monitor it more regularly?
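To make that “less than 20%” figure concrete: the variance two measures share is the square of their correlation coefficient. A minimal sketch, with the correlation value of 0.4 assumed purely for illustration rather than taken from any particular study:

```python
# Illustrative only: shows why a "small" engagement-performance
# correlation implies limited overlap between the two measures.

def shared_variance(r: float) -> float:
    """Proportion of variance two measures share (r squared)."""
    return r ** 2

# An engagement-performance correlation of r = 0.4 (assumed here
# for illustration) explains only 16% of the variance in performance.
r = 0.4
print(f"r = {r:.2f} -> shared variance = {shared_variance(r):.0%}")
# r = 0.40 -> shared variance = 16%
```

In other words, even a correlation that looks respectable leaves the large majority of performance variance unexplained, which is why engagement scores alone are a weak proxy for productivity.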

This is where more frequent “pulse surveys” and employee listening tools have gained popularity, and they can quickly be used to drive real action that benefits employees and businesses. Vendors such as Glint, CultureAmp, Qualtrics, and Peakon help organizations regularly “pulse” their employees to understand engagement and sentiment in near real time. While employee listening has been around for a while, it has gained even more popularity in response to the Covid-19 crisis. Companies such as Rabobank, Merck, and National Australia Bank are all using employee listening to understand how their employees are coping with new remote working arrangements, how their needs for support are changing, and what their preferences are for returning to work. Two techniques are common: stratified sampling, an alternative to simple random sampling that partitions a sample into “strata” (for example, by department or tenure) so that inferences about the whole population remain sound; and text analytics, software that decodes free-text comments and discussion-board posts into emotional sentiment or psychological traits. Together, these let companies gain valuable insight into what matters to their employees in a rapidly changing environment, while avoiding survey fatigue and preserving anonymity at an individual level.
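The stratified sampling technique mentioned above can be sketched in a few lines. The department names, headcounts, and sampling fraction below are all invented for illustration:

```python
# A minimal sketch of stratified sampling for pulse surveys.
# All departments and sizes here are hypothetical.
import random

def stratified_sample(employees, key, frac, seed=0):
    """Sample the same fraction from each stratum (e.g., department)."""
    rng = random.Random(seed)
    strata = {}
    for e in employees:
        strata.setdefault(e[key], []).append(e)
    sample = []
    for members in strata.values():
        k = max(1, round(frac * len(members)))  # at least one per stratum
        sample.extend(rng.sample(members, k))
    return sample

employees = (
    [{"id": i, "dept": "Sales"} for i in range(100)]
    + [{"id": i, "dept": "Engineering"} for i in range(300, 320)]
)
pulse = stratified_sample(employees, key="dept", frac=0.1)
```

Because every stratum contributes in proportion to its size, small groups are guaranteed representation, unlike simple random sampling, which can miss them entirely; and because only a fraction of each group is surveyed each time, survey fatigue stays low.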

Another important issue, particularly in the current context, is whether new technologies could be used to keep people safe by monitoring their mental and physical well-being. With widespread discussion of how employers can make their workplaces safe and ensure a healthy reopening of offices after lockdown, the usual measures, such as temperature checks or social distancing, are not the only options. There are many ways companies are deploying new technology to support their employees. Wearables can now monitor stress and anxiety, if employees choose to share that data. Chatbots can be deployed to ask about your emotional state and provide advice. Of course, the same information could be used to support or to control people: if you know someone’s physiological and psychological state, that information could be used to help them, or, one would hope not, to manipulate and control them. This is true whenever technology gives other parties insight into your deeper emotional states.

Most notably, “track and trace” apps, such as those developed by Google and Apple in the U.S. and those instantly deployed by some governments (e.g., China, Singapore, and Israel) in response to the pandemic, could easily be adopted by employers to monitor and improve people’s health. Likewise, academics are partnering with wearable start-ups, as in the collaboration between the Oura ring and UCSF, to translate the biometric data people are already sharing — willingly, of course — into a Covid-19 risk profile. Think of these innovations as the digital equivalent of having your temperature checked when you arrive at the office, or having a doctor on site checking for key symptoms. Although these measures are controversial, because they can intrude into people’s personal lives and erode their privacy and anonymity, they are increasingly being adopted by large employers, and it is becoming harder to see the difference between measures that are digital and those that are analogue or physical, as the lines between our physical and digital lives themselves begin to blur.

Another key goal may be to boost employee performance or productivity. In most organizations, this will always remain the main objective, even when companies care a great deal about morale and well-being, largely because they see these as linked to performance. However, this is also where the “creepy” factor of monitoring can start to kick in. With phones, sensors, Alexa, wearables, and the IoT more than capable of detecting and recording our every move, and with opportunities to be truly offline and off the radar rather minimal, it can all get invasive and Big Brother-like pretty quickly. For example, some companies are now looking to roll out more intrusive monitoring software that takes screenshots while employees work and tracks people’s movements as a way of measuring the productivity of a workforce that became remote overnight. Earlier this year, PwC attracted considerable criticism for developing surveillance software to track whether employees were away from their computers.

Others are considering surveillance tools that monitor the spread of the coronavirus within offices. But what trade-offs will employees have to make as the use of surveillance technology rises in response to the Covid-19 pandemic? If such tools become mandatory, under the guise of protecting the health of the workforce, how can employees be sure that their privacy will be protected and their data won’t be used for other purposes? This is where HR departments must step in and drive a conversation that addresses employee trust, corporate responsibilities, and the ethical implications of any new technology, striking a balance between the needs of the employee, the manager, and the business.

Although we are still at the very beginnings of this revolution, there have been clear advances in each of the major verticals of talent management, with a range of novel tools and technologies that, in some instances, are backed up by science. If leaders can instill a culture of trust, respect, and fairness in their organizations, and deploy these emerging innovations according to the strongest ethical and legal parameters (and that is not a small “if”), there is a real opportunity to make work significantly better.

It’s not enough to hope that ethics will be at the forefront when companies consider new technology or people analytics projects. In our view, companies need to adopt an ethics charter for people analytics that clearly governs what they should and shouldn’t do, just as they have guidelines for the use of customer or financial data. To build and maintain employee trust in the use of people data, organizations need to tackle the ethics and privacy question head on, being open and transparent with employees about how their data are used.
