People have long feared that technological innovation and progress could negatively impact their jobs. As history has proven time and again, such worries are far from unfounded.
For example, in the early 1900s pinsetters manually reset bowling pins after each frame. By the 1950s, automated pinsetting machines had rendered the role all but obsolete. The same is true of the switchboard operator, a physically and mentally demanding job that was increasingly automated from the 1930s onward, leaving thousands of people to find alternate employment.
The list goes on: warehouse and manufacturing workers, travel agents, bank tellers, clerks of every kind. In the U.S., the Bureau of Labor Statistics regularly tracks the fastest declining jobs, many of them made redundant thanks to automation and invention. Will humanity’s eternal desire for progress ultimately lead to mass unemployment? Or, as has happened in the past, will it simply open up other avenues for would-be workers to explore?
Rise of the Digital Worker
Until recently, the term “digital worker” would simply have described a human with digital skills of some kind. In this context, however, it refers to a programmed or programmable “employee” designed to help human workers achieve their goals more efficiently and sustainably. Powered by machine learning and clever coding, these digital workers can, and already do, play an important role across a host of business functions and activities in areas like accounting, facilities management, and even writing. Ideally, they streamline procedures and free up their human counterparts to focus elsewhere.
Scores of companies are already using AI technology on a daily basis, and according to Andreas Cebulla, a professor of sociology specializing in the future of work at Flinders University in Australia, digital workers are likely to become more common across businesses as their capabilities expand — so long as they bolster the bottom line.
“Business will only adopt new technology where it offers profitability gains. It’s about profitability, not productivity, although the two are not mutually exclusive,” Cebulla says.
This is a point supported by Ying Zhou, a professor of human resource management at the University of Surrey in the U.K. She is also director of the university’s Future of Work Research Centre and co-author of Mapping Good Work: The Quality of Working Life Across the Occupational Structure.
“New technologies have the potential to transform work both for the better and for the worse,” Zhou says. “The impact can vary depending on the type of technology and the type of work. The main benefits for employers include increased productivity and lower labor costs. Unlike humans, AI, robots and other digital technologies do not need to rest and can be used to work around the clock. The declining cost of computerized technologies over time also makes them more attractive compared to waged labor.”
However, Zhou cautions that “these benefits can come at significant costs to workers,” and suggests that the negative impacts could be more pronounced and profound than simply the loss of people’s jobs.
“There is the obvious risk of technological unemployment — jobs taken from humans by technologies — but, additionally, digital technologies can be used to increase the surveillance of workers. The intensified monitoring, assessment and control of workers could lead to an erosion of job autonomy, which is important for employees’ work motivation and personal wellbeing,” she notes.
Cebulla is also keen to highlight that, while digital workers and AI can certainly help businesses become more efficient and productive, there are various negatives that need to be considered and assessed.
“As with most things in life, digital workers have pluses and minuses,” Cebulla says. “Digital workers can make work safer, cleaner, and can make the output better. Precision and speed (as well as work, health and safety) are the most likely benefits. However, an often overlooked downside is the risk of complacency. We may ‘trust’ the digital worker, but it may play to its own rules.” Or the rules the AI follows may be subverted, with dramatic and undesirable results.
In 2016, a Microsoft chatbot called Tay, launched to engage millennials through artificial intelligence, was taken offline within 48 hours after Twitter users “taught” it to be racist. Meta’s BlenderBot 3 chatbot, released in 2022, became embroiled in a similar scandal just days after launch.
So, while it’s clear that digital workers and AI technologies are being rolled out and developed across all sectors — to varying degrees of success — are they actually “stealing” people’s jobs, or should we regard them as playing a vital role in supporting people’s workplace efforts?
The Future of Employment
According to a 2019 report by the U.K.’s Office for National Statistics (ONS), based on data gathered in 2017, just 7.4 percent of jobs in the country were at “high risk” of automation. Interestingly, a similar ONS analysis of 2011 data concluded that 8.1 percent of jobs were at high risk of being replaced by technology, suggesting the risk had decreased slightly over time.
But such data has not placated everyone. In 2021, a study by University College London (UCL) found that more than half of people between the ages of 16 and 25 fear for both their future and their job prospects, while a 2019 survey conducted by CNBC revealed that 27 percent of workers were concerned technology would eliminate their jobs within the next five years. The same survey found that 37 percent of people between the ages of 18 and 24 harbored the same worry.
However, Zhou believes that, while automation and technology will result in certain roles requiring less human input going forward, this need not be considered a negative.
“Throughout industrial history we have witnessed significant skills change in response to technological development,” Zhou notes. “On the whole, technological revolution tends to push up the skill requirements for the labor force.” Research shows that in many European countries and the U.S., the growth of information and computerized technologies was accompanied by a significant expansion of professional and managerial occupations and a decline of low-skilled jobs.
“We are yet to see the full impact of AI/machine learning/robotics technologies on the labor market,” Zhou says. “But it seems likely that employees will be required to develop higher levels of skills as a result of this wave of technology development than they had been in the past.”
Cebulla believes that, if businesses are so inclined, the rise in digital workers could see workplaces become more dynamic and could foster creativity across departments.
“One could envisage business models that allow for workers to be reallocated to different and more innovative activities, but only if that’s the road the business wants to go down,” he says. Where resources are channeled into creative and innovative activity, this should benefit the company. “It should generate new impetus for growth and development,” he says.
Ultimately, Zhou says, the deployment of technology and automated processes can, and likely will, play a major role in creating jobs that stimulate people’s minds and make their working lives more satisfying.
However, Zhou also notes that the future of the workplace demands careful thought, and admits that some people will face hurdles. “Not everyone will benefit from these trends,” she says. How to help these workers develop the skills needed to transition into higher-skill jobs will be a key issue for researchers and policymakers in the coming decades.