Canadian telecom giant Rogers Communications has made headlines for laying off over 1,000 customer service employees across multiple provinces. But according to former employees, this wasn’t just about cost-cutting: it was a calculated strategy to use human workers to train the AI systems that would eventually replace them.
The story, which first broke on Reddit, reveals a troubling trend in how companies are implementing artificial intelligence in the workplace. Former Rogers employees claim they were essentially tricked into training their own replacements, raising serious questions about corporate ethics and the future of work in the AI age.
The Deception Behind the “Efficiency” Tool
According to a viral Reddit post from a former Rogers employee, the company introduced what they called an “AI summary tool” in 2023, supposedly designed to make workers’ jobs easier. Management told employees this tool would help them handle customer interactions more efficiently.
The reality was far different. “Starting last year we were forced to use this AI summary tool to make ‘our jobs easier’ but the reality all along was they needed us to train the AI and were strategically plotting our demise of replacing us,” wrote the former employee, who posted under the username “Cool_Alfalfa.”
This revelation highlights a disturbing practice where companies present AI implementation as a helpful addition to the workforce, while secretly using employee interactions to build the very systems designed to eliminate their jobs.
How the Training Process Worked
The training process was more extensive than employees initially realized. Workers were required to use the AI tool for every customer interaction, effectively feeding the system thousands of real-world scenarios and solutions. Each conversation became a training data point, teaching the AI how to handle various customer service situations.
But the exploitation went deeper. Employees reported being trained for multiple departments beyond their original job descriptions, without additional compensation or their consent. This meant the AI system was learning not just one specific role, but multiple customer service functions simultaneously.
The company also reduced break times between calls from 30 seconds to just 12 seconds, increasing the volume of training data while making working conditions more stressful. Workers had to manually select this brief break option before each call, adding another layer of control over their time.
The Human Cost of AI Implementation
The impact on Rogers employees was severe. Many worked for years believing they were building their careers, only to discover they were training their replacements. The psychological toll of this deception cannot be overstated.
“We were exploited and taken advantage of,” the Reddit post continued. “We get treated like a number and are expected to work non-stop.”
The layoffs affected customer service staff across Ontario, British Columbia, Quebec, Alberta, and Manitoba. Many of these were experienced employees who had dedicated years to the company, including former Shaw Communications staff who joined Rogers after the 2023 merger.
Perhaps most troubling, employees were reportedly told by management to “stay quiet and not to go to the media” about the layoffs. This attempt to enforce silence suggests Rogers was aware of how controversial its AI training strategy was.
The Broader Industry Pattern
Rogers isn’t alone in this approach. The telecommunications industry has been particularly aggressive in implementing AI-driven cost-cutting measures. BCE Inc. (Bell) and Telus Corp. recently offered voluntary separation packages to 1,200 and 700 employees respectively, citing similar technological shifts and competitive pressures.
This trend extends beyond telecommunications. Companies across industries are discovering that the most effective way to train AI systems is through human workers who understand the nuances of real customer interactions. The ethical implications of this practice, however, remain largely unaddressed.
The Customer Experience Suffers Too
The Reddit post revealed that customers are also paying the price for Rogers’ AI transition. The author described the replacements as “incompetent AI agents that send you around in loops, and less/no human interaction.”
Customer complaints support this assessment. One user reported spending “11 hours on hold over the course of a week to finally talk to a human being” just to cancel their account. Another described the AI system as taking customers “through endless loops and directs you to the wrong person, only for them to put you back into the queue and start all over again.”
Rogers attributed the layoffs to a 20% drop in online chat interactions, but critics argue this decrease likely reflects customer frustration with AI-driven support rather than genuine reduced demand for help.
Legal and Ethical Concerns
The Rogers situation raises significant legal and ethical questions about AI implementation in the workplace. Employment law firm Samfiru Tumarkin LLP reported that dozens of laid-off Rogers employees sought legal advice, suggesting potential grounds for legal action.
The practice of using employees to train their replacements without their knowledge could constitute a form of fraud or misrepresentation. Workers who dedicated time and effort to what they believed was job security may have grounds for compensation beyond standard severance.
More broadly, the case highlights the need for stronger regulations around AI implementation in workplaces. Companies should be required to disclose when employee activities are being used to train AI systems, especially when those systems are intended to replace human workers.
Privacy and Data Concerns
The Rogers case also raises questions about customer data privacy. When employees used the AI summary tool, they were essentially feeding customer conversations and personal information into a machine learning system. Customers likely had no idea their interactions were being used to train AI systems.
This practice may violate privacy regulations and customer trust. Companies implementing similar AI training programs should consider the legal and ethical implications of using customer data without explicit consent.
What Other Companies Can Learn
The Rogers controversy offers valuable lessons for other organizations considering AI implementation:
Transparency is crucial. Companies should be honest with employees about how AI tools will be used and whether they’re designed to replace human workers.
Ethical training matters. Using employees to unknowingly train their replacements is not just ethically questionable—it can damage company reputation and potentially lead to legal action.
Customer experience shouldn’t be sacrificed. Rushing to implement AI without ensuring it can adequately replace human capabilities ultimately hurts the business.
Change management is essential. Successful AI implementation requires proper planning, employee communication, and transition support.
Moving Forward Responsibly
The Rogers layoffs represent a cautionary tale about the wrong way to implement AI in the workplace. While artificial intelligence can certainly improve efficiency and reduce costs, the method of implementation matters enormously.
Companies looking to integrate AI should consider more ethical approaches, such as retraining employees for new roles, being transparent about AI implementation plans, and ensuring AI systems are truly ready to replace human capabilities before making the transition.
The future of work will undoubtedly include more AI, but it doesn’t have to involve deceiving employees or sacrificing service quality. By learning from Rogers’ mistakes, other companies can find better ways to balance technological advancement with human dignity and customer satisfaction.
The Rogers case serves as a stark reminder that in the rush to embrace AI, we must not lose sight of the human cost of these technological transitions. True innovation should enhance human potential, not exploit it.
Frequently Asked Questions (FAQs)
Q1: What happened at Rogers with AI training methods?
A1: Employees at Rogers alleged they were unknowingly made to train AI systems that eventually replaced their jobs. This incident has raised ethical concerns regarding transparency and fairness in AI implementation.
Q2: Why is this a concern for workers?
A2: The main concern is that employees were not informed about how their efforts were contributing to the development of AI that would make their roles redundant. This lack of transparency undermines trust and raises questions about the ethical use of technology in workplaces.
Q3: How can companies ensure ethical AI practices?
A3: Companies should prioritize transparency, actively involve employees in discussions about AI integration, and provide clear communication on how AI systems will impact roles. Additionally, retraining and upskilling programs can help workers stay relevant in an evolving job market.
Q4: Is this practice common in other industries?
A4: Similar practices have been observed in other industries where AI and automation are replacing traditional roles. However, the extent and transparency of these processes vary by company and industry.
Q5: What are the potential benefits of AI in the workplace?
A5: AI can improve efficiency, reduce costs, and enhance customer service. However, these benefits must be balanced with the ethical responsibility to ensure the well-being of employees impacted by these changes.