The extractive Artificial Intelligence that harms businesses

Chinese artificial intelligence company DeepSeek recently shocked the financial world, causing chaos in the markets and fueling uncertainty among technology policymakers. OpenAI released a statement acknowledging possible evidence that DeepSeek trained its model on data generated from the output of OpenAI’s GPT-4o model, through a process called distillation. Simply put, DeepSeek is accused of training its model on OpenAI’s model and profiting from this knowledge transfer. But before we ask whether DeepSeek stole from OpenAI, we should ask a deeper question: where did OpenAI get its data in the first place?

OpenAI has itself been accused of misappropriating data – news articles, stories, even YouTube video transcripts – to feed its models. These models are trained on vast amounts of human-generated data, often without compensation or acknowledgement of the human creators. Such practices receive only minimal discussion at the major international AI safety summits – such as those in the UK, South Korea and, most recently, last February in France – which tend to focus on whether AI can invent biological weapons, develop new cyberattacks, or whether invisible model bias poses a threat to humanity. The tacit transfer of value from creators to algorithms is emerging as one of the most overlooked economic risks of the AI boom. The truth is that people are already reporting harm from decisions to deploy or rely on AI.

In a recent landmark event, one of the first major labor disputes over the use of AI played out in the Writers Guild of America (WGA) strike of 2023. While the headline issues revolved around streaming services and the residuals owed to writers, negotiations over the use of generative AI prolonged the strike, and AI use now has its own section in the WGA’s Minimum Basic Agreement (MBA). In essence, the WGA argued that companies and studios must not use AI to write or rewrite literary material, and that AI-generated content cannot be used as source material, which would have implications for how writers receive credit for their original work. Furthermore, the MBA gives the WGA the right to assert that exploiting writers’ work to train an AI model is prohibited.

Studios weren’t pursuing AI to enrich storytelling. They wanted faster, cheaper content. It’s a business strategy, not a creative one, and it reflects a broader trend: companies are leveraging generative AI not to augment human labor, but to replace it. While the WGA succeeded in its effort to protect writers, the shift of labor to generative AI remains an imminent threat to the broader U.S. workforce across all industries. AI systems are trained on years of human experience: journalistic reporting, customer service copy, school curricula, and more. Workers are being displaced by tools that rely on their own labor, without credit or compensation. It’s not just automation. It’s outsourcing.

This approach to AI does not lead to shared productivity: it lowers wages, erodes job quality, and accelerates inequality. Investors should take note: AI products that exploit human labor without consent increasingly face lawsuits, regulatory scrutiny, and reputational damage. Even in customer-facing roles, the business case for replacement is tenuous. AI chatbots trained on customer service conversations replicate scripts, but not human judgment, empathy, or creativity. The result? A degraded user experience and reduced customer satisfaction. You don’t have to be a labor economist to understand that replacing real knowledge with superficial imitation poses a risk to your brand.

The same is true in media, law, design, and finance. Professionals watch as their intellectual output is transformed into training data used to create products that may ultimately devalue their expertise. In the long term, this threatens talent pipelines, corporate culture, and competitive moats.

The way forward requires recognizing this form of harm. Companies that invest in enhancing human labor, respect creative rights, and prioritize transparency in AI development will be in a better position to attract top talent and earn long-term trust. In the next decade, speed alone will not determine AI winners—the advantage will go to those who treat human capital as a strategic asset.

The question is not whether AI will transform work, but whether companies will use that power to upskill human talent or outsource it. Bet on the former, because outsourcing is not a long-term business model.

About the author

The Liberal Globe is an independent online magazine that provides carefully selected stories. Our authoritative opinions, analyses, and research are reflected in sections that are both thematic and geographical. We are not attached to any political party; our political agenda is liberal in the classical sense. We continue to advocate bold policies in favour of individual freedoms, even when that means opposing the majority view, and even when the positions we express may be unpleasant or unbearable to the majority.
