Microsoft Faces £1 Billion Class Action in UK Over Alleged Overpricing of Software

Microsoft is at the center of a £1 billion class action lawsuit in the UK, with thousands of businesses potentially in line for compensation. The claim, led by regulation expert Dr. Maria Luisa Stasi, alleges that Microsoft overcharged companies for its Windows Server software, a key tool in cloud computing operations.

  • The lawsuit accuses Microsoft of unfair pricing practices, claiming the company leveraged its dominant position to inflate costs for businesses.
  • Over £1 billion in damages is being sought on behalf of UK businesses.
  • The case has been filed on an opt-out basis, meaning UK businesses using the relevant Microsoft software are automatically included unless they opt out.

This lawsuit is the latest in a wave of class action cases targeting tech giants in the UK, including Facebook and Google. Such claims, enabled by the opt-out collective action regime introduced under the Consumer Rights Act 2015, are still relatively novel, and outcomes remain uncertain.

The legal process is expected to take years to resolve, with little precedent available to predict the likelihood of success.

The case coincides with an ongoing investigation by the UK’s Competition and Markets Authority (CMA) into the cloud computing sector.

  • Cloud services are integral to modern businesses, providing solutions for data storage, software licensing, and streaming services. Microsoft’s Azure, Amazon Web Services, and Google Cloud dominate the sector.
  • Google previously raised concerns with the CMA, accusing Microsoft of using restrictive licensing practices that increase costs for competitors and undermine their ability to compete effectively.
  • Microsoft has denied these allegations, stating that its licensing terms do not significantly impact rivals’ costs or competitiveness.

If successful, the lawsuit could set a landmark precedent for tech regulation in the UK, reinforcing scrutiny of dominant players in critical digital infrastructure markets. Businesses across the country stand to benefit from compensation, and the case could also influence licensing and competition policy in the cloud computing industry.

X to stop Grok AI from undressing images of real people

X has announced that its artificial intelligence tool, Grok, will no longer be able to edit images of real people to depict them in revealing clothing in jurisdictions where such activity is illegal, following widespread backlash over the misuse of sexualised AI deepfakes.

In a statement published on the platform, X said it had introduced new safeguards to prevent the Grok account from being used to manipulate photos of real individuals in a sexualised manner. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing,” the company said.

The move has been welcomed by UK authorities, who had previously raised concerns about the tool’s use. The UK government described the decision as a “vindication” of its calls for X to take stronger action to control Grok. Media regulator Ofcom also said the change was a “welcome development”, while stressing that its investigation into whether the platform breached UK laws is still under way.

“We are working round the clock to progress this and get answers into what went wrong and what’s being done to fix it,” Ofcom said, signalling continued scrutiny despite the latest measures.

Technology Secretary Liz Kendall welcomed X’s announcement but emphasised the need for accountability. She said she would “expect the facts to be fully and robustly established by Ofcom’s ongoing investigation”, underlining the government’s commitment to ensuring online safety rules are upheld.

However, campaigners and victims of AI-generated sexualised images say the decision has come after significant harm had already been caused. Journalist and campaigner Jess Davies, who was among women whose images were edited using Grok, described the changes as a “positive step” but said the feature should never have been permitted in the first place.

Alibaba Opens AI Video Generation Model for Free Use Globally

Chinese tech giant Alibaba has made its latest AI video generation models freely available worldwide, intensifying competition with rivals such as OpenAI.

The company announced on Wednesday that it is open-sourcing four models from its Wan2.1 series, its most advanced family of AI models, capable of generating images and videos from text and image inputs. The models will be accessible via Alibaba Cloud’s ModelScope and Hugging Face, making them available to academics, researchers, and businesses globally.

Following the announcement, Alibaba’s Hong Kong-listed shares surged nearly 5%, continuing a strong rally that has seen the stock gain 66% in 2025. Investors have been optimistic about the company’s growing role in AI and its improving financial performance, buoyed by recent policy signals from Chinese President Xi Jinping supporting the domestic private sector.

Alibaba’s move aligns with a broader trend in China, where companies are increasingly embracing open-source AI. In January, DeepSeek, another Chinese firm, shook global markets by revealing that its AI model was trained at a fraction of the cost of competitors, using less-advanced Nvidia chips. Both Alibaba’s and DeepSeek’s models are open-source, meaning they can be downloaded and modified freely, unlike proprietary AI models such as those developed by OpenAI, which generate direct revenue.

The shift towards open-source AI has sparked debate over whether AI models will become commoditized. While companies like Meta are leading the open-source push in the U.S. with their Llama models, Chinese firms have been particularly aggressive in this space, aiming to drive innovation and build global AI communities.

VP JD Vance Pledges to Protect U.S. AI and Block Its Weaponization

Vice President JD Vance reaffirmed the U.S. commitment to safeguarding its artificial intelligence and semiconductor technologies, vowing to block efforts by authoritarian regimes to weaponize them.

Speaking at France’s AI Action Summit in Paris, Vance warned that some nations have exploited AI for military intelligence, surveillance, and foreign data manipulation. “This administration will block such efforts, full stop,” he stated. “We will safeguard American AI and chip technologies from theft and misuse, work with our allies and partners to strengthen and extend these protections, and close pathways to adversaries attaining AI capabilities that threaten all of our people.”

While he did not directly name China’s AI model DeepSeek, which has drawn global attention for its competitive performance at a lower cost, Vance criticized heavily subsidized technologies exported by authoritarian states. “We’re all familiar with cheap tech in the marketplace that’s been heavily subsidized and exported by authoritarian regimes,” he said.

In a pointed message to allies, Vance cautioned against collaborating with companies linked to such regimes, arguing it would compromise national security. “Chaining your nation to an authoritarian master that seeks to infiltrate, dig in, and seize your information infrastructure never pays off,” he added.

The U.S. has ramped up efforts to control AI development and chip manufacturing, tightening restrictions on exports to China and strengthening alliances in the tech sector.
