Tech
Justice Department Accuses TikTok of Collecting User Data on Sensitive Issues
The Justice Department has accused TikTok of collecting bulk information on users’ views regarding sensitive social issues such as gun control, abortion, and religion. The allegation is part of a legal battle concerning the popular social media platform, which could face a ban in the U.S. if it does not sever ties with its Beijing-based parent company, ByteDance.
According to a brief filed with the federal appeals court in Washington, TikTok and ByteDance used an internal system called Lark to enable communication between TikTok employees and ByteDance engineers in China. This system allowed sensitive U.S. user data to be transferred to servers in China, where it was accessible to ByteDance employees.
One of Lark’s tools allegedly permitted employees to gather information on users’ content and expressions on controversial topics. Reports indicate that TikTok tracked users who watched LGBTQ content via a dashboard that has since been deleted.
The court documents mark the government’s first significant defense in the case, stressing the potential risk of “covert content manipulation” by the Chinese government. The Justice Department warned that China could manipulate the algorithm to influence public opinion and exacerbate social divisions in the U.S.
TikTok spokesperson Alex Haurek countered the claims, asserting that the ban would violate the First Amendment by silencing 170 million Americans and arguing that the government has not provided concrete proof of its allegations.
The Justice Department also highlighted concerns about TikTok’s Project Texas, a $1.5 billion initiative to store U.S. user data on Oracle-managed servers, questioning its effectiveness in mitigating national security risks.
Oral arguments in this pivotal case are scheduled for September.
Tech
OpenAI Revises Pentagon AI Deal After Backlash Over Military Use
OpenAI says it is amending its recent agreement with the United States Department of Defense following criticism over the potential use of its technology in classified military operations.
Chief executive Sam Altman announced that the company will insert clearer restrictions into the contract, explicitly prohibiting the intentional use of its systems for domestic surveillance of US citizens and nationals.
The controversy emerged amid tensions between OpenAI’s rival Anthropic and the Pentagon over concerns that Anthropic’s AI model, Claude, could be used for mass surveillance or in fully autonomous weapons systems.
In a statement over the weekend, OpenAI said its Pentagon agreement contained “more guardrails than any previous agreement for classified AI deployments”. However, Altman later acknowledged that the rollout of the deal had been rushed.
“The issues are super complex, and demand clear communication,” he wrote on social media, adding that the company had sought to de-escalate tensions but recognised that the announcement appeared “opportunistic and sloppy”.
Under the revised terms, intelligence agencies such as the National Security Agency would require additional contractual modifications before being permitted to use OpenAI systems.
The backlash has had measurable effects. Reports indicate that day-over-day uninstalls of the ChatGPT mobile app surged sharply following the announcement, while Anthropic’s Claude climbed to the top of Apple’s App Store rankings.
Anthropic’s model had previously been blacklisted by the administration of Donald Trump after the company refused to abandon a corporate principle barring the use of its technology in fully autonomous weapons. Despite that position, reports have since indicated that Claude was used during the joint US-Israeli conflict with Iran shortly after the ban.
The Pentagon has declined to comment on its arrangements with Anthropic.
Tech
X to stop Grok AI from undressing images of real people
X has announced that its artificial intelligence tool, Grok, will no longer be able to edit images of real people to depict them in revealing clothing in jurisdictions where such activity is illegal, following widespread backlash over the misuse of sexualised AI deepfakes.
In a statement published on the platform, X said it had introduced new safeguards to prevent the Grok account from being used to manipulate photos of real individuals in a sexualised manner. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing,” the company said.
The move has been welcomed by UK authorities, who had previously raised concerns about the tool’s use. The UK government described the decision as a “vindication” of its calls for X to take stronger action to control Grok. Media regulator Ofcom also said the change was a “welcome development”, while stressing that its investigation into whether the platform breached UK laws is still under way.
“We are working round the clock to progress this and get answers into what went wrong and what’s being done to fix it,” Ofcom said, signalling continued scrutiny despite the latest measures.
Technology Secretary Liz Kendall welcomed X’s announcement but emphasised the need for accountability. She said she would “expect the facts to be fully and robustly established by Ofcom’s ongoing investigation”, underlining the government’s commitment to ensuring online safety rules are upheld.
However, campaigners and victims of AI-generated sexualised images say the decision has come after significant harm had already been caused. Journalist and campaigner Jess Davies, who was among women whose images were edited using Grok, described the changes as a “positive step” but said the feature should never have been permitted in the first place.
Tech
Alibaba Opens AI Video Generation Model for Free Use Globally
Chinese tech giant Alibaba has made its latest AI video generation models freely available worldwide, intensifying competition with rivals such as OpenAI.
The company announced on Wednesday that it is open-sourcing four models from its Wan2.1 series, its most advanced family of AI models, capable of generating images and videos from text and image inputs. The models will be accessible via Alibaba Cloud’s ModelScope platform and Hugging Face, making them available to academics, researchers, and businesses globally.
Following the announcement, Alibaba’s Hong Kong-listed shares surged nearly 5%, continuing a strong rally that has seen the stock gain 66% in 2025. Investors have been optimistic about the company’s growing role in AI and its improving financial performance, buoyed by recent policy signals from Chinese President Xi Jinping supporting the domestic private sector.
Alibaba’s move aligns with a broader trend in China, where companies are increasingly embracing open-source AI. In January, DeepSeek, another Chinese firm, shook global markets by revealing that its AI model had been trained at a fraction of the cost of competitors’ models, using less-advanced Nvidia chips. Both Alibaba’s and DeepSeek’s models are open-source, meaning they can be downloaded and modified freely, unlike proprietary AI models such as those developed by OpenAI, which generate direct revenue.
The shift towards open-source AI has sparked debate over whether AI models will become commoditized. While companies like Meta are leading the open-source push in the U.S. with their Llama models, Chinese firms have been particularly aggressive in this space, aiming to drive innovation and build global AI communities.
