
Tech

Bill Aimed at Protecting Children Online Sparks Debate Over Censorship and Privacy

The U.S. Senate passed the Kids Online Safety Act on Tuesday by a vote of 91-3. If the House approves the bill, it will mark the first time in 25 years that Congress has enacted a law specifically designed to strengthen protections for children from online dangers.

The bill aims to address various risks that children face on the internet, including exposure to harmful content, cyberbullying, and exploitation. The proposed legislation seeks to establish stricter regulations for online platforms and social media companies, requiring them to implement more robust safety measures and privacy protections for young users.

Smithing of the Young People’s Alliance, an organization dedicated to promoting youth advocacy, emphasized the importance of such legislation in today’s digital age, highlighting the increasing threats that children encounter online and the need for comprehensive measures to safeguard their well-being.

While the bill has garnered significant support, it has also sparked a debate over issues of censorship and privacy. Critics argue that the proposed regulations could lead to overreach and unintended consequences, potentially stifling free expression and invading privacy. They caution that the bill must strike a careful balance to ensure that it effectively protects children without infringing on fundamental rights.

The passage of the Kids Online Safety Act in the Senate represents a significant step towards enhancing online safety for children. However, the ongoing debate underscores the complexities involved in crafting legislation that addresses modern digital challenges while preserving essential freedoms. As the bill moves to the House for consideration, stakeholders on all sides will be closely watching to see how these issues are navigated and resolved.

Tech

X to Stop Grok AI From Undressing Images of Real People

X has announced that its artificial intelligence tool, Grok, will no longer be able to edit images of real people to depict them in revealing clothing in jurisdictions where such activity is illegal, following widespread backlash over the misuse of sexualised AI deepfakes.

In a statement published on the platform, X said it had introduced new safeguards to prevent the Grok account from being used to manipulate photos of real individuals in a sexualised manner. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing,” the company said.

The move has been welcomed by UK authorities, who had previously raised concerns about the tool’s use. The UK government described the decision as a “vindication” of its calls for X to take stronger action to control Grok. Media regulator Ofcom also said the change was a “welcome development”, while stressing that its investigation into whether the platform breached UK laws is still under way.

“We are working round the clock to progress this and get answers into what went wrong and what’s being done to fix it,” Ofcom said, signalling continued scrutiny despite the latest measures.

Technology Secretary Liz Kendall welcomed X’s announcement but emphasised the need for accountability. She said she would “expect the facts to be fully and robustly established by Ofcom’s ongoing investigation”, underlining the government’s commitment to ensuring online safety rules are upheld.

However, campaigners and victims of AI-generated sexualised images say the decision has come after significant harm had already been caused. Journalist and campaigner Jess Davies, who was among women whose images were edited using Grok, described the changes as a “positive step” but said the feature should never have been permitted in the first place.

Tech

Alibaba Opens AI Video Generation Model for Free Use Globally

Chinese tech giant Alibaba has made its latest AI video generation models freely available worldwide, intensifying competition with rivals such as OpenAI.

The company announced on Wednesday that it is open-sourcing four models from its Wan2.1 series, its most advanced suite of AI models capable of generating images and videos from text and image inputs. The models will be accessible via Alibaba Cloud’s ModelScope and Hugging Face, making them available to academics, researchers, and businesses globally.

Following the announcement, Alibaba’s Hong Kong-listed shares surged nearly 5%, continuing a strong rally that has seen the stock gain 66% in 2025. Investors have been optimistic about the company’s growing role in AI and its improving financial performance, buoyed by recent policy signals from Chinese President Xi Jinping supporting the domestic private sector.

Alibaba’s move aligns with a broader trend in China, where companies are increasingly embracing open-source AI. In January, DeepSeek, another Chinese firm, shook global markets by revealing that its AI model was trained at a fraction of the cost of competitors, using less-advanced Nvidia chips. Both Alibaba’s and DeepSeek’s models are open-source, meaning they can be downloaded and modified freely, unlike proprietary AI models such as those developed by OpenAI, which generate direct revenue.

The shift towards open-source AI has sparked debate over whether AI models will become commoditized. While companies like Meta are leading the open-source push in the U.S. with their Llama models, Chinese firms have been particularly aggressive in this space, aiming to drive innovation and build global AI communities.

Tech

VP JD Vance Pledges to Protect U.S. AI and Block Its Weaponization

Vice President JD Vance reaffirmed the U.S. commitment to safeguarding its artificial intelligence and semiconductor technologies, vowing to block efforts by authoritarian regimes to weaponize them.

Speaking at France’s AI Action Summit in Paris, Vance warned that some nations have exploited AI for military intelligence, surveillance, and foreign data manipulation. “This administration will block such efforts, full stop,” he stated. “We will safeguard American AI and chip technologies from theft and misuse, work with our allies and partners to strengthen and extend these protections, and close pathways to adversaries attaining AI capabilities that threaten all of our people.”

While he did not directly name China’s AI model DeepSeek, which has drawn global attention for its competitive performance at a lower cost, Vance criticized heavily subsidized technologies exported by authoritarian states. “We’re all familiar with cheap tech in the marketplace that’s been heavily subsidized and exported by authoritarian regimes,” he said.

In a pointed message to allies, Vance cautioned against collaborating with companies linked to such regimes, arguing it would compromise national security. “Chaining your nation to an authoritarian master that seeks to infiltrate, dig in, and seize your information infrastructure never pays off,” he added.

The U.S. has ramped up efforts to control AI development and chip manufacturing, tightening restrictions on exports to China and strengthening alliances in the tech sector.
