Tech
Flutterwave Partners with EFCC to Establish Cybercrime Research Center in Nigeria
Nigerian fintech giant Flutterwave has partnered with the Economic and Financial Crimes Commission (EFCC) to create a Cybercrime Research Center in Nigeria. The initiative aims to combat internet crime, enhance transaction security, and provide sustainable opportunities for youths across the country.
Memorandum of Understanding (MoU)
A Memorandum of Understanding was signed on June 14, 2024, by the Secretary of the EFCC, Mr. Mohammadu Hammajoda, and the CEO of Flutterwave, Olugbenga Agboola. This partnership marks a significant step in the fight against financial crimes and underscores the commitment of both parties to fostering a secure financial environment.
Objectives of the Cybercrime Research Center
The Cybercrime Research Center, to be established at the new EFCC Academy, will serve as a hub for advanced research, training, and capacity building. The center will focus on several key areas:
- Advanced Fraud Detection and Prevention
  - Develop and implement cutting-edge technologies to detect and prevent financial fraud.
  - Offer comprehensive training for law enforcement and industry professionals to effectively combat modern financial crimes.
- Collaborative Research and Policy Development
  - Engage in joint research initiatives and policy formulation to enhance understanding and regulation of financial crime.
  - Provide a platform for the exchange of ideas and best practices between the public and private sectors.
- Youth Empowerment and Capacity Building
  - Provide high-end training and research opportunities for 500 youths, equipping them with the skills needed to navigate and excel in the digital economy.
- Technological Advancement and Resource Enablement
  - Create a repository of advanced tools, technologies, and resources to support financial crime investigations.
  - Develop protocols for addressing emerging threats, such as cryptocurrency-related crimes.
Statements from Key Stakeholders
Flutterwave’s CEO, Olugbenga Agboola, emphasized the company’s dedication to promoting secure transactions:
“This initiative underscores our commitment to creating a fraud-free financial ecosystem and leading the charge in safeguarding transactions across Africa. We applaud the EFCC’s relentless efforts to combat internet fraud and other illicit activities in the financial sector.”
EFCC Executive Chairman, Mr. Ola Olukoyede, expressed appreciation for the partnership:
“The EFCC is impressed with the strides and expanse of Flutterwave across Africa. This partnership marks a significant leap forward in our efforts to combat financial crimes and ensure a secure financial landscape for Nigerians. The Cybercrime Research Center will significantly enhance our capabilities to prevent, detect, and prosecute financial crimes.”
Importance of the Initiative
As the payments ecosystem evolves, financial fraud remains a significant challenge, threatening the stability of financial systems and the trust placed in them. The partnership between Flutterwave and the EFCC exemplifies how public-private collaboration can address these issues, paving the way for a more secure and prosperous economy in Nigeria and across Africa.
The Cybercrime Research Center is poised to play a crucial role in enhancing the fight against financial crimes, ensuring safer transactions, and empowering the next generation with the necessary skills to thrive in the digital economy.
Tech
X to stop Grok AI from undressing images of real people
X has announced that its artificial intelligence tool, Grok, will no longer be able to edit images of real people to depict them in revealing clothing in jurisdictions where such activity is illegal, following widespread backlash over the misuse of sexualised AI deepfakes.
In a statement published on the platform, X said it had introduced new safeguards to prevent the Grok account from being used to manipulate photos of real individuals in a sexualised manner. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing,” the company said.
The move has been welcomed by UK authorities, who had previously raised concerns about the tool’s use. The UK government described the decision as a “vindication” of its calls for X to take stronger action to control Grok. Media regulator Ofcom also said the change was a “welcome development”, while stressing that its investigation into whether the platform breached UK laws is still under way.
“We are working round the clock to progress this and get answers into what went wrong and what’s being done to fix it,” Ofcom said, signalling continued scrutiny despite the latest measures.
Technology Secretary Liz Kendall welcomed X’s announcement but emphasised the need for accountability. She said she would “expect the facts to be fully and robustly established by Ofcom’s ongoing investigation”, underlining the government’s commitment to ensuring online safety rules are upheld.
However, campaigners and victims of AI-generated sexualised images say the decision comes only after significant harm has already been done. Journalist and campaigner Jess Davies, who was among the women whose images were edited using Grok, described the changes as a “positive step” but said the feature should never have been permitted in the first place.
Tech
Alibaba Opens AI Video Generation Model for Free Use Globally
Chinese tech giant Alibaba has made its latest AI video generation models freely available worldwide, intensifying competition with rivals such as OpenAI.
The company announced on Wednesday that it is open-sourcing four models from its Wan2.1 series, its most advanced suite of AI models capable of generating images and videos from text and image inputs. The models will be accessible via Alibaba Cloud’s ModelScope and Hugging Face, making them available to academics, researchers, and businesses globally.
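For researchers and businesses who want to experiment with the open weights, the sketch below shows one plausible way to pull a released checkpoint from Hugging Face using the huggingface_hub client; the repository id and local directory are illustrative assumptions rather than details from Alibaba’s announcement, so the official Wan2.1 listings should be checked for exact model names.

```python
# Minimal sketch: downloading an open-sourced Wan2.1 checkpoint from Hugging Face.
# The repo_id and local_dir below are assumptions for illustration; consult the
# official Wan2.1 listings on Hugging Face or ModelScope for the exact names.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Wan-AI/Wan2.1-T2V-1.3B",  # assumed text-to-video variant
    local_dir="./wan2.1-t2v-1.3b",     # where the model files will be stored
)
print(f"Model files downloaded to: {local_path}")
```

Once downloaded, the weights can be loaded with whatever inference code the model’s own repository documents; open licensing also means they can be fine-tuned or modified locally.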
Following the announcement, Alibaba’s Hong Kong-listed shares surged nearly 5%, continuing a strong rally that has seen the stock gain 66% in 2025. Investors have been optimistic about the company’s growing role in AI and its improving financial performance, buoyed by recent policy signals from Chinese President Xi Jinping supporting the domestic private sector.
Alibaba’s move aligns with a broader trend in China, where companies are increasingly embracing open-source AI. In January, DeepSeek, another Chinese firm, shook global markets by revealing that its AI model was trained at a fraction of the cost of competitors, using less-advanced Nvidia chips. Both Alibaba’s and DeepSeek’s models are open-source, meaning they can be downloaded and modified freely, unlike proprietary AI models such as those developed by OpenAI, which generate direct revenue.
The shift towards open-source AI has sparked debate over whether AI models will become commoditized. While companies like Meta are leading the open-source push in the U.S. with their Llama models, Chinese firms have been particularly aggressive in this space, aiming to drive innovation and build global AI communities.
Tech
VP JD Vance Pledges to Protect U.S. AI and Block Its Weaponization
Vice President JD Vance reaffirmed the U.S. commitment to safeguarding its artificial intelligence and semiconductor technologies, vowing to block efforts by authoritarian regimes to weaponize them.
Speaking at France’s AI Action Summit in Paris, Vance warned that some nations have exploited AI for military intelligence, surveillance, and foreign data manipulation. “This administration will block such efforts, full stop,” he stated. “We will safeguard American AI and chip technologies from theft and misuse, work with our allies and partners to strengthen and extend these protections, and close pathways to adversaries attaining AI capabilities that threaten all of our people.”
While he did not directly name China’s AI model DeepSeek, which has drawn global attention for its competitive performance at a lower cost, Vance criticized heavily subsidized technologies exported by authoritarian states. “We’re all familiar with cheap tech in the marketplace that’s been heavily subsidized and exported by authoritarian regimes,” he said.
In a pointed message to allies, Vance cautioned against collaborating with companies linked to such regimes, arguing it would compromise national security. “Chaining your nation to an authoritarian master that seeks to infiltrate, dig in, and seize your information infrastructure never pays off,” he added.
The U.S. has ramped up efforts to control AI development and chip manufacturing, tightening restrictions on exports to China and strengthening alliances in the tech sector.
