Today, December 5, was an important day in the artificial intelligence space. Union Minister Rajeev Chandrasekhar met with social media platforms to review the progress they have made in tackling deepfakes and misinformation in general. New compliance advisories are expected to be released. Meta Platforms and IBM have joined more than 40 other organizations in an AI alliance, an industry group dedicated to open-source AI work that aims to share technology and reduce risks. This and more in today’s AI roundup. Let us take a closer look.
Govt reviews social media platforms’ efforts to curb deepfakes
Union Minister Rajeev Chandrasekhar recently met with social media platforms to assess their efforts in combating misinformation and deepfakes. He emphasized that advisories will be issued within the next two days to ensure complete compliance by the platforms. Additionally, new and amended IT Rules are being considered to enhance platform compliance and ensure the safety and trust of online users. Chandrasekhar shared this information on X (formerly Twitter).
“Held the 2nd #DigitalIndiaDialogues on Misinformation and #Deepfakes with intermediaries today, to review the progress made since the Nov 24 meeting. Many platforms are responding to the decisions taken last month and advisories on ensuring 100 per cent compliance will be issued in the next 2 days. A new amended #ITRules to further ensure compliance of platforms, and safety & trust of #DigitalNagriks is actively under consideration,” Chandrasekhar said in the post.
Meta, IBM create AI Alliance
Meta Platforms and IBM are part of a newly formed industry group, the AI Alliance, which includes over 40 companies and organizations, reports Bloomberg. The alliance aims to promote open-source AI work, fostering collaboration to share technology and minimize risks, with a focus on responsible AI development, including safety and security tools. The group also intends to increase the availability of open-source AI models, encourage the development of new hardware, and collaborate with academic researchers. The AI Alliance plans to establish a governing board and a technical oversight committee. Other participants include Oracle, Advanced Micro Devices, Intel, Stability AI, the University of Notre Dame, and the Mass Open Cloud Alliance.
SenseAI Ventures launches fund for AI startups
SenseAI Ventures, an AI-focused fund targeting seed or pre-Series A startups, has introduced its SenseAI Fund I with a corpus of ₹200 crore, reports Financial Express. The fund aims to invest in 18-20 AI-first startups and plans to make follow-on investments in promising companies from its portfolio during subsequent funding rounds.
“Our approach is beyond capital; as experienced founders and operators we offer bespoke support tailored to the unique needs of each AI-first startup. AI is the single largest value creation opportunity of our lifetimes,” said Rahul Agarwalla, cofounder of SenseAI Ventures.
UK could use AI to ensure underage citizens don’t watch porn
The UK has proposed new age-check guidance to safeguard children from accessing online pornography, reports Reuters. In line with the recently passed Online Safety Act, websites and apps featuring adult content must ensure that children cannot easily access such material. The proposal suggests using AI-based technology, including facial age estimation, to determine whether a viewer is of legal age. This may involve users taking a selfie and uploading it for analysis. Other measures in the guidance include photo identification matching, where users upload a photo ID such as a passport, and credit card checks to verify age. In Britain, the legal age for viewing pornography is 18.
Getty Images lawsuit against Stability AI to go to trial
A UK court has allowed a lawsuit filed by Getty Images against Stability AI, the creator of Stable Diffusion, to proceed to trial, as per a report by The Verge. Getty claims that Stability AI used its copyrighted images to train AI models. The Business and Property Courts of England and Wales found merit in Getty’s assertion, stating that the case warrants further investigation. Stability AI argued against the UK court’s jurisdiction, contending that none of the individuals involved in training or developing Stable Diffusion were based in the UK and that the model was trained using cloud computing power from AWS in the US.