Why Building Trust in AI Will Be India’s Biggest Challenge

India has the capacity to build its own models, recruit talent, and stand up GPU farms. Trust is the hard part. We are already used to rumors that leap from WhatsApp groups to prime time, and even official handles amplify half-truths at times. Voice cloning, photorealistic video, and cheap model inference will pour fuel on that fire. If citizens cannot tell what is real, every institution loses altitude.

The good news is that India has experience building trust rails at national scale. UPI, Aadhaar, DigiLocker, CoWIN, and FASTag all showed that shared protocols can carry a billion people safely. Run the same playbook on AI. Trust is infrastructure, not a press release.

A simple trust stack India can adopt now

1. Provenance by default: Promote C2PA-style signatures in cameras, newsroom tooling, and creator apps and sites. Media that carries no signature gets a visible "origin unknown" tag. Provide public APIs so fact checkers and citizens can verify provenance in a single tap (a minimal sketch of this check follows the list).

2. Risk classes and audits: Define high-risk, medium-risk, and low-risk AI uses. High risk covers deepfakes, health, credit, and elections. In those buckets, vendors must register models, publish model cards, support red teaming in Indian languages, and accept independent audits. Pay bounties not only for security bugs, but for misuse.

3. Elections and crisis protocol: Mandate cross-platform rapid-response playbooks with 24-to-72-hour turnarounds during disasters and polls. When risk is high, slow virality: for content that spikes without provenance, freeze recommendation spread, add friction to forwards, and attach context panels (a sketch of such a brake follows the list). Penalize careless amplification by government handles.

4. Back open source for Indic reality: Most detectors and safety tools perform poorly in Indian languages. Fund Bhashini-aligned datasets, open detectors, and community red teams that cover code-mixing, dialects, and regional scripts. Partner with universities and startups to build evaluation benches that are Indian, not English-first.

5. Self regulation with teeth: The OTT code of practice worked because the ecosystem bought into it; the same can work for AI content. That means transparent disclosure of AI-generated political content, a public database of political ad spend and targeting, and abuse signals agreed across companies. Report takedown SLAs publicly in quarterly transparency reports.

6. Procure with safety requirements: Government should buy only AI systems that provide audit access, privacy assurances, published eval results, and clear data-retention terms. Bake this into GeM templates so every buyer can apply it without drafting custom contracts.

7. Developer hygiene by default: Public-sector AI needs logging from day one, a human in the loop for high-consequence decisions, rollback procedures, and post-mortems for AI-related incidents (a sketch follows the list). Reward teams that put their playbooks out in the open; trust comes when people see the work, not just the results.
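To make point 1 concrete, here is a minimal Python sketch of the tagging rule. It assumes a hypothetical verify_manifest() hook where a real C2PA verifier (an SDK or the c2patool CLI) would plug in; the labels and data shapes are illustrative, not any standard.

```python
# Sketch of the "origin unknown" tagging rule from point 1.
# verify_manifest() is a hypothetical stand-in for a real C2PA verifier.
from dataclasses import dataclass
from enum import Enum

class ProvenanceLabel(Enum):
    VERIFIED = "verified"            # valid signature chain
    TAMPERED = "signature invalid"   # manifest present but fails checks
    UNKNOWN = "origin unknown"       # no manifest at all

@dataclass
class VerificationResult:
    has_manifest: bool
    signature_valid: bool
    signer: str | None = None

def verify_manifest(media_bytes: bytes) -> VerificationResult:
    # Placeholder: a real implementation would parse and validate
    # the embedded C2PA manifest here.
    return VerificationResult(has_manifest=False, signature_valid=False)

def label_media(media_bytes: bytes) -> ProvenanceLabel:
    """Map a verification result to the visible tag a viewer sees."""
    result = verify_manifest(media_bytes)
    if not result.has_manifest:
        return ProvenanceLabel.UNKNOWN
    if not result.signature_valid:
        return ProvenanceLabel.TAMPERED
    return ProvenanceLabel.VERIFIED
```

The point of the design is that the default is a visible tag, not silence: unsigned media is not blocked, it is labeled.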
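The virality brake in point 3 can be expressed as a simple sliding-window rule. The threshold, window, and forward cap below are assumptions for illustration, not any platform's actual policy.

```python
# Sketch of a virality brake: when unprovenanced content spikes,
# stop recommending it and cap forwards until it is reviewed.
from collections import deque
import time

SPIKE_THRESHOLD = 1000   # shares per window before the brake engages (assumed)
WINDOW_SECONDS = 600     # 10-minute sliding window (assumed)
FORWARD_CAP = 5          # max forwards once the brake is on (assumed)

class ViralityBrake:
    def __init__(self) -> None:
        self.share_times = deque()

    def record_share(self) -> None:
        self.share_times.append(time.time())

    def _shares_in_window(self) -> int:
        # Drop shares older than the window, then count what remains.
        cutoff = time.time() - WINDOW_SECONDS
        while self.share_times and self.share_times[0] < cutoff:
            self.share_times.popleft()
        return len(self.share_times)

    def policy(self, has_provenance: bool) -> dict:
        """Return the distribution rules for this item right now."""
        spiking = self._shares_in_window() >= SPIKE_THRESHOLD
        if spiking and not has_provenance:
            return {"recommend": False, "forward_cap": FORWARD_CAP,
                    "context_panel": True}
        return {"recommend": True, "forward_cap": None,
                "context_panel": False}
```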
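Point 7's human-in-the-loop rule is likewise a small amount of code plus discipline. The sketch below assumes illustrative action names and a pluggable reviewer callable; what matters is that every decision, automated or not, leaves an audit record.

```python
# Sketch of human-in-the-loop routing with audit logging for point 7.
# Action names and record fields are illustrative assumptions.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

HIGH_CONSEQUENCE = {"credit_denial", "benefit_cutoff", "takedown"}

def decide(action: str, model_score: float, reviewer=None) -> str:
    """Route high-consequence actions to a human before execution."""
    record = {"ts": time.time(), "action": action,
              "model_score": model_score, "route": "auto"}
    if action in HIGH_CONSEQUENCE:
        record["route"] = "human_review"
        # `reviewer` is any callable returning True/False; in practice
        # a review queue or ticketing system sits behind it.
        approved = bool(reviewer(record)) if reviewer else False
        record["approved"] = approved
        audit_log.info(json.dumps(record))
        return "executed" if approved else "held_for_review"
    audit_log.info(json.dumps(record))
    return "executed"
```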

Pilots within six months

  • A deepfake hotline run with state police, with takedown SLAs and victim support.
  • C2PA adoption at Doordarshan, PIB, and the five leading newsrooms.
  • Virality brakes on WhatsApp, Instagram, and YouTube, with election silence windows triggered by the Election Commission.

The path forward

AI is coming, and we cannot avoid it. We can adopt it with guardrails suited to India's information landscape. The platform giants, the open-model communities, and the government are already talking to each other. Set up a Trust in AI Taskforce with a 90-day code of practice, a 12-month rollout across high-reach platforms, and real consequences for noncompliance.

If India treats trust as a product, not an afterthought, we will lead. If not, AI will undermine confidence that takes years to build. The choice is ours, and the window is short.

(The author, Aravind Putrevu, is the Director of Developer Marketing at Coderabbit.)
