A recent WCO News article by officers of the Central Board of Indirect Taxes and Customs records that since 2022, Indian Customs has deployed in-house artificial intelligence models to analyse X-ray images of imported cargo. The models classify goods, detect heterogeneity between declared and scanned contents, identify specific objects, and predict container weight. They flag consignments for examination, and they have already driven seizures.
What the article does not address, and what Indian Customs jurisprudence has not yet confronted, is the legal status of the algorithm's output when it passes through examination, seizure, and the show cause notice and arrives in adjudication. That question cannot wait for the first writ petition to answer it.
What the WCO Article Tells Us
The article, authored by three officers of the Central Board of Indirect Taxes and Customs and published in WCO News No. 109 (Issue 1, 2026), describes an image analytics solution developed by the National Customs Targeting Centre. Four model types are in use:
- A product classification model analyses the scanned image and proposes tariff classification within the Indian nomenclature
- A heterogeneity detection model compares the image to the goods declared and flags mismatches
- An object detection model using the YOLOv7 deep-learning architecture identifies and locates specific objects within the container
- A container weight prediction model estimates probable weight variances by reference to declared density
The output is not, the article suggests, a determinative decision. Field officers retain discretion, analyse images using their own expertise, and classify the image as "suspicious" or "not suspicious". The human remains in the loop. The model is presented as decision support.
The article also records the operational claim: the "hit rate", meaning the proportion of AI predictions that led to the discovery of irregularities, has improved. Representative detections cited include firecrackers in consignments declared as miscellaneous items, cosmetics found among declared garments, and cigarettes concealed in cargo declared to contain air fresheners.
These are, on their face, the vindicating examples. The legal difficulty begins precisely where the vindicating examples end.
The Evidentiary Question Nobody Has Asked
When an AI model assigns a container a "suspicious" classification, three things typically follow. First, the container is marked for examination. Second, if examination reveals discrepancy, goods are detained or seized under Section 110 of the Customs Act, 1962. Third, a show cause notice (SCN) issues under Section 124, and in due course an order-in-original under Section 28 or Section 125.
At none of these stages does the current procedural framework acknowledge that an algorithmic determination has intervened in the chain of events. The SCN records the physical discovery. It does not record the algorithmic trigger. The reply to SCN contests the physical discovery. It cannot contest what it is not told. The order-in-original treats the departmental case as a conventional examination case. In structural terms, the AI output has passed through the system invisibly.
If the algorithm is merely a targeting aid, its output forms no part of the evidentiary foundation and disclosure is arguably not required. That is the position the WCO article implicitly adopts. But if the algorithm's output materially contributes to the belief that goods are liable to confiscation, to the framing of the SCN, or to the adjudicating authority's satisfaction under Section 111, then its output is part of the evidentiary foundation and the noticee is entitled to know of it, challenge it, and test it.
The honest answer is that we do not know which of the two positions Indian law will ultimately adopt, because no Indian court has yet been asked the question.
What a Noticee Would Be Entitled to Ask For
Assume the question arises. Assume a noticee before the Bombay High Court, resisting a Section 110 seizure on the footing that examination was triggered by an AI flag, seeks disclosure. The architecture of what they would be entitled to ask for is already reasonably well-settled in comparative jurisdictions. Four heads are relevant.
1. Model Identity and Version
Which model flagged the consignment, which version of that model was in production on the date of scanning, and what changes to training data or architecture have occurred since.
2. Confidence Threshold
What probability score the model assigned to the flagged consignment, and what threshold was configured to trigger examination. A flag at 0.51 and a flag at 0.97 are not, in any meaningful sense, the same flag.
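The point is easily made concrete. The sketch below is purely illustrative: the function name and the 0.50 threshold are assumptions for exposition, not features of the NCTC system. It shows how a configured threshold collapses a continuous model score into a binary flag, erasing the very difference between 0.51 and 0.97 that an adjudicator would want to weigh.

```python
# Hypothetical illustration only: how a configured threshold turns a
# continuous model score into a binary "suspicious" flag. The threshold
# value is an assumption, not a figure from any deployed system.
THRESHOLD = 0.50  # assumed operating point

def flag_consignment(score: float, threshold: float = THRESHOLD) -> bool:
    """Return True if the model's score meets the examination threshold."""
    return score >= threshold

# A marginal flag and a near-certain flag produce the identical binary
# output; the underlying score is invisible downstream unless disclosed.
print(flag_consignment(0.51))  # True
print(flag_consignment(0.97))  # True
```

Without disclosure of the score and the threshold, the SCN stage sees only the flag, never the confidence behind it.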
3. False Positive Rate
On the model's historical performance, what proportion of flagged consignments have, on examination, been found compliant. If a model generates high volumes of false positives, the weight any adjudicator can legitimately place on its output diminishes correspondingly.
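The weight point is simple arithmetic. A minimal sketch, using invented counts for illustration (no real figures are published in the WCO article), of the proportion of flags that proved compliant on examination:

```python
def flag_outcomes(flagged_total: int, confirmed_irregular: int) -> tuple[float, float]:
    """Return (hit_rate, compliant_share) over a model's historical flags.

    hit_rate: share of flagged consignments found irregular on examination.
    compliant_share: share of flags that proved compliant, i.e. the figure
    that diminishes the weight an adjudicator can place on any single flag.
    Inputs here are hypothetical, not real enforcement data.
    """
    hit_rate = confirmed_irregular / flagged_total
    return hit_rate, 1.0 - hit_rate

# Invented example: 1,000 flagged consignments, 150 confirmed irregular.
hit, compliant = flag_outcomes(1000, 150)
print(f"hit rate={hit:.2f}, flags found compliant={compliant:.2f}")
```

On these invented numbers, 85 per cent of flags would be false alarms; an adjudicator told only that "the model flagged the container" is told almost nothing.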
4. Training Data Provenance
Whether the model was trained on data representative of the product category in question, and whether synthetic data (which the WCO article expressly contemplates, referring to the "threat image" technique of inserting fictional but realistic threat items into real cargo images) formed part of the training set. A classification model trained on an unrepresentative corpus will make systematically skewed predictions.
None of these disclosures would compromise operational security in any serious sense. They would, however, allow adjudicators and courts to weigh the algorithmic contribution to the departmental case in an informed way, rather than treating it as either invisible or unchallengeable.
The Classification Model: A Concern of Its Own
The product classification model deserves separate attention. Tariff classification under the First Schedule to the Customs Tariff Act, 1975, is one of the most contested areas of Indian customs litigation. Classification disputes occupy a substantial portion of the CESTAT docket. The same goods are regularly classified differently across Commissionerates. CESTAT reverses departmental classification with some frequency. WCO Harmonized System Committee Classification Opinions themselves evolve.
Against that backdrop, a model that "proposes various options for classification within the Indian nomenclature of goods" is doing something more than targeting. It is anchoring a classification position. Once the model's suggested heading enters the departmental workflow, the burden on the examining officer to depart from it is, at the very least, psychological.
Over time, with training data that includes the department's own historical classification decisions, the model will tend to reinforce the departmental view and narrow the range of classifications the department considers plausible. The risk is not one of error. It is one of systematic convergence of departmental classification positions around model outputs, at the same time that those very positions are being contested and reversed at CESTAT.
This is a concern that will not be resolved by disclosure alone. It will require the department to maintain clarity, internally and in adjudication orders, that model-proposed classifications carry no evidentiary weight and do not relieve the adjudicating authority of the obligation to independently justify the classification adopted.
A Governance Vocabulary Already Exists
The Indian legal profession is not writing on a blank slate. In March 2026, the Singapore Ministry of Law published a Guide for Using Generative AI in the Legal Sector, which articulates three operating principles (professional ethics, confidentiality, transparency), distinguishes between human-in-the-loop and human-on-the-loop supervision, and provides risk-based oversight templates. The European Union's AI Act classifies AI systems used in law enforcement and border control as high-risk and imposes transparency, accuracy, and human oversight obligations. The Council of Europe's Framework Convention on Artificial Intelligence, adopted in 2024, sets baseline obligations of accountability and remedy.
India has its own emerging architecture: the Digital Personal Data Protection Act, 2023, the NITI Aayog Responsible AI principles, and the draft Digital India Act discussions. None of these yet speaks specifically to AI in customs enforcement. The silence is the opening.
An Agenda for the Profession
Three propositions follow for the indirect tax Bar and for practitioners advising importers.
The CBIC should be urged, through representations and, if necessary, through writ practice, to amend the SCN template in AI-flagged cases to disclose the algorithmic trigger and the four heads identified above. A procedural instruction along these lines would pre-empt litigation rather than invite it.
Noticees in cases likely to have involved scanner-based examination should, as a matter of course, seek disclosure of whether AI models were involved in selection, and if so, the four heads above. A reply that does not ask the question preserves no ground for later challenge.
The law review and seminar circuit should take up the evidentiary question as a matter of priority. Section 138B of the Customs Act, the Evidence Act provisions on expert opinion and electronic records (now carried into the Bharatiya Sakshya Adhiniyam, 2023), and the Section 65B certification regime all bear on it. The scholarship that develops now will frame the first judicial confrontation when it comes.
The Narrow Point
The WCO article closes with the observation that Indian Customs "remains committed to sharing its experience with other Customs authorities and welcomes further engagement with interested administrations." That is a welcome sentiment. The legal profession is also an interested stakeholder, and the engagement owed to it is of a different character: not collaboration on deployment, but scrutiny of deployment. Indian customs jurisprudence has a long tradition of disciplining revenue enforcement through procedural fairness. The algorithm is the newest entrant to that jurisprudence. It should be received with the same disciplined scepticism with which every previous instrument of enforcement has been received, and on the same terms: disclose the material, permit challenge, bear the burden of justification.
The first writ petition on an AI-flagged seizure will be filed in an Indian High Court in the foreseeable future. Whether the jurisprudence that follows is coherent or improvised depends on the work done, by the Bar and by the Bench, in the period before it.