Industrial Vision Systems

Machine Intelligence.
See. Think. Act.

Print  ·  Textile  ·  Agri  ·  Recycling

AI Engineering Services

We've built our own.
Now we build yours.

Consult  ·  Build  ·  Port  ·  Lifecycle

🦅 Falcon Sorting Platform
OEM Sorting Platform

Falcon Sorting Platform

Hard real-time bulk sorting intelligence for machine builders


The Falcon Sorting Platform is YantraVision's FPGA-powered optical sorting system that delivers hard real-time bulk sorting decisions in under 2 milliseconds. Built on a three-layer architecture — custom FPGA hardware, configurable multi-spectral sensing (RGB, SWIR, NIR), and the Clever Sight AI engine — Falcon serves OEM machine builders deploying sorting lines across agriculture, recycling, textiles, and minerals.

📷

Sensing

Sensing Layer

Configurable cameras and illumination — RGB, SWIR, NIR, and combinations.

⚙️

FPGA Core


Hard real-time, <2ms cycle. Custom boards, YantraVision IP.

🧠

CSE

Clever Sight Engine

Analyzes data, auto-tunes parameters continuously.

The central claim of Falcon is hard real-time processing. Every pixel from every line scan is processed within a deterministic cycle of under 2 milliseconds — no operating system, no scheduling jitter, no unpredictability.

The FPGA boards are custom-designed and manufactured by YantraVision — built specifically for the harsh conditions of industrial sorting lines: vibration, dust, heat, and continuous 24/7 operation. The firmware and processing algorithms running on them are entirely YantraVision IP.


Falcon FPGA processing boards — custom-designed and manufactured by YantraVision

PC-based systems introduce OS jitter — they cannot guarantee the microsecond-level timing that precise ejection valve control demands. DSP boards are the previous generation: purpose-built but throughput-limited, unable to keep pace with modern line speeds. Falcon is what comes next.

| Feature | PC-based | DSP | Falcon FPGA |
| Latency | Variable / jitter | Limited | <2 ms deterministic |
| Throughput | Moderate | Limited | High — line speed |
| Hardware | Off-the-shelf | Off-the-shelf | Custom, YantraVision IP |
| Harsh environment ready | Partial | | ✓ |
| AI feedback loop (CSE) | | | ✓ |
| Contaminant capture + audit | | | ✓ |

Bulk sorting is not one problem. Sorting rice is different from sorting PET flakes, which is different from sorting copper ore. The contaminants differ, the material speeds differ, the lighting conditions differ.

The Falcon sensing layer is not a fixed menu. YantraVision selects and integrates the right camera and illumination configuration for each application from a vendor partner ecosystem — matched to the FPGA core. All configurations use 2048–4096-pixel line scan sensors, with scan rates matched to belt speed.

📷
RGB Camera
🔭
RGB + SWIR Camera
🔬
Dual SWIR Camera
🌿
RGB+NIR Camera

The core challenge in bulk sorting is not detection — it is consistency. Contamination type, colour, and size shift hour to hour, not just batch to batch.

Consider a rice sorting line.

One batch: husk. The next hour: stones. Fixed parameters that cleared the first will over-eject on the second — or under-detect and let contamination through.

CSE solves it.

Ejection events are sampled in real-time, analysed against processing parameters, and corrections fed back into the processing pipeline — continuously, not just at setup.

📸
Capture
🔍
Analyse
🎛️
Correct
🔔
Alarm
if beyond range

The result: a sorting line that does not degrade over long runs or across changes in material batches.
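The capture → analyse → correct → alarm cycle above can be pictured as a simple control loop. The sketch below is purely illustrative — the function name, the proportional-correction rule, and the threshold range are all assumptions; the actual CSE algorithms are YantraVision IP.

```python
def cse_step(threshold, ejection_rate, target_rate,
             gain=0.1, lo=0.2, hi=0.9):
    """One hypothetical auto-tuning step: compare the observed ejection
    rate against the target (analyse), nudge the detection threshold
    proportionally (correct), and raise an alarm if the correction
    would drift beyond the permitted auto-correction range."""
    error = ejection_rate - target_rate       # analyse
    threshold = threshold + gain * error      # correct
    alarm = not (lo <= threshold <= hi)       # alarm if beyond range
    threshold = min(max(threshold, lo), hi)   # hold within safe range
    return threshold, alarm
```

Run once per sampling window, the loop keeps the threshold tracking batch-to-batch drift; only drift too large to absorb surfaces as an operator alarm.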

What is the Falcon Sorting Platform?
Falcon is YantraVision's FPGA-powered optical sorting platform that delivers hard real-time bulk sorting decisions in under 2 milliseconds. It combines custom FPGA hardware, configurable multi-spectral sensing (RGB, SWIR, NIR), and the Clever Sight AI engine in a three-layer architecture designed for OEM machine builders.
How does FPGA-based sorting differ from PC-based or GPU systems?
FPGA sorting delivers deterministic sub-2ms latency with no OS jitter — critical for precise ejection valve timing at high line speeds. PC-based systems suffer from variable latency due to operating system scheduling, while DSP boards are throughput-limited. Falcon's custom FPGA boards are also designed for 24/7 industrial operation in harsh environments with dust, vibration, and heat.
What industries use the Falcon platform?
Falcon is used across agriculture (grain, rice, nut, coffee, and seed sorting), textiles (cotton contamination removal), recycling (plastic and paper sorting), and minerals processing. The platform's configurable sensing layer means it adapts to each material type and contaminant profile.
What is the Clever Sight Engine (CSE)?
The Clever Sight Engine is Falcon's AI feedback layer that continuously analyses ejection events in real time. It auto-tunes image processing parameters as material conditions change — correcting for batch-to-batch variation in contamination type, colour, and size without operator intervention. If drift exceeds the auto-correction range, CSE raises an alarm.
What camera configurations does Falcon support?
Falcon's sensing layer supports RGB cameras for colour-based detection, RGB+SWIR for material composition analysis, dual SWIR for deep spectral classification, and RGB+NIR for combined visual and near-infrared differentiation. The configuration is selected based on the specific application and contaminant profile.
Can Falcon be integrated into existing sorting machines?
Yes. Falcon is designed as an OEM platform — it ships with FPGA compute, imaging, illumination, and ejection actuation ready for integration into third-party sorting machines. YantraVision provides full integration support including sensor selection, mechanical interface design, software integration, and on-site commissioning.

Falcon ships as a complete platform — FPGA compute, imaging, illumination, and ejection actuation — ready for integration into OEM sorting machines. YantraVision provides full integration support: sensor selection, mechanical interface, software integration, and commissioning.

Last updated: March 2026
🌾 Textile Sorting
Partner Solution — Textile Sorting

Textile Sorting

Vision intelligence for cotton tuft and recycled fibre processing

Powered by the Falcon Sorting Platform  →
Textile Sorting — Cotton Inspection

High-volume cotton tuft sorting runs at 20 m/sec — and at that speed, two very different detection problems have to be solved reliably, every run. For natural cotton, the critical contaminant is White Polypropylene: it is the same colour as the tuft itself, giving any colour-based system nothing to detect against. For recycled cotton, the problem flips — incoming fibre arrives in every possible colour, with no consistent baseline. Detection parameters that work for one batch fail on the next unless the system re-baselines itself for each run.

Cotton tuft vs White Polypropylene

Natural Cotton — The Invisibility Problem

Most contamination in natural cotton is visible. Coloured threads, jute fibres, and synthetic particles stand out against white cotton. The critical failure point is white polypropylene and transparent PP. These contaminants share the colour of the fibre itself — optical systems that rely on colour contrast cannot see them. They pass through undetected, embedding into finished fabric and surfacing as defects, customer rejections, and costly reprocessing.

The contaminants hardest to detect are the ones that matter most.

Recycled cotton and synthetic fibre

Recycled Cotton — The Unpredictability Problem

Recycled cotton introduces a different kind of difficulty. Reclaimed fibre arrives in a wide range of colours — there is no consistent white baseline to measure against. Any detection approach that depends on colour contrast between the contaminant and the surrounding fibre becomes unreliable. A system tuned for white cotton will miss contamination in a mixed-colour recycled stream entirely.

As recycled fibre volumes grow, sorting systems must work regardless of input colour.

The Falcon Sorting Platform is YantraVision's core sorting intelligence platform — built for high-speed, low-latency, hard real-time sorting across industrial applications. It combines line scan cameras, LED illumination, embedded hardware with FPGA, and AI-driven automation algorithms — all designed and built by YantraVision — into a single, compact module.

Textile sorting is one application of the platform. The capabilities described on this page reflect the Falcon Sorting Platform deployed in the blow room, applied to both natural cotton and recycled fibre streams.

Learn more about the Falcon Sorting Platform  →

Five capabilities — each addressing a specific gap in how conventional sorting systems handle cotton fibre.

textureVision — Seeing What Others Miss

One Module. Every Contaminant.

Contamination Captured and Stored.

Clever Sight Engine — The Machine Learns

Full Control. From Anywhere.

One Module. Every Contaminant.

Coloured threads, jute, synthetic fibres, white PP, transparent PP — all detected in a single module. One unit, one installation point, no module coordination overhead.

Simple in design. Algorithmically built to handle everything.

The Falcon Sorting Platform uses standard RGB colour vision — no exotic sensors, no multi-module complexity. The same RGB cameras that handle a natural cotton stream also handle mixed-colour recycled fibre, white PP, and transparent PP, because the intelligence is in the algorithms, not the hardware configuration. Simple to install. Simple to maintain.

📋
Audit-Ready Proof
⚙️
Self-Optimising System
💰
Lower Cost of Ownership
📡
Remote Oversight

The differences between sorting platforms are not marginal — they reflect fundamentally different architectural choices.

| Capability | German OEM | Italian OEM | Falcon Platform — YV |
| Detection modules | 5 separate | Multiple | 1 universal module |
| White PP detection | Shininess + LED | Colour + light | Pure vision AI — textureVision |
| Transparent PP detection | Dedicated module | Limited | Included — textureVision |
| Recycled fibre support | | | ✓ |
| Remote monitoring | Not standard | Optional add-on | Built in — app + web |
| Auto-control (CSE) | | | ✓ Clever Sight Engine |
| Contamination capture | | | ✓ images saved on ejection |
| Cost / maintenance | High | High | Lowest in industry |

The Falcon Sorting Platform is not a more configurable version of what already exists. It is a different approach.

The Falcon Sorting Platform is deployed and running in cotton processing lines today. Talk to us about your application — natural cotton, recycled fibre, or both.

🖨️ Strategic Partner — Sheet Inspection
Partner Solution — Sheet Inspection

Strategic Partner — Sheet Inspection

Vision Intelligence for Sheet Inspection on Print Lines

Embedded precision inspection for press OEMs and machinery integrators — turning inspection into a built-in capability, not an afterthought.

Sheet Inspection on Offset Press

YantraVision's Sheet Inspection System is an OEM-integrated machine vision solution that embeds AI-powered precision inspection directly into print presses and finishing machinery. Operating at full production speed with no line slowdown, it performs 100% inline inspection of every sheet — detecting print, colour, coating, die-cut, and substrate defects below 0.1 mm using multi-camera synchronised capture.

The result: every sheet leaving the press has been inspected — catching print, colour, coating, punch, and die-cut defects before they reach the customer. Built for press OEMs and machinery integrators where adding inspection capability is a competitive differentiator.

High detection sensitivity alone does not make a production system work.

YantraVision's inspection software gives operators and QA teams configurable control over accept/reject decisions — with defect severity classification, tolerance thresholds, and zone-based rules tunable per job.

The operator defines which defects are critical and which are acceptable for a given job. This job-level sensitivity control is what drives high yield — compliant quality is maintained, while defined defects are reliably eliminated. The same engine works across different press types and substrate families.

The result: compliance-grade inspection with production-grade yield.

Machine-Integrated Mount

Camera heads, strobe lighting, and compute modules mount within the press — factory-aligned during commissioning. No separate cabinet, no extra floor space.

Sheet-by-Sheet Capture

Every sheet imaged at full press speed — front, back, and surface finish in one synchronous pass. Slaved to press encoder signals for precision at any speed.

AI-Driven Classification

Deep learning classifies defects in under 30ms per sheet — separating real faults from acceptable substrate variation. AI-assisted job setup auto-configures tolerances from the approved master.

Accept / Reject Output

Integrates with the machine's PLC — triggering diverters, line-stop events, or marking systems. Clean sheets proceed; defective sheets are flagged or ejected per sheet.

Print and colour defect detection
Coating and surface defect detection
Punching and die-cutting defect detection
Substrate and back-face defect detection
💊 Pharmaceutical Packaging

Folding cartons for blister packs, bottles, tubes, and sachets — where every print element carries regulatory weight and recall risk.

🖨️ Commercial Print

High-volume sheetfed offset runs — where brand colour consistency and zero-defect delivery define the customer relationship.

Full Speed
No Line Slowdown
100%
Sheet Coverage
< 0.1 mm
Min. Defect Size
Multi-Cam
Synchronised Capture

YantraVision supplies the complete inspection stack — illumination design, optics, imaging hardware, and AI compute — as an integrated, OEM-ready system. The partnership is an ongoing engineering relationship: co-development of mounting configurations, PLC integration protocols, and inspection parameter libraries specific to each press model. Each generation carries accumulated engineering depth forward.

What is YantraVision's sheet inspection system?
YantraVision's Sheet Inspection System is an OEM-integrated machine vision solution that embeds AI-powered inspection directly into print presses and finishing machinery. It performs 100% inline inspection of every sheet at full production speed — detecting print, colour, coating, die-cut, and substrate defects at below 0.1 mm resolution.
How does sheet inspection integrate with print presses?
Camera heads, strobe lighting, and compute modules mount within the press — factory-aligned during commissioning. No separate inspection station or additional floor space required. The system slaves to press encoder signals for precise synchronisation at any speed, and integrates with the machine's PLC for automated accept/reject decisions.
What defects does the sheet inspection system detect?
Four defect categories: print and colour defects (missing print, colour deviation, registration error, hickey, ghosting, smear), coating and surface defects (uneven coating, streaks, bare patches, lamination bubbles), punching and die-cutting defects (mis-punched holes, misregistered cuts, incomplete cuts, Braille verification), and substrate/back-face defects captured by a dedicated underside camera.
Does the system slow down the press?
No. The system operates at full press speed with no line slowdown. AI classification processes each sheet in under 30ms using deep learning that separates real faults from acceptable substrate variation. Multi-camera synchronised capture ensures complete sheet coverage without affecting cycle time.
What industries use sheet inspection?
Primarily pharmaceutical packaging (folding cartons for blister packs, bottles, and sachets where every print element carries regulatory weight) and commercial print (high-volume sheetfed offset runs where brand colour consistency and zero-defect delivery are required).
How does AI classification reduce false rejects?
The AI engine provides job-level sensitivity control — operators define which defects are critical and which are acceptable for each specific job. Defect severity classification, tolerance thresholds, and zone-based rules are configurable per job. This yield control approach maintains compliance-grade inspection while maximising production efficiency.

YantraVision embeds precision inspection directly into your press — a complete OEM-ready system covering illumination, optics, imaging, and AI compute. Talk to us about your press model or integration requirements.

🏷️ LUMA Offline Label Inspection System
Product — Label Inspection

AI-Powered Label Inspection System

Catch defects before labels reach your customer — ensuring only flawless labels leave your facility.

Fully AI-powered workflow and real-time inspection deliver unmatched precision while giving operators complete control and flexibility. An intuitive, operator-focused UI keeps attention on quality control, not on software handling.

LUMA Label Inspection — Flexo Press

The LUMA Label Inspection System is YantraVision's AI-powered offline inspection solution that detects printing and material defects on labels at speeds up to 100 m/min with 4K line-scan resolution, identifying defects as small as 0.1 mm² across eight categories including missing print, colour deviation, registration errors, and barcode defects.

The Challenge

Traditional systems often misidentify acceptable variations as defects, driving up setup time and triggering false alarms. Operators are forced to spend time on unnecessary checks, reducing productivity and yield and undermining confidence in the system.

The LUMA Advantage

LUMA overcomes these challenges with reduced false defects, low setup time, and high defect detection accuracy — delivering higher yield and reliable quality assurance that traditional systems fail to achieve.

Equipped with high-resolution line-scan cameras, the system performs 100% inspection of every label — identifying print defects, colour deviations, registration errors, missing text, smudges, substrate and matrix flaws. It supports a wide range of substrates and integrates seamlessly with all unwinders and rewinders on the market. Designed for industries where accuracy is critical — pharmaceuticals, food & beverage, cosmetics, and industrial manufacturing.

Artwork Created → LUMA Proof Reader (verifies proof vs PDF) → Approved for Production → LUMA Label Inspection (100% inline inspection)

Which LUMA is Right for You?

| | LUMA Proof Reader | LUMA Label Inspection |
| When | Before production run | After printing (rewinder stage) |
| Input | First-press proof vs PDF master | Printed web at slitter/rewinder |
| Speed | ≤ 4 min per sheet | Up to 100 m/min |
| Best For | Prepress QA, artwork approval | 100% inspection before dispatch |
Missing Print
Colour Deviation
Registration Error
Text Error
Smudge & Scum
Punch Out / Die Cut
Barcode & QR Errors
Substrate Flaw


Eight defect types detected across every label — from missing print to substrate flaws.

Key Features
🧠 AI-Driven Detection & Deep Learning
Advanced AI and adaptive deep learning models accurately detect and classify defects while continuously learning from past inspections — reducing false positives, improving accuracy, and minimizing yield loss over time.
🎯 Critical Area-Based Defect Classification
Define critical and non-critical areas on the artwork, and classify defects accordingly — ensuring that defects in critical areas are prioritized while non-critical variations are handled appropriately.
🔍 Masking & Regions of Interest (ROI)
Define inspection zones to focus only on relevant areas of the label while ignoring liner, carrier, or non-print regions — reducing false detections and improving inspection efficiency.
🔥 Hotspot View
Displays defects in a heat-map style visualisation, instantly highlighting critical zones across the label web — enabling faster identification and correction of defects than conventional inspection views allow.
🔲 Matrix Monitoring
Grid-based layout for complete coverage of the label web. Operators can track multiple zones simultaneously with clarity and precision. This ensures no area is overlooked, even in high-speed production.
💡 Advanced Illumination
High-intensity LED dome lighting — rated for 20,000+ hours of continuous operation — ensures consistent, high-quality images regardless of substrate type or environmental lighting conditions.
📊 Real-Time Processing & Comprehensive Reporting
While processing at the full 100 m/min web speed with 4K line-scan resolution, the system automatically generates detailed inspection reports with defect counts, types, positions, and production quality statistics. Reports are fully customisable to meet specific customer requirements.
💾 Job Storage & Recipe Management
Store and recall job settings instantly, allowing quick setup for repeat orders and reducing changeover time.
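Conceptually, the masking and ROI feature amounts to intersecting the candidate-defect map with operator-defined inspection rectangles, so liner, carrier, and non-print regions never trigger alarms. A minimal sketch, with all names assumed for illustration:

```python
import numpy as np

def apply_roi_mask(defect_map, rois):
    """Keep only defect pixels inside operator-defined inspection zones.

    defect_map: 2-D boolean array of candidate defect pixels.
    rois: list of (row0, row1, col0, col1) rectangles to inspect.
    Pixels outside every ROI are ignored."""
    mask = np.zeros_like(defect_map, dtype=bool)
    for r0, r1, c0, c1 in rois:
        mask[r0:r1, c0:c1] = True       # mark this zone as inspectable
    return defect_map & mask            # suppress everything else
```

Anything the detector flags on the liner simply falls outside the mask and is dropped before classification.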
Specifications
| Inspection Speed | 100 m/min |
| Camera Technology | Line scan camera with 4K resolution |
| Min. Defect Size | 0.1 mm² |
| Field of View | 300 mm / 450 mm |
| Light Source | LED illumination — diffuse dome |
| Inspection Type | Offline |
| Special Features | Barcode & matrix inspection |
| Substrates | Metallic foils, films, synthetic materials, matte surfaces |

Primarily integrated with slitting rewinders, the system performs a final quality check before product delivery. It automatically stops the rewinder at the exact position of any detected defect, allowing the operator to remove and replace the faulty label. For enhanced efficiency, operators can also choose to bypass minor defects that do not require stopping the machine.
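A minimal sketch of the stop-planning behaviour described above — stop the rewinder at each defect position for operator replacement, optionally bypassing minor defects. The severity scheme and field layout are assumptions for illustration only.

```python
def plan_stops(defects, bypass_minor=True):
    """Return web positions (in metres) where the rewinder should stop.

    defects: iterable of (position_m, kind, severity) tuples as they
    were logged during winding. Minor defects can be bypassed so the
    machine only halts where an operator must replace a label."""
    stops = []
    for pos, kind, severity in sorted(defects):  # walk the web in order
        if bypass_minor and severity == "minor":
            continue                             # skipped, not stopped
        stops.append(pos)
    return stops
```

With bypass enabled, a run peppered with cosmetic smudges produces only the stops that genuinely need intervention.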

What is the LUMA Label Inspection System?
LUMA is YantraVision's AI-powered offline label inspection system that performs 100% inspection of every label at speeds up to 100 m/min using 4K line-scan cameras. It detects defects as small as 0.1 mm² across eight categories — missing print, colour deviation, registration error, text error, smudge, die-cut, barcode/QR, and substrate flaws.
What defects can LUMA detect?
LUMA detects eight defect categories: missing print, colour deviation, registration errors, text errors, smudge and scum marks, punch-out and die-cut defects, barcode and QR code errors, and substrate flaws such as tears, wrinkles, and contamination.
What is the minimum defect size LUMA can detect?
LUMA detects defects as small as 0.1 mm² using high-resolution 4K line-scan cameras. The system supports field-of-view configurations of 300 mm and 450 mm to match different label widths.
How does LUMA reduce false positives?
LUMA uses adaptive deep learning models that continuously learn from past inspections, progressively reducing false detections. Operators can define critical and non-critical areas, set tolerance thresholds, and configure masking zones to focus inspection on relevant regions — minimising unnecessary alarms while maintaining detection accuracy.
What substrates can LUMA inspect?
LUMA inspects a wide range of substrates including metallic foils, synthetic films, matte surfaces, and standard paper labels. The advanced LED dome lighting ensures consistent image quality regardless of substrate reflectivity or surface finish.
Does LUMA integrate with existing rewinders?
Yes. LUMA is designed to integrate seamlessly with all slitter-rewinders on the market. The system automatically stops the rewinder at the exact position of a detected defect for operator review, with the option to bypass minor defects to maintain production efficiency.
What industries benefit from LUMA label inspection?
LUMA serves industries where label accuracy is critical — pharmaceuticals (regulatory compliance, serialisation), food and beverage (ingredient labelling, allergen declarations), cosmetics (brand consistency), and industrial manufacturing (safety labels, hazard markings).

Ready to eliminate label defects before they reach the market? Get in touch with the YantraVision team.

🔤 LUMA Proof Reader System
Product — Proof Reader

AI-Powered Proof Reader for Print Quality

Catch print errors before they become costly reprints or recalls — so you never waste time, material, or an entire production run.

Built for prepress teams, quality managers, and brand owners in pharma and FMCG.

LUMA Proof Reader System

The LUMA Proof Reader is YantraVision's AI-driven prepress verification system that compares first-press proof samples against approved PDF master artwork to detect text errors, colour deviations, and layout shifts before full production begins. Operating at 300 dpi resolution, it inspects sheets up to A0 size (1030 mm x 800 mm) in under 4 minutes, preventing costly reprints and regulatory recalls.

The Challenge

Traditional proofing solutions struggle with a trade-off: the more accurate they try to be, the more setup time and false positives they generate. Operators end up spending hours managing false alarms — slowing production, reducing yield, and eroding trust in the system.

The LUMA Advantage

Built for today's high-speed printing environments, the system combines low setup time, reduced false defects, and high defect detection accuracy — all in one system — ensuring high yield and production efficiency that traditional proofing systems fail to achieve.

Designed for the demands of modern packaging, the system works seamlessly across different printing machines and substrates including paper, cartons, metallic foils and films. Particularly valuable in pharmaceuticals and FMCG where accuracy is non-negotiable — helping prevent costly recalls and protecting brand reputation.

Artwork Created → LUMA Proof Reader (verifies proof vs PDF) → Approved for Production → LUMA Label Inspection (100% inline inspection)
| | LUMA Proof Reader | LUMA Label Inspection |
| When | Before production run | After printing (rewinder stage) |
| Input | First-press proof vs PDF master | Printed web at slitter/rewinder |
| Speed | ≤ 4 min per sheet | Up to 100 m/min |
| Best For | Prepress QA, artwork approval | 100% inspection before dispatch |
Key Features
🔲 NVZ Area Masking
Gives operators the flexibility to ignore or closely inspect NVZ (non-visible zone) areas — ensuring focus on critical print areas while reducing unnecessary false detections.
⚙️ Auto Segmentation
Zero manual setup — automatically identifies and selects different inspection regions in the label for precise analysis.
📑 Layer PDF Selection
Multi-layer PDF support — when working with multi-page PDFs, unwanted pages can be deselected allowing inspection of only the relevant layers, ensuring faster and more accurate verification.
🧠 Deep Learning Inspection
Powered by adaptive deep learning algorithms, the system continuously improves by learning from past inspections — minimizing false positives and delivering higher accuracy with every run.
🔁 Flexible Inspection in Repeat
Whether a repeat contains same-ups or mixed-ups, each label can be inspected separately with custom criteria as per your requirements. For example, in an 8-up layout across an A0 sheet (1030 mm × 800 mm), each label is verified independently with its own inspection criteria — all completed in under 4 minutes at 300 dpi.
📊 Comprehensive Reporting
Generate a detailed PDF report for every job, including master vs sample comparison, defect locations to 0.09 mm precision, highlighted defect regions, and operator comments — ideal for quality audits, traceability, and compliance.
🌐 Multi-Language Text Inspection
Verifies text accuracy across diverse languages and scripts.
👤 User Management & Audit Trail
Manage user access and keep a record of all inspection activities for better control, traceability, and compliance.
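The per-label inspection of a multi-up repeat can be pictured as slicing the sheet into a grid and running each "up" against its own criteria. The sketch below assumes a uniform grid and a caller-supplied inspection function; all names are hypothetical.

```python
def inspect_repeat(sheet, rows, cols, criteria, inspect_fn):
    """Split a sheet image (2-D array-like) into a rows x cols grid of
    label 'ups' and inspect each independently, as in an 8-up layout.

    criteria: dict mapping (row, col) to that label's own criterion.
    inspect_fn(label, criterion) -> bool (True = label passes)."""
    h = len(sheet) // rows        # height of one label, in pixels
    w = len(sheet[0]) // cols     # width of one label, in pixels
    results = {}
    for r in range(rows):
        for c in range(cols):
            # carve out this label's pixel block
            label = [row[c * w:(c + 1) * w] for row in sheet[r * h:(r + 1) * h]]
            results[(r, c)] = inspect_fn(label, criteria[(r, c)])
    return results
```

Because each cell carries its own criterion, a mixed-up repeat with different variable data per position is handled the same way as a same-up repeat.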
Specifications
| Inspection Speed | ≤ 4 min / A0 sheet |
| Min. Defect Size | 0.09 mm × 0.09 mm |
| Resolution | 300 dpi |
| Max Paper Size | 1030 mm × 800 mm |
| Supported File Formats | PDF, TIFF |
| Substrates | Paper, cartons, metallic foils and films |
| Defect Tolerance Settings | ✓ |
| Defect Severity Levels | ✓ |
| UI Comparison Mode | ✓ |
| Defects Detected | Font and text defects, background defects, missing letters, barcode defects, NVZ area defects, die-cut alignment defects, language mismatch defects, filled and broken characters |
What is the LUMA Proof Reader?
The LUMA Proof Reader is YantraVision's AI-driven prepress verification system that compares first-press proof samples against approved PDF or TIFF master artwork. It detects text errors, colour deviations, missing content, and layout shifts before full production begins — preventing costly reprints and regulatory recalls.
How does proof reading differ from label inspection?
Proof reading happens before production — comparing a first-press proof against the approved PDF master to catch artwork errors. Label inspection happens after printing on the rewinder, performing 100% inline inspection of every printed label at production speed. LUMA Proof Reader handles prepress QA; LUMA Label Inspection handles production quality control.
What file formats does LUMA Proof Reader support?
LUMA Proof Reader supports PDF and TIFF reference formats. It handles multi-layer PDFs with the ability to select or deselect specific layers for inspection, allowing verification of only the relevant print layers.
How long does a proof inspection take?
Under 4 minutes per A0-size sheet (1030 mm x 800 mm) at 300 dpi resolution, detecting defects as small as 0.09 mm x 0.09 mm. The auto-segmentation feature eliminates manual setup, so the system is ready to inspect immediately after loading the master artwork.
What defects does the Proof Reader detect?
The Proof Reader detects font and text defects, background defects, missing letters, barcode defects, NVZ (non-visible zone) area defects, die-cut alignment defects, language mismatch errors, and filled or broken characters. It supports multi-language text inspection across diverse scripts.
Can it handle multi-up label layouts?
Yes. Whether a repeat contains identical labels or mixed-ups with different variable data, each label is inspected independently with its own inspection criteria. For example, in an 8-up layout, each label is verified separately against its specific master.

Ensure flawless print quality before full production begins. Talk to the YantraVision team about integrating the Proof Reader into your workflow.

🖨️ Printing Industry Solutions
Industry — Printing & Packaging

Defect Detection for Print Quality

India's printing and packaging industry is one of the fastest-growing in Asia — projected to reach USD 46.7 billion by 2034 (IMARC Group), driven by pharma, FMCG, and e-commerce demand. In pharmaceuticals, a single printing mistake can be deadly. In FMCG, even a minor error can damage brand reputation and trigger costly recalls. Impeccable print quality is non-negotiable.

Print quality inspection

High accuracy and reliability are critical across all major printing methods — flexographic, rotogravure, offset, and screen printing — used for labels, packaging, and commercial prints. Defects may occur due to human error or automation issues, and any error, no matter how small, can lead to significant waste and increased costs.

Traditional systems often misidentify acceptable variations as defects, leading to high setup time and false alarms. YantraVision's AI-powered inspection uses advanced models to reduce false defects with very low setup time and high detection accuracy — delivering higher yield and reliable quality assurance.

LUMA Series — Label Inspection

LUMA Label Inspection System

100% inspection of every label at up to 100 m/min — identifying print defects, colour deviations, registration errors, missing text, smudges, barcode errors, die-cut faults, and substrate flaws. Low setup time, reduced false defects, high detection accuracy.

100% Inline Inspection Up to 100 m/min Pharma & FMCG Rewinder Stage
LUMA Series — Proof Reader

LUMA Proof Reader System

Compares first-press samples directly with the approved PDF master artwork — catching critical print errors in ≤ 4 minutes per sheet before full production begins. Prevents costly recalls across paper, cartons, metallic foils, and films.

Prepress QA · ≤ 4 min per sheet · PDF Master Comparison · Artwork Approval
Label inspection system · Gravure printing · Offset printing

As a leader in web inspection platforms, YantraVision is your trusted partner — supplying the complete vision stack for labels, packaging, and commercial print lines.

📦 Packaging Industry Solutions
Vertical

Packaging

Packaging is one of the most inspection-intensive manufacturing environments — every substrate, every print layer, every seal, every fill level is a potential failure point. YantraVision addresses the full packaging inspection chain, from raw substrate web to finished, filled package.

Packaging inspection
Strategic Partner Solution

Sheet Inspection for Offset Print Lines

YantraVision powers 100% inline sheet inspection for offset and digital print lines — detecting print, colour, coating, die-cut and surface defects at full press speed before any sheet leaves the line. Integrated directly into the machine as an OEM partnership.

Offset & Digital · 100% Coverage · OEM Integration · AI Classification

Packaging lines begin with continuous material webs — duplex paperboard, kraft paper, BOPP film, aluminium foil, and flexible packaging. YantraVision's web inspection system runs inline at full production speed, detecting holes, tears, contamination, colour streaks, thickness variation and coating defects using high-resolution line scan cameras and GPU-accelerated Falcon Compute processing. No production interruption.

Web inspection · Web material

What visible cameras cannot see, SWIR can. YantraVision's SWIR inspection platform sees through opaque containers and labels to detect under-fills, overfills, missing components and micro-leaks. The same system detects moisture ingress, contaminants, and foreign materials invisible in the RGB spectrum. A single prism-aligned camera delivers simultaneous RGB + SWIR without dual-camera complexity.

Fill level inspection · Seal integrity inspection

YantraVision supplies the complete inspection stack — illumination, optics, imaging and compute — as an integrated OEM-ready system. Suitable for line builders, packaging machine manufacturers and converter OEMs.

🧵 Textile Industry Solutions
Vertical

Textile

The global textile market is valued at over USD 760 billion and projected to reach USD 974 billion by 2030, growing at ~5% CAGR. India is one of the world's largest textile producers — with a domestic market projected to reach USD 350 billion by 2030, employing over 45 million people across fibre, yarn, fabric, and apparel. The shift toward recycled fibres and technical textiles is accelerating globally, raising new demands for fibre-level quality control and contamination detection. Maintaining consistent material quality — from incoming fibre to finished fabric — is both a compliance requirement and a competitive differentiator.

Textile inspection

Woven fabric running on continuous production lines requires inline defect detection without stopping the line. YantraVision's web inspection system uses high-resolution line scan cameras at 30 kHz to detect holes, tears, contamination, colour streaks and weave defects across the full fabric width at production speed. AI-based defect classification distinguishes true defects from natural substrate variation.

Fabric Web Inspection
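As a rough illustration of the data rates involved at a 30 kHz line rate, a quick budget can be sketched in Python. Only the line rate comes from the text above; the sensor width and pixel depth below are illustrative assumptions, not YantraVision specifications.

```python
# Back-of-envelope data-rate budget for an inline line scan camera.
# Only the 30 kHz line rate is from the text; the sensor width and
# pixel depth are illustrative assumptions.
line_rate_hz = 30_000      # line scans per second
pixels_per_line = 8192     # assumed 8k-pixel sensor
bits_per_pixel = 8         # assumed 8-bit monochrome

rate_bps = line_rate_hz * pixels_per_line * bits_per_pixel
print(f"Sustained pixel stream: {rate_bps / 1e9:.2f} Gbps")
```

A sustained stream of roughly this magnitude is why the processing must happen inline, on hardware matched to the camera interface, rather than buffered through a general-purpose PC.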

Beyond visible inspection, SWIR imaging reveals material composition differences invisible to RGB cameras — distinguishing natural from synthetic fibre blends, detecting moisture in fabric, and identifying foreign material inclusions. The same prism-aligned single-camera platform delivers simultaneous RGB + SWIR for hybrid detection algorithms.

SWIR Material Composition

YantraVision supplies the complete vision stack — sorting platform, web inspection system, illumination and compute — for integration into textile machinery and sorting line OEM products.

🌾 Agriculture Processing Solutions
Industry — Agriculture

AI-Powered Sorting for Agriculture

India is a major agricultural powerhouse — one of the world's largest producers of grains, spices, fruits, and vegetables. Growing demand for export-quality produce is driving the shift to automated sorting, replacing manual processes that limit yield and consistency.

Agriculture sorting

Our automated sorting platform identifies and removes unwanted materials from product streams in real time. Using high-speed line scan cameras and advanced AI models, the system detects defects based on colour, size, and material properties.

Our solution is scalable and multi-industry, widely used for sorting fruits, vegetables, grains, and seeds — delivering high efficiency, improved yield, and reliable quality control.

  • Ensures only high-quality products reach the market
  • Processes large volumes quickly and accurately
  • Learns from ejection data using AI and deep learning, improving defect detection over time
  • Reduces wastage, optimising yield and efficiency
Grain Sorting

Removes discoloured, mouldy, or damaged grains — ensuring only high-quality grains proceed to market.

Nut Processing

Sorts nuts by removing shells, damaged nuts, and foreign materials — enhancing the quality of the final product.

Fruit & Vegetable Sorting

Removes overripe, underripe, and damaged fruits — ensuring consistent quality in produce reaching the market.

Coffee sorting system · Egg sorting system · Potato sorting system

YantraVision transforms agricultural sorting — helping producers meet growing market demands with reliable, AI-driven quality control.

AI Acceleration
Engineering Services

Full Stack
AI Compute.

One team owns the full compute stack — from gigapixel FPGA pipelines at sub-millisecond latency, through 50–60 TOPS on-device NPU inference, to GPU-side post-processing and analytics.

AI Acceleration

Pick the right compute platform.

The right compute substrate depends on your latency target, power envelope, and deployment environment — not a fixed architecture. We work across all three and will tell you which one fits.

FPGA
<1ms · Deterministic
Deterministic, sub-millisecond latency. High-speed inspection, camera-direct pipelines. Vivado HLS on Xilinx Zynq / UltraScale+. In-house CameraLink IP.
NPU
50–60 TOPS · No cloud
Efficient on-device AI inference without cloud. AMD Ryzen AI / XDNA — MLIR-AIE kernels, INT8/INT4 quantisation, ONNX Runtime.
GPU
Parallel · High throughput
Heavy parallel compute — post-processing, analytics, rich visualisation, and operator dashboards where throughput matters more than determinism.

Not sure which fits? We'll assess your requirements and recommend the right platform — or a combination where it genuinely makes sense.

Core capabilities at the pixel level.

High-speed embedded vision on Xilinx Zynq and UltraScale+ SoCs. Hardware parallelism synthesised via Vivado HLS — bridging software development and deterministic hardware execution.

Real-Time Vision Pipelines
Sub-millisecond, deterministic latency that GPU scheduling cannot match. Ideal for high-speed inspection, multi-camera synchronisation, and real-time sorting decisions.
🔌
Proprietary CameraLink IP
In-house CameraLink IP — Base, Medium, and Full modes up to 2.72 Gbps at 1–3 µs latency. Zero-copy data path. No third-party frame grabbers, no per-unit licensing cost.
🖥️
HW / SW Co-Design
FPGA fabric and ARM cores partitioned for maximum throughput on Zynq SoCs. Full BSP, kernel drivers, and application SDK written in-house — complete source handed over.

AMD Ryzen AI / XDNA.

AMD Ryzen AI / XDNA NPU — 50–60 TOPS on-die, fully on-device, independent of CPU and GPU. Three engagement tiers matched to where you are.

Architecture & Feasibility
Model analysis, ONNX integration assessment, INT8/INT4 quantisation sensitivity, XDNA AIE tile mapping, power & latency benchmarking — delivered as a go/no-go report.
Custom NPU Applications
MLIR-AIE kernel development, multi-tile compute graph design, image/signal pre-processing on AIE tiles, custom operator implementation, and Falcon Compute integration.
Validation & Enablement
CPU/GPU benchmark comparison, regression test suite, OEM qualification documentation, team handover training, and ongoing integration support.

Ownership at every layer of the stack.

The same five-layer design discipline applies across FPGA and NPU engagements. No black-box components in the critical path — full source at every layer.

L1
RTL / MLIR-AIE Kernel Design
Hardware logic & compute kernels
L2
IP & AIE Tile Integration
Wiring IP blocks onto the fabric
L3
Embedded Linux BSP
Board support, kernel, device tree
L4
ONNX Runtime / SDK / Drivers
Inference runtime & integration APIs
L5
Application Layer
UI, analytics, operator dashboards

Where this compute stack runs today.

Print & Web Inspection
Free-Fall Sorting
Defect Detection
Object Classification
On-Device Analytics
SWIR Imaging
Stereo Vision
Semiconductor / PCB

Concrete outputs. At every stage.

Every engagement closes with defined, signed-off deliverables. Here's what you take away.

Architecture & Feasibility Report
A go/no-go assessment with compute stack recommendation, latency projections, power budget, and a phased build plan — delivered before any development begins.
Implemented FPGA or NPU Solution
RTL, BSP, firmware, and SDK running on your target hardware — synthesised, integrated, and validated end to end. A system that works in your environment, not just on a bench.
Latency & Throughput Benchmarks
Formal performance report comparing pre- and post-implementation figures with CPU and GPU baselines — signed off against the targets agreed at project start.
Full Source Ownership
Complete RTL, MLIR-AIE kernels, driver stack, and application source — with documentation. No black-box IP in the critical path. When something needs changing, you can change it.
Integration Support & Handover
Team handover training, OEM qualification documentation, and integration guidance — so your engineering team can maintain, extend, and build on the system independently after delivery.

Tell us your stack.

Tell us your camera interface, model, latency target, and OEM platform — we'll map the full FPGA + NPU architecture.

Contact Sales →
🔭 Machine Vision OEM
OEM Partnership

Your Vision, Our Engineering.

Your engineering partner for the full vision stack — machine makers focus on their domain expertise while we bring the vision technology. From first prototype to 10-year production: bespoke design, lifecycle improvements, and lifelong support.

Machine Vision OEM Partnership
🔍

Phase 1 — Discovery & Consultation

We act as your internal vision department. Free feasibility tests in our labs, light/lens selection, and ROI modelling. Goal: first-time-right architecture to save months of R&D.

🔧

Phase 2 — Custom Development

From illumination and optics to sensors, processing boards, and AI algorithms — we cover the full machine vision stack. Solutions designed around your volume, pricing, and performance requirements.

♾️

Phase 3 — Lifecycle & Sustainability

We eliminate "End-of-Life" panic. 10+ years of guaranteed supply, frozen firmware versions, proactive Product Change Notifications (PCN), global RMA and field engineering on standby.

🔄
Lifecycle Upgrades
📈
Scalability
🔒
Frozen BOM
📋
Regulatory Readiness
🌐
Global RMA
🌾

Agriculture & Horticulture

Fruits, vegetables, grains, seeds, nuts, and spices. Rejects discolouration, broken kernels, shell fragments, foreign matter, and insect damage. SWIR adds aflatoxin and moisture detection capability.

Agriculture placeholder
🧵

Textile

Fibre identification and foreign material detection in cotton and recycled cotton streams — removing synthetic fibre contamination, coloured inclusions, and non-cotton material before spinning.

Textile
♻️

Plastic Recycling

SWIR-based polymer identification distinguishes PET, HDPE, PP, and PVC by spectral signature — enabling clean material stream separation in post-consumer plastic flake and municipal solid waste lines.

Plastic Recycling
📄

Paper Recycling

Sorting grades and removing contaminants from mixed paper streams — ensuring consistent fibre quality for pulp and paper reprocessing.

Paper Recycling
💎

Minerals & Mining

Ore sorting, scrap metal separation, and mineral grading. SWIR hyperspectral classifies minerals by molecular absorption signature — separating lithologies invisible to RGB cameras.

Minerals & Mining

Ready to start your OEM partnership? Request a feasibility study or talk to us about your machine vision requirements.

Careers
Careers at YantraVision

Have a passion for building machines?

Come join us.

We build machines that inspect, sort, and measure at production speed — on real shop floors, in real factories. If that excites you more than a clean desk and a sprint board, read on.

Careers at YantraVision

Systems thinkers.
Not just coders.

Deep learning changed what engineering looks like. Coding alone is no longer the differentiator — AI tools handle more of it every month. What's genuinely rare is the ability to think in systems.

A machine vision system isn't just software. It's optics, illumination, mechanics, electronics, firmware, and algorithms — all working as one. The person who understands how a dirty lens upstream breaks a neural network downstream, and fixes it without a playbook — that's who we're looking for.

"Job titles don't matter here. Curiosity and the ability to learn — and un-learn — do."

Things that matter more than your CV.

Loves the Shop Floor
You're more at home next to a running production line than in an air-conditioned office. Noise, grease, and tight tolerances don't bother you — they interest you.
Thinks in Systems
You understand that a machine is a chain — optics, mechanics, electronics, compute, software. You look for where the chain is weak, not just where the code is broken.
Goes Back to First Principles
When something doesn't work, you reach for physics and mathematics before you reach for Stack Overflow. You'd rather understand it than patch it.
Learns and Un-learns
You pick up new domains fast — optics this month, firmware next month. And when a new tool makes an old skill obsolete, you let it go without ego.

It isn't a single-discipline job.

You'll get your hands into each of these areas — some deeply, some broadly — depending on where the problem takes you.

🏗️
Software Design
📷
Vision Systems
🗂️
System Design
⚙️
HW–SW Co-design
🔧
Machines & Mechanics

Interested? Tell us about yourself.

No formal openings list — we hire when we find the right person. Send us a note about what you've built, what you're curious about, and why a machine vision company on the shop floor sounds like the right place.

Send a Note →
Company
Who We Are

We Build Perception into Machines.

That is literally our name.

Yantra (यंत्र) is the Sanskrit word for machine. Vision is what we bring to it. Together, YantraVision means machine vision — giving machines the ability to see, process, and act. Applied to one clear purpose: Making machines perceive.

YantraVision — Making machines perceive

Built for Speed.
Designed for Reliability.

We build machine vision-based quality control solutions — inspection systems that catch defects, verify print, sort by quality, and ensure consistency at production speed. Every product is aimed at helping manufacturers ship with confidence.

We work predominantly in process industries like print, textile, plastic, and paint — and as an OEM Technology Partner we embed our vision stack inside machines built by OEM partners, so their products come with inspection capability built in.

"From optics and illumination to algorithms and deployment — one team, one stack, end to end."

Built from First Principles.

Optics & Illumination
Hardware-level understanding of how light, lenses, and sensors interact at production speed.
Sensing & Processing
Camera interfaces, high-speed data pipelines, and real-time compute on demanding hardware.
Algorithms & AI
Classical vision and deep learning models built, trained, and optimised in-house.
Production Deployment
From lab prototype to shop floor — one team owns the full path, every time.

Life @ YantraVision

We make it a point to stay curious beyond work — yoga, cycling, treks, and lab experiments are all part of the culture. A calm and composed mind is what makes us effective, and we build the environment to support that.

Life at YantraVision

Let's Talk.

Whether you're exploring a new inspection system, looking for an OEM partner, or want to discuss a technical challenge — we're ready to listen.

✉️
Sales & Partnerships
sales@yantravision.com
💼
Careers
jobs@yantravision.com
📞
Phone
+91 7899225359
📍
Location
Kasturi Nagar, Bengaluru, Karnataka 560043
AI Product Engineering
AI Product Engineering

From First Principles
to Production.

Every engagement starts with a clear assessment — hardware fit, AI feasibility, architecture blueprint. Then we build. Hardware, firmware, AI, and application — one team, no handoffs.

AI Product Engineering
🗺️
Discovery & Architecture
Before a line of code is written, we assess hardware fit (FPGA, NPU, GPU, or hybrid), AI model feasibility on-device, dataset sufficiency, and compute architecture. We set latency, accuracy, and throughput targets upfront — and produce a costed build roadmap you can take to procurement.
Build Sprint — Concept to Working PoC
A time-boxed, fixed-scope engagement that takes your problem from definition to a working prototype on real hardware. Sensor input, AI inference, output signal — running on the actual compute platform, not a simulation. Ends with a working PoC and a scoped roadmap for the production build.
🚀
Full Production Build
The complete journey from Sprint output to a production-ready system. Hardware finalised, firmware hardened, AI pipeline validated against production accuracy targets, system integrated into the client's machine or process. We hand over a system that works, not just code.
🔌
Hardware Design and Supply
We design custom FPGA compute boards, integrate our proprietary CameraLink IP core, and produce hardware for PoC and low-volume production. For production-scale manufacture, we provide full design files and documentation for the client's preferred contract manufacturer.
⚙️
Firmware and BSP Ownership
We write and own the full firmware stack — RTL, kernel drivers, embedded Linux BSP, SDK. No black-box IP from third parties in the critical path. When something needs changing in the field, we can change it.
System Validation
Every build engagement closes with system-level testing against agreed latency, accuracy, throughput, and reliability targets. The client receives a validation report alongside the system.

We work with.

Companies exploring on-device AI for the first time who need a clear assessment before committing to a build programme.
OEM machine builders who need an embedded AI system and don't have an AI or hardware engineering team in-house.
AI product companies who have a working model and need someone to build the production hardware and integration around it.
Industrial automation companies who need a new on-device AI capability built from the ground up by a full-stack partner.

A system that works, not just code.

Costed Architecture Report & Build Roadmap
A detailed system design with hardware selection, AI feasibility analysis, and a phased build plan — so you know exactly what you're getting, what it costs, and how long it takes before a line of code is written.
Working PoC on Real Hardware
A functional prototype with sensor input, AI inference, and output signal running on the actual compute platform — not a simulation. Proves feasibility and delivers a scoped roadmap for the full production build.
Fully Production-Ready AI System
Hardware, firmware, inference pipeline, and application layer — validated against agreed performance targets. Proven in production: Falcon Compute Platform and LUMA Inspection Series are YantraVision products running in industrial environments today, built on the same stack we offer here.

Not sure where to start?

Begin with a Discovery session — we'll tell you what's possible and what it takes. No commitment required.

Contact Sales →
AI Model Platform Porting
AI Model Platform Porting

Your Model. Better Hardware.
No Compromise on Accuracy.

Models trained on GPU servers or deployed via cloud APIs often can't meet the latency, cost, or connectivity requirements of real industrial environments. We move them — to FPGA, NPU, or on-device runtimes — and prove the ported model matches the original before anything ships to production.

AI Model Platform Porting

Several porting paths, one accuracy guarantee.

☁️
Cloud or GPU → On-Device (De-Cloudify)
Remove the cloud inference dependency entirely. We compress, quantise, and deploy your model to an on-device compute platform — FPGA or NPU — eliminating cloud latency, connectivity risk, and per-inference cost. Your system makes decisions locally, in real time, regardless of network availability.
🖥️
GPU Server → FPGA Pipeline
For applications that need deterministic sub-millisecond latency — which GPU scheduling cannot guarantee — we restructure your model for FPGA deployment via Vitis AI HLS synthesis. Integrated with our CameraLink IP for camera-direct pipelines.
🤖
PyTorch / TensorFlow → AMD NPU
We export your model to ONNX, apply INT8 or INT4 quantisation, develop MLIR-AIE kernels for the AMD Ryzen AI / XDNA NPU, and deploy via ONNX Runtime on-device. Validated for latency and accuracy before handover.
🔄
FPGA Generation Migration
Moving from one Xilinx or AMD FPGA generation to another — or from one board family to another. RTL porting, IP re-synthesis, driver update, BSP revalidation. We carry across the full stack, not just the bitstream.
🔀
Cross-NPU Platform Migration
Model running on one NPU target that needs to move to another — Intel, Qualcomm, Rockchip, or AMD. ONNX conversion, re-quantisation, target-specific kernel tuning, and accuracy validation against the source model.
⚖️
Edge + Cloud Split Deployment
For applications where full on-device deployment isn't feasible — we partition the model at the right layer boundary. Inference-critical layers run on-device; heavier analytics run in the cloud. Clean interface between the two, with no cloud dependency for the real-time decision path.
📋
Accuracy Regression Validation
Every porting engagement includes formal accuracy benchmarking against a held-out production test set. We prove the ported model matches the source model on the metrics that matter — and deliver a signed-off accuracy report alongside the ported model.
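The INT8 step named in the porting paths above can be illustrated with a simplified, pure-Python sketch of symmetric per-tensor quantisation. Real toolchains (ONNX Runtime, MLIR-AIE) add calibration, per-channel scales, and zero-points; treat this as the arithmetic idea only, with toy weight values chosen for illustration.

```python
def quantise_int8(values):
    """Symmetric per-tensor INT8 quantisation: map floats into [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantise(codes, scale):
    """Recover approximate float values from the INT8 codes."""
    return [code * scale for code in codes]

weights = [0.50, -1.27, 0.03, 1.00]          # toy weights, not a real model
codes, scale = quantise_int8(weights)
recovered = dequantise(codes, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
print(codes, f"max round-trip error {max_err:.4f}")
```

The worst-case round-trip error is bounded by half the scale step, which is why quantisation sensitivity analysis (per layer, per channel) matters before committing a model to INT8 or INT4.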

Who we work with.

Companies whose cloud inference costs have become unsustainable at production volume.
Teams running GPU server inference that can't meet the latency requirements of their production line.
Organisations with data sovereignty or air-gap requirements that prevent cloud inference.
AI teams who have a trained model and need a hardware deployment partner with FPGA and NPU depth.

A validated model. On the right hardware.

Validated Model on Target Hardware
A validated model running on the target hardware platform — FPGA, NPU, or on-device runtime — optimised for the latency, power, and cost constraints of your production environment.
Formal Accuracy Equivalence Report
A signed-off accuracy report proving the ported model matches the original on the metrics that matter. Every engagement closes with documented proof.
Deployment Documentation & Integration Guidance
Inference benchmarks, deployment documentation, and integration guidance — everything your team needs to take the ported model cleanly into production.
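As a simplified illustration of what an accuracy equivalence check computes, here is a pure-Python sketch for classification outputs. The function names, toy data, and tolerance are illustrative assumptions, not the format of the actual report.

```python
def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def equivalence_check(source_preds, ported_preds, labels, tolerance=0.005):
    """Compare ported-model accuracy and per-sample agreement on a held-out set."""
    src = accuracy(source_preds, labels)
    ported = accuracy(ported_preds, labels)
    agreement = sum(a == b for a, b in zip(source_preds, ported_preds)) / len(labels)
    return {
        "source_accuracy": src,
        "ported_accuracy": ported,
        "agreement": agreement,
        "within_tolerance": (src - ported) <= tolerance,
    }

# Toy example: the ported model disagrees on one of five samples
labels = [0, 1, 1, 0, 1]
report = equivalence_check([0, 1, 1, 0, 1], [0, 1, 1, 0, 0], labels, tolerance=0.25)
print(report)
```

Measuring per-sample agreement alongside aggregate accuracy catches the case where two models score the same overall but disagree on which samples they get right.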

Need your model on better hardware?

Tell us your current deployment environment and target platform, and we'll map the porting path — without trading away accuracy.

Contact Sales →
Case Studies
Case Studies

Work we've been part of.

A look at some of the programs we've contributed to — the areas we worked in, and how we engaged.

Silicon Engineering
Embedded Engineering for a Next-Generation Silicon Program
Contributed across silicon validation, ML compiler stack development, and test automation — embedded within the client's team across the full delivery lifecycle.
Silicon Validation · ML Compiler · Test Automation · Embedded Partnership
Case Study · Silicon Engineering · Embedded Partnership

Embedded Engineering for a
Next-Generation Silicon Program

YantraVision contributed across silicon validation, ML compiler stack development, and test automation — working within the client's team and processes across a multi-phase silicon program.

Silicon Engineering Case Study
1000+ Test cases in regression
2 Silicon environments
3 Engineering tracks
Multi-phase Program coverage
End-to-end Validation pipeline

An on-device AI compute architecture,
built for inference at the edge.

The client was developing a next-generation silicon architecture designed to run machine learning workloads on-device, without reliance on cloud infrastructure. The program covered the full silicon lifecycle — from architecture definition and pre-silicon modelling through to post-silicon bring-up and production validation.

The scope included the silicon itself, the software stack that exposes it to ML frameworks, and the validation infrastructure that covers both environments. YantraVision was brought in to contribute across all three areas, working as part of the client's engineering organisation rather than as an external delivery stream.

"We worked within their team — using their tools, their processes, and their workflows. The work was assigned, reviewed, and delivered the same way as any internal contribution."

Integrated into the team,
not alongside it.

The engagement model was embedded from the start. Work was tracked in the client's issue management system, reviewed through their code review process, and delivered against their release milestones. There was no separate project management layer or dedicated interface between YantraVision engineers and the client's teams.

Engineers worked directly with counterparts in the client's silicon validation, compiler, and test infrastructure teams. Handoffs were direct. Decisions were made in the same rooms — or on the same calls — as the client's own engineers. This applied across all three workstreams throughout the program.

Three workstreams,
running in parallel across the program.

01
Silicon Validation
Work covered the full validation lifecycle for the compute architecture — architecture review, test plan definition, test case development, and execution across both pre-silicon and post-silicon environments. The validation application developed during this track supported functional correctness testing, performance characterisation, power analysis, and benchmarking against defined targets. Coverage was maintained across both simulation and physical silicon, with results feeding directly into the client's sign-off process.
02
ML Compiler Stack
Contributions were made to selected modules within the client's ML compiler stack — the software layer responsible for taking models from standard ML frameworks and compiling them for execution on the target silicon. Work focused on correctness of model compilation, execution efficiency on-device, and performance of models running through the stack. This was done in direct coordination with the client's architecture and runtime teams, with changes integrated through the standard review process.
03
Test Automation & Regression
A regression and automation framework was built to run over a thousand test cases on a daily and weekly cadence across both pre- and post-silicon platforms. The framework was structured to enable fast triage of regression failures — with results organised by test category, environment, and failure type. It was integrated into the client's existing validation workflows and used by both YantraVision and client engineers as a shared resource throughout the program.
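The triage structure described in the automation track can be sketched as a simple grouping of failure records by category, environment, and failure type. The field names and toy records below are assumptions for illustration, not the client's actual schema.

```python
from collections import Counter

def triage_failures(results):
    """Bucket failing tests by (category, environment, failure type),
    most frequent bucket first, so the largest regressions surface immediately."""
    buckets = Counter(
        (r["category"], r["environment"], r["failure_type"])
        for r in results
        if r["status"] == "FAIL"
    )
    return buckets.most_common()

# Toy run: two failures share a bucket, one is distinct, one test passed
results = [
    {"category": "power", "environment": "post-silicon", "failure_type": "timeout", "status": "FAIL"},
    {"category": "power", "environment": "post-silicon", "failure_type": "timeout", "status": "FAIL"},
    {"category": "compiler", "environment": "pre-silicon", "failure_type": "mismatch", "status": "FAIL"},
    {"category": "power", "environment": "post-silicon", "failure_type": "timeout", "status": "PASS"},
]
for bucket, count in triage_failures(results):
    print(bucket, count)
```

Sorting buckets by frequency is what makes daily regression runs over a thousand test cases triageable: engineers attack the biggest cluster first instead of reading failures one by one.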

Across multiple phases,
from pre-silicon through to production readiness.

The engagement covered multiple phases of the silicon program. Work began during the pre-silicon phase, where the focus was on simulation environments, architecture-level validation, and building the automation infrastructure that would carry through to physical silicon. As the program moved into post-silicon bring-up, the same frameworks and test cases were ported and extended to run on physical devices.

Each phase brought changes in scope and priority, and the contribution areas shifted accordingly. Silicon validation activity increased significantly during post-silicon bring-up. Compiler stack contributions were concentrated in the mid-phase, around stabilisation. Test automation work ran throughout and was maintained across both environments simultaneously.

Phase 1
Pre-Silicon
Architecture review, test plan definition, simulation environment setup, automation framework foundations, and initial compiler module work.
Phase 2
Post-Silicon Bring-Up
Test case porting to physical silicon, bring-up validation, power and performance characterisation, regression framework extension to cover device environments.
Phase 3
Production Validation
Full regression suite execution, coverage closure, benchmarking against production targets, and final validation sign-off support across the full test matrix.

Interested in a similar engagement?

We work within your team, contribute to your program, and move at your pace.