How to Use VAST Tester
This guide explains what the application can do, how to run manual test sessions, what you can modify in the UI, and when to use exports, share links, or automation.
Using the application
Overview
VAST Tester is a browser-based workspace for VAST-centered ad testing with compatibility support for VPAID and SIMID-oriented scenarios. It brings the main ad-testing workflow into one place: set an environment, load a tag or XML payload, play the ad, inspect behavior, validate the session, and export results.
In practice, that makes it useful for testers, developers, and ad-ops users who need faster triage, consistent repro steps, and clearer evidence when a tag or player integration is not behaving as expected.
Quick start workflow
- Choose an environment profile such as Desktop, Mobile, CTV / Smart TV, Outstream, or Custom.
- Load a tag URL, select a preset, or switch to Raw XML mode and paste XML directly.
- Start playback with the main controls or enable auto-start if you want the session to begin immediately after load.
- Watch the Events, Network, Validation, Session, Diagnostics, and API Inspector areas as the ad runs.
- Export Session JSON or copy a shareable URL when you need to reproduce, compare, or hand off the result.
Loading content into the tester
You can load content in several ways depending on the situation. Direct tag loading is best when you already have a VAST URL from an ad server or a local sample. Raw XML mode is useful when you want to paste a payload directly, test a fragment quickly, or isolate issues without relying on an external endpoint.
Presets are useful when you want a known scenario immediately. They help with smoke checks, demos, regression testing, and faster onboarding because they remove the need to prepare a tag before you can begin.
The share button complements all of these flows by generating a URL that reflects the current setup so another person can open the same configuration with less back-and-forth.
Environment setup
Environment profiles let you approximate different playback contexts without rebuilding your test setup. Desktop Web is a standard browser-like baseline. Mobile is tuned for a smaller, touch-oriented viewport with autoplay-friendly defaults. CTV / Smart TV simulates a larger-screen context. Outstream is useful for placements that do not behave like a standard in-player video experience.
Custom mode is the most flexible option because it starts from one of the standard profiles and then lets you override specific values for targeted testing.
What you can modify in the UI
The main variables you can change directly in the application are the environment choice, the loaded tag or XML, and a set of playback and integration conditions exposed through the Custom environment panel.
- Profile and resolution: choose the base preset, then override width and height for custom viewport testing.
- Playback behavior: toggle autoplay, muted playback, outstream mode, and whether ad controls are shown.
- Integration-related flags: enable credential-aware requests and change the VPAID view mode.
- Timeout tuning: adjust AJAX timeout and creative load timeout when investigating slow or fragile integrations.
- Session options: turn auto-validate and auto-start on or off depending on whether you want a more manual or more automated test flow.
These controls are meant for scenario setup and reproduction. They let you approximate real-world delivery conditions without editing source code or changing the ad payload itself.
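The "base profile plus overrides" behavior of Custom mode can be sketched as a simple defaults-merge. The profile names and field names below are illustrative assumptions, not the app's actual internal data model:

```javascript
// Illustrative sketch only: these profile defaults and field names are
// assumptions for demonstration, not the app's real configuration schema.
const baseProfiles = {
  desktop: { width: 1280, height: 720, autoplay: true, muted: true, outstream: false },
  mobile:  { width: 375,  height: 667, autoplay: true, muted: true, outstream: false },
};

// Custom mode: start from a standard profile, then override specific values.
function buildCustomEnvironment(baseName, overrides) {
  const base = baseProfiles[baseName];
  if (!base) throw new Error(`Unknown base profile: ${baseName}`);
  return { ...base, ...overrides };
}

// Example: desktop baseline with a custom viewport and unmuted playback.
const env = buildCustomEnvironment('desktop', { width: 1920, height: 1080, muted: false });
```

The point of the pattern is that anything you do not override keeps its baseline value, so a Custom run stays comparable to the standard profile it started from.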
What each panel is for
- Events: view the real-time event stream from the ad session and filter it when you need to isolate specific lifecycle points.
- Network: inspect observed beacon and request activity related to the current run.
- Validation: run the built-in rule set and review pass, fail, and skipped outcomes.
- VAST XML: fetch and inspect the parsed XML structure behind the current input.
- Companion: view companion creative output when the loaded scenario includes it.
- Session: export or copy the session report JSON that summarizes the current run.
- Diagnostics: review higher-level issue analysis built from the recorded session signals.
- API Inspector: inspect live getter values and player state while the session is running.
Validation, diagnostics, and reproducibility
Validation is best used when you want a fast rule-based read on whether the current session behaved the way a healthy ad flow normally should. Diagnostics are better for interpreting the recorded evidence after the fact and spotting likely failure patterns. Used together, they help shorten the path from “something looks wrong” to “here is what probably broke.”
When you need repeatability, the two most useful tools are the share button and Session JSON export. Share links are good for lightweight reproduction, while session exports are better when you need structured evidence for debugging, bug filing, or comparison across runs.
URL parameters and saved scenarios
The application can read URL parameters for tag, XML, preset, environment, autoplay, muted state, validation, and export-related behavior. That makes it possible to create saved links for common scenarios, quickly reopen a known setup, or hand a teammate a reproducible entry point into the app.
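A saved-scenario link is just the app URL with a query string. The parameter names below (`tag`, `env`, `autoplay`, `muted`) are assumptions chosen to mirror the options listed above; check the app's actual parameter names before relying on them:

```javascript
// Sketch of building a saved-scenario link with URLSearchParams.
// The parameter names and the example URLs are hypothetical.
function buildScenarioUrl(baseUrl, params) {
  const search = new URLSearchParams();
  for (const [key, value] of Object.entries(params)) {
    // Skip unset options so the link only pins what you care about.
    if (value !== undefined && value !== null) search.set(key, String(value));
  }
  return `${baseUrl}?${search.toString()}`;
}

const link = buildScenarioUrl('https://example.com/vast-tester', {
  tag: 'https://ads.example.com/vast.xml', // hypothetical tag URL
  env: 'mobile',
  autoplay: true,
  muted: true,
});
```

Using `URLSearchParams` rather than string concatenation matters here because tag URLs contain characters (`:`, `/`, `&`) that must be percent-encoded to survive inside another URL's query string.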
Automation and advanced use
For advanced workflows, the app exposes window.VastTester as a public automation contract. That API is useful when a manual click-through is not enough or when you want repeatable regression coverage in Playwright or other browser automation frameworks.
- Load inputs: load a tag or XML payload directly from code.
- Control playback: play, pause, stop, skip, mute, and change volume programmatically.
- Change environment: switch profiles without relying on manual UI interaction.
- Read output: inspect state, events, network logs, validation results, session data, and full reports.
- Work with presets: evaluate one preset or run a larger preset sweep in automation.
- Wait for milestones: pause test logic until specific events or states are reached.
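The "wait for milestones" item above is the piece most automation scripts need first. A minimal sketch of the pattern is a polling helper that resolves when a predicate becomes true or rejects on timeout; the predicate would typically read state through the automation API, but since the exact window.VastTester method names are not specified here, a plain predicate function is shown instead:

```javascript
// Generic wait-for-milestone helper: polls a predicate until it returns
// true or a timeout elapses. In a real script the predicate would read
// player state or the event log via the automation API (method names
// are an assumption and not shown here).
function waitFor(predicate, { timeoutMs = 10000, intervalMs = 100 } = {}) {
  return new Promise((resolve, reject) => {
    const deadline = Date.now() + timeoutMs;
    const tick = () => {
      if (predicate()) return resolve();
      if (Date.now() >= deadline) {
        return reject(new Error('Timed out waiting for milestone'));
      }
      setTimeout(tick, intervalMs);
    };
    tick();
  });
}
```

In a test you would then write something like `await waitFor(() => seenEvents.includes('start'))`, where `seenEvents` is however your script collects the session's event stream.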
Automation is especially useful for recurring smoke tests, reproducing flaky integrations, or verifying that changes in a player integration did not break established ad behavior.
Limits and caveats
The app is a strong debugging and testing aid, but it does not replace every production environment. Third-party ad tags, media endpoints, tracking infrastructure, browser policies, and device-specific runtime differences can all affect what you observe. Some remote VPAID and SIMID scenarios are also inherently less reliable because of external dependencies, CORS behavior, or support drift outside the app itself.
The best way to use the tool is as a reproducible, inspectable middle layer: it helps you narrow the problem quickly, compare scenarios consistently, and produce cleaner evidence before escalating or fixing the underlying integration.