Help & Documentation
Frequently Asked Questions
Practical answers about what the tester does, who it helps, what you can modify, and how to use it to reproduce ad-tech issues quickly.
Questions about the product
What is VAST Tester used for?
VAST Tester is a browser-based workspace for loading ad tags or raw XML, playing ads in simulated environments, inspecting events and getter values, validating outcomes, and exporting a session report for debugging or bug triage.
Who is VAST Tester useful for?
The app is aimed primarily at testers who need to verify ad behavior quickly, but it also helps ad-ops users reproduce field issues and developers who want a practical UI plus automation hooks for investigating integrations.
Which ad workflows are supported?
The app focuses on VAST testing, with compatibility support for VPAID- and SIMID-oriented workflows when those technologies are part of the ad delivery or creative behavior under test.
Can I load a tag URL or paste raw XML directly?
Yes. The main input supports direct tag URLs, and Raw XML mode lets you paste VAST XML into the app without hosting the markup elsewhere first.
What are presets for, and when should I use them?
Presets are built-in testing shortcuts for common scenarios such as inline linear, skippable, wrapper, non-linear, ad pod, companion, and error cases. They are useful when you want a repeatable smoke test, a known-good baseline, or a quick demo without preparing your own tag first.
Which environment profiles are available, and what do they change?
The standard profiles are Desktop Web, Mobile, CTV / Smart TV, and Outstream. These change the simulated viewport and playback conditions so you can observe how the same creative behaves under different assumptions about screen size, autoplay, muted state, and playback context.
What can I modify in Custom environment mode?
Custom mode layers overrides on top of a base profile. You can modify width and height, autoplay, muted playback, outstream mode, whether ad controls are shown, credential behavior, VPAID view mode, AJAX timeout, and creative load timeout. That makes it useful for edge-case reproduction without editing source code.
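To illustrate the layering, the sketch below merges a hypothetical override object onto a base profile. The property names (showAdControls, vpaidViewMode, ajaxTimeoutMs, and so on) are assumptions chosen to mirror the options listed above, not the app's documented keys.

```javascript
// Hypothetical base profile and Custom-mode overrides.
// Property names are illustrative; the app's actual keys may differ.
const baseProfile = {
  name: "Desktop Web",
  width: 1280,
  height: 720,
  autoplay: true,
  muted: false,
};

const overrides = {
  width: 400, // narrow outstream slot
  height: 225,
  muted: true, // muted autoplay, as most browsers require
  outstream: true,
  showAdControls: false,
  vpaidViewMode: "normal",
  ajaxTimeoutMs: 5000,
  creativeLoadTimeoutMs: 8000,
};

// Custom mode layers overrides on top of the base profile,
// so an override wins wherever both define the same key:
const effectiveEnvironment = { ...baseProfile, ...overrides };
console.log(effectiveEnvironment.muted); // true -- the override wins
```

The spread-merge mirrors the described behavior: anything you do not override keeps its base-profile value, which is what makes Custom mode handy for one-variable edge-case reproduction.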
What do validation and diagnostics cover?
Validation checks the most important playback and measurement expectations, including impression, quartiles, timing, skippable behavior, tracking pixels, companion handling, and error handling. Diagnostics then summarizes session-level issues across playback state, network activity, validation results, and final completion signals.
What do validation results tell me, and what do they not guarantee?
Validation is meant to catch practical problems in the session data the app observed. It can show whether core rules passed, failed, or were skipped, but it does not replace end-to-end certification across every SDK, browser, player wrapper, device, or ad server involved in a production delivery chain.
What are diagnostics and Session JSON used for?
Diagnostics help you quickly see what likely went wrong during a session. Session JSON is the fuller export artifact: it can include environment details, event history, network activity, validation output, and diagnostic summaries, so it works well for bug reports, teammate handoff, or comparing repeated test runs.
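As a rough illustration, a Session JSON export might look like the sketch below. The key names and structure are assumptions based on the categories listed above, not the app's exact schema.

```json
{
  "environment": { "profile": "Desktop Web", "width": 1280, "height": 720, "muted": false },
  "events": [
    { "time": 0.0, "type": "impression" },
    { "time": 7.5, "type": "firstQuartile" }
  ],
  "network": [
    { "url": "https://ads.example.com/pixel?e=imp", "status": 200 }
  ],
  "validation": [
    { "rule": "impression-fired", "result": "pass" }
  ],
  "diagnostics": { "completed": true, "issues": [] }
}
```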
Does the app store my data on a server?
The tool is designed as a client-side workspace. Local preferences such as theme or panel sizes stay in your browser, while any data collection performed by the VAST tags or third-party endpoints you load is controlled by those services, not by the tester itself.
How do share links work?
The share button copies a URL that reflects the current setup, such as the tag or XML source, environment choice, and related options. That is the fastest way to send someone else into the same scenario or keep a reproducible link for later debugging.
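The encoding idea can be sketched as a small helper that serializes the setup into query parameters. The parameter names below (tagUrl, env, muted) are assumptions for illustration, not the app's documented query schema.

```javascript
// Hedged sketch: build a reproducible link from the current setup.
// Parameter names are hypothetical; the real app may use different ones.
function buildShareLink(baseUrl, setup) {
  const params = new URLSearchParams();
  if (setup.tagUrl) params.set("tagUrl", setup.tagUrl);
  if (setup.environment) params.set("env", setup.environment);
  if (setup.muted !== undefined) params.set("muted", String(setup.muted));
  return `${baseUrl}?${params.toString()}`;
}

const link = buildShareLink("https://example.com/vast-tester", {
  tagUrl: "https://ads.example.com/vast.xml",
  environment: "ctv",
  muted: true,
});
// Anyone opening this link lands in the same tag + environment scenario.
console.log(link);
```

Because the whole setup lives in the URL, the link works both for handing a scenario to a teammate and for bookmarking a reproduction case.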
Is there automation support?
Yes. The app exposes window.VastTester for automation and scripting. You can use it to load tags or XML, control playback, change environments, inspect state, read validation and report data, evaluate presets, and wait for specific ad lifecycle events in automated tests.
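A typical automated flow looks like the sketch below. window.VastTester is the documented entry point, but the method names used here (loadTag, play, waitForEvent, getReport) are assumptions about its surface; a minimal mock stands in for the real object so the flow is runnable outside a browser.

```javascript
// Mock stand-in for window.VastTester -- method names are hypothetical.
const VastTester = {
  async loadTag(url) { this._tag = url; },
  async play() {
    this._events = ["start", "firstQuartile", "midpoint", "thirdQuartile", "complete"];
  },
  async waitForEvent(name) { return this._events.includes(name); },
  getReport() {
    return { tag: this._tag, completed: this._events.includes("complete") };
  },
};

// The automation pattern: load a tag, play it, wait for a lifecycle
// event, then pull the session report for assertions.
async function run() {
  await VastTester.loadTag("https://ads.example.com/vast.xml");
  await VastTester.play();
  const completed = await VastTester.waitForEvent("complete");
  const report = VastTester.getReport();
  return { completed, report };
}

run().then(({ completed, report }) => {
  console.log(completed, report.tag);
});
```

In a real test runner you would drive the actual window.VastTester object inside the page instead of a mock, but the load, play, wait, and report steps follow the same shape.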