VAS Support

Get your questions answered with our FAQs or see how our software can help your team - if we haven't answered your question here, talk to a VAS expert and we'll help you find what you need.

VAS Frequently Asked Questions (FAQs)

  • VAS is a predictive model that simulates a phase of vision called pre-attentive processing. We call it “first glance” vision to distinguish it from conscious viewing, a.k.a. post-attentive processing.
  • We switch between conscious viewing and first glance vision every time we shift our gaze. First glance vision may only last a few milliseconds if we are engaged in a visual task. Or, it can last up to five seconds if we are not engaged in a visual task.

    Visual tasks include reading this sentence, viewing a webpage, looking for an exit sign in a public space, a company logo on a piece of direct mail, or an employee at a retail store. But, for our vision system, these tasks translate into looking for colors, shapes, etc.
  • No. This phase of vision is identical in every human regardless of demographic, psychographic or cultural differences, which lends itself to modeling. Conscious viewing varies by age, gender, whether we read right-to-left or left-to-right, and other factors, so it would require a different model for every combination of these factors.
  • No. Conscious viewing must be triggered for a person to comprehend what they are looking at and consider it. Conscious viewing must also be triggered for common viewing conventions to matter, such as the “F” pattern of typical webpages. The critical first step for marketing – or any form of visual communication – is to get people to notice it during first glance vision, which increases the probability they’ll switch to conscious viewing.
  • VAS results predict which areas within a layout, photo or mock-up are likely to be viewed consciously. VAS analysis identifies 5 visual elements proven to attract attention during first glance vision, including Edges, Intensity, Red/Green Color Contrast, Blue/Yellow Color Contrast, and Faces.
  • VAS is 92% accurate in predicting the results of an eye-tracking study capturing pre-attentive processing – the first glance moment. The model powering VAS was developed by 3M neuroscientists and cognitive scientists using hundreds of thousands of data points from academic eye-tracking studies, supplemented by 3M eye-tracking studies. Models of visual attention have existed for decades, following the scientific community’s discovery of pre-attentive processing. You can learn more about pre-attentive processing here.
  • No, you don’t need to download software. VAS is a web-based platform that you can access using a computer (desktop or laptop), smartphone, or tablet with an internet connection.
  • Yes, to a degree. We recommend uploading images with a minimum size of 600 x 600 pixels. If it’s smaller, the software will resize it automatically before analyzing it, which may affect the results. It’s okay to analyze smaller images when you’re just trying to get a quick result, so don’t spend too much time resizing images. An important tip is to keep image size consistent if you are comparing VAS results for different versions of the same layout or mock-up.
  • Yes, but not as much as you might think. First glance vision is not high-resolution, and it’s not 3D. Standard resolution of .jpg, .png, .pdf files is suitable. An important tip is to keep image resolution consistent if you are comparing VAS results for different versions of the same layout or mock-up.
  • VAS will analyze .jpg, .png, and .pdf files.
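The upload guidance above can be sketched as a small pre-flight check. This is a hypothetical helper, not part of VAS; it only encodes the two rules stated in this FAQ (accepted file types and the 600 x 600 pixel recommendation):

```python
# Minimal pre-upload check (illustrative helper, not part of VAS).
# VAS accepts .jpg, .png, and .pdf files and recommends images of at
# least 600 x 600 pixels; smaller images are resized automatically,
# which may affect results.

ACCEPTED_EXTENSIONS = {".jpg", ".png", ".pdf"}
MIN_SIDE = 600  # recommended minimum width and height, in pixels

def check_upload(filename: str, width: int, height: int) -> list[str]:
    """Return a list of warnings for a candidate upload."""
    warnings = []
    ext = filename.lower()[filename.rfind("."):] if "." in filename else ""
    if ext not in ACCEPTED_EXTENSIONS:
        warnings.append(f"{ext or 'no extension'}: VAS accepts .jpg, .png, .pdf")
    if min(width, height) < MIN_SIDE:
        warnings.append(
            f"{width}x{height} is below {MIN_SIDE}x{MIN_SIDE}; "
            "VAS will resize it, which may affect results"
        )
    return warnings

print(check_upload("mockup.jpg", 500, 800))
```

In practice you would read the width and height from the file itself (for example with an image library such as Pillow) before calling a check like this.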

  • There are three ways to upload an image when using a computer:

    • Navigate to a saved file on your computer or in your photo library.
    • Drag and drop a saved file.
    • Take a screen capture and use the cut and paste feature.

    You can also take a photo on your smartphone or tablet.
  • No, VAS accounts are for individual use only as noted in the Terms of Service each user agreed to when signing up for their VAS account. VAS accounts will be temporarily disabled if two or more people try to access one VAS account simultaneously, or if a single user tries to access their VAS account on two or more devices simultaneously.

  • Try to replicate a consumer’s actual visual field as closely as possible. Take photos from sightlines that are likely to be popular or important. Use landscape orientation rather than portrait orientation. Frame photos wide rather than tight. Consider taking photos for more than one condition if you think it might matter to consumers. Examples of conditions include daytime versus nighttime, or when there are people present versus when there are no people present. An important tip is to limit the number of photos you take for analysis. This limits the extra work you’ll need to do, and it helps keep the discussion focused on the creative/marketing rather than on the VAS results themselves. VAS results are simply intended to add an objective data point to the discussion.

  • No, we consider VAS a complementary tool. Most often, eye tracking studies are intended to capture conscious viewing, and to solicit feedback from subjects about what they are looking at and thinking. But, rest assured that VAS accurately simulates eye tracking results during pre-attentive processing - the first glance moment. VAS results are objective, every time, because no live subjects are involved that may have been biased by knowing they are participating in research or being directed to look at a stimulus. We encourage market research professionals to use VAS to help finalize stimuli before implementing an eye-tracking study, a focus group, or any similar activity.

  • Use VAS results to gain consensus on visual priorities. Get everyone on the same page as early as possible, which typically saves time and energy throughout a project. Use VAS results to support your recommendations. Increase everyone’s confidence that your marketing will get noticed, and reduce some of the subjective feedback that can create churn.

  • The VAS application is responsive, so the user interface will re-shape itself to fit the device screen. Remember that using a tablet requires an internet connection. The steps to analyze an image are identical, and you’ll get all 5 VAS results.

    However, there are 2 important differences on a smartphone or tablet:


    1. To mark up areas of interest before analyzing an image, you simply tap your finger around an area (instead of using a mouse on your laptop). Your final tap should be on/near the first tap, which will create the area of interest. Then tap the VAS button. Marking up an area using the tap method is less precise than using a mouse, but you can always re-do your mark-up later on your laptop and analyze the image again.
    2. You cannot download the PDF report or JPGs on a tablet or smartphone. You can do this later on your laptop. If you want to share results immediately, we suggest taking a screen capture of one or more of the VAS results and sharing via text or email.
  • All screenshots are saved in RGB mode automatically, because they are generated on a computer system. CMYK is generally used for print; each letter represents an ink color used by printers (Cyan, Magenta, Yellow, and Key, i.e., black). If a file analyzes as CMYK, it was most likely prepared by your creative team as a print proof; anything you save on your computer will be RGB. If you view the original image on your computer and then use the "save as" function to save a new JPG or PNG, the result will be an RGB version.
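To illustrate why CMYK and RGB files can look different, here is a naive CMYK-to-RGB conversion. This is a simplified illustration only, not how print workflows actually convert color (those use ICC color profiles, and in practice an image library such as Pillow handles the conversion for you):

```python
# Naive CMYK -> RGB approximation (illustrative only; real print
# conversions are color-managed via ICC profiles).
# c, m, y, k are ink coverages in the range [0, 1].

def cmyk_to_rgb(c: float, m: float, y: float, k: float) -> tuple[int, int, int]:
    r = round(255 * (1 - c) * (1 - k))
    g = round(255 * (1 - m) * (1 - k))
    b = round(255 * (1 - y) * (1 - k))
    return (r, g, b)

print(cmyk_to_rgb(0.0, 1.0, 1.0, 0.0))  # full magenta + yellow ink -> (255, 0, 0), i.e. red
```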

    • Edges are created by shapes, which can be created by objects and text. An edge occurs when a group of very similar, tightly packed pixels is adjacent to a different group of similar, tightly packed pixels. Edges can be strengthened in many ways, such as making the shape larger, changing spatial relationships, changing color schemes, removing blur or shadows, etc.
    • Intensity is another way of saying luminance contrast – brightness, or black/white contrast. So, it’s more like the other forms of color contrast than opacity. It can be increased by adjusting the “brightness” of areas/objects.
    • Red/Green and Blue/Yellow color contrast – Our vision system picks up both high- and low-levels of color, so when reports indicate the presence of R/G or B/Y contrast, that means there is either a lot of color, or a lack of color. You can witness this by examining images within any software tool that lets you measure the R/G/B levels of specific areas, like using the dropper tool in Photoshop. So, you can increase the impact of these elements by changing R/G/B levels to either increase or decrease color saturation, hue, etc.
    • Faces are perhaps the most powerful Visual Element. Adding a face to an image where there was none before is almost certain to attract attention, provided the face is relatively prominent. However, if many faces exist or the face is obscured, the impact may be decreased. Note that a face is “recognized” by VAS as a group of graphic elements – a horizontal “bar” representing eyes, a “dot” for the nose, and another “bar” for the mouth.
    • Keep in mind that attention-getting power is always relative – everything within the analyzed image/photo/mockup is evaluated by the model. Therefore, it’s the “mix” of the 5 visual elements in a specific image that dictates the attention-getting potential. These elements are the “building blocks” of visual attention, but they work together to create the overall effect.

      You can also increase the attention-getting power of your visual priorities by modifying non-priority areas/objects, especially those that VAS results show have strong attention-getting power -- we call these areas distractors. For example, let’s say you want the product variant name on a package to be more prominent, but the ounces/grams text is attracting significant attention. You can do 3 things:


      • Strengthen the product variant name visual elements
      • Weaken the ounces/grams text visual elements
      • Or do both.

      This is perhaps the most common question we receive, and because each image/scene is unique and the 5 elements contribute in a “relative” manner, it’s difficult to identify “best practices” in the conscious viewing sense. There is no formula, but thankfully designers are naturally good at this. Once they understand the model’s output, they are best equipped to recommend what to do next.

      VAS results, and especially the Visual Elements result, are intended to inform the designer (and team): to help them understand where the Visual Elements exist within a design/photo, and to provide more detailed data (when mark-up is used) that *may* help guide what to try next.
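The dropper-style R/G/B inspection described above (e.g., with Photoshop's dropper tool) can also be scripted. The sketch below is a hypothetical helper, not part of VAS: it averages the R/G/B levels over a rectangular area of raw pixel data, which in practice you would obtain from an image library such as Pillow:

```python
# Average R/G/B over a rectangular area of raw pixel data -- a
# scripted stand-in for the "dropper" check described above.
# pixels is a row-major list of rows of (R, G, B) tuples; in practice
# you would get this from an image library such as Pillow.

def average_rgb(pixels, left, top, right, bottom):
    """Mean R, G, B over the half-open box [left, right) x [top, bottom)."""
    totals = [0, 0, 0]
    count = 0
    for row in pixels[top:bottom]:
        for r, g, b in row[left:right]:
            totals[0] += r
            totals[1] += g
            totals[2] += b
            count += 1
    return tuple(t / count for t in totals)

# A 2x2 sample: a saturated red column next to a desaturated gray column.
sample = [
    [(255, 0, 0), (128, 128, 128)],
    [(255, 0, 0), (128, 128, 128)],
]
print(average_rgb(sample, 0, 0, 1, 2))  # left column -> (255.0, 0.0, 0.0)
```

High averages on one channel relative to the others (or uniformly mid-range values) correspond to the "a lot of color" versus "lack of color" conditions the R/G and B/Y contrast elements respond to.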


Ready to VAS?

Get started by signing up for a VAS account. Try it and analyze two images for free. Choose from monthly or annual subscription options, or talk to a VAS expert to discuss group licensing benefits.
