Dashmote is a software platform that uses computer vision and other AI methods to provide visual and location analytics.
‘Visual Brand Intelligence’ analyses online images to extract insights based on content (concepts, colours and other attributes).
‘Visual Trend Analysis’ analyses the content and engagement rates of image-based social posts to determine the magnitude of trends.
‘Location Intelligence’ uses online data to augment profiles of retail, grocery and food & beverage outlets to help with strategic planning.
Attention Insight is a software platform that predicts where users will look while engaging with content.
It helps to identify design problems and provides insights into user attention without collecting fresh data from participants.
Outputs include heatmaps (visual representations of how users’ attention is distributed) and Areas of Interest (the percentage attention that different visual objects receive).
The system is based on deep learning and trained with data from over 20,000 previous eye-tracking studies. It claims 84% prediction accuracy for website designs.
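The Areas of Interest output described above can be illustrated with a small sketch: given a model-predicted saliency map, sum the saliency falling inside each rectangular region and express it as a share of the whole. The grid and region bounds below are invented illustrative data, not Attention Insight's format or method.

```python
# Compute the percentage of predicted attention captured by each
# Area of Interest (AOI) in a 2D saliency map.

def aoi_attention(saliency, aois):
    """saliency: 2D list of per-cell saliency values.
    aois: dict mapping AOI name -> (row0, row1, col0, col1),
    with half-open bounds [row0, row1) x [col0, col1)."""
    total = sum(sum(row) for row in saliency)
    shares = {}
    for name, (r0, r1, c0, c1) in aois.items():
        s = sum(saliency[r][c] for r in range(r0, r1) for c in range(c0, c1))
        shares[name] = round(100.0 * s / total, 1)  # percentage of all attention
    return shares

# Toy 4x4 saliency map with a bright centre region.
saliency = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.8, 0.1],
    [0.1, 0.8, 0.7, 0.1],
    [0.0, 0.1, 0.1, 0.0],
]
print(aoi_attention(saliency, {"hero": (1, 3, 1, 3), "banner": (0, 1, 0, 4)}))
```

In a real pipeline the saliency map would come from the trained model and the AOIs from the design's layout; the percentage calculation itself is this simple.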
Ultinous is an in-store video measurement and analytics platform for people counting and behavioural insights.
Cameras observe store areas at an angle of 10–30° from the horizontal, allowing the video stream to be used for counting passer-by traffic, measuring footfall, identifying shopper demographics (age and gender), creating heatmaps of store use, capturing dwell times and analysing promotions.
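Two of the metrics listed above, footfall and dwell time, can be sketched from tracked detections: footfall is the number of unique tracks seen in a zone, and a track's dwell time is the span between its first and last detection there. The track IDs and timestamps below are invented; a real system would derive them from the video stream.

```python
# Footfall and average dwell time for one store zone, from
# (track_id, timestamp_seconds) detection events.

def zone_metrics(detections):
    """detections: iterable of (track_id, timestamp_seconds) in one zone."""
    first_last = {}
    for tid, ts in detections:
        lo, hi = first_last.get(tid, (ts, ts))
        first_last[tid] = (min(lo, ts), max(hi, ts))
    dwell = {tid: hi - lo for tid, (lo, hi) in first_last.items()}
    footfall = len(dwell)                      # unique visitors to the zone
    avg_dwell = sum(dwell.values()) / footfall if footfall else 0.0
    return footfall, avg_dwell

# Track 1 is seen at t=0..20 s (dwell 20 s), track 2 at t=5..35 s (dwell 30 s).
print(zone_metrics([(1, 0), (1, 12), (2, 5), (2, 35), (1, 20)]))
```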
BirdSight is a social intelligence platform that uses image analysis to extract insight from visual posts.
AI-based technology recognises objects, scenes, attributes and emotions in images; it then combines these with other signals (shares, sentiment, reach, impressions) to derive measures of impact.
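One simple way to fold the listed engagement factors into a single impact measure is a weighted sum of normalised signals. The weights, normalisation and example figures below are invented for illustration; BirdSight's actual scoring is proprietary.

```python
# Combine engagement signals for a post into one impact score in [0, 1].

def impact_score(post, maxima, weights):
    """post: dict with 'shares', 'reach', 'impressions' counts and
    'sentiment' in [-1, 1]. maxima: campaign-wide maxima used to
    normalise the count signals. weights: per-signal weights summing to 1."""
    score = 0.0
    for key, w in weights.items():
        if key == "sentiment":
            val = (post[key] + 1) / 2          # map [-1, 1] -> [0, 1]
        else:
            val = post[key] / maxima[key]      # normalise against campaign max
        score += w * val
    return round(score, 3)

post = {"shares": 50, "sentiment": 0.5, "reach": 10000, "impressions": 40000}
maxima = {"shares": 100, "reach": 20000, "impressions": 80000}
weights = {"shares": 0.4, "sentiment": 0.3, "reach": 0.2, "impressions": 0.1}
print(impact_score(post, maxima, weights))
```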
Aitrak is a platform that analyses images for visual salience without the need to conduct eye-tracking studies. It uses a proprietary AI model trained on thousands of retail images viewed by people using traditional eye-tracking technology.
It claims typical accuracy levels of 95–97% when compared with traditional eye-tracking studies of the same images.
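Vendors define "accuracy" in different ways, but one standard metric for comparing a predicted saliency map against an eye-tracking heatmap is the Pearson correlation coefficient (CC), sketched below on invented per-pixel values. This is not necessarily how Aitrak computes its figure.

```python
import math

def pearson_cc(pred, truth):
    """Pearson correlation between two equal-length flat lists of
    per-pixel saliency values (1.0 = perfect linear agreement)."""
    n = len(pred)
    mp = sum(pred) / n
    mt = sum(truth) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, truth))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in truth))
    return cov / (sp * st)

pred = [0.1, 0.7, 0.9, 0.2]    # model prediction, flattened
truth = [0.0, 0.6, 1.0, 0.1]   # eye-tracking ground truth, flattened
print(round(pearson_cc(pred, truth), 3))
```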
Discover.ai is an AI-driven tool for extracting meaning from text- and image-based sources in multiple markets and languages. Sources include internal documents, brand websites, influencer and expert blogs, online magazine articles, forums, communities, technical articles, social media, and more.
Dragonfly uses an algorithm to emulate how the eye and brain work in order to predict which areas of a design will grab customers’ attention. Developed with academics at Queen Mary University of London, it is based on understanding the neural architectures in the visual cortex. By replicating how the eye and brain process differences in light and shape, Dragonfly assigns each pixel a numeric saliency value, turning the data into a virtual ‘heatmap’.
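The general centre-surround principle behind per-pixel saliency can be shown with a toy version: score each pixel by its absolute difference in light intensity from the mean of its neighbours, so high-contrast pixels dominate the heatmap. Dragonfly's actual model is proprietary; this only illustrates the idea of contrast-driven saliency.

```python
# Toy contrast-based saliency: each pixel's score is its absolute
# difference from the mean intensity of its 8-connected neighbours.

def contrast_saliency(img):
    """img: 2D list of grey-level intensities; returns a same-sized 'heatmap'."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = abs(img[y][x] - sum(neigh) / len(neigh))
    return out

# A single bright pixel on a dark background receives the highest saliency.
img = [
    [10, 10, 10],
    [10, 90, 10],
    [10, 10, 10],
]
heat = contrast_saliency(img)
print(heat[1][1])   # the bright centre pixel stands out most
```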
Picasso Labs is a visual analytics platform for social content. Use cases include competitor tracking, consumer insights and influencer selection. Images are automatically tagged based on content, and analytics determines which creative attributes increase or decrease engagement in social channels.
Eyequant helps to test the visual impact of a design without the need for tracking code or recruiting users for primary research. The software simulates thousands of users and uses AI to predict which elements will attract the most visual attention. UX design ideas can be analysed and validated instantly, and the best design variants can be selected to take forward to full-scale A/B testing.