Many of today’s AI tools can watch a video and summarise what’s going on, but things get harder when you ask a model questions about multiple videos, or about footage that stretches over hours.
This is a major limitation for security companies that want to use AI to scrub through thousands of hours of footage from multiple cameras, and for marketing companies that want to analyze various video campaigns and product shoots.
Memories.ai wants to tackle that issue with an AI platform that can process up to 10 million hours of video. For businesses with plenty of video to analyze, the startup aims to provide a context layer with searchable indexes, tags, segments, and aggregations.
Co-founder Shawn Shen, who holds a PhD, is a research scientist at Meta’s Reality Labs, and his co-founder, Enmin (Ben) Zhou, worked at Meta as a machine learning engineer.
“All the top AI companies, such as Google, OpenAI, and Meta, focus on producing end-to-end models. These models are great, but they have a limited understanding of video context beyond an hour or two,” Shen told TechCrunch.
“But when humans use visual memory, we sift through a large context of data. We were inspired by this and wanted to build a solution that better understands video over long stretches of time,” he said.

Towards that goal, the company has now raised $8 million in a seed funding round led by Susa Ventures, with participation from Samsung Next, Fusion Fund, Crane Ventures, Seedcamp, and Creator Ventures. Shen said the company initially aimed to raise $4 million, but ended up with an oversubscribed round due to investor interest.
“Shen is a very technical founder and is obsessed with pushing the boundaries of video understanding and intelligence,” said Misha Gordon Law, partner at Susa Ventures. “Memories.ai unlocks a lot of first-party visual intelligence data with its solution. I felt there was a gap in the market for long-context visual intelligence and was drawn to invest in the company,” he added.
Samsung Next’s thesis was slightly different: its investment team sees potential in Memories.ai’s on-device capabilities.
“One of the things we like about Memories.ai is that a lot of the computing can happen on-device, meaning you don’t necessarily have to store video data in the cloud. That unlocks better security applications for people who are worried about putting security cameras in their homes because of privacy concerns.”
Memories.ai says it uses its own technology stack and models to perform the analysis. First, its models strip the noise out of a video and pass the output to a compression layer that stores only the important data. An index layer then makes the video data searchable through segmentation and tagging, so it can be queried in natural language. Finally, an aggregation layer summarises data from the index and helps generate reports.
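To make the layered design more concrete, here is a rough, simplified sketch in Python of the kind of pipeline described above. The names (Segment, denoise, compress, index, aggregate) and the toy frame data are illustrative assumptions, not Memories.ai’s actual stack or API.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A tagged slice of a video, produced by the index layer (hypothetical)."""
    video_id: str
    start_s: float
    end_s: float
    tags: list[str] = field(default_factory=list)

def denoise(frames: list[dict]) -> list[dict]:
    """Noise layer: drop frames flagged as uninformative (blank, blurred, duplicate)."""
    return [f for f in frames if not f.get("is_noise", False)]

def compress(frames: list[dict], keep_every: int = 5) -> list[dict]:
    """Compression layer: keep only a sparse subset of 'important' frames."""
    return frames[::keep_every]

def index(video_id: str, frames: list[dict]) -> list[Segment]:
    """Index layer: turn frames into segments with searchable tags."""
    return [Segment(video_id, f["t"], f["t"] + 1.0, f.get("labels", [])) for f in frames]

def aggregate(segments: list[Segment], query_tag: str) -> dict:
    """Aggregation layer: summarise everything in the index matching a tag."""
    hits = [s for s in segments if query_tag in s.tags]
    return {"tag": query_tag, "matches": len(hits),
            "videos": sorted({s.video_id for s in hits})}

# Toy usage: one camera's worth of synthetic frames, then a tag query over the index.
frames = [{"t": float(i), "labels": ["person"] if i % 3 == 0 else [], "is_noise": i % 7 == 0}
          for i in range(30)]
segments = index("cam-01", compress(denoise(frames)))
print(aggregate(segments, "person"))
```

In a real system, the noise and index stages would be model-driven rather than rule-based, and natural-language queries would be mapped onto the tags and segments, but the flow from raw footage to a searchable, summarisable index follows the same shape.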
For now, the startup caters to two types of customers: marketing and security companies. Marketing companies can use its tools to search for trends around their brands on social media and identify what videos they want to create. Memories.ai also provides tools to help marketers create those videos.
The company is also working with security companies to help flag potentially dangerous behavior by people on camera, analyzing security footage and inferring patterns.

Companies currently using Memories.ai have to upload their video libraries to the platform to have their clips analyzed. However, Shen said that in the future his clients will be able to create shared drives and sync content more easily. The plan is to let customers ask things like, “Tell me all about the people I interviewed last week.”
Shen envisions an AI assistant that can gain context about a user’s life, whether through photos or through smart glasses. He also believes the technology could play a role in training humanoid robots to perform complex tasks, or help self-driving cars remember different routes.
The company currently employs 15 people and plans to use the funds to grow its team and improve its search technology.
Memories.ai competes with startups such as Mem0 and Letta, which are working to provide memory layers for AI models but currently offer limited video support. It will also have to contend with companies like TwelveLabs and Google, which are working on helping AI models understand video.
However, Shen feels his company’s solution operates at a different layer, and that it can work with different video models as well.