Brad Menezes, CEO of enterprise vibe-coding startup Superblocks, believes the next crop of $1 billion startup ideas is hiding in plain sight.
System prompts are the lengthy prompts (often 5,000 to 6,000 words or more) that AI startups use to instruct foundation models from companies like OpenAI and Anthropic on how to power their application-level AI products. In Menezes's view, they amount to a master class in prompt engineering.
“Every company’s system prompt is very different, even on the same [foundational] models,” he told TechCrunch.
System prompts aren’t exactly hidden. Users can ask many AI tools to share theirs. But they aren’t always made public.
So Superblocks published 19 system prompt files from some of the most popular AI coding products, including Windsurf, Manus, Cursor, Lovable, and Bolt, as part of the launch of its own enterprise AI coding agent.
Menezes’s tweet went viral, viewed by roughly 2 million people, including Valley notables like Sam Blond of Founders Fund and formerly Brex, as well as Superblocks investor Aaron Levie. Last week, Superblocks announced it raised a $23 million Series A, bringing its total raised to $60 million, for its vibe-coding tool aimed at enterprise non-developers.
So we asked Menezes to walk us through how to study other companies’ system prompts to glean insights.
“The biggest learning for us, from building Clark and reading these system prompts, is that the system prompt itself is maybe 20% of the secret sauce,” Menezes explained. The prompt gives the LLM a baseline of what to do.
The other 80%, he said, is “prompt enrichment”: the infrastructure a startup builds around its calls to the LLM. That includes steps that attach context to the user’s prompt before the call, and actions taken on the response that comes back, such as checking it for accuracy.
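The split Menezes describes can be pictured as a thin wrapper around the model call. Here is a minimal, hypothetical sketch in Python; `call_llm`, `enrich_prompt`, and `check_response` are illustrative names standing in for a real API client and whatever pre- and post-processing a given startup actually runs:

```python
# Sketch of "prompt enrichment": the system prompt is only ~20% of the
# pipeline; the rest is infrastructure wrapped around the LLM call.
# All names here are illustrative, not any vendor's actual code.

SYSTEM_PROMPT = "You are a coding agent. Produce working, clean code."

def call_llm(messages):
    # Stub standing in for a real model API call (OpenAI, Anthropic, etc.).
    return "def add(a, b):\n    return a + b"

def enrich_prompt(user_prompt, context_docs):
    # Pre-processing: attach retrieved context to the user's prompt.
    context = "\n".join(f"[context] {doc}" for doc in context_docs)
    return f"{context}\n\n{user_prompt}"

def check_response(response):
    # Post-processing: a cheap validation gate before returning the answer.
    # Here we just verify the returned code parses as Python.
    try:
        compile(response, "<llm-output>", "exec")
        return True
    except SyntaxError:
        return False

def run_agent(user_prompt, context_docs):
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": enrich_prompt(user_prompt, context_docs)},
    ]
    response = call_llm(messages)
    if not check_response(response):
        raise ValueError("response failed validation")
    return response

print(run_agent("Write an add function.", ["Style guide: use snake_case."]))
```

The system prompt is one static string; the enrichment and validation steps are where most of the engineering lives.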
Roles, contexts, tools
He said a system prompt has three parts: role prompting, context prompting, and tool use.
The first thing you’ll notice is that system prompts are written in natural language, yet are extremely specific. “It’s basically what you’d have to tell a human colleague to do the task,” Menezes said. “And the instructions have to be perfect.”
Role prompts help LLMs stay consistent, giving them both a purpose and a personality. Devin’s, for example, says: “You are Devin, a software engineer using a real computer operating system. You are a real code-wiz: few programmers are as talented as you at understanding codebases, writing functional and clean code, and iterating on your changes until they are correct.”
A context prompt gives the model the context it should consider before it acts, for example, guardrails that can reduce costs and keep the task clear.
Cursor’s instructions include: “Call tools only when needed, and don’t mention tool names to the user; just explain what you’re doing… Don’t output code to the user unless asked.”
Tool use instructs the model to do more than generate text, which is what enables agentic tasks. Replit’s, for example, is long and covers editing and searching code, installing languages, setting up and querying PostgreSQL databases, executing shell commands, and more.
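Tool use of this kind is typically declared to the model as structured schemas it can invoke, with the agent dispatching on whichever tool the model picks. A minimal, hypothetical sketch in the JSON-schema style used by major LLM APIs; the tool names and parameters below are illustrative, not Replit’s actual spec:

```python
# Hypothetical tool definitions in the JSON-schema style used by major
# LLM APIs. Names and parameters are illustrative only.
TOOLS = [
    {
        "name": "run_shell",
        "description": "Execute a shell command and return stdout.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
    {
        "name": "query_postgres",
        "description": "Run a read-only SQL query against the app database.",
        "parameters": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
    },
]

def dispatch(tool_call):
    # The agent loop routes the model's chosen tool call to a real handler;
    # these handlers are stubs that only echo what they would do.
    handlers = {
        "run_shell": lambda args: f"(would run) {args['command']}",
        "query_postgres": lambda args: f"(would query) {args['sql']}",
    }
    return handlers[tool_call["name"]](tool_call["arguments"])

print(dispatch({"name": "run_shell", "arguments": {"command": "ls"}}))
```

In production, each handler would actually execute the command or query and feed the result back to the model for its next step.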
Studying other companies’ system prompts helped Menezes see what each vibe-coding product emphasizes. Tools like Manus, Devin, OpenAI Codex, and Replit help users create full-stack applications, but “the output is still raw code,” while tools like Lovable, V0, and Bolt focus on “fast iterations.”
Menezes saw an opportunity to let non-programmers build apps if his startup could handle more of the stack, such as security and access to enterprise data sources like Salesforce.
He hasn’t yet landed his dream roster of billion-dollar clients, but Superblocks has signed several well-known companies, including Instacart and Papaya Global.
Menezes also dogfoods the product internally. His software engineers aren’t allowed to build internal tools; they can only build the product. So businesspeople at the company have built agents that use CRM data to identify leads, agents that track support metrics, and agents that balance assignments among human sales engineers.
“It’s basically a way to build tools instead of buying them,” he says.