Mint Explainer | A web for machines, not humans: Decoding ex-Twitter CEO Parag Agrawal’s next big move

Agrawal is building infrastructure and tools optimized for AI agents to access, verify, and organise web data. Simply put, he wants to change how AI surfs the internet by creating a platform that is built for AI, works in real time, can be trusted, and scales easily. But is the idea new? What does it entail? And will it truly transform the web or remain a private ecosystem? Mint explains
What is Parag Agrawal’s story so far?
India-born Parag Agrawal began his career in 2006 as a researcher at Microsoft before moving briefly to Yahoo and later returning to Microsoft. In 2009, he joined AT&T’s research division, but it was at Twitter, which he entered in 2011 as a distinguished software engineer, that his career took off.
After six years, he rose to chief technology officer and, in 2021, was named CEO. His tenure was cut short when Elon Musk acquired Twitter in October 2022, rebranded it as X, and ousted thousands of employees, including Agrawal and three other top executives. The employees collectively sued Musk for $500 million in severance pay, a claim he has partially and tentatively settled for now. However, Agrawal and the three other senior executives continue to pursue their own claims in court.
Why is he in the news again?
The IIT Bombay graduate recently announced the launch of his new company’s Deep Research API (application programming interface), which he claimed is “…the first to outperform both humans and all leading models including GPT-5 on two of the hardest benchmarks”. (The two benchmarks, DeepResearch Bench and BrowseComp, measure how well AI can dig up hard-to-find information and produce detailed reports.)
In operation since last October, Parallel Web Systems already powers “millions of research tasks every day”. According to Agrawal’s LinkedIn post last week, “…some of the fastest growing AI companies use Parallel to bring web intelligence directly into their platform and agents. A public company automates traditionally human workflows, exceeding human-level accuracy with Parallel. Coding agents rely on our search to find docs and debug issues…”
What does it mean for users and enterprises?
Unlike Google or Perplexity, which serve people with answers or links, Parallel is designed for machines. Its Deep Research API enables AI agents to move beyond surface-level searches, using multiple research engines to deliver anything from quick responses to complex, time-intensive insights.
Each result comes with attribution, confidence scores, and structured outputs, making the data both verifiable and machine-ready. For enterprises, this means plugging the API into their own AI systems to power tasks like market analysis, due diligence, customer research, or competitive intelligence. By prioritising traceability and reliability, Parallel is attempting to tackle the problem of AI hallucinations. That makes it especially valuable for sectors such as finance, law, and healthcare, where accuracy and trust matter most.
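What “verifiable and machine-ready” output might look like in practice can be sketched in a few lines of Python. This is a purely hypothetical illustration: the field names (`claim`, `source_url`, `confidence`) and the 0.8 confidence threshold are assumptions made for the sketch, not Parallel’s actual API schema.

```python
# Hypothetical sketch of how an enterprise pipeline might consume
# machine-readable research results that carry attribution and
# confidence scores. Field names and the threshold are illustrative
# assumptions, not Parallel's actual schema.

CONFIDENCE_FLOOR = 0.8  # assumed cut-off for "trustworthy enough"

def filter_verifiable(results):
    """Keep only findings that are both attributed and high-confidence."""
    return [
        r for r in results
        if r.get("source_url") and r.get("confidence", 0.0) >= CONFIDENCE_FLOOR
    ]

if __name__ == "__main__":
    sample = [
        {"claim": "Q3 revenue rose 12%",
         "source_url": "https://example.com/10-q", "confidence": 0.93},
        {"claim": "Rumoured acquisition",
         "source_url": None, "confidence": 0.95},   # no attribution: dropped
        {"claim": "CEO to step down",
         "source_url": "https://example.com/blog", "confidence": 0.41},  # too uncertain: dropped
    ]
    for finding in filter_verifiable(sample):
        print(finding["claim"], "<-", finding["source_url"])
```

The point of the sketch is the design choice itself: because every claim travels with its source and a confidence score, a downstream system in finance, law, or healthcare can mechanically discard anything unattributed or uncertain rather than trusting a model’s bare assertion.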
But how unique is Agrawal’s idea?
Agrawal is right that today’s web still serves humans: we click links, juggle tabs, compare prices, and judge credibility. AI systems attempt the same but struggle with unstructured data, paywalls, and noise, which limits them to simple queries. His vision, though, isn’t entirely new.
Technologists have long imagined a programmatic web in which machines interact with the internet directly. The agentic web goes further, envisioning AI agents that don’t just fetch facts but act on them: booking flights, restocking groceries, or running analyses. And unlike Web3, which focused on decentralised ownership but never scaled, Parallel is a web built for AI as its primary user. With APIs that promise clean, verifiable, real-time data, Agrawal is creating the first serious infrastructure for this shift, in the hope that enterprises will pay for it.
What about standards and protocols?
Every major shift in the web’s history has relied on standards and protocols established by bodies like the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF)—from hypertext markup language (HTML) to the frameworks that make today’s web interoperable, allowing programs to communicate with each other.
In the early days of search engines, for instance, companies each used their own indexing methods until standards around metadata and sitemaps helped unify the ecosystem. Likewise, the rise of mobile apps forced developers and device makers to agree on protocols that allowed apps to work across platforms.
The programmatic web, though, is a complex marketplace dominated by opaque systems and proprietary tech. It still runs on the foundation set by bodies like the W3C and IETF, but it remains a patchwork of open ideals and closed commercial interests even as standards groups try to rein it in.
Agrawal’s vision of a programmatic, machine-first web would likewise need common formats for attribution, verification, and structured outputs so that AI agents can reliably share and interpret information across platforms. Without such standards, his project risks becoming another siloed ecosystem rather than the transformative web infrastructure he envisions.
What about the AI bot problem?
Automated bots already make up close to half of all internet traffic, performing tasks such as price comparisons and content scraping, but also spamming or gaming systems for ads and clicks. Further, AI-driven bots can mimic human behaviour, learn from their environment, and evade detection.
A machine-first web risks amplifying those problems unless it can distinguish between “good” AI agents and malicious bots. Verification and attribution, which Parallel is building into its system, may help by giving enterprises a way to trust certain sources while filtering out noise. But how do you stop an AI-first internet from becoming overrun by low-quality or adversarial traffic? Search engines like Google already fight constant battles with SEO spam; a programmatic web like Agrawal’s Parallel could magnify that challenge many times over.
So, what can we conclude?
For now, the good news is that investors are supporting the idea. To date, Agrawal has secured $30 million in funding from investors including Khosla Ventures, First Round Capital, and Index Ventures. However, given the bad bot problem and interoperability challenges, Agrawal’s AI startup will not only have to build infrastructure for AI research, but also ensure governance and guardrails that prevent the platform from being gamed.