Abhay Bhat

Life | Technology | Investing

Have been looking for paid solutions for quite some time. Western markets have options thanks to Plaid, but not APAC markets, especially Singapore, where open banking has been carefully throttled. With agents you can build thrifty bespoke solutions. SaaS only works for deep verticals


Tired of bank credit cards with zero useful spend tracking. Subscriptions stay hidden – my pain point. Categories are a complete mess across separate bank statements. Enter the @openclaw agent skill, built in under 30 minutes. The agent finds my monthly statement from email, authenticates, …
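
The tweet above is cut off, so here is a purely hypothetical sketch of one piece such a spend-tracking skill might include once it has pulled a statement: group transactions by merchant and flag the ones that recur across months as likely subscriptions. The CSV columns and the `recurring_subscriptions` helper are assumptions for illustration, not the actual @openclaw skill.

```python
# Hypothetical sketch only: the original tweet is truncated, so the statement
# format ('date', 'merchant', 'amount' CSV columns) and this helper are assumptions.
import csv
import io
from collections import defaultdict

def recurring_subscriptions(statement_csv: str, months_required: int = 2):
    """Flag merchants that appear in at least `months_required` distinct months."""
    seen = defaultdict(set)     # merchant -> set of months it appears in
    spend = defaultdict(float)  # merchant -> total spend across the statement
    for row in csv.DictReader(io.StringIO(statement_csv)):
        month = row["date"][:7]  # 'YYYY-MM'
        seen[row["merchant"]].add(month)
        spend[row["merchant"]] += float(row["amount"])
    return {
        merchant: {"months": sorted(months), "total": round(spend[merchant], 2)}
        for merchant, months in seen.items()
        if len(months) >= months_required
    }

if __name__ == "__main__":
    sample = (
        "date,merchant,amount\n"
        "2024-05-03,StreamingCo,12.98\n"
        "2024-06-03,StreamingCo,12.98\n"
        "2024-06-11,One-off Cafe,8.50\n"
    )
    # StreamingCo shows up in two months, so it is surfaced as a likely subscription.
    print(recurring_subscriptions(sample))
```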


@steipete @quantumaidev Boss move

Forward Deployed Engineers: The New Product Superpower

Building products that actually work requires engineers who live with customers, not Zoom warriors

The Bold Truth About Product Discovery

Forward Deployed Engineers (FDEs) are techies who embed directly with customers, not in Zoom calls but in their offices, warehouses, or hospitals, to understand real problems and build solutions that actually work. This model, pioneered by Palantir and now spreading across AI startups, is becoming the secret weapon for creating products that deliver outcomes instead of features. If you’re building anything complex, especially AI agents, this might be the only way to win.

Clawdbot did two things right: built a flawless, unhinged open-source agent you can look under th…

@antirez @mayowaoshin Claude Code is a lot more than just a best-in-class coding model. It is the a…

Let’s get a #polymarket on this https://t.co/dBkvYphTFz

The AI Chip Wars: GPU vs TPU vs LPU https://t.co/7XCQkGZkGY


Clawdbot 101 https://t.co/bLyRKrPqkq



The AI Chip Wars: GPU vs TPU vs LPU

Why the future of AI isn’t one chip to rule them all


The 60-Second Primer

Three chips are fighting for AI’s soul. GPUs (Graphics Processing Units), the Swiss Army knife that trains most AI models today. TPUs (Tensor Processing Units), Google’s secret weapon, hoarded for its own data centers. And LPUs (Language Processing Units), the new kid, optimized purely for inference speed. Understanding which chip wins where isn’t just hardware trivia; it’s the difference between a startup burning cash on the wrong infrastructure and an enterprise shipping AI that actually responds in real time.
