Most AI commentary is produced by people one step removed from doing anything. Here are 15 who are actually doing the work.

The volume of AI content on X is not the problem. The problem is that most of it is produced by people one step removed from the actual work. They read the paper and thread it. They see the product launch and react to it. They quote the researcher and add a take. Useful, occasionally. But not where the real signal lives.

I've spent the last year testing which voices actually move my thinking forward. The ones that do share a common trait: they are doing the work, not covering it. Shipping products, running training runs, deploying at scale, managing the teams that build the infrastructure. That proximity changes what they notice and what they say.

This list is the result of that year. Pick 3-5 voices that match your specific question right now. Follow for a week. If your understanding of the AI landscape sharpens, they're earning their place in your feed.


The Signal Problem

If you follow "AI" broadly on X, most of what you see falls into one of four categories: repackaged news from bigger accounts, technical breakdowns of things covered elsewhere the same morning, hot takes engineered for engagement rather than clarity, and self-promotion dressed as insight. None of that is worthless, but none of it compounds.

Real signal has a different texture. It arrives as an observation you couldn't have made yourself because you weren't in the room where it happened. It reframes something you thought you understood. It surfaces a tradeoff that wasn't in the press release. That kind of signal comes almost exclusively from people with direct operational exposure: original research, shipped products, managed teams, deployed systems.

The list below is drawn entirely from that pool. The filter was simple: would I learn something I couldn't find in a newsletter two days later? If yes, they're on it.


Research & Foundations: Where the Models Actually Come From

Following researchers matters for a specific reason. They're the only people whose current work is structurally ahead of what the rest of the industry is building on. If you want to anticipate capability shifts rather than react to them, this is where to look.

1. Andrej Karpathy (@karpathy) - ~1.4M followers

Role: AI researcher, former Tesla Director of AI, early OpenAI team member

Karpathy's value isn't that he explains deep learning clearly (he does). It's that he's willing to publicly acknowledge when the ground has shifted under his feet. His post about never feeling so far behind as a programmer landed because it came from someone who has been at the frontier for a decade. That's not performance. That's an accurate report from a competent observer.

What you'll actually get: A practitioner's eye on the transition from writing code to orchestrating AI systems. Not trend coverage. A professional reorienting his own mental model in real time, out loud.

2. Ilya Sutskever (@ilyasut) - ~500K followers

Role: Co-founder of Safe Superintelligence, former Chief Scientist at OpenAI

Ilya posts rarely, which is the point. When someone who has spent years running large-scale training infrastructure decides to say something publicly, it's worth paying attention to. He operates at the intersection of theoretical foundations and the practical constraints of building frontier systems. That combination is uncommon.

What you'll actually get: A signal on which research directions are being taken seriously by the people with the resources and talent to pursue them. Not what's generating buzz. What's being worked on.

3. Yann LeCun (@ylecun) - ~972K followers

Role: NYU professor, Meta's Chief AI Scientist

LeCun's value isn't his optimism about AI progress. It's his willingness to fight the consensus, including consensus within his own field. He pushes back on scaling narratives, questions assumptions about what large language models actually do, and does it with the credibility of someone who has been building these systems since before the current hype cycle existed.

What you'll actually get: A disciplined filter against capability claims. If you want to think clearly about what AI can and can't do, following someone who challenges the dominant framing is more useful than following people who amplify it.

4. Pedro Domingos (@pmddomingos) - ~108K followers

Role: ML researcher, University of Washington professor

Pedro operates in the space between deep theory and clear communication. In a field where most technical writing trends toward either inaccessible formalism or oversimplification, he works the middle ground well. He'll engage with foundational questions that get skipped over when everyone is chasing the next model release.

What you'll actually get: The underlying ideas that tend to get lost when the conversation moves to scale. Useful for building a model of the field that holds up beyond the current cycle.


Product & Application: From Lab to Something People Pay For

The gap between "technically impressive" and "users actually want this" is wide and expensive. The voices in this section have crossed it, which means they've encountered problems the research community isn't thinking about yet.

5. Aravind Srinivas (@AravSrinivas) - ~320K followers

Role: CEO of Perplexity AI

Aravind is building in one of the most competitive areas of applied AI and doing it at a pace that forces constant product judgment. His commentary isn't theoretical. It's informed by what users actually do, what retention looks like, and which features justify continued investment. That's a different kind of knowledge than what you get from someone reading product teardowns.

What you'll actually get: Founder-level thinking on the specific problem of making AI useful enough that people pay for it. The difference between a capable model and a retained product is almost entirely in that gap.

6. Logan Kilpatrick (@OfficialLoganK) - ~230K followers

Role: Product lead at Google DeepMind, former OpenAI developer relations

Logan sits at the interface between the people building AI infrastructure and the developers building on top of it. That position makes him useful in a specific way: he knows what the APIs actually support, where the documentation falls short, and what real use cases are being attempted versus what gets announced at conferences.

What you'll actually get: Practical signal on how to build with these tools, not just what they're capable of in theory. The delta between capability and usability is where most products actually fail.

7. Linus Ekenstam (@linusekenstam) - ~220K followers

Role: Product designer and entrepreneur

Most AI product discussion focuses on capabilities. Linus focuses on the interaction layer: what it feels like to use an AI product, where trust breaks down, and how to design for uncertainty in model outputs. These are not soft questions. They determine whether a product gets used past day three.

What you'll actually get: A design and product lens that most technical voices don't have. Useful if you're building anything that a human actually has to interact with.


Venture & Startup Ecosystem: Where Capital Is Moving Before It's Obvious

VC signal is useful not because VCs are always right but because investment decisions are made with incomplete information and high stakes. That combination forces a particular kind of analytical discipline. The voices here are worth following for how they think, not just what they conclude.

8. Bojan Tunguz (@tunguz) - ~255K followers

Role: Venture capitalist, entrepreneur, ex-physicist

Bojan's background in physics matters here. He brings a structural pattern-recognition approach to spotting early-stage companies, which means he tends to identify category shifts before they become consensus. His takes on emerging startups often land months before the same analysis appears in mainstream tech coverage. That lead time is the value.

What you'll actually get: Early signal on where capital is concentrating and what structural bets are being made. If you want to anticipate which infrastructure and application layers will matter in 12-18 months, this is the right feed to watch.

9. Varun Mayya (@varunmayya) - ~219K followers

Role: CEO of Avalon Labs, founder of JobSpire

Varun builds in a context most Western AI commentary ignores: the Indian startup ecosystem, where capital constraints are real, talent density is high, and the pressure to find genuine product-market fit is unforgiving. The lessons that come out of that environment tend to be harder-edged than what you hear from founders operating with unlimited runway.

What you'll actually get: Founder thinking grounded in traction rather than funding. Useful for understanding how AI products work when you can't outspend the problem.

10. Rowan Cheung (@rowancheung) - ~567K followers

Role: Founder of The Rundown newsletter

Rowan is on this list for a different reason than the others. He's a curator, not a researcher or founder. But he's a skilled one, and if your constraint is time rather than depth, his weekly distillation of AI developments is a reliable way to recalibrate. Think of him as a lagging indicator of what the practitioner community has already processed, useful for cross-checking your own feed.

What you'll actually get: A filtered summary of the week's most significant developments, organized for someone who needs breadth without drowning in noise.


Enterprise & Systems Thinking: How AI Actually Scales in Organizations

Enterprise adoption lags research by design. But that lag contains information: what survives contact with real procurement cycles, compliance requirements, and legacy infrastructure tells you more about durable AI value than most launch announcements.

11. Ronald van Loon (@Ronald_vanLoon) - ~342K followers

Role: AI, big data, and enterprise trends commentator

Ronald's coverage sits at the intersection of AI, cloud infrastructure, IoT, and enterprise strategy. His value is in tracking what large organizations are actually doing with AI investments, which is consistently different from what they say in press releases. If research is the frontier, enterprise adoption is the lagging indicator of what genuinely works at scale under real constraints.

What you'll actually get: A read on what AI looks like when it has to clear procurement, integrate with existing systems, and justify its budget line by line. That's a harder test than most demos reflect.

12. Vin Vashishta (@v_vashishta) - ~29K followers

Role: ML strategist and engineer

Vin's smaller following is not a signal of lesser insight. He talks about what it actually takes to ship ML systems reliably: how teams are structured, where deployment fails, what reliability looks like in production versus in a notebook. These are unglamorous topics that determine whether AI investments produce returns or just prototypes.

What you'll actually get: The engineering and organizational reality of scaling ML. The difference between a team that ships models and one that perpetually rebuilds them is almost always in what Vin writes about.

13. Antonio Grasso (@antgrasso) - ~350K followers

Role: Digital economy expert, enterprise AI strategist

Antonio operates at the systems level: how AI reshapes labor markets, shifts competitive dynamics across industries, and interacts with regulatory and economic structures. This is the layer most technical commentators skip, and it's the layer that matters for anyone making strategic decisions at the organizational level rather than the model level.

What you'll actually get: A macro frame for AI adoption. Useful for understanding the "why does this matter beyond the technology" question, which is the question every board and executive team is eventually going to ask.


Critical Perspective & Ethics: The Necessary Counterweight

Every high-momentum field develops blind spots. These two voices exist to challenge the assumptions the rest of the list mostly shares. That's not a reason to dismiss them. It's a reason to read them alongside the others.

14. Gary Marcus (@garymarcus) - ~198K followers

Role: Entrepreneur and cognitive scientist

Gary's skepticism is disciplined and specific. He isn't skeptical about AI as a category. He's skeptical about particular capability claims, about benchmarks that don't transfer to real-world performance, about safety assumptions that haven't been stress-tested. In a field with a structural incentive to overpromise, a credible skeptic with deep domain knowledge is a valuable input.

What you'll actually get: A sharper filter for AI claims. Reading Gary alongside Karpathy or LeCun creates productive tension that improves your own analysis.

15. Fei-Fei Li (@drfeifei) - ~405K followers

Role: Stanford professor, co-director of Stanford HAI, former Google Cloud AI lead

Fei-Fei brings something most AI commentary lacks: a framework for thinking about who bears the costs and who captures the benefits of AI deployment. Her emphasis on human-centered AI is not a soft constraint layered on top of technical work. It's a design philosophy that shapes what gets built and for whom. That matters for anyone making decisions about what to build, not just how.

What you'll actually get: A principled framework for thinking about AI as a social and organizational system, not just an engineering problem. Useful for anyone accountable for outcomes beyond accuracy metrics.


How to Use This List

The people who get the most out of a list like this aren't the ones who follow all 15. They're the ones who have a specific question they're trying to answer. What's the research frontier actually doing right now? How do I build something users keep using? What does enterprise AI adoption really look like? The list is organized to match those questions. Pick the section that matches yours.

Step 1: Pick your question, not your category

  • If you want to anticipate capability shifts before they hit the product layer, start with 2-3 from Research.
  • If you're building a product right now, start with 2-3 from Product & Application.
  • If you're making investment or strategic decisions, start with 2-3 from the Venture and Enterprise sections.
  • If you're responsible for AI outcomes at an organizational level, add 1-2 from Critical Perspective.

Step 2: Follow for one week before judging

Give each voice seven days. The question isn't whether you agree with them. It's whether they surface something you wouldn't have found on your own. Watch for:

  • Posts that reframe something you thought you understood
  • Disagreements you find productive rather than dismissible
  • Ideas that show up in your thinking later in the week

Step 3: Prune deliberately

A feed that compounds is one that evolves. Keep the voices that consistently shift your thinking. Drop the ones that just confirm what you already believe. Your priorities will change as the field does; your list should too.


The Meta Point

The instinct in a fast-moving field is to consume more. More posts, more newsletters, more threads. That instinct is wrong. The actual constraint isn't information volume. It's having enough high-quality inputs to form independent judgments rather than just absorbing consensus.

Five voices who are doing actual work will compound faster than fifty accounts covering those five voices. The AI landscape in 2026 rewards people who can think clearly about what's happening, not people who are most up to date on what's been announced. Those are different skills, and they require different feeds.

Build the feed that makes you think, not the one that makes you feel informed.

Interested in translating AI insights into board-level outcomes? Read about why most AI strategies fail to produce ROI and how to fix the framing.

Working through the challenges in this post? I help engineering leaders and CTOs navigate complex technical decisions and scale high-performing teams. Schedule a consultation →