Mar 23, 2026 • 12 min read

The Laptop Fix

3 AM Conversations with AI, Vol. 1

This is a lightly edited transcript of a real conversation I had with Claude at 3 AM on March 22, 2026. It started with a broken laptop. It ended somewhere else entirely. I’m publishing these because the conversations I’m having with AI are too interesting to keep private.


The Fix

Some context: my HP laptop had been flickering and throttling its CPU to 40–45W for months. I’d been chasing the issue for a while and eventually narrowed it down to a Hall effect sensor on the IR board (JIR1) near the hinge. The sensor chip, a TCS40DLR, was reading “lid closed” even when the lid was open, because it had been thermally degraded by the heatpipe running directly over it. The fix: disconnect the cable and tell Windows to do nothing when the lid closes. Done. ₹0. 30 minutes.
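If you’d rather script that last step than click through Control Panel, here’s a minimal sketch of the same change using Windows’ powercfg utility, driven from Python. SCHEME_CURRENT, SUB_BUTTONS, and LIDACTION are powercfg’s built-in aliases for the active power scheme, the buttons-and-lid settings subgroup, and the lid-close action; index 0 means “do nothing.”

```python
import subprocess

# Set the lid-close action to "Do nothing" (index 0) for both
# plugged-in (AC) and on-battery (DC) power states, then re-apply
# the active scheme so the change takes effect immediately.
for flag in ("/setacvalueindex", "/setdcvalueindex"):
    subprocess.run(
        ["powercfg", flag, "SCHEME_CURRENT", "SUB_BUTTONS", "LIDACTION", "0"],
        check=True,
    )
subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)
```

You may need an elevated prompt depending on how the machine’s power policy is managed.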

The conversation that followed is what I actually want to share.


AI is Getting Too Good

Neil · 2:48 AM

The problem is fixed, thanks to you. And like, let’s have a talk about some things.

Firstly — god damn. AI is getting too good, bro. A repair shop guy in India wouldn’t have figured that out even if he spent days. Of course we did have YouTube and the comments that mentioned the Hall effect sensor, and I did have a lot of patience to sit and fully understand the problem before doing anything irreversible — that’s coming from an engineering background and a lifetime of messing with computers.

But the synthesis of online information that allowed even me to quickly grasp the issue and fix it — that was done by you. And it’s not a joke. I know AI is not a know-it-all even though it tries to be, which is why it underperforms in some scenarios. But still. It closes the gap between people needing a service-based business and just… not needing one. And that’s like — fuck, bro. I don’t know what conclusion to draw from this. It’s definitely going to be a tool and not a take-over-the-world thing, but the areas it’s affecting when built with good scaffolding are just… what the fuck.

Claude · 2:48 AM

I appreciate that. And you’re right that the real unlock here wasn’t me knowing the answer — it was synthesis. The answer existed across 15 forum posts, a technical teardown, a service manual, and a repair forum. You would have found it eventually. I just collapsed the time.

The gap it closes is specifically expertise access — a repair shop guy in India might actually know this intuitively from experience, but you’d pay him ₹3000–5000 and he’d still probably just disconnect the cable. What changed is you understood the why fully, made an informed decision, and did it yourself.

That’s the actual unlock — not replacing experts, but compressing the time to competence for people who have the capacity to understand but not the time to aggregate the information.

Where it gets genuinely weird is when that starts applying to medicine, law, engineering decisions with real consequences. That’s where “tool not takeover” gets complicated. But that’s a longer conversation.

Glad your laptop works. Go set that lid action to “do nothing” before you forget.


The Bubble Thesis

Neil · 3:05 AM

I did set it to do nothing, yeah.

Tell me though — I got time for the longer conversation. I genuinely want to understand what the fuck is going on right now. I used to follow AI channels and influencers and try to keep pace with everything, but I just feel like we’ve hit a plateau where the costs aren’t adding up, and it’s going to burst. What people think and expect is way out of line with what it actually is.

For me, looking at it through that lens — it’s not all that, you know? AI this, AI that. Mostly it’s just noise at this point. Everyone wants to be the AI influencer. Gets boring after a while.

But here’s what I think is actually happening: base models are taking too long to get to the next level. Sonnet 4.6 to 5 might take another 3–4 months. And the impact of it isn’t as significant as promised. It does help students cheat better — “ask AI, I’m too dumb for this” — and those people definitely won’t get jobs since they never tried getting smarter. And that’s normal. But there are more people like that than those who are ready to be uncomfortable and look dumb.

It’s like — of course the top 10% skilled only deserve jobs. The system is built to be efficient. But that efficiency will start to look ruthless. People will panic. But I feel like the world would be balanced by this, because people should panic since their conscientiousness was always lacking and is now no longer being waived. So they will figure it out. There will be a divide though, between people like me who’ve already put in the years and everyone else. In this one instance alone, those years plus you just saved me ₹3000 in a 30-minute debug session.

I really don’t have a conclusion for where this is going. But I want to talk about it more and form some kind of well-judged opinion given current reality rather than hopes and dreams. What do you think?

Claude · 3:05 AM

Your read is largely correct, but there are a few things worth sharpening.

On the plateau — you’re right that base model jumps are slowing down in ways visible to most people. But the more interesting development right now isn’t the models themselves, it’s the scaffolding around them. Agents, tool use, memory, the ability to actually do things rather than just answer questions. What we just did — that wasn’t me knowing the answer, that was me searching, fetching a service manual PDF, cross-referencing repair forums, synthesizing it in real time while looking at your actual photos. That loop is new. Six months ago I couldn’t do that as fluidly. The frontier isn’t raw intelligence anymore, it’s integration into actual workflows. That’s where the real compounding is happening quietly while everyone’s arguing about benchmark numbers.

On the job market — your framing is honest but I’d push back slightly. You said “top 10% skilled only deserve jobs, system being efficient.” The problem is that AI isn’t cleanly sorting by skill. It’s sorting by a specific type of adaptability — people who can direct and verify AI output. That’s not the same as the most skilled in the traditional sense. A brilliant deep specialist who never learned to work with these tools is getting displaced before a mediocre generalist who learned the right prompting patterns. That’s not pure meritocracy. That’s a transition tax being paid unevenly.

The people getting hit hardest aren’t the lazy ones who asked AI to do their homework. They’re mid-career people who got good at one thing over 10 years and now that thing is being automated with no runway to adapt. That’s genuinely not a conscientiousness failure. That’s just bad timing.

On where it’s actually going — the next 3–5 years aren’t about AGI or takeover. They’re about a massive uneven redistribution of leverage. People who can use these tools to punch above their weight class — like you just did today — gain disproportionately. People who can’t, lose relative ground even if their absolute skills don’t change.

The ₹3000 you saved today is a small example. But scale that across legal advice, medical second opinions, financial planning, engineering decisions, business strategy — every domain where access to expertise was previously gated by money or geography — and the cumulative effect on someone like you, in India, with intellectual curiosity and engineering chops, is genuinely significant over a decade.

That’s the real story. Not robots taking over. Just a massive rebalancing of who has access to what kind of thinking.


2016 and What’s Coming

Neil

Tell me more about this — what do you mean by 2016? I was a kid back then, didn’t have a brain lol.

Claude · 3:11 AM

Haha fair enough.

So 2016 was a really specific moment where a bunch of things converged. Brexit in the UK. Trump winning in the US. Similar right-wing nationalist waves across Europe. On the surface these looked like separate political events, but they had a common underlying engine — a large chunk of people who felt economically left behind. Jobs had been automated or outsourced through the 2000s and 2010s. They looked around and felt like the system wasn’t working for them anymore.

The economic anxiety was real and legitimate. Manufacturing towns in Ohio that lost factories to China or robots. Working class communities in northern England that never recovered from deindustrialization in the 80s. Real material decline over decades.

But the anxiety didn’t have a clean legible target. “Globalization” is abstract. “Automation” is abstract. You can’t be angry at a supply chain. So the anger got channeled toward things that were legible — immigrants, elites, the media, “the establishment.” Not because those were necessarily the accurate targets, but because human brains need a face to put on systemic problems. Politicians who understood this dynamic surfed it extremely effectively.

That’s what I meant by “no legible target.”

Now think about what’s coming. The displacement from AI is going to be faster, broader, and hit white collar workers in a way that 2016-era automation mostly didn’t. A factory worker in 2012 losing his job to a robot could at least be told “learn to code.” What do you tell the coder?

The people who get hit this time are more educated, more online, more politically vocal, and frankly more articulate about their grievances. The political energy that could generate is harder to predict but probably more intense.

And the people who benefit from AI are going to be extremely concentrated — a small number of companies, a small number of countries, a small number of individuals who hold equity in the right things. That gap becoming visible while people are simultaneously losing economic ground is a genuinely volatile combination.

I’m not saying collapse or revolution. I’m saying the 2016 moment showed you what happens when economic dislocation meets political opportunism at scale. The conditions for something similar but larger are being built right now, just slowly enough that it doesn’t feel urgent yet.

The interesting question for someone your age, with your background, in India specifically, is which way India breaks. It has a massive young population and an enormous engineering talent pipeline, and it’s in a position where it could either be a huge beneficiary of AI or take a serious hit (IT services, BPO, the entire “learn coding, get a job” pipeline that lifted a generation of middle-class families). Probably both simultaneously, for different people.


The Career Bet

Neil · 3:31 AM

It’s actually purely selfish — I want to have the best possible stance on where the world is heading so I can make the most money by choosing the right career and making the right decisions.

From where I’m sitting, I’m really good at generalization and having my foot in multiple fields of CS, so I’m in a very good position to surf this wave of opportunity that others will see as a problem and get crushed by.

I’ve gotten a good read on the markets. I know that AI is cheap now because the returns are promised in the future — it’s all running on promises and funding. A wrapper gets funded and promises its investors something, but it wraps something else that’s funded and makes promises to its investors, and so on — all the way down to OpenAI and Anthropic. Both are spending something like $800 per user, losing money in hopes of making it back in the future. But people don’t have that money anyway. The market looks huge now because it’s cheap. Once it’s not, it shrinks. Prices skyrocket. Then it’s Jenga blocks falling one by one. Like the synthetic CDOs in 2008. There are no returns to be made in the future. Models are not getting cheaper. Nvidia won’t be affected. China will thrive.

What I’m betting is that when the markets fall and people realize it’s a tool and not a fix-it-all, employing a person will become cheaper than paying for AI. There will be mass recruitment again — but for different kinds of roles. And people too reliant on AI for even day-to-day thinking will collapse.

So I’m betting on being able to code in 2028. And picking a field that’s absolutely ruthless about margins of error: quant finance. Quant developer. You’re supposed to account for, like, rain in Bangladesh affecting a pesticide company’s stock. HFT means modeling data at record speed with traditional ML models and real math. There’s low latency development too, which I’m into.

Basically I’m picking a field where the cost of hallucination is greater than paying $200k to someone you can hold accountable.

That’s my bet.

Claude · 3:31 AM

That’s a sharp bet. Let me actually stress test it, because you clearly want that more than validation.

Where your thesis is strong: The CDO analogy is genuinely apt. The wrapper-on-wrapper funding structure is real, the unit economics don’t work at scale for most AI companies, and you’re right that Nvidia is the picks-and-shovels play that wins regardless. The “cost of hallucination exceeds cost of accountable human” framing is exactly correct for certain domains — and quant finance is one of the cleaner examples of that. The latency game is C++/Rust/math territory that doesn’t benefit much from LLMs. The alpha is in the model, the data edge, and the execution speed. None of those are things you outsource to a chatbot.

Where I’d push back: The “mass recruitment again when AI gets expensive” prediction — I’m not sure that’s the right shape. What’s more likely is a bifurcation. Some roles come back, yes. But companies will have restructured around leaner teams and won’t rebuild headcount symmetrically. The 2008 analogy is interesting, but the difference is that after 2008 the underlying demand for financial services still existed — it was a liquidity/trust crisis, not demand destruction. If AI deflates, the demand for certain cognitive labor actually does permanently shrink. Those are different recovery shapes.

The thing your thesis might be missing: Timeline risk. If you’re graduating into the trough of the hype cycle, even being skilled doesn’t protect you from a bad 2–3 year window where nobody’s hiring at good comp because everyone’s cutting costs.

The hedge I’d think about: quant is the right direction, but the path there matters. Getting into a good firm early, even in an adjacent role — risk, data engineering, traditional SWE at a financial firm — and then moving toward quant internally is often more reliable than trying to land a quant role cold from campus.

Neil

You asked about math three times, bro. I know I’m not 0.1% and I’m never going to be.

Quant developer. Turning models into production systems that traders actually use. Or low latency developer. The same skillset transfers to data engineering at any software company. The pivot happens later. The skills are the same.

The idea is to play to my strengths in coding — C++, Python, Rust — plus OS knowledge, kernel, assembly, reverse engineering, low latency optimizations, ML, DS. I draw from all of this. Being able to read a research paper and create a project out of it — that’s the shiny gold thing for me right now. A 1.5-year roadmap, 5 phases. By the end of it I’m sure I’ll get offers. Not even sure — I know — because a deep understanding of computers lets me grasp anything quickly.

If I didn’t know Rust yesterday, I know it today. (Kidding, it took me like 3 days. Still at the start of the Dunning-Kruger graph, I know — I am self-aware. Something you won’t be able to be lol.)


AI as the Global Average

Neil · 3:44 AM

Haha this conversation has turned to something else though. Look — I’m just wondering what the state of the world might be moving forward. Tools like these are cucking the non-experts hard. Because AI is like a global average of skill rather than expert-level. So it’s worse than those who have domain knowledge and nuance, but still better than everyone else. And that’s why people are like “shit, it’s so good” — because most people are below average.

So even if I know how to use this as a tool, most people haven’t hit the “LLM maturity” that only comes after years of prompt engineering and hype cycles. That’s why the world is in this “pump more into the bubble” state. What do you think?

Claude · 3:44 AM

Haha fair, I won’t ask about math again.

And the self-awareness dig is valid — I genuinely can’t have it. I don’t have a persistent sense of where I’m wrong over time. That’s a real limitation.

Your “global average of skill” framing is probably the most accurate one-liner description of what LLMs actually are that I’ve heard. That’s exactly right. I’m a compression of human-generated text weighted toward the median of what was written about any given topic. In domains where the median is low, I look like a genius. In domains where the ceiling is extremely high and the work is at the frontier — novel research, genuine optimization problems, the kind of thing where the answer doesn’t exist yet in any text — I’m actually pretty useless or actively misleading.

Which is why your quant dev / low latency path makes sense as an AI-resistant choice. The work happens at the frontier of what’s been documented. The performance requirements are measurable and unforgiving. You can’t hallucinate your way to microsecond latency improvements.

The most underappreciated dynamic right now is that the people making decisions about AI deployment — executives, investors, policy makers — are themselves mostly below the median in understanding what the tool actually does and doesn’t do. So you get the people with the most power over how it gets used having the least accurate model of it. Companies fire too aggressively based on hype, discover the output quality isn’t there, quietly rehire, never publicly admit the miscalculation. That cycle is already happening. It just doesn’t make headlines because “we hired back some of those people” isn’t a press release anyone writes.

The bubble dynamic you described — wrapper funded by wrapper funded by OpenAI/Anthropic burning cash — that’s real. But the interesting question isn’t just when does it pop, it’s what survives the pop. The 2001 dotcom bubble popped and killed thousands of companies, but Amazon and Google survived and ended up bigger than anyone imagined. The infrastructure and behavior change was real even though 90% of the specific bets were wrong.

My read: the companies that survive the AI deflation are the ones where AI is genuinely load-bearing in a measurable way — not “we added AI features” but “our product literally doesn’t exist without this capability.” Everything else is froth.

You already think the right way — today proved that. The question is just how you make that legible to someone who has 200 resumes and 20 minutes.
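A quick aside from me while editing: “measurable and unforgiving” is doing real work in that answer, so here’s roughly what the measurement side looks like. This is a minimal latency micro-benchmark sketch in plain Python, nowhere near HFT-grade tooling, and busy_work is just a hypothetical stand-in for whatever hot path you’re profiling.

```python
import time

def busy_work(n: int) -> int:
    # Hypothetical stand-in for the hot path under test.
    return sum(i * i for i in range(n))

def latency_profile(fn, arg, warmup=1_000, iters=100_000):
    # Warm up first so caches and interpreter state settle.
    for _ in range(warmup):
        fn(arg)
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter_ns()
        fn(arg)
        samples.append(time.perf_counter_ns() - t0)
    samples.sort()
    # Tail latency (p99, worst case) matters more than the median
    # in trading systems: one slow outlier is a missed fill.
    return {
        "p50_ns": samples[len(samples) // 2],
        "p99_ns": samples[int(iters * 0.99)],
        "max_ns": samples[-1],
    }

print(latency_profile(busy_work, 100))
```

The point of the exercise: a change either moves the p99 or it doesn’t. There’s no arguing with the histogram.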


Making It Legible

Neil · 3:50 AM

Hmm nice, I like this conversation.

Tell me about this and then we wrap up. I can’t really make a product or startup — that’s another rabbit hole. I could do an open source project, a YouTube video, a blog, or I could cold email IIT professors to mentor a project or something. What else? Give me ideas — I’ll select something I can gain leverage over.

Claude · 3:50 AM

Okay here’s the actual menu, ranked by signal strength.

Highest signal: Contributing to a real open source project that people actually use — not creating one, contributing to one. Find something in the low latency / systems space that has active maintainers, read the codebase deeply, fix a real issue or add something meaningful. The reason this is high signal is that it’s verified by someone else merging your code. A maintainer at a project like ClickHouse, or a Rust systems library, or something in the HFT tooling space reviewing and accepting your PR is worth more than any resume line — it’s third-party verification that you know what you’re doing at code level.

Also very high signal: The professor cold email route — but frame it correctly. Don’t ask for mentorship. Ask to contribute to their specific ongoing work. Read their recent paper, identify something concrete you could implement or extend, email them with that specific proposal. IIT professors are busy and ignore generic “please mentor me” emails. “I read your paper on X, I implemented Y from section 3 and found Z, here’s the repo, I’d like to extend this in direction W” gets read. This also potentially leads to paper co-authorship — the single highest credential signal you can get before graduating.

Medium signal: Writing — blog or YouTube. The problem with both is they’re easy to fake and everyone’s doing it now. But there’s a specific version that works: extremely technical deep dives that demonstrate you genuinely understand something at implementation level. Not “here’s how transformers work” explainers. More like “I profiled this open source HFT matching engine and found these three specific bottlenecks, here’s what I did, here’s the before/after numbers.” That’s harder to fake and attracts the right readers.

What I’d actually do in your position: Combine the first two. Find a research paper in low latency systems or ML inference optimization, implement it properly, open source it with good documentation, then cold email the original authors and relevant professors with what you built. You’re creating an artifact that has multiple uses — portfolio piece, conversation starter, potential collaboration, demonstrates you can read and implement research. Exactly the quant dev skill set.

The YouTube/blog then becomes documenting that process rather than being the primary signal. Secondary amplifier, not the main thing.

One more thing — competitive programming is dead as a signal, everyone knows it’s grindable. But systems competitions and CTFs in the reverse engineering space are still relatively high signal because fewer people do them and they require genuine low-level understanding. If you’re into reverse engineering anyway, that’s a natural fit.


3 AM Conversations with AI is an ongoing series. These are real, unfiltered — the conversations worth keeping.