Three AI tools I actually kept
I tried roughly twenty. These are the three still open in tabs on my work laptop.
I have a bad habit of signing up for every AI tool that gets a mention on LinkedIn. Most of them last about a week before I forget to open them. These three are still in my browser.
1. GitHub Copilot (obviously)
I know, I know. But it earns its place. The chat interface in VS Code has replaced probably 60% of my Stack Overflow visits. The inline completions are useful for Bicep and ARM templates where I’m always looking up property names.
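To give a sense of where the completions pay off, here's the kind of Bicep resource I mean. This is an illustrative sketch, not a real deployment: the resource name is made up, and the API version is whatever Copilot suggests on the day, so check it against current docs.

```bicep
// Illustrative only: a bare-bones storage account. The value of the
// completions is the nested property names (sku.name, kind, accessTier,
// minimumTlsVersion) that I'd otherwise be looking up every time.
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stexampledemo001'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    accessTier: 'Hot'
    minimumTlsVersion: 'TLS1_2'
  }
}
```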
The thing I use most: /explain on a block of code I’ve inherited. Not because I can’t read code, but because it’s faster than tracing through it manually.
2. Perplexity
For technical research where I need sources. Unlike a chat model asked directly, Perplexity shows you where it's getting the information from, which matters when I'm looking at something security-related and actually need to verify the answer.
The Azure-specific stuff is sometimes out of date — the model doesn’t always know about GA vs preview status for services. But for general architecture questions it’s solid.
3. Notion AI
Specifically for meeting notes. I paste in a rough transcript or my bullet points and ask it to structure them. The output isn’t perfect, but it’s good enough that I’m not spending 20 minutes after every meeting cleaning up notes.
What didn’t survive
A few things I tried and dropped: an AI terminal assistant (too slow, too often wrong), a browser extension that “summarised” pages (added noise, not signal), and two writing assistants that couldn’t handle technical content without hallucinating CLI flags.
The pattern with everything that didn’t survive: it was useful in the demo, annoying in practice. The three above are the opposite — not impressive in demos, genuinely useful once you’ve built them into the workflow.