Yue Song

How I try new AI tools in real work

Feb 27, 2026

A simple way to check whether a new AI tool is actually useful in real tasks.

New AI tools appear almost every week.

Most demos look impressive, but after a few minutes of trying a tool, it is still unclear whether it is actually useful in real work.

Over time I found a simple question that helps me decide quickly:

Where in my current workflow could this tool realistically save me 10–20 minutes this week?

That question forces me to stop thinking about demos and start thinking about real tasks.

For the test to be useful, I stick to a few simple constraints.

  1. The task must be real, not a toy example.
  2. The change must be easy to undo if the tool fails.
  3. I should know what success looks like before I try it.

“Success” is usually something small and practical, for example:

  • It produces a draft I can review in under five minutes.
  • It removes a step that normally requires switching between several tools.
  • It helps answer a real question faster without lowering quality.

Once I have that, the process is simple.

  1. Pick one real task from my current queue.
  2. Write down how I would normally do it.
  3. Try a slightly different path using the new AI tool.
  4. Compare the two approaches.

If the AI version is clearly worse, I stop immediately and write down why.

That note is often more valuable than the test itself, because it tells me where the tool failed — understanding the task, handling context, reliability, or just awkward workflow design.

If it looks promising, I keep using it for similar tasks for about a week.

Only after that do I decide whether the tool is worth keeping, adjusting, or dropping.

This site is where I keep those notes.

Not as formal experiments, but simply as a record of what I tried, what worked, and what didn’t.

If you'd like to follow what I'm learning about AI tools and workflows, you can subscribe here → Subscribe to my notes