6 Comments
Paras Doshi

Love the 5 tips! My default now is basically: if AI gives me slop, it's not AI's fault — I'm the one accountable, and I need to iterate to make it useful for whatever I'm doing.

-Paras

Om Prakash Pant

I’ve seen this play out a lot in teams. When the inputs are vague, AI fills in the gaps with something that looks right but isn’t really useful.

In real work, the difference usually comes from context: constraints, data, and a clear sense of what the output is actually for. Without that, the output ends up generic no matter how good the model is.

ralph

That’s such a novel approach to it. Still, doesn’t AI give generic answers because it tends to produce generic answers?

Chandra Narayanan

Great question. Not really. We have done extensive testing: if you increase the amount of the right level of specificity, it does produce highly specific results.

In fact, even now it can be highly specific but wrong (so my title is actually misleading), because it makes the wrong assumptions.

ralph

Does it often give answers that are close but wrong? Someone suggested it does that a lot.

Chandra Narayanan

It depends on whether the context it missed is critical or not.