Love the 5 tips! My default now is basically: if AI gives me slop, it's not the AI's fault. I'm the one accountable, and I need to iterate to make it useful for whatever I'm doing.
-Paras
I’ve seen this play out a lot in teams. When the inputs are vague, AI fills in the gaps with something that looks right but isn’t actually useful.
In real work the difference usually comes from context: constraints, data, and a clear sense of what the output is actually for. Without that, the output ends up generic no matter how good the model is.
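To make that concrete, here's a minimal sketch of what "adding context" can look like in practice. The `build_prompt` helper and its fields (purpose, constraints, data) are illustrative assumptions, not anything from this thread; the point is just that each section removes room for the model to fill gaps with generic-but-plausible content.

```python
def build_prompt(task, purpose=None, constraints=None, data=None):
    """Assemble a context-rich prompt instead of a vague one-liner.

    Every optional section narrows the model's room to guess:
    purpose says what the output is for, constraints bound its
    shape, and data grounds it in specifics.
    """
    parts = [f"Task: {task}"]
    if purpose:
        parts.append(f"Purpose: {purpose}")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if data:
        parts.append(f"Data to ground the answer:\n{data}")
    return "\n\n".join(parts)

# A vague request vs. the same request with context attached.
vague = build_prompt("Write a product update email")
specific = build_prompt(
    "Write a product update email",
    purpose="Convince trial users to upgrade before the price change",
    constraints=["under 120 words", "no marketing jargon",
                 "one clear call to action"],
    data="Trial conversions dropped 12% last quarter; price rises March 1.",
)
```

Feeding `vague` to any model invites generic filler; `specific` at least forces the output to answer to a purpose, a word budget, and real numbers.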
That’s such a novel approach to it. Still, doesn’t AI give generic answers because it wants to produce generic answers?
Great question. Not really. We have done extensive testing: if you provide the right level of specificity, it does produce highly specific results.
In fact, even now, it can be highly specific but wrong (so my title is actually misleading), because it makes the wrong assumptions.
Does it often give answers that are close but wrong? Someone suggested it does that a lot.
It depends on whether the context it missed is critical or not.