At the end of the day, I'm realizing that more and more of my job is spotting the areas where people have subtly different understandings of a given thing, especially around policy, process, or other "I thought Team X wanted me to do The Thing like this, but apparently I'm supposed to do it like that?" situations.

GenAI just doesn't help with this. If anything, that work of spotting the subtle things that require clarification, and then pushing as needed for people to spell out what they actually want or need, is the antithesis of how genAI functions.

So, probably a big part of what makes me so frustrated about genAI adoption is this: it's baked into its underlying, fundamental architecture that it can and will introduce subtle incorrectness, or just ambiguity, when I've come to realize that so much of my work is exactly spotting and clarifying those subtle issues and ambiguities before they result in wasted effort.
