I was saying recently that "having a human in the loop" is the less egregious way to use Large Language Model systems, but the way the technology is actually implemented makes even this use case fraught.
#AI #LLM
There's a magic trick that can make any #AI project successful. When the LLM's success rate is too low, just say "it's OK, we will have a human in the loop" and ship it anyway. There's just one problem with this magic trick: most #LLM implementations don't have the thoughtful and intentional guard rails that would make this a real human-in-the-loop system.
Instead, the loop becomes a vicious cycle of deception and deskilling, losing all the benefits of oversight.
https://productpicnic.beehiiv.com/p/human-in-the-loop-is-a-thought-terminating-cliche

Without thoughtful guard rails, asking people to clean up after machines doesn't make machines better — it makes people worse.
The Product Picnic