Anti-AI technologists only:
If and when the AI bubble finally collapses due to financial strain, and "LLM" becomes synonymous with a handful of open-source, open-weight, ethically trained models that run on your modest local hardware, would you use them?
This is a world without OpenAI, Anthropic, and those massive AI data centres. Maybe some smaller shops spin up no-code wrapper tools, but nothing like the surveillance capital shit we have today.
Basically, if the ethics, ecology, and extractive capitalism bits go away, would you consider working with these tools?
#poll
- No. (42%, 48 votes)
- I'd consider trying it. (28%, 32 votes)
- It'd be another open-source library to me. (8%, 10 votes)
- I'd actively build with it. (8%, 9 votes)
- Other, comment. (11%, 13 votes)
silverwizard
•@Hypolite Petovan @May Likes Toronto https://www.commitstrip.com/en/2016/08/25/a-very-comprehensive-and-precise-spec/
I think about this a lot
Hypolite Petovan
•@silverwizard @May Likes Toronto I've seen meta discussions around using Claude that involve creating a structured grammar to condense the context and minimize the token expense of a given query.
We're reinventing code to interface with a nondeterministic commercial entity, and I'm not happy about it. Even when you strip away the "commercial" aspect, you're still left with a nondeterministic system: it works wonders for bounded problems, but it's terrible when applied to open-ended questions, and it can't balk when its internal confidence is too low. None of that depends on the current unethical externalities.
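[For readers unfamiliar with the context-condensing technique mentioned above, here is a minimal illustrative sketch. The grammar, field names, and example text are hypothetical, and token counts are approximated by whitespace splitting rather than a real tokenizer.]

```python
# Hypothetical sketch: condensing verbose prose context into a terse
# key:value grammar before sending it to a model, to cut token usage.
# NOTE: whitespace splitting is a crude proxy for real tokenization.

verbose = (
    "The user is working in a Python 3.11 project. "
    "The project uses pytest for testing. "
    "The user wants the function to be refactored for readability."
)

# The same facts expressed in an invented structured grammar.
condensed = "lang:py3.11 test:pytest task:refactor-for-readability"

def approx_tokens(text: str) -> int:
    """Rough token-count proxy: count whitespace-separated chunks."""
    return len(text.split())

assert approx_tokens(condensed) < approx_tokens(verbose)
```

The trade-off Hypolite objects to is visible even in this toy: the condensed form is cheaper, but you've effectively invented a little language the model must interpret nondeterministically.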