Generally speaking, when a sweeping change is going to take effect, the onus of proof about why it should be done falls on those advocating for the change, not on those defending the existing standard. That is, "How is this change useful?" vs. "Why wouldn't this change be useful?" The latter assumes the change is good and asks for proof which cannot exist yet, while the former focuses the questioning on the underlying value of the change prior to implementation.
This particular change is a complex one. It clearly has w*stern politics crammed into its carcass, which are tedious to read, and doubly so to speak on. For that reason, I'll try to avoid that topic. Besides, I tend to assume your question is asked in good faith, so I would simply ask:
Can you give an example where its making a "good" suggestion is helpful?
Under optimal circumstances, whether a suggestion "helps" the writer would be subjective, right? Under suboptimal conditions, the suggestion would be unwanted, unneeded, or wrong.
> Can you give an example where its making a "good" suggestion is helpful?
Sure:
Upon writing, "That guy is a loser," a response from the computer: "You're using using emotionally-loaded and ambiguous language. Consider revising to provide constructive criticism."
Ideally, the feedback a computer provides would be similar in scope and wisdom to the feedback an experienced human editor would provide.
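To make that concrete, here's a toy sketch of the kind of rule-based check such a feature might run. The phrase list, wording, and function name are entirely my own invention; a real product would presumably use trained models rather than a hard-coded list:

    # Toy sketch (hypothetical): a hard-coded phrase list standing in
    # for whatever model a real writing assistant actually uses.
    LOADED_PHRASES = {
        "loser": "Consider revising to provide constructive criticism.",
    }

    def suggest(text):
        """Return advisory notes for emotionally loaded words in text."""
        notes = []
        for word, advice in LOADED_PHRASES.items():
            if word in text.lower():
                notes.append("'%s' is emotionally loaded and ambiguous. %s"
                             % (word, advice))
        return notes

    print(suggest("That guy is a loser."))
    # ["'loser' is emotionally loaded and ambiguous. Consider revising
    #  to provide constructive criticism."]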
The projected cultural judgements are plain in your comment:
> "You're using using emotionally-loaded and ambiguous language
What's wrong with emotionally loaded content? Are you afraid of feelings? That statement doesn't seem ambiguous at all to me - but even if it were ambiguous, I believe ambiguity is sometimes appropriate.
I'm partially with you - I don't often utter things like "That guy is a loser" either. But there are still plenty of contexts in which I'd happily write those words. For example:
- When writing dialogue in fiction
- When supporting a friend with an abusive partner, to let them know emotionally that I'm on their side in the conflict
- In conversations like this one
But to go deeper, the language I use is an expression of me. There are very few things as intimately tied to our identity and world view as our choice of language. I can't think of many things as dehumanising to an adult as taking away their choice of how they express themselves.
Imagine if, when writing about the war in Ukraine, the suggestion was: "Using the word 'war' is inflammatory to some audiences. Have you considered 'military exercise' instead?" Or: "Using the word 'they' is ambiguous language. Have you considered using he/she instead?" It doesn't feel as good when you don't agree politically with the suggestion.
To this day people cite Newspeak as one of the most chilling aspects of George Orwell's 1984. This whole thing spooks me for the same reason.
There are undoubtedly times when this sort of feedback is undesirable. If you're writing fiction, or poetry, or just want to flame somebody (damn the consequences), these sorts of prompts just get in the way. Similarly, if you're writing math equations, there's not much use in a spell or grammar check.
But as others have said, writing feedback -- automated or otherwise -- is a resource. Sometimes it's helpful, particularly in the professional context. Other times, it isn't. We still have the freedom to choose when to use it and when not to. And I see no harm in having the tools available to help when needed.
I hear what you're saying. I'm sure people who work at Google appreciate an AI making sure they don't accidentally post some wrong-speak to an internal mailing list, especially when doing so might get them fired.
I just think politically controversial writing suggestions should be opt-in. We don't all work at Google, and not all documents are corporate memos. It's extremely important to a liberal society that people are free to think and express heretical thoughts without feeling like we're being watched and judged for doing so.
Getting political judgements ("suggestions") from an omnipresent AI looking over my shoulder while I write sounds dystopian. That sort of technology skeeves me out. I don't trust the sort of people who think this should be turned on by default with access to my private notes. And you shouldn't either.
I'd rather not outsource my morality to the arbitration of an algorithm, regardless of its provenance, even a company that claims to "not be evil." This honestly seems like a particularly flagrant application of this feature; we have enough human interaction mediated by coercive tech, and the way we communicate personal beliefs about one another shouldn't be the next pillar to fall. That it's "just a suggestion", as others have argued to excuse it, belies how strongly its UI implies authoritativeness -- users reflexively view the squiggly underline as a sign that something is unambiguously wrong with what they've written.
I forgot to say in my previous reply: "Thank you for your levelheaded response. I know these types of discussions can get out of hand, so thank you for approaching this one without the baggage that sometimes comes with the territory."
Saying both is fair. I generally assume there is a primary intended target, but both is workable too. Your assessment is that both parties benefit from the change?