Their CEO said he liked that people are saying please and thank you. Imo it’s because he thinks it helps their brand when people personify LLMs: they’ll be more comfortable using them, trust them more, etc.
Additionally, because of how LLMs work, basically taking in data, contextualizing the user’s input, and statistically predicting the output one piece at a time (my oversimplified understanding) - if being polite yields better responses in real life (which it does), then it’ll probably yield better LLM output too. This effect has been documented.
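To make that "statistically predicting the output" bit concrete, here’s a minimal sketch of the loop, using GPT-2 through Hugging Face transformers purely as a stand-in (the model choice and prompt are just examples, not anything the CEO’s company actually runs). The point is that the polite wording sits in the prompt and conditions every next-token prediction, the same way tone shapes replies from people.

```python
# Minimal autoregressive sketch: the model conditions on the whole prompt
# (polite wording included) and repeatedly picks a likely next token.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Could you please explain how rainbows form? Thank you!"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(40):
    with torch.no_grad():
        # Probability scores for the next token, given everything so far
        logits = model(input_ids).logits[:, -1, :]
    # Greedy choice for simplicity; real systems sample from the distribution
    next_id = torch.argmax(logits, dim=-1, keepdim=True)
    input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Change the prompt to a rude version and you’re conditioning on different tokens, so you can get a noticeably different distribution of replies.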
I also feel like AI is already taking over the internet, so we might as well train it to be nice and polite. Not only does it make the inevitable AI content nicer to read, it helps with sorting out the actual assholes.
AI isn’t trained by input from its users.
They tried that with Tay, and it didn’t work out so well