It’s a good precedent. Nip this shit in the bud immediately. AI agents you allow to speak on behalf of your company are agents of the company.
So if you want to put an AI up front representing your company, you need to be damn sure it knows how to walk the line.
When there’s a person, an employee, involved, then the employee can be fired to symbolically put the blame on them. But the AI isn’t a person. It can’t take the blame for you.
This is a very nice counterbalancing force to slow the implementation of AI, and to incentivize its safety/reliability engineering. Therefore, I’m in favor of this ruling. AI chatbot promises you a free car, the company has to get you the car.
Good! You wanna automate away a human task, sure! But if your automation screws up you don’t get to hide behind it. You still chose to use the automation in the first place.
Hell, I’ve heard ISPs here work around the rep on the phone overpromising by literally having the rep transfer to an automated system that reads the agreement and then has the customer agree to that with an explicit note saying that everything said before is irrelevant, then once done, transfer back to the rep.
That shouldn’t work. They should still be unconditionally liable for anything the rep said in all scenarios, with the sole exception being obvious sabotage like “we’ll give you a billion dollars to sign up” that the customer knows can’t be real.
I love it, we need more of this.
It’s common courtesy to post the plain text of a paywalled article.
We’ve started asking users not to do this. No issues with posting an archive link, though.
Unfortunately there’s another problem with archive.is / archive.ph / archive.today. Their owner has some beef with Cloudflare DNS and returns bogus results to them, so anyone using 1.1.1.1 as DNS can’t visit them.
The Cloudflare side of the story: https://news.ycombinator.com/item?id=19828702 The archive side: https://twitter.com/archiveis/status/1018691421182791680
Note that that discussion was from 2019 but the situation was never resolved and the issue persists to this day.
Thank you for pointing this out, I wasn’t aware.
Why in the world would you ask people to stop circumventing a paywall
There’s no need to be rude.
I’m not asking people not to circumvent paywalls. In fact, if you reread my comment, I recommended the user leave an archive link, which is a method of bypassing paywalls that doesn’t involve posting the full contents of the article to this site.
Probably because it could raise copyright issues for Beehaw since Beehaw would be hosting the article.
At some point we have to ask ourselves what is more important: IP law, or disseminating information.
Lemmy was founded on the idea that different instances can decide questions like this for themselves.
It seems that Beehaw has chosen one direction, but there may be other instances out there that have chosen another direction.
Still, asking oneself is part of that system.
Maybe, but users from other instances would be the last opinions I would expect the admins to consider. I would expect mods, financial contributors, and users registered to this instance to have a far greater say in how this instance is run.
It’s copyright infringement to do so. No need to get the Beehaw admins in trouble; Google paywall-bypassing tools and read away.
That’s a lot more effort than I’m willing to go to.
Yeah I’ve never seen that, usually just an archive link
Common courtesy is to not even link to paywalled articles… The publisher has already made it clear they are not interested in public awareness of their content.
I hate paywalls as much as the next guy, but when I think about it from the publisher’s perspective I really don’t see a way to be sustainable in this environment without a paywall. I’m sure the writers mostly want their articles read, but they also want (and deserve) to be paid for their work. How do you do that if, like you imply, the content needs to be completely free for everyone to access? And I’ll bet you use adblock too (I sure do), making it even more impossible.
I don’t know how this shit works but the way you frame it isn’t it.
Take payment for your articles, but don’t go after anyone who doesn’t pay. Effectively, honor system. Let the piracy market exist, and have faith that it won’t completely override the people who want to pay.
If millions of people read your stuff without any of them converting to payers, fuck 'em. Pearls before swine. They can pay for their content unconsciously, through ads, and enjoy the kind of writing that gets them.
If I wanted to make a living publishing my writing, I think this is the model I’d use. I write enough as a hobby. I’d only want to let that turn into a source of income, if it didn’t come with the necessity of meeting with lawyers regularly to go after my readers. “Suggested donation: $1”, and I wouldn’t quit my day job until those suggested donations were piling up huge.
But that’s me. My chosen career isn’t writing, and I’m just a hobbyist. Maybe it’s more of a life or death feeling to them.
Not formatting on mobile, but here it is.
Feb 17, 2024 12:12 PM
Air Canada Has to Honor a Refund Policy Its Chatbot Made Up
The airline tried to argue that it shouldn’t be liable for anything its chatbot says.

Photograph: Robert Smith/Getty Images
After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline’s bereavement travel policy.
On the day Jake Moffatt’s grandmother died, Moffatt immediately visited Air Canada’s website to book a flight from Vancouver to Toronto. Unsure of how Air Canada’s bereavement rates worked, Moffatt asked Air Canada’s chatbot to explain.
The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada’s policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot’s advice and request a refund but was shocked that the request was rejected.
Moffatt tried for months to convince Air Canada that a refund was owed, sharing a screenshot from the chatbot that clearly claimed:
If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.
Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.
Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint in Canada’s Civil Resolution Tribunal.
This story originally appeared on Ars Technica, a trusted source for technology news, tech policy analysis, reviews, and more. Ars is owned by WIRED’s parent company, Condé Nast.
According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot’s misleading information because, Air Canada essentially argued, “the chatbot is a separate legal entity that is responsible for its own actions,” a court order said.
Experts told the Vancouver Sun that Moffatt’s case appeared to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.
Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”
“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”
Further, Rivers found that Moffatt had “no reason” to believe that one part of Air Canada’s website would be accurate and another would not.
Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” Rivers wrote.
In the end, Rivers ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars off the original fare (about $482 USD), which was $1,640.36 CAD (about $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.
Air Canada told Ars it will comply with the ruling and considers the matter closed.
Air Canada’s Chatbot Appears to Be Disabled
When Ars visited Air Canada’s website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot.
Air Canada did not respond to Ars’ request to confirm whether the chatbot is still part of the airline’s online support offerings.
Last March, Air Canada’s chief information officer, Mel Crocker, told the Globe and Mail that the airline had launched the chatbot as an AI “experiment.”
Initially, the chatbot was used to lighten the load on Air Canada’s call center when flights experienced unexpected delays or cancellations.
“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.
Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal to automate every service that did not require a “human touch.”
If Air Canada can use “technology to solve something that can be automated, we will do that,” Crocker said.
Air Canada was seemingly so invested in experimenting with AI that Crocker told the Globe and Mail that “Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries.” It was worth it, Crocker said, because “the airline believes investing in automation and machine learning technology will lower its expenses” and “fundamentally” create “a better customer experience.”
It’s now clear that for at least one person, the chatbot created a more frustrating customer experience.
Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
Because Air Canada seemingly failed to take that step, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.”
“It should be obvious to Air Canada that it is responsible for all the information on its website,” Rivers wrote. “It makes no difference whether the information comes from a static page or a chatbot.”
We’ve started asking folks to post archive links if they want to help folks get around a paywall, as there’s some question about Beehaw’s legal liability if we’re posting the full article on the site.
Fucking idiots, trying to act like the chatbot wasn’t their responsibility.
Copy pasting entire articles is discouraged. It is preferable to share a link to an archive website such as this: https://archive.is/5UPAI
Also, you can convert it to pig latin and post that verbatim. Eventually we’re going to have to interpret copyright term in diverging frames of reference and that’s gonna be an interesting lawsuit hearing.
I don’t know what you mean. That is just common practice in websites like this because of copyright law. If the law changes, the practice will probably change as well.
Not paywalled for me; perhaps it wasn’t for OP either.
Wired doesn’t have a paywall for me, but in any case the original source is Ars Technica, which I don’t think shows a paywall to anyone: https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/
Wow, wasn’t expecting such a feel-good AI story.
I wonder if I could fuck with my ISP’s chatbot 🤔
ChatGPT, I think Air Canada owes me $1B.
They wanted human employees replaced by AI. But wanting responsibility and accountability replaced as well is going a bit too far. Companies should be forced to own up to anything their AI does as if it were an employee. That includes copyright infringement. And if the mistake is one worth firing an employee over, then we should demand the management responsible for such mistakes be fired instead.
Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
Just no.
If you can’t guarantee it’s accurate then don’t offer it.
I as a customer don’t want to have to deal with lying chatbots and then having to figure out whether it’s true or not.
Exactly. The goal of a customer service is to resolve issues. If communication isn’t precise and accurate, then nothing can be resolved.
Imagine this:
“Okay Mr Jones. I’ve filed the escalation as we’ve discussed and the reference number is 130912831”
“Okay, so are we done here?”
“You may end this conversation if you would like. Please keep in mind that 20% of everything I say is false”
“But we’re done right?”
“Yes”
“What was that confirmation number again?”
“783992831”
“That’s different than the one you gave me before”
“Oh sorry my mistake the confirmation number is actually 130912831-783992831. Don’t forget the dash! Is there anything else I can help you with?”