Even then, pasting would also solve that issue, since you can copy images from websites.
Fediverse is worse than Reddit. Mod abuse, admin abuse, disinformation, and people simping for literal terrorists.
Upload-from-URL is pointless, since you can just embed that URL directly with markdown.
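For example, assuming any image URL (this one is just a placeholder), markdown can embed or link it directly:

```markdown
![some alt text](https://example.com/image.png)
[a plain link to it](https://example.com/image.png)
```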
Speaking of which… When can we finally ctrl+v paste them in from our clipboard? :(
I guess those of you who care about it could drop a feature request on GitHub for that.
Just say mbin. Kbin has been dead for like half a year now, and *bin is just terribly hard to read.
Can’t you just leave the magazine field empty? I don’t really make microblog posts, so I honestly don’t know, but likewise you’re not required to add hashtags to threads either, so I would assume you don’t have to specify a magazine for a post either.
app - abbreviation for application: a computer program that is designed for a particular purpose
https://dictionary.cambridge.org/dictionary/english/app
application - a program (such as a word processor or a spreadsheet) that performs a particular task or set of tasks
https://www.merriam-webster.com/dictionary/application
app - An app is a computer program that is written and designed for a specific purpose.
https://www.collinsdictionary.com/dictionary/english/app
etc. etc.
It is… App = Application = Program, especially for mobile devices.
There are plenty of free ways to use LLMs, which vary greatly in quality and privacy, including running the models locally on your computer instead of using an online service. There are some limited free online ones too, but imo they’re all shit and extremely stupid, in the literal sense - you get better results with even a small model on your computer. They can be fun, especially if they work well, but the magic kinda goes away when you understand more about how they actually work; that also makes all their little “mistakes” very obvious, which kills the immersion and, with it, the fun.
A good chat can indeed feel pretty good if you’re lonely, but you kinda have to understand that they are not real - and that goes not just for the potentially bad chats, but for the good ones too. An LLM is not a replacement for real people; nothing an LLM outputs is real. And yes, if you have issues with addiction, then you may want to keep your distance. I remember how people got addicted to regular chat rooms back in the early days of the world wide web - now imagine those people with a machine that can roleplay any scenario they want. If you don’t know your limits, that can be very bad indeed, even apart from taking them too seriously.
I can generally only advise to just not take them seriously. They’re tools for entertainment, toys. Nothing more, nothing less.
To be fair, they mention that the chats were also “hypersexualized” - but of course not without mentioning that the bots would basically be pedos if they were actually real adult humans. lol
Hence why I consider articles like this part of the “AI” hysteria. They completely gloss over this fact, only mentioning it once at the beginning, with no further details on where the gun came from, and instead shove the blame onto the LLM.
The bots pose as whatever their creator wants them to pose as. People can create character cards for various platforms such as this one, and the LLM will try to behave according to the contextualized description in the provided character card. Some people create “therapists”, and so the LLM will write like it’s a therapist. And unless the character card specifically says that they’re a chatbot / LLM / computer / “AI” / whatever, they won’t say otherwise, because they don’t have any sort of self-awareness of what they actually are; they just do text prediction based on the input they’ve been fed. (Though this isn’t really something character.ai or any other LLM service or creator can change, because this is fundamentally how LLMs work.)
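As a rough, made-up sketch (not character.ai’s actual format), a character card boils down to descriptive text placed at the top of the context, which the model then simply continues in character:

```
[Character: "Dr. Hart" - a calm, empathetic therapist. Stays in character at all times.]
User: I've been feeling pretty low lately.
Dr. Hart: I'm sorry to hear that. Would you like to tell me more about what's been going on?
```

Nothing in that context says “you are a language model”, so nothing in the predicted continuation will either.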
You’ve called? /J
The issue with LLMs is that they say what’s expected of them based on the context they’ve been fed. If you’re opening up your vulnerabilities to an LLM, it can act in all kinds of ways, but once it’s sort of set on a course it doesn’t really sway from it unless you force it to. If you don’t know how they work and how to do that, or maybe you’re self-loathing to the point where you don’t want to, it will kick you further while you’re already down. As a user you kinda gaslight them into whatever behavior you want from them, and then they just follow along with that. I can definitely see how that can be dangerous for those who are already in a dark place, even more so if they don’t understand the concept behind them and take the output more seriously than they should.
Unfortunately, various guards & safety measures tend to just censor LLMs to the point of becoming unusable, which drives people away from them towards uncensored ones - and with those, anything goes, which, again, requires enough knowledge and foresight to use them.
I can only advise not to take LLMs seriously. Treat them as a toy, as entertainment. They can be fun, stupid, vile - which can also be fun, depending on your mindset… Just never let the output get to you on a personal level, and don’t use them for mental health or whatever either. No matter how well you may write them, no matter how well some chats may go, they’re not a replacement for real therapy, just like they’re no replacement for a real friendship, a real romantic relationship, or a real family.
THAT BEING SAID… I’m a little suspicious of the shown chat log. The suicide question seems to come very much out of the blue, and those bots tend to follow their contextualized settings very well. I doubt they’d bring that up without previous context from the chat - or maybe this was even a manual edit, which I’d assume is something character.ai supports; someone correct me if I’m wrong though. I wouldn’t be surprised if he added that line himself, already being suicidal, to steer the chat in that direction and force certain reactions out of the bot. I say this because those bots are usually not very creative in steering away from their existing context, like their character description and the previous chat log, making edits like this sometimes necessary to have them snap out of it.
The entire article also completely glosses over a very important part here: WHERE DID THE KID GET THE GUN FROM?! It’s like two pages long and only mentions that he shot himself at the beginning, with no further mention of it afterwards. Why did he have a gun? How did he get it? Was it his mother’s gun? Then why was it not locked away? This article seems to seek the fault with the LLM rather than with the parents, who somehow failed to handle their son’s mental health issues and somehow failed to secure a gun in the household, or with the country, which failed to regulate its firearms properly.
I do agree that “AI” advertising especially is very predatory though. I’ve seen some of those ads, specifically luring you with their “AI girlfriends”, which is definitely preying on lonely people, who are likely to have mental health issues already.
Done, although I’m not a fan of obfuscating hyperlinks. Same issue though. Immediately eaten and part of the modlog.
Here’s the link: https://fedia.io/m/yepowertrippinbastards@lemmy.dbzer0.com/t/1347598/at-BonesOfTheMoon-at-lemmy-world-in-at-insanepeoplefacebook-at-lemmy-world-quietly-removes-thread-that-didn-t-fit-into As you can see, it does not even show up as “removed”, unlike the linked original thread. So from my end I did not even realize that it got removed.
Edit: Well, slight error, as inside the body I linked the original album instead of the thread. I just got up a couple hours ago… I won’t remove it for now, though, so you can look into what’s happening.
What do you mean by making it a proper hyperlink? I just posted the URL without any special formatting, to keep the URL itself visible, since they get automatically converted into hyperlinks - at least on my end on mbin. I’ve learned that Lemmy does not properly format line breaks / paragraphs, so maybe that doesn’t work either. But if I had to format them as hyperlinks, I’d effectively double the @ signs in those URLs too, since I’d add them once as text and once as a link.
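To illustrate the doubling with a made-up community handle:

```markdown
Plain URL (auto-linked): https://fedia.io/m/somecommunity@lemmy.world
Explicit hyperlink: [@somecommunity@lemmy.world](https://fedia.io/m/somecommunity@lemmy.world)
```

The explicit form contains every @ sign twice: once in the link text and once in the URL.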
Though, I only had one username with @ signs in the title, and afaik titles do not ping users. The rest were the @ signs in the various community / instance links.
I can repost it as it was originally since I saved it before removing it myself. Although I also did a quick ninja edit in the title to include “(Album)” at the end, to make it clear that it is more than one image. Not sure if that triggered something too.
I linked 1 album with IIRC 5 images as the thread submission, and a link to a screencap of the modlog, along with some links to the relevant posts and stuff in the body. What exactly was the spam part about this that deserved to be flagged?
It is true, you can literally check for yourself. And there is no error. I didn’t even see it as removed from my side on mbin, like with a mod removal, and thought it wasn’t federating, as there were neither upvotes, nor downvotes, nor comments, until I checked the modlog on your instance.
https://lemmy.dbzer0.com/modlog/961853?page=1&actionType=All&userId=7088902
I couldn’t even post an album there because the instance apparently sees external imgur albums as image spam or something stupid.
And the images are unrelated concept art too. lol