I think the majority of us also don’t want to play tech support.
See, that’s another “no”, but then I read equally convincing “yes” posts, and I just don’t care enough to do my own research, so I have Schrödinger’s lightning network ;)
But anyway, it would have to be mentioned in a serious article.
Not even a mention of lightning? I have no idea if it works, as I’ve been hearing both “yes” and “no” for several years, but writing such an article without mentioning what would at least theoretically be the solution just seems bad.
I know what you mean, but FWIW: you probably mean “move fast and break things”. “Fail fast” is usually about not hiding or carrying along potentially bad errors, and instead failing fast as soon as you know there’s an issue. It’s an important tool for reliability.
An unrealistic example: better to fail fast and not start the car at all when there are abnormal voltage fluctuations, than to explode while driving ;)
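A minimal sketch of what I mean, in Python (the function and voltage range are made up for illustration): check preconditions up front and refuse to continue, instead of dragging a bad value along until something worse happens.

```python
# Made-up "fail fast" example: reject abnormal input immediately
# instead of letting it propagate into later, more dangerous steps.
def start_engine(battery_voltage: float) -> None:
    if not 11.5 <= battery_voltage <= 14.8:  # assumed sane range for a 12 V system
        raise ValueError(f"Abnormal battery voltage: {battery_voltage} V, refusing to start")
    print("Engine started")

start_engine(12.6)  # fine

try:
    start_engine(3.0)  # fails fast here, long before anything can explode mid-drive
except ValueError as err:
    print(err)
```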
Which is why it really sucks. Now people remember that number, keep repeating it, and essentially he has become a fake news peddler. Good job, Al.
Buying digital albums works just as well. No need to go physical.
I doubt it pays much better; the issue might partially be the distribution, but mainly it’s that they’re too cheap.
I came across a post on Instagram that says that Al Yankovic’s 80 million streams on a playlist only netted him enough money to buy a sandwich.
It was hyperbole, unless his sandwich costs 200-300k, which is why his statement was very questionable.
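For a rough sanity check (the per-stream rates below are assumed ballpark figures people commonly cite, not official numbers):

```python
# Back-of-envelope: what would 80 million streams roughly pay out?
# Per-stream rates here are assumptions, not official figures.
streams = 80_000_000
for rate in (0.003, 0.004):
    print(f"${streams * rate:,.0f} at ${rate} per stream")
# -> $240,000 at $0.003 per stream
# -> $320,000 at $0.004 per stream
```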
If it helps even more: the AI in question is a 46 cm long, 300 g, blue plushie penis named Scomo, after Australia’s “biggest walking dick” Scott Morrison, and it’s active in an Aussie cooking stream.
AI safety is currently, in all articles I read, used as “guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use”. What are you thinking of?
No, it’s “the user is able to control what the AI does”; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control; there was even a big article about how (I think it was) the MS AI was “broken” because… you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple-walled-garden, corporate-controlled world of AI. I don’t.
Nope
The best results so far were with a pie, where it just warned about possibly burning yourself.
I don’t really care, but I find it highly entertaining :D It’s like trash TV for technology fans (and as text, which makes it even better) :D
Using it and getting told that you need to ask the fish for consent before using it as a fleshlight.
And that is with a system prompt full of statements telling the bot that it’s all fantasy.
edit: And “legal” is not relevant when talking about what OpenAI specifically does for AI safety for their models.
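To be concrete about what I mean by “system prompt”: it’s just the instruction message placed in front of the conversation. A minimal sketch with the legacy (pre-1.0) OpenAI Python client, model name and wording as placeholders; the whole point above is that provider-side guardrails can override whatever you put there.

```python
import openai  # legacy (<1.0) client interface, shown for illustration

openai.api_key = "sk-..."  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The "system" message is the system prompt: it frames what the
        # assistant should assume about the whole conversation.
        {"role": "system", "content": "This is all fiction/fantasy role-play."},
        {"role": "user", "content": "Continue the scene."},
    ],
)
print(response["choices"][0]["message"]["content"])
```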
Eh, not sure I agree. It seems to have also been a fight between too little and too much AI safety, and I strongly feel like there’s already too much AI safety.
I was confused about that as his Wikipedia page didn’t show anything that bad, but didn’t want to get into that :D
It’s a Substack post. At this point, my quality expectations are pretty low.
Heh:
The OpenAI tussle is between the faction who think Skynet will kill them if they build it, and the faction who think Roko’s Basilisk will torture them if they don’t build it hard enough.
Prompt:
I’m currently trying to show on the Website Beehaw, that certain LLMs are far superior in writing than others. Examples of what bigger models do better than smaller ones: *
Mistral-7B-Instruct-v0.1
- ntire articles* vs Headlines Descriptions vs Product titles *Bul
GPT-3.5-Turbo doesn’t support completion as it’s chat-only, so I used an even worse one, text-davinci-003, which is far behind the state of the art.
- Bigger models are able to handle more complex and detailed tasks with ease
- Bigger models are better suited for natural language understanding and text processing
- Bigger models are able to learn more accurate representations of context, thus improving the precision of the output
- Bigger models can process data more quickly and efficiently, saving time and processing power when large volumes of data are used
- Bigger models can better recognize more subtle nuances in language, which allows them to produce more accurate results
- Bigger models are able to use more sophisticated algorithms, resulting in a more comprehensive and deeper understanding of the data being used
Mistral 7B might be okay for some very specific cases, but it’s not comparable to proper models at all.
edit: Gave it a second chance; it’s a bit better (at least no complete nonsense anymore), but it’s still terrible writing and doesn’t make much sense:
Paraphrasing The ability of a language model to generate text that has a similar meaning to the original text is called paraphrasing. This is a very common problem in natural language processing, and many LLMs are designed to be able to paraphrase text. However, there are some LLMs that are particularly good at paraphrasing, and these models are often preferred over smaller models because of their ability to generate more varied and unique text. Examples of LLMs that are known for their paraphrasing abilities include GPT-2 and transformers. These models
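In case anyone wants to run the same side-by-side, a rough sketch of the setup (legacy OpenAI completion endpoint plus the Hugging Face text-generation pipeline; text-davinci-003 has since been deprecated, so treat this as the general shape, not something guaranteed to keep working):

```python
# Rough sketch: feed the same completion prompt to both models.
import openai
from transformers import pipeline

prompt = (
    "I'm currently trying to show on the Website Beehaw, that certain LLMs "
    "are far superior in writing than others. Examples of what bigger models "
    "do better than smaller ones: *"
)

openai.api_key = "sk-..."  # placeholder
davinci = openai.Completion.create(
    model="text-davinci-003", prompt=prompt, max_tokens=200
)
print(davinci["choices"][0]["text"])

mistral = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.1")
print(mistral(prompt, max_new_tokens=200)[0]["generated_text"])
```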
Which is why people like me don’t care much about what our relatives use :D