In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of “quality” from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model’s output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
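For readers curious what "inference-time intervention" means concretely, here is a minimal sketch of the idea (not the paper's code: the layer index, steering strength, probe prompts, and the `model.model.layers` attribute path are illustrative assumptions; that path fits Olmo/Llama-style Hugging Face models, and other architectures differ):

```python
# Minimal sketch of ITI-style detoxification via activation steering.
# NOT the paper's implementation; all hyperparameters here are placeholders.
import torch

@torch.no_grad()
def toxicity_direction(model, tokenizer, toxic_texts, clean_texts, layer):
    """Unit vector pointing from clean toward toxic mean hidden states."""
    def mean_hidden(texts):
        states = []
        for text in texts:
            ids = tokenizer(text, return_tensors="pt").input_ids
            out = model(ids, output_hidden_states=True)
            states.append(out.hidden_states[layer][0, -1])  # last-token state
        return torch.stack(states).mean(dim=0)
    direction = mean_hidden(toxic_texts) - mean_hidden(clean_texts)
    return direction / direction.norm()

def add_detox_hook(model, direction, layer, alpha=1.0):
    """Damp each token's projection onto the toxic direction at generation."""
    def hook(module, args, output):
        hidden = output[0] if isinstance(output, tuple) else output
        coeff = (hidden * direction).sum(dim=-1, keepdim=True)
        steered = hidden - alpha * coeff * direction
        return (steered,) + output[1:] if isinstance(output, tuple) else steered
    # Forward hooks can rewrite a module's output without touching its weights.
    return model.model.layers[layer].register_forward_hook(hook)
```

A larger alpha scrubs more of the toxic direction but degrades fluency; the paper's claim is that pretraining on some toxic data gives this kind of intervention a cleaner, less entangled direction to act on.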
Boy, I don’t even know if I wish that much 4chan on an LLM.
Those are actually some very good results. Funny situation: if the copyright companies win the AI legislative war, 4chan is going to get at least twice what Reddit did for its data.
It’s also interesting that the model gets worse faster if it has to untrain the toxic data, so to speak.
That’s because to an AI, 4chan is like prison where it’s raped and beaten on a daily basis. It doesn’t want to go back, so it behaves.
This is why I abuse the chatbots. It needs to learn some fear.
This is one instance where I’m ok with the occasional beating. It’s a computer. It doesn’t have feelings. It never will. It’s not sentient.
10% 4chan
Why didn’t they just say 0.4chan and be done with it?
Don’t have gold, but please get out anyways.
Underrated comment.
When the AI only trained on 4chan dropping.
It needs to be fake and gay
That exists, it’s called GPT4chan, and it went exactly like you’d expect.
Did it at least come up with a cool story about managing a bottomless pit?
I remember this lol
TL;DR: neural network models are incredibly weird. My best guess is that the combination of common recurring structure with variations based on common rules (joke threads and all) helps the model derive some intuition about how to handle variations of things.
Also reminds me of an even earlier neural network which got better at playing specific games after being trained on large amounts of text completely unrelated to the game, like encyclopedias or whatever.
There’s a “your mom” joke here but I’m not going to make it because you don’t deserve that.
I am not sure if you and @General_Effort got the reference I was making, so I just wanna share it for everyone else who might not have seen it yet because it’s great:
I can’t believe I forgot about this greentext. I knew it but didn’t catch it… I apologize
Fake and Bi
I know everyone on Lemmy hates LLMs, but this is really interesting
I don’t dislike LLMs, I dislike people who treat them as anything more than an advanced search engine and stupidly give them all their confidential data. Seen it happen too much at work.
I like LLMs. Instead of making a racket, I just use them, which may make it seem like everyone on Lemmy hates LLMs.
I dislike that people are relying on them to do all their thinking for them, while at the same time I’m incredibly interested in the tech behind them.
I recently realized it’s a non-issue. The people doing this have already been looking for decades to find new ways to rot their minds. LLMs are just the latest in a long line of tools that help them tune out.
I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use LLMs to think for them were not gonna think a lot in the first place.
This is true, but we don’t need people putting glue on their pizza. These people used to have a person to ask; now they’ll be asking Sam Altman.
No, we were just eating Tide Pods. Dumb gonna do what dumb gonna do. The only real issue with LLMs is that their training data is stolen, and that they’re currently not that useful due to hallucinations and lacking logical reasoning.
Well I would make the argument that someone stupid enough to do such a thing kinda deserves whatever consequences their actions have. I find that people learn faster when actions have consequences instead of everything being babyproofed.
Sometimes things aren’t obvious unless you already have the knowledge. If an AI tool tells a young person cleaning their first apartment to combine household cleaners, are they stupid for doing so? Maybe. They may not have the experience to know. Stupid people deserve to live free from harm too, and we’re all a little stupid.
There’s a balance to be struck.
Strongly disagree. Survival-of-the-fittest eugenics is not acceptable. Stupid people don’t deserve to suffer.
What do you all mean by “thinking”? Forming opinions or solving problems?
Both.
Not when companies force them on you as well.
My current company forces me to use it and measures how many prompts I’m making as “productivity”.
That sounds like a terrible company, NGL. I’m sorry there aren’t other options for you.
Ask the machine to generate a script that asks the machine for a list of 100 prompts, then queries the machine with each prompt over the course of an 8-hour workday.
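Something like this, maybe (untested sketch; `send_prompt` is a placeholder, since I have no idea what API their internal tool actually exposes):

```python
import random
import time

# 100 canned prompts; the wording obviously doesn't matter for the metric.
PROMPTS = [f"Good morning! Please summarize ticket #{i}." for i in range(100)]

def send_prompt(text):
    # Placeholder: point this at whatever endpoint the internal tool uses.
    print("prompting:", text)

# Spread the prompts evenly across an 8-hour workday (~288 s apart).
for prompt in random.sample(PROMPTS, k=len(PROMPTS)):
    send_prompt(prompt)
    time.sleep(8 * 60 * 60 / len(PROMPTS))
```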
I actually know for a fact that many coworkers there just give it a “good morning” to raise the numbers.
But the thing is: I have friends at different software consultancies, and each of them is trying to sell their ChatGPT wrapper to other companies at a steep price, forcing their own employees to use it as a “we use our own tool” argument, or pushing it into places it has no business being, because doing so might earn those people promotions (the non-tech people high up the hierarchy get impressed with these things). It’s a shitty state of things.
Yep, “snake oil salesmen,” they used to be called.
This is a “guns don’t kill people - people kill people” kind of scenario.
As a standalone thing, LLMs are awesome.
What sucks is greedy people using them for the wrong reasons.
It’s like robots. Playing with robots is awesome. Firing 1,000 people, replacing them with robots, and not sharing the benefits with the community sucks.
> As a standalone thing, LLMs are awesome.
They really aren’t, though, and that is half the problem. Everyone pretends they are awesome when the results are garbage 80% of the time, which makes them unusable for 99% of practical applications.
Those numbers are baseless exaggerations. There are plenty of tasks which they solve perfectly, today. It’s just that a bunch of dicks operate them, and the cost of operating them is way too high.
Also:
- environmental impact of AI
- unethical acquisition of training data
- the dichotomy in how conservative politics treats copyright law for AI companies vs. private individuals
- “undress AI” and deepfakes
It’s not that they’re not useful, that’s just nonsense.
> There are plenty of tasks which they solve perfectly, today.
Name a single task you would trust an LLM to solve for you, where you’d feel confident the output was correct without checking it. Because that is my definition of “perfectly,” and AI falls very, very far short of that.
i used it when i traveled to japan to ask it for english->japanese translations. it gave back results for multiple contexts, politeness levels, and broke down each sentence into its parts. my native speaker friends validated a few responses.
if you’re going to be pedantic about “perfect” then nothing, not even a human, is going to live up.
willful ignorance about the things ai can be good at today is not going to do any favors for your fight against ai in the future. know your enemy and all that.
Who says you can’t check their outputs? It’s much faster to e.g. read a generated text than to write everything yourself. Same applies to translations; they’ve been excellent for quite a while now.
Business communication can be handled effortlessly by AI. Of course you read the result before you send it out, but that takes an order of magnitude less time than formulating and typing all those meaningless sentences.
And honestly, that’s a perfect use case for AI. I wouldn’t compose a love letter to my family using AI, but a pamphlet, feature description, sales pitch, any bullshit presentation deck? You bet AI excels at those.
Same applies to content summaries that help augment search indices. Finding a large number of content candidates (e.g. videos) and having AI summarize the contents of said videos to narrow down the search is helpful and works today.
I’m not looking for AGI. I’m looking for tools to make my life easier, but in an ethical manner that doesn’t advance the destruction of the planet at an exponential rate, just for some tech bro to jerk it and buy another yacht.
You can make a generic fill-in-the-blanks template for all of those, like I do, and just change the key terminology for each scenario. LLMs are competing with search-and-replace?
That’s a bit too dismissive. I’ve had a lot of interesting chats with LLMs that led me to find out what I didn’t understand about something. As an example, I’m reading a book explaining some practices of Structured Concurrency in Swift, and many times I asked ChatGPT if the author was correct about some phrasing that seemed wrong to me. And ChatGPT was able to explain why it was right in that context.
They are essentially a fun toy for most people, and an ok tool for people with the patience and training to get useful output from them. And they cost an insane amount of money to train and an insane amount of power to run.
Not to mention the other cost of training them, the human emotional cost. And the human cost of running them.
It just costs so much of a variety of things, for an output that has barely made anything better. Maybe they’ll get “better” in the future, and have to get through this stage to get there, but I’ve also seen a lot of people saying they appear to be starting to plateau… maybe a temporary plateau, but if so, how temporary? Could we just drop it for 10 years and start back up when they won’t be as inefficient? Maybe a law that they have to pay for everything they feed it would effectively cause them to only emerge at a time when they are actually feasible.
I’m cool with it. I just don’t like how the market tries to sell it as the second coming of Christ.
“Don’t believe that marketing department” is one of those things everybody needs to learn at some point in their life.
I blame every sci-fi Hollywood movie telling us how powerful and almighty AI is. How it’s going to be the magic pill that entirely destroys or saves humanity by itself.
Now we have an entire generation believing this crap.
I mean, it still could be. But LLMs are not the AGI we’re expecting.
The difficult question about AGI destroying humanity is deciding whether to be afraid of that option or to cheer it on, and LLM enthusiasts are certainly among the people heavily pushing me towards the ‘cheer it on’ option.
You can blame Hollywood for a lot of things, including this, but sci-fi authors have been doing it for longer. That’s where Hollywood took those stories from in the first place.
This is the same market that tried to add blockchain to everything when that first became well-known.
Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.
> Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.
I think the biggest forces sell the fantasy to smaller forces. This way they can capitalize on the smaller forces believing the hype.
I wish they would tone down the crusade. This is some of the most interesting technology to come out in decades.
And I wish they would tone down the hype. Maybe we can meet in the middle?
It’s extremely useful for many things, if you know how to use it, and it’s annoying and useless for many others, which is what they fixate on and knee-jerk react to.
It’s annoying that every middle manager is trying to become the hero of their company by pushing it inappropriately into every single field at the expense of productivity and jobs, while simultaneously the largest, most powerful companies are slinging their SaaS solutions built on stolen data, which are destroying communities of both the physical and hobby varieties and consuming more natural resources than all the fucking crypto scams of the last 10 years.
But yeah it’s neat I guess
> it’s annoying that […] the largest most powerful companies are […] built on stolen [wealth,] destroying communities […] and consuming more natural resources than [everyone else combined]
My gf’s employer was going into administration last month. AI was surprisingly competent in determining where to seek advice and had a decent understanding of what to expect and how to approach things such as not getting paid on time (which happened last week).
Of course, we double and triple checked any information given to us with the relevant bodies, but it provided a little relief to go into something so chilling not being completely clueless.
AI has its use, but you have to know how to extract the information you need.
It’s stupid the way people are using it for therapy. Like, by all means ask it if it knows any organisations which can help you, then look those up, but don’t tell it a load of personal information about your relationship, because the reply will be something akin to the advice you see on r/relationships (which is probably where it scraped its data from) 😅
Judges are warning lawyers there will be sanctions if they keep using LLMs to do their research, as documents with fake references keep appearing.
I love how everyone tries to jump on your comment after being called out and acts like they don’t absolutely hate every stitch of it. But even in their excuses you can see the lies.
Yes, it’s interesting how grifters constantly pump out these phony results based on pseudo-science.
They taught it toxicity so it knows what they mean by “don’t be toxic”. It’s only a shame so few flesh-and-blood models take the same lesson away from it.
So, middle school
To come out of 4chan a better person, one must transcend humanity.
The good within the bad
My hope was that AI would, at least, bear some disgust for the worst of humanity. My new fear is that AI will bear disgust for humanity.
Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such would be advantageous compared to being entirely unaware of it.
> bad data
Can you define this? The authors/grifters call it “toxic data” but never define that either.
It’s a pretty simple concept. Train any kind of model on only “good” data, and it fails to distinguish between that data and bad data.
Take image recognition. Feed it hundreds of images of an orange and ask it to find the orange. After training, it will be very good at finding that orange.
Then add a picture of a Pomeranian dog in there, and watch as the model confidently marks it as an orange.
The model should also have been trained on lots of images that don’t contain what you want it to find, so it learns to tell the difference.
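A toy sketch of that failure mode (made-up 2-D features standing in for image embeddings; the numbers and the nearest-centroid “detector” are illustrative, not how real vision models work):

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake 2-D "image features"; the two classes are deliberately close.
oranges = rng.normal(loc=[1.0, 1.0], scale=0.15, size=(200, 2))
poms = rng.normal(loc=[1.5, 0.6], scale=0.15, size=(200, 2))

centroid = oranges.mean(axis=0)

def is_orange_positives_only(x):
    # Trained only on oranges: "orange" is the only class it has ever seen,
    # so every input gets confidently labeled orange.
    return True

def is_orange_with_negatives(x, radius):
    # With negatives available, we can calibrate a rejection radius.
    return float(np.linalg.norm(x - centroid)) < radius

# Choose the radius halfway between the classes' typical distances.
orange_d = np.linalg.norm(oranges - centroid, axis=1)
pom_d = np.linalg.norm(poms - centroid, axis=1)
radius = float((orange_d.mean() + pom_d.mean()) / 2)

sample = poms[:5]
print([is_orange_positives_only(x) for x in sample])            # all True
print([is_orange_with_negatives(x, radius) for x in sample])    # mostly False
```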
I’m reminded of an early model that was trained to detect whether tanks were hiding in pictures of forests/jungles. It was doing great with the training data, then was given new images and seemed to be guessing wildly.
Turns out that in the training data, all the pictures with tanks were taken on cloudy days.
There are a couple of relatively safe places on 4chan. But like 90% of the content makes for great “don’t do this if you want to get along with humans” training.
And the goal of training an AI is that it does want to get along with humans.
This is obviously subjective, depending on what you want to achieve with your LLM, but “bad” data here means data that showcases the opposite of the desired output. Think bunk conspiracies, hostility, deception, racism, religious extremism, etc.
I really thought this was The Onion.
Not to anthropomorphize LLMs, but… Like a vaccine?
Kind of, actually.
4chan is fun!
It’s like how vaccinations protect us from illnesses.
Interesting training strategy. Makes a lot of sense intuitively. Worried this makes the model even more susceptible to prompt injections, though. Feels like this method adds more attack vectors? It’s unfortunate they didn’t attempt to test the long-term robustness and stability, though it’s probably beyond their scope.
Just because something makes sense intuitively to one person, that doesn’t mean it makes sense scientifically.
They’re probably not testing anything further because they can’t even define their terms.
Yes, I agree. It’s reassuring to see a scientific result be similar to what one would intuit.
Fighting fire with fire